{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%reload_ext autoreload\n", "%autoreload 2\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# FastAI Audio\n", "\n", "This notebook will show you the fastest way to get started with FastAI audio by demonstrating only the most essential functionality. In the `examples` folder, we have included a number of other notebooks that show more features, and teach you about audio in general. If you'd like to follow along in a colab notebook, please click [here](https://colab.research.google.com/drive/1s0Ouw5PxvrmHdm_gBU0qiA6piOf3VSWO) and copy this into your own google drive.\n", "\n", "First, import fastai audio, this will import all the dependecies you will need to work with Audio." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from audio import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## AudioItem" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we create an `AudioItem` to load an audio file and listen to it by passing the filename (either `str` or `PosixPath`) to `open_audio()`, we can also see some information about the audio." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path_example = Path('data/misc/whale/Right_whale.wav')\n", "sound = open_audio(path_example)\n", "sound" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This clip is 87.73 seconds long. Audio is a continuous wave that is \"sampled\" by measuring the amplitude of the wave at a given point in time. How many times you sample per second is called the \"sample rate\" and can be thought of as the resolution of the audio. In our example, the audio was sampled 44100 times per second, so our data is a rank 1 tensor with length 44100*time in seconds = 3869019 samples. \n", "\n", "If any of this is new to you, definitely check out our **Intro to Audio Notebook** in the `examples` folder." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sound.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Important attributes inside of an AudioItem" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#sig means signal, it's a rank one tensor with the amplitudes sampled from the raw sound wave\n", "sound.sig" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#sr means sample rate\n", "sound.sr" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#path is a reference to the location of the sound file\n", "sound.path" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## AudioList and Speaker Recognition Example\n", "\n", "We'll work with a fairly small dataset that has 10 speakers, 5 male and 5 female, with the goal of recognizing who is speaking.\n", "\n", "We can download the data into our default fastai data directory" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_url = 'http://www.openslr.org/resources/45/ST-AEDS-20180100_1-OS'\n", "data_folder = datapath4file(url2name(data_url))\n", "untar_data(data_url, dest=data_folder)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We first create an AudioList. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "We first create an `AudioList`. This extends the fastai `ItemList`, so you can also use other methods like `from_csv()` to load your data." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "audios = AudioList.from_folder(data_folder)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Because audio data can be so variable, we provide a convenience method `.stats()` that displays the sample rates present in your `AudioList`, how many files have each sample rate, and a plot of the lengths, in seconds, of the audio files. You can also specify `prec` to set the number of digits the file lengths are rounded to before plotting the graph (default is 0). Expect it to take about 2 seconds per 5000 files in your dataset; a progress bar is provided." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "len_dict = audios.stats(prec=1)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "`stats` also returns a dictionary of the file lengths and file names, so that you can do with it what you want. \n", "\n", "One option is to call `get_outliers`, which returns a sorted list of (filename, length) tuples for files that are more than `devs` (a float) standard deviations from the mean length. This can be helpful for weeding out bad data. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "outliers = get_outliers(len_dict, devs=3)\n", "print(\"Total Outliers:\", len(outliers))\n", "outliers[:10]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The `stats` method showed us that this dataset has only one sample rate. If you have multiple sample rates, you will need to resample everything to a single rate by setting `resample_to` in the configuration settings. Any customization like this requires passing a config object to the `AudioList` constructor, so before we go any further, here's how to use it." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Audio Configuration\n", "\n", "All config settings are managed through an `AudioConfig` object. It also contains a `SpectrogramConfig` object that holds settings related to spectrograms and MFCCs (mel-frequency cepstral coefficients). The inner config is changed just like the outer one, by nesting: `config.sg_cfg.top_db = 80`, for instance." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "config = AudioConfig()\n", "config" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, there are tons of features here, most of which you will not need to adjust to get pretty good results. If you plan on doing a lot of work with audio, or have a dataset with lots of silence or a wide variety of audio lengths, check out our **Features Notebook** in the `examples` folder; it shows when and how to adjust each of these settings.\n", "\n", "For now we will only cover the most essential features: `resample_to`, `max_to_pad` and `duration`." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### `duration` and `max_to_pad`" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Eventually, our audio will become spectrograms (visual representations of audio that can be passed to an image classifier). \n", "Like images, it is important that our spectrograms be the same size so that the GPU can handle them efficiently. Since audio clips rarely have precisely equal length, we give you two options for generating fixed-width spectrograms. Which one is best for you will depend on the nature of your data; if your data varies in length by even a moderate amount, you will want to use `duration`.\n", "\n", "1. Specify the `duration` setting of your config. This will compute the spectrogram using the entire clip regardless of length, but at train time will grab random sections that are `duration` milliseconds long. If `duration` is greater than the length of the clip, it will pad your spectrogram with zeroes to be the same length as the others. \n", "\n", "2. Set the `max_to_pad` attribute of your config (in milliseconds) to be the length you want your audio to be. This will pad or trim the underlying audio and then generate spectrograms from the resulting audio. It will zero-pad clips that are too short, and trim clips that are too long, throwing away the remaining data. \n", "\n", "For this dataset, let's use `duration` so we don't throw away data from the longer clips, and let's use 4000ms (4s). " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "config.duration = 4000" ] },
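{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick, purely illustrative sanity check, you can relate a `duration` in milliseconds back to raw samples: a crop of `duration` ms covers roughly `sr * duration / 1000` samples before it is turned into a spectrogram. The sample rate below is an assumed placeholder, not something the library sets; substitute whatever single rate `.stats()` reported for your files." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Back-of-the-envelope check: how many raw samples does a 4000 ms crop cover?\n", "# sr is an assumed placeholder here -- use the rate that .stats() reported above\n", "sr = 16000\n", "int(sr * config.duration / 1000)" ] },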
{ "cell_type": "markdown", "metadata": {}, "source": [ "### `resample_to`" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "It is also important that all of the data has the same sample rate. If one spectrogram has a sample rate of 44100 and another's is 16000, their x-axes will represent different amounts of time, and thus they won't be comparable. So if you see more than one sample rate when you call the `.stats()` method above, you will need to set `resample_to` to an int representing the sample rate you wish to use. It is best practice to use common sample rates (44100, 22050, 16000 or 8000) as they will be faster to resample. \n", "\n", "For our data, there is no need to resample, but if we did, the code to downsample to 8000 would just be `config.resample_to = 8000`." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Creating a databunch" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we follow the normal fastai datablock API, making sure to pass our config to the `AudioList`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "label_pattern = r'_([mf]\d+)_'\n", "audios = AudioList.from_folder(data_folder, config=config).split_by_rand_pct(.2, seed=4).label_from_re(label_pattern)" ] },
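{ "cell_type": "markdown", "metadata": {}, "source": [ "If you want to double-check what `label_from_re` will extract, you can run the same regex against one real filename; the captured group (a speaker id such as `m0001` or `f0005`) becomes the class label. This is just an optional, illustrative check, and it again assumes the .wav files sit directly inside `data_folder`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import re\n", "# Optional check: the group captured by label_pattern is the speaker id used as the label\n", "fname = next(data_folder.glob('*.wav')).name\n", "re.search(label_pattern, fname).group(1)" ] },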
{ "cell_type": "markdown", "metadata": {}, "source": [ "Fastai Audio performs on-the-fly data augmentation directly on spectrograms. Try uncommenting the second line and playing around with the transform manager; for more detail, check out the Features Notebook." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tfms = None\n", "#tfms = get_spectro_transforms(mask_time=False, mask_freq=True, roll=False, num_rows=12)\n", "db = audios.transform(tfms).databunch(bs=64)\n", "db.show_batch(20)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "When audio is longer than the duration you've selected for training, it is clipped at random, but those items will tell you which portion of the original clip the spectrogram and displayed audio represent; it will appear as '2.53s-6.53s of original clip'. Clips that are shorter than `duration` are padded with zeros; this appears as a blue-green bar on the right-hand side of the spectrogram." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Learner\n", "\n", "An audio learner takes a databunch, a base_arch (optional, defaults to resnet18 for now), and metrics (optional, defaults to accuracy), and returns a cnn_learner. For now it is just a wrapper, but additional functionality is coming soon. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = audio_learner(db)\n", "learn.lr_find()\n", "learn.recorder.plot()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit_one_cycle(5, slice(2e-3, 2e-2))" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "interp = ClassificationInterpretation.from_learner(learn)\n", "interp.plot_confusion_matrix()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Conclusion " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "With 30 seconds of compute and no preprocessing or fine-tuning, you just created a voice-recognition system with 99% accuracy. \n", "But this is really just scratching the surface, so please check out our other notebooks in the `examples` folder and see what else is possible. \n", "\n", "# Acknowledgements\n", "This library builds on the work of many others. It is of course built on top of fastai, so thank you to **Jeremy, Rachel, Stas, Sylvain** and countless others. It is a fork of https://github.com/zcaceres/fastai-audio, so we owe a lot to **@aamir7117 @marii @simonjhb @ste @ThomM @zachcaceres**. It is also built on top of torchaudio, which helps us do many things much faster than would otherwise be possible. Thanks as well to those who have been active in the [fastai audio thread](https://forums.fast.ai/t/deep-learning-with-audio-thread/38123). \n", "\n", "Also, we would love feedback, bug reports, feature requests, and whatever else you have to offer. We welcome contributors of all skill levels. If you need to get in touch for any reason, please post in the [fastai audio thread](https://forums.fast.ai/t/deep-learning-with-audio-thread/38123) or contact us via PM: [@baz](https://forums.fast.ai/u/baz/) or [@madeupmasters](https://forums.fast.ai/u/MadeUpMasters/). Let's build an audio machine learning community!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }