{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

Rhythm and Timbre Analysis from Music

\n", "

Rhythm Pattern Music Features

\n", "

Extraction and Application Tutorial

\n", "
\n", "

Thomas Lidy and Alexander Schindler

\n", "

lidy@ifs.tuwien.ac.at


\n", "Institute of Software Technology and Interactive Systems
TU Wien\n", "
\n", "

http://www.ifs.tuwien.ac.at/mir

\n", "
\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of Contents" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Requirements\n", "2. Audio Processing\n", "3. Audio Feature Extraction\n", "4. Application Scenarios
\n", " 4.1 Getting Songs from Soundcloud
\n", " 4.2. Finding Similar Sounding Songs\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. Requirements\n", "\n", "This Tutorial uses iPython Notebook for interactive coding. If you use iPython Notebook, you can interactively execute your code (and the code here in the tutorial) directly in the Web browser. Otherwise you can copy & paste code from here to your prefered Python editor." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# to install iPython notebook on your computer, use this in Terminal\n", "sudo pip install \"ipython[notebook]\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### RP Extract Library\n", "\n", "This is our mean library for rhythmic and timbral audio feature analysis:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "download ZIP or check out from GitHub:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# in Terminal\n", "git clone https://github.com/tuwien-musicir/rp_extract.git" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Python Libraries\n", "\n", "RP_extract depends on the following libraries. 
If not already included in your Python installation,\n", "please install these Python libraries using pip or easy_install:\n", "\n", "* NumPy\n", "* SciPy\n", "* Matplotlib\n", "\n", "They can usually be installed via the Python pip installer on the command line:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# in Terminal\n", "sudo pip install numpy scipy matplotlib" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Additional Libraries\n", "\n", "These libraries are used in the later tutorial steps, but are not necessarily needed if you want to use the RP_extract library alone:\n", "\n", "* soundcloud\n", "* urllib\n", "* unicsv\n", "* scikit-learn\n", "* mir_utils (from GitHub)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# in Terminal\n", "sudo pip install soundcloud urllib unicsv scikit-learn\n", "\n", "git clone https://github.com/tuwien-musicir/mir_utils.git" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### MP3 Decoder\n", "If you want to use MP3 files as input, you need to have an MP3 decoder (e.g. ffmpeg, as used in the path example below) installed on your system.\n", "\n", "Note: If the decoder is not installed in a location that the operating system searches by default, use the following to add the directory containing the MP3 decoder binary to your system PATH, so Python can call it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "path = '/path/to/ffmpeg/'\n", "os.environ['PATH'] += os.pathsep + path" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Import + Test your Environment\n", "If you have installed all required libraries, the following imports should run without errors."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pylab inline\n", "\n", "import warnings\n", "warnings.filterwarnings('ignore')\n", "\n", "%load_ext autoreload\n", "%autoreload 2\n", "\n", "# numerical processing and scientific libraries\n", "import numpy as np\n", "\n", "# plotting\n", "import matplotlib.pyplot as plt\n", "\n", "# reading wav and mp3 files\n", "from audiofile_read import * # included in the rp_extract git package\n", "\n", "# Rhythm Pattern Audio Extraction Library\n", "from rp_extract_python import rp_extract\n", "from rp_plot import * # can be skipped if you don't want to do any plots\n", "\n", "\n", "# misc\n", "from urllib import urlopen\n", "import urllib2\n", "import gzip\n", "import StringIO" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Audio Processing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Feature Extraction is the core of content-based description of audio files. With feature extraction from audio, a computer is able to recognize the content of a piece of music without the need of annotated labels such as artist, song title or genre. This is the essential basis for information retrieval tasks, such as similarity based searches (query-by-example, query-by-humming, etc.), automatic classification into categories, or automatic organization and clustering of music archives.\n", "\n", "Content-based description requires the development of feature extraction techniques that analyze the acoustic characteristics of the signal. Features extracted from the audio signal are intended to describe the stylistic content of the music, e.g. beat, presence of voice, timbre, etc.\n", "\n", "We use methods from digital signal processing and consider psycho-acoustic models in order to extract suitable semantic information from music. We developed various feature sets, which are appropriate for different tasks." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Audio Files\n", "\n", "### Load audio data from wav or mp3 file\n", "\n", "We provide a library (audiofile_read.py) that is capable of reading WAV and MP3 files (MP3 through an external decoder, see Installation Requirements above).\n", "\n", "Take any MP3 or WAV file on your disk - or download one from e.g. freemusicarchive.org.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# provide/adjust the path to your wav or mp3 file\n", "\n", "audiofile = \"music/1972-048 Elvis Presley - Burning Love 22khz.mp3\"\n", "\n", "samplerate, samplewidth, wavedata = audiofile_read(audiofile)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "samplerate, samplewidth, wavedata = audiofile_read(audiofile, normalize=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wavedata.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note about Normalization: Normalization is automatically done by audiofile_read() above.\n", "\n", "Usually, an audio file stores integer values for the samples. However, for audio processing we need float values that's why the audiofile_read library already converts the input data to float values in the range of (-1,1). \n", "\n", "This is taken care of by audiofile_read. In the rare case you don't want to normalize, use this line instead of the one above:\n", "\n", " samplerate, samplewidth, wavedata = audiofile_read(audiofile, normalize=False)\n", "\n", "In case you use another library to read in WAV files (such as scipy.io.wavfile.read) please have a look into audiofile_read code to do the normalization in the same way. Note that scipy.io.wavfile.read does not correctly read 24bit WAV files." 
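What audiofile_read's normalization does can be sketched as follows. This is an illustration only, not the library's actual code; the helper name normalize_wav and its 16-bit default are our assumptions:

```python
import numpy as np

def normalize_wav(wavedata, samplewidth=2):
    # divide by the maximum magnitude of the integer type,
    # e.g. 32768 for 16-bit (samplewidth = 2 bytes) audio
    divisor = float(2 ** (8 * samplewidth - 1))
    return wavedata.astype(np.float64) / divisor

samples = np.array([-32768, 0, 16384], dtype=np.int16)
print(normalize_wav(samples))  # values: -1.0, 0.0, 0.5
```

The same idea applies to 8-bit or 24-bit input; only the divisor changes with the sample width.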
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Audio Information\n", "\n", "Let's print some information about the audio file just read:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nsamples = wavedata.shape[0]\n", "nchannels = wavedata.shape[1]\n", "\n", "print \"Successfully read audio file:\", audiofile\n", "print samplerate, \"Hz,\", samplewidth*8, \"bit,\", nchannels, \"channel(s),\", nsamples, \"samples\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plot Wave form\n", "we use this to check if the WAV or MP3 file has been correctly loaded" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "max_samples_plot = 4 * samplerate # limit number of samples to plot (to 4 sec), to avoid graphical overflow\n", "\n", "if nsamples < max_samples_plot:\n", " max_samples_plot = nsamples\n", "\n", "plot_waveform(wavedata[0:max_samples_plot], 16, 5);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Audio Pre-processing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For audio processing and feature extraction, we use a single channel only.\n", "\n", "Therefore in case we have a stereo signal, we combine the separate channels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# use combine the channels by calculating their geometric mean\n", "wavedata_mono = np.mean(wavedata, axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below an example waveform of a mono channel after combining the stereo channels by arithmetic mean:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_waveform(wavedata_mono[0:max_samples_plot], 16, 3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotstft(wavedata_mono, samplerate, binsize=512, ignore=True);" ] }, { 
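The plotstft call above displays a short-time Fourier transform (STFT) spectrogram. A minimal sketch of the underlying idea is to cut the mono signal into frames and take FFT magnitudes per frame; the non-overlapping frames and Hann window below are simplifications chosen for illustration, not rp_plot's actual parameters:

```python
import numpy as np

def stft_magnitudes(signal, framesize=512):
    # split a mono signal into non-overlapping frames and
    # return the FFT magnitude spectrum of each frame
    nframes = len(signal) // framesize
    frames = signal[:nframes * framesize].reshape(nframes, framesize)
    window = np.hanning(framesize)
    return np.abs(np.fft.rfft(frames * window, axis=1))

# a 440 Hz sine tone, 1 second at a (hypothetical) 11025 Hz sampling rate
sr = 11025
t = np.arange(sr) / float(sr)
mags = stft_magnitudes(np.sin(2 * np.pi * 440.0 * t))

print(mags.shape)  # (21, 257): 21 frames x 257 frequency bins
```

The peak magnitude of each frame lands near bin 440 / (11025/512) ≈ 20, i.e. at the frequency of the sine tone.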
"cell_type": "markdown", "metadata": {}, "source": [ "# 3. Audio Feature Extraction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Rhythm Patterns " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Rhythm Patterns (also called Fluctuation Patterns) describe modulation amplitudes for a range of modulation frequencies on \"critical bands\" of the human auditory range, i.e. fluctuations (or rhythm) on a number of frequency bands. The feature extraction process for the Rhythm Patterns is composed of two stages:\n", "\n", "First, the specific loudness sensation in different frequency bands is computed, by using a Short Time FFT, grouping the resulting frequency bands to psycho-acoustically motivated critical-bands, applying spreading functions to account for masking effects and successive transformation into the decibel, Phon and Sone scales. This results in a power spectrum that reflects human loudness sensation (Sonogram).\n", "\n", "In the second step, the spectrum is transformed into a time-invariant representation based on the modulation frequency, which is achieved by applying another discrete Fourier transform, resulting in amplitude modulations of the loudness in individual critical bands. These amplitude modulations have different effects on human hearing sensation depending on their frequency, the most significant of which, referred to as fluctuation strength, is most intense at 4 Hz and decreasing towards 15 Hz. From that data, reoccurring patterns in the individual critical bands, resembling rhythm, are extracted, which – after applying Gaussian smoothing to diminish small variations – result in a time-invariant, comparable representation of the rhythmic patterns in the individual critical bands." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "features = rp_extract(wavedata, # the two-channel wave-data of the audio-file\n", " samplerate, # the samplerate of the audio-file\n", " extract_rp = True, # <== extract this feature!\n", " transform_db = True, # apply psycho-accoustic transformation\n", " transform_phon = True, # apply psycho-accoustic transformation\n", " transform_sone = True, # apply psycho-accoustic transformation\n", " fluctuation_strength_weighting=True, # apply psycho-accoustic transformation\n", " skip_leadin_fadeout = 1, # skip lead-in/fade-out. value = number of segments skipped\n", " step_width = 1) # " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotrp(features['rp'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Statistical Spectrum Descriptor " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Sonogram is calculated as in the first part of the Rhythm Patterns calculation. According to the occurrence of beats or other rhythmic variation of energy on a specific critical band, statistical measures are able to describe the audio content. 
Our goal is to describe the rhythmic content of a piece of audio by computing the following statistical moments on the Sonogram values of each of the critical bands:\n", "\n", " * mean, median, variance, skewness, kurtosis, min- and max-value" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "features = rp_extract(wavedata, # the two-channel wave-data of the audio-file\n", "    samplerate, # the samplerate of the audio-file\n", "    extract_ssd = True, # <== extract this feature!\n", "    transform_db = True, # apply psycho-acoustic transformation\n", "    transform_phon = True, # apply psycho-acoustic transformation\n", "    transform_sone = True, # apply psycho-acoustic transformation\n", "    fluctuation_strength_weighting=True, # apply psycho-acoustic transformation\n", "    skip_leadin_fadeout = 1, # skip lead-in/fade-out. value = number of segments skipped\n", "    step_width = 1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotssd(features['ssd'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Rhythm Histogram" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Rhythm Histogram features we use are a descriptor for the general rhythmics in an audio document. Contrary to the Rhythm Patterns and the Statistical Spectrum Descriptor, information is not stored per critical band. Rather, the magnitudes of each modulation frequency bin of all critical bands are summed up to form a histogram of \"rhythmic energy\" per modulation frequency. The histogram contains 60 bins which reflect modulation frequencies between 0 and 10 Hz. For a given piece of audio, the Rhythm Histogram feature set is calculated by taking the median of the histograms of every 6-second segment processed."
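The summation across critical bands described above can be sketched as follows; the 24x60 random matrix is a toy stand-in for a real rhythm pattern, with the shapes taken from the descriptions in this tutorial:

```python
import numpy as np

# toy stand-in for a rhythm pattern: 24 critical bands x 60 modulation frequency bins
rng = np.random.RandomState(0)
rhythm_pattern = rng.rand(24, 60)

# sum the magnitudes of each modulation frequency bin over all critical bands
rhythm_histogram = rhythm_pattern.sum(axis=0)

print(rhythm_histogram.shape)  # (60,)
```

Each of the 60 resulting values is the "rhythmic energy" at one modulation frequency, summed over all bands.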
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "features = rp_extract(wavedata, # the two-channel wave-data of the audio-file\n", " samplerate, # the samplerate of the audio-file\n", " extract_rh = True, # <== extract this feature!\n", " transform_db = True, # apply psycho-accoustic transformation\n", " transform_phon = True, # apply psycho-accoustic transformation\n", " transform_sone = True, # apply psycho-accoustic transformation\n", " fluctuation_strength_weighting=True, # apply psycho-accoustic transformation\n", " skip_leadin_fadeout = 1, # skip lead-in/fade-out. value = number of segments skipped\n", " step_width = 1) # " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotrh(features['rh'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get rough BPM from Rhythm Histogram\n", "\n", "By looking at the maximum peak of a Rhythm Histogram, we can determine the beats per minute (BPM) very roughly by multiplying the Index of the Rhythm Histogram bin by the modulation frequency resolution (0.168 Hz) * 60. The resolution of this is however only at +/- 10 bpm." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "maxbin = features['rh'].argmax(axis=0) + 1 # +1 because it starts from 0\n", "\n", "mod_freq_res = 1.0 / (2**18/44100.0) # resolution of modulation frequency axis (0.168 Hz) (= 1/(segment_size/samplerate))\n", "#print mod_freq_res * 60 # resolution\n", "\n", "bpm = maxbin * mod_freq_res * 60\n", "\n", "print bpm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Modulation Frequency Variance Descriptor " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This descriptor measures variations over the critical frequency bands for a specific modulation frequency (derived from a rhythm pattern).\n", "\n", "Considering a rhythm pattern, i.e. 
a matrix representing the amplitudes of 60 modulation frequencies on 24 critical bands, an MVD vector is derived by computing statistical measures (mean, median, variance, skewness, kurtosis, min and max) for each modulation frequency over the 24 bands. A vector is computed for each of the 60 modulation frequencies. Then, the MVD descriptor for an audio file is computed as the mean of multiple MVDs from the audio file's segments, leading to a 420-dimensional vector." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Temporal Statistical Spectrum Descriptor" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Feature sets are frequently computed on a per-segment basis and do not incorporate time series aspects. As a consequence, TSSD features describe variations over time by including a temporal dimension. Statistical measures (mean, median, variance, skewness, kurtosis, min and max) are computed over the individual Statistical Spectrum Descriptors extracted from segments at different time positions within a piece of audio. This captures timbral variations and changes over time in the audio spectrum, for all the critical Bark bands. Thus, a change of rhythm, instruments, voices, etc. over time is reflected by this feature set. The dimension is 7 times the dimension of an SSD (i.e. 1176)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Temporal Rhythm Histograms" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Statistical measures (mean, median, variance, skewness, kurtosis, min and max) are computed over the individual Rhythm Histograms extracted from various segments in a piece of audio. Thus, change and variation of rhythmic aspects over time are captured by this descriptor.
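The seven statistical measures used by the SSD, MVD, TSSD and TRH descriptors can be sketched in NumPy as follows. This is an illustration on toy data, not the library's internal code; the helper name seven_moments is ours:

```python
import numpy as np

def seven_moments(x, axis=0):
    """mean, median, variance, skewness, kurtosis, min and max along an axis."""
    mean = np.mean(x, axis=axis)
    var = np.var(x, axis=axis)
    centered = x - np.expand_dims(mean, axis)
    skew = np.mean(centered ** 3, axis=axis) / np.sqrt(var) ** 3
    kurt = np.mean(centered ** 4, axis=axis) / var ** 2 - 3.0  # excess kurtosis
    return np.vstack([mean, np.median(x, axis=axis), var, skew, kurt,
                      np.min(x, axis=axis), np.max(x, axis=axis)])

# e.g. statistics for each of 60 modulation frequencies over 24 critical bands:
# a 7 x 60 matrix, flattened to a 420-dimensional MVD-style vector
toy_pattern = np.random.RandomState(0).rand(24, 60)
mvd_vector = seven_moments(toy_pattern, axis=0).flatten()

print(mvd_vector.shape)  # (420,)
```

Applied to 168-dimensional SSD vectors over time instead, the same seven measures yield the 7 x 168 = 1176-dimensional TSSD mentioned above.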
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Extract All Features\n", "\n", "To extract ALL or selected ones of the before described features, you can use this command: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# adapt the fext array to your needs:\n", "fext = ['rp','ssd','rh','mvd'] # sh, tssd, trh\n", "\n", "features = rp_extract(wavedata,\n", " samplerate,\n", " extract_rp = ('rp' in fext), # extract Rhythm Patterns features\n", " extract_ssd = ('ssd' in fext), # extract Statistical Spectrum Descriptor\n", " extract_sh = ('sh' in fext), # extract Statistical Histograms\n", " extract_tssd = ('tssd' in fext), # extract temporal Statistical Spectrum Descriptor\n", " extract_rh = ('rh' in fext), # extract Rhythm Histogram features\n", " extract_trh = ('trh' in fext), # extract temporal Rhythm Histogram features\n", " extract_mvd = ('mvd' in fext), # extract Modulation Frequency Variance Descriptor\n", " spectral_masking=True,\n", " transform_db=True,\n", " transform_phon=True,\n", " transform_sone=True,\n", " fluctuation_strength_weighting=True,\n", " skip_leadin_fadeout=1,\n", " step_width=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# let's see what we got in our dict\n", "print features.keys()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# list the feature type dimensions\n", "\n", "for k in features.keys():\n", " print k, features[k].shape\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Application Scenarios" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analyze Songs from Soundcloud\n", "\n", "## 4.1. 
Getting Songs from Soundcloud\n", "\n", "In this step we are going to analyze songs from Soundcloud, using the Soundcloud API.\n", "\n", "Please get your own API key first by clicking \"Register New App\" on https://developers.soundcloud.com.\n", "\n", "Then we can start using the Soundcloud API:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# START SOUNDCLOUD API\n", "import soundcloud\n", "import urllib # for mp3 download\n", "\n", "# To use soundcloud-python, you must first create a Client instance, passing at a minimum the client id you \n", "# obtained when you registered your app:\n", "\n", "# If you only need read-only access to public resources, simply provide a client id when creating a Client instance:\n", "my_client_id= 'insert your soundcloud client id here'\n", "\n", "client = soundcloud.Client(client_id=my_client_id)\n", "# if there is no error after this, it should have worked" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get Track Info" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# GET TRACK INFO\n", "\n", "#soundcloud_url = 'http://soundcloud.com/forss/flickermood'\n", "soundcloud_url = 'https://soundcloud.com/majorlazer/be-together-feat-wild-belle'\n", "\n", "track = client.get('/resolve', url=soundcloud_url)\n", "\n", "print \"TRACK ID:\", track.id\n", "print \"Title:\", track.title\n", "print \"Artist: \", track.user['username']\n", "print \"Genre: \", track.genre\n", "print track.bpm, \"bpm\"\n", "print track.playback_count, \"times played\"\n", "print track.download_count, \"times downloaded\"\n", "print \"Downloadable?\", track.downloadable" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# if you want to see all information contained in 'track':\n", "print vars(track)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get Track URLs" ] }, { "cell_type": 
"code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if hasattr(track, 'download_url'):\n", " print track.download_url\n", "print track.stream_url\n", "stream = client.get('/tracks/%d/streams' % track.id)\n", "#print vars(stream)\n", "print stream.http_mp3_128_url" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download Preview MP3" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# set the MP3 download directory\n", "mp3_dir = './music'\n", "\n", "mp3_file = mp3_dir + os.sep + \"%s.mp3\" % track.title\n", "\n", "# Download the 128 kbit stream MP3\n", "urllib.urlretrieve (stream.http_mp3_128_url, mp3_file)\n", "\n", "print \"Downloaded \" + mp3_file" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Iterate over a List of Soundcloud Tracks\n", "This will take a number of Souncloud URLs and get the track info for them and download the mp3 stream if available." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# use your own soundcloud urls here\n", "soundcloud_urls = [\n", " 'https://soundcloud.com/absencemusik/lana-del-rey-born-to-die-absence-remix',\n", " 'https://soundcloud.com/princefoxmusic/raindrops-feat-kerli-prince-fox-remix',\n", " 'https://soundcloud.com/octobersveryown/remyboyz-my-way-rmx-ft-drake'\n", " ]\n", "\n", "mp3_dir = './music'\n", "mp3_files = []\n", "own_track_ids = []\n", "\n", "for url in soundcloud_urls:\n", " print url\n", " track = client.get('/resolve', url=url)\n", " mp3_file = mp3_dir + os.sep + \"%s.mp3\" % track.title\n", " mp3_files.append(mp3_file)\n", " own_track_ids.append(track.id)\n", " \n", " stream = client.get('/tracks/%d/streams' % track.id)\n", "\n", " if hasattr(stream, 'http_mp3_128_url'):\n", " mp3_url = stream.http_mp3_128_url\n", " elif hasattr(stream, 'preview_mp3_128_url'): # if we cant get the full mp3 we take the 1:30 preview\n", " mp3_url = 
stream.preview_mp3_128_url\n", "    else:\n", "        print \"No MP3 can be downloaded for this song.\"\n", "        mp3_url = None # in this case we can't get an mp3\n", "    \n", "    if mp3_url is not None:\n", "        urllib.urlretrieve(mp3_url, mp3_file) # Download the 128 kbit stream MP3\n", "        print \"Downloaded \" + mp3_file\n", "    \n", "# show list of mp3 files we got:\n", "# print mp3_files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4.2. Analyzing Songs from Soundcloud" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Analyze the previously loaded Songs\n", "\n", "This combines reading all the MP3s we downloaded and extracting their audio features:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mp3_files is the list of downloaded Soundcloud files as stored above (mp3_files.append())\n", "\n", "# all_features will be a list of dict entries for all files\n", "all_features = []\n", "\n", "for mp3 in mp3_files:\n", "\n", "    # Read the Audio file\n", "    samplerate, samplewidth, wavedata = audiofile_read(mp3)\n", "    print \"Successfully read audio file:\", mp3\n", "    nsamples = wavedata.shape[0]\n", "    nchannels = wavedata.shape[1]\n", "    print samplerate, \"Hz,\", samplewidth*8, \"bit,\", nchannels, \"channel(s),\", nsamples, \"samples\"\n", "    \n", "    # Extract the Audio Features\n", "    # (adapt the fext array to your needs)\n", "    fext = ['rp','ssd','rh','mvd'] # sh, tssd, trh\n", "\n", "    features = rp_extract(wavedata,\n", "        samplerate,\n", "        extract_rp = ('rp' in fext), # extract Rhythm Patterns features\n", "        extract_ssd = ('ssd' in fext), # extract Statistical Spectrum Descriptor\n", "        extract_sh = ('sh' in fext), # extract Statistical Histograms\n", "        extract_tssd = ('tssd' in fext), # extract temporal Statistical Spectrum Descriptor\n", "        extract_rh = ('rh' in fext), # extract Rhythm Histogram features\n", "        extract_trh = ('trh' in fext), # extract temporal Rhythm Histogram features\n", "        extract_mvd = ('mvd' 
in fext), # extract Modulation Frequency Variance Descriptor\n", "        )\n", "    \n", "    all_features.append(features)\n", "    \n", "print \"Finished analyzing\", len(mp3_files), \"files.\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: also see the source file rp_extract_files.py on how to iterate over ALL mp3 or wav files in a directory." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Look at the results" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# iterate over all features (files) we extracted\n", "\n", "for feat in all_features:\n", "    plotrp(feat['rp'])\n", "    plotrh(feat['rh'])\n", "\n", "    maxbin = feat['rh'].argmax(axis=0) + 1 # +1 because it starts from 0\n", "    bpm = maxbin * mod_freq_res * 60\n", "    print \"roughly\", round(bpm), \"bpm\"\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Further Example: Get a list of tracks by Genre\n", "\n", "This is an example of how to retrieve songs from Soundcloud by genre and/or bpm.\n", "\n", "Note: currently this does not work (possibly an issue on the Soundcloud side)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# currently this does not work\n", "genre = 'Dancehall'\n", "\n", "curr_offset = 0 # Note: the API has a limit of 50 items per response, so to get more you have to query multiple times with an offset.\n", "tracks = client.get('/tracks', genres=genre, offset=curr_offset)\n", "print \"Retrieved\", len(tracks), \"track objects\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# original Soundcloud example, searching for genre and bpm\n", "# currently this does not work\n", "tracks = client.get('/tracks', genres='punk', bpm={'from': 120})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4.3. 
Finding Similar Sounding Songs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In these application scenarios we try to find similar songs or classify music into different categories.\n", "\n", "For these use cases we need to import a few additional functions from the sklearn package and from mir_utils (installed from git above in parallel to rp_extract):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# IMPORTING mir_utils (installed from git above in parallel to rp_extract; otherwise adjust the path)\n", "import sys\n", "sys.path.append(\"../mir_utils\")\n", "from demo.NotebookUtils import *\n", "from demo.PlottingUtils import *\n", "from demo.Soundcloud_Demo_Dataset import SoundcloudDemodatasetHandler" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# IMPORTS for NearestNeighbor Search\n", "from sklearn.preprocessing import StandardScaler\n", "from sklearn.neighbors import NearestNeighbors" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Soundcloud Demo Dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Soundcloud Demo Dataset is a collection of commonly known mainstream radio songs hosted on the online streaming platform Soundcloud. The dataset is available as a playlist and is intended to be used to demonstrate the performance of MIR algorithms with the help of well-known songs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# show the data set as a Soundcloud playlist\n", "\n", "iframe = ''\n", "\n", "HTML(iframe)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The *SoundcloudDemodatasetHandler* abstracts the access to the TU-Wien server. On this server the extracted features are stored as csv-files. The *SoundcloudDemodatasetHandler* remotely loads the features and returns them by request. 
The features have been extracted using the method explained in the previous sections." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# first argument is local file path for downloaded MP3s and local metadata (if present, otherwise None)\n", "scds = SoundcloudDemodatasetHandler(None, lazy=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Finding rhythmically similar songs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Initialize the similarity search object\n", "\n", "sim_song_search = NearestNeighbors(n_neighbors = 6, metric='euclidean')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Finding rhythmically similar songs using Rhythm Histograms" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# set feature type\n", "feature_set = 'rh'\n", "\n", "# get features from Soundcloud demo set\n", "demoset_features = scds.features[feature_set][\"data\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Normalize the extracted features\n", "scaled_feature_space = StandardScaler().fit_transform(demoset_features)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Fit the Nearest-Neighbor search object to the extracted features\n", "sim_song_search.fit(scaled_feature_space)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Our query-song:\n", "\n", "This is a query song from the pre-analyzed data set:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "query_track_soundcloud_id = 68687842 # Mr. 
Saxobeat\n", "\n", "HTML(scds.getPlayerHTMLForID(query_track_soundcloud_id))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Retrieve the feature vector for the query song" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "query_track_feature_vector = scaled_feature_space[scds.features[feature_set][\"ids\"] == query_track_soundcloud_id]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Search the nearest neighbors of the query-feature-vector\n", "This retrieves the most similar song indices and their distance:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(distances, similar_songs) = sim_song_search.kneighbors(query_track_feature_vector, return_distance=True)\n", " \n", "print distances\n", "print similar_songs\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For now we use only the song indices without distances\n", "similar_songs = sim_song_search.kneighbors(query_track_feature_vector, return_distance=False)[0]\n", "\n", "# because we are searching in the entire collection, the top-most result is the query song itself. 
Thus, we can skip it.\n", "similar_songs = similar_songs[1:]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Look up the corresponding Soundcloud IDs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "similar_soundcloud_ids = scds.features[feature_set][\"ids\"][similar_songs]\n", "\n", "print similar_soundcloud_ids" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Listen to the results" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SoundcloudTracklist(similar_soundcloud_ids, width=90, height=120, visual=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Finding rhythmically similar songs using Rhythm Patterns\n", "This time we define a function that performs the same steps as the RH retrieval above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def search_similar_songs_by_id(query_song_id, feature_set, skip_query=True):\n", "\n", " scaled_feature_space = StandardScaler().fit_transform(scds.features[feature_set][\"data\"])\n", "\n", " sim_song_search.fit(scaled_feature_space)\n", "\n", " query_track_feature_vector = scaled_feature_space[scds.features[feature_set][\"ids\"] == query_song_id]\n", "\n", " similar_songs = sim_song_search.kneighbors(query_track_feature_vector, return_distance=False)[0]\n", " \n", " if skip_query:\n", " similar_songs = similar_songs[1:]\n", "\n", " similar_soundcloud_ids = scds.features[feature_set][\"ids\"][similar_songs]\n", " \n", " return similar_soundcloud_ids" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "similar_soundcloud_ids = search_similar_songs_by_id(query_track_soundcloud_id, \n", " feature_set='rp')\n", "\n", "SoundcloudTracklist(similar_soundcloud_ids, width=90, height=120, visual=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Finding songs 
based on Timbral Similarity" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Finding songs based on timbral similarity using Statistical Spectrum Descriptors" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "similar_soundcloud_ids = search_similar_songs_by_id(query_track_soundcloud_id, \n", " feature_set='ssd')\n", "\n", "SoundcloudTracklist(similar_soundcloud_ids, width=90, height=120, visual=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Compare the Results of Timbral and Rhythmic Similarity" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first entry is the query track." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "track_id = 68687842 # 40439758\n", "results_track_1 = search_similar_songs_by_id(track_id, feature_set='ssd', skip_query=False)\n", "results_track_2 = search_similar_songs_by_id(track_id, feature_set='rh', skip_query=False)\n", "\n", "compareSimilarityResults([results_track_1, results_track_2],\n", " width=100, height=120, visual=False,\n", " columns=['Statistical Spectrum Descriptors', 'Rhythm Histograms'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using your Own Query Song from the self-extracted Soundcloud tracks above" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# check which files we got\n", "mp3_files" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# select from the list above the number of the song you want to use as a query (counting from 1)\n", "song_id = 3 # count from 1\n", "\n", "# select the feature vector type\n", "feat_type = 'rp' # 'rh' or 'ssd' or 'rp'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# from the all_features data structure, we get the desired feature vector belonging to that song\n", 
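"# note (added comment): unlike the Rhythm Histogram example above, this section searches\n", "# in the raw (unscaled) demo set features, so the query vector is likewise used without\n", "# StandardScaler -- query and collection must always be in the same feature space\n", 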
"query_feature_vector = all_features[song_id - 1][feat_type]\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get all the feature vectors of the desired feature type from the Soundcloud demo set\n", "demo_features = scds.features[feat_type][\"data\"]\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Fit the nearest-neighbor search to the demo set features\n", "\n", "sim_song_search.fit(demo_features)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# use our own query_feature_vector to search in the demo set\n", "(distances, similar_songs) = sim_song_search.kneighbors(query_feature_vector, return_distance=True)\n", " \n", "print distances\n", "print similar_songs\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# now we have the indices of the similar songs in the demo set\n", "similar_songs = similar_songs[0]\n", "similar_songs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# and we look up the corresponding Soundcloud track IDs\n", "similar_soundcloud_ids = scds.features[feat_type][\"ids\"][similar_songs]\n", "\n", "similar_soundcloud_ids" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# we add our own track ID at the beginning to show the seed song below:\n", "\n", "my_track_id = own_track_ids[song_id - 1]\n", "print my_track_id\n", "result = np.insert(similar_soundcloud_ids, 0, my_track_id)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Visual Player with the Songs most similar to our Own Song\n", "The first song is the query song." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print \"Feature Type:\", feat_type\n", "SoundcloudTracklist(result, width=90, height=120, visual=False)" ] }, { "cell_type": 
"markdown", "metadata": {}, "source": [ "### Add-On: Combining different Music Descriptors\n", "\n", "Here we merge SSD and RH features together to account for both timbral and rhythmic similarity:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def search_similar_songs_with_combined_sets(scds, query_song_id, feature_sets, skip_query=True, n_neighbors=6):\n", " \n", " features = scds.getCombinedFeaturesets(feature_sets)\n", " \n", " sim_song_search = NearestNeighbors(n_neighbors=n_neighbors, metric='l2')\n", "\n", " # standardize the concatenated feature vectors\n", " scaled_feature_space = StandardScaler().fit_transform(features)\n", "\n", " # fit the nearest-neighbor search to the scaled features\n", " sim_song_search.fit(scaled_feature_space)\n", "\n", " # look up the feature vector of the query song\n", " query_track_feature_vector = scaled_feature_space[scds.getFeatureIndexByID(query_song_id, feature_sets[0])]\n", " \n", " # retrieve the indices of the most similar songs\n", " similar_songs = sim_song_search.kneighbors(query_track_feature_vector, return_distance=False)[0]\n", " \n", " if skip_query:\n", " similar_songs = similar_songs[1:]\n", "\n", " # map the result indices back to Soundcloud IDs\n", " similar_soundcloud_ids = scds.getIdsByIndex(similar_songs, feature_sets[0])\n", " \n", " return similar_soundcloud_ids" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feature_sets = ['ssd', 'rh']\n", "\n", "compareSimilarityResults([search_similar_songs_with_combined_sets(scds, 68687842, feature_sets=feature_sets, n_neighbors=5),\n", " search_similar_songs_with_combined_sets(scds, 40439758, feature_sets=feature_sets, n_neighbors=5)],\n", " width=100, height=120, visual=False,\n", " columns=[scds.getNameByID(68687842),\n", " scds.getNameByID(40439758)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Further Reading" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " * [Audio Feature Extraction site of the MIR-Team @TU-Wien](http://www.ifs.tuwien.ac.at/mir/audiofeatureextraction.html)\n", " * Blog-post: [A gentle Introduction to Music Information 
Retrieval](http://www.europeanasounds.eu/news/a-gentle-introduction-to-music-information-retrieval-making-computers-understand-music)\n", " * [Same Blog-post with Python code](http://wwwnew.schindler.eu.com/blog/mir_intro/blog_with_code.html)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2" } }, "nbformat": 4, "nbformat_minor": 1 }