{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "### Quickstart\n", "To run the code below:\n", "\n", "1. Click on the cell to select it.\n", "2. Press `SHIFT+ENTER` on your keyboard or press the play button\n", " () in the toolbar above.\n", "\n", "Feel free to create new cells using the plus button\n", "(), or pressing `SHIFT+ENTER` while this cell\n", "is selected." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Example 4 (neural pitch processing with audio input)\n", "\n", "This example is a crude \"pitch detector\" network that performs autocorrelation of an audio signal. It works with coincidence detector neurons that receive two copies of the input (which has been transformed into spikes by an equally crude \"periphery neuron\" model), with a certain delay between the two inputs. Depending on their respective delay, neurons are sensitive to different periodicities.\n", "\n", "The example shows how Brian's high-level model descriptions can be seemlessly combined with low-level code in a target language, in this case C++. Such code can be necessary to extend Brian functionalities without sacrificing performance, e.g. in applications that necessitate real-time processing of external stimuli.\n", "\n", "In this code, we use this mechanism to provide two possible sources of audio input: a pre-recorded audio file (`use_microphone = False`), or real-time input from a microphone (`use_microphone = True`). For the latter, the `portaudio` library needs to be installed. Also note that access to the computers microphone is not possible when running the notebook on an external server such as [mybinder](https://mybinder.org)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from brian2 import *\n", "import os" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use the high-performance C++ standalone mode, otherwise we are not guaranteed that processing is faster than realtime (necessary when using microphone input)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "set_device('cpp_standalone', directory='example_4')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We first set a few global parameters of the model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sample_rate = 44.1*kHz\n", "buffer_size = 128\n", "defaultclock.dt = 1/sample_rate\n", "runtime = 4.5*second\n", "\n", "# Receptor neurons (\"ear\")\n", "max_delay = 20*ms # 50 Hz\n", "tau_ear = 1*ms\n", "tau_th = 5*ms\n", "# Coincidence detectors\n", "min_freq = 50*Hz\n", "max_freq = 1000*Hz\n", "num_neurons = 300\n", "tau = 1*ms\n", "sigma = .1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model equations (see code further down below) refer to a `get_sample` function that returns the next available audio sample. Since we cannot express this function as mathematical equations, we directly provide its implementation in the target language C++. Since the code is relatively long we do not include it directly here, but instead store the source code in a separate file. We provide two such files, `sound_from_mic.cpp` for sound input from a microphone (used when `use_microphone` is set to `True`), and `sound_from_file.cpp` for sound read out from an uncompressed WAV audio file. These files will be compiled when needed because we provide their names as arguments to the `sources` keyword. 
Similarly, we include the function declaration that is present in the `sound_input.h` header file (identical for both cases) by passing it via the `headers` keyword. We further customize the code that gets compiled by providing preprocessor macros via the `define_macros` keyword. Finally, since the microphone input makes use of the `portaudio` library, we link to it via the `libraries` keyword:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "use_microphone = False\n", "if use_microphone:\n", "    # Now comes the connection to the microphone code.\n", "    @implementation('cpp', '//actual code in sound_from_mic.cpp',\n", "                    sources=[os.path.abspath('sound_from_mic.cpp')],\n", "                    headers=['\"{}\"'.format(os.path.abspath('sound_input.h'))],\n", "                    libraries=['portaudio'],\n", "                    define_macros=[('BUFFER_SIZE', buffer_size),\n", "                                   ('SAMPLE_RATE', sample_rate/Hz)])\n", "    @check_units(t=second, result=1)\n", "    def get_sample(t):\n", "        raise NotImplementedError('Use a C++-based code generation target.')\n", "else:\n", "    # Instead of using the microphone, use a sound file\n", "    @implementation('cpp', '//actual code in sound_from_file.cpp',\n", "                    sources=[os.path.abspath('sound_from_file.cpp')],\n", "                    headers=['\"{}\"'.format(os.path.abspath('sound_input.h'))],\n", "                    define_macros=[('FILENAME', r'\\\"{}\\\"'.format(os.path.abspath('scale_flute.wav')))])\n", "    @check_units(t=second, result=1)\n", "    def get_sample(t):\n", "        raise NotImplementedError('Use a C++-based code generation target.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now specify our neural and synaptic model, making use of the `get_sample` function as if it were one of the standard functions provided by Brian:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gain = 50\n", "\n", "# Note that the `get_sample` function does not actually make use of the time `t` that it is given; for simplicity,\n", "# it assumes that it is called only once per time step.
This is actually enforced by using our `constant over dt`\n", "# feature -- the variable `sound` can be used in several places (which is important here, since we want to\n", "# record it as well):\n", "eqs_ear = '''\n", "dx/dt = (sound - x)/tau_ear: 1 (unless refractory)\n", "dth/dt = (0.1*x - th)/tau_th : 1\n", "sound = clip(get_sample(t), 0, inf) : 1 (constant over dt)\n", "'''\n", "receptors = NeuronGroup(1, eqs_ear, threshold='x>th', reset='x=0; th = th*2.5 + 0.01',\n", " refractory=2*ms, method='exact')\n", "receptors.th = 1\n", "sound_mon = StateMonitor(receptors, 'sound', record=0)\n", "\n", "eqs_neurons = '''\n", "dv/dt = -v/tau+sigma*(2./tau)**.5*xi : 1\n", "freq : Hz (constant)\n", "'''\n", "\n", "neurons = NeuronGroup(num_neurons, eqs_neurons, threshold='v>1', reset='v=0',\n", " method='euler')\n", "neurons.freq = 'exp(log(min_freq/Hz)+(i*1.0/(num_neurons-1))*log(max_freq/min_freq))*Hz'\n", "synapses = Synapses(receptors, neurons, on_pre='v += 0.5',\n", " multisynaptic_index='k')\n", "synapses.connect(n=2) # one synapse without delay; one with delay\n", "synapses.delay['k == 1'] = '1/freq_post'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We record the spikes of the \"pitch detector\" neurons, and run the simulation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "spikes = SpikeMonitor(neurons)\n", "\n", "run(runtime)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After the simulation ran through, we plot the raw sound input as well as its spectrogram, and the spiking activity of the detector neurons:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from plotly import tools\n", "from plotly.offline import iplot, init_notebook_mode\n", "import plotly.graph_objs as go\n", "from scipy.signal import spectrogram\n", "init_notebook_mode(connected=True)\n", "\n", "fig = tools.make_subplots(5, 1, shared_xaxes=True,\n", " specs=[[{}], [{'rowspan': 2}], [None], [{'rowspan': 2}], [None]],\n", " print_grid=False\n", " # subplot_titles=('Raw sound signal', 'Spectrogram of sound signal',\n", " # 'Spiking activity')\n", " )\n", "\n", "trace = go.Scatter(x=sound_mon.t/second,\n", " y=sound_mon.sound[0],\n", " name='sound signal',\n", " mode='lines',\n", " line={'color':'#1f77b4'},\n", " showlegend=False\n", " )\n", "fig.append_trace(trace, 1, 1)\n", "f, t, Sxx = spectrogram(sound_mon.sound[0], fs=sample_rate/Hz, nperseg=2**12, window='hamming')\n", "\n", "trace = go.Heatmap(x=t, y=f, z=10*np.log10(Sxx), showscale=False,\n", " colorscale='Viridis', name='PSD')\n", "fig.append_trace(trace, 2, 1)\n", "\n", "trace = go.Scatter(x=spikes.t/second,\n", " y=neurons.freq[spikes.i]/Hz, \n", " marker={'symbol': 'line-ns', 'line': {'width': 1, 'color':'#1f77b4'},\n", " 'color':'#1f77b4'},\n", " mode='markers',\n", " name='spikes', showlegend=False)\n", "fig.append_trace(trace, 4, 1)\n", "\n", "fig['layout'].update(xaxis={'title': 'time (in s)',\n", " 'range': (0.4, runtime/second)},\n", " yaxis1={'title': 'amplitude',\n", " 'showticklabels': False},\n", " yaxis2={'type': 'log',\n", " 'range': (0.9*np.log10(min_freq/Hz), 1.1*np.log10(500)),\n", " 'title': 'Frequency (Hz)'},\n", " yaxis3={'type': 'log',\n", " 'range': (0.9*np.log10(min_freq/Hz), 1.1*np.log10(500)),\n", " 'title': 'Preferred\\nFrequency (Hz)'})\n", "\n", "iplot(fig)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you ran the above code for the pre-recorded sound file, you should clearly see that 
four separate, ascending notes were played." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }