{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Receptive Field Estimation and Prediction\n\nThis example reproduces figures from Lalor et al.'s mTRF toolbox in\nMATLAB :footcite:`CrosseEtAl2016`. We will show how the\n:class:`mne.decoding.ReceptiveField` class\ncan perform a similar function along with scikit-learn. We will first fit a\nlinear encoding model using the continuously-varying speech envelope to predict\nactivity of a 128 channel EEG system. Then, we will take the reverse approach\nand try to predict the speech envelope from the EEG (known in the literature\nas a decoding model, or simply stimulus reconstruction).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: Chris Holdgraf \n# Eric Larson \n# Nicolas Barascud \n#\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors.\n\nfrom os.path import join\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.io import loadmat\nfrom sklearn.model_selection import KFold\nfrom sklearn.preprocessing import scale\n\nimport mne\nfrom mne.decoding import ReceptiveField" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load the data from the publication\n\nFirst we will load the data collected in :footcite:`CrosseEtAl2016`.\nIn this experiment subjects\nlistened to natural speech. Raw EEG and the speech stimulus are provided.\nWe will load these below, downsampling the data in order to speed up\ncomputation since we know that our features are primarily low-frequency in\nnature. Then we'll visualize both the EEG and speech envelope.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "path = mne.datasets.mtrf.data_path()\ndecim = 2\ndata = loadmat(join(path, \"speech_data.mat\"))\nraw = data[\"EEG\"].T\nspeech = data[\"envelope\"].T\nsfreq = float(data[\"Fs\"].item())\nsfreq /= decim\nspeech = mne.filter.resample(speech, down=decim, method=\"polyphase\")\nraw = mne.filter.resample(raw, down=decim, method=\"polyphase\")\n\n# Read in channel positions and create our MNE objects from the raw data\nmontage = mne.channels.make_standard_montage(\"biosemi128\")\ninfo = mne.create_info(montage.ch_names, sfreq, \"eeg\").set_montage(montage)\nraw = mne.io.RawArray(raw, info)\nn_channels = len(raw.ch_names)\n\n# Plot a sample of brain and stimulus activity\nfig, ax = plt.subplots(layout=\"constrained\")\nlns = ax.plot(scale(raw[:, :800][0].T), color=\"k\", alpha=0.1)\nln1 = ax.plot(scale(speech[0, :800]), color=\"r\", lw=2)\nax.legend([lns[0], ln1[0]], [\"EEG\", \"Speech Envelope\"], frameon=False)\nax.set(title=\"Sample activity\", xlabel=\"Time (s)\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create and fit a receptive field model\n\nWe will construct an encoding model to find the linear relationship between\na time-delayed version of the speech envelope and the EEG signal. 
This allows\nus to make predictions about the response to new stimuli.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Define the delays that we will use in the receptive field\ntmin, tmax = -0.2, 0.4\n\n# Initialize the model\nrf = ReceptiveField(\n    tmin, tmax, sfreq, feature_names=[\"envelope\"], estimator=1.0, scoring=\"corrcoef\"\n)\n# We'll have (tmax - tmin) * sfreq delays\n# and an extra 2 delays since we are inclusive on the beginning / end index\nn_delays = int((tmax - tmin) * sfreq) + 2\n\nn_splits = 3\ncv = KFold(n_splits)\n\n# Prepare model data (make time the first dimension)\nspeech = speech.T\nY, _ = raw[:]  # Outputs for the model\nY = Y.T\n\n# Iterate through splits, fit the model, and predict/test on held-out data\ncoefs = np.zeros((n_splits, n_channels, n_delays))\nscores = np.zeros((n_splits, n_channels))\nfor ii, (train, test) in enumerate(cv.split(speech)):\n    print(f\"split {ii + 1} / {n_splits}\")\n    rf.fit(speech[train], Y[train])\n    scores[ii] = rf.score(speech[test], Y[test])\n    # coef_ is shape (n_outputs, n_features, n_delays). We only have 1 feature\n    coefs[ii] = rf.coef_[:, 0, :]\ntimes = rf.delays_ / float(rf.sfreq)\n\n# Average scores and coefficients across CV splits\nmean_coefs = coefs.mean(axis=0)\nmean_scores = scores.mean(axis=0)\n\n# Plot mean prediction scores across all channels\nfig, ax = plt.subplots(layout=\"constrained\")\nix_chs = np.arange(n_channels)\nax.plot(ix_chs, mean_scores)\nax.axhline(0, ls=\"--\", color=\"r\")\nax.set(title=\"Mean prediction score\", xlabel=\"Channel\", ylabel=\"Score ($r$)\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Investigate model coefficients\n\nFinally, we will look at how the linear coefficients (sometimes\nreferred to as beta values) are distributed across time delays as well as\nacross the scalp. We will recreate Figures 1 and 2 from\n:footcite:`CrosseEtAl2016`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Plot mean coefficients across all time delays / channels (see Fig 1)\ntime_plot = 0.180  # For highlighting a specific time.\nfig, ax = plt.subplots(figsize=(4, 8), layout=\"constrained\")\nmax_coef = mean_coefs.max()\nax.pcolormesh(\n    times,\n    ix_chs,\n    mean_coefs,\n    cmap=\"RdBu_r\",\n    vmin=-max_coef,\n    vmax=max_coef,\n    shading=\"gouraud\",\n)\nax.axvline(time_plot, ls=\"--\", color=\"k\", lw=2)\nax.set(\n    xlabel=\"Delay (s)\",\n    ylabel=\"Channel\",\n    title=\"Mean Model\\nCoefficients\",\n    xlim=times[[0, -1]],\n    ylim=[len(ix_chs) - 1, 0],\n    xticks=np.arange(tmin, tmax + 0.2, 0.2),\n)\nplt.setp(ax.get_xticklabels(), rotation=45)\n\n# Make a topographic map of coefficients for a given delay (see Fig 2C)\nix_plot = np.argmin(np.abs(time_plot - times))\nfig, ax = plt.subplots(layout=\"constrained\")\nmne.viz.plot_topomap(\n    mean_coefs[:, ix_plot], pos=info, axes=ax, show=False, vlim=(-max_coef, max_coef)\n)\nax.set(title=f\"Topomap of model coefficients\\nfor delay {time_plot} s\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create and fit a stimulus reconstruction model\n\nWe will now demonstrate another use case for the\n:class:`mne.decoding.ReceptiveField` class as we try to predict the stimulus\nactivity from the EEG data.
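Schematically, this simply swaps the\ninputs and outputs of the encoding model above (a sketch of the call\npattern using the variables defined earlier; ``sr`` is the decoder we\ncreate below):\n\n```python\n# encoding (above): predict EEG from the lagged speech envelope\n#     rf.fit(speech, Y); rf.predict(speech)\n# decoding (below): reconstruct the envelope from the lagged EEG\n#     sr.fit(Y, speech); sr.predict(Y)\n```\n\n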
Predicting the stimulus from the measured brain activity is known in the\nliterature as a decoding, or stimulus reconstruction, model\n:footcite:`CrosseEtAl2016`.\nA decoding model aims to find the\nrelationship between the speech signal and a time-delayed version of the EEG.\nThis can be useful because it exploits all of the available neural data in a\nmultivariate context, whereas the encoding case treats each M/EEG\nchannel as an independent feature. Therefore, decoding models might provide a\nbetter quality of fit (at the expense of not controlling for stimulus\ncovariance), especially for low-SNR stimuli such as speech.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# We use the same lags as in :footcite:`CrosseEtAl2016`. Negative lags now\n# index the relationship between the neural response and the speech envelope\n# earlier in time, whereas positive lags would index how a unit change in the\n# amplitude of the EEG would affect later stimulus activity (obviously this\n# should have an amplitude of zero).\ntmin, tmax = -0.2, 0.0\n\n# Initialize the model. Here the features are the EEG data. We also specify\n# ``patterns=True`` to compute inverse-transformed coefficients during model\n# fitting (cf. next section and :footcite:`HaufeEtAl2014`).\n# We'll use a ridge regression estimator with an alpha value similar to\n# Crosse et al.\nsr = ReceptiveField(\n    tmin,\n    tmax,\n    sfreq,\n    feature_names=raw.ch_names,\n    estimator=1e4,\n    scoring=\"corrcoef\",\n    patterns=True,\n)\n# We'll have (tmax - tmin) * sfreq delays\n# and an extra 2 delays since we are inclusive on the beginning / end index\nn_delays = int((tmax - tmin) * sfreq) + 2\n\nn_splits = 3\ncv = KFold(n_splits)\n\n# Iterate through splits, fit the model, and predict/test on held-out data\ncoefs = np.zeros((n_splits, n_channels, n_delays))\npatterns = coefs.copy()\nscores = np.zeros((n_splits,))\nfor ii, (train, test) in enumerate(cv.split(speech)):\n    print(f\"split {ii + 1} / {n_splits}\")\n    sr.fit(Y[train], speech[train])\n    scores[ii] = sr.score(Y[test], speech[test])[0]\n    # coef_ is shape (n_outputs, n_features, n_delays). We have 128 features\n    coefs[ii] = sr.coef_[0, :, :]\n    patterns[ii] = sr.patterns_[0, :, :]\ntimes = sr.delays_ / float(sr.sfreq)\n\n# Average scores and coefficients across CV splits\nmean_coefs = coefs.mean(axis=0)\nmean_patterns = patterns.mean(axis=0)\nmean_scores = scores.mean(axis=0)\nmax_coef = np.abs(mean_coefs).max()\nmax_patterns = np.abs(mean_patterns).max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualize stimulus reconstruction\n\nTo get a sense of our model performance, we can plot the actual and predicted\nstimulus envelopes side by side.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "y_pred = sr.predict(Y[test])\ntime = np.arange(int(5 * sfreq)) / sfreq  # plot the first 5 seconds\nfig, ax = plt.subplots(figsize=(8, 4), layout=\"constrained\")\nln_env = ax.plot(\n    time, speech[test][sr.valid_samples_][: int(5 * sfreq)], color=\"grey\", lw=2, ls=\"--\"\n)\nln_rec = ax.plot(time, y_pred[sr.valid_samples_][: int(5 * sfreq)], color=\"r\", lw=2)\nax.legend([ln_env[0], ln_rec[0]], [\"Envelope\", \"Reconstruction\"], frameon=False)\nax.set(title=\"Stimulus reconstruction\")\nax.set_xlabel(\"Time (s)\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Investigate model coefficients\n\nFinally, we will look at how the decoding model coefficients are distributed\nacross the scalp.
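As background, the\ninversion procedure of :footcite:`HaufeEtAl2014` that we rely on below\nconverts decoder filters ``W`` into activation patterns ``A`` via\n``A = Cov(X) @ W @ pinv(Cov(y_hat))``. Here is a minimal sketch of that\nidea (our own illustration; :class:`mne.decoding.ReceptiveField` computes\nthis internally when ``patterns=True``):\n\n```python\nimport numpy as np\n\n\ndef filters_to_patterns(X, W):\n    \"\"\"Haufe-style patterns from data X (times x features) and filters W.\"\"\"\n    y_hat = X @ W  # the decoder's latent output(s)\n    # A = Cov(X) @ W @ Cov(y_hat)^-1\n    return np.cov(X.T) @ W @ np.linalg.pinv(np.atleast_2d(np.cov(y_hat.T)))\n```\n\n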
We will attempt to recreate Figure 5 from\n:footcite:`CrosseEtAl2016`. The\ndecoding model weights reflect the channels that contribute most toward\nreconstructing the stimulus signal, but they are not directly interpretable\nin a neurophysiological sense. Here we also look at the coefficients obtained\nvia the inversion procedure sketched above :footcite:`HaufeEtAl2014`, which\nhave a more straightforward interpretation, as their value (and sign)\ndirectly relates to the stimulus signal's strength (and effect direction).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "time_plot = (-0.140, -0.125)  # To average between two timepoints.\nix_plot = np.arange(\n    np.argmin(np.abs(time_plot[0] - times)),\n    np.argmin(np.abs(time_plot[1] - times)) + 1,  # inclusive of the endpoint\n)\nfig, ax = plt.subplots(1, 2, layout=\"constrained\")\nmne.viz.plot_topomap(\n    np.mean(mean_coefs[:, ix_plot], axis=1),\n    pos=info,\n    axes=ax[0],\n    show=False,\n    vlim=(-max_coef, max_coef),\n)\nax[0].set(\n    title=f\"Model coefficients\\nbetween delays {time_plot[0]} and {time_plot[1]} s\"\n)\n\nmne.viz.plot_topomap(\n    np.mean(mean_patterns[:, ix_plot], axis=1),\n    pos=info,\n    axes=ax[1],\n    show=False,\n    vlim=(-max_patterns, max_patterns),\n)\nax[1].set(\n    title=(\n        f\"Inverse-transformed coefficients\\nbetween delays {time_plot[0]} and \"\n        f\"{time_plot[1]} s\"\n    )\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n\n.. footbibliography::\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }