{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Computing source timecourses with an XFit-like multi-dipole model\n\nMEGIN's XFit program offers a \"guided ECD modeling\" interface, where multiple\ndipoles can be fitted interactively. By manually selecting subsets of sensors\nand time ranges, dipoles can be fitted to specific signal components. Then,\nsource timecourses can be computed using a multi-dipole model. The advantage of\nusing a multi-dipole model over fitting each dipole in isolation, is that when\nmultiple dipoles contribute to the same signal component, the model can make\nsure that activity assigned to one dipole is not also assigned to another. This\nexample shows how to build a multi-dipole model for estimating source\ntimecourses for evokeds or single epochs.\n\nThe XFit program is the recommended approach for guided ECD modeling, because\nit offers a convenient graphical user interface for it. These dipoles can then\nbe imported into MNE-Python by using the :func:`mne.read_dipole` function for\nbuilding and applying the multi-dipole model. In addition, this example will\nalso demonstrate how to perform guided ECD modeling using only MNE-Python\nfunctionality, which is less convenient than using XFit, but has the benefit of\nbeing reproducible.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Author: Marijn van Vliet \n#\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Importing everything and setting up the data paths for the MNE-Sample\ndataset.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nimport mne\nfrom mne.channels import read_vectorview_selection\nfrom mne.datasets import sample\nfrom mne.minimum_norm import apply_inverse, apply_inverse_epochs, make_inverse_operator\n\ndata_path = sample.data_path()\nmeg_path = data_path / \"MEG\" / \"sample\"\nraw_fname = meg_path / \"sample_audvis_raw.fif\"\ncov_fname = meg_path / \"sample_audvis-shrunk-cov.fif\"\nbem_dir = data_path / \"subjects\" / \"sample\" / \"bem\"\nbem_fname = bem_dir / \"sample-5120-5120-5120-bem-sol.fif\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Read the MEG data from the audvis experiment. Make epochs and evokeds for the\nleft and right auditory conditions.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "raw = mne.io.read_raw_fif(raw_fname)\nraw = raw.pick(picks=[\"meg\", \"eog\", \"stim\"])\ninfo = raw.info\n\n# Create epochs for auditory events\nevents = mne.find_events(raw)\nevent_id = dict(right=1, left=2)\nepochs = mne.Epochs(\n raw,\n events,\n event_id,\n tmin=-0.1,\n tmax=0.3,\n baseline=(None, 0),\n reject=dict(mag=4e-12, grad=4000e-13, eog=150e-6),\n)\n\n# Create evokeds for left and right auditory stimulation\nevoked_left = epochs[\"left\"].average()\nevoked_right = epochs[\"right\"].average()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Guided dipole modeling, meaning fitting dipoles to a manually selected subset\nof sensors as a manually chosen time, can now be performed in MEGINs XFit on\nthe evokeds we computed above. 
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Set up the conductor model\ncov = mne.read_cov(cov_fname)  # bad channels were already excluded here\nbem = mne.read_bem_solution(bem_fname)\n\n# Fit two dipoles at t=80 ms. The first dipole is fitted using only the sensors\n# on the left side of the helmet. The second dipole is fitted using only the\n# sensors on the right side of the helmet.\npicks_left = read_vectorview_selection(\"Left\", info=info)\nevoked_fit_left = evoked_left.copy().crop(0.08, 0.08)\nevoked_fit_left.pick(picks_left)\ncov_fit_left = cov.copy().pick_channels(picks_left, ordered=True)\n\npicks_right = read_vectorview_selection(\"Right\", info=info)\npicks_right = list(set(picks_right) - set(info[\"bads\"]))\nevoked_fit_right = evoked_right.copy().crop(0.08, 0.08)\nevoked_fit_right.pick(picks_right)\ncov_fit_right = cov.copy().pick_channels(picks_right, ordered=True)\n\n# Any SSP projections that are active on this data need to be re-normalized\n# after picking channels.\nevoked_fit_left.info.normalize_proj()\nevoked_fit_right.info.normalize_proj()\ncov_fit_left[\"projs\"] = evoked_fit_left.info[\"projs\"]\ncov_fit_right[\"projs\"] = evoked_fit_right.info[\"projs\"]\n\n# Fit the dipoles with the subset of sensors.\ndip_left, _ = mne.fit_dipole(evoked_fit_left, cov_fit_left, bem)\ndip_right, _ = mne.fit_dipole(evoked_fit_right, cov_fit_right, bem)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have the locations and orientations of the dipoles, compute the\nfull timecourses using the MNE method, assigning activity to both dipoles at\nthe same time while preventing leakage between the two. We use a very low\n``lambda2`` regularization value to ensure both dipoles are fully used.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fwd, _ = mne.make_forward_dipole([dip_left, dip_right], bem, info)\n\n# Apply MNE inverse\ninv = make_inverse_operator(info, fwd, cov, fixed=True, depth=0)\nstc_left = apply_inverse(evoked_left, inv, method=\"MNE\", lambda2=1e-6)\nstc_right = apply_inverse(evoked_right, inv, method=\"MNE\", lambda2=1e-6)\n\n# Plot the timecourses of the resulting source estimate\nfig, axes = plt.subplots(nrows=2, sharex=True, sharey=True)\naxes[0].plot(stc_left.times, stc_left.data.T)\naxes[0].set_title(\"Left auditory stimulation\")\naxes[0].legend([\"Dipole 1\", \"Dipole 2\"])\naxes[1].plot(stc_right.times, stc_right.data.T)\naxes[1].set_title(\"Right auditory stimulation\")\naxes[1].set_xlabel(\"Time (s)\")\nfig.supylabel(\"Dipole amplitude\")" ] },
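{ "cell_type": "markdown", "metadata": {}, "source": [ "To illustrate the advantage of the multi-dipole model mentioned in the\nintroduction, here is a small sketch (not part of the XFit workflow) that builds\nan inverse operator from the left dipole alone and applies it to the same evoked.\nBecause the auditory fields of the two hemispheres overlap on the sensors, the\nsingle-dipole timecourse can also absorb activity that the two-dipole model\nattributes to the other dipole.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sketch: a single-dipole model for comparison, using the same functions as above.\nfwd_single, _ = mne.make_forward_dipole(dip_left, bem, info)\ninv_single = make_inverse_operator(info, fwd_single, cov, fixed=True, depth=0)\nstc_single = apply_inverse(evoked_left, inv_single, method=\"MNE\", lambda2=1e-6)\n\n# Compare the timecourse of dipole 1 between the two models.\nfig, ax = plt.subplots()\nax.plot(stc_left.times, stc_left.data[0], label=\"Dipole 1, two-dipole model\")\nax.plot(stc_single.times, stc_single.data[0], label=\"Dipole 1, single-dipole model\")\nax.set_xlabel(\"Time (s)\")\nax.set_ylabel(\"Dipole amplitude\")\nax.legend()" ] },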
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can also compute the dipole timecourses for single epochs. Here, we do it\nfor each experimental condition separately.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "stcs_left = apply_inverse_epochs(epochs[\"left\"], inv, lambda2=1e-6, method=\"MNE\")\nstcs_right = apply_inverse_epochs(epochs[\"right\"], inv, lambda2=1e-6, method=\"MNE\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To summarize and visualize the single-epoch dipole amplitudes, we will create\na detailed plot of the mean amplitude of each dipole during the two\nexperimental conditions.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Summarize the single-epoch timecourses by computing the mean amplitude from\n# 60-90 ms.\namplitudes_left = []\namplitudes_right = []\nfor stc in stcs_left:\n    amplitudes_left.append(stc.crop(0.06, 0.09).mean().data)\nfor stc in stcs_right:\n    amplitudes_right.append(stc.crop(0.06, 0.09).mean().data)\namplitudes = np.vstack([amplitudes_left, amplitudes_right])\n\n# Visualize the epoch-by-epoch dipole amplitudes in a detailed figure.\nn = len(amplitudes)\nn_left = len(amplitudes_left)\nmean_left = np.mean(amplitudes_left, axis=0)\nmean_right = np.mean(amplitudes_right, axis=0)\n\nfig, ax = plt.subplots(figsize=(8, 4))\nax.scatter(np.arange(n), amplitudes[:, 0], label=\"Dipole 1\")\nax.scatter(np.arange(n), amplitudes[:, 1], label=\"Dipole 2\")\ntransition_point = n_left - 0.5\nax.plot([0, transition_point], [mean_left[0], mean_left[0]], color=\"C0\")\nax.plot([0, transition_point], [mean_left[1], mean_left[1]], color=\"C1\")\nax.plot([transition_point, n], [mean_right[0], mean_right[0]], color=\"C0\")\nax.plot([transition_point, n], [mean_right[1], mean_right[1]], color=\"C1\")\nax.axvline(transition_point, color=\"black\")\nax.set_xlabel(\"Epochs\")\nax.set_ylabel(\"Dipole amplitude\")\nax.legend()\nfig.suptitle(\"Single epoch dipole amplitudes\")\nfig.text(0.30, 0.9, \"Left auditory stimulation\", ha=\"center\")\nfig.text(0.70, 0.9, \"Right auditory stimulation\", ha=\"center\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }