{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Source reconstruction using an LCMV beamformer\n\nThis tutorial gives an overview of the beamformer method and shows how to\nreconstruct source activity using an LCMV beamformer.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: Britta Westner \n# Eric Larson \n#\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors.\n\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.beamformer import apply_lcmv, make_lcmv\nfrom mne.datasets import fetch_fsaverage, sample" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction to beamformers\nA beamformer is a spatial filter that reconstructs source activity by\nscanning through a grid of pre-defined source points and estimating activity\nat each of those source points independently. A set of weights is\nconstructed for each defined source location which defines the contribution\nof each sensor to this source.\n\nBeamformers are often used for their focal reconstructions and their ability\nto reconstruct deeper sources. They can also suppress external noise sources.\nThe beamforming method applied in this tutorial is the linearly constrained\nminimum variance (LCMV) beamformer :footcite:`VanVeenEtAl1997` operates on\ntime series.\n\nFrequency-resolved data can be reconstructed with the dynamic imaging of\ncoherent sources (DICS) beamforming method :footcite:`GrossEtAl2001`.\nAs we will see in the following, the spatial filter is computed from two\ningredients: the forward model solution and the covariance matrix of the\ndata.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data processing\nWe will use the sample data set for this tutorial and reconstruct source\nactivity on the trials with left auditory stimulation.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_path = sample.data_path()\nsubjects_dir = data_path / \"subjects\"\nmeg_path = data_path / \"MEG\" / \"sample\"\nraw_fname = meg_path / \"sample_audvis_filt-0-40_raw.fif\"\n\n# Read the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nraw.info[\"bads\"] = [\"MEG 2443\"] # bad MEG channel\n\n# Set up epoching\nevent_id = 1 # those are the trials with left-ear auditory stimuli\ntmin, tmax = -0.2, 0.5\nevents = mne.find_events(raw)\n\n# pick relevant channels\nraw.pick([\"meg\", \"eog\"]) # pick channels of interest\n\n# Create epochs\nproj = False # already applied\nepochs = mne.Epochs(\n raw,\n events,\n event_id,\n tmin,\n tmax,\n baseline=(None, 0),\n preload=True,\n proj=proj,\n reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6),\n)\n\n# for speed purposes, cut to a window of interest\nevoked = epochs.average().crop(0.05, 0.15)\n\n# Visualize averaged sensor space data\nevoked.plot_joint()\n\ndel raw # save memory" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Computing the covariance matrices\nSpatial filters use the data covariance to estimate the filter\nweights. The data covariance matrix will be `inverted`_ during the spatial\nfilter computation, so it is valuable to plot the covariance matrix and its\neigenvalues to gauge whether matrix inversion will be possible.\nAlso, because we want to combine different channel types (magnetometers and\ngradiometers), we need to account for the different amplitude scales of these\nchannel types. 
\nAlso, because we want to combine different channel types (magnetometers and\ngradiometers), we need to account for the different amplitude scales of these\nchannel types. To do this, we will supply a noise covariance matrix to the\nbeamformer, which will be used for whitening.\nThe data covariance matrix should be estimated from a time window that\nincludes the brain signal of interest and that incorporates enough samples\nfor a stable estimate. A rule of thumb is to use more samples than there are\nchannels in the data set; see :footcite:`BrookesEtAl2008,WestnerEtAl2022` for\nmore detailed advice on covariance estimation for beamformers. Here, we use a\ntime window incorporating the expected auditory response at around 100 ms\npost stimulus and extend the period to account for the low number of trials\n(72) and the low sampling rate of 150 Hz.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_cov = mne.compute_covariance(epochs, tmin=0.01, tmax=0.25, method=\"empirical\")\nnoise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method=\"empirical\")\ndata_cov.plot(epochs.info)\ndel epochs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When looking at the covariance matrix plots, we can see that our data is\nslightly rank-deficient, as the rank is not equal to the number of channels.\nThus, we choose to regularize the covariance matrix before inverting it\nin the beamformer calculation. This can be achieved by setting the parameter\n``reg=0.05`` when calculating the spatial filter with\n:func:`~mne.beamformer.make_lcmv`. This corresponds to loading the diagonal\nof the covariance matrix with 5% of the sensor power. Other ways to deal with\nrank-deficient covariance matrices are discussed in\n:footcite:`WestnerEtAl2022`.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The forward model\nThe forward model is the other important ingredient for the computation of a\nspatial filter. Here, we will load the forward model from disk; more\ninformation on how to create a forward model can be found in this tutorial:\n:ref:`tut-forward`.\nNote that beamformers are usually computed in a :class:`volume source space\n<mne.SourceSpaces>`, because estimating only cortical surface activation can\nmisrepresent the data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Read forward model\n\nfwd_fname = meg_path / \"sample_audvis-meg-vol-7-fwd.fif\"\nforward = mne.read_forward_solution(fwd_fname)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Handling depth bias\n\nThe forward model solution is inherently biased toward superficial sources.\nWhen analyzing single conditions, it is best to mitigate this depth bias.\nThere are several ways to do so:\n\n- :func:`mne.beamformer.make_lcmv` has a ``depth`` parameter that normalizes\n  the forward model prior to computing the spatial filters. See the docstring\n  for details.\n- Unit-noise gain beamformers handle depth bias by normalizing the\n  weights of the spatial filter. Choose this by setting\n  ``weight_norm='unit-noise-gain'``.\n- When computing the neural activity index, the depth bias is handled by\n  normalizing both the weights and the estimated noise (see\n  :footcite:`VanVeenEtAl1997`). Choose this by setting ``weight_norm='nai'``.\n\nNote that when comparing conditions, the depth bias will cancel out and it is\npossible to set both parameters to ``None``.\n\n\n## Compute the spatial filter\nNow we can compute the spatial filter.
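\nFor a source with leadfield $\\mathbf{l}$ (the corresponding column of the\nforward model) and data covariance $\\mathbf{C}$, the LCMV filter weights are\ngiven by :footcite:`VanVeenEtAl1997`\n\n\\begin{align}\\mathbf{w} = \\frac{\\mathbf{C}^{-1} \\mathbf{l}}{\\mathbf{l}^\\top \\mathbf{C}^{-1} \\mathbf{l}},\\end{align}\n\ni.e., the filter minimizes output power subject to a unit-gain constraint at\nthe source location; the weight normalization schemes discussed above then\nrescale these weights.\n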
We'll use a unit-noise gain beamformer\nto deal with depth bias, and will also optimize the orientation of the\nsources such that output power is maximized.\nThis is achieved by setting ``pick_ori='max-power'``.\nThis gives us one source estimate per source location (i.e., per voxel),\nwhich is known as a scalar beamformer.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "filters = make_lcmv(\n    evoked.info,\n    forward,\n    data_cov,\n    reg=0.05,\n    noise_cov=noise_cov,\n    pick_ori=\"max-power\",\n    weight_norm=\"unit-noise-gain\",\n    rank=None,\n)\n\n# You can save the filter for later use with:\n# filters.save('filters-lcmv.h5')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is also possible to compute a vector beamformer, which gives back three\nestimates per voxel, corresponding to the three direction components of the\nsource. This can be achieved by setting ``pick_ori='vector'`` and will yield\na :class:`volume vector source estimate <mne.VolVectorSourceEstimate>`. Note\nthat we switch the ``weight_norm`` parameter to\n``'unit-noise-gain-invariant'``, which is only necessary for the vector\nunit-noise-gain beamformer. For more in-depth detail, see\n:footcite:`WestnerEtAl2022`.\nWe will compute another set of filters using the vector beamformer approach:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "filters_vec = make_lcmv(\n    evoked.info,\n    forward,\n    data_cov,\n    reg=0.05,\n    noise_cov=noise_cov,\n    pick_ori=\"vector\",\n    weight_norm=\"unit-noise-gain-invariant\",\n    rank=None,\n)\n# Save a bit of memory\nsrc = forward[\"src\"]\ndel forward" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Apply the spatial filter\nThe spatial filter can be applied to different data types: raw, epochs,\nevoked data, or the data covariance matrix (to gain a static image of\npower). The function to apply the spatial filter to :class:`~mne.Evoked`\ndata is :func:`~mne.beamformer.apply_lcmv`, which is what we will use here.\nThe other functions are :func:`~mne.beamformer.apply_lcmv_raw`,\n:func:`~mne.beamformer.apply_lcmv_epochs`, and\n:func:`~mne.beamformer.apply_lcmv_cov`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "stc = apply_lcmv(evoked, filters)\nstc_vec = apply_lcmv(evoked, filters_vec)\ndel filters, filters_vec" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualize the reconstructed source activity\nWe can visualize the source estimate in different ways, e.g. as a volume\nrendering, an overlay onto the MRI, or as an overlay onto a glass brain.\n\nThe plots for the scalar beamformer show brain activity in the right temporal\nlobe around 100 ms post stimulus.
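\nAs a quick numerical check of where and when the reconstruction peaks, one\ncould also query the peak of the scalar estimate, along these lines:\n\n```python\n# Illustrative sketch: vertex number and latency of the absolute peak\npeak_vertex, peak_time = stc.get_peak(tmin=0.05, tmax=0.15, mode=\"abs\")\nprint(peak_vertex, peak_time)\n```\n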
Such right-lateralized activity is expected given the left-ear auditory\nstimulation used in the experiment.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "lims = [0.3, 0.45, 0.6]\nkwargs = dict(\n    src=src,\n    subject=\"sample\",\n    subjects_dir=subjects_dir,\n    initial_time=0.087,\n    verbose=True,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### On MRI slices (orthoview; 2D)\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "stc.plot(mode=\"stat_map\", clim=dict(kind=\"value\", pos_lims=lims), **kwargs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### On MNI glass brain (orthoview; 2D)\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "stc.plot(mode=\"glass_brain\", clim=dict(kind=\"value\", lims=lims), **kwargs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Volumetric rendering (3D) with vectors\nThese plots can also be shown using a volumetric rendering via\n:meth:`~mne.VolVectorSourceEstimate.plot_3d`. Let's try visualizing the\nvector beamformer case. Here we get three source time courses per voxel\n(one for each component of the dipole moment: x, y, and z), which appear\nas small vectors in the visualization (in the 2D plotters, only the\nmagnitude can be shown):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "brain = stc_vec.plot_3d(\n    clim=dict(kind=\"value\", lims=lims),\n    hemi=\"both\",\n    size=(600, 600),\n    views=[\"sagittal\"],\n    # Could do this for a 3-panel figure:\n    # view_layout='horizontal', views=['coronal', 'sagittal', 'axial'],\n    brain_kwargs=dict(silhouette=True),\n    **kwargs,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualize the activity of the maximum voxel with all three components\nWe can also visualize all three components in the peak voxel. For this, we\nwill first find the peak voxel and then plot the time courses of this voxel.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "peak_vox, _ = stc_vec.get_peak(tmin=0.08, tmax=0.1, vert_as_index=True)\n\nori_labels = [\"x\", \"y\", \"z\"]\nfig, ax = plt.subplots(1)\nfor ori, label in zip(stc_vec.data[peak_vox, :, :], ori_labels):\n    ax.plot(stc_vec.times, ori, label=f\"{label} component\")\nax.legend(loc=\"lower right\")\nax.set(\n    title=\"Activity per orientation in the peak voxel\",\n    xlabel=\"Time (s)\",\n    ylabel=\"Amplitude (a. u.)\",\n)\nmne.viz.utils.plt_show()\ndel stc_vec" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Morph the output to fsaverage\n\nWe can also use volumetric morphing to get the data to fsaverage space. This\nis, for example, necessary when comparing activity across subjects. Here, we\nwill use the scalar beamformer example.\nWe can pass a :class:`mne.SourceMorph` as the ``src`` argument to\n:meth:`mne.VolSourceEstimate.plot`.
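\nA minimal sketch of that option, using the ``morph`` object that will be\ncomputed in the cell below:\n\n```python\n# Illustrative: let the plotter apply the volumetric morph on the fly\nstc.plot(src=morph, subjects_dir=subjects_dir)\n```\n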
Below, we instead apply the morph explicitly with\n:meth:`mne.SourceMorph.apply` and plot the result in fsaverage space. To\nsave some computational load when applying the morph, we use coarse morph\nparameters (few iterations and ``zooms=7``); the ``stc`` itself was already\ncropped to a short time window when we cropped the evoked data:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fetch_fsaverage(subjects_dir)  # ensure fsaverage src exists\nfname_fs_src = subjects_dir / \"fsaverage\" / \"bem\" / \"fsaverage-vol-5-src.fif\"\n\nsrc_fs = mne.read_source_spaces(fname_fs_src)\nmorph = mne.compute_source_morph(\n    src,\n    subject_from=\"sample\",\n    src_to=src_fs,\n    subjects_dir=subjects_dir,\n    niter_sdr=[5, 5, 2],\n    niter_affine=[5, 5, 2],\n    zooms=7,  # just for speed\n    verbose=True,\n)\nstc_fs = morph.apply(stc)\ndel stc\n\nstc_fs.plot(\n    src=src_fs,\n    mode=\"stat_map\",\n    initial_time=0.085,\n    subjects_dir=subjects_dir,\n    clim=dict(kind=\"value\", pos_lims=lims),\n    verbose=True,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n\n.. footbibliography::\n\n\n.. LINKS\n\n.. _inverted: https://en.wikipedia.org/wiki/Invertible_matrix\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }