{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# EEG source localization given electrode locations on an MRI\n\nThis tutorial explains how to compute the forward operator from EEG data when the\nelectrodes are in MRI voxel coordinates.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: Eric Larson \n#\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import nibabel\nimport numpy as np\nfrom nilearn.plotting import plot_glass_brain\n\nimport mne\nfrom mne.channels import compute_native_head_t, read_custom_montage\nfrom mne.viz import plot_alignment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites\nFor this we will assume that you have:\n\n- raw EEG data\n- your subject's MRI reconstrcted using FreeSurfer\n- an appropriate boundary element model (BEM)\n- an appropriate source space (src)\n- your EEG electrodes in Freesurfer surface RAS coordinates, stored\n in one of the formats :func:`mne.channels.read_custom_montage` supports\n\nLet's set the paths to these files for the ``sample`` dataset, including\na modified ``sample`` MRI showing the electrode locations plus a ``.elc``\nfile corresponding to the points in MRI coords (these were [synthesized](https://gist.github.com/larsoner/0ac6fad57e31cb2d9caa77350a9ff366)_,\nand thus are stored as part of the ``misc`` dataset).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_path = mne.datasets.sample.data_path()\nsubjects_dir = data_path / \"subjects\"\nfname_raw = data_path / \"MEG\" / \"sample\" / \"sample_audvis_raw.fif\"\nbem_dir = subjects_dir / \"sample\" / \"bem\"\nfname_bem = bem_dir / \"sample-5120-5120-5120-bem-sol.fif\"\nfname_src = bem_dir / \"sample-oct-6-src.fif\"\n\nmisc_path = mne.datasets.misc.data_path()\nfname_T1_electrodes = misc_path / \"sample_eeg_mri\" / \"T1_electrodes.mgz\"\nfname_mon = misc_path / \"sample_eeg_mri\" / \"sample_mri_montage.elc\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualizing the MRI\nLet's take our MRI-with-eeg-locations and adjust the affine to put the data\nin MNI space, and plot using :func:`nilearn.plotting.plot_glass_brain`,\nwhich does a maximum intensity projection (easy to see the fake electrodes).\nThis plotting function requires data to be in MNI space.\nBecause ``img.affine`` gives the voxel-to-world (RAS) mapping, if we apply a\nRAS-to-MNI transform to it, it becomes the voxel-to-MNI transformation we\nneed. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Getting our MRI voxel EEG locations to head (and MRI surface RAS) coordinates\nLet's load our :class:`~mne.channels.DigMontage` using\n:func:`mne.channels.read_custom_montage`, noting that we stored our\nlocations in FreeSurfer surface RAS (MRI) coordinates.\n\n.. dropdown:: What if my electrodes are in MRI voxels?\n    :color: warning\n    :icon: question\n\n    If your electrode positions are in MRI voxel indices, you can transform\n    them to FreeSurfer surface RAS (called \"mri\" in MNE) coordinates using\n    the transformations that FreeSurfer computes during reconstruction.\n    ``nibabel`` calls this transformation ``vox2ras_tkr``; it operates in\n    millimeters, so we can load it, convert it to meters, and then apply it::\n\n        >>> pos_vox = ...  # loaded from a file somehow\n        >>> img = nibabel.load(fname_T1)\n        >>> vox2mri_t = img.header.get_vox2ras_tkr()  # voxel -> mri trans\n        >>> pos_mri = mne.transforms.apply_trans(vox2mri_t, pos_vox)\n        >>> pos_mri /= 1000.  # mm -> m\n\n    You can also verify that these are correct (or manually convert voxels\n    to MRI coordinates) by looking at the points in Freeview or tkmedit.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dig_montage = read_custom_montage(\n    fname_mon,\n    head_size=None,\n    coord_frame=\"mri\",\n    verbose=\"error\",  # because it contains a duplicate point\n)\ndig_montage.plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can then get the transformation from the MRI coordinate frame (where our\npoints are defined) to the head coordinate frame directly from the montage\nobject.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "trans = compute_native_head_t(dig_montage)\nprint(trans)  # should be mri->head, as the \"native\" space here is MRI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's apply this digitization to our dataset; in the process our locations\nare automatically converted to the head coordinate frame, as shown by\n:meth:`~mne.io.Raw.plot_sensors`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "raw = mne.io.read_raw_fif(fname_raw)\nraw.pick(picks=[\"eeg\", \"stim\"]).load_data()\nraw.set_montage(dig_montage)\nraw.plot_sensors(show_names=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can do standard sensor-space operations like making joint plots of\nevoked data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "raw.set_eeg_reference(projection=True)\nevents = mne.find_events(raw)\nepochs = mne.Epochs(raw, events)\ncov = mne.compute_covariance(epochs, tmax=0.0)\nevoked = epochs[\"1\"].average()  # trigger 1 in auditory/left\nevoked.plot_joint()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Getting a source estimate\nNow we have all of the components we need to compute a forward solution,\nbut first we should sanity-check that everything is well aligned:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = plot_alignment(\n    evoked.info,\n    trans=trans,\n    show_axes=True,\n    surfaces=\"head-dense\",\n    subject=\"sample\",\n    subjects_dir=subjects_dir,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can actually compute the forward solution:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fwd = mne.make_forward_solution(\n    evoked.info, trans=trans, src=fname_src, bem=fname_bem, verbose=True\n)" ] },
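{ "cell_type": "markdown", "metadata": {}, "source": [ "It can be useful to inspect what we just computed; the next cell is an\noptional sketch rather than a required step. With free source orientations\nthe gain matrix has shape ``(n_channels, 3 * n_sources)``, and\n:func:`mne.convert_forward_solution` (which copies by default, leaving\n``fwd`` untouched) collapses it to ``(n_channels, n_sources)`` for a\nsurface-oriented, fixed-orientation forward:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Optional inspection sketch: compare free- vs fixed-orientation gain\n# matrix shapes; convert_forward_solution copies, so ``fwd`` is unchanged\nprint(\"free orientation:\", fwd[\"sol\"][\"data\"].shape)\nfwd_fixed = mne.convert_forward_solution(\n    fwd, surf_ori=True, force_fixed=True, use_cps=True\n)\nprint(\"fixed orientation:\", fwd_fixed[\"sol\"][\"data\"].shape)" ] },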
{ "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's compute the inverse operator and apply it:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, cov, verbose=True)\nstc = mne.minimum_norm.apply_inverse(evoked, inv)\nbrain = stc.plot(subjects_dir=subjects_dir, initial_time=0.1)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }