{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Source alignment and coordinate frames\n\nThis tutorial shows how to visually assess the spatial alignment of MEG sensor\nlocations, digitized scalp landmark and sensor locations, and MRI volumes. This\nalignment process is crucial for computing the forward solution, as is\nunderstanding the different coordinate frames involved in this process.\n\nLet's start out by loading some data.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The MNE-Python contributors.\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import nibabel as nib\nimport numpy as np\nfrom scipy import linalg\n\nimport mne\nfrom mne.io.constants import FIFF\n\ndata_path = mne.datasets.sample.data_path()\nsubjects_dir = data_path / \"subjects\"\nraw_fname = data_path / \"MEG\" / \"sample\" / \"sample_audvis_raw.fif\"\ntrans_fname = data_path / \"MEG\" / \"sample\" / \"sample_audvis_raw-trans.fif\"\nraw = mne.io.read_raw_fif(raw_fname)\ntrans = mne.read_trans(trans_fname)\nsrc = mne.read_source_spaces(subjects_dir / \"sample\" / \"bem\" / \"sample-oct-6-src.fif\")\n\n# Load the T1 file and change the header information to the correct units\nt1w = nib.load(data_path / \"subjects\" / \"sample\" / \"mri\" / \"T1.mgz\")\nt1w = nib.Nifti1Image(t1w.dataobj, t1w.affine)\nt1w.header[\"xyzt_units\"] = np.array(10, dtype=\"uint8\")\nt1_mgh = nib.MGHImage(t1w.dataobj, t1w.affine)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. raw:: html\n\n \n\n.. role:: pink\n.. role:: blue\n.. role:: gray\n.. role:: magenta\n.. role:: purple\n.. role:: green\n.. role:: red\n\n\n## Understanding coordinate frames\nFor M/EEG source imaging, there are three **coordinate frames** must be\nbrought into alignment using two 3D [transformation matrices](wiki_xform_)\nthat define how to rotate and translate points in one coordinate frame\nto their equivalent locations in another. The three main coordinate frames\nare:\n\n* :blue:`\"meg\"`: the coordinate frame for the physical locations of MEG sensors\n* :gray:`\"mri\"`: the coordinate frame for MRI images, and scalp/skull/brain\n surfaces derived from the MRI images\n* :pink:`\"head\"`: the coordinate frame for digitized sensor locations and\n scalp landmarks (\"fiducials\")\n\n\nEach of these are described in more detail in the next section.\n\nA good way to start visualizing these coordinate frames is to use the\n`mne.viz.plot_alignment` function, which is used for creating or inspecting\nthe transformations that bring these coordinate frames into alignment, and\ndisplaying the resulting alignment of EEG sensors, MEG sensors, brain\nsources, and conductor models. If you provide ``subjects_dir`` and\n``subject`` parameters, the function automatically loads the subject's\nFreesurfer MRI surfaces. Important for our purposes, passing\n``show_axes=True`` to `~mne.viz.plot_alignment` will draw the origin of each\ncoordinate frame in a different color, with axes indicated by different sized\narrows:\n\n* shortest arrow: (**R**)ight / X\n* medium arrow: forward / (**A**)nterior / Y\n* longest arrow: up / (**S**)uperior / Z\n\nNote that all three coordinate systems are **RAS** coordinate frames and\nhence are also `right-handed`_ coordinate systems. 
Finally, note that the\n``coord_frame`` parameter sets which coordinate frame the camera\nshould initially be aligned with. Let's have a look:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = mne.viz.plot_alignment(\n    raw.info,\n    trans=trans,\n    subject=\"sample\",\n    subjects_dir=subjects_dir,\n    surfaces=\"head-dense\",\n    show_axes=True,\n    dig=True,\n    eeg=[],\n    meg=\"sensors\",\n    coord_frame=\"meg\",\n    mri_fiducials=\"estimated\",\n)\nmne.viz.set_3d_view(fig, 45, 90, distance=0.6, focalpoint=(0.0, 0.0, 0.0))\nprint(\n    \"Distance from head origin to MEG origin: \"\n    f\"{1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3]):.1f} mm\"\n)\nprint(\n    \"Distance from head origin to MRI origin: \"\n    f\"{1000 * np.linalg.norm(trans['trans'][:3, 3]):.1f} mm\"\n)\ndists = mne.dig_mri_distances(raw.info, trans, \"sample\", subjects_dir=subjects_dir)\nprint(\n    f\"Distance from {len(dists)} digitized points to head surface: \"\n    f\"{1000 * np.mean(dists):0.1f} mm\"\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Coordinate frame definitions\n1. Neuromag/Elekta/MEGIN head coordinate frame (\"head\", :pink:`pink axes`)\n   The head coordinate frame is defined through the coordinates of\n   anatomical landmarks on the subject's head: usually the Nasion (`NAS`_),\n   and the left and right preauricular points (`LPA`_ and `RPA`_).\n   Different MEG manufacturers may have different definitions of the head\n   coordinate frame. A good overview can be seen in the\n   `FieldTrip FAQ on coordinate systems`_.\n\n   For Neuromag/Elekta/MEGIN, the head coordinate frame is defined by the\n   intersection of\n\n   1. the line between the LPA (:red:`red sphere`) and RPA\n      (:purple:`purple sphere`), and\n   2. the line perpendicular to this LPA-RPA line that goes through\n      the Nasion (:green:`green sphere`).\n\n   The axes are oriented as **X** origin\u2192RPA, **Y** origin\u2192NAS,\n   **Z** origin\u2192upward (orthogonal to X and Y).\n\n   .. note:: The required 3D coordinates for defining the head coordinate\n             frame (NAS, LPA, RPA) are measured at a stage separate from\n             the MEG data recording. There exist numerous devices to\n             perform such measurements, usually called \"digitizers\". For\n             example, see the devices by the company `Polhemus`_.\n\n2. MEG device coordinate frame (\"meg\", :blue:`blue axes`)\n   The MEG device coordinate frame is defined by the respective MEG\n   manufacturers. All MEG data is acquired with respect to this coordinate\n   frame. To account for the anatomy and position of the subject's head, we\n   use so-called head position indicator (HPI) coils. The HPI coils are\n   placed at known locations on the scalp of the subject and emit\n   high-frequency magnetic fields used to coregister the head coordinate\n   frame with the device coordinate frame.\n\n   From the Neuromag/Elekta/MEGIN user manual:\n\n      The origin of the device coordinate system is located at the center\n      of the posterior spherical section of the helmet with X axis going\n      from left to right and Y axis pointing front. The Z axis is, again,\n      normal to the plane, with positive direction up.\n\n   .. note:: The HPI coils are shown as :magenta:`magenta spheres`.\n             Coregistration happens at the beginning of the recording and\n             the head\u2194meg transformation matrix is stored in\n             ``raw.info['dev_head_t']``.\n\n3. 
MRI coordinate frame (\"mri\", :gray:`gray axes`)\n   Defined by Freesurfer, the \"MRI surface RAS\" coordinate frame has its\n   origin at the center of a 256\u00d7256\u00d7256 1 mm isotropic volume (though the\n   center may not correspond to the anatomical center of the subject's\n   head).\n\n   .. note:: We typically align the MRI coordinate frame to the head\n             coordinate frame through a [rotation and translation matrix](wiki_xform_),\n             which we refer to in MNE as ``trans``.\n\n### A bad example\nLet's try using `~mne.viz.plot_alignment` by making ``trans`` the identity\ntransform. This (incorrectly!) equates the MRI and head coordinate frames.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "identity_trans = mne.transforms.Transform(\"head\", \"mri\")\nmne.viz.plot_alignment(\n    raw.info,\n    trans=identity_trans,\n    subject=\"sample\",\n    src=src,\n    subjects_dir=subjects_dir,\n    dig=True,\n    surfaces=[\"head-dense\", \"white\"],\n    coord_frame=\"meg\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### A good example\nHere is the same plot, this time with the ``trans`` properly defined\n(using a precomputed transformation matrix).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "mne.viz.plot_alignment(\n    raw.info,\n    trans=trans,\n    subject=\"sample\",\n    src=src,\n    subjects_dir=subjects_dir,\n    dig=True,\n    surfaces=[\"head-dense\", \"white\"],\n    coord_frame=\"meg\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualizing the transformations\nLet's visualize these coordinate frames using just the scalp surface; this\nwill make it easier to see their relative orientations. To do this we'll\nfirst load the Freesurfer scalp surface, then apply a few different\ntransforms to it. In addition to the three coordinate frames discussed above,\nwe'll also show the \"mri_voxel\" coordinate frame. Unlike MRI Surface RAS,\n\"mri_voxel\" has its origin in the corner of the volume (the left-most,\nposterior-most coordinate on the inferior-most MRI slice) instead of at the\ncenter of the volume. \"mri_voxel\" is also **not** an RAS coordinate system:\nrather, its XYZ directions are based on the acquisition order of the T1 image\nslices.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# The head surface is stored in \"mri\" coordinate frame\n# (origin at center of volume, units=mm)\nseghead_rr, seghead_tri = mne.read_surface(\n    subjects_dir / \"sample\" / \"surf\" / \"lh.seghead\"\n)\n\n# To put the scalp in the \"head\" coordinate frame, we apply the inverse of\n# the precomputed `trans` (which maps head \u2192 mri)\nmri_to_head = linalg.inv(trans[\"trans\"])\nscalp_pts_in_head_coord = mne.transforms.apply_trans(mri_to_head, seghead_rr, move=True)\n\n# To put the scalp in the \"meg\" coordinate frame, we use the inverse of\n# raw.info['dev_head_t']\nhead_to_meg = linalg.inv(raw.info[\"dev_head_t\"][\"trans\"])\nscalp_pts_in_meg_coord = mne.transforms.apply_trans(\n    head_to_meg, scalp_pts_in_head_coord, move=True\n)\n\n# The \"mri_voxel\"\u2192\"mri\" transform is embedded in the header of the T1 image\n# file. 
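Specifically, we use the \"tkr\" (tkregister) variant of the vox\u2192RAS\n# transform, which matches the surface RAS convention that the scalp surface\n# loaded above is defined in. 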
We'll invert it and then apply it to the original `seghead_rr` points.\n# No unit conversion necessary: this transform expects mm and the scalp surface\n# is defined in mm.\nvox_to_mri = t1_mgh.header.get_vox2ras_tkr()\nmri_to_vox = linalg.inv(vox_to_mri)\nscalp_points_in_vox = mne.transforms.apply_trans(mri_to_vox, seghead_rr, move=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've transformed all the points, let's plot them. We'll use the\nsame colors used by `~mne.viz.plot_alignment` and use :green:`green` for the\n\"mri_voxel\" coordinate frame:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def add_head(renderer, points, color, opacity=0.95):\n renderer.mesh(*points.T, triangles=seghead_tri, color=color, opacity=opacity)\n\n\nrenderer = mne.viz.backends.renderer.create_3d_figure(\n size=(600, 600), bgcolor=\"w\", scene=False\n)\nadd_head(renderer, seghead_rr, \"gray\")\nadd_head(renderer, scalp_pts_in_meg_coord, \"blue\")\nadd_head(renderer, scalp_pts_in_head_coord, \"pink\")\nadd_head(renderer, scalp_points_in_vox, \"green\")\nmne.viz.set_3d_view(\n figure=renderer.figure,\n distance=800,\n focalpoint=(0.0, 30.0, 30.0),\n elevation=105,\n azimuth=180,\n)\nrenderer.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The relative orientations of the coordinate frames can be inferred by\nobserving the direction of the subject's nose. Notice also how the origin of\nthe :green:`mri_voxel` coordinate frame is in the corner of the volume\n(above, behind, and to the left of the subject), whereas the other three\ncoordinate frames have their origin roughly in the center of the head.\n\n### Example: MRI defacing\nFor a real-world example of using these transforms, consider the task of\ndefacing the MRI to preserve subject anonymity. If you know the points in\nthe \"head\" coordinate frame (as you might if you're basing the defacing on\ndigitized points) you would need to transform them into \"mri\" or \"mri_voxel\"\nin order to apply the blurring or smoothing operations to the MRI surfaces or\nimages. 
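\n\nAs an aside, such a chain of transforms can be collapsed into a single 4\u00d74\nmatrix. A minimal sketch (the ``head_to_vox`` and ``scale_m_to_mm`` names are\nours, not part of MNE; the meters\u2192millimeters unit change is folded in as a\nscaling matrix):\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# A sketch (not part of the original tutorial): compose head\u2192mri (meters),\n# a meters\u2192millimeters scaling, and mri\u2192voxel into one 4x4 matrix, so that\n# digitized points in \"head\" coordinates map straight to voxel indices\nscale_m_to_mm = np.diag([1e3, 1e3, 1e3, 1.0])\nhead_to_vox = mri_to_vox @ scale_m_to_mm @ trans[\"trans\"]\n# Applying head_to_vox to a \"head\"-frame point reproduces the step-by-step\n# result computed in the next cell" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "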
Here's what that would look like (we'll use the nasion landmark as a\nrepresentative example):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Get the nasion:\nnasion = [\n p\n for p in raw.info[\"dig\"]\n if p[\"kind\"] == FIFF.FIFFV_POINT_CARDINAL and p[\"ident\"] == FIFF.FIFFV_POINT_NASION\n][0]\nassert nasion[\"coord_frame\"] == FIFF.FIFFV_COORD_HEAD\nnasion = nasion[\"r\"] # get just the XYZ values\n\n# Transform it from head to MRI space (recall that `trans` is head \u2192 mri)\nnasion_mri = mne.transforms.apply_trans(trans, nasion, move=True)\n# Then transform to voxel space, after converting from meters to millimeters\nnasion_vox = mne.transforms.apply_trans(mri_to_vox, nasion_mri * 1e3, move=True)\n# Plot it to make sure the transforms worked\nrenderer = mne.viz.backends.renderer.create_3d_figure(\n size=(400, 400), bgcolor=\"w\", scene=False\n)\nadd_head(renderer, scalp_points_in_vox, \"green\", opacity=1)\nrenderer.sphere(center=nasion_vox, color=\"orange\", scale=10)\nmne.viz.set_3d_view(\n figure=renderer.figure,\n distance=600.0,\n focalpoint=(0.0, 125.0, 250.0),\n elevation=45,\n azimuth=180,\n)\nrenderer.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n## Defining the head\u2194MRI ``trans`` using the GUI\nYou can try creating the head\u2194MRI transform yourself using\n:func:`mne.gui.coregistration`.\n\n* To set the MRI fiducials, make sure ``Lock Fiducials`` is toggled off.\n* Set the landmarks by clicking the radio button (LPA, Nasion, RPA) and then\n clicking the corresponding point in the image.\n\n

.. note:: The position of each fiducial used is the center of the octahedron icon.

\n\n* After doing this for all the landmarks, toggle the ``Lock Fiducials`` radio\n  button and optionally press ``Save MRI Fid.``, which will save the fiducials\n  to a default location in the ``bem`` folder of the Freesurfer subject\n  directory.\n* Then you can load the digitization data from the raw file\n  (``Path to info``).\n* Click ``Fit ICP``. This will align the digitization points to the\n  head surface. Sometimes the fitting algorithm doesn't find the correct\n  alignment immediately. You can first fit using only the fiducials and then\n  refine the alignment using all of the digitized points. You can also\n  fine-tune manually with the controls on the right side of the panel.\n* Click ``Save`` (lower right corner of the panel), set the filename, and\n  later read the resulting file with :func:`mne.read_trans`.\n\nFor more information, see this video:\n\n.. youtube:: ALV5qqMHLlQ\n\n

.. note:: Coregistration can also be automated as shown in `tut-auto-coreg`.
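\n\nA minimal sketch of that automated approach, using :class:`mne.coreg.Coregistration` (the parameter values below are illustrative; see `tut-auto-coreg` for the full treatment):\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# A sketch: fit the fiducials first, then refine with ICP on the digitized\n# points; `coreg.trans` then holds the resulting head\u2192MRI transform\ncoreg = mne.coreg.Coregistration(raw.info, \"sample\", subjects_dir=subjects_dir)\ncoreg.fit_fiducials()\ncoreg.fit_icp(n_iterations=20)\nprint(coreg.trans)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "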

\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "mne.gui.coregistration(subject=\"sample\", subjects_dir=subjects_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n## Alignment without MRI\nThe surface alignments above are possible if you have the surfaces available\nfrom Freesurfer. :func:`mne.viz.plot_alignment` automatically searches for\nthe correct surfaces from the provided ``subjects_dir``. Another option is\nto use a `spherical conductor model `. It is\npassed through ``bem`` parameter.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "sphere = mne.make_sphere_model(info=raw.info, r0=\"auto\", head_radius=\"auto\")\nsrc = mne.setup_volume_source_space(sphere=sphere, pos=10.0)\nmne.viz.plot_alignment(\n raw.info,\n trans=trans,\n eeg=\"projected\",\n bem=sphere,\n src=src,\n dig=True,\n surfaces=[\"brain\", \"inner_skull\", \"outer_skull\", \"outer_skin\"],\n coord_frame=\"meg\",\n show_axes=True,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is also possible to use :func:`mne.gui.coregistration`\nto warp a subject (usually ``fsaverage``) to subject digitization data, see\n[these slides](https://www.slideshare.net/mne-python/mnepython-scale-mri).\n\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }