{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# How MNE uses FreeSurfer's outputs\n\nThis tutorial explains how MRI coordinate frames are handled in MNE-Python,\nand how MNE-Python integrates with FreeSurfer for handling MRI data and source\nspace data in general.\n\nAs usual we'll start by importing the necessary packages; for this tutorial\nthat includes :mod:`nibabel` to handle loading the MRI images (MNE-Python also\nuses :mod:`nibabel` under the hood). We'll also use a special function from\n:mod:`matplotlib.patheffects` for adding outlines to text, so that text is\nreadable on top of an MRI image.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The MNE-Python contributors.\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.patheffects as path_effects\nimport matplotlib.pyplot as plt\nimport nibabel\nimport numpy as np\n\nimport mne\nfrom mne.io.constants import FIFF\nfrom mne.transforms import apply_trans" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MRI coordinate frames\n\nLet's start out by looking at the ``sample`` subject MRI. Following standard\nFreeSurfer convention, we look at :file:`T1.mgz`, which gets created from the\noriginal MRI :file:`sample/mri/orig/001.mgz` when you run the FreeSurfer\ncommand [recon-all](https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all).\nHere we use :mod:`nibabel` to load the T1 image, and the resulting object's\n:meth:`~nibabel.spatialimages.SpatialImage.orthoview` method to view it.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_path = mne.datasets.sample.data_path()\nsubjects_dir = data_path / \"subjects\"\nsubject = \"sample\"\nt1_fname = subjects_dir / subject / \"mri\" / \"T1.mgz\"\nt1 = nibabel.load(t1_fname)\nt1.orthoview()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the axes in the\n:meth:`~nibabel.spatialimages.SpatialImage.orthoview` figure are labeled\nL-R, S-I, and P-A. These reflect the standard RAS (right-anterior-superior)\ncoordinate system that is widely used in MRI imaging. If you are unfamiliar\nwith RAS coordinates, see the excellent nibabel tutorial\n:doc:`nibabel:coordinate_systems`.\n\nNibabel already takes care of some coordinate frame transformations under the\nhood, but let's do it manually here so we understand what is happening. First\nlet's get our data as a 3D array and note that it's already a standard size:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data = np.asarray(t1.dataobj)\nprint(data.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These data are voxel intensity values. Here they are unsigned integers in the\nrange 0-255, though in general they can be floating point values. A value\n``data[i, j, k]`` at a given index triplet ``(i, j, k)`` corresponds to some\nreal-world physical location ``(x, y, z)`` in space. 
To get its physical\nlocation, first we have to choose what coordinate frame we're going to use.\n\nFor example, we could choose a geographical coordinate\nframe, with its origin at the center of the earth, Z axis through the north\npole, X axis through the prime meridian (zero degrees longitude), and Y axis\northogonal to these forming a right-handed coordinate system. This would not\nbe a very useful choice for defining the physical locations of the voxels\nduring the MRI acquisition for analysis, but you could nonetheless figure out\nthe transformation that relates ``(i, j, k)`` to this coordinate frame.\n\nInstead, each scanner defines a more practical, native coordinate system that\nit uses during acquisition, usually related to the physical orientation of\nthe scanner itself and/or the subject within it. During acquisition the\nrelationship between the voxel indices ``(i, j, k)`` and the physical\nlocation ``(x, y, z)`` in the *scanner's native coordinate frame* is saved in\nthe image's *affine transformation*.\n\n.. admonition:: Under the hood\n :class: sidebar note\n\n ``mne.transforms.apply_trans`` effectively does a matrix multiplication\n (i.e., :func:`numpy.dot`), with a little extra work to handle the shape\n mismatch (the affine has shape ``(4, 4)`` because it includes a\n *translation*, which is applied separately).\n\nWe can use :mod:`nibabel` to examine this transformation, keeping in mind\nthat it processes everything in units of millimeters, unlike MNE where things\nare always in SI units (meters).\n\nThis allows us to take an arbitrary voxel or slice of data and know where it\nis in the scanner's native physical space ``(x, y, z)`` (in mm) by applying\nthe affine transformation to the voxel coordinates.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(t1.affine)\nvox = np.array([122, 119, 102])\nxyz_ras = apply_trans(t1.affine, vox)\nprint(\n    \"Our voxel has real-world coordinates {}, {}, {} (mm)\".format(*np.round(xyz_ras, 3))\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you have a point ``(x, y, z)`` in scanner-native RAS space and you want\nthe corresponding voxel number, you can get it using the inverse of the\naffine. 
This involves some rounding, so it's possible to end up off by one\nvoxel if you're not careful:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ras_coords_mm = np.array([1, -17, -18])\ninv_affine = np.linalg.inv(t1.affine)\ni_, j_, k_ = np.round(apply_trans(inv_affine, ras_coords_mm)).astype(int)\nprint(f\"Our real-world coordinates correspond to voxel ({i_}, {j_}, {k_})\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's write a short function to visualize where our voxel lies in an\nimage, and annotate it in RAS space (rounded to the nearest millimeter):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def imshow_mri(data, img, vox, xyz, suptitle):\n    \"\"\"Show an MRI slice with a voxel annotated.\"\"\"\n    i, j, k = vox\n    fig, ax = plt.subplots(1, figsize=(6, 6), layout=\"constrained\")\n    codes = nibabel.orientations.aff2axcodes(img.affine)\n    # Figure out the title based on the code of this axis\n    ori_slice = dict(\n        P=\"Coronal\", A=\"Coronal\", I=\"Axial\", S=\"Axial\", L=\"Sagittal\", R=\"Sagittal\"\n    )\n    ori_names = dict(\n        P=\"posterior\", A=\"anterior\", I=\"inferior\", S=\"superior\", L=\"left\", R=\"right\"\n    )\n    title = ori_slice[codes[0]]\n    ax.imshow(data[i], vmin=10, vmax=120, cmap=\"gray\", origin=\"lower\")\n    ax.axvline(k, color=\"y\")\n    ax.axhline(j, color=\"y\")\n    for kind, coords in xyz.items():\n        annotation = \"{}: {}, {}, {} mm\".format(kind, *np.round(coords).astype(int))\n        text = ax.text(k, j, annotation, va=\"baseline\", ha=\"right\", color=(1, 1, 0.7))\n        text.set_path_effects(\n            [\n                path_effects.Stroke(linewidth=2, foreground=\"black\"),\n                path_effects.Normal(),\n            ]\n        )\n    # reorient view so that RAS is always rightward and upward\n    x_order = -1 if codes[2] in \"LIP\" else 1\n    y_order = -1 if codes[1] in \"LIP\" else 1\n    ax.set(\n        xlim=[0, data.shape[2] - 1][::x_order],\n        ylim=[0, data.shape[1] - 1][::y_order],\n        xlabel=f\"k ({ori_names[codes[2]]}+)\",\n        ylabel=f\"j ({ori_names[codes[1]]}+)\",\n        title=f\"{title} view: i={i} ({ori_names[codes[0]]}+)\",\n    )\n    fig.suptitle(suptitle)\n    return fig\n\n\nimshow_mri(data, t1, vox, {\"Scanner RAS\": xyz_ras}, \"MRI slice\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the axis scales (``i``, ``j``, and ``k``) are still in voxels\n(ranging from 0-255); it's only the annotation text that we've translated\ninto real-world RAS in millimeters.\n\n\n### \"MRI coordinates\" in MNE-Python: FreeSurfer surface RAS\n\nWhile :mod:`nibabel` uses **scanner RAS** ``(x, y, z)`` coordinates,\nFreeSurfer uses a slightly different coordinate frame: **MRI surface RAS**.\nThe transform from voxels to the FreeSurfer MRI surface RAS coordinate frame\nis known in the [FreeSurfer documentation](https://surfer.nmr.mgh.harvard.edu/fswiki/CoordinateSystems) as ``Torig``,\nand in nibabel as\n:meth:`~nibabel.freesurfer.mghformat.MGHHeader.get_vox2ras_tkr`. This\ntransformation sets the center of its coordinate frame in the middle of the\nconformed volume dimensions (``N / 2.``) with the axes oriented along the\naxes of the volume itself. For more information, see\n`coordinate_systems`.\n\n
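For a typical conformed volume this transform has a simple, fixed form. The\nfollowing sketch is an illustration only; it assumes the standard conformed\ngeometry (256 voxels per axis at 1 mm resolution) and shows the matrix you\nshould expect in that case. The actual ``Torig`` for this subject is printed\nfrom the MGH header a few cells further down.\n\n```python\nimport numpy as np\n\n# Assumed conformed geometry: 256**3 voxels, 1 mm isotropic. The origin sits\n# at the volume center (N / 2) and the axes are aligned with the volume axes.\nN = 256\nTorig_conformed = np.array(\n    [\n        [-1.0, 0.0, 0.0, N / 2.0],\n        [0.0, 0.0, 1.0, -N / 2.0],\n        [0.0, -1.0, 0.0, N / 2.0],\n        [0.0, 0.0, 0.0, 1.0],\n    ]\n)\n```\n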

.. note::\n    In general, you should assume that the MRI coordinate system for\n    a given subject is specific to that subject, i.e., it is not the\n    same MRI coordinate system that is used for any other FreeSurfer\n    subject. Even though during processing FreeSurfer will align each\n    subject's MRI to ``fsaverage`` to do reconstruction, all data\n    (surfaces, MRIs, etc.) get stored in the coordinate frame specific\n    to that subject. This is why it's important for group analyses to\n    transform data to a common coordinate frame, for example by\n    `surface ` or `volumetric ` morphing, or even by just applying\n    `mni-affine-transformation` to points.
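\n\nBecause the affine maps voxels to scanner RAS and ``Torig`` maps the same\nvoxels to surface RAS, composing the two lets you go from scanner RAS\ndirectly to FreeSurfer surface RAS. A minimal sketch (an illustration only;\nit assumes the ``t1`` image and the ``xyz_ras`` point from earlier, and\nobtains ``Torig`` the same way the next code cell does):\n\n```python\n# scanner RAS -> voxels -> FreeSurfer surface RAS, all in millimeters\nTorig = t1.header.get_vox2ras_tkr()\nras_to_mri = Torig @ np.linalg.inv(t1.affine)\nxyz_mri_from_ras = apply_trans(ras_to_mri, xyz_ras)\n```\n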
\n\nSince MNE-Python uses FreeSurfer extensively for surface computations (e.g.,\nwhite matter, inner/outer skull meshes), internally MNE-Python uses the\nFreeSurfer surface RAS coordinate system (not the :mod:`nibabel` scanner RAS\nsystem) for as many computations as possible, such as all source space\nand BEM mesh vertex definitions.\n\nWhenever you see \"MRI coordinates\" or \"MRI coords\" in MNE-Python's\ndocumentation, you should assume that we are talking about the\n\"FreeSurfer MRI surface RAS\" coordinate frame!\n\nWe can do similar computations as before to convert the given voxel indices\ninto FreeSurfer MRI coordinates (i.e., what we call \"MRI coordinates\" or\n\"surface RAS\" everywhere else in MNE), just like we did above to convert\nvoxel indices to *scanner* RAS:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "Torig = t1.header.get_vox2ras_tkr()\nprint(t1.affine)\nprint(Torig)\nxyz_mri = apply_trans(Torig, vox)\nimshow_mri(data, t1, vox, dict(MRI=xyz_mri), \"MRI slice\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Knowing these relationships and being mindful about transformations, we\ncan get from a point in any given space to any other space. Let's start out\nby plotting the Nasion on a sagittal MRI slice:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fiducials = mne.coreg.get_mni_fiducials(subject, subjects_dir=subjects_dir)\nnasion_mri = [d for d in fiducials if d[\"ident\"] == FIFF.FIFFV_POINT_NASION][0]\nprint(nasion_mri) # note it's in FreeSurfer MRI coords" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we print the nasion, it displays as a ``DigPoint`` and shows its\ncoordinates in millimeters, but beware that the underlying data is\n`actually stored in meters `,\nso before transforming and plotting we'll convert to millimeters:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "nasion_mri = nasion_mri[\"r\"] * 1000 # meters \u2192 millimeters\nnasion_vox = np.round(apply_trans(np.linalg.inv(Torig), nasion_mri)).astype(int)\nimshow_mri(\n data, t1, nasion_vox, dict(MRI=nasion_mri), \"Nasion estimated from MRI transform\"\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also take the digitization point from the MEG data, which is in the\n\"head\" coordinate frame.\n\nLet's look at the nasion in the head coordinate frame:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "info = mne.io.read_info(data_path / \"MEG\" / \"sample\" / \"sample_audvis_raw.fif\")\nnasion_head = [\n d\n for d in info[\"dig\"]\n if d[\"kind\"] == FIFF.FIFFV_POINT_CARDINAL and d[\"ident\"] == FIFF.FIFFV_POINT_NASION\n][0]\nprint(nasion_head) # note it's in \"head\" coordinates" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. admonition:: Head coordinate frame\n :class: sidebar note\n\n The head coordinate frame in MNE is the \"Neuromag\" head coordinate\n frame. The origin is given by the intersection between a line connecting\n the LPA and RPA and the line orthogonal to it that runs through the\n nasion. It is also in RAS orientation, meaning that +X runs through\n the RPA, +Y goes through the nasion, and +Z is orthogonal to these\n pointing upward. 
See `coordinate_systems` for more information.\n\nNotice that in \"head\" coordinate frame the nasion has values of 0 for the\n``x`` and ``z`` directions (which makes sense given that the nasion is used\nto define the ``y`` axis in that system).\nTo convert from head coordinate frame to voxels, we first apply the head \u2192\nMRI (surface RAS) transform\nfrom a :file:`trans` file (typically created with the MNE-Python\ncoregistration GUI), then convert meters \u2192 millimeters, and finally apply the\ninverse of ``Torig`` to get to voxels.\n\nUnder the hood, functions like :func:`mne.setup_source_space`,\n:func:`mne.setup_volume_source_space`, and :func:`mne.compute_source_morph`\nmake extensive use of these coordinate frames.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "trans = mne.read_trans(data_path / \"MEG\" / \"sample\" / \"sample_audvis_raw-trans.fif\")\n\n# first we transform from head to MRI, and *then* convert to millimeters\nnasion_dig_mri = apply_trans(trans, nasion_head[\"r\"]) * 1000\n\n# ...then we can use Torig to convert MRI to voxels:\nnasion_dig_vox = np.round(apply_trans(np.linalg.inv(Torig), nasion_dig_mri)).astype(int)\nimshow_mri(\n data,\n t1,\n nasion_dig_vox,\n dict(MRI=nasion_dig_mri),\n \"Nasion transformed from digitization\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using FreeSurfer's surface reconstructions\nAn important part of what FreeSurfer does is provide cortical surface\nreconstructions. For example, let's load and view the ``white`` surface\nof the brain. This is a 3D mesh defined by a set of vertices (conventionally\ncalled ``rr``) with shape ``(n_vertices, 3)`` and a set of triangles\n(``tris``) with shape ``(n_tris, 3)`` defining which vertices in ``rr`` form\neach triangular facet of the mesh.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fname = subjects_dir / subject / \"surf\" / \"rh.white\"\nrr_mm, tris = mne.read_surface(fname)\nprint(f\"rr_mm.shape == {rr_mm.shape}\")\nprint(f\"tris.shape == {tris.shape}\")\nprint(f\"rr_mm.max() = {rr_mm.max()}\") # just to show that we are in mm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's actually plot it:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "renderer = mne.viz.backends.renderer.create_3d_figure(\n size=(600, 600), bgcolor=\"w\", scene=False\n)\ngray = (0.5, 0.5, 0.5)\nrenderer.mesh(*rr_mm.T, triangles=tris, color=gray)\nview_kwargs = dict(elevation=90, azimuth=0) # camera at +X with +Z up\nmne.viz.set_3d_view(\n figure=renderer.figure, distance=350, focalpoint=(0.0, 0.0, 40.0), **view_kwargs\n)\nrenderer.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also plot the mesh on top of an MRI slice. 
The mesh surfaces are\ndefined in millimeters in the MRI (FreeSurfer surface RAS) coordinate frame,\nso we can convert them to voxels by applying the inverse of the ``Torig``\ntransform:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "rr_vox = apply_trans(np.linalg.inv(Torig), rr_mm)\nfig = imshow_mri(data, t1, vox, {\"Scanner RAS\": xyz_ras}, \"MRI slice\")\n\n# Based on how imshow_mri works, the \"X\" here is the last dim of the MRI vol,\n# the \"Y\" is the middle dim, and the \"Z\" is the first dim, so now that our\n# points are in the correct coordinate frame, we need to ask matplotlib to\n# do a tricontour slice like:\nfig.axes[0].tricontour(\n rr_vox[:, 2],\n rr_vox[:, 1],\n tris,\n rr_vox[:, 0],\n levels=[vox[0]],\n colors=\"r\",\n linewidths=1.0,\n zorder=1,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is the method used by :func:`mne.viz.plot_bem` to show the BEM surfaces.\n\n### Cortical alignment (spherical)\nA critical function provided by FreeSurfer is spherical surface alignment\nof cortical surfaces, maximizing sulcal-gyral alignment. FreeSurfer first\nexpands the cortical surface to a sphere, then aligns it optimally with\nfsaverage. Because the vertex ordering is preserved when expanding to a\nsphere, a given vertex in the source (sample) mesh can be mapped easily\nto the same location in the destination (fsaverage) mesh, and vice-versa.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "renderer_kwargs = dict(bgcolor=\"w\")\nrenderer = mne.viz.backends.renderer.create_3d_figure(\n size=(800, 400), scene=False, **renderer_kwargs\n)\ncurvs = [\n (\n mne.surface.read_curvature(\n subjects_dir / subj / \"surf\" / \"rh.curv\", binary=False\n )\n > 0\n ).astype(float)\n for subj in (\"sample\", \"fsaverage\")\n for _ in range(2)\n]\nfnames = [\n subjects_dir / subj / \"surf\" / surf\n for subj in (\"sample\", \"fsaverage\")\n for surf in (\"rh.white\", \"rh.sphere\")\n]\ny_shifts = [-450, -150, 450, 150]\nz_shifts = [-40, 0, -30, 0]\nfor name, y_shift, z_shift, curv in zip(fnames, y_shifts, z_shifts, curvs):\n this_rr, this_tri = mne.read_surface(name)\n this_rr += [0, y_shift, z_shift]\n renderer.mesh(\n *this_rr.T,\n triangles=this_tri,\n color=None,\n scalars=curv,\n colormap=\"copper_r\",\n vmin=-0.2,\n vmax=1.2,\n )\nzero = [0.0, 0.0, 0.0]\nwidth = 50.0\ny = np.sort(y_shifts)\ny = (y[1:] + y[:-1]) / 2.0 - width / 2.0\nrenderer.quiver3d(zero, y, zero, zero, [1] * 3, zero, \"k\", width, \"arrow\")\nview_kwargs[\"focalpoint\"] = (0.0, 0.0, 0.0)\nmne.viz.set_3d_view(figure=renderer.figure, distance=1050, **view_kwargs)\nrenderer.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look a bit more closely at the spherical alignment by overlaying the\ntwo spherical meshes as wireframes and zooming way in (the vertices of the\nblack mesh are separated by about 1 mm):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cyan = \"#66CCEE\"\nblack = \"k\"\nrenderer = mne.viz.backends.renderer.create_3d_figure(\n size=(800, 800), scene=False, **renderer_kwargs\n)\nsurfs = [\n mne.read_surface(subjects_dir / subj / \"surf\" / \"rh.sphere\")\n for subj in (\"fsaverage\", \"sample\")\n]\ncolors = [black, cyan]\nline_widths = [2, 3]\nfor surf, color, line_width in zip(surfs, colors, line_widths):\n this_rr, this_tri = surf\n # cull 
to the subset of tris with all positive X (toward camera)\n this_tri = this_tri[(this_rr[this_tri, 0] > 0).all(axis=1)]\n renderer.mesh(\n *this_rr.T,\n triangles=this_tri,\n color=color,\n representation=\"wireframe\",\n line_width=line_width,\n render_lines_as_tubes=True,\n )\nmne.viz.set_3d_view(figure=renderer.figure, distance=150, **view_kwargs)\nrenderer.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that the fsaverage (black) mesh is uniformly spaced, and the\nmesh for subject \"sample\" (in cyan) has been deformed along the spherical\nsurface by\nFreeSurfer. This deformation is designed to optimize the sulcal-gyral\nalignment.\n\n### Surface decimation\nThese surfaces have a lot of vertices, and in general we only need to use\na subset of these vertices for creating source spaces. A uniform sampling can\neasily be achieved by subsampling in the spherical space. To do this, we\nuse a recursively subdivided icosahedron or octahedron. For example, let's\nload a standard oct-6 source space, and at the same zoom level as before\nvisualize how it subsampled (in red) the dense mesh:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "src = mne.read_source_spaces(subjects_dir / \"sample\" / \"bem\" / \"sample-oct-6-src.fif\")\nprint(src)\n\nred = \"#EE6677\"\nrenderer = mne.viz.backends.renderer.create_3d_figure(\n size=(800, 800), scene=False, **renderer_kwargs\n)\nrr_sph, _ = mne.read_surface(fnames[1])\nfor tris, color in [(src[1][\"tris\"], cyan), (src[1][\"use_tris\"], red)]:\n # cull to the subset of tris with all positive X (toward camera)\n tris = tris[(rr_sph[tris, 0] > 0).all(axis=1)]\n renderer.mesh(\n *rr_sph.T,\n triangles=tris,\n color=color,\n representation=\"wireframe\",\n line_width=3,\n render_lines_as_tubes=True,\n )\nmne.viz.set_3d_view(figure=renderer.figure, distance=150, **view_kwargs)\nrenderer.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also then look at how these two meshes compare by plotting the\noriginal, high-density mesh as well as our decimated mesh white surfaces.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "renderer = mne.viz.backends.renderer.create_3d_figure(\n size=(800, 400), scene=False, **renderer_kwargs\n)\ny_shifts = [-125, 125]\ntris = [src[1][\"tris\"], src[1][\"use_tris\"]]\nfor y_shift, tris in zip(y_shifts, tris):\n this_rr = src[1][\"rr\"] * 1000.0 + [0, y_shift, -40]\n renderer.mesh(\n *this_rr.T,\n triangles=tris,\n color=None,\n scalars=curvs[0],\n colormap=\"copper_r\",\n vmin=-0.2,\n vmax=1.2,\n )\nrenderer.quiver3d([0], [-width / 2.0], [0], [0], [1], [0], \"k\", width, \"arrow\")\nmne.viz.set_3d_view(figure=renderer.figure, distance=450, **view_kwargs)\nrenderer.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

.. warning::\n    Some source space vertices can be removed during forward computation.\n    See `tut-forward` for more information.
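\n\nIf you want to check which vertices of the dense mesh a decimated source\nspace actually uses, the fields of the source-space dictionaries make this\neasy. A minimal sketch (assuming ``src`` as read above; the field names are\nstandard MNE source-space keys):\n\n```python\nrh = src[1]  # right hemisphere\nprint(rh[\"nuse\"], \"of\", rh[\"np\"], \"vertices are in use\")\nprint(rh[\"vertno\"][:10])  # indices of the used vertices into rh[\"rr\"]\n# After mne.make_forward_solution, fwd[\"src\"][1][\"nuse\"] can be smaller if\n# vertices were excluded (e.g., too close to the inner skull surface).\n```\n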
\n\n\n### FreeSurfer's MNI affine transformation\nIn addition to surface-based approaches, FreeSurfer also provides a simple\naffine coregistration of each subject's data to the ``fsaverage`` subject.\nLet's pick a point for ``sample`` and plot it on the brain:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "brain = mne.viz.Brain(\n \"sample\", \"lh\", \"white\", subjects_dir=subjects_dir, background=\"w\"\n)\nxyz = np.array([[-55, -10, 35]])\nbrain.add_foci(xyz, hemi=\"lh\", color=\"k\")\nbrain.show_view(\"lat\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can take this point and transform it to MNI space:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "mri_mni_trans = mne.read_talxfm(subject, subjects_dir)\nprint(mri_mni_trans)\nxyz_mni = apply_trans(mri_mni_trans, xyz / 1000.0) * 1000.0\nprint(np.round(xyz_mni, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And because ``fsaverage`` is special in that it's already in MNI space\n(its MRI-to-MNI transform is identity), it should land in the equivalent\nanatomical location:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "brain = mne.viz.Brain(\n \"fsaverage\", \"lh\", \"white\", subjects_dir=subjects_dir, background=\"w\"\n)\nbrain.add_foci(xyz_mni, hemi=\"lh\", color=\"k\")\nbrain.show_view(\"lat\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Understanding the inflated brain\nIt takes a minute to interpret data displayed on an inflated brain. This\nvisualization is very helpful in showing more of a brain in one image\nsince it is difficult to visualize inside the sulci. Below is a video\nrelating the pial surface to an inflated surface. If you're interested\nin how this was created, here is the gist used to create the video:\nhttps://gist.github.com/alexrockhill/b5a1ce6c6ba363cf3f277cd321a763bf.\n\n.. youtube:: mOmfNX-Lkn0\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }