{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Using Python for neuroimaging data\n", "\n", "The primary goal of this section is to become familiar with loading, modifying, saving, and visualizing neuroimages in Python. A secondary goal is to develop a conceptual understanding of the data structures involved, to facilitate diagnosing problems in data or analysis pipelines.\n", "\n", "To these ends, we'll be exploring two libraries: [nibabel](http://nipy.org/nibabel/) and [nilearn](https://nilearn.github.io/). Each of these projects has excellent documentation. While this should get you started, it is well worth your time to look through these sites.\n", "\n", "This notebook only covers nilearn, see the notebook [`image_manipulation_nibabel.ipynb`](image_manipulation_nibabel.ipynb) for more information about nibabel." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Nilearn\n", "\n", "[Nilearn](http://nilearn.github.io/index.html) labels itself as: *A Python module for fast and easy statistical learning on NeuroImaging data. It leverages the scikit-learn Python toolbox for multivariate statistics with applications such as predictive modeling, classification, decoding, or connectivity analysis.*\n", "\n", "But it's much more than that. It is also an excellent library to **manipulate** (e.g. resample images, smooth images, ROI extraction, etc.) and **visualize** your neuroimages.\n", "\n", "So let's visit all three of those domains:\n", "\n", "1. Image manipulation\n", "2. Image visualization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Image settings\n", "from nilearn import plotting\n", "import pylab as plt\n", "%matplotlib inline\n", "\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Throughout this tutorial, we will be using the anatomical and functional image of subject 1. So let's load them already here that we have a quicker access later on:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nilearn import image as nli\n", "t1 = nli.load_img('/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz')\n", "bold = nli.load_img('/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because the bold image didn't reach steady-state at the beginning of the image, let's cut of the first 5 volumes, to be sure:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bold = bold.slicer[..., 5:]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Image manipulation with `nilearn`\n", "\n", "### Let's create a mean image\n", "\n", "If you use nibabel to compute the mean image, you first need to load the img, get the data and then compute the mean thereof. With nilearn, you can do all this in just one line with `mean image`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nilearn import image as nli" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img = nli.mean_img(bold)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From version `0.5.0` on, `nilearn` provides interactive visual views. 
A nice alternative to `nibabel`'s `orthoview()`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.view_img(img, bg_img=img)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perfect! What else can we do with the `image` module? \n", "Let's see..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Resample image to a template\n", "Using `resample_to_img`, we can resample one image to have the same dimensions as another one. For example, let's resample the anatomical T1 image to the mean image computed above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mean = nli.mean_img(bold)\n", "print([mean.shape, t1.shape])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's resample the t1 image to the mean image." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "resampled_t1 = nli.resample_to_img(t1, mean)\n", "resampled_t1.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The shape of the resampled t1 image seems right. But what does it look like?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nilearn import plotting\n", "plotting.plot_anat(t1, title='original t1', display_mode='z', dim=-1,\n", " cut_coords=[-20, -10, 0, 10, 20, 30])\n", "plotting.plot_anat(resampled_t1, title='resampled t1', display_mode='z', dim=-1,\n", " cut_coords=[-20, -10, 0, 10, 20, 30])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Smooth an image\n", "Using `smooth_img`, we can very quickly smooth any kind of MRI image. Let's, for example, take the mean image from above and smooth it with different FWHM values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for fwhm in range(1, 12, 5):\n", " smoothed_img = nli.smooth_img(mean, fwhm)\n", " plotting.plot_epi(smoothed_img, title=\"Smoothing %imm\" % fwhm,\n", " display_mode='z', cmap='magma')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Clean an image to improve SNR\n", "\n", "Sometimes you also want to clean your functional images a bit to improve the SNR. For this, nilearn offers `clean_img`. You can improve the SNR of your fMRI signal by using one or more of the following options:\n", "\n", "- detrend\n", "- standardize\n", "- remove confounds\n", "- low- and high-pass filter\n", "\n", "**Note:** Low-pass filtering improves specificity; high-pass filtering should be kept small, to preserve some sensitivity. A band-pass sketch follows right after the TR cell below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's get the TR value of our functional image:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "TR = bold.header['pixdim'][4]\n", "TR" ] }
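, { "cell_type": "markdown", "metadata": {}, "source": [ "Since temporal filtering needs the TR, we can now, for example, band-pass filter the signal in a single `clean_img` call. Here is a minimal sketch (the cutoff frequencies below are purely illustrative, not a recommendation):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch: band-pass filter only (illustrative cutoff frequencies in Hz)\n", "func_f = nli.clean_img(bold, detrend=False, standardize=False,\n", " low_pass=0.1, high_pass=0.01, t_r=TR)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, as a first step of our actual cleaning, let's just detrend the image."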
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "func_d = nli.clean_img(bold, detrend=True, standardize=False, t_r=TR)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot the original and detrended timecourse of a random voxel\n", "x, y, z = [31, 14, 7]\n", "plt.figure(figsize=(12, 4))\n", "plt.plot(np.transpose(bold.get_data()[x, y, z, :]))\n", "plt.plot(np.transpose(func_d.get_data()[x, y, z, :]))\n", "plt.legend(['Original', 'Detrend']);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's now see what `standardize` does:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "func_ds = nli.clean_img(bold, detrend=True, standardize=True, t_r=TR)\n", "\n", "plt.figure(figsize=(12, 4))\n", "plt.plot(np.transpose(func_d.get_data()[x, y, z, :]))\n", "plt.plot(np.transpose(func_ds.get_data()[x, y, z, :]))\n", "plt.legend(['Detrend', 'Detrend+standardize']);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And as a final step, let's also remove the influence of the motion parameters from the signal." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "func_ds_c = nli.clean_img(bold, detrend=True, standardize=True, t_r=TR,\n", " confounds='data/sub-01_ses-test_task-fingerfootlips_bold_mcf.par')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(12, 5))\n", "plt.plot(np.transpose(func_ds.get_data()[x, y, z, :]))\n", "plt.plot(np.transpose(func_ds_c.get_data()[x, y, z, :]))\n", "plt.legend(['Det.+stand.', 'Det.+stand.-confounds']);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Mask an image and extract an average signal of a region\n", "\n", "Thanks to nibabel and nilearn you can consider your images just a special kind of a numpy array. Which means that you have all the liberties that you are used to.\n", "\n", "For example, let's take a functional image, (1) create the mean image thereof, then we (2) threshold it to only keep the voxels that have a value that is higher than 95% of all voxels. Of this thresholded image, we only (3) keep those regions that are bigger than 1000mm^3. And finally, we (4) binarize those regions to create a mask image." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So first, we load again a functional image and compute the mean thereof." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mean = nli.mean_img(bold)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use `threshold_img` to only keep voxels that have a value that is higher than 95% of all voxels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "thr = nli.threshold_img(mean, threshold='95%')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.view_img(thr, bg_img=img)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's only keep those voxels that are in regions/clusters that are bigger than 1000mm^3." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "voxel_size = np.prod(thr.header['pixdim'][1:4]) # Size of 1 voxel in mm^3\n", "voxel_size" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create the mask that only keeps those big clusters." 
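, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check, let's see how many voxels a cluster must span at this resolution to exceed 1000mm^3:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimum number of voxels needed to exceed 1000 mm^3\n", "1000. / voxel_size" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create the mask that only keeps those big clusters."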
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nilearn.regions import connected_regions\n", "\n", "# min_region_size is specified in mm^3; connected_regions returns a tuple\n", "# (regions_img, index), so we keep only the image\n", "cluster = connected_regions(thr, min_region_size=1000., smoothing_fwhm=1)[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally, let's binarize this cluster file to create a mask." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mask = nli.math_img('np.mean(img, axis=3) > 0', img=cluster)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let us investigate this mask by visualizing it on the subject-specific anatomy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.plot_roi(mask, bg_img=t1, display_mode='z', dim=-.5, cmap='magma_r');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next step is to take this mask, apply it to the original functional image, and extract the mean of the temporal signal." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Apply mask to original functional image\n", "from nilearn.masking import apply_mask\n", "\n", "all_timecourses = apply_mask(bold, mask)\n", "all_timecourses.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** You can bring the timecourses (or masked data) back into the original 3D/4D space with `unmask`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nilearn.masking import unmask\n", "img_timecourse = unmask(all_timecourses, mask)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compute the mean trace over all extracted timecourses and plot the mean signal." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mean_timecourse = all_timecourses.mean(axis=1)\n", "plt.plot(mean_timecourse)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Independent Component Analysis\n", "\n", "Nilearn also gives you the possibility to run an ICA on your data, either on a single file or on multiple subjects." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import CanICA module\n", "from nilearn.decomposition import CanICA\n", "\n", "# Specify relevant parameters\n", "n_components = 5\n", "fwhm = 6." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify CanICA object\n", "canica = CanICA(n_components=n_components, smoothing_fwhm=fwhm,\n", " memory=\"nilearn_cache\", memory_level=2,\n", " threshold=3., verbose=10, random_state=0, n_jobs=-1,\n", " standardize=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run/fit CanICA on input data\n", "canica.fit(bold)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Retrieve the independent components in brain space\n", "components_img = canica.masker_.inverse_transform(canica.components_)" ] }
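, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to keep the component maps for later, the 4D component image can be written to disk like any other NIfTI image (the file name below is just illustrative):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Save the component maps to disk (illustrative file name)\n", "components_img.to_filename('canica_components.nii.gz')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's now visualize those components on the T1 image."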
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "from nilearn.image import iter_img\n", "from nilearn.plotting import plot_stat_map\n", "\n", "for i, cur_img in enumerate(iter_img(components_img)):\n", " plot_stat_map(cur_img, display_mode=\"z\", title=\"IC %d\" % i,\n", " cut_coords=[0, 10, 20, 30], colorbar=True, bg_img=t1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** The ICA components are not ordered; the visualizations above and below might therefore look different from run to run. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're also curious how those independent components correlate with the functional image over time, you can use the following code to get some insight:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import pearsonr\n", "\n", "# Get data of the functional image\n", "orig_data = bold.get_data()\n", "\n", "# Compute the Pearson correlation between the components and the signal\n", "curves = np.array([[pearsonr(components_img.get_data()[..., j].ravel(),\n", " orig_data[..., i].ravel())[0] for i in range(orig_data.shape[-1])]\n", " for j in range(canica.n_components)])\n", "\n", "# Plot the components\n", "fig = plt.figure(figsize=(14, 4))\n", "centered_curves = curves - curves.mean(axis=1)[..., None]\n", "plt.plot(centered_curves.T)\n", "plt.legend(['IC %d' % i for i in range(canica.n_components)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dictionary Learning\n", "\n", "Recent work has shown that dictionary-learning-based techniques outperform ICA in terms of stability and constitute a better first step in a statistical analysis pipeline. Dictionary learning in neuroimaging seeks to extract a few representative temporal elements along with their sparse spatial loadings, which together constitute good extracted maps. Luckily, doing dictionary learning with `nilearn` is as easy as it can be.\n", "\n", "`DictLearning` is a ready-to-use class with the same interface as `CanICA`. The sparsity of the output maps is controlled by the parameter `alpha`: using a larger `alpha` yields sparser maps."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import DictLearning module\n", "from nilearn.decomposition import DictLearning" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify the dictionary learning object\n", "dict_learning = DictLearning(n_components=n_components, n_epochs=1,\n", " alpha=1., smoothing_fwhm=fwhm, standardize=True,\n", " memory=\"nilearn_cache\", memory_level=2,\n", " verbose=1, random_state=0, n_jobs=-1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# As before, let's fit the model to the data\n", "dict_learning.fit(bold)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Retrieve the components in brain space\n", "components_img = dict_learning.masker_.inverse_transform(dict_learning.components_)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now let's plot the components\n", "for i, cur_img in enumerate(iter_img(components_img)):\n", " plot_stat_map(cur_img, display_mode=\"z\", title=\"Component %d\" % i,\n", " cut_coords=[0, 10, 20, 30], colorbar=True, bg_img=t1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Maps obtained with dictionary learning are often easier to exploit, as they are less noisy than ICA maps, with blobs usually better defined. Typically, smoothing can be lower than when doing ICA. While dictionary learning computation time is comparable to CanICA, the obtained atlases have been shown to outperform ICA in a variety of classification tasks." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import pearsonr\n", "\n", "# Get data of the functional image\n", "orig_data = bold.get_data()\n", "\n", "# Compute the Pearson correlation between the components and the signal\n", "curves = np.array([[pearsonr(components_img.get_data()[..., j].ravel(),\n", " orig_data[..., i].ravel())[0] for i in range(orig_data.shape[-1])]\n", " for j in range(dict_learning.n_components)])\n", "\n", "# Plot the components\n", "fig = plt.figure(figsize=(14, 4))\n", "centered_curves = curves - curves.mean(axis=1)[..., None]\n", "plt.plot(centered_curves.T)\n", "plt.legend(['Component %d' % i for i in range(dict_learning.n_components)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Image visualization with `nilearn`\n", "\n", "Above, we've already seen some ways to visualize brain images with `nilearn`, and there are many more. To keep this notebook short, we will only take a look at some of them. For a complete list, see [nilearn's plotting section](http://nilearn.github.io/plotting/index.html).\n", "\n", "**Note:** In most of `nilearn`'s plotting functions, you can specify `output_file='example.png'` to save the figure directly to a file, as shown in the sketch below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "from nilearn import plotting" ] }
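, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, here is a minimal sketch that writes a figure straight to disk instead of displaying it (the file name is purely illustrative):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Save a plot directly to a file instead of displaying it (illustrative file name)\n", "plotting.plot_epi(mean, output_file='example_mean_epi.png')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's specify the functional file that we want to plot."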
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "localizer_tmap = '/home/neuro/workshop/notebooks/data/brainomics_localizer/brainomics_data/'\n", "localizer_tmap += 'S02/t_map_left_auditory_&_visual_click_vs_right_auditory&visual_click.nii.gz'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Glass brain\n", "\n", "A really cool way to visualize your brain images on the MNI brain is nilearn's `plot_glass_brain()` function. It gives you a good overview of all significant voxels in your image.\n", "\n", "**Note**: It's important that your data is normalized to the MNI template, as the overlay will otherwise not be aligned." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.plot_glass_brain(localizer_tmap, threshold=3, colorbar=True,\n", " title='plot_glass_brain with display_mode=\"lyrz\"',\n", " plot_abs=False, display_mode='lyrz', cmap='rainbow')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Overlay functional image onto anatomical image\n", "\n", "In this type of visualization, you can specify the axis through which you want to cut and the cut coordinates. Passing `cut_coords` as an integer (here 5) instead of a list means that at most 5 cuts are made, with the cut coordinates selected automatically. You could also specify the exact cuts with `cut_coords=[-10, 0, 10, 20]`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.plot_stat_map(localizer_tmap, display_mode='z', cut_coords=5, threshold=2,\n", " title=\"display_mode='z', cut_coords=5\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note**: `plot_stat_map()` can also be used to create figures with cuts in all directions, i.e. orthogonal cuts. For this, set `display_mode='ortho'`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.plot_stat_map(localizer_tmap, display_mode='ortho', threshold=2,\n", " title=\"display_mode='ortho'\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Overlay two images on top of each other\n", "\n", "We can also create a plot that overlays different anatomical images. Let's show the Harvard-Oxford cortical atlas over the MNI152 template:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.plot_roi('/usr/share/fsl/data/atlases/HarvardOxford/HarvardOxford-cort-maxprob-thr25-1mm.nii.gz',\n", " 'data/templates/MNI152_T1_1mm.nii.gz', dim=0, cut_coords=(0, 0, 0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that nilearn will accept an image object or a filename equally. Nilearn takes advantage of the common interface (data-affine-header) that nibabel provides across different file formats, and makes correctly aligned overlays." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Use `add_edges` to see the overlay between two images\n", "\n", "Let's assume we want to see how well our anatomical image overlays with the mean functional image. 
The anatomical image is already loaded as `t1`, so we only need to compute the mean functional image:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import nilearn.image as nli\n", "mean = nli.mean_img(bold)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can use the `plot_anat` plotting function to plot the background image, in this case the anatomical T1 image, followed by the `add_edges` function to overlay the edges of the mean functional image onto it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "display = plotting.plot_anat(t1, dim=-1.0)\n", "display.add_edges(mean, color='r')" ] }, { "cell_type": "markdown", "metadata": { "solution2": "hidden", "solution2_first": true }, "source": [ "### Exercise 1:\n", "\n", "Using the function `plot_epi()`, draw the image `mean` as a set of 5 slices spanning front to back. Suppress the background using the `vmin` option." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "solution2": "hidden" }, "outputs": [], "source": [ "plotting.plot_epi(mean, display_mode='y', cut_coords=5, vmin=100)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create solution here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3D Surface Plot\n", "\n", "One of the newest features in nilearn is the possibility to project a 3D statistical map onto a cortical mesh using `nilearn.surface.vol_to_surf` and then display the projected map as a surface plot using `nilearn.plotting.plot_surf_stat_map`.\n", "\n", "**Note:** Both of those modules require that your `matplotlib` version is 1.3.1 or higher and that your `nilearn` version is 0.4 or higher." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's specify the location of the surface files:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fsaverage = {'pial_left': '/home/neuro/workshop/notebooks/data/fsaverage5/pial.left.gii',\n", " 'pial_right': '/home/neuro/workshop/notebooks/data/fsaverage5/pial.right.gii',\n", " 'infl_left': '/home/neuro/workshop/notebooks/data/fsaverage5/pial_inflated.left.gii',\n", " 'infl_right': '/home/neuro/workshop/notebooks/data/fsaverage5/pial_inflated.right.gii',\n", " 'sulc_left': '/home/neuro/workshop/notebooks/data/fsaverage5/sulc.left.gii',\n", " 'sulc_right': '/home/neuro/workshop/notebooks/data/fsaverage5/sulc.right.gii'}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Project the statistical map from the volume onto the surface."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nilearn import surface\n", "texture = surface.vol_to_surf(localizer_tmap, fsaverage['pial_right'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are ready to plot the statistical map on the surface:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "from nilearn import plotting\n", "plotting.plot_surf_stat_map(fsaverage['infl_right'], texture,\n", " hemi='right', title='Inflated surface - right hemisphere',\n", " threshold=1., bg_map=fsaverage['sulc_right'],\n", " view='lateral', cmap='cold_hot')\n", "plotting.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or another example, using the pial surface:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.plot_surf_stat_map(fsaverage['pial_left'], texture,\n", " hemi='left', title='Pial surface - left hemisphere',\n", " threshold=1., bg_map=fsaverage['sulc_left'],\n", " view='medial', cmap='cold_hot')\n", "plotting.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From version `0.5.0` on, `nilearn` provides interactive visual views also for surface plots. The great thing is that you don't actually need to project your statistical maps onto the surface yourself; `nilearn` does that directly for you.\n", "\n", "So, taking the `localizer_tmap` from before, we can call the interactive plotting function `view_img_on_surf`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotting.view_img_on_surf(localizer_tmap, surf_mesh='fsaverage5',\n", " threshold=0, cmap='cold_hot', vmax=3)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 1 }