{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Make figures more publication ready\n\nIn this example, we show several use cases to take MNE plots and\ncustomize them for a more publication-ready look.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: Eric Larson \n# Daniel McCloy \n# Stefan Appelhoff \n#\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports\nWe are importing everything we need for this example:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.axes_grid1 import ImageGrid, inset_locator, make_axes_locatable\n\nimport mne" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evoked plot with brain activation\n\nSuppose we want a figure with an evoked plot on top, and the brain activation\nbelow, with the brain subplot slightly bigger than the evoked plot. Let's\nstart by loading some `example data `.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_path = mne.datasets.sample.data_path()\nsubjects_dir = data_path / \"subjects\"\nfname_stc = data_path / \"MEG\" / \"sample\" / \"sample_audvis-meg-eeg-lh.stc\"\nfname_evoked = data_path / \"MEG\" / \"sample\" / \"sample_audvis-ave.fif\"\n\nevoked = mne.read_evokeds(fname_evoked, \"Left Auditory\")\nevoked.pick(picks=\"grad\").apply_baseline((None, 0.0))\nmax_t = evoked.get_peak()[1]\n\nstc = mne.read_source_estimate(fname_stc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "During interactive plotting, we might see figures like this:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "evoked.plot()\n\nstc.plot(\n views=\"lat\",\n hemi=\"split\",\n size=(800, 400),\n subject=\"sample\",\n subjects_dir=subjects_dir,\n initial_time=max_t,\n time_viewer=False,\n show_traces=False,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make a publication-ready figure, first we'll re-plot the brain on a white\nbackground, take a screenshot of it, and then crop out the white margins.\nWhile we're at it, let's change the colormap, set custom colormap limits and\nremove the default colorbar (so we can add a smaller, vertical one later):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "colormap = \"viridis\"\nclim = dict(kind=\"value\", lims=[4, 8, 12])\n\n# Plot the STC, get the brain image, crop it:\nbrain = stc.plot(\n views=\"lat\",\n hemi=\"split\",\n size=(800, 400),\n subject=\"sample\",\n subjects_dir=subjects_dir,\n initial_time=max_t,\n background=\"w\",\n colorbar=False,\n clim=clim,\n colormap=colormap,\n time_viewer=False,\n show_traces=False,\n)\nscreenshot = brain.screenshot()\nbrain.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's crop out the white margins and the white gap between hemispheres.\nThe screenshot has dimensions ``(h, w, 3)``, with the last axis being R, G, B\nvalues for each pixel, encoded as integers between ``0`` and ``255``. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "``(255, 255, 255)`` encodes a white pixel, so we'll detect any pixels that differ\nfrom that:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "nonwhite_pix = (screenshot != 255).any(-1)\nnonwhite_row = nonwhite_pix.any(1)\nnonwhite_col = nonwhite_pix.any(0)\ncropped_screenshot = screenshot[nonwhite_row][:, nonwhite_col]\n\n# before/after results\nfig = plt.figure(figsize=(4, 4))\naxes = ImageGrid(fig, 111, nrows_ncols=(2, 1), axes_pad=0.5)\nfor ax, image, title in zip(\n    axes, [screenshot, cropped_screenshot], [\"Before\", \"After\"]\n):\n    ax.imshow(image)\n    ax.set_title(f\"{title} cropping\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A lot of figure settings can be adjusted after the figure is created, but\nmany can also be adjusted in advance by updating the\n:data:`~matplotlib.rcParams` dictionary. This is especially useful when your\nscript generates several figures that should all have the same style:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Tweak the figure style\nplt.rcParams.update(\n    {\n        \"ytick.labelsize\": \"small\",\n        \"xtick.labelsize\": \"small\",\n        \"axes.labelsize\": \"small\",\n        \"axes.titlesize\": \"medium\",\n        \"grid.color\": \"0.75\",\n        \"grid.linestyle\": \":\",\n    }\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's create our custom figure. There are lots of ways to do this step.\nHere we'll create the figure and the subplot axes in one step, specifying\noverall figure size, number and arrangement of subplots, and the ratio of\nsubplot heights for each row using :mod:`GridSpec keywords\n<matplotlib.gridspec>`. Other approaches (using\n:func:`~matplotlib.pyplot.subplot2grid`, or adding each axes manually) are\nshown commented out, for reference.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# figsize unit is inches\nfig, axes = plt.subplots(\n    nrows=2, ncols=1, figsize=(4.5, 3.0), gridspec_kw=dict(height_ratios=[3, 4])\n)\n\n# alternate way #1: using subplot2grid\n# fig = plt.figure(figsize=(4.5, 3.))\n# axes = [plt.subplot2grid((7, 1), (0, 0), rowspan=3),\n#         plt.subplot2grid((7, 1), (3, 0), rowspan=4)]\n\n# alternate way #2: using figure-relative coordinates\n# fig = plt.figure(figsize=(4.5, 3.))\n# axes = [fig.add_axes([0.125, 0.58, 0.775, 0.3]),  # left, bot., width, height\n#         fig.add_axes([0.125, 0.11, 0.775, 0.4])]\n\n# we'll put the evoked plot in the upper axes, and the brain below\nevoked_idx = 0\nbrain_idx = 1\n\n# plot the evoked in the desired subplot, and add a line at peak activation\nevoked.plot(axes=axes[evoked_idx])\npeak_line = axes[evoked_idx].axvline(max_t, color=\"#66CCEE\", ls=\"--\")\n# custom legend\naxes[evoked_idx].legend(\n    [axes[evoked_idx].lines[0], peak_line],\n    [\"MEG data\", \"Peak time\"],\n    frameon=True,\n    columnspacing=0.1,\n    labelspacing=0.1,\n    fontsize=8,\n    fancybox=True,\n    handlelength=1.8,\n)\n# remove the \"N_ave\" annotation\nfor text in list(axes[evoked_idx].texts):\n    text.remove()\n# Remove spines and add grid\naxes[evoked_idx].grid(True)\naxes[evoked_idx].set_axisbelow(True)\nfor key in (\"top\", \"right\"):\n    axes[evoked_idx].spines[key].set(visible=False)\n# Tweak the ticks and limits\naxes[evoked_idx].set(\n    yticks=np.arange(-200, 201, 100), xticks=np.arange(-0.2, 0.51, 0.1)\n)\naxes[evoked_idx].set(ylim=[-225, 225], xlim=[-0.2, 0.5])\n
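\n# Optional, hedged flourish: label the peak time next to the dashed line.\n# The coordinates, wording, and font size here are arbitrary illustrative\n# choices, not part of the original recipe.\naxes[evoked_idx].text(\n    max_t + 0.01, 180, f\"peak = {max_t:.3f} s\", color=\"#66CCEE\", fontsize=7\n)\n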
\n# now add the brain to the lower axes\naxes[brain_idx].imshow(cropped_screenshot)\naxes[brain_idx].axis(\"off\")\n# add a vertical colorbar with the same properties as the 3D one\ndivider = make_axes_locatable(axes[brain_idx])\ncax = divider.append_axes(\"right\", size=\"5%\", pad=0.2)\ncbar = mne.viz.plot_brain_colorbar(cax, clim, colormap, label=\"Activation (F)\")\n\n# tweak margins and spacing\nfig.subplots_adjust(left=0.15, right=0.9, bottom=0.01, top=0.9, wspace=0.1, hspace=0.5)\n\n# add subplot labels\nfor ax, label in zip(axes, \"AB\"):\n    ax.text(\n        0.03,\n        ax.get_position().ymax,\n        label,\n        transform=fig.transFigure,\n        fontsize=12,\n        fontweight=\"bold\",\n        va=\"top\",\n        ha=\"left\",\n    )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Custom timecourse with montage inset\n\nSuppose we want a figure with some mean timecourse extracted from a number of\nsensors, and we want a smaller panel within the figure to show a head outline\nwith the positions of those sensors clearly marked.\nIf you are familiar with MNE, you know that this is something that\n:func:`mne.viz.plot_compare_evokeds` does; see an example output at the\nbottom of :ref:`ex-hf-sef-data`.\n\nIn this part of the example, we will show you how to achieve this result on\nyour own figure, without having to use :func:`mne.viz.plot_compare_evokeds`!\n\nLet's start by loading some example data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_path = mne.datasets.sample.data_path()\nfname_raw = data_path / \"MEG\" / \"sample\" / \"sample_audvis_raw.fif\"\nraw = mne.io.read_raw_fif(fname_raw)\n\n# For the sake of the example, we focus on EEG data\nraw.pick(picks=\"eeg\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's make a plot.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# channels to plot:\nto_plot = [f\"EEG {i:03}\" for i in range(1, 5)]\n\n# get the data for plotting in a short time interval from 10 to 20 seconds\nstart = int(raw.info[\"sfreq\"] * 10)\nstop = int(raw.info[\"sfreq\"] * 20)\ndata, times = raw.get_data(picks=to_plot, start=start, stop=stop, return_times=True)\n\n# Scale the data from the MNE internal unit V to \u00b5V\ndata *= 1e6\n# Take the mean of the channels\nmean = np.mean(data, axis=0)\n# make a figure\nfig, ax = plt.subplots(figsize=(4.5, 3))\n# plot some EEG data\nax.plot(times, mean)" ] }
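, { "cell_type": "markdown", "metadata": {}, "source": [ "A brief aside: instead of converting seconds to sample indices by hand, we\ncould have used the :meth:`~mne.io.Raw.time_as_index` method. The following\ncell is a small optional sketch that should be equivalent to the manual\ncomputation above:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# optional sketch: let MNE convert seconds to sample indices for us\nstart_alt, stop_alt = raw.time_as_index([10, 20])\nprint(start_alt, stop_alt)  # should match the manually computed start and stop" ] }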
, { "cell_type": "markdown", "metadata": {}, "source": [ "So far so good. Now let's add the smaller figure within the figure to show\nexactly which sensors we used to make the timecourse.\nFor that, we use an \"inset_axes\" that we place inside our existing axes.\nThe head outline with the sensor positions can be plotted using the\n`~mne.io.Raw` object that is the source of our data.\nSpecifically, that object already contains all the sensor positions,\nand we can plot them using the ``plot_sensors`` method.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# recreate the figure (only necessary for our documentation server)\nfig, ax = plt.subplots(figsize=(4.5, 3))\nax.plot(times, mean)\naxins = inset_locator.inset_axes(ax, width=\"30%\", height=\"30%\", loc=2)\n\n# pick() edits the raw object in place, so we'll make a copy here\n# so that our raw object stays intact for potential later analysis\nraw.copy().pick(to_plot).plot_sensors(title=\"\", axes=axins)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That looks nice. But the sensor dots are way too big for our taste. Luckily,\nall MNE-Python plots use Matplotlib under the hood, and we can customize\neach and every facet of them.\nTo make the sensor dots smaller, we first need to get a handle on them and\nthen apply one of their ``set_*`` methods.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# If we inspect our axes we find the objects contained in our plot:\nprint(axins.get_children())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's quite a lot of objects, but we know that we want to change the\nsensor dots, and those are most certainly a \"PathCollection\" object.\nSo let's have a look at how many \"collections\" we have in the axes.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(axins.collections)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is only one! Those must be the sensor dots we were looking for.\nWe finally found exactly what we needed. Sometimes this can take a bit of\nexperimentation.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "sensor_dots = axins.collections[0]\n\n# Recreate the figure once more; shrink the sensor dots; add axis labels\nfig, ax = plt.subplots(figsize=(4.5, 3))\nax.plot(times, mean)\naxins = inset_locator.inset_axes(ax, width=\"30%\", height=\"30%\", loc=2)\nraw.copy().pick(to_plot).plot_sensors(title=\"\", axes=axins)\nsensor_dots = axins.collections[0]\nsensor_dots.set_sizes([1])\n# add axis labels, and adjust bottom figure margin to make room for them\nax.set(xlabel=\"Time (s)\", ylabel=\"Amplitude (\u00b5V)\")\nfig.subplots_adjust(bottom=0.2)" ] }
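, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, once the figure looks right, we can write it to disk for the\nmanuscript. This closing cell is a hedged sketch: the filename, format, and\nresolution are arbitrary example choices, not requirements.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# hedged sketch: filename and DPI are arbitrary example choices; many\n# journals ask for >= 300 DPI, and bbox_inches=\"tight\" trims extra margins\nfig.savefig(\"eeg_timecourse_with_inset.png\", dpi=300, bbox_inches=\"tight\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }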