{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Overview of MEG/EEG analysis with MNE-Python\n\nThis tutorial covers the basic EEG/MEG pipeline for event-related analysis: loading\ndata, epoching, averaging, plotting, and estimating cortical activity from sensor data.\nIt introduces the core MNE-Python data structures `~mne.io.Raw`, `~mne.Epochs`,\n`~mne.Evoked`, and `~mne.SourceEstimate`, and covers a lot of ground fairly quickly (at\nthe expense of depth). Subsequent tutorials address each of these topics in greater\ndetail.\n\nWe begin by importing the necessary Python modules:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The MNE-Python contributors.\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\nimport mne" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading data\n\nMNE-Python data structures are based around the FIF file format from\nNeuromag, but there are reader functions for `a wide variety of other\ndata formats `. MNE-Python also has interfaces to a\nvariety of `publicly available datasets `, which MNE-Python\ncan download and manage for you.\n\nWe'll start this tutorial by loading one of the example datasets (called\n\"`sample-dataset`\"), which contains EEG and MEG data from one subject\nperforming an audiovisual experiment, along with structural MRI scans for\nthat subject. The `mne.datasets.sample.data_path` function will automatically\ndownload the dataset if it isn't found in one of the expected locations, then\nreturn the directory path to the dataset (see the documentation of\n`~mne.datasets.sample.data_path` for a list of places it checks before\ndownloading). Note also that for this tutorial to run smoothly on our\nservers, we're using a filtered and downsampled version of the data\n(:file:`sample_audvis_filt-0-40_raw.fif`), but an unfiltered version\n(:file:`sample_audvis_raw.fif`) is also included in the sample dataset and\ncould be substituted here when running the tutorial locally.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = (\n sample_data_folder / \"MEG\" / \"sample\" / \"sample_audvis_filt-0-40_raw.fif\"\n)\nraw = mne.io.read_raw_fif(sample_data_raw_file)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, `~mne.io.read_raw_fif` displays some information about the file\nit's loading; for example, here it tells us that there are four \"projection\nitems\" in the file along with the recorded data; those are :term:`SSP\nprojectors ` calculated to remove environmental noise from the MEG\nsignals, plus a projector to mean-reference the EEG channels; these are\ndiscussed in the tutorial `tut-projectors-background`. In addition to\nthe information displayed during loading, you can get a glimpse of the basic\ndetails of a `~mne.io.Raw` object by printing it; even more is available by\nprinting its ``info`` attribute (a `dictionary-like object ` that\nis preserved across `~mne.io.Raw`, `~mne.Epochs`, and `~mne.Evoked` objects).\nThe ``info`` data structure keeps track of channel locations, applied\nfilters, projectors, etc. 
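\n\nIndividual ``info`` entries can be read with dictionary-style access. Here is\na minimal sketch (assuming the ``raw`` object loaded above; only a few of the\navailable keys are shown):\n\n```python\n# illustrative only: dictionary-style access to a few Info entries\nprint(raw.info[\"sfreq\"])  # sampling frequency in Hz\nprint(raw.info[\"bads\"])  # channels currently marked as bad\nprint(raw.info[\"chs\"][0][\"ch_name\"])  # per-channel metadata lives in \"chs\"\n```\n\n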
Notice especially the ``chs`` entry, showing that\nMNE-Python detects different sensor types and handles each appropriately. See\n`tut-info-class` for more on the `~mne.Info` class.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(raw)\nprint(raw.info)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`~mne.io.Raw` objects also have several built-in plotting methods; here we\nshow the power spectral density (PSD) for each sensor type with\n`~mne.io.Raw.compute_psd`, as well as a plot of the raw sensor traces with\n`~mne.io.Raw.plot`. In the PSD plot, we'll only plot frequencies below 50 Hz\n(since our data are low-pass filtered at 40 Hz). In interactive Python\nsessions, `~mne.io.Raw.plot` allows scrolling, scaling,\nbad channel marking, annotations, projector toggling, etc.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "raw.compute_psd(fmax=50).plot(picks=\"data\", exclude=\"bads\", amplitude=False)\nraw.plot(duration=5, n_channels=30)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preprocessing\n\nMNE-Python supports a variety of preprocessing approaches and techniques\n(Maxwell filtering, signal-space projection, independent components analysis,\nfiltering, downsampling, etc.); see the full list of capabilities in the\n:mod:`mne.preprocessing` and :mod:`mne.filter` submodules. Here we'll clean\nup our data by performing independent components analysis\n(`~mne.preprocessing.ICA`); for brevity we'll skip the steps that helped us\ndetermine which components best capture the artifacts (see\n`tut-artifact-ica` for a detailed walk-through of that process).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# set up and fit the ICA\nica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)\nica.fit(raw)\nica.exclude = [1, 2]  # details on how we picked these are omitted here\nica.plot_properties(raw, picks=ica.exclude)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we're confident about which component(s) we want to remove, we pass them\nas the ``exclude`` parameter and then apply the ICA to the raw signal. The\n`~mne.preprocessing.ICA.apply` method requires the raw data to be loaded into\nmemory (by default it's only read from disk as-needed), so we'll use\n`~mne.io.Raw.load_data` first.
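\n\nAs an aside, the component-selection step we skipped in the Preprocessing\nsection is often based on correlating the ICA sources with an EOG channel. A\nrough sketch of how that could look (illustrative only, not part of this\ntutorial's pipeline; note that it loads the raw data into memory, which the\ntutorial otherwise defers to the next cell):\n\n```python\n# illustrative only: one common way to flag ocular components\nraw.load_data()  # source computations need the data in memory\neog_indices, eog_scores = ica.find_bads_eog(raw)  # correlate sources with EOG\nica.plot_scores(eog_scores)  # bar plot of per-component correlation scores\n```\n\n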
We'll also make a copy of the `~mne.io.Raw`\nobject so we can compare the signal before and after artifact removal\nside-by-side:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "orig_raw = raw.copy()\nraw.load_data()\nica.apply(raw)\n\n# show some frontal channels to clearly illustrate the artifact removal\nchs = [\n \"MEG 0111\",\n \"MEG 0121\",\n \"MEG 0131\",\n \"MEG 0211\",\n \"MEG 0221\",\n \"MEG 0231\",\n \"MEG 0311\",\n \"MEG 0321\",\n \"MEG 0331\",\n \"MEG 1511\",\n \"MEG 1521\",\n \"MEG 1531\",\n \"EEG 001\",\n \"EEG 002\",\n \"EEG 003\",\n \"EEG 004\",\n \"EEG 005\",\n \"EEG 006\",\n \"EEG 007\",\n \"EEG 008\",\n]\nchan_idxs = [raw.ch_names.index(ch) for ch in chs]\norig_raw.plot(order=chan_idxs, start=12, duration=4)\nraw.plot(order=chan_idxs, start=12, duration=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n## Detecting experimental events\n\nThe sample dataset includes several :term:`\"STIM\" channels `\nthat recorded electrical signals sent from the stimulus delivery computer (as\nbrief DC shifts / squarewave pulses). These pulses (often called \"triggers\")\nare used in this dataset to mark experimental events: stimulus onset,\nstimulus type, and participant response (button press). The individual STIM\nchannels are combined onto a single channel, in such a way that voltage\nlevels on that channel can be unambiguously decoded as a particular event\ntype. On older Neuromag systems (such as that used to record the sample data)\nthis summation channel was called ``STI 014``, so we can pass that channel\nname to the `mne.find_events` function to recover the timing and identity of\nthe stimulus events.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "events = mne.find_events(raw, stim_channel=\"STI 014\")\nprint(events[:5]) # show the first 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The resulting events array is an ordinary 3-column :class:`NumPy array\n`, with sample number in the first column and integer event ID\nin the last column; the middle column is usually ignored. Rather than keeping\ntrack of integer event IDs, we can provide an *event dictionary* that maps\nthe integer IDs to experimental conditions or events. 
In this dataset, the\nmapping looks like this:\n\n\n+----------+----------------------------------------------------------+\n| Event ID | Condition |\n+==========+==========================================================+\n| 1 | auditory stimulus (tone) to the left ear |\n+----------+----------------------------------------------------------+\n| 2 | auditory stimulus (tone) to the right ear |\n+----------+----------------------------------------------------------+\n| 3 | visual stimulus (checkerboard) to the left visual field |\n+----------+----------------------------------------------------------+\n| 4 | visual stimulus (checkerboard) to the right visual field |\n+----------+----------------------------------------------------------+\n| 5 | smiley face (catch trial) |\n+----------+----------------------------------------------------------+\n| 32 | subject button press |\n+----------+----------------------------------------------------------+\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "event_dict = {\n \"auditory/left\": 1,\n \"auditory/right\": 2,\n \"visual/left\": 3,\n \"visual/right\": 4,\n \"smiley\": 5,\n \"buttonpress\": 32,\n}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Event dictionaries like this one are used when extracting epochs from\ncontinuous data; the ``/`` character in the dictionary keys allows pooling\nacross conditions by requesting partial condition descriptors (i.e.,\nrequesting ``'auditory'`` will select all epochs with Event IDs 1 and 2;\nrequesting ``'left'`` will select all epochs with Event IDs 1 and 3). An\nexample of this is shown in the next section. There is also a convenient\n`~mne.viz.plot_events` function for visualizing the distribution of events\nacross the duration of the recording (to make sure event detection worked as\nexpected). Here we will also make use of the `~mne.Info` attribute to get the\nsampling frequency of the recording (so our x-axis will be in seconds instead\nof in samples).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = mne.viz.plot_events(\n events, event_id=event_dict, sfreq=raw.info[\"sfreq\"], first_samp=raw.first_samp\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For paradigms that are not event-related (e.g., analysis of resting-state\ndata), you can extract regularly spaced (possibly overlapping) spans of data\nby creating events using `mne.make_fixed_length_events` and then proceeding\nwith epoching as described in the next section.\n\n\n\n## Epoching continuous data\n\nThe `~mne.io.Raw` object and the events array are the bare minimum needed to\ncreate an `~mne.Epochs` object, which we create with the `~mne.Epochs` class\nconstructor. Here we'll also specify some data quality constraints: we'll\nreject any epoch where peak-to-peak signal amplitude is beyond reasonable\nlimits for that channel type. This is done with a *rejection dictionary*; you\nmay include or omit thresholds for any of the channel types present in your\ndata. The values given here are reasonable for this particular dataset, but\nmay need to be adapted for different hardware or recording conditions. 
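\n\nOnce the `~mne.Epochs` object has been created below, the effect of these\nthresholds can be checked. A minimal sketch (it refers to the ``epochs``\nobject created a few cells further down, so it is a pointer rather than a\nrunnable step at this point):\n\n```python\n# illustrative only: summarize which epochs the peak-to-peak criteria dropped\nepochs.plot_drop_log()  # bar chart of drop reasons per channel\nprint(epochs.drop_log[:5])  # per-epoch record; an empty tuple means \"kept\"\n```\n\n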
For a\nmore automated approach, consider using the `autoreject package`_.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "reject_criteria = dict(\n mag=4000e-15, # 4000 fT\n grad=4000e-13, # 4000 fT/cm\n eeg=150e-6, # 150 \u00b5V\n eog=250e-6,\n) # 250 \u00b5V" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll also pass the event dictionary as the ``event_id`` parameter (so we can\nwork with easy-to-pool event labels instead of the integer event IDs), and\nspecify ``tmin`` and ``tmax`` (the time relative to each event at which to\nstart and end each epoch). As mentioned above, by default `~mne.io.Raw` and\n`~mne.Epochs` data aren't loaded into memory (they're accessed from disk only\nwhen needed), but here we'll force loading into memory using the\n``preload=True`` parameter so that we can see the results of the rejection\ncriteria being applied:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "epochs = mne.Epochs(\n raw,\n events,\n event_id=event_dict,\n tmin=-0.2,\n tmax=0.5,\n reject=reject_criteria,\n preload=True,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we'll pool across left/right stimulus presentations so we can compare\nauditory versus visual responses. To avoid biasing our signals to the left or\nright, we'll use `~mne.Epochs.equalize_event_counts` first to randomly sample\nepochs from each condition to match the number of epochs present in the\ncondition with the fewest good epochs.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "conds_we_care_about = [\"auditory/left\", \"auditory/right\", \"visual/left\", \"visual/right\"]\nepochs.equalize_event_counts(conds_we_care_about) # this operates in-place\naud_epochs = epochs[\"auditory\"]\nvis_epochs = epochs[\"visual\"]\ndel raw, epochs # free up memory" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Like `~mne.io.Raw` objects, `~mne.Epochs` objects also have a number of\nbuilt-in plotting methods. One is `~mne.Epochs.plot_image`, which shows each\nepoch as one row of an image map, with color representing signal magnitude;\nthe average evoked response and the sensor location are shown below the\nimage:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "aud_epochs.plot_image(picks=[\"MEG 1332\", \"EEG 021\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

<div class=\"alert alert-info\"><h4>Note</h4><p>Both `~mne.io.Raw` and `~mne.Epochs` objects have `~mne.Epochs.get_data`\n   methods that return the underlying data as a\n   :class:`NumPy array `. Both methods have a ``picks``\n   parameter for subselecting which channel(s) to return; ``raw.get_data()``\n   has additional parameters for restricting the time domain. The resulting\n   arrays have dimension ``(n_channels, n_times)`` for `~mne.io.Raw` and\n   ``(n_epochs, n_channels, n_times)`` for `~mne.Epochs`.</p></div>
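\n\nAs a small illustration of the shapes described above (assuming the\n``aud_epochs`` object created earlier; ``raw`` has already been deleted, so\nonly the `~mne.Epochs` case is shown):\n\n```python\n# illustrative only: Epochs.get_data() returns (n_epochs, n_channels, n_times)\naud_data = aud_epochs.get_data(picks=\"meg\")\nprint(aud_data.shape)\n```\n\n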

\n\n## Time-frequency analysis\n\nThe :mod:`mne.time_frequency` submodule provides implementations of several\nalgorithms to compute time-frequency representations, power spectral density,\nand cross-spectral density. Here, for example, we'll compute for the auditory\nepochs the induced power at different frequencies and times, using Morlet\nwavelets. On this dataset the result is not especially informative (it just\nshows the evoked \"auditory N100\" response); see `here\n` for a more extended example on a dataset with richer\nfrequency content.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "frequencies = np.arange(7, 30, 3)\npower = aud_epochs.compute_tfr(\n \"morlet\", n_cycles=2, return_itc=False, freqs=frequencies, decim=3, average=True\n)\npower.plot([\"MEG 1332\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Estimating evoked responses\n\nNow that we have our conditions in ``aud_epochs`` and ``vis_epochs``, we can\nget an estimate of evoked responses to auditory versus visual stimuli by\naveraging together the epochs in each condition. This is as simple as calling\nthe `~mne.Epochs.average` method on the `~mne.Epochs` object, and then using\na function from the :mod:`mne.viz` module to compare the global field power\nfor each sensor type of the two `~mne.Evoked` objects:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "aud_evoked = aud_epochs.average()\nvis_evoked = vis_epochs.average()\n\nmne.viz.plot_compare_evokeds(\n dict(auditory=aud_evoked, visual=vis_evoked),\n legend=\"upper left\",\n show_sensors=\"upper right\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also get a more detailed view of each `~mne.Evoked` object using other\nplotting methods such as `~mne.Evoked.plot_joint` or\n`~mne.Evoked.plot_topomap`. Here we'll examine just the EEG channels, and see\nthe classic auditory evoked N100-P200 pattern over dorso-frontal electrodes,\nthen plot scalp topographies at some additional arbitrary times:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "aud_evoked.plot_joint(picks=\"eeg\")\naud_evoked.plot_topomap(times=[0.0, 0.08, 0.1, 0.12, 0.2], ch_type=\"eeg\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Evoked objects can also be combined to show contrasts between conditions,\nusing the `mne.combine_evoked` function. A simple difference can be\ngenerated by passing ``weights=[1, -1]``. We'll then plot the difference wave\nat each sensor using `~mne.Evoked.plot_topo`:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "evoked_diff = mne.combine_evoked([aud_evoked, vis_evoked], weights=[1, -1])\nevoked_diff.pick(picks=\"mag\").plot_topo(color=\"r\", legend=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Inverse modeling\n\nFinally, we can estimate the origins of the evoked activity by projecting the\nsensor data into this subject's :term:`source space` (a set of points either\non the cortical surface or within the cortical volume of that subject, as\nestimated by structural MRI scans). 
MNE-Python supports lots of ways of doing\nthis (dynamic statistical parametric mapping, dipole fitting, beamformers,\netc.); here we'll use minimum-norm estimation (MNE) to generate a continuous\nmap of activation constrained to the cortical surface. MNE uses a linear\n:term:`inverse operator` to project EEG+MEG sensor measurements into the\nsource space. The inverse operator is computed from the\n:term:`forward solution` for this subject and an estimate of `the\ncovariance of sensor measurements `. For this\ntutorial we'll skip those computational steps and load a pre-computed inverse\noperator from disk (it's included with the `sample data\n`). Because this \"inverse problem\" is underdetermined (there\nis no unique solution), here we further constrain the solution by providing a\nregularization parameter specifying the relative smoothness of the current\nestimates in terms of a signal-to-noise ratio (where \"noise\" here is akin to\nbaseline activity level across all of cortex).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# load inverse operator\ninverse_operator_file = (\n sample_data_folder / \"MEG\" / \"sample\" / \"sample_audvis-meg-oct-6-meg-inv.fif\"\n)\ninv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)\n# set signal-to-noise ratio (SNR) to compute regularization parameter (\u03bb\u00b2)\nsnr = 3.0\nlambda2 = 1.0 / snr**2\n# generate the source time course (STC)\nstc = mne.minimum_norm.apply_inverse(\n vis_evoked, inv_operator, lambda2=lambda2, method=\"MNE\"\n) # or dSPM, sLORETA, eLORETA" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, in order to plot the source estimate on the subject's cortical\nsurface we'll also need the path to the sample subject's structural MRI files\n(the ``subjects_dir``):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# path to subjects' MRI files\nsubjects_dir = sample_data_folder / \"subjects\"\n# plot the STC\nstc.plot(\n initial_time=0.1, hemi=\"split\", views=[\"lat\", \"med\"], subjects_dir=subjects_dir\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The remaining tutorials have *much more detail* on each of these topics (as\nwell as many other capabilities of MNE-Python not mentioned here:\nconnectivity analysis, encoding/decoding models, lots more visualization\noptions, etc). Read on to learn more!\n\n.. LINKS\n\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }