{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# EEG analysis - Event-Related Potentials (ERPs)\n\nThis tutorial shows how to perform standard ERP analyses in MNE-Python. Most of\nthe material here is covered in other tutorials too, but for convenience the\nfunctions and methods most useful for ERP analyses are collected here, with\nlinks to other tutorials where more detailed information is given.\n\nAs usual we'll start by importing the modules we need and loading some example\ndata. Instead of parsing the events from the raw data's :term:`stim channel`\n(like we do in `this tutorial `), we'll load\nthe events from an external events file. Finally, to speed up computations\nwe'll crop the raw data from ~4.5 minutes down to 90 seconds.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The MNE-Python contributors.\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nimport mne\n\nroot = mne.datasets.sample.data_path() / \"MEG\" / \"sample\"\nraw_file = root / \"sample_audvis_filt-0-40_raw.fif\"\nraw = mne.io.read_raw_fif(raw_file, preload=False)\n\nevents_file = root / \"sample_audvis_filt-0-40_raw-eve.fif\"\nevents = mne.read_events(events_file)\n\nraw.crop(tmax=90) # in seconds (happens in-place)\n# discard events >90 seconds (not strictly necessary, but avoids some warnings)\nevents = events[events[:, 0] <= raw.last_samp]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The file that we loaded has already been partially processed: 3D sensor\nlocations have been saved as part of the ``.fif`` file, the data have been\nlow-pass filtered at 40 Hz, and a common average reference is set for the\nEEG channels, stored as a projector (see `section-avg-ref-proj` in the\n`tut-set-eeg-ref` tutorial for more info about when you may want to do\nthis). We'll discuss how to do each of these below.\n\nSince this is a combined EEG/MEG dataset, let's start by restricting the data\nto just the EEG and EOG channels. This will cause the other projectors saved\nin the file (which apply only to magnetometer channels) to be removed. By\nlooking at the measurement info we can see that we now have 59 EEG channels\nand 1 EOG channel.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "raw.pick([\"eeg\", \"eog\"]).load_data()\nraw.info" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Channel names and types\n\nIn practice it is quite common to have some channels labeled as EEG that are\nactually EOG channels. :class:`~mne.io.Raw` objects have a\n:meth:`~mne.io.Raw.set_channel_types` method that can be used to change a\nchannel that is mislabeled as ``eeg`` to ``eog``.\n\nYou can also rename channels using :meth:`~mne.io.Raw.rename_channels`.\nDetailed examples of both of these methods can be found in the tutorial\n`tut-raw-class`.\n\nIn our data set, all channel types are already correct. 
Therefore, we'll only\nremove a space and a leading zero in the channel names and convert to\nlowercase:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "channel_renaming_dict = {name: name.replace(\" 0\", \"\").lower() for name in raw.ch_names}\n_ = raw.rename_channels(channel_renaming_dict) # happens in-place" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
<div class=\"alert alert-info\"><h4>Note</h4><p>The assignment to a temporary name ``_`` (the ``_ =`` part) is included\n here to suppress automatic printing of the ``raw`` object. You do not\n have to do this in your interactive analysis.</p></div>
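\n\nIn this dataset no such fix is needed, but for illustration, correcting a hypothetical mislabeled channel would look like the cell below (the mapping is made up, and we work on a copy so the tutorial data stay unchanged):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Hypothetical example: pretend \"eeg53\" actually recorded eye movements.\n# We relabel it as EOG on a copy, leaving the (correctly typed) data intact.\nraw_retyped = raw.copy()\nraw_retyped.set_channel_types({\"eeg53\": \"eog\"})  # happens in-place on the copy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "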
\n\n## Channel locations\n\nThe tutorial `tut-sensor-locations` describes how sensor locations are\nhandled in great detail. To briefly summarize: MNE-Python distinguishes\n:term:`montages ` (which contain 3D sensor locations ``x``, ``y``,\nand ``z``, in meters) from :term:`layouts ` (which define 2D sensor\narrangements for plotting schematic sensor location diagrams). Additionally,\nmontages may specify *idealized* sensor locations (based on, e.g., an\nidealized spherical head model), or they may contain *realistic* sensor\nlocations obtained by digitizing the 3D locations of the sensors when placed\non a real person's head.\n\nThis dataset has realistic digitized 3D sensor locations saved as part of the\n``.fif`` file, so we can view the sensor locations in 2D or 3D using the\n:meth:`~mne.io.Raw.plot_sensors` method:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "raw.plot_sensors(show_names=True)\nfig = raw.plot_sensors(\"3d\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're working with a standard montage like the [10\u201320](ten_twenty_)\nsystem, you can add sensor locations to the data with\n``raw.set_montage('standard_1020')`` (see `tut-sensor-locations` for\ninformation on other standard montages included with MNE-Python).\n\nIf you have digitized realistic sensor locations, there are dedicated\nfunctions for loading those digitization files into MNE-Python (see\n`reading-dig-montages` for discussion and `dig-formats` for a list\nof supported formats). Once loaded, the digitized sensor locations can be\nadded to the data by passing the loaded montage object to\n:meth:`~mne.io.Raw.set_montage`.\n\n\n## Setting the EEG reference\n\nAs mentioned, this data already has an EEG common average reference\nadded as a :term:`projector`. We can view the effect of this projector on the\nraw data by plotting it with and without the projector applied:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for proj in (False, True):\n with mne.viz.use_browser_backend(\"matplotlib\"):\n fig = raw.plot(\n n_channels=5, proj=proj, scalings=dict(eeg=50e-6), show_scrollbars=False\n )\n fig.subplots_adjust(top=0.9) # make room for title\n ref = \"Average\" if proj else \"No\"\n fig.suptitle(f\"{ref} reference\", size=\"xx-large\", weight=\"bold\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The referencing scheme can be changed with the function\n:func:`mne.set_eeg_reference` (which by default operates on a *copy* of the\ndata) or the :meth:`raw.set_eeg_reference() `\nmethod (which always modifies the data *in-place*). The tutorial\n`tut-set-eeg-ref` shows several examples.\n\n\n## Filtering\n\nMNE-Python has extensive support for different ways of filtering data. For a\ngeneral discussion of filter characteristics and MNE-Python defaults, see\n`disc-filtering`. For practical examples of how to apply filters to your\ndata, see `tut-filter-resample`. 
Here, we'll apply a simple high-pass\nfilter for illustration:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "raw.filter(l_freq=0.1, h_freq=None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evoked responses: epoching and averaging\n\nThe general process for extracting evoked responses from continuous data is\nto use the :class:`~mne.Epochs` constructor, and then average the resulting\nepochs to create an :class:`~mne.Evoked` object. In MNE-Python, events are\nrepresented as a :class:`NumPy array ` containing event\nlatencies (in samples) and integer event codes. The event codes are stored in\nthe last column of the events array:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "np.unique(events[:, -1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `tut-event-arrays` tutorial discusses event arrays in more detail.\nInteger event codes are mapped to more descriptive text using a Python\n:class:`dictionary ` usually called ``event_id``. This mapping is\ndetermined by your experiment (i.e., it reflects which event codes you chose\nto represent different experimental events or conditions). The\n`sample-dataset` data uses the following mapping:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "event_dict = {\n \"auditory/left\": 1,\n \"auditory/right\": 2,\n \"visual/left\": 3,\n \"visual/right\": 4,\n \"face\": 5,\n \"buttonpress\": 32,\n}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can proceed to epoch the continuous data. An interactive plot allows\nus to click on epochs to mark them as \"bad\" and drop them from the\nanalysis (it is not interactive on this documentation website, but will be\nwhen you run `epochs.plot() ` in a Python console).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.3, tmax=0.7, preload=True)\nfig = epochs.plot(events=events)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is also possible to automatically drop epochs (either when first creating\nthem or later on) by providing maximum peak-to-peak signal value thresholds\n(passed to :class:`~mne.Epochs` as the ``reject`` parameter; see\n`tut-reject-epochs-section` for details). You can also do this after\nthe epochs are already created using :meth:`~mne.Epochs.drop_bad`:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "reject_criteria = dict(eeg=100e-6, eog=200e-6) # 100 \u00b5V, 200 \u00b5V\nepochs.drop_bad(reject=reject_criteria)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we generate a barplot of which channels contributed most to epochs\ngetting rejected. If one channel is responsible for many epoch rejections,\nit may be worthwhile to mark that channel as \"bad\" in the\n:class:`~mne.io.Raw` object and then re-run epoching (fewer channels with\nmore good epochs may be preferable to keeping all channels but losing many\nepochs). 
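Such a re-run might look like the sketch below (the channel name is a placeholder, since no channel in this dataset is known to be bad, and we use a copy so the data above are unaffected):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Hypothetical sketch: mark an offending channel as bad on a copy of the\n# data, then re-run epoching. Channels listed in info[\"bads\"] are ignored\n# by the peak-to-peak rejection, so they no longer cause dropped epochs.\nraw_checked = raw.copy()\nraw_checked.info[\"bads\"] += [\"eeg25\"]  # placeholder channel name\nepochs_checked = mne.Epochs(raw_checked, events, event_id=event_dict, tmin=-0.3, tmax=0.7, reject=reject_criteria, preload=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "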
See `tut-bad-channels` for more information.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "epochs.plot_drop_log()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Epochs can also be dropped automatically if the event around which the epoch\nis created is too close to the start or end of the :class:`~mne.io.Raw`\nobject (e.g., if the epoch would extend past the end of the recording; this\nis the cause for the \"TOO_SHORT\" entry in the\n:meth:`~mne.Epochs.plot_drop_log` plot).\n\nEpochs may also be dropped automatically if the :class:`~mne.io.Raw` object\ncontains :term:`annotations` that begin with either ``bad`` or ``edge``\n(\"edge\" annotations are automatically inserted when concatenating two or more\n:class:`~mne.io.Raw` objects). See `tut-reject-data-spans` for more\ninformation on annotation-based epoch rejection.\n\nNow that we've dropped all bad epochs, let's look at our evoked responses for\nsome conditions we care about. Here, the :meth:`~mne.Epochs.average` method\nwill create an :class:`~mne.Evoked` object, which we can then plot. Notice\nthat we select which condition we want to average using square-bracket\nindexing (like for a :class:`dictionary `). This returns a subset with\nonly the desired epochs, which we then average:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "l_aud = epochs[\"auditory/left\"].average()\nl_vis = epochs[\"visual/left\"].average()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These :class:`~mne.Evoked` objects have their own interactive plotting method\n(though again, it won't be interactive on the documentation website).\nClicking and dragging a span of time will generate a topography of scalp\npotentials for the selected time segment. Here, we also demonstrate built-in\ncolor-coding the channel traces by location:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig1 = l_aud.plot()\nfig2 = l_vis.plot(spatial_colors=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Scalp topographies can also be obtained non-interactively with the\n:meth:`~mne.Evoked.plot_topomap` method. Here, we display topomaps of the\naverage evoked potential in 50 ms time windows centered at -200 ms, 100 ms,\nand 400 ms.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "l_aud.plot_topomap(times=[-0.2, 0.1, 0.4], average=0.05)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Considerable customization of these plots is possible, see the docstring of\n:meth:`~mne.Evoked.plot_topomap` for details.\n\nThere is also a built-in method for combining butterfly plots of the signals\nwith scalp topographies called :meth:`~mne.Evoked.plot_joint`. 
Like in\n:meth:`~mne.Evoked.plot_topomap`, you can specify times for the scalp\ntopographies or you can let the method choose times automatically as shown\nhere:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "l_aud.plot_joint()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Global field power (GFP)\n\nGlobal field power :footcite:`Lehmann1980,Lehmann1984,Murray2008` is,\ngenerally speaking, a measure of agreement of the signals picked up by all\nsensors across the entire scalp: if all sensors have the same value at a\ngiven time point, the GFP will be zero at that time point. If the signals\ndiffer, the GFP will be non-zero at that time point. GFP\npeaks may reflect \"interesting\" brain activity, warranting further\ninvestigation. Mathematically, the GFP is the population standard\ndeviation across all sensors, calculated separately for every time point.\n\nYou can plot the GFP using `evoked.plot(gfp=True) `. The GFP\ntrace will be black if ``spatial_colors=True`` and green otherwise. The EEG\nreference does not affect the GFP:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for evk in (l_aud, l_vis):\n evk.plot(gfp=True, spatial_colors=True, ylim=dict(eeg=[-12, 12]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To plot the GFP by itself, you can pass ``gfp='only'`` (this makes it easier\nto read off the GFP data values, because the scale is aligned):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "l_aud.plot(gfp=\"only\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The GFP is the population standard deviation of the signal\nacross channels. To compute it manually, we can leverage the fact that\n`evoked.data ` is a :class:`NumPy array `,\nand verify by plotting it using plain Matplotlib commands:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "gfp = l_aud.data.std(axis=0, ddof=0)\n\n# Reproducing the MNE-Python plot style seen above\nfig, ax = plt.subplots()\nax.plot(l_aud.times, gfp * 1e6, color=\"lime\")\nax.fill_between(l_aud.times, gfp * 1e6, color=\"lime\", alpha=0.2)\nax.set(xlabel=\"Time (s)\", ylabel=\"GFP (\u00b5V)\", title=\"EEG\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Averaging across channels with regions of interest\n\nSince our sample data contains responses to left and right auditory and\nvisual stimuli, we may want to compare left versus right regions of interest\n(ROIs). To average across channels in a given ROI, we first find the relevant\nchannel indices. 
Revisiting the 2D sensor plot above, we might choose the\nfollowing channels for left and right ROIs, respectively:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "left = [\"eeg17\", \"eeg18\", \"eeg25\", \"eeg26\"]\nright = [\"eeg23\", \"eeg24\", \"eeg34\", \"eeg35\"]\n\nleft_ix = mne.pick_channels(l_aud.info[\"ch_names\"], include=left)\nright_ix = mne.pick_channels(l_aud.info[\"ch_names\"], include=right)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can create a new Evoked object with two virtual channels (one for each\nROI):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "roi_dict = dict(left_ROI=left_ix, right_ROI=right_ix)\nroi_evoked = mne.channels.combine_channels(l_aud, roi_dict, method=\"mean\")\nprint(roi_evoked.info[\"ch_names\"])\nroi_evoked.plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparing conditions\n\nIf we wanted to contrast auditory to visual stimuli, a useful function is\n:func:`mne.viz.plot_compare_evokeds`. By default, this function will combine\nall channels in each evoked object using GFP (or RMS for MEG channels); here\ninstead we specify to combine by averaging, and restrict it to a subset of\nchannels by passing ``picks``:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "evokeds = dict(auditory=l_aud, visual=l_vis)\npicks = [f\"eeg{n}\" for n in range(10, 15)]\nmne.viz.plot_compare_evokeds(evokeds, picks=picks, combine=\"mean\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also generate confidence intervals by treating each epoch as a\nseparate observation using :meth:`~mne.Epochs.iter_evoked`. A confidence\ninterval across subjects could also be obtained by passing a list of\n:class:`~mne.Evoked` objects (one per subject) to the\n:func:`~mne.viz.plot_compare_evokeds` function.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "evokeds = dict(\n auditory=list(epochs[\"auditory/left\"].iter_evoked()),\n visual=list(epochs[\"visual/left\"].iter_evoked()),\n)\nmne.viz.plot_compare_evokeds(evokeds, combine=\"mean\", picks=picks)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also compare conditions by subtracting one :class:`~mne.Evoked` object\nfrom another using the :func:`mne.combine_evoked` function (this function\nalso supports pooling of epochs without subtraction).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "aud_minus_vis = mne.combine_evoked([l_aud, l_vis], weights=[1, -1])\naud_minus_vis.plot_joint()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
<div class=\"alert alert-danger\"><h4>Warning</h4><p>The code above yields an **equal-weighted difference**. If you have\n different numbers of epochs per condition, you might want to equalize the\n number of events per condition first by using\n `epochs.equalize_event_counts() `\n before averaging.</p></div>
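\n\nA minimal sketch of that equalization step, done on a copy so the ``epochs`` object used elsewhere in this tutorial stays unchanged:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sketch: equalize the epoch counts of the two conditions on a copy, then\n# rebuild the difference wave from the equalized averages.\nepochs_eq = epochs.copy().equalize_event_counts([\"auditory/left\", \"visual/left\"])[0]\naud_minus_vis_eq = mne.combine_evoked([epochs_eq[\"auditory/left\"].average(), epochs_eq[\"visual/left\"].average()], weights=[1, -1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "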
\n\n\n## Grand averages\n\nTo compute grand averages across conditions (or subjects), you can pass a\nlist of :class:`~mne.Evoked` objects to :func:`mne.grand_average`. The result\nis another :class:`~mne.Evoked` object.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "grand_average = mne.grand_average([l_aud, l_vis])\nprint(grand_average)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For combining *conditions* it is also possible to make use of :term:`HED`\ntags in the condition names when selecting which epochs to average. For\nexample, we have the condition names:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "list(event_dict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can select the auditory conditions (left and right together) by passing:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "epochs[\"auditory\"].average()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See `tut-section-subselect-epochs` for more details on that.\n\nThe tutorials `tut-epochs-class` and `tut-evoked-class` have many\nmore details about working with the :class:`~mne.Epochs` and\n:class:`~mne.Evoked` classes.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Amplitude and latency measures\nIt is common in ERP research to extract measures of amplitude or latency to\ncompare across different conditions. There are many measures that can be\nextracted from ERPs, and many of these are detailed (including the respective\nstrengths and weaknesses) in chapter 9 of Luck :footcite:`Luck2014` (also see\nthe [Measurement Tool](https://bit.ly/37uydRw) in the ERPLAB Toolbox\n:footcite:`Lopez-CalderonLuck2014`).\n\nThis part of the tutorial will demonstrate how to extract three common\nmeasures:\n\n* Peak latency\n* Peak amplitude\n* Mean amplitude\n\n### Peak latency and amplitude\n\nThe most common measures of amplitude and latency are peak measures.\nPeak measures are basically the maximum amplitude of the signal in a\nspecified time window and the time point (or latency) at which the peak\namplitude occurred.\n\nPeak measures can be obtained using the :meth:`~mne.Evoked.get_peak` method.\nThere are two important things to point out about\n:meth:`~mne.Evoked.get_peak`. First, it finds the strongest peak\nlooking across **all channels** of the selected type that are available in\nthe :class:`~mne.Evoked` object. As a consequence, if you want to restrict\nthe search to a group of channels or a single channel, you\nshould first use the :meth:`~mne.Evoked.pick` or\n:meth:`~mne.Evoked.pick_channels` methods. Second, the\n:meth:`~mne.Evoked.get_peak` method can find different types of peaks using\nthe ``mode`` argument. There are three options:\n\n* ``mode='pos'``: finds the peak with a positive voltage (ignores\n negative voltages)\n* ``mode='neg'``: finds the peak with a negative voltage (ignores\n positive voltages)\n* ``mode='abs'``: finds the peak with the largest absolute voltage\n regardless of sign (positive or negative)\n\nThe following example demonstrates how to find the first positive peak in the\nERP (i.e., the P100) for the left visual condition (i.e., the\n``l_vis`` :class:`~mne.Evoked` object). The time window used to search for\nthe peak ranges from 0.08 to 0.12 s. 
This time window was selected because it\nis when the P100 typically occurs. Note that all ``'eeg'`` channels are submitted\nto the :meth:`~mne.Evoked.get_peak` method.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Define a function to print out the channel (ch) containing the\n# peak latency (lat; in msec) and amplitude (amp, in \u00b5V), with the\n# time range (tmin and tmax) that was searched.\n# This function will be used throughout the remainder of the tutorial.\ndef print_peak_measures(ch, tmin, tmax, lat, amp):\n print(f\"Channel: {ch}\")\n print(f\"Time Window: {tmin * 1e3:.3f} - {tmax * 1e3:.3f} ms\")\n print(f\"Peak Latency: {lat * 1e3:.3f} ms\")\n print(f\"Peak Amplitude: {amp * 1e6:.3f} \u00b5V\")\n\n\n# Get peak amplitude and latency from a good time window that contains the peak\ngood_tmin, good_tmax = 0.08, 0.12\nch, lat, amp = l_vis.get_peak(\n ch_type=\"eeg\", tmin=good_tmin, tmax=good_tmax, mode=\"pos\", return_amplitude=True\n)\n\n# Print output from the good time window that contains the peak\nprint(\"** PEAK MEASURES FROM A GOOD TIME WINDOW **\")\nprint_peak_measures(ch, good_tmin, good_tmax, lat, amp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The output shows that channel ``eeg55`` had the maximum positive peak in\nthe chosen time window from all of the ``'eeg'`` channels searched.\nIn practice, one might want to pull out the peak for\nan *a priori* region of interest or a single channel depending on the study.\nThis can be done by combining the :meth:`~mne.Evoked.pick`\nor :meth:`~mne.Evoked.pick_channels` methods with the\n:meth:`~mne.Evoked.get_peak` method.\n\nHere, let's assume we believe the effects of interest will occur\nat ``eeg59``.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# First, return a copy of l_vis to select the channel from\nl_vis_roi = l_vis.copy().pick(\"eeg59\")\n\n# Get the peak and latency measure from the selected channel\nch_roi, lat_roi, amp_roi = l_vis_roi.get_peak(\n tmin=good_tmin, tmax=good_tmax, mode=\"pos\", return_amplitude=True\n)\n\n# Print output\nprint(\"** PEAK MEASURES FOR ONE CHANNEL FROM A GOOD TIME WINDOW **\")\nprint_peak_measures(ch_roi, good_tmin, good_tmax, lat_roi, amp_roi)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the peak latencies are the same in channels ``eeg55`` and ``eeg59``,\nthe peak amplitudes differ. This approach can also be applied to virtual\nchannels created with the :func:`~mne.channels.combine_channels` function and\ndifference waves created with the :func:`mne.combine_evoked` function (see\n``aud_minus_vis`` in section `Comparing conditions`_ above).\n\nPeak measures are very susceptible to high frequency noise in the\nsignal (for discussion, see :footcite:`Luck2014`). Specifically, high\nfrequency noise positively biases peak amplitude measures. This bias can\nconfound comparisons across conditions where ERPs differ in the level of high\nfrequency noise, such as when the conditions differ in the number of trials\ncontributing to the ERP. One way to avoid this is to apply a non-causal\nlow-pass filter to the ERP. Low-pass filters reduce the contribution of high\nfrequency noise by smoothing out fast (i.e., high frequency) fluctuations in\nthe signal (see `disc-filtering`). 
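For illustration, such a low-pass could be applied to a copy of the evoked response before extracting peak measures; the 20 Hz cutoff below is an arbitrary choice for this sketch, not a recommendation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sketch: low-pass filter a copy of the single-channel evoked response and\n# extract the peak from the smoothed signal (20 Hz cutoff chosen arbitrarily).\nl_vis_smooth = l_vis_roi.copy().filter(l_freq=None, h_freq=20.0)\nch_sm, lat_sm, amp_sm = l_vis_smooth.get_peak(tmin=good_tmin, tmax=good_tmax, mode=\"pos\", return_amplitude=True)\nprint_peak_measures(ch_sm, good_tmin, good_tmax, lat_sm, amp_sm)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "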
While this can reduce the positive\nbias in peak amplitude measures caused by high frequency noise, low-pass\nfiltering the ERP can introduce challenges in interpreting peak latency\nmeasures for effects of interest :footcite:`Rousselet2012,VanRullen2011`.\n\nIf using peak measures, it is critical to visually inspect the data to\nmake sure the selected time window actually contains a peak. The\n:meth:`~mne.Evoked.get_peak` method detects the maximum or minimum voltage in\nthe specified time range and returns the latency and amplitude of this peak.\nThere is no guarantee that this method will return an actual peak. Instead,\nit may return a value on the rising or falling edge of a peak we are trying\nto find.\n\nThe following example demonstrates why visual inspection is crucial. Below,\nwe use a known bad time window (0.095 to 0.135 s) to search for a peak in\nchannel ``eeg59``.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Get BAD peak measures\nbad_tmin, bad_tmax = 0.095, 0.135\nch_roi, bad_lat_roi, bad_amp_roi = l_vis_roi.get_peak(\n mode=\"pos\", tmin=bad_tmin, tmax=bad_tmax, return_amplitude=True\n)\n\n# Print output\nprint(\"** PEAK MEASURES FOR ONE CHANNEL FROM A BAD TIME WINDOW **\")\nprint_peak_measures(ch_roi, bad_tmin, bad_tmax, bad_lat_roi, bad_amp_roi)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If all we had were the above values, it would be unclear if they are truly\nidentifying a peak in the ERP. In fact, the 0.095 to 0.135 s time window\nactually does not contain the true peak, which is shown in the top panel\nbelow. The bad time window (highlighted in orange) does not contain the true\npeak (the pink star). In contrast, the time window defined initially (0.08 to\n0.12 s; highlighted in blue) returns an actual peak instead of just a\nmaximum or minimum in the searched time window. Visual inspection will always\nhelp you to convince yourself that the returned values are actual peaks.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, axs = plt.subplots(nrows=2, ncols=1, layout=\"tight\")\nwords = ((\"Bad\", \"missing\"), (\"Good\", \"finding\"))\ntimes = (np.array([bad_tmin, bad_tmax]), np.array([good_tmin, good_tmax]))\ncolors = (\"C1\", \"C0\")\n\nfor ix, ax in enumerate(axs):\n title = \"{} time window {} peak\".format(*words[ix])\n l_vis_roi.plot(axes=ax, time_unit=\"ms\", show=False, titles=title)\n ax.plot(lat_roi * 1e3, amp_roi * 1e6, marker=\"*\", color=\"C6\")\n ax.axvspan(*(times[ix] * 1e3), facecolor=colors[ix], alpha=0.3)\n ax.set_xlim(-50, 150) # Show zoomed in around peak" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Mean Amplitude\n\nAnother common practice in ERP studies is to define a component (or effect)\nas the mean amplitude within a specified time window. One advantage of this\napproach is that it is less sensitive to high frequency noise (compared to\npeak amplitude measures), because averaging over a time window acts as a\nlow-pass filter (see discussion in the previous section\n`Peak latency and amplitude`_).\n\nWhen using mean amplitude measures, selecting the time window based on\nthe effect of interest (e.g., the difference between two conditions) can\ninflate the likelihood of finding false positives in your\nresults :footcite:`LuckGaspelin2017`. There are other, and\nbetter, ways to identify a time window to use for extracting mean amplitude\nmeasures. 
First, you can use an *a priori* time window based on prior\nresearch.\nA second option is to define a time window from an independent condition or\nset of trials not used in the analysis (e.g., a \"localizer\"). A third\napproach is to define a time window using the across-condition grand average.\nThis latter\napproach is not circular because the across-condition mean and condition\ndifference are independent of one another. The issues discussed above also\napply to selecting channels used for analysis.\n\nThe following example demonstrates how to pull out the mean amplitude\nfrom the left visual condition (i.e., the ``l_vis`` :class:`~mne.Evoked`\nobject) from selected channels and time windows. Stimulating the\nleft visual field increases neural activity of visual cortex in the\ncontralateral (i.e., right) hemisphere. We can test this by examining the\namplitude of the ERP for left visual field stimulation over right\n(contralateral) and left (ipsilateral) channels. The channels used for this\nanalysis are ``eeg54`` and ``eeg57`` (left hemisphere), and ``eeg59`` and\n``eeg55`` (right hemisphere). The time window used is 0.08 (``good_tmin``)\nto 0.12 s (``good_tmax``) as it corresponds to when the P100 typically\noccurs.\nThe P100 is sensitive to left and right visual field stimulation. The mean\namplitude is extracted from the above four channels and stored in a\n:class:`pandas.DataFrame`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Select all of the channels and crop to the time window\nchannels = [\"eeg54\", \"eeg57\", \"eeg55\", \"eeg59\"]\nhemisphere = [\"left\", \"left\", \"right\", \"right\"]\nl_vis_mean_roi = l_vis.copy().pick(channels).crop(tmin=good_tmin, tmax=good_tmax)\n\n# Extract mean amplitude in \u00b5V over time\nmean_amp_roi = l_vis_mean_roi.data.mean(axis=1) * 1e6\n\n# Store the data in a data frame\nmean_amp_roi_df = pd.DataFrame(\n {\n \"ch_name\": l_vis_mean_roi.ch_names,\n \"hemisphere\": hemisphere,\n \"mean_amp\": mean_amp_roi,\n }\n)\n\n# Print the data frame\nprint(mean_amp_roi_df.groupby(\"hemisphere\").mean(numeric_only=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As demonstrated in this example, the mean amplitude was higher and\npositive in right compared to left hemisphere channels. It should be\nreiterated that both spatial and temporal windows used in the analysis should\nbe determined in an independent manner (e.g., defined *a priori* from prior\nresearch, a \"localizer\" or another independent condition) and not based\non the data you will use to test your hypotheses.\n\nThe example can be modified to extract the mean amplitude\nfrom all channels and store the resulting output in a\n:class:`pandas.DataFrame`. 
This can be useful for statistical analyses\nconducted in other programming languages.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Extract mean amplitude for all channels in l_vis (including `eog`)\nl_vis_cropped = l_vis.copy().crop(tmin=good_tmin, tmax=good_tmax)\nmean_amp_all = l_vis_cropped.data.mean(axis=1) * 1e6\nmean_amp_all_df = pd.DataFrame(\n {\"ch_name\": l_vis_cropped.info[\"ch_names\"], \"mean_amp\": mean_amp_all}\n)\nmean_amp_all_df[\"tmin\"] = good_tmin\nmean_amp_all_df[\"tmax\"] = good_tmax\nmean_amp_all_df[\"condition\"] = \"Left/Visual\"\nwith pd.option_context(\"display.max_columns\", None):\n print(mean_amp_all_df.head())\n print(mean_amp_all_df.tail())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n### References\n.. footbibliography::\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }