{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Working with eye tracker data in MNE-Python\n\nIn this tutorial we will explore simultaneously recorded eye-tracking and EEG data from\na pupillary light reflex task. We will combine the eye-tracking and EEG data, and plot\nthe ERP and pupil response to the light flashes (i.e. the pupillary light reflex).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: Scott Huberty \n# Dominik Welke \n#\n#\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data loading\n\nAs usual we start by importing the modules we need and loading some\n`example data `: eye-tracking data recorded from SR research's\n``'.asc'`` file format, and EEG data recorded from EGI's ``'.mff'`` file format. We'll\npass ``create_annotations=[\"blinks\"]`` to :func:`~mne.io.read_raw_eyelink` so that\nonly blinks annotations are created (by default, annotations are created for blinks,\nsaccades, fixations, and experiment messages).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import mne\nfrom mne.datasets.eyelink import data_path\nfrom mne.preprocessing.eyetracking import read_eyelink_calibration\nfrom mne.viz.eyetracking import plot_gaze\n\net_fpath = data_path() / \"eeg-et\" / \"sub-01_task-plr_eyetrack.asc\"\neeg_fpath = data_path() / \"eeg-et\" / \"sub-01_task-plr_eeg.mff\"\n\nraw_et = mne.io.read_raw_eyelink(et_fpath, create_annotations=[\"blinks\"])\nraw_eeg = mne.io.read_raw_egi(eeg_fpath, events_as_annotations=True).load_data()\nraw_eeg.filter(1, 30)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. seealso:: `tut-importing-eyetracking-data`\n :class: sidebar\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The info structure of the eye-tracking data tells us we loaded a monocular recording\nwith 2 eyegaze channels (x- and y-coordinate positions), 1 pupil channel, 1 stim\nchannel, and 3 channels for the head distance and position (since this data was\ncollected using EyeLink's Remote mode).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "raw_et.info" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Ocular annotations\nBy default, EyeLink files will output ocular events (blinks, saccades, and\nfixations), and experiment messages. MNE will store these events\nas `mne.Annotations`. Ocular annotations contain channel information in the\n``'ch_names'`` key. This means that we can see which eye an ocular event occurred in,\nwhich can be useful for binocular recordings:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(raw_et.annotations[0][\"ch_names\"]) # a blink in the right eye" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Checking the calibration\n\nEyeLink ``.asc`` files can also include calibration information.\nMNE-Python can load and visualize those eye-tracking calibrations, which\nis a useful first step in assessing the quality of the eye-tracking data.\n:func:`~mne.preprocessing.eyetracking.read_eyelink_calibration`\nwill return a list of :class:`~mne.preprocessing.eyetracking.Calibration` instances,\none for each calibration. 
We can index that list to access a specific calibration.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cals = read_eyelink_calibration(et_fpath)\nprint(f\"number of calibrations: {len(cals)}\")\nfirst_cal = cals[0] # let's access the first (and only in this case) calibration\nprint(first_cal)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Calibrations have dict-like access; in addition to the attributes shown in\nthe output above, they also contain ``'positions'`` (the x and y coordinates\nof each calibration point), ``'gaze'`` (the x and y coordinates of the actual gaze\nposition for each calibration point), and ``'offsets'`` (the offset in visual degrees\nbetween the calibration position and the actual gaze position for each calibration\npoint). Below is an example of how to access these data:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(f\"offset of the first calibration point: {first_cal['offsets'][0]}\")\nprint(f\"offset for each calibration point: {first_cal['offsets']}\")\nprint(f\"x-coordinate for each calibration point: {first_cal['positions'].T[0]}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's plot the calibration to get a better look. Below we see the location where each\ncalibration point was displayed (gray dots), the positions of the actual gaze (red),\nand the offsets (in visual degrees) between the calibration position and the actual\ngaze position of each calibration point.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "first_cal.plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Standardizing eye-tracking data to SI units\n\nEyeLink stores eyegaze positions in pixels, and pupil size in arbitrary units.\nMNE-Python expects eyegaze positions to be in radians of visual angle, and pupil\nsize to be in meters. We can convert the eyegaze positions to radians using\n:func:`~mne.preprocessing.eyetracking.convert_units`. We'll pass the calibration\nobject we created above, after specifying the screen resolution, screen size, and\nscreen distance.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "first_cal[\"screen_resolution\"] = (1920, 1080)\nfirst_cal[\"screen_size\"] = (0.53, 0.3)\nfirst_cal[\"screen_distance\"] = 0.9\nmne.preprocessing.eyetracking.convert_units(raw_et, calibration=first_cal, to=\"radians\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plot the raw eye-tracking data\n\nLet's plot the raw eye-tracking data. Since we did not convert the pupil size to\nmeters, we'll pass a custom `dict` into the scalings argument to make the pupil size\ntraces legible when plotting.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ps_scalings = dict(pupil=1e3)\nraw_et.plot(scalings=ps_scalings)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Handling blink artifacts\n\nNaturally, there are blinks in our data, which occur within ``\"BAD_blink\"``\nannotations. During blink periods, eyegaze coordinates are not reported, and pupil\nsize data are ``0``. 
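To get a sense of how much of the recording is affected, you can sum the durations of\nthe blink annotations; the next cell is a small illustrative sketch and not part of\nthe original tutorial.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Illustrative sketch: estimate how much of the recording is covered by blink\n# annotations (their descriptions start with \"BAD_blink\").\nblink_durations = [\n    annot[\"duration\"]\n    for annot in raw_et.annotations\n    if annot[\"description\"].startswith(\"BAD_blink\")\n]\ntotal_blink = sum(blink_durations)\nprint(\n    f\"{len(blink_durations)} blinks, {total_blink:.2f} s in total \"\n    f\"({100 * total_blink / raw_et.times[-1]:.1f}% of the recording)\"\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "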
We don't want these blink artifacts biasing our analysis, so we\nhave two options: drop the blink periods from our data during epoching, or interpolate\nthe missing data during the blink periods. For this tutorial, let's interpolate the\nblink samples. We'll pass ``(0.05, 0.2)`` to\n:func:`~mne.preprocessing.eyetracking.interpolate_blinks`, expanding the interpolation\nwindow 50 ms before and 200 ms after the blink, so that the noisy data surrounding\nthe blink are also interpolated.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "mne.preprocessing.eyetracking.interpolate_blinks(\n raw_et, buffer=(0.05, 0.2), interpolate_gaze=True\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. important:: By default, :func:`~mne.preprocessing.eyetracking.interpolate_blinks`\n will only interpolate blinks in pupil channels. Passing\n ``interpolate_gaze=True`` will also interpolate the blink periods of the\n eyegaze channels. Be aware, however, that eye movements can occur\n during blinks, which makes the gaze data less suitable for interpolation.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Extract common stimulus events from the data\n\nIn this experiment, a photodiode attached to the display screen was connected to both\nthe EEG and eye-tracking systems. The photodiode was triggered by the light flash\nstimuli, causing a signal to be sent to both systems simultaneously, signifying the\nonset of the flash. The photodiode signal was recorded as a digital input channel in\nthe EEG and eye-tracking data. MNE loads these data as a :term:`stim channel`.\n\nWe'll extract the flash event onsets from both the EEG and eye-tracking data, as they\nare necessary for aligning the data from the two recordings.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "et_events = mne.find_events(raw_et, min_duration=0.01, shortest_event=1, uint_cast=True)\neeg_events = mne.find_events(raw_eeg, stim_channel=\"DIN3\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The output above shows us that both the EEG and EyeLink data used event ID ``2`` for\nthe flash events, so we'll create a dictionary that we can use later to label those\nevents when plotting.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "event_dict = dict(Flash=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Align the eye-tracking data with EEG data\n\nIn this dataset, eye-tracking and EEG data were recorded simultaneously, but on\ndifferent systems, so we'll need to align the data before we can analyze them\ntogether. We can do this using the :func:`~mne.preprocessing.realign_raw` function,\nwhich will align the data based on the timing of the shared events that are present in\nboth :class:`~mne.io.Raw` objects. We'll use the shared photodiode events we extracted\nabove, but first we need to convert the event onsets from samples to seconds. 
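Before realigning, it can also be worth a quick sanity check that both recordings\ncontain the same number of shared events and that their clocks drift only slightly;\nthe next cell is an illustrative sketch and not part of the original tutorial.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Illustrative sanity check before realignment: both systems should have\n# recorded the same number of photodiode events, and the time between the\n# first and last event should be nearly identical on both clocks.\nprint(f\"flash events: eye-tracking={len(et_events)}, EEG={len(eeg_events)}\")\nif len(et_events) == len(eeg_events) and len(et_events) > 1:\n    span_et = (et_events[-1, 0] - et_events[0, 0]) / raw_et.info[\"sfreq\"]\n    span_eeg = (eeg_events[-1, 0] - eeg_events[0, 0]) / raw_eeg.info[\"sfreq\"]\n    print(f\"first-to-last flash span: {span_et:.3f} s (eye-tracking) vs {span_eeg:.3f} s (EEG)\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "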
Once the\ndata have been aligned, we'll add the EEG channels to the eye-tracking raw object.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Convert event onsets from samples to seconds\net_flash_times = et_events[:, 0] / raw_et.info[\"sfreq\"]\neeg_flash_times = eeg_events[:, 0] / raw_eeg.info[\"sfreq\"]\n# Align the data\nmne.preprocessing.realign_raw(\n raw_et, raw_eeg, et_flash_times, eeg_flash_times, verbose=\"error\"\n)\n# Add EEG channels to the eye-tracking raw object\nraw_et.add_channels([raw_eeg], force_update_info=True)\ndel raw_eeg # free up some memory\n\n# Define a few channel groups of interest and plot the data\nfrontal = [\"E19\", \"E11\", \"E4\", \"E12\", \"E5\"]\noccipital = [\"E61\", \"E62\", \"E78\", \"E67\", \"E72\", \"E77\"]\npupil = [\"pupil_right\"]\n# picks must be numeric (not string) when passed to `raw.plot(..., order=)`\npicks_idx = mne.pick_channels(\n raw_et.ch_names, frontal + occipital + pupil, ordered=True\n)\nraw_et.plot(\n events=et_events,\n event_id=event_dict,\n event_color=\"g\",\n order=picks_idx,\n scalings=ps_scalings,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Showing the pupillary light reflex\nNow let's extract epochs around our flash events. We should see a clear pupil\nconstriction response to the flashes.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Skip baseline correction for now. We will apply baseline correction later.\nepochs = mne.Epochs(\n raw_et, events=et_events, event_id=event_dict, tmin=-0.3, tmax=3, baseline=None\n)\ndel raw_et # free up some memory\nepochs[:8].plot(\n events=et_events, event_id=event_dict, order=picks_idx, scalings=ps_scalings\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this experiment, the participant was instructed to fixate on a crosshair in the\ncenter of the screen. Let's plot the gaze position data to confirm that the\nparticipant primarily kept their gaze fixated at the center of the screen.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plot_gaze(epochs, calibration=first_cal)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. seealso:: `tut-eyetrack-heatmap`\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's plot the evoked responses to the light flashes to get a sense of the\naverage pupillary light response, and the associated ERP in the EEG data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "epochs.apply_baseline().average().plot(picks=occipital + pupil)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }