{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Quality control (QC) reports with mne.Report\n\nQuality control (QC) is the process of systematically inspecting M/EEG data\n**throughout all stages of an analysis pipeline**, including raw data,\nintermediate preprocessing steps, and derived results.\n\nWhile QC often begins with an initial inspection of the raw recording,\nit is equally important to verify that signals continue to \"look reasonable\"\nafter operations such as filtering, artifact correction, epoching, and\naveraging. Issues introduced or missed at any stage can propagate downstream\nand invalidate later analyses.\n\nThis tutorial demonstrates how to create a **single, narrative QC report**\nusing :class:`mne.Report`, focusing on **what should be inspected and how the\nresults should be interpreted**, rather than exhaustively covering the API.\n\nFor clarity and reproducibility, the examples below focus on common QC checks\napplied at representative stages of an analysis pipeline. The same reporting\napproach can\u2014and should\u2014be reused whenever new processing steps are applied.\n\nWe use the MNE sample dataset for demonstration. Not all QC sections are\napplicable to every dataset (e.g., continuous head-position tracking), and\nthis tutorial explicitly handles such cases.\n\n
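One way to make the \"looks reasonable\" criterion concrete is to pair visual inspection with a quick numeric screen. The sketch below uses plain NumPy on a synthetic `(n_channels, n_times)` array; the function name and the threshold values are illustrative only, not MNE defaults, and a real pipeline would tune them per channel type:

```python
import numpy as np


def screen_channels(data, flat_thresh=1e-13, noisy_thresh=4e-10):
    """Flag channels whose peak-to-peak amplitude is implausibly
    small (flat) or large (noisy).

    ``data`` has shape (n_channels, n_times); thresholds are
    illustrative values, not MNE defaults.
    """
    ptp = data.max(axis=1) - data.min(axis=1)  # peak-to-peak per channel
    flat = np.where(ptp < flat_thresh)[0]
    noisy = np.where(ptp > noisy_thresh)[0]
    return flat, noisy


# Synthetic example: 5 channels of ~1 pT-scale noise, with one flat
# and one very noisy channel injected on purpose.
rng = np.random.default_rng(0)
data = 1e-12 * rng.standard_normal((5, 1000))
data[2] = 0.0   # a dead (flat) channel
data[4] *= 1e3  # a channel swamped by noise

flat, noisy = screen_channels(data)
print(flat, noisy)  # flags channel 2 as flat, channel 4 as noisy
```

Checks like this complement, rather than replace, the figure-based inspection that the report sections below provide.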

<div class=\"alert alert-info\"><h4>Note</h4><p>For several additional examples of complete reports, see the <a href=\"https://mne.tools/mne-bids-pipeline/stable/examples/examples.html\">MNE-BIDS-Pipeline QC reports</a>.</p></div>

\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The MNE-Python contributors\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from pathlib import Path\n\nimport mne\nfrom mne.preprocessing import ICA, create_eog_epochs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load the sample dataset\nWe load a pre-filtered MEG/EEG recording from the MNE sample dataset.\nOnly channels relevant for QC (MEG, EEG, EOG, stimulus) are retained.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_path = Path(mne.datasets.sample.data_path(verbose=False))\nsubject = \"sample\"\nsample_dir = data_path / \"MEG\" / subject\nsubjects_dir = data_path / \"subjects\"\n\nraw_path = sample_dir / \"sample_audvis_filt-0-40_raw.fif\"\n\n\nraw = mne.io.read_raw(raw_path)\n\n# We will also crop the dataset for speed\nraw.crop(0, 60).load_data()\n\n# Retain only channels relevant for QC to simplify visualization and\n# focus inspection on signals typically reviewed during data quality checks.\nraw.pick([\"meg\", \"eeg\", \"eog\", \"stim\"])\n\nsfreq = raw.info[\"sfreq\"] # Sampling Frequency (Hz)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create the QC report\nThe report acts as a container that collects figures, tables, and text\ninto a single HTML document.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "report = mne.Report(\n title=\"Sample dataset - Quality Control report\",\n subject=subject,\n subjects_dir=subjects_dir,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dataset overview\nA brief overview helps the reviewer immediately understand the scale and\nbasic properties 
of the dataset.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "html_overview = \"\"\"\nThis report presents a quality control (QC) overview of the MNE sample dataset.

\nFor information about the paradigm, see the MNE docs.\n\"\"\"\n\nreport.add_html(\n title=\"Overview\",\n html=html_overview,\n tags=(\"overview\",),\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Raw data inspection\nVisual inspection of raw data is the single most important QC step.\nHere we inspect both the time series and the power spectral density (PSD).\n\n- Look for channels with unusually large amplitudes or flat signals.\n- In the PSD, check for excessive low-frequency drift, strong line noise,\n or abnormal spectral shapes compared to neighboring channels.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "report.add_raw(\n raw,\n title=\"Raw data overview\",\n psd=False, # omit just for speed here\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Events and stimulus timing\nCorrect event detection is crucial for all subsequent epoch-based analyses.\n\n- Verify that the number of events matches expectations.\n- Check that event timing is plausible and evenly distributed.\n- Missing or duplicated events often indicate trigger channel issues.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "events = mne.find_events(raw)\n\nreport.add_events(\n events,\n sfreq=sfreq,\n title=\"Detected events\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Epoching and rejection statistics\nEpoching allows inspection of data segments time-locked to events, along\nwith automated rejection based on amplitude thresholds.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "event_id = {\n \"auditory/left\": 1,\n \"auditory/right\": 2,\n \"visual/left\": 3,\n \"visual/right\": 4,\n}\n\nepochs = mne.Epochs(\n raw,\n events,\n event_id=event_id,\n tmin=-0.2,\n tmax=0.5,\n baseline=(None, 0),\n 
reject=dict(eeg=150e-6),\n preload=True,\n)\n\nreport.add_epochs(\n epochs,\n title=\"Epochs and rejection statistics\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evoked responses\nAveraged responses should show physiologically plausible waveforms and\nreasonable signal-to-noise ratios.\n\n- Check that evoked responses have the expected polarity and timing.\n- Absence of clear evoked structure may indicate poor data quality or\n incorrect event definitions.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cov_path = sample_dir / \"sample_audvis-cov.fif\"\nevoked = mne.read_evokeds(\n sample_dir / \"sample_audvis-ave.fif\",\n baseline=(None, 0),\n)[0] # just one for speed\nevoked.decimate(4) # also for speed\n\nreport.add_evokeds(\n evokeds=evoked,\n noise_cov=cov_path,\n n_time_points=5,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ICA for artifact inspection\nIndependent Component Analysis (ICA) can be used during QC to identify\nstereotypical artifacts such as eye blinks and eye movements.\n\nFor QC purposes, ICA is typically run with a lightweight configuration\n(e.g., fewer components or temporal decimation) to provide rapid feedback\non data quality, rather than an optimized decomposition for final analysis.\n\n- Use the topographic maps to identify spatial patterns characteristic\n of artifacts (e.g., frontal patterns for eye blinks).\n- The component property viewer is intended for detailed inspection of\n individual components and is most informative when combined with\n epoched data or explicit artifact scoring.\n- Components correlated with EOG should show frontal topographies and\n stereotyped time courses.\n- Only components clearly associated with artifacts should be excluded.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ica = ICA(\n n_components=15,\n 
random_state=97,\n max_iter=50, # just for speed!\n)\n\n# Fit ICA using a decimated signal for speed\nica.fit(raw, picks=(\"meg\", \"eeg\"), decim=10, verbose=\"error\")\n\n\n# Identify EOG-related components\neog_epochs = create_eog_epochs(raw)\neog_inds, eog_scores = ica.find_bads_eog(eog_epochs)\nica.exclude = eog_inds\n\nreport.add_ica(\n ica=ica,\n inst=epochs,\n eog_evoked=eog_epochs.average(),\n eog_scores=eog_scores,\n title=\"ICA components (artifact inspection)\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MEG\u2013MRI coregistration\nAccurate coregistration is critical for source localization.\n\n- Head shape points should align well with the MRI scalp surface.\n- Systematic misalignment indicates digitization or transformation errors.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "trans = sample_dir / \"sample_audvis_raw-trans.fif\"\nreport.add_trans(\n trans,\n info=raw.info,\n title=\"MEG\u2013MRI-head coregistration\",\n subject=subject,\n subjects_dir=subjects_dir,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MRI and BEM surfaces\nBoundary Element Method (BEM) surfaces define the head model used for\nforward and inverse solutions.\n\n- Surfaces should be smooth, closed, and non-intersecting.\n- Poorly formed surfaces can severely degrade source estimates.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "report.add_bem(\n subject,\n subjects_dir=subjects_dir,\n title=\"BEM surfaces\",\n decim=20, # for speed\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## View the final report\nYou can set ``open_browser=True`` to have it pop open a browser tab if you want:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "report.save(\"qc_report.html\", overwrite=True, 
open_browser=False)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.3" } }, "nbformat": 4, "nbformat_minor": 0 }