{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Statistical inference\n\nHere we will briefly cover multiple concepts of inferential statistics in an\nintroductory manner, and demonstrate how to use some MNE statistical functions.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: Eric Larson \n#\n# License: BSD-3-Clause\n# Copyright the MNE-Python contributors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from functools import partial\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D # noqa: F401, analysis:ignore\nfrom scipy import stats\n\nimport mne\nfrom mne.stats import (\n bonferroni_correction,\n fdr_correction,\n permutation_cluster_1samp_test,\n permutation_t_test,\n ttest_1samp_no_p,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hypothesis testing\nNull hypothesis\n^^^^^^^^^^^^^^^\nFrom [Wikipedia](https://en.wikipedia.org/wiki/Null_hypothesis)_:\n\n In inferential statistics, a general statement or default position that\n there is no relationship between two measured phenomena, or no\n association among groups.\n\nWe typically want to reject a **null hypothesis** with\nsome probability (e.g., p < 0.05). This probability is also called the\nsignificance level $\\alpha$.\nTo think about what this means, let's follow the illustrative example from\n:footcite:`RidgwayEtAl2012` and construct a toy dataset consisting of a\n40 \u00d7 40 square with a \"signal\" present in the center with white noise added\nand a Gaussian smoothing kernel applied.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "width = 40\nn_subjects = 10\nsignal_mean = 100\nsignal_sd = 100\nnoise_sd = 0.01\ngaussian_sd = 5\nsigma = 1e-3 # sigma for the \"hat\" method\nn_permutations = \"all\" # run an exact test\nn_src = width * width\n\n# For each \"subject\", make a smoothed noisy signal with a centered peak\nrng = np.random.RandomState(2)\nX = noise_sd * rng.randn(n_subjects, width, width)\n# Add a signal at the center\nX[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd\n# Spatially smooth with a 2D Gaussian kernel\nsize = width // 2 - 1\ngaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd**2)))\nfor si in range(X.shape[0]):\n for ri in range(X.shape[1]):\n X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, \"same\")\n for ci in range(X.shape[2]):\n X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, \"same\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data averaged over all subjects looks like this:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(layout=\"constrained\")\nax.imshow(X.mean(0), cmap=\"inferno\")\nax.set(xticks=[], yticks=[], title=\"Data averaged over subjects\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case, a null hypothesis we could test for each voxel is:\n\n There is no difference between the mean value and zero\n ($H_0 \\colon \\mu = 0$).\n\nThe alternative hypothesis, then, is that the voxel has a non-zero mean\n($H_1 \\colon \\mu \\neq 0$).\nThis is a *two-tailed* test because the mean could be less than\nor greater than zero, whereas a *one-tailed* test would test only one of\nthese 
possibilities, i.e. $H_1 \\colon \\mu > 0$ or\n$H_1 \\colon \\mu < 0$.\n\n
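To make this concrete, here is a minimal sketch (using only ``scipy``; the\nt-value and degrees of freedom are arbitrary example numbers, not taken from\nthe analysis below) of how the same t statistic maps to one- and two-tailed\np-values:\n\n```python\nfrom scipy import stats\n\nt_val = 2.5  # hypothetical example statistic\ndf = 9  # degrees of freedom, e.g., n_subjects - 1 with 10 subjects\n\np_one = stats.t.sf(t_val, df)  # one-tailed: P(T >= t) under H0\np_two = 2 * stats.t.sf(abs(t_val), df)  # two-tailed: both tails\nprint(round(p_one, 4), round(p_two, 4))\n```\n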
<div class=\"alert alert-info\"><h4>Note</h4><p>Here we will refer to each spatial location as a \"voxel\".\n   In general, though, it could be any sort of data value,\n   including cortical vertex at a specific time, pixel in a\n   time-frequency decomposition, etc.</p></div>
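Concretely, the parametric statistic computed in the next section for any\nsingle voxel is just its mean across subjects divided by the estimated\nstandard error of that mean. A minimal sketch, assuming the ``X``, ``width``,\nand ``n_subjects`` variables created above:\n\n```python\n# Sketch: one-sample t statistic for the center voxel, by hand and via scipy\ncenter = X[:, width // 2, width // 2]  # one value per subject\nt_manual = center.mean() / (center.std(ddof=1) / np.sqrt(n_subjects))\nt_scipy = stats.ttest_1samp(center, 0).statistic\nprint(t_manual, t_scipy)  # the two values should agree\n```\n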
\n\n### Parametric tests\nLet's start with a **paired t-test**, which is a standard test\nfor differences in paired samples. Mathematically, it is equivalent\nto a 1-sample t-test on the difference between the samples in each condition.\nThe paired t-test is **parametric**\nbecause it assumes that the underlying sample distribution is Gaussian, and\nis only valid in this case. This happens to be satisfied by our toy dataset,\nbut is not always satisfied for neuroimaging data.\n\nIn the context of our toy dataset, which has many voxels\n($40 \\cdot 40 = 1600$), applying the paired t-test is called a\n*mass-univariate* approach as it treats each voxel independently.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "titles = [\"t\"]\nout = stats.ttest_1samp(X, 0, axis=0)\nts = [out[0]]\nps = [out[1]]\nmccs = [False]  # these are not multiple-comparisons corrected\n\n\ndef plot_t_p(t, p, title, mcc, axes=None):\n    if axes is None:\n        fig = plt.figure(figsize=(6, 3), layout=\"constrained\")\n        axes = [fig.add_subplot(121, projection=\"3d\"), fig.add_subplot(122)]\n        show = True\n    else:\n        show = False\n\n    # calculate critical t-value thresholds (2-tailed)\n    p_lims = np.array([0.1, 0.001])\n    df = n_subjects - 1  # degrees of freedom\n    t_lims = stats.distributions.t.ppf(1 - p_lims / 2, df=df)\n    p_lims = [-np.log10(p) for p in p_lims]\n\n    # t plot\n    x, y = np.mgrid[0:width, 0:width]\n    surf = axes[0].plot_surface(\n        x,\n        y,\n        np.reshape(t, (width, width)),\n        rstride=1,\n        cstride=1,\n        linewidth=0,\n        vmin=t_lims[0],\n        vmax=t_lims[1],\n        cmap=\"viridis\",\n    )\n    axes[0].set(\n        xticks=[], yticks=[], zticks=[], xlim=[0, width - 1], ylim=[0, width - 1]\n    )\n    axes[0].view_init(30, 15)\n    cbar = axes[0].figure.colorbar(\n        ax=axes[0],\n        shrink=0.75,\n        orientation=\"horizontal\",\n        fraction=0.1,\n        pad=0.025,\n        mappable=surf,\n    )\n    cbar.set_ticks(t_lims)\n    cbar.set_ticklabels([f\"{t_lim:0.1f}\" for t_lim in t_lims])\n    cbar.set_label(\"t-value\")\n    cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)\n    if not show:\n        axes[0].set(title=title)\n        if mcc:\n            axes[0].title.set_weight(\"bold\")\n    # p plot\n    use_p = -np.log10(np.reshape(np.maximum(p, 1e-5), (width, width)))\n    img = axes[1].imshow(\n        use_p, cmap=\"inferno\", vmin=p_lims[0], vmax=p_lims[1], interpolation=\"nearest\"\n    )\n    axes[1].set(xticks=[], yticks=[])\n    cbar = axes[1].figure.colorbar(\n        ax=axes[1],\n        shrink=0.75,\n        orientation=\"horizontal\",\n        fraction=0.1,\n        pad=0.025,\n        mappable=img,\n    )\n    cbar.set_ticks(p_lims)\n    cbar.set_ticklabels([f\"{p_lim:0.1f}\" for p_lim in p_lims])\n    cbar.set_label(r\"$-\\log_{10}(p)$\")\n    cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)\n    if show:\n        text = fig.suptitle(title)\n        if mcc:\n            text.set_weight(\"bold\")\n\n\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### \"Hat\" variance adjustment\nThe \"hat\" technique regularizes the variance values used in the t-test\ncalculation :footcite:`RidgwayEtAl2012` to compensate for implausibly small\nvariances.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ts.append(ttest_1samp_no_p(X, sigma=sigma))\nps.append(stats.distributions.t.sf(np.abs(ts[-1]), len(X) - 1) * 2)\ntitles.append(r\"$\\mathrm{t_{hat}}$\")\nmccs.append(False)\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Non-parametric tests\nInstead of assuming an 
underlying Gaussian distribution, we could\nuse a **non-parametric resampling** method. In the case of a paired t-test\nbetween two conditions A and B, which is mathematically equivalent to a\none-sample t-test on the difference between the conditions (A - B), under the\nnull hypothesis we have the principle of **exchangeability**. This means\nthat, if the null is true, we can exchange conditions and not change\nthe distribution of the test statistic.\n\nWhen using a paired t-test, exchangeability thus means that we can flip the\nsigns of the difference between A and B. Therefore, we can construct the\n**null distribution** values for each voxel by taking random subsets of\nsamples (subjects), flipping the sign of their difference, and recording the\nabsolute value of the resulting statistic (we record the absolute value\nbecause we conduct a two-tailed test). The absolute value of the statistic\nevaluated on the veridical data can then be compared to this distribution,\nand the p-value is simply the proportion of null distribution values that\nare at least as large.\n\n
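To make the sign-flipping recipe concrete, here is a minimal sketch of a\nrandomized sign-flip null distribution for a single voxel (the data here are\nhypothetical standalone values; :func:`mne.stats.permutation_t_test` used\nbelow implements this properly, including exact enumeration):\n\n```python\n# Sketch: sign-flip permutation null for one voxel's paired differences\nrng = np.random.RandomState(0)\ndata = rng.randn(10) + 0.5  # hypothetical per-subject differences (A - B)\nt_obs = abs(stats.ttest_1samp(data, 0).statistic)\n\nnull = np.empty(5000)\nfor k in range(len(null)):\n    signs = rng.choice([1, -1], size=data.shape)  # random sign flips\n    null[k] = abs(stats.ttest_1samp(signs * data, 0).statistic)\n\n# p-value: proportion of null values at least as large as the observed |t|\nprint((null >= t_obs).mean())\n```\n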
<div class=\"alert alert-danger\"><h4>Warning</h4><p>In the case of a true one-sample t-test, i.e. analyzing a single\n   condition rather than the difference between two conditions,\n   it is not clear where/how exchangeability applies; see\n   [this FieldTrip discussion](ft_exch_).</p></div>
\n\nIn the case where ``n_permutations`` is large enough (or \"all\") so\nthat the complete set of unique resampling exchanges can be done\n(which is $2^{N_{samp}}-1$ for a one-tailed and\n$2^{N_{samp}-1}-1$ for a two-tailed test, not counting the\nveridical distribution), instead of randomly exchanging conditions\nthe null is formed by using all possible exchanges. This is known\nas an exact test.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Here we have to do a bit of gymnastics to get our function to do\n# a permutation test without correcting for multiple comparisons:\n\nX.shape = (n_subjects, n_src)  # flatten the array for simplicity\ntitles.append(\"Permutation\")\nts.append(np.zeros(width * width))\nps.append(np.zeros(width * width))\nmccs.append(False)\nfor ii in range(n_src):\n    t, p = permutation_t_test(X[:, [ii]], verbose=False)[:2]\n    ts[-1][ii], ps[-1][ii] = t[0], p[0]\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multiple comparisons\nSo far, we have done no correction for multiple comparisons. This is\npotentially problematic for these data because there are\n$40 \\cdot 40 = 1600$ tests being performed. If we use a threshold\np < 0.05 for each individual test, we would expect many voxels to be declared\nsignificant even if there were no true effect. In other words, we would make\nmany **type I errors** (adapted from [here](errors_)):\n\n.. rst-class:: skinnytable\n\n  +----------+--------+------------------+------------------+\n  |                   |           Null hypothesis           |\n  |                   +------------------+------------------+\n  |                   |       True       |      False       |\n  +==========+========+==================+==================+\n  |          |        |   Type I error   |     Correct      |\n  |          |  Yes   |  False positive  |  True positive   |\n  |  Reject  +--------+------------------+------------------+\n  |          |        |     Correct      |  Type II error   |\n  |          |  No    |  True negative   |  False negative  |\n  +----------+--------+------------------+------------------+\n\nTo see why, consider a standard $\\alpha = 0.05$.\nFor a single test, our probability of making a type I error is 0.05.\nThe probability of making at least one type I error in\n$N_{\\mathrm{test}}$ independent tests is then given by\n$1 - (1 - \\alpha)^{N_{\\mathrm{test}}}$:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "N = np.arange(1, 80)\nalpha = 0.05\np_type_I = 1 - (1 - alpha) ** N\nfig, ax = plt.subplots(figsize=(4, 3), layout=\"constrained\")\nax.scatter(N, p_type_I, 3)\nax.set(\n    xlim=N[[0, -1]],\n    ylim=[0, 1],\n    xlabel=r\"$N_{\\mathrm{test}}$\",\n    ylabel=\"Probability of at least\\none type I error\",\n)\nax.grid(True)\nfig.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To combat this problem, several methods exist. Typically these\nprovide control over one of the following two measures:\n\n1. [Familywise error rate (FWER)](fwer_)\n   The probability of making one or more type I errors:\n\n   .. math::\n     \\mathrm{P}(N_{\\mathrm{type\\ I}} \\geq 1 \\mid H_0)\n\n2. [False discovery rate (FDR)](fdr_)\n   The expected proportion of rejected null hypotheses that are\n   actually true:\n\n   .. 
math::\n     \\mathrm{E}(\\frac{N_{\\mathrm{type\\ I}}}{N_{\\mathrm{reject}}}\n     \\mid N_{\\mathrm{reject}} > 0) \\cdot\n     \\mathrm{P}(N_{\\mathrm{reject}} > 0 \\mid H_0)\n\nWe cover some techniques that control FWER and FDR below.\n\n### Bonferroni correction\nPerhaps the simplest way to deal with multiple comparisons, [Bonferroni\ncorrection](https://en.wikipedia.org/wiki/Bonferroni_correction)_\nconservatively multiplies the p-values by the number of comparisons to\ncontrol the FWER.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "titles.append(\"Bonferroni\")\nts.append(ts[-1])\nps.append(bonferroni_correction(ps[0])[1])\nmccs.append(True)\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### False discovery rate (FDR) correction\nTypically, FDR is performed with the Benjamini-Hochberg procedure, which\nis less restrictive than Bonferroni correction for large numbers of\ncomparisons (fewer type II errors), but provides less strict control of type\nI errors.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "titles.append(\"FDR\")\nts.append(ts[-1])\nps.append(fdr_correction(ps[0])[1])\nmccs.append(True)\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Non-parametric resampling test with a maximum statistic\n**Non-parametric resampling tests** can also be used to correct for multiple\ncomparisons. In its simplest form, we again do permutations using\nexchangeability under the null hypothesis, but this time we take the\n*maximum statistic across all voxels* in each permutation to form the\nnull distribution. The p-value for each voxel from the veridical data\nis then given by the proportion of null distribution values\nthat are at least as large.\n\nThis method has two important features:\n\n1. It controls FWER.\n2. It is non-parametric. Even though our initial test statistic\n   (here a 1-sample t-statistic) is parametric, the null\n   distribution for the null hypothesis rejection (the mean value across\n   subjects is indistinguishable from zero) is obtained by permutations.\n   This means that it makes no assumptions of Gaussianity\n   (which do hold for this example, but do not in general for some types\n   of processed neuroimaging data).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "titles.append(r\"$\\mathbf{Perm_{max}}$\")\nout = permutation_t_test(X, verbose=False)[:2]\nts.append(out[0])\nps.append(out[1])\nmccs.append(True)\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Clustering\nEach of the aforementioned multiple comparisons corrections has the\ndisadvantage of not fully incorporating the correlation structure of the\ndata, namely that points close to one another (e.g., in space or time) tend\nto be correlated. However, by defining the adjacency (or \"neighbor\")\nstructure in our data, we can use **clustering** to compensate.\n\nTo use this, we need to rethink our null hypothesis. 
Instead\nof thinking about a null hypothesis about means per voxel (with one\nindependent test per voxel), we consider a null hypothesis about sizes\nof clusters in our data, which could be stated as:\n\n    The distributions of spatial cluster sizes observed in the two\n    experimental conditions are drawn from the same probability distribution.\n\nHere we only have a single condition and we contrast to zero, which can\nbe thought of as:\n\n    The distribution of spatial cluster sizes is independent of the sign\n    of the data.\n\nIn this case, we again do permutations with a maximum statistic, but, for\neach permutation, we:\n\n1. Compute the test statistic for each voxel individually.\n2. Threshold the test statistic values.\n3. Cluster voxels that exceed this threshold (with the same sign) based on\n   adjacency.\n4. Retain the size of the largest cluster (measured, e.g., by a simple voxel\n   count, or by the sum of voxel t-values within the cluster) to build the\n   null distribution.\n\nAfter doing these permutations, the cluster sizes in our veridical data\nare compared to this null distribution. The p-value associated with each\ncluster is again given by the proportion of null distribution values that\nare at least as large. This can then be subjected to a standard p-value\nthreshold (e.g., p < 0.05) to reject the null hypothesis (i.e., find an\neffect of interest).\n\nThis reframing to consider *cluster sizes* rather than *individual means*\nmaintains the advantages of the standard non-parametric permutation\ntest -- namely controlling FWER and making no assumptions of parametric\ndata distribution.\nCritically, though, it also accounts for the correlation structure in the\ndata -- which in this toy case is spatial but in general can be\nmultidimensional (e.g., spatio-temporal) -- because the null distribution\nwill be derived from data in a way that preserves these correlations.\n\n.. admonition:: Effect size\n   :class: sidebar note\n\n   For a nice description of how to compute the effect size obtained\n   in a cluster test, see this\n   [FieldTrip mailing list discussion](ft_cluster_effect_size_).\n\nHowever, there is a drawback. If a cluster significantly deviates from\nthe null, no further inference on the cluster (e.g., peak location) can be\nmade, as the cluster as a whole is used to reject the null.\nMoreover, because the test statistic concerns the full data, the null\nhypothesis (and our rejection of it) refers to the structure of the full\ndata. For more information, see also the comprehensive\n[FieldTrip tutorial](ft_cluster_).\n\n#### Defining the adjacency matrix\nFirst we need to define our adjacency (sometimes called \"neighbors\") matrix.\nThis is a square array (or sparse matrix) of shape ``(n_src, n_src)`` that\ncontains zeros and ones to define which spatial points are neighbors, i.e.,\nwhich voxels are adjacent to each other. In our case this\nis quite simple, as our data are aligned on a rectangular grid.\n\nLet's pretend that our data were smaller -- a 3 \u00d7 3 grid. Thinking about\neach voxel as being connected to the other voxels it touches, we would\nneed a 9 \u00d7 9 adjacency matrix. The first row of this matrix contains the\nvoxels in the flattened data that the first voxel touches. 
Since it touches\nthe second element in the first row and the first element in the second row\n(and is also a neighbor to itself), this would be::\n\n    [1, 1, 0, 1, 0, 0, 0, 0, 0]\n\n:mod:`sklearn.feature_extraction` provides a convenient function for this:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.feature_extraction.image import grid_to_graph  # noqa: E402\n\nmini_adjacency = grid_to_graph(3, 3).toarray()\nassert mini_adjacency.shape == (9, 9)\nprint(mini_adjacency[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general the adjacency between voxels can be more complex, such as\nthose between sensors in 3D space, or time-varying activation at brain\nvertices on a cortical surface. MNE provides several convenience functions\nfor computing adjacency matrices, for example:\n\n* :func:`mne.channels.find_ch_adjacency`\n* :func:`mne.stats.combine_adjacency`\n\nSee the `Statistics API ` for a full list.\n\nMNE also ships with numerous built-in channel adjacency matrices from the\nFieldTrip project (called \"neighbors\" there). You can get an overview of\nthem by using :func:`mne.channels.get_builtin_ch_adjacencies`:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "builtin_ch_adj = mne.channels.get_builtin_ch_adjacencies(descriptions=True)\nfor adj_name, adj_description in builtin_ch_adj:\n    print(f\"{adj_name}: {adj_description}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These built-in channel adjacency matrices can be loaded via\n:func:`mne.channels.read_ch_adjacency`.\n\n#### Standard clustering\nHere, since our data are on a grid, we can use ``adjacency=None`` to\ntrigger optimized grid-based code, and run the clustering algorithm.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "titles.append(\"Clustering\")\n\n# Reshape data to what is equivalent to (n_samples, n_space, n_time)\nX.shape = (n_subjects, width, width)\n\n# Compute threshold from t distribution (this is also the default)\n# Here we use a two-tailed test, hence we need to divide alpha by 2.\n# Subtracting alpha from 1 guarantees that we get a positive threshold,\n# which MNE-Python expects for two-tailed tests.\ndf = n_subjects - 1  # degrees of freedom\nt_thresh = stats.distributions.t.ppf(1 - alpha / 2, df=df)\n\n# run the cluster test\nt_clust, clusters, p_values, H0 = permutation_cluster_1samp_test(\n    X,\n    n_jobs=None,\n    threshold=t_thresh,\n    adjacency=None,\n    n_permutations=n_permutations,\n    out_type=\"mask\",\n)\n\n# Put the cluster data in a viewable format\np_clust = np.ones((width, width))\nfor cl, p in zip(clusters, p_values):\n    p_clust[cl] = p\nts.append(t_clust)\nps.append(p_clust)\nmccs.append(True)\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### \"Hat\" variance adjustment\nThis method can also be used in this context to correct for small\nvariances :footcite:`RidgwayEtAl2012`:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "titles.append(r\"$\\mathbf{C_{hat}}$\")\nstat_fun_hat = partial(ttest_1samp_no_p, sigma=sigma)\nt_hat, clusters, p_values, H0 = permutation_cluster_1samp_test(\n    X,\n    n_jobs=None,\n    threshold=t_thresh,\n    adjacency=None,\n    out_type=\"mask\",\n    n_permutations=n_permutations,\n    
stat_fun=stat_fun_hat,\n    buffer_size=None,\n)\np_hat = np.ones((width, width))\nfor cl, p in zip(clusters, p_values):\n    p_hat[cl] = p\nts.append(t_hat)\nps.append(p_hat)\nmccs.append(True)\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n#### Threshold-free cluster enhancement (TFCE)\nTFCE eliminates the free initial ``threshold`` parameter that\ndetermines which points are included in clustering by approximating\na continuous integration across possible threshold values with a standard\n[Riemann sum](https://en.wikipedia.org/wiki/Riemann_sum)_\n:footcite:`SmithNichols2009`.\nThis requires giving a starting threshold ``start`` and a step\nsize ``step``, which in MNE is supplied as a dict.\nThe smaller the ``step`` and the closer to 0 the ``start`` value,\nthe better the approximation, but the longer it takes.\n\nA significant advantage of TFCE is that, rather than modifying the\nstatistical null hypothesis under test (from one about individual voxels\nto one about the distribution of clusters in the data), it modifies the *data\nunder test* while still controlling for multiple comparisons.\nThe statistical test is then done at the level of individual voxels rather\nthan clusters. This allows each point to be evaluated for significance\nindependently rather than only as part of a cluster.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "titles.append(r\"$\\mathbf{C_{TFCE}}$\")\nthreshold_tfce = dict(start=0, step=0.2)\nt_tfce, _, p_tfce, H0 = permutation_cluster_1samp_test(\n    X,\n    n_jobs=None,\n    threshold=threshold_tfce,\n    adjacency=None,\n    n_permutations=n_permutations,\n    out_type=\"mask\",\n)\nts.append(t_tfce)\nps.append(p_tfce)\nmccs.append(True)\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also combine TFCE and the \"hat\" correction:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "titles.append(r\"$\\mathbf{C_{hat,TFCE}}$\")\nt_tfce_hat, _, p_tfce_hat, H0 = permutation_cluster_1samp_test(\n    X,\n    n_jobs=None,\n    threshold=threshold_tfce,\n    adjacency=None,\n    out_type=\"mask\",\n    n_permutations=n_permutations,\n    stat_fun=stat_fun_hat,\n    buffer_size=None,\n)\nts.append(t_tfce_hat)\nps.append(p_tfce_hat)\nmccs.append(True)\nplot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualize and compare methods\nLet's take a look at these statistics. 
The top row shows each test statistic,\nand the bottom row shows the corresponding p-values; the tests\nwith proper control over FWER or FDR have bold titles.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = plt.figure(facecolor=\"w\", figsize=(14, 3), layout=\"constrained\")\nassert len(ts) == len(titles) == len(ps)\nfor ii in range(len(ts)):\n    ax = [\n        fig.add_subplot(2, 10, ii + 1, projection=\"3d\"),\n        fig.add_subplot(2, 10, 11 + ii),\n    ]\n    plot_t_p(ts[ii], ps[ii], titles[ii], mccs[ii], ax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first three columns show the parametric and non-parametric statistics\nthat are not corrected for multiple comparisons:\n\n- Mass univariate **t-tests** result in jagged edges.\n- **\"Hat\" variance correction** of the t-tests produces less peaky edges,\n  correcting for sharpness in the statistic driven by low-variance voxels.\n- **Non-parametric resampling tests** are very similar to t-tests. This is to\n  be expected: the data are drawn from a Gaussian distribution, and thus\n  satisfy parametric assumptions.\n\nThe next three columns show multiple comparison corrections of the\nmass univariate tests (parametric and non-parametric). These\ncorrect for multiple comparisons too conservatively because neighboring\nvoxels in our data are correlated:\n\n- **Bonferroni correction** eliminates any significant activity.\n- **FDR correction** is less conservative than Bonferroni.\n- A **permutation test with a maximum statistic** also eliminates any\n  significant activity.\n\nThe final four columns show the non-parametric cluster-based permutation\ntests with a maximum statistic:\n\n- **Standard clustering** identifies the correct region. However, the whole\n  area must be declared significant, so no peak analysis can be done.\n  Also, the peak is broad.\n- **Clustering with \"hat\" variance adjustment** tightens the estimate of\n  significant activity.\n- **Clustering with TFCE** allows analyzing each significant point\n  independently, but still has a broadened estimate.\n- **Clustering with TFCE and \"hat\" variance adjustment** tightens the area\n  declared significant (again FWER corrected).\n\n## Statistical functions in MNE\nThe complete listing of statistical functions provided by MNE is in\nthe `Statistics API list `, but we will give\na brief overview here.\n\nMNE provides several convenience parametric testing functions that can be\nused in conjunction with the non-parametric clustering methods. However,\nthe set of functions we provide is not meant to be exhaustive.\n\nIf the univariate statistical contrast of interest is not listed here\n(e.g., interaction term in an unbalanced ANOVA), consider checking out the\n:mod:`statsmodels` package. It offers many functions for computing\nstatistical contrasts, e.g., :func:`statsmodels.stats.anova.anova_lm`.\nTo use these functions in clustering (see the sketch at the end of this\nsection):\n\n1. Determine which test statistic (e.g., t-value, F-value) you would use\n   in a univariate context to compute your contrast of interest. In other\n   words, if there were only a single output such as reaction times, what\n   test statistic might you compute on the data?\n2. 
Wrap the call to that function within a function that takes an input of\n   the shape expected by your clustering function,\n   and returns an array of the same shape without the \"samples\" dimension\n   (e.g., :func:`mne.stats.permutation_cluster_1samp_test` takes an array\n   of shape ``(n_samples, p, q)`` and returns an array of shape ``(p, q)``).\n3. Pass this wrapped function to the ``stat_fun`` argument of the clustering\n   function.\n4. Set an appropriate ``threshold`` value (float or dict) based on the\n   values your statistical contrast function returns.\n\n### Parametric methods provided by MNE\n\n- :func:`mne.stats.ttest_1samp_no_p`\n  Paired t-test, optionally with hat adjustment.\n  This is used by default for contrast enhancement in paired cluster tests.\n\n- :func:`mne.stats.f_oneway`\n  One-way ANOVA for independent samples.\n  This can be used to compute various F-contrasts. It is used by default\n  for contrast enhancement in non-paired cluster tests.\n\n- :func:`mne.stats.f_mway_rm`\n  M-way ANOVA for repeated measures and balanced designs.\n  This returns F-statistics and p-values. The associated helper function\n  :func:`mne.stats.f_threshold_mway_rm` can be used to determine the\n  F-threshold at a given significance level.\n\n- :func:`mne.stats.linear_regression`\n  Compute ordinary least squares regressions on multiple targets, e.g.,\n  sensors, time points across trials (samples).\n  For each regressor it returns the beta value, t-statistic, and\n  uncorrected p-value. While it can be used as a test, it is\n  particularly useful to compute weighted averages or deal with\n  continuous predictors.\n\n### Non-parametric methods\n\n- :func:`mne.stats.permutation_cluster_test`\n  Unpaired contrasts with clustering.\n\n- :func:`mne.stats.spatio_temporal_cluster_test`\n  Unpaired contrasts with spatio-temporal clustering.\n\n- :func:`mne.stats.permutation_t_test`\n  Paired contrasts with no clustering.\n\n- :func:`mne.stats.permutation_cluster_1samp_test`\n  Paired contrasts with clustering.\n\n- :func:`mne.stats.spatio_temporal_cluster_1samp_test`\n  Paired contrasts with spatio-temporal clustering.\n\n
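As promised above, here is a minimal sketch of wrapping a univariate statistic\nas a ``stat_fun`` for the cluster tests (the statistic choice is illustrative,\nand the commented usage assumes the toy ``X`` and ``t_thresh`` from earlier):\n\n```python\ndef my_stat_fun(data):\n    \"\"\"Map (n_samples, p, q) to (p, q), as the cluster tests expect.\"\"\"\n    return stats.ttest_1samp(data, 0, axis=0).statistic\n\n# Hypothetical usage:\n# t_obs, clusters, cluster_pv, H0 = permutation_cluster_1samp_test(\n#     X, threshold=t_thresh, stat_fun=my_stat_fun, adjacency=None)\n```\n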
<div class=\"alert alert-danger\"><h4>Warning</h4><p>In most MNE functions, data has shape\n   ``(..., n_space, n_time)``, where the spatial dimension can\n   be e.g. sensors or source vertices. But for our spatio-temporal\n   clustering functions, the spatial dimensions need to be **last**\n   for computational efficiency reasons. For example, for\n   :func:`mne.stats.spatio_temporal_cluster_1samp_test`, ``X``\n   needs to be of shape ``(n_samples, n_time, n_space)``. You can\n   use :func:`numpy.transpose` to transpose axes if necessary.</p></div>
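For instance, data stored in the common ``(n_samples, n_space, n_time)``\nlayout can be rearranged with a single transpose; a small sketch with\nhypothetical dimensions:\n\n```python\nimport numpy as np\n\nn_samples, n_space, n_time = 20, 64, 100  # hypothetical sizes\nX_st = np.zeros((n_samples, n_space, n_time))\nX_last = np.transpose(X_st, (0, 2, 1))  # -> (n_samples, n_time, n_space)\nprint(X_last.shape)  # (20, 100, 64)\n```\n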
\n\n## References\n.. footbibliography::\n\n.. include:: ../../links.inc\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }