{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "# Welcome to fMRIflows\n", "\n", "fMRIflows is a consortium of many (dependent) fMRI analysis pipelines, including anatomical and functional pre-processing, univariate 1st and 2nd-level analysis, as well as multivariate pattern analysis.\n", "\n", "This notebook will help you to setup the JSON specification files to run the individual analyses." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Exploration of the dataset\n", "\n", "First, let's see what we've got:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from bids.layout import BIDSLayout\n", "layout = BIDSLayout(\"/data/\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### List of subjects" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "subject_list = sorted(layout.get_subjects())\n", "subject_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sessions in the dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session_list = layout.get_sessions()\n", "session_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Runs in the dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "runs = layout.get_runs()\n", "runs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tasks in the dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "task_id = sorted(layout.get_tasks())\n", "task_id" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Metadata in dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get information, such as **TR** and **voxel resolution** of functional images, let's collect the metadata from the functional images of all subjects (of the first task)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of functional images\n", "func_files = layout.get(datatype='func', return_type='file', extension='.nii.gz',\n", " suffix='bold', task=task_id[0])\n", "func_files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's collect TR and voxel resolution of all functional images (of first task), overall subjects, sessions and runs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import nibabel as nb\n", "\n", "resolution = np.array([nb.load(f).header.get_zooms() for f in func_files])\n", "resolution" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get median TR of all collected functional images\n", "TR = np.median(resolution[:, 3])\n", "TR" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get average voxel resolution of all collected functional images\n", "vox_res = [round(r, 3) for r in np.median(resolution[:, :3], axis=0)]\n", "vox_res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Specifications for preprocessing workflows\n", "\n", "Now that we know the content of our dataset, let's write the specification file for the anatomical and functional preprocessing workflow." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Specification for the anatomical preprocessing workflow\n", "\n", "For the anatomical preprocessing workflow, we need only a few parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create an empty dictionary\n", "content_anat = {}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of subject identifiers\n", "content_anat['subject_list_anat'] = subject_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of session identifiers\n", "content_anat['session_list_anat'] = session_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# T1w image identifier (default: T1w)\n", "content_anat['T1w_id'] = 'T1w'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Voxel resolution of reference template \n", "content_anat['res_norm'] = [1.0, 1.0, 1.0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Should ANTs Normalization a 'fast' or a 'precise' normalization (default: 'precise')\n", "content_anat['norm_accuracy'] = 'precise'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make sure that everything is as we want it, let's plot the parameters for the anatomical preprocessing pipeline again:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "content_anat" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Specification for the functional preprocessing workflow\n", "\n", "For the functional preprocessing workflow, we need a few more parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create an empty dictionary\n", "content_func = {}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of subject identifiers\n", "content_func['subject_list_func'] = subject_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of session identifiers\n", "content_func['session_list_func'] = session_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of task identifiers\n", "content_func['task_list'] = task_id" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of run identifiers\n", "content_func['run_list'] = runs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Reference timepoint for slice time correction (in ms)\n", "content_func['ref_timepoint'] = int(round((TR * 1000.) / 2.0, 3))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Requested isometric voxel resolution after coregistration of functional images\n", "content_func['res_func'] = round(np.median(vox_res).astype('float64'), 3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of spatial filters (smoothing) to apply (separetely, i.e. with iterables)\n", "# Values are given in mm\n", "content_func['filters_spatial'] = [[\"LP\", 3. 
* content_func['res_func']]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of temporal filters to apply (separetely, i.e. with iterables)\n", "# Values are given in seconds\n", "content_func['filters_temporal'] = [[None, 100.]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And for the confound sub-workflow, we need to specify the **number of `CompCor` components** that should be computed, the **thresholds** to detect **outliers** in `FD`, `DVARS`, `TV`, `GM`, `WM`, `CSF`, as well as the number of **independent components** that should be extracted from the preprocessed signal." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of CompCor components to compute\n", "content_func['n_compcor_confounds'] = 5" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Threshold for outlier detection (3.27 represents a threshold of 99.9%)\n", "content_func['outlier_thresholds'] = [3.27, 3.27, 3.27, None, None, None]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of independent components to compute\n", "content_func['n_independent_components'] = 10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make sure that everything is as we want it, let's plot the parameters for the functional preprocessing pipeline again:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "content_func" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating the `JSON` specification file\n", "\n", "We will be using one common `JSON` file for the specifications for the anatomical and functional preprocessing pipelines. The creation of this file is rather simple:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "content = {}\n", "content.update(content_anat)\n", "content.update(content_func)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The only thing that we're still missing is the number of parallel processes that we want to allow during the execution of the preprocessing workflows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of parallel jops to use during the preprocessing\n", "import multiprocessing\n", "n_proc = multiprocessing.cpu_count() - 1\n", "n_proc" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "content['n_parallel_jobs'] = n_proc" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we're ready to write the `content` to a `JSON` file. By default the filename is `fmriflows_spec_preproc.json`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "file_id = 'fmriflows_spec_preproc'\n", "with open('/data/%s.json' % file_id, 'w') as f:\n", " json.dump(content, f, indent=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Specifications for 1st-level analysis Workflows\n", "\n", "After the anatomical and functional preprocessing pipelines were run, we can move on to the 1st-level univariate and multivariate analysis. 
"Once the anatomical and functional preprocessing pipelines have been run, we can move on to the 1st-level univariate and multivariate analysis. For this, we need to get the different conditions and corresponding contrasts for each task.\n", "\n", "## Exploration of the dataset\n", "\n", "So let's explore the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create an empty dictionary\n", "content_analysis = {}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "content_analysis['tasks'] = {}\n", "\n", "for t in task_id:\n", "\n", "    # Collect first event file of a given task\n", "    idx = 0\n", "    condition_names = []\n", "    while len(condition_names) == 0 and idx < len(func_files):\n", "        event_file = layout.get(return_type='file', suffix='events', task=t)[idx]\n", "\n", "        # Collect unique condition names\n", "        import pandas as pd\n", "        df = pd.read_csv(event_file, sep='\t')\n", "        condition_names = [str(c) for c in np.unique(df['trial_type'])]\n", "        idx += 1\n", "\n", "    # Create contrasts (unique and versus rest)\n", "    contrasts = []\n", "\n", "    # Add unique contrasts, i.e. [1, 0, 0, 0, ...]\n", "    eye_matrix = np.eye(len(condition_names)).tolist()\n", "    contrasts += [[c, eye_matrix[i], 'T'] for i, c in enumerate(condition_names)]\n", "\n", "    # Add one vs. rest contrasts, i.e. [1, -0.25, -0.25, -0.25, -0.25]\n", "    try:\n", "        rest_matrix = np.copy(eye_matrix)\n", "        rest_matrix[rest_matrix == 0] = np.round(-1. / (len(condition_names) - 1), 4)\n", "        contrasts += [['%s_vs_rest' % c, rest_matrix[i].tolist(), 'T']\n", "                      for i, c in enumerate(condition_names)]\n", "    except Exception:\n", "        pass\n", "\n", "    # Store the task-specific information in a dictionary\n", "    content_task = {}\n", "    content_task['condition_names'] = condition_names\n", "\n", "    # Add contrasts to task dictionary\n", "    content_task['contrasts'] = contrasts\n", "\n", "    # Store everything in the analysis dictionary\n", "    content_analysis['tasks'][t] = content_task\n", "\n", "    # Processing feedback\n", "    print('Task \\'%s\\' finished.' % t)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So what do we have so far?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pprint import pprint\n", "pprint(content_analysis['tasks'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to add some additional contrasts, either add them to `content_analysis['tasks'][task_id]['contrasts']` (see the sketch below) or directly adapt the content of the `fmriflows_spec_analysis.json` file once this section has been executed."
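] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For illustration, the next cell sketches how one extra contrast could be appended to the task dictionary before the specification file is written. It is only a sketch, not part of the fMRIflows specification itself: it builds a simple 'first condition vs. second condition' T-contrast for the first task and prints it. Uncomment the `append` line if you actually want to add it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: build one additional custom contrast (adapt conditions and weights to your needs)\n", "example_task = task_id[0]\n", "conds = content_analysis['tasks'][example_task]['condition_names']\n", "if len(conds) >= 2:\n", "    # Contrast the first condition against the second one\n", "    weights = [1., -1.] + [0.] * (len(conds) - 2)\n", "    new_contrast = ['%s_vs_%s' % (conds[0], conds[1]), weights, 'T']\n", "    # content_analysis['tasks'][example_task]['contrasts'].append(new_contrast)\n", "    print('Example contrast:', new_contrast)"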
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of subject identifiers\n", "content_analysis['subject_list'] = subject_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of session identifiers\n", "content_analysis['session_list'] = session_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of spatial filters (smoothing) that were used during functional preprocessing\n", "content_analysis['filters_spatial'] = content_func['filters_spatial']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of temporal filters that were used during functional preprocessing\n", "content_analysis['filters_temporal'] = content_func['filters_temporal']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Nuisance identifiers that should be included in the GLM\n", "content_analysis['nuisance_regressors'] = ['Rotation', 'Translation', 'FD', 'DVARS']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# If outliers detected during functional preprocing should be used in GLM\n", "content_analysis['use_outliers'] = True" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Serial Correlation Model to use: 'AR(1)', 'FAST' or 'none'\n", "content_analysis['model_serial_correlations'] = 'AR(1)'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Model bases to use: 'hrf', 'fourier', 'fourier', 'fourier_han', 'gamma' or 'fir'\n", "# If 'hrf', also specify if time and dispersion derivatives should be used\n", "content_analysis['model_bases'] = {'hrf': {'derivs': [0, 0]}}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Estimation Method to use: 'Classical', 'Bayesian' or 'Bayesian2'\n", "content_analysis['estimation_method'] = {'Classical': 1}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify if contrasts should be normalized to template space after estimation\n", "content_analysis['normalize'] = True" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify voxel resolution of normalized contrasts\n", "content_analysis['norm_res'] = [1, 1, 1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify if a contrast should be computed for stimuli category per run\n", "content_analysis['con_per_run'] = True" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify voxel resolution of normalized contrasts for multivariate analysis\n", "content_analysis['norm_res_multi'] = [float(v) for v in vox_res]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify a particular analysis postfix\n", "content_analysis['analysis_postfix'] = ''" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Specify 2nd-level analysis parameters\n", "\n", "Some of the parameters for the 2nd-level analysis can directly be taken from the 1st-level analysis parameters, i.e. subject names, filters, etc. Here we specify some additional parameters, unique to the 2nd-level analysis. 
"Some of the parameters for the 2nd-level analysis can be taken directly from the 1st-level analysis parameters, e.g. subject names, filters, etc. Here we specify some additional parameters, unique to the 2nd-level analysis. They will be stored together with the 1st-level parameters in a file called `fmriflows_spec_analysis.json`. To run the 2nd-level analysis, run the `04_analysis_2nd-level` notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Threshold for the gray matter probability template, used to create the 2nd-level mask\n", "content_analysis['gm_mask_thr'] = 0.1" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Value for initial thresholding to define clusters\n", "content_analysis['height_threshold'] = 0.001" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Whether to use FWE (Bonferroni) correction for the initial threshold\n", "content_analysis['use_fwe_correction'] = False" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimum cluster size in voxels\n", "content_analysis['extent_threshold'] = 5" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Whether to use FDR correction over cluster extent probabilities\n", "content_analysis['use_topo_fdr'] = True" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# P-value threshold to use on FDR-corrected cluster size probabilities\n", "content_analysis['extent_fdr_p_threshold'] = 0.05" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Name of atlases to use for creation of output tables\n", "content_analysis['atlasreader_names'] = 'default'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Probability threshold to use for output tables\n", "content_analysis['atlasreader_prob_thresh'] = 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating the `JSON` specification file\n", "\n", "As in the preprocessing example, we will be using one common `JSON` file for the specifications of the 1st-level and 2nd-level analysis. The creation of this file is as before:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of parallel jobs to use during the analysis\n", "import multiprocessing\n", "n_proc = multiprocessing.cpu_count() - 1\n", "content_analysis['n_parallel_jobs'] = n_proc" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "file_id = 'fmriflows_spec_analysis'\n", "with open('/data/%s.json' % file_id, 'w') as f:\n", "    json.dump(content_analysis, f, indent=4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "content_analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Multivariate analysis\n", "\n",
"Once the 1st-level analysis has been run with the parameter `con_per_run` set to `True`, you can run a multivariate searchlight analysis. Keep in mind that this might take some time, especially if you want to perform a proper group analysis, as proposed in [Stelzer et al. (2013)](https://www.sciencedirect.com/science/article/pii/S1053811912009810).\n", "\n", "## Specify multivariate analysis parameters" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create an empty dictionary\n", "content_multivariate = {}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of subject identifiers\n", "content_multivariate['subject_list'] = subject_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of session identifiers\n", "content_multivariate['session_list'] = session_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of spatial filters (smoothing) that were used during functional preprocessing\n", "content_multivariate['filters_spatial'] = content_func['filters_spatial']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List of temporal filters that were used during functional preprocessing\n", "content_multivariate['filters_temporal'] = content_func['filters_temporal']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify a particular multivariate postfix\n", "content_multivariate['multivariate_postfix'] = ''" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify which classifier(s) to use; options are one or more of:\n", "# 'LinearCSVMC', 'LinearNuSVMC', 'RbfCSVMC', 'RbfNuSVMC', 'SMLR', 'kNN', 'GNB'\n", "content_multivariate['clf_names'] = ['LinearNuSVMC']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Searchlight sphere radius (in voxels), i.e. number of additional voxels\n", "# next to the center voxel. E.g. sphere_radius = 3 means radius = 3.5 * voxel size\n", "content_multivariate['sphere_radius'] = 3\n", "\n", "# Comment 01: a radius of ~10 mm seems to be a good starting point\n", "# Comment 02: How many voxels are in a sphere of a given radius?\n", "#             Radius = 2 -> voxels in sphere = 33\n", "#             Radius = 3 -> voxels in sphere = 123\n", "#             Radius = 4 -> voxels in sphere = 257\n", "#             Radius = 5 -> voxels in sphere = 515\n", "#             Radius = 6 -> voxels in sphere = 925"
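] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The voxel counts listed in Comment 02 are simply the number of voxel centers whose distance from the sphere center is at most the given radius. The next cell is only an optional sanity check of those numbers (it is not part of the fMRIflows specification and assumes isotropic voxels):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional check of Comment 02: count voxel centers within a given radius\n", "def voxels_in_sphere(radius):\n", "    return sum(1 for x in range(-radius, radius + 1)\n", "               for y in range(-radius, radius + 1)\n", "               for z in range(-radius, radius + 1)\n", "               if x**2 + y**2 + z**2 <= radius**2)\n", "\n", "{r: voxels_in_sphere(r) for r in range(2, 7)}"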
(separated by task)\n", "# - Classification targets are a tuple of two tuples, indicating\n", "# - Target classification to train and target classification to test\n", "content_multivariate['tasks'] = {}\n", "\n", "for t in task_id:\n", " \n", " # Collect first event file of a given task\n", " event_file = layout.get(return_type='file', suffix='events', task=t)[0]\n", " \n", " # Collect unique condition names\n", " import pandas as pd\n", " df = pd.read_csv(event_file, sep='\t')\n", " condition_names = [str(c) for c in np.unique(df['trial_type'])]\n", " \n", " import itertools\n", " target_list = list(itertools.combinations(condition_names, 2))\n", " \n", " # Store everything in the analysis dictionary\n", " if len(target_list) != 0:\n", " content_multivariate['tasks'][t] = [z for z in zip(target_list, target_list)]\n", "\n", "# Show the proposed classification targets\n", "from pprint import pprint\n", "pprint(content_multivariate['tasks'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Parameters needed for Stelzer's Group Cluster Threshold approach" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of permutations to indicate group-analysis strategy:\n", "# n_perm <= 1: group analysis is classical 2nd-level GLM analysis\n", "# n_perm > 1: group analysis is multivariate analysis according to Stelzer et al. (2013)\n", "content_multivariate['n_perm'] = 100" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of bootstrap samples to be generated\n", "content_multivariate['n_bootstrap'] = 100000" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of elements per segment used to compute the feature-wise NULL distributions\n", "# Low number mean low memory demand and slow computation time\n", "content_multivariate['block_size'] = 1000" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Feature-wise probability threshold per voxel\n", "content_multivariate['threshold'] = 0.001" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Strategy for multiple comparison correction, options are: 'bonferroni', 'sidak',\n", "# 'holm-sidak', 'holm', 'simes-hochberg', 'hommel', 'fdr_bh', 'fdr_by', None\n", "content_multivariate['multicomp_correction'] = 'fdr_bh'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Family-wise error rate threshold for multiple comparison correction\n", "content_multivariate['fwe_rate'] = 0.05" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Name of atlases to use for creation of output tables\n", "content_multivariate['atlasreader_names'] = 'default'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Probability threshold to use for output tables\n", "content_multivariate['atlasreader_prob_thresh'] = 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating the `JSON` specification file\n", "\n", "The multivariate analysis parameters will be stored in a common `JSON` file, as before:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of parallel jops to use during the preprocessing\n", "import multiprocessing\n", "n_proc = multiprocessing.cpu_count() - 1\n", 
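"# As before, n_proc leaves one CPU core free for the rest of the system\n",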
"content_multivariate['n_parallel_jobs'] = n_proc" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "file_id = 'fmriflows_spec_multivariate'\n", "with open('/data/%s.json' % file_id, 'w') as f:\n", " json.dump(content_multivariate, f, indent=4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "content_multivariate" ] } ], "metadata": { "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }