{ "metadata": { "name": "identify_regions_coactivated_with_ROI" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "heading", "level": 1, "metadata": {}, "source": "Generating a coactivation map for a seed ROI" }, { "cell_type": "markdown", "metadata": {}, "source": "In this example we generate a map of brain regions that are coactivated with a seed ROI--i.e., regions in which studies that report activity in the seed regions are also likely to report activity. We define the ROI using two different approaches: (a) by feeding in an image file containing the ROI, and (b) by passing x/y/z coordinates and asking Neurosynth to grow a sphere.\n\nTo begin, let's import the modules we need:" }, { "cell_type": "code", "collapsed": false, "input": "from neurosynth.base.dataset import Dataset\nfrom neurosynth.analysis import network", "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": "Now we load a Dataset object from a file. We're assuming here that we've previously created a Dataset and saved it; if you haven't done that, take a look at create_a_new_dataset_and_load_features example.py in this folder." }, { "cell_type": "code", "collapsed": false, "input": "dataset = Dataset.load('dataset.pkl')", "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": "Now we're ready to define our seed ROI, which we can do that in two different ways.\n\n## Using an image-based ROI\n\nFirst, let's define a seed using an existing image file. The file can be any image file (most commonly, in Nifti format) that standard neuroimaging packages can read (in our case, we're using NiBabel behind the scenes). Neurosynth will include all voxels with value > 0 in the mask, so if you have multiple ROIs with different labels (i.e., voxel values) in the image, make sure to isolate the one you want in a separate image before running this analysis." }, { "cell_type": "code", "collapsed": false, "input": "roi = 'my_roi_mask.nii.gz'", "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": "And now that we have an ROI, we're ready to run a coactivation analysis:" }, { "cell_type": "code", "collapsed": false, "input": "network.coactivation(dataset, roi, threshold=0.1, outroot='coactivation_from_image')", "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": "And we're done! It's that simple. The network.coactivation() function does everything for you in one shot, and spits out a bunch of images reflecting various kinds of information related to our coactivation analysis. Actually, network.coactivation() is really just a wrapper around the standard meta-analysis processing stream Neurosynth implements. In practice, a coactivation analysis is just a meta-analysis where we're comparing all the studies that activate within an ROI to all the studies that don't. So if you've used Neurosynth's meta-analysis functionality before (see some of the other examples), you'll get all of the same outputs.\n\nThere are two arguments worth discussing here though:\n\n* threshold: Here indicate that we want a study to be considered 'active' within a region if at least 10% of the voxels within that region are active. Too low a threshold will allow a lot of studies in that activate a region only incidentally (e.g., in a couple of voxels; too high a threshold will leave too few studies to produce reliable results. 
, { "cell_type": "markdown", "metadata": {}, "source": "## Using explicit coordinates\n\nAlternatively, instead of using an image to define our seed, we can also just pass coordinates explicitly and ask Neurosynth to grow a sphere around them. This is even simpler:" }, { "cell_type": "code", "collapsed": false, "input": "network.coactivation(dataset, [[0, 20, 28]], threshold=0.1, outroot='coactivation_from_coords', r=10)", "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": "The only difference here is that we're passing in x/y/z coordinates to define the seed ROI (in this case, a voxel in dorsal ACC) rather than an image. We also need to pass a radius parameter (r) telling Neurosynth how big we want the sphere around each coordinate to be. In this case, we ask for 10 mm spheres.\n\nNote that we're not limited to one set of coordinates; we could just as easily have passed [[0, 20, 28], [-40, 20, 28]], which would tell Neurosynth to identify regions that coactivate with some combination of dorsal ACC and dorsolateral PFC. A seed doesn't have to be a single ROI; it can be any set of regions you like (and the same is true for ROIs defined using an image: any non-zero voxels will be included in the mask, so you can test for coactivation with entire networks if you want)." } ], "metadata": {} } ] }