{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Stochastic Microstructure Imaging in Crossings (MIX) Optimization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The stochastic Microstructure In Crossings (MIX) optimizer *(Farooq et al. 2016)* uses a three-step process to fit the parameters of a multi-compartment (MC) model to data. The key innovation is that they separate linear from non-linear parameters in the fitting process, meaning the linear volume fractions and non-linear other ones(e.g. diffusivities) are optimized at different stages in the process. In this notebook we we first show an overall example of how the optimizer is used, and then go over the three steps as given by *(Farooq et al. 2016)*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Overall Mix example" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True parameters: [ 0.46405481 0.30005209 0.03821492 0.64970014 0.48858711 0.51141289]\n", "Estimated parameters: [ 0.46407092 0.30005198 0.03821499 0.64970364 0.4886475 0.5113525 ]\n", "Note that the 'mu' parameters (second and third) might be pi away from each other (antipodal symmetry).\n" ] } ], "source": [ "# necessary imports\n", "from dmipy.data.saved_acquisition_schemes import wu_minn_hcp_acquisition_scheme\n", "from dmipy.core.modeling_framework import MultiCompartmentModel\n", "from dmipy.signal_models import gaussian_models, cylinder_models\n", "import numpy as np\n", "\n", "# Import some acquition scheme (Wu-Minn HCP scheme)\n", "scheme = wu_minn_hcp_acquisition_scheme()\n", "\n", "# Set up test Ball and Stick model\n", "ball = gaussian_models.G1Ball()\n", "stick = cylinder_models.C1Stick()\n", "mc_model = MultiCompartmentModel([stick, ball])\n", "\n", "# Make some random model parameters with the appropriate scale\n", "random_parameters = {}\n", "for parameter, card in mc_model.parameter_cardinality.items():\n", " random_parameters[parameter] = (\n", " np.random.rand(card) * mc_model.parameter_scales[parameter])\n", "\n", "# Making sure the volume fractions add up to 1.\n", "random_parameters['partial_volume_1'] = 1. - random_parameters['partial_volume_0']\n", "\n", "# Making a parameter vector out of the parameters so the mc_model understands it.\n", "random_parameter_vector = mc_model.parameters_to_parameter_vector(**random_parameters)\n", "\n", "# Generate data with the HCP scheme and the model parameters\n", "random_signal = mc_model.simulate_signal(scheme, random_parameter_vector)\n", "\n", "# Set up the MIX optimizer\n", "from dmipy.optimizers.mix import MixOptimizer\n", "mix = MixOptimizer(mc_model, scheme)\n", "\n", "estimated_parameters = mix(random_signal)\n", "\n", "# Print the true and estimated parameters in O(1) scale\n", "print \"True parameters: \", random_parameter_vector / mc_model.scales_for_optimization\n", "print \"Estimated parameters:\", estimated_parameters\n", "print \"Note that the 'mu' parameters (second and third) might be pi away from each other (antipodal symmetry).\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you rerun this many times you'll see that most of the time the solver finds the ground truth solution. However, sometimes the random diffusivities will make the Ball and Stick look very similar in terms of signal, in which case it can still find the wrong solution.\n", "\n", "In the code you should call MIX as follows:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Using parallel processing with 8 workers.\n", "Setup MIX optimizer in 3.09944152832e-05 seconds\n", "Fitting of 1 voxels complete in 4.15033698082 seconds.\n", "Average of 4.15033698082 seconds per voxel.\n", "Estimated parameters: [[ 0.46407092 2.84154068 -3.10337767 0.64970364 0.4886475 0.5113525 ]]\n" ] } ], "source": [ "fitted_mc_model = mc_model.fit(scheme, random_signal, solver='mix')\n", "print \"Estimated parameters:\", fitted_mc_model.fitted_parameters_vector / mc_model.scales_for_optimization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While it often manages to find the global solution for more complex models, it does require many more function evaluations than gradient descent methods." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1: Stochastic Optimization of non-linear parameters" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the first step a genetic algorithm (GA) is used to estimate the non-linear parameters of an MC model. GAs are inspired by the process of natural selection that belong to the larger class of evolutionary algorithms. Without going too much into the details, this means that GAs do not rely on using gradient-descent on some cost function, but instead stochastically sample pools in the parameter space looking for good candidates. For this we use scipy's\n", "[differential_evolution](https://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.optimize.differential_evolution.html) (DE) algorithm.\n", "\n", "The idea is to separate estimation of the linear from the non-linear parameters. This is done by only sampling the non-linear parameters of components of a multi-compartment model, and then estimating the linear volume fractions using least squares. Dmipy's MultiCompartmentModel instance can be called with the 'stochastic cost function' option to give us the signal attenuation for every model in the MultiCompartmentModel separately for the given parameters, ignoring the volume fraction values and just setting them to one for all models." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 1. , 1. ],\n", " [ 0.99384665, 0.62872909],\n", " [ 0.77348182, 0.39530027],\n", " [ 0.21384913, 0.24853678],\n", " [ 0.9378883 , 0.62872909],\n", " [ 0.89676035, 0.24853678],\n", " [ 0.52003482, 0.39530027],\n", " [ 0.5596586 , 0.62872909],\n", " [ 0.68081258, 0.39530027],\n", " [ 0.73836681, 0.24853678],\n", " [ 0.93131368, 0.62872909],\n", " [ 0.28058554, 0.39530027],\n", " [ 0.53834826, 0.24853678],\n", " [ 0.73879139, 0.62872909],\n", " [ 0.90655213, 0.39530027],\n", " [ 0.99504891, 0.24853678],\n", " [ 1. , 1. ],\n", " [ 0.79055492, 0.62872909],\n", " [ 0.99993183, 0.39530027],\n", " [ 0.21916649, 0.24853678]])" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "separate_model_signals = mc_model(scheme, quantity=\"stochastic cost function\", **random_parameters)\n", "\n", "# the first column is the signal for the Stick and the second for the Ball\n", "# You can see that volume fractions are ignored and both models are normalized to 1.\n", "separate_model_signals[:20]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The volume fractions are then estimated using least squares" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 0.48858711, 0.51141289])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# doing this with the ground truth parmameters will give use the ground truth volume fractions\n", "volume_fractions = np.dot(np.linalg.pinv(separate_model_signals), random_signal)\n", "volume_fractions" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sum Squared Difference: 5.06288463781e-30\n" ] } ], "source": [ "# meaning we will find the right solution\n", "estimated_signal = np.dot(separate_model_signals, volume_fractions)\n", "print 'Sum Squared Difference:', np.sum((estimated_signal - random_signal) ** 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Normally, the GE algorithms will sample the parameter space to look for the model parameters that will match the given signal. Since there are no other constraints on volume fractions in this step, this means that the least-squares solution for the volume fractions will probably not sum up to one (but will hopefully be pretty close). In the next step we use the found non-linear parameters to find a constrained solution for the volume fractions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2: Constrained Optimization to find linear parameters " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*(Farooq et al. 2016)* describes using CVX to estimate the linear volume fractions of an MC model. For this we use scipy's [COBYLA algorithm](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.fmin_cobyla.html) since it allows us to impose the parameter constraints we need for volume fractions; namely that they are positive and sum up to one." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "initial fractions: [ 0.53661881 0.50558657]\n", "COBYLA fractions: [ 0.48863674 0.51150103]\n", "ground truth fractions: [ 0.48858711 0.51141289]\n" ] } ], "source": [ "from scipy.optimize import fmin_cobyla\n", "\n", "# defining the two constraints for the volume fractions\n", "def cobyla_positivity_constraint(volume_fractions, *args):\n", " \"COBYLA positivity constraint on volume fractions\"\n", " return volume_fractions - 0.001\n", "\n", "\n", "def cobyla_unity_constraint(volume_fractions, *args):\n", " \"COBYLA unity constraint on volume fractions\"\n", " return np.sum(volume_fractions) - 1\n", "\n", "# generate slightly off initial guess\n", "x0 = volume_fractions + (np.random.rand(2) - 0.5) / 5.\n", "\n", "# do the COBYLA optimization\n", "cobyla_fractions = fmin_cobyla(func=mix.cobyla_cost_function, x0=x0,\n", " cons=[cobyla_positivity_constraint,cobyla_unity_constraint],\n", " args=(separate_model_signals, random_signal))\n", "print \"initial fractions: \", x0\n", "print \"COBYLA fractions: \", cobyla_fractions\n", "print \"ground truth fractions:\", volume_fractions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3: Refining The Solution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The third and last step in is a refining step to find a local minimum given the solutions of step one and two. For this we use scipy's gradient-based L-BFGS-B algorithm with nested volume fractions. This is exactly the same as in the brute force algorithm, so we'll refer to that example for more info." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "- Farooq, Hamza, et al. \"Microstructure imaging of crossing (MIX) white matter fibers from diffusion MRI.\" Scientific reports 6 (2016): 38927." ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.14" } }, "nbformat": 4, "nbformat_minor": 2 }