{ "cells": [ { "cell_type": "markdown", "id": "2ddc3c7c", "metadata": {}, "source": [ "# PyTorch and Mitsuba interoperability" ] }, { "cell_type": "markdown", "id": "abfc1227", "metadata": {}, "source": [ "## Overview\n", "\n", "This tutorial shows how to mix differentiable computations between Mitsuba and [PyTorch][1]. The ability to combine these frameworks allows us to squeeze an entire rendering pipeline between neural layers whilst still preserving the differentiability (end-to-end) of their combination.\n", "\n", "Note that the necessary communication and synchronization between Dr.Jit and PyTorch along with the complexity of traversing two separate computation graph data structures produces an overhead when compared to an implementation which only uses Dr.Jit. We generally recommend sticking with Dr.Jit unless the problem requires neural network building blocks like fully connected layers or convolutions, where PyTorch provides a clear advantage.\n", "\n", "In this example, we are going to train a single fully connected layer to pre-distort a texture image to counter the distortion introduced by a refractive object placed in front of the camera when looking at the textured plane. The objective of this optimization will be to minimize the difference between the rendered image and the input texture image.\n", "\n", "We assume the reader is familiar with the PyTorch framework or has followed at least the basic [PyTorch tutorials][2].\n", "\n", "![](pytorch_tuto_figure.jpg)\n", "\n", "\n", "
dr.wrap_ad()
function decorator to insert Mitsuba computations in a PyTorch pipeline
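{ "cell_type": "markdown", "id": "3f9a51c0", "metadata": {}, "source": [ "As a preview of the mechanism above, the cell below sketches how `dr.wrap_ad()` lets a Mitsuba rendering step participate in a PyTorch autograd graph. This is a minimal sketch rather than this tutorial's actual code: the scene file `scene.xml`, the parameter key `plane.bsdf.reflectance.data`, the texture resolution, and the layer/optimizer setup are all illustrative assumptions.\n" ] }, { "cell_type": "code", "id": "7c2e88d1", "metadata": {}, "execution_count": null, "outputs": [], "source": [ "# Minimal sketch: the scene file, parameter key, and sizes below are assumptions\n", "import torch\n", "import drjit as dr\n", "import mitsuba as mi\n", "\n", "mi.set_variant('cuda_ad_rgb')  # an AD-enabled variant is required\n", "\n", "scene = mi.load_file('scene.xml')    # hypothetical scene description\n", "params = mi.traverse(scene)\n", "key = 'plane.bsdf.reflectance.data'  # hypothetical texture parameter key\n", "\n", "# dr.wrap_ad() converts the incoming PyTorch tensor to a Dr.Jit array,\n", "# evaluates the body with Dr.Jit, and returns a PyTorch tensor whose\n", "# backward pass traverses both AD graphs.\n", "@dr.wrap_ad(source='torch', target='drjit')\n", "def render(texture, spp=4, seed=1):\n", "    params[key] = texture\n", "    params.update()\n", "    return mi.render(scene, params, spp=spp, seed=seed, seed_grad=seed + 1)\n", "\n", "res = 128  # texture resolution; the sensor film is assumed to match it\n", "texture = torch.rand(res, res, 3, device='cuda')  # stand-in input texture\n", "target = texture.clone()  # the render should reproduce the input texture\n", "\n", "# The single fully connected layer that learns to pre-distort the texture\n", "model = torch.nn.Linear(res * res * 3, res * res * 3, device='cuda')\n", "opt = torch.optim.Adam(model.parameters(), lr=1e-3)\n", "\n", "for it in range(10):\n", "    opt.zero_grad()\n", "    predistorted = model(texture.flatten()).reshape(res, res, 3)\n", "    image = render(predistorted.clamp(0, 1))\n", "    loss = torch.nn.functional.mse_loss(image, target)\n", "    loss.backward()  # gradients flow through Mitsuba into the layer\n", "    opt.step()" ] },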