{ "cells": [ { "cell_type": "markdown", "id": "d866343e-dc43-44e1-b61d-2ed1c7a82be0", "metadata": {}, "source": [ "# Radiance Field Reconstruction (NeRF-like)\n", "\n", "In this tutorial, we will learn how to implement a 3D scene reconstruction pipeline similar to the one presented in the following work: [ReLU Fields: The Little Non-linearity That Could][1]. This simple [NeRF][2]-like method reconstructs the radiance field of a scene without the use of any neural networks or sparse data structures. It models a scene as a purely emissive volume with a directionally varying emission. The volume density is modeled by interpolated 3D grid values that are passed through a fixed non-linearity (ReLU) to boost sharpness. The view-dependent appearance is represented using spherical harmonics (SH), with coefficients stored on a regular grid. For a given ray, the outgoing radiance is evaluated using ray marching. By differentiating through the ray marching routine, the density and SH coefficients can be fit to a set of reference images. \n", "\n", "This algorithm is simple and surprisingly powerful, and very easy to implement using the built-in functionality of Mitsuba and Dr.Jit.\n", "\n", "