<!DOCTYPE html> <!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]--> <!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]--> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Windows FAQ — PyTorch master documentation</title> <link rel="stylesheet" href="../_static/css/theme.css" type="text/css" /> <link rel="stylesheet" href="../_static/pygments.css" type="text/css" /> <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Lato" type="text/css" /> <link rel="stylesheet" href="../_static/css/pytorch_theme.css" type="text/css" /> <link rel="index" title="Index" href="../genindex.html" /> <link rel="search" title="Search" href="../search.html" /> <link rel="next" title="torch" href="../torch.html" /> <link rel="prev" title="Serialization semantics" href="serialization.html" /> <script src="../_static/js/modernizr.min.js"></script> </head> <body class="wy-body-for-nav"> <div class="wy-grid-for-nav"> <nav data-toggle="wy-nav-shift" class="wy-nav-side"> <div class="wy-side-scroll"> <div class="wy-side-nav-search"> <a href="../index.html"> <img src="../_static/pytorch-logo-dark-unstable.png" class="logo" alt="Logo"/> </a> <div class="version"> <a href="http://pytorch.org/docs/versions.html">master (0.5.0a0+2b44c42) ▼</a> </div> <div role="search"> <form id="rtd-search-form" class="wy-form" action="../search.html" method="get"> <input type="text" name="q" placeholder="Search docs" /> <input type="hidden" name="check_keywords" value="yes" /> <input type="hidden" name="area" value="default" /> </form> </div> </div> <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation"> <div> <a style="color:#F05732" href="http://pytorch.org/docs/stable/"> You are viewing unstable developer preview docs. Click here to view docs for latest stable release. 
</a> </div> <p class="caption"><span class="caption-text">Notes</span></p> <ul class="current"> <li class="toctree-l1"><a class="reference internal" href="autograd.html">Autograd mechanics</a><ul> <li class="toctree-l2"><a class="reference internal" href="autograd.html#excluding-subgraphs-from-backward">Excluding subgraphs from backward</a><ul> <li class="toctree-l3"><a class="reference internal" href="autograd.html#requires-grad"><code class="docutils literal notranslate"><span class="pre">requires_grad</span></code></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="autograd.html#how-autograd-encodes-the-history">How autograd encodes the history</a></li> <li class="toctree-l2"><a class="reference internal" href="autograd.html#in-place-operations-with-autograd">In-place operations with autograd</a></li> <li class="toctree-l2"><a class="reference internal" href="autograd.html#in-place-correctness-checks">In-place correctness checks</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="broadcasting.html">Broadcasting semantics</a><ul> <li class="toctree-l2"><a class="reference internal" href="broadcasting.html#general-semantics">General semantics</a></li> <li class="toctree-l2"><a class="reference internal" href="broadcasting.html#in-place-semantics">In-place semantics</a></li> <li class="toctree-l2"><a class="reference internal" href="broadcasting.html#backwards-compatibility">Backwards compatibility</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="cuda.html">CUDA semantics</a><ul> <li class="toctree-l2"><a class="reference internal" href="cuda.html#asynchronous-execution">Asynchronous execution</a><ul> <li class="toctree-l3"><a class="reference internal" href="cuda.html#cuda-streams">CUDA streams</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="cuda.html#memory-management">Memory management</a></li> <li class="toctree-l2"><a class="reference 
internal" href="cuda.html#best-practices">Best practices</a><ul> <li class="toctree-l3"><a class="reference internal" href="cuda.html#device-agnostic-code">Device-agnostic code</a></li> <li class="toctree-l3"><a class="reference internal" href="cuda.html#use-pinned-memory-buffers">Use pinned memory buffers</a></li> <li class="toctree-l3"><a class="reference internal" href="cuda.html#use-nn-dataparallel-instead-of-multiprocessing">Use nn.DataParallel instead of multiprocessing</a></li> </ul> </li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="extending.html">Extending PyTorch</a><ul> <li class="toctree-l2"><a class="reference internal" href="extending.html#extending-torch-autograd">Extending <code class="docutils literal notranslate"><span class="pre">torch.autograd</span></code></a></li> <li class="toctree-l2"><a class="reference internal" href="extending.html#extending-torch-nn">Extending <code class="docutils literal notranslate"><span class="pre">torch.nn</span></code></a><ul> <li class="toctree-l3"><a class="reference internal" href="extending.html#adding-a-module">Adding a <code class="docutils literal notranslate"><span class="pre">Module</span></code></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="extending.html#writing-custom-c-extensions">Writing custom C extensions</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="faq.html">Frequently Asked Questions</a><ul> <li class="toctree-l2"><a class="reference internal" href="faq.html#my-model-reports-cuda-runtime-error-2-out-of-memory">My model reports “cuda runtime error(2): out of memory”</a></li> <li class="toctree-l2"><a class="reference internal" href="faq.html#my-gpu-memory-isn-t-freed-properly">My GPU memory isn’t freed properly</a></li> <li class="toctree-l2"><a class="reference internal" href="faq.html#my-data-loader-workers-return-identical-random-numbers">My data loader workers return identical random 
numbers</a></li> <li class="toctree-l2"><a class="reference internal" href="faq.html#my-recurrent-network-doesn-t-work-with-data-parallelism">My recurrent network doesn’t work with data parallelism</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="multiprocessing.html">Multiprocessing best practices</a><ul> <li class="toctree-l2"><a class="reference internal" href="multiprocessing.html#sharing-cuda-tensors">Sharing CUDA tensors</a></li> <li class="toctree-l2"><a class="reference internal" href="multiprocessing.html#best-practices-and-tips">Best practices and tips</a><ul> <li class="toctree-l3"><a class="reference internal" href="multiprocessing.html#avoiding-and-fighting-deadlocks">Avoiding and fighting deadlocks</a></li> <li class="toctree-l3"><a class="reference internal" href="multiprocessing.html#reuse-buffers-passed-through-a-queue">Reuse buffers passed through a Queue</a></li> <li class="toctree-l3"><a class="reference internal" href="multiprocessing.html#asynchronous-multiprocess-training-e-g-hogwild">Asynchronous multiprocess training (e.g. 
Hogwild)</a><ul> <li class="toctree-l4"><a class="reference internal" href="multiprocessing.html#hogwild">Hogwild</a></li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="serialization.html">Serialization semantics</a><ul> <li class="toctree-l2"><a class="reference internal" href="serialization.html#best-practices">Best practices</a><ul> <li class="toctree-l3"><a class="reference internal" href="serialization.html#recommended-approach-for-saving-a-model">Recommended approach for saving a model</a></li> </ul> </li> </ul> </li> <li class="toctree-l1 current"><a class="current reference internal" href="#">Windows FAQ</a><ul> <li class="toctree-l2"><a class="reference internal" href="#building-from-source">Building from source</a><ul> <li class="toctree-l3"><a class="reference internal" href="#include-optional-components">Include optional components</a></li> <li class="toctree-l3"><a class="reference internal" href="#speeding-cuda-build-for-windows">Speeding CUDA build for Windows</a></li> <li class="toctree-l3"><a class="reference internal" href="#one-key-install-script">One key install script</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="#extension">Extension</a><ul> <li class="toctree-l3"><a class="reference internal" href="#cffi-extension">CFFI Extension</a></li> <li class="toctree-l3"><a class="reference internal" href="#cpp-extension">Cpp Extension</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="#installation">Installation</a><ul> <li class="toctree-l3"><a class="reference internal" href="#package-not-found-in-win-32-channel">Package not found in win-32 channel.</a></li> <li class="toctree-l3"><a class="reference internal" href="#why-are-there-no-python-2-packages-for-windows">Why are there no Python 2 packages for Windows?</a></li> <li class="toctree-l3"><a class="reference internal" href="#import-error">Import error</a></li> </ul> </li> <li 
class="toctree-l2"><a class="reference internal" href="#usage-multiprocessing">Usage (multiprocessing)</a><ul> <li class="toctree-l3"><a class="reference internal" href="#multiprocessing-error-without-if-clause-protection">Multiprocessing error without if-clause protection</a></li> <li class="toctree-l3"><a class="reference internal" href="#multiprocessing-error-broken-pipe">Multiprocessing error “Broken pipe”</a></li> <li class="toctree-l3"><a class="reference internal" href="#multiprocessing-error-driver-shut-down">Multiprocessing error “driver shut down”</a></li> <li class="toctree-l3"><a class="reference internal" href="#cuda-ipc-operations">CUDA IPC operations</a></li> </ul> </li> </ul> </li> </ul> <p class="caption"><span class="caption-text">Package Reference</span></p> <ul> <li class="toctree-l1"><a class="reference internal" href="../torch.html">torch</a><ul> <li class="toctree-l2"><a class="reference internal" href="../torch.html#tensors">Tensors</a><ul> <li class="toctree-l3"><a class="reference internal" href="../torch.html#creation-ops">Creation Ops</a></li> <li class="toctree-l3"><a class="reference internal" href="../torch.html#indexing-slicing-joining-mutating-ops">Indexing, Slicing, Joining, Mutating Ops</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../torch.html#random-sampling">Random sampling</a><ul> <li class="toctree-l3"><a class="reference internal" href="../torch.html#in-place-random-sampling">In-place random sampling</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../torch.html#serialization">Serialization</a></li> <li class="toctree-l2"><a class="reference internal" href="../torch.html#parallelism">Parallelism</a></li> <li class="toctree-l2"><a class="reference internal" href="../torch.html#locally-disabling-gradient-computation">Locally disabling gradient computation</a></li> <li class="toctree-l2"><a class="reference internal" href="../torch.html#math-operations">Math 
operations</a><ul> <li class="toctree-l3"><a class="reference internal" href="../torch.html#pointwise-ops">Pointwise Ops</a></li> <li class="toctree-l3"><a class="reference internal" href="../torch.html#reduction-ops">Reduction Ops</a></li> <li class="toctree-l3"><a class="reference internal" href="../torch.html#comparison-ops">Comparison Ops</a></li> <li class="toctree-l3"><a class="reference internal" href="../torch.html#spectral-ops">Spectral Ops</a></li> <li class="toctree-l3"><a class="reference internal" href="../torch.html#other-operations">Other Operations</a></li> <li class="toctree-l3"><a class="reference internal" href="../torch.html#blas-and-lapack-operations">BLAS and LAPACK Operations</a></li> </ul> </li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../tensors.html">torch.Tensor</a></li> <li class="toctree-l1"><a class="reference internal" href="../tensor_attributes.html">Tensor Attributes</a><ul> <li class="toctree-l2"><a class="reference internal" href="../tensor_attributes.html#torch-dtype">torch.dtype</a></li> <li class="toctree-l2"><a class="reference internal" href="../tensor_attributes.html#torch-device">torch.device</a></li> <li class="toctree-l2"><a class="reference internal" href="../tensor_attributes.html#torch-layout">torch.layout</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../sparse.html">torch.sparse</a></li> <li class="toctree-l1"><a class="reference internal" href="../cuda.html">torch.cuda</a><ul> <li class="toctree-l2"><a class="reference internal" href="../cuda.html#random-number-generator">Random Number Generator</a></li> <li class="toctree-l2"><a class="reference internal" href="../cuda.html#communication-collectives">Communication collectives</a></li> <li class="toctree-l2"><a class="reference internal" href="../cuda.html#streams-and-events">Streams and events</a></li> <li class="toctree-l2"><a class="reference internal" href="../cuda.html#memory-management">Memory 
management</a></li> <li class="toctree-l2"><a class="reference internal" href="../cuda.html#nvidia-tools-extension-nvtx">NVIDIA Tools Extension (NVTX)</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../storage.html">torch.Storage</a></li> <li class="toctree-l1"><a class="reference internal" href="../nn.html">torch.nn</a><ul> <li class="toctree-l2"><a class="reference internal" href="../nn.html#parameters">Parameters</a></li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#containers">Containers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#module"><span class="hidden-section">Module</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#sequential"><span class="hidden-section">Sequential</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#modulelist"><span class="hidden-section">ModuleList</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#parameterlist"><span class="hidden-section">ParameterList</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#convolution-layers">Convolution layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#conv1d"><span class="hidden-section">Conv1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#conv2d"><span class="hidden-section">Conv2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#conv3d"><span class="hidden-section">Conv3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#convtranspose1d"><span class="hidden-section">ConvTranspose1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#convtranspose2d"><span class="hidden-section">ConvTranspose2d</span></a></li> <li class="toctree-l3"><a class="reference internal" 
href="../nn.html#convtranspose3d"><span class="hidden-section">ConvTranspose3d</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#pooling-layers">Pooling layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#maxpool1d"><span class="hidden-section">MaxPool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#maxpool2d"><span class="hidden-section">MaxPool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#maxpool3d"><span class="hidden-section">MaxPool3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#maxunpool1d"><span class="hidden-section">MaxUnpool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#maxunpool2d"><span class="hidden-section">MaxUnpool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#maxunpool3d"><span class="hidden-section">MaxUnpool3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#avgpool1d"><span class="hidden-section">AvgPool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#avgpool2d"><span class="hidden-section">AvgPool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#avgpool3d"><span class="hidden-section">AvgPool3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#fractionalmaxpool2d"><span class="hidden-section">FractionalMaxPool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#lppool1d"><span class="hidden-section">LPPool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#lppool2d"><span class="hidden-section">LPPool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptivemaxpool1d"><span 
class="hidden-section">AdaptiveMaxPool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptivemaxpool2d"><span class="hidden-section">AdaptiveMaxPool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptivemaxpool3d"><span class="hidden-section">AdaptiveMaxPool3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptiveavgpool1d"><span class="hidden-section">AdaptiveAvgPool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptiveavgpool2d"><span class="hidden-section">AdaptiveAvgPool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptiveavgpool3d"><span class="hidden-section">AdaptiveAvgPool3d</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#padding-layers">Padding layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#reflectionpad1d"><span class="hidden-section">ReflectionPad1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#reflectionpad2d"><span class="hidden-section">ReflectionPad2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#replicationpad1d"><span class="hidden-section">ReplicationPad1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#replicationpad2d"><span class="hidden-section">ReplicationPad2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#replicationpad3d"><span class="hidden-section">ReplicationPad3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#zeropad2d"><span class="hidden-section">ZeroPad2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#constantpad1d"><span class="hidden-section">ConstantPad1d</span></a></li> <li class="toctree-l3"><a 
class="reference internal" href="../nn.html#constantpad2d"><span class="hidden-section">ConstantPad2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#constantpad3d"><span class="hidden-section">ConstantPad3d</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#non-linear-activations-weighted-sum-nonlinearity">Non-linear activations (weighted sum, nonlinearity)</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#elu"><span class="hidden-section">ELU</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#hardshrink"><span class="hidden-section">Hardshrink</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#hardtanh"><span class="hidden-section">Hardtanh</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#leakyrelu"><span class="hidden-section">LeakyReLU</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#logsigmoid"><span class="hidden-section">LogSigmoid</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#prelu"><span class="hidden-section">PReLU</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#relu"><span class="hidden-section">ReLU</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#relu6"><span class="hidden-section">ReLU6</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#rrelu"><span class="hidden-section">RReLU</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#selu"><span class="hidden-section">SELU</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#sigmoid"><span class="hidden-section">Sigmoid</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#softplus"><span 
class="hidden-section">Softplus</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#softshrink"><span class="hidden-section">Softshrink</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#softsign"><span class="hidden-section">Softsign</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#tanh"><span class="hidden-section">Tanh</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#tanhshrink"><span class="hidden-section">Tanhshrink</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#threshold"><span class="hidden-section">Threshold</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#non-linear-activations-other">Non-linear activations (other)</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#softmin"><span class="hidden-section">Softmin</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#softmax"><span class="hidden-section">Softmax</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#softmax2d"><span class="hidden-section">Softmax2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#logsoftmax"><span class="hidden-section">LogSoftmax</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#normalization-layers">Normalization layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#batchnorm1d"><span class="hidden-section">BatchNorm1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#batchnorm2d"><span class="hidden-section">BatchNorm2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#batchnorm3d"><span class="hidden-section">BatchNorm3d</span></a></li> <li class="toctree-l3"><a 
class="reference internal" href="../nn.html#instancenorm1d"><span class="hidden-section">InstanceNorm1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#instancenorm2d"><span class="hidden-section">InstanceNorm2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#instancenorm3d"><span class="hidden-section">InstanceNorm3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#layernorm"><span class="hidden-section">LayerNorm</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#localresponsenorm"><span class="hidden-section">LocalResponseNorm</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#recurrent-layers">Recurrent layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#rnn"><span class="hidden-section">RNN</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#lstm"><span class="hidden-section">LSTM</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#gru"><span class="hidden-section">GRU</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#rnncell"><span class="hidden-section">RNNCell</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#lstmcell"><span class="hidden-section">LSTMCell</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#grucell"><span class="hidden-section">GRUCell</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#linear-layers">Linear layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#linear"><span class="hidden-section">Linear</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#bilinear"><span class="hidden-section">Bilinear</span></a></li> </ul> </li> <li 
class="toctree-l2"><a class="reference internal" href="../nn.html#dropout-layers">Dropout layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#dropout"><span class="hidden-section">Dropout</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#dropout2d"><span class="hidden-section">Dropout2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#dropout3d"><span class="hidden-section">Dropout3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#alphadropout"><span class="hidden-section">AlphaDropout</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#sparse-layers">Sparse layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#embedding"><span class="hidden-section">Embedding</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#embeddingbag"><span class="hidden-section">EmbeddingBag</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#distance-functions">Distance functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#cosinesimilarity"><span class="hidden-section">CosineSimilarity</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pairwisedistance"><span class="hidden-section">PairwiseDistance</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#loss-functions">Loss functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#l1loss"><span class="hidden-section">L1Loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#mseloss"><span class="hidden-section">MSELoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#crossentropyloss"><span 
class="hidden-section">CrossEntropyLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#nllloss"><span class="hidden-section">NLLLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#poissonnllloss"><span class="hidden-section">PoissonNLLLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#kldivloss"><span class="hidden-section">KLDivLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#bceloss"><span class="hidden-section">BCELoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#bcewithlogitsloss"><span class="hidden-section">BCEWithLogitsLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#marginrankingloss"><span class="hidden-section">MarginRankingLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#hingeembeddingloss"><span class="hidden-section">HingeEmbeddingLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#multilabelmarginloss"><span class="hidden-section">MultiLabelMarginLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#smoothl1loss"><span class="hidden-section">SmoothL1Loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#softmarginloss"><span class="hidden-section">SoftMarginLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#multilabelsoftmarginloss"><span class="hidden-section">MultiLabelSoftMarginLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#cosineembeddingloss"><span class="hidden-section">CosineEmbeddingLoss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#multimarginloss"><span class="hidden-section">MultiMarginLoss</span></a></li> <li 
class="toctree-l3"><a class="reference internal" href="../nn.html#tripletmarginloss"><span class="hidden-section">TripletMarginLoss</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#vision-layers">Vision layers</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pixelshuffle"><span class="hidden-section">PixelShuffle</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#upsample"><span class="hidden-section">Upsample</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#upsamplingnearest2d"><span class="hidden-section">UpsamplingNearest2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#upsamplingbilinear2d"><span class="hidden-section">UpsamplingBilinear2d</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#dataparallel-layers-multi-gpu-distributed">DataParallel layers (multi-GPU, distributed)</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#dataparallel"><span class="hidden-section">DataParallel</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#distributeddataparallel"><span class="hidden-section">DistributedDataParallel</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#utilities">Utilities</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#clip-grad-norm"><span class="hidden-section">clip_grad_norm_</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#clip-grad-value"><span class="hidden-section">clip_grad_value_</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#weight-norm"><span class="hidden-section">weight_norm</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#remove-weight-norm"><span 
class="hidden-section">remove_weight_norm</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#packedsequence"><span class="hidden-section">PackedSequence</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pack-padded-sequence"><span class="hidden-section">pack_padded_sequence</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pad-packed-sequence"><span class="hidden-section">pad_packed_sequence</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pad-sequence"><span class="hidden-section">pad_sequence</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pack-sequence"><span class="hidden-section">pack_sequence</span></a></li> </ul> </li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../nn.html#torch-nn-functional">torch.nn.functional</a><ul> <li class="toctree-l2"><a class="reference internal" href="../nn.html#convolution-functions">Convolution functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id20"><span class="hidden-section">conv1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id21"><span class="hidden-section">conv2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id22"><span class="hidden-section">conv3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#conv-transpose1d"><span class="hidden-section">conv_transpose1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#conv-transpose2d"><span class="hidden-section">conv_transpose2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#conv-transpose3d"><span class="hidden-section">conv_transpose3d</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" 
href="../nn.html#pooling-functions">Pooling functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#avg-pool1d"><span class="hidden-section">avg_pool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#avg-pool2d"><span class="hidden-section">avg_pool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#avg-pool3d"><span class="hidden-section">avg_pool3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#max-pool1d"><span class="hidden-section">max_pool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#max-pool2d"><span class="hidden-section">max_pool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#max-pool3d"><span class="hidden-section">max_pool3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#max-unpool1d"><span class="hidden-section">max_unpool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#max-unpool2d"><span class="hidden-section">max_unpool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#max-unpool3d"><span class="hidden-section">max_unpool3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#lp-pool1d"><span class="hidden-section">lp_pool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#lp-pool2d"><span class="hidden-section">lp_pool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptive-max-pool1d"><span class="hidden-section">adaptive_max_pool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptive-max-pool2d"><span class="hidden-section">adaptive_max_pool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptive-max-pool3d"><span 
class="hidden-section">adaptive_max_pool3d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptive-avg-pool1d"><span class="hidden-section">adaptive_avg_pool1d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptive-avg-pool2d"><span class="hidden-section">adaptive_avg_pool2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#adaptive-avg-pool3d"><span class="hidden-section">adaptive_avg_pool3d</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#non-linear-activation-functions">Non-linear activation functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id23"><span class="hidden-section">threshold</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id24"><span class="hidden-section">relu</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id25"><span class="hidden-section">hardtanh</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id26"><span class="hidden-section">relu6</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id27"><span class="hidden-section">elu</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id28"><span class="hidden-section">selu</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#leaky-relu"><span class="hidden-section">leaky_relu</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id29"><span class="hidden-section">prelu</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id30"><span class="hidden-section">rrelu</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#glu"><span class="hidden-section">glu</span></a></li> <li class="toctree-l3"><a 
class="reference internal" href="../nn.html#id31"><span class="hidden-section">logsigmoid</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id32"><span class="hidden-section">hardshrink</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id33"><span class="hidden-section">tanhshrink</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id34"><span class="hidden-section">softsign</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id35"><span class="hidden-section">softplus</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id36"><span class="hidden-section">softmin</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id37"><span class="hidden-section">softmax</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id38"><span class="hidden-section">softshrink</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#log-softmax"><span class="hidden-section">log_softmax</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id39"><span class="hidden-section">tanh</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id40"><span class="hidden-section">sigmoid</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#normalization-functions">Normalization functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#batch-norm"><span class="hidden-section">batch_norm</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#instance-norm"><span class="hidden-section">instance_norm</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#layer-norm"><span class="hidden-section">layer_norm</span></a></li> <li class="toctree-l3"><a 
class="reference internal" href="../nn.html#local-response-norm"><span class="hidden-section">local_response_norm</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#normalize"><span class="hidden-section">normalize</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#linear-functions">Linear functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id41"><span class="hidden-section">linear</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#dropout-functions">Dropout functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id42"><span class="hidden-section">dropout</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#alpha-dropout"><span class="hidden-section">alpha_dropout</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id43"><span class="hidden-section">dropout2d</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id44"><span class="hidden-section">dropout3d</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#id45">Distance functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pairwise-distance"><span class="hidden-section">pairwise_distance</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#cosine-similarity"><span class="hidden-section">cosine_similarity</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#id46">Loss functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#binary-cross-entropy"><span class="hidden-section">binary_cross_entropy</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#poisson-nll-loss"><span 
class="hidden-section">poisson_nll_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#cosine-embedding-loss"><span class="hidden-section">cosine_embedding_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#cross-entropy"><span class="hidden-section">cross_entropy</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#hinge-embedding-loss"><span class="hidden-section">hinge_embedding_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#kl-div"><span class="hidden-section">kl_div</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#l1-loss"><span class="hidden-section">l1_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#mse-loss"><span class="hidden-section">mse_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#margin-ranking-loss"><span class="hidden-section">margin_ranking_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#multilabel-margin-loss"><span class="hidden-section">multilabel_margin_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#multilabel-soft-margin-loss"><span class="hidden-section">multilabel_soft_margin_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#multi-margin-loss"><span class="hidden-section">multi_margin_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#nll-loss"><span class="hidden-section">nll_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#binary-cross-entropy-with-logits"><span class="hidden-section">binary_cross_entropy_with_logits</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#smooth-l1-loss"><span 
class="hidden-section">smooth_l1_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#soft-margin-loss"><span class="hidden-section">soft_margin_loss</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#triplet-margin-loss"><span class="hidden-section">triplet_margin_loss</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#vision-functions">Vision functions</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pixel-shuffle"><span class="hidden-section">pixel_shuffle</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#pad"><span class="hidden-section">pad</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#id47"><span class="hidden-section">upsample</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#upsample-nearest"><span class="hidden-section">upsample_nearest</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#upsample-bilinear"><span class="hidden-section">upsample_bilinear</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#grid-sample"><span class="hidden-section">grid_sample</span></a></li> <li class="toctree-l3"><a class="reference internal" href="../nn.html#affine-grid"><span class="hidden-section">affine_grid</span></a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../nn.html#dataparallel-functions-multi-gpu-distributed">DataParallel functions (multi-GPU, distributed)</a><ul> <li class="toctree-l3"><a class="reference internal" href="../nn.html#data-parallel"><span class="hidden-section">data_parallel</span></a></li> </ul> </li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../nn.html#torch-nn-init">torch.nn.init</a></li> <li class="toctree-l1"><a class="reference internal" 
href="../optim.html">torch.optim</a><ul> <li class="toctree-l2"><a class="reference internal" href="../optim.html#how-to-use-an-optimizer">How to use an optimizer</a><ul> <li class="toctree-l3"><a class="reference internal" href="../optim.html#constructing-it">Constructing it</a></li> <li class="toctree-l3"><a class="reference internal" href="../optim.html#per-parameter-options">Per-parameter options</a></li> <li class="toctree-l3"><a class="reference internal" href="../optim.html#taking-an-optimization-step">Taking an optimization step</a><ul> <li class="toctree-l4"><a class="reference internal" href="../optim.html#optimizer-step"><code class="docutils literal notranslate"><span class="pre">optimizer.step()</span></code></a></li> <li class="toctree-l4"><a class="reference internal" href="../optim.html#optimizer-step-closure"><code class="docutils literal notranslate"><span class="pre">optimizer.step(closure)</span></code></a></li> </ul> </li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../optim.html#algorithms">Algorithms</a></li> <li class="toctree-l2"><a class="reference internal" href="../optim.html#how-to-adjust-learning-rate">How to adjust Learning Rate</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../autograd.html">torch.autograd</a><ul> <li class="toctree-l2"><a class="reference internal" href="../autograd.html#locally-disabling-gradient-computation">Locally disabling gradient computation</a></li> <li class="toctree-l2"><a class="reference internal" href="../autograd.html#in-place-operations-on-tensors">In-place operations on Tensors</a><ul> <li class="toctree-l3"><a class="reference internal" href="../autograd.html#in-place-correctness-checks">In-place correctness checks</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../autograd.html#variable-deprecated">Variable (deprecated)</a></li> <li class="toctree-l2"><a class="reference internal" 
href="../autograd.html#tensor-autograd-functions">Tensor autograd functions</a></li> <li class="toctree-l2"><a class="reference internal" href="../autograd.html#function"><span class="hidden-section">Function</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../autograd.html#profiler">Profiler</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../distributions.html">torch.distributions</a><ul> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#score-function">Score function</a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#pathwise-derivative">Pathwise derivative</a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#distribution"><span class="hidden-section">Distribution</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#exponentialfamily"><span class="hidden-section">ExponentialFamily</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#bernoulli"><span class="hidden-section">Bernoulli</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#beta"><span class="hidden-section">Beta</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#binomial"><span class="hidden-section">Binomial</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#categorical"><span class="hidden-section">Categorical</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#cauchy"><span class="hidden-section">Cauchy</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#chi2"><span class="hidden-section">Chi2</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#dirichlet"><span 
class="hidden-section">Dirichlet</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#exponential"><span class="hidden-section">Exponential</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#fishersnedecor"><span class="hidden-section">FisherSnedecor</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#gamma"><span class="hidden-section">Gamma</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#geometric"><span class="hidden-section">Geometric</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#gumbel"><span class="hidden-section">Gumbel</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#independent"><span class="hidden-section">Independent</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#laplace"><span class="hidden-section">Laplace</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#lognormal"><span class="hidden-section">LogNormal</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#multinomial"><span class="hidden-section">Multinomial</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#multivariatenormal"><span class="hidden-section">MultivariateNormal</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#normal"><span class="hidden-section">Normal</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#onehotcategorical"><span class="hidden-section">OneHotCategorical</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#pareto"><span class="hidden-section">Pareto</span></a></li> <li 
class="toctree-l2"><a class="reference internal" href="../distributions.html#poisson"><span class="hidden-section">Poisson</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#relaxedbernoulli"><span class="hidden-section">RelaxedBernoulli</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#relaxedonehotcategorical"><span class="hidden-section">RelaxedOneHotCategorical</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#studentt"><span class="hidden-section">StudentT</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#transformeddistribution"><span class="hidden-section">TransformedDistribution</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#uniform"><span class="hidden-section">Uniform</span></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#module-torch.distributions.kl"><cite>KL Divergence</cite></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#module-torch.distributions.transforms"><cite>Transforms</cite></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#module-torch.distributions.constraints"><cite>Constraints</cite></a></li> <li class="toctree-l2"><a class="reference internal" href="../distributions.html#module-torch.distributions.constraint_registry"><cite>Constraint Registry</cite></a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../multiprocessing.html">torch.multiprocessing</a><ul> <li class="toctree-l2"><a class="reference internal" href="../multiprocessing.html#strategy-management">Strategy management</a></li> <li class="toctree-l2"><a class="reference internal" href="../multiprocessing.html#sharing-cuda-tensors">Sharing CUDA tensors</a></li> <li class="toctree-l2"><a 
class="reference internal" href="../multiprocessing.html#sharing-strategies">Sharing strategies</a><ul> <li class="toctree-l3"><a class="reference internal" href="../multiprocessing.html#file-descriptor-file-descriptor">File descriptor - <code class="docutils literal notranslate"><span class="pre">file_descriptor</span></code></a></li> <li class="toctree-l3"><a class="reference internal" href="../multiprocessing.html#file-system-file-system">File system - <code class="docutils literal notranslate"><span class="pre">file_system</span></code></a></li> </ul> </li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../distributed.html">torch.distributed</a><ul> <li class="toctree-l2"><a class="reference internal" href="../distributed.html#basics">Basics</a></li> <li class="toctree-l2"><a class="reference internal" href="../distributed.html#initialization">Initialization</a><ul> <li class="toctree-l3"><a class="reference internal" href="../distributed.html#tcp-initialization">TCP initialization</a></li> <li class="toctree-l3"><a class="reference internal" href="../distributed.html#shared-file-system-initialization">Shared file-system initialization</a></li> <li class="toctree-l3"><a class="reference internal" href="../distributed.html#environment-variable-initialization">Environment variable initialization</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../distributed.html#groups">Groups</a></li> <li class="toctree-l2"><a class="reference internal" href="../distributed.html#point-to-point-communication">Point-to-point communication</a></li> <li class="toctree-l2"><a class="reference internal" href="../distributed.html#collective-functions">Collective functions</a></li> <li class="toctree-l2"><a class="reference internal" href="../distributed.html#multi-gpu-collective-functions">Multi-GPU collective functions</a></li> <li class="toctree-l2"><a class="reference internal" href="../distributed.html#launch-utility">Launch 
utility</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../bottleneck.html">torch.utils.bottleneck</a></li> <li class="toctree-l1"><a class="reference internal" href="../checkpoint.html">torch.utils.checkpoint</a></li> <li class="toctree-l1"><a class="reference internal" href="../cpp_extension.html">torch.utils.cpp_extension</a></li> <li class="toctree-l1"><a class="reference internal" href="../data.html">torch.utils.data</a></li> <li class="toctree-l1"><a class="reference internal" href="../ffi.html">torch.utils.ffi</a></li> <li class="toctree-l1"><a class="reference internal" href="../model_zoo.html">torch.utils.model_zoo</a></li> <li class="toctree-l1"><a class="reference internal" href="../onnx.html">torch.onnx</a><ul> <li class="toctree-l2"><a class="reference internal" href="../onnx.html#example-end-to-end-alexnet-from-pytorch-to-caffe2">Example: End-to-end AlexNet from PyTorch to Caffe2</a></li> <li class="toctree-l2"><a class="reference internal" href="../onnx.html#limitations">Limitations</a></li> <li class="toctree-l2"><a class="reference internal" href="../onnx.html#supported-operators">Supported operators</a></li> <li class="toctree-l2"><a class="reference internal" href="../onnx.html#functions">Functions</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="../legacy.html">torch.legacy</a></li> </ul> <p class="caption"><span class="caption-text">torchvision Reference</span></p> <ul> <li class="toctree-l1"><a class="reference internal" href="../torchvision/index.html">torchvision</a><ul> <li class="toctree-l2"><a class="reference internal" href="../torchvision/datasets.html">torchvision.datasets</a><ul> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#mnist">MNIST</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#fashion-mnist">Fashion-MNIST</a></li> <li class="toctree-l3"><a class="reference internal" 
href="../torchvision/datasets.html#emnist">EMNIST</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#coco">COCO</a><ul> <li class="toctree-l4"><a class="reference internal" href="../torchvision/datasets.html#captions">Captions</a></li> <li class="toctree-l4"><a class="reference internal" href="../torchvision/datasets.html#detection">Detection</a></li> </ul> </li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#lsun">LSUN</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#imagefolder">ImageFolder</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#datasetfolder">DatasetFolder</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#imagenet-12">Imagenet-12</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#cifar">CIFAR</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#stl10">STL10</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#svhn">SVHN</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/datasets.html#phototour">PhotoTour</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../torchvision/models.html">torchvision.models</a><ul> <li class="toctree-l3"><a class="reference internal" href="../torchvision/models.html#id1">Alexnet</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/models.html#id2">VGG</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/models.html#id3">ResNet</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/models.html#id4">SqueezeNet</a></li> <li class="toctree-l3"><a class="reference internal" 
href="../torchvision/models.html#id5">DenseNet</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/models.html#inception-v3">Inception v3</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../torchvision/transforms.html">torchvision.transforms</a><ul> <li class="toctree-l3"><a class="reference internal" href="../torchvision/transforms.html#transforms-on-pil-image">Transforms on PIL Image</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/transforms.html#transforms-on-torch-tensor">Transforms on torch.*Tensor</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/transforms.html#conversion-transforms">Conversion Transforms</a></li> <li class="toctree-l3"><a class="reference internal" href="../torchvision/transforms.html#generic-transforms">Generic Transforms</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="../torchvision/utils.html">torchvision.utils</a></li> </ul> </li> </ul> </div> </div> </nav> <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap"> <nav class="wy-nav-top" aria-label="top navigation"> <i data-toggle="wy-nav-top" class="fa fa-bars"></i> <a href="../index.html">PyTorch</a> </nav> <div class="wy-nav-content"> <div class="rst-content"> <div role="navigation" aria-label="breadcrumbs navigation"> <ul class="wy-breadcrumbs"> <li><a href="../index.html">Docs</a> »</li> <li>Windows FAQ</li> <li class="wy-breadcrumbs-aside"> <a href="../_sources/notes/windows.rst.txt" rel="nofollow"> View page source</a> </li> </ul> <hr/> </div> <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article"> <div itemprop="articleBody"> <div class="section" id="windows-faq"> <h1>Windows FAQ<a class="headerlink" href="#windows-faq" title="Permalink to this headline">¶</a></h1> <div class="section" id="building-from-source"> <h2>Building from source<a class="headerlink" 
href="#building-from-source" title="Permalink to this headline">¶</a></h2> <div class="section" id="include-optional-components"> <h3>Include optional components<a class="headerlink" href="#include-optional-components" title="Permalink to this headline">¶</a></h3> <p>There are two supported optional components for PyTorch on Windows: MKL and MAGMA. Here are the steps to build with them.</p> <div class="highlight-bat notranslate"><div class="highlight"><pre><span></span><span class="c1">REM Make sure you have 7z and curl installed.</span>

<span class="c1">REM Download MKL files</span>
curl https://s3.amazonaws.com/ossci-windows/mkl_2018.2.185.7z -k -O
7z x -aoa mkl_2018.2.185.7z -omkl

<span class="c1">REM Download MAGMA files</span>
<span class="c1">REM cuda90/cuda91 is also available in the following line.</span>
<span class="k">set</span> <span class="nv">CUDA_PREFIX</span><span class="p">=</span>cuda80
curl -k https://s3.amazonaws.com/ossci-windows/magma_<span class="nv">%CUDA_PREFIX%</span>_release_mkl_2018.2.185.7z -o magma.7z
7z x -aoa magma.7z -omagma

<span class="c1">REM Setting essential environment variables</span>
<span class="k">set</span> <span class="s2">"CMAKE_INCLUDE_PATH=</span><span class="nv">%cd%</span><span class="s2">\\mkl\\include"</span>
<span class="k">set</span> <span class="s2">"LIB=</span><span class="nv">%cd%</span><span class="s2">\\mkl\\lib;</span><span class="nv">%LIB%</span><span class="s2">"</span>
<span class="k">set</span> <span class="s2">"MAGMA_HOME=</span><span class="nv">%cd%</span><span class="s2">\\magma"</span>
</pre></div> </div> </div> <div class="section" id="speeding-cuda-build-for-windows"> <h3>Speeding up the CUDA build for Windows<a class="headerlink" href="#speeding-cuda-build-for-windows" title="Permalink to this headline">¶</a></h3> <p>Visual Studio currently doesn’t support parallel custom build tasks. 
As an alternative, we can use <code class="docutils literal notranslate"><span class="pre">Ninja</span></code> to parallelize the CUDA build tasks. Enabling it only takes a couple of commands.</p> <div class="highlight-bat notranslate"><div class="highlight"><pre><span></span><span class="c1">REM Let's install ninja first.</span>
pip install ninja

<span class="c1">REM Set it as the cmake generator</span>
<span class="k">set</span> <span class="nv">CMAKE_GENERATOR</span><span class="p">=</span>Ninja
</pre></div> </div> </div> <div class="section" id="one-key-install-script"> <h3>One-key install script<a class="headerlink" href="#one-key-install-script" title="Permalink to this headline">¶</a></h3> <p>You can take a look at the install <a class="reference external" href="https://github.com/peterjc123/pytorch-scripts">script</a>, which walks you through the whole build process.</p> </div> </div> <div class="section" id="extension"> <h2>Extension<a class="headerlink" href="#extension" title="Permalink to this headline">¶</a></h2> <div class="section" id="cffi-extension"> <h3>CFFI Extension<a class="headerlink" href="#cffi-extension" title="Permalink to this headline">¶</a></h3> <p>Support for CFFI extensions is very experimental. 
There are generally two steps to enable it under Windows.</p> <p>First, specify additional <code class="docutils literal notranslate"><span class="pre">libraries</span></code> in the <code class="docutils literal notranslate"><span class="pre">Extension</span></code> object to make it build on Windows.</p> <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">ffi</span> <span class="o">=</span> <span class="n">create_extension</span><span class="p">(</span>
    <span class="s1">'_ext.my_lib'</span><span class="p">,</span>
    <span class="n">headers</span><span class="o">=</span><span class="n">headers</span><span class="p">,</span>
    <span class="n">sources</span><span class="o">=</span><span class="n">sources</span><span class="p">,</span>
    <span class="n">define_macros</span><span class="o">=</span><span class="n">defines</span><span class="p">,</span>
    <span class="n">relative_to</span><span class="o">=</span><span class="vm">__file__</span><span class="p">,</span>
    <span class="n">with_cuda</span><span class="o">=</span><span class="n">with_cuda</span><span class="p">,</span>
    <span class="n">extra_compile_args</span><span class="o">=</span><span class="p">[</span><span class="s2">"-std=c99"</span><span class="p">],</span>
    <span class="n">libraries</span><span class="o">=</span><span class="p">[</span><span class="s1">'ATen'</span><span class="p">,</span> <span class="s1">'_C'</span><span class="p">]</span> <span class="c1"># Append cuda libraries when necessary, like cudart</span>
<span class="p">)</span>
</pre></div> </div> <p>Second, here is a workaround for the “unresolved external symbol state” error caused by <code class="docutils literal notranslate"><span class="pre">extern</span> <span class="pre">THCState</span> <span class="pre">*state;</span></code>.</p> <p>Change the source code from C to C++. 
An example is listed below.</p> <div class="highlight-cpp notranslate"><div class="highlight"><pre><span></span><span class="cp">#include</span> <span class="cpf"><THC/THC.h></span><span class="cp"></span>
<span class="cp">#include</span> <span class="cpf"><ATen/ATen.h></span><span class="cp"></span>

<span class="n">THCState</span> <span class="o">*</span><span class="n">state</span> <span class="o">=</span> <span class="n">at</span><span class="o">::</span><span class="n">globalContext</span><span class="p">().</span><span class="n">thc_state</span><span class="p">;</span>

<span class="k">extern</span> <span class="s">"C"</span> <span class="kt">int</span> <span class="n">my_lib_add_forward_cuda</span><span class="p">(</span><span class="n">THCudaTensor</span> <span class="o">*</span><span class="n">input1</span><span class="p">,</span> <span class="n">THCudaTensor</span> <span class="o">*</span><span class="n">input2</span><span class="p">,</span> <span class="n">THCudaTensor</span> <span class="o">*</span><span class="n">output</span><span class="p">)</span>
<span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="n">THCudaTensor_isSameSizeAs</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">input1</span><span class="p">,</span> <span class="n">input2</span><span class="p">))</span>
        <span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
    <span class="n">THCudaTensor_resizeAs</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">output</span><span class="p">,</span> <span class="n">input1</span><span class="p">);</span>
    <span class="n">THCudaTensor_cadd</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">output</span><span class="p">,</span> <span class="n">input1</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">,</span> <span class="n">input2</span><span class="p">);</span>
    <span class="k">return</span> <span class="mi">1</span><span class="p">;</span>
<span class="p">}</span>

<span class="k">extern</span> <span class="s">"C"</span> <span class="kt">int</span> <span class="n">my_lib_add_backward_cuda</span><span class="p">(</span><span class="n">THCudaTensor</span> <span class="o">*</span><span class="n">grad_output</span><span class="p">,</span> <span class="n">THCudaTensor</span> <span class="o">*</span><span class="n">grad_input</span><span class="p">)</span>
<span class="p">{</span>
    <span class="n">THCudaTensor_resizeAs</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">grad_input</span><span class="p">,</span> <span class="n">grad_output</span><span class="p">);</span>
    <span class="n">THCudaTensor_fill</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">grad_input</span><span class="p">,</span> <span class="mi">1</span><span class="p">);</span>
    <span class="k">return</span> <span class="mi">1</span><span class="p">;</span>
<span class="p">}</span>
</pre></div> </div> </div> <div class="section" id="cpp-extension"> <h3>Cpp Extension<a class="headerlink" href="#cpp-extension" title="Permalink to this headline">¶</a></h3> <p>This type of extension has better support than the previous one. However, it still needs some manual configuration. First, open the <strong>x86_x64 Cross Tools Command Prompt for VS 2017</strong>. Then, open Git Bash from within it; it is usually located at <code class="docutils literal notranslate"><span class="pre">C:\Program</span> <span class="pre">Files\Git\git-bash.exe</span></code>. 
Finally, you can start the compilation process.</p> </div> </div> <div class="section" id="installation"> <h2>Installation<a class="headerlink" href="#installation" title="Permalink to this headline">¶</a></h2> <div class="section" id="package-not-found-in-win-32-channel"> <h3>Package not found in win-32 channel.<a class="headerlink" href="#package-not-found-in-win-32-channel" title="Permalink to this headline">¶</a></h3> <div class="highlight-bat notranslate"><div class="highlight"><pre><span></span>Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

  - pytorch

Current channels:
  - https://conda.anaconda.org/pytorch/win-32
  - https://conda.anaconda.org/pytorch/noarch
  - https://repo.continuum.io/pkgs/main/win-32
  - https://repo.continuum.io/pkgs/main/noarch
  - https://repo.continuum.io/pkgs/free/win-32
  - https://repo.continuum.io/pkgs/free/noarch
  - https://repo.continuum.io/pkgs/r/win-32
  - https://repo.continuum.io/pkgs/r/noarch
  - https://repo.continuum.io/pkgs/pro/win-32
  - https://repo.continuum.io/pkgs/pro/noarch
  - https://repo.continuum.io/pkgs/msys2/win-32
  - https://repo.continuum.io/pkgs/msys2/noarch
</pre></div> </div> <p>PyTorch doesn’t work on 32-bit systems. Please use 64-bit versions of Windows and Python.</p> </div> <div class="section" id="why-are-there-no-python-2-packages-for-windows"> <h3>Why are there no Python 2 packages for Windows?<a class="headerlink" href="#why-are-there-no-python-2-packages-for-windows" title="Permalink to this headline">¶</a></h3> <p>Because it’s not stable enough. There are some issues that need to be solved before we can officially release it. 
You can build it by yourself.</p> </div> <div class="section" id="import-error"> <h3>Import error<a class="headerlink" href="#import-error" title="Permalink to this headline">¶</a></h3> <div class="highlight-py3tb notranslate"><div class="highlight"><pre><span></span>from torch._C import *

ImportError: DLL load failed: The specified module could not be found.
</pre></div> </div> <p>This problem is caused by missing essential files. Actually, we include almost all the essential files that PyTorch needs, except for the VC2017 redistributable. You can resolve this by typing the following command.</p> <div class="highlight-bat notranslate"><div class="highlight"><pre><span></span>conda install -c peterjc123 vc vs2017_runtime
</pre></div> </div> <p>Another possible cause is that you are using the GPU version on a machine without an NVIDIA graphics card. If so, please replace your GPU package with the CPU one.</p> </div> </div> <div class="section" id="usage-multiprocessing"> <h2>Usage (multiprocessing)<a class="headerlink" href="#usage-multiprocessing" title="Permalink to this headline">¶</a></h2> <div class="section" id="multiprocessing-error-without-if-clause-protection"> <h3>Multiprocessing error without if-clause protection<a class="headerlink" href="#multiprocessing-error-without-if-clause-protection" title="Permalink to this headline">¶</a></h3> <div class="highlight-py3tb notranslate"><div class="highlight"><pre><span></span>RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable. 
</pre></div> </div> <p>The implementation of <code class="docutils literal notranslate"><span class="pre">multiprocessing</span></code> is different on Windows, which uses <code class="docutils literal notranslate"><span class="pre">spawn</span></code> instead of <code class="docutils literal notranslate"><span class="pre">fork</span></code>. So we have to wrap the code in an if-clause to protect it from executing multiple times. Refactor your code into the following structure.</p> <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>

<span class="k">def</span> <span class="nf">main</span><span class="p">():</span>
    <span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">data</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">dataloader</span><span class="p">):</span>
        <span class="c1"># do something here</span>
        <span class="k">pass</span>

<span class="k">if</span> <span class="vm">__name__</span> <span class="o">==</span> <span class="s1">'__main__'</span><span class="p">:</span>
    <span class="n">main</span><span class="p">()</span>
</pre></div> </div> </div> <div class="section" id="multiprocessing-error-broken-pipe"> <h3>Multiprocessing error “Broken pipe”<a class="headerlink" href="#multiprocessing-error-broken-pipe" title="Permalink to this headline">¶</a></h3> <div class="highlight-py3tb notranslate"><div class="highlight"><pre><span></span>ForkingPickler(file, protocol).dump(obj)

BrokenPipeError: [Errno 32] Broken pipe
</pre></div> </div> <p>This issue happens when the child process ends before the parent process finishes sending data. Something may be wrong with your code. 
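</p> <p>Because Windows uses <code class="docutils literal notranslate"><span class="pre">spawn</span></code>, every worker process starts as a fresh interpreter that re-imports the main module before it receives any data. The guarded structure described above can be exercised with the standard library alone; this minimal sketch (independent of PyTorch, with a placeholder <code class="docutils literal notranslate"><span class="pre">work</span></code> function) shows the pattern that spawn requires:</p>

```python
import multiprocessing as mp

def work(x):
    # Runs in a child process.  Under spawn, the child is a fresh
    # interpreter that re-imports this module before calling work().
    return x * x

def main():
    # Keep process creation inside main() so the re-import performed
    # by spawned children cannot start new processes recursively.
    with mp.Pool(processes=2) as pool:
        return pool.map(work, [1, 2, 3, 4])

if __name__ == '__main__':
    print(main())  # [1, 4, 9, 16]
```

<p>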
You can debug your code by reducing the <code class="docutils literal notranslate"><span class="pre">num_workers</span></code> of <a class="reference internal" href="../data.html#torch.utils.data.DataLoader" title="torch.utils.data.DataLoader"><code class="xref py py-class docutils literal notranslate"><span class="pre">DataLoader</span></code></a> to zero and see if the issue persists.</p> </div> <div class="section" id="multiprocessing-error-driver-shut-down"> <h3>Multiprocessing error “driver shut down”<a class="headerlink" href="#multiprocessing-error-driver-shut-down" title="Permalink to this headline">¶</a></h3> <div class="highlight-py3tb notranslate"><div class="highlight"><pre><span></span>Couldn’t open shared file mapping: <torch_14808_1591070686>, error code: <1455> at torch\lib\TH\THAllocator.c:154

[windows] driver shut down
</pre></div> </div> <p>Please update your graphics driver. If the problem persists, your graphics card may be too old or the computation may be too heavy for your card. Please update the TDR settings according to this <a class="reference external" href="https://www.pugetsystems.com/labs/hpc/Working-around-TDR-in-Windows-for-a-better-GPU-computing-experience-777/">post</a>.</p> </div> <div class="section" id="cuda-ipc-operations"> <h3>CUDA IPC operations<a class="headerlink" href="#cuda-ipc-operations" title="Permalink to this headline">¶</a></h3> <div class="highlight-py3tb notranslate"><div class="highlight"><pre><span></span>THCudaCheck FAIL file=torch\csrc\generic\StorageSharing.cpp line=252 error=63 :
OS call failed or operation not supported on this OS
</pre></div> </div> <p>They are not supported on Windows, so operations such as multiprocessing on CUDA tensors cannot succeed. There are two alternatives.</p> <p>1. Don’t use <code class="docutils literal notranslate"><span class="pre">multiprocessing</span></code>. 
Set the <code class="docutils literal notranslate"><span class="pre">num_workers</span></code> of <a class="reference internal" href="../data.html#torch.utils.data.DataLoader" title="torch.utils.data.DataLoader"><code class="xref py py-class docutils literal notranslate"><span class="pre">DataLoader</span></code></a> to zero.</p> <p>2. Share CPU tensors instead. Make sure your custom <code class="xref py py-class docutils literal notranslate"><span class="pre">Dataset</span></code> returns CPU tensors.</p> </div> </div> </div> </div> </div> <footer> <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation"> <a href="../torch.html" class="btn btn-neutral float-right" title="torch" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a> <a href="serialization.html" class="btn btn-neutral" title="Serialization semantics" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a> </div> <hr/> <div role="contentinfo"> <p> © Copyright 2018, Torch Contributors. </p> </div> Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>. 
</footer> </div> </div> </section> </div> <script type="text/javascript"> var DOCUMENTATION_OPTIONS = { URL_ROOT:'../', VERSION:'master', LANGUAGE:'None', COLLAPSE_INDEX:false, FILE_SUFFIX:'.html', HAS_SOURCE: true, SOURCELINK_SUFFIX: '.txt' }; </script> <script type="text/javascript" src="../_static/jquery.js"></script> <script type="text/javascript" src="../_static/underscore.js"></script> <script type="text/javascript" src="../_static/doctools.js"></script> <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> <script type="text/javascript" src="../_static/js/theme.js"></script> <script type="text/javascript"> jQuery(function () { SphinxRtdTheme.Navigation.enableSticky(); }); </script> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-90545585-1', 'auto'); ga('send', 'pageview'); </script> </body> </html>