torch.utils.cpp_extension
-
torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs)
Creates a setuptools.Extension for C++.
Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a C++ extension. All arguments are forwarded to the setuptools.Extension constructor.
Example
>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CppExtension
>>> setup(
        name='extension',
        ext_modules=[
            CppExtension(
                name='extension',
                sources=['extension.cpp'],
                extra_compile_args=['-g']),
        ],
        cmdclass={
            'build_ext': BuildExtension
        })
-
torch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs)
Creates a setuptools.Extension for CUDA/C++.
Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a CUDA/C++ extension. This includes the CUDA include path, library path and runtime library. All arguments are forwarded to the setuptools.Extension constructor.
Example
>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension
>>> setup(
        name='cuda_extension',
        ext_modules=[
            CUDAExtension(
                name='cuda_extension',
                sources=['extension.cpp', 'extension_kernel.cu'],
                extra_compile_args={'cxx': ['-g'],
                                    'nvcc': ['-O2']})
        ],
        cmdclass={
            'build_ext': BuildExtension
        })
-
torch.utils.cpp_extension.BuildExtension(dist, **kw)
A custom setuptools build extension.
This setuptools.build_ext subclass takes care of passing the minimum required compiler flags (e.g. -std=c++11) as well as mixed C++/CUDA compilation (and support for CUDA files in general).
When using BuildExtension, it is allowed to supply a dictionary for extra_compile_args (rather than the usual list) that maps from languages (cxx or nvcc) to a list of additional compiler flags to supply to the compiler. This makes it possible to supply different flags to the C++ and CUDA compiler during mixed compilation.
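The dictionary form can be sketched as follows (the extension name and source file are hypothetical; this assumes torch is installed):

```python
# Hedged sketch: dictionary-form extra_compile_args, assuming torch is
# installed. The extension name and source file are hypothetical.
from torch.utils.cpp_extension import CppExtension

ext = CppExtension(
    name='my_extension',
    sources=['my_extension.cpp'],
    # Per-language flag lists; BuildExtension forwards the 'cxx' list to the
    # C++ compiler and the 'nvcc' list to nvcc during mixed compilation.
    extra_compile_args={'cxx': ['-O3', '-g'],
                        'nvcc': ['-O2']})

# The dictionary is stored verbatim on the Extension object until build time.
print(ext.extra_compile_args['cxx'])
```

The dictionary is only interpreted at build time, so constructing the Extension itself does not invoke any compiler.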
-
torch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False)
Loads a PyTorch C++ extension just-in-time (JIT).
To load an extension, a Ninja build file is emitted, which is used to compile the given sources into a dynamic library. This library is subsequently loaded into the current Python process as a module and returned from this function, ready for use.
By default, the directory to which the build file is emitted and the resulting library compiled to is
<tmp>/torch_extensions/<name>, where <tmp> is the temporary folder on the current platform and <name> the name of the extension. This location can be overridden in two ways. First, if the TORCH_EXTENSIONS_DIR environment variable is set, it replaces <tmp>/torch_extensions and all extensions will be compiled into subfolders of this directory. Second, if the build_directory argument to this function is supplied, it overrides the entire path, i.e. the library will be compiled into that folder directly.
To compile the sources, the default system compiler (c++) is used, which can be overridden by setting the CXX environment variable. To pass additional arguments to the compilation process, extra_cflags or extra_ldflags can be provided. For example, to compile your extension with optimizations, pass extra_cflags=['-O3']. You can also use extra_cflags to pass further include directories.
CUDA support with mixed compilation is provided. Simply pass CUDA source files (.cu or .cuh) along with other sources. Such files will be detected and compiled with nvcc rather than the C++ compiler. This includes passing the CUDA lib64 directory as a library directory, and linking cudart. You can pass additional flags to nvcc via extra_cuda_cflags, just like with extra_cflags for C++. Various heuristics for finding the CUDA install directory are used, which usually work fine. If not, setting the CUDA_HOME environment variable is the safest option.
Parameters: - name – The name of the extension to build. This MUST be the same as the name of the pybind11 module!
- sources – A list of relative or absolute paths to C++ source files.
- extra_cflags – optional list of compiler flags to forward to the build.
- extra_cuda_cflags – optional list of compiler flags to forward to nvcc when building CUDA sources.
- extra_ldflags – optional list of linker flags to forward to the build.
- extra_include_paths – optional list of include directories to forward to the build.
- build_directory – optional path to use as build workspace.
- verbose – If True, turns on verbose logging of load steps.
Returns: The loaded PyTorch extension as a Python module.
Example
>>> from torch.utils.cpp_extension import load
>>> module = load(
        name='extension',
        sources=['extension.cpp', 'extension_kernel.cu'],
        extra_cflags=['-O2'],
        verbose=True)
-
torch.utils.cpp_extension.include_paths(cuda=False)
Get the include paths required to build a C++ or CUDA extension.
Parameters: cuda – If True, includes CUDA-specific include paths.
Returns: A list of include path strings.
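For instance, when driving a compiler by hand rather than going through BuildExtension or load, the returned paths can be turned into -I flags. A minimal sketch, assuming torch is installed:

```python
# Minimal sketch: convert the returned include paths into -I compiler flags,
# assuming torch is installed.
from torch.utils.cpp_extension import include_paths

flags = ['-I{}'.format(path) for path in include_paths(cuda=False)]
# Each entry is now suitable for a manual c++ (or nvcc) command line.
```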
-
torch.utils.cpp_extension.check_compiler_abi_compatibility(compiler)
Verifies that the given compiler is ABI-compatible with PyTorch.
Parameters: compiler (str) – The compiler executable name to check (e.g. g++). Must be executable in a shell process.
Returns: False if the compiler is (likely) ABI-incompatible with PyTorch, else True.