torch.Tensor
A torch.Tensor
is a multi-dimensional matrix containing elements of
a single data type.
Torch defines eight CPU tensor types and eight GPU tensor types:

Data type | CPU tensor | GPU tensor
---|---|---
32-bit floating point | torch.FloatTensor | torch.cuda.FloatTensor
64-bit floating point | torch.DoubleTensor | torch.cuda.DoubleTensor
16-bit floating point | torch.HalfTensor | torch.cuda.HalfTensor
8-bit integer (unsigned) | torch.ByteTensor | torch.cuda.ByteTensor
8-bit integer (signed) | torch.CharTensor | torch.cuda.CharTensor
16-bit integer (signed) | torch.ShortTensor | torch.cuda.ShortTensor
32-bit integer (signed) | torch.IntTensor | torch.cuda.IntTensor
64-bit integer (signed) | torch.LongTensor | torch.cuda.LongTensor
The torch.Tensor constructor is an alias for the default tensor type (torch.FloatTensor).
A tensor can be constructed from a Python list or sequence:
>>> torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
1 2 3
4 5 6
[torch.FloatTensor of size 2x3]
An empty tensor can be constructed by specifying its size:
>>> torch.IntTensor(2, 4).zero_()
0 0 0 0
0 0 0 0
[torch.IntTensor of size 2x4]
The contents of a tensor can be accessed and modified using Python’s indexing and slicing notation:
>>> x = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
>>> print(x[1][2])
6.0
>>> x[0][1] = 8
>>> print(x)
1 8 3
4 5 6
[torch.FloatTensor of size 2x3]
Each tensor has an associated torch.Storage, which holds its data.
The tensor class provides a multi-dimensional, strided view of a storage and defines numeric operations on it.
Note
Methods which mutate a tensor are marked with an underscore suffix.
For example, torch.FloatTensor.abs_()
computes the absolute value
in-place and returns the modified tensor, while torch.FloatTensor.abs()
computes the result in a new tensor.
- class torch.Tensor
- class torch.Tensor(*sizes)
- class torch.Tensor(size)
- class torch.Tensor(sequence)
- class torch.Tensor(ndarray)
- class torch.Tensor(tensor)
- class torch.Tensor(storage)

Creates a new tensor from an optional size or data.
If no arguments are given, an empty zero-dimensional tensor is returned. If a numpy.ndarray, torch.Tensor, or torch.Storage is given, a new tensor that shares the same data is returned. If a Python sequence is given, a new tensor is created from a copy of the sequence.
- abs() → Tensor
  See torch.abs()
- acos() → Tensor
  See torch.acos()
- add(value) → Tensor
  See torch.add()
- addbmm(beta=1, mat, alpha=1, batch1, batch2) → Tensor
  See torch.addbmm()
- addcdiv(value=1, tensor1, tensor2) → Tensor
  See torch.addcdiv()
- addcmul(value=1, tensor1, tensor2) → Tensor
  See torch.addcmul()
- addmm(beta=1, mat, alpha=1, mat1, mat2) → Tensor
  See torch.addmm()
- addmv(beta=1, tensor, alpha=1, mat, vec) → Tensor
  See torch.addmv()
- addr(beta=1, alpha=1, vec1, vec2) → Tensor
  See torch.addr()
- apply_(callable) → Tensor
  Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.
  Note
  This function only works with CPU tensors and should not be used in code sections that require high performance.
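  An illustrative example (not part of the original documentation; the printed size formatting may differ between versions):
  >>> x = torch.FloatTensor([[1, -2], [-3, 4]])
  >>> x.apply_(lambda v: v * v)   # square every element in place; returns self
   1  4
   9 16
  [torch.FloatTensor of size 2x2]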
- asin() → Tensor
  See torch.asin()
- atan() → Tensor
  See torch.atan()
- atan2(other) → Tensor
  See torch.atan2()
- baddbmm(beta=1, alpha=1, batch1, batch2) → Tensor
  See torch.baddbmm()
- bernoulli() → Tensor
- bernoulli_() → Tensor
  In-place version of bernoulli()
- bmm(batch2) → Tensor
  See torch.bmm()
- byte()
- btrifact(info=None, pivot=True)
  See torch.btrifact()
- btrifact_with_info(pivot=True) -> (Tensor, Tensor, Tensor)
- btrisolve()
- cauchy_(median=0, sigma=1, *, generator=None) → Tensor
  Fills the tensor with numbers drawn from the Cauchy distribution:
  \[f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2}\]
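  An illustrative sketch (not from the original docs); the sampled values are random and will differ on every run:
  >>> x = torch.FloatTensor(2, 3)
  >>> x.cauchy_(0, 1)   # median=0, sigma=1; fills x in place and returns it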
- ceil() → Tensor
  See torch.ceil()
- char()
- chunk(chunks, dim=0) → List of Tensors
  See torch.chunk()
- clamp(min, max) → Tensor
  See torch.clamp()
- clone() → Tensor
  Returns a copy of the self tensor. The copy has the same size and data type as self.
- contiguous() → Tensor
  Returns a contiguous tensor containing the same data as the self tensor. If the self tensor is contiguous, this function returns the self tensor.
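  A transposed tensor is a common source of non-contiguous memory. The following sketch is illustrative and not part of the original docs:
  >>> x = torch.FloatTensor(3, 4)
  >>> x.is_contiguous()
  True
  >>> y = x.t()            # t() returns a view with swapped strides
  >>> y.is_contiguous()
  False
  >>> z = y.contiguous()   # copies the data into a new, contiguous layout
  >>> z.is_contiguous()
  True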
- copy_(src, non_blocking=False) → Tensor
  Copies the elements from src into the self tensor and returns self.
  The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device.
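  An illustrative example (not from the original docs; output formatting may vary by version). The source here has a different data type, which copy_ converts on the fly:
  >>> x = torch.FloatTensor(2, 3)
  >>> src = torch.IntTensor([[1, 2, 3], [4, 5, 6]])
  >>> x.copy_(src)        # element-wise copy with type conversion; returns self
   1  2  3
   4  5  6
  [torch.FloatTensor of size 2x3]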
- cos() → Tensor
  See torch.cos()
- cosh() → Tensor
  See torch.cosh()
- cpu()
- cross(other, dim=-1) → Tensor
  See torch.cross()
- cuda()
- cumprod(dim) → Tensor
  See torch.cumprod()
- cumsum(dim) → Tensor
  See torch.cumsum()
- data_ptr() → int
  Returns the address of the first element of the self tensor.
- det() → Tensor
  See torch.det()
- diag(diagonal=0) → Tensor
  See torch.diag()
- dim() → int
  Returns the number of dimensions of the self tensor.
- dist(other, p=2) → Tensor
  See torch.dist()
- div(value) → Tensor
  See torch.div()
- dot(tensor2) → Tensor
  See torch.dot()
- double()
- eig(eigenvectors=False) -> (Tensor, Tensor)
  See torch.eig()
- element_size() → int
  Returns the size in bytes of an individual element.
  Example:
  >>> torch.FloatTensor().element_size()
  4
  >>> torch.ByteTensor().element_size()
  1
- eq(other) → Tensor
  See torch.eq()
- equal(other) → bool
  See torch.equal()
- erf() → Tensor
  See torch.erf()
- erf_()
- erfinv() → Tensor
  See torch.erfinv()
- erfinv_()
- exp() → Tensor
  See torch.exp()
- expm1() → Tensor
  See torch.expm1()
- expand(*sizes) → Tensor
  Returns a new view of the self tensor with singleton dimensions expanded to a larger size.
  Passing -1 as the size for a dimension means not changing the size of that dimension.
  A tensor can also be expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1.
  Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory.
  Parameters: *sizes (torch.Size or int...) – the desired expanded size
  Example:
  >>> x = torch.Tensor([[1], [2], [3]])
  >>> x.size()
  torch.Size([3, 1])
  >>> x.expand(3, 4)
   1  1  1  1
   2  2  2  2
   3  3  3  3
  [torch.FloatTensor of size (3,4)]
  >>> x.expand(-1, 4)   # -1 means not changing the size of that dimension
   1  1  1  1
   2  2  2  2
   3  3  3  3
  [torch.FloatTensor of size (3,4)]
- expand_as(tensor)
- exponential_(lambd=1, *, generator=None) → Tensor
  Fills the self tensor with elements drawn from the exponential distribution:
  \[f(x) = \lambda e^{-\lambda x}\]
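  An illustrative sketch (not from the original docs); the samples are random, but their mean should be close to 1 / lambd:
  >>> x = torch.FloatTensor(10000)
  >>> x.exponential_(2)   # lambd=2; fills x in place and returns it
  >>> x.mean()            # roughly 0.5, up to sampling noise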
- fill_(value) → Tensor
  Fills the self tensor with the specified value.
- float()
- floor() → Tensor
  See torch.floor()
- fmod(divisor) → Tensor
  See torch.fmod()
- frac() → Tensor
  See torch.frac()
- gather(dim, index) → Tensor
  See torch.gather()
- ge(other) → Tensor
  See torch.ge()
- gels(A) → Tensor
  See torch.gels()
- geometric_(p, *, generator=None) → Tensor
  Fills the self tensor with elements drawn from the geometric distribution:
  \[f(X=k) = (1 - p)^{k - 1} p\]
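  An illustrative sketch (not from the original docs); each entry counts the number of Bernoulli(p) trials up to and including the first success, so the values are integers >= 1 and will vary:
  >>> x = torch.FloatTensor(5)
  >>> x.geometric_(0.5)   # p=0.5; fills x in place and returns it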
- geqrf() -> (Tensor, Tensor)
  See torch.geqrf()
- ger(vec2) → Tensor
  See torch.ger()
- gesv(A) → Tensor, Tensor
  See torch.gesv()
- gt(other) → Tensor
  See torch.gt()
- half()
- histc(bins=100, min=0, max=0) → Tensor
  See torch.histc()
- index(m) → Tensor
  Selects elements from the self tensor using a binary mask or along a given dimension. The expression tensor.index(m) is equivalent to tensor[m].
  Parameters: m (int or ByteTensor or slice) – the dimension or mask used to select elements
- index_add_(dim, index, tensor) → Tensor
  Accumulates the elements of tensor into the self tensor by adding to the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i-th row of tensor is added to the j-th row of self.
  The dim-th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
  Example:
  >>> x = torch.Tensor(5, 3).fill_(1)
  >>> t = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
  >>> index = torch.LongTensor([0, 4, 2])
  >>> x.index_add_(0, index, t)
  >>> x
   2  3  4
   1  1  1
   8  9 10
   1  1  1
   5  6  7
  [torch.FloatTensor of size (5,3)]
- index_copy_(dim, index, tensor) → Tensor
  Copies the elements of tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i-th row of tensor is copied to the j-th row of self.
  The dim-th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
  Example:
  >>> x = torch.zeros(5, 3)
  >>> t = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
  >>> index = torch.LongTensor([0, 4, 2])
  >>> x.index_copy_(0, index, t)
  >>> x
   1  2  3
   0  0  0
   7  8  9
   0  0  0
   4  5  6
  [torch.FloatTensor of size (5,3)]
- index_fill_(dim, index, val) → Tensor
  Fills the elements of the self tensor with value val by selecting the indices in the order given in index.
  Example:
  >>> x = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
  >>> index = torch.LongTensor([0, 2])
  >>> x.index_fill_(1, index, -1)
  >>> x
  -1  2 -1
  -1  5 -1
  -1  8 -1
  [torch.FloatTensor of size (3,3)]
- index_select(dim, index) → Tensor
- int()
- inverse() → Tensor
  See torch.inverse()
- is_contiguous() → bool
  Returns True if the self tensor is contiguous in memory in C order.
- is_cuda
- is_pinned()
  Returns true if this tensor resides in pinned memory.
- is_set_to(tensor) → bool
  Returns True if this object refers to the same THTensor object from the Torch C API as the given tensor.
- is_signed()
- kthvalue(k, dim=None, keepdim=False) -> (Tensor, LongTensor)
  See torch.kthvalue()
- le(other) → Tensor
  See torch.le()
- lerp(start, end, weight) → Tensor
  See torch.lerp()
- log() → Tensor
  See torch.log()
- logdet() → Tensor
  See torch.logdet()
- log1p() → Tensor
  See torch.log1p()
- log_normal_(mean=1, std=2, *, generator=None)
  Fills the self tensor with numbers sampled from the log-normal distribution parameterized by the given mean (µ) and standard deviation (σ). Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution:
  \[f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\dfrac{(\ln x - \mu)^2}{2\sigma^2}}\]
- long()
- lt(other) → Tensor
  See torch.lt()
- map_(tensor, callable)
  Applies callable for each element in the self tensor and the given tensor and stores the results in the self tensor. The self tensor and the given tensor must be broadcastable.
  The callable should have the signature:
  def callable(a, b) -> number
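  An illustrative example (not from the original docs; output formatting may vary by version):
  >>> x = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
  >>> y = torch.FloatTensor([[10, 20, 30], [40, 50, 60]])
  >>> x.map_(y, lambda a, b: a + b)   # a comes from x, b from y; the result is written into x
   11  22  33
   44  55  66
  [torch.FloatTensor of size 2x3]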
- masked_scatter_(mask, source)
  Copies elements from source into the self tensor at positions where the mask is one. The shape of mask must be broadcastable with the shape of the underlying tensor. The source should have at least as many elements as the number of ones in mask.
  Parameters:
  - mask (ByteTensor) – the binary mask
  - source (Tensor) – the tensor to copy from
  Note
  The mask operates on the self tensor, not on the given source tensor.
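  An illustrative example (not from the original docs): source values are consumed in order and written where the mask is one.
  >>> x = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
  >>> mask = torch.ByteTensor([[0, 1, 0], [1, 0, 1]])
  >>> source = torch.FloatTensor([[10, 20, 30], [40, 50, 60]])
  >>> x.masked_scatter_(mask, source)   # returns self
    1  10   3
   20   5  30
  [torch.FloatTensor of size 2x3]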
- masked_fill_(mask, value)
  Fills elements of the self tensor with value where mask is one. The shape of mask must be broadcastable with the shape of the underlying tensor.
  Parameters:
  - mask (ByteTensor) – the binary mask
  - value (float) – the value to fill in with
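  An illustrative example (not from the original docs; output formatting may vary by version):
  >>> x = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
  >>> mask = torch.ByteTensor([[0, 1, 0], [1, 0, 1]])
  >>> x.masked_fill_(mask, -1)   # returns self
   1 -1  3
  -1  5 -1
  [torch.FloatTensor of size 2x3]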
- masked_select(mask) → Tensor
- matmul(tensor2) → Tensor
  See torch.matmul()
- max(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
  See torch.max()
- mean(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
  See torch.mean()
- median(dim=None, keepdim=False) -> (Tensor, LongTensor)
  See torch.median()
- min(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
  See torch.min()
- mm(mat2) → Tensor
  See torch.mm()
- mode(dim=None, keepdim=False) -> (Tensor, LongTensor)
  See torch.mode()
- mul(value) → Tensor
  See torch.mul()
- multinomial(num_samples, replacement=False, *, generator=None) → Tensor
- mv(vec) → Tensor
  See torch.mv()
- narrow(dimension, start, length) → Tensor
  Returns a new tensor that is a narrowed version of the self tensor. The dimension dimension is narrowed from start to start + length. The returned tensor and the self tensor share the same underlying storage.
  Example:
  >>> x = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
  >>> x.narrow(0, 0, 2)
   1  2  3
   4  5  6
  [torch.FloatTensor of size (2,3)]
  >>> x.narrow(1, 1, 2)
   2  3
   5  6
   8  9
  [torch.FloatTensor of size (3,2)]
- ne(other) → Tensor
  See torch.ne()
- neg() → Tensor
  See torch.neg()
- new()
- nonzero() → LongTensor
  See torch.nonzero()
- norm(p=2, dim=None, keepdim=False) → Tensor
  See torch.norm()
- normal_(mean=0, std=1, *, generator=None) → Tensor
  Fills the self tensor with elements sampled from the normal distribution parameterized by mean and std.
- numel() → int
  See torch.numel()
- numpy() → numpy.ndarray
  Returns the self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to the self tensor will be reflected in the ndarray and vice versa.
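  An illustrative example of the shared storage (not from the original docs):
  >>> t = torch.FloatTensor([1, 2, 3])
  >>> a = t.numpy()      # a is a numpy.ndarray backed by t's storage
  >>> a[0] = 10          # writing through the array ...
  >>> t[0]               # ... is visible through the tensor
  10.0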
- orgqr(input2) → Tensor
  See torch.orgqr()
- ormqr(input2, input3, left=True, transpose=False) → Tensor
  See torch.ormqr()
- permute()
- pin_memory()
- potrf(upper=True) → Tensor
  See torch.potrf()
- potri(upper=True) → Tensor
  See torch.potri()
- potrs(input2, upper=True) → Tensor
  See torch.potrs()
- pow(exponent) → Tensor
  See torch.pow()
- prod(dim=None, keepdim=False) → Tensor
  See torch.prod()
- pstrf(upper=True, tol=-1) -> (Tensor, IntTensor)
  See torch.pstrf()
- put_(indices, tensor, accumulate=False) → Tensor
  Copies the elements from tensor into the positions specified by indices. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor.
  If accumulate is True, the elements in tensor are added to self. If accumulate is False, the behavior is undefined if indices contains duplicate elements.
  Example:
  >>> src = torch.Tensor([[4, 3, 5], [6, 7, 8]])
  >>> src.put_(torch.LongTensor([1, 3]), torch.Tensor([9, 10]))
    4   9   5
   10   7   8
  [torch.FloatTensor of size (2,3)]
- qr() -> (Tensor, Tensor)
  See torch.qr()
- random_(from=0, to=None, *, generator=None) → Tensor
  Fills the self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by the self tensor's data type. However, for floating point types, if unspecified, the range will be [0, 2^mantissa] to ensure that every value is representable. For example, torch.DoubleTensor(1).random_() will be uniform in [0, 2^53].
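  An illustrative sketch (not from the original docs); the drawn integers are random:
  >>> x = torch.LongTensor(5)
  >>> x.random_(0, 10)   # uniform over [0, 9]; fills x in place and returns it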
- reciprocal() → Tensor
- reciprocal_() → Tensor
  In-place version of reciprocal()
- remainder(divisor) → Tensor
- remainder_(divisor) → Tensor
  In-place version of remainder()
- renorm(p, dim, maxnorm) → Tensor
  See torch.renorm()
- repeat(*sizes) → Tensor
  Repeats this tensor along the specified dimensions.
  Unlike expand(), this function copies the tensor's data.
  Parameters: sizes (torch.Size or int...) – the number of times to repeat this tensor along each dimension
  Example:
  >>> x = torch.Tensor([1, 2, 3])
  >>> x.repeat(4, 2)
   1  2  3  1  2  3
   1  2  3  1  2  3
   1  2  3  1  2  3
   1  2  3  1  2  3
  [torch.FloatTensor of size (4,6)]
  >>> x.repeat(4, 2, 1).size()
  torch.Size([4, 2, 3])
- reshape(*shape) → Tensor
  Returns a tensor with the same data and number of elements as self, but with the specified shape.
  Parameters: shape (tuple of ints or int...) – the desired shape
  See torch.reshape()
- resize_(*sizes) → Tensor
  Resizes the self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized.
  Parameters: sizes (torch.Size or int...) – the desired size
  Example:
  >>> x = torch.Tensor([[1, 2], [3, 4], [5, 6]])
  >>> x.resize_(2, 2)
  >>> x
   1  2
   3  4
  [torch.FloatTensor of size (2,2)]
- resize_as_(tensor) → Tensor
  Resizes the self tensor to be the same size as the specified tensor. This is equivalent to self.resize_(tensor.size()).
- round() → Tensor
  See torch.round()
- rsqrt() → Tensor
  See torch.rsqrt()
- scatter_(dim, index, src) → Tensor
  Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.
  For a 3-D tensor, self is updated as:
  self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
  self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
  self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2
  This is the reverse operation of the manner described in gather().
  self, index and src should have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim.
  Moreover, as for gather(), the values of index must be between 0 and self.size(dim) - 1 inclusive, and all values in a row along the specified dimension dim must be unique.
  Example:
  >>> x = torch.rand(2, 5)
  >>> x
   0.4319  0.6500  0.4080  0.8760  0.2355
   0.2609  0.4711  0.8486  0.8573  0.1029
  [torch.FloatTensor of size (2,5)]
  >>> torch.zeros(3, 5).scatter_(0, torch.LongTensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
   0.4319  0.4711  0.8486  0.8760  0.2355
   0.0000  0.6500  0.0000  0.8573  0.0000
   0.2609  0.0000  0.4080  0.0000  0.1029
  [torch.FloatTensor of size (3,5)]
  >>> z = torch.zeros(2, 4).scatter_(1, torch.LongTensor([[2], [3]]), 1.23)
  >>> z
   0.0000  0.0000  1.2300  0.0000
   0.0000  0.0000  0.0000  1.2300
  [torch.FloatTensor of size (2,4)]
- select(dim, index) → Tensor
  Slices the self tensor along the selected dimension at the given index. This function returns a tensor with the given dimension removed.
  Note
  select() is equivalent to slicing. For example, tensor.select(0, index) is equivalent to tensor[index] and tensor.select(2, index) is equivalent to tensor[:,:,index].
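  An illustrative example (not from the original docs; 1-D output formatting may differ between versions):
  >>> x = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
  >>> x.select(0, 1)     # same as x[1]: the second row, with dimension 0 removed
   4
   5
   6
  [torch.FloatTensor of size 3]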
- set_(source=None, storage_offset=0, size=None, stride=None) → Tensor
  Sets the underlying storage, size, and strides. If source is a tensor, the self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other.
  If source is a Storage, the method sets the underlying storage, offset, size, and stride.
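  An illustrative example (not from the original docs): after set_, both tensors are views of the same storage.
  >>> x = torch.FloatTensor([1, 2, 3, 4])
  >>> y = torch.FloatTensor().set_(x)   # y now shares x's storage, size, and strides
  >>> y[0] = 10
  >>> x[0]
  10.0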
- share_memory_()
  Moves the underlying storage to shared memory.
  This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.
- short()
- sigmoid() → Tensor
  See torch.sigmoid()
- sign() → Tensor
  See torch.sign()
- sin() → Tensor
  See torch.sin()
- sinh() → Tensor
  See torch.sinh()
- size() → torch.Size
  Returns the size of the self tensor. The returned value is a subclass of tuple.
  Example:
  >>> torch.Tensor(3, 4, 5).size()
  torch.Size([3, 4, 5])
- slogdet() -> (Tensor, Tensor)
  See torch.slogdet()
- sort(dim=None, descending=False) -> (Tensor, LongTensor)
  See torch.sort()
- split(split_size, dim=0)
  See torch.split()
- sqrt() → Tensor
  See torch.sqrt()
- squeeze(dim=None) → Tensor
  See torch.squeeze()
- std(dim=None, unbiased=True, keepdim=False) → Tensor
  See torch.std()
- storage() → torch.Storage
  Returns the underlying storage.
- storage_offset() → int
  Returns the self tensor's offset in the underlying storage in terms of number of storage elements (not bytes).
  Example:
  >>> x = torch.Tensor([1, 2, 3, 4, 5])
  >>> x.storage_offset()
  0
  >>> x[3:].storage_offset()
  3
- storage_type()
- stride(dim) → tuple or int
  Returns the stride of the self tensor.
  Stride is the jump necessary to go from one element to the next one in the specified dimension dim. A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension dim.
  Parameters: dim (int, optional) – the desired dimension in which stride is required
  Example:
  >>> x = torch.Tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
  >>> x.stride()
  (5, 1)
  >>> x.stride(0)
  5
  >>> x.stride(-1)
  1
- sub(value, other) → Tensor
  Subtracts a scalar or tensor from the self tensor. If both value and other are specified, each element of other is scaled by value before being used.
  When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor.
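  An illustrative example (not from the original docs): with both value and other given, the result is self - value * other.
  >>> x = torch.FloatTensor([[10, 20, 30], [40, 50, 60]])
  >>> other = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
  >>> x.sub(2, other)    # 10 - 2*1, 20 - 2*2, ...
    8  16  24
   32  40  48
  [torch.FloatTensor of size 2x3]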
- sum(dim=None, keepdim=False) → Tensor
  See torch.sum()
- svd(some=True) -> (Tensor, Tensor, Tensor)
  See torch.svd()
- symeig(eigenvectors=False, upper=True) -> (Tensor, Tensor)
  See torch.symeig()
- take(indices) → Tensor
  See torch.take()
- tan()
- tanh() → Tensor
  See torch.tanh()
- tolist()
- topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)
  See torch.topk()
- trace() → Tensor
  See torch.trace()
- transpose(dim0, dim1) → Tensor
- transpose_(dim0, dim1) → Tensor
  In-place version of transpose()
- tril(k=0) → Tensor
  See torch.tril()
- triu(k=0) → Tensor
  See torch.triu()
- trtrs(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)
  See torch.trtrs()
- trunc() → Tensor
  See torch.trunc()
- type(dtype=None, non_blocking=False, **kwargs) → str or Tensor
  Returns the type if dtype is not provided, else casts this object to the specified type.
  If this is already of the correct type, no copy is performed and the original object is returned.
  Parameters:
  - dtype (type or string) – the desired type
  - non_blocking (bool) – if True, and the source is in pinned memory and the destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
  - **kwargs – for compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
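  An illustrative example (not from the original docs):
  >>> x = torch.FloatTensor([1.5, 2.5])
  >>> x.type()                       # no dtype: returns the type string
  'torch.FloatTensor'
  >>> y = x.type('torch.IntTensor')  # with dtype: returns a casted copy
  >>> y.type()
  'torch.IntTensor'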
- type_as(tensor) → Tensor
  Returns this tensor cast to the type of the given tensor.
  This is a no-op if the tensor is already of the correct type. This is equivalent to self.type(tensor.type()).
  Parameters: tensor (Tensor) – the tensor which has the desired type
- unfold(dim, size, step) → Tensor
  Returns a tensor which contains all slices of size size from the self tensor in the dimension dim.
  The step between two slices is given by step.
  If sizedim is the size of dimension dim for self, the size of dimension dim in the returned tensor will be (sizedim - size) / step + 1.
  An additional dimension of size size is appended in the returned tensor.
  Example:
  >>> x = torch.arange(1, 8)
  >>> x
   1
   2
   3
   4
   5
   6
   7
  [torch.FloatTensor of size (7,)]
  >>> x.unfold(0, 2, 1)
   1  2
   2  3
   3  4
   4  5
   5  6
   6  7
  [torch.FloatTensor of size (6,2)]
  >>> x.unfold(0, 2, 2)
   1  2
   3  4
   5  6
  [torch.FloatTensor of size (3,2)]
- uniform_(from=0, to=1) → Tensor
  Fills the self tensor with numbers sampled from the continuous uniform distribution:
  \[P(x) = \dfrac{1}{\text{to} - \text{from}}\]
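  An illustrative sketch (not from the original docs); the sampled values are random:
  >>> x = torch.FloatTensor(3, 3)
  >>> x.uniform_(0, 1)   # from=0, to=1; fills x in place and returns it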
- unique(sorted=False, return_inverse=False)
  Returns the unique scalar elements of the tensor as a 1-D tensor.
  See torch.unique()
- unsqueeze(dim) → Tensor
- unsqueeze_(dim) → Tensor
  In-place version of unsqueeze()
- var(dim=None, unbiased=True, keepdim=False) → Tensor
  See torch.var()
- view(*args) → Tensor
  Returns a new tensor with the same data as the self tensor but of a different size.
  The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions \(d, d+1, \dots, d+k\) that satisfy the following contiguity-like condition that \(\forall i = 0, \dots, k-1\),
  \[stride[i] = stride[i+1] \times size[i+1]\]
  Otherwise, contiguous() needs to be called before the tensor can be viewed.
  Parameters: args (torch.Size or int...) – the desired size
  Example:
  >>> x = torch.randn(4, 4)
  >>> x.size()
  torch.Size([4, 4])
  >>> y = x.view(16)
  >>> y.size()
  torch.Size([16])
  >>> z = x.view(-1, 8)   # the size -1 is inferred from other dimensions
  >>> z.size()
  torch.Size([2, 8])
- view_as(tensor)
- zero_() → Tensor
  Fills the self tensor with zeros.
- class torch.ByteTensor
  The following methods are unique to torch.ByteTensor.
- all() → bool
  Returns True if all elements in the tensor are non-zero, False otherwise.
- any() → bool
  Returns True if any elements in the tensor are non-zero, False otherwise.
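  An illustrative example covering both methods (not from the original docs):
  >>> mask = torch.ByteTensor([1, 1, 0])
  >>> mask.all()
  False
  >>> mask.any()
  True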