Features
- Tensors are similar to NumPy’s ndarrays, except that tensors can run on GPUs or other hardware accelerators.
- Tensors and NumPy arrays can often share the same underlying memory, eliminating the need to copy data.
- Tensors are optimized for automatic differentiation.
Tensor Initialization
- Directly from data, with torch.tensor().
- From a NumPy array, with torch.from_numpy().
- From another tensor, with torch.randn_like() or torch.ones_like(), which keep the shape of the argument tensor.
- With random or constant values, with torch.rand(), torch.ones(), or torch.zeros(), passing a shape tuple (see the sketch after this list).
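A minimal sketch exercising each of these initializers; the data values and shapes are illustrative:
import torch
import numpy as np

data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)                            # directly from data
x_np = torch.from_numpy(np.array(data))                # from a NumPy array
x_ones = torch.ones_like(x_data)                       # keeps the shape and dtype of x_data
x_rand = torch.randn_like(x_data, dtype=torch.float)   # keeps the shape, overrides the dtype
shape = (2, 3)
rand_tensor = torch.rand(shape)                        # random values in [0, 1)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)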
Attributes
tensor = torch.rand(3, 4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
Operations
# move the tensor to the GPU if one is available
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
# standard NumPy-style indexing and slicing
tensor = torch.ones(4, 4)
print('First row: ', tensor[0])
print('First column: ', tensor[:, 0])
print('Last column: ', tensor[..., -1])
tensor[:, 1] = 0  # zero out the second column
# join three copies along dim 1 (columns), giving a 4x12 tensor
t1 = torch.cat([tensor, tensor, tensor], dim=1)
# matrix multiplication: y1, y2, and y3 all hold tensor @ tensor.T
y1 = tensor @ tensor.T
y2 = tensor.matmul(tensor.T)
y3 = torch.rand_like(y1)
torch.matmul(tensor, tensor.T, out=y3)  # writes the result into y3
# element-wise product: z1 and z2 hold the same value
z1 = tensor * tensor
z2 = tensor.mul(tensor)
# a one-element tensor converts to a Python number with item()
agg = tensor.sum()
agg_item = agg.item()
print(tensor, "\n")
tensor.add_(5)  # in-place: operations with a trailing _ modify their operand
print(tensor)
Bridge with NumPy
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
Autograd
x = torch.ones(5, requires_grad=True)  # track operations on x
print(x)
y = x + 3
print(y)
print(y.grad_fn)  # y was created by an operation, so it has a grad_fn
z = y * y * 2
out = z.mean()
print(z, out)
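Calling backward() on the scalar out populates x.grad with d(out)/dx; continuing the snippet above, each entry of the gradient is 4 * (x + 3) / 5 = 3.2:
out.backward()
print(x.grad)  # tensor([3.2000, 3.2000, 3.2000, 3.2000, 3.2000])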
a = torch.randn(2, 2)   # requires_grad defaults to False
a = ((a * 3) / (a - 1))
print(a.requires_grad)  # False
a.requires_grad_(True)  # change the flag in place
print(a.requires_grad)  # True
b = (a * a).sum()
print(b.grad_fn)        # <SumBackward0 object at ...>
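Since b = (a * a).sum(), backpropagating from b fills a.grad with db/da = 2 * a; a short continuation:
b.backward()
print(a.grad)  # equal to 2 * a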