Backward / gradient function in PyTorch

I am trying to understand the backward / grad function in PyTorch.

But I don't understand why the value below is returned.

Here is my code.

import torch
from torch.autograd import Variable
x = Variable(torch.FloatTensor([[1,2],[3,4]]), requires_grad=True)
y = x + 2
z = y * y

gradient = torch.ones(2, 2)
z.backward(gradient)
print(x.grad)


I think the result should be [[6, 8], [10, 12]],

since dz/dx = 2 * (x + 2) and x = 1, 2, 3, 4.

But the returned value is [[7, 9], [11, 13]].

Why does this happen? I want to know how the gradient is computed, i.e. how the grad function works.

Help me please.


1 answer


Below is the same snippet run on PyTorch version 0.1.12:

import torch
from torch.autograd import Variable
x = Variable(torch.FloatTensor([[1,2],[3,4]]), requires_grad=True)
y = x + 2
z = y * y
# z is not a scalar, so backward() needs an explicit gradient argument
gradient = torch.ones(2, 2)
z.backward(gradient)
print(x.grad)


returns



Variable containing:
  6   8
 10  12
[torch.FloatTensor of size 2x2]
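
This matches the hand computation from the chain rule: dz/dx = dz/dy * dy/dx = 2 * y * 1 = 2 * (x + 2), which gives 6, 8, 10, 12 at x = 1, 2, 3, 4. As a sanity check, here is a small sketch that compares the two (expected_grad is just an illustrative name, not a PyTorch API):

import torch
from torch.autograd import Variable
x = Variable(torch.FloatTensor([[1,2],[3,4]]), requires_grad=True)
y = x + 2
z = y * y
z.backward(torch.ones(2, 2))
# chain rule: dz/dx = 2 * y = 2 * (x + 2)
expected_grad = 2 * (x.data + 2)
print(expected_grad)  # 6, 8, 10, 12
print(x.grad.data)    # should match on a current installation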


Update your PyTorch installation. The autograd tutorial in the official documentation explains how autograd, which handles gradient calculation in PyTorch, works.
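
For reference, here is a minimal sketch of the same computation on a current PyTorch release (0.4 or later), where plain tensors replace Variable:

import torch
# requires_grad is set directly on the tensor; Variable is deprecated
x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
y = x + 2
z = y * y
# z is not a scalar, so backward() takes an explicit gradient argument:
# the vector v in the vector-Jacobian product v^T J
z.backward(torch.ones(2, 2))
print(x.grad)  # tensor([[ 6.,  8.], [10., 12.]])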
