import theano
import theano.tensor as T

def momentum_updates(cost, params, lr, mu):
    grads = T.grad(cost, params)
    updates = []
    for p, g in zip(params, grads):
        # velocity term, initialized to zeros with the same shape/dtype as p
        dp = theano.shared(p.get_value() * 0)
        # momentum step: decay the old velocity, then subtract the scaled gradient
        new_dp = mu*dp - lr*g
        new_p = p + new_dp
        updates.append((dp, new_dp))
        updates.append((p, new_p))
    return updates
I can't understand this line: dp = theano.shared(p.get_value() * 0). Why is p.get_value() multiplied by 0?
batch_norm_theano.py

 Site Admin
 Posts: 52
 Joined: Sat Jul 28, 2018 3:46 am
Re: batch_norm_theano.py
Thanks for your inquiry.
That's the momentum (velocity) term; it should start at 0. Check out the momentum section earlier in the course.
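To make the multiply-by-zero trick concrete: p.get_value() returns the parameter's current NumPy array, so multiplying it by 0 is just a convenient way to build a zero array with the same shape and dtype as the parameter, which is what the velocity should start as. A small NumPy sketch (NumPy stands in for the Theano shared variable here; the shape is made up for illustration):

```python
import numpy as np

# Hypothetical parameter value, e.g. a 3x4 weight matrix.
p_value = np.random.randn(3, 4).astype(np.float32)

# Equivalent of p.get_value() * 0: a zero array matching
# the parameter's shape and dtype, used to initialize the
# velocity dp for the momentum update.
dp_init = p_value * 0

print(dp_init.shape)          # (3, 4)
print(dp_init.dtype)          # float32
print(np.all(dp_init == 0))   # True
```

Passing this zero array into theano.shared then gives you a velocity variable that starts at 0 for every weight and gets updated each step by new_dp = mu*dp - lr*g.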