Hi, I recently discovered this excellent repository for learning basic concepts in ML, and I noticed a potential problem in the implementation of the dropout wrapper. In particular, this is the line of code I am confused about:
From `numpy-ml/numpy_ml/neural_nets/wrappers/wrappers.py`, line 231 at commit `b0359af`:

```python
dLdy *= 1.0 / (1.0 - self.p)
```
Shouldn't the gradient from the later layer also be multiplied by the mask that was sampled in `forward()`? Otherwise `dLdy` will overestimate the true gradient after dividing by the keep probability `1.0 - self.p`, since the entries that were zeroed out in the forward pass still receive (rescaled) gradient. Not sure this is an issue though, as I am a beginner in ML.
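To make the concern concrete, here is a minimal sketch of inverted dropout in plain NumPy (not the repository's actual code; variable names like `mask` and `scaler` are my own), showing the backward pass reusing the mask saved in the forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.5                      # drop probability, analogous to self.p
X = rng.standard_normal((4, 3))

# Forward pass (inverted dropout): sample a binary keep-mask and
# rescale the surviving activations by 1 / (1 - p).
mask = (rng.random(X.shape) >= p).astype(X.dtype)
scaler = 1.0 / (1.0 - p)
Y = X * mask * scaler

# Backward pass: dY/dX is mask * scaler elementwise, so the incoming
# gradient must be masked as well as rescaled -- rescaling alone
# (dLdY * scaler) would leak gradient into the dropped units.
dLdY = np.ones_like(Y)
dLdX = dLdY * mask * scaler
```

Under this reading, the suspect line applies `scaler` but never re-applies `mask`, which is what the dropped-unit gradients would need to be zeroed correctly.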