Last active: June 8, 2017 01:49
udacity neural network course -- assignment 3.4, 3-layer NN with regularization and dropout
@zhuanquan I would suggest initializing your weight variables with a standard deviation between 0.1 and 0.2, e.g. weights = tf.Variable(tf.truncated_normal([size], stddev=stdvalue))
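For reference, a minimal NumPy sketch of what that initialization produces (the layer shape here is hypothetical, not from the gist; the truncation-at-2σ loop mimics what tf.truncated_normal does):

```python
import numpy as np

# Hypothetical layer shape; any (fan_in, fan_out) behaves the same way.
fan_in, fan_out = 784, 1024
stddev = 0.1  # suggested range: 0.1 to 0.2

# Truncated-normal-style init: redraw any sample that falls more than
# two standard deviations from the mean, as tf.truncated_normal does.
weights = np.random.normal(0.0, stddev, size=(fan_in, fan_out))
mask = np.abs(weights) > 2 * stddev
while mask.any():
    weights[mask] = np.random.normal(0.0, stddev, size=mask.sum())
    mask = np.abs(weights) > 2 * stddev

print(weights.std())  # close to stddev, slightly smaller due to truncation
```

Keeping the initial weights small like this helps avoid saturating the activations early in training.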
Why do you use np.random.choice(np.arange(5)) instead of just np.random.choice(5)? Just looking at the docs (https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html) and wondering what I am missing. Are they the same or slightly different? Or is this way just easier to understand?
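They appear to be equivalent: per the docs, when np.random.choice is given an int a, it samples as if from np.arange(a). A quick check (just a demonstration, not code from the gist):

```python
import numpy as np

np.random.seed(42)
a = np.random.choice(5)             # int argument: sampled as if from arange(5)

np.random.seed(42)
b = np.random.choice(np.arange(5))  # explicit array, same distribution

print(a == b)  # identical draws given the same seed
```

So the np.arange version is likely just for readability.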
@zhuanquan I would assume the losses are being computed incorrectly somewhere. If I drop your initial learning rate by an order of magnitude or more, it begins to minimize; it will not work for me either when I start at 0.5.