@yosemitebandit
Last active June 8, 2017 01:49
udacity neural network course -- assignment 3.4, 3-layer NN with regularization and dropout
@cipher982

@zhuanquan I would guess the losses are being computed incorrectly somewhere. If I drop your initial learning rate by an order of magnitude or more, it begins to minimize, but it will not work for me either when I start at 0.5.
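
For reference, a minimal sketch of what a smaller learning rate looks like in the TF1-style API this assignment used; the toy quadratic loss and the 0.05 value are illustrative assumptions, not taken from the notebook:

```python
import tensorflow as tf

# Toy scalar loss standing in for the assignment's cross-entropy loss.
w = tf.Variable(10.0)
loss = tf.square(w)

# 0.05 is an order of magnitude below the 0.5 that diverged in the thread.
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        _, current_loss = sess.run([train_step, loss])
    print(current_loss)  # steadily approaches 0 instead of blowing up
```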

@sahibzada-irfanullah

@zhuanquan I would suggest initializing your weight variables with a standard deviation between 0.1 and 0.2, e.g. weights = tf.Variable(tf.truncated_normal([size], stddev=stdvalue)). (Note that tf.Variable itself takes no stddev argument; the random draw comes from tf.truncated_normal.)
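
Spelled out for the assignment's network, a sketch of small-stddev initialization in the TF1-style API; the layer sizes below are placeholders, not the notebook's values:

```python
import tensorflow as tf

image_size, hidden_size, num_labels = 784, 1024, 10  # placeholder sizes

# A small stddev keeps the initial logits near zero, so the softmax
# cross-entropy starts in a well-conditioned region instead of saturating.
weights_1 = tf.Variable(
    tf.truncated_normal([image_size, hidden_size], stddev=0.1))
biases_1 = tf.Variable(tf.zeros([hidden_size]))
weights_2 = tf.Variable(
    tf.truncated_normal([hidden_size, num_labels], stddev=0.1))
biases_2 = tf.Variable(tf.zeros([num_labels]))
```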

@ashleylid

Why do you use np.random.choice(np.arange(5)) instead of just np.random.choice(5)? Looking at the docs (https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html), I am wondering what I am missing. Are they the same or slightly different, or is this way just easier to read?
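
Per the numpy docs, the two are equivalent: when the first argument is an int n, choice samples as if it had been given np.arange(n). A quick check (not from the gist) that both forms produce the same draw:

```python
import numpy as np

# An int first argument is treated as np.arange(n), so with the same
# seed both calls consume the RNG identically and return the same value.
np.random.seed(0)
a = np.random.choice(5)
np.random.seed(0)
b = np.random.choice(np.arange(5))
assert a == b
print(a, b)  # same value from both calls
```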
