@SnowMasaya
Last active April 15, 2018 23:52
Getting started with chatbots in PyTorch — ref: https://qiita.com/GushiSnow/items/f5ed09fd37315357eeae
Z^l = \tanh(W^{l}_z * X^l + V^{l}_z \tilde{h}^{l}_T) \\
F^l = \sigma(W^{l}_f * X^l + V^{l}_f \tilde{h}^{l}_T) \\
O^l = \sigma(W^{l}_o * X^l + V^{l}_o \tilde{h}^{l}_T)
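A minimal numpy sketch of the three gate equations above, assuming width-1 convolutions (so `W * X` reduces to a matrix multiply) and an encoder final state `h_tilde` broadcast over all timesteps; the sizes, weight names, and the final f-pooling loop are illustrative assumptions, not the article's exact code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: hidden dim d, sequence length T.
d, T = 4, 5
rng = np.random.default_rng(0)

X = rng.standard_normal((T, d))    # layer input X^l, one row per timestep
h_tilde = rng.standard_normal(d)   # encoder's final hidden state \tilde{h}^l_T
W_z, W_f, W_o = (rng.standard_normal((d, d)) for _ in range(3))
V_z, V_f, V_o = (rng.standard_normal((d, d)) for _ in range(3))

# Gate activations: a width-1 "convolution" over X (a plain matmul here)
# plus a term from the encoder state, broadcast over timesteps.
Z = np.tanh(X @ W_z.T + h_tilde @ V_z.T)
F = sigmoid(X @ W_f.T + h_tilde @ V_f.T)
O = sigmoid(X @ W_o.T + h_tilde @ V_o.T)

# f-pooling: c_t = f_t * c_{t-1} + (1 - f_t) * z_t ;  h_t = o_t * c_t
c = np.zeros(d)
for t in range(T):
    c = F[t] * c + (1 - F[t]) * Z[t]
    h = O[t] * c
```

Because the gates at every timestep depend only on `X` and the fixed encoder state, they can be computed in parallel; only the cheap elementwise pooling loop is sequential.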
# Gather each sequence's state at its last valid timestep (input_len - 1)
# from the padded (batch, time, hidden) tensors, then stash them per layer.
c_last = c[range(len(inputs)), (input_len - 1).data, :]
h_last = h[range(len(inputs)), (input_len - 1).data, :]
cell_states.append(c_last)
hidden_states.append((h_last, h))
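The paired-index gather used above can be shown with plain numpy fancy indexing; the shapes and lengths here are made-up examples:

```python
import numpy as np

# c has shape (batch, time, hidden); input_len holds each sequence's true
# (unpadded) length.
batch, T, d = 3, 5, 2
c = np.arange(batch * T * d, dtype=float).reshape(batch, T, d)
input_len = np.array([5, 2, 4])

# Pair each batch index i with its own last valid timestep input_len[i] - 1,
# producing one (hidden,) row per sequence.
c_last = c[np.arange(batch), input_len - 1, :]
```

Indexing with two integer arrays of equal length selects element-wise pairs (one timestep per batch item), rather than the cross product that two slices would give.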
c, h = layer(h, state, memory)  # run one QRNN layer on the previous layer's output
(conv_memory, attention_memory) = memory  # split the encoder memory for the decoder
Z, F, O = self._conv_step(inputs, conv_memory)  # compute the three gates
if memory is not None:
    # project the encoder memory to the gate dimension and broadcast it over time
    gates = gates + self.conv_linear(memory).unsqueeze(1)
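The `unsqueeze(1)` broadcast in the last line can be sketched in numpy, where inserting a `None` axis plays the same role; the weight matrix and sizes are hypothetical:

```python
import numpy as np

# gates: (batch, time, 3 * hidden) pre-activations from the convolution.
# memory: (batch, hidden) summary of the encoder; a linear map projects it
# to the gate dimension, and the added time axis lets it broadcast.
batch, T, d = 2, 4, 3
rng = np.random.default_rng(1)
gates = rng.standard_normal((batch, T, 3 * d))
memory = rng.standard_normal((batch, d))
conv_linear = rng.standard_normal((3 * d, d))  # hypothetical weight matrix

proj = memory @ conv_linear.T          # (batch, 3 * hidden)
out = gates + proj[:, None, :]         # None acts like unsqueeze(1): (batch, 1, 3 * hidden)
```

Every timestep of a given sequence receives the same memory-derived offset, which is exactly what broadcasting along the new length-1 time axis achieves.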
pytorch: {what you want to talk about}