Created: March 15, 2021 11:25
training model description
""" | |
time_steps_encoder is the number of frames per video we will be using for training | |
num_encoder_tokens is the number of features from each frame | |
latent_dim is the number of hidden features for lstm | |
time_steps_decoder is the maximum length of each sentence | |
num_decoder_tokens is the final number of tokens in the softmax layer | |
batch size | |
""" | |
from keras.layers import Dense, Input, LSTM
from keras.models import Model

time_steps_encoder = 80
num_encoder_tokens = 4096
latent_dim = 512
time_steps_decoder = 10
num_decoder_tokens = 1500
batch_size = 320
# Set up the encoder; only its final hidden and cell states are kept
encoder_inputs = Input(shape=(time_steps_encoder, num_encoder_tokens), name="encoder_inputs")
encoder = LSTM(latent_dim, return_state=True, return_sequences=True, name="encoder_lstm")
_, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]
# Set up the decoder, initialized with the encoder's final states
decoder_inputs = Input(shape=(time_steps_decoder, num_decoder_tokens), name="decoder_inputs")
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True, name="decoder_lstm")
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation="softmax", name="decoder_softmax")
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
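A model like this is typically trained with teacher forcing: the decoder receives the one-hot encoded caption as input and learns to predict the same sequence shifted left by one timestep. A minimal NumPy sketch of preparing one input/target pair under that assumption (`caption_ids` and the token ids are hypothetical; the gist does not include the tokenizer or feature extractor):

```python
import numpy as np

# Shapes mirroring the constants above.
time_steps_decoder, num_decoder_tokens = 10, 1500

# A hypothetical tokenized caption: ids for <bos>, two words, <eos>.
caption_ids = [1, 45, 7, 2]

# One-hot encode the caption into a decoder input tensor of shape
# (batch, time_steps_decoder, num_decoder_tokens); unused timesteps stay zero.
decoder_input_data = np.zeros((1, time_steps_decoder, num_decoder_tokens), dtype="float32")
for t, token_id in enumerate(caption_ids):
    decoder_input_data[0, t, token_id] = 1.0

# Teacher forcing: the target is the input shifted left by one timestep,
# so at step t the decoder is trained to predict token t+1.
decoder_target_data = np.zeros_like(decoder_input_data)
decoder_target_data[:, :-1, :] = decoder_input_data[:, 1:, :]

print(decoder_input_data.shape)  # (1, 10, 1500)
```

With a categorical cross-entropy loss, `model.fit([encoder_input_data, decoder_input_data], decoder_target_data)` would then train on batches of such pairs, where `encoder_input_data` holds the per-frame features of shape `(batch, 80, 4096)`.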