import praw
from json import dumps, JSONEncoder

reddit = praw.Reddit(username='',
                     password='',
                     client_id='',
                     client_secret='',
                     user_agent='')

comment = reddit.comment('em3rygg')
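# The preview cuts off here. Given the dumps/JSONEncoder imports, a plausible
# continuation serializes the comment's attributes to JSON; a minimal sketch,
# assuming anything JSON can't handle natively is stringified:
class CommentEncoder(JSONEncoder):
    def default(self, obj):
        # fall back to the string form for nested PRAW objects (author, etc.)
        return str(obj)

print(dumps(vars(comment), cls=CommentEncoder, indent=2))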

#!/bin/bash

### steps ####
# Verify the system has a CUDA-capable GPU
# Download and install the NVIDIA CUDA Toolkit and cuDNN
# Set up environment variables
# Verify the installation
###

### to verify your GPU is CUDA-enabled, check
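# (continuation sketch; the preview is truncated here) the usual check is to
# list NVIDIA devices on the PCI bus:
lspci | grep -i nvidia

# if nothing is listed, refresh the PCI id database and try again:
# sudo update-pciids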

from sklearn import linear_model
import numpy as np
import scipy.stats as stat

class LogisticReg:
    """
    Wrapper class for logistic regression which keeps the usual sklearn
    instance in an attribute self.model, and the p-values, z-scores and
    estimated errors for each coefficient in additional attributes.
    """
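
# A minimal sketch of the statistics the docstring describes, assuming the
# standard Wald test: standard errors come from the inverse Fisher information
# at the fitted coefficients, and p-values from two-sided z-tests. The helper
# name and its placement outside the class are illustrative assumptions.
def wald_p_values(model, X):
    probs = model.predict_proba(X)[:, 1]
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend intercept column
    W = probs * (1.0 - probs)                       # Bernoulli variances
    fisher = (X1 * W[:, None]).T @ X1               # X^T diag(W) X
    cov = np.linalg.inv(fisher)                     # asymptotic covariance
    coefs = np.concatenate([model.intercept_, model.coef_.ravel()])
    z = coefs / np.sqrt(np.diag(cov))               # z-score per coefficient
    return 2.0 * stat.norm.sf(np.abs(z))            # two-sided p-values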

import numpy as np
from collections import OrderedDict

def pystan_vb_extract(results):
    param_specs = results['sampler_param_names']
    samples = results['sampler_params']
    n = len(samples[0])

    # first pass: calculate the shape of each parameter
    param_shapes = OrderedDict()
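    # (continuation sketch; the preview is truncated here) PyStan's ADVI output
    # names indexed parameters like 'beta[2,3]'; a plausible first pass parses
    # those 1-based indices to infer each parameter's shape:
    for param_spec in param_specs:
        name, _, idx_part = param_spec.partition('[')
        if idx_part:
            idxs = np.array([int(i) for i in idx_part.rstrip(']').split(',')])
        else:
            idxs = np.array([], dtype=int)   # scalar parameter
        # the largest index seen per dimension is that dimension's size
        param_shapes[name] = np.maximum(idxs, param_shapes.get(name, 0))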

import keras.backend as K
import multiprocessing
import tensorflow as tf
from gensim.models.word2vec import Word2Vec
from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv1D
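# A rough sketch of where imports like these typically lead: a small 1D CNN
# for text classification over Word2Vec embeddings. Every hyperparameter and
# shape below is an illustrative assumption, not a value from the gist.
max_seq_len, embedding_dim = 100, 300     # assumed input geometry

model = Sequential()
model.add(Conv1D(32, kernel_size=3, activation='relu',
                 input_shape=(max_seq_len, embedding_dim)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])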

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seqs = ['gigantic_string', 'tiny_str', 'medium_str']

# make <pad> idx 0
vocab = ['<pad>'] + sorted(set(''.join(seqs)))

# make model
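# (continuation sketch; the preview stops at "make model") one plausible next
# step: vectorize each string, pad to the longest one, and push the batch
# through an embedding + LSTM via pack_padded_sequence. The embedding and
# hidden sizes below are illustrative assumptions.
embed = nn.Embedding(len(vocab), 4)
lstm = nn.LSTM(4, 5, batch_first=True)

vectorized = [[vocab.index(ch) for ch in seq] for seq in seqs]
lengths = torch.tensor([len(v) for v in vectorized])
padded = torch.zeros(len(vectorized), int(lengths.max()), dtype=torch.long)
for i, v in enumerate(vectorized):
    padded[i, :len(v)] = torch.tensor(v)

packed = pack_padded_sequence(embed(padded), lengths,
                              batch_first=True, enforce_sorted=False)
out, _ = lstm(packed)
unpacked, out_lengths = pad_packed_sequence(out, batch_first=True)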
""" | |
### Problem Statement ### | |
Let's say you have a square matrix which consists of cosine similarities (values between 0 and 1). | |
This square matrix can be of any size. | |
You want to get clusters which maximize the values between elemnts in the cluster. | |
For example, for the following matrix: | |
| A | B | C | D | |
A | 1.0 | 0.1 | 0.6 | 0.4 | |
B | 0.1 | 1.0 | 0.1 | 0.2 |
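"""

# One standard way to attack this (a sketch, not necessarily the solution the
# gist itself develops): sklearn's AffinityPropagation accepts a precomputed
# similarity matrix directly and does not need the cluster count up front.
# The C and D rows below are assumed values, since the preview truncates the
# example matrix.
import numpy as np
from sklearn.cluster import AffinityPropagation

S = np.array([[1.0, 0.1, 0.6, 0.4],
              [0.1, 1.0, 0.1, 0.2],
              [0.6, 0.1, 1.0, 0.3],
              [0.4, 0.2, 0.3, 1.0]])

labels = AffinityPropagation(affinity='precomputed', random_state=0).fit_predict(S)
print(labels)   # one cluster index per element A..D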

from keras.layers import LSTM   # import assumed; not shown in the preview

class AttentionLSTM(LSTM):
    """LSTM with attention mechanism

    This is an LSTM that incorporates an attention mechanism into its hidden
    states. Currently, the context vector calculated from the attended vector
    is fed into the model's internal states, closely following the model by
    Xu et al. (2016, Sec. 3.1.2), using a soft attention model following
    Bahdanau et al. (2014).

    The layer expects two inputs instead of the usual one:
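    """

# A minimal sketch of the soft-attention step the docstring references
# (Bahdanau-style additive attention), written with Keras backend ops. This
# illustrates the mechanism only; the function and weight names are
# assumptions, not the gist's actual implementation.
import keras.backend as K

def soft_attention(h, attended, W_h, W_a, v):
    # h: (batch, hidden) current state; attended: (batch, time, feat)
    scores = K.dot(K.tanh(K.expand_dims(K.dot(h, W_h), 1)
                          + K.dot(attended, W_a)), v)    # (batch, time, 1)
    alpha = K.softmax(K.squeeze(scores, -1))             # attention weights
    context = K.batch_dot(alpha, attended, axes=(1, 1))  # weighted sum over time
    return context, alpha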

import numpy as np
from keras.layers import GRU, initializations, K
from collections import OrderedDict

class GRULN(GRU):
    '''Gated Recurrent Unit with Layer Normalization

    The current implementation only works with consume_less = 'gpu', which is
    already set.

    # Arguments
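    '''

# The transform the class name refers to: layer normalization rescales each
# sample's activations by their own mean and standard deviation, with a
# learned gain and bias. A minimal NumPy sketch of the idea (illustrative,
# not the gist's in-graph implementation):
def layer_norm(x, gain, bias, eps=1e-5):
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return gain * (x - mean) / (std + eps) + bias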