
@vahbuna
vahbuna / not_enough_data.md
Created April 5, 2023 23:05
references from Learning with not Enough Data - Lilian Weng

Semi-Supervised Learning

[1] Ouali, Hudelot & Tami. “An Overview of Deep Semi-Supervised Learning” arXiv preprint arXiv:2006.05278 (2020).

[2] Sajjadi, Javanmardi & Tasdizen. “Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning.” arXiv preprint arXiv:1606.04586 (2016).

[3] Pham et al. “Meta Pseudo Labels.” CVPR 2021.

[4] Laine & Aila. “Temporal Ensembling for Semi-Supervised Learning.” ICLR 2017.

[5] Tarvainen & Valpola. “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results.” NeurIPS 2017.

vahbuna / pymem.md
Last active June 3, 2025 18:16
Debugging PyTorch memory use with snapshots - Zach's Blog
vahbuna / cnlg.md
Created March 31, 2023 08:18
Controllable Neural Text Generation - Lilian Weng
vahbuna / fastai-nlp.md
Last active September 1, 2020 11:05
Notes on fastai nlp course
vahbuna / einsum.md
Last active January 14, 2025 12:13
Numpy Einstein Summation

Einsum

import numpy as np

A = np.array([[1, 1, 1],
              [2, 2, 2],
              [5, 5, 5]])

B = np.array([[0, 1, 0],
              [1, 1, 0],
              [1, 1, 1]])
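The arrays above set up an einsum demo but stop before using it; a minimal sketch of common `np.einsum` patterns with these same A and B:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [2, 2, 2],
              [5, 5, 5]])

B = np.array([[0, 1, 0],
              [1, 1, 0],
              [1, 1, 1]])

# Matrix product: the shared index j is summed out.
C = np.einsum('ij,jk->ik', A, B)  # equivalent to A @ B

# Trace: a repeated index with an empty output sums the diagonal.
t = np.einsum('ii->', A)  # 1 + 2 + 5 = 8

# Row sums: dropping j from the output sums over it.
r = np.einsum('ij->i', A)  # [3, 6, 15]

# Transpose: just permute the output indices, nothing is summed.
At = np.einsum('ij->ji', A)
```

The subscript string reads like index notation: indices appearing in the inputs but not after `->` are summed over, which is why one function covers products, traces, and reductions.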
vahbuna / mp_train.md
Last active January 14, 2025 12:13
Mixed Precision Training