@mcarilli
mcarilli / gradient_accumulation.py
Last active June 30, 2023 12:21
Minimal example of gradient accumulation, allreducing only on step() iterations and interacting properly with torch.cuda.amp
# For single-node, run this script via
# python -m torch.distributed.launch --nproc_per_node=<ngpus this node> example.py
#
# For multinode, see https://pytorch.org/docs/stable/distributed.html#launch-utility
#
# Example showing native mixed precision tools
# (torch.cuda.amp.GradScaler and torch.cuda.amp.autocast)
# used along with native DistributedDataParallel to perform
# gradient accumulation with allreduces only when stepping.
#
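A minimal sketch of the accumulation pattern this gist describes, assuming `model` is already wrapped in DistributedDataParallel and that `loader`, `optimizer`, `loss_fn`, and `iters_to_accumulate` are defined elsewhere (names here are illustrative, not the gist's):

import contextlib
import torch

scaler = torch.cuda.amp.GradScaler()
for i, (data, target) in enumerate(loader):
    # Suppress the allreduce on iterations that will not step the optimizer.
    sync_ctx = contextlib.nullcontext() if (i + 1) % iters_to_accumulate == 0 else model.no_sync()
    with sync_ctx:
        with torch.cuda.amp.autocast():
            loss = loss_fn(model(data), target) / iters_to_accumulate
        scaler.scale(loss).backward()
    if (i + 1) % iters_to_accumulate == 0:
        scaler.step(optimizer)   # unscales gradients, skips the step if infs/NaNs are found
        scaler.update()
        optimizer.zero_grad()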
@alper111
alper111 / lap_pyramid_loss.py
Last active March 19, 2025 10:42
PyTorch implementation of Laplacian pyramid loss
# MIT License
#
# Copyright (c) 2024 Alper Ahmetoglu
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
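The preview above shows only the license header; below is a simplified sketch of the idea, with average pooling standing in for the Gaussian blur the gist itself builds (function name and level count are illustrative):

import torch
import torch.nn.functional as F

def laplacian_pyramid_loss(x, y, levels=3):
    """Sum of L1 distances between Laplacian pyramid bands of x and y."""
    loss = 0.0
    for _ in range(levels):
        # Blur + downsample (average pooling is a stand-in for a Gaussian kernel).
        down_x, down_y = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
        up_x = F.interpolate(down_x, size=x.shape[-2:], mode='bilinear', align_corners=False)
        up_y = F.interpolate(down_y, size=y.shape[-2:], mode='bilinear', align_corners=False)
        # The Laplacian band is the detail lost by downsampling.
        loss = loss + F.l1_loss(x - up_x, y - up_y)
        x, y = down_x, down_y
    return loss + F.l1_loss(x, y)  # coarsest residual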
@oscarknagg
oscarknagg / projected_gradient_descent.py
Last active November 25, 2023 03:52
Gist for projected gradient descent adversarial attack using PyTorch
import torch

def projected_gradient_descent(model, x, y, loss_fn, num_steps, step_size, step_norm, eps, eps_norm,
                               clamp=(0,1), y_target=None):
    """Performs the projected gradient descent attack on a batch of images."""
    x_adv = x.clone().detach().requires_grad_(True).to(x.device)
    targeted = y_target is not None
    num_channels = x.shape[1]

    for i in range(num_steps):
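        # A hedged sketch of how the loop body could continue; the gist itself also
        # handles L2 step_norm/eps_norm, while only the 'inf' case is shown here.
        prediction = model(x_adv)
        loss = loss_fn(prediction, y_target if targeted else y)
        loss.backward()
        with torch.no_grad():
            step = step_size * x_adv.grad.sign()
            # Gradient descent towards y_target for targeted attacks, ascent otherwise.
            x_adv = x_adv - step if targeted else x_adv + step
            # Project back into the eps-ball around x, then into the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(*clamp)
        x_adv = x_adv.detach().requires_grad_(True)
    return x_adv.detach()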
@gngdb
gngdb / Least Squares in PyTorch.ipynb
Last active April 27, 2020 04:03
Least Squares in PyTorch
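The notebook itself does not render in this view; as a stand-in, here is a minimal sketch of solving a least-squares problem with current PyTorch (the notebook, dating from 2020, may use the older torch.lstsq/torch.gels API instead):

import torch

# Solve min_w ||A w - b||_2 for a random overdetermined system.
A = torch.randn(100, 5)
b = torch.randn(100, 1)

w = torch.linalg.lstsq(A, b).solution    # direct least-squares solver
w_pinv = torch.linalg.pinv(A) @ b        # same solution via the pseudo-inverse

print(torch.allclose(w, w_pinv, atol=1e-5))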
@guillefix
guillefix / lc.py
Last active May 10, 2020 14:00
temporary workaround to get Conv2dLocal to work in PyTorch
# coding: utf-8
# In[1]:
import math
import torch
from torch.nn.parameter import Parameter
import torch.nn.functional as F
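The preview cuts off after the imports; below is a sketch of one way to build a locally connected (untied-weight) 2D layer with F.unfold, using only the modules imported above. The class name, initialization, and interface are illustrative and not necessarily what the gist's Conv2dLocal workaround does:

class LocallyConnected2d(torch.nn.Module):
    def __init__(self, in_channels, out_channels, in_size, kernel_size, stride=1):
        super().__init__()
        out_h = (in_size[0] - kernel_size) // stride + 1
        out_w = (in_size[1] - kernel_size) // stride + 1
        self.kernel_size, self.stride, self.out_size = kernel_size, stride, (out_h, out_w)
        # One independent filter per output location: (locations, out_channels, in_channels*k*k).
        fan_in = in_channels * kernel_size ** 2
        self.weight = Parameter(torch.randn(out_h * out_w, out_channels, fan_in) / math.sqrt(fan_in))

    def forward(self, x):
        # patches: (N, in_channels*k*k, L) with L = out_h*out_w locations.
        patches = F.unfold(x, self.kernel_size, stride=self.stride)
        # Per-location matrix multiply: (N, K, L) x (L, O, K) -> (N, L, O).
        out = torch.einsum('nkl,lok->nlo', patches, self.weight)
        return out.permute(0, 2, 1).reshape(x.shape[0], -1, *self.out_size)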
@marta-sd
marta-sd / tf_merge.py
Last active February 10, 2022 10:08
Merge two models in TensorFlow
import tensorflow as tf
# 1. Create and save two graphs
# c = a*b
g1 = tf.Graph()
with g1.as_default():
    a = tf.placeholder(tf.float32, name='a')
    b = tf.Variable(initial_value=tf.truncated_normal((1,)), name='b')
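# A hedged sketch of the merge step that follows (TF1-style): import the first graph
# into a fresh combined graph, remapping its placeholder onto a new input tensor.
# The tensor name 'c:0' is an assumption about how the product op is named in g1,
# and the variable's weights would still need to be restored from a checkpoint,
# which the gist handles but this sketch omits.
g_combined = tf.Graph()
with g_combined.as_default():
    x = tf.placeholder(tf.float32, name='x')
    c, = tf.import_graph_def(g1.as_graph_def(), input_map={'a:0': x}, return_elements=['c:0'])
    # A second graph (e.g. d = c + e) can be imported the same way, feeding this `c`
    # in through its own input_map.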
@victor-shepardson
victor-shepardson / pytorch-glumpy.py
Last active February 23, 2025 17:12
using pycuda and glumpy to draw pytorch GPU tensors to the screen without copying to host memory
from contextlib import contextmanager
import numpy as np
import torch
from torch import Tensor, ByteTensor
import torch.nn.functional as F
from torch.autograd import Variable
import pycuda.driver
from pycuda.gl import graphics_map_flags
from glumpy import app, gloo, gl
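The preview stops at the imports; the core trick the gist builds on is registering a gloo texture with CUDA so a PyTorch GPU tensor can be copied into it device-to-device, never touching host memory. A sketch of two such helpers, using only the modules imported above (names and details are assumptions, not necessarily the gist's exact code):

def create_shared_texture(w, h, c=4, map_flags=graphics_map_flags.WRITE_DISCARD, dtype=np.uint8):
    """Create a glumpy Texture2D plus a pycuda view onto the same GPU memory."""
    tex = np.zeros((h, w, c), dtype).view(gloo.Texture2D)
    tex.activate()    # force gloo to actually allocate the texture on the GPU
    tex.deactivate()
    cuda_buffer = pycuda.gl.RegisteredImage(int(tex.handle), tex.target, map_flags)
    return tex, cuda_buffer

@contextmanager
def cuda_activate(img):
    """Map a RegisteredImage and yield the CUDA array behind it, unmapping afterwards."""
    mapping = img.map()
    yield mapping.array(0, 0)
    mapping.unmap()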
@willurd
willurd / web-servers.md
Last active May 20, 2025 11:28
Big list of http static server one-liners

Each of these commands will run an ad hoc http static server in your current (or specified) directory, available at http://localhost:8000. Use this power wisely.

Discussion on reddit.

Python 2.x

$ python -m SimpleHTTPServer 8000
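
The Python 3 standard library ships the same functionality as http.server:

Python 3.x

$ python3 -m http.server 8000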