""" | |
PyTorch Guided Filter for multi-channel (color) guide image and 1 channel | |
(grayscale) source image | |
""" | |
import torch as T
import torch.nn as nn
def box_filter_1d(tensor, dim, r):
    cs = tensor.cumsum(dim).transpose(dim, 0)
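The preview cuts off after the cumulative sum. As a hedged illustration of where that cumsum is headed, here is a NumPy sketch of an edge-clamped 1-D box filter built the same way; the body is my reconstruction of the standard technique, not the gist's actual code:

```python
import numpy as np

def box_filter_1d(x, r):
    # Sliding-window sum of radius r via one cumulative sum:
    # out[i] = sum(x[max(i-r, 0) : min(i+r, n-1) + 1]).
    n = x.shape[0]
    idx = np.arange(n)
    cs = np.cumsum(x)
    hi = cs[np.minimum(idx + r, n - 1)]
    # Subtract the prefix sum just before the window; zero at the left edge.
    lo = np.where(idx - r - 1 >= 0, cs[np.maximum(idx - r - 1, 0)], 0.0)
    return hi - lo
```

The cumsum trick makes the window sum O(1) per pixel regardless of r, which is the key to the guided filter's speed.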
# this is a quick ipython script to download all pre-trained pytorch models.
# run like this:
# ipython THIS_SCRIPT.ipy
import torchvision as tv, types, os.path
# get list of urls ... in a brittle way.
x = {k: v
     for dct in [getattr(y, 'model_urls')
                 for y in (getattr(tv.models, x) for x in dir(tv.models))
                 if isinstance(y, types.ModuleType) and hasattr(y, 'model_urls')]
     for k, v in dct.items()}
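The download step itself is not shown in this preview. A plausible stdlib-only sketch of that step (the destination directory and skip-if-present behavior are my assumptions, not the gist's):

```python
import os
import urllib.request

def download_urls(urls, dest_dir):
    # Fetch each checkpoint URL into dest_dir, skipping files that
    # already exist so the script can be re-run safely.
    os.makedirs(dest_dir, exist_ok=True)
    paths = []
    for name, url in urls.items():
        path = os.path.join(dest_dir, os.path.basename(url))
        if not os.path.exists(path):
            urllib.request.urlretrieve(url, path)
        paths.append(path)
    return paths
```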
#!/usr/bin/env bash
# This script installs leptonica and tesseract from source
# into a custom location; it does not install other prerequisites.
# side note: install prefix is defined once per library.
# side note: it clones git repositories in the current directory.
set -e
set -u
import java.util.HashMap;
/* A weighted counter that remembers the most frequent and recent pairs on a 2-color graph, where:
 * - any pair (a_i, b_i) contains an element a_i from set A and an element b_i from set B. A and B are disjoint.
 *
 * This counter basically implements a recurrence relation to maintain scores for each pair:
 *     score = memory * prev_score + (1-memory) * (+/-)1
 *
 * "memory" is a value between 0 and 1 that chooses how much history to take into account.
 *
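The recurrence above is an exponentially weighted moving average over +/-1 observations. A minimal Python sketch of just that update rule (the gist itself is Java; these names are mine):

```python
def update_score(prev_score, memory, positive):
    # memory in [0, 1]: 1.0 keeps all history, 0.0 keeps none.
    # The new observation contributes +1 (seen together) or -1 (not).
    return memory * prev_score + (1 - memory) * (1 if positive else -1)
```

With memory close to 1, a single contrary observation barely moves the score, which is what lets the counter favor pairs that are both frequent and recent.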
// these double forward slashes are comments. you can write whatever you
// want in the text after the double slashes.
x = 10;
y = 20;
z = 5;
cube([x,y,z], center=true);
cube([x,y,z], center=false);
""" | |
This example demonstrates a distributed algorithm to identify the | |
percentile of a distributed data set. | |
Because this is a toy implementation, the data isn't actually | |
distributed across multiple machines. | |
""" | |
import numpy as np |
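The body of the algorithm is not shown in this preview. One plausible approach to the problem it describes — not necessarily the gist's — is bisection over the value range, where each "machine" only ever reports a count of its own elements below a candidate value:

```python
import numpy as np

def distributed_percentile(chunks, q, tol=1e-6):
    # chunks: a list of arrays standing in for data on separate machines.
    # Each round, every chunk reports one scalar (a count), so the
    # per-round communication is O(#machines), independent of data size.
    lo = min(c.min() for c in chunks)
    hi = max(c.max() for c in chunks)
    n = sum(c.size for c in chunks)
    target = q / 100.0 * n
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        count = sum(int((c <= mid).sum()) for c in chunks)
        if count < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

The loop runs O(log(range/tol)) rounds, so the scheme trades a few extra passes over the data for never having to move the data itself.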
""" | |
This gist demonstrates that spark 1.0.0 and 0.9.1 | |
don't serialize a logger instance properly when code runs on workers. | |
run this code via: | |
spark-submit spark_serialization_demo.py | |
- or - | |
pyspark spark_serialization_demo.py | |
""" | |
import pyspark |
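A common workaround for this class of problem (my sketch, not something the gist shows) is to drop the un-picklable logger when the closure is serialized and re-create it lazily on the worker:

```python
import logging

class WorkerLogger:
    # Pickle-safe logger holder: the underlying logging.Logger is
    # discarded on serialization and re-built on first use.
    def __init__(self, name):
        self.name = name
        self._log = None

    @property
    def log(self):
        if self._log is None:
            self._log = logging.getLogger(self.name)
        return self._log

    def __getstate__(self):
        # Ship only the name; the Logger object itself never crosses
        # the driver/worker boundary.
        return {'name': self.name, '_log': None}
```

An instance of this class can safely ride along inside a Spark closure, since pickling it transmits only a string.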
# My SparkR install notes. SparkR gives R access to Apache Spark.
#
# For details about SparkR, see their site:
# http://amplab-extras.github.io/SparkR-pkg/
#
# Author: Alex Gaudio <[email protected]>
#
# Note the awful hack where I symlink libjvm.so to /usr/lib. I did that to get rJava installed.
"""A starcluster plugin that registers starcluster nodes in DNS | |
It assumes that nsupdate command can be run from the same machine where you run this plugin. | |
""" | |
from starcluster import clustersetup | |
import subprocess | |
from os.path import join | |
# You should configure these to your needs | |
DNS_ZONE = "example.com" |
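The plugin body is not shown in this preview. For context on how such a plugin would drive nsupdate: the command reads a small script on stdin, one `update add` per record followed by `send`. A hedged sketch of building that script (hostname, IP, and server here are placeholders, not values from the gist):

```python
def nsupdate_script(zone, host, ip, ttl=60, server="127.0.0.1"):
    # Build the stdin script for nsupdate(8): it expects
    # "update add <fqdn> <ttl> A <ip>" lines terminated by "send".
    return "\n".join([
        "server %s" % server,
        "update add %s.%s %d A %s" % (host, zone, ttl, ip),
        "send",
    ])
```

The script text would then be fed to the tool with something like `subprocess.Popen(["nsupdate"], stdin=subprocess.PIPE)`, which matches the gist's `subprocess` import.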