Nikolay Kolev (nikolay)
@jwbee
jwbee / jq.md
Last active April 27, 2025 11:31
Make Ubuntu packages 90% faster by rebuilding them


TL;DR

You can take the same source code package that Ubuntu uses to build jq, compile it again, and realize 90% better performance.
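A hedged sketch of what such a local rebuild can look like on Ubuntu; the optimization flags shown (`-O3 -march=native`) are an illustrative assumption, not necessarily the flags the gist uses:

# fetch the jq source package and its build dependencies (needs deb-src entries enabled)
apt-get source jq
sudo apt-get build-dep jq
cd jq-*/
# rebuild the binary package with extra compiler flags appended via dpkg-buildflags
DEB_CFLAGS_APPEND="-O3 -march=native" dpkg-buildpackage -b -uc -us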

Setting

I use jq for processing GeoJSON files and other open data offered in JSON format. Today I am working with a 500MB GeoJSON file that contains the Alameda County Assessor's parcel map. I want to run a query that prints the city for every parcel worth more than a threshold amount. The program is
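The actual filter isn't shown in this listing, but as an illustrative sketch of the shape of such a query (the property names `value` and `city`, the threshold, and the file name are hypothetical, not taken from the gist):

# print the city of every feature whose assessed value exceeds the threshold
jq -r '.features[] | select(.properties.value > 1000000) | .properties.city' parcels.geojson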

@peppergrayxyz
peppergrayxyz / qemu-vulkan-virtio.md
Last active April 29, 2025 15:06
QEMU with VirtIO GPU Vulkan Support


With its latest release, QEMU added the Venus patches, so virtio-gpu now supports Venus encapsulation for Vulkan. This is one more piece of the puzzle towards full Vulkan support.

A now-outdated blog post by Collabora described in 2021 how to enable 3D acceleration of Vulkan applications in QEMU through Venus, the experimental Vulkan driver for VirtIO-GPU, using a local development environment. Following up on that write-up, this is how it's done today.
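As a rough, hedged sketch of where this ends up, a guest can be launched with Venus enabled along these lines (the exact device properties and their spellings vary between QEMU versions and are an assumption here; the host also needs a Venus-capable Vulkan driver and a virglrenderer built with Venus support):

qemu-system-x86_64 \
  -enable-kvm -m 4G \
  -object memory-backend-memfd,id=mem1,size=4G \
  -machine q35,memory-backend=mem1 \
  -device virtio-vga-gl,hostmem=4G,blob=true,venus=true \
  -display sdl,gl=on \
  -drive file=guest.qcow2   # hypothetical guest image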

Definitions

Let's start with a brief description of the projects mentioned in that post and extend the list:

@lurebat
lurebat / README.md
Last active November 1, 2024 07:36
Jetmove - A script to enhance navigation and multiple carets in JetBrains IDEs

What is this?

Jetmove is a script I wrote for myself to add some navigation and multiple-caret features I felt were missing in JetBrains IDEs.

It uses the excellent Live Plugin to run the script in the IDE.

All features support multiple carets and are designed with them in mind.

Features

@rain-1
rain-1 / llama-home.md
Last active April 24, 2025 06:41
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text prediction model similar to GPT-2 and to the version of GPT-3 that has not yet been fine-tuned. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
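A hedged sketch of the build-and-run steps as they worked around that commit (the model path and layer count are placeholders, and the cuBLAS build flag and `-ngl` option may have changed in later versions):

# build with cuBLAS so transformer layers can be offloaded to the GPU
make LLAMA_CUBLAS=1
# run the quantized 13B model, offloading ~30 layers to the GPU
./main -m ./models/13B/ggml-model-q4_0.bin -ngl 30 -n 128 -p "Hello"

The more layers you offload with `-ngl`, the more VRAM is used; tune the number so the model fits on a 6GB card.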
@kconner
kconner / macOS Internals.md
Last active May 1, 2025 02:34
macOS Internals


Understand your Mac and iPhone more deeply by tracing the evolution of Mac OS X from prerelease to Swift. John Siracusa delivers the details.

Starting Points

How to use this gist

You've got two main options:

@prologic
prologic / LearnGoIn5mins.md
Last active February 12, 2025 06:52
Learn Go in ~5mins
@gullyn
gullyn / flappy.html
Last active January 24, 2025 00:41
Flappy bird in 205 bytes (improved!)
<body onload=z=c.getContext`2d`,setInterval(`c.width=W=150,Y<W&&P<Y&Y<P+E|9<p?z.fillText(S++${Y=`,9,9|z.fillRect(p`}*0,Y-=--M${Y+Y},P+E,9,W),P))):p=M=Y=S=6,p=p-6||(P=S%E,W)`,E=49) onclick=M=9><canvas id=c>
@andy-thomason
andy-thomason / Genomics_A_Programmers_Guide.md
Created May 14, 2019 13:32
Genomics: a programmer's introduction

Genomics - A programmer's guide.

Andy Thomason is a Senior Programmer at Genomics PLC. He has been writing graphics systems, games, and compilers since the '70s and specialises in code performance.

https://www.genomicsplc.com

@seven1m
seven1m / open_source_church_software.md
Last active March 12, 2025 10:30
List of Open Source Church Software (NO LONGER MAINTAINED)
@mumoshu
mumoshu / helmify-kustomize
Last active December 13, 2024 13:20
Run `helmify-kustomize build $chart $env` to generate a local helm chart at `$chart/` from the kustomize overlay at `${chart}-kustomize/overlays/$env`
#!/usr/bin/env bash
# helmify-kustomize: generate a local helm chart from a kustomize overlay
cmd=$1                  # subcommand (e.g. "build")
chart=$2                # target chart name / output directory
env=$3                  # kustomize overlay environment (e.g. "dev", "prod")
dir=${chart}-kustomize  # directory holding the kustomize overlays
chart=${chart/.\//}     # strip a leading "./" from the chart name
build() {