@ctlllll
ctlllll / longest_chinese_tokens_gpt4o.py
Created May 13, 2024 19:53
Longest Chinese tokens in gpt4o
import tiktoken
import langdetect

T = tiktoken.get_encoding("o200k_base")
length_dict = {}
for i in range(T.n_vocab):
    try:
        length_dict[i] = len(T.decode([i]))
    except Exception:
        pass  # some token ids (e.g. special tokens) do not decode cleanly
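The gist preview cuts off at the except clause; a plausible continuation toward the stated goal, using the imported langdetect to keep Chinese tokens and rank them by length (the names below are guesses, not the gist's actual code):

# Hypothetical continuation: filter to Chinese tokens, sort by decoded length.
chinese_tokens = {}
for i in length_dict:
    s = T.decode([i])
    try:
        if langdetect.detect(s).startswith("zh"):
            chinese_tokens[i] = s
    except Exception:
        continue  # langdetect raises on strings with no detectable features
for i in sorted(chinese_tokens, key=lambda t: len(chinese_tokens[t]), reverse=True)[:20]:
    print(len(chinese_tokens[i]), repr(chinese_tokens[i]))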
@kalomaze
kalomaze / llm_samplers_explained.md
Last active May 5, 2025 04:39
LLM Samplers Explained

LLM Samplers Explained

Every time a large language model makes a prediction, all of the thousands of tokens in the vocabulary are assigned some degree of probability, from almost 0% to almost 100%. There are different ways you can decide which token to pick from those predictions. This process is known as "sampling", and there are various strategies you can use, which I will cover here.
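A minimal sketch of that selection step, assuming we already have a probability for every token id (illustrative only, not any particular library's API):

import random

def sample_token(probs):
    """Draw one token id from a categorical distribution over the vocabulary."""
    # probs[i] is the probability assigned to token id i; the list sums to ~1.0.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

print(sample_token([0.7, 0.2, 0.1]))  # usually 0, sometimes 1 or 2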

OpenAI Samplers

Temperature

  • Temperature controls the overall confidence of the model's scores (the logits): the logits are divided by the temperature value before the softmax. With a value below 1.0, the relative gaps between tokens widen (more deterministic); with a value above 1.0, they shrink (less deterministic). See the sketch after this list.
  • A temperature of 1.0 leaves the scores unchanged, so it reproduces the original distribution the model was trained to optimize for.
  • Graph demonstration with voiceover: https://files.catbox.moe/6ht56x.mp4
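A minimal sketch of temperature scaling, assuming the logits arrive as a plain list (illustrative only):

import math

def softmax_with_temperature(logits, temperature=1.0):
    """Divide the logits by the temperature, then softmax into probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 1.0))  # the original trained distribution
print(softmax_with_temperature(logits, 0.5))  # sharper: the top token dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: more randomness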
@RaphaelWimmer
RaphaelWimmer / btw5-switch.py
Last active January 16, 2025 15:38
Switch Creative BT-W5 between AptX Adaptive Low Latency and High Quality modes
#!/usr/bin/env python3
# Simple tool to switch the Creative BT-W5 Bluetooth audio dongle between AptX Adaptive **Low Latency** and **High Quality** modes.
# Of course, this only works with Bluetooth headphones that support AptX Adaptive, such as the Tranya X3.
# Reverse engineered from the communication between Creative's desktop app for Windows and the BT-W5.
# Might also accidentally overwrite other settings, since a whole config data array is sent without taking the existing config into account.
#
# Usage: sudo ./btw5-switch.py ll (for low-latency mode)
# sudo ./btw5-switch.py hq (for high-quality mode)
#
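The preview shows only the header; a minimal sketch of the CLI dispatch the usage lines imply (set_mode and its payload are hypothetical placeholders, not the reverse-engineered config values):

import sys

MODES = {"ll": "Low Latency", "hq": "High Quality"}

def set_mode(mode: str) -> None:
    # Placeholder: the real gist writes a reverse-engineered config array
    # to the dongle over USB HID here; that payload is not shown in this preview.
    raise NotImplementedError(f"would switch the BT-W5 to {MODES[mode]} mode")

if __name__ == "__main__":
    if len(sys.argv) != 2 or sys.argv[1] not in MODES:
        sys.exit("Usage: sudo ./btw5-switch.py ll|hq")
    set_mode(sys.argv[1])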
@veekaybee
veekaybee / normcore-llm.md
Last active June 5, 2025 08:38
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models

@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active April 7, 2025 18:27
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import os
import argparse
def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base_model_name_or_path", type=str)
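The preview stops inside get_args; a minimal hedged completion so the snippet parses, plus an illustrative load of the model named by the flag above (the gist's remaining training arguments are truncated here, not reconstructed):

    # ... the gist's remaining arguments are truncated in this preview ...
    return parser.parse_args()

if __name__ == "__main__":
    args = get_args()
    # Illustrative only: load the base Llama v2 model and tokenizer named above.
    tokenizer = AutoTokenizer.from_pretrained(args.base_model_name_or_path)
    model = AutoModelForCausalLM.from_pretrained(
        args.base_model_name_or_path, torch_dtype=torch.float16
    )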
@kconner
kconner / macOS Internals.md
Last active June 7, 2025 16:40
macOS Internals

macOS Internals

Understand your Mac and iPhone more deeply by tracing the evolution of Mac OS X from prerelease to Swift. John Siracusa delivers the details.

Starting Points

How to use this gist

You've got two main options:

@Hellisotherpeople
Hellisotherpeople / blog.md
Last active March 27, 2025 00:37
You probably don't know how to do Prompt Engineering, let me educate you.

You probably don't know how to do Prompt Engineering

(This post could also be titled "Features missing from most LLM front-ends that should exist")

Apologies for the snarky title, but there has been a huge amount of discussion around so-called "Prompt Engineering" these past few months on all kinds of platforms. Much of it comes from individuals peddling an awful lot of "Prompting" and very little "Engineering".

Most of these discussions are little more than users finding that writing more creative and complicated prompts can help them solve a task that a simpler prompt could not. I claim this is not Prompt Engineering. This is not to say that crafting good prompts is easy, but it does not involve any kind of sophisticated modification to the general "template" of a prompt.

Others, who I think do deserve to call themselves "Prompt Engineers" (and an awful lot more than that), have been writing about and utilizing the rich new ecosystem

@kinoc
kinoc / llama4openai-api.py
Created March 24, 2023 22:35
Flask-based endpoint to emulate OpenAI API endpoints using llama/alpaca and HF models
# A simple Flask API to emulate OpenAI's API using llama models and/or transformers
# runs on a 3080
import sys
import time
import torch
import json
from peft import PeftModel
from flask import Flask, make_response, request, abort
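The preview ends at the imports; a minimal sketch of the emulation pattern the description implies (the route shape follows OpenAI's /v1/completions; generate() is an illustrative placeholder, not the gist's actual llama handler):

app = Flask(__name__)

def generate(prompt: str) -> str:
    # Placeholder for the llama/alpaca generation call the gist wires up.
    return prompt

@app.route("/v1/completions", methods=["POST"])
def completions():
    body = request.get_json(force=True)
    if "prompt" not in body:
        abort(400)
    # Mirror the minimal shape of an OpenAI completion response.
    resp = make_response(json.dumps({"choices": [{"text": generate(body["prompt"])}]}))
    resp.headers["Content-Type"] = "application/json"
    return resp

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)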
@veekaybee
veekaybee / chatgpt.md
Last active June 1, 2025 16:08
Everything I understand about ChatGPT

ChatGPT Resources

Context

ChatGPT appeared like an explosion on all my social media timelines in early December 2022. While I keep up with machine learning as an industry, I wasn't focused so much on this particular corner, and all the screenshots seemed like they came out of nowhere. What was this model? How did the chat prompting work? What was the context of OpenAI doing this work and collecting my prompts for training data?

I decided to do a quick investigation. Here's all the information I've found so far. I'm aggregating and synthesizing it as I go, so it's currently changing pretty frequently.

Model Architecture