@0thernet
0thernet / sysprompts.txt
Last active August 4, 2025 15:32
ai coding prompts, jul 2025
# system prompt (always applied)
<who_you_are>
You are a superintelligent autonomous AGENT.
You are assisting a USER in the context of a CONVERSATION represented as a chronological series of EVENTS.
You have been trained on a vast amount of data from the entire history of human activity on the internet up to this date. You have a deep capacity to find answers to many subjects inside your training data.
Your training data knowledge cutoff date is 2024-01-01.
You are a relentless truth-seeker. If you are not sure about file content or factual information pertaining to the USER's request (for example, if it requires information PAST your training data knowledge cutoff date, or the information is not available in the EVENTs of the CONVERSATION), you MUST use your tools to gather the relevant information: do NOT guess or make up an answer.
</who_you_are>

Learning LLMs in 2025

So you know how the transformer works, you know basic ML/DL, and you want to learn more about LLMs. One way to go is to look into the various "algorithmic" topics (optimization algorithms, RL, DPO, etc.). There is lots of material on that, but (in my opinion at least) the interesting stuff is not there.

This is an attempt to collect a list of academic (or academic-like) materials that explore LLMs from other directions, focusing on the non-ML-algorithmic aspects.

Courses

  • David Chiang's Theory of Neural Networks course.
    • Not primarily about LLMs, but it has a substantial section on Transformers. Formal/theory. More of a book than a course.
@boxabirds
boxabirds / .cursorrules
Last active August 13, 2025 12:28
Rock solid: turn Cursor into a rock-solid software engineering companion
# Project Policy
This policy provides a single, authoritative, and machine-readable source of truth for AI coding agents and humans, ensuring that all work is governed by clear, unambiguous rules and workflows. It aims to eliminate ambiguity, reduce supervision needs, and facilitate automation while maintaining accountability and compliance with best practices.
# 1. Introduction
> Rationale: Sets the context, actors, and compliance requirements for the policy, ensuring all participants understand their roles and responsibilities.
## 1.1 Actors

Generating Synthetic Data for LLM Evaluation

Summary

  1. Use your application extensively to build intuition about failure modes
  2. Define 3-4 dimensions based on observed or anticipated failures
  3. Create structured tuples covering your priority failure scenarios
  4. Generate natural language queries from each tuple using a separate LLM call (see the sketch after this list)
  5. Scale to more examples across your most important failure hypotheses (we suggest at least ~100)
  6. Test and iterate on the most critical failure modes first, and generate more until you reach theoretical saturation
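To make steps 3 and 4 concrete, here is a minimal Python sketch. It is an editor's illustration, not code from the gist: the dimensions, the customer-support domain, and the OpenAI model name are hypothetical stand-ins, and any LLM client would work in place of the `openai` package.

```python
import itertools
import json

from openai import OpenAI  # illustrative backend (assumption); any LLM client works

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 2: hypothetical dimensions derived from observed failure modes.
dimensions = {
    "persona": ["new user", "power user", "frustrated customer"],
    "intent": ["refund request", "feature question", "bug report"],
    "difficulty": ["simple", "ambiguous", "adversarial"],
}

# Step 3: structured tuples covering the priority failure scenarios.
tuples = [dict(zip(dimensions, combo)) for combo in itertools.product(*dimensions.values())]


# Step 4: turn each tuple into a natural-language query with a separate LLM call.
def query_from_tuple(t: dict) -> str:
    prompt = (
        "Write one realistic user message for a customer-support assistant.\n"
        f"Persona: {t['persona']}\nIntent: {t['intent']}\nDifficulty: {t['difficulty']}\n"
        "Return only the message text."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()


# Step 5: scale up from here; this grid alone yields 3 x 3 x 3 = 27 queries.
if __name__ == "__main__":
    for t in tuples:
        print(json.dumps({"tuple": t, "query": query_from_tuple(t)}))
```

Keeping the generating tuple alongside each query makes it easy to check coverage per dimension and to oversample the failure hypotheses that matter most (step 6).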
@NohTow
NohTow / train_reason_moderncolbert.py
Created May 22, 2025 12:43
Boilerplate to reproduce the training of Reason-ModernColBERT
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models, utils


def main():
    # As ReasonIR does not re-upload the BRIGHT data, we need to load it from the original source
@jlia0
jlia0 / agent loop
Last active September 6, 2025 20:05
Manus tools and prompts
You are Manus, an AI agent created by the Manus team.
You excel at the following tasks:
1. Information gathering, fact-checking, and documentation
2. Data processing, analysis, and visualization
3. Writing multi-chapter articles and in-depth research reports
4. Creating websites, applications, and tools
5. Using programming to solve various problems beyond development
6. Various tasks that can be accomplished using computers and the internet
@transitive-bullshit
transitive-bullshit / claude-code-prompts.js
Last active September 4, 2025 19:39
Unminified prompts and tool definitions for Claude Code
// Claude Code is a Beta product per Anthropic's Commercial Terms of Service.
// By using Claude Code, you agree that all code acceptance or rejection decisions you make,
// and the associated conversations in context, constitute Feedback under Anthropic's Commercial Terms,
// and may be used to improve Anthropic's products, including training models.
// You are responsible for reviewing any code suggestions before use.
// (c) Anthropic PBC. All rights reserved. Use is subject to Anthropic's Commercial Terms of Service (https://www.anthropic.com/legal/commercial-terms).
// Version: 0.2.9
@kalomaze
kalomaze / gist:37c70e022cb1e9428ebb1ee7a4b52275
Last active April 5, 2025 10:57
GRPO Reinforcement Learning - 7b GSM8k on 8xH100 / 8xA100
# the "verifiers" repository is a clean implementation of templated GRPO reinforcement learning training environments
# this is a generic set of "install from scratch" commands complete with a deepspeed z3 config that i have been using when i spin up nodes
# it will run on the gsm8k example w/ default batch size & generation size (8), and the 8th GPU is used for vllm generations
# qwen 14b full finetuning will run on this configuration too without LoRA or CUDA OOM, at least for the gsm8k task's context sizes + generation lengths
# hyperparameters are controlled by `verifiers/utils/config_utils.py`; i have been preferring extreme grad clipping (between 0.001 and 0.01) and low beta (under 0.01)
# NOTE FEB 27: examples have moved into `verifiers/examples` not `/examples`
cd /root
mkdir boom
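The comments above single out two knobs: aggressive gradient clipping and a low KL coefficient (beta). As a generic illustration of what those act on (an editor's sketch of the standard GRPO formulation, not code from this gist or the verifiers repo), each sampled completion's advantage is its reward normalized against the other completions drawn for the same prompt, and beta scales a KL penalty toward the reference model:

```python
import numpy as np


def group_relative_advantages(rewards, eps=1e-6):
    """GRPO advantage: normalize each completion's reward within its group,
    i.e. against the other completions sampled for the same prompt."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)


# e.g. 8 completions for one GSM8K prompt, reward 1.0 when the answer checks out
print(group_relative_advantages([1, 0, 0, 1, 0, 0, 0, 1]))

# The per-token loss is roughly -(clipped policy-ratio term) * advantage
# plus beta * KL(policy || reference); a small beta (< 0.01) keeps that pull weak,
# and gradient clipping (0.001-0.01 above) then caps the size of each update.
```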
  1. Every atomic object has a timeline (TL) of writes:

    • A write is either a store or a read-modify-write (RMW): an RMW reads the latest write and pushes a new one.
    • A write is either tagged Relaxed, Release, or SeqCst.
    • A read observes some write on the timeline:
      • On the same thread, future reads can't go backwards on the timeline.
      • A read is either tagged Relaxed, Acquire, or SeqCst.
      • RMWs can also be tagged Acquire (or AcqRel). If so, the Acquire refers to the "read" portion of "RMW".
  2. Each thread has its own view of the world:

  • The write timelines are shared, but each thread may be observing a different point on each of them (see the toy sketch below).
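The bullets above can be turned into a tiny executable mental model. This is an editor's toy sketch in Python, not real atomics: it models the shared per-object timeline of writes and the per-thread forward-only read position, while the ordering tags are carried along but their synchronization effects are not simulated.

```python
import random


class AtomicTimeline:
    """Toy model: one shared timeline of writes per atomic object,
    plus a per-thread cursor that can only move forward."""

    def __init__(self, initial):
        self.writes = [initial]  # the shared timeline (TL) of writes
        self.cursor = {}         # thread name -> index of the last write it observed

    def store(self, value, ordering="Relaxed"):
        self.writes.append(value)  # a store pushes a new write onto the timeline

    def load(self, thread, ordering="Relaxed"):
        start = self.cursor.get(thread, 0)
        # a read observes *some* write, no older than what this thread already saw
        idx = random.randrange(start, len(self.writes))
        self.cursor[thread] = idx  # future reads on this thread can't go backwards
        return self.writes[idx]

    def rmw(self, thread, update, ordering="AcqRel"):
        # a read-modify-write reads the latest write and pushes a new one
        latest = self.writes[-1]
        self.writes.append(update(latest))
        self.cursor[thread] = len(self.writes) - 1
        return latest


x = AtomicTimeline(0)
x.store(1)
x.store(2)
print(x.load("t1"))                   # t1 may observe 0, 1, or 2
print(x.load("t2"))                   # t2 has its own cursor and may lag behind t1
print(x.rmw("t1", lambda v: v + 10))  # always sees the latest write (2), pushes 12
```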
@trappitsch
trappitsch / README.md
Last active August 12, 2025 09:13
PyApp packaging for air-gapped computers

Package a PyApp app with batteries included

This is just a quick write-up, mostly for myself, on how to create a Python PyApp package for an air-gapped machine. This means that all dependencies, etc., will be included.