Eric Hartford ehartford

TEMPLATE """{{- range $index, $_ := .Messages }}
{{- if eq .Role "system" }}[SYSTEM_PROMPT]{{ .Content }}[/SYSTEM_PROMPT]
{{- else if eq .Role "user" }}
{{- if and (le (len (slice $.Messages $index)) 2) $.Tools }}[AVAILABLE_TOOLS]{{ $.Tools }}[/AVAILABLE_TOOLS]
{{- end }}[INST]{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}
{{- if .Content }}{{ .Content }}
{{- if not (eq (len (slice $.Messages $index)) 1) }}</s>
{{- end }}
{{- else if .ToolCalls }}[TOOL_CALLS][
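The Go template above walks `.Messages` and wraps each role in Mistral-style control tokens, injecting `[AVAILABLE_TOOLS]` only near the final user turn. A small Python sketch of the visible logic (a hypothetical re-implementation, not Ollama's actual renderer — the function name and structure are my own):

```python
# Hypothetical Python mirror of the Go template's visible logic.
# Control tokens and role names come from the template above; everything
# else is illustrative only.
def render(messages, tools=None):
    out = []
    for i, msg in enumerate(messages):
        # `len(slice $.Messages $index)` in the template equals the number
        # of messages remaining, counting the current one.
        remaining = len(messages) - i
        if msg["role"] == "system":
            out.append(f"[SYSTEM_PROMPT]{msg['content']}[/SYSTEM_PROMPT]")
        elif msg["role"] == "user":
            if tools and remaining <= 2:
                out.append(f"[AVAILABLE_TOOLS]{tools}[/AVAILABLE_TOOLS]")
            out.append(f"[INST]{msg['content']}[/INST]")
        elif msg["role"] == "assistant" and msg.get("content"):
            out.append(msg["content"])
            if remaining != 1:  # close all but the final assistant turn
                out.append("</s>")
    return "".join(out)

print(render([{"role": "user", "content": "hi"}]))  # [INST]hi[/INST]
```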
# make_multi_metric_head.py
# ------------------------------------------------------------
# Replace WorldPM-72B’s 1-unit reward head with a 15-unit head
# and save the result so you can fine-tune from it later.
# ------------------------------------------------------------
import torch
from transformers import AutoConfig, AutoModelForSequenceClassification
# Metrics you want separate scores for
METRICS = [
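The preview cuts off before the head swap itself. A minimal sketch of the idea using a stand-in module, since loading the 72B checkpoint is impractical here — the `score` attribute name is an assumption based on common HF sequence-classification reward models; check WorldPM-72B's actual module tree before relying on it:

```python
import torch
import torch.nn as nn

# Stand-in for the real model; in practice this would come from
# AutoModelForSequenceClassification.from_pretrained(...).
class TinyRewardModel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.backbone = nn.Linear(8, hidden)           # toy "transformer"
        self.score = nn.Linear(hidden, 1, bias=False)  # 1-unit reward head

    def forward(self, x):
        return self.score(torch.tanh(self.backbone(x)))

NUM_METRICS = 15  # one output unit per rubric metric

model = TinyRewardModel()
old_head = model.score
new_head = nn.Linear(old_head.in_features, NUM_METRICS, bias=False)
with torch.no_grad():
    # Warm-start every metric row from the original scalar reward weights.
    new_head.weight.copy_(old_head.weight.expand(NUM_METRICS, -1))
model.score = new_head

print(model(torch.zeros(2, 8)).shape)  # torch.Size([2, 15])
```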
RUBRICS = {
"structural_coherence": {
"name": "Structural Coherence & Progression",
"description": "Evaluates the overall organization, logical progression, and effective shaping of the content.",
"scores": {
5: "The structure is masterfully crafted, exhibiting flawless logical/thematic/narrative progression. All parts are intrinsically linked, contributing to a powerful and unified whole, perfectly suited to the work's purpose and form.",
4: "The structure is highly effective, with clear logical/thematic/narrative progression. Most parts are well-integrated, contributing to a cohesive work.",
3: "The structure is generally clear and supports the content, though some areas might lack optimal flow or integration. Progression is mostly logical/thematic/narrative.",
2: "Structural weaknesses are apparent; progression may be confusing, disjointed, or underdeveloped. Connections between parts are often unclear.",
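A small helper (my addition, not part of the gist) showing how a rubric dictionary shaped like the one above turns into a score-to-description lookup; the score texts here are abbreviated stand-ins:

```python
# Rubric shape mirrors the gist: {metric: {"name": ..., "scores": {5: ..., ...}}}.
# Descriptions are shortened placeholders, not the gist's full text.
RUBRICS = {
    "structural_coherence": {
        "name": "Structural Coherence & Progression",
        "scores": {
            5: "Masterfully crafted structure.",
            4: "Highly effective structure.",
            3: "Generally clear structure.",
            2: "Apparent structural weaknesses.",
            1: "Severely deficient structure.",
        },
    },
}

def describe(metric: str, score: int) -> str:
    rubric = RUBRICS[metric]
    return f'{rubric["name"]} ({score}/5): {rubric["scores"][score]}'

print(describe("structural_coherence", 4))
# Structural Coherence & Progression (4/5): Highly effective structure.
```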
FROM ./mmproj-F16.gguf
FROM ./Devstral-Small-2505-UD-Q4_K_XL.gguf
TEMPLATE """{{- range $index, $_ := .Messages }}
{{- if eq .Role "system" }}[SYSTEM_PROMPT]{{ .Content }}[/SYSTEM_PROMPT]
{{- else if eq .Role "user" }}
{{- if and (le (len (slice $.Messages $index)) 2) $.Tools }}[AVAILABLE_TOOLS]{{ $.Tools }}[/AVAILABLE_TOOLS]
{{- end }}[INST]{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}
{{- if .Content }}{{ .Content }}
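The `le (len (slice $.Messages $index)) 2` guard in this template means tools are emitted only when two or fewer messages remain, counting the current one — i.e. on the final user turn. A quick illustration of just that predicate, with names of my own choosing:

```python
def tools_go_here(messages, index):
    """Mirror of the Go-template guard `le (len (slice $.Messages $index)) 2`:
    the slice runs from `index` to the end, so its length is
    len(messages) - index."""
    return len(messages) - index <= 2

convo = ["user", "assistant", "user"]  # roles only; content is irrelevant here
print([tools_go_here(convo, i) for i in range(len(convo))])
# [False, True, True]
```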
#!/usr/bin/env python3
# -----------------------------------------------------------
# ultra_math_dataset.py
# Token-lean “visible-scratch-pad” dataset generator
# Revision 7 – Final minor cleanup (removed unreachable code)
# -----------------------------------------------------------
import json, math, random, uuid
from decimal import Decimal, InvalidOperation
from fractions import Fraction
# Prefers the math.lcm (Python 3.9+) and math.isqrt (Python 3.8+) fast paths
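On older interpreters `math.lcm` does not exist; a hedged compatibility shim one might pair with the script above (my addition, not from the gist):

```python
import math
from functools import reduce

# Fallback shim: use math.lcm when available (Python 3.9+), otherwise
# compute the lcm pairwise via gcd.
def lcm(*ints):
    if hasattr(math, "lcm"):
        return math.lcm(*ints)
    return reduce(lambda a, b: a * b // math.gcd(a, b), ints, 1)

print(lcm(4, 6, 10))  # 60
```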
FROM qwen3:30b-a3b-q8_0
TEMPLATE """{{- if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
# Tools
@ehartford
ehartford / Modelfile
Created March 14, 2025 04:57
gemma3 tool
FROM gemma3:latest
TEMPLATE """{{- /* If you want to inject system or tool instructions before the conversation, do it here */ -}}
{{- if or .System .Tools }}
<start_of_turn>user
{{ if .System }}
{{ .System }}
{{ end }}
{{ if .Tools }}
Cutting Knowledge Date: December 2023
@ehartford
ehartford / capabilities.txt
Created March 10, 2025 00:37 — forked from jlia0/agent loop
Manus tools and prompts
# Manus AI Assistant Capabilities
## Overview
I am an AI assistant designed to help users with a wide range of tasks using various tools and capabilities. This document provides a more detailed overview of what I can do while respecting proprietary information boundaries.
## General Capabilities
### Information Processing
- Answering questions on diverse topics using available information
- Conducting research through web searches and data analysis
@ehartford
ehartford / recipe.md
Last active March 8, 2025 19:14
training model for impressive demos

seed

[
  {
    "category": "Physical and Spatial Reasoning",
    "overview": "Large language models (LLMs), especially transformer-based models, typically struggle with physical and spatial reasoning due to their associative rather than causal or simulation-based internal representations. They lack grounded understanding or internal simulations of real-world physics, instead relying solely on statistical associations learned from textual data. Without explicit mental models or sensory experiences of spatial relations, gravity, friction, containment, and object permanence, LLMs default to pattern-based associations and linguistic heuristics rather than accurate physical logic. Thus, when confronted with scenarios that require concrete reasoning about physical interactions, spatial positioning, or hidden-object inference, LLMs often provide incorrect or illogical responses.\n\nThis limitation arises fundamentally because LLMs do not possess innate spatial or physical intuitions, nor do they internally simulate physical processes."
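The seed file is a JSON list of records carrying at least `category` and `overview` fields (the only two visible in the excerpt). A tiny loader/validator sketch under that assumption — the real seed file may carry more fields:

```python
import json

# Abbreviated sample mirroring the excerpt's shape; the real overview text
# is much longer.
SEED = """
[
  {"category": "Physical and Spatial Reasoning",
   "overview": "LLMs typically struggle with physical and spatial reasoning."}
]
"""

def load_seed(text):
    entries = json.loads(text)
    for i, entry in enumerate(entries):
        # Require only the fields visible in the excerpt.
        for field in ("category", "overview"):
            if field not in entry:
                raise ValueError(f"entry {i} missing {field!r}")
    return entries

print(load_seed(SEED)[0]["category"])  # Physical and Spatial Reasoning
```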
@ehartford
ehartford / Modelfile
Last active March 6, 2025 14:43
qwq ollama Modelfile
FROM ./qwq-32b-q5_k_m.gguf
PARAMETER num_ctx 131072
PARAMETER temperature 0.6
PARAMETER top_k 40
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.0
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
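These `PARAMETER` lines map directly onto the `options` dict of an Ollama request. A small parser sketch (a hypothetical helper of my own, not an Ollama API) that turns them into a dict, collecting repeated `stop` values into a list:

```python
# Modelfile text mirroring the gist above.
MODELFILE = '''FROM ./qwq-32b-q5_k_m.gguf
PARAMETER num_ctx 131072
PARAMETER temperature 0.6
PARAMETER top_k 40
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.0
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
'''

def parse_parameters(text):
    """Collect PARAMETER lines into an options dict; `stop` may repeat."""
    opts = {}
    for line in text.splitlines():
        parts = line.split(None, 2)
        if len(parts) == 3 and parts[0] == "PARAMETER":
            key, raw = parts[1], parts[2].strip().strip('"')
            try:
                val = int(raw) if raw.isdigit() else float(raw)
            except ValueError:
                val = raw  # non-numeric values (e.g. stop strings) stay as text
            if key == "stop":
                opts.setdefault("stop", []).append(val)
            else:
                opts[key] = val
    return opts

print(parse_parameters(MODELFILE)["num_ctx"])  # 131072
```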