
@0xatm
0xatm / AGENTS.md
Created December 5, 2025 00:23 — forked from juanpabloaj/AGENTS.md
General guidelines and best practices for AI code generation

1. Core Coding Principles

  • Language: All code, comments, variable names, function names, class names, and commit messages must be strictly in English.
  • Idiomatic Style:
    • Write code that adheres to the idiomatic style, conventions, and official style guides of the target language. This includes formatting, naming, and general structure.
    • Assume a linter or formatter will eventually run; aim to produce code that is already close to passing common linting rules.
  • Clarity and Readability:
    • Prioritize clarity, readability, and maintainability; avoid "cleverness" or extreme brevity that sacrifices understanding.
    • Write self-documenting code.
  • Follow the Principle of Least Surprise: code should behave in a way that users and other developers would naturally expect.
@0xatm
0xatm / pydantic_model_eval_spec.local.py
Created August 22, 2025 23:59
Defining metadata in matched evaluation metadata spec models (unfortunately blows up type checking)
from typing import Optional, Dict, Any, List
from typing_extensions import Self
from pydantic import (
    BaseModel,
    ConfigDict,
    Field,
    model_validator,
)
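
The gist above shows only the imports, so here is a minimal sketch of the technique its title describes: metadata defined as a matched "spec" model that is validated alongside the evaluation model. All names (`MetricSpec`, `EvalSpec`, `primary_metric`) are hypothetical reconstructions, not the gist's actual models:

```python
from typing import Dict, Optional
from typing_extensions import Self
from pydantic import BaseModel, ConfigDict, Field, model_validator

class MetricSpec(BaseModel):
    # Hypothetical per-metric metadata spec.
    model_config = ConfigDict(extra="forbid")
    name: str
    higher_is_better: bool = True

class EvalSpec(BaseModel):
    # Hypothetical evaluation spec whose metadata must stay consistent
    # with the metrics it declares.
    model_config = ConfigDict(extra="forbid")
    metrics: Dict[str, MetricSpec] = Field(default_factory=dict)
    primary_metric: Optional[str] = None

    @model_validator(mode="after")
    def check_primary(self) -> Self:
        # The primary metric, if set, must be one of the declared metrics.
        if self.primary_metric is not None and self.primary_metric not in self.metrics:
            raise ValueError(f"unknown primary metric: {self.primary_metric}")
        return self
```

The cross-field `model_validator` is one place where, as the gist title notes, static type checkers can struggle: the `Self` return on a validator and dynamically matched metadata keys are hard to express in the type system.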
@0xatm
0xatm / pydantic_model_injected_metadata.local.py
Created August 22, 2025 23:58
Injecting metadata directly into Pydantic Fields, using the .metadata attr
from typing import Optional, Dict, List
from pydantic import (
    BaseModel,
    Field,
    create_model,
)
from pydantic.fields import FieldInfo
# Existing evaluation config models
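
The snippet cuts off at the imports, so here is a minimal sketch of the approach the title names: riding custom objects along in a field's `FieldInfo.metadata` list. In Pydantic v2 that list normally holds constraint markers (e.g. `Gt`, `MaxLen`), but it is a plain Python list, so other objects can be appended. `EvalTag`, `ScoredRow`, and `tags_for` are hypothetical names for illustration:

```python
from dataclasses import dataclass
from typing import List, Type
from pydantic import BaseModel, Field, create_model
from pydantic.fields import FieldInfo

@dataclass(frozen=True)
class EvalTag:
    """Hypothetical marker object carried in FieldInfo.metadata."""
    name: str

# Build a model dynamically; each field is described by a FieldInfo.
ScoredRow = create_model(
    "ScoredRow",
    score=(float, Field(description="model score")),
)

# Inject a tag after model creation by appending to the .metadata list.
ScoredRow.model_fields["score"].metadata.append(EvalTag("accuracy"))

def tags_for(model: Type[BaseModel], field: str) -> List[EvalTag]:
    # Recover only our marker objects, ignoring ordinary constraint metadata.
    info: FieldInfo = model.model_fields[field]
    return [m for m in info.metadata if isinstance(m, EvalTag)]
```

The upside of this approach is that the tags travel with the field itself rather than in a parallel registry; the downside is that nothing type-checks the contents of `metadata`, which is presumably why the sibling gist explores the spec-model alternative.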
@0xatm
0xatm / output.md
Created August 19, 2025 22:00
String-Churn Analysis: qwen-32b + RAGv1 on CSV
<think>
Okay, let's see. The user wants me to analyze the given datasets to find hidden correlations between the string structures and churn. First, I need to understand what fields are in the dataset.

Looking at the citations, there are several columns: order_id, date, region, channel, rep_id, product_code, color, os, browser, issue_type, unit_price, quantity, discount_pct, revenue, returned, support_ticket, satisfaction, churned_90d, summary_word_count, priority. The churn is indicated by 'churned_90d' which seems to be a binary flag (0 or 1). 

The task is about correlations between string structures and churn. String columns include region, channel, rep_id, product_code, color, os, browser, issue_type, summary_word_count (but that's numeric), priority (which is categorical: Low, Medium, High). So I should focus on these string or categorical fields.
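
The comparison being set up here, churn rate per category of a string column, can be sketched as a small pure-Python helper. The column names come from the citations above; the example rows are made up purely for illustration:

```python
from collections import defaultdict
from typing import Any, Dict, Iterable

def churn_rate_by(rows: Iterable[Dict[str, Any]], column: str) -> Dict[str, float]:
    """Fraction of rows with churned_90d == 1, grouped by a categorical column."""
    churned: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for row in rows:
        key = row[column]
        total[key] += 1
        churned[key] += int(row["churned_90d"])
    return {k: churned[k] / total[k] for k in total}

# Made-up rows, only to show the shape of the computation:
rows = [
    {"region": "Northeast", "churned_90d": 1},
    {"region": "Northeast", "churned_90d": 0},
    {"region": "West", "churned_90d": 0},
]
rates = churn_rate_by(rows, "region")  # {"Northeast": 0.5, "West": 0.0}
```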

First thought: maybe certain regions have higher churn rates. The dataset mentions Northeast, West, Midwest, etc. If in the data, for example, the 'Nort