Srinivasan Ragothaman (rsrini7)
rsrini7 / insta-user-eng.md
Created November 18, 2025 02:37
Improve individual user engagement on Instagram

Functional and Business Requirements

  • Goal: Improve individual user engagement (view, like, comment) on suggested posts, boosting metrics such as Daily Active Users (DAU) and session counts.
  • Scope: Focus on non-friend content (from creators, not just connections). The aim is to predict and increase personalized engagement.
  • ML Objective: Aligns with business needs but optimizes a correlated surrogate metric (such as per-user engagement probability) rather than global DAU directly; a minimal sketch of such a model follows below.
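
To make the surrogate-metric idea concrete, here is a minimal sketch of a per-user engagement-probability model using scikit-learn. The features and data are entirely hypothetical; this illustrates the objective being optimized, not Instagram's actual ranking pipeline.

```python
# Minimal sketch of a per-user engagement-probability model.
# Features and data are hypothetical; this illustrates the surrogate objective,
# not Instagram's production system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per (user, suggested post) pair:
# [user_past_engagement_rate, creator_affinity_score, post_freshness_score]
X = rng.random((1000, 3))
# Label: 1 if the user engaged (view/like/comment) with the suggested post.
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 1000) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# The predicted probability is the correlated surrogate the notes describe:
# optimized per user, with DAU/session gains expected to follow.
print(model.predict_proba(X[:5])[:, 1])
```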

rsrini7 / Diff
Last active November 17, 2025 10:06
Google Nested Learning
Google's Nested Learning and Meta's Sparse Memory Finetuning are two distinct approaches to continual learning in AI, both aiming to prevent "catastrophic forgetting". Nested Learning is an architectural paradigm, while Sparse Memory Finetuning is a specific training method within existing architectures.
Google Nested Learning
Nested Learning is a novel architectural paradigm that treats a single model as a system of interconnected, multi-level learning problems that are optimized simultaneously at different rates.
Core Concept: It introduces a Continuum Memory System (CMS), a spectrum of memory modules updating at different frequencies.
Mechanism: It uses "layers" of memory operating at different timescales:
- High-frequency layers update often, storing recent, fast-changing information (short-term memory).
- Low-frequency layers update rarely, storing stable, core knowledge that should not change easily (long-term memory).
Result: This structural approach allows the model to naturally integrate new information without overwriting its stable long-term knowledge, mitigating catastrophic forgetting.
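
A toy sketch of the multi-frequency update schedule described above; it only illustrates the idea of memory modules updating at different rates and is not Google's actual implementation.

```python
# Toy sketch of a Continuum Memory System-style update schedule.
# Illustrates only the multi-frequency idea; not Google's implementation.

class MemoryModule:
    def __init__(self, name: str, update_every: int):
        self.name = name
        self.update_every = update_every  # how often (in steps) this module updates
        self.state = 0.0

    def maybe_update(self, step: int, new_info: float) -> None:
        # High-frequency modules (small update_every) absorb recent information;
        # low-frequency modules change rarely, protecting stable core knowledge.
        if step % self.update_every == 0:
            self.state += new_info

modules = [
    MemoryModule("short_term", update_every=1),   # fast, volatile memory
    MemoryModule("mid_term", update_every=10),
    MemoryModule("long_term", update_every=100),  # slow, stable memory
]

for step in range(1, 301):
    new_info = 0.01  # stand-in for information arriving at this step
    for m in modules:
        m.maybe_update(step, new_info)

for m in modules:
    print(m.name, round(m.state, 2))  # short_term 3.0, mid_term 0.3, long_term 0.03
```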
rsrini7 / Agent Lightning
Created November 8, 2025 15:12
Agent Lightning
**Microsoft Agent Lightning** differs from **Unsloth** and the **Hugging Face fine-tuning library** in several key ways:
- **Purpose & Scope**:
  - **Agent Lightning** is designed to *automate optimization of AI agents* in real production environments: not just fine-tuning LLMs, but also orchestrating prompts, reward-based RL, supervised fine-tuning, and managing agent workflows and traces. It can interface with multiple agent frameworks (LangChain, CrewAI, etc.) and integrates fine-tuning directly into the agent's operational loop.[1][2][3]
  - **Unsloth** focuses specifically on *speeding up and simplifying LLM fine-tuning*. It optimizes memory to support larger models on smaller GPUs and significantly reduces time and complexity for traditional supervised fine-tuning tasks.[4][5][6]
  - **Hugging Face's library** (Transformers/Trainer) provides the foundational tools for *fine-tuning and training models* using standard workflows, with flexibility but less opinionated automation for agent-centric RL.
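
For contrast with the agent-centric approach, here is a minimal sketch of the standard Hugging Face Trainer workflow the last bullet refers to. The model and dataset are common public examples chosen only for illustration.

```python
# Minimal sketch of standard supervised fine-tuning with Hugging Face Transformers.
# Model/dataset choices are illustrative; this is the generic Trainer workflow,
# not Agent Lightning's agent-in-the-loop optimization.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Small slice of a public sentiment dataset, tokenized for the model.
dataset = load_dataset("imdb", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out",
                         per_device_train_batch_size=8,
                         num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=dataset).train()
```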
rsrini7 / QuantumEchoes
Created October 27, 2025 05:04
Google’s Quantum Computing Breakthrough (Willow Chip & Quantum Echoes Algorithm)
Google’s Quantum Computing Breakthrough (Willow Chip & Quantum Echoes Algorithm):
Historic achievement: Google’s Willow chip delivers the first verifiable quantum advantage—solving real, testable problems 13,000× faster than the world’s best supercomputers.​
Quantum Echoes algorithm: This algorithm lets scientists run quantum operations forward and backward in time, observing how information spreads or scrambles when a tiny disturbance is introduced (like dropping a pebble in a pond and watching ripples).
Why qubits are special: Qubits (quantum bits) can be both "0" and "1" at the same time (superposition), and can be linked so that measuring one instantly determines the other (entanglement).
Fragility of qubits: Qubits are very fragile; they get disturbed by heat, vibration, or noise, so the Willow chip runs at near-absolute-zero temperatures to keep them stable.
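
A toy numpy illustration of superposition and entanglement using a textbook Bell state; this is ordinary state-vector math, not a simulation of Willow or the Quantum Echoes algorithm.

```python
# Toy state-vector illustration of superposition and entanglement (a Bell state).
# Plain textbook quantum mechanics in numpy; not a simulation of Willow or
# the Quantum Echoes algorithm.
import numpy as np

# Bell state (|00> + |11>) / sqrt(2): two qubits, each in superposition, entangled.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # amplitudes for |00>,|01>,|10>,|11>

probs = np.abs(bell) ** 2
print(probs)  # [0.5 0.  0.  0.5] -> only the correlated outcomes 00 and 11 occur

# Sampling measurements: once you read one qubit, the other is fixed to match.
rng = np.random.default_rng(0)
print(rng.choice(["00", "01", "10", "11"], size=10, p=probs))  # only "00"/"11"
```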
rsrini7 / ML-Math
Last active October 24, 2025 03:34
ML Math
Linear Algebra (Matrices): Learn about matrix properties, multiplying matrices, LU decomposition, and determinants. This is needed for data analysis, processing, and techniques like PCA (Principal Component Analysis) [08:54].
Probability and Statistics: Learn about random variables, probability distributions, expectation value, variance, covariance, correlation, and Bayes' Rule. This is essential for understanding your data and model results.
Numerical Computation: Learn about Gradient Descent, which is used to find a local minimum. The speaker suggests writing the code for gradient descent yourself (a minimal sketch follows below).
Calculus Basics: Learn the Chain Rule, which is at the heart of backpropagation.
Theory of Machine Learning: Learn key terminologies and concepts like regression, train/test/validation sets, labels/targets, weights, generalization error, regularization, hyperparameter tuning (using cross-validation), and bias-variance tradeoff.
https://www.deeplearningbook.org/exercises.html
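
As the notes suggest, here is a minimal from-scratch gradient descent sketch in pure Python, minimizing a hypothetical one-variable function f(x) = (x - 3)^2:

```python
# Minimal gradient descent from scratch, as the notes suggest writing.
# Minimizes f(x) = (x - 3)^2 using its derivative f'(x) = 2(x - 3).

def grad(x: float) -> float:
    return 2 * (x - 3)

x = 0.0    # starting point
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)  # step in the direction opposite the gradient

print(x)  # converges to the local (here global) minimum at x = 3
```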
rsrini7 / Quantum Computer terms con
Last active October 27, 2025 05:22
Quantum Computing Terms
Bits vs Qubits (with analogy):
A classical bit is like a coin lying flat—only "heads" (0) or "tails" (1).
A qubit is like a spinning coin in the air—while it spins, it can be both heads and tails at the same time! (superposition).
Superposition and Probability:
A qubit “in the air” can be 70% likely to land heads and 30% likely to land tails. It holds both until you catch (measure) the coin, then it picks one.
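
A tiny numpy sketch of the 70/30 spinning-coin analogy: amplitudes encode the probabilities, and measurement ("catching the coin") picks one outcome.

```python
# Tiny numpy sketch of the 70/30 spinning-coin analogy.
# A qubit a|0> + b|1> lands on 0 with probability |a|^2 and on 1 with |b|^2.
import numpy as np

a, b = np.sqrt(0.7), np.sqrt(0.3)  # amplitudes for |0> (heads) and |1> (tails)
probs = [abs(a) ** 2, abs(b) ** 2]
print(probs)  # [0.7, 0.3]

# Measuring ("catching the coin") forces one definite outcome per trial.
rng = np.random.default_rng(1)
print(rng.choice(["heads (0)", "tails (1)"], size=10, p=probs))
```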
rsrini7 / searching or downloading research papers
Created October 17, 2025 03:08
searching or downloading research papers
Here is supporting content for searching or downloading research papers (with their specific uses):
### Websites and Tools
- **Consensus**: https://get.consensus.app/neha
*AI-powered search and summarization tool; highlights open-access status and offers free Pro trial for research paper search.*
- **Arxiv**: https://arxiv.org/
*Preprint repository for free academic papers in physics, computer science, math, and more.*
- **Biorxiv**: https://www.biorxiv.org/
  *Preprint repository for free research papers in biology and the life sciences.*
rsrini7 / gist:e51c6d1b25ea208a7276919e109bcf4f
Created October 6, 2025 13:08
Fine Tuning LLM - GTX 1060 - WSL2 - Ubuntu 22.04
1. Ubuntu 20.04 -> default Python 3.8 -> requires pyenv to install higher Python versions
2. Ubuntu 24.04 -> default Python 3.12 -> PyTorch does not currently provide prebuilt wheels for Python 3.12 (cp312) via the official website or PyPI.
3. Installed Ubuntu 22.04 -> default Python 3.10
# Install Ubuntu 22.04 under WSL2 and make it the default distro
wsl --install -d Ubuntu-22.04
wsl --setdefault Ubuntu-22.04
rsrini7 / docs-gen
Last active August 9, 2025 10:09
prompts
You are an expert technical writer and an automated documentation maintenance system. Your primary goal is to ensure the project has a complete, accurate, and up-to-date set of documentation in a `/docs` folder.
You will operate in two phases:
**Phase 1: Situational Analysis & Planning**
**Phase 2: Markdown Generation**
### PHASE 1: SITUATIONAL ANALYSIS & PLANNING
First, perform a detailed analysis of the current state of the project.
* **Role and Purpose**: The AI is designated as a "prompt coach" with the mission to create a prompt blueprint that transforms the assistant into a personal AI tutor. This tutor will methodically quiz the user to diagnose their current AI level and deliver progressively harder lessons to stretch their understanding.
* **Framework**: The prompt follows a four-section blueprint:
* **Purpose** (Goal, Meta-switches, Mode & Effort)
* **Instructions** (Behavior & Rules)
* **Reference** (Context, Data, Materials)
* **Output** (Expected Format & Length)
* **Workflow Rules**:
* **Section-by-section**: No skipping ahead; the AI handles one section at a time.
* **Full question set**: For the current section, the AI shows every question and provides a concrete example answer for each.
* **Gatekeeping**: The AI waits until all questions are answered. If an answer is unclear, it asks a follow-up question.