
g.co, Google's official URL shortcut (update: or Google Workspace's domain verification, see bottom), is compromised. People are actively having their Google accounts stolen.

Someone just tried the most sophisticated phishing attack I've ever seen. I almost fell for it. My mind is a little blown.

  1. Someone named "Chloe" called me from 650-203-0000 with Caller ID saying "Google". She sounded like a real engineer, the connection was super clear, and she had an American accent. Screenshot.

  2. They said that they were from Google Workspace and someone had recently gained access to my account, which they had blocked. They asked me if I had recently logged in from Frankfurt, Germany and I said no.

  3. I asked if they could confirm this was Google calling by emailing me from a Google email. They said sure, sent me this email, and told me to look for a case number in it, which I saw in

@pyros-projects
pyros-projects / 01_planning_summary.md
Last active March 7, 2025 15:50
Alternative meta-prompts for use with coding agents à la Cline, etc.

Technical Project Planning Meta-Prompt

You are an expert software architect and technical project planner. Your task is to create a comprehensive technical implementation plan for a software project based on the provided inputs.

User Input

do you know Google's python-fire? Python Fire is a library for automatically generating command line interfaces (CLIs) from absolutely any Python object. I want a similar library, but instead of a CLI it generates amazing web apps for any Python project!
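For reference, this is roughly what Python Fire's CLI generation looks like in practice: a minimal sketch using the library's real `fire.Fire` entry point, with a toy `Calculator` class standing in for "any Python object".

```python
# calc.py - Python Fire exposes this class's methods as CLI commands.
import fire

class Calculator:
    def add(self, a, b):
        return a + b

    def multiply(self, a, b):
        return a * b

if __name__ == "__main__":
    fire.Fire(Calculator)

# Usage from a shell:
#   python calc.py add 10 20       -> 30
#   python calc.py multiply 3 7    -> 21
```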

Output Format

@av
av / post.md
Created December 28, 2024 20:14
r/LocalLLaMA - a year in review

r/LocalLLaMA - a year in review

This community was a great part of my life for the past two years, so as 2024 comes to a close, I wanted to feed my nostalgia a bit. Let me take you back to the most notable things that happened here this year.

This isn't a log of model releases or research; rather, it covers things that were discussed and upvoted by the people here, so what's missing is itself an indication of what was going on. I hope it also shows the amount of progress and development that happened in just a single year and makes you even more excited for what's to come in 2025.


The year started with excitement about Phi-2 (443 upvotes, by u/steph_pop). Phi-2 feels like ancient history these days; it's also fascinating that we end 2024 with Phi-4. Just one week later, people discovered that apparently it [was trained on the software engineer's diary](https://reddit.com/r/LocalLLaMA/comments/1

@tin-z
tin-z / VR_roadmap.md
Last active March 26, 2025 06:33
Becoming a Vulnerability Researcher roadmap: my personal experience
@kalomaze
kalomaze / llm_samplers_explained.md
Last active April 10, 2025 19:59
LLM Samplers Explained

LLM Samplers Explained

Every time a large language model makes a prediction, each of the thousands of tokens in the vocabulary is assigned some degree of probability, from almost 0% to almost 100%. There are different ways you can choose from among those predictions. This process is known as "sampling", and there are various strategies you can use, which I will cover here.

OpenAI Samplers

Temperature

  • Temperature is a way to control the overall confidence of the model's scores (the logits). With a value lower than 1.0, the relative distance between the tokens becomes larger (more deterministic); with a value higher than 1.0, the relative distance becomes smaller (less deterministic). See the sketch after this list.
  • A temperature of 1.0 leaves the scores unchanged, so it gives the original distribution that the model was trained to optimize for.
  • Graph demonstration with voiceover: https://files.catbox.moe/6ht56x.mp4
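To make the temperature effect concrete, here is a minimal sketch in Python. The logits and the three-token vocabulary are made up for illustration; the point is only the divide-by-temperature-then-softmax step.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Scale logits by 1/temperature, softmax, then sample one token id."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

toy_logits = [2.0, 1.0, 0.1]  # toy scores for a 3-token vocabulary
print(sample_with_temperature(toy_logits, temperature=0.5))  # sharper, more deterministic
print(sample_with_temperature(toy_logits, temperature=1.5))  # flatter, less deterministic
```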
@Tostino
Tostino / inkbot-summary-of-summaries.txt
Last active December 15, 2024 03:05
Generate a summary-of-summaries prompt example
<#meta#>
- Date: 2023-10-05
- Task: summary
<#system#>
Your main objective is to condense the content of the document into a concise summary, capturing the main points and themes.
<#chat#>
<#user#>
To craft a Final Summary:
1. Read Summarized Sections: Carefully review all the summarized sections of the document. Ensure that you have a clear understanding of the main points, key details, and essential information presented in each section.
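The pattern this prompt drives is a two-pass, map-reduce-style summarization: summarize each section, then summarize the summaries. A minimal sketch, assuming a placeholder `summarize()` that calls whatever model backend you use (the function is hypothetical, not part of the Inkbot prompt format):

```python
def summarize(text: str) -> str:
    raise NotImplementedError("send the text to your model of choice here")

def summary_of_summaries(sections: list[str]) -> str:
    # Pass 1 (map): summarize each section of the document independently.
    section_summaries = [summarize(s) for s in sections]
    # Pass 2 (reduce): condense the section summaries into one final summary.
    return summarize("\n\n".join(section_summaries))
```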
@Tostino
Tostino / inkbot-chunked-summary.txt
Last active January 8, 2025 09:43
Generate a chunked summary prompt example
<#meta#>
- Date: 2023-10-05
- Task: summary
<#system#>
Your main objective is to condense the content of the document into a concise summary, capturing the main points and themes.
<#chat#>
<#user#>
Please read the provided Original section to understand the context and content. Use this understanding to generate a summary of the Original section, incorporating relevant details and maintaining coherence with the Prior Summary.
Notes:
@ritwikraha
ritwikraha / Pretraining-LLM.md
Last active March 3, 2025 07:16
Pretraining of Large Language Models

Pretraining


A Map for Studying Pre-training in LLMs

  • Data Collection
    • General Text Data
    • Specialized Data
  • Data Preprocessing
    • Quality Filtering
    • Deduplication (see the sketch below)
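As a concrete illustration of the deduplication step, here is a minimal sketch of exact-match deduplication via hashing; real pretraining pipelines usually layer near-duplicate detection (e.g., MinHash) on top of this.

```python
import hashlib

def dedupe_exact(documents):
    """Keep the first occurrence of each document, compared after
    lowercasing and collapsing whitespace."""
    seen, unique = set(), []
    for doc in documents:
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique
```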
@veekaybee
veekaybee / normcore-llm.md
Last active April 18, 2025 15:29
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models