How to

Default meta prompt collection: https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9

Meta prompt collection that also creates summaries and context sync (use these when working with Cline or other coding assistants): https://gist.github.com/pyros-projects/f6430df8ac6f1ac37e5cfb6a8302edcf


Create a plan

1. Copy 01_planning and replace user input with your app idea, project spec, or whatever.

Example: https://imgur.com/a/4zSpwkT

2. Put the whole prompt into your LLM.

Use the best LLM you have access to. For serious work, o1 Pro is the best option, and it isn't even close, followed by Claude and Gemini 2.0 Reasoning.

If you don't mind putting in the effort, any other LLM, like a locally run Qwen2.5-Coder, also works.

We call this instance of your bot META INSTANCE.

Potential result:
https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md

Read it and fix any errors by talking to your META INSTANCE (for example: missing components, missing tasks, or you don't like the emoji it picked for your app).
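
If you'd rather script this than use a chat UI, the META INSTANCE is just one long-running conversation. Here is a minimal sketch using the OpenAI Python client; the model name is a placeholder, and `01_planning.md` is assumed to already contain your app idea:

```python
# Minimal sketch of a scripted META INSTANCE: one growing message list.
# Model name and file paths are placeholders - use whatever you have access to.
from openai import OpenAI

client = OpenAI()
meta_history = []

def ask_meta(text: str) -> str:
    """Send a message to the META INSTANCE and keep the full conversation."""
    meta_history.append({"role": "user", "content": text})
    resp = client.chat.completions.create(model="o1", messages=meta_history)
    answer = resp.choices[0].message.content
    meta_history.append({"role": "assistant", "content": answer})
    return answer

technical_plan = ask_meta(open("01_planning.md").read())  # prompt with your app idea pasted in
# Fix errors by talking to the META INSTANCE, e.g.:
print(ask_meta("The plan is missing a task for user authentication - please add it."))
```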


3. Add 02_prompt_chain.md to the same chat (META INSTANCE).

Based on the technical plan, this prompt generates the first coding prompt and a review prompt to evaluate the results.

https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain_potential_output-md

Your META INSTANCE now includes:

  • Prompt #01
  • Technical plan
  • Prompt #02
  • Coding prompt/review prompt

4. Open a completely fresh instance of the LLM of your choice (or open up your coding assistant like Cline).

Paste the coding prompt into this new instance. This instance is called CODING INSTANCE.

Coding prompts include all the context needed to solve them. We use a separate instance because coding itself devours context with irrelevant information, so the prompts are designed to be self-contained. If you are using Gemini, you are probably fine with a single instance, but even Gemini's 2M-token context degrades pretty quickly.

In theory, you could create a new CODING INSTANCE for every coding prompt. But let’s be real—having 2,318,476 chats open is a recipe for insanity.

If you use a normal LLM, you’ll get something like this—a step-by-step plan to follow:
https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-04_coding_prompt_potential_result-md

If you’re using Cline or similar, your assistant will just start working.

Once you or the assistant finish, refine the output. Ask questions, request improvements, and seek clarification. Don't complain that "the model sucks because it got an npm command wrong." Mistakes happen in every human project, too. Projects succeed because people communicate; they fail when they don't.

Don't expect superhuman capabilities from an LLM without engaging with it; that won't work. If an LLM could already do this, society as we know it wouldn't exist anymore. LLMs are still a bit stupid; their boon is their crazy speed, letting them generate 100 times the code you could write in a day. But they need you to make sure it's actually good code they produce. You are the architect and orchestrator.

That’s why one-shot snake game prompts are stupid. No dev I know can one-shot a snake game, and it’s irrelevant for daily project work. I’d rather have an LLM that can’t one-shot snake but has solid reasoning and planning skills—like o1 models.


5. Review, refine, and repeat.

When the acceptance criteria in the prompt are met:

  1. Generate a summary (if using Cline or similar).
  2. Go back to the META INSTANCE and say:
    "I’m done with this. Please generate the next coding and review prompt."
    Include summaries or reviews, if available.

Paste the next prompt into your CODING INSTANCE. Repeat this process until all tasks in the technical plan are done.
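
In script form, the whole loop looks roughly like this, building on the META INSTANCE sketch from step 2 (`client` and `ask_meta` come from there); the stop check and the `input()` review step are placeholders for however you actually review results:

```python
# Sketch of the two-instance loop. ask_coding_instance() starts a brand-new,
# self-contained chat for every coding prompt, while the META INSTANCE
# conversation keeps growing until the technical plan is done.
def ask_coding_instance(coding_prompt: str) -> str:
    resp = client.chat.completions.create(
        model="o1",  # placeholder - use your coding model or assistant of choice
        messages=[{"role": "user", "content": coding_prompt}],
    )
    return resp.choices[0].message.content

coding_prompt = ask_meta(open("02_prompt_chain.md").read())
while True:
    print(ask_coding_instance(coding_prompt))  # step-by-step plan or code to apply
    summary = input("Paste a summary/review once the acceptance criteria are met: ")
    coding_prompt = ask_meta(
        "I'm done with this. Please generate the next coding and review prompt.\n" + summary
    )
    if input("All tasks in the technical plan done? (y/n) ") == "y":
        break
```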

Congratulations—you’ve completed a real project!


FAQ

My LLM is outdated when working with library XYZ. What should I do?

No worries. It sucks, but it’s solvable.
You probably don’t know this library:
https://docs.fastht.ml/

It's the best WebApp/HTMX library for Python and makes Streamlit and Gradio look like painting by numbers for five-year-olds.

They also offer a dedicated text file you should load into your context when working with the library, which makes the LLM crazy good at using it:
https://docs.fastht.ml/llms-ctx.txt
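
In practice, "loading it into your context" can be as simple as fetching the file and putting it in front of your coding prompt. A minimal sketch (the framing sentence is an assumption; trim the file if it blows past your context window):

```python
# Fetch FastHTML's LLM context file and prepend it to the coding prompt,
# so the model works from current docs instead of stale training data.
import urllib.request

ctx = urllib.request.urlopen("https://docs.fastht.ml/llms-ctx.txt").read().decode("utf-8")
coding_prompt = (
    "Use the following FastHTML documentation as ground truth:\n\n"
    + ctx
    + "\n\n"
    + coding_prompt  # the coding prompt your META INSTANCE generated
)
```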

How does that help you? Well, they've even proposed a standard for "LLM-fying" documentation:
https://llmstxt.org/

Not every library follows this yet, which is a pity, but it’s easy to create your own for any library. Here’s a directory of libraries that provide LLM context:
https://directory.llmstxt.cloud/

If your library isn’t listed, create a meta prompt to generate such a file from the repository. Or, better yet, build an app with the meta prompts this guide is about.
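
A rough sketch of that meta-prompt idea, assuming you have the library's repository checked out locally; the docs path, the glob, the instruction text, and reusing `ask_meta` from the step 2 sketch are all assumptions:

```python
# Concatenate a repo's Markdown docs and ask an LLM to distill them into an
# llms.txt-style context file. Adapt the path and glob to the actual repo layout.
from pathlib import Path

docs = "\n\n".join(p.read_text() for p in Path("some-library/docs").rglob("*.md"))
llms_ctx = ask_meta(
    "Condense the following documentation into a single context file in the spirit "
    "of https://llmstxt.org/, so an LLM can use this library correctly:\n\n" + docs
)
Path("llms-ctx.txt").write_text(llms_ctx)
```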
