@burkeholland
Created December 2, 2025 22:18
Prompt Files vs Custom Instructions vs Custom Agents
---
layout: post
title: We need practical AI workflows
date: 2025-11-17 08:40:00 +0000
categories: posts
permalink: /posts/promptfiles-vs-instructions-vs-agents/
---

In VS Code, there are three main ways that you can guide Copilot to help you with software development tasks: Prompt Files, Custom Instructions, and Agents. Each of these has slightly different use cases, and in this post I want to try to clear up when you might want to use each one, because it's not always obvious.

Understanding the agent system prompt

Before we get into what all of these are, it's important to first understand that the agent in Copilot is driven by a system prompt. Any instructions that you give the agent are appended to this system prompt. The system prompt is dynamic and can change depending on the model that you select, but generally speaking it looks something like this:

[TODO REPLACE WITH DIAGRAM]

Core Identity and Global Rules:
    - A brief set of global rules that are always used regardless of the task, model or agent selected - i.e. "You are an expert AI programming assistant..."
General Instructions (dynamic based on model, agent, etc)
    - More generic instructions that may be tweaked based on the model selected - i.e. "NEVER print out a codeblock with file changes unless..."
Tool Use Instructions
    - High-level tool use guidelines - i.e. "Don't call the run_in_terminal tool multiple times in parallel..."
Output Format Instructions
    - Tells the agent how to format its output - like how to link to files in the workspace from the chat, etc.
**Custom Instructions**
    - The content of any custom instruction files.
**Custom Agent Instructions**
    - The content of the selected custom agent's instructions, if any.

There are more details to it than that, but that's the general idea. This prompt gets passed as the system prompt to the model.

Next, a user prompt is added that provides context about the user's current workspace...

**Prompt Files**
    - The content of any prompt files the user has referenced in their prompt.
Environment Info:
    - Information about the OS, etc.
Workspace Info:
    - Sends all of the files and folder names in your workspace in a simple tree format.

Finally, your actual prompt is added to the conversation as a user message. If you were to type "Hello, World!", the user prompt that gets sent would look something like this:

Context
    - Current Date/Time
    - List of open terminals if any
Editor Context
    - Any files that you have added to the chat
User Request
    - "Hello, World!"

A simple, "Hello, World!" to Claude Opus 4.5 will use approximately 12.8K tokens.

And now you understand the basic structure of the agent prompt in VS Code. Still with me? Great! Now let's look at what happens when you use custom instructions, prompt files, and agents.

Custom Instructions

Custom Instructions are a way to pass information to the prompt on every single request.

You can have as many custom instruction files as you like, and they can either be global to every project, or they can be specific to a particular workspace.

To create a new instructions file, you can use the "Chat: New Instructions File" command from the Command Palette. You are then prompted to put it either in the .github folder in your workspace, or in "User Data", which is the global location.

The most common use case for custom instructions is the .github/copilot-instructions.md file. This file typically contains important information about your project - "Big Picture" architecture, project specific conventions, etc. You can even have the agent generate this file for you by choosing "Chat: Generate Workspace Instructions File" from the Command Palette.
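To make that concrete, here is a minimal sketch of what a .github/copilot-instructions.md file might contain. The project details here are hypothetical - yours will obviously describe your own stack and conventions:

```markdown
# Project overview

This is a NextJS app with a Postgres database.

## Conventions

- All UI components live in `src/components`.
- Prefer server components; only use client components when state or effects are required.
- Run `npm run lint` before committing.
```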

Note that you can also use the AGENTS.md file name instead of .github/copilot-instructions.md if you prefer. The AGENTS.md file can be anywhere in your project.

And you can split your instructions across as many files as you like. For instance, I have an instructions file called ".github/instructions/memory.instructions.md" where I put things I want the AI to remember - things like "don't run unit tests automatically, let me do it" or "Remember that useEffect can only be used in client components in NextJS."

These files get appended to the system prompt on every single request, and the copilot-instructions.md file always gets appended last. This means that if there are any conflicts between the copilot-instructions.md file and your other instructions files, it's likely that copilot-instructions.md will take precedence.

So just to recap, if we have a .github/copilot-instructions.md file in our project and a .github/instructions/memory.instructions.md file, the system prompt will look something like this:

Core Identity and Global Rules
General Instructions
Tool Use Instructions
Output Format Instructions
**memory.instructions.md contents**
**copilot-instructions.md contents**

Custom instructions also allow you to conditionally include them based on a glob pattern that matches files in the context. For more info on Custom Instructions, see here.
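For example, an instructions file scoped to TypeScript React files might look like this - the applyTo frontmatter holds the glob pattern, and the body is whatever guidance you want applied when matching files are in the context (the content here is just an illustration):

```markdown
---
applyTo: '**/*.tsx'
---

Remember that `useEffect` can only be used in client components in NextJS.
Prefer server components wherever possible.
```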

Prompt Files

Prompt files are a second way for you to pass instructions to the agent. However, unlike custom instructions, prompt files are only included in the prompt when you explicitly reference them in your user prompt. Prompt files can be local to your workspace or global - just like custom instructions.

Prompt files are added to the same user prompt where your chat message is added.

While instructions files allow you to append arbitrary text to the system prompt, prompt files allow you to control much more of the workflow in which the agent operates - including the agent itself. For instance, in instruction files you can only specify a name, description and a glob pattern to match files in the frontmatter. In prompt files, you can specify which tools the agent has access to, as well as which model to use.

For instance, I have a prompt file called "remember" that automatically writes things to the memory.instructions.md file. It can ONLY be used to read and write things to memory. So it only has the "read" and "edit" tools available.

```markdown
---
agent: agent
tools: ['read', 'edit']
model: 'GPT-5 mini (copilot)'
---

The user is asking you to save something to your memory.

Your "memory" is a special instruction file located in the root of the project at ".github/instructions/memory.instructions.md".

If this file does not exist, you'll need to create it with the appropriate <frontmatter>.

<frontmatter>
---
applyTo: '**'
---
</frontmatter>
```

Then if I want the agent to remember that - for the love of god, you can only use `useEffect` in client components in NextJS - I can just type this in the chat:

/remember for the love of god, you can only use `useEffect` in client components in NextJS.

The user prompt then looks something like this:

**content of memory.prompt.md file**
Context
Editor Context
**User Request**
    - "Follow instructions in the memory.prompt.md (links to memory.prompt.md file instructions above) for the love of god, you can only use `useEffect` in client components in NextJS."

This is how I use prompt files and instructions files together to build up a memory for the agent as I go.

You can find a ton more prompt files and instructions files in the awesome-copilot repo.

The last way that you have for passing instructions is with something called a "Custom Agent".

Custom Agents

Custom Agents used to be called "Chat Modes". They are simply pre-configured sets of instructions that are tailored for specific tasks. These "Agents" will show up in the chat UI picker.

When you use a custom agent, the instructions get appended to the system prompt AFTER the custom instructions. Like prompt files, custom agents also allow you to enable or disable specific tools in the frontmatter and specify the model. But they also allow some additional functionality in the form of something called "handoffs". Handoffs allow you to suggest the next step to the user in the chat UI after the agent has completed its task.
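As a rough sketch, a custom agent file looks a lot like a prompt file - frontmatter that configures tools and the model, followed by the instructions themselves. Treat the fields below as an illustration mirroring the prompt file example earlier in this post, not a complete reference (handoffs have their own configuration that this sketch leaves out):

```markdown
---
description: 'Reviews changes and points out problems without editing files'
tools: ['read']
model: 'GPT-5 mini (copilot)'
---

You are a careful code reviewer. Read the relevant files, then point out
bugs, missing tests, and unclear naming. Do NOT edit any files.
```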

For instance, there is a built-in "Plan" agent in VS Code that will create a plan document for you based on a prompt. When it finishes, it displays the options to either implement the plan, or view the plan in the editor. These are handoffs. They are simply buttons in the chat which will execute predefined prompts making it easier to compose workflows.

Handoffs in VS Code chat view

When to use what where

The question, then, is: why do these things exist, and where is the right place to use each of them?

The answer to this question is, like most things in programming, "it depends". These are not necessarily tools for specific jobs as much as they are building blocks for AI workflows. Which sounds like a dodge, but I promise you it's not, and to prove it, I'll show you how I use them to assemble my own agentic workflow.

A look at my workflow

My AI workflow is fairly simple, and all it tries to do is mimic normal developer behaviour - because that's all I know. In my mind a typical workflow goes like this...

  • Create a feature branch
  • Make some commits
  • Merge the branch to main (via PR if on a team)

I realize there are a lot of other workflows out there, but this seems to me to be a simple, logical way of working so that it's easy to track work and roll back to various points in time. I am also optimizing for the best model for the job, for speed and for cost. Some people can just hit Opus 4.5 all day long, and some people really have to watch those premium requests closely. I opt in to the latter group because that's where the average VS Code customer finds themselves.

OK, so here's how it works. We assume that everything that we might want to do will be a single branch/PR. Even if it's trying to one-shot an entire app - that's a single branch. I find that it helps to think about things this way because it will stop you from trying to...one-shot a whole app in a single branch. It's super tempting to do that, but it will almost certainly end badly.

To do this, I use Custom Instructions, Prompt Files and a Custom Agent - so all three. It helps to see how other people use these to compose their own workflows, so let me show you how I use them to do something that I like to call "Logical AI Programming".

First, I have a custom planning prompt.
