The following are some notes taken while testing and intercepting requests from the Zed Assistant Panel (Zed version 0.150.4) to their server hosting the Claude 3.5 Sonnet model.
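For context, the interception can be done with a small mitmproxy addon along these lines. This is only a sketch: the `zed.dev` host filter and the script name are assumptions, not the exact setup used for these notes (the real scripts, `headers.py` and `anthropic.py`, show up in the diagnostics examples below).

```python
# Hypothetical mitmproxy addon for dumping request bodies.
# Run with: mitmproxy -s dump_bodies.py
import json

from mitmproxy import http


class DumpBodies:
    def request(self, flow: http.HTTPFlow) -> None:
        # Print the JSON body of requests going to Zed's LLM endpoint
        # (the host filter is a guess).
        if "zed.dev" not in flow.request.pretty_host:
            return
        try:
            body = json.loads(flow.request.get_text())
        except (json.JSONDecodeError, TypeError):
            return
        print(json.dumps(body, indent=2))


addons = [DumpBodies()]
```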
The command `/now` inserts the current date and time in the chat.
Assistant Panel
/now
what's the time?
Request
{
"model": "claude-3-5-sonnet-20240620",
"provider": "anthropic",
"provider_request": {
"max_tokens": 8192,
"messages": [
{
"content": [
{
"text": "Today is Fri, 30 Aug 2024 11:47:36 +0200.\nwhat's the time?",
"type": "text"
}
],
"role": "user"
}
],
"model": "claude-3-5-sonnet-20240620",
"system": ""
}
}
- It starts with `Today is` followed by the current date and time.
- Example format: `Fri, 30 Aug 2024 11:47:36 +0200.`
- Timezone is included.
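A minimal sketch of how this expansion appears to be built, based on the request above (the observed date matches RFC 2822 formatting; the helper name is hypothetical, not Zed's code):

```python
# Sketch of the observed /now expansion, not Zed's actual implementation.
from datetime import datetime
from email.utils import format_datetime


def expand_now(prompt: str) -> str:
    # Local time with timezone offset, e.g. "Fri, 30 Aug 2024 11:47:36 +0200".
    now = format_datetime(datetime.now().astimezone())
    return f"Today is {now}.\n{prompt}"


print(expand_now("what's the time?"))
```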
The command `/file [file path]` looks in the workspace for a file (it tries to autocomplete the path) and inserts its content on Enter.
Assistant Panel
/file headers.py
Explain this code
Request
{
"model": "claude-3-5-sonnet-20240620",
"provider": "anthropic",
"provider_request": {
"max_tokens": 8192,
"messages": [
{
"content": [
{
"text": "```py anthropic/headers.py\nimport json\n[...]\n```\ndiagnostics: anthropic/headers.py\n```python\nfrom pathlib import Path\n\nfrom mitmproxy import http\n// error: Import \"mitmproxy\" could not be resolved\n\nPATH_ROOT = Path(__file__).parent\n```\n\nExplain this code\n",
"type": "text"
}
],
"role": "user"
}
],
"model": "claude-3-5-sonnet-20240620",
"system": ""
}
}
- The file content is enclosed in triple backticks with the language identifier.
- Next to the language identifier, the file path is provided (`anthropic` here is the cwd).
- If any diagnostics are present, they are included (see Diagnostics).
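A sketch reproducing the fenced format seen in the request above; the helper is an illustration, not Zed's implementation:

```python
# Illustration of the /file expansion format observed above.
from pathlib import Path


def expand_file(path: Path, cwd: Path, prompt: str) -> str:
    lang = path.suffix.lstrip(".")        # "py" for headers.py
    rel = path.relative_to(cwd.parent)    # "anthropic/headers.py" ("anthropic" is the cwd)
    block = f"```{lang} {rel}\n{path.read_text()}\n```\n"
    # Diagnostics for the file, if any, would follow here as
    # "diagnostics: <path>" plus fenced excerpts (see /diagnostics below).
    return f"{block}{prompt}\n"
```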
The command `/tab` is similar to `/file`, but it looks for an open tab in the workspace (Zed tabs are equivalent to Neovim buffers). It has different forms:
- `/tab` to get the content of the current active tab.
- `/tab [file path]` to get the content of a specific tab.
- `/tab all` to include all the open tabs in the chat.
Assistant Panel
/tab all
What are these tabs?
Request
{
"model": "claude-3-5-sonnet-20240620",
"provider": "anthropic",
"provider_request": {
"max_tokens": 8192,
"messages": [
{
"content": [
{
"text": "```md anthropic/README.md\n# Anthropic\n[...]\n```\n```py anthropic/headers.py\nimport json\n[...]\n```\n\nWhat are these tabs?",
"type": "text"
}
],
"role": "user"
}
],
"model": "claude-3-5-sonnet-20240620",
"system": ""
}
}
- File contents are included in the context in the same way as `/file` (triple backticks, language identifier, file path).
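A sketch of what `/tab all` appears to do, inferred from the intercepted request (one fenced block per open tab, then the prompt); this is an assumption, not Zed's code:

```python
# Assumed behaviour of /tab all, inferred from the intercepted request.
from pathlib import Path


def expand_tabs(open_tabs: list[Path], cwd: Path, prompt: str) -> str:
    blocks = []
    for tab in open_tabs:
        lang = tab.suffix.lstrip(".")       # "md", "py", ...
        rel = tab.relative_to(cwd.parent)   # e.g. "anthropic/README.md"
        blocks.append(f"```{lang} {rel}\n{tab.read_text()}\n```")
    return "\n".join(blocks) + f"\n\n{prompt}"
```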
The command `/diagnostics` comes in different forms:
- `/diagnostics` to get diagnostics for the whole workspace.
- `/diagnostics --include-warning` to get diagnostics for the whole workspace, including warnings.
- `/diagnostics [file path]` to get diagnostics for a specific file.
- `/diagnostics [file path] --include-warning` to get diagnostics for a specific file, including warnings.
Assistant Panel
/diagnostics --include-warning
How can I fix those?
Request
{
"model": "claude-3-5-sonnet-20240620",
"provider": "anthropic",
"provider_request": {
"max_tokens": 8192,
"messages": [
{
"content": [
{
"text": "diagnostics\nanthropic/headers.py\n```python\nfrom pathlib import Path\n\nfrom mitmproxy import http\n// error: Import \"mitmproxy\" could not be resolved\n\nPATH_ROOT = Path(__file__).parent\n```\n```python\nPATH_HEADERS = PATH_ROOT / \"headers\"\n\nx = lambda x: x\n// warning: Do not assign a `lambda` expression, use a `def`\n\nclass DumpHeaders:\n```\nanthropic/headers.py\n```python\nfrom pathlib import Path\n\nfrom mitmproxy import http\n// error: Import \"mitmproxy\" could not be resolved\n\nPATH_ROOT = Path(__file__).parent\n```\n```python\nPATH_HEADERS = PATH_ROOT / \"headers\"\n\nx = lambda x: x\n// warning: Do not assign a `lambda` expression, use a `def`\n\nclass DumpHeaders:\n```\nanthropic/anthropic.py\n```python\nfrom pathlib import Path\n\nimport requests\n// warning: Import \"requests\" could not be resolved from source\nfrom mitmproxy import http\nfrom mitmproxy import ctx\n```\n```python\n\nimport requests\nfrom mitmproxy import http\n// error: Import \"mitmproxy\" could not be resolved\nfrom mitmproxy import ctx\n\n```\n```python\nimport requests\nfrom mitmproxy import http\nfrom mitmproxy import ctx\n// error: Import \"mitmproxy\" could not be resolved\n\nPATH_ROOT = Path(__file__).parent\n```\n\nHow can I fix those?",
"type": "text"
}
],
"role": "user"
}
],
"model": "claude-3-5-sonnet-20240620",
"system": ""
}
}
Request (print)
diagnostics
anthropic/headers.py
```python
from pathlib import Path
from mitmproxy import http
// error: Import "mitmproxy" could not be resolved
PATH_ROOT = Path(__file__).parent
```
```python
PATH_HEADERS = PATH_ROOT / "headers"
x = lambda x: x
// warning: Do not assign a `lambda` expression, use a `def`
class DumpHeaders:
```
anthropic/headers.py
```python
from pathlib import Path
from mitmproxy import http
// error: Import "mitmproxy" could not be resolved
PATH_ROOT = Path(__file__).parent
```
```python
PATH_HEADERS = PATH_ROOT / "headers"
x = lambda x: x
// warning: Do not assign a `lambda` expression, use a `def`
class DumpHeaders:
```
anthropic/anthropic.py
```python
from pathlib import Path
import requests
// warning: Import "requests" could not be resolved from source
from mitmproxy import http
from mitmproxy import ctx
```
```python
import requests
from mitmproxy import http
// error: Import "mitmproxy" could not be resolved
from mitmproxy import ctx
```
```python
import requests
from mitmproxy import http
from mitmproxy import ctx
// error: Import "mitmproxy" could not be resolved
PATH_ROOT = Path(__file__).parent
```
How can I fix those?
- It starts with the simple word `diagnostics`, followed by the file path and a list of diagnostics in that file, enclosed in triple backticks with the language identifier.
- Diagnostic comments are in the form `// error: ...` or `// warning: ...` (even if it's a Python file and Python doesn't have comments of this form).
- The line with the diagnostic is included, along with the line before and the line after.
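A sketch of the formatting described above; the diagnostic record here is a made-up structure (Zed gets this data from the language server), and the context handling follows the one-line-before/after description rather than Zed's exact logic:

```python
# Hypothetical formatting of one diagnostic, matching the intercepted request:
# a line of context before and after, with a C-style comment for the message.
from dataclasses import dataclass


@dataclass
class Diagnostic:
    line: int        # 0-based index of the offending line
    severity: str    # "error" or "warning"
    message: str


def format_diagnostic(diag: Diagnostic, lines: list[str]) -> str:
    before = lines[diag.line - 1] if diag.line > 0 else ""
    after = lines[diag.line + 1] if diag.line + 1 < len(lines) else ""
    return (
        f"```python\n{before}\n{lines[diag.line]}\n"
        f"// {diag.severity}: {diag.message}\n{after}\n```"
    )
```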
The command `/symbols` inserts the symbols of the current active file in the chat.
Assistant Panel
/symbols
What are those?
Request
{
"model": "claude-3-5-sonnet-20240620",
"provider": "anthropic",
"provider_request": {
"max_tokens": 8192,
"messages": [
{
"content": [
{
"text": "Symbols for anthropic/headers.py:\n- class DumpHeaders\n- class DumpHeaders def request\n- class Auth\n- class ChatCompletion\n- class Embeddings\n\nWhat are those?",
"type": "text"
}
],
"role": "user"
}
],
"model": "claude-3-5-sonnet-20240620",
"system": ""
}
}
- It starts with `Symbols for [file path]:` followed by a list of symbols in that file.
- Then a double newline and the rest of the prompt.
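A sketch reproducing the observed layout; the symbol list itself would come from the language server, and the helper name is hypothetical:

```python
# Illustration of the observed /symbols output format.
def expand_symbols(path: str, symbols: list[str], prompt: str) -> str:
    listing = "\n".join(f"- {s}" for s in symbols)
    return f"Symbols for {path}:\n{listing}\n\n{prompt}"


print(expand_symbols(
    "anthropic/headers.py",
    ["class DumpHeaders", "class DumpHeaders def request"],
    "What are those?",
))
```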
The command `/prompt [prompt title]` looks in the prompt library for the given title (it tries to autocomplete the prompt title) and inserts the prompt text on Enter.
- Prompt Title: "Explain code"
- Prompt Text: "Explain the provided code as a poem."
Assistant Panel
/prompt Explain code
/file headers.py
Request
{
"model": "claude-3-5-sonnet-20240620",
"provider": "anthropic",
"provider_request": {
"max_tokens": 8192,
"messages": [
{
"content": [
{
"text": "Explain the provied code as a poem.\n```py anthropic/headers.py\nimport json\n[...]\n```\ndiagnostics: anthropic/headers.py\n```python\nfrom pathlib import Path\n\nfrom mitmproxy import http\n// error: Import \"mitmproxy\" could not be resolved\n\nPATH_ROOT = Path(__file__).parent\n```\n",
"type": "text"
}
],
"role": "user"
}
],
"model": "claude-3-5-sonnet-20240620",
"system": ""
}
}
- The prompt text is inserted in place of `/prompt [prompt title]` in the request.
- As showcased on the https://zed.dev/ai homepage, prompts can be nested.
The command `/default` inserts the default prompt in the chat.
- The default prompt is empty, but it can be customized by including custom prompts in it.
- The default prompt is a prompt like any other, but it is automatically inserted when starting a new context in the Assistant Panel.
The command `/terminal` inserts the terminal text in the chat. It should have an option to specify the line count (`/terminal --line-count`), but I was not able to make it work :(.
Assistant Panel
/terminal
what files are in the dir?
Request
{
"model": "claude-3-5-sonnet-20240620",
"provider": "anthropic",
"provider_request": {
"max_tokens": 8192,
"messages": [
{
"content": [
{
"text": "Terminal output:\nanthropic on main [✘!?] via v3.11.9\n[...]\nwhat files are in the dir?",
"type": "text"
}
],
"role": "user"
}
],
"model": "claude-3-5-sonnet-20240620",
"system": ""
}
}
Request (print)
Terminal output:
anthropic on main [✘!?] via v3.11.9 (anthropic)
❯ ls
README.md headers notes.md
anthropic.py headers.py requirements.txt
commands.md logs
anthropic on main [✘!?] via v3.11.9 (anthropic)
❯
what files are in the dir?
- It starts with `Terminal output:` followed by the terminal output.
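As a sketch, the expansion seems to be nothing more than a label plus the captured scrollback (an assumption based on the request above):

```python
# Assumed /terminal expansion, based on the intercepted request.
def expand_terminal(scrollback: str, prompt: str) -> str:
    return f"Terminal output:\n{scrollback}\n{prompt}"
```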
The command `/fetch [URL]` fetches the content of the URL and inserts it in the chat.
Assistant Panel
/fetch https://en.wikipedia.org/wiki/Python_(programming_language)
Who created Python?
Request
{
"model": "claude-3-5-sonnet-20240620",
"provider": "anthropic",
"provider_request": {
"max_tokens": 8192,
"messages": [
{
"content": [
{
"cache_control": {
"type": "ephemeral"
},
"text": "# Python (programming language)\n\nPython is a high-level, general-purpose programming language [...]\nWho created Python?",
"type": "text"
}
],
"role": "user"
}
],
"model": "claude-3-5-sonnet-20240620",
"system": ""
}
}
- The URL is fetched and its content is converted to markdown.
- Fetching and converting is a somewhat long process (a couple of seconds); when the site content is successfully converted to markdown, an "@" is placed in front of the `/fetch` command.
- The markdown content is inserted in the chat without any additional formatting.
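Zed does the fetching and the HTML-to-markdown conversion internally; the Python below is only an illustration of the observable behaviour, with `requests` and `html2text` as stand-ins for whatever Zed actually uses:

```python
# Illustrative only: fetch a page and turn its HTML into markdown,
# mimicking what the /fetch command appears to do.
import html2text
import requests


def expand_fetch(url: str, prompt: str) -> str:
    html = requests.get(url, timeout=10).text
    converter = html2text.HTML2Text()
    converter.ignore_links = False
    markdown = converter.handle(html)
    return f"{markdown}\n{prompt}"
```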
The command `/workflow` is quite different from the other commands.
- The system prompt is quite long (with few-shot examples).
- It makes use of XML tags to structure the workflow (for Claude 3.5 Sonnet, XML tags are the default way to structure text, an improvement over markdown).
- Zed parses that XML to provide a way to interact with the workflow (see the "Workflow for Complex Transformations" section in the blog).