This works today, even before first-class support lands. The example below uses opencode with Docker Model Runner (DMR). First, pull a model:
```
docker model pull qwen3.6
```

Access to `host.docker.internal` is allowed by default, so the following is usually not needed. If you hit a network policy issue, run:

```
sbx policy allow network localhost:12434
```

For opencode, drop this `opencode.json` into the root of your project:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "dmr/qwen3.6",
  "provider": {
    "dmr": {
      "models": {
        "qwen3.6": {
          "name": "qwen3.6"
        }
      },
      "name": "Docker Model Runner",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://host.docker.internal:12434/v1"
      }
    }
  }
}
```

Then run:

```
sbx run opencode
```

That's it.
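With that `baseURL`, opencode talks the plain OpenAI-compatible chat API to DMR. As a sanity check, you can build the same request body it would send and validate it locally before launching the agent — a minimal sketch, where the endpoint path and model name simply mirror the config above:

```shell
# Build the OpenAI-compatible request body opencode will send to DMR.
# "qwen3.6" must match a key under provider.dmr.models in opencode.json.
BODY='{"model": "qwen3.6", "messages": [{"role": "user", "content": "Say hello"}]}'

# Validate the JSON locally before pointing anything at the endpoint.
echo "$BODY" | python3 -m json.tool

# To smoke-test the endpoint itself from the host, uncomment:
# curl -s http://localhost:12434/v1/chat/completions \
#   -H 'Content-Type: application/json' -d "$BODY"
```

If the curl call answers with a completion, the sandboxed opencode session will reach the same server via `host.docker.internal`.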
You don't need to override the context window manually. DMR no longer caps context at 4k by default; models advertise their real context length. For example:

```
docker model inspect -r qwen3.6
# "qwen35moe.context_length": "262144"
```

Credit: thanks to Ignasi Lopez Luna and Chris Crone for the recipe.
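If you want to script against the advertised context length, the value can be pulled out with standard text tools — a minimal sketch, assuming the key/value line format shown above (the `sample` variable stands in for a real `docker model inspect -r` pipe):

```shell
# Parse the advertised context length out of an inspect-style line.
# In practice: docker model inspect -r qwen3.6 | grep context_length | ...
sample='"qwen35moe.context_length": "262144"'
ctx=$(echo "$sample" | sed -E 's/.*"([0-9]+)".*/\1/')
echo "$ctx"   # 262144
```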