@kiview
Created April 30, 2026 11:47

Using Docker Sandboxes (sbx) with Local Models via Docker Model Runner

This works today, even before first-class support lands. The example uses opencode with Docker Model Runner (DMR).

Steps

1. Pull a model

docker model pull qwen3.6

2. (Optional) Allow network access to DMR

host.docker.internal is allowed by default, so this step is usually unnecessary. If you hit a network policy issue, run:

sbx policy allow network localhost:12434

3. Configure your agent to talk to DMR

For opencode, drop this opencode.json into the root of your project:

{
  "$schema": "https://opencode.ai/config.json",
  "model": "qwen3.6",
  "provider": {
    "dmr": {
      "models": {
        "qwen3.6": {
          "name": "qwen3.6"
        }
      },
      "name": "Docker Model Runner",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://host.docker.internal:12434/v1"
      }
    }
  }
}
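As a quick sanity check, the config above can be parsed and its DMR wiring verified with a short Python sketch. This is not part of opencode itself, just a local check; the JSON is embedded verbatim so the snippet is self-contained:

```python
import json

# The opencode.json from above, embedded so this check is self-contained.
CONFIG = """
{
  "$schema": "https://opencode.ai/config.json",
  "model": "qwen3.6",
  "provider": {
    "dmr": {
      "models": {"qwen3.6": {"name": "qwen3.6"}},
      "name": "Docker Model Runner",
      "npm": "@ai-sdk/openai-compatible",
      "options": {"baseURL": "http://host.docker.internal:12434/v1"}
    }
  }
}
"""

config = json.loads(CONFIG)
dmr = config["provider"]["dmr"]

# The default model must be declared under the provider, and the baseURL
# must point at DMR's OpenAI-compatible endpoint.
assert config["model"] in dmr["models"]
assert dmr["options"]["baseURL"] == "http://host.docker.internal:12434/v1"
print("config OK")
```

If opencode rejects the file, a malformed baseURL or a model key mismatch between `model` and `provider.dmr.models` is the usual culprit.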

4. Run the sandbox

sbx run opencode

That's it.

Note on context window

You don't need to override the context window manually. DMR no longer caps context at 4k by default; models advertise their real context length. For example:

docker model inspect -r qwen3.6
# "qwen35moe.context_length": "262144"
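If you want to read the advertised context length programmatically, the metadata key/value pairs printed by `docker model inspect -r` can be parsed as JSON. A minimal sketch, assuming the output is a flat JSON object (only the `context_length` entry below is taken from the inspect output above; the exact shape of the full payload is an assumption):

```python
import json

# Sample metadata in the shape printed by `docker model inspect -r qwen3.6`.
# Only the context_length entry is from the document; the flat-object shape
# is an assumption.
SAMPLE = '{"qwen35moe.context_length": "262144"}'

metadata = json.loads(SAMPLE)

# Find whichever key ends in ".context_length" and read it as an int,
# so the snippet works regardless of the model-specific key prefix.
context_length = next(
    int(v) for k, v in metadata.items() if k.endswith(".context_length")
)
print(context_length)  # 262144
```

This confirms the model advertises its full context window, so no manual override is needed.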

Credit: thanks to Ignasi Lopez Luna and Chris Crone for the recipe.
