curl -fsSL https://ollama.com/install.sh | sh
ollama pull glm-4.7-flash # or gpt-oss:20b (for better performance)
curl -fsSL https://claude.ai/install.sh | bash
ollama launch claude --model glm-4.7-flash # or ollama launch claude --model gpt-oss:20b
Hi, did you find any solution for this?
I’m facing the same issue. I tested with local models like qwen3.5, and Claude Code behaved like a plain LLM (no file reads/writes, no tool usage).
But when I switched to a cloud model (qwen3.5:cloud), it worked properly and was able to create files through Claude Code.
Just wanted to check whether you managed to get local models working with file operations, or if cloud is the only way right now.
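The behavior described above (file operations working with cloud models but not local ones) usually comes down to tool calling: Claude Code drives file reads/writes through tool calls, so the model must support Ollama's tool-calling API. A minimal sketch of the request shape that a tool-capable model needs to handle, assuming Ollama's documented /api/chat endpoint; the tool name `write_file` here is hypothetical, for illustration only, and the payload is built but not sent:

```python
import json

# OpenAI-style tool schema accepted by Ollama's /api/chat endpoint.
# Models that lack the "tools" capability ignore this field and reply
# with plain text, which would explain Claude Code falling back to
# normal-LLM behavior with some local models.
payload = {
    "model": "glm-4.7-flash",  # model name from the setup commands above
    "messages": [
        {"role": "user", "content": "Create a file named notes.txt"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "write_file",  # hypothetical tool, for illustration
            "description": "Write content to a file on disk",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "content": {"type": "string"},
                },
                "required": ["path", "content"],
            },
        },
    }],
    "stream": False,
}

# POSTing this to http://localhost:11434/api/chat with a tool-capable
# model should return message.tool_calls rather than plain text.
print(json.dumps(payload, indent=2))
```

A quick local check is `ollama show <model>`, which lists the model's capabilities; if "tools" is not among them, tool-driven file operations won't work regardless of the client.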