@iam-veeramalla
Last active April 18, 2026 00:16
Claude Code integration with Ollama to use local models

Run Claude Code with the power of local LLMs using Ollama

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Pull the Model

ollama pull glm-4.7-flash # or gpt-oss:20b (for better performance)
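After pulling, it may help to confirm the model downloaded correctly before wiring up Claude. A quick sanity check (the model name is the one used in this gist; substitute whichever model you actually pulled):

```shell
# List locally available models; the one you pulled should appear.
ollama list

# Optional: run a one-off prompt to confirm the model loads and
# responds (replace glm-4.7-flash with the model you pulled).
ollama run glm-4.7-flash "Reply with the single word: ready"
```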

Install Claude

curl -fsSL https://claude.ai/install.sh | bash

Run Claude with Ollama

ollama launch claude --model glm-4.7-flash # or: ollama launch claude --model gpt-oss:20b
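If ollama launch fails with unknown command "launch", the installed Ollama likely predates the release that added the subcommand. A quick check, as a sketch (grepping the help output is a heuristic, not an official compatibility test):

```shell
# Print the installed Ollama version, then check whether "launch"
# appears among the subcommands listed in the help output.
ollama --version
if ollama help 2>/dev/null | grep -qw launch; then
  echo "launch subcommand available"
else
  echo "launch not found - update Ollama and restart it"
fi
```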

@chinthakindi-saikumar

macOS, Linux, WSL:
curl -fsSL https://claude.ai/install.sh | bash
This is not working for me on Windows.

@chinthakindi-saikumar

On Windows I used this instead: curl -fsSL https://claude.ai/install.cmd -o install.cmd && install.cmd && del install.cmd

@vinoth-6

used instead curl -fsSL https://claude.ai/install.cmd -o install.cmd && install.cmd && del install.cmd for windows

This also does not work in PowerShell.


@Soumya14041987

For Windows, suggest folks install WSL by executing:

wsl --install

Then run everything in the Ubuntu terminal.

@plk456

plk456 commented Feb 25, 2026

irm https://claude.ai/install.ps1 | iex

Windows users can use this command in PowerShell to install the CLI tool.

@chinthakindi-saikumar

Here are the clear steps @vinoth-6.

1. Install Ollama
Open CMD/terminal and run: curl -fsSL https://ollama.com/install.sh | sh
2. Pull the model
Install a model based on your system configuration: ollama pull glm-4.7-flash # or gpt-oss:20b (for better performance), or ollama pull gemma:2b
Optional: ollama run gemma:2b to try the model locally.
3. Install Claude
macOS, Linux, WSL: curl -fsSL https://claude.ai/install.sh | bash
Windows CMD: curl -fsSL https://claude.ai/install.cmd -o install.cmd && install.cmd && del install.cmd
4. Run Claude with Ollama
ollama launch claude --model glm-4.7-flash # or: ollama launch claude --model gpt-oss:20b

@Nestleai

I've followed this guide and everything works except Claude: on the first test command ("write a python function to reverse a script") it has been hanging for 10+ minutes. Bear in mind I'm working with the qwen2.5-coder:7b model. Could this mean the model isn't compatible with my hardware?

@NathanLewis

I tried it, but Claude can't see the local filesystem, not even the files in the directory I run it from.

@eshwarvijay

eshwar: ollama launch claude --model qwen3-coder-next:latest
Error: unknown command "launch" for "ollama"

@JenilSavalia

eshwar: ollama launch claude --model qwen3-coder-next:latest Error: unknown command "launch" for "ollama"

Same error here.

@naimroslan

eshwar: ollama launch claude --model qwen3-coder-next:latest Error: unknown command "launch" for "ollama"

Ran into this same issue. I just updated Ollama and it works fine now.

@yangboz

yangboz commented Mar 5, 2026

eshwar: ollama launch claude --model qwen3-coder-next:latest Error: unknown command "launch" for "ollama"

How do I fix Error: unknown command "launch" for "ollama"? Thanks.

@abhayIII

abhayIII commented Mar 5, 2026

eshwar: ollama launch claude --model qwen3-coder-next:latest Error: unknown command "launch" for "ollama"

man , how about Error: unknown command "launch" for "ollama" ? thanks.

Upgrading Ollama (via the "restart to upgrade" option) made it work for me. After you install Claude, restart Ollama once, then run the command.

@DevendraBhoraniya

eshwar: ollama launch claude --model qwen3-coder-next:latest Error: unknown command "launch" for "ollama"

Try updating Ollama; it worked for me.

@seshubabubatchu

Hey guys, I tried the same but with a different model, qwen2.5-coder:7b.
I tried the Continue extension in VS Code and also Claude, but why is the output only in JSON format?

Is this something related to the model?

@DevjitSikdar

Why the heck am I only getting output in my terminal? Why can't Claude Code access my files?

@ShubhmPatil

curl https://claude.ai/install.cmd
This command worked for me on Windows to install Claude.

@sanjaykadavarath

Thank you @iam-veeramalla

@DevendraBhoraniya

Hey guys I tried the same but with different model qwen2.5-coder:7b I tried with the continue extension in vs code and also with the Claude as well, why this is giving the output in only json format

is this something related to the model?

I am getting the same problem. If you find a solution, please tell me.

@nipodemus

I tried it but claude can't see the local filesystem not even the files in the directory I run it from.

I had the same problem. Check /permissions: by default almost everything usable is denied. I added Bash, for example, to the Allow rules and file access started working.
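As an alternative to toggling rules interactively with /permissions, Claude Code can also read allow rules from a project settings file. A minimal sketch, assuming the project-level path .claude/settings.json (which tools you allow is up to you; Bash is the one that fixed file access above):

```json
{
  "permissions": {
    "allow": [
      "Bash",
      "Read",
      "Edit"
    ]
  }
}
```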

@medake

medake commented Mar 16, 2026

Hi @iam-veeramalla @ShubhmPatil @chinthakindi-saikumar - I am running Windows with PowerShell and set up Ollama with the local model qwen3.5:9b. I am not able to create, read, or write files in the project directory. After a lot of searching I found that Claude's agent capabilities (filesystem tools, project scanning) exist only in Claude Code CLI, which requires Anthropic authentication and billing. Under Ollama-only execution, Claude behaves as a plain LLM without tool access, so automated codebase analysis is not possible. How did it work in your case?

@Kalyani-192006

Getting this error:

API Error: 500
{"type":"error","error":{"type":"api_error","message":"model requires more system memory (18.2 GiB) than is available (4.1 GiB)"},"request_id":"req_7a5c7bfa4bc13c556a2405ad"}

@Hemanth26080

@Kalyani-192006 Your laptop has 4 GB of RAM, but to run these models you need at least 12 GB. Upgrade the RAM either physically or virtually.
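Before pulling a model, you can check how much memory is actually free. A minimal Linux/WSL sketch (the 12 GB figure above is a rule of thumb; the server's own error message reports the exact requirement, e.g. 18.2 GiB for that model):

```shell
# Print available memory in GiB from /proc/meminfo (Linux/WSL).
# MemAvailable is reported in kB; divide by 1048576 (1024*1024) for GiB.
awk '/^MemAvailable/ {printf "available memory: %.1f GiB\n", $2/1048576}' /proc/meminfo
```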

@wahab-bahi

I have 8 GB of RAM in my laptop. Which model would work fine on it?
If anyone knows, kindly help me. @Hemanth26080

@aneeshksoft

Hi @iam-veeramalla @ShubhmPatil @chinthakindi-saikumar - I am running Windows with PowerShell and set up Ollama with the local model qwen3.5:9b. I am not able to create or read or write the files in the project directory. After a lot of serch I found that Claude’s agent capabilities (filesystem tools, project scanning) exist only in Claude Code CLI, which requires Anthropic authentication and billing. Under Ollama-only execution, Claude behaves as a plain LLM without tool access, so automated codebase analysis is not possible. How did it work in your case?

Hi, did you find any solution for this?
I’m facing the same issue. I tested with local models like qwen3.5, and Claude Code behaves like a normal LLM (no file read/write, no tool usage).
But when I switched to a cloud model (qwen3.5:cloud), it worked properly and was able to create files using Claude Code.
Just wanted to check if you managed to get local models working with file operations, or if cloud is the only way right now.

@kamols

kamols commented Apr 5, 2026

https://www.youtube.com/watch?v=AKKx1PoNtnM

This was spot on - thanks for this Abhishek

@Hemanth26080

i have 8 gb ram in my laptop which claude model is perfect for my laptop that works fine . if anyone knows kindly helps me @Hemanth26080

What I did was just install Claude Code and configure an OpenRouter API key, without any local installation. I started using all the free models on OpenRouter. I hope this information is helpful. @iam-veeramalla @aneeshksoft @ShubhmPatil @chinthakindi-saikumar @Kalyani-192006
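For anyone wanting to reproduce this, Claude Code can be pointed at an alternative endpoint through environment variables. A sketch under stated assumptions: ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are standard Claude Code settings, but the OpenRouter URL below is an assumption; verify it against OpenRouter's documentation, since some setups route Claude Code through a translation proxy instead.

```shell
# Point Claude Code at an OpenRouter API key instead of local Ollama.
# The base URL below is an assumption; verify it before use.
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="<your-openrouter-api-key>"
claude
```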

@AnshumanBharatiya

i have 8 gb ram in my laptop which claude model is perfect for my laptop that works fine . if anyone knows kindly helps me @Hemanth26080

What I did was I just installed Claude code, and I configured the OpenRoute API key without any installation. I started using all the free model in openroute for free. I hope this information will helpful. @iam-veeramalla @aneeshksoft @ShubhmPatil @chinthakindi-saikumar @Kalyani-192006

Which is better, OpenRouter or Ollama?
