Setting Up a Local AI Co-Pilot for PyCharm with Continue and Ollama

Steps:

  1. Install the Continue plugin:

    PyCharm --> Settings --> Plugins --> search for "Continue" --> Install

  2. Download and install Ollama (https://ollama.com).
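For example, on macOS Ollama can be installed via Homebrew, and on Linux via the official install script; a minimal sketch, assuming a default setup:

```bash
# macOS: install via Homebrew (or download the app from https://ollama.com)
brew install ollama

# Linux: official install script from the Ollama docs
curl -fsSL https://ollama.com/install.sh | sh

# start the Ollama server if it is not already running as a service
ollama serve
```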

  3. Download an LLM that can run locally, for example the open-source Granite model from IBM. (The config below also uses the smaller granite3.1-dense:2b variant for tab autocompletion, so pull that as well if you want autocomplete.)

```bash
ollama pull granite3.1-dense:8b
ollama pull granite3.1-dense:2b
```
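To confirm the download worked, you can list the installed models and send a quick one-off test prompt (the prompt text here is just an illustration):

```bash
# list locally installed models
ollama list

# run a single test prompt against the 8b model
ollama run granite3.1-dense:8b "Write a Python one-liner that reverses a string."
```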
  4. OPTIONAL: In addition to the LLM for chat and code generation, you can install an embedding model to enable the Retrieval-Augmented Generation (RAG) capabilities of Continue, for example:

```bash
ollama pull granite-embedding:30m
```
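To sanity-check the embedding model, you can query Ollama's embeddings endpoint directly on its default port 11434 (the prompt string below is arbitrary):

```bash
# request an embedding vector for a test string
curl http://localhost:11434/api/embeddings \
  -d '{"model": "granite-embedding:30m", "prompt": "retrieval augmented generation"}'
```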
  5. Configure the Continue PyCharm plugin:

Modify the ~/.continue/config.json file, for example:

```json
{
  "models": [
    {
      "title": "Granite 3.1 8b",
      "provider": "ollama",
      "model": "granite3.1-dense:8b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Granite 3.1 2b",
    "provider": "ollama",
    "model": "granite3.1-dense:2b"
  },
  "embeddingsProvider": {
      "provider": "ollama",
      "model": "granite-embedding:30m",
      "maxChunkSize": 512
  },
  "customCommands": [
    {
      "name": "list-comprehension",
      "prompt": "{{{ input }}}\n\nRefactor the selected python code to use list comprehensions wherever possible. Present the output as a python code snippet.",
      "description": "Refactor to use list comprehensions"
    }
  ]
  ...
}
```
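Before relying on the plugin, it can help to verify that Continue can reach Ollama. A quick check is to hit the generate endpoint with the chat model from the config above (assumes Ollama is running on its default port):

```bash
# non-streaming test request against the configured chat/code model
curl http://localhost:11434/api/generate \
  -d '{"model": "granite3.1-dense:8b", "prompt": "def fibonacci(n):", "stream": false}'
```

Once the config is saved, restart PyCharm; the custom command defined above can then be invoked from the Continue chat by selecting code and typing /list-comprehension.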
