- Install the Continue plugin:
  PyCharm --> Settings --> Plugins --> search for "Continue" --> Install
- Download and install Ollama
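  On Linux, Ollama provides a documented one-line install script (macOS and Windows builds are downloaded from ollama.com instead); a quick sketch:

  curl -fsSL https://ollama.com/install.sh | sh
  ollama --version   # verify the CLI is on the PATH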
- Download an LLM that can run locally, for example the open-source Granite model from IBM:
  ollama pull granite3.1-dense:8b
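  To confirm the model downloaded and responds, list the local models and run a one-off prompt (the prompt text is just an illustration):

  ollama list
  ollama run granite3.1-dense:8b "Write a Python function that reverses a string."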
- OPTIONAL: In addition to the LLM for chat and code generation, you can install an embedding model to enable the Retrieval Augmented Generation (RAG) capabilities of Continue, for example:
  ollama pull granite-embedding:30m
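  Once pulled, the embedding model can be sanity-checked against Ollama's local REST API, which listens on port 11434 by default (the input string here is arbitrary):

  curl http://localhost:11434/api/embed -d '{"model": "granite-embedding:30m", "input": "hello world"}'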
- Configure the Continue PyCharm plugin by editing the ~/.continue/config.json file, for example:
{
  "models": [
    {
      "title": "Granite 3.1 8b",
      "provider": "ollama",
      "model": "granite3.1-dense:8b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Granite 3.1 2b",
    "provider": "ollama",
    "model": "granite3.1-dense:2b"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "granite-embedding:30m",
    "maxChunkSize": 512
  },
  "customCommands": [
    {
      "name": "list-comprehension",
      "prompt": "{{{ input }}}\n\nRefactor the selected python code to use list comprehensions wherever possible. Present the output as a python code snippet.",
      "description": "Refactor to use list comprehensions"
    }
  ]
  ...
}
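Note that this example points tabAutocompleteModel at granite3.1-dense:2b, a smaller variant that keeps inline completions responsive; pull it as well (ollama pull granite3.1-dense:2b) or reuse the 8b model there.

The custom command becomes available in the Continue chat as /list-comprehension. As a hand-written illustration of the refactoring it requests (not actual model output), a loop like:

  # before: explicit accumulator loop
  squares = []
  for n in range(10):
      if n % 2 == 0:
          squares.append(n * n)

would be rewritten as:

  # after: the equivalent list comprehension
  squares = [n * n for n in range(10) if n % 2 == 0]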