@burtawicz
Created May 14, 2024 02:01
Augmenting IntelliJ with Ollama + Continue

The purpose of this gist is to help those interested in augmenting their development experience with LLMs using Ollama and Continue. It is intended to be short and easy to follow, but feel free to reach out via GitHub or email if you have questions. This guide assumes you already have JetBrains' IntelliJ installed on your machine (the same approach appears to work with Visual Studio Code as well).

📋 Steps

  1. Install Ollama
  2. Install Continue
  3. Configure Continue
    • Modify your config via the button in the Continue side panel, or from the command line: vi ~/.continue/config.json (see the shell sketch after this list)
    • The Continue docs are a great reference for customizing your configuration.
  4. Configure IntelliJ
  5. Verify functionality
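
A minimal shell sketch of steps 1-3, assuming a Linux or macOS machine (the install-script URL is Ollama's published one; the Continue plugin itself is installed from the JetBrains Marketplace inside IntelliJ rather than from the shell):

# Step 1: install Ollama (Linux install script; on macOS use the desktop app or `brew install ollama`)
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama server if it isn't already running (it listens on localhost:11434 by default)
ollama serve

# Step 2: install the Continue plugin inside IntelliJ
# Settings/Preferences -> Plugins -> Marketplace -> search for "Continue" -> Install

# Step 3: open the Continue config for editing (an example config is shown below)
vi ~/.continue/config.json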

Example ~/.continue/config.json

{  
  "models": [  
    {  
      "title": "Ollama - WizardLM2 7B",  
      "provider": "ollama",  
      "model": "wizardlm2:7b",  
      "apiBase": "http://localhost:11434",  
      "completionOptions": {}  
    },  
    {  
      "title": "Ollama - Codellama 13B",  
      "provider": "ollama",  
      "model": "codellama:13b",  
      "apiBase": "http://localhost:11434",  
      "completionOptions": {}  
    },  
    {  
      "title": "Ollama - LLama3 8B",  
      "provider": "ollama",  
      "model": "llama3",  
      "apiBase": "http://localhost:11434",  
      "completionOptions": {}  
    }  
  ],  
  "slashCommands": [  
    {  
      "name": "edit",  
      "description": "Edit selected code"  
    },  
    {  
      "name": "comment",  
      "description": "Write comments for the selected code"  
    },  
    {  
      "name": "share",  
      "description": "Export this session as markdown"  
    }  
  ],  
  "customCommands": [  
    {  
      "name": "unit-test",  
      "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",  
      "description": "Write unit tests for selected code"  
    },  
    {  
      "name": "integration-test",  
      "prompt": "Write a comprehensive set of integration tests for the selected code. It should setup, run tests that check for correctness, including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",  
      "description": "Write integration tests for selected code."  
    }  
  ],  
  "contextProviders": [  
    {"name": "open", "params": {}},  
    {"name": "code", "params": {}},  
    {"name": "docs", "params": {}},  
    {"name": "diff", "params": {}},  
    {"name": "terminal", "params": {}},  
    {"name": "search", "params": {}}  
  ],  
  "tabAutocompleteModel": {  
    "title": "Ollama - Starcoder2 3B",  
    "provider": "ollama",  
    "model": "starcoder2:3b",
    "apiBase": "http://localhost:11434",  
  },  
  "tabAutocompleteOptions": {  
    "useCopyBuffer": false,  
    "useSuffix": false,  
    "maxPromptTokens": 512,  
    "prefixPercentage": 0.5  
  },  
  "allowAnonymousTelemetry": false  
}

Note: I've prefixed the model names with "Ollama - " because I've also been experimenting with running models with Continue and llama2.c / llm.c on a remote machine. If you're only using Ollama as a provider, this is bloat.

🧠 Recommended Models

  • Starcoder2
    • 3B params for autocomplete
  • Codegemma
    • 7B+ for code inference and chat (more params tend to result in better suggestions)
  • Llama3
  • Phi3
    • 3.8B params for autocomplete (this model does best with Python-specific codebases)

Pull models via ollama pull <model-name:tag>.
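
For example, to pull the models referenced in the config above:

ollama pull starcoder2:3b
ollama pull codellama:13b
ollama pull wizardlm2:7b
ollama pull llama3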

🤷‍♂️ Conclusion

Once your configuration is set up, you should see completion suggestions in the editor window.
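
If suggestions don't appear, a quick sanity check (a sketch assuming the default localhost:11434 endpoint used in the config above) is to confirm the Ollama server is reachable and that the models were actually pulled:

# List the models available to the local Ollama instance
ollama list

# Or query the server directly; it should respond with JSON listing the local models
curl http://localhost:11434/api/tags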

Select a block of code and:

  • cmd+i/ctrl+i to start a prompt related to the selected code for an inline suggestion (use the slashCommands defined in the config for shortcuts)
  • cmd+j/ctrl+j to start a prompt related to the selected code in chat