Title: Setting Up a Local Ollama Copilot via LSP
Author: Alex Kirk
Published: November 15, 2024
Last modified: February 12, 2025

---

# Setting Up a Local Ollama Copilot via LSP

November 15, 2024

I am quite interested in running AI offline, so I really like [Ollama](https://github.com/ollama/ollama),
and I have [added automatic failover from ChatGPT to a local AI to my little terminal LLM tool cll](https://alex.kirk.at/2023/10/20/chat-cli-renamed-and-added-ollama-support/)
(get it on GitHub at [akirk/cll](https://github.com/akirk/cll)).

As a developer, an important local gap for me was [GitHub Copilot](https://github.com/features/copilot).
Its autocomplete-on-steroids functionality is really powerful in my day-to-day work
and speeds up my development a lot.

Now, how can you get this offline? Mostly, search engines point to [solutions that involve Visual Studio Code extensions](https://rahultah.medium.com/build-your-own-github-copilot-with-ollama-continue-llm-magic-and-api-power-745dd9597322),
for example [Continue](https://www.continue.dev/), plus lots of other dependencies.

### LSPs are independent of IDEs

But why should this involve IDE extensions? With the concept of LSPs (read [LSP: the good, the bad, and the ugly](https://www.michaelpj.com/blog/2024/09/03/lsp-good-bad-ugly.html)
to learn how LSPs work), and the existence of [LSP-Copilot](https://github.com/TerminalFi/LSP-copilot),
this should be independent of IDEs. And I personally use [Sublime Text](https://www.sublimetext.com/).

And indeed, it does work on just that basis: using the Go proxy [ollama-copilot](https://github.com/bernardo-bruning/ollama-copilot)
by [Bernardo de Oliveira Bruning](https://github.com/bernardo-bruning).

But for me it didn’t work out of the box. Thus, I’d like to share the steps that
got this working for me. I use macOS.

### Steps to get it running

First, follow the [install instructions for Ollama and ollama-copilot](https://github.com/bernardo-bruning/ollama-copilot?tab=readme-ov-file#installation).
This puts the Go binary at `~/go/bin/ollama-copilot`.
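For reference, when I set this up the installation boiled down to the commands below; treat them as a snapshot of the README rather than gospel, and note that `codellama:code` is simply the model the README suggested at the time:

```sh
# Install Ollama first (see https://ollama.com), then pull a code
# completion model (model name as suggested by the ollama-copilot README).
ollama pull codellama:code

# Build and install the proxy into ~/go/bin (requires a Go toolchain).
go install github.com/bernardo-bruning/ollama-copilot@latest
```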

Then, change the settings for LSP-copilot and add `"proxy": "127.0.0.1:11435"`
(this is the proxy's default local port; Ollama itself listens on 11434).
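As a sketch, this is how I understand the `LSP-copilot.sublime-settings` file should look; I am assuming `proxy` belongs under the `settings` key like the other options in the LSP-copilot README, so double-check the nesting there:

```json
{
	"settings": {
		// Route the Copilot agent's API traffic through the local proxy.
		"proxy": "127.0.0.1:11435"
	}
}
```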

Now, you also need to address the certificate situation. I use mkcert, which you
can install with Homebrew using

`brew install mkcert`

Follow the instructions to install its root cert (that is just `mkcert -install`;
it may ask for your password to add the CA to the system trust store). We need
a certificate that covers ~~two~~ **Edit:** three hosts, so run

`cd ~/go/bin/; mkcert api.github.com copilot-proxy.githubusercontent.com proxy.individual.githubcopilot.com`

which gives you two files with which you can now start the proxy:

`~/go/bin/ollama-copilot -cert ~/go/bin/api.github.com+2.pem -key ~/go/bin/api.github.com+2-key.pem`
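Before pointing the editor at the proxy, it doesn't hurt to sanity-check that both processes are listening; this is just a generic sketch of mine, not something from the ollama-copilot docs:

```sh
# Ollama's own API answers on its default port 11434.
curl http://127.0.0.1:11434/api/tags

# The proxy should be bound to 11435 (lsof ships with macOS).
lsof -i :11435
```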

Finally, you need to add one more thing to the LSP-copilot config JSON. First,
find out the location of the root cert with `echo $(mkcert -CAROOT)/rootCA.pem`,
then add an `env` section there ([see this FAQ](https://github.com/TerminalFi/LSP-copilot#i-see-unable_to_get_issuer_cert_locally-error));
for me it's:

```json
"env": {
	"NODE_EXTRA_CA_CERTS": "~/Library/Application Support/mkcert/rootCA.pem"
},
```
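Putting both pieces together, my whole `LSP-copilot.sublime-settings` then looks roughly like this; the placement of `env` at the top level follows the FAQ linked above, the rest is my assumption, so treat it as a sketch:

```json
{
	// Extra CA so the bundled Copilot Node agent trusts the mkcert certificate.
	"env": {
		"NODE_EXTRA_CA_CERTS": "~/Library/Application Support/mkcert/rootCA.pem"
	},
	"settings": {
		// Send completion requests through the local ollama-copilot proxy.
		"proxy": "127.0.0.1:11435"
	}
}
```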

This made it work for me. **Edit:** It seems a bit erratic. For me it works most
reliably if you start ollama-copilot first and only then Sublime Text. You can
see the proxy at work through its output in the terminal:

```
2024/11/15 16:04:08 request: POST /v1/engines/copilot-codex/completions
2024/11/15 16:04:12 response: POST /v1/engines/copilot-codex/completions 200 4.744932083s
```

And this is from the LSP log panel:

```
:: [16:04:07.967]  -> LSP-copilot textDocument/didChange: {'textDocument': {'uri': 'file:///...', 'version': 42}, 'contentChanges': [{'range': {'start': {'line': 2860, 'character': 53}, 'end': {'line': 2860, 'character': 53}}, 'rangeLength': 0, 'text': 'c'}]}
:: [16:04:08.013] --> LSP-copilot getCompletions (6): <params with 147614 characters>
:: [16:04:08.027] --> LSP-copilot getCompletionsCycling (7): <params with 147614 characters>
:: [16:04:08.133] <-  LSP-copilot statusNotification: {'status': 'InProgress', 'message': ''}
:: [16:04:08.156] <-  LSP-copilot statusNotification: {'status': 'InProgress', 'message': ''}
:: [16:04:12.447] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[fetchCompletions] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 4288 ms'}
:: [16:04:12.920] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[streamChoices] solution 0 returned. finish reason: [Iteration Done]'}
:: [16:04:12.920] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[streamChoices] request done: headerRequestId: [] model deployment ID: []'}
:: [16:04:12.920] <-  LSP-copilot statusNotification: {'status': 'Normal', 'message': ''}
:: [16:04:12.920] <<< LSP-copilot (7) (duration: 4892ms): {'completions': [{'uuid': '4224f736-39f9-402e-b80e-027700892012', 'text': '\t\t\t\t\'title\'  => \'<span class="ab-icon dashicons dashicons-groups"></span>...', {'line': 2860, 'character': 54}, 'docVersion': 42, 'point': 105676, 'region': (105622, 105676)}]}
```

### Verdict

So far, it has shown itself to be neither better nor faster than GitHub Copilot: in the
log above you can see that a single completion took almost five seconds. But ollama-copilot
works offline, which is better than no Copilot at all. And it works with only a few moving
parts.

[AI](https://alex.kirk.at/category/ai/), [Code](https://alex.kirk.at/category/code/),
[Workflow](https://alex.kirk.at/category/workflow/)
