Setting Up a Local Ollama Copilot via LSP

I am quite interested in running AI offline. Thus I really like Ollama, and I have added automatic failover from ChatGPT to a local AI to my little terminal LLM tool cll (get it on GitHub at akirk/cll).

As a developer, an important gap in my local setup was GitHub Copilot. Its autocomplete-on-steroids functionality is really powerful in my day-to-day work and speeds up my development a lot.

Now, how can you get this offline? Search engines mostly point to solutions that involve Visual Studio Code extensions, for example Continue, and lots of other dependencies.

LSPs are independent of IDEs

But why should this involve IDE extensions? With the concept of LSPs (read LSP: the good, the bad, and the ugly to learn how LSPs work) and the existence of LSP-copilot, this should be independent of the IDE. And I personally use Sublime Text.

And indeed, it does work on just that basis: using the Go proxy ollama-copilot by Bernardo de Oliveira Bruning.

But for me it didn’t work out of the box. Thus, I’d like to share the steps that got this working for me. I use macOS.

Steps to get it running

First, follow the install instructions for Ollama and ollama-copilot. This puts the Go binary at ~/go/bin/ollama-copilot.
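For reference, this is roughly what that looked like for me. Treat both projects' READMEs as canonical; the exact install path and the model name are my assumptions here, so pull whichever code model you want ollama-copilot to query:

brew install ollama                # or use the installer from ollama.com
ollama pull codellama:code         # a code completion model for Ollama to serve
go install github.com/bernardo-bruning/ollama-copilot@latest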

Then, change the settings for LSP-copilot and add "proxy": "127.0.0.1:11435" (this is the proxy's default local port).

Now you also need to address the certificate situation. I use mkcert, which you can install with Homebrew using

brew install mkcert
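Follow the instructions to install its local root CA into the system trust store; that boils down to one command:

mkcert -install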

We need a certificate that covers the two hosts the Copilot agent talks to, so run

mkcert api.github.com copilot-proxy.githubusercontent.com

which gives you two files with which you can now start the proxy:

~/go/bin/ollama-copilot -cert api.github.com+1.pem -key api.github.com+1-key.pem
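To make sure the proxy is actually listening before wiring up the editor, a quick sanity check on the default port from above is:

nc -z 127.0.0.1 11435 && echo "proxy is listening"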

Finally, you need to add one more thing to the LSP-copilot config JSON. First, find the location of the root cert with echo $(mkcert -CAROOT)/rootCA.pem, then add an env section there (see this FAQ); for me it's:

"env": {
	"NODE_EXTRA_CA_CERTS": "~/Library/Application Support/mkcert/rootCA.pem"
},
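Putting both pieces together, my complete LSP-copilot settings look roughly like this (a sketch; I am assuming the usual Sublime LSP plugin layout, where the plugin's own options live under "settings" while "env" sits at the top level):

{
	"settings": {
		"proxy": "127.0.0.1:11435"
	},
	"env": {
		"NODE_EXTRA_CA_CERTS": "~/Library/Application Support/mkcert/rootCA.pem"
	}
}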

This made it work for me. You can see the proxy at work through its output in the terminal.

2024/11/15 16:04:08 request: POST /v1/engines/copilot-codex/completions
2024/11/15 16:04:12 response: POST /v1/engines/copilot-codex/completions 200 4.744932083s

And this is from the LSP log panel:

:: [16:04:07.967]  -> LSP-copilot textDocument/didChange: {'textDocument': {'uri': 'file:///...', 'version': 42}, 'contentChanges': [{'range': {'start': {'line': 2860, 'character': 53}, 'end': {'line': 2860, 'character': 53}}, 'rangeLength': 0, 'text': 'c'}]}
:: [16:04:08.013] --> LSP-copilot getCompletions (6): <params with 147614 characters>
:: [16:04:08.027] --> LSP-copilot getCompletionsCycling (7): <params with 147614 characters>
:: [16:04:08.133] <-  LSP-copilot statusNotification: {'status': 'InProgress', 'message': ''}
:: [16:04:08.156] <-  LSP-copilot statusNotification: {'status': 'InProgress', 'message': ''}
:: [16:04:12.447] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[fetchCompletions] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 4288 ms'}
:: [16:04:12.920] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[streamChoices] solution 0 returned. finish reason: [Iteration Done]'}
:: [16:04:12.920] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[streamChoices] request done: headerRequestId: [] model deployment ID: []'}
:: [16:04:12.920] <-  LSP-copilot statusNotification: {'status': 'Normal', 'message': ''}
:: [16:04:12.920] <<< LSP-copilot (7) (duration: 4892ms): {'completions': [{'uuid': '4224f736-39f9-402e-b80e-027700892012', 'text': '\t\t\t\t\'title\'  => \'<span class="ab-icon dashicons dashicons-groups"></span>...', {'line': 2860, 'character': 54}, 'docVersion': 42, 'point': 105676, 'region': (105622, 105676)}]}

Verdict

So far, it has proven neither better nor faster than GitHub Copilot: in the log above you can see that a completion took almost 5 seconds. But ollama-copilot works offline, which is better than no Copilot at all. And it works with only a few moving parts.
