npm install playground-step-library

I have updated my Playground Step Library (which I had written about before), the tool that allows you to use more advanced steps in WordPress Playground, so that it can now also be used programmatically: it is now available as the npm package playground-step-library.

Behind the scenes this actually dominoed into migrating it to TypeScript and restructuring the code so that it now powers both the Web UI and the npm package.

Having those custom steps available makes even more sense now that the Playground CLI is production ready and you can use it for things like testing your WordPress plugin with Playwright; see the presentation Building Automated Tests with WordPress Playground from WordCamp Europe 2025 by my colleague Berislav “Bero” Grigicak.

In this example you can see a blueprint JSON that contains the steps setSiteName and addPage, which don’t exist in Playground’s built-in step library. At the time of writing there are 36 custom steps, with the goal of making it easier to do things that are already possible with a blueprint but require some complexity. In the example below you can see how creating a page can be done with runPHP and wp_insert_post, but it’s visually easier with an addPage step.

import PlaygroundStepLibrary from 'playground-step-library';
const compiler = new PlaygroundStepLibrary();

const blueprint = {
  steps: [
    {
      step: 'setSiteName',
      sitename: 'My Site',
      tagline: 'A WordPress Playground demo'
    },
    {
      step: 'addPage',
      title: 'Welcome',
      content: '<p>Welcome to my site!</p>'
    }
  ]
};

const compiledBlueprint = compiler.compile(blueprint);
console.log(compiledBlueprint);

The compiler turns this into a valid blueprint:

{
  "steps": [
    {
      "step": "setSiteOptions",
      "options": {
        "blogname": "My Site",
        "blogdescription": "A WordPress Playground demo"
      }
    },
    {
      "step": "runPHP",
      "code": "\n<?php require_once '/wordpress/wp-load.php';\n$page_args = array(\n\t'post_type' => 'page',\n\t'post_status' => 'publish',\n\t'post_title' => 'Welcome',\n\t'post_content' => '<p>Welcome to my site!</p>',\n);\n$page_id = wp_insert_post( $page_args );"
    }
  ]
}

You can then pass this blueprint to the Playground CLI to run it (see other demos by my colleague Fellyph):

import { runCLI } from '@wp-playground/cli';
await runCLI({
  command: 'server',
  login: true,
  blueprint: compiledBlueprint
});
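
If you’d rather drive it from the command line, you can also write the compiled blueprint to a file and point the CLI at it via its --blueprint flag; a minimal sketch (the file name is arbitrary):

# assumes compiledBlueprint was saved to blueprint.json beforehand,
# e.g. with fs.writeFileSync('blueprint.json', JSON.stringify(compiledBlueprint))
npx @wp-playground/cli server --login --blueprint=./blueprint.json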

You can also conveniently try it out in WordPress Playground with this link (and also view in the Step Library UI).

Finally, in the repo there are a number of examples that you can browse, and I created a little screen recording of a few of them.

Happy coding!

Setting Up a Local Ollama Copilot via LSP

I am quite interested in running AI offline. Thus I really like Ollama, and I have added automatic failover from ChatGPT to a local AI to my little terminal LLM tool cll (get it on GitHub at akirk/cll).

As a developer, an important gap in my offline setup was GitHub Copilot. Its autocomplete-on-steroids functionality is really powerful in my day-to-day work and speeds up my development a lot.

Now, how can you get this offline? Mostly, search engines point to solutions that involve Visual Studio Code extensions, for example Continue, and lots of other dependencies.

LSPs are independent of IDEs

But why should this involve IDE extensions? With the concept of LSPs (read LSP: the good, the bad, and the ugly to learn how LSPs work), and the existence of LSP-Copilot, this should be independent of IDEs. And I personally use Sublime Text.

And indeed, it does work on just that basis: using the Go proxy ollama-copilot by Bernardo de Oliveira Bruning.

But for me it didn’t work out of the box. Thus, I’d like to share the steps that got this working for me. I use macOS.

Steps to get it running

First, follow the install instructions for Ollama and ollama-copilot. This puts the Go binary in ~/go/bin/ollama-copilot.
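
For reference, for me this boiled down to something like the following; treat it as a sketch, assuming Homebrew and a Go toolchain are already set up (the model pull follows what the ollama-copilot README suggests, your model choice may differ):

brew install ollama
ollama pull codellama:code
go install github.com/bernardo-bruning/ollama-copilot@latest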

Then, change the settings for LSP-copilot and add "proxy": "127.0.0.1:11435" (this is the proxy’s default local port).
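
In Sublime Text that means editing the LSP-copilot package settings. A minimal sketch of what the relevant part could look like (the exact nesting may differ between LSP-copilot versions):

{
	"settings": {
		"proxy": "127.0.0.1:11435"
	}
}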

Now, you also need to address the certificate situation. I use mkcert, which you can install with Homebrew using

brew install mkcert

Follow the instructions to install its root cert. We need a certificate that covers two (Edit: three) hosts, so run

cd ~/go/bin/; mkcert api.github.com copilot-proxy.githubusercontent.com proxy.individual.githubcopilot.com

which gives you two files with which you can now start the proxy:

~/go/bin/ollama-copilot -cert ~/go/bin/api.github.com+2.pem -key ~/go/bin/api.github.com+2-key.pem

Finally, you need to add one more thing to the LSP-copilot config JSON. First, find out the location of the root cert with echo $(mkcert -CAROOT)/rootCA.pem, then add an env section there (see this FAQ); for me it’s:

"env": {
	"NODE_EXTRA_CA_CERTS": "~/Library/Application Support/mkcert/rootCA.pem"
},

This made it work for me. Edit: It seems a bit erratic. For me it works most reliably if you start ollama-copilot first, and only then Sublime Text. You can see the proxy at work through its output in the terminal.

2024/11/15 16:04:08 request: POST /v1/engines/copilot-codex/completions
2024/11/15 16:04:12 response: POST /v1/engines/copilot-codex/completions 200 4.744932083s

And this is from the LSP log panel:

:: [16:04:07.967]  -> LSP-copilot textDocument/didChange: {'textDocument': {'uri': 'file:///...', 'version': 42}, 'contentChanges': [{'range': {'start': {'line': 2860, 'character': 53}, 'end': {'line': 2860, 'character': 53}}, 'rangeLength': 0, 'text': 'c'}]}
:: [16:04:08.013] --> LSP-copilot getCompletions (6): <params with 147614 characters>
:: [16:04:08.027] --> LSP-copilot getCompletionsCycling (7): <params with 147614 characters>
:: [16:04:08.133] <-  LSP-copilot statusNotification: {'status': 'InProgress', 'message': ''}
:: [16:04:08.156] <-  LSP-copilot statusNotification: {'status': 'InProgress', 'message': ''}
:: [16:04:12.447] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[fetchCompletions] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 4288 ms'}
:: [16:04:12.920] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[streamChoices] solution 0 returned. finish reason: [Iteration Done]'}
:: [16:04:12.920] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[streamChoices] request done: headerRequestId: [] model deployment ID: []'}
:: [16:04:12.920] <-  LSP-copilot statusNotification: {'status': 'Normal', 'message': ''}
:: [16:04:12.920] <<< LSP-copilot (7) (duration: 4892ms): {'completions': [{'uuid': '4224f736-39f9-402e-b80e-027700892012', 'text': '\t\t\t\t\'title\'  => \'<span class="ab-icon dashicons dashicons-groups"></span>...', {'line': 2860, 'character': 54}, 'docVersion': 42, 'point': 105676, 'region': (105622, 105676)}]}

Verdict

So far it has shown itself to be neither better nor faster than GitHub Copilot: in the log above you can see that a completion took almost 5 seconds. But ollama-copilot works offline, which is better than no Copilot at all. And it works with only a few moving parts.