WordPress Development Without a Computer

You’re on the train, scrolling through your WordPress site on your phone, and you see an issue you’d like to fix, for example an improvement to the mobile view. Normally you’d make a (mental) note and deal with it when you’re back at your computer.

But what if you could just fix it right there?

With AI coding assistants that run in the browser—like Claude Code, OpenAI Codex, GitHub Copilot Workspace, or similar tools—combined with WordPress Playground for testing, you can now do WordPress plugin development without a computer.

What It Looks Like

AI: I’ve implemented the fix for you, committed and pushed it to GitHub. Use this link to test it in WordPress Playground.

The link above just leads to a generic Playground, but it gives you an idea of the workflow: the AI helps you fix or implement what you asked for and makes the code available in a branch. You then run and view it via Playground. Here’s how to set this up:

What You Need

  • Your plugin or theme in a GitHub repository,
  • A web AI coding assistant,
  • A way to tell the AI how to generate Playground test links that you can click.

The third part is where the Playground Step Library comes in.

Step 1: Create Your Blueprint

WordPress Playground uses blueprints: JSON configurations that describe what to install and how to set things up. You can install plugins, themes, configure settings, import content, and more. Writing these by hand is a little cumbersome, so I built the Step Library as a visual tool to assemble blueprints step by step.
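
To give you an idea of the format, a minimal hand-written blueprint that logs you in and installs a plugin from the wordpress.org directory might look roughly like this (the slug is just an example; check the Playground documentation for the exact step options):

{
	"landingPage": "/wp-admin/plugins.php",
	"steps": [
		{ "step": "login", "username": "admin", "password": "password" },
		{
			"step": "installPlugin",
			"pluginData": {
				"resource": "wordpress.org/plugins",
				"slug": "hello-dolly"
			}
		}
	]
}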

The Step Library also provides more steps than Playground offers natively: that’s where the name comes from. It compiles these custom steps into the native steps that Playground understands. The native steps are powerful but require you to know how to combine them in clever ways; the Step Library’s custom steps make it easier. Examples include addProduct for WooCommerce, addTemplatePart for block themes, a debug step to enable common debug settings and plugins, or disableWelcomeGuides.

Use this special link to the Step Library to start with an “Install Plugin” step. Paste your HTTPS GitHub repository URL. If you want to test a specific branch, add /tree/branch-name to the URL—but for now, just use your main branch. We’ll make the branch dynamic later.
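
For example, a repository URL pointing at a specific branch would look like this (repository and branch names are made up):

https://github.com/your-name/your-plugin/tree/fix-mobile-view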

Add any other steps your testing environment needs: maybe WooCommerce if your plugin integrates with it, or some test content, or specific WordPress settings.

Step 2: Generate AI Instructions

Once your blueprint is ready, open the “Copy/Share” dropdown and select “Generate AI Instructions”. This creates a markdown snippet you can add to your project’s CLAUDE.md, .github/copilot-instructions.md, or similar AI instruction file:
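
The exact snippet is generated for you; as a rough, hypothetical sketch, it amounts to something like this:

After pushing a branch, end your response with a Playground test link in this form, substituting the branch you pushed for BRANCH_NAME:
<the compiled Playground URL of your blueprint, with BRANCH_NAME in place of the branch name>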

The generated instructions tell the AI to include a Playground testing link at the end of its responses. The branch name in your URL gets replaced with a BRANCH_NAME placeholder, so the AI knows to substitute the actual branch it’s working on.

Step 3: Add to Your Repository

Copy the generated markdown and add it to your AI instruction file. Commit it to your repository. Now any AI assistant that reads these instructions will include Playground links when it makes changes to your code.

Bonus: you can also instruct your coding assistant to add such a file to your repo!

The Workflow

Here’s what this looks like in practice:

  1. Open your AI coding assistant on your phone (or desktop),
  2. Connect to your GitHub repository,
  3. Describe what you want to change or fix,
  4. The AI makes the changes and pushes a branch,
  5. Tap the Playground link in the response,
  6. Test the changes in Playground—if it’s a private repo, you’ll authenticate with GitHub here,
  7. If it works, create a PR and merge it; you got to test the change before even opening the PR.

It’s a complete development loop. The AI handles the code, GitHub handles version control, and Playground handles testing. Your phone is just the interface tying it all together.

Private Repositories

Until recently, this workflow only worked with public GitHub repositories. I submitted a PR to WordPress Playground that adds GitHub OAuth authentication. Now when you load a plugin from a private repository, Playground prompts you to authenticate, and then it works just like public repos.

Beyond Mobile: Preconfigured Test Environments

The mobile workflow is a fun demo, but the same setup is useful on desktop too. The real power is in preconfigured Playground environments defined through blueprints (which you can easily create with the Step Library).

Say your plugin integrates with WooCommerce. You can create a blueprint that installs WooCommerce, sets up a test product, and installs your plugin from the current branch. Now every Playground link the AI generates loads an environment where you can actually test the integration—not just whether your plugin activates without errors.
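
Here is a rough sketch of what such a blueprint could compile down to in native steps; the repository URL is a placeholder, and in practice the Step Library’s addProduct step and GitHub handling generate this part for you:

{
	"steps": [
		{
			"step": "installPlugin",
			"pluginData": {
				"resource": "wordpress.org/plugins",
				"slug": "woocommerce"
			}
		},
		{
			"step": "installPlugin",
			"pluginData": {
				"resource": "url",
				"url": "https://github.com/your-name/your-plugin/archive/refs/heads/BRANCH_NAME.zip"
			}
		},
		{
			"step": "runPHP",
			"code": "<?php require_once '/wordpress/wp-load.php'; $product = new WC_Product_Simple(); $product->set_name( 'Test Product' ); $product->set_regular_price( '9.99' ); $product->save();"
		}
	]
}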

Or you want to test across different configurations: multisite vs single site, classic editor vs block editor, different PHP versions. Create a blueprint for each scenario, generate AI instructions for each, and you have a test matrix that’s one click away.
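
Each variant often differs in only a few lines; for example, a multisite blueprint pinned to an older PHP version might add something like this (enableMultisite is a native Playground step, preferredVersions a top-level blueprint setting):

{
	"preferredVersions": { "php": "7.4", "wp": "latest" },
	"steps": [
		{ "step": "enableMultisite" }
	]
}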

GitHub Actions

You can take this further with a GitHub Action that posts a “Try it in Playground” link as a comment on every PR. That way anyone reviewing the PR can test the changes without setting up a local environment.

The Step Library is available as an npm package, so you can integrate it into your own tooling and CI pipelines.

Let AI Create Blueprints for You

Something often overlooked: the Step Library is also useful for getting AI to help you create blueprints in the first place. The native Playground steps are low-level—things like writeFile and runPHP—so AI assistants often don’t grasp what’s actually possible with blueprints. The Step Library’s high-level steps are more intuitive, and with a JSON schema that describes them, AI can easily understand what’s available and generate useful blueprints.

Other New Step Library Features

Some other notable things I added recently:

.wp-env.json import: Drop your .wp-env.json into the Step Library and it converts your local dev environment config into a Playground blueprint.
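
For reference, a minimal .wp-env.json might look like this (the plugin entries are illustrative):

{
	"core": null,
	"phpVersion": "8.2",
	"plugins": [ ".", "https://downloads.wordpress.org/plugin/woocommerce.zip" ]
}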

GitLab, Bitbucket, and Codeberg support: Not everyone uses GitHub. The Step Library now recognizes repository URLs from these platforms.

Paste detection: Paste a plugin URL, some PHP code, or even an existing Playground URL, and the Step Library figures out what it is and creates the right steps.

Try It

The Playground Step Library is where you can create your blueprint and generate AI instructions.

I’ve found myself using this on the train, in waiting rooms, wherever I have a few minutes and an idea I want to try. It’s not how I imagined WordPress development would work, but it does.

Setting Up a Local Ollama Copilot via LSP

I am quite interested in running AI offline. Thus I really like Ollama, and I have added automatic failover from ChatGPT to a local AI to my little terminal LLM tool cll (get it on GitHub at akirk/cll).

As a developer, an important local gap for me was GitHub Copilot. Its autocomplete-on-steroids functionality is really powerful in my day-to-day work and speeds up my development a lot.

Now, how can you get this offline? Search engines mostly point to solutions that involve Visual Studio Code extensions, for example Continue, plus lots of other dependencies.

LSPs are independent of IDEs

But why should this involve IDE extensions? With the concept of LSPs (read LSP: the good, the bad, and the ugly to learn how LSPs work) and the existence of LSP-copilot, this should be independent of the IDE. I personally use Sublime Text.

And indeed, it does work on just that basis: using the Go proxy ollama-copilot by Bernardo de Oliveira Bruning.

But for me it didn’t work out of the box. Thus, I’d like to share the steps that got this working for me. I use macOS.

Steps to get it running

First, follow the install instructions for Ollama and ollama-copilot. This puts the Go binary at ~/go/bin/ollama-copilot.

Then, change the settings for LSP-copilot and add "proxy": "127.0.0.1:11435" (this is ollama-copilot’s default local port).

Now, you also need to address the certificate situation. I use mkcert, which you can install with Homebrew using

brew install mkcert

Follow the instructions to install its root cert. We need a certificate that covers three hosts, so run

cd ~/go/bin/; mkcert api.github.com copilot-proxy.githubusercontent.com proxy.individual.githubcopilot.com

which gives you two files, with which you can now start the proxy:

~/go/bin/ollama-copilot -cert ~/go/bin/api.github.com+2.pem -key ~/go/bin/api.github.com+2-key.pem

Finally, you need to add one more thing to the LSP-copilot config JSON. First, find out the location of the root cert with echo $(mkcert -CAROOT)/rootCA.pem, then add an env section there (see this FAQ); for me it’s:

"env": {
	"NODE_EXTRA_CA_CERTS": "~/Library/Application Support/mkcert/rootCA.pem"
},
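
Putting it together, the relevant part of the LSP-copilot settings ends up looking roughly like this (assuming the proxy option sits under "settings" and env at the top level, as is usual for LSP client configurations):

{
	"settings": {
		"proxy": "127.0.0.1:11435"
	},
	"env": {
		"NODE_EXTRA_CA_CERTS": "~/Library/Application Support/mkcert/rootCA.pem"
	}
}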

This made it work for me. Edit: It seems a bit erratic. For me it works most reliably if you start ollama-copilot first, and only then Sublime Text. You can see the proxy at work through its output in the terminal.

2024/11/15 16:04:08 request: POST /v1/engines/copilot-codex/completions
2024/11/15 16:04:12 response: POST /v1/engines/copilot-codex/completions 200 4.744932083s

And this is from the LSP log panel:

:: [16:04:07.967]  -> LSP-copilot textDocument/didChange: {'textDocument': {'uri': 'file:///...', 'version': 42}, 'contentChanges': [{'range': {'start': {'line': 2860, 'character': 53}, 'end': {'line': 2860, 'character': 53}}, 'rangeLength': 0, 'text': 'c'}]}
:: [16:04:08.013] --> LSP-copilot getCompletions (6): <params with 147614 characters>
:: [16:04:08.027] --> LSP-copilot getCompletionsCycling (7): <params with 147614 characters>
:: [16:04:08.133] <-  LSP-copilot statusNotification: {'status': 'InProgress', 'message': ''}
:: [16:04:08.156] <-  LSP-copilot statusNotification: {'status': 'InProgress', 'message': ''}
:: [16:04:12.447] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[fetchCompletions] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 4288 ms'}
:: [16:04:12.920] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[streamChoices] solution 0 returned. finish reason: [Iteration Done]'}
:: [16:04:12.920] <-  LSP-copilot window/logMessage: {'type': 3, 'message': '[streamChoices] request done: headerRequestId: [] model deployment ID: []'}
:: [16:04:12.920] <-  LSP-copilot statusNotification: {'status': 'Normal', 'message': ''}
:: [16:04:12.920] <<< LSP-copilot (7) (duration: 4892ms): {'completions': [{'uuid': '4224f736-39f9-402e-b80e-027700892012', 'text': '\t\t\t\t\'title\'  => \'<span class="ab-icon dashicons dashicons-groups"></span>...', {'line': 2860, 'character': 54}, 'docVersion': 42, 'point': 105676, 'region': (105622, 105676)}]}

Verdict

So far, it has turned out to be neither better nor faster than GitHub Copilot: in the log above you can see that a completion took almost 5 seconds. But ollama-copilot works offline, which is better than no Copilot at all. And it works with only a few moving parts.