Prototype: Create a Website from a Screenshot and Refine It, All in the Browser

I’ve been working on this experiment, combining OpenAI’s gpt-4-vision-preview with WordPress Playground to create a website based on a screenshot.

This follows on the heels of Matt Mullenweg’s announcement at the 2023 State of the Word that in 2024, the WordPress project wants to work on Data Liberation.

While the typical approach to migrating data is to build importers for specific services, a truly universal migration could happen through screenshots. With AI vision, this might now be within reach.

So I built this prototype that combines an OpenAI-powered chat interface with WordPress Playground. First a screenshot; a screen recording follows further down.

So this is only a start. The website somewhat resembles the screenshot but it’s far from being pixel perfect.
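
At its core, the screenshot analysis is a request to the vision model. As an illustration, here is a minimal sketch using the OpenAI Python SDK; the prototype itself runs in the browser and talks to the API directly, and the prompt below is just an assumption of mine:

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send the screenshot inline as a base64 data URL.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this website so it can be recreated in WordPress: layout, colors, menus, page titles and visible text.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)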

My idea is that you’ll work together with the assistant to refine the site: it can help you make updates, and you can ask it questions. An import is rarely perfect from the start, but here you can see and test the result right away in the browser and refine it.

I imagine that, once you’re done, you can transfer the site to a web host, which can then host your website for everyone.

You can try this yourself here; you “just” need an OpenAI API key, which will be stored in your browser’s local storage: https://akirk.github.io/website-liberator/

Source (very unpolished) at https://github.com/akirk/website-liberator/

Some notes on this first implementation:

  • Every message starts a new conversation. Modifying a website can be token-intensive, so for now the assistant cannot refer to previous messages.
  • It uses function calling to allow OpenAI to gather more information to fulfill the request (a rough sketch follows after this list).
  • Choosing the right functions to provide can be tricky.
  • It uses different models depending on the task: gpt-4-vision-preview for the screenshot, gpt-3.5-turbo for the rest. I need to experiment more with gpt-4 for the latter tasks.
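
To make the function-calling part more concrete, here is a rough sketch of what such a request could look like with the OpenAI Python SDK. The function names (create_page, update_option) are made up for this illustration; the actual prototype defines its own set and executes the calls against WordPress Playground in the browser:

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical functions the model is allowed to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "create_page",
            "description": "Create a WordPress page with a title and HTML content.",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "content": {"type": "string"},
                },
                "required": ["title", "content"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "update_option",
            "description": "Update a WordPress site option such as blogname.",
            "parameters": {
                "type": "object",
                "properties": {
                    "option": {"type": "string"},
                    "value": {"type": "string"},
                },
                "required": ["option", "value"],
            },
        },
    },
]

messages = [
    {"role": "user", "content": "Rename the site to 'My Travel Blog' and add an About page."}
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    tools=tools,
)

# The model responds with tool calls instead of prose; each one would be
# executed against the Playground site, and the result fed back if needed.
for tool_call in response.choices[0].message.tool_calls or []:
    arguments = json.loads(tool_call.function.arguments)
    print(tool_call.function.name, arguments)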

Finally, a screen recording of an early iteration; I have since moved the chat to the right side.

As I’ve been working on the prototype, it has proven interesting to have the bot there just for customizing sites: it can create and modify pages, update settings of the website, and maybe even install plugins.

So, starting with a basis from screenshots and imported data, it might just be able to assist you in arriving at a comparable WordPress website, all with the effortless setup of WordPress Playground. I wonder where we can take this!

Bonus

Some screenshots from a recent version:

New Unit Tests Need to Fail (Running the Old Code)

When possible, I very much recommend adding a unit test to a pull request when you fix something, as a way to prevent it from breaking again in the future. This is nothing new.

One important step when adding that unit test, though, is to make sure it actually tests the bug you are fixing. Specifically, this means verifying the following:

Keep the new unit test, undo the other code changes. The unit test now needs to fail.

If your newly introduced unit test still passes, then you didn’t expose the bug in the test. A simple check but often neglected.
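
As a contrived example, assume the fix is to lowercase slugs in a hypothetical slugify() function, and the pull request adds a test alongside it:

import unittest

# The fix: the old code was `return title.strip().replace(" ", "-")`,
# so slugify("Hello World") returned "Hello-World" instead of "hello-world".
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# The new unit test added alongside the fix.
class TestSlugify(unittest.TestCase):
    def test_slugify_is_lowercase(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()

Revert slugify() to the old implementation while keeping the test, and the test fails, confirming that it actually covers the bug. If it still passed, it would not be exercising the fix at all.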

A tooling solution

Here is how you can automatically test this on GitHub. I have created a sample repo with three pull requests at https://github.com/akirk/unit-test-failure/pulls:

Three pull requests, of which one fails the unit tests

You can see the first one failed:

A screenshot of a GitHub Action that has a unit test passing when it shouldn't

The problem is that the old code already passes the new tests. This means the tests don’t test the change.

The second one passes but only because no new tests were added at all.

A screenshot of a GitHub Action that skips the test-without-changes run because no new unit test was added

And the third one does it correctly. The old code fails the new tests as desired:

A screenshot of a GitHub Action that correctly fails the unit test when running the old code

Here is the code for the GitHub Action to test this:

name: Pull Request Unit Test Validation

on:
  pull_request:
    types:
      - opened
      - synchronize

jobs:
  test-changes:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9

      - name: Run unit tests with changes
        run: python -m unittest discover -s tests

  test-without-changes:
    needs: test-changes
    runs-on: ubuntu-latest
    if: ${{ github.event_name == 'pull_request' }}

    steps:
      - name: PR commits + 1
        run: echo "PR_FETCH_DEPTH=$(( ${{ github.event.pull_request.commits }} + 1 ))" >> "${GITHUB_ENV}"

      - name: Checkout PR branch and all PR commits
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          fetch-depth: ${{ env.PR_FETCH_DEPTH }}

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9

      - name: 'Fetch the other branch with enough history for a common merge-base commit'
        run: git fetch origin ${{ github.event.pull_request.base.ref }}

      - name: Check if tests were changed
        run: |
          git restore --source=$(git merge-base origin/${{ github.event.pull_request.base.ref }} HEAD) --worktree tests/
          if git diff --quiet; then
            echo "TESTS_CHANGED=0" >> "${GITHUB_ENV}"
          else
            echo "TESTS_CHANGED=1" >> "${GITHUB_ENV}"
          fi
          git restore .

      - name: Revert code changes (excluding tests)
        run: git restore --source=$(git merge-base origin/${{ github.event.pull_request.base.ref }} HEAD) --worktree :^tests/
        if: ${{ env.TESTS_CHANGED == '1' }}

      - name: Run unit tests without changes
        run: |
          if [[ ${{ env.TESTS_CHANGED }} -eq 0 ]]; then
            echo "No unit test changes detected, skipping"
            exit 0
          fi

          if python -m unittest discover -s tests; then
            echo "Unit test should not pass"
            exit 1
          else
            echo "Unit test failed as expected"
          fi

Thanks @jesusamieiro-com for pointing out that you need to be careful with this for changes that only add new test coverage, where the new tests are expected to pass against the old code. It is probably best to use a GitHub label to activate or deactivate this action.