FoundryVTT Module Test Automation
I really loathe manual testing. In a professional setting, nearly all of my projects are developed using TDD (test-driven development). However, somewhat to my embarrassment, most of my personal projects haven't received that level of care. In this article, I'm going to walk through how I bucked that trend by introducing testing to a friend's project.
Back in 2017, I started GM'ing the FFG Star Wars Tabletop RPG for my friends. Over the years, folks moved and eventually the game migrated online. In early 2021, a friend suggested I explore FoundryVTT. It quickly became our favorite tabletop RPG platform due to its extensibility.
We used the Star Wars FFG System, and shortly thereafter, my friend began development of an add-on module (FFG Star Wars Enhancements) with features not really suited for the core system.
Since then, FoundryVTT has undergone three major releases. While I've contributed a number of features to the module, the majority of the maintenance and testing during those releases has fallen on my friend's shoulders. After discussing options, I decided to use my background in CI/CD workflow development to set up automated testing.
Challenges
In many projects, a good mocking framework will give you sufficient coverage for testing against integration points with other libraries. In our case, FoundryVTT effectively acts as a framework for the module, which tightly couples the project's code to the implementation of the FoundryVTT API. The level of mocking required to adequately test a feature would prove extremely unreliable across a major version change. Many frameworks eventually ship a testing scaffold that blends production code and fakes; that may be in FoundryVTT's future, but at the time of writing it doesn't exist.
That leaves the next best option being integration and end-to-end tests. The difference between them is subtle. For the purposes of this article, integration tests attempt to minimize their reliance on UI interactions and end-to-end tests attempt to primarily test the UI.
Additionally, FoundryVTT (reasonably) restricts distribution of their software. That makes automating tests challenging, particularly in CI.
Quench
Quench is a FoundryVTT module that lets you write Mocha tests for your own module and run them inside a live FoundryVTT instance. We use Quench for our integration tests, where we try to avoid UI interactions.
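Quench discovers tests through batches registered when its quenchReady hook fires. Here's a minimal sketch of what a registration looks like; the batch key, display name, and test body are illustrative rather than our actual code, and I'm assuming `expect` is available from the registration context alongside `describe` and `it`:

```javascript
// Minimal sketch of a Quench batch registration. The batch key, display name,
// and the test itself are illustrative, not the module's actual code.
Hooks.on("quenchReady", (quench) => {
    quench.registerBatch(
        "ffg-star-wars-enhancements.example", // hypothetical batch key
        (context) => {
            const { describe, it, expect } = context; // assumes expect is provided by the context
            describe("datapad journals", () => {
                it("creates a journal", async () => {
                    // ... assertions like the excerpt below ...
                });
            });
        },
        { displayName: "FFG Enhancements: Example" }
    );
});
```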
Excerpt from a Quench test:
```javascript
it(`creates a new ${option} journal`, async () => {
    // Hook to capture when our dialog has actually rendered
    const rendered = $.Deferred();
    Hooks.once("renderApplication", (...args) => rendered.resolve(args));

    const dialog = await create_datapad_journal();
    const [application, $html] = await rendered.promise();

    // Sanity check the renderApplication hook returned our dialog
    expect(dialog).to.equal(application);

    // ... 8< snip ...

    const datapad = game.journal.getName(option);
    expect(datapad).to.not.be.undefined;

    const page = datapad.pages?.values()?.next().value;
    expect(page).to.not.be.undefined;
    expect(page.text?.content).to.have.string(needle);
});
```
To see how we use Quench in action, check out our Quench tests.
Cypress
For full end-to-end tests, we've opted for Cypress. Cypress is a good fit for our project because our tests really are pure frontend tests. Cypress's opinionated approach to testing encourages us to write tests in a way that avoids a lot of the patterns that make UI tests brittle.
We developed a number of Cypress Commands that handle installing systems, modules, and creating a test world.
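As a rough sketch of what one of those commands might look like (the command name and selectors below are assumptions for illustration, not the module's real helpers):

```javascript
// cypress/support/commands.js -- illustrative sketch only; the command name
// and selectors are assumptions rather than the module's real helpers.
Cypress.Commands.add("unlockFoundry", (adminKey = "test-admin-key") => {
    // The key matches the FOUNDRY_ADMIN_KEY passed to the container.
    cy.visit("/");
    cy.get('input[name="adminPassword"]').type(adminKey);
    cy.get('button[type="submit"]').click();
});
```

A spec would then call something like `cy.unlockFoundry()` before poking at the module's UI.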
Excerpt from a Cypress test:
```javascript
// Open the crawl dialog
cy.get('[data-control="ffg-star-wars-enhancements"]').click();
cy.get('[data-tool="opening-crawl"]').click();

// Create a folder for the Opening Crawl journals
cy.get(".window-content .yes").click();

// Create a crawl
cy.get("#ffg-star-wars-enhancements-opening-crawl-select .create").click();
```
Aside: If you end up exploring Cypress, I strongly recommend reading their documentation, specifically the Conditional Testing guide. Wrapping your head around the "Element existence" section will go a long way toward avoiding common pitfalls.
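The short version, as I understand it: don't branch on whether an element happens to exist; set up the test so its presence is deterministic and assert on it. A quick illustration with made-up selectors:

```javascript
// Flaky: conditional logic based on whatever happens to be in the DOM right now.
cy.get("body").then(($body) => {
    if ($body.find(".app.journal-sheet").length) {
        cy.get(".app.journal-sheet .close").click();
    }
});

// Better: make the sheet a known precondition, then assert and act on it.
cy.get(".app.journal-sheet").should("be.visible");
cy.get(".app.journal-sheet .close").click();
```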
Tests inside your tests?
An astute reader might have noticed that Quench requires a running instance of FoundryVTT to execute tests. To support running Quench tests in CI, we actually execute the Quench tests via Cypress.
Honestly, this feels a bit hacky. But writing integration tests in Quench feels more fluid, and running them through Cypress gives us a bit of the best of both worlds.
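Conceptually, a Cypress spec loads the test world, kicks off the Quench batches, and asserts on the reported results. The sketch below only captures that shape: it assumes Quench exposes a `quench` global in the world, and the `runAllBatches()` call and result inspection are assumptions for illustration, not our actual spec.

```javascript
// Shape-of-the-idea sketch only. Assumes a `quench` global, a runAllBatches()
// method, and Mocha-style stats on the result -- all flagged as assumptions.
it("passes the Quench integration suite", () => {
    cy.window({ timeout: 60000 })
        .should("have.property", "quench")        // retries until Quench loads; yields window.quench
        .then((quench) => quench.runAllBatches()) // assumed API
        .its("stats.failures")                    // assumed result shape
        .should("equal", 0);
});
```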
Docker Compose
Having the ability to quickly spin up and tear down different versions of FoundryVTT is extremely useful when testing. Locally, I'm using Docker Compose with the felddy/foundryvtt-docker container.
The container is really well designed and pretty flexible. To learn more about our Docker Compose configuration and usage, check out our Cypress README.md.
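For a sense of what that looks like, here's a minimal compose file along the lines of what I run locally. The image, port mapping, and environment variables mirror the GitHub Actions excerpt later in this article; the file itself is illustrative rather than copied from the repository.

```yaml
# docker-compose.yml -- illustrative sketch, not the repository's exact file.
services:
  foundryvtt:
    image: felddy/foundryvtt:release
    ports:
      - "30001:30000"
    environment:
      FOUNDRY_ADMIN_KEY: test-admin-key
      FOUNDRY_USERNAME: ${FOUNDRY_USERNAME}
      FOUNDRY_PASSWORD: ${FOUNDRY_PASSWORD}
      FOUNDRY_LICENSE_KEY: ${FOUNDRY_LICENSE_KEY}
    volumes:
      # Persist the install (including container_cache) between runs.
      - ./data:/data
```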
GitHub Actions
Tests that don't run as part of CI inevitably end up broken.
Aside: I actually worked backwards from this requirement to design all of the above, but explaining it in that order would have been very difficult.
Cypress makes testing in GitHub very easy. They expose a github-action for running Cypress tests that handles launching your server and waiting for it to be ready. Additionally, it automatically archives video recordings of your test runs.
To run FoundryVTT in a GitHub action, we reuse the learnings from Docker Compose.
The GitHub Ubuntu runners come preloaded with Docker. It's easy enough to craft a `docker run` command that will launch FoundryVTT with our module's code installed.
From there, a few challenges remained:
- Caching the installation to avoid abusive downloads of FoundryVTT
- Securing FoundryVTT credentials while still supporting tests on user forks
- Requiring approval only on PRs from forks
- Race conditions!
Excerpt from our GitHub Actions configuration. See the StarWarsFFG-Enhancements repository for all workflows.
```yaml
- if: ${{ steps.container_cache.outputs.cache-hit != 'true' }}
  name: Launch FoundryVTT and run Cypress Tests
  uses: cypress-io/github-action@v5
  with:
    start: >-
      sudo docker run
      --name foundryvtt
      --env FOUNDRY_ADMIN_KEY=test-admin-key
      --env FOUNDRY_USERNAME=${{ secrets.FOUNDRY_USERNAME }}
      --env FOUNDRY_PASSWORD=${{ secrets.FOUNDRY_PASSWORD }}
      --env FOUNDRY_LICENSE_KEY=${{ secrets.FOUNDRY_LICENSE_KEY }}
      --publish 30001:30000/tcp
      --volume ${{ github.workspace }}/data:/data
      felddy/foundryvtt:release
    wait-on: "http://localhost:30001"
    wait-on-timeout: 120
```
More detail about the repository configuration can be found in the Cypress README.md.
1. Caching
To ensure we don't repetitively download the FoundryVTT application, we need to set up caching within the GitHub workflow. Fortunately, the felddy/foundryvtt docker image supports this natively: all we need to do is cache the `data/container_cache` directory. Unfortunately, the container entrypoint will still authenticate to the FoundryVTT API if a username/password is provided. Normally, that feature gives you a way to make sure you're running the latest release; in our case, we don't want that. To avoid it, we have two "steps": one if the cache hits and one if the cache misses. We omit `FOUNDRY_USERNAME` and `FOUNDRY_PASSWORD` when we have a cache hit, skipping the check entirely.

To actually cache, we use the `actions/cache@v3` action provided by GitHub. The only special thing we do here is an additional step that saves on failure, ensuring we capture the installation regardless of whether the tests pass. The cache should hang around for at least seven days.
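A hedged sketch of that arrangement; the cache key and step names are illustrative, and the real workflow lives in the repository:

```yaml
# Illustrative caching steps; key and names are not copied from the real workflow.
- name: Cache the FoundryVTT install
  id: container_cache
  uses: actions/cache@v3
  with:
    path: data/container_cache
    key: foundryvtt-release

# ... the cache-hit and cache-miss Cypress steps run here ...

- name: Save the cache even if the tests failed
  if: failure()
  uses: actions/cache/save@v3
  with:
    path: data/container_cache
    key: foundryvtt-release
```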
2. Securing credentials
Getting the level of flexibility we wanted out of GitHub proved a little difficult. We wanted to know that tests would pass before merging a pull request. To support contributors forking our repository, that means checking out their code and running tests against it. However, these tests launch a docker container that requires secrets (`FOUNDRY_USERNAME`, `FOUNDRY_PASSWORD`, and `FOUNDRY_LICENSE_KEY`). To make those secrets accessible in workflows run with code from forks, we use the `pull_request_target` event.
This is potentially very risky.
To limit our exposure, PRs originating from forks run in a GitHub "environment" called `requires-approval`. The environment has a few contributors assigned to it who must manually approve every run.
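One wrinkle worth noting: under `pull_request_target`, the default checkout is the base branch, so the fork's code has to be checked out explicitly. The step below is the standard way to do that; the surrounding workflow layout is illustrative.

```yaml
# Illustrative: explicitly check out the PR head so the fork's module code is
# what actually gets tested.
- uses: actions/checkout@v3
  with:
    ref: ${{ github.event.pull_request.head.sha }}
```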
3. Approval only on forks
Requiring approval only for PRs coming from forks was tricky to configure. The YAML DSL provided by GitHub Actions is pretty flexible, but code reuse can get difficult, and there isn't a good way to choose the workflow environment based on the PR's origin. To work around that limitation, we create two jobs: one for upstream and one for forks. That splits the workflow across `cypress.yaml`, where the jobs are defined, and `cypress-impl.yaml`, where the actual steps are defined. This was a bit quirky to set up, but the sharp edges are pretty well documented in the configs.
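A sketch of the shape of that split; the job names, input, and conditions are illustrative rather than lifted from cypress.yaml:

```yaml
# cypress.yaml (illustrative): one caller job per PR origin.
jobs:
  test-upstream:
    if: ${{ !github.event.pull_request.head.repo.fork }}
    uses: ./.github/workflows/cypress-impl.yaml
    secrets: inherit

  test-fork:
    if: ${{ github.event.pull_request.head.repo.fork }}
    uses: ./.github/workflows/cypress-impl.yaml
    with:
      environment: requires-approval
    secrets: inherit
```

Inside cypress-impl.yaml, the job would then set its environment from that input, so only the fork path waits on manual approval.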
4. Race conditions!
The free GitHub Actions runners are awesome! But they definitely run in a low-performance environment, and the heavy load will unfortunately/fortunately highlight every sloppy race condition in your tests. Adding fixed-length timeouts is generally not reliable.
This makes tests trickier to write. You have to be very careful that you've waited for the right thing before proceeding.
Additionally, some of our code, the system code, and FoundryVTT itself aren't written in a way that makes it easy to know when you can proceed.
A few examples:
- The close button on "Applications" has a 500ms delay before it attaches its `onClick` handler, to prevent accidental closes. That means a click from fast automation does nothing.
- FoundryVTT has `Hook`s for a lot of asynchronous events, but reasonably not for everything. If you launch a dialog, you need to be thoughtful about how you know the dialog resolved. This could be done with a custom Hook, or by watching for a side effect (like the creation of a Journal); see the sketch after this list.
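For example, instead of sprinkling in fixed cy.wait() calls, a Cypress step can retry an assertion against the game state until the side effect shows up. The selector and journal name below are illustrative:

```javascript
// Illustrative: wait for a side effect (the created Journal) rather than a
// fixed timeout. Cypress retries the .should() callback until it passes.
cy.get('[data-tool="opening-crawl"]').click();
cy.window().should((win) => {
    expect(win.game.journal.getName("Opening Crawl")).to.exist;
});
```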
Summary
I hope this proves useful to anyone looking to set up test automation for a FoundryVTT module.