Should you use QA Wolf?

Joel Ramos
ramosly | blog
10 min read · Aug 21, 2020


Recently a client asked me if I had heard of the QA Wolf test automation framework.

I said, “No 🤔”

So, naturally, I decided to take a peek at it and take it for a spin. What follows will be a sort of play-by-play of what it was like for me (a QA engineer) to get up and running, along with its included GitHub Actions workflow for continuous integration (CI).

So let’s get started! 🚀

Setting up

I’m using an existing project, spacex-ships.

I’ll create a branch for qawolf

git checkout -b qawolf

Add qawolf per the documentation

yarn create qawolf

I’m also interested in testing out the GitHub Actions integration, so I answered “yes” to the CI question.

The docs also say to verify the installation by running the howl command.

npx qawolf howl

Lol ✨

Creating a test

The docs then say to run the create command and pass a URL and the name of a test 🤔

Interesting.

Okay… here we go!

npx qawolf create https://spacex-ships.now.sh navigation

Running this command opened my default browser (Chrome) to my site, evidently created some test code files, and printed some options to the terminal. See below.

At this point, I haven’t touched anything in the browser yet. I notice that the terminal gives us options to tell QA Wolf to save and exit, or to open a REPL (Read-Eval-Print Loop), which is an interactive Node prompt for executing JavaScript.

So now, I’m going to scroll to the bottom and click on the Page 2 button to go to the next page and see what QA Wolf does.

After that, I choose the Save and exit option.

I guess my “navigation” test has passed haha.

I wonder what the code looks like 🤔

First, the test code

Interesting. Since QA Wolf is based on Playwright, the code looks a lot like Puppeteer code.
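I won’t reproduce the generated file here, but stripped of QA Wolf’s setup/teardown boilerplate, the recorded steps boil down to plain Playwright calls, roughly like this (a sketch only; the selector is illustrative, not the exact one QA Wolf captured):

// Minimal sketch of the recorded flow using plain Playwright.
// QA Wolf's actual file wraps similar steps in Jest hooks and its own helpers;
// the selector below is illustrative, not the exact one it generated.
const { chromium } = require("playwright");

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto("https://spacex-ships.now.sh");
  await page.click("text=2"); // the recorded "Page 2" click

  await browser.close();
})();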

This is pretty cool. If we were to continue doing things this way, we would probably have to start parameterizing and centrally locating configuration like the URL, locators, etc. That way, if they change, we change them in one place.

I also notice that there’s no assertion. I suppose that’s not the end of the world; QA Wolf did a bit of the work for us. I’m also curious what happens if I create another test with the same name: do I get a new test in the same file? And what if I used a different URL? 🤔

But before we get too far with this line of reasoning, let’s first look at the other files that were created, and we can come back to these.

A qawolf.config.js file was created, so let’s take a look at what’s in there 👀

Okay, this looks mostly like QA Wolf-specific settings, but perhaps we can add arbitrary values and retrieve them in tests. Otherwise, we can just create our own pattern for configuration, so it’s not a big deal.
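The actual file isn’t shown here, but from memory the generated defaults are small, along these lines (treat this as a sketch; the exact keys may differ between QA Wolf versions):

// qawolf.config.js -- sketch of the generated defaults, not a verbatim copy.
module.exports = {
  config: "node_modules/qawolf/js-jest.config.json", // Jest config QA Wolf runs tests with
  rootDir: ".qawolf",   // where created tests are saved
  testTimeout: 60000,   // per-test timeout in ms
  useTypeScript: false, // generate .js rather than .ts tests
};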

Onward!

CI — Github Actions

So, I answered “yes” to the CI question when I installed QA Wolf. Naturally, a .github folder was created in the root of my project.

In this directory, a workflows directory was created with a qawolf.yml file.

GitHub knows to look for yaml files in the .github/workflows folder when changes are pushed. See below.

The on: key, or field, defines the GitHub events that trigger the workflow. It looks like they've added a trigger for push events on all git branches. You actually don't need to include the branches: * part, but I imagine it's included to let you know that you can limit the workflow to only certain branches.
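Paraphrasing the trigger portion of the generated qawolf.yml (not a verbatim copy):

# Trigger section of .github/workflows/qawolf.yml (paraphrased)
on:
  push:
    branches:
      - "*" # run on every branch; narrow this list to limit which branches trigger CI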

Anyway, with this workflow as it is, my test should run in a GitHub runner as soon as I push my branch.

It also looks like there’s a step for uploading artifacts. Are we going to get a report when the tests run? 🤔

Let’s try it and see!

git add .github/ .qawolf/ qawolf.config.js package.json yarn.lock
git commit -m "add qawolf test and github actions workflow"
git push -u origin qawolf

Now let’s go to GitHub and to the Actions tab.

Heyyy it’s running! 🚀

Let’s click on the workflow name with the yellow spinning circle to see the job.

Nice.

Now once the job finishes let’s expand the step that runs the tests.

Okay cool cool 🤔

Looks like artifacts were uploaded, and we have a 1 in the upper-right corner. “Artifacts” are files produced by the job that you want GitHub to keep rather than discard when the job completes.

So let’s see what we got!

Interesting. A video and a text log.

Contents of the text log

debug: ignore non-wheel scroll event {"isTrusted":true}
debug: ignore non-wheel scroll event {"isTrusted":true}
debug: ignore non-wheel scroll event {"isTrusted":true}
debug: ignore non-wheel scroll event {"isTrusted":true}
debug: qawolf: get clickable ancestor for //*[@id='root']/div/div[3]/div/div[12]/div/a[3]
debug: qawolf: found clickable ancestor: A //*[@id='root']/div/div[3]/div/div[12]/div/a[3]
debug: qawolf: get clickable ancestor for //*[@id='root']/div/div[3]/div/div[12]/div/a[3]
debug: qawolf: found clickable ancestor: A //*[@id='root']/div/div[3]/div/div[12]/div/a[3]
log: 2
debug: ignore non-wheel scroll event {"isTrusted":true}

And then the video looks… weird. It’s 5 seconds long, and it looks like it waits for the page to load and then ends. Judging by the page number at the bottom, we did not actually get to page 2.

Video recording generated by QA Wolf

Looking back at the generated test code, I suspect that the click happens, the scroll happens, but we don’t wait for the page to actually navigate before continuing or calling the test “done.”

We can run the test locally and see what happens.

npx qawolf test navigation

Maybe we should have done that before jumping to seeing CI run, but whatever. I was eager haha.

Anyway, as I suspected, I think we need to tell Playwright to wait for the page to navigate to page 2.

Oftentimes you can use await page.waitForNavigation() to tell Playwright / Puppeteer to wait for the next page to load. But since my app is a single-page app, we're not actually changing the URL or navigating to a new page, so let's just wait for the first ship to appear on the next page and then verify that ship's name.
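Here’s the shape of the change I made after the recorded click (page and expect come from the generated test’s Jest context; the data-tid selector and expected name are illustrative placeholders, not copied from my actual test):

// Added after the recorded "Page 2" click -- wait for page 2 to render, then assert.
// The selector and expected text are illustrative; use whatever your page actually renders.
const firstShip = await page.waitForSelector("[data-tid='ship-name']");
expect(await firstShip.textContent()).toBe("Expected first ship on page 2");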

Now let’s run the test again.

Nice.

Alright let’s push and see it run in CI.

Wow. Okay the video is only 2 seconds long this time 🤔

Welp… maybe the video feature needs work. Or maybe I’m doing something wrong 🤷‍♂️

Let’s move on…

Evidently, we can tell QA Wolf to prefer certain HTML attributes when locating elements. A best practice is to use a dedicated HTML attribute for locators. The documentation shows data-qa attributes in its examples. My site has data-tid attributes. Let's try updating the qawolf.config.js file and recreating the test to see what happens.

We can add a comma-separated list of these preferred attributes to an attribute field in the qawolf.config.js file.
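In my case that change looks roughly like this (the other keys stay whatever yarn create qawolf generated for you; data-tid is just my app’s attribute):

// qawolf.config.js -- only the attribute line is the change being described here.
module.exports = {
  // ...existing generated settings...
  attribute: "data-tid", // add more as a comma-separated list, e.g. "data-tid,data-qa"
};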

Aaaaannnnd looks like that works!

createTemplate 👀

Thiiiiiiiis feature looks interesting. You can actually customize the test code that is generated when you run npx qawolf create https://someurl.com testname

Perhaps we could use this feature to always use process.env.BASE_URL to grab the URL from configuration, rather than passing in a URL that gets hardcoded into each test.

This is preferable for running tests against different environments. Maybe I want to run the GitHub Actions workflow against staging first, and then against production after release. Best not to hardcode the url into the test.
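A minimal sketch of the idea. The createTemplate option itself comes from the docs, but the callback arguments and the boilerplate emitted below are my assumptions, so check QA Wolf’s documentation for the exact shape:

// qawolf.config.js -- sketch only. The callback signature is an assumption; the
// point is that the emitted test reads BASE_URL from the environment instead of
// hardcoding the url passed to `create`.
module.exports = {
  // ...existing generated settings...
  createTemplate: ({ name }) => `
test("${name}", async () => {
  // url comes from configuration, not from the test file
  await page.goto(process.env.BASE_URL);
});
`,
};

BASE_URL can then be set per environment, for example exported locally or set as an env value in the GitHub Actions workflow.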

Let’s update the GitHub Actions workflow to create one job per browser.

GitHub Actions CI — Browser Matrix

QA Wolf has an --all-browsers command line option, but I actually prefer having a job per browser so that we can see the chromium job fail, or the firefox job fail, etc.

GitHub Actions workflow syntax supports this. We can add a strategy.matrix value with a named list, and GitHub will create a job per item in the list.

We’ll call our list browser and parameterize the job name, the artifacts name, and the QAW_BROWSER value.

I’m also going to do this directly in GitHub since their editor has autocompletion, which is nice.
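Roughly, the matrix-related changes to qawolf.yml look like this (the install and upload steps are paraphrased from the generated workflow, not copied verbatim; the matrix pieces are the point):

# Sketch of the per-browser matrix in .github/workflows/qawolf.yml
jobs:
  test:
    name: qawolf (${{ matrix.browser }}) # parameterized job name
    runs-on: ubuntu-latest
    strategy:
      matrix:
        browser: [chromium, firefox, webkit] # one job per item
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
      - run: yarn install
      - run: npx qawolf test
        env:
          QAW_BROWSER: ${{ matrix.browser }} # tell QA Wolf which browser to launch
      - uses: actions/upload-artifact@v1
        if: always()
        with:
          name: qawolf-${{ matrix.browser }} # parameterized artifact name
          path: artifacts # keep whatever path your generated workflow already uses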

When we commit changes to our branch, we can click the Actions tab and we should see 3 jobs running.

Nice.

And at the end, we should have our 3 named artifacts.

Conclusion

It’s neat

It’s pretty neat how quickly you can get up and running. It’s especially cool for folks who might be new to JavaScript and can learn from reviewing the generated code.

For someone like me who has to create and maintain test frameworks, I’d be curious what maintaining something like this would be like. Perhaps by combining the custom templates with some organization for localization, desktop/mobile locators, and results reporting, it could be doable 🤔

It also looks like there are some requested enhancements for supporting the browser’s back/forward buttons, file uploads and downloads, drag & drop, and iframes.

Playwright supports these; QA Wolf just can’t generate the Playwright code for these behaviors yet.

It doesn’t solve all automation challenges

Overall, I think you would still end up needing to build some framework-y patterns: parameterizing which environment you’re hitting, handling localization for different languages, centralizing logic, or perhaps wiring in a visual-regression tool for comparing screenshots.

For example, when a page’s logic or workflow changes, you can “Edit” an existing test and rerecord an update to it. But if we don’t build our own reusable logic for common page interactions, it seems we’d have to do a lot of rerecording, especially with a lot of tests.

One of the things we do in test frameworks is separate page interactions from the logic of tests. We create an abstraction layer that wraps whatever tool we’re using for page interactions (e.g., WebDriver, Puppeteer, Playwright, Appium), and then we reference that layer to build reusable step functions that the tests call.

The benefit of this approach is that when things change on the page, we can go to the logic in the page-interaction layer and effectively “swap out a part.” Generally, the Page Object Model is used to centralize page interactions: classes are created to model pages and components. It clarifies where to put logic and is a helpful mental model.
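As a tiny illustration of that layering (the class name and selectors are made up for the example, not pulled from my spacex-ships tests):

// Page-object style wrapper around a Playwright page (illustrative names/selectors).
class ShipsPage {
  constructor(page) {
    this.page = page;
  }

  // Centralized interaction: if the pagination markup changes, fix it here once.
  async goToPage(number) {
    await this.page.click(`text=${number}`);
    await this.page.waitForSelector("[data-tid='ship-name']");
  }

  async firstShipName() {
    return this.page.textContent("[data-tid='ship-name']");
  }
}

// A test then reads as reusable steps instead of raw selectors:
//   const ships = new ShipsPage(page);
//   await ships.goToPage(2);
//   expect(await ships.firstShipName()).toBe("...");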

So how does QA Wolf solve this problem? Or, is it even trying to solve this problem?

Maybe QA Wolf is saying that we shouldn’t care about this much architecture. Maybe they’re saying, “just get scripts created and running and don’t waste your time.”

Even if we go along with that, QA Wolf only codes the behaviors; it can’t add the assertions. So if it’s critical that a piece of legal text is on the page, you’re going to have to add an assertion to the code that QA Wolf generates, or have a visual-regression check that screenshots the page and compares it to a baseline image. Both of those you’ll have to add yourself.

Similarly, if you have a bunch of tests and a lot of them fail, they all have to be updated. With the Page Object Model, you can often update a single centralized locator or page interaction and fix all the broken tests that relied on it.

This suggests to me that you’re making a trade-off between the speed of getting tests created and the ease of maintenance.

But who knows!? Maybe a large project somewhere will produce a pattern for leveraging QA Wolf’s strengths while minimizing the maintenance burden.

If you’ve been using it on a project, I’d be curious to hear how it’s going and if you’ve run into maintenance challenges, or if it turned out to be easier to just delete outdated tests and record new ones.

Anyway, that’s all for now! 🍻
