
The Different Shades of Testing Web Apps: Aiming for Balance

· 8 min read

It's 2022, and automation is on the rise. The number of software testing tools, libraries, and platforms available increases daily. What different aspects of web applications should you be testing? Which ones should you automate? And when and how should you employ manual testing?

Testing dimensions

Software testing has many dimensions, but talking specifically about frontend applications, developers are mainly concerned about:

  • Functionality - does it work? Does the application react to my actions as expected?
  • Visual - does it look good? Are things aligned?
  • Compatibility - does the app work seamlessly across all the supported browsers and devices? Companies typically keep lists of supported browsers and devices in publicly accessible spaces.
  • Performance - does it load fast? Does it perform well on lower-end devices?
  • Security - are there any security holes introduced?
  • Accessibility - is the app accessible? Could a visually impaired user navigate and use the app?
  • Legal/Compliance - are we infringing any user-rights laws (e.g., GDPR)? Does the app use cookies and/or other browser storage capabilities?

Kinds of tests

The above section focuses on dimensions (or categories). But within most categories, you will find various options to implement your tests. We will try to briefly go through the most prominent ones:

  • Unit tests - typically small and focused on a tiny piece of your code. They usually run in isolation, in controlled environments that don't closely reflect how real-world users interact with your product.
  • Integration tests - a step up from unit tests. They still focus on a particular part of the product, but stretch the scope of a unit test and bring other components into play.
  • End-to-end tests - these should test the product as a whole, no more hiding. The software that runs these tests has the job of playing the role of the user, meaning actually simulating user interactions. Due to their nature, these are typically slower and, hence, more expensive than the two previous types.
  • Smoke tests - these are basic health checks for your products. A smoke test typically serves as a guiding light on whether your system is currently available. This is useful for the purposes of monitoring and alerting.

To dive deeper, we highly recommend having a look at "The Test Pyramid" by Martin Fowler to understand the trade-offs between these kinds of tests.


That's a lot to take in! Let's see what tools are helping us cover most of the above.

  • Functionality - typically covered through software testing automation. Jest (unit testing), Cypress, Playwright, and Puppeteer (integration/e2e testing) have risen in popularity over the past years. In addition, no-code solutions have been a hot topic recently, so you will find emergent solutions such as katalon in this space.
  • Visual - a group of tests that has gained more traction recently. We have Storybook (with its Chromatic add-on) and mapbox/pixelmatch (with a Jest wrapper available, americanexpress/jest-image-snapshot). Beyond libraries, you have platforms such as saucelabs and applitools, the latter more specialized in visual testing.
  • Compatibility - cross-device/browser testing could fit in the functionality bucket. Some of the previously mentioned tools will allow you to run your automation across different browsers, operating systems, and devices. If manual testing is required, BrowserStack is probably the leading tool, offering a comprehensive list of device types and browsers that can be accessed on-demand through your browser (independently of your computer/OS). For compatibility (like many of the dimensions below), static code analysis might help prevent disasters. Tools such as eslint-plugin-compat allow developers to check for compatibility issues as they code.
  • Performance - Google's lighthouse is undoubtedly a big player in this field, allowing you to automate performance audits for your web applications. With the proper integration, you can spot performance degradation at each step.
  • Security - along with accessibility, one of the most neglected testing dimensions, and one that has cost large corporations too much to ignore. We've seen the rise of platforms such as Snyk, which can help you scan your codebases for vulnerabilities and prevent malicious code from shipping with your software. No wonder this space saw a record-breaking 2021 in cybersecurity startup investments.
  • Accessibility - another neglected space, where axe seems to be the consensus option powering many integrations. For example, you will find plugins to use axe-core with libraries such as Jest or Playwright.
  • Legal/Compliance - with GDPR out to get you, it's crucial that you pay attention to details such as the use of cookies and user tracking. We are unaware of automation that could help you here, so if this concerns you, you'll probably have to outsource this to legal experts.

Note that in the above list, some of the tools mentioned under one testing dimension cover much more than that dimension alone.
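As an example of how lightweight the static-analysis route can be, enabling eslint-plugin-compat mostly comes down to configuration. A plausible setup, assuming the legacy `.eslintrc.json` format (check the plugin's own docs for the exact, current syntax), might look like:

```json
{
  "extends": ["plugin:compat/recommended"],
  "env": { "browser": true }
}
```

The target browsers are then declared via a `browserslist` field in `package.json` (e.g., `"browserslist": ["defaults"]`), which the plugin uses to flag browser APIs unsupported by those targets as you code.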

How much can you automate?

For your application to have fully automated checks, you will not only require the tools above but also need to consider:

  • Money - many of these tools are free; others are quite pricey. Regardless of pricing, if you work in a team, you need automated pipelines that run these checks for you, hosted somewhere (e.g., GitHub Actions), and those resources alone have costs.
  • Human resources - in line with money, you need people to put this automation in place and maintain it. Depending on your systems' complexity, you might require the assistance of well-paid experts.
  • Time - putting these systems in place takes time.

Maybe you have good unit and functional tests that cover most of your application's cases, but you don't have the expertise to run visual tests. Maybe you work for a big tech company with loads of cash, and you have everything mentioned above in your automation pipeline, plus other tools that the typical developer won't hear about until 2070. And yet, you're still releasing bugs from time to time. Finding the right balance between how much you can automate and how much human testing is required is the problem you should be focusing on. Evaluate where you and your project/company stand across the different testing dimensions and try to craft the best strategy, and while you're at it, build and advocate for a testing mindset within your team.

When you can't automate

You'll often have to find ways to cover the gaps your automation might leave behind. Next, we'll present a list of different methods to get your product manually tested. You can combine multiple of them with your automation to optimize how much ground your team can cover.

  • Quality assurance as a mandatory step - set up an approval process for code changes where every developer must have their changes manually tested by a peer. Sometimes this is part of the code review process: a reviewer is responsible both for assessing the code quality and approach through the standard review process and for manually testing the code changes before approving a given change (typically a pull/merge request). This is far better than having authors test their own changes, because authors can easily have blind spots caused by their familiarity with the functionality at hand, often missing very obvious details that an "outsider" would immediately catch.
  • Have a checklist to make your developer's life easier that outlines every testing dimension your developer should care about, e.g., "Does the new code work in Safari? Does the new code respect our accessibility standards?". This list should complement your automation.
  • Dogfooding - another common way large organizations test their products nowadays is by using the products themselves. Simple, right? The idea of dogfooding is to integrate your own product as a mission-critical piece of your own team/company processes. Additionally, most companies will internally run a "beta" version of those products, meaning they purposely keep their customers a few versions behind to leave room for their staff to catch bugs while using the most recent version. This is a brilliant concept that everyone should seriously consider.
  • Assign a quality assurance person to your team - in an ideal world, you would have dedicated testers whose job is to create a bug barrier and assess every single code change before it reaches production.
  • Outsource - get outside help to perform the testing you need. You will find plenty of offers across the web from companies or freelancers to test your products.
  • Closed beta programs - ok, you don't have the funds to outsource just yet. You might want to trade off some of your customers and turn them into testers. This strategy is most commonly used to gather feedback from your users before launching a particular functionality, allowing you to work behind the scenes and iterate while the final product is not publicly available. But why not also consider these users as your testing staff? You could leverage them by creating tools and communication channels that make them want to report issues to you.

We hope this article helps you get the big picture of testing frontend applications and gives you a couple of alternative ideas for complementing your automated checks with manual testing processes.

If you liked this article, consider sharing (tweeting) it with your followers.
