Every piece of software has bugs. This holds for technical products created by absolutely everyone, from Fiverr freelancers to tech giants (and, of course, ourselves). Whether we are talking about security issues resulting in server breaches, performance problems, or mere UI typos, they all hinder someone's experience (or something potentially worse than the experience) in one way or another.
Eventually, these issues will come to light, discovered either by the QA engineers or by the end user.
We do not want the latter to happen, so the question is: how should you approach the complex testing process? We'll show you what happens behind the scenes and how we tackle it at Quickleaf - and, of course, how you can do it yourself!
Maybe you've heard of the Test Pyramid. It's a concept related to Agile development.
It represents the various layers of the testing process, each having its merits and caveats in the development process.
The foundation of this Pyramid consists of...
Unit tests are the solid base of any successful software release. The scope of this best practice is to validate that each unit of the software code performs as documented or as expected.
Many companies, such as Google, require that any change to the codebase or newly added function be covered by a corresponding unit test.
When you write new code, you should cover it with a new unit test so that test coverage is maintained. Because unit tests are cheap to implement, they should outnumber the other test suites, such as end-to-end scenarios or service tests.
The first stage of the testing process applied in the CI/CD pipeline is the execution of Unit Tests.
When executing unit tests, the codebase does not need to be deployed, and external services are mocked, faked, or stubbed. The execution time for a set of unit tests is considerably lower than for an end-to-end scenario, so unit tests are less costly than service tests or UI tests. Improving the speed of test execution results in a faster CI/CD process. A layer of unit tests is also a safety net against introducing bugs into the codebase, giving the developer the liberty to refactor the code while assuring that the refactored module still works as expected.
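To make this concrete, here is a minimal sketch of such a unit test in plain Node.js. The functions {{{formatPhotoTitle}}} and {{{getPhotoTitles}}} are hypothetical, not part of the sample server used later; the point is that the external photo API is replaced with a stub, so the test runs instantly with nothing deployed.

```javascript
// Hypothetical unit under test: normalizes a photo title for display.
function formatPhotoTitle(title) {
  if (!title) return 'Untitled';
  return title.trim().replace(/\s+/g, ' ');
}

// Hypothetical service that depends on an external photo API.
// The apiClient is injected, so a test can replace it with a stub.
async function getPhotoTitles(apiClient) {
  const photos = await apiClient.fetchPhotos();
  return photos.map(p => formatPhotoTitle(p.title));
}

// In a unit test, the real HTTP client is stubbed out entirely:
const stubClient = {
  fetchPhotos: async () => [{ title: '  Sunset   over the lake ' }, { title: '' }],
};

getPhotoTitles(stubClient).then(titles => {
  console.log(titles); // logs: [ 'Sunset over the lake', 'Untitled' ]
});
```

Because the dependency is injected, swapping the real HTTP client for {{{stubClient}}} requires no network and no running server - which is exactly why unit tests are so cheap to execute.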
Unit tests will not catch issues in the service/integration layer, and they cannot catch every error in a piece of software.
It is not recommended to rely solely on unit tests because most applications still depend on external services, APIs, or databases. Therefore the next layer will be...
Any modern app is split into layers, which is a good thing. Testing each layer in isolation by mocking the other parts of the app (as in unit testing) is good, but not enough. An integration test takes two or more components of a software application and ensures that these bigger units work together correctly.
These tests prove quite valuable for validating a real environment and making sure the application works properly as a whole.
API testing is a subset of integration testing which determines whether the API meets a set of expectations determined by usability, reliability, and functionality factors. Let us look at an API test. In the following example we will verify that:
- the endpoint returns a 200 HTTP response code
- the response content type is JSON
- the data being returned is not empty
Firstly, we will need a server to test.
We will need a basic web server running, so let's get that out of the way by executing the following commands in the terminal:
{{{git clone https://github.com/quickleaf/api-test.git}}} will clone our server application
{{{cd api-test}}} will get into the project root
{{{npm i}}} will install the project dependencies
{{{npm run startWin}}} will start the server if you are running on Windows
If you are running this on MacOs or Linux use {{{npm run start}}}, instead of the command above
Please note that this server runs on port 3000, so keep it running in this terminal.
Your terminal should look like this when the server starts.
So now that we have our server running on the local machine, let's see if we can test it.
We will keep the same working folder and install additional libraries to help us with our task.
The packages needed are {{{mocha}}}, {{{chai}}}, and {{{supertest}}}.
Execute the following steps in the project folder to set up the scaffolding for our basic test framework. Install the packages by running {{{npm install --save-dev mocha chai supertest}}}.
Let's create a test file - {{{touch test.js}}} in the working folder - and add the following code to it.
{{{
const { assert } = require('chai');
const request = require('supertest')('http://localhost:3000');

describe('Flick Photo API test', () => {
  it('GET /photos', () =>
    request
      // Make a GET request to the /photos route
      .get('/photos?query=test')
      // Assert a 200 HTTP response code
      .expect(200)
      // Assert the response content type is JSON
      .expect('Content-Type', /json/)
      .then(res => {
        // Verify the data being returned is not empty
        console.log(res.body.photos);
        assert.isNotEmpty(res.body.photos);
      }));
});
}}}
Let's run this newly created test by executing the following command in the terminal: {{{npx mocha test.js}}}
After test execution, you will hopefully see something like this:
Do not close this server yet, as we will use it in the next segment, which is...
The UI is the interface through which the end user communicates with an application. In the case of modern web apps, this is a graphical user interface (GUI). The user should experience a flawless user interface and user experience, so testing must be put in place to enforce this. Actions performed via keyboard, mouse, or touch interactions should be tested and verified to behave as expected. Page elements should be displayed and function properly. The proper data should be shown to the user. The various states of the application should also be tested, as they change based on certain user roles and rights.
These tests can be executed by a manual tester or through an automated regression test framework. Both automated and manual regression have pros and cons.
Automated tests bring speed into the CI/CD development process. A developer can have their branch tested in a fraction of the time a manual tester would need. The man-hour costs associated with QA drop over time after the successful implementation of an automated regression framework. Automated testing is better suited to a faster-paced development environment with shorter sprints.
An automated regression framework is, in essence, just a piece of software that tests another application. It will have its own subset of problems and bugs, not necessarily linked to the application under test. Did you change your front end? Then expect to refactor your UI tests as well. If the API schema for a certain service changes, you will need to refactor your automated API/service tests too. Test maintenance is a well-known problem and not to be taken lightly by the QA engineer. In conclusion, automated tests are more expensive and provide a lower return on invested time compared to unit tests.
The computer does not have an eye for detail, and not everything needs to be, or should be, fully automated. A manual tester applies their knowledge of the product under test and conducts exploratory testing, taking a subjective approach toward the tested application. An automated script will not do that. It will just run its assertions, perhaps passing all of them, sometimes luring the developer into a false sense of security because "all tests pass".
Manual testing requires more resources and more man-hours, and we need to factor in the possibility of human error. It is the slowest testing method, sometimes blocking the release cycle until all manual tests are done.
A balance should exist between automated and manual testing and they should complement each other. A mix of these two elements is perhaps...
End-to-end (E2E) testing means testing a deployed application via its user interface.
Automated E2E testing means doing this programmatically. Selenium, Playwright, Cypress, and TestCafe are some of the open-source libraries that testers and developers use to write code that automates user agents such as the browser.
These test scripts mimic the user's behavior and go through the web app just like a real user would. From logging in to logging out, there is a myriad of scenarios the user can go through and a substantial number of ways things can go wrong. Catching these issues at the branch level reduces future development costs spent on fixes and refactoring.
All of these would be covered by different tests, grouped into test suites, each verifying a different service or part of the application. Much like a spider spinning its web line by line, the QA engineer builds up end-to-end test suites. The scope of both is to catch bugs. Each newly added test scenario brings extended test coverage and peace of mind for the developers, who can commit and deploy faster without fear of introducing unwanted issues into the codebase. The scope of this workload is to:
As an example, we will implement a basic React app to serve as the front end. It will use our previous back-end solution to retrieve its data, so make sure the server from the previous section is still running before attempting the next steps.
Let's set up the front-end component of our application by executing the following commands:
{{{git clone https://github.com/quickleaf/react-test.git}}} - get a copy of a frontend application
{{{cd react-test}}} - change directory into project root
{{{npm i}}} - resolves project dependencies
{{{npm run start}}} - starts the frontend application on your local machine (port 8080)
After a successful compilation, your browser should open {{{http://localhost:8080}}} and you can interact with the app.
Now that we have our example application up and running, we can start adding some UI tests. We will use Playwright to set up our test framework.
We favored Playwright over other solutions because of what it can do:
We will first install the latest version of Playwright in the {{{react-test}}} folder by using the following command in the terminal: {{{npm init playwright@latest}}}
This will install all required dependencies, including browsers. Let's execute the default tests to make sure everything is okay; to do this, run {{{npx playwright test}}}. You should see something like this, telling us all tests have passed. Since the tests were executed in headless mode, you did not see any browsers pop up.
Next, let's modify the example test found in {{{/tests/example.spec.js}}} with the following code block.
{{{
import { test, expect } from '@playwright/test';

test('react app test', async ({ page }) => {
  // Attempt to reach the desired URL
  await page.goto('http://localhost:8080/');
  // Expect the page to have a specific title (case-insensitive)
  await expect(page).toHaveTitle(/react app/i);
  // Click the Mountain button
  await page.getByRole('link', { name: 'Mountain' }).click();
  // Verify a header contains the text "Mountain"
  await page.waitForSelector('h2:has-text("Mountain")');
  // Click the Beaches button
  await page.getByRole('link', { name: 'Beaches' }).click();
  // Verify a header contains the text "Beach"
  await page.waitForSelector('h2:has-text("Beach")');
  // Click the Birds button
  await page.getByRole('link', { name: 'Birds' }).click();
  // Verify a header contains the text "Bird"
  await page.waitForSelector('h2:has-text("Bird")');
  // Send text to the search input
  await page.getByPlaceholder('Search...').fill('test');
  // Click the search button
  await page.getByRole('button').click();
  // Verify a header contains the search term "test"
  await page.waitForSelector('h2:has-text("test")');
});
}}}
Now we want to execute the test again, but this time in headed mode, meaning the browsers will be visible and we can observe the test script being executed: {{{npx playwright test --headed}}}
To check the test report, let's run {{{npx playwright show-report}}}; this serves the report on localhost and should look like this:
It looks like our test was executed in the Chromium, Firefox, and WebKit environments, meaning we have coverage of all modern browser engines:
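The list of browsers comes from the {{{projects}}} section of {{{playwright.config.js}}}, which the scaffolding generated for us. A minimal sketch of that section looks like this (the file generated on your machine may differ in its details):

```javascript
// playwright.config.js - a minimal sketch of the scaffolded config.
const { defineConfig, devices } = require('@playwright/test');

module.exports = defineConfig({
  testDir: './tests',
  projects: [
    // Each project runs the same test files against a different browser engine.
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});
```

Removing a project from this list (or passing {{{--project=chromium}}} on the command line) limits the run to the browsers you care about.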
In conclusion, the test automation pyramid is a good model when thinking about the distribution of your tests.
{{{
        /\                   --------------------
       /  \                  \ UI / End-to-End  /
      / UI \                  \----------------/
     /------\                  \ Integration  /
    /Integra-\                  \------------/
   /   tion   \                  \   Unit   /
  /------------\                  \--------/
 /     Unit     \                  \      /
 ----------------                   \    /
                                     \  /
                                      \/
  Pyramid (good)            Ice cream cone (bad)
}}}
Therefore, if we want a successful development process, our test distribution should retain its Pyramid shape, keeping in mind a few takeaways:
- Write tests with different granularity: many fast unit tests at the base, fewer integration tests in the middle, and only a handful of UI/end-to-end tests at the top.
- The higher up the pyramid you go, the more expensive tests become to write, run, and maintain.
- Automated and manual testing complement each other; keep a balance between them.
Hopefully, this helped!