Automation of testing web applications
Timing & value
It all depends on the project's current phase, its scope, and how QAs are utilized on it. Yes, that is true, but...
When should automated tests be developed? When does it make sense? When is it just a waste of time? And what kind of automated tests should you write?
In the past, I thought that developing automated tests made sense only up to the middle of a project at the latest, and that it wasn't worth building or significantly changing an automation framework in a project's late phase. It would simply be a waste of effort, time, and money. The reasoning is simple: the project is ending, the work is done, delivered, and tested. No one will use the automated tests after that milestone.
As I thought about this more and more, I realized it makes sense every time. The development process never ends, even after the project is delivered. New features might be added in the future, the existing website may need changes, or maintenance may be required in the form of framework or infrastructure updates. Even during "normal" days, you want to be sure that your website is up and running as expected. In the end, you will find that automated tests are always useful and that the investment of effort and money is worth it.
Imagine if you needed to retest your solution manually with each change, each new feature, every day, to be sure it works as you expect and that your customers have the great experience you wish for them. This effort naturally grows more complex as your website grows. A manual regression test can take days, on big solutions maybe more than a week per deployment, versus hours for an automated run.
You could object that you also need to invest time and effort in developing automated tests. That is true, but you implement them once and then only extend them or change parts of the solution as needed. In the end, the investment is much smaller than repeated manual regression testing. Automated tests can be executed on any environment whenever you wish. They give you reliability, flexibility, speed, and saved money and time.
Various automated test types can be used on projects, but not all of them are needed on every project. We use some of these types in Apollo Division, and I will describe them in the following section.
Automated test types we use
In our division, we use various automated test types to assure that our solution works as expected and fulfils the requirements of our customers.
We may use all of them on a project, or only some of them, depending on the project scope. Let's look at the automated test types we use in our QA.
The purpose of functional logic automated tests is mainly to validate the functional logic of a website: forms, search, pop-up windows, and basic validation checks.
We use C#, Selenium, and SpecFlow as our main dev stack for the implementation of these tests, but recently we have been thinking about using Cypress as an alternative from the start, because it is spreading across the testing community as a widely used framework and appears to be interesting not only for this kind of test.
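As a small illustration, a functional check in the SpecFlow stack typically starts from a Gherkin scenario that the C# step bindings then implement. The page and field names below are made up for the example:

```gherkin
Feature: Contact form
  Scenario: Submitting the form with a valid email
    Given I open the "/contact" page
    When I fill in the "Email" field with "jane.doe@example.com"
    And I click the "Send" button
    Then I should see the confirmation message "Thank you"
```

The value of this format is that the scenario stays readable for non-developers, while the binding code underneath can drive Selenium, Cypress, or any other browser automation.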
The purpose of visual automated tests is to continuously validate the design of developed functionalities and avoid visual defects.
You can read about the solution we use in our delivery unit in the article about Automated visual testing. We are also now experimenting with Cypress.io as a replacement or backup framework for visual automated tests.
Performance tests are an important part of our testing solution. They help you spot weak places in your infrastructure and application and evaluate whether your solution fulfils the criteria.
One remark here: performance criteria are a critical part of performance testing. They need to be defined before performance tests are prepared and defined, and they can be set based on analytical data or on the expected usage of the system.
In our delivery unit, we use JMeter for performance tests and cloud pipelines / VMs for hosting and execution. If you are interested in performance testing in our delivery unit, you can read the article about Performance testing of web applications.
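For a sense of what execution from a pipeline looks like: once a test plan exists, a headless JMeter run is usually a single command along these lines (the file and folder names are placeholders):

```shell
# Run a JMeter test plan in non-GUI mode (-n), log raw results (-l),
# and generate an HTML dashboard report (-e -o) after the run.
jmeter -n -t load_test_plan.jmx -l results.jtl -e -o report/
```

Non-GUI mode is the recommended way to run JMeter under load, since the GUI itself consumes resources and would skew the measurements.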
API & integration tests are another area where you should focus your automated test development. It is a good idea to test APIs both separately and within integrations. For example, with the front end of a website search, this means checking what kind of results are displayed, what response format we get from the APIs, etc.
On the market, you can find many tools for API testing, such as Postman, SoapUI, Cypress, and others. It is up to you which tool you prefer or which fits your needs. The tools I mentioned are widely used around the world and more than usable.
In our division, we mainly use Postman for pure API tests, and you can read more about it in the article written by one of my colleagues: REST API testing with Postman in ACTUM Digital.
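Whatever tool fires the request, the core of such a test is usually a check on the response format. A minimal, framework-agnostic sketch in JavaScript, where the field names (totalCount, items, title, url) are invented for the example and not from any real project:

```javascript
// Validate the shape of a hypothetical search API response.
// Returns a list of human-readable errors; an empty list means the shape is OK.
function validateSearchResponse(res) {
  const errors = [];
  if (typeof res.totalCount !== "number") {
    errors.push("totalCount must be a number");
  }
  if (!Array.isArray(res.items)) {
    errors.push("items must be an array");
  } else {
    res.items.forEach((item, i) => {
      if (typeof item.title !== "string") errors.push(`items[${i}].title must be a string`);
      if (typeof item.url !== "string") errors.push(`items[${i}].url must be a string`);
    });
  }
  return errors;
}

// A well-formed sample payload produces no errors.
const sample = {
  totalCount: 2,
  items: [
    { title: "Pricing", url: "/pricing" },
    { title: "Contact", url: "/contact" },
  ],
};
console.log(validateSearchResponse(sample)); // → []
```

The same check drops almost unchanged into a Postman test script or a Cypress `cy.request().then()` callback; only the way the response object arrives differs.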
Accessibility regression tests focus on page design: whether it fulfils the accessibility rules and guidelines that make a page usable for people with various disabilities.
For this purpose, we use axe CLI and axe-core, which are part of the axe DevTools framework, plus a custom console app for generating the report. We are also investigating integration with our Cypress tests to have everything under one roof.
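As a rough sketch of the scanning step, a single-page run with the axe CLI might look like this (the URL and output file name are placeholders):

```shell
# Scan one page for WCAG 2.0 A/AA violations and save the raw results as JSON,
# which a custom reporting app can then pick up.
npx axe https://example.com --tags wcag2a,wcag2aa --save axe-results.json
```

Running this per page in a pipeline and diffing the JSON output against a previous run is one simple way to turn it into a regression check.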
Security tests are a complicated topic and not easy to cover. In our delivery unit, we currently use ZAP, a security scanner that helps avoid basic security holes in applications.
Anyway, deep security testing should be done by a specialized company that focuses strictly on this area and has deep expertise in the general security of cloud services, web servers, etc.
I wrote an article about our solution for basic security testing, so if you are interested, learn more in Basic automated security testing of web applications.
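To give a flavour of what a basic ZAP run looks like, the project ships a packaged baseline scan that passively spiders and checks a target. A typical Docker invocation, with the target URL as a placeholder, is roughly:

```shell
# Passive ZAP baseline scan against a target, writing an HTML report.
docker run -t owasp/zap2docker-stable zap-baseline.py \
  -t https://example.com -r report.html
```

The baseline scan is passive by design, so it is safe enough to run regularly against test environments without actively attacking the application.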
Automation test initiatives in Apollo
In the previous chapter, you saw which types of automated tests we currently use in our division, but let's look a little into the future, at what we are working on to extend and improve our automation test dev stack.
Security testing is one of the areas we are currently focusing on. We are putting together a comprehensive strategy for security testing in our delivery unit, for both future and existing projects.
We have a baseline with ZAP and are testing some other areas, but we would like to extend it by focusing on API security testing, web forms, URLs, etc. We already run some tests, but we need to make them a standard part of our process. We are not aiming to become security professionals, but to cover as much as we can within our skill set.
Cypress, which I mentioned a lot in the previous chapter, is a tool we are "playing" with for API tests, visual tests, functional logic tests, accessibility tests, and maybe other areas we haven't thought of yet. We are preparing various POCs and comparing them with our existing solutions.
Standards for our automated tests are based on our experience from projects and on research. We are crafting standards for each automated test type we currently use, to avoid problems we have had in the past or are facing now, and to be able to show a potential or current customer which tools and frameworks we use, how and when we execute tests from a process perspective, how we handle development, what the risks are, etc.
Boilerplate projects are another initiative. Since we work for different clients around the world rather than on one product, we want to speed up the development of automated tests and their integration into CI/CD. We want a clean table prepared ahead of time: tools ready, empty projects, pipelines, reporting, storage, etc., all documented and ready to use on a project. This avoids reinventing the wheel all over again and shortens the project preparation phase from a QA perspective.
We are still enhancing our accessibility testing process, for both manual and automated tests. We are pushing for certifications in this area to gain knowledge, perhaps get some new ideas, and identify gaps in our testing. We are also exploring new tools and frameworks.
These are the initiatives we are currently working on in our QA. By the time you read this article, some or all of them may be finished, and some new ones may be on the table. :-)
There is vast space for automated tests in the development process. With automated tests, you can take a lot of repetitive "pain" off testers.
On the other hand, don't over-rely on automated tests. They still can't replace manual testing: for a one-time check, testing manually is faster than writing an automated test for it. And there is still no AI in automated testing that could substitute for the human brain, despite some companies trying to label everything AI. That is why functionality should always be tested manually first and then covered by automated regression tests.
This piece was written by Vlasta, QA Team Leader of Apollo Division.
If you seek help with your project or initiative, just drop us a line. 🚀