What is automated testing and why use it?
What is automated testing and why should we bother with it? Whether a question or a statement – it seems a reasonable point to raise.
For those at the sharp end of automated testing or engineering, the answer is simple: it is about being able to test time and time again to confirm that behaviour has not been affected by change – in a very short time, at reduced cost and with considerably more reliability than manual regression testing. Not re-checking results after a change is a leap of faith, but once you are comfortable with your chosen tool any concerns should disappear quickly.
We all need to buy into automated testing!
Automated testing is an increasingly adopted process that is becoming easier to manage as the tools become ever more capable and require less technical knowledge to operate.
Essentially, automation has the very realisable potential to deliver significant benefits in the time, cost and quality triumvirate. Some of the key benefits are:
- Reduced time to test, by as much as 95% when comparing the same test against a manual approach;
- Reduced cost of testing that often returns the cost of its development in as few as 4 automated regression tests;
- Increased coverage as new tests are added to the regression pack;
- Increased confidence in change.
Building automated regression test packs allows testing to be treated and reused as an asset, rather than reinventing the wheel each time an application is changed. It is a cold fact that automated testing can be expensive to build, leading some companies who have not worked out the business case to leave it on the shelf. However, it is likely to pay for itself in as few as 4 regression testing cycles, and thereafter to run at a much lower cost, in as little as 5% of the time it would take using manual regression test packs. If we consider the proven metric that systems generally cost more to maintain than they do to develop, it is reasonable to surmise that automated testing will significantly reduce the cost of that maintenance over time. Not only will test coverage increase as you build your automated regression pack, but so will your confidence that your systems and changes are good to go – before you send them to your production environment!
The Return on Investment
A few years ago, I oversaw a major project to automate testing of SAP releases. The business case was simple: 3-4 releases a year affecting 900 business processes. The client couldn’t fully test a release manually before the next release was upon them. The development cost of £200,000 to automate testing seemed excessive, but the return on investment was achieved after 3 quarterly releases, with subsequent releases offering a cumulative return. Test runs came down from 3 months to 5 days, at a cost saving of £91,000 per run. This programme provided a good return, as will many others. However, some will, some won’t and others may struggle. It is key, therefore, that the business case for automation be prepared and proven from the outset – before you embark on automating something that may not be feasible or offer a decent return.
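The breakeven point in a business case like this is simple arithmetic, and worth doing before you commit. A minimal sketch, using the illustrative figures quoted above (the numbers are the article's example, not universal constants):

```python
# Sanity-check the ROI figures quoted above. All values are the
# article's illustrative numbers, not universal constants.
build_cost = 200_000      # one-off cost to automate the regression pack (GBP)
saving_per_run = 91_000   # saving per release test run vs manual (GBP)

# Number of runs needed before cumulative savings exceed the build cost
runs_to_breakeven = -(-build_cost // saving_per_run)  # ceiling division
net_after_three = 3 * saving_per_run - build_cost     # position after 3 runs

print(runs_to_breakeven)  # 3 quarterly releases, as stated
print(net_after_three)    # 73000 (GBP) in the black after release 3
```

The same two-line calculation, with your own cost and saving estimates, tells you immediately whether a proposed automation effort can plausibly pay for itself within its useful life.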
What’s the end goal of automated testing?
So, we have seen the potential return, but that is not the complete end game. We also need to look at the wider gains, not least:
- Having a framework of reusable, repeatable and predictable test assets that can be run at any time;
- Informing decision-making by determining if an application or process has been adversely affected by change – quickly;
- Only having to check test results by exception. That is, compare pre- and post-change test runs and only review unexpected results or mismatches;
- Increasing test coverage in a reduced time and cost.
These are four simple productivity measures that we look to achieve when automating testing for an application. If you cannot achieve these, then something is missing and you will struggle to achieve your sought-after return on investment.
Ok, what can we use automated testing for?
Automated testing is precisely that – test cases that have been automated and which can run without intervention. However, there are many different types of automated testing, and which you select depends on what you want to achieve and the tool or tools you use to support it. So, let’s have a look at the key types or cases of automated testing:
With the exception of ‘Pre-Production Assurance’ (more later), each of the different types of automated tests shown in the diagram has long been accepted by the industry as key to determining whether change has impacted an application or process being modified – quickly, efficiently and cost-effectively. Whilst not all are suitable for all phases of testing, each has a clear definition of what it sets out to achieve and the reasons why we should consider incorporating it into our corporate and programme-level automated testing strategies and plans. In all cases, automated testing and each of its types achieve most when supported by a tool that is appropriate to the task in hand.
Automated testing, its key test types and their usage
Let’s take a look at them individually, so that you might consider how to provide assurance that the four key productivity measures are fulfilled, to help ensure you can meet what has been set out in the business case and to maximise your return.
Unit Testing. This class of automated testing will only ever be employed in the development environment as it simply tests the smallest component of our systems or programmes. Unit testing is frequently conducted manually, unless, of course, you can build a succession of unit tests for automated testing, much as might be considered in an ‘Agile Sprint’. This class of automated testing will, in all likelihood, be built and managed by Agile Teams.
New Functionality Testing: This class of automated testing deals with how functions work within applications. Put more simply, how a function relates to the customer or user. The customer or user won’t care how a function works, but they do care how it is triggered and what it returns. This class of automated testing is likely to be built by a test specialist and managed in a reusable framework.
Regression Testing: This is, perhaps, the most commonly understood form of scalable automated testing. Essentially, it should be used whenever developers change or modify their software. How often, though, do we hear “it’s not worth running a set of tests because I only changed one line of code”? The simple fact is that even the smallest of changes can have unexpected consequences, so we undertake regression testing at the point of change. As we progress through the life cycle of systems, integration, etc., regression testing will seek to target different areas and functions. For example, when unit testing, a specific function will be exercised; but in pre-implementation automated testing we will have a regression pack that targets as many of the function points contained in an application as possible.
Essentially, regression testing tests existing software applications to make sure that a change or addition hasn’t broken any existing functionality or re-introduced previously corrected faults. Such tests can be performed manually on small projects, much like unit testing, but as we progress through the life cycle of test phases we seek to test more of the system and exercise a greater number of data-triggered test conditions, which will inevitably be time consuming and would benefit from automated testing using a test tool.
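At its core, a regression pack is just a set of known inputs paired with previously verified outputs, re-run in full after every change. A minimal sketch, where the function and its expected values are invented for illustration:

```python
# Minimal regression-pack sketch: inputs paired with previously verified
# outputs; the whole pack is re-run after every change, however small.
# The function and figures are illustrative, not from any real system.

def calculate_discount(order_total: float) -> float:
    """The function under change: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Baseline results captured from the last known-good release
REGRESSION_PACK = {
    50.0: 50.0,     # below threshold: no discount
    100.0: 90.0,    # at threshold: 10% off
    200.0: 180.0,   # above threshold: 10% off
}

def run_regression_pack() -> list:
    """Return the inputs whose current output no longer matches baseline."""
    return [inp for inp, expected in REGRESSION_PACK.items()
            if calculate_discount(inp) != expected]

failures = run_regression_pack()
print("failures:", failures)  # an empty list means no regression detected
```

The point of the pattern is that the cost of running the pack is near zero, so the “I only changed one line of code” objection disappears: you run everything, every time.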
Integration Testing tests the flow of data to and from both upstream and downstream systems. We care little for the functions involved in how the data travels, but that it does travel correctly between systems, across APIs and interfaces inside or outside of our organisation, to trigger other functions or responses. This is often the most technical form of automated testing, as it is likely to have to manage differing technologies, wait for responses and handle error conditions that would not occur in functional testing.
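The essential assertion in integration testing is that what one system sends is exactly what the other receives. A minimal in-process sketch, where the “interface” is a JSON payload standing in for a real API or message queue, and all names are invented for the example:

```python
# Integration-test sketch: we care that data survives the journey between
# systems, not how each system computes it. Here the interface boundary is
# a JSON payload; in real life it might be an API call or a message queue.
# The function and field names are illustrative.
import json

def upstream_export(order: dict) -> str:
    """Upstream system serialises an order for transmission."""
    return json.dumps(order)

def downstream_import(payload: str) -> dict:
    """Downstream system parses the payload it received."""
    return json.loads(payload)

order = {"id": 42, "total": "99.95", "currency": "GBP"}
received = downstream_import(upstream_export(order))

# The integration assertion: what went in is exactly what came out
assert received == order
print("round-trip OK")
```

A real integration suite adds timeouts, retries and deliberate error injection around the same basic round-trip check, which is why this tends to be the most technical class of automated test.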
Pre-Production Assurance is perhaps the least well known or adopted use of automated testing. Essentially, this class of automated testing is used to assert that your current production data returns the same set of results when running a regression pack against both the existing software and the software that is to be implemented. That is: a copy of production data is taken at a point in time. It is then used twice: 1) against the production-based software, with the results saved, and 2) against the software to be implemented. The third step in the process compares the results. This method can be used at any time to help ensure that nothing unexpected has occurred. As a space saving, it negates the need to store production-sized data and databases long-term. Predominantly used with production data, the approach is also relevant to other automated testing types.
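The three-step process above reduces to: run, run again, diff. A minimal sketch, with two invented software versions and a deliberately planted divergence so the comparison step has something to report:

```python
# Pre-production assurance sketch: run the same production data through the
# current version and the candidate version, then compare and report only
# the mismatches. Both functions and the data are illustrative; the
# candidate is deliberately divergent for one input to show a detection.

def current_version(x: int) -> int:
    """Stand-in for the production-based software."""
    return x * 2

def candidate_version(x: int) -> int:
    """Stand-in for the software to be implemented (buggy for x == 3)."""
    return x * 2 if x != 3 else 7

production_data = [1, 2, 3, 4]          # the point-in-time data copy

baseline = {x: current_version(x) for x in production_data}    # step 1
candidate = {x: candidate_version(x) for x in production_data} # step 2

# Step 3: review by exception - surface only the unexpected differences
mismatches = {x: (baseline[x], candidate[x])
              for x in production_data if baseline[x] != candidate[x]}
print(mismatches)  # {3: (6, 7)} - one input behaves differently
```

Only the mismatch dictionary needs human review, which is exactly the “check results by exception” productivity measure described earlier.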
Automated testing frameworks and approaches
There are several popular approaches that can be taken to automated testing, and which you choose will depend on a number of factors and variables, not least:
- Your organisational strategy for automated testing;
- The tool or tools you use;
- Your technical skills;
- Your systems architecture;
- Spend constraints or return on investment demands.
Each of these types typically resides in an automated testing framework, where they can be stored as assets and reused predictably and repeatably.
Let’s take a brief look at these different approaches in turn:
Keyword Driven Automated Testing is also frequently referred to as table-driven testing or action-word-based testing. In keyword-driven testing, we use a table – in Excel, SQL or another form – to define keywords or action words associated with each function or method that is to be executed. As one might expect, keyword-driven automated testing runs automated test scripts based on the keywords specified in our driver table. By using this approach, testers can work with keywords to develop any test automation script, regardless of their technical experience. The skills needed to define keyword-driven tests are relatively modest, but the technical skills and costs required to maintain the framework in response to changes in the system under test (SUT) can be a significant factor in achieving a positive ROI.
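The mechanics are a table of rows dispatched to handler functions. A minimal sketch, where the keywords, actions and test rows are all invented for illustration (in practice the table would be exported from Excel or SQL and the actions would drive a real application):

```python
# Keyword-driven sketch: a table of (keyword, argument) rows dispatched to
# handler functions. Keywords, actions and the table rows are illustrative;
# real handlers would drive an application rather than append to a log.
log = []

def open_app(name: str) -> None:
    log.append(f"opened {name}")

def enter_text(value: str) -> None:
    log.append(f"entered {value}")

def click(button: str) -> None:
    log.append(f"clicked {button}")

# The keyword-to-action binding: maintained by the framework's engineers
KEYWORDS = {"OpenApp": open_app, "EnterText": enter_text, "Click": click}

# The driver table: non-technical testers edit these rows, not the code
test_table = [
    ("OpenApp", "orders"),
    ("EnterText", "42"),
    ("Click", "submit"),
]

for keyword, argument in test_table:
    KEYWORDS[keyword](argument)  # dispatch each row to its action

print(log)
```

The division of labour is the selling point: testers author rows in the table, while the keyword-to-function bindings (the expensive part to maintain when the SUT changes) live in one place.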
Data Driven Automated Testing is, perhaps, the most flexible approach, as it separates test scripts and their contained logic from the data. Using this approach allows us to use multiple sources of data without having to modify the script to get a set of results. Your test data can come from anywhere, such as Excel, Access tables, a SQL database, XML files or other files. A test script will call the data resources in your framework to get its test data. This method means that we can simply make test scripts work for different sets of test data, each of which can have its own saved set of results. This is, perhaps, the quickest route, as it requires significantly fewer test scripts compared to a module-based framework, which is often the preferred route for learning about automated testing or putting a few quick scripts together to see how an application behaves. However, as a note of caution, data-driven approaches require more programming knowledge than the others to develop test scripts.
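One script, many data rows. A minimal sketch, with the data held inline as CSV to keep the example self-contained (in practice it would come from Excel, SQL, XML or a file, exactly as described above); the field names and function are invented for the example:

```python
# Data-driven sketch: a single test script fed by external data rows.
# The CSV is inline for self-containment; real frameworks would pull it
# from Excel, SQL, XML or files. Field names and logic are illustrative.
import csv
import io

TEST_DATA = """order_total,expected_discounted
50,50
100,90
200,180
"""

def apply_discount(total: float) -> float:
    """Script under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

results = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = apply_discount(float(row["order_total"]))
    expected = float(row["expected_discounted"])
    # Tolerant float comparison, since the data arrives as text
    results.append(abs(actual - expected) < 1e-9)

print(all(results))  # True when every data row passes
```

Adding a test case is now a one-line data edit rather than a new script, which is where the flexibility (and the reduced script count) comes from.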
Behaviour Driven Automated Testing. The purpose of a Behaviour Driven Development (BDD) framework is to create a platform that encourages Business Analysts, Developers, Testers etc. to participate actively in defining, building and running automated testing. As a pre-condition, this type of framework requires increased collaboration between development and test teams. It doesn’t, however, require the users to be overly technical or to have detailed knowledge of programming, as the approach uses non-technical, natural language to create test specifications.
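In tools such as Cucumber or SpecFlow, the natural-language scenario lives in a feature file and a thin layer of bindings maps each step to code. A hand-rolled miniature of that idea, kept self-contained for illustration (the scenario text, step patterns and discount logic are all invented):

```python
# BDD sketch: natural-language steps bound to code, in the spirit of
# Cucumber/SpecFlow but hand-rolled here to stay self-contained.
# Scenario text, patterns and logic are illustrative.
import re

STEPS = []  # registry of (compiled pattern, handler) pairs

def step(pattern):
    """Decorator that binds a step pattern to its handler."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

context = {}  # shared state carried between steps, as BDD tools do

@step(r"Given a basket worth (\d+)")
def given_basket(total):
    context["total"] = float(total)

@step(r"When the discount is applied")
def when_discount():
    t = context["total"]
    context["total"] = t * 0.9 if t >= 100 else t

@step(r"Then the total is (\d+)")
def then_total(expected):
    assert context["total"] == float(expected)

# The part a Business Analyst writes: plain language, no programming
scenario = [
    "Given a basket worth 200",
    "When the discount is applied",
    "Then the total is 180",
]

for line in scenario:
    for pattern, fn in STEPS:
        match = pattern.fullmatch(line)
        if match:
            fn(*match.groups())
            break

print("scenario passed")
```

The scenario list is the collaboration artefact: analysts and testers author and review it in plain language, while the step bindings underneath remain the technical team’s responsibility.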
Hybrid Driven Automated Testing. This type of test automation framework is a very popular choice in the industry, as it allows organisations to combine two or more types of framework to generate a flexible, best-of-breed approach that leverages the benefits of the other available types.
When and when not to embark on automated testing
Automated testing clearly has value as it saves a significant amount of time and workload in testing changes:
- During development or modification to detect early regression failure;
- Pre-implementation to give confidence that things will be OK;
- Post implementation production assurance to look for unexpected results.
However, automation is expensive, and so we need to ensure that we can get a return on investment over time. Consider the following key criteria for post-implementation automated testing:
Be under no illusion that if you answer:
- ‘No’ to any of the above, then you might not get the return on investment you seek;
- ‘Yes’ to at least three of the above then you should consider putting a business case together outlining why the spend on automation will result in a decent return on investment.
In summary, automated testing is about your peace of mind that your systems and processes have not been adversely impacted by mandatory releases or voluntary change. Done right, it will increase test coverage over time, increase confidence and save you a whole load of money into the bargain. If you don’t currently embrace automation, then it is worth investigating. If you do embrace it, then it is worth checking that you are getting a decent return and asking if you could get more.
TSG Training offer a full programme of automated testing training, spanning from a simple 1-day introduction through to complex tool usage over 5 days, including:
- Introduction to Test Automation;
- ISTQB Advanced Test Automation Engineer;
- iSQI Certified Selenium Foundation;
- BDD Driven Development using Visual Studio, SpecFlow and WebDriver C#;
- BDD with Cucumber and WebDriver JavaScript;
- Complete LoadRunner 12;
- Complete Unified Functional Tester (UFT) 12;
- Introduction to Appium;
- Selenium WebDriver C# .NET;
- Selenium WebDriver with Java;
- Using ALM 12;
- Using Quality Center 11.
Automated testing – it is not for the few, but for all who care about systems accuracy, cost savings and increased efficiency of testing.