Let’s see why automation testing is required. It’s a basic interview question: why do we need to automate our software applications? We are going to explain this with reference to a few factors –
Reusing the test scripts:
When you want to execute regression test scripts after every build, it makes sense to automate them. In the case of testing a web-based application, the need to automate is even greater, as the test suite has to be run on various browsers such as Internet Explorer, Firefox, and others.
Running automated test scripts unattended saves both human time and machine time compared to executing the scripts manually.
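To make the idea concrete, here is a minimal sketch of reusing one regression check across several browsers. The `launch_browser` and `regression_check` functions are hypothetical stand-ins for whatever your automation tool (for example, Selenium WebDriver) actually provides; the point is that the same script runs unchanged for every browser and every build.

```python
# Hypothetical stand-ins for a real automation tool's API.
BROWSERS = ["Internet Explorer", "Firefox"]

def launch_browser(name):
    # A real suite would start the named browser here; we fake a session.
    return {"name": name, "title": "Login"}

def regression_check(session):
    # The same check is reused, unchanged, for every browser and build.
    return session["title"] == "Login"

def run_suite():
    # Run the identical script against each browser configuration.
    results = {}
    for browser in BROWSERS:
        session = launch_browser(browser)
        results[browser] = regression_check(session)
    return results

print(run_suite())
```

Once a suite like this exists, rerunning it after every build costs almost nothing, which is exactly the reuse argument made above.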
Better use of resources:
While automated scripts are running unattended on machines, testers can do more useful tasks.
On test engagements requiring a lot of regression testing, automated testing reduces the headcount and time needed to complete the engagement, which in turn reduces costs.
What Should Be Automated and What Shouldn’t
It is not always advantageous to automate test cases. There are times when manual testing may be more appropriate.
For instance, if the application’s user interface will change considerably in the near future, then any automation would need to be rewritten. Also, sometimes there simply is not enough time to build test automation. For the short term, manual testing may be more effective. If an application has a very tight deadline, there is currently no test automation available, and it’s imperative that the testing gets done within that time frame, then manual testing is the best solution.
Decide What Test Cases to Automate
- Repetitive tests that run for multiple builds.
- Tests that tend to cause human error.
- Tests that require multiple data sets.
- Frequently used functionality that introduces high-risk conditions.
- Tests that are impossible to perform manually.
- Tests that run on several different hardware or software platforms and configurations.
- Tests that take a lot of effort and time when performed manually.
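Several of the cases above, especially "tests that require multiple data sets", point toward data-driven automation: one script fed many input/expected pairs. The sketch below illustrates this with a hypothetical `validate_discount` function standing in for real application code; adding a data set adds coverage without writing a new test.

```python
def validate_discount(quantity):
    # Hypothetical business rule under test: 10% off for 10+ items.
    return 0.10 if quantity >= 10 else 0.0

# Each tuple is (input, expected result). New data sets extend coverage
# with no new test code - a classic reason to automate.
DATA_SETS = [(1, 0.0), (9, 0.0), (10, 0.10), (100, 0.10)]

def run_data_driven_test():
    # Collect any mismatches; an empty list means every data set passed.
    failures = []
    for quantity, expected in DATA_SETS:
        actual = validate_discount(quantity)
        if actual != expected:
            failures.append((quantity, expected, actual))
    return failures

print("failures:", run_data_driven_test())
```

Running the same four (or four hundred) data sets by hand every build is exactly the repetitive, error-prone work the list above says to automate.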
Create Automated Tests that are Resistant to Changes in the UI
- Automated tests created with scripts or keyword tests are dependent on the application under test.
- The user interface of the application may change between builds, especially in the early stages. These changes may affect the test results, or your automated tests may no longer work with future versions of the application.
- The problem is that automated testing tools use a series of properties to identify and locate an object.
- Sometimes a testing tool relies on location coordinates to find the object; if the location changes, the automated test will no longer be able to find the object when it runs, and will fail.
- To run the automated test successfully, you may need to replace the old names with new ones throughout the project before running the test against the new version of the application.
- However, if you provide unique names for your controls, it makes your automated tests resistant to these UI changes and ensures that your automated tests work without having to make changes to the test itself.
- This also keeps the automated testing tool from relying on location coordinates to find the control, an approach that is less stable and breaks easily.
Despite these caveats, automation has specific advantages for improving the long-term efficiency of a software team’s testing processes.
Test automation supports:
• Frequent regression testing
• Rapid feedback to developers during the development process
• Virtually unlimited iterations of test case execution
• Customized reporting of application defects
• Disciplined documentation of test cases
• Finding defects missed by manual testing
Automated tests should be:
Concise: Test should be as simple as possible and no simpler.
Self-checking: Test should report its results such that no human interpretation is necessary.
Repeatable: Test can be run repeatedly without human intervention.
Robust: Test produces the same result now and forever. Tests are not affected by changes in the external environment.
Sufficient: Tests verify all the requirements of the software being tested.
Necessary: Everything in each test contributes to the specification of desired behavior.
Clear: Every statement is easy to understand.
Efficient: Tests run in a reasonable amount of time.
Specific: Each test failure points to a specific piece of broken functionality (e.g. each test case tests one possible point of failure).
Independent: Each test can be run by itself or in a suite with an arbitrary set of other tests in any order.
Maintainable: Tests should be easy to modify and extend.
Traceable: Tests should be traceable to the requirements; requirements should be traceable to the tests.
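A minimal sketch can show several of these qualities at once. The `add` function below is a toy stand-in for real application code; the two tests are concise, self-checking (assertions, no human interpretation), repeatable, specific (one behavior per test), and independent (each builds its own state and can run alone or in any order).

```python
def add(a, b):
    # Toy stand-in for the application code under test.
    return a + b

def test_add_positive_numbers():
    # Specific: covers exactly one behavior, so a failure points here.
    assert add(2, 3) == 5, "add(2, 3) should equal 5"

def test_add_negative_numbers():
    assert add(-2, -3) == -5, "add(-2, -3) should equal -5"

# Independent and repeatable: run in any order, any number of times.
test_add_positive_numbers()
test_add_negative_numbers()
print("all tests passed")
```

Real suites usually delegate this bookkeeping to a test runner, but each of the qualities listed above is visible even at this scale.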