Journey towards Autonomous Testing to achieve Testing Singularity
The pace of change driven by digital transformation is accelerating as new generations of agile competitors emerge. Customers, too, now expect more rapid updates to their products and services. Testing cannot be allowed to slow this down.
Enterprises leading the way in DevOps operate at the highest level of test automation maturity. Even so, many activities still depend on human intervention: development and maintenance of automation scripts, failure analysis and corrective actions, and proactive learning from data sources such as the Voice of Machines, Voice of Customers, and Voice of Tests.
To support this pace of change, we believe that the next level of transformation for software testing is to shift from Test Automation to Autonomous Testing using AI/ML and make testing the fastest cog in the DevOps chain.
We at Hexaware have identified the white spaces (use cases) that regular test automation does not address. We have built a unified platform – ATOP (Autonomous Test Orchestration Platform) – with a plug-and-play architecture that can serve as a one-stop solution for all testing needs, implementing Autonomous Testing across all of these white spaces and ultimately making software testing independent of human intervention. There are 205 use cases in total, cutting across various personas, different types of testing, and all the layers of testing. Implementing all of them would almost eliminate the need for human intervention in software testing.
ATOP has out-of-the-box integrations with leading application lifecycle management tools, application performance management tools, log aggregators, defect management tools, and app stores, and can be deployed in multiple modes.
This assessment and estimation can easily be automated by consolidating historic data from requirements management, project management, and defect management systems. The estimates can be continually revised against the actual changes and defects occurring over the course of the project, to autonomously raise change requests, approval requests, or backlog updates. Natural language understanding and deep learning algorithms such as CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks) can be leveraged for this activity by going through user stories, acceptance criteria, or feature files (VoB – Voice of Business).
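As a minimal illustration of the idea (not ATOP's actual CNN/RNN pipeline), the sketch below estimates effort and likely defects for a new user story by weighting historic stories by bag-of-words similarity. All story texts and numbers here are invented for illustration:

```python
from collections import Counter
import math

# Hypothetical historic data: user-story text -> (effort in hours, defects found)
HISTORY = {
    "As a user I can reset my password via email": (8, 2),
    "As an admin I can export monthly sales reports": (16, 5),
    "As a user I can filter products by category": (6, 1),
}

def vectorize(text):
    """Bag-of-words term counts for a story."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def estimate(story):
    """Similarity-weighted average of historic effort and defect counts."""
    vec = vectorize(story)
    weighted = [(cosine(vec, vectorize(s)), e, d) for s, (e, d) in HISTORY.items()]
    total = sum(w for w, _, _ in weighted) or 1.0
    effort = sum(w * e for w, e, _ in weighted) / total
    defects = sum(w * d for w, _, d in weighted) / total
    return round(effort, 1), round(defects, 1)

print(estimate("As a user I can reset my PIN via SMS"))
```

A production system would replace the bag-of-words similarity with learned embeddings, but the estimation loop — compare the new story to history, weight, revise — stays the same.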
In any Software Development Lifecycle (SDLC), it is essential that the 360-degree feedback received be converted into test cases and scripts and executed to provide proactive insights to developers and product owners. Drift in requirements during project execution is usually invisible to the human eye and surfaces later as production defects.
Our autonomous engine offers 360-degree coverage to combat such drift, autonomously generating test cases and automated test scripts by analyzing data from requirement management tools. NLP techniques such as BERT, and libraries such as Python NLTK, help us achieve this use case.
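To make the idea concrete, here is a minimal, dependency-free sketch (standing in for the BERT/NLTK pipeline) that turns Given/When/Then acceptance criteria into test-case skeletons. The criteria text is invented:

```python
import re

# Hypothetical acceptance criteria, as they might arrive from a
# requirement management tool.
CRITERIA = """\
Given a registered user on the login page
When they submit a valid username and password
Then they are redirected to the dashboard
Given a registered user on the login page
When they submit an invalid password
Then an error message is displayed
"""

def criteria_to_test_cases(text):
    """Group Given/When/Then steps into test-case dicts."""
    steps = re.findall(r"(Given|When|Then) (.+)", text)
    cases, current = [], {}
    for keyword, clause in steps:
        if keyword == "Given" and current:  # a new Given starts a new case
            cases.append(current)
            current = {}
        current.setdefault(keyword.lower(), clause.strip())
    if current:
        cases.append(current)
    return cases

for i, case in enumerate(criteria_to_test_cases(CRITERIA), 1):
    print(f"TC-{i}: precondition={case['given']!r} "
          f"action={case['when']!r} expected={case['then']!r}")
```

Real requirements are rarely this regular, which is where language models earn their keep; the structure of the output — precondition, action, expected result — is what feeds script generation downstream.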
Continuous changes to user stories can be incorporated back into the test cases and scripts once the changes are made in the requirement management tools. This is an independent process that can run outside the Continuous Integration pipeline, with the newly created or updated scripts selected for execution in the build pipelines.
The test cases and scripts created autonomously must be executed against the specific functionalities developed as part of a release, sprint, or check-in. However, a human Software Development Engineer in Test (SDET) working in a regression team is not always kept in the loop; even when part of an agile team, they must spend considerable time analyzing the changes to select the right tests. Accuracy suffers as a result, leading to defect leakage or bloated test suites with very low defect density.
Autonomous agents can continuously gather data from the agile management tool on completed features, and from the source code repositories, to precisely select tests that configure and execute on their own in a pipeline. Test selection can also be contextualized by grading the current code changes against production fallouts, so that relevant tests are selected specifically for smoke, sanity, and regression execution.
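A simplified sketch of fallout-weighted test selection, assuming a hypothetical coverage map from source modules to the tests that exercise them, plus a fallout score per module derived from recent production incidents:

```python
# Hypothetical coverage map and per-module production-fallout scores.
COVERAGE = {
    "payments/checkout.py": ["test_checkout_happy_path", "test_checkout_refund"],
    "auth/login.py": ["test_login", "test_password_reset"],
    "catalog/search.py": ["test_search_filters"],
}
FALLOUT_SCORE = {"payments/checkout.py": 0.9, "auth/login.py": 0.4,
                 "catalog/search.py": 0.1}

def select_tests(changed_files, budget=3):
    """Rank tests covering the changed files by fallout risk, keep top N."""
    ranked = []
    for path in changed_files:
        for test in COVERAGE.get(path, []):
            ranked.append((FALLOUT_SCORE.get(path, 0.0), test))
    ranked.sort(reverse=True)  # riskiest modules first
    return [test for _, test in ranked[:budget]]

print(select_tests(["payments/checkout.py", "catalog/search.py"]))
```

In practice the coverage map would be mined from test instrumentation rather than hand-written, and the budget would reflect the pipeline's time box.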
Often, tests need to be executed with multiple sets of data and with subtle variations, which are mostly decided by the human SDET based on their application knowledge. Our autonomous engine ensures that edge cases and boundary conditions, along with multiple test flows, can be decided automatically using ML algorithms such as Random Forest, and auto-configured.
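As a down-to-earth illustration of deciding boundary conditions automatically (leaving the Random Forest ranking aside), this sketch emits the classic boundary values a human SDET would pick by hand, for hypothetical numeric field constraints:

```python
# Hypothetical numeric field constraints: name -> (min, max).
FIELDS = {"age": (18, 120), "quantity": (1, 99)}

def boundary_values(lo, hi):
    """Just-below, at, and just-above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def generate_variations(fields):
    """One boundary-value set per constrained field."""
    return {name: boundary_values(lo, hi) for name, (lo, hi) in fields.items()}

print(generate_variations(FIELDS))
```

An ML layer on top of this would rank which of these variations have historically found defects, so the suite grows only where it pays off.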
The autonomous system can learn from the way a human SDET organizes and configures tests for execution across multiple targets such as endpoints, mobile devices, and browsers, and autonomously predict the tests for similar data gathered from agile management tools and source code repositories. Based on the type of tests selected for a change, the required environment scripts can be executed autonomously to prepare the environments, whether on-premises or in containers on the cloud.
Tests often fail during execution because developers change the DOM, for example by renaming a field name or id, or by adding or removing frames. Test scripts failing due to network or application slowness often require re-execution, resulting in delayed test completion and manual intervention to run the prerequisite test scripts.
The autonomous platform ensures fully independent and resilient execution: it can dynamically prioritize, anticipate failures and take steps to avoid them, and provide accurate insights on execution and system quality to the project team in the form of dashboards. The execution status is also updated back to the test management and project management systems. SDETs need to spend only very limited time on execution and can focus on higher-order tasks such as coaching ATOP for accuracy.
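Resilience to renamed DOM ids is commonly implemented as a self-healing locator. The sketch below simulates the DOM as a list of dicts and falls back to alternative attributes recorded from earlier successful runs when the primary id no longer exists; all element data is hypothetical:

```python
# Simulated page elements after a developer renamed the submit button's id.
DOM = [
    {"id": "submit-btn-v2", "name": "submit", "text": "Submit"},
    {"id": "cancel-btn", "name": "cancel", "text": "Cancel"},
]

def find_element(primary_id, fallbacks):
    """Try the recorded id first, then heal via fallback attributes."""
    for el in DOM:
        if el["id"] == primary_id:
            return el, "primary"
    for attr, value in fallbacks:  # e.g. name, visible text, position
        for el in DOM:
            if el.get(attr) == value:
                return el, f"healed via {attr}"
    raise LookupError(f"no element found for {primary_id!r}")

element, how = find_element("submit-btn", [("name", "submit"), ("text", "Submit")])
print(element["id"], how)
```

A real implementation (e.g. on top of Selenium) would also log the healed locator back to the script repository so the next run uses the new id directly.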
Typically, the production release decision is taken in collaboration between the product owner who represents Business, Scrum Master who represents the IT team and the SDET who represents the Quality Assurance function. The challenge arises when these key stakeholders communicate in different languages. While the Business talks the language of business metrics, the Scrum Master works with a functional requirement language and the SDET with a test case / scripts language.
Our autonomous platform helps tackle these challenges by mapping functional requirements to tests, and tests to business metrics. The platform can translate the time-series data coming in from automated test executions and correlate it to the business metrics. The priority of the business metrics, based on current production fallouts and inputs from the Sales and Marketing divisions, can also be taken directly as an input when prioritizing the tests for a release regression.
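At its simplest, correlating test executions with business metrics is a Pearson correlation between two time series. The weekly figures below are invented for illustration:

```python
import math

# Hypothetical weekly series: pass rate of a checkout regression suite
# vs. a business metric (completed orders) over the same weeks.
pass_rate = [0.99, 0.97, 0.92, 0.95, 0.88, 0.99]
orders    = [1040, 1010, 930, 980, 890, 1050]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(pass_rate, orders)
print(f"correlation between test pass rate and orders: {r:.2f}")
```

A strong correlation gives the product owner, Scrum Master, and SDET a shared number: it ties the SDET's test language directly to the business metric the release decision is made on.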