Journey towards Autonomous Testing to achieve Testing Singularity
With advances in leveraging Machine Learning and Deep Learning in software development, humanity is reaching new heights every day. Software built with ML and DL is revolutionizing every aspect of work, life and business around the world. Like a self-driving vehicle powered by AI, Autonomous Software Testing takes advantage of AI/ML to make testing more independent of human intervention and self-learning, aggregating data from the activities it performs.
This may sound like science fiction that can be dreamt only in a utopian future, akin to a Star Trek episode. But the latest advancements in Artificial Intelligence and cloud are enabling us at Hexaware to think like never before and implement such use cases.
Hexaware is embarking on this journey and is developing an autonomous test platform named the “Autonomous Test Orchestration Platform”, or “ATOP” for short. This platform incorporates ML and DL algorithms and cutting-edge NLP techniques to execute autonomous testing use cases.
Now let’s delve deeper into how various use cases of autonomous testing help QA teams ensure quality in high-speed development environments.
This assessment and estimation can easily be automated by consolidating historic data from requirements management, project management and defect management systems. The estimates can be continually revised against the actual changes and defects occurring over the course of the project, to make autonomous change requests, approval requests or backlog updates. Natural language understanding and deep learning algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) can be leveraged for this activity, parsing user stories, acceptance criteria and feature files (the Voice of Business, or VoB).
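The core of such estimation, stripped of the deep learning layer, is learning a rate from historic data and projecting it forward. The sketch below illustrates this with a deliberately simplified defects-per-story-point model; the sprint records, field names and the hours-per-defect factor are all hypothetical placeholders, not ATOP's actual data model.

```python
from statistics import mean

# Hypothetical historic sprint records: story points delivered and defects found.
HISTORY = [
    {"story_points": 21, "defects": 6},
    {"story_points": 34, "defects": 11},
    {"story_points": 13, "defects": 4},
]

def defect_rate(history):
    """Average defects per story point, learned from past sprints."""
    return mean(rec["defects"] / rec["story_points"] for rec in history)

def estimate_test_effort(story_points, history, hours_per_defect=2.5):
    """Project expected defects for a new scope and the test effort (hours)
    to find them; both can be revised as actuals arrive."""
    expected_defects = defect_rate(history) * story_points
    return round(expected_defects, 1), round(expected_defects * hours_per_defect, 1)

defects, hours = estimate_test_effort(40, HISTORY)
print(defects, hours)  # -> 12.2 30.6
```

In a real deployment this rate would be replaced by a trained model fed from the requirements, project and defect management systems, and re-fit continuously as actuals arrive.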
Get in touch with our QA Experts to get an assessment of your organization’s Autonomous Testing Maturity level.
In any Software Development Lifecycle (SDLC), it is essential that the 360-degree feedback received be converted into test cases and scripts and executed to provide proactive insights to developers and product owners. The drift in requirements during project execution is usually invisible to human eyes and surfaces only as production defects.
Our Autonomous engine offers 360-degree coverage to combat any such drift, autonomously generating test cases and automation test scripts by analyzing data from requirement management tools. NLP techniques such as BERT, and libraries such as Python's NLTK, help us achieve this use case.
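While the platform relies on models such as BERT, the basic mapping from acceptance criteria to a test-case skeleton can be shown without any ML at all. The sketch below, a pure-Python simplification using hypothetical field names, maps Gherkin-style Given/When/Then clauses to precondition, action and expected-result sections of a test case.

```python
import re

def story_to_test_case(acceptance_criteria: str) -> dict:
    """Map Gherkin-style acceptance criteria to a skeletal test case:
    Given -> precondition, When -> action, Then -> expected result."""
    mapping = {"given": "precondition", "when": "action", "then": "expected"}
    case, last = {}, "given"
    for line in acceptance_criteria.strip().splitlines():
        m = re.match(r"\s*(Given|When|Then|And)\s+(.*)", line, re.IGNORECASE)
        if not m:
            continue
        keyword, text = m.group(1).lower(), m.group(2).strip()
        if keyword == "and":        # 'And' continues the previous clause type
            keyword = last
        case.setdefault(mapping[keyword], []).append(text)
        last = keyword
    return case

story = """
Given a registered user on the login page
When the user submits valid credentials
Then the dashboard is displayed
And a welcome message is shown
"""
print(story_to_test_case(story))
```

An NLP model earns its keep on free-form user stories that do not follow the Gherkin template; the structured mapping above is only the last step of that pipeline.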
Continuous changes to user stories can be incorporated back into the test cases and scripts, once the changes are made in the requirement management tools. This would be an independent process which can happen outside of the Continuous Integration pipeline and the newly created / updated scripts can be selected for execution in the build pipelines.
The test cases / scripts created autonomously must be executed by specifically targeting the functionalities developed as part of a release, sprint or check-in. Moreover, a human Software Development Engineer in Test (SDET) working in a regression team is not always kept in the loop, nor always part of the agile team, and must spend a lot of time analyzing the changes to select the tests. Accuracy is therefore lost, resulting in defect leakage or bloated test suites with very low defect density.
The autonomous agents can continuously gather data from the agile management tool for the completed features and the source code repositories to precisely select tests which configure & execute on their own in a pipeline. The selection of tests can also be contextualized precisely by grading the current code changes against the production fallouts and relevant tests can be selected specifically for Smoke, Sanity and Regression execution.
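Change-based test selection boils down to intersecting the change set from the source repository with a coverage map of which tests exercise which modules. The sketch below illustrates this with a hypothetical, hand-written coverage map; in practice the map would be mined from past executions and the agile management tool.

```python
# Hypothetical coverage map: which tests exercise which source modules.
COVERAGE = {
    "tests/test_login.py":    {"src/auth.py", "src/session.py"},
    "tests/test_checkout.py": {"src/cart.py", "src/payment.py"},
    "tests/test_profile.py":  {"src/auth.py", "src/profile.py"},
}

def select_tests(changed_files, coverage=COVERAGE):
    """Return only the tests whose covered modules intersect the change set."""
    changed = set(changed_files)
    return sorted(test for test, mods in coverage.items() if mods & changed)

print(select_tests(["src/auth.py"]))
# -> ['tests/test_login.py', 'tests/test_profile.py']
```

Grading changes against production fallouts, as described above, would add a weighting step on top of this intersection before splitting the result into Smoke, Sanity and Regression suites.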
Often, tests need to be executed with multiple sets of data and with subtle variations, which are mostly decided by a human SDET based on their application knowledge. Our Autonomous Engine ensures that edge cases and boundary conditions, along with multiple test flows, can be decided automatically using ML algorithms such as Random Forest, and auto-configured.
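The deterministic part of this, deriving boundary and edge values for a numeric input range, is classic boundary-value analysis and needs no model at all. A minimal sketch (the ML layer that ranks and selects among candidate values is out of scope here):

```python
def boundary_values(lo, hi):
    """Classic boundary-value analysis for a numeric input range [lo, hi]:
    values just outside, at, and just inside each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# e.g. a field that accepts values from 1 to 100
print(boundary_values(1, 100))  # -> [0, 1, 2, 99, 100, 101]
```

An algorithm such as Random Forest would then learn, from defect history, which of these candidate variations are worth executing for a given change.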
The autonomous system can learn from the way the human SDET organizes and configures the tests for execution across multiple targets such as endpoints, mobile devices and browsers, and autonomously predict the tests for similar data gathered from agile management tools and source code repositories. Based on the type of tests selected for a change, the required environment scripts can be autonomously executed to prepare the environments. These environments can be on-premises or containers over cloud.
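Deriving an environment plan from the selected test types can be sketched as a simple union of per-type preparation recipes. The test types and step names below are hypothetical illustrations, not ATOP's actual vocabulary:

```python
# Hypothetical mapping from test type to environment-preparation steps.
ENV_RECIPES = {
    "api":    ["provision_container", "seed_database", "start_mock_services"],
    "mobile": ["allocate_device_farm_slot", "install_build"],
    "web":    ["provision_container", "launch_browser_grid"],
}

def environment_plan(selected_test_types):
    """Union of environment steps needed for the selected test types,
    de-duplicated while preserving a stable execution order."""
    steps, seen = [], set()
    for test_type in selected_test_types:
        for step in ENV_RECIPES.get(test_type, []):
            if step not in seen:
                seen.add(step)
                steps.append(step)
    return steps

print(environment_plan(["api", "web"]))
```

Whether each step targets an on-premises host or a cloud container is then a deployment detail of the step itself.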
Tests often fail during execution because developers change the DOM, such as renaming a field name or id, or adding or removing frames. Test scripts failing due to network and application slowness often require re-execution, resulting in delays in test completion and manual intervention to execute the prerequisite test scripts.
The autonomous platform can ensure totally independent and resilient execution, dynamically prioritizing tests, anticipating failures, taking the necessary steps to avoid them, and providing accurate insights on execution and system quality to the project team in the form of dashboards. The execution status can also be updated back to test management and project management systems. SDETs then need to spend only very limited time on execution and can focus on higher-order tasks such as coaching ATOP for accuracy.
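One building block of resilient execution is automatic re-execution of steps that fail for transient reasons (network or application slowness) rather than genuine defects. A minimal retry-with-backoff sketch, with a hypothetical `TransientError` standing in for whatever failure classification the platform performs:

```python
import time

class TransientError(Exception):
    """Stand-in for failures caused by network or application slowness."""

def run_with_retries(step, max_attempts=3, base_delay=0.01):
    """Re-execute a flaky test step with exponential backoff instead of
    failing the run and waiting for manual intervention."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except TransientError:
            if attempt == max_attempts:
                raise                      # genuine failure: surface it
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:                     # fails twice, then succeeds
        raise TransientError("slow response")
    return "passed"

print(run_with_retries(flaky_step))  # -> passed
```

Self-healing of DOM-related failures (renamed ids, added frames) is a separate mechanism: it would re-resolve element locators before retrying rather than simply waiting.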
Typically, the production release decision is taken in collaboration between the product owner who represents Business, Scrum Master who represents the IT team and the SDET who represents the Quality Assurance function. The challenge arises when these key stakeholders communicate in different languages. While the Business talks the language of business metrics, the Scrum Master works with a functional requirement language and the SDET with a test case / scripts language.
Our Autonomous platform helps tackle these challenges by facilitating the mapping of functional requirements to tests to business metrics. The platform will have the ability to translate the time series data coming in from the automation test executions and correlate it to the business metrics. The priority of the business metrics, based on current production fallouts, can also be taken directly as an input when prioritizing the tests for a release regression. This input, enriched with data from the Sales and Marketing divisions, is very valuable to an organization.
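Once tests are mapped to business metrics, prioritizing a regression run reduces to ordering tests by the weight of the metric each one protects. The metric names, test names and priority weights below are hypothetical illustrations of that mapping:

```python
# Hypothetical metric priorities, e.g. derived from current production fallouts.
METRIC_PRIORITY = {"checkout_conversion": 3, "login_success": 2, "profile_edits": 1}

# Hypothetical mapping of regression tests to the business metric they protect.
TEST_METRICS = {
    "test_payment_flow":  "checkout_conversion",
    "test_login":         "login_success",
    "test_avatar_upload": "profile_edits",
}

def prioritize(tests, test_metrics=TEST_METRICS, priority=METRIC_PRIORITY):
    """Order regression tests by the priority of the business metric they
    protect; unmapped tests sink to the bottom with weight 0."""
    return sorted(tests,
                  key=lambda t: priority.get(test_metrics.get(t), 0),
                  reverse=True)

print(prioritize(["test_avatar_upload", "test_login", "test_payment_flow"]))
# -> ['test_payment_flow', 'test_login', 'test_avatar_upload']
```

The same weights give the product owner, Scrum Master and SDET a shared vocabulary: each test's rank is directly traceable to a business metric.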