The DevOps Pilgrimage - Part III - The Ascent

Posted by Krishna Kumar
August 13th, 2018

1. Functional Test Environment: A functional test environment is an early pipeline stage that typically precedes the integration stage. Here, all outgoing API calls are intercepted and mocked by facades of the external services. This way, we isolate the implementation logic and test it thoroughly before moving to the integration environment, where both the implementation logic and the interaction with live services get tested.
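A minimal sketch of this idea, assuming a hypothetical payment service and an injected client (none of these names come from a real library): the functional stage swaps the live client for a facade that returns canned responses, so the implementation logic is tested without any network calls.

```python
# Hypothetical application code under test: it reaches an external
# payment service only through an injected client object.
def charge_customer(client, customer_id, amount):
    response = client.post("/charge", {"customer": customer_id, "amount": amount})
    return response["status"] == "approved"

class FakePaymentClient:
    """Facade standing in for the live payment service in the functional stage."""
    def post(self, path, payload):
        # Deterministic canned response instead of a real network call.
        return {"status": "approved" if payload["amount"] <= 1000 else "declined"}

# Functional test: exercises the implementation logic in isolation.
client = FakePaymentClient()
assert charge_customer(client, "cust-42", 500) is True
assert charge_customer(client, "cust-42", 5000) is False
```

Because the facade is deterministic, these tests can run on every commit; the same code paths are then re-exercised against live services in the integration stage.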

2. Test fresh deployment in the pipeline: As part of the pipeline, at some stage (functional or integration), we can test a fresh deployment to a clean environment. This test ensures that, if we were ever to deploy to a new datacenter/farm, the deployment code and the system-under-test would deploy and run correctly.

3. Test upgrade process in the pipeline: The upgrade process should also be tested in the pipeline. This matters especially when the application has multi-stage transactions: the first few stages must be executed on the current version, the version upgrade performed, and then the transaction completed on the newer version and verified to still behave correctly.
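The mid-transaction upgrade check can be sketched like this (all class and field names are hypothetical): stage one of a transaction runs on the current version, the upgrade happens, and stage two must complete correctly on the new version against the state the old version left behind.

```python
# Hypothetical two versions of a service sharing persistent state.
class ServiceV1:
    VERSION = 1
    def begin_order(self, store, order_id, amount):
        store[order_id] = {"amount": amount, "state": "pending"}

class ServiceV2(ServiceV1):
    VERSION = 2
    def complete_order(self, store, order_id):
        order = store[order_id]            # state written by the old version
        order["state"] = "completed"
        return order

def test_upgrade_mid_transaction():
    store = {}                                        # shared persistent state
    ServiceV1().begin_order(store, "o-1", 250)        # stage 1 on current version
    # ... the pipeline performs the version upgrade here ...
    order = ServiceV2().complete_order(store, "o-1")  # stage 2 on new version
    assert order == {"amount": 250, "state": "completed"}

test_upgrade_mid_transaction()
```

The real test would deploy actual artifacts and a real datastore, but the shape is the same: write state on version N, upgrade, finish and verify on version N+1.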

4. Blue-Green, Rolling, Canary Deployments: These are highly available deployment techniques that verify the newer version is working correctly in the target environment before directing live traffic at it. We can use these techniques to perform highly available and safe upgrades.
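The core of a canary rollout is a weighted router. A minimal sketch, assuming hypothetical handler functions (in practice this lives in a load balancer or service mesh, not application code):

```python
import random

def make_canary_router(stable, canary, canary_weight=0.05):
    """Route a small, configurable fraction of traffic to the new version."""
    def route(request):
        handler = canary if random.random() < canary_weight else stable
        return handler(request)
    return route

def stable_handler(req):
    return "v1:" + req      # current production version

def canary_handler(req):
    return "v2:" + req      # newly deployed version under observation

route = make_canary_router(stable_handler, canary_handler, canary_weight=0.1)
responses = [route("ping") for _ in range(1000)]
canary_share = sum(r.startswith("v2") for r in responses) / len(responses)
# canary_share should land near the configured 10% weight.
```

If error rates on the canary stay healthy, the weight is ratcheted up toward 100%; if not, it is dropped to 0, which is the rollback.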

5. Secrets: While environment-specific configuration can be distributed by various not-so-secure mechanisms – environment variables, Jenkins configurations, Docker Compose, etc. – distributing secrets requires additional work. Tools such as AWS KMS and Docker secrets address distributing secrets securely to the different environments.

6. Operability Changes Via Pipeline: Logs, metrics, and monitoring systems provide visibility into the application's performance, so the configuration of these supporting systems can be delivered by the same pipeline that delivers the application code, configuration, and secrets. For instance, when a newer version of a service emits a new metric, the configuration of how that metric should be streamed, stored, and aggregated is delivered by that same pipeline.
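A sketch of the idea, with entirely hypothetical metric names and config shape: the monitoring rules are versioned alongside the application, and one deploy step ships both.

```python
# Hypothetical: monitoring configuration versioned with the application,
# so a new metric and its handling rules land in the same deploy.
APP_VERSION = "2.4.0"

MONITORING_CONFIG = {
    "version": APP_VERSION,
    "metrics": {
        "checkout.latency_ms": {"store": "timeseries", "aggregate": "p99", "retention_days": 30},
        # New in 2.4.0: the metric and its pipeline rules arrive together.
        "checkout.retries": {"store": "timeseries", "aggregate": "sum", "retention_days": 7},
    },
}

def deploy(artifact, config, apply_monitoring):
    """One pipeline stage delivers code and monitoring rules together."""
    apply_monitoring(config)   # e.g. push the rules to the metrics backend
    return f"deployed {artifact} with {len(config['metrics'])} metric rules"

applied = []
print(deploy("app-2.4.0.tar.gz", MONITORING_CONFIG, applied.append))
```

The benefit is that monitoring never lags the code: a rollback of the application also rolls back its observability configuration.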

7. Automated performance tests: Performance tests can be automated by generating load from a client, or a farm of clients, and directing it at the servers in a pre-production phase, typically the staging environment. The staging environment is usually provisioned as a smaller but otherwise exact replica of the production environment (with load balancers, caches, reverse proxies, etc.), which makes it an ideal candidate for performance testing.
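A minimal load-generation sketch: concurrent workers issue requests and the pipeline stage fails if a latency percentile exceeds its budget. The request function here is a stand-in that sleeps instead of making a real HTTP call to staging; the budget values are illustrative, not recommendations.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    """Stand-in for an HTTP call to the staging endpoint (hypothetical)."""
    start = time.perf_counter()
    time.sleep(0.005)                    # simulated server latency
    return time.perf_counter() - start

def run_load_test(requests=200, concurrency=20):
    """Fire `requests` calls with `concurrency` parallel workers."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(fake_request, range(requests)))
    return {
        "p50": latencies[len(latencies) // 2],
        "p99": latencies[int(len(latencies) * 0.99)],
    }

stats = run_load_test()
assert stats["p99"] < 0.5, "latency budget exceeded -> fail the pipeline stage"
```

In practice this role is played by dedicated tools (JMeter, Gatling, Locust, etc.), but the pass/fail gate on percentiles is what makes the test automatable.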

8. Automated security tests: Similarly, security tests can be automated in the staging layer, since security, privacy, and encryption are set up in staging as in production. Static code analysis for security vulnerabilities can also run as part of the build pipeline.

9. Branch Builds: We can switch to a mode where the pipeline runs on a branch, verifies all the changes, and automatically merges them to master on success. This way, the master build is never broken, and build-breaking changes stay isolated to their own branches.

10. Every Commit to Production: The final stage of automation is where every commit to master is automatically delivered to production. A great deal of confidence in the pipeline needs to be in place before we can enable this, and it may also require facilities such as ‘feature flags’ through which we can turn features on or off in production – either completely or for a certain set of users. But this is truly the pinnacle of build automation, where every code change reaches production automatically, within minutes.
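A minimal feature-flag sketch (flag names and config shape are hypothetical): a flag can be fully off, fully on, or enabled for a percentage of users, so code ships to production dark and is turned on later. Hashing the user id gives each user a stable decision across requests.

```python
import hashlib

FLAGS = {
    "new_checkout": {"enabled": True, "percent": 20},   # 20% gradual rollout
    "beta_search": {"enabled": False, "percent": 0},    # shipped dark
}

def is_enabled(flag, user_id):
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Hash flag+user so each user gets a stable, evenly distributed bucket.
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["percent"]

assert is_enabled("beta_search", "user-1") is False     # dark feature stays off
```

Flipping a percentage in this table is a configuration change, not a deployment, which is what decouples releasing code from releasing features.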

Conclusion:

The DevOps process promotes faster and smarter deployment, agility, reduced risk, increased value delivery and stability, faster time-to-market, and cost savings. By adopting DevOps, organizations can streamline the relationship between developers and the operations team and make both more productive. A well-implemented DevOps culture can transform your business by changing the overall software process, creating value for both employees and customers and leading to better business performance.


About the Author

Krishna has over 25 years of experience in the software industry, including several start-up companies in the networking, search, and big data space as well as in IT-services and web companies. His prior experience includes web search, big data, and cloud infrastructures. His current responsibilities include leading the adoption of automation technologies, both in the work that Hexaware performs for its customers and in the IT and business processes of those customers. He also leads Hexaware’s investments in the next generation of technologies that promise to transform the hi-tech landscape.

Krishna holds a degree in electrical engineering from IIT Madras. He is based out of our Chennai, India office.






