The DevOps Pilgrimage – Part II – The Purge


August 13, 2018

  • Centralise the build: We first centralise the build on a controlled server. Access to this server is tightly restricted so that the versions of the compilers and libraries installed are managed by responsible technical leads.
  • Automate the build: We then automate the build process using tools such as Jenkins. The first job, typically called the unit job, is the only job with access to the source code and is triggered whenever master/trunk changes. It creates all the artifacts needed by the subsequent stages of the pipeline and stores them in an artifact repository (see the Jenkinsfile sketch after this list).
  • Add Style Checks: We then add style checkers, a.k.a. linters, to the first job. These can be seeded with a simple ruleset to start with and extended progressively. Alternatively, we can tolerate a fixed number of warnings and errors at first, then tighten the rules and reduce that number week by week (the Jenkinsfile sketch below shows such a ratcheted lint stage).
  • Add Unit Tests and Coverage Checks: We then add unit test execution to the first job. Again, as above, we can start with a low threshold for code coverage and progressively tighten the requirement over time. We also use mocking libraries to isolate the system-under-test from all external dependencies and libraries (see the mocking sketch below).
  • Short-Lived Branches Only: We enforce that developers do not maintain long-lived branches but merge to the master (a.k.a. the main trunk) frequently, even several times a day. This, coupled with the above changes to the build system, ensures that a programmer’s changes are verified and made available to all the other team members right away, and that any failures can be acted upon immediately.
  • Automate the deployment: This automation may be as simple as unzipping a compressed file and copying files around, or it may be a more sophisticated deployment automation using tools like Chef. Depending on the toolset, you should also deploy the configuration files for each environment so that they are available to the runtime environment. Container technology, e.g., Docker, can also be used to automate deploying and configuring the services (see the Dockerfile sketch below).
  • Integration testing environment: An integration test environment runs the test version of the system-under-test against stable versions of all the other systems it interacts with. This way, we minimise the risk that the production deployment breaks due to incompatible service versions (see the compose sketch below).
  • Automated API testing: API testing is an intermediate testing level between unit tests and UI tests; it is very useful for making sure that the running services behave the way we want them to. For instance, for CRUD-style APIs, we can ensure that entities can be added, queried, read, updated, re-read and finally deleted via the service API (see the CRUD test sketch below).
  • Automated UI testing: We then add automated UI testing so that the user experience is verified before any manual testing for usability etc. These UI tests are fundamental in making sure that the application does not regress in any functionality already delivered to users (see the Selenium sketch below).
  • Database Evolution Tools: Database updates (new tables, columns, etc.) should also be delivered to the different environments via the build pipeline, typically using schema-evolution tools such as Flyway and Liquibase (a sample migration is sketched below).
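
As a concrete illustration of the first job, here is a minimal sketch of a declarative Jenkinsfile, assuming a Maven-based Java project; the Checkstyle threshold is an arbitrary starting point to be ratcheted down week by week, and archiving in Jenkins stands in for whatever artifact repository the later stages read from. Triggering on every change to master is configured on the Jenkins side, e.g., via an SCM webhook.

    // Jenkinsfile - sketch of the unit job (Maven-based Java project assumed)
    pipeline {
        agent any
        stages {
            stage('Lint') {
                steps {
                    // Tolerate up to 50 Checkstyle violations for now;
                    // reduce this number week by week
                    sh 'mvn -B checkstyle:check -Dcheckstyle.maxAllowedViolations=50'
                }
            }
            stage('Unit Tests') {
                steps {
                    sh 'mvn -B test'
                }
            }
            stage('Package') {
                steps {
                    // Create the artifacts needed by the later pipeline stages
                    sh 'mvn -B -DskipTests package'
                }
            }
        }
        post {
            success {
                // Hand the artifacts to the rest of the pipeline
                archiveArtifacts artifacts: 'target/*.jar'
            }
        }
    }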
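
For the unit-testing step, the sketch below shows how a mocking library isolates the system-under-test from an external dependency; the InvoiceService and TaxRateClient classes are hypothetical, and Mockito stands in for whichever mocking library fits your stack. A coverage gate (e.g., JaCoCo's check goal with a minimum ratio that is raised over time) can then be bolted onto the same job.

    // InvoiceServiceTest.java - the external rate service is mocked, so the
    // test runs with no network access and a fully controlled response
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class InvoiceServiceTest {

        @Test
        public void appliesTaxRateFromTheExternalRateService() {
            // Canned response instead of a live dependency
            TaxRateClient rates = mock(TaxRateClient.class);
            when(rates.rateFor("IN")).thenReturn(0.10);

            InvoiceService invoices = new InvoiceService(rates);
            assertEquals(110.0, invoices.totalFor("IN", 100.0), 0.001);
        }
    }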
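
For the deployment step, here is a minimal Dockerfile sketch that bundles the service with the configuration files for a chosen environment; the base image, file paths and the ENV build argument are assumptions.

    # Dockerfile - package the service plus its per-environment configuration
    FROM openjdk:8-jre
    ARG ENV=staging
    COPY target/app.jar /opt/app/app.jar
    COPY config/${ENV}/application.properties /opt/app/application.properties
    WORKDIR /opt/app
    CMD ["java", "-jar", "app.jar"]

Building with docker build --build-arg ENV=qa -t app:qa . then gives each environment its own correctly configured image.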
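
For the integration environment, a Docker Compose sketch makes the versioning rule visible: the system-under-test runs the candidate build produced by the pipeline, while every other service is pinned to its last known-stable tag. The service names, registry and tags are assumptions.

    # docker-compose.integration.yml
    version: "3"
    services:
      orders-service:
        # system-under-test: the candidate build from the pipeline
        image: registry.example.com/orders-service:${BUILD_TAG}
        ports:
          - "8080:8080"
      billing-service:
        # dependency pinned to its last known-stable version
        image: registry.example.com/billing-service:1.4.2
      customers-db:
        image: postgres:10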
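
For API testing, the CRUD round-trip described above might look like the following REST Assured sketch; the base URI, the /customers resource and its payload are assumptions.

    // CustomerApiTest.java - add, read, update, re-read and delete an entity
    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.equalTo;

    import io.restassured.RestAssured;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class CustomerApiTest {

        @BeforeClass
        public static void pointAtTheTestEnvironment() {
            RestAssured.baseURI = "http://test-env.example.com/api";
        }

        @Test
        public void customerCrudLifecycle() {
            // Create
            String id = given().contentType("application/json")
                    .body("{\"name\": \"Ada\"}")
                    .post("/customers")
                    .then().statusCode(201)
                    .extract().path("id");

            // Read
            given().get("/customers/" + id)
                    .then().body("name", equalTo("Ada"));

            // Update, then re-read
            given().contentType("application/json")
                    .body("{\"name\": \"Ada L.\"}")
                    .put("/customers/" + id)
                    .then().statusCode(200);
            given().get("/customers/" + id)
                    .then().body("name", equalTo("Ada L."));

            // Delete, then confirm the entity is gone
            given().delete("/customers/" + id).then().statusCode(204);
            given().get("/customers/" + id).then().statusCode(404);
        }
    }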
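
For UI testing, here is a Selenium WebDriver sketch of a login-flow check; the URL, element ids and the expected banner text are assumptions.

    // LoginUiTest.java - guards a flow already delivered to users
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginUiTest {

        @Test
        public void loginPageGreetsTheUser() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://test-env.example.com/login");
                driver.findElement(By.id("username")).sendKeys("qa-user");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("login-button")).click();

                // The greeting only appears after a successful login
                assertTrue(driver.findElement(By.id("banner"))
                        .getText().contains("Welcome"));
            } finally {
                driver.quit();
            }
        }
    }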
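
Finally, for database evolution, a Flyway migration is just a versioned SQL file: Flyway applies files named V<version>__<description>.sql in order and records each one in its history table, so every environment converges on the same schema. The table and column below are assumptions.

    -- V2__add_customer_email.sql
    -- Applied exactly once per environment by the pipeline
    ALTER TABLE customer ADD COLUMN email VARCHAR(255);
    CREATE INDEX idx_customer_email ON customer (email);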

About the Author

Krishna has over 25 years of experience in the software industry, including several start-up companies in the networking, search and big data space as well as IT-services and web companies. His prior experience includes web search, big data and cloud infrastructures. His current responsibilities include leading the adoption of automation technologies, both in the work that Hexaware performs for its customers and in the IT and business processes of those customers. He also leads Hexaware’s investments in the next generation of technologies which promise to transform the hi-tech landscape.

Krishna holds a degree in electrical engineering from IIT Madras. He is based out of our Chennai, India office.

Krishna Kumar


For more insights, please feel free to connect with us at marketing@hexaware.com.

