Grounded in the Cloud

Drinking our own champagne: Making product deployment easier with Operations Orchestration


Written by Liran Stern, HP Software R&D


The HP APM and Analytics DevServices group has been supporting HP Software products for many years. With all of this experience, we are accustomed to providing Source Control Management (SCM), build, and release services for dozens of products, and there are very few challenges we haven’t confronted by now. Recently, however, a request arrived that gave our team a new challenge: support a modern product that required big adjustments to our services.



Until this product arrived, we were used to supporting large, traditional enterprise products. These products typically had weekly or bi-weekly QA releases, a market release once a year, and a service pack every quarter (with hotfixes available when needed).


The new product turned the tables. The request to our team was short and sweet: provide high-quality, tested builds to the QA team on a daily basis, and provide a single solution for automatically deploying and configuring the product for developers, QA, and the HP SaaS staging and production environments.


We started looking into the product architecture. The product consists of six main components deployed on four machines (x2 for High Availability, which makes eight machines). In addition, it requires an Oracle DB server, a Vertica big-data server, a Redis data-structure server, and a VIP (Virtual IP) per component and DB server.


The product team includes dozens of developers and QA engineers, averaging 110 commits per week (with dozens of files per commit). Looking at the architecture, it is clear this is not a simple product.


Delivering a quality-tested product on a daily basis requires multi-level tests that validate that the integration between the different components is not broken.


To accomplish this, we created three test levels. All three run within our Jenkins solution, which provides clear visibility both for the developers and for us, the DevServices group.



Level 1 - Quick Build

For each of the six components, every commit goes through a quick build and unit tests. The Jenkins job takes between two and twenty minutes, depending on the complexity of the component. Each job sends a “success” or “failure” mail to the committer, providing immediate feedback on the commit.
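The quick-build feedback loop can be sketched as follows. This is an illustrative Python sketch, not the team's actual Jenkins job (which ran Maven builds); the component name, committer address, build command, and mail format here are all hypothetical.

```python
# Sketch of a Level 1 quick build: run the component's build and unit tests,
# then compose a "success"/"failure" notification for the committer.
import subprocess

def quick_build(component: str, committer: str) -> dict:
    """Run the component's build and report the result to the committer."""
    # The real setup would invoke something like `mvn clean test`; a plain
    # `echo` stands in here so the sketch stays self-contained and runnable.
    result = subprocess.run(
        ["echo", f"building {component}"],  # placeholder for the Maven build
        capture_output=True, text=True,
    )
    status = "success" if result.returncode == 0 else "failure"
    return {
        "to": committer,
        "subject": f"[{status}] quick build: {component}",
        "status": status,
    }

mail = quick_build("ui-frontend", "dev@example.com")
print(mail["subject"])  # [success] quick build: ui-frontend
```

The key design point is the per-commit granularity: the committer gets a pass/fail signal within minutes, rather than discovering a breakage in the nightly build.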


Level 2 – Component Tests

For each of the six components, once the quick build is complete, the component is quickly deployed on a test machine (a simple deployment, via a Maven pom.xml target). The machine is configured to imitate a full product, and UI functional tests are executed to validate that the component is working. Once the tests pass, the component binaries are uploaded to Nexus and the component is marked as “Passed component tests”.


Level 3 – Integration tests

Every hour, all the components marked as “passed component tests” are deployed on an integration environment, and full end-to-end UI tests are performed on that environment. The mobile device test is performed by Appium connected to a mobile emulator. Once the end-to-end tests succeed, the source code that was used to build those binaries is marked as “passed integration tests”.
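The hourly gating logic can be sketched like this. The status strings mirror the ones above, but the component names and the shape of the status map are assumptions for illustration.

```python
# Sketch of the Level 3 gate: pick components that passed component tests
# for the hourly integration deployment, and promote them after a green
# end-to-end run.

def select_for_integration(component_status: dict) -> list:
    """Components eligible for the hourly integration deployment."""
    return sorted(
        name for name, status in component_status.items()
        if status == "passed component tests"
    )

def promote(component_status: dict, e2e_green: bool) -> dict:
    """After a successful end-to-end run, mark the deployed revisions."""
    if not e2e_green:
        return component_status  # statuses unchanged; failure mail goes out
    return {
        name: ("passed integration tests"
               if status == "passed component tests" else status)
        for name, status in component_status.items()
    }

statuses = {"ui": "passed component tests", "api": "failed unit tests"}
print(select_for_integration(statuses))  # ['ui']
```

Note that a component that failed earlier simply stays out of the integration run; it is not retried until a new commit passes the lower levels.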


If the end-to-end tests fail, a mail is sent to the developers with the list of changes that entered the build and the test logs, for easier debugging.


Release build

Every night we perform a full “Release build”. The source code of the binaries that passed integration tests is rebuilt, this time as a release (instead of a snapshot). The binaries are packed into MSI setups and the product is deployed on sanity machines. The product goes through the end-to-end tests once again for a final verification and, once it passes, a mail is sent to QA and the developers to alert them of the progress. All binaries are uploaded to a Nexus server in an HP SaaS environment to enable QA testing.
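The snapshot-to-release step follows the usual Maven versioning convention, which can be sketched as below. The actual build used Maven itself; this helper and the example version string are illustrative only.

```python
# Sketch of the nightly versioning step: the sources that passed integration
# tests are rebuilt with a release version instead of a Maven-style
# "-SNAPSHOT" version.

def to_release_version(snapshot: str) -> str:
    """Turn a '1.2.3-SNAPSHOT' style version into the release '1.2.3'."""
    suffix = "-SNAPSHOT"
    if snapshot.endswith(suffix):
        return snapshot[: -len(suffix)]
    return snapshot  # already a release version

print(to_release_version("2.10.0-SNAPSHOT"))  # 2.10.0
```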


As you can see, since the product is continuously built and tested at various levels throughout the day, we are able to release a daily build to QA with a very high quality level. It is important to mention that this method has a downside: if a component does not pass the component tests or the integration tests, it will not participate in the nightly build. This results in higher product quality, but less content.



We needed the ability to automatically deploy and configure this complicated product. We started looking into standard free tools like Chef or Puppet, but we quickly understood that this was not the best way to go for us: the product is too complicated, is Windows-based, and requires daily deployment modifications.


We eventually chose HP Operations Orchestration v10.10. There were a few main reasons for choosing this tool:


  1. We wanted to use our own brand, which has obvious benefits such as a close relationship with the R&D team located at our site. The OO development team is always happy to listen to our requests and add missing features (or update existing ones)
  2. It is easy for several OO developers to collaborate on a single flow
  3. Overall, it is an excellent tool, with a great UI and an excellent feature set


We allocated a full-time engineer to create automated deployment flows for the product. One of our biggest advantages was that we started creating the automation alongside the product’s development. Pretty soon the developers were architecting the product with automation in mind, which helped us deliver their requests on a daily basis.


You can download Operations Orchestration Community Edition here to see how it can help you with your deployment requirements.


We had several guidelines:


  1. Make the deployment as robust as possible (100 percent success rate)
  2. Provide an easy method for the users (developers, QA engineers, cloud operators) to supply the deployment flow with parameters (e.g. machine names, schema names, etc.) without needing to alter the flow itself
  3. Create a single flow that works on both the local lab and the SaaS cloud lab
  4. Upgrade an existing environment: check the component versions and only replace changed components (to minimize risk and upgrade time)
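Guideline 4 amounts to a diff between what is deployed and what the new build contains. A minimal sketch, with hypothetical component names and version maps:

```python
# Sketch of the upgrade guideline: compare deployed component versions with
# the new build and redeploy only the components that actually changed,
# minimizing risk and upgrade time.

def components_to_upgrade(deployed: dict, new_build: dict) -> list:
    """Return the components whose version differs from what is deployed."""
    return sorted(
        name for name, version in new_build.items()
        if deployed.get(name) != version
    )

deployed = {"ui": "1.4.0", "api": "1.4.0", "worker": "1.3.9"}
new_build = {"ui": "1.4.0", "api": "1.4.1", "worker": "1.4.0"}
print(components_to_upgrade(deployed, new_build))  # ['api', 'worker']
```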


JSON file example



We spent a lot of time making the deployment stable. We used retries where possible to recover from network glitches, added verifications for many steps, and made sure we were not cutting any corners. We decided to create JSON parameter files for customers (with easy-to-replace parameters), and wrote a Java utility that reads those files and initiates the deployment scripts.
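The actual utility was written in Java; the Python sketch below only illustrates the idea: a JSON parameter file with easy-to-replace values (machine names, schema names, and so on) is read and flattened into inputs for the deployment flow. The file layout and key names here are hypothetical, not the team's real schema.

```python
# Sketch: parse a JSON deployment-parameter file and turn it into flow inputs.
import json

PARAMS = """
{
  "environment": "qa",
  "machines": {"app": "qa-app-01", "db": "qa-db-01"},
  "oracle_schema": "APM_QA",
  "high_availability": false
}
"""

def load_deployment_params(text: str) -> dict:
    """Parse the JSON parameter file and flatten it into flow inputs."""
    raw = json.loads(text)
    inputs = {
        "environment": raw["environment"],
        "oracle_schema": raw["oracle_schema"],
        "high_availability": raw["high_availability"],
    }
    # One input per machine role, e.g. machine_app, machine_db.
    for role, host in raw["machines"].items():
        inputs[f"machine_{role}"] = host
    return inputs

print(load_deployment_params(PARAMS)["machine_app"])  # qa-app-01
```

Keeping the environment-specific values in a data file, rather than in the flow itself, is what lets the same flow serve developers, QA, and the SaaS environments.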


This new, modern product required daily changes to the deployment flow. Using OO 10.10, we were able to introduce those changes easily (in a matter of hours) straight to all development, QA, and production environments. Using the source-control integration that was introduced into the product, we were able to easily back up our flows and collaborate among the various OO developers on our team.



OO 10.10 IT process automation comes with a large number of out-of-the-box actions, which covered all the flow actions we needed. There was actually no reason for us to develop any additional actions (even though the product gives us a simple way to add more actions via content packs, taken from the OO site or the large community, or developed ourselves).


Moreover, OO 10.10 introduced a revamped Central UI with many features that, frankly, we can’t believe we could do without, such as automatic refresh, better performance, and many UI improvements.



OO 10.10 – Modern product flow screenshot



Bottom line: developers, QA, and SaaS cloud operators are able to deploy the product (or upgrade an existing deployment) with the click of a button. The whole process takes 30 minutes. We believe that doing it manually would have taken a full day or more, with a huge risk of missing a few steps here and there.


Learn more 

To learn more about how HP Operations Orchestration can help you troubleshoot your flows and automate your IT processes, visit the Operations Orchestration product page. Take the next step and download your HP Operations Orchestration Community Edition free trial. Experience the tangible benefits of Operations Orchestration in less than 30 days. And if you have any requests, feel free to post them to the OO community forums.

