Ask the Expert - DevOps & Pega Deployment Manager with Ryan Feeney
Join Ryan Feeney (RyanFeeney) in this Ask the Expert session (2nd - 6th Dec) on DevOps and Pega Deployment Manager!
Meet Ryan: Ryan is a Principal Software Engineer on the Developer Productivity backlog and has over 10 years of experience with the Pega Platform. Ryan brings to the engineering department several years of hands-on experience building Pega applications for Pega's internal applications department, which makes him a strong advocate for the customer experience. His development contributions have spanned the suite of developer tools and most recently include guiding Pega's Deployment Manager product.
Message from Ryan: Hello everyone! My career interests include keeping a pulse on the software industry outside of the Pega ecosystem to incorporate successful trends and best practices into the experience of developing Pega applications. I've enjoyed my time working on Deployment Manager largely because it is a tool I would have really appreciated in past roles. I'm hoping to use my experience to answer DevOps and Deployment Manager questions and ultimately make your development and deployment practices faster and smoother.
Pega Rule Security Analyzer as a service during DevOps.
We are exploring options to call the Pega Rule Security Analyzer as a service during DevOps to perform static analysis. We currently have our own CI/CD pipeline built and want to call the Rule Security Analyzer as a service from that pipeline. We need input on whether it is already exposed as a service in Pega 8 versions.
We are NOT currently using Pega Deployment Manager. In Deployment Manager we see a "Verify Security Checks" option, which performs security checks during the CI/CD pipeline. We are looking to consume security analysis as a service along similar lines.
Thanks for the question. The Rule Security Analyzer is not a tool that I have much experience with, so this gave me an opportunity to try it out and seek advice from others who have worked directly with it.
At this time the Rule Security Analyzer is not exposed as a service, and there is no current plan to do so. The security checklist item that is a prerequisite for Deployment Manager deployments also does not run the Rule Security Analyzer in an automated fashion; it simply serves as a reminder to run the utility against your rulebase.
The scenario tests require you to have a supported test provider service available to you; the supported providers are CrossBrowserTesting, BrowserStack, and SauceLabs. Once you have that account set up, you should be able to get the auth name and auth key from that vendor.
Do you have such an account available to you? Which vendor are you using?
Can you please give some feedback on how I can update CI/CD to fit my requirement?
I have 4 environments: Dev -> Staging -> UAT -> Production
Consider the scenario below: I'm developing code for Sprint 1 of the project.
Basic functionality is developed in Ruleset version 01-01-01 and is deployed from Dev to SIT through a CI/CD pipeline (Pipeline 1). Now, Pipeline 1 cannot be processed until the UAT window is reached.
There are bugs raised in SIT that we fix in Ruleset version 01-01-02, which is deployed from Dev to SIT through a CI/CD pipeline (Pipeline 2). Pipeline 2 also cannot be processed until the UAT window is reached.
The final bug-fix patch is in Ruleset version 01-01-03, again deployed to SIT using a CI/CD pipeline (Pipeline 3). Pipeline 3 also cannot be processed until the UAT window is reached.
On the day of the UAT window, I process all three pipelines (Pipelines 1, 2, and 3) using Deployment Manager. During UAT, bugs are raised by business users. The fixes need to be deployed from Dev -> SIT -> UAT (Ruleset version 01-01-04), creating Pipeline 4.
On the day of the production push, I will process all the pipelines using Deployment Manager.
Can you please suggest an efficient way to handle deployments using Deployment Manager, where I can update the product on the fly without creating multiple deployment pipelines?
The scenario you've described is a common complaint about the current behavior in Deployment Manager. We hope to streamline this at some point in the future. That being said, I think I have a recommendation that will require less manual intervention on your part.
I would recommend that you use a single pipeline rather than one per patch version. The product rule, or the application it references, would be configured to specify a minor version of a ruleset (so that all patch versions of the ruleset are packaged). As bugs are found, they can be fixed in the dev environment. When you want to promote the bug fixes to the SIT environment, abort the current deployment to allow the next deployment containing your bug fixes to be started. Once UAT is completed, you'll only need to promote a single deployment to production that includes the new functionality and all related bug fixes.
You didn't mention using branches, but they can help to further improve this workflow. You can configure the pipeline to create a new ruleset version with each branch merge. As long as the pipeline is configured to trigger a deployment automatically after branch merge you wouldn't need to manually start a new deployment after aborting the previous one.
I hope this makes sense. Let me know if I was not understanding some part of your configuration that necessitated multiple pipelines.
If adopting DevOps practices is entirely new to the organization, the very first thing to do is to step back and evaluate their current situation.
What are all of the ways through which change is introduced to production? (Rule changes, code changes, platform upgrades, schema changes, runtime configuration.)
What manual steps are necessary to set up an environment?
What parts of the change migration process are the most frequent, time consuming, or error prone?
How much of the code base has automated tests? How often are those tests running?
What are your auditing requirements, and how strict are they?
You can't automate everything at once, so it's important to evaluate your needs and then prioritize them. Embracing DevOps practices is an ongoing journey, and varies for every engineering organization.
If you are on 7.1.9, automating some of these features may be more time-consuming to implement than if you were to upgrade to a later version that provides some of these capabilities out of the box, but it is still possible.
You mentioned that you discovered prpcServiceUtils, but there is also prpcUtils, which can be used going back to 6.1. It requires database access rather than an HTTP API.
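For context on how prpcServiceUtils typically fits into a pipeline: it is driven by a properties file and invoked from a script, so it slots into most CI servers as a shell step. The sketch below is illustrative only; the host, credentials, and product name are placeholders, and the exact property keys vary by version, so check the sample properties file that ships with the utility for the real names.

```
# Illustrative prpcServiceUtils configuration sketch -- key names are
# assumptions; verify them against the sample properties file shipped
# with your Pega version.

# Connection details for the target Pega server (placeholders)
pega.rest.server.url=http://pega-host:8080/prweb/PRRestService
pega.rest.username=deploy.operator@example.com
pega.rest.password=changeme

# What to export (placeholder product rule name and version)
export.productName=MyAppProduct
export.productVersion=01.01.01
export.archive.full.path=/tmp/exports/MyAppProduct.zip
```

A CI job would then invoke the utility's export/import scripts against this file as one stage of the pipeline.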
If you are integrating with Jira, you can write your own connectors and utilities that work with the Jira REST API. There are extension points throughout the platform depending on what your needs are (e.g., updating work items as part of rule commits and branch merges, or displaying a worklist).
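As a rough illustration of the kind of glue utility this enables, here is a minimal Python sketch that posts a comment to a Jira issue over the Jira Server REST API (v2). The base URL, issue key, and credentials are placeholders, and in a real integration this logic would live in a Pega connector rather than a standalone script:

```python
import json
import urllib.request

# Placeholder: your Jira instance's base URL
JIRA_BASE_URL = "https://jira.example.com"


def build_comment_payload(text):
    """Build the JSON body that Jira's v2 'add comment' endpoint expects."""
    return {"body": text}


def add_comment(issue_key, text, auth_header):
    """POST a comment to /rest/api/2/issue/{issueKey}/comment.

    auth_header is a prebuilt Authorization header value, e.g. a
    Basic or Bearer token string (placeholder for your auth scheme).
    """
    url = f"{JIRA_BASE_URL}/rest/api/2/issue/{issue_key}/comment"
    data = json.dumps(build_comment_payload(text)).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": auth_header,
        },
        method="POST",
    )
    # Returns the created comment as parsed JSON
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Against Jira Cloud the endpoint path and payload differ (v3 uses a structured document body), so treat the v2 shape above as an assumption to verify against your Jira version.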