Setting different system names in a single env with two Pega nodes sharing DB
Here's the scenario: we are on PRPC 7.1.9 with CSI 7.14 and PegaSurvey 7.1 ML3. The end goal is to email a survey to an external customer to complete. The security team won't allow any external access to the core region app servers, so we set up a new VM in the DMZ that shares the Core DB with the main Pega nodes. This was done to allow linking between the Survey work object and the originating work object.
After exposing all the required Hazelcast ports between the DMZ and Core, we've seen library compilation issues and caching issues on the new DMZ VM. Functionality that works fine on the Core VMs consistently picks up the wrong rule on the DMZ VM. After some experimentation, we've found that once we run the functionality on a Core VM, it starts working on the DMZ VM as well, presumably because it's now cached properly.
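(For anyone debugging something similar: before trusting the cluster, it's worth confirming each node can actually reach the others on the Hazelcast ports. A quick check from the DMZ VM looks something like this, assuming the default Hazelcast port range starting at 5701; the Core host name is just a placeholder:

    nc -vz core-node-1 5701)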
With all these issues, I was thinking it might work better to set the system names for Core and DMZ differently so they can operate semi-independently while still using the same DB. Based on my research, I can manually specify the system name in prconfig.xml, but for it to work properly I'll need to update the prconfig for all the nodes and remove the DSS that currently sets the name.
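For reference, the prconfig.xml entry I'm describing looks something like this (the value "CoreSystem" is just a placeholder):

    <env name="identification/systemName" value="CoreSystem" />

The DSS to remove would be the corresponding prconfig/identification/systemName/default entry, if I have the naming pattern right.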
My questions are:
1. Will this change in fact help to resolve the issues? Or is there a better way to do this?
2. Is that the correct process for separating them out?
You will gain some level of isolation by using a different system name.
I would be more concerned that the issues were due to bringing up the node without the Hazelcast ports set up properly. Is that the history in your case (i.e., you set up the DMZ VM node, started it, and then realized you needed the DMZ Hazelcast ports)?
That was the DSS setting I was referring to. The need for the Hazelcast ports was discovered after bringing up the DMZ VM, but we caught that from the logfile entries before the functionality was coded. The issues with wrong rule resolution occurred much later, after exposing the required ports and restarting everything.
I ended up changing the prconfig on the DMZ VM and leaving the DSS in place for the Core VMs. The prconfig setting overrode the DSS, and the VM appears to be working properly so far, but we're still completing the testing.
We've completed QA testing on this configuration and so far so good. The only issue we've found with this setup is that the DMZ VM needs to be restarted after every deployment to recognize the new rules in the system. Since it's no longer part of the Hazelcast cluster, it doesn't get the cache invalidation notice after an import.
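Since the node will never see the invalidation broadcast, the restart can simply be folded into the deployment steps; a rough sketch, where the host and service names are placeholders for whatever runs your DMZ node:

    ssh dmz-vm 'sudo service tomcat7 restart'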