Our application is deployed on a WebLogic cluster whose nodes are spread across multiple physical servers. Each server has its own settings, such as the MQ queue manager name, MQ request queue, MQ response queue, and so on.
At run time, each node should use the correct server-specific values, for instance which MQ queue manager to send messages to and receive them from. Using Dynamic System Settings (DSS) doesn't seem to help, since those settings are global. Should we define these server-specific settings in the prconfig.xml file instead? If so, could you please explain how?
You might want to change settings in the prconfig.xml file when you only want the settings to apply to a subset of nodes. Settings in the prconfig.xml file apply to the node on which the prconfig.xml file resides.
Regarding the server settings: which settings exactly are you trying to define?
This is an existing configuration; each of the two servers has its own MQ queue manager on that box. I believe the intention was to avoid a single point of failure (SPOF). Also, other applications are currently running on these servers, so changing the existing configuration is not feasible.
The old-school way is to override the trailing element of the DSS with a prconfig setting.
You need to put the override in that node's prconfig.xml.
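For example (a sketch; the setting purpose and queue manager name below are hypothetical, so substitute your own DSS purpose and MQ values), a DSS whose Setting Purpose is prconfig/mq/queuemanager/default could be overridden on one node like this in that node's prconfig.xml:

```xml
<!-- Hypothetical example: a DSS with Setting Purpose
     prconfig/mq/queuemanager/default supplies the global value;
     this entry overrides it for the node this prconfig.xml lives on.
     "QMGR_A" is a placeholder for this server's queue manager name. -->
<env name="mq/queuemanager" value="QMGR_A" />
```

Each node then picks up its own queue manager name, while nodes without the override fall back to the global DSS value.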
To change a prconfig setting's value for a subset of your nodes, create a classification. Classifications are defined in a nodeclassification setting in the prconfig.xml file, for example, <env name="initialization/nodeclassification" value="Agents"/>.
Note: Dynamic System Settings that reference a classification in their Setting Purpose string, for example prconfig/agent/enable/Agents, are applied only to nodes that include this classification.
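Putting those two pieces together, a sketch of the node-side half (the classification name and agent setting are illustrative, taken from the example above):

```xml
<!-- In prconfig.xml on the nodes that should run agents: -->
<env name="initialization/nodeclassification" value="Agents" />
<!-- A DSS whose Setting Purpose is prconfig/agent/enable/Agents
     would then apply only to nodes carrying this classification. -->
```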
The newer method involves passing a node type to the JVM and associating agents/listeners/… with that type. This became available in Pega 7.3; it is best described in the 7.4 help link above.
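As a sketch of how that looks at startup (the flag name -DNodeType and the type value below are my recollection of the Pega 7.3+ mechanism; verify against the help link before relying on them):

```shell
# Illustrative: give each WebLogic managed server its own node type
# at JVM startup. "BackgroundProcessing" is an example node type.
JAVA_OPTIONS="$JAVA_OPTIONS -DNodeType=BackgroundProcessing"
```

Agents and listeners associated with that node type then run only on JVMs started with it.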
Hi Masoud - I feel you are going about this the wrong way. If I understand your need correctly, you are trying to achieve failover with this setup (i.e., prevent a single point of failure). I don't think this should be the way to get that to work.
I believe the MQ setup allows you to provide a comma-separated list of provider URLs (I recall IBM WebSphere allowed that, at least on one of my previous projects). That is what you should use to prevent a single point of failure, as opposed to coding that logic into the calling application. So you can create a single MQ manager and corresponding send and receive queues, but set them up with an endpoint URL that lists all the servers where they are hosted in a given environment. That way you put the responsibility for load balancing on your application server.

If WebLogic doesn't allow you to set up the queues in that manner (i.e., with a comma-separated list of URLs), I would consider using a load-balancer URL as your endpoint URL, balancing the load across the multiple instances you have running in a given environment.
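As a sketch of what that connection-factory configuration could look like with the IBM MQ classes for JMS (the hostnames, ports, channel, and queue manager name are placeholders; treat this as an illustration of the comma-separated connection list, not a definitive setup):

```java
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqFailoverConfig {
    public static MQConnectionFactory createFactory() throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        // Client transport, so the node can reach a queue manager on another box.
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Comma-separated host(port) list: the client tries each entry in turn,
        // so one failed host is not a single point of failure.
        cf.setConnectionNameList("mqhost1(1414),mqhost2(1414)");
        cf.setChannel("APP.SVRCONN");   // placeholder channel name
        cf.setQueueManager("QM_APP");   // placeholder queue manager name
        return cf;
    }
}
```

The same connection name list can usually be supplied through the application server's JMS foreign-provider configuration instead of code, which keeps it out of the application entirely.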