Are you sure pysystemname is the same for both nodes in the pr_sys_statusnodes table (in the data schema)?
The system names of the two nodes must be the same in order for them to join the same cluster and maintain rule sync.
System Pulse is no longer used to maintain rule sync among nodes when Hazelcast/Ignite clustering technology is in use; instead, it is used to indicate that a node is up, which in turn is used by Hazelcast/Ignite to maintain rule sync among the nodes.
The pr_sys_statusupdates table will be empty when clustering protocols/technologies are used to maintain rule sync.
Please make sure the system names of the two nodes are the same.
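To double-check, you could query the table directly. This is only a sketch; the schema qualifier `data` is an assumption, so adjust it to the actual name of your data schema:

```sql
-- Compare system name, cluster address, and run state across the nodes.
-- Schema name "data" is a placeholder for your data schema.
SELECT pysystemname, pyclusteraddress, pyrunstate
FROM data.pr_sys_statusnodes;
```

If pysystemname differs between the rows, the nodes will not form one cluster.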
The pysystemname value is the same for both nodes (with the value pega) in the pr_sys_statusnodes table. Also, pyclusteraddress has the value nodeIP:5701 and pyrunstate is Running for both nodes in that table.
I could not see any table named pr_sys_statusupdates. Do you mean pr_sys_statusdetails? If yes, the pr_sys_statusdetails table is not empty and contains more than 5000 records.
Also, is there any specific Hazelcast configuration that I need to verify, e.g. in prconfig.xml? In my environment I have the following setting in prconfig.xml with regard to Hazelcast:
<env name="cluster/hazelcast/interface" value="IP address of node 1"/>
I believe you are using incorrect values for the two prconfig settings you shared.
<env name="identification/cluster/protocol" value="none"/> - I am not sure who recommended this to you; usually you should not touch this setting, and since you have set it to none, I am afraid Hazelcast will not be able to play its role here.
<env name="cluster/hazelcast/members" value="host name of node 1"/> - you have specified this setting separately on each of the two nodes, so I don't think both members will become part of one cluster. That said, are you able to see the two members being added in the PegaRULES log? For example, an entry like:
member_1:5701 --> This
Also, have you tried using the setting as <env name="cluster/hazelcast/members" value="host name of node 1,host name of node 2"/>?
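To illustrate, a minimal prconfig.xml fragment that both nodes could share might look like the following. This is only a sketch: the host names are placeholders, and it assumes the default Hazelcast port; the interface setting, if you keep it at all, is node-specific and would differ per node.

```xml
<!-- Sketch: identical members list on both nodes; host names are placeholders. -->
<!-- Do NOT set identification/cluster/protocol to "none"; leave it at its default. -->
<env name="cluster/hazelcast/members" value="node1.example.com,node2.example.com"/>
```

The key point is that the members value should list all cluster members and be the same on every node, rather than each node listing only itself.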
Hi Arun, thanks for the response. After removing those settings from prconfig.xml, the rule sync issue is resolved.
In addition to the above change in prconfig.xml, I restarted the nodes one by one rather than simultaneously (which is different from our environment's deployment pattern).
Our environment runs Pega inside Docker containers, so our deployment pattern deploys the Pega containers simultaneously on both nodes. Will this pattern of parallel deployment of Docker containers onto multiple VMs cause any issue with the Hazelcast configuration or Elasticsearch?
When Docker starts the container image, it may assign a different port; in that case, how do the other Pega cluster members know that Hazelcast is actually running on that different port?
In my environment, we have two Docker servers running three containers each. Synchronization is not happening because, on startup, Docker assigns a different port to each container. What configuration can be done to fix this issue? It would be great if you could share a sample prconfig.xml file from any one node.
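One common way to keep the Hazelcast port predictable is to publish it explicitly instead of letting Docker pick a random host port. The following is only a sketch: the image name and container name are placeholders, and it assumes one Pega container per host with Hazelcast listening on the default port 5701 inside the container.

```shell
# Publish the Hazelcast port on a fixed host port so other members can reach it.
# Image and container names are placeholders; adjust to your environment.
docker run -d \
  --name pega-node1 \
  -p 5701:5701 \
  my-pega-image:latest
```

With several containers per host, each container would need its own fixed host port (or host networking), and the cluster/hazelcast/members list would then have to reference those published host:port pairs so the members can find each other.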
Right now, search is not working for us on any of the nodes, because of Hazelcast (I believe).