I am not seeing these agents (ClusterAndDBCleaner & NodeCleaner) in Pega Infinity; they were initially replaced by "SystemCleaner". Has Pega changed this again? We are seeing a DELETE query on the PR_PAGE_STORE table taking more than 20 minutes and running a couple of times a day. As I understand it, passivation data is held in this table and the "*Cleaner" agent is responsible for removing it.
Thanks Harish... could you please help me understand why the DELETE query ran for 20 minutes (multiple times)? And of the two agents, which one is responsible for deleting passivated data from pr_page_store?
The above setting enables the job scheduler to process the passivated data of all nodes, so having pyClusterAndDBCleaner on just one node is enough to process every node's data.
If the above setting is set, pyNodeCleaner will skip the cleanup, as pyClusterAndDBCleaner will do the job.
From the Pega ALERT log, check whether the DELETE query is fired by the job scheduler as part of cleanup, or by activation: when a passivated user comes back into session, the requestor data is read back into memory, and this also triggers a DELETE query to remove that particular user's rows.
Thanks for the update Harish, that was useful info. In our environment, as you noted, the DSS isn't set; that means the ClusterAndDBCleaner agent is running and I suspect it is cleaning data only on the node where it runs. When we checked the database, we observed more than 15M records per node. If I enable the DSS, I worry it may affect DB performance. May I know the impact of truncating the pr_page_store table? And do I need to truncate both pr_page_store and pr_sys_context? Please confirm.
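For reference, what we are considering running is along these lines (a sketch only, assuming plain SQL access to the Pega data schema; we would do this in a maintenance window with the nodes quiesced, since it drops passivated state for every user):

```sql
-- Check current volume in both tables before deciding
SELECT COUNT(*) FROM pr_page_store;
SELECT COUNT(*) FROM pr_sys_context;

-- Truncate both tables together, since they hold related passivated state.
-- NOTE: this removes passivated requestor data for ALL users on ALL nodes;
-- any user whose session state was passivated will lose it on reactivation.
TRUNCATE TABLE pr_page_store;
TRUNCATE TABLE pr_sys_context;
```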
If the SystemCleanerCanProcessDatabaseObjectsFromAllNodes DSS is not set to true, pyClusterAndDBCleaner will not be able to clean up the other nodes' data, and pyNodeCleaner on each node will do the job of cleaning up its own passivated data.
Remember that pyClusterAndDBCleaner has the capability to clean up every node's passivated data from the pr_page_store & pr_sys_context tables.
And pyNodeCleaner running on each node can only clean up its own passivated data.
The below DSS helps avoid concurrency issues: with the pyNodeCleaner scheduler enabled on 3-4 nodes, all running at the same time (e.g. 12 AM) against a single DB table, you can get deadlocks or contention. Instead, with the DSS set to true, pyNodeCleaner does nothing, and pyClusterAndDBCleaner (enabled on the BackgroundProcessing node type) qualifies all the nodes' data and cleans it up.
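For clarity, the DSS record itself would look something like the following (the owning ruleset shown is my assumption for an engine-level setting; please verify it against your environment before creating the record):

```
Owning Ruleset : Pega-Engine   (assumed; confirm in your environment)
Setting Purpose: SystemCleanerCanProcessDatabaseObjectsFromAllNodes
Value          : true
```

After creating or changing the DSS, the nodes typically need a restart for the cleaner behavior to pick it up.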