Posted: 30 Mar 2017 14:09 EDT Last activity: 26 Jun 2017 15:11 EDT
How to purge a huge amount of data from Pega work tables?
Our application plans to purge data older than 2 years from 3 work tables and also implement a weekly periodic purge.
Number of rows that need to be purged from the individual work tables:
1) PURGE/ARCHIVE TOOL
Is this approach ideal for purging 70 million rows? I also read an article stating that this tool is only suitable for applications with fewer than 100K records; the Pega default batch size is 10K, and the default interval between runs is 30 days. How often can we run this activity, and how many records can we purge in a single run?
We want to know the constraints and the necessary steps to follow for implementing this approach.
2) Run delete queries with frequent commits from a SQL/Oracle client
Delete queries can be run with a commit after deleting a certain number of rows. This ensures that the UNDO tablespace, which must hold the deleted records in case the user rolls back the transaction, is not over-utilized.
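A minimal sketch of the batched-delete-with-commits idea, using SQLite as a stand-in for Oracle (in Oracle you would typically batch with ROWNUM inside a PL/SQL loop and COMMIT between iterations). The table and column names here are hypothetical, not the actual Pega work-table schema:

```python
import sqlite3
from datetime import datetime, timedelta

# SQLite stand-in for an Oracle work table; "pc_work" and "create_time"
# are illustrative names only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pc_work (id INTEGER PRIMARY KEY, create_time TEXT)")

# Seed 1000 rows: odd ids are ~3 years old, even ids are ~30 days old.
now = datetime(2017, 3, 30)
rows = [(i, (now - timedelta(days=3 * 365 if i % 2 else 30)).isoformat())
        for i in range(1000)]
conn.executemany("INSERT INTO pc_work VALUES (?, ?)", rows)
conn.commit()

cutoff = (now - timedelta(days=2 * 365)).isoformat()
BATCH = 100  # commit after every 100 deleted rows to cap undo usage

while True:
    cur = conn.execute(
        "DELETE FROM pc_work WHERE id IN "
        "(SELECT id FROM pc_work WHERE create_time < ? LIMIT ?)",
        (cutoff, BATCH))
    conn.commit()  # frequent commits keep the rollback segment small
    if cur.rowcount == 0:
        break

remaining = conn.execute("SELECT COUNT(*) FROM pc_work").fetchone()[0]
print(remaining)  # only rows newer than the 2-year cutoff remain
```

The batch size is a tuning knob: larger batches delete faster but hold more undo; smaller batches commit more often and keep each transaction cheap.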
3) Using a Pega agent
We could create a standard Pega agent that runs periodically to execute a delete query, but any Pega process or agent would create threads within the WebSphere JVM process. A DB-purging agent would eat into the heap space and other resources allotted to the JVM and the rest of its processes.
Please let us know the best approach to delete this huge amount of data and to implement the periodic purge.
With my limited database knowledge: for a requirement like this, it would be good to store the data in database partitions created per week or month on the fly. Dropping a particular partition should then do the work. Refer to the attached article for details. In my view this is the most efficient way.
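A minimal sketch of the partition-drop idea, simulated in SQLite with one table per ISO week (Oracle would use interval partitioning and `ALTER TABLE ... DROP PARTITION`, a metadata-only operation far cheaper than row-by-row deletes). All names here are hypothetical:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")

def week_table(d):
    # Name of the weekly "partition" holding rows created in d's ISO week.
    year, week, _ = d.isocalendar()
    return f"pc_work_{year}w{week:02d}"

def insert_row(create_date, payload):
    # Route each row to its weekly partition, creating it on the fly.
    table = week_table(create_date)
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} (payload TEXT)")
    conn.execute(f"INSERT INTO {table} VALUES (?)", (payload,))

insert_row(date(2015, 1, 5), "old case")   # lands in the 2015 week-02 table
insert_row(date(2017, 3, 27), "new case")  # lands in the 2017 week-13 table

# Purging an expired week is a single DROP, not millions of deletes.
conn.execute(f"DROP TABLE {week_table(date(2015, 1, 5))}")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # only the current week's table remains
```

The key point is that retention becomes a constant-time metadata operation per period, so the weekly purge never has to scan or delete individual rows.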
Our LSA suggested going with approach 2, considering the huge volume of data to be purged and the constraints we have with the PURGE/ARCHIVE tool. Using a Pega agent would consume more CPU, which could end up bringing down the node.