The running agent will take a CPU hit from the complex looping logic. How much CPU percentage it uses depends on the OS, the number of CPUs, and other configuration. It is advised to do a complete test, verify alerts, and check other OS statistics for benchmarking.
Yes, I agree, but this testing will be done in the PAT environment. I need to think of another approach before the project goes to PAT in case we have a problem with the looping logic in the agent. Is there any way, or experience from previous purge projects, that can help us conclude whether the JVM can handle the purge-process load?
This advanced agent runs in recurring mode: it will run daily (weekdays) at 12:30 AM, when there is no user load on the system.
It will fetch 500 records at a time in a sub-loop, and 2,000 records overall in the main loop.
Yes, we are using custom archive and purge. For each record, the archival is done first, and then the related data for the main deal in its custom tables is deleted in a loop using Obj-Delete-By-Handle.
Once all the data is deleted, the Commit method is called to commit all the transactions.
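The loop structure described above (sub-batches of 500 inside a main loop capped at 2,000, with archive, delete, then commit per record) can be sketched in plain Java. This is only an illustration of the control flow, not engine code: the `PurgeOps` interface and its `fetch`/`archive`/`deleteRelated`/`commit` methods are hypothetical stand-ins for the activity's query step, the custom archive, Obj-Delete-By-Handle, and the Commit method.

```java
import java.util.List;

public class PurgeBatchSketch {
    static final int MAIN_BATCH = 2000;  // records processed per agent run (main loop cap)
    static final int SUB_BATCH = 500;    // records fetched per query (sub-loop size)

    // Hypothetical stand-ins for the activity steps described in the thread.
    interface PurgeOps {
        List<String> fetch(int offset, int limit); // next sub-batch of deal handles
        void archive(String handle);               // custom archive of the main deal
        void deleteRelated(String handle);         // delete related custom-table rows by handle
        void commit();                             // commit all transactions for this record
    }

    static int run(PurgeOps ops) {
        int processed = 0;
        while (processed < MAIN_BATCH) {                          // main loop: up to 2,000 records
            List<String> batch = ops.fetch(processed, SUB_BATCH); // sub-loop: 500 at a time
            if (batch.isEmpty()) break;                           // nothing left to purge
            for (String handle : batch) {
                ops.archive(handle);        // archive the deal first
                ops.deleteRelated(handle);  // then delete its related data in a loop
                ops.commit();               // commit once all data for the record is deleted
                processed++;
            }
        }
        return processed;
    }
}
```

With this shape, a run over fewer than 2,000 pending records drains them all and stops early, while a larger backlog is capped at 2,000 per nightly run.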
Since it is an advanced agent, we wanted to avoid any locking issues, which is why we decided to go with a single node. With multiple nodes it would be difficult to calculate the timing on each node and then schedule the agent to run at a particular time that won't collide with the agent on another node. In my previous experience, a standard agent can handle thousands of records at a time, but I am not sure about the advanced agent's looping logic.
I am trying to understand your use case a little more.
1) Does your advanced agent have a queue table associated with it?
2) Are you using the PRQueueIterator to iterate over the entries in your queue? You can get the iterator using tools.getThread().getQueueManager().iterator(<classname>)
3) A standard agent works with one entry at a time, but it will keep going until the queue is drained. Using a standard agent can let you work with multiple nodes at the same time, because you are only locking one record at a time.
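The multi-node point in 3) can be illustrated with a small plain-Java simulation: each "node" is a thread that claims one entry at a time from a shared queue until it is drained. This is not Pega code; the atomic `poll()` is a stand-in for the engine's per-record locking, which is what prevents two nodes from processing the same entry.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class StandardAgentSketch {
    // Simulates the standard-agent pattern: each node pulls one queue entry at a
    // time until the queue is drained. The atomic poll stands in for per-record
    // locking, so no entry is ever processed by two nodes.
    static int drain(Queue<String> queue, int nodes) {
        AtomicInteger processed = new AtomicInteger();
        Thread[] workers = new Thread[nodes];
        for (int n = 0; n < nodes; n++) {
            workers[n] = new Thread(() -> {
                String entry;
                while ((entry = queue.poll()) != null) { // claim exactly one entry
                    processed.incrementAndGet();         // archive + purge would happen here
                }
            });
            workers[n].start();
        }
        for (Thread w : workers) {
            try {
                w.join(); // wait until every node has finished draining
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return processed.get();
    }
}
```

Because each node only ever holds one record, adding nodes increases throughput without the schedule-coordination problem described earlier for the advanced agent.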