We are supporting a Pega application where we manually add organizations to an existing org tree like the following:
- Grandparent Org
- Parent Org
- Child Org
- etc (at least 10 levels deep)
We have approximately 8,300 organization records spanning roughly 9 levels. When an existing record is updated, the system must also update all of its children to keep the org path current. The previous developer maintained a full path string on each org record (i.e., /GrandparentOrg/ParentOrg/ChildOrg/ ... etc.)
The problem we run into: if we update a lower-level organization, the system updates correctly; however, if we update a higher-level org (at level 1, 2, or 3), which can require recursively updating more than 1,000 records, we get an error. On the successful transactions, the temporary data pages clean up successfully from the clipboard; on the failed ones, we see the following errors in the clipboard (see attached).
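For context, the path-maintenance logic described above is conceptually a recursive walk of the subtree. Below is a minimal, illustrative Java sketch of that idea; the class and field names (Org, children, path) are my own placeholders, not the actual Pega rules or clipboard structures:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of recursive org-path maintenance.
// Not the actual Pega implementation; names are hypothetical.
public class OrgPathUpdater {
    static class Org {
        String name;
        String path;                          // e.g. /GrandparentOrg/ParentOrg/
        List<Org> children = new ArrayList<>();
        Org(String name) { this.name = name; }
    }

    // Rebuild this org's path from its parent's path, then recurse
    // into every child so the whole subtree stays consistent.
    static int updatePaths(Org org, String parentPath) {
        org.path = parentPath + org.name + "/";
        int updated = 1;
        for (Org child : org.children) {
            updated += updatePaths(child, org.path);
        }
        return updated;                       // number of records touched
    }

    public static void main(String[] args) {
        Org root = new Org("GrandparentOrg");
        Org parent = new Org("ParentOrg");
        Org child = new Org("ChildOrg");
        root.children.add(parent);
        parent.children.add(child);

        int count = updatePaths(root, "/");
        System.out.println(count);            // 3 records updated
        System.out.println(child.path);       // /GrandparentOrg/ParentOrg/ChildOrg/
    }
}
```

The point of the sketch is that the work grows with subtree size: updating a level-1 org touches thousands of records in one transaction, which is where we see the failures.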
We also tried updating the Dynamic System Setting for:
to 2000 (instead of 1000), but that didn't help either ...
Is there a limit on the number of Data Pages you can have open at once?
I am running Pega 8.3 (personal edition) on a Windows 10 Professional laptop.
Any help and/or assistance will be greatly appreciated.
Sorry for the delay ... I have added debug messages to the Java console.
The first error message that appears in the Java console is the following:
Exception caught when evaluating when
java.lang.RuntimeException: Scalar query .pyUI.pyQueryTimeoutValue on page 'pegaInternalReportTempPage' of type 'Rule-Obj-Report-Definition' using 0 parameter(s)
The problem is very intermittent. Currently it seems to bomb right around the 60-second mark; however, yesterday and on previous days it would bomb much sooner.
As described above, the logic recurses through a tree (parent/child relationships). I have varied the number of records it processes: runs of 258, 603, and 691 records succeeded; runs of 872 and 1,302 records succeeded today (but failed on previous days). Currently it fails on 7,554 records after about 60 seconds.