We are getting the below alert in our log file. Please let us know how to resolve the PEGA0005 alert caused by pr_sys_locks.
GMT*8*PEGA0005*6157*500*353fb4828f7466d25868b062e668e790*NA*NA*AE11AFC7DE5187BDBA2BD42BC8F9F7F82*none*XXX-XX-XXXX-Work*PegaRULES:07.10*02b8c3d24e72134941050057c14d21a3*N*0*AE11AFC7DE5187BDBA2BD42BC8F9F7F82*288*Thread-1530 (HornetQ-client-global-threads-1946063216)*STANDARD*com.pega.pegarules.data.internal.store.DatabasePreparedStatementImpl*EMAIL.XXXXEmailListener.Listener*NA***NA*****NA*NA*NA*NA*NA*NA*NA*Database query took more than the threshold of 500 ms: 6,157 ms SQL: select pxOwnerId as "pxOwnerId", pxUpdateOperator as "pxUpdateOperator", pxUpdateOpName as "pxUpdateOpName", pxExpireDateTime as "pxExpireDateTime", pxUpdateDateTime as "pxUpdateDateTime", pxLockHandle as "pxLockHandle", pxCreateDateTime as "pxCreateDateTime", pxUserHost as "pxUserHost" from DATAADM.pr_sys_locks where pzInsKey = ?*
***Edited by Moderator Marissa to update categories***
Do you have a large number of users logged in at a particular time? Can you take a count from the pr_sys_locks table at peak time and check what entries are in the table? You can go through the article below.
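A minimal sketch of that check, assuming the DATAADM schema and the column names shown in the alert's SELECT list; the timestamp function may need adjusting for your database vendor:

```sql
-- Total rows in the lock table (run at peak time)
SELECT COUNT(*) FROM DATAADM.pr_sys_locks;

-- Locks that have already expired but were never cleaned up
-- (pxExpireDateTime appears in the alert's own query)
SELECT pxOwnerId, pxUpdateOperator, pxExpireDateTime
FROM DATAADM.pr_sys_locks
WHERE pxExpireDateTime < CURRENT_TIMESTAMP;
```

A steadily growing count, or many expired rows, would point at lock-table bloat rather than user volume alone.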
The alert reports "Database query took more than the threshold of 500 ms: 6,157 ms" for com.pega.pegarules.data.internal.store.DatabasePreparedStatementImpl on the EMAIL.XXXXEmailListener.Listener requestor, so it is something related to the Email Listener.
An instance of Data-Admin-Operator-ID (i.e., an operator) can't be locked, so this doesn't appear to be an error caused by locking of an operator. Are you aware of the scenario or circumstances under which it is happening?
From the discussion I can see the email listener creating a work object; at the time of creation it queries the pr_sys_locks table. This table appears to be highly fragmented due to frequent inserts and deletes. Please perform the operations below.
1. Reorganize and Rebuild DB Indexes –
The Database Engine automatically maintains indexes whenever insert, update, or delete operations are made to the underlying data. Over time these modifications can cause the information in the index to become scattered in the database (fragmented). Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly.
You can remedy index fragmentation by reorganizing or rebuilding DB indexes. Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages.
Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and non-clustered indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order of the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor value.
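As a sketch, assuming the Pega database is Microsoft SQL Server (the description above matches SQL Server behavior — on Oracle the equivalent is `ALTER INDEX ... REBUILD`, on DB2 a `REORG`), fragmentation can be measured and remedied like this:

```sql
-- Check fragmentation of the pr_sys_locks indexes
SELECT i.name AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('DATAADM.pr_sys_locks'),
         NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Lightweight defragmentation (commonly used for moderate fragmentation)
ALTER INDEX ALL ON DATAADM.pr_sys_locks REORGANIZE;

-- Full rebuild (drops and re-creates the indexes; use for heavy fragmentation)
ALTER INDEX ALL ON DATAADM.pr_sys_locks REBUILD;
```

A common rule of thumb is to reorganize between roughly 5% and 30% fragmentation and rebuild above that, but follow your DBA's guidance for the specific platform and maintenance window.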
2. Updating Statistics on the DB tables –
Updates query optimization statistics on a table or indexed view. By default, the query optimizer already updates statistics as necessary to improve the query plan.
Updating statistics ensures that queries compile with up-to-date statistics.
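A short sketch of the statistics refresh, again assuming SQL Server; the Oracle equivalent is shown as a comment:

```sql
-- SQL Server: refresh optimizer statistics for the lock table
UPDATE STATISTICS DATAADM.pr_sys_locks;

-- Oracle equivalent:
-- EXEC DBMS_STATS.GATHER_TABLE_STATS('DATAADM', 'PR_SYS_LOCKS');
```

Because pr_sys_locks churns constantly (rows inserted on lock acquisition and deleted on release), its statistics go stale faster than on most tables, which is why an explicit refresh can help even when auto-update is enabled.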
3. Perform the above DB maintenance operations on a scheduled routine. This will help prevent application performance bottlenecks arising from the database.