Kensho Tsuchihashi (KenshoTsuchihashi)
Project Delivery Leader
Pegasystems Inc.
KenshoTsuchihashi Member since 2010 137 posts
Posted: March 22, 2021
Last activity: March 22, 2021

Frequent "duplicate key value violates unique constraint" error in the database



A customer sees many "duplicate key value violates unique constraint" errors in the database log and wants to suppress them. In this post, I am sharing what these errors are and what we should do about them.


  • Issue

In the database, "duplicate key value violates unique constraint" is logged intermittently. For example, below is a screenshot of my local PostgreSQL database log file. You can find the log file at "C:\Program Files\PostgreSQL\9.6\data\pg_log\postgresql-YYYY-MM-DD_HHMMSS.log". This error is not specific to PostgreSQL; it can occur with any supported DBMS (PostgreSQL, Oracle, Microsoft SQL Server, or DB2). Note that this error is logged only at the database level, and it will not be seen in the PegaRULES log file.

[Screenshot: "duplicate key value violates unique constraint" errors in the PostgreSQL log]

I counted the errors on my local machine: 42,339 in total. The table below lists the related tables. You may see different frequencies or tables depending on how you are using Pega Platform.

No | Table                              | Share of 42,339 errors
1  | pr_sys_ruleset_index (rules)       | 59.8%
2  | pr_data_stream_nodes (data)        | 11.3%
3  | pr_assembledclasses (rules)        | 25.6%
4  | pr_sys_appcache_dep (rules)        | 3.1%
5  | pr_sys_app_hierarchy_flat (rules)  | 0.16%
6  | pr_sys_appcache_entry (rules)      | 1 of 42,339 (<0.01%)
7  | pr_history (data)                  | 19 of 42,339 (~0.04%)
8  | pr_sys_locks (data)                | 3 of 42,339 (<0.01%)
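A breakdown like the one above can be produced by scanning the database log for the constraint names in the error messages. A minimal sketch in Python; the sample log lines and constraint names below are illustrative, not taken from an actual log:

```python
# Count "duplicate key value violates unique constraint" errors in a
# PostgreSQL log, grouped by the constraint named in each message.
# The sample_log content is made up for demonstration; in practice you
# would read the real file from the pg_log directory.
import re
from collections import Counter

sample_log = """\
ERROR: duplicate key value violates unique constraint "pr_sys_ruleset_index_pk"
ERROR: duplicate key value violates unique constraint "pr_sys_ruleset_index_pk"
ERROR: duplicate key value violates unique constraint "pr_assembledclasses_pk"
"""

pattern = re.compile(r'violates unique constraint "([^"]+)"')
counts = Counter(m.group(1) for m in pattern.finditer(sample_log))
total = sum(counts.values())

print(total)                  # total number of duplicate-key errors
print(counts.most_common())   # per-constraint breakdown, most frequent first
```

Mapping constraint names back to table names (e.g. a "_pk" suffix convention) depends on how the schema was created, so check the actual constraint definitions in your database.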


  • Root cause

This is a known issue and the expected behavior of an internal SaveAllOrNone API. When designing the database access techniques for Pega Platform, the developers faced a common situation: an in-memory item has been updated and needs to be written to the PegaRULES database. One standard technique is to SELECT the row from the database first and use the result to determine whether an INSERT or an UPDATE is needed. However, to minimize database access where possible, a different technique was chosen:

Try to INSERT the row; if the INSERT fails due to a duplicate key condition, issue an UPDATE for the existing row.

With this second technique, only one SQL statement is issued whenever the INSERT succeeds, reducing the required SQL statement processing by an often significant percentage. The failed INSERT attempts are handled programmatically by the Pega Platform code that manages the database: they are not sent back to the application, and normally they do not produce an exception in the Pega logs. The exception is visible only if third-party monitoring tools are used or if DEBUG logging is enabled. This error causes no performance impact or system crash; internally, the technique improves system performance. The SaveAllOrNone API has been unchanged since version 6.x and is functioning as designed. An enhancement request was raised previously but was closed because the fix is not worthwhile: doing a SELECT before attempting inserts would be a detriment to performance, with little benefit aside from cleaning up some log clutter.
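The insert-first pattern described above can be sketched in a few lines. Pega's actual SaveAllOrNone API is internal, so this is only an illustration of the general technique, using sqlite3 so it runs standalone; the table and column names here are hypothetical:

```python
# Sketch of the "try INSERT, fall back to UPDATE on duplicate key"
# technique. The table/column names are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (pzinskey TEXT PRIMARY KEY, data TEXT)")

def save(key: str, data: str) -> str:
    try:
        # Optimistic path: a single statement when the row is new.
        conn.execute(
            "INSERT INTO cache (pzinskey, data) VALUES (?, ?)", (key, data)
        )
        return "inserted"
    except sqlite3.IntegrityError:
        # Duplicate key: the database records the failed INSERT in its own
        # log, but the application simply falls back to an UPDATE.
        conn.execute(
            "UPDATE cache SET data = ? WHERE pzinskey = ?", (data, key)
        )
        return "updated"

print(save("RULE-1", "v1"))  # inserted
print(save("RULE-1", "v2"))  # updated
```

New rows cost one statement instead of two (no preliminary SELECT), at the price of the duplicate-key errors appearing in the database log, which is exactly the trade-off described above.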


  • Actions to take

No action is required. The log message should be documented as a known side effect of this API, with no harm on the client end. There is no workaround to suppress the message.





***Edited by Moderator Marissa to update Platform Capability tags; update Support Case Details; add the Developer Knowledge Share tag***
Pega Platform 8.4.1 Data Integration Developer Knowledge Share Support Case Exists