Question

Iwona Lowe (IwonaL94)
Cigna, US
Member since 2014, 2 posts
Posted: October 30, 2020
Last activity: October 30, 2020

Pega on PostgreSQL with large tables: partition or not for fast data access queries

We are trying to move one Pega application from our Oracle database to PostgreSQL. The application has a few large tables that store processed transaction history. We process 60K transactions/day, which are recorded in large work/history tables partitioned by date. This design works well on Oracle because Oracle supports both global and local partitioned indexes: a global index spans all partitions, so a lookup for a single item goes directly to the row, no matter which partition holds it -- fast access. Purging is also easy (drop the partitions holding old data) without a table reorg.
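For context, here is roughly what our current Oracle layout looks like (a minimal sketch; table, column, and partition names are illustrative, not our real schema):

-- Work/history table, range-partitioned by transaction date (illustrative names).
CREATE TABLE txn_history (
    txn_id   NUMBER         NOT NULL,
    txn_date DATE           NOT NULL,
    payload  VARCHAR2(4000)
)
PARTITION BY RANGE (txn_date) (
    PARTITION p2020_09 VALUES LESS THAN (DATE '2020-10-01'),
    PARTITION p2020_10 VALUES LESS THAN (DATE '2020-11-01')
);

-- A nonpartitioned index on a partitioned table is a global index:
-- one B-tree over all partitions, so a lookup by txn_id goes straight
-- to the row without probing every partition.
CREATE INDEX txn_history_id_gx ON txn_history (txn_id);

-- A local index has one segment per partition and is dropped with it.
CREATE INDEX txn_history_date_lx ON txn_history (txn_date) LOCAL;

-- Purge old data without a table reorg, keeping the global index usable:
ALTER TABLE txn_history DROP PARTITION p2020_09 UPDATE GLOBAL INDEXES;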

PostgreSQL does not have global indexes: every index on a partitioned table is local to its partition. A query that looks up a single transaction record therefore has to probe the local index of each partition to find the item, which is much slower. In short, any query that does not include the partition key (so the planner can prune to the matching hash/range partitions) will not perform well.
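A minimal PostgreSQL sketch of the equivalent layout (again with illustrative names; assuming PostgreSQL 11 or later, where an index declared on the parent is propagated to every partition):

-- Same table, declaratively range-partitioned by date.
CREATE TABLE txn_history (
    txn_id   BIGINT NOT NULL,
    txn_date DATE   NOT NULL,
    payload  TEXT
) PARTITION BY RANGE (txn_date);

CREATE TABLE txn_history_2020_10 PARTITION OF txn_history
    FOR VALUES FROM ('2020-10-01') TO ('2020-11-01');

-- This creates a separate local index on each partition;
-- there is no global index spanning all of them.
CREATE INDEX ON txn_history (txn_id);

-- Partition key present: the planner prunes to one partition.
EXPLAIN SELECT * FROM txn_history
 WHERE txn_id = 42
   AND txn_date >= DATE '2020-10-01'
   AND txn_date <  DATE '2020-11-01';

-- No partition key: the planner must probe every partition's local index.
EXPLAIN SELECT * FROM txn_history WHERE txn_id = 42;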

We would like to ask: in our situation, how should we architect the application on PostgreSQL?

How have other clients built Pega on a PostgreSQL database and used it successfully for online or high-transaction-volume systems?

Is there a smart way to architect the archiving of historical transactions so that fast data access to historical records is preserved?
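For what it's worth, one pattern we have been looking at (a sketch under the declarative-partitioning layout above, with hypothetical names; we do not know whether this is a Pega-sanctioned approach) is to detach old partitions from the hot table instead of deleting rows:

-- Detach the oldest partition: fast, no table reorg, rows untouched.
ALTER TABLE txn_history DETACH PARTITION txn_history_2020_10;

-- Option A: drop it once it is past the retention window.
-- DROP TABLE txn_history_2020_10;

-- Option B: attach it to a separate archive parent so historical
-- lookups remain possible without bloating the hot table.
CREATE TABLE txn_history_archive (
    txn_id   BIGINT NOT NULL,
    txn_date DATE   NOT NULL,
    payload  TEXT
) PARTITION BY RANGE (txn_date);

ALTER TABLE txn_history_archive
    ATTACH PARTITION txn_history_2020_10
    FOR VALUES FROM ('2020-10-01') TO ('2020-11-01');

Queries against the hot table then stay small, and historical queries go to the archive parent explicitly.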

Pega Smart Claims Engine for Healthcare, Project Delivery, Insurance, System Architect