We are using Pega Platform 8.1.5 (Pega Cloud) with Pega Marketing 8.1.5.
Our current setup is as follows:
We only have outbound channels.
We load the customer data into Cassandra (which contains 100% of the customer data) and into the Customer DB table in Postgres, which contains only the CustomerID field (we do this due to the limitation that Pega Marketing segments require a relational database table).
Due to some limitations, we have to load the data during business hours (it cannot be pushed to overnight). The load currently takes about 30 minutes.
We run the NBA during business hours. It takes about 1 hour 30 minutes.
We have 3 Batch dataflow nodes (used by both the data load and the NBA). The thread count is set to 5.
We have 3 DDS nodes for Cassandra.
Now we are introducing our first inbound channel, using the Container API and a custom REST API.
We have 3 WebUser nodes to serve the inbound requests.
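For context, the inbound call our custom REST API makes is shaped roughly like this. This is an illustrative sketch only: the function name and the field names (CustomerID, ContainerName, Channel) are placeholders for our wrapper, not the exact contract of the Pega Container service, which depends on how the real-time container is configured.

```python
import json

def build_container_request(customer_id, container_name, channel="Web"):
    """Build the JSON body our custom REST API sends to the container
    endpoint. Field names are illustrative placeholders, not the
    official Pega Marketing Container API contract."""
    return {
        "CustomerID": customer_id,
        "ContainerName": container_name,
        "Channel": channel,
    }

payload = build_container_request("CUST-0001", "TopOffers")
print(json.dumps(payload))
```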
During load tests, we've observed that the services respond as expected for a while (1-3 seconds), then performance degrades after about 100 requests (5-15 seconds), and the cycle repeats: expected response times, then degradation, then expected response times again, and so on.
The degradation is even more noticeable when we run the Pega Data Load WHILE the Load Tests are running (some response times go up to almost 30 seconds).
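To make the cycle visible in the load-test output, we bucket response times into fixed-size windows and look at the 95th percentile per window. The numbers below are synthetic, just to show the shape of what we see (fast phase, then a degraded phase, repeating), not our real measurements:

```python
# Synthetic response times in seconds: ~100 fast requests, then a
# degraded burst, then the pattern repeats (mimics what we observe).
samples = [1.5] * 100 + [10.0] * 20 + [2.0] * 100 + [12.0] * 20

def p95(window):
    """Nearest-rank-style 95th percentile of a list of latencies."""
    ordered = sorted(window)
    return ordered[int(0.95 * (len(ordered) - 1))]

WINDOW = 50
for start in range(0, len(samples) - WINDOW + 1, WINDOW):
    print(start, round(p95(samples[start:start + WINDOW]), 2))
```

Windows that overlap a degraded burst show the p95 jumping by several multiples, which is how the cyclic pattern shows up in our reports.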
As far as I understand:
The data load uses the Batch dataflow nodes.
The NBA uses the Batch dataflow nodes.
The inbound requests use the WebUser nodes.
But everything (data load, NBA, and inbound requests) uses the DDS nodes, to either read or write data, so the bottleneck might be there.
So I guess my questions are:
What can we do to improve the performance? Is there any PegaCloud sizing tool?
Add additional DDS nodes?
Add additional Batch dataflow nodes?
Add additional WebUser nodes?
How do we find out the optimal Batch dataflow thread count?
Is there any thread count configuration for the WebUser node type?
Any other suggestion you can think of?
Any suggestions to improve the performance are highly appreciated.
Thank you very much,
***Edited by Moderator: Lochan to update SR details***