Is there any recommendation from Pega on the maximum database size it can handle without impacting performance? We are doing our DB growth projections and evaluating data-archiving strategies, but the prioritization will be based on the impact to system performance.

We are on DB2 UDB. Application performance is very good at the current size: 600 GB overall, of which 300 GB is the work table. Assuming all indexes and fine tuning are in place, at what database size would application performance start to degrade? It certainly depends on a number of other factors, such as the infrastructure architecture, but in general is there any documentation, recommendation, or testing along these lines? Please share your experiences or any best practices.
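For context on how we are approaching the projection side, here is a minimal sketch of the linear growth estimate we are working from. The growth rate and review threshold below are hypothetical placeholders for illustration, not Pega guidance; only the 600 GB / 300 GB figures come from our environment.

```python
# Rough linear capacity projection: given a current size and an assumed
# monthly growth rate, estimate how many months remain until a review
# threshold is reached. The 25 GB/month rate and 1 TB threshold are
# hypothetical placeholders, not Pega recommendations.

def months_until(current_gb: float, growth_gb_per_month: float, threshold_gb: float) -> float:
    """Months until the database reaches threshold_gb, assuming linear growth."""
    if current_gb >= threshold_gb:
        return 0.0
    return (threshold_gb - current_gb) / growth_gb_per_month

# Current figures from our environment: 600 GB total, 300 GB work table.
total_gb = 600.0
work_gb = 300.0

# Hypothetical: 25 GB/month growth, archiving review once we reach 1 TB.
print(round(months_until(total_gb, 25.0, 1024.0), 1))
```

The real question remains what threshold to plug in, which is why we are asking whether Pega publishes any tested size guidance.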