We have a node configured as a Universal node. On startup it spins up two additional Kafka processes, which, per the documentation, we would expect to run with a default heap size of 1 GB each. Alongside these we see a further process that utilises around 1.4 GB. We need to understand the expected memory footprint of the application and its supporting processes so we can accurately size production and test hardware.
The Kafka element itself is fairly clear; the supporting Cassandra process is not. How much memory is it expected to consume, and how do we size for it in terms of both memory and disk space?
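For reference, this is the kind of measurement we are taking to compare actual resident memory against the configured heaps. It is a minimal sketch only: it assumes a Linux host (it reads `/proc` directly), and the `"kafka"` / `"cassandra"` command-line patterns are assumptions about how the processes appear on our nodes.

```python
import os
import re


def rss_by_pattern(pattern: str, proc_root: str = "/proc") -> dict:
    """Sum resident set size (kB) per PID for processes whose command
    line matches `pattern`. Linux-only: parses /proc/<pid>/status."""
    totals = {}
    for pid in filter(str.isdigit, os.listdir(proc_root)):
        try:
            with open(f"{proc_root}/{pid}/cmdline", "rb") as f:
                # cmdline is NUL-separated; join it into one searchable string
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
            with open(f"{proc_root}/{pid}/status") as f:
                status = f.read()
        except OSError:
            continue  # process exited while we were scanning
        if re.search(pattern, cmdline):
            m = re.search(r"VmRSS:\s+(\d+) kB", status)
            if m:
                totals[pid] = int(m.group(1))
    return totals


if __name__ == "__main__":
    # Process-name patterns here are assumptions about our deployment
    for name in ("kafka", "cassandra"):
        procs = rss_by_pattern(name)
        total_mb = sum(procs.values()) / 1024
        print(f"{name}: {total_mb:.1f} MB across {len(procs)} process(es)")
```

Note that RSS is what the OS has actually resident, so it includes JVM overhead (metaspace, thread stacks, off-heap buffers) beyond the `-Xmx` heap setting, which is why it can exceed the documented 1 GB heap.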