From BIX 6.3 onwards, the batch size is the number of work objects that are extracted per batch. A single work object extraction may result in multiple table writes, depending on the nested structures chosen in the extract rule.
There is always a trade-off between memory use and throughput when increasing the batch size. With a large batch size, the JDBC driver keeps that many statements in memory (in this case, 39 * batch size * number of records per table), which can consume a large amount of memory. It also means that if a single record in a batch fails, the whole batch fails, and reconciliation can be a significant effort, since databases differ in whether and how they report which record caused the failure.
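The whole-batch failure mode can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 module in place of a JDBC batch (the table name and records are made up for the example); the semantics are analogous: one bad record aborts the batch, and the error does not identify the offending row, so the caller must roll back and reconcile.

```python
import sqlite3

# In-memory database standing in for the extract target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE extract_out (id INTEGER PRIMARY KEY, payload TEXT)")

# A "batch" of records; the duplicate key in the third row makes the batch fail.
batch = [(1, "a"), (2, "b"), (1, "dup"), (3, "c")]

try:
    conn.executemany("INSERT INTO extract_out VALUES (?, ?)", batch)
    conn.commit()
except sqlite3.IntegrityError as exc:
    # The error message does not say WHICH row in the batch failed, so the
    # only safe move is to roll back the whole batch and reconcile, e.g. by
    # retrying the records one at a time.
    conn.rollback()
    print(f"batch failed: {exc}")

count = conn.execute("SELECT COUNT(*) FROM extract_out").fetchone()[0]
print(count)  # 0 -- none of the rows from the failed batch were committed
```

With a larger batch size, more work is lost per failure and the record-by-record retry pass grows accordingly, which is the reconciliation cost described above.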