Yes, ideally we don't want the exposed column to be out of sync with the blob content. However, when Pega saves a value that exceeds the column size, it truncates the data to the column width minus 1, adds a plus sign (+) to the end of the value, and saves the revised data to the column, while the blob corresponding to that column holds the entire value. So in this scenario the column and blob content will not match.
This truncation by Pega fails if the value contains any multi-byte characters. So we are trying to mimic the behavior ourselves: truncate the data to the allowable byte size of the column, save that to the column, and save the entire value to the blob. When displaying the data we'll fetch it from the blob so that there is no data loss.
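Something like the following is what we have in mind; this is a minimal sketch assuming the column limit counts UTF-8 bytes (e.g. an AL32UTF8 database), and `truncateForColumn` is our own hypothetical helper, not a Pega API:

```java
import java.nio.charset.StandardCharsets;

public class ByteSafeTruncate {

    // Truncate value so its UTF-8 byte length fits within maxBytes,
    // never splitting a multi-byte character, and append the "+"
    // marker the same way Pega's own truncation does.
    public static String truncateForColumn(String value, int maxBytes) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        if (bytes.length <= maxBytes) {
            return value; // fits as-is, no truncation marker needed
        }
        // Reserve one byte for the trailing "+" marker.
        int end = maxBytes - 1;
        // Walk back to a UTF-8 character boundary: continuation bytes
        // always match the bit pattern 10xxxxxx.
        while (end > 0 && (bytes[end] & 0xC0) == 0x80) {
            end--;
        }
        return new String(bytes, 0, end, StandardCharsets.UTF_8) + "+";
    }

    public static void main(String[] args) {
        // "é" is 2 bytes in UTF-8; a naive cut at 6 bytes would split it.
        System.out.println(truncateForColumn("caféteria", 6)); // café+
        System.out.println(truncateForColumn("café", 10));     // café (unchanged)
    }
}
```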
Yes, we too faced an issue with multi-byte character data when storing it in exposed columns.
I don't think we have any UDFs to update the blob directly, and it isn't a recommended approach either. The only way is to not use the column data directly and instead depend on Obj-Open to get the complete data. In short, having the exposed column wouldn't help much in this use case.
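To make that concrete, here is a rough sketch of what such an Obj-Open-style read could look like from a Java step or utility. It assumes the engine's public `Database.open(String, boolean)` and `ClipboardPage.getString(String)` calls; `FullValueReader` and the names passed in are hypothetical, so please verify the signatures against your platform version's PublicAPI javadocs:

```java
import com.pega.pegarules.pub.clipboard.ClipboardPage;
import com.pega.pegarules.pub.database.DatabaseException;
import com.pega.pegarules.pub.runtime.PublicAPI;

// Hypothetical helper: read a property's full value from the blob by
// opening the whole instance, instead of trusting the exposed column.
public final class FullValueReader {

    public static String readFullValue(PublicAPI tools, String insKey,
                                       String propertyRef)
            throws DatabaseException {
        // Open the complete record (blob included); "false" means we
        // don't take a lock, which is enough for a read-only fetch.
        ClipboardPage page = tools.getDatabase().open(insKey, false);
        // Read from the opened page rather than the exposed column, so
        // the value can never be the truncated copy.
        return page.getString(propertyRef);
    }
}
```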
Found a probable solution for this multi-byte character issue: change the DDL to modify the column from byte-length semantics (VARCHAR2(n BYTE)) to character-length semantics (VARCHAR2(n CHAR)). The column will then allow multi-byte characters, and if the character count exceeds the limit, Pega will truncate the value gracefully.
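For example, something along these lines; the connection details, table, column, and size below are placeholders for your own schema, and the ALTER statement could equally be run directly in any SQL client:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ColumnSemanticsFix {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own values.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/PEGADB", "dbuser", "dbpass");
             Statement stmt = conn.createStatement()) {
            // Switch the column from byte-length to character-length
            // semantics: 64 now means 64 characters, regardless of how
            // many bytes each character occupies in the database charset.
            stmt.executeUpdate(
                "ALTER TABLE MYSCHEMA.MYWORKTABLE "
                + "MODIFY (MYEXPOSEDCOLUMN VARCHAR2(64 CHAR))");
        }
    }
}
```

One caveat worth checking: even with CHAR semantics, Oracle still enforces the datatype's underlying byte limit (4000 bytes for a standard VARCHAR2), so very wide multi-byte values can still run into it.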