Question
1 Reply
54 Views
Imranullah Mohammed DawoodBasha (imranullah)
Coforge DPA, US
Member since 2013, 32 posts
Posted: 12 Mar 2021 15:43 EST
Last activity: 14 Apr 2021 3:45 EDT

Dataflow target data set of type AWS S3 Repo

I have a requirement to export data in CSV format and save it to an Amazon S3 bucket. We read data from a Postgres table, apply some transformations and rules, and enrich the data with other data sets on Postgres. After this processing, we want to save the files to an Amazon S3 bucket.

We run this dataflow multiple times, and each run generates a new file.

What I want to find out is whether there is a way to determine the name under which the file was saved to the S3 bucket, so that I can map it to my case and offer an option to download the file from S3.

Is there any property on the dataflow run case where this information is stored?

We also tried the approach below, but it didn't meet our requirement: https://collaborate.pega.com/question/parameterize-file-name-and-path-file-dataset
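Until someone points to a property on the run case, one workaround is to list the bucket after the run and pick the most recently modified object under the output prefix. This is a minimal sketch, not a Pega-provided API: the bucket name, the `exports/` prefix, and the boto3 call in the comment are illustrative assumptions.

```python
# Sketch: recover the name of the newest file written under a known
# S3 prefix. The helper is pure so it can be tested without AWS.
from datetime import datetime, timezone

def newest_key(objects):
    """Given S3 object summaries (dicts with 'Key' and 'LastModified',
    as returned in list_objects_v2's 'Contents'), return the key of
    the most recently modified object, or None if the listing is empty."""
    if not objects:
        return None
    return max(objects, key=lambda o: o["LastModified"])["Key"]

# With boto3 against a real bucket, the summaries would come from
# (bucket/prefix names are assumptions for illustration):
#   import boto3
#   s3 = boto3.client("s3")
#   resp = s3.list_objects_v2(Bucket="my-export-bucket", Prefix="exports/")
#   latest = newest_key(resp.get("Contents", []))

# Example with a stubbed listing:
listing = [
    {"Key": "exports/run-1.csv",
     "LastModified": datetime(2021, 3, 12, 15, 0, tzinfo=timezone.utc)},
    {"Key": "exports/run-2.csv",
     "LastModified": datetime(2021, 3, 12, 16, 0, tzinfo=timezone.utc)},
]
print(newest_key(listing))  # run-2.csv was modified last
```

Note this only identifies the latest file reliably if runs never overlap; with concurrent runs you would need a per-run prefix or timestamp in the path to disambiguate.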

***Edited by Moderator Marissa to update Content Type from Discussion to Question; update Support Case Details***
Pega Customer Decision Hub 8.3 Decision Management Manufacturing Lead System Architect Support Case Exists