Is there a way to view the results of each and every component executed when a strategy is run?
I am looking for a way to extract the execution results of all the components and the sub-strategies after a strategy is run.
I have found two places with such results:
1. There's a parameter on the strategy run page, "STRATEGY_EXECUTION_RESULTS_CONTAINER", which seems to hold all the results, but the content appears to be encrypted.
2. When we test-run the strategy through Actions > Run and open the clipboard, a page of class "Decision-Result" is generated that contains pyExecutionResults for each component. However, it does not include the results of components inside the sub-strategies, and the page only exists for a test run; I can't find it when I run the strategy through a data flow.
There are several ways to achieve this, depending on which version you are interested in and whether this is a hypothetical question or not.
Prior to 7.4, simulating and understanding the filtering effect (the decision funnel) of your decision framework was hard to do. This was acknowledged in the product, and we implemented a multi-release roadmap to rectify it.
In 7.4 we addressed the most pressing need: the complexity of building a simulation. This fixed the usability issue, and the artefacts required for a simulation are now generated for you.
In 8.2 we enhanced the simulation run itself to capture population counts for the main components that do the filtering in a strategy. The intention was to make clear which component was responsible for filtering a customer out of eligibility for an offer. Initially we made the raw data available through reports; if you would like to analyse this data, I suggest opening the reports in Excel to filter and format them.
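If you prefer to script the analysis rather than use Excel, the exported report data can be processed directly. A minimal sketch follows; note that the column names (`component`, `population_in`, `population_out`) and the sample figures are hypothetical placeholders, not the actual export schema, so check them against a real report export first:

```python
# Sketch: computing a decision funnel from exported simulation report data.
# All column names and figures below are illustrative assumptions only.
import csv
import io

# Hypothetical export: per-component population counts from a simulation run.
sample_csv = """component,population_in,population_out
Eligibility Filter,10000,6200
Suitability Filter,6200,4100
Arbitration,4100,1500
"""

def funnel(report_text):
    """Return (component, pass_rate, filtered_out) for each report row."""
    rows = csv.DictReader(io.StringIO(report_text))
    result = []
    for r in rows:
        pop_in = int(r["population_in"])
        pop_out = int(r["population_out"])
        # pass_rate: fraction of the incoming population that survives
        # this component; filtered_out: how many customers it removed.
        result.append((r["component"], pop_out / pop_in, pop_in - pop_out))
    return result

for name, rate, dropped in funnel(sample_csv):
    print(f"{name}: {rate:.0%} pass, {dropped} filtered out")
```

Sorting the result by the filtered-out count quickly surfaces which component is doing the heaviest filtering, which is exactly the question the 8.2 reports are meant to answer.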
In 8.3 we are introducing a simulation data sync pipeline: the ability to sample and copy data (Customer, IH, Adaptive) from a production environment to a simulation environment for better simulations. This data will be moved by a Deployment Manager pipeline on Pega Cloud. The aim is to remove the manual overhead our customers currently experience when setting up simulation environments. Look out for more information about this.
In 8.4 we are planning to take the counts from the reports and make them available on the strategy canvas. The purpose of this is to make the counts more understandable and to show them in context. These counts will likely also start appearing in other areas of decisioning: NBA Designer, Eligibilities, etc.
So those are the plans. For now, I would follow the advice from the Mesh thread and here: make use of the batch test run panel to track the propositions (the chart changes as you click on components, so that should help you), try out simulations and the reports, and keep an eye out for further improvements in 8.3 and 8.4.
I am also working on an updated roadmap deck and will attach it to this thread once it's done.