What are the prerequisites for enabling Scenario Testing?
Before using the Scenario testing tool, make sure that the pxScenarioTestAutomation privilege is part of your access roles, that the DSS setting pzPegaSUT is set to true, and that you have an unlocked ruleset in which to save the recorded test cases. In addition, make sure that all your UI components have unique data-test-id attributes generated. If you created the application in a recent version of Pega using OOTB components, these are generated by default.
Can more than one control have the same data-test-id?
Ideally, no. However, the system cannot prevent or validate this. If more than one control has the same data-test-id, recording or replay of the scenario test may fail, because the tool relies on the data-test-id to uniquely identify a control and act on it. When a control is copied and pasted to a different place in a section, data-test-id redundancy may occur; it can be avoided by regenerating the data-test-id on the copied control.
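One way to catch redundant data-test-ids before they break a recording is to scan the rendered section markup for duplicates. Below is a minimal sketch using only the Python standard library; the markup and id values are illustrative, not taken from a real Pega section:

```python
from collections import Counter
from html.parser import HTMLParser

class DataTestIdCollector(HTMLParser):
    """Collects every data-test-id attribute found in the markup."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "data-test-id" and value:
                self.ids.append(value)

def find_duplicate_test_ids(html):
    """Return data-test-id values that appear on more than one control."""
    collector = DataTestIdCollector()
    collector.feed(html)
    counts = Counter(collector.ids)
    return sorted(tid for tid, n in counts.items() if n > 1)

# Example: the second and third inputs share an id, as can happen
# after copy-pasting a control within a section.
markup = """
<input data-test-id="201901010001"/>
<input data-test-id="201901010002"/>
<input data-test-id="201901010002"/>
"""
print(find_duplicate_test_ids(markup))  # ['201901010002']
```

Any id reported here is a candidate for regeneration on the copied control before recording.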
I installed the latest PTAK version from the Marketplace, but I cannot see the new features associated with that version. Is there anything specific to be done?
Make sure that PTAK is included as a built-on application in your application structure and that its version is specified as 1, so that the latest ruleset is picked up.
Can I record a scenario test from one of the applications in the built-on stack and execute it from an application that is built on top of it?
Yes. If the built-on application is included in the quality settings section, its scenario tests appear in the landing page and can be run from there.
Can I record a scenario test from one portal and execute from a different portal?
No. It is not possible.
Can I record a scenario test from a certain point of a workflow (case type or portal)?
No. Currently, this is not possible.
For a dropdown field with multiple values in the list, is there a way to use one script to verify all the values in the list, or to execute the script with a different value each time?
Firstly, the values of a dropdown can be verified using the “Options” attribute, which supports the “contains” comparator and fetches the list of values fed to the dropdown. Secondly, executing the script with different values amounts to data-driven testing, which is not directly supported; however, the values of the dropdown can be configured dynamically from the setup section.
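The “Options” check with a “contains” comparator can be reasoned about as a simple membership test: the assertion passes when every expected value appears in the list fed to the dropdown. A small sketch of that logic, with illustrative option values that are not part of the tool's API:

```python
def options_contain(actual_options, expected_values):
    """Mimic a 'contains' comparator: every expected value must be
    present in the list of options fed to the dropdown.
    Returns (passed, missing_values)."""
    missing = [v for v in expected_values if v not in actual_options]
    return (len(missing) == 0, missing)

# Hypothetical options fed to the dropdown.
actual = ["Open", "Pending-Review", "Resolved-Completed"]

print(options_contain(actual, ["Open", "Resolved-Completed"]))  # (True, [])
print(options_contain(actual, ["Withdrawn"]))                   # (False, ['Withdrawn'])
```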
Can we duplicate an existing test case and add new steps to create a new test case, keeping the existing test as is?
A scenario test can be saved with a new name, like any other rule. The new test can then be edited to add steps or to re-record from a particular step.
I have a custom control that is not recognized by the scenario test tool. How can I get it recognized?
Given that creating a scenario test involving multiple users is currently not supported, is there a workaround to achieve this?
The only workaround is to split it into two different test cases and test the functionalities separately. The switching of roles has to be done manually and cannot be covered as part of scenario testing.
How can I execute scenario test cases and test suites from an external CI tool?
You can configure a task to run the ExecuteScenarioTests REST service, which takes a test suite ID or, if none is specified, runs all the tests in the application. This can be executed through third-party runners such as BrowserStack, Sauce Labs, CBT, and Selenium standalone.
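As a rough illustration of how a CI task might address the service, the sketch below builds a request URL for ExecuteScenarioTests. The endpoint path and the TestSuiteID parameter name are assumptions for illustration only; verify the exact contract against your Pega installation's service documentation:

```python
from urllib.parse import urlencode

def build_execute_url(server, test_suite_id=None):
    """Build a URL for the ExecuteScenarioTests REST service.
    NOTE: the service path and parameter name below are assumed for
    illustration; check them against your Pega version."""
    base = f"{server.rstrip('/')}/PRRestService/ExecuteScenarioTests"
    if test_suite_id:
        # With a suite ID only that suite runs; without one, the
        # service runs all scenario tests in the application.
        return f"{base}?{urlencode({'TestSuiteID': test_suite_id})}"
    return base

print(build_execute_url("https://pega.example.com/prweb", "TS-1001"))
# https://pega.example.com/prweb/PRRestService/ExecuteScenarioTests?TestSuiteID=TS-1001
```

A CI task would typically issue an authenticated GET or POST against this URL and inspect the response body for results.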
My scenario test suite failed while executing it from the CI/CD pipeline. How can I know which tests failed and the reasons for the failures, without referring to Dev Studio?
The ExecuteScenarioTests API returns a response containing the list of all executed test cases and their results. The list of failed test cases can be viewed in the errors section of the task in the CI/CD pipeline, while the specific failure reason can be seen from Dev Studio.
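A CI task can surface the failed tests directly by filtering the service response. The JSON shape below is hypothetical (field names may differ across Pega versions); the filtering logic is what the sketch demonstrates:

```python
import json

# Hypothetical response shape for ExecuteScenarioTests; the real
# field names may differ on your Pega version.
sample_response = json.loads("""
{
  "TestCases": [
    {"Name": "CreateCase_HappyPath", "Result": "Passed"},
    {"Name": "ApproveCase_Manager", "Result": "Failed"},
    {"Name": "Portal_Search", "Result": "Passed"}
  ]
}
""")

def failed_tests(response):
    """Pull out the names of failed test cases so a CI task can report
    them without anyone opening Dev Studio."""
    return [tc["Name"] for tc in response.get("TestCases", [])
            if tc.get("Result") == "Failed"]

print(failed_tests(sample_response))  # ['ApproveCase_Manager']
```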
How do I manage the file location of an attachment? It throws an error when the test is recorded by one user and played back by a different user.
The location of the attachment should not matter, because playback refers only to the attachment key. The failure should be debugged in more detail, as it is unlikely to be caused by the file location of the attachment.
While recording a scenario test, all values entered by the user are captured and stored. When the test is executed, it fails because data created during a previous execution persists and causes duplication. How can this situation be avoided and proper test cases be authored?
Cleanup should be handled carefully when executing scenario tests. The automatic cleanup checkbox takes care of cleaning up the data created as part of setup; in addition, explicit cleanup can be configured based on the requirement. Every time a scenario test created on a case type is executed, a new work object is created, so this kind of duplication cannot happen there. In a portal, the flow should always start from the landing page; recording from the middle of a flow is considered a bad recording.
Which UI controls are not supported by scenario testing?
Please refer to the following collab document, which lists the supported controls. As of PTAK 1.3, Attach content, Multiselect, and AnyPicker are the additional controls for which support was extended while recording a scenario test case.
How do I know which PTAK version I am currently on, and which features are supported by it?
After importing the latest RAP of PTAK, include it in the built-on application list. It is good practice to specify the application version as 1; when you open the application rule, the ruleset version shown in the ruleset section is the version being picked up.
Is there a way to avoid re-recording of the entire scenario test when the data test ID of a UI control changes?
A test case can be edited from the portal by using “add steps” or “re-record” from the particular step where the change was made. Unnecessary steps can also be deleted from the script using the “remove” option.