Unit Testing RPA Without Dependence on the Application
I'm working on setting up Continuous Integration and Delivery for RPA development and am wondering about best practices for unit testing RPA code. It would be best not to depend on the live application, for a number of reasons:
The unit test project might have to be deployed to a separate machine with an active user session in order to run, rather than running directly on the build server.
Test case data cannot be guaranteed to remain unchanged.
There's no way to guarantee that the application will not go into an unavailable state.
Running negative tests (such as wait-for-create timeouts) would either be impossible or would make unit test runs take a long time (as opposed to simulating timeouts without having to wait out the full period).
The way I know to handle these scenarios is to use dependency injection and inject mocks for all of the application's controls, so that the RPA code doesn't know (and doesn't care) whether it has the real application (the interrogations) or a fake one. This would have to be a custom, in-house build. However, before going ahead with that solution, I wanted to ask here whether there is a built-in way to handle this, and whether anyone has experience with this. Also, if we use custom code to achieve this kind of testability, would Pega refuse to support our RPA code?
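To make that concrete, here's a minimal C# sketch of the pattern I'm describing. Every name in it (IAccountScreen, AccountAutomation, and so on) is something I made up for illustration; nothing like this ships with Pega Robotics.

```csharp
using System;

// Hypothetical abstraction over one interrogated screen.
public interface IAccountScreen
{
    bool WaitForCreate(TimeSpan timeout); // true once the control exists
    string AccountNumber { get; }
    void ClickSubmit();
}

// Production implementation: a thin wrapper that delegates to the real
// interrogated controls (details omitted here).
public class LiveAccountScreen : IAccountScreen
{
    public bool WaitForCreate(TimeSpan timeout) { /* delegate to the adapter */ return true; }
    public string AccountNumber => ""; /* read from the interrogated control */
    public void ClickSubmit() { /* click the interrogated button */ }
}

// The automation logic depends only on the interface, so it neither
// knows nor cares whether the screen behind it is real or fake.
public class AccountAutomation
{
    private readonly IAccountScreen _screen;
    public AccountAutomation(IAccountScreen screen) { _screen = screen; }

    public bool Run()
    {
        if (!_screen.WaitForCreate(TimeSpan.FromSeconds(30)))
            return false; // the negative path we want to be able to unit test
        _screen.ClickSubmit();
        return true;
    }
}
```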
If you are testing Pega Robotics code, then, fundamentally, what you are testing is the ability to interact with the applications. If you replace that with something else, then what really are you testing? If you limit the code in Pega Robotics to simply interacting with the application(s) and leave all logic to the Pega case, then you can test everything you are asking for without Pega Robotics (although you still need to test that part at some point). You could create a testing flag on your case and, when the Robot Activity fires, check this parameter and decide whether to perform any action against the applications or simply return the data you expect.
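As a rough sketch of that flag check (the names below are placeholders; in a real implementation the flag would be a property on the Pega case, not a C# field):

```csharp
public class CaseData { public bool IsTestRun; public string AccountId; }
public class WorkResult { public bool Succeeded; public string AccountNumber; }

public class RobotActivity
{
    public WorkResult Execute(CaseData caseData)
    {
        if (caseData.IsTestRun)
        {
            // Testing flag set: perform no action against the
            // applications and simply return the data the test expects.
            return new WorkResult { Succeeded = true, AccountNumber = "TEST-0001" };
        }
        return RunAgainstLiveApplication(caseData);
    }

    private WorkResult RunAgainstLiveApplication(CaseData caseData)
    {
        /* drive the interrogated application here */
        return new WorkResult { Succeeded = true };
    }
}
```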
You asked: "If you replace that with something else, then what really are you testing?" You'd be testing the functionality of the automation code itself in isolation, without a dependence on the actual, running application.
I mentioned in my original post that if we wanted to test how automation code responds to things like controls not being created within the timeout period, we'd (1) have to figure out a way to induce this artificially, which in most cases is impossible, and (2) actually wait out the entire timeout (typically 30 seconds) for every test. That adds up. With a mock application, this could be simulated on demand and the function could return false immediately, without waiting out the full timeout. The app would also need to be put into the proper state for every test: shutting down and re-opening, getting to the right screen, setting up the scenario, and so on. That takes more time and would mean many more automation files, and many more boxes and lines, just to set things up.
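Reusing the hypothetical IAccountScreen interface from my earlier sketch, such a mock might look like this; the point is that a simulated timeout costs nothing:

```csharp
using System;

// Test double for the hypothetical IAccountScreen. A simulated
// wait-for-create timeout returns immediately instead of blocking for
// the full 30 seconds, and can be switched on per test.
public class MockAccountScreen : IAccountScreen
{
    public bool SimulateCreateTimeout { get; set; }
    public bool SubmitClicked { get; private set; }

    public bool WaitForCreate(TimeSpan timeout) => !SimulateCreateTimeout; // no real waiting

    public string AccountNumber { get; set; } = "TEST-0001";
    public void ClickSubmit() { SubmitClicked = true; }
}
```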
You also wrote: "If you are testing Pega Robotics code, then, fundamentally, what you are testing is the ability to interact with the applications." That's not quite it. You also need to test the times when application interaction does not go as expected: when the bot would normally be able to interact, but something unexpected has gone wrong, i.e., the negative scenarios.
When the bots are running against the live app in the wild, things often don't go as expected. The tests would prove that the code can successfully handle both positive and negative scenarios. It's the negative scenarios that are difficult to test against the live app, because they cannot be induced on demand.
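A negative-scenario unit test against the mock above could then be as simple as this (NUnit-style, purely illustrative):

```csharp
using NUnit.Framework;

[TestFixture]
public class AccountAutomationTests
{
    [Test]
    public void Run_ReturnsFalse_WhenScreenNeverCreates()
    {
        // Induce the failure on demand: no live app, no 30-second wait.
        var screen = new MockAccountScreen { SimulateCreateTimeout = true };
        var automation = new AccountAutomation(screen);

        Assert.IsFalse(automation.Run());
        Assert.IsFalse(screen.SubmitClicked); // and it never tried to submit
    }
}
```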
If we stuck to testing only the positive scenarios (merely the ability to interact with the application), there'd really be no need for unit testing at all; just feeding the robot cases and watching it work would cover that. But that isn't testing, and doing only that is what has led to failures in production on our project. And the blame for those failures falls directly on the developers, who have no reliable way to write tests for all of those failure scenarios.
It would be extremely useful to be able to right-click an interrogated application/adapter in Studio, generate a mock of it, and then choose which one the automation code interacts with when it starts: if it's a unit test project, send in the mock app; if it's live, the real thing. These could even be kept in sync so that the mock always accurately reproduces the live app. The automation code itself doesn't need to know which one it's interacting with, and all scenarios, good, bad, and ugly, could be easily tested.
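The selection itself could be as simple as a small factory at startup; again, all names here are hypothetical:

```csharp
// Hypothetical composition root: the unit test project asks for the
// mock, production runs get the live adapter, and AccountAutomation is
// identical in both cases because it only ever sees the interface.
public static class ScreenFactory
{
    public static IAccountScreen Create(bool isUnitTestRun)
    {
        return isUnitTestRun
            ? (IAccountScreen)new MockAccountScreen()
            : new LiveAccountScreen();
    }
}
```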
If you are intending to test every link in the automation flow, then the only way to simulate that is to actually encounter every scenario and test for it. If you want to test timeouts without waiting for the full time, then set them as variables that you can adjust to lower values so that they fail quickly.
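For instance, something along these lines (an illustrative sketch, not product functionality):

```csharp
using System;

// The automation reads its wait time from a setting instead of a
// hard-coded 30 seconds, so a test can drop it to milliseconds and let
// the wait fail almost instantly.
public class AutomationSettings
{
    public TimeSpan WaitForCreateTimeout { get; set; } = TimeSpan.FromSeconds(30);
}

// In a test:
//   var settings = new AutomationSettings { WaitForCreateTimeout = TimeSpan.FromMilliseconds(50) };
```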
What you have asked for, though, is not currently available within the product. You can request that functionality as a product enhancement through your Engagement or Practice Lead.
In my experience, though, when testing RPA you are more often testing the results (i.e., did it perform the task correctly and produce all the required data?) rather than every step of the process. When testing failure scenarios, I generally test as much as possible and then adjust as needed for any issues that occur during testing.