I am looking to develop a system availability tool whose main aim is to automatically log in to multiple Pega URLs and send back a success/failure message. I want to do this from within a Pega instance, not through an external script. What would be the best approach to handle this requirement?
First of all, if you have to develop a system availability tool that checks the health of all your Pega URLs, building it inside a Pega instance could be difficult: you would then have to exclude that instance from testing, since you are using it to run the checks.
Instead, consider using an external application.
How about trying AES? Refer to this article for more detail on AES.
I was thinking of replicating the approach IAC uses to access Pega. Is that something you would suggest? I understand the security risks I would have to accept in order to use the IAC gadget alone; apart from that, would there be other major challenges?
I don't think implementing it based on IAC is a good idea, for the same reason as before: IAC / Mashup interacts with a live PRPC application, so if your intention is to check the health of all PRPC instances, you can't exclude this one either.
AES is the best solution. You can customize AES to send emails for specific alerts, so if a node is down or performing poorly you have the insight needed for troubleshooting and prevention.
If AES is not an option, and you are on v7, you can build a simple REST service. The Activity rule tied to the service simply returns a 'success' string (perhaps in JSON), based on whatever logic you embed in the service.
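To illustrate the consumer side of that idea, here is a minimal sketch of how a monitor could classify the response from such a health-check service. The endpoint path, payload shape, and the 'status' field name are assumptions for illustration; your service activity defines the actual contract.

```python
import json

def is_healthy(http_status: int, body: str) -> bool:
    """Classify a health-check response: HTTP 200 plus a
    'status': 'success' field in the JSON payload counts as available.
    (The payload shape is an assumption; adapt it to whatever your
    service activity actually returns.)"""
    if http_status != 200:
        return False
    try:
        payload = json.loads(body)
    except ValueError:
        # Non-JSON body, e.g. an error page from the web server
        return False
    return payload.get("status") == "success"
```

A monitor would then poll each URL on a schedule and report failure whenever `is_healthy` returns False; keeping the classification logic separate from the HTTP call makes it easy to test.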
In earlier versions it is possible to build an Authentication Service that calls an Activity rule with similar behavior.
I suggest, if you take this route, that the URL for the Authentication Service or the REST service be protected with either Realm security or mutual authentication, to avoid creating a 'backdoor'.
@JW@Stratosphere, when we have a multi-node clustered setup, application users are given cluster URLs and the cluster distributes requests among the nodes. In that case, how can we know the health of individual nodes? Most setups don't allow direct service invocation on any node other than the default host and port, which point to the cluster's web server.
Just a thought: an agent (running on all nodes) that writes an entry to a database table, along with its node name/id, at your configured time interval. All you then have to do is build a report definition on that table and check pxResultCount to decide whether all nodes are up.
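The heartbeat check described above can be sketched as follows. The table is modeled here as a simple node-id-to-timestamp mapping, and the staleness threshold and node names are assumptions for illustration; in Pega the same comparison would be done by the report definition's filter on the timestamp column and its pxResultCount.

```python
from datetime import datetime, timedelta

def live_nodes(heartbeats: dict, now: datetime, interval: timedelta) -> set:
    """Return the set of nodes whose last heartbeat is recent enough.
    `heartbeats` maps node id -> last heartbeat timestamp, mirroring
    one row per node in the heartbeat table."""
    return {node for node, ts in heartbeats.items() if now - ts <= interval}

def all_nodes_up(heartbeats: dict, expected_nodes, now: datetime,
                 interval: timedelta) -> bool:
    # Mirrors the report-definition check: compare the count of recent
    # rows against the number of nodes you expect in the cluster.
    return live_nodes(heartbeats, now, interval) >= set(expected_nodes)
```

In practice you would give the staleness threshold some slack over the agent interval (e.g. twice the interval) so a single delayed write does not raise a false alarm.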