There is too little information in your question to give a definitive answer, so I can only offer directions on how to think about it.
What are the possibilities for running in parallel? Do later records depend on earlier records, or are there other interdependencies?
Is there contention on other parts of the system? That is, are there resources that force your processing to queue up (database table locking, a printer, a logging facility, etc.)?
Is your processing CPU-intensive? Storage-intensive? Do you have enough main memory to avoid swapping?
What are your time constraints? How long does it take, and how long is it allowed to take? Can you run on weekends, when the system has few users, or even on a dedicated system? Is this processing needed daily, hourly, or weekly, or is it a one-time job?
You can run Pega agents on each node in a cluster and have them run in parallel, and you can have agents dispatch work to other parallel processing, but only if your data and system are set up in a way that makes this possible.
Sorry I cannot answer your question more concretely.
As Bernving mentioned, it depends on how you want to run the agent. We had a similar requirement, though not exactly one lakh (100,000) records; the volume varied when viewed on a monthly basis. We used to create rows in an Excel sheet, about 5,000 records at a time.
If time is not a problem, you can process a limited number of records at a time based on your available memory. You may have to make multiple SQL calls, but after each iteration you should clean up your clipboard pages. In this case memory and CPU may not be impacted much.
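A minimal sketch of that chunked approach, using an in-memory list as a stand-in for the database table (the helper names `fetchBatch` and the batch size are illustrative, not Pega APIs). In a real system `fetchBatch` would be a keyset-paginated SQL query such as `SELECT id FROM work_table WHERE id > ? ORDER BY id LIMIT ?`, so each iteration only holds one batch in memory:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchProcessor {
    static final int BATCH_SIZE = 5000; // tune to available memory

    // Stand-in for the database: in reality this would be the SQL query
    // SELECT id FROM work_table WHERE id > ? ORDER BY id LIMIT ?
    static List<Integer> fetchBatch(List<Integer> table, int lastId, int limit) {
        List<Integer> batch = new ArrayList<>();
        for (int id : table) {
            if (id > lastId) {
                batch.add(id);
                if (batch.size() == limit) break;
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        List<Integer> table = new ArrayList<>();
        for (int i = 1; i <= 100_000; i++) table.add(i); // one lakh records

        int lastId = 0;
        int processed = 0;
        while (true) {
            List<Integer> batch = fetchBatch(table, lastId, BATCH_SIZE);
            if (batch.isEmpty()) break;
            for (int id : batch) {
                processed++; // the per-record work would go here
            }
            lastId = batch.get(batch.size() - 1);
            // In Pega, remove the batch's clipboard pages here so that
            // memory stays flat between iterations.
        }
        System.out.println("processed=" + processed);
    }
}
```

Keyset pagination (`WHERE id > ?`) is preferable to `OFFSET` here because the cost of each fetch stays constant as you advance through the table.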
If time is a problem, you can run agents on multiple nodes. This may require some additional columns and write-back operations in the database to avoid duplicate processing.
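One common pattern for that write-back is a "claim" column: each node atomically marks a record with its own id before processing it and skips records already claimed. The sketch below simulates the atomic claim with a `ConcurrentHashMap`; in SQL it would be a guarded `UPDATE ... WHERE status = 'PENDING'` where the node proceeds only if one row was updated. All names here are illustrative, not Pega APIs:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClaimDemo {
    // Stand-in for the status/node columns; in SQL the claim would be:
    //   UPDATE work_table SET status = 'CLAIMED', node = ?
    //   WHERE id = ? AND status = 'PENDING'
    // and the node processes the record only if exactly 1 row was updated.
    static final Map<Integer, String> claims = new ConcurrentHashMap<>();

    static boolean tryClaim(int recordId, String nodeId) {
        // putIfAbsent is atomic, like the guarded UPDATE above:
        // it succeeds for exactly one node per record.
        return claims.putIfAbsent(recordId, nodeId) == null;
    }

    public static void main(String[] args) {
        // Two "nodes" race for the same record; only one wins the claim.
        boolean a = tryClaim(42, "node-A");
        boolean b = tryClaim(42, "node-B");
        System.out.println("node-A claimed: " + a + ", node-B claimed: " + b);
    }
}
```

The same idea extends to crash recovery: add a claim timestamp so records claimed by a node that died can be reclaimed after a timeout.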