LSA Data Excellence Webinar: Methodology-Related Questions
How to handle changing requirements around the data design and model?
With some in-depth questioning of your subject matter experts (SMEs), you can get an informed view of how likely the data model is to change over time. The bottom line, though, is that you can’t predict the future. The recommendation is to build what you know today, and to build it in the most broadly reusable way, leveraging techniques such as dynamic referencing to protect yourself against changes to class names. When rule names and rule placement do need to change, you can refactor those rules as needed.
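To make the indirection idea concrete, here is a minimal sketch in plain Python. This is not Pega syntax, and the registry and names are hypothetical; it only illustrates the general principle behind dynamic referencing: resolve a concrete class name through one configuration lookup at runtime instead of hardcoding it everywhere, so a rename touches a single entry rather than every caller.

```python
# Illustrative sketch only -- not Pega syntax. The registry and class
# names below are hypothetical.

# A single configuration point binding stable logical names to the
# concrete class names in use today.
CLASS_REGISTRY = {
    "Collateral": "MyOrg-Data-Collateral-Vehicle",
}

def resolve_class(logical_name: str) -> str:
    """Return the concrete class name currently bound to a logical name."""
    return CLASS_REGISTRY[logical_name]

# Callers depend only on the logical name. If the concrete class is
# renamed later, only the registry entry above changes.
concrete = resolve_class("Collateral")
```

The design point is the same one dynamic referencing gives you in Pega: callers are insulated from class renames because the binding lives in one place.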
How to design data when you need to deliver your first MLP in 30 days?
Keep your data design as open as you can, yet completely usable for your day 1 delivery. For example, you may choose to implement your data classes at the organization level to support your MLP, then only specialize those classes at the application level for future releases.
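As a rough analogy for that layering, the sketch below uses plain Python classes (the names are hypothetical, and this is not how Pega class rules are authored): define the shared structure once at the organization layer, then specialize at the application layer only when a later release actually needs it.

```python
# Illustrative sketch -- hypothetical class names, plain Python rather
# than Pega class rules.
from dataclasses import dataclass

@dataclass
class OrgCustomer:
    # Organization-level class: the fields every application shares.
    customer_id: str
    name: str

@dataclass
class LoanAppCustomer(OrgCustomer):
    # Application-level specialization, added in a future release.
    credit_score: int = 0
```

Because a `LoanAppCustomer` is still a valid `OrgCustomer`, anything built against the organization-level class keeps working as application-level specializations appear.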
Whose responsibility is the data design (LSA, LBA or a client data steward)?
All of the above. If you happen to have a Data Modeler or client data steward available to you, those resources can be incredibly helpful in building an extensible, “future proof” data model, since they know the data best. Ultimately, though, the LSA is responsible for the quality and extensibility of the data model as it pertains to the Pega application.
Is data design part of sprint 0?
I would say the answer is “yes”, although the recommendation is to provide only the foundational data model needed to support the first MLP. Thinking beyond that MLP data model will put you in a better position to evolve your application over time, although you may not actually implement any data model changes until you’ve committed to that work in an upcoming sprint. Until then, that design might remain on “virtual” paper.
What do I do if I don't understand the client's entire enterprise data model before we start? What if the client doesn't understand their data model?
Let’s break this down. If you don’t understand the client’s enterprise model, ASK someone at the client to explain it to you. Most larger clients have an enterprise architecture team who can either explain the model or point you to someone who can. For less technically mature clients, or those without an enterprise architecture team, the data model may be scattered across multiple departmental applications. In that case, you may have to abstract the data model from those applications, identifying the data objects and relationships that form the “big picture” data model supporting your application.
With that said, you do not have to have a complete understanding of the entire enterprise data model to bring business value on day 1. In the Loan Application example from the webinar, we designed for and built out what we knew we needed to support other collateral types in the future. With some additional questioning of your SMEs, you can begin to infer patterns of where data objects you are defining today could benefit from reuse techniques that we covered in the webinar.
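One such reuse pattern can be sketched in plain Python (the class and field names are hypothetical, not from the webinar's build): defining a common collateral base today lets tomorrow’s collateral types plug in as extensions rather than rework.

```python
# Illustrative sketch -- hypothetical names, plain Python rather than
# Pega class rules.
from dataclasses import dataclass

@dataclass
class Collateral:
    # Common base: fields shared by every collateral type.
    appraised_value: float

@dataclass
class VehicleCollateral(Collateral):
    vin: str = ""

@dataclass
class PropertyCollateral(Collateral):
    # Added in a later release; existing logic needs no change.
    parcel_id: str = ""

def total_collateral_value(items: list[Collateral]) -> float:
    # Works unchanged as new collateral subclasses are introduced.
    return sum(item.appraised_value for item in items)
```

Logic written against the base class keeps working as new collateral types are added, which is the kind of reuse the SME questioning is meant to surface early.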
Can I still deliver a Low Code App Factory build in 2 weeks whilst adopting these data design recommendations?
You can both release what you need for your first App Factory delivery and continue to evolve your data model design and implementation. As new applications come through your App Factory, you have an opportunity to apply the techniques covered in the webinar to build reusability and extensibility into your data model over time. Leveraging these techniques is an investment in the foundation of your application. You do not have to sacrifice velocity for a solid data model design.