nowap Member since 2016 13 posts
Posted: 10 months ago

Implementing Apache Kafka® with Apache Avro serialization in Pega


The attached document captures lessons learned, for future reuse, with the implementation details of integrating the Pega Platform with the Apache Kafka streaming platform using Apache Avro serialization.

Use Case

Demonstrate how to integrate the Apache Kafka event streaming platform with Apache Avro serialization, consuming Avro-encoded messages through a Pega Real-Time Data Flow run.

Apache Kafka®

Apache Kafka® is a distributed streaming platform with three key capabilities:

  • publish and subscribe to streams of records, like a message queue or enterprise messaging system
  • store streams of records in a fault-tolerant durable way
  • process streams of records as they occur

Kafka is generally used for two broad classes of applications:

  • building real-time streaming data pipelines that reliably move data between systems or applications
  • building real-time streaming applications that transform or react to streams of data

More about Apache Kafka: https://kafka.apache.org/
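The three capabilities above all rest on Kafka's core abstraction: an append-only log that consumers read at their own offsets. The sketch below is not the Kafka client API — it is a deliberately tiny, hypothetical in-memory model (class and method names are made up) that illustrates why storing records durably lets many consumer groups process the same stream independently.

```python
from collections import defaultdict

class ToyTopic:
    """Illustrative append-only log, mimicking Kafka's topic/offset model.

    This is NOT the Kafka client API -- just a sketch of the
    publish/subscribe-over-a-durable-log idea described above.
    """
    def __init__(self):
        self.log = []                      # records are stored, not deleted on read
        self.offsets = defaultdict(int)    # per-consumer-group read position

    def publish(self, record):
        self.log.append(record)            # producers append to the end of the log

    def poll(self, group, max_records=10):
        # Each group reads from its own offset, so multiple groups can
        # independently process the same stream of records.
        start = self.offsets[group]
        batch = self.log[start:start + max_records]
        self.offsets[group] += len(batch)
        return batch

topic = ToyTopic()
topic.publish({"event": "order_created", "id": 1})
topic.publish({"event": "order_paid", "id": 1})

print(topic.poll("billing"))    # billing group sees both records
print(topic.poll("analytics"))  # analytics group re-reads the same stream
print(topic.poll("billing"))    # billing is caught up: empty batch
```

Because reading does not remove records, a Pega Data Flow acting as one consumer group can process the same topic that other downstream systems also consume.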

Apache Avro

"Apache Avro is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format."

More about Apache Avro: https://avro.apache.org/
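To make the quoted description concrete — a JSON schema and a compact binary wire format — here is a minimal Python sketch of Avro's binary encoding rules from the Avro specification: ints as zig-zag varints, strings as a length prefix plus UTF-8 bytes, and record fields written in schema order with no names or tags on the wire. The `Customer` schema is a made-up example, and the encoder covers only the two types it needs; real applications should use an Avro library.

```python
import json

# Hypothetical schema for illustration -- Avro schemas are plain JSON.
SCHEMA = json.loads("""
{
  "type": "record",
  "name": "Customer",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age",  "type": "int"}
  ]
}
""")

def zigzag_varint(n: int) -> bytes:
    """Avro writes ints as zig-zag values in variable-length (varint) form."""
    z = (n << 1) ^ (n >> 63)  # zig-zag: small magnitudes become small codes
    out = bytearray()
    while True:
        byte = z & 0x7F
        z >>= 7
        if z:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_record(schema: dict, record: dict) -> bytes:
    """Encode a record per the Avro binary spec: field values in schema
    order; the schema, not the payload, carries the structure."""
    out = b""
    for field in schema["fields"]:
        value = record[field["name"]]
        if field["type"] == "string":
            data = value.encode("utf-8")
            out += zigzag_varint(len(data)) + data  # length-prefixed UTF-8
        elif field["type"] == "int":
            out += zigzag_varint(value)
        else:
            raise NotImplementedError(field["type"])
    return out

encoded = encode_record(SCHEMA, {"name": "Ann", "age": 30})
print(encoded.hex())  # 06416e6e3c -- five bytes for the whole record
```

The absence of field names on the wire is what makes the format compact, and it is also why the consumer (here, the Pega Data Flow) must know the schema to decode the messages.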
