Pawel Nowak (nowap)
Principal System Architect - Pega Consulting
Pegasystems Inc.
Posted: December 9, 2019

Implementing Apache Kafka® with Apache Avro serialization in Pega


The attached document captures lessons learned, for future reuse, from integrating the Pega Platform with the Apache Kafka streaming platform using Apache Avro serialization, including the implementation details.

Use Case

Demonstrate an integration with the Apache Kafka event streaming platform that consumes Apache Avro-serialized messages using a Pega real-time Data Flow run.

Apache Kafka®

Apache Kafka® is a distributed streaming platform with three key capabilities:

  • publish and subscribe to streams of records, similar to a message queue or enterprise messaging system
  • store streams of records in a fault-tolerant, durable way
  • process streams of records as they occur
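The abstraction behind these capabilities is an append-only, partitioned log that consumers read by offset. The toy model below (plain Python, no broker, not the Kafka client API) is only a conceptual sketch of that idea; names like `ToyLog` are illustrative, not part of Kafka:

```python
from collections import defaultdict

class ToyLog:
    """Toy model of Kafka's core abstraction: an append-only,
    partitioned log of records that consumers read by offset."""

    def __init__(self):
        self.partitions = defaultdict(list)  # partition id -> ordered records

    def publish(self, partition, record):
        # Append-only: records are stored durably in arrival order.
        self.partitions[partition].append(record)
        return len(self.partitions[partition]) - 1  # offset of the new record

    def consume(self, partition, offset):
        # Reading does not remove records; each consumer keeps its own offset,
        # so independent consumers can replay the same stream.
        return self.partitions[partition][offset:]

log = ToyLog()
log.publish(0, "order-created")
log.publish(0, "order-paid")
print(log.consume(0, 0))  # a new consumer sees the whole stream
print(log.consume(0, 1))  # another consumer resumes from offset 1
```

Because records stay in the log after being read, the same topic can feed both a message-queue-style consumer group and a stream processor that replays history.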

Kafka is generally used for two broad classes of applications:

  • building real-time streaming data pipelines that reliably move data between systems or applications
  • building real-time streaming applications that transform or react to streams of data

More about Apache Kafka: https://kafka.apache.org/

Apache Avro

"The Apache Avro is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format."

More about Apache Avro: https://avro.apache.org/

Tags: Pega Platform 8.2.1, Low-Code App Development, Enterprise Application Development, Dev/Designer Studio, App Factory, Case Management, Data Integration, Decision Management, Healthcare and Life Sciences, Lead System Architect, Developer Knowledge Share