I have been spending some time this weekend trying to understand Kafka.
Kafka has four core APIs:
- The Producer API
- allows an application to publish a stream of records to one or more Kafka topics.
- The Consumer API
- allows an application to subscribe to one or more topics and process the stream of records produced to them.
- the Producer and Consumer APIs together provide essentially the functionality of Azure Event Hubs
- The Streams API
- allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
- the Streams API plays a role similar to Azure Stream Analytics (ASA).
- The Connector API
- allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table. 🙂
- the Connector API plays a role similar to Azure Data Factory (ADF).
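To make the Producer/Consumer publish-subscribe model concrete, here is a minimal in-memory sketch. This is *not* the Kafka client API — just a toy Python illustration of the relationship between producers, topics, and consumers (the `Broker` class and its method names are my own invention):

```python
from collections import defaultdict


class Broker:
    """Toy in-memory stand-in for a Kafka cluster: each topic is an
    append-only list of records, and consumers read from an offset."""

    def __init__(self):
        self.topics = defaultdict(list)

    def publish(self, topic, record):
        # Producer API idea: append a record to a named topic
        self.topics[topic].append(record)

    def consume(self, topic, offset=0):
        # Consumer API idea: read records from a topic, starting at an offset,
        # without removing them (multiple consumers can re-read the stream)
        return self.topics[topic][offset:]


broker = Broker()
broker.publish("clicks", {"user": "alice", "page": "/home"})
broker.publish("clicks", {"user": "bob", "page": "/docs"})
records = broker.consume("clicks")
```

Note that `consume` does not delete anything — records stay in the topic log, which is the key difference between Kafka topics and a traditional message queue.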
The documentation uses the WordCountDemo as the motivating example for the Streaming scenario. So I decided to spend some time over the weekend understanding this API better.
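Before reading the Java WordCountDemo itself, it helped me to restate what it computes: a running count per word, updated as each record arrives on the input stream. Here is a plain-Python sketch of that transformation (no Kafka involved — the generator below simply mimics the flatMap-to-words, group-by-word, count shape of the demo):

```python
from collections import Counter


def word_count(stream):
    """Consume lines from an input stream and maintain running per-word
    counts, emitting (word, updated_count) after each occurrence --
    mimicking the continuously-updated output of a word-count topology."""
    counts = Counter()
    for line in stream:
        for word in line.lower().split():
            counts[word] += 1
            yield (word, counts[word])


updates = list(word_count(["all streams lead to kafka",
                           "hello kafka streams"]))
# each word produces an update per occurrence, e.g. "kafka" is
# emitted with count 1 and again with count 2
```

The important point the real demo makes is the same one this sketch does: the output is itself a stream of updates, not a single final table.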