If you are an Apache Kafka developer looking to write stream-processing applications in Flink, the initial setup isn’t so obvious. Apache Flink has its own opinions on consuming from and producing to Kafka, as well as on integrating with Confluent Schema Registry. Here are the steps, with a working example, to get an Apache Kafka and Apache Flink streaming platform up in no time.
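To give a flavor of the Flink side, here is a minimal sketch of a job consuming from Kafka with the KafkaSource connector (Flink 1.14+). The broker address, topic, and group id are placeholders; the full walkthrough, including the Schema Registry integration, is in the referenced project.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaFlinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder broker/topic/group id -- adjust to your cluster.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("input-topic")
                .setGroupId("flink-demo")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Read from Kafka, apply a trivial transformation, and print to stdout.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
           .map(String::toUpperCase)
           .print();

        env.execute("kafka-flink-sketch");
    }
}
```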
Insights
Not all Kafka integration tools are the same. Some integration systems produce only JSON data without a schema, while the JDBC Sink Connector requires one. Here are steps showcasing a low-code option for pushing events into a relational database when the source data is schema-less JSON.
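As background on why the schema matters: Kafka Connect's JsonConverter, with schemas.enable=true, expects each record to carry a schema/payload envelope, which is what gives the JDBC Sink Connector the column names and types it needs. Below is a minimal producer sketch of that envelope; the broker address and the customers topic are hypothetical, and this is not necessarily the article's approach.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SchemaEnvelopeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // The schema/payload envelope understood by JsonConverter when
        // schemas.enable=true; the JDBC sink derives its columns from it.
        String value = "{"
                + "\"schema\":{\"type\":\"struct\",\"name\":\"customer\",\"optional\":false,"
                + "\"fields\":[{\"field\":\"id\",\"type\":\"int32\"},"
                + "{\"field\":\"name\",\"type\":\"string\"}]},"
                + "\"payload\":{\"id\":1,\"name\":\"alice\"}}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("customers", "1", value));
        }
    }
}
```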
Are you interested in setting up Kafka without ZooKeeper, using a dedicated controller quorum? Here are the steps and a reference project showing how to do this with the Confluent community-licensed container images. A Grafana dashboard for observing the new metrics is also provided.
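Once such a cluster is up, one way to sanity-check the controller quorum is through Kafka's AdminClient, which exposes describeMetadataQuorum as of Kafka 3.3. A minimal sketch, assuming a broker listener on localhost:9092:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.QuorumInfo;

public class QuorumCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed listener; point this at any broker in the KRaft cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Ask the cluster who leads the metadata quorum and who the voters are.
            QuorumInfo quorum = admin.describeMetadataQuorum().quorumInfo().get();
            System.out.println("quorum leader: " + quorum.leaderId());
            quorum.voters().forEach(v -> System.out.println(
                    "voter " + v.replicaId()
                    + " logEndOffset=" + v.logEndOffset()
                    + " lastFetch=" + v.lastFetchTimestamp()));
        }
    }
}
```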
If you want to verify that an expected String key really is what you think it is, BytesDeserializer is a better choice than StringDeserializer for your console consumers: it shows you the raw bytes instead of silently decoding whatever is there as text.
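Here is the same check sketched as a plain Java consumer (the console consumer accepts the deserializer class via its --key-deserializer option); the broker address and topic are placeholders.

```java
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.BytesDeserializer;
import org.apache.kafka.common.utils.Bytes;

public class KeyInspector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "key-inspector");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, BytesDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, BytesDeserializer.class.getName());

        try (KafkaConsumer<Bytes, Bytes> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("input-topic"));
            consumer.poll(Duration.ofSeconds(5)).forEach(r -> {
                byte[] key = r.key() == null ? new byte[0] : r.key().get();
                StringBuilder hex = new StringBuilder();
                for (byte b : key) hex.append(String.format("%02x ", b));
                // A key written by a Schema Registry serializer starts with a 0x00
                // magic byte plus a 4-byte schema id -- bytes that StringDeserializer
                // would render as misleading text.
                System.out.printf("key bytes: [%s] as string: %s%n",
                        hex.toString().trim(), new String(key, StandardCharsets.UTF_8));
            });
        }
    }
}
```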
Introduction
Are you interested in using Grafana to monitor an Apache Kafka cluster? Are you unsure whether it can be integrated with your specific cluster configuration? Using Grafana requires a fair amount of infrastructure to be established, and while there are plenty of examples out there, you can spend a lot of time adjusting dashboards to get the setup you want.
This article sets up multiple Kafka cluster configurations to explore the nuances of the various tools that monitor Apache Kafka.