kafka and kafka-ui, without zookeeper
name: kafka-sandbox
services:
  kafka:
    image: bitnami/kafka:4.0.0
    environment:
      - KAFKA_CLUSTER_ID=lkorDA4qT6W1K_dk0LHvtg
      # Start KRaft setup (Kafka as controller - no ZooKeeper)
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_BROKER_ID=1
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LOG_DIRS=/tmp/logs
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,INTERNAL://:9094
      # End KRaft-specific setup
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092,INTERNAL://kafka:9094
    ports:
      - "0.0.0.0:9092:9092"
  kafka-ui:
    image: provectuslabs/kafka-ui
    ports:
      - "8080:8080"
    restart: "always"
    environment:
      KAFKA_CLUSTERS_0_NAME: "lkorDA4qT6W1K_dk0LHvtg"
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9094
    depends_on:
      - kafka
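The two advertised listeners split the traffic: clients on the host connect via 127.0.0.1:9092, while other containers (kafka-ui here) use kafka:9094. A quick smoke test, assuming the Kafka CLI scripts are on the container's PATH as they normally are in the Bitnami image (the topic name "test" is just an example):

docker compose up -d
docker compose exec kafka kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --create --topic test
docker compose exec kafka kafka-console-producer.sh --bootstrap-server 127.0.0.1:9092 --topic test
docker compose exec kafka kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test --from-beginning

The kafka-ui frontend should then be reachable at http://localhost:8080.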
Actually, I don't know. I'm using this config only for local development, so I never thought about persistence. However, the Bitnami Docker image page (see the "Persisting your data" section) suggests a slightly different approach and uses a different path for data inside the container:
kafka:
  ...
  volumes:
    - /path/to/kafka-persistence:/bitnami/kafka
  ...
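One caveat: the compose file above sets KAFKA_CFG_LOG_DIRS=/tmp/logs, so mounting /bitnami/kafka by itself would not capture the actual log segments. A sketch of how the two could be aligned (I believe /bitnami/kafka/data is the image's default data dir, but treat that as an assumption; simply dropping the KAFKA_CFG_LOG_DIRS override should have the same effect):

kafka:
  ...
  environment:
    # keep the log segments under the mounted path (assumed Bitnami default)
    - KAFKA_CFG_LOG_DIRS=/bitnami/kafka/data
  volumes:
    - /path/to/kafka-persistence:/bitnami/kafka
  ...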
Thanks again for the help.
I'm using your compose example in production where I work.
As a demonstration of the current data load that Kafka handles: we have a single topic that receives a high volume of messages per hour, and it is configured to retain data for 8 hours. Only with Kafka are we able to absorb this data and then insert it into a Postgres DB.
Kafka with a single broker can handle a lot of data; this docker compose is approved 😅
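For reference, an 8-hour retention like that can be set per topic with the stock Kafka CLI (a hedged example; "orders" is a made-up topic name, and 8 hours is 28800000 ms):

kafka-configs.sh --bootstrap-server 127.0.0.1:9092 --alter \
  --entity-type topics --entity-name orders \
  --add-config retention.ms=28800000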
Thank you very much again. It helped me a lot.
One last question (sorry to bother you). I added some extra config to my docker compose, inside the kafka broker section and at the end of the file. Would that be correct? I want the data being written to Kafka to be stored, so that if the broker restarts, nothing in the queue is lost.
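For reference, a minimal sketch of what such a persistence setup could look like (kafka_data is a hypothetical named volume; the /bitnami/kafka path follows the Bitnami docs mentioned in the reply above):

services:
  kafka:
    ...
    volumes:
      - kafka_data:/bitnami/kafka

volumes:
  kafka_data: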