
@nobbynobbs
Last active October 22, 2025 12:58
Kafka and kafka-ui, without ZooKeeper
name: kafka-sandbox
services:
  kafka:
    image: bitnami/kafka:4.0.0
    environment:
      - KAFKA_CLUSTER_ID=lkorDA4qT6W1K_dk0LHvtg
      # Start KRaft setup (Kafka acts as its own controller - no ZooKeeper)
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_BROKER_ID=1
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LOG_DIRS=/tmp/logs
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,INTERNAL://:9094
      # End KRaft-specific setup
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092,INTERNAL://kafka:9094
    ports:
      - "0.0.0.0:9092:9092"
  kafka-ui:
    image: provectuslabs/kafka-ui
    ports:
      - "8080:8080"
    restart: "always"
    environment:
      KAFKA_CLUSTERS_0_NAME: "lkorDA4qT6W1K_dk0LHvtg"
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9094
    depends_on:
      - kafka
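A common point of confusion with this setup is the listener split: host clients connect to the broker on 127.0.0.1:9092 (the published port), while containers on the compose network, like kafka-ui, use the INTERNAL listener at kafka:9094. A minimal Python sketch of how the advertised.listeners value maps listener names to addresses (the helper function is illustrative, not part of Kafka or the Bitnami image):

```python
def parse_advertised_listeners(value: str) -> dict[str, str]:
    """Map listener name -> host:port from a Kafka advertised.listeners string."""
    listeners = {}
    for entry in value.split(","):
        name, address = entry.split("://", 1)
        listeners[name] = address
    return listeners


# The value from KAFKA_CFG_ADVERTISED_LISTENERS above.
advertised = parse_advertised_listeners(
    "PLAINTEXT://127.0.0.1:9092,INTERNAL://kafka:9094"
)

# Host clients bootstrap via the published 9092 port and are told to keep
# using 127.0.0.1:9092; containers on the compose network (like kafka-ui)
# bootstrap via kafka:9094 and are told to keep using that address.
print(advertised["PLAINTEXT"])  # 127.0.0.1:9092
print(advertised["INTERNAL"])   # kafka:9094
```

If a client connects on one address but the broker advertises an unreachable one for that listener, the initial connection succeeds and subsequent requests fail, which is why the two listeners advertise different hosts.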
@abhikr4545

Thanks for this helpful Docker Compose file!

@erickythierry

Thank you very much. It helped me a lot.
One question: when I tried to update the Kafka version to 4.0.0, the broker got stuck in a loop.
Do you know how to adjust these settings to make it work with the current version?

@nobbynobbs
Author

@erickythierry check the second revision of the gist please.

@erickythierry

Thank you very much again. It helped me a lot.
One last question (sorry to bother you):
I added this to my docker compose inside the kafka broker config:

volumes:
      - kafka-data:/bitnami/kafka/data

and at the end of the file I added this:

volumes:
  kafka-data:

Would that be correct?
I want the data inserted into Kafka to persist, so that nothing in the queue is lost if the broker restarts.

@nobbynobbs
Author

nobbynobbs commented Apr 8, 2025

Actually I don't know. I'm using this config only for local development, so I never thought about persistence. However, the Bitnami Docker image page (see the "Persisting your data" section) suggests a slightly different approach and uses a different path for data inside the container:

kafka:
  ...
  volumes:
    - /path/to/kafka-persistence:/bitnami/kafka
  ...
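Combining the named volume from the question with the path the Bitnami docs use would look roughly like this (untested sketch; the volume name kafka-data is arbitrary, and the rest of the kafka service stays as in the gist above):

```yaml
services:
  kafka:
    image: bitnami/kafka:4.0.0
    # ...same environment and ports as above...
    volumes:
      - kafka-data:/bitnami/kafka   # path from the Bitnami "Persisting your data" docs

volumes:
  kafka-data:
```

Note that with KAFKA_CFG_LOG_DIRS=/tmp/logs the message logs land in /tmp inside the container, so that variable would likely need to be removed or pointed under /bitnami/kafka for the volume to capture them.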

@erickythierry

Thanks again for the help.
I'm using your compose example in production here where I work.

A demonstration of the current data load that Kafka handles:
[screenshot: topic throughput metrics]
We have a single topic that receives a high volume of messages per hour, and it is configured to retain data for 8 hours.
Only with Kafka can we absorb this data and then insert it into a Postgres DB.
A single-broker Kafka can handle a lot of data; this docker compose is approved 😅

@swuecho

swuecho commented Oct 22, 2025

  - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092,INTERNAL://kafka:9094

If you need to connect from a remote server, change the 127.0.0.1 to the IP address of the Docker host.
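For example, assuming the Docker host is reachable at 203.0.113.10 (a placeholder address; substitute your own), the change would be:

```yaml
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://203.0.113.10:9092,INTERNAL://kafka:9094
```

The existing "0.0.0.0:9092:9092" port mapping already publishes the listener on all host interfaces; only the advertised address needs to change so remote clients are told a host they can actually reach.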
