Commit 6ed7f6b

feat(docker): cherry pick #2468 #2469
Signed-off-by: Robin Han <[email protected]>
1 parent 4afee95 commit 6ed7f6b

File tree

3 files changed: +287 additions, -107 deletions

docker/README.md

Lines changed: 75 additions & 5 deletions
@@ -34,7 +34,7 @@ Building image and running tests using github actions
- `native` image type is for graalvm based `native` kafka docker image (to be hosted on apache/kafka-native) as described in [KIP-974](https://cwiki.apache.org/confluence/display/KAFKA/KIP-974%3A+Docker+Image+for+GraalVM+based+Native+Kafka+Broker#KIP974:DockerImageforGraalVMbasedNativeKafkaBroker-ImageNaming)

- Example (jvm):
  To build and test a jvm image where the Kafka release to be containerised is https://archive.apache.org/dist/kafka/3.6.0/kafka_2.13-3.6.0.tgz (the Scala 2.13 binary tarball is recommended), the following inputs to the GitHub Actions workflow are recommended:
```
image_type: jvm
kafka_url: https://archive.apache.org/dist/kafka/3.6.0/kafka_2.13-3.6.0.tgz
@@ -52,7 +52,7 @@ Creating a Release Candidate using github actions
- Go to `Build and Push Release Candidate Docker Image` Github Actions Workflow.
- Choose the `image_type` and provide the `kafka_url` that needs to be containerised in the `rc_docker_image` that will be pushed to github.
- Example (jvm):
  If you want to push a jvm image containing Kafka from https://archive.apache.org/dist/kafka/3.6.0/kafka_2.13-3.6.0.tgz to Docker Hub under the namespace apache, with repo name kafka and image tag 3.6.0-rc1, the following values need to be added in the GitHub Actions workflow:
```
image_type: jvm
kafka_url: https://archive.apache.org/dist/kafka/3.6.0/kafka_2.13-3.6.0.tgz
@@ -73,7 +73,7 @@ Promoting a Release Candidate using github actions
- Go to `Promote Release Candidate Docker Image` Github Actions Workflow.
- Choose the RC docker image (`rc_docker_image`) that you want to promote and where it needs to be pushed to (`promoted_docker_image`), i.e. the final docker image release.
- Example (jvm):
  If you want to promote the apache/kafka:3.6.0-rc0 RC docker image to apache/kafka:3.6.0, the following parameters can be provided to the workflow:
```
rc_docker_image: apache/kafka:3.6.0-rc0
promoted_docker_image: apache/kafka:3.6.0
@@ -114,6 +114,29 @@ Run `pip install -r requirements.txt` to get all the requirements for running th

Make sure you have docker installed with support for buildx enabled. (For pushing multi-architecture image to docker registry)

Running local code in docker
---------------------------------------

- Run these commands from the project root folder.

1. Generate the tgz:
```shell
# For example only, can be modified based on your compilation requirements
./gradlew releaseTarGz -x test -x check
```
2. Run:
```shell
docker-compose -f docker/local/docker-compose.yml up -d
```

- After modifying your code, simply regenerate the tgz and restart the affected services:
```shell
# For example only, can be modified based on your compilation requirements
./gradlew releaseTarGz -x test -x check
# e.g. restart the brokers
docker-compose -f docker/local/docker-compose.yml up broker1 broker2 -d --force-recreate
```
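The two steps above can be combined into one small helper function (a sketch only; the compose file path and the `broker1`/`broker2` service names are the ones used in this section, so adjust them to your setup):

```shell
# Sketch: regenerate the tarball, then recreate the broker containers.
# Runs the restart only if the Gradle build succeeds.
rebuild_and_restart() {
  ./gradlew releaseTarGz -x test -x check &&
  docker-compose -f docker/local/docker-compose.yml up broker1 broker2 -d --force-recreate
}
```

Calling `rebuild_and_restart` after each code change saves retyping both commands.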
Building image and running tests locally
---------------------------------------

@@ -187,7 +210,7 @@ image_type: jvm
kafka_version: 3.7.0
```

- Run the `docker/extract_docker_official_image_artifact.py` script, providing it the path to the downloaded artifact. This will create a new directory under `docker/docker_official_images/kafka_version`.

```
python extract_docker_official_image_artifact.py --path_to_downloaded_artifact=path/to/downloaded/artifact
@@ -210,6 +233,53 @@ python generate_kafka_pr_template.py --image-type=jvm
```

- kafka-version - This is the version to create the Docker official images static Dockerfile and assets for, as well as the version to build and test the Docker official image for.
- image-type - This is the type of image that we intend to build. This will be a dropdown-menu selection in the workflow. The `jvm` image type is for the official docker image (to be hosted on apache/kafka) as described in [KIP-975](https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka).
- **NOTE:** As of now [KIP-1028](https://cwiki.apache.org/confluence/display/KAFKA/KIP-1028%3A+Docker+Official+Image+for+Apache+Kafka) only aims to release JVM based Docker Official Images, not a GraalVM based native Apache Kafka docker image.

AutoMQ Docker Compose Configurations
====================================

This directory contains Docker Compose configurations for deploying AutoMQ in different scenarios.

Quick Start (Single Node)
-------------------------

The main `docker-compose.yaml` in the root directory provides a simple single-node setup for quick evaluation and development:

```bash
# From the root directory
docker-compose up -d
```

This configuration:
- Deploys a single AutoMQ node that acts as both controller and broker
- Includes MinIO for S3 storage
- Uses the latest bucket URI pattern (s3.data.buckets, s3.ops.buckets, s3.wal.path)
- Runs all services in a single Docker network

Production-like Cluster
-----------------------

For a more production-like setup, use the cluster configuration:

```bash
# From the root directory
docker-compose -f docker/docker-compose-cluster.yaml up -d
```

This configuration:
- Deploys a 3-server cluster
- Includes MinIO for S3 storage
- Uses the latest bucket URI pattern (s3.data.buckets, s3.ops.buckets, s3.wal.path)
- Runs all services in a single Docker network
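In the cluster configuration, each server's `node.id` is paired with its hostname in `controller.quorum.voters`; the mapping can be sketched in shell (the `serverN` hostnames and controller port 9093 follow `docker/docker-compose-cluster.yaml`; this snippet is illustrative, not something the repo ships):

```shell
# Build the controller.quorum.voters value for a 3-node cluster:
# node.id i runs on host server(i+1), controller port 9093.
voters=""
for i in 0 1 2; do
  voters="${voters:+$voters,}$i@server$((i + 1)):9093"
done
echo "$voters"  # 0@server1:9093,1@server2:9093,2@server3:9093
```

Every server must receive the identical voters string, which is why the compose file repeats it verbatim in all three `command` blocks.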
Configuration Notes
-------------------

Both configurations use the new bucket URI pattern as recommended in the AutoMQ documentation:

- `s3.data.buckets` for data storage
- `s3.ops.buckets` for logs and metrics storage
- `s3.wal.path` for the S3 WAL
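Each bucket URI packs an index, a bucket name, and connection parameters into a single string. A minimal shell decomposition of the pattern, using a URI from these compose configurations (illustrative only, not AutoMQ's actual parser):

```shell
# <index>@s3://<bucket>?<query> -- pull the parts out with
# POSIX parameter expansion.
uri='0@s3://automq-data?region=us-east-1&endpoint=http://minio:9000&pathStyle=true'
index="${uri%%@*}"      # text before the first "@"
rest="${uri#*@s3://}"   # strip index and scheme prefix
bucket="${rest%%\?*}"   # bucket name before the "?"
query="${rest#*\?}"     # region/endpoint/path-style parameters
echo "$index $bucket"   # 0 automq-data
```

The `endpoint` and `pathStyle=true` parameters are what point the brokers at the in-network MinIO instead of AWS S3.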
For more details, see the [AutoMQ documentation](https://www.automq.com/docs/automq/getting-started/cluster-deployment-on-linux#step-2-edit-the-cluster-configuration-template).

docker/docker-compose-cluster.yaml

Lines changed: 155 additions & 0 deletions
@@ -0,0 +1,155 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Three-node AutoMQ cluster with MinIO for S3 storage
version: "3.8"

x-common-variables: &common-env
  KAFKA_S3_ACCESS_KEY: minioadmin
  KAFKA_S3_SECRET_KEY: minioadmin
  KAFKA_HEAP_OPTS: -Xms1g -Xmx4g -XX:MetaspaceSize=96m -XX:MaxDirectMemorySize=1G
  # Replace CLUSTER_ID with a unique base64 UUID using "bin/kafka-storage.sh random-uuid"
  CLUSTER_ID: 5XF4fHIOTfSIqkmje2KFlg

services:
  # MinIO service for S3 storage
  minio:
    container_name: "minio"
    image: minio/minio
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
      MINIO_DOMAIN: minio
    ports:
      - "9000:9000" # MinIO API
      - "9001:9001" # MinIO Console
    command: [ "server", "/data", "--console-address", ":9001" ]
    networks:
      automq_net:
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://minio:9000/minio/health/live" ]
      interval: 5s
      timeout: 5s
      retries: 3

  # Create needed buckets
  mc:
    container_name: "mc"
    image: minio/mc
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      until (/usr/bin/mc config host add minio http://minio:9000 minioadmin minioadmin) do echo '...waiting...' && sleep 1; done;
      /usr/bin/mc rm -r --force minio/automq-data;
      /usr/bin/mc rm -r --force minio/automq-ops;
      /usr/bin/mc mb minio/automq-data;
      /usr/bin/mc mb minio/automq-ops;
      /usr/bin/mc policy set public minio/automq-data;
      /usr/bin/mc policy set public minio/automq-ops;
      tail -f /dev/null
      "
    networks:
      - automq_net

  # Three nodes for the AutoMQ cluster
  server1:
    container_name: "automq-server1"
    image: automqinc/automq:latest
    stop_grace_period: 1m
    environment:
      <<: *common-env
    command:
      - bash
      - -c
      - |
        /opt/automq/kafka/bin/kafka-server-start.sh \
        /opt/automq/kafka/config/kraft/server.properties \
        --override cluster.id=$$CLUSTER_ID \
        --override node.id=0 \
        --override controller.quorum.voters=0@server1:9093,1@server2:9093,2@server3:9093 \
        --override controller.quorum.bootstrap.servers=server1:9093,server2:9093,server3:9093 \
        --override advertised.listeners=PLAINTEXT://server1:9092 \
        --override s3.data.buckets='0@s3://automq-data?region=us-east-1&endpoint=http://minio:9000&pathStyle=true' \
        --override s3.ops.buckets='1@s3://automq-ops?region=us-east-1&endpoint=http://minio:9000&pathStyle=true' \
        --override s3.wal.path='0@s3://automq-data?region=us-east-1&endpoint=http://minio:9000&pathStyle=true'
    networks:
      automq_net:
    depends_on:
      - minio
      - mc

  server2:
    container_name: "automq-server2"
    image: automqinc/automq:latest
    stop_grace_period: 1m
    environment:
      <<: *common-env
    command:
      - bash
      - -c
      - |
        /opt/automq/kafka/bin/kafka-server-start.sh \
        /opt/automq/kafka/config/kraft/server.properties \
        --override cluster.id=$$CLUSTER_ID \
        --override node.id=1 \
        --override controller.quorum.voters=0@server1:9093,1@server2:9093,2@server3:9093 \
        --override controller.quorum.bootstrap.servers=server1:9093,server2:9093,server3:9093 \
        --override advertised.listeners=PLAINTEXT://server2:9092 \
        --override s3.data.buckets='0@s3://automq-data?region=us-east-1&endpoint=http://minio:9000&pathStyle=true' \
        --override s3.ops.buckets='1@s3://automq-ops?region=us-east-1&endpoint=http://minio:9000&pathStyle=true' \
        --override s3.wal.path='0@s3://automq-data?region=us-east-1&endpoint=http://minio:9000&pathStyle=true'
    networks:
      automq_net:
    depends_on:
      - minio
      - mc

  server3:
    container_name: "automq-server3"
    image: automqinc/automq:latest
    stop_grace_period: 1m
    environment:
      <<: *common-env
    command:
      - bash
      - -c
      - |
        /opt/automq/kafka/bin/kafka-server-start.sh \
        /opt/automq/kafka/config/kraft/server.properties \
        --override cluster.id=$$CLUSTER_ID \
        --override node.id=2 \
        --override controller.quorum.voters=0@server1:9093,1@server2:9093,2@server3:9093 \
        --override controller.quorum.bootstrap.servers=server1:9093,server2:9093,server3:9093 \
        --override advertised.listeners=PLAINTEXT://server3:9092 \
        --override s3.data.buckets='0@s3://automq-data?region=us-east-1&endpoint=http://minio:9000&pathStyle=true' \
        --override s3.ops.buckets='1@s3://automq-ops?region=us-east-1&endpoint=http://minio:9000&pathStyle=true' \
        --override s3.wal.path='0@s3://automq-data?region=us-east-1&endpoint=http://minio:9000&pathStyle=true'
    networks:
      automq_net:
    depends_on:
      - minio
      - mc

networks:
  automq_net:
    name: automq_net
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: "10.6.0.0/16"
          gateway: "10.6.0.1"
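The CLUSTER_ID in the compose file is a 22-character base64 UUID; `bin/kafka-storage.sh random-uuid` is the supported way to generate one, but the shape of the encoding can be sketched in shell (an assumption about the format: 16 random bytes, URL-safe base64 alphabet, padding stripped):

```shell
# Generate a random id in the same shape as CLUSTER_ID above:
# 16 random bytes -> base64 -> URL-safe alphabet -> drop "==" padding
cluster_id=$(head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=\n')
echo "$cluster_id"
```

For real deployments, prefer the output of `kafka-storage.sh random-uuid` itself.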
