Kubernetes: kafka-zookeeper on GKE (Google Kubernetes Engine)


Goal: Create a zookeeper and kafka cluster, add brokers, remove brokers, clean up.

If you wish to use your own namespace for this installation, be sure to replace itsmetommy with your own.

Create namespace

kubectl create ns itsmetommy

Clone git repository

git clone https://github.com/itsmetommy/kubernetes-kafka-zookeeper && cd kubernetes-kafka-zookeeper

Create

kubectl apply -f .

Example

kubectl apply -f .
service "kafka-svc" created
poddisruptionbudget.policy "kafka-pdb" created
statefulset.apps "kafka" created
service "zk-svc" created
poddisruptionbudget.policy "zk-pdb" created
statefulset.apps "zk" created

Logs

kubectl -n itsmetommy logs kafka-0
kubectl -n itsmetommy logs zk-0 --tail 20
kubectl -n itsmetommy exec zk-0 -- cat /usr/etc/zookeeper/log4j.properties

Create Topic

Exec into one of the kafka pods.

kubectl -n itsmetommy exec -it kafka-0 -- bash

We will create two topics:

  • Topic test1 — 3 partitions (3 partitions over 3 pods) and a replication factor of 3 (data replicated over 3 pods)
  • Topic test2 — 3 partitions (3 partitions over 3 pods) and a replication factor of 2 (data replicated over 2 pods)

Create topic test1

kafka-topics.sh --create \
  --topic test1 \
  --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 \
  --partitions 3 \
  --replication-factor 3

Create topic test2

kafka-topics.sh --create \
  --topic test2 \
  --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 \
  --partitions 3 \
  --replication-factor 2

Example

kafka@kafka-0:/$ kafka-topics.sh --create \
>   --topic test1 \
>   --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 \
>   --partitions 3 \
>   --replication-factor 3
Created topic "test1".
kafka@kafka-0:/$ kafka-topics.sh --create \
>   --topic test2 \
>   --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 \
>   --partitions 3 \
>   --replication-factor 2
Created topic "test2".

Increase Replication Factor

Let's change the replication factor of test2 from 2 to 3.

View the current replication factor

kafka-topics.sh --describe --topic test2 --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181

Example

Note: Notice the Isr (in-sync replicas) column.

kafka-topics.sh --describe --topic test2 --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181
Topic:test2	PartitionCount:3	ReplicationFactor:2	Configs:
	Topic: test2	Partition: 0	Leader: 0	Replicas: 0,2	Isr: 0,2
	Topic: test2	Partition: 1	Leader: 1	Replicas: 1,0	Isr: 1,0
	Topic: test2	Partition: 2	Leader: 2	Replicas: 2,1	Isr: 2,1

Create increase-replication-factor.json

cat > /tmp/increase-replication-factor.json

Example

cat > /tmp/increase-replication-factor.json # Hit ENTER
{"version":1,
  "partitions":[
     {"topic":"test2","partition":0,"replicas":[0,1,2]},
     {"topic":"test2","partition":1,"replicas":[0,1,2]},
     {"topic":"test2","partition":2,"replicas":[0,1,2]}
]} # Hit ENTER and CTRL+D
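
If you would rather not end input with CTRL+D, a heredoc produces the same file. This is plain bash, nothing Kafka-specific:

cat > /tmp/increase-replication-factor.json <<'EOF'
{"version":1,
  "partitions":[
     {"topic":"test2","partition":0,"replicas":[0,1,2]},
     {"topic":"test2","partition":1,"replicas":[0,1,2]},
     {"topic":"test2","partition":2,"replicas":[0,1,2]}
]}
EOF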

Execute

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --reassignment-json-file /tmp/increase-replication-factor.json --execute

Example

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --reassignment-json-file /tmp/increase-replication-factor.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"test2","partition":2,"replicas":[2,1]},{"topic":"test2","partition":1,"replicas":[1,0]},{"topic":"test2","partition":0,"replicas":[0,2]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.

Verify

kafka-topics.sh --describe --topic test2 --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181

Example

Note: Notice the Isr (in-sync replicas) column.

kafka-topics.sh --describe --topic test2 --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181
Topic:test2	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test2	Partition: 0	Leader: 0	Replicas: 0,1,2	Isr: 0,2,1
	Topic: test2	Partition: 1	Leader: 1	Replicas: 0,1,2	Isr: 1,0,2
	Topic: test2	Partition: 2	Leader: 2	Replicas: 0,1,2	Isr: 2,1,0
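
Notice that the leaders of partitions 1 and 2 (brokers 1 and 2) no longer match the first, preferred replica in the list (broker 0). If you want leadership rebalanced onto the preferred replicas, older Kafka releases ship kafka-preferred-replica-election.sh for this (newer releases replace it with kafka-leader-election.sh, so check your version). This step is optional:

kafka-preferred-replica-election.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181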

Consumer

Run a simple console consumer using the kafka-console-consumer.sh script. This lets you view incoming producer data in real time.

kafka-console-consumer.sh --topic test1 --bootstrap-server kafka-0.kafka-svc.itsmetommy.svc.cluster.local:9093

Producer

In another window, exec into the same pod again.

kubectl -n itsmetommy exec -it kafka-0 -- bash

Run the producer so we can send messages to the consumer.

kafka-console-producer.sh --topic test1 --broker-list localhost:9093
hello        # PRESS ENTER
i like kafka # PRESS ENTER
goodbye      # PRESS ENTER

You should see the exact same text within the Consumer window.

Use Control+C to terminate each command.
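
By default the console consumer only shows messages produced after it starts. To replay everything already in the topic, add the standard --from-beginning flag:

kafka-console-consumer.sh --topic test1 --bootstrap-server kafka-0.kafka-svc.itsmetommy.svc.cluster.local:9093 --from-beginning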

Describe Topic

Describe topics and check out the Partitions, Leaders, Replicas and ISRs.

Describe all topics.

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --describe

Example

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --describe
...
Topic:test1	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test1	Partition: 0	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: test1	Partition: 1	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test1	Partition: 2	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
Topic:test2	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test2	Partition: 0	Leader: 0	Replicas: 0,1,2	Isr: 0,2,1
	Topic: test2	Partition: 1	Leader: 1	Replicas: 0,1,2	Isr: 1,0,2
	Topic: test2	Partition: 2	Leader: 2	Replicas: 0,1,2	Isr: 2,1,0

Describe a single topic.

kafka-topics.sh --describe --topic test1 --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181

Example

kafka-topics.sh --describe --topic test1 --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181
Topic:test1	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test1	Partition: 0	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: test1	Partition: 1	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test1	Partition: 2	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0

List Brokers

This is helpful to view how many brokers you have before and after you scale your cluster.

zookeeper-shell.sh zk-svc.itsmetommy.svc.cluster.local:2181 <<< "ls /brokers/ids"

Example

Note the [0, 1, 2].

kafka@kafka-0:/$ zookeeper-shell.sh zk-svc.itsmetommy.svc.cluster.local:2181 <<< "ls /brokers/ids"
Connecting to zk-svc.itsmetommy.svc.cluster.local:2181
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[0, 1, 2]
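
Each id under /brokers/ids is a znode holding that broker's registration (host, port and endpoints). To inspect a single broker, use the same shell with a get; the exact JSON returned varies by Kafka version:

zookeeper-shell.sh zk-svc.itsmetommy.svc.cluster.local:2181 <<< "get /brokers/ids/0"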

List Topics

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --list

Example

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --list
__consumer_offsets
test1
test2

Affinity

You’ll notice that we applied an affinity rule. Check to see which node each pod is on.

View which node each zookeeper pod is on.

for i in 0 1 2; do echo "zk-$i"; kubectl -n itsmetommy get pod zk-$i --template {{.spec.nodeName}}; echo ""; done

View which node each kafka pod is on.

for i in 0 1 2; do echo "kafka-$i"; do kubectl -n itsmetommy get pod kafka-$i --template {{.spec.nodeName}}; echo ""; done

Example

for i in 0 1 2; do echo "pod zk-$i"; kubectl -n itsmetommy get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
pod zk-0
gke-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
pod zk-1
gke-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
pod zk-2
gke-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
for i in 0 1 2; do echo "pod kafka-$i"; kubectl -n itsmetommy get pod kafka-$i --template {{.spec.nodeName}}; echo ""; done
pod kafka-0
gke-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
pod kafka-1
gke-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
pod kafka-2
gke-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx

Test

The most basic sanity test is to write data to one ZooKeeper server and to read the data from another.

The command below executes the zkCli.sh script to write world to the path /hello on the zk-0 Pod in the ensemble.

kubectl -n itsmetommy exec zk-0 -- zkCli.sh create /hello world

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
Created /hello

To get the data from the zk-1 Pod, use the following command.

kubectl -n itsmetommy exec zk-1 -- zkCli.sh get /hello

The data that you created on zk-0 is available on all the servers in the ensemble.

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x100000002
ctime = Thu Dec 08 15:13:30 UTC 2016
mZxid = 0x100000002
mtime = Thu Dec 08 15:13:30 UTC 2016
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
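
When you are done testing, you can remove the znode; delete is a standard zkCli.sh command:

kubectl -n itsmetommy exec zk-0 -- zkCli.sh delete /hello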

Adding brokers

A broker that has just started and joined the cluster does not automatically receive partitions. Whenever you scale a Kafka cluster up or down, you must use kafka-reassign-partitions.sh to ensure that your data is correctly replicated and assigned afterwards.

There are two ways to increase the Kafka replica count.

Note: I recommend option 1 so that everything can be checked in as code.

Option 1 — via kubectl apply

Update the replicas number

I updated it from 3 to 6.

vi kafka.yaml
replicas: 6

Run apply

kubectl apply -f kafka.yaml

Example

kubectl apply -f kafka.yaml
service "kafka-svc" unchanged
poddisruptionbudget.policy "kafka-pdb" unchanged
statefulset.apps "kafka" configured

Option 2 — via kubectl scale

kubectl -n itsmetommy scale statefulset kafka --replicas=6
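
Either way, watch the new pods come up before reassigning anything (the same label selector is used again in the removal section below):

kubectl -n itsmetommy get pods -w -l app=kafka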

Confirm by listing brokers

zookeeper-shell.sh zk-svc.itsmetommy.svc.cluster.local:2181 <<< "ls /brokers/ids"

Example

Note the [0, 1, 2, 3, 4, 5].

zookeeper-shell.sh zk-svc.itsmetommy.svc.cluster.local:2181 <<< "ls /brokers/ids"
Connecting to zk-svc.itsmetommy.svc.cluster.local:2181
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[0, 1, 2, 3, 4, 5]

Reassign Partitions

Once we’ve scaled the StatefulSet, we need to reassign the partitions.

Describe topics

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --describe

Example

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --describe
...
Topic:test1	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test1	Partition: 0	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: test1	Partition: 1	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test1	Partition: 2	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
Topic:test2	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test2	Partition: 0	Leader: 0	Replicas: 0,1,2	Isr: 0,2,1
	Topic: test2	Partition: 1	Leader: 1	Replicas: 0,1,2	Isr: 1,0,2
	Topic: test2	Partition: 2	Leader: 2	Replicas: 0,1,2	Isr: 2,1,0

List topics

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --list

Example

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --list
__consumer_offsets
test1
test2

Move topic

Let’s move the topic test2 to the new brokers 3,4,5.

Create topics.json

Create a JSON file to list the topics you want to reorganize.

cat > /tmp/topics.json

Example for a single topic

cat > /tmp/topics.json # Hit ENTER
{ "version": 1,
  "topics": [
     {"topic": "test2"}
  ]
} # Hit ENTER and CTRL+D

Example of multiple topics

cat > /tmp/topics.json # Hit ENTER
{ "version": 1,
  "topics": [
     {"topic": "test1"},
     {"topic": "test2"}
  ]
} # Hit ENTER and CTRL+D

Generate

Now we can use the kafka-reassign-partitions.sh tool to generate partition assignments. It takes the topic list and the broker list as input, and produces the assignment plan in JSON format.

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --generate --topics-to-move-json-file /tmp/topics.json --broker-list 3,4,5

Example

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --generate --topics-to-move-json-file /tmp/topics.json --broker-list 3,4,5
Current partition replica assignment
{"version":1,"partitions":[{"topic":"test2","partition":2,"replicas":[0,1,2]},{"topic":"test2","partition":1,"replicas":[0,1,2]},{"topic":"test2","partition":0,"replicas":[0,1,2]}]}

Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"test2","partition":2,"replicas":[5,4,3]},{"topic":"test2","partition":1,"replicas":[4,3,5]},{"topic":"test2","partition":0,"replicas":[3,5,4]}]}

Create reassignment.json

Use the proposed reassignment plan, format it a bit to make it more readable, and save it in a reassignment.json file.

cat > /tmp/reassignment.json

Example

cat > /tmp/reassignment.json # Hit ENTER
{"version":1,
  "partitions":[
    {"topic":"test2","partition":2,"replicas":[5,4,3]},
    {"topic":"test2","partition":1,"replicas":[4,3,5]},
    {"topic":"test2","partition":0,"replicas":[3,5,4]}
  ]
} # Hit ENTER and CTRL+D

Run the plan using --execute

Note: You cannot execute an assignment plan that references a dead or stopped broker; every broker mentioned in the plan must be alive.

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --execute --reassignment-json-file /tmp/reassignment.json

Example

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --execute --reassignment-json-file /tmp/reassignment.json
Current partition replica assignment

{"version":1,"partitions":[{"topic":"test2","partition":2,"replicas":[0,1,2]},{"topic":"test2","partition":1,"replicas":[0,1,2]},{"topic":"test2","partition":0,"replicas":[0,1,2]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
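
If test2 held a lot of data, you could optionally limit the replication traffic generated by the move. The --throttle flag (a rate in bytes/sec) is supported by kafka-reassign-partitions.sh from Kafka 0.10.1 onwards, and running --verify afterwards removes the throttle:

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --reassignment-json-file /tmp/reassignment.json --execute --throttle 50000000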

Verify

Moving large partitions from one broker to another can take a long time.

To check the partition reassignment, you can either use:

The kafka-reassign-partitions.sh tool with the --verify option.

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --verify --reassignment-json-file /tmp/reassignment.json

Example

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --verify --reassignment-json-file /tmp/reassignment.json
Status of partition reassignment:
Reassignment of partition [test2,2] completed successfully
Reassignment of partition [test2,1] completed successfully
Reassignment of partition [test2,0] completed successfully

Or the kafka-topics.sh tool with the --describe option.

kafka-topics.sh --describe --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --topic test2

Example

kafka-topics.sh --describe --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --topic test2
Topic:test2	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test2	Partition: 0	Leader: 3	Replicas: 3,5,4	Isr: 5,3,4
	Topic: test2	Partition: 1	Leader: 4	Replicas: 4,3,5	Isr: 5,3,4
	Topic: test2	Partition: 2	Leader: 5	Replicas: 5,4,3	Isr: 5,3,4

Removing brokers

My goal is to remove two brokers from the cluster.

List topics

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --list

Example

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --list
__consumer_offsets
test1
test2

Create topics.json

Create a JSON file to list the topics you want to reorganize.

cat > /tmp/topics.json

Example

cat > /tmp/topics.json # Hit ENTER
{ "version": 1,
  "topics": [
     {"topic": "__consumer_offsets"},
     {"topic": "test1"},
     {"topic": "test2"}
  ]
} # Hit ENTER and CTRL+D

Generate

Now we can use the kafka-reassign-partitions.sh tool to generate partition assignments. It takes the topic list and the broker list as input, and produces the assignment plan in JSON format.

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --generate --topics-to-move-json-file /tmp/topics.json --broker-list 0,1,2,3

Example

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --generate --topics-to-move-json-file /tmp/topics.json --broker-list 0,1,2,3
Current partition replica assignment
{"version":1,"partitions":[{"topic":"__consumer_offsets","partition":19,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":30,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":47,"replicas":[1,2,0]},{"topic":"test2","partition":2,"replicas":[5,4,3]},{"topic":"__consumer_offsets","partition":29,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":41,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":39,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":10,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":17,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":14,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":40,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":18,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":26,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":0,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":24,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":33,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":20,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":21,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":3,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":5,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":22,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":12,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":8,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":23,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":15,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":48,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":11,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":13,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":49,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":6,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":28,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":4,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":37,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":31,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":44,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":42,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":34,"replicas":[0,1,2]},{"topic":"test2","partition":1,"replicas":[4,3,5]},{"topic":"__consumer_offsets","partition":46,"replicas":[0,1,2]},{"topic":"test2","partition":0,"replicas":[3,5,4]},{"topic":"__consumer_offsets","partition":25,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":45,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":27,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":32,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":43,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":36,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":35,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":7,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":9,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":38,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":1,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":16,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":2,"replicas":[1,0,2]}]}

Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"__consumer_offsets","partition":19,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":30,"replicas":[3,0,1]},{"topic":"__consumer_offsets","partition":47,"replicas":[0,2,3]},{"topic":"test2","partition":2,"replicas":[1,2,3]},{"topic":"__consumer_offsets","partition":29,"replicas":[2,3,0]},{"topic":"__consumer_offsets","partition":41,"replicas":[2,3,0]},{"topic":"__consumer_offsets","partition":39,"replicas":[0,3,1]},{"topic":"__consumer_offsets","partition":17,"replicas":[2,3,0]},{"topic":"__consumer_offsets","partition":10,"replicas":[3,1,2]},{"topic":"__consumer_offsets","partition":14,"replicas":[3,2,0]},{"topic":"__consumer_offsets","partition":40,"replicas":[1,2,3]},{"topic":"__consumer_offsets","partition":18,"replicas":[3,0,1]},{"topic":"__consumer_offsets","partition":26,"replicas":[3,2,0]},{"topic":"__consumer_offsets","partition":0,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":24,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":33,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":20,"replicas":[1,3,0]},{"topic":"__consumer_offsets","partition":3,"replicas":[0,3,1]},{"topic":"__consumer_offsets","partition":21,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":5,"replicas":[2,3,0]},{"topic":"__consumer_offsets","partition":22,"replicas":[3,1,2]},{"topic":"__consumer_offsets","partition":12,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":8,"replicas":[1,3,0]},{"topic":"__consumer_offsets","partition":23,"replicas":[0,2,3]},{"topic":"__consumer_offsets","partition":15,"replicas":[0,3,1]},{"topic":"__consumer_offsets","partition":11,"replicas":[0,2,3]},{"topic":"__consumer_offsets","partition":48,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":13,"replicas":[2,1,3]},{"topic":"__consumer_offsets","partition":49,"replicas":[2,1,3]},{"topic":"__consumer_offsets","partition":6,"replicas":[3,0,1]},{"topic":"__consumer_offsets","partition":28,"replicas":[1,2,3]},{"topic":"__consumer_offsets","partition":4,"replicas":[1,2,3]},{"topic":"__consumer_offsets","partition":37,"replicas":[2,1,3]},{"topic":"__consumer_offsets","partition":31,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":44,"replicas":[1,3,0]},{"topic":"__consumer_offsets","partition":42,"replicas":[3,0,1]},{"topic":"__consumer_offsets","partition":34,"replicas":[3,1,2]},{"topic":"test2","partition":1,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":46,"replicas":[3,1,2]},{"topic":"test2","partition":0,"replicas":[3,0,1]},{"topic":"__consumer_offsets","partition":25,"replicas":[2,1,3]},{"topic":"__consumer_offsets","partition":45,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":27,"replicas":[0,3,1]},{"topic":"__consumer_offsets","partition":32,"replicas":[1,3,0]},{"topic":"__consumer_offsets","partition":43,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":36,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":35,"replicas":[0,2,3]},{"topic":"__consumer_offsets","partition":7,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":38,"replicas":[3,2,0]},{"topic":"__consumer_offsets","partition":9,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":1,"replicas":[2,1,3]},{"topic":"__consumer_offsets","partition":16,"replicas":[1,2,3]},{"topic":"__consumer_offsets","partition":2,"replicas":[3,2,0]}]}

Create reassignment.json

Use the proposed reassignment plan, format it a bit to make it more readable, and save it in a reassignment.json file.

cat > /tmp/reassignment.json

Example

cat > /tmp/reassignment.json # Hit ENTER
{"version":1,
  "partitions":[
    {"topic":"__consumer_offsets","partition":19,"replicas":[0,1,2]},
    {"topic":"__consumer_offsets","partition":30,"replicas":[3,0,1]},
    {"topic":"__consumer_offsets","partition":47,"replicas":[0,2,3]},
    {"topic":"test2","partition":2,"replicas":[1,2,3]},
    {"topic":"__consumer_offsets","partition":29,"replicas":[2,3,0]},
    {"topic":"__consumer_offsets","partition":41,"replicas":[2,3,0]},
    {"topic":"__consumer_offsets","partition":39,"replicas":[0,3,1]},
    {"topic":"__consumer_offsets","partition":17,"replicas":[2,3,0]},
    {"topic":"__consumer_offsets","partition":10,"replicas":[3,1,2]},
    {"topic":"__consumer_offsets","partition":14,"replicas":[3,2,0]},
    {"topic":"__consumer_offsets","partition":40,"replicas":[1,2,3]},
    {"topic":"__consumer_offsets","partition":18,"replicas":[3,0,1]},
    {"topic":"__consumer_offsets","partition":26,"replicas":[3,2,0]},
    {"topic":"__consumer_offsets","partition":0,"replicas":[1,0,2]},
    {"topic":"__consumer_offsets","partition":24,"replicas":[1,0,2]},
    {"topic":"__consumer_offsets","partition":33,"replicas":[2,0,1]},
    {"topic":"__consumer_offsets","partition":20,"replicas":[1,3,0]},
    {"topic":"__consumer_offsets","partition":3,"replicas":[0,3,1]},
    {"topic":"__consumer_offsets","partition":21,"replicas":[2,0,1]},
    {"topic":"__consumer_offsets","partition":5,"replicas":[2,3,0]},
    {"topic":"__consumer_offsets","partition":22,"replicas":[3,1,2]},
    {"topic":"__consumer_offsets","partition":12,"replicas":[1,0,2]},
    {"topic":"__consumer_offsets","partition":8,"replicas":[1,3,0]},
    {"topic":"__consumer_offsets","partition":23,"replicas":[0,2,3]},
    {"topic":"__consumer_offsets","partition":15,"replicas":[0,3,1]},
    {"topic":"__consumer_offsets","partition":11,"replicas":[0,2,3]},
    {"topic":"__consumer_offsets","partition":48,"replicas":[1,0,2]},
    {"topic":"__consumer_offsets","partition":13,"replicas":[2,1,3]},
    {"topic":"__consumer_offsets","partition":49,"replicas":[2,1,3]},
    {"topic":"__consumer_offsets","partition":6,"replicas":[3,0,1]},
    {"topic":"__consumer_offsets","partition":28,"replicas":[1,2,3]},
    {"topic":"__consumer_offsets","partition":4,"replicas":[1,2,3]},
    {"topic":"__consumer_offsets","partition":37,"replicas":[2,1,3]},
    {"topic":"__consumer_offsets","partition":31,"replicas":[0,1,2]},
    {"topic":"__consumer_offsets","partition":44,"replicas":[1,3,0]},
    {"topic":"__consumer_offsets","partition":42,"replicas":[3,0,1]},
    {"topic":"__consumer_offsets","partition":34,"replicas":[3,1,2]},
    {"topic":"test2","partition":1,"replicas":[0,1,2]},
    {"topic":"__consumer_offsets","partition":46,"replicas":[3,1,2]},
    {"topic":"test2","partition":0,"replicas":[3,0,1]},
    {"topic":"__consumer_offsets","partition":25,"replicas":[2,1,3]},
    {"topic":"__consumer_offsets","partition":45,"replicas":[2,0,1]},
    {"topic":"__consumer_offsets","partition":27,"replicas":[0,3,1]},
    {"topic":"__consumer_offsets","partition":32,"replicas":[1,3,0]},
    {"topic":"__consumer_offsets","partition":43,"replicas":[0,1,2]},
    {"topic":"__consumer_offsets","partition":36,"replicas":[1,0,2]},
    {"topic":"__consumer_offsets","partition":35,"replicas":[0,2,3]},
    {"topic":"__consumer_offsets","partition":7,"replicas":[0,1,2]},
    {"topic":"__consumer_offsets","partition":38,"replicas":[3,2,0]},
    {"topic":"__consumer_offsets","partition":9,"replicas":[2,0,1]},
    {"topic":"__consumer_offsets","partition":1,"replicas":[2,1,3]},
    {"topic":"__consumer_offsets","partition":16,"replicas":[1,2,3]},
    {"topic":"__consumer_offsets","partition":2,"replicas":[3,2,0]}
  ]
} # Hit ENTER and CTRL+D

Run the plan using --execute

Note: You cannot execute an assignment plan that references a dead or stopped broker; every broker mentioned in the plan must be alive.

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --execute --reassignment-json-file /tmp/reassignment.json

Example

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --execute --reassignment-json-file /tmp/reassignment.json
Current partition replica assignment

{"version":1,"partitions":[{"topic":"__consumer_offsets","partition":19,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":30,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":47,"replicas":[1,2,0]},{"topic":"test2","partition":2,"replicas":[5,4,3]},{"topic":"__consumer_offsets","partition":29,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":41,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":39,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":10,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":17,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":14,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":40,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":18,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":26,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":0,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":24,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":33,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":20,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":21,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":3,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":5,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":22,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":12,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":8,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":23,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":15,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":48,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":11,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":13,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":49,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":6,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":28,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":4,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":37,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":31,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":44,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":42,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":34,"replicas":[0,1,2]},{"topic":"test2","partition":1,"replicas":[4,3,5]},{"topic":"__consumer_offsets","partition":46,"replicas":[0,1,2]},{"topic":"test2","partition":0,"replicas":[3,5,4]},{"topic":"__consumer_offsets","partition":25,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":45,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":27,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":32,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":43,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":36,"replicas":[2,1,0]},{"topic":"__consumer_offsets","partition":35,"replicas":[1,2,0]},{"topic":"__consumer_offsets","partition":7,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":9,"replicas":[2,0,1]},{"topic":"__consumer_offsets","partition":38,"replicas":[1,0,2]},{"topic":"__consumer_offsets","partition":1,"replicas":[0,2,1]},{"topic":"__consumer_offsets","partition":16,"replicas":[0,1,2]},{"topic":"__consumer_offsets","partition":2,"replicas":[1,0,2]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.

Verify

Moving large partitions from one broker to another can take a long time. To check the partition reassignment, you can either use:

The kafka-reassign-partitions.sh tool with the --verify option.

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --verify --reassignment-json-file /tmp/reassignment.json

Example

kafka-reassign-partitions.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --verify --reassignment-json-file /tmp/reassignment.json
Status of partition reassignment:
Reassignment of partition [__consumer_offsets,32] completed successfully
Reassignment of partition [__consumer_offsets,16] completed successfully
Reassignment of partition [__consumer_offsets,49] completed successfully
Reassignment of partition [__consumer_offsets,44] completed successfully
Reassignment of partition [__consumer_offsets,28] completed successfully
Reassignment of partition [__consumer_offsets,17] completed successfully
Reassignment of partition [__consumer_offsets,23] completed successfully
Reassignment of partition [__consumer_offsets,7] completed successfully
Reassignment of partition [__consumer_offsets,4] completed successfully
Reassignment of partition [__consumer_offsets,29] completed successfully
Reassignment of partition [__consumer_offsets,35] completed successfully
Reassignment of partition [__consumer_offsets,3] completed successfully
Reassignment of partition [__consumer_offsets,24] completed successfully
Reassignment of partition [__consumer_offsets,41] completed successfully
Reassignment of partition [__consumer_offsets,0] completed successfully
Reassignment of partition [__consumer_offsets,38] completed successfully
Reassignment of partition [__consumer_offsets,13] completed successfully
Reassignment of partition [__consumer_offsets,8] completed successfully
Reassignment of partition [__consumer_offsets,5] completed successfully
Reassignment of partition [__consumer_offsets,39] completed successfully
Reassignment of partition [__consumer_offsets,36] completed successfully
Reassignment of partition [__consumer_offsets,40] completed successfully
Reassignment of partition [__consumer_offsets,45] completed successfully
Reassignment of partition [__consumer_offsets,15] completed successfully
Reassignment of partition [__consumer_offsets,33] completed successfully
Reassignment of partition [__consumer_offsets,37] completed successfully
Reassignment of partition [__consumer_offsets,21] completed successfully
Reassignment of partition [__consumer_offsets,6] completed successfully
Reassignment of partition [__consumer_offsets,11] completed successfully
Reassignment of partition [__consumer_offsets,20] completed successfully
Reassignment of partition [__consumer_offsets,47] completed successfully
Reassignment of partition [__consumer_offsets,2] completed successfully
Reassignment of partition [__consumer_offsets,27] completed successfully
Reassignment of partition [__consumer_offsets,34] completed successfully
Reassignment of partition [__consumer_offsets,9] completed successfully
Reassignment of partition [__consumer_offsets,22] completed successfully
Reassignment of partition [__consumer_offsets,42] completed successfully
Reassignment of partition [test2,0] completed successfully
Reassignment of partition [__consumer_offsets,14] completed successfully
Reassignment of partition [__consumer_offsets,25] completed successfully
Reassignment of partition [__consumer_offsets,10] completed successfully
Reassignment of partition [__consumer_offsets,48] completed successfully
Reassignment of partition [__consumer_offsets,31] completed successfully
Reassignment of partition [__consumer_offsets,18] completed successfully
Reassignment of partition [__consumer_offsets,19] completed successfully
Reassignment of partition [test2,2] completed successfully
Reassignment of partition [__consumer_offsets,12] completed successfully
Reassignment of partition [test2,1] completed successfully
Reassignment of partition [__consumer_offsets,46] completed successfully
Reassignment of partition [__consumer_offsets,43] completed successfully
Reassignment of partition [__consumer_offsets,1] completed successfully
Reassignment of partition [__consumer_offsets,26] completed successfully
Reassignment of partition [__consumer_offsets,30] completed successfully

Describe topics

You should notice that brokers 4 and 5 are no longer part of the list.

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --describe

Example

kafka-topics.sh --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181 --describe
Topic:__consumer_offsets	PartitionCount:50	ReplicationFactor:3	Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
	Topic: __consumer_offsets	Partition: 0	Leader: 2	Replicas: 1,0,2	Isr: 2,1,0
	Topic: __consumer_offsets	Partition: 1	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3
	Topic: __consumer_offsets	Partition: 2	Leader: 3	Replicas: 3,2,0	Isr: 0,2,3
	Topic: __consumer_offsets	Partition: 3	Leader: 0	Replicas: 0,3,1	Isr: 0,1,3
	Topic: __consumer_offsets	Partition: 4	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: __consumer_offsets	Partition: 5	Leader: 2	Replicas: 2,3,0	Isr: 2,0,3
	Topic: __consumer_offsets	Partition: 6	Leader: 3	Replicas: 3,0,1	Isr: 1,0,3
	Topic: __consumer_offsets	Partition: 7	Leader: 0	Replicas: 0,1,2	Isr: 0,2,1
	Topic: __consumer_offsets	Partition: 8	Leader: 1	Replicas: 1,3,0	Isr: 1,0,3
	Topic: __consumer_offsets	Partition: 9	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: __consumer_offsets	Partition: 10	Leader: 3	Replicas: 3,1,2	Isr: 1,2,3
	Topic: __consumer_offsets	Partition: 11	Leader: 0	Replicas: 0,2,3	Isr: 2,0,3
	Topic: __consumer_offsets	Partition: 12	Leader: 2	Replicas: 1,0,2	Isr: 2,1,0
	Topic: __consumer_offsets	Partition: 13	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3
	Topic: __consumer_offsets	Partition: 14	Leader: 3	Replicas: 3,2,0	Isr: 0,2,3
	Topic: __consumer_offsets	Partition: 15	Leader: 0	Replicas: 0,3,1	Isr: 0,1,3
	Topic: __consumer_offsets	Partition: 16	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: __consumer_offsets	Partition: 17	Leader: 2	Replicas: 2,3,0	Isr: 2,0,3
	Topic: __consumer_offsets	Partition: 18	Leader: 3	Replicas: 3,0,1	Isr: 1,0,3
	Topic: __consumer_offsets	Partition: 19	Leader: 0	Replicas: 0,1,2	Isr: 0,2,1
	Topic: __consumer_offsets	Partition: 20	Leader: 1	Replicas: 1,3,0	Isr: 1,0,3
	Topic: __consumer_offsets	Partition: 21	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: __consumer_offsets	Partition: 22	Leader: 3	Replicas: 3,1,2	Isr: 1,2,3
	Topic: __consumer_offsets	Partition: 23	Leader: 0	Replicas: 0,2,3	Isr: 2,0,3
	Topic: __consumer_offsets	Partition: 24	Leader: 2	Replicas: 1,0,2	Isr: 2,1,0
	Topic: __consumer_offsets	Partition: 25	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3
	Topic: __consumer_offsets	Partition: 26	Leader: 3	Replicas: 3,2,0	Isr: 0,2,3
	Topic: __consumer_offsets	Partition: 27	Leader: 0	Replicas: 0,3,1	Isr: 0,1,3
	Topic: __consumer_offsets	Partition: 28	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: __consumer_offsets	Partition: 29	Leader: 2	Replicas: 2,3,0	Isr: 2,0,3
	Topic: __consumer_offsets	Partition: 30	Leader: 3	Replicas: 3,0,1	Isr: 1,0,3
	Topic: __consumer_offsets	Partition: 31	Leader: 0	Replicas: 0,1,2	Isr: 0,2,1
	Topic: __consumer_offsets	Partition: 32	Leader: 1	Replicas: 1,3,0	Isr: 1,0,3
	Topic: __consumer_offsets	Partition: 33	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: __consumer_offsets	Partition: 34	Leader: 3	Replicas: 3,1,2	Isr: 1,2,3
	Topic: __consumer_offsets	Partition: 35	Leader: 0	Replicas: 0,2,3	Isr: 2,0,3
	Topic: __consumer_offsets	Partition: 36	Leader: 2	Replicas: 1,0,2	Isr: 2,1,0
	Topic: __consumer_offsets	Partition: 37	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3
	Topic: __consumer_offsets	Partition: 38	Leader: 3	Replicas: 3,2,0	Isr: 0,2,3
	Topic: __consumer_offsets	Partition: 39	Leader: 0	Replicas: 0,3,1	Isr: 0,1,3
	Topic: __consumer_offsets	Partition: 40	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: __consumer_offsets	Partition: 41	Leader: 2	Replicas: 2,3,0	Isr: 2,0,3
	Topic: __consumer_offsets	Partition: 42	Leader: 3	Replicas: 3,0,1	Isr: 1,0,3
	Topic: __consumer_offsets	Partition: 43	Leader: 0	Replicas: 0,1,2	Isr: 0,2,1
	Topic: __consumer_offsets	Partition: 44	Leader: 1	Replicas: 1,3,0	Isr: 1,0,3
	Topic: __consumer_offsets	Partition: 45	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: __consumer_offsets	Partition: 46	Leader: 3	Replicas: 3,1,2	Isr: 1,2,3
	Topic: __consumer_offsets	Partition: 47	Leader: 0	Replicas: 0,2,3	Isr: 2,0,3
	Topic: __consumer_offsets	Partition: 48	Leader: 2	Replicas: 1,0,2	Isr: 2,1,0
	Topic: __consumer_offsets	Partition: 49	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3
Topic:test1	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test1	Partition: 0	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: test1	Partition: 1	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test1	Partition: 2	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
Topic:test2	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test2	Partition: 0	Leader: 3	Replicas: 3,0,1	Isr: 0,1,3
	Topic: test2	Partition: 1	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test2	Partition: 2	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3

Remove nodes from cluster

Note: I recommend option 1 so that everything can be checked in as code.

Option 1 — via kubectl apply

Update the replicas number within kafka.yaml

vi kafka.yaml
replicas: 4 # this will leave brokers 0,1,2,3

Run apply

kubectl apply -f kafka.yaml

Example

kubectl apply -f kafka.yaml
service "kafka-svc" unchanged
poddisruptionbudget.policy "kafka-pdb" unchanged
statefulset.apps "kafka" configured

Watch the kafka pods terminate

kubectl -n itsmetommy get pods -w -l app=kafka

Example

kubectl -n itsmetommy get pods -w -l app=kafka
NAME      READY     STATUS    RESTARTS   AGE
kafka-0   1/1       Running   3          1h
kafka-1   1/1       Running   3          1h
kafka-2   1/1       Running   3          1h
kafka-3   1/1       Running   0          1h
kafka-4   1/1       Running   0          1h
kafka-5   1/1       Running   0          1h
kafka-5   1/1       Terminating   0         1h
kafka-4   1/1       Terminating   0         1h
kafka-4   0/1       Terminating   0         1h
kafka-4   0/1       Terminating   0         1h
kafka-4   0/1       Terminating   0         1h
kafka-5   0/1       Terminating   0         1h
kafka-5   0/1       Terminating   0         1h
kafka-5   0/1       Terminating   0         1h

Option 2 — via kubectl scale

kubectl -n itsmetommy scale statefulset kafka --replicas=4

Confirm by listing brokers

zookeeper-shell.sh zk-svc.itsmetommy.svc.cluster.local:2181 <<< "ls /brokers/ids"

Example

zookeeper-shell.sh zk-svc.itsmetommy.svc.cluster.local:2181 <<< "ls /brokers/ids"
Connecting to zk-svc.itsmetommy.svc.cluster.local:2181
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[0, 1, 2, 3]
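
Before deleting any storage, it is worth confirming that the scale-down left nothing under-replicated. The --under-replicated-partitions flag prints only partitions whose ISR is smaller than the replica list, so an empty result is what you want:

kafka-topics.sh --describe --under-replicated-partitions --zookeeper zk-svc.itsmetommy.svc.cluster.local:2181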

Remove Persistent Volume Claims

Warning: Delete only the claims that belong to the pods you just removed; each claim is named after its pod (datadir-kafka-4 belongs to kafka-4, and so on).
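
To double-check which claims exist before deleting anything, list them by label (the same app=kafka label is used in the Clean up section below):

kubectl -n itsmetommy get pvc -l app=kafka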

kubectl -n itsmetommy delete pvc datadir-kafka-4
kubectl -n itsmetommy delete pvc datadir-kafka-5

Example

kubectl -n itsmetommy delete pvc datadir-kafka-4
persistentvolumeclaim "datadir-kafka-4" deleted

kubectl -n itsmetommy delete pvc datadir-kafka-5
persistentvolumeclaim "datadir-kafka-5" deleted

Clean up

kubectl delete -f .

Example

kubectl delete -f .
service "kafka-svc" deleted
poddisruptionbudget.policy "kafka-pdb" deleted
statefulset.apps "kafka" deleted
service "zk-svc" deleted
configmap "zk-cm" deleted
poddisruptionbudget.policy "zk-pdb" deleted
statefulset.apps "zk" deleted

Delete Persistent Volume Claims

kubectl -n itsmetommy delete pvc -l app=zk
kubectl -n itsmetommy delete pvc -l app=kafka

Example

kubectl -n itsmetommy delete pvc -l app=zk
persistentvolumeclaim "datadir-zk-0" deleted
persistentvolumeclaim "datadir-zk-1" deleted
persistentvolumeclaim "datadir-zk-2" deleted

kubectl -n itsmetommy delete pvc -l app=kafka
persistentvolumeclaim "datadir-kafka-0" deleted
persistentvolumeclaim "datadir-kafka-1" deleted
persistentvolumeclaim "datadir-kafka-2" deleted
persistentvolumeclaim "datadir-kafka-3" deleted