Last active: January 5, 2025 07:11
Minikube + Strimzi fail
minikube start --driver docker --network socket_vmnet --cpus=4 --memory=8192 && \
minikube update-context && \
minikube addons enable ingress

terraform apply

# after load balancers are ready
minikube tunnel
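The producer errors below came from exercising the cluster with the stock Kafka console tools. The exact invocation isn't captured in this gist, so the following is only a sketch: the topic name (test-topic) matches the errors, but the bootstrap address, external port, and the assumption that Kafka is reached through the tunneled LoadBalancer are assumptions, not part of the gist.

# Assumed: an external listener exposed via the tunneled LoadBalancer; substitute the real bootstrap host:port.
BOOTSTRAP=127.0.0.1:9094

# Create the topic the errors below refer to (replication matching the cluster's default.replication.factor=3).
bin/kafka-topics.sh --bootstrap-server "$BOOTSTRAP" --create --topic test-topic --partitions 1 --replication-factor 3

# Produce a few records; the NOT_LEADER_OR_FOLLOWER warnings below were logged by a console producer like this.
bin/kafka-console-producer.sh --bootstrap-server "$BOOTSTRAP" --topic test-topic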
[2025-01-04 22:04:10,432] WARN [Producer clientId=console-producer] Got error produce response with correlation id 7 on topic-partition test-topic-0, retrying (2 attempts left). Error: NOT_LEADER_OR_FOLLOWER (org.apache.kafka.clients.producer.internals.Sender)
[2025-01-04 22:04:10,433] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition test-topic-0 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2025-01-04 22:04:10,540] WARN [Producer clientId=console-producer] Got error produce response with correlation id 9 on topic-partition test-topic-0, retrying (1 attempts left). Error: NOT_LEADER_OR_FOLLOWER (org.apache.kafka.clients.producer.internals.Sender)
[2025-01-04 22:04:10,540] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition test-topic-0 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2025-01-04 22:04:10,759] WARN [Producer clientId=console-producer] Got error produce response with correlation id 11 on topic-partition test-topic-0, retrying (0 attempts left). Error: NOT_LEADER_OR_FOLLOWER (org.apache.kafka.clients.producer.internals.Sender)
[2025-01-04 22:04:10,759] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition test-topic-0 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2025-01-04 22:04:11,181] ERROR Error when sending message to topic test-topic with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.
[2025-01-04 22:04:11,182] ERROR Error when sending message to topic test-topic with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.
[2025-01-04 22:04:11,182] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition test-topic-0 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
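The exception text above says it directly: the broker the producer reached does not consider itself the leader (or even a replica) of test-topic-0, and the client keeps refreshing metadata without finding a usable leader. A hedged way to narrow this down is to compare what the cluster reports as leader/ISR with the pods and services that actually exist; the namespace comes from the broker log below, and the bootstrap address is the same assumption as in the sketch above.

# Ask the cluster which broker it believes leads test-topic-0 and what the ISR looks like.
bin/kafka-topics.sh --bootstrap-server "$BOOTSTRAP" --describe --topic test-topic

# Cross-check against the pods and services Strimzi actually created.
kubectl get pods,svc -n strimzi -o wide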
→ kubectl logs -n strimzi service/my-cluster-kafka-brokers
Found 6 pods, using pod/my-cluster-controller-3
STRIMZI_BROKER_ID=3
Preparing truststore for replication listener
Adding /opt/kafka/cluster-ca-certs/ca.crt to truststore /tmp/kafka/cluster.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore for replication listener is complete
Looking for the CA matching the server certificate
CA matching the server certificate found: /opt/kafka/cluster-ca-certs/ca.crt
Preparing keystore for replication and clienttls listener
Preparing keystore for replication and clienttls listener is complete
Preparing truststore for client authentication
Adding /opt/kafka/client-ca-certs/ca.crt to truststore /tmp/kafka/clients.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore for client authentication is complete
Starting Kafka with configuration:
##############################
##############################
# This file is automatically generated by the Strimzi Cluster Operator
# Any changes to this file will be ignored and overwritten!
##############################
##############################
##########
# Node / Broker ID
##########
node.id=3
##########
# Kafka message logs configuration
##########
log.dirs=/var/lib/kafka/data-0/kafka-log3
##########
# Control Plane listener
##########
listener.name.controlplane-9090.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
listener.name.controlplane-9090.ssl.keystore.password=[hidden]
listener.name.controlplane-9090.ssl.keystore.type=PKCS12
listener.name.controlplane-9090.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
listener.name.controlplane-9090.ssl.truststore.password=[hidden]
listener.name.controlplane-9090.ssl.truststore.type=PKCS12
listener.name.controlplane-9090.ssl.client.auth=required
##########
# Common listener configuration
##########
listener.security.protocol.map=CONTROLPLANE-9090:SSL
listeners=CONTROLPLANE-9090://0.0.0.0:9090
advertised.listeners=CONTROLPLANE-9090://my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc:9090
sasl.enabled.mechanisms=
ssl.endpoint.identification.algorithm=HTTPS
##########
# Config providers
##########
# Configuration providers configured by Strimzi
config.providers=strimzienv
config.providers.strimzienv.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
config.providers.strimzienv.param.allowlist.pattern=.*
##########
# User provided configuration
##########
default.replication.factor=3
min.insync.replicas=2
offsets.topic.replication.factor=3
transaction.state.log.min.isr=2
transaction.state.log.replication.factor=3
##########
# KRaft configuration
##########
process.roles=controller
controller.listener.names=CONTROLPLANE-9090
controller.quorum.voters=3@my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090,4@my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090,5@my-cluster-controller-5.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090
##########
# KRaft metadata log dir configuration
##########
metadata.log.dir=/var/lib/kafka/data-0/kafka-log3
Kafka metadata config state [4]
Using KRaft [true]
Making sure the Kraft storage is formatted with cluster ID sAO-ZHurRfWFuS4OJjoZbA and metadata version 3.9-IV0
2025-01-05 04:56:15,199 INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [main] | |
2025-01-05 04:56:15,527 INFO KafkaConfig values: | |
advertised.listeners = CONTROLPLANE-9090://my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc:9090 | |
alter.config.policy.class.name = null | |
alter.log.dirs.replication.quota.window.num = 11 | |
alter.log.dirs.replication.quota.window.size.seconds = 1 | |
authorizer.class.name = | |
auto.create.topics.enable = true | |
auto.include.jmx.reporter = true | |
auto.leader.rebalance.enable = true | |
background.threads = 10 | |
broker.heartbeat.interval.ms = 2000 | |
broker.id = 3 | |
broker.id.generation.enable = true | |
broker.rack = null | |
broker.session.timeout.ms = 9000 | |
client.quota.callback.class = null | |
compression.gzip.level = -1 | |
compression.lz4.level = 9 | |
compression.type = producer | |
compression.zstd.level = 3 | |
connection.failed.authentication.delay.ms = 100 | |
connections.max.idle.ms = 600000 | |
connections.max.reauth.ms = 0 | |
control.plane.listener.name = null | |
controlled.shutdown.enable = true | |
controlled.shutdown.max.retries = 3 | |
controlled.shutdown.retry.backoff.ms = 5000 | |
controller.listener.names = CONTROLPLANE-9090 | |
controller.quorum.append.linger.ms = 25 | |
controller.quorum.bootstrap.servers = [] | |
controller.quorum.election.backoff.max.ms = 1000 | |
controller.quorum.election.timeout.ms = 1000 | |
controller.quorum.fetch.timeout.ms = 2000 | |
controller.quorum.request.timeout.ms = 2000 | |
controller.quorum.retry.backoff.ms = 20 | |
controller.quorum.voters = [3@my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090, 4@my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090, 5@my-cluster-controller-5.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090] | |
controller.quota.window.num = 11 | |
controller.quota.window.size.seconds = 1 | |
controller.socket.timeout.ms = 30000 | |
create.topic.policy.class.name = null | |
default.replication.factor = 3 | |
delegation.token.expiry.check.interval.ms = 3600000 | |
delegation.token.expiry.time.ms = 86400000 | |
delegation.token.master.key = null | |
delegation.token.max.lifetime.ms = 604800000 | |
delegation.token.secret.key = null | |
delete.records.purgatory.purge.interval.requests = 1 | |
delete.topic.enable = true | |
early.start.listeners = null | |
eligible.leader.replicas.enable = false | |
fetch.max.bytes = 57671680 | |
fetch.purgatory.purge.interval.requests = 1000 | |
group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor] | |
group.consumer.heartbeat.interval.ms = 5000 | |
group.consumer.max.heartbeat.interval.ms = 15000 | |
group.consumer.max.session.timeout.ms = 60000 | |
group.consumer.max.size = 2147483647 | |
group.consumer.migration.policy = disabled | |
group.consumer.min.heartbeat.interval.ms = 5000 | |
group.consumer.min.session.timeout.ms = 45000 | |
group.consumer.session.timeout.ms = 45000 | |
group.coordinator.append.linger.ms = 10 | |
group.coordinator.new.enable = false | |
group.coordinator.rebalance.protocols = [classic] | |
group.coordinator.threads = 1 | |
group.initial.rebalance.delay.ms = 3000 | |
group.max.session.timeout.ms = 1800000 | |
group.max.size = 2147483647 | |
group.min.session.timeout.ms = 6000 | |
group.share.delivery.count.limit = 5 | |
group.share.enable = false | |
group.share.heartbeat.interval.ms = 5000 | |
group.share.max.groups = 10 | |
group.share.max.heartbeat.interval.ms = 15000 | |
group.share.max.record.lock.duration.ms = 60000 | |
group.share.max.session.timeout.ms = 60000 | |
group.share.max.size = 200 | |
group.share.min.heartbeat.interval.ms = 5000 | |
group.share.min.record.lock.duration.ms = 15000 | |
group.share.min.session.timeout.ms = 45000 | |
group.share.partition.max.record.locks = 200 | |
group.share.record.lock.duration.ms = 30000 | |
group.share.session.timeout.ms = 45000 | |
initial.broker.registration.timeout.ms = 60000 | |
inter.broker.listener.name = null | |
inter.broker.protocol.version = 3.9-IV0 | |
kafka.metrics.polling.interval.secs = 10 | |
kafka.metrics.reporters = [] | |
leader.imbalance.check.interval.seconds = 300 | |
leader.imbalance.per.broker.percentage = 10 | |
listener.security.protocol.map = CONTROLPLANE-9090:SSL | |
listeners = CONTROLPLANE-9090://0.0.0.0:9090 | |
log.cleaner.backoff.ms = 15000 | |
log.cleaner.dedupe.buffer.size = 134217728 | |
log.cleaner.delete.retention.ms = 86400000 | |
log.cleaner.enable = true | |
log.cleaner.io.buffer.load.factor = 0.9 | |
log.cleaner.io.buffer.size = 524288 | |
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 | |
log.cleaner.max.compaction.lag.ms = 9223372036854775807 | |
log.cleaner.min.cleanable.ratio = 0.5 | |
log.cleaner.min.compaction.lag.ms = 0 | |
log.cleaner.threads = 1 | |
log.cleanup.policy = [delete] | |
log.dir = /tmp/kafka-logs | |
log.dir.failure.timeout.ms = 30000 | |
log.dirs = /var/lib/kafka/data-0/kafka-log3 | |
log.flush.interval.messages = 9223372036854775807 | |
log.flush.interval.ms = null | |
log.flush.offset.checkpoint.interval.ms = 60000 | |
log.flush.scheduler.interval.ms = 9223372036854775807 | |
log.flush.start.offset.checkpoint.interval.ms = 60000 | |
log.index.interval.bytes = 4096 | |
log.index.size.max.bytes = 10485760 | |
log.initial.task.delay.ms = 30000 | |
log.local.retention.bytes = -2 | |
log.local.retention.ms = -2 | |
log.message.downconversion.enable = true | |
log.message.format.version = 3.0-IV1 | |
log.message.timestamp.after.max.ms = 9223372036854775807 | |
log.message.timestamp.before.max.ms = 9223372036854775807 | |
log.message.timestamp.difference.max.ms = 9223372036854775807 | |
log.message.timestamp.type = CreateTime | |
log.preallocate = false | |
log.retention.bytes = -1 | |
log.retention.check.interval.ms = 300000 | |
log.retention.hours = 168 | |
log.retention.minutes = null | |
log.retention.ms = null | |
log.roll.hours = 168 | |
log.roll.jitter.hours = 0 | |
log.roll.jitter.ms = null | |
log.roll.ms = null | |
log.segment.bytes = 1073741824 | |
log.segment.delete.delay.ms = 60000 | |
max.connection.creation.rate = 2147483647 | |
max.connections = 2147483647 | |
max.connections.per.ip = 2147483647 | |
max.connections.per.ip.overrides = | |
max.incremental.fetch.session.cache.slots = 1000 | |
max.request.partition.size.limit = 2000 | |
message.max.bytes = 1048588 | |
metadata.log.dir = /var/lib/kafka/data-0/kafka-log3 | |
metadata.log.max.record.bytes.between.snapshots = 20971520 | |
metadata.log.max.snapshot.interval.ms = 3600000 | |
metadata.log.segment.bytes = 1073741824 | |
metadata.log.segment.min.bytes = 8388608 | |
metadata.log.segment.ms = 604800000 | |
metadata.max.idle.interval.ms = 500 | |
metadata.max.retention.bytes = 104857600 | |
metadata.max.retention.ms = 604800000 | |
metric.reporters = [] | |
metrics.num.samples = 2 | |
metrics.recording.level = INFO | |
metrics.sample.window.ms = 30000 | |
min.insync.replicas = 2 | |
node.id = 3 | |
num.io.threads = 8 | |
num.network.threads = 3 | |
num.partitions = 1 | |
num.recovery.threads.per.data.dir = 1 | |
num.replica.alter.log.dirs.threads = null | |
num.replica.fetchers = 1 | |
offset.metadata.max.bytes = 4096 | |
offsets.commit.required.acks = -1 | |
offsets.commit.timeout.ms = 5000 | |
offsets.load.buffer.size = 5242880 | |
offsets.retention.check.interval.ms = 600000 | |
offsets.retention.minutes = 10080 | |
offsets.topic.compression.codec = 0 | |
offsets.topic.num.partitions = 50 | |
offsets.topic.replication.factor = 3 | |
offsets.topic.segment.bytes = 104857600 | |
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding | |
password.encoder.iterations = 4096 | |
password.encoder.key.length = 128 | |
password.encoder.keyfactory.algorithm = null | |
password.encoder.old.secret = null | |
password.encoder.secret = null | |
principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder | |
process.roles = [controller] | |
producer.id.expiration.check.interval.ms = 600000 | |
producer.id.expiration.ms = 86400000 | |
producer.purgatory.purge.interval.requests = 1000 | |
queued.max.request.bytes = -1 | |
queued.max.requests = 500 | |
quota.window.num = 11 | |
quota.window.size.seconds = 1 | |
remote.fetch.max.wait.ms = 500 | |
remote.log.index.file.cache.total.size.bytes = 1073741824 | |
remote.log.manager.copier.thread.pool.size = -1 | |
remote.log.manager.copy.max.bytes.per.second = 9223372036854775807 | |
remote.log.manager.copy.quota.window.num = 11 | |
remote.log.manager.copy.quota.window.size.seconds = 1 | |
remote.log.manager.expiration.thread.pool.size = -1 | |
remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807 | |
remote.log.manager.fetch.quota.window.num = 11 | |
remote.log.manager.fetch.quota.window.size.seconds = 1 | |
remote.log.manager.task.interval.ms = 30000 | |
remote.log.manager.task.retry.backoff.max.ms = 30000 | |
remote.log.manager.task.retry.backoff.ms = 500 | |
remote.log.manager.task.retry.jitter = 0.2 | |
remote.log.manager.thread.pool.size = 10 | |
remote.log.metadata.custom.metadata.max.bytes = 128 | |
remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager | |
remote.log.metadata.manager.class.path = null | |
remote.log.metadata.manager.impl.prefix = rlmm.config. | |
remote.log.metadata.manager.listener.name = null | |
remote.log.reader.max.pending.tasks = 100 | |
remote.log.reader.threads = 10 | |
remote.log.storage.manager.class.name = null | |
remote.log.storage.manager.class.path = null | |
remote.log.storage.manager.impl.prefix = rsm.config. | |
remote.log.storage.system.enable = false | |
replica.fetch.backoff.ms = 1000 | |
replica.fetch.max.bytes = 1048576 | |
replica.fetch.min.bytes = 1 | |
replica.fetch.response.max.bytes = 10485760 | |
replica.fetch.wait.max.ms = 500 | |
replica.high.watermark.checkpoint.interval.ms = 5000 | |
replica.lag.time.max.ms = 30000 | |
replica.selector.class = null | |
replica.socket.receive.buffer.bytes = 65536 | |
replica.socket.timeout.ms = 30000 | |
replication.quota.window.num = 11 | |
replication.quota.window.size.seconds = 1 | |
request.timeout.ms = 30000 | |
reserved.broker.max.id = 1000 | |
sasl.client.callback.handler.class = null | |
sasl.enabled.mechanisms = [] | |
sasl.jaas.config = null | |
sasl.kerberos.kinit.cmd = /usr/bin/kinit | |
sasl.kerberos.min.time.before.relogin = 60000 | |
sasl.kerberos.principal.to.local.rules = [DEFAULT] | |
sasl.kerberos.service.name = null | |
sasl.kerberos.ticket.renew.jitter = 0.05 | |
sasl.kerberos.ticket.renew.window.factor = 0.8 | |
sasl.login.callback.handler.class = null | |
sasl.login.class = null | |
sasl.login.connect.timeout.ms = null | |
sasl.login.read.timeout.ms = null | |
sasl.login.refresh.buffer.seconds = 300 | |
sasl.login.refresh.min.period.seconds = 60 | |
sasl.login.refresh.window.factor = 0.8 | |
sasl.login.refresh.window.jitter = 0.05 | |
sasl.login.retry.backoff.max.ms = 10000 | |
sasl.login.retry.backoff.ms = 100 | |
sasl.mechanism.controller.protocol = GSSAPI | |
sasl.mechanism.inter.broker.protocol = GSSAPI | |
sasl.oauthbearer.clock.skew.seconds = 30 | |
sasl.oauthbearer.expected.audience = null | |
sasl.oauthbearer.expected.issuer = null | |
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 | |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 | |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 | |
sasl.oauthbearer.jwks.endpoint.url = null | |
sasl.oauthbearer.scope.claim.name = scope | |
sasl.oauthbearer.sub.claim.name = sub | |
sasl.oauthbearer.token.endpoint.url = null | |
sasl.server.callback.handler.class = null | |
sasl.server.max.receive.size = 524288 | |
security.inter.broker.protocol = PLAINTEXT | |
security.providers = null | |
server.max.startup.time.ms = 9223372036854775807 | |
socket.connection.setup.timeout.max.ms = 30000 | |
socket.connection.setup.timeout.ms = 10000 | |
socket.listen.backlog.size = 50 | |
socket.receive.buffer.bytes = 102400 | |
socket.request.max.bytes = 104857600 | |
socket.send.buffer.bytes = 102400 | |
ssl.allow.dn.changes = false | |
ssl.allow.san.changes = false | |
ssl.cipher.suites = [] | |
ssl.client.auth = none | |
ssl.enabled.protocols = [TLSv1.2, TLSv1.3] | |
ssl.endpoint.identification.algorithm = HTTPS | |
ssl.engine.factory.class = null | |
ssl.key.password = null | |
ssl.keymanager.algorithm = SunX509 | |
ssl.keystore.certificate.chain = null | |
ssl.keystore.key = null | |
ssl.keystore.location = null | |
ssl.keystore.password = null | |
ssl.keystore.type = JKS | |
ssl.principal.mapping.rules = DEFAULT | |
ssl.protocol = TLSv1.3 | |
ssl.provider = null | |
ssl.secure.random.implementation = null | |
ssl.trustmanager.algorithm = PKIX | |
ssl.truststore.certificates = null | |
ssl.truststore.location = null | |
ssl.truststore.password = null | |
ssl.truststore.type = JKS | |
telemetry.max.bytes = 1048576 | |
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 | |
transaction.max.timeout.ms = 900000 | |
transaction.partition.verification.enable = true | |
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 | |
transaction.state.log.load.buffer.size = 5242880 | |
transaction.state.log.min.isr = 2 | |
transaction.state.log.num.partitions = 50 | |
transaction.state.log.replication.factor = 3 | |
transaction.state.log.segment.bytes = 104857600 | |
transactional.id.expiration.ms = 604800000 | |
unclean.leader.election.enable = false | |
unclean.leader.election.interval.ms = 300000 | |
unstable.api.versions.enable = false | |
unstable.feature.versions.enable = false | |
zookeeper.clientCnxnSocket = null | |
zookeeper.connect = null | |
zookeeper.connection.timeout.ms = null | |
zookeeper.max.in.flight.requests = 10 | |
zookeeper.metadata.migration.enable = false | |
zookeeper.metadata.migration.min.batch.size = 200 | |
zookeeper.session.timeout.ms = 18000 | |
zookeeper.set.acl = false | |
zookeeper.ssl.cipher.suites = null | |
zookeeper.ssl.client.enable = false | |
zookeeper.ssl.crl.enable = false | |
zookeeper.ssl.enabled.protocols = null | |
zookeeper.ssl.endpoint.identification.algorithm = HTTPS | |
zookeeper.ssl.keystore.location = null | |
zookeeper.ssl.keystore.password = null | |
zookeeper.ssl.keystore.type = null | |
zookeeper.ssl.ocsp.enable = false | |
zookeeper.ssl.protocol = TLSv1.2 | |
zookeeper.ssl.truststore.location = null | |
zookeeper.ssl.truststore.password = null | |
zookeeper.ssl.truststore.type = null | |
(kafka.server.KafkaConfig) [main] | |
2025-01-05 04:56:15,627 INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) [main] | |
Formatting metadata directory /var/lib/kafka/data-0/kafka-log3 with metadata.version 3.9-IV0. | |
KRaft storage formatting is done | |
Preparing Kafka Agent configuration | |
+ exec /usr/bin/tini -w -e 143 -- /opt/kafka/bin/kafka-server-start.sh /tmp/strimzi.properties | |
2025-01-05 04:56:17,193 INFO Starting KafkaAgent with brokerReadyFile=null, sessionConnectedFile=null, sslKeyStorePath=/tmp/kafka/cluster.keystore.p12, sslTrustStore=/tmp/kafka/cluster.truststore.p12 (io.strimzi.kafka.agent.KafkaAgent) [main] | |
2025-01-05 04:56:17,218 INFO Logging initialized @477ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) [main] | |
2025-01-05 04:56:17,526 INFO jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.13+11-LTS (org.eclipse.jetty.server.Server) [main] | |
2025-01-05 04:56:17,692 INFO Started o.e.j.s.h.ContextHandler@23fe1d71{/v1/broker-state,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) [main] | |
2025-01-05 04:56:17,692 INFO Started o.e.j.s.h.ContextHandler@74a10858{/v1/ready,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) [main] | |
2025-01-05 04:56:17,692 INFO Started o.e.j.s.h.ContextHandler@28ac3dc3{/v1/kraft-migration,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) [main] | |
2025-01-05 04:56:18,018 INFO x509=X509@4a22f9e2(my-cluster-controller-3,h=[my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc, my-cluster-kafka-bootstrap, my-cluster-kafka-brokers, my-cluster-kafka-bootstrap.strimzi, my-cluster-kafka-brokers.strimzi.svc, my-cluster-kafka-brokers.strimzi.svc.cluster.local, my-cluster-kafka-brokers.strimzi, my-cluster-kafka-bootstrap.strimzi.svc.cluster.local, my-cluster-kafka-bootstrap.strimzi.svc, my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc.cluster.local, my-cluster-kafka],a=[],w=[]) for Server@3c419631[provider=null,keyStore=file:///tmp/kafka/cluster.keystore.p12,trustStore=file:///tmp/kafka/cluster.truststore.p12] (org.eclipse.jetty.util.ssl.SslContextFactory) [main] | |
2025-01-05 04:56:18,311 INFO Started ServerConnector@33c7e1bb{SSL, (ssl, http/1.1)}{0.0.0.0:8443} (org.eclipse.jetty.server.AbstractConnector) [main] | |
2025-01-05 04:56:18,317 INFO Started ServerConnector@e720b71{HTTP/1.1, (http/1.1)}{localhost:8080} (org.eclipse.jetty.server.AbstractConnector) [main] | |
2025-01-05 04:56:18,318 INFO Started @1579ms (org.eclipse.jetty.server.Server) [main] | |
2025-01-05 04:56:18,318 INFO Starting metrics registry (io.strimzi.kafka.agent.KafkaAgent) [main] | |
2025-01-05 04:56:18,525 INFO Found class org.apache.kafka.server.metrics.KafkaYammerMetrics for Kafka 3.3 and newer. (io.strimzi.kafka.agent.KafkaAgent) [main] | |
2025-01-05 04:56:18,703 INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [main] | |
2025-01-05 04:56:19,021 INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) [main] | |
2025-01-05 04:56:19,311 INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) [main] | |
2025-01-05 04:56:19,314 INFO [ControllerServer id=3] Starting controller (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:20,210 INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) [main] | |
2025-01-05 04:56:20,518 INFO [SocketServer listenerType=CONTROLLER, nodeId=3] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLPLANE-9090) (kafka.network.SocketServer) [main] | |
2025-01-05 04:56:20,597 INFO authorizerStart completed for endpoint CONTROLPLANE-9090. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures) [main] | |
2025-01-05 04:56:20,602 INFO [SharedServer id=3] Starting SharedServer (kafka.server.SharedServer) [main] | |
2025-01-05 04:56:20,904 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log3] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [main] | |
2025-01-05 04:56:20,904 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log3] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$) [main] | |
2025-01-05 04:56:20,904 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log3] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$) [main] | |
2025-01-05 04:56:21,112 INFO Initialized snapshots with IDs SortedSet() from /var/lib/kafka/data-0/kafka-log3/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$) [main] | |
2025-01-05 04:56:21,201 INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper) [raft-expiration-reaper] | |
2025-01-05 04:56:21,212 INFO [RaftManager id=3] Reading KRaft snapshot and log as part of the initialization (org.apache.kafka.raft.KafkaRaftClient) [main] | |
2025-01-05 04:56:21,215 INFO [RaftManager id=3] Starting voters are VoterSet(voters={3=VoterNode(voterKey=ReplicaKey(id=3, directoryId=Optional.empty), listeners=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.63:9090}), supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:0]), 4=VoterNode(voterKey=ReplicaKey(id=4, directoryId=Optional.empty), listeners=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.64:9090}), supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:0]), 5=VoterNode(voterKey=ReplicaKey(id=5, directoryId=Optional.empty), listeners=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=my-cluster-controller-5.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.62:9090}), supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:0])}) (org.apache.kafka.raft.KafkaRaftClient) [main] | |
2025-01-05 04:56:21,216 INFO [RaftManager id=3] Starting request manager with static voters: [my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090 (id: 4 rack: null), my-cluster-controller-5.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090 (id: 5 rack: null), my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090 (id: 3 rack: null)] (org.apache.kafka.raft.KafkaRaftClient) [main] | |
2025-01-05 04:56:21,220 INFO [RaftManager id=3] Attempting durable transition to Unattached(epoch=0, votedKey=null, voters=[3, 4, 5], electionTimeoutMs=1570, highWatermark=Optional.empty) from null (org.apache.kafka.raft.QuorumState) [main] | |
2025-01-05 04:56:21,691 INFO [RaftManager id=3] Completed transition to Unattached(epoch=0, votedKey=null, voters=[3, 4, 5], electionTimeoutMs=1570, highWatermark=Optional.empty) from null (org.apache.kafka.raft.QuorumState) [main] | |
2025-01-05 04:56:21,694 INFO [kafka-3-raft-outbound-request-thread]: Starting (org.apache.kafka.raft.KafkaNetworkChannel$SendThread) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:21,701 INFO [kafka-3-raft-io-thread]: Starting (org.apache.kafka.raft.KafkaRaftClientDriver) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:21,713 INFO [ControllerServer id=3] Waiting for controller quorum voters future (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:21,713 INFO [ControllerServer id=3] Finished waiting for controller quorum voters future (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:21,715 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:21,716 INFO [RaftManager id=3] Registered the listener org.apache.kafka.image.loader.MetadataLoader@728917036 (org.apache.kafka.raft.KafkaRaftClient) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:21,809 INFO [QuorumController id=3] Creating new QuorumController with clusterId sAO-ZHurRfWFuS4OJjoZbA. (org.apache.kafka.controller.QuorumController) [main] | |
2025-01-05 04:56:21,810 INFO [RaftManager id=3] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@1569301250 (org.apache.kafka.raft.KafkaRaftClient) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:21,820 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:21,889 INFO [controller-3-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [controller-3-ThrottledChannelReaper-Produce] | |
2025-01-05 04:56:21,890 INFO [controller-3-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [controller-3-ThrottledChannelReaper-Fetch] | |
2025-01-05 04:56:21,891 INFO [controller-3-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [controller-3-ThrottledChannelReaper-Request] | |
2025-01-05 04:56:21,892 INFO [controller-3-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [controller-3-ThrottledChannelReaper-ControllerMutation] | |
2025-01-05 04:56:21,916 INFO [ExpirationReaper-3-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-3-AlterAcls] | |
2025-01-05 04:56:21,921 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,107 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,209 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,291 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,291 INFO [ControllerServer id=3] Waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:22,291 INFO [ControllerServer id=3] Finished waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:22,291 INFO [SocketServer listenerType=CONTROLLER, nodeId=3] Enabling request processing. (kafka.network.SocketServer) [main] | |
2025-01-05 04:56:22,292 INFO Awaiting socket connections on 0.0.0.0:9090. (kafka.network.DataPlaneAcceptor) [main] | |
2025-01-05 04:56:22,391 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,398 INFO [ControllerServer id=3] Waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:22,398 INFO [ControllerServer id=3] Finished waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:22,398 INFO [ControllerServer id=3] Waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:22,398 INFO [ControllerServer id=3] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer) [main] | |
2025-01-05 04:56:22,398 INFO [controller-3-to-controller-registration-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) [controller-3-to-controller-registration-channel-manager] | |
2025-01-05 04:56:22,402 INFO Kafka version: 3.9.0 (org.apache.kafka.common.utils.AppInfoParser) [main] | |
2025-01-05 04:56:22,402 INFO Kafka commitId: a60e31147e6b01ee (org.apache.kafka.common.utils.AppInfoParser) [main] | |
2025-01-05 04:56:22,402 INFO Kafka startTimeMs: 1736052982401 (org.apache.kafka.common.utils.AppInfoParser) [main] | |
2025-01-05 04:56:22,405 INFO [KafkaRaftServer nodeId=3] Kafka Server started (kafka.server.KafkaRaftServer) [main] | |
2025-01-05 04:56:22,424 INFO [ControllerRegistrationManager id=3 incarnation=X5hjEBYiQg63DLiczB3mxA] initialized channel manager. (kafka.server.ControllerRegistrationManager) [controller-3-registration-manager-event-handler] | |
2025-01-05 04:56:22,425 INFO [ControllerRegistrationManager id=3 incarnation=X5hjEBYiQg63DLiczB3mxA] maybeSendControllerRegistration: cannot register yet because the metadata.version is still 3.0-IV1, which does not support KIP-919 controller registration. (kafka.server.ControllerRegistrationManager) [controller-3-registration-manager-event-handler] | |
2025-01-05 04:56:22,493 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,596 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,697 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,794 INFO [RaftManager id=3] Attempting durable transition to CandidateState(localId=3, localDirectoryId=lLsUs8vs0tRLnfUA47n1IA,epoch=1, retries=1, voteStates={3=org.apache.kafka.raft.CandidateState$VoterState@56195e91, 4=org.apache.kafka.raft.CandidateState$VoterState@626e6099, 5=org.apache.kafka.raft.CandidateState$VoterState@43fc7527}, highWatermark=Optional.empty, electionTimeoutMs=1522) from Unattached(epoch=0, votedKey=null, voters=[3, 4, 5], electionTimeoutMs=1570, highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:22,798 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,799 INFO [RaftManager id=3] Completed transition to CandidateState(localId=3, localDirectoryId=lLsUs8vs0tRLnfUA47n1IA,epoch=1, retries=1, voteStates={3=org.apache.kafka.raft.CandidateState$VoterState@56195e91, 4=org.apache.kafka.raft.CandidateState$VoterState@626e6099, 5=org.apache.kafka.raft.CandidateState$VoterState@43fc7527}, highWatermark=Optional.empty, electionTimeoutMs=1522) from Unattached(epoch=0, votedKey=null, voters=[3, 4, 5], electionTimeoutMs=1570, highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:22,801 INFO [QuorumController id=3] In the new epoch 1, the leader is (none). (org.apache.kafka.controller.QuorumController) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:22,816 INFO [RaftManager id=3] Node 4 disconnected. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:22,820 WARN [RaftManager id=3] Connection to node 4 (my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.64:9090) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:22,890 INFO [RaftManager id=3] Node 5 disconnected. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:22,890 WARN [RaftManager id=3] Connection to node 5 (my-cluster-controller-5.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.62:9090) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:22,906 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:22,913 INFO [RaftManager id=3] Node 4 disconnected. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:22,913 WARN [RaftManager id=3] Connection to node 4 (my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.64:9090) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:23,007 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,091 INFO [RaftManager id=3] Node 4 disconnected. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:23,091 WARN [RaftManager id=3] Connection to node 4 (my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.64:9090) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:23,109 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,212 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,316 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,394 INFO [RaftManager id=3] Node 4 disconnected. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:23,394 WARN [RaftManager id=3] Connection to node 4 (my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.64:9090) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:23,492 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,595 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,696 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,796 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,897 INFO [RaftManager id=3] Node 4 disconnected. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:23,898 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:23,898 WARN [RaftManager id=3] Connection to node 4 (my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.64:9090) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:24,000 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,101 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,201 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,303 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,320 INFO [RaftManager id=3] Vote request VoteRequestData(clusterId='sAO-ZHurRfWFuS4OJjoZbA', voterId=3, topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1, candidateId=5, candidateDirectoryId=EPMQuludQqWwUZ5344p9Qg, voterDirectoryId=AAAAAAAAAAAAAAAAAAAAAA, lastOffsetEpoch=0, lastOffset=0)])]) with epoch 1 is rejected (org.apache.kafka.raft.KafkaRaftClient) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,386 INFO [RaftManager id=3] Election has timed out, backing off for 100ms before becoming a candidate again (org.apache.kafka.raft.KafkaRaftClient) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,412 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,486 INFO [RaftManager id=3] Re-elect as candidate after election backoff has completed (org.apache.kafka.raft.KafkaRaftClient) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,487 INFO [RaftManager id=3] Attempting durable transition to CandidateState(localId=3, localDirectoryId=lLsUs8vs0tRLnfUA47n1IA,epoch=2, retries=2, voteStates={3=org.apache.kafka.raft.CandidateState$VoterState@19e3688f, 4=org.apache.kafka.raft.CandidateState$VoterState@2351266a, 5=org.apache.kafka.raft.CandidateState$VoterState@b44f561}, highWatermark=Optional.empty, electionTimeoutMs=1773) from CandidateState(localId=3, localDirectoryId=lLsUs8vs0tRLnfUA47n1IA,epoch=1, retries=1, voteStates={3=org.apache.kafka.raft.CandidateState$VoterState@56195e91, 4=org.apache.kafka.raft.CandidateState$VoterState@626e6099, 5=org.apache.kafka.raft.CandidateState$VoterState@43fc7527}, highWatermark=Optional.empty, electionTimeoutMs=1522) (org.apache.kafka.raft.QuorumState) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,511 INFO [RaftManager id=3] Completed transition to CandidateState(localId=3, localDirectoryId=lLsUs8vs0tRLnfUA47n1IA,epoch=2, retries=2, voteStates={3=org.apache.kafka.raft.CandidateState$VoterState@19e3688f, 4=org.apache.kafka.raft.CandidateState$VoterState@2351266a, 5=org.apache.kafka.raft.CandidateState$VoterState@b44f561}, highWatermark=Optional.empty, electionTimeoutMs=1773) from CandidateState(localId=3, localDirectoryId=lLsUs8vs0tRLnfUA47n1IA,epoch=1, retries=1, voteStates={3=org.apache.kafka.raft.CandidateState$VoterState@56195e91, 4=org.apache.kafka.raft.CandidateState$VoterState@626e6099, 5=org.apache.kafka.raft.CandidateState$VoterState@43fc7527}, highWatermark=Optional.empty, electionTimeoutMs=1522) (org.apache.kafka.raft.QuorumState) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,519 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,520 INFO [QuorumController id=3] In the new epoch 2, the leader is (none). (org.apache.kafka.controller.QuorumController) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:24,587 INFO [RaftManager id=3] Node 4 disconnected. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:24,587 WARN [RaftManager id=3] Connection to node 4 (my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc.cluster.local/10.244.0.64:9090) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) [kafka-3-raft-outbound-request-thread] | |
2025-01-05 04:56:24,620 INFO [RaftManager id=3] Attempting durable transition to Leader(localReplicaKey=ReplicaKey(id=3, directoryId=Optional[lLsUs8vs0tRLnfUA47n1IA]), epoch=2, epochStartOffset=0, highWatermark=Optional.empty, voterStates={3=ReplicaState(replicaKey=ReplicaKey(id=3, directoryId=Optional.empty), endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 4=ReplicaState(replicaKey=ReplicaKey(id=4, directoryId=Optional.empty), endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 5=ReplicaState(replicaKey=ReplicaKey(id=5, directoryId=Optional.empty), endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)}) from CandidateState(localId=3, localDirectoryId=lLsUs8vs0tRLnfUA47n1IA,epoch=2, retries=2, voteStates={3=org.apache.kafka.raft.CandidateState$VoterState@19e3688f, 4=org.apache.kafka.raft.CandidateState$VoterState@2351266a, 5=org.apache.kafka.raft.CandidateState$VoterState@b44f561}, highWatermark=Optional.empty, electionTimeoutMs=1773) (org.apache.kafka.raft.QuorumState) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,621 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,699 INFO [RaftManager id=3] Completed transition to Leader(localReplicaKey=ReplicaKey(id=3, directoryId=Optional[lLsUs8vs0tRLnfUA47n1IA]), epoch=2, epochStartOffset=0, highWatermark=Optional.empty, voterStates={3=ReplicaState(replicaKey=ReplicaKey(id=3, directoryId=Optional.empty), endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 4=ReplicaState(replicaKey=ReplicaKey(id=4, directoryId=Optional.empty), endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 5=ReplicaState(replicaKey=ReplicaKey(id=5, directoryId=Optional.empty), endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)}) from CandidateState(localId=3, localDirectoryId=lLsUs8vs0tRLnfUA47n1IA,epoch=2, retries=2, voteStates={3=org.apache.kafka.raft.CandidateState$VoterState@19e3688f, 4=org.apache.kafka.raft.CandidateState$VoterState@2351266a, 5=org.apache.kafka.raft.CandidateState$VoterState@b44f561}, highWatermark=Optional.empty, electionTimeoutMs=1773) (org.apache.kafka.raft.QuorumState) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,709 INFO [controller-3-to-controller-registration-channel-manager]: Recorded new KRaft controller, from now on will use node my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc.cluster.local:9090 (id: 3 rack: null) (kafka.server.NodeToControllerRequestThread) [controller-3-to-controller-registration-channel-manager] | |
2025-01-05 04:56:24,724 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,886 INFO [MetadataLoader id=3] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,916 INFO [RaftManager id=3] High watermark set to LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)]) for the first time for epoch 2 based on indexOfHw 1 and voters [ReplicaState(replicaKey=ReplicaKey(id=3, directoryId=Optional.empty), endOffset=Optional[LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), ReplicaState(replicaKey=ReplicaKey(id=5, directoryId=Optional.empty), endOffset=Optional[LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)])], lastFetchTimestamp=1736052984915, lastCaughtUpTimestamp=1736052984915, hasAcknowledgedLeader=true), ReplicaState(replicaKey=ReplicaKey(id=4, directoryId=Optional.empty), endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)] (org.apache.kafka.raft.LeaderState) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,918 INFO [RaftManager id=3] Setting the next offset of org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@1569301250 to 0 since there are no snapshots (org.apache.kafka.raft.KafkaRaftClient) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,918 INFO [QuorumController id=3] registerBroker: event failed with NotControllerException in 230 microseconds. Exception message: The active controller appears to be node 3. (org.apache.kafka.controller.QuorumController) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:24,919 INFO [RaftManager id=3] Setting the next offset of org.apache.kafka.image.loader.MetadataLoader@728917036 to 0 since there are no snapshots (org.apache.kafka.raft.KafkaRaftClient) [kafka-3-raft-io-thread] | |
2025-01-05 04:56:24,919 INFO [MetadataLoader id=3] maybePublishMetadata(LOG_DELTA): The loader is still catching up because we have not loaded a controller record as of offset 0 and high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,920 INFO [QuorumController id=3] Becoming the active controller at epoch 2, next write offset 1. (org.apache.kafka.controller.QuorumController) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:24,988 WARN [QuorumController id=3] Performing controller activation. The metadata log appears to be empty. Appending 1 bootstrap record(s) in metadata transaction at metadata.version 3.9-IV0 from bootstrap source 'the binary bootstrap metadata file: /var/lib/kafka/data-0/kafka-log3/bootstrap.checkpoint'. Setting the ZK migration state to NONE since this is a de-novo KRaft cluster. (org.apache.kafka.controller.QuorumController) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:24,994 INFO [MetadataLoader id=3] initializeNewPublishers: The loader finished catching up to the current high water mark of 1 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:24,996 INFO [QuorumController id=3] Replayed BeginTransactionRecord(name='Bootstrap records') at offset 1. (org.apache.kafka.controller.OffsetControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:24,997 INFO [QuorumController id=3] Replayed a FeatureLevelRecord setting metadata.version to 3.9-IV0 (org.apache.kafka.controller.FeatureControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:24,997 INFO [QuorumController id=3] Replayed EndTransactionRecord() at offset 4. (org.apache.kafka.controller.OffsetControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,000 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,000 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing KRaftMetadataCachePublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,000 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing FeaturesPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,000 INFO [ControllerServer id=3] Loaded new metadata Features(metadataVersion=3.0-IV1, finalizedFeatures={metadata.version=1}, finalizedFeaturesEpoch=0). (org.apache.kafka.metadata.publisher.FeaturesPublisher) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,001 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing ControllerRegistrationsPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,001 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing ControllerRegistrationManager with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,001 INFO [ControllerRegistrationManager id=3 incarnation=X5hjEBYiQg63DLiczB3mxA] maybeSendControllerRegistration: cannot register yet because the metadata.version is still 3.0-IV1, which does not support KIP-919 controller registration. (kafka.server.ControllerRegistrationManager) [controller-3-registration-manager-event-handler] | |
2025-01-05 04:56:25,001 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing DynamicConfigPublisher controller id=3 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,001 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing DynamicClientQuotaPublisher controller id=3 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,002 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing ScramPublisher controller id=3 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,002 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing DelegationTokenPublisher controller id=3 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,003 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing ControllerMetadataMetricsPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,003 INFO [MetadataLoader id=3] InitializeNewPublishers: initializing AclPublisher controller id=3 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,003 INFO [QuorumController id=3] No previous registration found for broker 1. New incarnation ID is H1pTofIUQwWFA1-xPU8S7w. Generated 0 record(s) to clean up previous incarnations. New broker epoch is 5. (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,004 INFO [QuorumController id=3] Replayed initial RegisterBrokerRecord for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=H1pTofIUQwWFA1-xPU8S7w, brokerEpoch=5, endPoints=[BrokerEndpoint(name='REPLICATION-9091', host='my-cluster-broker-1.my-cluster-kafka-brokers.strimzi.svc', port=9091, securityProtocol=1), BrokerEndpoint(name='PLAIN-9092', host='my-cluster-broker-1.my-cluster-kafka-brokers.strimzi.svc', port=9092, securityProtocol=0), BrokerEndpoint(name='TLS-9093', host='my-cluster-broker-1.my-cluster-kafka-brokers.strimzi.svc', port=9093, securityProtocol=1)], features=[BrokerFeature(name='kraft.version', minSupportedVersion=0, maxSupportedVersion=1), BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=21)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[BAWqId-1JeaGoWI3iDxibA, cbHDsyuipWtkVwIl-H1VLQ]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,008 INFO [QuorumController id=3] No previous registration found for broker 0. New incarnation ID is EU-5Rzx5S4q6CuKy620I4A. Generated 0 record(s) to clean up previous incarnations. New broker epoch is 6. (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,009 INFO [QuorumController id=3] Replayed initial RegisterBrokerRecord for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=EU-5Rzx5S4q6CuKy620I4A, brokerEpoch=6, endPoints=[BrokerEndpoint(name='REPLICATION-9091', host='my-cluster-broker-0.my-cluster-kafka-brokers.strimzi.svc', port=9091, securityProtocol=1), BrokerEndpoint(name='PLAIN-9092', host='my-cluster-broker-0.my-cluster-kafka-brokers.strimzi.svc', port=9092, securityProtocol=0), BrokerEndpoint(name='TLS-9093', host='my-cluster-broker-0.my-cluster-kafka-brokers.strimzi.svc', port=9093, securityProtocol=1)], features=[BrokerFeature(name='kraft.version', minSupportedVersion=0, maxSupportedVersion=1), BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=21)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[9igtiSa9EDBysnSVnYgD8g, eLLfwA5XSMVmIjU_qAaaMQ]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,012 INFO [QuorumController id=3] No previous registration found for broker 2. New incarnation ID is K9bGLHxXSNek1Gdra6XIHQ. Generated 0 record(s) to clean up previous incarnations. New broker epoch is 7. (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,012 INFO [QuorumController id=3] Replayed initial RegisterBrokerRecord for broker 2: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=K9bGLHxXSNek1Gdra6XIHQ, brokerEpoch=7, endPoints=[BrokerEndpoint(name='REPLICATION-9091', host='my-cluster-broker-2.my-cluster-kafka-brokers.strimzi.svc', port=9091, securityProtocol=1), BrokerEndpoint(name='PLAIN-9092', host='my-cluster-broker-2.my-cluster-kafka-brokers.strimzi.svc', port=9092, securityProtocol=0), BrokerEndpoint(name='TLS-9093', host='my-cluster-broker-2.my-cluster-kafka-brokers.strimzi.svc', port=9093, securityProtocol=1)], features=[BrokerFeature(name='kraft.version', minSupportedVersion=0, maxSupportedVersion=1), BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=21)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[N30QeMfx4rAUWGeHypzLYQ, QbDtzmh0EH78n77HOFLRrw]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,035 INFO [ControllerServer id=3] Loaded new metadata Features(metadataVersion=3.9-IV0, finalizedFeatures={metadata.version=21}, finalizedFeaturesEpoch=7). (org.apache.kafka.metadata.publisher.FeaturesPublisher) [kafka-3-metadata-loader-event-handler] | |
2025-01-05 04:56:25,086 INFO [ControllerRegistrationManager id=3 incarnation=X5hjEBYiQg63DLiczB3mxA] sendControllerRegistration: attempting to send ControllerRegistrationRequestData(controllerId=3, incarnationId=X5hjEBYiQg63DLiczB3mxA, zkMigrationReady=false, listeners=[Listener(name='CONTROLPLANE-9090', host='my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc', port=9090, securityProtocol=1)], features=[Feature(name='kraft.version', minSupportedVersion=0, maxSupportedVersion=1), Feature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=21)]) (kafka.server.ControllerRegistrationManager) [controller-3-registration-manager-event-handler] | |
2025-01-05 04:56:25,105 INFO [QuorumController id=3] Replayed RegisterControllerRecord contaning ControllerRegistration(id=3, incarnationId=X5hjEBYiQg63DLiczB3mxA, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLPLANE-9090', securityProtocol=SSL, host='my-cluster-controller-3.my-cluster-kafka-brokers.strimzi.svc', port=9090)], supportedFeatures={kraft.version: 0-1, metadata.version: 1-21}). (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,190 INFO [ControllerRegistrationManager id=3 incarnation=X5hjEBYiQg63DLiczB3mxA] Our registration has been persisted to the metadata log. (kafka.server.ControllerRegistrationManager) [controller-3-registration-manager-event-handler] | |
2025-01-05 04:56:25,191 INFO [ControllerRegistrationManager id=3 incarnation=X5hjEBYiQg63DLiczB3mxA] RegistrationResponseHandler: controller acknowledged ControllerRegistrationRequest. (kafka.server.ControllerRegistrationManager) [controller-3-to-controller-registration-channel-manager] | |
2025-01-05 04:56:25,222 INFO [QuorumController id=3] Replayed RegisterControllerRecord contaning ControllerRegistration(id=5, incarnationId=lwtuSqoIQNGaihgZ1ERuzw, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLPLANE-9090', securityProtocol=SSL, host='my-cluster-controller-5.my-cluster-kafka-brokers.strimzi.svc', port=9090)], supportedFeatures={kraft.version: 0-1, metadata.version: 1-21}). (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,410 INFO [QuorumController id=3] The request from broker 1 to unfence has been granted because it has caught up with the offset of its register broker record 5. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,415 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=5, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,498 INFO [QuorumController id=3] The request from broker 0 to unfence has been granted because it has caught up with the offset of its register broker record 6. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,498 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 0: BrokerRegistrationChangeRecord(brokerId=0, brokerEpoch=6, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,538 INFO [QuorumController id=3] The request from broker 2 to unfence has been granted because it has caught up with the offset of its register broker record 7. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,538 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 2: BrokerRegistrationChangeRecord(brokerId=2, brokerEpoch=7, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:56:25,918 INFO [QuorumController id=3] Replayed RegisterControllerRecord contaning ControllerRegistration(id=4, incarnationId=yRy54xwcQmGs7c43qmdTeQ, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLPLANE-9090', securityProtocol=SSL, host='my-cluster-controller-4.my-cluster-kafka-brokers.strimzi.svc', port=9090)], supportedFeatures={kraft.version: 0-1, metadata.version: 1-21}). (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:24,795 INFO [QuorumController id=3] Unfenced broker 1 has requested and been granted an immediate shutdown. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:24,796 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=5, fenced=1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:29,956 INFO [QuorumController id=3] Registering a new incarnation of broker 1. Previous incarnation ID was H1pTofIUQwWFA1-xPU8S7w; new incarnation ID is tD_u14WtR-eu40HbPLnbQA. Generated 0 record(s) to clean up previous incarnations. Broker epoch will become 144. (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:29,956 INFO [QuorumController id=3] Replayed RegisterBrokerRecord establishing a new incarnation of broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=tD_u14WtR-eu40HbPLnbQA, brokerEpoch=144, endPoints=[BrokerEndpoint(name='REPLICATION-9091', host='my-cluster-broker-1.my-cluster-kafka-brokers.strimzi.svc', port=9091, securityProtocol=1), BrokerEndpoint(name='PLAIN-9092', host='my-cluster-broker-1.my-cluster-kafka-brokers.strimzi.svc', port=9092, securityProtocol=0), BrokerEndpoint(name='TLS-9093', host='my-cluster-broker-1.my-cluster-kafka-brokers.strimzi.svc', port=9093, securityProtocol=1), BrokerEndpoint(name='EXTERNAL-9094', host='127.0.0.1', port=9094, securityProtocol=0)], features=[BrokerFeature(name='kraft.version', minSupportedVersion=0, maxSupportedVersion=1), BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=21)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[BAWqId-1JeaGoWI3iDxibA, cbHDsyuipWtkVwIl-H1VLQ]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:30,198 INFO [QuorumController id=3] The request from broker 1 to unfence has been granted because it has caught up with the offset of its register broker record 144. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:30,199 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=144, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:48,033 INFO [QuorumController id=3] Unfenced broker 2 has requested and been granted an immediate shutdown. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:48,035 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 2: BrokerRegistrationChangeRecord(brokerId=2, brokerEpoch=7, fenced=1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:52,292 INFO [QuorumController id=3] Registering a new incarnation of broker 2. Previous incarnation ID was K9bGLHxXSNek1Gdra6XIHQ; new incarnation ID is gEOFm0KGQH-bIda3XYCwIg. Generated 0 record(s) to clean up previous incarnations. Broker epoch will become 192. (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:52,292 INFO [QuorumController id=3] Replayed RegisterBrokerRecord establishing a new incarnation of broker 2: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=gEOFm0KGQH-bIda3XYCwIg, brokerEpoch=192, endPoints=[BrokerEndpoint(name='REPLICATION-9091', host='my-cluster-broker-2.my-cluster-kafka-brokers.strimzi.svc', port=9091, securityProtocol=1), BrokerEndpoint(name='PLAIN-9092', host='my-cluster-broker-2.my-cluster-kafka-brokers.strimzi.svc', port=9092, securityProtocol=0), BrokerEndpoint(name='TLS-9093', host='my-cluster-broker-2.my-cluster-kafka-brokers.strimzi.svc', port=9093, securityProtocol=1), BrokerEndpoint(name='EXTERNAL-9094', host='127.0.0.1', port=9094, securityProtocol=0)], features=[BrokerFeature(name='kraft.version', minSupportedVersion=0, maxSupportedVersion=1), BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=21)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[N30QeMfx4rAUWGeHypzLYQ, QbDtzmh0EH78n77HOFLRrw]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:52,928 INFO [QuorumController id=3] The request from broker 2 to unfence has been granted because it has caught up with the offset of its register broker record 192. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:57:52,929 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 2: BrokerRegistrationChangeRecord(brokerId=2, brokerEpoch=192, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:58:10,577 INFO [QuorumController id=3] Unfenced broker 0 has requested and been granted an immediate shutdown. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:58:10,578 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 0: BrokerRegistrationChangeRecord(brokerId=0, brokerEpoch=6, fenced=1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:58:15,729 INFO [QuorumController id=3] Registering a new incarnation of broker 0. Previous incarnation ID was EU-5Rzx5S4q6CuKy620I4A; new incarnation ID is hiPKzDKrQtWnEiog5wbVtA. Generated 0 record(s) to clean up previous incarnations. Broker epoch will become 241. (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:58:15,729 INFO [QuorumController id=3] Replayed RegisterBrokerRecord establishing a new incarnation of broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=hiPKzDKrQtWnEiog5wbVtA, brokerEpoch=241, endPoints=[BrokerEndpoint(name='REPLICATION-9091', host='my-cluster-broker-0.my-cluster-kafka-brokers.strimzi.svc', port=9091, securityProtocol=1), BrokerEndpoint(name='PLAIN-9092', host='my-cluster-broker-0.my-cluster-kafka-brokers.strimzi.svc', port=9092, securityProtocol=0), BrokerEndpoint(name='TLS-9093', host='my-cluster-broker-0.my-cluster-kafka-brokers.strimzi.svc', port=9093, securityProtocol=1), BrokerEndpoint(name='EXTERNAL-9094', host='127.0.0.1', port=9094, securityProtocol=0)], features=[BrokerFeature(name='kraft.version', minSupportedVersion=0, maxSupportedVersion=1), BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=21)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[9igtiSa9EDBysnSVnYgD8g, eLLfwA5XSMVmIjU_qAaaMQ]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:58:15,819 INFO [QuorumController id=3] The request from broker 0 to unfence has been granted because it has caught up with the offset of its register broker record 241. (org.apache.kafka.controller.BrokerHeartbeatManager) [quorum-controller-3-event-handler] | |
2025-01-05 04:58:15,819 INFO [QuorumController id=3] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 0: BrokerRegistrationChangeRecord(brokerId=0, brokerEpoch=241, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:00:13,758 INFO [QuorumController id=3] Replaying ProducerIdsRecord ProducerIdsRecord(brokerId=0, brokerEpoch=241, nextProducerId=1000) (org.apache.kafka.controller.ProducerIdControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:00:20,412 INFO [QuorumController id=3] CreateTopics result(s): CreatableTopic(name='test-topic', numPartitions=1, replicationFactor=3, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:00:20,412 INFO [QuorumController id=3] Replayed TopicRecord for topic test-topic with topic ID CfHA42IwTN6_6foenDLhjw. (org.apache.kafka.controller.ReplicationControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:00:20,413 INFO [QuorumController id=3] Replayed PartitionRecord for new partition test-topic-0 with topic ID CfHA42IwTN6_6foenDLhjw and PartitionRegistration(replicas=[2, 0, 1], directories=[AAAAAAAAAAAAAAAAAAAAAQ, AAAAAAAAAAAAAAAAAAAAAQ, AAAAAAAAAAAAAAAAAAAAAQ], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], elr=[], lastKnownElr=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:00:20,508 INFO [QuorumController id=3] CreateTopics result(s): CreatableTopic(name='test-topic', numPartitions=1, replicationFactor=3, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'test-topic' already exists.) (org.apache.kafka.controller.ReplicationControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:06:25,336 INFO [NodeToControllerChannelManager id=3 name=registration] Node 3 disconnected. (org.apache.kafka.clients.NetworkClient) [controller-3-to-controller-registration-channel-manager] | |
2025-01-05 05:07:13,921 INFO [QuorumController id=3] CreateTopics result(s): CreatableTopic(name='test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'test-topic' already exists.) (org.apache.kafka.controller.ReplicationControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:07:43,302 INFO [QuorumController id=3] Replayed RemoveTopicRecord for topic test-topic with ID CfHA42IwTN6_6foenDLhjw. (org.apache.kafka.controller.ReplicationControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:07:48,785 INFO [QuorumController id=3] CreateTopics result(s): CreatableTopic(name='test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) [quorum-controller-3-event-handler] | |
2025-01-05 05:07:48,786 INFO [QuorumController id=3] Replayed TopicRecord for topic test-topic with topic ID TUqT-d8NQ7iU7MEtZXRdgg. (org.apache.kafka.controller.ReplicationControlManager) [quorum-controller-3-event-handler] |
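For context when reading the log above: the topic ends up deleted and recreated with replicationFactor 1, so test-topic-0 lives on a single broker. A quick way to see which broker currently holds the leader, run from inside the cluster (a sketch only; the pod name is taken from the broker hostnames in the log, and the /opt/kafka layout is the usual one in the Strimzi images):

kubectl -n strimzi exec -it my-cluster-broker-0 -- \
  /opt/kafka/bin/kafka-topics.sh --describe --topic test-topic \
  --bootstrap-server localhost:9092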
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # configuration_aliases = [kubernetes.minikube]
    }
    helm = {
      source = "hashicorp/helm"
    }
  }
}
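The gist does not include the provider configuration itself; a minimal sketch that points both providers at the minikube kubeconfig (the context name assumes the default one written by minikube update-context):

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "minikube"
  }
}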
variable "enable_strimzi_kafka_operator" { | |
description = "Enable the Strimzi Kafka Operator" | |
type = bool | |
default = false | |
} | |
variable "strimzi_kafka_operator_helm_config" { | |
description = "Helm configuration for Strimzi Kafka Operator" | |
type = any | |
default = {} | |
} | |
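The two variables above are meant to be driven from a tfvars file or a module call; a possible terraform.tfvars for this repro (the watchAnyNamespace entry is only an illustration of passing extra chart values through the set list, not something the failure depends on):

enable_strimzi_kafka_operator = true

strimzi_kafka_operator_helm_config = {
  namespace = "strimzi"
  version   = "0.45.0"
  set = [
    {
      name  = "watchAnyNamespace"
      value = "true"
    }
  ]
}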
resource "helm_release" "strimzi_kafka_operator" { | |
# assume: minikube start --driver docker --network socket_vmnet --cpus=4 --memory=8192 && minikube update-context && minikube addons enable ingress | |
count = var.enable_strimzi_kafka_operator ? 1 : 0 | |
provider = helm | |
name = try(var.strimzi_kafka_operator_helm_config["name"], "strimzi-operator") | |
repository = try(var.strimzi_kafka_operator_helm_config["repository"], "https://strimzi.io/charts/") | |
chart = try(var.strimzi_kafka_operator_helm_config["chart"], "strimzi-kafka-operator") | |
version = try(var.strimzi_kafka_operator_helm_config["version"], "0.45.0") | |
timeout = try(var.strimzi_kafka_operator_helm_config["timeout"], 300) | |
create_namespace = try(var.strimzi_kafka_operator_helm_config["create_namespace"], true) | |
namespace = try(var.strimzi_kafka_operator_helm_config["namespace"], "strimzi") | |
lint = try(var.strimzi_kafka_operator_helm_config["lint"], false) | |
description = try(var.strimzi_kafka_operator_helm_config["description"], "") | |
repository_key_file = try(var.strimzi_kafka_operator_helm_config["repository_key_file"], "") | |
repository_cert_file = try(var.strimzi_kafka_operator_helm_config["repository_cert_file"], "") | |
repository_username = try(var.strimzi_kafka_operator_helm_config["repository_username"], "") | |
repository_password = try(var.strimzi_kafka_operator_helm_config["repository_password"], "") | |
verify = try(var.strimzi_kafka_operator_helm_config["verify"], false) | |
keyring = try(var.strimzi_kafka_operator_helm_config["keyring"], "") | |
disable_webhooks = try(var.strimzi_kafka_operator_helm_config["disable_webhooks"], false) | |
reuse_values = try(var.strimzi_kafka_operator_helm_config["reuse_values"], false) | |
reset_values = try(var.strimzi_kafka_operator_helm_config["reset_values"], false) | |
force_update = try(var.strimzi_kafka_operator_helm_config["force_update"], false) | |
recreate_pods = try(var.strimzi_kafka_operator_helm_config["recreate_pods"], false) | |
cleanup_on_fail = try(var.strimzi_kafka_operator_helm_config["cleanup_on_fail"], false) | |
max_history = try(var.strimzi_kafka_operator_helm_config["max_history"], 0) | |
atomic = try(var.strimzi_kafka_operator_helm_config["atomic"], false) | |
skip_crds = try(var.strimzi_kafka_operator_helm_config["skip_crds"], false) | |
render_subchart_notes = try(var.strimzi_kafka_operator_helm_config["render_subchart_notes"], true) | |
disable_openapi_validation = try(var.strimzi_kafka_operator_helm_config["disable_openapi_validation"], false) | |
wait = try(var.strimzi_kafka_operator_helm_config["wait"], true) | |
wait_for_jobs = try(var.strimzi_kafka_operator_helm_config["wait_for_jobs"], false) | |
dependency_update = try(var.strimzi_kafka_operator_helm_config["dependency_update"], false) | |
replace = try(var.strimzi_kafka_operator_helm_config["replace"], false) | |
# values = try(var.strimzi_kafka_operator_helm_config["values"], null) | |
# values = merge( | |
# try(var.strimzi_kafka_operator_helm_config["values"], {}), | |
# { | |
# operator : { | |
# env : [ | |
# { | |
# name : "STRIMZI_LOG_LEVEL", | |
# value : "DEBUG" | |
# } | |
# ] | |
# } | |
# } | |
# ) | |
# values = { | |
# operator : { | |
# env : [ | |
# { | |
# name : "STRIMZI_LOG_LEVEL", | |
# value : "DEBUG" | |
# } | |
# ] | |
# } | |
# } | |
dynamic "set" { | |
iterator = each_item | |
for_each = concat( | |
try(var.strimzi_kafka_operator_helm_config["set"], []), | |
[ | |
{ | |
name = "operator.env[0].name" | |
value = "STRIMZI_LOG_LEVEL" | |
}, | |
{ | |
name = "operator.env[0].value" | |
value = "INFO" | |
} | |
] | |
) | |
content { | |
name = each_item.value.name | |
value = each_item.value.value | |
type = try(each_item.value.type, null) | |
} | |
} | |
postrender { | |
binary_path = try(var.strimzi_kafka_operator_helm_config["postrender"], "") | |
} | |
dynamic "set" { | |
iterator = each_item | |
for_each = try(var.strimzi_kafka_operator_helm_config["set"], []) | |
content { | |
name = each_item.value.name | |
value = each_item.value.value | |
type = try(each_item.value.type, null) | |
} | |
} | |
dynamic "set_sensitive" { | |
iterator = each_item | |
for_each = try(var.strimzi_kafka_operator_helm_config["set_sensitive"], []) | |
content { | |
name = each_item.value.name | |
value = each_item.value.value | |
type = try(each_item.value.type, null) | |
} | |
} | |
} | |
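Once terraform apply has run, the operator itself can be checked before moving on to the Kafka custom resources; the deployment name below is the default used by the strimzi-kafka-operator chart, so adjust it if the release renames it:

kubectl -n strimzi get pods
kubectl -n strimzi logs deploy/strimzi-cluster-operator --tail=100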
resource "kubernetes_manifest" "kafka_node_pool_controller" { | |
depends_on = [helm_release.strimzi_kafka_operator] | |
manifest = yamldecode(<<EOM | |
apiVersion: kafka.strimzi.io/v1beta2 | |
kind: KafkaNodePool | |
metadata: | |
name: controller | |
namespace: strimzi | |
labels: | |
strimzi.io/cluster: my-cluster | |
spec: | |
replicas: 3 | |
roles: | |
- controller | |
storage: | |
# type: ephemeral | |
# size: 100Gi | |
type: jbod | |
volumes: | |
- id: 0 | |
type: persistent-claim | |
size: 100Gi | |
kraftMetadata: shared | |
deleteClaim: true | |
EOM | |
) | |
} | |
resource "kubernetes_manifest" "kafka_node_pool_broker" { | |
depends_on = [helm_release.strimzi_kafka_operator] | |
manifest = yamldecode(<<EOM | |
apiVersion: kafka.strimzi.io/v1beta2 | |
kind: KafkaNodePool | |
metadata: | |
name: broker | |
namespace: strimzi | |
labels: | |
strimzi.io/cluster: my-cluster | |
spec: | |
replicas: 3 | |
roles: | |
- broker | |
storage: | |
# type: ephemeral | |
# size: 100Gi | |
type: jbod | |
volumes: | |
- id: 0 | |
type: persistent-claim | |
size: 100Gi | |
# Indicates that this directory will be used to store Kraft metadata log | |
kraftMetadata: shared | |
deleteClaim: true | |
- id: 1 | |
type: persistent-claim | |
size: 100Gi | |
deleteClaim: true | |
EOM | |
) | |
} | |
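After the node pools reconcile, Strimzi should have created one PVC per JBOD volume per node; a quick sanity check (resource and label names follow Strimzi's defaults):

kubectl -n strimzi get kafkanodepools
kubectl -n strimzi get pvc -l strimzi.io/cluster=my-cluster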
resource "kubernetes_manifest" "kafka_cluster" { | |
depends_on = [helm_release.strimzi_kafka_operator] | |
manifest = yamldecode(<<EOM | |
apiVersion: kafka.strimzi.io/v1beta2 | |
kind: Kafka | |
metadata: | |
name: my-cluster | |
namespace: strimzi | |
annotations: | |
strimzi.io/node-pools: enabled | |
strimzi.io/kraft: enabled | |
spec: | |
kafka: | |
version: 3.9.0 | |
metadataVersion: 3.9-IV0 | |
listeners: | |
- name: plain | |
port: 9092 | |
type: internal | |
tls: false | |
- name: tls | |
port: 9093 | |
type: internal | |
tls: true | |
- name: external | |
port: 9094 | |
type: loadbalancer | |
tls: false | |
### Trying to make it so can connect from host OS | |
# configuration: | |
# advertisedHost: 127.0.0.1 | |
# advertisedPort: 9094 | |
### no dice, maybe this? (nope) | |
# brokers: | |
# - broker: 0 | |
# advertisedHost: 127.0.0.1 | |
# advertisedPort: 9094 | |
# - broker: 1 | |
# advertisedHost: 127.0.0.1 | |
# advertisedPort: 9095 | |
# - broker: 2 | |
# advertisedHost: 127.0.0.1 | |
# advertisedPort: 9096 | |
config: | |
offsets.topic.replication.factor: 3 | |
transaction.state.log.replication.factor: 3 | |
transaction.state.log.min.isr: 2 | |
default.replication.factor: 3 | |
min.insync.replicas: 2 | |
entityOperator: | |
topicOperator: {} | |
userOperator: {} | |
EOM | |
) | |
} | |
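For what it's worth, the per-broker overrides the commented-out block above was reaching for live under the listener's configuration key in the Strimzi schema, not directly on the listener. A sketch of that shape only: the controller log earlier shows all three brokers registering EXTERNAL-9094 as 127.0.0.1:9094, so distinct advertised ports only help if something on the host actually forwards each port to the matching broker (for example a per-broker kubectl port-forward), which is exactly the part this gist never got working.

      - name: external
        port: 9094
        type: loadbalancer
        tls: false
        configuration:
          brokers:
            - broker: 0
              advertisedHost: 127.0.0.1
              advertisedPort: 9094
            - broker: 1
              advertisedHost: 127.0.0.1
              advertisedPort: 9095
            - broker: 2
              advertisedHost: 127.0.0.1
              advertisedPort: 9096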