
"No resolvable bootstrap urls given in bootstrap.servers" - Kafka #11758

Closed
1 task
vw98075 opened this issue May 11, 2020 · 19 comments
Comments

@vw98075
Contributor

vw98075 commented May 11, 2020

Overview of the issue

Kafka fails with an error straight out of the box.

Motivation for or Use Case

Generating a microservice application with Kafka

Reproduce the error
  1. Generate a microservice application with Kafka.
  2. Build a Docker image of the application with:
./mvnw -ntp -Pprod verify jib:dockerBuild
  3. Create a docker-compose directory and, based on the JHipster documentation, run:
jhipster docker-compose
  4. Run:
docker-compose up

The first notable error message:

kafka_1      | kafka.common.InconsistentBrokerIdException: Configured broker.id 2 doesn't match stored broker.id 1 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
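The broker.id mismatch points at a stale Kafka data volume: a previous run stored broker.id 1 in meta.properties, and the new container came up configured as broker.id 2. Assuming the default JHipster docker-compose setup and no data worth keeping, one way to clear the stale state is to take the stack down together with its volumes:

```shell
# Stop the containers and remove their volumes, which hold Kafka's
# data directory and the stale meta.properties inside it
docker-compose down -v

# Bring the stack back up with a clean data directory
docker-compose up
```

Note that `down -v` deletes all volumes attached to the compose project, so this is only appropriate for disposable development data.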

and

jhipster_1   | 2020-05-11 02:51:43.665  WARN 1 --- [           main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jhipsterKafkaResource' defined in file [/app/classes/com/<...>/web/rest/JhipsterKafkaResource.class]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.<...>.web.rest.JhipsterKafkaResource]: Constructor threw exception; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka producer
jhipster_1   | 2020-05-11 02:51:43.689 ERROR 1 --- [           main] o.s.boot.SpringApplication               : Application run failed
jhipster_1   | 
jhipster_1   | org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jhipsterKafkaResource' defined in file [/app/classes/com/<...>/web/rest/JhipsterKafkaResource.class]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.<...>.web.rest.JhipsterKafkaResource]: Constructor threw exception; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka producer
jhipster_1   | 	at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:314)
jhipster_1   | 	at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:295)
jhipster_1   | 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1358)
jhipster_1   | 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1204)
jhipster_1   | 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:557)
jhipster_1   | 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)
jhipster_1   | 	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323)
jhipster_1   | 	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
jhipster_1   | 	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321)
jhipster_1   | 	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
jhipster_1   | 	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:879)
jhipster_1   | 	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878)
jhipster_1   | 	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
jhipster_1   | 	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
jhipster_1   | 	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747)
jhipster_1   | 	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
jhipster_1   | 	at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
jhipster_1   | 	at com.vw.example.microservices.kafka.sammple.JhipsterApp.main(JhipsterApp.java:62)
jhipster_1   | Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.vw.example.microservices.kafka.sammple.web.rest.JhipsterKafkaResource]: Constructor threw exception; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka producer
jhipster_1   | 	at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:216)
jhipster_1   | 	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117)
jhipster_1   | 	at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:310)
jhipster_1   | 	... 17 common frames omitted
jhipster_1   | Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
jhipster_1   | 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:432)
jhipster_1   | 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:270)
jhipster_1   | 	at com.vw.example.microservices.kafka.sammple.web.rest.JhipsterKafkaResource.<init>(JhipsterKafkaResource.java:35)
jhipster_1   | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
jhipster_1   | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
jhipster_1   | 	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
jhipster_1   | 	at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
jhipster_1   | 	at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:203)
jhipster_1   | 	... 19 common frames omitted
jhipster_1   | Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
jhipster_1   | 	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:88)
jhipster_1   | 	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)
jhipster_1   | 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:407)
jhipster_1   | 	... 26 common frames omitted
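The root cause in this trace is the final frame: Kafka's `ClientUtils.parseAndValidateAddresses` resolves every host listed in `bootstrap.servers` while the producer is being constructed, and throws `ConfigException` if none resolve. The hostname `kafka` only exists on the docker-compose network, so a producer built outside that network (or before the network is up) fails immediately. A minimal stdlib sketch of the same resolution check (the class and method names here are illustrative, not Kafka's API):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class BootstrapCheck {

    // Returns true if the host part of a "host:port" entry resolves via DNS,
    // mirroring the validation Kafka performs at producer construction time.
    static boolean resolvable(String server) {
        String host = server.substring(0, server.lastIndexOf(':'));
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "kafka" resolves only inside the docker-compose network; from the
        // host (or a container started too early) it typically does not,
        // which is exactly what triggers the ConfigException above.
        System.out.println(resolvable("localhost:9092"));
    }
}
```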
Related issues
Suggest a Fix
JHipster Version(s)

6.8.0

JHipster configuration
{
  "generator-jhipster": {
    "promptValues": {
      "packageName": "com.myapp"
    },
    "jhipsterVersion": "6.8.0",
    "applicationType": "microservice",
    "baseName": "jhipster",
    "packageName": "com.myapp",
    "packageFolder": "com/myapp",
    "serverPort": "8081",
    "authenticationType": "jwt",
    "cacheProvider": "no",
    "enableHibernateCache": false,
    "websocket": false,
    "databaseType": "no",
    "devDatabaseType": "no",
    "prodDatabaseType": "no",
    "searchEngine": false,
    "messageBroker": "kafka",
    "serviceDiscoveryType": false,
    "buildTool": "maven",
    "enableSwaggerCodegen": false,
    "jwtSecretKey": "ZjBiYWRlODIyZDVkZTVkYzA3YTkwZDUwZTZjZjRlMzZiOGM4NDc5NTQ0MzVkZmZiZDY4ZTI2Yzc4ZDNkMWJjYWM5MzBmYWJmNTg2MGM4NzM3NzlkOTkyMzIyY2FmODA2NTBiYmQ1ZjliMzQyZGQ5YTdhOTRjMTM0MjA2MDY4ZWM=",
    "embeddableLaunchScript": false,
    "creationTimestamp": 1589165168805,
    "testFrameworks": [],
    "jhiPrefix": "jhi",
    "entitySuffix": "",
    "dtoSuffix": "DTO",
    "otherModules": [],
    "enableTranslation": false,
    "clientPackageManager": "npm",
    "blueprints": [],
    "skipClient": true,
    "skipUserManagement": true
  }
}

Entity configuration(s) entityName.json files generated in the .jhipster directory
Browsers and Operating System

Linux Mint

  • Checking this box is mandatory (this is just to show you read everything)
@vw98075
Contributor Author

vw98075 commented May 13, 2020

I already provided all the steps to reproduce the problem. Is something missing?

@atomfrede
Member

@vw98075 Nothing is missing. We just need time to work on it and reproduce it ourselves (if you mean the added label).

@Shaolans
Member

Shaolans commented May 18, 2020

I tried to reproduce this issue, but I could only reproduce it once (the first time).
Following the exact same steps you mentioned, I can't trigger it again.

@pascalgrimaud
Member

I have already encountered this when the Kafka broker didn't start correctly, or when I started the application too quickly.
Since we can't reproduce it, I'm closing this issue.
Feel free to provide more details and steps to reproduce if you think there is a real issue here.

@leifjones
Contributor

leifjones commented Aug 20, 2020

I just encountered this. Generated via 6.10.1. It would be good for the generated application not to fail when Kafka fails.

Caveat: I'm new to Kafka and currently understand it as a tool for inter-service communication. If that's right, it seems strange for one service to fail when the separately run communication service is unavailable.

In case it's helpful to whoever eventually figures out what's going on, I'll share the .yo-rc.json and the application part of the .jdl:

{
  "generator-jhipster": {
    "authenticationType": "oauth2",
    "cacheProvider": "ehcache",
    "clientFramework": "angularX",
    "serverPort": "8002",
    "serviceDiscoveryType": false,
    "skipUserManagement": true,
    "baseName": "inventory",
    "buildTool": "gradle",
    "databaseType": "sql",
    "devDatabaseType": "mssql",
    "enableHibernateCache": true,
    "enableSwaggerCodegen": true,
    "enableTranslation": false,
    "jhiPrefix": "jhi",
    "languages": ["en", "fr"],
    "messageBroker": "kafka",
    "nativeLanguage": "en",
    "prodDatabaseType": "mssql",
    "searchEngine": "elasticsearch",
    "skipClient": false,
    "testFrameworks": ["protractor", "cucumber", "gatling"],
    "websocket": "spring-websocket",
    "applicationType": "monolith",
    "packageName": "package.name",
    "packageFolder": "package/folder",
    "useSass": true,
    "jhipsterVersion": "6.10.1",
    "creationTimestamp": 1597859100648,
    "skipServer": false,
    "clientPackageManager": "npm",
    "clientTheme": "none",
    "clientThemeVariant": "",
    "embeddableLaunchScript": false,
    "entitySuffix": "",
    "dtoSuffix": "DTO",
    "otherModules": [],
    "blueprints": []
  },
  "entities": []
}
application {
    config {
        applicationType monolith
        authenticationType oauth2
        baseName inventory
        buildTool gradle
        devDatabaseType mssql
        enableSwaggerCodegen true
        enableTranslation false
        messageBroker kafka
        packageName package.name
        prodDatabaseType mssql
        searchEngine elasticsearch
        serverPort 8002
        testFrameworks [protractor, cucumber, gatling]
        useSass true
        websocket spring-websocket
    }
    entities *
}
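Regarding the suggestion above that the generated application should not fail when Kafka is down: one common pattern is to build the producer lazily on first use instead of in the bean constructor, so a missing broker fails an individual request rather than application startup. A stdlib-only sketch of the idea (`LazyHolder` is a hypothetical helper; an actual fix would wrap `new KafkaProducer<>(props)` in the supplier):

```java
import java.util.function.Supplier;

// Hypothetical helper: defer construction of a fragile dependency until
// first use, so a broker that is down fails the request, not startup.
public class LazyHolder<T> {
    private final Supplier<T> factory;
    private volatile T instance;

    public LazyHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    // Double-checked locking: the factory runs at most once on success;
    // if it throws (e.g. unresolvable bootstrap.servers), the next call
    // simply retries instead of leaving the whole application dead.
    public T get() {
        T local = instance;
        if (local == null) {
            synchronized (this) {
                local = instance;
                if (local == null) {
                    instance = local = factory.get();
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        // Stand-in for () -> new KafkaProducer<>(props)
        LazyHolder<String> producer = new LazyHolder<>(() -> "connected");
        System.out.println(producer.get());
    }
}
```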

@pascalgrimaud
Member

@mrsegen: can you provide all the steps to reproduce the issue? And add the stack trace too, please?

@leifjones
Contributor

leifjones commented Aug 20, 2020

@pascalgrimaud , sure!

Steps taken:

  1. Put the JDL file in a blank directory.
  2. Run jhipster import-jdl ./[application].jdl
  3. Attempt to run ./gradlew, but abort and switch to ./gradlew bootJar -Pprod jibDockerBuild (in preparation for running it all with docker-compose).
  4. Encounter an error and change jib_plugin_version=2.4.0 to jib_plugin_version=2.5.0 in [application]/gradle.properties as recommended here.
  5. Provide a jib { to { ... } } within [application]/build.gradle (and associated variables in [application]/gradle.properties) for a private Docker registry per this guide.
  6. Run jhipster ci-cd to add a Jenkins pipeline.
  7. Run npm audit fix
  8. Attempt docker-compose -f src/main/docker/app.yml up -d and discover that docker_[application]-app_1 was dying because mssql took too long to start. Thus, change JHIPSTER_SLEEP=30 to JHIPSTER_SLEEP=90 under [application]-app's environment variables in [application]/src/main/docker/app.yml.
  9. Attempt docker-compose -f src/main/docker/app.yml up -d and find that docker_[application]-app_1 and docker_kafka_1 have each exited.
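For reference, the change from step 8 lives in the generated compose file; the relevant fragment might look like this (the service name depends on the app's baseName, so it is assumed here):

```yaml
# src/main/docker/app.yml (fragment; service name assumed)
inventory-app:
  environment:
    # Seconds the container entrypoint sleeps before starting the app,
    # giving mssql (and Kafka) time to come up first
    - JHIPSTER_SLEEP=90
```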

For the docker_[application]-app_1 container, the stack trace (with pertinent preceding logs) is:

2020-08-20 13:51:54.136 WARN 1 --- [ main] org.apache.kafka.clients.ClientUtils : Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka

2020-08-20 13:51:54.138 INFO 1 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.

2020-08-20 13:51:54.140 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'inventoryKafkaResource' defined in file [/app/classes/com/herzog/tps/inventory/web/rest/InventoryKafkaResource.class]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.herzog.tps.inventory.web.rest.InventoryKafkaResource]: Constructor threw exception; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka producer

2020-08-20 13:51:54.242 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed


org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'inventoryKafkaResource' defined in file [/app/classes/com/herzog/tps/inventory/web/rest/InventoryKafkaResource.class]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.herzog.tps.inventory.web.rest.InventoryKafkaResource]: Constructor threw exception; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka producer

at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:314)

at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:295)

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1358)

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1204)

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:557)

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)

at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323)

at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:226)

at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321)

at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)

at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:895)

at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878)

at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)

at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)

at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747)

at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)

at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)

at com.herzog.tps.inventory.InventoryApp.main(InventoryApp.java:63)

Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.herzog.tps.inventory.web.rest.InventoryKafkaResource]: Constructor threw exception; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka producer

at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:217)

at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117)

at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:310)

... 17 common frames omitted

Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer

at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:433)

at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:270)

at com.herzog.tps.inventory.web.rest.InventoryKafkaResource.<init>(InventoryKafkaResource.java:35)

at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)

at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)

at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)

at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:204)

... 19 common frames omitted

Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers

at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:88)

at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)

at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:408)

... 26 common frames omitted

For docker_kafka_1, its full output before exiting was:

===> ENV Variables ...

ALLOW_UNSIGNED=false

COMPONENT=kafka

CONFLUENT_DEB_VERSION=1

CONFLUENT_PLATFORM_LABEL=

CONFLUENT_VERSION=5.5.0

CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar

HOME=/root

HOSTNAME=e4909b0cf0f0

KAFKA_ADVERTISED_HOST_NAME=kafka

KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092

KAFKA_BROKER_ID=1

KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT

KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT

KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1

KAFKA_VERSION=

KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181

LANG=C.UTF-8

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

PWD=/

PYTHON_PIP_VERSION=8.1.2

PYTHON_VERSION=2.7.9-1

SCALA_VERSION=2.12

SHLVL=1

ZULU_OPENJDK_VERSION=8=8.38.0.13

_=/usr/bin/env


===> User

uid=0(root) gid=0(root) groups=0(root)

===> Configuring ...

===> Running preflight checks ...

===> Check if /var/lib/kafka/data is writable ...

===> Check if Zookeeper is healthy ...

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=e4909b0cf0f0

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.19.104-microsoft-standard

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=185MB

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2814MB

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=190MB

[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d

[main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation

[main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes

[main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=

[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.22.0.7:2181. Will not attempt to authenticate using SASL (unknown error)

[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.22.0.2:54476, server: zookeeper/172.22.0.7:2181

[main] ERROR io.confluent.admin.utils.ClusterStatus - Timed out waiting for connection to Zookeeper server [zookeeper:2181].

[main-SendThread(zookeeper:2181)] WARN org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 40000ms for sessionid 0x0

[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x0 closed

[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x0

@pascalgrimaud pascalgrimaud reopened this Aug 20, 2020
@pascalgrimaud
Member

I'm reopening it and will try to reproduce.
Thanks for all the steps.

@leifjones
Contributor

leifjones commented Aug 20, 2020

Also the docker_zookeeper_1 had this at the bottom of its output:

[2020-08-20 13:50:07,525] WARN Exception causing close of session 0x0: ZooKeeperServer not running (org.apache.zookeeper.server.NIOServerCnxn)

[2020-08-20 13:50:07,526] WARN Unexpected exception (org.apache.zookeeper.server.WorkerService)

java.lang.NullPointerException

at org.apache.zookeeper.server.ZooKeeperServer.removeCnxn(ZooKeeperServer.java:135)

at org.apache.zookeeper.server.NIOServerCnxn.close(NIOServerCnxn.java:602)

at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:375)

at org.apache.zookeeper.server.NIOServerCnxnFactory$IOWorkRequest.doWork(NIOServerCnxnFactory.java:530)

at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:155)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

Though on another docker-compose attempt, after running ./gradlew bootJar -Pprod jibDockerBuild, I could not reproduce the full issue. However:

  • The above zookeeper output still occurred
  • The above docker_[application]-app_1 output still occurred, but the application ran (though with separate, seemingly unrelated issues: the app directed the browser to http://keycloak:9080/... when clicking Sign In)

On an additional docker-compose attempt:

  • The zookeeper output still occurred, but the docker_[application]-app_1 output did not occur.

@pascalgrimaud
Member

@mrsegen: I tried to reproduce the issue, but it works well for me

  • checked out v6.10.1 to be at the same version as you
  • changed your JDL: package.name -> io.github.pascalgrimaud
  • generated the app
  • launched ./gradlew bootJar -Pprod jibDockerBuild
  • the image was correctly created, and I launched docker-compose -f src/main/docker/app.yml up -d

(screenshot: Capture d’écran de 2020-08-22 10-40-29)

  • my zookeeper logs:
➜ docker logs docker_zookeeper_1                              
===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=zookeeper
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.5.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=ec9dc161dffc
KAFKA_VERSION=
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SHLVL=1
ZOOKEEPER_CLIENT_PORT=2181
ZOOKEEPER_TICK_TIME=2000
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ... 
===> Check if /var/lib/zookeeper/data is writable ...
===> Check if /var/lib/zookeeper/log is writable ...
===> Launching ... 
===> Launching zookeeper ... 
[2020-08-22 08:31:46,235] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-08-22 08:31:46,256] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-08-22 08:31:46,256] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-08-22 08:31:46,262] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-08-22 08:31:46,262] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-08-22 08:31:46,262] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-08-22 08:31:46,262] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2020-08-22 08:31:46,280] INFO Log4j found with jmx enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2020-08-22 08:31:46,321] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-08-22 08:31:46,327] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-08-22 08:31:46,327] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-08-22 08:31:46,328] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-08-22 08:31:46,332] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-08-22 08:31:46,378] INFO Server environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,378] INFO Server environment:host.name=ec9dc161dffc (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,378] INFO Server environment:java.version=1.8.0_212 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,378] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,378] INFO Server environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,378] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/connect-json-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.7.30.jar:/usr/bin/../share/java/kafka/hk2-api-2.5.0.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.45.Final.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-scaladoc.jar:/usr/bin/../share/java/kafka/javassist-3.26.0-GA.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/scala-library-2.12.10.jar:/usr/bin/../share/java/kafka/scala-reflect-2.12.10.jar:/usr/bin/../share/java/kafka/jersey-client-2.28.jar:/usr/bin/../share/java/kafka/connect-runtime-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.45.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.12-0.9.0.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.45.Final.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.5.7.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.4.jar:/usr/bin/../share/java/kafka/scala-logging_2.12-3.9.2.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/commons-compress-1.19.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.1.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.45.Final.jar:/usr/bin/../share/java/kafka/jackson-module-paranamer-2.10.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.10.2.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.12-2.10.2.jar:/usr/bin/../share/java/kafka/hk2-utils-2.5.0.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.10.2.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.7.3.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.30.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.12-2.1.3.jar:/usr
/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-javadoc.jar:/usr/bin/../share/java/kafka/zstd-jni-1.4.4-7.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.28.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.28.jar:/usr/bin/../share/java/kafka/zookeeper-3.5.7.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/connect-transforms-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/connect-mirror-client-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.45.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/log4j-1.2.17.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.10.2.jar:/usr/bin/../share/java/kafka/plexus-utils-3.2.1.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.28.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.12-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.45.Final.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/support-metrics-common-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/javassist-3.22.0-CR2.jar:/usr/bin/../share/java/kafka/validation-api-2.0.1.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.10.2.jar:/usr/bin/../share/java/kafka/commons-codec-1.11.jar:/usr/bin/../share/java/kafka/httpcore-4.4.13.jar:/usr/bin/../share/java/kafka/jersey-server-2.28.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/connect-api-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/connect-file-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.45.Final.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/lz4
-java-1.7.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.10.2.jar:/usr/bin/../share/java/kafka/connect-mirror-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/rocksdbjni-5.18.3.jar:/usr/bin/../share/java/kafka/maven-artifact-3.6.3.jar:/usr/bin/../share/java/kafka/support-metrics-client-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/httpmime-4.5.11.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.45.Final.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-test-sources.jar:/usr/bin/../share/java/kafka/jersey-common-2.28.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.5.jar:/usr/bin/../share/java/kafka/httpclient-4.5.11.jar:/usr/bin/../share/java/kafka/jersey-media-jaxb-2.28.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.5.0.jar:/usr/bin/../share/java/kafka/kafka-clients-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jackson-core-2.10.2.jar:/usr/bin/../share/java/kafka/hk2-locator-2.5.0.jar:/usr/bin/../share/java/kafka/avro-1.9.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/audience-annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-test.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.10.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.10.2.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.5.0.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka
/jakarta.xml.bind-api-2.3.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-sources.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../support-metrics-client/build/dependant-libs-2.12/*:/usr/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:os.version=5.4.0-42-generic (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:user.dir=/ (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,379] INFO Server environment:os.memory.free=500MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,380] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,380] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,384] INFO minSessionTimeout set to 4000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,384] INFO maxSessionTimeout set to 40000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,385] INFO Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-22 08:31:46,458] INFO Logging initialized @1046ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2020-08-22 08:31:46,838] WARN o.e.j.s.ServletContextHandler@3b088d51{/,null,UNAVAILABLE} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
[2020-08-22 08:31:46,838] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
[2020-08-22 08:31:46,874] INFO jetty-9.4.24.v20191120; built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; jvm 1.8.0_212-b04 (org.eclipse.jetty.server.Server)
[2020-08-22 08:31:47,100] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
[2020-08-22 08:31:47,103] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
[2020-08-22 08:31:47,105] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
[2020-08-22 08:31:47,161] INFO Started o.e.j.s.ServletContextHandler@3b088d51{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2020-08-22 08:31:47,208] INFO Started ServerConnector@2038ae61{HTTP/1.1,[http/1.1]}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
[2020-08-22 08:31:47,209] INFO Started @1798ms (org.eclipse.jetty.server.Server)
[2020-08-22 08:31:47,213] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
[2020-08-22 08:31:47,230] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2020-08-22 08:31:47,234] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-08-22 08:31:47,237] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-08-22 08:31:47,301] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
[2020-08-22 08:31:47,314] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-08-22 08:31:47,320] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-08-22 08:31:47,373] INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)
[2020-08-22 08:31:48,741] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
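A note on the `InconsistentBrokerIdException` reported above: it usually means a stale `meta.properties` (written by an earlier container run with a different `broker.id`) is still sitting in the Kafka data volume. One possible workaround — destructive, since it wipes all locally stored Kafka and ZooKeeper data (topic data included) — is to recreate the stack without its volumes so the broker writes fresh metadata:

```shell
# Tear down the stack AND remove the volumes it created, so the stale
# /var/lib/kafka/data/meta.properties (holding the old broker.id) is discarded.
docker-compose down -v

# Bring the stack back up; the broker writes a new meta.properties that
# matches the configured KAFKA_BROKER_ID.
docker-compose up
```

This is only a reset, not a fix for the underlying configuration: if the generated `docker-compose.yml` assigns a different `KAFKA_BROKER_ID` than the one persisted in the volume, the mismatch will recur on the next ID change.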

  • the Kafka container logs:
➜ docker logs docker_kafka_1                      
===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.5.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=d5926e10e948
KAFKA_ADVERTISED_HOST_NAME=kafka
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
KAFKA_BROKER_ID=1
KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_VERSION=
KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ... 
===> Check if /var/lib/kafka/data is writable ...
===> Check if Zookeeper is healthy ...
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=d5926e10e948
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=5.4.0-42-generic
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=234MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=3525MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=238MB
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
[main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
[main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
[main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.19.0.7:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.19.0.6:58012, server: zookeeper/172.19.0.7:2181
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.19.0.7:2181, sessionid = 0x10000155a0a0000, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x10000155a0a0000 closed
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x10000155a0a0000
===> Launching ... 
===> Launching kafka ... 
[2020-08-22 08:31:50,134] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-08-22 08:31:51,085] INFO KafkaConfig values: 
	advertised.host.name = kafka
	advertised.listeners = PLAINTEXT://kafka:9092
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 1
	broker.id.generation.enable = true
	broker.rack = null
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	connections.max.reauth.ms = 0
	control.plane.listener.name = null
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.max.bytes = 57671680
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 3000
	group.max.session.timeout.ms = 1800000
	group.max.size = 2147483647
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = PLAINTEXT
	inter.broker.protocol.version = 2.5-IV0
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
	listeners = PLAINTEXT://0.0.0.0:9092
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.max.compaction.lag.ms = 9223372036854775807
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /var/lib/kafka/data
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.5-IV0
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections = 2147483647
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1048588
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 30000
	replica.selector.class = null
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	security.providers = null
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.principal.mapping.rules = DEFAULT
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 2
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 3
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.clientCnxnSocket = null
	zookeeper.connect = zookeeper:2181
	zookeeper.connection.timeout.ms = null
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 18000
	zookeeper.set.acl = false
	zookeeper.ssl.cipher.suites = null
	zookeeper.ssl.client.enable = false
	zookeeper.ssl.crl.enable = false
	zookeeper.ssl.enabled.protocols = null
	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
	zookeeper.ssl.keystore.location = null
	zookeeper.ssl.keystore.password = null
	zookeeper.ssl.keystore.type = null
	zookeeper.ssl.ocsp.enable = false
	zookeeper.ssl.protocol = TLSv1.2
	zookeeper.ssl.truststore.location = null
	zookeeper.ssl.truststore.password = null
	zookeeper.ssl.truststore.type = null
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2020-08-22 08:31:51,126] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2020-08-22 08:31:51,230] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
[2020-08-22 08:31:51,246] WARN Please note that the support metrics collection feature ("Metrics") of Proactive Support is enabled.  With Metrics enabled, this broker is configured to collect and report certain broker and cluster metadata ("Metadata") about your use of the Confluent Platform (including without limitation, your remote internet protocol address) to Confluent, Inc. ("Confluent") or its parent, subsidiaries, affiliates or service providers every 24hours.  This Metadata may be transferred to any country in which Confluent maintains facilities.  For a more in depth discussion of how Confluent processes such information, please read our Privacy Policy located at http://www.confluent.io/privacy. By proceeding with `confluent.support.metrics.enable=true`, you agree to all such collection, transfer, storage and use of Metadata by Confluent.  You can turn the Metrics feature off by setting `confluent.support.metrics.enable=false` in the broker configuration and restarting the broker.  See the Confluent Platform documentation for further information. (io.confluent.support.metrics.SupportedServerStartable)
[2020-08-22 08:31:51,249] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2020-08-22 08:31:51,251] INFO starting (kafka.server.KafkaServer)
[2020-08-22 08:31:51,256] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
[2020-08-22 08:31:51,299] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-08-22 08:31:51,316] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,317] INFO Client environment:host.name=d5926e10e948 (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,317] INFO Client environment:java.version=1.8.0_212 (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,317] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,317] INFO Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,317] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/connect-json-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.7.30.jar:/usr/bin/../share/java/kafka/hk2-api-2.5.0.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.45.Final.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-scaladoc.jar:/usr/bin/../share/java/kafka/javassist-3.26.0-GA.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/scala-library-2.12.10.jar:/usr/bin/../share/java/kafka/scala-reflect-2.12.10.jar:/usr/bin/../share/java/kafka/jersey-client-2.28.jar:/usr/bin/../share/java/kafka/connect-runtime-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.45.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.12-0.9.0.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.45.Final.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.5.7.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.4.jar:/usr/bin/../share/java/kafka/scala-logging_2.12-3.9.2.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/commons-compress-1.19.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.1.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.45.Final.jar:/usr/bin/../share/java/kafka/jackson-module-paranamer-2.10.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.10.2.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.12-2.10.2.jar:/usr/bin/../share/java/kafka/hk2-utils-2.5.0.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.10.2.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.7.3.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.30.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.12-2.1.3.jar:/usr
/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-javadoc.jar:/usr/bin/../share/java/kafka/zstd-jni-1.4.4-7.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.28.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.28.jar:/usr/bin/../share/java/kafka/zookeeper-3.5.7.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/connect-transforms-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/connect-mirror-client-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.45.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/log4j-1.2.17.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.10.2.jar:/usr/bin/../share/java/kafka/plexus-utils-3.2.1.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.28.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.12-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.45.Final.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/support-metrics-common-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/javassist-3.22.0-CR2.jar:/usr/bin/../share/java/kafka/validation-api-2.0.1.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.10.2.jar:/usr/bin/../share/java/kafka/commons-codec-1.11.jar:/usr/bin/../share/java/kafka/httpcore-4.4.13.jar:/usr/bin/../share/java/kafka/jersey-server-2.28.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/connect-api-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/connect-file-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.45.Final.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/lz4
-java-1.7.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.10.2.jar:/usr/bin/../share/java/kafka/connect-mirror-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/rocksdbjni-5.18.3.jar:/usr/bin/../share/java/kafka/maven-artifact-3.6.3.jar:/usr/bin/../share/java/kafka/support-metrics-client-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/httpmime-4.5.11.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.45.Final.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-test-sources.jar:/usr/bin/../share/java/kafka/jersey-common-2.28.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.5.jar:/usr/bin/../share/java/kafka/httpclient-4.5.11.jar:/usr/bin/../share/java/kafka/jersey-media-jaxb-2.28.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.5.0.jar:/usr/bin/../share/java/kafka/kafka-clients-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jackson-core-2.10.2.jar:/usr/bin/../share/java/kafka/hk2-locator-2.5.0.jar:/usr/bin/../share/java/kafka/avro-1.9.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/audience-annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-test.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.10.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-5.5.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.10.2.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.5.0.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.24.v20191120.jar:/usr/bin/../share/java/kafka
/jakarta.xml.bind-api-2.3.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka_2.12-5.5.0-ccs-sources.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../support-metrics-client/build/dependant-libs-2.12/*:/usr/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:os.version=5.4.0-42-generic (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,318] INFO Client environment:os.memory.free=987MB (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,319] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,319] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,322] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@4f18837a (org.apache.zookeeper.ZooKeeper)
[2020-08-22 08:31:51,327] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2020-08-22 08:31:51,344] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2020-08-22 08:31:51,376] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-08-22 08:31:51,377] INFO Opening socket connection to server zookeeper/172.19.0.7:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-08-22 08:31:51,382] INFO Socket connection established, initiating session, client: /172.19.0.6:58014, server: zookeeper/172.19.0.7:2181 (org.apache.zookeeper.ClientCnxn)
[2020-08-22 08:31:51,400] INFO Session establishment complete on server zookeeper/172.19.0.7:2181, sessionid = 0x10000155a0a0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2020-08-22 08:31:51,408] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2020-08-22 08:31:52,294] INFO Cluster ID = or3coT0CSnG3kANB-ef1Fg (kafka.server.KafkaServer)
[2020-08-22 08:31:52,299] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2020-08-22 08:31:52,447] INFO KafkaConfig values: 
	advertised.host.name = kafka
	advertised.listeners = PLAINTEXT://kafka:9092
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 1
	broker.id.generation.enable = true
	broker.rack = null
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	connections.max.reauth.ms = 0
	control.plane.listener.name = null
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.max.bytes = 57671680
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 3000
	group.max.session.timeout.ms = 1800000
	group.max.size = 2147483647
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = PLAINTEXT
	inter.broker.protocol.version = 2.5-IV0
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
	listeners = PLAINTEXT://0.0.0.0:9092
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.max.compaction.lag.ms = 9223372036854775807
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /var/lib/kafka/data
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.5-IV0
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections = 2147483647
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1048588
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 30000
	replica.selector.class = null
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	security.providers = null
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.principal.mapping.rules = DEFAULT
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 2
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 3
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.clientCnxnSocket = null
	zookeeper.connect = zookeeper:2181
	zookeeper.connection.timeout.ms = null
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 18000
	zookeeper.set.acl = false
	zookeeper.ssl.cipher.suites = null
	zookeeper.ssl.client.enable = false
	zookeeper.ssl.crl.enable = false
	zookeeper.ssl.enabled.protocols = null
	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
	zookeeper.ssl.keystore.location = null
	zookeeper.ssl.keystore.password = null
	zookeeper.ssl.keystore.type = null
	zookeeper.ssl.ocsp.enable = false
	zookeeper.ssl.protocol = TLSv1.2
	zookeeper.ssl.truststore.location = null
	zookeeper.ssl.truststore.password = null
	zookeeper.ssl.truststore.type = null
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2020-08-22 08:31:52,473] INFO KafkaConfig values: (identical dump repeated) (kafka.server.KafkaConfig)
[2020-08-22 08:31:52,547] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-08-22 08:31:52,549] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-08-22 08:31:52,550] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-08-22 08:31:52,638] INFO Loading logs. (kafka.log.LogManager)
[2020-08-22 08:31:52,657] INFO Logs loading complete in 19 ms. (kafka.log.LogManager)
[2020-08-22 08:31:52,693] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-08-22 08:31:52,716] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2020-08-22 08:31:52,729] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2020-08-22 08:31:52,821] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2020-08-22 08:31:53,304] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-08-22 08:31:53,377] INFO [SocketServer brokerId=1] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2020-08-22 08:31:53,379] INFO [SocketServer brokerId=1] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2020-08-22 08:31:53,410] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-22 08:31:53,412] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-22 08:31:53,415] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-22 08:31:53,418] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-22 08:31:53,442] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-08-22 08:31:53,489] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-08-22 08:31:53,518] INFO Stat of the created znode at /brokers/ids/1 is: 26,26,1598085113507,1598085113507,1,0,0,72057685742845953,180,0,26
 (kafka.zk.KafkaZkClient)
[2020-08-22 08:31:53,520] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 26 (kafka.zk.KafkaZkClient)
[2020-08-22 08:31:53,615] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-08-22 08:31:53,631] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-22 08:31:53,639] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-22 08:31:53,644] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2020-08-22 08:31:53,653] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-22 08:31:53,659] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
[2020-08-22 08:31:53,660] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
[2020-08-22 08:31:53,666] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
[2020-08-22 08:31:53,679] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
[2020-08-22 08:31:53,693] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
[2020-08-22 08:31:53,713] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2020-08-22 08:31:53,715] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2020-08-22 08:31:53,722] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 8 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-08-22 08:31:53,738] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2020-08-22 08:31:53,745] INFO [Controller id=1] Initialized broker epochs cache: Map(1 -> 26) (kafka.controller.KafkaController)
[2020-08-22 08:31:53,752] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
[2020-08-22 08:31:53,764] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
[2020-08-22 08:31:53,775] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-08-22 08:31:53,776] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
[2020-08-22 08:31:53,776] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
[2020-08-22 08:31:53,777] INFO [Controller id=1] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController)
[2020-08-22 08:31:53,778] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2020-08-22 08:31:53,778] INFO [Controller id=1] Current list of topics in the cluster: Set() (kafka.controller.KafkaController)
[2020-08-22 08:31:53,778] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-08-22 08:31:53,779] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
[2020-08-22 08:31:53,785] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
[2020-08-22 08:31:53,786] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
[2020-08-22 08:31:53,786] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
[2020-08-22 08:31:53,787] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: Set() (kafka.controller.TopicDeletionManager)
[2020-08-22 08:31:53,788] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
[2020-08-22 08:31:53,808] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
[2020-08-22 08:31:53,809] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
[2020-08-22 08:31:53,814] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-22 08:31:53,815] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
[2020-08-22 08:31:53,815] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> Map() (kafka.controller.ZkReplicaStateMachine)
[2020-08-22 08:31:53,816] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
[2020-08-22 08:31:53,818] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
[2020-08-22 08:31:53,818] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
[2020-08-22 08:31:53,825] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> Map() (kafka.controller.ZkPartitionStateMachine)
[2020-08-22 08:31:53,826] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
[2020-08-22 08:31:53,840] INFO [Controller id=1] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
[2020-08-22 08:31:53,840] INFO [Controller id=1] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
[2020-08-22 08:31:53,841] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
[2020-08-22 08:31:53,842] INFO [Controller id=1] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
[2020-08-22 08:31:53,843] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions  triggered by ZkTriggered (kafka.controller.KafkaController)
[2020-08-22 08:31:53,863] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
[2020-08-22 08:31:53,908] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-08-22 08:31:53,945] INFO [SocketServer brokerId=1] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2020-08-22 08:31:53,953] INFO Kafka version: 5.5.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-22 08:31:53,954] INFO Kafka commitId: 606822a624024828 (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-22 08:31:53,954] INFO Kafka startTimeMs: 1598085113946 (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-22 08:31:53,963] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2020-08-22 08:31:53,989] INFO Waiting until monitored service is ready for metrics collection (io.confluent.support.metrics.BaseMetricsReporter)
[2020-08-22 08:31:53,995] INFO Monitored service is now ready (io.confluent.support.metrics.BaseMetricsReporter)
[2020-08-22 08:31:53,996] INFO Attempting to collect and submit metrics (io.confluent.support.metrics.BaseMetricsReporter)
[2020-08-22 08:31:54,044] TRACE [Controller id=1 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
[2020-08-22 08:31:54,166] WARN The replication factor of topic __confluent.support.metrics will be set to 1, which is less than the desired replication factor of 3 (reason: this cluster contains only 1 brokers).  If you happen to add more brokers to this cluster, then it is important to increase the replication factor of the topic to eventually 3 to ensure reliable and durable metrics collection. (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2020-08-22 08:31:54,166] INFO Attempting to create topic __confluent.support.metrics with 1 replicas, assuming 1 total brokers (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2020-08-22 08:31:54,191] INFO Creating topic __confluent.support.metrics with configuration {retention.ms=31536000000} and initial partition assignment Map(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
[2020-08-22 08:31:54,240] INFO [Controller id=1] New topics: [Set(__confluent.support.metrics)], deleted topics: [Set()], new partition replica assignment [Map(__confluent.support.metrics-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))] (kafka.controller.KafkaController)
[2020-08-22 08:31:54,241] INFO [Controller id=1] New partition creation callback for __confluent.support.metrics-0 (kafka.controller.KafkaController)
[2020-08-22 08:31:54,245] TRACE [Controller id=1 epoch=1] Changed partition __confluent.support.metrics-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
[2020-08-22 08:31:54,251] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __confluent.support.metrics-0 from NonExistentReplica to NewReplica (state.change.logger)
[2020-08-22 08:31:54,318] TRACE [Controller id=1 epoch=1] Changed partition __confluent.support.metrics-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), zkVersion=0) (state.change.logger)
[2020-08-22 08:31:54,320] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__confluent.support.metrics', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) to broker 1 for partition __confluent.support.metrics-0 (state.change.logger)
[2020-08-22 08:31:54,323] TRACE [Controller id=1 epoch=1] Sending UpdateMetadata request UpdateMetadataPartitionState(topicName='__confluent.support.metrics', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) to brokers Set(1) for partition __confluent.support.metrics-0 (state.change.logger)
[2020-08-22 08:31:54,324] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __confluent.support.metrics-0 from NewReplica to OnlineReplica (state.change.logger)
[2020-08-22 08:31:54,329] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__confluent.support.metrics', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 1 from controller 1 epoch 1 (state.change.logger)
[2020-08-22 08:31:54,364] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __confluent.support.metrics-0 (state.change.logger)
[2020-08-22 08:31:54,371] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(__confluent.support.metrics-0) (kafka.server.ReplicaFetcherManager)
[2020-08-22 08:31:54,375] INFO ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [PLAINTEXT://kafka:9092]
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = producer-1
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 0
	max.block.ms = 10000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metadata.max.idle.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2020-08-22 08:31:54,419] INFO Kafka version: 5.5.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-22 08:31:54,421] INFO Kafka commitId: 606822a624024828 (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-22 08:31:54,421] INFO Kafka startTimeMs: 1598085114415 (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-22 08:31:54,505] WARN [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {__confluent.support.metrics=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2020-08-22 08:31:54,507] INFO [Producer clientId=producer-1] Cluster ID: or3coT0CSnG3kANB-ef1Fg (org.apache.kafka.clients.Metadata)
[2020-08-22 08:31:54,511] INFO [Log partition=__confluent.support.metrics-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-08-22 08:31:54,525] INFO [Log partition=__confluent.support.metrics-0, dir=/var/lib/kafka/data] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 73 ms (kafka.log.Log)
[2020-08-22 08:31:54,529] INFO Created log for partition __confluent.support.metrics-0 in /var/lib/kafka/data/__confluent.support.metrics-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 31536000000, flush.messages -> 9223372036854775807, message.format.version -> 2.5-IV0, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2020-08-22 08:31:54,530] INFO [Partition __confluent.support.metrics-0 broker=1] No checkpointed highwatermark is found for partition __confluent.support.metrics-0 (kafka.cluster.Partition)
[2020-08-22 08:31:54,532] INFO [Partition __confluent.support.metrics-0 broker=1] Log loaded for partition __confluent.support.metrics-0 with initial high watermark 0 (kafka.cluster.Partition)
[2020-08-22 08:31:54,537] INFO [Partition __confluent.support.metrics-0 broker=1] __confluent.support.metrics-0 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2020-08-22 08:31:54,561] TRACE [Broker id=1] Stopped fetchers as part of become-leader request from controller 1 epoch 1 with correlation id 1 for partition __confluent.support.metrics-0 (last update controller epoch 1) (state.change.logger)
[2020-08-22 08:31:54,563] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __confluent.support.metrics-0 (state.change.logger)
[2020-08-22 08:31:54,580] TRACE [Controller id=1 epoch=1] Received response {error_code=0,partition_errors=[{topic_name=__confluent.support.metrics,partition_index=0,error_code=0,_tagged_fields={}}],_tagged_fields={}} for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
[2020-08-22 08:31:54,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__confluent.support.metrics', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __confluent.support.metrics-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
[2020-08-22 08:31:54,598] TRACE [Controller id=1 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
[2020-08-22 08:31:54,749] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2020-08-22 08:31:54,757] INFO Successfully submitted metrics to Kafka topic __confluent.support.metrics (io.confluent.support.metrics.submitters.KafkaSubmitter)
[2020-08-22 08:31:56,643] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter)
[2020-08-22 08:31:58,865] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2020-08-22 08:31:58,866] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-08-22 08:31:58,868] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController)
[2020-08-22 08:31:58,869] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
[2020-08-22 08:36:58,869] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2020-08-22 08:36:58,869] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-08-22 08:36:58,870] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController)
[2020-08-22 08:36:58,870] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
[2020-08-22 08:41:53,714] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-08-22 08:41:58,870] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2020-08-22 08:41:58,870] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-08-22 08:41:58,871] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController)
[2020-08-22 08:41:58,871] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
[2020-08-22 08:46:58,871] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2020-08-22 08:46:58,871] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-08-22 08:46:58,872] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController)
[2020-08-22 08:46:58,872] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)

  • the logs of my app:
➜ docker logs docker_inventory-app_1              
The application will start in 90s...
Picked up _JAVA_OPTIONS: -Xmx512m -Xms256m

        ██╗ ██╗   ██╗ ████████╗ ███████╗   ██████╗ ████████╗ ████████╗ ███████╗
        ██║ ██║   ██║ ╚══██╔══╝ ██╔═══██╗ ██╔════╝ ╚══██╔══╝ ██╔═════╝ ██╔═══██╗
        ██║ ████████║    ██║    ███████╔╝ ╚█████╗     ██║    ██████╗   ███████╔╝
  ██╗   ██║ ██╔═══██║    ██║    ██╔════╝   ╚═══██╗    ██║    ██╔═══╝   ██╔══██║
  ╚██████╔╝ ██║   ██║ ████████╗ ██║       ██████╔╝    ██║    ████████╗ ██║  ╚██╗
   ╚═════╝  ╚═╝   ╚═╝ ╚═══════╝ ╚═╝       ╚═════╝     ╚═╝    ╚═══════╝ ╚═╝   ╚═╝

:: JHipster 🤓  :: Running Spring Boot 2.2.7.RELEASE ::
:: https://www.jhipster.tech ::

2020-08-22 08:33:10.682  INFO 1 --- [           main] io.github.pascalgrimaud.InventoryApp     : Starting InventoryApp on a7103ad7a2a3 with PID 1 (/app/classes started by root in /)
2020-08-22 08:33:10.686  INFO 1 --- [           main] io.github.pascalgrimaud.InventoryApp     : The following profiles are active: prod,swagger
2020-08-22 08:33:13.388  INFO 1 --- [           main] i.g.pascalgrimaud.config.WebConfigurer   : Web application configuration, using profiles: prod
2020-08-22 08:33:13.388  INFO 1 --- [           main] i.g.pascalgrimaud.config.WebConfigurer   : Web application fully configured
2020-08-22 08:33:18.648  INFO 1 --- [           main] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [kafka:9092]
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

2020-08-22 08:33:18.691  INFO 1 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 2.3.1
2020-08-22 08:33:18.691  INFO 1 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: 18a913733fb71c01
2020-08-22 08:33:18.692  INFO 1 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1598085198688
2020-08-22 08:33:18.884  INFO 1 --- [ad | producer-1] org.apache.kafka.clients.Metadata        : [Producer clientId=producer-1] Cluster ID: or3coT0CSnG3kANB-ef1Fg
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.xnio.nio.NioXnio$2 (file:/app/libs/xnio-nio-3.3.8.Final.jar) to constructor sun.nio.ch.EPollSelectorProvider()
WARNING: Please consider reporting this to the maintainers of org.xnio.nio.NioXnio$2
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-08-22 08:33:20.032  INFO 1 --- [           main] io.github.pascalgrimaud.InventoryApp     : Started InventoryApp in 10.105 seconds (JVM running for 10.665)
2020-08-22 08:33:20.038  INFO 1 --- [           main] io.github.pascalgrimaud.InventoryApp     : 
----------------------------------------------------------
	Application 'inventory' is running! Access URLs:
	Local: 		http://localhost:8002/
	External: 	http://172.19.0.4:8002/
	Profile(s): 	[prod, swagger]
----------------------------------------------------------
2020-08-22 08:33:46.296  WARN 1 --- [  XNIO-1 task-8] o.z.problem.spring.common.AdviceTraits   : Unauthorized: Full authentication is required to access this resource
2020-08-22 08:33:46.368  WARN 1 --- [ XNIO-1 task-12] o.z.problem.spring.common.AdviceTraits   : Unauthorized: Full authentication is required to access this resource

Capture d’écran de 2020-08-22 10-46-05

@pascalgrimaud
Member

pascalgrimaud commented Aug 22, 2020

Here the project I used:

All the commands:

Here all the logs:

So as already said previously, I think it can happen when Zookeeper / Kafka doesn't start correctly. Just relaunch it and it should be OK.
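A minimal relaunch sequence, assuming the generated docker-compose setup (service and volume layout may differ per project; the `-v` variant discards locally stored Kafka/Zookeeper data, which is what clears the `InconsistentBrokerIdException` about a stale stored `broker.id`):

```
docker-compose down        # stop the whole stack
docker-compose up -d       # start again; Zookeeper normally comes up before Kafka
# If Kafka still fails with InconsistentBrokerIdException:
docker-compose down -v     # also removes volumes, i.e. the stored broker.id / log.dirs
docker-compose up -d
```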

@leifjones
Contributor

Thank you, kindly!

@leifjones
Contributor

✔️ I'm no longer seeing the issue.

@pascalgrimaud
Member

Ok then, so let's close this.
Don't hesitate to comment or provide additional information if there is a real issue :-)

@leifjones
Contributor

One tangential point I didn't notice about the successful example / failure to reproduce...

I see from your screenshot that you were able to log in as the admin user. (Not the case for me on two separate attempts.)

This would be a separate issue, but the next problem I was having was that clicking "Sign in" from localhost:8002 would direct the browser to http(s?)://keycloak:[port]/rest/of/auth/url

(I'm sneaking a moment to check this between time with my kids...) I hope to try out your sample repo. Just mentioning now in case you may be aware of something obvious I'm missing.

I'll confirm whether I need to update to latest stable of generator-jhipster.

@pascalgrimaud
Member

@mrsegen : about Keycloak, you should have a look at https://www.jhipster.tech/docker-compose/#-keycloak and don't forget to add keycloak to your /etc/hosts.
Otherwise, you won't be able to log in when using docker-compose with app.yml.
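The required entry can be added like this (a sketch, not official JHipster tooling: it maps the `keycloak` hostname to localhost so the browser can follow the OAuth redirect from the app; `HOSTS_FILE` defaults to a local demo file here, point it at `/etc/hosts` with sudo to apply it for real):

```shell
# Idempotently add the "keycloak" host entry.
# HOSTS_FILE defaults to a demo file for a dry run; use /etc/hosts for real.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
touch "$HOSTS_FILE"
grep -q "keycloak" "$HOSTS_FILE" || echo "127.0.0.1 keycloak" >> "$HOSTS_FILE"
```

Running it a second time is a no-op, since the `grep` guard skips the append when the entry already exists.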

@leifjones
Contributor

That did it! Thank you!

For anyone else stumbling on this with spurious errors while generating a fresh service with JHipster, the Keycloak container was failing to start via docker-compose, with an error about port 9443 being in use. After restarting Docker Desktop, it was good to go!

@leifjones
Contributor

@pascalgrimaud I have a (further tangential) follow up question: Where in the JHipster repositories is the 30 second default that's generated at src/main/docker/app.yml defined? You might recall that this was too small because of a 60 second wait that happens in the mssql container.

@pascalgrimaud
Member

The logic is here:
https://github.com/jhipster/generator-jhipster/blob/master/generators/server/templates/src/main/docker/app.yml.ejs#L74-L81

So you're right about the mssql app. It should be increased to 60.
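For reference, the knob in question in the generated src/main/docker/app.yml is the JHIPSTER_SLEEP environment variable; a sketch of the relevant excerpt (the service name is taken from this thread's logs, and the generated default is 30):

```yaml
# Excerpt sketch of src/main/docker/app.yml; only JHIPSTER_SLEEP matters here.
services:
  inventory-app:
    environment:
      # Seconds the container entrypoint waits before starting the app;
      # the generator emits 30 by default, but mssql needs about 60s to come up.
      - JHIPSTER_SLEEP=60
```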
