fabric8.kubernetes.client.KubernetesClientException - Detection of Kubernetes version failed.
K8S: v1.18
- Strimzi Kafka Operator v0.18 - operational
- Kafka cluster v2.5.0 - operational
In order to get to Kafka cluster 2.7.0, we first need to upgrade the operator to v0.22, but we came across a problem.
After upgrading the operator from 0.18 to 0.22, we got the following:
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64) ~[io.fabric8.kubernetes-client-5.0.2.jar:?]
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:53) ~[io.fabric8.kubernetes-client-5.0.2.jar:?]
at io.fabric8.kubernetes.client.dsl.internal.ClusterOperationsImpl.fetchVersion(ClusterOperationsImpl.java:54) ~[io.fabric8.kubernetes-client-5.0.2.jar:?]
at io.fabric8.kubernetes.client.DefaultKubernetesClient.getVersion(DefaultKubernetesClient.java:489) ~[io.fabric8.kubernetes-client-5.0.2.jar:?]
at io.strimzi.operator.PlatformFeaturesAvailability.lambda$getVersionInfoFromKubernetes$5(PlatformFeaturesAvailability.java:150) ~[io.strimzi.operator-common-0.22.0.jar:0.22.0]
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:313) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76) ~[io.vertx.vertx-core-3.9.1.jar:3.9.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty.netty-common-4.1.60.Final.jar:4.1.60.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: okhttp3.internal.http2.ConnectionShutdownException
This could have been caused by any number of reasons (not necessarily the client release), but we ruled out the most obvious ones, for the following reasons:
- We had successfully upgraded the operator from 0.15 to 0.18 and did not experience any problem with the K8S client being unable to reach the K8S API;
- We did some manual tests/checks to ensure there was nothing preventing the client from reaching the K8S API;
- [most critical one]: we set HTTP2_DISABLE: "true" and the Strimzi operator got past the point where it used to fail. The pod actually comes up, whereas before it would crash-loop. (There are other errors in the logs, but they are related to the Kafka clusters and objects we have; nothing related to the issue reported here.)
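For context, a minimal sketch of why setting that environment variable takes effect, assuming fabric8's usual system-property-to-env-var lookup convention (a property name like "http2.disable" maps to the env var HTTP2_DISABLE); the helper below is hypothetical and only imitates that convention, it is not fabric8's actual code:

```java
import java.util.Locale;

public class Http2DisableLookup {
    // Hypothetical helper: resolves a boolean flag from a system property,
    // falling back to the env var derived from it (dots -> underscores,
    // upper-cased), e.g. "http2.disable" -> HTTP2_DISABLE.
    static boolean getBoolPropertyOrEnv(String propertyName, boolean defaultValue) {
        String v = System.getProperty(propertyName);
        if (v == null) {
            v = System.getenv(propertyName.replace('.', '_').toUpperCase(Locale.ROOT));
        }
        return v != null ? Boolean.parseBoolean(v) : defaultValue;
    }

    public static void main(String[] args) {
        // Simulate the operator environment setting the flag
        System.setProperty("http2.disable", "true");
        System.out.println("http2 disabled: "
                + getBoolPropertyOrEnv("http2.disable", false)); // prints "http2 disabled: true"
    }
}
```

With the flag resolved to true, the client would negotiate plain HTTP/1.1 instead of HTTP/2, which is consistent with the crash disappearing once the variable is set.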
Could this be a problem with the lib/Java version used by the K8S client? I ask because there were 2-3 previously reported issues (Java 8, if I'm not mistaken) with similar symptoms that were also "addressed" by this very same workaround.
If you need additional logs/information, please let us know. All we need to do to reproduce the problem is remove the HTTP2_DISABLE env variable mentioned above. We would just like some help on how to further troubleshoot this problem.
Issue Analytics: created 2 years ago; 7 comments (3 by maintainers)
@manusa I know what is going on. Check this out…
Conclusions:
For reference: I have opened this on Strimzi/Kafka: https://github.com/strimzi/strimzi-kafka-operator/issues/5044
According to https://github.com/square/okhttp/blob/998633be00d1b2952d068ea04b376fd83bc05c3f/okhttp/src/main/kotlin/okhttp3/ConnectionSpec.kt#L302-L310, these ciphers are forbidden when using TLS 1.2 and HTTP/2, so the server rejects them. Kubernetes-client defaults to MODERN_TLS; I think that to accommodate this use case it should use RESTRICTED_TLS, at least when using HTTP/2.
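To illustrate that conclusion, here is a small self-contained sketch. The suite lists below are illustrative subsets only, not okhttp's actual MODERN_TLS/RESTRICTED_TLS contents; the point is that RFC 7540 (HTTP/2) Appendix A black-lists, among others, TLS 1.2 CBC-mode cipher suites, and a stricter spec avoids offering them:

```java
import java.util.Arrays;
import java.util.List;

public class TlsSpecSketch {
    // Illustrative subsets (assumption: not okhttp's full lists). The
    // "modern" subset includes a CBC suite, which RFC 7540 Appendix A
    // forbids for HTTP/2; the "restricted" subset keeps only AEAD suites.
    static final List<String> MODERN_SUBSET = Arrays.asList(
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA");   // forbidden for HTTP/2
    static final List<String> RESTRICTED_SUBSET = Arrays.asList(
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256");

    // Crude HTTP/2-safety check: reject any offering of CBC-mode suites.
    static boolean http2Safe(List<String> suites) {
        return suites.stream().noneMatch(s -> s.contains("_CBC_"));
    }

    public static void main(String[] args) {
        System.out.println("MODERN subset http2-safe: "
                + http2Safe(MODERN_SUBSET));      // prints "false"
        System.out.println("RESTRICTED subset http2-safe: "
                + http2Safe(RESTRICTED_SUBSET));  // prints "true"
    }
}
```

This matches the observed behavior: with the default (more permissive) spec the HTTP/2 server shuts the connection down, while disabling HTTP/2 entirely (or using a restricted spec) sidesteps the forbidden-cipher check.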