[Solved] kubernetes kafka: External service for Kafka not working

Hi,
I tried to use a NodePort-type service to expose the Kafka service externally.
My service yml file is attached.

And here's the service description:
$ kubectl describe svc kafka -n kafka
Name: kafka
Namespace: kafka
Labels:
Selector: app=kafka
Type: NodePort
IP: 100.73.225.207
Port: 9092/TCP
NodePort: 32093/TCP
Endpoints: 10.44.0.10:9092,10.44.0.11:9092,10.44.0.12:9092
Session Affinity: None
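
(For reference, since the yml attachment doesn't show up here, a service matching the describe output above would look roughly like this; a reconstruction, not the original file:)

    apiVersion: v1
    kind: Service
    metadata:
      name: kafka
      namespace: kafka
    spec:
      type: NodePort
      selector:
        app: kafka
      ports:
      - port: 9092
        nodePort: 32093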

But when I tried to connect to the Kafka service from outside on port 32093, it didn't work:

$ ./bin/kafka-console-producer.sh --broker-list ${master-ip}:32093 --topic test2
abc
[2016-11-18 15:26:58,157] ERROR Error when sending message to topic test2 with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test2-0 due to 1512 ms has passed since batch creation plus linger time

I'm pretty sure the connection to the master IP is working, and that port 32093 is listening in the cluster.

It works for ZooKeeper, but I'm not sure why Kafka isn't working.

49 Answers

βœ”οΈAccepted Answer

Got it working πŸ˜„

Changed the command: section to:

          --override log.retention.hours=-1
          --override log.dirs=/var/lib/kafka/data/topics
          --override broker.id=${HOSTNAME##*-}
          --override listener.security.protocol.map=INTERNAL_PLAINTEXT:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT
          --override advertised.listeners="INTERNAL_PLAINTEXT://${HOSTNAME}.broker.kafka.svc.cluster.local:9092,EXTERNAL_PLAINTEXT://$(eval wget -t3 -T2 -qO-  http://169.254.169.254/latest/meta-data/public-hostname):9093"
          --override listeners=INTERNAL_PLAINTEXT://0.0.0.0:9092,EXTERNAL_PLAINTEXT://0.0.0.0:9093
          --override inter.broker.listener.name=INTERNAL_PLAINTEXT
          --override auto.create.topics.enable=true # Just our internal config
          --override auto.leader.rebalance.enable=true # Just our internal config
          --override num.partitions=3 # Just our internal config
          --override default.replication.factor=3 # Just our internal config
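
For context, those --override flags are arguments to the Kafka start script in the broker container's command: in the StatefulSet. A trimmed sketch of where they sit (the container name and start script here are assumptions based on a typical kubernetes-kafka setup, not copied from my manifest):

        containers:
        - name: broker
          command:
          - sh
          - -c
          - >
            ./bin/kafka-server-start.sh config/server.properties
            --override broker.id=${HOSTNAME##*-}
            --override listeners=INTERNAL_PLAINTEXT://0.0.0.0:9092,EXTERNAL_PLAINTEXT://0.0.0.0:9093
            --override inter.broker.listener.name=INTERNAL_PLAINTEXT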

You then need to change the ports: to this:

        ports:
        - containerPort: 9092
        - containerPort: 9093
          hostPort: 9093

Finally, you need to open up security groups if needed. You can then connect using ec2-blah.us-blah.com:9093 without dropping any messages:

> Messages: 50Mb/sec @ 21901 msgs/sec. | error rate 0.00%
> Batches: 43.80 batches/sec. | 979.979ms p99 | 545.219ms HMean | 341.991ms Min | 1.080252s Max
341.991ms - 415.817ms --
415.817ms - 489.643ms --------------------------------------------------
489.643ms - 563.469ms -------------------------------------
563.469ms - 637.295ms ------------------------------
637.295ms - 711.121ms ----------
711.121ms - 784.947ms ----------
784.947ms - 858.774ms ---
 858.774ms - 932.6ms -
 932.6ms - 1.006426s -
1.006426s - 1.080252s -
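
To sanity-check from outside the cluster, the same console producer from the question, pointed at the external listener, should now go through (using the placeholder hostname from above):

    $ ./bin/kafka-console-producer.sh --broker-list ec2-blah.us-blah.com:9093 --topic test2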

So this is just for AWS; for anything else, change the EXTERNAL_PLAINTEXT://$(eval wget -t3 -T2 -qO- http://169.254.169.254/latest/meta-data/public-hostname):9093 part to whatever gives you a routable hostname.
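
For example, on clusters where node IPs are routable from your clients, one option (an assumption on my side, not something I've tested here) is to inject the node's IP via the Kubernetes downward API and advertise that instead:

        env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP  # IP of the node this pod landed on

and then use --override advertised.listeners=INTERNAL_PLAINTEXT://${HOSTNAME}.broker.kafka.svc.cluster.local:9092,EXTERNAL_PLAINTEXT://${NODE_IP}:9093 in the command above.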

Thanks,

Ben

Other Answers:

I was able to get around this using broker redirection in no-kafka. What happens is that when a client connects to a broker, the broker sends back a list of known brokers, and the client replaces whatever endpoint it used to bootstrap with those addresses. Those addresses are the Kubernetes cluster endpoints, though, which aren't reachable from where the consumer is connecting. The broker redirection feature lets you map each internal endpoint to an external one.
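
You can see this in action by asking the cluster for its metadata, for example with kafkacat if you have it installed; the broker list it prints is what the client will actually dial after bootstrapping, no matter which address you connected to first:

    $ kafkacat -L -b ${master-ip}:32093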
