Transport Encryption Methods:
SASL/SSL (Solid Teal/Green Lines):
- Used for securing communication between producers/consumers and Kafka brokers.
- SASL (Simple Authentication and Security Layer): Authenticates clients (producers/consumers) to brokers, using SCRAM.
- SSL/TLS (Secure Sockets Layer/Transport Layer Security): Encrypts the data in transit, ensuring confidentiality and integrity during transmission.
Digest-MD5 (Dashed Yellow Lines):
- Secures communication between Kafka brokers and the Zookeeper cluster.
- Digest-MD5: A challenge-response authentication mechanism; it avoids sending the password in cleartext, but does not encrypt the traffic itself.
Notes:
- While functional, Digest-MD5 is an older algorithm. We opted for it to reduce complexity and because the Zookeeper nodes had issues connecting to the brokers over SSL/TLS.
- We need to test and switch over to the KRaft protocol, which removes Zookeeper altogether.
- Add IP ACLs for Zookeeper connections using firewalld to limit replication traffic to the cluster nodes.
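The firewalld ACLs mentioned above could be sketched as follows. This is an assumption-laden dry run (it assumes firewalld with the default zone, and uses the node IPs from the Zookeeper ensemble configured later in this guide): the rules are written to a file for review rather than applied directly.

```shell
# Generate (but do not apply) firewalld rich rules restricting the Zookeeper
# ports (2181 client, 2888/3888 quorum) to the three cluster nodes.
# Review zk-firewall-rules.sh, then run it with sh as root on each node.
: > zk-firewall-rules.sh
for peer in 192.168.166.110 192.168.166.111 192.168.166.112; do
  for port in 2181 2888 3888; do
    echo "firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source address=\"$peer/32\" port port=\"$port\" protocol=\"tcp\" accept'" >> zk-firewall-rules.sh
  done
done
echo "firewall-cmd --reload" >> zk-firewall-rules.sh
cat zk-firewall-rules.sh
```

Adjust the peer IPs and zone handling for your network before applying.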
PKI and Certificate Signing
CA certificate for the local PKI:
- We need to share this PEM file (without the private key) with the customer for authentication.
- Internal applications must use the CA file for authentication. Refer to the configuration example documents.
# Generate CA Key
openssl genrsa -out multicastbits_CA.key 4096
# Generate CA Certificate
openssl req -x509 -new -nodes -key multicastbits_CA.key -sha256 -days 3650 -out multicastbits_CA.crt -subj "/CN=multicastbits_CA"
Kafka Broker Certificates
# For Node1 - Repeat for other nodes
openssl req -new -nodes -out node1.csr -newkey rsa:2048 -keyout node1.key -subj "/CN=kafka01.multicastbits.com"
openssl x509 -req -CA multicastbits_CA.crt -CAkey multicastbits_CA.key -CAcreateserial -in node1.csr -out node1.crt -days 3650 -sha256
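The broker configuration later in this guide references JKS keystores and truststores, which can be built from these PEM files. Here is a hedged sketch: it uses throwaway `demo_` filenames and placeholder passwords so it can run standalone without clobbering your real files; substitute the actual `multicastbits_CA`/`node1` files and your own store names (e.g. `kafkanode1.keystore.jks`) when doing this for real.

```shell
# Throwaway CA + node cert so this sketch is self-contained; in practice use
# the files generated in the steps above.
openssl genrsa -out demo_CA.key 2048
openssl req -x509 -new -nodes -key demo_CA.key -sha256 -days 1 -out demo_CA.crt -subj "/CN=demo_CA"
openssl req -new -nodes -out demo_node1.csr -newkey rsa:2048 -keyout demo_node1.key -subj "/CN=kafka01.multicastbits.com"
openssl x509 -req -CA demo_CA.crt -CAkey demo_CA.key -CAcreateserial -in demo_node1.csr -out demo_node1.crt -days 1 -sha256

# 1. Confirm the node cert chains back to the CA (prints "demo_node1.crt: OK")
openssl verify -CAfile demo_CA.crt demo_node1.crt

# 2. keytool cannot import a bare private key, so bundle key+cert as PKCS12 first
openssl pkcs12 -export -in demo_node1.crt -inkey demo_node1.key \
  -certfile demo_CA.crt -name kafka01 -out demo_node1.p12 -passout pass:keystorePassword

# 3. Build the keystore and truststore (skipped if no JDK keytool on PATH)
if command -v keytool >/dev/null; then
  keytool -importkeystore -srckeystore demo_node1.p12 -srcstoretype PKCS12 \
    -srcstorepass keystorePassword -destkeystore demo.keystore.jks \
    -deststorepass keystorePassword -noprompt
  keytool -importcert -file demo_CA.crt -alias demo_CA \
    -keystore demo.truststore.jks -storepass truststorePassword -noprompt
fi
```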
Create the kafka and zookeeper users
⚠️ Important: Do not skip this step. We need these users to set up authentication in the JAAS configuration.
Before configuring the cluster with SSL and SASL, let’s start up the cluster without authentication and SSL to create the users. This allows us to:
- Verify basic dependencies and confirm the Zookeeper and Kafka clusters come up without any issues (“make sure the car starts”)
- Create necessary user accounts for SCRAM
- Test for any inter-node communication issues (blocked ports 9092, 9093, 2181, etc.)
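The port checks above can be scripted. A minimal sketch (the IPs match the Zookeeper ensemble used in this guide; adjust for your network) that writes the checks to a reviewable file to run from each node:

```shell
# Generate connectivity checks for the cluster ports between the three nodes.
# Review check-ports.sh, then run it from each node with sh.
: > check-ports.sh
for host in 192.168.166.110 192.168.166.111 192.168.166.112; do
  for port in 2181 2888 3888 9092 9093; do
    echo "nc -z -w 2 $host $port && echo '$host:$port open' || echo '$host:$port blocked'" >> check-ports.sh
  done
done
wc -l check-ports.sh
```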
Here’s how to set up this initial configuration:
Zookeeper Configuration (No SSL or Auth)
Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
# Zookeeper Configuration without Auth
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888
Kafka Broker Configuration (No SSL or Auth)
Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
# Kafka Broker Configuration without Auth/SSL
broker.id=1
listeners=PLAINTEXT://kafka01.multicastbits.com:9092
advertised.listeners=PLAINTEXT://kafka01.multicastbits.com:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
Open a new shell on the server and start Zookeeper:
/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
Open a new shell to start Kafka:
/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
Create the users:
Open a new shell and run the following commands:
kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9092 --alter --add-config 'SCRAM-SHA-512=[password=zookeeper-password]' --entity-type users --entity-name multicastbitszk
kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9092 --alter --add-config 'SCRAM-SHA-512=[password=kafkaadmin-password]' --entity-type users --entity-name multicastbitskafkaadmin
After the users are created without errors, press Ctrl+C to shut down the services we started earlier.
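Before shutting the services down, it can be worth confirming the SCRAM credentials were actually stored. A hedged sketch, printed here as a dry run so it is safe to paste anywhere; run the printed command on a broker node (the admin user name matches the JAAS files later in this guide):

```shell
# Describe the SCRAM credentials for a user (dry run: command is printed,
# not executed; remove the echo/variable to run it directly).
verify_cmd="kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9092 \
--describe --entity-type users --entity-name multicastbitskafkaadmin"
echo "$verify_cmd"
```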
SASL_SSL configuration with SCRAM
Zookeeper configuration Notes
- Zookeeper is configured with SASL/Digest-MD5 due to the SSL issues we faced during the initial setup
- Zookeeper traffic is isolated within the broker nodes to maintain security
Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/zookeeper.properties
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
Make sure the /Data_Disk/zookeeper/myid file on each node matches that node's Zookeeper server ID:
cat /Data_Disk/zookeeper/myid
1
Jaas configuration
Create the JaaS configuration for Zookeeper authentication; it must follow this syntax:
/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_multicastbitszk="zookeeper-password";
};
KAFKA_OPTS
The KAFKA_OPTS Java variable needs to be passed when Zookeeper is started, pointing to the correct JaaS file:
export KAFKA_OPTS="-Djava.security.auth.login.config=<path to zookeeper-jaas.conf>"
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"
There are a few ways to handle this: you can add a script under profile.d or use a custom Zookeeper launch script for the systemd service.
Systemd service
Create the launch shell script for Zookeeper
/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"
#Start the zookeeper service
/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
After you save the file:
chmod +x /opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
sudo chown -R multicastbitskafka:multicastbitskafka /opt/kafka/kafka_2.13-3.8.0
Create the systemd service file
/etc/systemd/system/zookeeper.service
[Unit]
Description=Apache Zookeeper Service
After=network.target
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
After the file is saved, start the service:
sudo systemctl daemon-reload
sudo systemctl enable zookeeper
sudo systemctl start zookeeper
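Once the service is up, a couple of quick health checks help. A hedged dry run (the IP is illustrative, and note that `stat` is a Zookeeper four-letter command that on ZK 3.5+ must be allowed via `4lw.commands.whitelist=stat,ruok` in the config):

```shell
# Post-start sanity checks, printed as a dry run; remove the variables/echo
# to run them on a broker node.
svc_check="systemctl is-active zookeeper"
zk_check="echo stat | nc 192.168.166.110 2181"
printf '%s\n' "$svc_check" "$zk_check"
```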
Kafka Broker configuration Notes
Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/server.properties
broker.id=1
listeners=SASL_SSL://kafka01.multicastbits.com:9093
advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093
listener.security.protocol.map=SASL_SSL:SASL_SSL
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks
ssl.keystore.password=keystorePassword
ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks
ssl.truststore.password=truststorePassword
#SASL/SCRAM Authentication
sasl.enabled.mechanisms=SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
security.inter.broker.protocol=SASL_SSL
#zookeeper
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
zookeeper.sasl.client=true
zookeeper.sasl.clientconfig=ZookeeperClient
Zookeeper connect options
Define the zookeeper servers the broker will connect to
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
Enable SASL
zookeeper.sasl.client=true
Tell the broker to use the credentials defined in the ZookeeperClient section of the JaaS file used by the Kafka service
zookeeper.sasl.clientconfig=ZookeeperClient
Broker and listener configuration
Define the broker id
broker.id=1
Define the server's listener name and port
listeners=SASL_SSL://kafka01.multicastbits.com:9093
Define the server's advertised listener name and port
advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093
Map the listener to the SASL_SSL security protocol
listener.security.protocol.map=SASL_SSL:SASL_SSL
Enable ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
Define the Java Keystores
ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks
ssl.keystore.password=keystorePassword
ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks
ssl.truststore.password=truststorePassword
Jaas configuration
/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="multicastbitskafkaadmin"
password="kafkaadmin-password";
};
ZookeeperClient {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="multicastbitszk"
password="zookeeper-password";
};
SASL and SCRAM configuration Notes
Enable SASL SCRAM for authentication
org.apache.kafka.common.security.scram.ScramLoginModule required
Use Digest-MD5 for Zookeeper authentication
org.apache.zookeeper.server.auth.DigestLoginModule required
KAFKA_OPTS
The KAFKA_OPTS Java variable must be passed when the Kafka service is started, pointing to the correct JaaS file:
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
Systemd service
Create the launch shell script for kafka
/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh
#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
#Start the kafka service
/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
Create the systemd service
/etc/systemd/system/kafka.service
[Unit]
Description=Apache Kafka Broker Service
After=network.target zookeeper.service
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Connect, authenticate, and use the Kafka CLI tools
Requirements
- multicastbitsadmin.keystore.jks and multicastbitsadmin.truststore.jks
- WSL2 (or any Linux environment) with java-11-openjdk-devel, wget, and nano
- Kafka 3.8 folder extracted locally
Setup your environment
- Set up WSL2 (you can use any Linux environment with JDK 17 or 11)
- Install dependencies:
dnf install -y wget nano java-11-openjdk-devel
Download Kafka and extract it (I'm going to extract it to the home directory):
# 1. Download Kafka (Choose a version compatible with your server)
wget https://dlcdn.apache.org/kafka/3.8.0/kafka_2.13-3.8.0.tgz
# 2. Extract
tar xzf kafka_2.13-3.8.0.tgz
Copy the JKS files to ~/ (generate them from the CA, or reuse the ones from one of the nodes):
cp multicastbitsadmin.keystore.jks ~/
cp multicastbitsadmin.truststore.jks ~/
Create your admin client properties file (change the paths to fit your setup):
nano ~/kafka-adminclient.properties
# Security protocol and SASL/SSL configuration
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
# SSL Configuration
ssl.keystore.location=/opt/kafka/secrets/multicastbitsadmin.keystore.jks
ssl.keystore.password=keystorepw
ssl.truststore.location=/opt/kafka/secrets/multicastbitsadmin.truststore.jks
ssl.truststore.password=truststorepw
# SASL Configuration
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required
username="#youradminUser#"
password="#your-admin-PW#";
Create the JaaS file for the admin client
nano ~/kafka_client_jaas.conf
Some Kafka CLI tools still look for the JAAS file via the KAFKA_OPTS environment variable.
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="#youradminUser#"
password="#your-admin-PW#";
};
Add the Kafka environment variables to ~/.bashrc, then reload it:
export KAFKA_HOME=$HOME/kafka_2.13-3.8.0
export PATH=$PATH:$KAFKA_HOME/bin
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
export KAFKA_OPTS="-Djava.security.auth.login.config=$HOME/kafka_client_jaas.conf"
source ~/.bashrc
Kafka CLI Usage Examples
Create a user
kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9093 --alter --add-config 'SCRAM-SHA-512=[password=#password#]' --entity-type users --entity-name %username% --command-config ~/kafka-adminclient.properties
Create a topic
kafka-topics.sh --bootstrap-server kafka01.multicastbits.com:9093 --create --topic %topicname% --partitions 10 --replication-factor 3 --command-config ~/kafka-adminclient.properties
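After creating a topic, an end-to-end smoke test with the console tools and the same admin client config is a quick way to prove SASL_SSL works. Printed here as a dry run (the topic placeholder follows the document's `%topicname%` convention; run the printed commands against your cluster):

```shell
# Produce one message, then consume it back over SASL_SSL (dry run).
produce="echo 'hello' | kafka-console-producer.sh --bootstrap-server kafka01.multicastbits.com:9093 \
--topic %topicname% --producer.config ~/kafka-adminclient.properties"
consume="kafka-console-consumer.sh --bootstrap-server kafka01.multicastbits.com:9093 \
--topic %topicname% --from-beginning --max-messages 1 --consumer.config ~/kafka-adminclient.properties"
printf '%s\n' "$produce" "$consume"
```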
Create ACLs
External customer user with READ and DESCRIBE privileges on a single topic:
kafka-acls.sh --bootstrap-server kafka01.multicastbits.com:9093 \
  --command-config ~/kafka-adminclient.properties \
  --add --allow-principal User:customer-user01 \
  --operation READ --operation DESCRIBE --topic Customer_topic
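To confirm an ACL was applied, the ACLs on the topic can be listed. Same hedged dry-run pattern as above; run the printed command with your admin client config:

```shell
# List the ACLs attached to the topic (dry run; remove the variable/echo
# to execute against the cluster).
list_cmd="kafka-acls.sh --bootstrap-server kafka01.multicastbits.com:9093 \
--command-config ~/kafka-adminclient.properties --list --topic Customer_topic"
echo "$list_cmd"
```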
Troubleshooting
Here are some common issues you might encounter when setting up and using Kafka with SASL_SCRAM authentication, along with their solutions:
1. Connection refused errors
Issue: Clients unable to connect to Kafka brokers.
Solution:
- Verify that the Kafka brokers are running and listening on the correct ports.
- Check firewall settings to ensure the Kafka ports are open and accessible.
- Confirm that the bootstrap server addresses in client configurations are correct.
2. Authentication failures
Issue: Clients fail to authenticate with Kafka brokers.
Solution:
- Double-check username and password in the JAAS configuration file.
- Ensure the SCRAM credentials are properly set up on the Kafka brokers.
- Verify that the correct SASL mechanism (SCRAM-SHA-512) is specified in client configurations.
3. SSL/TLS certificate issues
Issue: SSL handshake failures or certificate validation errors.
Solution:
- Confirm that the keystore and truststore files are correctly referenced in configurations.
- Verify that the certificates in the truststore are up-to-date and not expired.
- Ensure that the hostname in the certificate matches the broker’s advertised listener.
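For the certificate checks above, `openssl s_client` shows exactly what a broker presents during the TLS handshake. A dry-run sketch (hostname illustrative; run it from a machine that can reach port 9093):

```shell
# Inspect the broker's served certificate: subject and validity dates
# (dry run; remove the variable/echo to execute).
tls_check="openssl s_client -connect kafka01.multicastbits.com:9093 \
-servername kafka01.multicastbits.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates"
echo "$tls_check"
```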
4. Zookeeper connection issues
Issue: Kafka brokers unable to connect to Zookeeper ensemble.
Solution:
- Verify Zookeeper connection string in Kafka broker configurations.
- Ensure Zookeeper servers are running and accessible, and that the ports are open.
- Check the Zookeeper client authentication settings in the JAAS configuration file.