Solution – RKE Cluster MetalLB provides Services with IP Addresses but doesn’t ARP for the address

I ran into the same issue detailed here while working with an RKE cluster:

https://github.com/metallb/metallb/issues/1154

After looking around for a few hours and digging into the logs, I figured out the issue. Hopefully this helps someone else out there in the same situation save some time.

Make sure IPVS mode is enabled in the cluster configuration.

If you are using:

RKE2 – edit the cluster.yaml file

RKE1 – edit the cluster configuration from the Rancher UI > Cluster Management > select the cluster > Edit Configuration > Edit as YAML

Locate the services field under rancher_kubernetes_engine_config and add the following options to enable IPVS

    kubeproxy:
      extra_args:
        ipvs-scheduler: lc
        proxy-mode: ipvs

https://www.suse.com/support/kb/doc/?id=000020035
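If you are on RKE2 nodes you manage directly, the equivalent change (a sketch, not verbatim from the SUSE KB: on RKE2 the kube-proxy flags typically go into /etc/rancher/rke2/config.yaml on the server nodes via kube-proxy-arg) looks something like this:

kube-proxy-arg:
  - "proxy-mode=ipvs"
  - "ipvs-scheduler=lc"

Restart the rke2-server service afterwards for the change to take effect.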

Default

After changes

Make sure the kernel modules are loaded on the control plane nodes

Background

Example Rancher – RKE1 cluster

sudo docker ps | grep proxy # find the container ID for kube-proxy

sudo docker logs ####containerID###

0313 21:44:08.315888  108645 feature_gate.go:245] feature gates: &{map[]}
I0313 21:44:08.346872  108645 proxier.go:652] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack_ipv4"
E0313 21:44:08.347024  108645 server_others.go:107] "Can't use the IPVS proxier" err="IPVS proxier will not be used because the following required kernel modules are not loaded: [ip_vs_lc]"

Kube-proxy is trying to load the required kernel modules and failing, so it cannot enable IPVS.

Let's enable the kernel modules:

sudo nano /etc/modules-load.d/ipvs.conf

ip_vs_lc
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
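The modules-load.d entry only takes effect at boot. If you would rather not wait for the reboot mentioned below, you can also load the modules immediately (note: on kernels 4.19 and newer the nf_conntrack_ipv4 module was merged into nf_conntrack, hence the fallback):

for m in ip_vs ip_vs_lc ip_vs_rr ip_vs_wrr ip_vs_sh; do sudo modprobe $m; done
sudo modprobe nf_conntrack_ipv4 || sudo modprobe nf_conntrack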

Install ipvsadm to confirm the changes

sudo dnf install ipvsadm -y

Reboot the VM or bare-metal server.

Use sudo ipvsadm to confirm IPVS is enabled:

sudo ipvsadm

Testing

kubectl get svc -n #namespace | grep load
arping -I ens192 192.168.94.140
ARPING 192.168.94.140 from 192.168.94.65 ens192
Unicast reply from 192.168.94.140 [00:50:56:96:E3:1D] 1.117ms
Unicast reply from 192.168.94.140 [00:50:56:96:E3:1D] 0.737ms
Unicast reply from 192.168.94.140 [00:50:56:96:E3:1D] 0.845ms
Unicast reply from 192.168.94.140 [00:50:56:96:E3:1D] 0.668ms
Sent 4 probes (1 broadcast(s))
Received 4 response(s)

If a deployment has a Service of type LoadBalancer, you should now be able to reach it, as long as the container behind the Service is responding.
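For example, a quick check against the LoadBalancer IP from the arping test above (assuming the Service exposes HTTP on port 80):

curl -v http://192.168.94.140/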

Helpful links

https://metallb.universe.tf/configuration/troubleshooting/

https://github.com/metallb/metallb/issues/1154

https://github.com/rancher/rke2/issues/3710


Guide – Secure UniFi Cloud Controller on AWS Lightsail signed with a Let's Encrypt SSL certificate

I found a way to avoid Cloud Key issues and set up a ZTP solution for UniFi hardware, so I can direct-ship equipment to the site and provision it securely over the internet without having to stand up an L2L tunnel.

Alright, let's get started…

This guide is applicable for any Ubuntu based install, but I’m going to utilize Amazon Lightsail for the demo, since at the time of writing, it’s the cheapest option I can find with enough compute resources and a static IP included.

2 GB RAM, 1 vCPU, 60 GB SSD

OpEx (recurring cost) – $10 per month – as of February 2019

Guide

Dry Run

1. Set up Lightsail instance
2. Create and attach static IP
3. Open necessary ports
4. Set up UniFi packages
5. Set up SSL using certbot and Let's Encrypt
6. Add the certs to the UniFi controller
7. Set up a cron job for SSL auto-renewal
8. Adopting UniFi devices

1. Set up LightSail instance

Login to – https://lightsail.aws.amazon.com

Spin up a Lightsail instance:

Set a name for the instance and provision it.

2. Create and attach static IP

Click on the instance name and click on the networking tab:

Click “Create Static IP”:

3. Open necessary ports

Protocol   Port    Usage
TCP        80      Inform URL used for device adoption.
TCP        443     Cloud Access service.
UDP        3478    STUN.
TCP        8080    Device and controller communication.
TCP        8443    Controller GUI/API (web browser).
TCP        8880    HTTP portal redirection.
TCP        8843    HTTPS portal redirection.

You can disable or lock down these ports as needed using iptables, depending on your security posture.

Post spotlight-

https://www.lammertbies.nl/comm/info/iptables.html#intr

4. Set up UniFi packages

https://help.ubnt.com/hc/en-us/articles/209376117-UniFi-Install-a-UniFi-Cloud-Controller-on-Amazon-Web-Services#7

Add the Ubiquiti repository to /etc/apt/sources.list:

echo "deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti" | sudo tee -a /etc/apt/sources.list

Add the Ubiquiti GPG key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50

Update the server’s repository information:

sudo apt-get update

Install the Java 8 runtime

You need Java Runtime 8 to run the UniFi Controller.

Add Oracle's PPA (Personal Package Archive) to your list of sources so that Ubuntu knows where to check for updates. Use the add-apt-repository command for that.

sudo add-apt-repository ppa:webupd8team/java -y
sudo apt install java-common oracle-java8-installer

Update your package repository by issuing the following command:

sudo apt-get update

The oracle-java8-set-default package will automatically set Oracle JDK8 as default. Once the installation is complete we can check Java version.

java -version

java version "1.8.0_191"

MongoDB

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6

echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
sudo apt update

Retrieve the latest package information and install the HTTPS transport package:

sudo apt update

sudo apt-get install apt-transport-https

Install UniFi Controller packages.

sudo apt install unifi

You should be able to access the web interface and go through the initial setup wizard:

https://yourIPaddress:8443

5. Set up SSL using certbot and letsencrypt

Let's get that green lock up in here, shall we?

So, a few things to note here… UniFi doesn't have a straightforward way to import certificates; you have to use the Java keystore commands to import the cert. Fortunately, there is a very handy script built by Steve Jenkins that makes this super easy.

First, we need to request a cert and have it signed by the Let's Encrypt certificate authority.

Let's start by adding the repository and installing the EFF certbot package – link

sudo apt-get update
sudo apt-get install software-properties-common 
sudo add-apt-repository universe 
sudo add-apt-repository ppa:certbot/certbot 
sudo apt-get update 
sudo apt-get install certbot

5.1 Update/add your DNS record and make sure it's propagated (this is important)

Note – the DNS name should point to the static IP we attached to our Lightsail instance.
I'm going to use the following A record for this example:

unifyctrl01.multicastbits.com

Ping from the controller and make sure the server can resolve it.

ping unifyctrl01.multicastbits.com


You won't see any echo replies because ICMP is not allowed in the AWS firewall rules – leave it as is; we just need the server to see the A record resolving to the IP.

5.2 Request the certificate

Issue the following command to start certbot in certonly mode

sudo certbot certonly
usage: 
  certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
certificate. The most common SUBCOMMANDS and flags are:

obtain, install, and renew certificates:
    (default) run   Obtain & install a certificate in your current webserver
    certonly        Obtain or renew a certificate, but do not install it
    renew           Renew all previously obtained certificates that are near expiry
    enhance         Add security enhancements to your existing configuration
   -d DOMAINS       Comma-separated list of domains to obtain a certificate for

 

5.3 Follow the wizard

Select the first option #1 (Spin up a temporary web server)

Enter all the information requested for the cert request.

This will save the certificate and the private key generated to the following directory:

/etc/letsencrypt/live/DNSName/

All you need to worry about are these files:

  • cert.pem
  • fullchain.pem
  • privkey.pem
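To sanity-check what was issued before importing it, you can inspect the cert with openssl (using the A record from this example):

sudo openssl x509 -in /etc/letsencrypt/live/unifyctrl01.multicastbits.com/cert.pem -noout -subject -issuer -dates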

6. Import the certificate to the UniFi controller

You can do this manually using keytool to import the cert:

https://crosstalksolutions.com/secure-unifi-controller/
https://docs.oracle.com/javase/tutorial/security/toolsign/rstep2.html

But for this we are going to use the handy SSL import script made by Steve Jenkins.

6.1  Download Steve Jenkins UniFi SSL Import Script

Copy the unifi_ssl_import.sh script to your server

wget https://raw.githubusercontent.com/stevejenkins/unifi-linux-utils/master/unifi_ssl_import.sh

6.2 Modify Script

Install Nano if you don’t have it (it’s better than VI in my opinion. Some disagree, but hey, I’m entitled to my opinion)

sudo apt-get install nano
nano unifi_ssl_import.sh

Change hostname.example.com to the actual hostname you wish to use. In my case, I'm using unifyctrl01.multicastbits.com:

UNIFI_HOSTNAME=your_DNS_Record

Since we are using Ubuntu, make sure the following three lines for Fedora/RedHat/CentOS stay commented out:

#UNIFI_DIR=/opt/UniFi
#JAVA_DIR=${UNIFI_DIR}
#KEYSTORE=${UNIFI_DIR}/data/keystore 

Uncomment the following three lines for Debian/Ubuntu:

UNIFI_DIR=/var/lib/unifi 
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

Since we are using Let's Encrypt, set:

LE_MODE=yes

Here's what I used for this demo:

#!/usr/bin/env bash

# unifi_ssl_import.sh
# UniFi Controller SSL Certificate Import Script for Unix/Linux Systems
# by Steve Jenkins <http://www.stevejenkins.com/>
# Part of https://github.com/stevejenkins/ubnt-linux-utils/
# Incorporates ideas from https://source.sosdg.org/brielle/lets-encrypt-scripts
# Version 2.8
# Last Updated Jan 13, 2017

# CONFIGURATION OPTIONS
UNIFI_HOSTNAME=unifyctrl01.multicastbits.com
UNIFI_SERVICE=unifi

# Uncomment following three lines for Fedora/RedHat/CentOS
#UNIFI_DIR=/opt/UniFi
#JAVA_DIR=${UNIFI_DIR}
#KEYSTORE=${UNIFI_DIR}/data/keystore

# Uncomment following three lines for Debian/Ubuntu
UNIFI_DIR=/var/lib/unifi
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

# Uncomment following three lines for CloudKey
#UNIFI_DIR=/var/lib/unifi
#JAVA_DIR=/usr/lib/unifi
#KEYSTORE=${JAVA_DIR}/data/keystore

# FOR LET'S ENCRYPT SSL CERTIFICATES ONLY
# Generate your Let's Encrtypt key & cert with certbot before running this script
LE_MODE=yes
LE_LIVE_DIR=/etc/letsencrypt/live

# THE FOLLOWING OPTIONS NOT REQUIRED IF LE_MODE IS ENABLED
PRIV_KEY=/etc/ssl/private/hostname.example.com.key
SIGNED_CRT=/etc/ssl/certs/hostname.example.com.crt
CHAIN_FILE=/etc/ssl/certs/startssl-chain.crt

#rest of the script Omitted

6.3 Make script executable:
chmod a+x unifi_ssl_import.sh
6.4 Run script:
sudo ./unifi_ssl_import.sh

This script will:

  • Back up the old keystore file (very handy; something I always forget to do)
  • Update the relevant keystore with the LE cert
  • Restart the services to apply the new cert

7. Set up automatic certificate renewal

Let's Encrypt certs expire every 3 months; you can renew easily by running:

certbot renew

This will reuse the existing config you used to generate the cert and renew it.

Then run the SSL import script again to update the controller cert.

You can automate this with a cron job.

Copy the modified import Script you used in Step 6 to “/bin/certupdate/unifi_ssl_import.sh”

sudo mkdir /bin/certupdate/
cp /home/user/unifi_ssl_import.sh /bin/certupdate/unifi_ssl_import.sh

Switch to root, edit root's crontab, and add the following lines (note that crontab -e entries take no user field, and day-of-month 1 is used because September and November have no 31st):

sudo su
crontab -e
0 1 1 1,3,5,7,9,11 * certbot renew
15 1 1 1,3,5,7,9,11 * /bin/certupdate/unifi_ssl_import.sh

Save and exit nano by doing CTRL+X followed by Y. 

Check the crontab for root and confirm:

crontab -l

At 01:00 on day-of-month 1 in January, March, May, July, September, and November the first entry will attempt to renew the cert.

At 01:15 on the same days the second entry will update the keystore with the new cert.
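Alternatively, newer certbot releases can chain the import for you with a deploy hook, which runs only after a successful renewal (a sketch using the script path from above):

sudo certbot renew --deploy-hook /bin/certupdate/unifi_ssl_import.sh

With that in place, a single cron entry (or the certbot systemd timer, if your distro ships one) is enough.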

 

Useful links –

https://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/

https://crontab.guru/#

8. Adopting UniFi devices to the new Controller with SSH or other L3 adoption methods

If you can SSH into the AP, it’s possible to do L3-adoption via CLI command:

1. Make sure the AP is running the same firmware as the controller. If it is not, see this guide: UniFi – Changing the Firmware of a UniFi Device.

2. Make sure the AP is in factory default state. If it’s not, do:

syswrapper.sh restore-default

3. SSH into the device and type the following and hit enter:

set-inform http://ip-of-controller:8080/inform

4. After issuing the set-inform, the UniFi device will show up for adoption. Once you click adopt, the device will appear to go offline.

5. Once the device goes offline, issue the  set-inform  command from step 3 again. This will permanently save the inform address, and the device will start provisioning.

https://help.ubnt.com/hc/en-us/articles/204909754-UniFi-Device-Adoption-Methods-for-Remote-UniFi-Controllers

Managing the UniFi controller services

# to stop the controller
$ sudo service unifi stop

# to start the controller
$ sudo service unifi start

# to restart the controller
$ sudo service unifi restart

# to view the controller's current status
$ sudo service unifi status

Troubleshooting issues

cat /var/log/unifi/server.log

Go through the system logs and Google the issue; the best part about Ubiquiti gear is the strong community support.

 



Deploying User Customizations & Office Suite Settings for M$ Office via Group Policy

Hello internetzzz

As an Administrator, you might run into situations that require you to deploy UI customizations such as a customized Ribbon, Quick Access Toolbar, etc. for Office applications on user computers, or in my case, terminal servers.

Here is a quick and dirty guide on how to do this via Group Policy.

For instance, let's say we have to deploy a button that launches a 3rd-party productivity program within Outlook and MS Word.

First off, make the necessary changes to Outlook or Word on a client PC running MS Office.

To customize the Ribbon

  • On the File tab, click Options, and then click Customize Ribbon to open the Ribbon customization dialog.

To customize the Quick Access Toolbar

  • On the File tab, click Options, and then click Quick Access Toolbar to open the Quick Access Toolbar customization dialog.

You can also export your Ribbon and Quick Access Toolbar customizations into a file.

 

When we make changes to the default Ribbon, these user customizations are saved as .officeUI files under:

%localappdata%\Microsoft\Office

The file names differ according to the Office program and the portion of the Ribbon UI you customized.

Application – Ribbon/UI portion customized – .officeUI file name
Outlook 2010 Outlook Explorer olkexplorer.officeUI
Outlook 2010 Contact olkaddritem.officeUI
Outlook 2010 Appointment/Meeting (organizer on compose, organizer after compose, attendee) olkapptitem.officeUI
Outlook 2010 Contact Group (formerly known as Distribution List) olkdlstitem.officeUI
Outlook 2010 Journal Item olklogitem.officeUI
Outlook 2010 Mail Compose olkmailitem.officeUI
Outlook 2010 Mail Read olkmailread.officeUI
Outlook 2010 Multimedia Message Compose olkmmsedit.officeUI
Outlook 2010 Multimedia Message Read olkmmsread.officeUI
Outlook 2010 Received Meeting Request olkmreqread.officeUI
Outlook 2010 Forward Meeting Request olkmreqsend.officeUI
Outlook 2010 Post Item Compose olkpostitem.officeUI
Outlook 2010 Post Item Read olkpostread.officeUI
Outlook 2010 NDR olkreportitem.officeUI
Outlook 2010 Send Again Item olkresenditem.officeUI
Outlook 2010 Counter Response to a Meeting Request olkrespcounter.officeUI
Outlook 2010 Received Meeting Response olkresponseread.officeUI
Outlook 2010 Edit Meeting Response olkresponsesend.officeUI
Outlook 2010 RSS Item olkrssitem.officeUI
Outlook 2010 Sharing Item Compose olkshareitem.officeUI
Outlook 2010 Sharing Item Read olkshareread.officeUI
Outlook 2010 Text Message Compose olksmsedit.officeUI
Outlook 2010 Text Message Read olksmsread.officeUI
Outlook 2010 Task Item (Task/Task Request, etc.) olktaskitem.officeUI
Access 2010 Access Ribbon Access.officeUI
Excel 2010 Excel Ribbon Excel.officeUI
InfoPath 2010 InfoPath Designer Ribbon IPDesigner.officeUI
InfoPath 2010 InfoPath Editor Ribbon IPEditor.officeUI
OneNote 2010 OneNote Ribbon OneNote.officeUI
PowerPoint PowerPoint Ribbon PowerPoint.officeUI
Project 2010 Project Ribbon MSProject.officeUI
Publisher 2010 Publisher Ribbon Publisher.officeUI
*SharePoint 2010 SharePoint Workspaces Ribbon GrooveLB.officeUI
*SharePoint 2010 SharePoint Workspaces Ribbon GrooveWE.officeUI
SharePoint Designer 2010 SharePoint Designer Ribbon spdesign.officeUI
Visio 2010 Visio Ribbon Visio.officeUI
Word 2010 Word Ribbon Word.officeUI

You can use these files and push them via Group Policy using a simple startup script:

@echo off
setlocal
set userdir=%localappdata%\Microsoft\Office
set remotedir=\\MyServer\LogonFiles\public\OfficeUI
for %%r in (Word Excel PowerPoint) do if not exist %userdir%\%%r.officeUI copy %remotedir%\%%r.officeUI %userdir%\%%r.officeUI
endlocal

A basic script to copy .officeUI files from a network share into the user's local AppData directory, if no .officeUI file currently exists there.
It can easily be modified to use the roaming AppData directory (replace %localappdata% with %appdata%) or to include additional Ribbon customizations.

 

Managing Office suite settings via Group Policy

Download and import the ADM templates into the Group Policy Object Editor.
This will allow you to manage settings like security, UI-related options, the Trust Center, etc. in Office 2010 using GPO.

Download Office 2010 Administrative Template files (ADM, ADMX/ADML)

Hopefully this will be helpful to someone..
Until next time, ciao


Advertising VRF Connected/Static routes via MP BGP to OSPF – Guide Dell S4112F-ON – OS 10.5.1.3

I'm going to base this on my VRF Setup and Route Leaking article and continue building on top of it.

Let's say we need to advertise connected routes within VRFs via an IGP to an upstream or downstream IP address; this is one of many ways to get to that objective.

For this example we are going to use BGP to collect the connected routes and advertise them over OSPF.

Set up the BGP process to collect connected routes

router bgp 65000
 router-id 10.252.250.6
 !
 address-family ipv4 unicast
 !
 neighbor 10.252.250.1
!
vrf Tenant01_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Tenant02_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Tenant03_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Shared_VRF
 !
 address-family ipv4 unicast
  redistribute connected

Set up OSPF to redistribute the routes collected via BGP

router ospf 250 vrf Shared_VRF
 area 0.0.0.0 default-cost 0
 redistribute bgp 65000
interface vlan250
 mode L3
 description OSPF_Routing
 no shutdown
 ip vrf forwarding Shared_VRF
 ip address 10.252.250.6/29
 ip ospf 250 area 0.0.0.0
 ip ospf mtu-ignore
 ip ospf priority 10

Testing and confirmation

Local OSPF Database

Remote device
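To confirm the redistribution from the CLI, checks along these lines can be used on OS10 (a sketch; exact command syntax varies between releases, so verify with the built-in ? help):

show ip ospf database
show ip route vrf Shared_VRF
show ip route ospf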


IP version 6 with Dual-stack using a Tunnel broker 6in4 – PFSense/ASA -Part 01

If your ISP doesn't have native IPv6 support with dual stack, here is a workaround to get it set up for your home lab environment.

What you need

> Router/Firewall/UTM that supports IPv6 Tunneling

  • PFsense/OpenSense/VyOS
  • DD-WRT 
  • Cisco ISR
  • Juniper SRX

> Active account with an IPv6 tunnel broker

      For this example we are going to be using the Hurricane Electric free IPv6 Tunnel Broker

Overview of the setup

For part 1 of this series  we are going to cover the following

  • Dual Stack Setup
  • DHCPV6 configuration and explanation

– Guide –

I used my Netgate router running pfSense to terminate the 6in4 tunnel; it adds firewall and monitoring capabilities to your IPv6 network.

Before we begin, we need to make a few adjustments on the firewall

Allow IPv6 Traffic

On new installations of pfSense after 2.1, IPv6 traffic is allowed by default. If the configuration on the firewall has been upgraded from an older version, IPv6 may still be blocked. To enable IPv6 traffic on pfSense, perform the following:

  • Navigate to System > Advanced on the Networking tab
  • Check Allow IPv6 if not already checked
  • Click Save

Allow ICMP

ICMP echo requests must be allowed on the WAN address that is terminating the tunnel to ensure that it is online and reachable.

Firewall > Rules > WAN

On the Hurricane Electric tunnel broker site (tunnelbroker.net), create a regular tunnel.

Enter your IPv4 address as the tunnel’s endpoint address.

Note – After entering your IPv4 address, the website will check to make sure that it can ping your machine. If it cannot ping your machine, you will get an error like the one below:

You can access the tunnel information from the accounts page

While you are here, go to the Advanced tab and set up an Update key (we will need it later).

Create and Assign the GIF Interface

Next, create the interface for the GIF tunnel in pfSense. Complete the fields with the corresponding information from the tunnel broker configuration summary.

  • Navigate to Interfaces > (assign) on the GIF tab.
  • Click Add to add a new entry.
  • Set the Parent Interface to the WAN where the tunnel terminates. This would be the WAN which has the Client IPv4 Address on the tunnel broker.
  • Set the GIF Remote Address in pfSense to the Server IPv4 Address on the summary.
  • Set the GIF Tunnel Local Address in pfSense to the Client IPv6 Address on the summary.
  • Set the GIF Tunnel Remote Address in pfSense to the Server IPv6 Address on the summary, along with the prefix length (typically /64).
  • Leave remaining options blank or unchecked.
  • Enter a Description.
  • Click Save.

Example GIF Tunnel.

Assign GIF Interface

Click Add on Interfaces > (Assignments)

Choose the GIF interface to be used for the OPT interface. In this example, the OPT interface has been renamed WAN_HP_NET_IPv6. Click Save and Apply Changes if they appear.

 

Configure OPT Interface

With the OPT interface assigned, click on the OPT interface from the Interfaces menu to enable it. Keep the IPv6 Configuration Type set to None.

Setup the IPv6 Gateway

When the interface is configured as listed above, a dynamic IPv6 gateway is added automatically, but it is not yet marked as default.

  • Navigate to System > Routing
  • Edit the dynamic IPv6 gateway with the same name as the IPv6 WAN created above.
  • Check Default Gateway.
  • Click Save.
  • Click Apply Changes.
 
Navigate to Status > Gateways to view the gateway status. The gateway will show as “Online” if the configuration is successful.

Set Up the LAN Interface for IPv6

The LAN interface may be configured for static IPv6 network. The network used for IPv6 addressing on the LAN Interface is an address in the Routed /64 or /48 subnet assigned by the tunnel broker.

  • The Routed /64 or /48 is the basis for the IPv6 Address field

For this exercise we are going to use ::1 for the LAN interface IP from the Prefixes provided above

Routed /64 : 2001:470:1f07:79a::/64

Interface IP – 2001:470:1f07:79a::1

Set Up DHCPv6 and RA (Router Advertisements)

Now that we have the tunnel up and running, we need to make sure devices behind the LAN interface can get an IPv6 address.

There are a couple of ways to handle the addressing:

Stateless Address Autoconfiguration (SLAAC)

SLAAC just means Stateless Address Autoconfiguration, but it shouldn't be confused with stateless DHCPv6. In fact, we are talking about two different approaches.

SLAAC is the simplest way to give an IPv6 address to a client, because it relies exclusively on the Neighbor Discovery Protocol. This protocol, which we simply call NDP, allows devices on a network to discover their Layer 3 neighbors. We use it to retrieve Layer 2 reachability information, like ARP does, and to find routers on the network.

When a device comes online, it sends a Router Solicitation message. It's basically asking, “Are there any routers out there?” If there is a router on the same network, that router will reply with a Router Advertisement (RA) message. Using this message, the router tells the client some information about the network:

  • Who the default gateway is (the link-local address of the router itself)
  • What the global unicast prefix is (for example, 2001:DB8:ACAD:10::/64)

With this information, the client creates a new global unicast address using the EUI-64 technique. Now the client has an IP address from the router's global unicast prefix range, and that address is valid over the Internet.

This method is extremely simple and requires virtually no configuration. However, we can't centralize it, and we cannot specify further information, such as DNS settings. To do that, we need to use a DHCPv6 technique.

Just like with IPv4, we need to configure the IPv6 range on the firewall (RA and, optionally, DHCPv6) so the devices behind it can use SLAAC.

Stateless DHCPv6

Stateless DHCPv6 brings the DHCPv6 protocol into the picture. With this approach, we still use SLAAC to obtain reachability information, and we use DHCPv6 for the extra items.

The client always starts with a Router Solicitation, and the router on the segment responds with a Router Advertisement. This time, the Router Advertisement has a flag called other-config set to 1. Once the client receives the message, it will still use SLAAC to craft its own IPv6 address. However, the flag tells the client to do something more.

After the SLAAC process succeeds, the client crafts a DHCPv6 request and sends it through the network. A DHCPv6 server eventually replies with all the extra information we need, such as the DNS server or domain name.

This approach is called stateless since the DHCPv6 server does not manage any leases for the clients. Instead, it just hands out extra information as needed.

Configuring IPv6 Router Advertisements

Router Advertisements (RA) tell an IPv6 network not only which routers are available to reach other networks, but also tell clients how to obtain an IPv6 address. These options are configured per-interface and work similar to and/or in conjunction with DHCPv6.

DHCPv6 is not able to send clients a router for use as a gateway as is traditionally done with IPv4 DHCP. The task of announcing gateways falls to RA.

Operating Mode: Controls how clients behave. All modes advertise this firewall as a router for IPv6. The following modes are available:

  • Router Only: Clients will need to set addresses statically
  • Unmanaged: Client addresses obtained only via Stateless Address Autoconfiguration (SLAAC).
  • Managed: Client addresses assigned only via DHCPv6.
  • Assisted: Client addresses assigned by either DHCPv6 or SLAAC (or both).

Enable DHCPv6 Server on the interface

Setup IPv6 DNS Addresses

We are going to use Cloudflare DNS (at the time of writing, Cloudflare is rated as the fastest resolver by ThousandEyes).

https://developers.cloudflare.com/1.1.1.1/setting-up-1.1.1.1/

1.1.1.1

  • 2606:4700:4700::1111
  • 2606:4700:4700::1001
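With RA/DHCPv6 and DNS configured, a quick end-to-end check from a client that has picked up an address looks something like this (use ping6 if your ping lacks the -6 flag):

ping -6 -c 4 2606:4700:4700::1111
ping -6 -c 4 ipv6.google.com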

Keeping your Tunnel endpoint Address Updated with your Dynamic IP

This only applies if you have a dynamic IPv4 from your ISP

As you may remember from our first step, when registering the 6in4 tunnel on the website we had to enter our public IP and enable ICMP.

We need to make sure this stays updated when our IP changes over time.

There are a few ways to accomplish this:

  • Use the pfSense DynDNS feature

dnsomatic.com is a wonderful free service to update your dynamic IP in multiple places. I used it because, if needed, I have the freedom to change routers/firewalls without messing up my config (I'm using one of my RasPis to update DNS-O-Matic).

I'm working on another article for this and will link it to this section ASAP.
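If you just want a quick manual or scripted update in the meantime, Hurricane Electric also exposes a dyndns-style update endpoint. Something along these lines should work, where the username, the Update key from the Advanced tab, and the tunnel ID are placeholders:

curl -4 "https://USERNAME:UPDATEKEY@ipv4.tunnelbroker.net/nic/update?hostname=TUNNELID"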

 

Few Notes –

Android and Chrome OS still don't support DHCPv6.

macOS, Windows 10, and Server 2016 use and prefer IPv6.

Check the Windows Firewall rules if you have issues with NAT rules, and update the rules manually.

Your MTU will drop since you are sending IPv6 packets encapsulated in IPv4. Personally I have no issues with my IPv6 network behind a Spectrum DOCSIS modem, but this may cause issues depending on your ISP, e.g. CGNAT.

Here is a good write up https://jamesdobson.name/post/mtu/

 

Part 2

In part two of this series we will use an ASA for IPv6, with the pfSense router as the tunnel endpoint.

Example Network

Link spotlight

– Understanding IPv6 EUI-64 Bit Address

– IPv6 Stateless Auto Configuration

– Configure the ASA to Pass IPv6 Traffic

– Setup IPv6 TunnelBroker – NetGate

– ipv6-at-home Part 1 | Part II | Part III

Until next time….


Kubernetes Loop

I’ve been diving deep into systems architecture lately, specifically Kubernetes

Strip away the UIs, the YAML, and the ceremony, and Kubernetes boils down to:

A very stubborn event driven collection of control loops

aka the reconciliation (control) loop, and everything I read calls this the “gold standard” for distributed control planes.

That's because it decomposes the control plane into many small, independent loops, each continuously correcting drift rather than trying to execute perfect one-shot workflows. These loops are triggered by events or state changes, but what they do is determined by the spec (desired state) vs. the observed state (status).

Now we have both:

  • spec: desired state
  • status: observed state

Kubernetes lives in that gap.

When spec and status match, everything’s quiet. When they don’t, something wakes up to ensure current state matches the declared state.

The Architecture of Trust

In Kubernetes, components don't coordinate via direct peer-to-peer orchestration; they coordinate by writing to and watching one shared “state.”

That state lives behind the API server, and the API server validates it and persists it into etcd.

Role of the API server

The API server is the front door to the cluster’s shared truth: it’s the only place that can accept, validate, and persist declared intent as Kubernetes API objects (metadata/spec/status).

When you install a CRD, you're extending the API itself with a new type (a new endpoint) and a schema the API server can validate against.

When we use kubectl apply (or any client) to submit YAML/JSON to the API server, the API server validates it (built-in rules, CRD OpenAPI v3 schema / CEL rules, and potentially admission webhooks) and rejects invalid objects before they’re stored.

If the request passes validation, the API server persists the object into etcd (the whole API object, not just “intent”), and controllers/operators then watch that stored state and do the reconciliation work to make reality match it.

Once stored, controllers/operators (loops) watch those objects and run reconciliation to push the real world toward what’s declared.

It turns out that in practice, most controllers don't act directly on raw watch events; they consume changes through informer caches and queue work onto a rate-limited workqueue. They also often watch related/owned resources (secondary watches), not just the primary object, to stay convergent.

As discussed above, spec is often user-authored, but it isn't exclusively human-written; the scheduler and some controllers also update parts of it (e.g., scheduling decisions/bindings and defaulting).

Role of etcd cluster

etcd is the control plane's durable record: the authoritative reference for what the cluster believes should exist and what it currently reports.

If an intent (an API object) isn't in etcd, controllers can't converge on it, because there's nothing recorded to reconcile toward.

This makes the system inherently self-healing because it trusts the declared state and keeps trying to morph the world to match until those two align.

One tidbit worth noting:

In production, nodes, runtimes, and cloud load balancers can drift independently. Controllers treat those systems as observed state, and they keep measuring reality against what the API says should exist.

How the Loop Actually Works

Kubernetes isn't one loop. It's a bunch of loops (controllers) that all behave the same way:

  • read desired state (what the API says should exist)
  • observe actual state (what’s really happening)
  • calculate the diff
  • push reality toward the spec

 

As an example, let’s look at a simple nginx workload deployment

1) Intent (Desired State)

To deploy the Nginx workload, you run:

kubectl apply -f nginx.yaml
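(If you don't have an nginx.yaml on hand, a quick way to generate a starting manifest is a client-side dry run; the replica count here is just an example:)

kubectl create deployment nginx --image=nginx --replicas=2 --dry-run=client -o yaml > nginx.yaml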

 

The API server validates the object (and its schema, if it’s a CRD-backed type) and writes it into etcd.

At that point, Kubernetes has only recorded your intent. Nothing has “deployed” yet in the physical sense. The cluster has simply accepted:

“This is what the world should look like.”

2) Watch (The Trigger)

Controllers and schedulers aren’t polling the cluster like a bash script with a sleep 10.

They watch the API server.

When desired state changes, the loop responsible for it wakes up, runs through its logic, and acts:

“New desired state: someone wants an Nginx Pod.”

Watches aren't gospel. Events can arrive twice, late, or never, and your controller still has to converge. Controllers use list+watch patterns with periodic resync as a safety net. The point isn't perfect signals; it's building a loop that stays correct under imperfect signals.

Controllers also don't spin constantly; they queue work. Events enqueue object keys, workers dequeue and reconcile, and failures are requeued with backoff. This keeps one bad object from melting the control plane.

3) Reconcile (Close the Gap)

Here’s the mental map that made sense to me:

Kubernetes is a set of level-triggered control loops. You declare desired state in the API, and independent loops keep working until the real world matches what you asked for.

  • Controllers (Deployment/ReplicaSet/etc.) watch the API for desired state and write more desired state.
    • Example: a Deployment creates/updates a ReplicaSet; a ReplicaSet creates/updates Pods.
  • The scheduler finds Pods with no node assigned and picks a node.
    • It considers resource requests, node capacity, taints/tolerations, node selectors, (anti)affinity, topology spread, and other constraints.
    • It records its decision by setting spec.nodeName on the Pod.
  • The kubelet on the chosen node notices “a Pod is assigned to me” and makes it real.
    • pulls images (if needed) via the container runtime (CRI)
    • sets up volumes/mounts (often via CSI)
    • triggers networking setup (CNI plugins do the actual wiring)
    • starts/monitors containers and reports status back to the API

Each component writes its state back into the API, and the next loop uses that as input. No single component “runs the whole workflow.”

One property makes this survivable: reconcile must be safe to repeat (idempotent). The loop might run once or a hundred times (retries, resyncs, restarts, duplicate/missed watch events), and it should still converge to the same end result.

If the desired state is already satisfied, reconcile should do nothing; if something is missing, it should fill the gap, without creating duplicates or making things worse.

What about concurrent updates, where two controllers try to update the same object at the same time?

Kubernetes handles this with optimistic concurrency. Every object has a resourceVersion (“what version of this object did you read?”). If you try to write an update using an older version, the API server rejects it (usually as a conflict).

The flow is then: re-fetch the latest object, apply your change again, and retry.

4) Status (Report Back)

Once the pod is actually running, status flows back into the API.
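You can watch that gap close yourself. For the nginx example above, this compares the desired and observed replica counts (the field paths are the standard Deployment spec/status ones):

kubectl get deployment nginx -o jsonpath='{.spec.replicas} {.status.readyReplicas}{"\n"}'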

The Loop Doesn’t Protect You From Yourself

What if the declared state says to delete something critical like kube-proxy or a CNI component? The loop doesn’t have opinions. It just does what the spec says.

A few things keep this from being a constant disaster:

  • Control plane components are special. The API server, etcd, the scheduler, and the controller-manager usually run as static pods managed directly by the kubelet, not through the API. The reconciliation loop can't easily delete the thing running the reconciliation loop as long as its manifest exists on disk.
  • DaemonSets recreate pods. Delete a kube-proxy pod and the DaemonSet controller sees “desired: 1, actual: 0” and spins up a new one. You’d have to delete the DaemonSet itself.
  • RBAC limits who can do what. Most users can’t touch kube-system resources.
  • Admission controllers can reject bad changes before they hit etcd.

But in the end, if your source of truth says “delete this,” the system will try. The model assumes your declared state is correct. Garbage in, garbage out.

Why This Pattern Matters Outside Kubernetes

This pattern shows up anywhere you manage state over time.

Scripts are fine until they aren’t:

  • they assume the world didn’t change since last run
  • they fail halfway and leave junk behind
  • they encode “steps” instead of “truth”

A loop is simpler:

  • define the desired state
  • store it somewhere authoritative
  • continuously reconcile reality back to it

Ref


Change the location of the Docker overlay2 storage directory

If you found this page, you already know why you are looking for this: your server's /dev/mapper/cs-root is full because /var/lib/docker is taking up most of the space.

Yes, you can change the location of the Docker overlay2 storage directory by modifying the daemon.json file. Here’s how to do it:

Open or create the daemon.json file using a text editor:

sudo nano /etc/docker/daemon.json

{
    "data-root": "/path/to/new/location/docker"
}

Replace “/path/to/new/location/docker” with the path to the new location of the Docker data directory.

If the file already contains other configuration settings, add the "data-root" setting to the file under the "storage-driver" setting:

{
    "storage-driver": "overlay2",
    "data-root": "/path/to/new/location/docker"
}

Save the file and restart Docker:

sudo systemctl restart docker
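Note that Docker will start with an empty data root at the new path. If you want to carry the existing images, containers, and volumes over, stop Docker and copy the old directory before restarting (a sketch; adjust the destination path to match your daemon.json):

sudo systemctl stop docker
sudo rsync -aP /var/lib/docker/ /path/to/new/location/docker/
sudo systemctl start docker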

Don't forget to remove the old data once you've confirmed everything works:

rm -rf /var/lib/docker/overlay2


Kafka 3.8 with Zookeeper SASL_SCRAM

 

Transport Encryption Methods:

SASL/SSL (Solid Teal/Green Lines):

  1. Used for securing communication between producers/consumers and Kafka brokers.
    • SASL (Simple Authentication and Security Layer): Authenticates clients (producers/consumers) to brokers, using SCRAM .
    • SSL/TLS (Secure Sockets Layer/Transport Layer Security): Encrypts the data in transit, ensuring confidentiality and integrity during transmission.

Digest-MD5 (Dashed Yellow Lines):

  1. Secures communication between Kafka brokers and the Zookeeper cluster.
    • Digest-MD5: A challenge-response authentication mechanism providing basic authentication (not transport encryption)

Notes:

While functional, Digest-MD5 is an older mechanism. We opted for it to reduce complexity, and because the Zookeeper nodes had issues connecting to the brokers via SSL/TLS.

  1. We need to test and switch over to the KRaft protocol, which removes the use of Zookeeper altogether
  2. Add IP ACLs for Zookeeper connections using firewalld to limit traffic between the nodes for replication

PKI and Certificate Signing

CA cert for the local PKI:

We need to share this PEM file (without the private key) with the customer so they can authenticate.

For internal applications the CA file must be used for authentication – refer to the configuration example documents.

# Generate CA Key
openssl genrsa -out multicastbits_CA.key 4096
# Generate CA Certificate
openssl req -x509 -new -nodes -key multicastbits_CA.key -sha256 -days 3650 -out multicastbits_CA.crt -subj "/CN=multicastbits_CA"

 

 

Kafka Broker Certificates

# For Node1 - Repeat for other nodes

openssl req -new -nodes -out node1.csr -newkey rsa:2048 -keyout node1.key -subj "/CN=kafka01.multicastbits.com"

openssl x509 -req -CA multicastbits_CA.crt -CAkey multicastbits_CA.key -CAcreateserial -in node1.csr -out node1.crt -days 3650 -sha256
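The broker config further down references JKS keystores and truststores (kafkanode1.keystore.jks, etc.), so the signed cert and key need to be packaged into those. A sketch of one way to do it, with placeholder passwords matching the examples below:

# Bundle the node key and signed cert (plus the CA) into a PKCS12 file
openssl pkcs12 -export -in node1.crt -inkey node1.key -certfile multicastbits_CA.crt -name kafka01 -out node1.p12 -passout pass:keystorePassword

# Convert the PKCS12 bundle into the broker keystore
keytool -importkeystore -srckeystore node1.p12 -srcstoretype PKCS12 -srcstorepass keystorePassword -destkeystore kafkanode1.keystore.jks -deststorepass keystorePassword

# Create a truststore containing the CA certificate
keytool -import -trustcacerts -alias multicastbits_CA -file multicastbits_CA.crt -keystore kafkanode1.truststore.jks -storepass truststorePassword -noprompt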

 

 

Create the kafka and zookeeper users

⚠️ Important: Do not skip this step. We need these users to set up authentication in the JaaS configuration.

Before configuring the cluster with SSL and SASL, let's start up the cluster without authentication and SSL to create the users. This allows us to:

  1. Verify basic dependencies and confirm the Zookeeper and Kafka clusters come up without any issues (“make sure the car starts”)
  2. Create the necessary user accounts for SCRAM
  3. Test for any inter-node communication issues (blocked ports 9092, 9093, 2181, etc.)

 

Here’s how to set up this initial configuration:

Zookeeper Configuration (No SSL or Auth)

Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties

# Zookeeper Configuration without Auth
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888

 

Kafka Broker Configuration (No SSL or Auth)

Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties

# Kafka Broker Configuration without Auth/SSL
broker.id=1
listeners=PLAINTEXT://kafka01.multicastbits.com:9092
advertised.listeners=PLAINTEXT://kafka01.multicastbits.com:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181

 

Open a new shell to the server and start Zookeeper:

/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties

 

Open a new shell to start Kafka:

/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties

 

 

Create the users:

Open a new shell and run the following commands:

kafka-configs.sh --bootstrap-server ext-kafka01.fleetcam.io:9092 --alter --add-config 'SCRAM-SHA-512=[password=zookeeper-password]' --entity-type users --entity-name ftszk

kafka-configs.sh --zookeeper ext-kafka01.fleetcam.io:2181 --alter --add-config 'SCRAM-SHA-512=[password=kafkaadmin-password]' --entity-type users --entity-name ftskafkaadmin

After the users are created without errors, press Ctrl+C to shut down the services we started earlier.

 

 

SASL_SSL configuration with SCRAM

Zookeeper configuration Notes

  • Zookeeper is configured with SASL/MD5 due to the SSL issues we faced during the initial setup
  • Zookeeper traffic is isolated within the broker nodes to maintain security
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

 

 

The /Data_Disk/zookeeper/myid file is updated to match each node's Zookeeper node ID:

cat /Data_Disk/zookeeper/myid
1

 

 

Jaas configuration

Create the JaaS configuration for Zookeeper authentication; it has to follow this syntax:

/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf

Server {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   user_multicastbitszk="zkpassword";
};

 

KAFKA_OPTS

The KAFKA_OPTS Java variable needs to be passed when Zookeeper is started, pointing to the correct JaaS file:

export KAFKA_OPTS="-Djava.security.auth.login.config=<path to zookeeper-jaas.conf>"

export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"

 

 

There are a few ways to handle this: you can add a script under profile.d or use a custom Zookeeper launch script for the systemd service.

Systemd service

Create the launch shell script for Zookeeper

/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh

#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"
#Start the zookeeper service
/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties

 

 

After you save the file

chmod +x /opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh

sudo chown -R multicastbitskafka:multicastbitskafka /opt/kafka/kafka_2.13-3.8.0

Create the systemd service file

/etc/systemd/system/zookeeper.service

[Unit]
Description=Apache Zookeeper Service
After=network.target
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target

After the file is saved, start the service

sudo systemctl daemon-reload
sudo systemctl enable zookeeper
sudo systemctl start zookeeper
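Before moving on to the brokers, it's worth confirming that the Zookeeper service actually came up with the JaaS settings (a quick check; the unit name matches the file created above):

sudo systemctl status zookeeper
sudo journalctl -u zookeeper -f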

 

Kafka Broker configuration Notes

/opt/kafka/kafka_2.13-3.8.0/config/server.properties

broker.id=1
listeners=SASL_SSL://kafka01.multicastbits.com:9093
advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093
listener.security.protocol.map=SASL_SSL:SASL_SSL
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks
ssl.keystore.password=keystorePassword
ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks
ssl.truststore.password=truststorePassword
#SASL/SCRAM Authentication
sasl.enabled.mechanisms=SCRAM-SHA-256, SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.mechanism.client=SCRAM-SHA-512
security.inter.broker.protocol=SASL_SSL
#zookeeper
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
zookeeper.sasl.client=true
zookeeper.sasl.clientconfig=ZookeeperClient

 

zookeeper connect options

Define the zookeeper servers the broker will connect to

zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181

Enable SASL

zookeeper.sasl.client=true

Tell the broker to use the creds defined under ZookeeperClient section on the JaaS file used by the kafka service

zookeeper.sasl.clientconfig=ZookeeperClient

Broker and listener configuration

Define the broker id

broker.id=1

Define the servers listener name and port

listeners=SASL_SSL://kafka01.multicastbits.com:9093

Define the servers advertised listener name and port

advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093

Define the SASL_SSL for security protocol

listener.security.protocol.map=SASL_SSL:SASL_SSL

Enable ACLs

authorizer.class.name=kafka.security.authorizer.AclAuthorizer

Define the Java Keystores

ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks

ssl.keystore.password=keystorePassword

ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks

ssl.truststore.password=truststorePassword

Jaas configuration

/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf

KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="multicastbitskafkaadmin"
  password="kafkaadmin-password";
};
ZookeeperClient {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="multicastbitszk"
  password="Zookeeper_password";
};

 

 

SASL and SCRAM configuration Notes

Enable SASL SCRAM for authentication

org.apache.kafka.common.security.scram.ScramLoginModule required

Use MD5 for Zookeeper authentication

org.apache.zookeeper.server.auth.DigestLoginModule required

KAFKA_OPTS

The KAFKA_OPTS Java variable needs to be passed and must point to the correct JaaS file when the Kafka service is started:

export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"

 

 

Systemd service

Create the launch shell script for kafka

/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh

#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
#Start the kafka service
/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties

 

 

Create the systemd service

/etc/systemd/system/kafka.service

[Unit]
Description=Apache Kafka Broker Service
After=network.target zookeeper.service
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
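Then reload systemd and start the broker, mirroring the Zookeeper steps above:

sudo systemctl daemon-reload
sudo systemctl enable kafka
sudo systemctl start kafka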

 

 

Connect, authenticate, and use the Kafka CLI tools

Requirements

  • multicastbitsadmin.keystore.jks
  • multicastbitsadmin.truststore.jks
  • WSL2 with java-11-openjdk-devel wget nano
  • Kafka 3.8 folder extracted locally

Setup your environment

  • Setup WSL2

You can use any Linux environment with JDK 17 or 11

  • Install dependencies

dnf install -y wget nano java-11-openjdk-devel

Download Kafka and extract it (I'm going to extract it to the home dir, under ~/kafka)

# 1. Download Kafka (Choose a version compatible with your server)
wget https://dlcdn.apache.org/kafka/3.8.0/kafka_2.13-3.8.0.tgz
# 2. Extract
tar xzf kafka_2.13-3.8.0.tgz

 

Copy the JKS files (you should generate them with the CA JKS, or use a set from one of the nodes) to ~/

cp multicastbitsadmin.keystore.jks ~/

 

cp multicastbitsadmin.truststore.jks ~/

Create your admin client properties file

change the path to fit your setup

nano ~/kafka-adminclient.properties

# Security protocol and SASL/SSL configuration
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
# SSL Configuration
ssl.keystore.location=/opt/kafka/secrets/multicastbitsadmin.keystore.jks
ssl.keystore.password=keystorepw
ssl.truststore.location=/opt/kafka/secrets/multicastbitsadmin.truststore.jks
ssl.truststore.password=truststorepw
# SASL Configuration
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="#youradminUser#" \
    password="#your-admin-PW#";

 

 

Create the JaaS file for the admin client

nano ~/kafka_client_jaas.conf

Some Kafka CLI tools still look for the jaas.conf via the KAFKA_OPTS environment variable:

KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="#youradminUser#"
  password="#your-admin-PW#";
};

 

Export the Kafka environment variables

export KAFKA_HOME=/opt/kafka/kafka_2.13-3.8.0
export PATH=$PATH:$KAFKA_HOME/bin
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
export KAFKA_OPTS="-Djava.security.auth.login.config=$HOME/kafka_client_jaas.conf"
source ~/.bashrc

 

 

Kafka CLI Usage Examples

Create a user

kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9093 --alter --add-config 'SCRAM-SHA-512=[password=#password#]' --entity-type users --entity-name %username% --command-config ~/kafka-adminclient.properties

 

 

Create a topic

kafka-topics.sh --bootstrap-server kafka01.multicastbits.com:9093 --create --topic %topicname% --partitions 10 --replication-factor 3 --command-config ~/kafka-adminclient.properties
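To smoke-test the new topic end to end, the console producer and consumer can reuse the same admin client properties (placeholders as above):

kafka-console-producer.sh --bootstrap-server kafka01.multicastbits.com:9093 --topic %topicname% --producer.config ~/kafka-adminclient.properties

kafka-console-consumer.sh --bootstrap-server kafka01.multicastbits.com:9093 --topic %topicname% --from-beginning --consumer.config ~/kafka-adminclient.properties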

 

 

Create ACLs

External customer user with READ DESCRIBE privileges to a single topic

kafka-acls.sh --bootstrap-server kafka01.multicastbits.com:9093 \
  --command-config ~/kafka-adminclient.properties \
  --add --allow-principal User:customer-user01 \
  --operation READ --operation DESCRIBE --topic Customer_topic
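To confirm what was granted, the same tool can list the ACLs on the topic:

kafka-acls.sh --bootstrap-server kafka01.multicastbits.com:9093 \
  --command-config ~/kafka-adminclient.properties \
  --list --topic Customer_topic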

 

 

Troubleshooting

Here are some common issues you might encounter when setting up and using Kafka with SASL_SCRAM authentication, along with their solutions:

1. Connection refused errors

Issue: Clients unable to connect to Kafka brokers.

Solution:

  • Verify that the Kafka brokers are running and listening on the correct ports.
  • Check firewall settings to ensure the Kafka ports are open and accessible.
  • Confirm that the bootstrap server addresses in client configurations are correct.

2. Authentication failures

Issue: Clients fail to authenticate with Kafka brokers.

Solution:

  • Double-check username and password in the JAAS configuration file.
  • Ensure the SCRAM credentials are properly set up on the Kafka brokers.
  • Verify that the correct SASL mechanism (SCRAM-SHA-512) is specified in client configurations.

3. SSL/TLS certificate issues

Issue: SSL handshake failures or certificate validation errors.

Solution:

  • Confirm that the keystore and truststore files are correctly referenced in configurations.
  • Verify that the certificates in the truststore are up-to-date and not expired.
  • Ensure that the hostname in the certificate matches the broker’s advertised listener (a quick handshake check is shown below).
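A simple way to run that check from a client machine is an openssl handshake test against the broker, using the CA generated earlier:

openssl s_client -connect kafka01.multicastbits.com:9093 -CAfile multicastbits_CA.crt </dev/null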

4. Zookeeper connection issues

Issue: Kafka brokers unable to connect to Zookeeper ensemble.

Solution:

  • Verify Zookeeper connection string in Kafka broker configurations.
  • Ensure Zookeeper servers are running and accessible and the ports are open
  • Check Zookeeper client authentication settings in JAAS configuration file

 

 
