IP version 6 with Dual-stack using a Tunnel Broker (6in4) – pfSense/ASA – Part 01
If your ISP doesn’t have native IPv6 support with dual stack, here is a workaround to get it set up for your home lab environment.
What you need
> Router/Firewall/UTM that supports IPv6 Tunneling
- pfSense/OPNsense/VyOS
- DD-WRT
- Cisco ISR
- Juniper SRX
> Active account with an IPv6 tunnel broker
For this example we are going to be using Hurricane Electric’s free IPv6 tunnel broker.
Overview of the setup

For part 1 of this series we are going to cover the following
- Dual Stack Setup
- DHCPV6 configuration and explanation
– Guide –
I used a Netgate router running pfSense to terminate the 6in4 tunnel; it adds firewall and monitoring capabilities to your IPv6 network.
Before we begin, we need to make a few adjustments on the firewall
Allow IPv6 Traffic
On new installations of pfSense after 2.1, IPv6 traffic is allowed by default. If the configuration on the firewall has been upgraded from older versions, then IPv6 would still be blocked. To enable IPv6 traffic on pfSense, perform the following:
- Navigate to System > Advanced on the Networking tab
- Check Allow IPv6 if not already checked
- Click Save
Allow ICMP
ICMP echo requests must be allowed on the WAN address that is terminating the tunnel to ensure that it is online and reachable.
Firewall> Rules > WAN


Create a regular tunnel.
Enter your IPv4 address as the tunnel’s endpoint address.
Note – After entering your IPv4 address, the website will check to make sure that it can ping your machine. If it cannot ping your machine, you will get an error like the one below:

You can access the tunnel information from the accounts page


While you are here, go to the “Advanced” tab and set up an “Update Key” (we will need it later).
Create and Assign the GIF Interface
Next, create the interface for the GIF tunnel in pfSense. Complete the fields with the corresponding information from the tunnel broker configuration summary.
- Navigate to Interfaces > (assign) on the GIF tab.
- Click Add to add a new entry.
- Set the Parent Interface to the WAN where the tunnel terminates. This would be the WAN which has the Client IPv4 Address on the tunnel broker.
- Set the GIF Remote Address in pfSense to the Server IPv4 Address on the summary.
- Set the GIF Tunnel Local Address in pfSense to the Client IPv6 Address on the summary.
- Set the GIF Tunnel Remote Address in pfSense to the Server IPv6 Address on the summary, along with the prefix length (typically /64).
- Leave remaining options blank or unchecked.
- Enter a Description.
- Click Save.
Example GIF Tunnel.

Assign GIF Interface
Click on Interfaces > (Assignments) and choose the GIF interface to be used for an OPT interface. In this example, the OPT interface has been renamed WAN_HP_NET_IPv6. Click Save and Apply Changes if they appear.
Configure OPT Interface
With the OPT interface assigned, click on the OPT interface from the Interfaces menu to enable it. Keep the IPv6 Configuration Type set to None.
Setup the IPv6 Gateway
When the interface is configured as listed above, a dynamic IPv6 gateway is added automatically, but it is not yet marked as default.
- Navigate to System > Routing
- Edit the dynamic IPv6 gateway with the same name as the IPv6 WAN created above.
- Check Default Gateway.
- Click Save.
- Click Apply Changes.


Set Up the LAN Interface for IPv6
The LAN interface may be configured with a static IPv6 network. The network used for IPv6 addressing on the LAN interface is an address in the Routed /64 or /48 subnet assigned by the tunnel broker.
- The Routed /64 or /48 is the basis for the IPv6 Address field

For this exercise we are going to use ::1 for the LAN interface IP from the Prefixes provided above
Interface IP – 2001:470:1f07:79a::1

Set Up DHCPv6 and RA (Router Advertisements)
Now that we have the tunnel up and running, we need to make sure devices behind the LAN interface can get an IPv6 address.
There are a couple of ways to handle the addressing.
Stateless Address Autoconfiguration (SLAAC)
SLAAC stands for Stateless Address Autoconfiguration, but it shouldn’t be confused with Stateless DHCPv6; we are talking about two different approaches.
SLAAC is the simplest way to give an IPv6 address to a client, because it relies exclusively on the Neighbor Discovery Protocol (NDP). NDP allows devices on a network to discover their Layer 3 neighbors. We use it to retrieve Layer 2 reachability information (much like ARP in IPv4) and to find routers on the network.
When a device comes online, it sends a Router Solicitation message. It’s basically asking “Are there any routers out there?”. If we have a router on the same network, that router will reply with a Router Advertisement (RA) message. Using this message, the router tells the client some information about the network:
- Who is the default gateway (the link-local address of the router itself)
- What is the global unicast prefix (for example, 2001:DB8:ACAD:10::/64)
With this information, the client creates a new global unicast address using the EUI-64 technique. Now the client has an IP address from the global unicast prefix range of the router, and that address is valid over the Internet.
This method is extremely simple and requires virtually no configuration. However, we can’t centralize it, and we cannot specify further information such as DNS settings. To do that, we need to use a DHCPv6 technique.
Just like with IPv4 DHCP, we need to set up addressing for the IPv6 range so the devices behind the firewall can use SLAAC or DHCPv6.
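If you want to watch this exchange happen, a quick capture on any Linux client will show the RS/RA messages (a sketch; the interface name is an example, and the byte-offset match assumes packets without IPv6 extension headers):
# Router Solicitation = ICMPv6 type 133, Router Advertisement = type 134
sudo tcpdump -i eth0 -vv 'icmp6 and (ip6[40] == 133 or ip6[40] == 134)'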
Stateless DHCPv6
Stateless DHCPv6 brings to the picture the DHCPv6 protocol. With this approach, we still use SLAAC to obtain reachability information, and we use DHCPv6 for extra items.
The client always starts with a Router Solicitation, and the router on the segment responds with a Router Advertisement. This time, the Router Advertisement has a flag called other-config set to 1. Once the client receives the message, it will still use SLAAC to craft its own IPv6 address. However, the flag tells the client to do something more.
After the SLAAC process succeeds, the client crafts a DHCPv6 request and sends it out on the network. A DHCPv6 server will eventually reply with all the extra information we need, such as DNS servers or the domain name.
This approach is called stateless since the DHCPv6 server does not manage any lease for the clients. Instead, it just gives extra information as needed.
Configuring IPv6 Router Advertisements
Router Advertisements (RA) tell an IPv6 network not only which routers are available to reach other networks, but also how clients should obtain an IPv6 address. These options are configured per interface and work similarly to, and/or in conjunction with, DHCPv6.
DHCPv6 is not able to send clients a router for use as a gateway as is traditionally done with IPv4 DHCP. The task of announcing gateways falls to RA.
Operating Mode: Controls how clients behave. All modes advertise this firewall as a router for IPv6. The following modes are available:
- Router Only: Clients will need to set addresses statically
- Unmanaged: Client addresses obtained only via Stateless Address Autoconfiguration (SLAAC).
- Managed: Client addresses assigned only via DHCPv6.
- Assisted: Client addresses assigned by either DHCPv6 or SLAAC (or both).
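These modes map onto the standard M (managed) and O (other-config) flags in the RA. Purely as an illustration (this is a radvd.conf sketch, not pfSense configuration; the interface name and prefix are examples), here is roughly how a “Stateless DHCPv6”-style setup would look in radvd terms:
# illustration only - how the RA flags map to a radvd.conf
interface eth0 {
    AdvSendAdvert on;
    AdvManagedFlag off;      # M flag: on = clients get addresses from DHCPv6 (Managed)
    AdvOtherConfigFlag on;   # O flag: on = clients fetch DNS/other options via stateless DHCPv6
    prefix 2001:db8:acad:10::/64 {
        AdvOnLink on;
        AdvAutonomous on;    # on = clients may build their own address via SLAAC
    };
};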
Enable DHCPv6 Server on the interface
Setup IPv6 DNS Addresses
We are going to use Cloudflare DNS (at the time of writing, Cloudflare is rated as the fastest resolver by ThousandEyes).
https://developers.cloudflare.com/1.1.1.1/setting-up-1.1.1.1/

- 2606:4700:4700::1111
- 2606:4700:4700::1001


Keeping your Tunnel endpoint Address Updated with your Dynamic IP
This only applies if you have a dynamic IPv4 from your ISP
As you may remember from our first step, when registering the 6in4 tunnel on the website we had to enter our public IP and enable ICMP.
We need to make sure we keep this updated when our IP changes over time.
There are a few ways to accomplish this
- Use the pfSense Dynamic DNS feature

- Use DNS-O-Matic
dnsomatic.com is a wonderful free service to update your dynamic IP in multiple locations. I used this because, if needed, I have the freedom to change routers/firewalls without messing up my config (I’m using one of my RasPis to update DNS-O-Matic).
I’m working on another article for this and will link it to this section ASAP.
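In the meantime, HE exposes a dyn-style update URL, so even a cron’d curl one-liner can keep the endpoint current (a sketch; USERNAME, the Update Key from the Advanced tab, and TUNNEL_ID are placeholders):
# updates the tunnel's client IPv4 endpoint to whatever address this request comes from
curl -s "https://ipv4.tunnelbroker.net/nic/update?username=USERNAME&password=UPDATE_KEY&hostname=TUNNEL_ID"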
Few Notes –
Android and Chrome OS still don’t support DHCPv6.
macOS, Windows 10, and Server 2016 support and prefer IPv6.
Check the Windows Firewall rules if you have connectivity issues, and manually update the rules if needed.
Your MTU will drop since you are sending IPv6 packets encapsulated in IPv4 packets. Personally, I have had no issues with my IPv6 network behind a Spectrum DOCSIS modem, but this may cause issues depending on your ISP (e.g. CGNAT).
Here is a good write up https://jamesdobson.name/post/mtu/
Part 2
In part two of this series we will configure an ASA for IPv6, using the pfSense router as the tunnel endpoint.

Link spotlight
– IPv6 Stateless Auto Configuration
– Configure the ASA to Pass IPv6 Traffic
– Setup IPv6 TunnelBroker – NetGate
– ipv6-at-home Part 1 | Part II | Part III
Until next time….
Find the PCI-E Slot Number of a PCI-E Add-on Card (GPU, NIC, etc.) on Linux/Proxmox
I was working on a vGPU POC using PVE since Broadcom screwed us with the vSphere licensing costs (new post incoming about this adventure).
Anyway, I needed to find the PCI-E slot used by the A4000 GPU on the host so I could disable it for troubleshooting.
Guide
First we need to find the occupied slots and the Bus address for each slot
sudo dmidecode -t slot | grep -E "Designation|Usage|Bus Address"
Output will show the Slot ID, Usage and then the Bus Address
Designation: CPU SLOT1 PCI-E 4.0 X16
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT2 PCI-E 4.0 X8
Current Usage: In Use
Bus Address: 0000:41:00.0
Designation: CPU SLOT3 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:c1:00.0
Designation: CPU SLOT4 PCI-E 4.0 X8
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT5 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:c2:00.0
Designation: CPU SLOT6 PCI-E 4.0 X16
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT7 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:81:00.0
Designation: PCI-E M.2-M1
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: PCI-E M.2-M2
Current Usage: Available
Bus Address: 0000:ff:00.0
We can use lspci -s <BusAddress> to locate what’s installed in each slot:
lspci -s 0000:c2:00.0
c2:00.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1)
lspci -s 0000:81:00.0
81:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
I’m sure there is a much more elegant way to do this, but this worked as a quick-ish way to find what I needed. If you know a better way, please share in the comments.
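For what it’s worth, the two steps can be glued together; something like this should print each in-use slot along with the card behind it (an untested sketch, assuming the dmidecode output format shown above):
# list each occupied slot and resolve its bus address to the installed device
sudo dmidecode -t slot | awk -F': ' '
  /Designation/    {slot=$2}
  /Current Usage/  {usage=$2}
  /Bus Address/ && usage=="In Use" {print slot "|" $2}' |
while IFS='|' read -r slot addr; do
  printf '%s -> %s\n' "$slot" "$(lspci -s "$addr")"
done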
Until next time!!!
Reference –
https://stackoverflow.com/questions/25908782/in-linux-is-there-a-way-to-find-out-which-pci-card-is-plugged-into-which-pci-sl
Install OpenVPN on FireTV (no root required) for NordVPN (Mac, Windows, Linux)
DISCLAIMER: No copyright infringement intended. This article is for entertainment and educational purposes only.
Alright!! Now that that’s out of the way, I’m going to keep this short and simple.
Scope : –
Install OpenVPN client
import profile with username and password
connect to your preferred VPN server
Use case : –
- Secure your fireTV traffic using any OpenVPN-supported VPN service
- Connect to your home file server/NAS and stream files when traveling via your FireTV or Firestick using your own VPN server (not covered in this article)
- Watch Streaming services when traveling using your own VPN server (not covered in this article)
Project Summary
Hardware – FireTV 4K Latest firmware
Platform – Windows 10 Enterprise
In this guide I’m using ADB to install the OpenVPN client on my FireTV and using it to connect to the NordVPN service.
All project files are located in C:\NORDVPN
Files Needed (Please download these files to your workstation before proceeding)
OpenVPN client APK – http://plai.de/android/
NORDVPN OpenVPN configuration files – https://nordvpn.com/ovpn/
ADBLink – http://jocala.com
01. Enable Developer mode on Fire TV
http://www.aftvnews.com/how-to-enable-adb-debugging-on-an-amazon-fire-tv-or-fire-tv-stick/
- From the Fire TV or Fire TV Stick’s home screen, scroll to “Settings”.

- Next, scroll to the right and select “Device”.

- Next, scroll down and select “Developer options”.

- Then select “ADB debugging” to turn the option to “ON”.

- Click on “File Manager” in adbLink
- Create a folder (I’m going to call it NORD_VPN)
- Install OpenVPN on the FireTV
- Customize the VPN configuration files
- Copy the VPN configuration files to the root of the SD card on the FireTV
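If you’d rather skip adbLink, plain adb from your workstation does the same job (a sketch; the IP address, APK file name, and profile name are examples):
adb connect 192.168.1.50:5555              # Fire TV's IP from Settings > Device > About > Network
adb install C:\NORDVPN\OpenVPN.apk         # side-load the OpenVPN client
adb push C:\NORDVPN\us1234.ovpn /sdcard/   # copy the profile to the root of the SD card
adb disconnect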
Select and launch OpenVPN Client
Use the + sign to add a profile
Click Import
Browse and select the .ovpn configuration file using the file browser
Deploying User Customizations & Office Suite Settings for M$ Office via Group Policy
Hello internetzzz
As an administrator, you might run into situations that require you to deploy UI customizations such as a customized Ribbon, Quick Access Toolbar, etc. for Office applications on user computers, or in my case terminal servers.
Here is a quick and dirty guide on how to do this via Group Policy.
For instance, let’s say we have to deploy a button that launches a 3rd-party productivity program within Outlook and MS Word.
First off, make the necessary changes to Outlook or Word on a client PC running MS Office.
To customize the Ribbon
- On the File tab, click Options, and then click Customize Ribbon to open the Ribbon customization dialog.
To customize the Quick Access Toolbar
- On the File tab, click Options, and then click Quick Access Toolbar to open the Quick Access Toolbar customization dialog.
You can also export your Ribbon and Quick Access Toolbar customizations into a file.
When we make changes to the default Ribbon, these user customizations are saved as .officeUI files under:
%localappdata%\Microsoft\Office
The file names will differ according to the office program and the portion of the Ribbon UI you customized.
| Application | Description Of .Ribbon File | .officeUI File Name |
|---|---|---|
| Outlook 2010 | Outlook Explorer | olkexplorer.officeUI |
| Outlook 2010 | Contact | olkaddritem.officeUI |
| Outlook 2010 | Appointment/Meeting (organizer on compose, organizer after compose, attendee) | olkapptitem.officeUI |
| Outlook 2010 | Contact Group (formerly known as Distribution List) | olkdlstitem.officeUI |
| Outlook 2010 | Journal Item | olklogitem.officeUI |
| Outlook 2010 | Mail Compose | olkmailitem.officeUI |
| Outlook 2010 | Mail Read | olkmailread.officeUI |
| Outlook 2010 | Multimedia Message Compose | olkmmsedit.officeUI |
| Outlook 2010 | Multimedia Message Read | olkmmsread.officeUI |
| Outlook 2010 | Received Meeting Request | olkmreqread.officeUI |
| Outlook 2010 | Forward Meeting Request | olkmreqsend.officeUI |
| Outlook 2010 | Post Item Compose | olkpostitem.officeUI |
| Outlook 2010 | Post Item Read | olkpostread.officeUI |
| Outlook 2010 | NDR | olkreportitem.officeUI |
| Outlook 2010 | Send Again Item | olkresenditem.officeUI |
| Outlook 2010 | Counter Response to a Meeting Request | olkrespcounter.officeUI |
| Outlook 2010 | Received Meeting Response | olkresponseread.officeUI |
| Outlook 2010 | Edit Meeting Response | olkresponsesend.officeUI |
| Outlook 2010 | RSS Item | olkrssitem.officeUI |
| Outlook 2010 | Sharing Item Compose | olkshareitem.officeUI |
| Outlook 2010 | Sharing Item Read | olkshareread.officeUI |
| Outlook 2010 | Text Message Compose | olksmsedit.officeUI |
| Outlook 2010 | Text Message Read | olksmsread.officeUI |
| Outlook 2010 | Task Item (Task/Task Request, etc.) | olktaskitem.officeUI |
| Access 2010 | Access Ribbon | Access.officeUI |
| Excel 2010 | Excel Ribbon | Excel.officeUI |
| InfoPath 2010 | InfoPath Designer Ribbon | IPDesigner.officeUI |
| InfoPath 2010 | InfoPath Editor Ribbon | IPEditor.officeUI |
| OneNote 2010 | OneNote Ribbon | OneNote.officeUI |
| PowerPoint | PowerPoint Ribbon | PowerPoint.officeUI |
| Project 2010 | Project Ribbon | MSProject.officeUI |
| Publisher 2010 | Publisher Ribbon | Publisher.officeUI |
| *SharePoint 2010 | SharePoint Workspaces Ribbon | GrooveLB.officeUI |
| *SharePoint 2010 | SharePoint Workspaces Ribbon | GrooveWE.officeUI |
| SharePoint Designer 2010 | SharePoint Designer Ribbon | spdesign.officeUI |
| Visio 2010 | Visio Ribbon | Visio.officeUI |
| Word 2010 | Word Ribbon | Word.officeUI |
You can use these files and push them via Group Policy using a simple startup script:
@echo off
setlocal
set userdir=%localappdata%\Microsoft\Office
set remotedir=\\MyServer\LogonFiles\public\OfficeUI
for %%r in (Word Excel PowerPoint) do if not exist "%userdir%\%%r.officeUI" copy "%remotedir%\%%r.officeUI" "%userdir%\%%r.officeUI"
endlocal
A basic script to copy .officeUI files from a network share into the user’s local AppData directory, if no .officeUI file currently exists there.
Can easily be modified to use the roaming AppData directory (replace %localappdata% with %appdata%) or to include additional ribbon customizations.
Managing Office Suite Settings via Group Policy
Download and import the ADM templates to the Group policy object editor.
This will allow you to manage settings (security, UI-related options, Trust Center, etc.) for Office 2010 using GPO.
Download Office 2010 Administrative Template files (ADM, ADMX/ADML)
Hopefully this will be helpful to someone..
Until next time, ciao
Upgrading VMware ESXi Hosts using vCenter Update Manager Baselines (6.5 to 6.7 Update 2)
Update Manager has been bundled in the vCenter Server Appliance since version 6.5; it’s a plug-in that runs on the vSphere Web Client. We can use the component to:
- patch/upgrade hosts
- deploy .vib files within vCenter
- scan your vCenter environment and report on any out-of-compliance hosts
Hardcore/Experienced VMware operators will scoff at this article, but I have seen many organizations still using ILO/IDRAC to mount an ISO to update hosts and they have no idea this function even exists.

Now that’s out of the way Let’s get to the how-to part of this
In Vcenter click the “Menu” and drill down to the “Update Manager”

This blade will show you all the nerd knobs and an overview of your current updates and compliance levels.
Click on the “Baselines” Tab

You will have two predefined baselines for security patches created by vCenter; let’s keep those aside for now.
Navigate to the “ESXi Images” Tab, and Click “Import”

Once the Upload is complete, Click on “New Baseline”
Fill in a name and description that makes sense to anyone who logs in, and click Next.

Select the image you uploaded on the next screen, then continue through the wizard and complete it.
Note – If you have other 3rd-party software for ESXi, you can create separate baselines for those and use baseline groups to push out upgrades and .vib files at the same time.
Now click the “Menu” and navigate back to “Hosts and Clusters”.
Now you can apply the baseline at various levels within the vCenter hierarchy:
vCenter | Datacenter | Cluster | Host
Depending on your use case pick the right level
Excerpt from the KB –
For ESXi hosts in a cluster, the remediation process is sequential by default. With Update Manager, you can select to run host remediation in parallel. When you remediate a cluster of hosts sequentially and one of the hosts fails to enter maintenance mode, Update Manager reports an error, and the process stops and fails. The hosts in the cluster that are remediated stay at the updated level. The ones that are not remediated after the failed host remediation are not updated.
If a host in a DRS enabled cluster runs a virtual machine on which Update Manager or vCenter Server are installed, DRS first attempts to migrate the virtual machine running vCenter Server or Update Manager to another host so that the remediation succeeds. In case the virtual machine cannot be migrated to another host, the remediation fails for the host, but the process does not stop. Update Manager proceeds to remediate the next host in the cluster.
The host upgrade remediation of ESXi hosts in a cluster proceeds only if all hosts in the cluster can be upgraded. Remediation of hosts in a cluster requires that you temporarily disable cluster features such as VMware DPM and HA admission control. Also, turn off FT if it is enabled on any of the virtual machines on a host, and disconnect the removable devices connected to the virtual machines on a host, so that they can be migrated with vMotion. Before you start a remediation process, you can generate a report that shows which cluster, host, or virtual machine has the cluster features enabled.
Moving on; for this example, since I have only 2 hosts, we are going to apply the baseline at the cluster level but apply the remediation at the host level:
Host 1 > Enter Maintenance > Remediation > Update complete and online
Host 2 > Enter Maintenance > Remediation > Update complete and online
Select the cluster, click the “Updates” tab, and click “Attach” in the Attached Baselines section.

Select and attach the baseline we created earlier.
Click “Check Compliance” to scan and get a report
Select the host in the cluster, enter maintenance mode
Click “REMEDIATE” to start the upgrade. (If you do this at the cluster level and have DRS, Update Manager will update each node.)
This will reboot the host and go through the update process

Foot Notes –
You might run into the following issue
“vCenter cannot deploy Host upgrade agent to host”

Cause 1
The scratch partition is full. Use vCenter to change the scratch folder location.
Creating a persistent scratch location for ESXi – https://kb.vmware.com/s/article/1033696
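If you’d rather not click through the UI, the advanced option named in that KB can also be set from an SSH session on the host; a sketch (the datastore path is an example, and the host needs a reboot for the change to take effect):
# point the scratch location at a folder on persistent storage (example path)
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string "/vmfs/volumes/datastore1/.locker-esx01"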
Cause 2
Hardware is not compatible.
I had this issue due to 6.7 dropping support for an LSI RAID card on older firmware; you need to do some legwork and check the log files to figure out why it’s failing.
VMware HCL – Link
ESXi and vCenter log file locations – link
Vagrant Ansible LAB Guide – Bridged network
Here’s a quick guide to get you started with an “Ansible core lab” using Vagrant.
Alright, let’s get started.
TLDR Version
- Install Vagrant
- Install Virtual-box
- Create a project folder, cd into it, and run
vagrant init
- Vagrantfile – link
- Vagrant Provisioning Shell Script to Deploy Ansible – link
- Install the vagrant-vbguest plugin to deploy the missing guest additions
vagrant plugin install vagrant-vbguest
- Bring up the Vagrant environment
vagrant up
Install Vagrant and VirtualBox
For this demo we are using Windows 10 1909, but you can use the same guide for macOS.
Windows
Download Vagrant and VirtualBox and install them the good ol’ way –
https://www.vagrantup.com/downloads.html https://www.virtualbox.org/wiki/Downloads https://www.vagrantmanager.com/downloads/
Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)
vagrant plugin install vagrant-vbguest
Or Using chocolatey
choco install vagrant
choco install virtualbox
choco install vagrant-manager
Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)
vagrant plugin install vagrant-vbguest
macOS – using brew cask
Install virtual box
$ brew cask install virtualbox
Now install Vagrant either from the website or use homebrew for installing it.
$ brew cask install vagrant
Vagrant-Manager is a nice way to manage all your virtual machines in one place directly from the menu bar.
$ brew cask install vagrant-manager
Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)
vagrant plugin install vagrant-vbguest
Setup the Vagrant Environment
Open PowerShell
To get started, let’s check our environment:
vagrant version

Create a project directory and Initialize the environment
For the project directory I’m using D:\vagrant
Open PowerShell and run
mkdir D:\vagrant
cd D:\vagrant
Initialize the environment under the project folder
vagrant init

This will create two items:

.vagrant – Hidden folder holding Base Machines and meta data
Vagrantfile – Vagrant config file
Let’s create the Vagrantfile to deploy the VMs
https://www.vagrantup.com/docs/vagrantfile/
The syntax of Vagrantfiles is Ruby, which gives us a lot of flexibility to program in logic when building our files.
I’m using Atom to edit the Vagrantfile.
Vagrant.configure("2") do |config|
config.vm.define "controller" do |controller|
controller.vm.box = "ubuntu/trusty64"
controller.vm.hostname = "LAB-Controller"
controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.120"
controller.vm.provider "virtualbox" do |vb|
vb.memory = "2048"
end
controller.vm.provision :shell, path: 'Ansible_LAB_setup.sh'
end
(1..3).each do |i|
config.vm.define "vls-node#{i}" do |node|
node.vm.box = "ubuntu/trusty64"
node.vm.hostname = "vls-node#{i}"
node.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.12#{i}"
node.vm.provider "virtualbox" do |vb|
vb.memory = "1024"
end
end
end
end
You can grab the code from my Repo
https://github.com/malindarathnayake/Ansible_Vagrant_LAB/blob/master/Vagrantfile
Let’s talk a little bit about this code and unpack it.
Vagrant API version

Vagrant uses API versions for its configuration file; this is how it stays backward compatible. So in every Vagrantfile we need to specify which version to use. The current one is version 2, which works with Vagrant 1.1 and up.
Provisioning the Ansible VM

This will
- Provision the controller Ubuntu VM
- Create a bridged network adapter
- Set the host-name – LAB-Controller
- Set the static IP – 172.17.10.120/24
- Run the Shell script that installs Ansible using apt-get install (We will get to this below)
Let’s start digging in…
Specifying the Controller VM Name, base box and hostname

Vagrant uses a base image to clone a virtual machine quickly. These base images are known as “boxes” in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.
You can find different base boxes from app.vagrantup.com
Or you can create custom base boxes for pretty much anything including “CiscoVIRL(CML)” images – keep an eye out for the next article on this
Network configurations

controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "your IP"
In this case, we are asking it to create a bridged adapter using the Intel(R) I211 NIC and set the IP address defined under the ip attribute.
You can find the relevant interface name using
get-netadapter

You can also create a host-only private network
controller.vm.network :private_network, ip: "10.0.0.10"
For more info, check out the networking section in the docs:
https://www.vagrantup.com/docs/networking/
Define the provider and VM resources

We are declaring virtualbox (which we installed earlier) as the provider and setting the VM memory to 2048 MB.
You can get more granular with this; refer to the KB below:
https://www.vagrantup.com/docs/virtualbox/configuration.html
Define the shell script to customize the VM config and install the Ansible Package

Now this is where we define the provisioning shell script.
This script installs Ansible and sets the host file entries to make your life easier.
In case you are wondering, VLS stands for V – virtual, L – Linux, S – server.
I use this naming scheme for my VMs. Feel free to use anything you want; just make sure it matches what you defined in the Vagrantfile under node.vm.hostname.
#!/bin/bash
sudo apt-get update
sudo apt-get install software-properties-common -y
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible -y
echo "
172.17.10.120 LAB-controller
172.17.10.121 vls-node1
172.17.10.122 vls-node2
172.17.10.123 vls-node3" >> /etc/hosts
Create this file and save it as Ansible_LAB_setup.sh in the project folder.
In this case I’m going to save it under D:\vagrant.
You can also do this inline with a script block instead of using a separate file
https://www.vagrantup.com/docs/provisioning/basic_usage.html
Provisioning the Member servers for the lab

We covered most of the code used above; the only difference here is we are using Ruby’s each method to create 3 VMs from the same template (I’m lazy and it’s more convenient).
This will create three Ubuntu VMs with the following hostnames and IP addresses; you should update these values to match your LAN, or use a private adapter.
vls-node1 – 172.17.10.121
vls-node2 – 172.17.10.122
vls-node3 – 172.17.10.123
So now that we are done explaining the code, let’s run it.
Building the Lab environment using Vagrant
Issue the following command to make sure the Vagrantfile parses and to see the state of the environment:
vagrant status
Issue the following command to bring up the environment
vagrant up

If you get this message, reboot into UEFI and make sure virtualization is enabled:
Intel – VT-D
AMD Ryzen – SVM
If everything is kumbaya, you will see Vagrant firing up the deployment.

It will provision 4 VMs as we specified
Notice that since we have the “vagrant-vbguest” plugin installed, it reinstalls the relevant guest tools along with the dependencies for the OS:
==> vls-node3: Machine booted and ready!
[vls-node3] No Virtualbox Guest Additions installation found.
rmmod: ERROR: Module vboxsf is not currently loaded
rmmod: ERROR: Module vboxguest is not currently loaded
Reading package lists...
Building dependency tree...
Reading state information...
Package 'virtualbox-guest-x11' is not installed, so not removed
The following packages will be REMOVED:
  virtualbox-guest-utils*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5799 kB disk space will be freed.
(Reading database ... 61617 files and directories currently installed.)
Removing virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for man-db (2.8.7-3) ...
(Reading database ... 61604 files and directories currently installed.)
Purging configuration files for virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for systemd (242-7ubuntu3.7) ...
Reading package lists...
Building dependency tree...
Reading state information...
linux-headers-5.3.0-51-generic is already the newest version (5.3.0-51.44).
linux-headers-5.3.0-51-generic set to manually installed.
Check the status
vagrant status


Testing
Connecting via SSH to your VMs
vagrant ssh controller
“controller” is the VM name we defined before, not the hostname. You can find it by running vagrant status in PowerShell or your terminal.
We are going to connect to our controller and check everything


Little bit more information on the networking side
Vagrant adds two interfaces to each VM:
NIC 1 – NATed to the host (control plane for Vagrant to manage the VMs)

NIC 2 – Bridged adapter we provisioned in the script with the IP Address

The default route is set via the private (NATed) interface (you can’t change it).

Netplan configs
Vagrant creates a custom netplan YAML for the interface configs.


Destroy/Tear-down the environment
vagrant destroy -f
https://www.vagrantup.com/intro/getting-started/teardown.html
I hope this helped someone. When I started with Vagrant a few years back it took me a few tries to figure out the system and the logic behind it; this should give you a basic understanding of how things are plugged together.
Let me know in the comments if you see any issues or mistakes.
Until Next time…..
Managing calendar permissions in Exchange Server 2010
These sharing options are not available in the EMC, so we have to use the Exchange Management Shell on the server to manipulate them.
To view the current permissions on the calendar:
Get-MailboxFolderPermission -Identity "Networking Calendar:\Calendar"
The goal for this example:
- user “Nyckie” – full permissions
- all users – permission to add events, without the delete permission
To assign calendar permissions to new users, use Add-MailboxFolderPermission:
Add-MailboxFolderPermission -Identity "Networking Calendar:\Calendar" -User [email protected] -AccessRights Owner
To change existing calendar permissions, use Set-MailboxFolderPermission:
Set-MailboxFolderPermission -Identity "Networking Calendar:\Calendar" -User Default -AccessRights NonEditingAuthor
(LimitedDetails – view availability data with subject and location)
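If you ever need to push a default like this to every mailbox rather than one shared calendar, the same cmdlet loops cleanly; a sketch, assuming the default English calendar folder name:
Get-Mailbox -ResultSize Unlimited | ForEach-Object {
    # grant everyone read access to each user's own calendar (example access right)
    Set-MailboxFolderPermission -Identity "$($_.Alias):\Calendar" -User Default -AccessRights Reviewer
}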
source –
technet.microsoft.com
http://blog.powershell.no/2010/09/20/managing-calendar-permissions-in-exchange-server-2010/
How to extend the root (cs-root) Filesystem using LVM on CentOS/RHEL/AlmaLinux
This guide will walk you through how to extend and increase space for the root filesystem on an AlmaLinux, CentOS, or RHEL server/desktop/VM.
Method A – Expanding the current disk
Edit the VM and Add space to the Disk

Install the cloud-utils-growpart package, as the growpart command in it makes it really easy to extend partitioned virtual disks.
sudo dnf install cloud-utils-growpart
Verify that the VM’s operating system recognizes the new increased size of the sda virtual disk, using lsblk or fdisk -l
sudo fdisk -l

Notes - Note down the disk ID and the partition number of the Linux LVM partition; in this demo the disk ID is sda and the LVM partition is sda3.
Let’s trigger a rescan of the block devices (disks):
#elevate to root
sudo su
#trigger a rescan, Make sure to match the disk ID you noted down before
echo 1 > /sys/block/sda/device/rescan
exit
Now sudo fdisk -l shows the correct size of the disks

Use growpart to increase the partition size for the LVM partition:
sudo growpart /dev/sda 3

Confirm the volume group name
sudo vgs

Extend the logical volume
sudo lvextend -l +100%FREE /dev/almalinux/root
Grow the file system size
sudo xfs_growfs /dev/almalinux/root
Notes - You can use these same steps to add space to different partitions such as home or swap if needed.
Method B -Adding a second Disk to the LVM and expanding space
Why add a second disk? Maybe the current disk is locked due to a snapshot you can’t remove; the only solution would be to add a second disk.
Check the current space available
sudo df -h

Notes - If you have 0% (~1 MB) left on cs-root, command auto-completion with Tab and some of the later commands won’t work. You should clear up at least 4-10 MB by clearing log files, temp files, etc.
Mount an additional disk to the VM (assuming this is a VM) and make sure the disk is visible at the OS level:
sudo lvmdiskscan

OR
sudo fdisk -l

Confirm the volume group name
sudo vgs

Let’s increase the space.
First, let’s initialize the new disk we mounted:
sudo mkfs.xfs /dev/sdb

Create the Physical volume
sudo pvcreate /dev/sdb

extend the volume group
sudo vgextend cs /dev/sdb

Volume group "cs" successfully extended
Extend the logical volume
sudo lvextend -l +100%FREE /dev/cs/root
Grow the file system size
sudo xfs_growfs /dev/cs/root

Confirm the changes
sudo df -h

Just making it easy for us!!
#Method A - Expanding the current disk
#AlmaLinux
sudo dnf install cloud-utils-growpart
sudo lvmdiskscan
sudo fdisk -l #note down the disk ID and partition num
sudo su #elevate to root
echo 1 > /sys/block/sda/device/rescan #trigger a rescan
exit #exit root shell
sudo growpart /dev/sda 3 #grow the LVM partition
sudo lvextend -l +100%FREE /dev/almalinux/root
sudo xfs_growfs /dev/almalinux/root
sudo df -h
#Method B - Adding a second Disk
#CentOS
sudo lvmdiskscan
sudo fdisk -l
sudo vgs
sudo mkfs.xfs /dev/sdb
sudo pvcreate /dev/sdb
sudo vgextend cs /dev/sdb
sudo lvextend -l +100%FREE /dev/cs/root
sudo xfs_growfs /dev/cs/root
sudo df -h
#AlmaLinux
sudo lvmdiskscan
sudo fdisk -l
sudo vgs
sudo mkfs.xfs /dev/sdb
sudo pvcreate /dev/sdb
sudo vgextend almalinux /dev/sdb
sudo lvextend -l +100%FREE /dev/almalinux/root
sudo xfs_growfs /dev/almalinux/root
sudo df -h
Server 2016 Data De-duplication Report – Powershell
I put together this crude little script to send out a report on a daily basis
It’s not that fancy, but it’s functional.
I’m working on the second revision with an HTML body, lists of corrupted files, and resource usage; more features will be added as I dive further into the Dedup cmdlets.
https://technet.microsoft.com/en-us/library/hh848450.aspx
Link to the Script – Dedupe_report.ps1
https://dl.dropboxusercontent.com/s/bltp675prlz1slo/Dedupe_report_Rev2_pub.txt
If you have any suggestions for improvements please comment and share with everyone
# Malinda Ratnayake | 2016
# Can only be run on Windows Server 2012 R2
#
# Get the date and set the variable
$Now = Get-Date
# Import the cmdlets
Import-Module Deduplication
#
$logFile01 = "C:\_Scripts\Logs\Dedupe_Report.txt"
#
# Get the cluster vip and set to variable
$HostName = (Get-WmiObject win32_computersystem).DNSHostName+"."+(Get-WmiObject win32_computersystem).Domain
#
#$OS = Get-Host {$_.WindowsProductName}
#
# delete previous days check
del $logFile01
#
Out-File "$logFile01" -Encoding ASCII
Add-Content $logFile01 "Deduplication Report for $HostName" -Encoding ASCII
Add-Content $logFile01 "`n$Now" -Encoding ASCII
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupJob
Add-Content $logFile01 "Deduplication job Queue" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupJob | Format-Table -AutoSize | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupSchedule
Add-Content $logFile01 "Deduplication Schedule" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupSchedule | Format-Table -AutoSize | Out-File -append -Encoding ASCII $logFile01
#
# Last Optimization Result and time
Add-Content $logFile01 "Last Optimization Result and time" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Select-Object LastOptimizationTime ,LastOptimizationResultMessage | Format-Table -Wrap | Out-File -append -Encoding ASCII $logFile01
#
# Last Garbage Collection Result and Time
Add-Content $logFile01 "Last Garbage Collection Result and Time" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Select-Object LastGarbageCollectionTime ,LastGarbageCollectionResultMessage | Format-Table -Wrap | Out-File -append -Encoding ASCII $logFile01
#
# Get-DedupVolume
$DedupVolumeLetter = Get-DedupVolume | select -ExpandProperty Volume
Add-Content $logFile01 "Deduplication Enabled Volumes" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupVolume | Format-Table -AutoSize | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "Volume $DedupVolumeLetter Details - " -Encoding ASCII
Get-DedupVolume | FL | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupStatus
Add-Content $logFile01 "Deduplication Summary" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Format-Table -AutoSize | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "Deduplication Status Details" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | FL | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupMetadata
Add-Content $logFile01 "Deduplication MetaData" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Add-Content $logFile01 "note - details about how deduplication processed the data on volume $DedupVolumeLetter " -Encoding ASCII
Get-DedupMetadata | FL | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-Dedupe Events
# Get-Dedupe Events - Resource usage - WIP
Add-Content $logFile01 "Deduplication Events" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-WinEvent -MaxEvents 10 -LogName Microsoft-Windows-Deduplication/Diagnostic | where ID -EQ "10243" | FL | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Change the -To, -From and -SmtpServer values to match your servers.
$Emailbody = Get-Content -Path $logFile01
[string[]]$recipients = "[email protected]"
Send-MailMessage -To $recipients -From [email protected] -subject "File services - Deduplication Report : $HostName " -SmtpServer smtp-relay.gmail.com -Attachments $logFile01
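To actually get the report out daily, a scheduled task can run the script; a sketch (the script path and time are examples, adjust to where you saved it):
schtasks /Create /TN "Dedupe Report" /SC DAILY /ST 06:00 /RU SYSTEM /TR "powershell.exe -ExecutionPolicy Bypass -File C:\_Scripts\Dedupe_report.ps1"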
Kafka 3.8 with Zookeeper SASL_SCRAM
Transport Encryption Methods:
SASL/SSL (Solid Teal/Green Lines):
- Used for securing communication between producers/consumers and Kafka brokers.
- SASL (Simple Authentication and Security Layer): Authenticates clients (producers/consumers) to brokers, using SCRAM.
- SSL/TLS (Secure Sockets Layer/Transport Layer Security): Encrypts the data in transit, ensuring confidentiality and integrity during transmission.
Digest-MD5 (Dashed Yellow Lines):
- Secures communication between Kafka brokers and the Zookeeper cluster.
- Digest-MD5: A challenge-response mechanism providing basic authentication
Notes:
While functional, Digest-MD5 is an older algorithm. We opted for it to reduce complexity, and because the Zookeeper nodes had issues connecting with the brokers via SSL/TLS.
- We need to test and switch over to the KRaft protocol, which removes the use of Zookeeper altogether
- Add IP ACLs for Zookeeper connections using firewalld to limit traffic between the nodes for replication
PKI and Certificate Signing
CA cert for the local PKI.
We need to share this PEM file (without the private key) with the customer to authenticate.
For internal applications, the CA file must be used for authentication – refer to the configuration example documents.
# Generate CA Key
openssl genrsa -out multicastbits_CA.key 4096
# Generate CA Certificate
openssl req -x509 -new -nodes -key multicastbits_CA.key -sha256 -days 3650 -out multicastbits_CA.crt -subj "/CN=multicastbits_CA"
Kafka Broker Certificates
# For Node1 - Repeat for other nodes
openssl req -new -nodes -out node1.csr -newkey rsa:2048 -keyout node1.key -subj "/CN=kafka01.multicastbits.com"
openssl x509 -req -CA multicastbits_CA.crt -CAkey multicastbits_CA.key -CAcreateserial -in node1.csr -out node1.crt -days 3650 -sha256
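The broker configuration later in this post references Java keystores (.jks), not these raw PEM files, so each node’s signed certificate has to be bundled into a keystore/truststore pair; a sketch for node1 (aliases, file names, and passwords are placeholders):
# bundle the signed cert and key into a PKCS12, then convert it to the broker keystore
openssl pkcs12 -export -in node1.crt -inkey node1.key -certfile multicastbits_CA.crt -name kafka01 -out node1.p12 -passout pass:keystorePassword
keytool -importkeystore -srckeystore node1.p12 -srcstoretype PKCS12 -srcstorepass keystorePassword -destkeystore kafkanode1.keystore.jks -deststorepass keystorePassword
# trust anything signed by our CA
keytool -importcert -trustcacerts -alias multicastbitsCA -file multicastbits_CA.crt -keystore kafkanode1.truststore.jks -storepass truststorePassword -noprompt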
Create the kafka and zookeeper users
⚠️ Important: Do not skip this step. We need these users to set up authentication in the JaaS configuration.
Before configuring the cluster with SSL and SASL, let’s start the cluster without authentication or SSL to create the users. This allows us to:
- Verify basic dependencies and confirm the Zookeeper and Kafka clusters come up without any issues (“make sure the car starts”)
- Create necessary user accounts for SCRAM
- Test for any inter-node communication issues (blocked ports 9092, 9093, 2181, etc.)
Here’s how to set up this initial configuration:
Zookeeper Configuration (No SSL or Auth)
Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
# Zookeeper Configuration without Auth
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888
Kafka Broker Configuration (No SSL or Auth)
Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
# Kafka Broker Configuration without Auth/SSL
broker.id=1
listeners=PLAINTEXT://kafka01.multicastbits.com:9092
advertised.listeners=PLAINTEXT://kafka01.multicastbits.com:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
Open a new shell to the server and start Zookeeper:
/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
Open a new shell to start Kafka:
/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
Create the users:
Open a new shell and run the following commands:
kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9092 --alter --add-config 'SCRAM-SHA-512=[password=zookeeper-password]' --entity-type users --entity-name multicastbitszk
kafka-configs.sh --zookeeper kafka01.multicastbits.com:2181 --alter --add-config 'SCRAM-SHA-512=[password=kafkaadmin-password]' --entity-type users --entity-name multicastbitskafkaadmin
After the users are created without errors, press Ctrl+C to shut down the services we started earlier.
SASL_SSL configuration with SCRAM
Zookeeper configuration Notes
- Zookeeper is configured with SASL/Digest-MD5 due to the SSL issues we faced during the initial setup
- Zookeeper traffic is isolated within the broker nodes to maintain security
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
Make sure the /Data_Disk/zookeeper/myid file is updated to match each Zookeeper node ID:
cat /Data_Disk/zookeeper/myid
1
Jaas configuration
Create the JaaS configuration for Zookeeper authentication; it has to follow this syntax:
/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_multicastbitszk="zkpassword";
};
KAFKA_OPTS
The KAFKA_OPTS Java variable needs to be passed when Zookeeper is started so it points to the correct JaaS file:
export KAFKA_OPTS="-Djava.security.auth.login.config=<path to zookeeper-jaas.conf>"
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"
There are a few ways to handle this: you can add a script under profile.d, or use a custom Zookeeper launch script for the systemd service.
Systemd service
Create the launch shell script for Zookeeper
/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"
#Start the zookeeper service
/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
After you save the file
chmod +x /opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
sudo chown -R multicastbitskafka:multicastbitskafka /opt/kafka/kafka_2.13-3.8.0
Create the systemd service file
/etc/systemd/system/zookeeper.service
[Unit]
Description=Apache Zookeeper Service
After=network.target
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
After the file is saved, start the service
sudo systemctl daemon-reload
sudo systemctl enable zookeeper
sudo systemctl start zookeeper
Kafka Broker configuration Notes
/opt/kafka/kafka_2.13-3.8.0/config/server.properties
broker.id=1
listeners=SASL_SSL://kafka01.multicastbits.com:9093
advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093
listener.security.protocol.map=SASL_SSL:SASL_SSL
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks
ssl.keystore.password=keystorePassword
ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks
ssl.truststore.password=truststorePassword
#SASL/SCRAM Authentication
sasl.enabled.mechanisms=SCRAM-SHA-256, SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.mechanism.client=SCRAM-SHA-512
security.inter.broker.protocol=SASL_SSL
#zookeeper
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
zookeeper.sasl.client=true
zookeeper.sasl.clientconfig=ZookeeperClient
zookeeper connect options
Define the zookeeper servers the broker will connect to
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
Enable SASL
zookeeper.sasl.client=true
Tell the broker to use the credentials defined under the ZookeeperClient section of the JaaS file used by the Kafka service:
zookeeper.sasl.clientconfig=ZookeeperClient
Broker and listener configuration
Define the broker id
broker.id=1
Define the server’s listener name and port:
listeners=SASL_SSL://kafka01.multicastbits.com:9093
Define the server’s advertised listener name and port:
advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093
Define SASL_SSL as the security protocol:
listener.security.protocol.map=SASL_SSL:SASL_SSL
Enable ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
Define the Java Keystores
ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks
ssl.keystore.password=keystorePassword
ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks
ssl.truststore.password=truststorePassword
Jaas configuration
/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="multicastbitskafkaadmin"
password="kafkaadmin-password";
};
ZookeeperClient {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="multicastbitszk"
password="Zookeeper_password";
};
SASL and SCRAM configuration Notes
Enable SASL SCRAM for authentication
org.apache.kafka.common.security.scram.ScramLoginModule required
Use Digest-MD5 for Zookeeper authentication:
org.apache.zookeeper.server.auth.DigestLoginModule required
KAFKA_OPTS
The KAFKA_OPTS Java variable must point to the correct JaaS file and be passed when the Kafka service is started:
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
Systemd service
Create the launch shell script for kafka
/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh
#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
#Start the kafka service
/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
Create the systemd service
/etc/systemd/system/kafka.service
[Unit]
Description=Apache Kafka Broker Service
After=network.target zookeeper.service
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Connect authenticate and use Kafka CLI tools
Requirements
- multicastbitsadmin.keystore.jks
- multicastbitsadmin.truststore.jks
- WSL2 with java-11-openjdk-devel, wget, nano
- Kafka 3.8 folder extracted locally
Setup your environment
- Set up WSL2 (you can use any Linux environment with JDK 17 or 11)
- Install dependencies:
dnf install -y wget nano java-11-openjdk-devel
Download Kafka and extract it (I’m going to extract it to the home dir under ~/kafka):
# 1. Download Kafka (Choose a version compatible with your server)
wget https://dlcdn.apache.org/kafka/3.8.0/kafka_2.13-3.8.0.tgz
# 2. Extract
tar xzf kafka_2.13-3.8.0.tgz
Copy the .jks files (you should generate them with the CA JKS, or use a pair from one of the nodes) to ~/ :
cp multicastbitsadmin.keystore.jks ~/
cp multicastbitsadmin.truststore.jks ~/
Create your admin client properties file (change the paths to fit your setup):
nano ~/kafka-adminclient.properties
# Security protocol and SASL/SSL configuration
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
# SSL Configuration
ssl.keystore.location=/opt/kafka/secrets/multicastbitsadmin.keystore.jks
ssl.keystore.password=keystorepw
ssl.truststore.location=/opt/kafka/secrets/multicastbitsadmin.truststore.jks
ssl.truststore.password=truststorepw
# SASL Configuration
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="#youradminUser#" \
  password="#your-admin-PW#";
Create the JaaS file for the admin client
nano ~/kafka_client_jaas.conf
Some Kafka CLI tools still look for the jaas.conf via the KAFKA_OPTS environment variable.
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="#youradminUser#"
password="#your-admin-PW#";
};
Export the Kafka environment variables (add them to your ~/.bashrc so they persist):
export KAFKA_HOME=/opt/kafka/kafka_2.13-3.8.0
export PATH=$PATH:$KAFKA_HOME/bin
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
export KAFKA_OPTS="-Djava.security.auth.login.config=$HOME/kafka_client_jaas.conf"
source ~/.bashrc
Kafka CLI Usage Examples
Create a user
kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9093 --alter --add-config 'SCRAM-SHA-512=[password=#password#]' --entity-type users --entity-name %username% --command-config ~/kafka-adminclient.properties
Create a topic
kafka-topics.sh --bootstrap-server kafka01.multicastbits.com:9093 --create --topic %topicname% --partitions 10 --replication-factor 3 --command-config ~/kafka-adminclient.properties
Create ACLs
External customer user with READ and DESCRIBE privileges on a single topic:
kafka-acls.sh --bootstrap-server kafka01.multicastbits.com:9093 \
  --command-config ~/kafka-adminclient.properties \
  --add --allow-principal User:customer-user01 \
  --operation READ --operation DESCRIBE --topic Customer_topic
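To sanity-check the ACL end to end, the console producer/consumer can reuse the same properties file (swap in one with the customer’s credentials to confirm READ works and writes are denied):
kafka-console-producer.sh --bootstrap-server kafka01.multicastbits.com:9093 --topic Customer_topic --producer.config ~/kafka-adminclient.properties
kafka-console-consumer.sh --bootstrap-server kafka01.multicastbits.com:9093 --topic Customer_topic --from-beginning --consumer.config ~/kafka-adminclient.properties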
Troubleshooting
Here are some common issues you might encounter when setting up and using Kafka with SASL_SCRAM authentication, along with their solutions:
1. Connection refused errors
Issue: Clients unable to connect to Kafka brokers.
Solution:
- Verify that the Kafka brokers are running and listening on the correct ports.
- Check firewall settings to ensure the Kafka ports are open and accessible.
- Confirm that the bootstrap server addresses in client configurations are correct.
2. Authentication failures
Issue: Clients fail to authenticate with Kafka brokers.
Solution:
- Double-check username and password in the JAAS configuration file.
- Ensure the SCRAM credentials are properly set up on the Kafka brokers.
- Verify that the correct SASL mechanism (SCRAM-SHA-512) is specified in client configurations.
3. SSL/TLS certificate issues
Issue: SSL handshake failures or certificate validation errors.
Solution:
- Confirm that the keystore and truststore files are correctly referenced in configurations.
- Verify that the certificates in the truststore are up-to-date and not expired.
- Ensure that the hostname in the certificate matches the broker’s advertised listener.
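A quick way to see the certificate a broker is actually presenting (the hostname and CA file are examples):
openssl s_client -connect kafka01.multicastbits.com:9093 -CAfile multicastbits_CA.crt </dev/null 2>/dev/null | openssl x509 -noout -subject -dates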
4. Zookeeper connection issues
Issue: Kafka brokers unable to connect to Zookeeper ensemble.
Solution:
- Verify Zookeeper connection string in Kafka broker configurations.
- Ensure the Zookeeper servers are running and accessible, and the ports are open.
- Check the Zookeeper client authentication settings in the JAAS configuration file.