“System logs on hosts are stored on non-persistent storage” message on vCenter
Ran into this pesky little error message recently on a vCenter environment.
If the logs are stored on a local scratch disk, vCenter will display an alert stating: “System logs on host xxx are stored on non-persistent storage”

Configure ESXi Syslog location – vSphere Web Client
vCenter > Select “Host” > Configure > Advanced System Settings

Click on Edit and search for “Syslog.global.logDir”

Edit the value; in this case, I’m going to use the local datastore (Localhost_DataStore01) to store the syslogs.
You can also define a remote syslog server using the “Syslog.global.logHost” setting.
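For reference, the value uses the standard “[datastore] path” notation; for example (the datastore name and folder here are just an illustration):
[Localhost_DataStore01] /systemlogs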

Configure ESXi Syslog location – ESXCLI
SSH on to the host
Check the current location
esxcli system syslog config get

*logs stored on the local scratch disk
Manually Set the Path
esxcli system syslog config set --logdir=/vmfs/directory/path
You can find the VMFS volume names/UUIDs under:
/vmfs/volumes
A remote syslog server can be set using:
esxcli system syslog config set --loghost='tcp://hostname:port'
Load the configuration changes with the syslog reload command
esxcli system syslog reload
The logs will immediately begin populating the specified location.
Vagrant Ansible LAB Guide – Bridged network
Here’s a quick guide to get you started with an Ansible core lab using Vagrant.
Alright, let’s get started.
TLDR Version
- Install Vagrant
- Install VirtualBox
- Create a project folder and cd into it
vagrant init
- Vagrantfile – link
- Vagrant provisioning shell script to deploy Ansible – link
- Install the vagrant-vbguest plugin to deploy the missing guest additions
vagrant plugin install vagrant-vbguest
- Bring up the Vagrant environment
vagrant up
Install Vagrant and VirtualBox
For this demo we are using Windows 10 1909, but you can use the same guide for macOS.
Windows
Download Vagrant and VirtualBox and install them the good ol’ way –
https://www.vagrantup.com/downloads.html https://www.virtualbox.org/wiki/Downloads https://www.vagrantmanager.com/downloads/
Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)
vagrant plugin install vagrant-vbguest
Or Using chocolatey
choco install vagrant
choco install virtualbox
choco install vagrant-manager
Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)
vagrant plugin install vagrant-vbguest
macOS – using Homebrew Cask
Install virtual box
$ brew cask install virtualbox
Now install Vagrant either from the website or use homebrew for installing it.
$ brew cask install vagrant
Vagrant-Manager is a nice way to manage all your virtual machines in one place directly from the menu bar.
$ brew cask install vagrant-manager
Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)
vagrant plugin install vagrant-vbguest
Setup the Vagrant Environment
Open Powershell
To get started, let’s check our environment:
vagrant version

Create a project directory and Initialize the environment
For the project directory, I’m using D:\vagrant
Open PowerShell and run:
mkdir D:\vagrant
cd D:\vagrant
Initialize the environment under the project folder:
vagrant init

This will create two items:

.vagrant – a hidden folder holding base machines and metadata
Vagrantfile – the Vagrant config file
Let’s create the Vagrantfile to deploy the VMs
https://www.vagrantup.com/docs/vagrantfile/
Vagrantfiles use Ruby syntax, which gives us a lot of flexibility to program logic into the build.
I’m using Atom to edit the Vagrantfile.
Vagrant.configure("2") do |config|
  config.vm.define "controller" do |controller|
    controller.vm.box = "ubuntu/trusty64"
    controller.vm.hostname = "LAB-Controller"
    controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.120"
    controller.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
    end
    controller.vm.provision :shell, path: 'Ansible_LAB_setup.sh'
  end
  (1..3).each do |i|
    config.vm.define "vls-node#{i}" do |node|
      node.vm.box = "ubuntu/trusty64"
      node.vm.hostname = "vls-node#{i}"
      node.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.12#{i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = "1024"
      end
    end
  end
end
You can grab the code from my Repo
https://github.com/malindarathnayake/Ansible_Vagrant_LAB/blob/master/Vagrantfile
Let’s talk a little bit about this code and unpack it
Vagrant API version

Vagrant uses API versions for its configuration file; this is how it stays backward compatible. So in every Vagrantfile we need to specify which version to use. The current one is version 2, which works with Vagrant 1.1 and up.
Provisioning the Ansible VM

This will
- Provision the controller Ubuntu VM
- Create a bridged network adapter
- Set the host-name – LAB-Controller
- Set the static IP – 172.17.10.120/24
- Run the Shell script that installs Ansible using apt-get install (We will get to this below)
Let’s start digging in…
Specifying the Controller VM Name, base box and hostname

Vagrant uses a base image to clone a virtual machine quickly. These base images are known as “boxes” in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.
You can find different base boxes from app.vagrantup.com
Or you can create custom base boxes for pretty much anything including “CiscoVIRL(CML)” images – keep an eye out for the next article on this
Network configurations

controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "your IP"
In this case, we are asking Vagrant to create a bridged adapter using the Intel(R) I211 NIC and assign the IP address you defined in the ip attribute.
You can find the relevant interface name using:
get-netadapter

You can also create a host-only private network
controller.vm.network :private_network, ip: "10.0.0.10"
For more info, check out the networking section in the docs:
https://www.vagrantup.com/docs/networking/
Define the provider and VM resources

We’re declaring VirtualBox (which we installed earlier) as the provider and setting the VM memory to 2048 MB.
You can get more granular with this, refer to the below KB
https://www.vagrantup.com/docs/virtualbox/configuration.html
Define the shell script to customize the VM config and install the Ansible Package

Now this is where we define the provisioning shell script.
This script installs Ansible and sets up the hosts file entries to make your life easier.
In case you are wondering, VLS stands for V – virtual, L – Linux, S – server.
I use this naming scheme for my VMs. Feel free to use anything you want; just make sure it matches what you defined in the Vagrantfile under node.vm.hostname.
#!/bin/bash
sudo apt-get update
sudo apt-get install software-properties-common -y
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible -y
echo "
172.17.10.120 LAB-controller
172.17.10.121 vls-node1
172.17.10.122 vls-node2
172.17.10.123 vls-node3" >> /etc/hosts
Create this file and save it as Ansible_LAB_setup.sh in the project folder.
In this case I’m going to save it under D:\vagrant.
You can also do this inline with a script block instead of using a separate file
https://www.vagrantup.com/docs/provisioning/basic_usage.html
Provisioning the Member servers for the lab

We covered most of this code above; the only difference here is that we’re using Ruby’s each method to create three VMs from the same template (I’m lazy and it’s more convenient).
This will create three Ubuntu VMs with the following hostnames and IP addresses. You should update these values to match your LAN, or use a private adapter.
vls-node1 – 172.17.10.121
vls-node2 – 172.17.10.122
vls-node3 – 172.17.10.123
So now that we are done explaining the code, let’s run it.
Building the Lab environment using Vagrant
Issue the following command to check the status of the environment (this will also catch Vagrantfile syntax errors):
vagrant status
Issue the following command to bring up the environment:
vagrant up

If you get this message, reboot into UEFI/BIOS and make sure virtualization is enabled:
Intel – VT-x
AMD Ryzen – SVM
If everything is kumbaya, you will see Vagrant firing up the deployment.

It will provision 4 VMs as we specified
Notice that since we have the “vagrant-vbguest” plugin installed, it will reinstall the relevant guest additions along with the dependencies for the OS.
==> vls-node3: Machine booted and ready!
[vls-node3] No Virtualbox Guest Additions installation found.
rmmod: ERROR: Module vboxsf is not currently loaded
rmmod: ERROR: Module vboxguest is not currently loaded
Reading package lists...
Building dependency tree...
Reading state information...
Package 'virtualbox-guest-x11' is not installed, so not removed
The following packages will be REMOVED:
  virtualbox-guest-utils*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5799 kB disk space will be freed.
(Reading database ... 61617 files and directories currently installed.)
Removing virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for man-db (2.8.7-3) ...
(Reading database ... 61604 files and directories currently installed.)
Purging configuration files for virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for systemd (242-7ubuntu3.7) ...
Reading package lists...
Building dependency tree...
Reading state information...
linux-headers-5.3.0-51-generic is already the newest version (5.3.0-51.44).
linux-headers-5.3.0-51-generic set to manually installed.
Check the status
vagrant status


Testing
Connecting via SSH to your VMs
vagrant ssh controller
“controller” is the VM name we defined earlier, not the hostname. You can find it by running vagrant status in PowerShell or your terminal.
We are going to connect to our controller and check that everything is in order.
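Once inside, a few quick checks confirm the provisioning script did its job (these assume the Vagrantfile and shell script shown above):
ansible --version
grep vls-node /etc/hosts
ping -c 2 vls-node1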


A little bit more information on the networking side:
Vagrant adds two interfaces for each VM.
NIC 1 – NATed to the host (the control plane Vagrant uses to manage the VMs)

NIC 2 – Bridged adapter we provisioned in the script with the IP Address

The default route is set via the private (NATed) interface (you can’t change it)

Netplan configs
Vagrant creates a custom netplan YAML for the interface configs


Destroy/Tear-down the environment
vagrant destroy -f
https://www.vagrantup.com/intro/getting-started/teardown.html
I hope this helped someone. When I started with Vagrant a few years back, it took me a few tries to figure out the system and the logic behind it; this should give you a basic understanding of how things are plugged together.
Let me know in the comments if you see any issues or mistakes.
Until Next time…..
Setup guide for VSFTPD FTP Server – SELinux enforced with fail2ban (RHEL, CentOS, Almalinux)
A few things to note:
- If you want to prevent directory traversal, you need to set up chroot with vsftpd (not covered in this KB)
- For the demo I just used unencrypted FTP on port 21 to keep things simple. Please use FTPS with a Let’s Encrypt certificate for better security; I will cover this in another article and link it here
Update and Install packages we need
sudo dnf update
sudo dnf install net-tools lsof unzip zip tree policycoreutils-python-utils-2.9-20.el8.noarch vsftpd nano setroubleshoot-server -y
Setup Groups and Users and security hardening
If you want to prevent directory traversal, you need to set up chroot with vsftpd (not covered in this KB).
Create the Service admin account
sudo useradd ftpadmin
sudo passwd ftpadmin
Create the group
sudo groupadd FTP_Root_RW
Create FTP only user shell for the FTP users
echo -e '#!/bin/sh\necho "This account is limited to FTP access only."' | sudo tee -a /bin/ftponly
sudo chmod a+x /bin/ftponly
echo "/bin/ftponly" | sudo tee -a /etc/shells
Create FTP users
sudo useradd ftpuser01 -m -s /bin/ftponly
sudo useradd ftpuser02 -m -s /bin/ftponly
sudo passwd ftpuser01
sudo passwd ftpuser02
Add the users to the group
sudo usermod -a -G FTP_Root_RW ftpuser01
sudo usermod -a -G FTP_Root_RW ftpuser02
sudo usermod -a -G FTP_Root_RW ftpadmin
Disable SSH Access for the FTP users.
Edit sshd_config
sudo nano /etc/ssh/sshd_config
Add the following line to the end of the file
DenyUsers ftpuser01 ftpuser02
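Restart sshd so the change takes effect:
sudo systemctl restart sshd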
Open ports on the VM Firewall
sudo firewall-cmd --permanent --add-port=20-21/tcp
#Allow the passive Port-Range we will define it later on the vsftpd.conf
sudo firewall-cmd --permanent --add-port=60000-65535/tcp
#Reload the ruleset
sudo firewall-cmd --reload
Setup the Second Disk for FTP DATA
Attach another disk to the VM and reboot if you haven’t done this already
Run lsblk to check the current disks and partitions detected by the system
lsblk

Create the XFS partition
sudo mkfs.xfs /dev/sdb
# use mkfs.ext4 for ext4
Why XFS? https://access.redhat.com/articles/3129891

Create the folder for the mount point
sudo mkdir /FTP_DATA_DISK
Update the /etc/fstab file and add the following line:
sudo nano /etc/fstab
/dev/sdb /FTP_DATA_DISK xfs defaults 1 2
Mount the disk
sudo mount -a
Testing
mount | grep sdb

Setup the VSFTPD Data and Log Folders
Setup the FTP Data folder
sudo mkdir /FTP_DATA_DISK/FTP_Root -p
Create the log directory
sudo mkdir /FTP_DATA_DISK/_logs/ -p
Set permissions
sudo chgrp -R FTP_Root_RW /FTP_DATA_DISK/FTP_Root/
sudo chmod 775 -R /FTP_DATA_DISK/FTP_Root/
Setup the VSFTPD Config File
Back up the default vsftpd.conf and create a new one
sudo mv /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpdconfback
sudo nano /etc/vsftpd/vsftpd.conf
#KB Link - ####
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=002
dirmessage_enable=YES
ftpd_banner=Welcome to multicastbits Secure FTP service.
chroot_local_user=NO
chroot_list_enable=NO
chroot_list_file=/etc/vsftpd/chroot_list
listen=YES
listen_ipv6=NO
userlist_file=/etc/vsftpd/user_list
pam_service_name=vsftpd
userlist_enable=YES
userlist_deny=NO
listen_port=21
connect_from_port_20=YES
local_root=/FTP_DATA_DISK/FTP_Root/
xferlog_enable=YES
vsftpd_log_file=/FTP_DATA_DISK/_logs/vsftpd.log
log_ftp_protocol=YES
dirlist_enable=YES
download_enable=NO
pasv_enable=Yes
pasv_max_port=65535
pasv_min_port=60000
Add the FTP users to the userlist file
Backup the Original file
sudo mv /etc/vsftpd/user_list /etc/vsftpd/user_listBackup
echo "ftpuser01" | sudo tee -a /etc/vsftpd/user_list
echo "ftpuser02" | sudo tee -a /etc/vsftpd/user_list
sudo systemctl start vsftpd
sudo systemctl enable vsftpd
sudo systemctl status vsftpd

Setup SELinux
Instead of throwing our hands up and disabling SELinux, we are going to set up the policies correctly.
Find the available policies using getsebool -a | grep ftp
getsebool -a | grep ftp
ftpd_anon_write --> off
ftpd_connect_all_unreserved --> off
ftpd_connect_db --> off
ftpd_full_access --> off
ftpd_use_cifs --> off
ftpd_use_fusefs --> off
ftpd_use_nfs --> off
ftpd_use_passive_mode --> off
httpd_can_connect_ftp --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off
tftp_home_dir --> off
Set SELinux boolean values
sudo setsebool -P ftpd_use_passive_mode on
sudo setsebool -P ftpd_use_cifs on
sudo setsebool -P ftpd_full_access 1
"setsebool" is a tool for setting SELinux boolean values, which control various aspects of the SELinux policy.
"-P" specifies that the boolean value should be set permanently, so that it persists across system reboots.
"ftpd_use_passive_mode" is the name of the boolean value that should be set. This boolean value controls whether the vsftpd FTP server should use passive mode for data connections.
"on" specifies that the boolean value should be set to "on", which means that vsftpd should use passive mode for data connections.
Enable ftp_home_dir --> on if you are using chroot
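You can confirm the booleans stuck by querying them back:
getsebool ftpd_use_passive_mode ftpd_use_cifs ftpd_full_access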
Add a new file context rule to the system.
sudo semanage fcontext -a -t public_content_rw_t "/FTP_DATA_DISK/FTP_Root/(/.*)?"
"fcontext" is short for "file context", which refers to the security context that is associated with a file or directory.
"-a" specifies that a new file context rule should be added to the system.
"-t" specifies the new file context type that should be assigned to files or directories that match the rule.
"public_content_rw_t" is the name of the new file context type that should be assigned to files or directories that match the rule. In this case, "public_content_rw_t" is a predefined SELinux type that allows read and write access to files and directories in public directories, such as /var/www/html.
"/FTP_DATA_DISK/FTP_Root/(/.*)?" specifies the file path pattern that the rule should match. The pattern includes the "/FTP_DATA_DISK/FTP_Root/" directory and any subdirectories or files beneath it. The regular expression "(/.*)?" matches any subdirectory or file path that may follow the "/FTP_DATA_DISK/FTP_Root/" directory.
In summary, this command sets the file context type for all files and directories under the "/FTP_DATA_DISK/FTP_Root/" directory and its subdirectories to "public_content_rw_t", which allows read and write access to these files and directories.
Reset the SELinux security context for all files and directories under the “/FTP_DATA_DISK/FTP_Root/”
sudo restorecon -Rvv /FTP_DATA_DISK/FTP_Root/
"restorecon" is a tool that resets the SELinux security context for files and directories to their default values.
"-R" specifies that the operation should be recursive, meaning that the security context should be reset for all files and directories under the specified directory.
"-vv" specifies that the command should run in verbose mode, which provides more detailed output about the operation.
"/FTP_DATA_DISK/FTP_Root/" is the path of the directory whose security context should be reset.
Setup Fail2ban
Install fail2ban
sudo dnf install fail2ban
Create the jail.local file
This file is used to override the config blocks in /etc/fail2ban/jail.conf
sudo nano /etc/fail2ban/jail.local
[vsftpd]
enabled = true
port = ftp,ftp-data,ftps,ftps-data
logpath = /FTP_DATA_DISK/_logs/vsftpd.log
maxretry = 5
bantime = 7200
Make sure to update the logpath directive to match the vsftpd log file we defined on the vsftpd.conf file
sudo systemctl start fail2ban
sudo systemctl enable fail2ban
sudo systemctl status fail2ban
journalctl -u fail2ban will help you narrow down any issues with the service
Testing
sudo tail -f /var/log/fail2ban.log

Fail2ban injects and manages the following rich rules

Client will fail to connect using FTP until the ban is lifted

Remove the ban IP list
#get the list of banned IPs
sudo fail2ban-client get vsftpd banned
#Remove a specific IP from the list
sudo fail2ban-client set vsftpd unbanip <IP>
#Remove/Reset all the banned IP lists
sudo fail2ban-client unban --all
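You can also check the jail status and current ban counts at any time:
sudo fail2ban-client status vsftpd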
This should get you up and running. For the demo I just used unencrypted FTP on port 21 to keep things simple; please use FTPS with a Let’s Encrypt certificate for better security. I will cover this in another article and link it here.
Kafka 3.8 with Zookeeper SASL_SCRAM
Transport Encryption Methods:
SASL/SSL (Solid Teal/Green Lines):
- Used for securing communication between producers/consumers and Kafka brokers.
- SASL (Simple Authentication and Security Layer): Authenticates clients (producers/consumers) to brokers using SCRAM.
- SSL/TLS (Secure Sockets Layer/Transport Layer Security): Encrypts the data in transit, ensuring confidentiality and integrity during transmission.
Digest-MD5 (Dashed Yellow Lines):
- Secures communication between Kafka brokers and the Zookeeper cluster.
- Digest-MD5: A challenge-response authentication mechanism that provides basic credential protection
Notes:
While functional, Digest-MD5 is an older mechanism. We opted for it to reduce complexity, and because the Zookeeper nodes had issues connecting to the brokers via SSL/TLS.
- We need to test and switch over to the KRaft protocol, which removes the use of Zookeeper altogether
- Add IP ACLs for Zookeeper connections using firewalld to limit traffic between the nodes for replication
PKI and Certificate Signing
CA cert for the local PKI.
We need to share this PEM file (without the private key) with the customer to authenticate.
For internal applications, the CA file must be used for authentication – refer to the configuration example documents.
# Generate CA Key
openssl genrsa -out multicastbits_CA.key 4096
# Generate CA Certificate
openssl req -x509 -new -nodes -key multicastbits_CA.key -sha256 -days 3650 -out multicastbits_CA.crt -subj "/CN=multicastbits_CA"
Kafka Broker Certificates
# For Node1 - Repeat for other nodes
openssl req -new -nodes -out node1.csr -newkey rsa:2048 -keyout node1.key -subj "/CN=kafka01.multicastbits.com"
openssl x509 -req -CA multicastbits_CA.crt -CAkey multicastbits_CA.key -CAcreateserial -in node1.csr -out node1.crt -days 3650 -sha256
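The broker configuration later in this post references JKS keystores and truststores. Here’s a minimal sketch of packaging the signed certificate into those stores; the store names and passwords are placeholders chosen to line up with the example broker config below:
# Bundle the node key and signed certificate (plus the CA) into a PKCS12 file
openssl pkcs12 -export -in node1.crt -inkey node1.key -certfile multicastbits_CA.crt -name kafkanode1 -out node1.p12 -passout pass:keystorePassword
# Convert the PKCS12 bundle into the broker keystore
keytool -importkeystore -srckeystore node1.p12 -srcstoretype PKCS12 -srcstorepass keystorePassword -destkeystore kafkanode1.keystore.jks -deststorepass keystorePassword
# Import the CA certificate into the broker truststore
keytool -import -trustcacerts -noprompt -alias multicastbits_CA -file multicastbits_CA.crt -keystore kafkanode1.truststore.jks -storepass truststorePassword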
Create the kafka and zookeeper users
⚠️ Important: Do not skip this step. We need these users to set up authentication in the JaaS configuration.
Before configuring the cluster with SSL and SASL, let’s start up the cluster without authentication and SSL to create the users. This allows us to:
- Verify basic dependencies and confirm the Zookeeper and Kafka clusters come up without any issues (“make sure the car starts”)
- Create necessary user accounts for SCRAM
- Test for any inter-node communication issues (blocked ports 9092, 9093, 2181, etc.)
Here’s how to set up this initial configuration:
Zookeeper Configuration (No SSL or Auth)
Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
# Zookeeper Configuration without Auth
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888
Kafka Broker Configuration (No SSL or Auth)
Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
# Kafka Broker Configuration without Auth/SSL
broker.id=1
listeners=PLAINTEXT://kafka01.multicastbits.com:9092
advertised.listeners=PLAINTEXT://kafka01.multicastbits.com:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
Open a new shell to the server and start Zookeeper:
/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
Open a new shell to start Kafka:
/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
Create the users:
Open a new shell and run the following commands:
kafka-configs.sh --bootstrap-server ext-kafka01.fleetcam.io:9092 --alter --add-config 'SCRAM-SHA-512=[password=zookeeper-password]' --entity-type users --entity-name ftszk
kafka-configs.sh --zookeeper ext-kafka01.fleetcam.io:2181 --alter --add-config 'SCRAM-SHA-512=[password=kafkaadmin-password]' --entity-type users --entity-name ftskafkaadmin
After the users are created without errors, press Ctrl+C to shut down the services we started earlier.
SASL_SSL configuration with SCRAM
Zookeeper configuration Notes
- Zookeeper is configured with SASL/DIGEST-MD5 due to the SSL issues we faced during the initial setup
- Zookeeper traffic is isolated within the broker nodes to maintain security
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
The /Data_Disk/zookeeper/myid file is updated to match the Zookeeper node ID:
cat /Data_Disk/zookeeper/myid
1
Jaas configuration
Create the JaaS configuration for Zookeeper authentication; it has to follow this syntax:
/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_multicastbitszk="zkpassword";
};
KAFKA_OPTS
The KAFKA_OPTS Java variable needs to be passed when Zookeeper is started so it points to the correct JaaS file:
export KAFKA_OPTS="-Djava.security.auth.login.config=<path to zookeeper-jaas.conf>"
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"
There are a few ways to handle this; you can add a script under profile.d or use a custom Zookeeper launch script for the systemd service.
Systemd service
Create the launch shell script for Zookeeper
/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"
#Start the zookeeper service
/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties
After you save the file:
chmod +x /opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
sudo chown -R multicastbitskafka:multicastbitskafka /opt/kafka/kafka_2.13-3.8.0
Create the systemd service file
/etc/systemd/system/zookeeper.service
[Unit]
Description=Apache Zookeeper Service
After=network.target
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
After the file is saved, start the service
sudo systemctl daemon-reload
sudo systemctl enable zookeeper
sudo systemctl start zookeeper
Kafka Broker configuration Notes
/opt/kafka/kafka_2.13-3.8.0/config/server.properties
broker.id=1
listeners=SASL_SSL://kafka01.multicastbits.com:9093
advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093
listener.security.protocol.map=SASL_SSL:SASL_SSL
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks
ssl.keystore.password=keystorePassword
ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks
ssl.truststore.password=truststorePassword
#SASL/SCRAM Authentication
sasl.enabled.mechanisms=SCRAM-SHA-256, SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.mechanism.client=SCRAM-SHA-512
security.inter.broker.protocol=SASL_SSL
#zookeeper
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
zookeeper.sasl.client=true
zookeeper.sasl.clientconfig=ZookeeperClient
zookeeper connect options
Define the zookeeper servers the broker will connect to
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
Enable SASL
zookeeper.sasl.client=true
Tell the broker to use the credentials defined under the ZookeeperClient section in the JaaS file used by the Kafka service
zookeeper.sasl.clientconfig=ZookeeperClient
Broker and listener configuration
Define the broker id
broker.id=1
Define the server's listener name and port
listeners=SASL_SSL://kafka01.multicastbits.com:9093
Define the server's advertised listener name and port
advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093
Define SASL_SSL as the security protocol
listener.security.protocol.map=SASL_SSL:SASL_SSL
Enable ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
Define the Java Keystores
ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks
ssl.keystore.password=keystorePassword
ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks
ssl.truststore.password=truststorePassword
Jaas configuration
/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="multicastbitskafkaadmin"
password="kafkaadmin-password";
};
ZookeeperClient {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="multicastbitszk"
password="Zookeeper_password";
};
SASL and SCRAM configuration Notes
Enable SASL SCRAM for authentication
org.apache.kafka.common.security.scram.ScramLoginModule required
Use Digest-MD5 for Zookeeper authentication
org.apache.zookeeper.server.auth.DigestLoginModule required
KAFKA_OPTS
The KAFKA_OPTS Java variable needs to be passed when the Kafka service is started and must point to the correct JaaS file:
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
Systemd service
Create the launch shell script for kafka
/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh
#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
#Start the kafka service
/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties
Create the systemd service
/etc/systemd/system/kafka.service
[Unit]
Description=Apache Kafka Broker Service
After=network.target zookeeper.service
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
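After the file is saved, reload systemd and start the broker, same as we did for Zookeeper:
sudo systemctl daemon-reload
sudo systemctl enable kafka
sudo systemctl start kafka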
Connect authenticate and use Kafka CLI tools
Requirements
- multicastbitsadmin.keystore.jks
- multicastbitsadmin.truststore.jks
- WSL2 with java-11-openjdk-devel, wget, and nano
- Kafka 3.8 folder extracted locally
Setup your environment
- Set up WSL2 (you can use any Linux environment with JDK 17 or 11)
- Install dependencies
dnf install -y wget nano java-11-openjdk-devel
Download Kafka and extract it (I’m going to extract it to the home directory, under ~/kafka)
# 1. Download Kafka (Choose a version compatible with your server)
wget https://dlcdn.apache.org/kafka/3.8.0/kafka_2.13-3.8.0.tgz
# 2. Extract
tar xzf kafka_2.13-3.8.0.tgz
Copy the jks files (You should generate them with the CA JKS, or use one from one of the nodes) to ~/
cp multicastbitsadmin.keystore.jks ~/
cp multicastbitsadmin.truststore.jks ~/
Create your admin client properties file
Change the paths to fit your setup:
nano ~/kafka-adminclient.properties
# Security protocol and SASL/SSL configuration
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
# SSL Configuration
ssl.keystore.location=/opt/kafka/secrets/multicastbitsadmin.keystore.jks
ssl.keystore.password=keystorepw
ssl.truststore.location=/opt/kafka/secrets/multicastbitsadmin.truststore.jks
ssl.truststore.password=truststorepw
# SASL Configuration
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="#youradminUser#" \
  password="#your-admin-PW#";
Create the JaaS file for the admin client
nano ~/kafka_client_jaas.conf
Some Kafka CLI tools still look for the JaaS config via the KAFKA_OPTS environment variable.
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="#youradminUser#"
password="#your-admin-PW#";
};
Export the Kafka environment variables
export KAFKA_HOME=/opt/kafka/kafka_2.13-3.8.0
export PATH=$PATH:$KAFKA_HOME/bin
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
export KAFKA_OPTS="-Djava.security.auth.login.config=$HOME/kafka_client_jaas.conf"
source ~/.bashrc
Kafka CLI Usage Examples
Create a user
kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9093 --alter --add-config 'SCRAM-SHA-512=[password=#password#]' --entity-type users --entity-name %username% --command-config ~/kafka-adminclient.properties
Create a topic
kafka-topics.sh --bootstrap-server kafka01.multicastbits.com:9093 --create --topic %topicname% --partitions 10 --replication-factor 3 --command-config ~/kafka-adminclient.properties
Create ACLs
External customer user with READ DESCRIBE privileges to a single topic
kafka-acls.sh --bootstrap-server kafka01.multicastbits.com:9093 \
  --command-config ~/kafka-adminclient.properties \
  --add --allow-principal User:customer-user01 \
  --operation READ --operation DESCRIBE --topic Customer_topic
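To confirm the ACL landed, you can list it back using the same admin client properties:
kafka-acls.sh --bootstrap-server kafka01.multicastbits.com:9093 --command-config ~/kafka-adminclient.properties --list --topic Customer_topic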
Troubleshooting
Here are some common issues you might encounter when setting up and using Kafka with SASL_SCRAM authentication, along with their solutions:
1. Connection refused errors
Issue: Clients unable to connect to Kafka brokers.
Solution:
- Verify that the Kafka brokers are running and listening on the correct ports.
- Check firewall settings to ensure the Kafka ports are open and accessible.
- Confirm that the bootstrap server addresses in client configurations are correct.
2. Authentication failures
Issue: Clients fail to authenticate with Kafka brokers.
Solution:
- Double-check username and password in the JAAS configuration file.
- Ensure the SCRAM credentials are properly set up on the Kafka brokers.
- Verify that the correct SASL mechanism (SCRAM-SHA-512) is specified in client configurations.
3. SSL/TLS certificate issues
Issue: SSL handshake failures or certificate validation errors.
Solution:
- Confirm that the keystore and truststore files are correctly referenced in configurations.
- Verify that the certificates in the truststore are up-to-date and not expired.
- Ensure that the hostname in the certificate matches the broker’s advertised listener.
4. Zookeeper connection issues
Issue: Kafka brokers unable to connect to Zookeeper ensemble.
Solution:
- Verify Zookeeper connection string in Kafka broker configurations.
- Ensure Zookeeper servers are running and accessible and the ports are open
- Check Zookeeper client authentication settings in JAAS configuration file
Install OpenVPN on fireTV (no root required) for NORD (MAC, Windows, Linux)
DISCLAIMER: No copyright infringement intended. This article is for entertainment and educational purposes only.
Alright! Now that’s out of the way, I’m going to keep this short and simple.
Scope:
Install OpenVPN client
import profile with username and password
connect to your preferred VPN server
Use case:
- Secure your Fire TV traffic using any OpenVPN-supported VPN service
- Connect to your home file server/NAS and stream files when traveling via your FireTV or Firestick using your own VPN server (not covered in this article)
- Watch Streaming services when traveling using your own VPN server (not covered in this article)
Project Summary
Hardware – Fire TV 4K, latest firmware
Platform – Windows 10 Enterprise
In this guide I’m using ADB to install the OpenVPN client on my Fire TV and using it to connect to the NordVPN service.
All project files are located in C:\NoRDVPN
Files Needed (Please download these files to your workstation before proceeding)
OpenVPN client APK – http://plai.de/android/
NORDVPN OpenVPN configuration files – https://nordvpn.com/ovpn/
ADBLink – http://jocala.com
01. Enable Developer mode on Fire TV
http://www.aftvnews.com/how-to-enable-adb-debugging-on-an-amazon-fire-tv-or-fire-tv-stick/
- From the Fire TV or Fire TV Stick’s home screen, scroll to “Settings”.

- Next, scroll to the right and select “Device”.

- Next, scroll down and select “Developer options”.

- Then select “ADB debugging” to turn the option to “ON”.

- Click on “File Manager” in adbLink
- Create a folder (I’m going to call it NORD_VPN)
- Install OpenVPN on the Fire TV system
- Customize the VPN configuration files
- Copy the VPN configuration files to the root of the SD card on the Fire TV system
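If you prefer the plain adb CLI over the adbLink GUI, the equivalent steps look roughly like this (the IP address and file names are placeholders for your own):
adb connect 192.168.1.50:5555
adb install openvpn-client.apk
adb push us1234.nordvpn.com.udp.ovpn /sdcard/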
Select and launch OpenVPN Client
Use the + sign to add a profile
Click Import
Browse and select the .ovpn configuration file using the file browser
Hacking WatchGuard Firebox to Run pfsense- nanoBSD

Hi Internetz, it’s been a while…
So we had an old Firebox X700 lying around in the office gathering dust. I saw a forum post about running m0nowall on this device. Since pfSense is based on m0nowall, I googled around for a way to install pfSense on the device and found several threads on the pfSense forums.
It took me a little while to comb through thousands of posts to find a proper way to go about this, and some more time was spent troubleshooting the issues I faced during the installation and configuration. So I’m putting everything I found in this post to save you the time spent googling around. This should work for all the other Firebox models as well.
What you need :
Hardware
- Firebox
- Female to Female Serial Cable – link
- 4GB CF Card (you can use 1GB or 2GB, but personally I would recommend at least 4GB)
- CF Card Reader
Software
- pfSense nanoBSD embedded image (4G)
- physdiskwrite (with PhysGUI)
- Tera Term Pro Web
The Firebox X700
This is basically a small x86 PC: an Intel Celeron CPU running at 1.2GHz with 512MB of RAM. The system boots the WatchGuard firmware from a CF card.
The custom Intel motherboard used in the device does not include a VGA or DVI port, so we have to use the serial port for all communication with the device.
There are several methods to run pfsense on this device.
HDD
Install pfSense on a PC and plug the HDD into the Firebox.
This requires a bit more effort because we need to change the boot order in the BIOS, and it’s kind of hard to find IDE laptop HDDs these days.
CF card
This is a very straightforward method. We are basically swapping out the CF card already installed in the device and booting pfSense from it.
In this tutorial we are using the CF card method
Installing PFsense
- Download the relevant pfsense image
Since we are using a CF card, we need to use the pfSense version built for embedded devices.
The nanoBSD version is built specifically for CF cards or any other storage media with a limited read/write life cycle.
Since we are using a 4GB CF card, we are going to use the 4G image.
- Flashing the nanoBSD image to the CF card
Extract the physdiskwrite program and run PhysGUI.exe
This software is written in German, I think, but operating it is not that hard.
Select the CF card from the list.
Note: if you are not sure about the disk device ID, use diskpart to determine the disk ID.
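A minimal diskpart session to find the disk number looks like this:
diskpart
DISKPART> list disk
DISKPART> exit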
Load the ISO file
Right-click on the disk and select “Image laden > öffnen” (Load image > Open)
Select the ISO file from the “open file” window
The program will prompt you with the following dialog box
Select the “remove 2GB restriction” option and click “OK”
It will warn you about the disk being formatted (I think); click yes to start the flashing process. A CMD window will open and show you the progress.
- Installing the CF card on the Firebox
Once the flashing process is complete, open up the Firebox and remove the drive cage to gain access to the installed CF card.
Remove the protective glue and replace the card with the new CF card flashed with the pfSense image.
- Booting up and configuring PFsense
Since the Firebox does not have a display output or any peripheral ports, we need to use a serial connection to communicate with the device.
Install the “Tera Term Pro Web” program we downloaded earlier.
I tried PuTTY and several other clients, but they didn’t work properly.
Open up the terminal window
Connect the Firebox to the PC using the serial cable and power it up
Select “Serial”, choose the COM port the device is connected to, and click OK (you can check this in Device Manager)
By now the terminal window should be showing the pfSense configuration details, just as with a normal fresh install.
It will ask you to set up VLANs
Assign the WAN, LAN, OPT1 interfaces.
On the X700, the interface names are as follows:
Please refer to pfsense Docs for more info on setting up
http://doc.pfsense.org/index.php/Tutorials#Advanced_Tutorials
After the initial config is complete, you no longer need the console cable and Tera Term;
you will be able to access pfSense via the web interface and good ol’ SSH on the LAN IP.
Additional configuration
- Enabling the LCD panel
All Firebox units have an LCD panel in front.
We can use the pfSense LCDproc-dev package to enable it and display various information.
Install the LCDproc-dev Package via the package Manager
Go to Services > LCDProc
Set the settings as follows
Hope this article helped you guys. Don’t forget to leave a comment with your thoughts.
Sources –
http://forum.pfsense.org/index.php?board=5.0
Kubernetes Loop
- The Architecture of Trust
- Role of the API server
- Role of etcd cluster
- How the Loop Actually Works
- As an example, let’s look at a simple nginx workload deployment
- 1) Intent (Desired State)
- 2) Watch (The Trigger)
- 3) Reconcile (Close the Gap)
- 4) Status (Report Back)
- The Loop Doesn’t Protect You From Yourself
- Why This Pattern Matters Outside Kubernetes
- Ref
I’ve been diving deep into systems architecture lately, specifically Kubernetes.
Strip away the UIs, the YAML, and the ceremony, and Kubernetes boils down to:
A very stubborn, event-driven collection of control loops
aka the reconciliation (control) loop, and everything I read calls this the “gold standard” for distributed control planes.
Because it decomposes the control plane into many small, independent loops, each continuously correcting drift rather than trying to execute perfect one-shot workflows. These loops are triggered by events or state changes, but what they do is determined by the spec vs. the observed state (status).
Now we have both:
- spec: desired state
- status: observed state
Kubernetes lives in that gap.
When spec and status match, everything’s quiet. When they don’t, something wakes up to ensure current state matches the declared state.
The Architecture of Trust
In Kubernetes, controllers don’t coordinate via direct peer-to-peer orchestration; they coordinate by writing to and watching one shared “state.”
That state lives behind the API server, and the API server validates it and persists it into etcd.
Role of the API server
The API server is the front door to the cluster’s shared truth: it’s the only place that can accept, validate, and persist declared intent as Kubernetes API objects (metadata/spec/status).
When you install a CRD, you’re extending the API itself with a new type (a new endpoint) and a schema the API server can validate against.
When we use kubectl apply (or any client) to submit YAML/JSON to the API server, the API server validates it (built-in rules, CRD OpenAPI v3 schema / CEL rules, and potentially admission webhooks) and rejects invalid objects before they’re stored.
If the request passes validation, the API server persists the object into etcd (the whole API object, not just “intent”), and controllers/operators then watch that stored state and do the reconciliation work to make reality match it.
Once stored, controllers/operators (loops) watch those objects and run reconciliation to push the real world toward what’s declared.
It turns out that in practice, most controllers don’t act directly on raw watch events; they consume changes through informer caches and queue work onto a rate-limited workqueue. They also often watch related/owned resources (secondary watches), not just the primary object, to stay convergent.
The spec is often user-authored, as discussed above, but it isn’t exclusively human-written; the scheduler and some controllers also update parts of it (e.g., scheduling decisions/bindings and defaulting).
Role of etcd cluster
etcd is the control plane’s durable record: the authoritative reference for what the cluster believes should exist and what it currently reports.
If an intent (an API object) isn’t in etcd, controllers can’t converge on it—because there’s nothing recorded to reconcile toward
This makes the system inherently self-healing because it trusts the declared state and keeps trying to morph the world to match until those two align.
One tidbit worth noting:
In production, Nodes, runtimes, cloud load balancers can drift independently. Controllers treat those systems as observed state, and they keep measuring reality against what the API says should exist.
How the Loop Actually Works
Kubernetes isn’t one loop. It’s a bunch of loops (controllers) that all behave the same way:
- read desired state (what the API says should exist)
- observe actual state (what’s really happening)
- calculate the diff
- push reality toward the spec

As an example, let’s look at a simple nginx workload deployment
1) Intent (Desired State)
To Deploy the Nginx workload. You run:
kubectl apply -f nginx.yaml
The API server validates the object (and its schema, if it’s a CRD-backed type) and writes it into etcd.
At that point, Kubernetes has only recorded your intent. Nothing has “deployed” yet in the physical sense. The cluster has simply accepted:
“This is what the world should look like.”
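For completeness, a minimal three-replica nginx Deployment that fits this example could be applied inline like this (the manifest itself is just an illustration, not from the original post):
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
EOF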
2) Watch (The Trigger)
Controllers and schedulers aren’t polling the cluster like a bash script with a sleep 10.
They watch the API server.
When desired state changes, the loop responsible for it wakes up, runs through its logic, and acts:
“New desired state: someone wants an Nginx Pod.”
Watches aren’t gospel. Events can arrive twice, late, or never, and your controller still has to converge. Controllers use list+watch patterns with periodic resync as a safety net. The point isn’t perfect signals; it’s building a loop that stays correct under imperfect signals.
Controllers also don’t spin constantly; they queue work. Events enqueue object keys, workers dequeue and reconcile, and failures requeue with backoff. This keeps one bad object from melting the control plane.
3) Reconcile (Close the Gap)
Here’s the mental map that made sense to me:
Kubernetes is a set of level-triggered control loops. You declare desired state in the API, and independent loops keep working until the real world matches what you asked for.
- Controllers (Deployment/ReplicaSet/etc.) watch the API for desired state and write more desired state.
- Example: a Deployment creates/updates a ReplicaSet; a ReplicaSet creates/updates Pods.
- The scheduler finds Pods with no node assigned and picks a node.
- It considers resource requests, node capacity, taints/tolerations, node selectors, (anti)affinity, topology spread, and other constraints.
- It records its decision by setting spec.nodeName on the Pod.
- The kubelet on the chosen node notices “a Pod is assigned to me” and makes it real.
- pulls images (if needed) via the container runtime (CRI)
- sets up volumes/mounts (often via CSI)
- triggers networking setup (CNI plugins do the actual wiring)
- starts/monitors containers and reports status back to the API
Each component writes its state back into the API, and the next loop uses that as input. No single component “runs the whole workflow.”
One property makes this survivable: reconcile must be safe to repeat (idempotent). The loop might run once or a hundred times (retries, resyncs, restarts, duplicate/missed watch events), and it should still converge to the same end result.
If the desired state is already satisfied, reconcile should do nothing; if something is missing, it should fill the gap, without creating duplicates or making things worse.
When concurrent updates happen (two controllers might try to update the same object at the same time), Kubernetes handles it with optimistic concurrency. Every object has a resourceVersion (“what version of this object did you read?”). If you try to write an update using an older version, the API server rejects it (often as a conflict).
The flow then is: re-fetch the latest object, apply your change again, and retry.
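You can watch the resourceVersion move with plain kubectl, continuing the nginx example above (the values will differ on your cluster):
kubectl get deployment nginx -o jsonpath='{.metadata.resourceVersion}{"\n"}'
kubectl scale deployment nginx --replicas=4
kubectl get deployment nginx -o jsonpath='{.metadata.resourceVersion}{"\n"}'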
4) Status (Report Back)
Once the pod is actually running, status flows back into the API.
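A quick way to see both sides for the nginx example, spec vs. status in one line:
kubectl get deployment nginx -o jsonpath='spec={.spec.replicas} status={.status.readyReplicas}{"\n"}'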
The Loop Doesn’t Protect You From Yourself
What if the declared state says to delete something critical like kube-proxy or a CNI component? The loop doesn’t have opinions. It just does what the spec says.
A few things keep this from being a constant disaster:
- Control plane components are special. The API server, etcd, scheduler, and controller-manager usually run as static pods managed directly by the kubelet, not through the API. The reconciliation loop can’t easily delete the thing running the reconciliation loop as long as its manifest exists on disk.
- DaemonSets recreate pods. Delete a kube-proxy pod and the DaemonSet controller sees “desired: 1, actual: 0” and spins up a new one. You’d have to delete the DaemonSet itself.
- RBAC limits who can do what. Most users can’t touch kube-system resources.
- Admission controllers can reject bad changes before they hit etcd.
But at the end of the day, if your source of truth says “delete this,” the system will try. The model assumes your declared state is correct. Garbage in, garbage out.
Why This Pattern Matters Outside Kubernetes
This pattern shows up anywhere you manage state over time.
Scripts are fine until they aren’t:
- they assume the world didn’t change since last run
- they fail halfway and leave junk behind
- they encode “steps” instead of “truth”
A loop is simpler:
- define the desired state
- store it somewhere authoritative
- continuously reconcile reality back to it
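As a toy illustration of the same shape outside Kubernetes (not from any real tool), here’s a shell sketch where the desired state is “three worker files exist” and the loop just keeps measuring and correcting, which also makes it naturally idempotent:
#!/bin/bash
# Desired state: worker-1..worker-3 exist under ./state
DESIRED=3
mkdir -p ./state
while true; do
  for i in $(seq 1 "$DESIRED"); do
    # create only what is missing; re-running never duplicates anything
    [ -e "./state/worker-$i" ] || touch "./state/worker-$i"
  done
  sleep 10   # periodic resync, like a controller's resync interval
done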
Ref
- So you wanna write Kubernetes controllers?
- What does idempotent mean in software systems? • Particular Software
- The Principle of Reconciliation
- Controllers | Kubernetes
- Reference | Kubernetes
- How Operators work in Kubernetes | Red Hat Developer
- Good Practices – The Kubebuilder Book
- Understanding Kubernetes controllers part I – queues and the core controller loop – LeftAsExercise
Server 2016 Data De-duplication Report – Powershell
I put together this crude little script to send out a report on a daily basis
It’s not that fancy, but it’s functional.
I’m working on the second revision with an HTML body, a list of corrupted files, and resource usage; more features will be added as I dive further into the dedupe cmdlets.
https://technet.microsoft.com/en-us/library/hh848450.aspx
Link to the Script – Dedupe_report.ps1
https://dl.dropboxusercontent.com/s/bltp675prlz1slo/Dedupe_report_Rev2_pub.txt
If you have any suggestions for improvements please comment and share with everyone
# Malinda Ratnayake | 2016
# Can only be run on Windows Server 2012 R2
#
# Get the date and set the variable
$Now = Get-Date
# Import the cmdlets
Import-Module Deduplication
#
$logFile01 = "C:\_Scripts\Logs\Dedupe_Report.txt"
#
# Get the cluster vip and set to variable
$HostName = (Get-WmiObject win32_computersystem).DNSHostName+"."+(Get-WmiObject win32_computersystem).Domain
#
#$OS = Get-Host {$_.WindowsProductName}
#
# delete previous days check
del $logFile01
#
Out-File "$logFile01" -Encoding ASCII
Add-Content $logFile01 "Deduplication Report for $HostName" -Encoding ASCII
Add-Content $logFile01 "`n$Now" -Encoding ASCII
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupJob
Add-Content $logFile01 "Deduplication job Queue" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupJob | Format-Table -AutoSize | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupSchedule
Add-Content $logFile01 "Deduplication Schedule" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupSchedule | Format-Table -AutoSize | Out-File -append -Encoding ASCII $logFile01
#
# Last Optimization Result and time
Add-Content $logFile01 "Last Optimization Result and time" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Select-Object LastOptimizationTime ,LastOptimizationResultMessage | Format-Table -Wrap | Out-File -append -Encoding ASCII $logFile01
#
# Last Garbage Collection Result and Time
Add-Content $logFile01 "Last Garbage Collection Result and Time" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Select-Object LastGarbageCollectionTime ,LastGarbageCollectionResultMessage | Format-Table -Wrap | Out-File -append -Encoding ASCII $logFile01
#
# Get-DedupVolume
$DedupVolumeLetter = Get-DedupVolume | select -ExpandProperty Volume
Add-Content $logFile01 "Deduplication Enabled Volumes" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupVolume | Format-Table -AutoSize | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "Volume $DedupVolumeLetter Details - " -Encoding ASCII
Get-DedupVolume | FL | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupStatus
Add-Content $logFile01 "Deduplication Summary" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Format-Table -AutoSize | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "Deduplication Status Details" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | FL | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupMetadata
Add-Content $logFile01 "Deduplication MetaData" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Add-Content $logFile01 "note - details about how deduplication processed the data on volume $DedupVolumeLetter " -Encoding ASCII
Get-DedupMetadata | FL | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-Dedupe Events
# Get-Dedupe Events - Resource usage - WIP
Add-Content $logFile01 "Deduplication Events" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-WinEvent -MaxEvents 10 -LogName Microsoft-Windows-Deduplication/Diagnostic | where ID -EQ "10243" | FL | Out-File -append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Change the -To, -From and -SmtpServer values to match your servers.
$Emailbody = Get-Content -Path $logFile01
[string[]]$recipients = "[email protected]"
Send-MailMessage -To $recipients -From "[email protected]" -Subject "File services - Deduplication Report : $HostName " -SmtpServer smtp-relay.gmail.com -Attachments $logFile01
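To have the report actually go out daily, the script can be scheduled with Task Scheduler; a minimal example (the path and time are placeholders for your environment):
schtasks /Create /SC DAILY /ST 07:00 /TN "Dedupe Report" /TR "powershell.exe -ExecutionPolicy Bypass -File C:\_Scripts\Dedupe_report.ps1"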
NFS Provisioner Setup and Testing Guide for Rancher RKE2/Kubernetes
This guide covers how to add an NFS StorageClass and a dynamic provisioner to Kubernetes using the nfs-subdir-external-provisioner Helm chart. This enables us to mount NFS shares dynamically for PersistentVolumeClaims (PVCs) used by workloads.
Example use cases:
- Database migrations
- Apache Kafka clusters
- Data processing pipelines
Requirements:
- An accessible NFS share exported with: rw,sync,no_subtree_check,no_root_squash
- NFSv3 or NFSv4 protocol
- Kubernetes v1.31.7+ or RKE2 with rke2r1 or later
Let’s get to it.
1. NFS Server Export Setup
Ensure your NFS server exports the shared directory correctly:
# /etc/exports
/rke-pv-storage worker-node-ips(rw,sync,no_subtree_check,no_root_squash)
- Replace worker-node-ips with the actual IPs or CIDR blocks of your worker nodes.
- Run sudo exportfs -r to reload the export table.
2. Install NFS Subdir External Provisioner
Add the Helm repo and install the provisioner:
helm repo add nfs-subdir-external-provisioner \
https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-client-provisioner \
nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--namespace kube-system \
--set nfs.server=192.168.162.100 \
--set nfs.path=/rke-pv-storage \
--set storageClass.name=nfs-client \
--set storageClass.defaultClass=false
Notes:
- If you want this to be the default storage class, change storageClass.defaultClass=true.
- nfs.server should point to the IP of your NFS server.
- nfs.path must be a valid exported directory from that NFS server.
- storageClass.name can be referenced in your PersistentVolumeClaim YAMLs using storageClassName: nfs-client.
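Before testing with a PVC, it’s worth confirming the chart created the StorageClass and that the provisioner pod is running:
kubectl get storageclass nfs-client
kubectl get pods -n kube-system | grep nfs-client-provisioner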
3. PVC and Pod Test
Create a test PVC and pod using the following YAML:
# test-nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-nfs-pvc
spec:
accessModes:
- ReadWriteMany
storageClassName: nfs-client
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
name: test-nfs-pod
spec:
containers:
- name: shell
image: busybox
command: [ "sh", "-c", "sleep 3600" ]
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: test-nfs-pvc
Apply it:
kubectl apply -f test-nfs-pvc.yaml
kubectl get pvc test-nfs-pvc -w
Expected output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-nfs-pvc Bound pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 1Gi RWX nfs-client 30s
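When you’re done testing, clean up the test PVC and pod:
kubectl delete -f test-nfs-pvc.yaml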
4. Troubleshooting
If the PVC remains in Pending, follow these steps:
Check the provisioner pod status:
kubectl get pods -n kube-system | grep nfs-client-provisioner
Inspect the provisioner pod:
kubectl describe pod -n kube-system <pod-name>
kubectl logs -n kube-system <pod-name>
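The PVC's own events usually surface the same provisioning error, so check them alongside the provisioner logs:

kubectl describe pvc test-nfs-pvc
kubectl get events --sort-by=.lastTimestamp | grep -i nfs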
Common Issues:
- Broken State: Bad NFS mount
  mount.nfs: access denied by server while mounting 192.168.162.100:/pl-elt-kakfka
  This usually means the NFS path is misspelled or not exported properly.
- Broken State: root_squash enabled
  failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/…: permission denied
  Fix by changing the export to use no_root_squash, or chown the directory to nobody:nogroup.
- ImagePullBackOff
  Ensure nodes have internet access and can reach registry.k8s.io.
- RBAC errors
- Make sure the ServiceAccount used by the provisioner has permissions to watch PVCs and create PVs (see the check below).
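A rough way to verify those permissions with kubectl auth can-i; the ServiceAccount name below is an assumption derived from the Helm release name used earlier, so look up the real name first:

# Find the ServiceAccount the chart created (exact name depends on the release name)
kubectl get serviceaccounts -n kube-system | grep nfs
# Check the permissions the provisioner needs (substitute the real ServiceAccount name)
kubectl auth can-i watch persistentvolumeclaims --as=system:serviceaccount:kube-system:nfs-client-provisioner-nfs-subdir-external-provisioner
kubectl auth can-i create persistentvolumes --as=system:serviceaccount:kube-system:nfs-client-provisioner-nfs-subdir-external-provisioner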
5. Healthy State Example
kubectl get pods -n kube-system | grep nfs-client-provisioner-nfs-subdir-external-provisioner
nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m 1/1 Running 0 3m39s
kubectl describe pod -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
# Output shows pod is Running with Ready=True
kubectl logs -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
...
I0512 21:46:03.752701 1 controller.go:1420] provision "default/test-nfs-pvc" class "nfs-client": volume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" provisioned
I0512 21:46:03.752763 1 volume_store.go:212] Trying to save persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d"
I0512 21:46:03.772301 1 volume_store.go:219] persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" saved
I0512 21:46:03.772353 1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Name:"test-nfs-pvc"}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-73481f45-3055-4b4b-80f4-e68ffe83802d
...
Once test-nfs-pvc is Bound and the pod starts successfully, your setup is working. You can now reference the nfs-client StorageClass in other workloads (e.g., a Strimzi KafkaNodePool); a generic sketch follows below.
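For workloads that manage their own PVCs, the class is usually referenced through a volumeClaimTemplate. A minimal sketch using a plain StatefulSet (the name and image are placeholders, not part of the original guide):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-app
spec:
  serviceName: example-app
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: busybox
          command: [ "sh", "-c", "sleep 3600" ]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: nfs-client
        resources:
          requests:
            storage: 1Gi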
VRF Setup with route leaking guide Dell S4112F-ON – OS 10.5.1.3
Scope –
Create Three VRFs for Three separate clients
Create a Shared VRF
Leak routes from each VRF to the Shared_VRF

Logical overview


Create the VRFs
ip vrf Tenant01_VRF
ip vrf Tenant02_VRF
ip vrf Tenant03_VRF
ip vrf Shared_VRF
Create and initialize the Interfaces (SVI, Layer 3 interface, Loopback)
We are creating Layer 3 SVIs Per tenant
interface vlan200
 mode L3
 description Tenant01_NET01
 no shutdown
 ip vrf forwarding Tenant01_VRF
 ip address 10.251.100.254/24
!
interface vlan201
 mode L3
 description Tenant01_NET02
 no shutdown
 ip vrf forwarding Tenant01_VRF
 ip address 10.251.101.254/24
!
interface vlan210
 mode L3
 description Tenant02_NET01
 no shutdown
 ip vrf forwarding Tenant02_VRF
 ip address 172.17.100.254/24
!
interface vlan220
 no ip address
 description Tenant03_NET01
 no shutdown
 ip vrf forwarding Tenant03_VRF
 ip address 192.168.110.254/24
!
interface vlan250
 mode L3
 description OSPF_Routing
 no shutdown
 ip vrf forwarding Shared_VRF
 ip address 10.252.250.6/29
Confirmation
LABCORE# show ip interface brief
Interface Name          IP-Address           OK    Method    Status    Protocol
=========================================================================================
Vlan 200                10.251.100.254/24    YES   manual    up        up
Vlan 201                10.251.101.254/24    YES   manual    up        up
Vlan 210                172.17.100.254/24    YES   manual    up        up
Vlan 220                192.168.110.254/24   YES   manual    up        up
Vlan 250                10.252.250.6/29      YES   manual    up        up
LABCORE# show ip vrf
VRF-Name                  Interfaces
Shared_VRF                Vlan250
Tenant01_VRF              Vlan200-201
Tenant02_VRF              Vlan210
Tenant03_VRF              Vlan220
default                   Vlan1
management                Mgmt1/1/1
Route leaking
For this example, we are going to leak routes from each of these tenant VRFs into the Shared VRF.
This design will allow the VLANs within the different VRFs to see each other via the Shared VRF, which can be a security issue; however, you can easily control this by:
- narrowing the leaked routes down to specific hosts
- using access-lists (not the most ideal option, but if you have a playbook you can program this in without any issues)
Real-world use cases may differ; use this as a template for how to leak routes between VRFs and update your config as needed.
Create the route export/import statements within the VRFs
ip vrf Shared_VRF
 ip route-import 2:100
 ip route-import 3:100
 ip route-import 4:100
 ip route-export 1:100
!
ip vrf Tenant01_VRF
 ip route-export 2:100
 ip route-import 1:100
!
ip vrf Tenant02_VRF
 ip route-export 3:100
 ip route-import 1:100
!
ip vrf Tenant03_VRF
 ip route-export 4:100
 ip route-import 1:100
Let's explain this a bit:
ip vrf Shared_VRF
 ip route-import 2:100   -----------> Import leaked routes from target 2:100
 ip route-import 3:100   -----------> Import leaked routes from target 3:100
 ip route-import 4:100   -----------> Import leaked routes from target 4:100
 ip route-export 1:100   -----------> Export routes to target 1:100
If you need to filter which routes can be imported, use a route-map with prefix lists to filter them, as in the sketch below.
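A minimal sketch of that idea, assuming your OS10 release supports attaching a route-map to the route-export statement (check the OS10 routing configuration guide for your version); the prefix-list and route-map names are made up for illustration:

! hypothetical names - adjust prefixes and names to your environment
ip prefix-list TENANT01_HOSTS seq 10 permit 10.251.100.10/32
ip prefix-list TENANT01_HOSTS seq 20 permit 10.251.101.10/32
!
route-map TENANT01_EXPORT permit 10
 match ip address prefix-list TENANT01_HOSTS
!
ip vrf Tenant01_VRF
 ip route-export 2:100 route-map TENANT01_EXPORT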
Setup static routes per VRF as needed
ip route vrf Tenant01_VRF 10.251.100.0/24 interface vlan200
ip route vrf Tenant01_VRF 10.251.101.0/24 interface vlan201
!
ip route vrf Tenant02_VRF 172.17.100.0/24 interface vlan210
!
ip route vrf Tenant03_VRF 192.168.110.0/24 interface vlan220
!
ip route vrf Shared_VRF 0.0.0.0/0 10.252.250.1 interface vlan250
- These static routes will now be leaked and learned by the Shared VRF.
- The default route on the Shared VRF will be learned downstream by the tenant VRFs.
- If, instead of a default route, you scope the Shared VRF route to a specific IP or subnet, you can prevent traffic from routing between the tenant VRFs via the Shared VRF.
- If you need routes leaked directly between tenants, add the corresponding ip route-import statements on those VRFs as needed.
Confirmation
Routes are distributed between the VRFs via the internal BGP process:
LABCORE# show ip route vrf Tenant01_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:42
C 10.251.100.0/24 via 10.251.100.254 vlan200 0/0 12:43:46
C 10.251.101.0/24 via 10.251.101.254 vlan201 0/0 12:43:46
LABCORE#
LABCORE# show ip route vrf Tenant02_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:45
C 172.17.100.0/24 via 172.17.100.254 vlan210 0/0 12:43:49
LABCORE#
LABCORE# show ip route vrf Tenant03_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:48
C 192.168.110.0/24 via 192.168.110.254 vlan220 0/0 12:43:52
LABCORE# show ip route vrf Shared_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*S 0.0.0.0/0 via 10.252.250.1 vlan250 1/0 12:21:33
B IN 10.251.100.0/24 Direct,Tenant01_VRF vlan200 200/0 09:01:28
B IN 10.251.101.0/24 Direct,Tenant01_VRF vlan201 200/0 09:01:28
C 10.252.250.0/29 via 10.252.250.6 vlan250 0/0 12:42:53
B IN 172.17.100.0/24 Direct,Tenant02_VRF vlan210 200/0 09:01:28
B IN 192.168.110.0/24 Direct,Tenant03_VRF vlan220 200/0 09:02:09
We can ping out to the internet from the VRF interface IPs; an example of the commands is below.
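A quick sketch of VRF-aware pings from the switch, assuming the ping vrf <vrf-name> <destination> syntax available in OS10 (8.8.8.8 is just an example destination):

LABCORE# ping vrf Tenant01_VRF 8.8.8.8
LABCORE# ping vrf Shared_VRF 8.8.8.8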

Redistribute leaked routes via IGP
You can use an internal BGP process to pick up routes from any VRF and redistribute them into other IGP processes as needed – check the article on that topic for details.