As part of my pre-flight checks for vCenter upgrades, I like to mount the ISO and go through the first three steps of the installer. While doing this, I noticed the installer could not connect to the source appliance, failing with this error:
2019-05-01T20:05:02.052Z - info: Stream :: close
2019-05-01T20:05:02.052Z - info: Password not expired
2019-05-01T20:05:02.054Z - error: sourcePrecheck: error in getting source Info: ServerFaultCode: Failed to authenticate with the guest operating system using the supplied credentials.
2019-05-01T20:05:03.328Z - error: Request timed out after 30000 ms, url: https://vcenter.companyABC.local:443/
2019-05-01T20:05:09.675Z - info: Log file was saved at: C:\Users\MCbits\Desktop\installer-20190501-160025555.log
Trying to reset the root password via the admin interface or the DCUI didn't work. After digging around, I found a way to reset it by forcing the vCenter appliance to boot into single-user mode.
Procedure:
- Take a snapshot or backup of the vCenter Server Appliance before proceeding. Do not skip this step.
- Reboot the vCenter Server Appliance.
- After the OS starts, press the e key to enter the GNU GRUB edit menu.
- Locate the line that begins with the word Linux.
- Append the following to the end of that line: rw init=/bin/bash. The line should look like the following screenshot:

After adding the statement, press F10 to continue booting.
The vCenter appliance will boot into single-user mode.
Type passwd to reset the root password.
If you run into the following error message:
"Authentication token lock busy"

you need to remount the root filesystem as read-write; this will allow you to make changes:
mount -o remount,rw /
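For reference, here's the full sequence from the single-user bash prompt as one sketch (assuming you booted with rw init=/bin/bash as above; adjust to your appliance):
# remount the root filesystem read-write, then retry the reset
mount -o remount,rw /
passwd root
# remount read-only to flush writes, then force a restart
mount -o remount,ro /
reboot -f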

Until next time !!!
Update Manager has been bundled with the vCenter Server Appliance since version 6.5. It's a plug-in that runs in the vSphere Web Client. We can use it to:
- patch/upgrade hosts
- deploy .vib files from within vCenter
- scan your vCenter environment and report on any out-of-compliance hosts
Hardcore/experienced VMware operators will scoff at this article, but I have seen many organizations still using iLO/iDRAC to mount an ISO to update hosts because they have no idea this feature even exists.

Now that that's out of the way, let's get to the how-to part.
In vCenter, click "Menu" and drill down to "Update Manager".

This blade shows all the nerd knobs and an overview of your current updates and compliance levels.
Click the "Baselines" tab.

You will have two predefined baselines for security patches created by vCenter; let's set those aside for now.
Navigate to the "ESXi Images" tab and click "Import".

Once the upload is complete, click "New Baseline".
Fill in a name and description that make sense to anyone who logs in, and click Next.

On the next screen, select the image you just uploaded, then continue through the wizard and complete it.
Note – If you have other third-party software for ESXi, you can create separate baselines for those and use baseline groups to push out upgrades and VIB files at the same time.
Now click "Menu" and navigate back to "Hosts and Clusters".
Now you can apply the baseline at various levels within the vCenter hierarchy:
vCenter | Datacenter | Cluster | Host
Pick the right level depending on your use case.
Excerpt from the KB: For ESXi hosts in a cluster, the remediation process is sequential by default. With Update Manager, you can select to run host remediation in parallel. When you remediate a cluster of hosts sequentially and one of the hosts fails to enter maintenance mode, Update Manager reports an error, and the process stops and fails. The hosts in the cluster that are remediated stay at the updated level. The ones that are not remediated after the failed host remediation are not updated.
If a host in a DRS enabled cluster runs a virtual machine on which Update Manager or vCenter Server are installed, DRS first attempts to migrate the virtual machine running vCenter Server or Update Manager to another host so that the remediation succeeds. In case the virtual machine cannot be migrated to another host, the remediation fails for the host, but the process does not stop. Update Manager proceeds to remediate the next host in the cluster.
The host upgrade remediation of ESXi hosts in a cluster proceeds only if all hosts in the cluster can be upgraded. Remediation of hosts in a cluster requires that you temporarily disable cluster features such as VMware DPM and HA admission control. Also, turn off FT if it is enabled on any of the virtual machines on a host, and disconnect the removable devices connected to the virtual machines on a host, so that they can be migrated with vMotion. Before you start a remediation process, you can generate a report that shows which cluster, host, or virtual machine has the cluster features enabled.
Moving on: for this example, since I have only two hosts, we are going to apply the baseline at the cluster level but run the remediation at the host level.
Host 1 > Enter Maintenance > Remediation > Update complete and online
Host 2 > Enter Maintenance > Remediation > Update complete and online
Select the cluster, click the "Updates" tab, and click "Attach" in the Attached Baselines section.

Select and attach the baseline we created before
Click “Check Compliance” to scan and get a report
Select the host in the cluster, enter maintenance mode
Click "REMEDIATE" to start the upgrade. (If you do this at the cluster level and you have DRS, Update Manager will update each node in turn.)
This will reboot the host and go through the update process

Footnotes –
You might run into the following issue
“vCenter cannot deploy Host upgrade agent to host”

Cause 1
The scratch partition is full; use vCenter to change the scratch folder location.
Creating a persistent scratch location for ESXi – https://kb.vmware.com/s/article/1033696
Cause 2
Hardware is not compatible.
I had this issue due to 6.7 dropping support for an LSI RAID card on older firmware; you need to do some footwork and check the log files to figure out why it's failing.
VMware HCL – link
ESXi and vCenter log file locations – link
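If remediation keeps failing on a specific host, a quick way I poke at it over SSH on the host itself (a sketch; assumes SSH is enabled on the host):
# remediation/patching log on the ESXi host
tail -n 100 /var/log/esxupdate.log
# list installed VIBs to spot third-party drivers (storage/RAID, NIC) that might block the upgrade
esxcli software vib list | less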

Well, I think the title pretty much speaks for itself, but anyhow: Crucial released new firmware for the M4 SSDs, and apparently it's supposed to make the drive 20% faster. I updated mine with no issues, and I didn't brick it, so it's all good here hehe..
I looked up some benchmarks from reviews at the time of release and compared them with the benchmarks I ran after the FW update; I do get around a 20% increase, just like they say!!!
Crucial’s Official Release Notes:
“Release Date: 08/25/2011
Change Log:
Changes made in version 0002 (m4 can be updated to revision 0009 directly from either revision 0001 or 0002)
Improved throughput performance.
Increase in PCMark Vantage benchmark score, resulting in improved user experience in most operating systems.
Improved write latency for better performance under heavy write workloads.
Faster boot up times.
Improved compatibility with latest chipsets.
Compensation for SATA speed negotiation issues between some SATA-II chipsets and the SATA-III device.
Improvement for intermittent failures in cold boot up related to some specific host systems.”
Firmware download: http://www.crucial.com/eu/support/firmware.aspx?AID=10273954&PID=4176827&SID=1iv16ri5z4e7x
To install this via a pen drive without wasting a blank CD (I know they are really, really cheap, but think: how many of you have blank CDs or DVDs lying around these days?),
we are going to use a nifty little program called UNetbootin.
Of course, you can use this to boot any Linux distro from a pen drive. It's very easy actually; if you need help, go check out the guides on the UNetbootin website.
so here we go then…
* First off Download – http://unetbootin.sourceforge.net/

* Run the program
* Select DiskImage Radio button (as shown on the image)
* browse and select the iso file you downloaded from crucial
* Type – USB Drive
* select the Drive letter of your Pendrive
* Click OK!!!
reboot
*Go to the BIOS and set your SATA controller to IDE (compatibility) mode ** this is important
*Boot from your Pen drive
*Follow the instructions on screen to update
and Voila
****remember to set your SATA controller to AHCI again in Bios / EFI ****

Hi Internetz, it's been a while…
So we had an old Firebox X700 lying around in the office gathering dust. I saw a forum post about running m0nowall on this device. Since pfSense is based on m0nowall, I googled around to find a way to install pfSense on the device and found several threads on the pfSense forums.
It took me a little while to comb through thousands of posts to find a proper way to go about this, and some more time was spent troubleshooting the issues I faced during the installation and configuration. So I'm putting everything I found in this post, to save you the time spent googling around. This should work for all the other Firebox models as well.
What you need :
Hardware
- Firebox
- Female to Female Serial Cable – link
- 4GB CF card (1GB or 2GB will work, but personally I would recommend at least 4GB)
- CF Card Reader
Software
- pfSense nanoBSD (embedded) image
- physdiskwrite / PhysGUI
- Tera Term Pro Web
The Firebox X700
This is basically a small x86 PC: an Intel Celeron CPU running at 1.2GHz with 512MB of RAM. The system boots from a CF card containing the WatchGuard firmware.
The custom Intel motherboard used in the device does not include a VGA or DVI port, so we have to use the serial port for all communication with the device.
There are several methods to run pfsense on this device.
HDD
Install pfSense on a PC and plug the HDD into the Firebox.
This requires a bit more effort because we need to change the boot order in the BIOS, and it's kind of hard to find IDE laptop HDDs these days.
CF card
This is a very straightforward method. We are basically swapping out the CF card already installed in the device and booting pfSense from it.
In this tutorial we are using the CF card method
Installing PFsense
- Download the relevant pfsense image
Since we are using a CF card, we need to use the pfSense version built to work on embedded devices.
The NanoBSD version is built specifically to be used with CF cards or other storage media that have a limited read/write life cycle.
Since we are using a 4GB CF card, we are going to use the 4G image.
- Flashing the nanoBSD image to the CF card
Extract the physdiskwrite program and run the PhysGUI.exe
This software is written in German, I think, but operating it is not that hard.
Select the CF card from the list.
Note: if you are not sure about the disk device ID, use diskpart to determine the disk number.
Load the ISO file
Right-click the disk and select "Image laden > öffnen" (Load image > Open).
Select the ISO file in the "open file" window.
The program will prompt you with the following dialog box.
Select "remove 2GB restriction" and click "OK".
It will warn you about the disk being formatted (I think); click Yes to start the disk flashing process. A CMD window will open and show you the progress.
- Installing the CF card on the Firebox
Once the flashing process is completed, open up the Firebox and Remove the drive cage to gain access to the installed CF Card
Remove the protective glue and replace the card with the new CF card flashed with pfsense image.
- Booting up and configuring PFsense
Since the Firebox does not have any display output or peripheral ports, we need to use a serial connection to communicate with the device.
Install the "Tera Term Pro Web" program we downloaded earlier.
I tried PuTTY and several other terminal clients, but they didn't work properly.
Open up the terminal window
Connect the firebox to the PC using the serial cable, and power it up
Select "Serial", choose the COM port the device is connected to, and click OK (you can check the port number in Device Manager).
By now the terminal window should be showing the pfSense configuration prompts, just as with a normal fresh install.
It will ask you to set up VLANs.
Assign the WAN, LAN, and OPT1 interfaces.
On the X700, the interface names are as follows:
Please refer to pfsense Docs for more info on setting up
http://doc.pfsense.org/index.php/Tutorials#Advanced_Tutorials
After the initial config is completed, you do not need the console cable and Tera Term anymore;
you will be able to access pfSense via the web interface and good ol' SSH on the LAN IP.
Additional configuration
- Enabling the LCD panel
All Firebox units have an LCD panel on the front.
We can use the pfSense LCDproc-dev package to enable it and display various information.
Install the LCDproc-dev Package via the package Manager
Go to Services > LCDProc
Set the settings as follows
Hope this article helped you guys. Don't forget to leave a comment with your thoughts.
Sources –
http://forum.pfsense.org/index.php?board=5.0
- New-MailboxExportRequest – Start the process of exporting a mailbox or personal archive to a .pst file. You can create more than one export request per mailbox. Each request must have a unique name.
- Set-MailboxExportRequest – Change export request options after the request is created or recover from a failed request.
- Suspend-MailboxExportRequest – Suspend an export request any time after the request is created but before the request reaches the status of Completed.
- Resume-MailboxExportRequest – Resume an export request that's suspended or failed.
- Remove-MailboxExportRequest – Remove fully or partially completed export requests. Completed export requests aren't automatically cleared; you must use this cmdlet to remove them.
- Get-MailboxExportRequest – View general information about an export request.
- Get-MailboxExportRequestStatistics – View detailed information about an export request.
Folder permissions – the Exchange Trusted Subsystem group needs read/write access to the UNC share you are exporting the .pst files to.
For this example, we are going to use the New-MailboxExportRequest cmdlet with the following parameters:
- AcceptLargeDataLoss – The AcceptLargeDataLoss parameter specifies that a large amount of data loss is acceptable if the BadItemLimit is set to 51 or higher. Items are considered corrupted if the item can't be read from the source database or can't be written to the target database. Corrupted items won't be available in the destination mailbox or .pst file.
- BadItemLimit – The BadItemLimit parameter specifies the number of bad items to skip if the request encounters corruption in the mailbox. Use 0 to not skip bad items. The valid input range for this parameter is from 0 through 2147483647. The default value is 0.
New-MailboxExportRequest -BadItemLimit 200 -AcceptLargeDataLoss -Mailbox amy.webber -IsArchive -FilePath \\Exch01\PST_export\amy.webber-Archive.pst
Let me address the question of why I decided to expose a DNS server (Pi-hole) to the internet (not fully open, but still).
I needed/wanted to set up an Umbrella/NextDNS/CF type DNS server that’s publicly accessible but secured to certain IP addresses.
Sure, NextDNS is an option and it's cheap with similar features, but I wanted to roll my own solution so I can learn a few things along the way.
I can easily set this up for family members who have minimal technical knowledge and can't deal with yet another device (a Raspberry Pi) plugged into their home network.
This will also serve as a quick and dirty guide on how to use Docker Compose and address some issues with running Pi-hole and Docker with UFW on Ubuntu 20.x.
So lets get stahhhted…….
Scope
- Setup Pi-hole as a docker container on a VM
- Enable IPV6 support
- Set up UFW rules to restrict traffic, plus a cron job to keep the rules updated with the dynamic WAN IPs
- Deploy and test

What we need
- Linux VM (Ubuntu, Hardened BSD, etc)
- Docker and Docker Compose
- Dynamic DNS service to track the changing IP (Dyndns,no-Ip, etc)
Deployment
Setup Dynamic DNS solution to track your Dynamic WAN IP
For this demo, we are going to use DynDNS since I already own a paid account and it's supported on most platforms (routers, UTMs, NAS devices, IP camera DVRs, etc.).
Use some Google-fu; there are multiple ways to do this without having to pay for the service. All we need is a DNS record that's kept up to date with your current public IP address.
For Network A and Network B, I'm going to use the routers' built-in DDNS update features.
Network A gateway – UDM Pro

Network B Gateway – Netgear R6230

Confirmation
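You can also confirm from any Linux box that the DDNS records resolve to the current WAN IPs; a quick sketch using the placeholder hostnames from later in this post and one of the many what's-my-IP services:
dig +short trusted-Network01.selfip.net
# compare against the public IP the network is actually using
curl -s https://ifconfig.me ; echo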

Setup the VM with Docker-compose
Pick your service provider; you should be able to use a free-tier VM for this since it's just DNS.
- Linode
- AWS lightsail
- IBM cloud
- Oracle cloud
- Google Compute
- Digital Ocean droplet
Make sure you have a dedicated (static) IPv4 and IPv6 address attached to the resource
For this deployment, I'm going to use a Linode Nanode, due to their native IPv6 support and because I prefer their platform for personal projects.
Setup your Linode VM – Getting started Guide

SSH in to the VM or use weblish console
Update your packages and sources
sudo apt-get update
install Docker and Docker Compose
Assuming you already have SSH access to the VM with a static IPv4 and IPv6 address
Guide to installing Docker Engine on Ubuntu
Guide to Installing Docker-Compose
Once you have this set up, confirm the Docker setup:
docker-compose version

Setup the Pi-hole Docker Image
Let's configure the Docker networking side to fit our needs.
Create a separate bridge network for the Pi-hole container.
I guess you could use the default bridge network, but I like to create one to keep things organized; this way the service can be isolated from the other containers I have.
docker network create --ipv6 --driver bridge --subnet "fd01::/64" Piholev6
verification

We will use this network later in docker compose
With the new Ubuntu 20.x releases, systemd starts a local DNS stub listener on 127.0.0.53:53,
which will prevent the container from starting, because Pi-hole needs to bind to the same port, UDP 53.
We could disable the service, but that breaks DNS resolution on the VM, causing more headaches and pain for automation and updates.
After some Google-fu and tinkering around, this is the workaround I found.
- Disable the stub-listener
- Change the /etc/resolv.conf symlink to point to /run/systemd/resolve/resolv.conf
- push the external name servers so the VM won’t look at loopback to resolve DNS
- Restart systemd-resolved
Resolving Conflicts with the systemd-resolved stub listener
We need to disable the stub listener that's bound to port 53; as I mentioned before, this breaks local DNS resolution on the VM, and we will fix that in a bit.
sudo nano /etc/systemd/resolved.conf
Find and uncomment the line “DNSStubListener=yes” and change it to “no”
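If you prefer to script the change instead of editing the file by hand, a small sketch (assumes the directive is still at its commented default):
sudo sed -i 's/^#\?DNSStubListener=yes/DNSStubListener=no/' /etc/systemd/resolved.conf
grep DNSStubListener /etc/systemd/resolved.conf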

After this we need to push the external DNS servers to the box; this setting is stored in the following file:
/etc/resolv.conf
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.53
But we can't manually update this file with our own DNS servers; let's investigate.

ls -l /etc/resolv.conf

It's a symlink to another system file:
/run/systemd/resolve/stub-resolv.conf
When you take a look at the directory where that file resides, there are two files

When you look at the other file, you will see that /run/systemd/resolve/resolv.conf is the one that actually carries the external name servers.
You still can't manually edit this file; it gets updated with whatever IPs are handed out as DNS servers via DHCP, and netplan will dictate the IPs based on the static DNS servers you configure in the netplan YAML file.
I can see there are two entries, and they are the default Linode DNS servers discovered via DHCP. I'm going to keep them as is, since they are good enough for my use case.
If you want to use your own servers here – Follow this guide

Let's change the symlink to point to this file instead of stub-resolv.conf:
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
Now it's pointing to the right file.

Let's restart systemd-resolved:
sudo systemctl restart systemd-resolved

Now you can resolve DNS and install packages, etc
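A quick sanity-check sketch (resolvectl ships with systemd; dig comes from the dnsutils package):
resolvectl status | head -n 15
dig +short ubuntu.com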

Docker compose script file for the PI-Hole
sudo mkdir /Docker_Images/
sudo mkdir /Docker_Images/Piholev6/

Lets navigate to this directory and start setting up our environment
nano /Docker_Images/Piholev6/docker-compose.yml
version: '3.4'
services:
  Pihole:
    container_name: pihole_v6
    image: pihole/pihole:latest
    hostname: Multicastbits-DNSService
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"
      - "4343:443/tcp"
    environment:
      TZ: America/New_York
      DNS1: 1.1.1.1
      DNS2: 8.8.8.8
      WEBPASSWORD: F1ghtm4_Keng3n4sura
      ServerIP: 45.33.73.186
      enable_ipv6: "true"
      ServerIPv6: 2600:3c03::f03c:92ff:feb9:ea9c
    volumes:
      - '${ROOT}/pihole/etc-pihole/:/etc/pihole/'
      - '${ROOT}/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/'
    dns:
      - 127.0.0.1
      - 1.1.1.1
    cap_add:
      - NET_ADMIN
    restart: always
networks:
  default:
    external:
      name: Piholev6
Let's break this down a little bit:
- Version – Declare Docker compose version
- container_name – This is the name of the container on the docker container registry
- image – What image to pull from the Docker Hub
- hostname – This is the hostname for the Docker container; this name will show up in lookups when you are using this Pi-hole
- ports – What ports should be NATed via the Docker bridge to the host VM
- TZ – Time zone
- DNS1 – DNS server used within the container
- DNS2 – DNS server used within the container
- WEBPASSWORD – Password for the Pi-hole web console
- ServerIP – Use the IPv4 address assigned to the VM's network interface (you need this for the Pi-hole to respond on that IP for DNS queries)
- enable_ipv6 – Enable/disable IPv6 support
- ServerIPv6 – Use the IPv6 address assigned to the VM's network interface (you need this for the Pi-hole to respond on that IP for DNS queries)
- volumes – These volumes hold the configuration data so the container settings and historical data persist across reboots
- cap_add:- NET_ADMIN – Add Linux capabilities to edit the network stack – link
- restart: always – This will make sure the container gets restarted every time the VM boots up – Link
- networks:default:external:name: Piholev6 – Set the container to use the network bridge we created before
Now let's bring up the Docker container:
docker-compose up -d
The -d switch will bring up the Docker container in the background.
Run 'docker ps' to confirm.

Now you can access the web interface and use the Pi-hole.

Verifying it's using the bridge network you created
Grab the network ID for the bridge network we created before and use the inspect switch to check the config:
docker network ls

docker network inspect f7ba28db09ae
This will bring up the full configuration for the Linux bridge we created, and the containers attached to the bridge will be visible under the "Containers" key.

Testing
I manually configured my workstation's primary DNS to point at the Pi-hole's IPs.

Updating the docker Image
Pull the new image from the Registry
docker pull pihole/pihole

Take down the current container
docker-compose down
Run the new container
docker-compose up -d

Your settings will persist through the update.
Securing the install
Now that we have a working Pi-hole with IPv6 enabled, we can log in, configure the Pi-hole server, and resolve DNS as needed.
But this is open to the public internet and will fall victim to DNS reflection attacks, etc.
Let's set up firewall rules and open the relevant ports (DNS, SSH, HTTPS) only to the relevant IP addresses before we proceed.
Disable IPtables from the docker daemon
Ubuntu uses UFW (Uncomplicated Firewall) as an abstraction layer to make things easier for operators, but by default Docker opens ports using iptables with higher precedence, so rules added via UFW don't take effect.
We need to tell Docker not to do this when launching a container, so we can manage the firewall rules via UFW.
This file may not exist already; if so, nano will create it for you:
sudo nano /etc/docker/daemon.json
Add the following lines to the file
{
"iptables": false
}
restart the docker services
sudo systemctl restart docker
Doing this might disrupt communication with the container until we allow the traffic back in using UFW rules, so keep that in mind.
Automatically updating Firewall Rules based on the DYN DNS Host records
we are going to create a shell script and run it every hour using crontab
Shell Script Dry run
- Get the IP from the DYNDNS Host records
- Remove/clean up existing rules
- Add default deny rules
- Add allow rules using the resolved IPs as the source
Dynamic IP addresses are updated on the following DNS records
- trusted-Network01.selfip.net
- trusted-Network02.selfip.net
Let's start by creating the script file under /bin:
sudo touch /bin/PIHolefwruleupdate.sh
sudo chmod +x /bin/PIHolefwruleupdate.sh
sudo nano /bin/PIHolefwruleupdate.sh
Now let's build the script:
#!/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
now=$(date +"%m/%d/%T")
DYNDNSNetwork01="trusted-Network01.selfip.net"
DYNDNSNetwork02="trusted-Network02.selfip.com"
#Get the network IP using dig
Network01_CurrentIP=`dig +short $DYNDNSNetwork01`
Network02_CurrentIP=`dig +short $DYNDNSNetwork02`
echo "-----------------------------------------------------------------"
echo Network A WAN IP $Network01_CurrentIP
echo Network B WAN IP $Network02_CurrentIP
echo "Script Run time : $now"
echo "-----------------------------------------------------------------"
#update firewall Rules
#reset firewall rules
#
sudo ufw --force reset
#
#Re-enable Firewall
#
sudo ufw --force enable
#
#Enable inbound default Deny firewall Rules
#
sudo ufw default deny incoming
#
#add allow Rules to the relevant networks
#
sudo ufw allow from $Network01_CurrentIP to any port 22 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 8080 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 53 proto udp
sudo ufw allow from $Network02_CurrentIP to any port 53 proto udp
#add the IPv6 DNS allow-all rule - still working on an effective way to lock this down; with IPv6 the risk is minimal
sudo ufw allow 53/udp
#find and delete the allow any to any IPv4 Rule for port 53
sudo ufw --force delete $(sudo ufw status numbered | grep '53*.*Anywhere.' | grep -v v6 | awk -F"[][]" '{print $2}')
echo "--------------------end Script------------------------------"
Let's run the script to make sure it's working.
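A sketch of a manual test run (the ufw calls inside the script need root):
sudo /bin/PIHolefwruleupdate.sh
# eyeball the resulting rule set
sudo ufw status numbered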


I used an online port scanner to confirm.

Setup Scheduled job with logging
Let's use crontab to set up a scheduled job that runs this script every hour.
Make sure the script is in the /bin folder with executable permissions.
Run crontab -e (if you are launching this for the first time, it will ask you to pick an editor; I picked nano).
crontab -e

Add the following line
0 * * * * /bin/PIHolefwruleupdate.sh >> /var/log/PIHolefwruleupdate_Cronoutput.log 2>&1
Let's break this down:
0 * * * *
This runs the script whenever the minute hits zero, i.e., once every hour.
/bin/PIHolefwruleupdate.sh
The path of the script to execute.
/var/log/PIHolefwruleupdate_Cronoutput.log 2>&1
The log file to write to, with stderr captured as well (2>&1).
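To confirm the job is registered and actually firing, a quick sketch:
crontab -l
# cron activity lands in syslog on Ubuntu
grep CRON /var/log/syslog | tail -n 5
tail -n 20 /var/log/PIHolefwruleupdate_Cronoutput.log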
We will proceed assuming:
- You have already configured the ASA with the primary link
- You have configured WAN2 on a port with a static IP or DHCP, depending on the connection – you should be able to ping the secondary WAN link's gateway from the ASA
Note:
Please remove the existing Static Route for the primary WAN link
Configure Route tracking
ASA(config)# route outside 0.0.0.0 0.0.0.0 <ISP 1(WAN1) Gateway> 1 track 1
ASA(config)# route Backup_Wan 0.0.0.0 0.0.0.0 <ISP 2 (WAN2) Gateway> 254
Now let's break it down:
Line 01 – adds the WAN1 default route with an administrative distance of 1 and includes the track 1 statement for SLA monitor tracking (see below)
Line 02 – adds the default route for the Backup_Wan link with a higher administrative distance to make it the secondary link
Examples
ASA(config)# route outside 0.0.0.0 0.0.0.0 100.100.100.10 1 track 1
ASA(config)# route Backup_Wan 0.0.0.0 0.0.0.0 200.200.200.10 254
Setup SLA monitoring and Route tracking
ASA(config)# sla monitor 10
Configure the SLA monitor with ID 10
ASA(config-sla-monitor)# type echo protocol ipIcmpEcho 8.8.8.8 interface outside
Configure the monitoring protocol, the target IP for the probe, and the interface to use.
The SLA monitor will keep probing the IP we define here and report if it's unreachable via the given interface.
In this scenario I'm using 8.8.8.8 as the target IP; you can use any public IP for monitoring.
ASA(config-sla-monitor-echo)# num-packets 4
Number of packets sent to the probe
ASA(config-sla-monitor-echo)# timeout 1000
Timeout value in milliseconds. If you have a slow link as the primary, increase the timeout accordingly.
ASA(config-sla-monitor-echo)# frequency 10
Frequency of the probe in seconds – SLA monitor will probe the IP every 10 seconds
ASA(config)# sla monitor schedule 10 life forever start-time now
Set the ASA to start the SLA monitor now and keep it running forever.
ASA(config)# track 1 rtr 10 reachability
This command tells the ASA to keep tracking the SLA monitor with ID 10 and ties it to the default route defined with "track 1".
If the probe fails to reach the target IP (in this case 8.8.8.8) via the designated interface, the ASA removes the route defined with "track 1" from the routing table.
The next best route, in this scenario the backup ISP route with an administrative distance of 254, takes its place.
Configure dynamic NAT Rules (Important)
nat (inside,<ISP 1 (WAN1) Interface Name>) source dynamic any interface
nat (inside,<ISP 2 (WAN2) Interface Name>) source dynamic any interface
Examples
nat (inside,outside) source dynamic any interface
nat (inside,Backup_Wan) source dynamic any interface
This method worked well for me personally, and keep in mind I'm no Cisco guru, so if I made a mistake or if you feel like there is a better way to do this, please leave a comment. It's all about the community after all.
Until next time stay awesome internetz
Few things to note
- If you want to prevent directory traversal, we need to set up chroot with vsftpd (not covered in this KB)
- For the demo I just used unencrypted FTP on port 21 to keep things simple. Please use FTPS (FTP over TLS) with a Let's Encrypt certificate for better security; I will cover this in another article and link it here
Update and Install packages we need
sudo dnf update
sudo dnf install net-tools lsof unzip zip tree policycoreutils-python-utils-2.9-20.el8.noarch vsftpd nano setroubleshoot-server -y
Setup Groups and Users and security hardening
Create the Service admin account
sudo useradd ftpadmin
sudo passwd ftpadmin
Create the group
sudo groupadd FTP_Root_RW
Create FTP only user shell for the FTP users
echo -e '#!/bin/sh\necho "This account is limited to FTP access only."' | sudo tee -a /bin/ftponly
sudo chmod a+x /bin/ftponly
echo "/bin/ftponly" | sudo tee -a /etc/shells
Create FTP users
sudo useradd ftpuser01 -m -s /bin/ftponly
sudo useradd ftpuser02 -m -s /bin/ftponly
sudo passwd ftpuser01
sudo passwd ftpuser02
Add the users to the group
sudo usermod -a -G FTP_Root_RW ftpuser01
sudo usermod -a -G FTP_Root_RW ftpuser02
sudo usermod -a -G FTP_Root_RW ftpadmin
Disable SSH Access for the FTP users.
Edit sshd_config
sudo nano /etc/ssh/sshd_config
Add the following line to the end of the file
DenyUsers ftpuser01 ftpuser02
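Reload sshd so the DenyUsers entry takes effect; a small sketch (the config test first helps avoid locking yourself out with a typo):
sudo sshd -t
sudo systemctl reload sshd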
Open ports on the VM Firewall
sudo firewall-cmd --permanent --add-port=20-21/tcp
#Allow the passive Port-Range we will define it later on the vsftpd.conf
sudo firewall-cmd --permanent --add-port=60000-65535/tcp
#Reload the ruleset
sudo firewall-cmd --reload
Setup the Second Disk for FTP DATA
Attach another disk to the VM and reboot if you haven’t done this already
Run lsblk to check the current disks and partitions detected by the system:
lsblk

Create the XFS partition
sudo mkfs.xfs /dev/sdb
# use mkfs.ext4 for ext4
Why XFS? https://access.redhat.com/articles/3129891

Create the folder for the mount point
sudo mkdir /FTP_DATA_DISK
Update the /etc/fstab file and add the following line:
sudo nano /etc/fstab
/dev/sdb /FTP_DATA_DISK xfs defaults 1 2
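Optionally, and a bit more robust: mount by UUID instead of the device name, since /dev/sdX names can change between boots. A sketch:
sudo blkid /dev/sdb
# then reference the reported UUID in /etc/fstab instead, e.g.
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /FTP_DATA_DISK xfs defaults 1 2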
Mount the disk
sudo mount -a
Testing
mount | grep sdb

Setup the VSFTPD Data and Log Folders
Setup the FTP Data folder
sudo mkdir /FTP_DATA_DISK/FTP_Root -p
Create the log directory
sudo mkdir /FTP_DATA_DISK/_logs/ -p
Set permissions
sudo chgrp -R FTP_Root_RW /FTP_DATA_DISK/FTP_Root/
sudo chmod 775 -R /FTP_DATA_DISK/FTP_Root/
Setup the VSFTPD Config File
Back up the default vsftpd.conf and create a new one
sudo mv /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpdconfback
sudo nano /etc/vsftpd/vsftpd.conf
#KB Link - ####
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=002
dirmessage_enable=YES
ftpd_banner=Welcome to multicastbits Secure FTP service.
chroot_local_user=NO
chroot_list_enable=NO
chroot_list_file=/etc/vsftpd/chroot_list
listen=YES
listen_ipv6=NO
userlist_file=/etc/vsftpd/user_list
pam_service_name=vsftpd
userlist_enable=YES
userlist_deny=NO
listen_port=21
connect_from_port_20=YES
local_root=/FTP_DATA_DISK/FTP_Root/
xferlog_enable=YES
vsftpd_log_file=/FTP_DATA_DISK/_logs/vsftpd.log
log_ftp_protocol=YES
dirlist_enable=YES
download_enable=NO
pasv_enable=Yes
pasv_max_port=65535
pasv_min_port=60000
Add the FTP users to the userlist file
Backup the Original file
sudo mv /etc/vsftpd/user_list /etc/vsftpd/user_listBackup
echo "ftpuser01" | sudo tee -a /etc/vsftpd/user_list
echo "ftpuser02" | sudo tee -a /etc/vsftpd/user_list
sudo systemctl start vsftpd
sudo systemctl enable vsftpd
sudo systemctl status vsftpd
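At this point you can already sanity-check the service from another machine with curl; a sketch with a placeholder IP (uploads also depend on the SELinux changes in the next section):
# list the FTP root as ftpuser01 (curl uses passive mode by default)
curl -u ftpuser01 ftp://192.0.2.10/
# try a small upload
curl -u ftpuser01 -T testfile.txt ftp://192.0.2.10/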

Setup SELinux
Instead of throwing our hands up and disabling SELinux, we are going to set up the policies correctly.
Find the available policies using getsebool -a | grep ftp
getsebool -a | grep ftp
ftpd_anon_write --> off
ftpd_connect_all_unreserved --> off
ftpd_connect_db --> off
ftpd_full_access --> off
ftpd_use_cifs --> off
ftpd_use_fusefs --> off
ftpd_use_nfs --> off
ftpd_use_passive_mode --> off
httpd_can_connect_ftp --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off
tftp_home_dir --> off
Set SELinux boolean values
sudo setsebool -P ftpd_use_passive_mode on
sudo setsebool -P ftpd_use_cifs on
sudo setsebool -P ftpd_full_access 1
"setsebool" is a tool for setting SELinux boolean values, which control various aspects of the SELinux policy.
"-P" specifies that the boolean value should be set permanently, so that it persists across system reboots.
"ftpd_use_passive_mode" is the name of the boolean value that should be set. This boolean value controls whether the vsftpd FTP server should use passive mode for data connections.
"on" specifies that the boolean value should be set to "on", which means that vsftpd should use passive mode for data connections.
Enable ftp_home_dir (set it to on) if you are using chroot.
Add a new file context rule to the system.
sudo semanage fcontext -a -t public_content_rw_t "/FTP_DATA_DISK/FTP_Root/(/.*)?"
"fcontext" is short for "file context", which refers to the security context that is associated with a file or directory.
"-a" specifies that a new file context rule should be added to the system.
"-t" specifies the new file context type that should be assigned to files or directories that match the rule.
"public_content_rw_t" is the name of the new file context type that should be assigned to files or directories that match the rule. It is a predefined SELinux type that allows services such as FTP, HTTP, and Samba to read and write content in shared public directories.
"/FTP_DATA_DISK/FTP_Root/(/.*)?" specifies the file path pattern that the rule should match. The pattern includes the "/FTP_DATA_DISK/FTP_Root/" directory and any subdirectories or files beneath it; the trailing regular expression "(/.*)?" matches any path that may follow the "/FTP_DATA_DISK/FTP_Root/" directory.
In summary, this command sets the file context type for all files and directories under the "/FTP_DATA_DISK/FTP_Root/" directory and its subdirectories to "public_content_rw_t", which allows read and write access to these files and directories.
Reset the SELinux security context for all files and directories under the “/FTP_DATA_DISK/FTP_Root/”
sudo restorecon -Rvv /FTP_DATA_DISK/FTP_Root/
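Once it finishes, you can quickly check that the new contexts landed with ls -Z; a sketch:
ls -ldZ /FTP_DATA_DISK/FTP_Root/
ls -lZ /FTP_DATA_DISK/FTP_Root/ | head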
"restorecon" is a tool that resets the SELinux security context for files and directories to their default values.
"-R" specifies that the operation should be recursive, meaning that the security context should be reset for all files and directories under the specified directory.
"-vv" specifies that the command should run in verbose mode, which provides more detailed output about the operation.
"/FTP_DATA_DISK/FTP_Root/" is the path of the directory whose security context should be reset.
Setup Fail2ban
Install fail2ban
sudo dnf install fail2ban
Create the jail.local file
This file is used to override the config blocks in /etc/fail2ban/jail.conf
sudo nano /etc/fail2ban/jail.local
[vsftpd]
enabled = true
port = ftp,ftp-data,ftps,ftps-data
logpath = /FTP_DATA_DISK/_logs/vsftpd.log
maxretry = 5
bantime = 7200
Make sure to update the logpath directive to match the vsftpd log file we defined on the vsftpd.conf file
sudo systemctl start fail2ban
sudo systemctl enable fail2ban
sudo systemctl status fail2ban
journalctl -u fail2ban will help you narrow down any issues with the service
Testing
sudo tail -f /var/log/fail2ban.log

Fail2ban injects and manages the following rich rules

Client will fail to connect using FTP until the ban is lifted

Removing banned IPs
#get the list of banned IPs
sudo fail2ban-client get vsftpd banned
#Remove a specific IP from the list
sudo fail2ban-client set vsftpd unbanip <IP>
#Remove/Reset all the banned IP lists
sudo fail2ban-client unban --all
This should get you up and running. For the demo I just used unencrypted FTP on port 21 to keep things simple; please use FTPS (FTP over TLS) with a Let's Encrypt certificate for better security. I will cover this in another article and link it here.
- The Architecture of Trust
- Role of the API server
- Role of etcd cluster
- How the Loop Actually Works
- As an example, let’s look at a simple nginx workload deployment
- 1) Intent (Desired State)
- 2) Watch (The Trigger)
- 3) Reconcile (Close the Gap)
- 4) Status (Report Back)
- The Loop Doesn’t Protect You From Yourself
- Why This Pattern Matters Outside Kubernetes
- Ref
I've been diving deep into systems architecture lately, specifically Kubernetes.
Strip away the UIs, the YAML, and the ceremony, and Kubernetes boils down to:
A very stubborn event driven collection of control loops
aka the reconciliation (Control) loop, and everything I read is calling this the “gold standard” for distributed control planes.
Because it decomposes the control plane into many small, independent loops, each continuously correcting drift rather than trying to execute perfect one-shot workflows. These loops are triggered by events or state changes, but what they do is determined by the spec (desired state) versus the observed state (status).
Now we have both:
- spec: desired state
- status: observed state
Kubernetes lives in that gap.
When spec and status match, everything’s quiet. When they don’t, something wakes up to ensure current state matches the declared state.
The Architecture of Trust
In Kubernetes, components don't coordinate via direct peer-to-peer orchestration; they coordinate by writing to and watching one shared "state."
That state lives behind the API server, and the API server validates it and persists it into etcd.
Role of the API server
The API server is the front door to the cluster’s shared truth: it’s the only place that can accept, validate, and persist declared intent as Kubernetes API objects (metadata/spec/status).
When you install a CRD, you're extending the API itself with a new type (a new endpoint) and a schema the API server can validate against.
When we use kubectl apply (or any client) to submit YAML/JSON to the API server, the API server validates it (built-in rules, CRD OpenAPI v3 schema / CEL rules, and potentially admission webhooks) and rejects invalid objects before they’re stored.
If the request passes validation, the API server persists the object into etcd (the whole API object, not just “intent”), and controllers/operators then watch that stored state and do the reconciliation work to make reality match it.
Once stored, controllers/operators (loops) watch those objects and run reconciliation to push the real world toward what’s declared.
It turns out that in practice, most controllers don't act directly on raw watch events; they consume changes through informer caches and queue work onto a rate-limited workqueue. They also often watch related/owned resources (secondary watches), not just the primary object, to stay convergent.
The spec is often user-authored, as discussed above, but it isn't exclusively human-written; the scheduler and some controllers also update parts of it (e.g., scheduling decisions/bindings and defaulting).
Role of etcd cluster
etcd is the control plane's durable record: the authoritative reference for what the cluster believes should exist and what it currently reports.
If an intent (an API object) isn't in etcd, controllers can't converge on it, because there's nothing recorded to reconcile toward.
This makes the system inherently self-healing because it trusts the declared state and keeps trying to morph the world to match until those two align.
One tidbit worth noting:
In production, nodes, runtimes, and cloud load balancers can drift independently. Controllers treat those systems as observed state, and they keep measuring reality against what the API says should exist.
How the Loop Actually Works
Kubernetes isn't one loop. It's a bunch of loops (controllers) that all behave the same way (see the sketch after this list):
- read desired state (what the API says should exist)
- observe actual state (what’s really happening)
- calculate the diff
- push reality toward the spec
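Just to make the shape of that loop concrete, here's a toy shell version against a hypothetical nginx Deployment. Real controllers are level-triggered Go programs built on informers and workqueues, not polling scripts; this sketch only shows the read/observe/diff part of the cycle:
#!/bin/bash
# toy reconcile-style loop: compare desired vs observed replica counts for one Deployment
while true; do
  desired=$(kubectl get deployment nginx -o jsonpath='{.spec.replicas}')
  observed=$(kubectl get deployment nginx -o jsonpath='{.status.readyReplicas}')
  observed=${observed:-0}   # readyReplicas is omitted from status while it is zero
  if [ "$desired" != "$observed" ]; then
    echo "drift: desired=$desired observed=$observed (the real controllers/kubelet will close this gap)"
  fi
  sleep 10
done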

As an example, let’s look at a simple nginx workload deployment
1) Intent (Desired State)
To deploy the nginx workload, you run:
kubectl apply -f nginx.yaml
The API server validates the object (and its schema, if it’s a CRD-backed type) and writes it into etcd.
At that point, Kubernetes has only recorded your intent. Nothing has “deployed” yet in the physical sense. The cluster has simply accepted:
“This is what the world should look like.”
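For completeness, a minimal nginx.yaml equivalent applied inline as a heredoc (a sketch: one replica, default namespace, image tag assumed):
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
EOF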
2) Watch (The Trigger)
Controllers and schedulers aren’t polling the cluster like a bash script with a sleep 10.
They watch the API server.
When desired state changes, the loop responsible for it wakes up, runs through its logic, and acts:
“New desired state: someone wants an Nginx Pod.”
Watches aren't gospel. Events can arrive twice, late, or never, and your controller still has to converge. Controllers use list+watch patterns with periodic resync as a safety net. The point isn't perfect signals; it's building a loop that stays correct under imperfect signals.
Controllers also don't spin constantly; they queue work. Events enqueue object keys; workers dequeue and reconcile; failures requeue with backoff. This keeps one bad object from melting the control plane.
3) Reconcile (Close the Gap)
Here’s the mental map that made sense to me:
Kubernetes is a set of level-triggered control loops. You declare desired state in the API, and independent loops keep working until the real world matches what you asked for.
- Controllers (Deployment/ReplicaSet/etc.) watch the API for desired state and write more desired state.
- Example: a Deployment creates/updates a ReplicaSet; a ReplicaSet creates/updates Pods.
- The scheduler finds Pods with no node assigned and picks a node.
- It considers resource requests, node capacity, taints/tolerations, node selectors, (anti)affinity, topology spread, and other constraints.
- It records its decision by setting spec.nodeName on the Pod.
- The kubelet on the chosen node notices “a Pod is assigned to me” and makes it real.
- pulls images (if needed) via the container runtime (CRI)
- sets up volumes/mounts (often via CSI)
- triggers networking setup (CNI plugins do the actual wiring)
- starts/monitors containers and reports status back to the API
Each component writes its state back into the API, and the next loop uses that as input. No single component “runs the whole workflow.”
One property makes this survivable: reconcile must be safe to repeat (idempotent). The loop might run once or a hundred times (retries, resyncs, restarts, duplicate/missed watch events), and it should still converge to the same end result.
If the desired state is already satisfied, reconcile should do nothing; if something is missing, it should fill the gap, without creating duplicates or making things worse.
When concurrent updates happen (two controllers might try to update the same object at the same time), Kubernetes handles this with optimistic concurrency. Every object has a resourceVersion ("what version of this object did you read?"). If you try to write an update using an older version, the API server rejects it (often as a conflict).
Then the flow is: re-fetch the latest object, apply your change again, and retry.
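You can see this behaviour from the CLI, since kubectl replace sends back the resourceVersion you read; a sketch using the nginx Deployment from earlier:
# grab the object, including its current resourceVersion
kubectl get deployment nginx -o yaml > /tmp/nginx-old.yaml
# simulate another writer updating the object in the meantime
kubectl scale deployment nginx --replicas=2
# replaying the stale copy is rejected with a conflict
kubectl replace -f /tmp/nginx-old.yaml
# roughly: "Operation cannot be fulfilled on deployments.apps \"nginx\": the object has been modified; please apply your changes to the latest version and try again"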
4) Status (Report Back)
Once the pod is actually running, status flows back into the API.
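You can watch that convergence from the CLI; a quick sketch:
kubectl rollout status deployment/nginx
kubectl get deployment nginx -o jsonpath='{.spec.replicas} desired / {.status.readyReplicas} ready{"\n"}'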
The Loop Doesn’t Protect You From Yourself
What if the declared state says to delete something critical like kube-proxy or a CNI component? The loop doesn’t have opinions. It just does what the spec says.
A few things keep this from being a constant disaster:
- Control plane components are special. The API server, etcd, scheduler, and controller-manager usually run as static pods managed directly by the kubelet, not through the API. The reconciliation loop can't easily delete the thing running the reconciliation loop as long as its manifest exists on disk.
- DaemonSets recreate pods. Delete a kube-proxy pod and the DaemonSet controller sees “desired: 1, actual: 0” and spins up a new one. You’d have to delete the DaemonSet itself.
- RBAC limits who can do what. Most users can’t touch kube-system resources.
- Admission controllers can reject bad changes before they hit etcd.
But in the end, if your source of truth says "delete this," the system will try. The model assumes your declared state is correct. Garbage in, garbage out.
Why This Pattern Matters Outside Kubernetes
This pattern shows up anywhere you manage state over time.
Scripts are fine until they aren’t:
- they assume the world didn’t change since last run
- they fail halfway and leave junk behind
- they encode “steps” instead of “truth”
A loop is simpler:
- define the desired state
- store it somewhere authoritative
- continuously reconcile reality back to it
Ref
- So you wanna write Kubernetes controllers?
- What does idempotent mean in software systems? • Particular Software
- The Principle of Reconciliation
- Controllers | Kubernetes
- Reference | Kubernetes
- How Operators work in Kubernetes | Red Hat Developer
- Good Practices – The Kubebuilder Book
- Understanding Kubernetes controllers part I – queues and the core controller loop – LeftAsExercise
So recently I ran into this annoying error message with Exchange 2016 CU11 Update.
Environment info-
- Exchange 2016 upgrade from CU8 to CU11
- Exchange binaries are installed under D:\Microsoft\Exchange_Server_V15\..
Microsoft.PowerShell.Commands.GetItemCommand.ProcessRecord()".
[12/04/2018 16:41:43.0233] [1] [ERROR] Cannot find path 'D:\Microsoft\Exchange_Server_V15\UnifiedMessaging\grammars' because it does not exist.
[12/04/2018 16:41:43.0233] [1] [ERROR-REFERENCE] Id=UnifiedMessagingComponent___99d8be02cb8d413eafc6ff15e437e13d Component=EXCHANGE14:\Current\Release\Shared\Datacenter\Setup
[12/04/2018 16:41:43.0234] [1] Setup is stopping now because of one or more critical errors.
[12/04/2018 16:41:43.0234] [1] Finished executing component tasks.
[12/04/2018 16:41:43.0318] [1] Ending processing Install-UnifiedMessagingRole
[12/04/2018 16:44:51.0116] [0] CurrentResult setupbase.maincore:396: 0
[12/04/2018 16:44:51.0118] [0] End of Setup
[12/04/2018 16:44:51.0118] [0] **********************************************
Root Cause
I ran the setup again and it failed with the same error.
While going through the log files, I noticed that the setup looks for this file path while configuring the "Mailbox role: Unified Messaging service" (stage 6 on the GUI installer):
$grammarPath = join-path $RoleInstallPath "UnifiedMessaging\grammars\*";
There was no folder named grammars under the path specified in the error.
Just to confirm, I checked another server on CU8, and the grammars folder is there.
Not sure why the folder got removed; it may have happened during the first run of the CU11 setup that failed.
Resolution
My first thought was to copy the folder from an existing CU8 server, but to avoid any issues (since Exchange is sensitive to file versions),
I created an empty folder named "grammars" under D:\Microsoft\Exchange_Server_V15\UnifiedMessaging\
Then I ran the setup again; it continued the upgrade process and completed without any issues... ¯\_(ツ)_/¯
[12/04/2018 18:07:50.0416] [2] Ending processing Set-ServerComponentState
[12/04/2018 18:07:50.0417] [2] Beginning processing Write-ExchangeSetupLog
[12/04/2018 18:07:50.0420] [2] Install is complete. Server state has been set to Active.
[12/04/2018 18:07:50.0421] [2] Ending processing Write-ExchangeSetupLog
[12/04/2018 18:07:50.0422] [1] Finished executing component tasks.
[12/04/2018 18:07:50.0429] [1] Ending processing Start-PostSetup
[12/04/2018 18:07:50.0524] [0] CurrentResult setupbase.maincore:396: 0
[12/04/2018 18:07:50.0525] [0] End of Setup
[12/04/2018 18:07:50.0525] [0] **********************************************
Considering the cost of this software, M$ really has to get better about error handling IMO; I have run into silly issues like this way too many times since Exchange 2010.



















