The output will show the Designation, Current Usage, and Bus Address for each slot:
Designation: CPU SLOT1 PCI-E 4.0 X16
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT2 PCI-E 4.0 X8
Current Usage: In Use
Bus Address: 0000:41:00.0
Designation: CPU SLOT3 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:c1:00.0
Designation: CPU SLOT4 PCI-E 4.0 X8
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT5 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:c2:00.0
Designation: CPU SLOT6 PCI-E 4.0 X16
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT7 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:81:00.0
Designation: PCI-E M.2-M1
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: PCI-E M.2-M2
Current Usage: Available
Bus Address: 0000:ff:00.0
We can use lspci -s #BusAddress# to identify what's installed in each slot.
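For example, to check one of the in-use slots from the listing above, or to loop over every populated slot in one go (a quick sketch, assuming the listing came from sudo dmidecode -t slot; empty slots report 0000:ff:00.0):
lspci -s 0000:41:00.0
#loop over all populated slots
for addr in $(sudo dmidecode -t slot | awk '/Bus Address/ {print $3}'); do
    [ "$addr" = "0000:ff:00.0" ] && continue
    echo "== $addr =="
    lspci -s "$addr"
done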
I'm sure there is a much more elegant way to do this, but this worked as a quick-ish way to find what I needed. If you know a better way, please share in the comments.
Just something that came up while setting up a monitoring script using mailx; figured I'll note it down here so I can get to it easily later when I need it 😀
Important prerequisites
You need to enable SMTP basic auth on Office 365 for the account used for authentication
Create an App password for the user account
The nssdb folder must be available and readable by the user running the mailx command
Assuming all of the above prerequisites are $true, we can proceed with the setup
Install mailx
RHEL/AlmaLinux
sudo dnf install mailx
NSSDB Folder
Make sure the nssdb folder is available and readable by the user running the mailx command
certutil -L -d /etc/pki/nssdb
The output might be empty, but that's OK; the database is there in case you need to add a locally signed cert or another CA cert manually. Microsoft certs are trusted by default if you are on an up-to-date operating system with the system-wide trust store.
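If you ever do need to add an internal CA manually, it looks like this (a sketch; the nickname and file path are placeholders):
sudo certutil -A -d /etc/pki/nssdb -n "Internal-Root-CA" -t "CT,," -i /tmp/internal-ca.pem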
Append the following lines to /etc/mail.rc, and comment out or remove the same lines if they are already defined in the existing config file
set smtp=smtp.office365.com
set smtp-auth-user=###[email protected]###
set smtp-auth-password=##Office365-App-password#
set nss-config-dir=/etc/pki/nssdb/
set ssl-verify=ignore
set smtp-use-starttls
set from="###[email protected]###"
This is the bare minimum needed; other switches are documented here – link
The -v switch will print the verbose debug log to the console:
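For example, a quick test send (the recipient is a placeholder) produces a session log like the one below:
echo "Test message from mailx" | mailx -v -s "mailx O365 test" [email protected]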
Connecting to 52.96.40.242:smtp . . . connected.
220 xxde10CA0031.outlook.office365.com Microsoft ESMTP MAIL Service ready at Sun, 6 Aug 2023 22:14:56 +0000
>>> EHLO vls-xxx.multicastbits.local
250-MN2PR10CA0031.outlook.office365.com Hello [167.206.57.122]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-STARTTLS
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
>>> STARTTLS
220 2.0.0 SMTP server ready
>>> EHLO vls-xxx.multicastbits.local
250-xxde10CA0031.outlook.office365.com Hello [167.206.57.122]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-AUTH LOGIN XOAUTH2
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
>>> AUTH LOGIN
334 VXNlcm5hbWU6
>>> Zxxxxxxxxxxxc0BmdC1zeXMuY29t
334 UGsxxxxxmQ6
>>> c2Rxxxxxxxxxxducw==
235 2.7.0 Authentication successful
>>> MAIL FROM:<###[email protected]###>
250 2.1.0 Sender OK
>>> RCPT TO:<[email protected]>
250 2.1.5 Recipient OK
>>> DATA
354 Start mail input; end with <CRLF>.<CRLF>
>>> .
250 2.0.0 OK <[email protected]> [Hostname=Bsxsss744.namprd11.prod.outlook.com]
>>> QUIT
221 2.0.0 Service closing transmission channel
Now you can use this in your automation scripts or timers using the mailx command
#!/bin/bash
log_file="/etc/app/runtime.log"
recipient="[email protected]"
subject="Log file from /etc/app/runtime.log"
# Check if the log file exists
if [ ! -f "$log_file" ]; then
    echo "Error: Log file not found: $log_file"
    exit 1
fi
# Use mailx to send the log file as an attachment
echo "Sending log file..."
mailx -s "$subject" -a "$log_file" -r "[email protected]" "$recipient" < /dev/null
echo "Log file sent successfully."
The above commands change the file’s owner and group to root, then set the file permissions to 600, which means only the owner (root) has read and write permissions and other users have no access to the file.
Use environment variables: avoid storing sensitive information like passwords directly in the mail.rc file; consider using environment variables for sensitive data and reference those variables in the configuration.
For example, in the mail.rc file, you can set:
set smtp-auth-password=$MY_EMAIL_PASSWORD
You can set the variable via another config file, inject it from Ansible Vault at runtime, or use something like HashiCorp Vault.
Sure, I would rather just use Python or PowerShell Core, but you will run into locked-down environments like OCI-managed DB servers where mailx is preinstalled and is the only tool you can use 🙁
The fact that you are here means you are already in the same boat. Hope this helped… until next time
After looking around for a few hours and digging into the logs, I figured out the issue; hopefully this helps someone else out there in the same situation save some time.
Make sure the IPVS mode is enabled on the cluster configuration
If you are using :
RKE2 – edit the cluster.yaml file
RKE1 – Edit the cluster configuration from the rancher UI > Cluster management > Select the cluster > edit configuration > edit as YAML
Locate the services field under rancher_kubernetes_engine_config and add the following options to enable IPVS
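A minimal sketch of what that looks like in the cluster YAML (the lc scheduler here is an assumption, matching the missing ip_vs_lc module in the log below):
rancher_kubernetes_engine_config:
  services:
    kubeproxy:
      extra_args:
        proxy-mode: ipvs
        ipvs-scheduler: lc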
Make sure the Kernel modules are enabled on the nodes running control planes
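Since kube-proxy runs in a container and can't modprobe the host (see the log below), load the modules on the node itself; a sketch (trim the list to the scheduler you actually use):
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_lc nf_conntrack; do sudo modprobe "$m"; done
#persist across reboots
printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_lc nf_conntrack | sudo tee /etc/modules-load.d/ipvs.conf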
Background
Example Rancher – RKE1 cluster
sudo docker ps | grep proxy # find the container ID for kube-proxy
sudo docker logs ####containerID###
I0313 21:44:08.315888 108645 feature_gate.go:245] feature gates: &{map[]}
I0313 21:44:08.346872 108645 proxier.go:652] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack_ipv4"
E0313 21:44:08.347024 108645 server_others.go:107] "Can't use the IPVS proxier" err="IPVS proxier will not be used because the following required kernel modules are not loaded: [ip_vs_lc]"
kube-proxy is trying to load the needed kernel modules and failing, so it cannot enable IPVS
This guide will walk you through extending the root filesystem on an AlmaLinux, CentOS, or RHEL server/desktop/VM.
Method A – Expanding the current disk
Edit the VM and Add space to the Disk
Install the cloud-utils-growpart package; the growpart command it provides makes it really easy to extend partitioned virtual disks.
sudo dnf install cloud-utils-growpart
Verify that the VM’s operating system recognizes the new increased size of the sda virtual disk, using lsblk or fdisk -l
sudo fdisk -l
Notes -
Note down the disk ID and the partition number for the Linux LVM partition – in this demo the disk ID is sda and the LVM partition is sda3
Let's trigger a rescan of the block devices (disks)
#elevate to root
sudo su
#trigger a rescan, Make sure to match the disk ID you noted down before
echo 1 > /sys/block/sda/device/rescan
exit
Now sudo fdisk -l shows the correct size of the disks
Use growpart to increase the partition size for the LVM partition
sudo growpart /dev/sda 3
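Depending on your LVM version, the volume group may not see the new space until the physical volume is resized; if vgs shows no free space after growpart, this extra (hedged) step usually fixes it:
sudo pvresize /dev/sda3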
Confirm the volume group name
sudo vgs
Extend the logical volume
sudo lvextend -l +100%FREE /dev/almalinux/root
Grow the file system size
sudo xfs_growfs /dev/almalinux/root
Notes -
You can use these same steps to add space to other partitions such as home or swap if needed
Method B – Adding a second disk to the LVM and expanding space
Why add a second disk?
Maybe the current disk is locked due to a snapshot and you can't remove it; the only solution then is to add a second disk.
Check the current space available
sudo df -h
Notes -
If you have 0% (~1 MB) left on cs-root, command auto-completion with Tab and some of the later commands won't work; you should clear up at least 4-10 MB by removing log files, temp files, etc.
Mount an additional disk to the VM (Assuming this is a VM) and make sure the disk is visible on the OS level
sudo lvmdiskscan
OR
sudo fdisk -l
Confirm the volume group name
sudo vgs
Let's increase the space
First, let's initialize the new disk we mounted. Note that there is no need to run mkfs on /dev/sdb first; pvcreate takes the raw disk (and will wipe any existing filesystem signature), and the root filesystem gets grown at the end with xfs_growfs.
Create the Physical volume
sudo pvcreate /dev/sdb
extend the volume group
sudo vgextend cs /dev/sdb
Volume group "cs" successfully extended
Extend the logical volume
sudo lvextend -l +100%FREE /dev/cs/root
Grow the file system size
sudo xfs_growfs /dev/cs/root
Confirm the changes
sudo df -h
Just making it easy for us!!
#Method A - Expanding the current disk
#AlmaLinux
sudo dnf install cloud-utils-growpart
sudo lvmdiskscan
sudo fdisk -l #note down the disk ID and partition num
sudo su #elevate to root
echo 1 > /sys/block/sda/device/rescan #trigger a rescan
exit #exit root shell
sudo growpart /dev/sda 3 #grow the LVM partition
sudo pvresize /dev/sda3 #if the VG does not see the new space
sudo vgs #confirm the volume group name
sudo lvextend -l +100%FREE /dev/almalinux/root
sudo xfs_growfs /dev/almalinux/root
sudo df -h
#Method B - Adding a second Disk
#CentOS
sudo lvmdiskscan
sudo fdisk -l
sudo vgs
sudo pvcreate /dev/sdb
sudo vgextend cs /dev/sdb
sudo lvextend -l +100%FREE /dev/cs/root
sudo xfs_growfs /dev/cs/root
sudo df -h
#AlmaLinux
sudo lvmdiskscan
sudo fdisk -l
sudo vgs
sudo pvcreate /dev/sdb
sudo vgextend almalinux /dev/sdb
sudo lvextend -l +100%FREE /dev/almalinux/root
sudo xfs_growfs /dev/almalinux/root
sudo df -h
If you want to prevent directory traversal, you need to set up chroot with vsftpd (not covered in this KB)
For the demo I just used unencrypted FTP on port 21 to keep things simple; please use FTPS with a Let's Encrypt certificate for better security. I will cover this in another article and link it here
Update and Install packages we need
sudo dnf update
sudo dnf install net-tools lsof unzip zip tree policycoreutils-python-utils-2.9-20.el8.noarch vsftpd nano setroubleshoot-server -y
Setup Groups and Users and security hardening
Create the Service admin account
sudo useradd ftpadmin
sudo passwd ftpadmin
Create the group
sudo groupadd FTP_Root_RW
Create an FTP-only user shell for the FTP users
echo -e '#!/bin/sh\necho "This account is limited to FTP access only."' | sudo tee -a /bin/ftponly
sudo chmod a+x /bin/ftponly
echo "/bin/ftponly" | sudo tee -a /etc/shells
Create FTP users
sudo useradd ftpuser01 -m -s /bin/ftponly
sudo useradd ftpuser02 -m -s /bin/ftponly
sudo passwd ftpuser01
sudo passwd ftpuser02
Add the users to the group
sudo usermod -a -G FTP_Root_RW ftpuser01
sudo usermod -a -G FTP_Root_RW ftpuser02
sudo usermod -a -G FTP_Root_RW ftpadmin
Disable SSH Access for the FTP users.
Edit sshd_config
sudo nano /etc/ssh/sshd_config
Add the following line to the end of the file
DenyUsers ftpuser01 ftpuser02
Open ports on the VM Firewall
sudo firewall-cmd --permanent --add-port=20-21/tcp
#Allow the passive port range; we will define it later in vsftpd.conf
sudo firewall-cmd --permanent --add-port=60000-65535/tcp
#Reload the ruleset
sudo firewall-cmd --reload
Setup the Second Disk for FTP DATA
Attach another disk to the VM and reboot if you haven't done so already
Run lsblk to check the current disks and partitions detected by the system
sudo mv /etc/vsftpd/user_list /etc/vsftpd/user_listBackup
echo "ftpuser01" | sudo tee -a /etc/vsftpd/user_list
echo "ftpuser02" | sudo tee -a /etc/vsftpd/user_list
Instead of putting our hands up and disabling SELinux, we are going to set up the policies correctly
Find the available policies using getsebool -a | grep ftp
getsebool -a | grep ftp
ftpd_anon_write --> off
ftpd_connect_all_unreserved --> off
ftpd_connect_db --> off
ftpd_full_access --> off
ftpd_use_cifs --> off
ftpd_use_fusefs --> off
ftpd_use_nfs --> off
ftpd_use_passive_mode --> off
httpd_can_connect_ftp --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off
tftp_home_dir --> off
Set SELinux boolean values
sudo setsebool -P ftpd_use_passive_mode on
sudo setsebool -P ftpd_use_cifs on
sudo setsebool -P ftpd_full_access 1
"setsebool" is a tool for setting SELinux boolean values, which control various aspects of the SELinux policy.
"-P" specifies that the boolean value should be set permanently, so that it persists across system reboots.
"ftpd_use_passive_mode" is the name of the boolean value that should be set. This boolean value controls whether the vsftpd FTP server should use passive mode for data connections.
"on" specifies that the boolean value should be set to "on", which means that vsftpd should use passive mode for data connections.
Enable ftp_home_dir --> on if you are using chroot
Add a new file context rule to the system.
sudo semanage fcontext -a -t public_content_rw_t "/FTP_DATA_DISK/FTP_Root/(/.*)?"
"fcontext" is short for "file context", which refers to the security context that is associated with a file or directory.
"-a" specifies that a new file context rule should be added to the system.
"-t" specifies the new file context type that should be assigned to files or directories that match the rule.
"public_content_rw_t" is the name of the new file context type that should be assigned to files or directories that match the rule. In this case, "public_content_rw_t" is a predefined SELinux type that allows read and write access to files and directories in public directories, such as /var/www/html.
"/FTP_DATA_DISK/FTP_Root/(/.)?" specifies the file path pattern that the rule should match. The pattern includes the "/FTP_DATA_DISK/FTP_Root/" directory and any subdirectories or files beneath it. The regular expression "/(.)?" matches any file or directory name that may follow the "/FTP_DATA_DISK/FTP_Root/" directory path.
In summary, this command sets the file context type for all files and directories under the "/FTP_DATA_DISK/FTP_Root/" directory and its subdirectories to "public_content_rw_t", which allows read and write access to these files and directories.
Reset the SELinux security context for all files and directories under the “/FTP_DATA_DISK/FTP_Root/”
sudo restorecon -Rvv /FTP_DATA_DISK/FTP_Root/
"restorecon" is a tool that resets the SELinux security context for files and directories to their default values.
"-R" specifies that the operation should be recursive, meaning that the security context should be reset for all files and directories under the specified directory.
"-vv" specifies that the command should run in verbose mode, which provides more detailed output about the operation.
"/FTP_DATA_DISK/FTP_Root/" is the path of the directory whose security context should be reset.
Setup Fail2ban
Install fail2ban
sudo dnf install fail2ban
Create the jail.local file
This file is used to override the config blocks in /etc/fail2ban/jail.conf
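A minimal sketch of a jail.local that enables the vsftpd jail (the port list and ban values are assumptions; tune them to taste):
sudo nano /etc/fail2ban/jail.local
# /etc/fail2ban/jail.local - example content
[vsftpd]
enabled = true
port = 20,21,60000:65535
logpath = %(vsftpd_log)s
maxretry = 5
bantime = 3600
Then enable and start the service:
sudo systemctl enable --now fail2ban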
journalctl -u fail2ban will help you narrow down any issues with the service
Testing
sudo tail -f /var/log/fail2ban.log
Fail2ban injects and manages the following rich rules in firewalld
The client will fail to connect using FTP until the ban is lifted
Remove the ban IP list
#get the list of banned IPs
sudo fail2ban-client get vsftpd banned
#Remove a specific IP from the list
sudo fail2ban-client set vsftpd unbanip <IP>
#Remove/Reset all the banned IP lists
sudo fail2ban-client unban --all
This should get you up and running. Again, for the demo I just used unencrypted FTP on port 21 to keep things simple; please use FTPS with a Let's Encrypt certificate for better security. I will cover this in another article and link it here.
If you found this page, you already know why you are looking for this: your server's /dev/mapper/cs-root is filled up because /var/lib/docker is taking up most of the space.
You can change the location of the Docker overlay2 storage directory by modifying the daemon.json file. Here's how to do it:
Open or create the daemon.json file using a text editor:
sudo nano /etc/docker/daemon.json
{
"data-root": "/path/to/new/location/docker"
}
Replace “/path/to/new/location/docker” with the path to the new location of the overlay2 directory.
If the file already contains other configuration settings, add the "data-root" key alongside them, for example next to an existing "storage-driver" setting, as sketched below.
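For example, a combined file might look like this (the storage-driver line is just illustrative):
{
  "storage-driver": "overlay2",
  "data-root": "/path/to/new/location/docker"
}
Then stop the daemon, copy the existing data over, and restart; a hedged sketch of the usual procedure:
sudo systemctl stop docker
sudo rsync -aP /var/lib/docker/ /path/to/new/location/docker/
sudo systemctl start docker
docker info --format '{{.DockerRootDir}}' #confirm the new data root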
Root CA cert is pushed out to all Servers/Desktops – This happens by default
Contents
Setup CA Certificate template
Deploy Auto-enrolled Certificates via Group Policy
Powershell logon script to set the WinRM listener
Deploy the script as a logon script via Group Policy
Testing
1 – Setup the CA certificate template to allow client servers/desktops to check out the certificate from the CA
Connect to the Certification Authority Microsoft Management Console (MMC)
Navigate to Certificate Templates > Manage
On the “Certificate templates Console” window > Select Web Server > Duplicate Template
Under the new Template window Set the following attributes
General – Pick a name and validity period – this is up to you
Compatibility – Set the compatibility attributes (you can leave these on the default values; it's up to you)
Subject Name – Set ‘Subject Name’ attributes (Important)
Security – Add “Domain Computers” Security Group and Set the following permissions
Read – Allow
Enroll – Allow
Autoenroll – Allow
Click “OK” to save and close out of “Certificate template console”
Issue to the new template
Go back to the “The Certification Authority Microsoft Management Console” (MMC)
Under templates (Right click the empty space) > Select New > Certificate template to Issue
Under the Enable Certificate template window > Select the Template you just created
Allow a few minutes for ADDS to replicate and pick up the changes within the forest
2 – Deploy Auto-enrolled Certificates via Group Policy
Create a new GPO
Windows Settings > Security Settings > Public Key Policies/Certificate Services Client – Auto-Enrollment Settings
Link the GPO to the relevant OU with in your ADDS environment
Note – You can push out the root CA cert as a trusted root certificate with this same policy if you want to force computers to pick up the CA cert.
Testing
If you need to test it, run gpupdate /force or reboot your test machine; the server VM/PC will pick up a certificate from the ADCS PKI
3 – PowerShell logon script to set the WinRM listener
Dry run
Setup the log file
Check for the certificate matching the machine's FQDN, auto-enrolled from AD CS
If it exists
Set up the HTTPS WinRM listener and bind the certificate
Write log
else
Write log
#Malinda Rathnayake- 2020
#
#variable
$Date = Get-Date -Format "dd_MM_yy"
$port=5986
$SessionRunTime = Get-Date -Format "dd_yyyy_HH-mm"
#
#Setup Logs folder and log File
$ScriptVersion = '1.0'
$locallogPath = "C:\_Scripts\_Logs\WINRM_HTTPS_ListenerBinding"
#
$logging_Folder = (New-Item -Path $locallogPath -ItemType Directory -Name $Date -Force)
$ScriptSessionlogFile = New-Item $logging_Folder\ScriptSessionLog_$SessionRunTime.txt -Force
$ScriptSessionlogFilePath = $ScriptSessionlogFile.VersionInfo.FileName
#
#Check for the auto-enrolled SSL cert
$RootCA = "Company-Root-CA" #change This
$hostname = ([System.Net.Dns]::GetHostByName(($env:computerName))).Hostname
$certinfo = (Get-ChildItem -Path Cert:\LocalMachine\My\ |? {($_.Subject -Like "CN=$hostname") -and ($_.Issuer -Like "CN=$RootCA*")})
$certThumbprint = $certinfo.Thumbprint
#
#Script-------------------------------------------------------
#
#Remove the existing WinRM listener if there is any
Get-ChildItem WSMan:\Localhost\Listener | Where -Property Keys -eq "Transport=HTTPS" | Remove-Item -Recurse -Force
#
#If the client certificate exists Setup the WinRM HTTPS listener with the cert else Write log
if ($certThumbprint){
#
New-Item -Path WSMan:\Localhost\Listener -Transport HTTPS -Address * -CertificateThumbprint $certThumbprint -HostName $hostname -Force
#
netsh advfirewall firewall add rule name="Windows Remote Management (HTTPS-In)" dir=in action=allow protocol=TCP localport=$port
#
Add-Content -Path $ScriptSessionlogFilePath -Value "Cert binding with the HTTPS WinRM listener completed"
Add-Content -Path $ScriptSessionlogFilePath -Value "$($certinfo.Subject)"}
else{
Add-Content -Path $ScriptSessionlogFilePath -Value "No cert matching the server FQDN found; please run gpupdate /force or reboot the system"
}
The script is commented to explain each section (I should have used functions, but I was pressed for time and never got around to it; if you fix it up and improve it, please let me know in the comments :D)
4 – Deploy the script as a logon script via Group Policy
Setup a GPO and set this script as a logon Powershell script
I'm using a user policy with GPO loopback processing set to Merge, applied to the server OU
Testing
To confirm WinRM is listening on HTTPS, type the following commands:
winrm enumerate winrm/config/listener
Winrm get http://schemas.microsoft.com/wbem/wsman/1/config
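From another machine you can also verify the HTTPS listener end to end; a quick sketch (the hostname is a placeholder):
Test-WSMan -ComputerName server01.corp.local -UseSSL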
VSX is a cluster technology that allows two VSX switches to run independent control planes (OSPF/BGP) and present themselves as separate routers in the network. In the data path, however, they function as a single router and support active-active forwarding.
VSX lets you mitigate the inherent issues of the shared control plane that comes with traditional stacking while maintaining all the benefits:
Control plane: Inter-Switch-Link and Keepalive
Data plane L2: MCLAGs
Data plane L3: Active gateway
This is a very similar technology compared to Dell VLT stacking with Dell OS10
Basic feature comparison with Dell VLT stacking

| Feature | Dell VLT Stacking | Aruba VSX |
| --- | --- | --- |
| Supports multi-chassis LAG | ✅ | ✅ |
| Independent control planes | ✅ | ✅ |
| All active-gateway configuration (L3 load balancing) | ✅ (VLT Peer routing) | ✅ (VSX Active forwarding) |
| Layer 3 packet load balancing | ✅ | ✅ |
| Can participate in spanning tree (MST/RSTP) | ✅ | ✅ |
| Gateway IP redundancy | ✅ (VRRP) | ✅ (VSX Active Gateway or VRRP) |
Setup Guide
What do you need?
A 10/25/40/100GE port for the inter-switch link
A VSX-capable switch; VSX is only supported on switches above the CX 6300 SKU
| Switch Series | VSX |
| --- | --- |
| CX 6200 series | X |
| CX 6300 series | X |
| CX 6400 series | ✅ |
| CX 8200 series | ✅ |
| CX 8320/8325 series | ✅ |
| CX 8360 series | ✅ |
Updated 2020-Dec
For this guide I'm using an 8325 series switch
Dry run
Setup LAG interface for the inter-switch link (ISL)
Create the VSX cluster
Setup a keepalive link and a new VRF for the keepalive traffic
Setup LAG interface for the inter-switch link (ISL)
In order to form the VSX cluster, we need a LAG interface for the inter-switch communication.
Naturally, I picked the fastest ports on the switch (10/25/40/100GE) to create this.
Depending on what switch you have, the ISL bandwidth can be a limitation/bottleneck; account for this factor when designing a VSX-based solution, and utilize VSX active forwarding or active gateways to mitigate it.
Create the LAG interface
This is a regular port channel with no special configuration; you need to create it on both switches
Native VLAN needs to be the default VLAN
Trunk port and All VLANs allowed
CORE01#
interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit
-------------------------------
CORE02#
interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit
Add/Assign the physical ports to the LAG interface
I’m using two 100GE ports for the ISL LAG
CORE01#
interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit
-------------------------------
CORE02#
interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit
Do the same configuration on the VSX Peer switch (Second Switch)
Connect the cables via DAC/Optical and confirm the Port-channel health
CORE01# sh lag 256
System-ID : b8:d4:e7:d5:36:00
System-priority : 65534
Aggregate lag256 is up
Admin state is up
Description : VSX-LAG
Type : normal
MAC Address : b8:d4:e7:d5:36:00
Aggregated-interfaces : 1/1/55 1/1/56
Aggregation-key : 256
Aggregate mode : active
Hash : l3-src-dst
LACP rate : slow
Speed : 200000 Mb/s
Mode : trunk
-------------------------------------------------------------------
CORE02# sh lag 256
System-ID : b8:d4:e7:d5:f3:00
System-priority : 65534
Aggregate lag256 is up
Admin state is up
Description : VSX-LAG
Type : normal
MAC Address : b8:d4:e7:d5:f3:00
Aggregated-interfaces : 1/1/55 1/1/56
Aggregation-key : 256
Aggregate mode : active
Hash : l3-src-dst
LACP rate : slow
Speed : 200000 Mb/s
Mode : trunk
Form the VSX Cluster
Under configuration mode, go into the VSX context by entering "vsx" and issue the following commands on both switches
CORE01#
vsx
inter-switch-link lag 256
role primary
linkup-delay-timer 30
-------------------------------
CORE02#
vsx
inter-switch-link lag 256
role secondary
linkup-delay-timer 30
Check the VSX Status
CORE01# sh vsx status
VSX Operational State
---------------------
ISL channel : In-Sync
ISL mgmt channel : operational
Config Sync Status : In-Sync
NAE : peer_reachable
HTTPS Server : peer_reachable
Attribute Local Peer
------------ -------- --------
ISL link lag256 lag256
ISL version 2 2
System MAC b8:d4:e7:d5:36:00 b8:d4:e7:d5:f3:00
Platform 8325 8325
Software Version GL.10.06.0001 GL.10.06.0001
Device Role primary secondary
----------------------------------------
CORE02# sh vsx status
VSX Operational State
---------------------
ISL channel : In-Sync
ISL mgmt channel : operational
Config Sync Status : In-Sync
NAE : peer_reachable
HTTPS Server : peer_reachable
Attribute Local Peer
------------ -------- --------
ISL link lag256 lag256
ISL version 2 2
System MAC b8:d4:e7:d5:f3:00 b8:d4:e7:d5:36:00
Platform 8325 8325
Software Version GL.10.06.0001 GL.10.06.0001
Device Role secondary primary
Setup the Keepalive Link
It's recommended to set up a keepalive link to avoid split-brain scenarios if the ISL goes down; we are trying to prevent both switches from thinking they are the active device, which would create STP loops and other issues on the network
This is not a must-have, but it's nice to have. As of Aruba CX OS 10.06.x you need to sacrifice one of your data ports for this
Dell OS10 VLT achieves this via the OOBM network ports; supposedly, keepalive over OOBM is something Aruba is working on for future releases
A few things to note
It's recommended to use a routed port in a separate VRF for the keepalive link
You can use a 1Gbps link for this if needed
Provision the port and VRF
CORE01#
vrf KEEPALIVE
interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.1/24
exit
-----------------------------------------
CORE02#
vrf KEEPALIVE
interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.2/24
exit
Define the Keepalive link
Note – Remember to define the VRF in the keepalive statement
Thanks /u/illumynite for pointing that out
CORE01#
vsx
inter-switch-link lag 256
role primary
keepalive peer 192.168.168.2 source 192.168.168.1 vrf KEEPALIVE
linkup-delay-timer 30
-----------------------------------------
CORE02#
vsx
inter-switch-link lag 256
role secondary
keepalive peer 192.168.168.1 source 192.168.168.2 vrf KEEPALIVE
linkup-delay-timer 30
Let me address the question of why I decided to put a DNS server (Pi-hole) on the internet (not fully open, but still).
I needed/wanted to set up an Umbrella/NextDNS/CF-type DNS server that's publicly accessible but locked down to certain IP addresses.
Sure, NextDNS is an option and it's cheap with similar features, but I wanted to roll my own solution so I can learn a few things along the way.
I can easily set this up for my family members who have minimal technical knowledge and can't deal with yet another device (a Raspberry Pi) plugged into their home network.
This will also serve as a quick and dirty guide on how to use Docker Compose and address some issues with running Pi-hole and Docker with UFW on Ubuntu 20.x
So let's get stahhhted…….
Scope
Setup Pi-hole as a docker container on a VM
Enable IPV6 support
Setup UFW rules to restrict traffic, plus a cronjob to keep the rules updated with the dynamic WAN IPs
Deploy and test
What we need
Linux VM (Ubuntu, Hardened BSD, etc)
Docker and Docker Compose
Dynamic DNS service to track the changing IP (Dyndns,no-Ip, etc)
Deployment
Setup a Dynamic DNS solution to track your dynamic WAN IP
For this demo we are going to use DynDNS, since I already own a paid account and it's supported on most platforms (routers, UTMs, NAS devices, IP camera DVRs, etc.)
Use some google-fu; there are multiple ways to do this without having to pay for the service. All we need is a DNS record that's kept up to date with your current public IP address.
For Network A and Network B, I'm going to use the routers' built-in DDNS update features
Network A gateway – UDM Pro
Network B Gateway – Netgear R6230
Confirmation
Setup the VM with Docker-compose
Pick your service provider; you should be able to use a free-tier VM for this since it's just DNS
Linode
AWS lightsail
IBM cloud
Oracle cloud
Google Compute
Digital Ocean droplet
Make sure you have a dedicated (static) IPv4 and IPv6 address attached to the resource
For this deployment, I'm going to use a Linode Nanode, due to their native IPv6 support and because I prefer their platform for personal projects
Let's configure the Docker networking side to fit our needs
Create a separate bridge network for the Pi-hole container
I guess you could use the default bridge network, but I like to create one to keep things organized; this way the service can be isolated from the other containers I have. A sketch of the command follows.
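A minimal sketch of the command (the network name matches what the compose file references later; the IPv6 subnet is a placeholder ULA range):
docker network create --ipv6 --subnet "fd00:d0ca::/64" Piholev6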
With the new Ubuntu 20.x releases, systemd starts a local DNS stub listener on 127.0.0.53:53,
which will prevent the container from starting, because Pi-hole needs to bind to the same port (UDP 53).
We could disable the service, but that breaks DNS resolution on the VM, causing more headaches and pain for automation and updates.
After some google-fu and tinkering around, this is the workaround I found.
Disable the stub-listener
Change the /etc/resolv.conf symlink to point at /run/systemd/resolve/resolv.conf
Push the external name servers so the VM won't look at loopback to resolve DNS
Restart systemd-resolved
Resolving Conflicts with the systemd-resolved stub listener
We need to disable the stub listener that's bound to port 53; as I mentioned before, this breaks local DNS resolution on the VM, but we will fix that in a bit.
sudo nano /etc/systemd/resolved.conf
Find the line "DNSStubListener=yes", uncomment it, and change it to "no"
After this, we need to push the external DNS servers to the box; this setting is stored in the following file
/etc/resolv.conf
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.53
But we can't manually update this file with our own DNS servers; let's investigate
ls -l /etc/resolv.conf
It's a symlink to another system file:
/run/systemd/resolve/stub-resolv.conf
When you take a look at the directory where that file resides, there are two files.
When you look at the other file, you will see that /run/systemd/resolve/resolv.conf is the one that actually carries the external name servers.
You still can't manually edit this file; it gets updated with whatever IPs are handed out as DNS servers via DHCP, and netplan will dictate the IPs based on the static DNS servers you configure in the netplan YAML file.
I can see there are two entries, and they are the default Linode DNS servers discovered via DHCP; I'm going to keep them as-is, since they are good enough for my use case.
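To apply the workaround from the dry run above, repoint the symlink and restart the resolver (a sketch of the two commands):
#point /etc/resolv.conf at the file carrying the real upstream servers
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
#restart the resolver so UDP 53 is freed up for the Pi-hole container
sudo systemctl restart systemd-resolved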
container_name – This is the name of the container on the docker container registry
image – What image to pull from the Docker Hub
hostname – This is the host-name for the Docker container – this name will show up on your lookup when you are using this Pi-hole
ports – What ports should be NATed via the Docker Bridge to the host VM
TZ – Time Zone
DNS1 – DNS server used within the image
DNS2 – DNS server used within the image
WEBPASSWORD – Password for the Pi-hole web console
ServerIP – Use the IPv4 address assigned to the VM's network interface (you need this for Pi-hole to respond on that IP for DNS queries)
IPv6 – Enable/Disable IPv6 support
ServerIPv6 – Use the IPv6 address assigned to the VM's network interface (you need this for Pi-hole to respond on that IP for DNS queries)
volumes – These volumes will hold the configuration data so the container settings and historical data will persist reboots
cap_add:- NET_ADMIN – Add Linux capabilities to edit the network stack – link
restart: always – This will make sure the container gets restarted every time the VM boots up – Link
networks:default:external:name: Piholev6 – Set the container to use the network bridge we created before
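Putting it all together, a minimal sketch of the compose file described above; every value below (IPs, time zone, password) is a placeholder, so substitute your own:
version: "3"
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    hostname: pihole01
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"
    environment:
      TZ: "America/New_York"
      DNS1: "1.1.1.1"
      DNS2: "1.0.0.1"
      WEBPASSWORD: "ChangeMe"
      ServerIP: "203.0.113.10"
      IPv6: "true"
      ServerIPv6: "2001:db8::10"
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    restart: always
networks:
  default:
    external:
      name: Piholev6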
Now let's bring up the Docker container
docker-compose up -d
The -d switch will bring up the Docker container in the background
Run docker ps to confirm
Now you can access the web interface and use the Pihole
Verify it's using the bridge network you created
Grab the network ID for the bridge network we created before and use the inspect switch to check the config
docker network ls
docker network inspect f7ba28db09ae
This will bring up the full configuration for the Linux bridge we created and the containers attached to the bridge will be visible under the “Containers”: tag
Testing
I manually configured my workstation's primary DNS to the Pi-hole IPs
Updating the docker Image
Pull the new image from the Registry
docker pull pihole/pihole
Take down the current container
docker-compose down
Run the new container
docker-compose up -d
Your settings will persist through the update
Securing the install
Now that we have a working Pi-hole with IPv6 enabled, we can log in, configure the Pi-hole server, and resolve DNS as needed.
But this is open to the public internet and will fall victim to DNS reflection attacks, etc.
So let's set up firewall rules and open up the relevant ports (DNS, SSH, HTTPS) only to the relevant IP addresses before we proceed.
Disable IPtables from the docker daemon
Ubuntu uses UFW (Uncomplicated Firewall) as an abstraction layer to make things easier for operators, but by default Docker opens ports using iptables rules that take precedence, so rules added via UFW don't take effect.
We need to tell Docker not to do this when launching a container, so we can manage the firewall rules via UFW.
This file may not exist yet; if so, nano will create it for you
sudo nano /etc/docker/daemon.json
Add the following lines to the file
{
"iptables": false
}
Restart the Docker service
sudo systemctl restart docker
Doing this might disrupt communication with the containers until we allow traffic back in using UFW rules, so keep that in mind.
Automatically updating firewall rules based on the DynDNS host records
We are going to create a shell script and run it every hour using crontab
Shell Script Dry run
Get the IP from the DYNDNS Host records
remove/Cleanup existing rules
Add Default deny Rules
Add allow rules using the resolved IPs as the source
Dynamic IP addresses are updated on the following DNS records
trusted-Network01.selfip.net
trusted-Network02.selfip.net
Let's start by creating the script file under /bin
#!/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
now=$(date +"%m/%d/%T")
DYNDNSNetwork01="trusted-Network01.selfip.net"
DYNDNSNetwork02="trusted-Network02.selfip.net"
#Get the network IP using dig
Network01_CurrentIP=`dig +short $DYNDNSNetwork01`
Network02_CurrentIP=`dig +short $DYNDNSNetwork02`
echo "-----------------------------------------------------------------"
echo Network A WAN IP $Network01_CurrentIP
echo Network B WAN IP $Network02_CurrentIP
echo "Script Run time : $now"
echo "-----------------------------------------------------------------"
#update firewall Rules
#reset firewall rules
#
sudo ufw --force reset
#
#Re-enable Firewall
#
sudo ufw --force enable
#
#Enable inbound default Deny firewall Rules
#
sudo ufw default deny incoming
#
#add allow Rules to the relevant networks
#
sudo ufw allow from $Network01_CurrentIP to any port 22 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 8080 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 53 proto udp
sudo ufw allow from $Network02_CurrentIP to any port 53 proto udp
#add the IPv6 DNS allow-all rule - working on finding an effective way to lock this down; with IPv6 the risk is minimal
sudo ufw allow 53/udp
#find and delete the allow any to any IPv4 Rule for port 53
sudo ufw --force delete $(ufw status numbered | grep '53*.*Anywhere.' | grep -v v6 | awk -F"[][]" '{print $2}')
echo "--------------------end Script------------------------------"
Let's run the script to make sure it's working
I used an online port scanner to confirm
Setup Scheduled job with logging
Let's use crontab and set up a scheduled job to run this script every hour
Make sure the script is copied to the /bin folder with the executable permissions
Using crontab -e (if you are launching this for the first time, it will ask you to pick the editor; I picked nano), add an entry like the one below
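The crontab entry would look something like this (the script path and log file are assumptions; point them at your own):
#run the UFW update script at the top of every hour
0 * * * * /bin/update-ufw-rules.sh >> /var/log/ufw-dyndns.log 2>&1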
Let's say we need to advertise connected routes within VRFs via an IGP to an upstream or downstream peer; this is one of many ways to reach that objective.
For this example, we are going to use BGP to collect the connected routes and advertise them over OSPF
interface vlan250
mode L3
description OSPF_Routing
no shutdown
ip vrf forwarding Shared_VRF
ip address 10.252.250.6/29
ip ospf 250 area 0.0.0.0
ip ospf mtu-ignore
ip ospf priority 10