VRF Setup with route leaking guide Dell S4112F-ON – OS 10.5.1.3
Scope –
Create three VRFs for three separate clients
Create a shared VRF (Shared_VRF)
Leak routes from each tenant VRF into the Shared_VRF

Logical overview


Create the VRFs
ip vrf Tenant01_VRF
!
ip vrf Tenant02_VRF
!
ip vrf Tenant03_VRF
!
ip vrf Shared_VRF
Create and initialize the interfaces (SVIs, Layer 3 interfaces, loopbacks)
We are creating Layer 3 SVIs per tenant
interface vlan200
 mode L3
 description Tenant01_NET01
 no shutdown
 ip vrf forwarding Tenant01_VRF
 ip address 10.251.100.254/24
!
interface vlan201
 mode L3
 description Tenant01_NET02
 no shutdown
 ip vrf forwarding Tenant01_VRF
 ip address 10.251.101.254/24
!
interface vlan210
 mode L3
 description Tenant02_NET01
 no shutdown
 ip vrf forwarding Tenant02_VRF
 ip address 172.17.100.254/24
!
interface vlan220
 mode L3
 description Tenant03_NET01
 no shutdown
 ip vrf forwarding Tenant03_VRF
 ip address 192.168.110.254/24
!
interface vlan250
 mode L3
 description OSPF_Routing
 no shutdown
 ip vrf forwarding Shared_VRF
 ip address 10.252.250.6/29
Confirmation
LABCORE# show ip interface brief
Interface Name     IP-Address            OK    Method    Status    Protocol
=========================================================================================
Vlan 200           10.251.100.254/24     YES   manual    up        up
Vlan 201           10.251.101.254/24     YES   manual    up        up
Vlan 210           172.17.100.254/24     YES   manual    up        up
Vlan 220           192.168.110.254/24    YES   manual    up        up
Vlan 250           10.252.250.6/29       YES   manual    up        up
LABCORE# show ip vrf
VRF-Name        Interfaces
Shared_VRF      Vlan250
Tenant01_VRF    Vlan200-201
Tenant02_VRF    Vlan210
Tenant03_VRF    Vlan220
default         Vlan1
management      Mgmt1/1/1
Route leaking
For this example we are going to leak routes from each of these tenant VRFs into the Shared VRF.
This design allows the VLANs in the different VRFs to reach each other, which can be a security issue; however, you can easily control this by:
- narrowing the leaked routes down to specific hosts
- using access-lists (not the most ideal, but if you have a playbook you can program this in without any issues)
Real-world use cases may differ; use this as a template for how to leak routes between VRFs, and update your config as needed.
Create the route export statements within the VRFs
ip vrf Shared_VRF
 ip route-import 2:100
 ip route-import 3:100
 ip route-import 4:100
 ip route-export 1:100
ip vrf Tenant01_VRF
 ip route-export 2:100
 ip route-import 1:100
ip vrf Tenant02_VRF
 ip route-export 3:100
 ip route-import 1:100
ip vrf Tenant03_VRF
 ip route-export 4:100
 ip route-import 1:100
Let's explain this a bit
ip vrf Shared_VRF
 ip route-import 2:100 -----------> Import leaked routes from target 2:100
 ip route-import 3:100 -----------> Import leaked routes from target 3:100
 ip route-import 4:100 -----------> Import leaked routes from target 4:100
 ip route-export 1:100 -----------> Export routes with target 1:100
If you need to filter which routes can be imported, use a route-map with prefix-lists to do the filtering.
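As a sketch of that filtering approach (the prefix-list and route-map names here are made up for the example, and the exact `route-export ... route-map` syntax should be verified against your OS10 release documentation), you could match only the host routes you want to share and attach the route-map to the export statement:

```
ip prefix-list TENANT01_ALLOWED seq 10 permit 10.251.100.10/32
!
route-map TENANT01_EXPORT permit 10
 match ip address prefix-list TENANT01_ALLOWED
!
ip vrf Tenant01_VRF
 ip route-export 2:100 route-map TENANT01_EXPORT
```

With this in place, only the prefixes permitted by the route-map are tagged with 2:100 and leaked, instead of the whole tenant table.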
Set up static routes per VRF as needed
ip route vrf Tenant01_VRF 10.251.100.0/24 interface vlan200
ip route vrf Tenant01_VRF 10.251.101.0/24 interface vlan201
!
ip route vrf Tenant02_VRF 172.17.100.0/24 interface vlan210
!
ip route vrf Tenant03_VRF 192.168.110.0/24 interface vlan220
!
ip route vrf Shared_VRF 0.0.0.0/0 10.252.250.1 interface vlan250
- Now these static routes will be leaked and learned by the Shared VRF
- The default route on the Shared VRF will be learned downstream by the tenant VRFs
- If, instead of a default route on the Shared VRF, you scope the route to a specific IP or subnet, you can prevent traffic from routing between the VRFs via the Shared VRF
- If you need routes leaked directly between tenants, use ip route-import on the VRFs as needed
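Building on the targets defined above (Tenant01_VRF exports 2:100, Tenant02_VRF exports 3:100), a direct Tenant01-to-Tenant02 leak that bypasses the Shared VRF would just cross-import each tenant's export target; a sketch:

```
ip vrf Tenant01_VRF
 ip route-import 3:100
ip vrf Tenant02_VRF
 ip route-import 2:100
```

Each tenant then learns the other's exported routes directly, without transiting the Shared_VRF table.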
Confirmation
Routes are distributed via an internal BGP process
LABCORE# show ip route vrf Tenant01_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:42
C 10.251.100.0/24 via 10.251.100.254 vlan200 0/0 12:43:46
C 10.251.101.0/24 via 10.251.101.254 vlan201 0/0 12:43:46
LABCORE#
LABCORE# show ip route vrf Tenant02_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:45
C 172.17.100.0/24 via 172.17.100.254 vlan210 0/0 12:43:49
LABCORE#
LABCORE# show ip route vrf Tenant03_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:48
C 192.168.110.0/24 via 192.168.110.254 vlan220 0/0 12:43:52
LABCORE# show ip route vrf Shared_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*S 0.0.0.0/0 via 10.252.250.1 vlan250 1/0 12:21:33
B IN 10.251.100.0/24 Direct,Tenant01_VRF vlan200 200/0 09:01:28
B IN 10.251.101.0/24 Direct,Tenant01_VRF vlan201 200/0 09:01:28
C 10.252.250.0/29 via 10.252.250.6 vlan250 0/0 12:42:53
B IN 172.17.100.0/24 Direct,Tenant02_VRF vlan210 200/0 09:01:28
B IN 192.168.110.0/24 Direct,Tenant03_VRF vlan220 200/0 09:02:09
We can ping out to the internet from the VRF IPs

Redistribute leaked routes via IGP
You can use an internal BGP process to pick up routes from any VRF and redistribute them into other IGP processes as needed – check the next article for that information
Crucial M4 SSD New Firmware and how to Flash using a USB thumb drive !!Update!!

Well, I think the title pretty much speaks for itself. Crucial released a new firmware for the M4 SSDs, and apparently it's supposed to make the drive 20% faster. I updated mine with no issues, and I didn't brick it, so it's all good here.
I looked up some benchmarks from reviews at the time of release and compared them with the benchmarks I ran after the FW update; I do get around a 20% increase, just like they say!
Crucial’s Official Release Notes:
“Release Date: 08/25/2011
Change Log:
Changes made in version 0002 (m4 can be updated to revision 0009 directly from either revision 0001 or 0002)
Improved throughput performance.
Increase in PCMark Vantage benchmark score, resulting in improved user experience in most operating systems.
Improved write latency for better performance under heavy write workloads.
Faster boot up times.
Improved compatibility with latest chipsets.
Compensation for SATA speed negotiation issues between some SATA-II chipsets and the SATA-III device.
Improvement for intermittent failures in cold boot up related to some specific host systems.”
Firmware download: http://www.crucial.com/eu/support/firmware.aspx?AID=10273954&PID=4176827&SID=1iv16ri5z4e7x
To install this from a pen drive without wasting a blank CD (I know they're really cheap, but honestly, how many of you have blank CDs or DVDs around these days?), we're going to use a nifty little program called UNetbootin.
Of course, you can also use it to boot any Linux distro from a pen drive. It's very easy; if you need help, check out the guides on the UNetbootin website.
So here we go then…
* First, download UNetbootin – http://unetbootin.sourceforge.net/
* Run the program
* Select the Diskimage radio button (as shown in the image)
* Browse and select the ISO file you downloaded from Crucial
* Type – USB Drive
* Select the drive letter of your pen drive
* Click OK
Reboot
* Go to the BIOS and put your SSD into IDE (compatibility) mode ** this is important
* Boot from your pen drive
* Follow the on-screen instructions to update
And voila!
**** Remember to set your SATA controller back to AHCI in the BIOS/EFI ****
Managing calendar permissions in Exchange Server 2010
These sharing options are not available in the EMC, so we have to use the Exchange Management Shell on the server to manipulate them.
In this example, the goal is:
- user "Nyckie" – full permissions
- all users – permission to add events, without the delete permission
To view the existing calendar permissions, use "Get-MailboxFolderPermission":
Get-MailboxFolderPermission -Identity "Networking Calendar:Calendar"
To assign calendar permissions to new users, use "Add-MailboxFolderPermission":
Add-MailboxFolderPermission -Identity "Networking Calendar:Calendar" -User [email protected] -AccessRights Owner
To change existing calendar permissions, use "Set-MailboxFolderPermission":
Set-MailboxFolderPermission -Identity "Networking Calendar:Calendar" -User Default -AccessRights NonEditingAuthor
Another useful AccessRights value is LimitedDetails – view availability data with subject and location.
source –
technet.microsoft.com
http://blog.powershell.no/2010/09/20/managing-calendar-permissions-in-exchange-server-2010/
Reducing Dell PowerEdge (PE) 2950/2900/2800 II/III fan noise – Fan mod + BMC firmware mod (Noob friendly guide)
The Dell 2950 III is one of the best bang-for-the-buck servers you can find on eBay, but there is one problem: this server runs very loud by design.
Example (video Credit David Lohle)
I have my lab setup in my room so I had to do something about this.
After wandering around in OMSA, the DRAC, and the BIOS with no luck, I turned to almighty Google for help.
Turns out Dell decided not to expose the BMC's fan-controller settings to users. They're baked into the firmware.
Reducing the noise involves two mods, hardware and firmware.
- Fan mod – lower the fan speeds to reduce the noise
- Firmware mod – lower the BMC fan RPM thresholds
Update:
I stress tested the server after the mod, check here for details – Dell PE 2950 Stress test
01. Fan mod – Lower the fan speeds to reduce the noise
I stumbled upon this post on the "Blind Caveman's blog" – http://blindcaveman.wordpress.com/2013/08/23/problem-dell-poweredge-2950-iii-jet-engine-fan-noise/
Apparently he had success adding a 47-ohm resistor in line with all four intake fans, and he has a very comprehensive guide on the mod.
I'm just going to put up a summary of what I did. (Props to Caveman for coming up with this solution.)
Items you need
- 4pc of 47ohm ½ watt resistors. (Radio shack $1.49)
- Heat Shrink. (Radio shack $4.59)
- Soldering iron.
Approximate fan voltage for a given series resistance:
10v = 12 ohms
9v = 20 ohms
8v = 30 ohms
7v = 42 ohms
Fan Mod – Steps
01. Remove the cover.
02. Remove the fans by pulling the orange tabs and gently lifting up.
03. Remove the wire clip, cut the red wire, and solder the resistor in line with the wire.
04. Re-seat the fans in the server. (Be careful not to let them touch the heat sink right next to them.)
Note:
I only modded the intake fans. The OP suggests modding the PSU fans as well, but I don't think you need to mess with the power supply fans, for three reasons:
- It's not going to make a huge difference. (My PE runs below 52 dB with just the intake fans modded.)
- The PSU is expensive to replace. (On eBay a PSU is around $100, but four Dell 2950 fans cost less than $10.)
- I believe the PSU units should run as cool and efficiently as possible.
—————————————————————————————————————————
So after the mod, I booted up the server and it was running significantly quieter. BUT… yes, there's a huge but…
Issue 01 – OSMA Errors and fan speed issues
The fan speeds were ramping up and down every few minutes.
When I monitored the fan speeds via the DRAC, it showed an error with the fans failing, since the idle RPM is now lower than the minimum RPM threshold.
What is happening
The BMC's minimum-RPM thresholds are still tuned for the stock fan speeds, so the slower modded fans trip the fan-failure alarm. Luckily, someone had already solved this; his name is Arnuschky – Link | Post link
His post is well written and to the point (kudos to you, sir), but it's not very noob-friendly. 🙁
So I'm going to make a step-by-step guide using his post as a reference, with a few more additions, for anyone who is new to open source and to messing with Dell firmware.
The solution, explained –
Arnuschky figured out the exact settings in the BMC's firmware (the values, checksums, etc.) needed to modify the fan RPM thresholds, and wrote a very nifty script that modifies those values in a downloaded firmware file.
What is the BMC (baseboard management controller)
- Among many other things, the fans are controlled by the BMC; the fan curve and all of its values are baked into the firmware.
- By design, the BMC will ramp up the fan RPM every time you add more hardware to the system, such as add-on cards, RAM, HDDs, etc.
What is IPMI
- Intelligent Platform Management Interface. This toolset can easily be installed on any Linux distribution, and after you enable IPMI in the BIOS (DRAC configuration screen) you can query sensor data from the BMC and configure parameters on it.
Procedure
Things you should know –
- This worked for many people, including me, but neither I nor anyone else involved will be held responsible for any damage caused by proceeding with the firmware mod.
- You cannot perform this mod on ESXi. But if you are running a base OS like Red Hat/CentOS/Ubuntu, you should be good to go.
- You cannot flash the firmware from inside a VM. (If you know a way, please let us know.)
- To modify the firmware you have to be on a Linux server; you can technically flash the modified firmware from a Windows server. I will add the details later in the post.
Packages required
- BMC Firmware file – Dell Drivers and support
- IPMI tools
- glibc.i686 (If you are on a 64bit OS)
I have ESXi 5.5 installed on the Dell server, so I used a CentOS 6.4 installation running off a USB stick to do the modifications and flashing.
Enable IPMI on the DRAC interface
- You can do this by logging in to the DRAC web interface or through the BIOS screen
- Press Ctrl+E during the POST screen to access the DRAC card configuration screen and enable IPMI
Setting up IPMI Tools
yum install OpenIPMI OpenIPMI-tools
Start and enable the service
chkconfig ipmi on
service ipmi start
Run the following commands to see if IPMI is working
ipmitool sdr type Temperature
Temp | 01h | ok | 3.1 | -48 degrees C
Temp | 02h | ok | 3.2 | -42 degrees C
Temp | 05h | ok | 10.1 | 40 degrees C
Temp | 06h | ok | 10.2 | 40 degrees C
Ambient Temp | 08h | ok | 7.1 | 27 degrees C
CPU Temp Interf | 76h | ns | 7.1 | Disabled
ipmitool sdr type Fan
FAN 1 RPM | 30h | ok | 7.1 | 4200 RPM
FAN 2 RPM | 31h | ok | 7.1 | 4200 RPM
FAN 3 RPM | 32h | ok | 7.1 | 4200 RPM
FAN 4 RPM | 33h | ok | 7.1 | 4200 RPM
FAN 5 RPM | 34h | ok | 7.1 | 4200 RPM
FAN 6 RPM | 35h | ok | 7.1 | 4200 RPM
Fan Redundancy | 75h | ok | 7.1 | Fully Redundant
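If you want to keep an eye on the fans after the mod, the `ipmitool sdr type Fan` output above is easy to parse. Here's a small shell sketch that flags any fan reporting below a chosen minimum RPM; the sample text stands in for a live host, where you would pipe ipmitool straight into the same awk (the 1200 RPM reading is made up to show the LOW case).

```shell
# Flag fans reporting below a minimum RPM by parsing ipmitool-style output.
MIN_RPM=1350

# Sample lines mimicking `ipmitool sdr type Fan`; replace with the real command.
sample_output='FAN 1 RPM        | 30h | ok  |  7.1 | 4200 RPM
FAN 2 RPM        | 31h | ok  |  7.1 | 1200 RPM
Fan Redundancy   | 75h | ok  |  7.1 | Fully Redundant'

echo "$sample_output" | awk -F'|' -v min="$MIN_RPM" '
  $5 ~ /RPM/ {                        # skip non-RPM rows like Fan Redundancy
    name = $1; gsub(/ +$/, "", name)  # trim trailing spaces off the sensor name
    rpm  = $5; gsub(/[^0-9]/, "", rpm)
    printf "%s: %s RPM (%s)\n", name, rpm, (rpm + 0 < min ? "LOW" : "ok")
  }'
```

With the sample data this prints FAN 1 as ok and flags FAN 2 as LOW; on the server, `ipmitool sdr type Fan | awk ...` does the same against live readings.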
Install glibc.i686
yum install glibc.i686
note:
The firmware flash program is 32-bit, and it will fail with the following error on a 64-bit OS:
/lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
Download the relevant firmware file
- Visit – http://www.dell.com/support/
- Enter your service tag
- Select OS version – Red Hat or any other Linux flavor (this lets you download the .BIN file containing the firmware, which is what we need to modify)
To save you time here’s the link for the Dell PE 2950 II, BMC firmware V2.50 – direct link
mkdir bmcfwmod
cd bmcfwmod #create project directory
wget "http://downloads.dell.com/FOLDER00928606M/1/2950_ESM_Firmware_4NNNG_LN32_2.50_A00.BIN"
Set permissions and extract the firmware .bin file
chmod 755 2950_ESM_Firmware_4NNNG_LN32_2.50_A00.BIN # make the downloaded file executable
sudo mkdir bmc_firmware # create dir as root
sudo ./2950_ESM_Firmware_4NNNG_LN32_2.50_A00.BIN --extract bmc_firmware # yes, you have to do this as root! :(
cd bmc_firmware
Note: you have to extract the .BIN file in order to proceed.
The commands above extract the firmware into the bmc_firmware folder.
Check inside the folder for a file called payload/bmcflsh.dat.
If it's not there, your system is not compatible with this mod. If it is, please continue.
Patching the firmware file
Note:
You should be in the bmc_firmware directory created above
Download and run the script
The --no-check-certificate switch is used to get around the cert issue caused by the GitHub domain-name mismatch.
wget --no-check-certificate "https://raw.github.com/arnuschky/dell-bmc-firmware/master/adjust-fan-thresholds/dell-adjust-fan-thresholds.py"
chmod 755 dell-adjust-fan-thresholds.py # set permissions
./dell-adjust-fan-thresholds.py payload/bmcflsh.dat #execute the py script on the bmcflsh.dat file
The script will prompt you with the following screen
Select your server model; in this case I selected Dell PowerEdge 2950 = number 3.
Then it will prompt you to select the fans and adjust the threshold.
On the DRAC interface the intake fans show up numbered 1-4.
I edited the values for fans 1 through 4 (only the intake fans will be affected).
Setting the value
When you select a fan number, it will ask you to enter the value for the new threshold.
This is entered in multiples of 75: for example, the default threshold of 2025 RPM is 27×75, so the default value is 27.
To reduce the threshold, enter something lower than 27.
I chose 18, which drops my threshold to 1350 RPM (18×75=1350).
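The multiple-of-75 arithmetic is easy to sanity-check in any shell before you commit a value; a quick sketch:

```shell
# Thresholds are stored as multiples of 75 RPM: divide the target RPM by 75
# to get the value to enter, and multiply back to see what the BMC enforces.
target_rpm=1350
raw_value=$(( target_rpm / 75 ))   # value to type into the script
enforced=$(( raw_value * 75 ))     # resulting threshold in RPM
echo "enter $raw_value -> threshold becomes $enforced RPM"
# the factory default of 27 works out to 27 * 75 = 2025 RPM
```

For 1350 RPM this gives 18, matching the value chosen above.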
Saving the changes
After editing the appropriate values, enter "w" to write the changes to the firmware when prompted.
This updates bmcflsh.dat with the modified values.
Flashing the modified firmware
If you are on a 64bit OS make sure you have the glibc.i686 package installed
LD_LIBRARY_PATH=./hapi/opt/dell/dup/lib:$LD_LIBRARY_PATH ./bmcfl32l -i=payload/bmcflsh.dat -f
This maps the necessary shared libraries and runs bmcfl32l to flash the firmware file.
The fans will rev up and stop for a brief moment during the update; don't worry, they will spool up again in a second.
You do not need to reboot to see the changes, but do a reboot just in case.
So there you go, your Dell 2950 should be purring away on the shelf silently.
Note:
You should disable IPMI on the DRAC afterwards, since leaving it enabled is a big security risk.
Tested for more than 24 hours
Update: Dell PE 2950 Stress test after the mod
- No noticeable temperature difference with the components
- No post errors
- No OMSA or DRAC errors
Noise Level comparison
Before the mod
After the mod
It's a very long post and it's almost morning, so forgive me for any grammar, spelling, or formatting mistakes.
Until next time…….
Server 2016 Data De-duplication Report – Powershell
I put together this crude little script to send out a report on a daily basis.
It's not that fancy, but it's functional.
I'm working on the second revision with an HTML body, lists of corrupted files, and resource usage; more features will be added as I dive further into the dedupe cmdlets.
https://technet.microsoft.com/en-us/library/hh848450.aspx
Link to the Script – Dedupe_report.ps1
https://dl.dropboxusercontent.com/s/bltp675prlz1slo/Dedupe_report_Rev2_pub.txt
If you have any suggestions for improvements please comment and share with everyone
# Malinda Ratnayake | 2016
# Can only be run on Windows Server 2012 R2
#
# Get the date and set the variable
$Now = Get-Date
# Import the cmdlets
Import-Module Deduplication
#
$logFile01 = "C:\_Scripts\Logs\Dedupe_Report.txt"
#
# Get the cluster vip and set to variable
$HostName = (Get-WmiObject win32_computersystem).DNSHostName + "." + (Get-WmiObject win32_computersystem).Domain
#
#$OS = Get-Host {$_.WindowsProductName}
#
# delete the previous day's report
del $logFile01
#
Out-File "$logFile01" -Encoding ASCII
Add-Content $logFile01 "Deduplication Report for $HostName" -Encoding ASCII
Add-Content $logFile01 "`n$Now" -Encoding ASCII
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupJob
Add-Content $logFile01 "Deduplication Job Queue" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupJob | Format-Table -AutoSize | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupSchedule
Add-Content $logFile01 "Deduplication Schedule" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupSchedule | Format-Table -AutoSize | Out-File -Append -Encoding ASCII $logFile01
#
# Last Optimization Result and time
Add-Content $logFile01 "Last Optimization Result and time" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Select-Object LastOptimizationTime, LastOptimizationResultMessage | Format-Table -Wrap | Out-File -Append -Encoding ASCII $logFile01
#
# Last Garbage Collection Result and Time
Add-Content $logFile01 "Last Garbage Collection Result and Time" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Select-Object LastGarbageCollectionTime, LastGarbageCollectionResultMessage | Format-Table -Wrap | Out-File -Append -Encoding ASCII $logFile01
#
# Get-DedupVolume
$DedupVolumeLetter = Get-DedupVolume | Select-Object -ExpandProperty Volume
Add-Content $logFile01 "Deduplication Enabled Volumes" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupVolume | Format-Table -AutoSize | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "Volume $DedupVolumeLetter Details - " -Encoding ASCII
Get-DedupVolume | Format-List | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupStatus
Add-Content $logFile01 "Deduplication Summary" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Format-Table -AutoSize | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "Deduplication Status Details" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Format-List | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupMetadata
Add-Content $logFile01 "Deduplication MetaData" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Add-Content $logFile01 "note - details about how deduplication processed the data on volume $DedupVolumeLetter " -Encoding ASCII
Get-DedupMetadata | Format-List | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-Dedupe Events - Resource usage - WIP
Add-Content $logFile01 "Deduplication Events" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-WinEvent -MaxEvents 10 -LogName Microsoft-Windows-Deduplication/Diagnostic | Where-Object ID -EQ "10243" | Format-List | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Change the -To, -From and -SmtpServer values to match your servers.
$Emailbody = Get-Content -Path $logFile01
[string[]]$recipients = "[email protected]"
Send-MailMessage -To $recipients -From [email protected] -Subject "File services - Deduplication Report : $HostName" -SmtpServer smtp-relay.gmail.com -Attachments $logFile01
“System logs on hosts are stored on non-persistent storage” message on VCenter
Ran into this pesky little error message recently in a vCenter environment.
If the logs are stored on a local scratch disk, vCenter will display an alert stating: "System logs on host xxx are stored on non-persistent storage"

Configure ESXi Syslog location – vSphere Web Client
vCenter > select the host > Configure > Advanced System Settings

Click Edit and search for "Syslog.global.logDir"

Edit the value; in this case, I'm going to use the local datastore (Localhost_DataStore01) to store the syslogs.
You can also define a remote syslog server using the “Syslog.global.LogHost” setting

Configure ESXi Syslog location – ESXCLI
SSH on to the host
Check the current location
esxcli system syslog config get

*logs stored on the local scratch disk
Manually set the path
esxcli system syslog config set --logdir=/vmfs/directory/path
you can find the VMFS volume names/UUIDs under –
/vmfs/volumes
A remote syslog server can be set using:
esxcli system syslog config set --loghost='tcp://hostname:port'
Load the configuration changes with the syslog reload command
esxcli system syslog reload
The logs will immediately begin populating the specified location.
Advertising VRF Connected/Static routes via MP BGP to OSPF – Guide Dell S4112F-ON – OS 10.5.1.3
I'm going to base this on my VRF Setup and Route Leaking article and continue building on top of it.
Let's say we need to advertise connected routes within VRFs via an IGP to an upstream or downstream device; this is one of many ways to reach that objective.
For this example we are going to use BGP to collect the connected routes and advertise them over OSPF.

Setup the BGP process to collect connected routes
router bgp 65000
 router-id 10.252.250.6
 !
 address-family ipv4 unicast
 !
 neighbor 10.252.250.1
 !
 vrf Tenant01_VRF
  !
  address-family ipv4 unicast
   redistribute connected
 !
 vrf Tenant02_VRF
  !
  address-family ipv4 unicast
   redistribute connected
 !
 vrf Tenant03_VRF
  !
  address-family ipv4 unicast
   redistribute connected
 !
 vrf Shared_VRF
  !
  address-family ipv4 unicast
   redistribute connected
Setup OSPF to Redistribute the routes collected via BGP
router ospf 250 vrf Shared_VRF
 area 0.0.0.0 default-cost 0
 redistribute bgp 65000
!
interface vlan250
 mode L3
 description OSPF_Routing
 no shutdown
 ip vrf forwarding Shared_VRF
 ip address 10.252.250.6/29
 ip ospf 250 area 0.0.0.0
 ip ospf mtu-ignore
 ip ospf priority 10
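To spot-check each stage of the BGP-to-OSPF handoff, a few show commands help (treat these as a sketch; exact syntax can vary slightly between OS10 releases):

```
show ip bgp vrf Tenant01_VRF     ! connected routes picked up by the BGP process
show ip ospf 250 database        ! external LSAs generated from the redistributed routes
show ip route vrf Shared_VRF     ! leaked and redistributed routes in the shared table
```

If the tenant routes appear in BGP but not in the OSPF database, recheck the redistribute bgp 65000 statement under the OSPF process.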
Testing and confirmation
Local OSPF Database

Remote device

Recover Exchange Mail store Using Database portability feature in Exchange 2010
"Mail server crashed" – a system admin's worst nightmare, followed by tight deadlines, incident reports, and a load of complaints you have to listen to. It's a full-fledged disaster.
In this scenario it's a medium-size business with just one server running AD and Exchange 2010 (not ideal, I know), which was upgraded from SBS 2003.
AD and DNS failed after decommissioning the old SBS server.
Recovering from a full server backup was not an option, but we had the databases on a separate drive.
Important things to keep in mind when recovering DBs into a different AD domain:
- The organization name and the Exchange Administrative Group must be the same for the portability feature to work
- The database must be in a clean-shutdown state
- After mounting the old DBs, always move the mailboxes to new databases
- Exchange 2010 Standard only supports up to 5 databases
There are a few methods to recover DBs on Exchange 2010; this is the method we used.
Check List before proceeding further
Make sure you have:
- Restored the old databases from backup to a different location on the server
- Installed AD (with the same domain name) and Exchange with the same Administrative Group as before
Preparing the Databases
Checking the state of the database file
For the database portability feature to work, we need the DBs in a clean-shutdown state. To check the database state, we're going to use the Exchange Server Database Utility's file-dump mode.
More Detail on eseutil /MH – link
Launch a command prompt and type:
eseutil /MH "D:\Restore\oldDB.edb" (replace the path with the location of the restored database file)
Check the output to see whether the DB is in a dirty-shutdown or clean-shutdown state.
If the DB file is in Dirty shutdown state
In this case we did not have any recent backups, and we were not able to soft-recover the DB since this is a new DC, so we had to do a hard recovery using this command:
eseutil /P "D:\Restore\oldDB.edb" (replace the path with the location of the restored database file)
Click ok on the prompt to continue
After the hard recovery, fully rebuild the indexes and defragment the database:
eseutil /D "D:\Restore\oldDB.edb" (replace the path with the location of the restored database file)
Mounting the Database using the Portability feature.
Create a new Database
For this example we will create a new database named recoveryDB1.
Go to the properties of the new DB > Maintenance tab > select the option "This database can be overwritten by a restore".
Apply the Changes and dismount the Database
Replace the new Database file with the Repaired Database
First, go to the folder where the new DB file (recoveryDB1.edb) is located and rename or delete it.
Delete the log files / catalog files.
———————————————————————————————————————–
Rename the Recovered Database
Go to the folder where the database we repaired earlier is located and rename it to "recoveryDB1".
————————————————————————————————————————–
Replace the newly created Database
Copy the repaired DB file in place of the new database file recoveryDB1.edb.
Remember, the log files should be deleted or moved before you mount this DB.
Mount the “recoveryDB1” Database From EMC
Now the mail store should mount without an issue.
Errors you might run in to
In case you get errors when mounting the DB, such as:
Operation terminated with error -1216 (JET_errAttachedDatabaseMismatch, An outstanding database attachment has been detected at the start or end of recovery, but database is missing or does not match attachment info) after 11.625 seconds.
You are getting this error because the DB is in a dirty-shutdown state; refer to the "Preparing the Databases" section above and fix it by performing a hard recovery.
Unable-to-mount errors
The new database's log files are still present; delete or move them.
Now you can go ahead and attach the mailboxes to the corresponding user accounts.
Word of advice
It would be wise not to keep this recovered mail store in production for long; you will run into various issues as mail starts to flow in and out.
Create new mail stores and move the mailboxes to avoid future problems.
Some mailboxes might be corrupted. In that case,
the easiest way is to use the
“New-MailboxRepairRequest” cmdlet
Refer to this tech-net article for more info – link
Or
- Export it to a PST
- Attach the user to a fresh mailbox
- Sync back the Data you need through outlook
Kubernetes Loop
- The Architecture of Trust
- Role of the API server
- Role of etcd cluster
- How the Loop Actually Works
- As an example, let’s look at a simple nginx workload deployment
- 1) Intent (Desired State)
- 2) Watch (The Trigger)
- 3) Reconcile (Close the Gap)
- 4) Status (Report Back)
- The Loop Doesn’t Protect You From Yourself
- Why This Pattern Matters Outside Kubernetes
- Ref
I've been diving deep into systems architecture lately, specifically Kubernetes.
Strip away the UIs, the YAML, and the ceremony, and Kubernetes boils down to:
A very stubborn, event-driven collection of control loops
aka the reconciliation (control) loop, and everything I read calls this the "gold standard" for distributed control planes.
Why? Because it decomposes the control plane into many small, independent loops, each continuously correcting drift rather than trying to execute perfect one-shot workflows. These loops are triggered by events or state changes, but what they do is determined by the spec versus the observed state (status).
Now we have both:
- spec: desired state
- status: observed state
Kubernetes lives in that gap.
When spec and status match, everything’s quiet. When they don’t, something wakes up to make the current state match the declared state.
The Architecture of Trust
In Kubernetes, components don’t coordinate via direct peer-to-peer orchestration; they coordinate by writing to and watching one shared “state.”
That state lives behind the API server, and the API server validates it and persists it into etcd.
Role of the API server
The API server is the front door to the cluster’s shared truth: it’s the only place that can accept, validate, and persist declared intent as Kubernetes API objects (metadata/spec/status).
When you install a CRD, you’re extending the API itself with a new type (a new endpoint) and a schema the API server can validate against.
When we use kubectl apply (or any client) to submit YAML/JSON to the API server, the API server validates it (built-in rules, CRD OpenAPI v3 schema / CEL rules, and potentially admission webhooks) and rejects invalid objects before they’re stored.
If the request passes validation, the API server persists the object into etcd (the whole API object, not just “intent”). Once stored, controllers/operators (loops) watch those objects and run reconciliation to push the real world toward what’s declared.
It turns out that in practice, most controllers don’t act directly on raw watch events; they consume changes through informer caches and queue work onto a rate-limited workqueue. They also often watch related/owned resources (secondary watches), not just the primary object, to stay convergent.
The spec is often user-authored, as discussed above, but it isn’t exclusively human-written; the scheduler and some controllers also update parts of it (e.g., scheduling decisions/bindings and defaulting).
Role of etcd cluster
etcd is the control plane’s durable record: the authoritative reference for what the cluster believes should exist and what it currently reports.
If an intent (an API object) isn’t in etcd, controllers can’t converge on it, because there’s nothing recorded to reconcile toward.
This makes the system inherently self-healing because it trusts the declared state and keeps trying to morph the world to match until those two align.
One tidbit worth noting:
In production, nodes, runtimes, and cloud load balancers can drift independently. Controllers treat those systems as observed state and keep measuring reality against what the API says should exist.
How the Loop Actually Works
Kubernetes isn’t one loop. It’s a bunch of loops (controllers) that all behave the same way:
- read desired state (what the API says should exist)
- observe actual state (what’s really happening)
- calculate the diff
- push reality toward the spec
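The four steps above can be sketched as a minimal level-triggered loop. This is a toy illustration, not client-go; `read_desired`, `observe_actual`, and `apply_action` are hypothetical stand-ins for API reads and real-world probes:

```python
import time

def reconcile(desired: dict, actual: dict) -> dict:
    """Compute the actions needed to push actual toward desired."""
    actions = {}
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions[name] = spec          # create or update to match spec
    for name in actual:
        if name not in desired:
            actions[name] = None          # delete what shouldn't exist
    return actions

def control_loop(read_desired, observe_actual, apply_action, interval=10):
    """Level-triggered: every pass re-reads full state, so a missed
    event doesn't matter -- the next pass still converges."""
    while True:
        desired = read_desired()          # what the API says should exist
        actual = observe_actual()         # what's really happening
        for name, spec in reconcile(desired, actual).items():
            apply_action(name, spec)      # push reality toward the spec
        time.sleep(interval)
```

Note that the loop compares whole states rather than reacting to individual events; that is what makes it tolerant of duplicate or missed signals.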

As an example, let’s look at a simple nginx workload deployment
1) Intent (Desired State)
To deploy the Nginx workload, you run:
kubectl apply -f nginx.yaml
The API server validates the object (and its schema, if it’s a CRD-backed type) and writes it into etcd.
At that point, Kubernetes has only recorded your intent. Nothing has “deployed” yet in the physical sense. The cluster has simply accepted:
“This is what the world should look like.”
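For reference, the `nginx.yaml` in this walkthrough might look like the following minimal Deployment (a hypothetical manifest; the name, labels, and replica count are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2            # desired state: two Pods, always
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Everything under `spec` is declared intent; nothing here describes *how* to start the Pods.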
2) Watch (The Trigger)
Controllers and schedulers aren’t polling the cluster like a bash script with a sleep 10.
They watch the API server.
When desired state changes, the loop responsible for it wakes up, runs through its logic, and acts:
“New desired state: someone wants an Nginx Pod.”
Watches aren’t gospel. Events can arrive twice, late, or never, and your controller still has to converge. Controllers use list+watch patterns with periodic resync as a safety net. The point isn’t perfect signals; it’s building a loop that stays correct under imperfect signals.
Controllers also don’t spin constantly; they queue work. Events enqueue object keys, workers dequeue and reconcile, and failures requeue with backoff. This keeps one bad object from melting the control plane.
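That enqueue/dequeue behavior can be sketched like this. It is a toy sketch built on a plain `queue.Queue`; real controllers use client-go’s rate-limited, deduplicating workqueue:

```python
import queue
import time

def drain(work_q: "queue.Queue", reconcile, max_retries: int = 5):
    """Process object keys until the queue is empty; requeue failures
    with a capped exponential backoff so one bad object can't hog
    the loop."""
    retries = {}                                  # key -> failure count
    while not work_q.empty():
        key = work_q.get()
        try:
            reconcile(key)
            retries.pop(key, None)                # success: reset backoff
        except Exception:
            n = retries.get(key, 0) + 1
            retries[key] = n
            if n <= max_retries:                  # give up past the cap
                time.sleep(min(2 ** n * 0.01, 0.5))  # capped backoff
                work_q.put(key)                   # requeue for another try
        work_q.task_done()
```

Because only keys are queued, a burst of events for the same object collapses into one reconcile in the real workqueue; this sketch skips that deduplication for brevity.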
3) Reconcile (Close the Gap)
Here’s the mental map that made sense to me:
Kubernetes is a set of level-triggered control loops. You declare desired state in the API, and independent loops keep working until the real world matches what you asked for.
- Controllers (Deployment/ReplicaSet/etc.) watch the API for desired state and write more desired state.
- Example: a Deployment creates/updates a ReplicaSet; a ReplicaSet creates/updates Pods.
- The scheduler finds Pods with no node assigned and picks a node.
- It considers resource requests, node capacity, taints/tolerations, node selectors, (anti)affinity, topology spread, and other constraints.
- It records its decision by setting spec.nodeName on the Pod.
- The kubelet on the chosen node notices “a Pod is assigned to me” and makes it real.
- pulls images (if needed) via the container runtime (CRI)
- sets up volumes/mounts (often via CSI)
- triggers networking setup (CNI plugins do the actual wiring)
- starts/monitors containers and reports status back to the API
Each component writes its state back into the API, and the next loop uses that as input. No single component “runs the whole workflow.”
One property makes this survivable: reconcile must be safe to repeat (idempotent). The loop might run once or a hundred times (retries, resyncs, restarts, duplicate/missed watch events), and it should still converge to the same end result.
If the desired state is already satisfied, reconcile should do nothing; if something is missing, it should fill the gap, without creating duplicates or making things worse.
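Idempotency in miniature, with a hypothetical `ensure_replicas` helper and a plain dict standing in for the API:

```python
def ensure_replicas(store: dict, name: str, want: int) -> str:
    """Idempotent reconcile: running it once or a hundred times leaves
    the store in the same state -- no duplicates, no over-correction."""
    have = store.get(name, 0)
    if have == want:
        return "no-op"              # desired state already satisfied
    store[name] = want              # fill the gap (create or scale)
    return f"scaled {name} {have}->{want}"
```

The first run does the work; every later run observes that the gap is closed and does nothing.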
When concurrent updates happen (two controllers might try to update the same object at the same time),
Kubernetes handles this with optimistic concurrency. Every object has a resourceVersion (“what version of this object did you read?”). If you try to write an update using an older version, the API server rejects it (often as a conflict).
Then the flow is: re-fetch the latest object, apply your change again, and retry.
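That read-modify-write retry can be sketched as follows. `Store` is a toy in-memory stand-in for the API server, and `Conflict` mirrors the HTTP 409 it returns on stale writes:

```python
class Conflict(Exception):
    """Raised when a write carries a stale resourceVersion (like HTTP 409)."""

class Store:
    """Tiny stand-in for the API server's optimistic concurrency check."""
    def __init__(self, obj: dict):
        self.obj = dict(obj, resourceVersion=1)

    def get(self) -> dict:
        return dict(self.obj)                      # snapshot with its version

    def update(self, obj: dict):
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("stale resourceVersion")
        self.obj = dict(obj, resourceVersion=self.obj["resourceVersion"] + 1)

def update_with_retry(store: Store, mutate, attempts: int = 5) -> dict:
    """Re-fetch the latest object, re-apply the change, retry on Conflict."""
    for _ in range(attempts):
        obj = store.get()                          # 1. read the latest version
        mutate(obj)                                # 2. apply your change
        try:
            store.update(obj)                      # 3. write with that version
            return store.get()
        except Conflict:
            continue                               # someone wrote first; retry
    raise RuntimeError("gave up after repeated conflicts")
```

The key property: the retry never overwrites the concurrent writer’s change, because each attempt starts from a fresh read.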
4) Status (Report Back)
Once the pod is actually running, status flows back into the API.
The Loop Doesn’t Protect You From Yourself
What if the declared state says to delete something critical like kube-proxy or a CNI component? The loop doesn’t have opinions. It just does what the spec says.
A few things keep this from being a constant disaster:
- Control plane components are special. The API server, etcd, scheduler, and controller-manager usually run as static pods managed directly by the kubelet, not through the API. The reconciliation loop can’t easily delete the thing running the reconciliation loop as long as its manifest exists on disk.
- DaemonSets recreate pods. Delete a kube-proxy pod and the DaemonSet controller sees “desired: 1, actual: 0” and spins up a new one. You’d have to delete the DaemonSet itself.
- RBAC limits who can do what. Most users can’t touch kube-system resources.
- Admission controllers can reject bad changes before they hit etcd.
But in the end, if your source of truth says “delete this,” the system will try. The model assumes your declared state is correct. Garbage in, garbage out.
Why This Pattern Matters Outside Kubernetes
This pattern shows up anywhere you manage state over time.
Scripts are fine until they aren’t:
- they assume the world didn’t change since last run
- they fail halfway and leave junk behind
- they encode “steps” instead of “truth”
A loop is simpler:
- define the desired state
- store it somewhere authoritative
- continuously reconcile reality back to it
Ref
- So you wanna write Kubernetes controllers?
- What does idempotent mean in software systems? • Particular Software
- The Principle of Reconciliation
- Controllers | Kubernetes
- Reference | Kubernetes
- How Operators work in Kubernetes | Red Hat Developer
- Good Practices – The Kubebuilder Book
- Understanding Kubernetes controllers part I – queues and the core controller loop – LeftAsExercise
ArubaOS CX Virtual Switching Extension – VSX Stacking Guide
What is VSX?
VSX is a clustering technology that allows two switches to run with independent control planes (OSPF/BGP) and present themselves as separate routers in the network. In the data path, however, they function as a single router and support active-active forwarding.
VSX allows you to mitigate the inherent issues of a shared control plane that come with traditional stacking while maintaining all the benefits:

- Control plane: Inter-Switch-Link and Keepalive
- Data plane L2: MCLAGs
- Data plane L3: Active gateway
This technology is very similar to Dell VLT stacking on Dell OS10.
Basic feature Comparison with Dell VLT Stacking
| Feature | Dell VLT Stacking | Aruba VSX |
| --- | --- | --- |
| Supports multi-chassis LAG | ✅ | ✅ |
| Independent control planes | ✅ | ✅ |
| All-active gateway configuration (L3 load balancing) | ✅ (VLT peer routing) | ✅ (VSX active forwarding) |
| Layer 3 packet load balancing | ✅ | ✅ |
| Can participate in spanning tree (MST/RSTP) | ✅ | ✅ |
| Gateway IP redundancy | ✅ (VRRP) | ✅ (VSX active gateway or VRRP) |
Setup Guide
What you need:
- A 10/25/40/100GbE port for the inter-switch link
- A VSX-supported switch; VSX is only supported on switches above the CX 6300 SKU
| Switch Series | VSX |
| --- | --- |
| CX 6200 series | X |
| CX 6300 series | X |
| CX 6400 series | ✅ |
| CX 8200 series | ✅ |
| CX 8320/8325 series | ✅ |
| CX 8360 series | ✅ |
For this guide I’m using an 8325 series switch.
Dry run
- Setup LAG interface for the inter-switch link (ISL)
- Create the VSX cluster
- Setup a keepalive link and a new VRF for the keepalive traffic
Setup LAG interface for the inter-switch link (ISL)
In order to form the VSX cluster, we need a LAG interface for inter-switch communication.

Naturally, I picked the fastest ports on the switch (10/25/40/100GbE) to create this link.
Depending on what switch you have, the ISL bandwidth can be a limitation/bottleneck; account for this factor when designing a VSX-based solution.
Utilize VSX active forwarding or active gateways to mitigate this.
Create the LAG interface
This is a regular port channel with no special configuration; you need to create it on both switches.
- The native VLAN needs to be the default VLAN
- Trunk port with all VLANs allowed
CORE01#
interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit
-------------------------------
CORE02#
interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit
Add/Assign the physical ports to the LAG interface
I’m using two 100GbE ports for the ISL LAG.

CORE01#
interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit
-------------------------------
CORE02#
interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit
Do the same configuration on the VSX Peer switch (Second Switch)
Connect the cables via DAC/optical and confirm the port-channel health.
CORE01# sh lag 256
System-ID             : b8:d4:e7:d5:36:00
System-priority       : 65534
Aggregate lag256 is up
Admin state is up
Description           : VSX-LAG
Type                  : normal
MAC Address           : b8:d4:e7:d5:36:00
Aggregated-interfaces : 1/1/55 1/1/56
Aggregation-key       : 256
Aggregate mode        : active
Hash                  : l3-src-dst
LACP rate             : slow
Speed                 : 200000 Mb/s
Mode                  : trunk
-------------------------------------------------------------------
CORE02# sh lag 256
System-ID             : b8:d4:e7:d5:f3:00
System-priority       : 65534
Aggregate lag256 is up
Admin state is up
Description           : VSX-LAG
Type                  : normal
MAC Address           : b8:d4:e7:d5:f3:00
Aggregated-interfaces : 1/1/55 1/1/56
Aggregation-key       : 256
Aggregate mode        : active
Hash                  : l3-src-dst
LACP rate             : slow
Speed                 : 200000 Mb/s
Mode                  : trunk
Form the VSX Cluster
Under configuration mode, go into the VSX context by entering “vsx” and issue the following commands on both switches:
CORE01#
vsx
inter-switch-link lag 256
role primary
linkup-delay-timer 30
-------------------------------
CORE02#
vsx
inter-switch-link lag 256
role secondary
linkup-delay-timer 30
Check the VSX Status
CORE01# sh vsx status
VSX Operational State
---------------------
ISL channel        : In-Sync
ISL mgmt channel   : operational
Config Sync Status : In-Sync
NAE                : peer_reachable
HTTPS Server       : peer_reachable

Attribute         Local               Peer
------------      --------            --------
ISL link          lag256              lag256
ISL version       2                   2
System MAC        b8:d4:e7:d5:36:00   b8:d4:e7:d5:f3:00
Platform          8325                8325
Software Version  GL.10.06.0001      GL.10.06.0001
Device Role       primary             secondary
----------------------------------------
CORE02# sh vsx status
VSX Operational State
---------------------
ISL channel        : In-Sync
ISL mgmt channel   : operational
Config Sync Status : In-Sync
NAE                : peer_reachable
HTTPS Server       : peer_reachable

Attribute         Local               Peer
------------      --------            --------
ISL link          lag256              lag256
ISL version       2                   2
System MAC        b8:d4:e7:d5:f3:00   b8:d4:e7:d5:36:00
Platform          8325                8325
Software Version  GL.10.06.0001      GL.10.06.0001
Device Role       secondary           primary
Setup the Keepalive Link
It’s recommended to set up a keepalive link to avoid split-brain scenarios if the ISL goes down. We are trying to prevent both switches from thinking they are the active device, creating STP loops and other issues on the network.
This is not a must-have, but it’s nice to have. As of ArubaOS-CX 10.06.x you need to sacrifice one of your data ports for this.
Dell OS10 VLT achieves this via the OOBM network ports; supposedly, keepalive over OOBM is something Aruba is working on for future releases.
Few things to note
- It’s recommended to use a routed port in a separate VRF for the keepalive link
- You can use a 1Gbps link for this if needed
Provision the port and VRF
CORE01#
vrf KEEPALIVE
interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.1/24
exit
-----------------------------------------
CORE02#
vrf KEEPALIVE
interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.2/24
exit
Define the Keepalive link
Note – Remember to define the vrf id in the keepalive statement
Thanks /u/illumynite for pointing that out
CORE01#
vsx
inter-switch-link lag 256
role primary
keepalive peer 192.168.168.2 source 192.168.168.1 vrf KEEPALIVE
linkup-delay-timer 30
-----------------------------------------
CORE02#
vsx
inter-switch-link lag 256
role secondary
keepalive peer 192.168.168.1 source 192.168.168.2 vrf KEEPALIVE
linkup-delay-timer 30
Next up…
- VSX MC-LAG
- VSX Active forwarding
- VSX Active gateway
References
AOS-CX 10.06 Virtual Switching Extension (VSX) Guide
As always if you notice any mistakes please do let me know in the comments