Root CA cert is pushed out to all Servers/Desktops – This happens by default
Contents
Setup CA Certificate template
Deploy Auto-enrolled Certificates via Group Policy
PowerShell logon script to set the WinRM listener
Deploy the script as a logon script via Group Policy
Testing
1 – Setup a CA certificate template to allow client servers/desktops to enroll for the certificate from the CA
Connect to the Certification Authority Microsoft Management Console (MMC)
Navigate to Certificate Templates > Manage
On the “Certificate templates Console” window > Select Web Server > Duplicate Template
Under the new template window, set the following attributes:
General – Pick a name and validity period – this is up to you
Compatibility – Set the compatibility attributes (you can leave these on the default values; it's up to you)
Subject Name – Set ‘Subject Name’ attributes (Important)
Security – Add “Domain Computers” Security Group and Set the following permissions
Read – Allow
Enroll – Allow
Autoenroll – Allow
Click “OK” to save and close out of “Certificate template console”
Issue the new template
Go back to the Certification Authority Microsoft Management Console (MMC)
Under templates (Right click the empty space) > Select New > Certificate template to Issue
Under the Enable Certificate template window > Select the Template you just created
Allow a few minutes for AD DS to replicate and pick up the changes within the forest
2 – Deploy Auto-enrolled Certificates via Group Policy
Create a new GPO
Windows Settings > Security Settings > Public Key Policies/Certificate Services Client – Auto-Enrollment Settings
Link the GPO to the relevant OU within your AD DS environment
Note – You can push out the root CA cert as a trusted root certificate with this same policy if you want to force computers to pick up the CA cert.
Testing
If you need to test it, run gpupdate /force or reboot your test machine; the server VM/PC will pick up a certificate from the AD CS PKI.
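To confirm the enrollment from PowerShell, something like this should list the machine certificate once the policy applies (the issuer name below matches the example CA name used in the script later; swap in your own):
gpupdate /force
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Issuer -like "CN=Company-Root-CA*" } | Format-List Subject, Issuer, NotAfter, Thumbprint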
3 – PowerShell logon script to set the WinRM listener
Dry run
Setup the log file
Check for the certificate matching the machine's FQDN, auto-enrolled from AD CS
If it exists
Set up the WinRM HTTPS listener and bind the certificate
Write log
else
Write log
#Malinda Rathnayake- 2020
#
#variable
$Date = Get-Date -Format "dd_MM_yy"
$port=5986
$SessionRunTime = Get-Date -Format "dd_yyyy_HH-mm"
#
#Setup Logs folder and log File
$ScriptVersion = '1.0'
$locallogPath = "C:\_Scripts\_Logs\WINRM_HTTPS_ListenerBinding"
#
$logging_Folder = (New-Item -Path $locallogPath -ItemType Directory -Name $Date -Force)
$ScriptSessionlogFile = New-Item $logging_Folder\ScriptSessionLog_$SessionRunTime.txt -Force
$ScriptSessionlogFilePath = $ScriptSessionlogFile.VersionInfo.FileName
#
#Check for the auto-enrolled SSL cert
$RootCA = "Company-Root-CA" #change This
$hostname = ([System.Net.Dns]::GetHostByName(($env:computerName))).Hostname
$certinfo = (Get-ChildItem -Path Cert:\LocalMachine\My\ |? {($_.Subject -Like "CN=$hostname") -and ($_.Issuer -Like "CN=$RootCA*")})
$certThumbprint = $certinfo.Thumbprint
#
#Script-------------------------------------------------------
#
#Remove the existing WinRM HTTPS listener if there is one
Get-ChildItem WSMan:\Localhost\Listener | Where -Property Keys -eq "Transport=HTTPS" | Remove-Item -Recurse -Force
#
#If the client certificate exists Setup the WinRM HTTPS listener with the cert else Write log
if ($certThumbprint){
#
New-Item -Path WSMan:\Localhost\Listener -Transport HTTPS -Address * -CertificateThumbprint $certThumbprint -HostName $hostname -Force
#
netsh advfirewall firewall add rule name="Windows Remote Management (HTTPS-In)" dir=in action=allow protocol=TCP localport=$port
#
Add-Content -Path $ScriptSessionlogFilePath -Value "Certificate binding with the WinRM HTTPS listener completed"
Add-Content -Path $ScriptSessionlogFilePath -Value "$($certinfo.Subject)"}
else{
Add-Content -Path $ScriptSessionlogFilePath -Value "No cert matching the server FQDN found. Please run gpupdate /force or reboot the system"
}
The script is commented to explain each section (I should have broken it into functions, but I was pressed for time and never got around to it; if you fix it up and improve it, please let me know in the comments :D)
4 – Deploy the script as a logon script via Group Policy
Set up a GPO and set this script as a PowerShell logon script
I'm using a user policy with GPO loopback processing set to Merge, applied to the server OU
Testing
To confirm WinRM is listening on HTTPS, type the following commands:
winrm enumerate winrm/config/listener
winrm get http://schemas.microsoft.com/wbem/wsman/1/config
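You can also test the listener remotely from another domain-joined machine with PowerShell (the hostname below is an example):
Test-WSMan -ComputerName server01.company.local -UseSSL
Enter-PSSession -ComputerName server01.company.local -UseSSL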
VSX is a cluster technology that allows the two VSX switches to run with independent control planes (OSPF/BGP) and present themselves as different routers in the network. In the datapath, however, they function as a single router and support active-active forwarding.
VSX mitigates the inherent issues of a shared control plane that come with traditional stacking, while maintaining all the benefits.
Control plane: Inter-Switch-Link and Keepalive
Data plane L2: MCLAGs
Data plane L3: Active gateway
This is very similar to Dell VLT stacking on Dell OS10.
Basic feature comparison with Dell VLT stacking
Feature | Dell VLT Stacking | Aruba VSX
Supports multi-chassis LAG | ✅ | ✅
Independent control planes | ✅ | ✅
All-active gateway configuration (L3 load balancing) | ✅ (VLT peer routing) | ✅ (VSX active forwarding)
Layer 3 packet load balancing | ✅ | ✅
Can participate in spanning tree (MST/RSTP) | ✅ | ✅
Gateway IP redundancy | ✅ (VRRP) | ✅ (VSX active gateway or VRRP)
Setup Guide
What you need
A 10/25/40/100GE port for the inter-switch link
A VSX-capable switch – VSX is only supported on SKUs above the CX 6300 series:
Switch Series | VSX support
CX 6200 series | X
CX 6300 series | X
CX 6400 series | ✅
CX 8200 series | ✅
CX 8320/8325 series | ✅
CX 8360 series | ✅
(Updated Dec 2020)
For this guide I'm using an 8325 series switch.
Dry run
Setup LAG interface for the inter-switch link (ISL)
Create the VSX cluster
Setup a keepalive link and a new VRF for the keepalive traffic
Setup LAG interface for the inter-switch link (ISL)
In order to form the VSX cluster, we need a LAG interface for inter-switch communication.
Naturally, I picked the fastest ports on the switch (10/25/40/100GE) to create this.
Depending on which switch you have, the ISL bandwidth can be a limitation/bottleneck; account for this factor when designing a VSX-based solution, and utilize VSX active forwarding or active gateways to mitigate it (a sketch follows below).
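As a rough sketch of the active-gateway idea (the VLAN, addresses, and virtual MAC below are examples, not part of this build, and the exact syntax can vary between AOS-CX releases), an SVI on each VSX member would look something like this:
interface vlan 100
    ip address 10.0.100.2/24
    active-gateway ip mac 12:00:00:00:01:00
    active-gateway ip 10.0.100.1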
Create the LAG interface
This is a regular port channel with no special configuration; you need to create it on both switches
Native VLAN needs to be the default VLAN
Trunk port and All VLANs allowed
CORE01#
interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit
-------------------------------
CORE02#
interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit
Add/Assign the physical ports to the LAG interface
I’m using two 100GE ports for the ISL LAG
CORE01#
interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit
-------------------------------
CORE02#
interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit
Do the same configuration on the VSX Peer switch (Second Switch)
Connect the cables via DAC/Optical and confirm the Port-channel health
CORE01# sh lag 256
System-ID : b8:d4:e7:d5:36:00
System-priority : 65534
Aggregate lag256 is up
Admin state is up
Description : VSX-LAG
Type : normal
MAC Address : b8:d4:e7:d5:36:00
Aggregated-interfaces : 1/1/55 1/1/56
Aggregation-key : 256
Aggregate mode : active
Hash : l3-src-dst
LACP rate : slow
Speed : 200000 Mb/s
Mode : trunk
-------------------------------------------------------------------
CORE02# sh lag 256
System-ID : b8:d4:e7:d5:f3:00
System-priority : 65534
Aggregate lag256 is up
Admin state is up
Description : VSX-LAG
Type : normal
MAC Address : b8:d4:e7:d5:f3:00
Aggregated-interfaces : 1/1/55 1/1/56
Aggregation-key : 256
Aggregate mode : active
Hash : l3-src-dst
LACP rate : slow
Speed : 200000 Mb/s
Mode : trunk
Form the VSX Cluster
Under configuration mode, go into the VSX context by entering “vsx” and issue the following commands on both switches
CORE01#
vsx
inter-switch-link lag 256
role primary
linkup-delay-timer 30
-------------------------------
CORE02#
vsx
inter-switch-link lag 256
role secondary
linkup-delay-timer 30
Check the VSX Status
CORE01# sh vsx status
VSX Operational State
---------------------
ISL channel : In-Sync
ISL mgmt channel : operational
Config Sync Status : In-Sync
NAE : peer_reachable
HTTPS Server : peer_reachable
Attribute Local Peer
------------ -------- --------
ISL link lag256 lag256
ISL version 2 2
System MAC b8:d4:e7:d5:36:00 b8:d4:e7:d5:f3:00
Platform 8325 8325
Software Version GL.10.06.0001 GL.10.06.0001
Device Role primary secondary
----------------------------------------
CORE02# sh vsx status
VSX Operational State
---------------------
ISL channel : In-Sync
ISL mgmt channel : operational
Config Sync Status : In-Sync
NAE : peer_reachable
HTTPS Server : peer_reachable
Attribute Local Peer
------------ -------- --------
ISL link lag256 lag256
ISL version 2 2
System MAC b8:d4:e7:d5:f3:00 b8:d4:e7:d5:36:00
Platform 8325 8325
Software Version GL.10.06.0001 GL.10.06.0001
Device Role secondary primary
Setup the Keepalive Link
It's recommended to set up a keepalive link to avoid split-brain scenarios if the ISL goes down; we are trying to prevent both switches from thinking they are the active device, which would create STP loops and other issues on the network.
This is not a must-have, but it's nice to have. As of Aruba CX OS 10.06.x you need to sacrifice one of your data ports for this.
Dell OS10 VLT achieves this via the OOBM network ports; supposedly, keepalive over OOBM is something Aruba is working on for future releases.
A few things to note:
It's recommended to use a routed port in a separate VRF for the keepalive link
You can use a 1Gbps link for this if needed
Provision the port and VRF
CORE01#
vrf KEEPALIVE
interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.1/24
exit
-----------------------------------------
CORE02#
vrf KEEPALIVE
interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.2/24
exit
Define the Keepalive link
Note – Remember to define the vrf id in the keepalive statement
Thanks /u/illumynite for pointing that out
CORE01#
vsx
inter-switch-link lag 256
role primary
keepalive peer 192.168.168.2 source 192.168.168.1 vrf KEEPALIVE
linkup-delay-timer 30
-----------------------------------------
CORE02#
vsx
inter-switch-link lag 256
role secondary
keepalive peer 192.168.168.1 source 192.168.168.2 vrf KEEPALIVE
linkup-delay-timer 30
Let me address the question of why I decided to expose a DNS server (Pi-hole) to the internet (not fully open, but still).
I needed/wanted to set up an Umbrella/NextDNS/CF-type DNS server that's publicly accessible but secured to certain IP addresses.
Sure, NextDNS is an option and it's cheap with similar features, but I wanted to roll my own solution so I could learn a few things along the way.
I can easily set this up for family members who have minimal technical knowledge and can't deal with yet another device (a Raspberry Pi) plugged into their home network.
This will also serve as a quick-and-dirty guide on how to use Docker Compose, and it addresses some issues with running Pi-hole and Docker with UFW on Ubuntu 20.x.
So let's get stahhhted…
Scope
Setup Pi-hole as a docker container on a VM
Enable IPV6 support
Set up UFW rules to restrict traffic, and a cron job to keep the rules updated with the dynamic WAN IPs
Deploy and test
What we need
A Linux or BSD VM (Ubuntu, HardenedBSD, etc.)
Docker and Docker Compose
A dynamic DNS service to track the changing IP (DynDNS, No-IP, etc.)
Deployment
Setup Dynamic DNS solution to track your Dynamic WAN IP
For this demo we are going to use DynDNS, since I already own a paid account and it's supported on most platforms (routers, UTMs, NAS devices, IP camera DVRs, etc.).
Use some Google-fu; there are multiple ways to do this without having to pay for the service. All we need is a DNS record that stays up to date with your current public IP address.
For Network A and Network B, I'm going to use the routers' built-in DDNS update features
Network A gateway – UDM Pro
Network B Gateway – Netgear R6230
Confirmation
Setup the VM with Docker-compose
Pick your service provider; you should be able to use a free-tier VM for this since it's just DNS:
Linode
AWS lightsail
IBM cloud
Oracle cloud
Google Compute
Digital Ocean droplet
Make sure you have a dedicated (static) IPv4 and IPv6 address attached to the resource
For this deployment I'm going to use a Linode Nanode, due to their native IPv6 support and because I prefer their platform for personal projects.
Let's configure the Docker networking side to fit our needs
Create a separate bridge network for the Pi-hole container
I guess you could use the default bridge network, but I like to create one to keep things organized; this way the service stays isolated from the other containers I have.
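A sketch of creating that bridge network (the name matches the one referenced in the compose file later; the subnets are just examples):
docker network create --driver bridge --ipv6 \
  --subnet "172.25.0.0/24" \
  --subnet "fd00:dead:beef::/64" \
  Piholev6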
With the new Ubuntu 20.x releases, systemd starts a local DNS stub listener on 127.0.0.53:53,
which will prevent the container from starting, because Pi-hole needs to bind to the same port (UDP 53).
We could disable the service, but that breaks DNS resolution on the VM, causing more headaches and pain for automation and updates.
After some Google-fu and tinkering around, this is the workaround I found:
Disable the stub-listener
Point the /etc/resolv.conf symlink at /run/systemd/resolve/resolv.conf
Push the external name servers so the VM won't look at loopback to resolve DNS
Restart systemd-resolved
Resolving Conflicts with the systemd-resolved stub listener
We need to disable the stub listener that's bound to port 53; as I mentioned before, this breaks local DNS resolution on the VM, but we will fix that in a bit.
sudo nano /etc/systemd/resolved.conf
Find and uncomment the line “DNSStubListener=yes” and change it to “no”
After this, we need to push the external DNS servers to the box; this setting is stored in the following file:
/etc/resolv.conf
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.53
But we can't manually update this file with our own DNS servers; let's investigate.
ls -l /etc/resolv.conf
It's a symlink to another system file:
/run/systemd/resolve/stub-resolv.conf
When you take a look at the directory where that file resides, there are two files.
Looking at the other file, you will see that /run/systemd/resolve/resolv.conf is the one that actually carries the external name servers.
You still can't manually edit this file; it gets updated with whatever IPs are handed out as DNS servers via DHCP, or Netplan will dictate the IPs based on the static DNS servers you configure in the Netplan YAML file.
I can see two entries there, and they are the default Linode DNS servers discovered via DHCP. I'm going to keep them as-is, since they are good enough for my use case.
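To repoint the symlink at the file that actually carries the name servers and apply the change (the steps from the dry run above), the commands look like this:
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
sudo systemctl restart systemd-resolved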
Let's go over the fields used in the compose file (a full example follows this list):
container_name – The name Docker assigns to the running container
image – What image to pull from Docker Hub
hostname – The hostname for the Docker container; this name will show up in lookups when you are using this Pi-hole
ports – What ports should be NATed via the Docker bridge to the host VM
TZ – Time zone
DNS1 – DNS server used within the image
DNS2 – DNS server used within the image
WEBPASSWORD – Password for the Pi-hole web console
ServerIP – The IPv4 address assigned to the VM's network interface (you need this for Pi-hole to respond on that IP for DNS queries)
IPv6 – Enable/disable IPv6 support
ServerIPv6 – The IPv6 address assigned to the VM's network interface (you need this for Pi-hole to respond on that IP for DNS queries)
volumes – These volumes hold the configuration data so the container settings and historical data persist across reboots
cap_add: NET_ADMIN – Adds Linux capabilities so the container can manage the network stack
restart: always – Makes sure the container gets restarted every time the VM boots up
networks: default: external: name: Piholev6 – Sets the container to use the bridge network we created before
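Putting it together, a minimal docker-compose.yml along the lines described above might look like this – the time zone, password, addresses, and volume paths are placeholders you would swap for your own values:
version: "3"
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    hostname: pihole01                      # example hostname
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"                       # web console (matches the UFW rule later)
    environment:
      TZ: "America/New_York"                # example time zone
      DNS1: "1.1.1.1"                       # example upstream resolvers
      DNS2: "1.0.0.1"
      WEBPASSWORD: "ChangeMe"               # Pi-hole web console password
      ServerIP: "203.0.113.10"              # example: the VM's public IPv4
      IPv6: "True"
      ServerIPv6: "2001:db8::10"            # example: the VM's public IPv6
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    restart: always
networks:
  default:
    external:
      name: Piholev6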
Now let's bring up the Docker container:
docker-compose up -d
The -d switch brings the container up in the background (detached).
Run 'docker ps' to confirm.
Now you can access the web interface and use the Pi-hole.
Verifying it's using the bridge network you created
Grab the network ID for the bridge network we created before and use the inspect subcommand to check the config:
docker network ls
docker network inspect f7ba28db09ae
This will bring up the full configuration for the Linux bridge we created, and the containers attached to the bridge will be visible under the "Containers" key.
Testing
I manually configured my workstation's primary DNS to the Pi-hole IPs.
Updating the docker Image
Pull the new image from the Registry
docker pull pihole/pihole
Take down the current container
docker-compose down
Run the new container
docker-compose up -d
Your settings will persist through the update.
Securing the install
Now that we have a working Pi-hole with IPv6 enabled, we can log in, configure the Pi-hole server, and resolve DNS as needed.
But this is open to the public internet and will fall victim to DNS reflection attacks, etc.,
so let's set up firewall rules that open the relevant ports (DNS, SSH, HTTPS) only to the relevant IP addresses before we proceed.
Disable IPtables from the docker daemon
Ubuntu uses UFW (Uncomplicated Firewall) as an abstraction layer to make things easier for operators, but by default Docker opens ports using iptables rules with higher precedence, so rules added via UFW don't take effect.
We need to tell Docker not to do this when launching a container, so we can manage the firewall rules via UFW.
This file may not exist yet; if so, nano will create it for you:
sudo nano /etc/docker/daemon.json
Add the following lines to the file
{
"iptables": false
}
Restart the Docker service:
sudo systemctl restart docker
Doing this might disrupt communication with the containers until we allow the traffic back in using UFW rules, so keep that in mind.
Automatically updating Firewall Rules based on the DYN DNS Host records
We are going to create a shell script and run it every hour using crontab.
Shell Script Dry run
Get the IP from the DYNDNS Host records
remove/Cleanup existing rules
Add Default deny Rules
Add allow rules using the resolved IPs as the source
Dynamic IP addresses are updated on the following DNS records
trusted-Network01.selfip.net
trusted-Network02.selfip.net
Let's start by creating the script file under /bin:
#!/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
now=$(date +"%m/%d/%T")
DYNDNSNetwork01="trusted-Network01.selfip.net"
DYNDNSNetwork02="trusted-Network02.selfip.com"
#Get the network IP using dig
Network01_CurrentIP=`dig +short $DYNDNSNetwork01`
Network02_CurrentIP=`dig +short $DYNDNSNetwork02`
echo "-----------------------------------------------------------------"
echo Network A WAN IP $Network01_CurrentIP
echo Network B WAN IP $Network02_CurrentIP
echo "Script Run time : $now"
echo "-----------------------------------------------------------------"
#update firewall Rules
#reset firewall rules
#
sudo ufw --force reset
#
#Re-enable Firewall
#
sudo ufw --force enable
#
#Enable inbound default Deny firewall Rules
#
sudo ufw default deny incoming
#
#add allow Rules to the relevant networks
#
sudo ufw allow from $Network01_CurrentIP to any port 22 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 8080 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 53 proto udp
sudo ufw allow from $Network02_CurrentIP to any port 53 proto udp
#add the IPv6 DNS allow-all rule - still working on an effective way to lock this down; with IPv6 the risk is minimal
sudo ufw allow 53/udp
#find and delete the allow any to any IPv4 Rule for port 53
sudo ufw --force delete $(ufw status numbered | grep '53*.*Anywhere.' | grep -v v6 | awk -F"[][]" '{print $2}')
echo "--------------------end Script------------------------------"
Let's run the script to make sure it's working.
I used an online port scanner to confirm.
Setup Scheduled job with logging
Let's use crontab to set up a scheduled job that runs this script every hour.
Make sure the script is copied to the /bin folder with executable permissions.
Open the crontab using crontab -e (if you are launching this for the first time it will ask you to pick an editor; I picked nano).
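Assuming the script was saved as ufw-dyndns-update.sh (the filename and log path here are examples), installing it and adding the hourly crontab entry would look like this:
sudo cp ufw-dyndns-update.sh /bin/ufw-dyndns-update.sh
sudo chmod +x /bin/ufw-dyndns-update.sh
# crontab entry - run hourly and append the output to a log file
0 * * * * /bin/ufw-dyndns-update.sh >> /var/log/ufw-dyndns-update.log 2>&1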
Let's say we need to advertise connected routes within VRFs via an IGP to an upstream or downstream IP address; this is one of many ways to achieve that objective.
For this example we are going to use BGP to collect the connected routes and advertise them over OSPF.
interface vlan250
mode L3
description OSPF_Routing
no shutdown
ip vrf forwarding Shared_VRF
ip address 10.252.250.6/29
ip ospf 250 area 0.0.0.0
ip ospf mtu-ignore
ip ospf priority 10
ip vrf Tenant01_VRF
ip vrf Tenant02_VRF
ip vrf Tenant03_VRF
Create and initialize the Interfaces (SVI, Layer 3 interface, Loopback)
We are creating Layer 3 SVIs per tenant
interface vlan200
mode L3
description Tenant01_NET01
no shutdown
ip vrf forwarding Tenant01_VRF
ip address 10.251.100.254/24
!
interface vlan201
mode L3
description Tenant01_NET02
no shutdown
ip vrf forwarding Tenant01_VRF
ip address 10.251.101.254/24
!
interface vlan210
mode L3
description Tenant02_NET01
no shutdown
ip vrf forwarding Tenant02_VRF
ip address 172.17.100.254/24
!
interface vlan220
no ip address
description Tenant03_NET01
no shutdown
ip vrf forwarding Tenant03_VRF
ip address 192.168.110.254/24
!
interface vlan250
mode L3
description OSPF_Routing
no shutdown
ip vrf forwarding Shared_VRF
ip address 10.252.250.6/29
Confirmation
LABCORE# show i
image interface inventory ip ipv6 iscsi
LABCORE# show ip interface brief
Interface Name IP-Address OK Method Status Protocol
=========================================================================================
Vlan 200 10.251.100.254/24 YES manual up up
Vlan 201 10.251.101.254/24 YES manual up up
Vlan 210 172.17.100.254/24 YES manual up up
Vlan 220 192.168.110.254/24 YES manual up up
Vlan 250 10.252.250.6/29 YES manual up up
LABCORE# show ip vrf
VRF-Name Interfaces
Shared_VRF Vlan250
Tenant01_VRF Vlan200-201
Tenant02_VRF Vlan210
Tenant03_VRF Vlan220
default Vlan1
management Mgmt1/1/1
Route leaking
For this Example we are going to Leak routes from each of these tenant VRFs in to the Shared VRF
This design will allow each VLAN within the VRFs to see the others, which can be a security issue; however, you can easily control this by:
Narrowing the leaked routes down to specific hosts
Using access lists (not the most ideal, but if you have a playbook you can program this in without any issues)
Real-world use cases may differ; use this as a template for how to leak routes between VRFs and update your config as needed.
Create the route export/import statements within the VRFs
ip vrf Shared_VRF
ip route-import 2:100
ip route-import 3:100
ip route-import 4:100
ip route-export 1:100
ip vrf Tenant01_VRF
ip route-export 2:100
ip route-import 1:100
ip vrf Tenant02_VRF
ip route-export 3:100
ip route-import 1:100
ip vrf Tenant03_VRF
ip route-export 4:100
ip route-import 1:100
Let's explain this a bit
ip vrf Shared_VRF
ip route-import 2:100 -----------> Import Leaked routes from target 2:100
ip route-import 3:100 -----------> Import Leaked routes from target 3:100
ip route-import 4:100 -----------> Import Leaked routes from target 4:100
ip route-export 1:100 -----------> Export routes to target 1:100
If you need to filter which routes can be imported, use a route-map with prefix lists to filter them out.
Setup static routes per VRF as needed
ip route vrf Tenant01_VRF 10.251.100.0/24 interface vlan200
ip route vrf Tenant01_VRF 10.251.101.0/24 interface vlan201
!
ip route vrf Tenant02_VRF 172.17.100.0/24 interface vlan210
!
ip route vrf Tenant03_VRF 192.168.110.0/24 interface vlan220
!
ip route vrf Shared_VRF 0.0.0.0/0 10.252.250.1 interface vlan250
Now these static routes will be leaked and learned by the shared VRF
The default route on the shared VRF will be learned downstream by the tenant VRFs.
Instead of exporting the default route from the shared VRF, you can scope it to a certain IP or subnet to prevent traffic from routing between the VRFs via the shared VRF.
If you need routes leaked directly between tenants, use ip route-import on the VRFs as needed, as sketched below.
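For example, to let Tenant01 also pull in Tenant02's routes directly (the targets follow the numbering scheme used above; adjust to your own), you would add an extra import under the tenant VRF:
ip vrf Tenant01_VRF
 ip route-export 2:100
 ip route-import 1:100
 ip route-import 3:100
The added 3:100 import matches the target Tenant02_VRF exports, so Tenant02's connected routes appear in Tenant01's table.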
Confirmation
Routes are being distributed via the internal BGP process
LABCORE# show ip route vrf Tenant01_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:42
C 10.251.100.0/24 via 10.251.100.254 vlan200 0/0 12:43:46
C 10.251.101.0/24 via 10.251.101.254 vlan201 0/0 12:43:46
LABCORE#
LABCORE# show ip route vrf Tenant02_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:45
C 172.17.100.0/24 via 172.17.100.254 vlan210 0/0 12:43:49
LABCORE#
LABCORE# show ip route vrf Tenant03_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*B IN 0.0.0.0/0 via 10.252.250.1 200/0 12:17:48
C 192.168.110.0/24 via 192.168.110.254 vlan220 0/0 12:43:52
LABCORE# show ip route vrf Shared_VRF
Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
Destination Gateway Dist/Metric Last Change
----------------------------------------------------------------------------------------------------------
*S 0.0.0.0/0 via 10.252.250.1 vlan250 1/0 12:21:33
B IN 10.251.100.0/24 Direct,Tenant01_VRF vlan200 200/0 09:01:28
B IN 10.251.101.0/24 Direct,Tenant01_VRF vlan201 200/0 09:01:28
C 10.252.250.0/29 via 10.252.250.6 vlan250 0/0 12:42:53
B IN 172.17.100.0/24 Direct,Tenant02_VRF vlan210 200/0 09:01:28
B IN 192.168.110.0/24 Direct,Tenant03_VRF vlan220 200/0 09:02:09
We can ping out to the internet from the VRF IPs.
You can use an internal BGP process to pick up routes from any VRF and redistribute them to other IGP processes as needed – check the article for that information.
Let’s talk a little bit about this code and unpack this
Vagrant API version
Vagrant uses API versions for its configuration file, this is how it can stay backward compatible. So in every Vagrantfile we need to specify which version to use. The current one is version 2 which works with Vagrant 1.1 and up.
Provisioning the Ansible VM
This will
Provision the controller Ubuntu VM
Create a bridged network adapter
Set the host-name – LAB-Controller
Set the static IP – 172.17.10.120/24
Run the Shell script that installs Ansible using apt-get install (We will get to this below)
Let's start digging in…
Specifying the Controller VM Name, base box and hostname
Vagrant uses a base image to clone a virtual machine quickly. These base images are known as “boxes” in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.
Define the shell script to customize the VM config and install the Ansible Package
Now this is where we define the provisioning shell script
This script installs Ansible and sets the hosts file entries to make your life easier.
In case you are wondering, VLS stands for V = virtual, L = Linux, S = server.
I use this naming scheme for my VMs. Feel free to use anything you want; just make sure it matches what you defined in the Vagrantfile under node.vm.hostname.
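For reference, a stripped-down sketch of the controller definition described above could look like the following (the box name, bridged-network details, and script filename are assumptions; the original Vagrantfile may differ):
# Vagrantfile (sketch)
Vagrant.configure("2") do |config|
  config.vm.define "controller" do |node|
    node.vm.box      = "ubuntu/focal64"                      # assumed base box
    node.vm.hostname = "LAB-Controller"
    node.vm.network "public_network", ip: "172.17.10.120"    # bridged adapter with the static IP
    node.vm.provision "shell", path: "install_ansible.sh"    # assumed name for the Ansible install script
  end
end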
We covered most of the code used above; the only difference here is that we use the each method to create three VMs from the same template (I'm lazy and it's more convenient) – see the sketch after the list below.
This will create three Ubuntu VMs with the following hostnames and IP addresses; you should update these values to match your LAN, or use a private adapter:
vls-node1 – 172.17.10.121
vls-node2 – 172.17.10.122
vls-node3 – 172.17.10.123
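A sketch of that each loop, using the hostnames and IPs above (the box name is again an assumption):
nodes = {
  "vls-node1" => "172.17.10.121",
  "vls-node2" => "172.17.10.122",
  "vls-node3" => "172.17.10.123",
}
Vagrant.configure("2") do |config|
  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.box      = "ubuntu/focal64"          # assumed base box
      node.vm.hostname = name
      node.vm.network "public_network", ip: ip     # bridged adapter
    end
  end
end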
So now that we are done with explaining the code, let’s run this
Building the Lab environment using Vagrant
Issue the following command to check your Vagrantfile and see the machine states:
vagrant status
Issue the following command to bring up the environment:
vagrant up
If you get an error complaining that hardware virtualization is unavailable, reboot into UEFI/BIOS and make sure virtualization is enabled:
Intel – VT-x
AMD Ryzen – SVM
If everything is kumbaya, you will see Vagrant firing up the deployment.
It will provision the 4 VMs we specified.
Notice that since we have the "vagrant-vbguest" plugin installed, it will reinstall the relevant guest additions along with the dependencies for the OS:
==> vls-node3: Machine booted and ready!
[vls-node3] No Virtualbox Guest Additions installation found.
rmmod: ERROR: Module vboxsf is not currently loaded
rmmod: ERROR: Module vboxguest is not currently loaded
Reading package lists...
Building dependency tree...
Reading state information...
Package 'virtualbox-guest-x11' is not installed, so not removed
The following packages will be REMOVED:
virtualbox-guest-utils*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5799 kB disk space will be freed.
(Reading database ... 61617 files and directories currently installed.)
Removing virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for man-db (2.8.7-3) ...
(Reading database ... 61604 files and directories currently installed.)
Purging configuration files for virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for systemd (242-7ubuntu3.7) ...
Reading package lists...
Building dependency tree...
Reading state information...
linux-headers-5.3.0-51-generic is already the newest version (5.3.0-51.44).
linux-headers-5.3.0-51-generic set to manually installed.
Check the status
vagrant status
Testing
Connecting via SSH to your VMs
vagrant ssh controller
"controller" is the VM name we defined before, not the hostname. You can find it by running vagrant status in PowerShell or your terminal.
We are going to connect to our controller and check everything
A little bit more information on the networking side:
Vagrant adds two interfaces for each VM
NIC 1 – NAT'd to the host (the control plane Vagrant uses to manage the VMs)
NIC 2 – Bridged adapter we provisioned in the script with the IP address
The default route is set via the private (NAT'd) interface (you can't change it)
Netplan configs
Vagrant creates a custom netplan yaml for interface configs
I hope this helped someone. When I started with Vagrant a few years back, it took me a few tries to figure out the system and the logic behind it; this should give you a basic understanding of how things are plugged together.
Let me know in the comments if you see any issues or mistakes.
Received the following error from Azure AD stating that password synchronization was not working on the tenant.
When I manually initiate a delta sync, I see the following in the logs:
"The Specified Domain either does not exist or could not be contacted"
Checked the following:
Restarted the AD Sync services
Resolved the AD DS domain FQDN via DNS – working
Tested the required AD sync ports using PortQry – found issues with the primary AD DS server defined in the DNS settings
Root Cause
Turns out the domain controller defined as the primary DNS value was going through updates; it was responding on DNS but not returning any data (a brown-out state).
Assumption
Since the primary DNS server still accepts connections, Windows doesn't fail over to the secondary and tertiary servers defined under DNS servers.
This might also happen if you are using an AD DS server across an S2S tunnel/MPLS link when latency goes high.
Resolution
Check and make sure the AD DS DNS servers defined on the AD Sync server are alive and responding.
In my case I just updated the "primary" DNS value with the Umbrella appliance IP (it acts as a proxy and handles the failover).
During an Office 365 migration in a hybrid environment with AAD Connect, I ran into the following scenario:
Hybrid Co-Existence Environment with AAD-Sync
User [email protected] has a mailbox on-premises. Jon is represented as a mail user in the cloud with an Office 365 license
[email protected] had a cloud-only mailbox before the initial AD sync was run
The user account is registered as a mail user and has a valid license attached
During the Office 365 remote mailbox move, we end up with the following error during validation, and removing the immutable ID and remapping it to the on-premises account won't fix the issue:
Target user 'Sam fisher' already has a primary mailbox.
    + CategoryInfo : InvalidArgument: (tsu:MailboxOrMailUserIdParameter) [New-MoveRequest], RecipientTaskException
    + FullyQualifiedErrorId : [Server=Pl-EX001,RequestId=19e90208-e39d-42bc-bde3-ee0db6375b8a,TimeStamp=11/6/2019 4:10:43 PM] [FailureCategory=Cmdlet-RecipientTaskException] 9418C1E1,Microsoft.Exchange.Management.Migration.MailboxReplication.MoveRequest.NewMoveRequest
    + PSComputerName : Pl-ex001.Paladin.org
It turns out this happens due to an unclean cloud object in MSOL; Exchange Online keeps pointers that indicate there used to be a mailbox in the cloud for this user.
Option 1 (nuclear option)
The fix is to delete the MSOL user object for Sam and re-sync it from on-premises. This would delete [email protected] from the cloud – but it would remove the user from all workloads, not only Exchange. That's problematic because Sam is already using Teams, OneDrive, and SharePoint.
Option 2
Clean up only the Office 365 mailbox pointer information.
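In Exchange Online PowerShell, the pointers are cleared with Set-User and the -PermanentlyClearPreviousMailboxInfo switch, which produces the confirmation prompt shown below:
Set-User -Identity [email protected] -PermanentlyClearPreviousMailboxInfo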
Confirm
Are you sure you want to perform this action?
Delete all existing information about user "[email protected]"? This operation will clear existing values from Previous home MDB and Previous Mailbox GUID of the user. After deletion, reconnecting to the previous mailbox that existed in the cloud will not be possible and any content it had will be unrecoverable PERMANENTLY. Do you want to continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): a
Executing this leaves you with a clean object without the duplicate-mailbox problem.
In some cases when you run this command you will get the following output:
“Command completed successfully, but no user settings were changed.”
If this happens:
Temporarily remove the license from the user, then run the command again to clear the previous mailbox data.
Update Manager has been bundled with the vCenter Server Appliance since version 6.5; it's a plug-in that runs in the vSphere Web Client. We can use the component to:
patch/upgrade hosts
deploy .vib files within vCenter
scan your vCenter environment and report on any out-of-compliance hosts
Hardcore/experienced VMware operators will scoff at this article, but I have seen many organizations still using iLO/iDRAC to mount an ISO to update hosts, with no idea this function even exists.
Now that that's out of the way, let's get to the how-to part.
In vCenter, click "Menu" and drill down to "Update Manager".
This blade will show you all the nerd knobs and an overview of your current updates and compliance levels.
Click on the "Baselines" tab.
You will have two predefined baselines for security patches created by vCenter; let's keep those aside for now.
Navigate to the “ESXi Images” Tab, and Click “Import”
Once the Upload is complete, Click on “New Baseline”
Fill in a name and description that make sense to anyone who logs in, and click Next.
Select the image you just uploaded on the next screen, then continue through the wizard and complete it.
Note – If you have other third-party software for ESXi, you can create separate baselines for those and use baseline groups to push out upgrades and .vib files at the same time.
Now click "Menu" and navigate back to "Hosts and Clusters".
You can apply the baseline at various levels within the vCenter hierarchy:
vCenter | Datacenter | Cluster | Host
Pick the right level depending on your use case.
Excerpt from the KB
For ESXi hosts in a cluster, the remediation process is sequential by default. With Update Manager, you can select to run host remediation in parallel.
When you remediate a cluster of hosts sequentially and one of the hosts fails to enter maintenance mode, Update Manager reports an error, and the process stops and fails. The hosts in the cluster that are remediated stay at the updated level. The ones that are not remediated after the failed host remediation are not updated. If a host in a DRS enabled cluster runs a virtual machine on which Update Manager or vCenter Server are installed, DRS first attempts to migrate the virtual machine running vCenter Server or Update Manager to another host so that the remediation succeeds. In case the virtual machine cannot be migrated to another host, the remediation fails for the host, but the process does not stop. Update Manager proceeds to remediate the next host in the cluster.
The host upgrade remediation of ESXi hosts in a cluster proceeds only if all hosts in the cluster can be upgraded.
Remediation of hosts in a cluster requires that you temporarily disable cluster features such as VMware DPM and HA admission control. Also, turn off FT if it is enabled on any of the virtual machines on a host, and disconnect the removable devices connected to the virtual machines on a host, so that they can be migrated with vMotion. Before you start a remediation process, you can generate a report that shows which cluster, host, or virtual machine has the cluster features enabled.
Creating a persistent scratch location for ESXi – https://kb.vmware.com/s/article/1033696
Cause 2
Hardware is not compatible.
I had this issue due to 6.7 dropping support for an LSI RAID card on older firmware; you need to do some footwork and check the log files to figure out why it's failing.