The output will show each slot's designation, current usage, and bus address:
Designation: CPU SLOT1 PCI-E 4.0 X16
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT2 PCI-E 4.0 X8
Current Usage: In Use
Bus Address: 0000:41:00.0
Designation: CPU SLOT3 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:c1:00.0
Designation: CPU SLOT4 PCI-E 4.0 X8
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT5 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:c2:00.0
Designation: CPU SLOT6 PCI-E 4.0 X16
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT7 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:81:00.0
Designation: PCI-E M.2-M1
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: PCI-E M.2-M2
Current Usage: Available
Bus Address: 0000:ff:00.0
We can use lspci -s #BusAddress# to identify what's installed in each slot.
I'm sure there is a more elegant way to do this, but it worked as a quick-ish way to find what I needed. If you know a better way, please share in the comments.
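If you want to script it, here is a rough sketch of that idea (my own quick hack, not a polished tool): it parses dmidecode -t slot and feeds every in-use bus address to lspci. Run it as root.
#!/usr/bin/env bash
# Rough sketch: print what is installed in every in-use PCIe slot.
# Parses "dmidecode -t slot" and runs lspci against each populated bus address.
dmidecode -t slot | awk -F': ' '
  /Designation/   { slot = $2 }
  /Current Usage/ { usage = $2 }
  /Bus Address/   { if (usage == "In Use") print slot "|" $2 }
' | while IFS='|' read -r slot bus; do
  echo "$slot -> $(lspci -s "$bus")"
done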
VSX is a clustering technology that allows two switches to run independent control planes (OSPF/BGP) and present themselves as separate routers to the network. In the data path, however, they function as a single router and support active-active forwarding.
VSX mitigates the inherent issues of the shared control plane that comes with traditional stacking, while keeping all the benefits:
Control plane: Inter-Switch-Link and Keepalive
Data plane L2: MCLAGs
Data plane L3: Active gateway
This is very similar to Dell VLT stacking on Dell OS10.
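To illustrate the MCLAG piece: once the VSX pair is formed (covered in the setup guide below), downstream devices attach via a multi-chassis LAG that is defined on both peers. The sketch below is only an illustration, not part of this guide's configuration; lag 10, port 1/1/1, and the VLAN settings are placeholder values.
CORE01# (repeat the same on CORE02)
interface lag 10 multi-chassis
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed all
lacp mode active
exit
interface 1/1/1
no shutdown
lag 10
exit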
Basic Feature Comparison with Dell VLT Stacking

Feature                                       Dell VLT Stacking        Aruba VSX
--------------------------------------------  -----------------------  --------------------------------
Supports multi-chassis LAG                    ✅                       ✅
Independent control planes                    ✅                       ✅
All-active gateway (L3 load balancing)        ✅ (VLT Peer routing)    ✅ (VSX Active forwarding)
Layer 3 packet load balancing                 ✅                       ✅
Can participate in spanning tree (MST/RSTP)   ✅                       ✅
Gateway IP redundancy                         ✅ (VRRP)                ✅ (VSX Active Gateway or VRRP)
Setup Guide
What you need
A 10/25/40/100GbE port for the inter-switch link
A VSX-capable switch – VSX is only supported on SKUs above the CX 6300 series (see the table below)
Switch Series          VSX Support
---------------------  -----------
CX 6200 series         X
CX 6300 series         X
CX 6400 series         ✅
CX 8200 series         ✅
CX 8320/8325 series    ✅
CX 8360 series         ✅

(Updated December 2020)
For this guide, I'm using an 8325 series switch.
Dry run
Set up the LAG interface for the inter-switch link (ISL)
Create the VSX cluster
Set up a keepalive link and a new VRF for the keepalive traffic
Set up the LAG interface for the inter-switch link (ISL)
To form the VSX cluster, we need a LAG interface for inter-switch communication.
Naturally, I pick the fastest ports on the switch for this (10/25/40/100GbE).
Depending on the switch you have, the ISL bandwidth can be a limitation/bottleneck. Account for this when designing a VSX-based solution, and utilize VSX active forwarding or active gateways to mitigate it.
Create the LAG interface
This is a regular port channel with no special configuration; you need to create it on both switches.
Native VLAN needs to be the default VLAN
Trunk port with all VLANs allowed
CORE01#
interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit
-------------------------------
CORE02#
interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit
Add/Assign the physical ports to the LAG interface
I’m using two 100GE ports for the ISL LAG
CORE01#
interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit
-------------------------------
CORE02#
interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit
Apply the same configuration on the VSX peer switch (the second switch).
Connect the cables via DAC/optics and confirm the port-channel health:
CORE01# sh lag 256
System-ID : b8:d4:e7:d5:36:00
System-priority : 65534
Aggregate lag256 is up
Admin state is up
Description : VSX-LAG
Type : normal
MAC Address : b8:d4:e7:d5:36:00
Aggregated-interfaces : 1/1/55 1/1/56
Aggregation-key : 256
Aggregate mode : active
Hash : l3-src-dst
LACP rate : slow
Speed : 200000 Mb/s
Mode : trunk
-------------------------------------------------------------------
CORE02# sh lag 256
System-ID : b8:d4:e7:d5:f3:00
System-priority : 65534
Aggregate lag256 is up
Admin state is up
Description : VSX-LAG
Type : normal
MAC Address : b8:d4:e7:d5:f3:00
Aggregated-interfaces : 1/1/55 1/1/56
Aggregation-key : 256
Aggregate mode : active
Hash : l3-src-dst
LACP rate : slow
Speed : 200000 Mb/s
Mode : trunk
Form the VSX Cluster
Under configuration mode, go into the VSX context by entering “vsx” and issue the following commands on both switches:
CORE01#
vsx
inter-switch-link lag 256
role primary
linkup-delay-timer 30
-------------------------------
CORE02#
vsx
inter-switch-link lag 256
role secondary
linkup-delay-timer 30
Check the VSX Status
CORE01# sh vsx status
VSX Operational State
---------------------
ISL channel : In-Sync
ISL mgmt channel : operational
Config Sync Status : In-Sync
NAE : peer_reachable
HTTPS Server : peer_reachable
Attribute Local Peer
------------ -------- --------
ISL link lag256 lag256
ISL version 2 2
System MAC b8:d4:e7:d5:36:00 b8:d4:e7:d5:f3:00
Platform 8325 8325
Software Version GL.10.06.0001 GL.10.06.0001
Device Role primary secondary
----------------------------------------
CORE02# sh vsx status
VSX Operational State
---------------------
ISL channel : In-Sync
ISL mgmt channel : operational
Config Sync Status : In-Sync
NAE : peer_reachable
HTTPS Server : peer_reachable
Attribute Local Peer
------------ -------- --------
ISL link lag256 lag256
ISL version 2 2
System MAC b8:d4:e7:d5:f3:00 b8:d4:e7:d5:36:00
Platform 8325 8325
Software Version GL.10.06.0001 GL.10.06.0001
Device Role secondary primary
Set up the Keepalive Link
It's recommended to set up a keepalive link to avoid split-brain scenarios if the ISL goes down. We are trying to prevent both switches from thinking they are the active device, which would create STP loops and other issues on the network.
This is not a must-have, but it's nice to have. As of ArubaOS-CX 10.06.x, you need to sacrifice one of your data ports for this.
Dell OS10 VLT achieves this via the OOBM network ports; supposedly, keepalive over OOBM is something Aruba is working on for future releases.
A few things to note:
It's recommended to use a routed port in a separate VRF for the keepalive link
You can use a 1Gbps link for this if needed
Provision the port and VRF
CORE01#
vrf KEEPALIVE
interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.1/24
exit
-----------------------------------------
CORE02#
vrf KEEPALIVE
interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.2/24
exit
Define the Keepalive link
Note – Remember to specify the VRF in the keepalive statement
Thanks /u/illumynite for pointing that out
CORE01#
vsx
inter-switch-link lag 256
role primary
keepalive peer 192.168.168.2 source 192.168.168.1 vrf KEEPALIVE
linkup-delay-timer 30
-----------------------------------------
CORE02#
vsx
inter-switch-link lag 256
role secondary
keepalive peer 192.168.168.1 source 192.168.168.2 vrf KEEPALIVE
linkup-delay-timer 30
Let’s talk a little bit about this code and unpack it.
Vagrant API version
Vagrant uses API versions for its configuration file; this is how it stays backward compatible. In every Vagrantfile, we need to specify which version to use. The current one is version 2, which works with Vagrant 1.1 and up.
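As a quick illustration (not the full Vagrantfile for this lab), the API version is the first argument to the configure block:
# "2" selects configuration API version 2
Vagrant.configure("2") do |config|
  # every VM, network, and provisioner setting lives inside this block
end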
Provisioning the Ansible VM
This will:
Provision the controller Ubuntu VM
Create a bridged network adapter
Set the hostname – LAB-Controller
Set the static IP – 172.17.10.120/24
Run the shell script that installs Ansible using apt-get install (we will get to this below)
Let's start digging in…
Specifying the Controller VM Name, base box and hostname
Vagrant uses a base image to clone a virtual machine quickly. These base images are known as “boxes” in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.
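Roughly, the controller definition looks like the sketch below. The box name and the script filename are placeholders I picked for illustration; the VM name, hostname, and IP come from this guide.
Vagrant.configure("2") do |config|
  config.vm.define "controller" do |node|
    node.vm.box      = "ubuntu/bionic64"                   # placeholder base box
    node.vm.hostname = "LAB-Controller"
    # bridged (public) network with the static IP
    node.vm.network "public_network", ip: "172.17.10.120"
    # shell provisioner that installs Ansible (covered in the next section)
    node.vm.provision "shell", path: "install_ansible.sh"  # placeholder filename
  end
end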
Define the shell script to customize the VM config and install the Ansible Package
Now, this is where we define the provisioning shell script.
This script installs Ansible and sets the hosts file entries to make your life easier.
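Here is a hedged sketch of what such a script can look like; the exact packages and hosts entries are my assumptions based on the node list further down, not the original script.
#!/usr/bin/env bash
# Sketch of a provisioning script: install Ansible via apt and add hosts entries
apt-get update -y
apt-get install -y ansible

# Hosts file entries for the lab nodes (IPs from the node list further down)
cat <<'EOF' >> /etc/hosts
172.17.10.120 LAB-Controller
172.17.10.121 vls-node1
172.17.10.122 vls-node2
172.17.10.123 vls-node3
EOF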
In case you are wondering, VLS stands for V = virtual, L = Linux, S = server.
I use this naming scheme for my VMs. Feel free to use anything you want; just make sure it matches what you defined in the Vagrantfile under node.vm.hostname.
We covered most of the code used above; the only difference here is that we are using the each method to create three VMs from the same template (I'm lazy and it's more convenient). A sketch of that loop follows the node list below.
This will create three Ubuntu VMs with the following hostnames and IP addresses; you should update these values to match your LAN, or use a private adapter.
vls-node1 – 172.17.10.121
vls-node2 – 172.17.10.122
vls-node3 – 172.17.10.123
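A sketch of that each loop, using the same placeholder box name as before; it goes inside the Vagrant.configure("2") block:
# Builds vls-node1..3 from one template
(1..3).each do |i|
  config.vm.define "vls-node#{i}" do |node|
    node.vm.box      = "ubuntu/bionic64"                     # placeholder base box
    node.vm.hostname = "vls-node#{i}"
    node.vm.network "public_network", ip: "172.17.10.12#{i}"
  end
end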
Now that we are done explaining the code, let's run it.
Building the Lab environment using Vagrant
Issue the following command to check your syntax:
vagrant status
Issue the following command to bring up the environment:
vagrant up
If you get an error saying hardware virtualization is not available, reboot into UEFI and make sure virtualization is enabled:
Intel – VT-x
AMD Ryzen – SVM
If everything is kumbaya, you will see Vagrant firing up the deployment.
It will provision four VMs, as we specified.
Notice that since we have the “vagrant-vbguest” plugin installed, it will reinstall the relevant guest additions along with the dependencies for the OS:
==> vls-node3: Machine booted and ready!
[vls-node3] No Virtualbox Guest Additions installation found.
rmmod: ERROR: Module vboxsf is not currently loaded
rmmod: ERROR: Module vboxguest is not currently loaded
Reading package lists...
Building dependency tree...
Reading state information...
Package 'virtualbox-guest-x11' is not installed, so not removed
The following packages will be REMOVED:
virtualbox-guest-utils*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5799 kB disk space will be freed.
(Reading database ... 61617 files and directories currently installed.)
Removing virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for man-db (2.8.7-3) ...
(Reading database ... 61604 files and directories currently installed.)
Purging configuration files for virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for systemd (242-7ubuntu3.7) ...
Reading package lists...
Building dependency tree...
Reading state information...
linux-headers-5.3.0-51-generic is already the newest version (5.3.0-51.44).
linux-headers-5.3.0-51-generic set to manually installed.
Check the status
vagrant status
Testing
Connecting via SSH to your VMs
vagrant ssh controller
“controller” is the VM name we defined before, not the hostname. You can find it by running vagrant status in PowerShell or your terminal.
We are going to connect to our controller and check everything.
A little bit more information on the networking side:
Vagrant adds two interfaces for each VM:
NIC 1 – NAT’d to the host (the control plane Vagrant uses to manage the VMs)
NIC 2 – the bridged adapter we provisioned with the IP address
The default route is set via the private (NAT’d) interface (you can't change it)
Netplan configs
Vagrant creates a custom Netplan YAML file for the interface configs.
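To inspect it from inside the guest (the generated file is commonly named 50-vagrant.yaml, but the exact name can vary by box):
ls /etc/netplan/
cat /etc/netplan/50-vagrant.yaml   # filename may differ depending on the box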
I hope this helped someone. When I started with Vagrant a few years back, it took me a few tries to figure out the system and the logic behind it; this should give you a basic understanding of how things are plugged together.
Let me know in the comments if you see any issues or mistakes.