Find the PCI-E slot number of an add-on card (GPU, NIC, etc.) on Linux/Proxmox

I was working on a vGPU POC using PVE since Broadcom screwed us with the vSphere licensing costs (new post incoming about this adventure).

Anyway, I needed to find the PCI-E slot used by the A4000 GPU on the host so I could disable it for troubleshooting.

Guide

First, we need to find the occupied slots and the bus address for each slot.

sudo dmidecode -t slot | grep -E "Designation|Usage|Bus Address"

The output will show the slot designation, current usage, and bus address:

        Designation: CPU SLOT1 PCI-E 4.0 X16
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT2 PCI-E 4.0 X8
        Current Usage: In Use
        Bus Address: 0000:41:00.0
        Designation: CPU SLOT3 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:c1:00.0
        Designation: CPU SLOT4 PCI-E 4.0 X8
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT5 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:c2:00.0
        Designation: CPU SLOT6 PCI-E 4.0 X16
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT7 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:81:00.0
        Designation: PCI-E M.2-M1
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: PCI-E M.2-M2
        Current Usage: Available
        Bus Address: 0000:ff:00.0

We can use lspci -s #BusAddress# to see what's installed in each slot:

lspci -s 0000:c2:00.0
c2:00.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1)

lspci -s 0000:81:00.0
81:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
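
To tie the two steps together, here's a rough sketch that loops over the in-use slots from dmidecode and feeds each bus address to lspci. It assumes the dmidecode output format shown above, so adjust the awk patterns to match your BIOS if needed.

sudo dmidecode -t slot | awk -F': ' '
  /Designation/   { slot = $2 }
  /Current Usage/ { usage = $2 }
  /Bus Address/   { if (usage == "In Use") print slot "|" $2 }
' | while IFS='|' read -r slot addr; do
    echo "== ${slot} (${addr}) =="   # slot designation and bus address
    lspci -s "${addr}"               # device installed in that slot
done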

I'm sure there is a more elegant way to do this, but it worked as a quick way to find what I needed. If you know a better way, please share it in the comments.

Until next time!!!

Reference –

https://stackoverflow.com/questions/25908782/in-linux-is-there-a-way-to-find-out-which-pci-card-is-plugged-into-which-pci-sl

ArubaOS CX Virtual Switching Extension – VSX Stacking Guide

What is VSX?

VSX is a clustering technology that allows two switches to run independent control planes (OSPF/BGP) and present themselves as separate routers in the network. In the data path, however, they function as a single router and support active-active forwarding.

VSX mitigates the inherent issues of the shared control plane that comes with traditional stacking while keeping all the benefits:

  • Control plane: Inter-Switch-Link and Keepalive
  • Data plane L2: MCLAGs
  • Data plane L3: Active gateway

This is very similar to Dell VLT stacking on Dell OS10.

Basic feature Comparison with Dell VLT Stacking

Feature                                               | Dell VLT Stacking      | Aruba VSX
Supports multi-chassis LAG                            | ✅                     | ✅
Independent control planes                            | ✅                     | ✅
All-active gateway configuration (L3 load balancing)  | ✅ (VLT Peer Routing)  | ✅ (VSX Active Forwarding)
Layer 3 packet load balancing                         | ✅                     | ✅
Can participate in Spanning Tree (MST/RSTP)           | ✅                     | ✅
Gateway IP redundancy                                 | ✅ (VRRP)              | ✅ (VSX Active Gateway or VRRP)

Setup Guide

What you need

  • A 10/25/40/100GE port for the inter-switch link
  • A VSX-capable switch; VSX is only supported on switches above the CX 6300 SKU (see the table below)

Switch Series        | VSX
CX 6200 series       | ✗
CX 6300 series       | ✗
CX 6400 series       | ✅
CX 8200 series       | ✅
CX 8320/8325 series  | ✅
CX 8360 series       | ✅
(Updated 2020-Dec)

For this guide, I'm using an 8325 series switch.

Dry run

  • Setup LAG interface for the inter-switch link (ISL)
  • Create the VSX cluster
  • Setup a keepalive link and a new VRF for the keepalive traffic

Setup LAG interface for the inter-switch link (ISL)

In order to form the VSX cluster, we need a LAG interface for inter-switch communication.

Naturally, pick the fastest ports on the switch for this (10/25/40/100GE).

Depending on the switch you have, the ISL bandwidth can be a limitation/bottleneck; account for this when designing a VSX-based solution.
Utilize VSX active-forwarding or active gateways to mitigate this.

Create the LAG interface

This is a regular port channel with no special configuration; you need to create it on both switches.

  • Native VLAN needs to be the default VLAN
  • Trunk port and All VLANs allowed
CORE01#

interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit


-------------------------------

CORE02#

interface lag 256
no shutdown
description VSX-LAG
no routing
vlan trunk native 1 tag
vlan trunk allowed all
lacp mode active
exit

Add/Assign the physical ports to the LAG interface

I’m using two 100GE ports for the ISL LAG

CORE01#

interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit

-------------------------------

CORE02#

interface 1/1/55
no shutdown
lag 256
exit
interface 1/1/56
no shutdown
lag 256
exit

Do the same configuration on the VSX Peer switch (Second Switch)

Connect the cables via DAC/optics and confirm the port-channel health:

CORE01# sh lag 256
System-ID       : b8:d4:e7:d5:36:00
System-priority : 65534

Aggregate lag256 is up
 Admin state is up
 Description : VSX-LAG
 Type                        : normal
 MAC Address                 : b8:d4:e7:d5:36:00
 Aggregated-interfaces       : 1/1/55 1/1/56
 Aggregation-key             : 256
 Aggregate mode              : active
 Hash                        : l3-src-dst
 LACP rate                   : slow
 Speed                       : 200000 Mb/s
 Mode                        : trunk


-------------------------------------------------------------------

CORE02# sh lag 256
System-ID       : b8:d4:e7:d5:f3:00
System-priority : 65534

Aggregate lag256 is up
 Admin state is up
 Description : VSX-LAG
 Type                        : normal
 MAC Address                 : b8:d4:e7:d5:f3:00
 Aggregated-interfaces       : 1/1/55 1/1/56
 Aggregation-key             : 256
 Aggregate mode              : active
 Hash                        : l3-src-dst
 LACP rate                   : slow
 Speed                       : 200000 Mb/s
 Mode                        : trunk


Form the VSX Cluster

Under configuration mode, go into the VSX context by entering “vsx” and issue the following commands on both switches:

CORE01#

vsx
    inter-switch-link lag 256
    role primary
    linkup-delay-timer 30

-------------------------------

CORE02#

vsx
    inter-switch-link lag 256
    role secondary
    linkup-delay-timer 30

Check the VSX Status

CORE01# sh vsx status
VSX Operational State
---------------------
  ISL channel             : In-Sync
  ISL mgmt channel        : operational
  Config Sync Status      : In-Sync
  NAE                     : peer_reachable
  HTTPS Server            : peer_reachable

Attribute           Local               Peer
------------        --------            --------
ISL link            lag256              lag256
ISL version         2                   2
System MAC          b8:d4:e7:d5:36:00   b8:d4:e7:d5:f3:00
Platform            8325                8325
Software Version    GL.10.06.0001       GL.10.06.0001
Device Role         primary             secondary

----------------------------------------

CORE02# sh vsx status
VSX Operational State
---------------------
  ISL channel             : In-Sync
  ISL mgmt channel        : operational
  Config Sync Status      : In-Sync
  NAE                     : peer_reachable
  HTTPS Server            : peer_reachable

Attribute           Local               Peer
------------        --------            --------
ISL link            lag256              lag256
ISL version         2                   2
System MAC          b8:d4:e7:d5:f3:00   b8:d4:e7:d5:36:00
Platform            8325                8325
Software Version    GL.10.06.0001       GL.10.06.0001
Device Role         secondary           primary

Setup the Keepalive Link

It's recommended to set up a keepalive link to avoid split-brain scenarios if the ISL goes down. We are trying to prevent both switches from thinking they are the active device, which would create STP loops and other issues on the network.

This is not a must-have, but it's nice to have. As of ArubaOS-CX 10.06.x you need to sacrifice one of your data ports for this.

Dell OS10 VLT achieves this via the OOBM network ports; supposedly, keepalive over OOBM is something Aruba is working on for future releases.

A few things to note:

  • It's recommended to use a routed port in a separate VRF for the keepalive link
  • You can use a 1 Gbps link for this if needed

Provision the port and VRF

CORE01#

vrf KEEPALIVE

interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.1/24
exit

-----------------------------------------

CORE02#

vrf KEEPALIVE

interface 1/1/48
no shutdown
vrf attach KEEPALIVE
description VSX-keepalive-Link
ip address 192.168.168.2/24
exit
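
Before pointing VSX at the keepalive addresses, it's worth a quick reachability check over the new VRF. On AOS-CX you can source a ping from a named VRF (exact syntax may vary slightly between releases):

CORE01# ping 192.168.168.2 vrf KEEPALIVE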


Define the Keepalive link

Note – Remember to specify the VRF in the keepalive statement

Thanks /u/illumynite for pointing that out

CORE01#

vsx
    inter-switch-link lag 256
    role primary
    keepalive peer 192.168.168.2 source 192.168.168.1 vrf KEEPALIVE
    linkup-delay-timer 30

-----------------------------------------

CORE02#

vsx
    inter-switch-link lag 256
    role secondary
    keepalive peer 192.168.168.1 source 192.168.168.2 vrf KEEPALIVE
    linkup-delay-timer 30
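
Once the keepalive peers are defined on both switches, you can verify the keepalive session state. On 10.06.x a command along these lines should report the keepalive as established (check the command reference for your release):

CORE01# show vsx status keepalive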

Next up…….

  • VSX MC-LAG
  • VSX Active forwarding
  • VSX Active gateway

References

AOS-CX 10.06 Virtual Switching Extension (VSX) Guide

As always, if you notice any mistakes, please let me know in the comments.

Vagrant Ansible LAB Guide – Bridged network

Here's a quick guide to get you started with an “Ansible core lab” using Vagrant.

Alright, let's get started.

TLDR Version

  • Install Vagrant
  • Install VirtualBox
  • Create a project folder, cd into it, and run vagrant init
  • Vagrantfile – link
  • Vagrant provisioning shell script to deploy Ansible – link
  • Install the vagrant-vbguest plugin to handle missing guest additions: vagrant plugin install vagrant-vbguest
  • Bring up the Vagrant environment: vagrant up

Install Vagrant and Virtual box

For this demo, we are using Windows 10 1909, but you can use the same guide for macOS.

Windows

Download Vagrant and VirtualBox and install them the good ol' way:

https://www.vagrantup.com/downloads.html

https://www.virtualbox.org/wiki/Downloads

https://www.vagrantmanager.com/downloads/

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

Or using Chocolatey:

choco install vagrant
choco install virtualbox
choco install vagrant-manager

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

macOS – using Homebrew Cask

Install VirtualBox

$ brew cask install virtualbox

Now install Vagrant, either from the website or using Homebrew.

$ brew cask install vagrant

Vagrant-Manager is a nice way to manage all your virtual machines in one place directly from the menu bar.

$ brew cask install vagrant-manager

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest
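
On any of these platforms, you can confirm the plugin is registered with:

vagrant plugin list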

Setup the Vagrant Environment

Open PowerShell

To get started, let's check our environment:

vagrant version

Create a project directory and Initialize the environment

For the project directory, I'm using D:\vagrant

Open PowerShell and run:

mkdir D:\vagrant
cd D:\vagrant

Initialize the environment under the project folder

vagrant init

This will create two items:

.vagrant – a hidden folder holding machine state and metadata

Vagrantfile – Vagrant config file

Let's create the Vagrantfile to deploy the VMs

https://www.vagrantup.com/docs/vagrantfile/

The syntax of Vagrantfiles is Ruby, which gives us a lot of flexibility to program logic into the file.

I'm using Atom to edit the Vagrantfile.

Vagrant.configure("2") do |config|
  config.vm.define "controller" do |controller|
    controller.vm.box      = "ubuntu/trusty64"
    controller.vm.hostname = "LAB-Controller"
    controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.120"
    controller.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
    end
    controller.vm.provision :shell, path: 'Ansible_LAB_setup.sh'
  end

  (1..3).each do |i|
    config.vm.define "vls-node#{i}" do |node|
      node.vm.box      = "ubuntu/trusty64"
      node.vm.hostname = "vls-node#{i}"
      node.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.12#{i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = "1024"
      end
    end
  end
end

You can grab the code from my repo:

https://github.com/malindarathnayake/Ansible_Vagrant_LAB/blob/master/Vagrantfile

Let's talk a little bit about this code and unpack it.

Vagrant API version

Vagrant uses API versions for its configuration file; this is how it stays backward compatible. In every Vagrantfile, we need to specify which version to use. The current one is version 2, which works with Vagrant 1.1 and up.

Provisioning the Ansible VM

This will

  • Provision the controller Ubuntu VM
  • Create a bridged network adapter
  • Set the host-name – LAB-Controller
  • Set the static IP – 172.17.10.120/24
  • Run the Shell script that installs Ansible using apt-get install (We will get to this below)

Let's start digging in…

Specifying the Controller VM Name, base box and hostname

Vagrant uses a base image to clone a virtual machine quickly. These base images are known as “boxes” in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.

You can find different base boxes from app.vagrantup.com

Or you can create custom base boxes for pretty much anything, including “Cisco VIRL (CML)” images – keep an eye out for the next article on this.

Network configurations

controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "your IP"

In this case, we are asking it to create a bridged adapter using the Intel(R) I211 NIC and set the IP address defined under the ip attribute.

You can find the relevant interface name using:

get-netadapter

You can also create a host-only private network

controller.vm.network :private_network, ip: "10.0.0.10"

For more info, check out the networking section in the KB:

https://www.vagrantup.com/docs/networking/

Define the provider and VM resources

We're declaring VirtualBox (which we installed earlier) as the provider and setting the VM memory to 2048 MB.

You can get more granular with this; refer to the KB below:

https://www.vagrantup.com/docs/virtualbox/configuration.html

Define the shell script to customize the VM config and install the Ansible Package

Now this is where we define the provisioning shell script.

This script installs Ansible and sets the /etc/hosts entries to make your life easier.

In case you are wondering, VLS stands for Virtual Linux Server.

I use this naming scheme for my VMs. Feel free to use anything you want; just make sure it matches what you defined in the Vagrantfile under node.vm.hostname.

#!/bin/bash
# Install Ansible from the PPA and add the lab host entries
sudo apt-get update
sudo apt-get install software-properties-common -y
sudo apt-add-repository ppa:ansible/ansible -y
sudo apt-get update
sudo apt-get install ansible -y
echo "
172.17.10.120 LAB-controller
172.17.10.121 vls-node1
172.17.10.122 vls-node2
172.17.10.123 vls-node3" >> /etc/hosts

Create this file and save it as Ansible_LAB_setup.sh in the project folder.

In this case, I'm saving it under D:\vagrant

You can also do this inline with a script block instead of using a separate file

https://www.vagrantup.com/docs/provisioning/basic_usage.html
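
For reference, a minimal inline equivalent would look roughly like this (a trimmed sketch; in practice you would inline the same commands as Ansible_LAB_setup.sh):

controller.vm.provision "shell", inline: <<-SHELL
  sudo apt-get update
  sudo apt-get install ansible -y
SHELL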

Provisioning the Member servers for the lab

We covered most of this code above; the only difference here is that we are using Ruby's each method to create 3 VMs from the same template (I'm lazy, and it's more convenient).

This will create three Ubuntu VMs with the following hostnames and IP addresses. You should update these values to match your LAN, or use a private adapter.

vls-node1 – 172.17.10.121

vls-node2 – 172.17.10.122

vls-node3 – 172.17.10.123

So now that we are done with explaining the code, let’s run this

Building the Lab environment using Vagrant

Issue the following command to make sure the Vagrantfile loads and to list the machines it defines:

vagrant status
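
If your Vagrant version includes it, vagrant validate checks the Vagrantfile for errors directly without touching any VMs:

vagrant validate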

Issue the following command to bring up the environment:

vagrant up

If you get an error about hardware virtualization not being available, reboot into UEFI/BIOS and make sure virtualization is enabled:

Intel – VT-x

AMD Ryzen – SVM

If everything is kumbaya, you will see Vagrant firing up the deployment.

It will provision 4 VMs as we specified.

Notice that since we have the “vagrant-vbguest” plugin installed, it will reinstall the relevant guest tools along with their dependencies for the OS:

==> vls-node3: Machine booted and ready!
[vls-node3] No Virtualbox Guest Additions installation found.
rmmod: ERROR: Module vboxsf is not currently loaded
rmmod: ERROR: Module vboxguest is not currently loaded
Reading package lists...
Building dependency tree...
Reading state information...
Package 'virtualbox-guest-x11' is not installed, so not removed
The following packages will be REMOVED:
  virtualbox-guest-utils*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5799 kB disk space will be freed.
(Reading database ... 61617 files and directories currently installed.)
Removing virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for man-db (2.8.7-3) ...
(Reading database ... 61604 files and directories currently installed.)
Purging configuration files for virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for systemd (242-7ubuntu3.7) ...
Reading package lists...
Building dependency tree...
Reading state information...
linux-headers-5.3.0-51-generic is already the newest version (5.3.0-51.44).
linux-headers-5.3.0-51-generic set to manually installed.

Check the status:

vagrant status

Testing

Connecting via SSH to your VMs

vagrant ssh controller

“controller” is the VM name we defined earlier (not the hostname). You can find it by running vagrant status in PowerShell or your terminal.

We are going to connect to our controller and check that everything is in place.
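
A few quick checks once you're on the controller (just example commands; the IP matches the Vagrantfile above):

ansible --version     # the provisioning script should have installed Ansible
ip addr show          # the bridged NIC should have 172.17.10.120
cat /etc/hosts        # the lab host entries should be appended at the end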

A little bit more information on the networking side:

Vagrant adds two interfaces to each VM.

NIC 1 – NAT'd to the host (the control plane Vagrant uses to manage the VMs)

NIC 2 – the bridged adapter we provisioned in the Vagrantfile with the IP address

The default route is set via the private (NAT'd) interface (you can't change it).
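
You can see this from inside any of the VMs; 10.0.2.2 is the usual VirtualBox NAT gateway:

ip route show   # expect the default route via 10.0.2.2 (the NAT'd management NIC)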

Netplan configs

Vagrant creates a custom netplan YAML file for the interface configs (on boxes that use netplan).


Destroy/Tear-down the environment

vagrant destroy -f

https://www.vagrantup.com/intro/getting-started/teardown.html

I hope this helped someone. When I started with Vagrant a few years back, it took me a few tries to figure out the system and the logic behind it; this should give you a basic understanding of how things are plugged together.

Let me know in the comments if you see any issues or mistakes.

Until Next time…..