Vagrant Ansible LAB Guide – Bridged network

Here’s a quick guide to get you started with an “Ansible core lab” using Vagrant.

Alright, let’s get started.

TL;DR Version

  • Install Vagrant
  • Install VirtualBox
  • Create a project folder, cd into it, and run
vagrant init
  • Vagrantfile – link
  • Vagrant Provisioning Shell Script to Deploy Ansible – link
  • Install the vagrant-vbguest plugin to deploy the missing guest additions
vagrant plugin install vagrant-vbguest
  • Bring up the Vagrant environment
vagrant up

Install Vagrant and VirtualBox

For this demo we are using Windows 10 1909, but you can use the same guide for macOS.

Windows

Download Vagrant and VirtualBox and install them the good ol’ way:

https://www.vagrantup.com/downloads.html

https://www.virtualbox.org/wiki/Downloads

https://www.vagrantmanager.com/downloads/

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

Or using Chocolatey:

choco install vagrant
choco install virtualbox
choco install vagrant-manager

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

macOS – using Homebrew Cask

Install VirtualBox:

$ brew cask install virtualbox

Now install Vagrant, either from the website or using Homebrew:

$ brew cask install vagrant

Vagrant-Manager is a nice way to manage all your virtual machines in one place directly from the menu bar.

$ brew cask install vagrant-manager

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

Set up the Vagrant Environment

Open PowerShell

To get started, let’s check our environment:

vagrant version

Create a project directory and initialize the environment

For the project directory I’m using D:\vagrant.

Open PowerShell and run:

mkdir D:\vagrant
cd D:\vagrant

Initialize the environment under the project folder

vagrant init

This will create two items:

.vagrant – hidden folder holding base machines and metadata

Vagrantfile – the Vagrant config file
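
The generated Vagrantfile is mostly commented-out examples wrapped around a minimal skeleton, roughly along these lines (shown here just for orientation; we will replace it entirely in the next step):

Vagrant.configure("2") do |config|
  config.vm.box = "base"
end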

Let’s create the Vagrantfile to deploy the VMs.

https://www.vagrantup.com/docs/vagrantfile/

Vagrantfiles use Ruby syntax, which gives us a lot of flexibility to program logic in when building our files.

I’m using Atom to edit the Vagrantfile.

Vagrant.configure("2") do |config|
  config.vm.define "controller" do |controller|
    controller.vm.box = "ubuntu/trusty64"
    controller.vm.hostname = "LAB-Controller"
    controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.120"
    controller.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
    end
    controller.vm.provision :shell, path: 'Ansible_LAB_setup.sh'
  end
  (1..3).each do |i|
    config.vm.define "vls-node#{i}" do |node|
      node.vm.box = "ubuntu/trusty64"
      node.vm.hostname = "vls-node#{i}"
      node.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.12#{i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = "1024"
      end
    end
  end
end

You can grab the code from my Repo

https://github.com/malindarathnayake/Ansible_Vagrant_LAB/blob/master/Vagrantfile

Let’s talk a little bit about this code and unpack it.

Vagrant API version

Vagrant uses API versions for its configuration file; this is how it stays backward compatible. So in every Vagrantfile we need to specify which version to use. The current one is version 2, which works with Vagrant 1.1 and up.

Provisioning the Ansible VM

This will:

  • Provision the controller Ubuntu VM
  • Create a bridged network adapter
  • Set the hostname – LAB-Controller
  • Set the static IP – 172.17.10.120/24
  • Run the shell script that installs Ansible using apt-get (we will get to this below)

Let’s start digging in…

Specifying the Controller VM Name, base box and hostname

Vagrant uses a base image to clone a virtual machine quickly. These base images are known as “boxes” in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.
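
Here are the relevant lines from the Vagrantfile above – the base box and the hostname for the controller VM:

controller.vm.box = "ubuntu/trusty64"
controller.vm.hostname = "LAB-Controller"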

You can find different base boxes from app.vagrantup.com

Or you can create custom base boxes for pretty much anything, including Cisco VIRL (CML) images – keep an eye out for the next article on this.

Network configurations

controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "your IP"

In this case, we are asking it to create a bridged adapter using the Intel(R) I211 NIC and to set the IP address defined under the ip attribute.

You can find the relevant interface name using:

Get-NetAdapter

You can also create a host-only private network

controller.vm.network :private_network, ip: "10.0.0.10"

For more info, check out the networking section in the KB:

https://www.vagrantup.com/docs/networking/

Define the provider and VM resources

We are declaring VirtualBox (which we installed earlier) as the provider and setting the VM memory to 2048 MB.

You can get more granular with this; refer to the KB below:

https://www.vagrantup.com/docs/virtualbox/configuration.html
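
For example, a minimal sketch with a couple of extra VirtualBox provider options – cpus and the display name shown in the VirtualBox GUI (the values here are just placeholders, not what the lab uses):

controller.vm.provider "virtualbox" do |vb|
  vb.name   = "LAB-Controller"   # display name in the VirtualBox GUI
  vb.memory = "2048"             # RAM in MB
  vb.cpus   = 2                  # virtual CPU count
end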

Define the shell script to customize the VM config and install the Ansible Package

Now this is where we define the provisioning shell script

This script installs Ansible and sets the hosts file entries to make your life easier.

In case you are wondering, VLS stands for V – virtual, L – Linux, S – server.

I use this naming scheme for my VMs. Feel free to use anything you want; just make sure it matches what you defined in the Vagrantfile under node.vm.hostname.

#!/bin/bash
# Install Ansible from the official PPA
sudo apt-get update
sudo apt-get install software-properties-common -y
sudo apt-add-repository ppa:ansible/ansible -y
sudo apt-get update
sudo apt-get install ansible -y
# Add static host entries for the lab VMs
echo "
172.17.10.120 LAB-controller
172.17.10.121 vls-node1
172.17.10.122 vls-node2
172.17.10.123 vls-node3" >> /etc/hosts

Create this file and save it as Ansible_LAB_setup.sh in the project folder.

In this case I’m going to save it under D:\vagrant.

You can also do this inline with a script block instead of using a separate file

https://www.vagrantup.com/docs/provisioning/basic_usage.html
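
For example, a trimmed inline sketch of the same idea (the full Ansible_LAB_setup.sh above is what the lab actually uses; the shell provisioner runs as root by default, so sudo isn’t needed here):

controller.vm.provision "shell", inline: <<-SHELL
  # Install Ansible from the official PPA
  apt-get update
  apt-get install -y software-properties-common
  apt-add-repository -y ppa:ansible/ansible
  apt-get update
  apt-get install -y ansible
SHELL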

Provisioning the Member servers for the lab

We covered most of this code above; the only difference here is that we are using Ruby’s each method to create three VMs from the same template (I’m lazy and it’s more convenient).

This will create three Ubuntu VMs with the following hostnames and IP addresses. You should update these values to match your LAN, or use a private adapter (see the sketch after this list).

vls-node1 – 172.17.10.121

vls-node2 – 172.17.10.122

vls-node3 – 172.17.10.123
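
If you would rather not bridge the nodes onto your LAN, a minimal sketch of the change inside the loop is to swap the public_network line for a host-only private network, keeping the same #{i} interpolation (the 10.0.0.x addressing is just an example):

node.vm.network :private_network, ip: "10.0.0.12#{i}"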

So now that we are done explaining the code, let’s run it.

Building the Lab environment using Vagrant

Issue the following command to check the status (this will also catch any syntax errors in the Vagrantfile):

vagrant status

Issue the following command to bring up the environment:

vagrant up

If you get an error about hardware virtualization not being available, reboot into UEFI/BIOS and make sure virtualization is enabled:

Intel – VT-x

AMD Ryzen – SVM

If everything is kumbaya, you will see Vagrant firing up the deployment.

It will provision four VMs, as we specified.

Notice that since we have the vagrant-vbguest plugin installed, it will reinstall the relevant guest additions along with the dependencies for the OS:

==> vls-node3: Machine booted and ready!
[vls-node3] No Virtualbox Guest Additions installation found.
rmmod: ERROR: Module vboxsf is not currently loaded
rmmod: ERROR: Module vboxguest is not currently loaded
Reading package lists...
Building dependency tree...
Reading state information...
Package 'virtualbox-guest-x11' is not installed, so not removed
The following packages will be REMOVED:
  virtualbox-guest-utils*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5799 kB disk space will be freed.
(Reading database ... 61617 files and directories currently installed.)
Removing virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for man-db (2.8.7-3) ...
(Reading database ... 61604 files and directories currently installed.)
Purging configuration files for virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for systemd (242-7ubuntu3.7) ...
Reading package lists...
Building dependency tree...
Reading state information...
linux-headers-5.3.0-51-generic is already the newest version (5.3.0-51.44).
linux-headers-5.3.0-51-generic set to manually installed.
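
As a side note, if you ever want to skip this guest-additions check on every boot, the vagrant-vbguest plugin exposes an auto_update flag you can turn off in the Vagrantfile (per the plugin’s documentation; I leave it enabled for this lab):

config.vbguest.auto_update = false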

Check the status:

vagrant status

Testing

Connecting via SSH to your VMs

vagrant ssh controller

“controller” is the VM name we defined in the Vagrantfile, not the hostname. You can find it by running vagrant status in PowerShell or your terminal.

We are going to connect to our controller and check everything

A little bit more information on the networking side.

Vagrant adds two interfaces for each VM:

NIC 1 – NAT’d to the host (the control plane Vagrant uses to manage the VMs)

NIC 2 – the bridged adapter we provisioned in the Vagrantfile with the IP address

The default route is set via the private (NAT’d) interface (you can’t change it).

Netplan configs

Vagrant creates a custom netplan YAML file for the interface configs.


Destroy/Tear-down the environment

vagrant destroy -f

https://www.vagrantup.com/intro/getting-started/teardown.html

I hope this helped someone. When I started with Vagrant a few years back, it took me a few tries to figure out the system and the logic behind it; hopefully this gives you a basic understanding of how things are plugged together.

Let me know in the comments if you see any issues or mistakes.

Until next time…

ASICs in Cisco Catalyst switches

Preface –

After I started working with open networking switches, I wanted to know more about the Cisco Catalyst range I work with every day.

Information on the older ASICs is very hard to find, but recently Cisco has started to talk a lot about the new chips like UADP 2.0 with the Catalyst 9K/Nexus launch. This is most likely due to the rise of disaggregated network operating systems (DNOS) such as Cumulus, PICA8, etc. forcing customers to be more aware of what’s under the hood rather than listening to and believing shiny PDF files with a laundry list of features.

The information was there but scattered all over the web; I went through Cisco Live and Tech Field Day slides/videos, interviews, partner update PDFs, press releases, whitepapers, and even LinkedIn profiles to gather it.

If you notice a mistake, please let me know.

Scope –

We are going to focus on the ASICs used in the well-known 2960-S/X/XR, the 36xx/37xx/38xx, and the new Catalyst 9K series.

Timeline

 

Summary

Cisco bought a bunch of companies to acquire the switching technology it needed, which later bloomed into the switching platforms we know today:

  • Crescendo Communications (1993) – Catalyst 5K and 6K chassis
  • Kalpana (1994) – Catalyst 3K (fun fact: they invented VLANs, which later got standardized as 802.1Q)
  • Grand Junction (1995) – Catalyst 17xx, 19xx, 28xx, 29xx
  • Granite Systems (1996) – Catalyst 4K (K series ASIC)

After the original Cisco 3750/2950 switches, the Cisco 3xxx/2xxx-G series (G for Gigabit) was released.

Next, the Cisco 3xxx-E series with enterprise management functions was released.

Later, Cisco developed the Cisco 3750-V series, an energy-saving version of the -E series, which was itself replaced by the Cisco 3750 V2 series (possibly a die shrink).

The G series and E series were later phased out and integrated into the Cisco X series, which is still being sold and supported.

In 2017-2018, Cisco released the Catalyst 9K family to replace the 2K and 3K families.

Sasquatch ASIC

From what I could find, there are two variants of this ASIC.

The initial release in 2003:

  • Fixed pipeline ASIC
  • 180-nanometer process
  • 60 million transistors

Shipped with the 10/100 3750 and 2960

Die shrink to 130 nm in 2005:

  • Fixed pipeline ASIC
  • 130-nanometer process
  • 210 million transistors

Shipped in the 2960-G/3560-G/3750-G series

I couldn’t find much info about the chip design; I will update this section when I find more.

Strider ASIC

Initially released in 2010:

  • Fixed pipeline ASIC
  • Built on a 65-nanometer process
  • 1.3 billion transistors

The Strider ASIC (circa 2010) was an improved design based on the 3750-E series and first shipped with the 2960-S family.

S88G ASIC

Later, in 2012-2013, with a die shrink to 45 nanometers, they managed to fit two ASICs on the same silicon. This shipped with the 2960-X/XR, which replaced the 2960-S:

  • Higher stack speeds and more features
  • Limited Layer 3 capabilities via the IP Lite feature set (2960-XR only)
  • Better QoS and NetFlow-Lite

Later down the line, they silently rolled the ASIC design onto a 32-nanometer process for better yields and cheaper manufacturing costs.

This switch is still being sold, with no EOL announced, as a cheaper access-layer switch.

On a side note – in 2017 Cisco released another version of the 2960 family, the 2960-L. This is a cheaper version built on a Marvell ASIC (same as the SG55x), with a web UI and a fanless design. I personally think this is the next evolution of their SMB-oriented family, the popular Cisco SG-5xx series. For the first time the 2960 family had a fairly usable and pleasant web interface for configuration and management, and the new 9K series seems to contain a more polished version of this web UI.

Unified Access Data Plane (UADP)

Due to the limitations of the fixed pipeline architecture and the costs involved in re-spinning the silicon to fix bugs, they needed something new and had three options.

As a compromise between all three, Cisco started dreaming up this programmable ASIC design in 2007-2008. The idea was to build a chip with programmable stages that could be updated with firmware instead of writing the logic into the silicon permanently.

They released the programmable ASIC technology initially in their QFP (Quantum Flow Processor) ASIC in the ISR router family to meet the needs of their customers (service providers and large enterprises).

This chip allowed them to support advanced routing technologies and new protocols without changing hardware, simply via firmware updates, improving the longevity of the investment and allowing them to make more money out of the chip’s extended life cycle.

Eventually, this technology trickled downstream and the Doppler 1.0 was born

Improvements and features in UADP 1.0/1.1/2.0

  • Programmable stages in the pipeline
  • Cisco intent-based networking support – DNA Center with ISE
  • Integrated stacking support with StackPower – the ASIC is built with pinouts for the stacking fabric, allowing faster stacking performance
  • Rapid recirculation (encapsulations such as MPLS, VXLAN)
  • TrustSec
  • Advanced on-chip QoS
  • Software-defined networking support – integrated NetFlow, SD-Access
  • Flex Parser – a programmable packet parser allowing them to introduce support for new protocols with firmware updates
  • On-chip micro-engines – highly specialized engines within the chip to perform repetitive tasks such as
    • Reassembly
    • Encryption/Decryption
    • Fragmentation
  • CAPWAP – the switch can function as a Wireless LAN Controller
    • Mobility Agent – offloads QoS and other functions from the WLC (IMO works really nicely with multi-site wireless deployments)
    • Mobility Controller – full-blown Wireless LAN Controller (WLC)
  • Extended life cycle allowed integration of Cisco security technologies such as Cisco DNA + ISE later down the line, even on the first-generation switches
  • Multigigabit and 40GE speed support
  • Advanced malware detection capabilities via packet fingerprinting

Legacy Fixed pipeline architecture

Programmable pipeline architecture

Doppler/UADP 1.0 (2013)

With the Doppler 1.0 programmable ASIC handling the data plane, coupled with a Cavium CPU for the control plane, the first generation of switches to ship with this chip were the 3650 and 3850 gigabit versions.

  • Built on a 65-nanometer process
  • 1.3 billion transistors

UADP 1.1 (2015)

  • Die shrink to 45 nanometers
  • 3 billion transistors

UADP 2.0 (2017)

  • Built on a 28nm/16nm process
  • Equipped with an Intel Xeon D (dual-core x86) CPU for the control plane
  • Open IOS-XE
  • 7.4 billion transistors

Flexible ASIC Templates –  

These allow Cisco to write templates that optimize the chip’s resources for different use cases.

The new Catalyst 9000 series will replace the following campus switching families, built on the older Strider and more recent UADP 1.0 and 1.1 ASICs:

  • Catalyst 2K —–> Catalyst 9200
  • Catalyst 3K —–> Catalyst 9300
  • Catalyst 4K —–> Catalyst 9400
  • Catalyst 6K —–> Catalyst 9500

I will update/fix this post when I find more info about UADP 2.0 and the next evolution. Stay tuned for a few more articles on the silicon used in open networking x86 chassis.