
Here’s a quick guide to get you started with an “Ansible core lab” using Vagrant.

Alright, let's get started.

TLDR Version

  • Install Vagrant
  • Install VirtualBox
  • Create a project folder and cd into it
vagrant init
  • Vagrantfile – link
  • Vagrant provisioning shell script to deploy Ansible – link
  • Install the vagrant-vbguest plugin to deploy the missing guest additions
vagrant plugin install vagrant-vbguest
  • Bring up the Vagrant environment
vagrant up

Install Vagrant and VirtualBox

For this demo we are using Windows 10 1909, but you can use the same guide for macOS.

Windows

Download Vagrant and VirtualBox and install them the good ol’ way –

https://www.vagrantup.com/downloads.html

https://www.virtualbox.org/wiki/Downloads

https://www.vagrantmanager.com/downloads/

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

Or using Chocolatey

choco install vagrant
choco install virtualbox
choco install vagrant-manager

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

macOS – using Homebrew Cask

Install VirtualBox

$ brew cask install virtualbox

Now install Vagrant, either from the website or via Homebrew.

$ brew cask install vagrant

Vagrant-Manager is a nice way to manage all your virtual machines in one place directly from the menu bar.

$ brew cask install vagrant-manager

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

Setup the Vagrant Environment

Open PowerShell

To get started, let's check our environment:

vagrant version

Create a project directory and Initialize the environment

For the project directory I'm using D:\vagrant.

Open PowerShell and run:

mkdir D:\vagrant
cd D:\vagrant

Initialize the environment under the project folder

vagrant init

This will create two items:

.vagrant – a hidden folder holding base machines and metadata

Vagrantfile – the Vagrant config file

Let's create the Vagrantfile to deploy the VMs.

https://www.vagrantup.com/docs/vagrantfile/

The syntax of Vagrantfiles is Ruby, which gives us a lot of flexibility to program logic into our files.

I'm using Atom to edit the Vagrantfile.

Vagrant.configure("2") do |config|
     config.vm.define "controller" do |controller|
                  controller.vm.box = "ubuntu/trusty64"
                  controller.vm.hostname = "LAB-Controller"
                  controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.120"
                    controller.vm.provider "virtualbox" do |vb|
                                 vb.memory = "2048"
                  end
                  controller.vm.provision :shell, path: 'Ansible_LAB_setup.sh'
   end
   (1..3).each do |i|
         config.vm.define "vls-node#{i}" do |node|
                       node.vm.box = "ubuntu/trusty64"
                       node.vm.hostname = "vls-node#{i}"
                       node.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection" ip: "172.17.10.12#{i}"
                      node.vm.provider "virtualbox" do |vb|
                                                  vb.memory = "1024"
                     end
              end
        end
end

You can grab the code from my Repo

https://github.com/malindarathnayake/Ansible_Vagrant_LAB/blob/master/Vagrantfile

Let’s talk a little bit about this code and unpack this

Vagrant API version

Vagrant uses API versions for its configuration file; this is how it stays backward compatible. So in every Vagrantfile we need to specify which version to use. The current one is version 2, which works with Vagrant 1.1 and up.

Provisioning the Ansible VM

This will

  • Provision the controller Ubuntu VM
  • Create a bridged network adapter
  • Set the host-name – LAB-Controller
  • Set the static IP – 172.17.10.120/24
  • Run the Shell script that installs Ansible using apt-get install (We will get to this below)

Let's start digging in…

Specifying the Controller VM Name, base box and hostname

Vagrant uses a base image to clone a virtual machine quickly. These base images are known as “boxes” in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.

You can find different base boxes from app.vagrantup.com

Or you can create custom base boxes for pretty much anything including “CiscoVIRL(CML)” images – keep an eye out for the next article on this
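If you want, you can pre-download the base box before the first vagrant up (optional – vagrant up fetches it automatically):

vagrant box add ubuntu/trusty64
vagrant box list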

Network configurations

controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "your IP"

In this case, we are asking it to create a bridged adapter using the Intel(R) I211 NIC and assign the IP address defined under the ip attribute.

You can find the relevant interface name using:

get-netadapter

You can also create a host-only private network

controller.vm.network :private_network, ip: "10.0.0.10"

For more info, check out the networking section in the KB:

https://www.vagrantup.com/docs/networking/

Define the provider and VM resources

We're declaring VirtualBox (which we installed earlier) as the provider and setting the VM memory to 2048 MB.

You can get more granular with this; refer to the KB below:

https://www.vagrantup.com/docs/virtualbox/configuration.html

Define the shell script to customize the VM config and install the Ansible Package

Now this is where we define the provisioning shell script

This script installs Ansible and sets the hosts file entries to make your life easier.

In case you are wondering, VLS stands for V = virtual, L = Linux, S = server.

I use this naming scheme for my VMs. Feel free to use anything you want; just make sure it matches what you defined in the Vagrantfile under node.vm.hostname.

#!/bin/bash
sudo apt-get update
sudo apt-get install software-properties-common -y
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible -y
echo "
172.17.10.120 LAB-controller
172.17.10.121 vls-node1
172.17.10.122 vls-node2
172.17.10.123 vls-node3" >> /etc/hosts

Create this file and save it as Ansible_LAB_setup.sh in the project folder.

In this case I'm going to save it under D:\vagrant.

You can also do this inline with a script block instead of using a separate file; a sketch follows below.

https://www.vagrantup.com/docs/provisioning/basic_usage.html
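For example, here's a minimal sketch of the inline form, mirroring the package steps from the script above inside the controller block:

controller.vm.provision "shell", inline: <<-SHELL
  apt-get update
  apt-get install -y software-properties-common
  apt-add-repository ppa:ansible/ansible
  apt-get update
  apt-get install -y ansible
SHELL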

Provisioning the Member servers for the lab

We covered most of the code above; the only difference here is that we are using the each method to create three VMs from the same template (I'm lazy and it's more convenient).

This will create three Ubuntu VMs with the following hostnames and IP addresses. You should update these values to match your LAN, or use a private adapter.

vls-node1 – 172.17.10.121

vls-node2 – 172.17.10.122

vls-node3 – 172.17.10.123

So now that we are done explaining the code, let's run it.

Building the Lab environment using Vagrant

Issue the following command to check your syntax:

vagrant status

Issue the following command to bring up the environment:

vagrant up

If you get this message, reboot into UEFI/BIOS and make sure virtualization is enabled:

Intel – VT-x

AMD Ryzen – SVM

If everything is kumbaya, you will see Vagrant firing up the deployment.

It will provision 4 VMs as we specified

Notice that since we have the “vagrant-vbguest” plugin installed, it reinstalls the relevant guest tools along with the dependencies for the OS:

==> vls-node3: Machine booted and ready!
[vls-node3] No Virtualbox Guest Additions installation found.
rmmod: ERROR: Module vboxsf is not currently loaded
rmmod: ERROR: Module vboxguest is not currently loaded
Reading package lists...
Building dependency tree...
Reading state information...
Package 'virtualbox-guest-x11' is not installed, so not removed
The following packages will be REMOVED:
  virtualbox-guest-utils*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5799 kB disk space will be freed.
(Reading database ... 61617 files and directories currently installed.)
Removing virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for man-db (2.8.7-3) ...
(Reading database ... 61604 files and directories currently installed.)
Purging configuration files for virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for systemd (242-7ubuntu3.7) ...
Reading package lists...
Building dependency tree...
Reading state information...
linux-headers-5.3.0-51-generic is already the newest version (5.3.0-51.44).
linux-headers-5.3.0-51-generic set to manually installed.

Check the status

vagrant status

Testing

Connecting via SSH to your VMs

vagrant ssh controller

“controller” is the VM name we defined before, not the hostname. You can find it by running vagrant status in PowerShell or your terminal.

We are going to connect to our controller and check that everything works.
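A few quick sanity checks you might run once connected (values match this lab's Vagrantfile and provisioning script):

hostname              # should return LAB-Controller
ip a                  # the bridged NIC should hold 172.17.10.120
ansible --version     # confirms the provisioning script installed Ansible
ping -c 2 vls-node1   # resolves via the /etc/hosts entries the script added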

Little bit more information on the networking side

Vagrant adds two interfaces for each VM:

NIC 1 – NAT'd into the host (the control plane Vagrant uses to manage the VMs)

NIC 2 – the bridged adapter we provisioned in the Vagrantfile with the IP address

The default route is set via the private (NAT'd) interface (you can't change it).

Netplan configs

Vagrant creates a custom netplan YAML for the interface configs.
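On boxes recent enough to ship netplan, you can peek at the generated config; the file name below is the one Vagrant typically writes, though it may vary by box:

cat /etc/netplan/50-vagrant.yaml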


Destroy/Tear-down the environment

vagrant destroy -f

https://www.vagrantup.com/intro/getting-started/teardown.html

I hope this helped someone. When I started with Vagrant a few years back, it took me a few tries to figure out the system and the logic behind it; this should give you a basic understanding of how things are plugged together.

Let me know in the comments if you see any issues or mistakes.

Until Next time…..

Recently we had a requirement to check SMTP on two different servers and run a script if both servers failed. I googled around for a tool but ended up putting together this script.

It's not the prettiest, but it works, and I'm sure you guys will make something much better out of it.

# Define the host names here for the servers that need to be monitored
$servers = "relay1.host.com","relay2.host.com"
# Define port number
$tcp_port = "25"

# Loop through each host to get an individual result.
ForEach($srv in $servers) {

    $tcpClient = New-Object System.Net.Sockets.TCPClient

    # Connect() throws if the host is unreachable, so wrap it
    # to keep the output clean instead of dumping a red error.
    Try {
        $tcpClient.Connect($srv,$tcp_port)
    }
    Catch {}

    If($tcpClient.Connected) {
        Write-Host "$srv is online"
    }
    Else {
        Write-Host "$srv is offline"
    }

    $tcpClient.Dispose()
}
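Since the original requirement was to run a script when both servers failed, here's a hedged sketch of how you could bolt that on (the action path is a placeholder; swap in whatever remediation you need):

# Hypothetical extension: count failures and act when every relay is down.
$failed = 0
ForEach($srv in $servers) {
    $tcpClient = New-Object System.Net.Sockets.TCPClient
    Try { $tcpClient.Connect($srv,$tcp_port) } Catch {}
    If(-not $tcpClient.Connected) { $failed++ }
    $tcpClient.Dispose()
}

If($failed -eq $servers.Count) {
    # Placeholder: replace with your own alert/remediation script
    & "C:\Scripts\BothRelaysDown.ps1"
}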

If something is wrong or you think there is a better way, please feel free to comment and let everyone know. It's all about community, after all.

Update 4/18/2016 –

Updated the script with the one provided by Donald Gray – Thanks a lot : )

Hello internetzzz

As an administrator, you might run into situations that require you to deploy UI customizations such as a customized Ribbon or Quick Access Toolbar for Office applications on user computers, or in my case, terminal servers.

Here is a quick and dirty guide on how to do this via Group Policy.

For instance, let's say we have to deploy a button that launches a third-party productivity program within Outlook and MS Word.

First off, make the necessary changes to Outlook or Word on a client PC running MS Office.

To customize the Ribbon

  • On the File tab, click Options, and then click Customize Ribbon to open the Ribbon customization dialog.

To customize the Quick Access Toolbar

  • On the File tab, click Options, and then click Quick Access Toolbar to open the Quick Access Toolbar customization dialog.

You can also export your Ribbon and Quick Access Toolbar customizations into a file.

 

When we make changes to the default Ribbon, these user customizations are saved as .officeUI files under:

%localappdata%\Microsoft\Office

The file names differ according to the Office program and the portion of the Ribbon UI you customized.

Application | Description of Ribbon File | .officeUI File Name
Outlook 2010 | Outlook Explorer | olkexplorer.officeUI
Outlook 2010 | Contact | olkaddritem.officeUI
Outlook 2010 | Appointment/Meeting (organizer on compose, organizer after compose, attendee) | olkapptitem.officeUI
Outlook 2010 | Contact Group (formerly known as Distribution List) | olkdlstitem.officeUI
Outlook 2010 | Journal Item | olklogitem.officeUI
Outlook 2010 | Mail Compose | olkmailitem.officeUI
Outlook 2010 | Mail Read | olkmailread.officeUI
Outlook 2010 | Multimedia Message Compose | olkmmsedit.officeUI
Outlook 2010 | Multimedia Message Read | olkmmsread.officeUI
Outlook 2010 | Received Meeting Request | olkmreqread.officeUI
Outlook 2010 | Forward Meeting Request | olkmreqsend.officeUI
Outlook 2010 | Post Item Compose | olkpostitem.officeUI
Outlook 2010 | Post Item Read | olkpostread.officeUI
Outlook 2010 | NDR | olkreportitem.officeUI
Outlook 2010 | Send Again Item | olkresenditem.officeUI
Outlook 2010 | Counter Response to a Meeting Request | olkrespcounter.officeUI
Outlook 2010 | Received Meeting Response | olkresponseread.officeUI
Outlook 2010 | Edit Meeting Response | olkresponsesend.officeUI
Outlook 2010 | RSS Item | olkrssitem.officeUI
Outlook 2010 | Sharing Item Compose | olkshareitem.officeUI
Outlook 2010 | Sharing Item Read | olkshareread.officeUI
Outlook 2010 | Text Message Compose | olksmsedit.officeUI
Outlook 2010 | Text Message Read | olksmsread.officeUI
Outlook 2010 | Task Item (Task/Task Request, etc.) | olktaskitem.officeUI
Access 2010 | Access Ribbon | Access.officeUI
Excel 2010 | Excel Ribbon | Excel.officeUI
InfoPath 2010 | InfoPath Designer Ribbon | IPDesigner.officeUI
InfoPath 2010 | InfoPath Editor Ribbon | IPEditor.officeUI
OneNote 2010 | OneNote Ribbon | OneNote.officeUI
PowerPoint | PowerPoint Ribbon | PowerPoint.officeUI
Project 2010 | Project Ribbon | MSProject.officeUI
Publisher 2010 | Publisher Ribbon | Publisher.officeUI
*SharePoint 2010 | SharePoint Workspaces Ribbon | GrooveLB.officeUI
*SharePoint 2010 | SharePoint Workspaces Ribbon | GrooveWE.officeUI
SharePoint Designer 2010 | SharePoint Designer Ribbon | spdesign.officeUI
Visio 2010 | Visio Ribbon | Visio.officeUI
Word 2010 | Word Ribbon | Word.officeUI

You can take these files and push them via Group Policy using a simple startup script:

@echo off
setlocal
set userdir=%localappdata%\Microsoft\Office
set remotedir=\\MyServer\LogonFiles\public\OfficeUI
for %%r in (Word Excel PowerPoint) do if not exist "%userdir%\%%r.officeUI" copy "%remotedir%\%%r.officeUI" "%userdir%\%%r.officeUI"
endlocal

A basic script to copy .officeUI files from a network share into the user’s local AppData directory, if no .officeUI file currently exists there.
Can easily be modified to use the roaming AppData directory (replace %localappdata% with %appdata%) or to include additional ribbon customizations.

 

Managing Office suite settings via Group Policy

Download and import the ADM templates into the Group Policy Object Editor.
This will allow you to manage settings (security, UI-related options, Trust Center, etc.) on Office 2010 using GPO.

Download Office 2010 Administrative Template files (ADM, ADMX/ADML)

Hopefully, this will be helpful to someone.
Until next time, ciao!

During an Office 365 migration in a hybrid environment with AAD Connect, I ran into the following scenario:

  • Hybrid Co-Existence Environment with AAD-Sync
  • User [email protected] has a mailbox on-premises. Jon is represented as a Mail User in the cloud with an office 365 license
  • [email protected] had a cloud-only mailbox before the initial AD sync was run
  • A user account is registered as a mail user and has a valid license attached
  • During the Office 365 remote mailbox move, we end up with the following error during validation; removing the immutable ID and remapping to the on-premises account won't fix the issue:
Target user 'Sam fisher' already has a primary mailbox.
+ CategoryInfo : InvalidArgument: (tsu:MailboxOrMailUserIdParameter) [New-MoveRequest], RecipientTaskException
+ FullyQualifiedErrorId : [Server=Pl-EX001,RequestId=19e90208-e39d-42bc-bde3-ee0db6375b8a,TimeStamp=11/6/2019 4:10:43 PM] [FailureCategory=Cmdlet-RecipientTaskException] 9418C1E1,Microsoft.Exchange.Management.Migration.MailboxRep
lication.MoveRequest.NewMoveRequest
+ PSComputerName : Pl-ex001.Paladin.org

It turns out this happens due to an unclean cloud object in MSOL. Exchange Online keeps pointers indicating that there used to be a mailbox in the cloud for this user.

Option 1 (nuclear option)

The fix here is to delete the *MSOL user object* for Sam and re-sync it from on-premises. This would delete [email protected] from the cloud – but it deletes the user from all workloads, not only Exchange. This is problematic because Sam is already using Teams, OneDrive, and SharePoint.

Option 2

Clean up only the office 365 mailbox pointer information

PS C:\> Set-User [email protected] -PermanentlyClearPreviousMailboxInfo 
Confirm
Confirm
Are you sure you want to perform this action?
Delete all existing information about user "[email protected]"?. This operation will clear existing values from
Previous home MDB and Previous Mailbox GUID of the user. After deletion, reconnecting to the previous mailbox that
existed in the cloud will not be possible and any content it had will be unrecoverable PERMANENTLY. Do you want to
continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): a

Executing this leaves you with a clean object without the duplicate-mailbox problem.

In some cases, when you run this command you will get the following output:

 “Command completed successfully, but no user settings were changed.”

If this happens

Remove the license from the user temporarily and run the command to remove the previous mailbox data.

Then you can re-add the license.

 

This guide will walk you through how to extend and increase space for the root filesystem on an AlmaLinux, CentOS, or RHEL server/desktop/VM.

Method A – Expanding the current disk

Edit the VM and add space to the disk.

Install the cloud-utils-growpart package, as the growpart command in it makes it really easy to extend partitioned virtual disks.

sudo dnf install cloud-utils-growpart

Verify that the VM’s operating system recognizes the new, increased size of the sda virtual disk, using lsblk or fdisk -l:

sudo fdisk -l
Notes -
Note down the disk ID and the partition number of the Linux LVM partition - in this demo the disk ID is sda and the LVM partition is number 3 (sda3)

Let's trigger a rescan of the block devices (disks):

#elevate to root
sudo su 

#trigger a rescan, Make sure to match the disk ID you noted down before 
echo 1 > /sys/block/sda/device/rescan
exit

Now sudo fdisk -l shows the correct size of the disks

Use growpart to increase the partition size of the LVM partition:

sudo growpart /dev/sda 3

Confirm the volume group name

sudo vgs

Extend the logical volume

sudo lvextend -l +100%FREE /dev/almalinux/root

Grow the file system size

sudo xfs_growfs /dev/almalinux/root
Notes -
You can use these same steps to add space to other partitions such as home or swap if needed

Method B – Adding a second disk to the LVM and expanding space

Why add a second disk?
Maybe the current disk is locked due to a snapshot and you can't remove it; the only solution would be to add a second disk.

Check the current space available

sudo df -h 
Notes -
If you have 0% (~1 MB) left on cs-root, Tab auto-completion and some of the later commands won't work. You should free up at least 4-10 MB by clearing log files, temp files, etc. (a few examples below)
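For instance, a few hedged ways to claw back a little space (double-check what you're deleting first):

sudo journalctl --vacuum-size=50M   # shrink the systemd journal
sudo rm -f /var/log/*.gz            # drop already-rotated, compressed logs
sudo dnf clean all                  # clear the package manager cache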

Attach an additional disk to the VM (assuming this is a VM) and make sure the disk is visible at the OS level:

sudo lvmdiskscan

OR

sudo fdisk -l

Confirm the volume group name

sudo vgs

Let's increase the space.

First let's initialize the new disk we attached:

sudo mkfs.xfs /dev/sdb

Create the Physical volume

sudo pvcreate /dev/sdb

Extend the volume group:

sudo vgextend cs /dev/sdb
  Volume group "cs" successfully extended


Extend the logical volume

sudo lvextend -l +100%FREE /dev/cs/root

Grow the file system size

sudo xfs_growfs /dev/cs/root

Confirm the changes

sudo df -h

Just making it easy for us!!

#Method A - Expanding the current disk 
#AlmaLinux
sudo dnf install cloud-utils-growpart

sudo lvmdiskscan
sudo fdisk -l                          #note down the disk ID and partition num


sudo su                                #elevate to root
echo 1 > /sys/block/sda/device/rescan  #trigger a rescan
exit                                   #exit root shell

sudo growpart /dev/sda 3               #grow the LVM partition
sudo vgs                               #confirm the volume group name
sudo lvextend -l +100%FREE /dev/almalinux/root
sudo xfs_growfs /dev/almalinux/root
sudo df -h

#Method B - Adding a second Disk 
#CentOS

sudo lvmdiskscan
sudo fdisk -l
sudo vgs
sudo mkfs.xfs /dev/sdb
sudo pvcreate /dev/sdb
sudo vgextend cs /dev/sdb
sudo lvextend -l +100%FREE /dev/cs/root
sudo xfs_growfs /dev/cs/root
sudo df -h

#AlmaLinux

sudo lvmdiskscan
sudo fdisk -l
sudo vgs
sudo mkfs.xfs /dev/sdb
sudo pvcreate /dev/sdb
sudo vgextend almalinux /dev/sdb
sudo lvextend -l +100%FREE /dev/almalinux/root
sudo xfs_growfs /dev/almalinux/root
sudo df -h

I wanted to set up a ZTP solution for UniFi hardware – so I can direct-ship equipment to the site and provision it securely over the internet without standing up an L2L tunnel – and found a way to navigate the Cloud Key issues along the way.

Alright, let's get started…

This guide is applicable to any Ubuntu-based install, but I'm going to use Amazon Lightsail for the demo since, at the time of writing, it's the cheapest option I can find with enough compute resources and a static IP included.

2 GB RAM, 1 vCPU, 60 GB SSD

OpEx (recurring cost) – $10 per month – as of February 2019

Guide

Dry Run

1. Set up Lightsail instance
2. Create and attach static IP
3. Open necessary ports
4. Set up UniFi packages
5. Set up SSL using certbot and Let's Encrypt
6. Add the certs to the UniFi controller
7. Set up a cronjob for SSL auto-renewal
8. Adopt UniFi devices

1. Set up LightSail instance

Log in to – https://lightsail.aws.amazon.com

Spin up a Lightsail instance:

Set a name for the instance and provision it.

2. Create and attach static IP

Click on the instance name and click on the networking tab:

Click “Create Static IP”:

3. Open necessary ports

TCP or UDP | Port Number | Usage
TCP | 80 | Port used for the inform URL during adoption.
TCP | 443 | Port used for the Cloud Access service.
UDP | 3478 | Port used for STUN.
TCP | 8080 | Port used for device and controller communication.
TCP | 8443 | Port used for the controller GUI/API as seen in a web browser.
TCP | 8880 | Port used for HTTP portal redirection.
TCP | 8843 | Port used for HTTPS portal redirection.

You can disable or lock down the ports as needed using iptables, depending on your security posture.
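For example, here's a minimal iptables sketch that limits the controller GUI/API (8443) to a single admin IP – 203.0.113.10 is a placeholder, and you'd want to fold this into your existing rule set:

# Allow only the admin IP to reach the controller GUI/API
sudo iptables -A INPUT -p tcp -s 203.0.113.10 --dport 8443 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8443 -j DROP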

Post spotlight –

https://www.lammertbies.nl/comm/info/iptables.html#intr

4. Set up UniFi packages

https://help.ubnt.com/hc/en-us/articles/209376117-UniFi-Install-a-UniFi-Cloud-Controller-on-Amazon-Web-Services#7

Add the Ubiquiti repository to /etc/apt/sources.list:

echo "deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti" | sudo tee -a /etc/apt/sources.list

Add the Ubiquiti GPG key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50

Update the server’s repository information:

sudo apt-get update

Install the Java 8 runtime

You need Java Runtime 8 to run the UniFi controller.

Add Oracle's PPA (Personal Package Archive) to your list of sources so that Ubuntu knows where to check for updates, using the add-apt-repository command:

sudo add-apt-repository ppa:webupd8team/java -y
sudo apt install java-common oracle-java8-installer

Update your package repository by issuing the following command:

sudo apt-get update

The oracle-java8-set-default package will automatically set Oracle JDK 8 as the default. Once the installation is complete, we can check the Java version:

java -version

java version "1.8.0_191"

MongoDB

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6

echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
Update. Retrieve the latest package information:

sudo apt update

sudo apt-get install apt-transport-https

Install UniFi Controller packages.

sudo apt install unifi

You should be able to access the web interface and go through the initial setup wizard:

https://yourIPaddress:8443

5. Set up SSL using certbot and letsencrypt

Let's get that green lock up in here, shall we?

So, a few things to note here… UniFi doesn't really have a straightforward way to import certificates; you have to use the Java keystore commands to import the cert. But there is a very handy script built by Steve Jenkins that makes this super easy.

First, we need to request a cert and have it signed by the Let's Encrypt certificate authority.

Let's start by adding the repository and installing the EFF certbot package – link

sudo apt-get update
sudo apt-get install software-properties-common 
sudo add-apt-repository universe 
sudo add-apt-repository ppa:certbot/certbot 
sudo apt-get update 
sudo apt-get install certbot

5.1 Update/add your DNS record and make sure it's propagated (this is important)

Note - The DNS name should point to the static IP we attached to our Lightsail instance
I'm going to use the following A record for this example

unifyctrl01.multicastbits.com

Ping from the controller and make sure the server can resolve it.

ping unifyctrl01.multicastbits.com


You won't see any echo replies because ICMP is not allowed in the AWS firewall rules - leave it as is; we just need the server to see the DNS A record resolving to the IP

5.2 Request the certificate

Issue the following command to start certbot in certonly mode

sudo certbot certonly
usage: 
  certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
certificate. The most common SUBCOMMANDS and flags are:

obtain, install, and renew certificates:
    (default) run   Obtain & install a certificate in your current webserver
    certonly        Obtain or renew a certificate, but do not install it
    renew           Renew all previously obtained certificates that are near expiry
    enhance         Add security enhancements to your existing configuration
   -d DOMAINS       Comma-separated list of domains to obtain a certificate for

 

5.3 Follow the wizard

Select the first option #1 (Spin up a temporary web server)

Enter all the information requested for the cert request.

This will save the generated certificate and private key to the following directory:

/etc/letsencrypt/live/DNSName/

All you need to worry about are these files:

  • cert.pem
  • fullchain.pem
  • privkey.pem
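To double-check what certbot dropped on disk (using the demo hostname from this guide; substitute your own):

sudo ls -l /etc/letsencrypt/live/unifyctrl01.multicastbits.com/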

6 Import the certificate to the UniFi controller

You can do this manually using keytool to import the cert:

https://crosstalksolutions.com/secure-unifi-controller/
https://docs.oracle.com/javase/tutorial/security/toolsign/rstep2.html

But for this we are going to use the handy SSL import script made by Steve Jenkins.

6.1 Download the Steve Jenkins UniFi SSL import script

Copy the unifi_ssl_import.sh script to your server

wget https://raw.githubusercontent.com/stevejenkins/unifi-linux-utils/master/unifi_ssl_import.sh

6.2 Modify Script

Install Nano if you don’t have it (it’s better than VI in my opinion. Some disagree, but hey, I’m entitled to my opinion)

sudo apt-get install nano
nano unifi_ssl_import.sh

Change hostname.example.com to the actual hostname you wish to use. In my case, I'm using unifyctrl01.multicastbits.com:

UNIFI_HOSTNAME=your_DNS_Record

Since we are using Ubuntu, comment out the following three lines for Fedora/RedHat/CentOS:

#UNIFI_DIR=/opt/UniFi
#JAVA_DIR=${UNIFI_DIR}
#KEYSTORE=${UNIFI_DIR}/data/keystore 

And uncomment the following three lines for Debian/Ubuntu:

UNIFI_DIR=/var/lib/unifi 
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

Since we are using Let's Encrypt, set:

LE_MODE=yes

Here’s what I used for this demo:

#!/usr/bin/env bash

# unifi_ssl_import.sh
# UniFi Controller SSL Certificate Import Script for Unix/Linux Systems
# by Steve Jenkins <http://www.stevejenkins.com/>
# Part of https://github.com/stevejenkins/ubnt-linux-utils/
# Incorporates ideas from https://source.sosdg.org/brielle/lets-encrypt-scripts
# Version 2.8
# Last Updated Jan 13, 2017

# CONFIGURATION OPTIONS
UNIFI_HOSTNAME=unifyctrl01.multicastbits.com
UNIFI_SERVICE=unifi

# Uncomment following three lines for Fedora/RedHat/CentOS
#UNIFI_DIR=/opt/UniFi
#JAVA_DIR=${UNIFI_DIR}
#KEYSTORE=${UNIFI_DIR}/data/keystore

# Uncomment following three lines for Debian/Ubuntu
UNIFI_DIR=/var/lib/unifi
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

# Uncomment following three lines for CloudKey
#UNIFI_DIR=/var/lib/unifi
#JAVA_DIR=/usr/lib/unifi
#KEYSTORE=${JAVA_DIR}/data/keystore

# FOR LET'S ENCRYPT SSL CERTIFICATES ONLY
# Generate your Let's Encrtypt key & cert with certbot before running this script
LE_MODE=yes
LE_LIVE_DIR=/etc/letsencrypt/live

# THE FOLLOWING OPTIONS NOT REQUIRED IF LE_MODE IS ENABLED
PRIV_KEY=/etc/ssl/private/hostname.example.com.key
SIGNED_CRT=/etc/ssl/certs/hostname.example.com.crt
CHAIN_FILE=/etc/ssl/certs/startssl-chain.crt

#rest of the script Omitted

6.3 Make script executable:
chmod a+x unifi_ssl_import.sh
6.4 Run script:
sudo ./unifi_ssl_import.sh

This script will:

  • Back up the old keystore file (very handy, something I always forget to do)
  • Update the relevant keystore with the LE cert
  • Restart the services to apply the new cert

7. Set up automatic certificate renewal

Let's Encrypt certs expire every 3 months; you can easily renew with:

certbot renew

This will reuse the config you used to generate the cert and renew it.

Then run the SSL import script to update the controller cert.

You can automate this using a cronjob.

Copy the modified import Script you used in Step 6 to “/bin/certupdate/unifi_ssl_import.sh”

sudo mkdir /bin/certupdate/
cp /home/user/unifi_ssl_import.sh /bin/certupdate/unifi_ssl_import.sh

Switch to root and edit the root crontab, adding the following lines (entries in crontab -e don't take a user field):

sudo su
crontab -e
0 1 31 1,3,5,7,9,11 * certbot renew
15 1 31 1,3,5,7,9,11 * /bin/certupdate/unifi_ssl_import.sh

Save and exit nano by doing CTRL+X followed by Y. 

List the crontab for root and confirm:

crontab -l

At 01:00 on day-of-month 31 in January, March, May, July, September, and November the command will attempt to renew the cert

At 01:15 on day-of-month 31 in January, March, May, July, September, and November it will update the keystore with the new cert

 

Useful links –

https://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/

https://crontab.guru/#

8. Adopting UniFi devices to the new Controller with SSH or other L3 adoption methods

If you can SSH into the AP, it’s possible to do L3-adoption via CLI command:

1. Make sure the AP is running the same firmware as the controller. If it is not, see this guide: UniFi – Changing the Firmware of a UniFi Device.

2. Make sure the AP is in factory default state. If it’s not, do:

syswrapper.sh restore-default

3. SSH into the device and type the following and hit enter:

set-inform http://ip-of-controller:8080/inform

4. After issuing the set-inform, the UniFi device will show up for adoption. Once you click adopt, the device will appear to go offline.

5. Once the device goes offline, issue the set-inform command from step 3 again. This will permanently save the inform address, and the device will start provisioning.

https://help.ubnt.com/hc/en-us/articles/204909754-UniFi-Device-Adoption-Methods-for-Remote-UniFi-Controllers

Managing the UniFi controller services

# to stop the controller
$ sudo service unifi stop

# to start the controller
$ sudo service unifi start

# to restart the controller
$ sudo service unifi restart

# to view the controller's current status
$ sudo service unifi status

Troubleshooting issues

cat /var/log/unifi/server.log

Go through the system logs and google the issue; the best part about Ubiquiti gear is the strong community support.

 


I was working on a vGPU POC using PVE, since Broadcom screwed us with the vSphere licensing costs (new post incoming about this adventure).

Anyway, I needed to find the PCI-E slot used by the A4000 GPU on the host so I could disable it for troubleshooting.

Guide

First we need to find the occupied slots and the bus address for each slot:

sudo dmidecode -t slot | grep -E "Designation|Usage|Bus Address"

The output will show the slot designation, usage, and bus address:

        Designation: CPU SLOT1 PCI-E 4.0 X16
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT2 PCI-E 4.0 X8
        Current Usage: In Use
        Bus Address: 0000:41:00.0
        Designation: CPU SLOT3 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:c1:00.0
        Designation: CPU SLOT4 PCI-E 4.0 X8
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT5 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:c2:00.0
        Designation: CPU SLOT6 PCI-E 4.0 X16
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT7 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:81:00.0
        Designation: PCI-E M.2-M1
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: PCI-E M.2-M2
        Current Usage: Available
        Bus Address: 0000:ff:00.0

We can use lspci -s #BusAddress# to see what's installed in each slot:

lspci -s 0000:c2:00.0
c2:00.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1)

lspci -s 0000:81:00.0
81:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)

I'm sure there is a much more elegant way to do this, but it worked as a quick-ish way to find what I needed. If you know a better way, please share in the comments.

Until next time!!!

Reference –

https://stackoverflow.com/questions/25908782/in-linux-is-there-a-way-to-find-out-which-pci-card-is-plugged-into-which-pci-sl

This is a guide to show you how to enroll your servers/desktops to allow PowerShell remoting (WinRM) over HTTPS.

Assumptions

  • You have a working Root CA on the ADDS environment – Guide
  • CRL and AIA are configured properly – Guide
  • Root CA cert is pushed out to all Servers/Desktops – This happens by default

Contents

  1. Setup CA Certificate template
  2. Deploy Auto-enrolled Certificates via Group Policy
  3. Powershell logon script to set the WinRM listener
  4. Deploy the script as a logon script via Group Policy
  5. Testing
1 – Set up a CA certificate template to allow client servers/desktops to check out the certificate from the CA

Connect to the Certification Authority Microsoft Management Console (MMC).

Navigate to Certificate Templates > Manage

On the “Certificate templates Console” window > Select Web Server > Duplicate Template

Under the new template window, set the following attributes:

General – Pick a name and validity period – this is up to you

Compatibility – Set the compatibility attributes (you can leave these on the default values; it's up to you)

Subject Name – Set ‘Subject Name’ attributes (Important)

Security – Add “Domain Computers” Security Group and Set the following permissions

  • Read – Allow
  • Enroll – Allow
  • Autoenroll – Allow

Click “OK” to save and close out of “Certificate template console”

Issue the new template

Go back to the “The Certification Authority Microsoft Management Console” (MMC)

Under templates (Right click the empty space) > Select New > Certificate template to Issue

Under the Enable Certificate template window > Select the Template you just created

Allow a few minutes for ADDS to replicate and pick up the changes within the forest.

2 – Deploy Auto-enrolled Certificates via Group Policy

Create a new GPO

Windows Settings > Security Settings > Public Key Policies/Certificate Services Client – Auto-Enrollment Settings

Link the GPO to the relevant OU within your ADDS environment.

Note – You can push out the root CA cert as a trusted root certificate with this same policy if you want to force computers to pick up the CA cert.

Testing

If you need to test it, run gpupdate /force or reboot your test machine; the server VM/PC will pick up a certificate from the AD CS PKI.

3 – PowerShell logon script to set the WinRM listener

Dry run

  • Set up the log file
  • Check for the certificate matching the machine's FQDN, auto-enrolled from AD CS
  • If it exists
    • Set up the HTTPS WinRM listener and bind the certificate
    • Write log
  • Else
    • Write log
#Malinda Rathnayake- 2020
#
#variable
$Date = Get-Date -Format "dd_MM_yy"
$port=5986
$SessionRunTime = Get-Date -Format "dd_yyyy_HH-mm"
#
#Setup Logs folder and log File
$ScriptVersion = '1.0'
$locallogPath = "C:\_Scripts\_Logs\WINRM_HTTPS_ListenerBinding"
#
$logging_Folder = (New-Item -Path $locallogPath -ItemType Directory -Name $Date -Force)
$ScriptSessionlogFile = New-Item $logging_Folder\ScriptSessionLog_$SessionRunTime.txt -Force
$ScriptSessionlogFilePath = $ScriptSessionlogFile.VersionInfo.FileName
#
#Check for the auto-enrolled SSL cert
$RootCA = "Company-Root-CA" #change This
$hostname = ([System.Net.Dns]::GetHostByName(($env:computerName))).Hostname
$certinfo = (Get-ChildItem -Path Cert:\LocalMachine\My\ |? {($_.Subject -Like "CN=$hostname") -and ($_.Issuer -Like "CN=$RootCA*")})
$certThumbprint = $certinfo.Thumbprint
#
#Script-------------------------------------------------------
#
#Remove the existing WInRM Listener if there is any
Get-ChildItem WSMan:\Localhost\Listener | Where -Property Keys -eq "Transport=HTTPS" | Remove-Item -Recurse -Force
#
#If the client certificate exists Setup the WinRM HTTPS listener with the cert else Write log
if ($certThumbprint){
#
New-Item -Path WSMan:\Localhost\Listener -Transport HTTPS -Address * -CertificateThumbprint $certThumbprint -HostName $hostname -Force
#
netsh advfirewall firewall add rule name="Windows Remote Management (HTTPS-In)" dir=in action=allow protocol=TCP localport=$port
#
Add-Content -Path $ScriptSessionlogFilePath -Value "Certbinding with the HTTPS WinRM HTTPS Listener Completed"
Add-Content -Path $ScriptSessionlogFilePath -Value "$certinfo.Subject"}
else{
Add-Content -Path $ScriptSessionlogFilePath -Value "No Cert matching the Server FQDN found, Please run gpupdate/force or reboot the system"
}

The script is commented, explaining each section (I should have used functions but was pressed for time and never got around to it; if you fix it up and improve it, please let me know in the comments :D).

4 – Deploy the script as a logon script via Group Policy

Set up a GPO and configure this script as a PowerShell logon script.

I'm using a user policy with GPO loopback processing set to Merge, applied to the server OU.

5 – Testing

To confirm WinRM is listening on HTTPS, type the following commands:

winrm enumerate winrm/config/listener
winrm get http://schemas.microsoft.com/wbem/wsman/1/config
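From another domain-joined machine, a quick hedged way to prove the HTTPS listener end to end (the FQDN below is a placeholder for your test server):

Test-WSMan -ComputerName server01.corp.contoso.com -UseSSL
Enter-PSSession -ComputerName server01.corp.contoso.com -UseSSL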

Sources that helped me

https://docs.microsoft.com/en-us/troubleshoot/windows-client/system-management-components/configure-winrm-for-https

https://gmusumeci.medium.com/get-rid-of-those-annoying-self-signed-certificates-with-microsoft-certificate-services-part-3-9d4b8e819f45

http://vcloud-lab.com/entries/powershell/powershell-remoting-over-https-using-self-signed-ssl-certificate

I'm going to base this off my VRF setup and route leaking article and continue building on top of it.

Let's say we need to advertise connected routes within VRFs via an IGP to an upstream or downstream IP address; this is one of many ways to reach that objective.

For this example we are going to use BGP to collect the connected routes and advertise them over OSPF.

Set up the BGP process to collect connected routes:

router bgp 65000
 router-id 10.252.250.6
 !
 address-family ipv4 unicast
 !
 neighbor 10.252.250.1
!
vrf Tenant01_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Tenant02_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Tenant03_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Shared_VRF
 !
 address-family ipv4 unicast
  redistribute connected

Set up OSPF to redistribute the routes collected via BGP:

router ospf 250 vrf Shared_VRF
 area 0.0.0.0 default-cost 0
 redistribute bgp 65000
interface vlan250
 mode L3
 description OSPF_Routing
 no shutdown
 ip vrf forwarding Shared_VRF
 ip address 10.252.250.6/29
 ip ospf 250 area 0.0.0.0
 ip ospf mtu-ignore
 ip ospf priority 10

Testing and confirmation

Local OSPF Database

Remote device
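The original post shows the verification as screenshots; if you're following along, these are generic checks you might run (exact syntax varies by switch OS):

show ip ospf neighbor
show ip ospf database
show ip route vrf Shared_VRF
show ip route vrf Tenant01_VRF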

Issue

Received the following error from the Azure AD stating that Password Synchronization was not working on the tenant.

When I manually initiate a delta sync, I see the following logs:

"The Specified Domain either does not exist or could not be contacted"


Checked the following

  • Restarted the AD Sync services
  • Resolved the ADDS domain FQDN and DNS – working
  • Tested the required ports for AD Sync using portqry – found issues with the primary ADDS server defined in the DNS values (see the checks sketched below)
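For reference, here's a hedged sketch of those checks from the AD Sync box in PowerShell (the domain and DC names are placeholders; 389 is the standard LDAP port):

# Confirm the domain FQDN actually returns records
Resolve-DnsName corp.contoso.com

# Confirm the DC is reachable on LDAP
Test-NetConnection -ComputerName dc01.corp.contoso.com -Port 389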

Root Cause

Turns out the domain controller defined as the primary DNS value was going through updates; it responds on DNS but doesn't return any data (a brown-out state).

Assumption

When checking DNS, since the primary DNS server still accepts connections, Windows doesn't fall back to the secondary and tertiary servers defined under DNS servers.

This can also happen if you are using an ADDS server over an S2S tunnel/MPLS when latency goes high.

Resolution

Check and make sure the ADDS DNS servers defined on the AD Sync server are alive and responding.

In my case, I just updated the "Primary" DNS value with the Umbrella appliance IP (this acts as a proxy and handles failover).