
I’ve been diving deep into systems architecture lately, specifically Kubernetes

Strip away the UIs, the YAML, and the ceremony, and Kubernetes boils down to:

A very stubborn, event-driven collection of control loops

aka the reconciliation (control) loop. Everything I read calls this the “gold standard” for distributed control planes.

Why? Because it decomposes the control plane into many small, independent loops, each continuously correcting drift rather than trying to execute perfect one-shot workflows. These loops are triggered by events or state changes, but what they do is determined by the declared spec versus the observed state (status).

Now we have both:

  • spec: desired state
  • status: observed state

Kubernetes lives in that gap.

When spec and status match, everything’s quiet. When they don’t, something wakes up to ensure current state matches the declared state.
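
Here’s a trimmed, illustrative look at both halves on a Deployment (the field names are real apps/v1 fields; the values are made up):

kubectl get deployment nginx -o yaml
...
spec:
  replicas: 3            # desired
...
status:
  availableReplicas: 3   # observed
  readyReplicas: 3
  replicas: 3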

The Architecture of Trust

In Kubernetes, components don’t coordinate via direct peer-to-peer orchestration; they coordinate by writing to and watching one shared “state.”

That state lives behind the API server, and the API server validates it and persists it into etcd.

Role of the API server

The API server is the front door to the cluster’s shared truth: it’s the only place that can accept, validate, and persist declared intent as Kubernetes API objects (metadata/spec/status).

When you install a CRD, you’re extending the API itself with a new type (a new endpoint) and a schema the API server can validate against.
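
For instance, a minimal CRD might look like this (a hypothetical widgets.example.com type, trimmed to the schema part):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
                  minimum: 1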

When we use kubectl apply (or any client) to submit YAML/JSON to the API server, the API server validates it (built-in rules, CRD OpenAPI v3 schema / CEL rules, and potentially admission webhooks) and rejects invalid objects before they’re stored.

If the request passes validation, the API server persists the object into etcd (the whole API object, not just the “intent”).

Once stored, controllers/operators (loops) watch those objects and run reconciliation to push the real world toward what’s declared.

In practice, most controllers don’t act directly on raw watch events; they consume changes through informer caches and queue work onto a rate-limited workqueue. They also often watch related/owned resources (secondary watches), not just the primary object, to stay convergent.

The spec is often user-authored, as discussed above, but it isn’t exclusively human-written: the scheduler and some controllers also update parts of it (e.g., scheduling decisions/bindings and defaulting).

Role of the etcd cluster

etcd is the control plane’s durable record: the authoritative reference for what the cluster believes should exist and what it currently reports.

If an intent (an API object) isn’t in etcd, controllers can’t converge on it, because there’s nothing recorded to reconcile toward.

This makes the system inherently self-healing because it trusts the declared state and keeps trying to morph the world to match until those two align.

One tidbit worth noting:

In production, nodes, runtimes, and cloud load balancers can drift independently. Controllers treat those systems as observed state, and they keep measuring reality against what the API says should exist.

How the Loop Actually Works

Kubernetes isn’t one loop. It’s a bunch of loops (controllers) that all behave the same way:

  • read desired state (what the API says should exist)
  • observe actual state (what’s really happening)
  • calculate the diff
  • push reality toward the spec

 

As an example, let’s look at a simple nginx workload deployment

1) Intent (Desired State)

To deploy the Nginx workload, you run:

kubectl apply -f nginx.yaml
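
For reference, nginx.yaml could be as simple as this (a standard Deployment manifest; three replicas chosen arbitrarily):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80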

 

The API server validates the object (and its schema, if it’s a CRD-backed type) and writes it into etcd.

At that point, Kubernetes has only recorded your intent. Nothing has “deployed” yet in the physical sense. The cluster has simply accepted:

“This is what the world should look like.”

2) Watch (The Trigger)

Controllers and schedulers aren’t polling the cluster like a bash script with a sleep 10.

They watch the API server.

When desired state changes, the loop responsible for it wakes up, runs through its logic, and acts:

“New desired state: someone wants an Nginx Pod.”

Watches aren’t gospel. Events can arrive twice, late, or never, and your controller still has to converge. Controllers use list+watch patterns with periodic resync as a safety net. The point isn’t perfect signals; it’s building a loop that stays correct under imperfect signals.

Controllers also don’t spin constantly; they queue work. Events enqueue object keys; workers dequeue and reconcile; failures requeue with backoff. This keeps one bad object from melting the control plane.

3) Reconcile (Close the Gap)

Here’s the mental map that made sense to me:

Kubernetes is a set of level-triggered control loops. You declare desired state in the API, and independent loops keep working until the real world matches what you asked for.

  • Controllers (Deployment/ReplicaSet/etc.) watch the API for desired state and write more desired state.
    • Example: a Deployment creates/updates a ReplicaSet; a ReplicaSet creates/updates Pods.
  • The scheduler finds Pods with no node assigned and picks a node.
    • It considers resource requests, node capacity, taints/tolerations, node selectors, (anti)affinity, topology spread, and other constraints.
    • It records its decision by setting spec.nodeName on the Pod.
  • The kubelet on the chosen node notices “a Pod is assigned to me” and makes it real.
    • pulls images (if needed) via the container runtime (CRI)
    • sets up volumes/mounts (often via CSI)
    • triggers networking setup (CNI plugins do the actual wiring)
    • starts/monitors containers and reports status back to the API

Each component writes its state back into the API, and the next loop uses that as input. No single component “runs the whole workflow.”
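
You can watch that chain from the outside with kubectl (trimmed, illustrative output; your names, hashes, and node will differ):

kubectl get deployment,replicaset,pods
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   3/3     3            3           2m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-7c5ddbdf54   3         3         3       2m

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7c5ddbdf54-x2p9q   1/1     Running   0          2m

kubectl get pod nginx-7c5ddbdf54-x2p9q -o jsonpath='{.spec.nodeName}'
worker-1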

One property makes this survivable: reconcile must be safe to repeat (idempotent). The loop might run once or a hundred times (retries, resyncs, restarts, duplicate/missed watch events), and it should still converge to the same end result.

If the desired state is already satisfied, reconcile should do nothing; if something is missing, it should fill the gap, without creating duplicates or making things worse.

What about concurrent updates, when two controllers try to update the same object at the same time?

Kubernetes handles this with optimistic concurrency. Every object has a resourceVersion (“what version of this object did you read?”). If you try to write an update using an older version, the API server rejects it (often as a conflict).

Then the flow is: re-fetch the latest object, apply your change again, and retry.
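
As a concrete illustration: every object carries its version, and a write built on a stale one gets bounced with a conflict (the error text below is roughly what the API server returns; the resourceVersion value is made up):

kubectl get deployment nginx -o jsonpath='{.metadata.resourceVersion}'
184467

Error from server (Conflict): Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again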

4) Status (Report Back)

Once the pod is actually running, status flows back into the API.
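
For example (illustrative output), the Deployment now reports what the kubelets observed:

kubectl get deployment nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           3m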

The Loop Doesn’t Protect You From Yourself

What if the declared state says to delete something critical like kube-proxy or a CNI component? The loop doesn’t have opinions. It just does what the spec says.

A few things keep this from being a constant disaster:

  • Control plane components are special. The API server, etcd, scheduler, and controller-manager usually run as static pods managed directly by the kubelet, not through the API. The reconciliation loop can’t easily delete the thing running the reconciliation loop as long as its manifest exists on disk.
  • DaemonSets recreate pods. Delete a kube-proxy pod and the DaemonSet controller sees “desired: 1, actual: 0” and spins up a new one. You’d have to delete the DaemonSet itself.
  • RBAC limits who can do what. Most users can’t touch kube-system resources.
  • Admission controllers can reject bad changes before they hit etcd.

But in the end, if your source of truth says “delete this,” the system will try. The model assumes your declared state is correct. Garbage in, garbage out.

Why This Pattern Matters Outside Kubernetes

This pattern shows up anywhere you manage state over time.

Scripts are fine until they aren’t:

  • they assume the world didn’t change since last run
  • they fail halfway and leave junk behind
  • they encode “steps” instead of “truth”

A loop is simpler:

  • define the desired state
  • store it somewhere authoritative
  • continuously reconcile reality back to it
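
As a toy illustration of the same shape outside Kubernetes, here’s a bash sketch. It assumes a /etc/myapp/desired_count file as the “source of truth” and a hypothetical worker.sh process to manage; it polls instead of watching, which is fine for a toy:

#!/bin/bash
# Toy reconcile loop: converge the number of running worker.sh processes
# toward whatever desired_count declares. Purely illustrative.

while true; do
  desired=$(cat /etc/myapp/desired_count)      # desired state
  actual=$(pgrep -c -f worker.sh)              # observed state

  if [ "$actual" -lt "$desired" ]; then        # fill the gap
    for _ in $(seq $((desired - actual))); do
      /usr/local/bin/worker.sh &
    done
  elif [ "$actual" -gt "$desired" ]; then      # trim the excess
    pkill -o -f worker.sh
  fi                                           # equal: do nothing

  sleep 10                                     # level-triggered resync
done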


Remember the Export-Mailbox command on Exchange 2007? The main problem I personally had was the annoying Outlook requirement.
With the Exchange Server 2010 Service Pack 1 release, M$ introduced a new cmdlet to export mailboxes on the server, and it does not require Outlook:
New-MailboxExportRequest

Step 01 – Mailbox Import Export Role Assignment
Grant the user account permissions to export mailboxes (By default no account has the privileges to export mailboxes)
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User administrator

Step 02 – Set up the Export File Location
We need a network share to export the files to (e.g. \\EXCH01\PST_export)
Note:
The cmdlet gives an error if you point to a directory directly on the hard disk (e.g. F:\PST_export)
Create a shared folder on a server/NAS and grant the Exchange Trusted Subsystem account read/write permissions to the folder
Exporting Mailbox Items with “New-MailboxExportRequest”

Supporting cmdlets that can be used with MailboxExportRequest:

  • New-MailboxExportRequest – Start the process of exporting a mailbox or personal archive to a .pst file. You can create more than one export request per mailbox. Each request must have a unique name.
  • Set-MailboxExportRequest – Change export request options after the request is created or recover from a failed request.
  • Suspend-MailboxExportRequest – Suspend an export request any time after the request is created but before the request reaches the status of Completed.
  • Resume-MailboxExportRequest – Resume an export request that’s suspended or failed.
  • Remove-MailboxExportRequest – Remove fully or partially completed export requests. Completed export requests aren’t automatically cleared; you must use this cmdlet to remove them.
  • Get-MailboxExportRequest – View general information about an export request.
  • Get-MailboxExportRequestStatistics – View detailed information about an export request.
In this example:
  • Shared folder name – PST_export
  • Server name – EXCH01
  • Share path – \\EXCH01\PST_export
  • Mailbox – amy.webber

Folder permissions – 

For this example we are going to use the New-MailboxExportRequest cmdlet with the following parameters:

-BadItemLimit 200 -AcceptLargeDataLoss
AcceptLargeDataLoss
The AcceptLargeDataLoss parameter specifies that a large amount of data loss is acceptable if the BadItemLimit is set to 51 or higher. Items are considered corrupted if the item can’t be read from the source database or can’t be written to the target database. Corrupted items won’t be available in the destination mailbox or .pst file.
BadItemLimit
The BadItemLimit parameter specifies the number of bad items to skip if the request encounters corruption in the mailbox. Use 0 to not skip bad items. The valid input range for this parameter is from 0 through 2147483647. The default value is 0.
Exporting the Whole Mailbox
Run the following cmdlet to initiate the mailbox export request:
New-MailboxExportRequest -BadItemLimit 200 -AcceptLargeDataLoss -Mailbox amy.webber -FilePath \\EXCH01\PST_export\amy.webber.pst

Exporting the User’s Online Archive
If you want to export the user’s online archive to .pst, use the -IsArchive parameter.

New-MailboxExportRequest -BadItemLimit 200 -AcceptLargeDataLoss -Mailbox amy.webber -IsArchive -FilePath \\EXCH01\PST_export\amy.webber-Archive.pst

Exporting a Specific Folder
You can export a specific folder from the user’s mailbox using the -IncludeFolders parameter.
For example, to export the Inbox folder:
New-MailboxExportRequest -BadItemLimit 200 -AcceptLargeDataLoss -Mailbox amy.webber -IncludeFolders "#Inbox#" -FilePath \\EXCH01\PST_export\amy.webber.pst
Checking the Progress of the Mailbox Export Request

To check the current status of the mailbox export request, use the following cmdlet:
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics
People do crazy stuff scripting with this cmdlet. Look around on the interwebs for some scripts.
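
For example, a quick sketch that queues an export for every mailbox using the same parameters as above (the share path assumes the EXCH01 share from this example):

foreach ($mbx in (Get-Mailbox -ResultSize Unlimited)) {
    New-MailboxExportRequest -Mailbox $mbx.Alias -BadItemLimit 200 -AcceptLargeDataLoss -FilePath "\\EXCH01\PST_export\$($mbx.Alias).pst"
}

# then keep an eye on them
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics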
Until next time…

I wanted to work around Cloud Key issues and set up a ZTP (zero-touch provisioning) solution for UniFi hardware, so I can drop-ship equipment to the site and provision it securely over the internet without having to stand up an L2L tunnel.

Alright, let’s get started…

This guide is applicable to any Ubuntu-based install, but I’m going to use Amazon Lightsail for the demo, since at the time of writing it’s the cheapest option I can find with enough compute resources and a static IP included.

2 GB RAM, 1 vCPU, 60 GB SSD

OpEx (recurring cost) – $10 per month, as of February 2019

Guide

Dry Run

1. Set up Lightsail instance
2. Create and attach static IP
3. Open necessary ports
4. Set up UniFi packages
5. Set up SSL using certbot and Let’s Encrypt
6. Add the certs to the UniFi controller
7. Set up a cron job for SSL auto-renewal
8. Adopting UniFi devices

1. Set up LightSail instance

Login to – https://lightsail.aws.amazon.com

Spin up a Lightsail instance:

Set a name for the instance and provision it.

2. Create and attach static IP

Click on the instance name and click on the networking tab:

Click “Create Static IP”:

3. Open necessary ports

  • TCP 80 – port used in the inform URL for adoption.
  • TCP 443 – port used for the Cloud Access service.
  • UDP 3478 – port used for STUN.
  • TCP 8080 – port used for device and controller communication.
  • TCP 8443 – port used for the controller GUI/API as seen in a web browser.
  • TCP 8880 – port used for HTTP portal redirection.
  • TCP 8843 – port used for HTTPS portal redirection.

You can disable or lock down the ports as needed using iptables, depending on your security posture.
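
For example, to restrict the controller GUI on 8443 to a single admin IP and drop everything else on that port (203.0.113.10 is a placeholder; adjust source addresses and rule order to your setup):

sudo iptables -A INPUT -p tcp --dport 8443 -s 203.0.113.10 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8443 -j DROP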

Post spotlight-

https://www.lammertbies.nl/comm/info/iptables.html#intr

4. Set up UniFi packages

https://help.ubnt.com/hc/en-us/articles/209376117-UniFi-Install-a-UniFi-Cloud-Controller-on-Amazon-Web-Services#7

Add the Ubiquiti repository to /etc/apt/sources.list:
sudo echo "deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti" | sudo tee -a /etc/apt/sources.list
Add the Ubiquiti GPG Key:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50

Update the server’s repository information:

sudo apt-get update

Install the Java 8 runtime

You need the Java 8 runtime to run the UniFi Controller.

Add Oracle’s PPA (Personal Package Archive) to your list of sources so that Ubuntu knows where to check for updates, using the add-apt-repository command, then install the packages:

sudo add-apt-repository ppa:webupd8team/java -y
sudo apt install java-common oracle-java8-installer

Update your package repository by issuing the following command:

sudo apt-get update

The oracle-java8-set-default package will automatically set Oracle JDK 8 as the default. Once the installation is complete, we can check the Java version.

java -version

java version "1.8.0_191"

MongoDB

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6

echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
sudo apt update

Update. Retrieve the latest package information.

sudo apt update

sudo apt-get install apt-transport-https

Install UniFi Controller packages.

sudo apt install unifi

You should be able to access the web interface and go through the initial setup wizard.

https://yourIPaddress:8443

5. Set up SSL using certbot and Let’s Encrypt

Let’s get that green lock up in here, shall we?

So, a few things to note here… UniFi doesn’t really have a straightforward way to import certificates; you have to use the Java keystore commands to import the cert, but there is a very handy script built by Steve Jenkins that makes this super easy.

First, we need to request a cert and get it signed by the Let’s Encrypt certificate authority.

Let’s start by adding the repository and installing the EFF certbot package – link

sudo apt-get update
sudo apt-get install software-properties-common 
sudo add-apt-repository universe 
sudo add-apt-repository ppa:certbot/certbot 
sudo apt-get update 
sudo apt-get install certbot

5.1 Update/add your DNS record and make sure it’s propagated (this is important)

Note – the DNS name should point to the static IP we attached to our Lightsail instance
I’m going to use the following A record for this example

unifyctrl01.multicastbits.com

Ping from the controller and make sure the server can resolve it.

ping unifyctrl01.multicastbits.com


You won’t be able to see any echo replies because ICMP is not allowed in the AWS firewall rules – leave it as is; we just need the server to see the name resolving to the A record’s IP.
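
If you’d rather confirm name resolution directly (assuming the dnsutils package is installed):

dig +short unifyctrl01.multicastbits.com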

5.2 Request the certificate

Issue the following command to start certbot in certonly mode

sudo certbot certonly
usage: 
  certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
certificate. The most common SUBCOMMANDS and flags are:

obtain, install, and renew certificates:
    (default) run   Obtain & install a certificate in your current webserver
    certonly        Obtain or renew a certificate, but do not install it
    renew           Renew all previously obtained certificates that are near expiry
    enhance         Add security enhancements to your existing configuration
   -d DOMAINS       Comma-separated list of domains to obtain a certificate for

 

5.3 Follow the wizard

Select the first option #1 (Spin up a temporary web server)

Enter all the information requested for the cert request.

This will save the generated certificate and private key to the following directory:

/etc/letsencrypt/live/DNSName/

All you need to worry about are these files:

  • cert.pem
  • fullchain.pem
  • privkey.pem
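
You can confirm they are there with a quick listing (the path assumes the A record used in this example):

sudo ls -l /etc/letsencrypt/live/unifyctrl01.multicastbits.com/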

6. Import the certificate into the UniFi controller

You can do this manually using keytool to import it:

https://crosstalksolutions.com/secure-unifi-controller/
https://docs.oracle.com/javase/tutorial/security/toolsign/rstep2.html

But for this we are going to use the handy SSL import script made by Steve Jenkins.

6.1 Download Steve Jenkins’ UniFi SSL Import Script

Copy the unifi_ssl_import.sh script to your server

wget https://raw.githubusercontent.com/stevejenkins/unifi-linux-utils/master/unifi_ssl_import.sh

6.2 Modify Script

Install Nano if you don’t have it (it’s better than VI in my opinion. Some disagree, but hey, I’m entitled to my opinion)

sudo apt-get install nano
nano unifi_ssl_import.sh

Change hostname.example.com to the actual hostname you wish to use. In my case, I’m using:

UNIFI_HOSTNAME=unifyctrl01.multicastbits.com

Since we are using Ubuntu, comment out the following three lines for Fedora/RedHat/CentOS:

#UNIFI_DIR=/opt/UniFi
#JAVA_DIR=${UNIFI_DIR}
#KEYSTORE=${UNIFI_DIR}/data/keystore 

Uncomment the following three lines for Debian/Ubuntu:

UNIFI_DIR=/var/lib/unifi 
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

Since we are using Let’s Encrypt, set:

LE_MODE=yes

Here’s what I used for this demo:

#!/usr/bin/env bash

# unifi_ssl_import.sh
# UniFi Controller SSL Certificate Import Script for Unix/Linux Systems
# by Steve Jenkins <http://www.stevejenkins.com/>
# Part of https://github.com/stevejenkins/ubnt-linux-utils/
# Incorporates ideas from https://source.sosdg.org/brielle/lets-encrypt-scripts
# Version 2.8
# Last Updated Jan 13, 2017

# CONFIGURATION OPTIONS
UNIFI_HOSTNAME=unifyctrl01.multicastbits.com
UNIFI_SERVICE=unifi

# Uncomment following three lines for Fedora/RedHat/CentOS
#UNIFI_DIR=/opt/UniFi
#JAVA_DIR=${UNIFI_DIR}
#KEYSTORE=${UNIFI_DIR}/data/keystore

# Uncomment following three lines for Debian/Ubuntu
UNIFI_DIR=/var/lib/unifi
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

# Uncomment following three lines for CloudKey
#UNIFI_DIR=/var/lib/unifi
#JAVA_DIR=/usr/lib/unifi
#KEYSTORE=${JAVA_DIR}/data/keystore

# FOR LET'S ENCRYPT SSL CERTIFICATES ONLY
# Generate your Let's Encrtypt key & cert with certbot before running this script
LE_MODE=yes
LE_LIVE_DIR=/etc/letsencrypt/live

# THE FOLLOWING OPTIONS NOT REQUIRED IF LE_MODE IS ENABLED
PRIV_KEY=/etc/ssl/private/hostname.example.com.key
SIGNED_CRT=/etc/ssl/certs/hostname.example.com.crt
CHAIN_FILE=/etc/ssl/certs/startssl-chain.crt

#rest of the script Omitted

6.3 Make script executable:
chmod a+x unifi_ssl_import.sh
6.4 Run script:
sudo ./unifi_ssl_import.sh

This script will:

  • back up the old keystore file (very handy, something I always forget to do)
  • update the relevant keystore file with the LE cert
  • restart the service to apply the new cert

7. Set up automatic certificate renewal

Let’s Encrypt certs expire every 3 months; you can easily renew by running

letsencrypt renew

This will use the existing config you used to generate the cert and renew it.

Then run the SSL import script to update the controller cert.

You can automate this using a cron job.

Copy the modified import script you used in Step 6 to /bin/certupdate/unifi_ssl_import.sh:

sudo mkdir /bin/certupdate/
cp /home/user/unifi_ssl_import.sh /bin/certupdate/unifi_ssl_import.sh

Switch to root and edit the crontab for root, then add the following lines (note that a per-user crontab entry has no user field):

sudo su
crontab -e
0 1 31 1,3,5,7,9,11 * certbot renew
15 1 31 1,3,5,7,9,11 * /bin/certupdate/unifi_ssl_import.sh

Save and exit nano by doing CTRL+X followed by Y. 

Check the crontab for root and confirm:

crontab -l

At 01:00 on day-of-month 31 in January, March, May, July, September, and November the command will attempt to renew the cert

At 01:15 on day-of-month 31 in January, March, May, July, September, and November it will update the keystore with the new cert

 

Useful links –

https://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/

https://crontab.guru/#

8. Adopting UniFi devices to the new Controller with SSH or other L3 adoption methods

If you can SSH into the AP, it’s possible to do L3-adoption via CLI command:

1. Make sure the AP is running the same firmware as the controller. If it is not, see this guide: UniFi – Changing the Firmware of a UniFi Device.

2. Make sure the AP is in factory default state. If it’s not, do:

syswrapper.sh restore-default

3. SSH into the device and type the following and hit enter:

set-inform http://ip-of-controller:8080/inform

4. After issuing the set-inform, the UniFi device will show up for adoption. Once you click adopt, the device will appear to go offline.

5. Once the device goes offline, issue the  set-inform  command from step 3 again. This will permanently save the inform address, and the device will start provisioning.

https://help.ubnt.com/hc/en-us/articles/204909754-UniFi-Device-Adoption-Methods-for-Remote-UniFi-Controllers

Managing the UniFi controller services

# to stop the controller
$ sudo service unifi stop

# to start the controller
$ sudo service unifi start

# to restart the controller
$ sudo service unifi restart

# to view the controller's current status
$ sudo service unifi status

Troubleshooting issues

cat /var/log/unifi/server.log
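
To follow the log live while reproducing an issue, or to fish out errors, the usual tail/grep combo works fine:

tail -f /var/log/unifi/server.log
grep -i error /var/log/unifi/server.log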

Go through the system logs and Google the issue; the best part about Ubiquiti gear is the strong community support.

 



We will proceed assuming:

  • you already configured the ASA with the primary link

  • you configured WAN2 on a port with a static IP or DHCP depending on the connection – you should be able to ping the secondary WAN link’s gateway from the ASA


Note:

Please remove the existing static route for the primary WAN link.

Configure Route tracking

ASA(config)# route outside 0.0.0.0 0.0.0.0 <ISP 1(WAN1) Gateway> 1 track 1
ASA(config)# route Backup_Wan 0.0.0.0 0.0.0.0 <ISP 2 (WAN2) Gateway> 254


Now let’s break it down:

Line 01 – adds the WAN1 route with an administrative distance of 1 and includes the track 1 statement for SLA monitor tracking (see below)


Line 02 – adds the default route for the Backup_Wan link with a higher administrative distance to make it the secondary link


Examples 

ASA(config)# route outside  0.0.0.0 0.0.0.0 100.100.100.10 1 track 1
ASA(config)# route Backup_Wan  0.0.0.0 0.0.0.0 200.200.200.10 254



Setup SLA monitoring and Route tracking 

ASA(config)# sla monitor 10


Configure the SLA monitor with ID 10

ASA(config-sla-monitor)# type echo protocol ipIcmpEcho 8.8.8.8 interface outside


Configure the monitoring protocol, the target IP for the probe, and the interface to use.

The SLA monitor will keep probing the IP we define here and report if it’s unreachable via the given interface.
In this scenario I’m using 8.8.8.8 as the target IP; you can use any public IP for monitoring.


ASA(config-sla-monitor-echo)# num-packets 4


Number of packets sent per probe

ASA(config-sla-monitor-echo)# timeout 1000


Timeout value in milliseconds. If you have a slow link as the primary, increase the timeout accordingly.

ASA(config-sla-monitor-echo)# frequency 10


Frequency of the probe in seconds – SLA monitor will probe the IP every 10 seconds

ASA(config)# sla monitor schedule 10 life forever start-time now


Set the ASA to start the SLA monitor now and keep it running forever.

ASA(config)# track 1 rtr 10 reachability


This command tells the ASA to keep tracking the SLA monitor with ID 10 and ties it to the default route defined with “track 1”.

If the probe fails to reach the target IP (in this case 8.8.8.8) via the designated interface, the ASA removes the route defined with “track 1” from the routing table.

The next best possible route – in this scenario the backup ISP route with an administrative distance of 254 – takes its place.
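
To verify the tracking is behaving, these standard ASA show commands are handy (output format varies by software version):

ASA# show sla monitor configuration 10
ASA# show sla monitor operational-state 10
ASA# show track 1
ASA# show route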


Configure dynamic NAT rules (important)

nat (inside,<ISP 1 (WAN1) Interface Name>) source dynamic any interface
nat (inside,<ISP 2 (WAN2) Interface Name>) source dynamic any interface


Configure the two NAT statements required so that either interface can provide NAT.

Examples 

nat (inside,outside) source dynamic any interface
nat (inside,Backup_Wan) source dynamic any interface
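
After a failover test, you can confirm which interface is actually translating traffic with the usual show commands:

ASA# show nat detail
ASA# show xlate
ASA# show conn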


This method worked well for me personally. Keep in mind I’m no Cisco guru, so if I made a mistake or you feel there is a better way to do this, please leave a comment – it’s all about the community, after all.

Until next time, stay awesome, internetz.

During an Office 365 migration in a hybrid environment with AAD Connect, I ran into the following scenario:

  • Hybrid co-existence environment with AAD sync
  • User [email protected] has a mailbox on-premises. Jon is represented as a mail user in the cloud with an Office 365 license
  • [email protected] had a cloud-only mailbox before the initial AD sync was run
  • The user account is registered as a mail user and has a valid license attached
  • During the Office 365 remote mailbox move, we end up with the following error during validation, and removing the immutable ID and remapping it to the on-premises account won’t fix the issue:
Target user 'Sam fisher' already has a primary mailbox.
+ CategoryInfo : InvalidArgument: (tsu:MailboxOrMailUserIdParameter) [New-MoveRequest], RecipientTaskException
+ FullyQualifiedErrorId : [Server=Pl-EX001,RequestId=19e90208-e39d-42bc-bde3-ee0db6375b8a,TimeStamp=11/6/2019 4:10:43 PM] [FailureCategory=Cmdlet-RecipientTaskException] 9418C1E1,Microsoft.Exchange.Management.Migration.MailboxRep
lication.MoveRequest.NewMoveRequest
+ PSComputerName : Pl-ex001.Paladin.org

It turns out this happens due to an unclean cloud object in MSOL. Exchange Online keeps pointers that indicate there used to be a mailbox in the cloud for this user.

Option 1 (nuclear option)

Delete the MSOL user object for Sam and re-sync it from on-premises. This would delete [email protected] from the cloud – but it deletes the user from all workloads, not only Exchange. This is problematic because Sam is already using Teams, OneDrive, and SharePoint.

Option 2

Clean up only the Office 365 mailbox pointer information.

PS C:\> Set-User [email protected] -PermanentlyClearPreviousMailboxInfo 
Confirm
Confirm
Are you sure you want to perform this action?
Delete all existing information about user "[email protected]"?. This operation will clear existing values from
Previous home MDB and Previous Mailbox GUID of the user. After deletion, reconnecting to the previous mailbox that
existed in the cloud will not be possible and any content it had will be unrecoverable PERMANENTLY. Do you want to
continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): a

Executing this leaves you with a clean object without the duplicate-mailbox problem.

In some cases, when you run this command you will get the following output:

“Command completed successfully, but no user settings were changed.”

If this happens:

Remove the license from the user temporarily and run the command to remove the previous mailbox data.

Then you can re-add the license.
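
A hedged sketch of that remove/clear/re-add sequence using the older MSOnline module (the SKU name is hypothetical – check yours with Get-MsolAccountSku):

Connect-MsolService

$upn = "[email protected]"
$sku = "contoso:ENTERPRISEPACK"   # hypothetical tenant:SKU pair

# temporarily pull the license
Set-MsolUserLicense -UserPrincipalName $upn -RemoveLicenses $sku

# clear the stale cloud mailbox pointers (Exchange Online PowerShell session)
Set-User $upn -PermanentlyClearPreviousMailboxInfo -Confirm:$false

# re-add the license
Set-MsolUserLicense -UserPrincipalName $upn -AddLicenses $sku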

 

Preface-

After I started working with open networking switches, I wanted to know more about the Cisco Catalyst range I work with every day.

Information on the older ASICs is very hard to find, but recently Cisco has started to talk a lot about the new chips like UADP 2.0 with the Catalyst 9K/Nexus launch. This is most likely due to the rise of disaggregated network operating systems (DNOS) such as Cumulus and PICA8 forcing customers to be more aware of what’s under the hood, rather than listening to and believing shiny PDF files with a laundry list of features.

The information was there but scattered all over the web. I went through Cisco Live and Tech Field Day slides/videos, interviews, partner update PDFs, press releases, whitepapers, and even LinkedIn profiles to gather it.

If you notice a mistake, please let me know.

Scope –

We are going to focus on the ASICs used in the well-known 2960-S/X/XR, the 36xx/37xx/38xx, and the new Cat 9K series.

Timeline

 

Summary

Cisco bought a bunch of companies to acquire the switching technology it needed, which later bloomed into the switching platforms we know today:

  • Crescendo Communications (1993) – Catalyst 5K and 6K chassis
  • Kalpana (1994) – Catalyst 3K (fun fact: they invented VLANs, which later got standardized as 802.1Q)
  • Grand Junction (1995) – Catalyst 17xx, 19xx, 28xx, 29xx
  • Granite Systems (1996) – Catalyst 4K (K series ASIC)

After the original Cisco 3750/2950 switches, the Cisco 3xxx/2xxx-G (G for Gigabit) series was released.

Next, the Cisco 3xxx-E series with enterprise management functions was released.

Later, Cisco developed the Cisco 3750-V series as an energy-saving version of the -E series, later replaced by the Cisco 3750 V2 series (possibly a die shrink).

The G series and E series were later phased out and integrated into the Cisco X series, which is still being sold and supported.

In 2017-2018 Cisco released the Catalyst 9K family to replace the 2K and 3K families.

Sasquatch ASIC

From what I could find, there are two variants of this ASIC.

The initial release in 2003

  • Fixed pipeline ASIC
  • 180 Nano-meter process
  • 60 Million Transistors

Shipped with the 10/100 3750 and 2960

Die Shrink to 130nm in 2005

  • Fixed pipeline ASIC
  • 130 Nano-meter process
  • 210 Million Transistors

Shipped in the 2960-G/3560-G/3750-G series

I couldn’t find much info about the chip design; I will update this section when I find more.

Strider ASIC

Initial release in 2010

  • Fixed pipeline ASIC
  • Built on the 65-nanometer process
  • 1.3 Billion Transistors

The Strider ASIC (circa 2010) was an improved design based on the 3750-E series and was first shipped with the 2960-S family.

S88G ASIC design

Later, in 2012-2013, with a die shrink to 45 nanometers, they managed to fit two ASICs on the same silicon. This shipped with the 2960-X/XR, which replaced the 2960-S:

  • higher stack speeds and more features
  • limited Layer 3 capabilities with the IP Lite feature set (2960-XR only)
  • better QoS and NetFlow-Lite

Later down the line they silently rolled the ASIC design onto a 32-nanometer process for better yield and cheaper manufacturing costs.

This switch is still being sold, with no EOL announced, as a cheaper access-layer switch.

On a side note – in 2017 Cisco released another version of the 2960 family, the 2960-L. This is a cheaper version built on a Marvell ASIC (same as the SG55x) with a web UI and a fanless design. I personally think this is the next evolution of their SMB-oriented family, the popular Cisco SG-5xx series. For the first time the 2960 family had a fairly usable and pleasant web interface for configuration and management, and the new 9K series seems to contain a more polished version of that web UI.

Unified Access Data Plane (UADP)

Due to the limitations of the fixed-pipeline architecture and the costs involved in re-spinning the silicon to fix bugs, they needed something new and had three options.

As a compromise between all three, Cisco started dreaming up this programmable ASIC design in 2007-2008. The idea was to build a chip with programmable stages that could be updated with firmware instead of writing the logic into the silicon permanently.

They released the programmable ASIC technology initially in their QFP (Quantum Flow Processor) ASIC in the ISR router family to meet customer needs (service providers and large enterprises).

This chip allowed them to support advanced routing technologies and new protocols without changing hardware – simply via firmware updates – improving the longevity of the investment and letting them make more money out of the chip’s extended life cycle.

Eventually this technology trickled downstream, and Doppler 1.0 was born.

Improvements and features in UADP 1.0/1.1/2.0

  • Programmable stages in the pipeline
  • Cisco intent-driven networking support – DNA Center with ISE
  • Integrated stacking support with StackPower – the ASIC is built with pinouts for the stacking fabric, allowing faster stacking performance
  • Rapid recirculation (encapsulations such as MPLS, VXLAN)
  • TrustSec
  • Advanced on-chip QoS
  • Software-defined networking support – integrated NetFlow, SD-Access
  • Flex Parser – programmable packet parser allowing them to introduce support for new protocols with firmware updates
  • On-chip micro-engines – highly specialized engines within the chip to perform repetitive tasks such as
    • reassembly
    • encryption/decryption
    • fragmentation
  • CAPWAP – the switch can function as a wireless LAN controller
    • Mobility agent – offload QoS and other functions from the WLC (IMO works really nicely with multi-site wireless deployments)
    • Mobility controller – full-blown wireless LAN controller (WLC)
  • Extended life cycle allowed integration of Cisco security technologies such as Cisco DNA + ISE later down the line, even on the first-generation switches
  • Multigigabit and 40GE speed support
  • Advanced malware detection capabilities via packet fingerprinting

Legacy Fixed pipeline architecture

Programmable pipeline architecture

Doppler/UADP 1.0 (2013)

The Doppler 1.0 programmable ASIC handled the data plane, coupled with a Cavium CPU for the control plane; the first generation of switches to ship with this chip was the 3650 and 3850 gigabit models.

  • Built on 65 Nanometer Process
  • 1.3 billion transistors

UADP 1.1 (2015)

  • Die Shrink to 45 Nanometer
  • 3 billion transistors

UADP 2.0 (2017)

  • Built on a 28nm/16nm process
  • Equipped with an Intel Xeon D (dual-core x86) CPU for the control plane
  • Open IOS-XE
  • 7.4 billion transistors

Flexible ASIC templates –

These allow Cisco to write templates that optimize the chip’s resources for different use cases.

The new Catalyst 9000 series will replace the following campus switching families built on the older Strider and more recent UADP 1.0 and 1.1 ASICs:

  • Catalyst 2K —–> Catalyst 9200
  • Catalyst 3K —–> Catalyst 9300
  • Catalyst 4K —–> Catalyst 9400
  • Catalyst 6K —–> Catalyst 9500

I will update/fix this post when I find more info about UADP 2.0 and the next evolution. Stay tuned for a few more articles on the silicon used in open networking x86 chassis.

Just something that came up while setting up a monitoring script using mailx; figured I’ll note it down here so I can get to it easily later when I need it 😀

Important prerequisites

  • You need to enable SMTP basic auth on Office 365 for the account used for authentication
  • Create an app password for the user account
  • The nssdb folder must be available and readable by the user running the mailx command

Assuming all of the above prerequisites are $true, we can proceed with the setup.

Install mailx

RHEL/AlmaLinux

sudo dnf install mailx

NSSDB Folder

Make sure the nssdb folder is available and readable by the user running the mailx command:

certutil -L -d /etc/pki/nssdb

The output might be empty, but that’s OK; this is there in case you need to add a locally signed cert or another CA cert manually. Microsoft’s certs are trusted by default if you are on an up-to-date operating system with the system-wide trust store.
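
If you ever do need to trust an internal CA in that store, the add looks like this (the nickname and the ca.crt path are placeholders):

sudo certutil -A -d /etc/pki/nssdb -n "internal-ca" -t "CT,C,C" -i /path/to/ca.crt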

Reference – RHEL-sec-shared-system-certificates

Configure the mailx config file

sudo nano /etc/mail.rc

Append the following lines, and comment out or remove any of the same settings already defined in the existing config file:

set smtp=smtp.office365.com
set smtp-auth-user=###[email protected]###
set smtp-auth-password=##Office365-App-password#
set nss-config-dir=/etc/pki/nssdb/
set ssl-verify=ignore
set smtp-use-starttls
set from="###[email protected]###"

This is the bare minimum needed; the other options are documented here – link

Testing

echo "Your message is sent!" | mailx -v -s "test" [email protected]

The -v switch will print the verbose debug log to the console:

Connecting to 52.96.40.242:smtp . . . connected.
220 xxde10CA0031.outlook.office365.com Microsoft ESMTP MAIL Service ready at Sun, 6 Aug 2023 22:14:56 +0000
>>> EHLO vls-xxx.multicastbits.local
250-MN2PR10CA0031.outlook.office365.com Hello [167.206.57.122]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-STARTTLS
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
>>> STARTTLS
220 2.0.0 SMTP server ready
>>> EHLO vls-xxx.multicastbits.local
250-xxde10CA0031.outlook.office365.com Hello [167.206.57.122]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-AUTH LOGIN XOAUTH2
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
>>> AUTH LOGIN
334 VXNlcm5hbWU6
>>> Zxxxxxxxxxxxc0BmdC1zeXMuY29t
334 UGsxxxxxmQ6
>>> c2Rxxxxxxxxxxducw==
235 2.7.0 Authentication successful
>>> MAIL FROM:<###[email protected]###>
250 2.1.0 Sender OK
>>> RCPT TO:<[email protected]>
250 2.1.5 Recipient OK
>>> DATA
354 Start mail input; end with <CRLF>.<CRLF>
>>> .
250 2.0.0 OK <[email protected]> [Hostname=Bsxsss744.namprd11.prod.outlook.com]
>>> QUIT
221 2.0.0 Service closing transmission channel 

Now you can use this in your automation scripts or timers using the mailx command.

#!/bin/bash

log_file="/etc/app/runtime.log"
recipient="[email protected]"
subject="Log file from /etc/app/runtime.log"

# Check if the log file exists
if [ ! -f "$log_file" ]; then
  echo "Error: Log file not found: $log_file"
  exit 1
fi

# Use mailx to send the log file as an attachment
echo "Sending log file..."
mailx -s "$subject" -a "$log_file" -r "[email protected]" "$recipient" < /dev/null
echo "Log file sent successfully."

Secure it

sudo chown root:root /etc/mail.rc
sudo chmod 600 /etc/mail.rc

The above commands change the file’s owner and group to root, then set the file permissions to 600, which means only the owner (root) has read and write permissions and other users have no access to the file.

Use environment variables: avoid storing sensitive information like passwords directly in the mail.rc file; consider using environment variables for sensitive data and referencing those variables in the configuration.

For example, in the mail.rc file, you can set:

set smtp-auth-password=$MY_EMAIL_PASSWORD

You can set the variable using another config file, store it in Ansible Vault during runtime, or use something like HashiCorp Vault.
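
A rough sketch of that approach – assuming your mailx build actually expands environment variables in mail.rc (test it first) – keeping the secret in a root-only file:

# /root/.mailx_env  (chmod 600) – hypothetical file holding the secret
export MY_EMAIL_PASSWORD='your-app-password-here'

# in the monitoring script, load it before calling mailx:
. /root/.mailx_env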

Sure, I would rather just use Python or PowerShell Core, but you will run into locked-down environments – like OCI-managed DB servers – where mailx is preinstalled and is the only tool you can use 🙁

The fact that you are here means you are probably in the same boat. Hope this helped… until next time.