Kafka 3.8 with Zookeeper SASL_SCRAM

 

Transport Encryption Methods:

SASL/SSL (Solid Teal/Green Lines):

  1. Used for securing communication between producers/consumers and Kafka brokers.
    • SASL (Simple Authentication and Security Layer): Authenticates clients (producers/consumers) to brokers, using SCRAM.
    • SSL/TLS (Secure Sockets Layer/Transport Layer Security): Encrypts the data in transit, ensuring confidentiality and integrity during transmission.

Digest-MD5 (Dashed Yellow Lines):

  1. Secures communication between Kafka brokers and the Zookeeper cluster.
    • Digest-MD5: A challenge-response authentication mechanism; it authenticates brokers to Zookeeper but does not provide full transport encryption

Notes:

While functional, Digest-MD5 is an older mechanism. We opted for it to reduce complexity, and because the Zookeeper nodes had issues connecting to the brokers via SSL/TLS.

  1. We need to test and switch over to the KRaft protocol, which removes the use of Zookeeper altogether
  2. Add IP ACLs for Zookeeper connections using firewalld to limit traffic between the nodes for replication (see the example below)
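
A minimal firewalld sketch for the second item (the zone name is an example; the source IPs are the Zookeeper ensemble members used in this setup, adjust to your environment):

# Put the other cluster nodes in a dedicated zone and only open the Zookeeper ports to them
sudo firewall-cmd --permanent --new-zone=zk-internal
sudo firewall-cmd --permanent --zone=zk-internal --add-source=192.168.166.110
sudo firewall-cmd --permanent --zone=zk-internal --add-source=192.168.166.111
sudo firewall-cmd --permanent --zone=zk-internal --add-source=192.168.166.112
sudo firewall-cmd --permanent --zone=zk-internal --add-port=2181/tcp --add-port=2888/tcp --add-port=3888/tcp
sudo firewall-cmd --reload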

PKI and Certificate Signing

CA certificate for the local PKI

We need to share this PEM file (without the private key) with the customer for authentication.

For internal applications, the CA file must be used for authentication – refer to the configuration example documents.

# Generate CA Key
openssl genrsa -out multicastbits_CA.key 4096
# Generate CA Certificate
openssl req -x509 -new -nodes -key multicastbits_CA.key -sha256 -days 3650 -out multicastbits_CA.crt -subj "/CN=multicastbits_CA"

 

 

Kafka Broker Certificates

# For Node1 - Repeat for other nodes

openssl req -new -nodes -out node1.csr -newkey rsa:2048 -keyout node1.key -subj "/CN=kafka01.multicastbits.com"

openssl x509 -req -CA multicastbits_CA.crt -CAkey multicastbits_CA.key -CAcreateserial -in node1.csr -out node1.crt -days 3650 -sha256
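
The broker configuration later in this guide references JKS keystores and truststores (kafkanode1.keystore.jks / kafkanode1.truststore.jks). A minimal sketch of building them from the signed certificate above, assuming the same file names and passwords used in that section:

# Import the CA into the broker truststore
keytool -importcert -alias multicastbits_CA -file multicastbits_CA.crt -keystore kafkanode1.truststore.jks -storepass truststorePassword -noprompt
# Bundle the node key and signed cert into a PKCS12 file, then convert it into the broker keystore
openssl pkcs12 -export -in node1.crt -inkey node1.key -certfile multicastbits_CA.crt -name kafka01.multicastbits.com -out node1.p12 -password pass:keystorePassword
keytool -importkeystore -srckeystore node1.p12 -srcstoretype PKCS12 -srcstorepass keystorePassword -destkeystore kafkanode1.keystore.jks -deststorepass keystorePassword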

 

 

Create the kafka and zookeeper users

⚠️ Important: Do not skip this step. We need these users to set up authentication in the JAAS configuration.

Before configuring the cluster with SSL and SASL, let’s start up the cluster without authentication and SSL to create the users. This allows us to:

  1. Verify basic dependencies and confirm the Zookeeper and Kafka clusters come up without any issues (“make sure the car starts”)
  2. Create the necessary user accounts for SCRAM
  3. Test for any inter-node communication issues (blocked ports 9092, 9093, 2181, etc.)

 

Here’s how to set up this initial configuration:

Zookeeper Configuration (No SSL or Auth)

Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties

# Zookeeper Configuration without Auth
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888

 

Kafka Broker Configuration (No SSL or Auth)

Create the following file: /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties

# Kafka Broker Configuration without Auth/SSL
broker.id=1
listeners=PLAINTEXT://kafka01.multicastbits.com:9092
advertised.listeners=PLAINTEXT://kafka01.multicastbits.com:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181

 

Open a new shell to the server and start Zookeeper:

/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties

 

Open a new shell to start Kafka:

/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh -daemon /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties

 

 

Create the users:

Open a new shell and run the following commands:

kafka-configs.sh --bootstrap-server ext-kafka01.fleetcam.io:9092 --alter --add-config 'SCRAM-SHA-512=[password=zookeeper-password]' --entity-type users --entity-name ftszk

kafka-configs.sh --zookeeper ext-kafka01.fleetcam.io:2181 --alter --add-config 'SCRAM-SHA-512=[password=kafkaadmin-password]' --entity-type users --entity-name ftskafkaadmin

After the users are created without errors, press Ctrl+C to shut down the services we started earlier.

 

 

SASL_SSL configuration with SCRAM

Zookeeper configuration Notes

  • Zookeeper is configured with SASL/Digest-MD5 due to the SSL issues we faced during the initial setup
  • Zookeeper traffic is isolated within the broker nodes to maintain security

/opt/kafka/kafka_2.13-3.8.0/config/zookeeper.properties:
dataDir=/Data_Disk/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.166.110:2888:3888
server.2=192.168.166.111:2888:3888
server.3=192.168.166.112:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

 

 

The /Data_Disk/zookeeper/myid file is updated to match the corresponding Zookeeper node ID:

cat /Data_Disk/zookeeper/myid
1

 

 

Jaas configuration

Create the JAAS configuration for Zookeeper authentication; it follows this syntax:

/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf

Server {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   user_multicastbitszk="zkpassword";
};

 

KAFKA_OPTS

The KAFKA_OPTS Java variable needs to be passed when Zookeeper is started so it points to the correct JAAS file:

export KAFKA_OPTS="-Djava.security.auth.login.config=<path to the zookeeper-jaas.conf>"

export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"

 

 

There are a few ways to handle this: you can add a script under profile.d or use a custom Zookeeper launch script for the systemd service.

Systemd service

Create the launch shell script for Zookeeper

/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh

#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/zookeeper-jaas.conf"
#Start the zookeeper service
/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/zookeeper-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/zookeeper-NOSSL_AUTH.properties

 

 

After you save the file

chmod +x /opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh

sudo chown -R multicastbitskafka:multicastbitskafka /opt/kafka/kafka_2.13-3.8.0

Create the systemd service file

/etc/systemd/system/zookeeper.service

[Unit]
Description=Apache Zookeeper Service
After=network.target
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/zk-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target

After the file is saved, start the service:

sudo systemctl daemon-reload
sudo systemctl enable zookeeper
sudo systemctl start zookeeper

 

Kafka Broker configuration Notes

/opt/kafka/kafka_2.13-3.8.0/config/server.properties

broker.id=1
listeners=SASL_SSL://kafka01.multicastbits.com:9093
advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093
listener.security.protocol.map=SASL_SSL:SASL_SSL
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks
ssl.keystore.password=keystorePassword
ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks
ssl.truststore.password=truststorePassword
#SASL/SCRAM Authentication
sasl.enabled.mechanisms=SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.mechanism.client=SCRAM-SHA-512
security.inter.broker.protocol=SASL_SSL
#zookeeper
zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181
zookeeper.sasl.client=true
zookeeper.sasl.clientconfig=ZookeeperClient

 

zookeeper connect options

Define the zookeeper servers the broker will connect to

zookeeper.connect=kafka01.multicastbits.com:2181,kafka02.multicastbits.com:2181,kafka03.multicastbits.com:2181

Enable SASL

zookeeper.sasl.client=true

Tell the broker to use the credentials defined under the ZookeeperClient section in the JAAS file used by the Kafka service

zookeeper.sasl.clientconfig=ZookeeperClient

Broker and listener configuration

Define the broker id

broker.id=1

Define the server's listener name and port

listeners=SASL_SSL://kafka01.multicastbits.com:9093

Define the server's advertised listener name and port

advertised.listeners=SASL_SSL://kafka01.multicastbits.com:9093

Define SASL_SSL as the security protocol

listener.security.protocol.map=SASL_SSL:SASL_SSL

Enable ACLs

authorizer.class.name=kafka.security.authorizer.AclAuthorizer
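
Note: with AclAuthorizer enabled and no ACLs defined, requests are denied by default, so the admin/inter-broker principal is commonly declared as a super user. A minimal sketch, assuming the admin user from the JAAS file in the next section (adjust to your environment):

super.users=User:multicastbitskafkaadmin
allow.everyone.if.no.acl.found=false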

Define the Java Keystores

ssl.keystore.location=/opt/kafka/secrets/kafkanode1.keystore.jks

ssl.keystore.password=keystorePassword

ssl.truststore.location=/opt/kafka/secrets/kafkanode1.truststore.jks

ssl.truststore.password=truststorePassword

Jaas configuration

/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf

KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="multicastbitskafkaadmin"
  password="kafkaadmin-password";
};
ZookeeperClient {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="multicastbitszk"
  password="Zookeeper_password";
};

 

 

SASL and SCRAM configuration Notes

Enable SASL SCRAM for authentication

org.apache.kafka.common.security.scram.ScramLoginModule required

Use Digest-MD5 for Zookeeper authentication

org.apache.zookeeper.server.auth.DigestLoginModule required

KAFKA_OPTS

The KAFKA_OPTS Java variable must be passed when the Kafka service is started and must point to the correct JAAS file:

export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"

 

 

Systemd service

Create the launch shell script for kafka

/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh

#!/bin/bash
#export the env variable
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
#Start the kafka service
/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server.properties
#debug - launch config with no SSL - we need this for initial setup and debug
#/opt/kafka/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.13-3.8.0/config/server-NOSSL_AUTH.properties

 

 

Create the systemd service

/etc/systemd/system/kafka.service

[Unit]
Description=Apache Kafka Broker Service
After=network.target zookeeper.service
[Service]
User=multicastbitskafka
Group=multicastbitskafka
ExecStart=/opt/kafka/kafka_2.13-3.8.0/bin/multicastbitskafka-server-start.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target

 

 

Connect, authenticate, and use the Kafka CLI tools

Requirements

  • multicastbitsadmin.keystore.jks
  • multicastbitsadmin.truststore.jks
  • WSL2 with java-11-openjdk-devel wget nano
  • Kafka 3.8 folder extracted locally

Setup your environment

  • Set up WSL2

You can use any Linux environment with JDK 17 or 11.

  • Install dependencies

dnf install -y wget nano java-11-openjdk-devel

Download Kafka and extract it (I'm going to extract it to the home directory under ~/kafka)

# 1. Download Kafka (Choose a version compatible with your server)
wget https://dlcdn.apache.org/kafka/3.8.0/kafka_2.13-3.8.0.tgz
# 2. Extract
tar xzf kafka_2.13-3.8.0.tgz

 

Copy the JKS files (you should generate them with the CA, or use the ones from one of the nodes) to ~/

cp multicastbitsadmin.keystore.jks ~/

 

cp multicastbitsadmin.truststore.jks ~/

Create your admin client properties file

Change the path to fit your setup.

nano ~/kafka-adminclient.properties

# Security protocol and SASL/SSL configuration
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
# SSL Configuration
ssl.keystore.location=/opt/kafka/secrets/multicastbitsadmin.keystore.jks
ssl.keystore.password=keystorepw
ssl.truststore.location=/opt/kafka/secrets/multicastbitsadmin.truststore.jks
ssl.truststore.password=truststorepw
# SASL Configuration
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="#youradminUser#" \
    password="#your-admin-PW#";

 

 

Create the JaaS file for the admin client

nano ~/kafka_client_jaas.conf

Some Kafka CLI tools still look for the jaas.conf via the KAFKA_OPTS environment variable.

KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="#youradminUser#"
  password="#your-admin-PW#";
};

 

Export the Kafka environment variables

export KAFKA_HOME=/opt/kafka/kafka_2.13-3.8.0
export PATH=$PATH:$KAFKA_HOME/bin
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
export KAFKA_OPTS="-Djava.security.auth.login.config=$HOME/kafka_client_jaas.conf"
source ~/.bashrc

 

 

Kafka CLI Usage Examples

Create a user

kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9093 --alter --add-config 'SCRAM-SHA-512=[password=#password#]' --entity-type users --entity-name %username% --command-config ~/kafka-adminclient.properties

 

 

Create a topic

kafka-topics.sh --bootstrap-server kafka01.multicastbits.com:9093 --create --topic %topicname% --partitions 10 --replication-factor 3 --command-config ~/kafka-adminclient.properties

 

 

Create ACLs

External customer user with READ DESCRIBE privileges to a single topic

kafka-acls.sh --bootstrap-server kafka01.multicastbits.com:9093 \
  --command-config ~/kafka-adminclient.properties \
  --add --allow-principal User:customer-user01 \
  --operation READ --operation DESCRIBE --topic Customer_topic
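
As a quick end-to-end check you can produce and consume a few messages with the console tools, using the same client properties file (the topic name here is just an example):

kafka-console-producer.sh --bootstrap-server kafka01.multicastbits.com:9093 --topic test-topic --producer.config ~/kafka-adminclient.properties

kafka-console-consumer.sh --bootstrap-server kafka01.multicastbits.com:9093 --topic test-topic --from-beginning --consumer.config ~/kafka-adminclient.properties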

 

 

Troubleshooting

Here are some common issues you might encounter when setting up and using Kafka with SASL_SCRAM authentication, along with their solutions:

1. Connection refused errors

Issue: Clients unable to connect to Kafka brokers.

Solution:

  • Verify that the Kafka brokers are running and listening on the correct ports.
  • Check firewall settings to ensure the Kafka ports are open and accessible.
  • Confirm that the bootstrap server addresses in client configurations are correct.

2. Authentication failures

Issue: Clients fail to authenticate with Kafka brokers.

Solution:

  • Double-check username and password in the JAAS configuration file.
  • Ensure the SCRAM credentials are properly set up on the Kafka brokers (see the check below).
  • Verify that the correct SASL mechanism (SCRAM-SHA-512) is specified in client configurations.
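
One way to confirm the SCRAM credentials exist on the brokers, using the admin client properties from earlier (the username is a placeholder):

kafka-configs.sh --bootstrap-server kafka01.multicastbits.com:9093 --describe --entity-type users --entity-name %username% --command-config ~/kafka-adminclient.properties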

3. SSL/TLS certificate issues

Issue: SSL handshake failures or certificate validation errors.

Solution:

  • Confirm that the keystore and truststore files are correctly referenced in configurations.
  • Verify that the certificates in the truststore are up to date and not expired (see the handshake check below).
  • Ensure that the hostname in the certificate matches the broker’s advertised listener.
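
A quick way to inspect the certificate a broker presents during the TLS handshake (assuming the CA file generated earlier; adjust the host and port to your environment):

openssl s_client -connect kafka01.multicastbits.com:9093 -CAfile multicastbits_CA.crt -servername kafka01.multicastbits.com </dev/null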

4. Zookeeper connection issues

Issue: Kafka brokers unable to connect to Zookeeper ensemble.

Solution:

  • Verify Zookeeper connection string in Kafka broker configurations.
  • Ensure the Zookeeper servers are running and accessible, and that the ports are open.
  • Check the Zookeeper client authentication settings in the JAAS configuration file.

 

 

NFS Provisioner Setup and Testing Guide for Rancher RKE2/Kubernetes

This guide covers how to add an NFS StorageClass and a dynamic provisioner to Kubernetes using the nfs-subdir-external-provisioner Helm chart. This enables us to mount NFS shares dynamically for PersistentVolumeClaims (PVCs) used by workloads.

Example use cases:

  • Database migrations
  • Apache Kafka clusters
  • Data processing pipelines

Requirements:

  • An accessible NFS share exported with: rw,sync,no_subtree_check,no_root_squash
  • NFSv3 or NFSv4 protocol
  • Kubernetes v1.31.7+ or RKE2 with rke2r1 or later

 

Let's get to it.


1. NFS Server Export Setup

Ensure your NFS server exports the shared directory correctly:

# /etc/exports
/rke-pv-storage  worker-node-ips(rw,sync,no_subtree_check,no_root_squash)

 

  • Replace worker-node-ips with actual IPs or CIDR blocks of your worker nodes.
  • Run sudo exportfs -r to reload the export table, then verify the export as shown below.
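
To confirm the export is visible, you can query the NFS server from a worker node (the server IP matches the one used in the Helm install below):

showmount -e 192.168.162.100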

2. Install NFS Subdir External Provisioner

Add the Helm repo and install the provisioner:

helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

helm install nfs-client-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace kube-system \
  --set nfs.server=192.168.162.100 \
  --set nfs.path=/rke-pv-storage \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=false

Notes:

  • If you want this to be the default storage class, change storageClass.defaultClass=true (or patch the StorageClass later, as shown below).
  • nfs.server should point to the IP of your NFS server.
  • nfs.path must be a valid exported directory from that NFS server.
  • storageClass.name can be referenced in your PersistentVolumeClaim YAMLs using storageClassName: nfs-client.
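
If you decide to make it the default class after the fact, a patch along these lines works (standard Kubernetes annotation; adjust the class name if you changed it):

kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'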

3. PVC and Pod Test

Create a test PVC and pod using the following YAML:

# test-nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
  - name: shell
    image: busybox
    command: [ "sh", "-c", "sleep 3600" ]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-nfs-pvc

 

Apply it:

kubectl apply -f test-nfs-pvc.yaml
kubectl get pvc test-nfs-pvc -w

 

Expected output:

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-nfs-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWX            nfs-client     30s
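
As an optional extra check, you can write a file through the test pod to confirm the NFS mount is actually writable:

kubectl exec test-nfs-pod -- sh -c 'echo nfs-write-test > /data/test.txt'
kubectl exec test-nfs-pod -- cat /data/test.txt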

 


4. Troubleshooting

If the PVC remains in Pending, follow these steps:

Check the provisioner pod status:

kubectl get pods -n kube-system | grep nfs-client-provisioner

 

Inspect the provisioner pod:

kubectl describe pod -n kube-system <pod-name>
kubectl logs -n kube-system <pod-name>

 

Common Issues:

  • Broken State: Bad NFS mount
    mount.nfs: access denied by server while mounting 192.168.162.100:/pl-elt-kakfka

     

    • This usually means the NFS path is misspelled or not exported properly.
  • Broken State: root_squash enabled
    failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/…: permission denied

     

    • Fix by changing the export to use no_root_squash or chown the directory to nobody:nogroup.
  • ImagePullBackOff
    • Ensure nodes have internet access and can reach registry.k8s.io.
  • RBAC errors
    • Make sure the ServiceAccount used by the provisioner has permissions to watch PVCs and create PVs.

5. Healthy State Example

kubectl get pods -n kube-system | grep nfs-client-provisioner-nfs-subdir-external-provisioner
nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m   1/1     Running     0          3m39s

 

kubectl describe pod -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
# Output shows pod is Running with Ready=True

 

kubectl logs -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
...
I0512 21:46:03.752701       1 controller.go:1420] provision "default/test-nfs-pvc" class "nfs-client": volume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" provisioned
I0512 21:46:03.752763       1 volume_store.go:212] Trying to save persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d"
I0512 21:46:03.772301       1 volume_store.go:219] persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" saved
I0512 21:46:03.772353       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Name:"test-nfs-pvc"}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-73481f45-3055-4b4b-80f4-e68ffe83802d
...

 

Once test-nfs-pvc is bound and the pod starts successfully, your setup is working. You can now safely reference the nfs-client storage class in other workloads (e.g., a Strimzi KafkaNodePool).


Advertising VRF Connected/Static routes via MP BGP to OSPF – Guide Dell S4112F-ON – OS 10.5.1.3

I'm going to base this off my VRF Setup and Route Leaking article and continue building on top of it.

Let's say we need to advertise connected routes within VRFs using an IGP to an upstream or downstream IP address. This is one of many ways to get to that objective.

For this example, we are going to use BGP to collect the connected routes and advertise them over OSPF.

Set up the BGP process to collect connected routes

router bgp 65000
 router-id 10.252.250.6
 !
 address-family ipv4 unicast
 !
 neighbor 10.252.250.1
!
vrf Tenant01_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Tenant02_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Tenant03_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Shared_VRF
 !
 address-family ipv4 unicast
  redistribute connected

Set up OSPF to redistribute the routes collected via BGP

router ospf 250 vrf Shared_VRF
 area 0.0.0.0 default-cost 0
 redistribute bgp 65000
interface vlan250
 mode L3
 description OSPF_Routing
 no shutdown
 ip vrf forwarding Shared_VRF
 ip address 10.252.250.6/29
 ip ospf 250 area 0.0.0.0
 ip ospf mtu-ignore
 ip ospf priority 10

Testing and confirmation

Local OSPF Database

Remote device
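
The original screenshots are not reproduced here. As a rough sketch (exact show-command syntax can vary between OS10 releases), the state can be checked from the CLI along these lines.

On the local switch, check the OSPF database and the routes picked up into the Shared_VRF:

show ip ospf database
show ip route vrf Shared_VRF

On the remote OSPF neighbor, confirm the redistributed routes show up as OSPF external routes:

show ip route ospf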

VRF Setup with route leaking guide Dell S4112F-ON – OS 10.5.1.3

Scope –

Create Three VRFs for Three separate clients

Create a Shared VRF

Leak routes from each VRF to the Shared_VRF

Logical overview

Create the VRFs

ip vrf Tenant01_VRF
ip vrf Tenant02_VRF
ip vrf Tenant03_VRF

Create and initialize the Interfaces (SVI, Layer 3 interface, Loopback)

We are creating Layer 3 SVIs Per tenant

interface vlan200
 mode L3
 description Tenant01_NET01
 no shutdown
 ip vrf forwarding Tenant01_VRF
 ip address 10.251.100.254/24
!
interface vlan201
 mode L3
 description Tenant01_NET02
 no shutdown
 ip vrf forwarding Tenant01_VRF
 ip address 10.251.101.254/24
!
interface vlan210
 mode L3
 description Tenant02_NET01
 no shutdown
 ip vrf forwarding Tenant02_VRF
 ip address 172.17.100.254/24
!
interface vlan220
 no ip address
 description Tenant03_NET01
 no shutdown
 ip vrf forwarding Tenant03_VRF
 ip address 192.168.110.254/24
!
interface vlan250
 mode L3
 description OSPF_Routing
 no shutdown
 ip vrf forwarding Shared_VRF
 ip address 10.252.250.6/29

Confirmation

LABCORE# show i
image     interface inventory ip        ipv6      iscsi
LABCORE# show ip interface brief
Interface Name            IP-Address          OK       Method       Status     Protocol
=========================================================================================
Vlan 200                   10.251.100.254/24   YES      manual       up          up
Vlan 201                   10.251.101.254/24   YES      manual       up          up
Vlan 210                   172.17.100.254/24   YES      manual       up          up
Vlan 220                   192.168.110.254/24  YES      manual       up          up
Vlan 250                   10.252.250.6/29     YES      manual       up          up
LABCORE# show ip vrf
VRF-Name                          Interfaces

Shared_VRF                        Vlan250

Tenant01_VRF                      Vlan200-201

Tenant02_VRF                      Vlan210

Tenant03_VRF                      Vlan220

default                           Vlan1

management                        Mgmt1/1/1

Route leaking

For this Example we are going to Leak routes from each of these tenant VRFs in to the Shared VRF

This design will allow each VLAN within the VRFs to see each other, which can be a security issue; however, you can easily control this by

  • narrowing the routes down to hosts
  • using access-lists (not the most ideal, but if you have a playbook you can program this in without any issues)

Real-world use cases may differ; use this as a template for how to leak routes between VRFs and update your config as needed.

Create the route-export statements within the VRFs

ip vrf Shared_VRF
 ip route-import 2:100
 ip route-import 3:100
 ip route-import 4:100
 ip route-export 1:100
ip vrf Tenant01_VRF
 ip route-export 2:100
 ip route-import 1:100
ip vrf Tenant02_VRF
 ip route-export 3:100
 ip route-import 1:100
ip vrf Tenant03_VRF
 ip route-export 4:100
 ip route-import 1:100

Let's explain this a bit

ip vrf Shared_VRF
 ip route-import 2:100 -----------> Import Leaked routes from target 2:100
 ip route-import 3:100 -----------> Import Leaked routes from target 3:100
 ip route-import 4:100 -----------> Import Leaked routes from target 4:100
 ip route-export 1:100  -----------> Export routes to target 1:100

If you need to filter who can import the routes, use a route-map with prefix lists to filter them.

Setup static routes per VRF as needed

ip route vrf Tenant01_VRF 10.251.100.0/24 interface vlan200
ip route vrf Tenant01_VRF 10.251.101.0/24 interface vlan201
!
ip route vrf Tenant02_VRF 172.17.100.0/24 interface vlan210
!
ip route vrf Tenant03_VRF 192.168.110.0/24 interface vlan220
!
ip route vrf Shared_VRF 0.0.0.0/0 10.252.250.1 interface vlan250

  • Now these static routes will be leaked and learned by the Shared VRF
  • The default route on the Shared VRF will be learned downstream by the tenant VRFs
  • Instead of a default route on the Shared VRF, if you scope it to a certain IP or subnet you can prevent traffic routing between the VRFs via the Shared VRF
  • If you need routes leaked directly between tenants, use ip route-import on the VRF as needed

Confirmation

Routes are being distributed via internal BGP process

LABCORE# show ip route vrf Tenant01_VRF
Codes: C - connected
       S - static
       B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
       O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
       N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
       E2 - OSPF external type 2, * - candidate default,
       + - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
  Destination                 Gateway                                        Dist/Metric       Last Change
----------------------------------------------------------------------------------------------------------
  *B IN 0.0.0.0/0           via 10.252.250.1                                 200/0             12:17:42
  C     10.251.100.0/24     via 10.251.100.254       vlan200                 0/0               12:43:46
  C     10.251.101.0/24     via 10.251.101.254       vlan201                 0/0               12:43:46
LABCORE#
LABCORE# show ip route vrf Tenant02_VRF
Codes: C - connected
       S - static
       B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
       O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
       N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
       E2 - OSPF external type 2, * - candidate default,
       + - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
  Destination                 Gateway                                        Dist/Metric       Last Change
----------------------------------------------------------------------------------------------------------
  *B IN 0.0.0.0/0           via 10.252.250.1                                 200/0             12:17:45
  C     172.17.100.0/24     via 172.17.100.254       vlan210                 0/0               12:43:49
LABCORE#
LABCORE# show ip route vrf Tenant03_VRF
Codes: C - connected
       S - static
       B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
       O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
       N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
       E2 - OSPF external type 2, * - candidate default,
       + - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
  Destination                 Gateway                                        Dist/Metric       Last Change
----------------------------------------------------------------------------------------------------------
  *B IN 0.0.0.0/0           via 10.252.250.1                                 200/0             12:17:48
  C     192.168.110.0/24    via 192.168.110.254      vlan220                 0/0               12:43:52
LABCORE# show ip route vrf Shared_VRF
Codes: C - connected
       S - static
       B - BGP, IN - internal BGP, EX - external BGP, EV - EVPN BGP
       O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
       N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
       E2 - OSPF external type 2, * - candidate default,
       + - summary route, > - non-active route
Gateway of last resort is via 10.252.250.1 to network 0.0.0.0
  Destination                 Gateway                                        Dist/Metric       Last Change
----------------------------------------------------------------------------------------------------------
  *S    0.0.0.0/0           via 10.252.250.1         vlan250                 1/0               12:21:33
  B  IN 10.251.100.0/24     Direct,Tenant01_VRF      vlan200                 200/0             09:01:28
  B  IN 10.251.101.0/24     Direct,Tenant01_VRF      vlan201                 200/0             09:01:28
  C     10.252.250.0/29     via 10.252.250.6         vlan250                 0/0               12:42:53
  B  IN 172.17.100.0/24     Direct,Tenant02_VRF      vlan210                 200/0             09:01:28
  B  IN 192.168.110.0/24    Direct,Tenant03_VRF      vlan220                 200/0             09:02:09

We can ping out to the internet from the VRF IPs.
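
On OS10 you can source the ping from a specific VRF; something along these lines (VRF names from this lab, destination is just an example):

ping vrf Shared_VRF 8.8.8.8
ping vrf Tenant01_VRF 8.8.8.8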

Redistribute leaked routes via IGP

You can use an internal BGP process to pick up routes from any VRF and redistribute them to other IGP processes as needed – check the article above for that information.

Vagrant Ansible LAB Guide – Bridged network

Here's a quick guide to get you started with an "Ansible core lab" using Vagrant.

Alright, let's get started.

TLDR Version

  • Install Vagrant
  • Install Virtual-box
  • Create a project folder and cd into it
vagrant init
  • Vagrantfile – link
  • Vagrant Provisioning Shell Script to Deploy Ansible – link
  • Install the vagrant-vbguest plugin to deploy missing guest additions
vagrant plugin install vagrant-vbguest
  • Bring up the Vagrant environment
vagrant up

Install Vagrant and Virtual box

For this demo we are using Windows 10 1909, but you can use the same guide for macOS.

Windows

Download Vagrant and VirtualBox and install them the good ol' way –

https://www.vagrantup.com/downloads.html

https://www.virtualbox.org/wiki/Downloads

https://www.vagrantmanager.com/downloads/

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

Or Using chocolatey

choco install vagrant
choco install virtualbox
choco install vagrant-manager

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

macOS – using brew cask

Install virtual box

$ brew cask install virtualbox

Now install Vagrant either from the website or use homebrew for installing it.

$ brew cask install vagrant

Vagrant-Manager is a nice way to manage all your virtual machines in one place directly from the menu bar.

$ brew cask install vagrant-manager

Install the vagrant-vbguest plugin (We need this with newer versions of Ubuntu)

vagrant plugin install vagrant-vbguest

Setup the Vagrant Environment

Open Powershell

To get started, let's check our environment:

vagrant version

Create a project directory and Initialize the environment

For the project directory I'm using D:\vagrant.

Open PowerShell and run:

mkdir D:\vagrant
cd D:\vagrant

Initialize the environment under the project folder

vagrant init

This will create two items:

.vagrant – Hidden folder holding Base Machines and meta data

Vagrantfile – Vagrant config file

Let's create the Vagrantfile to deploy the VMs

https://www.vagrantup.com/docs/vagrantfile/

The syntax of Vagrantfiles is Ruby, which gives us a lot of flexibility to program in logic when building your files.

I'm using Atom to edit the Vagrantfile.

Vagrant.configure("2") do |config|
  config.vm.define "controller" do |controller|
    controller.vm.box = "ubuntu/trusty64"
    controller.vm.hostname = "LAB-Controller"
    controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.120"
    controller.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
    end
    controller.vm.provision :shell, path: 'Ansible_LAB_setup.sh'
  end
  (1..3).each do |i|
    config.vm.define "vls-node#{i}" do |node|
      node.vm.box = "ubuntu/trusty64"
      node.vm.hostname = "vls-node#{i}"
      node.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "172.17.10.12#{i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = "1024"
      end
    end
  end
end

You can grab the code from my Repo

https://github.com/malindarathnayake/Ansible_Vagrant_LAB/blob/master/Vagrantfile

Let’s talk a little bit about this code and unpack this

Vagrant API version

Vagrant uses API versions for its configuration file; this is how it stays backward compatible. In every Vagrantfile we need to specify which version to use. The current one is version 2, which works with Vagrant 1.1 and up.

Provisioning the Ansible VM

This will

  • Provision the controller Ubuntu VM
  • Create a bridged network adapter
  • Set the host-name – LAB-Controller
  • Set the static IP – 172.17.10.120/24
  • Run the Shell script that installs Ansible using apt-get install (We will get to this below)

Let's start digging in…

Specifying the Controller VM Name, base box and hostname

Vagrant uses a base image to clone a virtual machine quickly. These base images are known as “boxes” in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.

You can find different base boxes from app.vagrantup.com

Or you can create custom base boxes for pretty much anything, including "Cisco VIRL (CML)" images – keep an eye out for the next article on this.

Network configurations

controller.vm.network "public_network", bridge: "Intel(R) I211 Gigabit Network Connection", ip: "your IP"

In this case, we are asking it to create a bridged adapter using the Intel(R) I211 NIC and set the IP address you defined under the ip attribute.

You can find the relevant interface name using

get-netadapter

You can also create a host-only private network

controller.vm.network :private_network, ip: "10.0.0.10"

for more info checkout the network section in the KB

https://www.vagrantup.com/docs/networking/

Define the provider and VM resources

We're declaring VirtualBox (which we installed earlier) as the provider and setting the VM memory to 2048 MB.

You can get more granular with this, refer to the below KB

https://www.vagrantup.com/docs/virtualbox/configuration.html

Define the shell script to customize the VM config and install the Ansible Package

Now this is where we define the provisioning shell script

This script installs Ansible and sets the hosts file entries to make your life easier.

In case you are wondering, VLS stands for V = virtual, L = Linux, S = server.

I use this naming scheme for my VMs. Feel free to use anything you want; make sure it matches what you defined on the Vagrantfile under node.vm.hostname

#!/bin/bash
sudo apt-get update
sudo apt-get install software-properties-common -y
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible -y
echo "
172.17.10.120 LAB-controller
172.17.10.121 vls-node1
172.17.10.122 vls-node2
172.17.10.123 vls-node3" >> /etc/hosts

Create this file and save it as Ansible_LAB_setup.sh in the project folder.

In this case I'm going to save it under D:\vagrant.

You can also do this inline with a script block instead of using a separate file

https://www.vagrantup.com/docs/provisioning/basic_usage.html

Provisioning the Member servers for the lab

We covered most of the code used above; the only difference here is we are using the each method to create 3 VMs from the same template (I'm lazy and it's more convenient).

This will create three Ubuntu VMs with the following hostnames and IP addresses. You should update these values to match your LAN, or use a private adapter.

vls-node1 – 172.17.10.121

vls-node2 – 172.17.10.122

vls-node3 – 172.17.10.123

So now that we are done with explaining the code, let’s run this

Building the Lab environment using Vagrant

Issue the following command to confirm the Vagrantfile parses correctly and to see the machine states:

vagrant status

Issue the following command to bring up the environment

vagrant up

If you get an error about hardware virtualization being unavailable, reboot into UEFI/BIOS and make sure virtualization is enabled:

Intel – VT-D

AMD Ryzen – SVM

If everything is kumbaya you will see vagrant firing up the deployment

It will provision 4 VMs as we specified

Notice that since we have the vagrant-vbguest plugin installed, it will reinstall the relevant guest tools along with the dependencies for the OS.

==> vls-node3: Machine booted and ready!
[vls-node3] No Virtualbox Guest Additions installation found.
rmmod: ERROR: Module vboxsf is not currently loaded
rmmod: ERROR: Module vboxguest is not currently loaded
Reading package lists...
Building dependency tree...
Reading state information...
Package 'virtualbox-guest-x11' is not installed, so not removed
The following packages will be REMOVED:
  virtualbox-guest-utils*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5799 kB disk space will be freed.
(Reading database ... 61617 files and directories currently installed.)
Removing virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for man-db (2.8.7-3) ...
(Reading database ... 61604 files and directories currently installed.)
Purging configuration files for virtualbox-guest-utils (6.0.14-dfsg-1) ...
Processing triggers for systemd (242-7ubuntu3.7) ...
Reading package lists...
Building dependency tree...
Reading state information...
linux-headers-5.3.0-51-generic is already the newest version (5.3.0-51.44).
linux-headers-5.3.0-51-generic set to manually installed.

Check the status

vagrant status

Testing

Connecting via SSH to your VMs

vagrant ssh controller

"controller" is the VM name we defined before, not the hostname. You can find this by running vagrant status in PowerShell or your terminal.

We are going to connect to our controller and check everything

Little bit more information on the networking side

Vagrant adds two interfaces for each VM:

NIC 1 – NAT'd to the host (control plane for Vagrant to manage the VMs)

NIC 2 – Bridged adapter we provisioned in the script with the IP address

The default route is set via the private (NAT'd) interface (you can't change it)

Netplan configs

Vagrant creates a custom netplan YAML for the interface configs.
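
On netplan-based Ubuntu guests the generated file is usually /etc/netplan/50-vagrant.yaml (an assumption; older boxes such as trusty use /etc/network/interfaces instead). You can inspect it and the routing table from inside the VM:

cat /etc/netplan/50-vagrant.yaml
ip route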


Destroy/Tear-down the environment

vagrant destroy -f

https://www.vagrantup.com/intro/getting-started/teardown.html

I hope this helped someone. When I started with Vagrant a few years back it took me a few tries to figure out the system and the logic behind it; this should give you a basic understanding of how things are plugged together.

Let me know in the comments if you see any issues or mistakes.

Until Next time…..

Azure AD Sync Connect No-Start-Connection status

Issue

Received the following error from Azure AD stating that password synchronization was not working on the tenant.

When I manually initiate a delta sync, I see the following logs:

"The Specified Domain either does not exist or could not be contacted"


Checked the following

  • Restarted the ADSync services
  • Resolved the ADDS domain FQDN and DNS – working
  • Tested the required ports for AD Sync using portqry (example below) – found issues with the primary ADDS server defined in the DNS values
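
A portqry sketch for the port checks above (the DC hostname is a placeholder; repeat for the other AD ports such as 88, 135, 445, and 636):

portqry -n dc01.yourdomain.local -p tcp -e 389
portqry -n dc01.yourdomain.local -p tcp -e 445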

Root Cause

Turns out the domain controller defined as the primary DNS value was going through updates; it responds to DNS queries but doesn't return any data (a brown-out state).

Assumption

When checking DNS, since the primary DNS server still accepts connections, Windows doesn't fall back to the secondary and tertiary servers defined under DNS servers.

This might happen if you are using an ADDS server via an S2S tunnel/MPLS when the latency goes high.

Resolution

Check and make sure the ADDS/DNS servers defined on the AD Sync server are alive and responding.

In my case I just updated the “Primary” DNS value with the Umbrella appliance IP (this acts as a proxy and handles the failover).

Hybrid Exchange mailbox On-boarding : Target user already has a primary mailbox – Fix

During an Office 365 migration in a hybrid environment with AAD Connect, I ran into the following scenario:

  • Hybrid Co-Existence Environment with AAD-Sync
  • User [email protected] has a mailbox on-premises. Jon is represented as a Mail User in the cloud with an office 365 license
  • [email protected] had a cloud-only mailbox prior to the initial AD sync being run
  • A user account is registered as a mail user and has a valid license attached
  • During the Office 365 remote mailbox move, we end up with the following error during validation, and removing the immutable ID and remapping it to the on-premises account won't fix the issue:
Target user 'Sam fisher' already has a primary mailbox.
+ CategoryInfo : InvalidArgument: (tsu:MailboxOrMailUserIdParameter) [New-MoveRequest], RecipientTaskException
+ FullyQualifiedErrorId : [Server=Pl-EX001,RequestId=19e90208-e39d-42bc-bde3-ee0db6375b8a,TimeStamp=11/6/2019 4:10:43 PM] [FailureCategory=Cmdlet-RecipientTaskException] 9418C1E1,Microsoft.Exchange.Management.Migration.MailboxRep
lication.MoveRequest.NewMoveRequest
+ PSComputerName : Pl-ex001.Paladin.org

It turns out this happens due to an unclean cloud object in MSOL. Exchange Online keeps pointers that indicate there used to be a mailbox in the cloud for this user.
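
Before clearing anything, you can inspect the leftover pointers on the user object; a wildcard property filter keeps this sketch independent of exact property names:

Get-User [email protected] | Format-List Name,RecipientType*,*Mailbox*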

Option 1 (nuclear option)

The fix for this problem is to delete the *MSOL user object* for Sam and re-sync it from on-premises. This would delete [email protected] from the cloud – but it would remove the user from all workloads, not only Exchange. This is problematic because Sam is already using Teams, OneDrive, and SharePoint.

Option 2

Clean up only the office 365 mailbox pointer information

PS C:\> Set-User [email protected] -PermanentlyClearPreviousMailboxInfo 
Confirm
Confirm
Are you sure you want to perform this action?
Delete all existing information about user "[email protected]"?. This operation will clear existing values from
Previous home MDB and Previous Mailbox GUID of the user. After deletion, reconnecting to the previous mailbox that
existed in the cloud will not be possible and any content it had will be unrecoverable PERMANENTLY. Do you want to
continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): a

Executing this leaves you with a clean object without the duplicate-mailbox problem.

In some cases when you run this command you will get the following output:

 “Command completed successfully, but no user settings were changed.”

If this happens

Remove the license from the user temporarily and run the command to remove the previous mailbox data.

Then you can re-add the license.

 

Upgrading VMware EXSI Hosts using Vcenter Update Manager Baseline (6.5 to 6.7 Update 2)

Update Manager has been bundled with the vCenter Server Appliance since version 6.5; it's a plug-in that runs on the vSphere Web Client. We can use the component to

  • patch/upgrade hosts
  • deploy .vib files within the V-Center
  • Scan your VC environment and report on any out of compliance hosts

Hardcore/experienced VMware operators will scoff at this article, but I have seen many organizations still using iLO/iDRAC to mount an ISO to update hosts, and they have no idea this function even exists.

Now that that's out of the way, let's get to the how-to part of this.

In Vcenter click the “Menu” and drill down to the “Update Manager”

This Blade will show you all the nerd knobs and overview of your current Updates and compliance levels

Click on the “Baselines” Tab

You will have two predefined baselines for security patches created by vCenter; let's keep those aside for now.

Navigate to the “ESXi Images” Tab, and Click “Import”

Once the Upload is complete, Click on “New Baseline”

Fill in a name and description that make sense to anyone who logs in, and click Next.

Select the image you just uploaded on the next screen, then continue through the wizard and complete it.

Note – If you have other 3rd-party software for ESXi, you can create separate baselines for those and use baseline groups to push out upgrades and VIB files at the same time.

Now click the “Menu” and navigate back to “Hosts and Clusters”

Now you can apply the baseline at various levels within the vCenter hierarchy:

vCenter | Datacenter | Cluster | Host

Depending on your use case, pick the right level.

Excerpt from the KB 

For ESXi hosts in a cluster, the remediation process is sequential by default. With Update Manager, you can select to run host remediation in parallel.

When you remediate a cluster of hosts sequentially and one of the hosts fails to enter maintenance mode, Update Manager reports an error, and the process stops and fails. The hosts in the cluster that are remediated stay at the updated level. The ones that are not remediated after the failed host remediation are not updated. If a host in a DRS enabled cluster runs a virtual machine on which Update Manager or vCenter Server are installed, DRS first attempts to migrate the virtual machine running vCenter Server or Update Manager to another host so that the remediation succeeds. In case the virtual machine cannot be migrated to another host, the remediation fails for the host, but the process does not stop. Update Manager proceeds to remediate the next host in the cluster.

The host upgrade remediation of ESXi hosts in a cluster proceeds only if all hosts in the cluster can be upgraded.

Remediation of hosts in a cluster requires that you temporarily disable cluster features such as VMware DPM and HA admission control. Also, turn off FT if it is enabled on any of the virtual machines on a host, and disconnect the removable devices connected to the virtual machines on a host, so that they can be migrated with vMotion. Before you start a remediation process, you can generate a report that shows which cluster, host, or virtual machine has the cluster features enabled.

Link to KB on Remediation


Moving on: for this example, since I have only 2 hosts, we are going to apply the baseline at the cluster level but apply the remediation at the host level.

Host 1 > Enter Maintenance > Remediation > Update complete and online

Host 2 > Enter Maintenance > Remediation > Update complete and online

Select the cluster, Click the “Updates” Tab and click on “Attach” on the Attached baselines section

Select and attach the baseline we created before

Click “Check Compliance” to scan and get a report

Select the host in the cluster, enter maintenance mode

Click “REMEDIATE” to start the upgrade. (If you do this at the cluster level and you have DRS, Update Manager will update each node.)

This will reboot the host and go through the update process

Foot Notes –

You might run into the following issue

“vCenter cannot deploy Host upgrade agent to host”

Cause 1

The scratch partition is full; use vCenter to change the scratch folder location.

VMWARE KB

Creating a persistent scratch location for ESXi  – https://kb.vmware.com/s/article/1033696

Cause 2

Hardware is not compatible.

I had this issue due to 6.7 dropping support for an LSI RAID card on older firmware; you need to do some footwork and check the log files to figure out why it's failing.

Vmware HCL – Link

ESXI and Vcenter log file locations – link

“System logs on hosts are stored on non-persistent storage” message on VCenter

Ran into this pesky little error message recently on a vCenter environment.

If the logs are stored on a local scratch disk, vCenter will display an alert stating –  “System logs on host xxx are stored on non-persistent storage”

Configure ESXi Syslog location – vSphere Web Client

vCenter > Select “Host” > Configure > Advanced System Settings

Click on Edit and search for “Syslog.global.logDir”

Edit the value and in this case, I’m going to use the local data store (Localhost_DataStore01) to store the syslogs.

You can also define a remote syslog server using the “Syslog.global.LogHost” setting

Configure ESXi Syslog location – ESXCLI

Ssh on to the host

Check the current location

esxcli system syslog config get

*logs stored on the local scratch disk

Manually Set the Path

esxcli system syslog config set --logdir=/vmfs/directory/path

you can find the VMFS volume names/UUIDs under  –

/vmfs/volumes

remote syslog server can be set using

esxcli system syslog config set --loghost='tcp://hostname:port'

Load the configuration changes with the syslog reload command

esxcli system syslog reload

The logs will immediately begin populating the specified location.

Unable to upgrade vCenter 6.5/6.7 to U2: Root password expired

As a part of my pre-flight check for vCenter upgrades I like to mount the ISO and go through the first 3 steps; during this I noticed the installer cannot connect to the source appliance, with this error:

2019-05-01T20:05:02.052Z - info: Stream :: close
2019-05-01T20:05:02.052Z - info: Password not expired
2019-05-01T20:05:02.054Z - error: sourcePrecheck: error in getting source Info: ServerFaultCode: Failed to authenticate with the guest operating system using the supplied credentials.
2019-05-01T20:05:03.328Z - error: Request timed out after 30000 ms, url: https://vcenter.companyABC.local:443/
2019-05-01T20:05:09.675Z - info: Log file was saved at: C:\Users\MCbits\Desktop\installer-20190501-160025555.log

Trying to reset via the admin interface or the DCUI didn't work; after digging around I found a way to reset it by forcing vCenter to boot into single-user mode.

Procedure:

  1. Take a snapshot or backup of the vCenter Server Appliance before proceeding. Do not skip this step.
  2. Reboot the vCenter Server Appliance.
  3. After the OS starts, press e key to enter the GNU GRUB Edit Menu.
  4. Locate the line that begins with the word Linux.
  5. Append these entries to the end of the line: rw init=/bin/bash The line should look like the following screenshot:

After adding the statement, press F10 to continue booting 

Vcenter appliance will boot into single user mode

Type passwd to reset the root password

if you run into the following error message

"Authentication token lock busy"

You need to re-mount the root filesystem as read-write; this will allow you to make changes:

mount -o remount,rw /

Until next time !!!