Quick and easy way to find the PCI-E Slot number of PCI-E Add On card GPU, NIC, etc on Linux/Proxmox

I was working on a vGPU POC using PVE, since Broadcom screwed us with the vSphere licensing costs (new post incoming about this adventure).

Anyway, I needed to find the PCI-E slot used by the A4000 GPU on the host so I could disable it for troubleshooting.

Guide

First, we need to find the occupied slots and the bus address for each slot:

sudo dmidecode -t slot | grep -E "Designation|Usage|Bus Address"

The output shows the slot designation, current usage, and bus address:

        Designation: CPU SLOT1 PCI-E 4.0 X16
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT2 PCI-E 4.0 X8
        Current Usage: In Use
        Bus Address: 0000:41:00.0
        Designation: CPU SLOT3 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:c1:00.0
        Designation: CPU SLOT4 PCI-E 4.0 X8
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT5 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:c2:00.0
        Designation: CPU SLOT6 PCI-E 4.0 X16
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT7 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:81:00.0
        Designation: PCI-E M.2-M1
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: PCI-E M.2-M2
        Current Usage: Available
        Bus Address: 0000:ff:00.0

We can use lspci -s <BusAddress> to see what's installed in each slot:

lspci -s 0000:c2:00.0
c2:00.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1)

lspci -s 0000:81:00.0
81:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
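The two steps can be glued together with a small parser. This is just a sketch that filters the dmidecode output down to the slots that are in use; it's shown here against a sample of the output above, but on a live host you would pipe `sudo dmidecode -t slot` into it instead of the heredoc:

```shell
# Print "Designation -> Bus Address" for every slot whose Current Usage is "In Use"
parse_slots() {
  awk -F': ' '
    /Designation/   { slot = $2 }
    /Current Usage/ { usage = $2 }
    /Bus Address/   { if (usage == "In Use") print slot " -> " $2 }
  '
}

# Sample input taken from the dmidecode output above
parse_slots <<'EOF'
Designation: CPU SLOT2 PCI-E 4.0 X8
Current Usage: In Use
Bus Address: 0000:41:00.0
Designation: CPU SLOT4 PCI-E 4.0 X8
Current Usage: Available
Bus Address: 0000:ff:00.0
EOF
# -> CPU SLOT2 PCI-E 4.0 X8 -> 0000:41:00.0
```

Each in-use bus address it prints can then be fed straight to lspci -s.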

I'm sure there is a more elegant way to do this, but it worked as a quick-ish way to find what I needed. If you know a better way, please share it in the comments.

Until next time!!!

Reference –

https://stackoverflow.com/questions/25908782/in-linux-is-there-a-way-to-find-out-which-pci-card-is-plugged-into-which-pci-sl

During an Office 365 migration in a hybrid environment with AAD Connect, I ran into the following scenario:

  • Hybrid Co-Existence Environment with AAD-Sync
  • User [email protected] has a mailbox on-premises. Jon is represented as a Mail User in the cloud with an office 365 license
  • [email protected] had a cloud-only mailbox before the initial AD sync was run
  • A user account is registered as a mail-user and has a valid license attached
  • During the Office 365 remote mailbox move, we end up with the following error during validation, and removing the immutable ID and remapping it to the on-premises account won't fix the issue:
Target user 'Sam fisher' already has a primary mailbox.
+ CategoryInfo : InvalidArgument: (tsu:MailboxOrMailUserIdParameter) [New-MoveRequest], RecipientTaskException
+ FullyQualifiedErrorId : [Server=Pl-EX001,RequestId=19e90208-e39d-42bc-bde3-ee0db6375b8a,TimeStamp=11/6/2019 4:10:43 PM] [FailureCategory=Cmdlet-RecipientTaskException] 9418C1E1,Microsoft.Exchange.Management.Migration.MailboxRep
lication.MoveRequest.NewMoveRequest
+ PSComputerName : Pl-ex001.Paladin.org

It turns out this happens due to an unclean cloud object in MSOL: Exchange Online keeps pointers indicating that there used to be a mailbox in the cloud for this user.

Option 1 (nuclear option)

The fix here is to delete the *MSOL User Object* for Sam and re-sync it from on-premises. This would delete [email protected] from the cloud – but it would delete the user from all workloads, not only Exchange. This is problematic because Sam is already using Teams, OneDrive, and SharePoint.

Option 2

Clean up only the office 365 mailbox pointer information

PS C:\> Set-User [email protected] -PermanentlyClearPreviousMailboxInfo 
Confirm
Are you sure you want to perform this action?
Delete all existing information about user "[email protected]"?. This operation will clear existing values from
Previous home MDB and Previous Mailbox GUID of the user. After deletion, reconnecting to the previous mailbox that
existed in the cloud will not be possible and any content it had will be unrecoverable PERMANENTLY. Do you want to
continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): a

Executing this leaves you with a clean object without the duplicate-mailbox problem.

In some cases, running this command returns the following output:

 “Command completed successfully, but no user settings were changed.”

If this happens, remove the license from the user temporarily, run the command again to clear the previous mailbox data, and then re-add the license.

 

Issue

Received the following error from Azure AD stating that password synchronization was not working on the tenant.

When I manually initiated a delta sync, I saw the following in the logs:

"The Specified Domain either does not exist or could not be contacted"


Checked the following

  • Restarted the ADSync services
  • Resolved the ADDS domain FQDN via DNS – working
  • Tested the required ports for AD sync using PortQry – found issues with the primary ADDS server defined in the DNS values

Root Cause

Turns out the domain controller defined as the primary DNS value was going through updates; it responds on DNS but doesn't return any data (a brown-out state).

Assumption

Since the primary DNS server still responds to connections, Windows never falls back to the secondary and tertiary servers defined under DNS servers.

This can also happen if you are using an ADDS server over an S2S tunnel/MPLS when the latency goes high.

Resolution

Check and make sure the ADDS DNS servers defined on the AD sync server are alive and responding.

In my case, I just updated the primary DNS value with the Umbrella appliance IP (it acts as a proxy and handles the failover).

Remember the Export-Mailbox command on Exchange 2007??? The main problem I personally had was the annoying Outlook requirement.
With the Exchange Server 2010 Service Pack 1 release, M$ introduced a new cmdlet to export mailboxes on the server. And it does not require Outlook.
New-MailboxExportRequest

Step 01 – Mailbox Import Export Role Assignment
Grant the user account permissions to export mailboxes (By default no account has the privileges to export mailboxes)
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User administrator

Step 02 – Setup the Export File Location
We need a network share to export the files (e.g. \\Exch01\PST_export).
Note:
The cmdlet gives an error if you point to a directory directly on the hard disk (e.g. F:\PST_export).
Create a shared folder on a server/NAS and grant the Exchange Trusted Subsystem account read/write permissions to the folder.
Exporting Mailbox Items with "New-MailboxExportRequest"

Supporting cmdlets that can be used with a mailbox export request:
  • New-MailboxExportRequest – Start the process of exporting a mailbox or personal archive to a .pst file. You can create more than one export request per mailbox. Each request must have a unique name.
  • Set-MailboxExportRequest – Change export request options after the request is created, or recover from a failed request.
  • Suspend-MailboxExportRequest – Suspend an export request any time after the request is created but before it reaches the status of Completed.
  • Resume-MailboxExportRequest – Resume an export request that's suspended or failed.
  • Remove-MailboxExportRequest – Remove fully or partially completed export requests. Completed export requests aren't automatically cleared; you must use this cmdlet to remove them.
  • Get-MailboxExportRequest – View general information about an export request.
  • Get-MailboxExportRequestStatistics – View detailed information about an export request.
In this example
Shared folder name – PST_export
Server name – EXCH01
Share path – \\Exch01\PST_export
Mailbox – amy.webber

Folder permissions – 

For this example, we are going to use the New-MailboxExportRequest cmdlet with the following parameters:

-BadItemLimit 200 -AcceptLargeDataLoss
AcceptLargeDataLoss
The AcceptLargeDataLoss parameter specifies that a large amount of data loss is acceptable if the BadItemLimit is set to 51 or higher. Items are considered corrupted if the item can’t be read from the source database or can’t be written to the target database. Corrupted items won’t be available in the destination mailbox or .pst file.
BadItemLimit
The BadItemLimit parameter specifies the number of bad items to skip if the request encounters corruption in the mailbox. Use 0 to not skip bad items. The valid input range for this parameter is from 0 through 2147483647. The default value is 0.
Exporting the Whole Mailbox
Run the following cmdlet to initiate the mailbox export request:
New-MailboxExportRequest -BadItemLimit 200 -AcceptLargeDataLoss -Mailbox amy.webber -FilePath \\Exch01\PST_export\amy.webber.pst

Exporting the User’s Online Archive
If you want to export the user's online archive to .pst, use the -IsArchive parameter.

New-MailboxExportRequest -BadItemLimit 200 -AcceptLargeDataLoss -Mailbox amy.webber -IsArchive -FilePath \\Exch01\PST_export\amy.webber-Archive.pst

Exporting a Specific Folder
You can export a single folder from the user's mailbox using the -IncludeFolders parameter.
To export the inbox folder:
New-MailboxExportRequest -BadItemLimit 200 -AcceptLargeDataLoss -Mailbox amy.webber -IncludeFolders "#Inbox#" -FilePath \\Exch01\PST_export\amy.webber.pst
Checking the Progress of the Mailbox Export Request

To check the current status of the mailbox export request, use the following cmdlet:
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics
People do crazy stuff scripting with this cmdlet. Look around the interwebs for some scripts.
Until next time…

I wanted a way around Cloud Key issues and a ZTP solution for UniFi hardware, so I can direct-ship equipment to a site and provision it securely over the internet without having to stand up an L2L tunnel.

Alright, let's get started…

This guide is applicable to any Ubuntu-based install, but I'm going to use Amazon Lightsail for the demo since, at the time of writing, it's the cheapest option I can find with enough compute resources and a static IP included.

2 GB RAM, 1 vCPU, 60 GB SSD

OpEx (recurring cost) – $10 per month, as of February 2019

Guide

Dry Run

1. Set up Lightsail instance
2. Create and attach static IP
3. Open necessary ports
4. Set up UniFi packages
5. Set up SSL using certbot and Let's Encrypt
6. Add the certs to the UniFi controller
7. Set up a cron job for SSL auto-renewal
8. Adopting UniFi devices

1. Set up LightSail instance

Login to – https://lightsail.aws.amazon.com

Spin up a Lightsail instance:

Set a name for the instance and provision it.

2. Create and attach static IP

Click on the instance name and click on the networking tab:

Click “Create Static IP”:

3. Open necessary ports

TCP or UDP   Port   Usage
TCP          80     Device inform URL used for adoption
TCP          443    Cloud Access service
UDP          3478   STUN
TCP          8080   Device and controller communication
TCP          8443   Controller GUI/API (web browser)
TCP          8880   HTTP portal redirection
TCP          8843   HTTPS portal redirection

You can disable or lock down the ports as needed using iptables, depending on your security posture.

Post spotlight-

https://www.lammertbies.nl/comm/info/iptables.html#intr
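As a sketch (not from the original guide), locking the controller GUI/API port down to a management network could look like the following iptables-restore fragment; 192.0.2.0/24 is a placeholder range, so substitute your own trusted subnet:

```
*filter
# Allow the controller GUI/API (8443) only from the management network
-A INPUT -p tcp --dport 8443 -s 192.0.2.0/24 -j ACCEPT
-A INPUT -p tcp --dport 8443 -j DROP
COMMIT
```

Load it with sudo iptables-restore -n < rules.v4 (the -n/--noflush flag appends without flushing your existing rules).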

4. Set up UniFi packages

https://help.ubnt.com/hc/en-us/articles/209376117-UniFi-Install-a-UniFi-Cloud-Controller-on-Amazon-Web-Services#7

Add the Ubiquiti repository to /etc/apt/sources.list:
echo "deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti" | sudo tee -a /etc/apt/sources.list
Add the Ubiquiti GPG Key:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50

Update the server’s repository information:

sudo apt-get update

Install Java 8 runtime

You need the Java 8 runtime to run the UniFi Controller.

Add Oracle's PPA (Personal Package Archive) to your list of sources so that Ubuntu knows where to check for updates, using the add-apt-repository command, then install the packages:

sudo add-apt-repository ppa:webupd8team/java -y
sudo apt install java-common oracle-java8-installer

Update your package repository by issuing the following command:

sudo apt-get update

The oracle-java8-set-default package automatically sets Oracle JDK 8 as the default. Once the installation is complete, we can check the Java version:

java -version

java version "1.8.0_191"

MongoDB

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6

echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list

Retrieve the latest package information:

sudo apt update

sudo apt-get install apt-transport-https

Install the UniFi Controller packages:

sudo apt install unifi

You should be able to access the web interface and go through the initial setup wizard:

https://yourIPaddress:8443

5. Set up SSL using certbot and Let's Encrypt

Let's get that green lock up in here, shall we?

So, a few things to note here… UniFi doesn't really have a straightforward way to import certificates; you have to use the Java keystore commands to import the cert, but there is a very handy script built by Steve Jenkins that makes this super easy.

First, we need to request a cert and have it signed by the Let's Encrypt certificate authority.

Let's start by adding the repository and installing the EFF certbot package:

sudo apt-get update
sudo apt-get install software-properties-common 
sudo add-apt-repository universe 
sudo add-apt-repository ppa:certbot/certbot 
sudo apt-get update 
sudo apt-get install certbot

5.1 Update/add your DNS record and make sure it has propagated (this is important)

Note – the DNS name should point to the static IP we attached to our Lightsail instance.
I'm going to use the following A record for this example:

unifyctrl01.multicastbits.com

Ping it from the controller and make sure the server can resolve it.

ping unifyctrl01.multicastbits.com


You won't see any echo replies because ICMP is not allowed by the AWS firewall rules – leave it as is; we just need the server to see the IP resolving for the DNS A record.

5.2 Request the certificate

Issue the following command to start certbot in certonly mode

sudo certbot certonly
usage: 
  certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
certificate. The most common SUBCOMMANDS and flags are:

obtain, install, and renew certificates:
    (default) run   Obtain & install a certificate in your current webserver
    certonly        Obtain or renew a certificate, but do not install it
    renew           Renew all previously obtained certificates that are near expiry
    enhance         Add security enhancements to your existing configuration
   -d DOMAINS       Comma-separated list of domains to obtain a certificate for

 

5.3 Follow the wizard

Select option #1 (spin up a temporary web server).

Enter all the information requested for the cert request.

This will save the generated certificate and private key to the following directory:

/etc/letsencrypt/live/DNSName/

All you need to worry about are these files:

  • cert.pem
  • fullchain.pem
  • privkey.pem

6. Import the certificate to the UniFi controller

You can do this manually using keytool import:

https://crosstalksolutions.com/secure-unifi-controller/
https://docs.oracle.com/javase/tutorial/security/toolsign/rstep2.html

But for this, we are going to use the handy SSL import script made by Steve Jenkins.

6.1  Download Steve Jenkins UniFi SSL Import Script

Copy the unifi_ssl_import.sh script to your server

wget https://raw.githubusercontent.com/stevejenkins/unifi-linux-utils/master/unifi_ssl_import.sh

6.2 Modify Script

Install Nano if you don’t have it (it’s better than VI in my opinion. Some disagree, but hey, I’m entitled to my opinion)

sudo apt-get install nano
nano unifi_ssl_import.sh

Change hostname.example.com to the actual hostname you wish to use. In my case:

UNIFI_HOSTNAME=unifyctrl01.multicastbits.com

Since we are using Ubuntu, leave the following three lines for Fedora/RedHat/CentOS commented out:

#UNIFI_DIR=/opt/UniFi
#JAVA_DIR=${UNIFI_DIR}
#KEYSTORE=${UNIFI_DIR}/data/keystore 

Uncomment the following three lines for Debian/Ubuntu:

UNIFI_DIR=/var/lib/unifi 
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

Since we are using Let's Encrypt, set:

LE_MODE=yes

Here's what I used for this demo:

#!/usr/bin/env bash

# unifi_ssl_import.sh
# UniFi Controller SSL Certificate Import Script for Unix/Linux Systems
# by Steve Jenkins <http://www.stevejenkins.com/>
# Part of https://github.com/stevejenkins/ubnt-linux-utils/
# Incorporates ideas from https://source.sosdg.org/brielle/lets-encrypt-scripts
# Version 2.8
# Last Updated Jan 13, 2017

# CONFIGURATION OPTIONS
UNIFI_HOSTNAME=unifyctrl01.multicastbits.com
UNIFI_SERVICE=unifi

# Uncomment following three lines for Fedora/RedHat/CentOS
#UNIFI_DIR=/opt/UniFi
#JAVA_DIR=${UNIFI_DIR}
#KEYSTORE=${UNIFI_DIR}/data/keystore

# Uncomment following three lines for Debian/Ubuntu
UNIFI_DIR=/var/lib/unifi
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

# Uncomment following three lines for CloudKey
#UNIFI_DIR=/var/lib/unifi
#JAVA_DIR=/usr/lib/unifi
#KEYSTORE=${JAVA_DIR}/data/keystore

# FOR LET'S ENCRYPT SSL CERTIFICATES ONLY
# Generate your Let's Encrtypt key & cert with certbot before running this script
LE_MODE=yes
LE_LIVE_DIR=/etc/letsencrypt/live

# THE FOLLOWING OPTIONS NOT REQUIRED IF LE_MODE IS ENABLED
PRIV_KEY=/etc/ssl/private/hostname.example.com.key
SIGNED_CRT=/etc/ssl/certs/hostname.example.com.crt
CHAIN_FILE=/etc/ssl/certs/startssl-chain.crt

#rest of the script Omitted

6.3 Make script executable:
chmod a+x unifi_ssl_import.sh
6.4 Run script:
sudo ./unifi_ssl_import.sh

This script will:

  • Back up the old keystore file (very handy – something I always forget to do)
  • Update the relevant keystore file with the LE cert
  • Restart the service to apply the new cert

7. Set up automatic certificate renewal

Let's Encrypt certs expire every 3 months. You can easily renew by running:

certbot renew

This uses the existing config you used to generate the cert and renews it.

Then run the SSL import script to update the controller cert.

You can automate this with a cron job.

Copy the modified import script you used in step 6 to /bin/certupdate/unifi_ssl_import.sh:

sudo mkdir /bin/certupdate/
cp /home/user/unifi_ssl_import.sh /bin/certupdate/unifi_ssl_import.sh

Switch to root and edit the root crontab, adding the following lines (per-user crontabs edited with crontab -e take no user field):

sudo su
crontab -e
0 1 1 1,3,5,7,9,11 * certbot renew
15 1 1 1,3,5,7,9,11 * /bin/certupdate/unifi_ssl_import.sh

Save and exit nano with CTRL+X followed by Y.

Check the crontab for root and confirm:

crontab -l

At 01:00 on day-of-month 1 in January, March, May, July, September, and November, the first job attempts to renew the cert; at 01:15 the second job updates the keystore with the new cert.

 

Useful links –

https://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/

https://crontab.guru/#

8. Adopting UniFi devices to the new Controller with SSH or other L3 adoption methods

If you can SSH into the AP, it’s possible to do L3-adoption via CLI command:

1. Make sure the AP is running the same firmware as the controller. If it is not, see this guide: UniFi – Changing the Firmware of a UniFi Device.

2. Make sure the AP is in factory default state. If it’s not, do:

syswrapper.sh restore-default

3. SSH into the device and type the following and hit enter:

set-inform http://ip-of-controller:8080/inform

4. After issuing the set-inform, the UniFi device will show up for adoption. Once you click adopt, the device will appear to go offline.

5. Once the device goes offline, issue the  set-inform  command from step 3 again. This will permanently save the inform address, and the device will start provisioning.

https://help.ubnt.com/hc/en-us/articles/204909754-UniFi-Device-Adoption-Methods-for-Remote-UniFi-Controllers

Managing the UniFi controller services

# to stop the controller
$ sudo service unifi stop

# to start the controller
$ sudo service unifi start

# to restart the controller
$ sudo service unifi restart

# to view the controller's current status
$ sudo service unifi status

Troubleshooting issues

cat /var/log/unifi/server.log

Go through the logs and google the issue; the best part about Ubiquiti gear is the strong community support.

 


This guide covers how to add an NFS StorageClass and a dynamic provisioner to Kubernetes using the nfs-subdir-external-provisioner Helm chart. This enables us to mount NFS shares dynamically for PersistentVolumeClaims (PVCs) used by workloads.

Example use cases:

  • Database migrations
  • Apache Kafka clusters
  • Data processing pipelines

Requirements:

  • An accessible NFS share exported with: rw,sync,no_subtree_check,no_root_squash
  • NFSv3 or NFSv4 protocol
  • Kubernetes v1.31.7+ or RKE2 with rke2r1 or later

 

Let's get to it.


1. NFS Server Export Setup

Ensure your NFS server exports the shared directory correctly:

# /etc/exports
/rke-pv-storage  worker-node-ips(rw,sync,no_subtree_check,no_root_squash)

 

  • Replace worker-node-ips with actual IPs or CIDR blocks of your worker nodes.
  • Run sudo exportfs -r to reload the export table.
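For instance, using a CIDR block, the export line might look like this (the 192.168.162.0/24 subnet is a hypothetical worker network; adjust it to yours):

```
# /etc/exports
/rke-pv-storage  192.168.162.0/24(rw,sync,no_subtree_check,no_root_squash)
```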

2. Install NFS Subdir External Provisioner

Add the Helm repo and install the provisioner:

helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

helm install nfs-client-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace kube-system \
  --set nfs.server=192.168.162.100 \
  --set nfs.path=/rke-pv-storage \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=false

Notes:

  • If you want this to be the default storage class, change storageClass.defaultClass=true.
  • nfs.server should point to the IP of your NFS server.
  • nfs.path must be a valid exported directory from that NFS server.
  • storageClass.name can be referenced in your PersistentVolumeClaim YAMLs using storageClassName: nfs-client.

3. PVC and Pod Test

Create a test PVC and pod using the following YAML:

# test-nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
  - name: shell
    image: busybox
    command: [ "sh", "-c", "sleep 3600" ]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-nfs-pvc

 

Apply it:

kubectl apply -f test-nfs-pvc.yaml
kubectl get pvc test-nfs-pvc -w

 

Expected output:

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-nfs-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWX            nfs-client     30s

 


4. Troubleshooting

If the PVC remains in Pending, follow these steps:

Check the provisioner pod status:

kubectl get pods -n kube-system | grep nfs-client-provisioner

 

Inspect the provisioner pod:

kubectl describe pod -n kube-system <pod-name>
kubectl logs -n kube-system <pod-name>

 

Common Issues:

  • Broken State: Bad NFS mount
    mount.nfs: access denied by server while mounting 192.168.162.100:/pl-elt-kakfka

     

    • This usually means the NFS path is misspelled or not exported properly.
  • Broken State: root_squash enabled
    failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/…: permission denied

     

    • Fix by changing the export to use no_root_squash or chown the directory to nobody:nogroup.
  • ImagePullBackOff
    • Ensure nodes have internet access and can reach registry.k8s.io.
  • RBAC errors
    • Make sure the ServiceAccount used by the provisioner has permissions to watch PVCs and create PVs.

5. Healthy State Example

kubectl get pods -n kube-system | grep nfs-client-provisioner-nfs-subdir-external-provisioner
nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m   1/1     Running     0          3m39s

 

kubectl describe pod -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
# Output shows pod is Running with Ready=True

 

kubectl logs -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
...
I0512 21:46:03.752701       1 controller.go:1420] provision "default/test-nfs-pvc" class "nfs-client": volume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" provisioned
I0512 21:46:03.752763       1 volume_store.go:212] Trying to save persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d"
I0512 21:46:03.772301       1 volume_store.go:219] persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" saved
I0512 21:46:03.772353       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Name:"test-nfs-pvc"}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-73481f45-3055-4b4b-80f4-e68ffe83802d
...

 

Once test-nfs-pvc is bound and the pod starts successfully, your setup is working. You can now safely use storageClassName: nfs-client in other workloads (e.g., a Strimzi KafkaNodePool).
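As a hedged sketch of what that might look like, a Strimzi KafkaNodePool can consume this class via a persistent-claim storage stanza (the cluster name my-cluster and pool name broker are hypothetical):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker                       # hypothetical pool name
  labels:
    strimzi.io/cluster: my-cluster   # hypothetical cluster name
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 10Gi
    class: nfs-client                # the StorageClass created above
```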


If your ISP doesn't have native IPv6 support with dual stack, here is a workaround to get it set up for your home lab environment.

What you need

> Router/Firewall/UTM that supports IPv6 Tunneling

  • pfSense/OPNsense/VyOS
  • DD-WRT 
  • Cisco ISR
  • Juniper SRX

> Active account with an IPv6 tunnel broker

      For this example, we are going to use Hurricane Electric's free IPv6 tunnel broker.

Overview of the setup

For part 1 of this series, we are going to cover the following:

  • Dual Stack Setup
  • DHCPV6 configuration and explanation

– Guide –

I used a Netgate router running pfSense to terminate the 6in4 tunnel; it adds firewall and monitoring capabilities to your IPv6 network.

Before we begin, we need to make a few adjustments on the firewall

Allow IPv6 Traffic

On new installations of pfSense after 2.1, IPv6 traffic is allowed by default. If the firewall configuration was upgraded from an older version, IPv6 may still be blocked. To enable IPv6 traffic on pfSense, perform the following:

  • Navigate to System > Advanced on the Networking tab
  • Check Allow IPv6 if not already checked
  • Click Save

Allow ICMP

ICMP echo requests must be allowed on the WAN address that is terminating the tunnel to ensure that it is online and reachable.

Firewall> Rules > WAN
Next, create a regular tunnel on the Hurricane Electric tunnel broker site.

Enter your IPv4 address as the tunnel’s endpoint address.

Note – After entering your IPv4 address, the website will check to make sure that it can ping your machine. If it cannot ping your machine, you will get an error like the one below:

You can access the tunnel information from the accounts page

While you are here, go to the "Advanced" tab and set up an "Update key" (we need it later).

Create and Assign the GIF Interface

Next, create the interface for the GIF tunnel in pfSense. Complete the fields with the corresponding information from the tunnel broker configuration summary.

  • Navigate to Interfaces > (assign) on the GIF tab.
  • Click fa-plus Add to add a new entry.
  • Set the Parent Interface to the WAN where the tunnel terminates. This would be the WAN which has the Client IPv4 Address on the tunnel broker.
  • Set the GIF Remote Address in pfSense to the Server IPv4 Address on the summary.
  • Set the GIF Tunnel Local Address in pfSense to the Client IPv6 Address on the summary.
  • Set the GIF Tunnel Remote Address in pfSense to the Server IPv6 Address on the summary, along with the prefix length (typically /64).
  • Leave remaining options blank or unchecked.
  • Enter a Description.
  • Click Save.

Example GIF Tunnel.

Assign GIF Interface

Click fa-plus on Interfaces > (Assignments)

Choose the GIF interface to be used for an OPT interface. In this example, the OPT interface has been renamed WAN_HP_NET_IPv6. Click Save and Apply Changes if they appear.

 

Configure OPT Interface

With the OPT interface assigned, click on the OPT interface in the Interfaces menu to enable it. Keep IPv6 Configuration Type set to None.

Setup the IPv6 Gateway

When the interface is configured as listed above, a dynamic IPv6 gateway is added automatically, but it is not yet marked as default.

  • Navigate to System > Routing
  • Edit the dynamic IPv6 gateway with the same name as the IPv6 WAN created above.
  • Check Default Gateway.
  • Click Save.
  • Click Apply Changes.
 
Go to Status > Gateways to view the gateway status. The gateway will show as "Online" if the configuration is successful.

Set Up the LAN Interface for IPv6

The LAN interface may be configured for static IPv6 network. The network used for IPv6 addressing on the LAN Interface is an address in the Routed /64 or /48 subnet assigned by the tunnel broker.

  • The Routed /64 or /48 is the basis for the IPv6 Address field

For this exercise, we are going to use ::1 for the LAN interface IP from the prefix provided above:

Routed /64 : 2001:470:1f07:79a::/64

Interface IP – 2001:470:1f07:79a::1

Set Up DHCPv6 and RA (Router Advertisements)

Now that we have the tunnel up and running, we need to make sure devices behind the LAN interface can get an IPv6 address.

There are a couple of ways to handle the addressing.

Stateless Address Autoconfiguration (SLAAC)

SLAAC stands for Stateless Address Autoconfiguration, but it shouldn't be confused with stateless DHCPv6; they are two different approaches.

SLAAC is the simplest way to give an IPv6 address to a client, because it relies exclusively on the Neighbor Discovery Protocol. This protocol, which we simply call NDP, allows devices on a network to discover their layer 3 neighbors. We use it to retrieve layer 2 reachability information, much like ARP, and to find the routers on the network.

When a device comes online, it sends a Router Solicitation message. It’s basically asking “Are there some routers out there?”. If we have a router on the same network, that router will reply with a Router Advertisement (RA) message. Using this message, the router will tell the client some information about the network:

  • Who is the default gateway (the link-local address of the router itself)
  • What is the global unicast prefix (for example, 2001:DB8:ACAD:10::/64)

With this information, the client creates a new global unicast address using the EUI-64 technique. Now the client has an IP address from the router’s global unicast prefix range, and that address is globally routable.

This method is extremely simple and requires virtually no configuration. However, we can’t centralize it, and we cannot distribute further information such as DNS settings. To do that, we need to use a DHCPv6 technique.
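To make the EUI-64 step concrete, here is a minimal Python sketch of the derivation (the function name and the example MAC address are mine; the prefix is the example from the text). It flips the universal/local bit of the MAC's first octet and inserts FFFE in the middle:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface identifier from a 48-bit MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert FF:FE in the middle
    return ":".join("%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2))

# Combine the advertised prefix with the interface ID to get the full address
prefix = "2001:db8:acad:10:"
addr = prefix + eui64_interface_id("00:1a:2b:3c:4d:5e")
print(addr)
```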

Just like with IPv4, we need to configure the addressing settings for the IPv6 range so the devices behind the firewall can use SLAAC.

Stateless DHCPv6

Stateless DHCPv6 brings the DHCPv6 protocol into the picture. With this approach, we still use SLAAC to obtain reachability information, and we use DHCPv6 for the extra items.

The client always starts with a Router Solicitation, and the router on the segment responds with a Router Advertisement. This time, the Router Advertisement has a flag called other-config set to 1. Once the client receives the message, it will still use SLAAC to craft its own IPv6 address. However, the flag tells the client to do something more.

After the SLAAC process succeeds, the client will craft a DHCPv6 request and send it through the network. A DHCPv6 server will eventually reply with all the extra information we need, such as the DNS server or domain name.

This approach is called stateless since the DHCPv6 server does not manage any lease for the clients. Instead, it just gives extra information as needed.

Configuring IPv6 Router Advertisements

Router Advertisements (RA) tell an IPv6 network not only which routers are available to reach other networks, but also tell clients how to obtain an IPv6 address. These options are configured per-interface and work similarly to, and/or in conjunction with, DHCPv6.

DHCPv6 is not able to send clients a router for use as a gateway as is traditionally done with IPv4 DHCP. The task of announcing gateways falls to RA.

Operating Mode: Controls how clients behave. All modes advertise this firewall as a router for IPv6. The following modes are available:

  • Router Only: Clients will need to set addresses statically
  • Unmanaged: Client addresses obtained only via Stateless Address Autoconfiguration (SLAAC).
  • Managed: Client addresses assigned only via DHCPv6.
  • Assisted: Client addresses assigned by either DHCPv6 or SLAAC (or both).
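As a rough mental model (my own summary, not taken from the pfSense docs, so verify before relying on it), the modes map onto the RA flags like this, where M is the managed flag, O is other-config, and A is the per-prefix autonomous (SLAAC) flag:

```python
# Assumed mapping of the RA "Operating Mode" options to the flags each mode
# advertises. This is an illustration of the flag semantics, not an
# authoritative pfSense reference.
ra_modes = {
    "Router Only": {"M": 0, "O": 0, "A": 0},  # router advertised, addresses static
    "Unmanaged":   {"M": 0, "O": 0, "A": 1},  # SLAAC only
    "Managed":     {"M": 1, "O": 1, "A": 0},  # DHCPv6 only
    "Assisted":    {"M": 1, "O": 1, "A": 1},  # DHCPv6 and/or SLAAC
}

for mode, flags in ra_modes.items():
    print(mode, flags)
```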

Enable DHCPv6 Server on the interface

Setup IPv6 DNS Addresses

We are going to use Cloudflare DNS (at the time of writing, Cloudflare is rated as the fastest resolver by ThousandEyes.com):

https://developers.cloudflare.com/1.1.1.1/setting-up-1.1.1.1/

1.1.1.1

  • 2606:4700:4700::1111
  • 2606:4700:4700::1001

Keeping your Tunnel endpoint Address Updated with your Dynamic IP

This only applies if you have a dynamic IPv4 from your ISP

As you may remember from our first step, when registering the 6in4 tunnel on the website we had to enter our public IP and enable ICMP.

We need to make sure we keep this updated when our IP changes over time.

There are a few ways to accomplish this:

  • Use PFsense DynDNS feature 

dnsomatic.com is a wonderful free service that updates your dynamic IP in multiple locations. I used this because it gives me the freedom to change routers/firewalls without messing up my config (I’m using one of my RasPis to update DNS-O-Matic).

I’m working on another article for this and will link it to this section ASAP.
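If you prefer to script the update yourself, Hurricane Electric exposes a dyn-DNS style update endpoint; the sketch below just builds the update URL in Python. The endpoint and parameter names are assumptions based on HE's dyn-DNS compatible interface, so check your tunnel's Advanced tab for the exact values:

```python
from urllib.parse import urlencode

def build_update_url(username: str, update_key: str, tunnel_id: str) -> str:
    """Build a dyn-DNS style update URL for a Hurricane Electric 6in4 tunnel.

    Assumption: HE accepts dyn-DNS compatible updates at
    ipv4.tunnelbroker.net/nic/update; verify the parameters against
    your tunnel's Advanced tab before using this.
    """
    query = urlencode({
        "username": username,   # your HE account name
        "password": update_key, # the tunnel's update key, not your login password
        "hostname": tunnel_id,  # the numeric tunnel ID
    })
    return "https://ipv4.tunnelbroker.net/nic/update?" + query

print(build_update_url("myuser", "my-update-key", "123456"))
```

Fetching that URL from a cron job (with `curl` or `urllib.request`) whenever your WAN IP changes keeps the tunnel endpoint current.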

 

Few Notes –

Android and Chrome OS still don’t support DHCPv6.

macOS, Windows 10, and Windows Server 2016 use and prefer IPv6.

Check the Windows Firewall rules if you have issues with NAT rules, and update the rules manually.

Your MTU will drop since you are sending IPv6 packets encapsulated in IPv4 packets. Personally, I have had no issues with my IPv6 network behind a Spectrum DOCSIS modem, but this may cause issues depending on your ISP (e.g., CGNAT).

Here is a good write up https://jamesdobson.name/post/mtu/
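The arithmetic behind the drop is simple: 6in4 wraps every IPv6 packet in an extra IPv4 header, so the usual 1500-byte Ethernet MTU loses 20 bytes, which is why HE tunnels default to an MTU of 1480:

```python
# Why 6in4 lowers the MTU: each IPv6 packet carries one extra IPv4 header
ETHERNET_MTU = 1500
IPV4_HEADER = 20                 # bytes consumed by the 6in4 encapsulation
tunnel_mtu = ETHERNET_MTU - IPV4_HEADER
print(tunnel_mtu)  # 1480
```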

 

Part 2

In part two of this series we will configure an ASA for IPv6, using the pfSense router as the tunnel endpoint.

Example Network

Link spotlight

– Understanding IPv6 EUI-64 Bit Address

– IPv6 Stateless Auto Configuration

– Configure the ASA to Pass IPv6 Traffic

– Setup IPv6 TunnelBroker – NetGate

– ipv6-at-home Part 1 | Part II | Part III

Until next time….


Hi Internetz, it’s been a while…

So we had an old Firebox X700 laying around in the office gathering dust. I saw this forum post about running m0nowall on the device. Since pfSense is based on m0nowall, I googled around for a way to install pfSense on it and found several threads on the pfSense forums.
It took me a little while to comb through thousands of posts to find a proper way to go about this, and some more time was spent troubleshooting the issues I faced during installation and configuration. So I’m putting everything I found in this post, to save you the time spent googling around. This should work for all the other Firebox models as well.

What you need :

Hardware

  • Firebox 
  • Female to Female Serial Cable – link
  • 4GB CF Card (1GB or 2GB will work, but personally I would recommend at least 4GB)
  • CF Card Reader

Software

  • pfsense NanoBSD
  • physdiskwrite –  Download
  • TeraTerm Pro Web – Enhanced Telnet/SSH2 Client – Download

The Firebox X700

This is basically a small x86 PC: an Intel Celeron CPU running at 1.2GHz with 512MB of RAM. The system boots Watchguard firmware from a CF card.
The custom Intel motherboard used in the device does not include a VGA or DVI port, so we have to use the serial port for all communication with the device.

There are several methods to run pfsense on this device.

HDD

Install pfSense on a PC and plug the HDD into the Firebox.

This requires a bit more effort because we need to change the boot order in the BIOS, and it’s kind of hard to find IDE laptop HDDs these days.

CF card

This is a very straightforward method: we are basically swapping out the CF card already installed in the device and booting pfSense from it.


In this tutorial we are using the CF card method

Installing PFsense

  • Download the relevant pfsense image


Since we are using a CF card, we need to use the pfSense version built for embedded devices.

The NanoBSD version is built specifically for CF cards and other storage media with a limited number of read/write cycles.

Since we are using a 4GB CF card, we will use the 4G image.

  • Flashing the nanoBSD image to the CF card


Extract the physdiskwrite archive and run PhysGUI.exe.
This software is written in German (I think), but operating it is not that hard.

Select the CF card from the list.

Note: if you are not sure about the disk device ID, use diskpart to determine it.

Load the image file:
right-click on the disk and choose “Image laden > öffnen” (“Load image > open”).

Select the image file in the “open file” window,
and the program will prompt you with the following dialog box.

 


Select “remove 2GB restriction” and click “OK”.
It will warn you about the disk being formatted (I think); click Yes to start the flashing process. A CMD window will open and show the progress.

  • Installing the CF card on the Firebox

Once the flashing process is complete, open up the Firebox and remove the drive cage to gain access to the installed CF card.

Remove the protective glue and replace the card with the new CF card flashed with the pfSense image.

  • Booting up and configuring PFsense

Since the Firebox does not have a display connector or any peripheral ports, we need to use a serial connection to communicate with the device.

Install the “TeraTerm Pro Web” program we downloaded earlier.

I tried PuTTY and many other telnet clients, but they didn’t work properly.

Open up the terminal window

Connect the firebox to the PC using the serial cable, and power it up

Select “Serial”, choose the COM port the device is connected to (you can check this in Device Manager), and click OK.

  
Many other tutorials say to change the baud rate, but the defaults worked just fine for me.
Since we already flashed the pfSense image to the CF card, we do not need to install the OS.

By now the terminal window should show the pfSense configuration prompts, just as with a normal fresh install.

It will ask you to set up VLANs.

Assign the WAN, LAN, OPT1 interfaces.

On the X700, the interface names are as follows 

Please refer to the pfSense docs for more info on setting up:


http://doc.pfsense.org/index.php/Tutorials#Advanced_Tutorials


After the initial config is complete, you no longer need the console cable and TeraTerm;
you will be able to access pfSense via the web interface and good ol’ SSH via the LAN IP.



Additional configuration

  • Enabling the LCD panel

All Firebox units have an LCD panel on the front.
We can use the pfSense LCDproc-dev package to enable it and display various information.

Install the LCDproc-dev Package via the package Manager

Go to Services > LCDProc

Set the settings as follows


Hope this article helped you guys. Don’t forget to leave a comment with your thoughts!

Sources –

http://forum.pfsense.org/index.php?board=5.0

Recently we had a requirement to check SMTP on two different servers and run a script if both servers failed. I googled around for a tool but ended up putting together this script.

It’s not the prettiest, but it works, and I’m sure you guys will make something much better out of it.

# Define the host names of the servers to be monitored
$servers = "relay1.host.com","relay2.host.com"
# Define the port number
$tcp_port = 25

# Loop through each host to get an individual result.
ForEach ($srv in $servers) {

    $tcpClient = New-Object System.Net.Sockets.TcpClient
    Try {
        # Connect() throws if the host is unreachable, so catch the
        # exception instead of letting the script terminate
        $tcpClient.Connect($srv, $tcp_port)
    }
    Catch { }

    If ($tcpClient.Connected) {
        Write-Host "$srv is online"
    }
    Else {
        Write-Host "$srv is offline"
    }

    $tcpClient.Dispose()
}
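For anyone who would rather avoid PowerShell, here is a rough Python equivalent that also covers the original requirement of running a fallback action when both relays are down (the hostnames and the failover message are placeholders):

```python
import socket

def smtp_port_open(host: str, port: int = 25, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures
        return False

servers = ["relay1.host.com", "relay2.host.com"]  # placeholder hostnames
results = {srv: smtp_port_open(srv) for srv in servers}

for srv, up in results.items():
    print(f"{srv} is {'online' if up else 'offline'}")

if not any(results.values()):
    print("both relays are down - run your failover script here")
```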

If something is wrong or if you think there is a better way, please feel free to comment and let everyone know. It’s all about community, after all.

Update 4/18/2016 –

Updated the script with the one provided by Donald Gray – Thanks a lot : )