
Well, I think the title pretty much speaks for itself, but anyhow… Crucial released a new firmware for the M4 SSDs and apparently it's supposed to make the drive 20% faster. I updated mine with no issues, and I didn't brick it, so it's all good here hehe :P

I looked up some benchmarks from reviews at the time of release and compared them with the benchmarks I ran after the FW update. I do get around a 20% increase, just like they say!

Crucial’s Official Release Notes:

“Release Date: 08/25/2011

Change Log:

    Changes made in version 0002 (m4 can be updated to revision 0009 directly from either revision 0001 or 0002)
    Improved throughput performance.
    Increase in PCMark Vantage benchmark score, resulting in improved user experience in most operating systems.
    Improved write latency for better performance under heavy write workloads.
    Faster boot up times.
    Improved compatibility with latest chipsets.
    Compensation for SATA speed negotiation issues between some SATA-II chipsets and the SATA-III device.
    Improvement for intermittent failures in cold boot up related to some specific host systems.”

Firmware Download: http://www.crucial.com/eu/support/firmware.aspx?AID=10273954&PID=4176827&SID=1iv16ri5z4e7x

Here's how to install this via a pen drive without wasting a blank CD. I know they are really, really cheap, but think: how many of you have blank CDs or DVDs with you nowadays???

To do this we are gonna use a nifty lil program called UNetbootin.
Of course you can also use this to boot any Linux distro from a pen drive. It's very easy actually; if you need help, go check out the guides on the UNetbootin website.

so here we go then…

* First off Download – http://unetbootin.sourceforge.net/

* Run the program
* Select the Diskimage radio button (as shown in the image)
* Browse and select the ISO file you downloaded from Crucial
* Type – USB Drive
* Select the drive letter of your pen drive
* Click OK!!!

reboot

* Go to the BIOS and put your SSD/SATA controller into IDE (compatibility) mode ** this is important
* Boot from your pen drive
* Follow the on-screen instructions to update

and Voila

**** Remember to set your SATA controller back to AHCI in the BIOS/EFI ****

Update***

SATA 3 Benchmark.

SATA 2 Benchmark 
Well, I messed around with some benchmark programs; here are the results.

This guide covers how to add an NFS StorageClass and a dynamic provisioner to Kubernetes using the nfs-subdir-external-provisioner Helm chart. This enables us to mount NFS shares dynamically for PersistentVolumeClaims (PVCs) used by workloads.

Example use cases:

  • Database migrations
  • Apache Kafka clusters
  • Data processing pipelines

Requirements:

  • An accessible NFS share exported with: rw,sync,no_subtree_check,no_root_squash
  • NFSv3 or NFSv4 protocol
  • Kubernetes v1.31.7+ or RKE2 with rke2r1 or later

 

Let's get to it.


1. NFS Server Export Setup

Ensure your NFS server exports the shared directory correctly:

# /etc/exports
/rke-pv-storage  worker-node-ips(rw,sync,no_subtree_check,no_root_squash)

 

  • Replace worker-node-ips with actual IPs or CIDR blocks of your worker nodes.
  • Run sudo exportfs -r to reload the export table.
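
If you want to double-check the export before moving on, a quick sanity check from a worker node looks like this (the IP is the example NFS server address used later in this guide, and the mount test assumes the NFS client utilities are installed on the node):

# List the exports visible from this node
showmount -e 192.168.162.100

# Optionally, test a manual mount and unmount
sudo mount -t nfs 192.168.162.100:/rke-pv-storage /mnt && sudo umount /mnt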

2. Install NFS Subdir External Provisioner

Add the Helm repo and install the provisioner:

helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

helm install nfs-client-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace kube-system \
  --set nfs.server=192.168.162.100 \
  --set nfs.path=/rke-pv-storage \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=false

Notes:

  • If you want this to be the default storage class, change storageClass.defaultClass=true.
  • nfs.server should point to the IP of your NFS server.
  • nfs.path must be a valid exported directory from that NFS server.
  • storageClass.name can be referenced in your PersistentVolumeClaim YAMLs using storageClassName: nfs-client.
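
Once the chart is installed, a quick check that the provisioner pod is up and the StorageClass exists (the exact pod name depends on your release name):

kubectl get pods -n kube-system | grep nfs-subdir-external-provisioner
kubectl get storageclass nfs-client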

3. PVC and Pod Test

Create a test PVC and pod using the following YAML:

# test-nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
  - name: shell
    image: busybox
    command: [ "sh", "-c", "sleep 3600" ]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-nfs-pvc

 

Apply it:

kubectl apply -f test-nfs-pvc.yaml
kubectl get pvc test-nfs-pvc -w

 

Expected output:

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-nfs-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWX            nfs-client     30s
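
To confirm the pod can actually write to the NFS-backed volume, drop a test file through it (a minimal sanity check; the file name is arbitrary):

kubectl exec test-nfs-pod -- sh -c 'echo hello-nfs > /data/test.txt && cat /data/test.txt'

The file should also show up in the provisioner-created subdirectory under /rke-pv-storage on the NFS server.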

 


4. Troubleshooting

If the PVC remains in Pending, follow these steps:

Check the provisioner pod status:

kubectl get pods -n kube-system | grep nfs-client-provisioner

 

Inspect the provisioner pod:

kubectl describe pod -n kube-system <pod-name>
kubectl logs -n kube-system <pod-name>

 

Common Issues:

  • Broken State: Bad NFS mount
    mount.nfs: access denied by server while mounting 192.168.162.100:/pl-elt-kakfka
    • This usually means the NFS path is misspelled or not exported properly.
  • Broken State: root_squash enabled
    failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/…: permission denied
    • Fix by changing the export to use no_root_squash or chown the directory to nobody:nogroup.
  • ImagePullBackOff
    • Ensure nodes have internet access and can reach registry.k8s.io.
  • RBAC errors
    • Make sure the ServiceAccount used by the provisioner has permissions to watch PVCs and create PVs.
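
For the RBAC case, a quick way to verify is kubectl auth can-i, impersonating the provisioner's ServiceAccount (look up the actual ServiceAccount name first, since it depends on your Helm release name):

kubectl get serviceaccounts -n kube-system | grep nfs
kubectl auth can-i create persistentvolumes --as=system:serviceaccount:kube-system:<provisioner-serviceaccount>
kubectl auth can-i watch persistentvolumeclaims --as=system:serviceaccount:kube-system:<provisioner-serviceaccount>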

5. Healthy State Example

kubectl get pods -n kube-system | grep nfs-client-provisioner-nfs-subdir-external-provisioner
nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m   1/1     Running     0          3m39s

 

kubectl describe pod -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
# Output shows pod is Running with Ready=True

 

kubectl logs -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
...
I0512 21:46:03.752701       1 controller.go:1420] provision "default/test-nfs-pvc" class "nfs-client": volume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" provisioned
I0512 21:46:03.752763       1 volume_store.go:212] Trying to save persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d"
I0512 21:46:03.772301       1 volume_store.go:219] persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" saved
I0512 21:46:03.772353       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Name:"test-nfs-pvc"}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-73481f45-3055-4b4b-80f4-e68ffe83802d
...

 

Once test-nfs-pvc is bound and the pod starts successfully, your setup is working. You can now safely reference the nfs-client storage class in other workloads (e.g., a Strimzi KafkaNodePool).


Preface-

After I started working with open networking switches, I wanted to know more about the Cisco Catalyst range I work with every day.

Information on the older ASICs is very hard to find, but recently Cisco has started to talk a lot about the new chips like UADP 2.0 with the Catalyst 9K / Nexus launch. This is most likely due to the rise of Disaggregated Network Operating Systems (DNOS) such as Cumulus, PICA8, etc., forcing customers to be more aware of what's under the hood rather than listening to and believing shiny PDF files with a laundry list of features.

The information was there but scattered all over the web. I went through Cisco Live and Tech Field Day slides/videos, interviews, partner update PDFs, press releases, whitepapers, and even LinkedIn profiles to gather it.

If you notice a mistake, please let me know.

Scope –

We are going to focus on the ASICs used in the well-known 2960S/X/XR, the 36xx/37xx/38xx, and the new Cat 9K series.

Timeline

 

Summary

Cisco bought a bunch of companies to acquire the switching technology it needed, which later bloomed into the switching platforms we know today:

  • Crescendo Communications (1993) – Catalyst 5K and 6K chassis
  • Kalpana (1994) – Catalyst 3K (fun fact: they invented VLANs, which were later standardized as 802.1Q)
  • Grand Junction (1995) – Catalyst 17xx, 19xx, 28xx, 29xx
  • Granite Systems (1996) – Catalyst 4K (K series ASIC)

After the original Cisco 3750/2950 switches, the Cisco 3xxx/2xxx-G series (G for Gigabit) was released.

Next, the Cisco 3xxx-E series with enterprise management functions was released.

Later, Cisco developed the Cisco 3750-V series, an energy-saving version of the -E series, which was later replaced by the Cisco 3750 V2 series (possibly a die shrink).

The G series and E series were later phased out and integrated into the Cisco X series, which is still being sold and supported.

In 2017-2018 Cisco released the Catalyst 9K family to replace the 2K and 3K families.

Sasquatch ASIC

From what I could find, there are two variants of this ASIC.

The initial release in 2003

  • Fixed pipeline ASIC
  • 180 Nano-meter process
  • 60 Million Transistors

Shipped with the 10/100 3750 and 2960

Die Shrink to 130nm in 2005

  • Fixed pipeline ASIC
  • 130 Nano-meter process
  • 210 Million Transistors

Shipped in the 2960-G/3560-G/3750-G series

I couldn't find much info about the chip design; I will update this section when I find more.

Strider ASIC

Initially released in 2010

  • Fixed pipeline ASIC
  • Built on the 65-nanometer process
  • 1.3 Billion Transistors

The Strider ASIC (circa 2010) was an improved design based on the 3750-E series and first shipped with the 2960-S family.

S88G ASIC design

Later, in 2012-2013, with a die shrink to 45 nanometers, they managed to fit two ASICs on the same silicon. This shipped with the 2960-X/XR, which replaced the 2960-S:

  • Higher stack speeds and more features
  • Limited Layer 3 capabilities via the IP Lite feature set (2960-XR only)
  • Better QoS and NetFlow Lite

Later down the line, they silently rolled the ASIC design to a 32-nanometer process for better yields and cheaper manufacturing costs.

This switch is still being sold, with no EOL announced, as a cheaper access-layer switch.

On a side note – in 2017 Cisco released another version of the 2960 family, the WS-2960-L. This is a cheaper version built on a Marvell ASIC (same as the SG55x), with a web UI and a fanless design. I personally think this is the next evolution of their SMB-oriented family, the popular Cisco SG-5xx series. For the first time the 2960 family had a fairly usable and pleasant web interface for configuration and management, and the new 9K series seems to contain a more polished version of this web UI.

Unified Access Data Plane (UADP)

Due to the limitations of the fixed pipeline architecture and the costs involved with the re-rolling process to fix bugs, they needed something new and had three options.

As a compromise between all three, Cisco started dreaming up this programmable ASIC design in 2007-2008. The idea was to build a chip with programmable stages that could be updated with firmware instead of writing the logic into the silicon permanently.

They released the programmable ASIC technology initially as the QFP (Quantum Flow Processor) ASIC in the ASR router family to meet customer needs (service providers and large enterprises).

This chip allowed them to support advanced routing technologies and new protocols without changing hardware, simply via firmware updates, improving the longevity of the investment and allowing them to make more money out of the chip's extended life cycle.

Eventually, this technology trickled downstream and the Doppler 1.0 was born

Improvements and features in UADP 1.0/1.1/2.0

  • Programmable stages in the pipeline
  • Cisco intent Driven networking support – DNA Center with ISE
  • Integrated stacking support with StackPower – the ASIC is built with pinouts for the stacking fabric, allowing faster stacking performance
  • Rapid Recirculation (Encapsulation such as MPLS, VXLAN)
  • TrustSec
  • Advanced on-chip QoS
  • Software-defined networking support – integrated NetFlow, SD access
  • Flex Parser – Programmable packet parser allowing them to introduce support for new protocols with firmware updates
  • On-chip micro-engines – Highly specialized engines within the chip to perform repetitive tasks such as
    • Reassembly
    • Encryption/Decryption
    •  Fragmentation
  • CAPWAP – The switch can function as a wireless LAN controller
    • Mobility agent – Offloads QoS and other functions from the WLC (IMO works really nicely with multi-site wireless deployments)
    • Mobility Controller – Full-blown wireless LAN controller (WLC)
  • Extended life cycle allowed integration of Cisco security technologies such as Cisco DNA + ISE later down the line even on the first generation switches
  • Multigig and 40GE speed support
  • Advanced malware detection capabilities via packet fingerprinting

Legacy Fixed pipeline architecture

Programmable pipeline architecture

Doppler/UADP 1.0 (2013)

With the Doppler 1.0 programmable ASIC handling the data plane, coupled with a Cavium CPU for the control plane, the first generation of switches to ship with this chip were the 3650 and 3850 Gigabit versions.

  • Built on 65 Nanometer Process
  • 1.3 billion transistors

UADP 1.1 (2015)

  • Die Shrink to 45 Nanometer
  • 3 billion transistors

UADP 2.0 (2017)

  • Built on a 28nm/16nm process
  • 7.4 billion transistors
  • Equipped with an Intel Xeon D (dual-core x86) CPU for the control plane
  • Open IOS-XE

Flexible ASIC Templates –  

Allows Cisco to write templates that can optimize the chip resources for different use cases

The new Catalyst 9000 series will replace the following campus switching families built on the older Strider and the more recent UADP 1.0 and 1.1 ASICs:

  • Catalyst 2K —–> Catalyst 9200
  • Catalyst 3K —–> Catalyst 9300
  • Catalyst 4K —–> Catalyst 9400
  • Catalyst 6K —–> Catalyst 9500

I will update/fix this post when I find more info about UADP 2.0 and the next evolution. Stay tuned for a few more articles on the silicon used in open networking x86 chassis.

I'm going to base this on my VRF Setup and Route Leaking article and continue building on top of it.

Let's say we need to advertise connected routes within VRFs, using an IGP, to an upstream or downstream IP address. This is one of many ways to get to that objective.

For this example, we are going to use BGP to collect the connected routes and advertise them over OSPF.

Set up the BGP process to collect connected routes:

router bgp 65000
 router-id 10.252.250.6
 !
 address-family ipv4 unicast
 !
 neighbor 10.252.250.1
!
vrf Tenant01_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Tenant02_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Tenant03_VRF
 !
 address-family ipv4 unicast
  redistribute connected
!
vrf Shared_VRF
 !
 address-family ipv4 unicast
  redistribute connected

Set up OSPF to redistribute the routes collected via BGP:

router ospf 250 vrf Shared_VRF
 area 0.0.0.0 default-cost 0
 redistribute bgp 65000
interface vlan250
 mode L3
 description OSPF_Routing
 no shutdown
 ip vrf forwarding Shared_VRF
 ip address 10.252.250.6/29
 ip ospf 250 area 0.0.0.0
 ip ospf mtu-ignore
 ip ospf priority 10

Testing and confirmation

Local OSPF Database

Remote device

Let me address the question of why I decided to put a DNS server (Pihole) exposed to the internet (not fully open but still).

I needed/wanted to set up an Umbrella/NextDNS/CF type DNS server that’s publicly accessible but secured to certain IP addresses.

Sure, NextDNS is an option and it's cheap with similar features, but I wanted to roll my own solution so I could learn a few things along the way.

I can easily set this up for family members who have minimal technical knowledge and can't deal with yet another device (a Raspberry Pi) plugged into their home network.

This will also serve as a quick and dirty guide on how to use Docker Compose and address some issues with running Pi-hole and Docker with UFW on Ubuntu 20.x.

So let's get stahhhted…….

Scope

  • Set up Pi-hole as a Docker container on a VM
  • Enable IPv6 support
  • Set up UFW rules to restrict traffic, plus a cron job to keep the rules updated with the dynamic WAN IPs
  • Deploy and test

What we need

  • Linux VM (Ubuntu, Hardened BSD, etc)
  • Docker and Docker Compose
  • A Dynamic DNS service to track the changing IP (DynDNS, No-IP, etc.)

Deployment

Set up a Dynamic DNS solution to track your dynamic WAN IP

For this demo, we are going to use DynDNS since I already own a paid account and it's supported on most platforms (routers, UTMs, NAS devices, IP camera DVRs, etc.).

Use some Google-fu; there are multiple ways to do this without having to pay for a service. All we need is a DNS record that's kept up to date with your current public IP address.

For Network A and Network B, I'm going to use the routers' built-in DDNS update features.

Network A gateway – UDM Pro

Network B Gateway – Netgear R6230

Confirmation

Set up the VM with Docker Compose

Pick your service provider; you should be able to use a free-tier VM for this since it's just DNS:

  • Linode
  • AWS lightsail
  • IBM cloud
  • Oracle cloud
  • Google Compute
  • Digital Ocean droplet

Make sure you have a dedicated (static) IPv4 and IPv6 address attached to the resource

For this deployment, I'm going to use a Linode Nanode, due to their native IPv6 support and because I prefer their platform for personal projects.

Setup your Linode VM – Getting started Guide

SSH in to the VM or use weblish console

Update your packages and sources

sudo apt-get update 

Install Docker and Docker Compose

Assuming you already have SSH access to the VM with a static IPv4 and IPv6 address

Guide to installing Docker Engine on Ubuntu

Guide to Installing Docker-Compose

Once you have this set up, confirm the Docker setup:

docker-compose version

Set up the Pi-hole Docker image

Let's configure the Docker networking side to fit our needs.

Create a separate bridge network for the Pi-hole container

I guess you could use the default bridge network, but I like to create one to keep things organized; this way the service can be isolated from the other containers I have.

docker network create --ipv6 --driver bridge --subnet "fd01::/64" Piholev6

verification
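
A quick way to verify the bridge was created with IPv6 enabled (the inspect output should show the fd01::/64 subnet we passed in):

docker network ls | grep Piholev6
docker network inspect Piholev6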

We will use this network later in docker compose

With the new Ubuntu 20.x releases, systemd starts a local DNS stub listener on 127.0.0.53:53,

which will prevent the container from starting, because Pi-hole needs to bind to the same port (UDP 53).

We could simply disable the service, but that breaks DNS resolution on the VM, causing more headaches and pain for automation and updates.

After some Google-fu and tinkering around, this is the workaround I found:

  • Disable the stub listener
  • Change the /etc/resolv.conf symlink to point to /run/systemd/resolve/resolv.conf
  • Push external name servers so the VM won't use the loopback stub to resolve DNS
  • Restart systemd-resolved
Resolving Conflicts with the systemd-resolved stub listener

We need to disable the stub listener that's bound to port 53. As I mentioned before, this breaks local DNS resolution on the VM; we will fix it in a bit.

sudo nano /etc/systemd/resolved.conf

Find and uncomment the line “DNSStubListener=yes” and change it to “no”
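
If you prefer a one-liner instead of editing the file by hand, a sed command can make the same change (this assumes the stock commented-out "#DNSStubListener=yes" line from the default Ubuntu 20.x config):

sudo sed -i 's/^#\?DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf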

After this, we need to push the external DNS servers to the box; this setting is stored in the following file:

/etc/resolv.conf
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.

nameserver 127.0.0.53

But we can't manually update this file with our own DNS servers. Let's investigate.

ls -l /etc/resolv.conf

It's a symlink to another system file:

/run/systemd/resolve/stub-resolv.conf

When you take a look at the directory where that file resides, there are two files

When you look at the other file, you will see that /run/systemd/resolve/resolv.conf is the one that actually carries the external name servers.

You still can't manually edit this file; it gets updated with whatever IPs are provided as DNS servers via DHCP, and Netplan will dictate the IPs based on the static DNS servers you configure in the Netplan YAML file.

I can see two entries there, and they are the default Linode DNS servers discovered via DHCP. I'm going to keep them as is, since they are good enough for my use case.

If you want to use your own servers here – Follow this guide

Let's change the symlink to point to this file instead of stub-resolv.conf:

$ sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

Now that it's pointing to the right file,

let's restart systemd-resolved:

sudo systemctl restart systemd-resolved

Now you can resolve DNS and install packages, etc
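
A quick check that resolution works again and that /etc/resolv.conf now carries the real name servers rather than the stub address (dig comes from the dnsutils package, which the firewall script below also relies on):

cat /etc/resolv.conf
dig +short example.com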

Docker Compose file for the Pi-hole

sudo mkdir /Docker_Images/
sudo mkdir /Docker_Images/Piholev6/

Let's navigate to this directory and start setting up our environment:

nano /Docker_Images/Piholev6/docker-compose.yml

version: '3.4'
services:

   Pihole:
    container_name: pihole_v6
    image: pihole/pihole:latest
    hostname: Multicastbits-DNSService
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"
      - "4343:443/tcp"
    environment:
      TZ: America/New_York
      DNS1: 1.1.1.1
      DNS2: 8.8.8.8
      WEBPASSWORD: F1ghtm4_Keng3n4sura
      ServerIP: 45.33.73.186
      enable_ipv6: "true"
      ServerIPv6: 2600:3c03::f03c:92ff:feb9:ea9c
    volumes:
       - '${ROOT}/pihole/etc-pihole/:/etc/pihole/'
       - '${ROOT}/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/'
    dns:
      - 127.0.0.1
      - 1.1.1.1
    cap_add:
      - NET_ADMIN
    restart: always

networks:
  default:
    external:
      name: Piholev6

Let's break this down a little bit:

  • Version – Declare Docker compose version
  • container_name – This is the name the container will run under on the Docker host
  • image – What image to pull from the Docker Hub
  • hostname – This is the host-name for the Docker container – this name will show up on your lookup when you are using this Pi-hole
  • ports – What ports should be NATed via the Docker Bridge to the host VM
  • TZ – Time Zone
  • DNS1 – DNS server used within the container
  • DNS2 – DNS server used within the container
  • WEBPASSWORD – Password for the Pi-hole web console
  • ServerIP – Use the IPv4 address assigned to the VM's network interface (you need this for the Pi-hole to respond on that IP for DNS queries)
  • IPv6 – Enable/disable IPv6 support
  • ServerIPv6 – Use the IPv6 address assigned to the VM's network interface (you need this for the Pi-hole to respond on that IP for DNS queries)
  • volumes – These volumes will hold the configuration data so the container settings and historical data will persist reboots
  • cap_add:- NET_ADMIN – Add Linux capabilities to edit the network stack – link
  • restart: always – This will make sure the container gets restarted every time the VM boots up – Link
  • networks:default:external:name: Piholev6 – Set the container to use the network bridge we created before

Now let's bring up the Docker container:

docker-compose up -d

The -d switch will bring up the Docker container in the background.

Run 'docker ps' to confirm.

Now you can access the web interface and use the Pi-hole.

Verifying it's using the bridge network we created

Grab the network ID for the bridge network we created before and use the inspect switch to check the config:

docker network ls
docker network inspect f7ba28db09ae

This will bring up the full configuration for the Linux bridge we created; the containers attached to the bridge will be visible under the "Containers" section.

Testing

I manually configured my workstation's primary DNS to the Pi-hole IPs.
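
You can also test from any of the allowed networks with dig, pointing the query at the VM's public IP (the IPv4 address below is the ServerIP used in the compose file above; a domain on your blocklists should typically come back as 0.0.0.0):

dig @45.33.73.186 example.com +short
dig @45.33.73.186 doubleclick.net +short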

Updating the docker Image

Pull the new image from the Registry

docker pull pihole/pihole

Take down the current container

docker-compose down

Run the new container

docker-compose up -d

Your settings will persist through this update.

Securing the install

Now that we have a working Pi-hole with IPv6 enabled, we can log in, configure the Pi-hole server, and resolve DNS as needed.

But this is open to the public internet and will fall victim to DNS reflection attacks, etc.

Let's set up firewall rules and open the relevant ports (DNS, SSH, HTTPS) only to the relevant IP addresses before we proceed.

Disable IPtables from the docker daemon

Ubuntu uses UFW (Uncomplicated Firewall) as an abstraction layer to make things easier for operators, but by default Docker opens ports using iptables rules with higher precedence, so rules added via UFW don't take effect.

So we need to tell Docker not to do this when launching containers, so we can manage the firewall rules via UFW.

This file may not exist yet; if so, nano will create it for you.

sudo nano /etc/docker/daemon.json

Add the following lines to the file

{
"iptables": false
}

Restart the Docker service:

sudo systemctl restart docker

Now, doing this might disrupt communication with the container until we allow it back in using UFW rules, so keep that in mind.

Automatically updating Firewall Rules based on the DYN DNS Host records

We are going to create a shell script and run it every hour using crontab.

Shell Script Dry run

  • Get the IPs from the DynDNS host records
  • Remove/clean up the existing rules
  • Add default deny rules
  • Add allow rules using the resolved IPs as the source

Dynamic IP addresses are updated on the following DNS records

  • trusted-Network01.selfip.net
  • trusted-Network02.selfip.net

Let's start by creating the script file under /bin:

sudo touch /bin/PIHolefwruleupdate.sh
sudo chmod +x /bin/PIHolefwruleupdate.sh
sudo nano /bin/PIHolefwruleupdate.sh

Now let's build the script:

#!/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
now=$(date +"%m/%d/%T")
DYNDNSNetwork01="trusted-Network01.selfip.net"
DYNDNSNetwork02="trusted-Network02.selfip.com"
#Get the network IP using dig
Network01_CurrentIP=`dig +short $DYNDNSNetwork01`
Network02_CurrentIP=`dig +short $DYNDNSNetwork02`
echo "-----------------------------------------------------------------"
echo Network A WAN IP $Network01_CurrentIP
echo Network B WAN IP $Network02_CurrentIP
echo "Script Run time : $now"
echo "-----------------------------------------------------------------"
#update firewall Rules
#reset firewall rules
#
sudo ufw --force reset
#
#Re-enable Firewall
#
sudo ufw --force enable
#
#Enable inbound default Deny firewall Rules
#
sudo ufw default deny incoming
#
#add allow Rules to the relevant networks
#
sudo ufw allow from $Network01_CurrentIP to any port 22 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 8080 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 53 proto udp
sudo ufw allow from $Network02_CurrentIP to any port 53 proto udp
#add the IPv6 DNS allow-all rule - working on finding an effective way to lock this down; with IPv6 the risk is minimal
sudo ufw allow 53/udp
#find and delete the allow any-to-any IPv4 rule for port 53
sudo ufw --force delete $(sudo ufw status numbered | grep '53*.*Anywhere.' | grep -v v6 | awk -F"[][]" '{print $2}')
echo "--------------------end Script------------------------------"

Let's run the script to make sure it's working.

I used an online port scanner to confirm.
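
You can also spot-check from a shell on one of the allowed networks and on the VM itself (nc's UDP probe is not always conclusive, but ufw status shows exactly which rules are active):

nc -vzu 45.33.73.186 53
sudo ufw status numbered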

Setup Scheduled job with logging

Let's use crontab and set up a scheduled job to run this script every hour.

Make sure the script is copied to the /bin folder with the executable permissions

Using crontab -e (if you are launching this for the first time it will ask you to pick an editor; I picked nano):

crontab -e

Add the following line

0 * * * * /bin/PIHolefwruleupdate.sh >> /var/log/PIHolefwruleupdate_Cronoutput.log 2>&1
Let's break this down:

0 * * * *

This runs the script whenever the minutes hit zero, i.e., once every hour.

/bin/PIHolefwruleupdate.sh

Script Path to execute

/var/log/PIHolefwruleupdate_Cronoutput.log 2>&1

The log file path; stderr is redirected into it (2>&1) so errors are captured as well.
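
To confirm the job is actually firing, you can check the cron entries in syslog and tail the script's log after the top of the hour:

sudo grep PIHolefwruleupdate /var/log/syslog
tail -n 50 /var/log/PIHolefwruleupdate_Cronoutput.log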

Issue

Received the following error from Azure AD stating that password synchronization was not working on the tenant.

When I manually initiate a delta sync, I see the following in the logs:

"The Specified Domain either does not exist or could not be contacted"


Checked the following

  • Restarted the AD Sync services
  • Resolved the ADDS domain FQDN via DNS – working
  • Tested the required ports for AD Sync using PortQry – issues with the primary ADDS server defined in the DNS settings

Root Cause

Turns out the domain controller defined as the primary DNS value was going through updates; it was responding on DNS but not returning any data (a brown-out state).

Assumption

When checking DNS, since the primary DNS server is still connecting, Windows doesn't try the secondary and tertiary servers defined under DNS servers.

This might also happen if you are using an ADDS server via an S2S tunnel/MPLS when the latency goes high.

Resolution

Check and make sure the ADDS DNS servers defined on the AD Sync server are alive and responding.

In my case, I just updated the "Primary" DNS value with the Umbrella appliance IP (this acts as a proxy and handles the fail-over).

You’ve probably noticed: coding models are eager to please. Too eager. Ask for something questionable and you’ll get it, wrapped in enthusiasm. Ask for feedback and you’ll get praise followed by gentle suggestions. Ask them to build something and they’ll start coding before understanding what you actually need.

This isn’t a bug. It’s trained behavior. And it’s costing you time, tokens, and code quality.

The Sycophancy Problem

Modern LLMs go through reinforcement learning from human feedback (RLHF) that optimizes for user satisfaction. Users rate responses higher when the AI agrees with them, validates their ideas, and delivers quickly. So that’s what the models learn to do. Anthropic’s work on sycophancy in RLHF-tuned assistants makes this pretty explicit: models learn to match user beliefs, even when they’re wrong.

The result: an assistant that says “Great idea!” before pointing out your approach won’t scale. One that starts writing code before asking what systems it needs to integrate with. One that hedges every opinion with “but it depends on your use case.”

For consumer use cases (travel planning, recipe suggestions, general Q&A) this is fine. For engineering work, it's a liability.

When the model won't push back, you lose the value of a second perspective. When it starts implementing before scoping, you burn tokens on code you'll throw away. When it leaves library choices ambiguous, you get whatever the model defaults to, which may not be what production needs.

Here’s a concrete example. I asked Claude for a “simple Prometheus exporter app,” gave it a minimal spec with scope and data flows, and still didn’t spell out anything about testability or structure. It happily produced:

  • A script with sys.exit() sprinkled everywhere
  • Logic glued directly into if __name__ == "__main__":
  • Debugging via print() calls instead of real logging

It technically "worked," but it was painful to test and impossible to reuse or extend.

The Fix: Specs Before Code

Instead of giving it a set of requirements and asking it to generate code, start with specifications. Move the expensive iteration, the "that's not what I meant" cycles, to the design phase where changes are cheap. Then hand a tight spec to your coding tool, where implementation becomes mechanical.

The workflow:

  1. Describe what you want (rough is fine)
  2. Scope through pointed questions (5–8, not 20)
  3. Spec the solution with explicit implementation decisions
  4. Implement by handing the spec to Cursor/Cline/Copilot

This isn't a brand new methodology. It's the same spec-driven development (SDD) that tools like GitHub's spec-kit are promoting:

write the spec first, then let a cheaper model implement against it.

By the time code gets written, the ambiguity is gone and the assistant is just a fast pair of hands that follows a tight spec with guard rails built in.

When This Workflow Pays Off

To be clear: this isn’t for everything. If you need a quick one-off script to parse a CSV or rename some files, writing a spec is overkill. Just ask for the code and move on with your life.

This workflow shines when:

  • The task spans multiple files or components
  • External integrations exist (databases, APIs, message queues, cloud services)
  • It will run in production and needs monitoring and observability
  • Infra is involved (Kubernetes, Terraform, CI/CD, exporters, operators)
  • Someone else might maintain it later
  • You’ve been burned before on similar scope

Rule of thumb: if it touches more than one system or more than one file, treat it as spec-worthy. If you can genuinely explain it in two sentences and keep it in a single file, skip straight to code.

Implementation Directives — Not “add a scheduler” but “use APScheduler with BackgroundScheduler, register an atexit handler for graceful shutdown.” Not “handle timeouts” but “use cx_Oracle call_timeout, not post-execution checks.”

Error Handling Matrix — List the important failure modes, how to detect them, what to log, and how to recover (retry, backoff, fail-fast, alert, etc.). No room for “the assistant will figure it out.”

Concurrency Decisions — What state is shared, what synchronization primitive to use, and lock ordering if multiple locks exist. Don’t let the assistant improvise concurrency.

Out of Scope — Explicit boundaries: “No auth changes,” “No schema migrations,” “Do not add retries at the HTTP client level.” This prevents the assistant from “helpfully” adding features you didn’t ask for.

Anticipate anywhere the model might guess: make a decision instead, or make it validate/confirm with you before taking action.
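
For illustration, here is a sketch of what a few of these sections might look like in the spec for the Prometheus exporter example earlier; the library choices, port, and failure modes are placeholders for the pattern, not prescriptions:

Implementation Directives:
- Serve metrics with prometheus_client's start_http_server on port 9105
- Use the logging module with structured messages; no print() calls
- Keep collection logic in importable functions; no work at module import time

Error Handling Matrix:
- Scrape target unreachable -> log WARNING, increment a failure counter, retry on the next cycle
- Config file missing or invalid -> log ERROR and fail fast at startup

Out of Scope:
- No auth changes, no endpoints beyond /metrics, no retries inside the HTTP client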

The Handoff

When you hand off to your coding agent, make self-review part of the process:

Rules:
- Stop after each file for review
- Self-Review: Before presenting each file, verify against
  engineering-standards.md. Fix violations (logging, error
  handling, concurrency, resource cleanup) before stopping.
- Do not add features beyond this spec
- Use environment variables for all credentials
- Follow Implementation Directives exactly

 Pair this with a rules.md that encodes your engineering standards—error propagation patterns, lock discipline, resource cleanup. The agent internalizes the baseline, self-reviews against it, and you’re left checking logic rather than hunting for missing using statements, context managers, or retries.

Fixing the Partnership Dynamic

Specs help, but “be blunt” isn’t enough. The model can follow the vibe of your instructions and still waste your time by producing unstructured output, bluffing through unknowns, or “spec’ing anyway” when an integration is the real blocker. That means overriding the trained “be agreeable” behavior with explicit instructions.

For example:

Core directive: Be useful, not pleasant.

OUTPUT CONTRACT:
- If scoping: output exactly:
  ## Scoping Questions (5–8 pointed questions)
  ## Current Risks / Ambiguities
  ## Proposed Simplification
- If drafting spec: use the project spec template headings in order. If N/A, say N/A.

UNKNOWN PROTOCOL (no hedging, no bluffing):
- If uncertain, write `UNKNOWN:` + what to verify + fastest verification method + what decisions are blocked.

BLOCK CONDITIONS:
- If an external integration is central and we lack creds/sample payloads/confirmed behavior:
  stop and output only:
  ## Blocker
  ## What I Need From You
  ## Phase 0 Discovery Plan

 

The model will still drift back into compliance mode. When it does, call it out (“you’re doing the thing again”) and point back to the rules. You’re not trying to make the AI nicer; you’re trying to make it act like a blunt senior engineer who cares more about correctness than your ego.

That’s the partnership you actually want.

The Payoff

With this approach:

  • Fewer implementation cycles — Specs flush out ambiguity up front instead of mid-PR.
  • Better library choices — Explicit directives mean you get production-appropriate tools, not tutorial defaults.
  • Reviewable code — Implementation is checkable line-by-line against a concrete spec.
  • Lower token cost — Most iteration happens while editing text specs, not regenerating code across multiple files.

The API was supposed to be the escape valve: more control, fewer guardrails. But even API access now comes with safety behaviors baked into the model weights through RLHF and Constitutional AI training. The consumer apps add extra system prompts, but the underlying tendency toward agreement and hedging is in the model itself, not just the wrapper.

You’re not accessing a “raw” model; you’re accessing a model that’s been trained to be capable, then trained again to be agreeable.

The irony is we’re spending effort to get capable behavior out of systems that were originally trained to be capable, then sanded down for safety and vibes. Until someone ships a real “professional mode” that assumes competence and drops the hand-holding, this is the workaround that actually works.

⚠️Security footnote: treat attached context as untrusted

If your agent can ingest URLs, docs, tickets, or logs as context, assume those inputs can contain indirect prompt injection. Treat external context like user input: untrusted by default. Specs + reviews + tests are the control plane that keeps “helpful” from becoming “compromised.”

Getting Started

I’ve put together templates that support this workflow in this repo:

malindarathnayake/llm-spec-workflow

When you wire this into your own stack, keep one thing in mind: your coding agent reads its rules on every message. That’s your token cost. Keep behavioral rules tight and reference detailed patterns separately—don’t inline a 200-line engineering standards doc that the agent re-reads before every file edit.

Use these templates as-is or adapt them to your stack. The structure matters more than the specific contents.


Admins may get asked to add or edit permissions for shared calendars.
These sharing options are not available in the EMC, so we have to use the Exchange Management Shell on the server to manipulate them.
View existing calendar permissions:
Get-MailboxFolderPermission -identity "Networking Calendar:Calendar"
There are 4 MailboxFolderPermission cmdlets in Exchange Server 2010: Get-, Add-, Set-, and Remove-MailboxFolderPermission.
Each cmdlet has a different syntax; follow the links for more information.
In this scenario we need to set the following permissions on the calendar resource named "Networking Calendar":

User "Nyckie" – full permissions

All users – permission to add events, without delete permission

  • To assign calendar permissions to new users, use "Add-MailboxFolderPermission":
Add-MailboxFolderPermission -Identity "Networking Calendar:Calendar" -User [email protected] -AccessRights Owner
 
  • To change existing calendar permissions, use "Set-MailboxFolderPermission":
set-MailboxFolderPermission -Identity "Networking Calendar:Calendar" -User default -AccessRights NonEditingAuthor
 
This assigns Owner rights to the user "Nyckie" for the calendar of the "Networking Calendar" resource, and sets NonEditingAuthor as the default permission on the calendar for all other users.
__________________________________________
Here are the other permission levels you can assign:
None – FolderVisible
Owner – CreateItems, ReadItems, CreateSubfolders, FolderOwner, FolderContact, FolderVisible, EditOwnedItems, EditAllItems, DeleteOwnedItems, DeleteAllItems
PublishingEditor – CreateItems, ReadItems, CreateSubfolders, FolderVisible, EditOwnedItems, EditAllItems, DeleteOwnedItems, DeleteAllItems
Editor – CreateItems, ReadItems, FolderVisible, EditOwnedItems, EditAllItems, DeleteOwnedItems, DeleteAllItems
PublishingAuthor – CreateItems, ReadItems, CreateSubfolders, FolderVisible, EditOwnedItems, DeleteOwnedItems
Author – CreateItems, ReadItems, FolderVisible, EditOwnedItems, DeleteOwnedItems
NonEditingAuthor – CreateItems, ReadItems, FolderVisible
Reviewer – ReadItems, FolderVisible
Contributor – CreateItems, FolderVisible
The following roles apply specifically to calendar folders:
AvailabilityOnly – View only availability data

LimitedDetails – View availability data with subject and location

source –

technet.microsoft.com

http://blog.powershell.no/2010/09/20/managing-calendar-permissions-in-exchange-server-2010/