
Hello internetzzz

As an Administrator, you might run into situations that require you to deploy UI customizations, such as a customized Ribbon, Quick Access Toolbar, etc., for Office applications on user computers, or in my case, terminal servers.

Here is a quick and dirty guide on how to do this via Group Policy.

For instance, let's say we have to deploy a button that launches a third-party productivity program within Outlook and MS Word.

First off, make the necessary changes to Outlook or Word on a client PC running MS Office.

To customize the Ribbon

  • On the File tab, click Options, and then click Customize Ribbon to open the Ribbon customization dialog.

To customize the Quick Access Toolbar

  • On the File tab, click Options, and then click Quick Access Toolbar to open the Quick Access Toolbar customization dialog.

You can also export your Ribbon and Quick Access Toolbar customizations into a file.

 

When we make changes to the default Ribbon, these user customizations are saved as .officeUI files under

%localappdata%\Microsoft\Office

The file names will differ according to the Office program and the portion of the Ribbon UI you customized.

Application | Description of Ribbon file | .officeUI file name
--- | --- | ---
Outlook 2010 | Outlook Explorer | olkexplorer.officeUI
Outlook 2010 | Contact | olkaddritem.officeUI
Outlook 2010 | Appointment/Meeting (organizer on compose, organizer after compose, attendee) | olkapptitem.officeUI
Outlook 2010 | Contact Group (formerly known as Distribution List) | olkdlstitem.officeUI
Outlook 2010 | Journal Item | olklogitem.officeUI
Outlook 2010 | Mail Compose | olkmailitem.officeUI
Outlook 2010 | Mail Read | olkmailread.officeUI
Outlook 2010 | Multimedia Message Compose | olkmmsedit.officeUI
Outlook 2010 | Multimedia Message Read | olkmmsread.officeUI
Outlook 2010 | Received Meeting Request | olkmreqread.officeUI
Outlook 2010 | Forward Meeting Request | olkmreqsend.officeUI
Outlook 2010 | Post Item Compose | olkpostitem.officeUI
Outlook 2010 | Post Item Read | olkpostread.officeUI
Outlook 2010 | NDR | olkreportitem.officeUI
Outlook 2010 | Send Again Item | olkresenditem.officeUI
Outlook 2010 | Counter Response to a Meeting Request | olkrespcounter.officeUI
Outlook 2010 | Received Meeting Response | olkresponseread.officeUI
Outlook 2010 | Edit Meeting Response | olkresponsesend.officeUI
Outlook 2010 | RSS Item | olkrssitem.officeUI
Outlook 2010 | Sharing Item Compose | olkshareitem.officeUI
Outlook 2010 | Sharing Item Read | olkshareread.officeUI
Outlook 2010 | Text Message Compose | olksmsedit.officeUI
Outlook 2010 | Text Message Read | olksmsread.officeUI
Outlook 2010 | Task Item (Task/Task Request, etc.) | olktaskitem.officeUI
Access 2010 | Access Ribbon | Access.officeUI
Excel 2010 | Excel Ribbon | Excel.officeUI
InfoPath 2010 | InfoPath Designer Ribbon | IPDesigner.officeUI
InfoPath 2010 | InfoPath Editor Ribbon | IPEditor.officeUI
OneNote 2010 | OneNote Ribbon | OneNote.officeUI
PowerPoint | PowerPoint Ribbon | PowerPoint.officeUI
Project 2010 | Project Ribbon | MSProject.officeUI
Publisher 2010 | Publisher Ribbon | Publisher.officeUI
SharePoint 2010 | SharePoint Workspaces Ribbon | GrooveLB.officeUI
SharePoint 2010 | SharePoint Workspaces Ribbon | GrooveWE.officeUI
SharePoint Designer 2010 | SharePoint Designer Ribbon | spdesign.officeUI
Visio 2010 | Visio Ribbon | Visio.officeUI
Word 2010 | Word Ribbon | Word.officeUI

You can take these files and push them out via Group Policy using a simple startup script:

@echo off
setlocal
set userdir=%localappdata%\Microsoft\Office
set remotedir=\\MyServer\LogonFiles\public\OfficeUI
for %%r in (Word Excel PowerPoint) do if not exist "%userdir%\%%r.officeUI" copy "%remotedir%\%%r.officeUI" "%userdir%\%%r.officeUI"
endlocal

This is a basic script that copies the .officeUI files from a network share into the user's local AppData directory if no .officeUI file currently exists there.
It can easily be modified to use the roaming AppData directory (replace %localappdata% with %appdata%) or to include additional Ribbon customizations.

 

Managing Office suite settings via Group Policy

Download and import the ADM templates into the Group Policy Object Editor.
This will allow you to manage settings such as security, UI-related options, the Trust Center, etc. for Office 2010 using GPO.

Download Office 2010 Administrative Template files (ADM, ADMX/ADML)

Hopefully this will be helpful to someone..
Until next time, ciao

This is a guide to show you how to enroll your servers/desktops to allow PowerShell Remoting (WinRM) over HTTPS.

Assumptions

  • You have a working Root CA on the ADDS environment – Guide
  • CRL and AIA are configured properly – Guide
  • Root CA cert is pushed out to all Servers/Desktops – This happens by default

Contents

  1. Setup CA Certificate template
  2. Deploy Auto-enrolled Certificates via Group Policy
  3. Powershell logon script to set the WinRM listener
  4. Deploy the script as a logon script via Group Policy
  5. Testing

1 – Set up the CA certificate template to allow client servers/desktops to check out certificates from the CA

Connect to the Certification Authority Microsoft Management Console (MMC)

Navigate to Certificate Templates > Manage

On the “Certificate templates Console” window > Select Web Server > Duplicate Template

Under the new Template window Set the following attributes

General – Pick a Name and Validity Period – This is up to you

Compatibility – Set the compatibility attributes (you can leave these at the default values; it's up to you)

Subject Name – Set ‘Subject Name’ attributes (Important)

Security – Add “Domain Computers” Security Group and Set the following permissions

  • Read – Allow
  • Enroll – Allow
  • Autoenroll – Allow

Click “OK” to save and close out of “Certificate template console”

Issue the new template

Go back to the Certification Authority Microsoft Management Console (MMC)

Under templates (Right click the empty space) > Select New > Certificate template to Issue

Under the Enable Certificate template window > Select the Template you just created

Allow a few minutes for ADDS to replicate and pick up the changes within the forest

2 – Deploy Auto-enrolled Certificates via Group Policy

Create a new GPO

Windows Settings > Security Settings > Public Key Policies/Certificate Services Client – Auto-Enrollment Settings

Link the GPO to the relevant OU with in your ADDS environment

Note – You can push out the root CA cert as a trusted root certificate with this same policy if you want to force computers to pick up the CA cert.

Testing

If you need to test it, run gpupdate /force or reboot your test machine; the server VM/PC will pick up a certificate from the AD CS PKI.
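To quickly confirm a certificate was issued to the machine store, something like this works (the issuer name below matches the $RootCA placeholder used in the script later in this post; change it to your CA's name):

Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Issuer -like "CN=Company-Root-CA*" } |
    Select-Object Subject, NotAfter, Thumbprint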

3 – PowerShell logon script to set up the WinRM listener

Dry run

  • Set up the log file
  • Check for a certificate matching the machine's FQDN, auto-enrolled from AD CS
  • If it exists
    • Set up the HTTPS WinRM listener and bind the certificate
    • Write to the log
  • else
    • Write to the log
#Malinda Rathnayake - 2020
#
#Variables
$Date = Get-Date -Format "dd_MM_yy"
$port = 5986
$SessionRunTime = Get-Date -Format "dd_yyyy_HH-mm"
#
#Set up the logs folder and log file
$ScriptVersion = '1.0'
$locallogPath = "C:\_Scripts\_Logs\WINRM_HTTPS_ListenerBinding"
#
$logging_Folder = (New-Item -Path $locallogPath -ItemType Directory -Name $Date -Force)
$ScriptSessionlogFile = New-Item $logging_Folder\ScriptSessionLog_$SessionRunTime.txt -Force
$ScriptSessionlogFilePath = $ScriptSessionlogFile.VersionInfo.FileName
#
#Check for the auto-enrolled SSL cert
$RootCA = "Company-Root-CA" #Change this
$hostname = ([System.Net.Dns]::GetHostByName(($env:computerName))).Hostname
$certinfo = (Get-ChildItem -Path Cert:\LocalMachine\My\ | ? {($_.Subject -Like "CN=$hostname") -and ($_.Issuer -Like "CN=$RootCA*")})
$certThumbprint = $certinfo.Thumbprint
#
#Script-------------------------------------------------------
#
#Remove the existing HTTPS WinRM listener if there is one
Get-ChildItem WSMan:\Localhost\Listener | Where -Property Keys -eq "Transport=HTTPS" | Remove-Item -Recurse -Force
#
#If the client certificate exists, set up the WinRM HTTPS listener with the cert; else write to the log
if ($certThumbprint) {
    #
    New-Item -Path WSMan:\Localhost\Listener -Transport HTTPS -Address * -CertificateThumbprint $certThumbprint -HostName $hostname -Force
    #
    netsh advfirewall firewall add rule name="Windows Remote Management (HTTPS-In)" dir=in action=allow protocol=TCP localport=$port
    #
    Add-Content -Path $ScriptSessionlogFilePath -Value "Cert binding to the WinRM HTTPS listener completed"
    Add-Content -Path $ScriptSessionlogFilePath -Value "$($certinfo.Subject)"
}
else {
    Add-Content -Path $ScriptSessionlogFilePath -Value "No cert matching the server FQDN found. Please run gpupdate /force or reboot the system"
}

The script is commented to explain each section. (I should have used functions, but I was pressed for time and never got around to it; if you fix it up and improve it, please let me know in the comments :D)

4 – Deploy the script as a logon script via Group Policy

Set up a GPO and add this script as a PowerShell logon script

I'm using a user policy with GPO loopback processing set to Merge, applied to the server OU

5 – Testing

To confirm WinRM is listening on HTTPS, type the following commands:

winrm enumerate winrm/config/listener
Winrm get http://schemas.microsoft.com/wbem/wsman/1/config
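
From another domain-joined machine you can also verify the listener end to end; the hostname below is a placeholder for one of your enrolled servers:

Test-WSMan -ComputerName server01.corp.example.com -UseSSL
Enter-PSSession -ComputerName server01.corp.example.com -UseSSL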

Sources that helped me

https://docs.microsoft.com/en-us/troubleshoot/windows-client/system-management-components/configure-winrm-for-https

https://gmusumeci.medium.com/get-rid-of-those-annoying-self-signed-certificates-with-microsoft-certificate-services-part-3-9d4b8e819f45

http://vcloud-lab.com/entries/powershell/powershell-remoting-over-https-using-self-signed-ssl-certificate


Hi Internetz, it's been a while…

So we had an old Firebox X700 laying around the office gathering dust. I saw this forum post about running m0nowall on the device. Since pfSense is based on m0nowall, I googled around for a way to install pfSense on it and found several threads on the pfSense forums.
It took me a little while to comb through thousands of posts to find a proper way to go about this, and some more time was spent troubleshooting the issues I ran into during installation and configuration. So I'm putting everything I found in this post, to save you the time spent googling around. This should work for the other Firebox models as well.

What you need :

Hardware

  • Firebox 
  • Female to Female Serial Cable – link
  • 4GB CF Card (1GB or 2GB will work, but personally I would recommend at least 4GB)
  • CF Card Reader

Software

  • pfsense NanoBSD
  • physdiskwrite –  Download
  • TeraTerm Pro Web – Enhanced Telnet/SSH2 Client – Download

The Firebox X700

This is basically a small x86 PC: an Intel Celeron CPU running at 1.2GHz with 512MB of RAM. The system boots off a CF card containing the WatchGuard firmware.
The custom Intel motherboard used in the device does not include a VGA or DVI port, so we have to use the serial port for all communication with the device.

There are several methods to run pfsense on this device.

HDD

Install pfSense on a PC and plug the HDD into the Firebox.

This requires a bit more effort because we need to change the boot order in the BIOS, and it's kind of hard to find IDE laptop HDDs these days.

CF card

This is a very straightforward method: we simply swap out the CF card already installed in the device and boot pfSense from it.


In this tutorial we are using the CF card method

Installing pfSense

  • Download the relevant pfsense image


Since we are using a CF card, we need to use the pfSense version built for embedded devices.

The NanoBSD version is built specifically for CF cards and other storage media with a limited read/write life cycle.

Since we are using a 4GB CF card, we are going to use the 4G image.

  • Flashing the nanoBSD image to the CF card


Extract the physdiskwrite package and run PhysGUI.exe.
The software is written in German, I think, but operating it is not that hard.

Select the CF card from the list.

Note: if you are not sure about the disk device ID, use diskpart to determine the disk number.
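
For example, from an elevated command prompt (the disk numbers shown will differ on your machine):

diskpart
DISKPART> list disk
DISKPART> exit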

Load the ISO file.
Right-click on the disk: “Image laden > öffnen” (load image > open).

Select the ISO file in the “open file” window.
The program will prompt you with the following dialog box:

 


Select the “remove 2GB restriction” option and click “OK”.
It will warn you about the disk being formatted (I think); click yes to start the flashing process. A CMD window will open and show you the progress.

  • Installing the CF card on the Firebox

Once the flashing process is complete, open up the Firebox and remove the drive cage to gain access to the installed CF card.

Remove the protective glue and replace the card with the new CF card flashed with the pfSense image.

  • Booting up and configuring PFsense

Since the Firebox has no display output or peripheral ports, we need to use a serial connection to communicate with the device.

Install the “TeraTerm Pro Web” program we downloaded earlier.

I tried PuTTY and several other telnet clients, but they didn't work properly.

Open up the terminal window

Connect the firebox to the PC using the serial cable, and power it up

Select “Serial”, choose the COM port the device is connected to, and click OK (you can check the port in Device Manager).

  
Many other tutorials say to change the baud rate, but the defaults worked just fine for me.
Since we already flashed the pfSense image to the CF card, we do not need to install the OS.

By now the terminal window should be showing the pfSense configuration prompts, just as with a normal fresh install.

It will ask you to set up VLANs.

Assign the WAN, LAN, OPT1 interfaces.

On the X700, the interface names are as follows:

Please refer to the pfSense docs for more info on setting up:


http://doc.pfsense.org/index.php/Tutorials#Advanced_Tutorials


After the initial config is completed, you no longer need the console cable and Tera Term;
you will be able to access pfSense via the web interface and good ol' SSH on the LAN IP.



Additional configuration

  • Enabling the LCD panel

All Firebox units have an LCD panel on the front.
We can use the pfSense LCDproc-dev package to enable it and display various information.

Install the LCDproc-dev Package via the package Manager

Go to Services > LCDProc

Set the settings as follows


Hope this article helped you guys. Don't forget to leave a comment with your thoughts.

Sources –

http://forum.pfsense.org/index.php?board=5.0

This guide covers how to add an NFS StorageClass and a dynamic provisioner to Kubernetes using the nfs-subdir-external-provisioner Helm chart. This enables us to mount NFS shares dynamically for PersistentVolumeClaims (PVCs) used by workloads.

Example use cases:

  • Database migrations
  • Apache Kafka clusters
  • Data processing pipelines

Requirements:

  • An accessible NFS share exported with: rw,sync,no_subtree_check,no_root_squash
  • NFSv3 or NFSv4 protocol
  • Kubernetes v1.31.7+ or RKE2 with rke2r1 or later

 

Let's get to it.


1. NFS Server Export Setup

Ensure your NFS server exports the shared directory correctly:

# /etc/exports
/rke-pv-storage  worker-node-ips(rw,sync,no_subtree_check,no_root_squash)

 

  • Replace worker-node-ips with actual IPs or CIDR blocks of your worker nodes.
  • Run sudo exportfs -r to reload the export table.
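
To double-check the share is actually being advertised before moving on, run these on the NFS server (output will vary by distro):

sudo exportfs -r          # reload /etc/exports
sudo exportfs -v          # list active exports and their options
showmount -e localhost    # confirm the share is advertised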

2. Install NFS Subdir External Provisioner

Add the Helm repo and install the provisioner:

helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

helm install nfs-client-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace kube-system \
  --set nfs.server=192.168.162.100 \
  --set nfs.path=/rke-pv-storage \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=false

Notes:

  • If you want this to be the default storage class, change storageClass.defaultClass=true.
  • nfs.server should point to the IP of your NFS server.
  • nfs.path must be a valid exported directory from that NFS server.
  • storageClass.name can be referenced in your PersistentVolumeClaim YAMLs using storageClassName: nfs-client.

3. PVC and Pod Test

Create a test PVC and pod using the following YAML:

# test-nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
  - name: shell
    image: busybox
    command: [ "sh", "-c", "sleep 3600" ]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-nfs-pvc

 

Apply it:

kubectl apply -f test-nfs-pvc.yaml
kubectl get pvc test-nfs-pvc -w

 

Expected output:

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-nfs-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWX            nfs-client     30s

 


4. Troubleshooting

If the PVC remains in Pending, follow these steps:

Check the provisioner pod status:

kubectl get pods -n kube-system | grep nfs-client-provisioner

 

Inspect the provisioner pod:

kubectl describe pod -n kube-system <pod-name>
kubectl logs -n kube-system <pod-name>

 

Common Issues:

  • Broken State: Bad NFS mount
    mount.nfs: access denied by server while mounting 192.168.162.100:/pl-elt-kakfka

     

    • This usually means the NFS path is misspelled or not exported properly.
  • Broken State: root_squash enabled
    failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/…: permission denied

     

    • Fix by changing the export to use no_root_squash or chown the directory to nobody:nogroup.
  • ImagePullBackOff
    • Ensure nodes have internet access and can reach registry.k8s.io.
  • RBAC errors
    • Make sure the ServiceAccount used by the provisioner has permissions to watch PVCs and create PVs (a quick check is shown below).
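
A quick way to sanity-check those permissions is kubectl auth can-i, impersonating the provisioner's ServiceAccount (the name below assumes the Helm release/chart naming used in this guide; confirm it with kubectl get sa -n kube-system):

kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:kube-system:nfs-client-provisioner-nfs-subdir-external-provisioner
kubectl auth can-i watch persistentvolumeclaims \
  --as=system:serviceaccount:kube-system:nfs-client-provisioner-nfs-subdir-external-provisioner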

5. Healthy State Example

kubectl get pods -n kube-system | grep nfs-client-provisioner-nfs-subdir-external-provisioner
nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m   1/1     Running     0          3m39s

 

kubectl describe pod -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
# Output shows pod is Running with Ready=True

 

kubectl logs -n kube-system nfs-client-provisioner-nfs-subdir-external-provisioner-7992kq7m
...
I0512 21:46:03.752701       1 controller.go:1420] provision "default/test-nfs-pvc" class "nfs-client": volume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" provisioned
I0512 21:46:03.752763       1 volume_store.go:212] Trying to save persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d"
I0512 21:46:03.772301       1 volume_store.go:219] persistentvolume "pvc-73481f45-3055-4b4b-80f4-e68ffe83802d" saved
I0512 21:46:03.772353       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Name:"test-nfs-pvc"}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-73481f45-3055-4b4b-80f4-e68ffe83802d
...

 

Once test-nfs-pvc is bound and the pod starts successfully, your setup is working. You can now safely use storageClass: nfs-client in other workloads (e.g., Strimzi KafkaNodePool).
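
For example, a Strimzi KafkaNodePool could reference the class in its storage section like this (an illustrative sketch; the size and deleteClaim values are placeholders):

# excerpt from a KafkaNodePool spec
spec:
  storage:
    type: persistent-claim
    size: 10Gi
    class: nfs-client
    deleteClaim: false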


Recently we had a requirement to check SMTP on two different servers and run a script if both servers failed. I googled around for a tool but ended up putting together this script.

It's not the prettiest but it works, and I'm sure you guys will make something much better out of it.

# Define the host names here for the servers that need to be monitored
$servers = "relay1.host.com","relay2.host.com"
# Define the port number
$tcp_port = 25

# Loop through each host to get an individual result
ForEach ($srv in $servers) {

    $tcpClient = New-Object System.Net.Sockets.TcpClient

    # Connect() throws if the host is unreachable, so wrap it in try/catch;
    # on failure $tcpClient.Connected simply stays $false
    try {
        $tcpClient.Connect($srv, $tcp_port)
    }
    catch { }

    If ($tcpClient.Connected) {
        Write-Host "$srv is online"
    }
    Else {
        Write-Host "$srv is offline"
    }

    $tcpClient.Dispose()
}
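
The original requirement was to trigger an action only when both relays are down; here is a minimal sketch of that extension (the failover script path is a placeholder):

# Count failures and run a follow-up action only if every relay is down
$servers = "relay1.host.com","relay2.host.com"
$tcp_port = 25
$failed = 0

ForEach ($srv in $servers) {
    $tcpClient = New-Object System.Net.Sockets.TcpClient
    try { $tcpClient.Connect($srv, $tcp_port) } catch { }
    if (-not $tcpClient.Connected) { $failed++ }
    $tcpClient.Dispose()
}

if ($failed -eq $servers.Count) {
    # Placeholder - replace with whatever should run when all relays fail
    & "C:\Scripts\Invoke-Failover.ps1"
}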

If something is wrong or if you think there is a better way, please feel free to comment and let everyone know. It's all about community, after all.

Update 4/18/2016 –

Updated the script with the one provided by Donald Gray – Thanks a lot : )


DISCLAIMER: No copyright infringement intended. This article is for entertainment and educational purposes only.


Alright!! Now that's out of the way, I'm going to keep this short and simple.


Scope : –

  • Install the OpenVPN client
  • Import a profile with your username and password
  • Connect to your preferred VPN server


Use case : – 

  • Secure your FireTV traffic using any OpenVPN-supported VPN service
  • Connect to your home file server/NAS and stream files when traveling via your FireTV or Firestick using your own VPN server (not covered in this article)
  • Watch Streaming services when traveling using your own VPN server (not covered in this article)
 
 
 
Guide :- 


Project Summary 

Hardware – FireTV 4K Latest firmware 

Platform – Windows 10 Enterprise

In this guide I'm using ADB to install the OpenVPN client on my FireTV and using it to connect to the NordVPN service.

All project files are located in C:\NoRDVPN


Files Needed (Please download these files to your workstation before proceeding)

OpenVPN client APK – http://plai.de/android/

NORDVPN OpenVPN configuration files – https://nordvpn.com/ovpn/

ADBLink – http://jocala.com

01. Enable Developer mode on Fire tv 

http://www.aftvnews.com/how-to-enable-adb-debugging-on-an-amazon-fire-tv-or-fire-tv-stick/

  1. From the Fire TV or Fire TV Stick’s home screen, scroll to “Settings”.
  2. Next, scroll to the right and select “Device”.
  3. Next, scroll down and select “Developer options”.
  4. Then select “ADB debugging” to turn the option to “ON”.
 
02. Install the OpenVPN client over the network using adbLink
 
Install the ADBlink program
 
Download URL – http://jocala.com
 
Create Device profile and connect 
 
Launch ADBLink and click on “New”
 
 
Fill out the information 
 
Notes – 
 
Address – this is the IP assigned to your FireTV; you can get this from the FireTV network status page under

“Settings” > “System” > “About” > “Network”

You can also get this information from your ARP table, the DHCP leases on your DHCP server, etc.
Leave everything else with default values and save the profile
 
Install the APK using adbLink
 
 
Browse to the location you downloaded all the files to and select the OpenVPN APK file.

In this guide the location is “C:\NoRDVPN”.
After a successful install, you will be greeted with the following dialog box.
03. Configure and copy (ADB push) the OVPN configuration files
 
This step is very important.
 
03-01 Create the login configuration file
 
In the same folder where you downloaded the files (example – C:\NoRDVPN),

create a text file with the following name – login.conf

Edit the file with your favorite text editor.
Enter your NordVPN credentials on two separate lines (email address and password), then save the file.
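
For example, login.conf would look like this (placeholder values; use your own NordVPN email and password):

you@example.com
YourNordVpnPassword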
 
 
03-02 Edit the VPN configuration file

Open the VPN configuration file; in my case I picked a US server, so my filename is

us226.nordvpn.com.udp1194.ovpn

Find the line that reads “auth-user-pass” and replace it with “auth-user-pass login.conf”.
Save Changes
 
04. Push the configuration files to the FireTV
 
 
  • Click on “File Manager” in adbLink
               Note – by default, it will connect to the root of the SD card on your FireTV
  • Create a folder (I'm going to call it NORD_VPN)

Find the created “NORD_VPN” folder and double-click on it in the left window pane.
Click on “Push”.
 
 
Browse to the folder (C:\NoRDVPN) and select the two configuration files.
 
Note – 
 
Use Shift to select multiple files
 
The files will be pushed to the FireTV as soon as you click Choose.
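
If you prefer the plain adb command line over adbLink, the equivalent steps look roughly like this (the IP address and APK filename are placeholders; the folder name matches the one created above):

adb connect 192.168.1.50:5555
adb install OpenVPN.apk
adb shell mkdir -p /sdcard/NORD_VPN
adb push login.conf /sdcard/NORD_VPN/
adb push us226.nordvpn.com.udp1194.ovpn /sdcard/NORD_VPN/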
 
 
 
Now we are done with the work from your workstation
 
By the time you reach this step you will have completed the following 
 
  • Installed OpenVPN on the FireTV system
  • Customized the VPN configuration files
  • Copied the VPN configuration files to the Root of the SDcard on the FireTV system
Note – Next steps are really simple and you only need the fireTV remote to complete these
 
05. Import the VPN profile on the FireTV and connect
 
 
Browse to your Apps and Games > See All 
 

Select and launch OpenVPN Client

Use the + sign to add a profile 

Click Import

Browse and Select the ovpn configuration file using the browser 

 
 
Click on the imported VPN profile to initiate the connection.
Under the “Settings” tab, make sure “use System proxy” is enabled.
Now your FireTV is routing traffic via the VPN.
 
This is the only outbound connection from the FireTV: it connects to the NordVPN server IP via the OpenVPN port, UDP 1194.
 
You can find this IP in the configuration file or by going to the OpenVPN logs Tab
 
Until next time….Stay Awesome Internetz : ) 

I was working on a vGPU POC using PVE since Broadcom screwed us with the vSphere licensing costs (new post incoming about this adventure).

Anyway, I needed to find the PCI-E slot used by the A4000 GPU on the host so I could disable it for troubleshooting.

Guide

First we need to find the occupied slots and the Bus address for each slot

sudo dmidecode -t slot | grep -E "Designation|Usage|Bus Address"

Output will show the Slot ID, Usage and then the Bus Address

        Designation: CPU SLOT1 PCI-E 4.0 X16
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT2 PCI-E 4.0 X8
        Current Usage: In Use
        Bus Address: 0000:41:00.0
        Designation: CPU SLOT3 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:c1:00.0
        Designation: CPU SLOT4 PCI-E 4.0 X8
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT5 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:c2:00.0
        Designation: CPU SLOT6 PCI-E 4.0 X16
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: CPU SLOT7 PCI-E 4.0 X16
        Current Usage: In Use
        Bus Address: 0000:81:00.0
        Designation: PCI-E M.2-M1
        Current Usage: Available
        Bus Address: 0000:ff:00.0
        Designation: PCI-E M.2-M2
        Current Usage: Available
        Bus Address: 0000:ff:00.0

We can use lspci -s #BusAddress# to see what's installed in each slot

lspci -s 0000:c2:00.0
c2:00.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1)

lspci -s 0000:81:00.0
81:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
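
If you want the mapping in one pass, a rough sketch like this (just combining the two commands above) also works; adjust to taste:

#!/bin/bash
# List each populated PCI-E slot with the device sitting behind it
sudo dmidecode -t slot |
awk -F': ' '
  /Designation/   { slot = $2 }
  /Current Usage/ { usage = $2 }
  /Bus Address/   { if (usage == "In Use") print slot "|" $2 }
' |
while IFS='|' read -r slot bus; do
    echo "$slot -> $(lspci -s "$bus")"
done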

I'm sure there is a more elegant way to do this, but it worked as a quick-ish way to find what I needed. If you know a better one, please share it in the comments.

Until next time!!!

Reference –

https://stackoverflow.com/questions/25908782/in-linux-is-there-a-way-to-find-out-which-pci-card-is-plugged-into-which-pci-sl

You’ve probably noticed: coding models are eager to please. Too eager. Ask for something questionable and you’ll get it, wrapped in enthusiasm. Ask for feedback and you’ll get praise followed by gentle suggestions. Ask them to build something and they’ll start coding before understanding what you actually need.

This isn’t a bug. It’s trained behavior. And it’s costing you time, tokens, and code quality.

The Sycophancy Problem

Modern LLMs go through reinforcement learning from human feedback (RLHF) that optimizes for user satisfaction. Users rate responses higher when the AI agrees with them, validates their ideas, and delivers quickly. So that’s what the models learn to do. Anthropic’s work on sycophancy in RLHF-tuned assistants makes this pretty explicit: models learn to match user beliefs, even when they’re wrong.

The result: an assistant that says “Great idea!” before pointing out your approach won’t scale. One that starts writing code before asking what systems it needs to integrate with. One that hedges every opinion with “but it depends on your use case.”

For consumer use cases (travel planning, recipe suggestions, general Q&A) this is fine. For engineering work, it's a liability.

When the model won't push back, you lose the value of a second perspective. When it starts implementing before scoping, you burn tokens on code you'll throw away. When it leaves library choices ambiguous, you get whatever the model defaults to, which may not be what production needs.

Here’s a concrete example. I asked Claude for a “simple Prometheus exporter app,” gave it a minimal spec with scope and data flows, and still didn’t spell out anything about testability or structure. It happily produced:

  • A script with sys.exit() sprinkled everywhere
  • Logic glued directly into if __name__ == "__main__":
  • Debugging via print() calls instead of real logging

It technically “worked,” but it was painful to test, impossible to reuse and extend.

The Fix: Specs Before Code

Instead of giving it a set of requirements and asking it to generate code, start with specifications. Move the expensive iteration, the “that's not what I meant” cycles, to the design phase where changes are cheap. Then hand a tight spec to your coding tool, where implementation becomes mechanical.

The workflow:

  1. Describe what you want (rough is fine)
  2. Scope through pointed questions (5–8, not 20)
  3. Spec the solution with explicit implementation decisions
  4. Implement by handing the spec to Cursor/Cline/Copilot

This isn't a brand new methodology. It's the same spec-driven development (SDD) that tools like GitHub's spec-kit are promoting: write the spec first, then let a cheaper model implement against it.

By the time code gets written, the ambiguity is gone and the assistant is just a fast pair of hands that follows a tight spec with guard rails built in.

When This Workflow Pays Off

To be clear: this isn’t for everything. If you need a quick one-off script to parse a CSV or rename some files, writing a spec is overkill. Just ask for the code and move on with your life.

This workflow shines when:

  • The task spans multiple files or components
  • External integrations exist (databases, APIs, message queues, cloud services)
  • It will run in production and needs monitoring and observability
  • Infra is involved (Kubernetes, Terraform, CI/CD, exporters, operators)
  • Someone else might maintain it later
  • You’ve been burned before on similar scope

Rule of thumb: if it touches more than one system or more than one file, treat it as spec-worthy. If you can genuinely explain it in two sentences and keep it in a single file, skip straight to code.

A useful spec pins down the decisions the assistant would otherwise guess at:

Implementation Directives — Not “add a scheduler” but “use APScheduler with BackgroundScheduler, register an atexit handler for graceful shutdown.” Not “handle timeouts” but “use cx_Oracle call_timeout, not post-execution checks.”

Error Handling Matrix — List the important failure modes, how to detect them, what to log, and how to recover (retry, backoff, fail-fast, alert, etc.). No room for “the assistant will figure it out.”

Concurrency Decisions — What state is shared, what synchronization primitive to use, and lock ordering if multiple locks exist. Don’t let the assistant improvise concurrency.

Out of Scope — Explicit boundaries: “No auth changes,” “No schema migrations,” “Do not add retries at the HTTP client level.” This prevents the assistant from “helpfully” adding features you didn’t ask for.

Anticipate Guessing — Anywhere the model might guess, make a decision instead, or make it validate/confirm with you before taking action.
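
Put together, the spec you hand to the coding agent might be organized along these lines (an illustrative skeleton, not the exact template from the repo linked below):

# Spec: <feature name>
## Goal & Context
## Scope / Out of Scope
## Interfaces & Data Flows
## Implementation Directives
## Error Handling Matrix
## Concurrency Decisions
## Acceptance Checks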

The Handoff

When you hand off to your coding agent, make self-review part of the process:

Rules:
- Stop after each file for review
- Self-Review: Before presenting each file, verify against
  engineering-standards.md. Fix violations (logging, error
  handling, concurrency, resource cleanup) before stopping.
- Do not add features beyond this spec
- Use environment variables for all credentials
- Follow Implementation Directives exactly

 Pair this with a rules.md that encodes your engineering standards—error propagation patterns, lock discipline, resource cleanup. The agent internalizes the baseline, self-reviews against it, and you’re left checking logic rather than hunting for missing using statements, context managers, or retries.

Fixing the Partnership Dynamic

Specs help, but “be blunt” isn’t enough. The model can follow the vibe of your instructions and still waste your time by producing unstructured output, bluffing through unknowns, or “spec’ing anyway” when an integration is the real blocker. That means overriding the trained “be agreeable” behavior with explicit instructions.

For example:

Core directive: Be useful, not pleasant.

OUTPUT CONTRACT:
- If scoping: output exactly:
  ## Scoping Questions (5–8 pointed questions)
  ## Current Risks / Ambiguities
  ## Proposed Simplification
- If drafting spec: use the project spec template headings in order. If N/A, say N/A.

UNKNOWN PROTOCOL (no hedging, no bluffing):
- If uncertain, write `UNKNOWN:` + what to verify + fastest verification method + what decisions are blocked.

BLOCK CONDITIONS:
- If an external integration is central and we lack creds/sample payloads/confirmed behavior:
  stop and output only:
  ## Blocker
  ## What I Need From You
  ## Phase 0 Discovery Plan

 

The model will still drift back into compliance mode. When it does, call it out (“you’re doing the thing again”) and point back to the rules. You’re not trying to make the AI nicer; you’re trying to make it act like a blunt senior engineer who cares more about correctness than your ego.

That’s the partnership you actually want.

The Payoff

With this approach:

  • Fewer implementation cycles — Specs flush out ambiguity up front instead of mid-PR.
  • Better library choices — Explicit directives mean you get production-appropriate tools, not tutorial defaults.
  • Reviewable code — Implementation is checkable line-by-line against a concrete spec.
  • Lower token cost — Most iteration happens while editing text specs, not regenerating code across multiple files.

The API was supposed to be the escape valve: more control, fewer guardrails. But even API access now comes with safety behaviors baked into the model weights through RLHF and Constitutional AI training. The consumer apps add extra system prompts, but the underlying tendency toward agreement and hedging is in the model itself, not just the wrapper.

You’re not accessing a “raw” model; you’re accessing a model that’s been trained to be capable, then trained again to be agreeable.

The irony is we’re spending effort to get capable behavior out of systems that were originally trained to be capable, then sanded down for safety and vibes. Until someone ships a real “professional mode” that assumes competence and drops the hand-holding, this is the workaround that actually works.

⚠️Security footnote: treat attached context as untrusted

If your agent can ingest URLs, docs, tickets, or logs as context, assume those inputs can contain indirect prompt injection. Treat external context like user input: untrusted by default. Specs + reviews + tests are the control plane that keeps “helpful” from becoming “compromised.”

Getting Started

I’ve put together templates that support this workflow in this repo:

malindarathnayake/llm-spec-workflow

When you wire this into your own stack, keep one thing in mind: your coding agent reads its rules on every message. That’s your token cost. Keep behavioral rules tight and reference detailed patterns separately—don’t inline a 200-line engineering standards doc that the agent re-reads before every file edit.

Use these templates as-is or adapt them to your stack. The structure matters more than the specific contents.