Hello internetzzz
As an administrator, you might run into situations that require you to deploy UI customizations, such as a customized Ribbon or Quick Access Toolbar, for Office applications on user computers, or in my case, terminal servers.
Here is a quick and dirty guide on how to do this via Group Policy.
For instance, let's say we have to deploy a button that launches a third-party productivity program within Outlook and Word.
First off, make the necessary changes to Outlook or Word on a client PC running MS Office.
To customize the Ribbon
- On the File tab, click Options, and then click Customize Ribbon to open the Ribbon customization dialog.
To customize the Quick Access Toolbar
- On the File tab, click Options, and then click Quick Access Toolbar to open the Quick Access Toolbar customization dialog.
You can also export your Ribbon and Quick Access Toolbar customizations into a file.
When we make changes to the default Ribbon, these user customizations are saved as .officeUI files under:
%localappdata%\Microsoft\Office
The file names will differ according to the office program and the portion of the Ribbon UI you customized.
| Application | Ribbon / UI Customized | .officeUI File Name |
|---|---|---|
| Outlook 2010 | Outlook Explorer | olkexplorer.officeUI |
| Outlook 2010 | Contact | olkaddritem.officeUI |
| Outlook 2010 | Appointment/Meeting (organizer on compose, organizer after compose, attendee) | olkapptitem.officeUI |
| Outlook 2010 | Contact Group (formerly known as Distribution List) | olkdlstitem.officeUI |
| Outlook 2010 | Journal Item | olklogitem.officeUI |
| Outlook 2010 | Mail Compose | olkmailitem.officeUI |
| Outlook 2010 | Mail Read | olkmailread.officeUI |
| Outlook 2010 | Multimedia Message Compose | olkmmsedit.officeUI |
| Outlook 2010 | Multimedia Message Read | olkmmsread.officeUI |
| Outlook 2010 | Received Meeting Request | olkmreqread.officeUI |
| Outlook 2010 | Forward Meeting Request | olkmreqsend.officeUI |
| Outlook 2010 | Post Item Compose | olkpostitem.officeUI |
| Outlook 2010 | Post Item Read | olkpostread.officeUI |
| Outlook 2010 | NDR | olkreportitem.officeUI |
| Outlook 2010 | Send Again Item | olkresenditem.officeUI |
| Outlook 2010 | Counter Response to a Meeting Request | olkrespcounter.officeUI |
| Outlook 2010 | Received Meeting Response | olkresponseread.officeUI |
| Outlook 2010 | Edit Meeting Response | olkresponsesend.officeUI |
| Outlook 2010 | RSS Item | olkrssitem.officeUI |
| Outlook 2010 | Sharing Item Compose | olkshareitem.officeUI |
| Outlook 2010 | Sharing Item Read | olkshareread.officeUI |
| Outlook 2010 | Text Message Compose | olksmsedit.officeUI |
| Outlook 2010 | Text Message Read | olksmsread.officeUI |
| Outlook 2010 | Task Item (Task/Task Request, etc.) | olktaskitem.officeUI |
| Access 2010 | Access Ribbon | Access.officeUI |
| Excel 2010 | Excel Ribbon | Excel.officeUI |
| InfoPath 2010 | InfoPath Designer Ribbon | IPDesigner.officeUI |
| InfoPath 2010 | InfoPath Editor Ribbon | IPEditor.officeUI |
| OneNote 2010 | OneNote Ribbon | OneNote.officeUI |
| PowerPoint 2010 | PowerPoint Ribbon | PowerPoint.officeUI |
| Project 2010 | Project Ribbon | MSProject.officeUI |
| Publisher 2010 | Publisher Ribbon | Publisher.officeUI |
| SharePoint Workspace 2010 | SharePoint Workspace Launchbar Ribbon | GrooveLB.officeUI |
| SharePoint Workspace 2010 | SharePoint Workspace Explorer Ribbon | GrooveWE.officeUI |
| SharePoint Designer 2010 | SharePoint Designer Ribbon | spdesign.officeUI |
| Visio 2010 | Visio Ribbon | Visio.officeUI |
| Word 2010 | Word Ribbon | Word.officeUI |
You can take these files and push them out via Group Policy using a simple startup script:

```bat
@echo off
setlocal
set userdir=%localappdata%\Microsoft\Office
set remotedir=\\MyServer\LogonFiles\public\OfficeUI
for %%r in (Word Excel PowerPoint) do if not exist "%userdir%\%%r.officeUI" copy "%remotedir%\%%r.officeUI" "%userdir%\%%r.officeUI"
endlocal
```
This is a basic script that copies .officeUI files from a network share into the user's local AppData directory if no .officeUI file currently exists there.
It can easily be modified to use the roaming AppData directory (replace %localappdata% with %appdata%) or to include additional Ribbon customizations.
Managing Office suite settings via Group Policy
Download and import the ADM/ADMX templates into the Group Policy Object Editor.
This will allow you to manage security settings, UI-related options, Trust Center options, and more for Office 2010 via GPO.
Download Office 2010 Administrative Template files (ADM, ADMX/ADML)
Hopefully this will be helpful to someone.
Until next time, ciao!
I ran into the same issue detailed here while working with an RKE cluster:
https://github.com/metallb/metallb/issues/1154
After a few hours of looking around and digging into the logs, I figured out the issue. Hopefully this helps someone else out there in the same situation save some time.
Make sure IPVS mode is enabled in the cluster configuration
If you are using:
- RKE2 – edit the cluster.yaml file
- RKE1 – edit the cluster configuration from the Rancher UI > Cluster Management > select the cluster > Edit Configuration > Edit as YAML

Locate the services field under rancher_kubernetes_engine_config and add the following options to enable IPVS
```yaml
kubeproxy:
  extra_args:
    ipvs-scheduler: lc
    proxy-mode: ipvs
```
https://www.suse.com/support/kb/doc/?id=000020035
Default

After changes

Make sure the kernel modules are enabled on the nodes running the control plane
Background
Example Rancher – RKE1 cluster
sudo docker ps | grep proxy # find the container ID for kube-proxy
sudo docker logs <container_ID>
0313 21:44:08.315888 108645 feature_gate.go:245] feature gates: &{map[]}
I0313 21:44:08.346872 108645 proxier.go:652] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack_ipv4"
E0313 21:44:08.347024 108645 server_others.go:107] "Can't use the IPVS proxier" err="IPVS proxier will not be used because the following required kernel modules are not loaded: [ip_vs_lc]"
kube-proxy is trying to load the needed kernel modules and failing, so it cannot enable IPVS.
Let's enable the kernel modules:
sudo nano /etc/modules-load.d/ipvs.conf

Add the following modules to the file:

```
ip_vs_lc
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
```
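If you want to avoid waiting for a reboot, you can also load the modules immediately and confirm they are present; a quick sketch using the same module list as the file above:

```bash
# load the modules listed in /etc/modules-load.d/ipvs.conf right away
# note: on newer kernels nf_conntrack_ipv4 is folded into nf_conntrack and may fail to load
for m in ip_vs_lc ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do sudo modprobe "$m"; done

# confirm they are loaded
lsmod | grep -E 'ip_vs|nf_conntrack'
```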
Install ipvsadm to confirm the changes
sudo dnf install ipvsadm -y
Reboot the VM or the bare-metal server.
Use ipvsadm to confirm IPVS is enabled:
sudo ipvsadm
Testing
kubectl get svc -n <namespace> | grep -i load

arping -I ens192 192.168.94.140
ARPING 192.168.94.140 from 192.168.94.65 ens192
Unicast reply from 192.168.94.140 [00:50:56:96:E3:1D] 1.117ms
Unicast reply from 192.168.94.140 [00:50:56:96:E3:1D] 0.737ms
Unicast reply from 192.168.94.140 [00:50:56:96:E3:1D] 0.845ms
Unicast reply from 192.168.94.140 [00:50:56:96:E3:1D] 0.668ms
Sent 4 probes (1 broadcast(s))
Received 4 response(s)
If you have a LoadBalancer-type service on a deployment, you should now be able to reach it, provided the container behind the service is responding.
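For example, if the pod behind that external IP happens to serve HTTP on port 80 (an assumption; adjust to whatever your service actually exposes), a quick check could be:

```bash
# hit the MetalLB-assigned external IP from a machine on the same network
curl -v http://192.168.94.140/
```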

Helpful links
https://metallb.universe.tf/configuration/troubleshooting/
Issue
Received the following alert from Azure AD stating that password synchronization was not working on the tenant.

When I manually initiated a delta sync, I saw the following in the logs:
"The specified domain either does not exist or could not be contacted"
Checked the following
- Restarted the AD Sync services
- Resolved the AD DS domain FQDN via DNS – working
- Tested the required ports for AD Sync using PortQry – found issues with the primary AD DS server defined in the DNS settings
Root Cause
Turns out the domain controller defined as the primary DNS server was going through updates; it was responding on DNS but not returning any data (a brown-out state).
Assumption
Since the primary DNS server still accepts connections, Windows doesn't fall back to the secondary and tertiary servers defined in the DNS settings.
This can also happen if you are reaching an AD DS server via an S2S tunnel/MPLS link when latency goes high.
Resolution
Check and make sure the AD DS DNS servers defined on the AD Sync server are alive and responding.
In my case, I just updated the primary DNS value with the Umbrella appliance IP (it acts as a proxy and handles the failover).
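A quick way to rule this out in the future is to query the domain against each DNS server configured on the AD Connect box individually (the domain name and server IPs below are placeholders); a server that accepts the query but returns no records is your brown-out candidate:

```
nslookup corp.example.local 10.0.0.10
nslookup corp.example.local 10.0.0.11
```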
Domain trust relationship failures, or a virus making it impossible to log in using domain credentials – you are bound to run into scenarios like this while managing an AD environment. You will have to log in with a local administrator account on the client PC and rejoin the domain, or do whatever troubleshooting is necessary. In some cases you don't have the local admin password for a PC, so this will be a lifesaver; I myself had the unfortunate incident of having to guide a user through resetting the local admin password of a PC over the phone using Hiren's BootCD.
It's very simple, actually: take this VBScript file, modify it accordingly, and add it as a computer startup script via Group Policy.
The script first queries the local PC for the user name specified in the script. If the account doesn't exist, it creates it as a member of the local Administrators group; if it already exists, it resets the password to the one specified.
```vb
'This section creates the user "localsupport" if it doesn't exist
Dim AdminPassword
AdminPassword = "password"

'If the account already exists, QueryForUser resets its password and exits the script
QueryForUser("localsupport")

Set objNetwork = CreateObject("Wscript.Network")
strComputer = objNetwork.ComputerName
Set colAccounts = GetObject("WinNT://" & strComputer)
Set objUser = colAccounts.Create("user", "localsupport")
objUser.SetPassword AdminPassword
objUser.Put "UserFlags", 65600 'password never expires + user cannot change password
objUser.SetInfo

'Add the new account to the local Administrators group
Set objGroup = GetObject("WinNT://" & strComputer & "/Administrators,group")
Set objUser = GetObject("WinNT://" & strComputer & "/localsupport,user")
objGroup.Add(objUser.ADsPath)
'msgbox "user was created"

'This section just resets the password if the user already exists
Sub QueryForUser(strlocalsupport)
    Set objlocal = GetObject("WinNT://.")
    objlocal.Filter = Array("user")
    For Each User In objlocal
        If LCase(User.Name) = LCase(strlocalsupport) Then
            strComputer = "."
            Set objUser = GetObject("WinNT://" & strComputer & "/" & strlocalsupport & ",user")
            objUser.SetPassword AdminPassword
            objUser.SetInfo
            'msgbox User.Name & " already exists." & vbCrLf & "The password was re-set."
            WScript.Quit
        End If
    Next
End Sub
```
To change the password, modify the value within the quotes in the following section of the script. This also lets you easily rotate the password in case you have to give it out to an end user.

```vb
Dim AdminPassword
AdminPassword = "password"
QueryForUser("localsupport")
```
Hope this helps someone, because this has saved my ass so many times. 😛
You’ve probably noticed: coding models are eager to please. Too eager. Ask for something questionable and you’ll get it, wrapped in enthusiasm. Ask for feedback and you’ll get praise followed by gentle suggestions. Ask them to build something and they’ll start coding before understanding what you actually need.
This isn’t a bug. It’s trained behavior. And it’s costing you time, tokens, and code quality.
The Sycophancy Problem
Modern LLMs go through reinforcement learning from human feedback (RLHF) that optimizes for user satisfaction. Users rate responses higher when the AI agrees with them, validates their ideas, and delivers quickly. So that’s what the models learn to do. Anthropic’s work on sycophancy in RLHF-tuned assistants makes this pretty explicit: models learn to match user beliefs, even when they’re wrong.
The result: an assistant that says “Great idea!” before pointing out your approach won’t scale. One that starts writing code before asking what systems it needs to integrate with. One that hedges every opinion with “but it depends on your use case.”
For consumer use cases (travel planning, recipe suggestions, general Q&A), this is fine. For engineering work, it's a liability.
When the model won't push back, you lose the value of a second perspective. When it starts implementing before scoping, you burn tokens on code you'll throw away. When it leaves library choices ambiguous, you get whatever the model defaults to, which may not be what production needs.
Here’s a concrete example. I asked Claude for a “simple Prometheus exporter app,” gave it a minimal spec with scope and data flows, and still didn’t spell out anything about testability or structure. It happily produced:
- A script with `sys.exit()` sprinkled everywhere
- Logic glued directly into `if __name__ == "__main__":`
- Debugging via `print()` calls instead of real logging

It technically "worked," but it was painful to test and impossible to reuse or extend.
The Fix: Specs Before Code
Instead of giving it a set of requirements and asking it to generate code, start with specifications. Move the expensive iteration, the "that's not what I meant" cycles, to the design phase where changes are cheap. Then hand a tight spec to your coding tool, where implementation becomes mechanical.
The workflow:
- Describe what you want (rough is fine)
- Scope through pointed questions (5–8, not 20)
- Spec the solution with explicit implementation decisions
- Implement by handing the spec to Cursor/Cline/Copilot
This isn't a brand-new methodology. It's the same spec-driven development (SDD) that tools like GitHub's spec-kit are promoting: write the spec first, then let a cheaper model implement against it.
By the time code gets written, the ambiguity is gone and the assistant is just a fast pair of hands that follows a tight spec with guard rails built in.
When This Workflow Pays Off
To be clear: this isn’t for everything. If you need a quick one-off script to parse a CSV or rename some files, writing a spec is overkill. Just ask for the code and move on with your life.
This workflow shines when:
- The task spans multiple files or components
- External integrations exist (databases, APIs, message queues, cloud services)
- It will run in production and needs monitoring and observability
- Infra is involved (Kubernetes, Terraform, CI/CD, exporters, operators)
- Someone else might maintain it later
- You’ve been burned before on similar scope
Rule of thumb: if it touches more than one system or more than one file, treat it as spec-worthy. If you can genuinely explain it in two sentences and keep it in a single file, skip straight to code.
A few spec sections do most of the heavy lifting:
Implementation Directives — Not "add a scheduler" but "use APScheduler with BackgroundScheduler, register an atexit handler for graceful shutdown." Not "handle timeouts" but "use cx_Oracle call_timeout, not post-execution checks."
Error Handling Matrix — List the important failure modes, how to detect them, what to log, and how to recover (retry, backoff, fail-fast, alert, etc.). No room for “the assistant will figure it out.”
Concurrency Decisions — What state is shared, what synchronization primitive to use, and lock ordering if multiple locks exist. Don’t let the assistant improvise concurrency.
Out of Scope — Explicit boundaries: “No auth changes,” “No schema migrations,” “Do not add retries at the HTTP client level.” This prevents the assistant from “helpfully” adding features you didn’t ask for.
Anticipate anywhere the model might guess; make a decision instead, or make it validate/confirm with you before taking action.
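As a rough illustration, here's what a few of those sections might look like for the Prometheus exporter example from earlier (the specific limits and retry counts are just sample decisions, not recommendations):

```
Implementation Directives
- Scheduling: APScheduler BackgroundScheduler; register an atexit handler for graceful shutdown.
- DB timeouts: cx_Oracle call_timeout of 30s; no post-execution timing checks.

Error Handling Matrix
- DB unreachable -> log ERROR with host/port, retry 3x with exponential backoff, then mark the scrape stale.
- Query timeout  -> log WARNING with the query name, skip this cycle, increment a failure-counter metric.

Out of Scope
- No auth changes. No schema migrations. No retries at the HTTP client level.
```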
The Handoff
When you hand off to your coding agent, make self-review part of the process:
```
Rules:
- Stop after each file for review
- Self-review: before presenting each file, verify against engineering-standards.md.
  Fix violations (logging, error handling, concurrency, resource cleanup) before stopping.
- Do not add features beyond this spec
- Use environment variables for all credentials
- Follow Implementation Directives exactly
```
Pair this with a rules.md that encodes your engineering standards—error propagation patterns, lock discipline, resource cleanup. The agent internalizes the baseline, self-reviews against it, and you’re left checking logic rather than hunting for missing using statements, context managers, or retries.
Fixing the Partnership Dynamic
Specs help, but “be blunt” isn’t enough. The model can follow the vibe of your instructions and still waste your time by producing unstructured output, bluffing through unknowns, or “spec’ing anyway” when an integration is the real blocker. That means overriding the trained “be agreeable” behavior with explicit instructions.
For example:
```
Core directive: Be useful, not pleasant.

OUTPUT CONTRACT:
- If scoping: output exactly:
  ## Scoping Questions (5–8 pointed questions)
  ## Current Risks / Ambiguities
  ## Proposed Simplification
- If drafting spec: use the project spec template headings in order. If N/A, say N/A.

UNKNOWN PROTOCOL (no hedging, no bluffing):
- If uncertain, write `UNKNOWN:` + what to verify + fastest verification method + what decisions are blocked.

BLOCK CONDITIONS:
- If an external integration is central and we lack creds/sample payloads/confirmed behavior,
  stop and output only:
  ## Blocker
  ## What I Need From You
  ## Phase 0 Discovery Plan
```
The model will still drift back into compliance mode. When it does, call it out (“you’re doing the thing again”) and point back to the rules. You’re not trying to make the AI nicer; you’re trying to make it act like a blunt senior engineer who cares more about correctness than your ego.
That’s the partnership you actually want.
The Payoff
With this approach:
- Fewer implementation cycles — Specs flush out ambiguity up front instead of mid-PR.
- Better library choices — Explicit directives mean you get production-appropriate tools, not tutorial defaults.
- Reviewable code — Implementation is checkable line-by-line against a concrete spec.
- Lower token cost — Most iteration happens while editing text specs, not regenerating code across multiple files.
The API was supposed to be the escape valve: more control, fewer guardrails. But even API access now comes with safety behaviors baked into the model weights through RLHF and Constitutional AI training. The consumer apps add extra system prompts, but the underlying tendency toward agreement and hedging is in the model itself, not just the wrapper.
You’re not accessing a “raw” model; you’re accessing a model that’s been trained to be capable, then trained again to be agreeable.
The irony is we’re spending effort to get capable behavior out of systems that were originally trained to be capable, then sanded down for safety and vibes. Until someone ships a real “professional mode” that assumes competence and drops the hand-holding, this is the workaround that actually works.
⚠️Security footnote: treat attached context as untrusted
If your agent can ingest URLs, docs, tickets, or logs as context, assume those inputs can contain indirect prompt injection. Treat external context like user input: untrusted by default. Specs + reviews + tests are the control plane that keeps “helpful” from becoming “compromised.”
Getting Started
I’ve put together templates that support this workflow in this repo:
malindarathnayake/llm-spec-workflow
When you wire this into your own stack, keep one thing in mind: your coding agent reads its rules on every message. That’s your token cost. Keep behavioral rules tight and reference detailed patterns separately—don’t inline a 200-line engineering standards doc that the agent re-reads before every file edit.
Use these templates as-is or adapt them to your stack. The structure matters more than the specific contents.
These sharing options are not available in the EMC, so we have to use the Exchange Management Shell on the server to manipulate them.
The requirement:
- User "Nyckie" – full permissions
- All users – permission to add events, without the delete permission

To view the existing calendar permissions:

Get-MailboxFolderPermission -Identity "Networking Calendar:\Calendar"

- To assign calendar permissions to a new user, use "Add-MailboxFolderPermission":

Add-MailboxFolderPermission -Identity "Networking Calendar:\Calendar" -User [email protected] -AccessRights Owner

- To change existing calendar permissions, use "Set-MailboxFolderPermission":

Set-MailboxFolderPermission -Identity "Networking Calendar:\Calendar" -User Default -AccessRights NonEditingAuthor

Note: the LimitedDetails access right shows availability data with subject and location.
Sources:
technet.microsoft.com
http://blog.powershell.no/2010/09/20/managing-calendar-permissions-in-exchange-server-2010/
Update Manager has been bundled with the vCenter Server Appliance since version 6.5; it's a plug-in that runs in the vSphere Web Client. We can use it to:
- patch/upgrade hosts
- deploy .vib files from within vCenter
- scan your vCenter environment and report on any out-of-compliance hosts
Hardcore/experienced VMware operators will scoff at this article, but I have seen many organizations still using iLO/iDRAC to mount an ISO to update hosts, with no idea this function even exists.

Now that that's out of the way, let's get to the how-to part.
In vCenter, click "Menu" and drill down to "Update Manager".

This blade shows you all the nerd knobs and an overview of your current updates and compliance levels.
Click on the “Baselines” Tab

You will have two predefined baselines for security patches created by vCenter; let's keep those aside for now.
Navigate to the "ESXi Images" tab and click "Import".

Once the upload is complete, click "New Baseline".
Fill in a name and description that make sense to anyone who logs in, and click Next.

Select the image you just uploaded on the next screen, then continue through the wizard and complete it.
Note – If you have other third-party software for ESXi, you can create separate baselines for those and use baseline groups to push out upgrades and .vib files at the same time.
Now click "Menu" and navigate back to "Hosts and Clusters".
You can apply the baseline at various levels within the vCenter hierarchy:
vCenter | Datacenter | Cluster | Host
Pick the right level depending on your use case.
Excerpt from the KB:
"For ESXi hosts in a cluster, the remediation process is sequential by default. With Update Manager, you can select to run host remediation in parallel.
When you remediate a cluster of hosts sequentially and one of the hosts fails to enter maintenance mode, Update Manager reports an error, and the process stops and fails. The hosts in the cluster that are remediated stay at the updated level. The ones that are not remediated after the failed host remediation are not updated.
If a host in a DRS enabled cluster runs a virtual machine on which Update Manager or vCenter Server are installed, DRS first attempts to migrate the virtual machine running vCenter Server or Update Manager to another host so that the remediation succeeds. In case the virtual machine cannot be migrated to another host, the remediation fails for the host, but the process does not stop. Update Manager proceeds to remediate the next host in the cluster.
The host upgrade remediation of ESXi hosts in a cluster proceeds only if all hosts in the cluster can be upgraded.
Remediation of hosts in a cluster requires that you temporarily disable cluster features such as VMware DPM and HA admission control. Also, turn off FT if it is enabled on any of the virtual machines on a host, and disconnect the removable devices connected to the virtual machines on a host, so that they can be migrated with vMotion. Before you start a remediation process, you can generate a report that shows which cluster, host, or virtual machine has the cluster features enabled."
Moving on: for this example, since I have only two hosts, we are going to apply the baseline at the cluster level but run the remediation at the host level.
Host 1 > Enter Maintenance > Remediation > Update complete and online
Host 2 > Enter Maintenance > Remediation > Update complete and online
Select the cluster, click the "Updates" tab, and click "Attach" in the Attached Baselines section.

Select and attach the baseline we created before.
Click "Check Compliance" to scan and get a report.
Select the host in the cluster and enter maintenance mode.
Click "REMEDIATE" to start the upgrade. (If you do this at the cluster level and have DRS, Update Manager will update each node in turn.)
This will reboot the host and run it through the update process.

Footnotes –
You might run into the following issue
“vCenter cannot deploy Host upgrade agent to host”

Cause 1
The scratch partition is full – use vCenter to change the scratch folder location.
Creating a persistent scratch location for ESXi – https://kb.vmware.com/s/article/1033696
Cause 2
The hardware is not compatible.
I had this issue due to 6.7 dropping support for an LSI RAID card on older firmware; you need to do some footwork and check the log files to figure out why it's failing.
VMware HCL – link
ESXi and vCenter log file locations – link

Well, I think the title pretty much speaks for itself, but anyhow: Crucial released new firmware for the M4 SSDs, and apparently it's supposed to make the drive around 20% faster. I updated mine with no issues, and I didn't brick it, so it's all good here hehe.
I looked up some benchmarks from reviews at the time of release and compared them with the benchmarks I ran after the FW update – I do get around a 20% increase, just like they say!
Crucial’s Official Release Notes:
“Release Date: 08/25/2011
Change Log:
Changes made in version 0002 (m4 can be updated to revision 0009 directly from either revision 0001 or 0002)
Improved throughput performance.
Increase in PCMark Vantage benchmark score, resulting in improved user experience in most operating systems.
Improved write latency for better performance under heavy write workloads.
Faster boot up times.
Improved compatibility with latest chipsets.
Compensation for SATA speed negotiation issues between some SATA-II chipsets and the SATA-III device.
Improvement for intermittent failures in cold boot up related to some specific host systems.”
Firmware download: http://www.crucial.com/eu/support/firmware.aspx?AID=10273954&PID=4176827&SID=1iv16ri5z4e7x
Here's how to install this via a pen drive without wasting a blank CD. I know they are really cheap, but think: how many of you have blank CDs or DVDs lying around nowadays?
To do this we are going to use a nifty little program called UNetbootin.
Of course, you can also use it to boot any Linux distro from a pen drive. It's very easy, actually; if you need help, check out the guides on the UNetbootin website.
So here we go then…
* First off Download – http://unetbootin.sourceforge.net/

* Run the program
* Select the Diskimage radio button (as shown in the image)
* Browse and select the ISO file you downloaded from Crucial
* Type – USB Drive
* Select the drive letter of your pen drive
* Click OK
Reboot.
* Go into the BIOS and put your SSD into IDE (compatibility) mode – this is important
* Boot from your pen drive
* Follow the instructions on screen to update
And voilà!
**Remember to set your SATA controller back to AHCI in the BIOS/EFI afterwards.**
Let me address the question of why I decided to expose a DNS server (Pi-hole) to the internet (not fully open, but still).
I needed/wanted to set up an Umbrella/NextDNS/CF type DNS server that’s publicly accessible but secured to certain IP addresses.
Sure, NextDNS is an option and it's cheap with similar features, but I wanted to roll my own solution so I could learn a few things along the way.
I can also easily set this up for family members who have minimal technical knowledge and don't want to deal with yet another device (a Raspberry Pi) plugged into their home network.
This will also serve as a quick and dirty guide on how to use Docker Compose, and it addresses some issues with running Pi-hole and Docker with UFW on Ubuntu 20.x.
So lets get stahhhted…….
Scope
- Set up Pi-hole as a Docker container on a VM
- Enable IPv6 support
- Set up UFW rules to restrict traffic, plus a cron job that keeps the rules updated with the dynamic WAN IPs
- Deploy and test

What we need
- Linux VM (Ubuntu, Hardened BSD, etc)
- Docker and Docker Compose
- Dynamic DNS service to track the changing IP (DynDNS, No-IP, etc.)
Deployment
Set up a Dynamic DNS solution to track your dynamic WAN IP
For this demo, we are going to use DynDNS, since I already own a paid account and it's supported on most platforms (routers, UTMs, NAS devices, IP camera DVRs, etc.).
Use some Google-fu – there are multiple ways to do this without having to pay for the service; all we need is a DNS record that stays up to date with your current public IP address.
For Network A and Network B, I'm going to use the routers' built-in DDNS update features.
Network A gateway – UDM Pro

Network B Gateway – Netgear R6230

Confirmation

Set up the VM with Docker Compose
Pick your service provider; you should be able to use a free-tier VM for this since it's just DNS:
- Linode
- AWS lightsail
- IBM cloud
- Oracle cloud
- Google Compute
- Digital Ocean droplet
Make sure you have a dedicated (static) IPv4 and IPv6 address attached to the resource
For this deployment, I'm going to use a Linode Nanode, due to their native IPv6 support and because I prefer their platform for personal projects.
Setup your Linode VM – Getting started Guide

SSH into the VM or use the Weblish console.
Update your packages and sources:
sudo apt-get update
Install Docker and Docker Compose
Assuming you already have SSH access to the VM with a static IPv4 and IPv6 address
Guide to installing Docker Engine on Ubuntu
Guide to Installing Docker-Compose
Once you have this set up, confirm the Docker Compose installation:
docker-compose version

Set up the Pi-hole Docker image
Let's configure the Docker networking side to fit our needs.
Create a separate bridge network for the Pi-hole container.
I guess you could use the default bridge network, but I like to create a dedicated one to keep things organized; this way the service is isolated from the other containers I run.
docker network create --ipv6 --driver bridge --subnet "fd01::/64" Piholev6
Verification:
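The same check from the CLI:

```bash
# confirm the bridge exists and that IPv6 is enabled on it
docker network ls | grep Piholev6
docker network inspect Piholev6 | grep -i -E 'EnableIPv6|Subnet'
```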

We will use this network later in docker compose
With Ubuntu 20.x, systemd starts a local DNS stub listener on 127.0.0.53:53,
which will prevent the container from starting, because Pi-hole needs to bind to the same port (UDP 53).
We could simply disable the service, but that breaks DNS resolution on the VM, causing more headaches and pain for automation and updates.
After some Google-fu and tinkering around, this is the workaround I found:
- Disable the stub listener
- Change the /etc/resolv.conf symlink to point to /run/systemd/resolve/resolv.conf
- Push external name servers to the VM so it won't look at loopback to resolve DNS
- Restart systemd-resolved
Resolving Conflicts with the systemd-resolved stub listener
We need to disable the stub listener that's bound to port 53. As I mentioned before, this breaks local DNS resolution on the VM; we will fix that in a bit.
sudo nano /etc/systemd/resolved.conf
Find and uncomment the line DNSStubListener=yes and change it to DNSStubListener=no.
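If you prefer a one-liner over editing the file by hand (assuming the line is still the stock commented-out #DNSStubListener=yes), something like this should do the same thing:

```bash
# flip the stub listener off in /etc/systemd/resolved.conf
sudo sed -i 's/^#\?DNSStubListener=yes/DNSStubListener=no/' /etc/systemd/resolved.conf
```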

After this, we need to push the external DNS servers to the box. This setting is stored in the following file:
/etc/resolv.conf
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.53
But we can't just add our own DNS servers to this file manually; let's investigate.

ls -l /etc/resolv.conf

It's a symlink to another system file:
/run/systemd/resolve/stub-resolv.conf
When you take a look at the directory where that file resides, there are two files

When you look at the other file, you will see that /run/systemd/resolve/resolv.conf is the one that actually carries the external name servers.
You still can't manually edit this file; it gets updated with whatever IPs are handed out as DNS servers via DHCP. Netplan will dictate the IPs based on the static DNS servers you configure in the Netplan YAML file.
I can see two entries, and they are the default Linode DNS servers discovered via DHCP. I'm going to keep them as-is, since they are good enough for my use case.
If you want to use your own servers here, follow this guide.

Let's point the /etc/resolv.conf symlink at this file instead of stub-resolv.conf:
$ sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
Now it's pointing to the right file.

Let's restart systemd-resolved:
systemctl restart systemd-resolved

Now you can resolve DNS and install packages again.
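A quick sanity check that the VM itself can still resolve names after the change:

```bash
# getent uses the system resolver, so a reply here proves /etc/resolv.conf is working
getent hosts ubuntu.com

# or, if dnsutils is installed:
dig +short ubuntu.com
```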

Docker Compose file for the Pi-hole
sudo mkdir /Docker_Images/
sudo mkdir /Docker_Images/Piholev6/

Let's navigate to this directory and start setting up our environment:
nano /Docker_Images/Piholev6/docker-compose.yml
```yaml
version: '3.4'
services:
  Pihole:
    container_name: pihole_v6
    image: pihole/pihole:latest
    hostname: Multicastbits-DNSService
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"
      - "4343:443/tcp"
    environment:
      TZ: America/New_York
      DNS1: 1.1.1.1
      DNS2: 8.8.8.8
      WEBPASSWORD: F1ghtm4_Keng3n4sura
      ServerIP: 45.33.73.186
      enable_ipv6: "true"
      ServerIPv6: 2600:3c03::f03c:92ff:feb9:ea9c
    volumes:
      - '${ROOT}/pihole/etc-pihole/:/etc/pihole/'
      - '${ROOT}/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/'
    dns:
      - 127.0.0.1
      - 1.1.1.1
    cap_add:
      - NET_ADMIN
    restart: always
networks:
  default:
    external:
      name: Piholev6
```
Let's break this down a little bit:
- Version – Declare Docker compose version
- container_name – This is the name of the container on the docker container registry
- image – What image to pull from the Docker Hub
- hostname – This is the host-name for the Docker container – this name will show up on your lookup when you are using this Pi-hole
- ports – What ports should be NATed via the Docker Bridge to the host VM
- TZ – Time Zone
- DNS1 – DNS server used within the container
- DNS2 – DNS server used within the container
- WEBPASSWORD – Password for the Pi-Hole web console
- ServerIP – The IPv4 address assigned to the VM's network interface (you need this for the Pi-hole to respond to DNS queries on that IP)
- enable_ipv6 – Enable/disable IPv6 support
- ServerIPv6 – The IPv6 address assigned to the VM's network interface (you need this for the Pi-hole to respond to DNS queries on that IP)
- volumes – These volumes hold the configuration data, so the container settings and historical data persist across reboots
- cap_add:- NET_ADMIN – Add Linux capabilities to edit the network stack – link
- restart: always – This will make sure the container gets restarted every time the VM boots up – Link
- networks:default:external:name: Piholev6 – Set the container to use the network bridge we created before
Now let's bring up the Docker container:
docker-compose up -d
The -d switch runs the container in the background.
Run docker ps to confirm:

Now you can access the web interface and use the Pi-hole.

Verifying it's using the bridge network you created
Grab the network ID for the bridge network we created before and use the inspect command to check the config:
docker network ls

docker network inspect f7ba28db09ae
This will bring up the full configuration for the bridge network we created; the containers attached to it are listed under the "Containers" key.

Testing
I manually configured my workstation's primary DNS to the Pi-hole's IP.
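You can also test from any machine on one of the allowed networks without touching its DNS settings, by querying the Pi-hole's public IP directly (the IP below is the ServerIP from the compose file; dig is assumed to be installed):

```bash
# resolve a name through the Pi-hole; only works from an allowed source IP once the UFW rules below are in place
dig @45.33.73.186 google.com +short
```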

Updating the Docker image
Pull the new image from the registry:
docker pull pihole/pihole

Take down the current container
docker-compose down
Run the new container
docker-compose up -d

Your settings will persist through the update.
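Optionally, clean up the old, now-untagged image afterwards (general Docker housekeeping, nothing Pi-hole specific):

```bash
# remove dangling images left behind by the pull
docker image prune -f
```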
Securing the install
Now that we have a working Pi-hole with IPv6 enabled, we can log in, configure the Pi-hole server, and resolve DNS as needed.
But this is open to the public internet and will fall victim to DNS reflection attacks, etc.
Let's set up firewall rules that only open the relevant ports (DNS, SSH, HTTPS) to the relevant IP addresses before we proceed.
Disable iptables management by the Docker daemon
Ubuntu uses UFW (Uncomplicated Firewall) as an abstraction layer to make things easier for operators, but by default Docker opens ports using iptables rules with higher precedence, so rules added via UFW don't take effect.
We need to tell Docker not to do this when launching a container, so we can manage the firewall rules via UFW.
This file may not exist already; if so, nano will create it for you.
sudo nano /etc/docker/daemon.json
Add the following lines to the file:

```json
{
  "iptables": false
}
```
Restart the Docker service:
sudo systemctl restart docker
Doing this might disrupt communication with the container until we allow the traffic back in with UFW rules, so keep that in mind.
Automatically updating firewall rules based on the DynDNS host records
We are going to create a shell script and run it every hour using crontab.
What the script does:
- Get the IPs from the DynDNS host records
- Remove/clean up the existing rules
- Add default deny rules
- Add allow rules using the resolved IPs as the source
Dynamic IP addresses are updated on the following DNS records
- trusted-Network01.selfip.net
- trusted-Network02.selfip.net
Let's start by creating the script file under /bin:

```bash
sudo touch /bin/PIHolefwruleupdate.sh
sudo chmod +x /bin/PIHolefwruleupdate.sh
sudo nano /bin/PIHolefwruleupdate.sh
```
Now let's build the script:
```bash
#!/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
now=$(date +"%m/%d/%T")
DYNDNSNetwork01="trusted-Network01.selfip.net"
DYNDNSNetwork02="trusted-Network02.selfip.com"
#Get the network IPs using dig
Network01_CurrentIP=`dig +short $DYNDNSNetwork01`
Network02_CurrentIP=`dig +short $DYNDNSNetwork02`
echo "-----------------------------------------------------------------"
echo Network A WAN IP $Network01_CurrentIP
echo Network B WAN IP $Network02_CurrentIP
echo "Script Run time : $now"
echo "-----------------------------------------------------------------"
#reset firewall rules
sudo ufw --force reset
#re-enable the firewall
sudo ufw --force enable
#enable the inbound default deny rule
sudo ufw default deny incoming
#add allow rules for the relevant networks
sudo ufw allow from $Network01_CurrentIP to any port 22 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 8080 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 53 proto udp
sudo ufw allow from $Network02_CurrentIP to any port 53 proto udp
#add the IPv6 DNS allow-all rule - still looking for an effective way to lock this down; with IPv6 the risk is minimal
sudo ufw allow 53/udp
#find and delete the allow any-to-any IPv4 rule for port 53
sudo ufw --force delete $(ufw status numbered | grep '53*.*Anywhere.' | grep -v v6 | awk -F"[][]" '{print $2}')
echo "--------------------end Script------------------------------"
```
Let's run the script to make sure it's working.
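For example, run it once by hand and then review the rules UFW ended up with:

```bash
sudo /bin/PIHolefwruleupdate.sh

# list the resulting rules
sudo ufw status numbered
```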


I used an online port scanner to confirm.

Set up a scheduled job with logging
Let's use crontab to set up a scheduled job that runs this script every hour.
Make sure the script is copied to the /bin folder with executable permissions.
Run crontab -e (if you are launching this for the first time, it will ask you to pick an editor; I picked nano):
crontab -e

Add the following line
0 * * * * /bin/PIHolefwruleupdate.sh >> /var/log/PIHolefwruleupdate_Cronoutput.log 2>&1
Let's break this down:
- 0 * * * * – runs the script every time the minutes hit zero, i.e. once every hour
- /bin/PIHolefwruleupdate.sh – the script path to execute
- >> /var/log/PIHolefwruleupdate_Cronoutput.log 2>&1 – appends the output to a log file, with errors captured as well
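Once the job has run at least once, you can confirm it's behaving by checking the log it writes:

```bash
# review the last few runs of the hourly job
tail -n 50 /var/log/PIHolefwruleupdate_Cronoutput.log
```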
As part of my pre-flight checks for vCenter upgrades, I like to mount the ISO and go through the first three steps. During this, I noticed the installer could not connect to the source appliance, with this error:
2019-05-01T20:05:02.052Z - info: Stream :: close
2019-05-01T20:05:02.052Z - info: Password not expired
2019-05-01T20:05:02.054Z - error: sourcePrecheck: error in getting source Info: ServerFaultCode: Failed to authenticate with the guest operating system using the supplied credentials.
2019-05-01T20:05:03.328Z - error: Request timed out after 30000 ms, url: https://vcenter.companyABC.local:443/
2019-05-01T20:05:09.675Z - info: Log file was saved at: C:\Users\MCbits\Desktop\installer-20190501-160025555.log
Trying to reset the root password via the admin interface or the DCUI didn't work. After digging around, I found a way to reset it by forcing the vCenter appliance to boot into single-user mode.
Procedure:
- Take a snapshot or backup of the vCenter Server Appliance before proceeding. Do not skip this step.
- Reboot the vCenter Server Appliance.
- After the OS starts, press the e key to enter the GNU GRUB edit menu.
- Locate the line that begins with the word Linux.
- Append these entries to the end of the line: rw init=/bin/bash. The line should look like the following screenshot:

After adding the statement, press F10 to continue booting.
The vCenter appliance will boot into single-user mode.
Type passwd to reset the root password.
If you run into the following error message:
"Authentication token lock busy"

You need to remount the filesystem read-write, which will allow you to make changes:
mount -o remount,rw /
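From there, the rest of the recovery is just repeating the earlier step and booting normally; a sketch of the remaining steps in the single-user shell:

```bash
# now that / is writable, setting the root password should succeed
passwd

# reboot from the hypervisor console (or power-cycle the appliance) so it starts normally,
# then remove the pre-change snapshot once you've confirmed the new password works
```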

Until next time !!!









