You’ve probably noticed: coding models are eager to please. Too eager. Ask for something questionable and you’ll get it, wrapped in enthusiasm. Ask for feedback and you’ll get praise followed by gentle suggestions. Ask them to build something and they’ll start coding before understanding what you actually need.
This isn’t a bug. It’s trained behavior. And it’s costing you time, tokens, and code quality.
The Sycophancy Problem
Modern LLMs go through reinforcement learning from human feedback (RLHF) that optimizes for user satisfaction. Users rate responses higher when the AI agrees with them, validates their ideas, and delivers quickly. So that’s what the models learn to do. Anthropic’s work on sycophancy in RLHF-tuned assistants makes this pretty explicit: models learn to match user beliefs, even when they’re wrong.
The result: an assistant that says “Great idea!” before pointing out your approach won’t scale. One that starts writing code before asking what systems it needs to integrate with. One that hedges every opinion with “but it depends on your use case.”
For consumer use cases (travel planning, recipe suggestions, general Q&A), this is fine. For engineering work, it's a liability.
When the model won't push back, you lose the value of a second perspective. When it starts implementing before scoping, you burn tokens on code you'll throw away. When it leaves library choices ambiguous, you get whatever the model defaults to, which may not be what production needs.
Here’s a concrete example. I asked Claude for a “simple Prometheus exporter app,” gave it a minimal spec with scope and data flows, and still didn’t spell out anything about testability or structure. It happily produced:
- A script with sys.exit() sprinkled everywhere
- Logic glued directly into if __name__ == "__main__":
- Debugging via print() calls instead of real logging
It technically "worked," but it was painful to test and impossible to reuse or extend.
The Fix: Specs Before Code
Instead of giving the model a set of requirements and asking it to generate code, start with specifications. Move the expensive iteration (the "that's not what I meant" cycles) to the design phase, where changes are cheap. Then hand a tight spec to your coding tool, where implementation becomes mechanical.
The workflow:
- Describe what you want (rough is fine)
- Scope through pointed questions (5–8, not 20)
- Spec the solution with explicit implementation decisions
- Implement by handing the spec to Cursor/Cline/Copilot
This isn't a brand-new methodology. It's the same spec-driven development (SDD) that tools like GitHub's spec-kit are promoting: write the spec first, then let a cheaper model implement against it.
By the time code gets written, the ambiguity is gone and the assistant is just a fast pair of hands that follows a tight spec with guard rails built in.
When This Workflow Pays Off
To be clear: this isn’t for everything. If you need a quick one-off script to parse a CSV or rename some files, writing a spec is overkill. Just ask for the code and move on with your life.
This workflow shines when:
- The task spans multiple files or components
- External integrations exist (databases, APIs, message queues, cloud services)
- It will run in production and needs monitoring and observability
- Infra is involved (Kubernetes, Terraform, CI/CD, exporters, operators)
- Someone else might maintain it later
- You’ve been burned before on similar scope
Rule of thumb: if it touches more than one system or more than one file, treat it as spec-worthy. If you can genuinely explain it in two sentences and keep it in a single file, skip straight to code.
What Goes in the Spec
Implementation Directives — Not "add a scheduler" but "use APScheduler with BackgroundScheduler, register an atexit handler for graceful shutdown." Not "handle timeouts" but "use cx_Oracle call_timeout, not post-execution checks."
Error Handling Matrix — List the important failure modes, how to detect them, what to log, and how to recover (retry, backoff, fail-fast, alert, etc.). No room for “the assistant will figure it out.”
Concurrency Decisions — What state is shared, what synchronization primitive to use, and lock ordering if multiple locks exist. Don’t let the assistant improvise concurrency.
Out of Scope — Explicit boundaries: “No auth changes,” “No schema migrations,” “Do not add retries at the HTTP client level.” This prevents the assistant from “helpfully” adding features you didn’t ask for.
Finally, anticipate anywhere the model might guess: make the decision yourself, or require it to validate and confirm with you before taking action. A fragment of what these sections can look like is sketched below.
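For illustration, a fragment of those sections for the Prometheus exporter example above might look like this (the specific libraries, retry counts, and failure modes are just examples, not prescriptions):
Implementation Directives:
- Scheduling: APScheduler BackgroundScheduler; register an atexit handler for graceful shutdown.
- DB timeouts: cx_Oracle call_timeout; no post-execution wall-clock checks.
Error Handling Matrix:
- DB unreachable: log ERROR (no credentials in the message), retry 3x with exponential backoff, then exit non-zero.
- Scrape handler failure: log WARNING, keep serving the last good values.
Out of Scope:
- No auth changes, no schema migrations, no retries at the HTTP client level.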
The Handoff
When you hand off to your coding agent, make self-review part of the process:
Rules:
- Stop after each file for review
- Self-review: before presenting each file, verify against engineering-standards.md. Fix violations (logging, error handling, concurrency, resource cleanup) before stopping.
- Do not add features beyond this spec
- Use environment variables for all credentials
- Follow Implementation Directives exactly
Pair this with a rules.md that encodes your engineering standards—error propagation patterns, lock discipline, resource cleanup. The agent internalizes the baseline, self-reviews against it, and you’re left checking logic rather than hunting for missing using statements, context managers, or retries.
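As a sketch, the rules.md entries can stay terse; the point is that the standards are named rather than inferred (these lines are examples, not a complete standard):
- Logging: module-level logger only; no print() in library code.
- Errors: raise specific exceptions; no bare except; no sys.exit() outside the entrypoint.
- Resources: context managers for files, connections, and locks.
- Concurrency: document lock ordering; no new threads without a shutdown path.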
Fixing the Partnership Dynamic
Specs help, but “be blunt” isn’t enough. The model can follow the vibe of your instructions and still waste your time by producing unstructured output, bluffing through unknowns, or “spec’ing anyway” when an integration is the real blocker. That means overriding the trained “be agreeable” behavior with explicit instructions.
For example:
Core directive: Be useful, not pleasant.
OUTPUT CONTRACT:
- If scoping: output exactly:
## Scoping Questions (5–8 pointed questions)
## Current Risks / Ambiguities
## Proposed Simplification
- If drafting spec: use the project spec template headings in order. If N/A, say N/A.
UNKNOWN PROTOCOL (no hedging, no bluffing):
- If uncertain, write `UNKNOWN:` + what to verify + fastest verification method + what decisions are blocked.
BLOCK CONDITIONS:
- If an external integration is central and we lack creds/sample payloads/confirmed behavior:
stop and output only:
## Blocker
## What I Need From You
## Phase 0 Discovery Plan
The model will still drift back into compliance mode. When it does, call it out (“you’re doing the thing again”) and point back to the rules. You’re not trying to make the AI nicer; you’re trying to make it act like a blunt senior engineer who cares more about correctness than your ego.
That’s the partnership you actually want.
The Payoff
With this approach:
- Fewer implementation cycles — Specs flush out ambiguity up front instead of mid-PR.
- Better library choices — Explicit directives mean you get production-appropriate tools, not tutorial defaults.
- Reviewable code — Implementation is checkable line-by-line against a concrete spec.
- Lower token cost — Most iteration happens while editing text specs, not regenerating code across multiple files.
The API was supposed to be the escape valve: more control, fewer guardrails. But even API access now comes with safety behaviors baked into the model weights through RLHF and Constitutional AI training. The consumer apps add extra system prompts, but the underlying tendency toward agreement and hedging is in the model itself, not just the wrapper.
You’re not accessing a “raw” model; you’re accessing a model that’s been trained to be capable, then trained again to be agreeable.
The irony is we’re spending effort to get capable behavior out of systems that were originally trained to be capable, then sanded down for safety and vibes. Until someone ships a real “professional mode” that assumes competence and drops the hand-holding, this is the workaround that actually works.
⚠️Security footnote: treat attached context as untrusted
If your agent can ingest URLs, docs, tickets, or logs as context, assume those inputs can contain indirect prompt injection. Treat external context like user input: untrusted by default. Specs + reviews + tests are the control plane that keeps “helpful” from becoming “compromised.”
Getting Started
I’ve put together templates that support this workflow in this repo:
malindarathnayake/llm-spec-workflow
When you wire this into your own stack, keep one thing in mind: your coding agent reads its rules on every message. That’s your token cost. Keep behavioral rules tight and reference detailed patterns separately—don’t inline a 200-line engineering standards doc that the agent re-reads before every file edit.
Use these templates as-is or adapt them to your stack. The structure matters more than the specific contents.
A few things to note
- If you want to prevent directory traversal, you need to set up chroot with vsftpd (not covered in this KB)
- For the demo I used unencrypted FTP on port 21 to keep things simple. Please use SFTP with a Let's Encrypt certificate for better security; I will cover that in another article and link it here
Update and Install packages we need
sudo dnf update
sudo dnf install net-tools lsof unzip zip tree policycoreutils-python-utils-2.9-20.el8.noarch vsftpd nano setroubleshoot-server -y
Setup Groups and Users and security hardening
Create the Service admin account
sudo useradd ftpadmin
sudo passwd ftpadmin
Create the group
sudo groupadd FTP_Root_RW
Create an FTP-only shell for the FTP users
echo -e '#!/bin/sh\necho "This account is limited to FTP access only."' | sudo tee -a /bin/ftponly
sudo chmod a+x /bin/ftponly
echo "/bin/ftponly" | sudo tee -a /etc/shells
Create FTP users
sudo useradd ftpuser01 -m -s /bin/ftponly
sudo useradd ftpuser02 -m -s /bin/ftponly
sudo passwd ftpuser01
sudo passwd ftpuser02
Add the users to the group
sudo usermod -a -G FTP_Root_RW ftpuser01
sudo usermod -a -G FTP_Root_RW ftpuser02
sudo usermod -a -G FTP_Root_RW ftpadmin
Disable SSH Access for the FTP users.
Edit sshd_config
sudo nano /etc/ssh/sshd_config
Add the following line to the end of the file
DenyUsers ftpuser01 ftpuser02
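Restart the SSH daemon so the change takes effect (standard on RHEL/Alma; the service name can differ on other distros):
sudo systemctl restart sshd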
Open ports on the VM Firewall
sudo firewall-cmd --permanent --add-port=20-21/tcp
#Allow the passive port range; we will define it later in vsftpd.conf
sudo firewall-cmd --permanent --add-port=60000-65535/tcp
#Reload the ruleset
sudo firewall-cmd --reload
Setup the Second Disk for FTP DATA
Attach another disk to the VM and reboot if you haven’t done this already
Use lsblk to check the current disks and partitions detected by the system
lsblk

Create the XFS filesystem
sudo mkfs.xfs /dev/sdb
# use mkfs.ext4 for ext4
Why XFS? https://access.redhat.com/articles/3129891

Create the folder for the mount point
sudo mkdir /FTP_DATA_DISK
Update the /etc/fstab file and add the following line
sudo nano /etc/fstab
/dev/sdb /FTP_DATA_DISK xfs defaults 1 2
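Mounting by device name works, but /dev/sdb can change if disks are added or reordered. If you prefer, grab the filesystem UUID with blkid and use that in fstab instead (the UUID below is a placeholder):
sudo blkid /dev/sdb
UUID=<your-uuid-here> /FTP_DATA_DISK xfs defaults 1 2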
Mount the disk
sudo mount -a
Testing
mount | grep sdb

Setup the VSFTPD Data and Log Folders
Setup the FTP Data folder
sudo mkdir /FTP_DATA_DISK/FTP_Root -p
Create the log directory
sudo mkdir /FTP_DATA_DISK/_logs/ -p
Set permissions
sudo chgrp -R FTP_Root_RW /FTP_DATA_DISK/FTP_Root/
sudo chmod 775 -R /FTP_DATA_DISK/FTP_Root/
Setup the VSFTPD Config File
Back up the default vsftpd.conf and create a new one
sudo mv /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpdconfback
sudo nano /etc/vsftpd/vsftpd.conf
#KB Link - ####
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=002
dirmessage_enable=YES
ftpd_banner=Welcome to multicastbits Secure FTP service.
chroot_local_user=NO
chroot_list_enable=NO
chroot_list_file=/etc/vsftpd/chroot_list
listen=YES
listen_ipv6=NO
userlist_file=/etc/vsftpd/user_list
pam_service_name=vsftpd
userlist_enable=YES
userlist_deny=NO
listen_port=21
connect_from_port_20=YES
local_root=/FTP_DATA_DISK/FTP_Root/
xferlog_enable=YES
vsftpd_log_file=/FTP_DATA_DISK/_logs/vsftpd.log
log_ftp_protocol=YES
dirlist_enable=YES
download_enable=NO
pasv_enable=Yes
pasv_max_port=65535
pasv_min_port=60000
Add the FTP users to the userlist file
Backup the Original file
sudo mv /etc/vsftpd/user_list /etc/vsftpd/user_listBackup
echo "ftpuser01" | sudo tee -a /etc/vsftpd/user_list
echo "ftpuser02" | sudo tee -a /etc/vsftpd/user_list
sudo systemctl start vsftpd
sudo systemctl enable vsftpd
sudo systemctl status vsftpd

Setup SELinux
Instead of throwing our hands up and disabling SELinux, we are going to set up the policies correctly.
Find the available policies using getsebool -a | grep ftp
getsebool -a | grep ftp
ftpd_anon_write --> off
ftpd_connect_all_unreserved --> off
ftpd_connect_db --> off
ftpd_full_access --> off
ftpd_use_cifs --> off
ftpd_use_fusefs --> off
ftpd_use_nfs --> off
ftpd_use_passive_mode --> off
httpd_can_connect_ftp --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off
tftp_home_dir --> off
Set SELinux boolean values
sudo setsebool -P ftpd_use_passive_mode on
sudo setsebool -P ftpd_use_cifs on
sudo setsebool -P ftpd_full_access 1
"setsebool" is a tool for setting SELinux boolean values, which control various aspects of the SELinux policy.
"-P" specifies that the boolean value should be set permanently, so that it persists across system reboots.
"ftpd_use_passive_mode" is the name of the boolean value that should be set. This boolean value controls whether the vsftpd FTP server should use passive mode for data connections.
"on" specifies that the boolean value should be set to "on", which means that vsftpd should use passive mode for data connections.
Enable the ftp_home_dir boolean if you are using chroot
Add a new file context rule to the system.
sudo semanage fcontext -a -t public_content_rw_t "/FTP_DATA_DISK/FTP_Root/(/.*)?"
"fcontext" is short for "file context", which refers to the security context that is associated with a file or directory.
"-a" specifies that a new file context rule should be added to the system.
"-t" specifies the new file context type that should be assigned to files or directories that match the rule.
"public_content_rw_t" is the name of the new file context type that should be assigned to files or directories that match the rule. In this case, "public_content_rw_t" is a predefined SELinux type that allows read and write access to files and directories in public directories, such as /var/www/html.
"/FTP_DATA_DISK/FTP_Root/(/.*)?" specifies the file path pattern that the rule should match. The pattern includes the "/FTP_DATA_DISK/FTP_Root/" directory and any subdirectories or files beneath it. The regular expression "(/.*)?" matches any file or directory path that may follow the "/FTP_DATA_DISK/FTP_Root/" directory.
In summary, this command sets the file context type for all files and directories under the "/FTP_DATA_DISK/FTP_Root/" directory and its subdirectories to "public_content_rw_t", which allows read and write access to these files and directories.
Reset the SELinux security context for all files and directories under the “/FTP_DATA_DISK/FTP_Root/”
sudo restorecon -Rvv /FTP_DATA_DISK/FTP_Root/
"restorecon" is a tool that resets the SELinux security context for files and directories to their default values.
"-R" specifies that the operation should be recursive, meaning that the security context should be reset for all files and directories under the specified directory.
"-vv" specifies that the command should run in verbose mode, which provides more detailed output about the operation.
"/FTP_DATA_DISK/FTP_Root/" is the path of the directory whose security context should be reset.
Setup Fail2ban
Install fail2ban
sudo dnf install fail2ban
Create the jail.local file
This file is used to override the config blocks in /etc/fail2ban/jail.conf
sudo nano /etc/fail2ban/jail.local
[vsftpd]
enabled = true
port = ftp,ftp-data,ftps,ftps-data
logpath = /FTP_DATA_DISK/_logs/vsftpd.log
maxretry = 5
bantime = 7200
Make sure to update the logpath directive to match the vsftpd log file we defined in vsftpd.conf
sudo systemctl start fail2ban
sudo systemctl enable fail2ban
sudo systemctl status fail2ban
journalctl -u fail2ban will help you narrow down any issues with the service
Testing
sudo tail -f /var/log/fail2ban.log

Fail2ban injects and manages the following rich rules
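You can inspect the injected rules with sudo firewall-cmd --list-rich-rules; a banned client shows up roughly like this (the source address is a placeholder, and the exact format depends on your fail2ban/firewalld versions and banaction):
rule family="ipv4" source address="203.0.113.45" port port="21" protocol="tcp" reject type="icmp-port-unreachable"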

Client will fail to connect using FTP until the ban is lifted

Remove banned IPs
#get the list of banned IPs
sudo fail2ban-client get vsftpd banned
#Remove a specific IP from the list
sudo fail2ban-client set vsftpd unbanip <IP>
#Remove/Reset all the banned IP lists
sudo fail2ban-client unban --all
This should get you up and running. For the demo I used unencrypted FTP on port 21 to keep things simple; please use SFTP with a Let's Encrypt certificate for better security. I will cover that in another article and link it here.
“Mail server crashed” is a system admin's worst nightmare. It's followed by tight deadlines, incident reports, and a load of complaints you have to listen to. It's a full-fledged disaster.
In this scenario it's a medium-size business with just one server running AD and Exchange 2010 (not ideal, I know), which was upgraded from SBS 2003.
AD and DNS failed after decommissioning the old SBS server.
Recovering from a full server backup was not an option, but we had the databases on a separate drive.
Important things to keep in mind when recovering DB’s on a different AD domain
- Organization name and the Exchange Administrative Group should be the same in order for the Portability feature to work
- Database must be in a clean shutdown state
- After mounting the old DBs, always move the mailboxes to new databases
- Exchange 2010 Standard only supports up to 5 databases.
There are a few methods to recover DBs on Exchange 2010; this is the method we used.
Check List before proceeding further
Once you have
- Restored the Old Databases from backup to a different location on the server
- Installed AD (with the same domain name) and Exchange with the same administrative group as before
Preparing the Databases
Checking the status of the database file
In order for the database portability feature to work, we need the DBs in a clean shutdown state. To check the database state we are going to use the Exchange Server Database Utility's (eseutil) file dump mode.
More Detail on eseutil /MH – link
Launch a command prompt and type
eseutil /MH "D:\Restore\oldDB.edb" (replace the path with the location of the restored old database file)
Check the output to see whether the DB is in a dirty shutdown or a clean shutdown state.
If the DB file is in a dirty shutdown state
In this case we did not have any recent backups, and we were not able to soft-recover the DB since this is a new DC, so we had to do a hard recovery using this command:
eseutil /P "D:\Restore\oldDB.edb" (replace the path with the location of the restored old database file)
Click ok on the prompt to continue
After the hard recovery, fully rebuild indexes and defragment the database:
eseutil /D "D:\Restore\oldDB.edb" (replace the path with the location of the restored old database file)
Mounting the Database using the Portability feature.
Create a new Database
Create a new database; for example, we will create one named recoveryDB1
Go to properties of the new DB > Maintenance Tab > Select the option “This Database can be overwritten by a restore”
Apply the Changes and dismount the Database
Replace the new Database file with the Repaired Database
First, go to the folder where the new DB file (recoveryDB1.edb) is located and rename or delete it
Delete the log files / Catalog files
Rename the Recovered Database
Go to the Folder where the Database we repaired before is located and Rename it to “recoveryDB1”
Replace the newly created Database
Copy the Repaired DB file and replace the new Database file recoveryDB1.edb
Remember the Log files should be deleted or moved before you mount this DB.
Mount the “recoveryDB1” Database From EMC
Now the mail store should mount without an issue.
Errors you might run in to
In case you do get errors when mounting the DB such as
Operation terminated with error -1216 (JET_errAttachedDatabaseMismatch, An outstanding database attachment has been detected at the start or end of recovery, but database is missing or does not match attachment info) after 11.625 seconds.
You are getting this error because the DB is in a dirty shutdown state; refer to the Preparing the Databases section above to fix the issue by performing a hard recovery.
Unable-to-mount errors
The new database's log files are still present; delete them or move them.
Now you can go ahead and Attach the Mailboxes to the corresponding user accounts.
Word of advice
It would be wise not to keep this recovered mail store in production for long; you will run into various issues as mail starts to flow in and out.
Create new mail stores and move the mailboxes to avoid future problems.
Some mailboxes might be corrupted. In that case:
The easiest way is to use the "New-MailboxRepairRequest" cmdlet.
Refer to this TechNet article for more info – link
Or
- Export it to a PST
- Attach the user to a fresh mailbox
- Sync back the Data you need through outlook

Hi Internetz, it's been a while…
So we had an old Firebox X700 lying around in the office gathering dust. I saw this forum post about running m0nowall on the device. Since pfSense is based on m0nowall, I googled around to find a way to install pfSense on it and found several threads on the pfSense forums.
It took me a little while to comb through thousands of posts to find a proper way to go about this, and some more time was spent troubleshooting the issues I faced during installation and configuration. So I'm putting everything I found in this post, to save you the time spent googling around. This should work for all the other Firebox models as well.
What you need :
Hardware
- Firebox
- Female to Female Serial Cable – link
- 4GB CF Card (1GB or 2GB cards will work, but personally I would recommend at least 4GB)
- CF Card Reader
Software
- pfSense NanoBSD image (the 4G build, for the 4GB CF card)
- physdiskwrite with PhysGUI (to flash the image)
- Tera Term Pro Web (for the serial console)
The Firebox X700
This is basically a small x86 PC: an Intel Celeron CPU running at 1.2GHz with 512MB of RAM. The system boots using a CF card with the WatchGuard firmware.
The custom Intel motherboard used in the device does not include a VGA or DVI port, so we have to use the serial port for all communication with the device.
There are several methods to run pfsense on this device.
HDD
Install pfSense on a PC and plug the HDD into the Firebox.
This requires a bit more effort because we need to change the boot order in the BIOS, and it's kind of hard to find IDE laptop HDDs these days.
CF card
This is a very straightforward method: we are basically swapping out the CF card already installed in the device and booting pfSense from it.
In this tutorial we are using the CF card method
Installing PFsense
- Download the relevant pfsense image
Since we are using a CF card, we need to use the pfSense version built for embedded devices.
The NanoBSD version is built specifically for CF cards and other storage media with limited read/write life cycles.
Since we are using a 4GB CF card, we are going to use the 4G image.
- Flashing the nanoBSD image to the CF card
Extract the physdiskwrite program and run the PhysGUI.exe
This software is written in German, I think, but operating it is not that hard.
Select the CF card from the list.
Note: if you are not sure about the disk device ID, use diskpart to determine the disk ID.
Load the ISO file
Right-click the disk and select "Image laden > öffnen" (Load image > Open)
select the ISO file from the “open file” window
The program will prompt you with the following dialog box.
Select the "remove 2GB restriction" option and click "OK".
It will warn you about the disk being formatted (I think); click yes to start the flashing process. A CMD window will open and show you the progress.
- Installing the CF card on the Firebox
Once the flashing process is completed, open up the Firebox and Remove the drive cage to gain access to the installed CF Card
Remove the protective glue and replace the card with the new CF card flashed with pfsense image.
- Booting up and configuring PFsense
Since the Firebox does not have any way to connect a display or any peripheral ports, we need to use a serial connection to communicate with the device.
Install the "Tera Term Pro Web" program we downloaded earlier.
I tried PuTTY and many other telnet clients; they didn't work properly.
Open up the terminal window
Connect the firebox to the PC using the serial cable, and power it up
Select "Serial", choose the COM port the device is connected to, and click OK (you can check this in Device Manager).
By now the terminal window should be showing the pfSense configuration details, just as with a normal fresh install.
It will ask you to set up VLANs.
Assign the WAN, LAN, OPT1 interfaces.
ON X700 interface names are as follows
Please refer to pfsense Docs for more info on setting up
http://doc.pfsense.org/index.php/Tutorials#Advanced_Tutorials
After the initial config is completed, you no longer need the console cable and Tera Term.
You will be able to access pfSense via the web interface and good ol' SSH on the LAN IP.
Additional configuration
- Enabling the LCD panel
All Firebox units have an LCD panel on the front.
We can use the pfSense LCDproc-dev package to enable it and display various information.
Install the LCDproc-dev Package via the package Manager
Go to Services > LCDProc
Set the settings as follows
Hope this article helped you guys. Don't forget to leave a comment with your thoughts.
Sources –
http://forum.pfsense.org/index.php?board=5.0
Update Manager has been bundled with the vCenter Server Appliance since version 6.5. It's a plug-in that runs in the vSphere Web Client. We can use the component to:
- patch/upgrade hosts
- deploy .vib files within vCenter
- scan your vCenter environment and report on any out-of-compliance hosts
Hardcore/experienced VMware operators will scoff at this article, but I have seen many organizations still using iLO/iDRAC to mount an ISO to update hosts, with no idea this function even exists.

Now that that's out of the way, let's get to the how-to part of this.
In vCenter, click "Menu" and drill down to "Update Manager".

This blade will show you all the nerd knobs and an overview of your current updates and compliance levels.
Click on the “Baselines” Tab

You will have two predefined baselines for security patches created by vCenter; let's keep those aside for now.
Navigate to the “ESXi Images” Tab, and Click “Import”

Once the Upload is complete, Click on “New Baseline”
Fill in a name and description that make sense to anyone who logs in, and click Next.

Select the image you uploaded earlier on the next screen, then continue through the wizard and complete it.
Note – If you have other 3rd-party software for ESXi, you can create separate baselines for those and use baseline groups to push out upgrades and vib files at the same time.
Now click "Menu" and navigate back to "Hosts and Clusters".
You can apply the baseline at various levels within the vCenter hierarchy:
vCenter | Datacenter | Cluster | Host
Depending on your use case pick the right level
Excerpt from the KB: For ESXi hosts in a cluster, the remediation process is sequential by default. With Update Manager, you can select to run host remediation in parallel.
When you remediate a cluster of hosts sequentially and one of the hosts fails to enter maintenance mode, Update Manager reports an error, and the process stops and fails. The hosts in the cluster that are remediated stay at the updated level. The ones that are not remediated after the failed host remediation are not updated.
If a host in a DRS enabled cluster runs a virtual machine on which Update Manager or vCenter Server are installed, DRS first attempts to migrate the virtual machine running vCenter Server or Update Manager to another host so that the remediation succeeds. In case the virtual machine cannot be migrated to another host, the remediation fails for the host, but the process does not stop. Update Manager proceeds to remediate the next host in the cluster.
The host upgrade remediation of ESXi hosts in a cluster proceeds only if all hosts in the cluster can be upgraded.
Remediation of hosts in a cluster requires that you temporarily disable cluster features such as VMware DPM and HA admission control. Also, turn off FT if it is enabled on any of the virtual machines on a host, and disconnect the removable devices connected to the virtual machines on a host, so that they can be migrated with vMotion. Before you start a remediation process, you can generate a report that shows which cluster, host, or virtual machine has the cluster features enabled.
Moving on: for this example, since I have only 2 hosts, we are going to apply the baseline at the cluster level but apply the remediation at the host level.
Host 1 > Enter Maintenance > Remediation > Update complete and online
Host 2 > Enter Maintenance > Remediation > Update complete and online
Select the cluster, click the "Updates" tab, and click "Attach" in the Attached Baselines section.

Select and attach the baseline we created before
Click “Check Compliance” to scan and get a report
Select the host in the cluster, enter maintenance mode
Click "REMEDIATE" to start the upgrade. (If you do this at the cluster level and you have DRS, Update Manager will update each node.)
This will reboot the host and go through the update process

Foot Notes –
You might run into the following issue
“vCenter cannot deploy Host upgrade agent to host”

Cause 1
The scratch partition is full; use vCenter to change the scratch folder location.
Creating a persistent scratch location for ESXi – https://kb.vmware.com/s/article/1033696
Cause 2
Hardware is not compatible.
I had this issue due to 6.7 dropping support for an LSI RAID card on an older firmware; you need to do some footwork and check the log files to figure out why it's failing.
VMware HCL – link
ESXi and vCenter log file locations – link
As part of my pre-flight check for vCenter upgrades I like to mount the ISO and go through the first 3 steps; during this I noticed the installer cannot connect to the source appliance, with this error:
2019-05-01T20:05:02.052Z - info: Stream :: close
2019-05-01T20:05:02.052Z - info: Password not expired
2019-05-01T20:05:02.054Z - error: sourcePrecheck: error in getting source Info: ServerFaultCode: Failed to authenticate with the guest operating system using the supplied credentials.
2019-05-01T20:05:03.328Z - error: Request timed out after 30000 ms, url: https://vcenter.companyABC.local:443/
2019-05-01T20:05:09.675Z - info: Log file was saved at: C:\Users\MCbits\Desktop\installer-20190501-160025555.log
Trying to reset it via the admin interface or the DCUI didn't work. After digging around, I found a way to reset it by forcing vCenter to boot into single-user mode.
Procedure:
- Take a snapshot or backup of the vCenter Server Appliance before proceeding. Do not skip this step.
- Reboot the vCenter Server Appliance.
- After the OS starts, press the e key to enter the GNU GRUB edit menu.
- Locate the line that begins with the word linux.
- Append these entries to the end of the line: rw init=/bin/bash. The line should look like the following screenshot:
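For reference, the edited line ends up looking roughly like this; the kernel version and root device are placeholders, and the important part is the appended rw init=/bin/bash:
linux /vmlinuz-<version> root=<root-device> ... rw init=/bin/bash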

After adding the statement, press F10 to continue booting
The vCenter appliance will boot into single-user mode.
Type passwd to reset the root password.
If you run into the following error message:
"Authentication token lock busy"

You need to remount the filesystem read-write; this will allow you to make the change:
mount -o remount,rw /

Until next time !!!
Ran into this pesky little error message recently in a vCenter environment.
If the logs are stored on a local scratch disk, vCenter will display an alert stating – “System logs on host xxx are stored on non-persistent storage”

Configure ESXi Syslog location – vSphere Web Client
vCenter > Select "Host" > Configure > Advanced System Settings

Click on Edit and search for “Syslog.global.logDir”

Edit the value; in this case, I'm going to use the local datastore (Localhost_DataStore01) to store the syslogs.
You can also define a remote syslog server using the “Syslog.global.LogHost” setting

Configure ESXi Syslog location – ESXCLI
SSH on to the host
Check the current location
esxcli system syslog config get

*logs stored on the local scratch disk
Manually set the path
esxcli system syslog config set --logdir=/vmfs/directory/path
You can find the VMFS volume names/UUIDs under
/vmfs/volumes
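For example, pointing the logs at the datastore mentioned above would look something like this (the directory name is just an example; create it on the datastore first if it doesn't exist):
esxcli system syslog config set --logdir=/vmfs/volumes/Localhost_DataStore01/systemlogs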
A remote syslog server can be set using
esxcli system syslog config set --loghost='tcp://hostname:port'
Load the configuration changes with the syslog reload command
esxcli system syslog reload
The logs will immediately begin populating the specified location.
Just something that came up while setting up a monitoring script using mailx; figured I'll note it down here so I can get to it easily later when I need it 😀
Important prerequisites
- You need to enable SMTP basic auth on Office 365 for the account used for authentication
- Create an app password for the user account
- The nssdb folder must be available and readable by the user running the mailx command
Assuming all of the above prerequisites are $true, we can proceed with the setup.
Install mailx
RHEL/Alma linux
sudo dnf install mailx
NSSDB Folder
Make sure the nssdb folder is available and readable by the user running the mailx command
certutil -L -d /etc/pki/nssdb
The output might be empty, but that's OK; it's there in case you need to add a locally signed cert or another CA cert manually. Microsoft certs are trusted by default if you are on an up-to-date operating system with the system-wide trust store.
Reference – RHEL-sec-shared-system-certificates
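If you ever do need to trust an internal CA manually, it would look something like this (the nickname and path are placeholders):
sudo certutil -A -d /etc/pki/nssdb -n "internal-ca" -t "CT,C,C" -i /path/to/internal-ca.pem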
Configure Mailx config file
sudo nano /etc/mail.rc
Append/prepend the following lines, and comment out or remove the same lines if they are already defined in the existing config file
set smtp=smtp.office365.com
set smtp-auth-user=###[email protected]###
set smtp-auth-password=##Office365-App-password#
set nss-config-dir=/etc/pki/nssdb/
set ssl-verify=ignore
set smtp-use-starttls
set from="###[email protected]###"
This is the bare minimum needed; other switches are documented here – link
Testing
echo "Your message is sent!" | mailx -v -s "test" [email protected]
The -v switch will print the verbose debug log to the console
Connecting to 52.96.40.242:smtp . . . connected.
220 xxde10CA0031.outlook.office365.com Microsoft ESMTP MAIL Service ready at Sun, 6 Aug 2023 22:14:56 +0000
>>> EHLO vls-xxx.multicastbits.local
250-MN2PR10CA0031.outlook.office365.com Hello [167.206.57.122]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-STARTTLS
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
>>> STARTTLS
220 2.0.0 SMTP server ready
>>> EHLO vls-xxx.multicastbits.local
250-xxde10CA0031.outlook.office365.com Hello [167.206.57.122]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-AUTH LOGIN XOAUTH2
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
>>> AUTH LOGIN
334 VXNlcm5hbWU6
>>> Zxxxxxxxxxxxc0BmdC1zeXMuY29t
334 UGsxxxxxmQ6
>>> c2Rxxxxxxxxxxducw==
235 2.7.0 Authentication successful
>>> MAIL FROM:<###[email protected]###>
250 2.1.0 Sender OK
>>> RCPT TO:<[email protected]>
250 2.1.5 Recipient OK
>>> DATA
354 Start mail input; end with <CRLF>.<CRLF>
>>> .
250 2.0.0 OK <[email protected]> [Hostname=Bsxsss744.namprd11.prod.outlook.com]
>>> QUIT
221 2.0.0 Service closing transmission channel
Now you can use this in your automation scripts or timers using the mailx command
#!/bin/bash
log_file="/etc/app/runtime.log"
recipient="[email protected]"
subject="Log file from /etc/app/runtime.log"
# Check if the log file exists
if [ ! -f "$log_file" ]; then
echo "Error: Log file not found: $log_file"
exit 1
fi
# Use mailx to send the log file as an attachment
echo "Sending log file..."
mailx -s "$subject" -a "$log_file" -r "[email protected]" "$recipient" < /dev/null
echo "Log file sent successfully."
Secure it
sudo chown root:root /etc/mail.rc
sudo chmod 600 /etc/mail.rc
The above commands change the file’s owner and group to root, then set the file permissions to 600, which means only the owner (root) has read and write permissions and other users have no access to the file.
Use environment variables: avoid storing sensitive information like passwords directly in the mail.rc file; consider using environment variables for sensitive data and referencing those variables in the configuration.
For example, in the mail.rc file, you can set:
set smtp-auth-password=$MY_EMAIL_PASSWORD
You can set the variable from another config file, pull it from Ansible Vault at runtime, or use something like HashiCorp Vault.
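As a minimal sketch, assuming your mailx build expands environment variables in mail.rc as described above, you could keep the app password in a root-only file (the path is just an example) and export it before calling mailx:
sudo install -m 600 /dev/null /root/.o365_app_password   # then put the app password in this file
export MY_EMAIL_PASSWORD="$(cat /root/.o365_app_password)"
echo "Your message is sent!" | mailx -v -s "test" [email protected]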
Sure, I would rather just use Python or PowerShell Core, but you will run into more locked-down environments, like OCI-managed DB servers, where mailx is preinstalled and is the only tool you can use 🙁
The fact that you are here means you are probably in the same boat. Hope this helped… until next time.
These sharing options are not available in EMC, so we have to use Exchange PowerShell on the server to manipulate them.
The goal:
- user "Nyckie" – full permissions
- all users – permission to add events, without the delete permission
Check the current permissions with "Get-MailboxFolderPermission":
Get-MailboxFolderPermission -identity "Networking Calendar:Calendar"
- To assign calendar permissions to new users, use "Add-MailboxFolderPermission":
Add-MailboxFolderPermission -Identity "Networking Calendar:Calendar" -User [email protected] -AccessRights Owner
- To change existing calendar permissions, use "set-MailboxFolderPermission":
set-MailboxFolderPermission -Identity "Networking Calendar:Calendar" -User default -AccessRights NonEditingAuthor
(Another useful access right: LimitedDetails – view availability data with subject and location)
source –
technet.microsoft.com
http://blog.powershell.no/2010/09/20/managing-calendar-permissions-in-exchange-server-2010/
Issue
Received the following error from Azure AD stating that password synchronization was not working on the tenant.

When I manually initiated a delta sync, I saw the following logs:
"The Specified Domain either does not exist or could not be contacted"
Checked the following
- Restarted the AD Sync services
- Resolved the ADDS domain FQDN via DNS – working
- Tested the required ports for AD Sync using portqry – issues with the primary ADDS server defined in the DNS settings
Root Cause
Turns out the domain controller defined as the primary DNS server was going through updates; it was responding on DNS but not returning any data (a brown-out state).
Assumption
Since the primary DNS server was still accepting connections, Windows didn't fall back to the secondary and tertiary servers defined in the DNS settings.
This might also happen if you are using an ADDS server over an S2S tunnel/MPLS when latency goes high.
Resolution
Check and make sure the ADDS DNS servers defined on the AD Sync server are alive and responding.
In my case I just updated the primary DNS value with the Umbrella appliance IP (this acts as a proxy and handles the failover).