Hybrid Exchange Mailbox Onboarding: "Target user already has a primary mailbox" – Fix
During an Office 365 migration in a hybrid environment with AAD Connect, I ran into the following scenario:
- Hybrid Co-Existence Environment with AAD-Sync
- User [email protected] has a mailbox on-premises. Jon is represented as a Mail User in the cloud with an office 365 license
- [email protected] had a cloud-only mailbox before the initial AD sync was run
- A user account is registered as a mail-user and has a valid license attached
- During the Office 365 remote mailbox move, validation fails with the following error, and removing the immutable ID and remapping it to the on-premises account won't fix the issue:
Target user 'Sam fisher' already has a primary mailbox.
+ CategoryInfo : InvalidArgument: (tsu:MailboxOrMailUserIdParameter) [New-MoveRequest], RecipientTaskException
+ FullyQualifiedErrorId : [Server=Pl-EX001,RequestId=19e90208-e39d-42bc-bde3-ee0db6375b8a,TimeStamp=11/6/2019 4:10:43 PM] [FailureCategory=Cmdlet-RecipientTaskException] 9418C1E1,Microsoft.Exchange.Management.Migration.MailboxReplication.MoveRequest.NewMoveRequest
+ PSComputerName : Pl-ex001.Paladin.org
It turns out this happens due to an unclean cloud object in MSOL: Exchange Online keeps pointers that indicate there used to be a mailbox in the cloud for this user.
Option 1 (nuclear option)
The fix for this problem was to delete the *MSOL User Object* for Sam and re-sync it from on-premises. This would delete [email protected] from the cloud – but it deletes him from all workloads, not only Exchange. This is problematic because Sam is already using Teams, OneDrive, and SharePoint.
Option 2
Clean up only the Office 365 mailbox pointer information
PS C:\> Set-User [email protected] -PermanentlyClearPreviousMailboxInfo
Confirm
Are you sure you want to perform this action?
Delete all existing information about user "[email protected]"?. This operation will clear existing values from
Previous home MDB and Previous Mailbox GUID of the user. After deletion, reconnecting to the previous mailbox that
existed in the cloud will not be possible and any content it had will be unrecoverable PERMANENTLY. Do you want to
continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): a
Executing this leaves you with a clean object, without the duplicate-mailbox problem.
In some cases, running this command returns the following output:
“Command completed successfully, but no user settings were changed.”
If this happens, remove the license from the user temporarily, run the command again to clear the previous mailbox data, and then re-add the license.
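Here is a rough sketch of that sequence using the MSOnline module plus Exchange Online PowerShell (it assumes a single license on the account; adjust the SKU handling for your tenant):
# Sketch only - run the MSOL parts with the MSOnline module and Set-User in Exchange Online PowerShell
Connect-MsolService
# Note the currently assigned SKU so it can be re-added later (assumes one license)
$user = Get-MsolUser -UserPrincipalName "[email protected]"
$sku = $user.Licenses[0].AccountSkuId
# Temporarily remove the license
Set-MsolUserLicense -UserPrincipalName "[email protected]" -RemoveLicenses $sku
# Clear the stale mailbox pointers
Set-User "[email protected]" -PermanentlyClearPreviousMailboxInfo -Confirm:$false
# Re-add the license
Set-MsolUserLicense -UserPrincipalName "[email protected]" -AddLicenses $sku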
Cisco ASA WAN Failover with IP SLA – Guide
We will proceed assuming:
- You have already configured the ASA with the primary link
- WAN2 is configured on a port with a static IP or DHCP depending on the connection – you should be able to ping the secondary WAN link gateway from the ASA
Note:
Please remove the existing Static Route for the primary WAN link
Configure Route tracking
ASA(config)# route outside 0.0.0.0 0.0.0.0 <ISP 1(WAN1) Gateway> 1 track 1
ASA(config)# route Backup_Wan 0.0.0.0 0.0.0.0 <ISP 2 (WAN2) Gateway> 254
Now let's break it down
Line 01 – adds the WAN1 route with an administrative distance of 1 and includes the track 1 statement for SLA monitor tracking (see below)
Line 02 – adds the default route for the Backup_Wan link with a higher administrative distance (254) to make it the secondary link
Examples
ASA(config)# route outside 0.0.0.0 0.0.0.0 100.100.100.10 1 track 1
ASA(config)# route Backup_Wan 0.0.0.0 0.0.0.0 200.200.200.10 254
Setup SLA monitoring and Route tracking
ASA(config)# sla monitor 10
Configure the SLA monitor with ID 10
ASA(config-sla-monitor)# type echo protocol ipIcmpEcho 8.8.8.8 interface outside
Configure the monitoring protocol, the target IP for the probe, and the interface to use
The SLA monitor will keep probing the IP we define here and report if it's unreachable via the given interface
In this scenario I'm using 8.8.8.8 as the target IP; you can use any reliably reachable public IP for monitoring
ASA(config-sla-monitor-echo)# num-packets 4
Number of packets sent per probe
ASA(config-sla-monitor-echo)# timeout 1000
Timeout value in milliseconds. If the primary link is slow, increase the timeout accordingly
ASA(config-sla-monitor-echo)# frequency 10
Frequency of the probe in seconds – SLA monitor will probe the IP every 10 seconds
ASA(config)# sla monitor schedule 10 life forever start-time now
Set the ASA to start the SLA monitor now and keep it running forever
ASA(config)# track 1 rtr 10 reachability
This command tells the ASA to keep tracking the SLA monitor with ID 10 and ties it to the default route tagged with "track 1"
If the probe fails to reach the target IP (in this case 8.8.8.8) via the designated interface, the ASA removes the route tagged with "track 1" from the routing table
The next best route – in this scenario the backup ISP route with an administrative distance of 254 – takes its place
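To verify the tracking is behaving, the standard ASA show commands should help (a quick sketch; the IDs match the examples above):
ASA# show sla monitor operational-state 10
ASA# show track 1
ASA# show route
While the primary link is healthy, the track output should report reachability as Up; when the probe fails, the routing table should flip over to the Backup_Wan route.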
Configure dynamic NAT Rules (Important)
nat (inside,<ISP 1 (WAN1) Interface Name>) source dynamic any interface
nat (inside,<ISP 2(WAN2) Interface Name>) source dynamic any interface
Examples
nat (inside,outside) source dynamic any interface
nat (inside,Backup_Wan) source dynamic any interface
This method worked well for me personally. Keep in mind I'm no Cisco guru, so if I made a mistake or you feel there is a better way to do this, please leave a comment. It's all about the community, after all.
Until next time stay awesome internetz
Stop Fighting Your LLM Coding Assistant
You’ve probably noticed: coding models are eager to please. Too eager. Ask for something questionable and you’ll get it, wrapped in enthusiasm. Ask for feedback and you’ll get praise followed by gentle suggestions. Ask them to build something and they’ll start coding before understanding what you actually need.
This isn’t a bug. It’s trained behavior. And it’s costing you time, tokens, and code quality.
The Sycophancy Problem
Modern LLMs go through reinforcement learning from human feedback (RLHF) that optimizes for user satisfaction. Users rate responses higher when the AI agrees with them, validates their ideas, and delivers quickly. So that’s what the models learn to do. Anthropic’s work on sycophancy in RLHF-tuned assistants makes this pretty explicit: models learn to match user beliefs, even when they’re wrong.
The result: an assistant that says “Great idea!” before pointing out your approach won’t scale. One that starts writing code before asking what systems it needs to integrate with. One that hedges every opinion with “but it depends on your use case.”
For consumer use cases (travel planning, recipe suggestions, general Q&A), this is fine. For engineering work, it's a liability.
When the model won't push back, you lose the value of a second perspective. When it starts implementing before scoping, you burn tokens on code you'll throw away. When it leaves library choices ambiguous, you get whatever the model defaults to, which may not be what production needs.
Here’s a concrete example. I asked Claude for a “simple Prometheus exporter app,” gave it a minimal spec with scope and data flows, and still didn’t spell out anything about testability or structure. It happily produced:
- A script with `sys.exit()` sprinkled everywhere
- Logic glued directly into `if __name__ == "__main__":`
- Debugging via `print()` calls instead of real logging
It technically "worked," but it was painful to test and impossible to reuse or extend.
The Fix: Specs Before Code
Instead of giving it a set of requirements and asking it to generate code, start with specifications. Move the expensive iteration (the "that's not what I meant" cycles) to the design phase, where changes are cheap. Then hand a tight spec to your coding tool, where implementation becomes mechanical.
The workflow:
- Describe what you want (rough is fine)
- Scope through pointed questions (5–8, not 20)
- Spec the solution with explicit implementation decisions
- Implement by handing the spec to Cursor/Cline/Copilot
This isn't a brand-new methodology. It's the same spec-driven development (SDD) that tools like GitHub's spec-kit are promoting: write the spec first, then let a cheaper model implement against it.
By the time code gets written, the ambiguity is gone and the assistant is just a fast pair of hands that follows a tight spec with guard rails built in.
When This Workflow Pays Off
To be clear: this isn’t for everything. If you need a quick one-off script to parse a CSV or rename some files, writing a spec is overkill. Just ask for the code and move on with your life.
This workflow shines when:
- The task spans multiple files or components
- External integrations exist (databases, APIs, message queues, cloud services)
- It will run in production and needs monitoring and observability
- Infra is involved (Kubernetes, Terraform, CI/CD, exporters, operators)
- Someone else might maintain it later
- You’ve been burned before on similar scope
Rule of thumb: if it touches more than one system or more than one file, treat it as spec-worthy. If you can genuinely explain it in two sentences and keep it in a single file, skip straight to code.
The spec itself should pin down the decisions the model would otherwise guess at:
Implementation Directives — Not "add a scheduler" but "use APScheduler with BackgroundScheduler, register an atexit handler for graceful shutdown." Not "handle timeouts" but "use cx_Oracle call_timeout, not post-execution checks."
Error Handling Matrix — List the important failure modes, how to detect them, what to log, and how to recover (retry, backoff, fail-fast, alert, etc.). No room for “the assistant will figure it out.”
Concurrency Decisions — What state is shared, what synchronization primitive to use, and lock ordering if multiple locks exist. Don’t let the assistant improvise concurrency.
Out of Scope — Explicit boundaries: “No auth changes,” “No schema migrations,” “Do not add retries at the HTTP client level.” This prevents the assistant from “helpfully” adding features you didn’t ask for.
Anticipate — Anywhere the model might guess, make a decision instead, or make it validate/confirm with you before taking action.
The Handoff
When you hand off to your coding agent, make self-review part of the process:
Rules:
- Stop after each file for review
- Self-Review: Before presenting each file, verify against
engineering-standards.md. Fix violations (logging, error
handling, concurrency, resource cleanup) before stopping.
- Do not add features beyond this spec
- Use environment variables for all credentials
- Follow Implementation Directives exactly
Pair this with a rules.md that encodes your engineering standards—error propagation patterns, lock discipline, resource cleanup. The agent internalizes the baseline, self-reviews against it, and you’re left checking logic rather than hunting for missing using statements, context managers, or retries.
Fixing the Partnership Dynamic
Specs help, but “be blunt” isn’t enough. The model can follow the vibe of your instructions and still waste your time by producing unstructured output, bluffing through unknowns, or “spec’ing anyway” when an integration is the real blocker. That means overriding the trained “be agreeable” behavior with explicit instructions.
For example:
Core directive: Be useful, not pleasant.
OUTPUT CONTRACT:
- If scoping: output exactly:
## Scoping Questions (5–8 pointed questions)
## Current Risks / Ambiguities
## Proposed Simplification
- If drafting spec: use the project spec template headings in order. If N/A, say N/A.
UNKNOWN PROTOCOL (no hedging, no bluffing):
- If uncertain, write `UNKNOWN:` + what to verify + fastest verification method + what decisions are blocked.
BLOCK CONDITIONS:
- If an external integration is central and we lack creds/sample payloads/confirmed behavior:
stop and output only:
## Blocker
## What I Need From You
## Phase 0 Discovery Plan
The model will still drift back into compliance mode. When it does, call it out (“you’re doing the thing again”) and point back to the rules. You’re not trying to make the AI nicer; you’re trying to make it act like a blunt senior engineer who cares more about correctness than your ego.
That’s the partnership you actually want.
The Payoff
With this approach:
- Fewer implementation cycles — Specs flush out ambiguity up front instead of mid-PR.
- Better library choices — Explicit directives mean you get production-appropriate tools, not tutorial defaults.
- Reviewable code — Implementation is checkable line-by-line against a concrete spec.
- Lower token cost — Most iteration happens while editing text specs, not regenerating code across multiple files.
The API was supposed to be the escape valve: more control, fewer guardrails. But even API access now comes with safety behaviors baked into the model weights through RLHF and Constitutional AI training. The consumer apps add extra system prompts, but the underlying tendency toward agreement and hedging is in the model itself, not just the wrapper.
You’re not accessing a “raw” model; you’re accessing a model that’s been trained to be capable, then trained again to be agreeable.
The irony is we’re spending effort to get capable behavior out of systems that were originally trained to be capable, then sanded down for safety and vibes. Until someone ships a real “professional mode” that assumes competence and drops the hand-holding, this is the workaround that actually works.
⚠️Security footnote: treat attached context as untrusted
If your agent can ingest URLs, docs, tickets, or logs as context, assume those inputs can contain indirect prompt injection. Treat external context like user input: untrusted by default. Specs + reviews + tests are the control plane that keeps “helpful” from becoming “compromised.”
Getting Started
I’ve put together templates that support this workflow in this repo:
malindarathnayake/llm-spec-workflow
When you wire this into your own stack, keep one thing in mind: your coding agent reads its rules on every message. That’s your token cost. Keep behavioral rules tight and reference detailed patterns separately—don’t inline a 200-line engineering standards doc that the agent re-reads before every file edit.
Use these templates as-is or adapt them to your stack. The structure matters more than the specific contents.
Crucial M4 SSD New Firmware and how to Flash using a USB thumb drive !!Update!!

Well, I think the title pretty much speaks for itself, but anyhow… Crucial released a new firmware for the M4 SSDs, and apparently it's supposed to make the drive 20% faster. I updated mine with no issues, and I didn't brick it, so it's all good here hehee…
I looked up some benchmarks from reviews at the time of release and compared them with the benchmarks I ran after the FW update; I do get around a 20% increase, just like they say!!!
Crucial’s Official Release Notes:
“Release Date: 08/25/2011
Change Log:
Changes made in version 0002 (m4 can be updated to revision 0009 directly from either revision 0001 or 0002)
Improved throughput performance.
Increase in PCMark Vantage benchmark score, resulting in improved user experience in most operating systems.
Improved write latency for better performance under heavy write workloads.
Faster boot up times.
Improved compatibility with latest chipsets.
Compensation for SATA speed negotiation issues between some SATA-II chipsets and the SATA-III device.
Improvement for intermittent failures in cold boot up related to some specific host systems.”
Firmware Download: http://www.crucial.com/eu/support/firmware.aspx?AID=10273954&PID=4176827&SID=1iv16ri5z4e7x
To install this via a pen drive without wasting a blank CD… I know they're really, really cheap, but think!!!! How many of you have blank CDs or DVDs with you nowadays???
To do this we are gonna use a nifty lil program called UNetbootin.
Of course, you can use this to boot any Linux distro from a pen drive. It's very easy actually; if you need help, go check out the guides on the UNetbootin website.
So here we go then…
* First off Download – http://unetbootin.sourceforge.net/

* Run the program
* Select DiskImage Radio button (as shown on the image)
* Browse and select the ISO file you downloaded from Crucial
* Type – USB Drive
* Select the drive letter of your pen drive
* Click OK!!!
reboot
* Go to BIOS and put your SSD into IDE (compatibility) mode ** this is important
*Boot from your Pen drive
*Follow the instructions on screen to update
and Voila
****remember to set your SATA controller to AHCI again in Bios / EFI ****
Server 2016 Data De-duplication Report – Powershell
I put together this crude little script to send out a report on a daily basis.
It's not that fancy, but it's functional.
I'm working on a second revision with an HTML body, a list of corrupted files, and resource usage; more features will be added as I dive further into the Dedupe cmdlets.
https://technet.microsoft.com/en-us/library/hh848450.aspx
Link to the Script – Dedupe_report.ps1
https://dl.dropboxusercontent.com/s/bltp675prlz1slo/Dedupe_report_Rev2_pub.txt
If you have any suggestions for improvements please comment and share with everyone
# Malinda Ratnayake | 2016
# Can only be run on Windows Server 2012 R2
#
# Get the date and set the variable
$Now = Get-Date
# Import the cmdlets
Import-Module Deduplication
#
$logFile01 = "C:\_Scripts\Logs\Dedupe_Report.txt"
#
# Get the cluster vip and set to variable
$HostName = (Get-WmiObject win32_computersystem).DNSHostName+"."+(Get-WmiObject win32_computersystem).Domain
#
#$OS = Get-Host {$_.WindowsProductName}
#
# delete previous days check
del $logFile01
#
Out-File "$logFile01" -Encoding ASCII
Add-Content $logFile01 "Deduplication Report for $HostName" -Encoding ASCII
Add-Content $logFile01 "`n$Now" -Encoding ASCII
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupJob
Add-Content $logFile01 "Deduplication job Queue" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupJob | Format-Table -AutoSize | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupSchedule
Add-Content $logFile01 "Deduplication Schedule" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupSchedule | Format-Table -AutoSize | Out-File -Append -Encoding ASCII $logFile01
#
# Last Optimization Result and time
Add-Content $logFile01 "Last Optimization Result and time" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Select-Object LastOptimizationTime, LastOptimizationResultMessage | Format-Table -Wrap | Out-File -Append -Encoding ASCII $logFile01
#
# Last Garbage Collection Result and Time
Add-Content $logFile01 "Last Garbage Collection Result and Time" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Select-Object LastGarbageCollectionTime, LastGarbageCollectionResultMessage | Format-Table -Wrap | Out-File -Append -Encoding ASCII $logFile01
#
# Get-DedupVolume
$DedupVolumeLetter = Get-DedupVolume | select -ExpandProperty Volume
Add-Content $logFile01 "Deduplication Enabled Volumes" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupVolume | Format-Table -AutoSize | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "Volume $DedupVolumeLetter Details - " -Encoding ASCII
Get-DedupVolume | FL | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupStatus
Add-Content $logFile01 "Deduplication Summary" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | Format-Table -AutoSize | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "Deduplication Status Details" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-DedupStatus | FL | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-DedupMetadata
Add-Content $logFile01 "Deduplication MetaData" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Add-Content $logFile01 "note - details about how deduplication processed the data on volume $DedupVolumeLetter " -Encoding ASCII
Get-DedupMetadata | FL | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Get-Dedupe Events
# Get-Dedupe Events - Resource usage - WIP
Add-Content $logFile01 "Deduplication Events" -Encoding ASCII
Add-Content $logFile01 "__________________________________________________________________________" -Encoding ASCII
Get-WinEvent -MaxEvents 10 -LogName Microsoft-Windows-Deduplication/Diagnostic | where ID -EQ "10243" | FL | Out-File -Append -Encoding ASCII $logFile01
Add-Content $logFile01 "`n" -Encoding ASCII
#
# Change the -To, -From and -SmtpServer values to match your servers.
$Emailbody = Get-Content -Path $logFile01
[string[]]$recipients = "[email protected]"
Send-MailMessage -To $recipients -From [email protected] -Subject "File services - Deduplication Report : $HostName " -SmtpServer smtp-relay.gmail.com -Attachments $logFile01
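To actually get the report daily, schedule the script with Task Scheduler; here is a rough sketch using the ScheduledTasks cmdlets (the task name and script path are placeholders, so adjust them to wherever you keep the script):
# Sketch: run the report script every morning at 7:00 as SYSTEM
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\_Scripts\Dedupe_report.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 7am
Register-ScheduledTask -TaskName "Dedupe Report" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest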
Setup guide for VSFTPD FTP Server – SELinux enforced with fail2ban (RHEL, CentOS, Almalinux)
A few things to note
- If you want to prevent directory traversal, we need to set up chroot with vsftpd (not covered in this KB)
- For the demo I just used unencrypted FTP on port 21 to keep things simple. Please utilize FTPS (FTP over TLS) with a Let's Encrypt certificate for better security; I will cover this in another article and link it here
Update and Install packages we need
sudo dnf update
sudo dnf install net-tools lsof unzip zip tree policycoreutils-python-utils-2.9-20.el8.noarch vsftpd nano setroubleshoot-server -y
Setup Groups and Users and security hardening
Create the Service admin account
sudo useradd ftpadmin
sudo passwd ftpadmin
Create the group
sudo groupadd FTP_Root_RW
Create an FTP-only shell for the FTP users
echo -e '#!/bin/sh\necho "This account is limited to FTP access only."' | sudo tee -a /bin/ftponly
sudo chmod a+x /bin/ftponly
echo "/bin/ftponly" | sudo tee -a /etc/shells
Create FTP users
sudo useradd ftpuser01 -m -s /bin/ftponly
sudo useradd ftpuser02 -m -s /bin/ftponly
sudo passwd ftpuser01
sudo passwd ftpuser02
Add the users to the group
sudo usermod -a -G FTP_Root_RW ftpuser01
sudo usermod -a -G FTP_Root_RW ftpuser02
sudo usermod -a -G FTP_Root_RW ftpadmin
Disable SSH Access for the FTP users.
Edit sshd_config
sudo nano /etc/ssh/sshd_config
Add the following line to the end of the file
DenyUsers ftpuser01 ftpuser02
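Restart sshd so the DenyUsers directive takes effect (service name as on RHEL/Alma):
sudo systemctl restart sshd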
Open ports on the VM Firewall
sudo firewall-cmd --permanent --add-port=20-21/tcp
#Allow the passive Port-Range we will define it later on the vsftpd.conf
sudo firewall-cmd --permanent --add-port=60000-65535/tcp
#Reload the ruleset
sudo firewall-cmd --reload
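You can confirm the ports were added with:
sudo firewall-cmd --list-ports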
Setup the Second Disk for FTP DATA
Attach another disk to the VM and reboot if you haven’t done this already
Run lsblk to check the current disks and partitions detected by the system
lsblk

Create the XFS filesystem
sudo mkfs.xfs /dev/sdb
# use mkfs.ext4 for ext4
Why XFS? https://access.redhat.com/articles/3129891

Create the folder for the mount point
sudo mkdir /FTP_DATA_DISK
Update the /etc/fstab file and add the following line
sudo nano /etc/fstab
/dev/sdb /FTP_DATA_DISK xfs defaults 1 2
Mount the disk
sudo mount -a
Testing
mount | grep sdb

Setup the VSFTPD Data and Log Folders
Setup the FTP Data folder
sudo mkdir /FTP_DATA_DISK/FTP_Root -p
Create the log directory
sudo mkdir /FTP_DATA_DISK/_logs/ -p
Set permissions
sudo chgrp -R FTP_Root_RW /FTP_DATA_DISK/FTP_Root/
sudo chmod 775 -R /FTP_DATA_DISK/FTP_Root/
Setup the VSFTPD Config File
Back up the default vsftpd.conf and create a new one
sudo mv /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpdconfback
sudo nano /etc/vsftpd/vsftpd.conf
#KB Link - ####
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=002
dirmessage_enable=YES
ftpd_banner=Welcome to multicastbits Secure FTP service.
chroot_local_user=NO
chroot_list_enable=NO
chroot_list_file=/etc/vsftpd/chroot_list
listen=YES
listen_ipv6=NO
userlist_file=/etc/vsftpd/user_list
pam_service_name=vsftpd
userlist_enable=YES
userlist_deny=NO
listen_port=21
connect_from_port_20=YES
local_root=/FTP_DATA_DISK/FTP_Root/
xferlog_enable=YES
vsftpd_log_file=/FTP_DATA_DISK/_logs/vsftpd.log
log_ftp_protocol=YES
dirlist_enable=YES
download_enable=NO
pasv_enable=Yes
pasv_max_port=65535
pasv_min_port=60000
Add the FTP users to the userlist file
Backup the Original file
sudo mv /etc/vsftpd/user_list /etc/vsftpd/user_listBackup
echo "ftpuser01" | sudo tee -a /etc/vsftpd/user_list
echo "ftpuser02" | sudo tee -a /etc/vsftpd/user_list
sudo systemctl start vsftpd
sudo systemctl enable vsftpd
sudo systemctl status vsftpd

Setup SELinux
Instead of throwing our hands up and disabling SELinux, we are going to set up the policies correctly.
Find the available policies using getsebool -a | grep ftp
getsebool -a | grep ftp
ftpd_anon_write --> off
ftpd_connect_all_unreserved --> off
ftpd_connect_db --> off
ftpd_full_access --> off
ftpd_use_cifs --> off
ftpd_use_fusefs --> off
ftpd_use_nfs --> off
ftpd_use_passive_mode --> off
httpd_can_connect_ftp --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off
tftp_home_dir --> off
Set SELinux boolean values
sudo setsebool -P ftpd_use_passive_mode on
sudo setsebool -P ftpd_use_cifs on
sudo setsebool -P ftpd_full_access 1
"setsebool" is a tool for setting SELinux boolean values, which control various aspects of the SELinux policy.
"-P" specifies that the boolean value should be set permanently, so that it persists across system reboots.
"ftpd_use_passive_mode" is the name of the boolean value that should be set. This boolean value controls whether the vsftpd FTP server should use passive mode for data connections.
"on" specifies that the boolean value should be set to "on", which means that vsftpd should use passive mode for data connections.
Enable ftp_home_dir (set it to on) if you are using chroot
Add a new file context rule to the system.
sudo semanage fcontext -a -t public_content_rw_t "/FTP_DATA_DISK/FTP_Root/(/.*)?"
"fcontext" is short for "file context", which refers to the security context that is associated with a file or directory.
"-a" specifies that a new file context rule should be added to the system.
"-t" specifies the new file context type that should be assigned to files or directories that match the rule.
"public_content_rw_t" is the name of the new file context type that should be assigned to files or directories that match the rule. In this case, "public_content_rw_t" is a predefined SELinux type that allows read and write access to files and directories in public directories, such as /var/www/html.
"/FTP_DATA_DISK/FTP_Root/(/.)?" specifies the file path pattern that the rule should match. The pattern includes the "/FTP_DATA_DISK/FTP_Root/" directory and any subdirectories or files beneath it. The regular expression "/(.)?" matches any file or directory name that may follow the "/FTP_DATA_DISK/FTP_Root/" directory path.
In summary, this command sets the file context type for all files and directories under the "/FTP_DATA_DISK/FTP_Root/" directory and its subdirectories to "public_content_rw_t", which allows read and write access to these files and directories.
Reset the SELinux security context for all files and directories under the “/FTP_DATA_DISK/FTP_Root/”
sudo restorecon -Rvv /FTP_DATA_DISK/FTP_Root/
"restorecon" is a tool that resets the SELinux security context for files and directories to their default values.
"-R" specifies that the operation should be recursive, meaning that the security context should be reset for all files and directories under the specified directory.
"-vv" specifies that the command should run in verbose mode, which provides more detailed output about the operation.
"/FTP_DATA_DISK/FTP_Root/" is the path of the directory whose security context should be reset.
Setup Fail2ban
Install fail2ban
sudo dnf install fail2ban
Create the jail.local file
This file is used to override the config blocks in /etc/fail2ban/jail.conf
sudo nano /etc/fail2ban/jail.local
[vsftpd]
enabled = true
port = ftp,ftp-data,ftps,ftps-data
logpath = /FTP_DATA_DISK/_logs/vsftpd.log
maxretry = 5
bantime = 7200
Make sure to update the logpath directive to match the vsftpd log file we defined on the vsftpd.conf file
sudo systemctl start fail2ban
sudo systemctl enable fail2ban
sudo systemctl status fail2ban
journalctl -u fail2ban will help you narrow down any issues with the service
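You can also check the jail status and current bans directly:
sudo fail2ban-client status vsftpd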
Testing
sudo tail -f /var/log/fail2ban.log

Fail2ban injects and manages the following rich rules

Client will fail to connect using FTP until the ban is lifted

Remove banned IPs
#get the list of banned IPs
sudo fail2ban-client get vsftpd banned
#Remove a specific IP from the list
sudo fail2ban-client set vsftpd unbanip <IP>
#Remove/Reset all the banned IP lists
sudo fail2ban-client unban --all
This should get you up and running. For the demo I just used unencrypted FTP on port 21 to keep things simple; please utilize FTPS (FTP over TLS) with a Let's Encrypt certificate for better security. I will cover this in another article and link it here.
Find the PCI-E Slot number of PCI-E Add On card GPU, NIC, etc on Linux/Proxmox
I was working on a vGPU POC using PVE since Broadcom screwed us with the vSphere licensing costs (new post incoming about this adventure).
Anyway, I needed to find the PCI-E slot used by the A4000 GPU on the host to disable it for troubleshooting.
Guide
First we need to find the occupied slots and the Bus address for each slot
sudo dmidecode -t slot | grep -E "Designation|Usage|Bus Address"
Output will show the Slot ID, Usage and then the Bus Address
Designation: CPU SLOT1 PCI-E 4.0 X16
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT2 PCI-E 4.0 X8
Current Usage: In Use
Bus Address: 0000:41:00.0
Designation: CPU SLOT3 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:c1:00.0
Designation: CPU SLOT4 PCI-E 4.0 X8
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT5 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:c2:00.0
Designation: CPU SLOT6 PCI-E 4.0 X16
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: CPU SLOT7 PCI-E 4.0 X16
Current Usage: In Use
Bus Address: 0000:81:00.0
Designation: PCI-E M.2-M1
Current Usage: Available
Bus Address: 0000:ff:00.0
Designation: PCI-E M.2-M2
Current Usage: Available
Bus Address: 0000:ff:00.0
We can use lspci -s #BusAddress# to locate what's installed in each slot
lspci -s 0000:c2:00.0
c2:00.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1)
lspci -s 0000:81:00.0
81:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
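If you want to map everything in one pass, a rough one-liner like this should do it (just a sketch; on this board the empty slots all report a bus address of 0000:ff:00.0, and lspci prints either nothing or unrelated uncore devices for those):
# Sketch: print whatever device sits behind each slot's bus address
sudo dmidecode -t slot | awk -F': ' '/Bus Address/ {print $2}' | while read -r addr; do
  echo "== $addr =="
  lspci -s "$addr"
done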
I'm sure there is a much more elegant way to do this, but this worked as a quick-ish way to find what I needed. If you know a better way, please share in the comments.
Until next time!!!
Reference –
https://stackoverflow.com/questions/25908782/in-linux-is-there-a-way-to-find-out-which-pci-card-is-plugged-into-which-pci-sl
MS Exchange 2016 [ERROR] Cannot find path ‘..\Exchange_Server_V15\UnifiedMessaging\grammars’ because it does not exist.
So recently I ran into this annoying error message with Exchange 2016 CU11 Update.
Environment info-
- Exchange 2016 upgrade from CU8 to CU11
- Exchange binaries are installed under D:\Microsoft\Exchange_Server_V15\..
Microsoft.PowerShell.Commands.GetItemCommand.ProcessRecord()". [12/04/2018 16:41:43.0233] [1] [ERROR] Cannot find path 'D:\Microsoft\Exchange_Server_V15\UnifiedMessaging\grammars' because it does not exist.
[12/04/2018 16:41:43.0233] [1] [ERROR-REFERENCE] Id=UnifiedMessagingComponent___99d8be02cb8d413eafc6ff15e437e13d Component=EXCHANGE14:\Current\Release\Shared\Datacenter\Setup
[12/04/2018 16:41:43.0234] [1] Setup is stopping now because of one or more critical errors. [12/04/2018 16:41:43.0234] [1] Finished executing component tasks.
[12/04/2018 16:41:43.0318] [1] Ending processing Install-UnifiedMessagingRole
[12/04/2018 16:44:51.0116] [0] CurrentResult setupbase.maincore:396: 0 [12/04/2018 16:44:51.0118] [0] End of Setup
[12/04/2018 16:44:51.0118] [0] **********************************************
Root Cause
Ran the Setup again and it failed with the same error
While going through the log files I noticed that the setup looks for this file path while configuring the "Mailbox role: Unified Messaging service" (Stage 6 on the GUI installer):
$grammarPath = join-path $RoleInstallPath "UnifiedMessaging\grammars\*";
There was no folder named grammars under the path specified in the error.
Just to confirm, I checked another server on CU8, and the grammars folder is there.
Not sure why the folder got removed; it may have happened during the first run of the CU11 setup that failed.
Resolution
My first thought was to copy the folder from an existing CU8 server, but to avoid any issues (since Exchange is sensitive to file versions),
I created an empty folder named "grammars" under D:\Microsoft\Exchange_Server_V15\UnifiedMessaging\
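From an elevated PowerShell prompt that is just (a one-liner sketch; the path matches this environment's install location):
New-Item -ItemType Directory -Path "D:\Microsoft\Exchange_Server_V15\UnifiedMessaging\grammars"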
Ran the setup again and it continued the upgrade process and completed without any issues...¯\_(ツ)_/¯
[12/04/2018 18:07:50.0416] [2] Ending processing Set-ServerComponentState
[12/04/2018 18:07:50.0417] [2] Beginning processing Write-ExchangeSetupLog
[12/04/2018 18:07:50.0420] [2] Install is complete. Server state has been set to Active.
[12/04/2018 18:07:50.0421] [2] Ending processing Write-ExchangeSetupLog
[12/04/2018 18:07:50.0422] [1] Finished executing component tasks.
[12/04/2018 18:07:50.0429] [1] Ending processing Start-PostSetup
[12/04/2018 18:07:50.0524] [0] CurrentResult setupbase.maincore:396: 0
[12/04/2018 18:07:50.0525] [0] End of Setup
[12/04/2018 18:07:50.0525] [0] **********************************************
Considering the cost of this software, M$ really has to do better with error handling IMO; I have run into silly issues like this way too many times since Exchange 2010.
Azure AD Sync Connect No-Start-Connection status
Issue
Received the following error from Azure AD stating that password synchronization was not working for the tenant.

When I manually initiate a delta sync, I see the following logs:
"The Specified Domain either does not exist or could not be contacted"
(click to enlarge)
Checked the following
- Restarted ADsync Services
- Resolved the ADDS domain FQDN via DNS – working
- Tested the required ports for AD sync using portqry – found issues with the primary ADDS server defined in the DNS settings
Root Cause
Turns out the domain controller defined as the primary DNS server was going through updates; it was responding to DNS queries but not returning any data (a brown-out state).
Assumption
Since the primary DNS server still responds to connections, Windows doesn't fall back to the secondary and tertiary servers defined under DNS servers.
This can also happen if you are using an ADDS server via an S2S tunnel/MPLS when the latency goes high.
Resolution
Check and make sure the ADDS DNS servers defined on the AD Sync server are alive and responding.
In my case, I just updated the "Primary" DNS value with the Umbrella appliance IP (this acts as a proxy and handles the failover).
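A quick way to sanity-check the DNS servers from the AAD Connect box (a sketch; corp.example.com, dc01 and 10.10.10.10 are placeholders for your domain, DC name and DNS server IP):
# List the DNS servers configured on the sync server's NICs
Get-DnsClientServerAddress -AddressFamily IPv4
# Ask a specific DNS server to resolve the AD DS domain (repeat for each configured server)
Resolve-DnsName corp.example.com -Server 10.10.10.10
# Confirm LDAP is reachable on a domain controller
Test-NetConnection dc01.corp.example.com -Port 389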
PowerShell remoting (WinRM) over HTTPS using a AD CS PKI (CA) signed client Certificate
This is a guide to show you how to enroll your servers/desktops to allow PowerShell remoting (WinRM) over HTTPS.
Assumptions
- You have a working Root CA on the ADDS environment – Guide
- CRL and AIA is configured properly – Guide
- Root CA cert is pushed out to all Servers/Desktops – This happens by default
Contents
- Setup CA Certificate template
- Deploy Auto-enrolled Certificates via Group Policy
- Powershell logon script to set the WinRM listener
- Deploy the script as a logon script via Group Policy
- Testing
1 – Setup CA Certificate template to allow client servers/desktops to check out the certificate from the CA
Connect to the Certification Authority Microsoft Management Console (MMC)
Navigate to Certificate Templates > Manage

On the “Certificate templates Console” window > Select Web Server > Duplicate Template

Under the new Template window Set the following attributes
General – Pick a Name and Validity Period – This is up to you

Compatibility – Set the compatibility attributes (you can leave these at the default values; it's up to you)

Subject Name – Set ‘Subject Name’ attributes (Important)

Security – Add “Domain Computers” Security Group and Set the following permissions
- Read – Allow
- Enroll – Allow
- Autoenroll – Allow

Click “OK” to save and close out of “Certificate template console”
Issue the new template
Go back to the “The Certification Authority Microsoft Management Console” (MMC)
Under templates (Right click the empty space) > Select New > Certificate template to Issue

Under the Enable Certificate template window > Select the Template you just created

Allow a few minutes for ADDS to replicate and pick up the changes within the forest
2 – Deploy Auto-enrolled Certificates via Group Policy
Create a new GPO
Windows Settings > Security Settings > Public Key Policies/Certificate Services Client – Auto-Enrollment Settings

Link the GPO to the relevant OU with in your ADDS environment
Note – You can push out the root CA cert as a trusted root certificate with this same policy if you want to force computers to pick up the CA cert.
Testing
If you need to test it, run gpupdate /force or reboot your test machine; the server VM/PC will pick up a certificate from the AD CS PKI.
3 – Powershell logon script to set the WINRM listener
Dry run
- Set up the log file
- Check for the certificate matching the machine's FQDN, auto-enrolled from AD CS
- If it exists
  - Set up the HTTPS WinRM listener and bind the certificate
  - Write to the log
- Else
  - Write to the log
#Malinda Rathnayake- 2020
#
#variable
$Date = Get-Date -Format "dd_MM_yy"
$port=5986
$SessionRunTime = Get-Date -Format "dd_yyyy_HH-mm"
#
#Setup Logs folder and log File
$ScriptVersion = '1.0'
$locallogPath = "C:\_Scripts\_Logs\WINRM_HTTPS_ListenerBinding"
#
$logging_Folder = (New-Item -Path $locallogPath -ItemType Directory -Name $Date -Force)
$ScriptSessionlogFile = New-Item $logging_Folder\ScriptSessionLog_$SessionRunTime.txt -Force
$ScriptSessionlogFilePath = $ScriptSessionlogFile.VersionInfo.FileName
#
#Check for the auto-enrolled SSL cert
$RootCA = "Company-Root-CA" #change This
$hostname = ([System.Net.Dns]::GetHostByName(($env:computerName))).Hostname
$certinfo = (Get-ChildItem -Path Cert:\LocalMachine\My\ |? {($_.Subject -Like "CN=$hostname") -and ($_.Issuer -Like "CN=$RootCA*")})
$certThumbprint = $certinfo.Thumbprint
#
#Script-------------------------------------------------------
#
#Remove the existing WInRM Listener if there is any
Get-ChildItem WSMan:\Localhost\Listener | Where -Property Keys -eq "Transport=HTTPS" | Remove-Item -Recurse -Force
#
#If the client certificate exists Setup the WinRM HTTPS listener with the cert else Write log
if ($certThumbprint){
#
New-Item -Path WSMan:\Localhost\Listener -Transport HTTPS -Address * -CertificateThumbprint $certThumbprint -HostName $hostname -Force
#
netsh advfirewall firewall add rule name="Windows Remote Management (HTTPS-In)" dir=in action=allow protocol=TCP localport=$port
#
Add-Content -Path $ScriptSessionlogFilePath -Value "Certbinding with the HTTPS WinRM HTTPS Listener Completed"
Add-Content -Path $ScriptSessionlogFilePath -Value "$certinfo.Subject"}
else{
Add-Content -Path $ScriptSessionlogFilePath -Value "No Cert matching the Server FQDN found, Please run gpupdate/force or reboot the system"
}
The script is commented to explain each section (I should have used functions, but I was pressed for time and never got around to it; if you fix it up and improve it, please let me know in the comments :D)
4 – Deploy the script as a logon script via Group Policy
Setup a GPO and set this script as a logon Powershell script
I'm using a user policy with GPO loopback processing set to Merge, applied to the server OU

Testing
To confirm WinRM is listening on HTTPS, type the following commands:
winrm enumerate winrm/config/listener

Winrm get http://schemas.microsoft.com/wbem/wsman/1/config

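From another domain-joined machine you can then test the HTTPS listener end to end (a sketch; server01.corp.example.com is a placeholder for your enrolled server's FQDN):
# Run a quick command over the HTTPS (5986) listener
Invoke-Command -ComputerName server01.corp.example.com -UseSSL -ScriptBlock { hostname }
# Or open an interactive session
Enter-PSSession -ComputerName server01.corp.example.com -UseSSL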
Sources that helped me