Just something that came up while setting up a monitoring script using mailx; figured I'll note it down here so I can get to it easily later when I need it 😀
Important prerequisites
You need to enable SMTP basic auth on Office 365 for the account used for authentication
Create an app password for the user account
The nssdb folder must be available and readable by the user running the mailx command
Assuming all of the above prerequisites are $true, we can proceed with the setup
Install mailx
RHEL/Alma linux
sudo dnf install mailx
NSSDB Folder
Make sure the nssdb folder is available and readable by the user running the mailx command
certutil -L -d /etc/pki/nssdb
The output might be empty, but that's OK; the database is there in case you need to add a locally signed cert or another CA cert manually. Microsoft certs are trusted by default if you are on an up-to-date operating system with the system-wide trust store.
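If you ever do need to trust a custom CA for the SMTP connection, here is a hedged example of importing one into the nssdb (the nickname and path are placeholders):
certutil -A -d /etc/pki/nssdb -n "My-Internal-CA" -t "CT,," -i /path/to/ca-cert.pem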
Add the following lines to the mail.rc config, and comment out or remove any conflicting lines already defined in the existing config files
set smtp=smtp.office365.com
set smtp-auth-user=###[email protected]###
set smtp-auth-password=##Office365-App-password#
set nss-config-dir=/etc/pki/nssdb/
set ssl-verify=ignore
set smtp-use-starttls
set from="###[email protected]###"
This is the bare minimum needed; the other switches are documented here – link
The -v switch will print the verbose debug log to the console
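A quick test send along these lines will produce a verbose session log like the one below (the recipient address is a placeholder):
echo "Test email from mailx" | mailx -v -s "mailx Office 365 test" [email protected]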
Connecting to 52.96.40.242:smtp . . . connected.
220 xxde10CA0031.outlook.office365.com Microsoft ESMTP MAIL Service ready at Sun, 6 Aug 2023 22:14:56 +0000
>>> EHLO vls-xxx.multicastbits.local
250-MN2PR10CA0031.outlook.office365.com Hello [167.206.57.122]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-STARTTLS
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
>>> STARTTLS
220 2.0.0 SMTP server ready
>>> EHLO vls-xxx.multicastbits.local
250-xxde10CA0031.outlook.office365.com Hello [167.206.57.122]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-AUTH LOGIN XOAUTH2
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
>>> AUTH LOGIN
334 VXNlcm5hbWU6
>>> Zxxxxxxxxxxxc0BmdC1zeXMuY29t
334 UGsxxxxxmQ6
>>> c2Rxxxxxxxxxxducw==
235 2.7.0 Authentication successful
>>> MAIL FROM:<###[email protected]###>
250 2.1.0 Sender OK
>>> RCPT TO:<[email protected]>
250 2.1.5 Recipient OK
>>> DATA
354 Start mail input; end with <CRLF>.<CRLF>
>>> .
250 2.0.0 OK <[email protected]> [Hostname=Bsxsss744.namprd11.prod.outlook.com]
>>> QUIT
221 2.0.0 Service closing transmission channel
Now you can use this in your automation scripts or timers using the mailx command
#!/bin/bash
log_file="/etc/app/runtime.log"
recipient="[email protected]"
subject="Log file from /etc/app/runtime.log"
# Check if the log file exists
if [ ! -f "$log_file" ]; then
echo "Error: Log file not found: $log_file"
exit 1
fi
# Use mailx to send the log file as an attachment
echo "Sending log file..."
mailx -s "$subject" -a "$log_file" -r "[email protected]" "$recipient" < /dev/null
echo "Log file sent successfully."
The above commands change the file’s owner and group to root, then set the file permissions to 600, which means only the owner (root) has read and write permissions and other users have no access to the file.
Use environment variables: avoid storing sensitive information like passwords directly in the mail.rc file; consider using environment variables for sensitive data and reference those variables in the configuration.
For example, in the mail.rc file, you can set:
set smtp-auth-password=$MY_EMAIL_PASSWORD
You can set the variable from another config file, store it in Ansible Vault and inject it at runtime, or use something like HashiCorp Vault.
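For example, a sketch of exporting the variable before the send (the value is a placeholder):
export MY_EMAIL_PASSWORD='##Office365-App-password##'
echo "disk usage report" | mailx -s "nightly report" [email protected]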
Sure, I would just use Python or PowerShell Core, but you will run into more locked-down environments, like OCI-managed DB servers, where mailx is the only tool preinstalled and the only one you can use 🙁
The fact that you are here means you are already in the same boat. Hope this helped… until next time
After looking around for a few hours digging into the logs, I figured out the issue; hopefully this helps someone else out there in the same situation save some time.
Make sure IPVS mode is enabled in the cluster configuration
If you are using:
RKE2 – edit the cluster.yaml file
RKE1 – Edit the cluster configuration from the rancher UI > Cluster management > Select the cluster > edit configuration > edit as YAML
Locate the services field under rancher_kubernetes_engine_config and add the following options to enable IPVS
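A sketch of what those options can look like in the cluster YAML (the ipvs-scheduler value is an assumption that matches the ip_vs_lc module in the log below; drop it to use the default scheduler):
services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      ipvs-scheduler: lc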
Make sure the Kernel modules are enabled on the nodes running control planes
Background
Example Rancher – RKE1 cluster
sudo docker ps | grep proxy # find the container ID for kube-proxy
sudo docker logs ####containerID###
0313 21:44:08.315888 108645 feature_gate.go:245] feature gates: &{map[]}
I0313 21:44:08.346872 108645 proxier.go:652] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack_ipv4"
E0313 21:44:08.347024 108645 server_others.go:107] "Can't use the IPVS proxier" err="IPVS proxier will not be used because the following required kernel modules are not loaded: [ip_vs_lc]"
Kube-proxy is trying to load the needed kernel modules and failing, so it cannot enable IPVS
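To load the missing modules manually on the affected nodes (a sketch; module names can vary slightly by kernel version):
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_lc nf_conntrack; do sudo modprobe "$m"; done
#persist the modules across reboots
printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_lc nf_conntrack | sudo tee /etc/modules-load.d/ipvs.conf
After the modules load, restart kube-proxy (or the node) and check the logs again to confirm IPVS mode is active.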
This guide will walk you through how to extend and increase space for the root filesystem on an AlmaLinux, CentOS, or RHEL server/desktop/VM
Method A – Expanding the current disk
Edit the VM and Add space to the Disk
Install the cloud-utils-growpart package, as the growpart command in it makes it really easy to extend partitioned virtual disks.
sudo dnf install cloud-utils-growpart
Verify that the VM’s operating system recognizes the new increased size of the sda virtual disk, using lsblk or fdisk -l
sudo fdisk -l
Notes -
Note down the disk ID and the partition number for the Linux LVM partition - in this demo the disk ID is sda and the LVM partition is sda3
Let's trigger a rescan of the block devices (disks)
#elevate to root
sudo su
#trigger a rescan, Make sure to match the disk ID you noted down before
echo 1 > /sys/block/sda/device/rescan
exit
Now sudo fdisk -l shows the correct size of the disks
Use growpart to increase the partition size for the lvm
sudo growpart /dev/sda 3
Confirm the volume group name
sudo vgs
Extend the logical volume
sudo lvextend -l +100%FREE /dev/almalinux/root
Grow the file system size
sudo xfs_growfs /dev/almalinux/root
Notes -
You can use these same steps to add space to other partitions such as home or swap if needed
Method B - Adding a second disk to the LVM and expanding space
Why add a second disk?
Maybe the current disk is locked due to a snapshot and you can't remove it; the only solution would be to add a second disk.
Check the current space available
sudo df -h
Notes -
If you have 0% (~1 MB) left on cs-root, command auto-complete with Tab and some of the later commands won't work; you should free up at least 4-10 MB by clearing log files, temp files, etc.
Attach an additional disk to the VM (assuming this is a VM) and make sure the disk is visible at the OS level
sudo lvmdiskscan
OR
sudo fdisk -l
Confirm the volume group name
sudo vgs
Let's increase the space
First, let's initialize the new disk we attached
sudo mkfs.xfs /dev/sdb
Create the Physical volume
sudo pvcreate /dev/sdb
extend the volume group
sudo vgextend cs /dev/sdb
Volume group "cs" successfully extended
Extend the logical volume
sudo lvextend -l +100%FREE /dev/cs/root
Grow the file system size
sudo xfs_growfs /dev/cs/root
Confirm the changes
sudo df -h
Just making it easy for us!!
#Method A - Expanding the current disk
#AlmaLinux
sudo dnf install cloud-utils-growpart
sudo lvmdiskscan
sudo fdisk -l #note down the disk ID and partition num
sudo su #elevate to root
echo 1 > /sys/block/sda/device/rescan #trigger a rescan
exit #exit root shell
sudo growpart /dev/sda 3 #grow the LVM partition (match the disk and partition number noted earlier)
sudo vgs #confirm the volume group name
sudo lvextend -l +100%FREE /dev/almalinux/root
sudo xfs_growfs /dev/almalinux/root
sudo df -h
#Method B - Adding a second Disk
#CentOS
sudo lvmdiskscan
sudo fdisk -l
sudo vgs
sudo mkfs.xfs /dev/sdb
sudo pvcreate /dev/sdb
sudo vgextend cs /dev/sdb
sudo lvextend -l +100%FREE /dev/cs/root
sudo xfs_growfs /dev/cs/root
sudo df -h
#AlmaLinux
sudo lvmdiskscan
sudo fdisk -l
sudo vgs
sudo mkfs.xfs /dev/sdb
sudo pvcreate /dev/sdb
sudo vgextend almalinux /dev/sdb
sudo lvextend -l +100%FREE /dev/almalinux/root
sudo xfs_growfs /dev/almalinux/root
sudo df -h
If you want to prevent directory traversal, you need to set up chroot with vsftpd (not covered in this KB)
For the demo I just used unencrypted FTP on port 21 to keep things simple. Please use FTPS (FTP over TLS) with a Let's Encrypt certificate for better security; I will cover this in another article and link it here
Update and Install packages we need
sudo dnf update
sudo dnf install net-tools lsof unzip zip tree policycoreutils-python-utils-2.9-20.el8.noarch vsftpd nano setroubleshoot-server -y
Setup Groups and Users and security hardening
Create the Service admin account
sudo useradd ftpadmin
sudo passwd ftpadmin
Create the group
sudo groupadd FTP_Root_RW
Create FTP only user shell for the FTP users
echo -e '#!/bin/sh\necho "This account is limited to FTP access only."' | sudo tee -a /bin/ftponly
sudo chmod a+x /bin/ftponly
echo "/bin/ftponly" | sudo tee -a /etc/shells
Create FTP users
sudo useradd ftpuser01 -m -s /bin/ftponly
sudo useradd ftpuser02 -m -s /bin/ftponly
sudo passwd ftpuser01
sudo passwd ftpuser02
Add the users to the group
sudo usermod -a -G FTP_Root_RW ftpuser01
sudo usermod -a -G FTP_Root_RW ftpuser02
sudo usermod -a -G FTP_Root_RW ftpadmin
Disable SSH Access for the FTP users.
Edit sshd_config
sudo nano /etc/ssh/sshd_config
Add the following line to the end of the file
DenyUsers ftpuser01 ftpuser02
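Then restart sshd so the change takes effect:
sudo systemctl restart sshd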
Open ports on the VM Firewall
sudo firewall-cmd --permanent --add-port=20-21/tcp
#Allow the passive Port-Range we will define it later on the vsftpd.conf
sudo firewall-cmd --permanent --add-port=60000-65535/tcp
#Reload the ruleset
sudo firewall-cmd --reload
Setup the Second Disk for FTP DATA
Attach another disk to the VM and reboot if you haven’t done this already
Run lsblk to check the current disks and partitions detected by the system
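A minimal sketch of preparing the data disk (assuming it shows up as /dev/sdb and will be mounted at /FTP_DATA_DISK, the path used by the SELinux rules later):
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /FTP_DATA_DISK/FTP_Root
echo '/dev/sdb /FTP_DATA_DISK xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount -a
sudo chgrp -R FTP_Root_RW /FTP_DATA_DISK/FTP_Root
sudo chmod -R 775 /FTP_DATA_DISK/FTP_Root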
sudo mv /etc/vsftpd/user_list /etc/vsftpd/user_listBackup
echo "ftpuser01" | sudo tee -a /etc/vsftpd/user_list
echo "ftpuser02" | sudo tee -a /etc/vsftpd/user_list
Instead of throwing our hands up and disabling SELinux, we are going to set up the policies correctly
Find the available policies using getsebool -a | grep ftp
getsebool -a | grep ftp
ftpd_anon_write --> off
ftpd_connect_all_unreserved --> off
ftpd_connect_db --> off
ftpd_full_access --> off
ftpd_use_cifs --> off
ftpd_use_fusefs --> off
ftpd_use_nfs --> off
ftpd_use_passive_mode --> off
httpd_can_connect_ftp --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off
tftp_home_dir --> off
Set SELinux boolean values
sudo setsebool -P ftpd_use_passive_mode on
sudo setsebool -P ftpd_use_cifs on
sudo setsebool -P ftpd_full_access 1
"setsebool" is a tool for setting SELinux boolean values, which control various aspects of the SELinux policy.
"-P" specifies that the boolean value should be set permanently, so that it persists across system reboots.
"ftpd_use_passive_mode" is the name of the boolean value that should be set. This boolean value controls whether the vsftpd FTP server should use passive mode for data connections.
"on" specifies that the boolean value should be set to "on", which means that vsftpd should use passive mode for data connections.
Enable ftp_home_dir --> on if you are using chroot
Add a new file context rule to the system.
sudo semanage fcontext -a -t public_content_rw_t "/FTP_DATA_DISK/FTP_Root/(/.*)?"
"fcontext" is short for "file context", which refers to the security context that is associated with a file or directory.
"-a" specifies that a new file context rule should be added to the system.
"-t" specifies the new file context type that should be assigned to files or directories that match the rule.
"public_content_rw_t" is the name of the new file context type that should be assigned to files or directories that match the rule. In this case, "public_content_rw_t" is a predefined SELinux type that allows read and write access to files and directories in public directories, such as /var/www/html.
"/FTP_DATA_DISK/FTP_Root/(/.)?" specifies the file path pattern that the rule should match. The pattern includes the "/FTP_DATA_DISK/FTP_Root/" directory and any subdirectories or files beneath it. The regular expression "/(.)?" matches any file or directory name that may follow the "/FTP_DATA_DISK/FTP_Root/" directory path.
In summary, this command sets the file context type for all files and directories under the "/FTP_DATA_DISK/FTP_Root/" directory and its subdirectories to "public_content_rw_t", which allows read and write access to these files and directories.
Reset the SELinux security context for all files and directories under the “/FTP_DATA_DISK/FTP_Root/”
sudo restorecon -Rvv /FTP_DATA_DISK/FTP_Root/
"restorecon" is a tool that resets the SELinux security context for files and directories to their default values.
"-R" specifies that the operation should be recursive, meaning that the security context should be reset for all files and directories under the specified directory.
"-vv" specifies that the command should run in verbose mode, which provides more detailed output about the operation.
"/FTP_DATA_DISK/FTP_Root/" is the path of the directory whose security context should be reset.
Setup Fail2ban
Install fail2ban
sudo dnf install fail2ban
Create the jail.local file
This file is used to override the jail definitions shipped in /etc/fail2ban/jail.conf
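A minimal jail.local sketch for vsftpd (assumes the vsftpd filter shipped with fail2ban and firewalld rich rules as the ban action; tune the times and retries to taste):
[DEFAULT]
banaction = firewallcmd-rich-rules
[vsftpd]
enabled = true
port = ftp,ftp-data
logpath = /var/log/vsftpd.log
maxretry = 3
findtime = 600
bantime = 3600
Then enable and start the service:
sudo systemctl enable --now fail2ban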
journalctl -u fail2ban will help you narrow down any issues with the service
Testing
sudo tail -f /var/log/fail2ban.log
Fail2ban injects and manages firewalld rich rules for the banned clients
Clients will fail to connect using FTP until the ban is lifted
Removing banned IPs
#get the list of banned IPs
sudo fail2ban-client get vsftpd banned
#Remove a specific IP from the list
sudo fail2ban-client set vsftpd unbanip <IP>
#Remove/Reset all the banned IP lists
sudo fail2ban-client unban --all
This should get you up and running. For the demo I just used unencrypted FTP on port 21 to keep things simple; please use FTPS (FTP over TLS) with a Let's Encrypt certificate for better security. I will cover this in another article and link it here
Let me address the question of why I decided to expose a DNS server (Pi-hole) to the internet (not fully open, but still).
I needed/wanted to set up an Umbrella/NextDNS/CF type DNS server that’s publicly accessible but secured to certain IP addresses.
Sure, NextDNS is an option and it's cheap with similar features, but I wanted to roll my own solution so I can learn a few things along the way
I can easily set this up for my family members who have minimal technical knowledge and can't deal with yet another device (Raspberry Pi) plugged into their home network.
This will also serve as a quick and dirty guide on how to use Docker compose and address some Issues with Running Pi-hole, Docker with UFW on Ubuntu 20.x
So lets get stahhhted…….
Scope
Setup Pi-hole as a docker container on a VM
Enable IPV6 support
Set up UFW rules to restrict traffic, plus a cron job to keep the rules updated with the dynamic WAN IPs
Deploy and test
What we need
Linux VM (Ubuntu, Hardened BSD, etc)
Docker and Docker Compose
Dynamic DNS service to track the changing IP (Dyndns,no-Ip, etc)
Deployment
Setup Dynamic DNS solution to track your Dynamic WAN IP
For this demo, we are going to use DynDNS since I already own a paid account and it's supported on most platforms (routers, UTMs, NAS devices, IP camera DVRs, etc)
Use some Google-fu; there are multiple ways to do this without having to pay for the service. All we need is a DNS record that's kept up to date with your current public IP address.
For Network A and Network B, I'm going to use the routers' built-in DDNS update features
Network A gateway – UDM Pro
Network B Gateway – Netgear R6230
Confirmation
Setup the VM with Docker-compose
Pick your service provider; you can and should be able to use a free-tier VM for this since it's just DNS
Linode
AWS lightsail
IBM cloud
Oracle cloud
Google Compute
Digital Ocean droplet
Make sure you have a dedicated (static) IPv4 and IPv6 address attached to the resource
For this deployment, I'm going to use a Linode Nanode, due to their native IPv6 support and because I prefer their platform for personal projects
Let's configure the Docker networking side to fit our needs
Create a separate bridge network for the Pi-hole container
I guess you could use the default bridge network, but I like to create one to keep things organized; this way the service is isolated from the other containers I have
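A sketch of creating that bridge (the name Piholev6 matches what the compose file references later; the subnets are examples, pick ranges that fit your environment):
docker network create --driver bridge --ipv6 --subnet 172.28.0.0/24 --subnet fd00:64:1::/64 Piholev6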
With the new Ubuntu 20.x releases, systemd starts a local DNS stub listener on 127.0.0.53:53
which will prevent the container from starting, because Pi-hole needs to bind to the same port (UDP 53)
We could disable the service, but that breaks DNS resolution on the VM, causing more headaches and pain for automation and updates
After some Google-fu and tinkering around, this is the workaround I found.
Disable the stub-listener
Point the /etc/resolv.conf symlink to /run/systemd/resolve/resolv.conf
Push the external name servers so the VM won't look at loopback to resolve DNS
Restart systemd-resolved
Resolving Conflicts with the systemd-resolved stub listener
We need to disable the stub listener that's bound to port 53; as I mentioned before, this breaks local DNS resolution, which we will fix in a bit.
sudo nano /etc/systemd/resolved.conf
Find and uncomment the line “DNSStubListener=yes” and change it to “no”
After this we need to push the external DNS servers to the box, this setting is stored on the following file
/etc/resolv.conf
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.53
But we can't manually update this file with our own DNS servers; let's investigate
ls -l /etc/resolv.conf
It's a symlink to another system file
/run/systemd/resolve/stub-resolv.conf
When you take a look at the directory where that file resides, there are two files
When you look at the other file, you will see that /run/systemd/resolve/resolv.conf is the one that actually carries the external name servers
You still can't manually edit this file; it gets updated with whatever IPs are provided as DNS servers via DHCP, and netplan will dictate the IPs based on the static DNS servers you configure in the netplan YAML file
I can see there are two entries, and they are the default Linode DNS servers discovered via DHCP; I'm going to keep them as is, since they are good enough for my use case
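To repoint the symlink and restart the resolver (matching the workaround steps listed earlier):
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
sudo systemctl restart systemd-resolved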
container_name – This is the name of the container in the Docker container registry (a full compose-file sketch follows this list)
image – What image to pull from the Docker Hub
hostname – This is the host-name for the Docker container – this name will show up on your lookup when you are using this Pi-hole
ports – What ports should be NATed via the Docker Bridge to the host VM
TZ – Time Zone
DNS1 – DNS server used with in the image
DNS2 – DNS server used with in the image
WEBPASSWORD – Password for the Pi-Hole web console
ServerIP – Use the IPv4 address assigned to the VM's network interface (you need this for the Pi-hole to respond on that IP for DNS queries)
IPv6 – Enable/disable IPv6 support
ServerIPv6 – Use the IPv6 address assigned to the VM's network interface (you need this for the Pi-hole to respond on that IP for DNS queries)
volumes – These volumes will hold the configuration data so the container settings and historical data will persist reboots
cap_add:- NET_ADMIN – Add Linux capabilities to edit the network stack – link
restart: always – This will make sure the container gets restarted every time the VM boots up – Link
networks:default:external:name: Piholev6 – Set the container to use the network bridge we created before
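Putting those fields together, here's a sketch of what the docker-compose.yml can look like (every value below is a placeholder/assumption, not the exact file from this deployment):
version: "3"
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    hostname: pihole01
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"
    environment:
      TZ: "America/New_York"
      DNS1: "1.1.1.1"
      DNS2: "1.0.0.1"
      WEBPASSWORD: "ChangeMe"
      ServerIP: "203.0.113.10"
      IPv6: "true"
      ServerIPv6: "2001:db8::10"
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    restart: always
networks:
  default:
    external:
      name: Piholev6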
Now lets bring up the Docker container
docker-compose up -d
The -d switch will bring up the Docker container in the background
Run 'docker ps' to confirm
Now you can access the web interface and use the Pihole
Verify it's using the bridge network you created
Grab the network ID for the bridge network we created before and use the inspect switch to check the config
docker network ls
docker network inspect f7ba28db09ae
This will bring up the full configuration for the Linux bridge we created and the containers attached to the bridge will be visible under the “Containers”: tag
Testing
I manually configured my workstation's primary DNS to the Pi-hole IPs
Updating the docker Image
Pull the new image from the Registry
docker pull pihole/pihole
Take down the current container
docker-compose down
Run the new container
docker-compose up -d
Your settings will persist through this update
Securing the install
Now that we have a working Pi-hole with IPv6 enabled, we can log in, configure the Pi-hole server, and resolve DNS as needed
But this is open to the public internet and will fall victim to DNS reflection attacks, etc.
Let's set up firewall rules and open up the relevant ports (DNS, SSH, HTTPS) to the relevant IP addresses before we proceed
Disable iptables handling in the Docker daemon
Ubuntu uses UFW (Uncomplicated Firewall) as an abstraction layer to make things easier for operators, but by default Docker opens ports using iptables rules that take higher precedence, so rules added via UFW don't take effect
So we need to tell Docker not to do this when launching a container, so we can manage the firewall rules via UFW
This file may not exist already; if so, nano will create it for you
sudo nano /etc/docker/daemon.json
Add the following lines to the file
{
"iptables": false
}
restart the docker services
sudo systemctl restart docker
Now, doing this might disrupt communication with the containers until we allow the traffic back in with UFW rules, so keep that in mind.
Automatically updating Firewall Rules based on the DYN DNS Host records
We are going to create a shell script and run it every hour using crontab
Shell Script Dry run
Get the IP from the DYNDNS Host records
remove/Cleanup existing rules
Add Default deny Rules
Add allow rules using the resolved IPs as the source
Dynamic IP addresses are updated on the following DNS records
trusted-Network01.selfip.net
trusted-Network02.selfip.net
Let's start by creating the script file under /bin/
#!/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
now=$(date +"%m/%d/%T")
DYNDNSNetwork01="trusted-Network01.selfip.net"
DYNDNSNetwork02="trusted-Network02.selfip.com"
#Get the network IP using dig
Network01_CurrentIP=`dig +short $DYNDNSNetwork01`
Network02_CurrentIP=`dig +short $DYNDNSNetwork02`
echo "-----------------------------------------------------------------"
echo Network A WAN IP $Network01_CurrentIP
echo Network B WAN IP $Network02_CurrentIP
echo "Script Run time : $now"
echo "-----------------------------------------------------------------"
#update firewall Rules
#reset firewall rules
#
sudo ufw --force reset
#
#Re-enable Firewall
#
sudo ufw --force enable
#
#Enable inbound default Deny firewall Rules
#
sudo ufw default deny incoming
#
#add allow Rules to the relevant networks
#
sudo ufw allow from $Network01_CurrentIP to any port 22 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 8080 proto tcp
sudo ufw allow from $Network01_CurrentIP to any port 53 proto udp
sudo ufw allow from $Network02_CurrentIP to any port 53 proto udp
#add the IPv6 DNS allow-all rule - working on finding an effective way to lock this down; with IPv6 the risk is minimal
sudo ufw allow 53/udp
#find and delete the allow any to any IPv4 Rule for port 53
sudo ufw --force delete $(sudo ufw status numbered | grep '53*.*Anywhere.' | grep -v v6 | awk -F"[][]" '{print $2}')
echo "--------------------end Script------------------------------"
Let's run the script to make sure it's working
I used an online port scanner to confirm
Set up a scheduled job with logging
Let's use crontab and set up a scheduled job to run this script every hour
Make sure the script is copied to the /bin folder with executable permissions
Using crontab -e (if you are launching this for the first time, it will ask you to pick an editor; I picked nano)
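The crontab entry ends up looking something like this (the script name and log path are placeholders for wherever you copied the script):
0 * * * * /bin/update-ufw-rules.sh >> /var/log/ufw-dyndns.log 2>&1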
I have my lab set up in my room, so I had to do something about this. After wandering around in OMSA, DRAC, and the BIOS with no luck, I turned to almighty Google for help. Turns out Dell decided not to expose the BMC's fan controller settings to the users. It's baked into the firmware.
Reducing the noise involves two mods, hardware and firmware.
Fan MOD – Lower the Fan speeds to reduce the noise
Firmware mod – Lowering the BMC fan rpm thresholds
Update:
I stress tested the server after the mod, check here for details – Dell PE 2950 Stress test
01. Fan MOD – Lower the Fan speeds to reduce the noise
I stumbled upon this post on the "Blind Caveman's blog" – http://blindcaveman.wordpress.com/2013/08/23/problem-dell-poweredge-2950-iii-jet-engine-fan-noise/ Apparently he had success with adding a 47ohm resistor in line to all 4 intake fans, and he has a very comprehensive guide on the mod. I'm just going to put down the summary of what I did. (Props to Caveman for coming up with this solution)
Items you need
4pc of 47ohm ½ watt resistors. (Radio shack $1.49)
Heat Shrink. (Radio shack $4.59)
Soldering iron.
Note : You can drop the resistor value to increase the fan voltage
Fan Mod – Steps
01. Remove the cover.
02. Remove the fan by pulling the orange tabs and gently lifting up.
03. Remove the wire clip, cut the "Red" wire, and solder the resistor in line with the wire.
Red Wire
04. Re-seat the fans back on the server. (be careful not to let it touch the heat sink right next to it)
Watch out for the Heat-sink
Note: I just modded the intake fans. The OP suggests modding the PSU fans as well, but I don't think you need to mess with the power supply fans, for 3 reasons.
It’s not going to make a huge difference. (my PE is running below 52db with just the intake fans modded)
PSU is Expensive to replace. (on Ebay PSU is around $100 but four Dell 2950 Fans cost less than $10)
I believe the PSU units should be left to run as cool and efficiently as possible.
—————————————————————————————————————————
So after the mod, I booted up the server and it was running significantly quieter. BUT… yes, there's a huge but….
Issue 01 – OMSA errors and fan speed issues
The fan speeds were ramping up and down every few minutes. When I monitored the fan speeds via DRAC, it showed an error with the fans failing, since the idle RPM is lower than the minimum RPM threshold.
What is happening
The BMC lowers the fan RPM after the initial boot. Since the resistor is in place, the lowest RPM is around 1800, while the default minimum RPM error threshold is 2250 RPM, so the BMC panics, spins the fans back up to 100%, then lowers them again once the error clears, and so on. It was going on in a never-ending cycle of annoyingness.
So after some more Google-fu, I found a post written by a German "artificial intelligence researcher" who faced the same issue after he swapped out the Dell fans for lower-RPM ones; since Dell refused to help him fix it, he engineered his own fix by modifying the BMC firmware to reduce the minimum RPM threshold (how cool is that).
His name is Arnuschky – Link | Post link. His post is well written and to the point (kudos to you, sir), but it's not very noob friendly. 🙁 So I'm going to make a step-by-step guide using his post as reference, with a few more additions, for anyone who is new to open source and messing with Dell firmware.
02. Firmware mod – Lowering the BMC fan rpm thresholds
The solution explained-
Arnuschky figured out the exact setting in the BMC’s firmware, the check-sums etc to modify the fan rpm thresholds and wrote a very nifty script to help us modify the values on a downloaded firmware file.
What is BMC (board management controller)
Among many other things, fans are controlled by the BMC and the fan curve and all the values are baked in to the firmware.
BMC (board management controller) by design will ramp up the RPM of the fans every time you add more hardware to the system such as – Add-on cards, RAM, HDD’s, etc
What is IPMI
Intelligent Platform Management Interface; this tool set can be easily installed on any Linux distribution, and after you enable IPMI in the BIOS (DRAC interface) you can query sensor data from the BMC and configure parameters on the BMC.
Procedure
Things you should know –
This worked for many people, including me. Neither I nor anyone involved will be held responsible for any damage caused by proceeding with the firmware mod.
You cannot perform this mod on ESXi. But if you are running a base OS like Red Hat/CentOS/Ubuntu you should be good to go.
You cannot flash the firmware from a VM (if you know a way, please let us know)
To modify the firmware you have to be on a Linux server; you can technically flash the modified firmware from a Windows server. I will add the details later in the post
Packages required
BMC Firmware file – Dell Drivers and support
IPMI tools
glibc.i686 (If you are on a 64bit OS)
I have ESXi 5.5 installed on the Dell server, so I used a CentOS 6.4 installation running off a USB stick to do the modifications and flashing
Enable IPMI on the DRAC interface
You can do this by logging in to the DRAC web interface or though the bios screen
Press ctrl+E during the post screen to access the DRAC card configuration screen and Enable IPMI
Setting up IPMI Tools
yum install OpenIPMI OpenIPMI-tools
Start and enable the service
chkconfig ipmi on
service ipmi start
Run the following commands to see if IPMI is working
ipmitool sdr type Temperature
Temp | 01h | ok | 3.1 | -48 degrees C
Temp | 02h | ok | 3.2 | -42 degrees C
Temp | 05h | ok | 10.1 | 40 degrees C
Temp | 06h | ok | 10.2 | 40 degrees C
Ambient Temp | 08h | ok | 7.1 | 27 degrees C
CPU Temp Interf | 76h | ns | 7.1 | Disabled
ipmitool sdr type Fan
FAN 1 RPM | 30h | ok | 7.1 | 4200 RPM
FAN 2 RPM | 31h | ok | 7.1 | 4200 RPM
FAN 3 RPM | 32h | ok | 7.1 | 4200 RPM
FAN 4 RPM | 33h | ok | 7.1 | 4200 RPM
FAN 5 RPM | 34h | ok | 7.1 | 4200 RPM
FAN 6 RPM | 35h | ok | 7.1 | 4200 RPM
Fan Redundancy | 75h | ok | 7.1 | Fully Redundant
Install glibc.i686
yum install glibc.i686
Note: the firmware flash program is 32-bit, and on a 64-bit OS it will fail with the following error: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
Download the relevant firmware file
Visit – http://www.dell.com/support/
Enter your service tag
Select OS version – Redhat or any other linux flavor (This will allow you to download the .bin file containing the firmware, this is what we need to modify the values)
To save you time here’s the link for the Dell PE 2950 II, BMC firmware V2.50 – direct link
mkdir bmcfwmod && cd bmcfwmod #create project directory
wget "http://downloads.dell.com/FOLDER00928606M/1/2950_ESM_Firmware_4NNNG_LN32_2.50_A00.BIN"
Set permissions and extract the firmware .bin file
chmod 755 BMC_FRMW_LX_R223079.BIN # make executable
sudo mkdir bmc_firmware # create dir as root
sudo ./BMC_FRMW_LX_R223079.BIN --extract bmc_firmware # yes, you have to do this as root! :(
cd bmc_firmware
Note: You have to extract the .bin file in order to proceed. The above commands extract the firmware .bin file into the bmc_firmware folder. Check inside the folder to see if you have a file called payload/bmcflsh.dat. If not, your system is not compatible with this mod. If yes, please continue.
Patching the firmware file
Note: You should be in the bmc_firmware directory created above
Download and run the script. The --no-check-certificate switch is used to get around the cert issue due to the GitHub domain name mismatch
wget --no-check-certificate "https://raw.github.com/arnuschky/dell-bmc-firmware/master/adjust-fan-thresholds/dell-adjust-fan-thresholds.py"
chmod 755 dell-adjust-fan-thresholds.py # set permissions
./dell-adjust-fan-thresholds.py payload/bmcflsh.dat # execute the py script on the bmcflsh.dat file
The script will prompt you with the following screen
Select your server model; in this case I selected Dell PowerEdge 2950 = number 3. Then it will prompt you to select the fans and adjust the threshold. On the DRAC interface the intake fans show up numbered 1-4; I edited the values for fans 1 through 4 (only the intake fans will be affected)
Setting the value
When you select the fan number, it will ask you to enter the value for the new threshold. This should be entered in multiples of 75; for example, the default value is 2025, which is 27×75, so the stored default value is 27. To reduce the threshold you need to enter something lower than 27. I chose 18 as the value, which drops my threshold to 1350 RPM (18×75=1350)
Saving the changes
After editing the appropriate values, enter “W” to write the changes to the firmware as prompted. This will update the bmcflsh.dat with the modified values
Flashing the modified firmware
If you are on a 64bit OS make sure you have the glibc.i686 package installed
The flash step maps the necessary shared libraries and executes bmcfl32l to flash the firmware file
The fans will rev up and stop for a brief moment during the update; don't worry, they will spool up again in a second. You do not need to reboot to see the changes, but do a reboot just in case. So there you go, your Dell 2950 should be purring away on the shelf silently. Note: You should disable IPMI on the DRAC afterwards, since it is a big security risk.
So we had an old Firebox X700 lying around in the office gathering dust. I saw a forum post about running m0nowall on this device; since pfSense is based on m0nowall, I googled around to find a way to install pfSense on the device and found several threads on the pfSense forums. It took me a little while to comb through thousands of posts to find a proper way to go about this, and some more time was spent troubleshooting the issues I faced during the installation and configuration. So I'm putting everything I found in this post, to save you the time spent googling around. This should work for all the other Firebox models as well.
TeraTerm Pro Web – Enhanced Telnet/SSH2 Client – Download
The firebox X700
This is basically a small x86 PC: an Intel Celeron CPU running at 1.2GHz with 512MB RAM. The system boots using a CF card with the WatchGuard firmware. The custom Intel motherboard used in the device does not include a VGA or DVI port, so we have to use the serial port for all communications with the device.
There are several methods to run pfsense on this device.
HDD
Install pfSense on a PC and plug the HDD into the Firebox.
This requires a bit more effort because we need to change the boot order in the BIOS, and it's kinda hard to find IDE laptop HDDs these days
CF card
This is a very straightforward method. We are basically swapping out the CF card already installed in the device and booting pfSense from it.
In this tutorial we are using the CF card method
Installing PFsense
Download the relevant pfsense image
Since we are using a CF card, we need the pfSense version built for embedded devices. The NanoBSD version is built specifically for CF cards or other storage media that have a limited read/write life cycle. Since we are using a 4GB CF card, we are going to use the 4G image.
Flashing the nanoBSD image to the CF card
Extract the physdiskwrite program and run PhysGUI.exe. This software is written in German, I think, but operating it is not that hard
Select the CF card from the list.
Note: if you are not sure about the disk device ID, use diskpart to determine it
Load the image file: right-click on the disk and choose "Image laden > öffnen" (Load image > Open)
Select the image file from the "open file" window; the program will prompt you with the following dialog box
Select the "remove 2GB restriction" option and click "OK". It will warn you about the disk being formatted (I think); click yes to start the flashing process. A CMD window will open and show you the progress
Installing the CF card on the Firebox
Once the flashing process is completed, open up the Firebox and Remove the drive cage to gain access to the installed CF Card
Remove the protective glue and replace the card with the new CF card flashed with pfsense image.
Booting up and configuring PFsense
Since the Firebox does not have any way to connect a display or other peripherals, we need to use a serial connection to communicate with the device
Install “teraTerm pro web” program we downloaded earlier.
I tried using PuTTY and many other telnet clients, but they didn't work properly
Open up the terminal window
Connect the firebox to the PC using the serial cable, and power it up
Select "Serial", choose the COM port the device is connected to, and click OK (you can check the port in Device Manager)
Many other tutorials say to change the baud rate, but the defaults worked just fine for me
Since we already flashed the PFsense image to the CF card we do not need to install the OS
By now the terminal window should be showing the pfSense configuration prompts, just as with a normal fresh install.
It will ask you to set up VLANs and assign the WAN, LAN, and OPT1 interfaces. On the X700 the interface names are as follows
Please refer to pfsense Docs for more info on setting up
After the initial config is completed, you no longer need the console cable and Tera Term; you will be able to access pfSense via the web interface and good ol' SSH on the LAN IP
Additional configuration
Enabling the LCD panel
All Firebox units have an LCD panel in front. We can use the pfSense LCDproc-dev package to enable it and display various information
Install the LCDproc-dev Package via the package Manager
Go to Services > LCDProc
Set the settings as follows
Hope this article helped you guys. Don't forget to leave a comment with your thoughts
Well, I think the title pretty much speaks for itself.. but anyhow… Crucial released a new firmware for the M4 SSDs, and apparently it's supposed to make the drive 20% faster… I updated mine with no issues, and I didn't brick it, so it's all good here hehee..
I looked up some benchmarks from reviews at the time of release and compared them with the benchmarks I did after the FW update; I do get around a 20% increase, just like they SAY !!! Crucial's official release notes:
“Release Date: 08/25/2011
Change Log:
Changes made in version 0002 (m4 can be updated to revision 0009 directly from either revision 0001 or 0002):
Improved throughput performance. Increase in PCMark Vantage benchmark score, resulting in improved user experience in most operating systems.
Improved write latency for better performance under heavy write workloads.
Faster boot up times.
Improved compatibility with latest chipsets.
Compensation for SATA speed negotiation issues between some SATA-II chipsets and the SATA-III device.
Improvement for intermittent failures in cold boot up related to some specific host systems."
To install this via a pen drive without wasting a blank CD.. I know they are like really, really cheap, but think!!!! how many of you have blank CDs or DVDs with you nowadays???
To do this we are gonna use a nifty lil program called UNetbootin; of course you can use this to boot any Linux distro from a pen drive. It's very easy actually; if you need help, go check out the guides on the UNetbootin website
* Run the program
* Select the DiskImage radio button (as shown on the image)
* Browse and select the ISO file you downloaded from Crucial
* Type – USB Drive
* Select the drive letter of your pen drive
* Click OK!!!
reboot
* Go to the BIOS and put your SSD into IDE (compatibility) mode ** this is important
* Boot from your pen drive
* Follow the instructions on screen to update
and Voila
****remember to set your SATA controller to AHCI again in Bios / EFI ****
Update***
SATA 3 Benchmark.
SATA 2 Benchmark
Well, I messed around with some benchmark programs; here are the results