First full week of May infra bits 2025

This week was a lot of heads down playing with firmware settings and doing some benchmarking on new hardware. Also, the usual fires and meetings and such.
Datacenter Move
Spent a fair bit of time this week configuring and looking at the new servers we have in our new datacenter. We only have management access to them, but I still (somewhat painfully) installed a few with RHEL9 to do some testing and benchmarking.
One question I was asked a while back was about our use of Linux software raid over hardware raid. Historically, there were a few reasons we chose mdadm raid over hardware raid:
It's possible/easy to move disks to a different machine in the event of a controller failure and recover the data, or to replace a failed controller with a new one and have things transparently keep working. With hardware raid you need the exact same controller model and the same firmware version.
Reporting/tools are all open source for mdadm. You can tell when a drive fails, and you can easily re-add one, reshape, etc. With hardware raid you're stuck with some binary-only vendor tool, and every vendor's is different.
In the distant past, being able to offload raid work to a separate CPU was nice, but these days servers have vastly faster/better main CPUs, so software raid should actually perform better than hardware raid (barring different settings).
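As a sketch of that second point, the day-to-day mdadm workflow uses the same open source commands everywhere (the device names here are just illustrative examples, not our actual layout):

```shell
# Check array health -- works the same on any Linux box:
cat /proc/mdstat
mdadm --detail /dev/md0

# Replace a failed member drive:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md0 --add /dev/sdc1

# After moving the disks to a different machine, the array can be
# reassembled from the metadata stored on the disks themselves:
mdadm --assemble --scan
```

No vendor tool, same controller model, or matching firmware required.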
So, I installed one machine with mdadm raid and another with hardware raid and did some fio benchmarking. The software raid won overall. Hardware was actually somewhat faster on writes, but the software raid murdered it in reads. Turns out the default cache settings here were write-through for software and write-back for hardware, so the difference in writes seemed attributable to that.
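For anyone wanting to run a similar comparison, an fio job file along these lines would exercise both directions (this is a hedged sketch, not the exact job file used for these tests; the filename and parameters are illustrative):

```ini
; example fio job file -- adjust filename/bs/iodepth to taste
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=32
runtime=60
time_based
group_reporting

[randread]
rw=randread
filename=/dev/md0      ; or the hardware raid block device

[randwrite]
stonewall              ; wait for the read job to finish first
rw=randwrite
filename=/dev/md0
```

When comparing software vs hardware raid, it's worth checking that the cache policy (write-through vs write-back) matches on both sides, since that alone can swing the write numbers.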
We will hopefully finish configuring firmware on all the machines early next week, then the next milestone should be getting network to them so we can start bootstrapping up the services there.
Builders with >32-bit inodes again
We had a few builders hit the 'larger than 32-bit inode' problem again. Basically, btrfs starts allocating inode numbers at install time and never reuses them, and builders burn through a lot of them by making and deleting huge numbers of files during builds. When the inode numbers pass 2^32 (about 4.3 billion), i686 builds start to fail because they cannot get an inode. I reinstalled those builders and hopefully we will be ok for a while more again. I really am looking forward to i686 builds completely going away.
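A quick way to spot a builder drifting toward that limit is to look at the largest inode number currently allocated on the filesystem (a minimal sketch; `/tmp` here is just an example path, you'd point it at the builder's btrfs mount):

```shell
# Find the largest inode number on a filesystem. If it approaches
# 2^32, 32-bit (i686) processes start getting EOVERFLOW when they
# stat or create files.
max_inode=$(find /tmp -xdev -printf '%i\n' 2>/dev/null | sort -n | tail -1)
echo "largest inode seen: $max_inode"

# The 32-bit boundary that i686 builds trip over:
limit=4294967296
if [ "$max_inode" -ge "$limit" ]; then
    echo "WARNING: inode numbers have crossed 32 bits"
fi
```

Note this only shows inodes still in use; btrfs hands out numbers sequentially, so the real high-water mark can be higher than what any existing file shows.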
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114484593787412504