I just got back from a visit to the largest of the datacenters used by Fedora Infrastructure, and I thought I would share a bit about why we do such visits and what we do on them.

Fedora Infrastructure has at least one machine at a number of sites, but we have one main datacenter (sponsored by our main sponsor, Red Hat) that houses all of our build system machines, our fedorainfracloud OpenStack cloud, and most of our main servers. This site does have some on-site folks who can handle most of our day-to-day hands-on needs, but sometimes we have things that need a lot of coordination or extra time to complete, so we try to ‘save up’ those things for one of our on-site visits.

(Side note: If you are interested in donating machines/colocation space for the Fedora Project, please do mail admin@fedoraproject.org. We would love to hear from you!)

On this particular visit last week we got quite a few things done:

  • Package maintainers will be happy to hear we racked a new HP Moonshot chassis. This will hopefully soon allow us to run armv7 builder VMs on aarch64 virthosts, which should vastly decrease build times (there’s a rough sketch of the idea just after this list).
  • We moved 5 machines to a new rack to be used for testing storage solutions. We want to see if we can move to Gluster or other open source storage, and these machines will be used to test those solutions out as they mature.
  • We added memory to a number of cloud compute nodes, allowing us to run more and larger cloud instances (and have more copr builders).
  • We added a new EqualLogic storage device to our cloud network. It’s smaller than the one we have now, but it will allow us to test things and also give us some more cloud storage room.
  • We updated the firmware on some power supplies (yes, power supplies have firmware too!). This involved an elaborate dance of moving power supplies around between various machines to get them all upgraded (which would have been very tedious to do remotely).
  • Moved an old remote KVM and added a new one. We don’t tend to use these all that much, but they are very handy for machines that don’t have proper out-of-band management.
  • Pulled tons and tons of unused cables out of all our racks. On-site folks don’t tend to want to remove cables, only add new cables for new devices. There were a bunch of old cables that were connected only to a switch, serial port, or power device and were no longer needed.
  • Set up a POWER7 machine (many thanks to IBM) in our cloud network. As soon as we have it installed and configured, we should be able to offer ppc64 and ppc64le cloud instances and also do copr builds for those arches locally instead of against a remote server.
  • Updated our records of which servers are on which serial, power, and switch ports. This information is very important when making remote requests or power cycling servers, so you don’t accidentally get the wrong one (there’s a small example of what such a record might look like after this list).
  • Pulled out a few older servers we were no longer using.
  • Racked a few new servers we will be using for releng/buildsystem stuff.
  • Checked all the servers for any error lights or alarms. Cleared a few we had already fixed, and fixed a few we didn’t know about yet.
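
For the curious, here is roughly what the armv7-on-aarch64 idea mentioned above boils down to: an aarch64 virthost that can run 32-bit armv7 guests under KVM. The little Python sketch below (using the libvirt Python bindings) just asks a host whether it advertises that capability. It’s only an illustration of the concept, not the actual tooling we use to provision builders, and the connection URI is an assumption about a typical local setup.

    import libvirt                      # libvirt Python bindings
    import xml.etree.ElementTree as ET

    # A read-only connection to the local hypervisor is enough for a
    # capability check. "qemu:///system" is the usual local URI; adjust
    # it for your own setup.
    conn = libvirt.openReadOnly("qemu:///system")

    # getCapabilities() returns an XML document describing the host and
    # the guest architectures it can run.
    caps = ET.fromstring(conn.getCapabilities())

    # On an aarch64 virthost we're looking for a <guest> entry whose
    # <arch name='armv7l'> offers a KVM domain type.
    supported = any(
        arch is not None
        and arch.get("name") == "armv7l"
        and arch.find("domain[@type='kvm']") is not None
        for arch in (guest.find("arch") for guest in caps.findall("guest"))
    )

    print("armv7l KVM guests supported:", supported)
    conn.close()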
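
And since I mentioned the serial/power/switch port records: here is a tiny, purely hypothetical Python sketch of the kind of mapping involved. The hostnames and ports below are made up, and the real records live in our inventory rather than in a script; the point is only that you look a box up before asking anyone to pull a power plug.

    from dataclasses import dataclass

    @dataclass
    class RackRecord:
        hostname: str
        serial_port: str   # console server and port
        pdu_outlet: str    # power distribution unit and outlet
        switch_port: str   # network switch and port

    # Entirely made-up example entry.
    records = {
        "buildvm-example01": RackRecord(
            hostname="buildvm-example01",
            serial_port="console01 port 12",
            pdu_outlet="pdu03 outlet 7",
            switch_port="switch02 port 24",
        ),
    }

    def outlet_for(hostname: str) -> str:
        """Look up the PDU outlet before asking anyone to power cycle a box."""
        return records[hostname].pdu_outlet

    print(outlet_for("buildvm-example01"))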

All in all, a good week’s work that should set us up well for the next few months. I would like to thank Steven Smoogen (for being on-site and getting all the above done with me) and Patrick Uiterwijk (who held down the Fedora Infrastructure fort while we were off at the datacenter and helped get all of the above done from the remote side), as well as Matthew Galgoci (Red Hat networking guru extraordinaire) and our on-site folks, Jesse Iashie and Pedro Munoz. It’s a pleasure to work with you all.