Just heading to the airport for a day of travel, then a late lunch with my sister in Tempe, then to the FUDCon hotel/party central.
Looking forward to seeing lots of Fedora folk.
I recently went to Brno, CZ for CPE (Community Platform Engineering) meetings and then devconfcz 2019 and thought I would share my take on both of them.
The trip to and from Brno is always a long one for me. I’m currently based in Oregon, US, so my journey is:
- Drive to Portland airport (PDX)
- Flight from Portland to Amsterdam (AMS) (a 9-11 hour flight)
- Flight to Prague (PRG) (usually a 1-2 hour flight)
- Bus to the train station (30-40 min)
- Train to Brno (2-3 hours)
And then the same in reverse on the way back, with all the associated timezone issues. 🙂 I am very happy about the direct Amsterdam flight, so I don’t have to change planes in London or Frankfurt or somewhere.
A short word about the CPE team. We are a team in Red Hat (formerly Fedora Engineering) that works on Fedora and CentOS. We have some application developer folks who write and fix our custom applications (bodhi, pagure, release-monitoring, etc.) as well as a number of operations folks who keep the Fedora and CentOS infrastructures running smoothly.
We spent the week of Jan 21st meeting up and discussing plans for the year as well as ways we could be more responsive to the community and better handle our (large) workflow.
- 2019-01-21: Brian Stinson went over the CentOS CI setup we have and we identified projects that we care about that didn’t have any CI and worked on fixing them up. We got a bunch more projects with (albeit simple) tests running.
- 2019-01-22: We talked about ways to be more efficient with our workload. We decided to try pairing an ops person with a dev person on deployments to avoid delays. We talked about doing more pair work. We talked about changing our status reports. Then we wrote up all the planned work we know of in the coming year, prioritized it, and assigned owners to write it up. We should have this info up on the wiki (or somewhere) before too long.
- 2019-01-23: We talked about Rawhide gating and changed our plan to be simpler than it had been. We went over the fedmsg to fedora-messaging changeover (see the sketch after this list). We moved some apps to OpenShift and fedora-messaging. More to come.
- 2019-01-24: We had some meetings with some internal Red Hat teams on how we could help each other by doing things first in Fedora and how best to do that. We worked some more on priorities and upcoming tasks.
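For anyone curious what the fedmsg to fedora-messaging changeover looks like from an application’s point of view, here is a minimal publishing sketch using the fedora-messaging API. The topic and body are made up for illustration, not a real Fedora message schema:

```python
# Minimal sketch of publishing a message with fedora-messaging (instead of fedmsg).
# The topic and body here are made-up examples, not a real Fedora schema.
from fedora_messaging import api, message

msg = message.Message(
    topic="org.example.myapp.build.complete",   # hypothetical topic
    body={"package": "foo", "status": "success"},
)
api.publish(msg)  # sends via whatever broker fedora-messaging is configured to use
```

Consumers work similarly: instead of registering fedmsg hub consumers, you point the fedora-messaging consumer command at a callback and filter on topics in its config.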
Then it was time for devconfcz. Always a great conference. Tons of talks to see and tons of people to talk to in the hallway track. A few of the talks I really wanted to see were already full by the time I got there, but I did see some interesting ones.
There was a lot of discussion about EPEL8 in the hallway track, but luckily we had a number of the people who know how modularity works there to quash plans that wouldn’t work and to propose ones that would. At this point the plan is to make an EPEL8 beta that is just the “ursine” packages and test that out while working on modular EPEL8. For modular EPEL8 we are going to look at something that takes the modular RHEL repos and splits them out into one repo per module. Then we can hopefully get mbs to use these external modules when it needs them as build requirements, and we can also decide what modules we want in the ‘ursine’ buildroot. This is all handwavy and subject to change, but it is a plan. 🙂
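To make the “one repo per module” idea a little more concrete, here is a rough sketch of my own (not the actual tooling, and the file name is just an example) that walks the modulemd documents in a repo’s modules.yaml and groups the RPM artifacts per module stream, which you could then feed to createrepo_c one directory at a time:

```python
# Rough sketch: group RPM artifacts by module from a repo's modules.yaml.
# Illustrative only; the real splitting would be done by proper tooling.
import collections
import yaml

def artifacts_by_module(modules_yaml_path):
    """Return {"name:stream": [rpm NEVRAs, ...]} from a modulemd YAML file."""
    modules = collections.defaultdict(list)
    with open(modules_yaml_path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc or doc.get("document") != "modulemd":
                continue
            data = doc["data"]
            key = f'{data["name"]}:{data["stream"]}'
            modules[key].extend(data.get("artifacts", {}).get("rpms", []))
    return modules

if __name__ == "__main__":
    for module, rpms in sorted(artifacts_by_module("modules.yaml").items()):
        print(f"{module}: {len(rpms)} rpms")
```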
Smooge and I gave our EPEL talk and I think it went pretty well. There were a lot of folks there at any rate and we used up the time no problem.
As always after a chance to meet up with my co-workers and see tons of interesting talks I’m really looking forward to the next few months. Lots and lots of work to do, but we will get it done!
As those of you who read the https://communityblog.fedoraproject.org/state-of-the-community-platform-engineering-team/ blog post know, we are looking at changing workflows and organization around in the Community Platform Engineering team (of which I am a member). So, I thought I would share a few thoughts from my perspective and hopefully enlighten the community more on why we are changing things and what that might look like.
First, let me preface my remarks with a disclaimer: I am speaking for myself, not our entire team or anyone else in it.
So what are the reasons we are looking for change? Well, there are a number of them, some of them inter-related, but:
- I know I spend more time on my job than any ‘normal’ person would. That’s great, but we don’t want burnout or heroic efforts all the time. It’s just not sustainable. We want to get things done more efficiently, but also have time to relax and not have tons of stress.
- We maintain/run too many things for the number of people we have. Some of our services don’t need much attention, but even so, we have added lots of things over the years and retired very few.
- Humans suck at multitasking. There’s been study after study showing that for the vast majority of people, it is MUCH more efficient to do one task at a time, finish it, and then move on. Our team gets constant interruptions, and we currently handle them poorly.
- It’s unclear where big projects are in our backlog. When other teams approach us with big items, it’s hard to show them when we might work on the thing they want us to do, or what’s ahead of it, or what priority things have.
- We have a lot of ‘silos’. Just the way the team has worked, one person usually takes lead on each specific application or area and knows it quite well. This however means no one else does, no one else can help, they can never win the lottery, etc.
- Things without a ‘driver’ sometimes just languish. If there is not someone (one of our team or even a requestor) pressing a work item forward, sometimes it just never gets done. Look at some of the old tickets in the fedora-infrastructure tracker. We totally want to do many of those, but they never get someone scheduling them and doing them.
- There’s likely more…
So, what have we done lately to help with these issues? We have been looking a lot at other similar teams and how they became more efficient. We have been looking at various ‘agile’ processes, although I personally do not want to cargo-cult anything: if a process calls for a ceremony that makes no sense for us, we should not do it.
- We set up an ‘oncall’ person (switched weekly). This person listens for pings on IRC, tickets or emails to anyone on the team and tries to intercept and triage them. This allows the rest of the team to focus on whatever they are working on (unless the oncall person deems something serious enough to bother them). Even if you stop only to tell someone that you don’t have time and are busy with something else, the cost of swapping that context out and back in already makes things much worse for you. We will of course still be happy to work with people on IRC, just schedule time in advance in the associated ticket.
- Ticket or it doesn’t exist. We are still somewhat bad about this, but the idea is that every work item should be a ticket. Why? So we can keep track of the things we do, so oncall can triage them and assign priority, so people can look at tickets when they have finished a task rather than being interrupted in the middle of one, so we can hand off items that are still being worked on and coordinate, and so we know who is doing what. And on and on.
- We are moving our ‘big project’ items to be handled by teams that assemble for that project. This includes a gathering info phase, priority, who does what, estimated schedule, etc. This ensures that there’s no silo (multiple people working on it), that it has a driver so it gets done and so on. Setting expectations is key.
- We are looking to retire, outsource or hand off to community members some of the things we ‘maintain’ today. There are a few things that just make sense to drop because they aren’t used much, or we can just point at some better alternative. There’s also a group of things that we could run, but that we could instead outsource to another company that focuses on that application and have them do it. Finally, there are things we really like and want to grow, but we just don’t have any time to work on them. If we hand them off to people who are passionate about them, hopefully they will grow much better than if we were still the bottleneck.
Finally, where are we looking at getting to?
- We will probably be setting up a new tracker for work (which may not change anything for our existing trackers; we may just sync from those to the new one, see the rough sketch after this list). This is to allow us to get lots more metrics and have a better way of tracking all this stuff. This is all still handwavy, but we will of course take input on it as we go and adjust.
- Have an ability to look and see what everyone is working on right at a point in time.
- Much more ‘planning ahead’ and seeing all the big projects on the list.
- Have an ability for stakeholders to see where their item is and what is ahead of it in priority, and be able to negotiate to move things around.
- Be able to work on single tasks to completion, then grab the next one from the backlog.
- Be able to work “normal” amounts of time… no heroics!
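As a rough illustration of the “sync from existing trackers” idea mentioned above: this is my own sketch, not a decided design, and push_to_new_tracker() is a hypothetical placeholder, but something as simple as polling the Pagure API for open issues and mirroring them into whichever new tracker we pick would be a starting point:

```python
# Rough sketch: pull open issues from a Pagure tracker so they could be
# mirrored into a new work tracker. push_to_new_tracker() is a hypothetical
# placeholder for whatever tooling we end up choosing.
import requests

PAGURE_API = "https://pagure.io/api/0"

def open_issues(repo):
    """Yield open issues for a Pagure repo, following pagination if present."""
    url = f"{PAGURE_API}/{repo}/issues"
    params = {"status": "Open"}
    while url:
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data["issues"]
        url = data.get("pagination", {}).get("next")
        params = {}  # a 'next' URL already carries its query parameters

def push_to_new_tracker(issue):
    # Placeholder: create or update the matching item in the new tracker.
    print(f"would sync: #{issue['id']} {issue['title']}")

if __name__ == "__main__":
    for issue in open_issues("fedora-infrastructure"):
        push_to_new_tracker(issue)
```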
I hope everyone will be patient with us as we do these things, provide honest feedback to us so we can adjust and help us get to a point where everyone is happier.
I’ve been spending some of my time off in the last few days pondering replacing my old reliable home server with something new and shiny. I figured this might be a good time to write up some thoughts around this.
So, the first question that I am sure leaps to mind for people is: a home server? Why on earth do you want one of those? Move it to “The Cloud”! Of course doing so would indeed have a number of advantages:
- Better bandwidth
- No need to hassle with hardware, someone else would do that
- Less noise and power usage at home
- Depending on how deep in the clouds you go: less hassle running services
On the other hand it has real disadvantages to me:
- No “real life” home setup to test/try/figure things out.
- Never really 100% sure who has/owns/can do things with your data
- Losing the ability to mess with hardware, which can be kind of fun.
- I have a small list of close friends who I provide services to. It’s fun to keep in touch with them that way and have something I can do for them.
- Losing the ability to mess with running a bunch of services, which can also be kind of fun.
- Buying hardware once instead of paying a cloud provider recurring fees over and over again seems like it could be a win, depending on the fees.
Someday I might give up and move things, but it hasn’t fully come to that yet. Email has been slowly getting more difficult to run on a non-gigantic domain, but I’ve managed to cope so far, so I will keep going until it becomes completely untenable. I really like having my data close by and knowing that I can go fix a problem when it happens. It’s also been a while, but I want to look at spinning up a home OpenShift instance so I can dig into it more and learn more about the low-level parts of it. I might need to use OKD or k3s or something instead of OpenShift, but it should let me find out more about how k8s works.
All that background said, let’s look at my current home server. It’s a Dell PowerEdge C1100/CS24-TY. I got it from https://deepdiscountservers.com long, long ago, along with another identical server. You can get really great stuff there; it’s basically all the old compute gear that cloud companies have aged out, so it’s usually older but has tons of memory, disk and CPU. The ones I got have 72GB of memory, 24 CPU threads, and 4 3.5″ hot swap drive bays in the front. The second one I used for a long time as a test machine, but its CPU is slightly too old to do power management, so it’s really, really loud. The main server does do some power management, but it’s pretty loud too. In my current house I have a closet for computer stuff, but even with the door closed I am near enough to it that I can hear the server running. Of course I can also usually hear the fridge in the kitchen running too. The drives I currently have are 3TB 7200rpm Hitachis, which have also been quite reliable. The server has a PCI card in it for some more network ports. It serves as my main firewall / virthost / storage server.
So, why replace it? Well, it was made in the fall of 2012. Yes, that’s 10 years old now, which is ages in computer hardware. It’s slow. The CPU is pretty slow and the storage is super slow: it’s running the 7200rpm spinning disks on a 3Gb/s SATA bus (the drives can do 6Gb/s). Taking backups, moving a bunch of things around or running a postgresql vacuum just takes ages. It’s also loud. Not earthshatteringly so, and like I mentioned our fridge is also kinda loud, but there are a lot of times when the fridge compressor is off and I can hear the server distinctly. Finally, it’s fun to look at things and then install and assemble them. Computer geeks gotta geek. Also, this is perhaps a chance for me to play with some things I haven’t yet, like moving over to an AMD CPU instead of Intel, or RAID on NVMe, etc.
So, my first thought was to just get another rackmount from deepdiscountservers, which would work fine, but it would basically just be a newer version of what I have now with more memory and CPUs. The CPUs would be Intel, and while newer servers are likely to do throttling better, I don’t think the noise would be all that much lower. Rack mount servers are just not designed to be quiet.
Next, I poked around on the net and ran across silentpc.com, which has some interesting computers on offer. I focused on the “Powerhouse Ryzen PC” box. It’s a tower case, which is not ideal, but I’m sure I can fit it in somewhere. It has a Ryzen CPU, a power supply that can power completely off when things are idle, super quiet fans, etc. It’s got enough room that I can move my existing 4 drives over to it (and add in a 5th that I was keeping as a spare). Only 2 NVMe slots are available, but… that takes me into an aside:
Most motherboards I have seen these days have just a few NVMe slots on them. However, they make PCIe cards that have NVMe slots (one, two, or four). The four-slot NVMe cards are interesting: you need a motherboard that supports “PCIe bifurcation” on the slot you are putting the card in. If you don’t have that, you can only see one drive and that’s it. If you do, the motherboard takes the x16 slot and carves it into 4 x4 slots and you see all the drives.
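As a quick sanity check once the card is in (my own little sketch, nothing specific to any particular board), you can count how many NVMe controllers the kernel actually sees under /sys/class/nvme. With bifurcation working, a fully populated quad carrier card should show one controller per drive:

```python
# Quick sanity check: list the NVMe controllers the kernel can see.
# With PCIe bifurcation working, a quad M.2 carrier card should show
# one controller per populated slot.
import glob
import os

def visible_nvme_controllers():
    controllers = []
    for path in sorted(glob.glob("/sys/class/nvme/nvme*")):
        name = os.path.basename(path)
        model = ""
        model_path = os.path.join(path, "model")
        if os.path.exists(model_path):
            with open(model_path) as f:
                model = f.read().strip()
        controllers.append((name, model))
    return controllers

if __name__ == "__main__":
    ctrls = visible_nvme_controllers()
    print(f"{len(ctrls)} NVMe controller(s) visible:")
    for name, model in ctrls:
        print(f"  {name}: {model}")
```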
From what I have been able to gather, the Powerhouse Ryzen PC has a motherboard with 1 PCIe slot that can do bifurcation (but I asked them in email to make sure). If so, then I can get it with 2 NVMe drives and RAID 1 them for now, move the 3.5″ drives over with most of my data, and then down the road I can get a 4-slot PCIe NVMe card, stick 4 NVMe drives in there and RAID 6 them with the 2 on the motherboard, and then perhaps retire the spinning drives. 🙂 Sadly, their web interface seems to only offer NVIDIA cards (which I really don’t want), but I asked them in email and they can indeed do other cards. So, I am waiting to hear back, but I think this might work out nicely for a new box. If it does, I’m also thinking about moving the existing 1U boxes out to the garage and seeing if I can set them up with wake-on-LAN or the like so I can use them if I need to test something.
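For my own planning, the usable-capacity math works out roughly like the sketch below. The drive sizes are made up just for illustration, since I haven’t picked the NVMe drives yet:

```python
# Rough usable-capacity math for the RAID plans above (equal-size drives assumed,
# drive sizes are made-up examples).
def raid1_usable_tb(drive_tb):
    # RAID 1 mirrors everything: usable space is one drive's worth.
    return drive_tb

def raid6_usable_tb(drive_tb, drives):
    # RAID 6 spends two drives' worth of space on parity.
    return drive_tb * (drives - 2)

print(raid1_usable_tb(2))       # 2x 2TB NVMe in RAID 1 -> 2 TB usable
print(raid6_usable_tb(2, 6))    # 6x 2TB NVMe in RAID 6 -> 8 TB usable
```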
Looking forward to tinkering with it (or looking more if this one doesn’t pan out).