I thought I would write up a quick post to fill folks in on our OpenShift setup in Fedora Infrastructure, what we are doing with it now, and what we hope to do with it in the coming years.
For those that are not aware, OpenShift is the Red Hat version of OKD, which is an open source container application platform. That is, it’s a way to deploy and manage application containers. Each of your applications can use a known framework to define how it is built, managed and run. It’s pretty awesome. If you need to move your application somewhere else, you can just export it and import it into another OpenShift/OKD and away you go. Recent versions also include monitoring and logging frameworks. There is also a very rich permissions model, so you can basically give as much control over a particular application as you like. This means the developer(s) of an application can also deploy/debug/manage it without needing any ops folks around for that.
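To make that a little more concrete, here is a minimal sketch of the kind of definition an application can carry with it, in this case a build definition. It is not one of our actual Fedora apps; the app name, git URL, and builder image are hypothetical placeholders.

```yaml
# Minimal sketch of an OpenShift build definition (not one of our real apps).
# "myapp", the git URL, and the builder image are placeholders.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://example.com/myapp.git   # hypothetical source repository
  strategy:
    sourceStrategy:                         # source-to-image (s2i) build
      from:
        kind: ImageStreamTag
        name: python:3.6
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest                    # the built image lands here
```

A matching deployment definition describes how the resulting image is run, and since these are just plain objects in the cluster, exporting them and importing them into another OpenShift/OKD instance is enough to move the application.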
Right now in Fedora Infrastructure we are running two separate OpenShift instances: one in our staging environment and one in production. You may note that OpenShift changes the idea of needing a staging environment, since you can run a separate staging instance or just test one container of a new version before rolling it out to all of production. However, our main use for the staging OpenShift is not so much staging applications as having another OpenShift cluster on which to test upgrades and changes.
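As a rough illustration of that “test one container of the new version first” idea (a sketch, not our actual configuration; all names and tags here are hypothetical), a one-replica canary deployment could track a candidate image tag while the main deployment keeps tracking the stable tag:

```yaml
# Hypothetical sketch: a single-replica canary DeploymentConfig tracking the
# "candidate" tag of an image stream, while the main deployment stays on
# "stable". Nothing here is taken from the actual Fedora Infrastructure setup.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp-canary
spec:
  replicas: 1                      # a single test copy of the new version
  selector:
    app: myapp
    track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
      - name: myapp
        image: myapp:candidate
        ports:
        - containerPort: 8080
  triggers:
  - type: ImageChange              # redeploy the canary when candidate changes
    imageChangeParams:
      automatic: true
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:candidate
```

Promoting the new version is then just a matter of retagging candidate to stable once the canary looks healthy.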
In our production instance we have a number of applications already: bodhi (the web part of it; there is still a separate backend for updates pushes), fpdc, greenwave, release-monitoring, the silverblue web site, and waiverdb. There are more in staging that we are working on getting ready for production.
One of the goals we had from the sysadmin side of things was to be able to easily and completely re-install the cluster and all applications, so we have made some setup choices that differ from what others might do. First, in order to deploy the cluster, our ansible playbooks include one that creates and provisions a ‘control’ host. On this control host we pull an exact version of the openshift-ansible git repository and run ansible from there, using an inventory we generate and that specific openshift-ansible checkout. This allows us to provision a cluster exactly the same way every time. Once the cluster is set up, our ansible repo has the needed definitions for every application and can provision them all with a few playbook runs. Of course this means no containers with persistent storage in them (or very few, using NFS), but so far that’s fine. Most of our applications store their state in a database, and we just run that outside of the cluster.
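As a rough sketch of what that control-host step can look like (simplified, with hypothetical host names, paths, and version pins rather than our real playbooks):

```yaml
# Simplified sketch of a control-host play; the host, paths, and the pinned
# openshift-ansible version are placeholders, not the real Fedora playbooks.
- name: prepare the control host and deploy the cluster
  hosts: os-control01.example.org
  tasks:
    - name: check out a pinned version of openshift-ansible
      git:
        repo: https://github.com/openshift/openshift-ansible.git
        dest: /srv/openshift-ansible
        version: openshift-ansible-3.11.51-1   # pin an exact release tag

    - name: generate the cluster inventory from a template
      template:
        src: cluster-inventory.j2
        dest: /srv/cluster-inventory

    - name: run the prerequisites playbook
      command: >
        ansible-playbook -i /srv/cluster-inventory
        /srv/openshift-ansible/playbooks/prerequisites.yml

    - name: deploy the cluster
      command: >
        ansible-playbook -i /srv/cluster-inventory
        /srv/openshift-ansible/playbooks/deploy_cluster.yml
```

Because the checkout is pinned to an exact release and the inventory is generated, rerunning the play should give the same cluster every time, which is exactly the re-install property described above.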
In the short term we plan to move as many applications to OpenShift as we can and as makes sense, since it’s a much easier way to manage and deploy things. We also intend to set things up so our prod cluster can run staging containers (and talk to all the right things, etc.). Additionally, we hope to run a development instance in our new private cloud; we hope to open that one more widely to contributors for developing applications or proofs of concept. We would also like to get some persistent storage set up for our clusters, but it’s unclear right now what that would be.
Longer term, we hope to run other clusters in other locations so we can move applications around as it makes sense, and also for disaster recovery.
I’d have to say that dealing with OpenShift has been very nice. There have been issues, but they have all been logical and easy to track down, and the way things are set up just makes sense. Looking forward to 4.0!