To some, cloud is an excuse to introduce “black box” processes that lock users into their services. But they can’t really come right out and say that. Instead they distract from their approach with fanciful names and tell us that the cloud is full of magic and wonder that we don’t need to understand. This type of innovation is exciting to some, but to me, combining innovation with a lock-in approach is depressing. In the past, we’ve seen it at the operating system level and the hypervisor level. We’ve also seen open source disrupt lock-in at both levels and we are going to see the same thing happen in the cloud.
When we started designing and building OpenShift, we wanted to provide more than just a good experience to end users that, in turn, locked them in to our service. One of the early design decisions we made on OpenShift was to utilize standards as much as we could and to make interactions transparent at all levels. We did want the user experience to be magical, but also completely accessible to those who wanted to dig in. To demonstrate this, let’s walk through the deployment process in OpenShift - arguably the most magical part of the entire offering…
As we were designing a PaaS service focused on developers, our first goal was to make the deployment process as natural as possible for developers. For most developers, the day-to-day process goes something like: code, code, code, commit. For those already questioning this process, let me speak on behalf of the developer in question by saying
Tests?! Of course I’ve already written the tests! They were in the third ‘code’!
Anyway, we wanted to plug into that process and to do that we chose git. The reason for selecting git over more centralized source code management tools like subversion was that the distributed nature of git allowed the user to have full control over their data. The user always had access to their entire historical repository and as developers, we thought that was a critical requirement. Given that, we standardized on git as the main link between our users’ code and OpenShift.
Now let’s look at what that development process might look like in practice. First, you start off with the code, code, commit part:
vi <file of your choice> # make earth shattering changes
git commit -a -m "My earth shattering comment"
The next part of the process for those familiar with git is the publish process. You run a ‘push’ command to move your code from your local repository to your distributed clones. So when you run:
git push
Your code is transferred to OpenShift and automatically deployed to your environment. Regardless of whether code needs to be compiled, tests need to be run, dependencies need to be downloaded, a specific packaging spec needs to be built – it all happens on the server side with this one command. To do this we utilize a git hook to kick off the deployment process. Wait – I know what you are thinking…
What?! Just a git hook?! This is the cloud baby! Shouldn’t this be custom compiling my code into a Zeus Hammer to perform a magical Cloud Nuclear transfer?!!
If you ask us, a git hook works just fine because it’s what you would probably do yourself. We simply use a git hook that kicks off a deployment script on the server. That script invokes a series of scripts (called hooks) representing various steps in the deployment process. Some of the hooks are provided by the cartridge that your application is using and some of the scripts are provided by the application itself. This approach lets the cartridge provide base functionality that can be further customized by the application.
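To make the idea concrete, here is a minimal sketch of that dispatch logic. This is illustrative, not the actual OpenShift script: it assumes a fixed list of step names and that the cartridge’s hook runs before the application’s override for each step.

```shell
# Hypothetical sketch of the hook dispatcher (names are illustrative,
# not the real OpenShift internals).
HOOK_STEPS="pre_build build deploy post_deploy"

run_hooks() {
    cartridge_hooks="$1"   # hooks shipped by the cartridge
    app_hooks="$2"         # hooks supplied by the application
    for step in $HOOK_STEPS; do
        # Run the cartridge's hook for this step, if it provides one...
        if [ -x "$cartridge_hooks/$step" ]; then
            "$cartridge_hooks/$step"
        fi
        # ...then the application's hook for the same step, if present.
        if [ -x "$app_hooks/$step" ]; then
            "$app_hooks/$step"
        fi
    done
}
```

Each individual hook stays tiny; the dispatcher just walks the sequence and skips any step that neither side cares about.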
First let’s talk about the cartridge hooks. Having cartridge-specific hooks is important because each cartridge needs to do different things in its deployment process. For example, when a Java cartridge detects a deployment, we want to do a Maven build, but when a Ruby cartridge detects a deployment, it should execute Bundler. The cool part is that each individual cartridge can override anything it needs to in the default process.
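A hypothetical illustration of that idea (the real cartridges ship their own hook scripts): a build hook can decide what “build” means by inspecting the pushed repository, much as the Ruby and Java cartridges do.

```shell
# Illustrative only -- not actual cartridge code. Pick a build tool
# based on what the pushed repository contains.
detect_build_tool() {
    repo="$1"
    if [ -f "$repo/Gemfile" ]; then
        echo bundler    # Ruby territory: a real hook would run `bundle install`
    elif [ -f "$repo/pom.xml" ]; then
        echo maven      # Java territory: a real hook would run `mvn package`
    else
        echo none       # nothing to build; ship the bits as-is
    fi
}
```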
Let’s look at how the Ruby cartridge implements this. We can look at the hooks the ruby-1.9 cartridge overrides in its source. When you use the Java cartridge, it leverages Maven in the build process. You can implement the pieces that are right for your cartridge where it makes sense and still utilize the generic process everywhere else. In isolation, each individual script is really quite simple. In aggregate though, all those extensions can become extremely powerful and do much of the heavy lifting on behalf of the users.
But what if you want to change the default behavior for a specific application? No problem! You have a collection of application-level action hooks at your disposal. You could put your own code in pre_build, build, deploy, post_deploy or wherever else makes sense. These are found in your application in ~/.openshift/action_hooks and are invoked just like the cartridge hooks as part of the deployment process. What you choose to do with these hooks is your decision. Put some code in them and they will get called at each step in the deployment process. This lets you not only leverage the power of a customized cartridge, but also tweak and tune so things are just right for your application.
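As a concrete sketch, adding your own deploy hook is just a matter of dropping an executable script into your repository. The hook contents here are purely illustrative (put whatever your app needs at deploy time), and the path assumes the action_hooks directory lives at the root of your application’s git repository:

```shell
# Run from the root of your application's git repository.
# Create a hypothetical deploy-step hook; the echo stands in for real
# deploy-time work (seeding data, warming caches, etc.).
mkdir -p .openshift/action_hooks
cat > .openshift/action_hooks/deploy <<'EOF'
#!/bin/sh
echo "seeding sample data"
EOF
chmod +x .openshift/action_hooks/deploy   # hooks must be executable to run
```

Commit the script and push; it will then be invoked during the deploy step of each subsequent push.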
At the end of the day, harnessing the power of the cloud doesn’t need to lock you into a vendor. At OpenShift, we believe that transparency, standards and extensibility will make a process that stands the test of time. I hope this has provided some visibility into how the OpenShift deployment model works and has given you some insight into navigating the codebase. And if this has piqued your interest and you find yourself digging through more and more code, please reach out and get involved.