The New Cloud Business Model – Fake Support

I’ve noticed a growing trend over the last year with companies that are providing exciting new enterprise software: the promise of support with no real chance of being able to deliver on it. And unfortunately, for consumers trying to sort through all the new offerings out there, it can sometimes be difficult to separate all the marketing glitz and glamour from the reality. With OpenShift, Red Hat is able to stand behind the software that it distributes – they have deep expertise in every layer of the stack. Given that, it frustrates me when I see others claim the same model without the expertise – that approach is just taking advantage of customers who don’t do their homework before buying.

Let’s think about what would happen if more industries took this same approach – the medical profession for example.  Imagine what the conversation might be after your yearly check-up.

Doctor: Well, I’ve got some good news and some bad news. The good news is that you still look okay. The bad news is that there is something going on under the surface that you are going to want to figure out.

You: Okay… what exactly do you mean by ‘under the surface’?  Also, when you say that ‘I’ will need to figure this out, what do you mean?

Doctor: I mean something is going on underneath your skin.  What happens under there is basically a mystery to us – it’s not something we support.  That said, whatever is going on probably needs to be fixed so you’ll want to find someone that can do that.  We could try but we really don’t have any better odds than you in fixing the problem…

If a conversation like this is so unacceptable in other disciplines, why do we so readily accept it in software? Let’s take Platform as a Service (PaaS) for example. PaaS is a platform positioned to be the core application foundation in your company. It is tightly integrated with both the operating system (OS) and your application platforms. Those that say otherwise are either dreaming or trying to deceive you. That tight integration is what lets the PaaS platform do things so that you don’t have to. But many of the PaaS vendors in the market have limited experience across the OS and the application stacks. In almost all cases, the PaaS providers are going to have to rely on a separate company for the operating system distribution. In many cases, they are going to have to do the same for the application stacks.

What are these companies going to do when their customers hit issues in areas outside of the core PaaS software? Most of these guys aren’t active in the open source versions of the software, so I doubt they are going to do the fixes themselves. Don’t let them sell you on the ‘power of open source software’ unless they are involved enough to influence those changes. Maybe they will proceed with the same awkward conversation as the above example…

Now, maybe these providers have the ability to support all the things they promise. Maybe they have all the connections in the open source projects to maintain stable distributions themselves. This is what Red Hat does, but I don’t see too many others doing the same. At a minimum, you should check, because you might end up buying a product from a company whose business model is based on you not making that call for help…

Building an Open Source PaaS Deployment Model

To some, cloud is an excuse to introduce “black box” processes that lock users into their services.  But they can’t really come right out and say that.  Instead they distract from their approach with fanciful names and tell us that the cloud is full of magic and wonder that we don’t need to understand.  This type of innovation is exciting to some, but to me, combining innovation with a lock-in approach is depressing.  In the past, we’ve seen it at the operating system level and the hypervisor level.  We’ve also seen open source disrupt lock-in at both levels and we are going to see the same thing happen in the cloud.

When we started designing and building OpenShift, we wanted to provide more than just a good experience to end users that, in turn, locked them in to our service. One of the early design decisions we made on OpenShift was to utilize standards as much as we could and to make interactions transparent at all levels. We did want the user experience to be magical but also completely accessible to those who wanted to dig in. To demonstrate this, let’s walk through the deployment process in OpenShift – arguably the most magical part of the entire offering…

As we were designing a PaaS service focused on developers, our first goal was to make the deployment process as natural as possible for them. For most developers, the day-to-day process goes something like code, code, code, commit. For those already questioning this process, let me speak on behalf of the developer in question by saying:

Tests?! Of course I’ve already written the tests!  They were in the third ‘code’!

Anyway, we wanted to plug into that process and to do that we chose git.  The reason for selecting git over more centralized source code management tools like subversion was that the distributed nature of git allowed the user to have full control over their data.  The user always had access to their entire historical repository and as developers, we thought that was a critical requirement.  Given that, we standardized on git as the main link between our users’ code and OpenShift.
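
To see what that control looks like in practice, every git clone carries the repository’s full history with it. Here’s a quick sketch (the repository URL is hypothetical):

# clone your application's repository (hypothetical URL)
git clone ssh://myapp@myapp-mydomain.example.com/~/git/myapp.git
cd myapp

# the clone contains the entire history, not just the latest snapshot
git log --oneline

# a simple copy of the clone is a complete, self-contained backup
cp -r ../myapp ../myapp-backup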

Now let’s look at what that development process might look like in practice.  First, you start off with the code, code, commit part:

vi <file of your choice>
# make earth shattering changes
git commit -a -m "My earth shattering comment"

The next part of the process for those familiar with git is the publish process.  You run a ‘push’ command to move your code from your local repository to your distributed clones.  So when you run:

git push

your code is transferred to OpenShift and automatically deployed to your environment. Regardless of whether code needs to be compiled, tests need to be run, dependencies need to be downloaded, or a specific packaging spec needs to be built – it all happens on the server side with this one command. To do this we utilize a git hook to kick off the deployment process. Wait – I know what you are thinking…

What?!  Just a git hook?!  This is the cloud baby!  Shouldn’t this be custom compiling my code into a Zeus Hammer to perform a magical Cloud Nuclear transfer?!!

If you ask us, a git hook works just fine because it’s what you would probably do yourself. We simply use a standard git hook on the server side to kick off a deployment script when your push arrives. That script invokes a series of scripts (called hooks) representing various steps in the deployment process. Some of the hooks are provided by the cartridge that your application is using and some of the scripts are provided by the application itself. This approach lets the cartridge provide base functionality that can be further customized by the application.
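
To give you a feel for the mechanism, here’s a minimal sketch of what a server-side post-receive hook driving that process could look like. The paths are illustrative and the step names come from later in this post – this is the general pattern, not the actual OpenShift code:

#!/bin/bash
# .git/hooks/post-receive - runs on the server after each successful push
# (minimal sketch with hypothetical paths, not the actual OpenShift code)
set -e

APP_DIR=$HOME/app
HOOK_DIR=$APP_DIR/hooks

# check the newly pushed code out into the application's working directory
GIT_WORK_TREE=$APP_DIR git checkout -f

# walk the deployment steps; each step is just another executable script (a hook)
for step in pre_build build deploy post_deploy; do
    if [ -x "$HOOK_DIR/$step" ]; then
        "$HOOK_DIR/$step"
    fi
done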

First let’s talk about the cartridge hooks.  Having cartridge specific hooks is important because each cartridge needs to do different things in their deployment process.  For example, when a Java cartridge detects a deployment, we want to do a Maven build, but when a Ruby cartridge detects a deployment, it should execute Bundler.  The cool part is that each individual cartridge can override anything it needs to in the default process.

Let’s look at how the Ruby cartridge implements this. The ruby-1.9 cartridge overrides the build step to run Bundler, while the Java cartridge leverages Maven in its build process. You can implement the pieces that are right for your cartridge where it makes sense and still utilize the generic process everywhere else. In isolation, each individual script is really quite simple. In aggregate though, all those extensions can become extremely powerful and do much of the heavy lifting on behalf of the users.
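
As a rough illustration, a Ruby cartridge’s build hook could be as small as this (a hypothetical sketch, not the actual ruby-1.9 cartridge script):

#!/bin/bash
# build - the Ruby cartridge's override of the generic build step
# (hypothetical sketch; the real hook lives in the OpenShift source)
set -e

cd "$HOME/app"

# install the gems the application declares in its Gemfile
if [ -f Gemfile ]; then
    bundle install --deployment
fi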

But, what if you want to change the default behavior for a specific application? No problem! You have a collection of application-level action hooks, found in your application repository under ~/.openshift/action_hooks. You could put your own code in pre_build, build, deploy, post_deploy or wherever else it makes sense. They are invoked just like the cartridge hooks as part of the deployment process. What you choose to do with these hooks is your decision. Put some code in them and they will get called at each step in the deployment process. This lets you not only leverage the power of a customized cartridge, but also tweak and tune so things are just right for your application.
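
To make that concrete, an action hook is just an executable script. Here’s a hypothetical post_deploy hook – the script body is purely an illustration of what you might put there:

#!/bin/bash
# ~/.openshift/action_hooks/post_deploy - runs after the deploy step
# (hypothetical example; put whatever your application needs here)

# record the deployment
echo "Deployed at $(date)" >> "$HOME/app/log/deploy.log"

# warm the application cache so the first real request is fast
curl -s http://localhost:8080/ > /dev/null || true

Commit the script along with your code and make sure it’s executable (chmod +x) so the deployment process can run it.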

At the end of the day, harnessing the power of the cloud doesn’t need to lock you into a vendor. At OpenShift, we believe that transparency, standards and extensibility will make a process that stands the test of time. I hope this has provided some visibility into how the OpenShift deployment model works and also has given you some insight into navigating the codebase. And if this has piqued your interest and you find yourself digging through more and more code, please reach out and get involved.