Fedora 20 on a Thinkpad X1 Carbon (20A7)

Time to try out some new hardware.  My experience so far with the Thinkpad X1 Carbon has been great and will get even better over time.  Most of the things that I’m going to cover in this blog have already been fixed in various projects and I expect that many of them will land in Fedora 21.  However, until that time, I want to make sure that Fedora 20 users can have a great experience with the Thinkpad X1 Carbon (model 20A7), assuming they are willing to tweak a bit.

Step 1 – Disable UEFI Boot for installation

To do an easy install just disable the UEFI Boot in your BIOS and hook up your installation source (USB, PXE over the net, etc).  Very simple to get going.

Step 2 – Fix Suspend / Resume and USB3

Resuming from suspend is going to fail because of a problem with the firmware and the USB3 driver.  You have a couple of options.  The first is to disable USB3 in the BIOS and move on.  The second is to update your BIOS, which is trickier.  Do not update your BIOS using my instructions unless you know exactly what you are doing.  You can brick (i.e. ruin) your machine if you do it wrong.

Option A (Easy) – Disable USB3 in the BIOS

There is evidently a USB3 driver problem that keeps the machine from resuming from suspend.  I'm going to investigate whether updating the BIOS fixes this, but an easy fix for right now is to disable USB3 in the BIOS; with it disabled, suspend/resume works great.

Option B (Danger) – Update your BIOS to version 1.13+ (AT YOUR OWN RISK)

I’ll be honest, I debated whether to include these instructions at all.  At the end of the day, though, I figured I might as well pass along what worked for me.  Seriously though, if you mess up a BIOS update, you can ruin your machine, so if you don’t know what you are doing, just turn off USB3.  However, if you want to update the firmware, this is what I did.

Step 1 – Download the geteltorito.pl script.  You can download the one I used here.

Step 2 – Get a USB drive that can be erased and plug it in.  Figure out which device that drive is.  I usually just run ‘fdisk’ to figure it out.  Keep in mind that if you see /dev/sdb1 in fdisk, your device is actually going to be /dev/sdb (with no number at the end).
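
If it helps, here is a quick way to spot the drive (nothing here writes anything, it just lists devices):

# List block devices and their sizes; the USB drive is usually easy to pick out
lsblk

# Or list all disks with fdisk and look for one whose size matches your USB drive
sudo fdisk -l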

Step 3 – Download the BIOS ISO image from here.

MAKE SURE YOU GET THE BIOS FOR YOUR MODEL NUMBER LAPTOP.  For example, I downloaded the driver 'BIOS Update (Bootable CD) for Windows 8.1 (64-bit), 8 (64-bit), 7 (32-bit, 64-bit) – ThinkPad X1 Carbon (Machine types: 20A7, 20A8)'.  The filename was gruj08us.iso.

Step 4 – Convert the downloaded ISO to a bootable image, named bios-update.iso

perl geteltorito.pl -o bios-update.iso gruj08us.iso

Step 5 – Copy that bootable image to your USB drive.  I’m using /dev/sdx below which you need to replace with your USB device.  Double check that you have the device name right for your USB drive and run:

sudo dd if=bios-update.iso of=/dev/sdx bs=512K
sudo sync

Step 6 – Reboot and press F12 to get the boot menu and boot from the USB.  Follow the instructions to update your BIOS.

Step 3 – Add MattOnCloud Repository

I’ve created a yum repository that contains an RPM with the various fixes and repositories used in this blog.  Pull requests are definitely appreciated.  To install my repository, run:

sudo rpm -Uvh https://files-oncloud.rhcloud.com/yum/RPMS/x86_64/oncloud-repo-0.4-1.fc20.x86_64.rpm

To apply the fixes, then run:

sudo yum install thinkpad-fixes

Step 4 – Update GNOME

Since the Thinkpad X1 Carbon has a very high resolution screen, you are going to want GNOME 3.12’s HiDPI support.  If you don’t have it, a lot of the windows and text are going to be crazy small.  My RPM provides a repository for a backported version of GNOME 3.12, so after installing it, you just need to run:

sudo yum update

Go get a coffee since that is going to be a lot of packages.  After it’s done, logout and login or reboot your machine.

Once you have GNOME reloaded, you are probably going to want to tweak your applications to scale their resolution correctly.  I followed the instructions in this article:

https://wiki.archlinux.org/index.php/HiDPI
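
For example, GNOME’s interface scaling can be forced from the command line (the factor of 2 is just what worked for my screen; the article above covers per-application tweaks as well):

# Double the size of GNOME interface elements on the HiDPI panel
gsettings set org.gnome.desktop.interface scaling-factor 2

# Optionally fine-tune text size separately
gsettings set org.gnome.desktop.interface text-scaling-factor 1.25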

Step 5 – Update Synaptics

The trackpad support for the Carbon is a little shaky in Fedora 20 by default as well.  The good news is that the 1.7.6 release of the synaptics driver backports some fixes for it.  Luckily, you can get this release early by installing it straight from Fedora’s Koji build server:

sudo yum install http://kojipkgs.fedoraproject.org//packages/xorg-x11-drv-synaptics/1.7.6/2.fc20/x86_64/xorg-x11-drv-synaptics-1.7.6-2.fc20.x86_64.rpm

I found a great configuration from Major on his blog as well.  I started with that configuration and have made several tweaks – I think the setup is getting pretty solid.  I also added syndaemon to disable the touchpad briefly after typing.  I’ve found this lets me keep the touchpad fairly sensitive while avoiding random taps when I’m typing email, etc.  I’ve added all of this to the fixes RPM.  After you boot, if you like the configuration and don’t want the settings to be overridden via the settings widget, run:

gsettings set org.gnome.settings-daemon.plugins.mouse active false
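
For reference, syndaemon gets invoked along these lines (the exact flags the fixes RPM uses may differ – this is just an illustrative setup):

# Disable the touchpad for 1 second after keyboard activity
#   -i 1.0  idle time in seconds
#   -t      only disable tapping and scrolling, not pointer movement
#   -K      ignore modifier-key combinations (so Ctrl+click still works)
#   -d      run as a daemon
syndaemon -i 1.0 -t -K -d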

I’ve also added a non-tap version of my synaptics settings, which is what I’m currently using; it ships alongside the tap version.  I’m curious whether people prefer the tap settings or the click settings, and I’ll make the more popular one the default.

Step 6 – Screen Brightness / Keyboard Backlight

The good news is that adaptive keyboard support is coming soon to Linux.  I’ll update this post once that lands in a kernel we can get at.  The bad news is that after a suspend, the adaptive keyboard is blank and doesn’t work.  We depend on that row for the backlight and brightness keys, so we need a workaround.  Luckily, the thinkpad-fixes package provides one.  It ships two scripts in /usr/bin to adjust the keyboard backlight and screen brightness.  You can run them with:

# Brightness options (dim to bright)
sudo brightness dim
sudo brightness normal
sudo brightness bright

# Backlight options (dim to bright)
sudo backlight 0
sudo backlight 1
sudo backlight 2

A co-worker pointed out that you can also use the brightness slider in the top menu bar drop down (right below the volume).  That is a much easier way to set the brightness if you aren’t in a terminal.  I’ll leave the script for now but might end up removing it.

Step 7 – Fedy

I highly recommend running Fedy to set up the other miscellaneous features such as codecs and font rendering.  Lately I’ve been using the Numix theme and the Infinality fonts and like them quite a bit.  You can install the Numix themes from Fedy and also the improved font rendering with Infinality.  I set the osx style fonts with:

$ sudo /etc/fonts/infinality/infctl.sh setstyle
Select a style:
1) debug       3) linux          5) osx2         7) win98
2) infinality  4) osx          6) win7         8) winxp
#? 4
conf.d -> styles.conf.avail/osx

To switch to the Numix theme, you’ll want to add the GNOME User Themes extension from https://extensions.gnome.org/extension/19/user-themes/.  Then install the GNOME Tweak Tool via Fedy, launch it, and select Numix in all the theme options.

Lastly, I highly recommend the Dash to Dock extension as well.  I think it’s one of the best extensions out there – https://extensions.gnome.org/extension/307/dash-to-dock/

Hope this blog helps a new Fedora user out there get up and running!

Fedora 20 on a Macbook Air

Update 3/24/2014: Suspend working well, moving to mba-fixes RPM

Update 3/23/2014: Removed the RPM version of the backlight driver; also added some attempts at avoiding wakeups

Update 2/17/2014: Added tip on reloading the wireless module

Update 2/5/2014: Added instructions for a shared partition and better backlight driver installation

With Fedora 20 out, it was time to refresh my Macbook Air (model 6,2) installation and see what I could get working.  You might have seen my earlier post using Fedora 19 but this time around, things went much smoother.  I’ve also tried to cover the initial installation with a little more detail, so hopefully that will help out someone just getting going.

Step 0 – Uninstall rEFIt

If you had previously installed Fedora on your Macbook like me, you probably installed rEFIt as your bootloader.  Since that project is no longer maintained, you’ll want to uninstall it and move to the newer rEFInd bootloader.  I uninstalled rEFIt using their instructions and it worked fine.  After uninstalling, I made sure I could still boot into OSX by simply rebooting, and it worked.

Step 1 – Install rEFInd

I followed the rEFInd Mac OSX instructions and, while it involved pasting a few things into the terminal, it went very smoothly.  Since I have the MBA 6,2 model, I kept the 64-bit binary and drivers.  The instructions also mention that you can remove drivers to optimize boot time, but I only removed the HFS driver.
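
If you just want the gist, it boiled down to something like the following (the script name and archive layout are from my memory of the rEFInd binary zip of that era – follow the official instructions for the authoritative steps):

# From the OSX terminal: unpack the rEFInd binary archive and run its installer
unzip refind-bin-*.zip
cd refind-bin-*
./install.sh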

Step 2 – Build Fedora 20 USB Key

First I downloaded the Fedora 20 ISO and followed these instructions to build a USB key.  I followed the Mac OSX section and used dd to create the key.  Worked like a charm.
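
In case it saves you a lookup, the OSX flow looked roughly like this (the disk number and ISO filename are examples – check ‘diskutil list’ for your actual USB device before running dd):

# Identify the USB drive, unmount it, then write the ISO to the raw device
diskutil list
diskutil unmountDisk /dev/disk2
sudo dd if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/rdisk2 bs=1m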

Step 3 – Install

With the newly created USB key plugged in, you’ll want to restart and immediately hold down the Option key on your Mac.  That will pull up the boot device options and you can select the ‘Fedora Media’ option.

Note – I wasn’t able to successfully check the media.  When I tried, I received errors about a devmapper device not being present.  Continuing without checking worked fine for me.

Important – I’m going to describe how to do a clean install, as that is what I normally do.  This is going to delete any existing Fedora installation you might have.  Also, if you mess up the partitions, it could delete some of your Mac data as well.  This is the point where you need to triple check that you have all your stuff backed up.  I don’t claim this installation is fool-proof, nor do I want anyone accidentally reformatting their machine as a part of this effort.  Make sure that if that happens, you aren’t going to permanently lose anything.

Step 3.1 – Partitions

Since I had Fedora 19 installed on encrypted partitions, it was a little confusing for me at first how to make sure the Fedora 20 partitions were created properly.  The following steps did the trick for me:

    • Select the encrypted partitions and unlock ALL of them
        • At this point, you should see the Fedora 19 partitions recognized, including swap
    • Delete all of the Fedora 19 partitions.  Remember – this is a clean install
    • Click the link to auto-create the Fedora 20 partitions
    • Apply the changes

Sharing a partition with OSX

I wanted to share some data and documents between my OSX instance and my Fedora setup.  To do this, I changed the mount point of the /home partition in the Fedora 20 list to /mnt/shared (I had left enough space on the / partition).  I also changed the filesystem to Linux HFS+, which I assume is just a non-journaled HFS+ partition.  Anyway, it was created during the install and mounts up on Linux.  The only thing I had to do was sync the UID of my Linux user with my already existing Mac user of the same name.  To do that, I switched to runlevel 1 and ran the following (replace the angle-bracket placeholders with your values):

sudo init 1
# Put in root password
usermod -u <MAC_UID> <LINUX_USER>
chown -R <LINUX_USER> /home/<LINUX_USER>
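
If you need to look up the Mac UID, running the following from a terminal on the OSX side will print it:

# Print the numeric UID of your OSX user (replace <MAC_USER> with your account name)
id -u <MAC_USER>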

After doing this, I was successfully able to read and write files when booted into Linux or Mac OSX.  I symlinked most of my directories like ~/Documents to that share.

At this point, you’ll let the install proceed as normal and you’ll create your users, root password, etc.  From this point on, I’ll assume you are running these steps from your newly installed Fedora machine.

Step 4 – Update

The first odd thing I hit was that after the installation, PackageKit was telling me everything was up to date, so I had to run yum from the command line:

sudo yum update

That worked fine and applied a few hundred updates.  Reboot to get the new kernel.

Step 5 – RPM Fusion (extra packages)

You’ll want to install RPM Fusion to get the wireless drivers, among other things.  You can either follow the RPM Fusion docs, follow these condensed steps, or use the command-line sketch below:

  • Click this link on your Fedora machine and install it (free repo)
  • Click this link on your Fedora machine and install it (non-free repo).
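
If you prefer to stay in the terminal, the release RPMs can also be installed directly (these URLs are from memory of the RPM Fusion docs – double-check them there before running):

# Install the RPM Fusion free and non-free release packages for Fedora 20
sudo yum localinstall --nogpgcheck \
  http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-20.noarch.rpm \
  http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-20.noarch.rpm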

Step 6 – Wireless

Now with the RPM Fusion repos, you can just install the broadcom wireless drivers:

 sudo yum install kernel-devel akmod-wl

Reloading the wireless module

After updating my kernel, the wireless module didn’t load automatically for me.  I’m fairly new to akmods, so I’m not sure whether the new akmod wasn’t built, wasn’t loaded, or something else.  However, here is what I did to fix it:

# Make sure the module built for your kernel
sudo akmods

# See if the module is loaded (if no results, it's not)
sudo lsmod | grep wl

# Manually load the module
sudo modprobe wl

Step 7 – Backlight

One of the most annoying things for me with Fedora 19 on my Macbook was that the backlight would always go to 100% after a resume.  Luckily, a developer came to the rescue and implemented a new kernel module, mba6x_bl, specifically for the Macbook Air 6,2.  To build it, simply follow the instructions in his repository.  You’ll need to rebuild and install the driver on each kernel update.

For the X11 configuration, I just created a file at /etc/X11/xorg.conf.d/01-backlight.conf with his suggested X11 configuration specifying the driver.  It works great.
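
For reference, the file I ended up with looked roughly like this (the Option value assumes the driver registers a backlight device named mba6x_backlight – check /sys/class/backlight/ and the project README for the exact name):

# Write the X11 snippet that points the intel driver at the new backlight device
sudo tee /etc/X11/xorg.conf.d/01-backlight.conf > /dev/null <<'EOF'
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "Backlight" "mba6x_backlight"
EndSection
EOF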

Why did I give up on the RPM?  Well, I tried to hack around actually learning how to build a kmod RPM and the results just weren’t consistent.  Without calling ‘make modules’ and ‘make modules_install’ properly, the module wouldn’t always get loaded properly.  If there are any volunteers that are willing to put the mba6x_bl driver in a kmod RPM, I’m happy to host it.

Step 8 – SSD errors and Keyboard mapping

The next thing I hit was some sporadic SSD errors.  In this thread, I found that setting the queue depth to 1 made my errors largely go away.  Also, the tilde key is mapped improperly, so you need to apply a fix for that.  I’ve bundled both of these fixes as a systemd service and a udev rule that run at startup.
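
If you want to try the SSD workaround by hand before installing the RPM, the queue depth can be dropped at runtime (this assumes the SSD shows up as sda – adjust if yours differs):

# Check the current queue depth, then drop it to 1
cat /sys/block/sda/device/queue_depth
echo 1 | sudo tee /sys/block/sda/device/queue_depth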

Step 8.1 – Add MattOnCloud Repository

I’ve created a yum repository that contains a built version of this code for Fedora, as well as some of the other fixes below.  Pull requests are definitely appreciated.  This was my first attempt at a package that automatically rebuilds with kernel updates, so install at your own risk…  To install my repository, run:

sudo rpm -Uvh https://files-oncloud.rhcloud.com/yum/RPMS/x86_64/oncloud-repo-0.4-1.fc20.x86_64.rpm

To apply the fixes, then run:

sudo yum install mba-fixes

Wakeups after Suspend

I found a blog post describing a way to avoid wakeups after suspending.  Wakeups are a bit of a pain for me since I’ll suspend at night only to have the machine wake up after I’m asleep and drain the battery.  I’ve created a systemd script that runs at each startup and disables XHC1 and LID0 from waking the machine up.  This means that after closing the lid to suspend, you need to open the lid and push the power button to wake the machine.  This is now installed by the mba-fixes RPM.
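
The underlying mechanism is the kernel’s ACPI wakeup table, which you can poke at manually if you want to experiment (writing a device name toggles its state, so check the current state first):

# See which devices are currently allowed to wake the machine
cat /proc/acpi/wakeup

# Toggle XHC1 (USB) and LID0 (lid) off if they show as 'enabled'
echo XHC1 | sudo tee /proc/acpi/wakeup
echo LID0 | sudo tee /proc/acpi/wakeup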

Random Items

I went ahead and ran Fedy to set up a bunch of other random things, including a better IO scheduler for an SSD.  I would recommend doing that – there are a couple of options out there, but Fedy is pretty nice.

End Result

The end result with Fedora 20 is an impressive setup.  Mini-display to VGA projection actually works this time around which was a huge win for me.  Sound also works right out of the box.  Performance seems solid and I’ll be tracking battery life and updating.

I’ll keep you updated!

The Evolution of PaaS

Having spent three years now working on OpenShift, the lesson I’ve learned from the cloud space is that if you aren’t evolving, you are doing something wrong.  PaaS isn’t a static solution but a constantly progressing set of technologies that enable a better approach to building and running applications.  And at Red Hat, the open source way is a critical aspect of how we work.  To us, that means finding the best technology and best communities out there and working with them instead of against them.  That is why we created an open source upstream for OpenShift, with the mission of being able to experiment along with other communities to find the best solutions for our users.  With that mission in mind, we are always looking at adjacent communities and determining whether they are a good fit.  Looking ahead to next year, I see a couple of exciting community developments on the horizon, one centered around Linux Containers and the other centered around OpenStack.
 
 
First, let’s talk about Linux Containers.  Personally, I think Linux Containers is one of the most exciting developments in the Linux kernel today.  The combination of kernel namespace capabilities, coupled with Linux Control groups and a strong security model is changing how users think about isolating applications running on the same machine.  But much like PaaS, Linux Containers isn’t a static solution.  There are a lot of options that can be utilized to strike the right balance of isolation versus overhead.  On OpenShift, we’ve been using our own variant of Linux Containers since day one.  But there is a lot of community activity around containers and a few months ago, we noticed Docker.  Docker introduced an innovative approach to container isolation and packaging which had the potential to both simplify our cartridges in OpenShift and increase user application portability.  But a lot of stuff looks interesting on the surface.  We wanted to really dive in (i.e. start hacking) to see if this would be a good fit.  We had a great experience working with the leads behind Docker and were able to close many of the initial gaps we hit to make Docker run seamlessly on the platforms critical to us, Fedora and RHEL, to enable us to start utilizing it on OpenShift.  During that same time period, Docker was accepted as a Nova driver to the OpenStack project.  With that foundation, we are getting ever closer to having a consistent portable container layer across the operating system (e.g. RHEL), IaaS (e.g. OpenStack) and PaaS (e.g. OpenShift).  Better still is that we are able to take our experience with containers and work with hundreds of other community members to come up with the best approach going forward.
 
 
But as excited as we are about the evolution of containers and better portability of applications, we also know the operational experience is equally critical.  And more and more, that operational experience is centered around OpenStack.  While we can run OpenShift very well on OpenStack and are even enabling better integration through projects like Heat and Neutron, we’ve had the feeling that there is a more fundamental set of capabilities in our platform today that could be native to OpenStack itself.  And in doing that, we could drastically improve the operational experience.
 
 
But I think it helps to talk through some of those operational challenges.  An example of this is the visibility of containers in OpenStack.  Almost every PaaS on the market right now uses some form of Linux containers.  Arguably, it’s what makes a PaaS so efficient – this highly elastic mesh of containers that form your applications.  However, if that PaaS doesn’t natively integrate with OpenStack, your operations team isn’t going to see those containers.  They are just going to see the virtual machines in OpenStack and not have deeper visibility.  But if that PaaS was natively integrated into OpenStack, things get interesting.  The containers themselves could be managed in OpenStack, opening up full visibility to the operations team.  They wouldn’t just see a virtual machine that is working really hard, they would see exactly why.  It would enable them to start using projects like Ceilometer to monitor those resources, Heat to deploy them, etc.  In other words they could start leveraging more of OpenStack to do their jobs better.  But where do you draw that line?  Should OpenStack know about the applications themselves or just containers?  In looking for those answers, we wanted to embrace the OpenStack community to help us draw that line, just like we did with Linux Containers.
 
 
OpenStack has a tremendous community and various areas where we could have started  – Nova, Neutron, Heat, Ceilometer, Keystone, etc.  At the end of the day, we were going to need to interact with all of them.  That led to the Solum project.  You might have seen the announcement today around the new community project.  We will be working with a group of like minded companies and individuals to figure out the approach that makes the most sense for OpenStack.  While OpenStack is a fast moving space, we have a lot of experience with it and believe that there is tremendous potential to align our PaaS approach with this project.  Being Red Hat, we love community driven innovation, and we’re excited to jump in and help move this effort forward.
 
I think 2014 is going to be the most exciting year for PaaS to date.  There is great traction in the market, developer expectations are starting to solidify and we’re seeing more and more traction in production.  I believe the next advancements will come as much in the operational experience as with the developer experience.  And I’m excited to find healthy and vibrant communities looking to solve the same problem.  The end result will be that OpenShift users benefit from greater portability as well as deeper integration with OpenStack.  This has been one of those moments that just crystallizes why I love working in open source.
 
 
If you are interested in finding out more, follow our progress or get involved with the Solum project directly.  And I’m sure there will be a lot of activity at the OpenStack Design Summit, so if you are going to be there, come find us and we can hack on this together!

An application built from… cartridges?

Background

What are these things called cartridges in OpenShift, and why are they so important?  Well, let’s take a step back and look at a typical application.  While some might argue the specifics, most applications are still multi-tier and utilize multiple technologies with some separation between them.  A classic case is a web application and a database.  While some applications might be experimenting with NoSQL backends, the general pattern largely holds.  And maybe you throw a caching tier in there or something more exotic, but at the end of the day, very few applications I’ve seen get very far with just a web application runtime and nothing else.

Composition

So if you’re still with me and not yet posting to the comments about that initial claim being ridiculous, let’s talk about how that process often plays out.  When building an application, many developers think from a technology standpoint.  They might think of Ruby and want to use Mongo for storage.  Despite claims that the most effective route is to focus on the use cases first (e.g. I’m building a coupon-generating web application that has to store large amounts of redundant data), at the end of the day, the technology decision is often a major factor.  I often operate this way myself – half of the time, an idea I’m pursuing is driven as much by getting to try out some new technology as by a successful and fast implementation.  Engineers like to learn, and new technology is a great vehicle for that.

But while the learning curve around new technology has some benefits, it also has many disadvantages.  It’s hard to argue that learning to wire up a MySQL database to a Ruby application server versus a Java application server has any practical benefit.  It’s just something I need to do.  I need a database driver for the language I’m using, authentication details and endpoints.  It’s the same in theory but just different enough in every language and runtime to be a major pain.  And databases are well known.  The newer the technology gets, the more time is often wasted on the mundane aspects of integration.  But don’t give up on being a developer yet, because this is what cartridges in OpenShift eliminate.

The cartridge model in OpenShift is all about enabling choice in technology and language while also reducing the effort around the integration portions that can be automated.  If you have an application that consists of a JBoss cartridge and a MySQL cartridge, the two are automatically wired together.  You don’t need to know or care about which MySQL driver is being used in JBoss or how the data source is set up.  You can just get down to writing code and queries.  This is beneficial in both development and production.  In development, this gives engineers the ability to trial a lot of different software to find the best solution to their problem.  They can spend more time on the analysis and not the administrivia of learning each technology’s setup.  But that same approach and power also extends to production.  Cartridges don’t just automate things like wiring up different components; they can also implement functionality like scaling.  For example, the JBoss cartridge has auto-scaling built in so that when the application is getting more load than it can handle, it will spin up new instances automatically.  And for those who might be wondering, clustering is automatically set up as well – new instances automatically join the cluster.  The goal of the cartridge model is to capture these capabilities in a standardized, easily consumable format that brings benefits throughout the entire lifecycle of application development.

The Technology

OpenShift cartridges have a rich set of capabilities, but two of them are my favorites:

  • Providing a first class way to interact with each other, even across multiple machines
  • Giving the cartridges the ability to influence their deployment topology (i.e. can they run embedded with other cartridges or do they scale differently)

Publish / Subscribe

Let’s talk about the interaction model first.  By interaction model, I simply mean having multiple cartridges communicate with each other.  That sounds incredibly simple, but it’s also amazingly powerful, especially as you consider building applications from many cartridges.  The concept is that a cartridge like MySQL can publish information about itself that other cartridges might want to know.  For example, when a new MySQL instance is created, you probably need to know the username, password and JDBC URL – all of that information can be published.  That process is described with the cartridge in a file that we call a manifest.  Here is an example of how MySQL actually publishes its connection information in its manifest:

Publishes:
  publish-db-connection-info:
    Type: ENV:NET_TCP:db:connection-info
That entry will invoke a script called publish-db-connection-info that publishes a collection of environment variables of type ENV:NET_TCP:db:connection-info.  You can think of the type as an arbitrary string that consumers can use to filter what they may or may not support.  This published information can then be consumed by any other cartridge that subscribes to a matching type.  For example, in the JBoss cartridge, you’ll see the following section in its manifest:

Subscribes:
  set-env:
    Type: ENV:*
    Required: false

This instructs the JBoss cartridge to listen to all environment variables set by publishing events that start with the string ENV.  More restrictive matching can also be done in cases where you might have a cartridge that is only compatible with a certain class of published information (e.g. subscribing to ENV:NET_TCP:db:connection-info instead of ENV:*).  Either way, if the publish and subscribe strings match, the JBoss cartridge has access to the published environment variables.  With that information, the JBoss cartridge is then able to automatically wire up a datasource using those values:
<datasource jndi-name="java:jboss/datasources/MysqlDS"
...
<connection-url>
  jdbc:mysql://${env.OPENSHIFT_MYSQL_DB_HOST}:${env.OPENSHIFT_MYSQL_DB_PORT}/${env.OPENSHIFT_APP_NAME}
</connection-url>
<driver>mysql</driver>
<security>
  <user-name>${env.OPENSHIFT_MYSQL_DB_USERNAME}</user-name>
  <password>${env.OPENSHIFT_MYSQL_DB_PASSWORD}</password>
</security>
...
</datasource>

While this is just a simple example, hopefully the beauty of it to a developer is apparent.  Just the act of adding a MySQL cartridge to your JBoss application will automatically wire up your application to it.  Adding Mongo would do the same thing, as would Postgres, etc, etc.  And this isn’t limited to databases either.  It also works with monitoring cartridges, metrics cartridges, caching, and many others – the possibilities are limitless.

Deployment Topology

The second capability isn’t about the development process as much as it is about production.  We all know that different application technologies scale differently.  You might have a Ruby application whose throughput is determined by the number of Passenger instances that are running.  If it starts slowing down, you need to add more.  However, if this same application depends on a database, you probably need to scale the data tier independently.  You don’t want to add another MySQL instance every time you add a new Passenger instance.  Not only is that unnecessary and expensive, it most likely wouldn’t even work.  When scaling your web tier, you need to think about session affinity, connection persistence, stateless / stateful behavior and similar concepts.  However, when scaling MySQL, you need to think about your master / slave model, how many to add of each and what type of query patterns you are using.  In OpenShift, since these are different cartridges, each cartridge can approach scaling in a unique manner.

From the cartridge standpoint, the Ruby cartridge is going to respond to scaling events very differently than MySQL.  While this requires real work and thought from the cartridge authors, it captures the complexity in a model that is easily leveraged by developers.  Developers are able to specify how they want scaling to occur (e.g. automatically or manually) and also put limits on how many resources they want each cartridge to be able to consume.  They might want their Ruby tier to always start with pre-allocated resources (called gears in OpenShift) but still limit the maximum number of resources it can consume.  Using the OpenShift command line tools, that would be as simple as:

rhc scale-cartridge ruby -a myapp --min 5 --max 10

In my application, that would always start the Ruby cartridge with 5 gears and never consume more than 10.  The best part, though, is that the cartridges themselves can also influence what sort of scaling is possible, so that you aren’t blindly adding resources to a cartridge that can’t use them.  The default Ruby cartridge supports scaling, but the default MySQL cartridge can only run standalone.  The MySQL cartridge is able to express this limitation by setting the scaling options in its manifest:

Scaling:
  Min: 1
  Max: 1

The end result is that when you are creating a scaled application, the Ruby runtimes and MySQL runtimes will get created on separate gears to give the maximum amount of resources to each tier, but the MySQL cartridge and Ruby cartridges will implement their own unique scaling approach.

At the end of the day, this is really about separation of concerns.  Cartridges in OpenShift are used to describe lifecycle characteristics of the technology they represent as well as integration options with other cartridges.  Since the OpenShift cartridge format is completely open, it’s easy for commercial vendors as well as open source users to create cartridges.  For developers, that means they get to access a broad choice of technologies, both commercial and community.  But in addition to choice, the most value comes from allowing developers to spend more time doing the thing they do best – coding.

Fedora 19… on a Macbook Air (2013 Model)!

Update 02-04-2014 – this blog has been updated with Fedora 20.  Fedora 20 works much better so I would highly recommend following the instructions there.

Warning – This information is now out of date and replaced with the blog entry on Fedora 20

 

Update 07-29-2013 – the new kernel supports the touchpad out of the box.  Getting better every day!

This is a slight deviation from my traditional posts, but I’m a techie at heart, and when a Linux guy gets a new Macbook, he’s gotta try putting Fedora on it.  Anderson Silva was my primary inspiration since he got this working on a mid-2012 model Macbook Air.  However, when deploying on a 2013 model, I hit a couple of bumps in the road.  The good news is that I was able to fix everything, but it took me a while to track down all the fixes.  Warning – if you aren’t interested in building some packages, it’s probably better to just wait a couple of months.  However, if you are impatient like me, read on!

Step 1 – Building RPMs

You are going to need to build some RPMs here, so you’ll need some development tools installed.

# rpmdev-setuptree is provided by the rpmdevtools package
sudo yum install @development-tools rpmdevtools
rpmdev-setuptree

Step 2 – Wireless

The new Macbooks ship a BCM4360 wireless chipset which isn’t supported today in Fedora or available via RPMFusion.  However, the RPMFusion guys are working on it and you can build your own RPM with the latest driver to get this working.  You can track the progress at https://bugzilla.rpmfusion.org/show_bug.cgi?id=2721.

# Download and build the source RPMs
cd ~/rpmbuild/SRPMS
wget http://dl.dropboxusercontent.com/u/25699833/rpmfusion/bug2721/broadcom-wl-6xx-6.30.223.30-1.fc20.src.rpm
wget http://dl.dropboxusercontent.com/u/25699833/rpmfusion/bug2721/wl-6xx-kmod-6.30.223.30-1.fc20.src.rpm
rpm -Uvh *.src.rpm
cd ~/rpmbuild/SPECS
rpmbuild -ba wl-6xx-kmod.spec
rpmbuild -ba broadcom-wl-6xx.spec

# Install those RPMs
cd ~/rpmbuild
sudo yum install RPMS/`uname -i`/akmod-wl-6xx* RPMS/`uname -i`/kmod-wl-6xx* RPMS/noarch/broadcom-wl-6xx*

Step 3 – Touchpad support (e.g. two finger scroll, two finger click)

Update: You used to have to build a custom kernel but now that 3.10 is out, this works out of the box!

Issues

1. Light sensor / backlight – after a reboot, I can adjust screen brightness with the correct steps / increments using the hot keys. However, after suspend / resume, I can still use the hot keys but it’s either max / min brightness. Not the end of the world, but a bit of a pain.  I’ve opened the following bug to track – https://bugzilla.redhat.com/show_bug.cgi?id=989555

2. Internal speakers.  I can’t seem to get the internal speakers to work.  Headphones work fine but I went ahead and opened a bug to track – https://bugzilla.redhat.com/show_bug.cgi?id=989582

3. 15-30 second hangs.  This seems to be somewhat CPU / IO related but every once in a while, my machine will hang for 15 or 30 seconds.  Nothing more than an annoyance but I’m going to try adding libata.force=1:noncq to my kernel boot parameters and see if that helps based on this article (https://bbs.archlinux.org/viewtopic.php?pid=1295212#p1295212).

# Edit the default grub file
sudo vim /etc/default/grub

# Add 'libata.force=1:noncq' to the end of the GRUB_CMDLINE_LINUX parameter

# Regenerate the grub configurations
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

Please comment on the bugs if you are experiencing the same issues or if you have fixes!

Developing in the Clouds

I’ve seen a lot of interesting evolutionary changes in our development practices over the last 10 years or so and I actually don’t think there have been many revolutionary changes.  Until now, that is.  The power of cloud technologies has introduced a new twist in what development teams can utilize – infinite resources.  Yeah, yeah, I know there really is a capacity limit and I know someone still pays the bill at the end of the day but it’s still fundamentally different than what we are used to.  Without even realizing it, we have built development processes around resource constraints – everything from how developers code, to our QE, to our release processes.  But that doesn’t have to be the case anymore and it doesn’t have to break the bank either.  On OpenShift, we decided early on to build our development process entirely around the concept of using as many volatile resources as we needed to see how productive we could be.  I’ll let you judge for yourself, but to me, the results have been astounding.  Not only is our team more productive than any development team I’ve worked with before, but we are shockingly more cost effective as well.

Let’s take a trip down memory lane first to give you some context to my past experience.  If you roll back the clock 5 years or so, scaling development was painful.  Development essentially ran off desktops and the operations controlled environments were rigid and static.  On the development side, I was constantly struggling to get powerful enough desktops and enough of them to keep the development team productive.  Virtualization wasn’t that prominent yet, so often the developers needed two or three desktops to emulate multi-tier systems.  I spent as much time in purchasing and unpacking boxes as I did coding (and on the warranty calls when the things broke down).  We would constantly have power issues and stability issues as developers would inevitably start hosting shared services on their desktops like our test or continuous integration servers.  Power outages every few days would destabilize services and every once in a while kill a hard drive that wasn’t backed up.  On the other hand, our operations controlled environments that had stable power and redundant storage were ridiculously expensive and effectively static.  Those environments were essentially mirrors of production and expected to be as stable as production.  Developers needed agility so they stuck to their desktops.  It’s the classic development and operations divide and I’ve seen this same scenario play out time and time again.

When we started OpenShift, we wanted to approach it differently.  Sure, everyone still gets a laptop (or in some cases a desktop) for a personalized development environment, but that is where physical resources stop.  Developers have the ability to spin up as many mini-environments as they need and sync their local code to those environments.  They can use those mini-environments to kick off various suites of tests as well as do ad-hoc testing of their own.  They might only need a single environment if they are working on a single feature, or they might need a dozen.  The focus for us was on ease of consumption – a single command and a 30 second wait is all they need to spin up a new environment of the latest stable build.  Another command synchronizes their local changes to that environment.  The most important thing is that there are no limits to the number of environments they can wield – whatever makes them productive.

But let’s talk cost effectiveness.  How can we provide something like this and not completely break the bank?  Well, this is where IaaS pricing can start benefiting the consumer.  Before you get there, however, you have some challenges to solve.  The first challenge is the developer habit of keeping long running machines around.  Any time developers want to test something, we want them to be able to spin up a new machine with their changes.  You have to make it easier to start with a new machine than to keep an old one around.  Our goal was to keep that entire operation under a minute while providing easy access to various levels of the codebase, and that did the trick for us.  Once you’ve broken the requirement of long running machines, you can start to take advantage of hourly pricing with most IaaS vendors.  If a developer can spin up a machine, run their tests and finish in an hour, you probably paid about $0.30 for that operation.  If they need 6 hours, you are paying less than $2.  To help reinforce this model, any machine in our environment that hasn’t been logged into for 6 hours automatically gets stopped.  The reality is that developers aren’t great at cleaning up, but once you get the model right, they don’t need resources for a long time.  If you get the habit formed around starting each time with a new VM, everything else starts to fall into place.

And yes, being Red Hat, we’ve also open sourced this work.  While these tools are fairly specialized for OpenShift development, hopefully you might find a nugget or two in them that will help in your own process.  And we’re always up for suggestions, so if you see a better way of doing something, please send us a pull request!

Next up, I’ll discuss how we expanded upon this to improve our code submission and review process.  As many readers will know, cutting and testing code is only one part of developer effectiveness.  We’ve been able to use these same techniques to fundamentally change how we look at code.  Ever wonder what that OpenShift GitHub Bot commenting on pull requests is all about?  If so, stay tuned!

The New Cloud Business Model – Fake Support

I’ve noticed a growing trend over the last year: companies providing exciting new enterprise software, the promise of support, and no chance of being able to deliver on it.  And unfortunately, for consumers trying to sort through all the new offerings out there, it can sometimes be difficult to separate the marketing glitz and glamour from reality.  With OpenShift, Red Hat is able to stand behind the software that it distributes – it has deep expertise in every layer of the stack.  Given that, it frustrates me when I see others claim the same model without the expertise – that approach is just taking advantage of customers who don’t do their homework before buying.

Let’s think about what would happen if more industries took this same approach – the medical profession for example.  Imagine what the conversation might be after your yearly check-up.

Doctor: Well, I’ve got some good news and some bad news.  The good news is that you still look okay.  The bad news is that there is something going on under the surface that you are going to want to figure out.

You: Okay… what exactly do you mean by ‘under the surface’?  Also, when you say that ‘I’ will need to figure this out, what do you mean?

Doctor: I mean something is going on underneath your skin.  What happens under there is basically a mystery to us – it’s not something we support.  That said, whatever is going on probably needs to be fixed so you’ll want to find someone that can do that.  We could try but we really don’t have any better odds than you in fixing the problem…

If a conversation like this is so unacceptable in other disciplines, why do we so readily accept it in software?  Let’s take Platform as a Service (PaaS) for example.  PaaS is a platform positioned to be the core application foundation in your company.  It is tightly integrated with both the operating system (OS) and your application platforms.  Those who say otherwise are either dreaming or trying to deceive you.  That tight integration is what lets the PaaS platform do things so that you don’t have to.  But many of the PaaS vendors in the market have limited experience across the OS and the application stacks.  In almost all cases, the PaaS providers are going to have to rely on a separate company for the operating system distribution.  In many cases, they are going to have to do the same for the application stacks.

What are these companies going to do when their customers hit issues in areas outside of the core PaaS software?  Most of these guys aren’t active in the open source versions of the software, so I doubt they are going to do the fixes themselves.  Don’t let them sell you the ‘power of open source software’ unless they are involved enough to influence those changes.  Maybe they will proceed with the same awkward conversation as the example above…

Now, maybe these providers have the ability to support all the things they promise.  Maybe they have all the connections in the open source projects to maintain stable distributions themselves.  This is what Red Hat does, but I don’t see too many others doing the same.  At a minimum, you should check, because you might end up buying a product from a company whose business model is based on you not making that call for help…