Fedora 20 on a Macbook Air

Update 3/24/2014: Suspend working well, moving to mba-fixes RPM

Update 3/23/2014: Removed the RPM version of the backlight driver; also added some attempts at avoiding wakeups

Update 2/17/2014: Added tip on reloading the wireless module

Update 2/5/2014: Added instructions for a shared partition and better backlight driver installation

With Fedora 20 out, it was time to refresh my Macbook Air (model 6,2) installation and see what I could get working.  You might have seen my earlier post using Fedora 19, but this time around things went much more smoothly.  I’ve also tried to cover the initial installation in a little more detail, so hopefully that will help out someone just getting going.

Step 0 – Uninstall rEFIt

If you had previously installed Fedora on your Macbook like me, you probably installed rEFIt as your bootloader.  Since that project is no longer maintained, you’ll want to uninstall it and move to the newer rEFInd bootloader.  I uninstalled rEFIt using their instructions and it worked fine.  After uninstalling, I rebooted to make sure I could still boot into OS X, and it worked.

Step 1 – Install rEFInd

I followed the rEFInd Mac OS X instructions and, while it was a few steps of pasting things into the terminal, it went very smoothly.  Since I have the MBA 6,2 model, I kept the 64-bit binary and drivers.  Also, the instructions mention that you can remove drivers to optimize boot time, but I only removed the HFS driver.

Step 2 – Build Fedora 20 USB Key

First I downloaded the Fedora 20 ISO and followed these instructions to build a USB key.  I followed the Mac OS X section and used the dd program to create the key.  Worked like a charm.
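
If you want a rough idea of what the dd step looks like, it boils down to something like the following sketch – the disk identifier /dev/disk2 and the ISO filename are placeholders, so double-check yours with diskutil list before writing anything:

# Find the USB key's disk identifier (this example assumes /dev/disk2)
diskutil list

# Unmount the key, write the ISO to the raw device, then eject
diskutil unmountDisk /dev/disk2
sudo dd if=Fedora-20-x86_64.iso of=/dev/rdisk2 bs=1m
diskutil eject /dev/disk2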

Step 3 – Install

With the newly created USB key plugged in, you’ll want to restart and immediately hold down the Option key on your Mac.  That will pull up the boot device options and you can select the ‘Fedora Media’ option.

Note – I wasn’t able to successfully check the media.  When I tried, I received errors about a devmapper device not being present.  Continuing without checking worked fine for me.

Important – I’m going to describe how to do a clean install as that is what I normally do.  This is going to delete any existing Fedora installation you might have.  Also, if you mess up the partitions, it could delete some of your Mac data as well.  This is the point where you need to triple check that you have all your stuff backed up.  I don’t claim this installation is fool-proof, nor do I want anyone accidentally reformatting their machine as a part of this effort.  If that does happen, make sure you aren’t going to permanently lose anything.

Step 3.1 – Partitions

Since I had Fedora 19 installed on encrypted partitions, it was a little confusing at first to figure out how to make sure the Fedora 20 partitions were created properly.  The following steps did the trick for me:

    • Select the encrypted partitions and unlock (decrypt) ALL of them
      • At this point, you should see the Fedora 19 partitions recognized, including swap
    • Delete all of the Fedora 19 partitions – remember, this is a clean install
    • Click the link to auto-create the Fedora 20 partitions
    • Apply the changes

Sharing a partition with OSX

I wanted to share some data and documents between my OS X instance and my Fedora setup.  To do this, I changed the mount point of the /home partition in the Fedora 20 list to /mnt/shared (I had left enough space on the / partition).  I also changed the filesystem to Linux HFS+, which I assume is just a non-journaled HFS+ partition.  In any case, it was created during the install and mounts up fine on Linux.  The only thing I had to do was sync the UID of my Linux user with my already existing Mac user of the same name.  To do that, I switched to runlevel 1 and ran the following (replace the values in angle brackets with your own):

# Drop to single-user mode (you'll be prompted for the root password)
sudo init 1
# Change the Linux user's UID to match the Mac user's UID
usermod -u <MAC_UID> <LINUX_USER>
# Fix ownership of the existing home directory for the new UID
chown -R <LINUX_USER> /home/<LINUX_USER>

After doing this, I was able to read and write files whether booted into Linux or Mac OS X.  I symlinked most of my directories, like ~/Documents, to that share.
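
The symlinking itself is nothing fancy – something along these lines, assuming a Documents folder already exists on the shared partition:

# Replace the local directory with a link to the shared copy
mv ~/Documents ~/Documents.bak
ln -s /mnt/shared/Documents ~/Documents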

At this point, you’ll let the install proceed as normal and you’ll create your users, root password, etc.  From this point on, I’ll assume you are running these steps from your newly installed Fedora machine.

Step 4 – Update

The first odd thing I hit after the installation was that PackageKit was telling me everything was up to date (it wasn’t), so I had to run yum at the command line:

sudo yum update

That worked fine and applied a few hundred updates.  Reboot to get the new kernel.

Step 5 – RPM Fusion (extra packages)

You’ll want to install RPM Fusion to get the wireless drivers, among other things.  You can either follow the RPM Fusion docs or follow these condensed steps (or use the command-line sketch just after this list):

  • Click this link on your Fedora machine and install it (free repo)
  • Click this link on your Fedora machine and install it (non-free repo).
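
If you’d rather not click through the links, the command-line equivalent should look roughly like this – these were the standard RPM Fusion release package URLs at the time of Fedora 20, so check the RPM Fusion site if they have moved:

# Install the free and non-free RPM Fusion release packages for your Fedora version
sudo yum localinstall --nogpgcheck \
  http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm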

Step 6 – Wireless

Now with the RPM Fusion repos in place, you can just install the Broadcom wireless drivers:

 sudo yum install kernel-devel akmod-wl

Reloading the wireless module

After updating my kernel, the wireless module didn’t load automatically for me.  I’m fairly new to akmods, so I’m not sure if the new akmod wasn’t built, wasn’t loaded, or what.  However, here is what I did to fix it:

# Make sure the module built for your kernel
sudo akmods

# See if the module is loaded (if no results, it's not)
sudo lsmod | grep wl

# Manually load the module
sudo modprobe wl

Step 7 – Backlight

One of the most annoying things for me with Fedora 19 on my Macbook was that the backlight would always go to 100% after a resume.  Luckily, a developer came to the rescue and implemented a new kernel module specifically for the Macbook Air 6,2.  The driver source is in his repository; to build it, simply follow the instructions there.  You’ll need to rebuild and install the driver after each kernel update.

For the X11 configuration, I just created a file at /etc/X11/xorg.conf.d/01-backlight.conf with his suggested X11 configuration specifying the driver.  It works great.
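
If you don’t have his suggested configuration in front of you, it is roughly a Device section that points the intel driver at the module’s backlight device – something like the sketch below (the mba6x_backlight name is my assumption from the module, so double-check the repository’s README):

Section "Device"
    Identifier  "Intel Graphics"
    Driver      "intel"
    # Tell the intel driver to use the backlight device registered by the module
    Option      "Backlight"  "mba6x_backlight"
EndSection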

Why did I give up on the RPM?  Well, I tried to hack around actually learning how to build a proper kmod RPM, and the results just weren’t consistent.  Without calling ‘make modules’ and ‘make modules_install’ correctly, the module wouldn’t always get loaded.  If there are any volunteers willing to package the mba6x_bl driver as a kmod RPM, I’m happy to host it.

Step 8 – SSD errors and Keyboard mapping

The next thing I hit was some sporadic SSD errors.  In this thread, I found that setting the drive’s queue depth to 1 made my errors largely go away.  Also, the tilde key is mapped improperly, so you need to apply a fix for that.  I’ve bundled both of these fixes as a systemd service and a udev rule that run at startup.
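
If you want to try the fixes by hand before installing the package, the manual versions look roughly like this – the sda device name is an assumption, and the hid_apple tweak is the common fix for the Mac tilde/section-key swap rather than necessarily the exact one bundled in mba-fixes:

# Work around the sporadic SSD errors by dropping the queue depth to 1
echo 1 | sudo tee /sys/block/sda/device/queue_depth

# Fix the tilde/backtick key mapping (usual hid_apple workaround on Macs)
echo 0 | sudo tee /sys/module/hid_apple/parameters/iso_layout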

Step 8.1 – Add MattOnCloud Repository

I’ve created a yum repository that contains a built version of this code for Fedora, as well as some of the other fixes below.  I’m keeping the source available and pull requests are definitely appreciated.  This was my first attempt at a package that automatically rebuilds with kernel updates, so install at your own risk…  To install my repository, run:

sudo rpm -Uvh https://files-oncloud.rhcloud.com/yum/RPMS/x86_64/oncloud-repo-0.4-1.fc20.x86_64.rpm

To apply the fixes, then run:

sudo yum install mba-fixes

Wakeups after Suspend

Spurious wakeups after suspend are a bit of a pain for me, since I’ll suspend at night only to have the machine wake up after I’m asleep and drain the battery.  I found a blog post which describes a way to avoid these wakeups.  I’ve created a systemd service that runs at each startup and prevents XHC1 and LID0 from waking the machine up.  This means that after closing the lid to suspend, you need to open the lid and push the power button to wake the machine up.  This is now installed by the mba-fixes RPM.
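
If you’d rather wire this up yourself than install the RPM, the underlying trick is writing the device names to /proc/acpi/wakeup at boot.  A minimal one-shot unit might look like the sketch below – the unit name is hypothetical, and the XHC1/LID0 names come from my 6,2, so check cat /proc/acpi/wakeup on your machine first:

# /etc/systemd/system/disable-wakeups.service (hypothetical name)
[Unit]
Description=Disable XHC1 and LID0 ACPI wakeup sources

[Service]
Type=oneshot
# Writing a device name to /proc/acpi/wakeup toggles it, so only
# disable a device if it is currently enabled.
ExecStart=/bin/sh -c 'grep -q "^XHC1.*enabled" /proc/acpi/wakeup && echo XHC1 > /proc/acpi/wakeup || true'
ExecStart=/bin/sh -c 'grep -q "^LID0.*enabled" /proc/acpi/wakeup && echo LID0 > /proc/acpi/wakeup || true'

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable disable-wakeups.service and it will run on every boot.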

Random Items

I went ahead and ran Fedy to set up a bunch of other random things, including a better IO scheduler for the SSD.  I would probably recommend doing that – there are a couple of tools for this, but Fedy is pretty nice.

End Result

The end result with Fedora 20 is an impressive setup.  Mini DisplayPort to VGA projection actually works this time around, which was a huge win for me.  Sound also works right out of the box.  Performance seems solid, and I’ll be tracking battery life and updating.

I’ll keep you updated!

The Evolution of PaaS

Having spent three years now working on OpenShift, the lesson I’ve learned from working in the cloud space is that if you aren’t evolving, you are doing something wrong.  PaaS isn’t a static solution but a constantly progressing set of technologies to enable a better approach to building and running applications.  But at Red Hat, the open source way is a critical aspect of how we work.  To us, that means finding the best technology and best communities out there and working with them instead of against them.  That is why we created  with the mission of being able to experiment along with other communities to find the best solutions for our users.  With that mission in mind, we are always looking for those adjacent communities and determining if they are a good fit.  Looking ahead to next year, I see a couple of exciting community developments on the horizon, one centered around Linux Containers and the other centered around OpenStack.
 
 
First, let’s talk about Linux Containers.  Personally, I think Linux Containers is one of the most exciting developments in the Linux kernel today.  The combination of kernel namespace capabilities, coupled with Linux Control groups and a strong security model is changing how users think about isolating applications running on the same machine.  But much like PaaS, Linux Containers isn’t a static solution.  There are a lot of options that can be utilized to strike the right balance of isolation versus overhead.  On OpenShift, we’ve been using our own variant of Linux Containers since day one.  But there is a lot of community activity around containers and a few months ago, we noticed Docker.  Docker introduced an innovative approach to container isolation and packaging which had the potential to both simplify our cartridges in OpenShift and increase user application portability.  But a lot of stuff looks interesting on the surface.  We wanted to really dive in (i.e. start hacking) to see if this would be a good fit.  We had a great experience working with the leads behind Docker and were able to close many of the initial gaps we hit to make Docker run seamlessly on the platforms critical to us, Fedora and RHEL, to enable us to start utilizing it on OpenShift.  During that same time period, Docker was accepted as a Nova driver to the OpenStack project.  With that foundation, we are getting ever closer to having a consistent portable container layer across the operating system (e.g. RHEL), IaaS (e.g. OpenStack) and PaaS (e.g. OpenShift).  Better still is that we are able to take our experience with containers and work with hundreds of other community members to come up with the best approach going forward.
 
 
But as excited as we are about the evolution of containers and better portability of applications, we also know the operational experience is equally critical.  And more and more, that operational experience is centered around OpenStack.  While we can run OpenShift very well on OpenStack and are even enabling better integration through projects like Heat and Neutron, we’ve had the feeling that there is a more fundamental set of capabilities in our platform today that could be native to OpenStack itself.  And in doing that, we could drastically improve the operational experience.
 
 
But I think it helps to talk through some of those operational challenges.  An example of this is the visibility of containers in OpenStack.  Almost every PaaS on the market right now uses some form of Linux containers.  Arguably, it’s what makes a PaaS so efficient – this highly elastic mesh of containers that form your applications.  However, if that PaaS doesn’t natively integrate with OpenStack, your operations team isn’t going to see those containers.  They are just going to see the virtual machines in OpenStack and not have deeper visibility.  But if that PaaS was natively integrated into OpenStack, things get interesting.  The containers themselves could be managed in OpenStack, opening up full visibility to the operations team.  They wouldn’t just see a virtual machine that is working really hard, they would see exactly why.  It would enable them to start using projects like Ceilometer to monitor those resources, Heat to deploy them, etc.  In other words they could start leveraging more of OpenStack to do their jobs better.  But where do you draw that line?  Should OpenStack know about the applications themselves or just containers?  In looking for those answers, we wanted to embrace the OpenStack community to help us draw that line, just like we did with Linux Containers.
 
 
OpenStack has a tremendous community and various areas where we could have started – Nova, Neutron, Heat, Ceilometer, Keystone, etc.  At the end of the day, we were going to need to interact with all of them.  That led to the Solum project.  You might have seen the announcement today around this new community project.  We will be working with a group of like-minded companies and individuals to figure out the approach that makes the most sense for OpenStack.  While OpenStack is a fast-moving space, we have a lot of experience with it and believe there is tremendous potential to align our PaaS approach with this project.  Being Red Hat, we love community-driven innovation, and we’re excited to jump in and help move this effort forward.
 
I think 2014 is going to be the most exciting year for PaaS to date.  There is great traction in the market, developer expectations are starting to solidify, and we’re seeing more and more adoption in production.  I believe the next advancements will come as much in the operational experience as in the developer experience.  And I’m excited to find healthy and vibrant communities looking to solve the same problems.  The end result will be that OpenShift users benefit from greater portability as well as deeper integration with OpenStack.  This has been one of those moments that crystallizes why I love working in open source.
 
 
If you are interested in finding out more, follow our progress or get involved with the Solum project directly.  And I’m sure there will be a lot of activity at the OpenStack Design Summit, so if you are going to be there, come find us and we can hack on this together!

The New Cloud Business Model – Fake Support

I’ve noticed a growing trend over the last year: companies providing exciting new enterprise software and the promise of support, with no real chance of being able to deliver on it.  And unfortunately, for consumers trying to sort through all the new offerings out there, it can sometimes be difficult to separate the marketing glitz and glamour from the reality.  With OpenShift, Red Hat is able to stand behind the software that it distributes – we have deep expertise in every layer of the stack.  Given that, it frustrates me when I see others claim the same model without the expertise – that approach is just taking advantage of customers who don’t do their homework before buying.

Let’s think about what would happen if more industries took this same approach – the medical profession for example.  Imagine what the conversation might be after your yearly check-up.

Doctor: Well, I’ve got some good news and some bad news.  The good news is that you still look okay.  The bad news is that there is something going on under the surface that you are going to want to figure out.

You: Okay… what exactly do you mean by ‘under the surface’?  Also, when you say that ‘I’ will need to figure this out, what do you mean?

Doctor: I mean something is going on underneath your skin.  What happens under there is basically a mystery to us – it’s not something we support.  That said, whatever is going on probably needs to be fixed so you’ll want to find someone that can do that.  We could try but we really don’t have any better odds than you in fixing the problem…

If a conversation like this is so unacceptable in other disciplines, why do we so readily accept it in software?  Let’s take Platform as a Service (PaaS) for example.  PaaS is a platform positioned to be the core application foundation in your company.  It is tightly integrated with both the operating system (OS) and your application platforms.  Those that say otherwise are either dreaming or trying to deceive you.  That tight integration is what lets the PaaS platform do things so that you don’t have to.  But many of the PaaS vendors in the market have limited experience across the OS and the application stacks.  In almost all cases, the PaaS providers are going to have to rely on a separate company for the operating system distribution.  In many cases, they are going to have to do the same for the application stacks.

What are these companies going to do when their customers hit issues in areas outside of the core PaaS software?  Most of these vendors aren’t active in the open source versions of the software, so I doubt they are going to make the fixes themselves.  Don’t let them sell you on the ‘power of open source software’ unless they are involved enough to influence those changes.  Maybe they will proceed with the same awkward conversation as the example above…

Now, maybe these providers do have the ability to support all the things they promise.  Maybe they have all the connections in the open source projects to maintain stable distributions themselves.  This is what Red Hat does, but I don’t see too many others doing the same.  At a minimum, you should check, because you might end up buying a product from a company whose business model is based on you not making that call for help…

Red Hat Summit Recap – 2012

Last month we had our annual Red Hat Summit.  I always look forward to Summit, as it gives me the ability to connect with users and customers at a depth that I don’t get on a daily basis.  I try to spend as much time as I can with users and customers, both to understand their current needs and to deduce some trends for the upcoming year.  Here are my conclusions from the 2012 Summit:

1. IT shops are moving beyond basic virtualization (finally!)

Now, this is probably a bit skewed since I’m an OpenShift guy and tend to be asked a lot of PaaS / IaaS / Cloud questions, but there was something different this year.  The questions I was being asked had enough depth that it wasn’t just casual prodding about things to expect with ‘cloud’, but questions from customers that had been through designs and experimentation themselves.  This was an exciting change for me because it means that IT shops are becoming comfortable enough with core capabilities like virtualization and provisioning to pursue more.  Private cloud and IaaS capabilities are maturing quite a bit, and there were a lot of questions on usage and expectations on all fronts.  The big issue everyone was bumping up against with the public cloud was data locality and regulations.  I think that is going to be a challenge for years to come, but luckily private cloud solutions are getting more robust, which will let IT shops in that predicament still compete.  In fact, Red Hat announced our Enterprise PaaS offering, which includes offerings installable in customer data centers.  I think being able to bridge the private and public cloud is going to be a key requirement for any vendor looking to compete in this space.

2. SELinux is getting more popular with cloud deployments

I’m not used to being asked SELinux questions at all, really.  Heck, I’m usually the guy promoting it at conferences and educating people on it.  So by the 3rd or 4th time SELinux conversations came up, I had to wonder what was changing.  My conclusion is that with most things cloud, achieving maximum density and cost effectiveness is usually a primary goal.  However, in many cases, extreme density comes at the cost of security.  Given the number of security exposures that made big headlines recently, it appears that a refocusing on the security side of solutions is occurring.  It also appears that people are starting to realize that SELinux provides a very elegant way to avoid having to make this compromise.  SELinux provides a layer of security at the kernel that allows you to securely segment filesystem and process interactions (among other things).  This allows you to run a variety of workloads on the same machines, using and overcommitting the underlying resources as necessary without giving up the ability to segment those applications.  Segmentation used to be something only attempted at a virtual machine or physical machine level.  That is no longer the case.  On OpenShift, we’ve been a fan of this approach for a long time and it has worked extremely well.  I’m glad that others are looking at the same model.

3. Developer efficiency is becoming a focus of IT shops

Okay, this one is definitely biased since I’m a PaaS guy.  That said, before I was a PaaS guy, I was an IT guy.  I worked on both the development and the operational side of the house for several years.  One of the things I learned about operations is that they traditionally care more about the running system than the development process.  The opposite is true for development – they often ignore the complexities of keeping their code running and focus on cutting new code.  However, as operations generally controls much of the IT budget, it was always a struggle to actually make the development side of the house productive.  In 2009, the idea of a DevOps model was introduced that would bring both sides of IT closer together, and harmony would ensue…  Why then has it taken until 2012 for this to start to take hold?  I think a very strong catalyst was needed to force a change in the existing behavior.  And these days, there are quite a few catalysts forcing change in many companies: the public cloud, the economy and startup competition.  For the first time in a while, I was talking with customers that were legitimately trying to bring their development and operational processes together.  In some cases, it was an operational change driven by the transparency of public cloud pricing and the realization that if they couldn’t compete, developers could get services elsewhere.  In some cases, it was economic pressure pushing companies to do more with less.  And in other cases, it was the realization of what some very small companies have been able to achieve just using cloud-based services.  Whatever the motivation, companies are re-invigorated to compete and ready to change to make that happen.  I love seeing that, and I think these companies will be the ones that push the boundaries of PaaS offerings and help that space continue to evolve.