Asterisk, and other worldly endeavours.

A blog by Leif Madsen


CentOS 5.8 On AWS EC2 With Xen Kernel (PVGRUB)

At CoreDial we’ve been using a lot of AWS EC2 lately to build sandbox infrastructure for testing. Part of that infrastructure is a voice platform utilizing Asterisk 1.4 and 1.8, which use Zaptel and DAHDI respectively for MeetMe(). This hasn’t been a problem previously, as our testing has been either on bare metal or on other virtual machine systems where installing a base image and a standard kernel is not an issue.

However, with the introduction of a lot of EC2 instances into our testing process, we ran into issues building our own DAHDI RPMs, since there aren’t any EC2 kernel development packages outside of OpenSuSE (which we don’t use). After spending a day trying to hack around it, Kevin found a PDF from Amazon stating that AWS now supports loading your own kernels via PVGRUB. Great! If I could do that, then I could just continue using the same RPMs I’d be building anyway (albeit against the Xen-based kernel, but that’s an easy change in the spec file).

Unfortunately this was not nearly as trivial as it first appeared. The first problem was figuring out the correct magic kernel AKI to load, and the PDF wasn’t incredibly clear about which one to use. (There are two different styles of AKI, one called “hd0” and another called “hd00”, which I’ll get into shortly.) After searching Google and looking through several forum posts and other blogs (linked at the end), I finally found a combination that seems to work for our imported CentOS 5.8 base image. Below is the list of steps I executed after loading up an image from our base AMI:

  • yum install grub kernel-xen kernel-xen-devel
  • grub-install /dev/sda
  • cd /boot/
  • mkinitrd -f -v --allow-missing --builtin uhci-hcd --builtin ohci-hcd --builtin ehci-hcd --preload xennet --preload xenblk --preload dm-mod --preload linear --force-lvm-probe /boot/initrd-2.6.18-308.13.1.el5xen.img 2.6.18-308.13.1.el5xen
  • touch /boot/grub/menu.lst
  • edit /boot/grub/menu.lst so that it contains the following:
default 0
timeout 1

title EC2
     root (hd0)
     kernel /boot/vmlinuz-2.6.18-308.13.1.el5xen root=/dev/sda1
     initrd /boot/initrd-2.6.18-308.13.1.el5xen.img
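Before snapshotting, it’s worth double-checking that the initrd built by mkinitrd really picked up the Xen drivers. An initrd is just a gzipped cpio archive, so you can list it and grep; a small sketch (on the instance you’d point zcat at the real /boot/initrd-2.6.18-308.13.1.el5xen.img — a throwaway archive stands in here so the snippet runs anywhere):

```shell
# Build a stand-in initrd: a gzipped cpio archive containing a few fake drivers
workdir=$(mktemp -d)
mkdir -p "$workdir/lib"
touch "$workdir/lib/xennet.ko" "$workdir/lib/xenblk.ko" "$workdir/lib/dm-mod.ko"
(cd "$workdir" && find . -type f | cpio -o 2>/dev/null | gzip > /tmp/initrd-demo.img)

# The actual check: list the archive contents and grep for the preloaded drivers
found=$(zcat /tmp/initrd-demo.img | cpio -it 2>/dev/null | grep -E 'xennet|xenblk')
echo "$found"
```

If the grep comes back empty, the mkinitrd step above didn’t actually pull in the drivers and the instance won’t boot under the Xen kernel.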

Once the changes were made to the image, I took a snapshot of the running instance’s volume and then created an image from the snapshot, selecting a new kernel ID in the process. The kernel IDs for the various regions and architectures are listed in the PDF. As our base image was CentOS 5.8 i386 in us-east-1, I had to choose between aki-4c7d9525 and aki-407d9529. The relevant paragraph in the PDF seems to indicate the difference depends on what type of machine you’re using, and references S3- or EBS-based images. We are using EBS-based images, so I tried the first one, which failed miserably. After reading through the IonCannon blog post it became clear that the hd0 and hd00 AKIs really differ in whether you have a single partition, or multiple partitions with a separate /boot/ partition.
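For reference, the snapshot-then-register flow can be sketched with today’s AWS CLI (I did this through the console at the time; the volume and snapshot IDs below are placeholders):

```
# Snapshot the running instance's root volume (placeholder volume ID)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "CentOS 5.8 PVGRUB base"

# Register an EBS-backed image from the snapshot, selecting the hd0 PVGRUB AKI
aws ec2 register-image \
    --name "centos58-pvgrub" \
    --architecture i386 \
    --kernel-id aki-407d9529 \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123456789abcdef0}"
```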

With that bit of knowledge, and knowing that we had only a single partition containing our /boot/ directory, I knew to use aki-407d9529 (hd0). Another forum post also pointed out that I needed to enable some modules for the Xen kernel or the system wouldn’t boot (and I verified that by stepping through each of the steps listed above to make sure it was required). With those two major items checked off, I am now able to build an AMI that loads a stock CentOS Xen kernel image, making it trivial to build RPMs against.

Note: If you do happen to use separate partitions, make sure you use the hd00 AKI. In the menu.lst you need to make sure to use root (hd0,0) instead of just (hd0). Additionally, your menu.lst file needs to live at /boot/boot/grub/menu.lst since AWS is going to look in the /boot/grub/menu.lst location on the /boot/ partition. On a single partition the file can just live at /boot/grub/menu.lst.
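For the separate-partition case, here is roughly what the hd00 variant of the menu.lst would look like, assuming the common layout where /dev/sda1 is the /boot partition and /dev/sda2 holds the root filesystem (a sketch based on the note above, not a config I have tested):

```
default 0
timeout 1

title EC2
     root (hd0,0)
     kernel /vmlinuz-2.6.18-308.13.1.el5xen root=/dev/sda2
     initrd /initrd-2.6.18-308.13.1.el5xen.img
```

Note the kernel and initrd paths lose their /boot prefix, since (hd0,0) is the /boot partition itself and the kernel files sit at its root.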


Written by Leif Madsen

2012/08/22 at 9:10 am

Initial impressions of qemu-kvm (virtualization server)

The qemu-kvm package on Ubuntu 10.10 allows you to create virtual machines much like VMware, Xen, etc.

My initial impressions are generally pretty positive. I like that it lets you install multiple operating systems (including MS Windows, which I haven’t tried yet), and it doesn’t use a web interface like VMware Server 2 (which I’ve found to be terribly crash prone, requiring a restart of the web interface at the least, and sometimes a restart of the entire server, often abruptly via kill). My favourite part is virt-manager, which lets you manage the system and install virtual machines remotely over SSH using VNC (or at least something similar to it).
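That remote management works by pointing virt-manager (or virsh) at a libvirt URI over an SSH transport; a quick sketch, with a hypothetical user and host name:

```
# Open the virt-manager GUI against a remote libvirt host over SSH
virt-manager --connect qemu+ssh://leif@vmhost/system

# Or list that host's VMs from the command line with virsh
virsh --connect qemu+ssh://leif@vmhost/system list --all
```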

The one problem I had initially was installing CentOS 5.5. It was terribly slow, and when I started the virtual machine, I couldn’t get it past GRUB. I thought perhaps it was just a problem with running CentOS VMs on an Ubuntu host, but then I had the same problem with an Ubuntu installation (which for some reason took at least 3-4 times as long to install as CentOS).

I looked around and found people with similar issues, but they all basically just outlined reinstalling GRUB via a rescue disk. I tried this, which got me a little bit further, but there was still no GRUB menu and no system boot.

Then I found a post indicating that restarting the KVM service made things work, which I tried, but nothing changed. So as a last-ditch effort I rebooted the server, which I didn’t expect to actually fix anything, but oddly enough it worked. The existing VMs I had installed started up fine, and the new installation I tried went significantly faster.

I haven’t had much of a chance to try more than what I’ve described, but thus far I like KVM a lot more than VMware. Hopefully I keep enjoying it as I go forward now that I’ve gotten VMs to boot 🙂

Written by Leif Madsen

2011/02/08 at 8:18 pm