Asterisk, and other worldly endeavours.

A blog by Leif Madsen

Selecting Chef Servers With Environment Variables

Today I got to play around with dynamically selecting different chef servers in preparation for migrating some of our nodes away from our chef-dev server to our chef-live server (which I’m currently in the process of building and populating with data). I had been talking in the #chef IRC channel a few weeks back about making things dynamic, or at least easily switchable, when using multiple chef servers for different groups of servers in an environment.

What I want to do is be able to set an environment variable at my console in order to switch between chef servers. Previously I had been doing this with different files in my ~/.chef/ directory and changing symlinks between them. This method works, but is kind of annoying. So with the help of some of the folks in #chef, and with a gist of a sample file that someone is using for their hosted chef environment, I was able to build my own knife.rb and commit it to our chef.git repository.

In our chef.git repository, I created a directory .chef and placed a knife.rb file in it:

$ cd ~/src/chef-repo
$ mkdir .chef
$ touch .chef/knife.rb

I then filled knife.rb with the following contents:

current_dir = File.dirname(__FILE__)

sys_user = ENV["USER"]

log_level                :info
log_location             STDOUT
node_name                sys_user
client_key               "#{ENV["HOME"]}/.chef/#{ENV["KNIFE_ENV"]}/#{ENV["USER"]}.pem"
validation_client_name   "chef-validator"
validation_key           "#{ENV["HOME"]}/.chef/#{ENV["KNIFE_ENV"]}/validator.pem"
chef_server_url          "http://chef-#{ENV["KNIFE_ENV"]}"
cache_type               'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
cookbook_path            [ "#{current_dir}/../cookbooks", "#{current_dir}/../site-cookbooks" ]

The main key is the KNIFE_ENV environment variable, which I set using either export KNIFE_ENV=dev or export KNIFE_ENV=live.

After setting the environment variable, the right server is selected for me. Additionally, I created corresponding directories in my ~/.chef/ directory and copied my validation.pem and client.pem files into them: $ mkdir ~/.chef/{live,dev}
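Putting the pieces together, here is a small shell sketch of the setup and what the knife.rb above will resolve to for each environment (USER and HOME come from your shell; chef-dev and chef-live are my server hostnames):

```shell
#!/bin/bash
# Create the per-environment key directories used by knife.rb.
mkdir -p ~/.chef/dev ~/.chef/live

# With KNIFE_ENV set, show what knife.rb will resolve to.
export KNIFE_ENV=dev
echo "chef_server_url -> http://chef-$KNIFE_ENV"
echo "client_key      -> $HOME/.chef/$KNIFE_ENV/$USER.pem"
echo "validation_key  -> $HOME/.chef/$KNIFE_ENV/validator.pem"
```

Switching environments is then just re-exporting KNIFE_ENV and re-running knife.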

With all that done, I can now easily switch between our different servers in order to start the migration of our nodes. (I might create another blog post about that in the future if I get a chance.)

“BUT HOW DO I KNOW WHICH ENVIRONMENT I’M WORKING WITH?!?!?!”, you say? Oh fancy this little PS1 and function I added to my ~/.bashrc file:

if [ "$KNIFE_ENV" == "" ]; then
  export KNIFE_ENV="dev"
fi

function which_env {
  # red (31) for live, green (32) for everything else
  if [ "$KNIFE_ENV" == "live" ]; then
    echo "31"
  else
    echo "32"
  fi
}

export PS1='[\u@\h \[\033[0;36m\]\W$(__git_ps1 "\[\033[0m\]\[\033[0;33m\](%s) \[\033[0;`which_env`m\]~$KNIFE_ENV~")\[\033[0m\]]\$ '

Is nice 🙂

Written by Leif Madsen

2012/08/22 at 1:49 pm

Posted in DevOps


CentOS 5.8 On AWS EC2 With Xen Kernel (PVGRUB)

At CoreDial we’ve been using a lot of AWS EC2 lately for building sandbox infrastructure for testing. Part of the infrastructure is a voice platform utilizing Asterisk 1.4 and 1.8, and those voice platforms are using Zaptel and DAHDI respectively for use with MeetMe(). This hasn’t been an issue previously as our testing has either been on bare metal, or in other virtual machine systems where installation of a base image and standard kernel are not an issue.

However, with the introduction of a lot of EC2 instances in our testing process, we ran into issues with building our own DAHDI RPMs since there aren’t any EC2 kernel development packages outside of OpenSuSE (which we don’t use). After spending a day trying to hack around it, Kevin found a PDF from Amazon that states AWS now supports the ability to load your own kernels via PVGRUB. Great! If I can do that, then I can just continue using the same RPMs I’d be building anyway (albeit against the Xen-based kernel, but that’s easy to do in the spec file).

Unfortunately this was not nearly as trivial as it first appeared. The first problem was that I had to figure out the correct magic kernel AKI that needed to be loaded, and the PDF wasn’t incredibly clear about which one to use. (There are two different styles of AKI, one called “hd0” and another called “hd00”, which I’ll get into shortly.) After searching Google and looking through several forum posts and other blogs (linked at the end), I finally found a combination that seems to work for our imported CentOS 5.8 base image. Below is a list of the steps I executed after loading up an image from our base AMI:

  • yum install grub kernel-xen kernel-xen-devel
  • grub-install /dev/sda
  • cd /boot/
  • mkinitrd -f -v --allow-missing --builtin uhci-hcd --builtin ohci-hcd --builtin ehci-hcd --preload xennet --preload xenblk --preload dm-mod --preload linear --force-lvm-probe /boot/initrd-2.6.18-308.13.1.el5xen.img 2.6.18-308.13.1.el5xen
  • edit /boot/grub/menu.lst so it contains the following (cat /boot/grub/menu.lst):
default 0
timeout 1

title EC2
     root (hd0)
     kernel /boot/vmlinuz-2.6.18-308.13.1.el5xen root=/dev/sda1
     initrd /boot/initrd-2.6.18-308.13.1.el5xen.img

Once the changes were made to the image, I took a snapshot of the running instance’s volume. I then created an image from the snapshot. When creating the image, I selected a new kernel ID. The kernel IDs for the various zones and architectures are listed in the PDF. As our base image was CentOS 5.8 i386 in the us-east-1 zone, I had to select between either aki-4c7d9525 or aki-407d9529. The paragraph above seems to indicate there is a difference based on what type of machine you’re using, and references S3 or EBS based images. We are using EBS based images, so I tried the first one, which in the end failed miserably. After reading through the IonCannon blog post it became clear that the hd0 and hd00 AKIs really differ in whether you have a single partition, or multiple partitions with a separate /boot/ partition.

With that bit of knowledge, and knowing that we only had a single partition that contained our /boot/ directory, I knew to use aki-407d9529 (hd0). Another forum post also pointed out that I needed to enable some modules for the xen kernel or the system wouldn’t boot (and I verified that by stepping through each of the steps listed above to make sure it was required). With those two major items checked off, I am now able to build an AMI that will load with a stock CentOS Xen kernel image, making it trivial to build RPMs against now.

Note: If you do happen to use separate partitions, make sure you use the hd00 AKI. In the menu.lst you need to make sure to use root (hd0,0) instead of just (hd0). Additionally, your menu.lst file needs to live at /boot/boot/grub/menu.lst since AWS is going to look in the /boot/grub/menu.lst location on the /boot/ partition. On a single partition the file can just live at /boot/grub/menu.lst.
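For that separate-partition case, a menu.lst might look something like the following sketch (untested here; the kernel version matches the mkinitrd step earlier, and root=/dev/sda2 is an assumed root device). Since grub’s root is the /boot partition itself, the kernel and initrd paths drop the leading /boot:

```
default 0
timeout 1

title EC2
     root (hd0,0)
     kernel /vmlinuz-2.6.18-308.13.1.el5xen root=/dev/sda2
     initrd /initrd-2.6.18-308.13.1.el5xen.img
```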


Written by Leif Madsen

2012/08/22 at 9:10 am

Assign unique hostname to dhcp client with dnsmasq

Today I’ve been getting our lab environment set up with vagrant to auto-provision our lab servers with chef server, in order to allow the development team to quickly and easily turn up and tear down web application servers.

When a server gets spun up with vagrant, it registers itself as a new node with the chef server using its hostname. Since using localhost for every node pretty much makes the chef server useless for more than one virtual machine at a time, I needed to figure out how to get dnsmasq to assign a unique hostname based on the IP address it hands out to the dhcp client.

I had seen a similar thing done with Amazon EC2 instances: when they come up, they get a hostname that mirrors the private IP address they’ve been assigned. For example, if the private IP address assigned to the server was 192.168.12.14, it would get a hostname like ip-192-168-12-14. I wanted to do a similar thing with our servers.

After a little bit of Googling and reading the dnsmasq configuration file, it dawned on me how simple this really was. You simply define the hostnames that the dnsmasq server can assign by listing them, with their IP addresses, in the /etc/hosts file on the dnsmasq server; dnsmasq then supplies the matching hostname to whichever client leases that address. I didn’t want to use the MAC address of the servers (a la the dhcp-host option) since the MAC address will be different each time I spin up a virtual machine.

So in my dnsmasq.conf file I might have something defined like:

dhcp-range=90.100.1.120,90.100.1.124
So in my /etc/hosts file I’d just place the following to assign those unique hostnames:

90.100.1.120    ip-90-100-1-120
90.100.1.121    ip-90-100-1-121
90.100.1.122    ip-90-100-1-122
90.100.1.123    ip-90-100-1-123
90.100.1.124    ip-90-100-1-124
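Since the hostnames follow mechanically from the IP addresses, the /etc/hosts entries can be generated instead of typed by hand. A small sketch using the 90.100.1.x range from the example above:

```shell
#!/bin/sh
# Emit "IP<tab>hostname" pairs in the ip-A-B-C-D style for dnsmasq's /etc/hosts.
prefix_dots="90.100.1"
prefix_dash="90-100-1"
for i in $(seq 120 124); do
  printf '%s.%s\tip-%s-%s\n' "$prefix_dots" "$i" "$prefix_dash" "$i"
done
```

Widening the dhcp-range later is then just a matter of regenerating the list.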

Written by Leif Madsen

2012/07/23 at 2:14 pm

bash creating files named ‘1’ everywhere!

So I ran into something kind of stupid today 🙂 Adding a little note for anyone who might run into something similar.

I have some ssh-add stuff that gets run in my .bashrc file, but when silencing its output, I was doing:

ssh-add ~/.ssh/some_key > /dev/null 2&>1

Note the 2&>1 at the end. Bash reads that as redirecting output to a file literally named 1 (the stray 2 is just passed along as an argument). You need to flip the &> into >&, so the fixed version looks like:

ssh-add ~/.ssh/some_key > /dev/null 2>&1
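To see the difference concretely, here is a small bash demo in a scratch directory, using echo instead of ssh-add so it can run anywhere:

```shell
#!/bin/bash
cd "$(mktemp -d)"

# Broken form: bash treats this as "&>1", sending output to a file named "1".
echo hello > /dev/null 2&>1
ls    # a file named "1" now exists

rm -f 1

# Fixed form: 2>&1 duplicates stderr onto stdout; no stray file.
echo hello > /dev/null 2>&1
ls    # nothing
```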

Written by Leif Madsen

2012/07/19 at 10:03 am

Posted in Musings, Programming


Integration Testing Using Jenkins (Part 1)

So for the last week or so, I’ve been tasked at CoreDial with adding our own set of integration testing now that we’re moving to a more formal deployment method using chef. After peppering thehar of Lookout Mobile Security with questions, and with the help of Google, #chef, and jhansche in #jenkins, I’ve finally got a nice clean proof of concept that we can evaluate and likely deploy.

I’ll come back later with another article on my installation issues with jenkins and the solutions I found (nothing too terribly complicated), but what I wanted to blog about was the two types of tests that I’ve been focusing on and was finally able to get working.

First, I wanted to simply get a working test going in jenkins since I’d never used it before and needed a minimum viable product to look at. Based on a recommendation from thehar a couple weeks ago, I looked at foodcritic, got that working, and with their instructions, was able to get that integrated for my first automated test in jenkins.

The main problem I had was getting an environment path variable set so that I could execute a ruby shell (#!/usr/bin/env rvm-shell 1.9.3, per the foodcritic instructions). After some searching, I came across a hint (sorry, I’ve misplaced the link) that said I needed to add source /etc/profile to the bottom of my /etc/default/jenkins file, which worked marvellously: the command I was trying to run finally went. (Note that I installed on Ubuntu 12.04 for this test.)

(Prior to that, I installed rvm and then ran the multi-user instructions to get ruby 1.9.3 installed. I also installed foodcritic via gem install foodcritic which depends on ruby 1.9.2+.)

Having created my first job, I filled in the Git information to connect to my git server. I ran into a few issues there, and needed to create a new .ssh directory in /var/lib/jenkins/.ssh/ (/var/lib/jenkins is the $HOME directory of jenkins). I then placed the appropriate authentication keys in the directory, but was still having issues with connecting to the server. It ended up being that I needed to add a config file to the .ssh directory with the following contents:

Host coredial-git
  User git
  IdentityFile /var/lib/jenkins/.ssh/id_rsa.key
  StrictHostKeyChecking no

After adding this, I could set the repository URL to git@coredial-git:chef-repo.git and the branch specifier to something like */feature/ENG-* in order to test all our engineering testing branches. I then set up Poll SCM with a polling schedule of */5 * * * *. (I set it to */1 at first for testing, and will likely increase this further, or add a post-commit hook to git.)

The actual command I’m running in the Execute Shell section looks like this:

#!/usr/bin/env rvm-shell 1.9.3
foodcritic -f any site-cookbooks/my_awesome_cookbook

Then I saved the job, made some changes, and during the next poll was able to trigger both an expected failure and an expected pass. Very cool indeed!

Written by Leif Madsen

2012/06/26 at 7:51 am

Posted in DevOps


rpmlint non-utf8-spec-file error

I’ve been doing a bunch of work with RPMs lately, and while running rpmlint against a spec file I was modifying, I received this error:

myfile.spec: E: non-utf8-spec-file myfile.spec

After Googling, I found a way of locating the non-compliant characters:

$ iconv -f ISO-8859-8 -t UTF-8 myfile.spec > converted.spec
$ diff -u myfile.spec converted.spec
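On a related note, iconv can also act as a pure validity check: converting UTF-8 to UTF-8 exits non-zero as soon as it hits an invalid byte, so you don’t need to know the source encoding just to detect the problem. A generic sketch (the sample file and its stray Latin-1 byte are made up for illustration):

```shell
#!/bin/bash
# Write a sample file containing a Latin-1 e-acute (0xE9), which is invalid UTF-8.
f=$(mktemp)
printf 'caf\351 latte\n' > "$f"

# Converting UTF-8 to UTF-8 is a no-op on valid files, but fails on the bad byte.
if iconv -f UTF-8 -t UTF-8 "$f" > /dev/null 2>&1; then
  echo "valid UTF-8"
else
  echo "not valid UTF-8"
fi
```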

(Answer thanks to Dominique Leuenberger.)

Written by Leif Madsen

2012/02/23 at 1:43 pm

Posted in Useful Tools


Converting multiple exten => lines to using same => in Asterisk dialplan

Last week I started converting some Asterisk 1.4 dialplan for an Asterisk 1.8 based system, and in that process wanted to convert lines like:

exten => _NXXNXXXXXX,1,NoOp()
exten => _NXXNXXXXXX,2,GotoIf($[...]?reject,1)
exten => _NXXNXXXXXX,3,Dial(SIP/foo/${EXTEN})

into using the same => prefix:

exten => _NXXNXXXXXX,1,NoOp()
 same => n,GotoIf($[...]?reject,1)
 same => n,Dial(SIP/foo/${EXTEN})

In order to do that, Mike King helped me out with the following regular expression, which I used in vim:

%s/exten\s*=>\s*[^,]\+,\s*[n2-9]/ same => n/g
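For bulk conversions outside of vim, roughly the same expression works with sed. A sketch (it assumes a sed with -E extended regexes and uses [[:space:]] for portability):

```shell
#!/bin/bash
# sed equivalent of the vim substitution: priorities 2-9 (or n) become " same => n".
sed -E 's/exten[[:space:]]*=>[[:space:]]*[^,]+,[[:space:]]*[n2-9]/ same => n/' <<'EOF'
exten => _NXXNXXXXXX,1,NoOp()
exten => _NXXNXXXXXX,2,GotoIf($[...]?reject,1)
exten => _NXXNXXXXXX,3,Dial(SIP/foo/${EXTEN})
EOF
```

Priority 1 lines are untouched because 1 is deliberately excluded from [n2-9]; like the original expression, this only handles single-digit priorities.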

Written by Leif Madsen

2012/01/16 at 8:28 am