December 2011

A pattern for using drupal_queue to run background tasks with Drupal 6

A common pattern for executing background tasks when writing software is to implement a FIFO (First In First Out) queue that runs every so often. I've been working on a module for Drupal 6 that utilizes several different queues to perform periodic tasks so we don't impact the user's experience on the site.
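The FIFO idea itself fits in a few lines of shell; this toy sketch (not Drupal code, just an illustration) shows items coming off the front in the same order they were pushed on:

```shell
# Toy FIFO: push three items, then take the oldest one off the front.
queue=()
queue+=("first")
queue+=("second")
queue+=("third")
head=${queue[0]}          # the worker always takes the oldest item
queue=("${queue[@]:1}")   # and removes it from the queue
echo "$head"
```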

One of those tasks is sending email to a number of different users, and when you're trying to email ~25,000 users with drupal_mail after a node is saved, it tends to take a while to complete. Enter the drupal_queue module, which is a backport of the Drupal 7 Queue API.

First, you have to implement hook_cron_queue_info:


/**
 * Implements hook_cron_queue_info().
 */
function thing_cron_queue_info() {
  $queues['EmailQueue'] = array(
    'worker callback' => 'email_queue_worker',
    'time' => 3600,
  );
  return $queues;
}
Then you need to create your queue worker:


/**
 * Worker function for the EmailQueue.
 *
 * @param $data
 *   An associative array containing:
 *   - to: A valid email address that the message will be sent to.
 *   - subject: The subject of the email message.
 *   - body: The body of the email message.
 *   - headers: Additional headers for the email message.
 */
function email_queue_worker($data) {
  watchdog('email_queue',
    'EmailQueueWorker: Sending email to: @to containing: @body',
    array('@to' => $data['to'], '@body' => $data['body'])
  );
  // Hand $data off to drupal_mail() (or your mail backend of choice) here.
}
After that, all you have to do is join the queue and add a new item to it, like so:


$email_queue = drupal_queue_get('EmailQueue');
$email_queue->createQueue();

// Sending email to a user
$message = array(
  'to' => '',
  'subject' => 'The subject of this email',
  'body' => 'The body of this email',
  'headers' => array('From' => ''),
);
$email_queue->createItem($message);

What this does is:

  • Join the queue
  • If the queue doesn't exist, create it
  • Create a new queue item and then add it to the queue

Bam, that's about it. All you have to do now is create a Jenkins job (or cron job if you're lazy) to run:

$ drush -r /path/to/drupal/instance queue-cron

And drupal_queue takes care of the rest. If you want to see what is in your queue, look at the queue table in the database. You can also have multiple queues declared in your hook_cron_queue_info; you can even have queues populate other queues.
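If you go the plain cron route, the entry is just that drush command on a schedule; the path, interval, and log file below are made-up examples, not from the original setup:

```shell
# crontab entry: run the queue workers every 15 minutes
*/15 * * * * drush -r /var/www/drupal queue-cron >> /var/log/drush-queue-cron.log 2>&1
```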

Managing your "Weekly Things I did" list with Google CL

As a developer I spend most of my day working on things, and every week or so I send an email to my boss with everything I've done that week. Lots of folks do this, and it takes a good 10 to 15 minutes out of my Monday that I could spend doing things like drinking coffee, setting up random webcams, or writing Drupal modules. Google has a set of command line tools called Google CL that makes managing your documents incredibly easy via the command line.

There are only two parts to this: first, a script that opens a Google document for editing, and second, a script that emails the contents of that document to someone.

First, we need the ability to create a new document weekly, so we create a small script:


google=$(which google)  # path to the google-cl binary
$google docs edit "Work $(date +%V)" --editor=vim --folder=weekly

What this does is:

  • Create the document that has the number of the current week in the title (if it doesn't already exist in your google docs)
  • Tell google-cl that the editor should be vim, you can use emacs or whatever editor you like
  • Finally store them in the weekly folder in your google docs

That's it: a super easy way to keep track of everything you do, and you end up with something looking like this:

I also have another script that emails a copy of my current list every Friday at 6:00pm; here's what that looks like:


#!/bin/bash
# Emails weekly updates to an email
# usage: ./gdocs_email

google=$(which google)  # path to the google-cl binary
mail=$(which mail)

updates_dump=/tmp/weekly-$(date +%V).txt
email_subject="Weekly Update: $(date +"%A %B %d, %Y")"

if [ $# -ne 1 ]; then
        echo "Usage: gdocs_email <>"
        exit 1
fi

$google docs get "Work $(date +%V)" $updates_dump
$mail -s "$email_subject" $1 < $updates_dump

Like I said, I run this command every Friday at 6:00pm; here's what Jenkins looks like:

You can get more information about what Google CL can do by running:

$ google --help

Web scraping and growl notifications with node.js and jsdom

I've been working on this idea for scraping data from a few sites and displaying them to me through growl. With node.js and a few extra modules this is remarkably easy.

First you need to make sure you have a few things installed:

  • xcode (a requirement for homebrew)
  • homebrew
  • node.js
  • npm

I like to use the version of node.js that's included in Mac homebrew, and npm installed from their one-line installer. After that you need to install a few dependencies:

$ npm install jsdom growl

Then you have to do something like this:

var jsdom = require('jsdom');
var growl = require('growl');

jsdom.env("", [
  // load jQuery into the page here
], function(errors, window) {
  var $ = window.$;
  var stat = $("#wrapper-content-inner strong:first").text();
  growl('Campus is operating under: ' + stat);
});
You end up with the ability to use jQuery to do something like this:



Reddit repost: Developing like a boss for Drupal

Okay, so I missed my daily blog post yesterday; I was kinda busy. I'm hoping to make up for it by doing two posts today, and this is the first one. It's a repost from a reddit thread titled "Drupal development 2 or more people on one project". Forgive my spelling and any other errors, I got kinda drunk in the middle of this one. Enjoy.

Certainly will. First, IDE. I'm a vim person and try to avoid using an IDE as much as possible. I find that they slow me down and with vim I have everything I need to edit any kind of file it opens. Plus I also use tmux for terminal emulation and zsh along with oh-my-zsh for my shell.

We use gitosis to host our repos and use command-line git to do everything else. I've heard very good things about gitolite, and we sometimes use github.

I've had lots of luck using the "A successful Git branching model" workflow within each project, with around 10 people contributing to the project. The key thing here is to never commit directly to the master branch and to use the --no-ff flag when merging branches. The rest is kinda common sense: create tags for things you want to remember (we tag for every new platform we deploy to aegir). You have to enforce this with your users; we use git_hooks to keep people from committing and merging into branches they shouldn't be touching, and we accept patches for changes they want to make to something they can't push to.
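That branching model boils down to a handful of git commands; here's a self-contained sketch in a throwaway repo (the feature branch and tag names are made up):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git checkout -q -b develop                       # never commit straight to master
git checkout -q -b feature-streaming-video       # hypothetical feature branch
git commit -q --allow-empty -m "add streaming video feature"
git checkout -q develop
git merge -q --no-ff -m "merge streaming video" feature-streaming-video
git tag platform-1.0                             # tag things you want to remember
```

The --no-ff flag is what keeps a merge commit in the history even when git could fast-forward, so you can always see where a feature branch came in.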

The next part is following a features-based workflow. Features allows you to store small pieces of functionality in module form that you can then enable on a site to provide that functionality. For example, we have a flash streaming server that users can upload media to and stream to different audiences. We wanted to provide a way for users to add "Streaming Video" content to their sites, and we also wanted to make that functionality work across our 190ish Drupal instances. Features lets us do that really easily by packaging up the code that provides the functionality, and now it's an option on all of our instances.

Here's the basic workflow for a features-based development cycle.

1) Create a vanilla instance of your drupal site. We use drush make files to define our builds and we have all of our code in the install profile. We base our make files off of how Open Atrium does theirs:

So our install profile file structure ends up looking kinda like this:


That is all versioned, and when you build your development instance you are cloning a fresh version of it. The reason we do it like this is so we can have each build defined with a make file, and if you notice, there is a make file included in the repo. The make file in the repo declares everything that should be included in an instance when you download it. It looks something like this:

core = 6.x
api = 2
projects[drupal][type] = core
; Contrib modules
projects[features][subdir] = "contrib"
projects[token][subdir] = "contrib"
projects[ctools][subdir] = "contrib"
projects[menu_block][subdir] = "contrib"

Now you need a make file to build this instance; it looks something like this:

api = 2
core = "6.x"

projects[drupal][type] = core

projects[PROFILENAME][type] = "profile"
projects[PROFILENAME][download][type] = "git"
projects[PROFILENAME][download][url] = ""
projects[PROFILENAME][download][branch] = "feature-streaming-video"

This will clone a copy of the profile and checkout the feature-streaming-video branch. You can also use tag instead of branch and checkout a specific tag of a repo.
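For example, pinning the profile to a tag instead of a branch would look something like this (the tag name here is made up):

```
projects[PROFILENAME][download][type] = "git"
projects[PROFILENAME][download][tag] = "1.0"
```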

So with drush make you would do something like this:

drush make --working-copy ./builds/stable.make www

The --working-copy flag tells drush make to include the git (or whatever VCS you use) versioning information for the project.

After that your profile will look like this:


Basically you've generated everything you need for a drupal instance to work.

Then you create your database and use drush site-install (or si for short) to install the site:

drush si appstate --db-url=mysql://MYSQLUSER:MYSQLPASSWD@localhost/DATABASE --account-name=admin --account-pass=password --site-name="My Site Name"

2) Now that you have an instance built, create your content type, views, or whatever else you need. Then, using Features, create a Feature that includes all of your functionality and untar it into the /profiles/PROFILENAME/modules/custom/ folder in your profile. At this point you want to add your changes to a feature branch in the repo or to the develop branch, but don't merge into master yet. Also, check out strongarm, which will let you export things from the variables table; that's helpful for including automatic url aliases in your features.

3) Now go back, drop all the tables in your database, and run the drush site-install command to create a vanilla install without anything left over from the initial creation of the feature. CCK creates a database table for each content type, so you have to be sure to remove everything so you don't get weird side effects when testing or developing the feature. Then re-enable the feature and ensure that everything you intended to be included with the feature is included. This would be a good time to write some tests too.
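The drop-and-reinstall loop in steps 2 and 3 boils down to a few drush commands; this is a sketch with placeholder profile, database, and feature names, and sql-drop assumes a drush recent enough to have it:

```shell
# wipe the instance and rebuild it from the profile + feature alone
drush -r /path/to/drupal/instance sql-drop -y      # drop every table
drush -r /path/to/drupal/instance si PROFILENAME \
  --db-url=mysql://MYSQLUSER:MYSQLPASSWD@localhost/DATABASE
drush -r /path/to/drupal/instance en -y FEATURENAME  # re-enable the feature
```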

That's basically it. You keep dropping the tables and reinitializing the site until you have it so all you have to do is enable a feature and everything is setup. After that you merge into the master branch, Jenkins runs some QA tests and eventually the code gets pushed to production.

Building an Aegir based Drupal hosting environment (Part 1)

I use the Aegir hosting system for all of my Drupal hosting needs, and it makes managing sites a whole lot easier. Here's a basic rundown of the new Aegir environment I'm putting together. I'm a big fan of Linode's VPS hosting; they have really nice prices and great service. I like to run either Debian or Ubuntu, and this box is going to be an Ubuntu 11.10 instance.

The first thing you need to do is make sure your system software is up to date.

root@aegir:~# sudo aptitude update
root@aegir:~# sudo aptitude safe-upgrade

Then we need to set the hostname for the box:

root@aegir:~# echo "" > /etc/hostname
root@aegir:~# hostname -F /etc/hostname

Next is configuring the timezone information; run the following command and choose your timezone from the list.

root@aegir:~# dpkg-reconfigure tzdata

After that is set up, we need to configure OpenSSH. Typically I like to force users to authenticate with public keys only and change the port that ssh runs on, but I'm only going to disable password-based authentication for this one and let users ssh on port 22.

Before you do the next part you should generate your keys and get them on the server.

root@aegir:~# vim /etc/ssh/sshd_config

Change the following lines to read:

PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no

Then reload the ssh process:

root@aegir:~# service ssh reload

Now users will not be able to log in to the box using a password; they will be forced to log in with a public key.
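A quick sanity check from your own machine (the hostname here is a placeholder) is to force password authentication and watch it get refused:

```shell
# should now fail with "Permission denied (publickey)"
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no root@aegir.example.com
```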


Building an Aegir based Drupal hosting environment (Part 2)

Now that the machine is set up, it's time to actually install Aegir and get things up and running. I pretty much follow the instructions on Aegir's community site, but because we're running Ubuntu 11.10 we can skip a few steps.

First we need to add the Aegir package source so aptitude knows where to grab the software from, add the key to apt-key, and run aptitude update.

root@aegir:~# echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/aegir-stable.list
root@aegir:~# wget -q -O- | sudo apt-key add -
root@aegir:~# aptitude update

After aptitude update finishes you can install Aegir and configure everything:

root@aegir:~# aptitude install aegir

It will ask you for a password for the root mysql account, email options (choose Internet Site), and the domain you want to run your Aegir instance on. You should choose the name you put into /etc/hostname when first setting up the machine.

Right, so now you have a working instance of Aegir ready to go. Now it's time to do some tweaking of the environment.

This instance already has the memory limit set to 128M, but if yours is set to something lower, say 2M or 20M, the first thing to do is expand the php memory limit and bump up the maximum upload limit. You can make these changes by editing php.ini:

root@aegir:~# vim /etc/php5/apache2/php.ini

And change the following values in it to match this:

; Maximum amount of memory a script may consume (128MB)
memory_limit = 128M
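The upload limits mentioned above live in the same php.ini; the sizes here are examples, so pick whatever fits your sites:

```ini
; Maximum size of POST data and of an individual uploaded file
post_max_size = 100M
upload_max_filesize = 100M
```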

Then you need to restart apache:

root@aegir:~# service apache2 graceful

Then yeah, that's about it for getting a basic instance up. In the next installment we'll discuss moving the mysql database to a different instance and tuning it, using Jenkins for periodic jobs instead of cron, putting a self-signed cert on the front end, and firewalling with iptables.


Building an Aegir based Drupal hosting environment (Part 3)

Okay, now that we have our Aegir instance up and the system updated, we need to get to work on securing the ports on the Linode. I'm a big fan of using iptables to do this, mainly because it's the first firewall I learned how to set up and it's installed by default in Ubuntu. You have to set up some type of firewall to block people from trying to connect to open ports and compromising your server.

root@aegir:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

By default Ubuntu doesn't have any firewall rules enabled, so your box is pretty much wide open. So I add these rules:

root@aegir:~# iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
root@aegir:~# iptables -A INPUT -p tcp --dport ssh -j ACCEPT
root@aegir:~# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
root@aegir:~# iptables -A INPUT -j DROP

  • First we're allowing established and related incoming connections
  • Then we're allowing ssh connections
  • After that we're allowing traffic on port 80
  • Finally we're dropping all other traffic

This is the output of iptables -L:

root@aegir:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            ctstate RELATED,ESTABLISHED 
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh 
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:www 
DROP       all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

After you run those commands, try accessing the Aegir instance and logging out and back into your server to make sure your firewall rules aren't broken. If you do have trouble, Linode has a nifty shell you can use to log in to the instance and fix whatever you broke.

Now we need to save the rules so they're still there when the server reboots. I use iptables-save to generate the rules file like so:

root@aegir:~# iptables-save > /etc/iptables.rules

That gives us this:

root@aegir:~# more /etc/iptables.rules 
# Generated by iptables-save v1.4.10 on Thu Dec 22 14:17:35 2011
*filter
:INPUT ACCEPT [52640:65134441]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [20821:1765839]
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT 
-A INPUT -j DROP 
COMMIT
# Completed on Thu Dec 22 14:17:35 2011

Right, so now that we have our iptables config set up, we need to edit the /etc/network/interfaces file to tell it about our new rules.

root@aegir:~# vim /etc/network/interfaces

And make it look like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
  pre-up iptables-restore < /etc/iptables.rules
  post-down iptables-restore < /etc/iptables.downrules
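Note that the post-down line references /etc/iptables.downrules, which we haven't created; the original doesn't show that step, but one way to make it is to save a rules file for the interface to restore on the way down:

```shell
root@aegir:~# iptables-save > /etc/iptables.downrules
```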

Now if we reboot the machine, we should have the iptables rules working properly:

zach@brains ~ » ssh
Last login: Thu Dec 22 13:58:47 2011 from
root@aegir:~# uptime
 14:27:29 up 0 min,  1 user,  load average: 0.15, 0.03, 0.01
root@aegir:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            ctstate RELATED,ESTABLISHED 
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh 
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:www 
DROP       all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Sweet, the server is now blocking all traffic except ssh and port 80 traffic.



Building an Aegir based Drupal hosting environment (Part 4)

Right, so in the interest of time I'm going to go through how I set up my Aegir instances and some best practices I follow when working with Aegir.

First, let's extend Aegir a bit. There's a great page of contributed modules for Aegir, and there are a few that I really like. Here's what I typically download and use:

This is pretty much the only functionality I see missing from Aegir core. It could easily be done with a cron job or a Jenkins job, but I choose to let Aegir manage its own backups; it's built in anyways. Switch to the aegir user and download the files:

root@aegir:~# su aegir 
aegir@aegir:~$ drush @hostmaster dl hosting_backup_gc, hosting_backup_queue --destination=profiles/hostmaster/modules
Project hosting_backup_gc (6.x-1.2) downloaded to profiles/hostmaster/modules/hosting_backup_gc.         [success]
Project hosting_backup_queue (6.x-1.0-beta4) downloaded to                                               [success]

Now go to your Aegir instance, go to the Features admin page, click on the Experimental fieldset, and enable the backup features we downloaded earlier.

Now configure the Backup garbage collection module. I enabled it and set it to keep two weeks' worth of backups, thinning them down to one backup per week when it gets rid of the old ones.

And then configure the Backup queue to schedule backups.

Then when you go to create a site you have the option to override the default backup settings.

Well, there you go: a secure and working Aegir instance ready to host Drupal instances, in four parts. We'll move the database to a different machine in the next few days, get a cert on the box, and get Jenkins working too.


Building an Aegir based Drupal hosting environment (Part 5)

Okay, now that we've got our instances backing up regularly and our server updated and secure, I want to start running Aegir under ssl. I'm not a big fan of passing passwords in plain text across the internet.

First you need to make sure you have openssl installed:

root@aegir:~# aptitude show openssl
Package: openssl                         
State: installed
Automatically installed: no
Version: 1.0.0e-2ubuntu4
Priority: standard
Section: utils
Maintainer: Ubuntu Developers 
Uncompressed Size: 1,040 k
Depends: libc6 (>= 2.7), libssl1.0.0 (>= 1.0.0)
Suggests: ca-certificates
Description: Secure Socket Layer (SSL) binary and related cryptographic tools
This package contains the openssl binary and related tools. 

It is part of the OpenSSL implementation of SSL. 

You need it to perform certain cryptographic actions like: 
* Creation of RSA, DH and DSA key parameters;
* Creation of X.509 certificates, CSRs and CRLs;
* Calculation of message digests;
* Encryption and decryption with ciphers;
* SSL/TLS client and server tests;
* Handling of S/MIME signed or encrypted mail.

Apparently Ubuntu installs it by default on 11.10; you might not have it and need to install it from aptitude.
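The actual cert-generation step didn't survive in this post, but a typical self-signed pair looks something like this (the CN is a placeholder, and I'm writing to a temp dir so it's safe to run anywhere; on the server you'd use something like /etc/apache2/ssl):

```shell
# generate a self-signed cert and key, then set the permissions
ssl_dir=$(mktemp -d)
openssl req -new -x509 -days 365 -nodes \
  -subj "/CN=aegir.example.com" \
  -keyout "$ssl_dir/aegir.key" -out "$ssl_dir/aegir.crt"
chmod 644 "$ssl_dir/aegir.crt"   # the cert is world-readable
chmod 600 "$ssl_dir/aegir.key"   # the key stays private
```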

There, you now have certs and you've made them read-only for everyone. Now we have to change the iptables rules to allow us to talk over port 443. In Part 3 of this series I opened a few ports (80 and 22) that the machine can talk through; well, we need to open port 443 now.

root@aegir:~# iptables -I INPUT 3 -p tcp --dport 443 -j ACCEPT
root@aegir:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            ctstate RELATED,ESTABLISHED 
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh 
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:https 
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:www 
DROP       all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

There we go, port 443 is now open in the firewall (the -I INPUT 3 inserts the new rule at position 3, above the final DROP rule). If we want our rules to survive a reboot we need to regenerate /etc/iptables.rules.

root@aegir:~# iptables-save > /etc/iptables.rules

Let's check and see if the apache ssl mod is enabled.

root@aegir:~# apache2ctl -t -D DUMP_MODULES | grep ssl
[Thu Dec 22 16:17:58 2011] [warn] NameVirtualHost *:80 has no VirtualHosts
Syntax OK
ssl_module (shared)

Right, so it is. If it wasn't enabled, you would do this:

root@aegir:~# a2enmod ssl

Now we have to configure Aegir to work over ssl; use this guide, it's how you do it. After that you should have Aegir running under ssl. I select the "require" option when saving the Aegir instance.


Building an Aegir based Drupal hosting environment (Part 6)

There's only one thing left I've got to set up before we can call this pretty much done: I want to be able to clone git repos hosted in my gitolite instance. First we need to generate an ssh key pair, and I know it's going to sound weird, but we need to generate a key without a passphrase. I'm okay with it because it will only have read-only access to the repo and the repo can only talk to specific boxes.

root@aegir:~# su aegir
aegir@aegir:/root$ cd
aegir@aegir:~$ ssh-keygen -t rsa

I added the key it generated to my gitolite-admin config for the install profile the site should be built from and for my builds repo, and pushed the changes up.

Next I cloned the builds directory:

aegir@aegir:~$ cd ~/
aegir@aegir:~$ git clone
Cloning into builds...
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (3/3), done.

After that, all I have to do is build a platform and a site, and we're ready to start building Drupal sites.


A simple node.js web application environment

I'm a big fan of using node.js to build small web applications; I feel that javascript is the future for the web, and developers can do some cool stuff with it. So far I've built a few applications that utilize node.js, plus a few node modules that make building applications a lot easier. I like to build most of my apps with express for the application, jade for templates, and less for generating css.

To work with node.js you need to install it along with its package manager, npm. I used the homebrew version of node:

zach@brains ~/projects » brew install node

And the version of npm from their website:

zach@brains ~/projects » curl | sh

Now create a folder for your app:

zach@brains ~/projects » mkdir

And download express, jade, and less:

zach@brains projects/ » npm install express jade less
less@1.1.6 ./node_modules/less 
jade@0.20.0 ./node_modules/jade 
├── mkdirp@0.2.1
└── commander@0.2.1
express@2.5.2 ./node_modules/express 
├── mime@1.2.4
├── qs@0.4.0
├── mkdirp@0.0.7
└── connect@1.8.5

Now you can go ahead and create your express application

zach@brains projects/ » ./node_modules/express/bin/express -s -t jade -c less appname

   create : appname
   create : appname/package.json
   create : appname/app.js
   create : appname/public
   create : appname/routes
   create : appname/routes/index.js
   create : appname/views
   create : appname/views/layout.jade
   create : appname/views/index.jade
   create : appname/public/javascripts
   create : appname/public/images
   create : appname/public/stylesheets
   create : appname/public/stylesheets/style.css

   dont forget to install dependencies:
   $ cd appname && npm install

And boom, you have an express web application ready to go. You can start the application in development mode and go to localhost:3000 to see it in action:

zach@brains projects/ » node ./appname/app.js
Express server listening on port 3000 in development mode

Now go ahead and build your app.