Building an Aegir based Drupal hosting environment (Part 2)

Now that the machine is set up, it's time to actually install Aegir and get things up and running. I pretty much follow the instructions on Aegir's community site, but because we're running Ubuntu 11.10 we can skip a few steps.

First we need to add the Aegir package source so apt knows where to grab the software from, add the project's key with apt-key, and run aptitude update.

root@aegir:~# echo "deb http://debian.aegirproject.org stable main" | sudo tee -a /etc/apt/sources.list.d/aegir-stable.list
root@aegir:~# wget -q http://debian.aegirproject.org/key.asc -O- | sudo apt-key add -
root@aegir:~# aptitude update

After aptitude update finishes you can install Aegir and configure everything:

root@aegir:~# aptitude install aegir

It will ask you for a password for the MySQL root account, email options (choose Internet Site), and the domain you want to run your Aegir instance on. For the domain, use the name you put in /etc/hostname when first setting up the machine.

Right, so now you have a working instance of Aegir ready to go. Time to do some tweaking of the environment.

This instance already has the PHP memory limit set to 128M, but if yours is set to something lower, say 2M or 20M, the first thing to do is raise the PHP memory limit and bump up the maximum upload size. You can make these changes by editing php.ini:

root@aegir:~# vim /etc/php5/apache2/php.ini

And change the following values in it to match this

; Maximum amount of memory a script may consume (128MB)
; http://php.net/memory-limit
memory_limit = 128M

; Maximum allowed size for uploaded files
; http://php.net/upload-max-filesize
upload_max_filesize = 20M

; Maximum size of POST data that PHP will accept
; (should be at least as large as upload_max_filesize)
; http://php.net/post-max-size
post_max_size = 20M
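If you'd rather script the change than edit by hand, a sed one-liner works too. This is just a sketch run against a stand-in copy so it's safe to try anywhere; on the server you'd point it at /etc/php5/apache2/php.ini instead:

```shell
# Stand-in copy of php.ini so this sketch touches nothing real;
# on the server, target /etc/php5/apache2/php.ini instead.
ini=/tmp/php.ini.demo
printf 'memory_limit = 32M\nupload_max_filesize = 2M\n' > "$ini"

# Raise the memory limit and the upload cap in place.
sed -i 's/^memory_limit = .*/memory_limit = 128M/' "$ini"
sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 20M/' "$ini"

grep -E 'memory_limit|upload_max_filesize' "$ini"
```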

Then reload Apache so the changes take effect:

root@aegir:~# service apache2 reload

Then yeah, that's about it for getting a basic instance up. In the next installment we'll discuss moving the MySQL database to a different instance and tuning it, using Jenkins for periodic jobs instead of cron, putting a self-signed cert on the front end, and firewalling with iptables.


Building an Aegir based Drupal hosting environment (Part 1)

I use the Aegir hosting system for all of my Drupal hosting needs and it makes managing sites a whole lot easier. Here's a basic rundown of the new Aegir environment I'm putting together. I'm a big fan of Linode's VPS hosting; they have really nice prices and great service. I like to run either Debian or Ubuntu, and this box is going to be an Ubuntu 11.10 instance.

The first thing you need to do is make sure your system software is up to date.

root@aegir:~# sudo aptitude update
root@aegir:~# sudo aptitude safe-upgrade

Then we need to set the hostname for the box:

root@aegir:~# echo "aegir.DOMAIN.com" > /etc/hostname
root@aegir:~# hostname -F /etc/hostname
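It's also worth making the new name resolvable locally. On Debian and Ubuntu that's conventionally a 127.0.1.1 line in /etc/hosts (swap in your real domain):

```
127.0.1.1    aegir.DOMAIN.com    aegir
```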

Next, configure the timezone information. Run the following command and choose your timezone from the list.

root@aegir:~# dpkg-reconfigure tzdata

After that is set up, we need to configure OpenSSH. Typically I like to force users to authenticate with public keys only and change the port that ssh runs on, but I'm only going to disable password-based authentication for this one and let users ssh in on port 22.

Before you do the next part you should read https://help.ubuntu.com/community/SSH/OpenSSH/Keys, generate your keys, and get them on the server.
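For reference, that guide's key-generation step boils down to something like this, run from your workstation. The key path here is a throwaway for illustration, and -N "" means an empty passphrase; use a real passphrase on an actual key:

```shell
# Generate a throwaway RSA key pair for illustration; -N "" sets an
# empty passphrase, which you would not want on a real key.
ssh-keygen -q -t rsa -b 2048 -f /tmp/demo_key -N ""

# On a real setup you'd then push the public half to the server, e.g.:
#   ssh-copy-id -i /tmp/demo_key.pub root@aegir.DOMAIN.com
ssh-keygen -l -f /tmp/demo_key.pub   # show the fingerprint
```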

root@aegir:~# vim /etc/ssh/sshd_config

Change the following lines to read:

PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no

Then reload the ssh process:

root@aegir:~# service ssh reload

Now users will not be able to log in to the box with a password; they'll be forced to authenticate with a public key.


Reddit repost: Developing like a boss for Drupal

Okay, so I missed my daily blog post yesterday; I was kinda busy. I'm hoping to make up for it by doing two posts today, and this is the first one. It's a repost from a reddit thread titled "Drupal development 2 or more people on one project". Forgive my spelling and any other errors; I got kinda drunk in the middle of this one. Enjoy.


Certainly will. First, IDE. I'm a vim person and try to avoid using an IDE as much as possible. I find that they slow me down and with vim I have everything I need to edit any kind of file it opens. Plus I also use tmux for terminal emulation and zsh along with oh-my-zsh for my shell.

We use gitosis to host our repos and use command-line git to do everything else. I've also heard very good things about gitolite, and sometimes we use GitHub.

I've had lots of luck using the "A successful Git branching model" workflow within each project, with around 10 people contributing. The key thing here is not to commit directly to the master branch and to use the --no-ff flag when merging branches. The rest is kinda common sense: create tags for things you want to remember (we tag for every new platform we deploy to Aegir). You have to enforce this with your users; we use git hooks to keep people from committing and merging into branches they shouldn't be touching, and we accept patches for changes they want to make to something they can't push to.
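To make that concrete, here's a sketch of one feature cycle in a scratch repo; the branch and tag names are made up, and the point is the --no-ff merge plus the tag:

```shell
set -e
rm -rf /tmp/flow-demo
git init -q /tmp/flow-demo && cd /tmp/flow-demo
git config user.email dev@example.com
git config user.name "Dev"

# master (or main) gets an initial commit; all work happens on a branch.
echo base > file.txt && git add file.txt && git commit -qm "initial commit"
main=$(git symbolic-ref --short HEAD)

git checkout -qb feature-streaming-video
echo feature >> file.txt && git commit -qam "add streaming video feature"

# Merge with --no-ff so the feature shows up as a real merge commit,
# then tag the state we plan to deploy.
git checkout -q "$main"
git merge -q --no-ff --no-edit feature-streaming-video
git tag platform-1.0
```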

The next part is following a features-based workflow. Features allows you to store small pieces of functionality in module form that you can then enable on a site to provide that functionality. For example, we have a Flash streaming server that users can upload media to and stream to different audiences. We wanted to provide a way for users to add "Streaming Video" content to their sites, and we also wanted that functionality to work across our 190ish Drupal instances. Features lets us do that really easily by packaging up the code that provides the functionality, and now it's an option on all of our instances.

Here's the basic workflow for a features-based development cycle.

1) Create a vanilla instance of your Drupal site. We use drush make files to define our builds and we have all of our code in the install profile. We base our make files on how Open Atrium does theirs: https://community.openatrium.com/documentation-en/node/1420.

So our install profile's file structure ends up looking something like this:

/profiles/PROFILENAME
/profiles/PROFILENAME/PROFILENAME.make
/profiles/PROFILENAME/PROFILENAME.info
/profiles/PROFILENAME/PROFILENAME.install
/profiles/PROFILENAME/PROFILENAME.profile
/profiles/PROFILENAME/modules/
/profiles/PROFILENAME/modules/custom
/profiles/PROFILENAME/modules/custom/my_module
/profiles/PROFILENAME/themes/
/profiles/PROFILENAME/themes/custom
/profiles/PROFILENAME/themes/custom/my_theme

That is all versioned, and when you build your development instance you clone a fresh copy of it. The reason we do it this way is so each build is defined with a make file, and if you notice, there is a make file included in the repo. That make file declares everything that should be included in an instance when you download it. It looks something like this:

core = 6.x
api = 2
projects[drupal][type] = core
; Contrib modules
projects[features][subdir] = "contrib"
projects[token][subdir] = "contrib"
projects[ctools][subdir] = "contrib"
projects[menu_block][subdir] = "contrib"

Now you need a make file to build this instance, it looks something like this:

api = 2
core = "6.x"

projects[drupal][type] = core

projects[PROFILENAME][type] = "profile"
projects[PROFILENAME][download][type] = "git"
projects[PROFILENAME][download][url] = "git@git.example.edu:profiles/PROFILENAME.git"
projects[PROFILENAME][download][branch] = "feature-streaming-video"

This will clone a copy of the profile and check out the feature-streaming-video branch. You can also use tag instead of branch to check out a specific tag of the repo.
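For example, pinning the profile to a tag instead of a branch is a one-line swap (the tag name here is hypothetical):

```ini
projects[PROFILENAME][download][tag] = "1.0"
```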

So with drush make you would do something like this:

drush make --working-copy ./builds/stable.make www

The --working-copy flag tells drush make to include the git (or whatever VCS you use) versioning information for the project.

After that your profile will look like this:

/profiles/PROFILENAME
/profiles/PROFILENAME/PROFILENAME.make
/profiles/PROFILENAME/PROFILENAME.info
/profiles/PROFILENAME/PROFILENAME.install
/profiles/PROFILENAME/PROFILENAME.profile
/profiles/PROFILENAME/modules/
/profiles/PROFILENAME/modules/custom
/profiles/PROFILENAME/modules/custom/my_module
/profiles/PROFILENAME/modules/contrib
/profiles/PROFILENAME/modules/contrib/contrib_module
/profiles/PROFILENAME/themes/
/profiles/PROFILENAME/themes/custom
/profiles/PROFILENAME/themes/custom/my_theme
/profiles/PROFILENAME/themes/contrib
/profiles/PROFILENAME/themes/contrib/contrib_theme
/profiles/PROFILENAME/libraries/
/profiles/PROFILENAME/libraries/custom_library

Basically you've generated everything you need for a Drupal instance to work.

Then you create your database and use drush site-install (or si for short) to install the site:

drush si appstate --db-url=mysql://MYSQLUSER:MYSQLPASSWD@localhost/DATABASE --account-name=admin --account-pass=password --account-mail=noreply@example.edu --site-name="My Site Name" --site-mail=noreply@example.edu

2) Now that you have an instance built, create your content types, views, or whatever else you need. Then, using Features, create a feature that includes all of your functionality and untar it into the /profiles/PROFILENAME/modules/custom/ folder in your profile. At this point you want to commit your changes to a feature branch in the repo or to the develop branch, but don't merge into master yet. Also, check out Strongarm; it will let you export things from the variables table, which is helpful for including automatic URL aliases in your features.

3) Now go back, drop all the tables in your database, and run the drush site-install command again to create a vanilla install without anything left over from the initial creation of the feature. CCK creates a database table for each content type, so you have to be sure to remove everything or you'll get weird side effects when testing or developing the feature. Then re-enable the feature and ensure that everything you intended to include is included. This would be a good time to write some tests too.

That's basically it. You keep dropping the tables and reinstalling the site until all you have to do is enable the feature and everything is set up. After that you merge into the master branch, Jenkins runs some QA tests, and eventually the code gets pushed to production.

Web scraping and growl notifications with node.js and jsdom

I've been working on an idea for scraping data from a few sites and displaying it to me through Growl. With node.js and a few extra modules this is remarkably easy.

First you need to make sure you have a few things installed:

  • xcode (a requirement for homebrew)
  • homebrew
  • node.js
  • npm

I like to use the version of node.js that's included in Mac Homebrew, and npm installed from their one-line installer. After that you need to install a few dependencies.

$ npm install jsdom growl

Then you have to do something like this:

var jsdom = require('jsdom');
var growl = require('growl');

jsdom.env("http://appstatealert.com/", [
  'http://code.jquery.com/jquery-1.5.min.js'
  ],
  function(errors, window) {
    var $ = window.$;
    var stat = $("#wrapper-content-inner strong:first").text();
    growl('Campus is operating under: ' + stat);
});

You end up with the ability to use jQuery selectors against any page and pipe the results into Growl notifications.

More reading:

  • https://github.com/visionmedia/node-growl
  • http://mxcl.github.com/homebrew/
  • http://nodejs.org/
  • http://npmjs.org/
  • https://github.com/tmpvar/jsdom

Managing your "Weekly Things I did" list with Google CL

As a developer I spend most of my day working on things, and every week or so I send an email with everything I've done that week to my boss. Lots of folks do this, and it takes a good 10 to 15 minutes out of my Monday where I could be doing things like drinking coffee, setting up random webcams, or writing Drupal modules. Google has a set of command line tools called Google CL that makes managing your documents incredibly easy via the command line.

There are only two parts to this: first, a script that opens a Google document for editing, and second, a script that emails the contents of that document to someone.

First, we need the ability to create a new document weekly, so we create a script called gdocs_weekly.sh:

#!/bin/bash

google=/usr/local/bin/google
$google docs edit "Work $(date +%V)" --editor=vim --folder=weekly

What this does is:

  • Create the document that has the number of the current week in the title (if it doesn't already exist in your google docs)
  • Tell google-cl that the editor should be vim, you can use emacs or whatever editor you like
  • Finally store them in the weekly folder in your google docs
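The $(date +%V) in the title is what makes this weekly: it expands to the two-digit ISO week number, so each week quietly gets its own document:

```shell
# date +%V prints the two-digit ISO week number (01-53),
# so the document title changes once a week.
echo "Work $(date +%V)"
```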

That's it, a super easy way to keep track of everything you do.

I also have another script that emails a copy of my current list every Friday at 6:00pm. Here's what that looks like:

#!/bin/bash

#
# Emails weekly updates to an email
#
# usage: ./gdocs_email email@example.com
#

google=/usr/local/bin/google
mail=/usr/bin/mail
updates_dump=/tmp/weekly-$(date +%V).txt
email_subject="Weekly Update: $(date +"%A %B %d, %Y")"

if [ $# -ne 1 ]; then
        echo "Usage: gdocs_email <email@example.com>"
        exit 1
fi

$google docs get "Work $(date +%V)" $updates_dump
$mail -s "$email_subject" "$1" < "$updates_dump"

Like I said, I run this command every Friday at 6:00pm from a scheduled Jenkins job.
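If you don't have Jenkins handy, a plain crontab entry does the same job. This sketch assumes the script lives in ~/bin and uses an example address:

```
# m  h  dom mon dow  command
0   18  *   *   5    $HOME/bin/gdocs_email boss@example.com
```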

You can get more information about what Google CL can do by running:

$ google --help