Wednesday, August 26, 2009

Virtual CentOS Guests

By default, a Red Hat/CentOS/Fedora machine uses a 1000 Hz timer interrupt. That's good for keeping a GUI responsive to the user, but it's not really needed in a headless environment. In a virtualised environment this configuration can make CPU usage on the host machine unnecessarily high, just in order to keep track of the time.

Previously, getting around this problem required recompiling the kernel, but now there's an easier option: just add divider=10 to the kernel parameters. Or, if you prefer not to stress about whether or not you're putting the kernel parameters in the right conf file, just run this command as root:

grubby --update-kernel=ALL --args="divider=10"
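
To double-check that the argument was added (and, after a reboot, that the running kernel actually picked it up), a couple of quick sanity checks:

grubby --info=ALL | grep args
cat /proc/cmdline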

Monday, August 17, 2009

Display Server Date Dilemma

Had this case recently where the client wanted to display their local time on the banner of their webpage. At first I didn't think much of it; it just required a single line of PHP to print out the server time (the server is set to the same timezone as the client).

The server, though, is a VPS, and the site is complex enough and busy enough to be under a bit of load, so we make use of Drupal's page caching, eAccelerator and also memcache. With that trifecta running (and 1GB of RAM) the VPS handles things quite smoothly.

Displaying the time, though, threw a spanner in the works. Unfortunately, time continues to change, which means that it doesn't like being cached. If we're displaying the time down to the minute, then Drupal's page cache would need to be refreshed every 60 seconds. Drupal's cron is set to run once an hour at the moment, and considering that it does a few other tasks like checking for module updates and indexing, I don't think running the cron every minute would help us reduce load.
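
For reference, that hourly run is just the usual crontab entry requesting Drupal's cron.php over HTTP, something like this (the URL here is a placeholder):

0 * * * * wget -O - -q -t 1 http://www.example.com/cron.php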

So I started looking at a few other options and came up with the following ideas, all based on moving the work into the client's browser:

1. Javascript could display the date using the appropriate offset and the client's clock. This relies on the client having the time (and location) set correctly on their own machine.

2. Javascript could use ajax to get a non-cached version of the date, either from our server or a separate time server. This would be accurate but would double the number of requests/responses needed.

3. Javascript could use the page response timestamp to display the server time. I looked into this one and found that IE does expose a fileCreatedDate property on the document, but it's not accessible to javascript in all browsers.
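
For what it's worth, the server does send a standard Date header with every response, which is easy to confirm with curl (www.example.com standing in for the real site below). The catch is that javascript can't read the headers of the page it arrived with, so fetching that header means making an extra request, which is really just option 2 again.

curl -sI http://www.example.com/ | grep -i '^Date:'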

So at the moment, option 1 seems like the best solution to this dilemma. I'm not happy with it though; I don't like relying on the general public to have their computers configured correctly.

If anyone else has any other creative solutions to this one, I'd love to hear them.

Monday, August 10, 2009

Why do we need a 4 lane highway?

I just watched Will Hodgman's YouTube video on upgrading the Midlands Highway to 4 lanes (http://www.youtube.com/user/HodgmanWill?gl=AU&hl=en-GB). Admittedly, I drive something old and robust enough that a couple of potholes don't tend to bother me ('91 Triton ute), but I'm rather surprised at the lack of creativity that has gone into this proposal, and it seems like a step in the wrong direction to me.

There are so many important infrastructure projects that should be put ahead of a 4 lane Midlands Highway, and ahead of a Kingston bypass for that matter. First and foremost is getting the hospital under control. Coincidentally, I was in the RHH earlier today; my wife compared it to the last public hospital she was in (Chiang Mai, Thailand), and the RHH didn't get a favourable report.

Spending huge amounts of money so more cars can fit on our roads seems like lunacy to me. Money would be better spent on:
1. Improving public transport availability.
2. Promoting carpooling and cycling.
3. Implementing free park and ride shuttlebus services.
4. Using the NBN to get businesses to conduct meetings online rather than driving between Launceston and Hobart constantly. Online meeting software is getting very advanced and very affordable - see www.dimdim.com - and every Tasmanian business should be equipped and trained with such software.

If dimdim for linux supported desktop sharing I might even use it a bit more myself, but that's a story for another blog post.

Monday, May 11, 2009

Using RSS to Trac Tickets

Like many web development firms, we work on many projects at once, and often they are quite small projects. I find the combination of Trac + Subversion really helpful for keeping a record of changes through a project, and also for breaking a project down into its components so that each component can be assigned to the person who specialises in that area. With a team of developers distributed across different locations and timezones, these tools really assist with project management.

The problem, though, is that we now have lots of different Trac and Subversion installations - one for each project - and it was getting hard to keep an eye on all the tickets assigned to each of us across the various projects. I considered moving them all into one installation of Trac and Subversion, but then we would lose the ability to let clients log in to Trac to keep an eye on the progress of their project, unless of course we allowed them to view all details of all projects.

Another option was to move to something a bit more advanced than Trac, but I didn't like the look of the license fees involved with Jira.

I solved this little problem by using my RSS reader (Liferea) to aggregate the ticket feeds from each of the Trac installs. I chose Liferea because it supports HTTP authentication, but there are plenty of RSS readers around that would do just as well. I started by making a folder in Liferea called 'My Trac Tasks', then for each project I went to the Trac page listing 'My Tickets', selected the RSS feed link, and added each of these feeds to Liferea.
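
In case it helps anyone find the right link: Trac's RSS link at the bottom of a ticket query is just the query URL with format=rss appended, so a 'my open tickets' feed looks something like this (host, project and username are placeholders):

http://trac.example.com/myproject/query?status=!closed&owner=myusername&format=rss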

Normally an RSS reader is used to read news, so there were a few little settings I had to change to get Liferea to suit my needs.

Normally an RSS reader remembers items that you haven't read yet, even if they're no longer included in the feed's XML. I didn't want this; I wanted my aggregated list in Liferea to represent exactly the items currently in each feed, without remembering items from the past.

Also, an RSS reader normally removes items from display once you have read them. Again, I didn't want this; I only wanted items removed from display once the ticket had been closed and the item was no longer listed in the feed's XML.

The two options I needed to change to suit my needs were:

1. Tools -> Feeds -> Feed Cache Handling: set 'Default number of items per feed to save:' to 0.

2. For each feed, right click on the feed name and select Properties -> Archive, then select 'Unlimited cache'.

Now I have a single place that lists all of the open Trac tickets assigned to me. I might still set up another folder within Liferea and add the Trac feeds for ALL open tickets rather than just mine; then I'd get a good overview of where everyone else is up to with each of the projects.

Sunday, May 3, 2009

Getting FCKEditor Styles Right in Drupal

We've got a multi-site install of Drupal, and we like to place the more common modules at the global 'drupal_install/sites/all/modules' location. Being able to do this is great because when a module needs to be upgraded, we can upgrade all the sites that use it at once.

One such global module is fckeditor. While we do want the module to be stored once at the global level, we also want to be able to customise the toolbar and styles for each site, and we want to keep those settings after a module upgrade.

So here's how it's done:

There are two important files used to style fckeditor in a drupal site.

1. fckeditor.config.js
This file belongs to the Drupal module (i.e. stored as ..../modules/fckeditor/fckeditor.config.js) and stores profiles of toolbar sets.

2. fckstyles.xml
This file belongs to FCKEditor itself (i.e. stored as ..../modules/fckeditor/fckeditor/fckstyles.xml) and stores the styles offered in the editor's styles dropdown.

To be able to customise FCKEditor for each Drupal site in a multi-site install, copy the two files above into your theme folder (i.e. ..../themes/my_theme/).
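
For example, assuming the module lives under sites/all and the theme is sites/all/themes/my_theme (hypothetical paths - adjust them to match your install):

cp sites/all/modules/fckeditor/fckeditor.config.js sites/all/themes/my_theme/
cp sites/all/modules/fckeditor/fckeditor/fckstyles.xml sites/all/themes/my_theme/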

Now, edit the fckeditor profiles you want to use (go to mydrupalsite.com/admin/settings/fckeditor and click 'edit' next to a profile).

Open the CSS fieldset and select 'Use theme fckstyles.xml' from the Predefined Styles options.

Open the Advanced fieldset and set 'Load fckeditor.config.js from theme path' to 'yes'.

Now you can edit fckeditor.config.js and fckstyles.xml within your theme's path to customise FCKEditor for just that one site. When the time comes to upgrade the fckeditor module, your styles and profiles will survive the upgrade.

Sunday, April 26, 2009

Backup Rotations with Amazon S3

I've started putting my backups onto Amazon S3, and I've already got a script working that does a nightly sync (let me know if you're interested in that and I'll do a blog post about it).

The next problem though is that I also want weekly snapshots on a one month rotation schedule, just in case it's a little while before I realise that I want a certain file back.

Option A: Set up some more cron jobs that sync weekly to different S3 folders.
The problem with this is that I would be transferring the same files twice - once for the nightly sync and once for the weekly sync.

Option B: Take a snapshot of the nightly copy and put that in another folder.
This sounds better, and has the added benefit that I'm working with files that are already off my server, so won't be adding any extra load to it. The latest version of s3cmd supports copying between buckets and within buckets, but unfortunately does not (yet) support recursion in these scenarios. So my little challenge is to implement the recursion part with a shell script.

Now, I'm no shell guru, so improvement suggestions are most welcome.
I created a file called s3remotecp and put this in it:

#!/bin/sh
# s3remotecp: recursively copy everything under one S3 path to another using s3cmd.

# check that two arguments have been supplied
if [ $# -ne 2 ]; then
    echo "Usage: $0 s3://sourcebucket/sourcefolder/ s3://destinationbucket/destinationfolder/" 1>&2
    exit 127
fi

sourcePath=$1
destinationPath=$2

# List every object under the source path, then copy each one across,
# appending its path relative to the source onto the destination.
# (Note: this still breaks on keys containing spaces.)
s3cmd ls -r "$sourcePath" | awk '{print $4}' | while read -r line; do
    s3cmd cp "$line" "$destinationPath${line#$sourcePath}"
done
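
To use it, make the script executable and give it a source and a destination (the bucket and folder names here are made up - substitute your own):

chmod +x s3remotecp
./s3remotecp s3://mybucket/daily/ s3://mybucket/weekly-1/

The one-month rotation can then come from cron. One gotcha: cron treats the day-of-month and day-of-week fields as an OR when both are restricted, so rather than setting both, I let each entry fire daily within its date range and test for Sunday in the command itself (% has to be escaped in a crontab). A sketch, again with made-up paths:

# Sundays at 2am: copy the nightly sync into this week-of-month's snapshot folder
0 2 1-7   * * [ "$(date +\%u)" = 7 ] && /usr/local/bin/s3remotecp s3://mybucket/daily/ s3://mybucket/weekly-1/
0 2 8-14  * * [ "$(date +\%u)" = 7 ] && /usr/local/bin/s3remotecp s3://mybucket/daily/ s3://mybucket/weekly-2/
0 2 15-21 * * [ "$(date +\%u)" = 7 ] && /usr/local/bin/s3remotecp s3://mybucket/daily/ s3://mybucket/weekly-3/
0 2 22-28 * * [ "$(date +\%u)" = 7 ] && /usr/local/bin/s3remotecp s3://mybucket/daily/ s3://mybucket/weekly-4/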


As of writing this, s3cmd is at version 0.9.9. The next version is likely to support recursion, which would make my little script redundant.

Saturday, April 25, 2009

Sound in Ubuntu on Compaq Presario CQ20 116TU

On my CQ20 116TU I've been struggling with sound since I upgraded to Ubuntu 8.10. Headphones worked OK but the laptop speakers didn't produce anything. I hoped that upgrading to Jaunty might fix my problems... it didn't. But it did prompt me to have yet another google around for a solution.

So a big thank you to those on this thread http://ubuntuforums.org/archive/index.php/t-940689.html where I found my answer.

So, I simply added the following three lines to /etc/modprobe.d/alsa-base.conf:

options snd-hda-intel model=3stack-dig
options snd-hda-intel enable_msi=1
options snd-hda-intel single_cmd=1

Note: This solution will probably also work on releases before Jaunty, where the file to edit is the same but without the .conf extension.
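
For the new options to take effect you either need to reboot or reload the sound modules; on Ubuntu the alsa-utils package provides a helper for the latter:

sudo alsa force-reload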

One thing that is still a bit odd is that plugging headphones in doesn't mute the laptop speakers. But I easily get around that by pressing the laptop's mute button, which mutes the laptop speakers but not the headphones.

Update:

Excited to finally be able to listen to music, I opened Amarok and got no sound, but that was a Jaunty issue solved here: http://chanweiyee.blogspot.com/2009/04/ubuntu-904-amarok-2-sound-problem.html