Sunday, April 26, 2009

Backup Rotations with Amazon S3

I've started putting my backups onto Amazon S3, and I've already got a script working that does a nightly sync (let me know if you're interested in that and I'll do a blog post about it).

The next problem though is that I also want weekly snapshots on a one month rotation schedule, just in case it's a little while before I realise that I want a certain file back.

Option A: Setup some more cron jobs that will sync weekly to different S3 folders.
The problem with this is that I would be transferring the same files twice - once for the nightly sync and once for the weekly sync.

Option B: Take a snapshot of the nightly copy and put that in another folder.
This sounds better, and has the added benefit that I'm working with files that are already off my server, so won't be adding any extra load to it. The latest version of s3cmd supports copying between buckets and within buckets, but unfortunately does not (yet) support recursion in these scenarios. So my little challenge is to implement the recursion part with a shell script.

Now I'm no shell guru so improvement suggestions are most welcome:
I created a file called s3remotecp and put this into it:

#!/bin/sh

# check that two arguments have been supplied
if [ $# -ne 2 ]; then
    echo "Usage: $0 s3://sourcebucket/sourcefolder/ s3://destinationbucket/destinationfolder/" 1>&2
    exit 127
fi

sourcePath=$1
destinationPath=$2

logfile='/var/log/s3remotecp.log'   # reserved for logging, not used yet

# list every object under the source path and copy each one to the
# corresponding key under the destination path
s3cmd ls -r "$sourcePath" | awk '{print $4}' | while read -r line; do
    s3cmd cp "$line" "$destinationPath${line#$sourcePath}"
done


As of writing this, s3cmd is at version 0.9.9. The next version is likely to support recursion, which would make my little script redundant.
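
For what it's worth, here's one way the script above could be wired into the one-month rotation - just a sketch, with placeholder bucket, folder and script names:

#!/bin/sh
# Hypothetical weekly wrapper around s3remotecp (bucket/folder names are placeholders).
# Copies the nightly sync into a weekly snapshot folder named after the week of the month.
day=$(date +%d)
day=${day#0}                      # strip any leading zero so the arithmetic below works
week=$(( (day - 1) / 7 + 1 ))     # days 1-7 -> 1, 8-14 -> 2, etc. (a 5th week is harmless)
/usr/local/bin/s3remotecp s3://mybucket/nightly/ s3://mybucket/weekly/week${week}/

Run it from cron early on a Sunday morning, e.g. '15 3 * * 0 /usr/local/bin/s3weeklysnapshot', and each weekly folder simply gets overwritten when its week comes around again the next month.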

Saturday, April 25, 2009

Sound in Ubuntu on Compaq Presario CQ20 116TU

On my CQ20 116TU I've been struggling with the sound since I upgraded to Ubuntu 8.10. Headphones worked ok but the laptop speakers didn't produce anything. I hoped that upgrading to Jaunty might fix my problems ...... it didn't. But it did prompt me to have yet another google around to find a solution.

So a big thank you to those on this thread http://ubuntuforums.org/archive/index.php/t-940689.html where I found my answer.

So, I simply added the following three lines to /etc/modprobe.d/alsa-base.conf


options snd-hda-intel model=3stack-dig
options snd-hda-intel enable_msi=1
options snd-hda-intel single_cmd=1


Note: This solution will probably work on versions before Jaunty, in which case the file to edit won't have the .conf extension.

One thing that is still a bit odd is that when I plug headphones in it doesn't mute the laptop speakers. But I easily get around that by pressing the laptop's mute button, which mutes the laptop speakers but not the headphones.

Update:

Excited to finally be able to listen to music I opened amarok and got no sound, but that was a Jaunty issue solved here: http://chanweiyee.blogspot.com/2009/04/ubuntu-904-amarok-2-sound-problem.html

Backing up VDI files to Amazon S3

I've got a Centos 5.3 server that uses VirtualBox to run a couple of headless virtual servers. Apart from regularly (i.e. nightly or weekly) backing up the contents of the virtual servers, it's also useful to take the occasional snapshot of the whole virtual server. To do this you basically just need to grab a copy of the .vdi file, which can be found somewhere like /root/.VirtualBox/VDI/.

So in the unfortunate case of a complete hardware failure or data centre screwup I'll be able to put the .vdi file on another box somewhere and start up the virtual server in the same state it was in when I took the snapshot. At that point I would probably want to restore the files from the nightly backup into the restored virtual server to make sure everything is as up-to-date as possible.

So what I'm going to run through here is how to back up the snapshot to Amazon's Simple Storage Service (S3), which I'm using for backups because:
  1. It's cheaper and easier than maintaining hardware in the office.
  2. It's faster to copy a large file from the server (inside data centre) to Amazon S3 than to copy to the office server.
  3. Bandwidth is still very expensive in Australia and my fellow office-mates get rather annoyed when our connection is shaped by me copying large backups onto the office server.
So here's the basic process for doing the backup; hopefully I'll get around to automating this with a shell script at some point (a rough sketch of what that might look like appears after the steps below).

1. Install s3cmd by executing the following as the root user:
yum install s3cmd
Update: Actually you'll need to install the repository first - instructions here: http://s3tools.org/repositories

2. You will now need to configure s3cmd with the details of your Amazon S3 account.
s3cmd --configure
This will prompt you for your access key and secret access key. I also selected to use encryption and https.

3. Create a bucket to store your backups. Personally, I've got one S3 account and then a bucket for each server I need to backup, so my bucket is setup like this:
s3cmd mb s3://myservername.com

4. Login to the virtual server guest that you want to backup and shut it down.

5. Navigate to the location of the VDI files.
cd /root/.VirtualBox/VDI/

6. Make a copy of the VDI file you want to backup.
cp myvps.vdi myvps_snapshotdate.vdi

7. Start up the virtual server again. Note: I've written startup scripts for my virtual servers, so this command isn't available by default. The startup scripts might be the subject of another blog post if anyone requests it.
/etc/init.d/myvps start

8. Compress the vdi backup file. My original vdi was about 9GB, compressed it got down to 1.8GB.
tar -czvf myvps_snapshotdate.vdi.tgz ./myvps_snapshotdate.vdi
Note: The reason I didn't compress this at the same time as making the copy from the original is that I wanted to get the virtual server started up again with as little downtime as possible.

9. Send the compressed vdi to S3. This might take a while.
s3cmd put myvps_snapshotdate.vdi.tgz s3://myservername.com/VDI/myvps_snapshotdate.vdi.tgz

10. Cleanup

rm myvps_snapshotdate.vdi.tgz
rm myvps_snapshotdate.vdi
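
And here's the rough sketch of automating those steps that I mentioned above. The paths and bucket name are the example values from this post, and it assumes the init script can stop the guest cleanly as well as start it, which I haven't shown here:

#!/bin/sh
# Sketch only - adjust names and paths to suit.
VDI_DIR=/root/.VirtualBox/VDI
VDI=myvps.vdi
SNAP=myvps_$(date +%Y%m%d).vdi
BUCKET=s3://myservername.com/VDI

/etc/init.d/myvps stop                       # step 4: shut the guest down
cd "$VDI_DIR" || exit 1                      # step 5
cp "$VDI" "$SNAP"                            # step 6: copy the VDI
/etc/init.d/myvps start                      # step 7: bring the guest back up
tar -czf "$SNAP.tgz" "./$SNAP"               # step 8: compress
s3cmd put "$SNAP.tgz" "$BUCKET/$SNAP.tgz"    # step 9: upload to S3
rm -f "$SNAP.tgz" "$SNAP"                    # step 10: cleanup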

Wednesday, April 22, 2009

OpenVZ Burstable Memory

So I signed up for a new VPS server the other day and this time it was an OpenVZ based environment. The specs said it had 512MB of RAM burstable up to 768MB. These sounded like reasonable numbers, but at the time I didn't really think about what burstable means.

On a normal linux machine (and I assume other OS's work in a similar way), when your RAM is full you start using swap space (sometimes known as virtual memory), which is stored on the hard disk. It's pretty much the same as RAM, but reading off the HDD makes it slower. If your RAM is constantly full and you're always relying on swap, you should buy more RAM; it's the easiest way to get a performance increase on your box.

Anyway, on this OpenVZ system it's a virtual server, so you're sharing the actual RAM with a bunch of other VPS's on the same physical hardware. As far as the virtualised OS is concerned it just has 768MB, and no swap space. The problem is, though, that once it gets past 512MB it starts moving into the burstable area, which could just as well be renamed "Unreliable Memory", because at any point it could simply disappear and crash your application because someone else's server is also using up a lot of memory.
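
You can usually see where these numbers come from inside the container by looking at the bean counters - the privvmpages row (counted in 4KB pages) is typically where the "guaranteed" and "burstable" figures show up, and failcnt counts failed allocations. The figures below are just an illustration of what a 512MB/768MB plan might look like:

cat /proc/user_beancounters
#      resource      held   maxheld   barrier    limit  failcnt
#      privvmpages  95000    140000    131072   196608        3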

So let's take a look at an example situation. Most of the time my memory use might stay below 512MB of RAM, but every now and then I need to repopulate the whole database from a backup (don't ask why), which may push the memory usage over the top. The burstable memory "might" cater for this if it only needs the memory for a minute or so, but if it needs it for longer, or the burstable memory isn't available because of other VPS's, my application will simply fail.

I need this for a production environment and the idea of having "Unreliable" memory just doesn't cut it. The performance might be a bit slower, but at least SWAP space wouldn't crash my application.

So it seems to me that OpenVZ is designed to rip off the consumer, forcing me to upgrade to a guaranteed 768MB of RAM even though I only need more than 512MB once in a blue moon. When my memory usage peaks, it's normally because an important process is running, so I'd much prefer the application to use reliable, cheap but slightly slower swap space rather than burstable memory.

The moral of this story is... don't use OpenVZ, choose XEN virtualisation and setup swap space.

Tuesday, April 21, 2009

wurfl.xml has moved

I've got a java web application that uses the wurfl tag library. Wurfl is awesome by the way, it means you don't have to write your application a thousand times for all the different mobile devices on the market now.

So the app broke recently and I found the cause to be that the wurfl.xml file couldn't be found. This file contains the profiles of all the mobile devices on the planet and it needs to be updated every now and then when new devices are released. I just used a cron job to periodically download the wurfl.xml file.

The reason things broke was that the old URL pointing to the latest wurfl.xml file no longer worked; sourceforge had made the wurfl guys switch to a different distribution method.

Luckily, this was easy to fix. I found the new path (it even had a nearby mirror this time) and updated the script on the cron. I had to add a little bit extra to the script because the new distribution method used a zip format, so my script now had to unarchive the file after downloading it.
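
The updated job boils down to something like this - the download URL is a placeholder (it's exactly the bit that keeps changing), and it assumes the zip contains a top-level wurfl.xml:

#!/bin/sh
# Fetch the latest wurfl.xml (now distributed as a zip) and drop it where the app expects it.
# URL and destination are placeholders - substitute the current distribution link or mirror.
WURFL_URL="http://example-mirror.example.com/wurfl-latest.zip"
DEST=/var/lib/myapp/wurfl.xml

cd /tmp || exit 1
wget -q -O wurfl-latest.zip "$WURFL_URL" || exit 1
unzip -o wurfl-latest.zip wurfl.xml        # the new distribution method needs unarchiving
mv -f wurfl.xml "$DEST"
rm -f wurfl-latest.zip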

Sure enough, after I'd figured all this out the mobile app was back up and running.

Monday, April 20, 2009

Skype on the iphone

Every now and then I've been checking back in the appstore, eagerly hoping that skype will turn up. I'd already tried using an app called Fring, which was capable of connecting a skype call so both sides could hear each other, but when it came to actually holding a conversation the quality wasn't up to it.

This time I checked the appstore and there it was: Skype, the real McCoy. The reason this is so important, of course, is that on my Compaq Presario CQ20 laptop running Ubuntu the soundcard locks up after exactly 15 minutes of talking on skype, requiring me to reboot the whole computer, and I haven't been able to figure out why (suggestions are welcome though).

So far it looks promising that having skype on my iphone will make work a bit more productive. Together with the SIP client I've already installed called Siphon I should have all types of calls covered. Strangely, as my home is out in the country, out of GSM range for this old 1st generation iphone, it feels like quite an achievement to make a phone call from my phone.

Update:
It worked! I had a clear 20 minute call with a colleague in Vancouver so skype on the iphone gets a thumbs up.

Sunday, April 19, 2009

amazon simple storage

I recently learned about the Amazon Simple Storage Service (S3) and now I'm starting to move my backups into the cloud. Beforehand I had my own backup server running in the office and I would rsync each of my production servers every night, as well as take weekly snapshots on a one-month rotation. Now I can just put it all in the cloud and, using s3cmd sync, I can run the same sort of sync schedule.
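
The nightly job boils down to something like this (a sketch - the bucket name is a placeholder, and the exclude patterns are just one way to skip the big stuff, since s3cmd doesn't have rsync's size filter):

s3cmd sync --delete-removed --exclude '*.iso' --exclude '*.vdi' /home/ s3://mybackupbucket/nightly/home/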

Some things aren't quite as flexible as good old rsync though. I used to sync only files under 200MB so that I wouldn't bother backing up the odd linux distro iso that I'd left on a server accidentally, or attempt to back up the virtual machine vdi files, and I've had to work around that another way with s3.

Also I couldn't find a way to take a snapshot and copy that into another bucket with s3cmd, so every week I need to sync to the appropriate weekly snapshot buckets. This causes a bit more traffic than is really necessary but traffic between the data centre and amazon s3 is a lot cheaper than coming through my isp onto the office server.

Drupal Hosting on Centos

Setting up Centos 5.3 for Drupal 6.x

Introduction:

This document runs through how to set up a Centos 5.2 server as a host for multiple Drupal sites. It covers the setup from a fresh Centos install, including setting up administrator users, firewall, backups etc.

These details were written during an install of centos as a VPS within HyperVM, using the centos-5-i386 build profile as a starting point.

For stability and security most things should be installed from the standard repositories in the standard manner (i.e. from yum), but we need to make an exception for php.
By default Centos 5.2 doesn't have the latest PHP installed, and there are some features of PHP 5.2+ that are important for some common Drupal 6.x modules, so upgrading PHP from non-standard repositories is an important requirement for a Drupal server. Note, this probably won't be required when the next major release of Centos comes out, at least until Drupal jumps ahead again.

    Before starting:
    Section 1: Webmin and Virtualmin
    Section 2: PHP
    Section 3: Security
    Section 4: Other Useful Software
    Section 5: Monitoring
    Section 6: Backups
    Section 7: Drupal
    Section 8: Setting up a New Drupal Site


Procedures:


Before starting:

  • Check the minimum requirements before starting. I thought I'd save a few dollars by getting the cheapest possible VPS, but then soon found out that basically nothing will install onto centos with only 128MB RAM.
  • If the server was commissioned by someone else (i.e. a data centre), start by changing the root password by logging in as root and running the 'passwd' command.
  • Still as root, get Centos up to date by running 'yum update'.
  • I use vim a lot and I get confused moving between operating systems that have either vim or vi. Centos has vim installed but it uses the command vi to run it so one of the first things I do on centos is run 'ln -s /bin/vi /bin/vim' so that I can still type vim to open vim (it's just what my fingers remember).
  • Edit the bottom of /etc/aliases so that mail to root goes to your own email address (see the example just after this list). You'll need to run 'newaliases' after changing that file.
  • add updatedb to the cron so we can use the locate command
33 * * * * updatedb
  • This is a good guide for setting the time in linux. If like me you're using a VPS you probably can't change the hwclock, but you should be able to set your timezone.
http://www.wikihow.com/Change-the-Timezone-in-Linux
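
Here's what the /etc/aliases change mentioned above looks like (the address is made up):

# last line of /etc/aliases - run 'newaliases' afterwards
root:           me@example.com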

Section 1: Webmin and Virtualmin


Let's get webmin and virtualmin up and running first; then we can use them to make the rest of the configuration process a bit easier.

Virtualmin has a great install script, which actually installs webmin for you, does some other tricky stuff and is very well tested on centos. Use wget from the command prompt to download the virtualmin install.sh script from the virtualmin site, make it executable by running 'chmod +x install.sh' and then run './install.sh'. Once this runs through everything it needs to, you should be able to browse to https://yourserver.com:10000 and log in to webmin with your root password. You'll probably see a big yellow box saying 'Check virtualmin configuration'; click this and run through any extra module installation that virtualmin depends on. I set up the defaults for BIND and then the big yellow box disappeared.
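
The download-and-run part is just the following (the exact download URL moves around, so grab the current link from virtualmin.com - it was roughly this at the time):

wget http://software.virtualmin.com/gpl/scripts/install.sh
chmod +x install.sh
./install.sh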

  • I also installed the mailbox signup module from within virtualmin, as of writing this I haven't figured out what it does but it sounds like it might be useful.
  • In virtualmin package update section, set email address to receive weekly package update notification.

In virtualmin -> system settings -> features and plugins, remove anything that you're not going to need. I disabled webalizer reporting, webmin login and dav login.

In virtualmin -> system customization -> custom shells, I enabled custom shells and then set the default for admin users to be /bin/false

In virtualmin -> system settings -> server templates, I set disk quotas for admin users to soft, under mail for domain I set mail aliases for new domains to none
In the apache website section of server templates I changed the directives to look like this: (note that I left the old lines as comments so they're easy to change back for non-drupal sites). The new DocumentRoot and Directory values will become clear when we get to the Drupal section.

ServerName ${DOM}
ServerAlias www.${DOM}
#DocumentRoot ${HOME}/public_html
DocumentRoot /home/drupal_owner/current_drupal
ErrorLog /var/log/virtualmin/${DOM}_error_log
CustomLog /var/log/virtualmin/${DOM}_access_log combined
ScriptAlias /cgi-bin/ ${HOME}/cgi-bin/
DirectoryIndex index.html index.htm index.php index.php4 index.php5
#<Directory ${HOME}/public_html>
<Directory /home/drupal_owner/current_drupal>
Options -Indexes IncludesNOEXEC FollowSymLinks
allow from all
AllowOverride All
</Directory>

<Directory ${HOME}/cgi-bin>
allow from all
</Directory>

For redirect webmail.domain to usermin I said no
For redirect admin.domain to virtualmin I said no
For disabled website html, type in a message about being down for maintenance.

In Virtualmin -> system settings -> bandwidth monitoring I enabled bandwidth monitoring.
In virtualmin -> limits and validation -> ftp directory restrictions, I enabled the restrictions so ftp users would be locked into their home folders.

Lastly, I've run into problems running backups from virtualmin because by default it creates files in /tmp, which often doesn't have enough room for a full site backup. So in webmin configuration -> advanced options I changed the temporary files location to /var/tmp/.webmin, and also enabled clearing temp files in the non-standard directory.

Section 2: PHP


If you run php -v you'll notice that Centos 5.3 only supports PHP 5.1.6. Because of the way Centos (being an enterprise distribution) handles the release cycle, PHP 5.2 won't be supported until Centos 6. This is a bit of an issue for Drupal hosting, as some of the awesome Drupal 6.x modules that have become available recently require PHP 5.2+. There's a great guide for overcoming this problem here: http://bluhaloit.wordpress.com/2008/03/13/installing-php-52x-on-redhat-es5-centos-5-etc/ which I will summarise below.

The following will install the remi repository:


wget http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-2.noarch.rpm
wget http://rpms.famillecollet.com/el5.i386/remi-release-5-4.el5.remi.noarch.rpm
rpm -Uvh remi-release-5*.rpm epel-release-5*.rpm

Update for centos 5.3:

wget http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm
wget http://rpms.famillecollet.com/el5.i386/remi-release-5-7.el5.remi.noarch.rpm
rpm -Uvh remi-release-5*.rpm epel-release-5*.rpm


You now have the Remi repository on your system, although it is disabled by default. Obviously you don't want all of your packages being affected by this repository, but you can enable it for a single yum call. Install the latest PHP with the following:

This will update mysql to a newer version, which seems to be required for the php 5.2+ packages; php 5.2+ will be installed at the same time.
yum --enablerepo=remi update mysql php

After upgrading mysql, you'll need to run the following script:
/usr/bin/mysql_fix_privilege_tables --password=mysql_root_password

php-xml is pretty useful, so I installed that too in a similar way
yum --enablerepo=remi install php-xml

By default php restricts the maximum upload size to 2MB. This can be increased by editing the /etc/php.ini file and changing these two lines:
; Maximum allowed size for uploaded files.
upload_max_filesize = 24M

; Maximum size of POST data that PHP will accept.
post_max_size = 24M

You'll need to restart apache after making changes to php.ini.
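
On Centos that's simply:

service httpd restart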

Section 3: Security


On a new system there are a few security settings that are worth tightening a little. With any system you should assess what your specific requirements are and configure your security around that. For example, if you don't need ftp access to your server then you might as well disable the ftp services and block off port 21. The following are the settings that I used; your requirements might be different.

  • SSH Login. Using webmin, create a new user and where it has an option for shell, select something like /bin/sh or /bin/bash. I like bash because it lets me tab out directory names but that's another story.
    • Check that you can ssh to the box as the user you just created.
    • In the SSH Server section of webmin, go to Authentication and then set 'Allow login by root' to no. Apply the changes and check that you CAN'T ssh to the server as root.

  • Virus Protection. Linux machines aren't all that prone to viruses, but it still pays to be diligent, and if you're running a mail server it's a nice feature to be able to help make the world a better place by deleting any emails that contain Windows viruses. That said, if the customers who are using your mail server also call you up for support when their PC gets a virus, you might want to leave that latter feature off. I'm not that greedy, I don't need the extra work and I hate MS support, so I like filtering viruses out of emails.
    • clamav can also scan the filesystem using clamscan. I used the script from here: http://code.google.com/p/clamav-cron/ and set it up to scan everything in /home every saturday night.
Note: I didn't need the scan report cc'd so I changed the last line of the script to this:
/bin/mail -s "$CV_SUBJECT" $CV_MAILTO -- -f $CV_MAILFROM < $CV_LOGFILE
  • Detecting Rootkit attacks.
Can someone tell me whether rootkit attacks should be classified as viruses, and if not, why not? Anyway, rootkits are nasty little buggers. Basically, if your security is compromised and a rootkit is installed, it will replace some of the core system programs in order to hide itself, and then it will probably leave some backdoors open for malicious elves to sneak in at night and install ircbots on your server. If a rootkit is installed on your server, the safest action I believe is to incinerate the whole computer and then wash your whole body thoroughly with dettol, vinegar and tea tree oil.
RootKit Hunter (rkhunter) isn't in the standard centos repositories but it is in the epel repository which we added in when installing php, so just install it by running 'yum install rkhunter'

To get rkhunter to work on Centos I had to add this line to the bottom of /etc/rkhunter.conf
INSTALLDIR=/usr

To keep rkhunter up-to-date, put this command in the cron:
11 2 * * * rkhunter --update
This will update rkhunter every day at 2:11am

Run 'rkhunter -c --cronjob' and when it's finished you should get a report in your email (if not, go back to the preparation steps and the /etc/aliases file).
The email will probably read 'Please inspect this machine because it can be infected'. Don't panic yet, there's a few things to change so that rkhunter alerts you when something is wrong but doesn't pester you every day.
At the top of the output from the above command you'll probably see something like this (if you're on centos):
Determining OS... Unknown
Warning: This operating system is not fully supported!
All MD5 checks will be skipped!
Copy the output of 'cat /etc/redhat-release', for me this was 'CentOS release 5.3 (Final)'
Now at the very bottom of /var/lib/rkhunter/db/os.dat add an entry for your operating system. Note, if rkhunter updates itself later and still doesn't support your OS, you'll need to do this step again.
Rkhunter had also alerted me of the following two suspicious files, which some research found to be quite safe.
/etc/.pwd.lock /usr/share/man/man1/..1.gz
In the /etc/rkhunter.conf file, uncomment the ALLOWHIDDENFILE lines that relate to those 2 files.

Running 'rkhunter -c --cronjob' again didn't find any errors and so didn't email me any false-positive alerts.
The last step here is to add this to the cron:
21 2 * * * rkhunter -c --cronjob
This will run rkhunter at 2:21am every day.

  • Firewall
The firewall can be configured from within webmin -> networking -> linux firewall
Firewall rules are another story far too long to go into here.

Section 4: Other Useful Software


yum install phpmyadmin
- then add the mysql root password to /usr/share/phpmyadmin/config.inc.php (an example snippet is shown below, after the Apache alias)
- and add the following to /etc/httpd/conf/httpd.conf

Alias /myadmin "/usr/share/phpMyAdmin"
<Directory "/usr/share/phpMyAdmin">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>

You'll then be able to access phpMyAdmin at any of the domains on your system by going to the /myadmin alias
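
The config.inc.php change mentioned above looks something like this (hypothetical values - and since 'config' auth stores the password in the file, keep its permissions tight):

$cfg['Servers'][$i]['auth_type'] = 'config';
$cfg['Servers'][$i]['user']      = 'root';
$cfg['Servers'][$i]['password']  = 'your_mysql_root_password';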



yum install libmcrypt
yum install php-mcrypt

Section 5: Monitoring


When something fails, we want to know about it. The problem with monitoring and alerting systems is of course that if the means by which you're being alerted goes down you have no way of knowing if there's a problem or not. i.e. if the network connection goes down your server can't alert you. Having multiple servers monitor each other is a good way to combat this problem, and webmin allows you to configure an index of related servers so they can monitor each other (webmin -> webmin servers index).

You can easily setup some monitoring and alerting in webmin -> other -> system and server status

Click on the scheduled monitoring button, enable scheduled monitoring and configure it with the email address you want the alerts sent to.
I also set it up to use an external SMTP server rather than the local one, in case the local one is down (or was never set up).

I like to have the following alerts running for a local server:
When the load average is above 0.9 for 15 minutes
When half the HDD is full. This is just because I like to know about it; once it actually reaches 50% I'd probably change the alert so that I get notified at 70% instead.
When MySQL is down
When apache is down

I also set up a group of servers to monitor each other, and for that I just use the 'Alive System' monitor type.

Section 6: Backups


Backups are important, and everyone who understands that has probably learned the hard way. Even if your HDD is in the cloud something can still fail or some human error can make everything pear shaped. Every linux user accidentally types rm -Rf / or some disastrous variation at some point.

I like to use rsync for backups; it's a great tool that will use ssh to synchronise files across servers, and it's smart enough to only transfer files that have changed, so I can run the script every night and not worry too much about how much traffic it's causing. Every now and then someone puts a huge file on a server (like a vmware or virtualbox image), which would be a pain to try to back up via ssh every night, so I limit rsync to files that are less than 200MB.

Ssh of course normally uses a password to authenticate, but we want to put the backup script on a cron job and not have to type in a password at 1am every morning, so we start by setting up key-based authentication. On the server that holds all the backups (also centos), I create a new user for each server that gets backed up and set up a folder in their home directory to keep the backup files. (I also take a weekly snapshot of this folder and run a 5 week rotation schedule but I won't go into that here).

I blatantly took these instructions for setting up ssh without a password from here: http://am3n.profusehost.net/post/index/37/SSH-without-Password who in turn took them from somewhere else.

1. Go to your home folder
CODE:
$ cd ~

2. Skip this step if you already have a .ssh folder
CODE:
$ mkdir .ssh

3. Make it private
CODE:
$ chmod -R 0700 .ssh

4. Go to your ~/.ssh folder
CODE:
$ cd .ssh

5. Create a ssh public_key
CODE:
$ ssh-keygen -t dsa -f id_dsa -P ''

6. Copy PUBLIC key ONLY to .ssh folder on target server
CODE:
$ scp id_dsa.pub user@server:~/.ssh

7. Now log into the remote server as the target user
CODE:
$ ssh user@server

8. Go to its .ssh folder
CODE:
$ cd .ssh

9. Put your public key in the authorized keys file
CODE:
$ cat id_dsa.pub >> authorized_keys2

10. Set private permissions on authorized_keys2
CODE:
$ chmod 0600 authorized_keys2

11. Delete the public key on the remote server
CODE:
$ rm id_dsa.pub

12. Exit the server
CODE:
$ exit


Now, if everything is correct, you should be able to ssh, scp and sftp to the target server without a password.

Now that no password prompts get in the way we can set up the backup script. It's not the most elegant, but it looks like this - you'll need to change the values of the variables at the top, of course. This script will dump the whole mysql database and back that up too.

#!/bin/sh
logfile='/var/log/sync_'$(date "+%d%m%Y").log
mysqlDumpFile='path_to_local_mysql_dump_folder/mysql-'$(date "+%d%m%Y%H%M").sql
dbuser=database_username
dbpass=database_password
backupCustodian=myemail@me.com
remoteUser=remote_username
remoteServer=remote.domain.com
remotePath=/home/remoteUser/backup_path
localServerName=friendly_local_server_name

rm -f /home/drupal_user/mysqlDUMP/*   # clear out previous dumps (adjust to wherever mysqlDumpFile points)
rm -f /var/log/sync_*                 # clear out previous sync logs
echo 'START BACKUP '$logfile >> $logfile
echo 'RUNNING MYSQL DUMP...' >> $logfile
mysqldump -u"$dbuser" -p"$dbpass" -q -C -A > "$mysqlDumpFile" 2>> "$logfile"
echo 'mysql dump file:' >> $logfile
ls -la $mysqlDumpFile >> $logfile 2>&1
echo 'SYNCHRONIZING HOME DIRECTORIES...' >> $logfile
rsync -avz --delete --max-size=200m /home/ $remoteUser@$remoteServer:$remotePath >> $logfile 2>&1
echo 'END BACKUP '$logfile >> $logfile
mail -s "$localServerName BACKUP LOG $(date "+%d%m%Y")" "$backupCustodian" < "$logfile"


Backing Up Configuration Files

I also used webmin -> backup configuration files and set up a schedule to back up all the config files to the backup server every month.

Backing up Virtual Servers.

The rsync script will keep our files synchronised every night, but if we ever need to do a full restore it's nice to have all the virtualmin virtual servers nicely packaged up so we can just import them back using virtualmin again. virtualmin -> backup and restore takes care of this, so I set up a schedule in there to do a full backup of all the virtual servers once a month, again via ssh onto the backup server.

Section 7: Drupal


Using webmin, create a new user drupal_owner
Set that user's home directory permissions to 444
In that user's home directory, download and unpack the latest Drupal.
Now create a symlink in that folder called current_drupal that points to the folder you just unpacked
e.g. ln -s ./drupal-6.10 ./current_drupal

All of the drupal sites on our server are going to use the one codebase, but because each site might have a different administrator the site-specific files (the ones in the sites folder) are going to be located within the virtual server home directories. When we upgrade core, we'll just need to unzip the new version, update the current_drupal symlink to point to the new version and run update.php for each of the drupal sites we're hosting. (Well, in theory it might be that easy.).
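
A sketch of that upgrade, with the version number and download URL only as examples:

cd /home/drupal_owner
wget http://ftp.drupal.org/files/projects/drupal-6.11.tar.gz
tar -xzf drupal-6.11.tar.gz
ln -sfn ./drupal-6.11 ./current_drupal
# then run update.php on each of the hosted sites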

We can make things a bit easier for ourselves by creating the basic drupal folder structure in the /etc/skel folder, so this will be used every time a new account is created.
As root, copy the /etc/skel folder to /etc/drupalskel
cp -R /etc/skel /etc/drupalskel
Inside the drupalskel folder, create new directories called files, modules and themes.
Set the permissions on files to 775
Copy the default drupal settings.php into this folder
cp /home/drupal_owner/current_drupal/sites/default/default.settings.php /etc/drupalskel/settings.php
and set the permissions on that to 775
edit the settings.php file and set the cookie_domain line to the following:
$cookie_domain = '${DOM}';

Create a new file in this same folder called linkDrupal.sh and put the following in it:
#!/bin/bash
ln -s /home/${USER}/drupal /home/drupal_owner/current_drupal/sites/${DOM}
and set its permissions to 744


In the virtualmin server templates configuration, set the default template to /etc/drupalskel


Section 8: Setting up a New Drupal Site


Create a new virtual server from within virtualmin. Depending on the website that is being set up you might want to change some of the default enabled features.
Once it's created, go to the home directory of the new user created by virtualmin.
Run linkDrupal.sh to set up the symlink required to let drupal know where our new site is (see the example below).
Browse to the domain name set up with virtualmin and you should see the drupal setup page.
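
For example (the user and domain are made up; DOM isn't set for you, so export it first, and run it as the site user so ${USER} expands to the right name - you may also need to give that user write access to the sites folder, which I haven't covered here):

su - exampleuser
export DOM=example.com
sh ./linkDrupal.sh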

Note on backing up sites:

in virtualmin -> backup and restore -> scheduled backups, I setup a schedule to backup every virtual server locally every month and to delete old backups after 90 days