Multiple MySQL slave instances on a single server

Just had this scenario:

Servers A, B, and C each run a different Rails app backed by a MySQL DB installed locally on each server. Server D should act as a slave for each of the DBs in order to keep an up-to-date copy of each DB in case of a HD crash. Backups of the DBs are also taken from the slave, to avoid locking the DBs on the prod servers while mysqldump runs.

So how do we do that?

First step is to be able to run multiple MySQL instances on server D.

Seems that the preferred way to do this with MySQL 5.0 is to use the MySQL Instance Manager.
Unfortunately, the /etc/init.d/mysql script you get when installing MySQL on Ubuntu using
apt-get does not use the MySQL instance manager.

So I installed from source:
tar xvzf mysql-5.0.85.tar.gz
cd mysql-5.0.85/
CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors \
-fno-exceptions -fno-rtti" ./configure \
--prefix=/usr/local/mysql --enable-assembler
make
sudo make install

Set up some symlinks
sudo ln -s /usr/local/mysql/bin/mysql /usr/local/bin
sudo ln -s /usr/local/mysql/bin/mysqldump /usr/local/bin
sudo ln -s /usr/local/mysql/libexec/mysqlmanager /usr/local/sbin

Installed the /etc/init.d/mysql script and made it use the MySQL Instance Manager
sudo sh -c "sed 's/use_mysqld_safe=1/use_mysqld_safe=0/' support-files/mysql.server > /etc/init.d/mysql"
sudo chmod 755 /etc/init.d/mysql
sudo update-rc.d mysql defaults

Installed the MySQL configuration file. Note that this lives in /etc/my.cnf and _not_ in /etc/mysql/my.cnf
sudo cp support-files/my-large.cnf /etc/my.cnf

Added the following to the top of /etc/my.cnf
socket = /var/lib/mysql/manager.sock
pid-file = /var/run/mysql/
password-file = /etc/mysqlmanager.passwd
monitoring-interval = 3600
user = mysql
log = /var/log/mysql/mysql-man.log


Create the mysql user and the necessary directories
sudo groupadd mysql
sudo useradd -g mysql mysql
sudo mkdir -p /var/lib/mysql /var/run/mysql /var/log/mysql
sudo chown mysql:mysql /var/lib/mysql /var/run/mysql /var/log/mysql

Create the mysqlmanager password
sudo sh -c "mysqlmanager --passwd > /etc/mysqlmanager.passwd"
sudo chown mysql:mysql /etc/mysqlmanager.passwd
sudo chmod 600 /etc/mysqlmanager.passwd

Create the data directories
sudo /usr/local/mysql/bin/mysql_install_db --user=mysql --datadir=/usr/local/mysql/var/data
sudo /usr/local/mysql/bin/mysql_install_db --user=mysql --datadir=/usr/local/mysql/var/data1
sudo /usr/local/mysql/bin/mysql_install_db --user=mysql --datadir=/usr/local/mysql/var/data2

Replace the [mysqld] section in /etc/my.cnf with the following three sections – the Instance Manager treats each [mysqldN] group as a separate instance, and the datadir lines point each instance at one of the data directories created above:
[mysqld]
datadir = /usr/local/mysql/var/data
port = 3306
socket = /tmp/mysql.sock
server-id = 10
relay_log = mysql-relay-bin
log_slave_updates = 1

[mysqld1]
datadir = /usr/local/mysql/var/data1
port = 3307
socket = /tmp/mysql1.sock
server-id = 11
relay_log = mysql-relay-bin
log_slave_updates = 1

[mysqld2]
datadir = /usr/local/mysql/var/data2
port = 3308
socket = /tmp/mysql2.sock
server-id = 12
relay_log = mysql-relay-bin
log_slave_updates = 1

Start the MySQL server
sudo /etc/init.d/mysql start

Connect to the MySQL Instance Manager
mysql -u root --socket=/var/lib/mysql/manager.sock -p

mysql> show instances;
+---------------+--------+
| instance_name | status |
+---------------+--------+
| mysqld        | online |
| mysqld2       | online |
| mysqld1       | online |
+---------------+--------+


Exit and connect to the MySQL DB running on port 3308:
mysql -u root -P 3308 -h 127.0.0.1
Note that you need to specify the host (-h) option. Otherwise, the mysql client ignores the port option and simply connects via the default socket to the instance running on port 3306.
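For completeness, each instance can then be reached over TCP like this (127.0.0.1 is just an assumed host; any explicit host makes the client use TCP instead of the default socket):

```shell
mysql -u root -h 127.0.0.1 -P 3306 -p   # instance [mysqld]
mysql -u root -h 127.0.0.1 -P 3307 -p   # instance [mysqld1]
mysql -u root -h 127.0.0.1 -P 3308 -p   # instance [mysqld2]
```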

I will write a follow-up post on how to set up the actual replication. Hope someone finds this useful :-)

How REE and GC tuning reduced spec suite runtime to one third

Or how my spec suite runtime went from 11 minutes and 10 seconds to 3 minutes and 29 seconds!

The Rails project I am currently working on is developed using BDD. This means that it has a big, fat spec suite. Or to be more specific: it has 10033 examples!

This is very nice except for one thing: it is slooow :-(

On my shiny (literally) new 2.4 GHz MacBook Pro the spec suite has a runtime of 670 seconds, i.e. 11 minutes and 10 seconds – yikes! This is with the Ruby interpreter shipped with Mac OS X Leopard.

Watching top, the suite seems to be mostly CPU bound: the ruby process hovers at around 95-100% CPU usage.

Ruby Enterprise Edition to the rescue!

Previously, I tried running the spec suite with the 1.8.6-20080810 version of REE, and it did not change the runtime significantly.

The new 1.8.6-20081205 version has some interesting changes. First of all, the tcmalloc memory allocator now works with Mac OS X. And second of all, it has integration with the RailsBench garbage collector patches which allows for tweaking the GC settings of the ruby interpreter.

So what does that mean in “real life”?

I downloaded and installed the new version of REE and ran the spec suite. The runtime with the new REE version was 436 seconds, i.e. 7 minutes and 16 seconds, chopping off nearly 4 minutes – VERY nice!

RailsBench GC patches to the rescue!

I decided to experiment a little with the GC settings, i.e.

export RUBY_GC_MALLOC_LIMIT=64000000

and reran the spec suite. The result: 221 seconds, i.e. 3 minutes and 41 seconds. I then tried RUBY_GC_MALLOC_LIMIT=256000000, and the result: 209 seconds, i.e. 3 minutes and 29 seconds – holy Batman!
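These settings are plain environment variables inherited by child processes, so the whole suite runs under the tuned GC with nothing more than an export in front of the spec command. A quick sanity check that a child process actually sees the value:

```shell
export RUBY_GC_MALLOC_LIMIT=256000000
# Any child process (ruby, rake, spec) started from this shell inherits it:
sh -c 'echo "malloc limit: $RUBY_GC_MALLOC_LIMIT"'
# prints: malloc limit: 256000000
```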

Thank you guys!

I suggest you go to and recommend Hongli Lai, Ninh Bui and Stefan Kaes like I just did – they deserve it.

Automatic Rails on Ubuntu 8.04 LTS

A couple of weeks ago there was a post on the FiveRuns blog about automatically installing the Rails stack on an Ubuntu 8.04 VPS.

I prefer to use Passenger and Ruby Enterprise Edition when running my Rails app, so inspired by the FiveRuns script I wrote my own version – here is the gist on github.

# Inspired by

apt-get update
apt-get upgrade -y
apt-get -y install build-essential libssl-dev libreadline5-dev zlib1g-dev
apt-get -y install mysql-server libmysqlclient15-dev mysql-client
apt-get -y install ruby ruby1.8-dev irb ri rdoc libopenssl-ruby1.8

tar xzf $RUBYGEMS.tgz
cd $RUBYGEMS
ruby setup.rb
cd ..

# Install Ruby Enterprise Edition
tar xvzf ruby-enterprise-1.8.6-20080810.tar.gz
yes '' | ./ruby-enterprise-1.8.6-20080810/installer

# Install Passenger
/usr/bin/gem1.8 install -v=2.0.3 passenger --no-rdoc --no-ri
apt-get -y install apache2-mpm-prefork apache2-prefork-dev
yes '' | passenger-install-apache2-module

# Create sample Rails app
/usr/bin/gem1.8 install rails --no-rdoc --no-ri
cd /var/www
rails -d mysql hello
cd hello
./script/generate controller welcome hello
echo "Hello World" > app/views/welcome/hello.html.erb
rake db:create RAILS_ENV=production

# Create the Apache2 Passenger module files
cat >> /etc/apache2/mods-available/passenger.load <<-EOF
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.0.3/ext/apache2/mod_passenger.so
EOF
cat >> /etc/apache2/mods-available/passenger.conf <<-EOF
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.0.3
PassengerRuby /opt/ruby-enterprise-1.8.6-20080810/bin/ruby
EOF
a2enmod passenger

# Create a site file for the sample Rails app
IP_ADDRESS=`ifconfig eth0 | sed -n 's/.*dr:\(.*\) Bc.*/\1/p'`
cat >> /etc/apache2/sites-available/hello <<-EOF
<VirtualHost *:80>
ServerName $IP_ADDRESS
DocumentRoot /var/www/hello/public
</VirtualHost>
EOF
a2ensite hello

# That's it!
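One fragile spot worth calling out: the IP_ADDRESS line relies on the old ifconfig output format ("inet addr:… Bcast:…"). The sed expression can be sanity-checked against a canned line (10.0.0.5 is a made-up address):

```shell
# Same sed expression as the script, run on a sample ifconfig line:
echo '          inet addr:10.0.0.5  Bcast:10.0.0.255  Mask:255.255.255.0' \
  | sed -n 's/.*dr:\(.*\) Bc.*/\1/p'
# prints: 10.0.0.5 (plus a trailing space)
```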

The script assumes that you have ssh access as root to a clean Ubuntu 8.04 install.

The script will install

  • Ruby 1.8.6
  • RubyGems 1.3.0
  • Passenger 2.0.3
  • Ruby Enterprise Edition 20080810
  • Apache 2.2.8
  • MySQL 5.0.51a
  • A sample Rails app

Note that the Passenger installer will install the latest Rails (2.1.1) and a bunch of other useful gems.

Assuming that your server IP address is you can run it like this:

ssh root@ "wget -O - | sed -e s/$'\r'//g >; /bin/bash; rm"

Sit back and enjoy – in less than ten minutes you will have the full Rails stack and a sample Rails app running. Take a look at it on

Ruby Fools presentation slides

Today I gave a presentation at the Ruby Fools Copenhagen 2008 Conference.

The presentation was about adding full text search to a Rails app.

Here is a pdf with my presentation:

Adding Full Text Search to Your Rails App

The conference was arranged by the same crew that does the JAOO conference, and most (all?) presentations were recorded on video. When the videos are available online I will post a link.

Benchmarking fun with JRuby 1.1 RC2, glassfish, and Rails 2.0.2

Yesterday JRuby 1.1 RC2 was released and two days ago the glassfish gem v 0.1.1 was released. Lots of interesting stuff happening in JRuby land!

I decided to take JRuby and the glassfish gem for a spin with a simple Rails application.
Installing JRuby

First step was to download and install JRuby. This is pretty straightforward:
cd /tmp
tar xvzf jruby-src-1.1RC2.tar.gz
cd jruby-1.1RC2/
export JRUBY_HOME=`pwd`
export PATH=$JRUBY_HOME/bin:$PATH
jruby --version
ruby 1.8.6 (2008-02-17 rev 5944) [i386-jruby1.1RC2]

Yep, seems to work.

Installing gems

Next step was to install the Rails and glassfish gems:
unset GEM_HOME
unset GEM_PATH
gem install rails
gem install glassfish

Creating a Rails application

On to the Rails application… I used scaffold to have a simple application up and running quickly:
cd ..
rails glassfishtest --database=mysql
cd glassfishtest/
export RAILS_ENV=production
rake db:sessions:create
script/generate scaffold Book title:string
rake db:create
rake db:migrate
script/runner "Book.create(:title => 'JRuby Rocks')"

I use the database session store, so I added this line to the config/environment.rb file
config.action_controller.session_store = :active_record_store

Firing up glassfish

Let’s fire up the glassfish server:
cd ..
glassfish_rails glassfishtest -n 2

The -n 2 option will make glassfish start 2 Rails instances.

Benchmark fun!

I used the ab command to perform some simple benchmarks.
Each ab command was run twice with a freshly started glassfish server. The first run warms up the JIT in the JVM. The results listed below are for the second run (and the fifth run for some). All benchmarks were performed on my 2.33GHz MacBook Pro running Leopard 10.5.2 with Java version 1.5.0_13-b05-237.

The performance with respect to static files is impressive:
ab -n 5000 -c 10 http://localhost:3000/
Requests per second: 2705.63 [#/sec] (mean)

Now onto a page created by Rails:
ab -n 1000 -c 8 http://localhost:3000/books/1
Requests per second: 54.10 [#/sec] (mean)

JRuby can be tweaked a little bit with the -server parameter:
JAVA_OPTS="-server" glassfish_rails glassfishtest -n 2
ab -n 1000 -c 8 http://localhost:3000/books/1
Requests per second: 53.82 [#/sec] (mean) 2nd run
Requests per second: 63.06 [#/sec] (mean) 5th run

After a little warmup the performance is approximately 20% better than without the -server option.

Let’s try adding more Rails instances:
JAVA_OPTS="-server" glassfish_rails glassfishtest -n 4
Requests per second: 50.71 [#/sec] (mean) 2nd run
Requests per second: 60.69 [#/sec] (mean) 5th run

On my dual core machine this actually degrades performance a little bit. I guess it is a good idea to have the number of Rails instances match the number of cores in your server.

But what about one Rails instance:
JAVA_OPTS="-server" glassfish_rails glassfishtest -n 1
Requests per second: 31.56 [#/sec] (mean) 2nd run
Requests per second: 34.48 [#/sec] (mean) 5th run

That hurts!


How does Mongrel compare to glassfish?
Single Mongrel – JRuby
JAVA_OPTS='-server' jruby script/server -e production
Requests per second: 54.99 [#/sec] (mean) 2nd run
Requests per second: 63.20 [#/sec] (mean) 5th run

Two Mongrels behind pen – JRuby
Requests per second: 58.39 [#/sec] (mean) 2nd run
Requests per second: 69.16 [#/sec] (mean) 10th run

Static files:
Requests per second: 313.57 [#/sec] (mean)

Mongrel and the glassfish server have comparable performance with respect to Rails generated pages.
With respect to serving static files, glassfish outperforms Mongrel significantly. That said, you shouldn’t really let Mongrel serve static content – it is better to leave that to nginx or Apache.

Mongrel – MRI

What is the performance when using MRI?
Single Mongrel – MRI
Requests per second: 120.79 [#/sec] (mean)

Two Mongrels behind pen – MRI
Requests per second: 123.42 [#/sec] (mean)

The MRI Mongrel seems to have a lot better performance for this (admittedly simple) benchmark.


With respect to ease of running a server, the JRuby/glassfish combo is very appealing:

  • static files are served very fast
  • no need for a separate load balancer
  • the whole thing is started with just one command

For this particular Rails application benchmark, the performance of the JRuby stack is only half of the performance of MRI, which is kind of sad. I am pretty sure that this is not the case for all Rails applications. In fact, evidence from Mingle seems to indicate that JRuby is faster than MRI. So I guess the best thing is to try it out on your own Rails app – and please blog about your findings. If you decide to benchmark your own Rails app I highly recommend this peepcode screencast about benchmarking.

My first Rails Contribution

Yeah! I am a Rails contributor!

In an application at work, we use a Rails REST application as the backend and another Rails application as the frontend. The frontend application does not use the database at all but only the REST API provided by the backend application.

When sending lots of data between the two applications, serializing to and from XML turned out to be a performance bottleneck. We switched to JSON, and this improved performance significantly.
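To give a rough feel for the gap (an illustrative, stdlib-only micro-benchmark on synthetic records – not our app's actual ActiveResource payloads), compare serializing the same data as JSON and as XML via REXML:

```ruby
require 'json'            # stdlib JSON (adds #to_json to core classes)
require 'rexml/document'  # stdlib XML library
require 'benchmark'

records = (1..200).map { |i| { "id" => i, "title" => "Book #{i}" } }

# Serialize the whole collection to a JSON array, repeatedly.
json_time = Benchmark.realtime { 10.times { records.to_json } }

# Build the equivalent XML document with REXML, repeatedly.
xml_time = Benchmark.realtime do
  10.times do
    doc  = REXML::Document.new
    root = doc.add_element("books")
    records.each do |r|
      book = root.add_element("book")
      book.add_element("id").text    = r["id"].to_s
      book.add_element("title").text = r["title"]
    end
    doc.to_s
  end
end

puts "JSON: #{format('%.4f', json_time)}s  XML: #{format('%.4f', xml_time)}s"
```

JSON typically wins this kind of comparison by a wide margin, which matches what we saw between the two apps; the exact ratio will of course differ for real Rails models.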

The JSON support in ActiveResource was added recently and there are still some areas where XML is better supported than JSON. So I submitted a patch to improve the JSON support. The patch got submitted to trunk in this changeset.

It feels really good to contribute back to Rails when Rails has brought me so many hours of joy :-)

Experimenting with Amazon S3 EU edition

Today Amazon announced the availability of S3 in Europe.

Nice! Let’s play with it! Please notice that I am located in Denmark and that all tests were performed on my 2048/512 ADSL line.

Download the new version of Amazon S3 Authentication Tool for Curl

Unzip it and create an .s3curl file containing your AWS keys as described in the readme file.

Now let’s create some buckets – a US bucket and an EU bucket:
--id personal --createBucket --
--id personal --createBucket=EU --
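For reference, the full form of the two calls looks roughly like this – the script name and bucket URLs here are placeholders for the ones that belong in your setup (s3curl expects the target URL after the bare --):

```shell
./s3curl.pl --id personal --createBucket -- http://s3.amazonaws.com/my-us-bucket
./s3curl.pl --id personal --createBucket=EU -- http://s3.amazonaws.com/my-eu-bucket
```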

Fetch some test files: a 50K file and a 10MB one:


And upload them:
--id=personal --acl public-read --put 10Mtestb.rnd --
--id=personal --acl public-read --put 50Ktest.rnd --
--id=personal --acl public-read --put 10Mtestb.rnd --
--id=personal --acl public-read --put 50Ktest.rnd --

Try fetching the large file from the US bucket a couple of times:

ab -n 1
Time taken for tests: 50.325 seconds
Transfer rate: 208.37 [Kbytes/sec] received

ab -n 1
Time taken for tests: 48.351 seconds
Transfer rate: 216.87 [Kbytes/sec] received

And the EU bucket:

ab -n 1
Time taken for tests: 47.907 seconds
Transfer rate: 218.88 [Kbytes/sec] received

ab -n 1
Time taken for tests: 50.943 seconds
Transfer rate: 205.84 [Kbytes/sec] received

With respect to transfer rate, the two buckets perform about the same from my local machine’s point of view. But I guess that is to be expected: the EU bucket should give better response times, and for large files the response time is only a small fraction of the total transfer time.

But what about the small file?

US bucket

ab -n 50
Time taken for tests: 60.308 seconds
Time per request: 1206.16 [ms] (mean)

EU bucket

ab -n 50
Time taken for tests: 26.676 seconds
Time per request: 533.52 [ms] (mean)

Now we’re talking!
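Worked out as a ratio, that is roughly a 2.3x improvement in mean request time for the small file:

```shell
# Mean time per request, US bucket vs. EU bucket, from the ab runs above:
awk 'BEGIN { printf "%.2f\n", 1206.16 / 533.52 }'
# prints: 2.26
```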

Summary: For large files you might just as well use the US variant of S3. But if you use S3 for serving the static files of your web site and most of your visitors come from Europe, switching to the EU S3 should give your users significantly better load times.