Our initial server had a RAID1 system disk (RAID1 requires two disks so
that the complete data is mirrored at all times), dual power supplies, dual
fibre cards, dual network ports, and fibre access to the SAN for MySQL
binlogs and backups. MySQL binlogs record every SQL statement run
since the last backup and, in conjunction with that backup, can be used
to restore a corrupted database.
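By way of illustration, a point-in-time restore means reloading the most recent SQL dump and then replaying the binlogs recorded after it. The short Python sketch below simply drives the standard mysql and mysqlbinlog command-line tools; the file names, database name and credentials are hypothetical placeholders, not our actual backup layout.

    import subprocess

    # Hypothetical placeholders for the last full dump and the binlogs written since.
    DUMP = "backup_2012-01-01.sql"
    BINLOGS = ["binlog.000042", "binlog.000043"]
    DB = "lims"

    # 1. Reload the last full SQL backup.
    subprocess.run("mysql -u root -p %s < %s" % (DB, DUMP), shell=True, check=True)

    # 2. Replay every statement recorded in the binlogs since that backup.
    subprocess.run("mysqlbinlog %s | mysql -u root -p %s" % (" ".join(BINLOGS), DB),
                   shell=True, check=True)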
Our new cluster consists of blades and storage in one chassis. Shared
storage is provided via SAS (serial attached SCSI) connections to each
blade. As well as RAID5, a parallel file system is used: GPFS, IBM's
General Parallel File System [16]. The chassis includes dual power
supplies, dual network switches and dual SAS switches. Ultimately,
the hardware and system software choices are not important, provided
that they enable system redundancy and remove any single point of
failure. Our choices are based on previous experience and expertise
held by our group. Although we want the same basic software
stack - Linux, MySQL, Apache, Python - we need to expand it to handle
the complexities of failover. One key design decision was to split
the database and application operations. Now we have three blades
running Apache to handle the application layer and three blades for the
MySQL database.
There are multiple topologies available for designing a MySQL cluster. We
opted for master-slave, in our case a single master and two slaves.
Additional software is required to handle this cluster. After evaluating
several solutions we opted for Tungsten Enterprise from Continuent [17].
Tungsten Replicator sits on our MySQL blades and ensures that the
transactions carried out on the master are also repeated on the slaves.
Tungsten Connector sits on the application blades and is the endpoint to
which all MySQL connections point. Depending on the type of request, it
directs writes to the master and reads to a slave. Separating the
read and write commands offers substantial performance benefits. As
the system grows, further slaves can be added to accommodate more
traffic. If the master crashes, one of the slaves is promoted automatically.
The ability to move the master around means zero downtime for
upgrades and maintenance.
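From the application's point of view the connector makes the whole cluster look like a single MySQL server. The sketch below illustrates this under stated assumptions: the host, port, credentials and schema are hypothetical placeholders, and it uses the mysql-connector-python driver purely for illustration rather than the driver we actually deploy.

    import mysql.connector

    # Connect to the local Tungsten Connector, not to any particular MySQL blade.
    # Host, port, user, password and database are hypothetical placeholders.
    conn = mysql.connector.connect(host="127.0.0.1", port=9999,
                                   user="app", password="secret", database="lims")
    cur = conn.cursor()

    # A write: the connector routes this statement to the current master.
    cur.execute("INSERT INTO samples (name) VALUES (%s)", ("S-0001",))
    conn.commit()

    # A read: the connector is free to route this to one of the slaves.
    cur.execute("SELECT id, name FROM samples WHERE name = %s", ("S-0001",))
    print(cur.fetchall())

    cur.close()
    conn.close()

Because the routing happens in the connector, the application code does not need to change when a slave is promoted to master.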
For the application layer the setup is more complex than for the database
layer. We need highly available web servers that can scale. We have long
used Apache and continue to do so; using mod_wsgi we can link it to
Python [18]. Our application blades are set up with Apache, but Apache
alone provides no failover. Nginx is an up-and-coming web server which
offers a proxy function [19]. We run a single instance of Nginx, which is
the entry point for all URL requests. It then splits requests between the
Apache servers.
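The glue between Apache and Python is the WSGI interface: mod_wsgi loads a module that exposes a callable named application. The fragment below is a minimal, self-contained sketch of that interface, not our actual application code.

    def application(environ, start_response):
        # mod_wsgi calls this for every request routed to the script.
        # environ holds the CGI-style request variables, such as the path.
        body = ("Hello from %s\n" % environ.get("PATH_INFO", "/")).encode("utf-8")
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

The same callable can also be served without Apache, for example with Python's built-in wsgiref.simple_server, which is convenient when testing application code in isolation from the blades.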