LinuxForce’s web hosting services are designed to provide our customers with the benefits of strong security, simple on-going administration and maintenance, and support for the web software to work well in a large number of situations including multi-site support for applications.
In particular, we use the Debian package for Drupal 6. This provides community supported upgrades with a strong, well-documented policy. Like other software which ships in Debian, the software version is often somewhat older than the most recent upstream release, but it is regularly patched for security by the maintainer and the Debian security team.
In addition, this infrastructure also offers:
Only one package needs to be upgraded whenever there is a security patch, instead of every site individually
Strong separation of site-specific themes, modules, libraries, and uploaded files keeps the files you want to edit away from the Drupal core files (which you don’t want to edit)
The automation the infrastructure provides allows disk, memory, and sysadmin time to be minimized, thus reducing costs
The benefits of code maturity, as the Debian Drupal maintainers have thought through many boundary cases which it would take our staff time and trial-and-error to re-discover
Experienced Drupal admins may find some of the file locations for the package confusing at first, so let’s clarify the differences to ease the transition.
Our infrastructure supports multiple sites per server. This is implemented by providing each site with its own dbconfig.php and settings.php files, plus per-site directories for uploaded files, site-specific modules, themes, and libraries.
All of these are configurable by the user, and are located in /etc/drupal/6/sites/drupal.example.com/. The site also inherits the contents of these default directories from core Drupal.
Access to these files via FTP is discussed below.
Core Drupal files are located in /usr/share/drupal6/ and are shared between all the Drupal sites on the server. They should not be edited since all changes to these files will be lost upon upgrade of the Debian Drupal package.
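As a quick orientation, the split between shared and per-site files described above looks roughly like this (a sketch; drupal.example.com is the article’s example site, and exact per-site contents will vary by setup):

```
/usr/share/drupal6/                       # Drupal core: shared by every site,
                                          # package-owned, never edit
/etc/drupal/6/sites/drupal.example.com/   # per-site: yours to edit
    dbconfig.php      # database credentials for this site
    settings.php      # site configuration
    files/            # uploaded files (writable by the web server)
```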
Users & Permissions
A jailed FTP account, userdrupal, is created to manage the files for the Drupal install.
The userdrupal account is the owner of all the configurable files located in /etc/drupal/6/sites/drupal.example.com/
The system-wide www-data user must have access to the following by having the www-data group own them and be given the appropriate permissions:
Read access to dbconfig.php
Write access to the files/ directory (this is where Drupal typically stores uploaded files)
All files (with the exception of dbconfig.php) should be writable by userdrupal. Group ownership by the userdrupal group is enforced by the system, but each user should also be configured client-side to respect group read-write permissions (that is, a umask of 002) to maintain strict security.
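To make the scheme concrete, here is a minimal sketch of the permission bits involved, demonstrated in a throwaway scratch directory. We use chmod alone, since actually assigning the userdrupal owner and www-data group requires root; the layout mirrors the article’s example site.

```shell
# build a scratch copy of the per-site layout
site=$(mktemp -d)/drupal.example.com
mkdir -p "$site/files"
touch "$site/dbconfig.php" "$site/settings.php"

# dbconfig.php: readable by the web server group, not group-writable
chmod 640 "$site/dbconfig.php"
# files/: the web server must be able to write uploads here
chmod 770 "$site/files"
# everything else: group read-write so userdrupal group members can edit
chmod 664 "$site/settings.php"

# a umask of 002 makes newly created files group-writable by default
umask 002
touch "$site/files/upload.txt"   # created with mode 664

stat -c '%a %n' "$site/dbconfig.php" "$site/files" \
    "$site/settings.php" "$site/files/upload.txt"
```

On the real system the same modes are combined with userdrupal ownership and www-data group membership to give the separation described above.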
Additional Users and non-Drupal Directories
If a non-developer needs access to the files directory, for instance, a dedicated FTP user can be created for that purpose and added to the userdrupal group.
For ease of administration, if multiple users exist and we are able to support an ssh account (a static IP from the client is required) for handling administration, sudo can be configured to allow that user to execute commands on behalf of the userdrupal FTP user. Remember to add “umask 002” to the .bashrc to respect group read-write permissions.
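The sudo arrangement could look something like this (a sketch; the ssh user name alice is hypothetical):

```
# /etc/sudoers.d/userdrupal  (edit with visudo):
# let the ssh user "alice" run commands as the userdrupal account
alice ALL=(userdrupal) ALL

# and in alice's ~/.bashrc, as noted above:
umask 002
```

alice would then run commands such as `sudo -u userdrupal <command>` when working on the site files.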
If additional directories outside the Drupal infrastructure are required for a site, they will be placed in /srv/www/example.com/ and a separate jailed ftp user created to manage the files here. If a cgi-bin is required, it will be placed in /srv/www/example.com/cgi-bin/
Caveats and Cautions
Because the core Drupal files located in /usr/share/drupal6/ are shared by all sites, you may experience problems with the following:
Some drush commands and plugins assume the Drupal core files are editable and live in the same document root as the rest of your files, and thus may not work as expected
When you upgrade the drupal6 package, all sites are upgraded at once without testing. Customers who are concerned about the impact of the upgrade changes and require very high uptime can be accommodated by testing the site against the upgraded versions of PHP and Drupal in a testing VM
Since the Debian package name does not change, you cannot install the drupal6 package from an older Debian release alongside one from the current release; a separate virtualized Debian environment may be needed for testing upgrades if support is uncertain. (This issue does not exist with upgrades from drupal6 to drupal7, as those packages can be installed alongside each other)
A policy for handling root-level .htaccess files should be developed if they are to be used
Although the Debian way differs from the Drupal tarball approach, it makes it possible to scale the service to many sites, saving disk, memory, and sysadmin effort. By leveraging this Drupal infrastructure provided by Debian, LinuxForce provides one-off Debian package deployments to dedicated systems, shared arrangements for small businesses who are running several sites, and infrastructure deployments for businesses who provide hosting services. We also offer a boutique hosting service for select customers on one of our systems.
In January I attended the 10th annual Southern California Linux Expo. In addition to speaking and running the Ubuntu booth, I had an opportunity to talk to other sysadmins about everything from selection of distribution to the latest in configuration management tools and virtualization technology.
I ended up in a conversation with a fellow sysadmin who was using a proprietary virtualization technology on Red Hat Enterprise Linux. Not only did he have surprising misconceptions about the FOSS (Free and Open Source Software) virtualization tools available, he assumed that some of the features he was paying extra for (or not, as the case may be) wouldn’t be in the FOSS versions of the software available.
Here are five features that you might be surprised to find included in the space of FOSS virtualization tools:
1. Data replication with verification for storage in server clusters
When you consider storage for a cluster there are several things to keep in mind:
Storage is part of your cluster too, so you want it to be redundant
For immediate failover, you need replication between your storage devices
For data integrity, you want a verification mechanism to confirm the replication is working
Regardless of what you use for storage (a single hard drive, a RAID array, or an iSCSI device), the open source DRBD (Distributed Replicated Block Device) offers quick replication over a network backplane and verification tools you can run at regular intervals to ensure data integrity.
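In practice, the periodic verification might be scheduled like this (a sketch: the resource name r0 is hypothetical, and DRBD’s online verify requires a verify-alg to be configured for the resource):

```
# /etc/cron.d/drbd-verify -- weekly online verification, Sundays at 3am
0 3 * * 0  root  /sbin/drbdadm verify r0

# out-of-sync blocks, if any, are reported in the kernel log:
#   grep -i 'out of sync' /var/log/kern.log
```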
Looking to the future, the FOSS distributed object store and file system Ceph is showing great promise for more extensive data replication.
2. Automatic failover in cluster configurations
Whether you’re using KVM (Kernel-based Virtual Machine) or Xen, automatic failover can be handled via two closely integrated FOSS tools, Pacemaker and Corosync. At the core, Pacemaker manages the resources themselves, including deciding where each virtual machine should run, while Corosync provides the cluster membership, messaging, and quorum layer that tells Pacemaker which hosts are alive.
3. Graphical interface for administration
While development of graphical interfaces for administration is an active area, many of the basic tasks (and increasingly, more complicated ones) can be made available through the Virtual Machine Manager application. This manager uses the libvirt toolkit, which can also be used to build custom interfaces for management.
4. Live migrations

In virtualized environments it’s common to reboot a virtual machine to move it from one host to another, but when shared storage is used it is also possible to do live migrations on KVM and Xen. During a live migration, the virtual machine retains state as it moves between the physical machines. Since there is no reboot, connections stay intact and sessions and services continue to run with only a short blip of unavailability during the switchover.
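With libvirt, such a migration can be triggered with a single command (a sketch; the guest and host names are illustrative, and both hosts must see the same shared storage):

```
# move the running guest "guest1" to host2 without a reboot
virsh migrate --live guest1 qemu+ssh://host2/system
```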
5. Over-allocation of hardware resources

KVM has the option to take full advantage of hardware resources by over-allocating both RAM (with adequate swap space available) and CPU. Details about over-allocation and key warnings can be found here: Overcommitting with KVM.
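Before over-allocating RAM it is worth a quick sanity check of what the host can actually back with RAM plus swap; a minimal sketch (the guest sizes are made-up examples):

```shell
# total host memory available to back guests: RAM + swap, from /proc/meminfo
host_kb=$(awk '/^(MemTotal|SwapTotal):/ {sum += $2} END {print sum}' /proc/meminfo)

# two hypothetical guests at 8 GB and 4 GB
planned_mb=$((8192 + 4096))

echo "host RAM+swap: $((host_kb / 1024)) MB; planned for guests: ${planned_mb} MB"
```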
Data replication with verification for storage, automatic failover, graphical interfaces for administration, live migrations and over-allocation of shared hardware are all currently available with the FOSS virtualization tools included in many modern Linux distributions. As with any move to a more virtualized environment, deployments require diligent testing procedures and configuration, but there are many on-line resources available, and the friendly folks at LinuxForce are ready to help.
As announced in our Cluster Services Built With FOSS post, LinuxForce’s Cluster Services are built exclusively with Free and Open Source Software (FOSS). Here is an expanded outline of the basic architecture of our approach to High-Availability (HA) clustering.
In any HA deployment there are two main components: hosts and guests. The hosts are the systems which form the core of the cluster itself. Each host runs a very limited set of services, dedicated to the use and functioning of the cluster. The host systems handle resource allocation, from persistent storage to RAM to the number of CPUs each guest gets. The host machines give an “outside” look at guest performance and the opportunity to manipulate guests from outside the guest operating system. This offers significant advantages when there are boot or other failures which traditionally would require physical (or at least console) access to debug. The guests in this infrastructure are the virtual machines (VMs) which run the public-facing services.
On the host, we define a number of “resources” to manage the guest systems. Resources are defined for ping checking the hosts, bringing up shared storage or storage replication (like drbd) as primary on one machine or the other and launching the VMs.
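In Pacemaker’s crm shell, such resource definitions look roughly like this (a sketch; the resource names, the DRBD resource r0, the ping target address, and the libvirt config path are all illustrative):

```
primitive p_ping ocf:pacemaker:ping \
    params host_list="192.0.2.1" multiplier="100" \
    op monitor interval="15s"
primitive p_drbd ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
primitive p_vm ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/guest1.xml" \
    op monitor interval="30s"
ms ms_drbd p_drbd meta master-max="1" clone-max="2" notify="true"
clone cl_ping p_ping
colocation c_vm_on_drbd inf: p_vm ms_drbd:Master
order o_drbd_before_vm inf: ms_drbd:promote p_vm:start
```

The colocation and order constraints express that the VM may only run where the DRBD replica is primary, and only after it has been promoted.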
In the simplest case the cluster infrastructure is used for new server deployments, where the operating system installs are fresh and the services are new. More likely, a migration from an existing infrastructure will be necessary. Migrations from a variety of sources are possible, including from physical hardware, other virtualization technologies (like Xen), or different KVM infrastructures which may already use many of the same core features, like shared storage. When a migration is required, downtime can be kept to a minimum through several techniques.
The first consideration when you begin to build a cluster is the hardware. The basic requirement for a small cluster is 3 servers and a fast dedicated network backplane to connect the servers. The three servers can all be active as hosts, but we typically have a configuration where two machines are the hosts and a third, less powerful arbitrator system is available to make sure there is a way to break ties when there is resource confusion.
Two live resource hosts
These systems will be where the guests are run. They should be as similar as possible down to the selection of processor brand and amount of RAM and storage capabilities so that both machines are capable of fully taking over for the other in case of a failure, thus ensuring high availability.
The amount of resources required will be heavily dependent upon the services you’re running. When planning we recommend thinking about each guest as a physical machine and how many resources it needs, allowing room for inevitable expansion of services over time. You can over-commit both CPU and RAM on KVM, so you will want to read a best practices guide such as Chapter 6: Overcommitting with KVM. Disk space requirements and configuration will vary greatly depending upon your deployment, including the ability to use shared storage backplanes and replicated RAID arrays, but Linux Software RAID will typically be used for the core operating system install controlling each physical server. Additionally, using a thorough testing process so you know how your services will behave if they run out of resources is critical to any infrastructure change.
A third server is required to complete quorum for the cluster. In our configuration this machine doesn’t need to have high specs or a lot of storage space. We typically use at least RAID1 so we have file system redundancy for this host.
A fast switch whose only job is to handle traffic between the three machines is highly recommended to assure the speed of the two vital resources that share it: storage replication and cluster communication.
It’s best to keep these off a shared network, which may be prone to congestion or failure, since fast speeds for both these resources are important for a properly functioning cluster.
Key software components
There are many options when it comes to selecting your HA stack, from which Linux distribution to use, to what storage replication system to use. We have selected the following:
Like most LinuxForce solutions, we start with a base of Debian stable, currently Debian “Squeeze” 6.0. All of the software mentioned in this article comes from the standard Debian stable repository and is open source and completely free of charge.
Logical Volume Manager (LVM)
We use LVM extensively throughout our deployments for the flexibility of easy reallocation of filesystem resources. In a cluster infrastructure it is used to create separate disk images for each guest and then may be used again inside this disk image for partitioning.
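For example (a sketch; the volume group vg0 and guest names are illustrative, and the commands require root):

```
# one logical volume per guest, carved from volume group "vg0"
lvcreate -L 40G -n guest1-disk vg0
lvcreate -L 80G -n guest2-disk vg0

# grow a guest's disk later without repartitioning the host
lvextend -L +20G vg0/guest1-disk
```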
Distributed Replicated Block Device (DRBD)
DRBD is used for replicating storage between the two hosts which have their own storage. Storage needs could also be met by shared storage or other data replication mechanisms.
Kernel-based Virtual Machine (KVM)
Since hardware virtualization support is now ubiquitous on modern server hardware, we use KVM as our virtualization technology. It allows fully virtualized VMs running their own unmodified kernels, with guest code executing directly on the hardware rather than through slow emulation.
Pacemaker & Corosync
Pacemaker and Corosync are used together to do the heavy lifting of the cluster management. The two services are deeply intertwined, but at the core Pacemaker handles configuration and placement of the resources themselves, determining where each should run, while Corosync provides cluster membership, messaging, and quorum, tracking which hosts are alive.
We have deployed this infrastructure for mission critical services including DNS, FTP and web server infrastructures serving everything from internal ticketing systems to high-traffic public-facing websites. For a specific example of implementation of this infrastructure, see Laird Hariu’s report File Servers – The Business Case for High Availability where he covers the benefits of HA for file servers.
I was very impressed with the breadth of Widom’s approach to the subject: it was a major reason I decided to spend time on the course. Another strength is its nuts-n-bolts approach: some theoretical topics are covered but for the most part this is a course for practitioners. Finally, I particularly appreciated the extensive use of FOSS (Free and Open Source Software) in the course.
Why study databases? I will merely say that data is a core tool pervading the information resources of modern civilization. Databases are where data is housed. For example, the data constituting this blog is stored in a database and the same can be said for much (if not all) of the Internet. Databases are a profoundly vital, big picture subject.
Widom’s course is still open for enrollment in “archival mode” meaning you can watch the videos, work through the exercises, quizzes and exams, and track your progress, but the deadlines have expired and no more “Statements of Accomplishment” will be awarded (at least until the course is offered again). To complete the simple enrollment go to db-class.org and start learning about databases with FOSS today!
Although the course is broadly useful for anyone wanting to learn the basics of databases from a broad perspective, I found it to be particularly good for learning about the FOSS tools that can support database systems. So let’s start there.
FOSS Tools Covered
For traditional RDBMS, the course uses PostgreSQL, SQLite, and MySQL. Widom mentions some limitations of each database (DB) in regards to the SQL (Structured Query Language) standard, including important distinctions about using each system with triggers, transactions (PostgreSQL has the best support), views (MySQL supports updatable views, whereas PostgreSQL and SQLite use triggers to modify views), recursion (PostgreSQL only, and only in newer versions), and OLAP (only MySQL supports it, with ROLLUP).
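The view-modification difference is worth a concrete illustration. In SQLite (and, with similar syntax, recent PostgreSQL), a view is read-only until an INSTEAD OF trigger redirects writes to the base table; this sketch uses made-up table names:

```sql
CREATE TABLE staff(id INTEGER PRIMARY KEY, name TEXT, active INTEGER);
CREATE VIEW active_staff AS SELECT id, name FROM staff WHERE active = 1;

-- redirect inserts on the view into the base table
CREATE TRIGGER active_staff_ins INSTEAD OF INSERT ON active_staff
BEGIN
  INSERT INTO staff(name, active) VALUES (NEW.name, 1);
END;
```

MySQL, by contrast, allows inserts on a simple view like this one directly, with no trigger needed.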
On the XML side, I learned xmllint and SAXON for XML validation, querying and transformation. The course covers the basics of DTDs, XML Schema, XPath, XQuery and XSLT. I used xmllint, saxonb-xquery, and saxonb-xslt to work through the exercises (the searchable Q&A forum provides usage details).
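Typical invocations of these tools look like the following (a sketch; the file names are illustrative):

```
xmllint --noout --valid catalog.xml               # validate against its DTD
xmllint --noout --schema catalog.xsd catalog.xml  # validate against XML Schema
saxonb-xquery -s:catalog.xml query.xq             # run an XQuery
saxonb-xslt -s:catalog.xml -xsl:style.xsl         # apply an XSLT stylesheet
```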
Finally, for NoSQL there are two videos which survey the state of the art. There is some depth on the Map Reduce framework provided by Hadoop. Several other FOSS systems are briefly discussed: Cassandra, Voldemort, CouchDB, MongoDB, and more. The NoSQL portion of the course is a good overview of the technology, but there are no exercises and hence little depth of a concrete nature.
From a FOSS perspective, the course is exquisite: FOSS utilities were front and center for the duration and some guidance in using these tools is provided. Help was readily available: I answered a few questions in the Q&A forum to help people overcome hurdles and I used its search feature to overcome some of my own. In sum, studying this course will give you the lay of the land for FOSS database technology including some advice about the limitations and strengths of its best database tools.
I am a big fan of so-called Open Educational Resources (OER) including free on-line video courses. Stanford’s Databases course is the 12th I’ve completed, but only the second in which I did a “deep dive” by reinforcing learning with exercises, quizzes and exams. In general, I use OER video courses as edutainment as I usually find the extra work too time-consuming: my goal is to broadly understand how the world works, not to build expertise in every subject I study! So, conceptually, I prefer the traditional form of video courses pioneered by MIT’s OpenCourseWare which in contrast with Stanford’s new approach might be called archived courses. Archived courses make the material available without (m)any social tools. So, working through the materials in traditional OER courses usually requires extra self-discipline and commitment (unless you just watch the videos for fun as I often do).
Stanford’s OER system, online at coursera.org, builds on the basic idea of OER video courses by adding deadlines, interactive feedback from automatically evaluated work, and, for some courses including Databases, the ability to earn a “Statement of Accomplishment” for demonstrating basic proficiency. It is precisely these social enhancements that make Stanford’s initiative so noteworthy. Together these social tools provide a shared experience with a clear set of tasks for a cohort of students working through the course at the same time.
The extra interactivity and the focus of deadlines give the Stanford approach to OER a special excitement and sense of goal accomplishment which is absent in archived courses. Even though I prefer the archived courses whose videos can be more entertaining than Stanford’s tutorial-focused approach, I have to admit I was enthralled by the deadlines: they kept me focused. It should be emphasized that Stanford’s courses like the more traditional OER archival courses can be pursued at a pace that suits your time and interest: there’s no imperative to follow the deadlines or earn kudos for accomplishments.
How I used the course materials
Although I have been using databases professionally for many years, I had not read nor studied the subject in any depth previously. I decided to take this opportunity for a deep dive. Widom’s Databases course includes a simple enrollment process, tutorial-style videos (for download or in a browser with Flash support), automatically graded exercises, quizzes, and exams each with hard deadlines, a Course Materials section with many goodies, Optional Exercises, a FAQ, a Q&A forum, and a “Statement of Accomplishment” from the instructor if you completed a substantial portion of the coursework by the deadline (6,513 of the 91,000 students enrolled in the Fall 2011 Databases course earned one; here is mine).
First, I watched each video twice taking detailed handwritten notes on the second viewing (22 pages worth!). I then checked the Flash version of the videos which often included inline questions that were very useful (Stanford’s Flash video viewer is the best I’ve seen: it even supports speeding up the video by 1.2 or 1.5 times while automatically adjusting the pitch!).
Then I worked through the quizzes and exercises. One nice feature of both was that you could attempt them many times. Different variants were provided to many of the quiz questions to make it harder to apply a blind trial and error approach and you can continue to work on them after getting them all correct (which might be useful as a way to practice for the exams or to see if you remember anything of the course when you check back in 1 or 10 years). I found the quizzes and exams to be very challenging and not so rewarding. Of course some mastery is required and acquired from the quizzes and for other learning styles they may prove more valuable than they were for me: judge for yourself.
The course included many supplementary exercises to provide extra practice. I used them for Relational Algebra and they were very helpful. However due to time constraints, I was unable to use them further. Most of my time was spent working through the exercises. I did all the exercises in an xterm window running sqlite, psql, xmllint, saxonb-xquery, or saxonb-xslt and pasting the results into the query workbench. This allowed me to really experience the FOSS tools “in the wild” which gave me a strong sense of their ins and outs. For me the interactive exercises were fantastic: they really helped me learn the material by directly engaging my problem solving faculty. They were like a real project with deadlines! Although I occasionally got ruffled with some of the difficult ones, they were engaging and fun!
Although the course web site is simple and well-designed, it was still possible to have difficulty finding some of the gems provided for the students. For example, it took me a while to find the code used in the demos (which was extremely useful, by the way): the Course Materials section of the site has all the goodies you need, but you have to mouse over the icons to see treasures that appear hidden at first (remember to right-click to download). Also look carefully at the prescripts or postscripts affixed to some sections: more treasures.
The Q&A forum was helpful for finding things that were not at first apparent. A couple of times I scoured the Internet or Wikipedia looking for other angles on the material to understand a point I was struggling with. All work for the class is “open book”, so I prepared for the exams simply by reviewing my notes.
Advice for students
I recommend taking the course now even though the deadlines have expired. Feel free to skip any part of the course that your interests, time constraints, and patience warrant. If the course is ever offered again, you will already know much of the material which should help you earn a “Statement of Accomplishment”. If not, you will better understand a broadly useful, important and interesting subject.
If you want to do a “lite” version of the course, I recommend skipping the quizzes and exams. In addition, many of the topics can be skipped if you are short on time or find them uninteresting. To her credit Jennifer Widom recommended as much in her screen side chats, which provided a wonderful human dimension to the course. Although some of the material is cumulative, there are several parts of the course for which skipping is a real option. For example, the Relational Algebra (I thoroughly enjoyed doing those exercises!!!) and the Relational Design Theory topics are, I think, less important, especially if you just want to acquire basic DBA skills. I deal with enough XML that I found that part of the course extremely useful, but I can imagine someone who just needs SQL might skip those parts. This is a free course: you can be creative in how you use the materials so that you get what you want out of it!
I recommend doing the exercises associated with each topic (some did not have exercises). Some of them are quite challenging and some took me quite a bit of time. If necessary, skip some of the harder ones. If time is really pressing, just watch the videos that particularly interest you: remember this is a free resource: you can tailor your work on the course in whatever way suits your interests and time.
I did not buy nor borrow a textbook for the course (I was very impressed that Widom prepared reading assignments for four separate texts in the Course Materials sections of the site. Wow, that must have been a lot of work!). Having been through the course, I think a text is unnecessary for most students. You may find a few topics that are hard for you or for which the videos were insufficient to master the material. Since textbooks are more comprehensive and more detailed, they could help fill in the gaps. In particular diligent students may want a text. I prefer to learn iteratively, that is, I would prefer a shallow course today and another later that goes in more depth (I might even prefer to take two lite courses to build my mastery of a subject by degrees). But if you want a more complete experience now, then by all means get a textbook and dig in!
Stanford’s exciting new system for on-line courses is remarkable in its use of social tools to engage students. This is a boon to the OER movement! In today’s rapidly changing world, refreshing and expanding one’s skills is essential to apprehending the needs and opportunities that abound if you are curious enough!
I’m hooked! Although I still love and will continue to use archived courses, from now on I will regularly check whether Stanford has any courses of interest! I’ve signed up for several of Stanford’s offerings (there are 16 of them!) for Winter term with the intention of “dropping” or doing a “shallow dive” (maybe just watching a few videos and doing some exercises as interest permits). But I am eyeing the Model Thinking course for a possible deep dive. To see the full list of offerings go to Class Central: Summary of Stanford’s online course offerings and plan your learning for the Winter term which starts next Monday, January 23rd!
If you haven’t started or finished it yet, head over to db-class.org now and get to it! It is one of the best resources on the Internet for learning about databases. Moreover, it includes the special benefit of covering a broad range of important FOSS database tools.
LinuxForce staff has been hard at work over the past months developing and deploying LinuxForce Cluster Services, built exclusively on Free/Open Source Software (FOSS) technologies, and on December 15th we put out a press release:
In September Laird Hariu wrote the article “File Servers – The Business Case for High Availability” where, in addition to building a case to use clusters, he also briefly outlined how Debian and other FOSS could be used to create a cluster for a file server. File servers are just the beginning: we have deployed clusters which host web, mail, DNS and more.
The core of this infrastructure uses Debian 6.0 (Squeeze) 64-bit and then, depending upon the needs and budget of the customer and whether they have a need for high availability, we use tools including Pacemaker, Corosync, rsync, drbd and KVM. Management of this infrastructure is handled remotely through the libvirt virtualization API, using virsh and Virtual Machine Manager.
The ability to use such high-quality tools directly from the repositories of the stable Debian distribution keeps our maintenance costs down, avoids vendor lock-in and gives companies like ours the ability to offer these enterprise-level clustering solutions to small and medium size businesses for reasonable prices.
You have probably heard of high availability transaction processing servers. You have most likely read about the sophisticated systems used by the airlines to sell tickets online. They have to be non-stop because downtime translates to lost orders and revenue. In this article I will discuss the economics of using non-stop technologies for everyday applications. I will show that even ordinary file sharing applications can benefit from inexpensive Linux based Pacemaker clustering technology.
What is our availability goal? Our goal should be to take prudent and cost-effective measures to reduce computer downtime to nil within the required service window. I’m not talking about 99.999% (“five nines”) uptime, the popular (and very expensive) claim made by high availability vendors. I’m talking about maintaining enough uptime to service the application.

Take a simple example: for office document preparation, the service window is office hours (9-5). The rest of the time the desktop PCs can be turned off; nobody is there to operate them anyway. You only need the PCs 5 days a week for 8 hours a day, or 2080 hours per desktop PC per year. This translates into an uptime requirement of about 24 percent. Ideally you want the desktop PCs to be available all the time during office hours, but you are willing to give up availability for routine maintenance and for the infrequent breakdowns that may occur only once per workstation every five years or so.

Perhaps you have two spare desktop PC workstations for every 100. This extra capacity allows your office workers to resume their work on a spare while their workstation is being repaired. In this example the cost of maintaining adequate availability is the cost of maintaining two spare desktop PCs. You might adjust this cost to account for real-world conditions at the work site: wide swings in operating temperature or a poor quality electricity supply might dictate that you increase the number of spare PCs. Sounds like a low stress, straightforward availability solution.
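The arithmetic behind that roughly 24 percent figure is easy to check:

```shell
hours_needed=$((52 * 5 * 8))   # 8 hours/day, 5 days/week, 52 weeks = 2080
hours_in_year=$((365 * 24))    # 8760
awk -v n="$hours_needed" -v y="$hours_in_year" \
    'BEGIN { printf "%d of %d hours = %.0f%% uptime required\n", n, y, 100 * n / y }'
# prints: 2080 of 8760 hours = 24% uptime required
```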
The problem gets more complicated when the desktop PCs are networked together and all the documents are stored on a central file server rather than on each workstation’s hard drive. There is a multiplicative effect: if the file server is not available, then all 100 document processing PCs are rendered unavailable. Then you have 98 workers (remember the 2 spares from above) being paid but not producing documents. A failure during office hours can become expensive. One hour of downtime can cost as much as $1,500 in lost worker wages; a day of downtime can cost $12,000. How long will it take for a hardware repair person to travel to your site? How long will it take for spare parts to arrive? How long will it take the repair person to replace the parts? How long will it take for damaged files to be restored from backup by your own people? A serious but not unlikely failure can take several days to completely resolve. It’s not unreasonable to assume that such a $24,000 failure can occur once every 5 years. And this is a very simple example. We are not talking about a complicated order-entry or inventory control system. We are talking about 98 office workers saving files to a central file share so that they can be indexed and backed up.
The Effects of Time
I’m going to add another wrinkle to our office document processing example. This file sharing setup has been in use for 4 years. Time flies. The hardware is getting old faster than you realize, and old hardware is more likely to fail. It has been through more thunderstorms, more A/C breakdowns, people knocking the server by accident, and all that. You’ve been noticing that your hardware maintenance plan is costing more every year. How long is the hardware vendor going to stock spare parts for your obsolete office equipment? Please forgive me for playing on your paranoia, but the real world can be rude.
Time for an Upgrade
In this scenario you conclude that you are going to have to replace that file server soon. It’s going to be a pain to migrate all the files to a new unit. I’m going to have to upgrade to a new version of Windows Server. How much is that going to cost? How much has Windows changed? If I’m going to all this trouble, why not get some new improvement out of it? I know I can get bigger disks and more RAM (random access memory) for less money than I paid for the old server. Whoops. Windows is going to cost more. I have to pay a charge for every workstation attached to it, and that CAL (Client Access License) price has gone up. I read something about high-availability clustering in Windows. Enterprise Server does that. Wow. Look at the price of that! Remember that $12,000-per-day downtime cost overhang? It’s more of an issue now that you are dealing with an old system.
A Debian Cluster Solution
Enough of this already. Since I asked so many questions and raised so many doubts, I owe you, the reader, some answers. Debian Linux provides a very nice high-availability solution for file servers. You need two servers with directly attached storage, plus a third tie-breaker machine that can be little more than a glorified workstation. Each runs the 64-bit edition of Debian Squeeze with the Pacemaker, Corosync, DRBD and Samba packages installed. The software is free; you pay for the hardware and a trustworthy Linux consultant who can set everything up for you. What you get is a fully redundant quorum cluster with fully redundant storage, multiple CPU cores on each node, much more RAM than you had before and much more storage capacity.
Here are hardware price estimates:
Tie-breaker node: two hard drives, 512 MB RAM.
Name-brand file server node: eight 2 TB SATA drives, 24 GB RAM, one 4-core CPU, 3-year on-site parts-and-labor warranty.
A second file server node like the one above.
Miscellaneous parts for the storage and control networks.
Each file server node runs software RAID 5 and holds 14 terabytes of usable disk storage. Because the storage is completely mirrored across nodes, total cluster capacity is also 14 terabytes. Performance of this unit will be much better than the old one: each file storage node effectively has 4 CPU cores and much more RAM for file buffering. Software updates from Debian are free; you just need someone to apply the security patches and version upgrades.
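The capacity figures follow from the RAID 5 layout described above, and can be sketched like this:

```python
# RAID 5 usable capacity: one drive's worth of space goes to parity.
drives = 8
drive_tb = 2

usable_tb = (drives - 1) * drive_tb   # 14 TB of usable storage per node
# DRBD mirrors one node's storage onto the other, so total cluster
# capacity stays at 14 TB, not 28 TB.
```

That mirroring is the price of redundancy: half the raw cross-node capacity buys you the ability to lose an entire server without losing data.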
The best feature is complete redundancy for file processing. In our file server example, any one of the nodes can completely fail and file serving will continue. Based on the lost-labor cost estimates above, this system pays for itself if it eliminates one day of downtime in a five-year period. You also save on hardware maintenance: whatever the yearly charge is for your old system, times three, because the new hardware comes with 3 years of warranty coverage. You will have the consultant’s charges for converting to the new system, but remember, you were going to have to pay a similar fee for a new Windows system as well.
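The payback argument can be sketched as simple arithmetic. Both dollar figures for the new hardware and the old maintenance contract below are hypothetical placeholders, since the article gives estimates rather than quotes; plug in your own numbers:

```python
# Rough break-even sketch for the cluster, under assumed costs.
hardware_cost = 12000         # hypothetical: all three nodes plus parts
downtime_day_cost = 12000     # lost wages per day, from the example above
old_maintenance_per_year = 1500  # hypothetical annual contract on old gear

# Savings over the 3-year warranty: avoided downtime plus dropped contract.
savings = downtime_day_cost * 1 + old_maintenance_per_year * 3
pays_for_itself = savings >= hardware_cost
```

Under these assumptions, avoiding a single day of downtime covers most of the hardware cost on its own, and the retired maintenance contract covers the rest.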
I hope I have stirred your interest in Linux Pacemaker-based clusters. I have shown a file server upgrade that pays for itself by reducing downtime, while improving your file server’s performance and cutting out-of-pocket expenses for software and hardware maintenance. Not a bad deal.
LinuxForce Systems Administrator, Elizabeth Krumbach will deliver the keynote address this Saturday, 23 July 2011 at FOSSCON. Her talk entitled “Make a Difference for Millions: Getting Involved with FOSS” will help attendees better understand how to contribute to the greater good by participating more actively in FOSS (Free and Open Source Software).
On Saturday, May 7th, I’ll be taking a flight out to Budapest, Hungary to attend the week-long Ubuntu Developer Summit (UDS) as the kick-off event to the development of the next Ubuntu release, 11.10 (code name Oneiric Ocelot) coming out in October 2011.
The Ubuntu Developer Summit is the seminal Ubuntu event, at which we define the focus and plans for the upcoming version of Ubuntu. The event pulls together Canonical engineers, community members, partners, ISVs, upstreams and more into an environment focused on discussion and planning.
My role at these summits as an Ubuntu Community Council member tends to focus on community work, including the recruitment and retention of volunteers in the Ubuntu community. I will also attend sessions related to upstream collaboration; most notable are the sessions on collaboration with Debian, as my primary development interest remains there. Debian is the parent distribution of Ubuntu and the distribution LinuxForce deploys almost exclusively for our customers.
This will be my third time attending a UDS. I’m excited to see what I will learn, from the possibilities for the next release to the new ideas I will be able to apply in my day-to-day work. So much comes from such in-person collaborations with fellow contributors.