Cluster Workshop - Build Your Own Warewulf Cluster

You should have the following parts, all of which are used later in this guide:

    A master node (the machine with two network cards)
    Three slave nodes, each with a single network card
    A mini-switch and ethernet cables
    A monitor, keyboard, mouse, and power strip
    The CentOS 4.2 install discs and the Cluster Workshop (TCB Cluster) CD

Part 1: Master Node OS Installation

Warewulf itself is a collection of libraries and utilities for running your cluster; it installs on top of a normal Linux distribution. For this workshop, we'll use CentOS 4.2 as the base install.
  1. Plug the monitor into the power strip and turn it on.
  2. Find the machine with two network cards; this is the master node.
  3. Plug the master node into the power strip and connect the monitor, keyboard, and mouse.
  4. Power on the master node, open the CD-ROM drive, insert CentOS 4.2 disc 1, and press the reset button. The machine should boot from the CD-ROM.
    If you wait too long and your machine starts booting off of the hard drive, just press the reset button to make it boot from the CD-ROM. If your machine still insists on booting from the hard drive, you may need to modify its BIOS settings.
  5. When the CentOS screen comes up, hit enter.
    If you don't have a mouse, it is suggested that you type "linux text" at this point to do a text-based install. The process is very similar to the graphical install.
  6. Skip testing the CD media.
    This takes far too long and has no real benefit for fresh installs.
  7. Click Next at the "Welcome to CentOS!" screen.
  8. Select English as your installation language.
  9. Select a US model keyboard.
  10. If your mouse was not automatically detected, select a generic 3-button PS/2 mouse.
  11. If the installer detects that another version of Linux is installed, click Install and hit Next.
  12. Select a Workstation install.
    We typically use Custom, but Workstation is good enough for now.
  13. Select Autopartition.
    We don't store files on our cluster machines, even on the master nodes, so it doesn't matter how the drive is set up. We use dedicated fileservers for storage.
  14. Select "Remove all partitions on this system".
    Again, we don't keep data on cluster machines.
  15. Yes, you really want to delete all existing data.
    Of course, at home you might not want to do this.
  16. Click Next at the GRUB boot loader screen.
  17. At the network configuration screen, set both cards to "Active on boot." Select device eth1 and click the "Edit" button. In the dialog that appears, uncheck the "Configure using DHCP" checkbox, then enter 10.0.4.1 in the IP Address field and 255.255.255.0 in the Netmask field. Click "OK" after you have made these changes.
    This will be the interface to the private network.
  18. Device eth0 is for the outside network; here you should enter the IP address given to you by your instructor. The netmask should be 255.255.255.0. Select "OK" when done with the interface.
  19. Enter these settings:
    Gateway:        130.126.120.1
    Primary DNS:    130.126.120.32
    Secondary DNS:  130.126.120.33
    Tertiary DNS:   130.126.116.194
    
    and select "Next" to continue.
    Note: these values are specific to our network. If you want to set up your own cluster later on, you'll have to get these addresses from your local sysadmin (which might be you!).
  20. Disable the firewall and SELinux.
    In most cases your cluster will not be connected to the outside world, so if you trust your network it should be safe to disable the firewall; if not, you'll need to enable it.
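    If you find afterwards that SELinux was left enabled, a minimal way to turn it off (the standard CentOS mechanism, shown here only as a sketch):
    # setenforce 0      # put the running system into permissive mode right away
    # sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config     # disable it at the next boot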
  21. Click Proceed when the installer warns you about not having a firewall.
  22. No additional languages are necessary, so just click "Next."
  23. The hardware clock should be set to GMT. Pick your time zone (America/Chicago recommended).
  24. Pick a root password that you will remember. Write it down.
  25. You don't need to customize the software selection or pick individual packages.
    However, you may want to do this for a production system. This is by far the easiest time to add packages to your cluster. On the other hand, the default install has 2 GB of software, so you could save some time in the next step if you pared the list down.
  26. Start the installation. It will take between 15 and 25 minutes to install CentOS 4.2 and will prompt you as necessary for additional discs.
  27. At this point your CentOS 4.2 box is installed. Reboot the system when prompted to.
  28. After rebooting, the Welcome to CentOS screen will appear. Click Next.
  29. You will need to Agree to the License Agreement before continuing.
  30. Verify that the computer is set to the correct time; if not, correct it.
  31. At the Display Configuration screen you can either just click Next, or adjust the monitor configuration to "Generic LCD 1024x768". After you have done this, you can adjust the default screen resolution to 1024x768.
    You would normally only use the console of a production cluster during initial configuration or adding nodes, and you don't need a GUI for either of those, so there is little reason to configure X-Windows. Having multiple terminals available will be useful for this exercise, so we'll go ahead and configure X-Windows anyway.
  32. Create a username and password for yourself.
    Again, take care to write down this username and password for the sake of this workshop (if you forget the password, it can always be reset from the root account).
  33. Click Next at the sound card configuration screen.
  34. Click Next at the Additional CDs screen.
  35. Click Next at the Finish Setup screen.
  36. Congratulations, you've installed CentOS 4.2.

Part 2: Installing Warewulf

  1. Log in as root and open a terminal (Right-click on desktop, "Open Terminal.")
  2. Connect one end of an ethernet cable to one of the network cards in the back of the master node, and the other end to your mini-switch. Connect the other network card to the "outside world."
  3. To test whether you've connected them correctly, run "ping -c 1 130.126.120.32". If you get a response, move on to the next step. If not, swap the two network cables on the back of the master node and try again. If you still get no response, ask one of the instructors for assistance.
  4. In order to be able to access the outside world on our network, you'll have to set your account up to use our proxy server:
    # echo 'export HTTP_PROXY=http://login.ks.uiuc.edu:3128' >> ~/.bashrc
    # echo 'export http_proxy=$HTTP_PROXY' >> ~/.bashrc
    # source ~/.bashrc
    
    (different programs refer to HTTP_PROXY and http_proxy, so we set them both).
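    You can confirm that both variables are set in your shell before moving on:
    # env | grep -i http_proxy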
  5. Add the Dag Repository, which provides some RPMs that Warewulf needs, by creating /etc/yum.repos.d/dag.repo:
    [dag] 
    name=Dag RPM Repository for Red Hat Enterprise Linux
    baseurl=http://apt.sw.be/redhat/el4/en/i386/dag
    gpgcheck=1
    enabled=1
    gpgkey=http://dag.wieers.com/packages/RPM-GPG-KEY.dag.txt
    
  6. Install the following rpms:
    # yum -y install perl-Unix-Syslog dhcp tftp-server tftp createrepo rpm-build
    # yum -y groupinstall "Development Tools"
    
  7. Update the system with "yum -y update".
  8. Reboot the system, log back in as root, and open a terminal again.
  9. Create a directory for the rpms we'll download:
    # mkdir /downloads
    # cd /downloads
    
  10. Warewulf only distributes source rpms, so obtain and build these from warewulf-cluster.org:
    # wget http://www.warewulf-cluster.org/downloads/dependancies/perl-Term-Screen-1.02-3.caos.src.rpm
    # wget http://www.warewulf-cluster.org/downloads/releases/2.6.1/warewulf-2.6.1-2.src.rpm
    # wget http://warewulf.lbl.gov/downloads/addons/pdsh/pdsh-2.3-10.caos.src.rpm
    # rpmbuild --rebuild perl-Term-Screen-1.02-3.caos.src.rpm
    # rpmbuild --rebuild warewulf-2.6.1-2.src.rpm
    # rpmbuild --rebuild pdsh-2.3-10.caos.src.rpm
    
    Note: these files can also be found on the Cluster Workshop CD if you're experiencing network problems.
  11. Create a local repository to house these new rpms:
    # createrepo /usr/src/redhat/RPMS
    
  12. Then, register this repository with the system by creating the file /etc/yum.repos.d/local.repo, containing:
    [local]
    name=Local repository
    baseurl=file:///usr/src/redhat/RPMS
    gpgcheck=0
    
  13. Then actually install the newly built rpms:
    # yum -y install perl-Term-Screen warewulf warewulf-tools pdsh
    
  14. Add the above (local) repo to /usr/share/warewulf/vnfs-scripts/centos4-vnfs.sh around line 104. This script helps generate the virtual filesystem for the nodes, so that they'll know about this repository as well.
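    The stanza you add mirrors the local.repo file you created above; exactly where and how it goes in depends on how the script writes its yum configuration, so match the repository entries already around that line:
    [local]
    name=Local repository
    baseurl=file:///usr/src/redhat/RPMS
    gpgcheck=0
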
  15. Create the nodes' filesystem:
    # /usr/share/warewulf/vnfs-scripts/centos4-vnfs.sh
    
    If this command results in many errors about files not existing, run the command "mkdir -p /vnfs/centos-4/var/lock/rpm" and repeat this step.
  16. Get the filesystem ready to transfer:
    # wwvnfs --build --hybrid
    
  17. Since our cluster will be in a trusted environment, we don't need Kerberos or iptables -- both of which CentOS installs by default:
    # yum -y remove iptables krb5-workstation
    
  18. Initialize Warewulf:
    # wwinit --init
    
    Answer the prompts as follows: yes, Warewulf is set up correctly; 10.128.0.0 is fine for DHCP; yes, this configuration looks correct; and yes, you should configure DHCP, NFS, TFTP, and syslog now.
  19. As the process warns, open /etc/hosts and remove any mention of your hostname from the 127.0.0.1 line (usually the third line), keeping the localhost entries.
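    For example (the hostname shown here is only a placeholder), a line like
    127.0.0.1    cluster1.ks.uiuc.edu cluster1 localhost.localdomain localhost
    should become
    127.0.0.1    localhost.localdomain localhost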
  20. Connect and boot your first node. Its only network card should be connected to the mini-switch. Enter the BIOS settings to make sure it's booting from the network, as Warewulf uses PXE booting to bring up the nodes.
  21. An entry for the new node will appear in /etc/warewulf/nodes/new/node0000. Move this file to /etc/warewulf/nodes/nodegroup1/
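    For example:
    # mv /etc/warewulf/nodes/new/node0000 /etc/warewulf/nodes/nodegroup1/
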
  22. Repeat for the remaining nodes, and then restart Warewulf with /etc/init.d/warewulf restart
  23. In order to make sure the user account you created during installation gets mirrored onto the slave nodes, run:
    # cp /etc/passwd /vnfs/default/etc/passwd
    # cp /etc/group /vnfs/default/etc/group
    # wwvnfs --build --hybrid
    
    And then reboot the nodes.
    Note: this is only needed because Warewulf uses a strange script to generate the passwd and group files without including the user account that we created during the install. This step is only necessary to mirror that user account to the nodes, and any further users created with "adduser" will be properly propagated.

Part 3: Configuring the Frontend

  1. In order to run some useful jobs, we'll install the mpich library:
    # cd /downloads
    # wget http://www-unix.mcs.anl.gov/mpi/mpich/downloads/mpich.tar.gz
    # tar xvfz mpich.tar.gz
    # cd mpich-1.2.7p1
    # ./configure --prefix=/opt/mpich
    # make
    # make install
    
    Again, this file can be found on the Cluster Workshop CD as well.
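    As a quick sanity check that everything landed under the prefix we chose, list the installed binaries; you should see mpicc, mpirun, and friends:
    # ls /opt/mpich/bin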
  2. Now, on to installing the Sun Grid Engine (SGE). Begin by adding a user to run SGE:
    # adduser sgeadmin

  3. Change to the home directory of the sgeadmin user you just created:
    # cd /home/sgeadmin

  4. Insert the TCB Cluster CD into the Master Node.
  5. Unpack the common and platform-specific packages from the CD into the home directory of the sgeadmin user:
    # tar xzf /media/cdrom/sge-6.0u6-bin-lx24-x86.tar.gz
    # tar xzf /media/cdrom/sge-6.0u6-common.tar.gz
    
  6. Set your SGE_ROOT environment variable to the sgeadmin user's home directory:
    # export SGE_ROOT=/home/sgeadmin

  7. Run the provided setfileperm.sh script to fix file permissions:
    # util/setfileperm.sh $SGE_ROOT

  8. Run gedit /etc/services and add the lines
    sge_qmaster 536/tcp
    sge_execd 537/tcp
    
    in the appropriate place. Save the file.
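    If you'd rather not use a graphical editor, appending the entries works just as well (the same approach we use for the nodes' /etc/services later on):
    # echo 'sge_qmaster 536/tcp' >> /etc/services
    # echo 'sge_execd 537/tcp' >> /etc/services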
  9. Run the QMaster installer.
    # cd $SGE_ROOT
    # ./install_qmaster
    
  10. Hit Return at the introduction screen.
  11. When choosing the Grid Engine admin user account, hit y to specify a non-root user, and then enter sgeadmin as the user account. Hit Return to continue.
  12. Verify that the Grid Engine root directory is set to /home/sgeadmin.
  13. Since we set the ports needed by sge_qmaster and sge_execd in a previous step, we should be able to hit Return through the next two prompts.
  14. Hit return to set the name of your cell to "default".
  15. Use the default install options for the spool directory configuration.
  16. We already ran the file permission script, so we can answer y and skip this step.
  17. We can say y to the question about all hosts being in the same domain.
  18. The install script will then create some directories.
  19. Use the default options for the Spooling/DB questions.
  20. When prompted for group id range, use the default range of 20000-20100 unless you have a reason to do otherwise.
  21. Use the default options for the spool directory.
  22. The next step asks you to input an email address for the user who should receive problem reports. Typically this will be the person responsible for maintaining the cluster, but for now enter root@localhost
  23. Verify that your configuration options are correct.
  24. Hit yes so that the qmaster will start up when the computer boots.
  25. The next step asks you to enter the names of your execution hosts. Say no to using a filename; then, when prompted for a host, enter node0000, then node0001, then node0002, and finally hit Return on a blank entry to show that you're done.
  26. It will then ask you if you want to add any shadow hosts now. Say n.
  27. The configuration program will then ask you to select a scheduler profile. Normal works for most situations, so that's what we'll use.
  28. Our queue master is now installed. Run ". /home/sgeadmin/default/common/settings.sh" to set up the SGE environment variables in your current shell. You should also add this line to /etc/profile so it's run every time anyone logs on; one way to do that is shown below. Finally, be sure to run "chmod a+rx /home/sgeadmin" so that all of your users have access to it.
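    One way to append that line, mirroring what we do for the nodes' /etc/profile later on:
    # echo '. /home/sgeadmin/default/common/settings.sh' >> /etc/profile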
  29. Run ". /home/sgeadmin/install_execd" to start the execution host configuration script.
  30. As with the qmaster installation, we can use all of the default options.
  31. After install_execd finishes running, run ". /home/sgeadmin/default/common/settings.sh" to set our environment variables accordingly.
  32. Each node must also have the execution daemon installed on it. Begin by running "qconf -sel" to show the list of execution hosts. If your master is not listed as an execution host, add it by running "qconf -ae hostname" (replacing hostname with your master's hostname).
  33. A small fix is needed in Warewulf 2.6 for name resolution: in /etc/warewulf/master.conf, change the default network to admin, then restart Warewulf with "/etc/init.d/warewulf restart".
  34. We also need to make sure that the master node has the authority to submit jobs to the queue, so run "qconf -as hostname" (again replacing hostname with your master's hostname) to add it to the submit host list.
  35. Now, onto the slave nodes. We'll need to install binutils, set up the sge services, make sure sgeexecd starts on boot, add libstdc++.so.5 (required for NAMD, which we'll get to later), and add the same line from above to the nodes' /etc/profile:
    # yum -y --installroot=/vnfs/default/ install binutils
    # echo 'sge_qmaster 536/tcp' >> /vnfs/default/etc/services
    # echo 'sge_execd 537/tcp' >> /vnfs/default/etc/services
    # cp /home/sgeadmin/default/common/sgeexecd /vnfs/default/etc/init.d/
    # echo '. /home/sgeadmin/default/common/settings.sh' >> /vnfs/default/etc/profile
    # cp -d /usr/lib/libstdc++.so.5* /vnfs/default/usr/lib/
    # chroot /vnfs/default/ /sbin/ldconfig
    
  36. We also need to make /opt available on the slave nodes, so we set it up as an NFS export on the master node and put an entry into the slave nodes' fstab to mount it automatically. First, edit /etc/exports and copy the /vnfs line to a new entry; change /vnfs to /opt, leave the rest the same, and save. Restart the NFS daemon by running:
    # /etc/init.d/nfs restart
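
    You can confirm that the new export is now visible from the master (showmount is part of nfs-utils, which is already installed since NFS is running here):
    # showmount -e localhost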
    
    Then edit /vnfs/default/etc/fstab and add this line to the bottom (replacing hostname with your master's hostname):
    hostname.ks.uiuc.edu-sharedfs:/opt	/opt	nfs nfsvers=2	0 0
    
  37. To add the parallel environment "mpich", run qconf -ap mpich, and enter the following settings:
    pe_name           mpich
    slots             9999
    user_lists        NONE
    xuser_lists       NONE
    start_proc_args   /home/sgeadmin/mpi/startmpi.sh -catch_rsh $pe_hostfile
    stop_proc_args    /home/sgeadmin/mpi/stopmpi.sh
    allocation_rule   $round_robin
    control_slaves    TRUE
    job_is_first_task FALSE
    urgency_slots     min
    
    Type ":wq" to write the file and quit (for non-vi users).
  38. In order to add the nodes to the queuing system, run qconf -mq all.q, then edit the hostlist to contain "node0000 node0001 node0002", change the pe_list to contain "make mpich" so it works with our new parallel environment, and change the shell to "/bin/bash", since Warewulf doesn't have csh (see the sketch below). Write and exit as you did before.
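    The relevant lines in the editor should end up looking roughly like this (the other attributes keep their defaults):
    hostlist              node0000 node0001 node0002
    pe_list               make mpich
    shell                 /bin/bash
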
  39. We can now build a list of all nodes for pdsh to use, so we can easily run commands on all nodes at once:
    # wwlist -qr > /etc/hosts.pdsh
    
    You can test that this works with "pdsh -a hostname".
  40. Again, rebuild the filesystem for the nodes and sync them up (for good measure, we'll reboot them all anyway):
    # wwvnfs --build --hybrid
    # wwnodes --sync
    # pdsh -a /sbin/reboot
    
    Use "wwtop" to watch the nodes shut down and come back up. When they're all back up, press "q" and continue.
  41. Now, we need to start the SGE execution daemon on each of the nodes, so run:
    # pdsh -a /sbin/chkconfig sgeexecd on
    # pdsh -a /etc/init.d/sgeexecd start
    
    You can use "qstat -f" to make sure you have your queue (all.q) set up with your three nodes.
  42. Congratulations, you now have a queuing system set up for your cluster. Now to do some real work.

Part 4: Checking Your Cluster Status

Warewulf has a lot of tools for checking which nodes are up and the general status of your cluster. A few helpful tools (all have man pages):

    wwtop   - a top-like, live view of node status and load (used above to watch the nodes reboot)
    wwlist  - print a list of known nodes (used above to build /etc/hosts.pdsh)
    wwnodes - query and sync node configuration (as in wwnodes --sync)
    wwvnfs  - build the virtual node filesystem (as in wwvnfs --build --hybrid)
    pdsh    - not part of Warewulf, but handy for running a command on every node at once

See Also

Warewulf web site (www.warewulf-cluster.org)