---+ Installation on additional hosts

To add hosts to the Apache Tashi cluster, install each new host so that
it satisfies the prerequisites described in the main installation
document. Only the node manager has to be launched on these hosts; the
cluster should contain exactly one cluster manager and one scheduling
agent.

You must register the host's hostname with the cluster manager, as shown
in the main installation document.

You can then start the node manager manually, or have it start from the
system initialization scripts.
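
For example, a minimal sketch of launching the node manager by hand,
assuming the service is installed as a "nodemanager" script in the
binaries directory (like the "accounting" binary described below) and
that the installation path and log location are local choices:

# Launch the node manager in the background; path names are assumptions.
cd /usr/local/tashi
./bin/nodemanager >> /var/log/tashi-nodemanager.log 2>&1 &

An equivalent command can be placed in your distribution's init scripts
so the node manager comes up at boot.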

---+ VM host best practices

The node manager uses a small amount of local storage to keep accounting
data on the virtual machines that it manages. The copy-on-write storage
used by running virtual machines is also written locally. For the sake
of stability, choose these two storage locations so that there is always
sufficient space for the accounting data. If the accounting data cannot
be written, VMs can end up running without the node manager knowing
about them.
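
As a quick sanity check, you can confirm that both locations have
headroom before bringing up VMs. The paths below are only illustrative
assumptions; use the directories actually configured on your hosts:

# Free space on the (assumed) accounting-data and copy-on-write image
# locations; both paths depend on your configuration.
df -h /var/tmp /tmp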

If you are using Qemu/KVM as the hypervisor, be advised that the VM
migration format changes rather quickly. For example, Qemu 1.1 states
that it will accept incoming migrations from versions 0.13 through 1.1.
Outgoing migrations to an older Qemu are liable to be dropped silently
by the receiver, even though the sending Qemu reports success. Be
careful to deploy only Qemu versions in your cluster that can work
together. Since suspend and resume are handled via the same mechanisms,
the same caveat applies to those actions too.
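
A simple precaution is to compare the Qemu version on every host that
will exchange migrations. The host names below are placeholders, and the
Qemu binary may be named differently (e.g. kvm) on your distribution:

# Print the Qemu version on each VM host; hosts that migrate VMs between
# each other should report compatible versions.
for h in vmhost1 vmhost2 vmhost3; do
    printf '%s: ' "$h"
    ssh "$h" qemu-system-x86_64 --version
done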

---+ Deployment over multiple networks

To add networks to the Apache Tashi cluster, bring them to the hosts as
VLAN network interfaces attached to software bridges. Each new network
also has to be registered with the cluster manager, as detailed in the
main installation document. Because virtual machines may be scheduled on
any host in the cluster, every host needs to provide access to the same
networks.

Generally, your network switch will have to be configured to deliver
packets "tagged" with the VLAN identifiers of all networks on which the
cluster is to host virtual machines.

The bridges can be set up automatically at system startup. For example,
a stanza from /etc/network/interfaces that configures a bridge for
VLAN 11, using jumbo frames, looks like this:

auto br11
iface br11 inet manual
    mtu 9000
    bridge_ports eth0.11
    bridge_fd 1
    bridge_hello 1
    up ifconfig eth0.11 mtu 9000

The corresponding /etc/qemu-ifup.11 looks like this:

#!/bin/sh

# Bring up the tap interface passed in by Qemu with a jumbo-frame MTU
# and attach it to the VLAN 11 bridge.
/sbin/ifconfig $1 0.0.0.0 up mtu 9000
/sbin/brctl addif br11 $1
exit 0

Note that the entire path of a network connection must be configured to
use jumbo frames if the virtual machines are to use them.
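
To verify that jumbo frames survive end to end, send a large packet with
the don't-fragment bit set between two endpoints on the network; the
target address below is an assumption:

# 8972 bytes of payload plus 28 bytes of IP/ICMP headers exercises a
# 9000-byte MTU; the ping fails if any hop only allows a smaller MTU.
ping -M do -s 8972 -c 3 192.168.11.1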

If you have a large number of VLANs and don't want to hard-code them
into each VM host, you can find a sample qemu-ifup script in the doc
directory. Adapt it to your local standards by changing the basic
parameters at the top, and then link to it under the names Tashi
expects. For example, if you have a VLAN 1001, you will create a link
from /etc/qemu-ifup.1001 to this script, as shown below.

The script will handle the creation of the VM interface, and will create
the bridge and VLAN interface if they have not been created before.
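
For example, a minimal sketch of installing the adapted sample script
and linking it into place for VLAN 1001; the script's installed name and
location are assumptions:

# Install the adapted script once, then link it for each VLAN this host
# should serve.
cp doc/qemu-ifup /usr/local/sbin/tashi-qemu-ifup
chmod +x /usr/local/sbin/tashi-qemu-ifup
ln -s /usr/local/sbin/tashi-qemu-ifup /etc/qemu-ifup.1001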

---+ Accounting server

An accounting server is available in the distribution. It logs events
from the cluster manager and node managers, and periodically obtains
from the cluster manager the state of the virtual machines that are
running. It can be started by running "accounting" from the binaries
directory, and then starting the cluster services.
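
For example, a minimal sketch of bringing up a cluster with accounting
enabled; apart from "accounting", the binary names and the installation
path are assumptions based on the main installation document:

# Start the accounting server first so it captures events from the other
# services as they come up; names and paths are assumptions.
cd /usr/local/tashi
./bin/accounting &
./bin/clustermanager &
./bin/primitive &      # the scheduling agent
./bin/nodemanager &    # on each VM host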