| <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> |
| <html> |
| <head> |
| <META http-equiv="Content-Type" content="text/html; charset=UTF-8"> |
| <meta content="Apache Forrest" name="Generator"> |
| <meta name="Forrest-version" content="0.8"> |
| <meta name="Forrest-skin-name" content="pelt"> |
| <title> |
| Hadoop On Demand |
| </title> |
| <link type="text/css" href="skin/basic.css" rel="stylesheet"> |
| <link media="screen" type="text/css" href="skin/screen.css" rel="stylesheet"> |
| <link media="print" type="text/css" href="skin/print.css" rel="stylesheet"> |
| <link type="text/css" href="skin/profile.css" rel="stylesheet"> |
| <script src="skin/getBlank.js" language="javascript" type="text/javascript"></script><script src="skin/getMenu.js" language="javascript" type="text/javascript"></script><script src="skin/fontsize.js" language="javascript" type="text/javascript"></script> |
| <link rel="shortcut icon" href="images/favicon.ico"> |
| </head> |
| <body onload="init()"> |
| <script type="text/javascript">ndeSetTextSize();</script> |
| <div id="top"> |
| <!--+ |
| |breadtrail |
| +--> |
| <div class="breadtrail"> |
| <a href="http://www.apache.org/">Apache</a> > <a href="http://hadoop.apache.org/">Hadoop</a> > <a href="http://hadoop.apache.org/core/">Core</a><script src="skin/breadcrumbs.js" language="JavaScript" type="text/javascript"></script> |
| </div> |
| <!--+ |
| |header |
| +--> |
| <div class="header"> |
| <!--+ |
| |start group logo |
| +--> |
| <div class="grouplogo"> |
| <a href="http://hadoop.apache.org/"><img class="logoImage" alt="Hadoop" src="images/hadoop-logo.jpg" title="Apache Hadoop"></a> |
| </div> |
| <!--+ |
| |end group logo |
| +--> |
| <!--+ |
| |start Project Logo |
| +--> |
| <div class="projectlogo"> |
| <a href="http://hadoop.apache.org/core/"><img class="logoImage" alt="Hadoop" src="images/core-logo.gif" title="Scalable Computing Platform"></a> |
| </div> |
| <!--+ |
| |end Project Logo |
| +--> |
| <!--+ |
| |start Search |
| +--> |
| <div class="searchbox"> |
| <form action="http://www.google.com/search" method="get" class="roundtopsmall"> |
| <input value="hadoop.apache.org" name="sitesearch" type="hidden"><input onFocus="getBlank (this, 'Search the site with google');" size="25" name="q" id="query" type="text" value="Search the site with google"> |
| <input name="Search" value="Search" type="submit"> |
| </form> |
| </div> |
| <!--+ |
| |end search |
| +--> |
| <!--+ |
| |start Tabs |
| +--> |
| <ul id="tabs"> |
| <li> |
| <a class="unselected" href="http://hadoop.apache.org/core/">Project</a> |
| </li> |
| <li> |
| <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a> |
| </li> |
| <li class="current"> |
| <a class="selected" href="index.html">Hadoop 0.17 Documentation</a> |
| </li> |
| </ul> |
| <!--+ |
| |end Tabs |
| +--> |
| </div> |
| </div> |
| <div id="main"> |
| <div id="publishedStrip"> |
| <!--+ |
| |start Subtabs |
| +--> |
| <div id="level2tabs"></div> |
| <!--+ |
| |end Endtabs |
| +--> |
| <script type="text/javascript"><!-- |
| document.write("Last Published: " + document.lastModified); |
| // --></script> |
| </div> |
| <!--+ |
| |breadtrail |
| +--> |
| <div class="breadtrail"> |
| |
| |
| </div> |
| <!--+ |
| |start Menu, mainarea |
| +--> |
| <!--+ |
| |start Menu |
| +--> |
| <div id="menu"> |
| <div onclick="SwitchMenu('menu_1.1', 'skin/')" id="menu_1.1Title" class="menutitle">Documentation</div> |
| <div id="menu_1.1" class="menuitemgroup"> |
| <div class="menuitem"> |
| <a href="index.html">Overview</a> |
| </div> |
| <div class="menuitem"> |
| <a href="quickstart.html">Quickstart</a> |
| </div> |
| <div class="menuitem"> |
| <a href="cluster_setup.html">Cluster Setup</a> |
| </div> |
| <div class="menuitem"> |
| <a href="hdfs_design.html">HDFS Architecture</a> |
| </div> |
| <div class="menuitem"> |
| <a href="hdfs_user_guide.html">HDFS User Guide</a> |
| </div> |
| <div class="menuitem"> |
| <a href="hdfs_shell.html">HDFS Shell Guide</a> |
| </div> |
| <div class="menuitem"> |
| <a href="hdfs_permissions_guide.html">HDFS Permissions Guide</a> |
| </div> |
| <div class="menuitem"> |
| <a href="mapred_tutorial.html">Map-Reduce Tutorial</a> |
| </div> |
| <div class="menuitem"> |
| <a href="native_libraries.html">Native Hadoop Libraries</a> |
| </div> |
| <div class="menuitem"> |
| <a href="streaming.html">Streaming</a> |
| </div> |
| <div class="menuitem"> |
| <a href="hod.html">Hadoop On Demand</a> |
| </div> |
| <div class="menuitem"> |
| <a href="api/index.html">API Docs</a> |
| </div> |
| <div class="menuitem"> |
| <a href="http://wiki.apache.org/hadoop/">Wiki</a> |
| </div> |
| <div class="menuitem"> |
| <a href="http://wiki.apache.org/hadoop/FAQ">FAQ</a> |
| </div> |
| <div class="menuitem"> |
| <a href="http://hadoop.apache.org/core/mailing_lists.html">Mailing Lists</a> |
| </div> |
| <div class="menuitem"> |
| <a href="releasenotes.html">Release Notes</a> |
| </div> |
| <div class="menuitem"> |
| <a href="changes.html">All Changes</a> |
| </div> |
| </div> |
| <div id="credit"></div> |
| <div id="roundbottom"> |
| <img style="display: none" class="corner" height="15" width="15" alt="" src="skin/images/rc-b-l-15-1body-2menu-3menu.png"></div> |
| <!--+ |
| |alternative credits |
| +--> |
| <div id="credit2"></div> |
| </div> |
| <!--+ |
| |end Menu |
| +--> |
| <!--+ |
| |start content |
| +--> |
| <div id="content"> |
| <div title="Portable Document Format" class="pdflink"> |
| <a class="dida" href="hod_admin_guide.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br> |
| PDF</a> |
| </div> |
| <h1> |
| Hadoop On Demand |
| </h1> |
| <div id="minitoc-area"> |
| <ul class="minitoc"> |
| <li> |
| <a href="#Overview">Overview</a> |
| </li> |
| <li> |
| <a href="#Pre-requisites">Pre-requisites</a> |
| </li> |
| <li> |
| <a href="#Resource+Manager">Resource Manager</a> |
| </li> |
| <li> |
| <a href="#Installing+HOD">Installing HOD</a> |
| </li> |
| <li> |
| <a href="#Configuring+HOD">Configuring HOD</a> |
| <ul class="minitoc"> |
| <li> |
| <a href="#Minimal+Configuration+to+get+started">Minimal Configuration to get started</a> |
| </li> |
| <li> |
| <a href="#Advanced+Configuration">Advanced Configuration</a> |
| </li> |
| </ul> |
| </li> |
| <li> |
| <a href="#Running+HOD">Running HOD</a> |
| </li> |
| <li> |
| <a href="#Supporting+Tools+and+Utilities">Supporting Tools and Utilities</a> |
| <ul class="minitoc"> |
| <li> |
| <a href="#logcondense.py+-+Tool+for+removing+log+files+uploaded+to+DFS">logcondense.py - Tool for removing log files uploaded to DFS</a> |
| <ul class="minitoc"> |
| <li> |
| <a href="#Running+logcondense.py">Running logcondense.py</a> |
| </li> |
| <li> |
| <a href="#Command+Line+Options+for+logcondense.py">Command Line Options for logcondense.py</a> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| </div> |
| |
| <a name="N1000C"></a><a name="Overview"></a> |
| <h2 class="h3">Overview</h2> |
| <div class="section"> |
<p>The Hadoop On Demand (HOD) project is a system for provisioning and
managing independent Hadoop MapReduce and HDFS instances on a shared cluster
of nodes. HOD is a tool that makes it easy for administrators and users to
quickly set up and use Hadoop. It is also very useful for Hadoop developers
and testers who need to share a physical cluster to test their own Hadoop
versions.
</p>
| <p>HOD relies on a resource manager (RM) for allocation of nodes that it can use for |
| running Hadoop instances. At present it runs with the <a href="http://www.clusterresources.com/pages/products/torque-resource-manager.php">Torque |
| resource manager</a>. |
| </p> |
<p>
The basic system architecture of HOD includes the following components:</p>
| <ul> |
| |
| <li>A Resource manager (possibly together with a scheduler),</li> |
| |
| <li>HOD components, and </li> |
| |
| <li>Hadoop Map/Reduce and HDFS daemons.</li> |
| |
| </ul> |
<p>
HOD provisions and maintains Hadoop Map/Reduce and, optionally, HDFS instances
through interaction with the above components on a given cluster of nodes. A cluster of
nodes can be thought of as comprising two sets of nodes:</p>
| <ul> |
| |
| <li>Submit nodes: Users use the HOD client on these nodes to allocate clusters, and then |
| use the Hadoop client to submit Hadoop jobs. </li> |
| |
| <li>Compute nodes: Using the resource manager, HOD components are run on these nodes to |
| provision the Hadoop daemons. After that Hadoop jobs run on them.</li> |
| |
| </ul> |
<p>
Here is a brief description of the sequence of operations in allocating a cluster and
running jobs on it.
</p>
| <ul> |
| |
| <li>The user uses the HOD client on the Submit node to allocate a required number of |
| cluster nodes, and provision Hadoop on them.</li> |
| |
<li>The HOD client uses a Resource Manager interface (qsub, in Torque) to submit a HOD
process, called the RingMaster, as a Resource Manager job, requesting the desired number
of nodes. This job is submitted to the central server of the Resource Manager (pbs_server, in Torque).</li>
| |
<li>On the compute nodes, the resource manager slave daemons (pbs_mom, in Torque) accept
and run jobs that they are given by the central server (pbs_server, in Torque). The RingMaster
process is started on one of the compute nodes (the mother superior, in Torque).</li>
| |
<li>The RingMaster then uses another Resource Manager interface (pbsdsh, in Torque) to run
the second HOD component, HodRing, as distributed tasks on each of the allocated compute
nodes.</li>
| |
<li>The HodRings, after initializing, communicate with the RingMaster to get Hadoop commands
and run them accordingly. Once the Hadoop daemons are started, the HodRings register with the RingMaster,
giving information about the daemons.</li>
| |
<li>All the configuration files needed for the Hadoop instances are generated by HOD itself,
some derived from options specified by the user in the HOD configuration file.</li>
| |
| <li>The HOD client keeps communicating with the RingMaster to find out the location of the |
| JobTracker and HDFS daemons.</li> |
| |
| </ul> |
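<p>To make the sequence concrete: once HOD is set up, a user on a submit node
would typically allocate a cluster and run a job along the following lines. This is
a minimal sketch; the cluster directory, node count, and jar name are illustrative,
not prescribed values.</p>
<pre class="code">
# Allocate a 4-node Hadoop cluster. HOD writes the generated
# hadoop-site.xml for this cluster into the cluster directory.
$ hod allocate -d ~/hod-clusters/test -n 4

# Point the Hadoop client at the generated configuration and submit a job.
$ hadoop --config ~/hod-clusters/test jar hadoop-examples.jar wordcount input output

# Release the nodes back to the resource manager when done.
$ hod deallocate -d ~/hod-clusters/test
</pre>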
<p>The rest of the document deals with the steps needed to set up HOD on a physical cluster of nodes.</p>
| </div> |
| |
| |
| <a name="N10056"></a><a name="Pre-requisites"></a> |
| <h2 class="h3">Pre-requisites</h2> |
| <div class="section"> |
<p>Operating System: HOD is currently tested on RHEL4.<br>
Nodes: HOD requires a minimum of 3 nodes configured through a resource manager.<br>
</p>
| <p> Software </p> |
<p>The following components must be installed on *ALL* the nodes before using HOD:</p>
| <ul> |
| |
| <li>Torque: Resource manager</li> |
| |
<li>
<a href="http://www.python.org">Python</a>: HOD requires version 2.5.1 of Python.</li>
| |
| </ul> |
<p>The following components can optionally be installed for better
functionality from HOD:</p>
| <ul> |
| |
<li>
<a href="http://twistedmatrix.com/trac/">Twisted Python</a>: This can be
used to improve the scalability of HOD. If this module is detected, HOD
uses it; otherwise it falls back to the default modules.</li>
| |
| <li> |
| <a href="http://hadoop.apache.org/core/">Hadoop</a>: HOD can automatically |
| distribute Hadoop to all nodes in the cluster. However, it can also use a |
| pre-installed version of Hadoop, if it is available on all nodes in the cluster. |
| HOD currently supports Hadoop 0.15 and above.</li> |
| |
| </ul> |
<p>NOTE: HOD requires these components to be installed at the same
location on all nodes in the cluster. Configuration is also simpler
if they are installed at the same location on the submit nodes.
</p>
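<p>As a quick sanity check before proceeding, you can verify these components on a
node; the install paths shown below are illustrative and will differ on your cluster.</p>
<pre class="code">
# HOD requires Python 2.5.1.
$ python -V
Python 2.5.1

# The Torque client tools should be on the PATH of all relevant nodes.
$ which qsub qstat pbsdsh
/usr/local/torque/bin/qsub
/usr/local/torque/bin/qstat
/usr/local/torque/bin/pbsdsh
</pre>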
| </div> |
| |
| |
| <a name="N1008A"></a><a name="Resource+Manager"></a> |
| <h2 class="h3">Resource Manager</h2> |
| <div class="section"> |
| <p> Currently HOD works with the Torque resource manager, which it uses for its node |
| allocation and job submission. Torque is an open source resource manager from |
| <a href="http://www.clusterresources.com">Cluster Resources</a>, a community effort |
| based on the PBS project. It provides control over batch jobs and distributed compute nodes. Torque is |
| freely available for download from <a href="http://www.clusterresources.com/downloads/torque/">here</a>. |
| </p> |
<p> All documentation related to Torque can be found under
the TORQUE Resource Manager section <a href="http://www.clusterresources.com/pages/resources/documentation.php">here</a>. You can
find wiki documentation <a href="http://www.clusterresources.com/wiki/doku.php?id=torque:torque_wiki">here</a>.
Users may wish to subscribe to TORQUE’s mailing list or view the archives for questions and
comments <a href="http://www.clusterresources.com/pages/resources/mailing-lists.php">here</a>.
</p>
<p>For using HOD with Torque, do the following (a consolidated example follows this list):</p>
| <ul> |
| |
<li>Install the Torque components: pbs_server on one node (the head node), pbs_mom on all
compute nodes, and PBS client tools on all compute nodes and submit
nodes. Perform at least a basic configuration so that the Torque system is up and
running, i.e., pbs_server knows which machines to talk to. Look <a href="http://www.clusterresources.com/wiki/doku.php?id=torque:1.2_basic_configuration">here</a>
for basic configuration.

For advanced configuration, see <a href="http://www.clusterresources.com/wiki/doku.php?id=torque:1.3_advanced_configuration">here</a>.
</li>
| |
<li>Create a queue for submitting jobs on the pbs_server. The name of the queue is the
same as the HOD configuration parameter, resource_manager.queue. The HOD client uses this queue to
submit the RingMaster process as a Torque job.</li>
| |
<li>Specify a 'cluster name' as a 'property' for all nodes in the cluster.
This can be done by using the 'qmgr' command. For example:
qmgr -c "set node node properties=cluster-name", where 'node' is the name of a
compute node and 'cluster-name' is the name of the cluster. The name of the cluster is the same as
the HOD configuration parameter, hod.cluster.</li>
| |
| <li>Ensure that jobs can be submitted to the nodes. This can be done by |
| using the 'qsub' command. For example: |
| echo "sleep 30" | qsub -l nodes=3</li> |
| |
| </ul> |
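<p>As a consolidated illustration of the steps above, the queue and node property
setup might look like the following with 'qmgr' and 'qsub'. The queue name, cluster
name, and node names here are examples, not prescribed values.</p>
<pre class="code">
# Create and enable a queue for HOD jobs; the name must match the
# HOD configuration parameter resource_manager.queue.
$ qmgr -c "create queue hod queue_type=execution"
$ qmgr -c "set queue hod enabled=true"
$ qmgr -c "set queue hod started=true"

# Tag every compute node with the cluster name; this must match
# the HOD configuration parameter hod.cluster.
$ qmgr -c "set node compute-node-1 properties=my-cluster"
$ qmgr -c "set node compute-node-2 properties=my-cluster"

# Verify that jobs can be submitted and scheduled.
$ echo "sleep 30" | qsub -l nodes=3
$ qstat
</pre>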
| </div> |
| |
| |
| <a name="N100C4"></a><a name="Installing+HOD"></a> |
| <h2 class="h3">Installing HOD</h2> |
| <div class="section"> |
<p>Now that the resource manager setup is done, we proceed to obtaining and
installing HOD.</p>
| <ul> |
| |
<li>If you are getting HOD from the Hadoop tarball, it is available under the
'contrib' section of Hadoop, under the root directory 'hod'.</li>
| |
<li>If you are building from source, you can run 'ant tar' from the Hadoop root
directory to generate the Hadoop tarball, and then pick up HOD from there,
as described in the point above (see the sketch after this list).</li>
| |
| <li>Distribute the files under this directory to all the nodes in the |
| cluster. Note that the location where the files are copied should be |
| the same on all the nodes.</li> |
| |
<li>Note that compiling Hadoop builds HOD with appropriate permissions
set on all the required HOD script files.</li>
| |
| </ul> |
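<p>For example, building HOD from source and distributing it might look like the
following; the source path, target directory, and host names are illustrative.</p>
<pre class="code">
# Generate the Hadoop tarball; HOD is included under contrib/hod.
$ cd /path/to/hadoop-src
$ ant tar

# Unpack the tarball and copy the hod directory to the same
# location on every node in the cluster.
$ tar xzf build/hadoop-*.tar.gz
$ for host in node1 node2 node3; do
    scp -r hadoop-*/contrib/hod $host:/opt/hod
  done
</pre>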
| </div> |
| |
| |
| <a name="N100DD"></a><a name="Configuring+HOD"></a> |
| <h2 class="h3">Configuring HOD</h2> |
| <div class="section"> |
<p>After HOD is installed, it has to be configured before you can start using
it.</p>
| <a name="N100E6"></a><a name="Minimal+Configuration+to+get+started"></a> |
| <h3 class="h4">Minimal Configuration to get started</h3> |
| <ul> |
| |
<li>On the node from where you want to run hod, edit the file hodrc
located in the &lt;install dir&gt;/conf directory. This file
contains the minimal set of values required for running hod.</li>
| |
| <li> |
| |
<p>Specify values suitable to your environment for the following
variables defined in the configuration file. Note that some of these
variables are defined in more than one place in the file. A sketch of
filled-in values follows this list.</p>
| |
| |
| <ul> |
| |
| <li>${JAVA_HOME}: Location of Java for Hadoop. Hadoop supports Sun JDK |
| 1.5.x and above.</li> |
| |
| <li>${CLUSTER_NAME}: Name of the cluster which is specified in the |
| 'node property' as mentioned in resource manager configuration.</li> |
| |
| <li>${HADOOP_HOME}: Location of Hadoop installation on the compute and |
| submit nodes.</li> |
| |
<li>${RM_QUEUE}: Queue configured for submitting jobs in the resource
manager configuration.</li>
| |
| <li>${RM_HOME}: Location of the resource manager installation on the |
| compute and submit nodes.</li> |
| |
| </ul> |
| |
| </li> |
| |
| |
| <li> |
| |
<p>The following environment variables *may* need to be set depending on
your environment. These variables must be defined where you run the
HOD client, and also be specified in the HOD configuration file as the
value of the key resource_manager.env-vars. Multiple variables can be
specified as a comma-separated list of key=value pairs.</p>
| |
| |
| <ul> |
| |
<li>HOD_PYTHON_HOME: If you install Python in a non-default location
on the compute nodes or submit nodes, this variable must be
defined to point to the Python executable in the non-standard
location.</li>
| |
| </ul> |
| |
| </li> |
| |
| </ul> |
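<p>To make this concrete, here is a sketch of how the relevant hodrc entries might
look once the placeholders are filled in. The values, and the HOD_PYTHON_HOME entry,
are examples for an assumed environment; the actual file contains more sections and
options than shown here, and some settings appear in several sections.</p>
<pre class="code">
# Illustrative hodrc fragment; adapt paths and names to your cluster.
[hod]
java-home     = /usr/java/jdk1.5.0
cluster       = my-cluster

[resource_manager]
id            = torque
queue         = hod
batch-home    = /usr/local/torque
env-vars      = HOD_PYTHON_HOME=/usr/local/python-2.5.1/bin/python

[gridservice-mapred]
pkgs          = /opt/hadoop

[gridservice-hdfs]
pkgs          = /opt/hadoop
</pre>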
| <a name="N10117"></a><a name="Advanced+Configuration"></a> |
| <h3 class="h4">Advanced Configuration</h3> |
| <p> You can review other configuration options in the file and modify them to suit |
| your needs. Refer to the <a href="hod_config_guide.html">Configuration Guide</a> for information about the HOD |
| configuration. |
| </p> |
| </div> |
| |
| |
| <a name="N10126"></a><a name="Running+HOD"></a> |
| <h2 class="h3">Running HOD</h2> |
| <div class="section"> |
<p>You can now proceed to the <a href="hod_user_guide.html">HOD User Guide</a> for information about how to run HOD,
its various features and options, and for help in troubleshooting.</p>
| </div> |
| |
| |
| <a name="N10134"></a><a name="Supporting+Tools+and+Utilities"></a> |
| <h2 class="h3">Supporting Tools and Utilities</h2> |
| <div class="section"> |
| <p>This section describes certain supporting tools and utilities that can be used in managing HOD deployments.</p> |
| <a name="N1013D"></a><a name="logcondense.py+-+Tool+for+removing+log+files+uploaded+to+DFS"></a> |
| <h3 class="h4">logcondense.py - Tool for removing log files uploaded to DFS</h3> |
<p>As mentioned in
<a href="hod_user_guide.html#Collecting+and+Viewing+Hadoop+Logs">this section</a> of the
<a href="hod_user_guide.html">HOD User Guide</a>, HOD can be configured to upload
Hadoop logs to a statically configured HDFS. Over time, the number of logs uploaded
to DFS could increase. logcondense.py is a tool that helps administrators clean up
log files older than a certain number of days.</p>
| <a name="N1014E"></a><a name="Running+logcondense.py"></a> |
| <h4>Running logcondense.py</h4> |
<p>logcondense.py is available in the hod_install_location/support folder. You can either
run it using Python, e.g. <em>python logcondense.py</em>, or give the file execute permissions
and run it directly as <em>logcondense.py</em>. If permissions are enabled in HDFS, logcondense.py needs to be
run by a user who has sufficient permissions to remove files from the locations in the DFS where log
files are uploaded. For example, as mentioned in the
<a href="hod_config_guide.html#3.7+hodring+options">configuration guide</a>, the logs could
be configured to go under the user's home directory in HDFS. In that case, the user
running logcondense.py should have superuser privileges to remove the files from under
all user home directories.</p>
| <a name="N10162"></a><a name="Command+Line+Options+for+logcondense.py"></a> |
| <h4>Command Line Options for logcondense.py</h4> |
| <p>The following command line options are supported for logcondense.py.</p> |
| <table class="ForrestTable" cellspacing="1" cellpadding="4"> |
| |
| <tr> |
| |
| <td colspan="1" rowspan="1">Short Option</td> |
| <td colspan="1" rowspan="1">Long option</td> |
| <td colspan="1" rowspan="1">Meaning</td> |
| <td colspan="1" rowspan="1">Example</td> |
| |
| </tr> |
| |
| <tr> |
| |
| <td colspan="1" rowspan="1">-p</td> |
| <td colspan="1" rowspan="1">--package</td> |
| <td colspan="1" rowspan="1">Complete path to the hadoop script. The version of hadoop must be the same as the |
| one running HDFS.</td> |
| <td colspan="1" rowspan="1">/usr/bin/hadoop</td> |
| |
| </tr> |
| |
| <tr> |
| |
| <td colspan="1" rowspan="1">-d</td> |
| <td colspan="1" rowspan="1">--days</td> |
| <td colspan="1" rowspan="1">Delete log files older than the specified number of days</td> |
| <td colspan="1" rowspan="1">7</td> |
| |
| </tr> |
| |
| <tr> |
| |
| <td colspan="1" rowspan="1">-c</td> |
| <td colspan="1" rowspan="1">--config</td> |
| <td colspan="1" rowspan="1">Path to the Hadoop configuration directory, under which hadoop-site.xml resides. |
| The hadoop-site.xml must point to the HDFS NameNode from where logs are to be removed.</td> |
| <td colspan="1" rowspan="1">/home/foo/hadoop/conf</td> |
| |
| </tr> |
| |
| <tr> |
| |
| <td colspan="1" rowspan="1">-l</td> |
| <td colspan="1" rowspan="1">--logs</td> |
| <td colspan="1" rowspan="1">A HDFS path, this must be the same HDFS path as specified for the log-destination-uri, |
| as mentioned in the <a href="hod_config_guide.html#3.7+hodring+options">configuration guide</a>, |
| without the hdfs:// URI string</td> |
| <td colspan="1" rowspan="1">/user</td> |
| |
| </tr> |
| |
| <tr> |
| |
| <td colspan="1" rowspan="1">-n</td> |
| <td colspan="1" rowspan="1">--dynamicdfs</td> |
| <td colspan="1" rowspan="1">If true, this will indicate that the logcondense.py script should delete HDFS logs |
| in addition to Map/Reduce logs. Otherwise, it only deletes Map/Reduce logs, which is also the |
| default if this option is not specified. This option is useful if dynamic DFS installations |
| are being provisioned by HOD, and the static DFS installation is being used only to collect |
| logs - a scenario that may be common in test clusters.</td> |
| <td colspan="1" rowspan="1">false</td> |
| |
| </tr> |
| |
| </table> |
<p>So, for example, to delete all log files older than 7 days, using a hadoop-site.xml stored in
~/hadoop-conf and the Hadoop installation under ~/hadoop-0.17.0, you could run:</p>
| <p> |
| <em>python logcondense.py -p ~/hadoop-0.17.0/bin/hadoop -d 7 -c ~/hadoop-conf -l /user</em> |
| </p> |
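<p>Because logs accumulate continuously, administrators may want to schedule this
cleanup to run periodically. As one possibility, a crontab entry like the following
(with illustrative paths) would prune logs older than 7 days every night at midnight:</p>
<pre class="code">
# m h dom mon dow  command
0 0 * * * python /opt/hod/support/logcondense.py -p /opt/hadoop-0.17.0/bin/hadoop -d 7 -c /home/hdfs/hadoop-conf -l /user
</pre>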
| </div> |
| |
| </div> |
| <!--+ |
| |end content |
| +--> |
| <div class="clearboth"> </div> |
| </div> |
| <div id="footer"> |
| <!--+ |
| |start bottomstrip |
| +--> |
| <div class="lastmodified"> |
| <script type="text/javascript"><!-- |
| document.write("Last Published: " + document.lastModified); |
| // --></script> |
| </div> |
| <div class="copyright"> |
| Copyright © |
| 2007 <a href="http://www.apache.org/licenses/">The Apache Software Foundation.</a> |
| </div> |
| <!--+ |
| |end bottomstrip |
| +--> |
| </div> |
| </body> |
| </html> |