<?xml version="1.0"?>
<chapter xml:id="configuration"
version="5.0" xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
<!--
/**
*(C) Copyright 2015 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<title>Configuration</title>
<para>This chapter is the Not-So-Quick start guide to DCS configuration.</para>
<para>Please read this chapter carefully and ensure that all requirements have
been satisfied. Failure to do so will cause you (and us) grief debugging strange errors.
</para>
<para>
    To configure a deploy, edit a file of environment variables
    in <filename>conf/dcs-env.sh</filename> -- this configuration
    is used mostly by the launcher shell scripts to get the cluster
    off the ground -- and then add configuration to an XML file to
    do things like override DCS defaults and specify the location of the ZooKeeper ensemble.<footnote>
<para>
Be careful editing XML. Make sure you close all elements.
Run your file through <command>xmllint</command> or similar
to ensure well-formedness of your document after an edit session.
</para>
</footnote>
</para>
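  <para>For example, assuming <command>xmllint</command> is installed, a quick
  well-formedness check of an edited file looks like this (a zero exit status
  means the document parsed cleanly):
  <programlisting>
$ xmllint --noout conf/dcs-site.xml
$ echo $?
0
  </programlisting>
  </para>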
  <para>After you make an edit to the DCS configuration, make sure you copy the
content of the <filename>conf</filename> directory to
all nodes of the cluster. DCS will not do this for you.
Use <command>rsync</command>.</para>
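  <para>A minimal sketch of such a copy, assuming the install lives under
  <filename>/opt/dcs</filename> on every node (substitute your own path) and
  that <filename>conf/servers</filename> lists the target hosts:
  <programlisting>
$ for host in $(awk '{print $1}' conf/servers | sort -u); do
    rsync -az conf/ ${host}:/opt/dcs/conf/
  done
  </programlisting>
  </para>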
<section xml:id="java">
<title>Java</title>
    <para>DCS requires Java 7 from <link
    xlink:href="http://www.java.com/download/" xlink:show="new">Oracle</link>. Usually
    you'll want to use the latest version available (7u7 is the latest version as of this writing).</para>
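    <para>To confirm which <command>java</command> the launcher scripts will
    pick up, check the version on each node (the output below is illustrative):
    <programlisting>
$ java -version
java version "1.7.0_07"
    </programlisting>
    </para>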
</section>
<section xml:id="os">
<title>Operating System</title>
<section xml:id="ssh">
<title>ssh</title>
<para><command>ssh</command> must be installed and
      <command>sshd</command> must be running to use DCS's scripts to
manage remote DCS daemons. You must be able to ssh to all
nodes, including your local node, using passwordless login (Google
"ssh passwordless login").</para>
</section>
<section xml:id="dns">
<title>DNS</title>
<para>Both forward and reverse DNS resolving should work.</para>
<para>If your machine has multiple interfaces, DCS will use the
interface that the primary hostname resolves to.</para>
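      <para>You can spot-check resolution in both directions with
      <command>host</command> (the host name and address below are
      illustrative):
      <programlisting>
$ host node1.example.com
node1.example.com has address 10.0.0.1
$ host 10.0.0.1
1.0.0.10.in-addr.arpa domain name pointer node1.example.com.
      </programlisting>
      </para>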
</section>
<section xml:id="loopback.ip">
<title>Loopback IP</title>
<para>DCS expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions,
for example, will default to 127.0.1.1 and this will cause problems for you.
</para>
<para><filename>/etc/hosts</filename> should look something like this:
<programlisting>
127.0.0.1 localhost
127.0.0.1 ubuntu.ubuntu-domain ubuntu
</programlisting>
</para>
</section>
<section xml:id="ntp">
<title>NTP</title>
      <para>The clocks on cluster members should be in basic alignment.
Some skew is tolerable but wild skew could generate odd behaviors. Run
<link
xlink:href="http://en.wikipedia.org/wiki/Network_Time_Protocol" xlink:show="new">NTP</link>
on your cluster, or an equivalent.</para>
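      <para>On a node running the NTP daemon, you can confirm it has a sync
      source with <command>ntpq</command>; the peer marked with an asterisk is
      the server currently being used:
      <programlisting>
$ ntpq -p
      </programlisting>
      </para>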
</section>
<section xml:id="windows">
<title>Windows</title>
<para>DCS is not supported on Windows.</para>
</section>
</section> <!-- OS -->
<section xml:id="run_modes">
<title>Run modes</title>
<section xml:id="Single Node">
<title>Single Node</title>
      <para>This is the default mode. Single node is what is described
      in the <xref linkend="quickstart" /> section. In
      single-node mode, DCS runs all its daemons and a local
      ZooKeeper on the same node. ZooKeeper binds to a well-known port.
      </para>
</section>
<section xml:id="Multi Node">
<title>Multi-Node</title>
<para>Multi node is where the daemons are spread
across all nodes in the cluster. Before proceeding, ensure you have a
working Trafodion instance.
</para>
      <para>Below we describe the different setups. Starting,
      verifying, and exploring your install is covered in the
      section that follows, <xref linkend="confirm" />.
      </para>
<para>To set up a multi-node deploy, you will need to
configure DCS by editing files in the DCS <filename>conf</filename>
directory.
</para>
<para>You may need to edit
<code>conf/dcs-env.sh</code> to tell DCS which
<command>java</command> to use. In this file you set DCS environment
variables such as the heap size and other options for the
<application>JVM</application>, the preferred location for log files,
etc. Set <varname>JAVA_HOME</varname> to point at the root of your
<command>java</command> install.</para>
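      <para>For example, in <filename>conf/dcs-env.sh</filename> (the JDK path
      below is an assumption; point it at your own install):
      <programlisting>
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7.0/
      </programlisting>
      </para>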
<section xml:id="servers">
<title><filename>servers</filename></title>
<para>In addition, a multi-node deploy requires that you
modify <filename>conf/servers</filename>. The
<filename>servers</filename> file
lists all hosts that you would have running
        <application>DcsServer</application>s, one host per line, or the host name followed by the number of master executor servers (mxosrvrs).
All servers listed in this file will be started and stopped
when DCS start or stop is run.</para>
</section>
<section xml:id="dcs.zookeeper">
<title>ZooKeeper and DCS</title>
<para>See section <xref linkend="zookeeper"/> for ZooKeeper setup for DCS.</para>
</section>
</section>
<section xml:id="confirm">
<title>Running and Confirming Your Installation</title>
  <para>Make sure Trafodion is running first. Start the Trafodion instance
  by running <filename>sqstart.sh</filename> in the
  <filename>$MY_SQROOT/sql/scripts</filename> directory. You can ensure it started
  properly by testing with <command>sqcheck</command>.
</para>
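  <para>A typical start-and-check sequence, assuming
  <varname>MY_SQROOT</varname> is set in your environment:
  <programlisting>
$ cd $MY_SQROOT/sql/scripts
$ ./sqstart.sh
$ sqcheck
  </programlisting>
  </para>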
  <para><emphasis>If</emphasis> you are managing your own ZooKeeper,
  start it and confirm it's running; otherwise, DCS will start up ZooKeeper
  for you as part of its start process.</para>
<para>Start DCS with the following command:</para>
<programlisting>bin/start-dcs.sh</programlisting>
  <para>Run the above from the
  <varname>DCS_HOME</varname>
  directory.</para>
<para>You should now have a running DCS instance. DCS logs can be
found in the <filename>logs</filename> subdirectory. Check them out
especially if DCS had trouble starting.</para>
  <para>DCS also puts up a UI listing vital attributes and metrics. By default it is
  deployed on the DcsMaster host at port 40010 (DcsServers put up an
  informational HTTP server at 40030 plus their instance number). If the DcsMaster were running on a host named
  <varname>master.example.org</varname> on the default port, to see the
  DcsMaster's homepage you'd point your browser at
  <filename>http://master.example.org:40010</filename>.</para>
<para>To stop DCS after exiting the DCS shell enter
<programlisting>$ ./bin/stop-dcs.sh
stopping dcs...............</programlisting> Shutdown can take a moment to
  complete. It can take longer if your cluster comprises many
  machines.</para>
</section>
</section> <!-- run modes -->
<section xml:id="zookeeper">
<title>ZooKeeper<indexterm>
<primary>ZooKeeper</primary>
</indexterm></title>
<para>DCS depends on a running ZooKeeper cluster.
All participating nodes and clients need to be able to access the
running ZooKeeper ensemble. DCS by default manages a ZooKeeper
"cluster" for you. It will start and stop the ZooKeeper ensemble
as part of the DCS start/stop process. You can also manage the
ZooKeeper ensemble independent of DCS and just point DCS at
the cluster it should use. To toggle DCS management of
ZooKeeper, use the <varname>DCS_MANAGES_ZK</varname> variable in
<filename>conf/dcs-env.sh</filename>. This variable, which
defaults to <varname>true</varname>, tells DCS whether to
start/stop the ZooKeeper ensemble servers as part of DCS
start/stop.</para>
<para>When DCS manages the ZooKeeper ensemble, you can specify
ZooKeeper configuration using its native
<filename>zoo.cfg</filename> file, or, the easier option is to
just specify ZooKeeper options directly in
<filename>conf/dcs-site.xml</filename>. A ZooKeeper
configuration option can be set as a property in the DCS
<filename>dcs-site.xml</filename> XML configuration file by
prefacing the ZooKeeper option name with
<varname>dcs.zookeeper.property</varname>. For example, the
<varname>clientPort</varname> setting in ZooKeeper can be changed
by setting the
<varname>dcs.zookeeper.property.clientPort</varname> property.
For all default values used by DCS, including ZooKeeper
configuration, see <xref linkend="dcs_default_configurations" />. Look for the
  <varname>dcs.zookeeper.property</varname> prefix.<footnote>
<para>For the full list of ZooKeeper configurations, see
ZooKeeper's <filename>zoo.cfg</filename>. DCS does not ship
with a <filename>zoo.cfg</filename> so you will need to browse
the <filename>conf</filename> directory in an appropriate
ZooKeeper download.</para>
</footnote></para>
<para>You must at least list the ensemble servers in
<filename>dcs-site.xml</filename> using the
<varname>dcs.zookeeper.quorum</varname> property. This property
defaults to a single ensemble member at
<varname>localhost</varname> which is not suitable for a fully
distributed DCS. (It binds to the local machine only and remote
clients will not be able to connect). <note xml:id="how_many_zks">
<title>How many ZooKeepers should I run?</title>
<para>You can run a ZooKeeper ensemble that comprises 1 node
only but in production it is recommended that you run a
ZooKeeper ensemble of 3, 5 or 7 machines; the more members an
ensemble has, the more tolerant the ensemble is of host
failures. Also, run an odd number of machines. In ZooKeeper,
an even number of peers is supported, but it is normally not used
because an even sized ensemble requires, proportionally, more peers
to form a quorum than an odd sized ensemble requires. For example, an
ensemble with 4 peers requires 3 to form a quorum, while an ensemble with
5 also requires 3 to form a quorum. Thus, an ensemble of 5 allows 2 peers to
fail, and thus is more fault tolerant than the ensemble of 4, which allows
only 1 down peer.
</para>
<para>Give each ZooKeeper server around 1GB of RAM, and if possible, its own
dedicated disk (A dedicated disk is the best thing you can do
to ensure a performant ZooKeeper ensemble). For very heavily
loaded clusters, run ZooKeeper servers on separate machines
from DcsServers.</para>
</note></para>
<para>For example, to have DCS manage a ZooKeeper quorum on
nodes <emphasis>host{1,2,3,4,5}.example.com</emphasis>, bound to
  port 2222 (the default is 2181), ensure
  <varname>DCS_MANAGES_ZK</varname> is commented out or set to
<varname>true</varname> in <filename>conf/dcs-env.sh</filename>
and then edit <filename>conf/dcs-site.xml</filename> and set
<varname>dcs.zookeeper.property.clientPort</varname> and
<varname>dcs.zookeeper.quorum</varname>. You should also set
<varname>dcs.zookeeper.property.dataDir</varname> to other than
the default as the default has ZooKeeper persist data under
<filename>/tmp</filename> which is often cleared on system
  restart. In the example below we have ZooKeeper persist to
  <filename>/usr/local/zookeeper</filename>. <programlisting>
&lt;configuration&gt;
...
&lt;property&gt;
&lt;name&gt;dcs.zookeeper.property.clientPort&lt;/name&gt;
&lt;value&gt;2222&lt;/value&gt;
&lt;description&gt;Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dcs.zookeeper.quorum&lt;/name&gt;
&lt;value&gt;host1.example.com,host2.example.com,host3.example.com,host4.example.com,host5.example.com&lt;/value&gt;
&lt;description&gt;Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost. For a multi-node setup, this should be set to a full
list of ZooKeeper quorum servers. If DCS_MANAGES_ZK=true set in dcs-env.sh
this is the list of servers which we will start/stop ZooKeeper on.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dcs.zookeeper.property.dataDir&lt;/name&gt;
&lt;value&gt;/usr/local/zookeeper&lt;/value&gt;
&lt;description&gt;Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
&lt;/description&gt;
&lt;/property&gt;
...
&lt;/configuration&gt;</programlisting></para>
<section>
    <title>Using an existing ZooKeeper ensemble</title>
<para>To point DCS at an existing ZooKeeper cluster, one that
is not managed by DCS, uncomment and set <varname>DCS_MANAGES_ZK</varname>
in <filename>conf/dcs-env.sh</filename> to false
<programlisting>
...
  # Tell DCS whether it should manage its own instance of ZooKeeper or not.
export DCS_MANAGES_ZK=false</programlisting> Next set ensemble locations
and client port, if non-standard, in
<filename>dcs-site.xml</filename>, or add a suitably
configured <filename>zoo.cfg</filename> to DCS's
<filename>CLASSPATH</filename>. DCS will prefer the
configuration found in <filename>zoo.cfg</filename> over any
settings in <filename>dcs-site.xml</filename>.</para>
<para>When DCS manages ZooKeeper, it will start/stop the
ZooKeeper servers as a part of the regular start/stop scripts.
If you would like to run ZooKeeper yourself, independent of
    DCS start/stop, you would do the following:</para>
<programlisting>
${DCS_HOME}/bin/dcs-daemons.sh {start,stop} zookeeper
</programlisting>
<para>Note that you can use DCS in this manner to start up a
ZooKeeper cluster, unrelated to DCS. Just make sure to uncomment and set
<varname>DCS_MANAGES_ZK</varname> to <varname>false</varname>
if you want it to stay up across DCS restarts so that when
DCS shuts down, it doesn't take ZooKeeper down with it.</para>
<para>For more information about running a distinct ZooKeeper
cluster, see the <link
xlink:href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html" xlink:show="new">ZooKeeper Getting
Started Guide</link>. Additionally, see the <link xlink:href="http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A7" xlink:show="new">ZooKeeper Wiki</link> or the
<link xlink:href="http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup" xlink:show="new">ZooKeeper documentation</link>
for more information on ZooKeeper sizing.
</para>
</section>
</section> <!-- zookeeper -->
<section xml:id="config.files">
<title>Configuration Files</title>
<section xml:id="dcs.site">
<title><filename>dcs-site.xml</filename> and <filename>dcs-default.xml</filename></title>
        <para>You add site-specific configuration
        for DCS to the file <filename>conf/dcs-site.xml</filename>.
For the list of configurable properties, see
<xref linkend="dcs_default_configurations" />
below or view the raw <filename>dcs-default.xml</filename>
source file in the DCS source code at
<filename>src/main/resources</filename>.
</para>
<para>
        Not all configuration options make it out to
        <filename>dcs-default.xml</filename>. Options thought
        unlikely ever to be changed exist only
        in code; the only way to discover such options is
        by reading the source code itself.
</para>
<para>
Currently, changes here will require a cluster restart for DCS to notice the change.
</para>
<!--The file dcs-default.xml is generated as part of
the build of the dcs site. See the dcs pom.xml.
The generated file is a docbook section with a glossary
in it-->
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude"
href="../../target/site/dcs-default.xml" />
</section>
<section xml:id="dcs.env.sh">
<title><filename>dcs-env.sh</filename></title>
<para>Set DCS environment variables in this file.
        Examples include options to pass to the JVM when a
        DCS daemon starts, such as heap size and garbage collector configs.
        You can also set configurations for log directories,
        niceness, ssh options, where to locate process pid files,
        etc. Open the file at
<filename>conf/dcs-env.sh</filename> and peruse its content.
Each option is fairly well documented. Add your own environment
variables here if you want them read by DCS daemons on startup.</para>
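        <para>For example, to move logs and pid files off the defaults (the
        variable names follow the comments in
        <filename>conf/dcs-env.sh</filename>; the paths below are
        assumptions):
        <programlisting>
# Where log files are stored. $DCS_HOME/logs by default.
export DCS_LOG_DIR=/var/log/dcs

# The directory where pid files are stored. /tmp by default.
export DCS_PID_DIR=/var/run/dcs
        </programlisting>
        </para>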
<para>
Changes here will require a cluster restart for DCS to notice the change.
</para>
</section>
<section xml:id="log4j">
<title><filename>log4j.properties</filename></title>
        <para>Edit this file to change the rate at which DCS log files
        are rolled and to change the level at which DCS logs messages.
</para>
<para>
Changes here will require a cluster restart for DCS to notice the change
though log levels can be changed for particular daemons via the DCS UI.
</para>
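        <para>For example, to turn DCS logging up to DEBUG, add a logger line
        like the following to <filename>conf/log4j.properties</filename> (the
        package name is an assumption; match it to the class names you see in
        your logs):
        <programlisting>
log4j.logger.org.trafodion.dcs=DEBUG
        </programlisting>
        </para>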
</section>
</section> <!-- config files -->
<section xml:id="example_config">
<title>Example Configurations</title>
<section>
<title>Basic Distributed DCS Install</title>
<para>This example shows a basic configuration for a distributed four-node
cluster. The nodes are named <varname>example1</varname>,
<varname>example2</varname>, and so on, through node
<varname>example4</varname> in this example. The DCS Master
is running on the node <varname>example1</varname>.
DCS Servers run on nodes
<varname>example1</varname>-<varname>example4</varname>. A 3-node
ZooKeeper ensemble runs on <varname>example1</varname>,
<varname>example2</varname>, and <varname>example3</varname> on the
default ports. ZooKeeper data is persisted to the directory
<filename>/export/zookeeper</filename>. Below we show what the main
configuration files -- <filename>dcs-site.xml</filename>,
<filename>servers</filename>, and
<filename>dcs-env.sh</filename> -- found in the DCS
<filename>conf</filename> directory might look like.</para>
<section xml:id="dcs_site">
<title><filename>dcs-site.xml</filename></title>
<programlisting>
&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;
&lt;configuration&gt;
&lt;property&gt;
&lt;name&gt;dcs.zookeeper.quorum&lt;/name&gt;
&lt;value&gt;example1,example2,example3&lt;/value&gt;
    &lt;description&gt;Comma separated list of servers in the ZooKeeper Quorum.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dcs.zookeeper.property.dataDir&lt;/name&gt;
&lt;value&gt;/export/zookeeper&lt;/value&gt;
&lt;description&gt;Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
&lt;/description&gt;
&lt;/property&gt;
&lt;/configuration&gt;
</programlisting>
</section>
<section xml:id="servers">
<title><filename>servers</filename></title>
<para>In this file, you list the nodes that will run DcsServers. In this case,
      there are two DcsServers per node, each starting a single mxosrvr:
</para>
<programlisting>
example1
example2
example3
example4
example1
example2
example3
example4
</programlisting>
<para>Alternatively, you can list the nodes followed by the number of mxosrvrs:
</para>
<programlisting>
example1 2
example2 2
example3 2
example4 2
</programlisting>
</section>
<section xml:id="dcs_env">
<title><filename>dcs-env.sh</filename></title>
<para>Below we use a <command>diff</command> to show the differences
from default in the <filename>dcs-env.sh</filename> file. Here we
are setting the DCS heap to be 4G instead of the default
128M.</para>
<programlisting>
$ git diff dcs-env.sh
diff --git a/conf/dcs-env.sh b/conf/dcs-env.sh
index e70ebc6..96f8c27 100644
--- a/conf/dcs-env.sh
+++ b/conf/dcs-env.sh
@@ -31,7 +31,7 @@ export JAVA_HOME=/usr/java/jdk1.7.0/
# export DCS_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 128.
-# export DCS_HEAPSIZE=128
+export DCS_HEAPSIZE=4096
# Extra Java runtime options.
# Below are what we set by default. May only work with SUN JVM.
</programlisting>
<para>Use <command>rsync</command> to copy the content of the
<filename>conf</filename> directory to all nodes of the
cluster.</para>
</section>
</section>
</section> <!-- example config -->
<section xml:id="important_configurations">
<title>The Important Configurations</title>
    <para>Below we list the <emphasis>important</emphasis>
    configurations. We've divided this section into
    required configurations and worth-a-look recommended configs.
</para>
<section xml:id="required_configuration"><title>Required Configurations</title>
<para>Review the <xref linkend="os" /> section.
</para>
</section>
<section xml:id="recommended_configurations"><title>Recommended Configurations</title>
<section xml:id="dcs.master.port"><title><varname>dcs.master.port</varname></title>
<para>The default value is 37800. This is the port the DcsMaster listener binds to
waiting for JDBC/ODBC T4 client connections. The value may need to be changed
if this port number conflicts with other ports in use on your cluster.
</para>
<para>To change this configuration, edit <filename>conf/dcs-site.xml</filename>,
copy the changed file around the cluster and restart.</para>
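    <para>A sketch of the override in <filename>conf/dcs-site.xml</filename>;
    the port value 37900 is illustrative:
    <programlisting>
&lt;property&gt;
  &lt;name&gt;dcs.master.port&lt;/name&gt;
  &lt;value&gt;37900&lt;/value&gt;
&lt;/property&gt;
    </programlisting>
    </para>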
</section>
<section xml:id="dcs.master.port.range"><title><varname>dcs.master.port.range</varname></title>
<para>The default value is 100. This is the total number of ports that MXOSRVRs will scan trying
to find an available port to use. You must ensure the value is large enough to support the
number of MXOSRVRs configured in <filename>conf/servers</filename>.
</para>
<para>To change this configuration, edit <filename>dcs-site.xml</filename>,
copy the changed file around the cluster and restart.</para>
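    <para>A sketch of the override in <filename>conf/dcs-site.xml</filename>;
    the value 200 is illustrative and should be sized to match your
    <filename>conf/servers</filename> entries:
    <programlisting>
&lt;property&gt;
  &lt;name&gt;dcs.master.port.range&lt;/name&gt;
  &lt;value&gt;200&lt;/value&gt;
&lt;/property&gt;
    </programlisting>
    </para>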
</section>
</section>
</section> <!-- important config -->
</chapter>