[[_wb.vfsclustering]]
= VFS clustering

The <<_wb.vfsrepository,VFS repositories>> (usually git repositories) store all the assets (such
as rules, decision tables, process definitions, forms, etc.). If that VFS resides on each local
server, it must be kept in sync between all servers of a cluster.

Use http://zookeeper.apache.org/[Apache Zookeeper] and http://helix.incubator.apache.org/[Apache Helix]
to accomplish this. Zookeeper glues all the parts together. Helix is the cluster management
component that registers all cluster details (nodes, resources and the cluster itself). Uberfire
(on top of which Workbench is built) uses these two components to provide VFS clustering.

To create a VFS cluster:

. Download http://zookeeper.apache.org/[Apache Zookeeper] and http://helix.incubator.apache.org/[Apache Helix].
. Install both:
.. Unzip Zookeeper into a directory (``$ZOOKEEPER_HOME``).
.. In ``$ZOOKEEPER_HOME/conf``, copy `zoo_sample.cfg` to `zoo.cfg`.
.. Edit ``zoo.cfg``. Adjust the settings if needed. Usually only these 2 properties are relevant:
+
[source,properties]
----
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
----
.. Unzip Helix into a directory (``$HELIX_HOME``).
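+
Both installations are plain archive extractions. As a rough sketch (the archive names below are placeholders; substitute the versions you actually downloaded and the directories you prefer):
+
[source,shell]
----
# Zookeeper: extract and prepare the config file
$ tar xzf zookeeper-<version>.tar.gz -C /opt
$ export ZOOKEEPER_HOME=/opt/zookeeper-<version>
$ cp $ZOOKEEPER_HOME/conf/zoo_sample.cfg $ZOOKEEPER_HOME/conf/zoo.cfg
# Helix: extract only
$ tar xzf helix-core-<version>-pkg.tar.gz -C /opt
$ export HELIX_HOME=/opt/helix-core-<version>
----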
. Configure the cluster in Zookeeper:
.. Go to its `bin` directory:
+
[source,shell]
----
$ cd $ZOOKEEPER_HOME/bin
----
.. Start the Zookeeper server:
+
[source,shell]
----
$ sudo ./zkServer.sh start
----
+
If the server fails to start, verify that the `dataDir` (as specified in ``zoo.cfg``) is accessible.
.. To review Zookeeper's activities, open ``zookeeper.out``:
+
[source,shell]
----
$ cat $ZOOKEEPER_HOME/bin/zookeeper.out
----
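+
To confirm the server is actually up, you can also query it directly; both commands ship with Zookeeper:
+
[source,shell]
----
# Reports whether this node is running and in which mode (standalone, leader or follower)
$ ./zkServer.sh status
# Opens an interactive shell against the running server
$ ./zkCli.sh -server localhost:2181
----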
. Configure the cluster in Helix:
.. Go to its `bin` directory:
+
[source,shell]
----
$ cd $HELIX_HOME/bin
----
.. Create the cluster:
+
[source,shell]
----
$ ./helix-admin.sh --zkSvr localhost:2181 --addCluster kie-cluster
----
+
The `zkSvr` value must match the Zookeeper server in use.
The cluster name (``kie-cluster``) can be changed as needed.
.. Add nodes to the cluster:
+
[source,shell]
----
# Node 1
$ ./helix-admin.sh --zkSvr localhost:2181 --addNode kie-cluster nodeOne:12345
# Node 2
$ ./helix-admin.sh --zkSvr localhost:2181 --addNode kie-cluster nodeTwo:12346
...
----
+
Usually the number of nodes in a cluster equals the number of application servers in the cluster.
The node names (``nodeOne:12345``, ...) can be changed as needed.
+
[NOTE]
====
`nodeOne:12345` is the unique identifier of the node, which will be referenced later on when configuring application servers.
It is not a host and port number, but instead it is used to uniquely identify the logical node.
In the application server configuration below, the colon is replaced by an underscore (``nodeOne_12345``).
====
.. Add resources to the cluster:
+
[source,shell]
----
$ ./helix-admin.sh --zkSvr localhost:2181 --addResource kie-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
----
+
The resource name (``vfs-repo``) can be changed as needed.
.. Rebalance the cluster to initialize it:
+
[source,shell]
----
$ ./helix-admin.sh --zkSvr localhost:2181 --rebalance kie-cluster vfs-repo 2
----
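+
At this point you can verify what has been registered so far. The listing flags below are a sketch and may differ slightly between Helix versions:
+
[source,shell]
----
# Show all clusters known to this Zookeeper ensemble
$ ./helix-admin.sh --zkSvr localhost:2181 --listClusters
# Show the nodes and resources registered in kie-cluster
$ ./helix-admin.sh --zkSvr localhost:2181 --listClusterInfo kie-cluster
----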
.. Start the Helix controller to manage the cluster:
+
[source,shell]
----
$ ./run-helix-controller.sh --zkSvr localhost:2181 --cluster kie-cluster > /tmp/controller.log 2>&1 &
----
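+
The controller runs in the background and, with the redirection above, writes all of its output to the log file, so a quick way to check that it connected to Zookeeper and picked up the cluster is to follow that log:
+
[source,shell]
----
$ tail -f /tmp/controller.log
----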
. Configure the security domain correctly on the application server. For example, on WildFly and JBoss EAP:
.. Edit the file ``$JBOSS_HOME/domain/configuration/domain.xml``.
+
For simplicity's sake, assume the default domain configuration is used, which uses the `full` profile and defines two server nodes as part of ``main-server-group``.
.. Locate the `full` profile and add a new security domain, copying the structure of the security domains already defined there by default:
+
[source,xml]
----
<security-domain name="kie-ide" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
    </authentication>
</security-domain>
----
+
[IMPORTANT]
====
The security-domain name (``kie-ide``) is a magic value: the Workbench application expects a security domain with exactly this name, so do not rename it.
====
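+
For background, a WAR binds to a security domain through its ``WEB-INF/jboss-web.xml`` descriptor, and the Workbench WAR for these servers typically ships one that points at this domain, which is why the name cannot be chosen freely. A minimal sketch of what that binding looks like (for illustration only, not a file you need to edit):
+
[source,xml]
----
<jboss-web>
    <!-- Must match the security domain defined in domain.xml -->
    <security-domain>kie-ide</security-domain>
</jboss-web>
----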
. Configure the <<_wb.systemproperties,system properties>> for the cluster on the application server. For example, on WildFly and JBoss EAP:
.. Edit the file ``$JBOSS_HOME/domain/configuration/host.xml``.
.. Locate the `server` XML elements that belong to the `main-server-group` and add the necessary system properties.
+
For example, for nodeOne:
+
[source,xml]
----
<system-properties>
    <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
    <property name="org.uberfire.nio.git.dir" value="/tmp/kie/nodeone" boot-time="false"/>
    <property name="org.uberfire.metadata.index.dir" value="/tmp/kie/nodeone" boot-time="false"/>
    <property name="org.uberfire.cluster.id" value="kie-cluster" boot-time="false"/>
    <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
    <property name="org.uberfire.cluster.local.id" value="nodeOne_12345" boot-time="false"/>
    <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
    <!-- If you're running both nodes on the same machine: -->
    <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
</system-properties>
----
+
And for nodeTwo:
+
[source,xml]
----
<system-properties>
    <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
    <property name="org.uberfire.nio.git.dir" value="/tmp/kie/nodetwo" boot-time="false"/>
    <property name="org.uberfire.metadata.index.dir" value="/tmp/kie/nodetwo" boot-time="false"/>
    <property name="org.uberfire.cluster.id" value="kie-cluster" boot-time="false"/>
    <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
    <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346" boot-time="false"/>
    <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
    <!-- If you're running both nodes on the same machine: -->
    <property name="org.uberfire.nio.git.daemon.port" value="9419" boot-time="false"/>
</system-properties>
----
+
Make sure the cluster, node and resource names match those configured in Helix.
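+
The example above assumes a managed domain (``domain.xml``/``host.xml``). If you run two standalone servers instead, the same system properties can be passed on the command line when starting each server. A minimal sketch for nodeOne, reusing the values above:
+
[source,shell]
----
$ ./standalone.sh -Djboss.node.name=nodeOne \
    -Dorg.uberfire.nio.git.dir=/tmp/kie/nodeone \
    -Dorg.uberfire.metadata.index.dir=/tmp/kie/nodeone \
    -Dorg.uberfire.cluster.id=kie-cluster \
    -Dorg.uberfire.cluster.zk=localhost:2181 \
    -Dorg.uberfire.cluster.local.id=nodeOne_12345 \
    -Dorg.uberfire.cluster.vfs.lock=vfs-repo \
    -Dorg.uberfire.nio.git.daemon.port=9418
----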

= jBPM clustering

jBPM clustering requires additional configuration on top of the VFS clustering described above.
See http://mswiderski.blogspot.com.br/2013/06/clustering-in-jbpm-v6.html[this blog post] to configure the database, etc., correctly.