Add Limitations section and some troubleshooting stubs.
git-svn-id: https://svn.apache.org/repos/asf/incubator/knox/trunk@1542103 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/books/0.3.0/book.md b/books/0.3.0/book.md
index 97be41f..8276cc7 100644
--- a/books/0.3.0/book.md
+++ b/books/0.3.0/book.md
@@ -24,7 +24,7 @@
<img src="knox-logo.gif" alt="Knox"/>
<img src="apache-incubator-logo.png" align="right" alt="Incubator"/>
-# Apache Knox Gateway 0.3.0 (Incubator) User's Guide #
+# Apache Knox Gateway 0.3.x (Incubator) User's Guide #
## Table Of Contents ##
@@ -48,6 +48,7 @@
* #[Oozie]
* #[HBase]
* #[Hive]
+* #[Limitations]
* #[Troubleshooting]
* #[Export Controls]
@@ -72,6 +73,7 @@
<<book_gateway-details.md>>
<<book_client-details.md>>
<<book_service-details.md>>
+<<book_limitations.md>>
<<book_troubleshooting.md>>
diff --git a/books/0.3.0/book_getting-started.md b/books/0.3.0/book_getting-started.md
index 13e699f..015909a 100644
--- a/books/0.3.0/book_getting-started.md
+++ b/books/0.3.0/book_getting-started.md
@@ -93,10 +93,6 @@
| | 0.12.0 | ![n] | ![n] |
-### Sandbox Configuration ###
-
-TODO
-
### More Examples ###
These examples provide more detail about how to access various Apache Hadoop services via the Apache Knox Gateway.
diff --git a/books/0.3.0/book_limitations.md b/books/0.3.0/book_limitations.md
new file mode 100644
index 0000000..2399429
--- /dev/null
+++ b/books/0.3.0/book_limitations.md
@@ -0,0 +1,42 @@
+<!---
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+--->
+
+## Limitations ##
+
+
+### Secure Oozie POST/PUT Request Payload Size Restriction ###
+
+With one exception, there are no known size limits for request or response payloads that pass through the gateway.
+The exception involves POST or PUT request payload sizes for Oozie in a Kerberos secured Hadoop cluster.
+In this one case there is currently a 4KB payload size limit for the first request made to the Hadoop cluster.
+This is a result of how the gateway negotiates a trust relationship between itself and the cluster via SPNEGO.
+There is an undocumented configuration setting that can be used to modify this limit if required.
+In the future this will be made more easily configurable, and at that time it will be documented.
+
+
+### LDAP Groups Acquisition ###
+
+The LDAP authenticator currently does not support the acquisition of group information "out of the box".
+This can be addressed by implementing a custom Shiro Realm extension.
+Building this into the default implementation is on the roadmap.
+
+
+### Group Membership Propagation ###
+
+Groups that are acquired via Identity Assertion Group Principal Mapping are not propagated to the Hadoop services.
+Therefore, groups used for Service Level Authorization policy may not match those acquired within the cluster via GroupMappingServiceProvider plugins.
+
diff --git a/books/0.3.0/book_troubleshooting.md b/books/0.3.0/book_troubleshooting.md
index 8dcc04b..690dad7 100644
--- a/books/0.3.0/book_troubleshooting.md
+++ b/books/0.3.0/book_troubleshooting.md
@@ -17,12 +17,25 @@
## Troubleshooting ##
-### Connection Errors ###
+### Finding Logs ###
-TODO - Explain how to debug connection errors.
+When things aren't working, the first thing to do is examine the diagnostic logs.
+Depending upon how you are running the gateway, these diagnostic logs will be written to different locations.
+
+#### java -jar bin/gateway.jar ####
+
+When the gateway is run this way, the diagnostic output is written directly to the console.
+If you want to capture that output, you will need to redirect the console output to a file using OS-specific techniques.
+
+ java -jar bin/gateway.jar > gateway.log
+
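+If the gateway also writes diagnostics to standard error, both streams can be captured in one file.
+The redirection below is a POSIX shell sketch and is an assumption about your shell, not a gateway requirement.
+
+    java -jar bin/gateway.jar > gateway.log 2>&1
+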
+#### bin/gateway.sh start ####
+
+When the gateway is run this way, the diagnostic output is written to `/var/log/knox/knox.out` and `/var/log/knox/knox.err`.
+Typically only `knox.out` will have content.
-### Enabling Logging ###
+### Increasing Logging ###
The `log4j.properties` file in `{GATEWAY_HOME}/conf` can be used to change the granularity of the logging done by Knox.
The Knox server must be restarted in order for these changes to take effect.
@@ -36,11 +49,82 @@
log4j.logger.org.apache.http.wire=DEBUG # Use this logger to increase the debugging of Apache HTTP wire traffic.
+### LDAP Server Connectivity Issues ###
+
+If the gateway cannot contact the configured LDAP server, you will see errors in the gateway diagnostic output.
+
+ TODO:Kevin - What does it look like when the LDAP server isn't running.
+
+Resolving this will require ensuring that the LDAP server is running and that the connection information is correct.
+The LDAP server connection information is configured in the cluster's topology file (e.g. `{GATEWAY_HOME}/deployments/sandbox.xml`).
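+As a quick check that the LDAP server is reachable, you can query it directly with a standard LDAP client.
+The port, bind DN and password below are sketches based on the demo LDAP server's defaults and may differ in your deployment.
+
+    ldapsearch -h localhost -p 33389 -D 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org' -w guest-password -b 'dc=hadoop,dc=apache,dc=org'
+
+If this command fails, the problem is with the LDAP server or its connection information rather than with the gateway itself.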
+
+
+### Hadoop Cluster Connectivity Issues ###
+
+If the gateway cannot contact one of the services in the configured Hadoop cluster, you will see errors in the gateway diagnostic output.
+
+ TODO:Kevin - What does it look like when the Sandbox isn't running.
+
+Resolving this will require ensuring that the Hadoop services are running and that the connection information is correct.
+Basic Hadoop connectivity can be evaluated using cURL as described below.
+Otherwise, the Hadoop cluster connection information is configured in the cluster's topology file (e.g. `{GATEWAY_HOME}/deployments/sandbox.xml`).
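+The service URLs the gateway will use can be double checked by inspecting the topology file directly.
+The file name and service role below are assumptions based on the Sandbox example.
+
+    grep -A1 'WEBHDFS' {GATEWAY_HOME}/deployments/sandbox.xml
+
+The `<url>` element following the matched `<role>` element is the address the gateway will attempt to contact.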
+
+
+### Check Hadoop Cluster Access via cURL ###
+
+When you are experiencing connectivity issues, it can be helpful to "bypass" the gateway and invoke the Hadoop REST APIs directly.
+This can easily be done using the cURL command line utility or many other REST/HTTP clients.
+Exactly how to use cURL depends on the configuration of your Hadoop cluster.
+In general, however, you will use a command like the one that follows.
+
+ curl -ikv -X GET 'http://namenode-host:50070/webhdfs/v1/?op=LISTSTATUS'
+
+If you are using the Sandbox, the WebHDFS or NameNode port will be mapped to localhost, so this command can be used.
+
+ curl -ikv -X GET 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'
+
+If you are using a cluster secured with Kerberos, you will need to have used `kinit` to authenticate to the KDC.
+Then the command below should verify that WebHDFS in the Hadoop cluster is accessible.
+
+    curl -ikv --negotiate -u : -X GET 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'
+
+
+### Authentication Issues ###
+
+TODO:Kevin - What does it look like when the username/password don't match what is in LDAP?
+
+
+### Hostname Resolution Issues ###
+
+TODO:Kevin - What does it look like when host mapping is enabled and shouldn't be or vice versa.
+
+
+### Job Submission Issues - HDFS Home Directories ###
+
+TODO:Dilli - What does it look like if the LDAP authenticated user doesn't have a HDFS home directory and submits a job.
+
+
+### Job Submission Issues - OS Accounts ###
+
+TODO:Dilli - What does it look like if the LDAP authenticated user submits a job but doesn't have an OS account.
+
+
+### HBase Issues ###
+
+TODO:Kevin - What does it look like when HBase/Stargate hangs and how do you fix it.
+
+
+### SSL Certificate Issues ###
+
+TODO:Larry - What does it look like when a client doesn't trust the gateway's SSL identity certificate?
+
+
### Filing Bugs ###
Bugs can be filed using [Jira][jira].
Please include the results of the command below in the Environment section.
Also include the version of Hadoop being used in the same section.
+ cd {GATEWAY_HOME}
java -jar bin/gateway.jar -version
diff --git a/books/0.3.0/quick_start.md b/books/0.3.0/quick_start.md
index 4d11096..a5e0108 100644
--- a/books/0.3.0/quick_start.md
+++ b/books/0.3.0/quick_start.md
@@ -139,7 +139,8 @@
Knox comes with an LDAP server for demonstration purposes.
- java -jar {GATEWAY_HOME}/bin/ldap.jar conf &
+ cd {GATEWAY_HOME}
+ java -jar bin/ldap.jar conf &
### 7 - Start Knox ###
@@ -152,7 +153,8 @@
This is the simplest way to start the gateway.
Starting this way will result in all logging being written directly to standard output.
- java -jar {GATEWAY_HOME}/bin/gateway.jar
+ cd {GATEWAY_HOME}
+ java -jar bin/gateway.jar
Upon start, Knox server will prompt you for the master secret (i.e. password).
@@ -164,13 +166,15 @@
Run the setup command with root privileges.
- sudo {GATEWAY_HOME}/bin/gateway.sh setup
+ cd {GATEWAY_HOME}
+ sudo bin/gateway.sh setup
The server will prompt you for the master secret (i.e. password).
The server can then be started without root privileges using this command.
- {GATEWAY_HOME}/bin/gateway.sh start
+ cd {GATEWAY_HOME}
+ bin/gateway.sh start
When starting the gateway this way, the process will be run in the background.
The log output is written into the directory /var/log/knox.
@@ -178,11 +182,13 @@
In order to stop a gateway that was started with the script, use this command.
- {GATEWAY_HOME}/bin/gateway.sh stop
+ cd {GATEWAY_HOME}
+ bin/gateway.sh stop
If, for some reason, the gateway is stopped other than by using the command above, you may need to clear the tracking PID.
- {GATEWAY_HOME}/bin/gateway.sh clean
+ cd {GATEWAY_HOME}
+ bin/gateway.sh clean
__NOTE: This command will also clear any log output in /var/log/knox so use this with caution.__