When things aren't working, the first thing to do is examine the diagnostic logs. Depending upon how you are running the gateway, these diagnostic logs will be written to different locations.
When the gateway is run directly from the command line (java -jar bin/gateway.jar), the diagnostic output is written directly to the console. If you want to capture that output you will need to redirect the console output to a file using OS specific techniques, for example:
    java -jar bin/gateway.jar > gateway.log
When the gateway is run as a system service, the diagnostic output is written to /var/log/knox/knox.out and /var/log/knox/knox.err. Typically only knox.out will have content.
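To follow these logs while reproducing a problem, something like the following can be used (the paths assume a default package installation):

    tail -f /var/log/knox/knox.out /var/log/knox/knox.err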
The log4j.properties file in {GATEWAY_HOME}/conf can be used to change the granularity of the logging done by Knox. The Knox server must be restarted in order for these changes to take effect. There are various useful loggers pre-populated but commented out.
    log4j.logger.org.apache.hadoop.gateway=DEBUG # Use this logger to increase the debugging of Apache Knox itself.
    log4j.logger.org.apache.shiro=DEBUG          # Use this logger to increase the debugging of Apache Shiro.
    log4j.logger.org.apache.http=DEBUG           # Use this logger to increase the debugging of Apache HTTP components.
    log4j.logger.org.apache.http.client=DEBUG    # Use this logger to increase the debugging of Apache HTTP client component.
    log4j.logger.org.apache.http.headers=DEBUG   # Use this logger to increase the debugging of Apache HTTP header.
    log4j.logger.org.apache.http.wire=DEBUG      # Use this logger to increase the debugging of Apache HTTP wire traffic.
If the gateway cannot contact the configured LDAP server, you will see errors in the gateway diagnostic output.
    13/11/15 16:30:17 DEBUG authc.BasicHttpAuthenticationFilter: Attempting to execute login with headers [Basic Z3Vlc3Q6Z3Vlc3QtcGFzc3dvcmQ=]
    13/11/15 16:30:17 DEBUG ldap.JndiLdapRealm: Authenticating user 'guest' through LDAP
    13/11/15 16:30:17 DEBUG ldap.JndiLdapContextFactory: Initializing LDAP context using URL [ldap://localhost:33389] and principal [uid=guest,ou=people,dc=hadoop,dc=apache,dc=org] with pooling disabled
    13/11/15 16:30:17 DEBUG servlet.SimpleCookie: Added HttpServletResponse Cookie [rememberMe=deleteMe; Path=/gateway/vaultservice; Max-Age=0; Expires=Thu, 14-Nov-2013 21:30:17 GMT]
    13/11/15 16:30:17 DEBUG authc.BasicHttpAuthenticationFilter: Authentication required: sending 401 Authentication challenge response.
The client should see something along the lines of:
    HTTP/1.1 401 Unauthorized
    WWW-Authenticate: BASIC realm="application"
    Content-Length: 0
    Server: Jetty(8.1.12.v20130726)
Resolving this will require ensuring that the LDAP server is running and that the connection information is correct. The LDAP server connection information is configured in the cluster's topology file (e.g. {GATEWAY_HOME}/deployments/sandbox.xml).
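In the topology file, the relevant setting is typically the Shiro realm's context factory URL. A minimal sketch, assuming the Sandbox defaults shown in the log output above (adjust the host and port to match your LDAP server):

    <param>
        <name>main.ldapRealm.contextFactory.url</name>
        <value>ldap://localhost:33389</value>
    </param>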
If the gateway cannot contact one of the services in the configured Hadoop cluster, you will see errors in the gateway diagnostic output.
    13/11/18 18:49:45 WARN hadoop.gateway: Connection exception dispatching request: http://localhost:50070/webhdfs/v1/?user.name=guest&op=LISTSTATUS org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:50070 refused
    org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:50070 refused
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:190)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.hadoop.gateway.dispatch.HttpClientDispatch.executeRequest(HttpClientDispatch.java:99)
The resulting behavior on the client will differ by client. For the client DSL executing the {GATEWAY_HOME}/samples/ExampleWebHdfsLs.groovy sample, the output will look like this.
    Caught: org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
    org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
        at org.apache.hadoop.gateway.shell.AbstractRequest.now(AbstractRequest.java:72)
        at org.apache.hadoop.gateway.shell.AbstractRequest$now.call(Unknown Source)
        at ExampleWebHdfsLs.run(ExampleWebHdfsLs.groovy:28)
When executing requests via cURL the output might look similar to the following example.
    Set-Cookie: JSESSIONID=16xwhpuxjr8251ufg22f8pqo85;Path=/gateway/sandbox;Secure
    Content-Type: text/html;charset=ISO-8859-1
    Cache-Control: must-revalidate,no-cache,no-store
    Content-Length: 21856
    Server: Jetty(8.1.12.v20130726)

    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
    <title>Error 500 Server Error</title>
    </head>
    <body><h2>HTTP ERROR 500</h2>
Resolving this will require ensuring that the Hadoop services are running and that the connection information is correct. Basic Hadoop connectivity can be evaluated using cURL as described below. Otherwise the Hadoop cluster connection information is configured in the cluster's topology file (e.g. {GATEWAY_HOME}/deployments/sandbox.xml).
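In the topology file, each Hadoop service the gateway dispatches to is declared with a role and a URL. A minimal sketch for WebHDFS, assuming the Sandbox defaults (adjust the host and port to match your cluster):

    <service>
        <role>WEBHDFS</role>
        <url>http://localhost:50070/webhdfs</url>
    </service>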
When you are experiencing connectivity issues it can be helpful to “bypass” the gateway and invoke the Hadoop REST APIs directly. This can easily be done using the cURL command line utility or many other REST/HTTP clients. Exactly how to use cURL depends on the configuration of your Hadoop cluster. In general, however, you will use a command line like the one that follows.
    curl -ikv -X GET 'http://namenode-host:50070/webhdfs/v1/?op=LISTSTATUS'
If you are using the Sandbox, the WebHDFS or NameNode port will be mapped to localhost, so this command can be used.
    curl -ikv -X GET 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'
If you are using a cluster secured with Kerberos you will need to have used kinit to authenticate to the KDC. Then the command below should verify that WebHDFS in the Hadoop cluster is accessible.
    curl -ikv --negotiate -u : -X GET 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'
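If the command above fails because no Kerberos ticket is available, obtain one with kinit first (the principal shown here is a placeholder; use one valid in your realm):

    kinit guest@EXAMPLE.COM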
The following log information is available when you enable debug level logging for Shiro. This can be done within the conf/log4j.properties file. Note the “Password not correct for user” message.
    13/11/15 16:37:15 DEBUG authc.BasicHttpAuthenticationFilter: Attempting to execute login with headers [Basic Z3Vlc3Q6Z3Vlc3QtcGFzc3dvcmQw]
    13/11/15 16:37:15 DEBUG ldap.JndiLdapRealm: Authenticating user 'guest' through LDAP
    13/11/15 16:37:15 DEBUG ldap.JndiLdapContextFactory: Initializing LDAP context using URL [ldap://localhost:33389] and principal [uid=guest,ou=people,dc=hadoop,dc=apache,dc=org] with pooling disabled
    2013-11-15 16:37:15,899 INFO Password not correct for user 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org'
    2013-11-15 16:37:15,899 INFO Authenticator org.apache.directory.server.core.authn.SimpleAuthenticator@354c78e3 failed to authenticate: BindContext for DN 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org', credentials <0x67 0x75 0x65 0x73 0x74 0x2D 0x70 0x61 0x73 0x73 0x77 0x6F 0x72 0x64 0x30 >
    2013-11-15 16:37:15,899 INFO Cannot bind to the server
    13/11/15 16:37:15 DEBUG servlet.SimpleCookie: Added HttpServletResponse Cookie [rememberMe=deleteMe; Path=/gateway/vaultservice; Max-Age=0; Expires=Thu, 14-Nov-2013 21:37:15 GMT]
    13/11/15 16:37:15 DEBUG authc.BasicHttpAuthenticationFilter: Authentication required: sending 401 Authentication challenge response.
The client will likely see something along the lines of:
    HTTP/1.1 401 Unauthorized
    WWW-Authenticate: BASIC realm="application"
    Content-Length: 0
    Server: Jetty(8.1.12.v20130726)
If your authentication to Knox fails and you believe you are using correct credentials, you can verify the connectivity and credentials using ldapsearch, assuming you are using an LDAP directory for authentication.
Assuming you are using the default values that came out of the box with Knox, your ldapsearch command would be like the following.
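A sketch of such a command, using the Sandbox defaults seen in the log output above (the bind DN, password, and port are assumptions based on those defaults):

    ldapsearch -h localhost -p 33389 \
        -D "uid=guest,ou=people,dc=hadoop,dc=apache,dc=org" \
        -w guest-password \
        -b "uid=guest,ou=people,dc=hadoop,dc=apache,dc=org"

A successful bind will return the guest entry; an authentication failure here indicates the problem lies with the LDAP server or the credentials rather than with Knox.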
You would also see this error if you try a file operation on the home directory of the authenticating user.
The error would look a little different, as shown below, if you are attempting the operation with cURL.
Create the home directory for the user on HDFS. The home directory is typically of the form /user/{userid} and should be owned by the user. The user 'hdfs' can create such a directory and make the user the owner of the directory.
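A sketch of the commands involved, assuming the authenticating user is 'guest' (substitute the actual user ID):

    sudo -u hdfs hdfs dfs -mkdir /user/guest
    sudo -u hdfs hdfs dfs -chown guest:guest /user/guest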
If the Hadoop cluster is not secured with Kerberos, the user submitting a job need not have an OS account on the Hadoop NodeManagers.
If the Hadoop cluster is secured with Kerberos, the user submitting the job should have an OS account on the Hadoop NodeManagers.
In either case, if the user does not have such an OS account, their file permissions are based on user ownership of files or on the “other” permission in the “ugo” POSIX permission model. The user does not get any file permissions as a member of any group if you are using the default hadoop.security.group.mapping.
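To check which groups the cluster actually resolves for a user under the configured mapping, the hdfs groups command can be used (the user name here is a placeholder):

    hdfs groups guest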
TODO: add sample error message from running test on secure cluster with missing OS account
If you experience problems running the HBase samples with the Sandbox VM, it may be necessary to restart HBase and Stargate. This can sometimes occur when the Sandbox VM is restarted from a saved state. If the client hangs after emitting the last line in the sample output below, you are most likely affected.
    System version : {...}
    Cluster version : 0.96.0.2.0.6.0-76-hadoop2
    Status : {...}
    Creating table 'test_table'...
HBase and Stargate can be restarted using the following commands on the Hadoop Sandbox VM. You will need to ssh into the VM in order to run these commands.
    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop master
    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start master
    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh restart rest -p 60080
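After restarting, you can confirm that Stargate is responding before re-running the samples; the port matches the restart command above, and the host assumes you are still on the VM:

    curl -i 'http://localhost:60080/version'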
Clients that do not trust the certificate presented by the server will behave in different ways. A browser will typically warn you of the inability to trust the received certificate and give you an opportunity to add an exception for the particular certificate. cURL will present you with the following message and instructions for turning off certificate verification:
    curl performs SSL certificate verification by default, using a "bundle"
    of Certificate Authority (CA) public keys (CA certs). If the default
    bundle file isn't adequate, you can specify an alternate file
    using the --cacert option.
    If this HTTPS server uses a certificate signed by a CA represented in
    the bundle, the certificate verification probably failed due to a
    problem with the certificate (it might be expired, or the name might
    not match the domain name in the URL).
    If you'd like to turn off curl's verification of the certificate, use
    the -k (or --insecure) option.
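For example, to make a test request against the gateway without certificate verification (the URL assumes the Sandbox topology and the default gateway port, and the credentials are the Sandbox demo user):

    curl -ik -u guest:guest-password 'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'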
Bugs can be filed using [Jira][jira]. Please include the results of the command below in the Environment section. Also include the version of Hadoop being used in the same section.
    cd {GATEWAY_HOME}
    java -jar bin/gateway.jar -version