Apache Hadoop 2.0.4-alpha RC2.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.4-alpha-rc1@1467479 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/branch-2.0.4-alpha/.gitattributes b/branch-2.0.4-alpha/.gitattributes
new file mode 100644
index 0000000..851d236
--- /dev/null
+++ b/branch-2.0.4-alpha/.gitattributes
@@ -0,0 +1,18 @@
+# Auto detect text files and perform LF normalization
+*        text=auto
+
+*.cs     text diff=csharp
+*.java   text diff=java
+*.html   text diff=html
+*.py     text diff=python
+*.pl     text diff=perl
+*.pm     text diff=perl
+*.css    text
+*.js     text
+*.sql    text
+
+*.sh     text eol=lf
+
+*.bat    text eol=crlf
+*.csproj text merge=union eol=crlf
+*.sln    text merge=union eol=crlf
diff --git a/branch-2.0.4-alpha/.gitignore b/branch-2.0.4-alpha/.gitignore
new file mode 100644
index 0000000..93e755c
--- /dev/null
+++ b/branch-2.0.4-alpha/.gitignore
@@ -0,0 +1,11 @@
+*.iml
+*.ipr
+*.iws
+.idea
+.svn
+.classpath
+.project
+.settings
+target
+hadoop-hdfs-project/hadoop-hdfs/downloads
+hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads
diff --git a/branch-2.0.4-alpha/BUILDING.txt b/branch-2.0.4-alpha/BUILDING.txt
new file mode 100644
index 0000000..c2e7901
--- /dev/null
+++ b/branch-2.0.4-alpha/BUILDING.txt
@@ -0,0 +1,140 @@
+Build instructions for Hadoop
+
+----------------------------------------------------------------------------------
+Requirements:
+
+* Unix System
+* JDK 1.6
+* Maven 3.0
+* Findbugs 1.3.9 (if running findbugs)
+* ProtocolBuffer 2.4.1+ (for MapReduce and HDFS)
+* CMake 2.6 or newer (if compiling native code)
+* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
+
+----------------------------------------------------------------------------------
+Maven main modules:
+
+  hadoop                            (Main Hadoop project)
+         - hadoop-project           (Parent POM for all Hadoop Maven modules.             )
+                                    (All plugins & dependencies versions are defined here.)
+         - hadoop-project-dist      (Parent POM for modules that generate distributions.)
+         - hadoop-annotations       (Generates the Hadoop doclet used to generate the Javadocs)
+         - hadoop-assemblies        (Maven assemblies used by the different modules)
+         - hadoop-common-project    (Hadoop Common)
+         - hadoop-hdfs-project      (Hadoop HDFS)
+         - hadoop-mapreduce-project (Hadoop MapReduce)
+         - hadoop-tools             (Hadoop tools like Streaming, Distcp, etc.)
+         - hadoop-dist              (Hadoop distribution assembler)
+
+----------------------------------------------------------------------------------
+Where to run Maven from?
+
+  It can be run from any module. The only catch is that if not run from the
+  trunk root, all modules that are not part of the build run must be installed
+  in the local Maven cache or available in a Maven repository.
+
+----------------------------------------------------------------------------------
+Maven build goals:
+
+ * Clean                     : mvn clean
+ * Compile                   : mvn compile [-Pnative]
+ * Run tests                 : mvn test [-Pnative]
+ * Create JAR                : mvn package
+ * Run findbugs              : mvn compile findbugs:findbugs
+ * Run checkstyle            : mvn compile checkstyle:checkstyle
+ * Install JAR in M2 cache   : mvn install
+ * Deploy JAR to Maven repo  : mvn deploy
+ * Run clover                : mvn test -Pclover [-DcloverLicenseLocation=${user.name}/.clover.license]
+ * Run Rat                   : mvn apache-rat:check
+ * Build javadocs            : mvn javadoc:javadoc
+ * Build distribution        : mvn package [-Pdist][-Pdocs][-Psrc][-Pnative][-Dtar]
+ * Change Hadoop version     : mvn versions:set -DnewVersion=NEWVERSION
+
+ Build options:
+
+  * Use -Pnative to compile/bundle native code
+  * Use -Pdocs to generate & bundle the documentation in the distribution (using -Pdist)
+  * Use -Psrc to create a project source TAR.GZ
+  * Use -Dtar to create a TAR with the distribution (using -Pdist)
+
+ Snappy build options:
+
+   Snappy is a compression library that can be utilized by the native code.
+   It is currently an optional component, meaning that Hadoop can be built with
+   or without this dependency.
+
+  * Use -Drequire.snappy to fail the build if libsnappy.so is not found.
+    If this option is not specified and the snappy library is missing,
+    we silently build a version of libhadoop.so that cannot make use of snappy.
+    This option is recommended if you plan on making use of snappy and want
+    to get more repeatable builds.
+
+  * Use -Dsnappy.prefix to specify a nonstandard location for the libsnappy
+    header files and library files. You do not need this option if you have
+    installed snappy using a package manager.
+  * Use -Dsnappy.lib to specify a nonstandard location for the libsnappy library
+    files.  Similarly to snappy.prefix, you do not need this option if you have
+    installed snappy using a package manager.
+  * Use -Dbundle.snappy to copy the contents of the snappy.lib directory into
+    the final tar file. This option requires that -Dsnappy.lib is also given,
+    and it ignores the -Dsnappy.prefix option.
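+
+   For example, a native distribution build against a snappy installation in a
+   nonstandard location, bundling the library into the tar (a sketch; the
+   /opt/snappy path is illustrative):
+
+  $ mvn package -Pdist,native -DskipTests -Dtar \
+      -Drequire.snappy -Dsnappy.lib=/opt/snappy/lib -Dbundle.snappy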
+
+   Tests options:
+
+  * Use -DskipTests to skip tests when running the following Maven goals:
+    'package',  'install', 'deploy' or 'verify'
+  * -Dtest=<TESTCLASSNAME>,<TESTCLASSNAME#METHODNAME>,....
+  * -Dtest.exclude=<TESTCLASSNAME>
+  * -Dtest.exclude.pattern=**/<TESTCLASSNAME1>.java,**/<TESTCLASSNAME2>.java
+
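+  For example, to run a single test class and exclude another (the class names
+  here are placeholders):
+
+  $ mvn test -Dtest=TestFoo
+  $ mvn test -Dtest.exclude.pattern=**/TestBar.java
+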
+----------------------------------------------------------------------------------
+Building components separately
+
+If you are building a submodule directory, all the Hadoop dependencies this
+submodule has will be resolved like any other third-party dependency: from
+the Maven cache, or from a Maven repository if they are not available in the
+cache or the SNAPSHOT has 'timed out'.
+An alternative is to run 'mvn install -DskipTests' once from the top level of
+the Hadoop source tree and then work from the submodule. Keep in mind that
+SNAPSHOTs time out after a while; using the Maven '-nsu' option stops Maven
+from trying to update SNAPSHOTs from external repos.
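+
+For example, a one-time top-level install followed by iterating on the HDFS
+submodule (a sketch using the documented '-nsu' and '-Dtest' options):
+
+  $ mvn install -DskipTests
+  $ cd hadoop-hdfs-project/hadoop-hdfs
+  $ mvn test -nsu -Dtest=<TESTCLASSNAME>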
+
+----------------------------------------------------------------------------------
+Importing projects into Eclipse
+
+When you import the project into Eclipse, first install hadoop-maven-plugins.
+
+  $ cd hadoop-maven-plugins
+  $ mvn install
+
+Then, generate Eclipse project files.
+
+  $ mvn eclipse:eclipse -DskipTests
+
+Finally, import into Eclipse by specifying the root directory of the project via
+[File] > [Import] > [Existing Projects into Workspace].
+
+----------------------------------------------------------------------------------
+Building distributions:
+
+Create binary distribution without native code and without documentation:
+
+  $ mvn package -Pdist -DskipTests -Dtar
+
+Create binary distribution with native code and with documentation:
+
+  $ mvn package -Pdist,native,docs -DskipTests -Dtar
+
+Create source distribution:
+
+  $ mvn package -Psrc -DskipTests
+
+Create source and binary distributions with native code and documentation:
+
+  $ mvn package -Pdist,native,docs,src -DskipTests -Dtar
+
+Create a local staging version of the website (in /tmp/hadoop-site)
+
+  $ mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site
+
+----------------------------------------------------------------------------------
diff --git a/branch-2.0.4-alpha/dev-support/relnotes.py b/branch-2.0.4-alpha/dev-support/relnotes.py
new file mode 100644
index 0000000..57d48a4
--- /dev/null
+++ b/branch-2.0.4-alpha/dev-support/relnotes.py
@@ -0,0 +1,274 @@
+#!/usr/bin/python
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
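+# Example usage (a sketch; the version strings are illustrative):
+#
+#   python relnotes.py -v 2.0.4-alpha --previousVer 2.0.3-alpha
+#
+# This writes releasenotes.<version>.html plus a per-project file such as
+# releasenotes.HADOOP.<version>.html into the current directory.
+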
+import re
+import sys
+from optparse import OptionParser
+import httplib
+import urllib
+import cgi
+try:
+  import json
+except ImportError:
+  import simplejson as json
+
+
+namePattern = re.compile(r' \([0-9]+\)')
+
+def clean(str):
+  return quoteHtml(re.sub(namePattern, "", str))
+
+def formatComponents(str):
+  str = re.sub(namePattern, '', str).replace("'", "")
+  if str != "":
+    ret = "(" + str + ")"
+  else:
+    ret = ""
+  return quoteHtml(ret)
+    
+def quoteHtml(str):
+  return cgi.escape(str).encode('ascii', 'xmlcharrefreplace')
+
+def mstr(obj):
+  if (obj == None):
+    return ""
+  return unicode(obj)
+
+class Version:
+  """Represents a version number"""
+  def __init__(self, data):
+    self.mod = False
+    self.data = data
+    found = re.match('^((\d+)(\.\d+)*).*$', data)
+    if (found):
+      self.parts = [ int(p) for p in found.group(1).split('.') ]
+    else:
+      self.parts = []
+    # backfill version with zeroes if missing parts
+    self.parts.extend((0,) * (3 - len(self.parts)))
+
+  def decBugFix(self):
+    self.mod = True
+    self.parts[2] -= 1
+    return self
+
+  def __str__(self):
+    if (self.mod):
+      return '.'.join([ str(p) for p in self.parts ])
+    return self.data
+
+  def __cmp__(self, other):
+    return cmp(self.parts, other.parts)
+
+class Jira:
+  """A single JIRA"""
+
+  def __init__(self, data, parent):
+    self.key = data['key']
+    self.fields = data['fields']
+    self.parent = parent
+    self.notes = None
+
+  def getId(self):
+    return mstr(self.key)
+
+  def getDescription(self):
+    return mstr(self.fields['description'])
+
+  def getReleaseNote(self):
+    if (self.notes == None):
+      field = self.parent.fieldIdMap['Release Note']
+      if (self.fields.has_key(field)):
+        self.notes=mstr(self.fields[field])
+      else:
+        self.notes=self.getDescription()
+    return self.notes
+
+  def getPriority(self):
+    ret = ""
+    pri = self.fields['priority']
+    if(pri != None):
+      ret = pri['name']
+    return mstr(ret)
+
+  def getAssignee(self):
+    ret = ""
+    mid = self.fields['assignee']
+    if(mid != None):
+      ret = mid['displayName']
+    return mstr(ret)
+
+  def getComponents(self):
+    return " , ".join([ comp['name'] for comp in self.fields['components'] ])
+
+  def getSummary(self):
+    return self.fields['summary']
+
+  def getType(self):
+    ret = ""
+    mid = self.fields['issuetype']
+    if(mid != None):
+      ret = mid['name']
+    return mstr(ret)
+
+  def getReporter(self):
+    ret = ""
+    mid = self.fields['reporter']
+    if(mid != None):
+      ret = mid['displayName']
+    return mstr(ret)
+
+  def getProject(self):
+    ret = ""
+    mid = self.fields['project']
+    if(mid != None):
+      ret = mid['key']
+    return mstr(ret)
+
+
+
+class JiraIter:
+  """An Iterator of JIRAs"""
+
+  def __init__(self, versions):
+    self.versions = versions
+
+    resp = urllib.urlopen("https://issues.apache.org/jira/rest/api/2/field")
+    data = json.loads(resp.read())
+
+    self.fieldIdMap = {}
+    for part in data:
+      self.fieldIdMap[part['name']] = part['id']
+
+    self.jiras = []
+    at=0
+    end=1
+    count=100
+    while (at < end):
+      params = urllib.urlencode({'jql': "project in (HADOOP,HDFS,MAPREDUCE,YARN) and fixVersion in ('"+"' , '".join(versions)+"') and resolution = Fixed", 'startAt':at, 'maxResults':count})
+      resp = urllib.urlopen("https://issues.apache.org/jira/rest/api/2/search?%s"%params)
+      data = json.loads(resp.read())
+      if (data.has_key('errorMessages')):
+        raise Exception(data['errorMessages'])
+      at = data['startAt'] + data['maxResults']
+      end = data['total']
+      self.jiras.extend(data['issues'])
+
+    self.iter = self.jiras.__iter__()
+
+  def __iter__(self):
+    return self
+
+  def next(self):
+    data = self.iter.next()
+    j = Jira(data, self)
+    return j
+
+class Outputs:
+  """Several different files to output to at the same time"""
+
+  def __init__(self, base_file_name, file_name_pattern, keys, params={}):
+    self.params = params
+    self.base = open(base_file_name%params, 'w')
+    self.others = {}
+    for key in keys:
+      both = dict(params)
+      both['key'] = key
+      self.others[key] = open(file_name_pattern%both, 'w')
+
+  def writeAll(self, pattern):
+    both = dict(self.params)
+    both['key'] = ''
+    self.base.write(pattern%both)
+    for key in self.others.keys():
+      both = dict(self.params)
+      both['key'] = key
+      self.others[key].write(pattern%both)
+
+  def writeKeyRaw(self, key, str):
+    self.base.write(str)
+    if (self.others.has_key(key)):
+      self.others[key].write(str)
+  
+  def close(self):
+    self.base.close()
+    for fd in self.others.values():
+      fd.close()
+
+def main():
+  parser = OptionParser(usage="usage: %prog [options] [USER-ignored] [PASSWORD-ignored] [VERSION]")
+  parser.add_option("-v", "--version", dest="versions",
+             action="append", type="string", 
+             help="versions in JIRA to include in releasenotes", metavar="VERSION")
+  parser.add_option("--previousVer", dest="previousVer",
+             action="store", type="string", 
+             help="previous version to include in releasenotes", metavar="VERSION")
+
+  (options, args) = parser.parse_args()
+
+  if (options.versions == None):
+    options.versions = []
+
+  if (len(args) > 2):
+    options.versions.append(args[2])
+
+  if (len(options.versions) <= 0):
+    parser.error("At least one version needs to be supplied")
+
+  versions = [ Version(v) for v in options.versions];
+  versions.sort();
+
+  maxVersion = str(versions[-1])
+  if(options.previousVer == None):  
+    options.previousVer = str(versions[0].decBugFix())
+    print >> sys.stderr, "WARNING: no previousVersion given, guessing it is "+options.previousVer
+
+  list = JiraIter(options.versions)
+  version = maxVersion
+  outputs = Outputs("releasenotes.%(ver)s.html", 
+    "releasenotes.%(key)s.%(ver)s.html", 
+    ["HADOOP","HDFS","MAPREDUCE","YARN"], {"ver":maxVersion, "previousVer":options.previousVer})
+
+  head = '<META http-equiv="Content-Type" content="text/html; charset=UTF-8">\n' \
+    '<title>Hadoop %(key)s %(ver)s Release Notes</title>\n' \
+    '<STYLE type="text/css">\n' \
+    '	H1 {font-family: sans-serif}\n' \
+    '	H2 {font-family: sans-serif; margin-left: 7mm}\n' \
+    '	TABLE {margin-left: 7mm}\n' \
+    '</STYLE>\n' \
+    '</head>\n' \
+    '<body>\n' \
+    '<h1>Hadoop %(key)s %(ver)s Release Notes</h1>\n' \
+    'These release notes include new developer and user-facing incompatibilities, features, and major improvements. \n' \
+    '<a name="changes"/>\n' \
+    '<h2>Changes since Hadoop %(previousVer)s</h2>\n' \
+    '<ul>\n'
+
+  outputs.writeAll(head)
+
+  for jira in list:
+    line = '<li> <a href="https://issues.apache.org/jira/browse/%s">%s</a>.\n' \
+      '     %s %s reported by %s and fixed by %s %s<br>\n' \
+      '     <b>%s</b><br>\n' \
+      '     <blockquote>%s</blockquote></li>\n' \
+      % (quoteHtml(jira.getId()), quoteHtml(jira.getId()), clean(jira.getPriority()), clean(jira.getType()).lower(),
+         quoteHtml(jira.getReporter()), quoteHtml(jira.getAssignee()), formatComponents(jira.getComponents()),
+         quoteHtml(jira.getSummary()), quoteHtml(jira.getReleaseNote()))
+    outputs.writeKeyRaw(jira.getProject(), line)
+ 
+  outputs.writeAll("</ul>\n</body></html>\n")
+  outputs.close()
+
+if __name__ == "__main__":
+  main()
+
diff --git a/branch-2.0.4-alpha/dev-support/smart-apply-patch.sh b/branch-2.0.4-alpha/dev-support/smart-apply-patch.sh
new file mode 100755
index 0000000..ff3c61c
--- /dev/null
+++ b/branch-2.0.4-alpha/dev-support/smart-apply-patch.sh
@@ -0,0 +1,106 @@
+#!/usr/bin/env bash
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+set -e
+
+PATCH_FILE=$1
+if [ -z "$PATCH_FILE" ]; then
+  echo usage: $0 patch-file
+  exit 1
+fi
+
+PATCH=${PATCH:-patch} # allow overriding patch binary
+
+# Cleanup handler for temporary files
+TOCLEAN=""
+cleanup() {
+  rm $TOCLEAN
+  exit $1
+}
+trap "cleanup 1" HUP INT QUIT TERM
+
+# Allow passing "-" for stdin patches
+if [ "$PATCH_FILE" == "-" ]; then
+  PATCH_FILE=/tmp/tmp.in.$$
+  cat /dev/fd/0 > $PATCH_FILE
+  TOCLEAN="$TOCLEAN $PATCH_FILE"
+fi
+
+# Come up with a list of changed files into $TMP
+TMP=/tmp/tmp.paths.$$
+TOCLEAN="$TOCLEAN $TMP"
+
+if $PATCH -p0 -E --dry-run < $PATCH_FILE 2>&1 > $TMP; then
+  PLEVEL=0
+  #if the patch applied at P0 there is the possibility that all we are doing
+  # is adding new files and they would apply anywhere. So try to guess the
+  # correct place to put those files.
+
+  TMP2=/tmp/tmp.paths.2.$$
+  TOCLEAN="$TOCLEAN $TMP2"
+
+  grep '^patching file ' $TMP | awk '{print $3}' | grep -v /dev/null | sort | uniq > $TMP2
+
+  #first, check whether any of the files already exist
+  FOUND_ANY=0
+  for CHECK_FILE in $(cat $TMP2)
+  do
+    if [[ -f $CHECK_FILE ]]; then
+      FOUND_ANY=1
+    fi
+  done
+
+  if [[ "$FOUND_ANY" = "0" ]]; then
+    #all of the files are new, so we have to guess the correct place to put them.
+
+    # if all of the lines start with a/ or b/, then this is a git patch that
+    # was generated without --no-prefix
+    if ! grep -qv '^a/\|^b/' $TMP2 ; then
+      echo Looks like this is a git patch. Stripping a/ and b/ prefixes
+      echo and incrementing PLEVEL
+      PLEVEL=$[$PLEVEL + 1]
+      sed -i -e 's,^[ab]/,,' $TMP2
+    fi
+
+    PREFIX_DIRS_AND_FILES=$(cut -d '/' -f 1 $TMP2 | sort | uniq)
+
+    # if we are at the project root then nothing more to do
+    if [[ -d hadoop-common-project ]]; then
+      echo Looks like this is being run at project root
+
+    # if all of the lines start with hadoop-common/, hadoop-hdfs/, hadoop-yarn/ or hadoop-mapreduce/, this is
+    # relative to the hadoop root instead of the subproject root, so we need
+    # to chop off another layer
+    elif [[ "$PREFIX_DIRS_AND_FILES" =~ ^(hadoop-common-project|hadoop-hdfs-project|hadoop-yarn-project|hadoop-mapreduce-project)$ ]]; then
+
+      echo Looks like this is relative to project root. Increasing PLEVEL
+      PLEVEL=$[$PLEVEL + 1]
+
+    elif ! echo "$PREFIX_DIRS_AND_FILES" | grep -vxq 'hadoop-common-project\|hadoop-hdfs-project\|hadoop-yarn-project\|hadoop-mapreduce-project' ; then
+      echo Looks like this is a cross-subproject patch. Try applying from the project root
+      cleanup 1
+    fi
+  fi
+elif $PATCH -p1 -E --dry-run < $PATCH_FILE 2>&1 > /dev/null; then
+  PLEVEL=1
+elif $PATCH -p2 -E --dry-run < $PATCH_FILE 2>&1 > /dev/null; then
+  PLEVEL=2
+else
+  echo "The patch does not appear to apply with p0 to p2";
+  cleanup 1;
+fi
+
+echo Going to apply patch with: $PATCH -p$PLEVEL
+$PATCH -p$PLEVEL -E < $PATCH_FILE
+
+cleanup $?
diff --git a/branch-2.0.4-alpha/dev-support/test-patch.properties b/branch-2.0.4-alpha/dev-support/test-patch.properties
new file mode 100644
index 0000000..d5e950c
--- /dev/null
+++ b/branch-2.0.4-alpha/dev-support/test-patch.properties
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The number of acceptable warnings for *all* modules
+# Please update the per-module test-patch.properties if you update this file.
+
+OK_RELEASEAUDIT_WARNINGS=0
+OK_FINDBUGS_WARNINGS=0
+OK_JAVADOC_WARNINGS=8
diff --git a/branch-2.0.4-alpha/dev-support/test-patch.sh b/branch-2.0.4-alpha/dev-support/test-patch.sh
new file mode 100755
index 0000000..ac1a18a
--- /dev/null
+++ b/branch-2.0.4-alpha/dev-support/test-patch.sh
@@ -0,0 +1,842 @@
+#!/usr/bin/env bash
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+
+#set -x
+ulimit -n 1024
+
+### Setup some variables.  
+### SVN_REVISION and BUILD_URL are set by Hudson if it is run by patch process
+### Read variables from properties file
+bindir=$(dirname $0)
+
+# Defaults
+if [ -z "$MAVEN_HOME" ]; then
+  MVN=mvn
+else
+  MVN=$MAVEN_HOME/bin/mvn
+fi
+
+PROJECT_NAME=Hadoop
+JENKINS=false
+PATCH_DIR=/tmp
+SUPPORT_DIR=/tmp
+BASEDIR=$(pwd)
+
+PS=${PS:-ps}
+AWK=${AWK:-awk}
+WGET=${WGET:-wget}
+SVN=${SVN:-svn}
+GREP=${GREP:-grep}
+PATCH=${PATCH:-patch}
+JIRACLI=${JIRA:-jira}
+FINDBUGS_HOME=${FINDBUGS_HOME}
+FORREST_HOME=${FORREST_HOME}
+ECLIPSE_HOME=${ECLIPSE_HOME}
+
+###############################################################################
+printUsage() {
+  echo "Usage: $0 [options] patch-file | defect-number"
+  echo
+  echo "Where:"
+  echo "  patch-file is a local patch file containing the changes to test"
+  echo "  defect-number is a JIRA defect number (e.g. 'HADOOP-1234') to test (Jenkins only)"
+  echo
+  echo "Options:"
+  echo "--patch-dir=<dir>      The directory for working and output files (default '/tmp')"
+  echo "--basedir=<dir>        The directory to apply the patch to (default current directory)"
+  echo "--mvn-cmd=<cmd>        The 'mvn' command to use (default \$MAVEN_HOME/bin/mvn, or 'mvn')"
+  echo "--ps-cmd=<cmd>         The 'ps' command to use (default 'ps')"
+  echo "--awk-cmd=<cmd>        The 'awk' command to use (default 'awk')"
+  echo "--svn-cmd=<cmd>        The 'svn' command to use (default 'svn')"
+  echo "--grep-cmd=<cmd>       The 'grep' command to use (default 'grep')"
+  echo "--patch-cmd=<cmd>      The 'patch' command to use (default 'patch')"
+  echo "--findbugs-home=<path> Findbugs home directory (default FINDBUGS_HOME environment variable)"
+  echo "--forrest-home=<path>  Forrest home directory (default FORREST_HOME environment variable)"
+  echo "--dirty-workspace      Allow the local SVN workspace to have uncommitted changes"
+  echo "--run-tests            Run all tests below the base directory"
+  echo
+  echo "Jenkins-only options:"
+  echo "--jenkins              Run by Jenkins (runs tests and posts results to JIRA)"
+  echo "--support-dir=<dir>    The directory to find support files in"
+  echo "--wget-cmd=<cmd>       The 'wget' command to use (default 'wget')"
+  echo "--jira-cmd=<cmd>       The 'jira' command to use (default 'jira')"
+  echo "--jira-password=<pw>   The password for the 'jira' command"
+  echo "--eclipse-home=<path>  Eclipse home directory (default ECLIPSE_HOME environment variable)"
+}
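+
+# Example developer-mode invocation (a sketch; the patch path is illustrative,
+# and both options are documented in printUsage above):
+#
+#   dev-support/test-patch.sh --run-tests --patch-dir=/tmp/patchprocess \
+#     /path/to/HADOOP-1234.patch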
+
+###############################################################################
+parseArgs() {
+  for i in $*
+  do
+    case $i in
+    --jenkins)
+      JENKINS=true
+      ;;
+    --patch-dir=*)
+      PATCH_DIR=${i#*=}
+      ;;
+    --support-dir=*)
+      SUPPORT_DIR=${i#*=}
+      ;;
+    --basedir=*)
+      BASEDIR=${i#*=}
+      ;;
+    --mvn-cmd=*)
+      MVN=${i#*=}
+      ;;
+    --ps-cmd=*)
+      PS=${i#*=}
+      ;;
+    --awk-cmd=*)
+      AWK=${i#*=}
+      ;;
+    --wget-cmd=*)
+      WGET=${i#*=}
+      ;;
+    --svn-cmd=*)
+      SVN=${i#*=}
+      ;;
+    --grep-cmd=*)
+      GREP=${i#*=}
+      ;;
+    --patch-cmd=*)
+      PATCH=${i#*=}
+      ;;
+    --jira-cmd=*)
+      JIRACLI=${i#*=}
+      ;;
+    --jira-password=*)
+      JIRA_PASSWD=${i#*=}
+      ;;
+    --findbugs-home=*)
+      FINDBUGS_HOME=${i#*=}
+      ;;
+    --forrest-home=*)
+      FORREST_HOME=${i#*=}
+      ;;
+    --eclipse-home=*)
+      ECLIPSE_HOME=${i#*=}
+      ;;
+    --dirty-workspace)
+      DIRTY_WORKSPACE=true
+      ;;
+    --run-tests)
+      RUN_TESTS=true
+      ;;
+    *)
+      PATCH_OR_DEFECT=$i
+      ;;
+    esac
+  done
+  if [ -z "$PATCH_OR_DEFECT" ]; then
+    printUsage
+    exit 1
+  fi
+  if [[ $JENKINS == "true" ]] ; then
+    echo "Running in Jenkins mode"
+    defect=$PATCH_OR_DEFECT
+    ECLIPSE_PROPERTY="-Declipse.home=$ECLIPSE_HOME"
+  else
+    echo "Running in developer mode"
+    JENKINS=false
+    ### PATCH_FILE contains the location of the patchfile
+    PATCH_FILE=$PATCH_OR_DEFECT
+    if [[ ! -e "$PATCH_FILE" ]] ; then
+      echo "Unable to locate the patch file $PATCH_FILE"
+      cleanupAndExit 0
+    fi
+    ### Check if $PATCH_DIR exists. If it does not exist, create a new directory
+    if [[ ! -e "$PATCH_DIR" ]] ; then
+      mkdir "$PATCH_DIR"
+      if [[ $? == 0 ]] ; then 
+        echo "$PATCH_DIR has been created"
+      else
+        echo "Unable to create $PATCH_DIR"
+        cleanupAndExit 0
+      fi
+    fi
+    ### Obtain the patch filename to append it to the version number
+    defect=`basename $PATCH_FILE`
+  fi
+}
+
+###############################################################################
+checkout () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Testing patch for ${defect}."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  ### When run by a developer, if the workspace contains modifications, do not continue
+  ### unless the --dirty-workspace option was set
+  status=`$SVN stat --ignore-externals | sed -e '/^X[ ]*/D'`
+  if [[ $JENKINS == "false" ]] ; then
+    if [[ "$status" != "" && -z $DIRTY_WORKSPACE ]] ; then
+      echo "ERROR: can't run in a workspace that contains the following modifications"
+      echo "$status"
+      cleanupAndExit 1
+    fi
+    echo
+  else   
+    cd $BASEDIR
+    $SVN revert -R .
+    rm -rf `$SVN status --no-ignore`
+    $SVN update
+  fi
+  return $?
+}
+
+###############################################################################
+setup () {
+  ### Download latest patch file (ignoring .htm and .html) when run from patch process
+  if [[ $JENKINS == "true" ]] ; then
+    $WGET -q -O $PATCH_DIR/jira http://issues.apache.org/jira/browse/$defect
+    if [[ `$GREP -c 'Patch Available' $PATCH_DIR/jira` == 0 ]] ; then
+      echo "$defect is not \"Patch Available\".  Exiting."
+      cleanupAndExit 0
+    fi
+    relativePatchURL=`$GREP -o '"/jira/secure/attachment/[0-9]*/[^"]*' $PATCH_DIR/jira | $GREP -v -e 'htm[l]*$' | sort | tail -1 | $GREP -o '/jira/secure/attachment/[0-9]*/[^"]*'`
+    patchURL="http://issues.apache.org${relativePatchURL}"
+    patchNum=`echo $patchURL | $GREP -o '[0-9]*/' | $GREP -o '[0-9]*'`
+    echo "$defect patch is being downloaded at `date` from"
+    echo "$patchURL"
+    $WGET -q -O $PATCH_DIR/patch $patchURL
+    VERSION=${SVN_REVISION}_${defect}_PATCH-${patchNum}
+    JIRA_COMMENT="Here are the results of testing the latest attachment 
+  $patchURL
+  against trunk revision ${SVN_REVISION}."
+
+    ### Copy in any supporting files needed by this process
+    cp -r $SUPPORT_DIR/lib/* ./lib
+    #PENDING: cp -f $SUPPORT_DIR/etc/checkstyle* ./src/test
+  ### Copy the patch file to $PATCH_DIR
+  else
+    VERSION=PATCH-${defect}
+    cp $PATCH_FILE $PATCH_DIR/patch
+    if [[ $? == 0 ]] ; then
+      echo "Patch file $PATCH_FILE copied to $PATCH_DIR"
+    else
+      echo "Could not copy $PATCH_FILE to $PATCH_DIR"
+      cleanupAndExit 0
+    fi
+  fi
+  . $BASEDIR/dev-support/test-patch.properties
+  ### exit if warnings are NOT defined in the properties file
+  if [ -z "$OK_FINDBUGS_WARNINGS" ] || [[ -z "$OK_JAVADOC_WARNINGS" ]] || [[ -z $OK_RELEASEAUDIT_WARNINGS ]]; then
+    echo "Please define the following properties in test-patch.properties file"
+	 echo  "OK_FINDBUGS_WARNINGS"
+	 echo  "OK_RELEASEAUDIT_WARNINGS"
+	 echo  "OK_JAVADOC_WARNINGS"
+    cleanupAndExit 1
+  fi
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo " Pre-build trunk to verify trunk stability and javac warnings" 
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  if [[ ! -d hadoop-common-project ]]; then
+    cd $bindir/..
+    echo "Compiling $(pwd)"
+    echo "$MVN clean test -DskipTests > $PATCH_DIR/trunkCompile.txt 2>&1"
+    $MVN clean test -DskipTests > $PATCH_DIR/trunkCompile.txt 2>&1
+    if [[ $? != 0 ]] ; then
+      echo "Top-level trunk compilation is broken?"
+      cleanupAndExit 1
+    fi
+    cd -
+  fi
+  echo "Compiling $(pwd)"
+  echo "$MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Ptest-patch > $PATCH_DIR/trunkJavacWarnings.txt 2>&1"
+  $MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Ptest-patch > $PATCH_DIR/trunkJavacWarnings.txt 2>&1
+  if [[ $? != 0 ]] ; then
+    echo "Trunk compilation is broken?"
+    cleanupAndExit 1
+  fi
+}
+
+###############################################################################
+### Check for @author tags in the patch
+checkAuthor () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Checking there are no @author tags in the patch."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  authorTags=`$GREP -c -i '@author' $PATCH_DIR/patch`
+  echo "There appear to be $authorTags @author tags in the patch."
+  if [[ $authorTags != 0 ]] ; then
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 @author.  The patch appears to contain $authorTags @author tags which the Hadoop community has agreed to not allow in code contributions."
+    return 1
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 @author.  The patch does not contain any @author tags."
+  return 0
+}
+
+###############################################################################
+### Check for tests in the patch
+checkTests () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Checking there are new or changed tests in the patch."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  echo "There appear to be $testReferences test files referenced in the patch."
+  if [[ $testReferences == 0 ]] ; then
+    if [[ $JENKINS == "true" ]] ; then
+      patchIsDoc=`$GREP -c -i 'title="documentation' $PATCH_DIR/jira`
+      if [[ $patchIsDoc != 0 ]] ; then
+        echo "The patch appears to be a documentation patch that doesn't require tests."
+        JIRA_COMMENT="$JIRA_COMMENT
+
+    +0 tests included.  The patch appears to be a documentation patch that doesn't require tests."
+        return 0
+      fi
+    fi
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 tests included.  The patch doesn't appear to include any new or modified tests.
+                        Please justify why no new tests are needed for this patch.
+                        Also please list what manual steps were performed to verify this patch."
+    return 1
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 tests included.  The patch appears to include $testReferences new or modified tests."
+  return 0
+}
+
+cleanUpXml () {
+  cd $BASEDIR/conf
+  for file in `ls *.xml.template`
+    do
+      rm -f `basename $file .template`
+    done
+  cd $BASEDIR  
+}
+
+###############################################################################
+### Attempt to apply the patch
+applyPatch () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Applying patch."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  export PATCH
+  $bindir/smart-apply-patch.sh $PATCH_DIR/patch
+  if [[ $? != 0 ]] ; then
+    echo "PATCH APPLICATION FAILED"
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 patch.  The patch command could not apply the patch."
+    return 1
+  fi
+  return 0
+}
+
+###############################################################################
+### Check there are no javadoc warnings
+checkJavadocWarnings () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Determining number of patched javadoc warnings."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  echo "$MVN clean test javadoc:javadoc -DskipTests -Pdocs -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/patchJavadocWarnings.txt 2>&1"
+  if [ -d hadoop-project ]; then
+    (cd hadoop-project; $MVN install)
+  fi
+  if [ -d hadoop-common-project/hadoop-annotations ]; then  
+    (cd hadoop-common-project/hadoop-annotations; $MVN install)
+  fi
+  $MVN clean test javadoc:javadoc -DskipTests -Pdocs -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/patchJavadocWarnings.txt 2>&1
+  javadocWarnings=`$GREP '\[WARNING\]' $PATCH_DIR/patchJavadocWarnings.txt | $AWK '/Javadoc Warnings/,EOF' | $GREP warning | $AWK 'BEGIN {total = 0} {total += 1} END {print total}'`
+  echo ""
+  echo ""
+  echo "There appear to be $javadocWarnings javadoc warnings generated by the patched build."
+
+  ### if current warnings greater than OK_JAVADOC_WARNINGS
+  if [[ $javadocWarnings -gt $OK_JAVADOC_WARNINGS ]] ; then
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 javadoc.  The javadoc tool appears to have generated `expr $(($javadocWarnings-$OK_JAVADOC_WARNINGS))` warning messages."
+    return 1
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 javadoc.  The javadoc tool did not generate any warning messages."
+  return 0
+}
+
+###############################################################################
+### Check there are no changes in the number of Javac warnings
+checkJavacWarnings () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Determining number of patched javac warnings."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  echo "$MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Pnative -Ptest-patch > $PATCH_DIR/patchJavacWarnings.txt 2>&1"
+  $MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Pnative -Ptest-patch > $PATCH_DIR/patchJavacWarnings.txt 2>&1
+  if [[ $? != 0 ]] ; then
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 javac.  The patch appears to cause the build to fail.
+    return 1
+  fi
+  ### Compare trunk and patch javac warning numbers
+  if [[ -f $PATCH_DIR/patchJavacWarnings.txt ]] ; then
+    trunkJavacWarnings=`$GREP '\[WARNING\]' $PATCH_DIR/trunkJavacWarnings.txt | $AWK 'BEGIN {total = 0} {total += 1} END {print total}'`
+    patchJavacWarnings=`$GREP '\[WARNING\]' $PATCH_DIR/patchJavacWarnings.txt | $AWK 'BEGIN {total = 0} {total += 1} END {print total}'`
+    echo "There appear to be $trunkJavacWarnings javac compiler warnings before the patch and $patchJavacWarnings javac compiler warnings after applying the patch."
+    if [[ $patchJavacWarnings != "" && $trunkJavacWarnings != "" ]] ; then
+      if [[ $patchJavacWarnings -gt $trunkJavacWarnings ]] ; then
+        JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 javac.  The applied patch generated $patchJavacWarnings javac compiler warnings (more than the trunk's current $trunkJavacWarnings warnings)."
+        return 1
+      fi
+    fi
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 javac.  The applied patch does not increase the total number of javac compiler warnings."
+  return 0
+}
+
+###############################################################################
+### Check there are no changes in the number of release audit (RAT) warnings
+checkReleaseAuditWarnings () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Determining number of patched release audit warnings."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  echo "$MVN apache-rat:check -D${PROJECT_NAME}PatchProcess 2>&1"
+  $MVN apache-rat:check -D${PROJECT_NAME}PatchProcess 2>&1
+  find $BASEDIR -name rat.txt | xargs cat > $PATCH_DIR/patchReleaseAuditWarnings.txt
+
+  ### Compare trunk and patch release audit warning numbers
+  if [[ -f $PATCH_DIR/patchReleaseAuditWarnings.txt ]] ; then
+    patchReleaseAuditWarnings=`$GREP -c '\!?????' $PATCH_DIR/patchReleaseAuditWarnings.txt`
+    echo ""
+    echo ""
+    echo "There appear to be $OK_RELEASEAUDIT_WARNINGS release audit warnings before the patch and $patchReleaseAuditWarnings release audit warnings after applying the patch."
+    if [[ $patchReleaseAuditWarnings != "" && $OK_RELEASEAUDIT_WARNINGS != "" ]] ; then
+      if [[ $patchReleaseAuditWarnings -gt $OK_RELEASEAUDIT_WARNINGS ]] ; then
+        JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 release audit.  The applied patch generated $patchReleaseAuditWarnings release audit warnings (more than the trunk's current $OK_RELEASEAUDIT_WARNINGS warnings)."
+        $GREP '\!?????' $PATCH_DIR/patchReleaseAuditWarnings.txt > $PATCH_DIR/patchReleaseAuditProblems.txt
+        echo "Lines that start with ????? in the release audit report indicate files that do not have an Apache license header." >> $PATCH_DIR/patchReleaseAuditProblems.txt
+        JIRA_COMMENT_FOOTER="Release audit warnings: $BUILD_URL/artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
+$JIRA_COMMENT_FOOTER"
+        return 1
+      fi
+    fi
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 release audit.  The applied patch does not increase the total number of release audit warnings."
+  return 0
+}
+
+###############################################################################
+### Check there are no changes in the number of Checkstyle warnings
+checkStyle () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Determining number of patched checkstyle warnings."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  echo "THIS IS NOT IMPLEMENTED YET"
+  echo ""
+  echo ""
+  echo "$MVN test checkstyle:checkstyle -DskipTests -D${PROJECT_NAME}PatchProcess"
+  $MVN test checkstyle:checkstyle -DskipTests -D${PROJECT_NAME}PatchProcess
+
+  JIRA_COMMENT_FOOTER="Checkstyle results: $BUILD_URL/artifact/trunk/build/test/checkstyle-errors.html
+$JIRA_COMMENT_FOOTER"
+  ### TODO: calculate actual patchStyleErrors
+#  patchStyleErrors=0
+#  if [[ $patchStyleErrors != 0 ]] ; then
+#    JIRA_COMMENT="$JIRA_COMMENT
+#
+#    -1 checkstyle.  The patch generated $patchStyleErrors code style errors."
+#    return 1
+#  fi
+#  JIRA_COMMENT="$JIRA_COMMENT
+#
+#    +1 checkstyle.  The patch generated 0 code style errors."
+  return 0
+}
+
+###############################################################################
+### Check there are no changes in the number of Findbugs warnings
+checkFindbugsWarnings () {
+  findbugs_version=`${FINDBUGS_HOME}/bin/findbugs -version`
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Determining number of patched Findbugs warnings."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  echo "$MVN clean test findbugs:findbugs -DskipTests -D${PROJECT_NAME}PatchProcess" 
+  $MVN clean test findbugs:findbugs -DskipTests -D${PROJECT_NAME}PatchProcess < /dev/null
+
+  if [ $? != 0 ] ; then
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 findbugs.  The patch appears to cause Findbugs (version ${findbugs_version}) to fail."
+    return 1
+  fi
+    
+  findbugsWarnings=0
+  for file in $(find $BASEDIR -name findbugsXml.xml)
+  do
+    relative_file=${file#$BASEDIR/} # strip leading $BASEDIR prefix
+    if [ ! $relative_file == "target/findbugsXml.xml" ]; then
+      module_suffix=${relative_file%/target/findbugsXml.xml} # strip trailing path
+      module_suffix=`basename ${module_suffix}`
+    fi
+    
+    cp $file $PATCH_DIR/patchFindbugsWarnings${module_suffix}.xml
+    $FINDBUGS_HOME/bin/setBugDatabaseInfo -timestamp "01/01/2000" \
+      $PATCH_DIR/patchFindbugsWarnings${module_suffix}.xml \
+      $PATCH_DIR/patchFindbugsWarnings${module_suffix}.xml
+    newFindbugsWarnings=`$FINDBUGS_HOME/bin/filterBugs -first "01/01/2000" $PATCH_DIR/patchFindbugsWarnings${module_suffix}.xml \
+      $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.xml | $AWK '{print $1}'`
+    echo "Found $newFindbugsWarnings Findbugs warnings ($file)"
+    findbugsWarnings=$((findbugsWarnings+newFindbugsWarnings))
+    $FINDBUGS_HOME/bin/convertXmlToText -html \
+      $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.xml \
+      $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.html
+    JIRA_COMMENT_FOOTER="Findbugs warnings: $BUILD_URL/artifact/trunk/patchprocess/newPatchFindbugsWarnings${module_suffix}.html
+$JIRA_COMMENT_FOOTER"
+  done
+
+  ### if current warnings greater than OK_FINDBUGS_WARNINGS
+  if [[ $findbugsWarnings -gt $OK_FINDBUGS_WARNINGS ]] ; then
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 findbugs.  The patch appears to introduce `expr $(($findbugsWarnings-$OK_FINDBUGS_WARNINGS))` new Findbugs (version ${findbugs_version}) warnings."
+    return 1
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 findbugs.  The patch does not introduce any new Findbugs (version ${findbugs_version}) warnings."
+  return 0
+}
+
+###############################################################################
+### Verify eclipse:eclipse works
+checkEclipseGeneration () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Running mvn eclipse:eclipse."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+
+  echo "$MVN eclipse:eclipse -D${PROJECT_NAME}PatchProcess"
+  $MVN eclipse:eclipse -D${PROJECT_NAME}PatchProcess
+  if [[ $? != 0 ]] ; then
+      JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 eclipse:eclipse.  The patch failed to build with eclipse:eclipse."
+    return 1
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 eclipse:eclipse.  The patch built with eclipse:eclipse."
+  return 0
+}
+
+
+
+###############################################################################
+### Run the tests
+runTests () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Running tests."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+
+  echo "$MVN clean install -Pnative -D${PROJECT_NAME}PatchProcess"
+  $MVN clean install -Pnative -Drequire.test.libhadoop -D${PROJECT_NAME}PatchProcess
+  if [[ $? != 0 ]] ; then
+    ### Find and format names of failed tests
+    failed_tests=`find . -name 'TEST*.xml' | xargs $GREP  -l -E "<failure|<error" | sed -e "s|.*target/surefire-reports/TEST-|                  |g" | sed -e "s|\.xml||g"`
+
+    if [[ -n "$failed_tests" ]] ; then
+      JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 core tests.  The patch failed these unit tests:
+$failed_tests"
+    else
+      JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 core tests.  The patch failed the unit tests build"
+    fi
+    return 1
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 core tests.  The patch passed unit tests in $modules."
+  return 0
+}
+
+###############################################################################
+### Run the test-contrib target
+runContribTests () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Running contrib tests."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+
+  if [[ `$GREP -c 'test-contrib' build.xml` == 0 ]] ; then
+    echo "No contrib tests in this project."
+    return 0
+  fi
+
+  ### Kill any rogue build processes from the last attempt
+  $PS auxwww | $GREP ${PROJECT_NAME}PatchProcess | $AWK '{print $2}' | /usr/bin/xargs -t -I {} /bin/kill -9 {} > /dev/null
+
+  #echo "$ANT_HOME/bin/ant -Dversion="${VERSION}" $ECLIPSE_PROPERTY -DHadoopPatchProcess= -Dtest.junit.output.format=xml -Dtest.output=no test-contrib"
+  #$ANT_HOME/bin/ant -Dversion="${VERSION}" $ECLIPSE_PROPERTY -DHadoopPatchProcess= -Dtest.junit.output.format=xml -Dtest.output=no test-contrib
+  echo "NOP"
+  if [[ $? != 0 ]] ; then
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 contrib tests.  The patch failed contrib unit tests."
+    return 1
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 contrib tests.  The patch passed contrib unit tests."
+  return 0
+}
+
+###############################################################################
+### Run the inject-system-faults target
+checkInjectSystemFaults () {
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Checking the integrity of system test framework code."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  
+  ### Kill any rogue build processes from the last attempt
+  $PS auxwww | $GREP ${PROJECT_NAME}PatchProcess | $AWK '{print $2}' | /usr/bin/xargs -t -I {} /bin/kill -9 {} > /dev/null
+
+  #echo "$ANT_HOME/bin/ant -Dversion="${VERSION}" -DHadoopPatchProcess= -Dtest.junit.output.format=xml -Dtest.output=no -Dcompile.c++=yes -Dforrest.home=$FORREST_HOME inject-system-faults"
+  #$ANT_HOME/bin/ant -Dversion="${VERSION}" -DHadoopPatchProcess= -Dtest.junit.output.format=xml -Dtest.output=no -Dcompile.c++=yes -Dforrest.home=$FORREST_HOME inject-system-faults
+  echo "NOP"
+  return 0
+  if [[ $? != 0 ]] ; then
+    JIRA_COMMENT="$JIRA_COMMENT
+
+    -1 system test framework.  The patch failed system test framework compile."
+    return 1
+  fi
+  JIRA_COMMENT="$JIRA_COMMENT
+
+    +1 system test framework.  The patch passed system test framework compile."
+  return 0
+}
+
+###############################################################################
+### Submit a comment to the defect's Jira
+submitJiraComment () {
+  local result=$1
+  ### Do not output the value of JIRA_COMMENT_FOOTER when run by a developer
+  if [[  $JENKINS == "false" ]] ; then
+    JIRA_COMMENT_FOOTER=""
+  fi
+  if [[ $result == 0 ]] ; then
+    comment="+1 overall.  $JIRA_COMMENT
+
+$JIRA_COMMENT_FOOTER"
+  else
+    comment="-1 overall.  $JIRA_COMMENT
+
+$JIRA_COMMENT_FOOTER"
+  fi
+  ### Output the test result to the console
+  echo "
+
+
+
+$comment"  
+
+  if [[ $JENKINS == "true" ]] ; then
+    echo ""
+    echo ""
+    echo "======================================================================"
+    echo "======================================================================"
+    echo "    Adding comment to Jira."
+    echo "======================================================================"
+    echo "======================================================================"
+    echo ""
+    echo ""
+    ### Update Jira with a comment
+    export USER=hudson
+    $JIRACLI -s https://issues.apache.org/jira -a addcomment -u hadoopqa -p $JIRA_PASSWD --comment "$comment" --issue $defect
+    $JIRACLI -s https://issues.apache.org/jira -a logout -u hadoopqa -p $JIRA_PASSWD
+  fi
+}
+
+###############################################################################
+### Cleanup files
+cleanupAndExit () {
+  local result=$1
+  if [[ $JENKINS == "true" ]] ; then
+    if [ -e "$PATCH_DIR" ] ; then
+      mv $PATCH_DIR $BASEDIR
+    fi
+  fi
+  echo ""
+  echo ""
+  echo "======================================================================"
+  echo "======================================================================"
+  echo "    Finished build."
+  echo "======================================================================"
+  echo "======================================================================"
+  echo ""
+  echo ""
+  exit $result
+}
+
+###############################################################################
+###############################################################################
+###############################################################################
+
+JIRA_COMMENT=""
+JIRA_COMMENT_FOOTER="Console output: $BUILD_URL/console
+
+This message is automatically generated."
+
+### Check if arguments to the script have been specified properly or not
+parseArgs $@
+cd $BASEDIR
+
+checkout
+RESULT=$?
+if [[ $JENKINS == "true" ]] ; then
+  if [[ $RESULT != 0 ]] ; then
+    exit 100
+  fi
+fi
+setup
+checkAuthor
+RESULT=$?
+
+if [[ $JENKINS == "true" ]] ; then
+  cleanUpXml
+fi
+checkTests
+(( RESULT = RESULT + $? ))
+applyPatch
+if [[ $? != 0 ]] ; then
+  submitJiraComment 1
+  cleanupAndExit 1
+fi
+checkJavadocWarnings
+(( RESULT = RESULT + $? ))
+checkJavacWarnings
+(( RESULT = RESULT + $? ))
+checkEclipseGeneration
+(( RESULT = RESULT + $? ))
+### Checkstyle not implemented yet
+#checkStyle
+#(( RESULT = RESULT + $? ))
+checkFindbugsWarnings
+(( RESULT = RESULT + $? ))
+checkReleaseAuditWarnings
+(( RESULT = RESULT + $? ))
+### Run tests for Jenkins or if explicitly asked for by a developer
+if [[ $JENKINS == "true" || $RUN_TESTS == "true" ]] ; then
+  runTests
+  (( RESULT = RESULT + $? ))
+  runContribTests
+  (( RESULT = RESULT + $? ))
+fi
+checkInjectSystemFaults
+(( RESULT = RESULT + $? ))
+JIRA_COMMENT_FOOTER="Test results: $BUILD_URL/testReport/
+$JIRA_COMMENT_FOOTER"
+
+submitJiraComment $RESULT
+cleanupAndExit $RESULT
diff --git a/branch-2.0.4-alpha/hadoop-assemblies/pom.xml b/branch-2.0.4-alpha/hadoop-assemblies/pom.xml
new file mode 100644
index 0000000..d4d7f18
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-assemblies/pom.xml
@@ -0,0 +1,85 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                      http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-project</artifactId>
+    <version>2.0.4-alpha</version>
+    <relativePath>../hadoop-project</relativePath>
+  </parent>
+  <groupId>org.apache.hadoop</groupId>
+  <artifactId>hadoop-assemblies</artifactId>
+  <version>2.0.4-alpha</version>
+  <name>Apache Hadoop Assemblies</name>
+  <description>Apache Hadoop Assemblies</description>
+
+  <properties>
+    <failIfNoTests>false</failIfNoTests>
+  </properties>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-enforcer-plugin</artifactId>
+        <inherited>false</inherited>
+        <configuration>
+          <rules>
+            <requireMavenVersion>
+              <version>[3.0.0,)</version>
+            </requireMavenVersion>
+            <requireJavaVersion>
+              <version>1.6</version>
+            </requireJavaVersion>
+          </rules>
+        </configuration>
+        <executions>
+          <execution>
+            <id>clean</id>
+            <goals>
+              <goal>enforce</goal>
+            </goals>
+            <phase>pre-clean</phase>
+          </execution>
+          <execution>
+            <id>default</id>
+            <goals>
+              <goal>enforce</goal>
+            </goals>
+            <phase>validate</phase>
+          </execution>
+          <execution>
+            <id>site</id>
+            <goals>
+              <goal>enforce</goal>
+            </goals>
+            <phase>pre-site</phase>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.rat</groupId>
+        <artifactId>apache-rat-plugin</artifactId>
+      </plugin>
+    </plugins>
+  </build>
+</project>
diff --git a/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-dist.xml b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-dist.xml
new file mode 100644
index 0000000..4d93b11
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-dist.xml
@@ -0,0 +1,139 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<assembly>
+  <id>hadoop-distro</id>
+  <formats>
+    <format>dir</format>
+  </formats>
+  <includeBaseDirectory>false</includeBaseDirectory>
+  <fileSets>
+    <fileSet>
+      <directory>${basedir}/src/main/bin</directory>
+      <outputDirectory>/bin</outputDirectory>
+      <excludes>
+        <exclude>*.sh</exclude>
+      </excludes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/conf</directory>
+      <outputDirectory>/etc/hadoop</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/bin</directory>
+      <outputDirectory>/libexec</outputDirectory>
+      <includes>
+        <include>*-config.sh</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/bin</directory>
+      <outputDirectory>/sbin</outputDirectory>
+      <includes>
+        <include>*.sh</include>
+      </includes>
+      <excludes>
+        <exclude>hadoop-config.sh</exclude>
+      </excludes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/packages</directory>
+      <outputDirectory>/sbin</outputDirectory>
+      <includes>
+        <include>*.sh</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}</directory>
+      <outputDirectory>/share/doc/hadoop/${hadoop.component}</outputDirectory>
+      <includes>
+        <include>*.txt</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/webapps</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/webapps</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/conf</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/templates</outputDirectory>
+      <includes>
+        <include>*-site.xml</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/packages/templates/conf</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/templates/conf</outputDirectory>
+      <includes>
+        <include>*</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}</outputDirectory>
+      <includes>
+        <include>${project.artifactId}-${project.version}.jar</include>
+        <include>${project.artifactId}-${project.version}-tests.jar</include>
+      </includes>
+      <excludes>
+        <exclude>hadoop-tools-dist-*.jar</exclude>
+      </excludes>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/sources</outputDirectory>
+      <includes>
+        <include>${project.artifactId}-${project.version}-sources.jar</include>
+        <include>${project.artifactId}-${project.version}-test-sources.jar</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/dev-support/jdiff</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/jdiff</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/site/jdiff/xml</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/jdiff</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/site</directory>
+      <outputDirectory>/share/doc/hadoop/${hadoop.component}</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/native/libhdfs</directory>
+      <includes>
+        <include>hdfs.h</include>
+      </includes>
+      <outputDirectory>/include</outputDirectory>
+    </fileSet>
+  </fileSets>
+  <dependencySets>
+    <dependencySet>
+      <outputDirectory>/share/hadoop/${hadoop.component}/lib</outputDirectory>
+      <unpack>false</unpack>
+      <scope>runtime</scope>
+      <useProjectArtifact>false</useProjectArtifact>
+      <excludes>
+        <exclude>org.apache.ant:*:jar</exclude>
+        <exclude>jdiff:jdiff:jar</exclude>
+      </excludes>
+    </dependencySet>
+  </dependencySets>
+</assembly>
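The descriptor above is packaged under src/main/resources/assemblies/ so that other Hadoop modules can consume it as a shared maven-assembly-plugin descriptor. The sketch below is illustrative only, assuming the plugin's standard shared-descriptor lookup (assemblies/<name>.xml on the plugin classpath); the real wiring lives in hadoop-project-dist and is not part of this patch.

    <!-- Illustrative sketch: consuming the shared descriptor from another module's pom.xml. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <dependencies>
        <dependency>
          <!-- puts src/main/resources/assemblies/*.xml on the plugin classpath -->
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-assemblies</artifactId>
          <version>2.0.4-alpha</version>
        </dependency>
      </dependencies>
      <configuration>
        <descriptorRefs>
          <!-- resolved as assemblies/hadoop-dist.xml from the hadoop-assemblies jar -->
          <descriptorRef>hadoop-dist</descriptorRef>
        </descriptorRefs>
      </configuration>
    </plugin>

The consuming module's ${hadoop.component} property then determines where its artifacts land inside the share/hadoop and share/doc/hadoop layout defined above.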
diff --git a/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml
new file mode 100644
index 0000000..6468a8a
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml
@@ -0,0 +1,52 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<assembly>
+  <id>hadoop-httpfs-dist</id>
+  <formats>
+    <format>dir</format>
+  </formats>
+  <includeBaseDirectory>false</includeBaseDirectory>
+  <fileSets>
+    <!-- Configuration files -->
+    <fileSet>
+      <directory>${basedir}/src/main/conf</directory>
+      <outputDirectory>/etc/hadoop</outputDirectory>
+      <includes>
+        <include>*</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/sbin</directory>
+      <outputDirectory>/sbin</outputDirectory>
+      <includes>
+        <include>*</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/libexec</directory>
+      <outputDirectory>/libexec</outputDirectory>
+      <includes>
+        <include>*</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <!-- Documentation -->
+    <fileSet>
+      <directory>${project.build.directory}/site</directory>
+      <outputDirectory>/share/doc/hadoop/httpfs</outputDirectory>
+    </fileSet>
+  </fileSets>
+</assembly>
diff --git a/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
new file mode 100644
index 0000000..ee57576
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
@@ -0,0 +1,138 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
+  <id>hadoop-mapreduce-dist</id>
+  <formats>
+    <format>dir</format>
+  </formats>
+  <includeBaseDirectory>false</includeBaseDirectory>
+  <fileSets>
+    <fileSet>
+      <directory>bin</directory>
+      <outputDirectory>bin</outputDirectory>
+      <includes>
+        <include>mapred</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>bin</directory>
+      <outputDirectory>libexec</outputDirectory>
+      <includes>
+        <include>mapred-config.sh</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>bin</directory>
+      <outputDirectory>sbin</outputDirectory>
+      <includes>
+        <include>mr-jobhistory-daemon.sh</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>conf</directory>
+      <outputDirectory>etc/hadoop</outputDirectory>
+      <includes>
+        <include>**/*</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}</directory>
+      <outputDirectory>/share/doc/hadoop/${hadoop.component}</outputDirectory>
+      <includes>
+        <include>*.txt</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/webapps</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/webapps</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/conf</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/templates</outputDirectory>
+      <includes>
+        <include>*-site.xml</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/packages/templates/conf</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/templates/conf</outputDirectory>
+      <includes>
+        <include>*</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/dev-support/jdiff</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/jdiff</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/site/jdiff/xml</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/jdiff</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/site</directory>
+      <outputDirectory>/share/doc/hadoop/${hadoop.component}</outputDirectory>
+    </fileSet>
+  </fileSets>
+  <moduleSets>
+    <moduleSet>
+      <binaries>
+        <outputDirectory>share/hadoop/${hadoop.component}</outputDirectory>
+        <includeDependencies>false</includeDependencies>
+        <unpack>false</unpack>
+      </binaries>
+    </moduleSet>
+    <moduleSet>
+      <includes>
+        <include>org.apache.hadoop:hadoop-mapreduce-client-jobclient</include>
+        <include>org.apache.hadoop:hadoop-yarn-server-tests</include>
+      </includes>
+      <binaries>
+        <attachmentClassifier>tests</attachmentClassifier>
+        <outputDirectory>share/hadoop/${hadoop.component}</outputDirectory>
+        <includeDependencies>false</includeDependencies>
+        <unpack>false</unpack>
+      </binaries>
+    </moduleSet>
+  </moduleSets>
+  <dependencySets>
+    <dependencySet>
+      <useProjectArtifact>false</useProjectArtifact>
+      <outputDirectory>/share/hadoop/${hadoop.component}/lib</outputDirectory>
+      <!-- Exclude hadoop artifacts. They will be found via HADOOP* env -->
+      <excludes>
+        <exclude>org.apache.hadoop:hadoop-common</exclude>
+        <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
+        <!-- use slf4j from common to avoid multiple binding warnings -->
+        <exclude>org.slf4j:slf4j-api</exclude>
+        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.hsqldb:hsqldb</exclude>
+      </excludes>
+    </dependencySet>
+    <dependencySet>
+      <useProjectArtifact>false</useProjectArtifact>
+      <outputDirectory>/share/hadoop/${hadoop.component}/lib-examples</outputDirectory>
+      <includes>
+        <include>org.hsqldb:hsqldb</include>
+      </includes>
+    </dependencySet>
+  </dependencySets>
+</assembly>
diff --git a/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml
new file mode 100644
index 0000000..fd03bfd
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml
@@ -0,0 +1,48 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
+  <id>hadoop-src</id>
+  <formats>
+    <format>tar.gz</format>
+  </formats>
+  <includeBaseDirectory>true</includeBaseDirectory>
+  <fileSets>
+    <fileSet>
+      <directory>.</directory>
+      <useDefaultExcludes>true</useDefaultExcludes>
+      <excludes>
+        <exclude>.git/**</exclude>
+        <exclude>**/.gitignore</exclude>
+        <exclude>**/.svn</exclude>
+        <exclude>**/*.iws</exclude>
+        <exclude>**/*.ipr</exclude>
+        <exclude>**/*.iml</exclude>
+        <exclude>**/.classpath</exclude>
+        <exclude>**/.project</exclude>
+        <exclude>**/.settings</exclude>
+        <exclude>**/target/**</exclude>
+        <!-- until the code that does this is fixed -->
+        <exclude>**/*.log</exclude>
+        <exclude>**/build/**</exclude>
+        <exclude>**/file:/**</exclude>
+        <exclude>**/SecurityAuth.audit*</exclude>
+      </excludes>
+    </fileSet>
+  </fileSets>
+</assembly>
diff --git a/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
new file mode 100644
index 0000000..1e3356d
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
@@ -0,0 +1,67 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
+  <id>hadoop-tools</id>
+  <formats>
+    <format>dir</format>
+  </formats>
+  <includeBaseDirectory>false</includeBaseDirectory>
+  <fileSets>
+    <fileSet>
+      <directory>../hadoop-pipes/src/main/native/pipes/api/hadoop</directory>
+      <includes>
+        <include>*.hh</include>
+      </includes>
+      <outputDirectory>/include</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>../hadoop-pipes/src/main/native/utils/api/hadoop</directory>
+      <includes>
+        <include>*.hh</include>
+      </includes>
+      <outputDirectory>/include</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>../hadoop-pipes/target/native</directory>
+      <includes>
+        <include>*.a</include>
+      </includes>
+      <outputDirectory>lib/native</outputDirectory>
+    </fileSet>
+  </fileSets>
+  <dependencySets>
+    <dependencySet>
+      <outputDirectory>/share/hadoop/${hadoop.component}/lib</outputDirectory>
+      <unpack>false</unpack>
+      <scope>runtime</scope>
+      <useProjectArtifact>false</useProjectArtifact>
+      <!-- Exclude hadoop artifacts. They will be found via HADOOP* env -->
+      <excludes>
+        <exclude>org.apache.hadoop:hadoop-common</exclude>
+        <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
+        <exclude>org.apache.hadoop:hadoop-mapreduce</exclude>
+        <!-- pipes is native code; this just keeps the pom from being packaged -->
+        <exclude>org.apache.hadoop:hadoop-pipes</exclude>
+        <!-- use slf4j from common to avoid multiple binding warnings -->
+        <exclude>org.slf4j:slf4j-api</exclude>
+        <exclude>org.slf4j:slf4j-log4j12</exclude>
+      </excludes>
+    </dependencySet>
+  </dependencySets>
+</assembly>
diff --git a/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml
new file mode 100644
index 0000000..9e9223e
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml
@@ -0,0 +1,152 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
+  <id>hadoop-yarn-dist</id>
+  <formats>
+    <format>dir</format>
+  </formats>
+  <includeBaseDirectory>false</includeBaseDirectory>
+  <fileSets>
+    <fileSet>
+      <directory>hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin</directory>
+      <outputDirectory>bin</outputDirectory>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>hadoop-yarn/bin</directory>
+      <outputDirectory>bin</outputDirectory>
+      <includes>
+        <include>yarn</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>hadoop-yarn/bin</directory>
+      <outputDirectory>libexec</outputDirectory>
+      <includes>
+        <include>yarn-config.sh</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>hadoop-yarn/bin</directory>
+      <outputDirectory>sbin</outputDirectory>
+      <includes>
+        <include>yarn-daemon.sh</include>
+        <include>yarn-daemons.sh</include>
+        <include>start-yarn.sh</include>
+        <include>stop-yarn.sh</include>
+      </includes>
+      <fileMode>0755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>hadoop-yarn/conf</directory>
+      <outputDirectory>etc/hadoop</outputDirectory>
+      <includes>
+        <include>**/*</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/conf</directory>
+      <outputDirectory>etc/hadoop</outputDirectory>
+      <includes>
+        <include>**/*</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}</directory>
+      <outputDirectory>/share/doc/hadoop/${hadoop.component}</outputDirectory>
+      <includes>
+        <include>*.txt</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/webapps</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/webapps</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/conf</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/templates</outputDirectory>
+      <includes>
+        <include>*-site.xml</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/src/main/packages/templates/conf</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/templates/conf</outputDirectory>
+      <includes>
+        <include>*</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${basedir}/dev-support/jdiff</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/jdiff</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/site/jdiff/xml</directory>
+      <outputDirectory>/share/hadoop/${hadoop.component}/jdiff</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/site</directory>
+      <outputDirectory>/share/doc/hadoop/${hadoop.component}</outputDirectory>
+    </fileSet>
+  </fileSets>
+  <moduleSets>
+    <moduleSet>
+      <binaries>
+        <outputDirectory>share/hadoop/${hadoop.component}</outputDirectory>
+        <includeDependencies>false</includeDependencies>
+        <unpack>false</unpack>
+      </binaries>
+    </moduleSet>
+    <moduleSet>
+      <includes>
+        <include>org.apache.hadoop:hadoop-yarn-server-tests</include>
+      </includes>
+      <binaries>
+        <attachmentClassifier>tests</attachmentClassifier>
+        <outputDirectory>share/hadoop/${hadoop.component}/test</outputDirectory>
+        <includeDependencies>false</includeDependencies>
+        <unpack>false</unpack>
+      </binaries>
+    </moduleSet>
+  </moduleSets>
+  <dependencySets>
+    <dependencySet>
+      <useProjectArtifact>false</useProjectArtifact>
+      <outputDirectory>/share/hadoop/${hadoop.component}/lib</outputDirectory>
+      <!-- Exclude hadoop artifacts. They will be found via HADOOP* env -->
+      <excludes>
+        <exclude>org.apache.hadoop:hadoop-common</exclude>
+        <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
+        <!-- use slf4j from common to avoid multiple binding warnings -->
+        <exclude>org.slf4j:slf4j-api</exclude>
+        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.hsqldb:hsqldb</exclude>
+      </excludes>
+    </dependencySet>
+    <dependencySet>
+      <useProjectArtifact>false</useProjectArtifact>
+      <outputDirectory>/share/hadoop/${hadoop.component}/lib-examples</outputDirectory>
+      <includes>
+        <include>org.hsqldb:hsqldb</include>
+      </includes>
+    </dependencySet>
+  </dependencySets>
+</assembly>
diff --git a/branch-2.0.4-alpha/hadoop-client/pom.xml b/branch-2.0.4-alpha/hadoop-client/pom.xml
new file mode 100644
index 0000000..33847d3
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-client/pom.xml
@@ -0,0 +1,352 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-project-dist</artifactId>
+    <version>2.0.4-alpha</version>
+    <relativePath>../hadoop-project-dist</relativePath>
+  </parent>
+  <groupId>org.apache.hadoop</groupId>
+  <artifactId>hadoop-client</artifactId>
+  <version>2.0.4-alpha</version>
+  <packaging>jar</packaging>
+
+  <description>Apache Hadoop Client</description>
+  <name>Apache Hadoop Client</name>
+
+  <properties>
+    <hadoop.component>client</hadoop.component>
+  </properties>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <scope>compile</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>commons-httpclient</groupId>
+          <artifactId>commons-httpclient</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>tomcat</groupId>
+          <artifactId>jasper-compiler</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>tomcat</groupId>
+          <artifactId>jasper-runtime</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.servlet</groupId>
+          <artifactId>servlet-api</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.servlet.jsp</groupId>
+          <artifactId>jsp-api</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>commons-logging</groupId>
+          <artifactId>commons-logging-api</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>jetty</groupId>
+          <artifactId>org.mortbay.jetty</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.mortbay.jetty</groupId>
+          <artifactId>jetty</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.mortbay.jetty</groupId>
+          <artifactId>jetty-util</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.mortbay.jetty</groupId>
+          <artifactId>jsp-api-2.1</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.mortbay.jetty</groupId>
+          <artifactId>servlet-api-2.5</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-core</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-json</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-server</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jdt</groupId>
+          <artifactId>core</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.aspectj</groupId>
+          <artifactId>aspectjrt</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.avro</groupId>
+          <artifactId>avro-ipc</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>net.sf.kosmosfs</groupId>
+          <artifactId>kfs</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>net.java.dev.jets3t</groupId>
+          <artifactId>jets3t</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.jcraft</groupId>
+          <artifactId>jsch</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>commons-el</groupId>
+          <artifactId>commons-el</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdfs</artifactId>
+      <scope>compile</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>commons-daemon</groupId>
+          <artifactId>commons-daemon</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.avro</groupId>
+          <artifactId>avro</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.mortbay.jetty</groupId>
+          <artifactId>jetty</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-core</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-server</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.servlet</groupId>
+          <artifactId>servlet-api</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.servlet.jsp</groupId>
+          <artifactId>jsp-api</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>tomcat</groupId>
+          <artifactId>jasper-runtime</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-mapreduce-client-app</artifactId>
+      <scope>compile</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>javax.servlet</groupId>
+          <artifactId>servlet-api</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-yarn-server-nodemanager</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-yarn-server-web-proxy</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-annotations</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.google.inject.extensions</groupId>
+          <artifactId>guice-servlet</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>junit</groupId>
+          <artifactId>junit</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.avro</groupId>
+          <artifactId>avro</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>jline</groupId>
+          <artifactId>jline</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-api</artifactId>
+      <scope>compile</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-annotations</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.google.inject</groupId>
+          <artifactId>guice</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey.jersey-test-framework</groupId>
+          <artifactId>jersey-test-framework-grizzly2</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-server</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey.contribs</groupId>
+          <artifactId>jersey-guice</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.google.inject.extensions</groupId>
+          <artifactId>guice-servlet</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.avro</groupId>
+          <artifactId>avro</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-core</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-json</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-mapreduce-client-core</artifactId>
+      <scope>compile</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>junit</groupId>
+          <artifactId>junit</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.google.inject</groupId>
+          <artifactId>guice</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey.jersey-test-framework</groupId>
+          <artifactId>jersey-test-framework-grizzly2</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-server</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey.contribs</groupId>
+          <artifactId>jersey-guice</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.avro</groupId>
+          <artifactId>avro</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-annotations</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.google.inject.extensions</groupId>
+          <artifactId>guice-servlet</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-json</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
+      <scope>compile</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>junit</groupId>
+          <artifactId>junit</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.avro</groupId>
+          <artifactId>avro</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-annotations</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.google.inject.extensions</groupId>
+          <artifactId>guice-servlet</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-annotations</artifactId>
+      <scope>compile</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>jdk.tools</groupId>
+          <artifactId>jdk.tools</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    
+  </dependencies>
+
+</project>
+
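The hadoop-client pom above aggregates the client-facing Hadoop artifacts and trims server-side and duplicate dependencies through exclusions, so downstream applications can depend on a single artifact. A minimal sketch of that downstream usage, assuming a typical application pom:

    <!-- Illustrative downstream usage: depend on the aggregated client artifact
         instead of pulling in individual Hadoop modules. -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.0.4-alpha</version>
    </dependency>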
diff --git a/branch-2.0.4-alpha/hadoop-common-project/dev-support/test-patch.properties b/branch-2.0.4-alpha/hadoop-common-project/dev-support/test-patch.properties
new file mode 100644
index 0000000..c33b2a9
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/dev-support/test-patch.properties
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The number of acceptable warnings for this module
+# Please update the root test-patch.properties if you update this file.
+
+OK_RELEASEAUDIT_WARNINGS=0
+OK_FINDBUGS_WARNINGS=0
+OK_JAVADOC_WARNINGS=13
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/pom.xml b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/pom.xml
new file mode 100644
index 0000000..655322a
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/pom.xml
@@ -0,0 +1,61 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                      http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-project</artifactId>
+    <version>2.0.4-alpha</version>
+    <relativePath>../../hadoop-project</relativePath>
+  </parent>
+  <groupId>org.apache.hadoop</groupId>
+  <artifactId>hadoop-annotations</artifactId>
+  <version>2.0.4-alpha</version>
+  <description>Apache Hadoop Annotations</description>
+  <name>Apache Hadoop Annotations</name>
+  <packaging>jar</packaging>
+
+  <dependencies>
+    <dependency>
+      <groupId>jdiff</groupId>
+      <artifactId>jdiff</artifactId>
+      <scope>provided</scope>
+    </dependency>
+  </dependencies>
+
+  <profiles>
+    <profile>
+      <id>os.linux</id>
+      <activation>
+        <os>
+          <family>!Mac</family>
+        </os>
+      </activation>
+      <dependencies>
+        <dependency>
+          <groupId>jdk.tools</groupId>
+          <artifactId>jdk.tools</artifactId>
+          <version>1.6</version>
+          <scope>system</scope>
+          <systemPath>${java.home}/../lib/tools.jar</systemPath>
+        </dependency>
+      </dependencies>
+    </profile>
+  </profiles>
+
+</project>
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceAudience.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceAudience.java
new file mode 100644
index 0000000..019874f
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceAudience.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.classification;
+
+import java.lang.annotation.Documented;
+
+/**
+ * Annotation to inform users of a package, class or method's intended audience.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public class InterfaceAudience {
+  /**
+   * Intended for use by any project or application.
+   */
+  @Documented public @interface Public {};
+  
+  /**
+   * Intended only for the project(s) specified in the annotation.
+   * For example, "Common", "HDFS", "MapReduce", "ZooKeeper", "HBase".
+   */
+  @Documented public @interface LimitedPrivate {
+    String[] value();
+  };
+  
+  /**
+   * Intended for use only within Hadoop itself.
+   */
+  @Documented public @interface Private {};
+
+  private InterfaceAudience() {} // Audience can't exist on its own
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java
new file mode 100644
index 0000000..0ebf949
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.classification;
+
+import java.lang.annotation.Documented;
+
+/**
+ * Annotation to inform users of how much to rely on a particular package,
+ * class or method not changing over time.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public class InterfaceStability {
+  /**
+   * Can evolve while retaining compatibility across minor release boundaries;
+   * can break compatibility only at a major release (i.e. at m.0).
+   */
+  @Documented
+  public @interface Stable {};
+  
+  /**
+   * Evolving, but can break compatibility at a minor release (i.e. m.x).
+   */
+  @Documented
+  public @interface Evolving {};
+  
+  /**
+   * No guarantee is provided as to reliability or stability across any
+   * level of release granularity.
+   */
+  @Documented
+  public @interface Unstable {};
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsJDiffDoclet.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsJDiffDoclet.java
new file mode 100644
index 0000000..66913ff
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsJDiffDoclet.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.classification.tools;
+
+import com.sun.javadoc.DocErrorReporter;
+import com.sun.javadoc.LanguageVersion;
+import com.sun.javadoc.RootDoc;
+
+import jdiff.JDiff;
+
+/**
+ * A <a href="http://java.sun.com/javase/6/docs/jdk/api/javadoc/doclet/">Doclet</a>
+ * for excluding elements that are annotated with
+ * {@link org.apache.hadoop.classification.InterfaceAudience.Private} or
+ * {@link org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate}.
+ * It delegates to the JDiff Doclet, and takes the same options.
+ */
+public class ExcludePrivateAnnotationsJDiffDoclet {
+  
+  public static LanguageVersion languageVersion() {
+    return LanguageVersion.JAVA_1_5;
+  }
+  
+  public static boolean start(RootDoc root) {
+    System.out.println(
+	ExcludePrivateAnnotationsJDiffDoclet.class.getSimpleName());
+    return JDiff.start(RootDocProcessor.process(root));
+  }
+  
+  public static int optionLength(String option) {
+    Integer length = StabilityOptions.optionLength(option);
+    if (length != null) {
+      return length;
+    }
+    return JDiff.optionLength(option);
+  }
+  
+  public static boolean validOptions(String[][] options,
+      DocErrorReporter reporter) {
+    StabilityOptions.validOptions(options, reporter);
+    String[][] filteredOptions = StabilityOptions.filterOptions(options);
+    return JDiff.validOptions(filteredOptions, reporter);
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java
new file mode 100644
index 0000000..62c44ea
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.classification.tools;
+
+import com.sun.javadoc.DocErrorReporter;
+import com.sun.javadoc.LanguageVersion;
+import com.sun.javadoc.RootDoc;
+import com.sun.tools.doclets.standard.Standard;
+
+/**
+ * A <a href="http://java.sun.com/javase/6/docs/jdk/api/javadoc/doclet/">Doclet</a>
+ * for excluding elements that are annotated with
+ * {@link org.apache.hadoop.classification.InterfaceAudience.Private} or
+ * {@link org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate}.
+ * It delegates to the Standard Doclet, and takes the same options.
+ */
+public class ExcludePrivateAnnotationsStandardDoclet {
+  
+  public static LanguageVersion languageVersion() {
+    return LanguageVersion.JAVA_1_5;
+  }
+  
+  public static boolean start(RootDoc root) {
+    System.out.println(
+	ExcludePrivateAnnotationsStandardDoclet.class.getSimpleName());
+    return Standard.start(RootDocProcessor.process(root));
+  }
+  
+  public static int optionLength(String option) {
+    Integer length = StabilityOptions.optionLength(option);
+    if (length != null) {
+      return length;
+    }
+    return Standard.optionLength(option);
+  }
+  
+  public static boolean validOptions(String[][] options,
+      DocErrorReporter reporter) {
+    StabilityOptions.validOptions(options, reporter);
+    String[][] filteredOptions = StabilityOptions.filterOptions(options);
+    return Standard.validOptions(filteredOptions, reporter);
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java
new file mode 100644
index 0000000..10d554d
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.classification.tools;
+
+import com.sun.javadoc.DocErrorReporter;
+import com.sun.javadoc.LanguageVersion;
+import com.sun.javadoc.RootDoc;
+import com.sun.tools.doclets.standard.Standard;
+
+/**
+ * A <a href="http://java.sun.com/javase/6/docs/jdk/api/javadoc/doclet/">Doclet</a>
+ * that only includes class-level elements that are annotated with
+ * {@link org.apache.hadoop.classification.InterfaceAudience.Public}.
+ * Class-level elements with no annotation are excluded.
+ * In addition, all elements that are annotated with
+ * {@link org.apache.hadoop.classification.InterfaceAudience.Private} or
+ * {@link org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate}
+ * are also excluded.
+ * It delegates to the Standard Doclet, and takes the same options.
+ */
+public class IncludePublicAnnotationsStandardDoclet {
+  
+  public static LanguageVersion languageVersion() {
+    return LanguageVersion.JAVA_1_5;
+  }
+  
+  public static boolean start(RootDoc root) {
+    System.out.println(
+        IncludePublicAnnotationsStandardDoclet.class.getSimpleName());
+    RootDocProcessor.treatUnannotatedClassesAsPrivate = true;
+    return Standard.start(RootDocProcessor.process(root));
+  }
+  
+  public static int optionLength(String option) {
+    Integer length = StabilityOptions.optionLength(option);
+    if (length != null) {
+      return length;
+    }
+    return Standard.optionLength(option);
+  }
+  
+  public static boolean validOptions(String[][] options,
+      DocErrorReporter reporter) {
+    StabilityOptions.validOptions(options, reporter);
+    String[][] filteredOptions = StabilityOptions.filterOptions(options);
+    return Standard.validOptions(filteredOptions, reporter);
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/RootDocProcessor.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/RootDocProcessor.java
new file mode 100644
index 0000000..a6ce035
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/RootDocProcessor.java
@@ -0,0 +1,247 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.classification.tools;
+
+import com.sun.javadoc.AnnotationDesc;
+import com.sun.javadoc.AnnotationTypeDoc;
+import com.sun.javadoc.ClassDoc;
+import com.sun.javadoc.ConstructorDoc;
+import com.sun.javadoc.Doc;
+import com.sun.javadoc.FieldDoc;
+import com.sun.javadoc.MethodDoc;
+import com.sun.javadoc.PackageDoc;
+import com.sun.javadoc.ProgramElementDoc;
+import com.sun.javadoc.RootDoc;
+
+import java.lang.reflect.Array;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.WeakHashMap;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Process the {@link RootDoc} by substituting with (nested) proxy objects that
+ * exclude elements with Private or LimitedPrivate annotations.
+ * <p>
+ * Based on code from http://www.sixlegs.com/blog/java/exclude-javadoc-tag.html.
+ */
+class RootDocProcessor {
+  
+  static String stability = StabilityOptions.UNSTABLE_OPTION;
+  static boolean treatUnannotatedClassesAsPrivate = false;
+  
+  public static RootDoc process(RootDoc root) {
+    return (RootDoc) process(root, RootDoc.class);
+  }
+  
+  private static Object process(Object obj, Class<?> type) { 
+    if (obj == null) { 
+      return null; 
+    } 
+    Class<?> cls = obj.getClass(); 
+    if (cls.getName().startsWith("com.sun.")) { 
+      return getProxy(obj); 
+    } else if (obj instanceof Object[]) { 
+      Class<?> componentType = type.isArray() ? type.getComponentType() 
+	  : cls.getComponentType();
+      Object[] array = (Object[]) obj;
+      Object[] newArray = (Object[]) Array.newInstance(componentType,
+	  array.length); 
+      for (int i = 0; i < array.length; ++i) {
+        newArray[i] = process(array[i], componentType);
+      }
+      return newArray;
+    } 
+    return obj; 
+  }
+  
+  private static Map<Object, Object> proxies =
+    new WeakHashMap<Object, Object>(); 
+  
+  private static Object getProxy(Object obj) { 
+    Object proxy = proxies.get(obj); 
+    if (proxy == null) { 
+      proxy = Proxy.newProxyInstance(obj.getClass().getClassLoader(), 
+        obj.getClass().getInterfaces(), new ExcludeHandler(obj)); 
+      proxies.put(obj, proxy); 
+    } 
+    return proxy; 
+  } 
+
+  private static class ExcludeHandler implements InvocationHandler {
+    private Object target;
+
+    public ExcludeHandler(Object target) {
+      this.target = target;
+    }
+    
+    @Override
+    public Object invoke(Object proxy, Method method, Object[] args)
+	throws Throwable {
+      String methodName = method.getName();
+      if (target instanceof Doc) {
+	if (methodName.equals("isIncluded")) {
+	  Doc doc = (Doc) target;
+	  return !exclude(doc) && doc.isIncluded();
+	}
+	if (target instanceof RootDoc) {
+	  if (methodName.equals("classes")) {
+	    return filter(((RootDoc) target).classes(), ClassDoc.class);
+	  } else if (methodName.equals("specifiedClasses")) {
+	    return filter(((RootDoc) target).specifiedClasses(), ClassDoc.class);
+	  } else if (methodName.equals("specifiedPackages")) {
+	    return filter(((RootDoc) target).specifiedPackages(), PackageDoc.class);
+	  }
+	} else if (target instanceof ClassDoc) {
+	  if (isFiltered(args)) {
+	    if (methodName.equals("methods")) {
+	      return filter(((ClassDoc) target).methods(true), MethodDoc.class);
+	    } else if (methodName.equals("fields")) {
+	      return filter(((ClassDoc) target).fields(true), FieldDoc.class);
+	    } else if (methodName.equals("innerClasses")) {
+	      return filter(((ClassDoc) target).innerClasses(true),
+		  ClassDoc.class);
+	    } else if (methodName.equals("constructors")) {
+	      return filter(((ClassDoc) target).constructors(true),
+		  ConstructorDoc.class);
+	    }
+	  }
+	} else if (target instanceof PackageDoc) {
+	  if (methodName.equals("allClasses")) {
+	    if (isFiltered(args)) {
+	      return filter(((PackageDoc) target).allClasses(true),
+		ClassDoc.class);
+	    } else {
+	      return filter(((PackageDoc) target).allClasses(), ClassDoc.class);  
+	    }
+	  } else if (methodName.equals("annotationTypes")) {
+	    return filter(((PackageDoc) target).annotationTypes(),
+		AnnotationTypeDoc.class);
+	  } else if (methodName.equals("enums")) {
+	    return filter(((PackageDoc) target).enums(),
+		ClassDoc.class);
+	  } else if (methodName.equals("errors")) {
+	    return filter(((PackageDoc) target).errors(),
+		ClassDoc.class);
+	  } else if (methodName.equals("exceptions")) {
+	    return filter(((PackageDoc) target).exceptions(),
+		ClassDoc.class);
+	  } else if (methodName.equals("interfaces")) {
+	    return filter(((PackageDoc) target).interfaces(),
+		ClassDoc.class);
+	  } else if (methodName.equals("ordinaryClasses")) {
+	    return filter(((PackageDoc) target).ordinaryClasses(),
+		ClassDoc.class);
+	  }
+	}
+      }
+
+      if (args != null) {
+	if (methodName.equals("compareTo") || methodName.equals("equals")
+	    || methodName.equals("overrides")
+	    || methodName.equals("subclassOf")) {
+	  args[0] = unwrap(args[0]);
+	}
+      }
+      try {
+	return process(method.invoke(target, args), method.getReturnType());
+      } catch (InvocationTargetException e) {
+	throw e.getTargetException();
+      }
+    }
+      
+    private static boolean exclude(Doc doc) {
+      AnnotationDesc[] annotations = null;
+      if (doc instanceof ProgramElementDoc) {
+	annotations = ((ProgramElementDoc) doc).annotations();
+      } else if (doc instanceof PackageDoc) {
+	annotations = ((PackageDoc) doc).annotations();
+      }
+      if (annotations != null) {
+	for (AnnotationDesc annotation : annotations) {
+	  String qualifiedTypeName = annotation.annotationType().qualifiedTypeName();
+	  if (qualifiedTypeName.equals(
+	        InterfaceAudience.Private.class.getCanonicalName())
+	    || qualifiedTypeName.equals(
+                InterfaceAudience.LimitedPrivate.class.getCanonicalName())) {
+	    return true;
+	  }
+	  if (stability.equals(StabilityOptions.EVOLVING_OPTION)) {
+	    if (qualifiedTypeName.equals(
+		InterfaceStability.Unstable.class.getCanonicalName())) {
+	      return true;
+	    }
+	  }
+	  if (stability.equals(StabilityOptions.STABLE_OPTION)) {
+	    if (qualifiedTypeName.equals(
+		InterfaceStability.Unstable.class.getCanonicalName())
+              || qualifiedTypeName.equals(
+  		InterfaceStability.Evolving.class.getCanonicalName())) {
+	      return true;
+	    }
+	  }
+	}
+        for (AnnotationDesc annotation : annotations) {
+          String qualifiedTypeName =
+            annotation.annotationType().qualifiedTypeName();
+          if (qualifiedTypeName.equals(
+              InterfaceAudience.Public.class.getCanonicalName())) {
+            return false;
+          }
+        }
+      }
+      if (treatUnannotatedClassesAsPrivate) {
+        return doc.isClass() || doc.isInterface() || doc.isAnnotationType();
+      }
+      return false;
+    }
+      
+    private static Object[] filter(Doc[] array, Class<?> componentType) {
+      if (array == null || array.length == 0) {
+	return array;
+      }
+      List<Object> list = new ArrayList<Object>(array.length);
+      for (Doc entry : array) {
+	if (!exclude(entry)) {
+	  list.add(process(entry, componentType));
+	}
+      }
+      return list.toArray((Object[]) Array.newInstance(componentType, list
+	  .size()));
+    }
+
+    private Object unwrap(Object proxy) {
+      if (proxy instanceof Proxy)
+	return ((ExcludeHandler) Proxy.getInvocationHandler(proxy)).target;
+      return proxy;
+    }
+      
+    private boolean isFiltered(Object[] args) {
+      return args != null && Boolean.TRUE.equals(args[0]);
+    }
+
+  }
+
+}
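To make the exclusion rules above concrete, here is a hedged illustration (the class names are hypothetical; only the annotations matter). Classes annotated Private or LimitedPrivate are always dropped from the generated Javadoc, while stability annotations are dropped only relative to the -stable/-evolving/-unstable option handled by StabilityOptions below:

    // Hypothetical classes illustrating how exclude(Doc) treats annotations.
    import org.apache.hadoop.classification.InterfaceAudience;
    import org.apache.hadoop.classification.InterfaceStability;

    @InterfaceAudience.Private           // always excluded from the generated Javadoc
    class InternalHelper {}

    @InterfaceAudience.Public
    @InterfaceStability.Evolving         // excluded under -stable, kept under -evolving/-unstable
    class EvolvingApi {}

    @InterfaceAudience.Public
    @InterfaceStability.Stable           // kept for all stability options
    class StableApi {}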
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java
new file mode 100644
index 0000000..dbce31e
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.classification.tools;
+
+import com.sun.javadoc.DocErrorReporter;
+
+import java.util.ArrayList;
+import java.util.List;
+
+class StabilityOptions {
+  public static final String STABLE_OPTION = "-stable";
+  public static final String EVOLVING_OPTION = "-evolving";
+  public static final String UNSTABLE_OPTION = "-unstable";
+
+  public static Integer optionLength(String option) {
+    String opt = option.toLowerCase();
+    if (opt.equals(UNSTABLE_OPTION)) return 1;
+    if (opt.equals(EVOLVING_OPTION)) return 1;
+    if (opt.equals(STABLE_OPTION)) return 1;
+    return null;
+  }
+
+  public static void validOptions(String[][] options,
+      DocErrorReporter reporter) {
+    for (int i = 0; i < options.length; i++) {
+      String opt = options[i][0].toLowerCase();
+      if (opt.equals(UNSTABLE_OPTION)) {
+	RootDocProcessor.stability = UNSTABLE_OPTION;
+      } else if (opt.equals(EVOLVING_OPTION)) {
+	RootDocProcessor.stability = EVOLVING_OPTION;
+      } else if (opt.equals(STABLE_OPTION)) {
+	RootDocProcessor.stability = STABLE_OPTION;	
+      }
+    }
+  }
+  
+  public static String[][] filterOptions(String[][] options) {
+    List<String[]> optionsList = new ArrayList<String[]>();
+    for (int i = 0; i < options.length; i++) {
+      if (!options[i][0].equalsIgnoreCase(UNSTABLE_OPTION)
+	  && !options[i][0].equalsIgnoreCase(EVOLVING_OPTION)
+	  && !options[i][0].equalsIgnoreCase(STABLE_OPTION)) {
+	optionsList.add(options[i]);
+      }
+    }
+    String[][] filteredOptions = new String[optionsList.size()][];
+    int i = 0;
+    for (String[] option : optionsList) {
+      filteredOptions[i++] = option;
+    }
+    return filteredOptions;
+  }
+
+}
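These two classes are used only indirectly, through a doclet entry point elsewhere in this module. The following is a hedged sketch of how such a doclet would typically wire them together; the doclet class name is illustrative, not part of this patch, and it must live in the same package because RootDocProcessor and StabilityOptions are package-private:

    package org.apache.hadoop.classification.tools;

    import com.sun.javadoc.DocErrorReporter;
    import com.sun.javadoc.RootDoc;
    import com.sun.tools.doclets.standard.Standard;

    // Illustrative doclet: filters Private/LimitedPrivate elements by routing
    // the RootDoc through RootDocProcessor, then delegates to the standard doclet.
    public class ExampleExcludePrivateDoclet {

      public static boolean start(RootDoc root) {
        return Standard.start(RootDocProcessor.process(root));
      }

      public static int optionLength(String option) {
        // Recognize the -stable/-evolving/-unstable options, otherwise defer
        // to the standard doclet's option handling.
        Integer length = StabilityOptions.optionLength(option);
        return (length != null) ? length : Standard.optionLength(option);
      }

      public static boolean validOptions(String[][] options, DocErrorReporter reporter) {
        StabilityOptions.validOptions(options, reporter);
        String[][] filtered = StabilityOptions.filterOptions(options);
        return Standard.validOptions(filtered, reporter);
      }
    }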
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/package-info.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/package-info.java
new file mode 100644
index 0000000..dc647c5
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/package-info.java
@@ -0,0 +1,22 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+@InterfaceAudience.LimitedPrivate({"Common", "Avro", "Chukwa", "HBase", "HDFS",
+  "Hive", "MapReduce", "Pig", "ZooKeeper"})
+package org.apache.hadoop.classification.tools;
+
+import org.apache.hadoop.classification.InterfaceAudience;
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/pom.xml b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/pom.xml
new file mode 100644
index 0000000..3fc6178
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/pom.xml
@@ -0,0 +1,96 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                      http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-project</artifactId>
+    <version>2.0.4-alpha</version>
+    <relativePath>../../hadoop-project</relativePath>
+  </parent>
+  <groupId>org.apache.hadoop</groupId>
+  <artifactId>hadoop-auth-examples</artifactId>
+  <version>2.0.4-alpha</version>
+  <packaging>war</packaging>
+
+  <name>Apache Hadoop Auth Examples</name>
+  <description>Apache Hadoop Auth Examples - Java HTTP SPNEGO</description>
+
+  <dependencies>
+    <dependency>
+      <groupId>javax.servlet</groupId>
+      <artifactId>servlet-api</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-auth</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>log4j</groupId>
+      <artifactId>log4j</artifactId>
+      <scope>runtime</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-log4j12</artifactId>
+      <scope>runtime</scope>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <artifactId>maven-war-plugin</artifactId>
+        <configuration>
+          <warName>hadoop-auth-examples</warName>
+        </configuration>
+      </plugin>
+      <plugin>
+        <artifactId>maven-deploy-plugin</artifactId>
+        <configuration>
+          <skip>true</skip>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>exec-maven-plugin</artifactId>
+        <executions>
+          <execution>
+            <goals>
+              <goal>java</goal>
+            </goals>
+          </execution>
+        </executions>
+        <configuration>
+          <mainClass>org.apache.hadoop.security.authentication.examples.WhoClient</mainClass>
+          <arguments>
+            <argument>${url}</argument>
+          </arguments>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+
+</project>
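With the exec-maven-plugin configuration above, the WhoClient example can presumably be launched from this module along these lines, where the target URL is illustrative and depends on where the example war is deployed:

  mvn exec:java -Durl=http://localhost:8080/hadoop-auth-examples/anonymous/who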
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/RequestLoggerFilter.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/RequestLoggerFilter.java
new file mode 100644
index 0000000..a9721c9
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/RequestLoggerFilter.java
@@ -0,0 +1,183 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.examples;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.servlet.Filter;
+import javax.servlet.FilterChain;
+import javax.servlet.FilterConfig;
+import javax.servlet.ServletException;
+import javax.servlet.ServletRequest;
+import javax.servlet.ServletResponse;
+import javax.servlet.http.Cookie;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletRequestWrapper;
+import javax.servlet.http.HttpServletResponse;
+import javax.servlet.http.HttpServletResponseWrapper;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Servlet filter that logs HTTP request/response headers.
+ */
+public class RequestLoggerFilter implements Filter {
+  private static Logger LOG = LoggerFactory.getLogger(RequestLoggerFilter.class);
+
+  @Override
+  public void init(FilterConfig filterConfig) throws ServletException {
+  }
+
+  @Override
+  public void doFilter(ServletRequest request, ServletResponse response, FilterChain filterChain)
+    throws IOException, ServletException {
+    if (!LOG.isDebugEnabled()) {
+      filterChain.doFilter(request, response);
+    }
+    else {
+      XHttpServletRequest xRequest = new XHttpServletRequest((HttpServletRequest) request);
+      XHttpServletResponse xResponse = new XHttpServletResponse((HttpServletResponse) response);
+      try {
+        LOG.debug(xRequest.getRequestInfo().toString());
+        filterChain.doFilter(xRequest, xResponse);
+      }
+      finally {
+        LOG.debug(xResponse.getResponseInfo().toString());
+      }
+    }
+  }
+
+  @Override
+  public void destroy() {
+  }
+
+  private static class XHttpServletRequest extends HttpServletRequestWrapper {
+
+    public XHttpServletRequest(HttpServletRequest request) {
+      super(request);
+    }
+
+    public StringBuffer getRequestInfo() {
+      StringBuffer sb = new StringBuffer(512);
+      sb.append("\n").append("> ").append(getMethod()).append(" ").append(getRequestURL());
+      if (getQueryString() != null) {
+        sb.append("?").append(getQueryString());
+      }
+      sb.append("\n");
+      Enumeration names = getHeaderNames();
+      while (names.hasMoreElements()) {
+        String name = (String) names.nextElement();
+        Enumeration values = getHeaders(name);
+        while (values.hasMoreElements()) {
+          String value = (String) values.nextElement();
+          sb.append("> ").append(name).append(": ").append(value).append("\n");
+        }
+      }
+      sb.append(">");
+      return sb;
+    }
+  }
+
+  private static class XHttpServletResponse extends HttpServletResponseWrapper {
+    private Map<String, List<String>> headers = new HashMap<String, List<String>>();
+    private int status;
+    private String message;
+
+    public XHttpServletResponse(HttpServletResponse response) {
+      super(response);
+    }
+
+    private List<String> getHeaderValues(String name, boolean reset) {
+      List<String> values = headers.get(name);
+      if (reset || values == null) {
+        values = new ArrayList<String>();
+        headers.put(name, values);
+      }
+      return values;
+    }
+
+    @Override
+    public void addCookie(Cookie cookie) {
+      super.addCookie(cookie);
+      List<String> cookies = getHeaderValues("Set-Cookie", false);
+      cookies.add(cookie.getName() + "=" + cookie.getValue());
+    }
+
+    @Override
+    public void sendError(int sc, String msg) throws IOException {
+      super.sendError(sc, msg);
+      status = sc;
+      message = msg;
+    }
+
+
+    @Override
+    public void sendError(int sc) throws IOException {
+      super.sendError(sc);
+      status = sc;
+    }
+
+    @Override
+    public void setStatus(int sc) {
+      super.setStatus(sc);
+      status = sc;
+    }
+
+    @Override
+    public void setStatus(int sc, String msg) {
+      super.setStatus(sc, msg);
+      status = sc;
+      message = msg;
+    }
+
+    @Override
+    public void setHeader(String name, String value) {
+      super.setHeader(name, value);
+      List<String> values = getHeaderValues(name, true);
+      values.add(value);
+    }
+
+    @Override
+    public void addHeader(String name, String value) {
+      super.addHeader(name, value);
+      List<String> values = getHeaderValues(name, false);
+      values.add(value);
+    }
+
+    public StringBuffer getResponseInfo() {
+      if (status == 0) {
+        status = 200;
+        message = "OK";
+      }
+      StringBuffer sb = new StringBuffer(512);
+      sb.append("\n").append("< ").append("status code: ").append(status);
+      if (message != null) {
+        sb.append(", message: ").append(message);
+      }
+      sb.append("\n");
+      for (Map.Entry<String, List<String>> entry : headers.entrySet()) {
+        for (String value : entry.getValue()) {
+          sb.append("< ").append(entry.getKey()).append(": ").append(value).append("\n");
+        }
+      }
+      sb.append("<");
+      return sb;
+    }
+  }
+}
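For orientation, when debug logging is enabled the entries assembled by getRequestInfo() and getResponseInfo() above take roughly this shape (header names and values are illustrative):

  > GET http://localhost:8080/hadoop-auth-examples/anonymous/who
  > Accept: */*
  >

  < status code: 200, message: OK
  < Set-Cookie: hadoop.auth="..."
  <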
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/WhoClient.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/WhoClient.java
new file mode 100644
index 0000000..2299ae1
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/WhoClient.java
@@ -0,0 +1,57 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.examples;
+
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.net.HttpURLConnection;
+import java.net.URL;
+
+/**
+ * Example that uses <code>AuthenticatedURL</code>.
+ */
+public class WhoClient {
+
+  public static void main(String[] args) {
+    try {
+      if (args.length != 1) {
+        System.err.println("Usage: <URL>");
+        System.exit(-1);
+      }
+      AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+      URL url = new URL(args[0]);
+      HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
+      System.out.println();
+      System.out.println("Token value: " + token);
+      System.out.println("Status code: " + conn.getResponseCode() + " " + conn.getResponseMessage());
+      System.out.println();
+      if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
+        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
+        String line = reader.readLine();
+        while (line != null) {
+          System.out.println(line);
+          line = reader.readLine();
+        }
+        reader.close();
+      }
+      System.out.println();
+    }
+    catch (Exception ex) {
+      System.err.println("ERROR: " + ex.getMessage());
+      System.exit(-1);
+    }
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/WhoServlet.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/WhoServlet.java
new file mode 100644
index 0000000..aae3813
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/java/org/apache/hadoop/security/authentication/examples/WhoServlet.java
@@ -0,0 +1,43 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.examples;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.Writer;
+import java.text.MessageFormat;
+
+/**
+ * Example servlet that returns the user and principal of the request.
+ */
+public class WhoServlet extends HttpServlet {
+
+  @Override
+  protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
+    resp.setContentType("text/plain");
+    resp.setStatus(HttpServletResponse.SC_OK);
+    String user = req.getRemoteUser();
+    String principal = (req.getUserPrincipal() != null) ? req.getUserPrincipal().getName() : null;
+    Writer writer = resp.getWriter();
+    writer.write(MessageFormat.format("You are: user[{0}] principal[{1}]\n", user, principal));
+  }
+
+  @Override
+  protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
+    doGet(req, resp);
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/resources/log4j.properties b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/resources/log4j.properties
new file mode 100644
index 0000000..5fa4020
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/resources/log4j.properties
@@ -0,0 +1,19 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License. See accompanying LICENSE file.
+#
+log4j.appender.test=org.apache.log4j.ConsoleAppender
+log4j.appender.test.Target=System.out
+log4j.appender.test.layout=org.apache.log4j.PatternLayout
+log4j.appender.test.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
+
+log4j.logger.org.apache.hadoop.security.authentication=DEBUG, test
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml
new file mode 100644
index 0000000..ebd0768
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml
@@ -0,0 +1,117 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+
+  <servlet>
+    <servlet-name>whoServlet</servlet-name>
+    <servlet-class>org.apache.hadoop.security.authentication.examples.WhoServlet</servlet-class>
+  </servlet>
+
+  <servlet-mapping>
+    <servlet-name>whoServlet</servlet-name>
+    <url-pattern>/anonymous/who</url-pattern>
+  </servlet-mapping>
+
+  <servlet-mapping>
+    <servlet-name>whoServlet</servlet-name>
+    <url-pattern>/simple/who</url-pattern>
+  </servlet-mapping>
+
+  <servlet-mapping>
+    <servlet-name>whoServlet</servlet-name>
+    <url-pattern>/kerberos/who</url-pattern>
+  </servlet-mapping>
+
+  <filter>
+    <filter-name>requestLoggerFilter</filter-name>
+    <filter-class>org.apache.hadoop.security.authentication.examples.RequestLoggerFilter</filter-class>
+  </filter>
+
+  <filter>
+    <filter-name>anonymousFilter</filter-name>
+    <filter-class>org.apache.hadoop.security.authentication.server.AuthenticationFilter</filter-class>
+    <init-param>
+      <param-name>type</param-name>
+      <param-value>simple</param-value>
+    </init-param>
+    <init-param>
+      <param-name>simple.anonymous.allowed</param-name>
+      <param-value>true</param-value>
+    </init-param>
+    <init-param>
+      <param-name>token.validity</param-name>
+      <param-value>30</param-value>
+    </init-param>
+  </filter>
+
+  <filter>
+    <filter-name>simpleFilter</filter-name>
+    <filter-class>org.apache.hadoop.security.authentication.server.AuthenticationFilter</filter-class>
+    <init-param>
+      <param-name>type</param-name>
+      <param-value>simple</param-value>
+    </init-param>
+    <init-param>
+      <param-name>simple.anonymous.allowed</param-name>
+      <param-value>false</param-value>
+    </init-param>
+    <init-param>
+      <param-name>token.validity</param-name>
+      <param-value>30</param-value>
+    </init-param>
+  </filter>
+
+  <filter>
+    <filter-name>kerberosFilter</filter-name>
+    <filter-class>org.apache.hadoop.security.authentication.server.AuthenticationFilter</filter-class>
+    <init-param>
+      <param-name>type</param-name>
+      <param-value>kerberos</param-value>
+    </init-param>
+    <init-param>
+      <param-name>kerberos.principal</param-name>
+      <param-value>HTTP/localhost@LOCALHOST</param-value>
+    </init-param>
+    <init-param>
+      <param-name>kerberos.keytab</param-name>
+      <param-value>/tmp/my.keytab</param-value>
+    </init-param>
+    <init-param>
+      <param-name>token.validity</param-name>
+      <param-value>30</param-value>
+    </init-param>
+  </filter>
+
+  <filter-mapping>
+    <filter-name>requestLoggerFilter</filter-name>
+    <url-pattern>/*</url-pattern>
+  </filter-mapping>
+
+  <filter-mapping>
+    <filter-name>anonymousFilter</filter-name>
+    <url-pattern>/anonymous/*</url-pattern>
+  </filter-mapping>
+
+  <filter-mapping>
+    <filter-name>simpleFilter</filter-name>
+    <url-pattern>/simple/*</url-pattern>
+  </filter-mapping>
+
+  <filter-mapping>
+    <filter-name>kerberosFilter</filter-name>
+    <url-pattern>/kerberos/*</url-pattern>
+  </filter-mapping>
+
+</web-app>
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/annonymous/index.html b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/annonymous/index.html
new file mode 100644
index 0000000..73294e1
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/annonymous/index.html
@@ -0,0 +1,18 @@
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<html>
+<body>
+<h1>Hello Hadoop Auth Pseudo/Simple Authentication with anonymous users!</h1>
+</body>
+</html>
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/index.html b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/index.html
new file mode 100644
index 0000000..7c09261
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/index.html
@@ -0,0 +1,18 @@
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<html>
+<body>
+<h1>Hello Hadoop Auth Examples!</h1>
+</body>
+</html>
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/kerberos/index.html b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/kerberos/index.html
new file mode 100644
index 0000000..fec01f6
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/kerberos/index.html
@@ -0,0 +1,18 @@
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<html>
+<body>
+<h1>Hello Hadoop Auth Kerberos SPNEGO Authentication!</h1>
+</body>
+</html>
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/simple/index.html b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/simple/index.html
new file mode 100644
index 0000000..7981219
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth-examples/src/main/webapp/simple/index.html
@@ -0,0 +1,18 @@
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<html>
+<body>
+<h1>Hello Hadoop Auth Pseudo/Simple Authentication!</h1>
+</body>
+</html>
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/BUILDING.txt b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/BUILDING.txt
new file mode 100644
index 0000000..b81b71c
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/BUILDING.txt
@@ -0,0 +1,20 @@
+
+Build instructions for Hadoop Auth
+
+Same as for Hadoop.
+
+For more details refer to the Hadoop Auth documentation pages.
+
+-----------------------------------------------------------------------------
+Caveats:
+
+* Hadoop Auth has a profile to enable Kerberos testcases (testKerberos)
+
+  To run the Kerberos testcases, a KDC, two Kerberos principals, and a keytab
+  file are required (refer to the Hadoop Auth documentation pages for details).
+
+* Hadoop Auth does not have a distribution profile (dist)
+
+* Hadoop Auth does not have a native code profile (native)
+
+-----------------------------------------------------------------------------
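For example, once the KDC, principals, and keytab are in place, the Kerberos testcases would typically be run by activating that profile:

  mvn test -PtestKerberos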
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/README.txt b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/README.txt
new file mode 100644
index 0000000..efa95dd
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/README.txt
@@ -0,0 +1,15 @@
+Hadoop Auth, Java HTTP SPNEGO
+
+Hadoop Auth is a Java library consisting of client and server
+components that enable Kerberos SPNEGO authentication for HTTP.
+
+The client component is the AuthenticatedURL class.
+
+The server component is the AuthenticationFilter servlet filter class.
+
+Support for authentication mechanisms is pluggable in both the client and
+the server components via interfaces.
+
+In addition to Kerberos SPNEGO, Hadoop Auth also supports Pseudo/Simple
+authentication (trusting the value of the query string parameter
+'user.name').
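A minimal client-side sketch of the pattern described above, assuming the example war from this patch is deployed locally (the class name and URL are illustrative):

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
    import org.apache.hadoop.security.authentication.client.AuthenticationException;

    public class AuthClientSketch {
      public static void main(String[] args) throws IOException, AuthenticationException {
        // The token starts empty; the server fills it in on the first request
        // and it can be reused for follow-up requests until it expires.
        AuthenticatedURL.Token token = new AuthenticatedURL.Token();
        URL url = new URL("http://localhost:8080/hadoop-auth-examples/kerberos/who"); // illustrative
        HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
        System.out.println("Status: " + conn.getResponseCode() + " " + conn.getResponseMessage());
      }
    }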
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/pom.xml b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/pom.xml
new file mode 100644
index 0000000..f134ade
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/pom.xml
@@ -0,0 +1,214 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                      http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-project</artifactId>
+    <version>2.0.4-alpha</version>
+    <relativePath>../../hadoop-project</relativePath>
+  </parent>
+  <groupId>org.apache.hadoop</groupId>
+  <artifactId>hadoop-auth</artifactId>
+  <version>2.0.4-alpha</version>
+  <packaging>jar</packaging>
+
+  <name>Apache Hadoop Auth</name>
+  <description>Apache Hadoop Auth - Java HTTP SPNEGO</description>
+
+  <properties>
+    <maven.build.timestamp.format>yyyyMMdd</maven.build.timestamp.format>
+    <kerberos.realm>LOCALHOST</kerberos.realm>
+  </properties>
+
+  <dependencies>
+    <dependency>
+      <!-- Used, even though 'mvn dependency:analyze' doesn't find it -->
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-annotations</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.mockito</groupId>
+      <artifactId>mockito-all</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.mortbay.jetty</groupId>
+      <artifactId>jetty</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>javax.servlet</groupId>
+      <artifactId>servlet-api</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>commons-codec</groupId>
+      <artifactId>commons-codec</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>log4j</groupId>
+      <artifactId>log4j</artifactId>
+      <scope>runtime</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-log4j12</artifactId>
+      <scope>runtime</scope>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <testResources>
+      <testResource>
+        <directory>${basedir}/src/test/resources</directory>
+        <filtering>true</filtering>
+        <includes>
+          <include>krb5.conf</include>
+        </includes>
+      </testResource>
+    </testResources>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-surefire-plugin</artifactId>
+        <configuration>
+          <forkMode>always</forkMode>
+          <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
+          <systemPropertyVariables>
+            <java.security.krb5.conf>${project.build.directory}/test-classes/krb5.conf</java.security.krb5.conf>
+            <kerberos.realm>${kerberos.realm}</kerberos.realm>
+          </systemPropertyVariables>
+          <excludes>
+            <exclude>**/${test.exclude}.java</exclude>
+            <exclude>${test.exclude.pattern}</exclude>
+            <exclude>**/TestKerberosAuth*.java</exclude>
+            <exclude>**/TestAltKerberosAuth*.java</exclude>
+            <exclude>**/Test*$*.java</exclude>
+          </excludes>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-source-plugin</artifactId>
+        <executions>
+          <execution>
+            <phase>prepare-package</phase>
+            <goals>
+              <goal>jar</goal>
+            </goals>
+          </execution>
+        </executions>
+        <configuration>
+          <attach>true</attach>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+
+  <profiles>
+    <profile>
+      <id>testKerberos</id>
+      <activation>
+        <activeByDefault>false</activeByDefault>
+      </activation>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-surefire-plugin</artifactId>
+            <configuration>
+              <forkMode>always</forkMode>
+              <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
+              <systemPropertyVariables>
+                <java.security.krb5.conf>${project.build.directory}/test-classes/krb5.conf</java.security.krb5.conf>
+                <kerberos.realm>${kerberos.realm}</kerberos.realm>
+              </systemPropertyVariables>
+              <excludes>
+                <exclude>**/${test.exclude}.java</exclude>
+                <exclude>${test.exclude.pattern}</exclude>
+                <exclude>**/Test*$*.java</exclude>
+              </excludes>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>docs</id>
+      <activation>
+        <activeByDefault>false</activeByDefault>
+      </activation>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-site-plugin</artifactId>
+            <executions>
+              <execution>
+                <phase>package</phase>
+                <goals>
+                  <goal>site</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-project-info-reports-plugin</artifactId>
+            <executions>
+              <execution>
+                <configuration>
+                  <dependencyLocationsEnabled>false</dependencyLocationsEnabled>
+                </configuration>
+                <phase>package</phase>
+                <goals>
+                  <goal>dependencies</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-javadoc-plugin</artifactId>
+            <executions>
+              <execution>
+                <phase>package</phase>
+                <goals>
+                  <goal>javadoc</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
+</project>
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticatedURL.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticatedURL.java
new file mode 100644
index 0000000..a43a7c9
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticatedURL.java
@@ -0,0 +1,293 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * The {@link AuthenticatedURL} class enables the use of the JDK {@link URL} class
+ * against HTTP endpoints protected with the {@link AuthenticationFilter}.
+ * <p/>
+ * The authentication mechanisms supported by default are Hadoop Simple authentication
+ * (also known as pseudo authentication) and Kerberos SPNEGO authentication.
+ * <p/>
+ * Additional authentication mechanisms can be supported via {@link Authenticator} implementations.
+ * <p/>
+ * The default {@link Authenticator} is the {@link KerberosAuthenticator} class which supports
+ * automatic fallback from Kerberos SPNEGO to Hadoop Simple authentication.
+ * <p/>
+ * <code>AuthenticatedURL</code> instances are not thread-safe.
+ * <p/>
+ * The usage pattern of the {@link AuthenticatedURL} is:
+ * <p/>
+ * <pre>
+ *
+ * // establishing an initial connection
+ *
+ * URL url = new URL("http://foo:8080/bar");
+ * AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+ * AuthenticatedURL aUrl = new AuthenticatedURL();
+ * HttpURLConnection conn = aUrl.openConnection(url, token);
+ * ....
+ * // use the 'conn' instance
+ * ....
+ *
+ * // establishing a follow up connection using a token from the previous connection
+ *
+ * HttpURLConnection conn = aUrl.openConnection(url, token);
+ * ....
+ * // use the 'conn' instance
+ * ....
+ *
+ * </pre>
+ */
+public class AuthenticatedURL {
+
+  /**
+   * Name of the HTTP cookie used for the authentication token between the client and the server.
+   */
+  public static final String AUTH_COOKIE = "hadoop.auth";
+
+  private static final String AUTH_COOKIE_EQ = AUTH_COOKIE + "=";
+
+  /**
+   * Client side authentication token.
+   */
+  public static class Token {
+
+    private String token;
+
+    /**
+     * Creates a token.
+     */
+    public Token() {
+    }
+
+    /**
+     * Creates a token using an existing string representation of the token.
+     *
+     * @param tokenStr string representation of the token.
+     */
+    public Token(String tokenStr) {
+      if (tokenStr == null) {
+        throw new IllegalArgumentException("tokenStr cannot be null");
+      }
+      set(tokenStr);
+    }
+
+    /**
+     * Returns whether a token from the server has been set.
+     *
+     * @return whether a token from the server has been set.
+     */
+    public boolean isSet() {
+      return token != null;
+    }
+
+    /**
+     * Sets a token.
+     *
+     * @param tokenStr string representation of the token.
+     */
+    void set(String tokenStr) {
+      token = tokenStr;
+    }
+
+    /**
+     * Returns the string representation of the token.
+     *
+     * @return the string representation of the token.
+     */
+    @Override
+    public String toString() {
+      return token;
+    }
+
+    /**
+     * Return the hashcode for the token.
+     *
+     * @return the hashcode for the token.
+     */
+    @Override
+    public int hashCode() {
+      return (token != null) ? token.hashCode() : 0;
+    }
+
+    /**
+     * Returns whether two token instances are equal.
+     *
+     * @param o the other token instance.
+     *
+     * @return whether this instance and the other instance are equal.
+     */
+    @Override
+    public boolean equals(Object o) {
+      boolean eq = false;
+      if (o instanceof Token) {
+        Token other = (Token) o;
+        eq = (token == null && other.token == null) || (token != null && this.token.equals(other.token));
+      }
+      return eq;
+    }
+  }
+
+  private static Class<? extends Authenticator> DEFAULT_AUTHENTICATOR = KerberosAuthenticator.class;
+
+  /**
+   * Sets the default {@link Authenticator} class to use when an {@link AuthenticatedURL} instance
+   * is created without specifying an authenticator.
+   *
+   * @param authenticator the authenticator class to use as default.
+   */
+  public static void setDefaultAuthenticator(Class<? extends Authenticator> authenticator) {
+    DEFAULT_AUTHENTICATOR = authenticator;
+  }
+
+  /**
+   * Returns the default {@link Authenticator} class to use when an {@link AuthenticatedURL} instance
+   * is created without specifying an authenticator.
+   *
+   * @return the authenticator class to use as default.
+   */
+  public static Class<? extends Authenticator> getDefaultAuthenticator() {
+    return DEFAULT_AUTHENTICATOR;
+  }
+
+  private Authenticator authenticator;
+  private ConnectionConfigurator connConfigurator;
+
+  /**
+   * Creates an {@link AuthenticatedURL}.
+   */
+  public AuthenticatedURL() {
+    this(null);
+  }
+
+  /**
+   * Creates an <code>AuthenticatedURL</code>.
+   *
+   * @param authenticator the {@link Authenticator} instance to use, if <code>null</code> a {@link
+   * KerberosAuthenticator} is used.
+   */
+  public AuthenticatedURL(Authenticator authenticator) {
+    this(authenticator, null);
+  }
+
+  /**
+   * Creates an <code>AuthenticatedURL</code>.
+   *
+   * @param authenticator the {@link Authenticator} instance to use, if <code>null</code> a {@link
+   * KerberosAuthenticator} is used.
+   * @param connConfigurator a connection configurator.
+   */
+  public AuthenticatedURL(Authenticator authenticator,
+                          ConnectionConfigurator connConfigurator) {
+    try {
+      this.authenticator = (authenticator != null) ? authenticator : DEFAULT_AUTHENTICATOR.newInstance();
+    } catch (Exception ex) {
+      throw new RuntimeException(ex);
+    }
+    this.connConfigurator = connConfigurator;
+    this.authenticator.setConnectionConfigurator(connConfigurator);
+  }
+
+  /**
+   * Returns an authenticated {@link HttpURLConnection}.
+   *
+   * @param url the URL to connect to. Only HTTP/S URLs are supported.
+   * @param token the authentication token being used for the user.
+   *
+   * @return an authenticated {@link HttpURLConnection}.
+   *
+   * @throws IOException if an IO error occurred.
+   * @throws AuthenticationException if an authentication exception occurred.
+   */
+  public HttpURLConnection openConnection(URL url, Token token) throws IOException, AuthenticationException {
+    if (url == null) {
+      throw new IllegalArgumentException("url cannot be NULL");
+    }
+    if (!url.getProtocol().equalsIgnoreCase("http") && !url.getProtocol().equalsIgnoreCase("https")) {
+      throw new IllegalArgumentException("url must be for a HTTP or HTTPS resource");
+    }
+    if (token == null) {
+      throw new IllegalArgumentException("token cannot be NULL");
+    }
+    authenticator.authenticate(url, token);
+    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+    if (connConfigurator != null) {
+      conn = connConfigurator.configure(conn);
+    }
+    injectToken(conn, token);
+    return conn;
+  }
+
+  /**
+   * Helper method that injects an authentication token to send with a connection.
+   *
+   * @param conn connection to inject the authentication token into.
+   * @param token authentication token to inject.
+   */
+  public static void injectToken(HttpURLConnection conn, Token token) {
+    String t = token.token;
+    if (t != null) {
+      if (!t.startsWith("\"")) {
+        t = "\"" + t + "\"";
+      }
+      conn.addRequestProperty("Cookie", AUTH_COOKIE_EQ + t);
+    }
+  }
+
+  /**
+   * Helper method that extracts an authentication token received from a connection.
+   * <p/>
+   * This method is used by {@link Authenticator} implementations.
+   *
+   * @param conn connection to extract the authentication token from.
+   * @param token the authentication token.
+   *
+   * @throws IOException if an IO error occurred.
+   * @throws AuthenticationException if an authentication exception occurred.
+   */
+  public static void extractToken(HttpURLConnection conn, Token token) throws IOException, AuthenticationException {
+    if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
+      Map<String, List<String>> headers = conn.getHeaderFields();
+      List<String> cookies = headers.get("Set-Cookie");
+      if (cookies != null) {
+        for (String cookie : cookies) {
+          if (cookie.startsWith(AUTH_COOKIE_EQ)) {
+            String value = cookie.substring(AUTH_COOKIE_EQ.length());
+            int separator = value.indexOf(";");
+            if (separator > -1) {
+              value = value.substring(0, separator);
+            }
+            if (value.length() > 0) {
+              token.set(value);
+            }
+          }
+        }
+      }
+    } else {
+      token.set(null);
+      throw new AuthenticationException("Authentication failed, status: " + conn.getResponseCode() +
+                                        ", message: " + conn.getResponseMessage());
+    }
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java
new file mode 100644
index 0000000..13632fb
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java
@@ -0,0 +1,50 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+/**
+ * Exception thrown when an authentication error occurs.
+ */
+public class AuthenticationException extends Exception {
+  
+  static final long serialVersionUID = 0;
+
+  /**
+   * Creates an {@link AuthenticationException}.
+   *
+   * @param cause original exception.
+   */
+  public AuthenticationException(Throwable cause) {
+    super(cause);
+  }
+
+  /**
+   * Creates an {@link AuthenticationException}.
+   *
+   * @param msg exception message.
+   */
+  public AuthenticationException(String msg) {
+    super(msg);
+  }
+
+  /**
+   * Creates an {@link AuthenticationException}.
+   *
+   * @param msg exception message.
+   * @param cause original exception.
+   */
+  public AuthenticationException(String msg, Throwable cause) {
+    super(msg, cause);
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/Authenticator.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/Authenticator.java
new file mode 100644
index 0000000..e7bae4a
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/Authenticator.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+
+import java.io.IOException;
+import java.net.URL;
+
+/**
+ * Interface for client authentication mechanisms.
+ * <p/>
+ * Implementations are use-once instances; they don't need to be thread-safe.
+ */
+public interface Authenticator {
+
+  /**
+   * Sets a {@link ConnectionConfigurator} instance to use for
+   * configuring connections.
+   *
+   * @param configurator the {@link ConnectionConfigurator} instance.
+   */
+  public void setConnectionConfigurator(ConnectionConfigurator configurator);
+
+  /**
+   * Authenticates against a URL and returns an {@link AuthenticatedURL.Token} to be
+   * used by subsequent requests.
+   *
+   * @param url the URL to authenticate against.
+   * @param token the authentication token being used for the user.
+   *
+   * @throws IOException if an IO error occurred.
+   * @throws AuthenticationException if an authentication error occurred.
+   */
+  public void authenticate(URL url, AuthenticatedURL.Token token) throws IOException, AuthenticationException;
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/ConnectionConfigurator.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/ConnectionConfigurator.java
new file mode 100644
index 0000000..1eaecdd
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/ConnectionConfigurator.java
@@ -0,0 +1,36 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+
+/**
+ * Interface to configure {@link HttpURLConnection} created by
+ * {@link AuthenticatedURL} instances.
+ */
+public interface ConnectionConfigurator {
+
+  /**
+   * Configures the given {@link HttpURLConnection} instance.
+   *
+   * @param conn the {@link HttpURLConnection} instance to configure.
+   * @return the configured {@link HttpURLConnection} instance.
+   * 
+   * @throws IOException if an IO error occurred.
+   */
+  public HttpURLConnection configure(HttpURLConnection conn) throws IOException;
+
+}
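As an illustration of the contract above, the following is a minimal sketch of an implementation that applies fixed connect/read timeouts to every connection handed to it; the 30-second values are arbitrary and purely illustrative.

  import java.io.IOException;
  import java.net.HttpURLConnection;

  import org.apache.hadoop.security.authentication.client.ConnectionConfigurator;

  // Minimal sketch: applies fixed, illustrative timeouts to every connection.
  public class TimeoutConnectionConfigurator implements ConnectionConfigurator {

    @Override
    public HttpURLConnection configure(HttpURLConnection conn) throws IOException {
      conn.setConnectTimeout(30 * 1000); // 30 seconds, arbitrary value
      conn.setReadTimeout(30 * 1000);
      return conn;
    }
  }

An instance would typically be handed to an Authenticator through setConnectionConfigurator(...) before authenticate(...) is called, so every connection opened during the authentication sequence is configured the same way.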
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
new file mode 100644
index 0000000..4450c9c
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
@@ -0,0 +1,312 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+import org.apache.commons.codec.binary.Base64;
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.ietf.jgss.GSSContext;
+import org.ietf.jgss.GSSManager;
+import org.ietf.jgss.GSSName;
+import org.ietf.jgss.Oid;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.Subject;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import javax.security.auth.login.LoginContext;
+import javax.security.auth.login.LoginException;
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.security.AccessControlContext;
+import java.security.AccessController;
+import java.security.PrivilegedActionException;
+import java.security.PrivilegedExceptionAction;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * The {@link KerberosAuthenticator} implements the Kerberos SPNEGO authentication sequence.
+ * <p/>
+ * It uses the default principal for the Kerberos cache (normally set via kinit).
+ * <p/>
+ * It falls back to the {@link PseudoAuthenticator} if the HTTP endpoint does not trigger an SPNEGO authentication
+ * sequence.
+ */
+public class KerberosAuthenticator implements Authenticator {
+  
+  private static Logger LOG = LoggerFactory.getLogger(
+      KerberosAuthenticator.class);
+
+  /**
+   * HTTP header used by the SPNEGO server endpoint during an authentication sequence.
+   */
+  public static final String WWW_AUTHENTICATE = "WWW-Authenticate";
+
+  /**
+   * HTTP header used by the SPNEGO client endpoint during an authentication sequence.
+   */
+  public static final String AUTHORIZATION = "Authorization";
+
+  /**
+   * HTTP header prefix used by the SPNEGO client/server endpoints during an authentication sequence.
+   */
+  public static final String NEGOTIATE = "Negotiate";
+
+  private static final String AUTH_HTTP_METHOD = "OPTIONS";
+
+  /*
+  * Defines the Kerberos configuration that will be used to obtain the Kerberos principal from the
+  * Kerberos cache.
+  */
+  private static class KerberosConfiguration extends Configuration {
+
+    private static final String OS_LOGIN_MODULE_NAME;
+    private static final boolean windows = System.getProperty("os.name").startsWith("Windows");
+
+    static {
+      if (windows) {
+        OS_LOGIN_MODULE_NAME = "com.sun.security.auth.module.NTLoginModule";
+      } else {
+        OS_LOGIN_MODULE_NAME = "com.sun.security.auth.module.UnixLoginModule";
+      }
+    }
+
+    private static final AppConfigurationEntry OS_SPECIFIC_LOGIN =
+      new AppConfigurationEntry(OS_LOGIN_MODULE_NAME,
+                                AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
+                                new HashMap<String, String>());
+
+    private static final Map<String, String> USER_KERBEROS_OPTIONS = new HashMap<String, String>();
+
+    static {
+      USER_KERBEROS_OPTIONS.put("doNotPrompt", "true");
+      USER_KERBEROS_OPTIONS.put("useTicketCache", "true");
+      USER_KERBEROS_OPTIONS.put("renewTGT", "true");
+      String ticketCache = System.getenv("KRB5CCNAME");
+      if (ticketCache != null) {
+        USER_KERBEROS_OPTIONS.put("ticketCache", ticketCache);
+      }
+    }
+
+    private static final AppConfigurationEntry USER_KERBEROS_LOGIN =
+      new AppConfigurationEntry(KerberosUtil.getKrb5LoginModuleName(),
+                                AppConfigurationEntry.LoginModuleControlFlag.OPTIONAL,
+                                USER_KERBEROS_OPTIONS);
+
+    private static final AppConfigurationEntry[] USER_KERBEROS_CONF =
+      new AppConfigurationEntry[]{OS_SPECIFIC_LOGIN, USER_KERBEROS_LOGIN};
+
+    @Override
+    public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
+      return USER_KERBEROS_CONF;
+    }
+  }
+  
+  private URL url;
+  private HttpURLConnection conn;
+  private Base64 base64;
+  private ConnectionConfigurator connConfigurator;
+
+  /**
+   * Sets a {@link ConnectionConfigurator} instance to use for
+   * configuring connections.
+   *
+   * @param configurator the {@link ConnectionConfigurator} instance.
+   */
+  @Override
+  public void setConnectionConfigurator(ConnectionConfigurator configurator) {
+    connConfigurator = configurator;
+  }
+
+  /**
+   * Performs SPNEGO authentication against the specified URL.
+   * <p/>
+   * If a token is given it does a NOP and returns the given token.
+   * <p/>
+   * If no token is given, it will perform the SPNEGO authentication sequence using an
+   * HTTP <code>OPTIONS</code> request.
+   *
+   * @param url the URL to authenticate against.
+   * @param token the authentication token being used for the user.
+   *
+   * @throws IOException if an IO error occurred.
+   * @throws AuthenticationException if an authentication error occurred.
+   */
+  @Override
+  public void authenticate(URL url, AuthenticatedURL.Token token)
+    throws IOException, AuthenticationException {
+    if (!token.isSet()) {
+      this.url = url;
+      base64 = new Base64(0);
+      conn = (HttpURLConnection) url.openConnection();
+      if (connConfigurator != null) {
+        conn = connConfigurator.configure(conn);
+      }
+      conn.setRequestMethod(AUTH_HTTP_METHOD);
+      conn.connect();
+      
+      if (conn.getRequestProperty(AUTHORIZATION) != null && conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
+        LOG.debug("JDK performed authentication on our behalf.");
+        // If the JDK already did the SPNEGO back-and-forth for
+        // us, just pull out the token.
+        AuthenticatedURL.extractToken(conn, token);
+        return;
+      } else if (isNegotiate()) {
+        LOG.debug("Performing our own SPNEGO sequence.");
+        doSpnegoSequence(token);
+      } else {
+        LOG.debug("Using fallback authenticator sequence.");
+        getFallBackAuthenticator().authenticate(url, token);
+      }
+    }
+  }
+
+  /**
+   * If the specified URL does not support SPNEGO authentication, a fallback {@link Authenticator} will be used.
+   * <p/>
+   * This implementation returns a {@link PseudoAuthenticator}.
+   *
+   * @return the fallback {@link Authenticator}.
+   */
+  protected Authenticator getFallBackAuthenticator() {
+    Authenticator auth = new PseudoAuthenticator();
+    if (connConfigurator != null) {
+      auth.setConnectionConfigurator(connConfigurator);
+    }
+    return auth;
+  }
+
+  /*
+  * Indicates if the response is starting a SPNEGO negotiation.
+  */
+  private boolean isNegotiate() throws IOException {
+    boolean negotiate = false;
+    if (conn.getResponseCode() == HttpURLConnection.HTTP_UNAUTHORIZED) {
+      String authHeader = conn.getHeaderField(WWW_AUTHENTICATE);
+      negotiate = authHeader != null && authHeader.trim().startsWith(NEGOTIATE);
+    }
+    return negotiate;
+  }
+
+  /**
+   * Implements the SPNEGO authentication sequence interaction using the current default principal
+   * in the Kerberos cache (normally set via kinit).
+   *
+   * @param token the authentication token being used for the user.
+   *
+   * @throws IOException if an IO error occurred.
+   * @throws AuthenticationException if an authentication error occurred.
+   */
+  private void doSpnegoSequence(AuthenticatedURL.Token token) throws IOException, AuthenticationException {
+    try {
+      AccessControlContext context = AccessController.getContext();
+      Subject subject = Subject.getSubject(context);
+      if (subject == null) {
+        LOG.debug("No subject in context, logging in");
+        subject = new Subject();
+        LoginContext login = new LoginContext("", subject,
+            null, new KerberosConfiguration());
+        login.login();
+      }
+
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Using subject: " + subject);
+      }
+      Subject.doAs(subject, new PrivilegedExceptionAction<Void>() {
+
+        @Override
+        public Void run() throws Exception {
+          GSSContext gssContext = null;
+          try {
+            GSSManager gssManager = GSSManager.getInstance();
+            String servicePrincipal = KerberosUtil.getServicePrincipal("HTTP",
+                KerberosAuthenticator.this.url.getHost());
+            Oid oid = KerberosUtil.getOidInstance("NT_GSS_KRB5_PRINCIPAL");
+            GSSName serviceName = gssManager.createName(servicePrincipal,
+                                                        oid);
+            oid = KerberosUtil.getOidInstance("GSS_KRB5_MECH_OID");
+            gssContext = gssManager.createContext(serviceName, oid, null,
+                                                  GSSContext.DEFAULT_LIFETIME);
+            gssContext.requestCredDeleg(true);
+            gssContext.requestMutualAuth(true);
+
+            byte[] inToken = new byte[0];
+            byte[] outToken;
+            boolean established = false;
+
+            // Loop while the context is still not established
+            while (!established) {
+              outToken = gssContext.initSecContext(inToken, 0, inToken.length);
+              if (outToken != null) {
+                sendToken(outToken);
+              }
+
+              if (!gssContext.isEstablished()) {
+                inToken = readToken();
+              } else {
+                established = true;
+              }
+            }
+          } finally {
+            if (gssContext != null) {
+              gssContext.dispose();
+              gssContext = null;
+            }
+          }
+          return null;
+        }
+      });
+    } catch (PrivilegedActionException ex) {
+      throw new AuthenticationException(ex.getException());
+    } catch (LoginException ex) {
+      throw new AuthenticationException(ex);
+    }
+    AuthenticatedURL.extractToken(conn, token);
+  }
+
+  /*
+  * Sends the Kerberos token to the server.
+  */
+  private void sendToken(byte[] outToken) throws IOException, AuthenticationException {
+    // open a new connection and resend the request with the token in the 'Authorization: Negotiate' header
+    String token = base64.encodeToString(outToken);
+    conn = (HttpURLConnection) url.openConnection();
+    if (connConfigurator != null) {
+      conn = connConfigurator.configure(conn);
+    }
+    conn.setRequestMethod(AUTH_HTTP_METHOD);
+    conn.setRequestProperty(AUTHORIZATION, NEGOTIATE + " " + token);
+    conn.connect();
+  }
+
+  /*
+  * Retrieves the Kerberos token returned by the server.
+  */
+  private byte[] readToken() throws IOException, AuthenticationException {
+    int status = conn.getResponseCode();
+    if (status == HttpURLConnection.HTTP_OK || status == HttpURLConnection.HTTP_UNAUTHORIZED) {
+      String authHeader = conn.getHeaderField(WWW_AUTHENTICATE);
+      if (authHeader == null || !authHeader.trim().startsWith(NEGOTIATE)) {
+        throw new AuthenticationException("Invalid SPNEGO sequence, '" + WWW_AUTHENTICATE +
+                                          "' header incorrect: " + authHeader);
+      }
+      String negotiation = authHeader.trim().substring((NEGOTIATE + " ").length()).trim();
+      return base64.decode(negotiation);
+    }
+    throw new AuthenticationException("Invalid SPNEGO sequence, status code: " + status);
+  }
+
+}
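To show how the client classes in this package fit together, here is a minimal usage sketch. It assumes a Kerberos ticket is already present in the local cache (obtained with kinit) and that the endpoint URL, which is purely illustrative, is protected by SPNEGO.

  import java.net.HttpURLConnection;
  import java.net.URL;

  import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
  import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;

  public class SpnegoClientSketch {

    public static void main(String[] args) throws Exception {
      // Illustrative SPNEGO-protected endpoint; replace with a real one.
      URL url = new URL("http://host.example.com:14000/webhdfs/v1/?op=GETHOMEDIRECTORY");

      // The token is populated by the first authentication sequence and
      // reused for subsequent requests.
      AuthenticatedURL.Token token = new AuthenticatedURL.Token();

      // AuthenticatedURL drives the Authenticator and attaches the signed
      // authentication cookie to later connections.
      AuthenticatedURL authUrl = new AuthenticatedURL(new KerberosAuthenticator());
      HttpURLConnection conn = authUrl.openConnection(url, token);
      System.out.println("HTTP status: " + conn.getResponseCode());
    }
  }

If the endpoint never triggers a SPNEGO negotiation, KerberosAuthenticator falls back to the PseudoAuthenticator as described in its javadoc.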
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
new file mode 100644
index 0000000..f534be9
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
@@ -0,0 +1,90 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+
+/**
+ * The {@link PseudoAuthenticator} implementation provides an authentication equivalent to Hadoop's
+ * Simple authentication; it trusts the value of the 'user.name' Java System property.
+ * <p/>
+ * The 'user.name' value is propagated using an additional query string parameter {@link #USER_NAME} ('user.name').
+ */
+public class PseudoAuthenticator implements Authenticator {
+
+  /**
+   * Name of the additional parameter that carries the 'user.name' value.
+   */
+  public static final String USER_NAME = "user.name";
+
+  private static final String USER_NAME_EQ = USER_NAME + "=";
+
+  private ConnectionConfigurator connConfigurator;
+
+  /**
+   * Sets a {@link ConnectionConfigurator} instance to use for
+   * configuring connections.
+   *
+   * @param configurator the {@link ConnectionConfigurator} instance.
+   */
+  @Override
+  public void setConnectionConfigurator(ConnectionConfigurator configurator) {
+    connConfigurator = configurator;
+  }
+
+  /**
+   * Performs simple authentication against the specified URL.
+   * <p/>
+   * If a token is given it does a NOP and returns the given token.
+   * <p/>
+   * If no token is given, it will perform an HTTP <code>OPTIONS</code> request injecting an additional
+   * parameter {@link #USER_NAME} in the query string with the value returned by the {@link #getUserName()}
+   * method.
+   * <p/>
+   * If the response is successful it will update the authentication token.
+   *
+   * @param url the URL to authenticate against.
+   * @param token the authentication token being used for the user.
+   *
+   * @throws IOException if an IO error occurred.
+   * @throws AuthenticationException if an authentication error occurred.
+   */
+  @Override
+  public void authenticate(URL url, AuthenticatedURL.Token token) throws IOException, AuthenticationException {
+    String strUrl = url.toString();
+    String paramSeparator = (strUrl.contains("?")) ? "&" : "?";
+    strUrl += paramSeparator + USER_NAME_EQ + getUserName();
+    url = new URL(strUrl);
+    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+    if (connConfigurator != null) {
+      conn = connConfigurator.configure(conn);
+    }
+    conn.setRequestMethod("OPTIONS");
+    conn.connect();
+    AuthenticatedURL.extractToken(conn, token);
+  }
+
+  /**
+   * Returns the current user name.
+   * <p/>
+   * This implementation returns the value of the Java system property 'user.name'.
+   *
+   * @return the current user name.
+   */
+  protected String getUserName() {
+    return System.getProperty("user.name");
+  }
+}
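The protected getUserName() method above is the intended extension point; a minimal sketch of a subclass that authenticates as a fixed, purely illustrative user instead of reading the 'user.name' system property:

  import org.apache.hadoop.security.authentication.client.PseudoAuthenticator;

  // Minimal sketch: always authenticates as a fixed, illustrative user.
  public class FixedUserPseudoAuthenticator extends PseudoAuthenticator {

    @Override
    protected String getUserName() {
      return "demo-user"; // illustrative value only
    }
  }

When used through AuthenticatedURL, the only visible effect is that requests gain a user.name=demo-user query string parameter, exactly as described in the class javadoc.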
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.java
new file mode 100644
index 0000000..e786e37
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.java
@@ -0,0 +1,150 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import java.io.IOException;
+import java.util.Properties;
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+
+/**
+ * The {@link AltKerberosAuthenticationHandler} behaves exactly the same way as
+ * the {@link KerberosAuthenticationHandler}, except that it allows for an
+ * alternative form of authentication for browsers while still using Kerberos
+ * for Java access.  This is an abstract class that should be subclassed
+ * to allow a developer to implement their own custom authentication for browser
+ * access.  The alternateAuthenticate method will be called whenever a request
+ * comes from a browser.
+ * <p/>
+ */
+public abstract class AltKerberosAuthenticationHandler
+                        extends KerberosAuthenticationHandler {
+
+  /**
+   * Constant that identifies the authentication mechanism.
+   */
+  public static final String TYPE = "alt-kerberos";
+
+  /**
+   * Constant for the configuration property that indicates which user agents
+   * are not considered browsers (comma separated)
+   */
+  public static final String NON_BROWSER_USER_AGENTS =
+          TYPE + ".non-browser.user-agents";
+  private static final String NON_BROWSER_USER_AGENTS_DEFAULT =
+          "java,curl,wget,perl";
+
+  private String[] nonBrowserUserAgents;
+
+  /**
+   * Returns the authentication type of the authentication handler,
+   * 'alt-kerberos'.
+   * <p/>
+   *
+   * @return the authentication type of the authentication handler,
+   * 'alt-kerberos'.
+   */
+  @Override
+  public String getType() {
+    return TYPE;
+  }
+
+  @Override
+  public void init(Properties config) throws ServletException {
+    super.init(config);
+
+    nonBrowserUserAgents = config.getProperty(
+            NON_BROWSER_USER_AGENTS, NON_BROWSER_USER_AGENTS_DEFAULT)
+            .split("\\W*,\\W*");
+    for (int i = 0; i < nonBrowserUserAgents.length; i++) {
+        nonBrowserUserAgents[i] = nonBrowserUserAgents[i].toLowerCase();
+    }
+  }
+
+  /**
+   * It enforces the Kerberos SPNEGO authentication sequence returning an
+   * {@link AuthenticationToken} only after the Kerberos SPNEGO sequence has
+   * completed successfully (in the case of Java access) and only after the
+   * custom authentication implemented by the subclass in alternateAuthenticate
+   * has completed successfully (in the case of browser access).
+   * <p/>
+   *
+   * @param request the HTTP client request.
+   * @param response the HTTP client response.
+   *
+   * @return an authentication token if the request is authorized or null
+   *
+   * @throws IOException thrown if an IO error occurred
+   * @throws AuthenticationException thrown if an authentication error occurred
+   */
+  @Override
+  public AuthenticationToken authenticate(HttpServletRequest request,
+      HttpServletResponse response)
+      throws IOException, AuthenticationException {
+    AuthenticationToken token;
+    if (isBrowser(request.getHeader("User-Agent"))) {
+      token = alternateAuthenticate(request, response);
+    }
+    else {
+      token = super.authenticate(request, response);
+    }
+    return token;
+  }
+
+  /**
+   * This method parses the User-Agent String and returns whether or not it
+   * refers to a browser.  If it's not a browser, then Kerberos authentication
+   * will be used; if it is a browser, alternateAuthenticate from the subclass
+   * will be used.
+   * <p/>
+   * A User-Agent String is considered to be a browser if it does not contain
+   * any of the values from alt-kerberos.non-browser.user-agents; the default
+   * behavior is to consider everything a browser unless it contains one of:
+   * "java", "curl", "wget", or "perl".  Subclasses can optionally override
+   * this method to use different behavior.
+   *
+   * @param userAgent The User-Agent String, or null if there isn't one
+   * @return true if the User-Agent String refers to a browser, false if not
+   */
+  protected boolean isBrowser(String userAgent) {
+    if (userAgent == null) {
+      return false;
+    }
+    userAgent = userAgent.toLowerCase();
+    boolean isBrowser = true;
+    for (String nonBrowserUserAgent : nonBrowserUserAgents) {
+        if (userAgent.contains(nonBrowserUserAgent)) {
+            isBrowser = false;
+            break;
+        }
+    }
+    return isBrowser;
+  }
+
+  /**
+   * Subclasses should implement this method to provide the custom
+   * authentication to be used for browsers.
+   *
+   * @param request the HTTP client request.
+   * @param response the HTTP client response.
+   * @return an authentication token if the request is authorized, or null
+   * @throws IOException thrown if an IO error occurs
+   * @throws AuthenticationException thrown if an authentication error occurs
+   */
+  public abstract AuthenticationToken alternateAuthenticate(
+      HttpServletRequest request, HttpServletResponse response)
+      throws IOException, AuthenticationException;
+}
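Because the class is abstract, a deployment must subclass it and implement alternateAuthenticate. The sketch below is deliberately naive and trusts a hypothetical X-Demo-User request header, only to show the shape of an implementation; a real subclass would validate credentials against an actual identity source.

  import java.io.IOException;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;

  import org.apache.hadoop.security.authentication.client.AuthenticationException;
  import org.apache.hadoop.security.authentication.server.AltKerberosAuthenticationHandler;
  import org.apache.hadoop.security.authentication.server.AuthenticationToken;

  // Naive sketch: browser requests are "authenticated" from a hypothetical
  // header, while Java clients still go through Kerberos SPNEGO in the parent.
  public class HeaderAltKerberosAuthenticationHandler
      extends AltKerberosAuthenticationHandler {

    @Override
    public AuthenticationToken alternateAuthenticate(HttpServletRequest request,
        HttpServletResponse response) throws IOException, AuthenticationException {
      String user = request.getHeader("X-Demo-User"); // hypothetical header
      if (user == null || user.isEmpty()) {
        // Returning null tells the filter the request is not yet authenticated.
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
        return null;
      }
      // The token type must match getType() ("alt-kerberos") so the filter's
      // type check on restored cookies passes.
      return new AuthenticationToken(user, user, TYPE);
    }
  }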
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
new file mode 100644
index 0000000..0bd78f5
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
@@ -0,0 +1,422 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.hadoop.security.authentication.util.Signer;
+import org.apache.hadoop.security.authentication.util.SignerException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.servlet.Filter;
+import javax.servlet.FilterChain;
+import javax.servlet.FilterConfig;
+import javax.servlet.ServletException;
+import javax.servlet.ServletRequest;
+import javax.servlet.ServletResponse;
+import javax.servlet.http.Cookie;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletRequestWrapper;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.security.Principal;
+import java.util.Enumeration;
+import java.util.Properties;
+import java.util.Random;
+
+/**
+ * The {@link AuthenticationFilter} enables protecting web application resources with different (pluggable)
+ * authentication mechanisms.
+ * <p/>
+ * Out of the box it provides 2 authentication mechanisms: Pseudo and Kerberos SPNEGO.
+ * <p/>
+ * Additional authentication mechanisms are supported via the {@link AuthenticationHandler} interface.
+ * <p/>
+ * This filter delegates to the configured authentication handler for authentication and once it obtains an
+ * {@link AuthenticationToken} from it, sets a signed HTTP cookie with the token. For client requests
+ * that provide the signed HTTP cookie, it verifies the validity of the cookie, extracts the user information
+ * and lets the request proceed to the target resource.
+ * <p/>
+ * The supported configuration properties are:
+ * <ul>
+ * <li>config.prefix: indicates the prefix to be used by all other configuration properties, the default value
+ * is no prefix. See below for details on how/why this prefix is used.</li>
+ * <li>[#PREFIX#.]type: simple|kerberos|#CLASS#, 'simple' is short for the
+ * {@link PseudoAuthenticationHandler}, 'kerberos' is short for {@link KerberosAuthenticationHandler}, otherwise
+ * the full class name of the {@link AuthenticationHandler} must be specified.</li>
+ * <li>[#PREFIX#.]signature.secret: the secret used to sign the HTTP cookie value. The default value is a random
+ * value. Unless multiple webapp instances need to share the secret, the random value is adequate.</li>
+ * <li>[#PREFIX#.]token.validity: time, in seconds, that the generated token is valid before a
+ * new authentication is triggered; the default value is <code>36000</code> seconds (10 hours).</li>
+ * <li>[#PREFIX#.]cookie.domain: domain to use for the HTTP cookie that stores the authentication token.</li>
+ * <li>[#PREFIX#.]cookie.path: path to use for the HTTP cookie that stores the authentication token.</li>
+ * </ul>
+ * <p/>
+ * The rest of the configuration properties are specific to the {@link AuthenticationHandler} implementation; the
+ * {@link AuthenticationFilter} takes all the properties that start with the prefix #PREFIX#, removes
+ * the prefix from them and passes them to the authentication handler for initialization. Properties that do
+ * not start with the prefix are not passed to the authentication handler initialization.
+ */
+public class AuthenticationFilter implements Filter {
+
+  private static Logger LOG = LoggerFactory.getLogger(AuthenticationFilter.class);
+
+  /**
+   * Constant for the property that specifies the configuration prefix.
+   */
+  public static final String CONFIG_PREFIX = "config.prefix";
+
+  /**
+   * Constant for the property that specifies the authentication handler to use.
+   */
+  public static final String AUTH_TYPE = "type";
+
+  /**
+   * Constant for the property that specifies the secret to use for signing the HTTP Cookies.
+   */
+  public static final String SIGNATURE_SECRET = "signature.secret";
+
+  /**
+   * Constant for the configuration property that indicates the validity of the generated token.
+   */
+  public static final String AUTH_TOKEN_VALIDITY = "token.validity";
+
+  /**
+   * Constant for the configuration property that indicates the domain to use in the HTTP cookie.
+   */
+  public static final String COOKIE_DOMAIN = "cookie.domain";
+
+  /**
+   * Constant for the configuration property that indicates the path to use in the HTTP cookie.
+   */
+  public static final String COOKIE_PATH = "cookie.path";
+
+  private static final Random RAN = new Random();
+
+  private Signer signer;
+  private AuthenticationHandler authHandler;
+  private boolean randomSecret;
+  private long validity;
+  private String cookieDomain;
+  private String cookiePath;
+
+  /**
+   * Initializes the authentication filter.
+   * <p/>
+   * It instantiates and initializes the specified {@link AuthenticationHandler}.
+   * <p/>
+   *
+   * @param filterConfig filter configuration.
+   *
+   * @throws ServletException thrown if the filter or the authentication handler could not be initialized properly.
+   */
+  @Override
+  public void init(FilterConfig filterConfig) throws ServletException {
+    String configPrefix = filterConfig.getInitParameter(CONFIG_PREFIX);
+    configPrefix = (configPrefix != null) ? configPrefix + "." : "";
+    Properties config = getConfiguration(configPrefix, filterConfig);
+    String authHandlerName = config.getProperty(AUTH_TYPE, null);
+    String authHandlerClassName;
+    if (authHandlerName == null) {
+      throw new ServletException("Authentication type must be specified: simple|kerberos|<class>");
+    }
+    if (authHandlerName.equals("simple")) {
+      authHandlerClassName = PseudoAuthenticationHandler.class.getName();
+    } else if (authHandlerName.equals("kerberos")) {
+      authHandlerClassName = KerberosAuthenticationHandler.class.getName();
+    } else {
+      authHandlerClassName = authHandlerName;
+    }
+
+    try {
+      Class<?> klass = Thread.currentThread().getContextClassLoader().loadClass(authHandlerClassName);
+      authHandler = (AuthenticationHandler) klass.newInstance();
+      authHandler.init(config);
+    } catch (ClassNotFoundException ex) {
+      throw new ServletException(ex);
+    } catch (InstantiationException ex) {
+      throw new ServletException(ex);
+    } catch (IllegalAccessException ex) {
+      throw new ServletException(ex);
+    }
+    String signatureSecret = config.getProperty(SIGNATURE_SECRET);
+    if (signatureSecret == null) {
+      signatureSecret = Long.toString(RAN.nextLong());
+      randomSecret = true;
+      LOG.warn("'signature.secret' configuration not set, using a random value as secret");
+    }
+    signer = new Signer(signatureSecret.getBytes());
+    validity = Long.parseLong(config.getProperty(AUTH_TOKEN_VALIDITY, "36000")) * 1000; //10 hours
+
+    cookieDomain = config.getProperty(COOKIE_DOMAIN, null);
+    cookiePath = config.getProperty(COOKIE_PATH, null);
+  }
+
+  /**
+   * Returns the authentication handler being used.
+   *
+   * @return the authentication handler being used.
+   */
+  protected AuthenticationHandler getAuthenticationHandler() {
+    return authHandler;
+  }
+
+  /**
+   * Returns if a random secret is being used.
+   *
+   * @return if a random secret is being used.
+   */
+  protected boolean isRandomSecret() {
+    return randomSecret;
+  }
+
+  /**
+   * Returns the validity time of the generated tokens.
+   *
+   * @return the validity time of the generated tokens, in seconds.
+   */
+  protected long getValidity() {
+    return validity / 1000;
+  }
+
+  /**
+   * Returns the cookie domain to use for the HTTP cookie.
+   *
+   * @return the cookie domain to use for the HTTP cookie.
+   */
+  protected String getCookieDomain() {
+    return cookieDomain;
+  }
+
+  /**
+   * Returns the cookie path to use for the HTTP cookie.
+   *
+   * @return the cookie path to use for the HTTP cookie.
+   */
+  protected String getCookiePath() {
+    return cookiePath;
+  }
+
+  /**
+   * Destroys the filter.
+   * <p/>
+   * It invokes the {@link AuthenticationHandler#destroy()} method to release any resources it may hold.
+   */
+  @Override
+  public void destroy() {
+    if (authHandler != null) {
+      authHandler.destroy();
+      authHandler = null;
+    }
+  }
+
+  /**
+   * Returns the filtered configuration (only properties starting with the specified prefix). The prefix is
+   * also removed from the property keys. The returned {@link Properties} object is used to initialize the
+   * {@link AuthenticationHandler}.
+   * <p/>
+   * This method can be overridden by subclasses to obtain the configuration from a configuration source other than
+   * the web.xml file.
+   *
+   * @param configPrefix configuration prefix to use for extracting configuration properties.
+   * @param filterConfig filter configuration object
+   *
+   * @return the configuration to be used with the {@link AuthenticationHandler} instance.
+   *
+   * @throws ServletException thrown if the configuration could not be created.
+   */
+  protected Properties getConfiguration(String configPrefix, FilterConfig filterConfig) throws ServletException {
+    Properties props = new Properties();
+    Enumeration<?> names = filterConfig.getInitParameterNames();
+    while (names.hasMoreElements()) {
+      String name = (String) names.nextElement();
+      if (name.startsWith(configPrefix)) {
+        String value = filterConfig.getInitParameter(name);
+        props.put(name.substring(configPrefix.length()), value);
+      }
+    }
+    return props;
+  }
+
+  /**
+   * Returns the full URL of the request including the query string.
+   * <p/>
+   * Used as a convenience method for logging purposes.
+   *
+   * @param request the request object.
+   *
+   * @return the full URL of the request including the query string.
+   */
+  protected String getRequestURL(HttpServletRequest request) {
+    StringBuffer sb = request.getRequestURL();
+    if (request.getQueryString() != null) {
+      sb.append("?").append(request.getQueryString());
+    }
+    return sb.toString();
+  }
+
+  /**
+   * Returns the {@link AuthenticationToken} for the request.
+   * <p/>
+   * It looks at the received HTTP cookies and extracts the value of the {@link AuthenticatedURL#AUTH_COOKIE}
+   * if present. It verifies the signature and if correct it creates the {@link AuthenticationToken} and returns
+   * it.
+   * <p/>
+   * If this method returns <code>null</code> the filter will invoke the configured {@link AuthenticationHandler}
+   * to perform user authentication.
+   *
+   * @param request request object.
+   *
+   * @return the Authentication token if the request is authenticated, <code>null</code> otherwise.
+   *
+   * @throws IOException thrown if an IO error occurred.
+   * @throws AuthenticationException thrown if the token is invalid or if it has expired.
+   */
+  protected AuthenticationToken getToken(HttpServletRequest request) throws IOException, AuthenticationException {
+    AuthenticationToken token = null;
+    String tokenStr = null;
+    Cookie[] cookies = request.getCookies();
+    if (cookies != null) {
+      for (Cookie cookie : cookies) {
+        if (cookie.getName().equals(AuthenticatedURL.AUTH_COOKIE)) {
+          tokenStr = cookie.getValue();
+          try {
+            tokenStr = signer.verifyAndExtract(tokenStr);
+          } catch (SignerException ex) {
+            throw new AuthenticationException(ex);
+          }
+          break;
+        }
+      }
+    }
+    if (tokenStr != null) {
+      token = AuthenticationToken.parse(tokenStr);
+      if (!token.getType().equals(authHandler.getType())) {
+        throw new AuthenticationException("Invalid AuthenticationToken type");
+      }
+      if (token.isExpired()) {
+        throw new AuthenticationException("AuthenticationToken expired");
+      }
+    }
+    return token;
+  }
+
+  /**
+   * If the request has a valid authentication token it allows the request to continue to the target resource,
+   * otherwise it triggers an authentication sequence using the configured {@link AuthenticationHandler}.
+   *
+   * @param request the request object.
+   * @param response the response object.
+   * @param filterChain the filter chain object.
+   *
+   * @throws IOException thrown if an IO error occurred.
+   * @throws ServletException thrown if a processing error occurred.
+   */
+  @Override
+  public void doFilter(ServletRequest request, ServletResponse response, FilterChain filterChain)
+      throws IOException, ServletException {
+    boolean unauthorizedResponse = true;
+    String unauthorizedMsg = "";
+    HttpServletRequest httpRequest = (HttpServletRequest) request;
+    HttpServletResponse httpResponse = (HttpServletResponse) response;
+    try {
+      boolean newToken = false;
+      AuthenticationToken token;
+      try {
+        token = getToken(httpRequest);
+      }
+      catch (AuthenticationException ex) {
+        LOG.warn("AuthenticationToken ignored: " + ex.getMessage());
+        token = null;
+      }
+      if (authHandler.managementOperation(token, httpRequest, httpResponse)) {
+        if (token == null) {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Request [{}] triggering authentication", getRequestURL(httpRequest));
+          }
+          token = authHandler.authenticate(httpRequest, httpResponse);
+          if (token != null && token.getExpires() != 0 &&
+              token != AuthenticationToken.ANONYMOUS) {
+            token.setExpires(System.currentTimeMillis() + getValidity() * 1000);
+          }
+          newToken = true;
+        }
+        if (token != null) {
+          unauthorizedResponse = false;
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Request [{}] user [{}] authenticated", getRequestURL(httpRequest), token.getUserName());
+          }
+          final AuthenticationToken authToken = token;
+          httpRequest = new HttpServletRequestWrapper(httpRequest) {
+
+            @Override
+            public String getAuthType() {
+              return authToken.getType();
+            }
+
+            @Override
+            public String getRemoteUser() {
+              return authToken.getUserName();
+            }
+
+            @Override
+            public Principal getUserPrincipal() {
+              return (authToken != AuthenticationToken.ANONYMOUS) ? authToken : null;
+            }
+          };
+          if (newToken && !token.isExpired() && token != AuthenticationToken.ANONYMOUS) {
+            String signedToken = signer.sign(token.toString());
+            Cookie cookie = createCookie(signedToken);
+            httpResponse.addCookie(cookie);
+          }
+          filterChain.doFilter(httpRequest, httpResponse);
+        }
+      } else {
+        unauthorizedResponse = false;
+      }
+    } catch (AuthenticationException ex) {
+      unauthorizedMsg = ex.toString();
+      LOG.warn("Authentication exception: " + ex.getMessage(), ex);
+    }
+    if (unauthorizedResponse) {
+      if (!httpResponse.isCommitted()) {
+        Cookie cookie = createCookie("");
+        cookie.setMaxAge(0);
+        httpResponse.addCookie(cookie);
+        httpResponse.sendError(HttpServletResponse.SC_UNAUTHORIZED, unauthorizedMsg);
+      }
+    }
+  }
+
+  /**
+   * Creates the Hadoop authentication HTTP cookie.
+   * <p/>
+   * It sets the domain and path specified in the configuration.
+   *
+   * @param token authentication token for the cookie.
+   *
+   * @return the HTTP cookie.
+   */
+  protected Cookie createCookie(String token) {
+    Cookie cookie = new Cookie(AuthenticatedURL.AUTH_COOKIE, token);
+    if (getCookieDomain() != null) {
+      cookie.setDomain(getCookieDomain());
+    }
+    if (getCookiePath() != null) {
+      cookie.setPath(getCookiePath());
+    }
+    return cookie;
+  }
+}
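As the getConfiguration javadoc notes, subclasses may obtain the configuration from a source other than web.xml. A minimal sketch that loads the handler settings from a classpath resource (the resource name is illustrative):

  import java.io.IOException;
  import java.io.InputStream;
  import java.util.Properties;
  import javax.servlet.FilterConfig;
  import javax.servlet.ServletException;

  import org.apache.hadoop.security.authentication.server.AuthenticationFilter;

  // Minimal sketch: reads the filter configuration from a classpath resource
  // instead of web.xml init parameters. Keys are expected without the prefix,
  // e.g. "type", "token.validity", "signature.secret".
  public class PropertiesFileAuthenticationFilter extends AuthenticationFilter {

    @Override
    protected Properties getConfiguration(String configPrefix,
        FilterConfig filterConfig) throws ServletException {
      Properties props = new Properties();
      InputStream in = getClass().getClassLoader()
          .getResourceAsStream("auth-filter.properties"); // illustrative name
      if (in == null) {
        throw new ServletException("auth-filter.properties not found on classpath");
      }
      try {
        props.load(in);
      } catch (IOException ex) {
        throw new ServletException(ex);
      } finally {
        try {
          in.close();
        } catch (IOException ignored) {
          // best-effort close
        }
      }
      return props;
    }
  }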
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java
new file mode 100644
index 0000000..7cafe8b
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java
@@ -0,0 +1,117 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.util.Properties;
+
+/**
+ * Interface for server authentication mechanisms.
+ * <p/>
+ * The {@link AuthenticationFilter} manages the lifecycle of the authentication handler.
+ * <p/>
+ * Implementations must be thread-safe as one instance is initialized and used for all requests.
+ */
+public interface AuthenticationHandler {
+
+  /**
+   * Returns the authentication type of the authentication handler.
+   * <p/>
+   * This should be a name that uniquely identifies the authentication type.
+   * For example 'simple' or 'kerberos'.
+   *
+   * @return the authentication type of the authentication handler.
+   */
+  public String getType();
+
+  /**
+   * Initializes the authentication handler instance.
+   * <p/>
+   * This method is invoked by the {@link AuthenticationFilter#init} method.
+   *
+   * @param config configuration properties to initialize the handler.
+   *
+   * @throws ServletException thrown if the handler could not be initialized.
+   */
+  public void init(Properties config) throws ServletException;
+
+  /**
+   * Destroys the authentication handler instance.
+   * <p/>
+   * This method is invoked by the {@link AuthenticationFilter#destroy} method.
+   */
+  public void destroy();
+
+  /**
+   * Performs an authentication management operation.
+   * <p/>
+   * This is useful for handling operations like get/renew/cancel
+   * delegation tokens which are being handled as operations of the
+   * service end-point.
+   * <p/>
+   * If the method returns <code>TRUE</code> the request will continue normal
+   * processing; this means the method has not produced any HTTP response.
+   * <p/>
+   * If the method returns <code>FALSE</code> the request will end; this means
+   * the method has produced the corresponding HTTP response.
+   *
+   * @param token the authentication token if any, otherwise <code>NULL</code>.
+   * @param request the HTTP client request.
+   * @param response the HTTP client response.
+   * @return <code>TRUE</code> if the request should be processed as a regular
+   * request,
+   * <code>FALSE</code> otherwise.
+   *
+   * @throws IOException thrown if an IO error occurred.
+   * @throws AuthenticationException thrown if an Authentication error occurred.
+   */
+  public boolean managementOperation(AuthenticationToken token,
+                                     HttpServletRequest request,
+                                     HttpServletResponse response)
+    throws IOException, AuthenticationException;
+
+  /**
+   * Performs an authentication step for the given HTTP client request.
+   * <p/>
+   * This method is invoked by the {@link AuthenticationFilter} only if the HTTP client request is
+   * not yet authenticated.
+   * <p/>
+   * Depending upon the authentication mechanism being implemented, a particular HTTP client may
+   * end up making a sequence of invocations before authentication is successfully established (this is
+   * the case of Kerberos SPNEGO).
+   * <p/>
+   * This method must return an {@link AuthenticationToken} only if the HTTP client request has
+   * been successfully and fully authenticated.
+   * <p/>
+   * If the HTTP client request has not been completely authenticated, this method must take over
+   * the corresponding HTTP response and it must return <code>null</code>.
+   *
+   * @param request the HTTP client request.
+   * @param response the HTTP client response.
+   *
+   * @return an {@link AuthenticationToken} if the HTTP client request has been authenticated,
+   *         <code>null</code> otherwise (in this case it must take care of the response).
+   *
+   * @throws IOException thrown if an IO error occurred.
+   * @throws AuthenticationException thrown if an Authentication error occurred.
+   */
+  public AuthenticationToken authenticate(HttpServletRequest request, HttpServletResponse response)
+    throws IOException, AuthenticationException;
+
+}
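To make the contract concrete, here is a deliberately trivial sketch of a handler that accepts every request as the same illustrative user; managementOperation returns true so requests always proceed to authenticate().

  import java.io.IOException;
  import java.util.Properties;
  import javax.servlet.ServletException;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;

  import org.apache.hadoop.security.authentication.client.AuthenticationException;
  import org.apache.hadoop.security.authentication.server.AuthenticationHandler;
  import org.apache.hadoop.security.authentication.server.AuthenticationToken;

  // Trivial sketch of the AuthenticationHandler contract; not for real use.
  public class StaticUserAuthenticationHandler implements AuthenticationHandler {

    public static final String TYPE = "static-user"; // illustrative type name

    @Override
    public String getType() {
      return TYPE;
    }

    @Override
    public void init(Properties config) throws ServletException {
      // nothing to initialize in this sketch
    }

    @Override
    public void destroy() {
      // nothing to release in this sketch
    }

    @Override
    public boolean managementOperation(AuthenticationToken token,
        HttpServletRequest request, HttpServletResponse response)
        throws IOException, AuthenticationException {
      // No management operations; let every request continue to authenticate().
      return true;
    }

    @Override
    public AuthenticationToken authenticate(HttpServletRequest request,
        HttpServletResponse response) throws IOException, AuthenticationException {
      // Authentication completes in a single step, so a token is returned at once.
      return new AuthenticationToken("static-user", "static-user", TYPE);
    }
  }

Such a handler would be selected by setting the filter's [#PREFIX#.]type init parameter to its fully qualified class name, as described in the AuthenticationFilter javadoc.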
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java
new file mode 100644
index 0000000..ff68847
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java
@@ -0,0 +1,230 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+
+import java.security.Principal;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.StringTokenizer;
+
+import javax.servlet.http.HttpServletRequest;
+
+/**
+ * The {@link AuthenticationToken} contains information about an authenticated
+ * HTTP client and doubles as the {@link Principal} to be returned by
+ * authenticated {@link HttpServletRequest}s.
+ * <p/>
+ * The token can be serialized/deserialized to and from a string as it is sent
+ * and received in HTTP client responses and requests as an HTTP cookie (this is
+ * done by the {@link AuthenticationFilter}).
+ */
+public class AuthenticationToken implements Principal {
+
+  /**
+   * Constant that identifies an anonymous request.
+   */
+  public static final AuthenticationToken ANONYMOUS = new AuthenticationToken();
+
+  private static final String ATTR_SEPARATOR = "&";
+  private static final String USER_NAME = "u";
+  private static final String PRINCIPAL = "p";
+  private static final String EXPIRES = "e";
+  private static final String TYPE = "t";
+
+  private final static Set<String> ATTRIBUTES =
+    new HashSet<String>(Arrays.asList(USER_NAME, PRINCIPAL, EXPIRES, TYPE));
+
+  private String userName;
+  private String principal;
+  private String type;
+  private long expires;
+  private String token;
+
+  private AuthenticationToken() {
+    userName = null;
+    principal = null;
+    type = null;
+    expires = -1;
+    token = "ANONYMOUS";
+    generateToken();
+  }
+
+  private static final String ILLEGAL_ARG_MSG = " is NULL, empty or contains a '" + ATTR_SEPARATOR + "'";
+
+  /**
+   * Creates an authentication token.
+   *
+   * @param userName user name.
+   * @param principal principal (commonly matches the user name; with Kerberos it is the full/long principal
+   * name while the userName is the short name).
+   * @param type the authentication mechanism name
+   * (for example <code>simple</code> or <code>kerberos</code>).
+   */
+  public AuthenticationToken(String userName, String principal, String type) {
+    checkForIllegalArgument(userName, "userName");
+    checkForIllegalArgument(principal, "principal");
+    checkForIllegalArgument(type, "type");
+    this.userName = userName;
+    this.principal = principal;
+    this.type = type;
+    this.expires = -1;
+  }
+  
+  /**
+   * Check if the provided value is invalid. Throw an error if it is invalid, NOP otherwise.
+   * 
+   * @param value the value to check.
+   * @param name the parameter name to use in an error message if the value is invalid.
+   */
+  private static void checkForIllegalArgument(String value, String name) {
+    if (value == null || value.length() == 0 || value.contains(ATTR_SEPARATOR)) {
+      throw new IllegalArgumentException(name + ILLEGAL_ARG_MSG);
+    }
+  }
+
+  /**
+   * Sets the expiration of the token.
+   *
+   * @param expires expiration time of the token in milliseconds since the epoch.
+   */
+  public void setExpires(long expires) {
+    if (this != AuthenticationToken.ANONYMOUS) {
+      this.expires = expires;
+      generateToken();
+    }
+  }
+
+  /**
+   * Generates the token.
+   */
+  private void generateToken() {
+    StringBuffer sb = new StringBuffer();
+    sb.append(USER_NAME).append("=").append(getUserName()).append(ATTR_SEPARATOR);
+    sb.append(PRINCIPAL).append("=").append(getName()).append(ATTR_SEPARATOR);
+    sb.append(TYPE).append("=").append(getType()).append(ATTR_SEPARATOR);
+    sb.append(EXPIRES).append("=").append(getExpires());
+    token = sb.toString();
+  }
+
+  /**
+   * Returns the user name.
+   *
+   * @return the user name.
+   */
+  public String getUserName() {
+    return userName;
+  }
+
+  /**
+   * Returns the principal name (this method name comes from the JDK {@link Principal} interface).
+   *
+   * @return the principal name.
+   */
+  @Override
+  public String getName() {
+    return principal;
+  }
+
+  /**
+   * Returns the authentication mechanism of the token.
+   *
+   * @return the authentication mechanism of the token.
+   */
+  public String getType() {
+    return type;
+  }
+
+  /**
+   * Returns the expiration time of the token.
+   *
+   * @return the expiration time of the token, in milliseconds since the epoch.
+   */
+  public long getExpires() {
+    return expires;
+  }
+
+  /**
+   * Returns if the token has expired.
+   *
+   * @return if the token has expired.
+   */
+  public boolean isExpired() {
+    return getExpires() != -1 && System.currentTimeMillis() > getExpires();
+  }
+
+  /**
+   * Returns the string representation of the token.
+   * <p/>
+   * This string representation is parseable by the {@link #parse} method.
+   *
+   * @return the string representation of the token.
+   */
+  @Override
+  public String toString() {
+    return token;
+  }
+
+  /**
+   * Parses a string into an authentication token.
+   *
+   * @param tokenStr string representation of a token.
+   *
+   * @return the parsed authentication token.
+   *
+   * @throws AuthenticationException thrown if the string representation could not be parsed into
+   * an authentication token.
+   */
+  public static AuthenticationToken parse(String tokenStr) throws AuthenticationException {
+    Map<String, String> map = split(tokenStr);
+    if (!map.keySet().equals(ATTRIBUTES)) {
+      throw new AuthenticationException("Invalid token string, missing attributes");
+    }
+    long expires = Long.parseLong(map.get(EXPIRES));
+    AuthenticationToken token = new AuthenticationToken(map.get(USER_NAME), map.get(PRINCIPAL), map.get(TYPE));
+    token.setExpires(expires);
+    return token;
+  }
+
+  /**
+   * Splits the string representation of a token into attributes pairs.
+   *
+   * @param tokenStr string representation of a token.
+   *
+   * @return a map with the attribute pairs of the token.
+   *
+   * @throws AuthenticationException thrown if the string representation of the token could not be broken into
+   * attribute pairs.
+   */
+  private static Map<String, String> split(String tokenStr) throws AuthenticationException {
+    Map<String, String> map = new HashMap<String, String>();
+    StringTokenizer st = new StringTokenizer(tokenStr, ATTR_SEPARATOR);
+    while (st.hasMoreTokens()) {
+      String part = st.nextToken();
+      int separator = part.indexOf('=');
+      if (separator == -1) {
+        throw new AuthenticationException("Invalid authentication token");
+      }
+      String key = part.substring(0, separator);
+      String value = part.substring(separator + 1);
+      map.put(key, value);
+    }
+    return map;
+  }
+
+}
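A small sketch of the string round trip described above; the user, principal and type values are illustrative and the expiry is set one hour ahead.

  import org.apache.hadoop.security.authentication.client.AuthenticationException;
  import org.apache.hadoop.security.authentication.server.AuthenticationToken;

  public class TokenRoundTripSketch {

    public static void main(String[] args) throws AuthenticationException {
      AuthenticationToken token =
          new AuthenticationToken("alice", "alice@EXAMPLE.COM", "kerberos");
      token.setExpires(System.currentTimeMillis() + 3600 * 1000);

      // toString() produces the cookie payload, e.g.
      // "u=alice&p=alice@EXAMPLE.COM&t=kerberos&e=<expiry millis>"
      String serialized = token.toString();

      // parse() rebuilds an equivalent token; AuthenticationFilter does this
      // after verifying the cookie signature.
      AuthenticationToken restored = AuthenticationToken.parse(serialized);
      System.out.println(restored.getUserName() + " expires at " + restored.getExpires());
    }
  }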
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java
new file mode 100644
index 0000000..07b64f4
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java
@@ -0,0 +1,336 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
+import org.apache.commons.codec.binary.Base64;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.ietf.jgss.GSSContext;
+import org.ietf.jgss.GSSCredential;
+import org.ietf.jgss.GSSManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosPrincipal;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import javax.security.auth.login.LoginContext;
+import javax.security.auth.login.LoginException;
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.File;
+import java.io.IOException;
+import java.security.Principal;
+import java.security.PrivilegedActionException;
+import java.security.PrivilegedExceptionAction;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+
+/**
+ * The {@link KerberosAuthenticationHandler} implements the Kerberos SPNEGO authentication mechanism for HTTP.
+ * <p/>
+ * The supported configuration properties are:
+ * <ul>
+ * <li>kerberos.principal: the Kerberos principal to be used by the server. As stated by the Kerberos SPNEGO
+ * specification, it should be <code>HTTP/${HOSTNAME}@{REALM}</code>. The realm can be omitted from the
+ * principal as the JDK GSS libraries will use the realm name of the configured default realm.
+ * It does not have a default value.</li>
+ * <li>kerberos.keytab: the keytab file containing the credentials for the Kerberos principal.
+ * It does not have a default value.</li>
+ * <li>kerberos.name.rules: Kerberos name rules to resolve principal names, see
+ * {@link KerberosName#setRules(String)}</li>
+ * </ul>
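+ * <p/>
+ * For example (hypothetical values), a filter configuration could set
+ * <code>kerberos.principal=HTTP/myhost.example.com@EXAMPLE.COM</code> and
+ * <code>kerberos.keytab=/etc/security/keytabs/spnego.service.keytab</code>.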
+ */
+public class KerberosAuthenticationHandler implements AuthenticationHandler {
+  private static Logger LOG = LoggerFactory.getLogger(KerberosAuthenticationHandler.class);
+
+  /**
+   * Kerberos context configuration for the JDK GSS library.
+   */
+  private static class KerberosConfiguration extends Configuration {
+    private String keytab;
+    private String principal;
+
+    public KerberosConfiguration(String keytab, String principal) {
+      this.keytab = keytab;
+      this.principal = principal;
+    }
+
+    @Override
+    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
+      Map<String, String> options = new HashMap<String, String>();
+      options.put("keyTab", keytab);
+      options.put("principal", principal);
+      options.put("useKeyTab", "true");
+      options.put("storeKey", "true");
+      options.put("doNotPrompt", "true");
+      options.put("useTicketCache", "true");
+      options.put("renewTGT", "true");
+      options.put("refreshKrb5Config", "true");
+      options.put("isInitiator", "false");
+      String ticketCache = System.getenv("KRB5CCNAME");
+      if (ticketCache != null) {
+        options.put("ticketCache", ticketCache);
+      }
+      if (LOG.isDebugEnabled()) {
+        options.put("debug", "true");
+      }
+
+      return new AppConfigurationEntry[]{
+          new AppConfigurationEntry(KerberosUtil.getKrb5LoginModuleName(),
+                                  AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
+                                  options),};
+    }
+  }
+
+  /**
+   * Constant that identifies the authentication mechanism.
+   */
+  public static final String TYPE = "kerberos";
+
+  /**
+   * Constant for the configuration property that indicates the kerberos principal.
+   */
+  public static final String PRINCIPAL = TYPE + ".principal";
+
+  /**
+   * Constant for the configuration property that indicates the keytab file path.
+   */
+  public static final String KEYTAB = TYPE + ".keytab";
+
+  /**
+   * Constant for the configuration property that indicates the Kerberos name
+   * rules for the Kerberos principals.
+   */
+  public static final String NAME_RULES = TYPE + ".name.rules";
+
+  private String principal;
+  private String keytab;
+  private GSSManager gssManager;
+  private LoginContext loginContext;
+
+  /**
+   * Initializes the authentication handler instance.
+   * <p/>
+   * It creates a Kerberos context using the principal and keytab specified in the configuration.
+   * <p/>
+   * This method is invoked by the {@link AuthenticationFilter#init} method.
+   *
+   * @param config configuration properties to initialize the handler.
+   *
+   * @throws ServletException thrown if the handler could not be initialized.
+   */
+  @Override
+  public void init(Properties config) throws ServletException {
+    try {
+      principal = config.getProperty(PRINCIPAL, principal);
+      if (principal == null || principal.trim().length() == 0) {
+        throw new ServletException("Principal not defined in configuration");
+      }
+      keytab = config.getProperty(KEYTAB, keytab);
+      if (keytab == null || keytab.trim().length() == 0) {
+        throw new ServletException("Keytab not defined in configuration");
+      }
+      if (!new File(keytab).exists()) {
+        throw new ServletException("Keytab does not exist: " + keytab);
+      }
+
+      String nameRules = config.getProperty(NAME_RULES, null);
+      if (nameRules != null) {
+        KerberosName.setRules(nameRules);
+      }
+      
+      Set<Principal> principals = new HashSet<Principal>();
+      principals.add(new KerberosPrincipal(principal));
+      Subject subject = new Subject(false, principals, new HashSet<Object>(), new HashSet<Object>());
+
+      KerberosConfiguration kerberosConfiguration = new KerberosConfiguration(keytab, principal);
+
+      LOG.info("Login using keytab "+keytab+", for principal "+principal);
+      loginContext = new LoginContext("", subject, null, kerberosConfiguration);
+      loginContext.login();
+
+      Subject serverSubject = loginContext.getSubject();
+      try {
+        gssManager = Subject.doAs(serverSubject, new PrivilegedExceptionAction<GSSManager>() {
+
+          @Override
+          public GSSManager run() throws Exception {
+            return GSSManager.getInstance();
+          }
+        });
+      } catch (PrivilegedActionException ex) {
+        throw ex.getException();
+      }
+      LOG.info("Initialized, principal [{}] from keytab [{}]", principal, keytab);
+    } catch (Exception ex) {
+      throw new ServletException(ex);
+    }
+  }
+
+  /**
+   * Releases any resources initialized by the authentication handler.
+   * <p/>
+   * It destroys the Kerberos context.
+   */
+  @Override
+  public void destroy() {
+    try {
+      if (loginContext != null) {
+        loginContext.logout();
+        loginContext = null;
+      }
+    } catch (LoginException ex) {
+      LOG.warn(ex.getMessage(), ex);
+    }
+  }
+
+  /**
+   * Returns the authentication type of the authentication handler, 'kerberos'.
+   *
+   * @return the authentication type of the authentication handler, 'kerberos'.
+   */
+  @Override
+  public String getType() {
+    return TYPE;
+  }
+
+  /**
+   * Returns the Kerberos principal used by the authentication handler.
+   *
+   * @return the Kerberos principal used by the authentication handler.
+   */
+  protected String getPrincipal() {
+    return principal;
+  }
+
+  /**
+   * Returns the keytab used by the authentication handler.
+   *
+   * @return the keytab used by the authentication handler.
+   */
+  protected String getKeytab() {
+    return keytab;
+  }
+
+  /**
+   * This is an empty implementation; it always returns <code>TRUE</code>.
+   *
+   * @param token the authentication token if any, otherwise <code>NULL</code>.
+   * @param request the HTTP client request.
+   * @param response the HTTP client response.
+   *
+   * @return <code>TRUE</code>
+   * @throws IOException it is never thrown.
+   * @throws AuthenticationException it is never thrown.
+   */
+  @Override
+  public boolean managementOperation(AuthenticationToken token,
+                                     HttpServletRequest request,
+                                     HttpServletResponse response)
+    throws IOException, AuthenticationException {
+    return true;
+  }
+
+  /**
+   * It enforces the Kerberos SPNEGO authentication sequence, returning an {@link AuthenticationToken} only
+   * after the Kerberos SPNEGO sequence has completed successfully.
+   *
+   * @param request the HTTP client request.
+   * @param response the HTTP client response.
+   *
+   * @return an authentication token if the Kerberos SPNEGO sequence is complete and valid,
+   *         <code>null</code> if it is in progress (in this case the handler handles the response to the client).
+   *
+   * @throws IOException thrown if an IO error occurred.
+   * @throws AuthenticationException thrown if Kerberos SPNEGO sequence failed.
+   */
+  @Override
+  public AuthenticationToken authenticate(HttpServletRequest request, final HttpServletResponse response)
+    throws IOException, AuthenticationException {
+    AuthenticationToken token = null;
+    String authorization = request.getHeader(KerberosAuthenticator.AUTHORIZATION);
+
+    if (authorization == null || !authorization.startsWith(KerberosAuthenticator.NEGOTIATE)) {
+      response.setHeader(KerberosAuthenticator.WWW_AUTHENTICATE, KerberosAuthenticator.NEGOTIATE);
+      response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+      if (authorization == null) {
+        LOG.trace("SPNEGO starting");
+      } else {
+        LOG.warn("'" + KerberosAuthenticator.AUTHORIZATION + "' does not start with '" +
+            KerberosAuthenticator.NEGOTIATE + "' :  {}", authorization);
+      }
+    } else {
+      authorization = authorization.substring(KerberosAuthenticator.NEGOTIATE.length()).trim();
+      final Base64 base64 = new Base64(0);
+      final byte[] clientToken = base64.decode(authorization);
+      Subject serverSubject = loginContext.getSubject();
+      try {
+        token = Subject.doAs(serverSubject, new PrivilegedExceptionAction<AuthenticationToken>() {
+
+          @Override
+          public AuthenticationToken run() throws Exception {
+            AuthenticationToken token = null;
+            GSSContext gssContext = null;
+            try {
+              gssContext = gssManager.createContext((GSSCredential) null);
+              byte[] serverToken = gssContext.acceptSecContext(clientToken, 0, clientToken.length);
+              if (serverToken != null && serverToken.length > 0) {
+                String authenticate = base64.encodeToString(serverToken);
+                response.setHeader(KerberosAuthenticator.WWW_AUTHENTICATE,
+                                   KerberosAuthenticator.NEGOTIATE + " " + authenticate);
+              }
+              if (!gssContext.isEstablished()) {
+                response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+                LOG.trace("SPNEGO in progress");
+              } else {
+                String clientPrincipal = gssContext.getSrcName().toString();
+                KerberosName kerberosName = new KerberosName(clientPrincipal);
+                String userName = kerberosName.getShortName();
+                token = new AuthenticationToken(userName, clientPrincipal, getType());
+                response.setStatus(HttpServletResponse.SC_OK);
+                LOG.trace("SPNEGO completed for principal [{}]", clientPrincipal);
+              }
+            } finally {
+              if (gssContext != null) {
+                gssContext.dispose();
+              }
+            }
+            return token;
+          }
+        });
+      } catch (PrivilegedActionException ex) {
+        if (ex.getException() instanceof IOException) {
+          throw (IOException) ex.getException();
+        }
+        else {
+          throw new AuthenticationException(ex.getException());
+        }
+      }
+    }
+    return token;
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java
new file mode 100644
index 0000000..1a2f98c
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java
@@ -0,0 +1,155 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.hadoop.security.authentication.client.PseudoAuthenticator;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.util.Properties;
+
+/**
+ * The <code>PseudoAuthenticationHandler</code> provides a pseudo authentication mechanism that accepts
+ * the user name specified as a query string parameter.
+ * <p/>
+ * This mimics the model of Hadoop Simple authentication which trusts the 'user.name' property provided in
+ * the configuration object.
+ * <p/>
+ * This handler can be configured to support anonymous users.
+ * <p/>
+ * The only supported configuration property is:
+ * <ul>
+ * <li>simple.anonymous.allowed: <code>true|false</code>, default value is <code>false</code></li>
+ * </ul>
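+ * <p/>
+ * For example (hypothetical values), a request such as
+ * <code>GET /app/resource?user.name=foo</code> is authenticated as user <code>foo</code>; with
+ * <code>simple.anonymous.allowed=true</code>, a request without <code>user.name</code> is
+ * accepted as the anonymous user.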
+ */
+public class PseudoAuthenticationHandler implements AuthenticationHandler {
+
+  /**
+   * Constant that identifies the authentication mechanism.
+   */
+  public static final String TYPE = "simple";
+
+  /**
+   * Constant for the configuration property that indicates if anonymous users are allowed.
+   */
+  public static final String ANONYMOUS_ALLOWED = TYPE + ".anonymous.allowed";
+
+  private boolean acceptAnonymous;
+
+  /**
+   * Initializes the authentication handler instance.
+   * <p/>
+   * This method is invoked by the {@link AuthenticationFilter#init} method.
+   *
+   * @param config configuration properties to initialize the handler.
+   *
+   * @throws ServletException thrown if the handler could not be initialized.
+   */
+  @Override
+  public void init(Properties config) throws ServletException {
+    acceptAnonymous = Boolean.parseBoolean(config.getProperty(ANONYMOUS_ALLOWED, "false"));
+  }
+
+  /**
+   * Returns whether the handler is configured to support anonymous users.
+   *
+   * @return <code>true</code> if the handler is configured to support anonymous users.
+   */
+  protected boolean getAcceptAnonymous() {
+    return acceptAnonymous;
+  }
+
+  /**
+   * Releases any resources initialized by the authentication handler.
+   * <p/>
+   * This implementation does a NOP.
+   */
+  @Override
+  public void destroy() {
+  }
+
+  /**
+   * Returns the authentication type of the authentication handler, 'simple'.
+   *
+   * @return the authentication type of the authentication handler, 'simple'.
+   */
+  @Override
+  public String getType() {
+    return TYPE;
+  }
+
+  /**
+   * This is an empty implementation; it always returns <code>TRUE</code>.
+   *
+   * @param token the authentication token if any, otherwise <code>NULL</code>.
+   * @param request the HTTP client request.
+   * @param response the HTTP client response.
+   *
+   * @return <code>TRUE</code>
+   * @throws IOException it is never thrown.
+   * @throws AuthenticationException it is never thrown.
+   */
+  @Override
+  public boolean managementOperation(AuthenticationToken token,
+                                     HttpServletRequest request,
+                                     HttpServletResponse response)
+    throws IOException, AuthenticationException {
+    return true;
+  }
+
+  /**
+   * Authenticates an HTTP client request.
+   * <p/>
+   * It extracts the {@link PseudoAuthenticator#USER_NAME} parameter from the query string and creates
+   * an {@link AuthenticationToken} with it.
+   * <p/>
+   * If the HTTP client request does not contain the {@link PseudoAuthenticator#USER_NAME} parameter and
+   * the handler is configured to allow anonymous users it returns the {@link AuthenticationToken#ANONYMOUS}
+   * token.
+   * <p/>
+   * If the HTTP client request does not contain the {@link PseudoAuthenticator#USER_NAME} parameter and
+   * the handler is configured to disallow anonymous users it throws an {@link AuthenticationException}.
+   *
+   * @param request the HTTP client request.
+   * @param response the HTTP client response.
+   *
+   * @return an authentication token if the HTTP client request is accepted and credentials are valid.
+   *
+   * @throws IOException thrown if an IO error occurred.
+   * @throws AuthenticationException thrown if HTTP client request was not accepted as an authentication request.
+   */
+  @Override
+  public AuthenticationToken authenticate(HttpServletRequest request, HttpServletResponse response)
+    throws IOException, AuthenticationException {
+    AuthenticationToken token;
+    String userName = request.getParameter(PseudoAuthenticator.USER_NAME);
+    if (userName == null) {
+      if (getAcceptAnonymous()) {
+        token = AuthenticationToken.ANONYMOUS;
+      } else {
+        throw new AuthenticationException("Anonymous requests are disallowed");
+      }
+    } else {
+      token = new AuthenticationToken(userName, userName, getType());
+    }
+    return token;
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
new file mode 100644
index 0000000..6c51186
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
@@ -0,0 +1,421 @@
+package org.apache.hadoop.security.authentication.util;
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * This class implements parsing and handling of Kerberos principal names. In
+ * particular, it splits them apart and translates them down into local
+ * operating system names.
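+ * <p/>
+ * For illustration only (EXAMPLE.COM is a hypothetical realm), the rules
+ * <code>RULE:[2:$1@$0](.*@EXAMPLE\.COM)s/@.*// DEFAULT</code> would map the principal
+ * <code>nn/host.example.com@EXAMPLE.COM</code> to the short name <code>nn</code>.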
+ */
+@SuppressWarnings("all")
+@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
+@InterfaceStability.Evolving
+public class KerberosName {
+  private static final Logger LOG = LoggerFactory.getLogger(KerberosName.class);
+
+  /** The first component of the name */
+  private final String serviceName;
+  /** The second component of the name. It may be null. */
+  private final String hostName;
+  /** The realm of the name. */
+  private final String realm;
+
+  /**
+   * A pattern that matches a Kerberos name with at most 2 components.
+   */
+  private static final Pattern nameParser =
+    Pattern.compile("([^/@]*)(/([^/@]*))?@([^/@]*)");
+
+  /**
+   * A pattern that matches a string without '$' and then a single
+   * parameter with $n.
+   */
+  private static Pattern parameterPattern =
+    Pattern.compile("([^$]*)(\\$(\\d*))?");
+
+  /**
+   * A pattern for parsing an auth_to_local rule.
+   */
+  private static final Pattern ruleParser =
+    Pattern.compile("\\s*((DEFAULT)|(RULE:\\[(\\d*):([^\\]]*)](\\(([^)]*)\\))?"+
+                    "(s/([^/]*)/([^/]*)/(g)?)?))");
+
+  /**
+   * A pattern that recognizes simple/non-simple names.
+   */
+  private static final Pattern nonSimplePattern = Pattern.compile("[/@]");
+
+  /**
+   * The list of translation rules.
+   */
+  private static List<Rule> rules;
+
+  private static String defaultRealm;
+
+  static {
+    try {
+      defaultRealm = KerberosUtil.getDefaultRealm();
+    } catch (Exception ke) {
+        LOG.debug("Kerberos krb5 configuration not found, setting default realm to empty");
+        defaultRealm="";
+    }
+  }
+
+  /**
+   * Create a name from the full Kerberos principal name.
+   * @param name full Kerberos principal name.
+   */
+  public KerberosName(String name) {
+    Matcher match = nameParser.matcher(name);
+    if (!match.matches()) {
+      if (name.contains("@")) {
+        throw new IllegalArgumentException("Malformed Kerberos name: " + name);
+      } else {
+        serviceName = name;
+        hostName = null;
+        realm = null;
+      }
+    } else {
+      serviceName = match.group(1);
+      hostName = match.group(3);
+      realm = match.group(4);
+    }
+  }
+
+  /**
+   * Get the configured default realm.
+   * @return the default realm from the krb5.conf
+   */
+  public String getDefaultRealm() {
+    return defaultRealm;
+  }
+
+  /**
+   * Put the name back together from the parts.
+   */
+  @Override
+  public String toString() {
+    StringBuilder result = new StringBuilder();
+    result.append(serviceName);
+    if (hostName != null) {
+      result.append('/');
+      result.append(hostName);
+    }
+    if (realm != null) {
+      result.append('@');
+      result.append(realm);
+    }
+    return result.toString();
+  }
+
+  /**
+   * Get the first component of the name.
+   * @return the first section of the Kerberos principal name
+   */
+  public String getServiceName() {
+    return serviceName;
+  }
+
+  /**
+   * Get the second component of the name.
+   * @return the second section of the Kerberos principal name, and may be null
+   */
+  public String getHostName() {
+    return hostName;
+  }
+
+  /**
+   * Get the realm of the name.
+   * @return the realm of the name, may be null
+   */
+  public String getRealm() {
+    return realm;
+  }
+
+  /**
+   * An encoding of a rule for translating kerberos names.
+   */
+  private static class Rule {
+    private final boolean isDefault;
+    private final int numOfComponents;
+    private final String format;
+    private final Pattern match;
+    private final Pattern fromPattern;
+    private final String toPattern;
+    private final boolean repeat;
+
+    Rule() {
+      isDefault = true;
+      numOfComponents = 0;
+      format = null;
+      match = null;
+      fromPattern = null;
+      toPattern = null;
+      repeat = false;
+    }
+
+    Rule(int numOfComponents, String format, String match, String fromPattern,
+         String toPattern, boolean repeat) {
+      isDefault = false;
+      this.numOfComponents = numOfComponents;
+      this.format = format;
+      this.match = match == null ? null : Pattern.compile(match);
+      this.fromPattern =
+        fromPattern == null ? null : Pattern.compile(fromPattern);
+      this.toPattern = toPattern;
+      this.repeat = repeat;
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder buf = new StringBuilder();
+      if (isDefault) {
+        buf.append("DEFAULT");
+      } else {
+        buf.append("RULE:[");
+        buf.append(numOfComponents);
+        buf.append(':');
+        buf.append(format);
+        buf.append(']');
+        if (match != null) {
+          buf.append('(');
+          buf.append(match);
+          buf.append(')');
+        }
+        if (fromPattern != null) {
+          buf.append("s/");
+          buf.append(fromPattern);
+          buf.append('/');
+          buf.append(toPattern);
+          buf.append('/');
+          if (repeat) {
+            buf.append('g');
+          }
+        }
+      }
+      return buf.toString();
+    }
+
+    /**
+     * Replace the numbered parameters of the form $n where n is from 1 to
+     * the length of params. Normal text is copied directly and $n is replaced
+     * by the corresponding parameter.
+     * @param format the string in which to replace the parameters
+     * @param params the list of parameters
+     * @return the generated string with the parameter references replaced.
+     * @throws BadFormatString thrown if the format string is invalid.
+     */
+    static String replaceParameters(String format,
+                                    String[] params) throws BadFormatString {
+      Matcher match = parameterPattern.matcher(format);
+      int start = 0;
+      StringBuilder result = new StringBuilder();
+      while (start < format.length() && match.find(start)) {
+        result.append(match.group(1));
+        String paramNum = match.group(3);
+        if (paramNum != null) {
+          try {
+            int num = Integer.parseInt(paramNum);
+            if (num < 0 || num >= params.length) {
+              throw new BadFormatString("index " + num + " from " + format +
+                                        " is outside of the valid range 0 to " +
+                                        (params.length - 1));
+            }
+            result.append(params[num]);
+          } catch (NumberFormatException nfe) {
+            throw new BadFormatString("bad format in username mapping in " +
+                                      paramNum, nfe);
+          }
+
+        }
+        start = match.end();
+      }
+      return result.toString();
+    }
+
+    /**
+     * Replace the matches of the from pattern in the base string with the value
+     * of the to string.
+     * @param base the string to transform
+     * @param from the pattern to look for in the base string
+     * @param to the string to replace matches of the pattern with
+     * @param repeat whether the substitution should be repeated
+     * @return the transformed string
+     */
+    static String replaceSubstitution(String base, Pattern from, String to,
+                                      boolean repeat) {
+      Matcher match = from.matcher(base);
+      if (repeat) {
+        return match.replaceAll(to);
+      } else {
+        return match.replaceFirst(to);
+      }
+    }
+
+    /**
+     * Try to apply this rule to the given name represented as a parameter
+     * array.
+     * @param params first element is the realm, second and later elements are
+     *        the components of the name "a/b@FOO" -> {"FOO", "a", "b"}
+     * @return the short name if this rule applies or null
+     * @throws IOException thrown if something is wrong with the rules
+     */
+    String apply(String[] params) throws IOException {
+      String result = null;
+      if (isDefault) {
+        if (defaultRealm.equals(params[0])) {
+          result = params[1];
+        }
+      } else if (params.length - 1 == numOfComponents) {
+        String base = replaceParameters(format, params);
+        if (match == null || match.matcher(base).matches()) {
+          if (fromPattern == null) {
+            result = base;
+          } else {
+            result = replaceSubstitution(base, fromPattern, toPattern, repeat);
+          }
+        }
+      }
+      if (result != null && nonSimplePattern.matcher(result).find()) {
+        throw new NoMatchingRule("Non-simple name " + result +
+                                 " after auth_to_local rule " + this);
+      }
+      return result;
+    }
+  }
+
+  static List<Rule> parseRules(String rules) {
+    List<Rule> result = new ArrayList<Rule>();
+    String remaining = rules.trim();
+    while (remaining.length() > 0) {
+      Matcher matcher = ruleParser.matcher(remaining);
+      if (!matcher.lookingAt()) {
+        throw new IllegalArgumentException("Invalid rule: " + remaining);
+      }
+      if (matcher.group(2) != null) {
+        result.add(new Rule());
+      } else {
+        result.add(new Rule(Integer.parseInt(matcher.group(4)),
+                            matcher.group(5),
+                            matcher.group(7),
+                            matcher.group(9),
+                            matcher.group(10),
+                            "g".equals(matcher.group(11))));
+      }
+      remaining = remaining.substring(matcher.end());
+    }
+    return result;
+  }
+
+  @SuppressWarnings("serial")
+  public static class BadFormatString extends IOException {
+    BadFormatString(String msg) {
+      super(msg);
+    }
+    BadFormatString(String msg, Throwable err) {
+      super(msg, err);
+    }
+  }
+
+  @SuppressWarnings("serial")
+  public static class NoMatchingRule extends IOException {
+    NoMatchingRule(String msg) {
+      super(msg);
+    }
+  }
+
+  /**
+   * Get the translation of the principal name into an operating system
+   * user name.
+   * @return the short name
+   * @throws IOException thrown if the name could not be translated into a short name.
+   */
+  public String getShortName() throws IOException {
+    String[] params;
+    if (hostName == null) {
+      // if it is already simple, just return it
+      if (realm == null) {
+        return serviceName;
+      }
+      params = new String[]{realm, serviceName};
+    } else {
+      params = new String[]{realm, serviceName, hostName};
+    }
+    for(Rule r: rules) {
+      String result = r.apply(params);
+      if (result != null) {
+        return result;
+      }
+    }
+    throw new NoMatchingRule("No rules applied to " + toString());
+  }
+
+  /**
+   * Set the rules.
+   * @param ruleString the rules string.
+   */
+  public static void setRules(String ruleString) {
+    rules = (ruleString != null) ? parseRules(ruleString) : null;
+  }
+
+  /**
+   * Get the rules.
+   * @return String of configured rules, or null if not yet configured
+   */
+  public static String getRules() {
+    String ruleString = null;
+    if (rules != null) {
+      StringBuilder sb = new StringBuilder();
+      for (Rule rule : rules) {
+        sb.append(rule.toString()).append("\n");
+      }
+      ruleString = sb.toString().trim();
+    }
+    return ruleString;
+  }
+  
+  /**
+   * Indicates whether the name rules have been set.
+   *
+   * @return <code>true</code> if the name rules have been set.
+   */
+  public static boolean hasRulesBeenSet() {
+    return rules != null;
+  }
+  
+  static void printRules() throws IOException {
+    int i = 0;
+    for(Rule r: rules) {
+      System.out.println(++i + " " + r);
+    }
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
new file mode 100644
index 0000000..428435d
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+import java.lang.reflect.Field;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.util.Locale;
+
+import org.ietf.jgss.GSSException;
+import org.ietf.jgss.Oid;
+
+public class KerberosUtil {
+
+  /* Return the Kerberos login module name */
+  public static String getKrb5LoginModuleName() {
+    return System.getProperty("java.vendor").contains("IBM")
+      ? "com.ibm.security.auth.module.Krb5LoginModule"
+      : "com.sun.security.auth.module.Krb5LoginModule";
+  }
+  
+  public static Oid getOidInstance(String oidName) 
+      throws ClassNotFoundException, GSSException, NoSuchFieldException,
+      IllegalAccessException {
+    Class<?> oidClass;
+    if (System.getProperty("java.vendor").contains("IBM")) {
+      oidClass = Class.forName("com.ibm.security.jgss.GSSUtil");
+    } else {
+      oidClass = Class.forName("sun.security.jgss.GSSUtil");
+    }
+    Field oidField = oidClass.getDeclaredField(oidName);
+    return (Oid)oidField.get(oidClass);
+  }
+
+  public static String getDefaultRealm() 
+      throws ClassNotFoundException, NoSuchMethodException, 
+      IllegalArgumentException, IllegalAccessException, 
+      InvocationTargetException {
+    Object kerbConf;
+    Class<?> classRef;
+    Method getInstanceMethod;
+    Method getDefaultRealmMethod;
+    if (System.getProperty("java.vendor").contains("IBM")) {
+      classRef = Class.forName("com.ibm.security.krb5.internal.Config");
+    } else {
+      classRef = Class.forName("sun.security.krb5.Config");
+    }
+    getInstanceMethod = classRef.getMethod("getInstance", new Class[0]);
+    kerbConf = getInstanceMethod.invoke(classRef, new Object[0]);
+    getDefaultRealmMethod = classRef.getDeclaredMethod("getDefaultRealm",
+         new Class[0]);
+    return (String)getDefaultRealmMethod.invoke(kerbConf, new Object[0]);
+  }
+  
+  /* Return fqdn of the current host */
+  static String getLocalHostName() throws UnknownHostException {
+    return InetAddress.getLocalHost().getCanonicalHostName();
+  }
+  
+  /**
+   * Create Kerberos principal for a given service and hostname. It converts
+   * hostname to lower case. If hostname is null or "0.0.0.0", it uses
+   * dynamically looked-up fqdn of the current host instead.
+   * 
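+   * For example (hypothetical host name),
+   * <code>getServicePrincipal("HTTP", "MyHost.Example.COM")</code> returns
+   * <code>"HTTP/myhost.example.com"</code>.
+   *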
+   * @param service
+   *          Service for which you want to generate the principal.
+   * @param hostname
+   *          Fully-qualified domain name.
+   * @return Converted Kerberos principal name.
+   * @throws UnknownHostException
+   *           If no IP address for the local host could be found.
+   */
+  public static final String getServicePrincipal(String service, String hostname)
+      throws UnknownHostException {
+    String fqdn = hostname;
+    if (null == fqdn || fqdn.equals("") || fqdn.equals("0.0.0.0")) {
+      fqdn = getLocalHostName();
+    }
+    // convert hostname to lowercase as kerberos does not work with hostnames
+    // with uppercase characters.
+    return service + "/" + fqdn.toLowerCase(Locale.US);
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/Signer.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/Signer.java
new file mode 100644
index 0000000..10c9a8e
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/Signer.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+import org.apache.commons.codec.binary.Base64;
+
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * Signs strings and verifies signed strings using a SHA digest.
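+ * <p/>
+ * A minimal usage sketch (illustrative values only):
+ * <pre>
+ *   Signer signer = new Signer("a-secret".getBytes());
+ *   String signed = signer.sign("some-token-string");
+ *   String original = signer.verifyAndExtract(signed); // returns "some-token-string"
+ * </pre>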
+ */
+public class Signer {
+  private static final String SIGNATURE = "&s=";
+
+  private byte[] secret;
+
+  /**
+   * Creates a Signer instance using the specified secret.
+   *
+   * @param secret secret to use for creating the digest.
+   */
+  public Signer(byte[] secret) {
+    if (secret == null) {
+      throw new IllegalArgumentException("secret cannot be NULL");
+    }
+    this.secret = secret.clone();
+  }
+
+  /**
+   * Returns a signed string.
+   * <p/>
+   * The signature '&s=SIGNATURE' is appended at the end of the string.
+   *
+   * @param str string to sign.
+   *
+   * @return the signed string.
+   */
+  public String sign(String str) {
+    if (str == null || str.length() == 0) {
+      throw new IllegalArgumentException("NULL or empty string to sign");
+    }
+    String signature = computeSignature(str);
+    return str + SIGNATURE + signature;
+  }
+
+  /**
+   * Verifies a signed string and extracts the original string.
+   *
+   * @param signedStr the signed string to verify and extract.
+   *
+   * @return the extracted original string.
+   *
+   * @throws SignerException thrown if the given string is not a signed string or if the signature is invalid.
+   */
+  public String verifyAndExtract(String signedStr) throws SignerException {
+    int index = signedStr.lastIndexOf(SIGNATURE);
+    if (index == -1) {
+      throw new SignerException("Invalid signed text: " + signedStr);
+    }
+    String originalSignature = signedStr.substring(index + SIGNATURE.length());
+    String rawValue = signedStr.substring(0, index);
+    String currentSignature = computeSignature(rawValue);
+    if (!originalSignature.equals(currentSignature)) {
+      throw new SignerException("Invalid signature");
+    }
+    return rawValue;
+  }
+
+  /**
+   * Returns the signature of a string.
+   *
+   * @param str string to sign.
+   *
+   * @return the signature for the string.
+   */
+  protected String computeSignature(String str) {
+    try {
+      MessageDigest md = MessageDigest.getInstance("SHA");
+      md.update(str.getBytes());
+      md.update(secret);
+      byte[] digest = md.digest();
+      return new Base64(0).encodeToString(digest);
+    } catch (NoSuchAlgorithmException ex) {
+      throw new RuntimeException("It should not happen, " + ex.getMessage(), ex);
+    }
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerException.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerException.java
new file mode 100644
index 0000000..faf2007
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerException.java
@@ -0,0 +1,31 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+/**
+ * Exception thrown by {@link Signer} when a string signature is invalid.
+ */
+public class SignerException extends Exception {
+  
+  static final long serialVersionUID = 0;
+
+  /**
+   * Creates an exception instance.
+   *
+   * @param msg message for the exception.
+   */
+  public SignerException(String msg) {
+    super(msg);
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm
new file mode 100644
index 0000000..a2e015a
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm
@@ -0,0 +1,75 @@
+~~ Licensed under the Apache License, Version 2.0 (the "License");
+~~ you may not use this file except in compliance with the License.
+~~ You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing, software
+~~ distributed under the License is distributed on an "AS IS" BASIS,
+~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+~~ See the License for the specific language governing permissions and
+~~ limitations under the License. See accompanying LICENSE file.
+
+  ---
+  Hadoop Auth, Java HTTP SPNEGO ${project.version} - Building It
+  ---
+  ---
+  ${maven.build.timestamp}
+
+Hadoop Auth, Java HTTP SPNEGO ${project.version} - Building It
+
+  \[ {{{./index.html}Go Back}} \]
+
+* Requirements
+
+  * Java 6+
+
+  * Maven 3+
+
+  * Kerberos KDC (for running Kerberos test cases)
+
+* Building
+
+  Use Maven goals: clean, test, compile, package, install
+
+  Available profiles: docs, testKerberos
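+
+  For example, a full build that installs the artifacts in the local Maven
+  cache (one typical invocation, not the only one) is:
+
++---+
+$ mvn clean install
++---+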
+
+* Testing
+
+  By default Kerberos testcases are not run.
+
+  The requirements to run the Kerberos testcases are a running KDC, a keytab
+  file with a client principal, and a Kerberos principal.
+
+  To run the Kerberos testcases, use the <<<testKerberos>>> Maven profile:
+
++---+
+$ mvn test -PtestKerberos
++---+
+
+  The following Maven <<<-D>>> options can be used to change the default
+  values:
+
+  * <<<hadoop-auth.test.kerberos.realm>>>: default value <<LOCALHOST>>
+
+  * <<<hadoop-auth.test.kerberos.client.principal>>>: default value <<client>>
+
+  * <<<hadoop-auth.test.kerberos.server.principal>>>: default value
+    <<HTTP/localhost>> (it must start with 'HTTP/')
+
+  * <<<hadoop-auth.test.kerberos.keytab.file>>>: default value
+    <<${HOME}/${USER}.keytab>>
+
+** Generating Documentation
+
+  To create the documentation use the <<<docs>>> Maven profile:
+
++---+
+$ mvn package -Pdocs
++---+
+
+  The generated documentation is available at
+  <<<hadoop-auth/target/site/>>>.
+
+  \[ {{{./index.html}Go Back}} \]
+
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
new file mode 100644
index 0000000..f2fe11d
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
@@ -0,0 +1,248 @@
+~~ Licensed under the Apache License, Version 2.0 (the "License");
+~~ you may not use this file except in compliance with the License.
+~~ You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing, software
+~~ distributed under the License is distributed on an "AS IS" BASIS,
+~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+~~ See the License for the specific language governing permissions and
+~~ limitations under the License. See accompanying LICENSE file.
+
+  ---
+  Hadoop Auth, Java HTTP SPNEGO ${project.version} - Server Side
+  Configuration
+  ---
+  ---
+  ${maven.build.timestamp}
+
+Hadoop Auth, Java HTTP SPNEGO ${project.version} - Server Side
+Configuration
+
+  \[ {{{./index.html}Go Back}} \]
+
+* Server Side Configuration Setup
+
+  The {{{./apidocs/org/apache/hadoop/auth/server/AuthenticationFilter.html}
+  AuthenticationFilter filter}} is Hadoop Auth's server side component.
+
+  This filter must be configured in front of all the web application resources
+  that require authenticated requests.
+
+  The Hadoop Auth and dependent JAR files must be in the web application
+  classpath (commonly the <<<WEB-INF/lib>>> directory).
+
+  Hadoop Auth uses SLF4J-API for logging. The Hadoop Auth Maven POM declares the
+  SLF4J API dependency but does not declare a dependency on a concrete logging
+  implementation; this must be added explicitly to the web application. For
+  example, if the web application uses Log4j, the SLF4J-LOG4J12 and LOG4J jar
+  files must be part of the web application classpath, as well as the Log4j
+  configuration file.
+
+** Common Configuration parameters
+
+  * <<<config.prefix>>>: If specified, all other configuration parameter names
+    must start with the prefix. The default value is no prefix.
+
+  * <<<[PREFIX.]type>>>: the authentication type keyword (<<<simple>>> or
+    <<<kerberos>>>) or a
+    {{{./apidocs/org/apache/hadoop/auth/server/AuthenticationHandler.html}
+    Authentication handler implementation}}.
+
+  * <<<[PREFIX.]signature.secret>>>: The secret to SHA-sign the generated
+    authentication tokens. If a secret is not provided, a random secret is
+    generated at startup time. If multiple web application instances are used
+    behind a load-balancer, a secret must be set for the application to work
+    properly.
+
+  * <<<[PREFIX.]token.validity>>>: The validity (in seconds) of the generated
+    authentication token. The default value is <<<3600>>> seconds.
+
+  * <<<[PREFIX.]cookie.domain>>>: domain to use for the HTTP cookie that stores
+    the authentication token.
+
+  * <<<[PREFIX.]cookie.path>>>: path to use for the HTTP cookie that stores the
+    authentication token.
+
+** Kerberos Configuration
+
+  <<IMPORTANT>>: A KDC must be configured and running.
+
+  To use Kerberos SPNEGO as the authentication mechanism, the authentication
+  filter must be configured with the following init parameters:
+
+    * <<<[PREFIX.]type>>>: the keyword <<<kerberos>>>.
+
+    * <<<[PREFIX.]kerberos.principal>>>: The web-application Kerberos principal
+      name. The Kerberos principal name must start with <<<HTTP/...>>>. For
+      example: <<<HTTP/localhost@LOCALHOST>>>.  There is no default value.
+
+    * <<<[PREFIX.]kerberos.keytab>>>: The path to the keytab file containing
+      the credentials for the kerberos principal. For example:
+      <<</Users/tucu/tucu.keytab>>>. There is no default value.
+
+  <<Example>>:
+
++---+
+<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+    ...
+
+    <filter>
+        <filter-name>kerberosFilter</filter-name>
+        <filter-class>org.apache.hadoop.security.auth.server.AuthenticationFilter</filter-class>
+        <init-param>
+            <param-name>type</param-name>
+            <param-value>kerberos</param-value>
+        </init-param>
+        <init-param>
+            <param-name>token.validity</param-name>
+            <param-value>30</param-value>
+        </init-param>
+        <init-param>
+            <param-name>cookie.domain</param-name>
+            <param-value>.foo.com</param-value>
+        </init-param>
+        <init-param>
+            <param-name>cookie.path</param-name>
+            <param-value>/</param-value>
+        </init-param>
+        <init-param>
+            <param-name>kerberos.principal</param-name>
+            <param-value>HTTP/localhost@LOCALHOST</param-value>
+        </init-param>
+        <init-param>
+            <param-name>kerberos.keytab</param-name>
+            <param-value>/tmp/auth.keytab</param-value>
+        </init-param>
+    </filter>
+
+    <filter-mapping>
+        <filter-name>kerberosFilter</filter-name>
+        <url-pattern>/kerberos/*</url-pattern>
+    </filter-mapping>
+
+    ...
+</web-app>
++---+
+
+** Pseudo/Simple Configuration
+
+  To use Pseudo/Simple as the authentication mechanism (trusting the value of
+  the query string parameter 'user.name'), the authentication filter must be
+  configured with the following init parameters:
+
+    * <<<[PREFIX.]type>>>: the keyword <<<simple>>>.
+
+    * <<<[PREFIX.]simple.anonymous.allowed>>>: is a boolean parameter that
+      indicates if anonymous requests are allowed or not. The default value is
+      <<<false>>>.
+
+  <<Example>>:
+
++---+
+<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+    ...
+
+    <filter>
+        <filter-name>simpleFilter</filter-name>
+        <filter-class>org.apache.hadoop.security.auth.server.AuthenticationFilter</filter-class>
+        <init-param>
+            <param-name>type</param-name>
+            <param-value>simple</param-value>
+        </init-param>
+        <init-param>
+            <param-name>token.validity</param-name>
+            <param-value>30</param-value>
+        </init-param>
+        <init-param>
+            <param-name>cookie.domain</param-name>
+            <param-value>.foo.com</param-value>
+        </init-param>
+        <init-param>
+            <param-name>cookie.path</param-name>
+            <param-value>/</param-value>
+        </init-param>
+        <init-param>
+            <param-name>simple.anonymous.allowed</param-name>
+            <param-value>false</param-value>
+        </init-param>
+    </filter>
+
+    <filter-mapping>
+        <filter-name>simpleFilter</filter-name>
+        <url-pattern>/simple/*</url-pattern>
+    </filter-mapping>
+
+    ...
+</web-app>
++---+
+
+** AltKerberos Configuration
+
+  <<IMPORTANT>>: A KDC must be configured and running.
+
+  The AltKerberos authentication mechanism is a partially implemented derivative
+  of the Kerberos SPNEGO authentication mechanism which allows a "mixed" form of
+  authentication where Kerberos SPNEGO is used by non-browsers while an
+  alternate form of authentication (to be implemented by the user) is used for
+  browsers.  To use AltKerberos as the authentication mechanism (besides
+  providing an implementation), the authentication filter must be configured
+  with the following init parameters, in addition to the previously mentioned
+  Kerberos SPNEGO ones:
+
+    * <<<[PREFIX.]type>>>: the full class name of the implementation of
+      AltKerberosAuthenticationHandler to use.
+
+    * <<<[PREFIX.]alt-kerberos.non-browser.user-agents>>>: a comma-separated
+      list of which user-agents should be considered non-browsers.
+
+  <<Example>>:
+
++---+
+<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+    ...
+
+    <filter>
+        <filter-name>kerberosFilter</filter-name>
+        <filter-class>org.apache.hadoop.security.auth.server.AuthenticationFilter</filter-class>
+        <init-param>
+            <param-name>type</param-name>
+            <param-value>org.my.subclass.of.AltKerberosAuthenticationHandler</param-value>
+        </init-param>
+        <init-param>
+            <param-name>alt-kerberos.non-browser.user-agents</param-name>
+            <param-value>java,curl,wget,perl</param-value>
+        </init-param>
+        <init-param>
+            <param-name>token.validity</param-name>
+            <param-value>30</param-value>
+        </init-param>
+        <init-param>
+            <param-name>cookie.domain</param-name>
+            <param-value>.foo.com</param-value>
+        </init-param>
+        <init-param>
+            <param-name>cookie.path</param-name>
+            <param-value>/</param-value>
+        </init-param>
+        <init-param>
+            <param-name>kerberos.principal</param-name>
+            <param-value>HTTP/localhost@LOCALHOST</param-value>
+        </init-param>
+        <init-param>
+            <param-name>kerberos.keytab</param-name>
+            <param-value>/tmp/auth.keytab</param-value>
+        </init-param>
+    </filter>
+
+    <filter-mapping>
+        <filter-name>kerberosFilter</filter-name>
+        <url-pattern>/kerberos/*</url-pattern>
+    </filter-mapping>
+
+    ...
+</web-app>
++---+
+
+  \[ {{{./index.html}Go Back}} \]
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm
new file mode 100644
index 0000000..7070862
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm
@@ -0,0 +1,137 @@
+~~ Licensed under the Apache License, Version 2.0 (the "License");
+~~ you may not use this file except in compliance with the License.
+~~ You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing, software
+~~ distributed under the License is distributed on an "AS IS" BASIS,
+~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+~~ See the License for the specific language governing permissions and
+~~ limitations under the License. See accompanying LICENSE file.
+
+  ---
+  Hadoop Auth, Java HTTP SPNEGO ${project.version} - Examples
+  ---
+  ---
+  ${maven.build.timestamp}
+
+Hadoop Auth, Java HTTP SPNEGO ${project.version} - Examples
+
+  \[ {{{./index.html}Go Back}} \]
+
+* Accessing a Hadoop Auth protected URL Using a browser
+
+  <<IMPORTANT:>> The browser must support HTTP Kerberos SPNEGO. For example,
+  Firefox or Internet Explorer.
+
+  For Firefox, access the low level configuration page by loading the
+  <<<about:config>>> page. Then go to the
+  <<<network.negotiate-auth.trusted-uris>>> preference and add the hostname or
+  the domain of the web server that is HTTP Kerberos SPNEGO protected (if using
+  multiple domains and hostnames, use a comma to separate them).
+  
+* Accessing a Hadoop Auth protected URL Using <<<curl>>>
+
+  <<IMPORTANT:>> The <<<curl>>> version must support GSS; run <<<curl -V>>> to verify.
+
++---+
+$ curl -V
+curl 7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3
+Protocols: tftp ftp telnet dict ldap http file https ftps
+Features: GSS-Negotiate IPv6 Largefile NTLM SSL libz
++---+
+
+  Log in to the KDC using <<<kinit>>> and then use <<<curl>>> to fetch the
+  protected URL:
+
++---+
+$ kinit
+Please enter the password for tucu@LOCALHOST:
+$ curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt http://localhost:8080/hadoop-auth-examples/kerberos/who
+Enter host password for user 'tucu':
+
+Hello Hadoop Auth Examples!
++---+
+
+  * The <<<--negotiate>>> option enables SPNEGO in <<<curl>>>.
+
+  * The <<<-u foo>>> option is required but the user is ignored (the principal
+    that has been kinit-ed is used).
+
+  * The <<<-b>>> and <<<-c>>> options are used to store and send HTTP cookies.
+
+* Using the Java Client
+
+  Use the <<<AuthenticatedURL>>> class to obtain an authenticated HTTP
+  connection:
+
++---+
+...
+URL url = new URL("http://localhost:8080/hadoop-auth/kerberos/who");
+AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+...
+HttpURLConnection conn = new AuthenticatedURL(url, token).openConnection();
+...
+conn = new AuthenticatedURL(url, token).openConnection();
+...
++---+
+
+* Building and Running the Examples
+
+  Download Hadoop Auth's source code; the examples are in the
+  <<<src/main/examples>>> directory.
+
+** Server Example:
+
+  Edit the <<<hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml>>> and set the
+  correct configuration init parameters for the <<<AuthenticationFilter>>>
+  definition configured for Kerberos (the Kerberos principal and keytab
+  file must be specified). Refer to the {{{./Configuration.html}Configuration
+  document}} for details.
+
+  Create the web application WAR file by running the <<<mvn package>>> command.
+
+  Deploy the WAR file in a servlet container. For example, if using Tomcat,
+  copy the WAR file to Tomcat's <<<webapps/>>> directory.
+
+  Start the servlet container.
+
+** Accessing the server using <<<curl>>>
+
+  Try accessing protected resources using <<<curl>>>. The protected resources
+  are:
+
++---+
+$ kinit
+Please enter the password for tucu@LOCALHOST:
+
+$ curl http://localhost:8080/hadoop-auth-examples/anonymous/who
+
+$ curl http://localhost:8080/hadoop-auth-examples/simple/who?user.name=foo
+
+$ curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt http://localhost:8080/hadoop-auth-examples/kerberos/who
++---+
+
+** Accessing the server using the Java client example
+
++---+
+$ kinit
+Please enter the password for tucu@LOCALHOST:
+
+$ cd examples
+
+$ mvn exec:java -Durl=http://localhost:8080/hadoop-auth-examples/kerberos/who
+
+....
+
+Token value: "u=tucu,p=tucu@LOCALHOST,t=kerberos,e=1295305313146,s=sVZ1mpSnC5TKhZQE3QLN5p2DWBo="
+Status code: 200 OK
+
+You are: user[tucu] principal[tucu@LOCALHOST]
+
+....
+
++---+
+
+  \[ {{{./index.html}Go Back}} \]
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
new file mode 100644
index 0000000..26fc249
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
@@ -0,0 +1,58 @@
+~~ Licensed under the Apache License, Version 2.0 (the "License");
+~~ you may not use this file except in compliance with the License.
+~~ You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing, software
+~~ distributed under the License is distributed on an "AS IS" BASIS,
+~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+~~ See the License for the specific language governing permissions and
+~~ limitations under the License. See accompanying LICENSE file.
+
+  ---
+  Hadoop Auth, Java HTTP SPNEGO ${project.version}
+  ---
+  ---
+  ${maven.build.timestamp}
+
+Hadoop Auth, Java HTTP SPNEGO ${project.version}
+
+  Hadoop Auth is a Java library consisting of client and server
+  components that enable Kerberos SPNEGO authentication for HTTP.
+
+  Hadoop Auth also supports additional authentication mechanisms on the client
+  and the server side via two simple interfaces.
+
+  Additionally, it provides a partially implemented derivative of the Kerberos
+  SPNEGO authentication to allow a "mixed" form of authentication where Kerberos
+  SPNEGO is used by non-browsers while an alternate form of authentication
+  (to be implemented by the user) is used for browsers.
+
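+  A minimal sketch of such a subclass is shown below. It is hypothetical: the
+  class name and the request-parameter check are illustrative placeholders, and
+  a real implementation would plug in an actual browser-oriented mechanism
+  (for example, a form login). Only <<<alternateAuthenticate()>>> has to be
+  provided; non-browser clients are still handled by the inherited Kerberos
+  SPNEGO code.
+
++---+
+import java.io.IOException;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.hadoop.security.authentication.server.AltKerberosAuthenticationHandler;
+import org.apache.hadoop.security.authentication.server.AuthenticationToken;
+
+public class MyAltAuthenticationHandler extends AltKerberosAuthenticationHandler {
+
+  @Override
+  public AuthenticationToken alternateAuthenticate(HttpServletRequest request,
+                                                   HttpServletResponse response)
+      throws IOException, AuthenticationException {
+    // Illustrative only: accept a user name passed as a request parameter.
+    String user = request.getParameter("user.name");
+    if (user == null) {
+      // No credentials yet: challenge the client and keep authentication open.
+      response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+      return null;
+    }
+    return new AuthenticationToken(user, user, getType());
+  }
+}
++---+
+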
+* License
+
+  Hadoop Auth is distributed under the
+  {{{http://www.apache.org/licenses/}Apache License 2.0}}.
+
+* How Does Auth Work?
+
+  Hadoop Auth enforces authentication on protected resources; once authentication
+  has been established, it sets a signed HTTP Cookie that contains an
+  authentication token with the user name, user principal, authentication type
+  and expiration time.
+
+  Subsequent HTTP client requests presenting the signed HTTP Cookie have access
+  to the protected resources until the HTTP Cookie expires.
+
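+  For illustration, the signed cookie carries a simple string of key/value
+  pairs; the value below is the one printed by the Java client example in the
+  {{{./Examples.html}Examples}} document. <<<u>>> is the user name, <<<p>>> the
+  user principal, <<<t>>> the authentication type, <<<e>>> the expiration time
+  in milliseconds and <<<s>>> the signature:
+
++---+
+u=tucu,p=tucu@LOCALHOST,t=kerberos,e=1295305313146,s=sVZ1mpSnC5TKhZQE3QLN5p2DWBo=
++---+
+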
+* User Documentation
+
+  * {{{./Examples.html}Examples}}
+
+  * {{{./Configuration.html}Configuration}}
+
+  * {{{./BuildingIt.html}Building It}}
+
+  * {{{./apidocs/index.html}JavaDocs}}
+
+  * {{{./dependencies.html}Dependencies}}
+
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/resources/css/site.css b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/resources/css/site.css
new file mode 100644
index 0000000..f830baa
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/resources/css/site.css
@@ -0,0 +1,30 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements.  See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+#banner {
+  height: 93px;
+  background: none;
+}
+
+#bannerLeft img {
+  margin-left: 30px;
+  margin-top: 10px;
+}
+
+#bannerRight img {
+  margin: 17px;
+}
+
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/site.xml b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/site.xml
new file mode 100644
index 0000000..2b3a512
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/site/site.xml
@@ -0,0 +1,28 @@
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<project name="Hadoop Auth">
+
+  <skin>
+    <groupId>org.apache.maven.skins</groupId>
+    <artifactId>maven-stylus-skin</artifactId>
+    <version>1.2</version>
+  </skin>
+
+  <body>
+    <links>
+      <item name="Apache Hadoop" href="http://hadoop.apache.org/"/>
+    </links>
+  </body>
+
+</project>
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/KerberosTestUtils.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/KerberosTestUtils.java
new file mode 100644
index 0000000..ea0f17f
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/KerberosTestUtils.java
@@ -0,0 +1,131 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication;
+
+
+import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosPrincipal;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import javax.security.auth.login.LoginContext;
+
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+
+import java.io.File;
+import java.security.Principal;
+import java.security.PrivilegedActionException;
+import java.security.PrivilegedExceptionAction;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.Callable;
+
+/**
+ * Test helper class for Java Kerberos setup.
+ */
+public class KerberosTestUtils {
+  private static final String PREFIX = "hadoop-auth.test.";
+
+  public static final String REALM = PREFIX + "kerberos.realm";
+
+  public static final String CLIENT_PRINCIPAL = PREFIX + "kerberos.client.principal";
+
+  public static final String SERVER_PRINCIPAL = PREFIX + "kerberos.server.principal";
+
+  public static final String KEYTAB_FILE = PREFIX + "kerberos.keytab.file";
+
+  public static String getRealm() {
+    return System.getProperty(REALM, "LOCALHOST");
+  }
+
+  public static String getClientPrincipal() {
+    return System.getProperty(CLIENT_PRINCIPAL, "client") + "@" + getRealm();
+  }
+
+  public static String getServerPrincipal() {
+    return System.getProperty(SERVER_PRINCIPAL, "HTTP/localhost") + "@" + getRealm();
+  }
+
+  public static String getKeytabFile() {
+    String keytabFile =
+      new File(System.getProperty("user.home"), System.getProperty("user.name") + ".keytab").toString();
+    return System.getProperty(KEYTAB_FILE, keytabFile);
+  }
+
+  private static class KerberosConfiguration extends Configuration {
+    private String principal;
+
+    public KerberosConfiguration(String principal) {
+      this.principal = principal;
+    }
+
+    @Override
+    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
+      Map<String, String> options = new HashMap<String, String>();
+      options.put("keyTab", KerberosTestUtils.getKeytabFile());
+      options.put("principal", principal);
+      options.put("useKeyTab", "true");
+      options.put("storeKey", "true");
+      options.put("doNotPrompt", "true");
+      options.put("useTicketCache", "true");
+      options.put("renewTGT", "true");
+      options.put("refreshKrb5Config", "true");
+      options.put("isInitiator", "true");
+      String ticketCache = System.getenv("KRB5CCNAME");
+      if (ticketCache != null) {
+        options.put("ticketCache", ticketCache);
+      }
+      options.put("debug", "true");
+
+      return new AppConfigurationEntry[]{
+        new AppConfigurationEntry(KerberosUtil.getKrb5LoginModuleName(),
+                                  AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
+                                  options),};
+    }
+  }
+
+  public static <T> T doAs(String principal, final Callable<T> callable) throws Exception {
+    LoginContext loginContext = null;
+    try {
+      Set<Principal> principals = new HashSet<Principal>();
+      principals.add(new KerberosPrincipal(KerberosTestUtils.getClientPrincipal()));
+      Subject subject = new Subject(false, principals, new HashSet<Object>(), new HashSet<Object>());
+      loginContext = new LoginContext("", subject, null, new KerberosConfiguration(principal));
+      loginContext.login();
+      subject = loginContext.getSubject();
+      return Subject.doAs(subject, new PrivilegedExceptionAction<T>() {
+        @Override
+        public T run() throws Exception {
+          return callable.call();
+        }
+      });
+    } catch (PrivilegedActionException ex) {
+      throw ex.getException();
+    } finally {
+      if (loginContext != null) {
+        loginContext.logout();
+      }
+    }
+  }
+
+  public static <T> T doAsClient(Callable<T> callable) throws Exception {
+    return doAs(getClientPrincipal(), callable);
+  }
+
+  public static <T> T doAsServer(Callable<T> callable) throws Exception {
+    return doAs(getServerPrincipal(), callable);
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java
new file mode 100644
index 0000000..6059d8c
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java
@@ -0,0 +1,171 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+import junit.framework.Assert;
+import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
+import junit.framework.TestCase;
+import org.mockito.Mockito;
+import org.mortbay.jetty.Server;
+import org.mortbay.jetty.servlet.Context;
+import org.mortbay.jetty.servlet.FilterHolder;
+import org.mortbay.jetty.servlet.ServletHolder;
+
+import javax.servlet.FilterConfig;
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.net.HttpURLConnection;
+import java.net.ServerSocket;
+import java.net.URL;
+import java.util.Properties;
+
+public abstract class AuthenticatorTestCase extends TestCase {
+  private Server server;
+  private String host = null;
+  private int port = -1;
+  Context context;
+
+  private static Properties authenticatorConfig;
+
+  protected static void setAuthenticationHandlerConfig(Properties config) {
+    authenticatorConfig = config;
+  }
+
+  public static class TestFilter extends AuthenticationFilter {
+
+    @Override
+    protected Properties getConfiguration(String configPrefix, FilterConfig filterConfig) throws ServletException {
+      return authenticatorConfig;
+    }
+  }
+
+  @SuppressWarnings("serial")
+  public static class TestServlet extends HttpServlet {
+
+    @Override
+    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
+      resp.setStatus(HttpServletResponse.SC_OK);
+    }
+
+    @Override
+    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
+      InputStream is = req.getInputStream();
+      OutputStream os = resp.getOutputStream();
+      int c = is.read();
+      while (c > -1) {
+        os.write(c);
+        c = is.read();
+      }
+      is.close();
+      os.close();
+      resp.setStatus(HttpServletResponse.SC_OK);
+    }
+  }
+
+  protected void start() throws Exception {
+    server = new Server(0);
+    context = new Context();
+    context.setContextPath("/foo");
+    server.setHandler(context);
+    context.addFilter(new FilterHolder(TestFilter.class), "/*", 0);
+    context.addServlet(new ServletHolder(TestServlet.class), "/bar");
+    host = "localhost";
+    ServerSocket ss = new ServerSocket(0);
+    port = ss.getLocalPort();
+    ss.close();
+    server.getConnectors()[0].setHost(host);
+    server.getConnectors()[0].setPort(port);
+    server.start();
+    System.out.println("Running embedded servlet container at: http://" + host + ":" + port);
+  }
+
+  protected void stop() throws Exception {
+    try {
+      server.stop();
+    } catch (Exception e) {
+      // ignore failures while stopping the embedded test server
+    }
+
+    try {
+      server.destroy();
+    } catch (Exception e) {
+      // ignore failures while destroying the embedded test server
+    }
+  }
+
+  protected String getBaseURL() {
+    return "http://" + host + ":" + port + "/foo/bar";
+  }
+
+  private static class TestConnectionConfigurator
+      implements ConnectionConfigurator {
+    boolean invoked;
+
+    @Override
+    public HttpURLConnection configure(HttpURLConnection conn)
+        throws IOException {
+      invoked = true;
+      return conn;
+    }
+  }
+
+  private String POST = "test";
+
+  protected void _testAuthentication(Authenticator authenticator, boolean doPost) throws Exception {
+    start();
+    try {
+      URL url = new URL(getBaseURL());
+      AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+      Assert.assertFalse(token.isSet());
+      TestConnectionConfigurator connConf = new TestConnectionConfigurator();
+      AuthenticatedURL aUrl = new AuthenticatedURL(authenticator, connConf);
+      HttpURLConnection conn = aUrl.openConnection(url, token);
+      Assert.assertTrue(token.isSet());
+      Assert.assertTrue(connConf.invoked);
+      String tokenStr = token.toString();
+      if (doPost) {
+        conn.setRequestMethod("POST");
+        conn.setDoOutput(true);
+      }
+      conn.connect();
+      if (doPost) {
+        Writer writer = new OutputStreamWriter(conn.getOutputStream());
+        writer.write(POST);
+        writer.close();
+      }
+      assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+      if (doPost) {
+        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
+        String echo = reader.readLine();
+        assertEquals(POST, echo);
+        assertNull(reader.readLine());
+      }
+      aUrl = new AuthenticatedURL();
+      conn = aUrl.openConnection(url, token);
+      conn.connect();
+      assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+      assertEquals(tokenStr, token.toString());
+    } finally {
+      stop();
+    }
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestAuthenticatedURL.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestAuthenticatedURL.java
new file mode 100644
index 0000000..02ab92f
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestAuthenticatedURL.java
@@ -0,0 +1,135 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+import junit.framework.Assert;
+import junit.framework.TestCase;
+import org.mockito.Mockito;
+
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class TestAuthenticatedURL extends TestCase {
+
+  public void testToken() throws Exception {
+    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+    assertFalse(token.isSet());
+    token = new AuthenticatedURL.Token("foo");
+    assertTrue(token.isSet());
+    assertEquals("foo", token.toString());
+
+    AuthenticatedURL.Token token1 = new AuthenticatedURL.Token();
+    AuthenticatedURL.Token token2 = new AuthenticatedURL.Token();
+    assertEquals(token1.hashCode(), token2.hashCode());
+    assertTrue(token1.equals(token2));
+
+    token1 = new AuthenticatedURL.Token();
+    token2 = new AuthenticatedURL.Token("foo");
+    assertFalse(token1.hashCode() == token2.hashCode());
+    assertFalse(token1.equals(token2));
+
+    token1 = new AuthenticatedURL.Token("foo");
+    token2 = new AuthenticatedURL.Token();
+    assertFalse(token1.hashCode() == token2.hashCode());
+    assertFalse(token1.equals(token2));
+
+    token1 = new AuthenticatedURL.Token("foo");
+    token2 = new AuthenticatedURL.Token("foo");
+    assertEquals(token1.hashCode(), token2.hashCode());
+    assertTrue(token1.equals(token2));
+
+    token1 = new AuthenticatedURL.Token("bar");
+    token2 = new AuthenticatedURL.Token("foo");
+    assertFalse(token1.hashCode() == token2.hashCode());
+    assertFalse(token1.equals(token2));
+
+    token1 = new AuthenticatedURL.Token("foo");
+    token2 = new AuthenticatedURL.Token("bar");
+    assertFalse(token1.hashCode() == token2.hashCode());
+    assertFalse(token1.equals(token2));
+  }
+
+  public void testInjectToken() throws Exception {
+    HttpURLConnection conn = Mockito.mock(HttpURLConnection.class);
+    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+    token.set("foo");
+    AuthenticatedURL.injectToken(conn, token);
+    Mockito.verify(conn).addRequestProperty(Mockito.eq("Cookie"), Mockito.anyString());
+  }
+
+  public void testExtractTokenOK() throws Exception {
+    HttpURLConnection conn = Mockito.mock(HttpURLConnection.class);
+
+    Mockito.when(conn.getResponseCode()).thenReturn(HttpURLConnection.HTTP_OK);
+
+    String tokenStr = "foo";
+    Map<String, List<String>> headers = new HashMap<String, List<String>>();
+    List<String> cookies = new ArrayList<String>();
+    cookies.add(AuthenticatedURL.AUTH_COOKIE + "=" + tokenStr);
+    headers.put("Set-Cookie", cookies);
+    Mockito.when(conn.getHeaderFields()).thenReturn(headers);
+
+    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+    AuthenticatedURL.extractToken(conn, token);
+
+    assertEquals(tokenStr, token.toString());
+  }
+
+  public void testExtractTokenFail() throws Exception {
+    HttpURLConnection conn = Mockito.mock(HttpURLConnection.class);
+
+    Mockito.when(conn.getResponseCode()).thenReturn(HttpURLConnection.HTTP_UNAUTHORIZED);
+
+    String tokenStr = "foo";
+    Map<String, List<String>> headers = new HashMap<String, List<String>>();
+    List<String> cookies = new ArrayList<String>();
+    cookies.add(AuthenticatedURL.AUTH_COOKIE + "=" + tokenStr);
+    headers.put("Set-Cookie", cookies);
+    Mockito.when(conn.getHeaderFields()).thenReturn(headers);
+
+    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+    token.set("bar");
+    try {
+      AuthenticatedURL.extractToken(conn, token);
+      fail();
+    } catch (AuthenticationException ex) {
+      // Expected
+      Assert.assertFalse(token.isSet());
+    } catch (Exception ex) {
+      fail();
+    }
+  }
+
+  public void testConnectionConfigurator() throws Exception {
+    HttpURLConnection conn = Mockito.mock(HttpURLConnection.class);
+    Mockito.when(conn.getResponseCode()).
+        thenReturn(HttpURLConnection.HTTP_UNAUTHORIZED);
+
+    ConnectionConfigurator connConf =
+        Mockito.mock(ConnectionConfigurator.class);
+    Mockito.when(connConf.configure(Mockito.<HttpURLConnection>any())).
+        thenReturn(conn);
+
+    Authenticator authenticator = Mockito.mock(Authenticator.class);
+
+    AuthenticatedURL aURL = new AuthenticatedURL(authenticator, connConf);
+    aURL.openConnection(new URL("http://foo"), new AuthenticatedURL.Token());
+    Mockito.verify(connConf).configure(Mockito.<HttpURLConnection>any());
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestKerberosAuthenticator.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestKerberosAuthenticator.java
new file mode 100644
index 0000000..93d1d02
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestKerberosAuthenticator.java
@@ -0,0 +1,91 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+import org.apache.hadoop.security.authentication.KerberosTestUtils;
+import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
+import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler;
+import org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler;
+
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.Properties;
+import java.util.concurrent.Callable;
+
+public class TestKerberosAuthenticator extends AuthenticatorTestCase {
+
+  private Properties getAuthenticationHandlerConfiguration() {
+    Properties props = new Properties();
+    props.setProperty(AuthenticationFilter.AUTH_TYPE, "kerberos");
+    props.setProperty(KerberosAuthenticationHandler.PRINCIPAL, KerberosTestUtils.getServerPrincipal());
+    props.setProperty(KerberosAuthenticationHandler.KEYTAB, KerberosTestUtils.getKeytabFile());
+    props.setProperty(KerberosAuthenticationHandler.NAME_RULES,
+                      "RULE:[1:$1@$0](.*@" + KerberosTestUtils.getRealm()+")s/@.*//\n");
+    return props;
+  }
+
+  public void testFallbacktoPseudoAuthenticator() throws Exception {
+    Properties props = new Properties();
+    props.setProperty(AuthenticationFilter.AUTH_TYPE, "simple");
+    props.setProperty(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, "false");
+    setAuthenticationHandlerConfig(props);
+    _testAuthentication(new KerberosAuthenticator(), false);
+  }
+
+  public void testFallbacktoPseudoAuthenticatorAnonymous() throws Exception {
+    Properties props = new Properties();
+    props.setProperty(AuthenticationFilter.AUTH_TYPE, "simple");
+    props.setProperty(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, "true");
+    setAuthenticationHandlerConfig(props);
+    _testAuthentication(new KerberosAuthenticator(), false);
+  }
+
+  public void testNotAuthenticated() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration());
+    start();
+    try {
+      URL url = new URL(getBaseURL());
+      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+      conn.connect();
+      assertEquals(HttpURLConnection.HTTP_UNAUTHORIZED, conn.getResponseCode());
+      assertTrue(conn.getHeaderField(KerberosAuthenticator.WWW_AUTHENTICATE) != null);
+    } finally {
+      stop();
+    }
+  }
+
+
+  public void testAuthentication() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration());
+    KerberosTestUtils.doAsClient(new Callable<Void>() {
+      @Override
+      public Void call() throws Exception {
+        _testAuthentication(new KerberosAuthenticator(), false);
+        return null;
+      }
+    });
+  }
+
+  public void testAuthenticationPost() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration());
+    KerberosTestUtils.doAsClient(new Callable<Void>() {
+      @Override
+      public Void call() throws Exception {
+        _testAuthentication(new KerberosAuthenticator(), true);
+        return null;
+      }
+    });
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestPseudoAuthenticator.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestPseudoAuthenticator.java
new file mode 100644
index 0000000..807052e
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestPseudoAuthenticator.java
@@ -0,0 +1,83 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.client;
+
+import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
+import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler;
+
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.Properties;
+
+public class TestPseudoAuthenticator extends AuthenticatorTestCase {
+
+  private Properties getAuthenticationHandlerConfiguration(boolean anonymousAllowed) {
+    Properties props = new Properties();
+    props.setProperty(AuthenticationFilter.AUTH_TYPE, "simple");
+    props.setProperty(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, Boolean.toString(anonymousAllowed));
+    return props;
+  }
+
+  public void testGetUserName() throws Exception {
+    PseudoAuthenticator authenticator = new PseudoAuthenticator();
+    assertEquals(System.getProperty("user.name"), authenticator.getUserName());
+  }
+
+  public void testAnonymousAllowed() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration(true));
+    start();
+    try {
+      URL url = new URL(getBaseURL());
+      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+      conn.connect();
+      assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+    } finally {
+      stop();
+    }
+  }
+
+  public void testAnonymousDisallowed() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration(false));
+    start();
+    try {
+      URL url = new URL(getBaseURL());
+      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+      conn.connect();
+      assertEquals(HttpURLConnection.HTTP_UNAUTHORIZED, conn.getResponseCode());
+    } finally {
+      stop();
+    }
+  }
+
+  public void testAuthenticationAnonymousAllowed() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration(true));
+    _testAuthentication(new PseudoAuthenticator(), false);
+  }
+
+  public void testAuthenticationAnonymousDisallowed() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration(false));
+    _testAuthentication(new PseudoAuthenticator(), false);
+  }
+
+  public void testAuthenticationAnonymousAllowedWithPost() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration(true));
+    _testAuthentication(new PseudoAuthenticator(), true);
+  }
+
+  public void testAuthenticationAnonymousDisallowedWithPost() throws Exception {
+    setAuthenticationHandlerConfig(getAuthenticationHandlerConfiguration(false));
+    _testAuthentication(new PseudoAuthenticator(), true);
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAltKerberosAuthenticationHandler.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAltKerberosAuthenticationHandler.java
new file mode 100644
index 0000000..c2d43eb
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAltKerberosAuthenticationHandler.java
@@ -0,0 +1,110 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import java.io.IOException;
+import java.util.Properties;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.mockito.Mockito;
+
+public class TestAltKerberosAuthenticationHandler
+    extends TestKerberosAuthenticationHandler {
+
+  @Override
+  protected KerberosAuthenticationHandler getNewAuthenticationHandler() {
+    // AltKerberosAuthenticationHandler is abstract; a subclass would normally
+    // perform some other authentication when alternateAuthenticate() is called.
+    // For the test, we'll just return an AuthenticationToken as the other
+    // authentication is left up to the developer of the subclass
+    return new AltKerberosAuthenticationHandler() {
+      @Override
+      public AuthenticationToken alternateAuthenticate(
+              HttpServletRequest request,
+              HttpServletResponse response)
+              throws IOException, AuthenticationException {
+        return new AuthenticationToken("A", "B", getType());
+      }
+    };
+  }
+
+  @Override
+  protected String getExpectedType() {
+    return AltKerberosAuthenticationHandler.TYPE;
+  }
+
+  public void testAlternateAuthenticationAsBrowser() throws Exception {
+    HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+    HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+    // By default, a User-Agent without "java", "curl", "wget", or "perl" in it
+    // is considered a browser
+    Mockito.when(request.getHeader("User-Agent")).thenReturn("Some Browser");
+
+    AuthenticationToken token = handler.authenticate(request, response);
+    assertEquals("A", token.getUserName());
+    assertEquals("B", token.getName());
+    assertEquals(getExpectedType(), token.getType());
+  }
+
+  public void testNonDefaultNonBrowserUserAgentAsBrowser() throws Exception {
+    HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+    HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+    if (handler != null) {
+      handler.destroy();
+      handler = null;
+    }
+    handler = getNewAuthenticationHandler();
+    Properties props = getDefaultProperties();
+    props.setProperty("alt-kerberos.non-browser.user-agents", "foo, bar");
+    try {
+      handler.init(props);
+    } catch (Exception ex) {
+      handler = null;
+      throw ex;
+    }
+
+    // Pretend we're something that will not match with "foo" (or "bar")
+    Mockito.when(request.getHeader("User-Agent")).thenReturn("blah");
+    // Should use alt authentication
+    AuthenticationToken token = handler.authenticate(request, response);
+    assertEquals("A", token.getUserName());
+    assertEquals("B", token.getName());
+    assertEquals(getExpectedType(), token.getType());
+  }
+
+  public void testNonDefaultNonBrowserUserAgentAsNonBrowser() throws Exception {
+    if (handler != null) {
+      handler.destroy();
+      handler = null;
+    }
+    handler = getNewAuthenticationHandler();
+    Properties props = getDefaultProperties();
+    props.setProperty("alt-kerberos.non-browser.user-agents", "foo, bar");
+    try {
+      handler.init(props);
+    } catch (Exception ex) {
+      handler = null;
+      throw ex;
+    }
+
+    // Run the kerberos tests again
+    testRequestWithoutAuthorization();
+    testRequestWithInvalidAuthorization();
+    testRequestWithAuthorization();
+    testRequestWithInvalidKerberosAuthorization();
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
new file mode 100644
index 0000000..1c31e54
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
@@ -0,0 +1,737 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.hadoop.security.authentication.util.Signer;
+import junit.framework.TestCase;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+
+import javax.servlet.FilterChain;
+import javax.servlet.FilterConfig;
+import javax.servlet.ServletException;
+import javax.servlet.ServletRequest;
+import javax.servlet.ServletResponse;
+import javax.servlet.http.Cookie;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Properties;
+import java.util.Vector;
+
+public class TestAuthenticationFilter extends TestCase {
+
+  public void testGetConfiguration() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    FilterConfig config = Mockito.mock(FilterConfig.class);
+    Mockito.when(config.getInitParameter(AuthenticationFilter.CONFIG_PREFIX)).thenReturn("");
+    Mockito.when(config.getInitParameter("a")).thenReturn("A");
+    Mockito.when(config.getInitParameterNames()).thenReturn(new Vector<String>(Arrays.asList("a")).elements());
+    Properties props = filter.getConfiguration("", config);
+    assertEquals("A", props.getProperty("a"));
+
+    config = Mockito.mock(FilterConfig.class);
+    Mockito.when(config.getInitParameter(AuthenticationFilter.CONFIG_PREFIX)).thenReturn("foo");
+    Mockito.when(config.getInitParameter("foo.a")).thenReturn("A");
+    Mockito.when(config.getInitParameterNames()).thenReturn(new Vector<String>(Arrays.asList("foo.a")).elements());
+    props = filter.getConfiguration("foo.", config);
+    assertEquals("A", props.getProperty("a"));
+  }
+
+  public void testInitEmpty() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameterNames()).thenReturn(new Vector<String>().elements());
+      filter.init(config);
+      fail();
+    } catch (ServletException ex) {
+      // Expected
+    } catch (Exception ex) {
+      fail();
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  public static class DummyAuthenticationHandler implements AuthenticationHandler {
+    public static boolean init;
+    public static boolean managementOperationReturn;
+    public static boolean destroy;
+    public static boolean expired;
+
+    public static final String TYPE = "dummy";
+
+    public static void reset() {
+      init = false;
+      destroy = false;
+    }
+
+    @Override
+    public void init(Properties config) throws ServletException {
+      init = true;
+      managementOperationReturn =
+        config.getProperty("management.operation.return", "true").equals("true");
+      expired = config.getProperty("expired.token", "false").equals("true");
+    }
+
+    @Override
+    public boolean managementOperation(AuthenticationToken token,
+                                       HttpServletRequest request,
+                                       HttpServletResponse response)
+      throws IOException, AuthenticationException {
+      if (!managementOperationReturn) {
+        response.setStatus(HttpServletResponse.SC_ACCEPTED);
+      }
+      return managementOperationReturn;
+    }
+
+    @Override
+    public void destroy() {
+      destroy = true;
+    }
+
+    @Override
+    public String getType() {
+      return TYPE;
+    }
+
+    @Override
+    public AuthenticationToken authenticate(HttpServletRequest request, HttpServletResponse response)
+      throws IOException, AuthenticationException {
+      AuthenticationToken token = null;
+      String param = request.getParameter("authenticated");
+      if (param != null && param.equals("true")) {
+        token = new AuthenticationToken("u", "p", "t");
+        token.setExpires((expired) ? 0 : System.currentTimeMillis() + 1000);
+      } else {
+        response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+      }
+      return token;
+    }
+  }
+
+  public void testInit() throws Exception {
+
+    // minimal configuration & simple auth handler (Pseudo)
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn("simple");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TOKEN_VALIDITY)).thenReturn("1000");
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                                 AuthenticationFilter.AUTH_TOKEN_VALIDITY)).elements());
+      filter.init(config);
+      assertEquals(PseudoAuthenticationHandler.class, filter.getAuthenticationHandler().getClass());
+      assertTrue(filter.isRandomSecret());
+      assertNull(filter.getCookieDomain());
+      assertNull(filter.getCookiePath());
+      assertEquals(1000, filter.getValidity());
+    } finally {
+      filter.destroy();
+    }
+
+    // custom secret
+    filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn("simple");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.SIGNATURE_SECRET)).thenReturn("secret");
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                                 AuthenticationFilter.SIGNATURE_SECRET)).elements());
+      filter.init(config);
+      assertFalse(filter.isRandomSecret());
+    } finally {
+      filter.destroy();
+    }
+
+    // custom cookie domain and cookie path
+    filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn("simple");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.COOKIE_DOMAIN)).thenReturn(".foo.com");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.COOKIE_PATH)).thenReturn("/bar");
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                                 AuthenticationFilter.COOKIE_DOMAIN,
+                                 AuthenticationFilter.COOKIE_PATH)).elements());
+      filter.init(config);
+      assertEquals(".foo.com", filter.getCookieDomain());
+      assertEquals("/bar", filter.getCookiePath());
+    } finally {
+      filter.destroy();
+    }
+
+
+    // authentication handler lifecycle, and custom impl
+    DummyAuthenticationHandler.reset();
+    filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        "management.operation.return")).elements());
+      filter.init(config);
+      assertTrue(DummyAuthenticationHandler.init);
+    } finally {
+      filter.destroy();
+      assertTrue(DummyAuthenticationHandler.destroy);
+    }
+
+    // kerberos auth handler
+    filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn("kerberos");
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(Arrays.asList(AuthenticationFilter.AUTH_TYPE)).elements());
+      filter.init(config);
+    } catch (ServletException ex) {
+      // Expected
+    } finally {
+      assertEquals(KerberosAuthenticationHandler.class, filter.getAuthenticationHandler().getClass());
+      filter.destroy();
+    }
+  }
+
+  public void testGetRequestURL() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getRequestURL()).thenReturn(new StringBuffer("http://foo:8080/bar"));
+      Mockito.when(request.getQueryString()).thenReturn("a=A&b=B");
+
+      assertEquals("http://foo:8080/bar?a=A&b=B", filter.getRequestURL(request));
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  public void testGetToken() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameter(AuthenticationFilter.SIGNATURE_SECRET)).thenReturn("secret");
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        AuthenticationFilter.SIGNATURE_SECRET,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      AuthenticationToken token = new AuthenticationToken("u", "p", DummyAuthenticationHandler.TYPE);
+      token.setExpires(System.currentTimeMillis() + 1000);
+      Signer signer = new Signer("secret".getBytes());
+      String tokenSigned = signer.sign(token.toString());
+
+      Cookie cookie = new Cookie(AuthenticatedURL.AUTH_COOKIE, tokenSigned);
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
+
+      AuthenticationToken newToken = filter.getToken(request);
+
+      assertEquals(token.toString(), newToken.toString());
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  public void testGetTokenExpired() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameter(AuthenticationFilter.SIGNATURE_SECRET)).thenReturn("secret");
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        AuthenticationFilter.SIGNATURE_SECRET,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      AuthenticationToken token = new AuthenticationToken("u", "p", "invalidtype");
+      token.setExpires(System.currentTimeMillis() - 1000);
+      Signer signer = new Signer("secret".getBytes());
+      String tokenSigned = signer.sign(token.toString());
+
+      Cookie cookie = new Cookie(AuthenticatedURL.AUTH_COOKIE, tokenSigned);
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
+
+      try {
+        filter.getToken(request);
+        fail();
+      } catch (AuthenticationException ex) {
+        // Expected
+      } catch (Exception ex) {
+        fail();
+      }
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  public void testGetTokenInvalidType() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameter(AuthenticationFilter.SIGNATURE_SECRET)).thenReturn("secret");
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        AuthenticationFilter.SIGNATURE_SECRET,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      AuthenticationToken token = new AuthenticationToken("u", "p", "invalidtype");
+      token.setExpires(System.currentTimeMillis() + 1000);
+      Signer signer = new Signer("secret".getBytes());
+      String tokenSigned = signer.sign(token.toString());
+
+      Cookie cookie = new Cookie(AuthenticatedURL.AUTH_COOKIE, tokenSigned);
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
+
+      try {
+        filter.getToken(request);
+        fail();
+      } catch (AuthenticationException ex) {
+        // Expected
+      } catch (Exception ex) {
+        fail();
+      }
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  public void testDoFilterNotAuthenticated() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getRequestURL()).thenReturn(new StringBuffer("http://foo:8080/bar"));
+
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+      FilterChain chain = Mockito.mock(FilterChain.class);
+
+      Mockito.doAnswer(
+        new Answer<Object>() {
+          @Override
+          public Object answer(InvocationOnMock invocation) throws Throwable {
+            fail();
+            return null;
+          }
+        }
+      ).when(chain).doFilter(Mockito.<ServletRequest>anyObject(), Mockito.<ServletResponse>anyObject());
+
+      filter.doFilter(request, response, chain);
+
+      Mockito.verify(response).setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  private void _testDoFilterAuthentication(boolean withDomainPath,
+                                           boolean invalidToken,
+                                           boolean expired) throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter("expired.token")).
+        thenReturn(Boolean.toString(expired));
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TOKEN_VALIDITY)).thenReturn("1000");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.SIGNATURE_SECRET)).thenReturn("secret");
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                                 AuthenticationFilter.AUTH_TOKEN_VALIDITY,
+                                 AuthenticationFilter.SIGNATURE_SECRET,
+                                 "management.operation.return",
+                                 "expired.token")).elements());
+
+      if (withDomainPath) {
+        Mockito.when(config.getInitParameter(AuthenticationFilter.COOKIE_DOMAIN)).thenReturn(".foo.com");
+        Mockito.when(config.getInitParameter(AuthenticationFilter.COOKIE_PATH)).thenReturn("/bar");
+        Mockito.when(config.getInitParameterNames()).thenReturn(
+          new Vector<String>(Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                                   AuthenticationFilter.AUTH_TOKEN_VALIDITY,
+                                   AuthenticationFilter.SIGNATURE_SECRET,
+                                   AuthenticationFilter.COOKIE_DOMAIN,
+                                   AuthenticationFilter.COOKIE_PATH,
+                                   "management.operation.return")).elements());
+      }
+
+      filter.init(config);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getParameter("authenticated")).thenReturn("true");
+      Mockito.when(request.getRequestURL()).thenReturn(new StringBuffer("http://foo:8080/bar"));
+      Mockito.when(request.getQueryString()).thenReturn("authenticated=true");
+
+      if (invalidToken) {
+        Mockito.when(request.getCookies()).thenReturn(
+          new Cookie[] { new Cookie(AuthenticatedURL.AUTH_COOKIE, "foo")}
+        );
+      }
+
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+      FilterChain chain = Mockito.mock(FilterChain.class);
+
+      final boolean[] calledDoFilter = new boolean[1];
+
+      Mockito.doAnswer(
+        new Answer<Object>() {
+          @Override
+          public Object answer(InvocationOnMock invocation) throws Throwable {
+            calledDoFilter[0] = true;
+            return null;
+          }
+        }
+      ).when(chain).doFilter(Mockito.<ServletRequest>anyObject(), Mockito.<ServletResponse>anyObject());
+
+      final Cookie[] setCookie = new Cookie[1];
+      Mockito.doAnswer(
+        new Answer<Object>() {
+          @Override
+          public Object answer(InvocationOnMock invocation) throws Throwable {
+            Object[] args = invocation.getArguments();
+            setCookie[0] = (Cookie) args[0];
+            return null;
+          }
+        }
+      ).when(response).addCookie(Mockito.<Cookie>anyObject());
+
+      filter.doFilter(request, response, chain);
+
+      if (expired) {
+        Mockito.verify(response, Mockito.never()).
+          addCookie(Mockito.any(Cookie.class));
+      } else {
+        assertNotNull(setCookie[0]);
+        assertEquals(AuthenticatedURL.AUTH_COOKIE, setCookie[0].getName());
+        assertTrue(setCookie[0].getValue().contains("u="));
+        assertTrue(setCookie[0].getValue().contains("p="));
+        assertTrue(setCookie[0].getValue().contains("t="));
+        assertTrue(setCookie[0].getValue().contains("e="));
+        assertTrue(setCookie[0].getValue().contains("s="));
+        assertTrue(calledDoFilter[0]);
+
+        Signer signer = new Signer("secret".getBytes());
+        String value = signer.verifyAndExtract(setCookie[0].getValue());
+        AuthenticationToken token = AuthenticationToken.parse(value);
+        assertEquals(System.currentTimeMillis() + 1000 * 1000,
+                     token.getExpires(), 100);
+
+        if (withDomainPath) {
+          assertEquals(".foo.com", setCookie[0].getDomain());
+          assertEquals("/bar", setCookie[0].getPath());
+        } else {
+          assertNull(setCookie[0].getDomain());
+          assertNull(setCookie[0].getPath());
+        }
+      }
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  public void testDoFilterAuthentication() throws Exception {
+    _testDoFilterAuthentication(false, false, false);
+  }
+
+  public void testDoFilterAuthenticationImmediateExpiration() throws Exception {
+    _testDoFilterAuthentication(false, false, true);
+  }
+
+  public void testDoFilterAuthenticationWithInvalidToken() throws Exception {
+    _testDoFilterAuthentication(false, true, false);
+  }
+
+  public void testDoFilterAuthenticationWithDomainPath() throws Exception {
+    _testDoFilterAuthentication(true, false, false);
+  }
+
+  public void testDoFilterAuthenticated() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getRequestURL()).thenReturn(new StringBuffer("http://foo:8080/bar"));
+
+      AuthenticationToken token = new AuthenticationToken("u", "p", "t");
+      token.setExpires(System.currentTimeMillis() + 1000);
+      Signer signer = new Signer("secret".getBytes());
+      String tokenSigned = signer.sign(token.toString());
+
+      Cookie cookie = new Cookie(AuthenticatedURL.AUTH_COOKIE, tokenSigned);
+      Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
+
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+      FilterChain chain = Mockito.mock(FilterChain.class);
+
+      Mockito.doAnswer(
+        new Answer<Object>() {
+          @Override
+          public Object answer(InvocationOnMock invocation) throws Throwable {
+            Object[] args = invocation.getArguments();
+            HttpServletRequest request = (HttpServletRequest) args[0];
+            assertEquals("u", request.getRemoteUser());
+            assertEquals("p", request.getUserPrincipal().getName());
+            return null;
+          }
+        }
+      ).when(chain).doFilter(Mockito.<ServletRequest>anyObject(), Mockito.<ServletResponse>anyObject());
+
+      filter.doFilter(request, response, chain);
+
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  public void testDoFilterAuthenticatedExpired() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getRequestURL()).thenReturn(new StringBuffer("http://foo:8080/bar"));
+
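+      // A correctly signed cookie carrying an already-expired token should be rejected with 401 and the cookie cleared.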
+      AuthenticationToken token = new AuthenticationToken("u", "p", DummyAuthenticationHandler.TYPE);
+      token.setExpires(System.currentTimeMillis() - 1000);
+      Signer signer = new Signer("secret".getBytes());
+      String tokenSigned = signer.sign(token.toString());
+
+      Cookie cookie = new Cookie(AuthenticatedURL.AUTH_COOKIE, tokenSigned);
+      Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
+
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+      FilterChain chain = Mockito.mock(FilterChain.class);
+
+      Mockito.doAnswer(
+        new Answer<Object>() {
+          @Override
+          public Object answer(InvocationOnMock invocation) throws Throwable {
+            fail();
+            return null;
+          }
+        }
+      ).when(chain).doFilter(Mockito.<ServletRequest>anyObject(), Mockito.<ServletResponse>anyObject());
+
+      final Cookie[] setCookie = new Cookie[1];
+      Mockito.doAnswer(
+        new Answer<Object>() {
+          @Override
+          public Object answer(InvocationOnMock invocation) throws Throwable {
+            Object[] args = invocation.getArguments();
+            setCookie[0] = (Cookie) args[0];
+            return null;
+          }
+        }
+      ).when(response).addCookie(Mockito.<Cookie>anyObject());
+
+      filter.doFilter(request, response, chain);
+
+      Mockito.verify(response).sendError(Mockito.eq(HttpServletResponse.SC_UNAUTHORIZED), Mockito.anyString());
+
+      assertNotNull(setCookie[0]);
+      assertEquals(AuthenticatedURL.AUTH_COOKIE, setCookie[0].getName());
+      assertEquals("", setCookie[0].getValue());
+    } finally {
+      filter.destroy();
+    }
+  }
+
+
+  public void testDoFilterAuthenticatedInvalidType() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("true");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).thenReturn(
+        DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getRequestURL()).thenReturn(new StringBuffer("http://foo:8080/bar"));
+
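+      // A valid, unexpired token whose type does not match the configured handler should be rejected the same way.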
+      AuthenticationToken token = new AuthenticationToken("u", "p", "invalidtype");
+      token.setExpires(System.currentTimeMillis() + 1000);
+      Signer signer = new Signer("secret".getBytes());
+      String tokenSigned = signer.sign(token.toString());
+
+      Cookie cookie = new Cookie(AuthenticatedURL.AUTH_COOKIE, tokenSigned);
+      Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
+
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+      FilterChain chain = Mockito.mock(FilterChain.class);
+
+      Mockito.doAnswer(
+        new Answer<Object>() {
+          @Override
+          public Object answer(InvocationOnMock invocation) throws Throwable {
+            fail();
+            return null;
+          }
+        }
+      ).when(chain).doFilter(Mockito.<ServletRequest>anyObject(), Mockito.<ServletResponse>anyObject());
+
+      final Cookie[] setCookie = new Cookie[1];
+      Mockito.doAnswer(
+        new Answer<Object>() {
+          @Override
+          public Object answer(InvocationOnMock invocation) throws Throwable {
+            Object[] args = invocation.getArguments();
+            setCookie[0] = (Cookie) args[0];
+            return null;
+          }
+        }
+      ).when(response).addCookie(Mockito.<Cookie>anyObject());
+
+      filter.doFilter(request, response, chain);
+
+      Mockito.verify(response).sendError(Mockito.eq(HttpServletResponse.SC_UNAUTHORIZED), Mockito.anyString());
+
+      assertNotNull(setCookie[0]);
+      assertEquals(AuthenticatedURL.AUTH_COOKIE, setCookie[0].getName());
+      assertEquals("", setCookie[0].getValue());
+    } finally {
+      filter.destroy();
+    }
+  }
+
+  public void testManagementOperation() throws Exception {
+    AuthenticationFilter filter = new AuthenticationFilter();
+    try {
+      FilterConfig config = Mockito.mock(FilterConfig.class);
+      Mockito.when(config.getInitParameter("management.operation.return")).
+        thenReturn("false");
+      Mockito.when(config.getInitParameter(AuthenticationFilter.AUTH_TYPE)).
+        thenReturn(DummyAuthenticationHandler.class.getName());
+      Mockito.when(config.getInitParameterNames()).thenReturn(
+        new Vector<String>(
+          Arrays.asList(AuthenticationFilter.AUTH_TYPE,
+                        "management.operation.return")).elements());
+      filter.init(config);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      Mockito.when(request.getRequestURL()).
+        thenReturn(new StringBuffer("http://foo:8080/bar"));
+
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+      FilterChain chain = Mockito.mock(FilterChain.class);
+
+      filter.doFilter(request, response, chain);
+      Mockito.verify(response).setStatus(HttpServletResponse.SC_ACCEPTED);
+      Mockito.verifyNoMoreInteractions(response);
+
+      Mockito.reset(request);
+      Mockito.reset(response);
+
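+      // Even with a valid signed auth cookie, the management operation still short-circuits the filter and responds ACCEPTED.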
+      AuthenticationToken token = new AuthenticationToken("u", "p", "t");
+      token.setExpires(System.currentTimeMillis() + 1000);
+      Signer signer = new Signer("secret".getBytes());
+      String tokenSigned = signer.sign(token.toString());
+      Cookie cookie = new Cookie(AuthenticatedURL.AUTH_COOKIE, tokenSigned);
+      Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
+
+      filter.doFilter(request, response, chain);
+
+      Mockito.verify(response).setStatus(HttpServletResponse.SC_ACCEPTED);
+      Mockito.verifyNoMoreInteractions(response);
+
+    } finally {
+      filter.destroy();
+    }
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationToken.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationToken.java
new file mode 100644
index 0000000..25f9100
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationToken.java
@@ -0,0 +1,124 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import junit.framework.TestCase;
+
+public class TestAuthenticationToken extends TestCase {
+
+  public void testAnonymous() {
+    assertNotNull(AuthenticationToken.ANONYMOUS);
+    assertEquals(null, AuthenticationToken.ANONYMOUS.getUserName());
+    assertEquals(null, AuthenticationToken.ANONYMOUS.getName());
+    assertEquals(null, AuthenticationToken.ANONYMOUS.getType());
+    assertEquals(-1, AuthenticationToken.ANONYMOUS.getExpires());
+    assertFalse(AuthenticationToken.ANONYMOUS.isExpired());
+  }
+
+  public void testConstructor() throws Exception {
+    try {
+      new AuthenticationToken(null, "p", "t");
+      fail();
+    } catch (IllegalArgumentException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+    try {
+      new AuthenticationToken("", "p", "t");
+      fail();
+    } catch (IllegalArgumentException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+    try {
+      new AuthenticationToken("u", null, "t");
+      fail();
+    } catch (IllegalArgumentException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+    try {
+      new AuthenticationToken("u", "", "t");
+      fail();
+    } catch (IllegalArgumentException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+    try {
+      new AuthenticationToken("u", "p", null);
+      fail();
+    } catch (IllegalArgumentException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+    try {
+      new AuthenticationToken("u", "p", "");
+      fail();
+    } catch (IllegalArgumentException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+    new AuthenticationToken("u", "p", "t");
+  }
+
+  public void testGetters() throws Exception {
+    long expires = System.currentTimeMillis() + 50;
+    AuthenticationToken token = new AuthenticationToken("u", "p", "t");
+    token.setExpires(expires);
+    assertEquals("u", token.getUserName());
+    assertEquals("p", token.getName());
+    assertEquals("t", token.getType());
+    assertEquals(expires, token.getExpires());
+    assertFalse(token.isExpired());
+    Thread.sleep(51);
+    assertTrue(token.isExpired());
+  }
+
+  public void testToStringAndParse() throws Exception {
+    long expires = System.currentTimeMillis() + 50;
+    AuthenticationToken token = new AuthenticationToken("u", "p", "t");
+    token.setExpires(expires);
+    String str = token.toString();
+    token = AuthenticationToken.parse(str);
+    assertEquals("p", token.getName());
+    assertEquals("t", token.getType());
+    assertEquals(expires, token.getExpires());
+    assertFalse(token.isExpired());
+    Thread.sleep(51);
+    assertTrue(token.isExpired());
+  }
+
+  public void testParseInvalid() throws Exception {
+    long expires = System.currentTimeMillis() + 50;
+    AuthenticationToken token = new AuthenticationToken("u", "p", "t");
+    token.setExpires(expires);
+    String str = token.toString();
+    str = str.substring(0, str.indexOf("e="));
+    try {
+      AuthenticationToken.parse(str);
+      fail();
+    } catch (AuthenticationException ex) {
+      // Expected
+    } catch (Exception ex) {
+      fail();
+    }
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
new file mode 100644
index 0000000..d198e58
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
@@ -0,0 +1,224 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.KerberosTestUtils;
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
+import junit.framework.TestCase;
+import org.apache.commons.codec.binary.Base64;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.ietf.jgss.GSSContext;
+import org.ietf.jgss.GSSManager;
+import org.ietf.jgss.GSSName;
+import org.mockito.Mockito;
+import org.ietf.jgss.Oid;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.util.Properties;
+import java.util.concurrent.Callable;
+
+public class TestKerberosAuthenticationHandler extends TestCase {
+
+  protected KerberosAuthenticationHandler handler;
+
+  protected KerberosAuthenticationHandler getNewAuthenticationHandler() {
+    return new KerberosAuthenticationHandler();
+  }
+
+  protected String getExpectedType() {
+    return KerberosAuthenticationHandler.TYPE;
+  }
+
+  protected Properties getDefaultProperties() {
+    Properties props = new Properties();
+    props.setProperty(KerberosAuthenticationHandler.PRINCIPAL,
+            KerberosTestUtils.getServerPrincipal());
+    props.setProperty(KerberosAuthenticationHandler.KEYTAB,
+            KerberosTestUtils.getKeytabFile());
+    props.setProperty(KerberosAuthenticationHandler.NAME_RULES,
+            "RULE:[1:$1@$0](.*@" + KerberosTestUtils.getRealm()+")s/@.*//\n");
+    return props;
+  }
+
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+    handler = getNewAuthenticationHandler();
+    Properties props = getDefaultProperties();
+    try {
+      handler.init(props);
+    } catch (Exception ex) {
+      handler = null;
+      throw ex;
+    }
+  }
+
+  @Override
+  protected void tearDown() throws Exception {
+    if (handler != null) {
+      handler.destroy();
+      handler = null;
+    }
+    super.tearDown();
+  }
+
+  public void testNameRules() throws Exception {
+    KerberosName kn = new KerberosName(KerberosTestUtils.getServerPrincipal());
+    assertEquals(KerberosTestUtils.getRealm(), kn.getRealm());
+
+    //destroy handler created in setUp()
+    handler.destroy();
+
+    KerberosName.setRules("RULE:[1:$1@$0](.*@FOO)s/@.*//\nDEFAULT");
+    
+    handler = getNewAuthenticationHandler();
+    Properties props = getDefaultProperties();
+    props.setProperty(KerberosAuthenticationHandler.NAME_RULES, "RULE:[1:$1@$0](.*@BAR)s/@.*//\nDEFAULT");
+    try {
+      handler.init(props);
+    } catch (Exception ex) {
+    }
+    kn = new KerberosName("bar@BAR");
+    assertEquals("bar", kn.getShortName());
+    kn = new KerberosName("bar@FOO");
+    try {
+      kn.getShortName();
+      fail();
+    }
+    catch (Exception ex) {      
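+      // Expected: "bar@FOO" does not match the configured name rules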
+    }
+  }
+  
+  public void testInit() throws Exception {
+    assertEquals(KerberosTestUtils.getServerPrincipal(), handler.getPrincipal());
+    assertEquals(KerberosTestUtils.getKeytabFile(), handler.getKeytab());
+  }
+
+  public void testType() throws Exception {
+    assertEquals(getExpectedType(), handler.getType());
+  }
+
+  public void testRequestWithoutAuthorization() throws Exception {
+    HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+    HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+    assertNull(handler.authenticate(request, response));
+    Mockito.verify(response).setHeader(KerberosAuthenticator.WWW_AUTHENTICATE, KerberosAuthenticator.NEGOTIATE);
+    Mockito.verify(response).setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+  }
+
+  public void testRequestWithInvalidAuthorization() throws Exception {
+    HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+    HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+    Mockito.when(request.getHeader(KerberosAuthenticator.AUTHORIZATION)).thenReturn("invalid");
+    assertNull(handler.authenticate(request, response));
+    Mockito.verify(response).setHeader(KerberosAuthenticator.WWW_AUTHENTICATE, KerberosAuthenticator.NEGOTIATE);
+    Mockito.verify(response).setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+  }
+
+  public void testRequestWithIncompleteAuthorization() throws Exception {
+    HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+    HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+    Mockito.when(request.getHeader(KerberosAuthenticator.AUTHORIZATION))
+      .thenReturn(KerberosAuthenticator.NEGOTIATE);
+    try {
+      handler.authenticate(request, response);
+      fail();
+    } catch (AuthenticationException ex) {
+      // Expected
+    } catch (Exception ex) {
+      fail();
+    }
+  }
+
+
+  public void testRequestWithAuthorization() throws Exception {
+    String token = KerberosTestUtils.doAsClient(new Callable<String>() {
+      @Override
+      public String call() throws Exception {
+        GSSManager gssManager = GSSManager.getInstance();
+        GSSContext gssContext = null;
+        try {
+          String servicePrincipal = KerberosTestUtils.getServerPrincipal();
+          Oid oid = KerberosUtil.getOidInstance("NT_GSS_KRB5_PRINCIPAL");
+          GSSName serviceName = gssManager.createName(servicePrincipal,
+              oid);
+          oid = KerberosUtil.getOidInstance("GSS_KRB5_MECH_OID");
+          gssContext = gssManager.createContext(serviceName, oid, null,
+                                                  GSSContext.DEFAULT_LIFETIME);
+          gssContext.requestCredDeleg(true);
+          gssContext.requestMutualAuth(true);
+
+          byte[] inToken = new byte[0];
+          byte[] outToken = gssContext.initSecContext(inToken, 0, inToken.length);
+          Base64 base64 = new Base64(0);
+          return base64.encodeToString(outToken);
+
+        } finally {
+          if (gssContext != null) {
+            gssContext.dispose();
+          }
+        }
+      }
+    });
+
+    HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+    HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+    Mockito.when(request.getHeader(KerberosAuthenticator.AUTHORIZATION))
+      .thenReturn(KerberosAuthenticator.NEGOTIATE + " " + token);
+
+    AuthenticationToken authToken = handler.authenticate(request, response);
+
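+    // A null token means the SPNEGO negotiation needs another round trip; the handler then responds 401 with a Negotiate challenge.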
+    if (authToken != null) {
+      Mockito.verify(response).setHeader(Mockito.eq(KerberosAuthenticator.WWW_AUTHENTICATE),
+                                         Mockito.matches(KerberosAuthenticator.NEGOTIATE + " .*"));
+      Mockito.verify(response).setStatus(HttpServletResponse.SC_OK);
+
+      assertEquals(KerberosTestUtils.getClientPrincipal(), authToken.getName());
+      assertTrue(KerberosTestUtils.getClientPrincipal().startsWith(authToken.getUserName()));
+      assertEquals(getExpectedType(), authToken.getType());
+    } else {
+      Mockito.verify(response).setHeader(Mockito.eq(KerberosAuthenticator.WWW_AUTHENTICATE),
+                                         Mockito.matches(KerberosAuthenticator.NEGOTIATE + " .*"));
+      Mockito.verify(response).setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+    }
+  }
+
+  public void testRequestWithInvalidKerberosAuthorization() throws Exception {
+
+    String token = new Base64(0).encodeToString(new byte[]{0, 1, 2});
+
+    HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+    HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+    Mockito.when(request.getHeader(KerberosAuthenticator.AUTHORIZATION)).thenReturn(
+      KerberosAuthenticator.NEGOTIATE + token);
+
+    try {
+      handler.authenticate(request, response);
+      fail();
+    } catch (AuthenticationException ex) {
+      // Expected
+    } catch (Exception ex) {
+      fail();
+    }
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestPseudoAuthenticationHandler.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestPseudoAuthenticationHandler.java
new file mode 100644
index 0000000..dbc2c36
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestPseudoAuthenticationHandler.java
@@ -0,0 +1,113 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.server;
+
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import junit.framework.TestCase;
+import org.apache.hadoop.security.authentication.client.PseudoAuthenticator;
+import org.mockito.Mockito;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.util.Properties;
+
+public class TestPseudoAuthenticationHandler extends TestCase {
+
+  public void testInit() throws Exception {
+    PseudoAuthenticationHandler handler = new PseudoAuthenticationHandler();
+    try {
+      Properties props = new Properties();
+      props.setProperty(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, "false");
+      handler.init(props);
+      assertEquals(false, handler.getAcceptAnonymous());
+    } finally {
+      handler.destroy();
+    }
+  }
+
+  public void testType() throws Exception {
+    PseudoAuthenticationHandler handler = new PseudoAuthenticationHandler();
+    assertEquals(PseudoAuthenticationHandler.TYPE, handler.getType());
+  }
+
+  public void testAnonymousOn() throws Exception {
+    PseudoAuthenticationHandler handler = new PseudoAuthenticationHandler();
+    try {
+      Properties props = new Properties();
+      props.setProperty(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, "true");
+      handler.init(props);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+      AuthenticationToken token = handler.authenticate(request, response);
+
+      assertEquals(AuthenticationToken.ANONYMOUS, token);
+    } finally {
+      handler.destroy();
+    }
+  }
+
+  public void testAnonymousOff() throws Exception {
+    PseudoAuthenticationHandler handler = new PseudoAuthenticationHandler();
+    try {
+      Properties props = new Properties();
+      props.setProperty(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, "false");
+      handler.init(props);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+
+      handler.authenticate(request, response);
+      fail();
+    } catch (AuthenticationException ex) {
+      // Expected
+    } catch (Exception ex) {
+      fail();
+    } finally {
+      handler.destroy();
+    }
+  }
+
+  private void _testUserName(boolean anonymous) throws Exception {
+    PseudoAuthenticationHandler handler = new PseudoAuthenticationHandler();
+    try {
+      Properties props = new Properties();
+      props.setProperty(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, Boolean.toString(anonymous));
+      handler.init(props);
+
+      HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
+      HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+      Mockito.when(request.getParameter(PseudoAuthenticator.USER_NAME)).thenReturn("user");
+
+      AuthenticationToken token = handler.authenticate(request, response);
+
+      assertNotNull(token);
+      assertEquals("user", token.getUserName());
+      assertEquals("user", token.getName());
+      assertEquals(PseudoAuthenticationHandler.TYPE, token.getType());
+    } finally {
+      handler.destroy();
+    }
+  }
+
+  public void testUserNameAnonymousOff() throws Exception {
+    _testUserName(false);
+  }
+
+  public void testUserNameAnonymousOn() throws Exception {
+    _testUserName(true);
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
new file mode 100644
index 0000000..b6c0b0f
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
@@ -0,0 +1,88 @@
+package org.apache.hadoop.security.authentication.util;
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import java.io.IOException;
+
+import org.apache.hadoop.security.authentication.KerberosTestUtils;
+import org.junit.Before;
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+public class TestKerberosName {
+
+  @Before
+  public void setUp() throws Exception {
+    String rules =
+      "RULE:[1:$1@$0](.*@YAHOO\\.COM)s/@.*//\n" +
+      "RULE:[2:$1](johndoe)s/^.*$/guest/\n" +
+      "RULE:[2:$1;$2](^.*;admin$)s/;admin$//\n" +
+      "RULE:[2:$2](root)\n" +
+      "DEFAULT";
+    KerberosName.setRules(rules);
+    KerberosName.printRules();
+  }
+
+  private void checkTranslation(String from, String to) throws Exception {
+    System.out.println("Translate " + from);
+    KerberosName nm = new KerberosName(from);
+    String simple = nm.getShortName();
+    System.out.println("to " + simple);
+    assertEquals("short name incorrect", to, simple);
+  }
+
+  @Test
+  public void testRules() throws Exception {
+    checkTranslation("omalley@" + KerberosTestUtils.getRealm(), "omalley");
+    checkTranslation("hdfs/10.0.0.1@" + KerberosTestUtils.getRealm(), "hdfs");
+    checkTranslation("oom@YAHOO.COM", "oom");
+    checkTranslation("johndoe/zoo@FOO.COM", "guest");
+    checkTranslation("joe/admin@FOO.COM", "joe");
+    checkTranslation("joe/root@FOO.COM", "root");
+  }
+
+  private void checkBadName(String name) {
+    System.out.println("Checking " + name + " to ensure it is bad.");
+    try {
+      new KerberosName(name);
+      fail("didn't get exception for " + name);
+    } catch (IllegalArgumentException iae) {
+      // PASS
+    }
+  }
+
+  private void checkBadTranslation(String from) {
+    System.out.println("Checking bad translation for " + from);
+    KerberosName nm = new KerberosName(from);
+    try {
+      nm.getShortName();
+      fail("didn't get exception for " + from);
+    } catch (IOException ie) {
+      // PASS
+    }
+  }
+
+  @Test
+  public void testAntiPatterns() throws Exception {
+    checkBadName("owen/owen/owen@FOO.COM");
+    checkBadName("owen@foo/bar.com");
+    checkBadTranslation("foo@ACME.COM");
+    checkBadTranslation("root/joe@FOO.COM");
+  }
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
new file mode 100644
index 0000000..4c91e2b
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
@@ -0,0 +1,55 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+import static org.junit.Assert.*;
+
+import java.io.IOException;
+
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.junit.Test;
+
+public class TestKerberosUtil {
+
+  @Test
+  public void testGetServerPrincipal() throws IOException {
+    String service = "TestKerberosUtil";
+    String localHostname = KerberosUtil.getLocalHostName();
+    String testHost = "FooBar";
+
+    // send null hostname
+    assertEquals("When no hostname is sent",
+        service + "/" + localHostname.toLowerCase(),
+        KerberosUtil.getServicePrincipal(service, null));
+    // send empty hostname
+    assertEquals("When empty hostname is sent",
+        service + "/" + localHostname.toLowerCase(),
+        KerberosUtil.getServicePrincipal(service, ""));
+    // send 0.0.0.0 hostname
+    assertEquals("When 0.0.0.0 hostname is sent",
+        service + "/" + localHostname.toLowerCase(),
+        KerberosUtil.getServicePrincipal(service, "0.0.0.0"));
+    // send uppercase hostname
+    assertEquals("When uppercase hostname is sent",
+        service + "/" + testHost.toLowerCase(),
+        KerberosUtil.getServicePrincipal(service, testHost));
+    // send lowercase hostname
+    assertEquals("When lowercase hostname is sent",
+        service + "/" + testHost.toLowerCase(),
+        KerberosUtil.getServicePrincipal(service, testHost.toLowerCase()));
+  }
+}
\ No newline at end of file
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestSigner.java b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestSigner.java
new file mode 100644
index 0000000..9b3d1a2
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestSigner.java
@@ -0,0 +1,93 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+import junit.framework.TestCase;
+
+public class TestSigner extends TestCase {
+
+  public void testNoSecret() throws Exception {
+    try {
+      new Signer(null);
+      fail();
+    }
+    catch (IllegalArgumentException ex) {
+    }
+  }
+
+  public void testNullAndEmptyString() throws Exception {
+    Signer signer = new Signer("secret".getBytes());
+    try {
+      signer.sign(null);
+      fail();
+    } catch (IllegalArgumentException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+    try {
+      signer.sign("");
+      fail();
+    } catch (IllegalArgumentException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+  }
+
+  public void testSignature() throws Exception {
+    Signer signer = new Signer("secret".getBytes());
+    String s1 = signer.sign("ok");
+    String s2 = signer.sign("ok");
+    String s3 = signer.sign("wrong");
+    assertEquals(s1, s2);
+    assertNotSame(s1, s3);
+  }
+
+  public void testVerify() throws Exception {
+    Signer signer = new Signer("secret".getBytes());
+    String t = "test";
+    String s = signer.sign(t);
+    String e = signer.verifyAndExtract(s);
+    assertEquals(t, e);
+  }
+
+  public void testInvalidSignedText() throws Exception {
+    Signer signer = new Signer("secret".getBytes());
+    try {
+      signer.verifyAndExtract("test");
+      fail();
+    } catch (SignerException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+  }
+
+  public void testTampering() throws Exception {
+    Signer signer = new Signer("secret".getBytes());
+    String t = "test";
+    String s = signer.sign(t);
+    s += "x";
+    try {
+      signer.verifyAndExtract(s);
+      fail();
+    } catch (SignerException ex) {
+      // Expected
+    } catch (Throwable ex) {
+      fail();
+    }
+  }
+
+}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/resources/krb5.conf b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/resources/krb5.conf
new file mode 100644
index 0000000..c9f9567
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-auth/src/test/resources/krb5.conf
@@ -0,0 +1,28 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# 
+[libdefaults]
+	default_realm = ${kerberos.realm}
+	udp_preference_limit = 1
+	extra_addresses = 127.0.0.1
+[realms]
+	${kerberos.realm} = {
+		admin_server = localhost:88
+		kdc = localhost:88
+	}
+[domain_realm]
+	localhost = ${kerberos.realm}
diff --git a/branch-2.0.4-alpha/hadoop-common-project/hadoop-common/CHANGES.txt b/branch-2.0.4-alpha/hadoop-common-project/hadoop-common/CHANGES.txt
new file mode 100644
index 0000000..3b41a6a
--- /dev/null
+++ b/branch-2.0.4-alpha/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -0,0 +1,12994 @@
+Hadoop Change Log
+
+Release 2.0.4-alpha - UNRELEASED
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+
+    HADOOP-9467. Metrics2 record filter should check name as well as tags.
+    (Ganeshan Iyler via llu)
+
+    HADOOP-9406. hadoop-client leaks dependency on JDK tools jar. (tucu)
+
+    HADOOP-9301. hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie & HttpFS. (tucu)
+
+    HADOOP-9299.  kerberos name resolution is kicking in even when kerberos
+    is not configured (daryn)
+
+    HADOOP-9399. protoc maven plugin doesn't work on mvn 3.0.2 (todd)
+
+    HADOOP-9444. Modify hadoop-policy.xml to replace unexpanded variables to a
+    default value of '*'. (Roman Shaposhnik via vinodkv)
+
+    HADOOP-9405. TestGridmixSummary#testExecutionSummarizer is broken.
+    (Andrew Wang via atm)
+
+    HADOOP-9379. capture the ulimit info after printing the log to the 
+    console. (Arpit Gupta via suresh)
+
+    HADOOP-9471. hadoop-client wrongfully excludes jetty-util JAR, 
+    breaking webhdfs. (tucu)
+
+Release 2.0.3-alpha - 2013-02-06 
+
+  INCOMPATIBLE CHANGES
+
+    HADOOP-8999. SASL negotiation is flawed (daryn)
+
+  NEW FEATURES
+
+    HADOOP-8561. Introduce HADOOP_PROXY_USER for secure impersonation in child
+    hadoop client processes. (Yu Gao via llu)
+
+    HADOOP-8597. Permit FsShell's text command to read Avro files.
+    (Ivan Vladimirov Ivanov via cutting)
+
+    HADOOP-9020. Add a SASL PLAIN server (daryn via bobby)
+
+    HADOOP-9090. Support on-demand publish of metrics. (Mostafa Elhemali via
+    suresh)
+
+    HADOOP-9054. Add AuthenticationHandler that uses Kerberos but allows for 
+    an alternate form of authentication for browsers. (rkanter via tucu)
+
+  IMPROVEMENTS
+
+    HADOOP-8789. Tests setLevel(Level.OFF) should be Level.ERROR.
+    (Andy Isaacson via eli)
+
+    HADOOP-8755. Print thread dump when tests fail due to timeout. (Andrey
+    Klochkov via atm)
+
+    HADOOP-8806. libhadoop.so: dlopen should be better at locating
+    libsnappy.so, etc. (Colin Patrick McCabe via eli)
+
+    HADOOP-8812. ExitUtil#terminate should print Exception#toString. (eli)
+
+    HADOOP-3957. Change MutableQuantiles to use a shared thread for rolling
+    over metrics. (Andrew Wang via todd)
+
+    HADOOP-8851. Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked
+    tests. (Ivan A. Veselovsky via atm)
+
+    HADOOP-8783. Improve RPC.Server's digest auth (daryn)
+
+    HADOOP-8889. Upgrade to Surefire 2.12.3 (todd)
+
+    HADOOP-8804. Improve Web UIs when the wildcard address is used.
+    (Senthil Kumar via eli)
+
+    HADOOP-8894. GenericTestUtils.waitFor should dump thread stacks on timeout
+    (todd)
+
+    HADOOP-8909. Hadoop Common Maven protoc calls must not depend on external
+    sh script. (Chris Nauroth via suresh)
+
+    HADOOP-8911. CRLF characters in source and text files.
+    (Raja Aluri via suresh)
+
+    HADOOP-8912. Add .gitattributes file to prevent CRLF and LF mismatches
+    for source and text files. (Raja Aluri via suresh)
+
+    HADOOP-8784. Improve IPC.Client's token use (daryn)
+
+    HADOOP-8929. Add toString, other improvements for SampleQuantiles (todd)
+
+    HADOOP-8922. Provide alternate JSONP output for JMXJsonServlet to allow
+    javascript in browser (Damien Hardy via bobby)
+
+    HADOOP-8931. Add Java version to startup message. (eli)
+
+    HADOOP-8925. Remove the packaging. (eli)
+
+    HADOOP-8985. Add namespace declarations in .proto files for languages 
+    other than java. (Binglin Chan via suresh)
+
+    HADOOP-9009. Add SecurityUtil methods to get/set authentication method
+    (daryn via bobby)
+
+    HADOOP-9010. Map UGI authenticationMethod to RPC authMethod (daryn via
+    bobby)
+
+    HADOOP-9013. UGI should not hardcode loginUser's authenticationType (daryn
+    via bobby)
+
+    HADOOP-9014. Standardize creation of SaslRpcClients (daryn via bobby)
+
+    HADOOP-9015. Standardize creation of SaslRpcServers (daryn via bobby)
+
+    HADOOP-9021. Enforce configured SASL method on the server (daryn via
+    bobby)
+
+    HADOOP-8998. set Cache-Control no-cache header on all dynamic content. (tucu)
+
+    HADOOP-9035. Generalize setup of LoginContext (daryn via bobby)
+
+    HADOOP-9093. Move all the Exception in PathExceptions to o.a.h.fs package.
+    (suresh)
+
+    HADOOP-9127. Update documentation for ZooKeeper Failover Controller.
+    (Daisuke Kobayashi via atm)
+
+    HADOOP-9004. Allow security unit tests to use external KDC. (Stephen Chu
+    via suresh)
+
+    HADOOP-9147. Add missing fields to FileStatus.toString.
+    (Jonathan Allen via suresh)
+
+    HADOOP-8427. Convert Forrest docs to APT, incremental. (adi2 via tucu)
+
+    HADOOP-9162. Add utility to check native library availability.
+    (Binglin Chang via suresh)
+
+    HADOOP-9173. Add security token protobuf definition to common and
+    use it in hdfs. (suresh)
+
+    HADOOP-9119. Add test to FileSystemContractBaseTest to verify integrity
+    of overwritten files. (Steve Loughran via suresh)
+
+    HADOOP-9192. Move token related request/response messages to common.
+    (suresh)
+
+    HADOOP-8712. Change default hadoop.security.group.mapping to
+    JniBasedUnixGroupsNetgroupMappingWithFallback (Robert Parker via todd)
+
+    HADOOP-9106. Allow configuration of IPC connect timeout.
+    (Robert Parker via suresh)
+
+    HADOOP-9216. CompressionCodecFactory#getCodecClasses should trim the
+    result of parsing by Configuration. (Tsuyoshi Ozawa via todd)
+
+    HADOOP-9231. Parametrize staging URL for the uniformity of
+    distributionManagement. (Konstantin Boudnik via suresh)
+
+    HADOOP-9247. Parametrize Clover "generateXxx" properties to make them
+    re-definable via -D in mvn calls. (Ivan A. Veselovsky via suresh)
+
+    HADOOP-9276. Allow BoundedByteArrayOutputStream to be resettable.
+    (Arun Murthy via hitesh)
+
+  OPTIMIZATIONS
+
+    HADOOP-8866. SampleQuantiles#query is O(N^2) instead of O(N). (Andrew Wang
+    via atm)
+
+    HADOOP-8926. hadoop.util.PureJavaCrc32 cache hit-ratio is low for static
+    data (Gopal V via bobby)
+
+    HADOOP-9042. Add a test for umask in FileSystemContractBaseTest.
+    (Colin McCabe via eli)
+
+  BUG FIXES
+
+    HADOOP-9041. FsUrlStreamHandlerFactory could cause an infinite loop in
+    FileSystem initialization. (Yanbo Liang and Radim Kolar via llu)
+
+    HADOOP-8418. Update UGI Principal classes name for running with
+    IBM JDK on 64 bits Windows.  (Yu Gao via eyang)
+
+    HADOOP-8795. BASH tab completion doesn't look in PATH, assumes path to
+    executable is specified. (Sean Mackrory via atm)
+
+    HADOOP-8780. Update DeprecatedProperties apt file. (Ahmed Radwan via
+    tomwhite)
+
+    HADOOP-8833. fs -text should make sure to call inputstream.seek(0)
+    before using input stream. (tomwhite and harsh)
+
+    HADOOP-8791. Fix rm command documentation to indicate it deletes
+    files and not directories. (Jing Zhao via suresh)
+
+    HADOOP-8855. SSL-based image transfer does not work when Kerberos
+    is disabled. (todd via eli)
+
+    HADOOP-8616. ViewFS configuration requires a trailing slash. (Sandy Ryza
+    via atm)
+
+    HADOOP-8756. Fix SEGV when libsnappy is in java.library.path but
+    not LD_LIBRARY_PATH. (Colin Patrick McCabe via eli)
+
+    HADOOP-8881. FileBasedKeyStoresFactory initialization logging should be debug not info. (tucu)
+
+    HADOOP-8913. hadoop-metrics2.properties should give units in comment 
+    for sampling period. (Sandy Ryza via suresh)
+
+    HADOOP-8878. Uppercase namenode hostname causes hadoop dfs calls with 
+    webhdfs filesystem and fsck to fail when security is on.
+    (Arpit Gupta via suresh)
+
+    HADOOP-8901. GZip and Snappy support may not work without unversioned
+    libraries (Colin Patrick McCabe via todd)
+
+    HADOOP-8883. Anonymous fallback in KerberosAuthenticator is broken.
+    (rkanter via tucu)
+
+    HADOOP-8900. BuiltInGzipDecompressor throws IOException - stored gzip size
+    doesn't match decompressed size. (Andy Isaacson via suresh)
+
+    HADOOP-8948. TestFileUtil.testGetDU fails on Windows due to incorrect
+    assumption of line separator. (Chris Nauroth via suresh)
+
+    HADOOP-8951. RunJar to fail with user-comprehensible error 
+    message if jar missing. (stevel via suresh)
+
+    HADOOP-8713. TestRPCCompatibility fails intermittently with JDK7
+    (Trevor Robinson via tgraves)
+
+    HADOOP-9012. IPC Client sends wrong connection context (daryn via bobby)
+
+    HADOOP-7115. Add a cache for getpwuid_r and getpwgid_r calls (tucu)
+
+    HADOOP-6607. Add different variants of non caching HTTP headers. (tucu)
+
+    HADOOP-9049. DelegationTokenRenewer needs to be Singleton and FileSystems
+    should register/deregister to/from. (Karthik Kambatla via tomwhite)
+
+    HADOOP-9064. Augment DelegationTokenRenewer API to cancel the tokens on 
+    calls to removeRenewAction. (kkambatl via tucu)
+
+    HADOOP-9103. UTF8 class does not properly decode Unicode characters
+    outside the basic multilingual plane. (todd)
+
+    HADOOP-9070. Kerberos SASL server cannot find kerberos key. (daryn via atm)
+
+    HADOOP-6762. Exception while doing RPC I/O closes channel
+    (Sam Rash and todd via todd)
+
+    HADOOP-9126. FormatZK and ZKFC startup can fail due to zkclient connection
+    establishment delay. (Rakesh R and todd via todd)
+
+    HADOOP-9113. o.a.h.fs.TestDelegationTokenRenewer is failing intermittently.
+    (Karthik Kambatla via eli)
+
+    HADOOP-9135. JniBasedUnixGroupsMappingWithFallback should log at debug
+    rather than info during fallback. (Colin Patrick McCabe via todd)
+
+    HADOOP-9152. HDFS can report negative DFS Used on clusters with very small
+    amounts of data. (Brock Noland via atm)
+
+    HADOOP-9153. Support createNonRecursive in ViewFileSystem.
+    (Sandy Ryza via tomwhite)
+
+    HADOOP-9181. Set daemon flag for HttpServer's QueuedThreadPool.
+    (Liang Xie via suresh)
+
+    HADOOP-9155. FsPermission should have different default value, 777 for
+    directory and 666 for file. (Binglin Chang via atm)
+
+    HADOOP-9183. Potential deadlock in ActiveStandbyElector. (tomwhite)
+
+    HADOOP-9203. RPCCallBenchmark should find a random available port.
+    (Andrew Purtell via suresh)
+
+    HADOOP-9178. src/main/conf is missing hadoop-policy.xml.
+    (Sandy Ryza via eli)
+
+    HADOOP-8816. HTTP Error 413 full HEAD if using kerberos authentication. 
+    (moritzmoeller via tucu)
+    
+    HADOOP-9212. Potential deadlock in FileSystem.Cache/IPC/UGI. (tomwhite)
+
+    HADOOP-8589 ViewFs tests fail when tests and home dirs are nested.
+    (sanjay Radia)
+
+    HADOOP-9193. hadoop script can inadvertently expand wildcard arguments
+    when delegating to hdfs script. (Andy Isaacson via todd)
+
+    HADOOP-9215. when using cmake-2.6, libhadoop.so doesn't get created
+    (only libhadoop.so.1.0.0) (Colin Patrick McCabe via todd)
+
+    HADOOP-8857. hadoop.http.authentication.signature.secret.file docs 
+    should not state that secret is randomly generated. (tucu)
+
+    HADOOP-9190. packaging docs is broken. (Andy Isaacson via tgraves)
+
+    HADOOP-9221. Convert remaining xdocs to APT. (Andy Isaacson via atm)
+
+    HADOOP-8981. TestMetricsSystemImpl fails on Windows. (Xuan Gong via suresh)
+    
+    HADOOP-9124. SortedMapWritable violates contract of Map interface for
+    equals() and hashCode(). (Surenkumar Nihalani via tomwhite)
+
+    HADOOP-9252. In StringUtils, humanReadableInt(..) has a race condition and
+    the synchronization of limitDecimalTo2(double) can be avoided.  (szetszwo)
+
+    HADOOP-9260. Hadoop version may be not correct when starting name node or
+    data node. (Chris Nauroth via jlowe)
+
+    HADOOP-9278. Fix the file handle leak in HarMetaData.parseMetaData() in
+    HarFileSystem. (Chris Nauroth via szetszwo)
+
+    HADOOP-9289. FsShell rm -f fails for non-matching globs. (Daryn Sharp via
+    suresh)
+
+Release 2.0.2-alpha - 2012-09-07 
+
+  INCOMPATIBLE CHANGES
+
+    HADOOP-8388. Remove unused BlockLocation serialization.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8689. Make trash a server side configuration option. (eli)
+
+    HADOOP-8710. Remove ability for users to easily run the trash emptier. (eli)
+    
+    HADOOP-8794. Rename YARN_HOME to HADOOP_YARN_HOME. (vinodkv via acmurthy)
+
+  NEW FEATURES
+ 
+    HDFS-3042. Automatic failover support for NameNode HA (todd)
+    (see dedicated section below for breakdown of subtasks)
+
+    HADOOP-8135. Add ByteBufferReadable interface to FSDataInputStream. (Henry
+    Robinson via atm)
+
+    HADOOP-8458. Add management hook to AuthenticationHandler to enable 
+    delegation token operations support (tucu)
+
+    HADOOP-8465. hadoop-auth should support ephemeral authentication (tucu)
+
+    HADOOP-8644. AuthenticatedURL should be able to use SSLFactory. (tucu)
+
+    HADOOP-8581. add support for HTTPS to the web UIs. (tucu)
+
+    HADOOP-7754. Expose file descriptors from Hadoop-wrapped local 
+    FileSystems (todd and ahmed via tucu)
+
+    HADOOP-8240. Add a new API to allow users to specify a checksum type
+    on FileSystem.create(..).  (Kihwal Lee via szetszwo)
+
+  IMPROVEMENTS
+
+    HADOOP-8340. SNAPSHOT build versions should compare as less than their eventual
+    final release. (todd)
+
+    HADOOP-8361. Avoid out-of-memory problems when deserializing strings.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8224. Don't hardcode hdfs.audit.logger in the scripts.
+    (Tomohiko Kinebuchi via eli)
+
+    HADOOP-8398. Cleanup BlockLocation. (eli)
+
+    HADOOP-8422. Deprecate FileSystem#getDefault* and getServerDefault
+    methods that don't take a Path argument. (eli)
+
+    HADOOP-8323. Add javadoc and tests for Text.clear() behavior (harsh)
+
+    HADOOP-8358. Config-related WARN for dfs.web.ugi can be avoided. (harsh)
+
+    HADOOP-8450. Remove src/test/system. (eli)
+
+    HADOOP-8244. Improve comments on ByteBufferReadable.read. (Henry Robinson
+    via atm)
+
+    HADOOP-8368. Use CMake rather than autotools to build native code (cmccabe via tucu)
+
+    HADOOP-8524. Allow users to get source of a Configuration
+    parameter (harsh)
+
+    HADOOP-8449. hadoop fs -text fails with compressed sequence files
+    with the codec file extension (harsh)
+
+    HADOOP-6802. Remove FS_CLIENT_BUFFER_DIR_KEY = "fs.client.buffer.dir"
+    from CommonConfigurationKeys.java (not used, deprecated)
+    (Sho Shimauchi via harsh)
+
+    HADOOP-3450. Add tests to Local Directory Allocator for
+    asserting their URI-returning capability (Sho Shimauchi via harsh)
+
+    HADOOP-8463. hadoop.security.auth_to_local needs a key definition and doc.
+    (Madhukara Phatak via eli)
+
+    HADOOP-8533. Remove parallel call unused capability in RPC.
+    (Brandon Li via suresh)
+
+    HADOOP-8423. MapFile.Reader.get() crashes jvm or throws
+    EOFException on Snappy or LZO block-compressed data
+    (todd via harsh)
+
+    HADOOP-8541. Better high-percentile latency metrics. (Andrew Wang via atm)
+
+    HADOOP-8362. Improve exception message when Configuration.set() is
+    called with a null key or value. (Madhukara Phatak
+    and Suresh Srinivas via harsh)
+
+    HADOOP-7818. DiskChecker#checkDir should fail if the directory is
+    not executable. (Madhukara Phatak via harsh)
+
+    HADOOP-8531. SequenceFile Writer can throw out a better error if a
+    serializer or deserializer isn't available
+    (Madhukara Phatak via harsh)
+
+    HADOOP-8609. IPC server logs a useless message when shutting down socket.
+    (Jon Zuanich via atm)
+
+    HADOOP-8620. Add -Drequire.fuse and -Drequire.snappy. (Colin
+    Patrick McCabe via eli)
+
+    HADOOP-8687. Upgrade log4j to 1.2.17. (eli)
+
+    HADOOP-8278. Make sure components declare correct set of dependencies.
+    (tomwhite)
+
+    HADOOP-8700.  Use enum to define the checksum constants in DataChecksum.
+    (szetszwo)
+
+    HADOOP-8686. Fix warnings in native code. (Colin Patrick McCabe via eli)
+
+    HADOOP-8239. Add subclasses of MD5MD5CRC32FileChecksum to support file
+    checksum with CRC32C.  (Kihwal Lee via szetszwo)
+
+    HADOOP-8075. Lower native-hadoop library log from info to debug.
+    (Hızır Sefa İrken via eli)
+
+    HADOOP-8748. Refactor DFSClient retry utility methods to a new class
+    in org.apache.hadoop.io.retry.  (Arun C Murthy via szetszwo)
+
+    HADOOP-8754. Deprecate all the RPC.getServer() variants.  (Brandon Li
+    via szetszwo)
+
+    HADOOP-8801. ExitUtil#terminate should capture the exception stack trace. (eli)
+
+    HADOOP-8819. Incorrectly & is used instead of && in some file system 
+    implementations. (Brandon Li via suresh)
+
+    HADOOP-8736. Add Builder for building RPC server. (Brandon Li via Suresh)
+
+  BUG FIXES
+
+    HADOOP-8372. NetUtils.normalizeHostName() incorrectly handles hostname
+    starting with a numeric character. (Junping Du via suresh)
+
+    HADOOP-8393. hadoop-config.sh missing variable exports, causes Yarn jobs 
+    to fail with ClassNotFoundException MRAppMaster. (phunt via tucu)
+
+    HADOOP-8316. Audit logging should be disabled by default. (eli)
+
+    HADOOP-8400. All commands warn "Kerberos krb5 configuration not found" 
+    when security is not enabled. (tucu)
+
+    HADOOP-8406. CompressionCodecFactory.CODEC_PROVIDERS iteration is
+    thread-unsafe (todd)
+
+    HADOOP-8287. etc/hadoop is missing hadoop-env.sh (eli)
+
+    HADOOP-8408. MR doesn't work with a non-default ViewFS mount table
+    and security enabled. (atm via eli)
+
+    HADOOP-8329. Build fails with Java 7. (eli)
+
+    HADOOP-8268. A few pom.xml files across the Hadoop project
+    may fail XML validation. (Radim Kolar via harsh)
+
+    HADOOP-8444. Fix the tests FSMainOperationsBaseTest.java and
+    FileContextMainOperationsBaseTest.java to avoid potential
+    test failure (Madhukara Phatak via harsh)
+
+    HADOOP-8452. DN logs backtrace when running under jsvc and /jmx is loaded 
+    (Andy Isaacson via bobby)
+
+    HADOOP-8460. Document proper setting of HADOOP_PID_DIR and 
+    HADOOP_SECURE_DN_PID_DIR (bobby)
+
+    HADOOP-8466. hadoop-client POM incorrectly excludes avro. (bmahe via tucu)
+
+    HADOOP-8481. update BUILDING.txt to talk about cmake rather than autotools.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8485. Don't hardcode "Apache Hadoop 0.23" in the docs. (eli)
+
+    HADOOP-8488. test-patch.sh gives +1 even if the native build fails.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8507. Avoid OOM while deserializing DelegationTokenIdentifer.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8433. Don't set HADOOP_LOG_DIR in hadoop-env.sh.
+    (Brahma Reddy Battula via eli)
+
+    HADOOP-8509. JarFinder duplicate entry: META-INF/MANIFEST.MF exception (tucu)
+
+    HADOOP-8512. AuthenticatedURL should reset the Token when the server returns 
+    other than OK on authentication (tucu)
+
+    HADOOP-8168. empty-string owners or groups cause MissingFormatWidthException
+    in o.a.h.fs.shell.Ls.ProcessPath() (ekoontz via tucu)
+
+    HADOOP-8438. hadoop-validate-setup.sh refers to examples jar file which doesn't exist
+    (Devaraj K via umamahesh)
+
+    HADOOP-8538. CMake builds fail on ARM. (Trevor Robinson via eli)
+
+    HADOOP-8547. Package hadoop-pipes examples/bin directory (again).
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8563. don't package hadoop-pipes examples/bin
+    (Colin Patrick McCabe via tgraves)
+
+    HADOOP-8566. AvroReflectSerializer.accept(Class) throws a NPE if the class has no 
+    package (primitive types and arrays). (tucu)
+
+    HADOOP-8586. Fixup a bunch of SPNEGO misspellings. (eli)
+
+    HADOOP-3886. Error in javadoc of Reporter, Mapper and Progressable
+    (Jingguo Yao via harsh)
+
+    HADOOP-8587. HarFileSystem access of harMetaCache isn't threadsafe. (eli)
+
+    HADOOP-8585. Fix initialization circularity between UserGroupInformation
+    and HadoopConfiguration. (Colin Patrick McCabe via atm)
+
+    HADOOP-8552. Conflict: Same security.log.file for multiple users. 
+    (kkambatl via tucu)
+
+    HADOOP-8537. Fix TFile tests to pass even when native zlib support is not
+    compiled. (todd)
+
+    HADOOP-8626. Typo in default setting for
+    hadoop.security.group.mapping.ldap.search.filter.user. (Jonathan Natkins
+    via atm)
+
+    HADOOP-8480. The native build should honor -DskipTests.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8659. Native libraries must build with soft-float ABI for Oracle JVM
+    on ARM. (Trevor Robinson via todd)
+
+    HADOOP-8654. TextInputFormat delimiter bug (Gelesh and Jason Lowe via
+    bobby)
+
+    HADOOP-8614. IOUtils#skipFully hangs forever on EOF. 
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8720. TestLocalFileSystem should use test root subdirectory.
+    (Vlad Rozov via eli)
+
+    HADOOP-8721. ZKFC should not retry 45 times when attempting a graceful
+    fence during a failover. (Vinayakumar B via atm)
+
+    HADOOP-8632. Configuration leaking class-loaders (Costin Leau via bobby)
+
+    HADOOP-4572. Can not access user logs - Jetty is not configured by default 
+    to serve aliases/symlinks (ahmed via tucu)
+
+    HADOOP-8660. TestPseudoAuthenticator failing with NPE. (tucu)
+
+    HADOOP-8699. some common testcases create core-site.xml in test-classes,
+    causing other testcases to fail. (tucu)
+
+    HADOOP-8031. Configuration class fails to find embedded .jar resources; 
+    should use URL.openStream() (genman via tucu)
+
+    HADOOP-8737. cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8747. Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake. (cmccabe via tucu)
+
+    HADOOP-8722. Update BUILDING.txt with latest snappy info.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8764. CMake: HADOOP-8737 broke ARM build. (Trevor Robinson via eli)
+
+    HADOOP-8770. NN should not RPC to self to find trash defaults. (eli)
+
+    HADOOP-8648. libhadoop: native CRC32 validation crashes when
+    io.bytes.per.checksum=1. (Colin Patrick McCabe via eli)
+
+    HADOOP-8766. FileContextMainOperationsBaseTest should randomize the root
+    dir. (Colin Patrick McCabe via atm)
+
+    HADOOP-8749. HADOOP-8031 changed the way in which relative xincludes are handled in 
+    Configuration. (ahmed via tucu)
+
+    HADOOP-8431. Running distcp without args throws IllegalArgumentException.
+    (Sandy Ryza via eli)
+
+    HADOOP-8775. MR2 distcp permits non-positive value to -bandwidth option
+    which causes job never to complete. (Sandy Ryza via atm)
+
+    HADOOP-8781. hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH. (tucu)
+
+  BREAKDOWN OF HDFS-3042 SUBTASKS
+
+    HADOOP-8220. ZKFailoverController doesn't handle failure to become active
+    correctly (todd)
+
+    HADOOP-8228. Auto HA: Refactor tests and add stress tests. (todd)
+    
+    HADOOP-8215. Security support for ZK Failover controller (todd)
+    
+    HADOOP-8245. Fix flakiness in TestZKFailoverController (todd)
+    
+    HADOOP-8257. TestZKFailoverControllerStress occasionally fails with Mockito
+    error (todd)
+    
+    HADOOP-8260. Replace ClientBaseWithFixes with our own modified copy of the
+    class (todd)
+    
+    HADOOP-8246. Auto-HA: automatically scope znode by nameservice ID (todd)
+    
+    HADOOP-8247. Add a config to enable auto-HA, which disables manual
+    FailoverController (todd)
+    
+    HADOOP-8306. ZKFC: improve error message when ZK is not running. (todd)
+    
+    HADOOP-8279. Allow manual failover to be invoked when auto-failover is
+    enabled. (todd)
+    
+    HADOOP-8276. Auto-HA: add config for java options to pass to zkfc daemon
+    (todd via eli)
+    
+    HADOOP-8405. ZKFC tests leak ZK instances. (todd)
+
+Release 2.0.0-alpha - 05-23-2012
+
+  INCOMPATIBLE CHANGES
+
+    HADOOP-7920. Remove Avro Rpc. (suresh)
+
+  NEW FEATURES
+
+    HADOOP-7773. Add support for protocol buffer based RPC engine.
+    (suresh)
+
+    HADOOP-7875. Add helper class to unwrap protobuf ServiceException.
+    (suresh)
+
+    HADOOP-7454. Common side of High Availability Framework (HDFS-1623)
+    Contributed by Todd Lipcon, Aaron T. Myers, Eli Collins, Uma Maheswara Rao G,
+    Bikas Saha, Suresh Srinivas, Jitendra Nath Pandey, Hari Mankude, Brandon Li,
+    Sanjay Radia, Mingjie Lai, and Gregory Chanan
+
+    HADOOP-8121. Active Directory Group Mapping Service. (Jonathan Natkins via
+    atm)
+
+    HADOOP-7030. Add TableMapping topology implementation to read host to rack
+    mapping from a file. (Patrick Angeles and tomwhite via tomwhite)
+
+    HADOOP-8206. Common portion of a ZK-based failover controller (todd)
+
+    HADOOP-8210. Common side of HDFS-3148: The client should be able
+    to use multiple local interfaces for data transfer. (eli)
+
+    HADOOP-8343. Allow configuration of authorization for JmxJsonServlet and 
+    MetricsServlet (tucu)
+
+  IMPROVEMENTS
+
+    HADOOP-7524. Change RPC to allow multiple protocols including multiple
+    versions of the same protocol (sanjay Radia)
+
+    HADOOP-7607. Simplify the RPC proxy cleanup process. (atm)
+
+    HADOOP-7687. Make getProtocolSignature public  (sanjay)
+
+    HADOOP-7693. Enhance AvroRpcEngine to support the new #addProtocol
+    interface introduced in HADOOP-7524.  (cutting)
+
+    HADOOP-7716. RPC protocol registration on SS does not log the protocol name
+    (only the class which may be different) (sanjay)
+
+    HADOOP-7776. Make the Ipc-Header in a RPC-Payload an explicit header.
+    (sanjay)
+
+    HADOOP-7862. Move the support for multiple protocols to lower layer so
+    that Writable, PB and Avro can all use it (Sanjay)
+
+    HADOOP-7876. Provided access to encoded key in DelegationKey for
+    use in protobuf based RPCs. (suresh)
+
+    HADOOP-7899. Generate proto java files as part of the build. (tucu)
+
+    HADOOP-7957. Classes deriving GetGroupsBase should be able to override 
+    proxy creation. (jitendra)
+
+    HADOOP-7965. Support for protocol version and signature in PB. (jitendra)
+
+    HADOOP-8070. Add a standalone benchmark for RPC call performance. (todd)
+
+    HADOOP-8084. Updates ProtoBufRpc engine to not do an unnecessary copy 
+    for RPC request/response. (ddas)
+
+    HADOOP-8085. Add RPC metrics to ProtobufRpcEngine. (Hari Mankude via
+    suresh)
+
+    HADOOP-8098. KerberosAuthenticatorHandler should use _HOST replacement to 
+    resolve principal name (tucu)
+
+    HADOOP-8118.  In metrics2.util.MBeans, change log level to trace for the
+    stack trace of InstanceAlreadyExistsException.  (szetszwo)
+
+    HADOOP-8125. make hadoop-client set of curated jars available in a
+    distribution tarball (rvs via tucu)
+
+    HADOOP-7717. Move handling of concurrent client fail-overs to
+    RetryInvocationHandler (atm)
+
+    HADOOP-7728. Enable task memory management to be configurable in hadoop
+    config setup script. (ramya)
+
+    HADOOP-7358. Improve log levels when exceptions caught in RPC handler
+    (Todd Lipcon via shv)
+
+    HADOOP-7557 Make IPC header be extensible (sanjay radia)
+
+    HADOOP-7806. Support binding to sub-interfaces (eli)
+
+    HADOOP-6941. Adds support for building Hadoop with IBM's JDK 
+    (Stephen Watt, Eli and ddas)
+
+    HADOOP-8183. Stop using "mapred.used.genericoptions.parser" (harsh)
+
+    HADOOP-6924. Adds a directory to the list of directories to search
+    for the libjvm.so file. The new directory is found by running a 'find'
+    command and the first output is taken. This was done to handle the 
+    build of Hadoop with IBM's JDK. (Stephen Watt, Guillermo Cabrera and ddas)
+
+    HADOOP-8200. Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS. (eli)
+
+    HADOOP-8184.  ProtoBuf RPC engine uses the IPC layer reply packet.
+    (Sanjay Radia via szetszwo)
+
+    HADOOP-8163. Improve ActiveStandbyElector to provide hooks for
+    fencing old active. (todd)
+
+    HADOOP-8193. Refactor FailoverController/HAAdmin code to add an abstract
+    class for "target" services. (todd)
+
+    HADOOP-8212. Improve ActiveStandbyElector's behavior when session expires
+    (todd)
+
+    HADOOP-8216. Address log4j.properties inconsistencies btw main and
+    template dirs. (Patrick Hunt via eli)
+
+    HADOOP-8149. Cap space usage of default log4j rolling policy.
+    (Patrick Hunt via eli)
+
+    HADOOP-8211. Update commons-net version to 3.1. (eli)
+
+    HADOOP-8236. haadmin should have configurable timeouts for failover
+    commands. (todd)
+
+    HADOOP-8242. AbstractDelegationTokenIdentifier: add getter methods
+    for owner and realuser. (Colin Patrick McCabe via eli)
+
+    HADOOP-8007. Use substitution tokens for fencing argument (todd)
+
+    HADOOP-8077. HA: fencing method should be able to be configured on
+    a per-NN or per-NS basis (todd)
+
+    HADOOP-8086. KerberosName silently sets defaultRealm to "" if the
+    Kerberos config is not found; it should log a WARN (tucu)
+
+    HADOOP-8280. Move VersionUtil/TestVersionUtil and GenericTestUtils from
+    HDFS into Common. (Ahmed Radwan via atm)
+
+    HADOOP-8117. Upgrade test build to Surefire 2.12 (todd)
+
+    HADOOP-8152. Expand public APIs for security library classes. (atm via eli)
+
+    HADOOP-7549. Use JDK ServiceLoader mechanism to find FileSystem implementations. (tucu)
+
+    HADOOP-8185. Update namenode -format documentation and add -nonInteractive
+    and -force. (Arpit Gupta via atm)
+
+    HADOOP-8214. make hadoop script recognize a full set of deprecated commands (rvs via tucu)
+
+    HADOOP-8347. Hadoop Common logs misspell 'successful'.
+    (Philip Zeyliger via eli)
+
+    HADOOP-8350. Improve NetUtils.getInputStream to return a stream which has
+    a tunable timeout. (todd)
+
+    HADOOP-8356. FileSystem service loading mechanism should print the FileSystem 
+    impl it is failing to load (tucu)
+
+    HADOOP-8353. hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop.
+    (Roman Shaposhnik via atm)
+
+    HADOOP-8113. Correction to BUILDING.txt: HDFS needs ProtocolBuffer, too
+    (not just MapReduce). Contributed by Eugene Koontz.
+
+    HADOOP-8285 Use ProtoBuf for RpcPayLoadHeader (sanjay radia)
+
+    HADOOP-8366 Use ProtoBuf for RpcResponseHeader (sanjay radia)
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+
+    HADOOP-8199. Fix issues in start-all.sh and stop-all.sh (Devaraj K via umamahesh)
+    
+    HADOOP-7635. RetryInvocationHandler should release underlying resources on
+    close. (atm)
+
+    HADOOP-7695. RPC.stopProxy can throw unintended exception while logging
+    error. (atm)
+
+    HADOOP-7833. Fix findbugs warnings in protobuf generated code.
+    (John Lee via suresh)
+
+    HADOOP-7897. ProtobufRpcEngine client side exception mechanism is not
+    consistent with WritableRpcEngine. (suresh)
+
+    HADOOP-7913. Fix bug in ProtoBufRpcEngine.  (sanjay)
+
+    HADOOP-7892. IPC logs too verbose after "RpcKind" introduction. (todd)
+
+    HADOOP-7968. Errant println left in RPC.getHighestSupportedProtocol. (Sho
+    Shimauchi via harsh)
+
+    HADOOP-7931. o.a.h.ipc.WritableRpcEngine should have a way to force
+    initialization. (atm)
+
+    HADOOP-8104. Inconsistent Jackson versions (tucu)
+
+    HADOOP-8119. Fix javac warnings in TestAuthenticationFilter in hadoop-auth.
+    (szetszwo)
+
+    HADOOP-7888. TestFailoverProxy fails intermittently on trunk. (Jason Lowe
+    via atm)
+
+    HADOOP-8154. DNS#getIPs shouldn't silently return the local host
+    IP for bogus interface names. (eli)
+
+    HADOOP-8169.  javadoc generation fails with java.lang.OutOfMemoryError:
+    Java heap space (tgraves via bobby)
+
+    HADOOP-8167. Configuration deprecation logic breaks backwards compatibility (tucu)
+
+    HADOOP-8189. LdapGroupsMapping shouldn't throw away IOException. (Jonathan Natkins via atm)
+
+    HADOOP-8191. SshFenceByTcpPort uses netcat incorrectly (todd)
+
+    HADOOP-8157. Fix race condition in Configuration that could cause spurious
+    ClassNotFoundExceptions after a GC. (todd)
+
+    HADOOP-8197. Configuration logs WARNs on every use of a deprecated key (tucu)
+
+    HADOOP-8159. NetworkTopology: getLeaf should check for invalid topologies.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8204. TestHealthMonitor fails occasionally (todd)
+
+    HADOOP-8202. RPC stopProxy() does not close the proxy correctly.
+    (Hari Mankude via suresh)
+
+    HADOOP-8218. RPC.closeProxy shouldn't throw error when closing a mock
+    (todd)
+
+    HADOOP-8238. NetUtils#getHostNameOfIP blows up if given ip:port
+    string w/o port. (eli)
+
+    HADOOP-8243. Security support broken in CLI (manual) failover controller
+    (todd)
+
+    HADOOP-8251. Fix SecurityUtil.fetchServiceTicket after HADOOP-6941 (todd)
+
+    HADOOP-8249. invalid hadoop-auth cookies should trigger authentication 
+    if info is avail before returning HTTP 401 (tucu)
+
+    HADOOP-8261. Har file system doesn't deal with FS URIs with a host but no
+    port. (atm)
+
+    HADOOP-8263. Stringification of IPC calls not useful (todd)
+
+    HADOOP-8264. Remove irritating double double quotes in front of hostname
+    (Bernd Fondermann via bobby)
+
+    HADOOP-8270. hadoop-daemon.sh stop action should return 0 for an
+    already stopped service. (Roman Shaposhnik via eli)
+
+    HADOOP-8144. pseudoSortByDistance in NetworkTopology doesn't work
+    properly if no local node and first node is local rack node.
+    (Junping Du)
+
+    HADOOP-8282. start-all.sh incorrectly checks for the existence of
+    start-dfs.sh when starting start-yarn.sh. (Devaraj K via eli)
+
+    HADOOP-7350. Use ServiceLoader to discover compression codec classes.
+    (tomwhite)
+
+    HADOOP-8284. clover integration broken, also mapreduce poms are pulling
+    in clover as a dependency. (phunt via tucu)
+
+    HADOOP-8309. Pseudo & Kerberos AuthenticationHandler should use 
+    getType() to create token (tucu)
+
+    HADOOP-8314. HttpServer#hasAdminAccess should return false if 
+    authorization is enabled but user is not authenticated. (tucu)
+
+    HADOOP-8296. hadoop/yarn daemonlog usage wrong (Devaraj K via tgraves)
+
+    HADOOP-8310. FileContext#checkPath should handle URIs with no port. (atm)
+
+    HADOOP-8321. TestUrlStreamHandler fails. (tucu)
+
+    HADOOP-8325. Add a ShutdownHookManager to be used by different
+    components instead of the JVM shutdownhook (tucu)
+
+    HADOOP-8275. Range check DelegationKey length.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8342. HDFS command fails with exception following merge of 
+    HADOOP-8325 (tucu)
+
+    HADOOP-8346. Makes OID changes to make SPNEGO work. Was broken due
+    to fixes introduced by the IBM JDK compatibility patch. (ddas)
+
+    HADOOP-8355. SPNEGO filter throws/logs exception when authentication fails (tucu)
+
+    HADOOP-8349. ViewFS doesn't work when the root of a file system is mounted. (atm)
+
+    HADOOP-8328. Duplicate FileSystem Statistics object for 'file' scheme.
+    (tomwhite)
+
+    HADOOP-8359. Fix javadoc warnings in Configuration.  (Anupam Seth via
+    szetszwo)
+
+  BREAKDOWN OF HADOOP-7454 SUBTASKS
+
+    HADOOP-7455. HA: Introduce HA Service Protocol Interface. (suresh)
+    
+    HADOOP-7774. HA: Administrative CLI to control HA daemons. (todd)
+    
+    HADOOP-7896. HA: if both NNs are in Standby mode, client needs to try failing
+    back and forth several times with sleeps. (atm)
+    
+    HADOOP-7922. Improve some logging for client IPC failovers and
+    StandbyExceptions (todd)
+    
+    HADOOP-7921. StandbyException should extend IOException (todd)
+    
+    HADOOP-7928. HA: Client failover policy is incorrectly trying to fail over all
+    IOExceptions (atm)
+    
+    HADOOP-7925. Add interface and update CLI to query current state to
+    HAServiceProtocol (eli via todd)
+    
+    HADOOP-7932. Make client connection retries on socket time outs configurable.
+    (Uma Maheswara Rao G via todd)
+    
+    HADOOP-7924. FailoverController for client-based configuration (eli)
+    
+    HADOOP-7961. Move HA fencing to common. (eli)
+    
+    HADOOP-7970. HAServiceProtocol methods must throw IOException.  (Hari Mankude
+    via suresh).
+    
+    HADOOP-7992. Add ZKClient library to facilitate leader election.  (Bikas Saha
+    via suresh).
+    
+    HADOOP-7983. HA: failover should be able to pass args to fencers. (eli)
+    
+    HADOOP-7938. HA: the FailoverController should optionally fence the active
+    during failover. (eli)
+    
+    HADOOP-7991. HA: the FailoverController should check the standby is ready
+    before failing over. (eli)
+    
+    HADOOP-8038. Add 'ipc.client.connect.max.retries.on.timeouts' entry in
+    core-default.xml file. (Uma Maheswara Rao G via atm)
+    
+    HADOOP-8041. Log a warning when a failover is first attempted (todd)
+    
+    HADOOP-8068. void methods can swallow exceptions when going through failover
+    path (todd)
+    
+    HADOOP-8116. RetriableCommand is using RetryPolicy incorrectly after
+    HADOOP-7896. (atm)
+
+    HADOOP-8317. Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
+    (Radim Kolar via bobby)
+
+    HADOOP-8172. Configuration no longer sets all keys in a deprecated key 
+    list. (Anupam Seth via bobby)
+
+    HADOOP-7868. Hadoop native fails to compile when default linker
+    option is -Wl,--as-needed. (Trevor Robinson via eli)
+
+    HADOOP-8655. Fix TextInputFormat for large delimiters. (Gelesh via
+    bobby) 
+
+Release 0.23.7 - UNRELEASED
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+    HADOOP-8849. FileUtil#fullyDelete should grant the target directories +rwx
+    permissions (Ivan A. Veselovsky via bobby)
+
+    HADOOP-9067. provide test for LocalFileSystem.reportChecksumFailure
+    (Ivan A. Veselovsky via bobby)
+
+    HADOOP-9336. Allow UGI of current connection to be queried. (Daryn Sharp
+    via kihwal)
+
+    HADOOP-9352. Expose UGI.setLoginUser for tests (daryn)
+
+    HADOOP-9209. Add shell command to dump file checksums (Todd Lipcon via
+    jeagles)
+
+  OPTIMIZATIONS
+
+    HADOOP-8462. Native-code implementation of bzip2 codec. (Govind Kamat via
+    jlowe)
+
+  BUG FIXES
+
+    HADOOP-9302. HDFS docs not linked from top level (Andy Isaacson via
+    tgraves)
+
+    HADOOP-9303. command manual dfsadmin missing entry for restoreFailedStorage
+    option (Andy Isaacson via tgraves)
+
+    HADOOP-9339. IPC.Server incorrectly sets UGI auth type (Daryn Sharp via 
+    kihwal)
+
+Release 0.23.6 - UNRELEASED
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+    HADOOP-9217. Print thread dumps when hadoop-common tests fail.
+    (Andrey Klochkov via suresh)
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+
+    HADOOP-9072. Hadoop-Common-0.23-Build Fails to build in Jenkins 
+    (Robert Parker via tgraves)
+
+    HADOOP-8992. Enhance unit-test coverage of class HarFileSystem (Ivan A.
+    Veselovsky via bobby)
+
+    HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator (Ivan A.
+    Veselovsky via bobby)
+
+    HADOOP-9105. FsShell -moveFromLocal erroneously fails (daryn via bobby)
+
+    HADOOP-9097. Maven RAT plugin is not checking all source files (tgraves)
+
+    HADOOP-9255. relnotes.py missing last jira (tgraves)
+
+Release 0.23.5 - 2012-11-28
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+    HADOOP-8932. JNI-based user-group mapping modules can be too chatty on 
+    lookup failures. (Kihwal Lee via suresh)
+
+    HADOOP-8930. Cumulative code coverage calculation (Andrey Klochkov via
+    bobby)
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+
+    HADOOP-8906. paths with multiple globs are unreliable. (Daryn Sharp via
+    jlowe)
+
+    HADOOP-8811. Compile hadoop native library in FreeBSD (Radim Kolar via
+    bobby)
+
+    HADOOP-8962. RawLocalFileSystem.listStatus fails when a child filename
+    contains a colon (jlowe via bobby)
+
+    HADOOP-8986. Server$Call object is never released after it is sent (bobby)
+
+    HADOOP-9022. Hadoop distcp tool fails to copy file if -m 0 specified
+    (Jonathan Eagles via bobby)
+
+    HADOOP-9025. org.apache.hadoop.tools.TestCopyListing failing (Jonathan
+    Eagles via jlowe)
+
+Release 0.23.4
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+    HADOOP-8822. relnotes.py was deleted post mavenization (bobby)
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+
+    HADOOP-8843. Old trash directories are never deleted on upgrade
+    from 1.x (jlowe)
+
+    HADOOP-8684. Deadlock between WritableComparator and WritableComparable.
+    (Jing Zhao via suresh)
+
+Release 0.23.3
+
+  INCOMPATIBLE CHANGES
+
+    HADOOP-7967. Need generalized multi-token filesystem support (daryn)
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+    HADOOP-8108. Move method getHostPortString() from NameNode to NetUtils.
+    (Brandon Li via jitendra)
+
+    HADOOP-8288. Remove references to mapred.child.ulimit etc. since they are
+    not being used anymore (Ravi Prakash via bobby)
+
+    HADOOP-8535. Cut hadoop build times in half (Jonathan Eagles via bobby)
+
+    HADOOP-8525. Provide Improved Traceability for Configuration (bobby)
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+
+    HADOOP-8088. User-group mapping cache incorrectly does negative caching on
+    transient failures (Kihwal Lee via bobby)
+
+    HADOOP-8179. risk of NPE in CopyCommands processArguments() (Daryn Sharp
+    via bobby)
+
+    HADOOP-6963. In FileUtil.getDU(..), neither include the size of directories
+    nor follow symbolic links.  (Ravi Prakash via szetszwo)
+
+    HADOOP-8180. Remove hsqldb since its not needed from pom.xml (Ravi Prakash
+    via tgraves)
+
+    HADOOP-8014. ViewFileSystem does not correctly implement getDefaultBlockSize,
+    getDefaultReplication, getContentSummary (John George via bobby)
+
+    HADOOP-7510. Tokens should use original hostname provided instead of ip
+    (Daryn Sharp via bobby)
+
+    HADOOP-8283. Allow tests to control token service value (Daryn Sharp via
+    bobby)
+
+    HADOOP-8286. Simplify getting a socket address from conf (Daryn Sharp via
+    bobby)
+
+    HADOOP-8227. Allow RPC to limit ephemeral port range. (bobby)
+
+    HADOOP-8305. distcp over viewfs is broken (John George via bobby)
+
+    HADOOP-8334. HttpServer sometimes returns incorrect port (Daryn Sharp via
+    bobby)
+
+    HADOOP-8330. Update TestSequenceFile.testCreateUsesFsArg() for HADOOP-8305.
+    (John George via szetszwo)
+
+    HADOOP-8335. Improve Configuration's address handling (Daryn Sharp via
+    bobby)
+
+    HADOOP-8327. distcpv2 and distcpv1 jars should not coexist (Dave Thompson
+    via bobby)
+
+    HADOOP-8341. Fix or filter findbugs issues in hadoop-tools (bobby)
+
+    HADOOP-8373. Port RPC.getServerAddress to 0.23 (Daryn Sharp via bobby)
+
+    HADOOP-8495. Update Netty to avoid leaking file descriptors during shuffle
+    (Jason Lowe via tgraves)
+
+    HADOOP-8129. ViewFileSystemTestSetup setupForViewFileSystem is erring
+    (Ahmed Radwan and Ravi Prakash via bobby)
+
+    HADOOP-8573. Configuration tries to read from an inputstream resource 
+    multiple times (Robert Evans via tgraves)
+
+    HADOOP-8599. Non empty response from FileSystem.getFileBlockLocations when
+    asking for data beyond the end of file. (Andrey Klochkov via todd)
+
+    HADOOP-8606. FileSystem.get may return the wrong filesystem (Daryn Sharp
+    via bobby)
+
+    HADOOP-8551. fs -mkdir creates parent directories without the -p option
+    (John George via bobby)
+
+    HADOOP-8613. AbstractDelegationTokenIdentifier#getUser() should set token
+    auth type. (daryn)
+
+    HADOOP-8627. FS deleteOnExit may delete the wrong path (daryn via bobby)
+
+    HADOOP-8634. Ensure FileSystem#close doesn't squawk for deleteOnExit paths 
+    (daryn via bobby)
+
+    HADOOP-8550. hadoop fs -touchz automatically created parent directories
+    (John George via bobby)
+
+    HADOOP-8635. Cannot cancel paths registered deleteOnExit (daryn via bobby)
+
+    HADOOP-8637. FilterFileSystem#setWriteChecksum is broken (daryn via bobby)
+
+    HADOOP-8370. Native build failure: javah: class file for 
+    org.apache.hadoop.classification.InterfaceAudience not found  (Trevor
+    Robinson via tgraves)
+
+    HADOOP-8633. Interrupted FsShell copies may leave tmp files (Daryn Sharp
+    via tgraves)
+
+    HADOOP-8703. distcpV2: turn CRC checking off for 0 byte size (Dave
+    Thompson via bobby)
+
+    HADOOP-8390. TestFileSystemCanonicalization fails with JDK7  (Trevor
+    Robinson via tgraves)
+
+    HADOOP-8692. TestLocalDirAllocator fails intermittently with JDK7 
+    (Trevor Robinson via tgraves)
+
+    HADOOP-8693. TestSecurityUtil fails intermittently with JDK7 (Trevor
+    Robinson via tgraves)
+
+    HADOOP-8697. TestWritableName fails intermittently with JDK7 (Trevor
+    Robinson via tgraves)
+
+    HADOOP-8695. TestPathData fails intermittently with JDK7 (Trevor
+    Robinson via tgraves)
+
+    HADOOP-8611. Allow fall-back to the shell-based implementation when 
+    JNI-based users-group mapping fails (Robert Parker via bobby) 
+
+    HADOOP-8225. DistCp fails when invoked by Oozie (daryn via bobby)
+
+    HADOOP-8709. globStatus changed behavior from 0.20/1.x (Jason Lowe via
+    bobby)
+
+    HADOOP-8725. MR is broken when security is off (daryn via bobby)
+
+    HADOOP-8726. The Secrets in Credentials are not available to MR tasks
+    (daryn and Benoy Antony via bobby)
+
+    HADOOP-8727. Gracefully deprecate dfs.umaskmode in 2.x onwards (Harsh J
+    via bobby)
+
+Release 0.23.2 - UNRELEASED 
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+    HADOOP-8048. Allow merging of Credentials (Daryn Sharp via tgraves)
+ 
+    HADOOP-8032. mvn site:stage-deploy should be able to use the scp protocol
+    to stage documents (Ravi Prakash via tgraves)
+
+    HADOOP-7923. Automate the updating of version numbers in the doc system.
+    (szetszwo)
+
+    HADOOP-8137. Added links to CLI manuals to the site. (tgraves via
+    acmurthy)  
+
+  OPTIMIZATIONS
+
+    HADOOP-8071. Avoid an extra packet in client code when nagling is
+    disabled. (todd)
+
+    HADOOP-6502. Improve the performance of Configuration.getClassByName when
+    the class is not found by caching negative results.
+    (sharad, todd via todd)
+
+  BUG FIXES
+
+    HADOOP-7660. Maven generated .classpath does not include
+    "target/generated-test-source/java" as a source directory.
+    (Laxman via bobby)
+
+    HADOOP-8042  When copying a file out of HDFS, modifying it, and uploading
+    it back into HDFS, the put fails due to a CRC mismatch
+    (Daryn Sharp via bobby)
+
+    HADOOP-8035 Hadoop Maven site is inefficient and runs phases redundantly
+    (abayer via tucu)
+
+    HADOOP-8051 HttpFS documentation is not wired to the generated site (tucu)
+
+    HADOOP-8055. Hadoop tarball distribution lacks a core-site.xml (harsh)
+
+    HADOOP-8052. Hadoop Metrics2 should emit Float.MAX_VALUE (instead of 
+    Double.MAX_VALUE) to avoid making Ganglia's gmetad core. (Varun Kapoor
+    via mattf)
+
+    HADOOP-8074. Small bug in hadoop error message for unknown commands.
+    (Colin Patrick McCabe via eli)
+
+    HADOOP-8082 add hadoop-client and hadoop-minicluster to the 
+    dependency-management section. (tucu)
+
+    HADOOP-8066 The full docs build intermittently fails (abayer via tucu)
+
+    HADOOP-8083 javadoc generation for some modules is not done under target/ (tucu)
+
+    HADOOP-8036. TestViewFsTrash assumes the user's home directory is
+    2 levels deep. (Colin Patrick McCabe via eli)
+
+    HADOOP-8046 Revert StaticMapping semantics to the existing ones, add DNS
+    mapping diagnostics in progress (stevel)
+
+    HADOOP-8057 hadoop-setup-conf.sh not working because of some extra spaces.
+    (Vinayakumar B via stevel)
+
+    HADOOP-7680 TestHardLink fails on Mac OS X, when gnu stat is in path.
+    (Milind Bhandarkar via stevel)
+
+    HADOOP-8050. Deadlock in metrics. (Kihwal Lee via mattf)
+
+    HADOOP-8131. FsShell put doesn't correctly handle a non-existent dir
+    (Daryn Sharp via bobby)
+
+    HADOOP-8123. Use java.home rather than env.JAVA_HOME for java in the
+    project. (Jonathan Eagles via acmurthy) 
+
+    HADOOP-8064. Remove unnecessary dependency on w3c.org in document processing
+    (Kihwal Lee via bobby)
+
+    HADOOP-8140. dfs -getmerge should process its arguments better (Daryn Sharp
+    via bobby)
+
+    HADOOP-8164. Back slash as path separator is handled for Windows only.
+    (Daryn Sharp via suresh)
+
+    HADOOP-8173. FsShell needs to handle quoted metachars.  (Daryn Sharp via
+    szetszwo)
+
+    HADOOP-8175. Add -p option to mkdir in FsShell.  (Daryn Sharp via szetszwo)
+
+    HADOOP-8176.  Disambiguate the destination of FsShell copies (Daryn Sharp
+    via bobby)
+
+    HADOOP-8208. Disallow self failover. (eli)
+
+Release 0.23.1 - 2012-02-17 
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+    HADOOP-7777 Implement a base class for DNSToSwitchMapping implementations 
+    that can offer extra topology information. (stevel)
+
+    HADOOP-7657. Add support for LZ4 compression. (Binglin Chang via todd)
+
+    HADOOP-7910. Add Configuration.getLongBytes to handle human readable byte size values. (Sho Shimauchi via harsh)
+
+
+  IMPROVEMENTS
+
+    HADOOP-7801. HADOOP_PREFIX cannot be overridden. (Bruno Mahé via tomwhite)
+
+    HADOOP-7802. Hadoop scripts unconditionally source
+    "$bin"/../libexec/hadoop-config.sh. (Bruno Mahé via tomwhite)
+
+    HADOOP-7858. Drop some info logging to DEBUG level in IPC,
+    metrics, and HTTP. (todd via eli)
+
+    HADOOP-7424. Log an error if the topology script doesn't handle multiple args.
+    (Uma Maheswara Rao G via eli)
+
+    HADOOP-7804. Enable hadoop config generator to set configurations to enable
+    short circuit read. (Arpit Gupta via jitendra)
+
+    HADOOP-7877. Update balancer CLI usage documentation to include the new
+    -policy option.  (szetszwo)
+
+    HADOOP-6840. Support non-recursive create() in FileSystem and 
+    SequenceFile.Writer. (jitendra and eli via eli)
+
+    HADOOP-6886. LocalFileSystem Needs createNonRecursive API.
+    (Nicolas Spiegelberg and eli via eli)
+
+    HADOOP-7912. test-patch should run eclipse:eclipse to verify that it does
+    not break again. (Robert Joseph Evans via tomwhite)
+
+    HADOOP-7890. Redirect hadoop script's deprecation message to stderr.
+    (Koji Noguchi via mahadev)
+
+    HADOOP-7504. Add the missing Ganglia31 opts to hadoop-metrics.properties as a comment. (harsh)
+
+    HADOOP-7933. Add a getDelegationTokens api to FileSystem which checks
+    for known tokens in the passed Credentials object. (sseth)
+
+    HADOOP-7737. normalize hadoop-mapreduce & hadoop-dist dist/tar build with 
+    common/hdfs. (tucu)
+
+    HADOOP-7743. Add Maven profile to create a full source tarball. (tucu)
+
+    HADOOP-7758. Make GlobFilter class public. (tucu)
+
+    HADOOP-7590. Mavenize streaming and MR examples. (tucu)
+
+    HADOOP-7934. Normalize dependencies versions across all modules. (tucu)
+
+    HADOOP-7348. Change 'addnl' in getmerge util to be a flag '-nl' instead.
+    (XieXianshan via harsh)
+
+    HADOOP-7975. Add LZ4 as an entry in the default codec list, missed by HADOOP-7657 (harsh)
+
+    HADOOP-4515. Configuration#getBoolean must not be case sensitive. (Sho Shimauchi via harsh)
+
+    HADOOP-6490. Use StringUtils over String#replace in Path#normalizePath.
+    (Uma Maheswara Rao G via harsh)
+
+    HADOOP-7574. Improve FSShell -stat, add user/group elements.
+    (XieXianshan via harsh)
+
+    HADOOP-7736. Remove duplicate Path#normalizePath call. (harsh)
+
+    HADOOP-7919. Remove the unused hadoop.logfile.* properties from the 
+    core-default.xml file. (harsh)
+
+    HADOOP-7939. Improve Hadoop subcomponent integration in Hadoop 0.23. (rvs via tucu)
+
+    HADOOP-8002. SecurityUtil acquired token message should be a debug rather than info.
+    (Arpit Gupta via mahadev)
+
+    HADOOP-8009. Create hadoop-client and hadoop-minicluster artifacts for downstream 
+    projects. (tucu)
+
+    HADOOP-7470. Move up to Jackson 1.8.8.  (Enis Soztutar via szetszwo)
+
+    HADOOP-8027. Visiting /jmx on the daemon web interfaces may print
+    unnecessary error in logs. (atm)
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+   
+   HADOOP-7811. TestUserGroupInformation#testGetServerSideGroups test fails in chroot.
+   (Jonathan Eagles via mahadev)
+
+   HADOOP-7813. Fix test-patch to use proper numerical comparison when checking
+   javadoc and findbugs warning counts. (Jonathan Eagles via tlipcon)
+
+   HADOOP-7841. Run tests with non-secure random. (tlipcon)
+
+   HADOOP-7851. Configuration.getClasses() never returns the default value.
+   (Uma Maheswara Rao G via amarrk)
+
+   HADOOP-7787. Make source tarball use conventional name.
+   (Bruno Mahé via tomwhite)
+
+   HADOOP-6614. RunJar should provide more diags when it can't create
+   a temp file. (Jonathan Hsieh via eli)
+
+   HADOOP-7859. TestViewFsHdfs.testgetFileLinkStatus is failing an assert. (eli)
+
+   HADOOP-7864. Building mvn site with Maven < 3.0.2 causes OOM errors.
+   (Andrew Bayer via eli)
+
+   HADOOP-7854. UGI getCurrentUser is not synchronized. (Daryn Sharp via jitendra)
+
+   HADOOP-7870. fix SequenceFile#createWriter with boolean
+   createParent arg to respect createParent. (Jon Hsieh via eli)
+
+   HADOOP-7898. Fix javadoc warnings in AuthenticationToken.java. (suresh)
+
+   HADOOP-7878  Regression: HADOOP-7777 switch changes break HDFS tests when the
+   isSingleSwitch() predicate is used. (stevel)
+
+   HADOOP-7914. Remove the duplicated declaration of hadoop-hdfs test-jar in
+   hadoop-project/pom.xml.  (szetszwo)
+
+   HADOOP-7837. no NullAppender in the log4j config. (eli)
+
+   HADOOP-7948. Shell scripts created by hadoop-dist/pom.xml to build tar do not 
+   properly propagate failure. (cim_michajlomatijkiw via tucu)
+
+   HADOOP-7949. Updated maxIdleTime default in the code to match
+   core-default.xml (eli)
+
+   HADOOP-7853. multiple javax security configurations cause conflicts. 
+   (daryn via tucu)
+
+   HDFS-2614. hadoop dist tarball is missing hdfs headers. (tucu)
+
+   HADOOP-7874. native libs should be under lib/native/ dir. (tucu)
+
+   HADOOP-7887. KerberosAuthenticatorHandler is not setting
+   KerberosName name rules from configuration. (tucu)
+
+   HADOOP-7902. skipping name rules setting (if already set) should be done 
+   on UGI initialization only. (tucu)
+
+   HADOOP-7810. move hadoop archive to core from tools. (tucu)
+
+   HADOOP-7917. compilation of protobuf files fails in windows/cygwin. (tucu)
+
+   HADOOP-7907. hadoop-tools JARs are not part of the distro. (tucu)
+
+   HADOOP-7936. There's a Hoop README in the root dir of the tarball. (tucu)
+
+   HADOOP-7963. Fix ViewFS to catch a null canonical service-name and pass
+   tests TestViewFileSystem* (Siddharth Seth via vinodkv)
+
+   HADOOP-7964. Deadlock in NetUtils and SecurityUtil class initialization.
+   (Daryn Sharp via suresh)
+
+   HADOOP-7974. TestViewFsTrash incorrectly determines the user's home
+   directory. (harsh via eli)
+
+   HADOOP-7971. Adding back job/pipes/queue commands to bin/hadoop for
+   backward compatibility. (Prashath Sharma via acmurthy) 
+
+   HADOOP-7982. UserGroupInformation fails to login if thread's context
+   classloader can't load HadoopLoginModule. (todd)
+
+   HADOOP-7986. Adding config for MapReduce History Server protocol in
+   hadoop-policy.xml for service level authorization. (Mahadev Konar via vinodkv)
+
+   HADOOP-7981. Improve documentation for org.apache.hadoop.io.compress.
+   Decompressor.getRemaining (Jonathan Eagles via mahadev)
+
+   HADOOP-7997. SequenceFile.createWriter(...createParent...) no
+   longer works on existing file. (Gregory Chanan via eli)
+
+   HADOOP-7993. Hadoop ignores old-style config options for enabling compressed 
+   output. (Anupam Seth via mahadev)
+
+   HADOOP-8000. fetchdt command not available in bin/hadoop.
+   (Arpit Gupta via mahadev)
+
+   HADOOP-7999. "hadoop archive" fails with ClassNotFoundException.
+   (Jason Lowe via mahadev)
+
+   HADOOP-8012. hadoop-daemon.sh and yarn-daemon.sh are trying to mkdir
+   and chown log/pid dirs which can fail. (Roman Shaposhnik via eli)
+
+   HADOOP-8013. ViewFileSystem does not honor setVerifyChecksum
+   (Daryn Sharp via bobby)
+
+   HADOOP-8054 NPE with FilterFileSystem (Daryn Sharp via bobby)
+
+Release 0.23.0 - 2011-11-01 
+
+  INCOMPATIBLE CHANGES
+
+   HADOOP-6904. Support method based RPC compatibility. (hairong)
+
+   HADOOP-6432. Add Statistics support in FileContext. (jitendra)
+
+   HADOOP-7136. Remove failmon contrib component. (nigel)
+
+  NEW FEATURES
+
+    HADOOP-7324. Ganglia plugins for metrics v2. (Priyo Mustafi via llu)
+
+    HADOOP-7342. Add a utility API in FileUtil for JDK File.list to
+    avoid NPEs on File.list() (Bharath Mundlapudi via mattf)
+
+    HADOOP-7322. Adding a util method in FileUtil for directory listing,
+    avoid NPEs on File.listFiles() (Bharath Mundlapudi via mattf)
+
+    HADOOP-7023. Add listCorruptFileBlocks to FileSystem. (Patrick Kling
+    via hairong)
+
+    HADOOP-7096. Allow setting of end-of-record delimiter for TextInputFormat
+    (Ahmed Radwan via todd)
+
+    HADOOP-6994. Api to get delegation token in AbstractFileSystem. (jitendra)
+
+    HADOOP-7171. Support UGI in FileContext API. (jitendra)
+
+    HADOOP-7257 Client side mount tables (sanjay)
+
+    HADOOP-6919. New metrics2 framework. (Luke Lu via acmurthy) 
+
+    HADOOP-6920. Metrics instrumentation to move new metrics2 framework.
+    (Luke Lu via suresh)
+
+    HADOOP-7214. Add Common functionality necessary to provide an equivalent
+    of /usr/bin/groups for Hadoop. (Aaron T. Myers via todd)
+
+    HADOOP-6832. Add an authentication plugin using a configurable static user
+    for the web UI. (Owen O'Malley and Todd Lipcon via cdouglas)
+
+    HADOOP-7144. Expose JMX metrics via JSON servlet. (Robert Joseph Evans via
+    cdouglas)
+
+    HADOOP-7379. Add the ability to serialize and deserialize protocol buffers
+    in ObjectWritable. (todd)
+
+    HADOOP-7206. Support Snappy compression. (Issei Yoshida and
+    Alejandro Abdelnur via eli)
+
+    HADOOP-7329. Add the capability of getting an individual attribute of an mbean
+    using JMXProxyServlet. (tanping)
+
+    HADOOP-7380. Add client failover functionality to o.a.h.io.(ipc|retry).
+    (atm via eli)
+
+    HADOOP-7460. Support pluggable trash policies. (Usman Masoon via suresh)
+
+    HADOOP-6385. dfs should support -rmdir (was HDFS-639). (Daryn Sharp
+    via mattf)
+
+    HADOOP-7119. add Kerberos HTTP SPNEGO authentication support to Hadoop
+    JT/NN/DN/TT web-consoles. (Alejandro Abdelnur via atm)
+
+  IMPROVEMENTS
+
+    HADOOP-7655. Provide a small validation script that smoke tests the installed
+    cluster. (Arpit Gupta via mattf)
+
+    HADOOP-7042. Updates to test-patch.sh to include failed test names and
+    improve other messaging. (nigel)
+
+    HADOOP-7001.  Configuration changes can occur via the Reconfigurable
+    interface. (Patrick Kling via dhruba)
+
+    HADOOP-6764. Add number of reader threads and queue length as
+    configuration parameters in RPC.getServer. (Dmytro Molkov via hairong)
+
+    HADOOP-7049. TestReconfiguration should be junit v4.
+    (Patrick Kling via eli)
+
+    HADOOP-7054. Change NN LoadGenerator to use FileContext APIs
+    (Sanjay Radia)
+
+    HADOOP-7060. A more elegant FileSystem#listCorruptFileBlocks API.
+    (Patrick Kling via hairong)
+
+    HADOOP-7058. Expose number of bytes in FSOutputSummer buffer to
+    implementations. (Todd Lipcon via hairong)
+
+    HADOOP-7061. Imprecise javadoc for CompressionCodec. (Jingguo Yao via eli)
+
+    HADOOP-7059. Remove "unused" warning in native code.  (Noah Watkins via eli)
+
+    HADOOP-6864. Provide a JNI-based implementation of
+    ShellBasedUnixGroupsNetgroupMapping (implementation of
+    GroupMappingServiceProvider). (Erik Steffl via boryas)
+
+    HADOOP-7078. Improve javadocs for RawComparator interface.
+    (Harsh J Chouraria via todd)
+
+    HADOOP-6995. Allow wildcards to be used in ProxyUsers configurations.
+    (todd)
+
+    HADOOP-6376. Add a comment header to conf/slaves that specifies the file
+    format. (Kay Kay via todd)
+
+    HADOOP-7151. Document need for stable hashCode() in WritableComparable.
+    (Dmitriy V. Ryaboy via todd)
+
+    HADOOP-7112. Issue a warning when GenericOptionsParser libjars are not on
+    local filesystem. (tomwhite)
+
+    HADOOP-7114. FsShell should dump all exceptions at DEBUG level.
+    (todd via tomwhite)
+
+    HADOOP-7159. RPC server should log the client hostname when read exception
+    happened. (Scott Chen via todd)
+
+    HADOOP-7167. Allow using a file to exclude certain tests from build. (todd)
+
+    HADOOP-7133. Batch the calls in DataStorage to FileUtil.createHardLink().
+    (Matt Foley via jghoman)
+
+    HADOOP-7166. Add DaemonFactory to common. (Erik Steffl & jitendra)
+
+    HADOOP-7175. Add isEnabled() to Trash.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7180. Better support on CommandFormat on the API and exceptions.
+    (Daryn Sharp via szetszwo)
+
+    HADOOP-7202. Improve shell Command base class.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7224. Add CommandFactory to shell.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7014. Generalize CLITest structure and interfaces to facilitate
+    upstream adoption (e.g. for web testing). (cos)
+
+    HADOOP-7230. Move "fs -help" shell command tests from HDFS to COMMON; see
+    also HDFS-1844.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7233. Refactor ls to conform to new FsCommand class.  (Daryn Sharp
+    via szetszwo)
+
+    HADOOP-7235. Refactor the tail command to conform to new FsCommand class.
+    (Daryn Sharp via szetszwo)
+
+    HADOOP-7179. Federation: Improve HDFS startup scripts. (Erik Steffl
+    and Tanping Wang via suresh)
+
+    HADOOP-7227. Remove protocol version check at proxy creation in Hadoop
+    RPC. (jitendra)
+
+    HADOOP-7236. Refactor the mkdir command to conform to new FsCommand class.
+    (Daryn Sharp via szetszwo)
+
+    HADOOP-7250. Refactor the setrep command to conform to new FsCommand class.
+    (Daryn Sharp via szetszwo)
+
+    HADOOP-7249. Refactor the chmod/chown/chgrp command to conform to new
+    FsCommand class.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7251. Refactor the getmerge command to conform to new FsCommand
+    class.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7265. Keep track of relative paths in PathData.  (Daryn Sharp
+    via szetszwo)
+
+    HADOOP-7238. Refactor the cat and text commands to conform to new FsCommand
+    class.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7271. Standardize shell command error messages.  (Daryn Sharp
+    via szetszwo)
+
+    HADOOP-7272. Remove unnecessary security related info logs. (suresh)
+
+    HADOOP-7275. Refactor the stat command to conform to new FsCommand
+    class.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7237. Refactor the touchz command to conform to new FsCommand
+    class.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7267. Refactor the rm/rmr/expunge commands to conform to new
+    FsCommand class.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7285. Refactor the test command to conform to new FsCommand
+    class. (Daryn Sharp via todd)
+
+    HADOOP-7289. In ivy.xml, test conf should not extend common conf.
+    (Eric Yang via szetszwo)
+
+    HADOOP-7291. Update Hudson job not to run test-contrib. (Nigel Daley via eli)
+
+    HADOOP-7286. Refactor the du/dus/df commands to conform to new FsCommand
+    class. (Daryn Sharp via todd)
+
+    HADOOP-7301. FSDataInputStream should expose a getWrappedStream method.
+    (Jonathan Hsieh via eli)
+
+    HADOOP-7306. Start metrics system even if config files are missing
+    (Luke Lu via todd)
+
+    HADOOP-7302. webinterface.private.actions should be renamed and moved to
+    the MapReduce project. (Ari Rabkin via todd)
+
+    HADOOP-7329. Improve help message for "df" to include "-h" flag.
+    (Xie Xianshan via todd)
+
+    HADOOP-7320. Refactor the copy and move commands to conform to new
+    FsCommand class. (Daryn Sharp via todd)
+
+    HADOOP-7312. Update value of hadoop.common.configuration.version.
+    (Harsh J Chouraria via todd)
+
+    HADOOP-7337. Change PureJavaCrc32 annotations to public stable.  (szetszwo)
+
+    HADOOP-7331. Make hadoop-daemon.sh return exit code 1 if daemon processes
+    did not get started. (Tanping Wang via todd)
+
+    HADOOP-7316. Add public javadocs to FSDataInputStream and
+    FSDataOutputStream. (eli)
+
+    HADOOP-7323. Add capability to resolve compression codec based on codec
+    name. (Alejandro Abdelnur via tomwhite)
+
+    HADOOP-1886. Undocumented parameters in FileSystem. (Frank Conrad via eli)
+
+    HADOOP-7375. Add resolvePath method to FileContext. (Sanjay Radia via eli)
+
+    HADOOP-7383. HDFS needs to export protobuf library dependency in pom.
+    (todd via eli)
+
+    HADOOP-7374. Don't add tools.jar to the classpath when running Hadoop.
+    (eli)
+
+    HADOOP-7106. Reorganize project SVN layout to "unsplit" the projects.
+    (todd, nigel)
+
+    HADOOP-6605. Add JAVA_HOME detection to hadoop-config. (eli)
+
+    HADOOP-7384. Allow test-patch to be more flexible about patch format. (todd)
+
+    HADOOP-6929. RPC should have a way to pass Security information other than 
+    protocol annotations. (sharad and omalley via mahadev)
+
+    HADOOP-7385. Remove StringUtils.stringifyException(ie) in logger functions.
+    (Bharath Mundlapudi via Tanping Wang).
+
+    HADOOP-310. Additional constructor requested in BytesWritable. (Brock
+    Noland via atm)
+
+    HADOOP-7429. Add another IOUtils#copyBytes method. (eli)
+
+    HADOOP-7451. Generalize StringUtils#join. (Chris Douglas via mattf)
+
+    HADOOP-7449. Add Data(In,Out)putByteBuffer to work with ByteBuffer similar 
+    to Data(In,Out)putBuffer for byte[].  Merge from yahoo-merge branch,
+    -r 1079163.  Fix missing Apache license headers. (Chris Douglas via mattf)
+
+    HADOOP-7361. Provide an option, -overwrite/-f, in put and copyFromLocal
+    shell commands.  (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-7430. Improve error message when moving to trash fails due to 
+    quota issue. (Ravi Prakash via mattf)
+
+    HADOOP-7444. Add Checksum API to verify and calculate checksums "in bulk"
+    (todd)
+
+    HADOOP-7443. Add CRC32C as another DataChecksum implementation (todd)
+
+    HADOOP-7305. Eclipse project files are incomplete. (Niels Basjes via eli)
+
+    HADOOP-7314. Add support for throwing UnknownHostException when a host doesn't 
+    resolve. (Jeffrey Naisbitt via jitendra)
+
+    HADOOP-7465. Several tiny improvements for the LOG format.
+    (Xie Xianshan via eli)
+
+    HADOOP-7434. Display error when using "daemonlog -setlevel" with
+    illegal level. (yanjinshuang via eli)
+
+    HADOOP-7463. Adding a configuration parameter to SecurityInfo interface.
+    (mahadev)
+
+    HADOOP-7298. Add test utility for writing multi-threaded tests. (todd and
+    Harsh J Chouraria via todd)
+
+    HADOOP-7485. Add -h option to ls to list file sizes in human readable
+    format. (XieXianshan via suresh)
+
+    HADOOP-7378. Add -d option to ls to not expand directories.
+    (Daryn Sharp via suresh)
+
+    HADOOP-7474. Refactor ClientCache out of WritableRpcEngine. (jitendra)
+
+    HADOOP-7491. hadoop command should respect HADOOP_OPTS when given
+    a class name. (eli)
+
+    HADOOP-7178. Add a parameter, useRawLocalFileSystem, to copyToLocalFile(..)
+    in FileSystem.  (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-6671. Use maven for hadoop common builds. (Alejandro Abdelnur
+    via tomwhite)
+
+    HADOOP-7502. Make generated sources IDE friendly.
+    (Alejandro Abdelnur via llu)
+
+    HADOOP-7501. Publish Hadoop Common artifacts (post HADOOP-6671) to Apache
+    SNAPSHOTs repo. (Alejandro Abdelnur via tomwhite)
+
+    HADOOP-7525. Make arguments to test-patch optional. (tomwhite)
+
+    HADOOP-7472. RPC client should deal with IP address change.
+    (Kihwal Lee via suresh)
+  
+    HADOOP-7499. Add method for doing a sanity check on hostnames in NetUtils.
+    (Jeffrey Naisbitt via mahadev)
+
+    HADOOP-6158. Move CyclicIteration to HDFS. (eli)
+
+    HADOOP-7526. Add TestPath tests for URI conversion and reserved
+    characters. (eli)
+
+    HADOOP-7531. Add servlet util methods for handling paths in requests. (eli)
+
+    HADOOP-7493. Add ShortWritable.  (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-7555. Add eclipse-generated files to .gitignore. (atm)
+
+    HADOOP-7264. Bump avro version to at least 1.4.1. (Alejandro Abdelnur via
+    tomwhite)
+
+    HADOOP-7498. Remove legacy TAR layout creation. (Alejandro Abdelnur via
+    tomwhite)
+
+    HADOOP-7496. Break Maven TAR & bintar profiles into just LAYOUT & TAR proper.
+    (Alejandro Abdelnur via tomwhite)
+
+    HADOOP-7561. Make test-patch only run tests for changed modules. (tomwhite)
+
+    HADOOP-7547. Add generic type in WritableComparable subclasses.
+    (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-7579. Rename package names from alfredo to auth.
+    (Alejandro Abdelnur via szetszwo)
+
+    HADOOP-7594. Support HTTP REST in HttpServer.  (szetszwo)
+
+    HADOOP-7552. FileUtil#fullyDelete doesn't throw IOE but lists it
+    in the throws clause. (eli)
+
+    HADOOP-7580. Add a version of getLocalPathForWrite to LocalDirAllocator
+    which doesn't create dirs. (Chris Douglas & Siddharth Seth via acmurthy) 
+
+    HADOOP-7507. Allow ganglia metrics to include the metrics system tags
+    in the gmetric names. (Alejandro Abdelnur via todd)
+
+    HADOOP-7612. Change test-patch to run tests for all nested modules.
+    (tomwhite)
+
+    HADOOP-7599. Script improvements to setup a secure Hadoop cluster
+    (Eric Yang via ddas)
+
+    HADOOP-7639. Enhance HttpServer to allow passing path-specs for filtering,
+    so that servers like Yarn WebApp can get filtered the paths served by
+    their own injected servlets. (Thomas Graves via vinodkv)
+
+    HADOOP-7575. Enhanced LocalDirAllocator to support fully-qualified
+    paths. (Jonathan Eagles via vinodkv)
+
+    HADOOP-7469. Add a standard handler for socket connection problems which
+    improves diagnostics (Uma Maheswara Rao G and stevel via stevel)
+
+    HADOOP-7710. Added hadoop-setup-application.sh for creating 
+    application directory (Arpit Gupta via Eric Yang)
+
+    HADOOP-7707. Added toggle for dfs.support.append, webhdfs and hadoop proxy
+    user to setup config script. (Arpit Gupta via Eric Yang)
+
+    HADOOP-7720. Added parameter for HBase user to setup config script.
+    (Arpit Gupta via Eric Yang)
+
+    HADOOP-7624. Set things up for a top level hadoop-tools module. (tucu)
+
+    HADOOP-7627. Improve MetricsAsserts to give more understandable output
+    on failure. (todd)
+
+    HADOOP-7642. create hadoop-dist module where TAR stitching would happen.
+    (Thomas White via tucu)
+
+    HADOOP-7709. Running a set of methods in a Single Test Class. 
+    (Jonathan Eagles via mahadev)
+
+    HADOOP-7705. Add a log4j back end that can push out JSON data,
+    one per line. (stevel)
+
+    HADOOP-7749. Add a NetUtils createSocketAddr call which provides more
+    help in exception messages. (todd)
+
+    HADOOP-7762. Common side of MR-2736. (eli)
+
+    HADOOP-7668. Add a NetUtils method that can tell if an InetAddress 
+    belongs to local host. (suresh)
+
+    HADOOP-7509. Improve exception message thrown when Authentication is 
+    required. (Ravi Prakash via suresh)
+
+    HADOOP-7745. Fix wrong variable name in exception message introduced
+    in HADOOP-7509. (Ravi Prakash via suresh)
+
+    MAPREDUCE-2764. Fix renewal of dfs delegation tokens. (Owen via jitendra)
+
+    HADOOP-7360. Preserve relative paths that do not contain globs in FsShell.
+    (Daryn Sharp and Kihwal Lee via szetszwo)
+
+    HADOOP-7771. FsShell -copyToLocal, -get, etc. commands throw NPE if the
+    destination directory does not exist.  (John George and Daryn Sharp
+    via szetszwo)
+
+    HADOOP-7782. Aggregate project javadocs. (tomwhite)
+
+    HADOOP-7789. Improvements to site navigation. (acmurthy) 
+
+  OPTIMIZATIONS
+  
+    HADOOP-7333. Performance improvement in PureJavaCrc32. (Eric Caspole
+    via todd)
+
+    HADOOP-7445. Implement bulk checksum verification using efficient native
+    code. (todd)
+
+    HADOOP-7753. Support fadvise and sync_file_range in NativeIO. Add
+    ReadaheadPool infrastructure for use in HDFS and MR. (todd)
+
+    HADOOP-7446. Implement CRC32C native code using SSE4.2 instructions.
+    (Kihwal Lee and todd via todd)
+
+    HADOOP-7763. Add top-level navigation to APT docs. (tomwhite)
+
+    HADOOP-7785. Add equals, hashcode, toString to DataChecksum (todd)
+
+  BUG FIXES
+
+    HADOOP-7740. Fixed security audit logger configuration. (Arpit Gupta via Eric Yang)
+
+    HADOOP-7630. hadoop-metrics2.properties should have a property *.period 
+    set to a default value for metrics. (Eric Yang via mattf)
+
+    HADOOP-7327. FileSystem.listStatus() throws NullPointerException instead of
+    IOException upon access permission failure. (mattf)
+
+    HADOOP-7015. RawLocalFileSystem#listStatus does not deal with a directory
+    whose entries are changing (e.g. in a multi-thread or multi-process
+    environment). (Sanjay Radia via eli)
+
+    HADOOP-7045. TestDU fails on systems with local file systems with 
+    extended attributes. (eli)
+
+    HADOOP-6939. Inconsistent lock ordering in
+    AbstractDelegationTokenSecretManager. (Todd Lipcon via tomwhite)
+
+    HADOOP-7129. Fix typo in method name getProtocolSigature (todd)
+
+    HADOOP-7048.  Wrong description of Block-Compressed SequenceFile Format in
+    SequenceFile's javadoc.  (Jingguo Yao via tomwhite)
+
+    HADOOP-7153. MapWritable violates contract of Map interface for equals()
+    and hashCode(). (Nicholas Telford via todd)
+
+    HADOOP-6754. DefaultCodec.createOutputStream() leaks memory.
+    (Aaron Kimball via tomwhite)
+
+    HADOOP-7098. Tasktracker property not set in conf/hadoop-env.sh.
+    (Bernd Fondermann via tomwhite)
+
+    HADOOP-7131. Exceptions thrown by Text methods should include the causing
+    exception. (Uma Maheswara Rao G via todd)
+
+    HADOOP-6912. Guard against NPE when calling UGI.isLoginKeytabBased().
+    (Kan Zhang via jitendra)
+
+    HADOOP-7204. remove local unused fs variable from CmdHandler 
+    and FsShellPermissions.changePermissions (boryas)
+
+    HADOOP-7210. Chown command is not working from FSShell
+    (Uma Maheswara Rao G via todd)
+
+    HADOOP-7215. RPC clients must use network interface corresponding to 
+    the host in the client's kerberos principal key. (suresh)
+
+    HADOOP-7019. Refactor build targets to enable faster cross project dev
+    cycles. (Luke Lu via cos)
+
+    HADOOP-7216. Add FsCommand.runAll() with deprecated annotation for the
+    transition of Command base class improvement.  (Daryn Sharp via szetszwo)
+
+    HADOOP-7207. fs member of FSShell is not really needed (boryas)
+
+    HADOOP-7223. FileContext createFlag combinations are not clearly defined.
+    (suresh)
+
+    HADOOP-7231. Fix synopsis for -count. (Daryn Sharp via eli).
+
+    HADOOP-7261. Disable IPV6 for junit tests. (suresh)
+
+    HADOOP-7268. FileContext.getLocalFSFileContext() behavior needs to be fixed
+    w.r.t tokens. (jitendra)
+
+    HADOOP-7290. Unit test failure in 
+    TestUserGroupInformation.testGetServerSideGroups. (Trevor Robison via eli)
+
+    HADOOP-7292. Fix racy test case TestSinkQueue. (Luke Lu via todd)
+
+    HADOOP-7282. ipc.Server.getRemoteIp() may return null.  (John George
+    via szetszwo)
+
+    HADOOP-7208. Fix implementation of equals() and hashCode() in
+    StandardSocketFactory. (Uma Maheswara Rao G via todd)
+
+    HADOOP-7336. TestFileContextResolveAfs will fail with default 
+    test.build.data property. (jitendra)
+
+    HADOOP-7284. Trash and shell's rm do not work for viewfs. (Sanjay Radia)
+
+    HADOOP-7341. Fix options parsing in CommandFormat (Daryn Sharp via todd)
+
+    HADOOP-7353. Cleanup FsShell and prevent masking of RTE stack traces.
+    (Daryn Sharp via todd)
+
+    HADOOP-7356. RPM packages broke bin/hadoop script in developer environment.
+    (Eric Yang via todd)
+
+    HADOOP-7389. Use of TestingGroups by tests causes subsequent tests to fail.
+    (atm via tomwhite)
+
+    HADOOP-7377. Fix command name handling affecting DFSAdmin. (Daryn Sharp
+    via mattf)
+
+    HADOOP-7402. TestConfiguration doesn't clean up after itself. (atm via eli)
+
+    HADOOP-7428. IPC connection is orphaned with null 'out' member.
+    (todd via eli)
+
+    HADOOP-7437. IOUtils.copybytes will suppress the stream closure exceptions.
+    (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-7090. Fix resource leaks in s3.INode, BloomMapFile, WritableUtils
+    and CBZip2OutputStream.  (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-7440. HttpServer.getParameterValues throws NPE for missing
+    parameters. (Uma Maheswara Rao G and todd via todd)
+
+    HADOOP-7442. Docs in core-default.xml still reference deprecated config
+    "topology.script.file.name" (atm)
+
+    HADOOP-7419. new hadoop-config.sh doesn't manage classpath for
+    HADOOP_CONF_DIR correctly. (Bing Zheng and todd via todd)
+
+    HADOOP-7448. merge from yahoo-merge branch (via mattf):
+    -r 1079157: Fix content type for /stacks servlet to be 
+    plain text (Luke Lu)
+    -r 1079164: No need to escape plain text (Luke Lu)
+
+    HADOOP-7471. The saveVersion.sh script sometimes fails to extract SVN URL.
+    (Alejandro Abdelnur via eli)
+
+    HADOOP-2081. Configuration getInt, getLong, and getFloat replace
+    invalid numbers with the default value. (Harsh J via eli)
+
+    HADOOP-7111. Several TFile tests failing when native libraries are
+    present. (atm)
+
+    HADOOP-7438. Fix deprecation warnings from the hadoop-daemon.sh script.
+    (Ravi Prakash via suresh)
+
+    HADOOP-7468. hadoop-core JAR contains a log4j.properties file.
+    (Jolly Chen)
+
+    HADOOP-7508. Compiled nativelib is in wrong directory and it is not picked
+    up by surefire setup. (Alejandro Abdelnur via tomwhite)
+   
+    HADOOP-7520. Fix to add distribution management info to hadoop-main
+    (Alejandro Abdelnur via gkesavan)
+
+    HADOOP-7515. test-patch reports the wrong number of javadoc warnings.
+    (tomwhite)
+
+    HADOOP-7523. Test org.apache.hadoop.fs.TestFilterFileSystem fails due to
+    java.lang.NoSuchMethodException. (John Lee via tomwhite)
+
+    HADOOP-7528. Maven build fails in Windows. (Alejandro Abdelnur via
+    tomwhite)
+
+    HADOOP-7533. Allow test-patch to be run from any subproject directory.
+    (tomwhite)
+
+    HADOOP-7512. Fix example mistake in WritableComparable javadocs.
+    (Harsh J via eli)
+
+    HADOOP-7357. hadoop.io.compress.TestCodec#main() should exit with
+    non-zero exit code if test failed. (Philip Zeyliger via eli)
+
+    HADOOP-6622. Token should not print the password in toString. (eli)
+
+    HADOOP-7529. Fix lock cycles in metrics system. (llu)
+
+    HADOOP-7545. Common -tests JAR should not include properties and configs.
+    (todd)
+
+    HADOOP-7536. Correct the dependency version regressions introduced in
+    HADOOP-6671. (Alejandro Abdelnur via tomwhite)
+
+    HADOOP-7566. MR tests are failing because webapps/hdfs is not found in
+    CLASSPATH. (Alejandro Abdelnur via mahadev)
+
+    HADOOP-7567. 'mvn eclipse:eclipse' fails for hadoop-alfredo (auth).
+    (Alejandro Abdelnur via tomwhite)
+
+    HADOOP-7563. Setup HADOOP_HDFS_HOME, HADOOP_MAPRED_HOME and classpath
+    correction. (Eric Yang via acmurthy) 
+
+    HADOOP-7560. Change src layout to be hierarchical. (Alejandro Abdelnur
+    via acmurthy)
+
+    HADOOP-7576. Fix findbugs warnings and javac warnings in hadoop-auth.
+    (szetszwo)
+
+    HADOOP-7593. Fix AssertionError in TestHttpServer.testMaxThreads().
+    (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-7598. Fix smart-apply-patch.sh to handle patching from a sub
+    directory correctly. (Robert Evans via acmurthy) 
+
+    HADOOP-7328. When a serializer class is missing, return null, not throw
+    an NPE. (Harsh J Chouraria via todd)
+
+    HADOOP-7626. Bugfix for a config generator (Eric Yang via ddas)
+
+    HADOOP-7629. Allow immutable FsPermission objects to be used as IPC
+    parameters. (todd)
+
+    HADOOP-7608. SnappyCodec check for Hadoop native lib is wrong
+    (Alejandro Abdelnur via todd)
+
+    HADOOP-7637. Fix to include FairScheduler configuration file in
+    RPM. (Eric Yang via ddas)
+
+    HADOOP-7633. Adds log4j.properties to the hadoop-conf dir on
+    deploy (Eric Yang via ddas)
+
+    HADOOP-7631. Fixes a config problem to do with running streaming jobs
+    (Eric Yang via ddas)
+
+    HADOOP-7662. Fixed logs servlet to use the pathspec '/*' instead of '/'
+    for correct filtering. (Thomas Graves via vinodkv)
+
+    HADOOP-7691. Fixed conflicting uid for install packages. (Eric Yang)
+
+    HADOOP-7603. Set the hdfs, mapred, and hadoop uids to fixed numbers.
+    (Eric Yang)
+
+    HADOOP-7658. Fixed HADOOP_SECURE_DN_USER environment variable in
+    hadoop-env.sh. (Eric Yang)
+
+    HADOOP-7684. Added init.d script for jobhistory server and
+    secondary namenode. (Eric Yang)
+
+    HADOOP-7715. Removed unnecessary security logger configuration. (Eric Yang)
+
+    HADOOP-7685. Improved directory ownership check function in 
+    hadoop-setup-conf.sh. (Eric Yang)
+
+    HADOOP-7711. Fixed recursive sourcing of HADOOP_OPTS environment
+    variables (Arpit Gupta via Eric Yang)
+
+    HADOOP-7681. Fixed security and hdfs audit log4j properties
+    (Arpit Gupta via Eric Yang)
+
+    HADOOP-7708. Fixed hadoop-setup-conf.sh to handle config files
+    consistently.  (Eric Yang)
+
+    HADOOP-7724. Fixed hadoop-setup-conf.sh to put proxy user in
+    core-site.xml.  (Arpit Gupta via Eric Yang)
+
+    HADOOP-7755. Detect MapReduce PreCommit Trunk builds silently failing
+    when running test-patch.sh. (Jonathan Eagles via tomwhite)
+
+    HADOOP-7744. Ensure failed tests exit with proper error code. (Jonathan
+    Eagles via acmurthy) 
+
+    HADOOP-7764. Allow HttpServer to set both ACL list and path spec filters. 
+    (Jonathan Eagles via acmurthy)
+
+    HADOOP-7766. The auth to local mappings are not being respected, with webhdfs 
+    and security enabled. (jitendra)
+
+    HADOOP-7721. Add log before login in KerberosAuthenticationHandler. 
+    (jitendra)
+
+    HADOOP-7778. FindBugs warning in Token.getKind(). (tomwhite)
+
+    HADOOP-7798. Add support for gpg signatures for Maven release artifacts.
+    (cutting via acmurthy)
+
+    HADOOP-7797. Fix top-level pom.xml to refer to correct staging maven
+    repository. (omalley via acmurthy) 
+
+Release 0.22.1 - Unreleased
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+
+    HADOOP-7937. Forward port SequenceFile#syncFs and friends from Hadoop 1.x.
+    (tomwhite)
+
+Release 0.22.0 - 2011-11-29
+
+  INCOMPATIBLE CHANGES
+
+    HADOOP-7137. Remove hod contrib. (nigel via eli)
+
+  NEW FEATURES
+
+    HADOOP-6791.  Refresh for proxy superuser config
+    (common part for HDFS-1096) (boryas)
+
+    HADOOP-6581. Add authenticated TokenIdentifiers to UGI so that 
+    they can be used for authorization (Kan Zhang and Jitendra Pandey 
+    via jghoman)
+
+    HADOOP-6584. Provide Kerberized SSL encryption for webservices.
+    (jghoman and Kan Zhang via jghoman)
+
+    HADOOP-6853. Common component of HDFS-1045. (jghoman)
+
+    HADOOP-6859. Introduce additional statistics to FileSystem to track
+    file system operations. (suresh)
+
+    HADOOP-6870. Add a new API getFiles to FileSystem and FileContext that
+    lists all files under the input path or the subtree rooted at the
+    input path if recursive is true. Block locations are returned together
+    with each file's status. (hairong)
+
+    HADOOP-6888. Add a new FileSystem API closeAllForUGI(..) for closing all
+    file systems associated with a particular UGI.  (Devaraj Das and Kan Zhang
+    via szetszwo)
+
+    HADOOP-6892. Common component of HDFS-1150 (Verify datanodes' identities 
+    to clients in secure clusters) (jghoman)
+
+    HADOOP-6889. Make RPC to have an option to timeout. (hairong)
+
+    HADOOP-6996. Allow CodecFactory to return a codec object given a codec's
+    class name. (hairong)
+
+    HADOOP-7013. Add boolean field isCorrupt to BlockLocation. 
+    (Patrick Kling via hairong)
+
+    HADOOP-6978. Adds support for NativeIO using JNI. 
+    (Todd Lipcon, Devaraj Das & Owen O'Malley via ddas)
+
+    HADOOP-7134. configure files that are generated as part of the released
+    tarball need to have executable bit set. (Roman Shaposhnik via cos)
+
+  IMPROVEMENTS
+
+    HADOOP-6644. util.Shell's getGROUPS_FOR_USER_COMMAND method name should
+    use the common naming convention. (boryas)
+
+    HADOOP-6778. add isRunning() method to 
+    AbstractDelegationTokenSecretManager (for HDFS-1044) (boryas)
+
+    HADOOP-6633. normalize property names for JT/NN kerberos principal 
+    names in configuration (boryas)
+
+    HADOOP-6627. "Bad Connection to FS" message in FSShell should print 
+    message from the exception (boryas)
+
+    HADOOP-6600. mechanism for authorization check for inter-server 
+    protocols. (boryas)
+
+    HADOOP-6623. Add StringUtils.split for non-escaped single-character
+    separator. (Todd Lipcon via tomwhite)
+
+    HADOOP-6761. The Trash Emptier has the ability to run more frequently.
+    (Dmytro Molkov via dhruba)
+
+    HADOOP-6714. Resolve compressed files using CodecFactory in FsShell::text.
+    (Patrick Angeles via cdouglas)
+
+    HADOOP-6661. User document for UserGroupInformation.doAs. 
+    (Jitendra Pandey via jghoman)
+
+    HADOOP-6674. Makes use of the SASL authentication options in the
+    SASL RPC. (Jitendra Pandey via ddas)
+
+    HADOOP-6526. Need mapping from long principal names to local OS 
+    user names. (boryas)
+
+    HADOOP-6814. Adds an API in UserGroupInformation to get the real
+    authentication method of a passed UGI. (Jitendra Pandey via ddas)
+
+    HADOOP-6756. Documentation for common configuration keys.
+    (Erik Steffl via shv)
+
+    HADOOP-6835. Add support for concatenated gzip input. (Greg Roelofs via
+    cdouglas)
+
+    HADOOP-6845. Renames the TokenStorage class to Credentials. 
+    (Jitendra Pandey via ddas)
+
+    HADOOP-6826. FileStatus needs unit tests. (Rodrigo Schmidt via Eli
+    Collins)
+
+    HADOOP-6905. add buildDTServiceName method to SecurityUtil 
+    (as part of MAPREDUCE-1718)  (boryas)
+
+    HADOOP-6632. Adds support for using different keytabs for different
+    servers in a Hadoop cluster. In the earlier implementation, all servers
+    of a certain type (like TaskTracker) would have the same keytab and the
+    same principal. Now the principal name is a pattern that has _HOST in it.
+    (Kan Zhang & Jitendra Pandey via ddas)
+
+    HADOOP-6861. Adds new non-static methods in Credentials to read and 
+    write token storage file. (Jitendra Pandey & Owen O'Malley via ddas)
+
+    HADOOP-6877. Common part of HDFS-1178 (NameNode servlets should communicate
+    with NameNode directly). (Kan Zhang via jghoman)
+    
+    HADOOP-6475. Adding some javadoc to Server.RpcMetrics, UGI. 
+    (Jitendra Pandey and borya via jghoman)
+
+    HADOOP-6656. Adds a thread in the UserGroupInformation to renew TGTs 
+    periodically. (Owen O'Malley and ddas via ddas)
+
+    HADOOP-6890. Improve listFiles API introduced by HADOOP-6870. (hairong)
+
+    HADOOP-6862. Adds api to add/remove user and group to AccessControlList
+    (amareshwari)
+
+    HADOOP-6911. doc update for DelegationTokenFetcher (boryas)
+
+    HADOOP-6900. Make the iterator returned by FileSystem#listLocatedStatus to 
+    throw IOException rather than RuntimeException when there is an IO error
+    fetching the next file. (hairong)
+
+    HADOOP-6905. Better logging messages when a delegation token is invalid.
+    (Kan Zhang via jghoman)
+
+    HADOOP-6693. Add metrics to track kerberos login activity. (suresh)
+
+    HADOOP-6803. Add native gzip read/write coverage to TestCodec.
+    (Eli Collins via tomwhite)
+
+    HADOOP-6950. Suggest that HADOOP_CLASSPATH should be preserved in 
+    hadoop-env.sh.template. (Philip Zeyliger via Eli Collins)
+
+    HADOOP-6922. Make AccessControlList a writable and update documentation
+    for Job ACLs.  (Ravi Gummadi via vinodkv)
+
+    HADOOP-6965. Introduces checks for whether the original tgt is valid 
+    in the reloginFromKeytab method.
+
+    HADOOP-6856. Simplify constructors for SequenceFile, and MapFile. (omalley)
+
+    HADOOP-6987. Use JUnit Rule to optionally fail test cases that run more
+    than 10 seconds (jghoman)
+
+    HADOOP-7005. Update test-patch.sh to remove callback to Hudson. (nigel)
+
+    HADOOP-6985. Suggest that HADOOP_OPTS be preserved in
+    hadoop-env.sh.template. (Ramkumar Vadali via cutting)
+
+    HADOOP-7007. Update the hudson-test-patch ant target to work with the
+    latest test-patch.sh script (gkesavan)
+
+    HADOOP-7010. Typo in FileSystem.java. (Jingguo Yao via eli)
+
+    HADOOP-7009. MD5Hash provides a public factory method that creates an
+    instance of thread local MessageDigest. (hairong)
+
+    HADOOP-7008. Enable test-patch.sh to have a configured number of 
+    acceptable findbugs and javadoc warnings. (nigel and gkesavan)
+
+    HADOOP-6818. Provides a JNI implementation of group resolution. (ddas)
+
+    HADOOP-6943. The GroupMappingServiceProvider interface should be public.
+    (Aaron T. Myers via tomwhite)
+
+    HADOOP-4675. Current Ganglia metrics implementation is incompatible with
+    Ganglia 3.1. (Brian Bockelman via tomwhite)
+
+    HADOOP-6977. Herriot daemon clients should vend statistics (cos)
+
+    HADOOP-7024. Create a test method for adding file systems during tests.
+    (Kan Zhang via jghoman)
+
+    HADOOP-6903. Make AbstractFileSystem methods and some FileContext methods
+    to be public. (Sanjay Radia)
+
+    HADOOP-7034. Add TestPath tests to cover dot, dot dot, and slash 
+    normalization. (eli)
+
+    HADOOP-7032. Assert type constraints in the FileStatus constructor. (eli)
+
+    HADOOP-6562. FileContextSymlinkBaseTest should use FileContextTestHelper. 
+    (eli)
+
+    HADOOP-7028. ant eclipse does not include requisite ant.jar in the 
+    classpath. (Patrick Angeles via eli)
+
+    HADOOP-6298. Add copyBytes to Text and BytesWritable. (omalley)
+  
+    HADOOP-6578. Configuration should trim whitespace around a lot of value
+    types. (Michele Catasta via eli)
+
+    HADOOP-6811. Remove EC2 bash scripts. They are replaced by Apache Whirr
+    (incubating, http://incubator.apache.org/whirr). (tomwhite)
+
+    HADOOP-7102. Remove "fs.ramfs.impl" field from core-default.xml. (shv)
+
+    HADOOP-7104. Remove unnecessary DNS reverse lookups from RPC layer
+    (Kan Zhang via todd)
+
+    HADOOP-6056. Use java.net.preferIPv4Stack to force IPv4.
+    (Michele Catasta via shv)
+
+    HADOOP-7110. Implement chmod with JNI. (todd)
+
+    HADOOP-6812. Change documentation for correct placement of configuration
+    variables: mapreduce.reduce.input.buffer.percent, 
+    mapreduce.task.io.sort.factor, mapreduce.task.io.sort.mb
+    (Chris Douglas via shv)
+
+    HADOOP-6436. Remove auto-generated native build files. (rvs via eli)
+
+    HADOOP-6970. SecurityAuth.audit should be generated under /build. (boryas)
+
+    HADOOP-7154. Should set MALLOC_ARENA_MAX in hadoop-env.sh (todd)
+
+    HADOOP-7187. Fix socket leak in GangliaContext.  (Uma Maheswara Rao G
+    via szetszwo)
+
+    HADOOP-7241. fix typo of command 'hadoop fs -help tail'. 
+    (Wei Yongjun via eli)
+
+    HADOOP-7244. Documentation change for updated configuration keys.
+    (tomwhite via eli)
+
+    HADOOP-7189. Add ability to enable 'debug' property in JAAS configuration.
+    (Ted Yu via todd)
+
+    HADOOP-7192. Update fs -stat docs to reflect the format features. (Harsh
+    J Chouraria via todd)
+
+    HADOOP-7355. Add audience and stability annotations to HttpServer class.
+    (stack)
+
+    HADOOP-7346. Send back nicer error message to clients using outdated IPC
+    version. (todd)
+
+    HADOOP-7335. Force entropy to come from non-true random for tests.
+    (todd via eli)
+
+    HADOOP-7325. The hadoop command should not accept class names starting with
+    a hyphen. (Brock Noland via todd)
+
+    HADOOP-7772. javadoc the topology classes (stevel)
+
+    HADOOP-7786. Remove HDFS-specific config keys defined in FsConfig. (eli)
+
+    HADOOP-7861. changes2html.pl generates links to HADOOP, HDFS, and MAPREDUCE
+    jiras. (shv)
+
+  OPTIMIZATIONS
+
+    HADOOP-6884. Add LOG.isDebugEnabled() guard for each LOG.debug(..).
+    (Erik Steffl via szetszwo)
+
+    HADOOP-6683. ZlibCompressor does not fully utilize the buffer.
+    (Kang Xiao via eli)
+
+    HADOOP-6949. Reduce RPC packet size of primitive arrays using
+    ArrayPrimitiveWritable instead of ObjectWritable. (Matt Foley via suresh)
+
+  BUG FIXES
+
+    HADOOP-6638. try to relogin in a case of failed RPC connection (expired 
+    tgt) only in case the subject is loginUser or proxyUgi.realUser. (boryas)
+
+    HADOOP-6781. security audit log shouldn't have exception in it. (boryas)
+
+    HADOOP-6612.  Protocols RefreshUserToGroupMappingsProtocol and 
+    RefreshAuthorizationPolicyProtocol will fail with security enabled (boryas)
+
+    HADOOP-6764. Remove verbose logging from the Groups class. (Boris Shkolnik)
+
+    HADOOP-6730. Bug in FileContext#copy and provide base class for 
+    FileContext tests. (Ravi Phulari via jghoman)
+
+    HADOOP-6669. Respect compression configuration when creating DefaultCodec
+    instances. (Koji Noguchi via cdouglas)
+
+    HADOOP-6747. TestNetUtils fails on Mac OS X. (Todd Lipcon via jghoman)
+
+    HADOOP-6787. Factor out glob pattern code from FileContext and
+    Filesystem. Also fix bugs identified in HADOOP-6618 and make the
+    glob pattern code less restrictive and more POSIX standard
+    compliant. (Luke Lu via eli)
+
+    HADOOP-6649.  login object in UGI should be inside the subject (jnp via 
+    boryas)
+
+    HADOOP-6687.   user object in the subject in UGI should be reused in case 
+    of a relogin. (jnp via boryas)
+
+    HADOOP-6603. Provide workaround for issue with Kerberos not resolving 
+    cross-realm principal (Kan Zhang and Jitendra Pandey via jghoman)
+
+    HADOOP-6620. NPE if renewer is passed as null in getDelegationToken.
+    (Jitendra Pandey via jghoman)
+
+    HADOOP-6613. Moves the RPC version check ahead of the AuthMethod check.
+    (Kan Zhang via ddas)
+
+    HADOOP-6682. NetUtils:normalizeHostName does not process hostnames starting
+    with [a-f] correctly. (jghoman)
+
+    HADOOP-6652. Removes the unnecessary cache from 
+    ShellBasedUnixGroupsMapping. (ddas)
+
+    HADOOP-6815. refreshSuperUserGroupsConfiguration should use server side 
+    configuration for the refresh (boryas)
+
+    HADOOP-6648. Adds a check for null tokens in Credentials.addToken api.
+    (ddas)
+ 
+    HADOOP-6647. balancer fails with "is not authorized for protocol 
+    interface NamenodeProtocol" in secure environment (boryas)
+
+    HADOOP-6834. TFile.append compares initial key against null lastKey
+    (hong tang via mahadev)
+
+    HADOOP-6670. Use the UserGroupInformation's Subject as the criteria for
+    equals and hashCode. (Owen O'Malley and Kan Zhang via ddas)
+
+    HADOOP-6536. Fixes FileUtil.fullyDelete() not to delete the contents of
+    the sym-linked directory. (Ravi Gummadi via amareshwari)
+
+    HADOOP-6873. using delegation token over hftp for long 
+    running clients (boryas)
+
+    HADOOP-6706. Improves the sasl failure handling due to expired tickets,
+    and other server detected failures. (Jitendra Pandey and ddas via ddas)
+
+    HADOOP-6715. Fixes AccessControlList.toString() to return a descriptive
+    String representation of the ACL. (Ravi Gummadi via amareshwari)
+
+    HADOOP-6885. Fix java doc warnings in Groups and 
+    RefreshUserMappingsProtocol. (Eli Collins via jghoman) 
+
+    HADOOP-6482. GenericOptionsParser constructor that takes Options and 
+    String[] ignores options. (Eli Collins via jghoman)
+
+    HADOOP-6906.  FileContext copy() utility doesn't work with recursive
+    copying of directories. (vinod k v via mahadev)
+
+    HADOOP-6453. Hadoop wrapper script shouldn't ignore an existing 
+    JAVA_LIBRARY_PATH. (Chad Metcalf via jghoman)
+
+    HADOOP-6932. Namenode start (init) fails because of an invalid kerberos
+    key, even when security is set to "simple". (boryas)
+
+    HADOOP-6913. Circular initialization between UserGroupInformation and 
+    KerberosName (Kan Zhang via boryas)
+
+    HADOOP-6907. Rpc client doesn't use the per-connection conf to figure
+    out server's Kerberos principal (Kan Zhang via hairong)
+
+    HADOOP-6938. ConnectionId.getRemotePrincipal() should check if security
+    is enabled. (Kan Zhang via hairong)
+
+    HADOOP-6930. AvroRpcEngine doesn't work with generated Avro code. 
+    (sharad)
+
+    HADOOP-6940. RawLocalFileSystem's markSupported method misnamed 
+    markSupport. (Tom White via eli).
+
+    HADOOP-6951.  Distinct minicluster services (e.g. NN and JT) overwrite each
+    other's service policies.  (Aaron T. Myers via tomwhite)
+
+    HADOOP-6879. Provide SSH based (Jsch) remote execution API for system
+    tests (cos)
+
+    HADOOP-6989. Correct the parameter for SetFile to set the value type
+    for SetFile to be NullWritable instead of the key. (cdouglas via omalley)
+
+    HADOOP-6984. Combine the compress kind and the codec in the same option
+    for SequenceFiles. (cdouglas via omalley)
+
+    HADOOP-6933. TestListFiles is flaky. (Todd Lipcon via tomwhite)
+
+    HADOOP-6947.  Kerberos relogin should set refreshKrb5Config to true.
+    (Todd Lipcon via tomwhite)
+
+    HADOOP-7006. Fix 'fs -getmerge' command to not be a no-op.
+    (Chris Nauroth via cutting)
+
+    HADOOP-6663.  BlockDecompressorStream get EOF exception when decompressing
+    the file compressed from empty file.  (Kang Xiao via tomwhite)
+
+    HADOOP-6991.  Fix SequenceFile::Reader to honor file lengths and call
+    openFile (cdouglas via omalley)
+
+    HADOOP-7011.  Fix KerberosName.main() to not throw an NPE.
+    (Aaron T. Myers via tomwhite)
+
+    HADOOP-6975.  Integer overflow in S3InputStream for blocks > 2GB.
+    (Patrick Kling via tomwhite)
+
+    HADOOP-6758. MapFile.fix does not allow index interval definition.
+    (Gianmarco De Francisci Morales via tomwhite)
+
+    HADOOP-6926. SocketInputStream incorrectly implements read().
+    (Todd Lipcon via tomwhite)
+
+    HADOOP-6899. RawLocalFileSystem#setWorkingDir() does not work for relative
+    names. (Sanjay Radia)
+
+    HADOOP-6496. HttpServer sends wrong content-type for CSS files
+    (and others). (Todd Lipcon via tomwhite)
+
+    HADOOP-7057. IOUtils.readFully and IOUtils.skipFully have typo in
+    exception creation's message. (cos)
+
+    HADOOP-7038. saveVersion script includes an additional \r while running
+    whoami under windows. (Wang Xu via cos)
+
+    HADOOP-7082. Configuration.writeXML should not hold lock while outputting
+    (todd)
+
+    HADOOP-7070. JAAS configuration should delegate unknown application names
+    to pre-existing configuration. (todd)
+
+    HADOOP-7087. SequenceFile.createWriter ignores FileSystem parameter (todd)
+
+    HADOOP-7091. reloginFromKeytab() should happen even if TGT can't be found.
+    (Kan Zhang via jghoman)
+
+    HADOOP-7100. Fix build to not refer to contrib/ec2 removed by HADOOP-6811
+    (todd)
+
+    HADOOP-7097. JAVA_LIBRARY_PATH missing base directory. (Noah Watkins via
+    todd)
+
+    HADOOP-7093. Servlets should default to text/plain (todd)
+
+    HADOOP-7101. UserGroupInformation.getCurrentUser() fails when called from
+    non-Hadoop JAAS context. (todd)
+
+    HADOOP-7089. Fix link resolution logic in hadoop-config.sh. (eli)
+
+    HADOOP-7046. Fix Findbugs warning in Configuration. (Po Cheung via shv)
+
+    HADOOP-7118. Fix NPE in Configuration.writeXml (todd)
+
+    HADOOP-7122. Fix thread leak when shell commands time out. (todd)
+
+    HADOOP-7126. Fix file permission setting for RawLocalFileSystem on Windows.
+    (Po Cheung via shv)
+
+    HADOOP-6642. Fix javac, javadoc, findbugs warnings related to security work. 
+    (Chris Douglas, Po Cheung via shv)
+
+    HADOOP-7140. IPC Reader threads do not stop when server stops (todd)
+
+    HADOOP-7094. hadoop.css got lost during project split (cos)
+
+    HADOOP-7145. Configuration.getLocalPath should trim whitespace from
+    the provided directories. (todd)
+
+    HADOOP-7156. Workaround for unsafe implementations of getpwuid_r (todd)
+
+    HADOOP-6898. FileSystem.copyToLocal creates files with 777 permissions.
+    (Aaron T. Myers via tomwhite)
+
+    HADOOP-7229. Do not default to an absolute path for kinit in Kerberos
+    auto-renewal thread. (Aaron T. Myers via todd)
+
+    HADOOP-7172. SecureIO should not check owner on non-secure
+    clusters that have no native support. (todd via eli)
+
+    HADOOP-7184. Remove deprecated config local.cache.size from
+    core-default.xml (todd)
+
+    HADOOP-7245. FsConfig should use constants in CommonConfigurationKeys.
+    (tomwhite via eli)
+
+    HADOOP-7068. Ivy resolve force mode should be turned off by default.
+    (Luke Lu via tomwhite)
+
+    HADOOP-7296. The FsPermission(FsPermission) constructor does not use the
+    sticky bit. (Siddharth Seth via tomwhite)
+
+    HADOOP-7300. Configuration methods that return collections are inconsistent
+    about mutability. (todd)
+
+    HADOOP-7305. Eclipse project classpath should include tools.jar from JDK.
+    (Niels Basjes via todd)
+
+    HADOOP-7318. MD5Hash factory should reset the digester it returns.
+    (todd via eli)
+
+    HADOOP-7287. Configuration deprecation mechanism doesn't work properly for
+    GenericOptionsParser and Tools. (Aaron T. Myers via todd)
+
+    HADOOP-7146. RPC server leaks file descriptors (todd)
+
+    HADOOP-7276. Hadoop native builds fail on ARM due to -m32 (Trevor Robinson
+    via eli)
+
+    HADOOP-7121. Exceptions while serializing IPC call responses are not
+    handled well. (todd)
+
+    HADOOP-7351  Regression: HttpServer#getWebAppsPath used to be protected
+    so subclasses could supply alternate webapps path but it was made private
+    by HADOOP-6461 (Stack)
+
+    HADOOP-7349. HADOOP-7121 accidentally disabled some tests in TestIPC.
+    (todd)
+
+    HADOOP-7390. VersionInfo not generated properly in git after unsplit. (todd
+    via atm)
+
+    HADOOP-7568. SequenceFile should not print into stdout.
+    (Plamen Jeliazkov via shv)
+
+    HADOOP-7663. Fix TestHDFSTrash failure. (Mayank Bansal via shv)
+
+    HADOOP-7457. Remove out-of-date Chinese language documentation.
+    (Jakob Homan via eli)
+
+    HADOOP-7783. Add more symlink tests that cover intermediate links. (eli)
+
+Release 0.21.1 - Unreleased
+
+  IMPROVEMENTS
+
+    HADOOP-6934. Test for ByteWritable comparator.
+    (Johannes Zillmann via Eli Collins)
+
+    HADOOP-6786. test-patch needs to verify Herriot integrity (cos)
+
+    HADOOP-7177. CodecPool should report which compressor it is using.
+    (Allen Wittenauer via eli)
+
+  BUG FIXES
+
+    HADOOP-6925. BZip2Codec incorrectly implements read(). 
+    (Todd Lipcon via Eli Collins)
+
+    HADOOP-6833. IPC leaks call parameters when exceptions thrown.
+    (Todd Lipcon via Eli Collins)
+
+    HADOOP-6971. Clover build doesn't generate per-test coverage (cos)
+
+    HADOOP-6993. Broken link on cluster setup page of docs. (eli)
+
+    HADOOP-6944. [Herriot] Implement a functionality for getting proxy users
+    definitions like groups and hosts. (Vinay Thota via cos)
+
+    HADOOP-6954.  Sources JARs are not correctly published to the Maven
+    repository. (tomwhite)
+
+    HADOOP-7052. misspelling of threshold in conf/log4j.properties.
+    (Jingguo Yao via eli)
+
+    HADOOP-7053. wrong FSNamesystem Audit logging setting in 
+    conf/log4j.properties. (Jingguo Yao via eli)
+
+    HADOOP-7120. Fix a syntax error in test-patch.sh.  (szetszwo)
+
+    HADOOP-7162. Remove a duplicated call to FileSystem.listStatus(..) in
+    FsShell. (Alexey Diomin via szetszwo)
+
+    HADOOP-7117. Remove fs.checkpoint.* from core-default.xml and replace
+    fs.checkpoint.* with dfs.namenode.checkpoint.* in documentations.
+    (Harsh J Chouraria via szetszwo)
+
+    HADOOP-7193. Correct the "fs -touchz" command help message.
+    (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-7174. Null is displayed in the "fs -copyToLocal" command.
+    (Uma Maheswara Rao G via szetszwo)
+
+    HADOOP-7194. Fix resource leak in IOUtils.copyBytes(..).
+    (Devaraj K via szetszwo)
+
+    HADOOP-7183. WritableComparator.get should not cache comparator objects.
+    (tomwhite via eli)
+
+Release 0.21.0 - 2010-08-13
+
+  INCOMPATIBLE CHANGES
+
+    HADOOP-4895. Remove deprecated methods DFSClient.getHints(..) and
+    DFSClient.isDirectory(..).  (szetszwo)
+
+    HADOOP-4941. Remove deprecated FileSystem methods: getBlockSize(Path f),
+    getLength(Path f) and getReplication(Path src).  (szetszwo)
+
+    HADOOP-4648. Remove obsolete, deprecated InMemoryFileSystem and
+    ChecksumDistributedFileSystem.  (cdouglas via szetszwo)
+
+    HADOOP-4940. Remove a deprecated method FileSystem.delete(Path f).  (Enis
+    Soztutar via szetszwo)
+
+    HADOOP-4010. Change semantics for LineRecordReader to read an additional
+    line per split- rather than moving back one character in the stream- to
+    work with splittable compression codecs. (Abdul Qadeer via cdouglas)
+
+    HADOOP-5094. Show hostname and separate live/dead datanodes in DFSAdmin
+    report.  (Jakob Homan via szetszwo)
+
+    HADOOP-4942. Remove deprecated FileSystem methods getName() and
+    getNamed(String name, Configuration conf).  (Jakob Homan via szetszwo)
+
+    HADOOP-5486. Removes the CLASSPATH string from the command line and instead
+    exports it in the environment. (Amareshwari Sriramadasu via ddas)
+
+    HADOOP-2827. Remove deprecated NetUtils::getServerAddress. (cdouglas)
+
+    HADOOP-5681. Change examples RandomWriter and RandomTextWriter to 
+    use new mapreduce API. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5680. Change org.apache.hadoop.examples.SleepJob to use new 
+    mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5699. Change org.apache.hadoop.examples.PiEstimator to use 
+    new mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5720. Introduces new task types - JOB_SETUP, JOB_CLEANUP
+    and TASK_CLEANUP. Removes the isMap methods from TaskID/TaskAttemptID
+    classes. (ddas)
+
+    HADOOP-5668. Change TotalOrderPartitioner to use new API. (Amareshwari
+    Sriramadasu via cdouglas)
+
+    HADOOP-5738. Split "waiting_tasks" JobTracker metric into waiting maps and
+    waiting reduces. (Sreekanth Ramakrishnan via cdouglas)
+
+    HADOOP-5679. Resolve findbugs warnings in core/streaming/pipes/examples. 
+    (Jothi Padmanabhan via sharad)
+
+    HADOOP-4359. Support for data access authorization checking on Datanodes.
+    (Kan Zhang via rangadi)
+
+    HADOOP-5690. Change org.apache.hadoop.examples.DBCountPageView to use 
+    new mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5694. Change org.apache.hadoop.examples.dancing to use new 
+    mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5696. Change org.apache.hadoop.examples.Sort to use new 
+    mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5698. Change org.apache.hadoop.examples.MultiFileWordCount to 
+    use new mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5913. Provide ability to an administrator to stop and start
+    job queues. (Rahul Kumar Singh and Hemanth Yamijala via yhemanth)
+
+    MAPREDUCE-711. Removed Distributed Cache from Common, to move it
+    under Map/Reduce. (Vinod Kumar Vavilapalli via yhemanth)
+
+    HADOOP-6201. Change FileSystem::listStatus contract to throw
+    FileNotFoundException if the directory does not exist, rather than letting
+    this be implementation-specific. (Jakob Homan via cdouglas)
+
+    HADOOP-6230. Moved process tree and memory calculator related classes
+    from Common to Map/Reduce. (Vinod Kumar Vavilapalli via yhemanth)
+
+    HADOOP-6203. FsShell rm/rmr error message indicates exceeding Trash quota
+    and suggests using -skipTrash when moving to trash fails.
+    (Boris Shkolnik via suresh)
+
+    HADOOP-6303. Eclipse .classpath template has outdated jar files and is
+    missing some new ones.  (cos)
+
+    HADOOP-6396. Fix uninformative exception message when unable to parse
+    umask. (jghoman)
+
+    HADOOP-6299. Reimplement the UserGroupInformation to use the OS
+    specific and Kerberos JAAS login. (omalley)
+
+    HADOOP-6686. Remove redundant exception class name from the exception
+    message for the exceptions thrown at RPC client. (suresh)
+
+    HADOOP-6701. Fix incorrect exit codes returned from chmod, chown and chgrp
+    commands from FsShell. (Ravi Phulari via suresh)
+
+  NEW FEATURES
+
+    HADOOP-6332. Large-scale Automated Test Framework. (sharad, Sreekanth
+    Ramakrishnan, et al. via cos)
+
+    HADOOP-4268. Change fsck to use ClientProtocol methods so that the
+    corresponding permission requirement for running the ClientProtocol
+    methods will be enforced.  (szetszwo)
+
+    HADOOP-3953. Implement sticky bit for directories in HDFS. (Jakob Homan
+    via szetszwo)
+
+    HADOOP-4368. Implement df in FsShell to show the status of a FileSystem.
+    (Craig Macdonald via szetszwo)
+
+    HADOOP-3741. Add a web ui to the SecondaryNameNode for showing its status.
+    (szetszwo)
+
+    HADOOP-5018. Add pipelined writers to Chukwa. (Ari Rabkin via cdouglas)
+
+    HADOOP-5052. Add an example computing exact digits of pi using the
+    Bailey-Borwein-Plouffe algorithm. (Tsz Wo (Nicholas), SZE via cdouglas)
+
+    HADOOP-4927. Adds a generic wrapper around OutputFormat to allow creation
+    of output on demand. (Jothi Padmanabhan via ddas)
+
+    HADOOP-5144. Add a new DFSAdmin command for changing the setting of restore
+    failed storage replicas in namenode. (Boris Shkolnik via szetszwo)
+
+    HADOOP-5258. Add a new DFSAdmin command to print a tree of the rack and
+    datanode topology as seen by the namenode.  (Jakob Homan via szetszwo)
+    
+    HADOOP-4756. A command line tool to access JMX properties on NameNode
+    and DataNode. (Boris Shkolnik via rangadi)
+
+    HADOOP-4539. Introduce backup node and checkpoint node. (shv)
+
+    HADOOP-5363. Add support for proxying connections to multiple clusters with
+    different versions to hdfsproxy. (Zhiyong Zhang via cdouglas)
+
+    HADOOP-5528. Add a configurable hash partitioner operating on ranges of
+    BinaryComparable keys. (Klaas Bosteels via shv)
+
+    HADOOP-5257. HDFS servers may start and stop external components through
+    a plugin interface. (Carlos Valiente via dhruba)
+
+    HADOOP-5450. Add application-specific data types to streaming's typed bytes
+    interface. (Klaas Bosteels via omalley)
+
+    HADOOP-5518. Add contrib/mrunit, a MapReduce unit test framework.
+    (Aaron Kimball via cutting)
+
+    HADOOP-5469.  Add /metrics servlet to daemons, providing metrics
+    over HTTP as either text or JSON.  (Philip Zeyliger via cutting)
+
+    HADOOP-5467. Introduce the offline fsimage viewer. (Jakob Homan via shv)
+
+    HADOOP-5752. Add a new hdfs image processor, Delimited, to oiv. (Jakob
+    Homan via szetszwo)
+
+    HADOOP-5266. Adds the capability to do mark/reset of the reduce values 
+    iterator in the Context object API. (Jothi Padmanabhan via ddas)
+
+    HADOOP-5745. Allow setting the default value of maxRunningJobs for all
+    pools. (dhruba via matei)
+
+    HADOOP-5643. Adds a way to decommission TaskTrackers while the JobTracker
+    is running. (Amar Kamat via ddas)
+
+    HADOOP-4829. Allow FileSystem shutdown hook to be disabled.
+    (Todd Lipcon via tomwhite)
+
+    HADOOP-5815. Sqoop: A database import tool for Hadoop.
+    (Aaron Kimball via tomwhite)
+
+    HADOOP-4861. Add disk usage with human-readable size (-duh).
+    (Todd Lipcon via tomwhite)
+
+    HADOOP-5844. Use mysqldump when connecting to local mysql instance in Sqoop.
+    (Aaron Kimball via tomwhite)
+
+    HADOOP-5976. Add a new command, classpath, to the hadoop script.  (Owen
+    O'Malley and Gary Murry via szetszwo)
+
+    HADOOP-6120. Add support for Avro specific and reflect data.
+    (sharad via cutting)
+
+    HADOOP-6226. Moves BoundedByteArrayOutputStream from the tfile package to
+    the io package and makes it available to other users (MAPREDUCE-318). 
+    (Jothi Padmanabhan via ddas)
+
+    HADOOP-6105. Adds support for automatically handling deprecation of
+    configuration keys. (V.V.Chaitanya Krishna via yhemanth)
+    
+    HADOOP-6235. Adds new method to FileSystem for clients to get server
+    defaults. (Kan Zhang via suresh)
+
+    HADOOP-6234. Add new option dfs.umaskmode to set umask in configuration
+    to use octal or symbolic instead of decimal. (Jakob Homan via suresh)
+
+    HADOOP-5073. Add annotation mechanism for interface classification.
+    (Jakob Homan via suresh)
+
+    HADOOP-4012. Provide splitting support for bzip2 compressed files. (Abdul
+    Qadeer via cdouglas)
+
+    HADOOP-6246. Add backward compatibility support to use deprecated decimal 
+    umask from old configuration. (Jakob Homan via suresh)
+
+    HADOOP-4952. Add new improved file system interface FileContext for the
+    application writer (Sanjay Radia via suresh)
+
+    HADOOP-6170. Add facility to tunnel Avro RPCs through Hadoop RPCs.
+    This permits one to take advantage of both Avro's RPC versioning
+    features and Hadoop's proven RPC scalability.  (cutting)
+
+    HADOOP-6267. Permit building contrib modules located in external
+    source trees.  (Todd Lipcon via cutting)
+
+    HADOOP-6240. Add a new FileContext rename operation that is POSIX
+    compliant and allows overwriting the existing destination. (suresh)
+
+    HADOOP-6204. Implement an aspect development and fault injection
+    framework for Hadoop. (cos)
+
+    HADOOP-6313. Implement Syncable interface in FSDataOutputStream to expose
+    flush APIs to application users. (Hairong Kuang via suresh)
+
+    HADOOP-6284. Add a new parameter, HADOOP_JAVA_PLATFORM_OPTS, to
+    hadoop-config.sh so that it allows setting java command options for
+    JAVA_PLATFORM.  (Koji Noguchi via szetszwo)
+
+    HADOOP-6337. Updates FilterInitializer class to be more visible,
+    and the init of the class is made to take a Configuration argument.
+    (Jakob Homan via ddas)
+
+    HADOOP-6223. Add new file system interface AbstractFileSystem with
+    implementation of some file systems that delegate to old FileSystem.
+    (Sanjay Radia via suresh)
+
+    HADOOP-6433. Introduce asynchronous deletion of files via a pool of
+    threads. This can be used to delete files in the Distributed
+    Cache. (Zheng Shao via dhruba)
+
+    HADOOP-6415. Adds a common token interface for both job token and 
+    delegation token. (Kan Zhang via ddas)
+
+    HADOOP-6408. Add a /conf servlet to dump running configuration.
+    (Todd Lipcon via tomwhite)
+
+    HADOOP-6520. Adds APIs to read/write Token and secret keys. Also
+    adds the automatic loading of tokens into UserGroupInformation
+    upon login. The tokens are read from a file specified in the
+    environment variable. (ddas)
+
+    HADOOP-6419. Adds SASL based authentication to RPC.
+    (Kan Zhang via ddas)
+
+    HADOOP-6510. Adds a way for superusers to impersonate other users
+    in a secure environment. (Jitendra Nath Pandey via ddas)
+
+    HADOOP-6421. Adds Symbolic links to FileContext, AbstractFileSystem.
+    It also adds a limited implementation for the local file system
+     (RawLocalFs) that allows local symlinks. (Eli Collins via Sanjay Radia)
+
+    HADOOP-6577. Add hidden configuration option "ipc.server.max.response.size"
+    to change the default 1 MB, the maximum size when large IPC handler 
+    response buffer is reset. (suresh)
+
+    HADOOP-6568. Adds authorization for the default servlets. 
+    (Vinod Kumar Vavilapalli via ddas)
+
+    HADOOP-6586. Log authentication and authorization failures and successes
+    for RPC (boryas)
+
+    HADOOP-6580. UGI should contain authentication method. (jnp via boryas)
+    
+    HADOOP-6657. Add a capitalization method to StringUtils for MAPREDUCE-1545.
+    (Luke Lu via Steve Loughran)
+
+    HADOOP-6692. Add FileContext#listStatus that returns an iterator.
+    (hairong)
+
+    HADOOP-6869. Functionality to create file or folder on a remote daemon
+    side (Vinay Thota via cos)
+
+  IMPROVEMENTS
+
+    HADOOP-6798. Align Ivy version for all Hadoop subprojects. (cos)
+
+    HADOOP-6777. Implement functionality to suspend and resume a process.
+    (Vinay Thota via cos)
+
+    HADOOP-6772. Utilities specific to system tests. (Vinay Thota via cos)
+
+    HADOOP-6771. Herriot's artifact id for Maven deployment should be set to
+    hadoop-core-instrumented (cos)
+
+    HADOOP-6752. Remote cluster control functionality needs JavaDocs
+    improvement (Balaji Rajagopalan via cos).
+
+    HADOOP-4565. Added CombineFileInputFormat to use data locality information
+    to create splits. (dhruba via zshao)
+
+    HADOOP-4936. Improvements to TestSafeMode. (shv)
+
+    HADOOP-4985. Remove unnecessary "throw IOException" declarations in
+    FSDirectory related methods.  (szetszwo)
+
+    HADOOP-5017. Change NameNode.namesystem declaration to private.  (szetszwo)
+
+    HADOOP-4794. Add branch information from the source version control into
+    the version information that is compiled into Hadoop. (cdouglas via 
+    omalley)
+
+    HADOOP-5070. Increment copyright year to 2009, remove assertions of ASF
+    copyright to licensed files. (Tsz Wo (Nicholas), SZE via cdouglas)
+
+    HADOOP-5037. Deprecate static FSNamesystem.getFSNamesystem().  (szetszwo)
+
+    HADOOP-5088. Include releaseaudit target as part of developer test-patch
+    target.  (Giridharan Kesavan via nigel)
+
+    HADOOP-2721. Uses setsid when creating new tasks so that subprocesses of 
+    this process will be within this new session (and this process will be 
+    the process leader for all the subprocesses). Killing the process leader,
+    or the main Java task in Hadoop's case, kills the entire subtree of
+    processes. (Ravi Gummadi via ddas)
+
+    HADOOP-5097. Remove static variable JspHelper.fsn, a static reference to
+    a non-singleton FSNamesystem object.  (szetszwo)
+
+    HADOOP-3327. Improves handling of READ_TIMEOUT during map output copying.
+    (Amareshwari Sriramadasu via ddas)
+
+    HADOOP-5124. Choose datanodes randomly instead of starting from the first
+    datanode for providing fairness.  (hairong via szetszwo)
+
+    HADOOP-4930. Implement a Linux native executable that can be used to 
+    launch tasks as users. (Sreekanth Ramakrishnan via yhemanth)
+
+    HADOOP-5122. Fix format of fs.default.name value in libhdfs test conf.
+    (Craig Macdonald via tomwhite)
+
+    HADOOP-5038. Direct daemon trace to debug log instead of stdout. (Jerome
+    Boulon via cdouglas)
+
+    HADOOP-5101. Improve packaging by adding 'all-jars' target building core,
+    tools, and example jars. Let findbugs depend on this rather than the 'tar'
+    target. (Giridharan Kesavan via cdouglas)
+
+    HADOOP-4868. Splits the hadoop script into three parts - bin/hadoop, 
+    bin/mapred and bin/hdfs. (Sharad Agarwal via ddas)
+
+    HADOOP-1722. Adds support for TypedBytes and RawBytes in Streaming.
+    (Klaas Bosteels via ddas)
+
+    HADOOP-4220. Changes the JobTracker restart tests so that they take much
+    less time. (Amar Kamat via ddas)
+
+    HADOOP-4885. Try to restore failed name-node storage directories at 
+    checkpoint time. (Boris Shkolnik via shv)
+
+    HADOOP-5209. Update year to 2009 for javadoc.  (szetszwo)
+
+    HADOOP-5279. Remove unnecessary targets from test-patch.sh.
+    (Giridharan Kesavan via nigel)
+
+    HADOOP-5120. Remove the use of FSNamesystem.getFSNamesystem() from 
+    UpgradeManagerNamenode and UpgradeObjectNamenode.  (szetszwo)
+
+    HADOOP-5222. Add offset to datanode clienttrace. (Lei Xu via cdouglas)
+
+    HADOOP-5240. Skip re-building javadoc when it is already
+    up-to-date. (Aaron Kimball via cutting)
+
+    HADOOP-5042. Add a cleanup stage to log rollover in Chukwa appender.
+    (Jerome Boulon via cdouglas)
+
+    HADOOP-5264. Removes redundant configuration object from the TaskTracker.
+    (Sharad Agarwal via ddas)
+
+    HADOOP-5232. Enable patch testing to occur on more than one host.
+    (Giri Kesavan via nigel)
+
+    HADOOP-4546. Fix DF reporting for AIX. (Bill Habermaas via cdouglas)
+
+    HADOOP-5023. Add Tomcat support to HdfsProxy. (Zhiyong Zhang via cdouglas)
+    
+    HADOOP-5317. Provide documentation for LazyOutput Feature. 
+    (Jothi Padmanabhan via johan)
+
+    HADOOP-5455. Document rpc metrics context to the extent dfs, mapred, and
+    jvm contexts are documented. (Philip Zeyliger via cdouglas)
+
+    HADOOP-5358. Provide scripting functionality to the synthetic load
+    generator. (Jakob Homan via hairong)
+
+    HADOOP-5442. Paginate jobhistory display and added some search
+    capabilities. (Amar Kamat via acmurthy) 
+
+    HADOOP-4842. Streaming now allows specifying a command for the combiner.
+    (Amareshwari Sriramadasu via ddas)
+
+    HADOOP-5196. avoiding unnecessary byte[] allocation in 
+    SequenceFile.CompressedBytes and SequenceFile.UncompressedBytes.
+    (hong tang via mahadev)
+
+    HADOOP-4655. New method FileSystem.newInstance() that always returns
+    a newly allocated FileSystem object. (dhruba)
+
+    HADOOP-4788. Set Fair scheduler to assign both a map and a reduce on each
+    heartbeat by default. (matei)
+
+    HADOOP-5491.  In contrib/index, better control memory usage.
+    (Ning Li via cutting)
+
+    HADOOP-5423. Include option of preserving file metadata in
+    SequenceFile::sort. (Michael Tamm via cdouglas)
+
+    HADOOP-5331. Add support for KFS appends. (Sriram Rao via cdouglas)
+
+    HADOOP-4365. Make Configuration::getProps protected in support of
+    meaningful subclassing. (Steve Loughran via cdouglas)
+
+    HADOOP-2413. Remove the static variable FSNamesystem.fsNamesystemObject.
+    (Konstantin Shvachko via szetszwo)
+
+    HADOOP-4584. Improve datanode block reports and associated file system
+    scan to avoid interfering with normal datanode operations.
+    (Suresh Srinivas via rangadi)
+
+    HADOOP-5502. Documentation for backup and checkpoint nodes.
+    (Jakob Homan via shv)
+
+    HADOOP-5485. Mask actions in the fair scheduler's servlet UI based on
+    value of webinterface.private.actions. 
+    (Vinod Kumar Vavilapalli via yhemanth)
+
+    HADOOP-5581. HDFS should throw FileNotFoundException when opening
+    a file that does not exist. (Brian Bockelman via rangadi)
+
+    HADOOP-5509. PendingReplicationBlocks does not start monitor in the
+    constructor. (shv)
+
+    HADOOP-5494. Modify sorted map output merger to lazily read values,
+    rather than buffering at least one record for each segment. (Devaraj Das
+    via cdouglas)
+
+    HADOOP-5396. Provide ability to refresh queue ACLs in the JobTracker
+    without having to restart the daemon.
+    (Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli via yhemanth)
+
+    HADOOP-4490. Provide ability to run tasks as job owners.
+    (Sreekanth Ramakrishnan via yhemanth)
+
+    HADOOP-5697. Change org.apache.hadoop.examples.Grep to use new 
+    mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5625. Add operation duration to clienttrace. (Lei Xu via cdouglas)
+
+    HADOOP-5705. Improve TotalOrderPartitioner efficiency by updating the trie
+    construction. (Dick King via cdouglas)
+
+    HADOOP-5589. Eliminate source limit of 64 for map-side joins imposed by
+    TupleWritable encoding. (Jingkei Ly via cdouglas)
+
+    HADOOP-5734. Correct block placement policy description in HDFS
+    Design document. (Konstantin Boudnik via shv)
+
+    HADOOP-5657. Validate data in TestReduceFetch to improve merge test
+    coverage. (cdouglas)
+
+    HADOOP-5613. Change S3Exception to checked exception.
+    (Andrew Hitchcock via tomwhite)
+
+    HADOOP-5717. Create public enum class for the Framework counters in 
+    org.apache.hadoop.mapreduce. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5217. Split AllTestDriver for core, hdfs and mapred. (sharad)
+
+    HADOOP-5364. Add certificate expiration warning to HsftpFileSystem and HDFS
+    proxy. (Zhiyong Zhang via cdouglas)
+
+    HADOOP-5733. Add map/reduce slot capacity and blacklisted capacity to
+    JobTracker metrics. (Sreekanth Ramakrishnan via cdouglas)
+
+    HADOOP-5596. Add EnumSetWritable. (He Yongqiang via szetszwo)
+
+    HADOOP-5727. Simplify hashcode for ID types. (Shevek via cdouglas)
+
+    HADOOP-5500. In DBOutputFormat, where field names are absent permit the
+    number of fields to be sufficient to construct the select query. (Enis
+    Soztutar via cdouglas)
+
+    HADOOP-5081. Split TestCLI into HDFS, Mapred and Core tests. (sharad)
+
+    HADOOP-5015. Separate block management code from FSNamesystem.  (Suresh
+    Srinivas via szetszwo)
+
+    HADOOP-5080. Add new test cases to TestMRCLI and TestHDFSCLI
+    (V.Karthikeyan via nigel)
+
+    HADOOP-5135. Splits the tests into different directories based on the 
+    package. Four new test targets have been defined - run-test-core, 
+    run-test-mapred, run-test-hdfs and run-test-hdfs-with-mr.
+    (Sharad Agarwal via ddas)
+
+    HADOOP-5771. Implements unit tests for LinuxTaskController.
+    (Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli via yhemanth)
+
+    HADOOP-5419. Provide a facility to query the Queue ACLs for the
+    current user.
+    (Rahul Kumar Singh via yhemanth)
+
+    HADOOP-5780. Improve the per-block message printed by "-metaSave" in HDFS.
+    (Raghu Angadi)
+
+    HADOOP-5823. Added a new class DeprecatedUTF8 to help with removing
+    UTF8 related javac warnings. These warnings are removed in 
+    FSEditLog.java as a use case. (Raghu Angadi)
+
+    HADOOP-5824. Deprecate DataTransferProtocol.OP_READ_METADATA and remove
+    the corresponding unused codes.  (Kan Zhang via szetszwo)
+
+    HADOOP-5721. Factor out EditLogFileInputStream and EditLogFileOutputStream
+    into independent classes. (Luca Telloli & Flavio Junqueira via shv)
+
+    HADOOP-5838. Fix a few javac warnings in HDFS. (Raghu Angadi)
+
+    HADOOP-5854. Fix a few "Inconsistent Synchronization" warnings in HDFS.
+    (Raghu Angadi)
+
+    HADOOP-5369. Small tweaks to reduce MapFile index size. (Ben Maurer 
+    via sharad)
+
+    HADOOP-5858. Eliminate UTF8 and fix warnings in test/hdfs-with-mr package.
+    (shv)
+
+    HADOOP-5866. Move DeprecatedUTF8 from o.a.h.io to o.a.h.hdfs since it may
+    not be used outside hdfs. (Raghu Angadi)
+
+    HADOOP-5857. Move normal java methods from hdfs .jsp files to .java files.
+    (szetszwo)
+
+    HADOOP-5873. Remove deprecated methods randomDataNode() and
+    getDatanodeByIndex(..) in FSNamesystem.  (szetszwo)
+
+    HADOOP-5572. Improves the progress reporting for the sort phase for both
+    maps and reduces. (Ravi Gummadi via ddas)
+
+    HADOOP-5839. Fix EC2 scripts to allow remote job submission.
+    (Joydeep Sen Sarma via tomwhite)
+
+    HADOOP-5877. Fix javac warnings in TestHDFSServerPorts, TestCheckpoint, 
+    TestNameEditsConfig, TestStartup and TestStorageRestore.
+    (Jakob Homan via shv)
+
+    HADOOP-5438. Provide a single FileSystem method to create or 
+    open-for-append to a file.  (He Yongqiang via dhruba)
+
+    HADOOP-5472. Change DistCp to support globbing of input paths.  (Dhruba
+    Borthakur and Rodrigo Schmidt via szetszwo)
+
+    HADOOP-5175. Don't unpack libjars on classpath. (Todd Lipcon via tomwhite)
+
+    HADOOP-5620. Add an option to DistCp for preserving modification and access
+    times.  (Rodrigo Schmidt via szetszwo)
+
+    HADOOP-5664. Change map serialization so a lock is obtained only where
+    contention is possible, rather than for each write. (cdouglas)
+
+    HADOOP-5896. Remove the dependency of GenericOptionsParser on 
+    Option.withArgPattern. (Giridharan Kesavan and Sharad Agarwal via 
+    sharad)
+
+    HADOOP-5784. Makes the number of heartbeats that should arrive per second
+    at the JobTracker configurable. (Amareshwari Sriramadasu via ddas)
+
+    HADOOP-5955. Changes TestFileOutputFormat so that it uses LOCAL_MR
+    instead of CLUSTER_MR. (Jothi Padmanabhan via das)
+
+    HADOOP-5948. Changes TestJavaSerialization to use LocalJobRunner 
+    instead of MiniMR/DFS cluster. (Jothi Padmanabhan via das)
+
+    HADOOP-2838. Add mapred.child.env to pass environment variables to 
+    tasktracker's child processes. (Amar Kamat via sharad)
+
+    HADOOP-5961. The DataNode process understands generic Hadoop command line
+    options (like -Ddfs.property=value). (Raghu Angadi)
+
+    HADOOP-5938. Change org.apache.hadoop.mapred.jobcontrol to use new
+    api. (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-2141. Improves the speculative execution heuristic. The heuristic
+    is currently based on the progress-rates of tasks and the expected time
+    to complete. Also, statistics about trackers are collected, and speculative
+    tasks are not given to the ones deduced to be slow. 
+    (Andy Konwinski and ddas)
+
+    HADOOP-5952. Change "-1 tests included" wording in test-patch.sh.
+    (Gary Murry via szetszwo)
+
+    HADOOP-6106. Provides an option in ShellCommandExecutor to timeout 
+    commands that do not complete within a certain amount of time.
+    (Sreekanth Ramakrishnan via yhemanth)
+
+    HADOOP-5925. EC2 scripts should exit on error. (tomwhite)
+
+    HADOOP-6109. Change Text to grow its internal buffer exponentially, rather
+    than to the max of the current length and the proposed length, to improve
+    performance when reading large values. (thushara wijeratna via cdouglas)
+
+    HADOOP-2366. Support trimmed strings in Configuration.  (Michele Catasta
+    via szetszwo)
+
+    HADOOP-6099. The RPC module can be configured to not send periodic pings.
+    The default behaviour of sending periodic pings remains unchanged. (dhruba)
+
+    HADOOP-6142. Update documentation and use of harchives for relative paths
+    added in MAPREDUCE-739. (Mahadev Konar via cdouglas)
+
+    HADOOP-6148. Implement a fast, pure Java CRC32 calculator which outperforms
+    java.util.zip.CRC32.  (Todd Lipcon and Scott Carey via szetszwo)
+
+    HADOOP-6146. Upgrade to JetS3t version 0.7.1. (tomwhite)
+
+    HADOOP-6161. Add get/setEnum methods to Configuration. (cdouglas)
+
+    HADOOP-6160. Fix releaseaudit target to run on specific directories.
+    (gkesavan)
+    
+    HADOOP-6169. Removing deprecated method calls in TFile. (hong tang via 
+    mahadev)
+
+    HADOOP-6176. Add a couple package private methods to AccessTokenHandler
+    for testing.  (Kan Zhang via szetszwo)
+
+    HADOOP-6182. Fix ReleaseAudit warnings (Giridharan Kesavan and Lee Tucker
+    via gkesavan)
+
+    HADOOP-6173. Change src/native/packageNativeHadoop.sh to package all
+    native library files.  (Hong Tang via szetszwo)
+
+    HADOOP-6184. Provide an API to dump Configuration in a JSON format.
+    (V.V.Chaitanya Krishna via yhemanth)
+
+    HADOOP-6224. Add a method to WritableUtils performing a bounded read of an
+    encoded String. (Jothi Padmanabhan via cdouglas)
+
+    HADOOP-6133. Add a caching layer to Configuration::getClassByName to
+    alleviate a performance regression introduced in a compatibility layer.
+    (Todd Lipcon via cdouglas)
+
+    HADOOP-6252. Provide a method to determine if a deprecated key is set in
+    config file. (Jakob Homan via suresh)
+
+    HADOOP-5879. Read compression level and strategy from Configuration for
+    gzip compression. (He Yongqiang via cdouglas)
+
+    HADOOP-6216. Support comments in host files.  (Ravi Phulari and Dmytro
+    Molkov via szetszwo)
+
+    HADOOP-6217. Update documentation for project split. (Corinne Chandel via 
+    omalley)
+
+    HADOOP-6268. Add ivy jar to .gitignore. (Todd Lipcon via cdouglas)
+
+    HADOOP-6270. Support deleteOnExit in FileContext.  (Suresh Srinivas via
+    szetszwo)
+
+    HADOOP-6233. Rename configuration keys towards API standardization and
+    backward compatibility. (Jithendra Pandey via suresh)
+
+    HADOOP-6260. Add additional unit tests for FileContext util methods.
+    (Gary Murry via suresh).
+
+    HADOOP-6309. Change build.xml to run tests with java asserts.  (Eli
+    Collins via szetszwo)
+
+    HADOOP-6326. Hudson runs should check for AspectJ warnings and report
+    failure if any is present (cos)
+
+    HADOOP-6329. Add build-fi directory to the ignore lists.  (szetszwo)
+
+    HADOOP-5107. Use Maven ant tasks to publish the subproject jars.
+    (Giridharan Kesavan via omalley)
+
+    HADOOP-6343. Log unexpected throwable object caught in RPC.  (Jitendra Nath
+    Pandey via szetszwo)
+
+    HADOOP-6367. Removes Access Token implementation from common.
+    (Kan Zhang via ddas)
+
+    HADOOP-6395. Upgrade some libraries to be consistent across common, hdfs,
+    and mapreduce. (omalley)
+
+    HADOOP-6398. Build is broken after HADOOP-6395 patch has been applied (cos)
+
+    HADOOP-6413. Move TestReflectionUtils to Common. (Todd Lipcon via tomwhite)
+
+    HADOOP-6283. Improve the exception messages thrown by
+    FileUtil$HardLink.getLinkCount(..).  (szetszwo)
+
+    HADOOP-6279. Add Runtime::maxMemory to JVM metrics. (Todd Lipcon via
+    cdouglas)
+
+    HADOOP-6305. Unify build property names to facilitate cross-project
+    modifications (cos)
+
+    HADOOP-6312. Remove unnecessary debug logging in Configuration constructor.
+    (Aaron Kimball via cdouglas)
+
+    HADOOP-6366. Reduce ivy console output to an observable level (cos)
+
+    HADOOP-6400. Log errors getting Unix UGI. (Todd Lipcon via tomwhite)
+
+    HADOOP-6346. Add support for specifying unpack pattern regex to
+    RunJar.unJar. (Todd Lipcon via tomwhite)
+
+    HADOOP-6422. Make RPC backend pluggable, protocol-by-protocol, to
+    ease evolution towards Avro.  (cutting)
+
+    HADOOP-5958. Use JDK 1.6 File APIs in DF.java wherever possible.
+    (Aaron Kimball via tomwhite)
+
+    HADOOP-6222. Core doesn't have TestCommonCLI facility. (cos)
+
+    HADOOP-6394. Add a helper class to simplify FileContext related tests and
+    improve code reusability. (Jitendra Nath Pandey via suresh)
+
+    HADOOP-4656. Add a user to groups mapping service. (boryas, acmurthy)
+
+    HADOOP-6435. Make RPC.waitForProxy with timeout public. (Steve Loughran
+    via tomwhite)
+  
+    HADOOP-6472. Add a tokenCache option to GenericOptionsParser for passing a
+    file with secret keys to a MapReduce job. (boryas)
+
+    HADOOP-3205. Read multiple chunks directly from FSInputChecker subclass
+    into user buffers. (Todd Lipcon via tomwhite)
+
+    HADOOP-6479. TestUTF8 assertions could fail with better text.
+    (Steve Loughran via tomwhite)
+
+    HADOOP-6155. Deprecate RecordIO anticipating Avro. (Tom White via cdouglas)
+
+    HADOOP-6492. Make some Avro serialization APIs public.
+    (Aaron Kimball via cutting)
+
+    HADOOP-6497. Add an adapter for Avro's SeekableInput interface, so
+    that Avro can read FileSystem data.
+    (Aaron Kimball via cutting)
+
+    HADOOP-6495. Identifier should be serialized after the password is
+    created in the Token constructor. (jnp via boryas)
+
+    HADOOP-6518. Makes the UGI honor the env var KRB5CCNAME. 
+    (Owen O'Malley via ddas)
+
+    HADOOP-6531. Enhance FileUtil with an API to delete all contents of a
+    directory. (Amareshwari Sriramadasu via yhemanth)
+
+    HADOOP-6547. Move DelegationToken into Common, so that it can be used by
+    MapReduce also. (devaraj via omalley)
+
+    HADOOP-6552. Puts renewTGT=true and useTicketCache=true for the keytab
+    kerberos options. (ddas)
+
+    HADOOP-6534. Trim whitespace from directory lists initializing
+    LocalDirAllocator. (Todd Lipcon via cdouglas)
+
+    HADOOP-6559. Makes the RPC client automatically re-login when the SASL 
+    connection setup fails. This is applicable only to keytab based logins.
+    (Devaraj Das)
+
+    HADOOP-6551. Delegation token renewing and cancelling should provide
+    meaningful exceptions when there are failures instead of returning 
+    false. (omalley)
+
+    HADOOP-6583. Captures authentication and authorization metrics. (ddas)
+
+    HADOOP-6543. Allows secure clients to talk to unsecure clusters. 
+    (Kan Zhang via ddas)
+
+    HADOOP-6579. Provide a mechanism for encoding/decoding Tokens from
+    a url-safe string and change the commons-code library to 1.4. (omalley)
+
+    HADOOP-6596. Add a version field to the AbstractDelegationTokenIdentifier's
+    serialized value. (omalley)
+
+    HADOOP-6573. Support for persistent delegation tokens.
+    (Jitendra Pandey via shv)
+
+    HADOOP-6594. Provide a fetchdt tool via bin/hdfs. (jhoman via acmurthy) 
+
+    HADOOP-6589. Provide better error messages when RPC authentication fails.
+    (Kan Zhang via omalley)
+
+    HADOOP-6599. Split existing RpcMetrics into RpcMetrics & RpcDetailedMetrics.
+    (Suresh Srinivas via Sanjay Radia)
+
+    HADOOP-6537. Declare more detailed exceptions in FileContext and
+    AbstractFileSystem. (Suresh Srinivas via Sanjay Radia)
+
+    HADOOP-6486. fix common classes to work with Avro 1.3 reflection.
+    (cutting via tomwhite)
+
+    HADOOP-6591. HarFileSystem can handle paths with whitespace characters.
+    (Rodrigo Schmidt via dhruba)
+
+    HADOOP-6407. Have a way to automatically update Eclipse .classpath file
+    when new libs are added to the classpath through Ivy. (tomwhite)
+
+    HADOOP-3659. Patch to allow hadoop native to compile on Mac OS X.
+    (Colin Evans and Allen Wittenauer via tomwhite)
+
+    HADOOP-6471. StringBuffer -> StringBuilder - conversion of references
+    as necessary. (Kay Kay via tomwhite)
+
+    HADOOP-6646. Move HarFileSystem out of Hadoop Common. (mahadev)
+
+    HADOOP-6566. Add methods supporting, enforcing narrower permissions on
+    local daemon directories. (Arun Murthy and Luke Lu via cdouglas)
+
+    HADOOP-6705. Fix to work with 1.5 version of jiracli
+    (Giridharan Kesavan)
+
+    HADOOP-6658. Exclude Private elements from generated Javadoc. (tomwhite)
+
+    HADOOP-6635. Install/deploy source jars to Maven repo. 
+    (Patrick Angeles via jghoman)
+
+    HADOOP-6717. Log levels in o.a.h.security.Groups too high 
+    (Todd Lipcon via jghoman)
+
+    HADOOP-6667. RPC.waitForProxy should retry through NoRouteToHostException.
+    (Todd Lipcon via tomwhite)
+
+    HADOOP-6677. InterfaceAudience.LimitedPrivate should take a string not an
+    enum. (tomwhite)
+
+    HADOOP-6678. Remove FileContext#isFile, isDirectory, and exists.
+    (Eli Collins via hairong)
+
+    HADOOP-6515. Make maximum number of http threads configurable.
+    (Scott Chen via zshao)
+
+    HADOOP-6563. Add more symlink tests to cover intermediate symlinks
+    in paths. (Eli Collins via suresh)
+
+    HADOOP-6585.  Add FileStatus#isDirectory and isFile.  (Eli Collins via
+    tomwhite)
+
+    HADOOP-6738.  Move cluster_setup.xml from MapReduce to Common.
+    (Tom White via tomwhite)
+
+    HADOOP-6794. Move configuration and script files post split. (tomwhite)
+
+    HADOOP-6403.  Deprecate EC2 bash scripts.  (tomwhite)
+
+    HADOOP-6769. Add an API in FileSystem to get FileSystem instances based
+    on users. (ddas via boryas)
+
+    HADOOP-6813. Add a new newInstance method in FileSystem that takes
+    a "user" as an argument. (ddas via boryas)
+
+    HADOOP-6668.  Apply audience and stability annotations to classes in
+    common.  (tomwhite)
+
+    HADOOP-6821.  Document changes to memory monitoring.  (Hemanth Yamijala
+    via tomwhite)
+
+  OPTIMIZATIONS
+
+    HADOOP-5595. NameNode does not need to run a replicator to choose a
+    random DataNode. (hairong)
+
+    HADOOP-5603. Improve NameNode's block placement performance. (hairong)
+
+    HADOOP-5638. More improvement on block placement performance. (hairong)
+
+    HADOOP-6180. NameNode slowed down when many files with same filename
+    were moved to Trash. (Boris Shkolnik via hairong)
+
+    HADOOP-6166. Further improve the performance of the pure-Java CRC32
+    implementation. (Tsz Wo (Nicholas), SZE via cdouglas)
+
+    HADOOP-6271. Add recursive and non-recursive create and mkdir to
+    FileContext. (Sanjay Radia via suresh)
+
+    HADOOP-6261. Add URI-based tests for FileContext.
+    (Ravi Phulari via suresh).
+
+    HADOOP-6307. Add a new SequenceFile.Reader constructor in order to support
+    reading on un-closed file.  (szetszwo)
+
+    HADOOP-6467. Improve the performance on HarFileSystem.listStatus(..).
+    (mahadev via szetszwo)
+
+    HADOOP-6569. FsShell#cat should avoid calling unnecessary getFileStatus
+    before opening a file to read. (hairong)
+
+    HADOOP-6689. Add directory renaming test to existing FileContext tests.
+    (Eli Collins via suresh)
+
+    HADOOP-6713. The RPC server Listener thread is a scalability bottleneck.
+    (Dmytro Molkov via hairong)
+
+  BUG FIXES
+
+    HADOOP-6748. Removes hadoop.cluster.administrators; the cluster
+    administrators ACL is now passed as a parameter in the constructor.
+    (amareshwari)
+
+    HADOOP-6828. Herriot uses the old way of accessing log directories.
+    (Sreekanth Ramakrishnan via cos)
+
+    HADOOP-6788. [Herriot] Exception exclusion functionality is not working
+    correctly. (Vinay Thota via cos)
+
+    HADOOP-6773. Ivy folder contains redundant files (cos)
+
+    HADOOP-5379. CBZip2InputStream to throw IOException on data crc error.
+    (Rodrigo Schmidt via zshao)
+
+    HADOOP-5326. Fixes CBZip2OutputStream data corruption problem.
+    (Rodrigo Schmidt via zshao)
+
+    HADOOP-4963. Fixes a logging to do with getting the location of
+    map output file. (Amareshwari Sriramadasu via ddas)
+
+    HADOOP-2337. Trash should close FileSystem on exit and should not start
+    the emptying thread if disabled. (shv)
+
+    HADOOP-5072. Fix failure in TestCodec because testSequenceFileGzipCodec 
+    won't pass without native gzip codec. (Zheng Shao via dhruba)
+
+    HADOOP-5050. TestDFSShell.testFilePermissions should not assume umask
+    setting.  (Jakob Homan via szetszwo)
+
+    HADOOP-4975. Set classloader for nested mapred.join configs. (Jingkei Ly
+    via cdouglas)
+
+    HADOOP-5078. Remove invalid AMI kernel in EC2 scripts. (tomwhite)
+
+    HADOOP-5045. FileSystem.isDirectory() should not be deprecated.  (Suresh
+    Srinivas via szetszwo)
+
+    HADOOP-4960. Use datasource time, rather than system time, during metrics
+    demux. (Eric Yang via cdouglas)
+
+    HADOOP-5032. Export conf dir set in config script. (Eric Yang via cdouglas)
+
+    HADOOP-5176. Fix a typo in TestDFSIO.  (Ravi Phulari via szetszwo)
+
+    HADOOP-4859. Distinguish daily rolling output dir by adding a timestamp.
+    (Jerome Boulon via cdouglas)
+
+    HADOOP-4959. Correct system metric collection from top on Redhat 5.1. (Eric
+    Yang via cdouglas)
+
+    HADOOP-5039. Fix log rolling regex to process only the relevant
+    subdirectories. (Jerome Boulon via cdouglas)
+
+    HADOOP-5095. Update Chukwa watchdog to accept config parameter. (Jerome
+    Boulon via cdouglas)
+
+    HADOOP-5147. Correct reference to agent list in Chukwa bin scripts. (Ari
+    Rabkin via cdouglas)
+
+    HADOOP-5148. Fix logic disabling watchdog timer in Chukwa daemon scripts.
+    (Ari Rabkin via cdouglas)
+
+    HADOOP-5100. Append, rather than truncate, when creating log4j metrics in
+    Chukwa. (Jerome Boulon via cdouglas)
+
+    HADOOP-5204. Fix broken trunk compilation on Hudson by letting 
+    task-controller be an independent target in build.xml.
+    (Sreekanth Ramakrishnan via yhemanth)
+
+    HADOOP-5212. Fix the path translation problem introduced by HADOOP-4868 
+    running on cygwin. (Sharad Agarwal via omalley)
+
+    HADOOP-5226. Add license headers to html and jsp files.  (szetszwo)
+
+    HADOOP-5172. Disable misbehaving Chukwa unit test until it can be fixed.
+    (Jerome Boulon via nigel)
+
+    HADOOP-4933. Fixes a ConcurrentModificationException problem that shows up
+    when the history viewer is accessed concurrently. 
+    (Amar Kamat via ddas)
+
+    HADOOP-5253. Remove duplicate call to cn-docs target. 
+    (Giri Kesavan via nigel)
+
+    HADOOP-5251. Fix classpath for contrib unit tests to include clover jar.
+    (nigel)
+
+    HADOOP-5206. Synchronize "unprotected*" methods of FSDirectory on the root.
+    (Jakob Homan via shv)
+
+    HADOOP-5292. Fix NPE in KFS::getBlockLocations. (Sriram Rao via lohit)
+
+    HADOOP-5219. Adds a new property io.seqfile.local.dir for use by
+    SequenceFile, which earlier used mapred.local.dir. (Sharad Agarwal
+    via ddas)
+
+    HADOOP-5300. Fix ant javadoc-dev target and the typo in the class name
+    NameNodeActivtyMBean.  (szetszwo)
+
+    HADOOP-5218.  libhdfs unit test failed because it was unable to 
+    start namenode/datanode. Fixed. (dhruba)
+
+    HADOOP-5273. Add license header to TestJobInProgress.java.  (Jakob Homan
+    via szetszwo)
+    
+    HADOOP-5229. Remove duplicate version variables in build files
+    (Stefan Groschupf via johan)
+
+    HADOOP-5383. Avoid building an unused string in NameNode's 
+    verifyReplication(). (Raghu Angadi)
+
+    HADOOP-5347. Create a job output directory for the bbp examples. (szetszwo)
+
+    HADOOP-5341. Make hadoop-daemon scripts backwards compatible with the
+    changes in HADOOP-4868. (Sharad Agarwal via yhemanth)
+
+    HADOOP-5456. Fix javadoc links to ClientProtocol#restoreFailedStorage(..).
+    (Boris Shkolnik via szetszwo)
+
+    HADOOP-5458. Remove leftover Chukwa entries from build, etc. (cdouglas)
+
+    HADOOP-5386. Modify hdfsproxy unit test to start on a random port,
+    implement clover instrumentation. (Zhiyong Zhang via cdouglas)
+
+    HADOOP-5511. Add Apache License to EditLogBackupOutputStream. (shv)
+
+    HADOOP-5507. Fix JMXGet javadoc warnings.  (Boris Shkolnik via szetszwo)
+
+    HADOOP-5191. Accessing HDFS with any IP or hostname should work as long
+    as it points to the interface the NameNode is listening on. (Raghu Angadi)
+
+    HADOOP-5561. Add javadoc.maxmemory parameter to build, preventing OOM
+    exceptions from javadoc-dev. (Jakob Homan via cdouglas)
+
+    HADOOP-5149. Modify HistoryViewer to ignore unfamiliar files in the log
+    directory. (Hong Tang via cdouglas)
+
+    HADOOP-5477. Fix rare failure in TestCLI for hosts returning variations of
+    'localhost'. (Jakob Homan via cdouglas)
+
+    HADOOP-5194. Disables setsid for tasks run on cygwin. 
+    (Ravi Gummadi via ddas)
+
+    HADOOP-5322. Fix misleading/outdated comments in JobInProgress.
+    (Amareshwari Sriramadasu via cdouglas)
+
+    HADOOP-5198. Fixes a problem to do with the task PID file being absent and 
+    the JvmManager trying to look for it. (Amareshwari Sriramadasu via ddas)
+
+    HADOOP-5464. DFSClient did not treat write timeout of 0 properly.
+    (Raghu Angadi)
+
+    HADOOP-4045. Fix processing of IO errors in EditsLog.
+    (Boris Shkolnik via shv)
+
+    HADOOP-5462. Fixed a double free bug in the task-controller
+    executable. (Sreekanth Ramakrishnan via yhemanth)
+
+    HADOOP-5652. Fix a bug where in-memory segments are incorrectly retained in
+    memory. (cdouglas)
+
+    HADOOP-5533. Recovery duration shown on the jobtracker webpage is 
+    inaccurate. (Amar Kamat via sharad)
+
+    HADOOP-5647. Fix TestJobHistory to not depend on /tmp. (Ravi Gummadi 
+    via sharad)
+
+    HADOOP-5661. Fixes some findbugs warnings in o.a.h.mapred* packages and
+    suppresses a bunch of them. (Jothi Padmanabhan via ddas)
+
+    HADOOP-5704. Fix compilation problems in TestFairScheduler and
+    TestCapacityScheduler.  (Chris Douglas via szetszwo)
+
+    HADOOP-5650. Fix safemode messages in the Namenode log.  (Suresh Srinivas
+    via szetszwo)
+
+    HADOOP-5488. Removes the pidfile management for the Task JVM from the
+    framework and instead passes the PID back and forth between the
+    TaskTracker and the Task processes. (Ravi Gummadi via ddas)
+
+    HADOOP-5658. Fix Eclipse templates. (Philip Zeyliger via shv)
+
+    HADOOP-5709. Remove redundant synchronization added in HADOOP-5661. (Jothi
+    Padmanabhan via cdouglas)
+
+    HADOOP-5715. Add conf/mapred-queue-acls.xml to the ignore lists.
+    (szetszwo)
+
+    HADOOP-5592. Fix typo in Streaming doc in reference to GzipCodec.
+    (Corinne Chandel via tomwhite)
+
+    HADOOP-5656. Counter for S3N Read Bytes does not work. (Ian Nowland
+    via tomwhite)
+
+    HADOOP-5406. Fix JNI binding for ZlibCompressor::setDictionary. (Lars
+    Francke via cdouglas)
+
+    HADOOP-3426. Fix/provide handling when DNS lookup fails on the loopback
+    address. Also cache the result of the lookup. (Steve Loughran via cdouglas)
+
+    HADOOP-5476. Close the underlying InputStream in SequenceFile::Reader when
+    the constructor throws an exception. (Michael Tamm via cdouglas)
+
+    HADOOP-5675. Do not launch a job if DistCp has no work to do. (Tsz Wo
+    (Nicholas), SZE via cdouglas)
+
+    HADOOP-5737. Fixes a problem in the way the JobTracker used to talk to
+    other daemons like the NameNode to get the job's files. Also adds APIs
+    in the JobTracker to get the FileSystem objects as per the JobTracker's
+    configuration. (Amar Kamat via ddas) 
+
+    HADOOP-5648. Not able to generate gridmix.jar on the already compiled 
+    version of hadoop. (gkesavan)	
+
+    HADOOP-5808. Fix import never used javac warnings in hdfs. (szetszwo)
+
+    HADOOP-5203. TT's version build is too restrictive. (Rick Cox via sharad)
+
+    HADOOP-5818. Revert the renaming from FSNamesystem.checkSuperuserPrivilege
+    to checkAccess by HADOOP-5643.  (Amar Kamat via szetszwo)
+
+    HADOOP-5820. Fix findbugs warnings for http related codes in hdfs.
+    (szetszwo)
+
+    HADOOP-5822. Fix javac warnings in several dfs tests related to unnecessary
+    casts.  (Jakob Homan via szetszwo)
+
+    HADOOP-5842. Fix a few javac warnings under packages fs and util.
+    (Hairong Kuang via szetszwo)
+
+    HADOOP-5845. Build successful despite test failure on test-core target.
+    (sharad)
+
+    HADOOP-5314. Prevent unnecessary saving of the file system image during 
+    name-node startup. (Jakob Homan via shv)
+
+    HADOOP-5855. Fix javac warnings for DisallowedDatanodeException and
+    UnsupportedActionException.  (szetszwo)
+
+    HADOOP-5582. Fixes a problem in Hadoop Vaidya to do with reading
+    counters from job history files. (Suhas Gogate via ddas)
+
+    HADOOP-5829. Fix javac warnings found in ReplicationTargetChooser,
+    FSImage, Checkpointer, SecondaryNameNode and a few other hdfs classes.
+    (Suresh Srinivas via szetszwo)
+
+    HADOOP-5835. Fix findbugs warnings found in Block, DataNode, NameNode and
+    a few other hdfs classes.  (Suresh Srinivas via szetszwo)
+
+    HADOOP-5853. Undeprecate HttpServer.addInternalServlet method.  (Suresh
+    Srinivas via szetszwo)
+
+    HADOOP-5801. Fixes the problem: If the hosts file is changed across restart
+    then it should be refreshed upon recovery so that the excluded hosts are 
+    lost and the maps are re-executed. (Amar Kamat via ddas)
+
+    HADOOP-5841. Resolve findbugs warnings in DistributedFileSystem,
+    DatanodeInfo, BlocksMap, DataNodeDescriptor.  (Jakob Homan via szetszwo)
+
+    HADOOP-5878. Fix import and Serializable javac warnings found in hdfs jsp.
+    (szetszwo)
+
+    HADOOP-5782. Revert a few formatting changes introduced in HADOOP-5015.
+    (Suresh Srinivas via rangadi)
+
+    HADOOP-5687. NameNode throws NPE if fs.default.name is the default value.
+    (Philip Zeyliger via shv)
+
+    HADOOP-5867. Fix javac warnings found in NNBench and NNBenchWithoutMR.
+    (Konstantin Boudnik via szetszwo)
+    
+    HADOOP-5728. Fixed FSEditLog.printStatistics IndexOutOfBoundsException.
+    (Wang Xu via johan)
+
+    HADOOP-5847. Fixed failing Streaming unit tests (gkesavan) 
+
+    HADOOP-5252. Streaming overrides -inputformat option (Klaas Bosteels 
+    via sharad)
+
+    HADOOP-5710. Counter MAP_INPUT_BYTES missing from new mapreduce api. 
+    (Amareshwari Sriramadasu via sharad)
+
+    HADOOP-5809. Fix job submission, broken by errant directory creation.
+    (Sreekanth Ramakrishnan and Jothi Padmanabhan via cdouglas)
+
+    HADOOP-5635. Change distributed cache to work with other distributed file
+    systems. (Andrew Hitchcock via tomwhite)
+
+    HADOOP-5856. Fix "unsafe multithreaded use of DateFormat" findbugs warning
+    in DataBlockScanner.  (Kan Zhang via szetszwo)
+
+    HADOOP-4864. Fixes a problem to do with -libjars with multiple jars when
+    client and cluster reside on different OSs. (Amareshwari Sriramadasu via 
+    ddas)
+
+    HADOOP-5623. Fixes a problem to do with status messages getting overwritten
+    in streaming jobs. (Rick Cox and Jothi Padmanabhan via ddas)
+
+    HADOOP-5895. Fixes computation of count of merged bytes for logging.
+    (Ravi Gummadi via ddas)
+
+    HADOOP-5805. problem using top level s3 buckets as input/output 
+    directories. (Ian Nowland via tomwhite)
+   
+    HADOOP-5940. trunk eclipse-plugin build fails while trying to copy 
+    commons-cli jar from the lib dir (Giridharan Kesavan via gkesavan)
+
+    HADOOP-5864. Fix DMI and OBL findbugs in packages hdfs and metrics.
+    (hairong)
+
+    HADOOP-5935. Fix Hudson's broken release audit warnings link.
+    (Giridharan Kesavan via gkesavan)
+
+    HADOOP-5947. Delete empty TestCombineFileInputFormat.java.
+
+    HADOOP-5899. Move a log message in FSEditLog to the right place for
+    avoiding unnecessary log.  (Suresh Srinivas via szetszwo)
+
+    HADOOP-5944. Add Apache license header to BlockManager.java.  (Suresh
+    Srinivas via szetszwo)
+
+    HADOOP-5891. SecondaryNamenode is able to converse with the NameNode 
+    even when the default value of dfs.http.address is not overridden.
+    (Todd Lipcon via dhruba)
+
+    HADOOP-5953. The isDirectory(..) and isFile(..) methods in KosmosFileSystem
+    should not be deprecated.  (szetszwo)
+
+    HADOOP-5954. Fix javac warnings in TestFileCreation, TestSmallBlock,
+    TestFileStatus, TestDFSShellGenericOptions, TestSeekBug and
+    TestDFSStartupVersions.  (szetszwo)
+
+    HADOOP-5956. Fix ivy dependency in hdfsproxy and capacity-scheduler.
+    (Giridharan Kesavan via szetszwo)
+
+    HADOOP-5836. Bug in S3N handling of directory markers using an object with
+    a trailing "/" causes jobs to fail. (Ian Nowland via tomwhite)
+
+    HADOOP-5861. s3n files are not getting split by default. (tomwhite)
+
+    HADOOP-5762. Fix a problem that DistCp does not copy empty directory.
+    (Rodrigo Schmidt via szetszwo)
+
+    HADOOP-5859. Fix "wait() or sleep() with locks held" findbugs warnings in