Reverting changes from a bad commit on the previous release while starting the next release

git-svn-id: https://svn.apache.org/repos/asf/incubator/kafka/branches/0.7.1@1387665 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/trunk/.gitignore b/trunk/.gitignore
deleted file mode 100644
index 1fc794d..0000000
--- a/trunk/.gitignore
+++ /dev/null
@@ -1,14 +0,0 @@
-dist
-*classes
-target/
-lib_managed/
-src_managed/
-project/boot/
-project/plugins/project/
-project/sbt_project_definition.iml
-.idea
-.svn
-.classpath
-*~
-*#
-.#*
diff --git a/trunk/.rat-excludes b/trunk/.rat-excludes
deleted file mode 100644
index 01d6298..0000000
--- a/trunk/.rat-excludes
+++ /dev/null
@@ -1,26 +0,0 @@
-.rat-excludes
-rat.out
-sbt
-sbt.boot.lock
-README*
-.gitignore
-.git
-.svn
-build.properties
-target
-src_managed
-update.log
-clients/target
-core/target
-contrib/target
-project/plugins/target
-project/build/target
-*.iml
-*.csproj
-TODO
-Makefile*
-*.html
-*.xml
-*expected.out
-*.kafka
-
diff --git a/trunk/DISCLAIMER b/trunk/DISCLAIMER
deleted file mode 100644
index 950e15d..0000000
--- a/trunk/DISCLAIMER
+++ /dev/null
@@ -1,15 +0,0 @@
-Apache Kafka is an effort undergoing incubation at the Apache Software
-Foundation (ASF), sponsored by the Apache Incubator PMC.
-
-Incubation is required of all newly accepted projects until a further review
-indicates that the infrastructure, communications, and decision making process
-have stabilized in a manner consistent with other successful ASF projects.
-
-While incubation status is not necessarily a reflection of the completeness
-or stability of the code, it does indicate that the project has yet to be
-fully endorsed by the ASF.
-
-For more information about the incubation status of the Kafka project you
-can go to the following page:
-
-http://incubator.apache.org/kafka/
\ No newline at end of file
diff --git a/trunk/LICENSE b/trunk/LICENSE
deleted file mode 100644
index fa84041..0000000
--- a/trunk/LICENSE
+++ /dev/null
@@ -1,262 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
------------------------------------------------------------------------
-
-SBT LICENSE
-
-Copyright (c) 2008, 2009, 2010 Mark Harrah, Jason Zaugg
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions
-are met:
-1. Redistributions of source code must retain the above copyright
-   notice, this list of conditions and the following disclaimer.
-2. Redistributions in binary form must reproduce the above copyright
-   notice, this list of conditions and the following disclaimer in the
-   documentation and/or other materials provided with the distribution.
-3. The name of the author may not be used to endorse or promote products
-   derived from this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
-IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
-OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
-IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
-INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
-NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
-THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
------------------------------------------------------------------------
-
-For Nunit, used in clients/csharp: zlib/libpng license
-
-Copyright (c) 2002-2004 James W. Newkirk, Michael C. Two, Alexei A. Vorontsov,
-Charlie Poole
-Copyright (c) 2000-2004 Philip A. Craig
-
-This software is provided 'as-is', without any express or implied warranty. In
-no event will the authors be held liable for any damages arising from the use
-of this software.
-
-Permission is granted to anyone to use this software for any purpose, including
-commercial applications, and to alter it and redistribute it freely, subject to
-the following restrictions:
-
-1. The origin of this software must not be misrepresented; you must not claim
-that you wrote the original software. If you use this software in a product, an
-acknowledgment (see the following) in the product documentation is required.
-
-Portions Copyright © 2002 James W. Newkirk, Michael C. Two, Alexei A. Vorontsov
-or Copyright © 2000-2002 Philip A. Craig
-
-2. Altered source versions must be plainly marked as such, and must not be
-misrepresented as being the original software.
-
-3. This notice may not be removed or altered from any source distribution.
-
-
-
diff --git a/trunk/NOTICE b/trunk/NOTICE
deleted file mode 100644
index 9cf7aa1..0000000
--- a/trunk/NOTICE
+++ /dev/null
@@ -1,10 +0,0 @@
-Apache Kafka
-Copyright 2012 The Apache Software Foundation.
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
-
-This product includes NUnit, developed by Charlie Poole, James W.
-Newkirk, Michael C. Two, Alexei A. Vorontsov and Philip A. Craig.
-(www.nunit.org)
-
diff --git a/trunk/README.md b/trunk/README.md
deleted file mode 100644
index a028bce..0000000
--- a/trunk/README.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# Kafka is a distributed publish/subscribe messaging system #
-
-It is designed to support the following
-
-* Persistent messaging with O(1) disk structures that provide constant time performance even with many TB of stored messages.
-* High-throughput: even with very modest hardware Kafka can support hundreds of thousands of messages per second.
-* Explicit support for partitioning messages over Kafka servers and distributing consumption over a cluster of consumer machines while maintaining per-partition ordering semantics.
-* Support for parallel data load into Hadoop.
-
-Kafka is aimed at providing a publish-subscribe solution that can handle all activity stream data and processing on a consumer-scale web site. This kind of activity (page views, searches, and other user actions) is a key ingredient in many of the social features on the modern web. This data is typically handled by "logging" and ad hoc log aggregation solutions because of the throughput requirements. Such ad hoc solutions are viable for feeding logging data to an offline analysis system like Hadoop, but are very limiting for building real-time processing. Kafka aims to unify offline and online processing by providing a mechanism for parallel load into Hadoop as well as the ability to partition real-time consumption over a cluster of machines.
-
-See our [web site](http://incubator.apache.org/kafka/) for more details on the project.
-
-## Contribution ##
-
-Kafka is a new project, and we are interested in building the community; we would welcome any thoughts or [patches](https://issues.apache.org/jira/browse/KAFKA). You can reach us [on the Apache mailing lists](http://incubator.apache.org/kafka/contact.html).
-
-The Kafka code is available from svn or a read only git mirror:
- * svn co http://svn.apache.org/repos/asf/incubator/kafka/trunk kafka
- * git clone git://git.apache.org/kafka.git
-
-To build: 
-
-1. ./sbt
-2. update - This downloads all the dependencies for all sub projects
-3. package - This compiles all sub projects and creates all the jars
-
-Here are some useful sbt commands, to be executed at the sbt command prompt (./sbt) -
-
-actions : Lists all the sbt commands and their descriptions
-
-clean : Deletes all generated files (the target directory).
-
-clean-cache : Deletes the cache of artifacts downloaded for automatically managed dependencies.
-
-clean-lib : Deletes the managed library directory.
-
-compile : Compiles all the sub projects, but does not create the jars
-
-test : Run all unit tests in all sub projects
-
-release-zip : Create all the jars, run unit tests and create a deployable release zip
-
-package-all: Creates jars for src, test, docs etc
-
-projects : List all the sub projects 
-
-project sub_project_name : Switch to a particular sub-project. For example, to switch to the core kafka code, use "project core-kafka"
-
-The following commands can be run only on a particular sub project -
-
-test-only package.test.TestName : Runs only the specified test in the current sub project
-
-run : Provides options to run any of the classes that have a main method. For example, you can switch to project java-examples, and run the examples there by executing "project java-examples" followed by "run" 
-
-
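For convenience, a minimal batch-mode sketch of the build described in the deleted README above; the assumption (not stated in the README) is that this sbt launcher also accepts actions as command-line arguments instead of only at the interactive `./sbt` prompt:

```bash
# Hedged sketch: the same "update" and "package" actions as the README's
# interactive instructions, run non-interactively. Whether this launcher
# accepts actions as arguments is an assumption; the interactive prompt
# described above always works.
cd kafka        # checkout from svn or the read-only git mirror, as described above
./sbt update    # download dependencies for all sub projects
./sbt package   # compile all sub projects and create the jars
```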
diff --git a/trunk/bin/kafka-console-consumer-log4j.properties b/trunk/bin/kafka-console-consumer-log4j.properties
deleted file mode 100644
index 6b76444..0000000
--- a/trunk/bin/kafka-console-consumer-log4j.properties
+++ /dev/null
@@ -1,21 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-log4j.rootLogger=INFO, stderr
-
-log4j.appender.stderr=org.apache.log4j.ConsoleAppender
-log4j.appender.stderr.target=System.err
-log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
-log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
-
diff --git a/trunk/bin/kafka-console-consumer.sh b/trunk/bin/kafka-console-consumer.sh
deleted file mode 100755
index 8a76c6c..0000000
--- a/trunk/bin/kafka-console-consumer.sh
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-base_dir=$(dirname $0)
-export KAFKA_OPTS="-Xmx512M -server -Dcom.sun.management.jmxremote -Dlog4j.configuration=file:$base_dir/kafka-console-consumer-log4j.properties"
-$base_dir/kafka-run-class.sh kafka.consumer.ConsoleConsumer $@
diff --git a/trunk/bin/kafka-console-producer.sh b/trunk/bin/kafka-console-producer.sh
deleted file mode 100755
index 99d2dc6..0000000
--- a/trunk/bin/kafka-console-producer.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-base_dir=$(dirname $0)
-$base_dir/kafka-run-class.sh kafka.producer.ConsoleProducer $@
diff --git a/trunk/bin/kafka-consumer-perf-test.sh b/trunk/bin/kafka-consumer-perf-test.sh
deleted file mode 100755
index f94d1f5..0000000
--- a/trunk/bin/kafka-consumer-perf-test.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-$(dirname $0)/kafka-run-class.sh kafka.tools.ConsumerPerformance $@
diff --git a/trunk/bin/kafka-consumer-shell.sh b/trunk/bin/kafka-consumer-shell.sh
deleted file mode 100755
index 8d280db..0000000
--- a/trunk/bin/kafka-consumer-shell.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-$(dirname $0)/kafka-run-class.sh kafka.tools.ConsumerShell $@
diff --git a/trunk/bin/kafka-producer-perf-test.sh b/trunk/bin/kafka-producer-perf-test.sh
deleted file mode 100755
index 25c4dce..0000000
--- a/trunk/bin/kafka-producer-perf-test.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-$(dirname $0)/kafka-run-class.sh kafka.perf.ProducerPerformance $@
diff --git a/trunk/bin/kafka-producer-shell.sh b/trunk/bin/kafka-producer-shell.sh
deleted file mode 100755
index 3f75a34..0000000
--- a/trunk/bin/kafka-producer-shell.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-$(dirname $0)/kafka-run-class.sh kafka.tools.ProducerShell $@
diff --git a/trunk/bin/kafka-replay-log-producer.sh b/trunk/bin/kafka-replay-log-producer.sh
deleted file mode 100755
index f36e0e2..0000000
--- a/trunk/bin/kafka-replay-log-producer.sh
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-base_dir=$(dirname $0)
-export KAFKA_OPTS="-Xmx512M -server -Dcom.sun.management.jmxremote -Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
-$base_dir/kafka-run-class.sh kafka.tools.ReplayLogProducer $@
diff --git a/trunk/bin/kafka-run-class.sh b/trunk/bin/kafka-run-class.sh
deleted file mode 100755
index e93f670..0000000
--- a/trunk/bin/kafka-run-class.sh
+++ /dev/null
@@ -1,66 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-if [ $# -lt 1 ];
-then
-  echo "USAGE: $0 classname [opts]"
-  exit 1
-fi
-
-base_dir=$(dirname $0)/..
-
-for file in $base_dir/project/boot/scala-2.8.0/lib/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/core/target/scala_2.8.0/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/core/lib/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/perf/target/scala_2.8.0/kafka*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/core/lib_managed/scala_2.8.0/compile/*.jar;
-do
-  if [ ${file##*/} != "sbt-launch.jar" ]; then
-    CLASSPATH=$CLASSPATH:$file
-  fi
-done
-if [ -z "$KAFKA_JMX_OPTS" ]; then
-  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false  -Dcom.sun.management.jmxremote.ssl=false "
-fi
-if [ -z "$KAFKA_OPTS" ]; then
-  KAFKA_OPTS="-Xmx512M -server  -Dlog4j.configuration=file:$base_dir/config/log4j.properties"
-fi
-if [  $JMX_PORT ]; then
-  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
-fi
-if [ -z "$JAVA_HOME" ]; then
-  JAVA="java"
-else
-  JAVA="$JAVA_HOME/bin/java"
-fi
-
-$JAVA $KAFKA_OPTS $KAFKA_JMX_OPTS -cp $CLASSPATH $@
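Every other script in bin/ delegates to the kafka-run-class.sh wrapper deleted above. A usage sketch, with the environment overrides it honours shown explicitly (the class name and properties file mirror what kafka-server-start.sh passes; the JMX port and heap size are example values):

```bash
# Run an arbitrary class against the classpath assembled by the wrapper.
# JMX_PORT, KAFKA_OPTS and KAFKA_JMX_OPTS are optional; defaults are set
# inside kafka-run-class.sh when they are unset.
JMX_PORT=9999 \
KAFKA_OPTS="-Xmx512M -server -Dlog4j.configuration=file:config/log4j.properties" \
./bin/kafka-run-class.sh kafka.Kafka config/server.properties
```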
diff --git a/trunk/bin/kafka-server-start.sh b/trunk/bin/kafka-server-start.sh
deleted file mode 100755
index dd200bc..0000000
--- a/trunk/bin/kafka-server-start.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-if [ $# -lt 1 ];
-then
-	echo "USAGE: $0 server.properties [consumer.properties producer.properties]"
-	exit 1
-fi
-
-export JMX_PORT=${JMX_PORT:-9999}
-
-$(dirname $0)/kafka-run-class.sh kafka.Kafka $@
diff --git a/trunk/bin/kafka-server-stop.sh b/trunk/bin/kafka-server-stop.sh
deleted file mode 100755
index b149694..0000000
--- a/trunk/bin/kafka-server-stop.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/sh
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-ps ax | grep -i 'kafka.Kafka' | grep -v grep | awk '{print $1}' | xargs kill -SIGINT
diff --git a/trunk/bin/kafka-simple-consumer-perf-test.sh b/trunk/bin/kafka-simple-consumer-perf-test.sh
deleted file mode 100755
index b211d02..0000000
--- a/trunk/bin/kafka-simple-consumer-perf-test.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-$(dirname $0)/kafka-run-class.sh kafka.tools.SimpleConsumerPerformance $@
diff --git a/trunk/bin/kafka-simple-consumer-shell.sh b/trunk/bin/kafka-simple-consumer-shell.sh
deleted file mode 100755
index 8d69357..0000000
--- a/trunk/bin/kafka-simple-consumer-shell.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-$(dirname $0)/kafka-run-class.sh kafka.tools.SimpleConsumerShell $@
diff --git a/trunk/bin/run-rat.sh b/trunk/bin/run-rat.sh
deleted file mode 100644
index 28c0ccd..0000000
--- a/trunk/bin/run-rat.sh
+++ /dev/null
@@ -1,35 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-base_dir=$(dirname $0)/..
-rat_excludes_file=$base_dir/.rat-excludes
-
-if [ -z "$JAVA_HOME" ]; then
-  JAVA="java"
-else
-  JAVA="$JAVA_HOME/bin/java"
-fi
-
-rat_command="$JAVA -jar $base_dir/lib_managed/scala_2.8.0/compile/apache-rat-0.8.jar --dir $base_dir "
-
-for f in $(cat $rat_excludes_file);
-do
-  rat_command="${rat_command} -e $f"  
-done
-
-echo "Running " $rat_command
-$rat_command > $base_dir/rat.out
-
diff --git a/trunk/bin/zookeeper-server-start.sh b/trunk/bin/zookeeper-server-start.sh
deleted file mode 100755
index 184a10b..0000000
--- a/trunk/bin/zookeeper-server-start.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-if [ $# -ne 1 ];
-then
-	echo "USAGE: $0 zookeeper.properties"
-	exit 1
-fi
-
-$(dirname $0)/kafka-run-class.sh org.apache.zookeeper.server.quorum.QuorumPeerMain $@
diff --git a/trunk/bin/zookeeper-server-stop.sh b/trunk/bin/zookeeper-server-stop.sh
deleted file mode 100755
index 975d9ae..0000000
--- a/trunk/bin/zookeeper-server-stop.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/sh
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-ps ax | grep -i 'zookeeper' | grep -v grep | awk '{print $1}' | xargs kill -SIGINT
diff --git a/trunk/bin/zookeeper-shell.sh b/trunk/bin/zookeeper-shell.sh
deleted file mode 100755
index e0de33f..0000000
--- a/trunk/bin/zookeeper-shell.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/sh
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-if [ $# -ne 1 ];
-then
-	echo "USAGE: $0 zookeeper_host:port[/path]"
-	exit 1
-fi
-
-$(dirname $0)/kafka-run-class.sh org.apache.zookeeper.ZooKeeperMain -server $1
\ No newline at end of file
diff --git a/trunk/clients/cpp/LICENSE b/trunk/clients/cpp/LICENSE
deleted file mode 100644
index 614c632..0000000
--- a/trunk/clients/cpp/LICENSE
+++ /dev/null
@@ -1,203 +0,0 @@
-
-                              Apache License
-                        Version 2.0, January 2004
-                     http://www.apache.org/licenses/
-
-TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-1. Definitions.
-
-   "License" shall mean the terms and conditions for use, reproduction,
-   and distribution as defined by Sections 1 through 9 of this document.
-
-   "Licensor" shall mean the copyright owner or entity authorized by
-   the copyright owner that is granting the License.
-
-   "Legal Entity" shall mean the union of the acting entity and all
-   other entities that control, are controlled by, or are under common
-   control with that entity. For the purposes of this definition,
-   "control" means (i) the power, direct or indirect, to cause the
-   direction or management of such entity, whether by contract or
-   otherwise, or (ii) ownership of fifty percent (50%) or more of the
-   outstanding shares, or (iii) beneficial ownership of such entity.
-
-   "You" (or "Your") shall mean an individual or Legal Entity
-   exercising permissions granted by this License.
-
-   "Source" form shall mean the preferred form for making modifications,
-   including but not limited to software source code, documentation
-   source, and configuration files.
-
-   "Object" form shall mean any form resulting from mechanical
-   transformation or translation of a Source form, including but
-   not limited to compiled object code, generated documentation,
-   and conversions to other media types.
-
-   "Work" shall mean the work of authorship, whether in Source or
-   Object form, made available under the License, as indicated by a
-   copyright notice that is included in or attached to the work
-   (an example is provided in the Appendix below).
-
-   "Derivative Works" shall mean any work, whether in Source or Object
-   form, that is based on (or derived from) the Work and for which the
-   editorial revisions, annotations, elaborations, or other modifications
-   represent, as a whole, an original work of authorship. For the purposes
-   of this License, Derivative Works shall not include works that remain
-   separable from, or merely link (or bind by name) to the interfaces of,
-   the Work and Derivative Works thereof.
-
-   "Contribution" shall mean any work of authorship, including
-   the original version of the Work and any modifications or additions
-   to that Work or Derivative Works thereof, that is intentionally
-   submitted to Licensor for inclusion in the Work by the copyright owner
-   or by an individual or Legal Entity authorized to submit on behalf of
-   the copyright owner. For the purposes of this definition, "submitted"
-   means any form of electronic, verbal, or written communication sent
-   to the Licensor or its representatives, including but not limited to
-   communication on electronic mailing lists, source code control systems,
-   and issue tracking systems that are managed by, or on behalf of, the
-   Licensor for the purpose of discussing and improving the Work, but
-   excluding communication that is conspicuously marked or otherwise
-   designated in writing by the copyright owner as "Not a Contribution."
-
-   "Contributor" shall mean Licensor and any individual or Legal Entity
-   on behalf of whom a Contribution has been received by Licensor and
-   subsequently incorporated within the Work.
-
-2. Grant of Copyright License. Subject to the terms and conditions of
-   this License, each Contributor hereby grants to You a perpetual,
-   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-   copyright license to reproduce, prepare Derivative Works of,
-   publicly display, publicly perform, sublicense, and distribute the
-   Work and such Derivative Works in Source or Object form.
-
-3. Grant of Patent License. Subject to the terms and conditions of
-   this License, each Contributor hereby grants to You a perpetual,
-   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-   (except as stated in this section) patent license to make, have made,
-   use, offer to sell, sell, import, and otherwise transfer the Work,
-   where such license applies only to those patent claims licensable
-   by such Contributor that are necessarily infringed by their
-   Contribution(s) alone or by combination of their Contribution(s)
-   with the Work to which such Contribution(s) was submitted. If You
-   institute patent litigation against any entity (including a
-   cross-claim or counterclaim in a lawsuit) alleging that the Work
-   or a Contribution incorporated within the Work constitutes direct
-   or contributory patent infringement, then any patent licenses
-   granted to You under this License for that Work shall terminate
-   as of the date such litigation is filed.
-
-4. Redistribution. You may reproduce and distribute copies of the
-   Work or Derivative Works thereof in any medium, with or without
-   modifications, and in Source or Object form, provided that You
-   meet the following conditions:
-
-   (a) You must give any other recipients of the Work or
-       Derivative Works a copy of this License; and
-
-   (b) You must cause any modified files to carry prominent notices
-       stating that You changed the files; and
-
-   (c) You must retain, in the Source form of any Derivative Works
-       that You distribute, all copyright, patent, trademark, and
-       attribution notices from the Source form of the Work,
-       excluding those notices that do not pertain to any part of
-       the Derivative Works; and
-
-   (d) If the Work includes a "NOTICE" text file as part of its
-       distribution, then any Derivative Works that You distribute must
-       include a readable copy of the attribution notices contained
-       within such NOTICE file, excluding those notices that do not
-       pertain to any part of the Derivative Works, in at least one
-       of the following places: within a NOTICE text file distributed
-       as part of the Derivative Works; within the Source form or
-       documentation, if provided along with the Derivative Works; or,
-       within a display generated by the Derivative Works, if and
-       wherever such third-party notices normally appear. The contents
-       of the NOTICE file are for informational purposes only and
-       do not modify the License. You may add Your own attribution
-       notices within Derivative Works that You distribute, alongside
-       or as an addendum to the NOTICE text from the Work, provided
-       that such additional attribution notices cannot be construed
-       as modifying the License.
-
-   You may add Your own copyright statement to Your modifications and
-   may provide additional or different license terms and conditions
-   for use, reproduction, or distribution of Your modifications, or
-   for any such Derivative Works as a whole, provided Your use,
-   reproduction, and distribution of the Work otherwise complies with
-   the conditions stated in this License.
-
-5. Submission of Contributions. Unless You explicitly state otherwise,
-   any Contribution intentionally submitted for inclusion in the Work
-   by You to the Licensor shall be under the terms and conditions of
-   this License, without any additional terms or conditions.
-   Notwithstanding the above, nothing herein shall supersede or modify
-   the terms of any separate license agreement you may have executed
-   with Licensor regarding such Contributions.
-
-6. Trademarks. This License does not grant permission to use the trade
-   names, trademarks, service marks, or product names of the Licensor,
-   except as required for reasonable and customary use in describing the
-   origin of the Work and reproducing the content of the NOTICE file.
-
-7. Disclaimer of Warranty. Unless required by applicable law or
-   agreed to in writing, Licensor provides the Work (and each
-   Contributor provides its Contributions) on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-   implied, including, without limitation, any warranties or conditions
-   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-   PARTICULAR PURPOSE. You are solely responsible for determining the
-   appropriateness of using or redistributing the Work and assume any
-   risks associated with Your exercise of permissions under this License.
-
-8. Limitation of Liability. In no event and under no legal theory,
-   whether in tort (including negligence), contract, or otherwise,
-   unless required by applicable law (such as deliberate and grossly
-   negligent acts) or agreed to in writing, shall any Contributor be
-   liable to You for damages, including any direct, indirect, special,
-   incidental, or consequential damages of any character arising as a
-   result of this License or out of the use or inability to use the
-   Work (including but not limited to damages for loss of goodwill,
-   work stoppage, computer failure or malfunction, or any and all
-   other commercial damages or losses), even if such Contributor
-   has been advised of the possibility of such damages.
-
-9. Accepting Warranty or Additional Liability. While redistributing
-   the Work or Derivative Works thereof, You may choose to offer,
-   and charge a fee for, acceptance of support, warranty, indemnity,
-   or other liability obligations and/or rights consistent with this
-   License. However, in accepting such obligations, You may act only
-   on Your own behalf and on Your sole responsibility, not on behalf
-   of any other Contributor, and only if You agree to indemnify,
-   defend, and hold each Contributor harmless for any liability
-   incurred by, or claims asserted against, such Contributor by reason
-   of your accepting any such warranty or additional liability.
-
-END OF TERMS AND CONDITIONS
-
-APPENDIX: How to apply the Apache License to your work.
-
-   To apply the Apache License to your work, attach the following
-   boilerplate notice, with the fields enclosed by brackets "[]"
-   replaced with your own identifying information. (Don't include
-   the brackets!)  The text should be enclosed in the appropriate
-   comment syntax for the file format. We also recommend that a
-   file or class name and description of purpose be included on the
-   same "printed page" as the copyright notice for easier
-   identification within third-party archives.
-
-Copyright [yyyy] [name of copyright owner]
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
diff --git a/trunk/clients/cpp/Makefile.am b/trunk/clients/cpp/Makefile.am
deleted file mode 100644
index 5a116a5..0000000
--- a/trunk/clients/cpp/Makefile.am
+++ /dev/null
@@ -1,51 +0,0 @@
-## LibKafkaConnect
-## A C++ shared library for connecting to Kafka
-
-#
-# Warning this is the first time I've made a configure.ac/Makefile.am thing
-# Please improve it as I have no idea what I am doing
-# @benjamg
-#
-
-ACLOCAL_AMFLAGS = -I build-aux/m4 ${ACLOCAL_FLAGS}
-AM_CPPFLAGS = $(DEPS_CFLAGS)
-EXAMPLE_LIBS = -lboost_system -lboost_thread -lkafkaconnect
-
-#
-# Shared Library
-#
-
-lib_LTLIBRARIES = libkafkaconnect.la
-
-libkafkaconnect_la_SOURCES = src/producer.cpp
-libkafkaconnect_la_LDFLAGS = -version-info $(KAFKACONNECT_VERSION)
-
-kafkaconnect_includedir = $(includedir)/kafkaconnect
-kafkaconnect_include_HEADERS = src/producer.hpp \
-								src/encoder.hpp \
-								src/encoder_helper.hpp
-
-#
-# Examples
-#                                 
-
-noinst_PROGRAMS = producer
-
-producer_SOURCES = src/example.cpp
-producer_LDADD = $(DEPS_LIBS) $(EXAMPLE_LIBS)
-
-#
-# Tests
-#
-
-check_PROGRAMS = tests/encoder_helper tests/encoder tests/producer
-TESTS = tests/encoder_helper tests/encoder tests/producer
-
-tests_encoder_helper_SOURCES = src/tests/encoder_helper_tests.cpp
-tests_encoder_helper_LDADD = $(DEPS_LIBS) $(EXAMPLE_LIBS) -lboost_unit_test_framework
-
-tests_encoder_SOURCES = src/tests/encoder_tests.cpp
-tests_encoder_LDADD = $(DEPS_LIBS) $(EXAMPLE_LIBS) -lboost_unit_test_framework
-
-tests_producer_SOURCES = src/tests/producer_tests.cpp
-tests_producer_LDADD = $(DEPS_LIBS) $(EXAMPLE_LIBS) -lboost_unit_test_framework
diff --git a/trunk/clients/cpp/README.md b/trunk/clients/cpp/README.md
deleted file mode 100644
index 33f69f1..0000000
--- a/trunk/clients/cpp/README.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# C++ kafka library
-This library allows you to produce messages to the Kafka distributed publish/subscribe messaging service.
-
-## Requirements
-Tested on Ubuntu and Red Hat, both with g++ 4.4 and Boost 1.46.1
-
-## Installation
-Make sure you have g++ and the latest version of Boost: 
-http://gcc.gnu.org/
-http://www.boost.org/
-
-```bash
-./autoconf.sh
-./configure
-```
-
-Run this to generate the makefile for your system. Do this first.
-
-
-```bash
-make
-```
-
-builds the producer example and the KafkaConnect library
-
-
-```bash
-make check
-```
-
-builds and runs the unit tests
-
-
-```bash
-make install
-```
-
-to install as a shared library to 'default' locations (/usr/local/lib and /usr/local/include on linux) 
-
-
-## Usage
-Example.cpp is a very basic Kafka Producer
-
-
-## API docs
-There isn't much code; if I get around to writing the other parts of the library I'll document it sensibly.
-For now, have a look at the header file: /src/producer.hpp
-
-
-## Contact for questions
-
-Ben Gray, MediaSift Ltd.
-
-http://twitter.com/benjamg
-
-
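Putting the individual commands from this README together, a minimal end-to-end sketch of building and installing the C++ client on a Linux host (assumptions: g++, Boost, libtool and autoreconf are already installed, and the default /usr/local prefix is used; the trailing ldconfig step is an assumption for refreshing the loader cache):

```bash
./autoconf.sh       # generate the configure script via autoreconf
./configure         # generate the Makefile for this system
make                # build libkafkaconnect and the producer example
make check          # build and run the unit tests
sudo make install   # install headers and the shared library (assumed to need root)
sudo ldconfig       # refresh the shared-library cache (assumption)
```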
diff --git a/trunk/clients/cpp/autoconf.sh b/trunk/clients/cpp/autoconf.sh
deleted file mode 100644
index 27850e3..0000000
--- a/trunk/clients/cpp/autoconf.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/sh
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# We need libtool for ./configure && make && make install stage
-command -v libtool
-if  [ $? -ne 0 ]; then
-    echo "autoconf.sh: error: unable to locate libtool"
-    exit 1
-fi
-
-# We need autoreconf to build the ./configure script
-command -v autoreconf
-if  [ $? -ne 0 ]; then
-    echo "autoconf.sh: error: unable to locate autoreconf"
-    exit 1
-fi
-
-mkdir -p ./build-aux/m4
-autoreconf --verbose --force --install
-
diff --git a/trunk/clients/cpp/configure.ac b/trunk/clients/cpp/configure.ac
deleted file mode 100644
index 6200202..0000000
--- a/trunk/clients/cpp/configure.ac
+++ /dev/null
@@ -1,42 +0,0 @@
-## LibKafkaConnect
-## A C++ shared library for connecting to Kafka
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#
-# Warning: this is the first time I've made a configure.ac/Makefile.am setup
-# Please improve it as I have no idea what I am doing
-# @benjamg
-#
-
-AC_INIT([LibKafkaConnect], [0.1])
-AC_PREREQ([2.59])
-
-AC_CONFIG_AUX_DIR([build-aux])
-AM_INIT_AUTOMAKE([foreign -Wall])
-
-AC_PROG_LIBTOOL
-AC_PROG_CXX
-AC_PROG_CPP
-
-AC_CONFIG_MACRO_DIR([build-aux/m4])
-
-#
-# Version number
-#
-AC_SUBST([KAFKACONNECT_VERSION], [1:0:1])
-
-AC_CONFIG_FILES([Makefile])
-AC_OUTPUT
diff --git a/trunk/clients/cpp/src/encoder.hpp b/trunk/clients/cpp/src/encoder.hpp
deleted file mode 100644
index a4c542c..0000000
--- a/trunk/clients/cpp/src/encoder.hpp
+++ /dev/null
@@ -1,62 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-/*
- * encoder.hpp
- */
-
-#ifndef KAFKA_ENCODER_HPP_
-#define KAFKA_ENCODER_HPP_
-
-#include <boost/foreach.hpp>
-#include "encoder_helper.hpp"
-
-namespace kafkaconnect {
-
-template <typename List>
-void encode(std::ostream& stream, const std::string& topic, const uint32_t partition, const List& messages)
-{
-	// Pre-calculate size of message set
-	uint32_t messageset_size = 0;
-	BOOST_FOREACH(const std::string& message, messages)
-	{
-		messageset_size += message_format_header_size + message.length();
-	}
-
-	// Packet format is ... packet size (4 bytes)
-	encoder_helper::raw(stream, htonl(2 + 2 + topic.size() + 4 + 4 + messageset_size));
-
-	// ... magic number (2 bytes)
-	encoder_helper::raw(stream, htons(kafka_format_version));
-
-	// ... topic string size (2 bytes) & topic string
-	encoder_helper::raw(stream, htons(topic.size()));
-	stream << topic;
-
-	// ... partition (4 bytes)
-	encoder_helper::raw(stream, htonl(partition));
-
-	// ... message set size (4 bytes) and message set
-	encoder_helper::raw(stream, htonl(messageset_size));
-	BOOST_FOREACH(const std::string& message, messages)
-	{
-		encoder_helper::message(stream, message);
-	}
-}
-
-}
-
-#endif /* KAFKA_ENCODER_HPP_ */
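The `encode` template above frames a produce request as a 4-byte packet size, a 2-byte format version, a 2-byte topic length followed by the topic bytes, a 4-byte partition, a 4-byte message-set size, and then each message as a 4-byte length, a 1-byte magic number, a 4-byte CRC32, and the payload. The following is a small, hypothetical size check of that arithmetic, assuming this `encoder.hpp` is on the include path; the topic and message strings are placeholders.

```cpp
// Sketch only: verifies the byte accounting of kafkaconnect::encode for one
// message, using the constants from encoder_helper.hpp.
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

#include "encoder.hpp"

int main()
{
    std::ostringstream stream;
    std::vector<std::string> messages;
    messages.push_back("hello");  // 5 payload bytes

    kafkaconnect::encode(stream, "test", 0, messages);

    // Per-message framing is message_format_header_size (9) bytes:
    // 4 (length) + 1 (magic) + 4 (crc32), followed by the payload.
    const std::size_t messageset_size =
        kafkaconnect::message_format_header_size + 5;

    // 4 (packet size) + 2 (version) + 2 (topic length) + 4 (topic "test")
    // + 4 (partition) + 4 (message set size) + message set.
    assert(stream.str().size() == 4 + 2 + 2 + 4 + 4 + 4 + messageset_size);
    return 0;
}
```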
diff --git a/trunk/clients/cpp/src/encoder_helper.hpp b/trunk/clients/cpp/src/encoder_helper.hpp
deleted file mode 100644
index 10e7c50..0000000
--- a/trunk/clients/cpp/src/encoder_helper.hpp
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-/*
- * encoder_helper.hpp
- *
- *  Created on: 21 Jun 2011
- *      Author: Ben Gray (@benjamg)
- */
-
-#ifndef KAFKA_ENCODER_HELPER_HPP_
-#define KAFKA_ENCODER_HELPER_HPP_
-
-#include <ostream>
-#include <string>
-
-#include <arpa/inet.h>
-#include <boost/crc.hpp>
-
-#include <stdint.h>
-
-namespace kafkaconnect {
-namespace test { class encoder_helper; }
-
-const uint16_t kafka_format_version = 0;
-
-const uint8_t message_format_magic_number = 0;
-const uint8_t message_format_extra_data_size = 1 + 4;
-const uint8_t message_format_header_size = message_format_extra_data_size + 4;
-
-class encoder_helper
-{
-private:
-	friend class test::encoder_helper;
-	template <typename T> friend void encode(std::ostream&, const std::string&, const uint32_t, const T&);
-
-	static std::ostream& message(std::ostream& stream, const std::string message)
-	{
-		// Message format is ... message & data size (4 bytes)
-		raw(stream, htonl(message_format_extra_data_size + message.length()));
-
-		// ... magic number (1 byte)
-		stream << message_format_magic_number;
-
-		// ... string crc32 (4 bytes)
-		boost::crc_32_type result;
-		result.process_bytes(message.c_str(), message.length());
-		raw(stream, htonl(result.checksum()));
-
-		// ... message string bytes
-		stream << message;
-
-		return stream;
-	}
-
-	template <typename Data>
-	static std::ostream& raw(std::ostream& stream, const Data& data)
-	{
-		stream.write(reinterpret_cast<const char*>(&data), sizeof(Data));
-		return stream;
-	}
-};
-
-}
-
-#endif /* KAFKA_ENCODER_HELPER_HPP_ */
diff --git a/trunk/clients/cpp/src/example.cpp b/trunk/clients/cpp/src/example.cpp
deleted file mode 100644
index ec0aa24..0000000
--- a/trunk/clients/cpp/src/example.cpp
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-#include <exception>
-#include <cstdlib>
-#include <iostream>
-#include <string>
-
-#include <boost/thread.hpp>
-
-#include "producer.hpp"
-
-int main(int argc, char* argv[])
-{
-	std::string hostname = (argc >= 2) ? argv[1] : "localhost";
-	std::string port = (argc >= 3) ? argv[2] : "9092";
-
-	boost::asio::io_service io_service;
-	std::auto_ptr<boost::asio::io_service::work> work(new boost::asio::io_service::work(io_service));
-	boost::thread bt(boost::bind(&boost::asio::io_service::run, &io_service));
-
-	kafkaconnect::producer producer(io_service);
-	producer.connect(hostname, port);
-
-	while(!producer.is_connected())
-	{
-		boost::this_thread::sleep(boost::posix_time::seconds(1));
-	}
-
-	std::vector<std::string> messages;
-	messages.push_back("So long and thanks for all the fish");
-	messages.push_back("Time is an illusion. Lunchtime doubly so.");
-	producer.send(messages, "test");
-
-	work.reset();
-	io_service.stop();
-
-	return EXIT_SUCCESS;
-}
-
diff --git a/trunk/clients/cpp/src/producer.cpp b/trunk/clients/cpp/src/producer.cpp
deleted file mode 100644
index 7ad10df..0000000
--- a/trunk/clients/cpp/src/producer.cpp
+++ /dev/null
@@ -1,117 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-/*
- * producer.cpp
- *
- *  Created on: 21 Jun 2011
- *      Author: Ben Gray (@benjamg)
- */
-
-#include <boost/lexical_cast.hpp>
-
-#include "producer.hpp"
-
-namespace kafkaconnect {
-
-producer::producer(boost::asio::io_service& io_service, const error_handler_function& error_handler)
-	: _connected(false)
-	, _resolver(io_service)
-	, _socket(io_service)
-	, _error_handler(error_handler)
-{
-}
-
-producer::~producer()
-{
-	close();
-}
-
-void producer::connect(const std::string& hostname, const uint16_t port)
-{
-	connect(hostname, boost::lexical_cast<std::string>(port));
-}
-
-void producer::connect(const std::string& hostname, const std::string& servicename)
-{
-	boost::asio::ip::tcp::resolver::query query(hostname, servicename);
-	_resolver.async_resolve(
-		query,
-		boost::bind(
-			&producer::handle_resolve, this,
-			boost::asio::placeholders::error, boost::asio::placeholders::iterator
-		)
-	);
-}
-
-void producer::close()
-{
-	_connected = false;
-	_socket.close();
-}
-
-bool producer::is_connected() const
-{
-	return _connected;
-}
-
-
-void producer::handle_resolve(const boost::system::error_code& error_code, boost::asio::ip::tcp::resolver::iterator endpoints)
-{
-	if (!error_code)
-	{
-		boost::asio::ip::tcp::endpoint endpoint = *endpoints;
-		_socket.async_connect(
-			endpoint,
-			boost::bind(
-				&producer::handle_connect, this,
-				boost::asio::placeholders::error, ++endpoints
-			)
-		);
-	}
-	else { fail_fast_error_handler(error_code); }
-}
-
-void producer::handle_connect(const boost::system::error_code& error_code, boost::asio::ip::tcp::resolver::iterator endpoints)
-{
-	if (!error_code)
-	{
-		// The connection was successful. Send the request.
-		_connected = true;
-	}
-	else if (endpoints != boost::asio::ip::tcp::resolver::iterator())
-	{
-		// TODO: handle connection error (we might not need this as we have others though?)
-
-		// The connection failed, but we have more potential endpoints so throw it back to handle resolve
-		_socket.close();
-		handle_resolve(boost::system::error_code(), endpoints);
-	}
-	else { fail_fast_error_handler(error_code); }
-}
-
-void producer::handle_write_request(const boost::system::error_code& error_code, boost::asio::streambuf* buffer)
-{
-	if (error_code)
-	{
-		fail_fast_error_handler(error_code);
-	}
-
-	delete buffer;
-}
-
-}
diff --git a/trunk/clients/cpp/src/producer.hpp b/trunk/clients/cpp/src/producer.hpp
deleted file mode 100644
index cb84403..0000000
--- a/trunk/clients/cpp/src/producer.hpp
+++ /dev/null
@@ -1,116 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-/*
- * producer.hpp
- *
- *  Created on: 21 Jun 2011
- *      Author: Ben Gray (@benjamg)
- */
-
-#ifndef KAFKA_PRODUCER_HPP_
-#define KAFKA_PRODUCER_HPP_
-
-#include <string>
-#include <vector>
-
-#include <boost/array.hpp>
-#include <boost/asio.hpp>
-#include <boost/bind.hpp>
-#include <boost/function.hpp>
-#include <stdint.h>
-
-#include "encoder.hpp"
-
-namespace kafkaconnect {
-
-const uint32_t use_random_partition = 0xFFFFFFFF;
-
-class producer
-{
-public:
-	typedef boost::function<void(const boost::system::error_code&)> error_handler_function;
-
-	producer(boost::asio::io_service& io_service, const error_handler_function& error_handler = error_handler_function());
-	~producer();
-
-	void connect(const std::string& hostname, const uint16_t port);
-	void connect(const std::string& hostname, const std::string& servicename);
-
-	void close();
-	bool is_connected() const;
-
-	bool send(const std::string& message, const std::string& topic, const uint32_t partition = kafkaconnect::use_random_partition)
-	{
-		boost::array<std::string, 1> messages = { { message } };
-		return send(messages, topic, partition);
-	}
-
-	// TODO: replace this with a sending of the buffered data so encode is called prior to send this will allow for decoupling from the encoder
-	template <typename List>
-	bool send(const List& messages, const std::string& topic, const uint32_t partition = kafkaconnect::use_random_partition)
-	{
-		if (!is_connected())
-		{
-			return false;
-		}
-
-		// TODO: make this more efficient with memory allocations.
-		boost::asio::streambuf* buffer = new boost::asio::streambuf();
-		std::ostream stream(buffer);
-
-		kafkaconnect::encode(stream, topic, partition, messages);
-
-		boost::asio::async_write(
-			_socket, *buffer,
-			boost::bind(&producer::handle_write_request, this, boost::asio::placeholders::error, buffer)
-		);
-
-		return true;
-	}
-
-
-private:
-	bool _connected;
-	boost::asio::ip::tcp::resolver _resolver;
-	boost::asio::ip::tcp::socket _socket;
-	error_handler_function _error_handler;
-
-	void handle_resolve(const boost::system::error_code& error_code, boost::asio::ip::tcp::resolver::iterator endpoints);
-	void handle_connect(const boost::system::error_code& error_code, boost::asio::ip::tcp::resolver::iterator endpoints);
-	void handle_write_request(const boost::system::error_code& error_code, boost::asio::streambuf* buffer);
-
-	/* Fail Fast Error Handler Braindump
-	 *
-	 * If an error handler is not provided in the constructor then the default response is to throw
-	 * back the boost error_code from asio as a boost system_error exception.
-	 *
-	 * Most likely this will cause whatever thread you have processing boost io to terminate unless caught.
-	 * This is great on debug systems or anything where you use io polling to process any outstanding io,
-	 * however, if your io thread is separate and not monitored, it is recommended to pass a handler to
-	 * the constructor.
-	 */
-	inline void fail_fast_error_handler(const boost::system::error_code& error_code)
-	{
-		if(_error_handler.empty()) { throw boost::system::system_error(error_code); }
-		else { _error_handler(error_code); }
-	}
-};
-
-}
-
-#endif /* KAFKA_PRODUCER_HPP_ */
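As the fail-fast note above recommends, an error handler can be passed to the constructor when the io thread is separate and unmonitored, so asio failures are reported through a callback instead of thrown. A minimal sketch under that assumption follows; `log_kafka_error` and the host and port are illustrative, not from the original sources.

```cpp
// Sketch: passing an error_handler_function so resolve/connect/write failures
// are reported via a callback instead of throwing boost::system::system_error
// out of the thread running io_service. log_kafka_error is an illustrative name.
#include <iostream>

#include <boost/asio.hpp>
#include <boost/system/error_code.hpp>

#include "producer.hpp"

// Invoked on whichever thread runs io_service, for any producer failure.
void log_kafka_error(const boost::system::error_code& error)
{
    std::cerr << "kafka producer error: " << error.message() << std::endl;
}

int main()
{
    boost::asio::io_service io_service;
    kafkaconnect::producer producer(io_service, &log_kafka_error);
    producer.connect("localhost", 9092);

    // run() returns once the resolve/connect handlers finish; errors along the
    // way are delivered to log_kafka_error rather than terminating the thread.
    io_service.run();
    return 0;
}
```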
diff --git a/trunk/clients/cpp/src/tests/encoder_helper_tests.cpp b/trunk/clients/cpp/src/tests/encoder_helper_tests.cpp
deleted file mode 100644
index d8194be..0000000
--- a/trunk/clients/cpp/src/tests/encoder_helper_tests.cpp
+++ /dev/null
@@ -1,88 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-/*
- * encoder_helper_tests.cpp
- *
- *  Created on: 21 Jun 2011
- *      Author: Ben Gray (@benjamg)
- */
-
-#define BOOST_TEST_DYN_LINK
-#define BOOST_TEST_MODULE kafkaconnect
-#include <boost/test/unit_test.hpp>
-
-#include <arpa/inet.h>
-
-#include "../encoder_helper.hpp"
-
-// test wrapper
-namespace kafkaconnect { namespace test {
-class encoder_helper {
-public:
-	static std::ostream& message(std::ostream& stream, const std::string message) { return kafkaconnect::encoder_helper::message(stream, message); }
-	template <typename T> static std::ostream& raw(std::ostream& stream, const T& t) { return kafkaconnect::encoder_helper::raw(stream, t); }
-};
-} }
-
-using namespace kafkaconnect::test;
-
-BOOST_AUTO_TEST_SUITE(kafka_encoder_helper)
-
-BOOST_AUTO_TEST_CASE(encode_raw_char)
-{
-	std::ostringstream stream;
-	char value = 0x1;
-
-	encoder_helper::raw(stream, value);
-
-	BOOST_CHECK_EQUAL(stream.str().length(), 1);
-	BOOST_CHECK_EQUAL(stream.str().at(0), value);
-}
-
-BOOST_AUTO_TEST_CASE(encode_raw_integer)
-{
-	std::ostringstream stream;
-	int value = 0x10203;
-
-	encoder_helper::raw(stream, htonl(value));
-
-	BOOST_CHECK_EQUAL(stream.str().length(), 4);
-	BOOST_CHECK_EQUAL(stream.str().at(0), 0);
-	BOOST_CHECK_EQUAL(stream.str().at(1), 0x1);
-	BOOST_CHECK_EQUAL(stream.str().at(2), 0x2);
-	BOOST_CHECK_EQUAL(stream.str().at(3), 0x3);
-}
-
-BOOST_AUTO_TEST_CASE(encode_message)
-{
-	std::string message = "a simple test";
-	std::ostringstream stream;
-
-	encoder_helper::message(stream, message);
-
-	BOOST_CHECK_EQUAL(stream.str().length(), kafkaconnect::message_format_header_size + message.length());
-	BOOST_CHECK_EQUAL(stream.str().at(3), 5 + message.length());
-	BOOST_CHECK_EQUAL(stream.str().at(4), kafkaconnect::message_format_magic_number);
-
-	for(size_t i = 0; i < message.length(); ++i)
-	{
-		BOOST_CHECK_EQUAL(stream.str().at(9 + i), message.at(i));
-	}
-}
-
-BOOST_AUTO_TEST_SUITE_END()
diff --git a/trunk/clients/cpp/src/tests/encoder_tests.cpp b/trunk/clients/cpp/src/tests/encoder_tests.cpp
deleted file mode 100644
index 72cb4b6..0000000
--- a/trunk/clients/cpp/src/tests/encoder_tests.cpp
+++ /dev/null
@@ -1,69 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-/*
- * encoder_tests.cpp
- *
- *  Created on: 21 Jun 2011
- *      Author: Ben Gray (@benjamg)
- */
-
-#define BOOST_TEST_DYN_LINK
-#define BOOST_TEST_MODULE kafkaconnect
-#include <boost/test/unit_test.hpp>
-
-#include <string>
-#include <vector>
-
-#include "../encoder.hpp"
-
-BOOST_AUTO_TEST_CASE(single_message_test)
-{
-	std::ostringstream stream;
-
-	std::vector<std::string> messages;
-	messages.push_back("test message");
-
-	kafkaconnect::encode(stream, "topic", 1, messages);
-
-	BOOST_CHECK_EQUAL(stream.str().length(), 4 + 2 + 2 + strlen("topic") + 4 + 4 + 9 + strlen("test message"));
-	BOOST_CHECK_EQUAL(stream.str().at(3), 2 + 2 + strlen("topic") + 4 + 4 + 9 + strlen("test message"));
-	BOOST_CHECK_EQUAL(stream.str().at(5), 0);
-	BOOST_CHECK_EQUAL(stream.str().at(7), strlen("topic"));
-	BOOST_CHECK_EQUAL(stream.str().at(8), 't');
-	BOOST_CHECK_EQUAL(stream.str().at(8 + strlen("topic") - 1), 'c');
-	BOOST_CHECK_EQUAL(stream.str().at(11 + strlen("topic")), 1);
-	BOOST_CHECK_EQUAL(stream.str().at(15 + strlen("topic")), 9 + strlen("test message"));
-	BOOST_CHECK_EQUAL(stream.str().at(16 + strlen("topic")), 0);
-	BOOST_CHECK_EQUAL(stream.str().at(25 + strlen("topic")), 't');
-}
-
-BOOST_AUTO_TEST_CASE(multiple_message_test)
-{
-	std::ostringstream stream;
-
-	std::vector<std::string> messages;
-	messages.push_back("test message");
-	messages.push_back("another message to check");
-
-	kafkaconnect::encode(stream, "topic", 1, messages);
-
-	BOOST_CHECK_EQUAL(stream.str().length(), 4 + 2 + 2 + strlen("topic") + 4 + 4 + 9 + strlen("test message") + 9 + strlen("another message to check"));
-	BOOST_CHECK_EQUAL(stream.str().at(3), 2 + 2 + strlen("topic") + 4 + 4 + 9 + strlen("test message") + 9 + strlen("another message to check"));
-	BOOST_CHECK_EQUAL(stream.str().at(15 + strlen("topic")), 9 + strlen("test message") + 9 + strlen("another message to check"));
-}
-
diff --git a/trunk/clients/cpp/src/tests/producer_tests.cpp b/trunk/clients/cpp/src/tests/producer_tests.cpp
deleted file mode 100644
index edde85f..0000000
--- a/trunk/clients/cpp/src/tests/producer_tests.cpp
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-/*
- * producer_tests.cpp
- *
- *  Created on: 21 Jun 2011
- *      Author: Ben Gray (@benjamg)
- */
-
-#define BOOST_TEST_DYN_LINK
-#define BOOST_TEST_MODULE kafkaconnect
-#include <boost/test/unit_test.hpp>
-
-#include <memory>
-
-#include <boost/thread.hpp>
-
-#include "../producer.hpp"
-
-BOOST_AUTO_TEST_CASE(basic_message_test)
-{
-	boost::asio::io_service io_service;
-	std::auto_ptr<boost::asio::io_service::work> work(new boost::asio::io_service::work(io_service));
-	boost::asio::ip::tcp::acceptor acceptor(io_service, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 12345));
-	boost::thread bt(boost::bind(&boost::asio::io_service::run, &io_service));
-
-	kafkaconnect::producer producer(io_service);
-	BOOST_CHECK_EQUAL(producer.is_connected(), false);
-	producer.connect("localhost", 12345);
-
-	boost::asio::ip::tcp::socket socket(io_service);
-	acceptor.accept(socket);
-
-	while(!producer.is_connected())
-	{
-		boost::this_thread::sleep(boost::posix_time::seconds(1));
-	}
-
-	std::vector<std::string> messages;
-	messages.push_back("so long and thanks for all the fish");
-	producer.send(messages, "mice", 42);
-
-	boost::array<char, 1024> buffer;
-	boost::system::error_code error;
-	size_t len = socket.read_some(boost::asio::buffer(buffer), error);
-
-	BOOST_CHECK_EQUAL(len, 4 + 2 + 2 + strlen("mice") + 4 + 4 + 9 + strlen("so long and thanks for all the fish"));
-	BOOST_CHECK_EQUAL(buffer[3], 2 + 2 + strlen("mice") + 4 + 4 + 9 + strlen("so long and thanks for all the fish"));
-	BOOST_CHECK_EQUAL(buffer[5], 0);
-	BOOST_CHECK_EQUAL(buffer[7], strlen("mice"));
-	BOOST_CHECK_EQUAL(buffer[8], 'm');
-	BOOST_CHECK_EQUAL(buffer[8 + strlen("mice") - 1], 'e');
-	BOOST_CHECK_EQUAL(buffer[11 + strlen("mice")], 42);
-	BOOST_CHECK_EQUAL(buffer[15 + strlen("mice")], 9 + strlen("so long and thanks for all the fish"));
-	BOOST_CHECK_EQUAL(buffer[16 + strlen("mice")], 0);
-	BOOST_CHECK_EQUAL(buffer[25 + strlen("mice")], 's');
-
-	work.reset();
-	io_service.stop();
-}
-
diff --git a/trunk/clients/csharp/.gitignore b/trunk/clients/csharp/.gitignore
deleted file mode 100644
index 1196633..0000000
--- a/trunk/clients/csharp/.gitignore
+++ /dev/null
@@ -1,5 +0,0 @@
-StyleCop.Cache
-bin
-obj
-*.suo
-*.csproj.user
\ No newline at end of file
diff --git a/trunk/clients/csharp/LICENSE b/trunk/clients/csharp/LICENSE
deleted file mode 100644
index 78f9f30..0000000
--- a/trunk/clients/csharp/LICENSE
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                              Apache License
-                        Version 2.0, January 2004
-                     http://www.apache.org/licenses/
-
-TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-1. Definitions.
-
-   "License" shall mean the terms and conditions for use, reproduction,
-   and distribution as defined by Sections 1 through 9 of this document.
-
-   "Licensor" shall mean the copyright owner or entity authorized by
-   the copyright owner that is granting the License.
-
-   "Legal Entity" shall mean the union of the acting entity and all
-   other entities that control, are controlled by, or are under common
-   control with that entity. For the purposes of this definition,
-   "control" means (i) the power, direct or indirect, to cause the
-   direction or management of such entity, whether by contract or
-   otherwise, or (ii) ownership of fifty percent (50%) or more of the
-   outstanding shares, or (iii) beneficial ownership of such entity.
-
-   "You" (or "Your") shall mean an individual or Legal Entity
-   exercising permissions granted by this License.
-
-   "Source" form shall mean the preferred form for making modifications,
-   including but not limited to software source code, documentation
-   source, and configuration files.
-
-   "Object" form shall mean any form resulting from mechanical
-   transformation or translation of a Source form, including but
-   not limited to compiled object code, generated documentation,
-   and conversions to other media types.
-
-   "Work" shall mean the work of authorship, whether in Source or
-   Object form, made available under the License, as indicated by a
-   copyright notice that is included in or attached to the work
-   (an example is provided in the Appendix below).
-
-   "Derivative Works" shall mean any work, whether in Source or Object
-   form, that is based on (or derived from) the Work and for which the
-   editorial revisions, annotations, elaborations, or other modifications
-   represent, as a whole, an original work of authorship. For the purposes
-   of this License, Derivative Works shall not include works that remain
-   separable from, or merely link (or bind by name) to the interfaces of,
-   the Work and Derivative Works thereof.
-
-   "Contribution" shall mean any work of authorship, including
-   the original version of the Work and any modifications or additions
-   to that Work or Derivative Works thereof, that is intentionally
-   submitted to Licensor for inclusion in the Work by the copyright owner
-   or by an individual or Legal Entity authorized to submit on behalf of
-   the copyright owner. For the purposes of this definition, "submitted"
-   means any form of electronic, verbal, or written communication sent
-   to the Licensor or its representatives, including but not limited to
-   communication on electronic mailing lists, source code control systems,
-   and issue tracking systems that are managed by, or on behalf of, the
-   Licensor for the purpose of discussing and improving the Work, but
-   excluding communication that is conspicuously marked or otherwise
-   designated in writing by the copyright owner as "Not a Contribution."
-
-   "Contributor" shall mean Licensor and any individual or Legal Entity
-   on behalf of whom a Contribution has been received by Licensor and
-   subsequently incorporated within the Work.
-
-2. Grant of Copyright License. Subject to the terms and conditions of
-   this License, each Contributor hereby grants to You a perpetual,
-   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-   copyright license to reproduce, prepare Derivative Works of,
-   publicly display, publicly perform, sublicense, and distribute the
-   Work and such Derivative Works in Source or Object form.
-
-3. Grant of Patent License. Subject to the terms and conditions of
-   this License, each Contributor hereby grants to You a perpetual,
-   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-   (except as stated in this section) patent license to make, have made,
-   use, offer to sell, sell, import, and otherwise transfer the Work,
-   where such license applies only to those patent claims licensable
-   by such Contributor that are necessarily infringed by their
-   Contribution(s) alone or by combination of their Contribution(s)
-   with the Work to which such Contribution(s) was submitted. If You
-   institute patent litigation against any entity (including a
-   cross-claim or counterclaim in a lawsuit) alleging that the Work
-   or a Contribution incorporated within the Work constitutes direct
-   or contributory patent infringement, then any patent licenses
-   granted to You under this License for that Work shall terminate
-   as of the date such litigation is filed.
-
-4. Redistribution. You may reproduce and distribute copies of the
-   Work or Derivative Works thereof in any medium, with or without
-   modifications, and in Source or Object form, provided that You
-   meet the following conditions:
-
-   (a) You must give any other recipients of the Work or
-       Derivative Works a copy of this License; and
-
-   (b) You must cause any modified files to carry prominent notices
-       stating that You changed the files; and
-
-   (c) You must retain, in the Source form of any Derivative Works
-       that You distribute, all copyright, patent, trademark, and
-       attribution notices from the Source form of the Work,
-       excluding those notices that do not pertain to any part of
-       the Derivative Works; and
-
-   (d) If the Work includes a "NOTICE" text file as part of its
-       distribution, then any Derivative Works that You distribute must
-       include a readable copy of the attribution notices contained
-       within such NOTICE file, excluding those notices that do not
-       pertain to any part of the Derivative Works, in at least one
-       of the following places: within a NOTICE text file distributed
-       as part of the Derivative Works; within the Source form or
-       documentation, if provided along with the Derivative Works; or,
-       within a display generated by the Derivative Works, if and
-       wherever such third-party notices normally appear. The contents
-       of the NOTICE file are for informational purposes only and
-       do not modify the License. You may add Your own attribution
-       notices within Derivative Works that You distribute, alongside
-       or as an addendum to the NOTICE text from the Work, provided
-       that such additional attribution notices cannot be construed
-       as modifying the License.
-
-   You may add Your own copyright statement to Your modifications and
-   may provide additional or different license terms and conditions
-   for use, reproduction, or distribution of Your modifications, or
-   for any such Derivative Works as a whole, provided Your use,
-   reproduction, and distribution of the Work otherwise complies with
-   the conditions stated in this License.
-
-5. Submission of Contributions. Unless You explicitly state otherwise,
-   any Contribution intentionally submitted for inclusion in the Work
-   by You to the Licensor shall be under the terms and conditions of
-   this License, without any additional terms or conditions.
-   Notwithstanding the above, nothing herein shall supersede or modify
-   the terms of any separate license agreement you may have executed
-   with Licensor regarding such Contributions.
-
-6. Trademarks. This License does not grant permission to use the trade
-   names, trademarks, service marks, or product names of the Licensor,
-   except as required for reasonable and customary use in describing the
-   origin of the Work and reproducing the content of the NOTICE file.
-
-7. Disclaimer of Warranty. Unless required by applicable law or
-   agreed to in writing, Licensor provides the Work (and each
-   Contributor provides its Contributions) on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-   implied, including, without limitation, any warranties or conditions
-   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-   PARTICULAR PURPOSE. You are solely responsible for determining the
-   appropriateness of using or redistributing the Work and assume any
-   risks associated with Your exercise of permissions under this License.
-
-8. Limitation of Liability. In no event and under no legal theory,
-   whether in tort (including negligence), contract, or otherwise,
-   unless required by applicable law (such as deliberate and grossly
-   negligent acts) or agreed to in writing, shall any Contributor be
-   liable to You for damages, including any direct, indirect, special,
-   incidental, or consequential damages of any character arising as a
-   result of this License or out of the use or inability to use the
-   Work (including but not limited to damages for loss of goodwill,
-   work stoppage, computer failure or malfunction, or any and all
-   other commercial damages or losses), even if such Contributor
-   has been advised of the possibility of such damages.
-
-9. Accepting Warranty or Additional Liability. While redistributing
-   the Work or Derivative Works thereof, You may choose to offer,
-   and charge a fee for, acceptance of support, warranty, indemnity,
-   or other liability obligations and/or rights consistent with this
-   License. However, in accepting such obligations, You may act only
-   on Your own behalf and on Your sole responsibility, not on behalf
-   of any other Contributor, and only if You agree to indemnify,
-   defend, and hold each Contributor harmless for any liability
-   incurred by, or claims asserted against, such Contributor by reason
-   of your accepting any such warranty or additional liability.
-
-END OF TERMS AND CONDITIONS
-
-APPENDIX: How to apply the Apache License to your work.
-
-   To apply the Apache License to your work, attach the following
-   boilerplate notice, with the fields enclosed by brackets "[]"
-   replaced with your own identifying information. (Don't include
-   the brackets!)  The text should be enclosed in the appropriate
-   comment syntax for the file format. We also recommend that a
-   file or class name and description of purpose be included on the
-   same "printed page" as the copyright notice for easier
-   identification within third-party archives.
-
-Copyright 2011 LinkedIn
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
\ No newline at end of file
diff --git a/trunk/clients/csharp/README.md b/trunk/clients/csharp/README.md
deleted file mode 100644
index 3cbe9c0..0000000
--- a/trunk/clients/csharp/README.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# .NET Kafka Client
-
-This is a .NET implementation of a Kafka client written in C#.  It provides a basic implementation that covers the core functionality, including a simple Producer and Consumer.
-
-The .NET client wraps Kafka server error codes in the `KafkaException` class.  Exceptions are not trapped within the library and bubble up directly from the TcpClient and its underlying Socket connection, so clients using this library should do their own exception handling for these errors.
-
-## Producer
-
-The Producer can send one or more messages to Kafka in both a synchronous and asynchronous fashion.
-
-### Producer Usage
-
-    string payload1 = "kafka 1.";
-    byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
-    Message msg1 = new Message(payloadData1);
-
-    string payload2 = "kafka 2.";
-    byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
-    Message msg2 = new Message(payloadData2);
-
-    Producer producer = new Producer("localhost", 9092);
-    producer.Send("test", 0, new List<Message> { msg1, msg2 });
-
-### Asynchronous Producer Usage
-
-    List<Message> messages = GetBunchOfMessages();
-
-    Producer producer = new Producer("localhost", 9092);
-    producer.SendAsync("test", 0, messages, (requestContext) => { /* doing work */ });
-
-### Multi-Producer Usage
-
-    List<ProducerRequest> requests = new List<ProducerRequest>
-    { 
-        new ProducerRequest("test a", 0, new List<Message> { new Message(Encoding.UTF8.GetBytes("1: " + DateTime.UtcNow)) }),
-        new ProducerRequest("test b", 0, new List<Message> { new Message(Encoding.UTF8.GetBytes("2: " + DateTime.UtcNow)) }),
-        new ProducerRequest("test c", 0, new List<Message> { new Message(Encoding.UTF8.GetBytes("3: " + DateTime.UtcNow)) }),
-        new ProducerRequest("test d", 0, new List<Message> { new Message(Encoding.UTF8.GetBytes("4: " + DateTime.UtcNow)) })
-    };
-
-    MultiProducerRequest request = new MultiProducerRequest(requests);
-    Producer producer = new Producer("localhost", 9092);
-    producer.Send(request);
-
-## Consumer
-
-The consumer has two functions of interest: `GetOffsetsBefore` and `Consume`.  `GetOffsetsBefore` retrieves a list of offsets before a given time, and `Consume` attempts to get a list of messages from Kafka given a topic, partition, and offset.  `Consume` supports both single and batched requests via `MultiFetchRequest`.
-
-### Consumer Usage
-
-    Consumer consumer = new Consumer("localhost", 9092);
-    int max = 10;
-    long[] offsets = consumer.GetOffsetsBefore("test", 0, OffsetRequest.LatestTime, max);
-    List<Message> messages = consumer.Consume("test", 0, offsets[0]);
-
-### Consumer Multi-fetch
-
-    Consumer consumer = new Consumer("localhost", 9092);
-    MultiFetchRequest request = new MultiFetchRequest(new List<FetchRequest>
-    {
-        new FetchRequest("testa", 0, 0),
-        new FetchRequest("testb", 0, 0),
-        new FetchRequest("testc", 0, 0)
-    });
-
-    List<List<Message>> messages = consumer.Consume(request);
\ No newline at end of file
diff --git a/trunk/clients/csharp/Settings.StyleCop b/trunk/clients/csharp/Settings.StyleCop
deleted file mode 100644
index 70c77f6..0000000
--- a/trunk/clients/csharp/Settings.StyleCop
+++ /dev/null
@@ -1,92 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- 
-    http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<StyleCopSettings Version="4.3">
-  <Parsers>
-    <Parser ParserId="Microsoft.StyleCop.CSharp.CsParser">
-      <ParserSettings>
-        <BooleanProperty Name="AnalyzeDesignerFiles">False</BooleanProperty>
-      </ParserSettings>
-    </Parser>
-  </Parsers>
-  <Analyzers>
-    <Analyzer AnalyzerId="Microsoft.StyleCop.CSharp.NamingRules">
-      <Rules>
-        <Rule Name="FieldNamesMustNotBeginWithUnderscore">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-      </Rules>
-      <AnalyzerSettings />
-    </Analyzer>
-    <Analyzer AnalyzerId="Microsoft.StyleCop.CSharp.DocumentationRules">
-      <Rules>
-        <Rule Name="FileMustHaveHeader">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-        <Rule Name="FileHeaderMustShowCopyright">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-        <Rule Name="FileHeaderMustHaveCopyrightText">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-        <Rule Name="FileHeaderMustContainFileName">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-        <Rule Name="FileHeaderFileNameDocumentationMustMatchFileName">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-        <Rule Name="FileHeaderMustHaveValidCompanyText">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-      </Rules>
-      <AnalyzerSettings />
-    </Analyzer>
-    <Analyzer AnalyzerId="Microsoft.StyleCop.CSharp.OrderingRules">
-      <Rules>
-        <Rule Name="UsingDirectivesMustBePlacedWithinNamespace">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-      </Rules>
-      <AnalyzerSettings />
-    </Analyzer>
-    <Analyzer AnalyzerId="Microsoft.StyleCop.CSharp.ReadabilityRules">
-      <Rules>
-        <Rule Name="PrefixLocalCallsWithThis">
-          <RuleSettings>
-            <BooleanProperty Name="Enabled">False</BooleanProperty>
-          </RuleSettings>
-        </Rule>
-      </Rules>
-      <AnalyzerSettings />
-    </Analyzer>
-  </Analyzers>
-</StyleCopSettings>
\ No newline at end of file
diff --git a/trunk/clients/csharp/lib/StyleCop/Microsoft.StyleCop.Targets b/trunk/clients/csharp/lib/StyleCop/Microsoft.StyleCop.Targets
deleted file mode 100644
index 2b319a4..0000000
--- a/trunk/clients/csharp/lib/StyleCop/Microsoft.StyleCop.Targets
+++ /dev/null
@@ -1,125 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- 
-    http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
-  <!-- Specify where tasks are implemented. -->
-  <UsingTask AssemblyFile="Microsoft.StyleCop.dll" TaskName="StyleCopTask"/>
-
-  <PropertyGroup>
-    <BuildDependsOn>$(BuildDependsOn);StyleCop</BuildDependsOn>
-    <RebuildDependsOn>StyleCopForceFullAnalysis;$(RebuildDependsOn)</RebuildDependsOn>
-  </PropertyGroup>
-
-  <!-- Define StyleCopForceFullAnalysis property. -->
-  <PropertyGroup Condition="('$(SourceAnalysisForceFullAnalysis)' != '') and ('$(StyleCopForceFullAnalysis)' == '')">
-    <StyleCopForceFullAnalysis>$(SourceAnalysisForceFullAnalysis)</StyleCopForceFullAnalysis>
-  </PropertyGroup>
-  <PropertyGroup Condition="'$(StyleCopForceFullAnalysis)' == ''">
-    <StyleCopForceFullAnalysis>false</StyleCopForceFullAnalysis>
-  </PropertyGroup>
-
-  <!-- Define StyleCopCacheResults property. -->
-  <PropertyGroup Condition="('$(SourceAnalysisCacheResults)' != '') and ('$(StyleCopCacheResults)' == '')">
-    <StyleCopCacheResults>$(SourceAnalysisCacheResults)</StyleCopCacheResults>
-  </PropertyGroup>
-  <PropertyGroup Condition="'$(StyleCopCacheResults)' == ''">
-    <StyleCopCacheResults>true</StyleCopCacheResults>
-  </PropertyGroup>
-
-  <!-- Define StyleCopTreatErrorsAsWarnings property. -->
-  <PropertyGroup Condition="('$(SourceAnalysisTreatErrorsAsWarnings)' != '') and ('$(StyleCopTreatErrorsAsWarnings)' == '')">
-    <StyleCopTreatErrorsAsWarnings>$(SourceAnalysisTreatErrorsAsWarnings)</StyleCopTreatErrorsAsWarnings>
-  </PropertyGroup>
-  <PropertyGroup Condition="'$(StyleCopTreatErrorsAsWarnings)' == ''">
-    <StyleCopTreatErrorsAsWarnings>true</StyleCopTreatErrorsAsWarnings>
-  </PropertyGroup>
-
-  <!-- Define StyleCopEnabled property. -->
-  <PropertyGroup Condition="('$(SourceAnalysisEnabled)' != '') and ('$(StyleCopEnabled)' == '')">
-    <StyleCopEnabled>$(SourceAnalysisEnabled)</StyleCopEnabled>
-  </PropertyGroup>
-  <PropertyGroup Condition="'$(StyleCopEnabled)' == ''">
-    <StyleCopEnabled>true</StyleCopEnabled>
-  </PropertyGroup>
-
-  <!-- Define StyleCopOverrideSettingsFile property. -->
-  <PropertyGroup Condition="('$(SourceAnalysisOverrideSettingsFile)' != '') and ('$(StyleCopOverrideSettingsFile)' == '')">
-    <StyleCopOverrideSettingsFile>$(SourceAnalysisOverrideSettingsFile)</StyleCopOverrideSettingsFile>
-  </PropertyGroup>
-  <PropertyGroup Condition="'$(StyleCopOverrideSettingsFile)' == ''">
-    <StyleCopOverrideSettingsFile> </StyleCopOverrideSettingsFile>
-  </PropertyGroup>
-
-  <!-- Define StyleCopOutputFile property. -->
-  <PropertyGroup Condition="('$(SourceAnalysisOutputFile)' != '') and ('$(StyleCopOutputFile)' == '')">
-    <StyleCopOutputFile>$(SourceAnalysisOutputFile)</StyleCopOutputFile>
-  </PropertyGroup>
-  <PropertyGroup Condition="'$(StyleCopOutputFile)' == ''">
-    <StyleCopOutputFile>$(IntermediateOutputPath)StyleCopViolations.xml</StyleCopOutputFile>
-  </PropertyGroup>
-
-  <!-- Define all new properties which do not need to have both StyleCop and SourceAnalysis variations. -->
-  <PropertyGroup>
-    <!-- Specifying 0 will cause StyleCop to use the default violation count limit.
-         Specifying any positive number will cause StyleCop to use that number as the violation count limit.
-         Specifying any negative number will cause StyleCop to allow any number of violations without limit. -->
-    <StyleCopMaxViolationCount Condition="'$(StyleCopMaxViolationCount)' == ''">0</StyleCopMaxViolationCount>
-  </PropertyGroup>
-
-  <!-- Define target: StyleCopForceFullAnalysis -->
-  <Target Name="StyleCopForceFullAnalysis">
-    <CreateProperty Value="true">
-      <Output TaskParameter="Value" PropertyName="StyleCopForceFullAnalysis" />
-    </CreateProperty>
-  </Target>
-
-  <!-- Define target: StyleCop -->
-  <Target Name="StyleCop" Condition="'$(StyleCopEnabled)' != 'false'">
-    <!-- Determine what files should be checked. Take all Compile items, but exclude those that have
-        set ExcludeFromStyleCop=true or ExcludeFromSourceAnalysis=true. -->
-    <CreateItem Include="@(Compile)" Condition="('%(Compile.ExcludeFromStyleCop)' != 'true') and ('%(Compile.ExcludeFromSourceAnalysis)' != 'true')">
-      <Output TaskParameter="Include" ItemName="StyleCopFiles"/>
-    </CreateItem>
-
-    <Message Text="Forcing full StyleCop reanalysis." Condition="'$(StyleCopForceFullAnalysis)' == 'true'" Importance="Low" />
-
-    <Message Text="Analyzing @(StyleCopFiles)" Importance="Low" />
-
-    <!-- Run the StyleCop MSBuild task. -->
-    <StyleCopTask
-      ProjectFullPath="$(MSBuildProjectFile)"
-      SourceFiles="@(StyleCopFiles)"
-      AdditionalAddinPaths="@(StyleCopAdditionalAddinPaths)"
-      ForceFullAnalysis="$(StyleCopForceFullAnalysis)"
-      DefineConstants="$(DefineConstants)"
-      TreatErrorsAsWarnings="$(StyleCopTreatErrorsAsWarnings)"
-      CacheResults="$(StyleCopCacheResults)"
-      OverrideSettingsFile="$(StyleCopOverrideSettingsFile)"
-      OutputFile="$(StyleCopOutputFile)"
-      MaxViolationCount="$(StyleCopMaxViolationCount)"
-            />
-
-    <!-- Make output files cleanable -->
-    <CreateItem Include="$(StyleCopOutputFile)">
-      <Output TaskParameter="Include" ItemName="FileWrites"/>
-    </CreateItem>
-
-    <!-- Add the StyleCop.cache file to the list of files we've written - so they can be cleaned up on a Build Clean. -->
-    <CreateItem Include="StyleCop.Cache" Condition="'$(StyleCopCacheResults)' == 'true'">
-      <Output TaskParameter="Include" ItemName="FileWrites"/>
-    </CreateItem>
-  </Target>
-</Project>
diff --git a/trunk/clients/csharp/lib/StyleCop/Settings.StyleCop b/trunk/clients/csharp/lib/StyleCop/Settings.StyleCop
deleted file mode 100644
index 3069279..0000000
--- a/trunk/clients/csharp/lib/StyleCop/Settings.StyleCop
+++ /dev/null
@@ -1,48 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- 
-    http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<StyleCopSettings Version="4.3">
-  <Parsers>
-    <Parser ParserId="Microsoft.StyleCop.CSharp.CsParser">
-      <ParserSettings>
-        <CollectionProperty Name="GeneratedFileFilters">
-          <Value>\.g\.cs$</Value>
-          <Value>\.generated\.cs$</Value>
-          <Value>\.g\.i\.cs$</Value>
-        </CollectionProperty>
-      </ParserSettings>
-    </Parser>
-  </Parsers>
-  <Analyzers>
-    <Analyzer AnalyzerId="Microsoft.StyleCop.CSharp.NamingRules">
-      <AnalyzerSettings>
-        <CollectionProperty Name="Hungarian">
-          <Value>as</Value>
-          <Value>do</Value>
-          <Value>id</Value>
-          <Value>if</Value>
-          <Value>in</Value>
-          <Value>is</Value>
-          <Value>my</Value>
-          <Value>no</Value>
-          <Value>on</Value>
-          <Value>to</Value>
-          <Value>ui</Value>
-        </CollectionProperty>
-      </AnalyzerSettings>
-    </Analyzer>
-  </Analyzers>
-</StyleCopSettings>
\ No newline at end of file
diff --git a/trunk/clients/csharp/lib/nunit/2.5.9/nunit.framework.dll b/trunk/clients/csharp/lib/nunit/2.5.9/nunit.framework.dll
deleted file mode 100644
index 875e098..0000000
--- a/trunk/clients/csharp/lib/nunit/2.5.9/nunit.framework.dll
+++ /dev/null
Binary files differ
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/AbstractRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/AbstractRequest.cs
deleted file mode 100644
index ee08c4c..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/AbstractRequest.cs
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-
-namespace Kafka.Client
-{
-    /// <summary>
-    /// Base request to make to Kafka.
-    /// </summary>
-    public abstract class AbstractRequest
-    {
-        /// <summary>
-        /// Gets or sets the topic to publish to.
-        /// </summary>
-        public string Topic { get; set; }
-
-        /// <summary>
-        /// Gets or sets the partition to publish to.
-        /// </summary>
-        public int Partition { get; set; }
-
-        /// <summary>
-        /// Converts the request to an array of bytes that is expected by Kafka.
-        /// </summary>
-        /// <returns>An array of bytes that represents the request.</returns>
-        public abstract byte[] GetBytes();
-
-        /// <summary>
-        /// Determines if the request has valid settings.
-        /// </summary>
-        /// <returns>True if valid and false otherwise.</returns>
-        public abstract bool IsValid();
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/AsyncProducerConfig.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/AsyncProducerConfig.cs
deleted file mode 100644
index c0fc2b4..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/AsyncProducerConfig.cs
+++ /dev/null
@@ -1,74 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System;
-    using System.Collections.Generic;
-    using Kafka.Client.Serialization;
-    using Kafka.Client.Utils;
-
-    /// <summary>
-    /// Configuration used by the asynchronous producer
-    /// </summary>
-    public class AsyncProducerConfig : SyncProducerConfig, IAsyncProducerConfigShared
-    {
-        public const int DefaultQueueTime = 5000;
-
-        public const int DefaultQueueSize = 10000;
-
-        public const int DefaultBatchSize = 200;
-
-        public static readonly string DefaultSerializerClass = typeof(DefaultEncoder).FullName;
-
-        public AsyncProducerConfig()
-        {
-            this.QueueTime = DefaultQueueTime;
-            this.QueueSize = DefaultQueueSize;
-            this.BatchSize = DefaultBatchSize;
-            this.SerializerClass = DefaultSerializerClass;
-        }
-
-        public AsyncProducerConfig(KafkaClientConfiguration kafkaClientConfiguration)
-            : this()
-        {
-            Guard.Assert<ArgumentNullException>(() => kafkaClientConfiguration != null);
-            Guard.Assert<ArgumentNullException>(() => kafkaClientConfiguration.KafkaServer != null);
-            Guard.Assert<ArgumentNullException>(() => kafkaClientConfiguration.KafkaServer.Address != null);
-            Guard.Assert<ArgumentOutOfRangeException>(() => kafkaClientConfiguration.KafkaServer.Port > 0);
-
-            this.Host = kafkaClientConfiguration.KafkaServer.Address;
-            this.Port = kafkaClientConfiguration.KafkaServer.Port;
-        }
-
-        public int QueueTime { get; set; }
-
-        public int QueueSize { get; set; }
-
-        public int BatchSize { get; set; }
-
-        public string SerializerClass { get; set; }
-
-        public string CallbackHandler { get; set; }
-
-        public string EventHandler { get; set; }
-
-        public IDictionary<string, string> CallbackHandlerProps { get; set; }
-
-        public IDictionary<string, string> EventHandlerProps { get; set; }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/BrokerPartitionInfo.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/BrokerPartitionInfo.cs
deleted file mode 100644
index 44a70bc..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/BrokerPartitionInfo.cs
+++ /dev/null
@@ -1,72 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System.Configuration;
-    using System.Globalization;
-
-    public class BrokerPartitionInfo : ConfigurationElement
-    {
-        [ConfigurationProperty("id")]
-        public int Id
-        {
-            get
-            {
-                return (int)this["id"];
-            }
-
-            set
-            {
-                this["id"] = value;
-            }
-        }
-
-        [ConfigurationProperty("address")]
-        public string Address
-        {
-            get
-            {
-                return (string)this["address"];
-            }
-
-            set
-            {
-                this["address"] = value;
-            }
-        }
-
-        [ConfigurationProperty("port")]
-        public int Port
-        {
-            get
-            {
-                return (int)this["port"];
-            }
-
-            set
-            {
-                this["port"] = value;
-            }
-        }
-
-        public string GetBrokerPartitionInfoAsString()
-        {
-            return string.Format(CultureInfo.InvariantCulture, "{0}:{1}:{2}", Id, Address, Port);
-        }
-    }
-}
\ No newline at end of file
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/BrokerPartitionInfoCollection.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/BrokerPartitionInfoCollection.cs
deleted file mode 100644
index 5dbed5e..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/BrokerPartitionInfoCollection.cs
+++ /dev/null
@@ -1,52 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System.Configuration;
-
-    public class BrokerPartitionInfoCollection : ConfigurationElementCollection
-    {
-        public BrokerPartitionInfo this[int index]
-        {
-            get
-            {
-                return this.BaseGet(index) as BrokerPartitionInfo;
-            }
-
-            set
-            {
-                if (this.BaseGet(index) != null)
-                {
-                    this.BaseRemoveAt(index);
-                }
-
-                this.BaseAdd(index, value);
-            }
-        }
-    
-        protected override ConfigurationElement CreateNewElement()
-        {
-            return new BrokerPartitionInfo();
-        }
-
-        protected override object GetElementKey(ConfigurationElement element)
-        {
-            return ((BrokerPartitionInfo)element).Id;
-        }
-    }
-}
\ No newline at end of file
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/Consumer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/Consumer.cs
deleted file mode 100644
index 5ba8145..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/Consumer.cs
+++ /dev/null
@@ -1,136 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System.Configuration;
-
-    public class Consumer : ConfigurationElement
-    {
-        [ConfigurationProperty("numberOfTries")]
-        public short NumberOfTries
-        {
-            get
-            {
-                return (short)this["numberOfTries"];
-            }
-
-            set
-            {
-                this["numberOfTries"] = value;
-            }
-        }
-
-        [ConfigurationProperty("groupId")]
-        public string GroupId
-        {
-            get
-            {
-                return (string)this["groupId"];
-            }
-
-            set
-            {
-                this["groupId"] = value;
-            }
-        }
-
-        [ConfigurationProperty("timeout")]
-        public int Timeout
-        {
-            get
-            {
-                return (int)this["timeout"];
-            }
-
-            set
-            {
-                this["timeout"] = value;
-            }
-        }
-
-        [ConfigurationProperty("autoOffsetReset")]
-        public string AutoOffsetReset
-        {
-            get
-            {
-                return (string)this["autoOffsetReset"];
-            }
-
-            set
-            {
-                this["autoOffsetReset"] = value;
-            }
-        }
-
-        [ConfigurationProperty("autoCommit")]
-        public bool AutoCommit
-        {
-            get
-            {
-                return (bool)this["autoCommit"];
-            }
-
-            set
-            {
-                this["autoCommit"] = value;
-            }
-        }
-
-        [ConfigurationProperty("autoCommitIntervalMs")]
-        public int AutoCommitIntervalMs
-        {
-            get
-            {
-                return (int)this["autoCommitIntervalMs"];
-            }
-
-            set
-            {
-                this["autoCommitIntervalMs"] = value;
-            }
-        }
-
-        [ConfigurationProperty("fetchSize")]
-        public int FetchSize
-        {
-            get
-            {
-                return (int)this["fetchSize"];
-            }
-
-            set
-            {
-                this["fetchSize"] = value;
-            }
-        }
-
-        [ConfigurationProperty("backOffIncrementMs")]
-        public int BackOffIncrementMs
-        {
-            get
-            {
-                return (int)this["backOffIncrementMs"];
-            }
-
-            set
-            {
-                this["backOffIncrementMs"] = value;
-            }
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ConsumerConfig.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ConsumerConfig.cs
deleted file mode 100644
index eacf407..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ConsumerConfig.cs
+++ /dev/null
@@ -1,72 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    /// <summary>
-    /// Configuration used by the consumer
-    /// </summary>
-    public class ConsumerConfig : ZKConfig
-    {
-        public const short DefaultNumberOfTries = 2;
-
-        public short NumberOfTries { get; set; }
-
-        public string Host { get; set; }
-
-        public int Port { get; set; }
-
-        public string GroupId { get; set; }
-
-        public int Timeout { get; set; }
-
-        public string AutoOffsetReset { get; set; }
-
-        public bool AutoCommit { get; set; }
-
-        public int AutoCommitIntervalMs { get; set; }
-
-        public int FetchSize { get; set; }
-
-        public int BackOffIncrementMs { get; set; }
-
-        public ConsumerConfig()
-        {
-            this.NumberOfTries = DefaultNumberOfTries;
-        }
-
-        public ConsumerConfig(KafkaClientConfiguration kafkaClientConfiguration) : this()
-        {
-            this.Host = kafkaClientConfiguration.KafkaServer.Address;
-            this.Port = kafkaClientConfiguration.KafkaServer.Port;
-            this.NumberOfTries = kafkaClientConfiguration.Consumer.NumberOfTries;
-            this.GroupId = kafkaClientConfiguration.Consumer.GroupId;
-            this.Timeout = kafkaClientConfiguration.Consumer.Timeout;
-            this.AutoOffsetReset = kafkaClientConfiguration.Consumer.AutoOffsetReset;
-            this.AutoCommit = kafkaClientConfiguration.Consumer.AutoCommit;
-            this.AutoCommitIntervalMs = kafkaClientConfiguration.Consumer.AutoCommitIntervalMs;
-            this.FetchSize = kafkaClientConfiguration.Consumer.FetchSize;
-            this.BackOffIncrementMs = kafkaClientConfiguration.Consumer.BackOffIncrementMs;
-            if (kafkaClientConfiguration.IsZooKeeperEnabled)
-            {
-                this.ZkConnect = kafkaClientConfiguration.ZooKeeperServers.AddressList;
-                this.ZkSessionTimeoutMs = kafkaClientConfiguration.ZooKeeperServers.SessionTimeout;
-                this.ZkConnectionTimeoutMs = kafkaClientConfiguration.ZooKeeperServers.ConnectionTimeout;
-            }
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/IAsyncProducerConfigShared.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/IAsyncProducerConfigShared.cs
deleted file mode 100644
index b84f7ec..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/IAsyncProducerConfigShared.cs
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    internal interface IAsyncProducerConfigShared
-    {
-        string SerializerClass { get; set; }
-
-        string CallbackHandlerClass { get; set; }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ISyncProducerConfigShared.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ISyncProducerConfigShared.cs
deleted file mode 100644
index f18d5c5..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ISyncProducerConfigShared.cs
+++ /dev/null
@@ -1,30 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    internal interface ISyncProducerConfigShared
-    {
-        int BufferSize { get; set; }
-
-        int ConnectTimeout { get; set; }
-
-        int SocketTimeout { get; set; }
-
-        int MaxMessageSize { get; set; }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/KafkaClientConfiguration.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/KafkaClientConfiguration.cs
deleted file mode 100644
index db8a664..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/KafkaClientConfiguration.cs
+++ /dev/null
@@ -1,93 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System.Configuration;
-    using System.Text;
-
-    /// <summary>
-    /// Implementation of the custom configuration section for the kafka client
-    /// </summary>
-    public class KafkaClientConfiguration : ConfigurationSection
-    {
-        private static KafkaClientConfiguration config = ConfigurationManager.GetSection("kafkaClientConfiguration") as KafkaClientConfiguration;
-        private bool enabled = true;
-
-        public static KafkaClientConfiguration GetConfiguration()
-        {
-            config.enabled = !string.IsNullOrEmpty(config.ZooKeeperServers.AddressList);
-            return config;
-        }
-
-        [ConfigurationProperty("kafkaServer")]
-        public KafkaServer KafkaServer
-        {
-            get { return (KafkaServer)this["kafkaServer"]; }
-            set { this["kafkaServer"] = value; }
-        }
-
-        [ConfigurationProperty("consumer")]
-        public Consumer Consumer
-        {
-            get { return (Consumer)this["consumer"]; }
-            set { this["consumer"] = value; }
-        }
-
-        [ConfigurationProperty("brokerPartitionInfos")]
-        public BrokerPartitionInfoCollection BrokerPartitionInfos
-        {
-            get
-            {
-                return (BrokerPartitionInfoCollection)this["brokerPartitionInfos"] ??
-                       new BrokerPartitionInfoCollection();
-            }
-        }
-
-        [ConfigurationProperty("zooKeeperServers")]
-        public ZooKeeperServers ZooKeeperServers
-        {
-            get { return (ZooKeeperServers)this["zooKeeperServers"]; }
-            set { this["zooKeeperServers"] = value; }
-        }
-
-        public string GetBrokerPartitionInfosAsString()
-        {
-            StringBuilder sb = new StringBuilder();
-            for (int i = 0; i < BrokerPartitionInfos.Count; i++)
-            {
-                sb.Append(BrokerPartitionInfos[i].GetBrokerPartitionInfoAsString());
-                if ((i + 1) < BrokerPartitionInfos.Count)
-                {
-                    sb.Append(",");
-                }
-            }
-
-            return sb.ToString();
-        }
-
-        internal void SupressZooKeeper()
-        {
-            this.enabled = false;
-        }
-
-        public bool IsZooKeeperEnabled
-        {
-            get { return this.enabled; }
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/KafkaServer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/KafkaServer.cs
deleted file mode 100644
index e3c3af2..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/KafkaServer.cs
+++ /dev/null
@@ -1,52 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System.Configuration;
-
-    public class KafkaServer : ConfigurationElement
-    {
-        [ConfigurationProperty("address")]
-        public string Address
-        {
-            get
-            {
-                return (string)this["address"];
-            }
-
-            set
-            {
-                this["address"] = value;
-            }
-        }
-
-        [ConfigurationProperty("port")]
-        public int Port
-        {
-            get
-            {
-                return (int)this["port"];
-            }
-
-            set
-            {
-                this["port"] = value;
-            }
-        }
-    }
-}
\ No newline at end of file
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ProducerConfig.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ProducerConfig.cs
deleted file mode 100644
index d2c0b8a..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ProducerConfig.cs
+++ /dev/null
@@ -1,94 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System;
-    using System.Collections.Generic;
-    using Kafka.Client.Producers;
-    using Kafka.Client.Producers.Partitioning;
-    using Kafka.Client.Utils;
-
-    /// <summary>
-    /// High-level API configuration for the producer
-    /// </summary>
-    public class ProducerConfig : ZKConfig, ISyncProducerConfigShared, IAsyncProducerConfigShared
-    {
-        public const ProducerTypes DefaultProducerType = ProducerTypes.Sync;
-        public static readonly string DefaultPartitioner = typeof(DefaultPartitioner<>).FullName;
-
-        public ProducerConfig()
-        {
-            this.ProducerType = DefaultProducerType;
-            this.BufferSize = SyncProducerConfig.DefaultBufferSize;
-            this.ConnectTimeout = SyncProducerConfig.DefaultConnectTimeout;
-            this.SocketTimeout = SyncProducerConfig.DefaultSocketTimeout;
-            this.ReconnectInterval = SyncProducerConfig.DefaultReconnectInterval;
-            this.MaxMessageSize = SyncProducerConfig.DefaultMaxMessageSize;
-            this.QueueTime = AsyncProducerConfig.DefaultQueueTime;
-            this.QueueSize = AsyncProducerConfig.DefaultQueueSize;
-            this.BatchSize = AsyncProducerConfig.DefaultBatchSize;
-            this.SerializerClass = AsyncProducerConfig.DefaultSerializerClass; 
-        }
-
-        public ProducerConfig(KafkaClientConfiguration kafkaClientConfiguration) 
-            : this()
-        {
-            Guard.Assert<ArgumentNullException>(() => kafkaClientConfiguration != null);
-            if (kafkaClientConfiguration.IsZooKeeperEnabled)
-            {
-                this.ZkConnect = kafkaClientConfiguration.ZooKeeperServers.AddressList;
-                this.ZkSessionTimeoutMs = kafkaClientConfiguration.ZooKeeperServers.SessionTimeout;
-                this.ZkConnectionTimeoutMs = kafkaClientConfiguration.ZooKeeperServers.ConnectionTimeout;
-            }
-
-            this.BrokerPartitionInfo = kafkaClientConfiguration.GetBrokerPartitionInfosAsString();
-        }
-
-        public string BrokerPartitionInfo { get; set; }
-
-        public string PartitionerClass { get; set; }
-
-        public ProducerTypes ProducerType { get; set; }
-
-        public int BufferSize { get; set; }
-
-        public int ConnectTimeout { get; set; }
-
-        public int SocketTimeout { get; set; }
-
-        public int ReconnectInterval { get; set; }
-
-        public int MaxMessageSize { get; set; }
-
-        public int QueueTime { get; set; }
-
-        public int QueueSize { get; set; }
-
-        public int BatchSize { get; set; }
-
-        public string SerializerClass { get; set; }
-
-        public string CallbackHandler { get; set; }
-
-        public string EventHandler { get; set; }
-
-        public IDictionary<string, string> CallbackHandlerProps { get; set; }
-
-        public IDictionary<string, string> EventHandlerProps { get; set; }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/SyncProducerConfig.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/SyncProducerConfig.cs
deleted file mode 100644
index f88c301..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/SyncProducerConfig.cs
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System;
-    using Kafka.Client.Utils;
-
-    public class SyncProducerConfig : ISyncProducerConfigShared
-    {
-        public const int DefaultBufferSize = 102400;
-
-        public const int DefaultConnectTimeout = 5000;
-
-        public const int DefaultSocketTimeout = 30000;
-
-        public const int DefaultReconnectInterval = 30000;
-
-        public const int DefaultMaxMessageSize = 1000000;
-
-        public SyncProducerConfig()
-        {
-            this.BufferSize = DefaultBufferSize;
-            this.ConnectTimeout = DefaultConnectTimeout;
-            this.SocketTimeout = DefaultSocketTimeout;
-            this.ReconnectInterval = DefaultReconnectInterval;
-            this.MaxMessageSize = DefaultMaxMessageSize;
-        }
-
-        public SyncProducerConfig(KafkaClientConfiguration kafkaClientConfiguration) : this()
-        {
-            Guard.Assert<ArgumentNullException>(() => kafkaClientConfiguration != null);
-
-            this.Host = kafkaClientConfiguration.KafkaServer.Address;
-            this.Port = kafkaClientConfiguration.KafkaServer.Port;
-        }
-
-        public int BufferSize { get; set; }
-
-        public int ConnectTimeout { get; set; }
-
-        public int SocketTimeout { get; set; }
-
-        public int ReconnectInterval { get; set; }
-
-        public int MaxMessageSize { get; set; }
-
-        public string Host { get; set; }
-
-        public int Port { get; set; }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ZKConfig.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ZKConfig.cs
deleted file mode 100644
index 044279f..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ZKConfig.cs
+++ /dev/null
@@ -1,43 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    public class ZKConfig
-    {
-        public ZKConfig()
-            : this(null, 6000, 6000, 2000)
-        {
-        }
-
-        public ZKConfig(string zkconnect, int zksessionTimeoutMs, int zkconnectionTimeoutMs, int zksyncTimeMs)
-        {
-            this.ZkConnect = zkconnect;
-            this.ZkConnectionTimeoutMs = zkconnectionTimeoutMs;
-            this.ZkSessionTimeoutMs = zksessionTimeoutMs;
-            this.ZkSyncTimeMs = zksyncTimeMs;
-        }
-
-        public string ZkConnect { get; set; }
-
-        public int ZkSessionTimeoutMs { get; set; }
-
-        public int ZkConnectionTimeoutMs { get; set; }
-
-        public int ZkSyncTimeMs { get; set; }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ZooKeeperServers.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ZooKeeperServers.cs
deleted file mode 100644
index 1e357b1..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cfg/ZooKeeperServers.cs
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cfg
-{
-    using System.Configuration;
-
-    public class ZooKeeperServers : ConfigurationElement
-    {
-        [ConfigurationProperty("addressList")]
-        public string AddressList
-        {
-            get { return (string)this["addressList"]; }
-            set { this["addressList"] = value; }
-        }
-
-        [ConfigurationProperty("sessionTimeout")]
-        public int SessionTimeout
-        {
-            get { return (int)this["sessionTimeout"]; }
-            set { this["sessionTimeout"] = value; }
-        }
-
-        [ConfigurationProperty("connectionTimeout")]
-        public int ConnectionTimeout
-        {
-            get { return (int)this["connectionTimeout"]; }
-            set { this["connectionTimeout"] = value; }
-        }
-    }
-}
\ No newline at end of file
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Broker.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Broker.cs
deleted file mode 100644
index fd57147..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Broker.cs
+++ /dev/null
@@ -1,68 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cluster
-{
-    /// <summary>
-    /// Represents Kafka broker
-    /// </summary>
-    internal class Broker
-    {
-        /// <summary>
-        /// Initializes a new instance of the <see cref="Broker"/> class.
-        /// </summary>
-        /// <param name="id">
-        /// The broker id.
-        /// </param>
-        /// <param name="creatorId">
-        /// The broker creator id.
-        /// </param>
-        /// <param name="host">
-        /// The broker host.
-        /// </param>
-        /// <param name="port">
-        /// The broker port.
-        /// </param>
-        public Broker(int id, string creatorId, string host, int port)
-        {
-            this.Id = id;
-            this.CreatorId = creatorId;
-            this.Host = host;
-            this.Port = port;
-        }
-
-        /// <summary>
-        /// Gets the broker Id.
-        /// </summary>
-        public int Id { get; private set; }
-
-        /// <summary>
-        /// Gets the broker creatorId.
-        /// </summary>
-        public string CreatorId { get; private set; }
-
-        /// <summary>
-        /// Gets the broker host.
-        /// </summary>
-        public string Host { get; private set; }
-
-        /// <summary>
-        /// Gets the broker port.
-        /// </summary>
-        public int Port { get; private set; }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Cluster.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Cluster.cs
deleted file mode 100644
index eca7b30..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Cluster.cs
+++ /dev/null
@@ -1,130 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-using System;
-using System.Globalization;
-
-namespace Kafka.Client.Cluster
-{
-    using System.Collections.Generic;
-using Kafka.Client.ZooKeeperIntegration;
-
-    /// <summary>
-    /// The set of active brokers in the cluster
-    /// </summary>
-    internal class Cluster
-    {
-        private readonly Dictionary<int, Broker> brokers = new Dictionary<int, Broker>();
-
-        /// <summary>
-        /// Initializes a new instance of the <see cref="Cluster"/> class.
-        /// </summary>
-        public Cluster()
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the <see cref="Cluster"/> class.
-        /// </summary>
-        /// <param name="zkClient">IZooKeeperClient object</param>
-        public Cluster(IZooKeeperClient zkClient)
-        {
-            var nodes = zkClient.GetChildrenParentMayNotExist(ZooKeeperClient.DefaultBrokerIdsPath);
-            foreach (var node in nodes)
-            {
-                var brokerZkString = zkClient.ReadData<string>(ZooKeeperClient.DefaultBrokerIdsPath + "/" + node);
-                Broker broker = this.CreateBroker(node, brokerZkString);
-                if (brokers.ContainsKey(broker.Id))
-                {
-                    brokers[broker.Id] = broker;
-                }
-                else
-                {
-                    brokers.Add(broker.Id, broker);
-                }
-            }
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the <see cref="Cluster"/> class.
-        /// </summary>
-        /// <param name="brokers">
-        /// The set of active brokers.
-        /// </param>
-        public Cluster(IEnumerable<Broker> brokers)
-        {
-            foreach (var broker in brokers)
-            {
-                this.brokers.Add(broker.Id, broker);
-            }
-        }
-
-        /// <summary>
-        /// Gets broker with given ID
-        /// </summary>
-        /// <param name="id">
-        /// The broker ID.
-        /// </param>
-        /// <returns>
-        /// The broker with given ID
-        /// </returns>
-        public Broker GetBroker(int id)
-        {
-            if (this.brokers.ContainsKey(id))
-            {
-                return this.brokers[id];
-            }
-
-            return null;
-        }
-
-        /// <summary>
-        /// Creates a new Broker object out of a BrokerInfoString
-        /// </summary>
-        /// <param name="node">node string</param>
-        /// <param name="brokerInfoString">the BrokerInfoString</param>
-        /// <returns>Broker object</returns>
-        private Broker CreateBroker(string node, string brokerInfoString)
-        {
-            int id;
-            if (int.TryParse(node, NumberStyles.Integer, CultureInfo.InvariantCulture, out id))
-            {
-                var brokerInfo = brokerInfoString.Split(':');
-                if (brokerInfo.Length > 2)
-                {
-                    int port;
-                    if (int.TryParse(brokerInfo[2], NumberStyles.Integer, CultureInfo.InvariantCulture, out port))
-                    {
-                        return new Broker(id, brokerInfo[0], brokerInfo[1], int.Parse(brokerInfo[2], CultureInfo.InvariantCulture));
-                    }
-                    else
-                    {
-                        throw new ArgumentException(String.Format(CultureInfo.CurrentCulture, "{0} is not a valid integer", brokerInfo[2]));
-                    }
-                }
-                else
-                {
-                    throw new ArgumentException(String.Format(CultureInfo.CurrentCulture, "{0} is not a valid BrokerInfoString", brokerInfoString));
-                }
-            }
-            else
-            {
-                throw new ArgumentException(String.Format(CultureInfo.CurrentCulture, "{0} is not a valid integer", node));
-            }
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Partition.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Partition.cs
deleted file mode 100644
index a7164ed..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Cluster/Partition.cs
+++ /dev/null
@@ -1,147 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Cluster
-{
-    using System;
-    using System.Globalization;
-
-    /// <summary>
-    /// Represents broker partition
-    /// </summary>
-    internal class Partition : IComparable<Partition>
-    {
-        /// <summary>
-        /// Factory method that instantiates <see cref="Partition"/>  object based on configuration given as string
-        /// </summary>
-        /// <param name="partition">
-        /// The partition info.
-        /// </param>
-        /// <returns>
-        /// Instantiated <see cref="Partition"/>  object
-        /// </returns>
-        public static Partition ParseFrom(string partition)
-        {
-            var pieces = partition.Split('-');
-            if (pieces.Length != 2)
-            {
-                throw new ArgumentException("Expected name in the form x-y");
-            }
-
-            return new Partition(int.Parse(pieces[0], CultureInfo.InvariantCulture), int.Parse(pieces[1], CultureInfo.InvariantCulture));
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the <see cref="Partition"/> class.
-        /// </summary>
-        /// <param name="brokerId">
-        /// The broker ID.
-        /// </param>
-        /// <param name="partId">
-        /// The partition ID.
-        /// </param>
-        public Partition(int brokerId, int partId)
-        {
-            this.BrokerId = brokerId;
-            this.PartId = partId;
-        }
-
-        /// <summary>
-        /// Gets the broker Dd.
-        /// </summary>
-        public int BrokerId { get; private set; }
-
-        /// <summary>
-        /// Gets the partition ID.
-        /// </summary>
-        public int PartId { get; private set; }
-
-        /// <summary>
-        /// Gets broker name as concatanate broker ID and partition ID
-        /// </summary>
-        public string Name
-        {
-            get { return this.BrokerId + "-" + this.PartId; }
-        }
-
-        /// <summary>
-        /// Compares current object with another object of type <see cref="Partition"/>
-        /// </summary>
-        /// <param name="other">
-        /// The other object.
-        /// </param>
-        /// <returns>
-        /// 0 if equals, positive number if greater and negative otherwise
-        /// </returns>
-        public int CompareTo(Partition other)
-        {
-            if (this.BrokerId == other.BrokerId)
-            {
-                return this.PartId - other.PartId;
-            }
-
-            return this.BrokerId - other.BrokerId;
-        }
-
-        /// <summary>
-        /// Gets string representation of current object
-        /// </summary>
-        /// <returns>
-        /// String that represents current object
-        /// </returns>
-        public override string ToString()
-        {
-            return "(" + this.BrokerId + "," + this.PartId + ")";
-        }
-
-        /// <summary>
-        /// Determines whether a given object is equal to the current object
-        /// </summary>
-        /// <param name="obj">
-        /// The other object.
-        /// </param>
-        /// <returns>
-        /// Equality of given and current objects
-        /// </returns>
-        public override bool Equals(object obj)
-        {
-            if (obj == null)
-            {
-                return false;
-            }
-
-            var other = obj as Partition;
-            if (other == null)
-            {
-                return false;
-            }
-
-            return this.BrokerId == other.BrokerId && this.PartId == other.PartId;
-        }
-
-        /// <summary>
-        /// Gets hash code of current object
-        /// </summary>
-        /// <returns>
-        /// Hash code
-        /// </returns>
-        public override int GetHashCode()
-        {
-            return (31 * (17 + this.BrokerId)) + this.PartId;
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumer.cs
deleted file mode 100644
index 4a3e80e..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumer.cs
+++ /dev/null
@@ -1,249 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-using Kafka.Client.Request;
-using Kafka.Client.Util;
-
-namespace Kafka.Client
-{
-    /// <summary>
-    /// Consumes messages from Kafka.
-    /// </summary>
-    public class Consumer
-    {
-        /// <summary>
-        /// Maximum size.
-        /// </summary>
-        private static readonly int MaxSize = 1048576;
-
-        /// <summary>
-        /// Initializes a new instance of the Consumer class.
-        /// </summary>
-        /// <param name="server">The server to connect to.</param>
-        /// <param name="port">The port to connect to.</param>
-        public Consumer(string server, int port)
-        {
-            Server = server;
-            Port = port;
-        }
-
-        /// <summary>
-        /// Gets the server to which the connection is to be established.
-        /// </summary>
-        public string Server { get; private set; }
-
-        /// <summary>
-        /// Gets the port to which the connection is to be established.
-        /// </summary>
-        public int Port { get; private set; }
-
-        /// <summary>
-        /// Consumes messages from Kafka.
-        /// </summary>
-        /// <param name="topic">The topic to consume from.</param>
-        /// <param name="partition">The partition to consume from.</param>
-        /// <param name="offset">The offset to start at.</param>
-        /// <returns>A list of messages from Kafka.</returns>
-        public List<Message> Consume(string topic, int partition, long offset)
-        {
-            return Consume(topic, partition, offset, MaxSize);
-        }
-
-        /// <summary>
-        /// Consumes messages from Kafka.
-        /// </summary>
-        /// <param name="topic">The topic to consume from.</param>
-        /// <param name="partition">The partition to consume from.</param>
-        /// <param name="offset">The offset to start at.</param>
-        /// <param name="maxSize">The maximum size.</param>
-        /// <returns>A list of messages from Kafka.</returns>
-        public List<Message> Consume(string topic, int partition, long offset, int maxSize)
-        {
-            return Consume(new FetchRequest(topic, partition, offset, maxSize));
-        }
-
-        /// <summary>
-        /// Consumes messages from Kafka.
-        /// </summary>
-        /// <param name="request">The request to send to Kafka.</param>
-        /// <returns>A list of messages from Kafka.</returns>
-        public List<Message> Consume(FetchRequest request)
-        {
-            List<Message> messages = new List<Message>();
-            using (KafkaConnection connection = new KafkaConnection(Server, Port))
-            {
-                connection.Write(request.GetBytes());
-                int dataLength = BitConverter.ToInt32(BitWorks.ReverseBytes(connection.Read(4)), 0);
-
-                if (dataLength > 0) 
-                {
-                    byte[] data = connection.Read(dataLength);
-
-                    int errorCode = BitConverter.ToInt16(BitWorks.ReverseBytes(data.Take(2).ToArray<byte>()), 0);
-                    if (errorCode != KafkaException.NoError)
-                    {
-                        throw new KafkaException(errorCode);
-                    }
-
-                    // skip the error code and process the rest
-                    byte[] unbufferedData = data.Skip(2).ToArray();
-
-                    int processed = 0;
-                    int length = unbufferedData.Length - 4;
-                    int messageSize = 0;
-                    while (processed <= length) 
-                    {
-                        messageSize = BitConverter.ToInt32(BitWorks.ReverseBytes(unbufferedData.Skip(processed).Take(4).ToArray<byte>()), 0);
-                        messages.Add(Message.ParseFrom(unbufferedData.Skip(processed).Take(messageSize + 4).ToArray<byte>()));
-                        processed += 4 + messageSize;
-                    }
-                }
-            }
-
-            return messages;
-        }
-
-        /// <summary>
-        /// Executes a multi-fetch operation.
-        /// </summary>
-        /// <param name="request">The request to push to Kafka.</param>
-        /// <returns>
-        /// A list containing sets of messages. The message sets should match the request order.
-        /// </returns>
-        public List<List<Message>> Consume(MultiFetchRequest request)
-        {
-            int fetchRequests = request.ConsumerRequests.Count;
-
-            List<List<Message>> messages = new List<List<Message>>();
-            using (KafkaConnection connection = new KafkaConnection(Server, Port))
-            {
-                connection.Write(request.GetBytes());
-                int dataLength = BitConverter.ToInt32(BitWorks.ReverseBytes(connection.Read(4)), 0);
-
-                if (dataLength > 0)
-                {
-                    byte[] data = connection.Read(dataLength);
-
-                    int position = 0;
-
-                    int errorCode = BitConverter.ToInt16(BitWorks.ReverseBytes(data.Take(2).ToArray<byte>()), 0);
-                    if (errorCode != KafkaException.NoError)
-                    {
-                        throw new KafkaException(errorCode);
-                    }
-
-                    // skip the error code and process the rest
-                    position = position + 2;
-
-                    for (int ix = 0; ix < fetchRequests; ix++)
-                    {
-                        messages.Add(new List<Message>()); 
-
-                        int messageSetSize = BitConverter.ToInt32(BitWorks.ReverseBytes(data.Skip(position).Take(4).ToArray<byte>()), 0);
-                        position = position + 4;
-
-                        errorCode = BitConverter.ToInt16(BitWorks.ReverseBytes(data.Skip(position).Take(2).ToArray<byte>()), 0);
-                        if (errorCode != KafkaException.NoError)
-                        {
-                            throw new KafkaException(errorCode);
-                        }
-
-                        // skip the error code and process the rest
-                        position = position + 2;
-
-                        byte[] messageSetBytes = data.Skip(position).ToArray<byte>().Take(messageSetSize).ToArray<byte>();
-
-                        int processed = 0;
-                        int messageSize = 0;
-
-                        // dropped 2 bytes at the end...padding???
-                        while (processed < messageSetBytes.Length - 2)
-                        {
-                            messageSize = BitConverter.ToInt32(BitWorks.ReverseBytes(messageSetBytes.Skip(processed).Take(4).ToArray<byte>()), 0);
-                            messages[ix].Add(Message.ParseFrom(messageSetBytes.Skip(processed).Take(messageSize + 4).ToArray<byte>()));
-                            processed += 4 + messageSize;
-                        }
-
-                        position = position + processed;
-                    }
-                }
-            }
-
-            return messages;
-        }
-
-        /// <summary>
-        /// Get a list of valid offsets (up to maxSize) before the given time.
-        /// </summary>
-        /// <param name="topic">The topic to check.</param>
-        /// <param name="partition">The partition on the topic.</param>
-        /// <param name="time">time in millisecs (if -1, just get from the latest available)</param>
-        /// <param name="maxNumOffsets">That maximum number of offsets to return.</param>
-        /// <returns>List of offsets, in descending order.</returns>
-        public IList<long> GetOffsetsBefore(string topic, int partition, long time, int maxNumOffsets)
-        {
-            return GetOffsetsBefore(new OffsetRequest(topic, partition, time, maxNumOffsets));
-        }
-
-        /// <summary>
-        /// Get a list of valid offsets (up to maxSize) before the given time.
-        /// </summary>
-        /// <param name="request">The offset request.</param>
-        /// <returns>List of offsets, in descending order.</returns>
-        public IList<long> GetOffsetsBefore(OffsetRequest request)
-        {
-            List<long> offsets = new List<long>();
-
-            using (KafkaConnection connection = new KafkaConnection(Server, Port))
-            {
-                connection.Write(request.GetBytes());
-
-                int dataLength = BitConverter.ToInt32(BitWorks.ReverseBytes(connection.Read(4)), 0);
-                
-                if (dataLength > 0)
-                {
-                    byte[] data = connection.Read(dataLength);
-
-                    int errorCode = BitConverter.ToInt16(BitWorks.ReverseBytes(data.Take(2).ToArray<byte>()), 0);
-                    if (errorCode != KafkaException.NoError)
-                    {
-                        throw new KafkaException(errorCode);
-                    }
-
-                    // skip the error code and process the rest
-                    byte[] unbufferedData = data.Skip(2).ToArray();
-
-                    // first four bytes are the number of offsets
-                    int numOfOffsets = BitConverter.ToInt32(BitWorks.ReverseBytes(unbufferedData.Take(4).ToArray<byte>()), 0);
-
-                    int position = 0;
-                    for (int ix = 0; ix < numOfOffsets; ix++)
-                    {
-                        position = (ix * 8) + 4;
-                        offsets.Add(BitConverter.ToInt64(BitWorks.ReverseBytes(unbufferedData.Skip(position).Take(8).ToArray<byte>()), 0));
-                    }
-                }
-            }
-
-            return offsets;
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/Consumer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/Consumer.cs
deleted file mode 100644
index 511dcc9..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/Consumer.cs
+++ /dev/null
@@ -1,235 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Consumers
-{
-    using System;
-    using System.Collections.Generic;
-    using System.Globalization;
-    using System.Reflection;
-    using Kafka.Client.Cfg;
-    using Kafka.Client.Exceptions;
-    using Kafka.Client.Messages;
-    using Kafka.Client.Requests;
-    using Kafka.Client.Utils;
-    using log4net;
-
-    /// <summary>
-    /// The low-level API of consumer of Kafka messages
-    /// </summary>
-    /// <remarks>
-    /// Maintains a connection to a single broker and has a close correspondence
-    /// to the network requests sent to the server.
-    /// Also, is completely stateless.
-    /// </remarks>
-    public class Consumer : IConsumer
-    {
-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
-
-        private readonly ConsumerConfiguration config;
-        private readonly string host;
-        private readonly int port;
-
-        /// <summary>
-        /// Initializes a new instance of the <see cref="Consumer"/> class.
-        /// </summary>
-        /// <param name="config">
-        /// The consumer configuration.
-        /// </param>
-        public Consumer(ConsumerConfiguration config)
-        {
-            Guard.NotNull(config, "config");
-
-            this.config = config;
-            this.host = config.Broker.Host;
-            this.port = config.Broker.Port;
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the <see cref="Consumer"/> class.
-        /// </summary>
-        /// <param name="config">
-        /// The consumer configuration.
-        /// </param>
-        /// <param name="host"></param>
-        /// <param name="port"></param>
-        public Consumer(ConsumerConfiguration config, string host, int port)
-        {
-            Guard.NotNull(config, "config");
-
-            this.config = config;
-            this.host = host;
-            this.port = port;
-        }
-
-        /// <summary>
-        /// Fetch a set of messages from a topic.
-        /// </summary>
-        /// <param name="request">
-        /// Specifies the topic name, topic partition, starting byte offset, maximum bytes to be fetched.
-        /// </param>
-        /// <returns>
-        /// A set of fetched messages.
-        /// </returns>
-        /// <remarks>
-        /// Offset is passed in on every request, allowing the user to maintain this metadata
-        /// however they choose.
-        /// </remarks>
-        public BufferedMessageSet Fetch(FetchRequest request)
-        {
-            short tryCounter = 1;
-            while (tryCounter <= this.config.NumberOfTries)
-            {
-                try
-                {
-                    using (var conn = new KafkaConnection(
-                        this.host,
-                        this.port,
-                        this.config.BufferSize,
-                        this.config.SocketTimeout))
-                    {
-                        conn.Write(request);
-                        int size = conn.Reader.ReadInt32();
-                        return BufferedMessageSet.ParseFrom(conn.Reader, size);
-                    }
-                }
-                catch (Exception ex)
-                {
-                    //// if maximum number of tries reached
-                    if (tryCounter == this.config.NumberOfTries)
-                    {
-                        throw;
-                    }
-
-                    tryCounter++;
-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "Fetch reconnect due to {0}", ex);
-                }
-            }
-
-            return null;
-        }
-
-        /// <summary>
-        /// Combine multiple fetch requests in one call.
-        /// </summary>
-        /// <param name="request">
-        /// The list of fetch requests.
-        /// </param>
-        /// <returns>
-        /// A list of sets of fetched messages.
-        /// </returns>
-        /// <remarks>
-        /// Offset is passed in on every request, allowing the user to maintain this metadata 
-        /// however they choose.
-        /// </remarks>

-        public IList<BufferedMessageSet> MultiFetch(MultiFetchRequest request)

-        {

-            var result = new List<BufferedMessageSet>();

-            short tryCounter = 1;

-            while (tryCounter <= this.config.NumberOfTries)

-            {

-                try

-                {

-                    using (var conn = new KafkaConnection(

-                        this.host,

-                        this.port,

-                        this.config.BufferSize,

-                        this.config.SocketTimeout))

-                    {

-                        conn.Write(request);

-                        int size = conn.Reader.ReadInt32();

-                        return BufferedMessageSet.ParseMultiFrom(conn.Reader, size, request.ConsumerRequests.Count);

-                    }

-                }

-                catch (Exception ex)

-                {

-                    // if maximum number of tries reached

-                    if (tryCounter == this.config.NumberOfTries)

-                    {

-                        throw;

-                    }

-

-                    tryCounter++;

-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "MultiFetch reconnect due to {0}", ex);

-                }

-            }

-

-            return result;

-        }

-

-        /// <summary>

-        /// Gets a list of valid offsets (up to maxSize) before the given time.

-        /// </summary>

-        /// <param name="request">

-        /// The offset request.

-        /// </param>

-        /// <returns>

-        /// The list of offsets, in descending order.

-        /// </returns>

-        public IList<long> GetOffsetsBefore(OffsetRequest request)

-        {

-            var result = new List<long>();

-            short tryCounter = 1;

-            while (tryCounter <= this.config.NumberOfTries)

-            {

-                try

-                {

-                    using (var conn = new KafkaConnection(

-                        this.host,

-                        this.port,

-                        this.config.BufferSize,

-                        this.config.SocketTimeout))

-                    {

-                        conn.Write(request);

-                        int size = conn.Reader.ReadInt32();

-                        if (size == 0)

-                        {

-                            return result;

-                        }

-

-                        short errorCode = conn.Reader.ReadInt16();

-                        if (errorCode != KafkaException.NoError)

-                        {

-                            throw new KafkaException(errorCode);

-                        }

-

-                        int count = conn.Reader.ReadInt32();

-                        for (int i = 0; i < count; i++)

-                        {

-                            result.Add(conn.Reader.ReadInt64());

-                        }

-

-                        return result;

-                    }

-                }

-                catch (Exception ex)

-                {

-                    //// if maximum number of tries reached

-                    if (tryCounter == this.config.NumberOfTries)

-                    {

-                        throw;

-                    }

-

-                    tryCounter++;

-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "GetOffsetsBefore reconnect due to {0}", ex);

-                }

-            }

-

-            return result;

-        }

-    }

-}
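
A hedged usage sketch of the deleted low-level API above: build a FetchRequest, fetch once, and let the caller track the next offset, since the API is stateless. The ConsumerConfiguration set-up and the exact enumeration shape of BufferedMessageSet are assumptions here, not taken from this file.

    using System;
    using Kafka.Client.Cfg;
    using Kafka.Client.Consumers;
    using Kafka.Client.Requests;

    internal class LowLevelFetchExample
    {
        internal static void Main()
        {
            var config = new ConsumerConfiguration();                 // assumption: default constructor exists
            IConsumer consumer = new Consumer(config, "broker-host", 9092);

            long offset = 0L;
            var request = new FetchRequest("test-topic", 0, offset, 1024 * 1024);
            var messages = consumer.Fetch(request);                   // retried internally up to NumberOfTries

            foreach (var messageAndOffset in messages)                // assumption: the set enumerates message/offset pairs
            {
                Console.WriteLine("message at offset " + messageAndOffset.Offset);
                offset = messageAndOffset.Offset;                     // the caller owns the next fetch offset
            }
        }
    }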

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ConsumerIterator.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ConsumerIterator.cs
deleted file mode 100644
index cc1e0c0..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ConsumerIterator.cs
+++ /dev/null
@@ -1,211 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System;

-    using System.Collections;

-    using System.Collections.Concurrent;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Reflection;

-    using Kafka.Client.Exceptions;

-    using Kafka.Client.Messages;

-    using log4net;

-

-    /// <summary>

-    /// An iterator that blocks until a value can be read from the supplied queue.

-    /// </summary>

-    /// <remarks>

-    /// The iterator takes a shutdownCommand object which can be added to the queue to trigger a shutdown

-    /// </remarks>

-    internal class ConsumerIterator : IEnumerator<Message>

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-        private readonly BlockingCollection<FetchedDataChunk> channel;

-        private readonly int consumerTimeoutMs;

-        private PartitionTopicInfo currentTopicInfo;

-        private ConsumerIteratorState state = ConsumerIteratorState.NotReady;

-        private IEnumerator<MessageAndOffset> current;

-        private FetchedDataChunk currentDataChunk = null;

-        private Message nextItem;

-        private long consumedOffset = -1;

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ConsumerIterator"/> class.

-        /// </summary>

-        /// <param name="channel">

-        /// The queue containing fetched data chunks to iterate over.

-        /// </param>

-        /// <param name="consumerTimeoutMs">

-        /// The consumer timeout in ms.

-        /// </param>

-        public ConsumerIterator(BlockingCollection<FetchedDataChunk> channel, int consumerTimeoutMs)

-        {

-            this.channel = channel;

-            this.consumerTimeoutMs = consumerTimeoutMs;

-        }

-

-        /// <summary>

-        /// Gets the element in the collection at the current position of the enumerator.

-        /// </summary>

-        /// <returns>

-        /// The element in the collection at the current position of the enumerator.

-        /// </returns>

-        public Message Current

-        {

-            get

-            {

-                if (!MoveNext())

-                {

-                    throw new NoSuchElementException();

-                }

-

-                state = ConsumerIteratorState.NotReady;

-                if (nextItem != null)

-                {

-                    if (consumedOffset < 0)

-                    {

-                        throw new IllegalStateException(String.Format(CultureInfo.CurrentCulture, "Offset returned by the message set is invalid {0}.", consumedOffset));

-                    }

-

-                    currentTopicInfo.ResetConsumeOffset(consumedOffset);

-                    if (Logger.IsDebugEnabled)

-                    {

-                        Logger.DebugFormat(CultureInfo.CurrentCulture, "Setting consumed offset to {0}", consumedOffset);

-                    }

-

-                    return nextItem;

-                }

-

-                throw new IllegalStateException("Expected item but none found.");

-            }

-        }

-

-        /// <summary>

-        /// Gets the current element in the collection.

-        /// </summary>

-        /// <returns>

-        /// The current element in the collection.

-        /// </returns>

-        object IEnumerator.Current

-        {

-            get { return this.Current; }

-        }

-

-        /// <summary>

-        /// Advances the enumerator to the next element of the collection.

-        /// </summary>

-        /// <returns>

-        /// true if the enumerator was successfully advanced to the next element; false if the enumerator has passed the end of the collection.

-        /// </returns>

-        public bool MoveNext()

-        {

-            if (state == ConsumerIteratorState.Failed)

-            {

-                throw new IllegalStateException("Iterator is in failed state");

-            }

-            

-            switch (state)

-            {

-                case ConsumerIteratorState.Done:

-                    return false;

-                case ConsumerIteratorState.Ready:

-                    return true;

-                default:

-                    return MaybeComputeNext();

-            }

-        }

-

-        /// <summary>

-        /// Resets the enumerator's state to NotReady.

-        /// </summary>

-        public void Reset()

-        {

-            state = ConsumerIteratorState.NotReady;

-        }

-

-        public void Dispose()

-        {

-        }

-

-        private bool MaybeComputeNext()

-        {

-            state = ConsumerIteratorState.Failed;

-            nextItem = this.MakeNext();

-            if (state == ConsumerIteratorState.Done)

-            {

-                return false;

-            }

-

-            state = ConsumerIteratorState.Ready;

-            return true;

-        }

-

-        private Message MakeNext()

-        {

-            if (current == null || !current.MoveNext())

-            {

-                if (consumerTimeoutMs < 0)

-                {

-                    currentDataChunk = this.channel.Take();

-                }

-                else

-                {

-                    bool done = channel.TryTake(out currentDataChunk, consumerTimeoutMs);

-                    if (!done)

-                    {

-                        Logger.Debug("Consumer iterator timing out...");

-                        throw new ConsumerTimeoutException();

-                    }

-                }

-

-                if (currentDataChunk.Equals(ZookeeperConsumerConnector.ShutdownCommand))

-                {

-                    Logger.Debug("Received the shutdown command");

-                    channel.Add(currentDataChunk);

-                    return this.AllDone();

-                }

-

-                currentTopicInfo = currentDataChunk.TopicInfo;

-                if (currentTopicInfo.GetConsumeOffset() != currentDataChunk.FetchOffset)

-                {

-                    Logger.ErrorFormat(

-                        CultureInfo.CurrentCulture,

-                        "consumed offset: {0} doesn't match fetch offset: {1} for {2}; consumer may lose data",

-                        currentTopicInfo.GetConsumeOffset(),

-                        currentDataChunk.FetchOffset,

-                        currentTopicInfo);

-                    currentTopicInfo.ResetConsumeOffset(currentDataChunk.FetchOffset);

-                }

-

-                current = currentDataChunk.Messages.GetEnumerator();

-                current.MoveNext();

-            }

-

-            var item = current.Current;

-            consumedOffset = item.Offset;

-            return item.Message;

-        }

-

-        private Message AllDone()

-        {

-            this.state = ConsumerIteratorState.Done;

-            return null;

-        }

-    }

-}
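
The iterator above combines a blocking queue, an optional timeout, and a shutdown sentinel that is re-enqueued so every sibling iterator also sees it. The same pattern, detached from the Kafka types, might look like the following sketch (not part of Kafka.Client):

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    internal static class BlockingSentinelIterator
    {
        public static IEnumerable<T> Consume<T>(BlockingCollection<T> queue, T shutdownSentinel, int timeoutMs)
            where T : class
        {
            while (true)
            {
                T item;
                if (timeoutMs < 0)
                {
                    item = queue.Take();                       // block indefinitely, as when consumerTimeoutMs < 0
                }
                else if (!queue.TryTake(out item, timeoutMs))
                {
                    throw new TimeoutException("consumer iterator timed out");
                }

                if (ReferenceEquals(item, shutdownSentinel))
                {
                    queue.Add(item);                           // re-enqueue so other iterators also stop
                    yield break;
                }

                yield return item;
            }
        }
    }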

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ConsumerIteratorState.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ConsumerIteratorState.cs
deleted file mode 100644
index 7da6e29..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ConsumerIteratorState.cs
+++ /dev/null
@@ -1,28 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-

-namespace Kafka.Client.Consumers

-{

-    internal enum ConsumerIteratorState

-    {

-        Done,

-        Ready,

-        NotReady,

-        Failed

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/FetchedDataChunk.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/FetchedDataChunk.cs
deleted file mode 100644
index 7e20b96..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/FetchedDataChunk.cs
+++ /dev/null
@@ -1,58 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System;

-    using Kafka.Client.Messages;

-

-    internal class FetchedDataChunk : IEquatable<FetchedDataChunk>

-    {

-        public BufferedMessageSet Messages { get; set; }

-

-        public PartitionTopicInfo TopicInfo { get; set; }

-

-        public long FetchOffset { get; set; }

-

-        public FetchedDataChunk(BufferedMessageSet messages, PartitionTopicInfo topicInfo, long fetchOffset)

-        {

-            this.Messages = messages;

-            this.TopicInfo = topicInfo;

-            this.FetchOffset = fetchOffset;

-        }

-

-        public override bool Equals(object obj)

-        {

-            FetchedDataChunk other = obj as FetchedDataChunk;

-            if (other == null)

-            {

-                return false;

-            }

-            else

-            {

-                return this.Equals(other);

-            }

-        }

-

-        public bool Equals(FetchedDataChunk other)

-        {

-            return this.Messages == other.Messages &&

-                    this.TopicInfo == other.TopicInfo &&

-                    this.FetchOffset == other.FetchOffset;

-        }

-    }

-}
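
FetchedDataChunk above overrides Equals(object) without overriding GetHashCode, which the C# compiler reports as warning CS0659. A minimal member-method sketch of a consistent override, assuming the member-wise comparison used by Equals above is the intended notion of equality:

    // Member of FetchedDataChunk; pairs with the Equals overrides above.
    public override int GetHashCode()
    {
        unchecked
        {
            int hash = this.Messages != null ? this.Messages.GetHashCode() : 0;
            hash = (hash * 397) ^ (this.TopicInfo != null ? this.TopicInfo.GetHashCode() : 0);
            hash = (hash * 397) ^ this.FetchOffset.GetHashCode();
            return hash;
        }
    }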

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/Fetcher.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/Fetcher.cs
deleted file mode 100644
index 6c83d75..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/Fetcher.cs
+++ /dev/null
@@ -1,170 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System;

-    using System.Collections.Concurrent;

-    using System.Collections.Generic;

-    using System.Reflection;

-    using System.Threading;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.ZooKeeperIntegration;

-    using log4net;

-

-    /// <summary>

-    /// Background thread that fetches data from a set of servers

-    /// </summary>

-    internal class Fetcher : IDisposable

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-        private readonly ConsumerConfiguration config;

-        private readonly IZooKeeperClient zkClient;

-        private FetcherRunnable[] fetcherWorkerObjects;

-        private volatile bool disposed;

-        private readonly object shuttingDownLock = new object();

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Fetcher"/> class.

-        /// </summary>

-        /// <param name="config">

-        /// The consumer configuration.

-        /// </param>

-        /// <param name="zkClient">

-        /// The wrapper above ZooKeeper client.

-        /// </param>

-        public Fetcher(ConsumerConfiguration config, IZooKeeperClient zkClient)

-        {

-            this.config = config;

-            this.zkClient = zkClient;

-        }

-

-        /// <summary>

-        /// Shuts down all fetch threads

-        /// </summary>

-        private void Shutdown()

-        {

-            if (fetcherWorkerObjects != null)

-            {

-                foreach (FetcherRunnable fetcherRunnable in fetcherWorkerObjects)

-                {

-                    fetcherRunnable.Shutdown();

-                }

-

-                fetcherWorkerObjects = null;

-            }

-        }

-

-        /// <summary>

-        /// Opens connections to brokers.

-        /// </summary>

-        /// <param name="topicInfos">

-        /// The topic infos.

-        /// </param>

-        /// <param name="cluster">

-        /// The cluster.

-        /// </param>

-        /// <param name="queuesToBeCleared">

-        /// The queues to be cleared.

-        /// </param>

-        public void InitConnections(IEnumerable<PartitionTopicInfo> topicInfos, Cluster cluster, IEnumerable<BlockingCollection<FetchedDataChunk>> queuesToBeCleared)

-        {

-            this.EnsuresNotDisposed();

-            this.Shutdown();

-            if (topicInfos == null)

-            {

-                return;

-            }

-

-            foreach (var queueToBeCleared in queuesToBeCleared)

-            {

-                while (queueToBeCleared.Count > 0)

-                {

-                    queueToBeCleared.Take();

-                }

-            }

-

-            var partitionTopicInfoMap = new Dictionary<int, List<PartitionTopicInfo>>();

-

-            //// re-arrange by broker id

-            foreach (var topicInfo in topicInfos)

-            {

-                if (!partitionTopicInfoMap.ContainsKey(topicInfo.BrokerId))

-                {

-                    partitionTopicInfoMap.Add(topicInfo.BrokerId, new List<PartitionTopicInfo>() { topicInfo });

-                }

-                else

-                {

-                    partitionTopicInfoMap[topicInfo.BrokerId].Add(topicInfo);

-                } 

-            }

-

-            //// open a new fetcher thread for each broker

-            fetcherWorkerObjects = new FetcherRunnable[partitionTopicInfoMap.Count];

-            int i = 0;

-            foreach (KeyValuePair<int, List<PartitionTopicInfo>> item in partitionTopicInfoMap)

-            {

-                Broker broker = cluster.GetBroker(item.Key);

-                var fetcherRunnable = new FetcherRunnable("FetcherRunnable-" + i, zkClient, config, broker, item.Value);

-                var threadStart = new ThreadStart(fetcherRunnable.Run);

-                var fetcherThread = new Thread(threadStart);

-                fetcherWorkerObjects[i] = fetcherRunnable;

-                fetcherThread.Start();

-                i++;

-            }

-        }

-

-        public void Dispose()

-        {

-            if (this.disposed)

-            {

-                return;

-            }

-

-            lock (this.shuttingDownLock)

-            {

-                if (this.disposed)

-                {

-                    return;

-                }

-

-                this.disposed = true;

-            }

-

-            try

-            {

-                this.Shutdown();

-            }

-            catch (Exception exc)

-            {

-                Logger.Warn("Ignoring unexpected errors on closing", exc);

-            }

-        }

-

-        /// <summary>

-        /// Ensures that object was not disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}
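
InitConnections above re-arranges the partition descriptors by broker id and starts one FetcherRunnable thread per broker. A stand-alone sketch of that fan-out, using a placeholder PartitionInfo type instead of the Kafka.Client classes:

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;

    internal class PerBrokerFanOutSketch
    {
        internal class PartitionInfo
        {
            public int BrokerId;
            public string Topic;
            public int PartitionId;
        }

        internal static List<Thread> StartFetchers(IEnumerable<PartitionInfo> partitions)
        {
            var threads = new List<Thread>();
            foreach (var brokerGroup in partitions.GroupBy(p => p.BrokerId))
            {
                var brokerPartitions = brokerGroup.ToList();
                var thread = new Thread(delegate()
                {
                    // one long-lived fetch loop per broker, as FetcherRunnable.Run does above
                    foreach (var partition in brokerPartitions)
                    {
                        // fetch partition.Topic / partition.PartitionId from this broker
                    }
                });
                thread.Start();
                threads.Add(thread);
            }

            return threads;
        }
    }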

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/FetcherRunnable.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/FetcherRunnable.cs
deleted file mode 100644
index 0c5ace5..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/FetcherRunnable.cs
+++ /dev/null
@@ -1,191 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Linq;

-    using System.Reflection;

-    using System.Threading;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration;

-    using log4net;

-

-    /// <summary>

-    /// Background thread worker class that is used to fetch data from a single broker

-    /// </summary>

-    internal class FetcherRunnable

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        private readonly string name;

-

-        private readonly IZooKeeperClient zkClient;

-

-        private readonly ConsumerConfiguration config;

-

-        private readonly Broker broker;

-

-        private readonly IList<PartitionTopicInfo> partitionTopicInfos;

-

-        private readonly IConsumer simpleConsumer;

-

-        private bool shouldStop;

-

-        internal FetcherRunnable(string name, IZooKeeperClient zkClient, ConsumerConfiguration config, Broker broker, List<PartitionTopicInfo> partitionTopicInfos)

-        {

-            this.name = name;

-            this.zkClient = zkClient;

-            this.config = config;

-            this.broker = broker;

-            this.partitionTopicInfos = partitionTopicInfos;

-

-            this.simpleConsumer = new Consumer(this.config, broker.Host, broker.Port);

-        }

-

-        /// <summary>

-        /// Method to be used for starting a new thread

-        /// </summary>

-        internal void Run()

-        {

-            foreach (var partitionTopicInfo in partitionTopicInfos)

-            {

-                Logger.InfoFormat(

-                    CultureInfo.CurrentCulture,

-                    "{0} start fetching topic: {1} part: {2} offset: {3} from {4}:{5}",

-                    this.name,

-                    partitionTopicInfo.Topic,

-                    partitionTopicInfo.Partition.PartId,

-                    partitionTopicInfo.GetFetchOffset(),

-                    this.broker.Host,

-                    this.broker.Port);

-            }

-

-            try

-            {

-                while (!this.shouldStop)

-                {

-                    var requestList = new List<FetchRequest>();

-                    foreach (var partitionTopicInfo in this.partitionTopicInfos)

-                    {

-                        var singleRequest = new FetchRequest(partitionTopicInfo.Topic, partitionTopicInfo.Partition.PartId, partitionTopicInfo.GetFetchOffset(), this.config.MaxFetchSize);

-                        requestList.Add(singleRequest);

-                    }

-

-                    Logger.Debug("Fetch request: " + string.Join(", ", requestList.Select(x => x.ToString())));

-                    var request = new MultiFetchRequest(requestList);

-                    var response = this.simpleConsumer.MultiFetch(request);

-                    int read = 0;

-                    var items = this.partitionTopicInfos.Zip(

-                        response,

-                        (x, y) =>

-                        new Tuple<PartitionTopicInfo, BufferedMessageSet>(x, y));

-                    foreach (Tuple<PartitionTopicInfo, BufferedMessageSet> item in items)

-                    {

-                        BufferedMessageSet messages = item.Item2;

-                        PartitionTopicInfo info = item.Item1;

-                        try

-                        {

-                            bool done = false;

-                            if (messages.ErrorCode == ErrorMapping.OffsetOutOfRangeCode)

-                            {

-                                Logger.InfoFormat(CultureInfo.CurrentCulture, "offset {0} out of range", info.GetFetchOffset());

-                                //// see if we can fix this error

-                                var resetOffset = this.ResetConsumerOffsets(info.Topic, info.Partition);

-                                if (resetOffset >= 0)

-                                {

-                                    info.ResetFetchOffset(resetOffset);

-                                    info.ResetConsumeOffset(resetOffset);

-                                    done = true;

-                                }

-                            }

-

-                            if (!done)

-                            {

-                                read += info.Add(messages, info.GetFetchOffset());

-                            }

-                        }

-                        catch (Exception ex)

-                        {

-                            if (!shouldStop)

-                            {

-                                Logger.ErrorFormat(CultureInfo.CurrentCulture, "error in FetcherRunnable for {0}: {1}", info, ex);

-                            }

-

-                            throw;

-                        }

-                    }

-

-                    Logger.Info("Fetched bytes: " + read);

-                    if (read == 0)

-                    {

-                        Logger.DebugFormat(CultureInfo.CurrentCulture, "backing off {0} ms", this.config.BackOffIncrement);

-                        Thread.Sleep(this.config.BackOffIncrement);

-                    }

-                }

-            }

-            catch (Exception ex)

-            {

-                if (shouldStop)

-                {

-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "FetcherRunnable {0} interrupted", this);

-                }

-                else

-                {

-                    Logger.ErrorFormat(CultureInfo.CurrentCulture, "error in FetcherRunnable {0}", ex);

-                }

-            }

-

-            Logger.InfoFormat(CultureInfo.CurrentCulture, "stopping fetcher {0} to host {1}", this.name, this.broker.Host);

-        }

-

-        internal void Shutdown()

-        {

-            this.shouldStop = true;

-        }

-

-        private long ResetConsumerOffsets(string topic, Partition partition)

-        {

-            long offset;

-            switch (this.config.AutoOffsetReset)

-            {

-                case OffsetRequest.SmallestTime:

-                    offset = OffsetRequest.EarliestTime;

-                    break;

-                case OffsetRequest.LargestTime:

-                    offset = OffsetRequest.LatestTime;

-                    break;

-                default:

-                    return -1;

-            }

-

-            var request = new OffsetRequest(topic, partition.PartId, offset, 1);

-            var offsets = this.simpleConsumer.GetOffsetsBefore(request);

-            var topicDirs = new ZKGroupTopicDirs(this.config.GroupId, topic);

-            Logger.InfoFormat(CultureInfo.CurrentCulture, "updating partition {0} with {1} offset {2}", partition.Name, offset == OffsetRequest.EarliestTime ? "earliest" : "latest", offsets[0]);

-            ZkUtils.UpdatePersistentPath(this.zkClient, topicDirs.ConsumerOffsetDir + "/" + partition.Name, offsets[0].ToString());

-

-            return offsets[0];

-        }

-    }

-}
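
When a fetch offset is out of range, FetcherRunnable above resolves a replacement by asking the broker for the earliest or latest offset, depending on the configured auto-reset policy, and then persists it to ZooKeeper. A hedged sketch of that lookup against the IConsumer interface; ResolveResetOffset is illustrative and not a Kafka.Client member.

    using Kafka.Client.Consumers;
    using Kafka.Client.Requests;

    internal static class OffsetResetSketch
    {
        internal static long ResolveResetOffset(IConsumer consumer, string topic, int partitionId, bool useEarliest)
        {
            long time = useEarliest ? OffsetRequest.EarliestTime : OffsetRequest.LatestTime;

            // ask the broker for a single offset before the chosen time; the reply is newest-first
            var offsets = consumer.GetOffsetsBefore(new OffsetRequest(topic, partitionId, time, 1));
            return offsets.Count > 0 ? offsets[0] : -1;
        }
    }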

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/IConsumer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/IConsumer.cs
deleted file mode 100644
index 0c9aced..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/IConsumer.cs
+++ /dev/null
@@ -1,74 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System.Collections.Generic;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Requests;

-

-    /// <summary>

-    /// The low-level consumer API for Kafka messages

-    /// </summary>

-    /// <remarks>

-    /// Maintains a connection to a single broker and has a close correspondence 

-    /// to the network requests sent to the server.

-    /// </remarks>

-    public interface IConsumer

-    {

-        /// <summary>

-        /// Fetch a set of messages from a topic.

-        /// </summary>

-        /// <param name="request">

-        /// Specifies the topic name, topic partition, starting byte offset, maximum bytes to be fetched.

-        /// </param>

-        /// <returns>

-        /// A set of fetched messages.

-        /// </returns>

-        /// <remarks>

-        /// Offset is passed in on every request, allowing the user to maintain this metadata 

-        /// however they choose.

-        /// </remarks>

-        BufferedMessageSet Fetch(FetchRequest request);

-

-        /// <summary>

-        /// Combine multiple fetch requests in one call.

-        /// </summary>

-        /// <param name="request">

-        /// The list of fetch requests.

-        /// </param>

-        /// <returns>

-        /// A list of sets of fetched messages.

-        /// </returns>

-        /// <remarks>

-        /// Offset is passed in on every request, allowing the user to maintain this metadata 

-        /// however they choose.

-        /// </remarks>

-        IList<BufferedMessageSet> MultiFetch(MultiFetchRequest request);

-

-        /// <summary>

-        /// Gets a list of valid offsets (up to maxSize) before the given time.

-        /// </summary>

-        /// <param name="request">

-        /// The offset request.

-        /// </param>

-        /// <returns>

-        /// The list of offsets, in descending order.

-        /// </returns>

-        IList<long> GetOffsetsBefore(OffsetRequest request);

-    }

-}
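
Because the low-level consumer is exposed through the IConsumer interface above, code that depends on it can be exercised against a stub instead of a live broker. A minimal sketch with canned, placeholder return values:

    using System.Collections.Generic;
    using Kafka.Client.Consumers;
    using Kafka.Client.Messages;
    using Kafka.Client.Requests;

    internal class StubConsumer : IConsumer
    {
        public BufferedMessageSet Fetch(FetchRequest request)
        {
            return null;                                // stand-in for a canned message set
        }

        public IList<BufferedMessageSet> MultiFetch(MultiFetchRequest request)
        {
            return new List<BufferedMessageSet>();
        }

        public IList<long> GetOffsetsBefore(OffsetRequest request)
        {
            return new List<long> { 42L, 0L };          // descending, as the contract above requires
        }
    }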

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/IConsumerConnector.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/IConsumerConnector.cs
deleted file mode 100644
index 9413f43..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/IConsumerConnector.cs
+++ /dev/null
@@ -1,45 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System;

-    using System.Collections.Generic;

-

-    /// <summary>

-    /// The consumer high-level API that hides the details of brokers from the consumer.

-    /// It also maintains the state of what has been consumed. 

-    /// </summary>

-    public interface IConsumerConnector : IDisposable

-    {

-        /// <summary>

-        /// Creates a list of message streams for each topic.

-        /// </summary>

-        /// <param name="topicCountDict">

-        /// The map of topic on number of streams

-        /// </param>

-        /// <returns>

-        /// The list of <see cref="KafkaMessageStream"/>, which are iterators over topic.

-        /// </returns>

-        IDictionary<string, IList<KafkaMessageStream>> CreateMessageStreams(IDictionary<string, int> topicCountDict);

-

-        /// <summary>

-        /// Commits the offsets of all messages consumed so far.

-        /// </summary>

-        void CommitOffsets();

-    }

-}
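
A hedged usage sketch of the high-level contract above: request two streams for one topic and drain the first. How ConsumerConfiguration is populated (group id, ZooKeeper connection string) is assumed here, since it is not shown in this section.

    using System;
    using System.Collections.Generic;
    using Kafka.Client.Cfg;
    using Kafka.Client.Consumers;

    internal class HighLevelConsumerExample
    {
        internal static void Main()
        {
            var config = new ConsumerConfiguration();   // assumption: group id and ZooKeeper address set here
            using (IConsumerConnector connector = new ZookeeperConsumerConnector(config, true))
            {
                var topicMap = new Dictionary<string, int> { { "test-topic", 2 } };
                IDictionary<string, IList<KafkaMessageStream>> streams = connector.CreateMessageStreams(topicMap);

                // blocks until messages arrive or the consumer timeout elapses
                foreach (var message in streams["test-topic"][0])
                {
                    Console.WriteLine("received a " + message.GetType().Name);
                }

                connector.CommitOffsets();              // only needed when auto-commit is disabled
            }
        }
    }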

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/KafkaMessageStream.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/KafkaMessageStream.cs
deleted file mode 100644
index a7c9aff..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/KafkaMessageStream.cs
+++ /dev/null
@@ -1,53 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System.Collections;

-    using System.Collections.Concurrent;

-    using System.Collections.Generic;

-    using Kafka.Client.Messages;

-

-    /// <summary>

-    /// This class is a thread-safe IEnumerable of <see cref="Message"/> that can be enumerated to get messages.

-    /// </summary>

-    public class KafkaMessageStream : IEnumerable<Message>

-    {

-        private readonly BlockingCollection<FetchedDataChunk> queue;

-

-        private readonly int consumerTimeoutMs;

-

-        private readonly ConsumerIterator iterator;

-

-        internal KafkaMessageStream(BlockingCollection<FetchedDataChunk> queue, int consumerTimeoutMs)

-        {

-            this.consumerTimeoutMs = consumerTimeoutMs;

-            this.queue = queue;

-            this.iterator = new ConsumerIterator(this.queue, this.consumerTimeoutMs);

-        }

-

-        public IEnumerator<Message> GetEnumerator()

-        {

-            return this.iterator;

-        }

-

-        IEnumerator IEnumerable.GetEnumerator()

-        {

-            return this.GetEnumerator();

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/PartitionTopicInfo.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/PartitionTopicInfo.cs
deleted file mode 100644
index 9617e08..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/PartitionTopicInfo.cs
+++ /dev/null
@@ -1,198 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System.Collections.Concurrent;

-    using System.Globalization;

-    using System.Reflection;

-    using System.Threading;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Messages;

-    using log4net;

-

-    /// <summary>

-    /// Represents a topic in a broker's partition.

-    /// </summary>

-    internal class PartitionTopicInfo

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        private readonly object consumedOffsetLock = new object();

-

-        private readonly object fetchedOffsetLock = new object();

-

-        private readonly BlockingCollection<FetchedDataChunk> chunkQueue;

-

-        private long consumedOffset;

-

-        private long fetchedOffset;

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="PartitionTopicInfo"/> class.

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="brokerId">

-        /// The broker ID.

-        /// </param>

-        /// <param name="partition">

-        /// The broker's partition.

-        /// </param>

-        /// <param name="chunkQueue">

-        /// The chunk queue.

-        /// </param>

-        /// <param name="consumedOffset">

-        /// The consumed offset value.

-        /// </param>

-        /// <param name="fetchedOffset">

-        /// The fetched offset value.

-        /// </param>

-        /// <param name="fetchSize">

-        /// The fetch size.

-        /// </param>

-        public PartitionTopicInfo(

-            string topic, 

-            int brokerId, 

-            Partition partition, 

-            BlockingCollection<FetchedDataChunk> chunkQueue, 

-            long consumedOffset, 

-            long fetchedOffset, 

-            int fetchSize)

-        {

-            this.Topic = topic;

-            this.Partition = partition;

-            this.chunkQueue = chunkQueue;

-            this.BrokerId = brokerId;

-            this.consumedOffset = consumedOffset;

-            this.fetchedOffset = fetchedOffset;

-            this.FetchSize = fetchSize;

-            if (Logger.IsDebugEnabled)

-            {

-                Logger.DebugFormat(

-                    CultureInfo.CurrentCulture, "initial consumer offset of {0} is {1}", this, consumedOffset);

-                Logger.DebugFormat(

-                    CultureInfo.CurrentCulture, "initial fetch offset of {0} is {1}", this, fetchedOffset);

-            }

-        }

-

-        /// <summary>

-        /// Gets broker ID.

-        /// </summary>

-        public int BrokerId { get; private set; }

-

-        /// <summary>

-        /// Gets the fetch size.

-        /// </summary>

-        public int FetchSize { get; private set; }

-

-        /// <summary>

-        /// Gets the partition.

-        /// </summary>

-        public Partition Partition { get; private set; }

-

-        /// <summary>

-        /// Gets the topic.

-        /// </summary>

-        public string Topic { get; private set; }

-

-        /// <summary>

-        /// Records the given number of bytes as having been consumed

-        /// </summary>

-        /// <param name="messageSize">

-        /// The message size.

-        /// </param>

-        public void Consumed(int messageSize)

-        {

-            long newOffset;

-            lock (this.consumedOffsetLock)

-            {

-                this.consumedOffset += messageSize;

-                newOffset = this.consumedOffset;

-            }

-

-            if (Logger.IsDebugEnabled)

-            {

-                Logger.DebugFormat(

-                    CultureInfo.CurrentCulture, "updated consume offset of {0} to {1}", this, newOffset);

-            }

-        }

-

-        public int Add(BufferedMessageSet messages, long fetchOffset)

-        {

-            int size = messages.SetSize;

-            if (size > 0)

-            {

-                long newOffset = Interlocked.Add(ref this.fetchedOffset, size);

-                Logger.Debug("Updated fetch offset of " + this + " to " + newOffset);

-                this.chunkQueue.Add(new FetchedDataChunk(messages, this, fetchOffset));

-            }

-

-            return size;

-        }

-

-        public long GetConsumeOffset()

-        {

-            lock (this.consumedOffsetLock)

-            {

-                return this.consumedOffset;

-            }

-        }

-

-        public long GetFetchOffset()

-        {

-            lock (this.fetchedOffsetLock)

-            {

-                return this.fetchedOffset;

-            }

-        }

-

-        public void ResetConsumeOffset(long newConsumeOffset)

-        {

-            lock (this.consumedOffsetLock)

-            {

-                this.consumedOffset = newConsumeOffset;

-            }

-

-            if (Logger.IsDebugEnabled)

-            {

-                Logger.DebugFormat(

-                    CultureInfo.CurrentCulture, "reset consume offset of {0} to {1}", this, newConsumeOffset);

-            }

-        }

-

-        public void ResetFetchOffset(long newFetchOffset)

-        {

-            lock (this.fetchedOffsetLock)

-            {

-                this.fetchedOffset = newFetchOffset;

-            }

-

-            if (Logger.IsDebugEnabled)

-            {

-                Logger.DebugFormat(

-                    CultureInfo.CurrentCulture, "reset fetch offset of {0} to {1}", this, newFetchOffset);

-            }

-        }

-

-        public override string ToString()

-        {

-            return this.Topic + ":" + this.Partition;

-        }

-    }

-}
\ No newline at end of file
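
PartitionTopicInfo above keeps two offsets per partition: the fetched offset advances as data arrives from the broker, while the consumed offset moves only once the application has processed the messages. A stand-alone sketch of that bookkeeping (not a Kafka.Client type):

    using System.Threading;

    internal class OffsetPair
    {
        private long fetchedOffset;
        private long consumedOffset;
        private readonly object consumedLock = new object();

        public long AddFetched(int byteCount)
        {
            // lock-free advance, mirroring Interlocked.Add in PartitionTopicInfo.Add above
            return Interlocked.Add(ref this.fetchedOffset, byteCount);
        }

        public void MarkConsumed(long newOffset)
        {
            lock (this.consumedLock)
            {
                this.consumedOffset = newOffset;
            }
        }

        public long GetConsumed()
        {
            lock (this.consumedLock)
            {
                return this.consumedOffset;
            }
        }
    }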
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/TopicCount.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/TopicCount.cs
deleted file mode 100644
index 3d97567..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/TopicCount.cs
+++ /dev/null
@@ -1,111 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Reflection;

-    using System.Text;

-    using System.Web.Script.Serialization;

-    using log4net;

-

-    internal class TopicCount

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        private readonly IDictionary<string, int> topicCountMap;

-        private readonly string consumerIdString;

-

-        public TopicCount(string consumerIdString, IDictionary<string, int> topicCountMap)

-        {

-            this.topicCountMap = topicCountMap;

-            this.consumerIdString = consumerIdString;

-        }

-

-        public static TopicCount ConstructTopicCount(string consumerIdString, string json)

-        {

-            Dictionary<string, int> result = null;

-            var ser = new JavaScriptSerializer();

-            try

-            {

-                result = ser.Deserialize<Dictionary<string, int>>(json);

-            }

-            catch (Exception ex)

-            {

-                Logger.ErrorFormat(CultureInfo.CurrentCulture, "error parsing consumer json string {0}. {1}", json, ex);

-            }

-

-            return new TopicCount(consumerIdString, result);

-        }

-

-        public IDictionary<string, IList<string>> GetConsumerThreadIdsPerTopic()

-        {

-            var result = new Dictionary<string, IList<string>>();

-            foreach (KeyValuePair<string, int> item in topicCountMap)

-            {

-                var consumerSet = new List<string>();

-                for (int i = 0; i < item.Value; i++)

-                {

-                    consumerSet.Add(consumerIdString + "-" + i);

-                }

-

-                result.Add(item.Key, consumerSet);

-            }

-

-            return result;

-        }

-

-        public override bool Equals(object obj)

-        {

-            var o = obj as TopicCount;

-            if (o != null)

-            {

-                return this.consumerIdString == o.consumerIdString && this.topicCountMap == o.topicCountMap;

-            }

-

-            return false;

-        }

-

-        /*

-         return json of

-         { "topic1" : 4,

-           "topic2" : 4

-         }

-        */

-        public string ToJsonString()

-        {

-            var sb = new StringBuilder();

-            sb.Append("{ ");

-            int i = 0;

-            foreach (KeyValuePair<string, int> entry in this.topicCountMap)

-            {

-                if (i > 0)

-                {

-                    sb.Append(",");

-                }

-

-                sb.Append("\"" + entry.Key + "\": " + entry.Value);

-                i++;

-            }

-

-            sb.Append(" }");

-            return sb.ToString();

-        }

-    }

-}
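
TopicCount above records, per consumer id, how many streams each topic should get, serialized as a small JSON object such as { "topic1": 4, "topic2": 4 }. A round-trip sketch using the same JavaScriptSerializer the class depends on; the topic names and counts are placeholders.

    using System;
    using System.Collections.Generic;
    using System.Web.Script.Serialization;    // the serializer TopicCount already relies on

    internal class TopicCountJsonExample
    {
        internal static void Main()
        {
            var counts = new Dictionary<string, int> { { "topic1", 4 }, { "topic2", 4 } };

            var serializer = new JavaScriptSerializer();
            string json = serializer.Serialize(counts);                         // {"topic1":4,"topic2":4}
            var parsed = serializer.Deserialize<Dictionary<string, int>>(json); // what ConstructTopicCount does

            Console.WriteLine(json + " -> " + parsed.Count + " topics");
        }
    }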

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ZookeeperConsumerConnector.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ZookeeperConsumerConnector.cs
deleted file mode 100644
index 753c536..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Consumers/ZookeeperConsumerConnector.cs
+++ /dev/null
@@ -1,322 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Consumers

-{

-    using System;

-    using System.Collections.Concurrent;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Reflection;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration;

-    using Kafka.Client.ZooKeeperIntegration.Listeners;

-    using log4net;

-

-    /// <summary>

-    /// The consumer high-level API that hides the details of brokers from the consumer.

-    /// It also maintains the state of what has been consumed. 

-    /// </summary>

-    public class ZookeeperConsumerConnector : KafkaClientBase, IConsumerConnector

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-        

-        public static readonly int MaxNRetries = 4;

-        

-        internal static readonly FetchedDataChunk ShutdownCommand = new FetchedDataChunk(null, null, -1);

-

-        private readonly ConsumerConfiguration config;

-       

-        private IZooKeeperClient zkClient;

-       

-        private readonly object shuttingDownLock = new object();

-       

-        private readonly bool enableFetcher;

-        

-        private Fetcher fetcher;

-        

-        private readonly KafkaScheduler scheduler = new KafkaScheduler();

-        

-        private readonly IDictionary<string, IDictionary<Partition, PartitionTopicInfo>> topicRegistry = new ConcurrentDictionary<string, IDictionary<Partition, PartitionTopicInfo>>();

-        

-        private readonly IDictionary<Tuple<string, string>, BlockingCollection<FetchedDataChunk>> queues = new Dictionary<Tuple<string, string>, BlockingCollection<FetchedDataChunk>>();

-

-        private readonly object syncLock = new object();

-

-        private volatile bool disposed;

-

-        /// <summary>

-        /// Gets the consumer group ID.

-        /// </summary>

-        public string ConsumerGroup

-        {

-            get { return this.config.GroupId; }

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZookeeperConsumerConnector"/> class.

-        /// </summary>

-        /// <param name="config">

-        /// The consumer configuration. At the minimum, need to specify the group ID 

-        /// of the consumer and the ZooKeeper connection string.

-        /// </param>

-        /// <param name="enableFetcher">

-        /// Indicates whether fetchers should be enabled

-        /// </param>

-        public ZookeeperConsumerConnector(ConsumerConfiguration config, bool enableFetcher)

-        {

-            this.config = config;

-            this.enableFetcher = enableFetcher;

-            this.ConnectZk();

-            this.CreateFetcher();

-

-            if (this.config.AutoCommit)

-            {

-                Logger.InfoFormat(CultureInfo.CurrentCulture, "starting auto committer every {0} ms", this.config.AutoCommitInterval);

-                scheduler.ScheduleWithRate(this.AutoCommit, this.config.AutoCommitInterval, this.config.AutoCommitInterval);

-            }

-        }

-

-        /// <summary>

-        /// Commits the offsets of all messages consumed so far.

-        /// </summary>

-        public void CommitOffsets()

-        {

-            this.EnsuresNotDisposed();

-            if (this.zkClient == null)

-            {

-                return;

-            }

-

-            foreach (KeyValuePair<string, IDictionary<Partition, PartitionTopicInfo>> topic in topicRegistry)

-            {

-                var topicDirs = new ZKGroupTopicDirs(this.config.GroupId, topic.Key);

-                foreach (KeyValuePair<Partition, PartitionTopicInfo> partition in topic.Value)

-                {

-                    var newOffset = partition.Value.GetConsumeOffset();

-                    try

-                    {

-                        ZkUtils.UpdatePersistentPath(zkClient, topicDirs.ConsumerOffsetDir + "/" + partition.Value.Partition.Name, newOffset.ToString());

-                    }

-                    catch (Exception ex)

-                    {

-                        Logger.WarnFormat(CultureInfo.CurrentCulture, "exception during CommitOffsets: {0}", ex);

-                    }

-

-                    if (Logger.IsDebugEnabled)

-                    {

-                        Logger.DebugFormat(CultureInfo.CurrentCulture, "Committed offset {0} for topic {1}", newOffset, partition);

-                    }

-                }

-            }

-        }

-

-        public void AutoCommit()

-        {

-            this.EnsuresNotDisposed();

-            try

-            {

-                this.CommitOffsets();

-            }

-            catch (Exception ex)

-            {

-                Logger.ErrorFormat(CultureInfo.CurrentCulture, "exception during AutoCommit: {0}", ex);

-            }

-        }

-

-        protected override void Dispose(bool disposing)

-        {

-            if (!disposing)

-            {

-                return;

-            }

-

-            if (this.disposed)

-            {

-                return;

-            }

-

-            lock (this.shuttingDownLock)

-            {

-                if (this.disposed)

-                {

-                    return;

-                }

-

-                Logger.Info("ZookeeperConsumerConnector shutting down");

-                this.disposed = true;

-            }

-

-            try

-            {

-                if (this.scheduler != null)

-                {

-                    this.scheduler.Dispose();

-                }

-

-                if (this.fetcher != null)

-                {

-                    this.fetcher.Dispose();

-                }

-

-                this.SendShutdownToAllQueues();

-                if (this.zkClient != null)

-                {

-                    this.zkClient.Dispose();

-                }

-            }

-            catch (Exception exc)

-            {

-                Logger.Debug("Ignoring unexpected errors on shutting down", exc);

-            }

-

-            Logger.Info("ZookeeperConsumerConnector shut down completed");

-        }

-

-        /// <summary>

-        /// Creates a list of message streams for each topic.

-        /// </summary>

-        /// <param name="topicCountDict">

-        /// The map of topic to number of streams

-        /// </param>

-        /// <returns>

-        /// The list of <see cref="KafkaMessageStream"/>, which are iterators over the topic.

-        /// </returns>

-        /// <remarks>

-        /// Explicitly triggers load balancing for this consumer

-        /// </remarks>

-        public IDictionary<string, IList<KafkaMessageStream>> CreateMessageStreams(IDictionary<string, int> topicCountDict)

-        {

-            this.EnsuresNotDisposed();

-            return this.Consume(topicCountDict);

-        }

-

-        private void ConnectZk()

-        {

-            Logger.InfoFormat(CultureInfo.CurrentCulture, "Connecting to zookeeper instance at {0}", this.config.ZooKeeper.ZkConnect);

-            this.zkClient = new ZooKeeperClient(this.config.ZooKeeper.ZkConnect, this.config.ZooKeeper.ZkSessionTimeoutMs, ZooKeeperStringSerializer.Serializer);

-            this.zkClient.Connect();

-        }

-

-        private void CreateFetcher()

-        {

-            if (this.enableFetcher)

-            {

-                this.fetcher = new Fetcher(this.config, this.zkClient);

-            }

-        }

-

-        private IDictionary<string, IList<KafkaMessageStream>> Consume(IDictionary<string, int> topicCountDict)

-        {

-            Logger.Debug("entering consume");

-

-            if (topicCountDict == null)

-            {

-                throw new ArgumentNullException();

-            }

-

-            var dirs = new ZKGroupDirs(this.config.GroupId);

-            var result = new Dictionary<string, IList<KafkaMessageStream>>();

-

-            string consumerUuid = Environment.MachineName + "-" + DateTime.Now.Millisecond;

-            string consumerIdString = this.config.GroupId + "_" + consumerUuid;

-            var topicCount = new TopicCount(consumerIdString, topicCountDict);

-

-            // listener to consumer and partition changes

-            var loadBalancerListener = new ZKRebalancerListener(

-                this.config, 

-                consumerIdString, 

-                this.topicRegistry, 

-                this.zkClient, 

-                this, 

-                queues, 

-                this.fetcher, 

-                this.syncLock);

-            this.RegisterConsumerInZk(dirs, consumerIdString, topicCount);

-            this.zkClient.Subscribe(dirs.ConsumerRegistryDir, loadBalancerListener);

-

-            //// create a queue per topic per consumer thread

-            var consumerThreadIdsPerTopicMap = topicCount.GetConsumerThreadIdsPerTopic();

-            foreach (var topic in consumerThreadIdsPerTopicMap.Keys)

-            {

-                var streamList = new List<KafkaMessageStream>();

-                foreach (string threadId in consumerThreadIdsPerTopicMap[topic])

-                {

-                    var stream = new BlockingCollection<FetchedDataChunk>(new ConcurrentQueue<FetchedDataChunk>());

-                    this.queues.Add(new Tuple<string, string>(topic, threadId), stream);

-                    streamList.Add(new KafkaMessageStream(stream, this.config.Timeout));

-                }

-

-                result.Add(topic, streamList);

-                Logger.DebugFormat(CultureInfo.CurrentCulture, "adding topic {0} and stream to map...", topic);

-

-                // register on broker partition path changes

-                string partitionPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + topic;

-                this.zkClient.MakeSurePersistentPathExists(partitionPath);

-                this.zkClient.Subscribe(partitionPath, loadBalancerListener);

-            }

-

-            //// register listener for session expired event

-            this.zkClient.Subscribe(new ZKSessionExpireListener(dirs, consumerIdString, topicCount, loadBalancerListener, this));

-

-            //// explicitly trigger load balancing for this consumer

-            lock (this.syncLock)

-            {

-                loadBalancerListener.SyncedRebalance();

-            }

-

-            return result;

-        }

-

-        private void SendShutdownToAllQueues()

-        {

-            foreach (var queue in this.queues)

-            {

-                Logger.Debug("Clearing up queue");

-                //// clear the queue

-                while (queue.Value.Count > 0)

-                {

-                    queue.Value.Take();

-                }

-

-                queue.Value.Add(ShutdownCommand);

-                Logger.Debug("Cleared queue and sent shutdown command");

-            }

-        }

-

-        internal void RegisterConsumerInZk(ZKGroupDirs dirs, string consumerIdString, TopicCount topicCount)

-        {

-            this.EnsuresNotDisposed();

-            Logger.InfoFormat(CultureInfo.CurrentCulture, "begin registering consumer {0} in ZK", consumerIdString);

-            ZkUtils.CreateEphemeralPathExpectConflict(this.zkClient, dirs.ConsumerRegistryDir + "/" + consumerIdString, topicCount.ToJsonString());

-            Logger.InfoFormat(CultureInfo.CurrentCulture, "end registering consumer {0} in ZK", consumerIdString);

-        }

-

-        /// <summary>

-        /// Ensures that the object was not disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}
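The file removed above was the high-level consumer entry point of the C# client. For reference, a minimal consumption sketch against that API; the ConsumerConfiguration initializer and the element type yielded by KafkaMessageStream are assumptions, not taken from this diff:

    // Hypothetical configuration: real code would also supply the ZooKeeper
    // connection settings (see ConsumerConfiguration / ConsumerConfigurationSection).
    var config = new ConsumerConfiguration { GroupId = "test-group" };
    using (var connector = new ZookeeperConsumerConnector(config, true))
    {
        // Ask for one stream for the topic "test"; keys are topics, values are stream counts.
        var streams = connector.CreateMessageStreams(new Dictionary<string, int> { { "test", 1 } });
        foreach (var message in streams["test"][0])   // KafkaMessageStream is assumed to be enumerable
        {
            // process the message
        }
        connector.CommitOffsets();                    // persist consumed offsets in ZooKeeper
    }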

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ConsumerTimeoutException.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ConsumerTimeoutException.cs
deleted file mode 100644
index b9481d1..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ConsumerTimeoutException.cs
+++ /dev/null
@@ -1,28 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-
-namespace Kafka.Client.Exceptions
-{
-    public class ConsumerTimeoutException : Exception
-    {
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/KafkaException.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/KafkaException.cs
deleted file mode 100644
index 3721bc9..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/KafkaException.cs
+++ /dev/null
@@ -1,100 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Exceptions
-{
-    using System;
-
-    /// <summary>
-    /// A wrapping of an error code returned from Kafka.
-    /// </summary>
-    public class KafkaException : Exception
-    {
-        /// <summary>
-        /// No error occurred.
-        /// </summary>
-        public const int NoError = 0;
-
-        /// <summary>
-        /// The offset requested was out of range.
-        /// </summary>
-        public const int OffsetOutOfRangeCode = 1;
-
-        /// <summary>
-        /// The message was invalid.
-        /// </summary>
-        public const int InvalidMessageCode = 2;
-
-        /// <summary>
-        /// The wrong partition.
-        /// </summary>
-        public const int WrongPartitionCode = 3;
-
-        /// <summary>
-        /// Invalid message size.
-        /// </summary>
-        public const int InvalidRetchSizeCode = 4;
-
-        public KafkaException()
-        {
-            ErrorCode = NoError;
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the KafkaException class.
-        /// </summary>
-        /// <param name="errorCode">The error code generated by a request to Kafka.</param>
-        public KafkaException(int errorCode) : base(GetMessage(errorCode))
-        {
-            ErrorCode = errorCode;
-        }
-
-        /// <summary>
-        /// Gets the error code that was sent from Kafka.
-        /// </summary>
-        public int ErrorCode { get; private set; }
-
-        /// <summary>
-        /// Gets the message for the exception based on the Kafka error code.
-        /// </summary>
-        /// <param name="errorCode">The error code from Kafka.</param>
-        /// <returns>A string message representation </returns>
-        private static string GetMessage(int errorCode)
-        {
-            if (errorCode == OffsetOutOfRangeCode)
-            {
-                return "Offset out of range";
-            }
-            else if (errorCode == InvalidMessageCode)
-            {
-                return "Invalid message";
-            }
-            else if (errorCode == WrongPartitionCode)
-            {
-                return "Wrong partition";
-            }
-            else if (errorCode == InvalidRetchSizeCode)
-            {
-                return "Invalid message size";
-            }
-            else
-            {
-                return "Unknown error";
-            }
-        }
-    }
-}
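A short illustration of how the error-code mapping above surfaces to callers (illustrative only; it uses only members shown in this file):

    try
    {
        // Simulate a response carrying error code 1.
        throw new KafkaException(KafkaException.OffsetOutOfRangeCode);
    }
    catch (KafkaException ex)
    {
        // ex.ErrorCode == 1 and ex.Message == "Offset out of range"
        Console.WriteLine("{0}: {1}", ex.ErrorCode, ex.Message);
    }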
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/MessageSizeTooLargeException.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/MessageSizeTooLargeException.cs
deleted file mode 100644
index 9cc17f8..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/MessageSizeTooLargeException.cs
+++ /dev/null
@@ -1,25 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Exceptions
-{
-    using System;
-
-    public class MessageSizeTooLargeException : Exception
-    {
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZKRebalancerException.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZKRebalancerException.cs
deleted file mode 100644
index f8fa71c..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZKRebalancerException.cs
+++ /dev/null
@@ -1,33 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-using System;
-
-namespace Kafka.Client.Exceptions
-{
-    public class ZKRebalancerException : Exception
-    {
-        public ZKRebalancerException()
-        {
-        }
-
-        public ZKRebalancerException(string message)
-            : base(message)
-        {
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZooKeeperException.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZooKeeperException.cs
deleted file mode 100644
index 3b94a3d..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZooKeeperException.cs
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Exceptions
-{
-    using System;
-    using System.Runtime.Serialization;
-
-    [Serializable]
-    public class ZooKeeperException : Exception
-    {
-        public ZooKeeperException()
-        {
-        }
-
-        public ZooKeeperException(string message)
-            : base(message)
-        {
-        }
-
-        public ZooKeeperException(string message, Exception exc)
-            : base(message, exc)
-        {
-        }
-
-        protected ZooKeeperException(SerializationInfo info, StreamingContext context)
-            : base(info, context)
-        {
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZooKeeperTimeoutException.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZooKeeperTimeoutException.cs
deleted file mode 100644
index c922dfc..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Exceptions/ZooKeeperTimeoutException.cs
+++ /dev/null
@@ -1,34 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Exceptions
-{
-    using System;
-
-    public class ZooKeeperTimeoutException : Exception
-    {
-        public ZooKeeperTimeoutException()
-            : base("Unable to connect to zookeeper server within timeout: unknown value")
-        {
-        }
-
-        public ZooKeeperTimeoutException(int connectionTimeout)
-            : base("Unable to connect to zookeeper server within timeout: " + connectionTimeout)
-        {
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Kafka.Client.csproj b/trunk/clients/csharp/src/Kafka/Kafka.Client/Kafka.Client.csproj
deleted file mode 100644
index 0d1de6b..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Kafka.Client.csproj
+++ /dev/null
@@ -1,245 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>

-<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

-  <PropertyGroup>

-    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>

-    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>

-    <ProductVersion>8.0.30703</ProductVersion>

-    <SchemaVersion>2.0</SchemaVersion>

-    <ProjectGuid>{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}</ProjectGuid>

-    <OutputType>Library</OutputType>

-    <AppDesignerFolder>Properties</AppDesignerFolder>

-    <RootNamespace>Kafka.Client</RootNamespace>

-    <AssemblyName>Kafka.Client</AssemblyName>

-    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>

-    <FileAlignment>512</FileAlignment>

-    <CodeContractsAssemblyMode>0</CodeContractsAssemblyMode>

-  </PropertyGroup>

-  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">

-    <DebugSymbols>true</DebugSymbols>

-    <DebugType>full</DebugType>

-    <Optimize>false</Optimize>

-    <OutputPath>bin\Debug\</OutputPath>

-    <DefineConstants>DEBUG;TRACE</DefineConstants>

-    <ErrorReport>prompt</ErrorReport>

-    <WarningLevel>4</WarningLevel>

-    <CodeContractsEnableRuntimeChecking>False</CodeContractsEnableRuntimeChecking>

-    <CodeContractsRuntimeOnlyPublicSurface>False</CodeContractsRuntimeOnlyPublicSurface>

-    <CodeContractsRuntimeThrowOnFailure>True</CodeContractsRuntimeThrowOnFailure>

-    <CodeContractsRuntimeCallSiteRequires>False</CodeContractsRuntimeCallSiteRequires>

-    <CodeContractsRuntimeSkipQuantifiers>False</CodeContractsRuntimeSkipQuantifiers>

-    <CodeContractsRunCodeAnalysis>False</CodeContractsRunCodeAnalysis>

-    <CodeContractsNonNullObligations>False</CodeContractsNonNullObligations>

-    <CodeContractsBoundsObligations>False</CodeContractsBoundsObligations>

-    <CodeContractsArithmeticObligations>False</CodeContractsArithmeticObligations>

-    <CodeContractsEnumObligations>False</CodeContractsEnumObligations>

-    <CodeContractsRedundantAssumptions>False</CodeContractsRedundantAssumptions>

-    <CodeContractsRunInBackground>True</CodeContractsRunInBackground>

-    <CodeContractsShowSquigglies>False</CodeContractsShowSquigglies>

-    <CodeContractsUseBaseLine>False</CodeContractsUseBaseLine>

-    <CodeContractsEmitXMLDocs>False</CodeContractsEmitXMLDocs>

-    <CodeContractsCustomRewriterAssembly />

-    <CodeContractsCustomRewriterClass />

-    <CodeContractsLibPaths />

-    <CodeContractsExtraRewriteOptions />

-    <CodeContractsExtraAnalysisOptions />

-    <CodeContractsBaseLineFile />

-    <CodeContractsCacheAnalysisResults>False</CodeContractsCacheAnalysisResults>

-    <CodeContractsRuntimeCheckingLevel>Full</CodeContractsRuntimeCheckingLevel>

-    <CodeContractsReferenceAssembly>%28none%29</CodeContractsReferenceAssembly>

-    <CodeContractsAnalysisWarningLevel>0</CodeContractsAnalysisWarningLevel>

-    <StyleCopTreatErrorsAsWarnings>true</StyleCopTreatErrorsAsWarnings>

-  </PropertyGroup>

-  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">

-    <DebugType>pdbonly</DebugType>

-    <Optimize>true</Optimize>

-    <OutputPath>bin\Release\</OutputPath>

-    <DefineConstants>TRACE</DefineConstants>

-    <ErrorReport>prompt</ErrorReport>

-    <WarningLevel>4</WarningLevel>

-    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

-  </PropertyGroup>

-  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Integration|AnyCPU'">

-    <DebugSymbols>true</DebugSymbols>

-    <OutputPath>bin\Integration\</OutputPath>

-    <DefineConstants>DEBUG;TRACE</DefineConstants>

-    <DebugType>full</DebugType>

-    <PlatformTarget>AnyCPU</PlatformTarget>

-    <ErrorReport>prompt</ErrorReport>

-    <CodeAnalysisIgnoreBuiltInRuleSets>false</CodeAnalysisIgnoreBuiltInRuleSets>

-    <CodeAnalysisIgnoreBuiltInRules>true</CodeAnalysisIgnoreBuiltInRules>

-    <CodeAnalysisFailOnMissingRules>true</CodeAnalysisFailOnMissingRules>

-    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

-    <CodeContractsEnableRuntimeChecking>True</CodeContractsEnableRuntimeChecking>

-    <CodeContractsRuntimeOnlyPublicSurface>False</CodeContractsRuntimeOnlyPublicSurface>

-    <CodeContractsRuntimeThrowOnFailure>True</CodeContractsRuntimeThrowOnFailure>

-    <CodeContractsRuntimeCallSiteRequires>False</CodeContractsRuntimeCallSiteRequires>

-    <CodeContractsRuntimeSkipQuantifiers>False</CodeContractsRuntimeSkipQuantifiers>

-    <CodeContractsRunCodeAnalysis>False</CodeContractsRunCodeAnalysis>

-    <CodeContractsNonNullObligations>False</CodeContractsNonNullObligations>

-    <CodeContractsBoundsObligations>False</CodeContractsBoundsObligations>

-    <CodeContractsArithmeticObligations>False</CodeContractsArithmeticObligations>

-    <CodeContractsEnumObligations>False</CodeContractsEnumObligations>

-    <CodeContractsRedundantAssumptions>False</CodeContractsRedundantAssumptions>

-    <CodeContractsRunInBackground>True</CodeContractsRunInBackground>

-    <CodeContractsShowSquigglies>False</CodeContractsShowSquigglies>

-    <CodeContractsUseBaseLine>False</CodeContractsUseBaseLine>

-    <CodeContractsEmitXMLDocs>False</CodeContractsEmitXMLDocs>

-    <CodeContractsCustomRewriterAssembly />

-    <CodeContractsCustomRewriterClass />

-    <CodeContractsLibPaths />

-    <CodeContractsExtraRewriteOptions />

-    <CodeContractsExtraAnalysisOptions />

-    <CodeContractsBaseLineFile />

-    <CodeContractsCacheAnalysisResults>False</CodeContractsCacheAnalysisResults>

-    <CodeContractsRuntimeCheckingLevel>Full</CodeContractsRuntimeCheckingLevel>

-    <CodeContractsReferenceAssembly>%28none%29</CodeContractsReferenceAssembly>

-    <CodeContractsAnalysisWarningLevel>0</CodeContractsAnalysisWarningLevel>

-  </PropertyGroup>

-  <ItemGroup>

-    <Reference Include="log4net">

-      <HintPath>..\..\..\lib\log4Net\log4net.dll</HintPath>

-    </Reference>

-    <Reference Include="System" />

-    <Reference Include="System.Configuration" />

-    <Reference Include="System.Core" />

-    <Reference Include="Microsoft.CSharp" />

-    <Reference Include="System.Web.Extensions" />

-    <Reference Include="ZooKeeperNet">

-      <HintPath>..\..\..\lib\zookeeper\ZooKeeperNet.dll</HintPath>

-    </Reference>

-  </ItemGroup>

-  <ItemGroup>

-    <Compile Include="Cfg\AsyncProducerConfiguration.cs" />

-    <Compile Include="Cfg\BrokerConfiguration.cs" />

-    <Compile Include="Cfg\BrokerConfigurationElement.cs" />

-    <Compile Include="Cfg\BrokerConfigurationElementCollection.cs" />

-    <Compile Include="Cfg\ConsumerConfigurationSection.cs" />

-    <Compile Include="Cfg\ConsumerConfiguration.cs" />

-    <Compile Include="Cfg\IAsyncProducerConfigShared.cs" />

-    <Compile Include="Cfg\ISyncProducerConfigShared.cs" />

-    <Compile Include="Cfg\ProducerConfiguration.cs" />

-    <Compile Include="Cfg\ProducerConfigurationSection.cs" />

-    <Compile Include="Cfg\ZooKeeperConfigurationElement.cs" />

-    <Compile Include="Cfg\ZooKeeperServerConfigurationElement.cs" />

-    <Compile Include="Cfg\ZooKeeperServerConfigurationElementCollection.cs" />

-    <Compile Include="Cluster\Cluster.cs" />

-    <Compile Include="Cluster\Partition.cs" />

-    <Compile Include="Consumers\Consumer.cs" />

-    <Compile Include="Consumers\ConsumerIterator.cs" />

-    <Compile Include="Consumers\ConsumerIteratorState.cs" />

-    <Compile Include="Consumers\FetchedDataChunk.cs" />

-    <Compile Include="Consumers\Fetcher.cs" />

-    <Compile Include="Consumers\FetcherRunnable.cs" />

-    <Compile Include="Consumers\IConsumer.cs" />

-    <Compile Include="Consumers\IConsumerConnector.cs" />

-    <Compile Include="Consumers\KafkaMessageStream.cs" />

-    <Compile Include="Consumers\PartitionTopicInfo.cs" />

-    <Compile Include="Consumers\TopicCount.cs" />

-    <Compile Include="Consumers\ZookeeperConsumerConnector.cs" />

-    <Compile Include="Exceptions\ConsumerTimeoutException.cs" />

-    <Compile Include="Exceptions\IllegalStateException.cs" />

-    <Compile Include="Exceptions\InvalidMessageSizeException.cs" />

-    <Compile Include="Exceptions\MessageSizeTooLargeException.cs" />

-    <Compile Include="Exceptions\NoSuchElementException.cs" />

-    <Compile Include="Exceptions\UnknownCodecException.cs" />

-    <Compile Include="Exceptions\ZKRebalancerException.cs" />

-    <Compile Include="Exceptions\ZooKeeperException.cs" />

-    <Compile Include="Exceptions\ZooKeeperTimeoutException.cs" />

-    <Compile Include="KafkaConnection.cs">

-      <SubType>Code</SubType>

-    </Compile>

-    <Compile Include="Exceptions\KafkaException.cs">

-      <SubType>Code</SubType>

-    </Compile>

-    <Compile Include="KafkaStopWatch.cs" />

-    <Compile Include="Messages\BoundedBuffer.cs" />

-    <Compile Include="Messages\CompressionCodec.cs" />

-    <Compile Include="Messages\CompressionCodecs.cs" />

-    <Compile Include="Messages\CompressionUtils.cs" />

-    <Compile Include="Messages\MessageAndOffset.cs" />

-    <Compile Include="Producers\Async\AsyncProducerPool.cs" />

-    <Compile Include="Producers\Async\MessageSent.cs" />

-    <Compile Include="Producers\Producer.StrMsg.cs" />

-    <Compile Include="Producers\Sync\SyncProducerPool.cs" />

-    <Compile Include="Serialization\StringEncoder.cs" />

-    <Compile Include="Serialization\IWritable.cs" />

-    <Compile Include="Producers\ProducerTypes.cs" />

-    <Compile Include="Cfg\SyncProducerConfiguration.cs" />

-    <Compile Include="Cfg\ZooKeeperConfiguration.cs" />

-    <Compile Include="Cluster\Broker.cs" />

-    <Compile Include="Producers\Partitioning\ConfigBrokerPartitionInfo.cs" />

-    <Compile Include="Producers\Partitioning\DefaultPartitioner.cs" />

-    <Compile Include="Producers\Partitioning\ZKBrokerPartitionInfo.cs" />

-    <Compile Include="KafkaClientBase.cs" />

-    <Compile Include="Serialization\DefaultEncoder.cs" />

-    <Compile Include="Serialization\KafkaBinaryReader.cs" />

-    <Compile Include="Serialization\KafkaBinaryWriter.cs" />

-    <Compile Include="Utils\ErrorMapping.cs" />

-    <Compile Include="Utils\Extensions.cs" />

-    <Compile Include="Utils\Guard.cs" />

-    <Compile Include="Utils\KafkaScheduler.cs" />

-    <Compile Include="Utils\ZKGroupDirs.cs" />

-    <Compile Include="Utils\ZKGroupTopicDirs.cs" />

-    <Compile Include="Utils\ZkUtils.cs" />

-    <Compile Include="ZooKeeperAwareKafkaClientBase.cs" />

-    <Compile Include="Messages\MessageSet.cs" />

-    <Compile Include="Messages\BufferedMessageSet.cs" />

-    <Compile Include="Producers\Async\AsyncProducer.cs" />

-    <Compile Include="Producers\Async\ICallbackHandler.cs" />

-    <Compile Include="Producers\Partitioning\IBrokerPartitionInfo.cs" />

-    <Compile Include="Producers\Partitioning\IPartitioner.cs" />

-    <Compile Include="Producers\Async\IAsyncProducer.cs" />

-    <Compile Include="Producers\IProducer.cs" />

-    <Compile Include="Producers\IProducerPool.cs" />

-    <Compile Include="Producers\Sync\ISyncProducer.cs" />

-    <Compile Include="Producers\ProducerPool.cs" />

-    <Compile Include="Producers\ProducerPoolData.cs" />

-    <Compile Include="Producers\Sync\SyncProducer.cs" />

-    <Compile Include="Requests\AbstractRequest.cs" />

-    <Compile Include="Messages\Message.cs" />

-    <Compile Include="Producers\ProducerData.cs" />

-    <Compile Include="RequestContext.cs" />

-    <Compile Include="Requests\FetchRequest.cs" />

-    <Compile Include="Requests\MultiFetchRequest.cs" />

-    <Compile Include="Requests\MultiProducerRequest.cs" />

-    <Compile Include="Requests\OffsetRequest.cs" />

-    <Compile Include="Requests\ProducerRequest.cs" />

-    <Compile Include="Serialization\IEncoder.cs" />

-    <Compile Include="Utils\Crc32Hasher.cs" />

-    <Compile Include="Producers\Producer.cs" />

-    <Compile Include="Properties\AssemblyInfo.cs" />

-    <Compile Include="Requests\RequestTypes.cs" />

-    <Compile Include="Utils\BitWorks.cs" />

-    <Compile Include="Utils\ReflectionHelper.cs" />

-    <Compile Include="ZooKeeperIntegration\Listeners\BrokerTopicsListener.cs" />

-    <Compile Include="ZooKeeperIntegration\Events\ChildChangedEventItem.cs" />

-    <Compile Include="ZooKeeperIntegration\Events\DataChangedEventItem.cs" />

-    <Compile Include="ZooKeeperIntegration\Events\ZooKeeperChildChangedEventArgs.cs" />

-    <Compile Include="ZooKeeperIntegration\Events\ZooKeeperDataChangedEventArgs.cs" />

-    <Compile Include="ZooKeeperIntegration\Events\ZooKeeperEventTypes.cs" />

-    <Compile Include="ZooKeeperIntegration\Listeners\IZooKeeperDataListener.cs" />

-    <Compile Include="ZooKeeperIntegration\Listeners\IZooKeeperStateListener.cs" />

-    <Compile Include="ZooKeeperIntegration\Events\ZooKeeperEventArgs.cs" />

-    <Compile Include="ZooKeeperIntegration\Events\ZooKeeperSessionCreatedEventArgs.cs" />

-    <Compile Include="ZooKeeperIntegration\Events\ZooKeeperStateChangedEventArgs.cs" />

-    <Compile Include="ZooKeeperIntegration\Listeners\IZooKeeperChildListener.cs" />

-    <Compile Include="ZooKeeperIntegration\IZooKeeperConnection.cs" />

-    <Compile Include="ZooKeeperIntegration\IZooKeeperClient.cs" />

-    <Compile Include="ZooKeeperIntegration\Listeners\ZKRebalancerListener.cs" />

-    <Compile Include="ZooKeeperIntegration\Listeners\ZKSessionExpireListener.cs" />

-    <Compile Include="ZooKeeperIntegration\ZooKeeperConnection.cs" />

-    <Compile Include="ZooKeeperIntegration\ZooKeeperClient.cs" />

-    <Compile Include="ZooKeeperIntegration\ZooKeeperClient.Watcher.cs">

-      <DependentUpon>ZooKeeperClient.cs</DependentUpon>

-    </Compile>

-    <Compile Include="ZooKeeperIntegration\IZooKeeperSerializer.cs" />

-    <Compile Include="ZooKeeperIntegration\ZooKeeperStringSerializer.cs" />

-  </ItemGroup>

-  <ItemGroup>

-    <None Include="..\..\..\Settings.StyleCop">

-      <Link>Settings.StyleCop</Link>

-    </None>

-  </ItemGroup>

-  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

-  <Import Project="..\..\..\lib\StyleCop\Microsoft.StyleCop.Targets" />

-</Project>

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaClientBase.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaClientBase.cs
deleted file mode 100644
index ec0663a..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaClientBase.cs
+++ /dev/null
@@ -1,44 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client
-{
-    using System;
-
-    /// <summary>
-    /// Base class for all Kafka clients
-    /// </summary>
-    public abstract class KafkaClientBase : IDisposable
-    {
-        /// <summary>
-        /// Releases all unmanaged and managed resources
-        /// </summary>
-        public void Dispose()
-        {
-            this.Dispose(true);
-            GC.SuppressFinalize(this);
-        }
-
-        /// <summary>
-        /// Releases all unmanaged and managed resources
-        /// </summary>
-        /// <param name="disposing">
-        /// Indicates whether to release managed resources.
-        /// </param>
-        protected abstract void Dispose(bool disposing);
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaConnection.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaConnection.cs
deleted file mode 100644
index 475c897..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaConnection.cs
+++ /dev/null
@@ -1,224 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client

-{

-    using System;

-    using System.IO;

-    using System.Net.Sockets;

-    using System.Threading;

-    using Kafka.Client.Producers.Async;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Serialization;

-    using Kafka.Client.Utils;

-

-    /// <summary>

-    /// Manages connections to the Kafka server.

-    /// </summary>

-    public class KafkaConnection : IDisposable

-    {

-        private readonly int bufferSize;

-

-        private readonly int socketTimeout;

-

-        private readonly TcpClient client;

-

-        private volatile bool disposed;

-

-        /// <summary>

-        /// Initializes a new instance of the KafkaConnection class.

-        /// </summary>

-        /// <param name="server">The server to connect to.</param>

-        /// <param name="port">The port to connect to.</param>

-        public KafkaConnection(string server, int port, int bufferSize, int socketTimeout)

-        {

-            this.bufferSize = bufferSize;

-            this.socketTimeout = socketTimeout;

-

-            // connection opened

-            this.client = new TcpClient(server, port)

-                {

-                    ReceiveTimeout = socketTimeout,

-                    SendTimeout = socketTimeout,

-                    ReceiveBufferSize = bufferSize,

-                    SendBufferSize = bufferSize

-                };

-            var stream = this.client.GetStream();

-            this.Reader = new KafkaBinaryReader(stream);

-        }

-

-        public KafkaBinaryReader Reader { get; private set; }

-

-        /// <summary>

-        /// Writes a producer request to the server asynchronously.

-        /// </summary>

-        /// <param name="request">The request to make.</param>

-        public void BeginWrite(ProducerRequest request)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            NetworkStream stream = client.GetStream();

-            byte[] data = request.RequestBuffer.GetBuffer();

-            stream.BeginWrite(data, 0, data.Length, asyncResult => ((NetworkStream)asyncResult.AsyncState).EndWrite(asyncResult), stream);

-        }

-        

-        /// <summary>

-        /// Writes a producer request to the server asynchronously.

-        /// </summary>

-        /// <param name="request">The request to make.</param>

-        /// <param name="callback">The code to execute once the message is completely sent.</param>

-        /// <remarks>

-        /// Do not dispose of the connection until the callback is invoked,

-        /// otherwise the underlying network stream will be closed.

-        /// </remarks>

-        public void BeginWrite(ProducerRequest request, MessageSent<ProducerRequest> callback)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            if (callback == null)

-            {

-                this.BeginWrite(request);

-                return;

-            }

-

-            NetworkStream stream = client.GetStream();

-            var ctx = new RequestContext<ProducerRequest>(stream, request);

-

-            byte[] data = request.RequestBuffer.GetBuffer();

-            stream.BeginWrite(

-                data,

-                0,

-                data.Length,

-                delegate(IAsyncResult asyncResult)

-                    {

-                        var context = (RequestContext<ProducerRequest>)asyncResult.AsyncState;

-                        callback(context);

-                        context.NetworkStream.EndWrite(asyncResult);

-                    },

-                ctx);

-        }

-

-        /// <summary>

-        /// Writes a producer request to the server.

-        /// </summary>

-        /// <remarks>

-        /// The write timeout defaults to infinite.

-        /// </remarks>

-        /// <param name="request">The <see cref="ProducerRequest"/> to send to the server.</param>

-        public void Write(ProducerRequest request)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            this.Write(request.RequestBuffer.GetBuffer());

-        }

-

-        /// <summary>

-        /// Writes a multi-producer request to the server.

-        /// </summary>

-        /// <remarks>

-        /// The write timeout defaults to infinite.

-        /// </remarks>

-        /// <param name="request">The <see cref="MultiProducerRequest"/> to send to the server.</param>

-        public void Write(MultiProducerRequest request)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            this.Write(request.RequestBuffer.GetBuffer());

-        }

-

-        /// <summary>

-        /// Writes data to the server.

-        /// </summary>

-        /// <param name="data">The data to write to the server.</param>

-        private void Write(byte[] data)

-        {

-            NetworkStream stream = this.client.GetStream();

-            //// Send the message to the connected TcpServer. 

-            stream.Write(data, 0, data.Length);

-        }

-

-        /// <summary>

-        /// Writes a fetch request to the server.

-        /// </summary>

-        /// <remarks>

-        /// The write timeout defaults to infinite.

-        /// </remarks>

-        /// <param name="request">The <see cref="FetchRequest"/> to send to the server.</param>

-        public void Write(FetchRequest request)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            this.Write(request.RequestBuffer.GetBuffer());

-        }

-

-        /// <summary>

-        /// Writes a multifetch request to the server.

-        /// </summary>

-        /// <remarks>

-        /// The write timeout defaults to infinite.

-        /// </remarks>

-        /// <param name="request">The <see cref="MultiFetchRequest"/> to send to the server.</param>

-        public void Write(MultiFetchRequest request)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            this.Write(request.RequestBuffer.GetBuffer());

-        }

-

-        /// <summary>

-        /// Writes an offset request to the server.

-        /// </summary>

-        /// <remarks>

-        /// The write timeout defaults to infinite.

-        /// </remarks>

-        /// <param name="request">The <see cref="OffsetRequest"/> to send to the server.</param>

-        public void Write(OffsetRequest request)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            this.Write(request.RequestBuffer.GetBuffer());

-        }

-

-        /// <summary>

-        /// Close the connection to the server.

-        /// </summary>

-        public void Dispose()

-        {

-            if (this.disposed)

-            {

-                return;

-            }

-

-            this.disposed = true;

-            if (this.client != null)

-            {

-                this.client.Close();

-            }

-        }

-

-        /// <summary>

-        /// Ensures that the object was not disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}
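For orientation, a sketch of the synchronous write path exposed by the connection class above. The ProducerRequest constructor arguments shown here are assumptions and are not part of this diff:

    byte[] payload = Encoding.UTF8.GetBytes("hello");
    // Buffer size and socket timeout values are arbitrary illustrations.
    using (var connection = new KafkaConnection("localhost", 9092, 65536, 5000))
    {
        var request = new ProducerRequest("test", 0, new List<Message> { new Message(payload) }); // hypothetical signature
        connection.Write(request);   // blocking send; BeginWrite is the asynchronous variant
    }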

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaException.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaException.cs
deleted file mode 100644
index ce90171..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/KafkaException.cs
+++ /dev/null
@@ -1,98 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-
-namespace Kafka.Client
-{
-    /// <summary>
-    /// A wrapping of an error code returned from Kafka.
-    /// </summary>
-    public class KafkaException : Exception
-    {
-        /// <summary>
-        /// No error occurred.
-        /// </summary>
-        public const int NoError = 0;
-
-        /// <summary>
-        /// The offset requested was out of range.
-        /// </summary>
-        public const int OffsetOutOfRangeCode = 1;
-
-        /// <summary>
-        /// The message was invalid.
-        /// </summary>
-        public const int InvalidMessageCode = 2;
-
-        /// <summary>
-        /// The wrong partition.
-        /// </summary>
-        public const int WrongPartitionCode = 3;
-
-        /// <summary>
-        /// Invalid message size.
-        /// </summary>
-        public const int InvalidRetchSizeCode = 4;
-
-        /// <summary>
-        /// Initializes a new instance of the KafkaException class.
-        /// </summary>
-        /// <param name="errorCode">The error code generated by a request to Kafka.</param>
-        public KafkaException(int errorCode) : base(GetMessage(errorCode))
-        {
-            ErrorCode = errorCode;
-        }
-
-        /// <summary>
-        /// Gets the error code that was sent from Kafka.
-        /// </summary>
-        public int ErrorCode { get; private set; }
-
-        /// <summary>
-        /// Gets the message for the exception based on the Kafka error code.
-        /// </summary>
-        /// <param name="errorCode">The error code from Kafka.</param>
-        /// <returns>A string message representation </returns>
-        private static string GetMessage(int errorCode)
-        {
-            if (errorCode == OffsetOutOfRangeCode)
-            {
-                return "Offset out of range";
-            }
-            else if (errorCode == InvalidMessageCode)
-            {
-                return "Invalid message";
-            }
-            else if (errorCode == WrongPartitionCode)
-            {
-                return "Wrong partition";
-            }
-            else if (errorCode == InvalidRetchSizeCode)
-            {
-                return "Invalid message size";
-            }
-            else
-            {
-                return "Unknown error";
-            }
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Message.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Message.cs
deleted file mode 100644
index 2c5bc1f..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Message.cs
+++ /dev/null
@@ -1,157 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-using System;
-using System.Linq;
-using System.Text;
-using Kafka.Client.Util;
-
-namespace Kafka.Client
-{
-    /// <summary>
-    /// Message for Kafka.
-    /// </summary>
-    /// <remarks>
-    /// A message. The format of an N byte message is the following:
-    /// <list type="bullet">
-    ///     <item>
-    ///         <description>1 byte "magic" identifier to allow format changes</description>
-    ///     </item>
-    ///     <item>
-    ///         <description>4 byte CRC32 of the payload</description>
-    ///     </item>
-    ///     <item>
-    ///         <description>N - 5 byte payload</description>
-    ///     </item>
-    /// </list>
-    /// </remarks>
-    public class Message
-    {
-        /// <summary>
-        /// Magic identifier for Kafka.
-        /// </summary>
-        private static readonly byte DefaultMagicIdentifier = 0;
-
-        /// <summary>
-        /// Initializes a new instance of the Message class.
-        /// </summary>
-        /// <remarks>
-        /// Uses the <see cref="DefaultMagicIdentifier"/> as a default.
-        /// </remarks>
-        /// <param name="payload">The data for the payload.</param>
-        public Message(byte[] payload) : this(payload, DefaultMagicIdentifier)
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the Message class.
-        /// </summary>
-        /// <remarks>
-        /// Initializes the checksum as null.  It will be automatically computed.
-        /// </remarks>
-        /// <param name="payload">The data for the payload.</param>
-        /// <param name="magic">The magic identifier.</param>
-        public Message(byte[] payload, byte magic) : this(payload, magic, null)
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the Message class.
-        /// </summary>
-        /// <param name="payload">The data for the payload.</param>
-        /// <param name="magic">The magic identifier.</param>
-        /// <param name="checksum">The checksum for the payload.</param>
-        public Message(byte[] payload, byte magic, byte[] checksum)
-        {
-            Payload = payload;
-            Magic = magic;
-            Checksum = checksum == null ? CalculateChecksum() : checksum;
-        }
-    
-        /// <summary>
-        /// Gets the magic bytes.
-        /// </summary>
-        public byte Magic { get; private set; }
-        
-        /// <summary>
-        /// Gets the CRC32 checksum for the payload.
-        /// </summary>
-        public byte[] Checksum { get; private set; }
-
-        /// <summary>
-        /// Gets the payload.
-        /// </summary>
-        public byte[] Payload { get; private set; }
-
-        /// <summary>
-        /// Parses a message from a byte array given the format Kafka likes. 
-        /// </summary>
-        /// <param name="data">The data for a message.</param>
-        /// <returns>The message.</returns>
-        public static Message ParseFrom(byte[] data)
-        {
-            int size = BitConverter.ToInt32(BitWorks.ReverseBytes(data.Take(4).ToArray<byte>()), 0);
-            byte magic = data[4];
-            byte[] checksum = data.Skip(5).Take(4).ToArray<byte>();
-            byte[] payload = data.Skip(9).Take(size).ToArray<byte>();
-
-            return new Message(payload, magic, checksum);
-        }
-
-        /// <summary>
-        /// Converts the message to bytes in the format Kafka likes.
-        /// </summary>
-        /// <returns>The byte array.</returns>
-        public byte[] GetBytes()
-        {
-            byte[] encodedMessage = new byte[Payload.Length + 1 + Checksum.Length];
-            encodedMessage[0] = Magic;
-            Buffer.BlockCopy(Checksum, 0, encodedMessage, 1, Checksum.Length);
-            Buffer.BlockCopy(Payload, 0, encodedMessage, 1 + Checksum.Length, Payload.Length);
-
-            return encodedMessage;
-        }
-
-        /// <summary>
-        /// Determines if the message is valid given the payload and its checksum.
-        /// </summary>
-        /// <returns>True if valid and false otherwise.</returns>
-        public bool IsValid()
-        {
-            return Checksum.SequenceEqual(CalculateChecksum());
-        }
-
-        /// <summary>
-        /// Try to show the payload as decoded to UTF-8.
-        /// </summary>
-        /// <returns>The decoded payload as string.</returns>
-        public override string ToString()
-        {
-            return Encoding.UTF8.GetString(Payload);
-        }
-
-        /// <summary>
-        /// Calculates the CRC32 checksum on the payload of the message.
-        /// </summary>
-        /// <returns>The checksum given the payload.</returns>
-        private byte[] CalculateChecksum()
-        { 
-            Crc32 crc32 = new Crc32();
-            return crc32.ComputeHash(Payload);
-        }
-    }
-}
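The wire layout described above (1-byte magic, 4-byte CRC32, payload) can be exercised directly with the members shown in this file:

    byte[] payload = Encoding.UTF8.GetBytes("hello kafka");
    Message message = new Message(payload);   // magic 0, checksum computed automatically
    byte[] wire = message.GetBytes();         // magic byte + CRC32 + payload (no length prefix)
    bool valid = message.IsValid();           // recomputes the CRC32 and compares it to the stored checksum
    Console.WriteLine("{0} ({1} bytes, valid={2})", message, wire.Length, valid);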
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/BoundedBuffer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/BoundedBuffer.cs
deleted file mode 100644
index 91d17a7..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/BoundedBuffer.cs
+++ /dev/null
@@ -1,38 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Messages
-{
-    using System.IO;
-
-    /// <summary>
-    /// Wrapper over a memory stream with fixed capacity
-    /// </summary>
-    internal class BoundedBuffer : MemoryStream
-    {
-        /// <summary>
-        /// Initializes a new instance of the <see cref="BoundedBuffer"/> class.
-        /// </summary>
-        /// <param name="size">
-        /// The maximum size of the stream.
-        /// </param>
-        public BoundedBuffer(int size)
-            : base(new byte[size], 0, size, true, true)
-        {
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/BufferedMessageSet.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/BufferedMessageSet.cs
deleted file mode 100644
index f090e71..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/BufferedMessageSet.cs
+++ /dev/null
@@ -1,407 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-using Kafka.Client.Exceptions;

-

-namespace Kafka.Client.Messages

-{

-    using System;

-    using System.Collections;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.IO;

-    using System.Linq;

-    using System.Reflection;

-    using System.Text;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Serialization;

-    using Kafka.Client.Utils;

-    using log4net;

-

-    /// <summary>

-    /// A collection of messages stored as memory stream

-    /// </summary>

-    public class BufferedMessageSet : MessageSet, IEnumerable<MessageAndOffset>, IEnumerator<MessageAndOffset>

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-        private MemoryStream topIter;

-        private int topIterPosition;

-        private long currValidBytes = 0;

-        private IEnumerator<MessageAndOffset> innerIter = null;

-        private long lastMessageSize = 0;

-        private long deepValidByteCount = -1;

-        private long shallowValidByteCount = -1;

-        private ConsumerIteratorState state = ConsumerIteratorState.NotReady;

-        private MessageAndOffset nextItem;

-

-        /// <summary>

-        /// Gets the error code

-        /// </summary>

-        public int ErrorCode { get; private set; }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="BufferedMessageSet"/> class.

-        /// </summary>

-        /// <param name="messages">

-        /// The list of messages.

-        /// </param>

-        public BufferedMessageSet(IEnumerable<Message> messages)

-            : this(messages, ErrorMapping.NoError)

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="BufferedMessageSet"/> class.

-        /// </summary>

-        /// <param name="messages">

-        /// The list of messages.

-        /// </param>

-        /// <param name="errorCode">

-        /// The error code.

-        /// </param>

-        public BufferedMessageSet(IEnumerable<Message> messages, int errorCode)

-        {

-            int length = GetMessageSetSize(messages);

-            this.Messages = messages;

-            this.ErrorCode = errorCode;

-            this.topIterPosition = 0;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="BufferedMessageSet"/> class with compression.

-        /// </summary>

-        /// <param name="compressionCodec">compression method</param>

-        /// <param name="messages">messages to add</param>

-        public BufferedMessageSet(CompressionCodecs compressionCodec, IEnumerable<Message> messages)

-        {

-            IEnumerable<Message> messagesToAdd;

-            switch (compressionCodec)

-            {

-                case CompressionCodecs.NoCompressionCodec:

-                    messagesToAdd = messages;

-                    break;

-                default:

-                    var message = CompressionUtils.Compress(messages, compressionCodec);

-                    messagesToAdd = new List<Message>() { message };

-                    break;

-            }

-

-            int length = GetMessageSetSize(messagesToAdd);

-            this.Messages = messagesToAdd;

-            this.ErrorCode = ErrorMapping.NoError;

-            this.topIterPosition = 0;

-        }

-

-        /// <summary>

-        /// Gets the list of messages.

-        /// </summary>

-        public IEnumerable<Message> Messages { get; private set; }

-

-        /// <summary>

-        /// Gets the total set size.

-        /// </summary>

-        public override int SetSize

-        {

-            get { return GetMessageSetSize(this.Messages); }

-        }

-

-        public MessageAndOffset Current

-        {

-            get

-            {

-                if (!MoveNext())

-                {

-                    throw new NoSuchElementException();

-                }

-

-                state = ConsumerIteratorState.NotReady;

-                if (nextItem != null)

-                {

-                    return nextItem;

-                }

-

-                throw new IllegalStateException("Expected item but none found.");

-            }

-        }

-

-        object IEnumerator.Current

-        {

-            get { return this.Current; }

-        }

-

-        /// <summary>

-        /// Writes content into given stream

-        /// </summary>

-        /// <param name="output">

-        /// The output stream.

-        /// </param>

-        public sealed override void WriteTo(MemoryStream output)

-        {

-            Guard.NotNull(output, "output");

-            using (var writer = new KafkaBinaryWriter(output))

-            {

-                this.WriteTo(writer);

-            }

-        }

-

-        /// <summary>

-        /// Writes content into given writer

-        /// </summary>

-        /// <param name="writer">

-        /// The writer.

-        /// </param>

-        public sealed override void WriteTo(KafkaBinaryWriter writer)

-        {

-            Guard.NotNull(writer, "writer");

-            foreach (var message in this.Messages)

-            {

-                writer.Write(message.Size);

-                message.WriteTo(writer);

-            }

-        }

-

-        /// <summary>

-        /// Gets string representation of set

-        /// </summary>

-        /// <returns>

-        /// String representation of set

-        /// </returns>

-        public override string ToString()

-        {

-            var sb = new StringBuilder();

-            int i = 1;

-            foreach (var message in this.Messages)

-            {

-                sb.Append("Message ");

-                sb.Append(i);

-                sb.Append(" {Length: ");

-                sb.Append(message.Size);

-                sb.Append(", ");

-                sb.Append(message.ToString());

-                sb.AppendLine("} ");

-                i++;

-            }

-

-            return sb.ToString();

-        }

-

-        internal static BufferedMessageSet ParseFrom(KafkaBinaryReader reader, int size)

-        {

-            if (size == 0)

-            {

-                return new BufferedMessageSet(Enumerable.Empty<Message>());

-            }

-

-            short errorCode = reader.ReadInt16();

-            if (errorCode != KafkaException.NoError)

-            {

-                throw new KafkaException(errorCode);

-            }

-

-            int readed = 2;

-            if (readed == size)

-            {

-                return new BufferedMessageSet(Enumerable.Empty<Message>());

-            }

-

-            var messages = new List<Message>();

-            do

-            {

-                int msgSize = reader.ReadInt32();

-                readed += 4;

-                Message msg = Message.ParseFrom(reader, msgSize);

-                readed += msgSize;

-                messages.Add(msg);

-            }

-            while (readed < size);

-            if (size != readed)

-            {

-                throw new KafkaException(KafkaException.InvalidRetchSizeCode);

-            }

-

-            return new BufferedMessageSet(messages);

-        }

-

-        internal static IList<BufferedMessageSet> ParseMultiFrom(KafkaBinaryReader reader, int size, int count)

-        {

-            var result = new List<BufferedMessageSet>();

-            if (size == 0)

-            {

-                return result;

-            }

-

-            int readed = 0;

-            short errorCode = reader.ReadInt16();

-            readed += 2;

-            if (errorCode != KafkaException.NoError)

-            {

-                throw new KafkaException(errorCode);

-            }

-

-            for (int i = 0; i < count; i++)

-            {

-                int partSize = reader.ReadInt32();

-                readed += 4;

-                var item = ParseFrom(reader, partSize);

-                readed += partSize;

-                result.Add(item);

-            }

-

-            if (size != readed)

-            {

-                throw new KafkaException(KafkaException.InvalidRetchSizeCode);

-            }

-

-            return result;

-        }

-

-        [Obsolete]

-        internal static BufferedMessageSet ParseFrom(byte[] bytes)

-        {

-            var messages = new List<Message>();

-            int processed = 0;

-            int length = bytes.Length - 4;

-            while (processed <= length)

-            {

-                int messageSize = BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Skip(processed).Take(4).ToArray()), 0);

-                messages.Add(Message.ParseFrom(bytes.Skip(processed).Take(messageSize + 4).ToArray()));

-                processed += 4 + messageSize;

-            }

-

-            return new BufferedMessageSet(messages);

-        }

-

-        public IEnumerator<MessageAndOffset> GetEnumerator()

-        {

-            return this;

-        }

-

-        IEnumerator IEnumerable.GetEnumerator()

-        {

-            return GetEnumerator();

-        }

-

-        private bool InnerDone()

-        {

-            return innerIter == null || !innerIter.MoveNext();

-        }

-

-        private MessageAndOffset MakeNextOuter()

-        {

-            if (topIterPosition >= this.Messages.Count())

-            {

-                return AllDone();

-            }

-

-            Message newMessage = this.Messages.ToList()[topIterPosition];

-            topIterPosition++;

-            switch (newMessage.CompressionCodec)

-            {

-                case CompressionCodecs.NoCompressionCodec:

-                    if (Logger.IsDebugEnabled)

-                    {

-                        Logger.DebugFormat(

-                            CultureInfo.CurrentCulture,

-                            "Message is uncompressed. Valid byte count = {0}",

-                            currValidBytes);

-                    }

-

-                    innerIter = null;

-                    currValidBytes += 4 + newMessage.Size;

-                    return new MessageAndOffset(newMessage, currValidBytes);

-                default:

-                    if (Logger.IsDebugEnabled)

-                    {

-                        Logger.DebugFormat(CultureInfo.CurrentCulture, "Message is compressed. Valid byte count = {0}", currValidBytes);

-                    }

-

-                    innerIter = CompressionUtils.Decompress(newMessage).GetEnumerator();

-                    return MakeNext();

-            }

-        }

-

-        private MessageAndOffset MakeNext()

-        {

-            if (Logger.IsDebugEnabled)

-            {

-                Logger.DebugFormat(CultureInfo.CurrentCulture, "MakeNext() in deepIterator: innerDone = {0}", InnerDone());

-            }

-

-            switch (InnerDone())

-            {

-                case true:

-                    return MakeNextOuter();

-                default:

-                    var messageAndOffset = innerIter.Current;

-                    if (!innerIter.MoveNext())

-                    {

-                        currValidBytes += 4 + lastMessageSize;

-                    }

-

-                    return new MessageAndOffset(messageAndOffset.Message, currValidBytes);

-            }

-        }

-

-        private MessageAndOffset AllDone()

-        {

-            state = ConsumerIteratorState.Done;

-            return null;

-        }

-

-        public void Dispose()

-        {

-        }

-

-        public bool MoveNext()

-        {

-            if (state == ConsumerIteratorState.Failed)

-            {

-                throw new IllegalStateException("Iterator is in failed state");

-            }

-

-            switch (state)

-            {

-                case ConsumerIteratorState.Done:

-                    return false;

-                case ConsumerIteratorState.Ready:

-                    return true;

-                default:

-                    return MaybeComputeNext();

-            }

-        }

-

-        private bool MaybeComputeNext()

-        {

-            state = ConsumerIteratorState.Failed;

-            nextItem = MakeNext();

-            if (state == ConsumerIteratorState.Done)

-            {

-                return false;

-            }

-            else

-            {

-                state = ConsumerIteratorState.Ready;

-                return true;

-            }

-        }

-

-        public void Reset()

-        {

-            this.topIterPosition = 0;

-        }

-    }

-}
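A hedged sketch of consuming a BufferedMessageSet: the set enumerates itself, and the deep iterator above transparently expands compressed wrapper messages via CompressionUtils.Decompress. The usual System, System.Text and Kafka.Client.Messages usings are assumed:

    var set = new BufferedMessageSet(new[]
    {
        new Message(Encoding.UTF8.GetBytes("first")),
        new Message(Encoding.UTF8.GetBytes("second")),
    });
    foreach (MessageAndOffset entry in set)          // MoveNext()/Current drive MakeNext() above
    {
        Console.WriteLine(entry.Message.ToString()); // payloads printed as UTF-8
    }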

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/Message.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/Message.cs
deleted file mode 100644
index d083a7c..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/Message.cs
+++ /dev/null
@@ -1,329 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Messages

-{

-    using System;

-    using System.IO;

-    using System.Linq;

-    using System.Text;

-    using Kafka.Client.Exceptions;

-    using Kafka.Client.Serialization;

-    using Kafka.Client.Utils;

-

-    /// <summary>

-    /// Message send to Kafaka server

-    /// </summary>

-    /// <remarks>

-    /// Format:

-    /// 1 byte "magic" identifier to allow format changes

-    /// 4 byte CRC32 of the payload

-    /// N - 5 byte payload

-    /// </remarks>

-    public class Message : IWritable

-    {

-        private const byte DefaultMagicValue = 1;

-        private const byte DefaultMagicLength = 1;

-        private const byte DefaultCrcLength = 4;

-        private const int DefaultHeaderSize = DefaultMagicLength + DefaultCrcLength;

-        private const byte CompressionCodeMask = 3;

-

-        public CompressionCodecs CompressionCodec

-        {

-            get

-            {

-                switch (Magic)

-                {

-                    case 0:

-                        return CompressionCodecs.NoCompressionCodec;

-                    case 1:

-                        return Messages.CompressionCodec.GetCompressionCodec(Attributes & CompressionCodeMask);

-                    default:

-                        throw new KafkaException(KafkaException.InvalidMessageCode);

-                }

-            }

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Message"/> class.

-        /// </summary>

-        /// <param name="payload">

-        /// The payload.

-        /// </param>

-        /// <param name="checksum">

-        /// The checksum.

-        /// </param>

-        /// <remarks>

-        /// Initializes with default magic number

-        /// </remarks>

-        public Message(byte[] payload, byte[] checksum)

-            : this(payload, checksum, CompressionCodecs.NoCompressionCodec)

-        {

-            Guard.NotNull(payload, "payload");

-            Guard.NotNull(checksum, "checksum");

-            Guard.Count(checksum, 4, "checksum");

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Message"/> class.

-        /// </summary>

-        /// <param name="payload">

-        /// The payload.

-        /// </param>

-        /// <remarks>

-        /// Initializes the magic number as default and the checksum as null. It will be automatically computed.

-        /// </remarks>

-        public Message(byte[] payload)

-            : this(payload, CompressionCodecs.NoCompressionCodec)

-        {

-            Guard.NotNull(payload, "payload");

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the Message class.

-        /// </summary>

-        /// <remarks>

-        /// Initializes the checksum as null.  It will be automatically computed.

-        /// </remarks>

-        /// <param name="payload">The data for the payload.</param>

-        /// <param name="magic">The magic identifier.</param>

-        public Message(byte[] payload, CompressionCodecs compressionCodec)

-            : this(payload, Crc32Hasher.Compute(payload), compressionCodec)

-        {

-            Guard.NotNull(payload, "payload");

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the Message class.

-        /// </summary>

-        /// <param name="payload">The data for the payload.</param>

-        /// <param name="magic">The magic identifier.</param>

-        /// <param name="checksum">The checksum for the payload.</param>

-        public Message(byte[] payload, byte[] checksum, CompressionCodecs compressionCodec)

-        {

-            Guard.NotNull(payload, "payload");

-            Guard.NotNull(checksum, "checksum");

-            Guard.Count(checksum, 4, "checksum");

-

-            int length = DefaultHeaderSize + payload.Length;

-            this.Payload = payload;

-            this.Magic = DefaultMagicValue;

-            

-            if (compressionCodec != CompressionCodecs.NoCompressionCodec)

-            {

-                this.Attributes |=

-                    (byte)(CompressionCodeMask & Messages.CompressionCodec.GetCompressionCodecValue(compressionCodec));

-            }

-

-            if (Magic == 1)

-            {

-                length++;

-            }

-

-            this.Checksum = checksum;

-            this.Size = length;

-        }

-

-        /// <summary>

-        /// Gets the payload.

-        /// </summary>

-        public byte[] Payload { get; private set; }

-

-        /// <summary>

-        /// Gets the magic bytes.

-        /// </summary>

-        public byte Magic { get; private set; }

-

-        /// <summary>

-        /// Gets the CRC32 checksum for the payload.

-        /// </summary>

-        public byte[] Checksum { get; private set; }

-

-        /// <summary>

-        /// Gets the Attributes for the message.

-        /// </summary>

-        public byte Attributes { get; private set; }

-

-        /// <summary>

-        /// Gets the total size of message.

-        /// </summary>

-        public int Size { get; private set; }

-

-        /// <summary>

-        /// Gets the payload size.

-        /// </summary>

-        public int PayloadSize

-        {

-            get

-            {

-                return this.Payload.Length;

-            }

-        }

-

-        /// <summary>

-        /// Writes message data into given message buffer

-        /// </summary>

-        /// <param name="output">

-        /// The output.

-        /// </param>

-        public void WriteTo(MemoryStream output)

-        {

-            Guard.NotNull(output, "output");

-

-            using (var writer = new KafkaBinaryWriter(output))

-            {

-                this.WriteTo(writer);

-            }

-        }

-

-        /// <summary>

-        /// Writes message data using given writer

-        /// </summary>

-        /// <param name="writer">

-        /// The writer.

-        /// </param>

-        public void WriteTo(KafkaBinaryWriter writer)

-        {

-            Guard.NotNull(writer, "writer");

-            writer.Write(this.Magic);

-            writer.Write(this.Attributes);

-            writer.Write(this.Checksum);

-            writer.Write(this.Payload);

-        }

-

-        /// <summary>

-        /// Try to show the payload as decoded to UTF-8.

-        /// </summary>

-        /// <returns>The decoded payload as string.</returns>

-        public override string ToString()

-        {

-            var sb = new StringBuilder();

-            sb.Append("Magic: ");

-            sb.Append(this.Magic);

-            if (this.Magic == 1)

-            {

-                sb.Append(", Attributes: ");

-                sb.Append(this.Attributes);

-            }

-

-            sb.Append(", Checksum: ");

-            for (int i = 0; i < 4; i++)

-            {

-                sb.Append("[");

-                sb.Append(this.Checksum[i]);

-                sb.Append("]");

-            }

-

-            sb.Append(", topic: ");

-            try

-            {

-                sb.Append(Encoding.UTF8.GetString(this.Payload));

-            }

-            catch (Exception)

-            {

-                sb.Append("n/a");

-            }

-

-            return sb.ToString();

-        }

-

-        [Obsolete("Use KafkaBinaryReader instead")]

-        public static Message FromMessageBytes(byte[] data)

-        {

-            byte magic = data[0];

-            byte[] checksum;

-            byte[] payload;

-            byte attributes;

-            if (magic == (byte)1)

-            {

-                attributes = data[1];

-                checksum = data.Skip(2).Take(4).ToArray();

-                payload = data.Skip(6).ToArray();

-                return new Message(payload, checksum, Messages.CompressionCodec.GetCompressionCodec(attributes & CompressionCodeMask));

-            }

-            else

-            {

-                checksum = data.Skip(1).Take(4).ToArray();

-                payload = data.Skip(5).ToArray();

-                return new Message(payload, checksum);

-            }

-        }

-

-        internal static Message ParseFrom(KafkaBinaryReader reader, int size)

-        {

-            Message result;

-            int readed = 0;

-            byte magic = reader.ReadByte();

-            readed++;

-            byte[] checksum;

-            byte[] payload;

-            if (magic == 1)

-            {

-                byte attributes = reader.ReadByte();

-                readed++;

-                checksum = reader.ReadBytes(4);

-                readed += 4;

-                payload = reader.ReadBytes(size - (DefaultHeaderSize + 1));

-                readed += size - (DefaultHeaderSize + 1);

-                result = new Message(payload, checksum, Messages.CompressionCodec.GetCompressionCodec(attributes & CompressionCodeMask));

-            }

-            else

-            {

-                checksum = reader.ReadBytes(4);

-                readed += 4;

-                payload = reader.ReadBytes(size - DefaultHeaderSize);

-                readed += size - DefaultHeaderSize;

-                result = new Message(payload, checksum);

-            }

-

-            if (size != readed)

-            {

-                throw new KafkaException(KafkaException.InvalidRetchSizeCode);

-            }

-

-            return result;

-        }

-

-        /// <summary>

-        /// Parses a message from a byte array given the format Kafka likes. 

-        /// </summary>

-        /// <param name="data">The data for a message.</param>

-        /// <returns>The message.</returns>

-        [Obsolete("Use KafkaBinaryReader instead")]

-        public static Message ParseFrom(byte[] data)

-        {

-            int size = BitConverter.ToInt32(BitWorks.ReverseBytes(data.Take(4).ToArray()), 0);

-            byte magic = data[4];

-            byte[] checksum;

-            byte[] payload;

-            byte attributes;

-            if (magic == 1)

-            {

-                attributes = data[5];

-                checksum = data.Skip(6).Take(4).ToArray();

-                payload = data.Skip(10).Take(size).ToArray();

-                return new Message(payload, checksum, Messages.CompressionCodec.GetCompressionCodec(attributes & CompressionCodeMask));

-            }

-            else

-            {

-                checksum = data.Skip(5).Take(4).ToArray();

-                payload = data.Skip(9).Take(size).ToArray();

-                return new Message(payload, checksum);

-            }

-        }

-    }

-}
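For the default magic value of 1, Size works out to 1 (magic) + 1 (attributes) + 4 (CRC32) + payload length, and WriteTo emits the fields in exactly that order. A small serialization sketch, assuming the usual usings:

    var message = new Message(Encoding.UTF8.GetBytes("test-event"), CompressionCodecs.NoCompressionCodec);
    // message.Size == 6 + payload.Length when magic is 1
    using (var stream = new MemoryStream(message.Size))
    {
        message.WriteTo(stream);   // magic, attributes, checksum, then payload
    }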

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/MessageSet.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/MessageSet.cs
deleted file mode 100644
index d4cee22..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Messages/MessageSet.cs
+++ /dev/null
@@ -1,91 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Messages
-{
-    using System;
-    using System.Collections.Generic;
-    using System.IO;
-    using System.Linq;
-    using Kafka.Client.Serialization;
-    using Kafka.Client.Utils;
-
-    /// <summary>
-    /// A set of messages. A message set has a fixed serialized form, though the container
-    /// for the bytes could be either in-memory or on disk.
-    /// </summary>
-    /// <remarks>
-    /// Format:
-    /// 4 byte size containing an integer N
-    /// N message bytes as described in the message class
-    /// </remarks>
-    public abstract class MessageSet : IWritable
-    {
-        protected const byte DefaultMessageLengthSize = 4;
-
-        /// <summary>
-        /// Gives the size of a size-delimited entry in a message set
-        /// </summary>
-        /// <param name="message">
-        /// The message.
-        /// </param>
-        /// <returns>
-        /// Size of message
-        /// </returns>
-        public static int GetEntrySize(Message message)
-        {
-            Guard.NotNull(message, "message");
-
-            return message.Size + DefaultMessageLengthSize;
-        }
-
-        /// <summary>
-        /// Gives the size of a list of messages
-        /// </summary>
-        /// <param name="messages">
-        /// The messages.
-        /// </param>
-        /// <returns>
-        /// Size of all messages
-        /// </returns>
-        public static int GetMessageSetSize(IEnumerable<Message> messages)
-        {
-            return messages == null ? 0 : messages.Sum(x => GetEntrySize(x));
-        }
-
-        /// <summary>
-        /// Gets the total size of this message set in bytes
-        /// </summary>
-        public abstract int SetSize { get; }
-
-        /// <summary>
-        /// Writes content into given stream
-        /// </summary>
-        /// <param name="output">
-        /// The output stream.
-        /// </param>
-        public abstract void WriteTo(MemoryStream output);
-
-        /// <summary>
-        /// Writes content into given writer
-        /// </summary>
-        /// <param name="writer">
-        /// The writer.
-        /// </param>
-        public abstract void WriteTo(KafkaBinaryWriter writer);
-    }
-}
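The arithmetic above means every entry costs a 4-byte length prefix plus Message.Size bytes. A worked sketch, assuming magic-1 messages whose Size is 6 + payload length:

    var messages = new List<Message>
    {
        new Message(Encoding.UTF8.GetBytes("a")),   // Size = 6 + 1 = 7
        new Message(Encoding.UTF8.GetBytes("bb")),  // Size = 6 + 2 = 8
    };
    int total = MessageSet.GetMessageSetSize(messages);   // (4 + 7) + (4 + 8) = 23 bytes on the wire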

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producer.cs
deleted file mode 100644
index c64013e..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producer.cs
+++ /dev/null
@@ -1,152 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-using System;
-using System.Collections.Generic;
-using System.Text;
-using Kafka.Client.Request;
-using Kafka.Client.Util;
-
-namespace Kafka.Client
-{
-    /// <summary>
-    /// Sends message to Kafka.
-    /// </summary>
-    public class Producer
-    {
-        /// <summary>
-        /// Initializes a new instance of the Producer class.
-        /// </summary>
-        /// <param name="server">The server to connect to.</param>
-        /// <param name="port">The port to connect to.</param>
-        public Producer(string server, int port)
-        {
-            Server = server;
-            Port = port;
-        }
-
-        /// <summary>
-        /// Gets the server to which the connection is to be established.
-        /// </summary>
-        public string Server { get; private set; }
-
-        /// <summary>
-        /// Gets the port to which the connection is to be established.
-        /// </summary>
-        public int Port { get; private set; }
-
-        /// <summary>
-        /// Sends a message to Kafka.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="msg">The message to send.</param>
-        public void Send(string topic, int partition, Message msg)
-        {
-            Send(topic, partition, new List<Message> { msg });
-        }
-
-        /// <summary>
-        /// Sends a list of messages to Kafka.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="messages">The list of messages to send.</param>
-        public void Send(string topic, int partition, IList<Message> messages)
-        {
-            Send(new ProducerRequest(topic, partition, messages));
-        }
-
-        /// <summary>
-        /// Sends a request to Kafka.
-        /// </summary>
-        /// <param name="request">The request to send to Kafka.</param>
-        public void Send(ProducerRequest request)
-        {
-            if (request.IsValid())
-            {
-                using (KafkaConnection connection = new KafkaConnection(Server, Port))
-                {
-                    connection.Write(request);
-                }
-            }
-        }
-
-        /// <summary>
-        /// Sends a request to Kafka.
-        /// </summary>
-        /// <param name="request">The request to send to Kafka.</param>
-        public void Send(MultiProducerRequest request)
-        {
-            if (request.IsValid())
-            {
-                using (KafkaConnection connection = new KafkaConnection(Server, Port))
-                {
-                    connection.Write(request);
-                }
-            }
-        }
-
-        /// <summary>
-        /// Sends a list of messages to Kafka.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="messages">The list of messages to send.</param>
-        /// <param name="callback">
-        /// A block of code to execute once the request has been sent to Kafka.  This value may 
-        /// be set to null.
-        /// </param>
-        public void SendAsync(string topic, int partition, IList<Message> messages, MessageSent<ProducerRequest> callback)
-        {
-            SendAsync(new ProducerRequest(topic, partition, messages), callback);
-        }
-
-        /// <summary>
-        /// Send a request to Kafka asynchronously.
-        /// </summary>
-        /// <remarks>
-        /// If the callback is not specified then the method behaves as a fire-and-forget call
-        /// with the callback being ignored.  By the time this callback is executed, the 
-        /// <see cref="RequestContext.NetworkStream"/> will already have been closed given an 
-        /// internal call <see cref="NetworkStream.EndWrite"/>.
-        /// </remarks>
-        /// <param name="request">The request to send to Kafka.</param>
-        /// <param name="callback">
-        /// A block of code to execute once the request has been sent to Kafka.  This value may 
-        /// be set to null.
-        /// </param>
-        public void SendAsync(ProducerRequest request, MessageSent<ProducerRequest> callback)
-        {
-            if (request.IsValid())
-            {
-                KafkaConnection connection = new KafkaConnection(Server, Port);
-
-                if (callback == null)
-                {
-                    // fire and forget
-                    connection.BeginWrite(request.GetBytes());
-                }
-                else
-                {
-                    // execute with callback
-                    connection.BeginWrite(request, callback);
-                }
-            }
-        }
-    }
-}
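A hedged usage sketch for this older synchronous Producer; the host, port and topic are placeholders, and a payload-only Message constructor from the legacy Kafka.Client namespace is assumed:

    var producer = new Producer("localhost", 9092);
    var message = new Message(Encoding.UTF8.GetBytes("hello kafka"));   // payload-only ctor assumed
    producer.Send("test-topic", 0, message);                            // wrapped in a ProducerRequest and written synchronously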
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/AsyncProducer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/AsyncProducer.cs
deleted file mode 100644
index d0af95f..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/AsyncProducer.cs
+++ /dev/null
@@ -1,207 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Async

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Linq;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Utils;

-

-    /// <summary>

-    /// Sends messages encapsulated in request to Kafka server asynchronously

-    /// </summary>

-    public class AsyncProducer : IAsyncProducer

-    {

-        private readonly ICallbackHandler callbackHandler;

-        private readonly KafkaConnection connection;

-        private volatile bool disposed;

-

-        /// <summary>

-        /// Gets producer config

-        /// </summary>

-        public AsyncProducerConfiguration Config { get; private set; }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="AsyncProducer"/> class.

-        /// </summary>

-        /// <param name="config">

-        /// The producer config.

-        /// </param>

-        public AsyncProducer(AsyncProducerConfiguration config)

-            : this(

-                config,

-                ReflectionHelper.Instantiate<ICallbackHandler>(config.CallbackHandlerClass))

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="AsyncProducer"/> class.

-        /// </summary>

-        /// <param name="config">

-        /// The producer config.

-        /// </param>

-        /// <param name="callbackHandler">

-        /// The callback invoked when a request is finished being sent.

-        /// </param>

-        public AsyncProducer(

-            AsyncProducerConfiguration config,

-            ICallbackHandler callbackHandler)

-        {

-            Guard.NotNull(config, "config");

-

-            this.Config = config;

-            this.callbackHandler = callbackHandler;

-            this.connection = new KafkaConnection(

-                this.Config.Host,

-                this.Config.Port,

-                this.Config.BufferSize,

-                this.Config.SocketTimeout);

-        }

-

-        /// <summary>

-        /// Sends request to Kafka server asynchronously

-        /// </summary>

-        /// <param name="request">

-        /// The request.

-        /// </param>

-        public void Send(ProducerRequest request)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            Guard.Assert<ArgumentException>(() => request.MessageSet.Messages.All(x => x.PayloadSize <= this.Config.MaxMessageSize));

-            if (this.callbackHandler != null)

-            {

-                this.Send(request, this.callbackHandler.Handle);

-            }

-            else

-            {

-                this.connection.BeginWrite(request);

-            }

-        }

-

-        /// <summary>

-        /// Sends request to Kafka server asynchronously

-        /// </summary>

-        /// <param name="request">

-        /// The request.

-        /// </param>

-        /// <param name="callback">

-        /// The callback invoked when a request is finished being sent.

-        /// </param>

-        public void Send(ProducerRequest request, MessageSent<ProducerRequest> callback)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(request, "request");

-            Guard.NotNull(request.MessageSet, "request.MessageSet");

-            Guard.NotNull(request.MessageSet.Messages, "request.MessageSet.Messages");

-            Guard.Assert<ArgumentException>(

-                () => request.MessageSet.Messages.All(x => x.PayloadSize <= this.Config.MaxMessageSize));

-

-            connection.BeginWrite(request, callback);

-        }

-

-        /// <summary>

-        /// Constructs request and sent it to Kafka server asynchronously

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="partition">

-        /// The partition.

-        /// </param>

-        /// <param name="messages">

-        /// The list of messages to sent.

-        /// </param>

-        public void Send(string topic, int partition, IEnumerable<Message> messages)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNullNorEmpty(topic, "topic");

-            Guard.NotNull(messages, "messages");

-            Guard.Assert<ArgumentException>(() => messages.All(x => x.PayloadSize <= this.Config.MaxMessageSize));

-

-            this.Send(new ProducerRequest(topic, partition, messages));

-        }

-

-        /// <summary>

-        /// Constructs request and sent it to Kafka server asynchronously

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="partition">

-        /// The partition.

-        /// </param>

-        /// <param name="messages">

-        /// The list of messages to sent.

-        /// </param>

-        /// <param name="callback">

-        /// The callback invoked when a request is finished being sent.

-        /// </param>

-        public void Send(string topic, int partition, IEnumerable<Message> messages, MessageSent<ProducerRequest> callback)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNullNorEmpty(topic, "topic");

-            Guard.NotNull(messages, "messages");

-            Guard.Assert<ArgumentException>(() => messages.All(x => x.PayloadSize <= this.Config.MaxMessageSize));

-

-            this.Send(new ProducerRequest(topic, partition, messages), callback);

-        }

-

-        /// <summary>

-        /// Releases all unmanaged and managed resources

-        /// </summary>

-        public void Dispose()

-        {

-            this.Dispose(true);

-            GC.SuppressFinalize(this);

-        }

-

-        protected virtual void Dispose(bool disposing)

-        {

-            if (!disposing)

-            {

-                return;

-            }

-

-            if (this.disposed)

-            {

-                return;

-            }

-

-            this.disposed = true;

-            if (this.connection != null)

-            {

-                this.connection.Dispose();

-            }

-        }

-

-        /// <summary>

-        /// Ensures that object was not disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}
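A minimal sketch of driving AsyncProducer; the configuration object is built elsewhere (its shape is not shown in this hunk), and the topic and partition values are placeholders:

    static void SendOneAsync(AsyncProducerConfiguration config)
    {
        using (var producer = new AsyncProducer(config))      // IAsyncProducer is IDisposable
        {
            var messages = new[] { new Message(Encoding.UTF8.GetBytes("async hello")) };
            producer.Send("test-topic", 0, messages,
                context => Console.WriteLine("request handed to BeginWrite"));   // MessageSent<ProducerRequest> callback
        }
    }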

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/AsyncProducerPool.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/AsyncProducerPool.cs
deleted file mode 100644
index b022998..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/AsyncProducerPool.cs
+++ /dev/null
@@ -1,224 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Async

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Linq;

-    using System.Reflection;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Serialization;

-    using Kafka.Client.Utils;

-    using log4net;

-

-    /// <summary>

-    /// Pool of asynchronous producers used by high-level API

-    /// </summary>

-    /// <typeparam name="TData">The type of the data.</typeparam>

-    internal class AsyncProducerPool<TData> : ProducerPool<TData>

-        where TData : class 

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-        private readonly IDictionary<int, IAsyncProducer> asyncProducers;

-        private volatile bool disposed;

-        

-        /// <summary>

-        /// Factory method used to instantiating asynchronous producer pool

-        /// </summary>

-        /// <param name="config">

-        /// The asynchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <param name="cbkHandler">

-        /// The callback invoked after new broker is added.

-        /// </param>

-        /// <returns>

-        /// Instantiated asynchronous producer pool

-        /// </returns>

-        public static AsyncProducerPool<TData> CreateAsyncPool(

-            ProducerConfiguration config, 

-            IEncoder<TData> serializer, 

-            ICallbackHandler cbkHandler)

-        {

-            return new AsyncProducerPool<TData>(config, serializer, cbkHandler);

-        }

-

-        /// <summary>

-        /// Factory method used to instantiating asynchronous producer pool

-        /// </summary>

-        /// <param name="config">

-        /// The asynchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <returns>

-        /// Instantiated asynchronous producer pool

-        /// </returns>

-        public static AsyncProducerPool<TData> CreateAsyncPool(

-            ProducerConfiguration config,

-            IEncoder<TData> serializer)

-        {

-            return new AsyncProducerPool<TData>(config, serializer);

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="AsyncProducerPool{TData}"/> class. 

-        /// </summary>

-        /// <param name="config">

-        /// The asynchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <param name="asyncProducers">

-        /// The list of asynchronous producers.

-        /// </param>

-        /// <param name="cbkHandler">

-        /// The callback invoked after new broker is added.

-        /// </param>

-        /// <remarks>

-        /// Should be used for testing purpose only

-        /// </remarks>

-        private AsyncProducerPool(

-            ProducerConfiguration config, 

-            IEncoder<TData> serializer, 

-            IDictionary<int, IAsyncProducer> asyncProducers, 

-            ICallbackHandler cbkHandler)

-            : base(config, serializer, cbkHandler)

-        {

-            this.asyncProducers = asyncProducers;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="AsyncProducerPool{TData}"/> class. 

-        /// </summary>

-        /// <param name="config">

-        /// The asynchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <param name="cbkHandler">

-        /// The callback invoked after new broker is added.

-        /// </param>

-        private AsyncProducerPool(

-            ProducerConfiguration config, 

-            IEncoder<TData> serializer, 

-            ICallbackHandler cbkHandler)

-            : this(config, serializer, new Dictionary<int, IAsyncProducer>(), cbkHandler)

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="AsyncProducerPool{TData}"/> class. 

-        /// </summary>

-        /// <param name="config">

-        /// The asynchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        private AsyncProducerPool(ProducerConfiguration config, IEncoder<TData> serializer)

-            : this(

-            config,

-            serializer,

-            new Dictionary<int, IAsyncProducer>(),

-            ReflectionHelper.Instantiate<ICallbackHandler>(config.CallbackHandlerClass))

-        {

-        }

-

-        /// <summary>

-        /// Selects an asynchronous producer, for

-        /// the specified broker id and calls the send API on the selected

-        /// producer to publish the data to the specified broker partition.

-        /// </summary>

-        /// <param name="poolData">The producer pool request object.</param>

-        /// <remarks>

-        /// Used for multi-topic request

-        /// </remarks>

-        public override void Send(IEnumerable<ProducerPoolData<TData>> poolData)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(poolData, "poolData");

-            Dictionary<int, List<ProducerPoolData<TData>>> distinctBrokers = poolData.GroupBy(

-                x => x.BidPid.BrokerId, x => x)

-                .ToDictionary(x => x.Key, x => x.ToList());

-            foreach (var broker in distinctBrokers)

-            {

-                Logger.DebugFormat(CultureInfo.CurrentCulture, "Fetching async producer for broker id: {0}", broker.Key);

-                var producer = this.asyncProducers[broker.Key];

-                IEnumerable<ProducerRequest> requests = broker.Value.Select(x => new ProducerRequest(

-                    x.Topic,

-                    x.BidPid.PartId,

-                    new BufferedMessageSet(x.Data.Select(y => this.Serializer.ToMessage(y)))));

-                foreach (var request in requests)

-                {

-                    producer.Send(request);

-                }

-            }

-        }

-

-        /// <summary>

-        /// Add a new asynchronous producer to the pool.

-        /// </summary>

-        /// <param name="broker">The broker informations.</param>

-        public override void AddProducer(Broker broker)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(broker, "broker");

-            var asyncConfig = new AsyncProducerConfiguration(this.Config, broker.Id, broker.Host, broker.Port)

-                {

-                    SerializerClass = this.Config.SerializerClass

-                };

-            var asyncProducer = new AsyncProducer(asyncConfig, this.CallbackHandler);

-            Logger.InfoFormat(

-                CultureInfo.CurrentCulture,

-                "Creating async producer for broker id = {0} at {1}:{2}",

-                broker.Id,

-                broker.Host,

-                broker.Port);

-            this.asyncProducers.Add(broker.Id, asyncProducer);

-        }

-

-        protected override void Dispose(bool disposing)

-        {

-            if (!disposing)

-            {

-                return;

-            }

-

-            if (this.disposed)

-            {

-                return;

-            }

-

-            this.disposed = true;

-            foreach (var asyncProducer in this.asyncProducers.Values)

-            {

-                asyncProducer.Dispose();

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/IAsyncProducer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/IAsyncProducer.cs
deleted file mode 100644
index 8366c70..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/IAsyncProducer.cs
+++ /dev/null
@@ -1,80 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Producers.Async
-{
-    using System;
-    using System.Collections.Generic;
-    using Kafka.Client.Messages;
-    using Kafka.Client.Requests;
-
-    /// <summary>
-    /// Sends messages encapsulated in request to Kafka server asynchronously
-    /// </summary>
-    public interface IAsyncProducer : IDisposable
-    {
-        /// <summary>
-        /// Sends request to Kafka server asynchronously
-        /// </summary>
-        /// <param name="request">
-        /// The request.
-        /// </param>
-        void Send(ProducerRequest request);
-
-        /// <summary>
-        /// Sends request to Kafka server asynchronously
-        /// </summary>
-        /// <param name="request">
-        /// The request.
-        /// </param>
-        /// <param name="callback">
-        /// The callback invoked when a request is finished being sent.
-        /// </param>
-        void Send(ProducerRequest request, MessageSent<ProducerRequest> callback);
-
-        /// <summary>
-        /// Constructs request and sent it to Kafka server asynchronously
-        /// </summary>
-        /// <param name="topic">
-        /// The topic.
-        /// </param>
-        /// <param name="partition">
-        /// The partition.
-        /// </param>
-        /// <param name="messages">
-        /// The list of messages to sent.
-        /// </param>
-        void Send(string topic, int partition, IEnumerable<Message> messages);
-
-        /// <summary>
-        /// Constructs request and sent it to Kafka server asynchronously
-        /// </summary>
-        /// <param name="topic">
-        /// The topic.
-        /// </param>
-        /// <param name="partition">
-        /// The partition.
-        /// </param>
-        /// <param name="messages">
-        /// The list of messages to sent.
-        /// </param>
-        /// <param name="callback">
-        /// The callback invoked when a request is finished being sent.
-        /// </param>
-        void Send(string topic, int partition, IEnumerable<Message> messages, MessageSent<ProducerRequest> callback);
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/ICallbackHandler.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/ICallbackHandler.cs
deleted file mode 100644
index 95e80cf..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/ICallbackHandler.cs
+++ /dev/null
@@ -1,35 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Async

-{

-    using Kafka.Client.Requests;

-

-    /// <summary>

-    /// Performs action when a producer request is finished being sent asynchronously.

-    /// </summary>

-    public interface ICallbackHandler

-    {

-        /// <summary>

-        /// Performs an action when a producer request is finished being sent asynchronously.

-        /// </summary>

-        /// <param name="context">

-        /// The sent request context.

-        /// </param>

-        void Handle(RequestContext<ProducerRequest> context);

-    }

-}
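
A minimal, purely illustrative sketch of an ICallbackHandler implementation; it only counts completed requests and is not part of the original client.

using System.Threading;
using Kafka.Client.Producers.Async;
using Kafka.Client.Requests;

// Illustrative only: a handler that counts completed producer requests.
public class CountingCallbackHandler : ICallbackHandler
{
    private int completed;

    public int Completed
    {
        get { return this.completed; }
    }

    public void Handle(RequestContext<ProducerRequest> context)
    {
        Interlocked.Increment(ref this.completed);
    }
}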

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/MessageSent.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/MessageSent.cs
deleted file mode 100644
index 6cc7dfa..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Async/MessageSent.cs
+++ /dev/null
@@ -1,31 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Async

-{

-    using Kafka.Client.Requests;

-

-    /// <summary>

-    /// Callback made when a message request is finished being sent asynchronously.

-    /// </summary>

-    /// <typeparam name="T">

-    /// Must be of type <see cref="AbstractRequest"/> and represents the type of message 

-    /// sent to Kafka.

-    /// </typeparam>

-    /// <param name="request">The request that was sent to the server.</param>

-    public delegate void MessageSent<T>(RequestContext<T> request) where T : AbstractRequest;

-}
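
For reference, a MessageSent&lt;ProducerRequest&gt; instance can be built from any matching method or lambda; the sketch below is illustrative and introduces no assumptions beyond the delegate signature above.

using System;
using Kafka.Client.Producers.Async;
using Kafka.Client.Requests;

public static class MessageSentSketch
{
    // Any method or lambda taking a RequestContext<ProducerRequest> satisfies the delegate.
    public static MessageSent<ProducerRequest> CreateLoggingCallback()
    {
        return context => Console.WriteLine("Producer request finished sending.");
    }
}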

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/IProducer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/IProducer.cs
deleted file mode 100644
index 060ccc6..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/IProducer.cs
+++ /dev/null
@@ -1,46 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers

-{

-    using System;

-    using System.Collections.Generic;

-

-    /// <summary>

-    /// High-level Producer API that exposes all the producer functionality to the client through a single API

-    /// </summary>

-    /// <typeparam name="TKey">The type of the key.</typeparam>

-    /// <typeparam name="TData">The type of the data.</typeparam>

-    public interface IProducer<TKey, TData> : IDisposable

-        where TKey : class

-        where TData : class 

-    {

-        /// <summary>

-        /// Sends the data to a single topic, partitioned by key, using either the

-        /// synchronous or the asynchronous producer.

-        /// </summary>

-        /// <param name="data">The producer data object that encapsulates the topic, key and message data.</param>

-        void Send(ProducerData<TKey, TData> data);

-

-        /// <summary>

-        /// Sends the data to multiple topics, partitioned by key, using either the

-        /// synchronous or the asynchronous producer.

-        /// </summary>

-        /// <param name="data">The producer data object that encapsulates the topic, key and message data.</param>

-        void Send(IEnumerable<ProducerData<TKey, TData>> data);

-    }

-}
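
A short, hedged sketch of how the high-level interface above would be driven. The producer instance is assumed to come from the concrete Producer classes later in this diff, and Message is assumed to take a raw byte[] payload; topic names and keys are placeholders.

using System.Text;
using Kafka.Client.Messages;
using Kafka.Client.Producers;

public static class ProducerInterfaceSketch
{
    public static void SendExamples(IProducer<string, Message> producer)
    {
        // Assumes Message wraps a raw byte[] payload.
        var message = new Message(Encoding.UTF8.GetBytes("payload"));

        // Single topic, partitioned by the placeholder key "user-42".
        producer.Send(new ProducerData<string, Message>("clicks", "user-42", new[] { message }));

        // Batch across multiple topics in one call.
        producer.Send(new[]
        {
            new ProducerData<string, Message>("clicks", "user-42", new[] { message }),
            new ProducerData<string, Message>("views", new[] { message })
        });
    }
}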

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/IProducerPool.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/IProducerPool.cs
deleted file mode 100644
index 2a1a491..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/IProducerPool.cs
+++ /dev/null
@@ -1,58 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers

-{

-    using System;

-    using System.Collections.Generic;

-    using Kafka.Client.Cluster;

-

-    /// <summary>

-    /// Pool of producers used by producer high-level API

-    /// </summary>

-    /// <typeparam name="TData">The type of the data.</typeparam>

-    internal interface IProducerPool<TData> : IDisposable

-    {

-        /// <summary>

-        /// Selects either a synchronous or an asynchronous producer, for

-        /// the specified broker id and calls the send API on the selected

-        /// producer to publish the data to the specified broker partition.

-        /// </summary>

-        /// <param name="poolData">The producer pool request object.</param>

-        /// <remarks>

-        /// Used for single-topic requests

-        /// </remarks>

-        void Send(ProducerPoolData<TData> poolData);

-

-        /// <summary>

-        /// Selects either a synchronous or an asynchronous producer, for

-        /// the specified broker id and calls the send API on the selected

-        /// producer to publish the data to the specified broker partition.

-        /// </summary>

-        /// <param name="poolData">The producer pool request object.</param>

-        /// <remarks>

-        /// Used for multi-topic requests

-        /// </remarks>

-        void Send(IEnumerable<ProducerPoolData<TData>> poolData);

-

-        /// <summary>

-        /// Add a new producer, either synchronous or asynchronous, to the pool

-        /// </summary>

-        /// <param name="broker">The broker information.</param>

-        void AddProducer(Broker broker);

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/ConfigBrokerPartitionInfo.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/ConfigBrokerPartitionInfo.cs
deleted file mode 100644
index 92e15b9..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/ConfigBrokerPartitionInfo.cs
+++ /dev/null
@@ -1,119 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Partitioning

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Utils;

-

-    /// <summary>

-    /// Fetch broker info like ID, host and port from configuration.

-    /// </summary>

-    /// <remarks>

-    /// Used when zookeeper based auto partition discovery is disabled

-    /// </remarks>

-    internal class ConfigBrokerPartitionInfo : IBrokerPartitionInfo

-    {

-        private readonly ProducerConfiguration config;

-        private IDictionary<int, Broker> brokers;

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ConfigBrokerPartitionInfo"/> class.

-        /// </summary>

-        /// <param name="config">The config.</param>

-        public ConfigBrokerPartitionInfo(ProducerConfiguration config)

-        {

-            Guard.NotNull(config, "config");

-            this.config = config;

-            this.InitializeBrokers();

-        }

-

-        /// <summary>

-        /// Gets a mapping from broker ID to the host and port for all brokers

-        /// </summary>

-        /// <returns>

-        /// Mapping from broker ID to the host and port for all brokers

-        /// </returns>

-        public IDictionary<int, Broker> GetAllBrokerInfo()

-        {

-            return this.brokers;

-        }

-

-        /// <summary>

-        /// Gets a mapping from broker ID to partition IDs

-        /// </summary>

-        /// <param name="topic">The topic for which this information is to be returned</param>

-        /// <returns>

-        /// Mapping from broker ID to partition IDs

-        /// </returns>

-        /// <remarks>Partition ID will always be 0</remarks>

-        public SortedSet<Partition> GetBrokerPartitionInfo(string topic)

-        {

-            Guard.NotNullNorEmpty(topic, "topic");

-            var partitions = new SortedSet<Partition>();

-            foreach (var item in this.brokers)

-            {

-                partitions.Add(new Partition(item.Key, 0));

-            }

-

-            return partitions;

-        }

-

-        /// <summary>

-        /// Gets the host and port information for the broker identified by the given broker ID

-        /// </summary>

-        /// <param name="brokerId">The broker ID.</param>

-        /// <returns>

-        /// Host and port of broker

-        /// </returns>

-        public Broker GetBrokerInfo(int brokerId)

-        {

-            return this.brokers.ContainsKey(brokerId) ? this.brokers[brokerId] : null;

-        }

-

-        /// <summary>

-        /// Releasing unmanaged resources if any are used.

-        /// </summary>

-        /// <remarks>Do nothing</remarks>

-        public void Dispose()

-        {

-        }

-

-        /// <summary>

-        /// Initializes the list of brokers from configuration

-        /// </summary>

-        private void InitializeBrokers()

-        {

-            if (this.brokers != null)

-            {

-                return;

-            }

-

-            this.brokers = new Dictionary<int, Broker>();

-            foreach (var item in this.config.Brokers)

-            {

-                this.brokers.Add(

-                    item.BrokerId, 

-                    new Broker(item.BrokerId, item.Host, item.Host, item.Port));

-            }

-        }

-    }

-}
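
A small sketch of how this class would be queried, assuming a ProducerConfiguration that carries a static broker list with ZooKeeper disabled. Because the class is internal, code like this would have to live inside the client assembly or in a test project granted InternalsVisibleTo.

using System;
using Kafka.Client.Cfg;
using Kafka.Client.Producers.Partitioning;

internal static class ConfigBrokerPartitionInfoSketch
{
    // 'config' is assumed to carry a static broker list (ZooKeeper disabled).
    public static void DumpPartitions(ProducerConfiguration config)
    {
        using (var partitionInfo = new ConfigBrokerPartitionInfo(config))
        {
            // With static configuration every broker contributes exactly one partition, id 0.
            foreach (var partition in partitionInfo.GetBrokerPartitionInfo("some-topic"))
            {
                Console.WriteLine(partition);
            }
        }
    }
}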

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/DefaultPartitioner.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/DefaultPartitioner.cs
deleted file mode 100644
index 6efdb0c..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/DefaultPartitioner.cs
+++ /dev/null
@@ -1,50 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Partitioning

-{

-    using System;

-    using Kafka.Client.Utils;

-

-    /// <summary>

-    /// Default partitioner using hash code to calculate partition

-    /// </summary>

-    /// <typeparam name="TKey">The type of the key.</typeparam>

-    public class DefaultPartitioner<TKey> : IPartitioner<TKey>

-        where TKey : class 

-    {

-        private static readonly Random Randomizer = new Random();

-

-        /// <summary>

-        /// Uses the key to calculate a partition bucket id for routing

-        /// the data to the appropriate broker partition

-        /// </summary>

-        /// <param name="key">The key.</param>

-        /// <param name="numPartitions">The number of partitions.</param>

-        /// <returns>ID between 0 and numPartitions-1</returns>

-        /// <remarks>

-        /// Uses the hash code to calculate the partition

-        /// </remarks>

-        public int Partition(TKey key, int numPartitions)

-        {

-            Guard.Greater(numPartitions, 0, "numPartitions");

-            return key == null 

-                ? Randomizer.Next(numPartitions) 

-                : key.GetHashCode() % numPartitions;

-        }

-    }

-}
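
A sketch of the hash-based routing described above, including the caveat that GetHashCode() may be negative, in which case the raw modulo is negative as well; the defensive variant shown is only an illustration, not what the class does.

using System;
using Kafka.Client.Producers.Partitioning;

public static class DefaultPartitionerSketch
{
    public static void Demo()
    {
        var partitioner = new DefaultPartitioner<string>();

        // Hash code modulo 4; note the result can be negative when GetHashCode() is negative.
        int partition = partitioner.Partition("user-42", 4);

        // A defensive variant (not what DefaultPartitioner does) that always lands in [0, 4):
        int safePartition = (("user-42".GetHashCode() % 4) + 4) % 4;

        Console.WriteLine(partition + " / " + safePartition);
    }
}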

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/IBrokerPartitionInfo.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/IBrokerPartitionInfo.cs
deleted file mode 100644
index 25ee080..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/IBrokerPartitionInfo.cs
+++ /dev/null
@@ -1,49 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Partitioning

-{

-    using System;

-    using System.Collections.Generic;

-    using Kafka.Client.Cluster;

-

-    /// <summary>

-    /// Retrieves brokers and partitions info

-    /// </summary>

-    internal interface IBrokerPartitionInfo : IDisposable

-    {

-        /// <summary>

-        /// Gets a mapping from broker ID to the host and port for all brokers

-        /// </summary>

-        /// <returns>Mapping from broker ID to the host and port for all brokers</returns>

-        IDictionary<int, Broker> GetAllBrokerInfo();

-

-        /// <summary>

-        /// Gets a mapping from broker ID to partition IDs

-        /// </summary>

-        /// <param name="topic">The topic for which this information is to be returned</param>

-        /// <returns>Mapping from broker ID to partition IDs</returns>

-        SortedSet<Partition> GetBrokerPartitionInfo(string topic);

-

-        /// <summary>

-        /// Gets the host and port information for the broker identified by the given broker ID

-        /// </summary>

-        /// <param name="brokerId">The broker ID.</param>

-        /// <returns>Host and port of broker</returns>

-        Broker GetBrokerInfo(int brokerId);

-    }

-}
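
Because the interface is small, it is easy to stub for tests. The sketch below is illustrative: it assumes Broker and Partition keep the constructor shapes used elsewhere in this diff, and it would need to live where these internal types are visible.

using System.Collections.Generic;
using Kafka.Client.Cluster;
using Kafka.Client.Producers.Partitioning;

// Illustrative in-memory stub, e.g. for unit tests; assumes the Broker(id, creatorId, host, port)
// and Partition(brokerId, partId) constructor shapes seen elsewhere in this diff.
internal class StaticBrokerPartitionInfo : IBrokerPartitionInfo
{
    private readonly IDictionary<int, Broker> brokers = new Dictionary<int, Broker>
    {
        { 0, new Broker(0, "localhost", "localhost", 9092) }
    };

    public IDictionary<int, Broker> GetAllBrokerInfo()
    {
        return this.brokers;
    }

    public SortedSet<Partition> GetBrokerPartitionInfo(string topic)
    {
        return new SortedSet<Partition> { new Partition(0, 0) };
    }

    public Broker GetBrokerInfo(int brokerId)
    {
        return this.brokers.ContainsKey(brokerId) ? this.brokers[brokerId] : null;
    }

    public void Dispose()
    {
    }
}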

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/IPartitioner.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/IPartitioner.cs
deleted file mode 100644
index 34b5666..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/IPartitioner.cs
+++ /dev/null
@@ -1,36 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Partitioning

-{

-    /// <summary>

-    /// User-defined partitioner

-    /// </summary>

-    /// <typeparam name="TKey">The type of the key.</typeparam>

-    public interface IPartitioner<TKey>

-        where TKey : class 

-    {

-        /// <summary>

-        /// Uses the key to calculate a partition bucket id for routing

-        /// the data to the appropriate broker partition

-        /// </summary>

-        /// <param name="numPartitions">The number of partitions.</param>

-        /// <param name="numPartitions">The num partitions.</param>

-        /// <returns>ID between 0 and numPartitions-1</returns>

-        int Partition(TKey key, int numPartitions);

-    }

-}
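
A hedged sketch of a custom partitioner for string keys shaped like "prefix-number", routing by the numeric suffix and falling back to a non-negative hash bucket otherwise; the key format is an assumption made only for the example.

using System.Globalization;
using Kafka.Client.Producers.Partitioning;

// Illustrative custom partitioner: routes by a numeric key suffix when present.
public class SuffixPartitioner : IPartitioner<string>
{
    public int Partition(string key, int numPartitions)
    {
        if (key != null)
        {
            int dash = key.LastIndexOf('-');
            int suffix;
            if (dash >= 0 && int.TryParse(key.Substring(dash + 1), NumberStyles.Integer, CultureInfo.InvariantCulture, out suffix))
            {
                return ((suffix % numPartitions) + numPartitions) % numPartitions;
            }
        }

        int hash = key == null ? 0 : key.GetHashCode();
        return ((hash % numPartitions) + numPartitions) % numPartitions;
    }
}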

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/ZKBrokerPartitionInfo.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/ZKBrokerPartitionInfo.cs
deleted file mode 100644
index 6c1a8b3..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Partitioning/ZKBrokerPartitionInfo.cs
+++ /dev/null
@@ -1,342 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Partitioning

-{

-    using System;

-    using System.Collections.Concurrent;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Reflection;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration;

-    using Kafka.Client.ZooKeeperIntegration.Events;

-    using Kafka.Client.ZooKeeperIntegration.Listeners;

-    using log4net;

-    using ZooKeeperNet;

-

-    /// <summary>

-    /// Fetch broker info like ID, host, port and number of partitions from ZooKeeper.

-    /// </summary>

-    /// <remarks>

-    /// Used when zookeeper based auto partition discovery is enabled

-    /// </remarks>

-    internal class ZKBrokerPartitionInfo : IBrokerPartitionInfo, IZooKeeperStateListener

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);   

-        private readonly Action<int, string, int> callback;

-        private IDictionary<int, Broker> brokers;

-        private IDictionary<string, SortedSet<Partition>> topicBrokerPartitions;

-        private readonly IZooKeeperClient zkclient;

-        private readonly BrokerTopicsListener brokerTopicsListener;

-        private volatile bool disposed;

-        private readonly object shuttingDownLock = new object();

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZKBrokerPartitionInfo"/> class.

-        /// </summary>

-        /// <param name="zkclient">The wrapper above ZooKeeper client.</param>

-        public ZKBrokerPartitionInfo(IZooKeeperClient zkclient)

-        {

-            this.zkclient = zkclient;

-            this.zkclient.Connect();

-            this.InitializeBrokers();

-            this.InitializeTopicBrokerPartitions();

-            this.brokerTopicsListener = new BrokerTopicsListener(this.zkclient, this.topicBrokerPartitions, this.brokers, this.callback);

-            this.RegisterListeners();

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZKBrokerPartitionInfo"/> class.

-        /// </summary>

-        /// <param name="config">The config.</param>

-        /// <param name="callback">The callback invoked when new broker is added.</param>

-        public ZKBrokerPartitionInfo(ProducerConfiguration config, Action<int, string, int> callback)

-            : this(new ZooKeeperClient(config.ZooKeeper.ZkConnect, config.ZooKeeper.ZkSessionTimeoutMs, ZooKeeperStringSerializer.Serializer))

-        {

-            this.callback = callback;

-        }

-

-        /// <summary>

-        /// Gets a mapping from broker ID to the host and port for all brokers

-        /// </summary>

-        /// <returns>

-        /// Mapping from broker ID to the host and port for all brokers

-        /// </returns>

-        public IDictionary<int, Broker> GetAllBrokerInfo()

-        {

-            this.EnsuresNotDisposed();

-            return this.brokers;

-        }

-

-        /// <summary>

-        /// Gets a mapping from broker ID to partition IDs

-        /// </summary>

-        /// <param name="topic">The topic for which this information is to be returned</param>

-        /// <returns>

-        /// Mapping from broker ID to partition IDs

-        /// </returns>

-        public SortedSet<Partition> GetBrokerPartitionInfo(string topic)

-        {

-            Guard.NotNullNorEmpty(topic, "topic");

-

-            this.EnsuresNotDisposed();

-            SortedSet<Partition> brokerPartitions = null;

-            if (this.topicBrokerPartitions.ContainsKey(topic))

-            {

-                brokerPartitions = this.topicBrokerPartitions[topic];

-            }

-            else

-            {

-                this.topicBrokerPartitions.Add(topic, null);

-            }

-

-            if (brokerPartitions == null || brokerPartitions.Count == 0)

-            {

-                var numBrokerPartitions = this.BootstrapWithExistingBrokers(topic);

-                this.topicBrokerPartitions[topic] = numBrokerPartitions;

-                return numBrokerPartitions;

-            }

-

-            return brokerPartitions;

-        }

-

-        /// <summary>

-        /// Gets the host and port information for the broker identified by the given broker ID

-        /// </summary>

-        /// <param name="brokerId">The broker ID.</param>

-        /// <returns>

-        /// Host and port of broker

-        /// </returns>

-        public Broker GetBrokerInfo(int brokerId)

-        {

-            this.EnsuresNotDisposed();

-            return this.brokers.ContainsKey(brokerId) ? this.brokers[brokerId] : null;

-        }

-

-        /// <summary>

-        /// Closes underlying connection to ZooKeeper

-        /// </summary>

-        public void Dispose()

-        {

-            if (this.disposed)

-            {

-                return;

-            }

-

-            lock (this.shuttingDownLock)

-            {

-                if (this.disposed)

-                {

-                    return;

-                }

-

-                this.disposed = true;

-            }

-

-            try

-            {

-                if (this.zkclient != null)

-                {

-                    this.zkclient.Dispose();

-                }

-            }

-            catch (Exception exc)

-            {

-                Logger.Warn("Ignoring unexpected errors on closing", exc);

-            }

-        }

-

-        /// <summary>

-        /// Initializes the list of brokers.

-        /// </summary>

-        private void InitializeBrokers()

-        {

-            if (this.brokers != null)

-            {

-                return;

-            }

-

-            this.brokers = new ConcurrentDictionary<int, Broker>();

-            IList<string> brokerIds = this.zkclient.GetChildrenParentMayNotExist(ZooKeeperClient.DefaultBrokerIdsPath);

-            foreach (var brokerId in brokerIds)

-            {

-                string path = ZooKeeperClient.DefaultBrokerIdsPath + "/" + brokerId;

-                int id = int.Parse(brokerId, CultureInfo.InvariantCulture);

-                var info = this.zkclient.ReadData<string>(path, null);

-                string[] parts = info.Split(':');

-                int port = int.Parse(parts[2], CultureInfo.InvariantCulture);

-                this.brokers.Add(id, new Broker(id, parts[0], parts[1], port));

-            }

-        }

-

-        /// <summary>

-        /// Initializes the topic-to-broker-partitions mappings.

-        /// </summary>

-        private void InitializeTopicBrokerPartitions()

-        {

-            if (this.topicBrokerPartitions != null)

-            {

-                return;

-            }

-

-            this.topicBrokerPartitions = new ConcurrentDictionary<string, SortedSet<Partition>>();

-            this.zkclient.MakeSurePersistentPathExists(ZooKeeperClient.DefaultBrokerTopicsPath);

-            IList<string> topics = this.zkclient.GetChildrenParentMayNotExist(ZooKeeperClient.DefaultBrokerTopicsPath);

-            foreach (string topic in topics)

-            {

-                string brokerTopicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + topic;

-                IList<string> brokersPerTopic = this.zkclient.GetChildrenParentMayNotExist(brokerTopicPath);

-                var brokerPartitions = new SortedDictionary<int, int>();

-                foreach (string brokerId in brokersPerTopic)

-                {

-                    string path = brokerTopicPath + "/" + brokerId;

-                    var numPartitionsPerBrokerAndTopic = this.zkclient.ReadData<string>(path);

-                    brokerPartitions.Add(int.Parse(brokerId, CultureInfo.InvariantCulture), int.Parse(numPartitionsPerBrokerAndTopic, CultureInfo.CurrentCulture));

-                }              

-

-                var brokerParts = new SortedSet<Partition>();

-                foreach (var brokerPartition in brokerPartitions)

-                {

-                    for (int i = 0; i < brokerPartition.Value; i++)

-                    {

-                        var bidPid = new Partition(brokerPartition.Key, i);

-                        brokerParts.Add(bidPid);

-                    }

-                }

-

-                this.topicBrokerPartitions.Add(topic, brokerParts);

-            }

-        }

-

-        /// <summary>

-        /// Adds all available brokers, with a default of one partition each, for the new topic, so all of the brokers

-        /// participate in hosting this topic

-        /// </summary>

-        /// <param name="topic">The new topic.</param>

-        /// <returns>Default partitions for the new topic</returns>

-        /// <remarks>

-        /// Since we do not have the information about the number of partitions on these brokers, just assume a single partition

-        /// and pick partition 0 from each broker as a candidate

-        /// </remarks>

-        private SortedSet<Partition> BootstrapWithExistingBrokers(string topic)

-        {

-            Logger.Debug("Currently, no brokers are registered under topic: " + topic);

-            Logger.Debug("Bootstrapping topic: " + topic + " with available brokers in the cluster with default "

-                + "number of partitions = 1");

-            var numBrokerPartitions = new SortedSet<Partition>();

-            var allBrokers = this.zkclient.GetChildrenParentMayNotExist(ZooKeeperClient.DefaultBrokerIdsPath);

-            Logger.Debug("List of all brokers currently registered in zookeeper -> " + string.Join(", ", allBrokers));

-            foreach (var broker in allBrokers)

-            {

-                numBrokerPartitions.Add(new Partition(int.Parse(broker, CultureInfo.InvariantCulture), 0));

-            }

-

-            Logger.Debug("Adding following broker id, partition id for NEW topic: " + topic + " -> " + string.Join(", ", numBrokerPartitions));

-            return numBrokerPartitions;

-        }

-

-        /// <summary>

-        /// Registers the listeners under several paths in ZooKeeper

-        /// to keep related data structures updated.

-        /// </summary>

-        /// <remarks>

-        /// Watches the following paths:

-        /// /broker/topics

-        /// /broker/topics/[topic]

-        /// /broker/ids

-        /// </remarks>

-        private void RegisterListeners()

-        {

-            this.zkclient.Subscribe(ZooKeeperClient.DefaultBrokerTopicsPath, this.brokerTopicsListener);

-            Logger.Debug("Registering listener on path: " + ZooKeeperClient.DefaultBrokerTopicsPath);

-            foreach (string topic in this.topicBrokerPartitions.Keys)

-            {

-                string path = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + topic;

-                this.zkclient.Subscribe(path, this.brokerTopicsListener);

-                Logger.Debug("Registering listener on path: " + path);

-            }

-

-            this.zkclient.Subscribe(ZooKeeperClient.DefaultBrokerIdsPath, this.brokerTopicsListener);

-            Logger.Debug("Registering listener on path: " + ZooKeeperClient.DefaultBrokerIdsPath);

-

-            this.zkclient.Subscribe(this);

-            Logger.Debug("Registering listener on state changed event");

-        }

-

-        /// <summary>

-        /// Resets the related data structures

-        /// </summary>

-        private void Reset()

-        {

-            this.topicBrokerPartitions = null;

-            this.brokers = null;

-            this.InitializeBrokers();

-            this.InitializeTopicBrokerPartitions();

-        }

-

-        /// <summary>

-        /// Ensures that object was not disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-

-        /// <summary>

-        /// Called when the ZooKeeper connection state has changed.

-        /// </summary>

-        /// <param name="args">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperStateChangedEventArgs"/> instance containing the event data.</param>

-        /// <remarks>

-        /// Do nothing, since zkclient will do reconnect for us.

-        /// </remarks>

-        public void HandleStateChanged(ZooKeeperStateChangedEventArgs args)

-        {

-            Guard.NotNull(args, "args");

-            Guard.Assert<ArgumentException>(() => args.State != KeeperState.Unknown);

-

-            this.EnsuresNotDisposed();

-            Logger.Debug("Handle state change: do nothing, since zkclient will do reconnect for us.");

-        }

-

-        /// <summary>

-        /// Called after the ZooKeeper session has expired and a new session has been created.

-        /// </summary>

-        /// <param name="args">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperSessionCreatedEventArgs"/> instance containing the event data.</param>

-        /// <remarks>

-        /// We would have to re-create any ephemeral nodes here.

-        /// </remarks>

-        public void HandleSessionCreated(ZooKeeperSessionCreatedEventArgs args)

-        {

-            Guard.NotNull(args, "args");

-

-            this.EnsuresNotDisposed();

-            Logger.Debug("ZK expired; release old list of broker partitions for topics ");

-            this.Reset();

-            this.brokerTopicsListener.ResetState();

-            foreach (var topic in this.topicBrokerPartitions.Keys)

-            {

-                this.zkclient.Subscribe(ZooKeeperClient.DefaultBrokerTopicsPath + "/" + topic, this.brokerTopicsListener);   

-            }

-        }

-    }

-}
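
The class above reads each broker registration as a colon-separated string and splits it into three parts; below is a standalone sketch of that parsing, with the meaning of the first two parts (creator id and host) taken as an assumption.

using System.Globalization;
using Kafka.Client.Cluster;

public static class BrokerRegistrationSketch
{
    // Mirrors the parsing above: the value under /brokers/ids/<id> is split on ':' into
    // (assumed) creator id, host and port.
    public static Broker Parse(int id, string registration)
    {
        string[] parts = registration.Split(':');
        int port = int.Parse(parts[2], CultureInfo.InvariantCulture);
        return new Broker(id, parts[0], parts[1], port);
    }
}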

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Producer.StrMsg.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Producer.StrMsg.cs
deleted file mode 100644
index efb8a6f..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Producer.StrMsg.cs
+++ /dev/null
@@ -1,96 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers

-{

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Producers.Async;

-    using Kafka.Client.Producers.Partitioning;

-    using Kafka.Client.Serialization;

-

-    /// <summary>

-    /// High-level Producer API that exposes all the producer functionality to the client 

-    /// using <see cref="System.String" /> as type of key and <see cref="Message" /> as type of data

-    /// </summary>

-    public class Producer : Producer<string, Message>

-    {

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Producer"/> class.

-        /// </summary>

-        /// <param name="config">The config object.</param>

-        /// <param name="partitioner">The partitioner that implements <see cref="IPartitioner&lt;String&gt;" /> 

-        /// used to supply a custom partitioning strategy based on the message key.</param>

-        /// <param name="producerPool">Pool of producers, one per broker.</param>

-        /// <param name="populateProducerPool">if set to <c>true</c>, producers should be populated.</param>

-        /// <remarks>

-        /// Should be used for testing purpose only.

-        /// </remarks>

-        internal Producer(ProducerConfiguration config, IPartitioner<string> partitioner, IProducerPool<Message> producerPool, bool populateProducerPool)

-            : base(config, partitioner, producerPool, populateProducerPool)

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Producer"/> class.

-        /// </summary>

-        /// <param name="config">The config object.</param>

-        /// <remarks>

-        /// Can be used when all config parameters will be specified through the config object

-        /// and will be instantiated via reflection

-        /// </remarks>

-        public Producer(ProducerConfiguration config)

-            : base(config)

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Producer"/> class.

-        /// </summary>

-        /// <param name="config">The config object.</param>

-        /// <param name="partitioner">The partitioner that implements <see cref="IPartitioner&lt;String&gt;" /> 

-        /// used to supply a custom partitioning strategy based on the message key.</param>

-        /// <param name="encoder">The encoder that implements <see cref="IEncoder&lt;Message&gt;" /></param>

-        /// <param name="callbackHandler">The callback handler that implements <see cref="ICallbackHandler" />, used 

-        /// to supply callback invoked when sending asynchronous request is completed.</param>

-        /// <remarks>

-        /// Can be used to provide pre-instantiated objects for all config parameters

-        /// that would otherwise be instantiated via reflection.

-        /// </remarks>

-        public Producer(ProducerConfiguration config, IPartitioner<string> partitioner, IEncoder<Message> encoder, ICallbackHandler callbackHandler)

-            : base(config, partitioner, encoder, callbackHandler)

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Producer"/> class.

-        /// </summary>

-        /// <param name="config">The config object.</param>

-        /// <param name="partitioner">The partitioner that implements <see cref="IPartitioner&lt;TKey&gt;" /> 

-        /// used to supply a custom partitioning strategy based on the message key.</param>

-        /// <param name="encoder">The encoder that implements <see cref="IEncoder&lt;Message&gt;" /> 

-        /// </param>

-        /// <remarks>

-        /// Can be used to provide pre-instantiated objects for all config parameters

-        /// that would otherwise be instantiated via reflection.

-        /// </remarks>

-        public Producer(ProducerConfiguration config, IPartitioner<string> partitioner, IEncoder<Message> encoder)

-            : base(config, partitioner, encoder)

-        {

-        }

-    }

-}
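
A brief usage sketch for the string/Message producer above; it assumes a fully populated ProducerConfiguration and that Message accepts a raw byte[] payload. The topic and key are placeholders.

using System.Text;
using Kafka.Client.Cfg;
using Kafka.Client.Messages;
using Kafka.Client.Producers;

public static class StringMessageProducerSketch
{
    // 'config' is assumed to be a fully populated ProducerConfiguration.
    public static void SendOne(ProducerConfiguration config)
    {
        using (var producer = new Producer(config))
        {
            var message = new Message(Encoding.UTF8.GetBytes("hello"));
            producer.Send(new ProducerData<string, Message>("test-topic", "key-1", new[] { message }));
        }
    }
}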

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Producer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Producer.cs
deleted file mode 100644
index 35c1d4c..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Producer.cs
+++ /dev/null
@@ -1,337 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Linq;

-    using System.Reflection;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Producers.Async;

-    using Kafka.Client.Producers.Partitioning;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Serialization;

-    using Kafka.Client.Utils;

-    using log4net;

-

-    /// <summary>

-    /// High-level Producer API that exposes all the producer functionality to the client

-    /// </summary>

-    /// <typeparam name="TKey">The type of the key.</typeparam>

-    /// <typeparam name="TData">The type of the data.</typeparam>

-    /// <remarks>

-    /// Provides serialization of data through a user-specified encoder, ZooKeeper-based automatic broker discovery

-    /// and software load balancing through an optionally user-specified partitioner

-    /// </remarks>

-    public class Producer<TKey, TData> : KafkaClientBase, IProducer<TKey, TData>

-        where TKey : class 

-        where TData : class 

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);       

-        private static readonly Random Randomizer = new Random();

-        private readonly ProducerConfiguration config;

-        private readonly IProducerPool<TData> producerPool;

-        private readonly IPartitioner<TKey> partitioner;

-        private readonly bool populateProducerPool;

-        private readonly IBrokerPartitionInfo brokerPartitionInfo;

-        private volatile bool disposed;

-        private readonly object shuttingDownLock = new object();

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Producer&lt;TKey, TData&gt;"/> class.

-        /// </summary>

-        /// <param name="config">The config object.</param>

-        /// <param name="partitioner">The partitioner that implements <see cref="IPartitioner&lt;TKey&gt;" /> 

-        /// used to supply a custom partitioning strategy based on the message key.</param>

-        /// <param name="producerPool">Pool of producers, one per broker.</param>

-        /// <param name="populateProducerPool">if set to <c>true</c>, producers should be populated.</param>

-        /// <remarks>

-        /// Should be used for testing purpose only.

-        /// </remarks>

-        internal Producer(

-            ProducerConfiguration config,

-            IPartitioner<TKey> partitioner,

-            IProducerPool<TData> producerPool,

-            bool populateProducerPool = true)

-        {

-            Guard.NotNull(config, "config");

-            Guard.NotNull(producerPool, "producerPool");

-

-            this.config = config;

-            this.partitioner = partitioner ?? new DefaultPartitioner<TKey>();

-            this.populateProducerPool = populateProducerPool;

-            this.producerPool = producerPool;

-            if (this.config.IsZooKeeperEnabled)

-            {

-                this.brokerPartitionInfo = new ZKBrokerPartitionInfo(this.config, this.Callback);

-            }

-            else

-            {

-                this.brokerPartitionInfo = new ConfigBrokerPartitionInfo(this.config);   

-            }

-

-            if (this.populateProducerPool)

-            {

-                IDictionary<int, Broker> allBrokers = this.brokerPartitionInfo.GetAllBrokerInfo();

-                foreach (var broker in allBrokers)

-                {

-                    this.producerPool.AddProducer(

-                        new Broker(broker.Key, broker.Value.Host, broker.Value.Host, broker.Value.Port));

-                }

-            }

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Producer&lt;TKey, TData&gt;"/> class.

-        /// </summary>

-        /// <param name="config">The config object.</param>

-        /// <remarks>

-        /// Can be used when all config parameters will be specified through the config object

-        /// and will be instantiated via reflection

-        /// </remarks>

-        public Producer(ProducerConfiguration config)

-            : this(

-                config, 

-                ReflectionHelper.Instantiate<IPartitioner<TKey>>(config.PartitionerClass),

-                ProducerPool<TData>.CreatePool(config, ReflectionHelper.Instantiate<IEncoder<TData>>(config.SerializerClass)))

-        {

-            Guard.NotNull(config, "config");

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Producer&lt;TKey, TData&gt;"/> class.

-        /// </summary>

-        /// <param name="config">The config object.</param>

-        /// <param name="partitioner">The partitioner that implements <see cref="IPartitioner&lt;TKey&gt;" /> 

-        /// used to supply a custom partitioning strategy based on the message key.</param>

-        /// <param name="encoder">The encoder that implements <see cref="IEncoder&lt;TData&gt;" /> 

-        /// used to convert an object of type TData to <see cref="Message" />.</param>

-        /// <param name="callbackHandler">The callback handler that implements <see cref="ICallbackHandler" />, used 

-        /// to supply callback invoked when sending asynchronous request is completed.</param>

-        /// <remarks>

-        /// Can be used to provide pre-instantiated objects for all config parameters

-        /// that would otherwise be instantiated via reflection.

-        /// </remarks>

-        public Producer(

-            ProducerConfiguration config,

-            IPartitioner<TKey> partitioner,

-            IEncoder<TData> encoder,

-            ICallbackHandler callbackHandler)

-            : this(

-                config, 

-                partitioner,

-                ProducerPool<TData>.CreatePool(config, encoder, callbackHandler))

-        {

-            Guard.NotNull(config, "config");

-            Guard.NotNull(encoder, "encoder");

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="Producer&lt;TKey, TData&gt;"/> class.

-        /// </summary>

-        /// <param name="config">The config object.</param>

-        /// <param name="partitioner">The partitioner that implements <see cref="IPartitioner&lt;TKey&gt;" /> 

-        /// used to supply a custom partitioning strategy based on the message key.</param>

-        /// <param name="encoder">The encoder that implements <see cref="IEncoder&lt;TData&gt;" /> 

-        /// used to convert an object of type TData to <see cref="Message" />.</param>

-        /// <remarks>

-        /// Can be used to provide pre-instantiated objects for all config parameters

-        /// that would otherwise be instantiated via reflection.

-        /// </remarks>

-        public Producer(

-            ProducerConfiguration config,

-            IPartitioner<TKey> partitioner,

-            IEncoder<TData> encoder)

-            : this(

-                config, 

-                partitioner,

-                ProducerPool<TData>.CreatePool(config, encoder, null))

-        {

-            Guard.NotNull(config, "config");

-            Guard.NotNull(encoder, "encoder");

-        }

-

-        /// <summary>

-        /// Sends the data to multiple topics, partitioned by key, using either the

-        /// synchronous or the asynchronous producer.

-        /// </summary>

-        /// <param name="data">The producer data objects that encapsulate the topic, key and message data.</param>

-        public void Send(IEnumerable<ProducerData<TKey, TData>> data)

-        {

-            Guard.NotNull(data, "data");

-            Guard.Greater(data.Count(), 0, "data");

-

-            this.EnsuresNotDisposed();

-            var poolRequests = new List<ProducerPoolData<TData>>();

-            foreach (var dataItem in data)

-            {

-                Partition partition = this.GetPartition(dataItem);

-                var poolRequest = new ProducerPoolData<TData>(dataItem.Topic, partition, dataItem.Data);

-                poolRequests.Add(poolRequest);

-            }

-

-            this.producerPool.Send(poolRequests);

-        }

-

-        /// <summary>

-        /// Sends the data to a single topic, partitioned by key, using either the

-        /// synchronous or the asynchronous producer.

-        /// </summary>

-        /// <param name="data">The producer data object that encapsulates the topic, key and message data.</param>

-        public void Send(ProducerData<TKey, TData> data)

-        {

-            Guard.NotNull(data, "data");

-            Guard.NotNullNorEmpty(data.Topic, "data.Topic");

-            Guard.NotNull(data.Data, "data.Data");

-            Guard.Greater(data.Data.Count(), 0, "data.Data");

-

-            this.EnsuresNotDisposed();

-            this.Send(new[] { data });

-        }

-

-        protected override void Dispose(bool disposing)

-        {

-            if (!disposing)

-            {

-                return;

-            }

-

-            if (this.disposed)

-            {

-                return;

-            }

-

-            lock (this.shuttingDownLock)

-            {

-                if (this.disposed)

-                {

-                    return;

-                }

-

-                this.disposed = true;

-            }

-

-            try

-            {

-                if (this.brokerPartitionInfo != null)

-                {

-                    this.brokerPartitionInfo.Dispose();

-                }

-

-                if (this.producerPool != null)

-                {

-                    this.producerPool.Dispose();

-                }

-            }

-            catch (Exception exc)

-            {

-                Logger.Warn("Ignoring unexpected errors on closing", exc);

-            }

-        }

-

-        /// <summary>

-        /// Callback to add a new producer to the producer pool.

-        /// Used by <see cref="ZKBrokerPartitionInfo" /> on registration of new broker in ZooKeeper

-        /// </summary>

-        /// <param name="bid">The broker Id.</param>

-        /// <param name="host">The broker host address.</param>

-        /// <param name="port">The broker port.</param>

-        private void Callback(int bid, string host, int port)

-        {

-            Guard.NotNullNorEmpty(host, "host");

-            Guard.Greater(port, 0, "port");

-

-            if (this.populateProducerPool)

-            {

-                this.producerPool.AddProducer(new Broker(bid, host, host, port));

-            }

-            else

-            {

-                Logger.Debug("Skipping the callback since populating producers is off");

-            }

-        }

-

-        /// <summary>

-        /// Retrieves the partition id based on the key using the given partitioner, or selects a random partition if the key is null

-        /// </summary>

-        /// <param name="key">The partition key.</param>

-        /// <param name="numPartitions">The total number of available partitions.</param>

-        /// <returns>Partition Id</returns>

-        private int GetPartitionId(TKey key, int numPartitions)

-        {

-            Guard.Greater(numPartitions, 0, "numPartitions");

-            return key == null 

-                ? Randomizer.Next(numPartitions) 

-                : this.partitioner.Partition(key, numPartitions);

-        }

-

-        /// <summary>

-        /// Gets the partition for topic.

-        /// </summary>

-        /// <param name="dataItem">The producer data object that encapsulates the topic, key and message data.</param>

-        /// <returns>Partition for topic</returns>

-        private Partition GetPartition(ProducerData<TKey, TData> dataItem)

-        {

-            Logger.DebugFormat(

-                CultureInfo.CurrentCulture,

-                "Getting the number of broker partitions registered for topic: {0}",

-                dataItem.Topic);

-            SortedSet<Partition> brokerPartitions = this.brokerPartitionInfo.GetBrokerPartitionInfo(dataItem.Topic);

-            int totalNumPartitions = brokerPartitions.Count;

-            Logger.DebugFormat(

-                CultureInfo.CurrentCulture,

-                "Broker partitions registered for topic: {0} = {1}",

-                dataItem.Topic,

-                totalNumPartitions);

-            int partitionId = this.GetPartitionId(dataItem.Key, totalNumPartitions);

-            Partition brokerIdPartition = brokerPartitions.ToList()[partitionId];

-            Broker brokerInfo = this.brokerPartitionInfo.GetBrokerInfo(brokerIdPartition.BrokerId);

-            if (this.config.IsZooKeeperEnabled)

-            {

-                Logger.DebugFormat(

-                    CultureInfo.CurrentCulture,

-                    "Sending message to broker {0}:{1} on partition {2}",

-                    brokerInfo.Host,

-                    brokerInfo.Port,

-                    brokerIdPartition.PartId);

-                return new Partition(brokerIdPartition.BrokerId, brokerIdPartition.PartId);

-            }

-

-            Logger.DebugFormat(

-                CultureInfo.CurrentCulture,

-                "Sending message to broker {0}:{1} on a randomly chosen partition",

-                brokerInfo.Host,

-                brokerInfo.Port);

-            return new Partition(brokerIdPartition.BrokerId, ProducerRequest.RandomPartition);

-        }

-

-        /// <summary>

-        /// Ensures that object was not disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}
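
A sketch showing which constructor overload wires pre-built components together without reflection; the config, partitioner and encoder are assumed to exist already and are passed in rather than implemented here.

using Kafka.Client.Cfg;
using Kafka.Client.Producers;
using Kafka.Client.Producers.Partitioning;
using Kafka.Client.Serialization;

public static class TypedProducerSketch
{
    // Assembles a producer from pre-instantiated dependencies instead of reflection.
    public static IProducer<string, string> Create(
        ProducerConfiguration config,
        IPartitioner<string> partitioner,
        IEncoder<string> encoder)
    {
        return new Producer<string, string>(config, partitioner, encoder);
    }
}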

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerData.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerData.cs
deleted file mode 100644
index 2c299c8..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerData.cs
+++ /dev/null
@@ -1,95 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers

-{

-    using System.Collections.Generic;

-

-    /// <summary>

-    /// Encapsulates the data to be sent on a topic

-    /// </summary>

-    /// <typeparam name="TKey">

-    /// Type of partitioning key

-    /// </typeparam>

-    /// <typeparam name="TData">

-    /// Type of data

-    /// </typeparam>

-    public class ProducerData<TKey, TData>

-    {

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ProducerData{TKey,TData}"/> class.

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="key">

-        /// The partitioning key.

-        /// </param>

-        /// <param name="data">

-        /// The list of data to send on the same topic.

-        /// </param>

-        public ProducerData(string topic, TKey key, IEnumerable<TData> data)

-            : this(topic, data)

-        {

-            this.Key = key;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ProducerData{TKey,TData}"/> class.

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="data">

-        /// The list of data to send on the same topic.

-        /// </param>

-        public ProducerData(string topic, IEnumerable<TData> data)

-        {

-            this.Topic = topic;

-            this.Data = data;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ProducerData{TKey,TData}"/> class.

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="data">

-        /// The data to send on the topic.

-        /// </param>

-        public ProducerData(string topic, TData data)

-            : this(topic, new[] { data })

-        {

-        }

-

-        /// <summary>

-        /// Gets topic.

-        /// </summary>

-        public string Topic { get; private set; }

-

-        /// <summary>

-        /// Gets the partitioning key.

-        /// </summary>

-        public TKey Key { get; private set; }

-

-        /// <summary>

-        /// Gets the data.

-        /// </summary>

-        public IEnumerable<TData> Data { get; private set; }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerPool.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerPool.cs
deleted file mode 100644
index 882e7b4..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerPool.cs
+++ /dev/null
@@ -1,206 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers

-{

-    using System;

-    using System.Collections.Generic;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Producers.Async;

-    using Kafka.Client.Producers.Sync;

-    using Kafka.Client.Serialization;

-    using Kafka.Client.Utils;

-

-    /// <summary>

-    /// The base for all classes that represents pool of producers used by high-level API

-    /// </summary>

-    /// <typeparam name="TData">The type of the data.</typeparam>

-    internal abstract class ProducerPool<TData> : IProducerPool<TData>

-        where TData : class 

-    {

-        protected bool Disposed { get; set; }

-

-        /// <summary>

-        /// Factory method used to instantiating either, 

-        /// synchronous or asynchronous, producer pool based on configuration.

-        /// </summary>

-        /// <param name="config">

-        /// The producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <returns>

-        /// Instantiated either, synchronous or asynchronous, producer pool

-        /// </returns>

-        public static ProducerPool<TData> CreatePool(ProducerConfiguration config, IEncoder<TData> serializer)

-        {

-            if (config.ProducerType == ProducerTypes.Async)

-            {

-                return AsyncProducerPool<TData>.CreateAsyncPool(config, serializer);

-            }

-

-            if (config.ProducerType == ProducerTypes.Sync)

-            {

-                return SyncProducerPool<TData>.CreateSyncPool(config, serializer);

-            }

-

-            throw new InvalidOperationException("Not supported producer type " + config.ProducerType);

-        }

-

-        /// <summary>

-        /// Factory method used to instantiating either, 

-        /// synchronous or asynchronous, producer pool based on configuration.

-        /// </summary>

-        /// <param name="config">

-        /// The producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <param name="cbkHandler">

-        /// The callback invoked after new broker is added.

-        /// </param>

-        /// <returns>

-        /// Instantiated either, synchronous or asynchronous, producer pool

-        /// </returns>

-        public static ProducerPool<TData> CreatePool(

-            ProducerConfiguration config,

-            IEncoder<TData> serializer,

-            ICallbackHandler cbkHandler)

-        {

-            if (config.ProducerType == ProducerTypes.Async)

-            {

-                return AsyncProducerPool<TData>.CreateAsyncPool(config, serializer, cbkHandler);

-            }

-

-            if (config.ProducerType == ProducerTypes.Sync)

-            {

-                return SyncProducerPool<TData>.CreateSyncPool(config, serializer, cbkHandler);

-            }

-

-            throw new InvalidOperationException("Not supported producer type " + config.ProducerType);

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ProducerPool&lt;TData&gt;"/> class.

-        /// </summary>

-        /// <param name="config">The config.</param>

-        /// <param name="serializer">The serializer.</param>

-        /// <remarks>

-        /// Should be used for testing purpose only

-        /// </remarks>

-        protected ProducerPool(

-            ProducerConfiguration config,

-            IEncoder<TData> serializer)

-        {

-            Guard.NotNull(config, "config");

-            Guard.NotNull(serializer, "serializer");

-

-            this.Config = config;

-            this.Serializer = serializer;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ProducerPool&lt;TData&gt;"/> class.

-        /// </summary>

-        /// <param name="config">

-        /// The config.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <param name="callbackHandler">

-        /// The callback invoked after new broker is added.

-        /// </param>

-        protected ProducerPool(

-            ProducerConfiguration config,

-            IEncoder<TData> serializer,

-            ICallbackHandler callbackHandler)

-        {

-            Guard.NotNull(config, "config");

-            Guard.NotNull(serializer, "serializer");

-

-            this.Config = config;

-            this.Serializer = serializer;

-            this.CallbackHandler = callbackHandler;

-        }

-

-        protected ProducerConfiguration Config { get; private set; }

-

-        protected IEncoder<TData> Serializer { get; private set; }

-

-        protected ICallbackHandler CallbackHandler { get; private set; }

-

-        /// <summary>

-        /// Add a new producer, either synchronous or asynchronous, to the pool

-        /// </summary>

-        /// <param name="broker">The broker informations.</param>

-        public abstract void AddProducer(Broker broker);

-

-        /// <summary>

-        /// Selects either a synchronous or an asynchronous producer, for

-        /// the specified broker id and calls the send API on the selected

-        /// producer to publish the data to the specified broker partition.

-        /// </summary>

-        /// <param name="poolData">The producer pool request object.</param>

-        /// <remarks>

-        /// Used for single-topic request

-        /// </remarks>

-        public void Send(ProducerPoolData<TData> poolData)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(poolData, "poolData");

-

-            this.Send(new[] { poolData });

-        }

-

-        /// <summary>

-        /// Selects either a synchronous or an asynchronous producer, for

-        /// the specified broker id and calls the send API on the selected

-        /// producer to publish the data to the specified broker partition.

-        /// </summary>

-        /// <param name="poolData">The producer pool request object.</param>

-        /// <remarks>

-        /// Used for multi-topic request

-        /// </remarks>

-        public abstract void Send(IEnumerable<ProducerPoolData<TData>> poolData);

-

-        /// <summary>

-        /// Releases all unmanaged and managed resources

-        /// </summary>

-        public void Dispose()

-        {

-            this.Dispose(true);

-            GC.SuppressFinalize(this);

-        }

-

-        protected abstract void Dispose(bool disposing);

-

-        /// <summary>

-        /// Ensures that object was not disposed

-        /// </summary>

-        protected void EnsuresNotDisposed()

-        {

-            if (this.Disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerPoolData.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerPoolData.cs
deleted file mode 100644
index fab9559..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerPoolData.cs
+++ /dev/null
@@ -1,65 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers

-{

-    using System.Collections.Generic;

-    using Kafka.Client.Cluster;

-

-    /// <summary>

-    /// Encapsulates data to be send on chosen partition

-    /// </summary>

-    /// <typeparam name="TData">

-    /// Type of data

-    /// </typeparam>

-    internal class ProducerPoolData<TData>

-    {

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ProducerPoolData{TData}"/> class.

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="bidPid">

-        /// The chosen partition.

-        /// </param>

-        /// <param name="data">

-        /// The data.

-        /// </param>

-        public ProducerPoolData(string topic, Partition bidPid, IEnumerable<TData> data)

-        {

-            this.Topic = topic;

-            this.BidPid = bidPid;

-            this.Data = data;

-        }

-

-        /// <summary>

-        /// Gets the topic.

-        /// </summary>

-        public string Topic { get; private set; }

-

-        /// <summary>

-        /// Gets the chosen partition.

-        /// </summary>

-        public Partition BidPid { get; private set; }

-

-        /// <summary>

-        /// Gets the data.

-        /// </summary>

-        public IEnumerable<TData> Data { get; private set; }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerTypes.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerTypes.cs
deleted file mode 100644
index 428c606..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/ProducerTypes.cs
+++ /dev/null
@@ -1,29 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers

-{

-    /// <summary>

-    /// Type of producer

-    /// </summary>

-    public enum ProducerTypes

-    {

-        Unknow = 0,

-        Sync = 1,

-        Async = 2

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/ISyncProducer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/ISyncProducer.cs
deleted file mode 100644
index 71d3dbc..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/ISyncProducer.cs
+++ /dev/null
@@ -1,60 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Sync

-{

-    using System;

-    using System.Collections.Generic;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Requests;

-

-    /// <summary>

-    /// Sends messages encapsulated in request to Kafka server synchronously

-    /// </summary>

-    public interface ISyncProducer : IDisposable

-    {

-        /// <summary>

-        /// Constructs producer request and sends it to given broker partition synchronously

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="partition">

-        /// The partition.

-        /// </param>

-        /// <param name="messages">

-        /// The list of messages messages.

-        /// </param>

-        void Send(string topic, int partition, IEnumerable<Message> messages);

-

-        /// <summary>

-        /// Sends request to Kafka server synchronously

-        /// </summary>

-        /// <param name="request">

-        /// The request.

-        /// </param>

-        void Send(ProducerRequest request);

-

-        /// <summary>

-        /// Sends the data to a multiple topics on Kafka server synchronously

-        /// </summary>

-        /// <param name="requests">

-        /// The requests.

-        /// </param>

-        void MultiSend(IEnumerable<ProducerRequest> requests);

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/SyncProducer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/SyncProducer.cs
deleted file mode 100644
index 2880da6..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/SyncProducer.cs
+++ /dev/null
@@ -1,155 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Sync

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Linq;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Utils;

-

-    /// <summary>

-    /// Sends messages encapsulated in request to Kafka server synchronously

-    /// </summary>

-    public class SyncProducer : ISyncProducer

-    {

-        private readonly KafkaConnection connection;

-

-        private volatile bool disposed;

-

-        /// <summary>

-        /// Gets producer config

-        /// </summary>

-        public SyncProducerConfiguration Config { get; private set; }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="SyncProducer"/> class.

-        /// </summary>

-        /// <param name="config">

-        /// The producer config.

-        /// </param>

-        public SyncProducer(SyncProducerConfiguration config)

-        {

-            Guard.NotNull(config, "config");

-            this.Config = config;

-            this.connection = new KafkaConnection(

-                this.Config.Host, 

-                this.Config.Port,

-                config.BufferSize,

-                config.SocketTimeout);

-        }

-

-        /// <summary>

-        /// Constructs producer request and sends it to given broker partition synchronously

-        /// </summary>

-        /// <param name="topic">

-        /// The topic.

-        /// </param>

-        /// <param name="partition">

-        /// The partition.

-        /// </param>

-        /// <param name="messages">

-        /// The list of messages messages.

-        /// </param>

-        public void Send(string topic, int partition, IEnumerable<Message> messages)

-        {

-            Guard.NotNullNorEmpty(topic, "topic");

-            Guard.NotNull(messages, "messages");

-            Guard.AllNotNull(messages, "messages.items");

-            Guard.Assert<ArgumentOutOfRangeException>(

-                () => messages.All(

-                    x => x.PayloadSize <= this.Config.MaxMessageSize));

-            this.EnsuresNotDisposed();

-            this.Send(new ProducerRequest(topic, partition, messages));

-        }

-

-        /// <summary>

-        /// Sends request to Kafka server synchronously

-        /// </summary>

-        /// <param name="request">

-        /// The request.

-        /// </param>

-        public void Send(ProducerRequest request)

-        {

-            this.EnsuresNotDisposed();

-            this.connection.Write(request);

-        }

-

-        /// <summary>

-        /// Sends the data to a multiple topics on Kafka server synchronously

-        /// </summary>

-        /// <param name="requests">

-        /// The requests.

-        /// </param>

-        public void MultiSend(IEnumerable<ProducerRequest> requests)

-        {

-            Guard.NotNull(requests, "requests");

-            Guard.Assert<ArgumentNullException>(

-                () => requests.All(

-                    x => x != null && x.MessageSet != null && x.MessageSet.Messages != null));

-            Guard.Assert<ArgumentNullException>(

-                () => requests.All(

-                    x => x.MessageSet.Messages.All(

-                        y => y != null && y.PayloadSize <= this.Config.MaxMessageSize)));

-            this.EnsuresNotDisposed();

-            var multiRequest = new MultiProducerRequest(requests);

-            this.connection.Write(multiRequest);

-        }

-

-        /// <summary>

-        /// Releases all unmanaged and managed resources

-        /// </summary>

-        public void Dispose()

-        {

-            this.Dispose(true);

-            GC.SuppressFinalize(this);

-        }

-

-        protected virtual void Dispose(bool disposing)

-        {

-            if (!disposing)

-            {

-                return;

-            }

-

-            if (this.disposed)

-            {

-                return;

-            }

-

-            this.disposed = true;

-            if (this.connection != null)

-            {

-                this.connection.Dispose();

-            }

-        }

-

-        /// <summary>

-        /// Ensures that object was not disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/SyncProducerPool.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/SyncProducerPool.cs
deleted file mode 100644
index dad0f7a..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Producers/Sync/SyncProducerPool.cs
+++ /dev/null
@@ -1,226 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Producers.Sync

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Linq;

-    using System.Reflection;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Producers.Async;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Serialization;

-    using Kafka.Client.Utils;

-    using log4net;

-

-    /// <summary>

-    /// Pool of synchronous producers used by high-level API

-    /// </summary>

-    /// <typeparam name="TData">The type of the data.</typeparam>

-    internal class SyncProducerPool<TData> : ProducerPool<TData>

-        where TData : class 

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-        private readonly IDictionary<int, ISyncProducer> syncProducers;

-        private volatile bool disposed;

-

-        /// <summary>

-        /// Factory method used to instantiating synchronous producer pool

-        /// </summary>

-        /// <param name="config">

-        /// The synchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <returns>

-        /// Instantiated synchronous producer pool

-        /// </returns>

-        public static SyncProducerPool<TData> CreateSyncPool(ProducerConfiguration config, IEncoder<TData> serializer)

-        {

-            return new SyncProducerPool<TData>(config, serializer);

-        }

-

-        /// <summary>

-        /// Factory method used to instantiating synchronous producer pool

-        /// </summary>

-        /// <param name="config">

-        /// The synchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <param name="callbackHandler">

-        /// The callback invoked after new broker is added.

-        /// </param>

-        /// <returns>

-        /// Instantiated synchronous producer pool

-        /// </returns>

-        public static SyncProducerPool<TData> CreateSyncPool(ProducerConfiguration config, IEncoder<TData> serializer, ICallbackHandler callbackHandler)

-        {

-            return new SyncProducerPool<TData>(config, serializer, callbackHandler);

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="SyncProducerPool{TData}"/> class. 

-        /// </summary>

-        /// <param name="config">

-        /// The synchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <param name="syncProducers">

-        /// The list of synchronous producers.

-        /// </param>

-        /// <param name="cbkHandler">

-        /// The callback invoked after new broker is added.

-        /// </param>

-        /// <remarks>

-        /// Should be used for testing purpose only

-        /// </remarks>

-        private SyncProducerPool(

-            ProducerConfiguration config, 

-            IEncoder<TData> serializer,

-            IDictionary<int, ISyncProducer> syncProducers,

-            ICallbackHandler cbkHandler)

-            : base(config, serializer, cbkHandler)

-        {

-            this.syncProducers = syncProducers;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="SyncProducerPool{TData}"/> class. 

-        /// </summary>

-        /// <param name="config">

-        /// The synchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        /// <param name="cbkHandler">

-        /// The callback invoked after new broker is added.

-        /// </param>

-        /// <remarks>

-        /// Should be used for testing purpose only

-        /// </remarks>

-        private SyncProducerPool(

-            ProducerConfiguration config,

-            IEncoder<TData> serializer,

-            ICallbackHandler cbkHandler)

-            : this(config, serializer, new Dictionary<int, ISyncProducer>(), cbkHandler)

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="SyncProducerPool{TData}"/> class. 

-        /// </summary>

-        /// <param name="config">

-        /// The synchronous producer pool configuration.

-        /// </param>

-        /// <param name="serializer">

-        /// The serializer.

-        /// </param>

-        private SyncProducerPool(ProducerConfiguration config, IEncoder<TData> serializer)

-            : this(

-                config,

-                serializer,

-                new Dictionary<int, ISyncProducer>(),

-                ReflectionHelper.Instantiate<ICallbackHandler>(config.CallbackHandlerClass))

-        {

-        }

-

-        /// <summary>

-        /// Selects a synchronous producer, for

-        /// the specified broker id and calls the send API on the selected

-        /// producer to publish the data to the specified broker partition.

-        /// </summary>

-        /// <param name="poolData">The producer pool request object.</param>

-        /// <remarks>

-        /// Used for multi-topic request

-        /// </remarks>

-        public override void Send(IEnumerable<ProducerPoolData<TData>> poolData)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(poolData, "poolData");

-            Dictionary<int, List<ProducerPoolData<TData>>> distinctBrokers = poolData.GroupBy(

-                x => x.BidPid.BrokerId, x => x)

-                .ToDictionary(x => x.Key, x => x.ToList());

-            foreach (var broker in distinctBrokers)

-            {

-                Logger.DebugFormat(CultureInfo.CurrentCulture, "Fetching sync producer for broker id: {0}", broker.Key);

-                ISyncProducer producer = this.syncProducers[broker.Key];

-                IEnumerable<ProducerRequest> requests = broker.Value.Select(x => new ProducerRequest(

-                    x.Topic,

-                    x.BidPid.PartId,

-                    new BufferedMessageSet(x.Data.Select(y => this.Serializer.ToMessage(y)))));

-                Logger.DebugFormat(CultureInfo.CurrentCulture, "Sending message to broker {0}", broker.Key);

-                if (requests.Count() > 1)

-                {

-                    producer.MultiSend(requests);

-                }

-                else

-                {

-                    producer.Send(requests.First());

-                }

-            }

-        }

-

-        /// <summary>

-        /// Add a new synchronous producer to the pool

-        /// </summary>

-        /// <param name="broker">The broker informations.</param>

-        public override void AddProducer(Broker broker)

-        {

-            this.EnsuresNotDisposed();

-            Guard.NotNull(broker, "broker");

-

-            var syncConfig = new SyncProducerConfiguration(this.Config, broker.Id, broker.Host, broker.Port);

-            var syncProducer = new SyncProducer(syncConfig);

-            Logger.InfoFormat(

-                CultureInfo.CurrentCulture,

-                "Creating sync producer for broker id = {0} at {1}:{2}",

-                broker.Id,

-                broker.Host,

-                broker.Port);

-            this.syncProducers.Add(broker.Id, syncProducer);

-        }

-

-        protected override void Dispose(bool disposing)

-        {

-            if (!disposing)

-            {

-                return;

-            }

-

-            if (this.disposed)

-            {

-                return;

-            }

-

-            this.disposed = true;

-            foreach (var syncProducer in this.syncProducers.Values)

-            {

-                syncProducer.Dispose();

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Properties/AssemblyInfo.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Properties/AssemblyInfo.cs
deleted file mode 100644
index 044e6ca..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Properties/AssemblyInfo.cs
+++ /dev/null
@@ -1,34 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

-*/

-using System;

-using System.Reflection;

-using System.Runtime.CompilerServices;

-using System.Runtime.InteropServices;

-

-[assembly: AssemblyTitle("Kafka.Client")]

-[assembly: AssemblyDescription(".NET Client for Kafka")]

-[assembly: AssemblyCompany("ExactTarget")]

-[assembly: AssemblyProduct("Kafka.Client")]

-[assembly: AssemblyCopyright("Copyright © ExactTarget 2011")]

-

-[assembly: ComVisible(false)]

-[assembly: AssemblyVersion("1.0.0.0")]

-[assembly: AssemblyFileVersion("1.0.0.0")]

-[assembly: InternalsVisibleTo("Kafka.Client.Tests")]

-[assembly: InternalsVisibleTo("Kafka.Client.IntegrationTests")]

-[assembly: CLSCompliant(true)]

-

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/FetchRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/FetchRequest.cs
deleted file mode 100644
index 30bd05a..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/FetchRequest.cs
+++ /dev/null
@@ -1,129 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-using Kafka.Client.Util;
-
-namespace Kafka.Client.Request
-{
-    /// <summary>
-    /// Constructs a request to send to Kafka.
-    /// </summary>
-    public class FetchRequest : AbstractRequest
-    {
-        /// <summary>
-        /// Maximum size.
-        /// </summary>
-        private static readonly int DefaultMaxSize = 1048576;
-
-        /// <summary>
-        /// Initializes a new instance of the FetchRequest class.
-        /// </summary>
-        public FetchRequest()
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the FetchRequest class.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="offset">The offset in the topic/partition to retrieve from.</param>
-        public FetchRequest(string topic, int partition, long offset)
-            : this(topic, partition, offset, DefaultMaxSize)
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the FetchRequest class.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="offset">The offset in the topic/partition to retrieve from.</param>
-        /// <param name="maxSize">The maximum size.</param>
-        public FetchRequest(string topic, int partition, long offset, int maxSize)
-        {
-            Topic = topic;
-            Partition = partition;
-            Offset = offset;
-            MaxSize = maxSize;
-        }
-
-        /// <summary>
-        /// Gets or sets the offset to request.
-        /// </summary>
-        public long Offset { get; set; }
-
-        /// <summary>
-        /// Gets or sets the maximum size to pass in the request.
-        /// </summary>
-        public int MaxSize { get; set; }
-
-        /// <summary>
-        /// Determines if the request has valid settings.
-        /// </summary>
-        /// <returns>True if valid and false otherwise.</returns>
-        public override bool IsValid()
-        {
-            return !string.IsNullOrWhiteSpace(Topic);
-        }
-
-        /// <summary>
-        /// Gets the bytes matching the expected Kafka structure. 
-        /// </summary>
-        /// <returns>The byte array of the request.</returns>
-        public override byte[] GetBytes()
-        {
-            byte[] internalBytes = GetInternalBytes();
-
-            List<byte> request = new List<byte>();
-
-            // add the 2 for the RequestType.Fetch
-            request.AddRange(BitWorks.GetBytesReversed(internalBytes.Length + 2));
-            request.AddRange(BitWorks.GetBytesReversed((short)RequestType.Fetch));
-            request.AddRange(internalBytes);
-
-            return request.ToArray<byte>();
-        }
-
-        /// <summary>
-        /// Gets the bytes representing the request which is used when generating a multi-request.
-        /// </summary>
-        /// <remarks>
-        /// The <see cref="GetBytes"/> method is used for sending a single <see cref="RequestType.Fetch"/>.
-        /// It prefixes this byte array with the request type and the number of messages. This method
-        /// is used to supply the <see cref="MultiFetchRequest"/> with the contents for its message.
-        /// </remarks>
-        /// <returns>The bytes that represent this <see cref="FetchRequest"/>.</returns>
-        internal byte[] GetInternalBytes()
-        {
-            // TOPIC LENGTH + TOPIC + PARTITION + OFFSET + MAX SIZE
-            int requestSize = 2 + Topic.Length + 4 + 8 + 4;
-
-            List<byte> request = new List<byte>();
-            request.AddRange(BitWorks.GetBytesReversed((short)Topic.Length));
-            request.AddRange(Encoding.ASCII.GetBytes(Topic));
-            request.AddRange(BitWorks.GetBytesReversed(Partition));
-            request.AddRange(BitWorks.GetBytesReversed(Offset));
-            request.AddRange(BitWorks.GetBytesReversed(MaxSize));
-
-            return request.ToArray<byte>();
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/MultiFetchRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/MultiFetchRequest.cs
deleted file mode 100644
index efa6cf5..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/MultiFetchRequest.cs
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-using Kafka.Client.Util;
-
-namespace Kafka.Client.Request
-{
-    /// <summary>
-    /// Constructs a multi-consumer request to send to Kafka.
-    /// </summary>
-    public class MultiFetchRequest : AbstractRequest
-    {
-        /// <summary>
-        /// Initializes a new instance of the MultiFetchRequest class.
-        /// </summary>
-        /// <param name="requests">Requests to package up and batch.</param>
-        public MultiFetchRequest(IList<FetchRequest> requests)
-        {
-            ConsumerRequests = requests;
-        }
-
-        /// <summary>
-        /// Gets or sets the consumer requests to be batched into this multi-request.
-        /// </summary>
-        public IList<FetchRequest> ConsumerRequests { get; set; }
-
-        /// <summary>
-        /// Determines if the request has valid settings.
-        /// </summary>
-        /// <returns>True if valid and false otherwise.</returns>
-        public override bool IsValid()
-        {
-            return ConsumerRequests != null && ConsumerRequests.Count > 0
-                && ConsumerRequests.Select(itm => !itm.IsValid()).Count() > 0;
-        }
-
-        /// <summary>
-        /// Gets the bytes matching the expected Kafka structure. 
-        /// </summary>
-        /// <returns>The byte array of the request.</returns>
-        public override byte[] GetBytes()
-        {
-            List<byte> messagePack = new List<byte>();
-            byte[] requestBytes = BitWorks.GetBytesReversed(Convert.ToInt16((int)RequestType.MultiFetch));
-            byte[] consumerRequestCountBytes = BitWorks.GetBytesReversed(Convert.ToInt16(ConsumerRequests.Count));
-
-            List<byte> encodedMessageSet = new List<byte>();
-            encodedMessageSet.AddRange(requestBytes);
-            encodedMessageSet.AddRange(consumerRequestCountBytes);
-
-            foreach (FetchRequest consumerRequest in ConsumerRequests)
-            {
-                encodedMessageSet.AddRange(consumerRequest.GetInternalBytes());
-            }
-
-            encodedMessageSet.InsertRange(0, BitWorks.GetBytesReversed(encodedMessageSet.Count));
-
-            return encodedMessageSet.ToArray();
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/MultiProducerRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/MultiProducerRequest.cs
deleted file mode 100644
index d086b41..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/MultiProducerRequest.cs
+++ /dev/null
@@ -1,87 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-using Kafka.Client.Util;
-
-namespace Kafka.Client.Request
-{
-    /// <summary>
-    /// Constructs a request containing multiple producer requests to send to Kafka.
-    /// </summary>
-    public class MultiProducerRequest : AbstractRequest
-    {
-        /// <summary>
-        /// Initializes a new instance of the MultiProducerRequest class.
-        /// </summary>
-        public MultiProducerRequest()
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the MultiProducerRequest class.
-        /// </summary>
-        /// <param name="producerRequests">
-        /// The list of individual producer requests to send in this request.
-        /// </param>
-        public MultiProducerRequest(IList<ProducerRequest> producerRequests)
-        {
-            ProducerRequests = producerRequests;
-        }
-
-        /// <summary>
-        /// Gets or sets the list of producer requests to be sent in batch.
-        /// </summary>
-        public IList<ProducerRequest> ProducerRequests { get; set; }
-
-        /// <summary>
-        /// Determines if the request has valid settings.
-        /// </summary>
-        /// <returns>True if valid and false otherwise.</returns>
-        public override bool IsValid()
-        {
-            return ProducerRequests != null && ProducerRequests.Count > 0
-                && ProducerRequests.Select(itm => !itm.IsValid()).Count() > 0;
-        }
-
-        /// <summary>
-        /// Gets the bytes matching the expected Kafka structure. 
-        /// </summary>
-        /// <returns>The byte array of the request.</returns>
-        public override byte[] GetBytes()
-        {
-            List<byte> messagePack = new List<byte>();
-            byte[] requestBytes = BitWorks.GetBytesReversed(Convert.ToInt16((int)RequestType.MultiProduce));
-            byte[] producerRequestCountBytes = BitWorks.GetBytesReversed(Convert.ToInt16(ProducerRequests.Count));
-
-            List<byte> encodedMessageSet = new List<byte>();
-            encodedMessageSet.AddRange(requestBytes);
-            encodedMessageSet.AddRange(producerRequestCountBytes);
-
-            foreach (ProducerRequest producerRequest in ProducerRequests)
-            {
-                encodedMessageSet.AddRange(producerRequest.GetInternalBytes());
-            }
-
-            encodedMessageSet.InsertRange(0, BitWorks.GetBytesReversed(encodedMessageSet.Count));
-
-            return encodedMessageSet.ToArray();
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/OffsetRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/OffsetRequest.cs
deleted file mode 100644
index ba733b9..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/OffsetRequest.cs
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-using Kafka.Client.Util;
-
-namespace Kafka.Client.Request
-{
-    /// <summary>
-    /// Constructs a request to send to Kafka.
-    /// </summary>
-    public class OffsetRequest : AbstractRequest
-    {
-        /// <summary>
-        /// The latest time constant.
-        /// </summary>
-        public static readonly long LatestTime = -1L;
-
-        /// <summary>
-        /// The earliest time constant.
-        /// </summary>
-        public static readonly long EarliestTime = -2L;
-
-        /// <summary>
-        /// Initializes a new instance of the OffsetRequest class.
-        /// </summary>
-        public OffsetRequest()
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the OffsetRequest class.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="time">The time from which to request offsets.</param>
-        /// <param name="maxOffsets">The maximum amount of offsets to return.</param>
-        public OffsetRequest(string topic, int partition, long time, int maxOffsets)
-        {
-            Topic = topic;
-            Partition = partition;
-            Time = time;
-            MaxOffsets = maxOffsets;
-        }
-
-        /// <summary>
-        /// Gets the time.
-        /// </summary>
-        public long Time { get; private set; }
-
-        /// <summary>
-        /// Gets the maximum number of offsets to return.
-        /// </summary>
-        public int MaxOffsets { get; private set; }
-
-        /// <summary>
-        /// Determines if the request has valid settings.
-        /// </summary>
-        /// <returns>True if valid and false otherwise.</returns>
-        public override bool IsValid()
-        {
-            return !string.IsNullOrWhiteSpace(Topic);
-        }
-
-        /// <summary>
-        /// Converts the request to an array of bytes that is expected by Kafka.
-        /// </summary>
-        /// <returns>An array of bytes that represents the request.</returns>
-        public override byte[] GetBytes()
-        {
-            byte[] requestBytes = BitWorks.GetBytesReversed(Convert.ToInt16((int)RequestType.Offsets));
-            byte[] topicLengthBytes = BitWorks.GetBytesReversed(Convert.ToInt16(Topic.Length));
-            byte[] topicBytes = Encoding.UTF8.GetBytes(Topic);
-            byte[] partitionBytes = BitWorks.GetBytesReversed(Partition);
-            byte[] timeBytes = BitWorks.GetBytesReversed(Time);
-            byte[] maxOffsetsBytes = BitWorks.GetBytesReversed(MaxOffsets);
-
-            List<byte> encodedMessageSet = new List<byte>();
-            encodedMessageSet.AddRange(requestBytes);
-            encodedMessageSet.AddRange(topicLengthBytes);
-            encodedMessageSet.AddRange(topicBytes);
-            encodedMessageSet.AddRange(partitionBytes);
-            encodedMessageSet.AddRange(timeBytes);
-            encodedMessageSet.AddRange(maxOffsetsBytes);
-            encodedMessageSet.InsertRange(0, BitWorks.GetBytesReversed(encodedMessageSet.Count));
-
-            return encodedMessageSet.ToArray();
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/ProducerRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/ProducerRequest.cs
deleted file mode 100644
index 42ef597..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Request/ProducerRequest.cs
+++ /dev/null
@@ -1,115 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-using System;
-using System.Collections.Generic;
-using System.Text;
-using Kafka.Client.Util;
-
-namespace Kafka.Client.Request
-{
-    /// <summary>
-    /// Constructs a request to send to Kafka.
-    /// </summary>
-    public class ProducerRequest : AbstractRequest
-    {
-        /// <summary>
-        /// Initializes a new instance of the ProducerRequest class.
-        /// </summary>
-        public ProducerRequest()
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the ProducerRequest class.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="messages">The list of messages to send.</param>
-        public ProducerRequest(string topic, int partition, IList<Message> messages)
-        {
-            Topic = topic;
-            Partition = partition;
-            Messages = messages;
-        }
-
-        /// <summary>
-        /// Gets or sets the messages to publish.
-        /// </summary>
-        public IList<Message> Messages { get; set; }
-
-        /// <summary>
-        /// Determines if the request has valid settings.
-        /// </summary>
-        /// <returns>True if valid and false otherwise.</returns>
-        public override bool IsValid()
-        {
-            return !string.IsNullOrWhiteSpace(Topic) && Messages != null && Messages.Count > 0;
-        }
-
-        /// <summary>
-        /// Gets the bytes matching the expected Kafka structure. 
-        /// </summary>
-        /// <returns>The byte array of the request.</returns>
-        public override byte[] GetBytes()
-        {
-            List<byte> encodedMessageSet = new List<byte>();
-            encodedMessageSet.AddRange(GetInternalBytes());
-
-            byte[] requestBytes = BitWorks.GetBytesReversed(Convert.ToInt16((int)RequestType.Produce));
-            encodedMessageSet.InsertRange(0, requestBytes);
-            encodedMessageSet.InsertRange(0, BitWorks.GetBytesReversed(encodedMessageSet.Count));
-
-            return encodedMessageSet.ToArray();
-        }
-
-        /// <summary>
-        /// Gets the bytes representing the request which is used when generating a multi-request.
-        /// </summary>
-        /// <remarks>
-        /// The <see cref="GetBytes"/> method is used for sending a single <see cref="RequestType.Produce"/>.
-        /// It prefixes this byte array with the request type and the number of messages. This method
-        /// is used to supply the <see cref="MultiProducerRequest"/> with the contents for its message.
-        /// </remarks>
-        /// <returns>The bytes that represent this <see cref="ProducerRequest"/>.</returns>
-        internal byte[] GetInternalBytes()
-        {
-            List<byte> messagePack = new List<byte>();
-            foreach (Message message in Messages)
-            {
-                byte[] messageBytes = message.GetBytes();
-                messagePack.AddRange(BitWorks.GetBytesReversed(messageBytes.Length));
-                messagePack.AddRange(messageBytes);
-            }
-
-            byte[] topicLengthBytes = BitWorks.GetBytesReversed(Convert.ToInt16(Topic.Length));
-            byte[] topicBytes = Encoding.UTF8.GetBytes(Topic);
-            byte[] partitionBytes = BitWorks.GetBytesReversed(Partition);
-            byte[] messagePackLengthBytes = BitWorks.GetBytesReversed(messagePack.Count);
-            byte[] messagePackBytes = messagePack.ToArray();
-
-            List<byte> encodedMessageSet = new List<byte>();
-            encodedMessageSet.AddRange(topicLengthBytes);
-            encodedMessageSet.AddRange(topicBytes);
-            encodedMessageSet.AddRange(partitionBytes);
-            encodedMessageSet.AddRange(messagePackLengthBytes);
-            encodedMessageSet.AddRange(messagePackBytes);
-
-            return encodedMessageSet.ToArray();
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/RequestContext.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/RequestContext.cs
deleted file mode 100644
index 80fc66b..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/RequestContext.cs
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client
-{
-    using System.Net.Sockets;
-    using Kafka.Client.Requests;
-
-    /// <summary>
-    /// The context of a request made to Kafka.
-    /// </summary>
-    /// <typeparam name="T">
-    /// Must be of type <see cref="AbstractRequest"/> and represents the type of request
-    /// sent to Kafka.
-    /// </typeparam>
-    public class RequestContext<T> where T : AbstractRequest
-    {
-        /// <summary>
-        /// Initializes a new instance of the RequestContext class.
-        /// </summary>
-        /// <param name="networkStream">The network stream that sent the message.</param>
-        /// <param name="request">The request sent over the stream.</param>
-        public RequestContext(NetworkStream networkStream, T request)
-        {
-            NetworkStream = networkStream;
-            Request = request;
-        }
-
-        /// <summary>
-        /// Gets the <see cref="NetworkStream"/> instance of the request.
-        /// </summary>
-        public NetworkStream NetworkStream { get; private set; }
-
-        /// <summary>
-        /// Gets the <see cref="FetchRequest"/> or <see cref="ProducerRequest"/> object
-        /// associated with the <see cref="RequestContext"/>.
-        /// </summary>
-        public T Request { get; private set; }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/RequestType.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/RequestType.cs
deleted file mode 100644
index dd38f90..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/RequestType.cs
+++ /dev/null
@@ -1,53 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-namespace Kafka.Client
-{
-    /// <summary>
-    /// Requests types for Kafka
-    /// </summary>
-    /// <remarks>
-    /// Many of these are not in play yet.
-    /// </remarks>
-    public enum RequestType
-    {
-        /// <summary>
-        /// Produce a message.
-        /// </summary>
-        Produce = 0,
-
-        /// <summary>
-        /// Fetch a message.
-        /// </summary>
-        Fetch = 1,
-
-        /// <summary>
-        /// Multi-fetch messages.
-        /// </summary>
-        MultiFetch = 2,
-        
-        /// <summary>
-        /// Multi-produce messages.
-        /// </summary>
-        MultiProduce = 3,
-
-        /// <summary>
-        /// Gets offsets.
-        /// </summary>
-        Offsets = 4
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/AbstractRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/AbstractRequest.cs
deleted file mode 100644
index eacfe10..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/AbstractRequest.cs
+++ /dev/null
@@ -1,61 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Requests
-{
-    using System.IO;
-    using System.Text;
-
-    /// <summary>
-    /// Base request to make to Kafka.
-    /// </summary>
-    public abstract class AbstractRequest
-    {
-        public const string DefaultEncoding = "UTF-8";
-        public const byte DefaultRequestSizeSize = 4;
-        public const byte DefaultRequestIdSize = 2;
-        public const short DefaultTopicLengthIfNonePresent = 2;
-
-        /// <summary>
-        /// Gets or sets the topic to publish to.
-        /// </summary>
-        public string Topic { get; set; }
-
-        /// <summary>
-        /// Gets or sets the partition to publish to.
-        /// </summary>
-        public int Partition { get; set; }
-
-        public MemoryStream RequestBuffer { get; protected set; }
-
-        public abstract RequestTypes RequestType { get; }
-
-        protected short RequestTypeId
-        {
-            get
-            {
-                return (short)this.RequestType;
-            }
-        }
-
-        protected static short GetTopicLength(string topic, string encoding = DefaultEncoding)
-        {
-            Encoding encoder = Encoding.GetEncoding(encoding);
-            return string.IsNullOrEmpty(topic) ? DefaultTopicLengthIfNonePresent : (short)encoder.GetByteCount(topic);
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/FetchRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/FetchRequest.cs
deleted file mode 100644
index d401526..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/FetchRequest.cs
+++ /dev/null
@@ -1,156 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Requests
-{
-    using System;
-    using System.Globalization;
-    using System.IO;
-    using System.Text;
-    using Kafka.Client.Messages;
-    using Kafka.Client.Serialization;
-    using Kafka.Client.Utils;
-
-    /// <summary>
-    /// Constructs a request to send to Kafka.
-    /// </summary>
-    public class FetchRequest : AbstractRequest, IWritable
-    {
-        /// <summary>
-        /// Maximum size.
-        /// </summary>
-        private static readonly int DefaultMaxSize = 1048576;
-        public const byte DefaultTopicSizeSize = 2;
-        public const byte DefaultPartitionSize = 4;
-        public const byte DefaultOffsetSize = 8;
-        public const byte DefaultMaxSizeSize = 4;
-        public const byte DefaultHeaderSize = DefaultRequestSizeSize + DefaultTopicSizeSize + DefaultPartitionSize + DefaultRequestIdSize + DefaultOffsetSize + DefaultMaxSizeSize;
-        public const byte DefaultHeaderAsPartOfMultirequestSize = DefaultTopicSizeSize + DefaultPartitionSize + DefaultOffsetSize + DefaultMaxSizeSize;
-
-        public static int GetRequestLength(string topic, string encoding = DefaultEncoding)
-        {
-            short topicLength = GetTopicLength(topic, encoding);
-            return topicLength + DefaultHeaderSize;
-        }
-
-        public static int GetRequestAsPartOfMultirequestLength(string topic, string encoding = DefaultEncoding)
-        {
-            short topicLength = GetTopicLength(topic, encoding);
-            return topicLength + DefaultHeaderAsPartOfMultirequestSize;
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the FetchRequest class.
-        /// </summary>
-        public FetchRequest()
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the FetchRequest class.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="offset">The offset in the topic/partition to retrieve from.</param>
-        public FetchRequest(string topic, int partition, long offset)
-            : this(topic, partition, offset, DefaultMaxSize)
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the FetchRequest class.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="offset">The offset in the topic/partition to retrieve from.</param>
-        /// <param name="maxSize">The maximum size.</param>
-        public FetchRequest(string topic, int partition, long offset, int maxSize)
-        {
-            Topic = topic;
-            Partition = partition;
-            Offset = offset;
-            MaxSize = maxSize;
-
-            int length = GetRequestLength(topic, DefaultEncoding);
-            this.RequestBuffer = new BoundedBuffer(length);
-            this.WriteTo(this.RequestBuffer);
-        }
-
-        /// <summary>
-        /// Gets or sets the offset to request.
-        /// </summary>
-        public long Offset { get; set; }
-
-        /// <summary>
-        /// Gets or sets the maximum size to pass in the request.
-        /// </summary>
-        public int MaxSize { get; set; }
-
-        public override RequestTypes RequestType
-        {
-            get
-            {
-                return RequestTypes.Fetch;
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given stream
-        /// </summary>
-        /// <param name="output">
-        /// The output stream.
-        /// </param>
-        public void WriteTo(MemoryStream output)
-        {
-            Guard.NotNull(output, "output");
-
-            using (var writer = new KafkaBinaryWriter(output))
-            {
-                writer.Write(this.RequestBuffer.Capacity - DefaultRequestSizeSize);
-                writer.Write(this.RequestTypeId);
-                this.WriteTo(writer);
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given writer
-        /// </summary>
-        /// <param name="writer">
-        /// The writer.
-        /// </param>
-        public void WriteTo(KafkaBinaryWriter writer)
-        {
-            Guard.NotNull(writer, "writer");
-
-            writer.WriteTopic(this.Topic, DefaultEncoding);
-            writer.Write(this.Partition);
-            writer.Write(this.Offset);
-            writer.Write(this.MaxSize);
-        }
-
-        public override string ToString()
-        {
-            return String.Format(
-                CultureInfo.CurrentCulture,
-                "topic: {0}, part: {1}, offset: {2}, maxSize: {3}",
-                this.Topic,
-                this.Partition,
-                this.Offset,
-                this.MaxSize);
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/MultiFetchRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/MultiFetchRequest.cs
deleted file mode 100644
index ea71127..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/MultiFetchRequest.cs
+++ /dev/null
@@ -1,109 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Requests
-{
-    using System;
-    using System.Collections.Generic;
-    using System.IO;
-    using Kafka.Client.Messages;
-    using Kafka.Client.Serialization;
-    using Kafka.Client.Utils;
-
-    /// <summary>
-    /// Constructs a multi-consumer request to send to Kafka.
-    /// </summary>
-    public class MultiFetchRequest : AbstractRequest, IWritable
-    {
-        public const byte DefaultNumberOfRequestsSize = 2;
-
-        public const byte DefaultHeaderSize =
-            DefaultRequestSizeSize + DefaultRequestIdSize + DefaultNumberOfRequestsSize;
-
-        public static int GetRequestLength(IList<FetchRequest> requests, string encoding = DefaultEncoding)
-        {
-            int requestsLength = 0;
-            foreach (var request in requests)
-            {
-                requestsLength += FetchRequest.GetRequestAsPartOfMultirequestLength(request.Topic, encoding);
-            }
-
-            return requestsLength + DefaultHeaderSize;
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the MultiFetchRequest class.
-        /// </summary>
-        /// <param name="requests">Requests to package up and batch.</param>
-        public MultiFetchRequest(IList<FetchRequest> requests)
-        {
-            Guard.NotNull(requests, "requests");
-            ConsumerRequests = requests;
-            int length = GetRequestLength(requests, DefaultEncoding);
-            this.RequestBuffer = new BoundedBuffer(length);
-            this.WriteTo(this.RequestBuffer);
-        }
-
-        /// <summary>
-        /// Gets or sets the consumer requests to be batched into this multi-request.
-        /// </summary>
-        public IList<FetchRequest> ConsumerRequests { get; set; }
-
-        public override RequestTypes RequestType
-        {
-            get
-            {
-                return RequestTypes.MultiFetch;
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given stream
-        /// </summary>
-        /// <param name="output">
-        /// The output stream.
-        /// </param>
-        public void WriteTo(MemoryStream output)
-        {
-            Guard.NotNull(output, "output");
-
-            using (var writer = new KafkaBinaryWriter(output))
-            {
-                writer.Write(this.RequestBuffer.Capacity - DefaultRequestSizeSize);
-                writer.Write(this.RequestTypeId);
-                writer.Write((short)this.ConsumerRequests.Count);
-                this.WriteTo(writer);
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given writer
-        /// </summary>
-        /// <param name="writer">
-        /// The writer.
-        /// </param>
-        public void WriteTo(KafkaBinaryWriter writer)
-        {
-            Guard.NotNull(writer, "writer");
-
-            foreach (var consumerRequest in ConsumerRequests)
-            {
-                consumerRequest.WriteTo(writer);
-            }
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/MultiProducerRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/MultiProducerRequest.cs
deleted file mode 100644
index 995f6e3..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/MultiProducerRequest.cs
+++ /dev/null
@@ -1,134 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Requests
-{
-    using System;
-    using System.Collections.Generic;
-    using System.IO;
-    using System.Linq;
-    using System.Text;
-    using Kafka.Client.Messages;
-    using Kafka.Client.Serialization;
-    using Kafka.Client.Utils;
-
-    /// <summary>
-    /// Constructs a request containing multiple producer requests to send to Kafka.
-    /// </summary>
-    public class MultiProducerRequest : AbstractRequest, IWritable
-    {
-        public const byte DefaultRequestsCountSize = 2;
-
-        public static int GetBufferLength(IEnumerable<ProducerRequest> requests)
-        {
-            Guard.NotNull(requests, "requests");
-
-            return DefaultRequestSizeSize 
-                + DefaultRequestIdSize 
-                + DefaultRequestsCountSize
-                + (int)requests.Sum(x => x.RequestBuffer.Length - DefaultRequestIdSize - DefaultRequestSizeSize);
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the MultiProducerRequest class.
-        /// </summary>
-        /// <param name="requests">
-        /// The list of individual producer requests to send in this request.
-        /// </param>
-        public MultiProducerRequest(IEnumerable<ProducerRequest> requests)
-        {
-            Guard.NotNull(requests, "requests");
-
-            int length = GetBufferLength(requests);
-            ProducerRequests = requests;
-            this.RequestBuffer = new BoundedBuffer(length);
-            this.WriteTo(this.RequestBuffer);
-        }
-
-        /// <summary>
-        /// Gets or sets the list of producer requests to be sent in batch.
-        /// </summary>
-        public IEnumerable<ProducerRequest> ProducerRequests { get; set; }
-
-        public override RequestTypes RequestType
-        {
-            get
-            {
-                return RequestTypes.MultiProduce;
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given stream
-        /// </summary>
-        /// <param name="output">
-        /// The output stream.
-        /// </param>
-        public void WriteTo(MemoryStream output)
-        {
-            Guard.NotNull(output, "output");
-
-            using (var writer = new KafkaBinaryWriter(output))
-            {
-                writer.Write(this.RequestBuffer.Capacity - DefaultRequestSizeSize);
-                writer.Write(this.RequestTypeId);
-                this.WriteTo(writer);
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given writer
-        /// </summary>
-        /// <param name="writer">
-        /// The writer.
-        /// </param>
-        public void WriteTo(KafkaBinaryWriter writer)
-        {
-            Guard.NotNull(writer, "writer");
-
-            writer.Write((short)this.ProducerRequests.Count());
-            foreach (var request in ProducerRequests)
-            {
-                request.WriteTo(writer);
-            }
-        }
-
-        public override string ToString()
-        {
-            var sb = new StringBuilder();
-            sb.Append("Request size: ");
-            sb.Append(this.RequestBuffer.Capacity - DefaultRequestSizeSize);
-            sb.Append(", RequestId: ");
-            sb.Append(this.RequestTypeId);
-            sb.Append("(");
-            sb.Append((RequestTypes)this.RequestTypeId);
-            sb.Append("), Single Requests: {");
-            int i = 1;
-            foreach (var request in ProducerRequests)
-            {
-                sb.Append("Request ");
-                sb.Append(i);
-                sb.Append(" {");
-                sb.Append(request.ToString());
-                sb.AppendLine("} ");
-                i++;
-            }
-
-            return sb.ToString();
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/OffsetRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/OffsetRequest.cs
deleted file mode 100644
index c601e69..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/OffsetRequest.cs
+++ /dev/null
@@ -1,136 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Requests
-{
-    using System;
-    using System.Collections.Generic;
-    using System.Text;
-    using Kafka.Client.Messages;
-    using Kafka.Client.Serialization;
-    using Kafka.Client.Utils;
-
-    /// <summary>
-    /// Constructs a request to send to Kafka to get the current offset for a given topic
-    /// </summary>
-    public class OffsetRequest : AbstractRequest, IWritable
-    {
-        /// <summary>
-        /// The latest time constant.
-        /// </summary>
-        public static readonly long LatestTime = -1L;
-
-        /// <summary>
-        /// The earliest time constant.
-        /// </summary>
-        public static readonly long EarliestTime = -2L;
-
-        public const string SmallestTime = "smallest";
-
-        public const string LargestTime = "largest";
-
-        public const byte DefaultTopicSizeSize = 2;
-        public const byte DefaultPartitionSize = 4;
-        public const byte DefaultTimeSize = 8;
-        public const byte DefaultMaxOffsetsSize = 4;
-        public const byte DefaultHeaderSize = DefaultRequestSizeSize + DefaultTopicSizeSize + DefaultPartitionSize + DefaultRequestIdSize + DefaultTimeSize + DefaultMaxOffsetsSize;
-
-        public static int GetRequestLength(string topic, string encoding = DefaultEncoding)
-        {
-            short topicLength = GetTopicLength(topic, encoding);
-            return topicLength + DefaultHeaderSize;
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the OffsetRequest class.
-        /// </summary>
-        public OffsetRequest()
-        {
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the OffsetRequest class.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="time">The time from which to request offsets.</param>
-        /// <param name="maxOffsets">The maximum amount of offsets to return.</param>
-        public OffsetRequest(string topic, int partition, long time, int maxOffsets)
-        {
-            Topic = topic;
-            Partition = partition;
-            Time = time;
-            MaxOffsets = maxOffsets;
-
-            int length = GetRequestLength(topic, DefaultEncoding);
-            this.RequestBuffer = new BoundedBuffer(length);
-            this.WriteTo(this.RequestBuffer);
-        }
-
-        /// <summary>
-        /// Gets the time.
-        /// </summary>
-        public long Time { get; private set; }
-
-        /// <summary>
-        /// Gets the maximum number of offsets to return.
-        /// </summary>
-        public int MaxOffsets { get; private set; }
-
-        public override RequestTypes RequestType
-        {
-            get
-            {
-                return RequestTypes.Offsets;
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given stream
-        /// </summary>
-        /// <param name="output">
-        /// The output stream.
-        /// </param>
-        public void WriteTo(System.IO.MemoryStream output)
-        {
-            Guard.NotNull(output, "output");
-
-            using (var writer = new KafkaBinaryWriter(output))
-            {
-                writer.Write(this.RequestBuffer.Capacity - DefaultRequestSizeSize);
-                writer.Write(this.RequestTypeId);
-                this.WriteTo(writer);
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given writer
-        /// </summary>
-        /// <param name="writer">
-        /// The writer.
-        /// </param>
-        public void WriteTo(KafkaBinaryWriter writer)
-        {
-            Guard.NotNull(writer, "writer");
-
-            writer.WriteTopic(this.Topic, DefaultEncoding);
-            writer.Write(this.Partition);
-            writer.Write(this.Time);
-            writer.Write(this.MaxOffsets);
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/ProducerRequest.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/ProducerRequest.cs
deleted file mode 100644
index 6adf7ac3..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/ProducerRequest.cs
+++ /dev/null
@@ -1,142 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Requests
-{
-    using System;
-    using System.Collections.Generic;
-    using System.IO;
-    using System.Text;
-    using Kafka.Client.Messages;
-    using Kafka.Client.Serialization;
-    using Kafka.Client.Utils;
-
-    /// <summary>
-    /// Constructs a request to send to Kafka.
-    /// </summary>
-    public class ProducerRequest : AbstractRequest, IWritable
-    {
-        public const int RandomPartition = -1;
-        public const byte DefaultTopicSizeSize = 2;
-        public const byte DefaultPartitionSize = 4;
-        public const byte DefaultSetSizeSize = 4;
-        public const byte DefaultHeaderSize = DefaultRequestSizeSize + DefaultTopicSizeSize + DefaultPartitionSize + DefaultRequestIdSize + DefaultSetSizeSize;
-
-        public static int GetRequestLength(string topic, int messegesSize, string encoding = DefaultEncoding)
-        {
-            short topicLength = GetTopicLength(topic, encoding);
-            return topicLength + DefaultHeaderSize + messegesSize;
-        }
-
-        public ProducerRequest(string topic, int partition, BufferedMessageSet messages)
-        {
-            Guard.NotNull(messages, "messages");
-
-            int length = GetRequestLength(topic, messages.SetSize);
-            this.RequestBuffer = new BoundedBuffer(length);
-            this.Topic = topic;
-            this.Partition = partition;
-            this.MessageSet = messages;
-            this.WriteTo(this.RequestBuffer);
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the ProducerRequest class.
-        /// </summary>
-        /// <param name="topic">The topic to publish to.</param>
-        /// <param name="partition">The partition to publish to.</param>
-        /// <param name="messages">The list of messages to send.</param>
-        public ProducerRequest(string topic, int partition, IEnumerable<Message> messages)
-            : this(topic, partition, new BufferedMessageSet(messages))
-        {
-        }
-
-        public BufferedMessageSet MessageSet { get; private set; }
-
-        public override RequestTypes RequestType
-        {
-            get
-            {
-                return RequestTypes.Produce;
-            }
-        }
-
-        public int TotalSize
-        {
-            get
-            {
-                return (int)this.RequestBuffer.Length;
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given stream
-        /// </summary>
-        /// <param name="output">
-        /// The output stream.
-        /// </param>
-        public void WriteTo(MemoryStream output)
-        {
-            Guard.NotNull(output, "output");
-
-            using (var writer = new KafkaBinaryWriter(output))
-            {
-                writer.Write(this.RequestBuffer.Capacity - DefaultRequestSizeSize);
-                writer.Write(this.RequestTypeId);
-                this.WriteTo(writer);
-            }
-        }
-
-        /// <summary>
-        /// Writes content into given writer
-        /// </summary>
-        /// <param name="writer">
-        /// The writer.
-        /// </param>
-        public void WriteTo(KafkaBinaryWriter writer)
-        {
-            Guard.NotNull(writer, "writer");
-
-            writer.WriteTopic(this.Topic, DefaultEncoding);
-            writer.Write(this.Partition);
-            writer.Write(this.MessageSet.SetSize);
-            this.MessageSet.WriteTo(writer);
-        }
-
-        public override string ToString()
-        {
-            var sb = new StringBuilder();
-            sb.Append("Request size: ");
-            sb.Append(this.TotalSize);
-            sb.Append(", RequestId: ");
-            sb.Append(this.RequestTypeId);
-            sb.Append("(");
-            sb.Append((RequestTypes)this.RequestTypeId);
-            sb.Append(")");
-            sb.Append(", Topic: ");
-            sb.Append(this.Topic);
-            sb.Append(", Partition: ");
-            sb.Append(this.Partition);
-            sb.Append(", Set size: ");
-            sb.Append(this.MessageSet.SetSize);
-            sb.Append(", Set {");
-            sb.Append(this.MessageSet.ToString());
-            sb.Append("}");
-            return sb.ToString();
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/RequestTypes.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/RequestTypes.cs
deleted file mode 100644
index 38df68a..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Requests/RequestTypes.cs
+++ /dev/null
@@ -1,53 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Requests
-{
-    /// <summary>
-    /// Requests types for Kafka
-    /// </summary>
-    /// <remarks>
-    /// Many of these are not in play yet.
-    /// </remarks>
-    public enum RequestTypes : short 
-    {
-        /// <summary>
-        /// Produce a message.
-        /// </summary>
-        Produce = 0,
-
-        /// <summary>
-        /// Fetch a message.
-        /// </summary>
-        Fetch = 1,
-
-        /// <summary>
-        /// Multi-fetch messages.
-        /// </summary>
-        MultiFetch = 2,
-        
-        /// <summary>
-        /// Multi-produce messages.
-        /// </summary>
-        MultiProduce = 3,
-
-        /// <summary>
-        /// Gets offsets.
-        /// </summary>
-        Offsets = 4
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/DefaultEncoder.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/DefaultEncoder.cs
deleted file mode 100644
index d8b136e..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/DefaultEncoder.cs
+++ /dev/null
@@ -1,41 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Serialization
-{
-    using Kafka.Client.Messages;
-
-    /// <summary>
-    /// Default serializer that expects <see cref="Message" /> object
-    /// </summary>
-    public class DefaultEncoder : IEncoder<Message>
-    {
-        /// <summary>
-        /// Do nothing with data
-        /// </summary>
-        /// <param name="data">
-        /// The data, that are already in <see cref="Message" /> format.
-        /// </param>
-        /// <returns>
-        /// Serialized data
-        /// </returns>
-        public Message ToMessage(Message data)
-        {
-            return data;
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/IEncoder.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/IEncoder.cs
deleted file mode 100644
index 533720d..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/IEncoder.cs
+++ /dev/null
@@ -1,41 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Serialization
-{
-    using Kafka.Client.Messages;
-
-    /// <summary>
-    /// User-defined serializer to <see cref="Message" /> format
-    /// </summary>
-    /// <typeparam name="TData">
-    /// Type od data
-    /// </typeparam>
-    public interface IEncoder<TData>
-    {
-        /// <summary>
-        /// Serializes given data to <see cref="Message" /> format
-        /// </summary>
-        /// <param name="data">
-        /// The data to serialize.
-        /// </param>
-        /// <returns>
-        /// Serialized data
-        /// </returns>
-        Message ToMessage(TData data);
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/IWritable.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/IWritable.cs
deleted file mode 100644
index 2991583..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/IWritable.cs
+++ /dev/null
@@ -1,43 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Serialization
-{
-    using System.IO;
-
-    /// <summary>
-    /// Writes content into given stream
-    /// </summary>
-    internal interface IWritable
-    {
-        /// <summary>
-        /// Writes content into given stream
-        /// </summary>
-        /// <param name="output">
-        /// The output stream.
-        /// </param>
-        void WriteTo(MemoryStream output);
-
-        /// <summary>
-        /// Writes content into given writer
-        /// </summary>
-        /// <param name="writer">
-        /// The writer.
-        /// </param>
-        void WriteTo(KafkaBinaryWriter writer);
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/KafkaBinaryReader.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/KafkaBinaryReader.cs
deleted file mode 100644
index 2ad95b4..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/KafkaBinaryReader.cs
+++ /dev/null
@@ -1,148 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Serialization
-{
-    using System.IO;
-    using System.Net;
-    using System.Text;
-    using System.Net.Sockets;
-
-    /// <summary>
-    /// Reads data from underlying stream using big endian bytes order for primitive types
-    /// and UTF-8 encoding for strings.
-    /// </summary>
-    public class KafkaBinaryReader : BinaryReader
-    {
-        /// <summary>
-        /// Initializes a new instance of the <see cref="KafkaBinaryReader"/> class
-        /// using big endian bytes order for primive types and UTF-8 encoding for strings.
-        /// </summary>
-        /// <param name="input">
-        /// The input stream.
-        /// </param>
-        public KafkaBinaryReader(Stream input)
-            : base(input)
-        { 
-        }
-
-        /// <summary>
-        /// Resets position pointer.
-        /// </summary>
-        /// <param name="disposing">
-        /// Not used
-        /// </param>
-        protected override void Dispose(bool disposing)
-        {
-            if (this.BaseStream.CanSeek)
-            {
-                this.BaseStream.Position = 0;
-            }
-        }
-
-        /// <summary>
-        /// Reads two-bytes signed integer from the current stream using big endian bytes order 
-        /// and advances the stream position by two bytes
-        /// </summary>
-        /// <returns>
-        /// The two-byte signed integer read from the current stream.
-        /// </returns>
-        public override short ReadInt16()
-        {
-            short value = base.ReadInt16();
-            short currentOrdered = IPAddress.NetworkToHostOrder(value);
-            return currentOrdered;
-        }
-
-        /// <summary>
-        /// Reads four-bytes signed integer from the current stream using big endian bytes order 
-        /// and advances the stream position by four bytes
-        /// </summary>
-        /// <returns>
-        /// The four-byte signed integer read from the current stream.
-        /// </returns>
-        public override int ReadInt32()
-        {
-            int value = base.ReadInt32();
-            int currentOrdered = IPAddress.NetworkToHostOrder(value);
-            return currentOrdered;
-        }
-
-        /// <summary>
-        /// Reads eight-bytes signed integer from the current stream using big endian bytes order 
-        /// and advances the stream position by eight bytes
-        /// </summary>
-        /// <returns>
-        /// The eight-byte signed integer read from the current stream.
-        /// </returns>
-        public override long ReadInt64()
-        {
-            long value = base.ReadInt64();
-            long currentOrdered = IPAddress.NetworkToHostOrder(value);
-            return currentOrdered;
-        }
-
-        /// <summary>
-        /// Reads four-bytes signed integer from the current stream using big endian bytes order 
-        /// and advances the stream position by four bytes
-        /// </summary>
-        /// <returns>
-        /// The four-byte signed integer read from the current stream.
-        /// </returns>
-        public override int Read()
-        {
-            int value = base.Read();
-            int currentOrdered = IPAddress.NetworkToHostOrder(value);
-            return currentOrdered;
-        }
-
-        /// <summary>
-        /// Reads fixed-length topic from underlying stream using given encoding.
-        /// </summary>
-        /// <param name="encoding">
-        /// The encoding to use.
-        /// </param>
-        /// <returns>
-        /// The read topic.
-        /// </returns>
-        public string ReadTopic(string encoding)
-        {
-            short length = this.ReadInt16();
-            if (length == -1)
-            {
-                return null;
-            }
-
-            var bytes = this.ReadBytes(length);
-            Encoding encoder = Encoding.GetEncoding(encoding);
-            return encoder.GetString(bytes);
-        }
-
-        public bool DataAvailabe
-        {
-            get
-            {
-                if (this.BaseStream is NetworkStream)
-                {
-                    return ((NetworkStream)this.BaseStream).DataAvailable;
-                }
-
-                return this.BaseStream.Length != this.BaseStream.Position;
-            }
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/KafkaBinaryWriter.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/KafkaBinaryWriter.cs
deleted file mode 100644
index 7b668cd..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/KafkaBinaryWriter.cs
+++ /dev/null
@@ -1,127 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Serialization
-{
-    using System.IO;
-    using System.Net;
-    using System.Text;
-
-    /// <summary>
-    /// Writes data into underlying stream using big endian bytes order for primitive types
-    /// and UTF-8 encoding for strings.
-    /// </summary>
-    public class KafkaBinaryWriter : BinaryWriter
-    {
-        /// <summary>
-        /// Initializes a new instance of the <see cref="KafkaBinaryWriter"/> class 
-        /// using big endian bytes order for primive types and UTF-8 encoding for strings.
-        /// </summary>
-        protected KafkaBinaryWriter()
-        {  
-        }
-
-        /// <summary>
-        /// Initializes a new instance of the <see cref="KafkaBinaryWriter"/> class 
-        /// using big endian bytes order for primive types and UTF-8 encoding for strings.
-        /// </summary>
-        /// <param name="output">
-        /// The output stream.
-        /// </param>
-        public KafkaBinaryWriter(Stream output)
-            : base(output)
-        {
-        }
-  
-        /// <summary>
-        /// Flushes data into stream and resets position pointer.
-        /// </summary>
-        /// <param name="disposing">
-        /// Not used
-        /// </param>
-        protected override void Dispose(bool disposing)
-        {
-            this.Flush();
-            this.OutStream.Position = 0;
-        }
-
-        /// <summary>
-        /// Writes four-bytes signed integer to the current stream using big endian bytes order 
-        /// and advances the stream position by four bytes
-        /// </summary>
-        /// <param name="value">
-        /// The value to write.
-        /// </param>
-        public override void Write(int value)
-        {
-            int bigOrdered = IPAddress.HostToNetworkOrder(value);
-            base.Write(bigOrdered);
-        }
-
-        /// <summary>
-        /// Writes eight-bytes signed integer to the current stream using big endian bytes order 
-        /// and advances the stream position by eight bytes
-        /// </summary>
-        /// <param name="value">
-        /// The value to write.
-        /// </param>
-        public override void Write(long value)
-        {
-            long bigOrdered = IPAddress.HostToNetworkOrder(value);
-            base.Write(bigOrdered);
-        }
-
-        /// <summary>
-        /// Writes two-bytes signed integer to the current stream using big endian bytes order 
-        /// and advances the stream position by two bytes
-        /// </summary>
-        /// <param name="value">
-        /// The value to write.
-        /// </param>
-        public override void Write(short value)
-        {
-            short bigOrdered = IPAddress.HostToNetworkOrder(value);
-            base.Write(bigOrdered);
-        }
-
-        /// <summary>
-        /// Writes topic and his size into underlying stream using given encoding.
-        /// </summary>
-        /// <param name="topic">
-        /// The topic to write.
-        /// </param>
-        /// <param name="encoding">
-        /// The encoding to use.
-        /// </param>
-        public void WriteTopic(string topic, string encoding)
-        {
-            if (string.IsNullOrEmpty(topic))
-            {
-                short defaultTopic = -1;
-                this.Write(defaultTopic);
-            }
-            else
-            {
-                var length = (short)topic.Length;
-                this.Write(length);
-                Encoding encoder = Encoding.GetEncoding(encoding);
-                byte[] encodedTopic = encoder.GetBytes(topic);
-                this.Write(encodedTopic);
-            }
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/StringEncoder.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/StringEncoder.cs
deleted file mode 100644
index e20392e..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Serialization/StringEncoder.cs
+++ /dev/null
@@ -1,43 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Serialization
-{
-    using System.Text;
-    using Kafka.Client.Messages;
-
-    /// <summary>
-    /// Serializes data to <see cref="Message" /> format using UTF-8 encoding
-    /// </summary>
-    public class StringEncoder : IEncoder<string>
-    {
-        /// <summary>
-        /// Serializes given data to <see cref="Message" /> format using UTF-8 encoding
-        /// </summary>
-        /// <param name="data">
-        /// The data to serialize.
-        /// </param>
-        /// <returns>
-        /// Serialized data
-        /// </returns>
-        public Message ToMessage(string data)
-        {
-            byte[] encodedData = Encoding.UTF8.GetBytes(data);
-            return new Message(encodedData);
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Util/BitWorks.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Util/BitWorks.cs
deleted file mode 100644
index 09dad69..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Util/BitWorks.cs
+++ /dev/null
@@ -1,86 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-
-namespace Kafka.Client.Util
-{
-    /// <summary>
-    /// Utilty class for managing bits and bytes.
-    /// </summary>
-    public class BitWorks
-    {
-        /// <summary>
-        /// Converts the value to bytes and reverses them.
-        /// </summary>
-        /// <param name="value">The value to convert to bytes.</param>
-        /// <returns>Bytes representing the value.</returns>
-        public static byte[] GetBytesReversed(short value)
-        {
-            return ReverseBytes(BitConverter.GetBytes(value));
-        }
-
-        /// <summary>
-        /// Converts the value to bytes and reverses them.
-        /// </summary>
-        /// <param name="value">The value to convert to bytes.</param>
-        /// <returns>Bytes representing the value.</returns>
-        public static byte[] GetBytesReversed(int value)
-        {
-            return ReverseBytes(BitConverter.GetBytes(value));
-        }
-
-        /// <summary>
-        /// Converts the value to bytes and reverses them.
-        /// </summary>
-        /// <param name="value">The value to convert to bytes.</param>
-        /// <returns>Bytes representing the value.</returns>
-        public static byte[] GetBytesReversed(long value)
-        {
-            return ReverseBytes(BitConverter.GetBytes(value));
-        }
-
-        /// <summary>
-        /// Reverse the position of an array of bytes.
-        /// </summary>
-        /// <param name="inArray">
-        /// The array to reverse.  If null or zero-length then the returned array will be null.
-        /// </param>
-        /// <returns>The reversed array.</returns>
-        public static byte[] ReverseBytes(byte[] inArray)
-        {
-            if (inArray != null && inArray.Length > 0)
-            {
-                int highCtr = inArray.Length - 1;
-                byte temp;
-
-                for (int ctr = 0; ctr < inArray.Length / 2; ctr++)
-                {
-                    temp = inArray[ctr];
-                    inArray[ctr] = inArray[highCtr];
-                    inArray[highCtr] = temp;
-                    highCtr -= 1;
-                }
-            }
-
-            return inArray;
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Util/Crc32.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Util/Crc32.cs
deleted file mode 100644
index 8c1bb6a..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Util/Crc32.cs
+++ /dev/null
@@ -1,132 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-// <auto-generated />
-using System;
-using System.Security.Cryptography;
-
-namespace Kafka.Client.Util
-{
-    /// <summary>
-    /// From http://damieng.com/blog/2006/08/08/calculating_crc32_in_c_and_net
-    /// </summary>
-    public class Crc32 : HashAlgorithm
-    {
-        public const UInt32 DefaultPolynomial = 0xedb88320;
-        public const UInt32 DefaultSeed = 0xffffffff;
-
-        private UInt32 hash;
-        private UInt32 seed;
-        private UInt32[] table;
-        private static UInt32[] defaultTable;
-
-        public Crc32()
-        {
-            table = InitializeTable(DefaultPolynomial);
-            seed = DefaultSeed;
-            Initialize();
-        }
-
-        public Crc32(UInt32 polynomial, UInt32 seed)
-        {
-            table = InitializeTable(polynomial);
-            this.seed = seed;
-            Initialize();
-        }
-
-        public override void Initialize()
-        {
-            hash = seed;
-        }
-
-        protected override void HashCore(byte[] buffer, int start, int length)
-        {
-            hash = CalculateHash(table, hash, buffer, start, length);
-        }
-
-        protected override byte[] HashFinal()
-        {
-            byte[] hashBuffer = UInt32ToBigEndianBytes(~hash);
-            this.HashValue = hashBuffer;
-            return hashBuffer;
-        }
-
-        public override int HashSize
-        {
-            get { return 32; }
-        }
-
-        public static UInt32 Compute(byte[] buffer)
-        {
-            return ~CalculateHash(InitializeTable(DefaultPolynomial), DefaultSeed, buffer, 0, buffer.Length);
-        }
-
-        public static UInt32 Compute(UInt32 seed, byte[] buffer)
-        {
-            return ~CalculateHash(InitializeTable(DefaultPolynomial), seed, buffer, 0, buffer.Length);
-        }
-
-        public static UInt32 Compute(UInt32 polynomial, UInt32 seed, byte[] buffer)
-        {
-            return ~CalculateHash(InitializeTable(polynomial), seed, buffer, 0, buffer.Length);
-        }
-
-        private static UInt32[] InitializeTable(UInt32 polynomial)
-        {
-            if (polynomial == DefaultPolynomial && defaultTable != null)
-                return defaultTable;
-
-            UInt32[] createTable = new UInt32[256];
-            for (int i = 0; i < 256; i++)
-            {
-                UInt32 entry = (UInt32)i;
-                for (int j = 0; j < 8; j++)
-                    if ((entry & 1) == 1)
-                        entry = (entry >> 1) ^ polynomial;
-                    else
-                        entry = entry >> 1;
-                createTable[i] = entry;
-            }
-
-            if (polynomial == DefaultPolynomial)
-                defaultTable = createTable;
-
-            return createTable;
-        }
-
-        private static UInt32 CalculateHash(UInt32[] table, UInt32 seed, byte[] buffer, int start, int size)
-        {
-            UInt32 crc = seed;
-            for (int i = start; i < size; i++)
-                unchecked
-                {
-                    crc = (crc >> 8) ^ table[buffer[i] ^ crc & 0xff];
-                }
-            return crc;
-        }
-
-        private byte[] UInt32ToBigEndianBytes(UInt32 x)
-        {
-            return new byte[] {
-			    (byte)((x >> 24) & 0xff),
-			    (byte)((x >> 16) & 0xff),
-			    (byte)((x >> 8) & 0xff),
-			    (byte)(x & 0xff)
-		    };
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/BitWorks.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/BitWorks.cs
deleted file mode 100644
index b759970..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/BitWorks.cs
+++ /dev/null
@@ -1,83 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Utils
-{
-    using System;
-
-    /// <summary>
-    /// Utilty class for managing bits and bytes.
-    /// </summary>
-    internal class BitWorks
-    {
-        /// <summary>
-        /// Converts the value to bytes and reverses them.
-        /// </summary>
-        /// <param name="value">The value to convert to bytes.</param>
-        /// <returns>Bytes representing the value.</returns>
-        public static byte[] GetBytesReversed(short value)
-        {
-            return ReverseBytes(BitConverter.GetBytes(value));
-        }
-
-        /// <summary>
-        /// Converts the value to bytes and reverses them.
-        /// </summary>
-        /// <param name="value">The value to convert to bytes.</param>
-        /// <returns>Bytes representing the value.</returns>
-        public static byte[] GetBytesReversed(int value)
-        {
-            return ReverseBytes(BitConverter.GetBytes(value));
-        }
-
-        /// <summary>
-        /// Converts the value to bytes and reverses them.
-        /// </summary>
-        /// <param name="value">The value to convert to bytes.</param>
-        /// <returns>Bytes representing the value.</returns>
-        public static byte[] GetBytesReversed(long value)
-        {
-            return ReverseBytes(BitConverter.GetBytes(value));
-        }
-
-        /// <summary>
-        /// Reverse the position of an array of bytes.
-        /// </summary>
-        /// <param name="inArray">
-        /// The array to reverse.  If null or zero-length then the returned array will be null.
-        /// </param>
-        /// <returns>The reversed array.</returns>
-        public static byte[] ReverseBytes(byte[] inArray)
-        {
-            if (inArray != null && inArray.Length > 0)
-            {
-                int highCtr = inArray.Length - 1;
-                byte temp;
-
-                for (int ctr = 0; ctr < inArray.Length / 2; ctr++)
-                {
-                    temp = inArray[ctr];
-                    inArray[ctr] = inArray[highCtr];
-                    inArray[highCtr] = temp;
-                    highCtr -= 1;
-                }
-            }
-
-            return inArray;
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Crc32Hasher.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Crc32Hasher.cs
deleted file mode 100644
index b9a5912..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Crc32Hasher.cs
+++ /dev/null
@@ -1,138 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Utils
-{
-    using System;
-    using System.Security.Cryptography;
-
-    /// <summary>
-    /// From http://damieng.com/blog/2006/08/08/calculating_crc32_in_c_and_net
-    /// </summary>
-    internal class Crc32Hasher : HashAlgorithm
-    {
-        public const UInt32 DefaultPolynomial = 0xedb88320;
-        public const UInt32 DefaultSeed = 0xffffffff;
-
-        private UInt32 hash;
-        private UInt32 seed;
-        private UInt32[] table;
-        private static UInt32[] defaultTable;
-
-        public Crc32Hasher()
-        {
-            table = InitializeTable(DefaultPolynomial);
-            seed = DefaultSeed;
-            Initialize();
-        }
-
-        public Crc32Hasher(UInt32 polynomial, UInt32 seed)
-        {
-            table = InitializeTable(polynomial);
-            this.seed = seed;
-            Initialize();
-        }
-
-        public override void Initialize()
-        {
-            hash = seed;
-        }
-
-        protected override void HashCore(byte[] buffer, int start, int length)
-        {
-            hash = CalculateHash(table, hash, buffer, start, length);
-        }
-
-        protected override byte[] HashFinal()
-        {
-            byte[] hashBuffer = UInt32ToBigEndianBytes(~hash);
-            this.HashValue = hashBuffer;
-            return hashBuffer;
-        }
-
-        public override int HashSize
-        {
-            get { return 32; }
-        }
-
-        public static byte[] Compute(byte[] bytes)
-        {
-            var hasher = new Crc32Hasher();
-            byte[] hash = hasher.ComputeHash(bytes);
-            return hash;
-        }
-
-        //public static UInt32 Compute(byte[] buffer)
-        //{
-        //    return ~CalculateHash(InitializeTable(DefaultPolynomial), DefaultSeed, buffer, 0, buffer.Length);
-        //}
-
-        //public static UInt32 Compute(UInt32 seed, byte[] buffer)
-        //{
-        //    return ~CalculateHash(InitializeTable(DefaultPolynomial), seed, buffer, 0, buffer.Length);
-        //}
-
-        //public static UInt32 Compute(UInt32 polynomial, UInt32 seed, byte[] buffer)
-        //{
-        //    return ~CalculateHash(InitializeTable(polynomial), seed, buffer, 0, buffer.Length);
-        //}
-
-        private static UInt32[] InitializeTable(UInt32 polynomial)
-        {
-            if (polynomial == DefaultPolynomial && defaultTable != null)
-                return defaultTable;
-
-            UInt32[] createTable = new UInt32[256];
-            for (int i = 0; i < 256; i++)
-            {
-                UInt32 entry = (UInt32)i;
-                for (int j = 0; j < 8; j++)
-                    if ((entry & 1) == 1)
-                        entry = (entry >> 1) ^ polynomial;
-                    else
-                        entry = entry >> 1;
-                createTable[i] = entry;
-            }
-
-            if (polynomial == DefaultPolynomial)
-                defaultTable = createTable;
-
-            return createTable;
-        }
-
-        private static UInt32 CalculateHash(UInt32[] table, UInt32 seed, byte[] buffer, int start, int size)
-        {
-            UInt32 crc = seed;
-            for (int i = start; i < size; i++)
-                unchecked
-                {
-                    crc = (crc >> 8) ^ table[buffer[i] ^ crc & 0xff];
-                }
-            return crc;
-        }
-
-        private byte[] UInt32ToBigEndianBytes(UInt32 x)
-        {
-            return new byte[] {
-			    (byte)((x >> 24) & 0xff),
-			    (byte)((x >> 16) & 0xff),
-			    (byte)((x >> 8) & 0xff),
-			    (byte)(x & 0xff)
-		    };
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ErrorMapping.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ErrorMapping.cs
deleted file mode 100644
index 6595b47..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ErrorMapping.cs
+++ /dev/null
@@ -1,29 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Utils

-{

-    public class ErrorMapping

-    {

-        public static readonly int UnknownCode = -1;

-        public static readonly int NoError = 0;

-        public static readonly int OffsetOutOfRangeCode = 1;

-        public static readonly int InvalidMessageCode = 2;

-        public static readonly int WrongPartitionCode = 3;

-        public static readonly int InvalidFetchSizeCode = 4;

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Extensions.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Extensions.cs
deleted file mode 100644
index 098b838..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Extensions.cs
+++ /dev/null
@@ -1,49 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Utils

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Linq;

-    using System.Linq.Expressions;

-    using System.Text;

-

-    internal static class Extensions

-    {

-        public static string ToMultiString<T>(this IEnumerable<T> items, string separator)

-        {

-            if (items.Count() == 0)

-            {

-                return "NULL";

-            }

-

-            return String.Join(separator, items);

-        }

-

-        public static string ToMultiString<T>(this IEnumerable<T> items, Expression<Func<T, object>> selector, string separator)

-        {

-            if (items.Count() == 0)

-            {

-                return "NULL";

-            }

-

-            Func<T, object> compiled = selector.Compile();

-            return String.Join(separator, items.Select(compiled));

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Guard.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Guard.cs
deleted file mode 100644
index 12194c3..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/Guard.cs
+++ /dev/null
@@ -1,120 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Utils

-{

-    using System;

-    using System.Collections;

-    using System.Collections.Generic;

-    using System.Linq.Expressions;

-    using System.Text.RegularExpressions;

-

-    internal static class Guard

-    {

-        public static void NotNull(object parameter, string paramName)

-        {

-            if (parameter == null)

-            {

-                throw new ArgumentNullException(paramName);

-            }

-        }

-

-        public static void Count(ICollection parameter, int length, string paramName)

-        {

-            if (parameter.Count != length)

-            {

-                throw new ArgumentOutOfRangeException(paramName, parameter.Count, string.Empty);

-            }

-        }

-

-        public static void Greater(int parameter, int expected, string paramName)

-        {

-            if (parameter <= expected)

-            {

-                throw new ArgumentOutOfRangeException(paramName, parameter, string.Empty);

-            }

-        }

-

-        public static void NotNullNorEmpty(string parameter, string paramName)

-        {

-            if (string.IsNullOrEmpty(parameter))

-            {

-                throw new ArgumentException("Given string is empty", paramName);

-            }

-        }

-

-        public static void AllNotNull(IEnumerable parameter, string paramName)

-        {

-            foreach (var par in parameter)

-            {

-                if (par == null)

-                {

-                    throw new ArgumentNullException(paramName);

-                }

-            }

-        }

-

-        /// <summary>

-        /// Checks whether given expression is true. Throws given exception type if not.

-        /// </summary>

-        /// <typeparam name="TException">

-        /// Type of exception that i thrown when condition is not met.

-        /// </typeparam>

-        /// <param name="assertion">

-        /// The assertion.

-        /// </param>

-        public static void Assert<TException>(Expression<Func<bool>> assertion)

-            where TException : Exception, new()

-        {

-            var compiled = assertion.Compile();

-            var evaluatedValue = compiled();

-            if (!evaluatedValue)

-            {

-                var e = (Exception)Activator.CreateInstance(

-                    typeof(TException),

-                    new object[] { string.Format("'{0}' is not met.", Normalize(assertion.ToString())) });

-                throw e;

-            }

-        }

-

-        /// <summary>

-        /// Creates string representation of lambda expression with unnecessary information 

-        /// stripped out. 

-        /// </summary>

-        /// <param name="expression">Lambda expression to process. </param>

-        /// <returns>Normalized string representation. </returns>

-        private static string Normalize(string expression)

-        {

-            var result = expression;

-            var replacements = new Dictionary<Regex, string>()

-            {

-                { new Regex("value\\([^)]*\\)\\."), string.Empty },

-                { new Regex("\\(\\)\\."), string.Empty },

-                { new Regex("\\(\\)\\ =>"), string.Empty },                

-                { new Regex("Not"), "!" }            

-            };

-

-            foreach (var pattern in replacements)

-            {

-                result = pattern.Key.Replace(result, pattern.Value);

-            }

-

-            result = result.Trim();

-            return result;

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/KafkaScheduler.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/KafkaScheduler.cs
deleted file mode 100644
index c261b7d..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/KafkaScheduler.cs
+++ /dev/null
@@ -1,86 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Utils

-{

-    using System;

-    using System.Globalization;

-    using System.Reflection;

-    using System.Threading;

-    using log4net;

-

-    /// <summary>

-    /// A scheduler for running jobs in the background

-    /// </summary>

-    internal class KafkaScheduler : IDisposable

-    {

-        public delegate void KafkaSchedulerDelegate();

-

-        private Timer timer;

-

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        private KafkaSchedulerDelegate methodToRun;

-

-        private volatile bool disposed;

-

-        private readonly object shuttingDownLock = new object();

-

-        public void ScheduleWithRate(KafkaSchedulerDelegate method, long delayMs, long periodMs)

-        {

-            methodToRun = method;

-            TimerCallback tcb = HandleCallback;

-            timer = new Timer(tcb, null, delayMs, periodMs);

-        }

-

-        private void HandleCallback(object o)

-        {

-            methodToRun();

-        }

-

-        public void Dispose()

-        {

-            if (this.disposed)

-            {

-                return;

-            }

-

-            lock (this.shuttingDownLock)

-            {

-                if (this.disposed)

-                {

-                    return;

-                }

-

-                this.disposed = true;

-            }

-

-            try

-            {

-                if (timer != null)

-                {

-                    timer.Dispose();

-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "shutdown scheduler");

-                }

-            }

-            catch (Exception exc)

-            {

-                Logger.Warn("Ignoring unexpected errors on closing", exc);

-            }

-        }

-    }

-}
\ No newline at end of file
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ReflectionHelper.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ReflectionHelper.cs
deleted file mode 100644
index 9c408ed..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ReflectionHelper.cs
+++ /dev/null
@@ -1,56 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Utils

-{

-    using System;

-    using System.Reflection;

-

-    internal static class ReflectionHelper

-    {

-        public static T Instantiate<T>(string className)

-            where T : class

-        {

-            object o1;

-            if (string.IsNullOrEmpty(className))

-            {

-                return default(T);

-            }

-

-            Type t1 = Type.GetType(className, true);

-            if (t1.IsGenericType)

-            {

-                var t2 = typeof(T).GetGenericArguments();

-                var t3 = t1.MakeGenericType(t2);

-                o1 = Activator.CreateInstance(t3);

-                return o1 as T;

-            }

-

-            o1 = Activator.CreateInstance(t1);

-            return o1 as T;

-        }

-

-        public static T GetInstanceField<T>(string name, object obj)

-            where T : class

-        {

-            Type type = obj.GetType();

-            FieldInfo info = type.GetField(name, BindingFlags.NonPublic | BindingFlags.Instance);

-            object value = info.GetValue(obj);

-            return (T)value;

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZKGroupDirs.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZKGroupDirs.cs
deleted file mode 100644
index 7a871e6..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZKGroupDirs.cs
+++ /dev/null
@@ -1,39 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Utils

-{

-    internal class ZKGroupDirs

-    {

-        private readonly string consumersPath = "/consumers";

-

-        public string ConsumerDir

-        {

-            get { return this.consumersPath; }

-        }

-

-        public string ConsumerGroupDir { get; private set; }

-

-        public string ConsumerRegistryDir { get; private set; }

-

-        public ZKGroupDirs(string group)

-        {

-            this.ConsumerGroupDir = this.consumersPath + "/" + group;

-            this.ConsumerRegistryDir = this.ConsumerGroupDir + "/ids";

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZKGroupTopicDirs.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZKGroupTopicDirs.cs
deleted file mode 100644
index 4ff626b..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZKGroupTopicDirs.cs
+++ /dev/null
@@ -1,32 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Utils

-{

-    internal class ZKGroupTopicDirs : ZKGroupDirs

-    {

-        public string ConsumerOffsetDir { get; private set; }

-

-        public string ConsumerOwnerDir { get; private set; }

-

-        public ZKGroupTopicDirs(string group, string topic) : base(group)

-        {

-            this.ConsumerOffsetDir = this.ConsumerGroupDir + "/offsets/" + topic;

-            this.ConsumerOwnerDir = this.ConsumerGroupDir + "/owners/" + topic;

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZkUtils.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZkUtils.cs
deleted file mode 100644
index 18202c6..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/Utils/ZkUtils.cs
+++ /dev/null
@@ -1,147 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Utils

-{

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Reflection;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.ZooKeeperIntegration;

-    using log4net;

-    using ZooKeeperNet;

-

-    internal class ZkUtils

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        internal static void UpdatePersistentPath(IZooKeeperClient zkClient, string path, string data)

-        {

-            try

-            {

-                zkClient.WriteData(path, data);

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                CreateParentPath(zkClient, path);

-

-                try

-                {

-                    zkClient.CreatePersistent(path, data);

-                }

-                catch (KeeperException.NodeExistsException)

-                {

-                    zkClient.WriteData(path, data);

-                }

-            }

-        }

-

-        internal static void CreateParentPath(IZooKeeperClient zkClient, string path)

-        {

-            string parentDir = path.Substring(0, path.LastIndexOf('/'));

-            if (parentDir.Length != 0)

-            {

-                zkClient.CreatePersistent(parentDir, true);

-            }

-        }

-

-        internal static void DeletePath(IZooKeeperClient zkClient, string path)

-        {

-            try

-            {

-                zkClient.Delete(path);

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                Logger.InfoFormat(CultureInfo.CurrentCulture, "{0} deleted during connection loss; this is ok", path);

-            }

-        }

-

-        internal static IDictionary<string, IList<string>> GetPartitionsForTopics(IZooKeeperClient zkClient, IEnumerable<string> topics)

-        {

-            var result = new Dictionary<string, IList<string>>();

-            foreach (string topic in topics)

-            {

-                var partList = new List<string>();

-                var brokers =

-                    zkClient.GetChildrenParentMayNotExist(ZooKeeperClient.DefaultBrokerTopicsPath + "/" + topic);

-                foreach (var broker in brokers)

-                {

-                    var numberOfParts =

-                        int.Parse(

-                            zkClient.ReadData<string>(ZooKeeperClient.DefaultBrokerTopicsPath + "/" + topic + "/" +

-                                                      broker),

-                                                      CultureInfo.CurrentCulture);

-                    for (int i = 0; i < numberOfParts; i++)

-                    {

-                        partList.Add(broker + "-" + i);

-                    }

-                }

-

-                partList.Sort();

-                result.Add(topic, partList);

-            }

-

-            return result;

-        }

-

-        internal static void CreateEphemeralPathExpectConflict(IZooKeeperClient zkClient, string path, string data)

-        {

-            try

-            {

-                CreateEphemeralPath(zkClient, path, data);

-            }

-            catch (KeeperException.NodeExistsException)

-            {

-                string storedData;

-                try

-                {

-                    storedData = zkClient.ReadData<string>(path);

-                }

-                catch (KeeperException.NoNodeException)

-                {

-                    // the node disappeared; treat as if node existed and let caller handles this

-                    throw;

-                }

-

-                if (storedData == null || storedData != data)

-                {

-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "conflict in {0} data: {1} stored data: {2}", path, data, storedData);

-                    throw;

-                }

-                else

-                {

-                    // otherwise, the creation succeeded, return normally

-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "{0} exits with value {1} during connection loss; this is ok", path, data);

-                }

-            }

-        }

-

-        internal static void CreateEphemeralPath(IZooKeeperClient zkClient, string path, string data)

-        {

-            try

-            {

-                zkClient.CreateEphemeral(path, data);

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                ZkUtils.CreateParentPath(zkClient, path);

-                zkClient.CreateEphemeral(path, data);

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperAwareKafkaClientBase.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperAwareKafkaClientBase.cs
deleted file mode 100644
index 2352c31..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperAwareKafkaClientBase.cs
+++ /dev/null
@@ -1,44 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client

-{

-    using Kafka.Client.Cfg;

-

-    /// <summary>

-    /// A base class for all Kafka clients that support ZooKeeper based automatic broker discovery

-    /// </summary>

-    public abstract class ZooKeeperAwareKafkaClientBase : KafkaClientBase

-    {

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperAwareKafkaClientBase"/> class.

-        /// </summary>

-        /// <param name="config">The config.</param>

-        protected ZooKeeperAwareKafkaClientBase(ZooKeeperConfiguration config)

-        {

-            this.IsZooKeeperEnabled = config != null && !string.IsNullOrEmpty(config.ZkConnect);

-        }

-

-        /// <summary>

-        /// Gets a value indicating whether ZooKeeper based automatic broker discovery is enabled.

-        /// </summary>

-        /// <value>

-        /// <c>true</c> if this instance is zoo keeper enabled; otherwise, <c>false</c>.

-        /// </value>

-        protected bool IsZooKeeperEnabled { get; private set; }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ChildChangedEventItem.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ChildChangedEventItem.cs
deleted file mode 100644
index 7188474..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ChildChangedEventItem.cs
+++ /dev/null
@@ -1,113 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Events

-{

-    using System.Linq;

-    using log4net;

-

-    /// <summary>

-    /// Represents methods that will handle a ZooKeeper child events  

-    /// </summary>

-    internal class ChildChangedEventItem

-    {

-        private readonly ILog logger;

-        private ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperChildChangedEventArgs> childChanged;

-

-        /// <summary>

-        /// Occurs when znode children changes

-        /// </summary>

-        public event ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperChildChangedEventArgs> ChildChanged

-        {

-            add

-            {

-                this.childChanged -= value;

-                this.childChanged += value;

-            }

-

-            remove

-            {

-                this.childChanged -= value;

-            }

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ChildChangedEventItem"/> class. 

-        /// </summary>

-        /// <param name="logger">

-        /// The logger.

-        /// </param>

-        /// <remarks>

-        /// Should use external logger to keep same format of all event logs

-        /// </remarks>

-        public ChildChangedEventItem(ILog logger)

-        {

-            this.logger = logger;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ChildChangedEventItem"/> class.

-        /// </summary>

-        /// <param name="logger">

-        /// The logger.

-        /// </param>

-        /// <param name="handler">

-        /// The subscribed handler.

-        /// </param>

-        /// <remarks>

-        /// Should use external logger to keep same format of all event logs

-        /// </remarks>

-        public ChildChangedEventItem(ILog logger, ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperChildChangedEventArgs> handler)

-        {

-            this.logger = logger;

-            this.ChildChanged += handler;

-        }

-

-        /// <summary>

-        /// Invokes subscribed handlers for ZooKeeeper children changes event

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        public void OnChildChanged(ZooKeeperChildChangedEventArgs e)

-        {

-            var handlers = this.childChanged;

-            if (handlers == null)

-            {

-                return;

-            }

-

-            foreach (var handler in handlers.GetInvocationList())

-            {

-                this.logger.Debug(e + " sent to " + handler.Target);

-            }

-

-            handlers(e);

-        }

-

-        /// <summary>

-        /// Gets the total count of subscribed handlers

-        /// </summary>

-        public int Count

-        {

-            get

-            {

-                return this.childChanged != null ? this.childChanged.GetInvocationList().Count() : 0;

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/DataChangedEventItem.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/DataChangedEventItem.cs
deleted file mode 100644
index d7ee904..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/DataChangedEventItem.cs
+++ /dev/null
@@ -1,161 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Events

-{

-    using System.Linq;

-    using log4net;

-

-    /// <summary>

-    /// Represents methods that will handle a ZooKeeper data events  

-    /// </summary>

-    internal class DataChangedEventItem

-    {

-        private readonly ILog logger;

-        private ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperDataChangedEventArgs> dataChanged;

-        private ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperDataChangedEventArgs> dataDeleted;

-

-        /// <summary>

-        /// Occurs when znode data changes

-        /// </summary>

-        public event ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperDataChangedEventArgs> DataChanged

-        {

-            add

-            {

-                this.dataChanged -= value;

-                this.dataChanged += value;

-            }

-

-            remove

-            {

-                this.dataChanged -= value;

-            }

-        }

-

-        /// <summary>

-        /// Occurs when znode data deletes

-        /// </summary>

-        public event ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperDataChangedEventArgs> DataDeleted

-        {

-            add

-            {

-                this.dataDeleted -= value;

-                this.dataDeleted += value;

-            }

-

-            remove

-            {

-                this.dataDeleted -= value;

-            }

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="DataChangedEventItem"/> class.

-        /// </summary>

-        /// <param name="logger">

-        /// The logger.

-        /// </param>

-        /// <remarks>

-        /// Should use external logger to keep same format of all event logs

-        /// </remarks>

-        public DataChangedEventItem(ILog logger)

-        {

-            this.logger = logger;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="DataChangedEventItem"/> class.

-        /// </summary>

-        /// <param name="logger">

-        /// The logger.

-        /// </param>

-        /// <param name="changedHandler">

-        /// The changed handler.

-        /// </param>

-        /// <param name="deletedHandler">

-        /// The deleted handler.

-        /// </param>

-        /// <remarks>

-        /// Should use external logger to keep same format of all event logs

-        /// </remarks>

-        public DataChangedEventItem(

-            ILog logger,

-            ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperDataChangedEventArgs> changedHandler,

-            ZooKeeperClient.ZooKeeperEventHandler<ZooKeeperDataChangedEventArgs> deletedHandler)

-        {

-            this.logger = logger;

-            this.DataChanged += changedHandler;

-            this.DataDeleted += deletedHandler;

-        }

-

-        /// <summary>

-        /// Invokes subscribed handlers for ZooKeeeper data changes event

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        public void OnDataChanged(ZooKeeperDataChangedEventArgs e)

-        {

-            var handlers = this.dataChanged;

-            if (handlers == null)

-            {

-                return;

-            }

-

-            foreach (var handler in handlers.GetInvocationList())

-            {

-                this.logger.Debug(e + " sent to " + handler.Target);

-            }

-

-            handlers(e);

-        }

-

-        /// <summary>

-        /// Invokes subscribed handlers for ZooKeeeper data deletes event

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        public void OnDataDeleted(ZooKeeperDataChangedEventArgs e)

-        {

-            var handlers = this.dataDeleted;

-            if (handlers == null)

-            {

-                return;

-            }

-

-            foreach (var handler in handlers.GetInvocationList())

-            {

-                this.logger.Debug(e + " sent to " + handler.Target);

-            }

-

-            handlers(e);

-        }

-

-        /// <summary>

-        /// Gets the total count of subscribed handlers

-        /// </summary>

-        public int TotalCount

-        {

-            get

-            {

-                return (this.dataChanged != null ? this.dataChanged.GetInvocationList().Count() : 0) +

-                    (this.dataDeleted != null ? this.dataDeleted.GetInvocationList().Count() : 0);

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperChildChangedEventArgs.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperChildChangedEventArgs.cs
deleted file mode 100644
index 89c2df4..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperChildChangedEventArgs.cs
+++ /dev/null
@@ -1,60 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Events

-{

-    using System.Collections.Generic;

-

-    /// <summary>

-    /// Contains znode children changed event data

-    /// </summary>

-    internal class ZooKeeperChildChangedEventArgs : ZooKeeperEventArgs

-    {

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperChildChangedEventArgs"/> class.

-        /// </summary>

-        /// <param name="path">

-        /// The path.

-        /// </param>

-        public ZooKeeperChildChangedEventArgs(string path)

-            : base("Children of " + path + " changed")

-        {

-            this.Path = path;

-        }

-

-        /// <summary>

-        /// Gets the znode path

-        /// </summary>

-        public string Path { get; private set; }

-

-        /// <summary>

-        /// Gets or sets the current znode children

-        /// </summary>

-        public IList<string> Children { get; set; }

-

-        /// <summary>

-        /// Gets the current event type

-        /// </summary>

-        public override ZooKeeperEventTypes Type

-        {

-            get

-            {

-                return ZooKeeperEventTypes.ChildChanged;

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperDataChangedEventArgs.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperDataChangedEventArgs.cs
deleted file mode 100644
index e8c6651..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperDataChangedEventArgs.cs
+++ /dev/null
@@ -1,88 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Events

-{

-    /// <summary>

-    /// Contains znode data changed event data

-    /// </summary>

-    internal class ZooKeeperDataChangedEventArgs : ZooKeeperEventArgs

-    {

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperDataChangedEventArgs"/> class.

-        /// </summary>

-        /// <param name="path">

-        /// The znode path.

-        /// </param>

-        public ZooKeeperDataChangedEventArgs(string path)

-            : base("Data of " + path + " changed")

-        {

-            this.Path = path;

-        }

-

-        /// <summary>

-        /// Gets the znode path

-        /// </summary>

-        public string Path { get; private set; }

-

-        /// <summary>

-        /// Gets or sets znode changed data.

-        /// </summary>

-        /// <remarks>

-        /// Null if data was deleted.

-        /// </remarks>

-        public string Data { get; set; }

-

-        /// <summary>

-        /// Gets the event type.

-        /// </summary>

-        public override ZooKeeperEventTypes Type

-        {

-            get

-            {

-                return ZooKeeperEventTypes.DataChanged;

-            }

-        }

-

-        /// <summary>

-        /// Gets a value indicating whether data was deleted

-        /// </summary>

-        public bool DataDeleted

-        {

-            get

-            {

-                return string.IsNullOrEmpty(this.Data);

-            }

-        }

-

-        /// <summary>

-        /// Gets string representation of event data

-        /// </summary>

-        /// <returns>

-        /// String representation of event data

-        /// </returns>

-        public override string ToString()

-        {

-            if (this.DataDeleted)

-            {

-                return base.ToString().Replace("changed", "deleted");

-            }

-

-            return base.ToString();

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperEventArgs.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperEventArgs.cs
deleted file mode 100644
index 3e329cba..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperEventArgs.cs
+++ /dev/null
@@ -1,56 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Events

-{

-    using System;

-

-    /// <summary>

-    /// Base class for classes containing ZooKeeper event data

-    /// </summary>

-    internal abstract class ZooKeeperEventArgs : EventArgs

-    {

-        private readonly string description;

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperEventArgs"/> class.

-        /// </summary>

-        /// <param name="description">

-        /// The event description.

-        /// </param>

-        protected ZooKeeperEventArgs(string description)

-        {

-            this.description = description;

-        }

-

-        /// <summary>

-        /// Gets string representation of event data

-        /// </summary>

-        /// <returns>

-        /// String representation of event data

-        /// </returns>

-        public override string ToString()

-        {

-            return "ZooKeeperEvent[" + this.description + "]";

-        }

-

-        /// <summary>

-        /// Gets the event type.

-        /// </summary>

-        public abstract ZooKeeperEventTypes Type { get; }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperEventTypes.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperEventTypes.cs
deleted file mode 100644
index 8ee8025..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperEventTypes.cs
+++ /dev/null
@@ -1,35 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Events

-{

-    /// <summary>

-    /// Event types

-    /// </summary>

-    internal enum ZooKeeperEventTypes

-    {

-        Unknow = 0,

-

-        StateChanged = 1,

-

-        SessionCreated = 2,

-

-        ChildChanged = 3,

-

-        DataChanged = 4,

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperSessionCreatedEventArgs.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperSessionCreatedEventArgs.cs
deleted file mode 100644
index 0419679..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperSessionCreatedEventArgs.cs
+++ /dev/null
@@ -1,46 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Events

-{

-    /// <summary>

-    /// Contains ZooKeeper session created event data

-    /// </summary>

-    internal class ZooKeeperSessionCreatedEventArgs : ZooKeeperEventArgs

-    {

-        public static new readonly ZooKeeperSessionCreatedEventArgs Empty = new ZooKeeperSessionCreatedEventArgs();

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperSessionCreatedEventArgs"/> class.

-        /// </summary>

-        protected ZooKeeperSessionCreatedEventArgs()

-            : base("New session created")

-        {

-        }

-

-        /// <summary>

-        /// Gets the event type.

-        /// </summary>

-        public override ZooKeeperEventTypes Type

-        {

-            get

-            {

-                return ZooKeeperEventTypes.SessionCreated;

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperStateChangedEventArgs.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperStateChangedEventArgs.cs
deleted file mode 100644
index 7228fc0..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Events/ZooKeeperStateChangedEventArgs.cs
+++ /dev/null
@@ -1,55 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Events

-{

-    using ZooKeeperNet;

-

-    /// <summary>

-    /// Contains ZooKeeper session state changed event data

-    /// </summary>

-    internal class ZooKeeperStateChangedEventArgs : ZooKeeperEventArgs

-    {

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperStateChangedEventArgs"/> class.

-        /// </summary>

-        /// <param name="state">

-        /// The current ZooKeeper state.

-        /// </param>

-        public ZooKeeperStateChangedEventArgs(KeeperState state)

-            : base("State changed to " + state)

-        {

-            this.State = state;

-        }

-

-        /// <summary>

-        /// Gets current ZooKeeper state

-        /// </summary>

-        public KeeperState State { get; private set; }

-

-        /// <summary>

-        /// Gets the event type.

-        /// </summary>

-        public override ZooKeeperEventTypes Type

-        {

-            get

-            {

-                return ZooKeeperEventTypes.StateChanged;

-            }

-        }

-    }

-}
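
Note: the event classes removed above all share one shape: an abstract ZooKeeperEventArgs base that stores a description and exposes an abstract Type property, plus one concrete subclass per ZooKeeperEventTypes value. The following is a minimal, self-contained sketch of that shape; it mirrors the deleted names but is not part of the client, and the demo entry point is illustrative only.

    using System;

    // Simplified stand-in for the deleted event-type enum.
    internal enum ZooKeeperEventTypes
    {
        Unknown = 0,
        StateChanged = 1,
        SessionCreated = 2
    }

    // Simplified stand-in for the deleted abstract base: a description plus an abstract Type.
    internal abstract class ZooKeeperEventArgs : EventArgs
    {
        private readonly string description;

        protected ZooKeeperEventArgs(string description)
        {
            this.description = description;
        }

        public abstract ZooKeeperEventTypes Type { get; }

        public override string ToString()
        {
            return "ZooKeeperEvent[" + this.description + "]";
        }
    }

    // One concrete event per enum value; this one mirrors ZooKeeperSessionCreatedEventArgs.
    internal class SessionCreatedEventArgs : ZooKeeperEventArgs
    {
        public static readonly SessionCreatedEventArgs Empty = new SessionCreatedEventArgs();

        private SessionCreatedEventArgs()
            : base("New session created")
        {
        }

        public override ZooKeeperEventTypes Type
        {
            get { return ZooKeeperEventTypes.SessionCreated; }
        }
    }

    internal static class EventArgsDemo
    {
        private static void Main()
        {
            ZooKeeperEventArgs e = SessionCreatedEventArgs.Empty;
            // Prints: ZooKeeperEvent[New session created] (SessionCreated)
            Console.WriteLine(e + " (" + e.Type + ")");
        }
    }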

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperClient.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperClient.cs
deleted file mode 100644
index 8a14ac6..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperClient.cs
+++ /dev/null
@@ -1,469 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration

-{

-    using System;

-    using System.Collections.Generic;

-    using Kafka.Client.ZooKeeperIntegration.Listeners;

-    using Org.Apache.Zookeeper.Data;

-    using ZooKeeperNet;

-

-    /// <summary>

-    /// Abstracts the interaction with zookeeper

-    /// </summary>

-    internal interface IZooKeeperClient : IWatcher, IDisposable

-    {

-        /// <summary>

-        /// Gets time (in milliseconds) of event thread idleness

-        /// </summary>

-        /// <remarks>

-        /// Used for testing purposes

-        /// </remarks>

-        int? IdleTime { get; }

-

-        /// <summary>

-        /// Connects to ZooKeeper server within given time period and installs watcher in ZooKeeper

-        /// </summary>

-        void Connect();

-

-        /// <summary>

-        /// Closes current connection to ZooKeeper

-        /// </summary>

-        void Disconnect();

-

-        /// <summary>

-        /// Re-connect to ZooKeeper server when session expired

-        /// </summary>

-        /// <param name="servers">

-        /// The servers.

-        /// </param>

-        /// <param name="connectionTimeout">

-        /// The connection timeout.

-        /// </param>

-        void Reconnect(string servers, int connectionTimeout);

-

-        /// <summary>

-        /// Waits until ZooKeeper connection is established

-        /// </summary>

-        /// <param name="connectionTimeout">

-        /// The connection timeout.

-        /// </param>

-        /// <returns>

-        /// Status

-        /// </returns>

-        bool WaitUntilConnected(int connectionTimeout);

-

-        /// <summary>

-        /// Retries given delegate until connection is established

-        /// </summary>

-        /// <param name="callback">

-        /// The delegate to invoke.

-        /// </param>

-        /// <typeparam name="T">

-        /// Type of data returned by delegate 

-        /// </typeparam>

-        /// <returns>

-        /// data returned by delegate

-        /// </returns>

-        T RetryUntilConnected<T>(Func<T> callback);

-

-        /// <summary>

-        /// Subscribes listeners on ZooKeeper state change events

-        /// </summary>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        void Subscribe(IZooKeeperStateListener listener);

-

-        /// <summary>

-        /// Un-subscribes listeners on ZooKeeper state change events

-        /// </summary>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        void Unsubscribe(IZooKeeperStateListener listener);

-

-        /// <summary>

-        /// Subscribes listeners on ZooKeeper child changes under given path

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        void Subscribe(string path, IZooKeeperChildListener listener);

-

-        /// <summary>

-        /// Un-subscribes listeners on ZooKeeper child changes under given path

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        void Unsubscribe(string path, IZooKeeperChildListener listener);

-

-        /// <summary>

-        /// Subscribes listeners on ZooKeeper data changes under given path

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        void Subscribe(string path, IZooKeeperDataListener listener);

-

-        /// <summary>

-        /// Un-subscribes listeners on ZooKeeper data changes under given path

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        void Unsubscribe(string path, IZooKeeperDataListener listener);

-

-        /// <summary>

-        /// Un-subscribes all listeners

-        /// </summary>

-        void UnsubscribeAll();

-

-        /// <summary>

-        /// Installs a child watch for the given path. 

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <returns>

-        /// the current children of the path or null if the znode with the given path doesn't exist

-        /// </returns>

-        IList<string> WatchForChilds(string path);

-

-        /// <summary>

-        /// Installs a data watch for the given path. 

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        void WatchForData(string path);

-

-        /// <summary>

-        /// Checks whether znode for a given path exists

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Result of check

-        /// </returns>

-        bool Exists(string path);

-

-        /// <summary>

-        /// Checks whether znode for a given path exists.

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether the watcher should be reinstalled in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Result of check

-        /// </returns>

-        bool Exists(string path, bool watch);

-

-        /// <summary>

-        /// Gets all children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Children

-        /// </returns>

-        IList<string> GetChildren(string path);

-

-        /// <summary>

-        /// Gets all children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether the watcher should be reinstalled in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Children

-        /// </returns>

-        IList<string> GetChildren(string path, bool watch);

-

-        /// <summary>

-        /// Counts number of children for a given path.

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Number of children 

-        /// </returns>

-        int CountChildren(string path);

-

-        /// <summary>

-        /// Fetches data from a given path in ZooKeeper

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="stats">

-        /// The statistics.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether the watcher should be reinstalled in ZooKeeper.

-        /// </param>

-        /// <typeparam name="T">

-        /// Expected type of data

-        /// </typeparam>

-        /// <returns>

-        /// Data

-        /// </returns>

-        T ReadData<T>(string path, Stat stats, bool watch)

-            where T : class;

-

-        /// <summary>

-        /// Fetches data from a given path in ZooKeeper

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="stats">

-        /// The statistics.

-        /// </param>

-        /// <typeparam name="T">

-        /// Expected type of data

-        /// </typeparam>

-        /// <returns>

-        /// Data

-        /// </returns>

-        T ReadData<T>(string path, Stat stats)

-            where T : class;

-

-        /// <summary>

-        /// Fetches data from a given path in ZooKeeper

-        /// </summary>

-        /// <typeparam name="T">

-        /// Expected type of data

-        /// </typeparam>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Data or null, if znode does not exist

-        /// </returns>

-        T ReadData<T>(string path)

-            where T : class;

-

-        /// <summary>

-        /// Fetches data for given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="returnNullIfPathNotExists">

-        /// Indicates whether to return null or throw an exception when 

-        /// znode doesn't exist

-        /// </param>

-        /// <typeparam name="T">

-        /// Expected type of data

-        /// </typeparam>

-        /// <returns>

-        /// Data

-        /// </returns>

-        T ReadData<T>(string path, bool returnNullIfPathNotExists)

-            where T : class;

-

-        /// <summary>

-        /// Writes data for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        void WriteData(string path, object data);

-

-        /// <summary>

-        /// Writes data for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <param name="expectedVersion">

-        /// Expected version of data

-        /// </param>

-        void WriteData(string path, object data, int expectedVersion);

-

-        /// <summary>

-        /// Deletes znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Status

-        /// </returns>

-        bool Delete(string path);

-

-        /// <summary>

-        /// Deletes znode and its children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Status

-        /// </returns>

-        bool DeleteRecursive(string path);

-

-        /// <summary>

-        /// Creates persistent znode and all intermediate znodes (if do not exist) for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        void MakeSurePersistentPathExists(string path);

-

-        /// <summary>

-        /// Fetches children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The path.

-        /// </param>

-        /// <returns>

-        /// Children or null, if znode does not exist

-        /// </returns>

-        IList<string> GetChildrenParentMayNotExist(string path);

-

-        /// <summary>

-        /// Creates a persistent znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="createParents">

-        /// Indicates whether should create all intermediate znodes

-        /// </param>

-        /// <remarks>

-        /// Persistent znodes won't disappear after session close

-        /// Doesn't re-create missing intermediate znodes

-        /// </remarks>

-        void CreatePersistent(string path, bool createParents);

-

-        /// <summary>

-        /// Creates a persistent znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <remarks>

-        /// Persistent znodes won't disappear after session close

-        /// Doesn't re-create missing intermediate znodes

-        /// </remarks>

-        void CreatePersistent(string path);

-

-        /// <summary>

-        /// Creates a persistent znode for a given path and writes data into it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <remarks>

-        /// Persistent znodes won't disappear after session close

-        /// </remarks>

-        void CreatePersistent(string path, object data);

-

-        /// <summary>

-        /// Creates a sequential, persistent znode for a given path and writes data into it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <remarks>

-        /// Persistent znodes won't disappear after session close

-        /// </remarks>

-        /// <returns>

-        /// The created znode's path

-        /// </returns>

-        string CreatePersistentSequential(string path, object data);

-

-        /// <summary>

-        /// Creates an ephemeral znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <remarks>

-        /// Ephemeral znodes will disappear after session close

-        /// </remarks>

-        void CreateEphemeral(string path);

-

-        /// <summary>

-        /// Creates an ephemeral znode for a given path and writes data into it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <remarks>

-        /// Ephemeral znodes will disappear after session close

-        /// </remarks>

-        void CreateEphemeral(string path, object data);

-

-        /// <summary>

-        /// Creates an ephemeral, sequential znode for a given path and writes data into it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <remarks>

-        /// Ephemeral znodes will disappear after session close

-        /// </remarks>

-        /// <returns>

-        /// Created znode's path

-        /// </returns>

-        string CreateEphemeralSequential(string path, object data);

-    }

-}
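
Note: RetryUntilConnected<T> above is the hook most of the other members lean on; callers wrap a single ZooKeeper operation in a delegate and the client re-runs it until the connection is back (or a deadline passes). A rough, self-contained sketch of that retry loop follows; the exception type, the back-off delay, and the deadline handling here are placeholder assumptions, not the deleted implementation.

    using System;
    using System.Threading;

    internal static class RetrySketch
    {
        // Placeholder for "the operation failed because the connection dropped".
        private sealed class ConnectionLossException : Exception
        {
        }

        // Keep invoking the callback until it succeeds or the deadline passes.
        public static T RetryUntilConnected<T>(Func<T> callback, TimeSpan timeout)
        {
            DateTime deadline = DateTime.UtcNow + timeout;
            while (true)
            {
                try
                {
                    return callback();
                }
                catch (ConnectionLossException)
                {
                    if (DateTime.UtcNow >= deadline)
                    {
                        throw;
                    }

                    Thread.Sleep(100);   // brief back-off before the next attempt
                }
            }
        }

        private static void Main()
        {
            int attempts = 0;
            string value = RetryUntilConnected<string>(
                () =>
                {
                    // Simulate two connection losses before the read succeeds.
                    if (++attempts < 3)
                    {
                        throw new ConnectionLossException();
                    }

                    return "payload";
                },
                TimeSpan.FromSeconds(5));
            Console.WriteLine(value + " read after " + attempts + " attempts");
        }
    }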

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperConnection.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperConnection.cs
deleted file mode 100644
index 3252856..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperConnection.cs
+++ /dev/null
@@ -1,164 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration

-{

-    using System;

-    using System.Collections.Generic;

-    using Org.Apache.Zookeeper.Data;

-    using ZooKeeperNet;

-

-    /// <summary>

-    /// Abstracts connection with ZooKeeper server

-    /// </summary>

-    internal interface IZooKeeperConnection : IDisposable

-    {

-        /// <summary>

-        /// Gets the ZooKeeper client state

-        /// </summary>

-        ZooKeeper.States ClientState { get; }

-

-        /// <summary>

-        /// Gets the list of ZooKeeper servers.

-        /// </summary>

-        string Servers { get; }

-

-        /// <summary>

-        /// Gets the ZooKeeper session timeout

-        /// </summary>

-        int SessionTimeout { get; }

-

-        /// <summary>

-        /// Gets ZooKeeper client.

-        /// </summary>

-        ZooKeeper Client { get; }

-

-        /// <summary>

-        /// Connects to ZooKeeper server

-        /// </summary>

-        /// <param name="watcher">

-        /// The watcher to be installed in ZooKeeper.

-        /// </param>

-        void Connect(IWatcher watcher);

-

-        /// <summary>

-        /// Creates znode using given create mode for given path and writes given data to it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <param name="mode">

-        /// The create mode.

-        /// </param>

-        /// <returns>

-        /// The created znode's path

-        /// </returns>

-        string Create(string path, byte[] data, CreateMode mode);

-

-        /// <summary>

-        /// Deletes znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        void Delete(string path);

-

-        /// <summary>

-        /// Checks whether znode for a given path exists.

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether should reinstall watcher in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Result of check

-        /// </returns>

-        bool Exists(string path, bool watch);

-

-        /// <summary>

-        /// Gets all children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether should reinstall watcher in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Children

-        /// </returns>

-        IList<string> GetChildren(string path, bool watch);

-

-        /// <summary>

-        /// Fetches data from a given path in ZooKeeper

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="stats">

-        /// The statistics.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether should reinstall watcher in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Data

-        /// </returns>

-        byte[] ReadData(string path, Stat stats, bool watch);

-

-        /// <summary>

-        /// Writes data for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        void WriteData(string path, byte[] data);

-

-        /// <summary>

-        /// Writes data for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <param name="version">

-        /// Expected version of data

-        /// </param>

-        void WriteData(string path, byte[] data, int version);

-

-        /// <summary>

-        /// Gets time when connection was created

-        /// </summary>

-        /// <param name="path">

-        /// The path.

-        /// </param>

-        /// <returns>

-        /// Connection creation time

-        /// </returns>

-        long GetCreateTime(string path);

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperSerializer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperSerializer.cs
deleted file mode 100644
index 3340a94..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/IZooKeeperSerializer.cs
+++ /dev/null
@@ -1,48 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration

-{

-    /// <summary>

-    /// ZooKeeper is able to store data in the form of byte arrays. This interface is a bridge between that byte-array format

-    /// and higher-level objects.

-    /// </summary>

-    internal interface IZooKeeperSerializer

-    {

-        /// <summary>

-        /// Serializes data

-        /// </summary>

-        /// <param name="obj">

-        /// The data to serialize

-        /// </param>

-        /// <returns>

-        /// Serialized data

-        /// </returns>

-        byte[] Serialize(object obj);

-

-        /// <summary>

-        /// Deserializes data

-        /// </summary>

-        /// <param name="bytes">

-        /// The serialized data

-        /// </param>

-        /// <returns>

-        /// The deserialized data

-        /// </returns>

-        object Deserialize(byte[] bytes);

-    }

-}
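
Note: the listeners deleted below read broker and partition registrations with ReadData<string>, so in practice this serializer mostly round-trips UTF-8 strings. A minimal, hypothetical implementation against the same interface shape (not the serializer that shipped with the client) could look like this:

    using System.Text;

    internal interface IZooKeeperSerializer
    {
        byte[] Serialize(object obj);

        object Deserialize(byte[] bytes);
    }

    // Hypothetical serializer: every payload is written and read back as a UTF-8 string.
    internal class StringSerializer : IZooKeeperSerializer
    {
        public byte[] Serialize(object obj)
        {
            return obj == null ? null : Encoding.UTF8.GetBytes(obj.ToString());
        }

        public object Deserialize(byte[] bytes)
        {
            return bytes == null ? null : Encoding.UTF8.GetString(bytes);
        }
    }

    internal static class SerializerDemo
    {
        private static void Main()
        {
            var serializer = new StringSerializer();
            byte[] raw = serializer.Serialize("some znode payload");
            System.Console.WriteLine(serializer.Deserialize(raw));
        }
    }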

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/BrokerTopicsListener.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/BrokerTopicsListener.cs
deleted file mode 100644
index 50a8afc..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/BrokerTopicsListener.cs
+++ /dev/null
@@ -1,257 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Listeners

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Linq;

-    using System.Reflection;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration.Events;

-    using log4net;

-

-    /// <summary>

-    /// Listens to new broker registrations under a particular topic in ZooKeeper and

-    /// keeps the related data structures updated

-    /// </summary>

-    internal class BrokerTopicsListener : IZooKeeperChildListener

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        private readonly IDictionary<int, Broker> actualBrokerIdMap;

-        private readonly Action<int, string, int> callback;

-        private readonly IDictionary<string, SortedSet<Partition>> actualBrokerTopicsPartitionsMap;

-        private IDictionary<int, Broker> oldBrokerIdMap;

-        private IDictionary<string, SortedSet<Partition>> oldBrokerTopicsPartitionsMap;

-        private readonly IZooKeeperClient zkclient;

-        private readonly object syncLock = new object();

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="BrokerTopicsListener"/> class.

-        /// </summary>

-        /// <param name="zkclient">The wrapper on ZooKeeper client.</param>

-        /// <param name="actualBrokerTopicsPartitionsMap">The actual broker topics partitions map.</param>

-        /// <param name="actualBrokerIdMap">The actual broker id map.</param>

-        /// <param name="callback">The callback invoked after new broker is added.</param>

-        public BrokerTopicsListener(

-            IZooKeeperClient zkclient,

-            IDictionary<string, SortedSet<Partition>> actualBrokerTopicsPartitionsMap, 

-            IDictionary<int, Broker> actualBrokerIdMap, 

-            Action<int, string, int> callback)

-        {

-            this.zkclient = zkclient;

-            this.actualBrokerTopicsPartitionsMap = actualBrokerTopicsPartitionsMap;

-            this.actualBrokerIdMap = actualBrokerIdMap;

-            this.callback = callback;

-            this.oldBrokerIdMap = new Dictionary<int, Broker>(this.actualBrokerIdMap);

-            this.oldBrokerTopicsPartitionsMap = new Dictionary<string, SortedSet<Partition>>(this.actualBrokerTopicsPartitionsMap);

-            Logger.Debug("Creating broker topics listener to watch the following paths - \n"

-                + "/broker/topics, /broker/topics/topic, /broker/ids");

-            Logger.Debug("Initialized this broker topics listener with initial mapping of broker id to "

-                + "partition id per topic with " + this.oldBrokerTopicsPartitionsMap.ToMultiString(

-                    x => x.Key + " --> " + x.Value.ToMultiString(y => y.ToString(), ","), "; "));

-        }

-

-        /// <summary>

-        /// Called when the children of the given path changed

-        /// </summary>

-        /// <param name="e">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperChildChangedEventArgs"/> instance containing the event data

-        /// as parent path and children (null if parent was deleted).

-        /// </param>

-        public void HandleChildChange(ZooKeeperChildChangedEventArgs e)

-        {

-            Guard.NotNull(e, "e");

-            Guard.NotNullNorEmpty(e.Path, "e.Path");

-            Guard.NotNull(e.Children, "e.Children");

-

-            lock (this.syncLock)

-            {

-                try

-                {

-                    string path = e.Path;

-                    IList<string> childs = e.Children;

-                    Logger.Debug("Watcher fired for path: " + path);

-                    switch (path)

-                    {

-                        case ZooKeeperClient.DefaultBrokerTopicsPath:

-                            List<string> oldTopics = this.oldBrokerTopicsPartitionsMap.Keys.ToList();

-                            List<string> newTopics = childs.Except(oldTopics).ToList();

-                            Logger.Debug("List of topics was changed at " + e.Path);

-                            Logger.Debug("Current topics -> " + e.Children.ToMultiString(","));

-                            Logger.Debug("Old list of topics -> " + oldTopics.ToMultiString(","));

-                            Logger.Debug("List of newly registered topics -> " + newTopics.ToMultiString(","));

-                            foreach (var newTopic in newTopics)

-                            {

-                                string brokerTopicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + newTopic;

-                                IList<string> brokerList = this.zkclient.GetChildrenParentMayNotExist(brokerTopicPath);

-                                this.ProcessNewBrokerInExistingTopic(newTopic, brokerList);

-                                this.zkclient.Subscribe(ZooKeeperClient.DefaultBrokerTopicsPath + "/" + newTopic, this);

-                            }

-

-                            break;

-                        case ZooKeeperClient.DefaultBrokerIdsPath:

-                            Logger.Debug("List of brokers changed in the Kafka cluster " + e.Path);

-                            Logger.Debug("Currently registered list of brokers -> " + e.Children.ToMultiString(","));

-                            this.ProcessBrokerChange(path, childs);

-                            break;

-                        default:

-                            string[] parts = path.Split('/');

-                            string topic = parts.Last();

-                            if (parts.Length == 4 && parts[2] == "topics" && childs != null)

-                            {

-                                Logger.Debug("List of brokers changed at " + path);

-                                Logger.Debug(

-                                    "Currently registered list of brokers for topic " + topic + " -> " +

-                                    childs.ToMultiString(","));

-                                this.ProcessNewBrokerInExistingTopic(topic, childs);

-                            }

-

-                            break;

-                    }

-

-                    this.oldBrokerTopicsPartitionsMap = this.actualBrokerTopicsPartitionsMap;

-                    this.oldBrokerIdMap = this.actualBrokerIdMap;

-                }

-                catch (Exception exc)

-                {

-                    Logger.Debug("Error while handling " + e, exc);

-                }

-            }

-        }

-

-        /// <summary>

-        /// Resets the state of listener.

-        /// </summary>

-        public void ResetState()

-        {

-            Logger.Debug("Before reseting broker topic partitions state -> " 

-                + this.oldBrokerTopicsPartitionsMap.ToMultiString(

-                x => x.Key + " --> " + x.Value.ToMultiString(y => y.ToString(), ","), "; "));

-            this.oldBrokerTopicsPartitionsMap = actualBrokerTopicsPartitionsMap;

-            Logger.Debug("After reseting broker topic partitions state -> "

-                + this.oldBrokerTopicsPartitionsMap.ToMultiString(

-                x => x.Key + " --> " + x.Value.ToMultiString(y => y.ToString(), ","), "; "));

-            Logger.Debug("Before reseting broker id map state -> "

-                + this.oldBrokerIdMap.ToMultiString(", "));

-            this.oldBrokerIdMap = this.actualBrokerIdMap;

-            Logger.Debug("After reseting broker id map state -> "

-                + this.oldBrokerIdMap.ToMultiString(", "));

-        }

-

-        /// <summary>

-        /// Generate the updated mapping of (brokerId, numPartitions) for the new list of brokers

-        /// registered under some topic.

-        /// </summary>

-        /// <param name="topic">The path of the topic under which the brokers have changed..</param>

-        /// <param name="childs">The list of changed brokers.</param>

-        private void ProcessNewBrokerInExistingTopic(string topic, IEnumerable<string> childs)

-        {

-            if (this.actualBrokerTopicsPartitionsMap.ContainsKey(topic))

-            {

-                Logger.Debug("Old list of brokers -> " + this.oldBrokerTopicsPartitionsMap[topic].ToMultiString(x => x.BrokerId.ToString(), ","));

-            }

-

-            var updatedBrokers = new SortedSet<int>(childs.Select(x => int.Parse(x, CultureInfo.InvariantCulture)));

-            string brokerTopicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + topic;

-            var sortedBrokerPartitions = new SortedDictionary<int, int>();

-            foreach (var bid in updatedBrokers)

-            {

-                var num = this.zkclient.ReadData<string>(brokerTopicPath + "/" + bid);

-                sortedBrokerPartitions.Add(bid, int.Parse(num, CultureInfo.InvariantCulture));

-            }

-

-            var updatedBrokerParts = new SortedSet<Partition>();

-            foreach (var bp in sortedBrokerPartitions)

-            {

-                for (int i = 0; i < bp.Value; i++)

-                {

-                    var bidPid = new Partition(bp.Key, i);

-                    updatedBrokerParts.Add(bidPid);

-                }

-            }

-

-            Logger.Debug("Currently registered list of brokers for topic " + topic + " -> " + childs.ToMultiString(", "));

-            SortedSet<Partition> mergedBrokerParts = updatedBrokerParts;

-            if (this.actualBrokerTopicsPartitionsMap.ContainsKey(topic))

-            {

-                SortedSet<Partition> oldBrokerParts = this.actualBrokerTopicsPartitionsMap[topic];

-                Logger.Debug(

-                    "Unregistered list of brokers for topic " + topic + " -> " + oldBrokerParts.ToMultiString(", "));

-                foreach (var oldBrokerPart in oldBrokerParts)

-                {

-                    mergedBrokerParts.Add(oldBrokerPart);

-                }

-            }

-            else

-            {

-                this.actualBrokerTopicsPartitionsMap.Add(topic, null);

-            }

-

-            this.actualBrokerTopicsPartitionsMap[topic] = new SortedSet<Partition>(mergedBrokerParts.Where(x => this.actualBrokerIdMap.ContainsKey(x.BrokerId)));

-        }

-

-        /// <summary>

-        /// Processes change in the broker lists.

-        /// </summary>

-        /// <param name="path">The parent path of brokers list.</param>

-        /// <param name="childs">The current brokers.</param>

-        private void ProcessBrokerChange(string path, IEnumerable<string> childs)

-        {

-            if (path != ZooKeeperClient.DefaultBrokerIdsPath)

-            {

-                return;

-            }

-

-            List<int> updatedBrokers = childs.Select(x => int.Parse(x, CultureInfo.InvariantCulture)).ToList();

-            List<int> oldBrokers = this.oldBrokerIdMap.Select(x => x.Key).ToList();

-            List<int> newBrokers = updatedBrokers.Except(oldBrokers).ToList();

-            Logger.Debug("List of newly registered brokers -> " + newBrokers.ToMultiString(","));

-            foreach (int bid in newBrokers)

-            {

-                string brokerInfo = this.zkclient.ReadData<string>(ZooKeeperClient.DefaultBrokerIdsPath + "/" + bid);

-                string[] brokerHost = brokerInfo.Split(':');

-                var port = int.Parse(brokerHost[2], CultureInfo.InvariantCulture); 

-                this.actualBrokerIdMap.Add(bid, new Broker(bid, brokerHost[1], brokerHost[1], port));

-                if (this.callback != null)

-                {

-                    Logger.Debug("Invoking the callback for broker: " + bid);

-                    this.callback(bid, brokerHost[1], port);

-                }

-            }

-

-            List<int> deadBrokers = oldBrokers.Except(updatedBrokers).ToList();

-            Logger.Debug("Deleting broker ids for dead brokers -> " + deadBrokers.ToMultiString(","));

-            foreach (int bid in deadBrokers)

-            {

-                Logger.Debug("Deleting dead broker: " + bid);

-                this.actualBrokerIdMap.Remove(bid);

-                foreach (var topicMap in this.actualBrokerTopicsPartitionsMap)

-                {

-                    int affected = topicMap.Value.RemoveWhere(x => x.BrokerId == bid);

-                    if (affected > 0)

-                    {

-                        Logger.Debug("Removing dead broker " + bid + " for topic: " + topicMap.Key);

-                        Logger.Debug("Actual list of mapped brokers is -> " + topicMap.Value.ToMultiString(x => x.ToString(), ","));

-                    }

-                }

-            }

-        }

-    }

-}
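
Note: ProcessBrokerChange above splits each broker registration string on ':' and treats element 1 as the host and element 2 as the port, i.e. it expects a creatorId:host:port layout under the broker ids path. The standalone sketch below isolates just that parsing step; the sample registration value is illustrative and not taken from this diff.

    using System;
    using System.Globalization;

    internal static class BrokerRegistrationSketch
    {
        // Mirrors the Split(':') logic in the deleted ProcessBrokerChange:
        // element 1 is treated as the host and element 2 as the port.
        public static Tuple<int, string, int> Parse(int brokerId, string registration)
        {
            string[] parts = registration.Split(':');
            string host = parts[1];
            int port = int.Parse(parts[2], CultureInfo.InvariantCulture);
            return Tuple.Create(brokerId, host, port);
        }

        private static void Main()
        {
            // Illustrative registration value only; the creator-id prefix format is not shown in this diff.
            var broker = Parse(0, "creator-0:192.168.0.10:9092");
            Console.WriteLine("id=" + broker.Item1 + " host=" + broker.Item2 + " port=" + broker.Item3);
        }
    }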

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperChildListener.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperChildListener.cs
deleted file mode 100644
index 0448b57..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperChildListener.cs
+++ /dev/null
@@ -1,43 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Listeners

-{

-    using Kafka.Client.ZooKeeperIntegration.Events;

-

-    /// <summary>

-    /// Listener that can be registered for listening on ZooKeeper znode changes for a given path

-    /// </summary>

-    internal interface IZooKeeperChildListener

-    {

-        /// <summary>

-        /// Called when the children of the given path changed

-        /// </summary>

-        /// <param name="args">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperChildChangedEventArgs"/> instance containing the event data

-        /// as parent path and children (null if parent was deleted).

-        /// </param>

-        /// <remarks> 

-        /// http://zookeeper.wiki.sourceforge.net/ZooKeeperWatches

-        /// </remarks>

-        void HandleChildChange(ZooKeeperChildChangedEventArgs args);

-

-        /// <summary>

-        /// Resets the state of listener.

-        /// </summary>

-        void ResetState();

-    }

-}
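
Note: implementations of this interface (BrokerTopicsListener above, ZKRebalancerListener below) typically keep the previous child list and diff it against the list delivered with the event. The sketch below shows that pattern with simplified stand-ins for the event args and the interface so that it compiles on its own; it is not code from the client, and the sample path is illustrative.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical stand-ins for the listener contract, so the sketch is self-contained.
    internal class ZooKeeperChildChangedEventArgs
    {
        public string Path { get; set; }

        public IList<string> Children { get; set; }
    }

    internal interface IZooKeeperChildListener
    {
        void HandleChildChange(ZooKeeperChildChangedEventArgs args);

        void ResetState();
    }

    // Minimal listener that, like BrokerTopicsListener, diffs the previous child list against the new one.
    internal class LoggingChildListener : IZooKeeperChildListener
    {
        private IList<string> oldChildren = new List<string>();

        public void HandleChildChange(ZooKeeperChildChangedEventArgs args)
        {
            IList<string> current = args.Children ?? new List<string>();
            var added = current.Except(this.oldChildren).ToList();
            var removed = this.oldChildren.Except(current).ToList();
            Console.WriteLine(args.Path + ": added [" + string.Join(",", added) + "] removed [" + string.Join(",", removed) + "]");
            this.oldChildren = new List<string>(current);
        }

        public void ResetState()
        {
            this.oldChildren.Clear();
        }
    }

    internal static class ChildListenerDemo
    {
        private static void Main()
        {
            var listener = new LoggingChildListener();
            // Illustrative path and children only.
            listener.HandleChildChange(new ZooKeeperChildChangedEventArgs { Path = "/brokers/ids", Children = new List<string> { "0", "1" } });
            listener.HandleChildChange(new ZooKeeperChildChangedEventArgs { Path = "/brokers/ids", Children = new List<string> { "1", "2" } });
        }
    }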

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperDataListener.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperDataListener.cs
deleted file mode 100644
index 2e15ab4..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperDataListener.cs
+++ /dev/null
@@ -1,49 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Listeners

-{

-    using Kafka.Client.ZooKeeperIntegration.Events;

-

-    /// <summary>

-    /// Listener that can be registered for listening on ZooKeeper znode data changes for a given path

-    /// </summary>

-    internal interface IZooKeeperDataListener

-    {

-        /// <summary>

-        /// Called when the data of the given path changed

-        /// </summary>

-        /// <param name="args">The <see cref="ZooKeeperDataChangedEventArgs"/> instance containing the event data

-        /// as path and data.

-        /// </param>

-        /// <remarks> 

-        /// http://zookeeper.wiki.sourceforge.net/ZooKeeperWatches

-        /// </remarks>

-        void HandleDataChange(ZooKeeperDataChangedEventArgs args);

-

-        /// <summary>

-        /// Called when the data of the given path was deleted

-        /// </summary>

-        /// <param name="args">The <see cref="ZooKeeperDataChangedEventArgs"/> instance containing the event data

-        /// as path.

-        /// </param>

-        /// <remarks> 

-        /// http://zookeeper.wiki.sourceforge.net/ZooKeeperWatches

-        /// </remarks>

-        void HandleDataDelete(ZooKeeperDataChangedEventArgs args);

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperStateListener.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperStateListener.cs
deleted file mode 100644
index 8a46c1d..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/IZooKeeperStateListener.cs
+++ /dev/null
@@ -1,42 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Listeners

-{

-    using Kafka.Client.ZooKeeperIntegration.Events;

-

-    /// <summary>

-    /// Handles the session expiration event in ZooKeeper

-    /// </summary>

-    internal interface IZooKeeperStateListener

-    {

-        /// <summary>

-        /// Called when the ZooKeeper connection state has changed.

-        /// </summary>

-        /// <param name="args">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperStateChangedEventArgs"/> instance containing the event data.</param>

-        void HandleStateChanged(ZooKeeperStateChangedEventArgs args);

-

-        /// <summary>

-        /// Called after the ZooKeeper session has expired and a new session has been created.

-        /// </summary>

-        /// <param name="args">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperSessionCreatedEventArgs"/> instance containing the event data.</param>

-        /// <remarks>

-        /// You would have to re-create any ephemeral nodes here.

-        /// </remarks>

-        void HandleSessionCreated(ZooKeeperSessionCreatedEventArgs args);

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/ZKRebalancerListener.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/ZKRebalancerListener.cs
deleted file mode 100644
index db0b138..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/ZKRebalancerListener.cs
+++ /dev/null
@@ -1,368 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Listeners

-{

-    using System;

-    using System.Collections.Concurrent;

-    using System.Collections.Generic;

-    using System.Globalization;

-    using System.Linq;

-    using System.Reflection;

-    using System.Threading;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Exceptions;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration.Events;

-    using log4net;

-    using ZooKeeperNet;

-

-    internal class ZKRebalancerListener : IZooKeeperChildListener

-    {

-        private IDictionary<string, IList<string>> oldPartitionsPerTopicMap = new Dictionary<string, IList<string>>();

-

-        private IDictionary<string, IList<string>> oldConsumersPerTopicMap = new Dictionary<string, IList<string>>();

-

-        private readonly IDictionary<string, IDictionary<Partition, PartitionTopicInfo>> topicRegistry;

-

-        private readonly IDictionary<Tuple<string, string>, BlockingCollection<FetchedDataChunk>> queues;

-

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        private readonly string consumerIdString;

-

-        private readonly object syncLock;

-

-        private readonly ConsumerConfiguration config;

-

-        private readonly IZooKeeperClient zkClient;

-

-        private readonly ZKGroupDirs dirs;

-

-        private readonly Fetcher fetcher;

-

-        private readonly ZookeeperConsumerConnector zkConsumerConnector;

-

-        internal ZKRebalancerListener(

-            ConsumerConfiguration config,

-            string consumerIdString,

-            IDictionary<string, IDictionary<Partition, PartitionTopicInfo>> topicRegistry,

-            IZooKeeperClient zkClient,

-            ZookeeperConsumerConnector zkConsumerConnector,

-            IDictionary<Tuple<string, string>, BlockingCollection<FetchedDataChunk>> queues,

-            Fetcher fetcher,

-            object syncLock)

-        {

-            this.syncLock = syncLock;

-            this.consumerIdString = consumerIdString;

-            this.config = config;

-            this.topicRegistry = topicRegistry;

-            this.zkClient = zkClient;

-            this.dirs = new ZKGroupDirs(config.GroupId);

-            this.zkConsumerConnector = zkConsumerConnector;

-            this.queues = queues;

-            this.fetcher = fetcher;

-        }

-

-        public void SyncedRebalance()

-        {

-            lock (this.syncLock)

-            {

-                for (int i = 0; i < ZookeeperConsumerConnector.MaxNRetries; i++)

-                {

-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "begin rebalancing consumer {0} try #{1}", consumerIdString, i);

-                    bool done = false;

-                    try

-                    {

-                        done = this.Rebalance();

-                    }

-                    catch (Exception ex)

-                    {

-                        Logger.InfoFormat(CultureInfo.CurrentCulture, "exception during rebalance {0}", ex);

-                    }

-

-                    Logger.InfoFormat(CultureInfo.CurrentCulture, "end rebalancing consumer {0} try #{1}", consumerIdString, i);

-                    if (done)

-                    {

-                        return;

-                    }

-

-                    //// release all partitions, reset state and retry

-                    this.ReleasePartitionOwnership();

-                    this.ResetState();

-                    Thread.Sleep(config.ZooKeeper.ZkSyncTimeMs);

-                }

-            }

-

-            throw new ZKRebalancerException(string.Format(CultureInfo.CurrentCulture, "{0} can't rebalance after {1} retries", this.consumerIdString, ZookeeperConsumerConnector.MaxNRetries));

-        }

-

-        /// <summary>

-        /// Called when the children of the given path changed

-        /// </summary>

-        /// <param name="args">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperChildChangedEventArgs"/> instance containing the event data

-        /// as parent path and children (null if parent was deleted).

-        /// </param>

-        /// <remarks> 

-        /// http://zookeeper.wiki.sourceforge.net/ZooKeeperWatches

-        /// </remarks>

-        public void HandleChildChange(ZooKeeperChildChangedEventArgs args)

-        {

-            Guard.NotNull(args, "args");

-            Guard.NotNullNorEmpty(args.Path, "args.Path");

-            Guard.NotNull(args.Children, "args.Children");

-

-            SyncedRebalance();

-        }

-

-        /// <summary>

-        /// Resets the state of listener.

-        /// </summary>

-        public void ResetState()

-        {

-            this.topicRegistry.Clear();

-            this.oldConsumersPerTopicMap.Clear();

-            this.oldPartitionsPerTopicMap.Clear();

-        }

-

-        private bool Rebalance()

-        {

-            var myTopicThresdIdsMap = this.GetTopicCount(this.consumerIdString).GetConsumerThreadIdsPerTopic();

-            var cluster = new Cluster(zkClient);

-            var consumersPerTopicMap = this.GetConsumersPerTopic(this.config.GroupId);

-            var partitionsPerTopicMap = ZkUtils.GetPartitionsForTopics(this.zkClient, myTopicThresdIdsMap.Keys);

-            var relevantTopicThreadIdsMap = GetRelevantTopicMap(

-                myTopicThresdIdsMap,

-                partitionsPerTopicMap,

-                this.oldPartitionsPerTopicMap,

-                consumersPerTopicMap,

-                this.oldConsumersPerTopicMap);

-            if (relevantTopicThreadIdsMap.Count <= 0)

-            {

-                Logger.InfoFormat(CultureInfo.CurrentCulture, "Consumer {0} with {1} doesn't need to rebalance.", this.consumerIdString, consumersPerTopicMap);

-                return true;

-            }

-

-            Logger.Info("Committing all offsets");

-            this.zkConsumerConnector.CommitOffsets();

-

-            Logger.Info("Releasing parittion ownership");

-            this.ReleasePartitionOwnership();

-

-            var queuesToBeCleared = new List<BlockingCollection<FetchedDataChunk>>();

-            foreach (var item in relevantTopicThreadIdsMap)

-            {

-                this.topicRegistry.Remove(item.Key);

-                this.topicRegistry.Add(item.Key, new Dictionary<Partition, PartitionTopicInfo>());

-

-                var topicDirs = new ZKGroupTopicDirs(config.GroupId, item.Key);

-                var curConsumers = consumersPerTopicMap[item.Key];

-                var curPartitions = new List<string>(partitionsPerTopicMap[item.Key]);

-

-                var numberOfPartsPerConsumer = curPartitions.Count / curConsumers.Count;

-                var numberOfConsumersWithExtraPart = curPartitions.Count % curConsumers.Count;

-

-                Logger.InfoFormat(

-                    CultureInfo.CurrentCulture,

-                    "Consumer {0} rebalancing the following partitions: {1} for topic {2} with consumers: {3}",

-                    this.consumerIdString,

-                    string.Join(",", curPartitions),

-                    item.Key,

-                    string.Join(",", curConsumers));

-

-                foreach (string consumerThreadId in item.Value)

-                {

-                    var myConsumerPosition = curConsumers.IndexOf(consumerThreadId);

-                    if (myConsumerPosition < 0)

-                    {

-                        continue;

-                    }

-

-                    var startPart = (numberOfPartsPerConsumer * myConsumerPosition) +

-                                    Math.Min(myConsumerPosition, numberOfConsumersWithExtraPart);

-                    var numberOfParts = numberOfPartsPerConsumer + (myConsumerPosition + 1 > numberOfConsumersWithExtraPart ? 0 : 1);

-

-                    if (numberOfParts <= 0)

-                    {

-                        Logger.WarnFormat(CultureInfo.CurrentCulture, "No broker partitions consumed by consumer thread {0} for topic {1}", consumerThreadId, item.Key);

-                    }

-                    else

-                    {

-                        for (int i = startPart; i < startPart + numberOfParts; i++)

-                        {

-                            var partition = curPartitions[i];

-                            Logger.InfoFormat(CultureInfo.CurrentCulture, "{0} attempting to claim partition {1}", consumerThreadId, partition);

-                            if (!this.ProcessPartition(topicDirs, partition, item.Key, consumerThreadId))

-                            {

-                                return false;

-                            }

-                        }

-

-                        queuesToBeCleared.Add(queues[new Tuple<string, string>(item.Key, consumerThreadId)]);

-                    }

-                }

-            }

-

-            this.UpdateFetcher(cluster, queuesToBeCleared);

-            this.oldPartitionsPerTopicMap = partitionsPerTopicMap;

-            this.oldConsumersPerTopicMap = consumersPerTopicMap;

-            return true;

-        }
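A minimal, self-contained sketch of the range-assignment arithmetic used above (hypothetical values: five partitions shared by two consumer threads; not part of the client):

    using System;
    using System.Linq;

    static class RangeAssignmentSketch
    {
        // Mirrors the startPart / numberOfParts computation in Rebalance() above.
        static void Main()
        {
            int partitionCount = 5;                        // e.g. partitions 0..4
            string[] consumers = { "c1-0", "c1-1" };       // sorted consumer thread ids

            int partsPerConsumer = partitionCount / consumers.Length;        // 2
            int consumersWithExtraPart = partitionCount % consumers.Length;  // 1

            for (int position = 0; position < consumers.Length; position++)
            {
                int startPart = (partsPerConsumer * position) + Math.Min(position, consumersWithExtraPart);
                int numberOfParts = partsPerConsumer + (position + 1 > consumersWithExtraPart ? 0 : 1);
                Console.WriteLine(
                    "{0} owns partitions {1}",
                    consumers[position],
                    string.Join(",", Enumerable.Range(startPart, numberOfParts)));
            }
            // Expected output: c1-0 owns partitions 0,1,2 and c1-1 owns partitions 3,4
        }
    }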

-

-        private void UpdateFetcher(Cluster cluster, IEnumerable<BlockingCollection<FetchedDataChunk>> queuesToBeCleared)

-        {

-            var allPartitionInfos = new List<PartitionTopicInfo>();

-            foreach (var item in this.topicRegistry.Values)

-            {

-                foreach (var partitionTopicInfo in item.Values)

-                {

-                    allPartitionInfos.Add(partitionTopicInfo);

-                }

-            }

-

-            Logger.InfoFormat(

-                CultureInfo.CurrentCulture,

-                "Consumer {0} selected partitions: {1}",

-                this.consumerIdString,

-                string.Join(",", allPartitionInfos.OrderBy(x => x.Partition.Name).Select(y => y.Partition.Name)));

-            if (this.fetcher != null)

-            {

-                this.fetcher.InitConnections(allPartitionInfos, cluster, queuesToBeCleared);

-            }

-        }

-

-        private bool ProcessPartition(ZKGroupTopicDirs topicDirs, string partition, string topic, string consumerThreadId)

-        {

-            var partitionOwnerPath = topicDirs.ConsumerOwnerDir + "/" + partition;

-            try

-            {

-                ZkUtils.CreateEphemeralPathExpectConflict(zkClient, partitionOwnerPath, consumerThreadId);

-            }

-            catch (KeeperException.NodeExistsException)

-            {

-                //// The node hasn't been deleted by the original owner. So wait a bit and retry.

-                Logger.InfoFormat(CultureInfo.CurrentCulture, "waiting for the partition ownership to be deleted: {0}", partition);

-                return false;

-            }

-

-            AddPartitionTopicInfo(topicDirs, partition, topic, consumerThreadId);

-            return true;

-        }
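ProcessPartition fails fast while the previous owner's ephemeral node still exists, and the caller retries the whole rebalance; a minimal sketch of that claim-and-retry shape, assuming a hypothetical tryClaim delegate in place of the real ZooKeeper call:

    using System;
    using System.Threading;

    static class OwnershipRetrySketch
    {
        // tryClaim stands in for ProcessPartition: it returns false while the old
        // owner's ephemeral node is still present and true once the claim succeeds.
        static bool ClaimWithRetries(Func<bool> tryClaim, int maxRetries, TimeSpan backoff)
        {
            for (int attempt = 1; attempt <= maxRetries; attempt++)
            {
                if (tryClaim())
                {
                    return true;            // ownership znode created
                }

                Console.WriteLine("Attempt {0} failed; waiting for the old owner to release", attempt);
                Thread.Sleep(backoff);      // give ZooKeeper time to expire the old ephemeral node
            }

            return false;
        }

        static void Main()
        {
            int calls = 0;
            bool owned = ClaimWithRetries(() => ++calls >= 3, maxRetries: 5, backoff: TimeSpan.FromMilliseconds(10));
            Console.WriteLine("Owned after {0} attempts: {1}", calls, owned);
        }
    }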

-

-        private void AddPartitionTopicInfo(ZKGroupTopicDirs topicDirs, string partitionString, string topic, string consumerThreadId)

-        {

-            var partition = Partition.ParseFrom(partitionString);

-            var partTopicInfoMap = this.topicRegistry[topic];

-            var znode = topicDirs.ConsumerOffsetDir + "/" + partition.Name;

-            var offsetString = this.zkClient.ReadData<string>(znode, true);

-            long offset = string.IsNullOrEmpty(offsetString) ? 0 : long.Parse(offsetString, CultureInfo.InvariantCulture);

-            var queue = this.queues[new Tuple<string, string>(topic, consumerThreadId)];

-            var partTopicInfo = new PartitionTopicInfo(

-                topic,

-                partition.BrokerId,

-                partition,

-                queue,

-                offset,

-                offset,

-                this.config.FetchSize);

-            partTopicInfoMap.Add(partition, partTopicInfo);

-            if (Logger.IsDebugEnabled)

-            {

-                Logger.DebugFormat(CultureInfo.CurrentCulture, "{0} selected new offset {1}", partTopicInfo, offset);

-            }

-        }

-

-        private void ReleasePartitionOwnership()

-        {

-            foreach (KeyValuePair<string, IDictionary<Partition, PartitionTopicInfo>> item in topicRegistry)

-            {

-                var topicDirs = new ZKGroupTopicDirs(this.config.GroupId, item.Key);

-                foreach (var partition in item.Value.Keys)

-                {

-                    string znode = topicDirs.ConsumerOwnerDir + "/" + partition.Name;

-                    ZkUtils.DeletePath(zkClient, znode);

-                    if (Logger.IsDebugEnabled)

-                    {

-                        Logger.DebugFormat(CultureInfo.CurrentCulture, "Consumer {0} releasing {1}", this.consumerIdString, znode);

-                    }

-                }

-            }

-        }

-

-        private TopicCount GetTopicCount(string consumerId)

-        {

-            var topicCountJson = this.zkClient.ReadData<string>(this.dirs.ConsumerRegistryDir + "/" + consumerId);

-            return TopicCount.ConstructTopicCount(consumerId, topicCountJson);

-        }

-

-        private IDictionary<string, IList<string>> GetConsumersPerTopic(string group)

-        {

-            var consumers = this.zkClient.GetChildrenParentMayNotExist(this.dirs.ConsumerRegistryDir);

-            var consumersPerTopicMap = new Dictionary<string, IList<string>>();

-            foreach (var consumer in consumers)

-            {

-                TopicCount topicCount = GetTopicCount(consumer);

-                foreach (KeyValuePair<string, IList<string>> consumerThread in topicCount.GetConsumerThreadIdsPerTopic())

-                {

-                    foreach (string consumerThreadId in consumerThread.Value)

-                    {

-                        if (!consumersPerTopicMap.ContainsKey(consumerThread.Key))

-                        {

-                            consumersPerTopicMap.Add(consumerThread.Key, new List<string> { consumerThreadId });

-                        }

-                        else

-                        {

-                            consumersPerTopicMap[consumerThread.Key].Add(consumerThreadId);

-                        }

-                    }

-                }

-            }

-

-            foreach (KeyValuePair<string, IList<string>> item in consumersPerTopicMap)

-            {

-                ((List<string>)item.Value).Sort(); // sort in place: Sort() on a ToList() copy has no effect

-            }

-

-            return consumersPerTopicMap;

-        }
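GetConsumersPerTopic is essentially a group-by of consumer thread ids keyed by topic; a minimal LINQ sketch of the same shape, using hypothetical in-memory registrations instead of ZooKeeper reads:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class ConsumersPerTopicSketch
    {
        static void Main()
        {
            // (topic, consumerThreadId) pairs as they would be read from the consumer registry.
            var registrations = new[]
            {
                Tuple.Create("clicks", "c2-0"),
                Tuple.Create("clicks", "c1-0"),
                Tuple.Create("logs", "c1-0"),
            };

            IDictionary<string, IList<string>> consumersPerTopic = registrations
                .GroupBy(r => r.Item1)
                .ToDictionary(
                    g => g.Key,
                    g => (IList<string>)g.Select(r => r.Item2).OrderBy(id => id).ToList());

            foreach (var entry in consumersPerTopic)
            {
                Console.WriteLine("{0}: {1}", entry.Key, string.Join(",", entry.Value));
            }
            // clicks: c1-0,c2-0
            // logs: c1-0
        }
    }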

-

-        private static IDictionary<string, IList<string>> GetRelevantTopicMap(

-            IDictionary<string, IList<string>> myTopicThreadIdsMap,

-            IDictionary<string, IList<string>> newPartMap,

-            IDictionary<string, IList<string>> oldPartMap,

-            IDictionary<string, IList<string>> newConsumerMap,

-            IDictionary<string, IList<string>> oldConsumerMap)

-        {

-            var relevantTopicThreadIdsMap = new Dictionary<string, IList<string>>();

-            foreach (var myMap in myTopicThreadIdsMap)

-            {

-                var oldPartValue = oldPartMap.ContainsKey(myMap.Key) ? oldPartMap[myMap.Key] : null;

-                var newPartValue = newPartMap.ContainsKey(myMap.Key) ? newPartMap[myMap.Key] : null;

-                var oldConsumerValue = oldConsumerMap.ContainsKey(myMap.Key) ? oldConsumerMap[myMap.Key] : null;

-                var newConsumerValue = newConsumerMap.ContainsKey(myMap.Key) ? newConsumerMap[myMap.Key] : null;

-                if (oldPartValue != newPartValue || oldConsumerValue != newConsumerValue)

-                {

-                    relevantTopicThreadIdsMap.Add(myMap.Key, myMap.Value);

-                }

-            }

-

-            return relevantTopicThreadIdsMap;

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/ZKSessionExpireListener.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/ZKSessionExpireListener.cs
deleted file mode 100644
index b025671..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/Listeners/ZKSessionExpireListener.cs
+++ /dev/null
@@ -1,86 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration.Listeners

-{

-    using System;

-    using System.Globalization;

-    using System.Reflection;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration.Events;

-    using log4net;

-    using ZooKeeperNet;

-

-    internal class ZKSessionExpireListener : IZooKeeperStateListener

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        private readonly string consumerIdString;

-

-        private readonly ZKRebalancerListener loadBalancerListener;

-

-        private readonly ZookeeperConsumerConnector zkConsumerConnector;

-

-        private readonly ZKGroupDirs dirs;

-

-        private readonly TopicCount topicCount;

-

-        public ZKSessionExpireListener(ZKGroupDirs dirs, string consumerIdString, TopicCount topicCount, ZKRebalancerListener loadBalancerListener, ZookeeperConsumerConnector zkConsumerConnector)

-        {

-            this.consumerIdString = consumerIdString;

-            this.loadBalancerListener = loadBalancerListener;

-            this.zkConsumerConnector = zkConsumerConnector;

-            this.dirs = dirs;

-            this.topicCount = topicCount;

-        }

-

-        /// <summary>

-        /// Called when the ZooKeeper connection state has changed.

-        /// </summary>

-        /// <param name="args">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperStateChangedEventArgs"/> instance containing the event data.</param>

-        /// <remarks>

-        /// Do nothing, since the zkclient will reconnect for us.

-        /// </remarks>

-        public void HandleStateChanged(ZooKeeperStateChangedEventArgs args)

-        {

-            Guard.NotNull(args, "args");

-            Guard.Assert<ArgumentException>(() => args.State != KeeperState.Unknown);

-        }

-

-        /// <summary>

-        /// Called after the ZooKeeper session has expired and a new session has been created.

-        /// </summary>

-        /// <param name="args">The <see cref="Kafka.Client.ZooKeeperIntegration.Events.ZooKeeperSessionCreatedEventArgs"/> instance containing the event data.</param>

-        /// <remarks>

-        /// You would have to re-create any ephemeral nodes here.

-        /// Explicitly trigger load balancing for this consumer.

-        /// </remarks>

-        public void HandleSessionCreated(ZooKeeperSessionCreatedEventArgs args)

-        {

-            Guard.NotNull(args, "args");

-

-            Logger.InfoFormat(

-                CultureInfo.CurrentCulture,

-                "ZK expired; release old broker partition ownership; re-register consumer {0}",

-                this.consumerIdString);

-            this.loadBalancerListener.ResetState();

-            this.zkConsumerConnector.RegisterConsumerInZk(this.dirs, this.consumerIdString, this.topicCount);

-            this.loadBalancerListener.SyncedRebalance();

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperClient.Watcher.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperClient.Watcher.cs
deleted file mode 100644
index d17335d..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperClient.Watcher.cs
+++ /dev/null
@@ -1,682 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

-*/

-

-

-namespace Kafka.Client.ZooKeeperIntegration

-{

-    using System;

-    using System.Collections.Concurrent;

-    using System.Collections.Generic;

-    using System.Threading;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration.Events;

-    using Kafka.Client.ZooKeeperIntegration.Listeners;

-    using ZooKeeperNet;

-

-    internal partial class ZooKeeperClient

-    {

-        /// <summary>

-        /// Represents the method that will handle a ZooKeeper event  

-        /// </summary>

-        /// <param name="args">

-        /// The args.

-        /// </param>

-        /// <typeparam name="T">

-        /// Type of event data

-        /// </typeparam>

-        public delegate void ZooKeeperEventHandler<T>(T args)

-            where T : ZooKeeperEventArgs;

-

-        /// <summary>

-        /// Occurs when ZooKeeper connection state changes

-        /// </summary>

-        public event ZooKeeperEventHandler<ZooKeeperStateChangedEventArgs> StateChanged

-        {

-            add

-            {

-                this.EnsuresNotDisposed();

-                lock (this.eventLock)

-                {

-                    this.stateChangedHandlers -= value;

-                    this.stateChangedHandlers += value;

-                }

-            }

-

-            remove

-            {

-                this.EnsuresNotDisposed();

-                lock (this.eventLock)

-                {

-                    this.stateChangedHandlers -= value;

-                }

-            }

-        }

-

-        /// <summary>

-        /// Occurs when the ZooKeeper session is re-created

-        /// </summary>

-        public event ZooKeeperEventHandler<ZooKeeperSessionCreatedEventArgs> SessionCreated

-        {

-            add

-            {

-                this.EnsuresNotDisposed();

-                lock (this.eventLock)

-                {

-                    this.sessionCreatedHandlers -= value;

-                    this.sessionCreatedHandlers += value;

-                }

-            }

-

-            remove

-            {

-                this.EnsuresNotDisposed();

-                lock (this.eventLock)

-                {

-                    this.sessionCreatedHandlers -= value;

-                }

-            }

-        }
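Both add accessors above unsubscribe before subscribing so that registering the same handler twice cannot cause a double invocation; a self-contained sketch of that idempotent-subscribe pattern, with hypothetical event and handler names:

    using System;

    class IdempotentEventSketch
    {
        private readonly object eventLock = new object();
        private Action<string> handlers;

        public event Action<string> Changed
        {
            add
            {
                lock (this.eventLock)
                {
                    this.handlers -= value;   // drop a possible earlier registration first
                    this.handlers += value;   // so each delegate is invoked at most once
                }
            }

            remove
            {
                lock (this.eventLock)
                {
                    this.handlers -= value;
                }
            }
        }

        public void Raise(string message)
        {
            var snapshot = this.handlers;
            if (snapshot != null)
            {
                snapshot(message);
            }
        }

        static void Main()
        {
            var source = new IdempotentEventSketch();
            Action<string> handler = m => Console.WriteLine("handled: " + m);
            source.Changed += handler;
            source.Changed += handler;        // second subscription is a no-op
            source.Raise("state changed");    // prints exactly once
        }
    }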

-

-        private readonly ConcurrentQueue<ZooKeeperEventArgs> eventsQueue = new ConcurrentQueue<ZooKeeperEventArgs>();

-        private readonly object eventLock = new object();

-        private ZooKeeperEventHandler<ZooKeeperStateChangedEventArgs> stateChangedHandlers;

-        private ZooKeeperEventHandler<ZooKeeperSessionCreatedEventArgs> sessionCreatedHandlers;

-        private Thread eventWorker;

-        private Thread zooKeeperEventWorker;

-        private readonly ConcurrentDictionary<string, ChildChangedEventItem> childChangedHandlers = new ConcurrentDictionary<string, ChildChangedEventItem>();

-        private readonly ConcurrentDictionary<string, DataChangedEventItem> dataChangedHandlers = new ConcurrentDictionary<string, DataChangedEventItem>();

-        private DateTime? idleTime;

-

-        /// <summary>

-        /// Gets the time (in milliseconds) the event thread has been idle

-        /// </summary>

-        /// <remarks>

-        /// Used for testing purposes

-        /// </remarks>

-        public int IdleTime

-        {

-            get

-            {

-                return this.idleTime.HasValue ? Convert.ToInt32((DateTime.Now - this.idleTime.Value).TotalMilliseconds) : 0;

-            }

-        }

-

-        /// <summary>

-        /// Processes ZooKeeper event

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        /// <remarks>

-        /// Requires installed watcher

-        /// </remarks>

-        public void Process(WatchedEvent e)

-        {

-            this.EnsuresNotDisposed();

-            Logger.Debug("Received event: " + e);

-            this.zooKeeperEventWorker = Thread.CurrentThread;

-            if (this.shutdownTriggered)

-            {

-                Logger.Debug("ignoring event '{" + e.Type + " | " + e.Path + "}' since shutdown triggered");

-                return;

-            }

-

-            bool stateChanged = e.Path == null;

-            bool znodeChanged = e.Path != null;

-            bool dataChanged =

-                e.Type == EventType.NodeDataChanged

-                || e.Type == EventType.NodeDeleted

-                || e.Type == EventType.NodeCreated

-                || e.Type == EventType.NodeChildrenChanged;

-

-            lock (this.somethingChanged)

-            {

-                try

-                {

-                    if (stateChanged)

-                    {

-                        this.ProcessStateChange(e);

-                    }

-

-                    if (dataChanged)

-                    {

-                        this.ProcessDataOrChildChange(e);

-                    }

-                }

-                finally

-                {

-                    if (stateChanged)

-                    {

-                        lock (this.stateChangedLock)

-                        {

-                            Monitor.PulseAll(this.stateChangedLock);

-                        }

-

-                        if (e.State == KeeperState.Expired)

-                        {

-                            lock (this.znodeChangedLock)

-                            {

-                                Monitor.PulseAll(this.znodeChangedLock);

-                            }

-

-                            foreach (string path in this.childChangedHandlers.Keys)

-                            {

-                                this.Enqueue(new ZooKeeperChildChangedEventArgs(path));

-                            }

-

-                            foreach (string path in this.dataChangedHandlers.Keys)

-                            {

-                                this.Enqueue(new ZooKeeperDataChangedEventArgs(path));

-                            }

-                        }

-                    }

-

-                    if (znodeChanged)

-                    {

-                        lock (this.znodeChangedLock)

-                        {

-                            Monitor.PulseAll(this.znodeChangedLock);

-                        }

-                    }

-                }

-

-                Monitor.PulseAll(this.somethingChanged);

-            }

-        }
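Process above treats a null path as a connection-state event and fans node events out as child/data changes; a compact sketch of that classification, using a hypothetical enum in place of the ZooKeeperNet types:

    using System;

    enum SketchEventType { None, NodeCreated, NodeDeleted, NodeDataChanged, NodeChildrenChanged }

    static class WatcherClassificationSketch
    {
        // Mirrors the stateChanged / dataChanged booleans computed in Process above.
        static string Classify(string path, SketchEventType type)
        {
            bool stateChanged = path == null;
            bool dataOrChildChanged =
                type == SketchEventType.NodeDataChanged
                || type == SketchEventType.NodeDeleted
                || type == SketchEventType.NodeCreated
                || type == SketchEventType.NodeChildrenChanged;

            if (stateChanged) return "connection state event";
            if (dataOrChildChanged) return "znode event for " + path;
            return "ignored";
        }

        static void Main()
        {
            Console.WriteLine(Classify(null, SketchEventType.None));                          // connection state event
            Console.WriteLine(Classify("/brokers/ids", SketchEventType.NodeChildrenChanged)); // znode event for /brokers/ids
        }
    }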

-

-        /// <summary>

-        /// Subscribes a listener to ZooKeeper state change events

-        /// </summary>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        public void Subscribe(IZooKeeperStateListener listener)

-        {

-            Guard.Assert<ArgumentNullException>(() => listener != null);

-

-            this.EnsuresNotDisposed();

-            this.StateChanged += listener.HandleStateChanged;

-            this.SessionCreated += listener.HandleSessionCreated;

-            Logger.Debug("Subscribed state changes handler " + listener.GetType().Name);

-        }

-

-        /// <summary>

-        /// Unsubscribes a listener from ZooKeeper state change events

-        /// </summary>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        public void Unsubscribe(IZooKeeperStateListener listener)

-        {

-            Guard.Assert<ArgumentNullException>(() => listener != null);

-

-            this.EnsuresNotDisposed();

-            this.StateChanged -= listener.HandleStateChanged;

-            this.SessionCreated -= listener.HandleSessionCreated;

-            Logger.Debug("Unsubscribed state changes handler " + listener.GetType().Name);

-        }

-

-        /// <summary>

-        /// Subscribes a listener to ZooKeeper child changes under the given path

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        public void Subscribe(string path, IZooKeeperChildListener listener)

-        {

-            Guard.Assert<ArgumentException>(() => !string.IsNullOrEmpty(path));

-            Guard.Assert<ArgumentNullException>(() => listener != null);

-

-            this.EnsuresNotDisposed();

-            this.childChangedHandlers.AddOrUpdate(

-                path,

-                new ChildChangedEventItem(Logger, listener.HandleChildChange),

-                (key, oldValue) => { oldValue.ChildChanged += listener.HandleChildChange; return oldValue; });

-            this.WatchForChilds(path);

-            Logger.Debug("Subscribed child changes handler " + listener.GetType().Name + " for path: " + path);

-        }

-

-        /// <summary>

-        /// Unsubscribes a listener from ZooKeeper child changes under the given path

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        public void Unsubscribe(string path, IZooKeeperChildListener listener)

-        {

-            Guard.Assert<ArgumentException>(() => !string.IsNullOrEmpty(path));

-            Guard.Assert<ArgumentNullException>(() => listener != null);

-

-            this.EnsuresNotDisposed();

-            this.childChangedHandlers.AddOrUpdate(

-                path,

-                new ChildChangedEventItem(Logger),

-                (key, oldValue) => { oldValue.ChildChanged -= listener.HandleChildChange; return oldValue; });

-            Logger.Debug("Unsubscribed child changes handler " + listener.GetType().Name + " for path: " + path);

-        }

-

-        /// <summary>

-        /// Subscribes a listener to ZooKeeper data changes under the given path

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        public void Subscribe(string path, IZooKeeperDataListener listener)

-        {

-            Guard.Assert<ArgumentException>(() => !string.IsNullOrEmpty(path));

-            Guard.Assert<ArgumentNullException>(() => listener != null);

-

-            this.EnsuresNotDisposed();

-            this.dataChangedHandlers.AddOrUpdate(

-                path,

-                new DataChangedEventItem(Logger, listener.HandleDataChange, listener.HandleDataDelete),

-                (key, oldValue) =>

-                {

-                    oldValue.DataChanged += listener.HandleDataChange;

-                    oldValue.DataDeleted += listener.HandleDataDelete;

-                    return oldValue;

-                });

-            this.WatchForData(path);

-            Logger.Debug("Subscribed data changes handler " + listener.GetType().Name + " for path: " + path);

-        }

-

-        /// <summary>

-        /// Unsubscribes a listener from ZooKeeper data changes under the given path

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <param name="listener">

-        /// The listener.

-        /// </param>

-        public void Unsubscribe(string path, IZooKeeperDataListener listener)

-        {

-            Guard.Assert<ArgumentException>(() => !string.IsNullOrEmpty(path));

-            Guard.Assert<ArgumentNullException>(() => listener != null);

-

-            this.EnsuresNotDisposed();

-            this.dataChangedHandlers.AddOrUpdate(

-                path,

-                new DataChangedEventItem(Logger),

-                (key, oldValue) =>

-                {

-                    oldValue.DataChanged -= listener.HandleDataChange;

-                    oldValue.DataDeleted -= listener.HandleDataDelete;

-                    return oldValue;

-                });

-            Logger.Debug("Unsubscribed data changes handler " + listener.GetType().Name + " for path: " + path);

-        }
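The Subscribe/Unsubscribe pairs above rely on ConcurrentDictionary.AddOrUpdate to create a per-path handler entry or mutate the existing one; a minimal sketch of that registry pattern, with a hypothetical handler list standing in for the event items:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    static class ListenerRegistrySketch
    {
        static readonly ConcurrentDictionary<string, List<Action<string>>> Handlers =
            new ConcurrentDictionary<string, List<Action<string>>>();

        static void Subscribe(string path, Action<string> handler)
        {
            Handlers.AddOrUpdate(
                path,
                _ => new List<Action<string>> { handler },                 // first listener for this path
                (_, existing) => { existing.Add(handler); return existing; });
            // note: the List itself is not synchronized; adequate for this single-threaded sketch
        }

        static void Main()
        {
            Subscribe("/consumers/group1/ids", msg => Console.WriteLine("A: " + msg));
            Subscribe("/consumers/group1/ids", msg => Console.WriteLine("B: " + msg));
            foreach (var handler in Handlers["/consumers/group1/ids"])
            {
                handler("child list changed");
            }
        }
    }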

-

-        /// <summary>

-        /// Unsubscribes all listeners

-        /// </summary>

-        public void UnsubscribeAll()

-        {

-            this.EnsuresNotDisposed();

-            lock (this.eventLock)

-            {

-                this.stateChangedHandlers = null;

-                this.sessionCreatedHandlers = null;

-                this.childChangedHandlers.Clear();

-                this.dataChangedHandlers.Clear();

-            }

-

-            Logger.Debug("Unsubscribed all handlers");

-        }

-

-        /// <summary>

-        /// Installs a child watch for the given path. 

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        /// <returns>

-        /// the current children of the path or null if the znode with the given path doesn't exist

-        /// </returns>

-        public IList<string> WatchForChilds(string path)

-        {

-            Guard.Assert<ArgumentException>(() => !string.IsNullOrEmpty(path));

-

-            this.EnsuresNotDisposed();

-            if (this.zooKeeperEventWorker != null && Thread.CurrentThread == this.zooKeeperEventWorker)

-            {

-                throw new InvalidOperationException("Must not be done in the zookeeper event thread.");

-            }

-

-            return this.RetryUntilConnected(

-                () =>

-                {

-                    this.Exists(path);

-                    try

-                    {

-                        return this.GetChildren(path);

-                    }

-                    catch (KeeperException.NoNodeException)

-                    {

-                        return null;

-                    }

-                });

-        }

-

-        /// <summary>

-        /// Installs a data watch for the given path. 

-        /// </summary>

-        /// <param name="path">

-        /// The parent path.

-        /// </param>

-        public void WatchForData(string path)

-        {

-            Guard.Assert<ArgumentException>(() => !string.IsNullOrEmpty(path));

-

-            this.EnsuresNotDisposed();

-            this.RetryUntilConnected(

-                () => this.Exists(path, true));

-        }

-

-        /// <summary>

-        /// Checks whether any data or child listeners are registered

-        /// </summary>

-        /// <param name="path">

-        /// The path.

-        /// </param>

-        /// <returns>

-        /// Value indicates whether any data or child listeners are registered

-        /// </returns>

-        private bool HasListeners(string path)

-        {

-            ChildChangedEventItem childChanged;

-            this.childChangedHandlers.TryGetValue(path, out childChanged);

-            if (childChanged != null && childChanged.Count > 0)

-            {

-                return true;

-            }

-

-            DataChangedEventItem dataChanged;

-            this.dataChangedHandlers.TryGetValue(path, out dataChanged);

-            if (dataChanged != null && dataChanged.TotalCount > 0)

-            {

-                return true;

-            }

-

-            return false;

-        }

-

-        /// <summary>

-        /// Event thread starting method

-        /// </summary>

-        private void RunEventWorker()

-        {

-            Logger.Debug("Starting ZooKeeper watcher event thread");

-            try

-            {

-                this.PoolEventsQueue();

-            }

-            catch (ThreadInterruptedException)

-            {

-                Logger.Debug("Terminate ZooKeeper watcher event thread");

-            }

-        }

-

-        /// <summary>

-        /// Polls ZooKeeper events from the events queue

-        /// </summary>

-        /// <remarks>

-        /// The thread waits when the queue is empty

-        /// </remarks>

-        private void PoolEventsQueue()

-        {

-            while (true)

-            {

-                while (!this.eventsQueue.IsEmpty)

-                {

-                    this.Dequeue();

-                }

-

-                lock (this.somethingChanged)

-                {

-                    Logger.Debug("Awaiting events ...");

-                    this.idleTime = DateTime.Now;

-                    Monitor.Wait(this.somethingChanged);

-                    this.idleTime = null;

-                }

-            }

-        }

-

-        /// <summary>

-        /// Enqueues a new ZooKeeper event in the events queue

-        /// </summary>

-        /// <param name="e">

-        /// The event from ZooKeeper.

-        /// </param>

-        private void Enqueue(ZooKeeperEventArgs e)

-        {

-            Logger.Debug("New event queued: " + e);

-            this.eventsQueue.Enqueue(e);

-        }

-

-        /// <summary>

-        /// Dequeues event from events queue and invokes subscribed handlers

-        /// </summary>

-        private void Dequeue()

-        {

-            try

-            {

-                ZooKeeperEventArgs e;

-                var success = this.eventsQueue.TryDequeue(out e);

-                if (success)

-                {

-                    if (e != null)

-                    {

-                        Logger.Debug("Event dequeued: " + e);

-                        switch (e.Type)

-                        {

-                            case ZooKeeperEventTypes.StateChanged:

-                                this.OnStateChanged((ZooKeeperStateChangedEventArgs)e);

-                                break;

-                            case ZooKeeperEventTypes.SessionCreated:

-                                this.OnSessionCreated((ZooKeeperSessionCreatedEventArgs)e);

-                                break;

-                            case ZooKeeperEventTypes.ChildChanged:

-                                this.OnChildChanged((ZooKeeperChildChangedEventArgs)e);

-                                break;

-                            case ZooKeeperEventTypes.DataChanged:

-                                this.OnDataChanged((ZooKeeperDataChangedEventArgs)e);

-                                break;

-                            default:

-                                throw new InvalidOperationException("Not supported event type");

-                        }

-                    }

-                }

-            }

-            catch (Exception exc)

-            {

-                Logger.Warn("Error handling event ", exc);

-            }

-        }

-

-        /// <summary>

-        /// Processes ZooKeeper state change events

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        private void ProcessStateChange(WatchedEvent e)

-        {

-            Logger.Info("zookeeper state changed (" + e.State + ")");

-            lock (this.stateChangedLock)

-            {

-                this.currentState = e.State;

-            }

-

-            if (this.shutdownTriggered)

-            {

-                return;

-            }

-

-            this.Enqueue(new ZooKeeperStateChangedEventArgs(e.State));

-            if (e.State == KeeperState.Expired)

-            {

-                this.Reconnect(this.connection.Servers, this.connection.SessionTimeout);

-                this.Enqueue(ZooKeeperSessionCreatedEventArgs.Empty);

-            }

-        }

-

-        /// <summary>

-        /// Processes ZooKeeper child or data change events

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        private void ProcessDataOrChildChange(WatchedEvent e)

-        {

-            if (this.shutdownTriggered)

-            {

-                return;

-            }

-

-            if (e.Type == EventType.NodeChildrenChanged

-                || e.Type == EventType.NodeCreated

-                || e.Type == EventType.NodeDeleted)

-            {

-                this.Enqueue(new ZooKeeperChildChangedEventArgs(e.Path));

-            }

-

-            if (e.Type == EventType.NodeDataChanged

-                || e.Type == EventType.NodeCreated

-                || e.Type == EventType.NodeDeleted)

-            {

-                this.Enqueue(new ZooKeeperDataChangedEventArgs(e.Path));

-            }

-        }

-

-        /// <summary>

-        /// Invokes subscribed handlers for ZooKeeper state change events

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        private void OnStateChanged(ZooKeeperStateChangedEventArgs e)

-        {

-            try

-            {

-                var handlers = this.stateChangedHandlers;

-                if (handlers == null)

-                {

-                    return;

-                }

-

-                foreach (var handler in handlers.GetInvocationList())

-                {

-                    Logger.Debug(e + " sent to " + handler.Target);

-                }

-

-                handlers(e);

-            }

-            catch (Exception exc)

-            {

-                Logger.Error("Failed to handle state changed event.", exc);

-            }

-        }

-

-        /// <summary>

-        /// Invokes subscribed handlers for the ZooKeeper session re-created event

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        private void OnSessionCreated(ZooKeeperSessionCreatedEventArgs e)

-        {

-            var handlers = this.sessionCreatedHandlers;

-            if (handlers == null)

-            {

-                return;

-            }

-

-            foreach (var handler in handlers.GetInvocationList())

-            {

-                Logger.Debug(e + " sent to " + handler.Target);

-            }

-

-            handlers(e);

-        }

-

-        /// <summary>

-        /// Invokes subscribed handlers for ZooKeeper child change events

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        private void OnChildChanged(ZooKeeperChildChangedEventArgs e)

-        {

-            ChildChangedEventItem handlers;

-            this.childChangedHandlers.TryGetValue(e.Path, out handlers);

-            if (handlers == null || handlers.Count == 0)

-            {

-                return;

-            }

-

-            this.Exists(e.Path);

-            try

-            {

-                IList<string> children = this.GetChildren(e.Path);

-                e.Children = children;

-            }

-            catch (KeeperException.NoNodeException)

-            {

-            }

-

-            handlers.OnChildChanged(e);

-        }

-

-        /// <summary>

-        /// Invokes subscribed handlers for ZooKeeper data change events

-        /// </summary>

-        /// <param name="e">

-        /// The event data.

-        /// </param>

-        private void OnDataChanged(ZooKeeperDataChangedEventArgs e)

-        {

-            DataChangedEventItem handlers;

-            this.dataChangedHandlers.TryGetValue(e.Path, out handlers);

-            if (handlers == null || handlers.TotalCount == 0)

-            {

-                return;

-            }

-

-            try

-            {

-                this.Exists(e.Path, true);

-                var data = this.ReadData<string>(e.Path, null, true);

-                e.Data = data;

-                handlers.OnDataChanged(e);

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                handlers.OnDataDeleted(e);

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperClient.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperClient.cs
deleted file mode 100644
index 633a5ac..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperClient.cs
+++ /dev/null
@@ -1,893 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Reflection;

-    using System.Threading;

-    using Kafka.Client.Exceptions;

-    using Kafka.Client.Utils;

-    using log4net;

-    using Org.Apache.Zookeeper.Data;

-    using ZooKeeperNet;

-

-    /// <summary>

-    /// Abstracts the interaction with ZooKeeper and allows permanent (not just one-time) watches on nodes in ZooKeeper 

-    /// </summary>

-    internal partial class ZooKeeperClient : IZooKeeperClient

-    {

-        private const int DefaultConnectionTimeout = int.MaxValue;

-        public const string DefaultConsumersPath = "/consumers";

-        public const string DefaultBrokerIdsPath = "/brokers/ids";

-        public const string DefaultBrokerTopicsPath = "/brokers/topics";

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-        private IZooKeeperConnection connection;

-        private bool shutdownTriggered;

-        private KeeperState currentState;

-        private readonly IZooKeeperSerializer serializer;

-        private readonly object stateChangedLock = new object();

-        private readonly object znodeChangedLock = new object();

-        private readonly object somethingChanged = new object();

-        private readonly object shuttingDownLock = new object();

-        private volatile bool disposed;

-        private readonly int connectionTimeout;

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperClient"/> class.

-        /// </summary>

-        /// <param name="connection">

-        /// The connection to ZooKeeper.

-        /// </param>

-        /// <param name="serializer">

-        /// The given serializer.

-        /// </param>

-        /// <param name="connectionTimeout">

-        /// The connection timeout (in milliseconds). Default is infinite.

-        /// </param>

-        /// <remarks>

-        /// Default serializer is string UTF-8 serializer

-        /// </remarks>

-        public ZooKeeperClient(

-            IZooKeeperConnection connection, 

-            IZooKeeperSerializer serializer, 

-            int connectionTimeout = DefaultConnectionTimeout)

-        {

-            this.serializer = serializer;

-            this.connection = connection;

-            this.connectionTimeout = connectionTimeout;

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperClient"/> class.

-        /// </summary>

-        /// <param name="servers">

-        /// The list of ZooKeeper servers.

-        /// </param>

-        /// <param name="sessionTimeout">

-        /// The session timeout (in milliseconds).

-        /// </param>

-        /// <param name="serializer">

-        /// The given serializer.

-        /// </param>

-        /// <remarks>

-        /// Default serializer is string UTF-8 serializer.

-        /// It is recommended to use fairly large session timeouts for ZooKeeper.

-        /// </remarks>

-        public ZooKeeperClient(string servers, int sessionTimeout, IZooKeeperSerializer serializer)

-            : this(new ZooKeeperConnection(servers, sessionTimeout), serializer)

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperClient"/> class.

-        /// </summary>

-        /// <param name="servers">

-        /// The list of ZooKeeper servers.

-        /// </param>

-        /// <param name="sessionTimeout">

-        /// The session timeout (in milliseconds).

-        /// </param>

-        /// <param name="serializer">

-        /// The given serializer.

-        /// </param>

-        /// <param name="connectionTimeout">

-        /// The connection timeout (in milliseconds).

-        /// </param>

-        /// <remarks>

-        /// Default serializer is string UTF-8 serializer.

-        /// It is recommended to use fairly large session timeouts for ZooKeeper.

-        /// </remarks>

-        public ZooKeeperClient(

-            string servers, 

-            int sessionTimeout, 

-            IZooKeeperSerializer serializer,

-            int connectionTimeout)

-            : this(new ZooKeeperConnection(servers, sessionTimeout), serializer, connectionTimeout)

-        {

-        }

-

-        /// <summary>

-        /// Connects to ZooKeeper server within given time period and installs watcher in ZooKeeper

-        /// </summary>

-        /// <remarks>

-        /// Also, starts background thread for event handling

-        /// </remarks>

-        public void Connect()

-        {

-            this.EnsuresNotDisposed();

-            bool started = false;

-            try

-            {

-                this.shutdownTriggered = false;

-                this.eventWorker = new Thread(this.RunEventWorker) { IsBackground = true };

-                this.eventWorker.Name = "ZooKeeperkWatcher-EventThread-" + this.eventWorker.ManagedThreadId + "-" + this.connection.Servers;

-                this.eventWorker.Start();

-                this.connection.Connect(this);

-                Logger.Debug("Awaiting connection to Zookeeper server");

-                if (!this.WaitUntilConnected(this.connectionTimeout))

-                {

-                    throw new ZooKeeperException(

-                        "Unable to connect to zookeeper server within timeout: " + this.connection.SessionTimeout);

-                }

-

-                started = true;

-                Logger.Debug("Connection to Zookeeper server established");

-            }

-            catch (ThreadInterruptedException)

-            {

-                throw new InvalidOperationException(

-                    "Not connected with zookeeper server yet. Current state is " + this.connection.ClientState);

-            }

-            finally

-            {

-                if (!started)

-                {

-                    this.Disconnect();

-                }

-            }

-        }

-

-        /// <summary>

-        /// Closes current connection to ZooKeeper

-        /// </summary>

-        /// <remarks>

-        /// Also, stops background thread

-        /// </remarks>

-        public void Disconnect()

-        {

-            Logger.Debug("Closing ZooKeeperClient...");

-            this.shutdownTriggered = true;

-            this.eventWorker.Interrupt();

-            this.eventWorker.Join(2000);

-            this.connection.Dispose();

-            this.connection = null;

-        }

-

-        /// <summary>

-        /// Re-connect to ZooKeeper server when session expired

-        /// </summary>

-        /// <param name="servers">

-        /// The servers.

-        /// </param>

-        /// <param name="connectionTimeout">

-        /// The connection timeout.

-        /// </param>

-        public void Reconnect(string servers, int connectionTimeout)

-        {

-            this.EnsuresNotDisposed();

-            Logger.Debug("Reconnecting");

-            this.connection.Dispose();

-            this.connection = new ZooKeeperConnection(servers, connectionTimeout);

-            this.connection.Connect(this);

-            Logger.Debug("Reconnected");

-        }

-

-        /// <summary>

-        /// Waits until the ZooKeeper connection is established

-        /// </summary>

-        /// <param name="connectionTimeout">

-        /// The connection timeout.

-        /// </param>

-        /// <returns>

-        /// Status

-        /// </returns>

-        public bool WaitUntilConnected(int connectionTimeout)

-        {

-            Guard.Greater(connectionTimeout, 0, "connectionTimeout");

-

-            this.EnsuresNotDisposed();

-            if (this.eventWorker != null && this.eventWorker == Thread.CurrentThread)

-            {

-                throw new InvalidOperationException("Must not be done in the ZooKeeper event thread.");

-            }

-

-            Logger.Debug("Waiting for keeper state: " + KeeperState.SyncConnected);

-            bool stillWaiting = true;

-            lock (this.stateChangedLock)

-            {

-                while (this.currentState != KeeperState.SyncConnected)

-                {

-                    if (!stillWaiting)

-                    {

-                        return false;

-                    }

-

-                    stillWaiting = Monitor.Wait(this.stateChangedLock, connectionTimeout);

-                }

-

-                Logger.Debug("State is " + this.currentState);

-            }

-

-            return true;

-        }
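WaitUntilConnected blocks on a monitor that the watcher thread pulses whenever the connection state changes; a self-contained sketch of that wait/pulse handshake, with a hypothetical boolean flag in place of KeeperState:

    using System;
    using System.Threading;

    static class WaitForConnectionSketch
    {
        static readonly object StateLock = new object();
        static bool connected;

        static bool WaitUntilConnected(int timeoutMs)
        {
            lock (StateLock)
            {
                while (!connected)
                {
                    // Monitor.Wait returns false when the timeout elapses without a pulse.
                    if (!Monitor.Wait(StateLock, timeoutMs))
                    {
                        return false;
                    }
                }
            }

            return true;
        }

        static void Main()
        {
            // Simulated watcher thread: reports the connection a little later.
            new Thread(() =>
            {
                Thread.Sleep(100);
                lock (StateLock)
                {
                    connected = true;
                    Monitor.PulseAll(StateLock);
                }
            }) { IsBackground = true }.Start();

            Console.WriteLine("connected: " + WaitUntilConnected(2000));
        }
    }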

-

-        /// <summary>

-        /// Retries the given delegate until a connection is established

-        /// </summary>

-        /// <param name="callback">

-        /// The delegate to invoke.

-        /// </param>

-        /// <typeparam name="T">

-        /// Type of data returned by delegate 

-        /// </typeparam>

-        /// <returns>

-        /// data returned by delegate

-        /// </returns>

-        public T RetryUntilConnected<T>(Func<T> callback)

-        {

-            Guard.NotNull(callback, "callback");

-

-            this.EnsuresNotDisposed();

-            if (this.zooKeeperEventWorker != null && this.zooKeeperEventWorker == Thread.CurrentThread)

-            {

-                throw new InvalidOperationException("Must not be done in the zookeeper event thread");

-            }

-

-            while (true)

-            {

-                try

-                {

-                    return callback();

-                }

-                catch (KeeperException.ConnectionLossException)

-                {

-                    Thread.Yield();

-                    this.WaitUntilConnected(this.connection.SessionTimeout);

-                }

-                catch (KeeperException.SessionExpiredException)

-                {

-                    Thread.Yield();

-                    this.WaitUntilConnected(this.connection.SessionTimeout);

-                }

-            }

-        }
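RetryUntilConnected wraps a ZooKeeper call so that connection loss or session expiry blocks and retries rather than surfacing; a minimal generic sketch of that wrapper, assuming a hypothetical transient exception type:

    using System;
    using System.Threading;

    static class RetryUntilConnectedSketch
    {
        class TransientConnectionException : Exception { }

        // Keeps invoking callback until it completes without a transient connection failure.
        static T RetryUntilConnected<T>(Func<T> callback, Func<bool> waitUntilConnected)
        {
            while (true)
            {
                try
                {
                    return callback();
                }
                catch (TransientConnectionException)
                {
                    Thread.Yield();          // let the reconnect make progress
                    waitUntilConnected();    // block until the session is usable again
                }
            }
        }

        static void Main()
        {
            int attempts = 0;
            string result = RetryUntilConnected(
                () =>
                {
                    if (++attempts < 3) throw new TransientConnectionException();
                    return "read /brokers/ids on attempt " + attempts;
                },
                () => true);
            Console.WriteLine(result);
        }
    }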

-

-        /// <summary>

-        /// Checks whether znode for a given path exists

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Result of check

-        /// </returns>

-        /// <remarks>

-        /// Will reinstall watcher in ZooKeeper if any listener for given path exists 

-        /// </remarks>

-        public bool Exists(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            bool hasListeners = this.HasListeners(path);

-            return this.Exists(path, hasListeners);

-        }

-

-        /// <summary>

-        /// Checks whether znode for a given path exists.

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether should reinstall watcher in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Result of check

-        /// </returns>

-        public bool Exists(string path, bool watch)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            return this.RetryUntilConnected(

-                () => this.connection.Exists(path, watch));

-        }

-

-        /// <summary>

-        /// Gets all children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Children

-        /// </returns>

-        /// <remarks>

-        /// Will reinstall watcher in ZooKeeper if any listener for given path exists 

-        /// </remarks>

-        public IList<string> GetChildren(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            bool hasListeners = this.HasListeners(path);

-            return this.GetChildren(path, hasListeners);

-        }

-

-        /// <summary>

-        /// Gets all children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether should reinstall watcher in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Children

-        /// </returns>

-        public IList<string> GetChildren(string path, bool watch)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            return this.RetryUntilConnected(

-                () => this.connection.GetChildren(path, watch));

-        }

-

-        /// <summary>

-        /// Counts number of children for a given path.

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Number of children 

-        /// </returns>

-        /// <remarks>

-        /// Will reinstall watcher in ZooKeeper if any listener for given path exists.

-        /// Returns 0 if path does not exist

-        /// </remarks>

-        public int CountChildren(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            try

-            {

-                return this.GetChildren(path).Count;

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                return 0;

-            }

-        }

-

-        /// <summary>

-        /// Fetches data from a given path in ZooKeeper

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="stats">

-        /// The statistics.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether should reinstall watcher in ZooKeeper.

-        /// </param>

-        /// <typeparam name="T">

-        /// Expected type of data

-        /// </typeparam>

-        /// <returns>

-        /// Data

-        /// </returns>

-        /// <remarks>

-        /// Uses given serializer to deserialize data

-        /// Use null for stats

-        /// </remarks>

-        public T ReadData<T>(string path, Stat stats, bool watch)

-            where T : class 

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            byte[] bytes = this.RetryUntilConnected(

-                () => this.connection.ReadData(path, stats, watch));

-            return this.serializer.Deserialize(bytes) as T;

-        }

-

-        /// <summary>

-        /// Fetches data from a given path in ZooKeeper

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="stats">

-        /// The statistics.

-        /// </param>

-        /// <typeparam name="T">

-        /// Expected type of data

-        /// </typeparam>

-        /// <returns>

-        /// Data

-        /// </returns>

-        /// <remarks>

-        /// Uses given serializer to deserialize data.

-        /// Will reinstall watcher in ZooKeeper if any listener for given path exists.

-        /// Use null for stats

-        /// </remarks>

-        public T ReadData<T>(string path, Stat stats) where T : class

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            bool hasListeners = this.HasListeners(path);

-            return this.ReadData<T>(path, null, hasListeners);

-        }

-

-        /// <summary>

-        /// Writes data for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        public void WriteData(string path, object data)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            this.WriteData(path, data, -1);

-        }

-

-        /// <summary>

-        /// Writes data for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <param name="expectedVersion">

-        /// Expected version of data

-        /// </param>

-        /// <remarks>

-        /// Pass -1 as the expected version to match any version

-        /// </remarks>

-        public void WriteData(string path, object data, int expectedVersion)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            byte[] bytes = this.serializer.Serialize(data);

-            this.RetryUntilConnected(

-                () =>

-                    {

-                        this.connection.WriteData(path, bytes, expectedVersion);

-                        return null as object;

-                    });

-        }

-

-        /// <summary>

-        /// Deletes znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Status

-        /// </returns>

-        public bool Delete(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            return this.RetryUntilConnected(

-                () =>

-                    {

-                        try

-                        {

-                            this.connection.Delete(path);

-                            return true;

-                        }

-                        catch (KeeperException.NoNodeException)

-                        {

-                            return false;

-                        }

-                    });

-        }

-

-        /// <summary>

-        /// Deletes the znode and its children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Status

-        /// </returns>

-        public bool DeleteRecursive(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            IList<string> children;

-            try

-            {

-                children = this.GetChildren(path, false);

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                return true;

-            }

-

-            foreach (var child in children)

-            {

-                if (!this.DeleteRecursive(path + "/" + child))

-                {

-                    return false;

-                }

-            }

-

-            return this.Delete(path);

-        }

-

-        /// <summary>

-        /// Creates a persistent znode and all intermediate znodes (if they do not exist) for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        public void MakeSurePersistentPathExists(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            if (!this.Exists(path))

-            {

-                this.CreatePersistent(path, true);

-            }

-        }

-

-        /// <summary>

-        /// Fetches children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The path.

-        /// </param>

-        /// <returns>

-        /// Children or null, if znode does not exist

-        /// </returns>

-        public IList<string> GetChildrenParentMayNotExist(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            try

-            {

-                return this.GetChildren(path);

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                return null;

-            }

-        }

-

-        /// <summary>

-        /// Fetches data from a given path in ZooKeeper

-        /// </summary>

-        /// <typeparam name="T">

-        /// Expected type of data

-        /// </typeparam>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <returns>

-        /// Data or null, if znode does not exist

-        /// </returns>

-        public T ReadData<T>(string path)

-            where T : class

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            return this.ReadData<T>(path, false);

-        }

-

-        /// <summary>

-        /// Closes connection to ZooKeeper

-        /// </summary>

-        public void Dispose()

-        {

-            if (this.disposed)

-            {

-                return;

-            }

-

-            lock (this.shuttingDownLock)

-            {

-                if (this.disposed)

-                {

-                    return;

-                }

-

-                this.disposed = true;

-            }

-

-            try

-            {

-                this.Disconnect();

-            }

-            catch (ThreadInterruptedException)

-            {

-            }

-            catch (Exception exc)

-            {

-                Logger.Debug("Ignoring unexpected errors on closing ZooKeeperClient", exc);

-            }

-

-            Logger.Debug("Closing ZooKeeperClient... done");

-        }

-

-        /// <summary>

-        /// Creates a persistent znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="createParents">

-        /// Indicates whether missing intermediate znodes should be created

-        /// </param>

-        /// <remarks>

-        /// Persistent znodes won't disappear after session close

-        /// </remarks>

-        public void CreatePersistent(string path, bool createParents)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-            this.EnsuresNotDisposed();

-            try

-            {

-                this.Create(path, null, CreateMode.Persistent);

-            }

-            catch (KeeperException.NodeExistsException)

-            {

-                if (!createParents)

-                {

-                    throw;

-                }

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                if (!createParents)

-                {

-                    throw;

-                }

-

-                string parentDir = path.Substring(0, path.LastIndexOf('/'));

-                this.CreatePersistent(parentDir, true);

-                this.CreatePersistent(path, true);

-            }

-        }

-

-        /// <summary>

-        /// Creates a persistent znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <remarks>

-        /// Persistent znodes won't disappear after session close

-        /// Doesn't create missing intermediate znodes

-        /// </remarks>

-        public void CreatePersistent(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-            this.EnsuresNotDisposed();

-            this.CreatePersistent(path, false);

-        }

-

-        /// <summary>

-        /// Creates a persistent znode for a given path and writes data into it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <remarks>

-        /// Persistent znodes won't disappear after session close

-        /// Doesn't create missing intermediate znodes

-        /// </remarks>

-        public void CreatePersistent(string path, object data)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-            this.EnsuresNotDisposed();

-            this.Create(path, data, CreateMode.Persistent);

-        }

-

-        /// <summary>

-        /// Creates a sequential, persistent znode for a given path and writes data into it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <remarks>

-        /// Persistent znodes won't disappear after session close

-        /// Doesn't create missing intermediate znodes

-        /// </remarks>

-        /// <returns>

-        /// The created znode's path

-        /// </returns>

-        public string CreatePersistentSequential(string path, object data)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-            this.EnsuresNotDisposed();

-            return this.Create(path, data, CreateMode.PersistentSequential);

-        }

-

-        /// <summary>

-        /// Helper method to create znode

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <param name="mode">

-        /// The create mode.

-        /// </param>

-        /// <returns>

-        /// The created znode's path

-        /// </returns>

-        private string Create(string path, object data, CreateMode mode)

-        {

-            if (path == null)

-            {

-                throw new ArgumentNullException("path", "Path must not be null");

-            }

-

-            byte[] bytes = data == null ? null : this.serializer.Serialize(data);

-            return this.RetryUntilConnected(() => 

-                this.connection.Create(path, bytes, mode));

-        }

-

-        /// <summary>

-        /// Creates an ephemeral znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <remarks>

-        /// Ephemeral znodes will disappear after session close

-        /// </remarks>

-        public void CreateEphemeral(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-            this.EnsuresNotDisposed();

-            this.Create(path, null, CreateMode.Ephemeral);

-        }

-

-        /// <summary>

-        /// Creates an ephemeral znode for a given path and writes data into it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <remarks>

-        /// Ephemeral znodes will disappear after session close

-        /// </remarks>

-        public void CreateEphemeral(string path, object data)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-            this.EnsuresNotDisposed();

-            this.Create(path, data, CreateMode.Ephemeral);

-        }

-

-        /// <summary>

-        /// Creates an ephemeral, sequential znode for a given path and writes data into it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <remarks>

-        /// Ephemeral znodes will disappear after session close

-        /// </remarks>

-        /// <returns>

-        /// Created znode's path

-        /// </returns>

-        public string CreateEphemeralSequential(string path, object data)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-            this.EnsuresNotDisposed();

-            return this.Create(path, data, CreateMode.EphemeralSequential);

-        }

-

-        /// <summary>

-        /// Fetches data for given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="returnNullIfPathNotExists">

-        /// Indicates whether to return null or throw an exception when the

-        /// znode doesn't exist

-        /// </param>

-        /// <typeparam name="T">

-        /// Expected type of data

-        /// </typeparam>

-        /// <returns>

-        /// Data

-        /// </returns>

-        public T ReadData<T>(string path, bool returnNullIfPathNotExists)

-            where T : class 

-        {

-            Guard.NotNullNorEmpty(path, "path");

-            this.EnsuresNotDisposed();

-            try

-            {

-                return this.ReadData<T>(path, null);

-            }

-            catch (KeeperException.NoNodeException)

-            {

-                if (!returnNullIfPathNotExists)

-                {

-                    throw;

-                }

-

-                return null;

-            }

-        }

-

-        /// <summary>

-        /// Ensures that object wasn't disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}
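
Note: the Dispose method removed above relies on a volatile flag that is re-checked under a lock so that concurrent callers shut the client down exactly once. Below is a minimal, self-contained C# sketch of that pattern, independent of the removed ZooKeeper types; the class name OnceDisposable is illustrative only.

    using System;

    // Illustrative only: mirrors the double-checked dispose used by the removed ZooKeeperClient.
    public class OnceDisposable : IDisposable
    {
        private readonly object shuttingDownLock = new object();
        private volatile bool disposed;

        public void Dispose()
        {
            // Fast path: already disposed, nothing to do.
            if (this.disposed)
            {
                return;
            }

            lock (this.shuttingDownLock)
            {
                // Re-check under the lock so only one caller performs the shutdown.
                if (this.disposed)
                {
                    return;
                }

                this.disposed = true;
            }

            // Actual cleanup (closing connections, etc.) would happen here,
            // outside the lock, exactly as the removed client does.
        }

        // Guard used by public methods, as in the removed client.
        protected void EnsuresNotDisposed()
        {
            if (this.disposed)
            {
                throw new ObjectDisposedException(this.GetType().Name);
            }
        }
    }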

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperConnection.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperConnection.cs
deleted file mode 100644
index 7ed31ca..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperConnection.cs
+++ /dev/null
@@ -1,327 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration

-{

-    using System;

-    using System.Collections.Generic;

-    using System.IO;

-    using System.Reflection;

-    using Kafka.Client.Exceptions;

-    using Kafka.Client.Utils;

-    using log4net;

-    using Org.Apache.Zookeeper.Data;

-    using ZooKeeperNet;

-

-    /// <summary>

-    /// Abstracts connection with ZooKeeper server

-    /// </summary>

-    internal class ZooKeeperConnection : IZooKeeperConnection

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-

-        public const int DefaultSessionTimeout = 30000;

-

-        private readonly object syncLock = new object();

-

-        private readonly object shuttingDownLock = new object();

-

-        private volatile bool disposed;

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperConnection"/> class.

-        /// </summary>

-        /// <param name="servers">

-        /// The list of ZooKeeper servers.

-        /// </param>

-        public ZooKeeperConnection(string servers)

-            : this(servers, DefaultSessionTimeout)

-        {

-        }

-

-        /// <summary>

-        /// Initializes a new instance of the <see cref="ZooKeeperConnection"/> class.

-        /// </summary>

-        /// <param name="servers">

-        /// The list of ZooKeeper servers.

-        /// </param>

-        /// <param name="sessionTimeout">

-        /// The session timeout.

-        /// </param>

-        public ZooKeeperConnection(string servers, int sessionTimeout)

-        {

-            this.Servers = servers;

-            this.SessionTimeout = sessionTimeout;

-        }

-

-        /// <summary>

-        /// Gets the list of ZooKeeper servers.

-        /// </summary>

-        public string Servers { get; private set; }

-

-        /// <summary>

-        /// Gets the ZooKeeper session timeout

-        /// </summary>

-        public int SessionTimeout { get; private set; }

-

-        /// <summary>

-        /// Gets ZooKeeper client.

-        /// </summary>

-        public ZooKeeper Client { get; private set; }

-

-        /// <summary>

-        /// Gets the ZooKeeper client state

-        /// </summary>

-        public ZooKeeper.States ClientState

-        {

-            get

-            {

-                return this.Client == null ? null : this.Client.State;

-            }

-        }

-

-        /// <summary>

-        /// Connects to ZooKeeper server

-        /// </summary>

-        /// <param name="watcher">

-        /// The watcher to be installed in ZooKeeper.

-        /// </param>

-        public void Connect(IWatcher watcher)

-        {

-            this.EnsuresNotDisposed();

-            lock (this.syncLock)

-            {

-                if (this.Client != null)

-                {

-                    throw new InvalidOperationException("ZooKeeper client has already been started");

-                }

-

-                try

-                {

-                    Logger.Debug("Starting ZK client");

-                    this.Client = new ZooKeeper(this.Servers, new TimeSpan(0, 0, 0, 0, this.SessionTimeout), watcher);

-                }

-                catch (IOException exc)

-                {

-                    throw new ZooKeeperException("Unable to connect to " + this.Servers, exc);

-                }

-            }

-        }

-

-        /// <summary>

-        /// Deletes znode for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        public void Delete(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            this.Client.Delete(path, -1);

-        }

-

-        /// <summary>

-        /// Checks whether znode for a given path exists.

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether the watcher should be reinstalled in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Result of check

-        /// </returns>

-        public bool Exists(string path, bool watch)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            return this.Client.Exists(path, watch) != null;

-        }

-

-        /// <summary>

-        /// Creates znode using given create mode for given path and writes given data to it

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <param name="mode">

-        /// The create mode.

-        /// </param>

-        /// <returns>

-        /// The created znode's path

-        /// </returns>

-        public string Create(string path, byte[] data, CreateMode mode)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            return this.Client.Create(path, data, Ids.OPEN_ACL_UNSAFE, mode);

-        }

-

-        /// <summary>

-        /// Gets all children for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether the watcher should be reinstalled in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Children

-        /// </returns>

-        public IList<string> GetChildren(string path, bool watch)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            return this.Client.GetChildren(path, watch);

-        }

-

-        /// <summary>

-        /// Fetches data from a given path in ZooKeeper

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="stats">

-        /// The statistics.

-        /// </param>

-        /// <param name="watch">

-        /// Indicates whether the watcher should be reinstalled in ZooKeeper.

-        /// </param>

-        /// <returns>

-        /// Data

-        /// </returns>

-        public byte[] ReadData(string path, Stat stats, bool watch)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            return this.Client.GetData(path, watch, stats);

-        }

-

-        /// <summary>

-        /// Writes data for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        public void WriteData(string path, byte[] data)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            this.WriteData(path, data, -1);

-        }

-

-        /// <summary>

-        /// Writes data for a given path

-        /// </summary>

-        /// <param name="path">

-        /// The given path.

-        /// </param>

-        /// <param name="data">

-        /// The data to write.

-        /// </param>

-        /// <param name="version">

-        /// Expected version of data

-        /// </param>

-        public void WriteData(string path, byte[] data, int version)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            this.Client.SetData(path, data, version);

-        }

-

-        /// <summary>

-        /// Gets the time when the znode at a given path was created

-        /// </summary>

-        /// <param name="path">

-        /// The path.

-        /// </param>

-        /// <returns>

-        /// The znode's creation time, or -1 if it does not exist

-        /// </returns>

-        public long GetCreateTime(string path)

-        {

-            Guard.NotNullNorEmpty(path, "path");

-

-            this.EnsuresNotDisposed();

-            Stat stats = this.Client.Exists(path, false);

-            return stats != null ? stats.Ctime : -1;

-        }

-

-        /// <summary>

-        /// Closes underlying ZooKeeper client

-        /// </summary>

-        public void Dispose()

-        {

-            if (this.disposed)

-            {

-                return;

-            }

-

-            lock (this.shuttingDownLock)

-            {

-                if (this.disposed)

-                {

-                    return;

-                }

-

-                this.disposed = true;

-            }

-            

-            try

-            {

-                if (this.Client != null)

-                {

-                    Logger.Debug("Closing ZooKeeper client connected to " + this.Servers);

-                    this.Client.Dispose();

-                    this.Client = null;

-                    Logger.Debug("ZooKeeper client connection closed");

-                }

-            }

-            catch (Exception exc)

-            {

-                Logger.Warn("Ignoring unexpected errors on closing", exc);

-            }

-        }

-

-        /// <summary>

-        /// Ensures object wasn't disposed

-        /// </summary>

-        private void EnsuresNotDisposed()

-        {

-            if (this.disposed)

-            {

-                throw new ObjectDisposedException(this.GetType().Name);

-            }

-        }

-    }

-}
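
Note: a hedged sketch of how the removed ZooKeeperConnection wrapper would be exercised. It assumes a reachable ZooKeeper server at localhost:2181, the ZooKeeperNet package referenced above, and a call site inside the Kafka.Client assembly (the type is internal); only members defined in the removed file are used.

    using Kafka.Client.ZooKeeperIntegration;
    using ZooKeeperNet;

    internal static class ConnectionSketch
    {
        public static void Main()
        {
            // "localhost:2181" is an assumed address; 30000 ms matches DefaultSessionTimeout above.
            var connection = new ZooKeeperConnection("localhost:2181", 30000);
            try
            {
                connection.Connect(null); // no watcher installed in this sketch

                if (!connection.Exists("/example", false))
                {
                    connection.Create("/example", new byte[] { 1, 2, 3 }, CreateMode.Persistent);
                }

                byte[] data = connection.ReadData("/example", null, false);
                System.Console.WriteLine("Read {0} bytes", data.Length);
            }
            finally
            {
                connection.Dispose();
            }
        }
    }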

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperStringSerializer.cs b/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperStringSerializer.cs
deleted file mode 100644
index daaf365..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.Client/ZooKeeperIntegration/ZooKeeperStringSerializer.cs
+++ /dev/null
@@ -1,72 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.ZooKeeperIntegration

-{

-    using System;

-    using System.Linq;

-    using System.Text;

-    using Kafka.Client.Utils;

-

-    /// <summary>

-    /// ZooKeeper stores data as byte arrays. This serializer is a bridge between that byte-array format

-    /// and higher-level objects.

-    /// </summary>

-    internal class ZooKeeperStringSerializer : IZooKeeperSerializer

-    {

-        public static readonly ZooKeeperStringSerializer Serializer = new ZooKeeperStringSerializer();

-

-        /// <summary>

-        /// Prevents a default instance of the <see cref="ZooKeeperStringSerializer"/> class from being created.

-        /// </summary>

-        private ZooKeeperStringSerializer()

-        {

-        }

-

-        /// <summary>

-        /// Serializes data using UTF-8 encoding

-        /// </summary>

-        /// <param name="obj">

-        /// The data to serialize

-        /// </param>

-        /// <returns>

-        /// Serialized data

-        /// </returns>

-        public byte[] Serialize(object obj)

-        {

-            Guard.NotNull(obj, "obj");

-            return Encoding.UTF8.GetBytes(obj.ToString());

-        }

-

-        /// <summary>

-        /// Deserializes data using UTF-8 encoding

-        /// </summary>

-        /// <param name="bytes">

-        /// The serialized data

-        /// </param>

-        /// <returns>

-        /// The deserialized data

-        /// </returns>

-        public object Deserialize(byte[] bytes)

-        {

-            Guard.NotNull(bytes, "bytes");

-            Guard.Greater(bytes.Count(), 0, "bytes");

-

-            return Encoding.UTF8.GetString(bytes);

-        }

-    }

-}
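
Note: the serializer removed above simply round-trips strings through UTF-8. A self-contained illustration of that round trip using only the BCL:

    using System;
    using System.Text;

    internal static class SerializerRoundTrip
    {
        public static void Main()
        {
            string original = "{ \"test\": 1 }";

            byte[] bytes = Encoding.UTF8.GetBytes(original);   // what Serialize(object) produces
            string restored = Encoding.UTF8.GetString(bytes);  // what Deserialize(byte[]) returns

            Console.WriteLine(restored == original);           // True
        }
    }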

diff --git a/trunk/clients/csharp/src/Kafka/Kafka.FxCop b/trunk/clients/csharp/src/Kafka/Kafka.FxCop
deleted file mode 100644
index 3ed3374..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.FxCop
+++ /dev/null
@@ -1,120 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- 
-    http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<?xml version="1.0" encoding="utf-8"?>
-<FxCopProject Version="10.0" Name="Kafka">
- <ProjectOptions>
-  <SharedProject>True</SharedProject>
-  <Stylesheet Apply="False">$(FxCopDir)\Xml\FxCopReport.xsl</Stylesheet>
-  <SaveMessages>
-   <Project Status="None" NewOnly="False" />
-   <Report Status="Active" NewOnly="False" />
-  </SaveMessages>
-  <ProjectFile Compress="True" DefaultTargetCheck="True" DefaultRuleCheck="True" SaveByRuleGroup="" Deterministic="True" />
-  <EnableMultithreadedLoad>True</EnableMultithreadedLoad>
-  <EnableMultithreadedAnalysis>True</EnableMultithreadedAnalysis>
-  <SourceLookup>True</SourceLookup>
-  <AnalysisExceptionsThreshold>10</AnalysisExceptionsThreshold>
-  <RuleExceptionsThreshold>1</RuleExceptionsThreshold>
-  <Spelling Locale="en-US" />
-  <OverrideRuleVisibilities>False</OverrideRuleVisibilities>
-  <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True" />
-  <SearchGlobalAssemblyCache>True</SearchGlobalAssemblyCache>
-  <DeadlockDetectionTimeout>120</DeadlockDetectionTimeout>
-  <IgnoreGeneratedCode>True</IgnoreGeneratedCode>
- </ProjectOptions>
- <Targets>
-  <Target Name="$(ProjectDir)/Kafka.Client/bin/Integration/Kafka.Client.dll" Analyze="True" AnalyzeAllChildren="True" />
- </Targets>
- <Rules>
-  <RuleFiles>
-   <RuleFile Name="$(FxCopDir)\Rules\DesignRules.dll" Enabled="True" AllRulesEnabled="False">
-    <Rule Name="AbstractTypesShouldNotHaveConstructors" Enabled="True" />
-    <Rule Name="AvoidEmptyInterfaces" Enabled="True" />
-    <Rule Name="AvoidExcessiveParametersOnGenericTypes" Enabled="True" />
-    <Rule Name="AvoidNamespacesWithFewTypes" Enabled="True" />
-    <Rule Name="AvoidOutParameters" Enabled="True" />
-    <Rule Name="CollectionsShouldImplementGenericInterface" Enabled="True" />
-    <Rule Name="ConsiderPassingBaseTypesAsParameters" Enabled="True" />
-    <Rule Name="DeclareEventHandlersCorrectly" Enabled="True" />
-    <Rule Name="DeclareTypesInNamespaces" Enabled="True" />
-    <Rule Name="DefaultParametersShouldNotBeUsed" Enabled="True" />
-    <Rule Name="DefineAccessorsForAttributeArguments" Enabled="True" />
-    <Rule Name="DoNotCatchGeneralExceptionTypes" Enabled="True" />
-    <Rule Name="DoNotDeclareProtectedMembersInSealedTypes" Enabled="True" />
-    <Rule Name="DoNotDeclareStaticMembersOnGenericTypes" Enabled="True" />
-    <Rule Name="DoNotDeclareVirtualMembersInSealedTypes" Enabled="True" />
-    <Rule Name="DoNotDeclareVisibleInstanceFields" Enabled="True" />
-    <Rule Name="DoNotExposeGenericLists" Enabled="True" />
-    <Rule Name="DoNotHideBaseClassMethods" Enabled="True" />
-    <Rule Name="DoNotNestGenericTypesInMemberSignatures" Enabled="True" />
-    <Rule Name="DoNotOverloadOperatorEqualsOnReferenceTypes" Enabled="True" />
-    <Rule Name="DoNotPassTypesByReference" Enabled="True" />
-    <Rule Name="DoNotRaiseExceptionsInUnexpectedLocations" Enabled="True" />
-    <Rule Name="EnumeratorsShouldBeStronglyTyped" Enabled="True" />
-    <Rule Name="EnumsShouldHaveZeroValue" Enabled="True" />
-    <Rule Name="EnumStorageShouldBeInt32" Enabled="True" />
-    <Rule Name="ExceptionsShouldBePublic" Enabled="True" />
-    <Rule Name="GenericMethodsShouldProvideTypeParameter" Enabled="True" />
-    <Rule Name="ICollectionImplementationsHaveStronglyTypedMembers" Enabled="True" />
-    <Rule Name="ImplementIDisposableCorrectly" Enabled="True" />
-    <Rule Name="ImplementStandardExceptionConstructors" Enabled="True" />
-    <Rule Name="IndexersShouldNotBeMultidimensional" Enabled="True" />
-    <Rule Name="InterfaceMethodsShouldBeCallableByChildTypes" Enabled="True" />
-    <Rule Name="ListsAreStronglyTyped" Enabled="True" />
-    <Rule Name="MarkAssembliesWithAssemblyVersion" Enabled="True" />
-    <Rule Name="MarkAssembliesWithClsCompliant" Enabled="True" />
-    <Rule Name="MarkAssembliesWithComVisible" Enabled="True" />
-    <Rule Name="MarkAttributesWithAttributeUsage" Enabled="True" />
-    <Rule Name="MarkEnumsWithFlags" Enabled="True" />
-    <Rule Name="MembersShouldNotExposeCertainConcreteTypes" Enabled="True" />
-    <Rule Name="MovePInvokesToNativeMethodsClass" Enabled="True" />
-    <Rule Name="NestedTypesShouldNotBeVisible" Enabled="True" />
-    <Rule Name="OverloadOperatorEqualsOnOverloadingAddAndSubtract" Enabled="True" />
-    <Rule Name="OverrideMethodsOnComparableTypes" Enabled="True" />
-    <Rule Name="PropertiesShouldNotBeWriteOnly" Enabled="True" />
-    <Rule Name="ProvideObsoleteAttributeMessage" Enabled="True" />
-    <Rule Name="ReplaceRepetitiveArgumentsWithParamsArray" Enabled="True" />
-    <Rule Name="StaticHolderTypesShouldBeSealed" Enabled="True" />
-    <Rule Name="StaticHolderTypesShouldNotHaveConstructors" Enabled="True" />
-    <Rule Name="StringUriOverloadsCallSystemUriOverloads" Enabled="True" />
-    <Rule Name="TypesShouldNotExtendCertainBaseTypes" Enabled="True" />
-    <Rule Name="TypesThatOwnDisposableFieldsShouldBeDisposable" Enabled="True" />
-    <Rule Name="TypesThatOwnNativeResourcesShouldBeDisposable" Enabled="True" />
-    <Rule Name="UriParametersShouldNotBeStrings" Enabled="True" />
-    <Rule Name="UriPropertiesShouldNotBeStrings" Enabled="True" />
-    <Rule Name="UriReturnValuesShouldNotBeStrings" Enabled="True" />
-    <Rule Name="UseEventsWhereAppropriate" Enabled="True" />
-    <Rule Name="UseGenericEventHandlerInstances" Enabled="True" />
-    <Rule Name="UseGenericsWhereAppropriate" Enabled="True" />
-    <Rule Name="UseIntegralOrStringArgumentForIndexers" Enabled="True" />
-    <Rule Name="UsePropertiesWhereAppropriate" Enabled="True" />
-   </RuleFile>
-   <RuleFile Name="$(FxCopDir)\Rules\GlobalizationRules.dll" Enabled="True" AllRulesEnabled="True" />
-   <RuleFile Name="$(FxCopDir)\Rules\InteroperabilityRules.dll" Enabled="True" AllRulesEnabled="True" />
-   <RuleFile Name="$(FxCopDir)\Rules\MobilityRules.dll" Enabled="True" AllRulesEnabled="True" />
-   <RuleFile Name="$(FxCopDir)\Rules\NamingRules.dll" Enabled="True" AllRulesEnabled="True" />
-   <RuleFile Name="$(FxCopDir)\Rules\PerformanceRules.dll" Enabled="True" AllRulesEnabled="True" />
-   <RuleFile Name="$(FxCopDir)\Rules\PortabilityRules.dll" Enabled="True" AllRulesEnabled="True" />
-   <RuleFile Name="$(FxCopDir)\Rules\SecurityRules.dll" Enabled="True" AllRulesEnabled="True" />
-   <RuleFile Name="$(FxCopDir)\Rules\SecurityTransparencyRules.dll" Enabled="True" AllRulesEnabled="True" />
-   <RuleFile Name="$(FxCopDir)\Rules\UsageRules.dll" Enabled="True" AllRulesEnabled="True" />
-  </RuleFiles>
-  <Groups />
-  <Settings />
- </Rules>
- <FxCopReport Version="10.0" />
-</FxCopProject>
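
Note: the FxCop project above only selects which rules run; individual findings are normally silenced at the call site with SuppressMessage. A hedged sketch follows; the rule id CA1031 and the "Microsoft.Design" category are standard FxCop identifiers, not something defined in this repository.

    using System;
    using System.Diagnostics.CodeAnalysis;

    internal static class SuppressionSketch
    {
        // Silences the DoNotCatchGeneralExceptionTypes rule for this one method.
        [SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes",
            Justification = "Shutdown errors are logged and ignored by design.")]
        public static void ShutdownQuietly(Action shutdown)
        {
            try
            {
                shutdown();
            }
            catch (Exception)
            {
                // Intentionally swallowed, mirroring the Dispose handlers above.
            }
        }
    }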
diff --git a/trunk/clients/csharp/src/Kafka/Kafka.sln b/trunk/clients/csharp/src/Kafka/Kafka.sln
deleted file mode 100644
index b457569..0000000
--- a/trunk/clients/csharp/src/Kafka/Kafka.sln
+++ /dev/null
@@ -1,61 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- 
-    http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-

-Microsoft Visual Studio Solution File, Format Version 11.00

-# Visual Studio 2010

-Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Kafka.Client", "Kafka.Client\Kafka.Client.csproj", "{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}"

-EndProject

-Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Kafka.Client.Tests", "Tests\Kafka.Client.Tests\Kafka.Client.Tests.csproj", "{9BA1A0BF-B207-4A11-8883-5F64B113C07D}"

-EndProject

-Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Tests", "Tests", "{06FD20F1-CE06-430E-AF6E-2EBECE6E47B3}"

-EndProject

-Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Kafka.Client.IntegrationTests", "Tests\Kafka.Client.IntegrationTests\Kafka.Client.IntegrationTests.csproj", "{AF29C330-49BD-4648-B692-882E922C435B}"

-EndProject

-Global

-	GlobalSection(SolutionConfigurationPlatforms) = preSolution

-		Debug|Any CPU = Debug|Any CPU

-		Integration|Any CPU = Integration|Any CPU

-		Release|Any CPU = Release|Any CPU

-	EndGlobalSection

-	GlobalSection(ProjectConfigurationPlatforms) = postSolution

-		{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}.Debug|Any CPU.ActiveCfg = Debug|Any CPU

-		{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}.Debug|Any CPU.Build.0 = Debug|Any CPU

-		{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}.Integration|Any CPU.ActiveCfg = Integration|Any CPU

-		{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}.Integration|Any CPU.Build.0 = Integration|Any CPU

-		{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}.Release|Any CPU.ActiveCfg = Release|Any CPU

-		{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}.Release|Any CPU.Build.0 = Release|Any CPU

-		{9BA1A0BF-B207-4A11-8883-5F64B113C07D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU

-		{9BA1A0BF-B207-4A11-8883-5F64B113C07D}.Debug|Any CPU.Build.0 = Debug|Any CPU

-		{9BA1A0BF-B207-4A11-8883-5F64B113C07D}.Integration|Any CPU.ActiveCfg = Integration|Any CPU

-		{9BA1A0BF-B207-4A11-8883-5F64B113C07D}.Integration|Any CPU.Build.0 = Integration|Any CPU

-		{9BA1A0BF-B207-4A11-8883-5F64B113C07D}.Release|Any CPU.ActiveCfg = Release|Any CPU

-		{9BA1A0BF-B207-4A11-8883-5F64B113C07D}.Release|Any CPU.Build.0 = Release|Any CPU

-		{AF29C330-49BD-4648-B692-882E922C435B}.Debug|Any CPU.ActiveCfg = Debug|Any CPU

-		{AF29C330-49BD-4648-B692-882E922C435B}.Debug|Any CPU.Build.0 = Debug|Any CPU

-		{AF29C330-49BD-4648-B692-882E922C435B}.Integration|Any CPU.ActiveCfg = Integration|Any CPU

-		{AF29C330-49BD-4648-B692-882E922C435B}.Integration|Any CPU.Build.0 = Integration|Any CPU

-		{AF29C330-49BD-4648-B692-882E922C435B}.Release|Any CPU.ActiveCfg = Release|Any CPU

-		{AF29C330-49BD-4648-B692-882E922C435B}.Release|Any CPU.Build.0 = Release|Any CPU

-	EndGlobalSection

-	GlobalSection(SolutionProperties) = preSolution

-		HideSolutionNode = FALSE

-	EndGlobalSection

-	GlobalSection(NestedProjects) = preSolution

-		{9BA1A0BF-B207-4A11-8883-5F64B113C07D} = {06FD20F1-CE06-430E-AF6E-2EBECE6E47B3}

-		{AF29C330-49BD-4648-B692-882E922C435B} = {06FD20F1-CE06-430E-AF6E-2EBECE6E47B3}

-	EndGlobalSection

-EndGlobal

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/App.config b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/App.config
deleted file mode 100644
index 7e4c9a2..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/App.config
+++ /dev/null
@@ -1,43 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- 
-    http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<?xml version="1.0" encoding="utf-8" ?>

-<configuration>

-    <configSections>

-        <section

-            name="kafkaClientConfiguration"

-            type="Kafka.Client.Cfg.KafkaClientConfiguration, Kafka.Client"

-            allowLocation="true"

-            allowDefinition="Everywhere"

-      />

-        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net, Version=1.2.10.0, Culture=neutral, PublicKeyToken=1b44e1d426115821" />

-    </configSections>

-    <log4net configSource="Log4Net.config" />

-  <kafkaClientConfiguration>

-    <!--<kafkaServer address="192.168.3.251" port="9092"></kafkaServer>-->

-    <kafkaServer address="192.168.1.39" port="9092"></kafkaServer>

-    <consumer numberOfTries="2" groupId="testGroup" timeout="10000" autoOffsetReset="smallest" autoCommit="true" autoCommitIntervalMs="1000" fetchSize="307200" backOffIncrementMs="2000"/>

-    <brokerPartitionInfos>

-      <add id="0" address="192.168.1.39" port="9092" />

-      <add id="1" address="192.168.1.39" port="9101" />

-      <add id="2" address="192.168.1.39" port="9102" />

-      <!--<add id="0" address="192.168.3.251" port="9092" />-->

-      <!--<add id="2" address="192.168.3.251" port="9092" />-->

-    </brokerPartitionInfos>

-    <zooKeeperServers addressList="192.168.1.39:2181" sessionTimeout="30000" connectionTimeout="3000"></zooKeeperServers>

-    <!--<zooKeeperServers addressList="192.168.3.251:2181" sessionTimeout="30000"></zooKeeperServers>-->

-  </kafkaClientConfiguration>

-</configuration>
\ No newline at end of file
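Note: a minimal sketch of how the custom section declared above would be loaded through the standard System.Configuration API. The cast target KafkaClientConfiguration is taken from the type attribute in the config; its members are not part of this diff, so only the section lookup is shown.

    using System;
    using System.Configuration; // requires a reference to System.Configuration.dll
    using Kafka.Client.Cfg;

    internal static class ConfigSketch
    {
        public static void Main()
        {
            // Resolves the <kafkaClientConfiguration> section registered in <configSections>.
            var section = ConfigurationManager.GetSection("kafkaClientConfiguration") as KafkaClientConfiguration;
            Console.WriteLine(section != null
                ? "kafkaClientConfiguration section loaded"
                : "section missing from App.config");
        }
    }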
diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Config/Debug/App.config b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Config/Debug/App.config
deleted file mode 100644
index 7e4c9a2..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Config/Debug/App.config
+++ /dev/null
@@ -1,43 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- 
-    http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<?xml version="1.0" encoding="utf-8" ?>

-<configuration>

-    <configSections>

-        <section

-            name="kafkaClientConfiguration"

-            type="Kafka.Client.Cfg.KafkaClientConfiguration, Kafka.Client"

-            allowLocation="true"

-            allowDefinition="Everywhere"

-      />

-        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net, Version=1.2.10.0, Culture=neutral, PublicKeyToken=1b44e1d426115821" />

-    </configSections>

-    <log4net configSource="Log4Net.config" />

-  <kafkaClientConfiguration>

-    <!--<kafkaServer address="192.168.3.251" port="9092"></kafkaServer>-->

-    <kafkaServer address="192.168.1.39" port="9092"></kafkaServer>

-    <consumer numberOfTries="2" groupId="testGroup" timeout="10000" autoOffsetReset="smallest" autoCommit="true" autoCommitIntervalMs="1000" fetchSize="307200" backOffIncrementMs="2000"/>

-    <brokerPartitionInfos>

-      <add id="0" address="192.168.1.39" port="9092" />

-      <add id="1" address="192.168.1.39" port="9101" />

-      <add id="2" address="192.168.1.39" port="9102" />

-      <!--<add id="0" address="192.168.3.251" port="9092" />-->

-      <!--<add id="2" address="192.168.3.251" port="9092" />-->

-    </brokerPartitionInfos>

-    <zooKeeperServers addressList="192.168.1.39:2181" sessionTimeout="30000" connectionTimeout="3000"></zooKeeperServers>

-    <!--<zooKeeperServers addressList="192.168.3.251:2181" sessionTimeout="30000"></zooKeeperServers>-->

-  </kafkaClientConfiguration>

-</configuration>
\ No newline at end of file
diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Config/Integration/App.config b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Config/Integration/App.config
deleted file mode 100644
index a0b764f..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Config/Integration/App.config
+++ /dev/null
@@ -1,37 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- 
-    http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<?xml version="1.0" encoding="utf-8" ?>

-<configuration>

-  <configSections>

-    <section

-        name="kafkaClientConfiguration"

-        type="Kafka.Client.Cfg.KafkaClientConfiguration, Kafka.Client"

-        allowLocation="true"

-        allowDefinition="Everywhere"

-      />

-  </configSections>

-  <kafkaClientConfiguration>

-    <kafkaServer address="192.168.1.39" port="9092"></kafkaServer>

-    <consumer numberOfTries="2" groupId="testGroup" timeout="10000" autoOffsetReset="smallest" autoCommit="true" autoCommitIntervalMs="1000" fetchSize="307200" backOffIncrementMs="2000"/>

-    <brokerPartitionInfos>

-      <add id="0" address="192.168.1.39" port="9092" />

-      <add id="1" address="192.168.1.39" port="9101" />

-      <add id="2" address="192.168.1.39" port="9102" />

-    </brokerPartitionInfos>

-    <zooKeeperServers addressList="192.168.1.39:2181" sessionTimeout="30000" connectionTimeout="3000"></zooKeeperServers>

-  </kafkaClientConfiguration>

-</configuration>

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ConsumerRebalancingTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ConsumerRebalancingTests.cs
deleted file mode 100644
index 09a4b4a..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ConsumerRebalancingTests.cs
+++ /dev/null
@@ -1,236 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System.Collections.Generic;

-    using System.Linq;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration;

-    using NUnit.Framework;

-

-    [TestFixture]

-    public class ConsumerRebalancingTests : IntegrationFixtureBase

-    {

-        [Test]

-        public void ConsumerPorformsRebalancingOnStart()

-        {

-            var config = this.ZooKeeperBasedConsumerConfig;

-            using (var consumerConnector = new ZookeeperConsumerConnector(config, true))

-            {

-                var client = ReflectionHelper.GetInstanceField<ZooKeeperClient>("zkClient", consumerConnector);

-                Assert.IsNotNull(client);

-                client.DeleteRecursive("/consumers/group1");

-                var topicCount = new Dictionary<string, int> { { "test", 1 } };

-                consumerConnector.CreateMessageStreams(topicCount);

-                WaitUntillIdle(client, 1000);

-                IList<string> children = client.GetChildren("/consumers", false);

-                Assert.That(children, Is.Not.Null.And.Not.Empty);

-                Assert.That(children, Contains.Item("group1"));

-                children = client.GetChildren("/consumers/group1", false);

-                Assert.That(children, Is.Not.Null.And.Not.Empty);

-                Assert.That(children, Contains.Item("ids"));

-                Assert.That(children, Contains.Item("owners"));

-                children = client.GetChildren("/consumers/group1/ids", false);

-                Assert.That(children, Is.Not.Null.And.Not.Empty);

-                string consumerId = children[0];

-                children = client.GetChildren("/consumers/group1/owners", false);

-                Assert.That(children, Is.Not.Null.And.Not.Empty);

-                Assert.That(children.Count, Is.EqualTo(1));

-                Assert.That(children, Contains.Item("test"));

-                children = client.GetChildren("/consumers/group1/owners/test", false);

-                Assert.That(children, Is.Not.Null.And.Not.Empty);

-                Assert.That(children.Count, Is.EqualTo(2));

-                string partId = children[0];

-                var data = client.ReadData<string>("/consumers/group1/owners/test/" + partId);

-                Assert.That(data, Is.Not.Null.And.Not.Empty);

-                Assert.That(data, Contains.Substring(consumerId));

-                data = client.ReadData<string>("/consumers/group1/ids/" + consumerId);

-                Assert.That(data, Is.Not.Null.And.Not.Empty);

-                Assert.That(data, Is.EqualTo("{ \"test\": 1 }"));

-            }

-

-            using (var client = new ZooKeeperClient(config.ZooKeeper.ZkConnect, config.ZooKeeper.ZkSessionTimeoutMs, ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                //// Should be created as ephemeral

-                IList<string> children = client.GetChildren("/consumers/group1/ids");

-                Assert.That(children, Is.Null.Or.Empty);

-                //// Should be created as ephemeral

-                children = client.GetChildren("/consumers/group1/owners/test");

-                Assert.That(children, Is.Null.Or.Empty);

-            }

-        }

-

-        [Test]

-        public void ConsumerPorformsRebalancingWhenNewBrokerIsAddedToTopic()

-        {

-            var config = this.ZooKeeperBasedConsumerConfig;

-            string brokerPath = ZooKeeperClient.DefaultBrokerIdsPath + "/" + 2345;

-            string brokerTopicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/test/" + 2345;

-            using (var consumerConnector = new ZookeeperConsumerConnector(config, true))

-            {

-                var client = ReflectionHelper.GetInstanceField<ZooKeeperClient>(

-                    "zkClient", consumerConnector);

-                Assert.IsNotNull(client);

-                client.DeleteRecursive("/consumers/group1");

-                var topicCount = new Dictionary<string, int> { { "test", 1 } };

-                consumerConnector.CreateMessageStreams(topicCount);

-                WaitUntillIdle(client, 1000);

-                IList<string> children = client.GetChildren("/consumers/group1/ids", false);

-                string consumerId = children[0];

-                client.CreateEphemeral(brokerPath, "192.168.1.39-1310449279123:192.168.1.39:9102");

-                client.CreateEphemeral(brokerTopicPath, 1);

-                WaitUntillIdle(client, 500);

-                children = client.GetChildren("/consumers/group1/owners/test", false);

-                Assert.That(children.Count, Is.EqualTo(3));

-                Assert.That(children, Contains.Item("2345-0"));

-                var data = client.ReadData<string>("/consumers/group1/owners/test/2345-0");

-                Assert.That(data, Is.Not.Null);

-                Assert.That(data, Contains.Substring(consumerId));

-                var topicRegistry =

-                    ReflectionHelper.GetInstanceField<IDictionary<string, IDictionary<Partition, PartitionTopicInfo>>>(

-                        "topicRegistry", consumerConnector);

-                Assert.That(topicRegistry, Is.Not.Null.And.Not.Empty);

-                Assert.That(topicRegistry.Count, Is.EqualTo(1));

-                var item = topicRegistry["test"];

-                Assert.That(item.Count, Is.EqualTo(3));

-                var broker = topicRegistry["test"].SingleOrDefault(x => x.Key.BrokerId == 2345);

-                Assert.That(broker, Is.Not.Null);

-            }

-        }

-

-        [Test]

-        public void ConsumerPorformsRebalancingWhenBrokerIsRemovedFromTopic()

-        {

-            var config = this.ZooKeeperBasedConsumerConfig;

-            string brokerPath = ZooKeeperClient.DefaultBrokerIdsPath + "/" + 2345;

-            string brokerTopicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/test/" + 2345;

-            using (var consumerConnector = new ZookeeperConsumerConnector(config, true))

-            {

-                var client = ReflectionHelper.GetInstanceField<ZooKeeperClient>("zkClient", consumerConnector);

-                Assert.IsNotNull(client);

-                client.DeleteRecursive("/consumers/group1");

-                var topicCount = new Dictionary<string, int> { { "test", 1 } };

-                consumerConnector.CreateMessageStreams(topicCount);

-                WaitUntillIdle(client, 1000);

-                client.CreateEphemeral(brokerPath, "192.168.1.39-1310449279123:192.168.1.39:9102");

-                client.CreateEphemeral(brokerTopicPath, 1);

-                WaitUntillIdle(client, 1000);

-                client.DeleteRecursive(brokerTopicPath);

-                WaitUntillIdle(client, 1000);

-

-                IList<string> children = client.GetChildren("/consumers/group1/owners/test", false);

-                Assert.That(children.Count, Is.EqualTo(2));

-                Assert.That(children, Has.None.EqualTo("2345-0"));

-                var topicRegistry = ReflectionHelper.GetInstanceField<IDictionary<string, IDictionary<Partition, PartitionTopicInfo>>>("topicRegistry", consumerConnector);

-                Assert.That(topicRegistry, Is.Not.Null.And.Not.Empty);

-                Assert.That(topicRegistry.Count, Is.EqualTo(1));

-                var item = topicRegistry["test"];

-                Assert.That(item.Count, Is.EqualTo(2));

-                Assert.That(item.Where(x => x.Value.BrokerId == 2345).Count(), Is.EqualTo(0));

-            }

-        }

-

-        [Test]

-        public void ConsumerPerformsRebalancingWhenNewConsumerIsAddedAndTheyDividePartitions()

-        {

-            var config = this.ZooKeeperBasedConsumerConfig;

-            IList<string> ids;

-            IList<string> owners;

-            using (var consumerConnector = new ZookeeperConsumerConnector(config, true))

-            {

-                var client = ReflectionHelper.GetInstanceField<ZooKeeperClient>(

-                    "zkClient", consumerConnector);

-                Assert.IsNotNull(client);

-                client.DeleteRecursive("/consumers/group1");

-                var topicCount = new Dictionary<string, int> { { "test", 1 } };

-                consumerConnector.CreateMessageStreams(topicCount);

-                WaitUntillIdle(client, 1000);

-                using (var consumerConnector2 = new ZookeeperConsumerConnector(config, true))

-                {

-                    consumerConnector2.CreateMessageStreams(topicCount);

-                    WaitUntillIdle(client, 1000);

-                    ids = client.GetChildren("/consumers/group1/ids", false).ToList();

-                    owners = client.GetChildren("/consumers/group1/owners/test", false).ToList();

-

-                    Assert.That(ids, Is.Not.Null.And.Not.Empty);

-                    Assert.That(ids.Count, Is.EqualTo(2));

-                    Assert.That(owners, Is.Not.Null.And.Not.Empty);

-                    Assert.That(owners.Count, Is.EqualTo(2));

-

-                    var data1 = client.ReadData<string>("/consumers/group1/owners/test/" + owners[0], false);

-                    var data2 = client.ReadData<string>("/consumers/group1/owners/test/" + owners[1], false);

-

-                    Assert.That(data1, Is.Not.Null.And.Not.Empty);

-                    Assert.That(data2, Is.Not.Null.And.Not.Empty);

-                    Assert.That(data1, Is.Not.EqualTo(data2));

-                    Assert.That(data1, Is.StringStarting(ids[0]).Or.StringStarting(ids[1]));

-                    Assert.That(data2, Is.StringStarting(ids[0]).Or.StringStarting(ids[1]));

-                }

-            }

-        }

-

-        [Test]

-        public void ConsumerPerformsRebalancingWhenConsumerIsRemovedAndTakesItsPartitions()

-        {

-            var config = this.ZooKeeperBasedConsumerConfig;

-            string basePath = "/consumers/" + config.GroupId;

-            IList<string> ids;

-            IList<string> owners;

-            using (var consumerConnector = new ZookeeperConsumerConnector(config, true))

-            {

-                var client = ReflectionHelper.GetInstanceField<ZooKeeperClient>("zkClient", consumerConnector);

-                Assert.IsNotNull(client);

-                client.DeleteRecursive("/consumers/group1");

-                var topicCount = new Dictionary<string, int> { { "test", 1 } };

-                consumerConnector.CreateMessageStreams(topicCount);

-                WaitUntillIdle(client, 1000);

-                using (var consumerConnector2 = new ZookeeperConsumerConnector(config, true))

-                {

-                    consumerConnector2.CreateMessageStreams(topicCount);

-                    WaitUntillIdle(client, 1000);

-                    ids = client.GetChildren("/consumers/group1/ids", false).ToList();

-                    owners = client.GetChildren("/consumers/group1/owners/test", false).ToList();

-                    Assert.That(ids, Is.Not.Null.And.Not.Empty);

-                    Assert.That(ids.Count, Is.EqualTo(2));

-                    Assert.That(owners, Is.Not.Null.And.Not.Empty);

-                    Assert.That(owners.Count, Is.EqualTo(2));

-                }

-

-                WaitUntillIdle(client, 1000);

-                ids = client.GetChildren("/consumers/group1/ids", false).ToList();

-                owners = client.GetChildren("/consumers/group1/owners/test", false).ToList();

-

-                Assert.That(ids, Is.Not.Null.And.Not.Empty);

-                Assert.That(ids.Count, Is.EqualTo(1));

-                Assert.That(owners, Is.Not.Null.And.Not.Empty);

-                Assert.That(owners.Count, Is.EqualTo(2));

-

-                var data1 = client.ReadData<string>("/consumers/group1/owners/test/" + owners[0], false);

-                var data2 = client.ReadData<string>("/consumers/group1/owners/test/" + owners[1], false);

-

-                Assert.That(data1, Is.Not.Null.And.Not.Empty);

-                Assert.That(data2, Is.Not.Null.And.Not.Empty);

-                Assert.That(data1, Is.EqualTo(data2));

-                Assert.That(data1, Is.StringStarting(ids[0]));

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ConsumerTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ConsumerTests.cs
deleted file mode 100644
index f2d05e4..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ConsumerTests.cs
+++ /dev/null
@@ -1,307 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System;

-    using System.Collections.Concurrent;

-    using System.Collections.Generic;

-    using System.Reflection;

-    using System.Text;

-    using System.Threading;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Exceptions;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Producers.Sync;

-    using Kafka.Client.Requests;

-    using NUnit.Framework;

-

-    [TestFixture]

-    public class ConsumerTests : IntegrationFixtureBase

-    {

-        [Test]

-        public void ConsumerConnectorIsCreatedConnectsDisconnectsAndShutsDown()

-        {

-            var config = this.ZooKeeperBasedConsumerConfig;

-            using (new ZookeeperConsumerConnector(config, true))

-            {

-            }

-        }

-

-        [Test]

-        public void SimpleSyncProducerSends2MessagesAndConsumerConnectorGetsThemBack()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-            var consumerConfig = this.ZooKeeperBasedConsumerConfig;

-

-            // first producing

-            string payload1 = "kafka 1.";

-            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);

-            var msg1 = new Message(payloadData1);

-

-            string payload2 = "kafka 2.";

-            byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);

-            var msg2 = new Message(payloadData2);

-

-            var producerRequest = new ProducerRequest(CurrentTestTopic, 0, new List<Message> { msg1, msg2 });

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                producer.Send(producerRequest);

-            }

-

-            // now consuming

-            var resultMessages = new List<Message>();

-            using (IConsumerConnector consumerConnector = new ZookeeperConsumerConnector(consumerConfig, true))

-            {

-                var topicCount = new Dictionary<string, int> { { CurrentTestTopic, 1 } };

-                var messages = consumerConnector.CreateMessageStreams(topicCount);

-                var sets = messages[CurrentTestTopic];

-                try

-                {

-                    foreach (var set in sets)

-                    {

-                        foreach (var message in set)

-                        {

-                            resultMessages.Add(message);

-                        }

-                    }

-                }

-                catch (ConsumerTimeoutException)

-                {

-                    // do nothing, this is expected

-                }

-            }

-

-            Assert.AreEqual(2, resultMessages.Count);

-            Assert.AreEqual(msg1.ToString(), resultMessages[0].ToString());

-            Assert.AreEqual(msg2.ToString(), resultMessages[1].ToString());

-        }

-

-        [Test]

-        public void OneMessageIsSentAndReceivedThenExceptionsWhenNoMessageThenAnotherMessageIsSentAndReceived()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-            var consumerConfig = this.ZooKeeperBasedConsumerConfig;

-

-            // first producing

-            string payload1 = "kafka 1.";

-            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);

-            var msg1 = new Message(payloadData1);

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                var producerRequest = new ProducerRequest(CurrentTestTopic, 0, new List<Message> { msg1 });

-                producer.Send(producerRequest);

-

-                // now consuming

-                using (IConsumerConnector consumerConnector = new ZookeeperConsumerConnector(consumerConfig, true))

-                {

-                    var topicCount = new Dictionary<string, int> { { CurrentTestTopic, 1 } };

-                    var messages = consumerConnector.CreateMessageStreams(topicCount);

-                    var sets = messages[CurrentTestTopic];

-                    KafkaMessageStream myStream = sets[0];

-                    var enumerator = myStream.GetEnumerator();

-

-                    Assert.IsTrue(enumerator.MoveNext());

-                    Assert.AreEqual(msg1.ToString(), enumerator.Current.ToString());

-

-                    Assert.Throws<ConsumerTimeoutException>(() => enumerator.MoveNext());

-

-                    Assert.Throws<IllegalStateException>(() => enumerator.MoveNext()); // iterator is in failed state

-

-                    enumerator.Reset();

-

-                    // producing again

-                    string payload2 = "kafka 2.";

-                    byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);

-                    var msg2 = new Message(payloadData2);

-

-                    var producerRequest2 = new ProducerRequest(CurrentTestTopic, 0, new List<Message> { msg2 });

-                    producer.Send(producerRequest2);

-                    Thread.Sleep(3000);

-

-                    Assert.IsTrue(enumerator.MoveNext());

-                    Assert.AreEqual(msg2.ToString(), enumerator.Current.ToString());

-                }

-            }

-        }

-

-        [Test]

-        public void ConsumerConnectorConsumesTwoDifferentTopics()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-            var consumerConfig = this.ZooKeeperBasedConsumerConfig;

-

-            string topic1 = CurrentTestTopic + "1";

-            string topic2 = CurrentTestTopic + "2";

-

-            // first producing

-            string payload1 = "kafka 1.";

-            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);

-            var msg1 = new Message(payloadData1);

-

-            string payload2 = "kafka 2.";

-            byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);

-            var msg2 = new Message(payloadData2);

-

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                var producerRequest1 = new ProducerRequest(topic1, 0, new List<Message> { msg1 });

-                producer.Send(producerRequest1);

-                var producerRequest2 = new ProducerRequest(topic2, 0, new List<Message> { msg2 });

-                producer.Send(producerRequest2);

-            }

-

-            // now consuming

-            var resultMessages1 = new List<Message>();

-            var resultMessages2 = new List<Message>();

-            using (IConsumerConnector consumerConnector = new ZookeeperConsumerConnector(consumerConfig, true))

-            {

-                var topicCount = new Dictionary<string, int> { { topic1, 1 }, { topic2, 1 } };

-                var messages = consumerConnector.CreateMessageStreams(topicCount);

-

-                Assert.IsTrue(messages.ContainsKey(topic1));

-                Assert.IsTrue(messages.ContainsKey(topic2));

-

-                var sets1 = messages[topic1];

-                try

-                {

-                    foreach (var set in sets1)

-                    {

-                        foreach (var message in set)

-                        {

-                            resultMessages1.Add(message);

-                        }

-                    }

-                }

-                catch (ConsumerTimeoutException)

-                {

-                    // do nothing, this is expected

-                }

-

-                var sets2 = messages[topic2];

-                try

-                {

-                    foreach (var set in sets2)

-                    {

-                        foreach (var message in set)

-                        {

-                            resultMessages2.Add(message);

-                        }

-                    }

-                }

-                catch (ConsumerTimeoutException)

-                {

-                    // do nothing, this is expected

-                }

-            }

-

-            Assert.AreEqual(1, resultMessages1.Count);

-            Assert.AreEqual(msg1.ToString(), resultMessages1[0].ToString());

-

-            Assert.AreEqual(1, resultMessages2.Count);

-            Assert.AreEqual(msg2.ToString(), resultMessages2[0].ToString());

-        }

-

-        [Test]

-        public void ConsumerConnectorReceivesAShutdownSignal()

-        {

-            var consumerConfig = this.ZooKeeperBasedConsumerConfig;

-

-            // now consuming

-            using (IConsumerConnector consumerConnector = new ZookeeperConsumerConnector(consumerConfig, true))

-            {

-                var topicCount = new Dictionary<string, int> { { CurrentTestTopic, 1 } };

-                var messages = consumerConnector.CreateMessageStreams(topicCount);

-

-                // putting the shutdown command into the queue

-                FieldInfo fi = typeof(ZookeeperConsumerConnector).GetField(

-                    "queues", BindingFlags.NonPublic | BindingFlags.Instance);

-                var value =

-                    (IDictionary<Tuple<string, string>, BlockingCollection<FetchedDataChunk>>)

-                    fi.GetValue(consumerConnector);

-                foreach (var topicConsumerQueueMap in value)

-                {

-                    topicConsumerQueueMap.Value.Add(ZookeeperConsumerConnector.ShutdownCommand);

-                }

-

-                var sets = messages[CurrentTestTopic];

-                var resultMessages = new List<Message>();

-

-                foreach (var set in sets)

-                {

-                    foreach (var message in set)

-                    {

-                        resultMessages.Add(message);

-                    }

-                }

-

-                Assert.AreEqual(0, resultMessages.Count);

-            }

-        }

-

-        [Test]

-        public void ProducersSendMessagesToDifferentPartitionsAndConsumerConnectorGetsThemBack()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-            var consumerConfig = this.ZooKeeperBasedConsumerConfig;

-

-            // first producing

-            string payload1 = "kafka 1.";

-            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);

-            var msg1 = new Message(payloadData1);

-

-            string payload2 = "kafka 2.";

-            byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);

-            var msg2 = new Message(payloadData2);

-

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                var producerRequest1 = new ProducerRequest(CurrentTestTopic, 0, new List<Message> { msg1 });

-                producer.Send(producerRequest1);

-                var producerRequest2 = new ProducerRequest(CurrentTestTopic, 1, new List<Message> { msg2 });

-                producer.Send(producerRequest2);

-            }

-

-            // now consuming

-            var resultMessages = new List<Message>();

-            using (IConsumerConnector consumerConnector = new ZookeeperConsumerConnector(consumerConfig, true))

-            {

-                var topicCount = new Dictionary<string, int> { { CurrentTestTopic, 1 } };

-                var messages = consumerConnector.CreateMessageStreams(topicCount);

-                var sets = messages[CurrentTestTopic];

-                try

-                {

-                    foreach (var set in sets)

-                    {

-                        foreach (var message in set)

-                        {

-                            resultMessages.Add(message);

-                        }

-                    }

-                }

-                catch (ConsumerTimeoutException)

-                {

-                    // do nothing, this is expected

-                }

-            }

-

-            Assert.AreEqual(2, resultMessages.Count);

-            Assert.AreEqual(msg1.ToString(), resultMessages[0].ToString());

-            Assert.AreEqual(msg2.ToString(), resultMessages[1].ToString());

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/IntegrationFixtureBase.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/IntegrationFixtureBase.cs
deleted file mode 100644
index f30a868..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/IntegrationFixtureBase.cs
+++ /dev/null
@@ -1,147 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System;

-    using System.Threading;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.ZooKeeperIntegration;

-    using NUnit.Framework;

-

-    public abstract class IntegrationFixtureBase

-    {

-        protected string CurrentTestTopic { get; set; }

-

-        protected ProducerConfiguration ConfigBasedSyncProdConfig

-        {

-            get

-            {

-                return ProducerConfiguration.Configure(ProducerConfiguration.DefaultSectionName);

-            }

-        }

-

-        protected SyncProducerConfiguration SyncProducerConfig1

-        {

-            get

-            {

-                var prodConfig = this.ConfigBasedSyncProdConfig;

-                return new SyncProducerConfiguration(

-                    prodConfig,

-                    prodConfig.Brokers[0].BrokerId,

-                    prodConfig.Brokers[0].Host,

-                    prodConfig.Brokers[0].Port);

-            }

-        }

-

-        protected SyncProducerConfiguration SyncProducerConfig2

-        {

-            get

-            {

-                var prodConfig = this.ConfigBasedSyncProdConfig;

-                return new SyncProducerConfiguration(

-                    prodConfig,

-                    prodConfig.Brokers[1].BrokerId,

-                    prodConfig.Brokers[1].Host,

-                    prodConfig.Brokers[1].Port);

-            }

-        }

-

-        protected SyncProducerConfiguration SyncProducerConfig3

-        {

-            get

-            {

-                var prodConfig = this.ConfigBasedSyncProdConfig;

-                return new SyncProducerConfiguration(

-                    prodConfig,

-                    prodConfig.Brokers[2].BrokerId,

-                    prodConfig.Brokers[2].Host,

-                    prodConfig.Brokers[2].Port);

-            }

-        }

-

-        protected ProducerConfiguration ZooKeeperBasedSyncProdConfig

-        {

-            get

-            {

-                return ProducerConfiguration.Configure(ProducerConfiguration.DefaultSectionName + 2);

-            }

-        }

-

-        protected AsyncProducerConfiguration AsyncProducerConfig1

-        {

-            get

-            {

-                var asyncUberConfig = ProducerConfiguration.Configure(ProducerConfiguration.DefaultSectionName + 3);

-                return new AsyncProducerConfiguration(

-                    asyncUberConfig,

-                    asyncUberConfig.Brokers[0].BrokerId,

-                    asyncUberConfig.Brokers[0].Host,

-                    asyncUberConfig.Brokers[0].Port);

-            }

-        }

-

-        protected ConsumerConfiguration ConsumerConfig1

-        {

-            get

-            {

-                return ConsumerConfiguration.Configure(ConsumerConfiguration.DefaultSection + 1);

-            }

-        }

-

-        protected ConsumerConfiguration ConsumerConfig2

-        {

-            get

-            {

-                return ConsumerConfiguration.Configure(ConsumerConfiguration.DefaultSection + 2);

-            }

-        }

-

-        protected ConsumerConfiguration ConsumerConfig3

-        {

-            get

-            {

-                return ConsumerConfiguration.Configure(ConsumerConfiguration.DefaultSection + 3);

-            }

-        }

-

-        protected ConsumerConfiguration ZooKeeperBasedConsumerConfig

-        {

-            get

-            {

-                return ConsumerConfiguration.Configure(ConsumerConfiguration.DefaultSection + 4);

-            }

-        }

-

-        [SetUp]

-        public void SetupCurrentTestTopic()

-        {

-            CurrentTestTopic = TestContext.CurrentContext.Test.Name + "_" + Guid.NewGuid();

-        }

-
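-        /// <summary>

-        /// Blocks until the ZooKeeper client has been idle for at least the given timeout (in milliseconds), sleeping between checks.

-        /// </summary>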

-        internal static void WaitUntillIdle(IZooKeeperClient client, int timeout)

-        {

-            Thread.Sleep(timeout);

-            int rest = client.IdleTime.HasValue ? timeout - client.IdleTime.Value : timeout;

-            while (rest > 0)

-            {

-                Thread.Sleep(rest);

-                rest = client.IdleTime.HasValue ? timeout - client.IdleTime.Value : timeout;

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Kafka.Client.IntegrationTests.csproj b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Kafka.Client.IntegrationTests.csproj
deleted file mode 100644
index 794c401..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Kafka.Client.IntegrationTests.csproj
+++ /dev/null
@@ -1,141 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>

-<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

-  <PropertyGroup>

-    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>

-    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>

-    <ProductVersion>8.0.30703</ProductVersion>

-    <SchemaVersion>2.0</SchemaVersion>

-    <ProjectGuid>{AF29C330-49BD-4648-B692-882E922C435B}</ProjectGuid>

-    <OutputType>Library</OutputType>

-    <AppDesignerFolder>Properties</AppDesignerFolder>

-    <RootNamespace>Kafka.Client.IntegrationTests</RootNamespace>

-    <AssemblyName>Kafka.Client.IntegrationTests</AssemblyName>

-    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>

-    <FileAlignment>512</FileAlignment>

-    <CodeContractsAssemblyMode>0</CodeContractsAssemblyMode>

-  </PropertyGroup>

-  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">

-    <DebugSymbols>true</DebugSymbols>

-    <DebugType>full</DebugType>

-    <Optimize>false</Optimize>

-    <OutputPath>bin\Debug\</OutputPath>

-    <DefineConstants>DEBUG;TRACE</DefineConstants>

-    <ErrorReport>prompt</ErrorReport>

-    <WarningLevel>4</WarningLevel>

-    <StyleCopTreatErrorsAsWarnings>true</StyleCopTreatErrorsAsWarnings>

-    <CodeContractsEnableRuntimeChecking>False</CodeContractsEnableRuntimeChecking>

-    <CodeContractsRuntimeOnlyPublicSurface>False</CodeContractsRuntimeOnlyPublicSurface>

-    <CodeContractsRuntimeThrowOnFailure>True</CodeContractsRuntimeThrowOnFailure>

-    <CodeContractsRuntimeCallSiteRequires>False</CodeContractsRuntimeCallSiteRequires>

-    <CodeContractsRuntimeSkipQuantifiers>False</CodeContractsRuntimeSkipQuantifiers>

-    <CodeContractsRunCodeAnalysis>False</CodeContractsRunCodeAnalysis>

-    <CodeContractsNonNullObligations>False</CodeContractsNonNullObligations>

-    <CodeContractsBoundsObligations>False</CodeContractsBoundsObligations>

-    <CodeContractsArithmeticObligations>False</CodeContractsArithmeticObligations>

-    <CodeContractsEnumObligations>False</CodeContractsEnumObligations>

-    <CodeContractsRedundantAssumptions>False</CodeContractsRedundantAssumptions>

-    <CodeContractsRunInBackground>True</CodeContractsRunInBackground>

-    <CodeContractsShowSquigglies>False</CodeContractsShowSquigglies>

-    <CodeContractsUseBaseLine>False</CodeContractsUseBaseLine>

-    <CodeContractsEmitXMLDocs>False</CodeContractsEmitXMLDocs>

-    <CodeContractsCustomRewriterAssembly />

-    <CodeContractsCustomRewriterClass />

-    <CodeContractsLibPaths />

-    <CodeContractsExtraRewriteOptions />

-    <CodeContractsExtraAnalysisOptions />

-    <CodeContractsBaseLineFile />

-    <CodeContractsCacheAnalysisResults>False</CodeContractsCacheAnalysisResults>

-    <CodeContractsRuntimeCheckingLevel>Full</CodeContractsRuntimeCheckingLevel>

-    <CodeContractsReferenceAssembly>%28none%29</CodeContractsReferenceAssembly>

-    <CodeContractsAnalysisWarningLevel>0</CodeContractsAnalysisWarningLevel>

-  </PropertyGroup>

-  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">

-    <DebugType>pdbonly</DebugType>

-    <Optimize>true</Optimize>

-    <OutputPath>bin\Release\</OutputPath>

-    <DefineConstants>TRACE</DefineConstants>

-    <ErrorReport>prompt</ErrorReport>

-    <WarningLevel>4</WarningLevel>

-    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

-  </PropertyGroup>

-  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Integration|AnyCPU' ">

-    <DebugSymbols>true</DebugSymbols>

-    <DebugType>full</DebugType>

-    <Optimize>false</Optimize>

-    <OutputPath>bin\Integration\</OutputPath>

-    <DefineConstants>DEBUG;TRACE</DefineConstants>

-    <ErrorReport>prompt</ErrorReport>

-    <WarningLevel>4</WarningLevel>

-    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

-  </PropertyGroup>

-  <PropertyGroup>

-    <StartupObject />

-  </PropertyGroup>

-  <ItemGroup>

-    <Reference Include="log4net">

-      <HintPath>..\..\..\..\lib\log4Net\log4net.dll</HintPath>

-    </Reference>

-    <Reference Include="nunit.framework, Version=2.5.9.10348, Culture=neutral, PublicKeyToken=96d09a1eb7f44a77, processorArchitecture=MSIL">

-      <SpecificVersion>False</SpecificVersion>

-      <HintPath>..\..\..\..\lib\nunit\2.5.9\nunit.framework.dll</HintPath>

-    </Reference>

-    <Reference Include="System" />

-    <Reference Include="System.Configuration" />

-    <Reference Include="System.Core" />

-    <Reference Include="Microsoft.CSharp" />

-    <Reference Include="ZooKeeperNet">

-      <HintPath>..\..\..\..\lib\zookeeper\ZooKeeperNet.dll</HintPath>

-    </Reference>

-  </ItemGroup>

-  <ItemGroup>

-    <Compile Include="CompressionTests.cs" />

-    <Compile Include="ConsumerRebalancingTests.cs" />

-    <Compile Include="ConsumerTests.cs" />

-    <Compile Include="IntegrationFixtureBase.cs" />

-    <Compile Include="KafkaIntegrationTest.cs" />

-    <Compile Include="MockAlwaysZeroPartitioner.cs" />

-    <Compile Include="ProducerTests.cs" />

-    <Compile Include="Properties\AssemblyInfo.cs" />

-    <Compile Include="TestHelper.cs" />

-    <Compile Include="TestsSetup.cs" />

-    <Compile Include="TestMultipleBrokersHelper.cs" />

-    <Compile Include="ZKBrokerPartitionInfoTests.cs" />

-    <Compile Include="ZooKeeperAwareProducerTests.cs" />

-    <Compile Include="ZooKeeperClientTests.cs" />

-    <Compile Include="ZooKeeperConnectionTests.cs" />

-  </ItemGroup>

-  <ItemGroup>

-    <ProjectReference Include="..\..\Kafka.Client\Kafka.Client.csproj">

-      <Project>{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}</Project>

-      <Name>Kafka.Client</Name>

-    </ProjectReference>

-  </ItemGroup>

-  <ItemGroup>

-    <None Include="..\..\..\..\Settings.StyleCop">

-      <Link>Settings.StyleCop</Link>

-    </None>

-    <None Include="App.config">

-      <SubType>Designer</SubType>

-    </None>

-    <None Include="Config\Debug\App.config">

-      <SubType>Designer</SubType>

-    </None>

-    <None Include="Log4Net.config">

-      <CopyToOutputDirectory>Always</CopyToOutputDirectory>

-    </None>

-    <None Include="Config\Integration\App.config">

-      <SubType>Designer</SubType>

-    </None>

-  </ItemGroup>

-  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

-  <Import Project="..\..\..\..\lib\StyleCop\Microsoft.StyleCop.Targets" />

-  <PropertyGroup>

-    <PostBuildEvent>

-    </PostBuildEvent>

-  </PropertyGroup>

-  <Target Name="BeforeBuild">

-    <Copy SourceFiles="Config\$(Configuration)\App.config" DestinationFiles="App.config" />

-  </Target>

-  <Target Name="AfterBuild">

-  </Target>

-</Project>

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/KafkaIntegrationTest.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/KafkaIntegrationTest.cs
deleted file mode 100644
index cc7b360..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/KafkaIntegrationTest.cs
+++ /dev/null
@@ -1,488 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Linq;

-    using System.Text;

-    using System.Threading;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Producers.Async;

-    using Kafka.Client.Producers.Sync;

-    using Kafka.Client.Requests;

-    using NUnit.Framework;

-

-    /// <summary>

-    /// Contains tests that go all the way to Kafka and back.

-    /// </summary>

-    [TestFixture]

-    public class KafkaIntegrationTest : IntegrationFixtureBase

-    {

-        /// <summary>

-        /// Maximum amount of time to wait when trying to get a specific test message from the Kafka server (in milliseconds)

-        /// </summary>

-        private static readonly int MaxTestWaitTimeInMiliseconds = 5000;

-

-        /// <summary>

-        /// Sends a pair of messages to Kafka.

-        /// </summary>

-        [Test]

-        public void ProducerSendsMessage()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-

-            string payload1 = "kafka 1.";

-            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);

-            var msg1 = new Message(payloadData1);

-

-            string payload2 = "kafka 2.";

-            byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);

-            var msg2 = new Message(payloadData2);

-

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                var producerRequest = new ProducerRequest(CurrentTestTopic, 0, new List<Message> { msg1, msg2 });

-                producer.Send(producerRequest);

-            }

-        }

-

-        /// <summary>

-        /// Sends a message with a long topic name to Kafka.

-        /// </summary>

-        [Test]

-        public void ProducerSendsMessageWithLongTopic()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-

-            var msg = new Message(Encoding.UTF8.GetBytes("test message"));

-            string topic = "ThisIsAVeryLongTopicThisIsAVeryLongTopicThisIsAVeryLongTopicThisIsAVeryLongTopicThisIsAVeryLongTopicThisIsAVeryLongTopic";

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                var producerRequest = new ProducerRequest(topic, 0, new List<Message> { msg });

-                producer.Send(producerRequest);

-            }

-        }

-

-        /// <summary>

-        /// Asynchronously sends many random messages to Kafka

-        /// </summary>

-        [Test]

-        public void AsyncProducerSendsManyLongRandomMessages()

-        {

-            var prodConfig = this.AsyncProducerConfig1;

-            List<Message> messages = GenerateRandomTextMessages(50);

-            using (var producer = new AsyncProducer(prodConfig))

-            {

-                producer.Send(CurrentTestTopic, 0, messages);

-            }

-        }

-

-        /// <summary>

-        /// Asynchronously sends a few short fixed messages to Kafka

-        /// </summary>

-        [Test]

-        public void AsyncProducerSendsFewShortFixedMessages()

-        {

-            var prodConfig = this.AsyncProducerConfig1;

-

-            var messages = new List<Message>

-                                         {

-                                             new Message(Encoding.UTF8.GetBytes("Async Test Message 1")),

-                                             new Message(Encoding.UTF8.GetBytes("Async Test Message 2")),

-                                             new Message(Encoding.UTF8.GetBytes("Async Test Message 3")),

-                                             new Message(Encoding.UTF8.GetBytes("Async Test Message 4"))

-                                         };

-

-            using (var producer = new AsyncProducer(prodConfig))

-            {

-                producer.Send(CurrentTestTopic, 0, messages);

-            }

-        }

-

-        /// <summary>

-        /// Asynchronously sends a few short fixed messages to Kafka in separate send actions

-        /// </summary>

-        [Test]

-        public void AsyncProducerSendsFewShortFixedMessagesInSeparateSendActions()

-        {

-            var prodConfig = this.AsyncProducerConfig1;

-

-            using (var producer = new AsyncProducer(prodConfig))

-            {

-                var req1 = new ProducerRequest(

-                    CurrentTestTopic,

-                    0,

-                    new List<Message> { new Message(Encoding.UTF8.GetBytes("Async Test Message 1")) });

-                producer.Send(req1);

-

-                var req2 = new ProducerRequest(

-                    CurrentTestTopic,

-                    0,

-                    new List<Message> { new Message(Encoding.UTF8.GetBytes("Async Test Message 2")) });

-                producer.Send(req2);

-

-                var req3 = new ProducerRequest(

-                    CurrentTestTopic,

-                    0,

-                    new List<Message> { new Message(Encoding.UTF8.GetBytes("Async Test Message 3")) });

-                producer.Send(req3);

-            }

-        }

-

-        [Test]

-        public void AsyncProducerSendsMessageWithCallbackClass()

-        {

-            var prodConfig = this.AsyncProducerConfig1;

-

-            var messages = new List<Message>

-                                         {

-                                             new Message(Encoding.UTF8.GetBytes("Async Test Message 1")),

-                                         };

-            var myHandler = new TestCallbackHandler();

-            using (var producer = new AsyncProducer(prodConfig, myHandler))

-            {

-                producer.Send(CurrentTestTopic, 0, messages);

-            }

-

-            Thread.Sleep(1000);

-            Assert.IsTrue(myHandler.WasRun);

-        }

-

-        [Test]

-        public void AsyncProducerSendsMessageWithCallback()

-        {

-            var prodConfig = this.AsyncProducerConfig1;

-

-            var messages = new List<Message>

-                                         {

-                                             new Message(Encoding.UTF8.GetBytes("Async Test Message 1")),

-                                         };

-            var myHandler = new TestCallbackHandler();

-            using (var producer = new AsyncProducer(prodConfig))

-            {

-                producer.Send(CurrentTestTopic, 0, messages, myHandler.Handle);

-            }

-

-            Thread.Sleep(1000);

-            Assert.IsTrue(myHandler.WasRun);

-        }

-

-        private class TestCallbackHandler : ICallbackHandler

-        {

-            public bool WasRun { get; private set; }

-

-            public void Handle(RequestContext<ProducerRequest> context)

-            {

-                WasRun = true;

-            }

-        }

-

-        /// <summary>

-        /// Send a multi-produce request to Kafka.

-        /// </summary>

-        [Test]

-        public void ProducerSendMultiRequest()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-

-            var requests = new List<ProducerRequest>

-            { 

-                new ProducerRequest(CurrentTestTopic, 0, new List<Message> { new Message(Encoding.UTF8.GetBytes("1: " + DateTime.UtcNow)) }),

-                new ProducerRequest(CurrentTestTopic, 0, new List<Message> { new Message(Encoding.UTF8.GetBytes("2: " + DateTime.UtcNow)) }),

-                new ProducerRequest(CurrentTestTopic, 0, new List<Message> { new Message(Encoding.UTF8.GetBytes("3: " + DateTime.UtcNow)) }),

-                new ProducerRequest(CurrentTestTopic, 0, new List<Message> { new Message(Encoding.UTF8.GetBytes("4: " + DateTime.UtcNow)) })

-            };

-

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                producer.MultiSend(requests);

-            }

-        }

-

-        /// <summary>

-        /// Generates messages for Kafka then gets them back.

-        /// </summary>

-        [Test]

-        public void ConsumerFetchMessage()

-        {

-            var consumerConfig = this.ConsumerConfig1;

-            ProducerSendsMessage();

-            Thread.Sleep(1000);

-            IConsumer consumer = new Consumer(consumerConfig);

-            var request = new FetchRequest(CurrentTestTopic, 0, 0);

-            BufferedMessageSet response = consumer.Fetch(request);

-            Assert.NotNull(response);

-            int count = 0;

-            foreach (var message in response)

-            {

-                count++;

-                Console.WriteLine(message.Message);

-            }

-

-            Assert.AreEqual(2, count);

-        }

-

-        /// <summary>

-        /// Generates multiple messages for Kafka then gets them back.

-        /// </summary>

-        [Test]

-        public void ConsumerMultiFetchGetsMessage()

-        {

-            var config = this.ConsumerConfig1;

-

-            ProducerSendMultiRequest();

-            Thread.Sleep(2000);

-            IConsumer cons = new Consumer(config);

-            var request = new MultiFetchRequest(new List<FetchRequest>

-            {

-                new FetchRequest(CurrentTestTopic, 0, 0),

-                new FetchRequest(CurrentTestTopic, 0, 0),

-                new FetchRequest(CurrentTestTopic, 0, 0)

-            });

-

-            IList<BufferedMessageSet> response = cons.MultiFetch(request);

-            Assert.AreEqual(3, response.Count);

-            for (int ix = 0; ix < response.Count; ix++)

-            {

-                IEnumerable<Message> messageSet = response[ix].Messages;

-                Assert.AreEqual(4, messageSet.Count());

-                Console.WriteLine(string.Format("Request #{0}-->", ix));

-                foreach (Message msg in messageSet)

-                {

-                    Console.WriteLine(msg.ToString());

-                }

-            }

-        }

-

-        /// <summary>

-        /// Gets offsets from Kafka.

-        /// </summary>

-        [Test]

-        public void ConsumerGetsOffsets()

-        {

-            var consumerConfig = this.ConsumerConfig1;

-

-            var request = new OffsetRequest(CurrentTestTopic, 0, DateTime.Now.AddHours(-24).Ticks, 10);

-            IConsumer consumer = new Consumer(consumerConfig);

-            IList<long> list = consumer.GetOffsetsBefore(request);

-

-            foreach (long l in list)

-            {

-                Console.Out.WriteLine(l);

-            }

-        }

-

-        /// <summary>

-        /// Synchronous Producer sends a single simple message and then a consumer consumes it

-        /// </summary>

-        [Test]

-        public void ProducerSendsAndConsumerReceivesSingleSimpleMessage()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-            var consumerConfig = this.ConsumerConfig1;

-

-            var sourceMessage = new Message(Encoding.UTF8.GetBytes("test message"));

-            long currentOffset = TestHelper.GetCurrentKafkaOffset(CurrentTestTopic, consumerConfig);

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                var producerRequest = new ProducerRequest(CurrentTestTopic, 0, new List<Message> { sourceMessage });

-                producer.Send(producerRequest);

-            }

-

-            IConsumer consumer = new Consumer(consumerConfig);

-            var request = new FetchRequest(CurrentTestTopic, 0, currentOffset);

-            BufferedMessageSet response;

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            while (true)

-            {

-                Thread.Sleep(waitSingle);

-                response = consumer.Fetch(request);

-                if (response != null && response.Messages.Count() > 0)

-                {

-                    break;

-                }

-

-                totalWaitTimeInMiliseconds += waitSingle;

-                if (totalWaitTimeInMiliseconds >= MaxTestWaitTimeInMiliseconds)

-                {

-                    break;

-                }

-            }

-

-            Assert.NotNull(response);

-            Assert.AreEqual(1, response.Messages.Count());

-            Message resultMessage = response.Messages.First();

-            Assert.AreEqual(sourceMessage.ToString(), resultMessage.ToString());

-        }

-

-        /// <summary>

-        /// Asynchronous Producer sends a single simple message and then a consumer consumes it

-        /// </summary>

-        [Test]

-        public void AsyncProducerSendsAndConsumerReceivesSingleSimpleMessage()

-        {

-            var prodConfig = this.AsyncProducerConfig1;

-            var consumerConfig = this.ConsumerConfig1;

-

-            var sourceMessage = new Message(Encoding.UTF8.GetBytes("test message"));

-            using (var producer = new AsyncProducer(prodConfig))

-            {

-                var producerRequest = new ProducerRequest(CurrentTestTopic, 0, new List<Message> { sourceMessage });

-                producer.Send(producerRequest);

-            }

-

-            long currentOffset = TestHelper.GetCurrentKafkaOffset(CurrentTestTopic, consumerConfig);

-            IConsumer consumer = new Consumer(consumerConfig);

-            var request = new FetchRequest(CurrentTestTopic, 0, currentOffset);

-

-            BufferedMessageSet response;

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            while (true)

-            {

-                Thread.Sleep(waitSingle);

-                response = consumer.Fetch(request);

-                if (response != null && response.Messages.Count() > 0)

-                {

-                    break;

-                }

-

-                totalWaitTimeInMiliseconds += waitSingle;

-                if (totalWaitTimeInMiliseconds >= MaxTestWaitTimeInMiliseconds)

-                {

-                    break;

-                }

-            }

-

-            Assert.NotNull(response);

-            Assert.AreEqual(1, response.Messages.Count());

-            Message resultMessage = response.Messages.First();

-            Assert.AreEqual(sourceMessage.ToString(), resultMessage.ToString());

-        }

-

-        /// <summary>

-        /// Synchronous producer sends a multi-produce request and a consumer receives it from Kafka.

-        /// </summary>

-        [Test]

-        public void ProducerSendsAndConsumerReceivesMultiRequest()

-        {

-            var prodConfig = this.SyncProducerConfig1;

-            var consumerConfig = this.ConsumerConfig1;

-

-            string testTopic1 = CurrentTestTopic + "1";

-            string testTopic2 = CurrentTestTopic + "2";

-            string testTopic3 = CurrentTestTopic + "3";

-

-            var sourceMessage1 = new Message(Encoding.UTF8.GetBytes("1: TestMessage"));

-            var sourceMessage2 = new Message(Encoding.UTF8.GetBytes("2: TestMessage"));

-            var sourceMessage3 = new Message(Encoding.UTF8.GetBytes("3: TestMessage"));

-            var sourceMessage4 = new Message(Encoding.UTF8.GetBytes("4: TestMessage"));

-

-            var requests = new List<ProducerRequest>

-            { 

-                new ProducerRequest(testTopic1, 0, new List<Message> { sourceMessage1 }),

-                new ProducerRequest(testTopic1, 0, new List<Message> { sourceMessage2 }),

-                new ProducerRequest(testTopic2, 0, new List<Message> { sourceMessage3 }),

-                new ProducerRequest(testTopic3, 0, new List<Message> { sourceMessage4 })

-            };

-

-            long currentOffset1 = TestHelper.GetCurrentKafkaOffset(testTopic1, consumerConfig);

-            long currentOffset2 = TestHelper.GetCurrentKafkaOffset(testTopic2, consumerConfig);

-            long currentOffset3 = TestHelper.GetCurrentKafkaOffset(testTopic3, consumerConfig);

-

-            using (var producer = new SyncProducer(prodConfig))

-            {

-                producer.MultiSend(requests);

-            }

-

-            IConsumer consumer = new Consumer(consumerConfig);

-            var request = new MultiFetchRequest(new List<FetchRequest>

-            {

-                new FetchRequest(testTopic1, 0, currentOffset1),

-                new FetchRequest(testTopic2, 0, currentOffset2),

-                new FetchRequest(testTopic3, 0, currentOffset3)

-            });

-            IList<BufferedMessageSet> messageSets;

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            while (true)

-            {

-                Thread.Sleep(waitSingle);

-                messageSets = consumer.MultiFetch(request);

-                if (messageSets.Count > 2 && messageSets[0].Messages.Count() > 0 && messageSets[1].Messages.Count() > 0 && messageSets[2].Messages.Count() > 0)

-                {

-                    break;

-                }

-

-                totalWaitTimeInMiliseconds += waitSingle;

-                if (totalWaitTimeInMiliseconds >= MaxTestWaitTimeInMiliseconds)

-                {

-                    break;

-                }

-            }

-

-            Assert.AreEqual(3, messageSets.Count);

-            Assert.AreEqual(2, messageSets[0].Messages.Count());

-            Assert.AreEqual(1, messageSets[1].Messages.Count());

-            Assert.AreEqual(1, messageSets[2].Messages.Count());

-            Assert.AreEqual(sourceMessage1.ToString(), messageSets[0].Messages.First().ToString());

-            Assert.AreEqual(sourceMessage2.ToString(), messageSets[0].Messages.Skip(1).First().ToString());

-            Assert.AreEqual(sourceMessage3.ToString(), messageSets[1].Messages.First().ToString());

-            Assert.AreEqual(sourceMessage4.ToString(), messageSets[2].Messages.First().ToString());

-        }

-

-        /// <summary>

-        /// Generates a random list of text messages.

-        /// </summary>

-        /// <param name="numberOfMessages">The number of messages to generate.</param>

-        /// <returns>A list of random text messages.</returns>

-        private static List<Message> GenerateRandomTextMessages(int numberOfMessages)

-        {

-            var messages = new List<Message>();

-            for (int ix = 0; ix < numberOfMessages; ix++)

-            {

-                ////messages.Add(new Message(GenerateRandomBytes(10000)));

-                messages.Add(new Message(Encoding.UTF8.GetBytes(GenerateRandomMessage(10000))));

-            }

-

-            return messages;

-        }

-

-        /// <summary>

-        /// Generate a random message text.

-        /// </summary>

-        /// <param name="length">Length of the message string.</param>

-        /// <returns>Random message string.</returns>

-        private static string GenerateRandomMessage(int length)

-        {

-            var builder = new StringBuilder();

-            var random = new Random();

-            for (int i = 0; i < length; i++)

-            {
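-                // 65 + floor(26 * random) yields ASCII codes 65-90, i.e. a random uppercase letter 'A'..'Z'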

-                char ch = Convert.ToChar(Convert.ToInt32(

-                    Math.Floor((26 * random.NextDouble()) + 65)));

-                builder.Append(ch);

-            }

-

-            return builder.ToString();

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Log4Net.config b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Log4Net.config
deleted file mode 100644
index db7bd16..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Log4Net.config
+++ /dev/null
@@ -1,55 +0,0 @@
-<?xml version="1.0" encoding="utf-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->

-<log4net>

-    <root>

-        <level value="ALL" />

-        <!--<appender-ref ref="ConsoleAppender" />-->

-        <appender-ref ref="KafkaFileAppender" />

-        <appender-ref ref="ZookeeperFileAppender" />

-    </root>

-    <!--<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">

-        <layout type="log4net.Layout.PatternLayout">

-            <conversionPattern value="%-5level - %message - %logger%newline" />

-        </layout>

-    </appender>-->

-    <appender name="KafkaFileAppender" type="log4net.Appender.FileAppender">

-        <filter type="log4net.Filter.LoggerMatchFilter">

-            <LoggerToMatch value="Kafka.Client."/>

-        </filter>

-        <filter type="log4net.Filter.DenyAllFilter" />

-        <file value="kafka-logs.txt" />

-        <appendToFile value="false" />

-        <layout type="log4net.Layout.PatternLayout">

-            <conversionPattern value="[%-5level] - [%logger] - %message%newline" />

-        </layout>

-    </appender>

-    <appender name="ZookeeperFileAppender" type="log4net.Appender.FileAppender">

-        <filter type="log4net.Filter.LoggerMatchFilter">

-            <LoggerToMatch value="ZooKeeperNet."/>

-        </filter>

-        <filter type="log4net.Filter.LoggerMatchFilter">

-            <LoggerToMatch value="Org.Apache.Zookeeper.Data."/>

-        </filter>

-        <filter type="log4net.Filter.DenyAllFilter" />

-        <file value="zookeeper-logs.txt" />

-        <appendToFile value="false" />

-        <layout type="log4net.Layout.PatternLayout">

-            <conversionPattern value="[%-5level] - [%logger] - %message%newline" />

-        </layout>

-    </appender>

-</log4net>
\ No newline at end of file
diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/MockAlwaysZeroPartitioner.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/MockAlwaysZeroPartitioner.cs
deleted file mode 100644
index 372b026..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/MockAlwaysZeroPartitioner.cs
+++ /dev/null
@@ -1,34 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-using Kafka.Client.Producers.Partitioning;

-

-namespace Kafka.Client.IntegrationTests

-{

-    using Kafka.Client.Producers.Partitioning;

-

-    /// <summary>

-    /// This mock partitioner always points to the first partition (the one with index = 0).

-    /// </summary>

-    public class MockAlwaysZeroPartitioner : IPartitioner<string>

-    {

-        public int Partition(string key, int numPartitions)

-        {

-            return 0;

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ProducerTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ProducerTests.cs
deleted file mode 100644
index 9e45157..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ProducerTests.cs
+++ /dev/null
@@ -1,226 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System.Collections.Generic;

-    using System.Linq;

-    using System.Text;

-    using System.Threading;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Producers;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Serialization;

-    using NUnit.Framework;

-

-    [TestFixture]

-    public class ProducerTests : IntegrationFixtureBase

-    {

-        /// <summary>

-        /// Maximum amount of time to wait when trying to get a specific test message from the Kafka server (in milliseconds)

-        /// </summary>

-        private readonly int maxTestWaitTimeInMiliseconds = 5000;

-

-        [Test]

-        public void ProducerSends1Message()

-        {

-            var prodConfig = this.ConfigBasedSyncProdConfig;

-

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            var originalMessage = new Message(Encoding.UTF8.GetBytes("TestData"));

-
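-            // snapshot the current offsets on all three brokers so the test can later detect which broker actually received the message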

-            var multipleBrokersHelper = new TestMultipleBrokersHelper(CurrentTestTopic);

-            multipleBrokersHelper.GetCurrentOffsets(

-                new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 });

-            using (var producer = new Producer(prodConfig))

-            {

-                var producerData = new ProducerData<string, Message>(

-                    CurrentTestTopic, new List<Message> { originalMessage });

-                producer.Send(producerData);

-                Thread.Sleep(waitSingle);

-            }

-

-            while (

-                !multipleBrokersHelper.CheckIfAnyBrokerHasChanged(

-                    new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 }))

-            {

-                totalWaitTimeInMiliseconds += waitSingle;

-                Thread.Sleep(waitSingle);

-                if (totalWaitTimeInMiliseconds > this.maxTestWaitTimeInMiliseconds)

-                {

-                    Assert.Fail("None of the brokers changed their offset after sending a message");

-                }

-            }

-

-            totalWaitTimeInMiliseconds = 0;

-

-            var consumerConfig = new ConsumerConfiguration(

-                multipleBrokersHelper.BrokerThatHasChanged.Host, multipleBrokersHelper.BrokerThatHasChanged.Port);

-            IConsumer consumer = new Consumer(consumerConfig);

-            var request1 = new FetchRequest(CurrentTestTopic, multipleBrokersHelper.PartitionThatHasChanged, multipleBrokersHelper.OffsetFromBeforeTheChange);

-            BufferedMessageSet response;

-            while (true)

-            {

-                Thread.Sleep(waitSingle);

-                response = consumer.Fetch(request1);

-                if (response != null && response.Messages.Count() > 0)

-                {

-                    break;

-                }

-

-                totalWaitTimeInMiliseconds += waitSingle;

-                if (totalWaitTimeInMiliseconds >= this.maxTestWaitTimeInMiliseconds)

-                {

-                    break;

-                }

-            }

-

-            Assert.NotNull(response);

-            Assert.AreEqual(1, response.Messages.Count());

-            Assert.AreEqual(originalMessage.ToString(), response.Messages.First().ToString());

-        }

-

-        [Test]

-        public void ProducerSends3Messages()

-        {

-            var prodConfig = this.ConfigBasedSyncProdConfig;

-

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            var originalMessage1 = new Message(Encoding.UTF8.GetBytes("TestData1"));

-            var originalMessage2 = new Message(Encoding.UTF8.GetBytes("TestData2"));

-            var originalMessage3 = new Message(Encoding.UTF8.GetBytes("TestData3"));

-            var originalMessageList = new List<Message> { originalMessage1, originalMessage2, originalMessage3 };

-

-            var multipleBrokersHelper = new TestMultipleBrokersHelper(CurrentTestTopic);

-            multipleBrokersHelper.GetCurrentOffsets(

-                new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 });

-            using (var producer = new Producer(prodConfig))

-            {

-                var producerData = new ProducerData<string, Message>(CurrentTestTopic, originalMessageList);

-                producer.Send(producerData);

-            }

-

-            Thread.Sleep(waitSingle);

-            while (

-                !multipleBrokersHelper.CheckIfAnyBrokerHasChanged(

-                    new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 }))

-            {

-                totalWaitTimeInMiliseconds += waitSingle;

-                Thread.Sleep(waitSingle);

-                if (totalWaitTimeInMiliseconds > this.maxTestWaitTimeInMiliseconds)

-                {

-                    Assert.Fail("None of the brokers changed their offset after sending a message");

-                }

-            }

-

-            totalWaitTimeInMiliseconds = 0;

-

-            var consumerConfig = new ConsumerConfiguration(

-                multipleBrokersHelper.BrokerThatHasChanged.Host, multipleBrokersHelper.BrokerThatHasChanged.Port);

-            IConsumer consumer = new Consumer(consumerConfig);

-            var request = new FetchRequest(CurrentTestTopic, multipleBrokersHelper.PartitionThatHasChanged, multipleBrokersHelper.OffsetFromBeforeTheChange);

-

-            BufferedMessageSet response;

-            while (true)

-            {

-                Thread.Sleep(waitSingle);

-                response = consumer.Fetch(request);

-                if (response != null && response.Messages.Count() > 2)

-                {

-                    break;

-                }

-

-                totalWaitTimeInMiliseconds += waitSingle;

-                if (totalWaitTimeInMiliseconds >= this.maxTestWaitTimeInMiliseconds)

-                {

-                    break;

-                }

-            }

-

-            Assert.NotNull(response);

-            Assert.AreEqual(3, response.Messages.Count());

-            Assert.AreEqual(originalMessage1.ToString(), response.Messages.First().ToString());

-            Assert.AreEqual(originalMessage2.ToString(), response.Messages.Skip(1).First().ToString());

-            Assert.AreEqual(originalMessage3.ToString(), response.Messages.Skip(2).First().ToString());

-        }

-

-        [Test]

-        public void ProducerSends1MessageUsingNotDefaultEncoder()

-        {

-            var prodConfig = this.ConfigBasedSyncProdConfig;

-

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            string originalMessage = "TestData";

-

-            var multipleBrokersHelper = new TestMultipleBrokersHelper(CurrentTestTopic);

-            multipleBrokersHelper.GetCurrentOffsets(new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 });

-            using (var producer = new Producer<string, string>(prodConfig, null, new StringEncoder(), null))

-            {

-                var producerData = new ProducerData<string, string>(

-                    CurrentTestTopic, new List<string> { originalMessage });

-

-                producer.Send(producerData);

-            }

-

-            Thread.Sleep(waitSingle);

-

-            while (!multipleBrokersHelper.CheckIfAnyBrokerHasChanged(new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 }))

-            {

-                totalWaitTimeInMiliseconds += waitSingle;

-                Thread.Sleep(waitSingle);

-                if (totalWaitTimeInMiliseconds > this.maxTestWaitTimeInMiliseconds)

-                {

-                    Assert.Fail("None of the brokers changed their offset after sending a message");

-                }

-            }

-

-            totalWaitTimeInMiliseconds = 0;

-

-            var consumerConfig = new ConsumerConfiguration(

-                multipleBrokersHelper.BrokerThatHasChanged.Host,

-                    multipleBrokersHelper.BrokerThatHasChanged.Port);

-            IConsumer consumer = new Consumer(consumerConfig);

-            var request = new FetchRequest(CurrentTestTopic, multipleBrokersHelper.PartitionThatHasChanged, multipleBrokersHelper.OffsetFromBeforeTheChange);

-

-            BufferedMessageSet response;

-            while (true)

-            {

-                Thread.Sleep(waitSingle);

-                response = consumer.Fetch(request);

-                if (response != null && response.Messages.Count() > 0)

-                {

-                    break;

-                }

-

-                totalWaitTimeInMiliseconds += waitSingle;

-                if (totalWaitTimeInMiliseconds >= this.maxTestWaitTimeInMiliseconds)

-                {

-                    break;

-                }

-            }

-

-            Assert.NotNull(response);

-            Assert.AreEqual(1, response.Messages.Count());

-            Assert.AreEqual(originalMessage, Encoding.UTF8.GetString(response.Messages.First().Payload));

-        }

-    }

-}
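Editor's note: all three tests above inline the same bounded polling loop (sleep for waitSingle, re-check, give up after maxTestWaitTimeInMiliseconds). A minimal sketch of that pattern factored into a helper; the name WaitUntil and its placement are hypothetical, not part of the deleted fixture.

// Sketch only: the wait-until-condition loop the tests above repeat inline.
// Requires System (for Func) and System.Threading (for Thread).
private static bool WaitUntil(Func<bool> condition, int waitSingleMs, int maxWaitMs)
{
    int waited = 0;
    while (!condition())
    {
        Thread.Sleep(waitSingleMs);
        waited += waitSingleMs;
        if (waited > maxWaitMs)
        {
            return false; // caller decides whether to Assert.Fail
        }
    }
    return true;
}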

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Properties/AssemblyInfo.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Properties/AssemblyInfo.cs
deleted file mode 100644
index ab036b2..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/Properties/AssemblyInfo.cs
+++ /dev/null
@@ -1,52 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-using System.Reflection;

-using System.Runtime.InteropServices;

-

-// General Information about an assembly is controlled through the following 

-// set of attributes. Change these attribute values to modify the information

-// associated with an assembly.

-[assembly: AssemblyTitle("Kafka.Client.IntegrationTests")]

-[assembly: AssemblyDescription("")]

-[assembly: AssemblyConfiguration("")]

-[assembly: AssemblyCompany("Microsoft")]

-[assembly: AssemblyProduct("Kafka.Client.IntegrationTests")]

-[assembly: AssemblyCopyright("Copyright © Microsoft 2011")]

-[assembly: AssemblyTrademark("")]

-[assembly: AssemblyCulture("")]

-

-// Setting ComVisible to false makes the types in this assembly not visible 

-// to COM components.  If you need to access a type in this assembly from 

-// COM, set the ComVisible attribute to true on that type.

-[assembly: ComVisible(false)]

-

-// The following GUID is for the ID of the typelib if this project is exposed to COM

-[assembly: Guid("7b2387b7-6a58-4e8b-ae06-8aadf1a64949")]

-

-// Version information for an assembly consists of the following four values:

-//

-//      Major Version

-//      Minor Version 

-//      Build Number

-//      Revision

-//

-// You can specify all the values or you can default the Build and Revision Numbers 

-// by using the '*' as shown below:

-// [assembly: AssemblyVersion("1.0.*")]

-[assembly: AssemblyVersion("1.0.0.0")]

-[assembly: AssemblyFileVersion("1.0.0.0")]

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestHelper.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestHelper.cs
deleted file mode 100644
index 8b97436..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestHelper.cs
+++ /dev/null
@@ -1,48 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Linq;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Requests;

-

-    public static class TestHelper

-    {

-        public static long GetCurrentKafkaOffset(string topic, ConsumerConfiguration clientConfig)

-        {

-            return GetCurrentKafkaOffset(topic, clientConfig.Broker.Host, clientConfig.Broker.Port);

-        }

-

-        public static long GetCurrentKafkaOffset(string topic, string address, int port)

-        {

-            return GetCurrentKafkaOffset(topic, address, port, 0);

-        }

-

-        public static long GetCurrentKafkaOffset(string topic, string address, int port, int partition)

-        {

-            var request = new OffsetRequest(topic, partition, DateTime.Now.AddDays(-5).Ticks, 10);

-            var consumerConfig = new ConsumerConfiguration(address, port);

-            IConsumer consumer = new Consumer(consumerConfig, address, port);

-            IList<long> list = consumer.GetOffsetsBefore(request);

-            return list.Sum();

-        }

-    }

-}
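Editor's note: a usage sketch for the helper above; the topic, host and port are placeholders rather than values from the test configuration.

// Sketch only: snapshot the current offset of partition 0 before producing,
// so a later read can tell whether the broker accepted the message.
long offsetBefore = TestHelper.GetCurrentKafkaOffset("test-topic", "localhost", 9092, 0);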

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestMultipleBrokersHelper.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestMultipleBrokersHelper.cs
deleted file mode 100644
index 6cda8f4..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestMultipleBrokersHelper.cs
+++ /dev/null
@@ -1,78 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System.Collections.Generic;

-    using Kafka.Client.Cfg;

-

-    public class TestMultipleBrokersHelper

-    {

-        private readonly Dictionary<int, Dictionary<int, long>> offsets = new Dictionary<int, Dictionary<int, long>>();

-

-        private readonly string topic;

-

-        public TestMultipleBrokersHelper(string topic)

-        {

-            this.topic = topic;

-        }

-

-        public SyncProducerConfiguration BrokerThatHasChanged { get; private set; }

-

-        public int PartitionThatHasChanged { get; private set; }

-

-        public long OffsetFromBeforeTheChange

-        {

-            get

-            {

-                return this.BrokerThatHasChanged != null ? this.offsets[this.BrokerThatHasChanged.BrokerId][this.PartitionThatHasChanged] : 0;

-            }

-        }

-

-        public void GetCurrentOffsets(IEnumerable<SyncProducerConfiguration> brokers)

-        {

-            foreach (var broker in brokers)

-            {

-                offsets.Add(broker.BrokerId, new Dictionary<int, long>());

-                offsets[broker.BrokerId].Add(0, TestHelper.GetCurrentKafkaOffset(topic, broker.Host, broker.Port, 0));

-                offsets[broker.BrokerId].Add(1, TestHelper.GetCurrentKafkaOffset(topic, broker.Host, broker.Port, 1));

-            }

-        }

-

-        public bool CheckIfAnyBrokerHasChanged(IEnumerable<SyncProducerConfiguration> brokers)

-        {

-            foreach (var broker in brokers)

-            {

-                if (TestHelper.GetCurrentKafkaOffset(topic, broker.Host, broker.Port, 0) != offsets[broker.BrokerId][0])

-                {

-                    this.BrokerThatHasChanged = broker;

-                    this.PartitionThatHasChanged = 0;

-                    return true;

-                }

-

-                if (TestHelper.GetCurrentKafkaOffset(topic, broker.Host, broker.Port, 1) != offsets[broker.BrokerId][1])

-                {

-                    this.BrokerThatHasChanged = broker;

-                    this.PartitionThatHasChanged = 1;

-                    return true;

-                }

-            }

-

-            return false;

-        }

-    }

-}
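Editor's note: the helper above only snapshots partitions 0 and 1 of each broker, so it assumes a two-partition topic. A usage sketch mirroring how the producer tests drive it; the configuration variable names are placeholders.

// Sketch only: detect which broker/partition accepted a freshly produced message.
var helper = new TestMultipleBrokersHelper("test-topic");
helper.GetCurrentOffsets(new[] { syncProducerConfig1, syncProducerConfig2 });
// ... send a message with a Producer here ...
if (helper.CheckIfAnyBrokerHasChanged(new[] { syncProducerConfig1, syncProducerConfig2 }))
{
    var broker = helper.BrokerThatHasChanged;           // SyncProducerConfiguration of the broker that moved
    int partition = helper.PartitionThatHasChanged;      // 0 or 1
    long offsetBefore = helper.OffsetFromBeforeTheChange; // offset recorded before the send
}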

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestsSetup.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestsSetup.cs
deleted file mode 100644
index c25eef9..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/TestsSetup.cs
+++ /dev/null
@@ -1,35 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using log4net;

-    using log4net.Config;

-    using NUnit.Framework;

-

-    [SetUpFixture]

-    public class TestsSetup

-    {

-        [SetUp]

-        public void Setup()

-        {

-            XmlConfigurator.Configure();

-            ILog logger = LogManager.GetLogger(typeof(TestsSetup));

-            logger.Info("Start logging");

-        }

-    }

-}
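Editor's note: XmlConfigurator.Configure() with no arguments reads log4net settings from the application configuration file. A programmatic fallback is sketched below using log4net's BasicConfigurator; whether the test environment actually ships an App.config is an assumption.

// Sketch only: console-appender fallback when no XML log4net configuration is present.
log4net.Config.BasicConfigurator.Configure();
ILog logger = LogManager.GetLogger(typeof(TestsSetup));
logger.Info("Start logging");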

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZKBrokerPartitionInfoTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZKBrokerPartitionInfoTests.cs
deleted file mode 100644
index 580ea14..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZKBrokerPartitionInfoTests.cs
+++ /dev/null
@@ -1,464 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System.Collections.Generic;

-    using System.Linq;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Producers.Partitioning;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration;

-    using Kafka.Client.ZooKeeperIntegration.Listeners;

-    using NUnit.Framework;

-    using ZooKeeperNet;

-

-    [TestFixture]

-    public class ZkBrokerPartitionInfoTests : IntegrationFixtureBase

-    {

-        [Test]

-        public void ZkBrokerPartitionInfoGetsAllBrokerInfo()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-            var prodConfigNotZk = this.ConfigBasedSyncProdConfig;

-

-            IDictionary<int, Broker> allBrokerInfo;

-            using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(prodConfig, null))

-            {

-                allBrokerInfo = brokerPartitionInfo.GetAllBrokerInfo();

-            }

-

-            Assert.AreEqual(prodConfigNotZk.Brokers.Count, allBrokerInfo.Count);

-            Assert.IsTrue(allBrokerInfo.Values.All(x => prodConfigNotZk.Brokers.Any(

-                y => x.Id == y.BrokerId

-                && x.Host == y.Host

-                && x.Port == y.Port))));

-        }

-

-        [Test]

-        public void ZkBrokerPartitionInfoGetsBrokerPartitionInfo()

-        {

-            var prodconfig = this.ZooKeeperBasedSyncProdConfig;

-            SortedSet<Partition> partitions;

-            using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(prodconfig, null))

-            {

-                partitions = brokerPartitionInfo.GetBrokerPartitionInfo("test");

-            }

-

-            Assert.NotNull(partitions);

-            Assert.GreaterOrEqual(partitions.Count, 2);

-        }

-

-        [Test]

-        public void ZkBrokerPartitionInfoGetsBrokerInfo()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-            var prodConfigNotZk = this.ConfigBasedSyncProdConfig;

-

-            using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(prodConfig, null))

-            {

-                var testBroker = prodConfigNotZk.Brokers[0];

-                Broker broker = brokerPartitionInfo.GetBrokerInfo(testBroker.BrokerId);

-                Assert.NotNull(broker);

-                Assert.AreEqual(testBroker.Host, broker.Host);

-                Assert.AreEqual(testBroker.Port, broker.Port);

-            }

-        }

-

-        [Test]

-        public void WhenNewTopicIsAddedBrokerTopicsListenerCreatesNewMapping()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<string, SortedSet<Partition>> mappings;

-            IDictionary<int, Broker> brokers;

-            string topicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + CurrentTestTopic;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    brokers = brokerPartitionInfo.GetAllBrokerInfo();

-                    mappings =

-                        ReflectionHelper.GetInstanceField<IDictionary<string, SortedSet<Partition>>>(

-                            "topicBrokerPartitions", brokerPartitionInfo);

-                }

-            }

-

-            Assert.NotNull(brokers);

-            Assert.Greater(brokers.Count, 0);

-            Assert.NotNull(mappings);

-            Assert.Greater(mappings.Count, 0);

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                WaitUntillIdle(client, 500);

-                var brokerTopicsListener = new BrokerTopicsListener(client, mappings, brokers, null);

-                client.Subscribe(ZooKeeperClient.DefaultBrokerTopicsPath, brokerTopicsListener);

-                client.CreatePersistent(topicPath, true);

-                WaitUntillIdle(client, 500);

-                client.UnsubscribeAll();

-                WaitUntillIdle(client, 500);

-                client.DeleteRecursive(topicPath);

-            }

-

-            Assert.IsTrue(mappings.ContainsKey(CurrentTestTopic));

-        }

-

-        [Test]

-        public void WhenNewBrokerIsAddedBrokerTopicsListenerUpdatesBrokersList()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<string, SortedSet<Partition>> mappings;

-            IDictionary<int, Broker> brokers;

-            string brokerPath = ZooKeeperClient.DefaultBrokerIdsPath + "/" + 2345;

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    brokers = brokerPartitionInfo.GetAllBrokerInfo();

-                    mappings =

-                        ReflectionHelper.GetInstanceField<IDictionary<string, SortedSet<Partition>>>(

-                            "topicBrokerPartitions", brokerPartitionInfo);

-                }

-            }

-

-            Assert.NotNull(brokers);

-            Assert.Greater(brokers.Count, 0);

-            Assert.NotNull(mappings);

-            Assert.Greater(mappings.Count, 0);

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                WaitUntillIdle(client, 500);

-                var brokerTopicsListener = new BrokerTopicsListener(client, mappings, brokers, null);

-                client.Subscribe(ZooKeeperClient.DefaultBrokerIdsPath, brokerTopicsListener);

-                WaitUntillIdle(client, 500);

-                client.CreatePersistent(brokerPath, true);

-                client.WriteData(brokerPath, "192.168.1.39-1310449279123:192.168.1.39:9102");

-                WaitUntillIdle(client, 500);

-                client.UnsubscribeAll();

-                WaitUntillIdle(client, 500);

-                client.DeleteRecursive(brokerPath);

-            }

-

-            Assert.IsTrue(brokers.ContainsKey(2345));

-            Assert.AreEqual("192.168.1.39", brokers[2345].Host);

-            Assert.AreEqual(9102, brokers[2345].Port);

-            Assert.AreEqual(2345, brokers[2345].Id);

-        }

-

-        [Test]

-        public void WhenBrokerIsRemovedBrokerTopicsListenerUpdatesBrokersList()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<string, SortedSet<Partition>> mappings;

-            IDictionary<int, Broker> brokers;

-            string brokerPath = ZooKeeperClient.DefaultBrokerIdsPath + "/" + 2345;

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    brokers = brokerPartitionInfo.GetAllBrokerInfo();

-                    mappings =

-                        ReflectionHelper.GetInstanceField<IDictionary<string, SortedSet<Partition>>>(

-                            "topicBrokerPartitions", brokerPartitionInfo);

-                }

-            }

-

-            Assert.NotNull(brokers);

-            Assert.Greater(brokers.Count, 0);

-            Assert.NotNull(mappings);

-            Assert.Greater(mappings.Count, 0);

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                WaitUntillIdle(client, 500); 

-                var brokerTopicsListener = new BrokerTopicsListener(client, mappings, brokers, null);

-                client.Subscribe(ZooKeeperClient.DefaultBrokerIdsPath, brokerTopicsListener);

-                client.CreatePersistent(brokerPath, true);

-                client.WriteData(brokerPath, "192.168.1.39-1310449279123:192.168.1.39:9102");

-                WaitUntillIdle(client, 500); 

-                Assert.IsTrue(brokers.ContainsKey(2345));

-                client.DeleteRecursive(brokerPath);

-                WaitUntillIdle(client, 500); 

-                Assert.IsFalse(brokers.ContainsKey(2345));

-            }

-        }

-

-        [Test]

-        public void WhenNewBrokerInTopicIsAddedBrokerTopicsListenerUpdatesMappings()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<string, SortedSet<Partition>> mappings;

-            IDictionary<int, Broker> brokers;

-            string brokerPath = ZooKeeperClient.DefaultBrokerIdsPath + "/" + 2345;

-            string topicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + CurrentTestTopic;

-            string topicBrokerPath = topicPath + "/" + 2345;

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    brokers = brokerPartitionInfo.GetAllBrokerInfo();

-                    mappings =

-                        ReflectionHelper.GetInstanceField<IDictionary<string, SortedSet<Partition>>>(

-                            "topicBrokerPartitions", brokerPartitionInfo);

-                }

-            }

-

-            Assert.NotNull(brokers);

-            Assert.Greater(brokers.Count, 0);

-            Assert.NotNull(mappings);

-            Assert.Greater(mappings.Count, 0);

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                WaitUntillIdle(client, 500);

-                var brokerTopicsListener = new BrokerTopicsListener(client, mappings, brokers, null);

-                client.Subscribe(ZooKeeperClient.DefaultBrokerIdsPath, brokerTopicsListener);

-                client.Subscribe(ZooKeeperClient.DefaultBrokerTopicsPath, brokerTopicsListener);

-                client.CreatePersistent(brokerPath, true);

-                client.WriteData(brokerPath, "192.168.1.39-1310449279123:192.168.1.39:9102");

-                client.CreatePersistent(topicPath, true);

-                WaitUntillIdle(client, 500);

-                Assert.IsTrue(brokers.ContainsKey(2345));

-                Assert.IsTrue(mappings.ContainsKey(CurrentTestTopic));

-                client.CreatePersistent(topicBrokerPath, true);

-                client.WriteData(topicBrokerPath, 5);

-                WaitUntillIdle(client, 500);

-                client.UnsubscribeAll();

-                WaitUntillIdle(client, 500);

-                client.DeleteRecursive(brokerPath);

-                client.DeleteRecursive(topicPath);

-            }

-

-            Assert.IsTrue(brokers.ContainsKey(2345));

-            Assert.IsTrue(mappings.Keys.Contains(CurrentTestTopic));

-            Assert.AreEqual(5, mappings[CurrentTestTopic].Count);

-        }

-

-        [Test]

-        public void WhenSessionIsExpiredListenerRecreatesEphemeralNodes()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<string, SortedSet<Partition>> mappings;

-            IDictionary<int, Broker> brokers;

-            IDictionary<string, SortedSet<Partition>> mappings2;

-            IDictionary<int, Broker> brokers2;

-            using (

-                IZooKeeperClient client = new ZooKeeperClient(

-                    prodConfig.ZooKeeper.ZkConnect,

-                    prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                    ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    brokers = brokerPartitionInfo.GetAllBrokerInfo();

-                    mappings =

-                        ReflectionHelper.GetInstanceField<IDictionary<string, SortedSet<Partition>>>(

-                            "topicBrokerPartitions", brokerPartitionInfo);

-                    Assert.NotNull(brokers);

-                    Assert.Greater(brokers.Count, 0);

-                    Assert.NotNull(mappings);

-                    Assert.Greater(mappings.Count, 0);

-                    client.Process(new WatchedEvent(KeeperState.Expired, EventType.None, null));

-                    WaitUntillIdle(client, 3000);

-                    brokers2 = brokerPartitionInfo.GetAllBrokerInfo();

-                    mappings2 =

-                        ReflectionHelper.GetInstanceField<IDictionary<string, SortedSet<Partition>>>(

-                            "topicBrokerPartitions", brokerPartitionInfo);

-                }

-            }

-

-            Assert.NotNull(brokers2);

-            Assert.Greater(brokers2.Count, 0);

-            Assert.NotNull(mappings2);

-            Assert.Greater(mappings2.Count, 0);

-            Assert.AreEqual(brokers.Count, brokers2.Count);

-            Assert.AreEqual(mappings.Count, mappings2.Count);

-        }

-

-        [Test]

-        public void WhenNewTopicIsAddedZkBrokerPartitionInfoUpdatesMappings()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<string, SortedSet<Partition>> mappings;

-            string topicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + CurrentTestTopic;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    mappings =

-                        ReflectionHelper.GetInstanceField<IDictionary<string, SortedSet<Partition>>>(

-                            "topicBrokerPartitions", brokerPartitionInfo);

-                    client.CreatePersistent(topicPath, true);

-                    WaitUntillIdle(client, 500);

-                    client.UnsubscribeAll();

-                    WaitUntillIdle(client, 500);

-                    client.DeleteRecursive(topicPath);

-                }

-            }

-

-            Assert.NotNull(mappings);

-            Assert.Greater(mappings.Count, 0);

-            Assert.IsTrue(mappings.ContainsKey(CurrentTestTopic));

-        }

-

-        [Test]

-        public void WhenNewBrokerIsAddedZkBrokerPartitionInfoUpdatesBrokersList()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<int, Broker> brokers;

-            string brokerPath = ZooKeeperClient.DefaultBrokerIdsPath + "/" + 2345;

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    brokers = brokerPartitionInfo.GetAllBrokerInfo();

-                    client.CreatePersistent(brokerPath, true);

-                    client.WriteData(brokerPath, "192.168.1.39-1310449279123:192.168.1.39:9102");

-                    WaitUntillIdle(client, 500);

-                    client.UnsubscribeAll();

-                    WaitUntillIdle(client, 500);

-                    client.DeleteRecursive(brokerPath);

-                }

-            }

-

-            Assert.NotNull(brokers);

-            Assert.Greater(brokers.Count, 0);

-            Assert.IsTrue(brokers.ContainsKey(2345));

-            Assert.AreEqual("192.168.1.39", brokers[2345].Host);

-            Assert.AreEqual(9102, brokers[2345].Port);

-            Assert.AreEqual(2345, brokers[2345].Id);

-        }

-

-        [Test]

-        public void WhenBrokerIsRemovedZkBrokerPartitionInfoUpdatesBrokersList()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<int, Broker> brokers;

-            string brokerPath = ZooKeeperClient.DefaultBrokerIdsPath + "/" + 2345;

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    WaitUntillIdle(client, 500);

-                    brokers = brokerPartitionInfo.GetAllBrokerInfo();

-                    client.CreatePersistent(brokerPath, true);

-                    client.WriteData(brokerPath, "192.168.1.39-1310449279123:192.168.1.39:9102");

-                    WaitUntillIdle(client, 500);

-                    Assert.NotNull(brokers);

-                    Assert.Greater(brokers.Count, 0);

-                    Assert.IsTrue(brokers.ContainsKey(2345));

-                    client.DeleteRecursive(brokerPath);

-                    WaitUntillIdle(client, 500);

-                }

-            }

-

-            Assert.NotNull(brokers);

-            Assert.Greater(brokers.Count, 0);

-            Assert.IsFalse(brokers.ContainsKey(2345));

-        }

-

-        [Test]

-        public void WhenNewBrokerInTopicIsAddedZkBrokerPartitionInfoUpdatesMappings()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IDictionary<string, SortedSet<Partition>> mappings;

-            IDictionary<int, Broker> brokers;

-            string brokerPath = ZooKeeperClient.DefaultBrokerIdsPath + "/" + 2345;

-            string topicPath = ZooKeeperClient.DefaultBrokerTopicsPath + "/" + CurrentTestTopic;

-            string topicBrokerPath = topicPath + "/" + 2345;

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs,

-                ZooKeeperStringSerializer.Serializer))

-            {

-                using (var brokerPartitionInfo = new ZKBrokerPartitionInfo(client))

-                {

-                    brokers = brokerPartitionInfo.GetAllBrokerInfo();

-                    mappings =

-                        ReflectionHelper.GetInstanceField<IDictionary<string, SortedSet<Partition>>>(

-                            "topicBrokerPartitions", brokerPartitionInfo);

-                    client.CreatePersistent(brokerPath, true);

-                    client.WriteData(brokerPath, "192.168.1.39-1310449279123:192.168.1.39:9102");

-                    client.CreatePersistent(topicPath, true);

-                    WaitUntillIdle(client, 500);

-                    Assert.IsTrue(brokers.ContainsKey(2345));

-                    Assert.IsTrue(mappings.ContainsKey(CurrentTestTopic));

-                    client.CreatePersistent(topicBrokerPath, true);

-                    client.WriteData(topicBrokerPath, 5);

-                    WaitUntillIdle(client, 500);

-                    client.UnsubscribeAll();

-                    WaitUntillIdle(client, 500);

-                    client.DeleteRecursive(brokerPath);

-                    client.DeleteRecursive(topicPath);

-                }

-            }

-

-            Assert.NotNull(brokers);

-            Assert.Greater(brokers.Count, 0);

-            Assert.NotNull(mappings);

-            Assert.Greater(mappings.Count, 0);

-            Assert.IsTrue(brokers.ContainsKey(2345));

-            Assert.IsTrue(mappings.Keys.Contains(CurrentTestTopic));

-            Assert.AreEqual(5, mappings[CurrentTestTopic].Count);

-        }

-    }

-}
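Editor's note: the listener tests above read and write a specific ZooKeeper layout. The summary below is taken from the test code itself; that DefaultBrokerIdsPath and DefaultBrokerTopicsPath resolve to the standard Kafka 0.7 registry paths is an assumption.

// Layout exercised by the tests above:
//   <DefaultBrokerIdsPath>/<brokerId>             -> "<creator>:<host>:<port>",
//                                                    e.g. "192.168.1.39-1310449279123:192.168.1.39:9102"
//   <DefaultBrokerTopicsPath>/<topic>/<brokerId>  -> partition count hosted by that broker, e.g. 5
// The listeners translate node creation/deletion under these paths into updates of the
// brokers dictionary and the topic -> partitions mapping that the assertions check.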

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperAwareProducerTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperAwareProducerTests.cs
deleted file mode 100644
index 42948a4..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperAwareProducerTests.cs
+++ /dev/null
@@ -1,224 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System.Collections.Generic;

-    using System.Linq;

-    using System.Text;

-    using System.Threading;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Consumers;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Producers;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Serialization;

-    using NUnit.Framework;

-

-    [TestFixture]

-    public class ZooKeeperAwareProducerTests : IntegrationFixtureBase

-    {

-        /// <summary>

-        /// Maximum amount of time to wait trying to get a specific test message from Kafka server (in milliseconds)

-        /// </summary>

-        private readonly int maxTestWaitTimeInMiliseconds = 5000;

-

-        [Test]

-        public void ZkAwareProducerSends1Message()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            var originalMessage = new Message(Encoding.UTF8.GetBytes("TestData"));

-

-            var multipleBrokersHelper = new TestMultipleBrokersHelper(CurrentTestTopic);

-            multipleBrokersHelper.GetCurrentOffsets(new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 });

-

-            var mockPartitioner = new MockAlwaysZeroPartitioner();

-            using (var producer = new Producer<string, Message>(prodConfig, mockPartitioner, new DefaultEncoder()))

-            {

-                var producerData = new ProducerData<string, Message>(

-                    CurrentTestTopic, "somekey", new List<Message> { originalMessage });

-                producer.Send(producerData);

-

-                while (!multipleBrokersHelper.CheckIfAnyBrokerHasChanged(new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 }))

-                {

-                    totalWaitTimeInMiliseconds += waitSingle;

-                    Thread.Sleep(waitSingle);

-                    if (totalWaitTimeInMiliseconds > this.maxTestWaitTimeInMiliseconds)

-                    {

-                        Assert.Fail("None of the brokers changed their offset after sending a message");

-                    }

-                }

-

-                totalWaitTimeInMiliseconds = 0;

-

-                var consumerConfig = new ConsumerConfiguration(

-                    multipleBrokersHelper.BrokerThatHasChanged.Host,

-                    multipleBrokersHelper.BrokerThatHasChanged.Port);

-                IConsumer consumer = new Consumer(consumerConfig);

-                var request = new FetchRequest(CurrentTestTopic, multipleBrokersHelper.PartitionThatHasChanged, multipleBrokersHelper.OffsetFromBeforeTheChange);

-

-                BufferedMessageSet response;

-

-                while (true)

-                {

-                    Thread.Sleep(waitSingle);

-                    response = consumer.Fetch(request);

-                    if (response != null && response.Messages.Count() > 0)

-                    {

-                        break;

-                    }

-

-                    totalWaitTimeInMiliseconds += waitSingle;

-                    if (totalWaitTimeInMiliseconds >= this.maxTestWaitTimeInMiliseconds)

-                    {

-                        break;

-                    }

-                }

-

-                Assert.NotNull(response);

-                Assert.AreEqual(1, response.Messages.Count());

-                Assert.AreEqual(originalMessage.ToString(), response.Messages.First().ToString());

-            }

-        }

-

-        [Test]

-        public void ZkAwareProducerSends3Messages()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            var originalMessage1 = new Message(Encoding.UTF8.GetBytes("TestData1"));

-            var originalMessage2 = new Message(Encoding.UTF8.GetBytes("TestData2"));

-            var originalMessage3 = new Message(Encoding.UTF8.GetBytes("TestData3"));

-            var originalMessageList = new List<Message> { originalMessage1, originalMessage2, originalMessage3 };

-

-            var multipleBrokersHelper = new TestMultipleBrokersHelper(CurrentTestTopic);

-            multipleBrokersHelper.GetCurrentOffsets(new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 });

-

-            var mockPartitioner = new MockAlwaysZeroPartitioner();

-            using (var producer = new Producer<string, Message>(prodConfig, mockPartitioner, new DefaultEncoder()))

-            {

-                var producerData = new ProducerData<string, Message>(CurrentTestTopic, "somekey", originalMessageList);

-                producer.Send(producerData);

-

-                while (!multipleBrokersHelper.CheckIfAnyBrokerHasChanged(new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 }))

-                {

-                    totalWaitTimeInMiliseconds += waitSingle;

-                    Thread.Sleep(waitSingle);

-                    if (totalWaitTimeInMiliseconds > this.maxTestWaitTimeInMiliseconds)

-                    {

-                        Assert.Fail("None of the brokers changed their offset after sending a message");

-                    }

-                }

-

-                totalWaitTimeInMiliseconds = 0;

-

-                var consumerConfig = new ConsumerConfiguration(

-                    multipleBrokersHelper.BrokerThatHasChanged.Host,

-                    multipleBrokersHelper.BrokerThatHasChanged.Port);

-                IConsumer consumer = new Consumer(consumerConfig);

-                var request = new FetchRequest(CurrentTestTopic, 0, multipleBrokersHelper.OffsetFromBeforeTheChange);

-                BufferedMessageSet response;

-

-                while (true)

-                {

-                    Thread.Sleep(waitSingle);

-                    response = consumer.Fetch(request);

-                    if (response != null && response.Messages.Count() > 2)

-                    {

-                        break;

-                    }

-

-                    totalWaitTimeInMiliseconds += waitSingle;

-                    if (totalWaitTimeInMiliseconds >= this.maxTestWaitTimeInMiliseconds)

-                    {

-                        break;

-                    }

-                }

-

-                Assert.NotNull(response);

-                Assert.AreEqual(3, response.Messages.Count());

-                Assert.AreEqual(originalMessage1.ToString(), response.Messages.First().ToString());

-                Assert.AreEqual(originalMessage2.ToString(), response.Messages.Skip(1).First().ToString());

-                Assert.AreEqual(originalMessage3.ToString(), response.Messages.Skip(2).First().ToString());

-            }

-        }

-

-        [Test]

-        public void ZkAwareProducerSends1MessageUsingNotDefaultEncoder()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            int totalWaitTimeInMiliseconds = 0;

-            int waitSingle = 100;

-            string originalMessage = "TestData";

-

-            var multipleBrokersHelper = new TestMultipleBrokersHelper(CurrentTestTopic);

-            multipleBrokersHelper.GetCurrentOffsets(new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 });

-

-            var mockPartitioner = new MockAlwaysZeroPartitioner();

-            using (var producer = new Producer<string, string>(prodConfig, mockPartitioner, new StringEncoder(), null))

-            {

-                var producerData = new ProducerData<string, string>(

-                    CurrentTestTopic, "somekey", new List<string> { originalMessage });

-                producer.Send(producerData);

-

-                while (!multipleBrokersHelper.CheckIfAnyBrokerHasChanged(new[] { this.SyncProducerConfig1, this.SyncProducerConfig2, this.SyncProducerConfig3 }))

-                {

-                    totalWaitTimeInMiliseconds += waitSingle;

-                    Thread.Sleep(waitSingle);

-                    if (totalWaitTimeInMiliseconds > this.maxTestWaitTimeInMiliseconds)

-                    {

-                        Assert.Fail("None of the brokers changed their offset after sending a message");

-                    }

-                }

-

-                totalWaitTimeInMiliseconds = 0;

-

-                var consumerConfig = new ConsumerConfiguration(

-                    multipleBrokersHelper.BrokerThatHasChanged.Host,

-                    multipleBrokersHelper.BrokerThatHasChanged.Port);

-                IConsumer consumer = new Consumer(consumerConfig);

-                var request = new FetchRequest(CurrentTestTopic, 0, multipleBrokersHelper.OffsetFromBeforeTheChange);

-                BufferedMessageSet response;

-

-                while (true)

-                {

-                    Thread.Sleep(waitSingle);

-                    response = consumer.Fetch(request);

-                    if (response != null && response.Messages.Count() > 0)

-                    {

-                        break;

-                    }

-

-                    totalWaitTimeInMiliseconds += waitSingle;

-                    if (totalWaitTimeInMiliseconds >= this.maxTestWaitTimeInMiliseconds)

-                    {

-                        break;

-                    }

-                }

-

-                Assert.NotNull(response);

-                Assert.AreEqual(1, response.Messages.Count());

-                Assert.AreEqual(originalMessage, Encoding.UTF8.GetString(response.Messages.First().Payload));

-            }

-        }

-    }

-}
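Editor's note: because MockAlwaysZeroPartitioner maps every key to partition 0, the tests above can fetch from a fixed partition after producing through ZooKeeper-based broker discovery. A condensed sketch of that flow; zkProdConfig, msg and offsetBefore are placeholders.

// Sketch only: produce via the ZooKeeper-aware producer with a deterministic partitioner,
// then fetch from the partition the partitioner is known to pick.
using (var producer = new Producer<string, Message>(zkProdConfig, new MockAlwaysZeroPartitioner(), new DefaultEncoder()))
{
    producer.Send(new ProducerData<string, Message>("test-topic", "somekey", new List<Message> { msg }));
}
var fetch = new FetchRequest("test-topic", 0 /* partition picked by the mock partitioner */, offsetBefore);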

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperClientTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperClientTests.cs
deleted file mode 100644
index da5fbf1..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperClientTests.cs
+++ /dev/null
@@ -1,411 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System;

-    using System.Collections.Generic;

-    using System.Reflection;

-    using System.Threading;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Exceptions;

-    using Kafka.Client.Utils;

-    using Kafka.Client.ZooKeeperIntegration;

-    using Kafka.Client.ZooKeeperIntegration.Events;

-    using Kafka.Client.ZooKeeperIntegration.Listeners;

-    using log4net;

-    using NUnit.Framework;

-    using ZooKeeperNet;

-

-    [TestFixture]

-    internal class ZooKeeperClientTests : IntegrationFixtureBase, IZooKeeperDataListener, IZooKeeperStateListener, IZooKeeperChildListener

-    {

-        private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

-        private readonly IList<ZooKeeperEventArgs> events = new List<ZooKeeperEventArgs>();

-

-        [SetUp]

-        public void TestSetup()

-        {

-            this.events.Clear();

-        }

-

-        [Test]

-        public void ZooKeeperClientCreateWorkerThreadsOnBeingCreated()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect, 

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                var eventWorker = ReflectionHelper.GetInstanceField<Thread>("eventWorker", client);

-                var zooKeeperWorker = ReflectionHelper.GetInstanceField<Thread>("zooKeeperEventWorker", client);

-                Assert.NotNull(eventWorker);

-                Assert.NotNull(zooKeeperWorker);

-            }

-        }

-

-        [Test]

-        public void ZooKeeperClientFailsWhenCreatedWithWrongConnectionInfo()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                "random text", 

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                Assert.Throws<FormatException>(client.Connect);

-            }

-        }

-

-        [Test]

-        public void WhenStateChangedToConnectedStateListenerFires()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Subscribe(this);

-                client.Connect();

-                WaitUntillIdle(client, 500);

-            }

-

-            Assert.AreEqual(1, this.events.Count);

-            ZooKeeperEventArgs e = this.events[0];

-            Assert.AreEqual(ZooKeeperEventTypes.StateChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperStateChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperStateChangedEventArgs)e).State, KeeperState.SyncConnected);

-        }

-

-        [Test]

-        public void WhenStateChangedToDisconnectedStateListenerFires()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Subscribe(this);

-                client.Connect();

-                WaitUntillIdle(client, 500);

-                client.Process(new WatchedEvent(KeeperState.Disconnected, EventType.None, null));

-                WaitUntillIdle(client, 500);

-            }

-

-            Assert.AreEqual(2, this.events.Count);

-            ZooKeeperEventArgs e = this.events[1];

-            Assert.AreEqual(ZooKeeperEventTypes.StateChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperStateChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperStateChangedEventArgs)e).State, KeeperState.Disconnected);

-        }

-

-        [Test]

-        public void WhenStateChangedToExpiredStateAndSessionListenersFire()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Subscribe(this);

-                client.Connect();

-                WaitUntillIdle(client, 500);

-                client.Process(new WatchedEvent(KeeperState.Expired, EventType.None, null));

-                WaitUntillIdle(client, 3000);

-            }

-

-            Assert.AreEqual(4, this.events.Count);

-            ZooKeeperEventArgs e = this.events[1];

-            Assert.AreEqual(ZooKeeperEventTypes.StateChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperStateChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperStateChangedEventArgs)e).State, KeeperState.Expired);

-            e = this.events[2];

-            Assert.AreEqual(ZooKeeperEventTypes.SessionCreated, e.Type);

-            Assert.IsInstanceOf<ZooKeeperSessionCreatedEventArgs>(e);

-            e = this.events[3];

-            Assert.AreEqual(ZooKeeperEventTypes.StateChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperStateChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperStateChangedEventArgs)e).State, KeeperState.SyncConnected);

-        }

-

-        [Test]

-        public void WhenSessionExpiredClientReconnects()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IZooKeeperConnection conn1;

-            IZooKeeperConnection conn2;

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect, 

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                conn1 = ReflectionHelper.GetInstanceField<ZooKeeperConnection>("connection", client);

-                client.Process(new WatchedEvent(KeeperState.Expired, EventType.None, null));

-                WaitUntillIdle(client, 1000);

-                conn2 = ReflectionHelper.GetInstanceField<ZooKeeperConnection>("connection", client);

-            }

-

-            Assert.AreNotEqual(conn1, conn2);

-        }

-

-        [Test]

-        public void ZooKeeperClientChecksIfPathExists()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                Assert.IsTrue(client.Exists(ZooKeeperClient.DefaultBrokerTopicsPath, false));

-            }

-        }

-

-        [Test]

-        public void ZooKeeperClientCreatesANewPathAndDeletesIt()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                string myPath = "/" + Guid.NewGuid();

-                client.CreatePersistent(myPath, false);

-                Assert.IsTrue(client.Exists(myPath));

-                client.Delete(myPath);

-                Assert.IsFalse(client.Exists(myPath));

-            }

-        }

-

-        [Test]

-        public void WhenChildIsCreatedChilListenerOnParentFires()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            string myPath = "/" + Guid.NewGuid();

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                WaitUntillIdle(client, 500);

-                client.Subscribe("/", this as IZooKeeperChildListener);

-                client.CreatePersistent(myPath, true);

-                WaitUntillIdle(client, 500);

-                client.UnsubscribeAll();

-                client.Delete(myPath);

-            }

-

-            Assert.AreEqual(1, this.events.Count);

-            ZooKeeperEventArgs e = this.events[0];

-            Assert.AreEqual(ZooKeeperEventTypes.ChildChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperChildChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperChildChangedEventArgs)e).Path, "/");

-            Assert.Greater(((ZooKeeperChildChangedEventArgs)e).Children.Count, 0);

-            Assert.IsTrue(((ZooKeeperChildChangedEventArgs)e).Children.Contains(myPath.Replace("/", string.Empty)));

-        }

-

-        [Test]

-        public void WhenChildIsDeletedChildListenerOnParentFires()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            string myPath = "/" + Guid.NewGuid();

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                client.CreatePersistent(myPath, true);

-                WaitUntillIdle(client, 500);

-                client.Subscribe("/", this as IZooKeeperChildListener);

-                client.Delete(myPath);

-                WaitUntillIdle(client, 500);

-            }

-

-            Assert.AreEqual(1, this.events.Count);

-            ZooKeeperEventArgs e = this.events[0];

-            Assert.AreEqual(ZooKeeperEventTypes.ChildChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperChildChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperChildChangedEventArgs)e).Path, "/");

-            Assert.Greater(((ZooKeeperChildChangedEventArgs)e).Children.Count, 0);

-            Assert.IsFalse(((ZooKeeperChildChangedEventArgs)e).Children.Contains(myPath.Replace("/", string.Empty)));

-        }

-

-        [Test]

-        public void WhenZNodeIsDeletedChildAndDataDeletedListenersFire()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            string myPath = "/" + Guid.NewGuid();

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                client.CreatePersistent(myPath, true);

-                WaitUntillIdle(client, 500);

-                client.Subscribe(myPath, this as IZooKeeperChildListener);

-                client.Subscribe(myPath, this as IZooKeeperDataListener);

-                client.Delete(myPath);

-                WaitUntillIdle(client, 500);

-            }

-

-            Assert.AreEqual(2, this.events.Count);

-            ZooKeeperEventArgs e = this.events[0];

-            Assert.AreEqual(ZooKeeperEventTypes.ChildChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperChildChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperChildChangedEventArgs)e).Path, myPath);

-            Assert.IsNull(((ZooKeeperChildChangedEventArgs)e).Children);

-            e = this.events[1];

-            Assert.AreEqual(ZooKeeperEventTypes.DataChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperDataChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperDataChangedEventArgs)e).Path, myPath);

-            Assert.IsNull(((ZooKeeperDataChangedEventArgs)e).Data);

-        }

-

-        [Test]

-        public void ZooKeeperClientCreatesAChildAndGetsChildren()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect, 

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                string child = Guid.NewGuid().ToString();

-                string myPath = "/" + child;

-                client.CreatePersistent(myPath, false);

-                IList<string> children = client.GetChildren("/", false);

-                int countChildren = client.CountChildren("/");

-                Assert.Greater(children.Count, 0);

-                Assert.AreEqual(children.Count, countChildren);

-                Assert.IsTrue(children.Contains(child));

-                client.Delete(myPath);

-            }

-        }

-

-        [Test]

-        public void WhenDataChangedDataListenerFires()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            string myPath = "/" + Guid.NewGuid();

-            string sourceData = "my test data";

-            string resultData;

-            using (IZooKeeperClient client = new ZooKeeperClient(

-                prodConfig.ZooKeeper.ZkConnect,

-                prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                ZooKeeperStringSerializer.Serializer))

-            {

-                client.Connect();

-                client.CreatePersistent(myPath, true);

-                WaitUntillIdle(client, 500);

-                client.Subscribe(myPath, this as IZooKeeperDataListener);

-                client.Subscribe(myPath, this as IZooKeeperChildListener);

-                client.WriteData(myPath, sourceData);

-                WaitUntillIdle(client, 500);

-                client.UnsubscribeAll();

-                resultData = client.ReadData<string>(myPath);

-                client.Delete(myPath);

-            }

-

-            Assert.IsTrue(!string.IsNullOrEmpty(resultData));

-            Assert.AreEqual(sourceData, resultData);

-            Assert.AreEqual(1, this.events.Count);

-            ZooKeeperEventArgs e = this.events[0];

-            Assert.AreEqual(ZooKeeperEventTypes.DataChanged, e.Type);

-            Assert.IsInstanceOf<ZooKeeperDataChangedEventArgs>(e);

-            Assert.AreEqual(((ZooKeeperDataChangedEventArgs)e).Path, myPath);

-            Assert.IsNotNull(((ZooKeeperDataChangedEventArgs)e).Data);

-            Assert.AreEqual(((ZooKeeperDataChangedEventArgs)e).Data, sourceData);

-        }

-

-        [Test]

-        [ExpectedException(typeof(ZooKeeperException))]

-        public void WhenClientWillNotConnectWithinGivenTimeThrows()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperClient client = 

-                new ZooKeeperClient(

-                    prodConfig.ZooKeeper.ZkConnect,

-                    prodConfig.ZooKeeper.ZkSessionTimeoutMs, 

-                    ZooKeeperStringSerializer.Serializer,

-                    1))

-            {

-                client.Connect();

-            }

-        }

-

-        public void HandleDataChange(ZooKeeperDataChangedEventArgs args)

-        {

-            Logger.Debug(args + " reach test event handler");

-            this.events.Add(args);

-        }

-

-        public void HandleDataDelete(ZooKeeperDataChangedEventArgs args)

-        {

-            Logger.Debug(args + " reach test event handler");

-            this.events.Add(args);

-        }

-

-        public void HandleStateChanged(ZooKeeperStateChangedEventArgs args)

-        {

-            Logger.Debug(args + " reach test event handler");

-            this.events.Add(args);

-        }

-

-        public void HandleSessionCreated(ZooKeeperSessionCreatedEventArgs args)

-        {

-            Logger.Debug(args + " reach test event handler");

-            this.events.Add(args);

-        }

-

-        public void HandleChildChange(ZooKeeperChildChangedEventArgs args)

-        {

-            Logger.Debug(args + " reach test event handler");

-            this.events.Add(args);

-        }

-

-        public void ResetState()

-        {

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperConnectionTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperConnectionTests.cs
deleted file mode 100644
index 66a4e8f..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.IntegrationTests/ZooKeeperConnectionTests.cs
+++ /dev/null
@@ -1,118 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.IntegrationTests

-{

-    using System;

-    using System.Collections.Generic;

-    using Kafka.Client.Cfg;

-    using Kafka.Client.ZooKeeperIntegration;

-    using NUnit.Framework;

-    using ZooKeeperNet;

-

-    [TestFixture]

-    public class ZooKeeperConnectionTests : IntegrationFixtureBase

-    {

-        [Test]

-        public void ZooKeeperConnectionCreatesAndDeletesPath()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperConnection connection = new ZooKeeperConnection(prodConfig.ZooKeeper.ZkConnect))

-            {

-                connection.Connect(null);

-                string pathName = "/" + Guid.NewGuid();

-                connection.Create(pathName, null, CreateMode.Persistent);

-                Assert.IsTrue(connection.Exists(pathName, false));

-                connection.Delete(pathName);

-                Assert.IsFalse(connection.Exists(pathName, false));

-            }

-        }

-

-        [Test]

-        public void ZooKeeperConnectionConnectsAndDisposes()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            IZooKeeperConnection connection;

-            using (connection = new ZooKeeperConnection(prodConfig.ZooKeeper.ZkConnect))

-            {

-                Assert.IsNull(connection.ClientState);

-                connection.Connect(null);

-                Assert.NotNull(connection.Client);

-                Assert.AreEqual(ZooKeeper.States.CONNECTING, connection.ClientState);

-            }

-

-            Assert.Null(connection.Client);

-        }

-

-        [Test]

-        public void ZooKeeperConnectionCreatesAndGetsCreateTime()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperConnection connection = new ZooKeeperConnection(prodConfig.ZooKeeper.ZkConnect))

-            {

-                connection.Connect(null);

-                string pathName = "/" + Guid.NewGuid();

-                connection.Create(pathName, null, CreateMode.Persistent);

-                long createTime = connection.GetCreateTime(pathName);

-                Assert.Greater(createTime, 0);

-                connection.Delete(pathName);

-            }

-        }

-

-        [Test]

-        public void ZooKeeperConnectionCreatesAndGetsChildren()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperConnection connection = new ZooKeeperConnection(prodConfig.ZooKeeper.ZkConnect))

-            {

-                connection.Connect(null);

-                string child = Guid.NewGuid().ToString();

-                string pathName = "/" + child;

-                connection.Create(pathName, null, CreateMode.Persistent);

-                IList<string> children = connection.GetChildren("/", false);

-                Assert.Greater(children.Count, 0);

-                Assert.IsTrue(children.Contains(child));

-                connection.Delete(pathName);

-            }

-        }

-

-        [Test]

-        public void ZooKeeperConnectionWritesAndReadsData()

-        {

-            var prodConfig = this.ZooKeeperBasedSyncProdConfig;

-

-            using (IZooKeeperConnection connection = new ZooKeeperConnection(prodConfig.ZooKeeper.ZkConnect))

-            {

-                connection.Connect(null);

-                string child = Guid.NewGuid().ToString();

-                string pathName = "/" + child;

-                connection.Create(pathName, null, CreateMode.Persistent);

-                var sourceData = new byte[] { 1, 2 };

-                connection.WriteData(pathName, sourceData);

-                byte[] resultData = connection.ReadData(pathName, null, false);

-                Assert.IsNotNull(resultData);

-                Assert.AreEqual(sourceData[0], resultData[0]);

-                Assert.AreEqual(sourceData[1], resultData[1]);

-                connection.Delete(pathName);

-            }

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Kafka.Client.Tests.csproj b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Kafka.Client.Tests.csproj
deleted file mode 100644
index ba2d101..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Kafka.Client.Tests.csproj
+++ /dev/null
@@ -1,118 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>

-<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

-  <PropertyGroup>

-    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>

-    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>

-    <ProductVersion>8.0.30703</ProductVersion>

-    <SchemaVersion>2.0</SchemaVersion>

-    <ProjectGuid>{9BA1A0BF-B207-4A11-8883-5F64B113C07D}</ProjectGuid>

-    <OutputType>Library</OutputType>

-    <AppDesignerFolder>Properties</AppDesignerFolder>

-    <RootNamespace>Kafka.Client.Tests</RootNamespace>

-    <AssemblyName>Kafka.Client.Tests</AssemblyName>

-    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>

-    <FileAlignment>512</FileAlignment>

-    <CodeContractsAssemblyMode>0</CodeContractsAssemblyMode>

-  </PropertyGroup>

-  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">

-    <DebugSymbols>true</DebugSymbols>

-    <DebugType>full</DebugType>

-    <Optimize>false</Optimize>

-    <OutputPath>bin\Debug\</OutputPath>

-    <DefineConstants>DEBUG;TRACE</DefineConstants>

-    <ErrorReport>prompt</ErrorReport>

-    <WarningLevel>4</WarningLevel>

-    <StyleCopTreatErrorsAsWarnings>true</StyleCopTreatErrorsAsWarnings>

-    <CodeContractsEnableRuntimeChecking>False</CodeContractsEnableRuntimeChecking>

-    <CodeContractsRuntimeOnlyPublicSurface>False</CodeContractsRuntimeOnlyPublicSurface>

-    <CodeContractsRuntimeThrowOnFailure>True</CodeContractsRuntimeThrowOnFailure>

-    <CodeContractsRuntimeCallSiteRequires>False</CodeContractsRuntimeCallSiteRequires>

-    <CodeContractsRuntimeSkipQuantifiers>False</CodeContractsRuntimeSkipQuantifiers>

-    <CodeContractsRunCodeAnalysis>False</CodeContractsRunCodeAnalysis>

-    <CodeContractsNonNullObligations>False</CodeContractsNonNullObligations>

-    <CodeContractsBoundsObligations>False</CodeContractsBoundsObligations>

-    <CodeContractsArithmeticObligations>False</CodeContractsArithmeticObligations>

-    <CodeContractsEnumObligations>False</CodeContractsEnumObligations>

-    <CodeContractsRedundantAssumptions>False</CodeContractsRedundantAssumptions>

-    <CodeContractsRunInBackground>True</CodeContractsRunInBackground>

-    <CodeContractsShowSquigglies>False</CodeContractsShowSquigglies>

-    <CodeContractsUseBaseLine>False</CodeContractsUseBaseLine>

-    <CodeContractsEmitXMLDocs>False</CodeContractsEmitXMLDocs>

-    <CodeContractsCustomRewriterAssembly />

-    <CodeContractsCustomRewriterClass />

-    <CodeContractsLibPaths />

-    <CodeContractsExtraRewriteOptions />

-    <CodeContractsExtraAnalysisOptions />

-    <CodeContractsBaseLineFile />

-    <CodeContractsCacheAnalysisResults>False</CodeContractsCacheAnalysisResults>

-    <CodeContractsRuntimeCheckingLevel>Full</CodeContractsRuntimeCheckingLevel>

-    <CodeContractsReferenceAssembly>%28none%29</CodeContractsReferenceAssembly>

-    <CodeContractsAnalysisWarningLevel>0</CodeContractsAnalysisWarningLevel>

-  </PropertyGroup>

-  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">

-    <DebugType>pdbonly</DebugType>

-    <Optimize>true</Optimize>

-    <OutputPath>bin\Release\</OutputPath>

-    <DefineConstants>TRACE</DefineConstants>

-    <ErrorReport>prompt</ErrorReport>

-    <WarningLevel>4</WarningLevel>

-    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

-  </PropertyGroup>

-  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Integration|AnyCPU'">

-    <DebugSymbols>true</DebugSymbols>

-    <OutputPath>bin\Integration\</OutputPath>

-    <DefineConstants>DEBUG;TRACE</DefineConstants>

-    <DebugType>full</DebugType>

-    <PlatformTarget>AnyCPU</PlatformTarget>

-    <ErrorReport>prompt</ErrorReport>

-    <CodeAnalysisIgnoreBuiltInRuleSets>true</CodeAnalysisIgnoreBuiltInRuleSets>

-    <CodeAnalysisIgnoreBuiltInRules>true</CodeAnalysisIgnoreBuiltInRules>

-    <CodeAnalysisFailOnMissingRules>true</CodeAnalysisFailOnMissingRules>

-    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

-  </PropertyGroup>

-  <ItemGroup>

-    <Reference Include="log4net">

-      <HintPath>..\..\..\..\lib\log4Net\log4net.dll</HintPath>

-    </Reference>

-    <Reference Include="nunit.framework, Version=2.5.9.10348, Culture=neutral, PublicKeyToken=96d09a1eb7f44a77, processorArchitecture=MSIL">

-      <SpecificVersion>False</SpecificVersion>

-      <HintPath>..\..\..\..\lib\nunit\2.5.9\nunit.framework.dll</HintPath>

-    </Reference>

-    <Reference Include="System" />

-    <Reference Include="System.configuration" />

-    <Reference Include="System.Core" />

-    <Reference Include="Microsoft.CSharp" />

-  </ItemGroup>

-  <ItemGroup>

-    <Compile Include="CompressionTests.cs" />

-    <Compile Include="MessageSetTests.cs" />

-    <Compile Include="MessageTests.cs" />

-    <Compile Include="Producers\PartitioningTests.cs" />

-    <Compile Include="Properties\AssemblyInfo.cs" />

-    <Compile Include="Request\FetchRequestTests.cs" />

-    <Compile Include="Request\MultiFetchRequestTests.cs" />

-    <Compile Include="Request\MultiProducerRequestTests.cs" />

-    <Compile Include="Request\OffsetRequestTests.cs" />

-    <Compile Include="Request\ProducerRequestTests.cs" />

-    <Compile Include="Util\BitWorksTests.cs" />

-  </ItemGroup>

-  <ItemGroup>

-    <ProjectReference Include="..\..\Kafka.Client\Kafka.Client.csproj">

-      <Project>{A92DD03B-EE4F-4A78-9FB2-279B6348C7D2}</Project>

-      <Name>Kafka.Client</Name>

-    </ProjectReference>

-  </ItemGroup>

-  <ItemGroup>

-    <Folder Include="ZooKeeper\" />

-  </ItemGroup>

-  <ItemGroup>

-    <None Include="..\..\..\..\Settings.StyleCop">

-      <Link>Settings.StyleCop</Link>

-    </None>

-    <None Include="App.config">

-      <SubType>Designer</SubType>

-    </None>

-  </ItemGroup>

-  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

-  <Import Project="..\..\..\..\lib\StyleCop\Microsoft.StyleCop.Targets" />

-</Project>

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/MessageSetTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/MessageSetTests.cs
deleted file mode 100644
index 35068ba..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/MessageSetTests.cs
+++ /dev/null
@@ -1,108 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Tests

-{

-    using System;

-    using System.Collections.Generic;

-    using System.IO;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Utils;

-    using NUnit.Framework;

-

-    [TestFixture]

-    public class MessageSetTests

-    {

-        private const int MessageLengthPartLength = 4;

-        private const int MagicNumberPartLength = 1;

-        private const int AttributesPartLength = 1;

-        private const int ChecksumPartLength = 4;

-        

-        private const int MessageLengthPartOffset = 0;

-        private const int MagicNumberPartOffset = 4;

-        private const int AttributesPartOffset = 5;

-        private const int ChecksumPartOffset = 6;

-        private const int DataPartOffset = 10;

-

-        [Test]

-        public void BufferedMessageSetWriteToValidSequence()

-        {

-            byte[] messageBytes = new byte[] { 1, 2, 3, 4, 5 };

-            Message msg1 = new Message(messageBytes);

-            Message msg2 = new Message(messageBytes);

-            MessageSet messageSet = new BufferedMessageSet(new List<Message>() { msg1, msg2 });

-            MemoryStream ms = new MemoryStream();

-            messageSet.WriteTo(ms);

-

-            ////first message

-

-            byte[] messageLength = new byte[MessageLengthPartLength];

-            Array.Copy(ms.ToArray(), MessageLengthPartOffset, messageLength, 0, MessageLengthPartLength);

-            if (BitConverter.IsLittleEndian)

-            {

-                Array.Reverse(messageLength);

-            }

-

-            Assert.AreEqual(MagicNumberPartLength + AttributesPartLength + ChecksumPartLength + messageBytes.Length, BitConverter.ToInt32(messageLength, 0));

-

-            Assert.AreEqual(1, ms.ToArray()[MagicNumberPartOffset]);    // default magic number should be 1

-

-            byte[] checksumPart = new byte[ChecksumPartLength];

-            Array.Copy(ms.ToArray(), ChecksumPartOffset, checksumPart, 0, ChecksumPartLength);

-            Assert.AreEqual(Crc32Hasher.Compute(messageBytes), checksumPart);

-

-            byte[] dataPart = new byte[messageBytes.Length];

-            Array.Copy(ms.ToArray(), DataPartOffset, dataPart, 0, messageBytes.Length);

-            Assert.AreEqual(messageBytes, dataPart);

-

-            ////second message

-            int secondMessageOffset = MessageLengthPartLength + MagicNumberPartLength + AttributesPartLength + ChecksumPartLength +

-                                      messageBytes.Length;

-

-            messageLength = new byte[MessageLengthPartLength];

-            Array.Copy(ms.ToArray(), secondMessageOffset + MessageLengthPartOffset, messageLength, 0, MessageLengthPartLength);

-            if (BitConverter.IsLittleEndian)

-            {

-                Array.Reverse(messageLength);

-            }

-

-            Assert.AreEqual(MagicNumberPartLength + AttributesPartLength + ChecksumPartLength + messageBytes.Length, BitConverter.ToInt32(messageLength, 0));

-

-            Assert.AreEqual(1, ms.ToArray()[secondMessageOffset + MagicNumberPartOffset]);    // default magic number should be 1

-

-            checksumPart = new byte[ChecksumPartLength];

-            Array.Copy(ms.ToArray(), secondMessageOffset + ChecksumPartOffset, checksumPart, 0, ChecksumPartLength);

-            Assert.AreEqual(Crc32Hasher.Compute(messageBytes), checksumPart);

-

-            dataPart = new byte[messageBytes.Length];

-            Array.Copy(ms.ToArray(), secondMessageOffset + DataPartOffset, dataPart, 0, messageBytes.Length);

-            Assert.AreEqual(messageBytes, dataPart);

-        }

-

-        [Test]

-        public void SetSizeValid()

-        {

-            byte[] messageBytes = new byte[] { 1, 2, 3, 4, 5 };

-            Message msg1 = new Message(messageBytes);

-            Message msg2 = new Message(messageBytes);

-            MessageSet messageSet = new BufferedMessageSet(new List<Message>() { msg1, msg2 });

-            Assert.AreEqual(

-                2 * (MessageLengthPartLength + MagicNumberPartLength + AttributesPartLength + ChecksumPartLength + messageBytes.Length),

-                messageSet.SetSize);

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/MessageTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/MessageTests.cs
deleted file mode 100644
index 479caa8..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/MessageTests.cs
+++ /dev/null
@@ -1,138 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Tests

-{

-    using System;

-    using System.IO;

-    using System.Linq;

-    using System.Text;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Utils;

-    using NUnit.Framework;

-

-    /// <summary>

-    /// Tests for the <see cref="Message"/> class.

-    /// </summary>

-    [TestFixture]

-    public class MessageTests

-    {

-        private readonly int ChecksumPartLength = 4;

-

-        private readonly int MagicNumberPartOffset = 0;

-        private readonly int ChecksumPartOffset = 2;

-        private readonly int DataPartOffset = 6;

-

-        /// <summary>

-        /// Demonstrates a properly parsed message.

-        /// </summary>

-        [Test]

-        public void ParseFromValid()

-        {

-            Crc32Hasher crc32 = new Crc32Hasher();

-

-            string payload = "kafka";

-            byte magic = 1;

-            byte attributes = 0;

-            byte[] payloadData = Encoding.UTF8.GetBytes(payload);

-            byte[] payloadSize = BitConverter.GetBytes(payloadData.Length);

-            byte[] checksum = crc32.ComputeHash(payloadData);

-            byte[] messageData = new byte[payloadData.Length + 2 + payloadSize.Length + checksum.Length];

-

-            Buffer.BlockCopy(payloadSize, 0, messageData, 0, payloadSize.Length);

-            messageData[4] = magic;

-            messageData[5] = attributes;

-            Buffer.BlockCopy(checksum, 0, messageData, payloadSize.Length + 2, checksum.Length);

-            Buffer.BlockCopy(payloadData, 0, messageData, payloadSize.Length + 2 + checksum.Length, payloadData.Length);

-

-            Message message = Message.ParseFrom(messageData);

-

-            Assert.IsNotNull(message);

-            Assert.AreEqual(magic, message.Magic);

-            Assert.IsTrue(payloadData.SequenceEqual(message.Payload));

-            Assert.IsTrue(checksum.SequenceEqual(message.Checksum));

-        }

-

-        /// <summary>

-        /// Ensure that the bytes returned from the message are in valid kafka sequence.

-        /// </summary>

-        [Test]

-        public void GetBytesValidSequence()

-        {

-            Message message = new Message(new byte[10], CompressionCodecs.NoCompressionCodec);

-

-            MemoryStream ms = new MemoryStream();

-            message.WriteTo(ms);

-

-            // len(payload) + 1 + 1 + 4

-            Assert.AreEqual(16, ms.Length);

-

-            // first byte = the magic number

-            Assert.AreEqual((byte)1, ms.ToArray()[0]);

-

-            // attributes

-            Assert.AreEqual((byte)0, ms.ToArray()[1]);

-

-            // next 4 bytes = the checksum

-            Assert.IsTrue(message.Checksum.SequenceEqual(ms.ToArray().Skip(2).Take(4).ToArray<byte>()));

-

-            // remaining bytes = the payload

-            Assert.AreEqual(10, ms.ToArray().Skip(6).ToArray<byte>().Length);

-        }

-

-        [Test]

-        public void WriteToValidSequenceForDefaultConstructor()

-        {

-            byte[] messageBytes = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 };

-            Message message = new Message(messageBytes);

-            MemoryStream ms = new MemoryStream();

-            message.WriteTo(ms);

-

-            Assert.AreEqual(1, ms.ToArray()[MagicNumberPartOffset]);    // default magic number should be 1

-

-            byte[] checksumPart = new byte[ChecksumPartLength];

-            Array.Copy(ms.ToArray(), ChecksumPartOffset, checksumPart, 0, ChecksumPartLength);

-            Assert.AreEqual(Crc32Hasher.Compute(messageBytes), checksumPart);

-

-            message.ToString();

-

-            byte[] dataPart = new byte[messageBytes.Length];

-            Array.Copy(ms.ToArray(), DataPartOffset, dataPart, 0, messageBytes.Length);

-            Assert.AreEqual(messageBytes, dataPart);

-        }

-

-        [Test]

-        public void WriteToValidSequenceForCustomConstructor()

-        {

-            byte[] messageBytes = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 };

-            byte[] customChecksum = new byte[] { 3, 4, 5, 6 };

-            Message message = new Message(messageBytes, customChecksum);

-            MemoryStream ms = new MemoryStream();

-            message.WriteTo(ms);

-

-            Assert.AreEqual((byte)1, ms.ToArray()[MagicNumberPartOffset]);

-

-            byte[] checksumPart = new byte[ChecksumPartLength];

-            Array.Copy(ms.ToArray(), ChecksumPartOffset, checksumPart, 0, ChecksumPartLength);

-            Assert.AreEqual(customChecksum, checksumPart);

-

-            byte[] dataPart = new byte[messageBytes.Length];

-            Array.Copy(ms.ToArray(), DataPartOffset, dataPart, 0, messageBytes.Length);

-            Assert.AreEqual(messageBytes, dataPart);

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Producers/PartitioningTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Producers/PartitioningTests.cs
deleted file mode 100644
index db4a403..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Producers/PartitioningTests.cs
+++ /dev/null
@@ -1,73 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Tests.Producers

-{

-    using Kafka.Client.Cfg;

-    using Kafka.Client.Cluster;

-    using Kafka.Client.Producers.Partitioning;

-    using NUnit.Framework;

-    using System.Collections.Generic;

-

-    [TestFixture]

-    public class PartitioningTests

-    {

-        private ProducerConfiguration config;

-

-        [TestFixtureSetUp]

-        public void SetUp()

-        {

-            var brokers = new List<BrokerConfiguration>();

-            brokers.Add(new BrokerConfiguration { BrokerId = 1, Host = "192.168.0.1", Port = 1234 });

-            brokers.Add(new BrokerConfiguration { BrokerId = 2, Host = "192.168.0.2", Port = 3456 });

-            config = new ProducerConfiguration(brokers);

-        }

-

-        [Test]

-        public void BrokerPartitionInfoGetAllBrokerInfoTest()

-        {

-            IBrokerPartitionInfo brokerPartitionInfo = new ConfigBrokerPartitionInfo(config);

-            var allInfo = brokerPartitionInfo.GetAllBrokerInfo();

-            this.MakeAssertionsForBroker(allInfo[1], 1, "192.168.0.1", 1234);

-            this.MakeAssertionsForBroker(allInfo[2], 2, "192.168.0.2", 3456);

-        }

-

-        [Test]

-        public void BrokerPartitionInfoGetPartitionInfo()

-        {

-            IBrokerPartitionInfo brokerPartitionInfo = new ConfigBrokerPartitionInfo(config);

-            var broker = brokerPartitionInfo.GetBrokerInfo(1);

-            this.MakeAssertionsForBroker(broker, 1, "192.168.0.1", 1234);

-        }

-

-        [Test]

-        public void BrokerPartitionInfoGetPartitionInfoReturnsNullOnNonexistingBrokerId()

-        {

-            IBrokerPartitionInfo brokerPartitionInfo = new ConfigBrokerPartitionInfo(config);

-            var broker = brokerPartitionInfo.GetBrokerInfo(45);

-            Assert.IsNull(broker);

-        }

-

-        private void MakeAssertionsForBroker(Broker broker, int expectedId, string expectedHost, int expectedPort)

-        {

-            Assert.IsNotNull(broker);

-            Assert.AreEqual(expectedId, broker.Id);

-            Assert.AreEqual(expectedHost, broker.Host);

-            Assert.AreEqual(expectedPort, broker.Port);

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Properties/AssemblyInfo.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Properties/AssemblyInfo.cs
deleted file mode 100644
index efe653d..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Properties/AssemblyInfo.cs
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-using System.Reflection;
-using System.Runtime.InteropServices;
-
-// General Information about an assembly is controlled through the following 
-// set of attributes. Change these attribute values to modify the information
-// associated with an assembly.
-[assembly: AssemblyTitle("Kafka.Client.Tests")]
-[assembly: AssemblyDescription("")]
-[assembly: AssemblyConfiguration("")]
-[assembly: AssemblyCompany("Microsoft")]
-[assembly: AssemblyProduct("Kafka.Client.Tests")]
-[assembly: AssemblyCopyright("Copyright © Microsoft 2011")]
-[assembly: AssemblyTrademark("")]
-[assembly: AssemblyCulture("")]
-
-// Setting ComVisible to false makes the types in this assembly not visible 
-// to COM components.  If you need to access a type in this assembly from 
-// COM, set the ComVisible attribute to true on that type.
-[assembly: ComVisible(false)]
-
-// The following GUID is for the ID of the typelib if this project is exposed to COM
-[assembly: Guid("bf361ee0-5cbb-4fd6-bded-67bedcb603b8")]
-
-// Version information for an assembly consists of the following four values:
-//
-//      Major Version
-//      Minor Version 
-//      Build Number
-//      Revision
-//
-// You can specify all the values or you can default the Build and Revision Numbers 
-// by using the '*' as shown below:
-// [assembly: AssemblyVersion("1.0.*")]
-[assembly: AssemblyVersion("1.0.0.0")]
-[assembly: AssemblyFileVersion("1.0.0.0")]
diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/FetchRequestTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/FetchRequestTests.cs
deleted file mode 100644
index ab600da..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/FetchRequestTests.cs
+++ /dev/null
@@ -1,76 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Tests.Request

-{

-    using System;

-    using System.IO;

-    using System.Linq;

-    using System.Text;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Utils;

-    using NUnit.Framework;

-

-    /// <summary>

-    /// Tests for the <see cref="FetchRequest"/> class.

-    /// </summary>

-    [TestFixture]

-    public class FetchRequestTests

-    {

-        /// <summary>

-        /// Tests to ensure that the request follows the expected structure.

-        /// </summary>

-        [Test]

-        public void GetBytesValidStructure()

-        {

-            string topicName = "topic";

-            FetchRequest request = new FetchRequest(topicName, 1, 10L, 100);

-

-            // REQUEST TYPE ID + TOPIC LENGTH + TOPIC + PARTITION + OFFSET + MAX SIZE

-            int requestSize = 2 + 2 + topicName.Length + 4 + 8 + 4;

-

-            MemoryStream ms = new MemoryStream();

-            request.WriteTo(ms);

-            byte[] bytes = ms.ToArray();

-            Assert.IsNotNull(bytes);

-

-            // add 4 bytes for the length of the message at the beginning

-            Assert.AreEqual(requestSize + 4, bytes.Length);

-

-            // first 4 bytes = the message length

-            Assert.AreEqual(25, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Take(4).ToArray<byte>()), 0));

-

-            // next 2 bytes = the request type

-            Assert.AreEqual((short)RequestTypes.Fetch, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(4).Take(2).ToArray<byte>()), 0));

-

-            // next 2 bytes = the topic length

-            Assert.AreEqual((short)topicName.Length, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(6).Take(2).ToArray<byte>()), 0));

-

-            // next few bytes = the topic

-            Assert.AreEqual(topicName, Encoding.ASCII.GetString(bytes.Skip(8).Take(topicName.Length).ToArray<byte>()));

-

-            // next 4 bytes = the partition

-            Assert.AreEqual(1, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Skip(8 + topicName.Length).Take(4).ToArray<byte>()), 0));

-

-            // next 8 bytes = the offset

-            Assert.AreEqual(10, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Skip(12 + topicName.Length).Take(8).ToArray<byte>()), 0));

-

-            // last 4 bytes = the max size

-            Assert.AreEqual(100, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Skip(20 + +topicName.Length).Take(4).ToArray<byte>()), 0));

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/MultiFetchRequestTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/MultiFetchRequestTests.cs
deleted file mode 100644
index 600254b..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/MultiFetchRequestTests.cs
+++ /dev/null
@@ -1,78 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Tests.Request

-{

-    using System;

-    using System.Collections.Generic;

-    using System.IO;

-    using System.Linq;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Utils;

-    using NUnit.Framework;

-

-    /// <summary>

-    /// Tests for the <see cref="MultiFetchRequest"/> class.

-    /// </summary>

-    [TestFixture]

-    public class MultiFetchRequestTests

-    {

-        /// <summary>

-        /// Tests for an invalid multi-request constructor with no requests given.

-        /// </summary>

-        [Test]

-        public void ThrowsExceptionWhenNullArgumentPassedToTheConstructor()

-        {

-            MultiFetchRequest multiRequest;

-            Assert.Throws<ArgumentNullException>(() => multiRequest = new MultiFetchRequest(null));

-        }

-

-        /// <summary>

-        /// Test to ensure a valid format in the returned byte array as expected by Kafka.

-        /// </summary>

-        [Test]

-        public void GetBytesValidFormat()

-        {

-            List<FetchRequest> requests = new List<FetchRequest>

-            { 

-                new FetchRequest("topic a", 0, 0),

-                new FetchRequest("topic a", 0, 0),

-                new FetchRequest("topic b", 0, 0),

-                new FetchRequest("topic c", 0, 0)

-            };

-

-            MultiFetchRequest request = new MultiFetchRequest(requests);

-

-            // format = len(request) + requesttype + requestcount + requestpackage

-            // total byte count = 4 + (2 + 2 + 100)

-            MemoryStream ms = new MemoryStream();

-            request.WriteTo(ms);

-            byte[] bytes = ms.ToArray();

-            Assert.IsNotNull(bytes);

-            Assert.AreEqual(108, bytes.Length);

-

-            // first 4 bytes = the length of the request

-            Assert.AreEqual(104, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Take(4).ToArray<byte>()), 0));

-

-            // next 2 bytes = the RequestType which in this case should be MultiFetch

-            Assert.AreEqual((short)RequestTypes.MultiFetch, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(4).Take(2).ToArray<byte>()), 0));

-

-            // next 2 bytes = the number of fetch requests

-            Assert.AreEqual((short)4, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(6).Take(2).ToArray<byte>()), 0));

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/MultiProducerRequestTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/MultiProducerRequestTests.cs
deleted file mode 100644
index 9191a66..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/MultiProducerRequestTests.cs
+++ /dev/null
@@ -1,69 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Tests.Request

-{

-    using System;

-    using System.Collections.Generic;

-    using System.IO;

-    using System.Linq;

-    using Kafka.Client.Messages;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Utils;

-    using NUnit.Framework;

-

-    /// <summary>

-    /// Tests for the <see cref="MultiProducerRequest"/> class.

-    /// </summary>

-    [TestFixture]

-    public class MultiProducerRequestTests

-    {

-        /// <summary>

-        /// Test to ensure a valid format in the returned byte array as expected by Kafka.

-        /// </summary>

-        [Test]

-        public void WriteToValidFormat()

-        {

-            List<ProducerRequest> requests = new List<ProducerRequest>

-            { 

-                new ProducerRequest("topic a", 0, new List<Message> { new Message(new byte[10]) }),

-                new ProducerRequest("topic a", 0, new List<Message> { new Message(new byte[10]) }),

-                new ProducerRequest("topic b", 0, new List<Message> { new Message(new byte[10]) }),

-                new ProducerRequest("topic c", 0, new List<Message> { new Message(new byte[10]) })

-            };

-

-            MultiProducerRequest request = new MultiProducerRequest(requests);

-

-            // format = len(request) + requesttype + requestcount + requestpackage

-            // total byte count = 4 + (2 + 2 + 148)

-            MemoryStream ms = new MemoryStream();

-            request.WriteTo(ms);

-            byte[] bytes = ms.ToArray();

-            Assert.IsNotNull(bytes);

-            Assert.AreEqual(156, bytes.Length);

-

-            // first 4 bytes = the length of the request

-            Assert.AreEqual(152, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Take(4).ToArray<byte>()), 0));

-

-            // next 2 bytes = the RequestType which in this case should be MultiProduce

-            Assert.AreEqual((short)RequestTypes.MultiProduce, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(4).Take(2).ToArray<byte>()), 0));

-

-            // next 2 bytes = the number of messages

-            Assert.AreEqual((short)4, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(6).Take(2).ToArray<byte>()), 0));

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/OffsetRequestTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/OffsetRequestTests.cs
deleted file mode 100644
index c451fcd..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/OffsetRequestTests.cs
+++ /dev/null
@@ -1,73 +0,0 @@
-/**

- * Licensed to the Apache Software Foundation (ASF) under one or more

- * contributor license agreements.  See the NOTICE file distributed with

- * this work for additional information regarding copyright ownership.

- * The ASF licenses this file to You under the Apache License, Version 2.0

- * (the "License"); you may not use this file except in compliance with

- * the License.  You may obtain a copy of the License at

- *

- *    http://www.apache.org/licenses/LICENSE-2.0

- *

- * Unless required by applicable law or agreed to in writing, software

- * distributed under the License is distributed on an "AS IS" BASIS,

- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

- * See the License for the specific language governing permissions and

- * limitations under the License.

- */

-

-namespace Kafka.Client.Tests.Request

-{

-    using System;

-    using System.IO;

-    using System.Linq;

-    using System.Text;

-    using Kafka.Client.Requests;

-    using Kafka.Client.Utils;

-    using NUnit.Framework;

-

-    /// <summary>

-    /// Tests the <see cref="OffsetRequest"/> class.

-    /// </summary>

-    [TestFixture]

-    public class OffsetRequestTests

-    {

-        /// <summary>

-        /// Validates the list of bytes meet Kafka expectations.

-        /// </summary>

-        [Test]

-        public void GetBytesValid()

-        {

-            string topicName = "topic";

-            OffsetRequest request = new OffsetRequest(topicName, 0, OffsetRequest.LatestTime, 10);

-

-            // format = len(request) + requesttype + len(topic) + topic + partition + time + max

-            // total byte count = 4 + (2 + 2 + 5 + 4 + 8 + 4)

-            MemoryStream ms = new MemoryStream();

-            request.WriteTo(ms);

-            byte[] bytes = ms.ToArray();

-            Assert.IsNotNull(bytes);

-            Assert.AreEqual(29, bytes.Length);

-

-            // first 4 bytes = the length of the request

-            Assert.AreEqual(25, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Take(4).ToArray<byte>()), 0));

-

-            // next 2 bytes = the RequestType which in this case should be Offsets

-            Assert.AreEqual((short)RequestTypes.Offsets, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(4).Take(2).ToArray<byte>()), 0));

-

-            // next 2 bytes = the length of the topic

-            Assert.AreEqual((short)5, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(6).Take(2).ToArray<byte>()), 0));

-

-            // next 5 bytes = the topic

-            Assert.AreEqual(topicName, Encoding.ASCII.GetString(bytes.Skip(8).Take(5).ToArray<byte>()));

-

-            // next 4 bytes = the partition

-            Assert.AreEqual(0, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Skip(13).Take(4).ToArray<byte>()), 0));

-

-            // next 8 bytes = time

-            Assert.AreEqual(OffsetRequest.LatestTime, BitConverter.ToInt64(BitWorks.ReverseBytes(bytes.Skip(17).Take(8).ToArray<byte>()), 0));

-

-            // next 4 bytes = max offsets

-            Assert.AreEqual(10, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Skip(25).Take(4).ToArray<byte>()), 0));

-        }

-    }

-}

diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/ProducerRequestTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/ProducerRequestTests.cs
deleted file mode 100644
index 97761e2..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Request/ProducerRequestTests.cs
+++ /dev/null
@@ -1,77 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Tests.Request
-{
-    using System;
-    using System.Collections.Generic;
-    using System.IO;
-    using System.Linq;
-    using System.Text;
-    using Kafka.Client.Messages;
-    using Kafka.Client.Requests;
-    using Kafka.Client.Utils;
-    using NUnit.Framework;
-
-    /// <summary>
-    /// Tests for the <see cref="ProducerRequest"/> class.
-    /// </summary>
-    [TestFixture]
-    public class ProducerRequestTests
-    {
-        /// <summary>
-        /// Test to ensure a valid format in the returned byte array as expected by Kafka.
-        /// </summary>
-        [Test]
-        public void WriteToValidFormat()
-        {
-            string topicName = "topic";
-            ProducerRequest request = new ProducerRequest(
-                topicName, 0, new List<Message> { new Message(new byte[10]) });
-
-            // format = len(request) + requesttype + len(topic) + topic + partition + len(messagepack) + message
-            // total byte count = (4 + 2 + 2 + 5 + 4 + 4 + 20)
-            System.IO.MemoryStream ms = new MemoryStream();
-            request.WriteTo(ms);
-
-            byte[] bytes = ms.ToArray();
-            Assert.IsNotNull(bytes);
-            Assert.AreEqual(41, bytes.Length);
-
-            // first 4 bytes = the length of the request
-            Assert.AreEqual(37, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Take(4).ToArray<byte>()), 0));
-
-            // next 2 bytes = the RequestType which in this case should be Produce
-            Assert.AreEqual((short)RequestTypes.Produce, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(4).Take(2).ToArray<byte>()), 0));
-
-            // next 2 bytes = the length of the topic
-            Assert.AreEqual((short)5, BitConverter.ToInt16(BitWorks.ReverseBytes(bytes.Skip(6).Take(2).ToArray<byte>()), 0));
-
-            // next 5 bytes = the topic
-            Assert.AreEqual(topicName, Encoding.ASCII.GetString(bytes.Skip(8).Take(5).ToArray<byte>()));
-
-            // next 4 bytes = the partition
-            Assert.AreEqual(0, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Skip(13).Take(4).ToArray<byte>()), 0));
-
-            // next 4 bytes = the length of the individual messages in the pack
-            Assert.AreEqual(20, BitConverter.ToInt32(BitWorks.ReverseBytes(bytes.Skip(17).Take(4).ToArray<byte>()), 0));
-
-            // final bytes = the individual messages in the pack
-            Assert.AreEqual(20, bytes.Skip(21).ToArray<byte>().Length);
-        }
-    }
-}
diff --git a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Util/BitWorksTests.cs b/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Util/BitWorksTests.cs
deleted file mode 100644
index 9cd23ec..0000000
--- a/trunk/clients/csharp/src/Kafka/Tests/Kafka.Client.Tests/Util/BitWorksTests.cs
+++ /dev/null
@@ -1,121 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-namespace Kafka.Client.Tests.Util
-{
-    using System;
-    using Kafka.Client.Utils;
-    using NUnit.Framework;
-
-    /// <summary>
-    /// Tests for <see cref="BitWorks"/> utility class.
-    /// </summary>
-    [TestFixture]
-    public class BitWorksTests
-    {
-        /// <summary>
-        /// Ensures bytes are returned reversed.
-        /// </summary>
-        [Test]
-        public void GetBytesReversedShortValid()
-        {
-            short val = (short)100;
-            byte[] normal = BitConverter.GetBytes(val);
-            byte[] reversed = BitWorks.GetBytesReversed(val);
-
-            TestReversedArray(normal, reversed);
-        }
-
-        /// <summary>
-        /// Ensures bytes are returned reversed.
-        /// </summary>
-        [Test]
-        public void GetBytesReversedIntValid()
-        {
-            int val = 100;
-            byte[] normal = BitConverter.GetBytes(val);
-            byte[] reversed = BitWorks.GetBytesReversed(val);
-
-            TestReversedArray(normal, reversed);
-        }
-
-        /// <summary>
-        /// Ensures bytes are returned reversed.
-        /// </summary>
-        [Test]
-        public void GetBytesReversedLongValid()
-        {
-            long val = 100L;
-            byte[] normal = BitConverter.GetBytes(val);
-            byte[] reversed = BitWorks.GetBytesReversed(val);
-
-            TestReversedArray(normal, reversed);
-        }
-
-        /// <summary>
-        /// Null array will reverse to a null.
-        /// </summary>
-        [Test]
-        public void ReverseBytesNullArray()
-        {
-            byte[] arr = null;
-            Assert.IsNull(BitWorks.ReverseBytes(arr));
-        }
-
-        /// <summary>
-        /// Zero length array will reverse to a zero length array.
-        /// </summary>
-        [Test]
-        public void ReverseBytesZeroLengthArray()
-        {
-            byte[] arr = new byte[0];
-            byte[] reversedArr = BitWorks.ReverseBytes(arr);
-            Assert.IsNotNull(reversedArr);
-            Assert.AreEqual(0, reversedArr.Length);
-        }
-
-        /// <summary>
-        /// Array is reversed.
-        /// </summary>
-        [Test]
-        public void ReverseBytesValid()
-        {
-            byte[] arr = BitConverter.GetBytes((short)1);
-            byte[] original = new byte[2];
-            arr.CopyTo(original, 0);
-            byte[] reversedArr = BitWorks.ReverseBytes(arr);
-
-            TestReversedArray(original, reversedArr);
-        }
-
-        /// <summary>
-        /// Performs asserts for two arrays that should be exactly the same, but values
-        /// in one are in reverse order of the other.
-        /// </summary>
-        /// <param name="normal">The "normal" array.</param>
-        /// <param name="reversed">The array that is in reverse order to the "normal" one.</param>
-        private static void TestReversedArray(byte[] normal, byte[] reversed)
-        {
-            Assert.IsNotNull(reversed);
-            Assert.AreEqual(normal.Length, reversed.Length);
-            for (int ix = 0; ix < normal.Length; ix++)
-            {
-                Assert.AreEqual(normal[ix], reversed[reversed.Length - 1 - ix]);
-            }
-        }
-    }
-}
diff --git a/trunk/clients/go/.gitignore b/trunk/clients/go/.gitignore
deleted file mode 100644
index bd9ecc3..0000000
--- a/trunk/clients/go/.gitignore
+++ /dev/null
@@ -1,13 +0,0 @@
-_go_.6
-_obj
-6.out
-_gotest_.6
-_test
-_testmain.go
-_testmain.6
-tools/*/_obj
-tools/*/_go_.6
-tools/consumer/consumer
-tools/publisher/publisher
-tools/consumer/test.txt
-tools/offsets/offsets
diff --git a/trunk/clients/go/LICENSE b/trunk/clients/go/LICENSE
deleted file mode 100644
index 3797017..0000000
--- a/trunk/clients/go/LICENSE
+++ /dev/null
@@ -1,208 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright (c) 2011 NeuStar, Inc.
-   All rights reserved.
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-   
-   NeuStar, the Neustar logo and related names and logos are registered
-   trademarks, service marks or tradenames of NeuStar, Inc. All other 
-   product names, company names, marks, logos and symbols may be trademarks
-   of their respective owners.
diff --git a/trunk/clients/go/Makefile b/trunk/clients/go/Makefile
deleted file mode 100644
index 3d7da05..0000000
--- a/trunk/clients/go/Makefile
+++ /dev/null
@@ -1,26 +0,0 @@
-include $(GOROOT)/src/Make.inc
-
-TARG=kafka
-GOFILES=\
-	src/kafka.go\
-	src/message.go\
-	src/converts.go\
-	src/consumer.go\
-	src/payload_codec.go\
-	src/publisher.go\
-	src/timing.go\
-	src/request.go\
-
-include $(GOROOT)/src/Make.pkg
-
-tools: force
-	make -C tools/consumer clean all
-	make -C tools/publisher clean all
-	make -C tools/offsets clean all
-
-format:
-	gofmt -w -tabwidth=2 -tabindent=false src/*.go tools/consumer/*.go  tools/publisher/*.go kafka_test.go
-
-full: format clean install tools
-
-.PHONY: force 
diff --git a/trunk/clients/go/README.md b/trunk/clients/go/README.md
deleted file mode 100644
index 1296acd..0000000
--- a/trunk/clients/go/README.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Kafka.go - Publisher & Consumer for Kafka in Go #
-
-Kafka is a distributed publish-subscribe messaging system: (http://incubator.apache.org/kafka/)
-
-Go language: (http://golang.org/) <br/>
-
-## Get up and running ##
-
-Install go: <br/>
-For more info see: http://golang.org/doc/install.html#install 
-
-Make sure to set your GOROOT properly (http://golang.org/doc/install.html#environment).
-
-Install kafka.go package: <br/>
-<code>make install</code>
-<br/>
-Make the tools (publisher & consumer) <br/>
-<code>make tools</code>
-<br/>
-Start zookeeper, Kafka server <br/>
-For more info on Kafka, see: http://incubator.apache.org/kafka/quickstart.html
-
-
-
-## Tools ##
-
-Start a consumer:
-<pre><code>
-   ./tools/consumer/consumer -topic test -consumeforever
-  Consuming Messages :
-  From: localhost:9092, topic: test, partition: 0
-   ---------------------- 
-</code></pre>
-
-Now the consumer will just poll until a message is received.
-  
-Publish a message:
-<pre><code>
-  ./tools/publisher/publisher -topic test -message "Hello World"
-</code></pre>
-
-The consumer should output the message.
-
-## API Usage ##
-
-### Publishing ###
-
-
-<pre><code>
-
-broker := kafka.NewBrokerPublisher("localhost:9092", "mytesttopic", 0)
-broker.Publish(kafka.NewMessage([]byte("tesing 1 2 3")))
-
-</code></pre>
-
-
-### Publishing Compressed Messages ###
-
-<pre><code>
-
-broker := kafka.NewBrokerPublisher("localhost:9092", "mytesttopic", 0)
-broker.Publish(kafka.NewCompressedMessage([]byte("tesing 1 2 3")))
-
-</code></pre>
-
-
-### Consumer ###
-
-<pre><code>
-broker := kafka.NewBrokerConsumer("localhost:9092", "mytesttopic", 0, 0, 1048576)
-broker.Consume(func(msg *kafka.Message) { msg.Print() })
-
-</code></pre>
-
-Or the consumer can use a channel based approach:
-
-<pre><code>
-broker := kafka.NewBrokerConsumer("localhost:9092", "mytesttopic", 0, 0, 1048576)
-go broker.ConsumeOnChannel(msgChan, 10, quitChan)
-
-</code></pre>
-
-### Consuming Offsets ###
-
-<pre><code>
-broker := kafka.NewBrokerOffsetConsumer("localhost:9092", "mytesttopic", 0)
-offsets, err := broker.GetOffsets(-1, 1)
-</code></pre>
-
-
-### Contact ###
-
-jeffreydamick (at) gmail (dot) com
-
-http://twitter.com/jeffreydamick
-
-Big thank you to [NeuStar](http://neustar.biz) for sponsoring this work.
-
-
diff --git a/trunk/clients/go/kafka_test.go b/trunk/clients/go/kafka_test.go
deleted file mode 100644
index 05944bc..0000000
--- a/trunk/clients/go/kafka_test.go
+++ /dev/null
@@ -1,277 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "testing"
-  //"fmt"
-  "bytes"
-  "compress/gzip"
-)
-
-func TestMessageCreation(t *testing.T) {
-  payload := []byte("testing")
-  msg := NewMessage(payload)
-  if msg.magic != 1 {
-    t.Errorf("magic incorrect")
-    t.Fail()
-  }
-
-  // generated by kafka-rb: e8 f3 5a 06
-  expected := []byte{0xe8, 0xf3, 0x5a, 0x06}
-  if !bytes.Equal(expected, msg.checksum[:]) {
-    t.Fail()
-  }
-}
-
-func TestMagic0MessageEncoding(t *testing.T) {
-  // generated by kafka-rb:
-  // test the old message format
-  expected := []byte{0x00, 0x00, 0x00, 0x0c, 0x00, 0xe8, 0xf3, 0x5a, 0x06, 0x74, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x67}
-  length, msgsDecoded := Decode(expected, DefaultCodecsMap)
-
-  if length == 0 || msgsDecoded == nil {
-    t.Fail()
-  }
-  msgDecoded := msgsDecoded[0]
-
-  payload := []byte("testing")
-  if !bytes.Equal(payload, msgDecoded.payload) {
-    t.Fatal("bytes not equal")
-  }
-  chksum := []byte{0xE8, 0xF3, 0x5A, 0x06}
-  if !bytes.Equal(chksum, msgDecoded.checksum[:]) {
-    t.Fatal("checksums do not match")
-  }
-  if msgDecoded.magic != 0 {
-    t.Fatal("magic incorrect")
-  }
-}
-
-func TestMessageEncoding(t *testing.T) {
-
-  payload := []byte("testing")
-  msg := NewMessage(payload)
-
-  // generated by kafka-rb:
-  expected := []byte{0x00, 0x00, 0x00, 0x0d, 0x01, 0x00, 0xe8, 0xf3, 0x5a, 0x06, 0x74, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x67}
-  if !bytes.Equal(expected, msg.Encode()) {
-    t.Fatalf("expected: % X\n but got: % X", expected, msg.Encode())
-  }
-
-  // verify round trip
-  length, msgsDecoded := DecodeWithDefaultCodecs(msg.Encode())
-
-  if length == 0 || msgsDecoded == nil {
-    t.Fatal("message is nil")
-  }
-  msgDecoded := msgsDecoded[0]
-
-  if !bytes.Equal(msgDecoded.payload, payload) {
-    t.Fatal("bytes not equal")
-  }
-  chksum := []byte{0xE8, 0xF3, 0x5A, 0x06}
-  if !bytes.Equal(chksum, msgDecoded.checksum[:]) {
-    t.Fatal("checksums do not match")
-  }
-  if msgDecoded.magic != 1 {
-    t.Fatal("magic incorrect")
-  }
-}
-
-func TestCompressedMessageEncodingCompare(t *testing.T) {
-  payload := []byte("testing")
-  uncompressedMsgBytes := NewMessage(payload).Encode()
-  
-  msgGzipBytes := NewMessageWithCodec(uncompressedMsgBytes, DefaultCodecsMap[GZIP_COMPRESSION_ID]).Encode()
-  msgDefaultBytes := NewCompressedMessage(payload).Encode()
-  if !bytes.Equal(msgDefaultBytes, msgGzipBytes) {
-    t.Fatalf("uncompressed: % X \npayload: % X bytes not equal", msgDefaultBytes, msgGzipBytes)
-  }
-}
-
-func TestCompressedMessageEncoding(t *testing.T) {
-  payload := []byte("testing")
-  uncompressedMsgBytes := NewMessage(payload).Encode()
-  
-  msg := NewMessageWithCodec(uncompressedMsgBytes, DefaultCodecsMap[GZIP_COMPRESSION_ID])
-
-  expectedPayload := []byte{0x1F, 0x8B, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04,
-    0xFF, 0x62, 0x60, 0x60, 0xE0, 0x65, 0x64, 0x78, 0xF1, 0x39, 0x8A,
-    0xAD, 0x24, 0xB5, 0xB8, 0x24, 0x33, 0x2F, 0x1D, 0x10, 0x00, 0x00,
-    0xFF, 0xFF, 0x0C, 0x6A, 0x82, 0x91, 0x11, 0x00, 0x00, 0x00}
-
-  expectedHeader := []byte{0x00, 0x00, 0x00, 0x2F, 0x01, 0x01, 0x07, 0xFD, 0xC3, 0x76}
-
-  expected := make([]byte, len(expectedHeader)+len(expectedPayload))
-  n := copy(expected, expectedHeader)
-  copy(expected[n:], expectedPayload)
-
-  if msg.compression != 1 {
-    t.Fatalf("expected compression: 1 but got: %b", msg.compression)
-  }
-
-  zipper, _ := gzip.NewReader(bytes.NewBuffer(msg.payload))
-  uncompressed := make([]byte, 100)
-  n, _ = zipper.Read(uncompressed)
-  uncompressed = uncompressed[:n]
-  zipper.Close()
-
-  if !bytes.Equal(uncompressed, uncompressedMsgBytes) {
-    t.Fatalf("uncompressed: % X \npayload: % X bytes not equal", uncompressed, uncompressedMsgBytes)
-  }
-
-  if !bytes.Equal(expected, msg.Encode()) {
-    t.Fatalf("expected: % X\n but got: % X", expected, msg.Encode())
-  }
-
-  // verify round trip
-  length, msgsDecoded := Decode(msg.Encode(), DefaultCodecsMap)
-
-  if length == 0 || msgsDecoded == nil {
-    t.Fatal("message is nil")
-  }
-  msgDecoded := msgsDecoded[0]
-
-  if !bytes.Equal(msgDecoded.payload, payload) {
-    t.Fatal("bytes not equal")
-  }
-  chksum := []byte{0xE8, 0xF3, 0x5A, 0x06}
-  if !bytes.Equal(chksum, msgDecoded.checksum[:]) {
-    t.Fatalf("checksums do not match, expected: % X but was: % X", 
-      chksum, msgDecoded.checksum[:])
-  }
-  if msgDecoded.magic != 1 {
-    t.Fatal("magic incorrect")
-  }
-}
-
-func TestLongCompressedMessageRoundTrip(t *testing.T) {
-  payloadBuf := bytes.NewBuffer([]byte{})
-  // make the test bigger than buffer allocated in the Decode
-  for i := 0; i < 15; i++ {
-    payloadBuf.Write([]byte("testing123 "))
-  }
-
-  uncompressedMsgBytes := NewMessage(payloadBuf.Bytes()).Encode()
-  msg := NewMessageWithCodec(uncompressedMsgBytes, DefaultCodecsMap[GZIP_COMPRESSION_ID])
-  
-  zipper, _ := gzip.NewReader(bytes.NewBuffer(msg.payload))
-  uncompressed := make([]byte, 200)
-  n, _ := zipper.Read(uncompressed)
-  uncompressed = uncompressed[:n]
-  zipper.Close()
-
-  if !bytes.Equal(uncompressed, uncompressedMsgBytes) {
-    t.Fatalf("uncompressed: % X \npayload: % X bytes not equal", 
-      uncompressed, uncompressedMsgBytes)
-  }
-
-  // verify round trip
-  length, msgsDecoded := Decode(msg.Encode(), DefaultCodecsMap)
-
-  if length == 0 || msgsDecoded == nil {
-    t.Fatal("message is nil")
-  }
-  msgDecoded := msgsDecoded[0]
-
-  if !bytes.Equal(msgDecoded.payload, payloadBuf.Bytes()) {
-    t.Fatal("bytes not equal")
-  }
-  if msgDecoded.magic != 1 {
-    t.Fatal("magic incorrect")
-  }
-}
-
-func TestMultipleCompressedMessages(t *testing.T) {
-  msgs := []*Message{NewMessage([]byte("testing")), 
-    NewMessage([]byte("multiple")), 
-    NewMessage([]byte("messages")),
-  }
-  msg := NewCompressedMessages(msgs...)
-  
-  length, msgsDecoded := DecodeWithDefaultCodecs(msg.Encode())
-  if length == 0 || msgsDecoded == nil {
-    t.Fatal("msgsDecoded is nil")
-  }
-  
-  // make sure the decompressed messages match what was put in
-  for index, decodedMsg := range msgsDecoded {
-    if !bytes.Equal(msgs[index].payload, decodedMsg.payload) {
-      t.Fatalf("Payload doesn't match, expected: % X but was: % X\n",
-        msgs[index].payload, decodedMsg.payload)
-    }
-  }
-}
-
-func TestRequestHeaderEncoding(t *testing.T) {
-  broker := newBroker("localhost:9092", "test", 0)
-  request := broker.EncodeRequestHeader(REQUEST_PRODUCE)
-
-  // generated by kafka-rb:
-  expected := []byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x74, 0x65, 0x73, 0x74,
-    0x00, 0x00, 0x00, 0x00}
-
-  if !bytes.Equal(expected, request.Bytes()) {
-    t.Errorf("expected length: %d but got: %d", len(expected), len(request.Bytes()))
-    t.Errorf("expected: %X\n but got: %X", expected, request)
-    t.Fail()
-  }
-}
-
-func TestPublishRequestEncoding(t *testing.T) {
-  payload := []byte("testing")
-  msg := NewMessage(payload)
-
-  pubBroker := NewBrokerPublisher("localhost:9092", "test", 0)
-  request := pubBroker.broker.EncodePublishRequest(msg)
-
-  // generated by kafka-rb:
-  expected := []byte{0x00, 0x00, 0x00, 0x21, 0x00, 0x00, 0x00, 0x04, 0x74, 0x65, 0x73, 0x74,
-    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11, 0x00, 0x00, 0x00, 0x0d,
-    /* magic  comp  ......  chksum ....     ..  payload .. */
-    0x01, 0x00, 0xe8, 0xf3, 0x5a, 0x06, 0x74, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x67}
-
-  if !bytes.Equal(expected, request) {
-    t.Errorf("expected length: %d but got: %d", len(expected), len(request))
-    t.Errorf("expected: % X\n but got: % X", expected, request)
-    t.Fail()
-  }
-}
-
-func TestConsumeRequestEncoding(t *testing.T) {
-
-  pubBroker := NewBrokerPublisher("localhost:9092", "test", 0)
-  request := pubBroker.broker.EncodeConsumeRequest(0, 1048576)
-
-  // generated by kafka-rb, encode_request_size + encode_request
-  expected := []byte{0x00, 0x00, 0x00, 0x18, 0x00, 0x01, 0x00, 0x04, 0x74,
-    0x65, 0x73, 0x74, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00}
-
-  if !bytes.Equal(expected, request) {
-    t.Errorf("expected length: %d but got: %d", len(expected), len(request))
-    t.Errorf("expected: % X\n but got: % X", expected, request)
-    t.Fail()
-  }
-}
diff --git a/trunk/clients/go/src/consumer.go b/trunk/clients/go/src/consumer.go
deleted file mode 100644
index 57a4452..0000000
--- a/trunk/clients/go/src/consumer.go
+++ /dev/null
@@ -1,199 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "log"
-  "os"
-  "net"
-  "time"
-  "encoding/binary"
-)
-
-type BrokerConsumer struct {
-  broker  *Broker
-  offset  uint64
-  maxSize uint32
-  codecs  map[byte]PayloadCodec
-}
-
-// Create a new broker consumer
-// hostname - host and optionally port, delimited by ':'
-// topic to consume
-// partition to consume from
-// offset to start consuming from
-// maxSize (in bytes) of the message to consume (this should be at least as big as the biggest message to be published)
-func NewBrokerConsumer(hostname string, topic string, partition int, offset uint64, maxSize uint32) *BrokerConsumer {
-  return &BrokerConsumer{broker: newBroker(hostname, topic, partition),
-    offset:  offset,
-    maxSize: maxSize,
-    codecs:  DefaultCodecsMap}
-}
-
-// Simplified consumer that defaults the offset and maxSize to 0.
-// hostname - host and optionally port, delimited by ':'
-// topic to consume
-// partition to consume from
-func NewBrokerOffsetConsumer(hostname string, topic string, partition int) *BrokerConsumer {
-  return &BrokerConsumer{broker: newBroker(hostname, topic, partition),
-    offset:  0,
-    maxSize: 0,
-    codecs:  DefaultCodecsMap}
-}
-
-// Add Custom Payload Codecs for Consumer Decoding
-// payloadCodecs - an array of PayloadCodec implementations
-func (consumer *BrokerConsumer) AddCodecs(payloadCodecs []PayloadCodec) {
-  // merge to the default map, so one 'could' override the default codecs..
-  for k, v := range codecsMap(payloadCodecs) {
-    consumer.codecs[k] = v, true
-  }
-}
-
-func (consumer *BrokerConsumer) ConsumeOnChannel(msgChan chan *Message, pollTimeoutMs int64, quit chan bool) (int, os.Error) {
-  conn, err := consumer.broker.connect()
-  if err != nil {
-    return -1, err
-  }
-
-  num := 0
-  done := make(chan bool, 1)
-  go func() {
-    for {
-      _, err := consumer.consumeWithConn(conn, func(msg *Message) {
-        msgChan <- msg
-        num += 1
-      })
-
-      if err != nil {
-        if err != os.EOF {
-          log.Println("Fatal Error: ", err)
-          panic(err)
-        }
-        quit <- true // force quit
-        break
-      }
-      time.Sleep(pollTimeoutMs * 1000000)
-    }
-    done <- true
-  }()
-  // wait to be told to stop..
-  <-quit
-  conn.Close()
-  close(msgChan)
-  <-done
-  return num, err
-}
-
-type MessageHandlerFunc func(msg *Message)
-
-func (consumer *BrokerConsumer) Consume(handlerFunc MessageHandlerFunc) (int, os.Error) {
-  conn, err := consumer.broker.connect()
-  if err != nil {
-    return -1, err
-  }
-  defer conn.Close()
-
-  num, err := consumer.consumeWithConn(conn, handlerFunc)
-
-  if err != nil {
-    log.Println("Fatal Error: ", err)
-  }
-
-  return num, err
-}
-
-func (consumer *BrokerConsumer) consumeWithConn(conn *net.TCPConn, handlerFunc MessageHandlerFunc) (int, os.Error) {
-  _, err := conn.Write(consumer.broker.EncodeConsumeRequest(consumer.offset, consumer.maxSize))
-  if err != nil {
-    return -1, err
-  }
-
-  length, payload, err := consumer.broker.readResponse(conn)
-
-  if err != nil {
-    return -1, err
-  }
-
-  num := 0
-  if length > 2 {
-    // parse out the messages
-    var currentOffset uint64 = 0
-    for currentOffset <= uint64(length-4) {
-      totalLength, msgs := Decode(payload[currentOffset:], consumer.codecs)
-      if msgs == nil {
-        return num, os.NewError("Error Decoding Message")
-      }
-      msgOffset := consumer.offset + currentOffset
-      for _, msg := range msgs {
-        // update all of the messages offset
-        // multiple messages can be at the same offset (compressed for example)
-        msg.offset = msgOffset
-        handlerFunc(&msg)
-        num += 1
-      }
-      currentOffset += uint64(4 + totalLength)
-    }
-    // update the broker's offset for next consumption
-    consumer.offset += currentOffset
-  }
-
-  return num, err
-}
-
-// Get a list of valid offsets (up to maxNumOffsets) before the given time, where 
-// time is in milliseconds (-1, from the latest offset available, -2 from the smallest offset available)
-// The result is a list of offsets, in descending order.
-func (consumer *BrokerConsumer) GetOffsets(time int64, maxNumOffsets uint32) ([]uint64, os.Error) {
-  offsets := make([]uint64, 0)
-
-  conn, err := consumer.broker.connect()
-  if err != nil {
-    return offsets, err
-  }
-
-  defer conn.Close()
-
-  _, err = conn.Write(consumer.broker.EncodeOffsetRequest(time, maxNumOffsets))
-  if err != nil {
-    return offsets, err
-  }
-
-  length, payload, err := consumer.broker.readResponse(conn)
-  if err != nil {
-    return offsets, err
-  }
-
-  if length > 4 {
-    // get the number of offsets
-    numOffsets := binary.BigEndian.Uint32(payload[0:])
-    var currentOffset uint64 = 4
-    for currentOffset < uint64(length-4) && uint32(len(offsets)) < numOffsets {
-      offset := binary.BigEndian.Uint64(payload[currentOffset:])
-      offsets = append(offsets, offset)
-      currentOffset += 8 // offset size
-    }
-  }
-
-  return offsets, err
-}
diff --git a/trunk/clients/go/src/converts.go b/trunk/clients/go/src/converts.go
deleted file mode 100644
index cb7fc90..0000000
--- a/trunk/clients/go/src/converts.go
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "encoding/binary"
-)
-
-func uint16bytes(value int) []byte {
-  result := make([]byte, 2)
-  binary.BigEndian.PutUint16(result, uint16(value))
-  return result
-}
-
-func uint32bytes(value int) []byte {
-  result := make([]byte, 4)
-  binary.BigEndian.PutUint32(result, uint32(value))
-  return result
-}
-
-func uint32toUint32bytes(value uint32) []byte {
-  result := make([]byte, 4)
-  binary.BigEndian.PutUint32(result, value)
-  return result
-}
-
-func uint64ToUint64bytes(value uint64) []byte {
-  result := make([]byte, 8)
-  binary.BigEndian.PutUint64(result, value)
-  return result
-}
diff --git a/trunk/clients/go/src/kafka.go b/trunk/clients/go/src/kafka.go
deleted file mode 100644
index a87431d..0000000
--- a/trunk/clients/go/src/kafka.go
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "log"
-  "net"
-  "os"
-  "fmt"
-  "encoding/binary"
-  "io"
-  "bufio"
-)
-
-const (
-  NETWORK = "tcp"
-)
-
-type Broker struct {
-  topic     string
-  partition int
-  hostname  string
-}
-
-func newBroker(hostname string, topic string, partition int) *Broker {
-  return &Broker{topic: topic,
-    partition: partition,
-    hostname:  hostname}
-}
-
-func (b *Broker) connect() (conn *net.TCPConn, error os.Error) {
-  raddr, err := net.ResolveTCPAddr(NETWORK, b.hostname)
-  if err != nil {
-    log.Println("Fatal Error: ", err)
-    return nil, err
-  }
-  conn, err = net.DialTCP(NETWORK, nil, raddr)
-  if err != nil {
-    log.Println("Fatal Error: ", err)
-    return nil, err
-  }
-  return conn, error
-}
-
-// returns length of response & payload & err
-func (b *Broker) readResponse(conn *net.TCPConn) (uint32, []byte, os.Error) {
-  reader := bufio.NewReader(conn)
-  length := make([]byte, 4)
-  lenRead, err := io.ReadFull(reader, length)
-  if err != nil {
-    return 0, []byte{}, err
-  }
-  if lenRead != 4 || lenRead < 0 {
-    return 0, []byte{}, os.NewError("invalid length of the packet length field")
-  }
-
-  expectedLength := binary.BigEndian.Uint32(length)
-  messages := make([]byte, expectedLength)
-  lenRead, err = io.ReadFull(reader, messages)
-  if err != nil {
-    return 0, []byte{}, err
-  }
-
-  if uint32(lenRead) != expectedLength {
-    return 0, []byte{}, os.NewError(fmt.Sprintf("Fatal Error: Unexpected Length: %d  expected:  %d", lenRead, expectedLength))
-  }
-
-  errorCode := binary.BigEndian.Uint16(messages[0:2])
-  if errorCode != 0 {
-    log.Println("errorCode: ", errorCode)
-    return 0, []byte{}, os.NewError(
-      fmt.Sprintf("Broker Response Error: %d", errorCode))
-  }
-  return expectedLength, messages[2:], nil
-}
diff --git a/trunk/clients/go/src/message.go b/trunk/clients/go/src/message.go
deleted file mode 100644
index aa31048..0000000
--- a/trunk/clients/go/src/message.go
+++ /dev/null
@@ -1,182 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "hash/crc32"
-  "encoding/binary"
-  "bytes"
-  "log"
-)
-
-const (
-  // Compression Support uses '1' - https://cwiki.apache.org/confluence/display/KAFKA/Compression
-  MAGIC_DEFAULT = 1
-  // magic + compression + chksum
-  NO_LEN_HEADER_SIZE = 1 + 1 + 4
-)
-
-type Message struct {
-  magic       byte
-  compression byte
-  checksum    [4]byte
-  payload     []byte
-  offset      uint64 // only used after decoding
-  totalLength uint32 // total length of the raw message (from decoding)
-
-}
-
-func (m *Message) Offset() uint64 {
-  return m.offset
-}
-
-func (m *Message) Payload() []byte {
-  return m.payload
-}
-
-func (m *Message) PayloadString() string {
-  return string(m.payload)
-}
-
-func NewMessageWithCodec(payload []byte, codec PayloadCodec) *Message {
-  message := &Message{}
-  message.magic = byte(MAGIC_DEFAULT)
-  message.compression = codec.Id()
-  message.payload = codec.Encode(payload)
-  binary.BigEndian.PutUint32(message.checksum[0:], crc32.ChecksumIEEE(message.payload))
-  return message
-}
-
-// Default is to create a message with no compression
-func NewMessage(payload []byte) *Message {
-  return NewMessageWithCodec(payload, DefaultCodecsMap[NO_COMPRESSION_ID])
-}
-
-// Create a Message using the default compression method (gzip)
-func NewCompressedMessage(payload []byte) *Message {
-  return NewCompressedMessages(NewMessage(payload))
-}
-
-func NewCompressedMessages(messages ...*Message) *Message {
-  buf := bytes.NewBuffer([]byte{})
-  for _, message := range messages {
-    buf.Write(message.Encode())
-  }
-  return NewMessageWithCodec(buf.Bytes(), DefaultCodecsMap[GZIP_COMPRESSION_ID])
-}
-
-// MESSAGE SET: <MESSAGE LENGTH: uint32><MAGIC: 1 byte><COMPRESSION: 1 byte><CHECKSUM: uint32><MESSAGE PAYLOAD: bytes>
-func (m *Message) Encode() []byte {
-  msgLen := NO_LEN_HEADER_SIZE + len(m.payload)
-  msg := make([]byte, 4+msgLen)
-  binary.BigEndian.PutUint32(msg[0:], uint32(msgLen))
-  msg[4] = m.magic
-  msg[5] = m.compression
-
-  copy(msg[6:], m.checksum[0:])
-  copy(msg[10:], m.payload)
-
-  return msg
-}
-
-func DecodeWithDefaultCodecs(packet []byte) (uint32, []Message) {
-  return Decode(packet, DefaultCodecsMap)
-}
-
-func Decode(packet []byte, payloadCodecsMap map[byte]PayloadCodec) (uint32, []Message) {
-  messages := []Message{}
-
-  length, message := decodeMessage(packet, payloadCodecsMap)
-
-  if length > 0 && message != nil {
-    if message.compression != NO_COMPRESSION_ID {
-      // wonky special case for compressed messages having embedded messages
-      payloadLen := uint32(len(message.payload))
-      messageLenLeft := payloadLen
-      for messageLenLeft > 0 {
-        start := payloadLen - messageLenLeft
-        innerLen, innerMsg := decodeMessage(message.payload[start:], payloadCodecsMap)
-        messageLenLeft = messageLenLeft - innerLen - 4 // message length uint32
-        messages = append(messages, *innerMsg)
-      }
-    } else {
-      messages = append(messages, *message)
-    }
-  }
-
-  return length, messages
-}
-
-func decodeMessage(packet []byte, payloadCodecsMap map[byte]PayloadCodec) (uint32, *Message) {
-  length := binary.BigEndian.Uint32(packet[0:])
-  if length > uint32(len(packet[4:])) {
-    log.Printf("length mismatch, expected at least: %X, was: %X\n", length, len(packet[4:]))
-    return 0, nil
-  }
-  msg := Message{}
-  msg.totalLength = length
-  msg.magic = packet[4]
-
-  rawPayload := []byte{}
-  if msg.magic == 0 {
-    msg.compression = byte(0)
-    copy(msg.checksum[:], packet[5:9])
-    payloadLength := length - 1 - 4
-    rawPayload = packet[9 : 9+payloadLength]
-  } else if msg.magic == MAGIC_DEFAULT {
-    msg.compression = packet[5]
-    copy(msg.checksum[:], packet[6:10])
-    payloadLength := length - NO_LEN_HEADER_SIZE
-    rawPayload = packet[10 : 10+payloadLength]
-  } else {
-    log.Printf("incorrect magic, expected: %X was: %X\n", MAGIC_DEFAULT, msg.magic)
-    return 0, nil
-  }
-
-  payloadChecksum := make([]byte, 4)
-  binary.BigEndian.PutUint32(payloadChecksum, crc32.ChecksumIEEE(rawPayload))
-  if !bytes.Equal(payloadChecksum, msg.checksum[:]) {
-    msg.Print()
-    log.Printf("checksum mismatch, expected: % X was: % X\n", payloadChecksum, msg.checksum[:])
-    return 0, nil
-  }
-  msg.payload = payloadCodecsMap[msg.compression].Decode(rawPayload)
-
-  return length, &msg
-}
-
-func (msg *Message) Print() {
-  log.Println("----- Begin Message ------")
-  log.Printf("magic: %X\n", msg.magic)
-  log.Printf("compression: %X\n", msg.compression)
-  log.Printf("checksum: %X\n", msg.checksum)
-  if len(msg.payload) < 1048576 { // 1 MB 
-    log.Printf("payload: % X\n", msg.payload)
-    log.Printf("payload(string): %s\n", msg.PayloadString())
-  } else {
-    log.Printf("long payload, length: %d\n", len(msg.payload))
-  }
-  log.Printf("length: %d\n", msg.totalLength)
-  log.Printf("offset: %d\n", msg.offset)
-  log.Println("----- End Message ------")
-}
diff --git a/trunk/clients/go/src/payload_codec.go b/trunk/clients/go/src/payload_codec.go
deleted file mode 100644
index 7d6f8b5..0000000
--- a/trunk/clients/go/src/payload_codec.go
+++ /dev/null
@@ -1,116 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "bytes"
-  "compress/gzip"
-  //  "log"
-)
-
-const (
-  NO_COMPRESSION_ID   = 0
-  GZIP_COMPRESSION_ID = 1
-)
-
-type PayloadCodec interface {
-
-  // the 1 byte id of the codec
-  Id() byte
-
-  // encoder interface for compression implementation
-  Encode(data []byte) []byte
-
-  // decoder interface for decompression implementation
-  Decode(data []byte) []byte
-}
-
-// Default Codecs
-
-var DefaultCodecs = []PayloadCodec{
-  new(NoCompressionPayloadCodec),
-  new(GzipPayloadCodec),
-}
-
-var DefaultCodecsMap = codecsMap(DefaultCodecs)
-
-func codecsMap(payloadCodecs []PayloadCodec) map[byte]PayloadCodec {
-  payloadCodecsMap := make(map[byte]PayloadCodec, len(payloadCodecs))
-  for _, c := range payloadCodecs {
-    payloadCodecsMap[c.Id()] = c, true
-  }
-  return payloadCodecsMap
-}
-
-// No compression codec, noop
-
-type NoCompressionPayloadCodec struct {
-
-}
-
-func (codec *NoCompressionPayloadCodec) Id() byte {
-  return NO_COMPRESSION_ID
-}
-
-func (codec *NoCompressionPayloadCodec) Encode(data []byte) []byte {
-  return data
-}
-
-func (codec *NoCompressionPayloadCodec) Decode(data []byte) []byte {
-  return data
-}
-
-// Gzip Codec
-
-type GzipPayloadCodec struct {
-
-}
-
-func (codec *GzipPayloadCodec) Id() byte {
-  return GZIP_COMPRESSION_ID
-}
-
-func (codec *GzipPayloadCodec) Encode(data []byte) []byte {
-  buf := bytes.NewBuffer([]byte{})
-  zipper, _ := gzip.NewWriterLevel(buf, gzip.BestSpeed)
-  zipper.Write(data)
-  zipper.Close()
-  return buf.Bytes()
-}
-
-func (codec *GzipPayloadCodec) Decode(data []byte) []byte {
-  buf := bytes.NewBuffer([]byte{})
-  zipper, _ := gzip.NewReader(bytes.NewBuffer(data))
-  unzipped := make([]byte, 100)
-  for {
-    n, err := zipper.Read(unzipped)
-    if n > 0 && err == nil {
-      buf.Write(unzipped[0:n])
-    } else {
-      break
-    }
-  }
-
-  zipper.Close()
-  return buf.Bytes()
-}
diff --git a/trunk/clients/go/src/publisher.go b/trunk/clients/go/src/publisher.go
deleted file mode 100644
index 5ca3093..0000000
--- a/trunk/clients/go/src/publisher.go
+++ /dev/null
@@ -1,55 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "os"
-)
-
-type BrokerPublisher struct {
-  broker *Broker
-}
-
-func NewBrokerPublisher(hostname string, topic string, partition int) *BrokerPublisher {
-  return &BrokerPublisher{broker: newBroker(hostname, topic, partition)}
-}
-
-func (b *BrokerPublisher) Publish(message *Message) (int, os.Error) {
-  return b.BatchPublish(message)
-}
-
-func (b *BrokerPublisher) BatchPublish(messages ...*Message) (int, os.Error) {
-  conn, err := b.broker.connect()
-  if err != nil {
-    return -1, err
-  }
-  defer conn.Close()
-  // TODO: MULTIPRODUCE
-  request := b.broker.EncodePublishRequest(messages...)
-  num, err := conn.Write(request)
-  if err != nil {
-    return -1, err
-  }
-
-  return num, err
-}
diff --git a/trunk/clients/go/src/request.go b/trunk/clients/go/src/request.go
deleted file mode 100644
index d15db90..0000000
--- a/trunk/clients/go/src/request.go
+++ /dev/null
@@ -1,101 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "encoding/binary"
-  "bytes"
-)
-
-type RequestType uint16
-
-// Request Types
-const (
-  REQUEST_PRODUCE      RequestType = 0
-  REQUEST_FETCH                    = 1
-  REQUEST_MULTIFETCH               = 2
-  REQUEST_MULTIPRODUCE             = 3
-  REQUEST_OFFSETS                  = 4
-)
-
-// Request Header: <REQUEST_SIZE: uint32><REQUEST_TYPE: uint16><TOPIC SIZE: uint16><TOPIC: bytes><PARTITION: uint32>
-func (b *Broker) EncodeRequestHeader(requestType RequestType) *bytes.Buffer {
-  request := bytes.NewBuffer([]byte{})
-  request.Write(uint32bytes(0)) // placeholder for request size
-  request.Write(uint16bytes(int(requestType)))
-  request.Write(uint16bytes(len(b.topic)))
-  request.WriteString(b.topic)
-  request.Write(uint32bytes(b.partition))
-
-  return request
-}
-
-// after writing to the buffer is complete, encode the size of the request in the request.
-func encodeRequestSize(request *bytes.Buffer) {
-  binary.BigEndian.PutUint32(request.Bytes()[0:], uint32(request.Len()-4))
-}
-
-// <Request Header><TIME: uint64><MAX NUMBER of OFFSETS: uint32>
-func (b *Broker) EncodeOffsetRequest(time int64, maxNumOffsets uint32) []byte {
-  request := b.EncodeRequestHeader(REQUEST_OFFSETS)
-  // specific to offset request
-  request.Write(uint64ToUint64bytes(uint64(time)))
-  request.Write(uint32toUint32bytes(maxNumOffsets))
-
-  encodeRequestSize(request)
-
-  return request.Bytes()
-}
-
-// <Request Header><OFFSET: uint64><MAX SIZE: uint32>
-func (b *Broker) EncodeConsumeRequest(offset uint64, maxSize uint32) []byte {
-  request := b.EncodeRequestHeader(REQUEST_FETCH)
-  // specific to consume request
-  request.Write(uint64ToUint64bytes(offset))
-  request.Write(uint32toUint32bytes(maxSize))
-
-  encodeRequestSize(request)
-
-  return request.Bytes()
-}
-
-// <Request Header><MESSAGE SET SIZE: uint32><MESSAGE SETS>
-func (b *Broker) EncodePublishRequest(messages ...*Message) []byte {
-  // request header: 4 (size) + 2 (type) + 2 (topic length) + topicLength + 4 (partition), plus a 4-byte message set size placeholder
-  request := b.EncodeRequestHeader(REQUEST_PRODUCE)
-
-  messageSetSizePos := request.Len()
-  request.Write(uint32bytes(0)) // placeholder message len
-
-  written := 0
-  for _, message := range messages {
-    wrote, _ := request.Write(message.Encode())
-    written += wrote
-  }
-
-  // write the accumulated size of the message set into its placeholder
-  binary.BigEndian.PutUint32(request.Bytes()[messageSetSizePos:], uint32(written))
-  // write the size of the whole request into the leading uint32
-  encodeRequestSize(request)
-  return request.Bytes()
-}
diff --git a/trunk/clients/go/src/timing.go b/trunk/clients/go/src/timing.go
deleted file mode 100644
index 56d01665..0000000
--- a/trunk/clients/go/src/timing.go
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package kafka
-
-import (
-  "log"
-  "time"
-)
-
-type Timing struct {
-  label string
-  start int64
-  stop  int64
-}
-
-func StartTiming(label string) *Timing {
-  return &Timing{label: label, start: time.Nanoseconds(), stop: 0}
-}
-
-func (t *Timing) Stop() {
-  t.stop = time.Nanoseconds()
-}
-
-func (t *Timing) Print() {
-  if t.stop == 0 {
-    t.Stop()
-  }
-  log.Printf("%s took: %f ms\n", t.label, float64((time.Nanoseconds()-t.start))/1000000)
-}
diff --git a/trunk/clients/go/tools/consumer/Makefile b/trunk/clients/go/tools/consumer/Makefile
deleted file mode 100644
index bfdc07d..0000000
--- a/trunk/clients/go/tools/consumer/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-include $(GOROOT)/src/Make.inc
-
-TARG=consumer
-GOFILES=\
-	consumer.go\
-
-include $(GOROOT)/src/Make.cmd
diff --git a/trunk/clients/go/tools/consumer/consumer.go b/trunk/clients/go/tools/consumer/consumer.go
deleted file mode 100644
index 50f0ebc..0000000
--- a/trunk/clients/go/tools/consumer/consumer.go
+++ /dev/null
@@ -1,111 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package main
-
-import (
-  "kafka"
-  "flag"
-  "fmt"
-  "os"
-  "strconv"
-  "os/signal"
-  "syscall"
-)
-
-var hostname string
-var topic string
-var partition int
-var offset uint64
-var maxSize uint
-var writePayloadsTo string
-var consumerForever bool
-var printmessage bool
-
-func init() {
-  flag.StringVar(&hostname, "hostname", "localhost:9092", "host:port string for the kafka server")
-  flag.StringVar(&topic, "topic", "test", "topic to consume from")
-  flag.IntVar(&partition, "partition", 0, "partition to consume from")
-  flag.Uint64Var(&offset, "offset", 0, "offset to start consuming from")
-  flag.UintVar(&maxSize, "maxsize", 1048576, "maximum fetch size in bytes")
-  flag.StringVar(&writePayloadsTo, "writeto", "", "write payloads to this file")
-  flag.BoolVar(&consumerForever, "consumeforever", false, "loop forever consuming")
-  flag.BoolVar(&printmessage, "printmessage", true, "print the message details to stdout")
-}
-
-func main() {
-  flag.Parse()
-  fmt.Println("Consuming Messages :")
-  fmt.Printf("From: %s, topic: %s, partition: %d\n", hostname, topic, partition)
-  fmt.Println(" ---------------------- ")
-  broker := kafka.NewBrokerConsumer(hostname, topic, partition, offset, uint32(maxSize))
-
-  var payloadFile *os.File = nil
-  if len(writePayloadsTo) > 0 {
-    var err os.Error
-    payloadFile, err = os.Create(writePayloadsTo)
-    if err != nil {
-      fmt.Println("Error opening file: ", err)
-      payloadFile = nil
-    }
-  }
-
-  consumerCallback := func(msg *kafka.Message) {
-    if printmessage {
-      msg.Print()
-    }
-    if payloadFile != nil {
-      payloadFile.Write([]byte("Message at: " + strconv.Uitoa64(msg.Offset()) + "\n"))
-      payloadFile.Write(msg.Payload())
-      payloadFile.Write([]byte("\n-------------------------------\n"))
-    }
-  }
-
-  if consumerForever {
-    quit := make(chan bool, 1)
-    go func() {
-      for {
-        sig := <-signal.Incoming
-        if sig.(os.UnixSignal) == syscall.SIGINT {
-          quit <- true
-        }
-      }
-    }()
-
-    msgChan := make(chan *kafka.Message)
-    go broker.ConsumeOnChannel(msgChan, 10, quit)
-    for msg := range msgChan {
-      if msg != nil {
-        consumerCallback(msg)
-      } else {
-        break
-      }
-    }
-  } else {
-    broker.Consume(consumerCallback)
-  }
-
-  if payloadFile != nil {
-    payloadFile.Close()
-  }
-
-}
diff --git a/trunk/clients/go/tools/offsets/Makefile b/trunk/clients/go/tools/offsets/Makefile
deleted file mode 100644
index 15ac969..0000000
--- a/trunk/clients/go/tools/offsets/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-include $(GOROOT)/src/Make.inc
-
-TARG=offsets
-GOFILES=\
-	offsets.go\
-
-include $(GOROOT)/src/Make.cmd
diff --git a/trunk/clients/go/tools/offsets/offsets.go b/trunk/clients/go/tools/offsets/offsets.go
deleted file mode 100644
index 81e60d5..0000000
--- a/trunk/clients/go/tools/offsets/offsets.go
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-
-package main
-
-import (
-  "kafka"
-  "flag"
-  "fmt"
-)
-
-var hostname string
-var topic string
-var partition int
-var offsets uint
-var time int64
-
-func init() {
-  flag.StringVar(&hostname, "hostname", "localhost:9092", "host:port string for the kafka server")
-  flag.StringVar(&topic, "topic", "test", "topic to read offsets from")
-  flag.IntVar(&partition, "partition", 0, "partition to read offsets from")
-  flag.UintVar(&offsets, "offsets", 1, "number of offsets returned")
-  flag.Int64Var(&time, "time", -1, "return offsets before this timestamp: time in ms, -1 for latest, -2 for earliest")
-}
-
-
-func main() {
-  flag.Parse()
-  fmt.Println("Offsets :")
-  fmt.Printf("From: %s, topic: %s, partition: %d\n", hostname, topic, partition)
-  fmt.Println(" ---------------------- ")
-  broker := kafka.NewBrokerOffsetConsumer(hostname, topic, partition)
-
-  offsets, err := broker.GetOffsets(time, uint32(offsets))
-  if err != nil {
-    fmt.Println("Error: ", err)
-  }
-  fmt.Printf("Offsets found: %d\n", len(offsets))
-  for i := 0 ; i < len(offsets); i++ {
-    fmt.Printf("Offset[%d] = %d\n", i, offsets[i])
-  }
-}
diff --git a/trunk/clients/go/tools/publisher/Makefile b/trunk/clients/go/tools/publisher/Makefile
deleted file mode 100644
index ff48fb9..0000000
--- a/trunk/clients/go/tools/publisher/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-include $(GOROOT)/src/Make.inc
-
-TARG=publisher
-GOFILES=\
-	publisher.go\
-
-include $(GOROOT)/src/Make.cmd
diff --git a/trunk/clients/go/tools/publisher/publisher.go b/trunk/clients/go/tools/publisher/publisher.go
deleted file mode 100644
index 0a316bf..0000000
--- a/trunk/clients/go/tools/publisher/publisher.go
+++ /dev/null
@@ -1,89 +0,0 @@
-/*
- *  Copyright (c) 2011 NeuStar, Inc.
- *  All rights reserved.  
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at 
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *  
- *  NeuStar, the Neustar logo and related names and logos are registered
- *  trademarks, service marks or tradenames of NeuStar, Inc. All other 
- *  product names, company names, marks, logos and symbols may be trademarks
- *  of their respective owners.
- */
-
-package main
-
-import (
-  "kafka"
-  "flag"
-  "fmt"
-  "os"
-)
-
-var hostname string
-var topic string
-var partition int
-var message string
-var messageFile string
-var compress bool
-
-func init() {
-  flag.StringVar(&hostname, "hostname", "localhost:9092", "host:port string for the kafka server")
-  flag.StringVar(&topic, "topic", "test", "topic to publish to")
-  flag.IntVar(&partition, "partition", 0, "partition to publish to")
-  flag.StringVar(&message, "message", "", "message to publish")
-  flag.StringVar(&messageFile, "messagefile", "", "read message from this file")
-  flag.BoolVar(&compress, "compress", false, "compress the messages published")
-}
-
-func main() {
-  flag.Parse()
-  fmt.Println("Publishing :", message)
-  fmt.Printf("To: %s, topic: %s, partition: %d\n", hostname, topic, partition)
-  fmt.Println(" ---------------------- ")
-  broker := kafka.NewBrokerPublisher(hostname, topic, partition)
-
-  if len(message) == 0 && len(messageFile) != 0 {
-    file, err := os.Open(messageFile)
-    if err != nil {
-      fmt.Println("Error: ", err)
-      return
-    }
-    stat, err := file.Stat()
-    if err != nil {
-      fmt.Println("Error: ", err)
-      return
-    }
-    payload := make([]byte, stat.Size)
-    file.Read(payload)
-    timing := kafka.StartTiming("Sending")
-
-    if compress {
-      broker.Publish(kafka.NewCompressedMessage(payload))
-    } else {
-      broker.Publish(kafka.NewMessage(payload))
-    }
-
-    timing.Print()
-    file.Close()
-  } else {
-    timing := kafka.StartTiming("Sending")
-
-    if compress {
-      broker.Publish(kafka.NewCompressedMessage([]byte(message)))
-    } else {
-      broker.Publish(kafka.NewMessage([]byte(message)))
-    }
-
-    timing.Print()
-  }
-}
diff --git a/trunk/clients/php/LICENSE b/trunk/clients/php/LICENSE
deleted file mode 100644
index 614c632..0000000
--- a/trunk/clients/php/LICENSE
+++ /dev/null
@@ -1,203 +0,0 @@
-
-                              Apache License
-                        Version 2.0, January 2004
-                     http://www.apache.org/licenses/
-
-TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-1. Definitions.
-
-   "License" shall mean the terms and conditions for use, reproduction,
-   and distribution as defined by Sections 1 through 9 of this document.
-
-   "Licensor" shall mean the copyright owner or entity authorized by
-   the copyright owner that is granting the License.
-
-   "Legal Entity" shall mean the union of the acting entity and all
-   other entities that control, are controlled by, or are under common
-   control with that entity. For the purposes of this definition,
-   "control" means (i) the power, direct or indirect, to cause the
-   direction or management of such entity, whether by contract or
-   otherwise, or (ii) ownership of fifty percent (50%) or more of the
-   outstanding shares, or (iii) beneficial ownership of such entity.
-
-   "You" (or "Your") shall mean an individual or Legal Entity
-   exercising permissions granted by this License.
-
-   "Source" form shall mean the preferred form for making modifications,
-   including but not limited to software source code, documentation
-   source, and configuration files.
-
-   "Object" form shall mean any form resulting from mechanical
-   transformation or translation of a Source form, including but
-   not limited to compiled object code, generated documentation,
-   and conversions to other media types.
-
-   "Work" shall mean the work of authorship, whether in Source or
-   Object form, made available under the License, as indicated by a
-   copyright notice that is included in or attached to the work
-   (an example is provided in the Appendix below).
-
-   "Derivative Works" shall mean any work, whether in Source or Object
-   form, that is based on (or derived from) the Work and for which the
-   editorial revisions, annotations, elaborations, or other modifications
-   represent, as a whole, an original work of authorship. For the purposes
-   of this License, Derivative Works shall not include works that remain
-   separable from, or merely link (or bind by name) to the interfaces of,
-   the Work and Derivative Works thereof.
-
-   "Contribution" shall mean any work of authorship, including
-   the original version of the Work and any modifications or additions
-   to that Work or Derivative Works thereof, that is intentionally
-   submitted to Licensor for inclusion in the Work by the copyright owner
-   or by an individual or Legal Entity authorized to submit on behalf of
-   the copyright owner. For the purposes of this definition, "submitted"
-   means any form of electronic, verbal, or written communication sent
-   to the Licensor or its representatives, including but not limited to
-   communication on electronic mailing lists, source code control systems,
-   and issue tracking systems that are managed by, or on behalf of, the
-   Licensor for the purpose of discussing and improving the Work, but
-   excluding communication that is conspicuously marked or otherwise
-   designated in writing by the copyright owner as "Not a Contribution."
-
-   "Contributor" shall mean Licensor and any individual or Legal Entity
-   on behalf of whom a Contribution has been received by Licensor and
-   subsequently incorporated within the Work.
-
-2. Grant of Copyright License. Subject to the terms and conditions of
-   this License, each Contributor hereby grants to You a perpetual,
-   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-   copyright license to reproduce, prepare Derivative Works of,
-   publicly display, publicly perform, sublicense, and distribute the
-   Work and such Derivative Works in Source or Object form.
-
-3. Grant of Patent License. Subject to the terms and conditions of
-   this License, each Contributor hereby grants to You a perpetual,
-   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-   (except as stated in this section) patent license to make, have made,
-   use, offer to sell, sell, import, and otherwise transfer the Work,
-   where such license applies only to those patent claims licensable
-   by such Contributor that are necessarily infringed by their
-   Contribution(s) alone or by combination of their Contribution(s)
-   with the Work to which such Contribution(s) was submitted. If You
-   institute patent litigation against any entity (including a
-   cross-claim or counterclaim in a lawsuit) alleging that the Work
-   or a Contribution incorporated within the Work constitutes direct
-   or contributory patent infringement, then any patent licenses
-   granted to You under this License for that Work shall terminate
-   as of the date such litigation is filed.
-
-4. Redistribution. You may reproduce and distribute copies of the
-   Work or Derivative Works thereof in any medium, with or without
-   modifications, and in Source or Object form, provided that You
-   meet the following conditions:
-
-   (a) You must give any other recipients of the Work or
-       Derivative Works a copy of this License; and
-
-   (b) You must cause any modified files to carry prominent notices
-       stating that You changed the files; and
-
-   (c) You must retain, in the Source form of any Derivative Works
-       that You distribute, all copyright, patent, trademark, and
-       attribution notices from the Source form of the Work,
-       excluding those notices that do not pertain to any part of
-       the Derivative Works; and
-
-   (d) If the Work includes a "NOTICE" text file as part of its
-       distribution, then any Derivative Works that You distribute must
-       include a readable copy of the attribution notices contained
-       within such NOTICE file, excluding those notices that do not
-       pertain to any part of the Derivative Works, in at least one
-       of the following places: within a NOTICE text file distributed
-       as part of the Derivative Works; within the Source form or
-       documentation, if provided along with the Derivative Works; or,
-       within a display generated by the Derivative Works, if and
-       wherever such third-party notices normally appear. The contents
-       of the NOTICE file are for informational purposes only and
-       do not modify the License. You may add Your own attribution
-       notices within Derivative Works that You distribute, alongside
-       or as an addendum to the NOTICE text from the Work, provided
-       that such additional attribution notices cannot be construed
-       as modifying the License.
-
-   You may add Your own copyright statement to Your modifications and
-   may provide additional or different license terms and conditions
-   for use, reproduction, or distribution of Your modifications, or
-   for any such Derivative Works as a whole, provided Your use,
-   reproduction, and distribution of the Work otherwise complies with
-   the conditions stated in this License.
-
-5. Submission of Contributions. Unless You explicitly state otherwise,
-   any Contribution intentionally submitted for inclusion in the Work
-   by You to the Licensor shall be under the terms and conditions of
-   this License, without any additional terms or conditions.
-   Notwithstanding the above, nothing herein shall supersede or modify
-   the terms of any separate license agreement you may have executed
-   with Licensor regarding such Contributions.
-
-6. Trademarks. This License does not grant permission to use the trade
-   names, trademarks, service marks, or product names of the Licensor,
-   except as required for reasonable and customary use in describing the
-   origin of the Work and reproducing the content of the NOTICE file.
-
-7. Disclaimer of Warranty. Unless required by applicable law or
-   agreed to in writing, Licensor provides the Work (and each
-   Contributor provides its Contributions) on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-   implied, including, without limitation, any warranties or conditions
-   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-   PARTICULAR PURPOSE. You are solely responsible for determining the
-   appropriateness of using or redistributing the Work and assume any
-   risks associated with Your exercise of permissions under this License.
-
-8. Limitation of Liability. In no event and under no legal theory,
-   whether in tort (including negligence), contract, or otherwise,
-   unless required by applicable law (such as deliberate and grossly
-   negligent acts) or agreed to in writing, shall any Contributor be
-   liable to You for damages, including any direct, indirect, special,
-   incidental, or consequential damages of any character arising as a
-   result of this License or out of the use or inability to use the
-   Work (including but not limited to damages for loss of goodwill,
-   work stoppage, computer failure or malfunction, or any and all
-   other commercial damages or losses), even if such Contributor
-   has been advised of the possibility of such damages.
-
-9. Accepting Warranty or Additional Liability. While redistributing
-   the Work or Derivative Works thereof, You may choose to offer,
-   and charge a fee for, acceptance of support, warranty, indemnity,
-   or other liability obligations and/or rights consistent with this
-   License. However, in accepting such obligations, You may act only
-   on Your own behalf and on Your sole responsibility, not on behalf
-   of any other Contributor, and only if You agree to indemnify,
-   defend, and hold each Contributor harmless for any liability
-   incurred by, or claims asserted against, such Contributor by reason
-   of your accepting any such warranty or additional liability.
-
-END OF TERMS AND CONDITIONS
-
-APPENDIX: How to apply the Apache License to your work.
-
-   To apply the Apache License to your work, attach the following
-   boilerplate notice, with the fields enclosed by brackets "[]"
-   replaced with your own identifying information. (Don't include
-   the brackets!)  The text should be enclosed in the appropriate
-   comment syntax for the file format. We also recommend that a
-   file or class name and description of purpose be included on the
-   same "printed page" as the copyright notice for easier
-   identification within third-party archives.
-
-Copyright [yyyy] [name of copyright owner]
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
diff --git a/trunk/clients/php/README.md b/trunk/clients/php/README.md
deleted file mode 100644
index 59959e0..0000000
--- a/trunk/clients/php/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# kafka-php
-kafka-php allows you to produce and consume messages with the Kafka distributed publish/subscribe messaging service.
-
-## Requirements
-Minimum PHP version: 5.3.3.
-You need to have access to your Kafka instance and be able to connect to it over TCP. You can obtain a copy of Kafka and instructions on how to set it up at https://github.com/kafka-dev/kafka
-
-## Installation
-Add the lib directory to the include_path and use an autoloader like the one in the examples directory (the code follows the PEAR/Zend one-class-per-file convention).
-
-## Usage
-The examples directory contains an example of a Producer and a Consumer.
-
-## Contact for questions
-
-Lorenzo Alberton
-
-l.alberton at(@) quipo.it
-
-http://twitter.com/lorenzoalberton
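
A minimal end-to-end sketch of the workflow the README above describes, using only classes removed elsewhere in this diff (Kafka_Producer, Kafka_SimpleConsumer, Kafka_FetchRequest); the broker address, topic, and payloads are placeholder values and assume a local broker with a single partition:

```php
<?php
// Assumes the client's lib/ directory is on the include_path and the
// autoloader from the examples directory has been required.
require 'autoloader.php';

// Kafka_Encoder expects this request-id constant to be defined by the caller.
define('PRODUCE_REQUEST_ID', 0);

// Produce two messages to topic "test", partition 0.
$producer = new Kafka_Producer('localhost', 9092);
$producer->send(array('hello', 'kafka'), 'test', 0);
$producer->close();

// Fetch them back, starting from offset 0 with a 1 MB limit.
$consumer = new Kafka_SimpleConsumer('localhost', 9092, 10, 1000000);
$request  = new Kafka_FetchRequest('test', 0, 0, 1000000);
foreach ($consumer->fetch($request) as $msg) {
    echo $msg->payload(), "\n";
}
```
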
diff --git a/trunk/clients/php/src/examples/autoloader.php b/trunk/clients/php/src/examples/autoloader.php
deleted file mode 100644
index d40fae9..0000000
--- a/trunk/clients/php/src/examples/autoloader.php
+++ /dev/null
@@ -1,40 +0,0 @@
-<?php
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-spl_autoload_register(function($className)
-{
-	$classFile = str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';
-	if (function_exists('stream_resolve_include_path')) {
-		$file = stream_resolve_include_path($classFile);
-	} else {
-		foreach (explode(PATH_SEPARATOR, get_include_path()) as $path) {
-			if (file_exists($path . '/' . $classFile)) {
-				$file = $path . '/' . $classFile;
-				break;
-			}
-		}
-	}
-	/* If file is found, store it into the cache, classname <-> file association */
-	if (($file !== false) && ($file !== null)) {
-		include $file;
-		return;
-	}
-
-	throw new RuntimeException($className. ' not found');
-});
diff --git a/trunk/clients/php/src/examples/consume.php b/trunk/clients/php/src/examples/consume.php
deleted file mode 100644
index bbef1b2..0000000
--- a/trunk/clients/php/src/examples/consume.php
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/php
-<?php
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-set_include_path(
-	implode(PATH_SEPARATOR, array(
-		realpath(dirname(__FILE__).'/../lib'),
-		get_include_path(),
-	))
-);
-require 'autoloader.php';
-
-$host = 'localhost';
-$zkPort  = 2181; //zookeeper
-$kPort   = 9092; //kafka server
-$topic   = 'test';
-$maxSize = 1000000;
-$socketTimeout = 5;
-
-$offset    = 0;
-$partition = 0;
-
-$consumer = new Kafka_SimpleConsumer($host, $kPort, $socketTimeout, $maxSize);
-while (true) {	
-	//create a fetch request for topic "test", partition 0, current offset and fetch size of 1MB
-	$fetchRequest = new Kafka_FetchRequest($topic, $partition, $offset, $maxSize);
-	//get the message set from the consumer and print them out
-	$messages = $consumer->fetch($fetchRequest);
-	foreach ($messages as $msg) {
-		echo "\nconsumed[$offset]: " . $msg->payload();
-	}
-	//advance the offset after consuming each message
-	$offset += $messages->validBytes();
-	//echo "\n---[Advancing offset to $offset]------(".date('H:i:s').")";
-	unset($fetchRequest);
-	sleep(2);
-}
diff --git a/trunk/clients/php/src/examples/produce.php b/trunk/clients/php/src/examples/produce.php
deleted file mode 100644
index 9d73689..0000000
--- a/trunk/clients/php/src/examples/produce.php
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/usr/bin/php
-<?php
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-set_include_path(
-	implode(PATH_SEPARATOR, array(
-		realpath(dirname(__FILE__).'/../lib'),
-		get_include_path(),
-	))
-);
-require 'autoloader.php';
-
-define('PRODUCE_REQUEST_ID', 0);
-
-
-$host = 'localhost';
-$port = 9092;
-$topic = 'test';
-
-$producer = new Kafka_Producer($host, $port);
-$in = fopen('php://stdin', 'r');
-while (true) {
-	echo "\nEnter comma separated messages:\n";
-	$messages = explode(',', fgets($in));
-	foreach (array_keys($messages) as $k) {
-		//$messages[$k] = trim($messages[$k]);
-	}
-	$bytes = $producer->send($messages, $topic);
-	printf("\nSuccessfully sent %d messages (%d bytes)\n\n", count($messages), $bytes);
-}
diff --git a/trunk/clients/php/src/lib/Kafka/BoundedByteBuffer/Receive.php b/trunk/clients/php/src/lib/Kafka/BoundedByteBuffer/Receive.php
deleted file mode 100644
index 53cc650..0000000
--- a/trunk/clients/php/src/lib/Kafka/BoundedByteBuffer/Receive.php
+++ /dev/null
@@ -1,154 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * Read an entire message set from a stream into an internal buffer
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_BoundedByteBuffer_Receive
-{
-	/**
-	 * @var integer
-	 */
-	protected $size;
-	
-	/**
-	 * @var boolean
-	 */
-	protected $sizeRead = false;
-	
-	/**
-	 * @var integer
-	 */
-	protected $remainingBytes = 0;
-
-	/**
-	 * @var string resource
-	 */
-	public $buffer = null;
-	
-	/**
-	 * @var boolean
-	 */
-	protected $complete = false;
-	
-	/**
-	 *
-	 * @var integer
-	 */
-	protected $maxSize = PHP_INT_MAX;
-	
-	/**
-	 * Constructor
-	 *
-	 * @param integer $maxSize Max buffer size
-	 */
-	public function __construct($maxSize = PHP_INT_MAX) {
-		$this->maxSize = $maxSize;
-	}
-	
-	/**
-	 * Destructor
-	 * 
-	 * @return void
-	 */
-	public function __destruct() {
-		if (is_resource($this->buffer)) {
-			fclose($this->buffer);
-		}
-	}
-	
-	/**
-	 * Read the request size (4 bytes) if not read yet
-	 * 
-	 * @param resource $stream Stream resource
-	 *
-	 * @return integer Number of bytes read
-	 * @throws RuntimeException when size is <=0 or >= $maxSize
-	 */
-	private function readRequestSize($stream) {
-		if (!$this->sizeRead) {
-			$this->size = fread($stream, 4);
-			if ((false === $this->size) || ('' === $this->size)) {
-				$errmsg = 'Received nothing when reading from channel, socket has likely been closed.';
-				throw new RuntimeException($errmsg);
-			}
-			$this->size = array_shift(unpack('N', $this->size));
-			if ($this->size <= 0 || $this->size > $this->maxSize) {
-				throw new RuntimeException($this->size . ' is not a valid message size');
-			}
-			$this->remainingBytes = $this->size;
-			$this->sizeRead = true;
-			return 4;
-		}
-		return 0;
-	}
-	
-	/**
-	 * Read a chunk of data from the stream
-	 * 
-	 * @param resource $stream Stream resource
-	 * 
-	 * @return integer number of read bytes
-	 * @throws RuntimeException when size is <=0 or >= $maxSize
-	 */
-	public function readFrom($stream) {
-		// have we read the request size yet?
-		$read = $this->readRequestSize($stream);
-		// have we allocated the request buffer yet?
-		if (!$this->buffer) {
-			$this->buffer = fopen('php://temp', 'w+b');
-		}
-		// if we have a buffer, read some stuff into it
-		if ($this->buffer && !$this->complete) {
-			$freadBufferSize = min(8192, $this->remainingBytes);
-			if ($freadBufferSize > 0) {
-				//TODO: check that fread returns something
-				$bytesRead = fwrite($this->buffer, fread($stream, $freadBufferSize));
-				$this->remainingBytes -= $bytesRead;
-				$read += $bytesRead;
-			}
-			// did we get everything?
-			if ($this->remainingBytes <= 0) {
-				rewind($this->buffer);
-				$this->complete = true;
-			}
-		}
-		return $read;
-	}
-	
-	/**
-	 * Read all the available bytes in the stream
-	 * 
-	 * @param resource $stream Stream resource
-	 * 
-	 * @return integer number of read bytes
-	 * @throws RuntimeException when size is <=0 or >= $maxSize
-	 */
-	public function readCompletely($stream) {
-		$read = 0;
-		while (!$this->complete) {
-			$read += $this->readFrom($stream);
-		}
-		return $read;
-	}
-}
-
-
-
-  
\ No newline at end of file
diff --git a/trunk/clients/php/src/lib/Kafka/BoundedByteBuffer/Send.php b/trunk/clients/php/src/lib/Kafka/BoundedByteBuffer/Send.php
deleted file mode 100644
index 91b99e7..0000000
--- a/trunk/clients/php/src/lib/Kafka/BoundedByteBuffer/Send.php
+++ /dev/null
@@ -1,118 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * Send a request to Kafka
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_BoundedByteBuffer_Send
-{
-	/**
-	 * @var integer
-	 */
-	protected $size;
-	
-	/**
-	 * @var boolean
-	 */
-	protected $sizeWritten = false; 
-
-	/**
-	 * @var string resource
-	 */
-	protected $buffer;
-	
-	/**
-	 * @var boolean
-	 */
-	protected $complete = false;
-	
-	/**
-	 * Constructor
-	 * 
-	 * @param Kafka_FetchRequest $req Request object
-	 */
-	public function __construct(Kafka_FetchRequest $req) {
-		$this->size = $req->sizeInBytes() + 2;
-		$this->buffer = fopen('php://temp', 'w+b');
-		fwrite($this->buffer, pack('n', $req->id));
-		$req->writeTo($this->buffer);
-		rewind($this->buffer);
-		//fseek($this->buffer, $req->getOffset(), SEEK_SET);
-	}
-	
-	/**
-	 * Try to write the request size if we haven't already
-	 * 
-	 * @param resource $stream Stream resource
-	 *
-	 * @return integer Number of bytes read
-	 * @throws RuntimeException when size is <=0 or >= $maxSize
-	 */
-	private function writeRequestSize($stream) {
-		if (!$this->sizeWritten) {
-			if (!fwrite($stream, pack('N', $this->size))) {
-				throw new RuntimeException('Cannot write request to stream (' . error_get_last() . ')');
-			}
-			$this->sizeWritten = true;
-			return 4;
-		}
-		return 0;
-	}
-	
-	/**
-	 * Write a chunk of data to the stream
-	 * 
-	 * @param resource $stream Stream resource
-	 * 
-	 * @return integer number of written bytes
-	 * @throws RuntimeException
-	 */
-	public function writeTo($stream) {
-		// have we written the request size yet?
-		$written = $this->writeRequestSize($stream);
-		
-		// try to write the actual buffer itself
-		if ($this->sizeWritten && !feof($this->buffer)) {
-			//TODO: check that fread returns something
-			$written += fwrite($stream, fread($this->buffer, 8192));
-		}
-		// if we are done, mark it off
-		if (feof($this->buffer)) {
-			$this->complete = true;
-			fclose($this->buffer);
-		}
-		return $written;
-	}
-	
-	/**
-	 * Write the entire request to the stream
-	 * 
-	 * @param resource $stream Stream resource
-	 * 
-	 * @return integer number of written bytes
-	 */
-	public function writeCompletely($stream) {
-		$written = 0;
-		while (!$this->complete) {
-			$written += $this->writeTo($stream);
-		}
-		//echo "\nWritten " . $written . ' bytes ';
-		return $written;
-	}
-}
diff --git a/trunk/clients/php/src/lib/Kafka/Encoder.php b/trunk/clients/php/src/lib/Kafka/Encoder.php
deleted file mode 100644
index 3c05cfd..0000000
--- a/trunk/clients/php/src/lib/Kafka/Encoder.php
+++ /dev/null
@@ -1,73 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * Encode messages and messages sets into the kafka protocol
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_Encoder
-{
-	/**
-	 * 1 byte "magic" identifier to allow format changes
-	 * 
-	 * @var integer
-	 */
-	const CURRENT_MAGIC_VALUE = 1;
-	
-	/**
-	 * Encode a message. The format of an N byte message is the following:
-     *  - 1 byte: "magic" identifier to allow format changes
-     *  - 1 byte:  "compression-attribute" for the compression algorithm
-     *  - 4 bytes: CRC32 of the payload
-     *  - (N - 5) bytes: payload
-	 * 
-	 * @param string  $msg         Message to encode
-	 * @param integer $compression Compression flag
-	 * @return string
-	 */
-	static public function encode_message($msg, $compression) {
-		// <MAGIC_BYTE: 1 byte> <COMPRESSION: 1 byte> <CRC32: 4 bytes bigendian> <PAYLOAD: N bytes>
-		return pack('CCN', self::CURRENT_MAGIC_VALUE, $compression, crc32($msg)) 
-			 . $msg;
-	}
-
-	/**
-	 * Encode a complete request
-	 * 
-	 * @param string  $topic     Topic
-	 * @param integer $partition Partition number
-	 * @param array   $messages  Array of messages to send
-	 * @param integer $compression Compression flag
-	 *
-	 * @return string
-	 */
-	static public function encode_produce_request($topic, $partition, array $messages, $compression) {
-		// encode messages as <LEN: int><MESSAGE_BYTES>
-		$message_set = '';
-		foreach ($messages as $message) {
-			$encoded = self::encode_message($message, $compression);
-			$message_set .= pack('N', strlen($encoded)) . $encoded;
-		}
-		// create the request as <REQUEST_SIZE: int> <REQUEST_ID: short> <TOPIC: bytes> <PARTITION: int> <BUFFER_SIZE: int> <BUFFER: bytes>
-		$data = pack('n', PRODUCE_REQUEST_ID) .
-			pack('n', strlen($topic)) . $topic .
-			pack('N', $partition) .
-			pack('N', strlen($message_set)) . $message_set;
-		return pack('N', strlen($data)) . $data;
-	}
-}
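
To make the framing in Kafka_Encoder concrete, here is a small sketch (payload and topic are arbitrary values) that prints the byte layout of a single encoded message and of a full produce request:

```php
<?php
require 'autoloader.php';
define('PRODUCE_REQUEST_ID', 0); // used by encode_produce_request()

// <MAGIC: 1 byte><COMPRESSION: 1 byte><CRC32: 4 bytes big-endian><PAYLOAD>
$encoded = Kafka_Encoder::encode_message('hello', 0);
echo strlen($encoded), " bytes: ", bin2hex($encoded), "\n";
// 11 bytes: "0100" + crc32 of "hello" (8 hex chars) + "68656c6c6f"

// The produce request wraps the length-prefixed message set with the
// request id, topic, partition, and an overall size prefix.
$request = Kafka_Encoder::encode_produce_request('test', 0, array('hello'), 0);
echo strlen($request), " bytes\n"; // 4 + 2 + 2 + 4 + 4 + 4 + (4 + 11) = 35 bytes
```
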
diff --git a/trunk/clients/php/src/lib/Kafka/FetchRequest.php b/trunk/clients/php/src/lib/Kafka/FetchRequest.php
deleted file mode 100644
index 81b21a5..0000000
--- a/trunk/clients/php/src/lib/Kafka/FetchRequest.php
+++ /dev/null
@@ -1,126 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * Represents a request object
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_FetchRequest extends Kafka_Request
-{
-	/**
-	 * @var string
-	 */
-	private $topic;
-	
-	/**
-	 * @var integer
-	 */
-	private $partition;
-	
-	/**
-	 * @var integer
-	 */
-	private $offset;
-	
-	/**
-	 * @var integer
-	 */
-	private $maxSize;
-	
-	/**
-	 * @param string  $topic     Topic
-	 * @param integer $partition Partition
-	 * @param integer $offset    Offset
-	 * @param integer $maxSize   Max buffer size
-	 */
-	public function __construct($topic, $partition = 0, $offset = 0, $maxSize = 1000000) {
-		$this->id        = Kafka_RequestKeys::FETCH;
-		$this->topic     = $topic;
-		$this->partition = $partition;
-		$this->offset    = $offset;
-		$this->maxSize   = $maxSize;
-	}
-	
-	/**
-	 * Write the request to the output stream
-	 * 
-	 * @param resource $stream Output stream
-	 * 
-	 * @return void
-	 */
-	public function writeTo($stream) {
-		//echo "\nWriting request to stream: " . (string)$this;
-		// <topic size: short> <topic: bytes>
-		fwrite($stream, pack('n', strlen($this->topic)) . $this->topic);
-		// <partition: int> <offset: Long> <maxSize: int>
-		fwrite($stream, pack('N', $this->partition));
-		
-//TODO: need to store a 64bit integer (bigendian), but PHP only supports 32bit integers: 
-//setting first 32 bits to 0
-		fwrite($stream, pack('N2', 0, $this->offset));
-		fwrite($stream, pack('N', $this->maxSize));
-		//echo "\nWritten request to stream: " .(string)$this;
-	}
-	
-	/**
-	 * Get request size in bytes
-	 * 
-	 * @return integer
-	 */
-	public function sizeInBytes() {
-		return 2 + strlen($this->topic) + 4 + 8 + 4;
-	}
-	
-	/**
-	 * Get current offset
-	 *
-	 * @return integer
-	 */
-	public function getOffset() {
-		return $this->offset;
-	}
-	
-	/**
-	 * Get topic
-	 * 
-	 * @return string
-	 */
-	public function getTopic() {
-		return $this->topic;
-	}
-	
-	/**
-	 * Get partition
-	 * 
-	 * @return integer
-	 */
-	public function getPartition() {
-		return $this->partition;
-	}
-	
-	/**
-	 * String representation of the Fetch Request
-	 * 
-	 * @return string
-	 */
-	public function __toString()
-	{
-		return 'topic:' . $this->topic . ', part:' . $this->partition . ' offset:' . $this->offset . ' maxSize:' . $this->maxSize;
-	}
-}
-
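
The TODO in Kafka_FetchRequest::writeTo() about 64-bit offsets can be illustrated with a short sketch (the offset value is arbitrary): the high 32-bit word is fixed to zero, so this client can only address offsets up to 2^32 - 1.

```php
<?php
// Emulate a 64-bit big-endian offset with two 32-bit words, high word = 0.
$offset = 1234;
echo bin2hex(pack('N2', 0, $offset)), "\n"; // "00000000000004d2"
```
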
diff --git a/trunk/clients/php/src/lib/Kafka/Message.php b/trunk/clients/php/src/lib/Kafka/Message.php
deleted file mode 100644
index d728634..0000000
--- a/trunk/clients/php/src/lib/Kafka/Message.php
+++ /dev/null
@@ -1,126 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * A message. The format of an N byte message is the following:
- * 1 byte "magic" identifier to allow format changes
- * 1 byte compression-attribute
- * 4 byte CRC32 of the payload
- * N - 5 byte payload
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_Message
-{
-	
-	/**
-	 * @var string
-	 */
-	private $payload = null;
-	
-	/**
-	 * @var integer
-	 */
-	private $size    = 0;
-	
-	/**
-	 * @var integer
-	 */
-	private $compression    = 0;
-	
-	/**
-	 * @var string
-	 */
-	private $crc     = false;
-	
-	/**
-	 * Constructor
-	 * 
-	 * @param string $data Message payload
-	 */
-	public function __construct($data) {
-		$this->payload = substr($data, 6);
-		$this->compression    = ord(substr($data, 1, 1));
-		$this->crc     = crc32($this->payload);
-		$this->size    = strlen($this->payload);
-	}
-
-	
-	/**
-	 * Encode a message
-	 * 
-	 * @return string
-	 */
-	public function encode() {
-		return Kafka_Encoder::encode_message($this->payload, $this->compression);
-	}
-	
-	/**
-	 * Get the message size
-	 * 
-	 * @return integer
-	 */
-	public function size() {
-		return $this->size;
-	}
-  
-	/**
-	 * Get the magic value
-	 * 
-	 * @return integer
-	 */
-	public function magic() {
-		return Kafka_Encoder::CURRENT_MAGIC_VALUE;
-	}
-	
-	/**
-	 * Get the message checksum
-	 * 
-	 * @return integer
-	 */
-	public function checksum() {
-		return $this->crc;
-	}
-	
-	/**
-	 * Get the message payload
-	 * 
-	 * @return string
-	 */
-	public function payload() {
-		return $this->payload;
-	}
-	
-	/**
-	 * Verify the message against the checksum
-	 * 
-	 * @return boolean
-	 */
-	public function isValid() {
-		return ($this->crc === crc32($this->payload));
-	}
-  
-	/**
-	 * Debug message
-	 * 
-	 * @return string
-	 */
-	public function __toString() {
-		return 'message(magic = ' . Kafka_Encoder::CURRENT_MAGIC_VALUE . ', compression = ' . $this->compression .
-		  ', crc = ' . $this->crc . ', payload = ' . $this->payload . ')';
-	}
-}
diff --git a/trunk/clients/php/src/lib/Kafka/MessageSet.php b/trunk/clients/php/src/lib/Kafka/MessageSet.php
deleted file mode 100644
index 85a0b9a..0000000
--- a/trunk/clients/php/src/lib/Kafka/MessageSet.php
+++ /dev/null
@@ -1,122 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * A sequence of messages stored in a byte buffer
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_MessageSet implements Iterator
-{	
-	/**
-	 * @var integer
-	 */
-	protected $validByteCount = 0;
-	
-	/**
-	 * @var boolean
-	 */
-	private $valid = false;
-	
-	/**
-	 * @var array
-	 */
-	private $array = array();
-	
-	/**
-	 * Constructor
-	 * 
-	 * @param resource $stream    Stream resource
-	 * @param integer  $errorCode Error code
-	 */
-	public function __construct($stream, $errorCode = 0) {
-		$data = stream_get_contents($stream);
-		$len = strlen($data);
-		$ptr = 0;
-		while ($ptr <= ($len - 4)) {
-			$size = array_shift(unpack('N', substr($data, $ptr, 4)));
-			$ptr += 4;
-			$this->array[] = new Kafka_Message(substr($data, $ptr, $size));
-			$ptr += $size;
-			$this->validByteCount += 4 + $size;
-		}
-		fclose($stream);
-	}
-	
-	/**
-	 * Get message set size in bytes
-	 * 
-	 * @return integer
-	 */
-	public function validBytes() {
-		return $this->validByteCount;
-	}
-	
-	/**
-	 * Get message set size in bytes
-	 * 
-	 * @return integer
-	 */
-	public function sizeInBytes() {
-		return $this->validBytes();
-	}
-	
-	/**
-	 * next
-	 * 
-	 * @return void
-	 */
-	public function next() {
-		$this->valid = (FALSE !== next($this->array)); 
-	}	
-	
-	/**
-	 * valid
-	 * 
-	 * @return boolean
-	 */
-	public function valid() {
-		return $this->valid;
-	}
-	
-	/**
-	 * key
-	 * 
-	 * @return integer
-	 */
-	public function key() {
-		return key($this->array); 
-	}
-	
-	/**
-	 * current
-	 * 
-	 * @return Kafka_Message 
-	 */
-	public function current() {
-		return current($this->array);
-	}
-	
-	/**
-	 * rewind
-	 * 
-	 * @return void
-	 */
-	public function rewind() {
-		$this->valid = (FALSE !== reset($this->array)); 
-	}
-}
diff --git a/trunk/clients/php/src/lib/Kafka/Producer.php b/trunk/clients/php/src/lib/Kafka/Producer.php
deleted file mode 100644
index 56a7bd8..0000000
--- a/trunk/clients/php/src/lib/Kafka/Producer.php
+++ /dev/null
@@ -1,122 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * Simple Kafka Producer
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_Producer
-{
-	/**
-	 * @var integer
-	 */
-	protected $request_key;
-
-	/**
-	 * @var resource
-	 */
-	protected $conn;
-	
-	/**
-	 * @var string
-	 */
-	protected $host;
-	
-	/**
-	 * @var integer
-	 */
-	protected $port;
-
-	/**
-	 * @var integer
-	 */
-	protected $compression;
-
-	/**
-	 * Constructor
-	 * 
-	 * @param string  $host Host
-	 * @param integer $port Port
-	 */
-	public function __construct($host, $port) {
-		$this->request_key = 0;
-		$this->host = $host;
-		$this->port = $port;
-		$this->compression = 0;
-	}
-	
-	/**
-	 * Connect to Kafka via a socket
-	 * 
-	 * @return void
-	 * @throws RuntimeException
-	 */
-	public function connect() {
-		if (!is_resource($this->conn)) {
-			$this->conn = stream_socket_client('tcp://' . $this->host . ':' . $this->port, $errno, $errstr);
-		}
-		if (!is_resource($this->conn)) {
-			throw new RuntimeException('Cannot connect to Kafka: ' . $errstr, $errno);
-		}
-	}
-
-	/**
-	 * Close the socket
-	 * 
-	 * @return void
-	 */
-	public function close() {
-		if (is_resource($this->conn)) {
-			fclose($this->conn);
-		}
-	}
-
-	/**
-	 * Send messages to Kafka
-	 * 
-	 * @param array   $messages  Messages to send
-	 * @param string  $topic     Topic
-	 * @param integer $partition Partition
-	 *
-	 * @return boolean
-	 */
-	public function send(array $messages, $topic, $partition = 0xFFFFFFFF) {
-		$this->connect();
-		return fwrite($this->conn, Kafka_Encoder::encode_produce_request($topic, $partition, $messages, $this->compression));
-	}
-
-	/**
-	 * When serializing, close the socket and save the connection parameters
-	 * so it can connect again
-	 * 
-	 * @return array Properties to save
-	 */
-	public function __sleep() {
-		$this->close();
-		return array('request_key', 'host', 'port');
-	}
-
-	/**
-	 * Restore parameters on unserialize
-	 * 
-	 * @return void
-	 */
-	public function __wakeup() {
-		
-	}
-}
diff --git a/trunk/clients/php/src/lib/Kafka/Request.php b/trunk/clients/php/src/lib/Kafka/Request.php
deleted file mode 100644
index c91898e..0000000
--- a/trunk/clients/php/src/lib/Kafka/Request.php
+++ /dev/null
@@ -1,30 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * Abstract Request class
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-abstract class Kafka_Request
-{
-	/**
-	 * @var integer
-	 */
-	public $id;
-}
-
diff --git a/trunk/clients/php/src/lib/Kafka/RequestKeys.php b/trunk/clients/php/src/lib/Kafka/RequestKeys.php
deleted file mode 100644
index 6b7084b..0000000
--- a/trunk/clients/php/src/lib/Kafka/RequestKeys.php
+++ /dev/null
@@ -1,30 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * Some constants for request keys
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_RequestKeys
-{
-	const PRODUCE      = 0;
-	const FETCH        = 1;
-	const MULTIFETCH   = 2;
-	const MULTIPRODUCE = 3;
-	const OFFSETS      = 4;
-}
diff --git a/trunk/clients/php/src/lib/Kafka/SimpleConsumer.php b/trunk/clients/php/src/lib/Kafka/SimpleConsumer.php
deleted file mode 100644
index 4fc846b..0000000
--- a/trunk/clients/php/src/lib/Kafka/SimpleConsumer.php
+++ /dev/null
@@ -1,142 +0,0 @@
-<?php
-/**
- * Kafka Client
- *
- * @category  Libraries
- * @package   Kafka
- * @author    Lorenzo Alberton <l.alberton@quipo.it>
- * @copyright 2011 Lorenzo Alberton
- * @license   http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @version   $Revision: $
- * @link      http://sna-projects.com/kafka/
- */
-
-/**
- * Simple Kafka Consumer
- *
- * @category Libraries
- * @package  Kafka
- * @author   Lorenzo Alberton <l.alberton@quipo.it>
- * @license  http://www.apache.org/licenses/LICENSE-2.0 Apache License, Version 2.0
- * @link     http://sna-projects.com/kafka/
- */
-class Kafka_SimpleConsumer
-{
-	/**
-	 * @var string
-	 */
-	protected $host             = 'localhost';
-	
-	/**
-	 * @var integer
-	 */
-	protected $port             = 9092;
-	
-	/**
-	 * @var integer
-	 */
-	protected $socketTimeout    = 10;
-	
-	/**
-	 * @var integer
-	 */
-	protected $socketBufferSize = 1000000;
-
-	/**
-	 * @var resource
-	 */
-	protected $conn = null;
-	
-	/**
-	 * Constructor
-	 * 
-	 * @param string  $host             Kafka Hostname
-	 * @param integer $port             Port
-	 * @param integer $socketTimeout    Socket timeout
-	 * @param integer $socketBufferSize Socket max buffer size
-	 */
-	public function __construct($host, $port, $socketTimeout, $socketBufferSize) {
-		$this->host = $host;
-		$this->port = $port;
-		$this->socketTimeout    = $socketTimeout;
-		$this->socketBufferSize = $socketBufferSize;
-	}
-	
-	/**
-	 * Connect to Kafka via socket
-	 * 
-	 * @return void
-	 */
-	public function connect() {
-		if (!is_resource($this->conn)) {
-			$this->conn = stream_socket_client('tcp://' . $this->host . ':' . $this->port, $errno, $errstr);
-			if (!$this->conn) {
-				throw new RuntimeException($errstr, $errno);
-			}
-			stream_set_timeout($this->conn,      $this->socketTimeout);
-			stream_set_read_buffer($this->conn,  $this->socketBufferSize);
-			stream_set_write_buffer($this->conn, $this->socketBufferSize);
-			//echo "\nConnected to ".$this->host.":".$this->port."\n";
-		}
-	}
-
-	/**
-	 * Close the connection
-	 * 
-	 * @return void
-	 */
-	public function close() {
-		if (is_resource($this->conn)) {
-			fclose($this->conn);
-		}
-	}
-
-	/**
-	 * Send a request and fetch the response
-	 * 
-	 * @param Kafka_FetchRequest $req Request
-	 *
-	 * @return Kafka_MessageSet $messages
-	 */
-	public function fetch(Kafka_FetchRequest $req) {
-		$this->connect();
-		$this->sendRequest($req);
-		//echo "\nRequest sent: ".(string)$req."\n";
-		$response = $this->getResponse();
-		//var_dump($response);
-		$this->close();
-		return new Kafka_MessageSet($response['response']->buffer, $response['errorCode']);
-	}
-	
-	/**
-	 * Send the request
-	 * 
-	 * @param Kafka_FetchRequest $req Request
-	 * 
-	 * @return void
-	 */
-	protected function sendRequest(Kafka_FetchRequest $req) {
-		$send = new Kafka_BoundedByteBuffer_Send($req);
-		$send->writeCompletely($this->conn);
-	}
-	
-	/**
-	 * Get the response
-	 * 
-	 * @return array
-	 */
-	protected function getResponse() {
-		$response = new Kafka_BoundedByteBuffer_Receive();
-		$response->readCompletely($this->conn);
-		
-		rewind($response->buffer);
-		// this has the side effect of setting the initial position of buffer correctly
-		$errorCode = array_shift(unpack('n', fread($response->buffer, 2))); 
-		//rewind($response->buffer);
-		return array(
-			'response'  => $response, 
-			'errorCode' => $errorCode,
-		);
-	}
-	
-}
diff --git a/trunk/clients/php/src/tests/Kafka/BoundedByteBuffer/ReceiveTest.php b/trunk/clients/php/src/tests/Kafka/BoundedByteBuffer/ReceiveTest.php
deleted file mode 100644
index a751d7e..0000000
--- a/trunk/clients/php/src/tests/Kafka/BoundedByteBuffer/ReceiveTest.php
+++ /dev/null
@@ -1,133 +0,0 @@
-<?php
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-<?php
-if (!defined('PRODUCE_REQUEST_ID')) {
-	define('PRODUCE_REQUEST_ID', 0);
-}
-
-/**
- * Description of Kafka_BoundedByteBuffer_ReceiveTest
- *
- * @author Lorenzo Alberton <l.alberton@quipo.it>
- */
-class Kafka_BoundedByteBuffer_ReceiveTest extends PHPUnit_Framework_TestCase
-{
-	private $stream = null;
-	private $size1  = 0;
-	private $msg1   = '';
-	private $size2  = 0;
-	private $msg2   = '';
-	
-	/**
-	 * @var Kafka_BoundedByteBuffer_Receive
-	 */
-	private $obj = null;
-	
-	/**
-	 * Append two message sets to a sample stream to verify that only the first one is read
-	 */
-	public function setUp() {
-		$this->stream = fopen('php://temp', 'w+b');
-		$this->msg1 = 'test message';
-		$this->msg2 = 'another message';
-		$this->size1 = strlen($this->msg1);
-		$this->size2 = strlen($this->msg2);
-		fwrite($this->stream, pack('N', $this->size1));
-		fwrite($this->stream, $this->msg1);
-		fwrite($this->stream, pack('N', $this->size2));
-		fwrite($this->stream, $this->msg2);
-		rewind($this->stream);
-		$this->obj = new Kafka_BoundedByteBuffer_Receive;
-	}
-
-	public function tearDown() {
-		fclose($this->stream);
-		unset($this->obj);
-	}
-	
-	public function testReadFrom() {
-		$this->assertEquals($this->size1 + 4, $this->obj->readFrom($this->stream));
-		$this->assertEquals($this->msg1, stream_get_contents($this->obj->buffer));
-		//test that we don't go beyond the first message set
-		$this->assertEquals(0, $this->obj->readFrom($this->stream));
-		$this->assertEquals($this->size1 + 4, ftell($this->stream));
-	}
-	
-	public function testReadCompletely() {
-		$this->assertEquals($this->size1 + 4, $this->obj->readCompletely($this->stream));
-		$this->assertEquals($this->msg1, stream_get_contents($this->obj->buffer));
-		//test that we don't go beyond the first message set
-		$this->assertEquals(0, $this->obj->readCompletely($this->stream));
-		$this->assertEquals($this->size1 + 4, ftell($this->stream));
-	}
-	
-	public function testReadFromOffset() {
-		fseek($this->stream, $this->size1 + 4);
-		$this->obj = new Kafka_BoundedByteBuffer_Receive;
-		$this->assertEquals($this->size2 + 4, $this->obj->readFrom($this->stream));
-		$this->assertEquals($this->msg2, stream_get_contents($this->obj->buffer));
-		//test that we reached the end of the stream (2nd message set)
-		$this->assertEquals(0, $this->obj->readFrom($this->stream));
-		$this->assertEquals($this->size1 + 4 + $this->size2 + 4, ftell($this->stream));
-	}
-	
-	public function testReadCompletelyOffset() {
-		fseek($this->stream, $this->size1 + 4);
-		$this->obj = new Kafka_BoundedByteBuffer_Receive;
-		$this->assertEquals($this->size2 + 4, $this->obj->readCompletely($this->stream));
-		$this->assertEquals($this->msg2, stream_get_contents($this->obj->buffer));
-		//test that we reached the end of the stream (2nd message set)
-		$this->assertEquals(0, $this->obj->readCompletely($this->stream));
-		$this->assertEquals($this->size1 + 4 + $this->size2 + 4, ftell($this->stream));
-	}
-	
-	/**
-	 * @expectedException RuntimeException
-	 */
-	public function testInvalidStream() {
-		$this->stream = fopen('php://temp', 'w+b');
-		$this->obj->readFrom($this->stream);
-		$this->fail('The above call should throw an exception');	
-	}
-	
-	/**
-	 * @expectedException RuntimeException
-	 */
-	public function testInvalidSizeTooBig() {
-		$maxSize = 10;
-		$this->obj = new Kafka_BoundedByteBuffer_Receive($maxSize);
-		$this->stream = fopen('php://temp', 'w+b');
-		fwrite($this->stream, pack('N', $maxSize + 1));
-		fwrite($this->stream, $this->msg1);
-		rewind($this->stream);
-		$this->obj->readFrom($this->stream);
-		$this->fail('The above call should throw an exception');
-	}
-	
-	/**
-	 * @expectedException RuntimeException
-	 */
-	public function testInvalidSizeNotPositive() {
-		$this->stream = fopen('php://temp', 'w+b');
-		fwrite($this->stream, pack('N', 0));
-		fwrite($this->stream, '');
-		rewind($this->stream);
-		$this->obj->readFrom($this->stream);
-		$this->fail('The above call should throw an exception');
-	}
-}
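
The assertions above boil down to one behaviour: a receive buffer reads exactly one 4-byte-size-prefixed frame, stops at its boundary, and rejects empty streams and out-of-range sizes. A small Python sketch of that behaviour follows; the names and the 1 MB default maximum are illustrative assumptions.

    # Hedged sketch of the frame-reading behaviour ReceiveTest verifies.
    import io
    import struct

    def read_frame(stream, max_size=1048576):
        """Read one <SIZE: int32><PAYLOAD> frame and leave the stream at the next frame."""
        header = stream.read(4)
        if len(header) < 4:
            raise RuntimeError('stream ended before a size header could be read')
        (size,) = struct.unpack('>i', header)
        if size <= 0 or size > max_size:
            raise RuntimeError('invalid frame size: %d' % size)
        payload = stream.read(size)
        if len(payload) < size:
            raise RuntimeError('stream ended before the full frame was read')
        return payload

    # Two frames back to back; only the first is consumed, as the deleted test asserts.
    buf = io.BytesIO(struct.pack('>i', 4) + b'abcd' + struct.pack('>i', 3) + b'xyz')
    assert read_frame(buf) == b'abcd'
    assert buf.tell() == 4 + 4   # positioned at the start of the second frame
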
diff --git a/trunk/clients/php/src/tests/Kafka/BoundedByteBuffer/SendTest.php b/trunk/clients/php/src/tests/Kafka/BoundedByteBuffer/SendTest.php
deleted file mode 100644
index 72c8f30..0000000
--- a/trunk/clients/php/src/tests/Kafka/BoundedByteBuffer/SendTest.php
+++ /dev/null
@@ -1,100 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-<?php
-
-/**
- * Description of Kafka_BoundedByteBuffer_SendTest
- *
- * @author Lorenzo Alberton <l.alberton@quipo.it>
- */
-class Kafka_BoundedByteBuffer_SendTest extends PHPUnit_Framework_TestCase
-{
-	private $stream;
-	private $topic;
-	private $partition;
-	private $offset;
-	
-	/**
-	 * @var Kafka_FetchRequest
-	 */
-	private $req;
-	
-	/**
-	 * @var Kafka_BoundedByteBuffer_Send
-	 */
-	private $obj = null;
-
-	public function setUp() {
-		$this->stream = fopen('php://temp', 'w+b');
-		$this->topic     = 'a test topic';
-		$this->partition = 0;
-		$this->offset    = 0;
-		$maxSize         = 10000;
-		$this->req = new Kafka_FetchRequest($this->topic, $this->partition, $this->offset, $maxSize);
-		$this->obj = new Kafka_BoundedByteBuffer_Send($this->req);
-	}
-
-	public function tearDown() {
-		fclose($this->stream);
-		unset($this->obj);
-	}
-	
-	public function testWriteTo() {
-		// 4 bytes = size
-		// 2 bytes = request ID
-		$this->assertEquals(4 + $this->req->sizeInBytes() + 2, $this->obj->writeTo($this->stream));
-	}
-	
-	public function testWriteCompletely() {
-		// 4 bytes = size
-		// 2 bytes = request ID
-		$this->assertEquals(4 + $this->req->sizeInBytes() + 2, $this->obj->writeCompletely($this->stream));
-	}
-	
-	public function testWriteToWithBigRequest() {
-		$topicSize = 9000;
-		$this->topic = str_repeat('a', $topicSize); //bigger than the fread buffer, 8192
-		$this->req = new Kafka_FetchRequest($this->topic, $this->partition, $this->offset);
-		$this->obj = new Kafka_BoundedByteBuffer_Send($this->req);
-		// 4 bytes = size
-		// 2 bytes = request ID
-		//$this->assertEquals(4 + $this->req->sizeInBytes() + 2, $this->obj->writeTo($this->stream));
-		$written = $this->obj->writeTo($this->stream);
-		$this->assertEquals(4 + 8192, $written);
-		$this->assertTrue($written < $topicSize);
-	}
-	
-	public function testWriteCompletelyWithBigRequest() {
-		$topicSize = 9000;
-		$this->topic = str_repeat('a', $topicSize); //bigger than the fread buffer, 8192
-		$this->req = new Kafka_FetchRequest($this->topic, $this->partition, $this->offset);
-		$this->obj = new Kafka_BoundedByteBuffer_Send($this->req);
-		// 4 bytes = size
-		// 2 bytes = request ID
-		$this->assertEquals(4 + $this->req->sizeInBytes() + 2, $this->obj->writeCompletely($this->stream));
-	}
-	
-	/**
-	 * @expectedException RuntimeException
-	 */
-	public function testWriteInvalidStream() {
-		$this->stream = fopen('php://temp', 'rb'); //read-only mode
-		$this->obj->writeTo($this->stream);
-		$this->fail('the above call should throw an exception');
-	}	
-}
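
SendTest distinguishes writeTo, which may stop after one underlying 8192-byte write, from writeCompletely, which loops until the whole framed request (size, request id, body) is on the wire. A hedged Python sketch of that loop; the function names and chunk size are illustrative assumptions.

    # Hedged sketch of the writeTo / writeCompletely split that SendTest exercises.
    import io
    import struct

    CHUNK = 8192

    def frame_request(request_id, body):
        """Frame a request as <SIZE: int32><REQUEST_ID: int16><BODY>."""
        return struct.pack('>i', len(body) + 2) + struct.pack('>h', request_id) + body

    def write_completely(stream, data):
        """Keep writing CHUNK-sized slices until the whole frame has been written."""
        written = 0
        while written < len(data):
            written += stream.write(data[written:written + CHUNK])
        return written

    frame = frame_request(1, b'x' * 9000)   # larger than a single 8192-byte chunk
    out = io.BytesIO()
    assert write_completely(out, frame) == len(frame) == 4 + 2 + 9000
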
diff --git a/trunk/clients/php/src/tests/Kafka/EncoderTest.php b/trunk/clients/php/src/tests/Kafka/EncoderTest.php
deleted file mode 100644
index 628b05f..0000000
--- a/trunk/clients/php/src/tests/Kafka/EncoderTest.php
+++ /dev/null
@@ -1,61 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-<?php
-if (!defined('PRODUCE_REQUEST_ID')) {
-	define('PRODUCE_REQUEST_ID', 0);
-}
-
-/**
- * Description of EncoderTest
- *
- * @author Lorenzo Alberton <l.alberton@quipo.it>
- */
-class Kafka_EncoderTest extends PHPUnit_Framework_TestCase
-{
-	public function testEncodedMessageLength() {
-		$test = 'a sample string';
-		$encoded = Kafka_Encoder::encode_message($test);
-		$this->assertEquals(5 + strlen($test), strlen($encoded));
-	}
-	
-	public function testByteArrayContainsString() {
-		$test = 'a sample string';
-		$encoded = Kafka_Encoder::encode_message($test);
-		$this->assertContains($test, $encoded);
-	}
-	
-	public function testEncodedMessages() {
-		$topic     = 'sample topic';
-		$partition = 1;
-		$messages  = array(
-			'test 1',
-			'test 2 abcde',
-		);
-		$encoded = Kafka_Encoder::encode_produce_request($topic, $partition, $messages);
-		$this->assertContains($topic, $encoded);
-		$this->assertContains($partition, $encoded);
-		foreach ($messages as $msg) {
-			$this->assertContains($msg, $encoded);
-		}
-		$size = 4 + 2 + 2 + strlen($topic) + 4 + 4;
-		foreach ($messages as $msg) {
-			$size += 9 + strlen($msg);
-		}
-		$this->assertEquals($size, strlen($encoded));
-	}
-}
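
The size arithmetic asserted above pins down the layout the deleted PHP encoder used: each message is a 1-byte magic value, a 4-byte CRC32 of the payload, and the payload (5 header bytes), and a produce request wraps a size, a request id, the topic, the partition, and a length-prefixed message set. A Python sketch of that layout follows; the magic value 0 is an assumption consistent with the 5-byte header, and PRODUCE_REQUEST_ID = 0 matches the constant defined above.

    # Hedged sketch of the layout EncoderTest's size arithmetic implies.
    import binascii
    import struct

    PRODUCE_REQUEST_ID = 0

    def encode_message(payload):
        # 1 magic byte + 4 CRC bytes + payload -> 5 + len(payload), as the test expects
        return struct.pack('>B', 0) + \
               struct.pack('>I', binascii.crc32(payload) & 0xffffffff) + \
               payload

    def encode_produce_request(topic, partition, payloads):
        message_set = b''.join(struct.pack('>i', len(m)) + m
                               for m in (encode_message(p) for p in payloads))
        body = struct.pack('>h', PRODUCE_REQUEST_ID) + \
               struct.pack('>h', len(topic)) + topic.encode('utf-8') + \
               struct.pack('>i', partition) + \
               struct.pack('>i', len(message_set)) + message_set
        return struct.pack('>i', len(body)) + body

    payloads = [b'test 1', b'test 2 abcde']
    request = encode_produce_request('sample topic', 1, payloads)
    assert len(request) == 4 + 2 + 2 + len('sample topic') + 4 + 4 + sum(9 + len(p) for p in payloads)
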
diff --git a/trunk/clients/php/src/tests/Kafka/FetchRequestTest.php b/trunk/clients/php/src/tests/Kafka/FetchRequestTest.php
deleted file mode 100644
index ce3f274..0000000
--- a/trunk/clients/php/src/tests/Kafka/FetchRequestTest.php
+++ /dev/null
@@ -1,88 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-<?php
-
-/**
- * Description of FetchRequestTest
- *
- * @author Lorenzo Alberton <l.alberton@quipo.it>
- */
-class Kafka_FetchRequestTest extends PHPUnit_Framework_TestCase
-{
-	private $topic;
-	private $partition;
-	private $offset;
-	private $maxSize;
-	
-	/**
-	 * @var Kafka_FetchRequest
-	 */
-	private $req;
-
-	public function setUp() {
-		$this->topic     = 'a test topic';
-		$this->partition = 0;
-		$this->offset    = 0;
-		$this->maxSize   = 10000;
-		$this->req = new Kafka_FetchRequest($this->topic, $this->partition, $this->offset, $this->maxSize);
-	}
-	
-	public function testRequestSize() {
-		$this->assertEquals(18 + strlen($this->topic) , $this->req->sizeInBytes());
-	}
-	
-	public function testGetters() {
-		$this->assertEquals($this->topic,     $this->req->getTopic());
-		$this->assertEquals($this->offset,    $this->req->getOffset());
-		$this->assertEquals($this->partition, $this->req->getPartition());
-	}
-	
-	public function testWriteTo() {
-		$stream = fopen('php://temp', 'w+b');
-		$this->req->writeTo($stream);
-		rewind($stream);
-		$data = stream_get_contents($stream);
-		fclose($stream);
-		$this->assertEquals(strlen($data), $this->req->sizeInBytes());
-		$this->assertContains($this->topic, $data);
-		$this->assertContains($this->partition, $data);
-	}
-	
-	public function testWriteToOffset() {
-		$this->offset = 14;
-		$this->req = new Kafka_FetchRequest($this->topic, $this->partition, $this->offset, $this->maxSize);
-		$stream = fopen('php://temp', 'w+b');
-		$this->req->writeTo($stream);
-		rewind($stream);
-		//read it back
-		$topicLen = array_shift(unpack('n', fread($stream, 2)));
-		$this->assertEquals(strlen($this->topic), $topicLen);
-		$this->assertEquals($this->topic,     fread($stream, $topicLen));
-		$this->assertEquals($this->partition, array_shift(unpack('N', fread($stream, 4))));
-		$int64bit = unpack('N2', fread($stream, 8));
-		$this->assertEquals($this->offset,    $int64bit[2]);
-		$this->assertEquals($this->maxSize,   array_shift(unpack('N', fread($stream, 4))));
-	}
-	
-	public function testToString() {
-		$this->assertContains('topic:'   . $this->topic,     (string)$this->req);
-		$this->assertContains('part:'    . $this->partition, (string)$this->req);
-		$this->assertContains('offset:'  . $this->offset,    (string)$this->req);
-		$this->assertContains('maxSize:' . $this->maxSize,   (string)$this->req);
-	}
-}
\ No newline at end of file
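
testWriteToOffset reads the serialized request back field by field, which fixes the body layout: a 2-byte topic length, the topic, a 4-byte partition, an 8-byte offset, and a 4-byte maximum size, i.e. 18 fixed bytes plus the topic, as testRequestSize expects. A Python sketch of packing and unpacking that body; the function names are illustrative.

    # Hedged sketch of the fetch request body layout that FetchRequestTest reads back.
    import struct

    def pack_fetch_body(topic, partition, offset, max_size=1048576):
        topic_bytes = topic.encode('utf-8')
        return struct.pack('>H', len(topic_bytes)) + topic_bytes + \
               struct.pack('>i', partition) + \
               struct.pack('>q', offset) + \
               struct.pack('>i', max_size)

    def unpack_fetch_body(data):
        (topic_len,) = struct.unpack_from('>H', data, 0)
        topic = data[2:2 + topic_len].decode('utf-8')
        partition, offset, max_size = struct.unpack_from('>iqi', data, 2 + topic_len)
        return topic, partition, offset, max_size

    body = pack_fetch_body('a test topic', 0, 14, 10000)
    assert len(body) == 18 + len('a test topic')
    assert unpack_fetch_body(body) == ('a test topic', 0, 14, 10000)
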
diff --git a/trunk/clients/php/src/tests/Kafka/MessageTest.php b/trunk/clients/php/src/tests/Kafka/MessageTest.php
deleted file mode 100644
index 38c3cc6..0000000
--- a/trunk/clients/php/src/tests/Kafka/MessageTest.php
+++ /dev/null
@@ -1,62 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-<?php
-
-/**
- * @author Lorenzo Alberton <l.alberton@quipo.it>
- */
-class Kafka_MessageTest extends PHPUnit_Framework_TestCase
-{
-	private $test;
-	private $encoded;
-	private $msg;
-	public function setUp() {
-		$this->test = 'a sample string';
-		$this->encoded = Kafka_Encoder::encode_message($this->test);
-		$this->msg = new Kafka_Message($this->encoded);
-		
-	}
-	
-	public function testPayload() {
-		$this->assertEquals($this->test, $this->msg->payload());
-	}
-	
-	public function testValid() {
-		$this->assertTrue($this->msg->isValid());
-	}
-	
-	public function testEncode() {
-		$this->assertEquals($this->encoded, $this->msg->encode());
-	}
-	
-	public function testChecksum() {
-		$this->assertInternalType('integer', $this->msg->checksum());
-	}
-	
-	public function testSize() {
-		$this->assertEquals(strlen($this->test), $this->msg->size());
-	}
-	
-	public function testToString() {
-		$this->assertInternalType('string', $this->msg->__toString());
-	}
-	
-	public function testMagic() {
-		$this->assertInternalType('integer', $this->msg->magic());
-	}
-}
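
MessageTest exercises payload(), checksum(), and isValid(); under the 5-byte header implied by EncoderTest above, validation amounts to recomputing the CRC32 of the payload and comparing it to the stored value. A short Python sketch, with an illustrative parse_message helper.

    # Hedged sketch of the validity check MessageTest exercises, assuming the
    # 5-byte <MAGIC><CRC32> message header implied by EncoderTest above.
    import binascii
    import struct

    def parse_message(encoded):
        """Return (magic, checksum, payload, is_valid) for one encoded message."""
        magic = encoded[0]
        (checksum,) = struct.unpack('>I', encoded[1:5])
        payload = encoded[5:]
        is_valid = (binascii.crc32(payload) & 0xffffffff) == checksum
        return magic, checksum, payload, is_valid

    sample = b'a sample string'
    crc = binascii.crc32(sample) & 0xffffffff
    encoded = struct.pack('>B', 0) + struct.pack('>I', crc) + sample
    assert parse_message(encoded) == (0, crc, sample, True)
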
diff --git a/trunk/clients/php/src/tests/Kafka/ProducerTest.php b/trunk/clients/php/src/tests/Kafka/ProducerTest.php
deleted file mode 100644
index a6705fa..0000000
--- a/trunk/clients/php/src/tests/Kafka/ProducerTest.php
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-<?php
-
-/**
- * Override connect() method of base class
- *
- * @author Lorenzo Alberton <l.alberton@quipo.it>
- */
-class Kafka_ProducerMock extends Kafka_Producer {
-	public function connect() {
-		if (!is_resource($this->conn)) {
-			$this->conn = fopen('php://temp', 'w+b');
-		}
-	}
-	
-	public function getData() {
-		$this->connect();
-		rewind($this->conn);
-		return stream_get_contents($this->conn);
-	}
-}
-
-/**
- * Description of ProducerTest
- *
- * @author Lorenzo Alberton <l.alberton@quipo.it>
- */
-class Kafka_ProducerTest extends PHPUnit_Framework_TestCase
-{
-	/**
-	 * @var Kafka_Producer
-	 */
-	private $producer;
-	
-	public function setUp() {
-		$this->producer = new Kafka_ProducerMock('localhost', 1234);
-	}
-	
-	public function tearDown() {
-		$this->producer->close();
-		unset($this->producer);
-	}
-
-
-	public function testProducer() {
-		$messages = array(
-			'test 1',
-			'test 2 abc',
-		);
-		$topic = 'a topic';
-		$partition = 3;
-		$this->producer->send($messages, $topic, $partition);
-		$sent = $this->producer->getData();
-		$this->assertContains($topic, $sent);
-		$this->assertContains($partition, $sent);
-		foreach ($messages as $msg) {
-			$this->assertContains($msg, $sent);
-		}
-	}
-}
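
Kafka_ProducerMock above replaces the socket with an in-memory stream so the test can inspect the exact bytes the producer would have sent, without a broker. The same trick in Python, with a purely illustrative producer class (not the removed client):

    # Hedged Python analogue of the ProducerMock idea: swap the connection for an
    # in-memory buffer and assert on the bytes written to it.
    import io

    class InMemoryProducer(object):
        def __init__(self):
            self.conn = io.BytesIO()

        def send(self, payload):
            self.conn.write(payload)

        def sent_bytes(self):
            return self.conn.getvalue()

    producer = InMemoryProducer()
    producer.send(b'test 1')
    producer.send(b'test 2 abc')
    assert b'test 2 abc' in producer.sent_bytes()
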
diff --git a/trunk/clients/php/src/tests/bootstrap.php b/trunk/clients/php/src/tests/bootstrap.php
deleted file mode 100644
index cbeb8cc..0000000
--- a/trunk/clients/php/src/tests/bootstrap.php
+++ /dev/null
@@ -1,53 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-<?php
-
-function test_autoload($className)
-{
-	$classFile = str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';
-	if (function_exists('stream_resolve_include_path')) {
-		$file = stream_resolve_include_path($classFile);
-	} else {
-		foreach (explode(PATH_SEPARATOR, get_include_path()) as $path) {
-			if (file_exists($path . '/' . $classFile)) {
-				$file = $path . '/' . $classFile;
-				break;
-			}
-		}
-	}
-	/* If file is found, store it into the cache, classname <-> file association */
-	if (($file !== false) && ($file !== null)) {
-		include $file;
-		return;
-	}
-
-	throw new RuntimeException($className. ' not found');
-}
-
-// register the autoloader
-spl_autoload_register('test_autoload');
-
-set_include_path(
-	implode(PATH_SEPARATOR, array(
-		realpath(dirname(__FILE__).'/../lib'),
-		get_include_path(),
-	))
-);
-
-date_default_timezone_set('Europe/London');
- 
\ No newline at end of file
diff --git a/trunk/clients/php/src/tests/phpunit.xml b/trunk/clients/php/src/tests/phpunit.xml
deleted file mode 100644
index 7654258..0000000
--- a/trunk/clients/php/src/tests/phpunit.xml
+++ /dev/null
@@ -1,17 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<phpunit 
-	bootstrap="./bootstrap.php"
-	colors="true"
-	backupGlobals="false"
-	backupStaticAttributes="false">
-
-    <testsuite name="Kafka PHP Client Test Suite">
-        <directory>./Kafka</directory>
-    </testsuite>
-
-    <filter>
-        <blacklist>
-            <directory>./</directory>
-        </blacklist>
-    </filter>
-</phpunit>
diff --git a/trunk/clients/python/LICENSE b/trunk/clients/python/LICENSE
deleted file mode 100644
index 6b0b127..0000000
--- a/trunk/clients/python/LICENSE
+++ /dev/null
@@ -1,203 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
diff --git a/trunk/clients/python/kafka.py b/trunk/clients/python/kafka.py
deleted file mode 100644
index cf88c77..0000000
--- a/trunk/clients/python/kafka.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-import socket
-import struct
-import binascii
-import sys
-
-PRODUCE_REQUEST_ID = 0
-
-def encode_message(message):
-    # <MAGIC_BYTE: char> <COMPRESSION_ALGO: char> <CRC32: int> <PAYLOAD: bytes>
-    return struct.pack('>B', 1) + \
-           struct.pack('>B', 0) + \
-           struct.pack('>i', binascii.crc32(message)) + \
-           message
-
-def encode_produce_request(topic, partition, messages):
-    # encode messages as <LEN: int><MESSAGE_BYTES>
-    encoded = [encode_message(message) for message in messages]
-    message_set = ''.join([struct.pack('>i', len(m)) + m for m in encoded])
-    
-    # create the request as <REQUEST_SIZE: int> <REQUEST_ID: short> <TOPIC: bytes> <PARTITION: int> <BUFFER_SIZE: int> <BUFFER: bytes>
-    data = struct.pack('>H', PRODUCE_REQUEST_ID) + \
-           struct.pack('>H', len(topic)) + topic + \
-           struct.pack('>i', partition) + \
-           struct.pack('>i', len(message_set)) + message_set
-    return struct.pack('>i', len(data)) + data
-
-
-class KafkaProducer:
-    def __init__(self, host, port):
-        self.REQUEST_KEY = 0
-        self.connection = socket.socket()
-        self.connection.connect((host, port))
-
-    def close(self):
-        self.connection.close()
-
-    def send(self, messages, topic, partition = 0):
-        self.connection.sendall(encode_produce_request(topic, partition, messages))
-    
-if __name__ == '__main__':
-    if len(sys.argv) < 4:
-        print >> sys.stderr, 'USAGE: python', sys.argv[0], 'host port topic'
-        sys.exit(1)
-    host = sys.argv[1]
-    port = int(sys.argv[2])
-    topic = sys.argv[3]
-
-    producer = KafkaProducer(host, port)
-
-    while True:
-        print 'Enter comma separated messages: ',
-        line = sys.stdin.readline()
-        messages = line.split(',')
-        producer.send(messages, topic)
-        print 'Sent', len(messages), 'messages successfully'
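
For reference, the removed kafka.py exposes a tiny producer API; a hedged usage sketch follows (Python 2, and it assumes the module is on the import path with a broker listening on localhost:9092).

    # Hedged usage sketch for the removed Python producer module.
    from kafka import KafkaProducer

    producer = KafkaProducer('localhost', 9092)
    producer.send(['first message', 'second message'], 'test-topic')
    producer.close()
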
diff --git a/trunk/clients/python/setup.py b/trunk/clients/python/setup.py
deleted file mode 100644
index 0c5a615..0000000
--- a/trunk/clients/python/setup.py
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-from distutils.core import setup
- 
-setup(
-    name='kafka-python-client',
-    version='0.6',
-    description='This library implements a Kafka client',
-    author='LinkedIn.com',
-    url='https://github.com/kafka-dev/kafka',
-    package_dir={'': '.'},
-    py_modules=[
-        'kafka',
-    ],
-)
diff --git a/trunk/clients/ruby/LICENSE b/trunk/clients/ruby/LICENSE
deleted file mode 100644
index ef51da2..0000000
--- a/trunk/clients/ruby/LICENSE
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                              Apache License
-                        Version 2.0, January 2004
-                     http://www.apache.org/licenses/
-
-TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-1. Definitions.
-
-   "License" shall mean the terms and conditions for use, reproduction,
-   and distribution as defined by Sections 1 through 9 of this document.
-
-   "Licensor" shall mean the copyright owner or entity authorized by
-   the copyright owner that is granting the License.
-
-   "Legal Entity" shall mean the union of the acting entity and all
-   other entities that control, are controlled by, or are under common
-   control with that entity. For the purposes of this definition,
-   "control" means (i) the power, direct or indirect, to cause the
-   direction or management of such entity, whether by contract or
-   otherwise, or (ii) ownership of fifty percent (50%) or more of the
-   outstanding shares, or (iii) beneficial ownership of such entity.
-
-   "You" (or "Your") shall mean an individual or Legal Entity
-   exercising permissions granted by this License.
-
-   "Source" form shall mean the preferred form for making modifications,
-   including but not limited to software source code, documentation
-   source, and configuration files.
-
-   "Object" form shall mean any form resulting from mechanical
-   transformation or translation of a Source form, including but
-   not limited to compiled object code, generated documentation,
-   and conversions to other media types.
-
-   "Work" shall mean the work of authorship, whether in Source or
-   Object form, made available under the License, as indicated by a
-   copyright notice that is included in or attached to the work
-   (an example is provided in the Appendix below).
-
-   "Derivative Works" shall mean any work, whether in Source or Object
-   form, that is based on (or derived from) the Work and for which the
-   editorial revisions, annotations, elaborations, or other modifications
-   represent, as a whole, an original work of authorship. For the purposes
-   of this License, Derivative Works shall not include works that remain
-   separable from, or merely link (or bind by name) to the interfaces of,
-   the Work and Derivative Works thereof.
-
-   "Contribution" shall mean any work of authorship, including
-   the original version of the Work and any modifications or additions
-   to that Work or Derivative Works thereof, that is intentionally
-   submitted to Licensor for inclusion in the Work by the copyright owner
-   or by an individual or Legal Entity authorized to submit on behalf of
-   the copyright owner. For the purposes of this definition, "submitted"
-   means any form of electronic, verbal, or written communication sent
-   to the Licensor or its representatives, including but not limited to
-   communication on electronic mailing lists, source code control systems,
-   and issue tracking systems that are managed by, or on behalf of, the
-   Licensor for the purpose of discussing and improving the Work, but
-   excluding communication that is conspicuously marked or otherwise
-   designated in writing by the copyright owner as "Not a Contribution."
-
-   "Contributor" shall mean Licensor and any individual or Legal Entity
-   on behalf of whom a Contribution has been received by Licensor and
-   subsequently incorporated within the Work.
-
-2. Grant of Copyright License. Subject to the terms and conditions of
-   this License, each Contributor hereby grants to You a perpetual,
-   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-   copyright license to reproduce, prepare Derivative Works of,
-   publicly display, publicly perform, sublicense, and distribute the
-   Work and such Derivative Works in Source or Object form.
-
-3. Grant of Patent License. Subject to the terms and conditions of
-   this License, each Contributor hereby grants to You a perpetual,
-   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-   (except as stated in this section) patent license to make, have made,
-   use, offer to sell, sell, import, and otherwise transfer the Work,
-   where such license applies only to those patent claims licensable
-   by such Contributor that are necessarily infringed by their
-   Contribution(s) alone or by combination of their Contribution(s)
-   with the Work to which such Contribution(s) was submitted. If You
-   institute patent litigation against any entity (including a
-   cross-claim or counterclaim in a lawsuit) alleging that the Work
-   or a Contribution incorporated within the Work constitutes direct
-   or contributory patent infringement, then any patent licenses
-   granted to You under this License for that Work shall terminate
-   as of the date such litigation is filed.
-
-4. Redistribution. You may reproduce and distribute copies of the
-   Work or Derivative Works thereof in any medium, with or without
-   modifications, and in Source or Object form, provided that You
-   meet the following conditions:
-
-   (a) You must give any other recipients of the Work or
-       Derivative Works a copy of this License; and
-
-   (b) You must cause any modified files to carry prominent notices
-       stating that You changed the files; and
-
-   (c) You must retain, in the Source form of any Derivative Works
-       that You distribute, all copyright, patent, trademark, and
-       attribution notices from the Source form of the Work,
-       excluding those notices that do not pertain to any part of
-       the Derivative Works; and
-
-   (d) If the Work includes a "NOTICE" text file as part of its
-       distribution, then any Derivative Works that You distribute must
-       include a readable copy of the attribution notices contained
-       within such NOTICE file, excluding those notices that do not
-       pertain to any part of the Derivative Works, in at least one
-       of the following places: within a NOTICE text file distributed
-       as part of the Derivative Works; within the Source form or
-       documentation, if provided along with the Derivative Works; or,
-       within a display generated by the Derivative Works, if and
-       wherever such third-party notices normally appear. The contents
-       of the NOTICE file are for informational purposes only and
-       do not modify the License. You may add Your own attribution
-       notices within Derivative Works that You distribute, alongside
-       or as an addendum to the NOTICE text from the Work, provided
-       that such additional attribution notices cannot be construed
-       as modifying the License.
-
-   You may add Your own copyright statement to Your modifications and
-   may provide additional or different license terms and conditions
-   for use, reproduction, or distribution of Your modifications, or
-   for any such Derivative Works as a whole, provided Your use,
-   reproduction, and distribution of the Work otherwise complies with
-   the conditions stated in this License.
-
-5. Submission of Contributions. Unless You explicitly state otherwise,
-   any Contribution intentionally submitted for inclusion in the Work
-   by You to the Licensor shall be under the terms and conditions of
-   this License, without any additional terms or conditions.
-   Notwithstanding the above, nothing herein shall supersede or modify
-   the terms of any separate license agreement you may have executed
-   with Licensor regarding such Contributions.
-
-6. Trademarks. This License does not grant permission to use the trade
-   names, trademarks, service marks, or product names of the Licensor,
-   except as required for reasonable and customary use in describing the
-   origin of the Work and reproducing the content of the NOTICE file.
-
-7. Disclaimer of Warranty. Unless required by applicable law or
-   agreed to in writing, Licensor provides the Work (and each
-   Contributor provides its Contributions) on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-   implied, including, without limitation, any warranties or conditions
-   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-   PARTICULAR PURPOSE. You are solely responsible for determining the
-   appropriateness of using or redistributing the Work and assume any
-   risks associated with Your exercise of permissions under this License.
-
-8. Limitation of Liability. In no event and under no legal theory,
-   whether in tort (including negligence), contract, or otherwise,
-   unless required by applicable law (such as deliberate and grossly
-   negligent acts) or agreed to in writing, shall any Contributor be
-   liable to You for damages, including any direct, indirect, special,
-   incidental, or consequential damages of any character arising as a
-   result of this License or out of the use or inability to use the
-   Work (including but not limited to damages for loss of goodwill,
-   work stoppage, computer failure or malfunction, or any and all
-   other commercial damages or losses), even if such Contributor
-   has been advised of the possibility of such damages.
-
-9. Accepting Warranty or Additional Liability. While redistributing
-   the Work or Derivative Works thereof, You may choose to offer,
-   and charge a fee for, acceptance of support, warranty, indemnity,
-   or other liability obligations and/or rights consistent with this
-   License. However, in accepting such obligations, You may act only
-   on Your own behalf and on Your sole responsibility, not on behalf
-   of any other Contributor, and only if You agree to indemnify,
-   defend, and hold each Contributor harmless for any liability
-   incurred by, or claims asserted against, such Contributor by reason
-   of your accepting any such warranty or additional liability.
-
-END OF TERMS AND CONDITIONS
-
-APPENDIX: How to apply the Apache License to your work.
-
-   To apply the Apache License to your work, attach the following
-   boilerplate notice, with the fields enclosed by brackets "[]"
-   replaced with your own identifying information. (Don't include
-   the brackets!)  The text should be enclosed in the appropriate
-   comment syntax for the file format. We also recommend that a
-   file or class name and description of purpose be included on the
-   same "printed page" as the copyright notice for easier
-   identification within third-party archives.
-
-Copyright [yyyy] [name of copyright owner]
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
diff --git a/trunk/clients/ruby/README.md b/trunk/clients/ruby/README.md
deleted file mode 100644
index 00b53ce..0000000
--- a/trunk/clients/ruby/README.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# kafka-rb
-kafka-rb allows you to produce and consume messages using the Kafka distributed publish/subscribe messaging service.
-
-## Requirements
-You need to have access to your Kafka instance and be able to connect through TCP. You can obtain a copy and instructions on how to set up Kafka at https://github.com/kafka-dev/kafka
-
-## Installation
-sudo gem install kafka-rb
-
-(the code works fine with JRuby, Ruby 1.8.x and Ruby 1.9.x)
-
-## Usage
-
-### Sending a simple message
-
-    require 'kafka'
-    producer = Kafka::Producer.new
-    message = Kafka::Message.new("some random message content")
-    producer.send(message)
-
-### Sending a sequence of messages
-
-    require 'kafka'
-    producer = Kafka::Producer.new
-    message1 = Kafka::Message.new("some random message content")
-    message2 = Kafka::Message.new("some more content")
-    producer.send([message1, message2])
-
-### Batching a bunch of messages using the block syntax
-
-    require 'kafka'
-    producer = Kafka::Producer.new
-    producer.batch do |messages|
-        puts "Batching a send of multiple messages.."
-        messages << Kafka::Message.new("first message to send")
-        messages << Kafka::Message.new("second message to send")
-    end
-
-* they will be sent all at once, after the block execution
-
-### Consuming messages one by one
-
-    require 'kafka'
-    consumer = Kafka::Consumer.new
-    messages = consumer.consume
-
-
-### Consuming messages using a block loop
-
-    require 'kafka'
-    consumer = Kafka::Consumer.new
-    consumer.loop do |messages|
-        puts "Received"
-        puts messages
-    end
-
-
-Contact for questions
-
-alejandrocrosa at(@) gmail.com
-
-http://twitter.com/alejandrocrosa
diff --git a/trunk/clients/ruby/Rakefile b/trunk/clients/ruby/Rakefile
deleted file mode 100644
index 00aa14c..0000000
--- a/trunk/clients/ruby/Rakefile
+++ /dev/null
@@ -1,76 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'rubygems'
-require 'rake/gempackagetask'
-require 'rubygems/specification'
-require 'date'
-require 'rspec/core/rake_task'
-
-GEM = 'kafka-rb'
-GEM_NAME = 'Kafka Client'
-GEM_VERSION = '0.0.5'
-AUTHORS = ['Alejandro Crosa']
-EMAIL = "alejandrocrosa@gmail.com"
-HOMEPAGE = "http://github.com/acrosa/kafka-rb"
-SUMMARY = "A Ruby client for the Kafka distributed publish/subscribe messaging service"
-DESCRIPTION = "kafka-rb allows you to produce and consume messages using the Kafka distributed publish/subscribe messaging service."
-
-spec = Gem::Specification.new do |s|
-  s.name = GEM
-  s.version = GEM_VERSION
-  s.platform = Gem::Platform::RUBY
-  s.has_rdoc = true
-  s.extra_rdoc_files = ["LICENSE"]
-  s.summary = SUMMARY
-  s.description = DESCRIPTION
-  s.authors = AUTHORS
-  s.email = EMAIL
-  s.homepage = HOMEPAGE
-  s.add_development_dependency "rspec"
-  s.require_path = 'lib'
-  s.autorequire = GEM
-  s.files = %w(LICENSE README.md Rakefile) + Dir.glob("{lib,tasks,spec}/**/*")
-end
-
-task :default => :spec
-
-desc "Run specs"
-RSpec::Core::RakeTask.new do |t|
-  t.pattern = FileList['spec/**/*_spec.rb']
-  t.rspec_opts = %w(-fs --color)
-end
-
-Rake::GemPackageTask.new(spec) do |pkg|
-  pkg.gem_spec = spec
-end
-
-desc "install the gem locally"
-task :install => [:package] do
-  sh %{sudo gem install pkg/#{GEM}-#{GEM_VERSION}}
-end
-
-desc "create a gemspec file"
-task :make_spec do
-  File.open("#{GEM}.gemspec", "w") do |file|
-    file.puts spec.to_ruby
-  end
-end
-
-desc "Run all examples with RCov"
-RSpec::Core::RakeTask.new(:rcov) do |t|
-  t.pattern = FileList['spec/**/*_spec.rb']
-  t.rcov = true
-end
diff --git a/trunk/clients/ruby/TODO b/trunk/clients/ruby/TODO
deleted file mode 100644
index a1d1116..0000000
--- a/trunk/clients/ruby/TODO
+++ /dev/null
@@ -1 +0,0 @@
-* should persist the offset somewhere (currently thinking alternatives)
diff --git a/trunk/clients/ruby/kafka-rb.gemspec b/trunk/clients/ruby/kafka-rb.gemspec
deleted file mode 100644
index 52ff6af..0000000
--- a/trunk/clients/ruby/kafka-rb.gemspec
+++ /dev/null
@@ -1,46 +0,0 @@
-# -*- encoding: utf-8 -*-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-Gem::Specification.new do |s|
-  s.name = %q{kafka-rb}
-  s.version = "0.0.5"
-
-  s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
-  s.authors = ["Alejandro Crosa"]
-  s.autorequire = %q{kafka-rb}
-  s.date = %q{2011-01-13}
-  s.description = %q{kafka-rb allows you to produce and consume messages using the Kafka distributed publish/subscribe messaging service.}
-  s.email = %q{alejandrocrosa@gmail.com}
-  s.extra_rdoc_files = ["LICENSE"]
-  s.files = ["LICENSE", "README.md", "Rakefile", "lib/kafka", "lib/kafka/batch.rb", "lib/kafka/consumer.rb", "lib/kafka/io.rb", "lib/kafka/message.rb", "lib/kafka/producer.rb", "lib/kafka/request_type.rb", "lib/kafka/error_codes.rb", "lib/kafka.rb", "spec/batch_spec.rb", "spec/consumer_spec.rb", "spec/io_spec.rb", "spec/kafka_spec.rb", "spec/message_spec.rb", "spec/producer_spec.rb", "spec/spec_helper.rb"]
-  s.homepage = %q{http://github.com/acrosa/kafka-rb}
-  s.require_paths = ["lib"]
-  s.rubygems_version = %q{1.3.7}
-  s.summary = %q{A Ruby client for the Kafka distributed publish/subscribe messaging service}
-
-  if s.respond_to? :specification_version then
-    current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
-    s.specification_version = 3
-
-    if Gem::Version.new(Gem::VERSION) >= Gem::Version.new('1.2.0') then
-      s.add_development_dependency(%q<rspec>, [">= 0"])
-    else
-      s.add_dependency(%q<rspec>, [">= 0"])
-    end
-  else
-    s.add_dependency(%q<rspec>, [">= 0"])
-  end
-end
diff --git a/trunk/clients/ruby/lib/kafka.rb b/trunk/clients/ruby/lib/kafka.rb
deleted file mode 100644
index 0e0080b..0000000
--- a/trunk/clients/ruby/lib/kafka.rb
+++ /dev/null
@@ -1,27 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require 'socket'
-require 'zlib'
-
-require File.join(File.dirname(__FILE__), "kafka", "io")
-require File.join(File.dirname(__FILE__), "kafka", "request_type")
-require File.join(File.dirname(__FILE__), "kafka", "error_codes")
-require File.join(File.dirname(__FILE__), "kafka", "batch")
-require File.join(File.dirname(__FILE__), "kafka", "message")
-require File.join(File.dirname(__FILE__), "kafka", "producer")
-require File.join(File.dirname(__FILE__), "kafka", "consumer")
-
-module Kafka
-end
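
kafka.rb is the entry point that loads the IO, request-type, error-code, batch, message, producer and consumer pieces. A minimal end-to-end sketch, assuming the gem is on the load path and a Kafka 0.7 broker is listening on localhost:9092 with a topic named "test" (the defaults used by the producer and consumer classes below):

    # Minimal sketch: produce one message and read it back. Assumes a local broker.
    require 'kafka'

    producer = Kafka::Producer.new(:host => "localhost", :port => 9092, :topic => "test")
    producer.send([Kafka::Message.new("hello from kafka-rb")])

    consumer = Kafka::Consumer.new(:host => "localhost", :port => 9092, :topic => "test")
    consumer.consume.each { |message| puts message.payload }
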
diff --git a/trunk/clients/ruby/lib/kafka/batch.rb b/trunk/clients/ruby/lib/kafka/batch.rb
deleted file mode 100644
index 156f29a..0000000
--- a/trunk/clients/ruby/lib/kafka/batch.rb
+++ /dev/null
@@ -1,27 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-module Kafka
-  class Batch
-    attr_accessor :messages
-
-    def initialize
-      self.messages = []
-    end
-
-    def << (message)
-      self.messages << message
-    end
-  end
-end
\ No newline at end of file
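
Batch is a thin wrapper around an array of messages; Producer#batch (further down in this diff) drains it in a single produce request. A short sketch of direct use:

    batch = Kafka::Batch.new
    batch << Kafka::Message.new("one")
    batch << Kafka::Message.new("two")
    batch.messages.length  # => 2
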
diff --git a/trunk/clients/ruby/lib/kafka/consumer.rb b/trunk/clients/ruby/lib/kafka/consumer.rb
deleted file mode 100644
index 7763e4f..0000000
--- a/trunk/clients/ruby/lib/kafka/consumer.rb
+++ /dev/null
@@ -1,149 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-module Kafka
-  class Consumer
-
-    include Kafka::IO
-
-    CONSUME_REQUEST_TYPE = Kafka::RequestType::FETCH
-    MAX_SIZE = 1048576 # 1 MB
-    DEFAULT_POLLING_INTERVAL = 2 # 2 seconds
-    MAX_OFFSETS = 100
-
-    attr_accessor :topic, :partition, :offset, :max_size, :request_type, :polling
-
-    def initialize(options = {})
-      self.topic        = options[:topic]        || "test"
-      self.partition    = options[:partition]    || 0
-      self.host         = options[:host]         || "localhost"
-      self.port         = options[:port]         || 9092
-      self.offset       = options[:offset]       || -2
-      self.max_size     = options[:max_size]     || MAX_SIZE
-      self.request_type = options[:request_type] || CONSUME_REQUEST_TYPE
-      self.polling      = options[:polling]      || DEFAULT_POLLING_INTERVAL
-      self.connect(self.host, self.port)
-
-      if @offset < 0
-         send_offsets_request
-         offsets = read_offsets_response
-         raise Exception, "No offsets for #@topic-#@partition" if offsets.empty?
-         @offset = offsets[0]
-      end
-    end
-
-    # REQUEST TYPE ID + TOPIC LENGTH + TOPIC + PARTITION + OFFSET + MAX SIZE
-    def request_size
-      2 + 2 + topic.length + 4 + 8 + 4
-    end
-
-    def encode_request_size
-      [self.request_size].pack("N")
-    end
-
-    def encode_request(request_type, topic, partition, offset, max_size)
-      request_type = [request_type].pack("n")
-      topic        = [topic.length].pack('n') + topic
-      partition    = [partition].pack("N")
-      offset       = [offset].pack("Q").reverse # DIY 64bit big endian integer
-      max_size     = [max_size].pack("N")
-
-      request_type + topic + partition + offset + max_size
-    end
-
-    def offsets_request_size
-       2 + 2 + topic.length + 4 + 8 + 4
-    end
-
-    def encode_offsets_request_size
-       [offsets_request_size].pack('N')
-    end
-
-    # Query the server for the offsets
-    def encode_offsets_request(topic, partition, time, max_offsets)
-       req         = [Kafka::RequestType::OFFSETS].pack('n')
-       topic       = [topic.length].pack('n') + topic
-       partition   = [partition].pack('N')
-       time        = [time].pack("q").reverse # DIY 64bit big endian integer
-       max_offsets = [max_offsets].pack('N')
-
-       req + topic + partition + time + max_offsets
-    end
-
-    def consume
-      self.send_consume_request         # request data
-      data = self.read_data_response    # read data response
-      self.parse_message_set_from(data) # parse message set
-    end
-
-    def loop(&block)
-      messages = []
-      while(true) do
-        messages = self.consume
-        block.call(messages) if messages && !messages.empty?
-        sleep(self.polling)
-      end
-    end
-
-    def read_data_response
-      data_length = self.socket.read(4).unpack("N").shift # read length
-      data = self.socket.read(data_length)                # read message set
-      data[2, data.length]                                # we start with a 2 byte offset
-      data[2, data.length]                                # skip the leading 2 bytes of the response before the message set
-    end
-
-    def send_consume_request
-      self.write(self.encode_request_size) # write request_size
-      self.write(self.encode_request(self.request_type, self.topic, self.partition, self.offset, self.max_size)) # write request
-    end
-
-    def send_offsets_request
-      self.write(self.encode_offsets_request_size) # write request_size
-      self.write(self.encode_offsets_request(@topic, @partition, -2, MAX_OFFSETS)) # write request
-    end
-
-    def read_offsets_response
-      data_length = self.socket.read(4).unpack('N').shift # read length
-      data = self.socket.read(data_length)                # read message
-
-      pos = 0
-      error_code = data[pos,2].unpack('n')[0]
-      raise Exception, Kafka::ErrorCodes::to_s(error_code) if error_code != Kafka::ErrorCodes::NO_ERROR
-
-      pos += 2
-      count = data[pos,4].unpack('N')[0]
-      pos += 4
-
-      res = []
-      while pos != data.size
-         res << data[pos,8].reverse.unpack('q')[0]
-         pos += 8
-      end
-
-      res
-    end
-
-    def parse_message_set_from(data)
-      messages = []
-      processed = 0
-      length = data.length - 4
-      while(processed <= length) do
-        message_size = data[processed, 4].unpack("N").shift
-        messages << Kafka::Message.parse_from(data[processed, message_size + 4])
-        processed += 4 + message_size
-      end
-      self.offset += processed
-      messages
-    end
-  end
-end
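
The consumer resolves its starting offset from the broker when constructed with a negative offset, then issues fetch requests and advances the offset by the bytes it has parsed. A usage sketch of the polling loop, assuming a reachable broker and an existing "test" topic; Consumer#loop blocks forever, sleeping `polling` seconds between fetches:

    # Poll every 5 seconds and print whatever arrives. Assumes a local broker.
    consumer = Kafka::Consumer.new(:topic => "test", :polling => 5)
    consumer.loop do |messages|
      messages.each { |m| puts m.payload }
    end
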
diff --git a/trunk/clients/ruby/lib/kafka/error_codes.rb b/trunk/clients/ruby/lib/kafka/error_codes.rb
deleted file mode 100644
index 231cea6..0000000
--- a/trunk/clients/ruby/lib/kafka/error_codes.rb
+++ /dev/null
@@ -1,35 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-module Kafka
-  module ErrorCodes
-    NO_ERROR                = 0
-    OFFSET_OUT_OF_RANGE     = 1
-    INVALID_MESSAGE_CODE    = 2
-    WRONG_PARTITION_CODE    = 3
-    INVALID_FETCH_SIZE_CODE = 4
-
-    STRINGS = {
-      0 => 'No error',
-      1 => 'Offset out of range',
-      2 => 'Invalid message code',
-      3 => 'Wrong partition code',
-      4 => 'Invalid fetch size code',
-    }
-
-    def self.to_s(code)
-      STRINGS[code] || 'Unknown error'
-    end
-  end
-end
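
ErrorCodes maps the numeric error codes returned by the broker to human-readable strings; unknown codes fall through to "Unknown error":

    Kafka::ErrorCodes.to_s(Kafka::ErrorCodes::OFFSET_OUT_OF_RANGE)  # => "Offset out of range"
    Kafka::ErrorCodes.to_s(99)                                      # => "Unknown error"
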
diff --git a/trunk/clients/ruby/lib/kafka/io.rb b/trunk/clients/ruby/lib/kafka/io.rb
deleted file mode 100644
index fcaf9ea..0000000
--- a/trunk/clients/ruby/lib/kafka/io.rb
+++ /dev/null
@@ -1,53 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-module Kafka
-  module IO
-    attr_accessor :socket, :host, :port
-
-    def connect(host, port)
-      raise ArgumentError, "No host or port specified" unless host && port
-      self.host = host
-      self.port = port
-      self.socket = TCPSocket.new(host, port)
-    end
-
-    def reconnect
-      self.disconnect
-      self.socket = self.connect(self.host, self.port)
-    end
-
-    def disconnect
-      self.socket.close rescue nil
-      self.socket = nil
-    end
-
-    def write(data)
-      self.reconnect unless self.socket
-      self.socket.write(data)
-    rescue Errno::ECONNRESET, Errno::EPIPE, Errno::ECONNABORTED
-      self.reconnect
-      self.socket.write(data) # retry
-    end
-
-    def read(length)
-      begin
-        self.socket.read(length)
-      rescue Errno::EAGAIN
-        self.disconnect
-        raise Errno::EAGAIN, "Timeout reading from the socket"
-      end
-    end
-  end
-end
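
Kafka::IO is the socket mixin shared by Producer and Consumer: connect stores host and port and opens a TCPSocket, write reconnects and retries once on connection-reset style errors, and read disconnects on a timeout so the protocol stream does not desynchronize. A sketch of mixing it into a class, assuming something is listening on the chosen port (the io_spec below exercises the same surface with a mocked socket):

    class RawClient
      include Kafka::IO
    end

    client = RawClient.new
    client.connect("localhost", 9092)  # raises ArgumentError without host and port
    client.write("raw bytes")          # reconnects and retries once on ECONNRESET/EPIPE/ECONNABORTED
    client.disconnect
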
diff --git a/trunk/clients/ruby/lib/kafka/message.rb b/trunk/clients/ruby/lib/kafka/message.rb
deleted file mode 100644
index 918d59c..0000000
--- a/trunk/clients/ruby/lib/kafka/message.rb
+++ /dev/null
@@ -1,49 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-module Kafka
-
-  # A message. The format of an N byte message is the following:
-  # 1 byte "magic" identifier to allow format changes
-  # 4 byte CRC32 of the payload
-  # N - 5 byte payload
-  class Message
-
-    MAGIC_IDENTIFIER_DEFAULT = 0
-
-    attr_accessor :magic, :checksum, :payload
-
-    def initialize(payload = nil, magic = MAGIC_IDENTIFIER_DEFAULT, checksum = nil)
-      self.magic    = magic
-      self.payload  = payload
-      self.checksum = checksum || self.calculate_checksum
-    end
-
-    def calculate_checksum
-      Zlib.crc32(self.payload)
-    end
-
-    def valid?
-      self.checksum == Zlib.crc32(self.payload)
-    end
-
-    def self.parse_from(binary)
-      size     = binary[0, 4].unpack("N").shift.to_i
-      magic    = binary[4, 1].unpack("C").shift
-      checksum = binary[5, 4].unpack("N").shift
-      payload  = binary[9, size] # payload starts at byte 9, after the 4-byte size, 1-byte magic and 4-byte checksum
-      return Kafka::Message.new(payload, magic, checksum)
-    end
-  end
-end
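
The comment above spells out the on-the-wire message layout: a 1-byte magic identifier, a 4-byte CRC32 of the payload, then the payload itself, with a 4-byte size field in front when messages are framed in a set. A round-trip sketch against Message.parse_from; the checksum value matches the one asserted in message_spec further down:

    msg   = Kafka::Message.new("ale")
    bytes = [msg.payload.length + 5].pack("N") +  # size = payload + 1-byte magic + 4-byte checksum
            [msg.magic].pack("C") +
            [msg.checksum].pack("N") +
            msg.payload

    parsed = Kafka::Message.parse_from(bytes)
    parsed.valid?    # => true
    parsed.checksum  # => 1120192889, i.e. Zlib.crc32("ale")
    parsed.payload   # => "ale"
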
diff --git a/trunk/clients/ruby/lib/kafka/producer.rb b/trunk/clients/ruby/lib/kafka/producer.rb
deleted file mode 100644
index 4a81861..0000000
--- a/trunk/clients/ruby/lib/kafka/producer.rb
+++ /dev/null
@@ -1,63 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-module Kafka
-  class Producer
-
-    include Kafka::IO
-
-    PRODUCE_REQUEST_ID = Kafka::RequestType::PRODUCE
-
-    attr_accessor :topic, :partition
-
-    def initialize(options = {})
-      self.topic     = options[:topic]      || "test"
-      self.partition = options[:partition]  || 0
-      self.host      = options[:host]       || "localhost"
-      self.port      = options[:port]       || 9092
-      self.connect(self.host, self.port)
-    end
-
-    def encode(message)
-      [message.magic].pack("C") + [message.calculate_checksum].pack("N") + message.payload.to_s
-    end
-
-    def encode_request(topic, partition, messages)
-      message_set = Array(messages).collect { |message|
-        encoded_message = self.encode(message)
-        [encoded_message.length].pack("N") + encoded_message
-      }.join("")
-
-      request   = [PRODUCE_REQUEST_ID].pack("n")
-      topic     = [topic.length].pack("n") + topic
-      partition = [partition].pack("N")
-      messages  = [message_set.length].pack("N") + message_set
-
-      data = request + topic + partition + messages
-
-      return [data.length].pack("N") + data
-    end
-
-    def send(messages)
-      self.write(self.encode_request(self.topic, self.partition, messages))
-    end
-
-    def batch(&block)
-      batch = Kafka::Batch.new
-      block.call( batch )
-      self.send(batch.messages)
-      batch.messages.clear
-    end
-  end
-end
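
Producer frames each message with its length, prefixes the set with the produce request id, topic and partition, and writes everything in one request. A usage sketch, assuming a broker on the default localhost:9092; send accepts a single message or an array, and batch collects messages before sending them together:

    producer = Kafka::Producer.new(:topic => "test")
    producer.send(Kafka::Message.new("single message"))

    producer.batch do |messages|
      messages << Kafka::Message.new("first")
      messages << Kafka::Message.new("second")
    end  # both messages go out in one produce request
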
diff --git a/trunk/clients/ruby/lib/kafka/request_type.rb b/trunk/clients/ruby/lib/kafka/request_type.rb
deleted file mode 100644
index 55e7f64..0000000
--- a/trunk/clients/ruby/lib/kafka/request_type.rb
+++ /dev/null
@@ -1,23 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-module Kafka
-  module RequestType
-    PRODUCE      = 0
-    FETCH        = 1
-    MULTIFETCH   = 2
-    MULTIPRODUCE = 3
-    OFFSETS      = 4
-  end
-end
\ No newline at end of file
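
RequestType holds the 2-byte request ids that prefix every request the client writes, as seen in Producer#encode_request and Consumer#encode_request:

    [Kafka::RequestType::PRODUCE].pack("n")  # => "\x00\x00", prefixes a produce request
    [Kafka::RequestType::FETCH].pack("n")    # => "\x00\x01", prefixes a fetch request
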
diff --git a/trunk/clients/ruby/spec/batch_spec.rb b/trunk/clients/ruby/spec/batch_spec.rb
deleted file mode 100644
index 24099bf..0000000
--- a/trunk/clients/ruby/spec/batch_spec.rb
+++ /dev/null
@@ -1,35 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require File.dirname(__FILE__) + '/spec_helper'
-
-describe Batch do
-
-  before(:each) do
-    @batch = Batch.new
-  end
-
-  describe "batch messages" do
-    it "holds all messages to be sent" do
-      @batch.should respond_to(:messages)
-      @batch.messages.class.should eql(Array)
-    end
-
-    it "supports queueing/adding messages to be send" do
-      @batch.messages << mock(Kafka::Message.new("one"))
-      @batch.messages << mock(Kafka::Message.new("two"))
-      @batch.messages.length.should eql(2)
-    end
-  end
-end
\ No newline at end of file
diff --git a/trunk/clients/ruby/spec/consumer_spec.rb b/trunk/clients/ruby/spec/consumer_spec.rb
deleted file mode 100644
index 3b5b77b..0000000
--- a/trunk/clients/ruby/spec/consumer_spec.rb
+++ /dev/null
@@ -1,134 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require File.dirname(__FILE__) + '/spec_helper'
-
-describe Consumer do
-
-  before(:each) do
-    @mocked_socket = mock(TCPSocket)
-    TCPSocket.stub!(:new).and_return(@mocked_socket) # don't use a real socket
-    @consumer = Consumer.new
-  end
-
-  describe "Kafka Consumer" do
-
-    it "should have a CONSUME_REQUEST_TYPE" do
-      Consumer::CONSUME_REQUEST_TYPE.should eql(1)
-      @consumer.should respond_to(:request_type)
-    end
-
-    it "should have a topic and a partition" do
-      @consumer.should respond_to(:topic)
-      @consumer.should respond_to(:partition)
-    end
-
-    it "should have a polling option, and a default value" do
-      Consumer::DEFAULT_POLLING_INTERVAL.should eql(2)
-      @consumer.should respond_to(:polling)
-      @consumer.polling.should eql(2)
-    end
-
-    it "should set a topic and partition on initialize" do
-      @consumer = Consumer.new({ :host => "localhost", :port => 9092, :topic => "testing" })
-      @consumer.topic.should eql("testing")
-      @consumer.partition.should eql(0)
-      @consumer = Consumer.new({ :topic => "testing", :partition => 3 })
-      @consumer.partition.should eql(3)
-    end
-
-    it "should set default host and port if none is specified" do
-      @consumer = Consumer.new
-      @consumer.host.should eql("localhost")
-      @consumer.port.should eql(9092)
-    end
-
-    it "should have a default offset, and be able to set it" do
-      @consumer.offset.should eql(0)
-      @consumer = Consumer.new({ :offset => 1111 })
-      @consumer.offset.should eql(1111)
-    end
-
-    it "should have a max size" do
-      Consumer::MAX_SIZE.should eql(1048576)
-      @consumer.max_size.should eql(1048576)
-    end
-
-    it "should return the size of the request" do
-      @consumer.request_size.should eql(24)
-      @consumer.topic = "someothertopicname"
-      @consumer.request_size.should eql(38)
-      @consumer.encode_request_size.should eql([@consumer.request_size].pack("N"))
-    end
-
-    it "should encode a request to consume" do
-      bytes = [Kafka::Consumer::CONSUME_REQUEST_TYPE].pack("n") + ["test".length].pack("n") + "test" + [0].pack("N") + [0].pack("L_") + [Kafka::Consumer::MAX_SIZE].pack("N")
-      @consumer.encode_request(Kafka::Consumer::CONSUME_REQUEST_TYPE, "test", 0, 0, Kafka::Consumer::MAX_SIZE).should eql(bytes)
-    end
-
-    it "should read the response data" do
-      bytes = [12].pack("N") + [0].pack("C") + [1120192889].pack("N") + "ale"
-      @mocked_socket.should_receive(:read).exactly(:twice).and_return(bytes)
-      @consumer.read_data_response.should eql(bytes[2, bytes.length])
-    end
-
-    it "should send a consumer request" do
-      @consumer.stub!(:encode_request_size).and_return(666)
-      @consumer.stub!(:encode_request).and_return("someencodedrequest")
-      @consumer.should_receive(:write).with("someencodedrequest").exactly(:once).and_return(true)
-      @consumer.should_receive(:write).with(666).exactly(:once).and_return(true)
-      @consumer.send_consume_request.should eql(true)
-    end
-
-    it "should parse a message set from bytes" do
-      bytes = [12].pack("N") + [0].pack("C") + [1120192889].pack("N") + "ale"
-      message = @consumer.parse_message_set_from(bytes).first
-      message.payload.should eql("ale")
-      message.checksum.should eql(1120192889)
-      message.magic.should eql(0)
-      message.valid?.should eql(true)
-    end
-
-    it "should consume messages" do
-      @consumer.should_receive(:send_consume_request).and_return(true)
-      @consumer.should_receive(:read_data_response).and_return("")
-      @consumer.consume.should eql([])
-    end
-
-    it "should loop and execute a block with the consumed messages" do
-      @consumer.stub!(:consume).and_return([mock(Kafka::Message)])
-      messages = []
-      messages.should_receive(:<<).exactly(:once).and_return([])
-      @consumer.loop do |message|
-        messages << message
-        break # we don't wanna loop forever on the test
-      end
-    end
-
-    it "should loop (every N seconds, configurable on polling attribute), and execute a block with the consumed messages" do
-      @consumer = Consumer.new({ :polling => 1 })
-      @consumer.stub!(:consume).and_return([mock(Kafka::Message)])
-      messages = []
-      messages.should_receive(:<<).exactly(:twice).and_return([])
-      executed_times = 0
-      @consumer.loop do |message|
-        messages << message
-        executed_times += 1
-        break if executed_times >= 2 # we don't wanna loop forever on the test, only 2 seconds
-      end
-
-      executed_times.should eql(2)
-    end
-  end
-end
diff --git a/trunk/clients/ruby/spec/io_spec.rb b/trunk/clients/ruby/spec/io_spec.rb
deleted file mode 100644
index 082fe08..0000000
--- a/trunk/clients/ruby/spec/io_spec.rb
+++ /dev/null
@@ -1,91 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require File.dirname(__FILE__) + '/spec_helper'
-
-class IOTest
-  include Kafka::IO
-end
-
-describe IO do
-
-  before(:each) do
-    @mocked_socket = mock(TCPSocket)
-    TCPSocket.stub!(:new).and_return(@mocked_socket) # don't use a real socket
-    @io = IOTest.new
-    @io.connect("somehost", 9093)
-  end
-
-  describe "default methods" do
-    it "has a socket, a host and a port" do
-      [:socket, :host, :port].each do |m|
-        @io.should respond_to(m.to_sym)
-      end
-    end
-
-    it "raises an exception if no host and port is specified" do
-      lambda {
-        io = IOTest.new
-        io.connect
-      }.should raise_error(ArgumentError)
-    end
-    
-    it "should remember the port and host on connect" do
-      @io.connect("somehost", 9093)
-      @io.host.should eql("somehost")
-      @io.port.should eql(9093)
-    end
-
-    it "should write to a socket" do
-      data = "some data"
-      @mocked_socket.should_receive(:write).with(data).and_return(9)
-      @io.write(data).should eql(9)
-    end
-
-    it "should read from a socket" do
-      length = 200
-      @mocked_socket.should_receive(:read).with(length).and_return(nil)
-      @io.read(length)
-    end
-
-    it "should disconnect on a timeout when reading from a socket (to aviod protocol desync state)" do
-      length = 200
-      @mocked_socket.should_receive(:read).with(length).and_raise(Errno::EAGAIN)
-      @io.should_receive(:disconnect)
-      lambda { @io.read(length) }.should raise_error(Errno::EAGAIN)
-    end
-
-    it "should disconnect" do
-      @io.should respond_to(:disconnect)
-      @mocked_socket.should_receive(:close).and_return(nil)
-      @io.disconnect
-    end
-
-    it "should reconnect" do
-      @mocked_socket.should_receive(:close)
-      @io.should_receive(:connect)
-      @io.reconnect
-    end
-
-    it "should reconnect on a broken pipe error" do
-      [Errno::ECONNABORTED, Errno::EPIPE, Errno::ECONNRESET].each do |error|
-        @mocked_socket.should_receive(:write).exactly(:twice).and_raise(error)
-        @mocked_socket.should_receive(:close).exactly(:once).and_return(nil)
-        lambda {
-          @io.write("some data to send")
-        }.should raise_error(error)
-      end
-    end
-  end
-end
diff --git a/trunk/clients/ruby/spec/kafka_spec.rb b/trunk/clients/ruby/spec/kafka_spec.rb
deleted file mode 100644
index a5ebba2..0000000
--- a/trunk/clients/ruby/spec/kafka_spec.rb
+++ /dev/null
@@ -1,21 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require File.dirname(__FILE__) + '/spec_helper'
-
-describe Kafka do
-
-  before(:each) do
-  end
-end
\ No newline at end of file
diff --git a/trunk/clients/ruby/spec/message_spec.rb b/trunk/clients/ruby/spec/message_spec.rb
deleted file mode 100644
index 9317d11..0000000
--- a/trunk/clients/ruby/spec/message_spec.rb
+++ /dev/null
@@ -1,69 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require File.dirname(__FILE__) + '/spec_helper'
-
-describe Message do
-
-  before(:each) do
-    @message = Message.new
-  end
-
-  describe "Kafka Message" do
-    it "should have a default magic number" do
-      Message::MAGIC_IDENTIFIER_DEFAULT.should eql(0)
-    end
-
-    it "should have a magic field, a checksum and a payload" do
-      [:magic, :checksum, :payload].each do |field|
-        @message.should respond_to(field.to_sym)
-      end
-    end
-
-    it "should set a default value of zero" do
-      @message.magic.should eql(Kafka::Message::MAGIC_IDENTIFIER_DEFAULT)
-    end
-
-    it "should allow to set a custom magic number" do
-      @message = Message.new("ale", 1)
-      @message.magic.should eql(1)
-    end
-
-    it "should calculate the checksum (crc32 of a given message)" do
-      @message.payload = "ale"
-      @message.calculate_checksum.should eql(1120192889)
-      @message.payload = "alejandro"
-      @message.calculate_checksum.should eql(2865078607)
-    end
-
-    it "should say if the message is valid using the crc32 signature" do
-      @message.payload  = "alejandro"
-      @message.checksum = 2865078607
-      @message.valid?.should eql(true)
-      @message.checksum = 0
-      @message.valid?.should eql(false)
-      @message = Message.new("alejandro", 0, 66666666) # 66666666 is a funny checksum
-      @message.valid?.should eql(false)
-    end
-
-    it "should parse a message from bytes" do
-      bytes = [12].pack("N") + [0].pack("C") + [1120192889].pack("N") + "ale"
-      message = Kafka::Message.parse_from(bytes)
-      message.valid?.should eql(true)
-      message.magic.should eql(0)
-      message.checksum.should eql(1120192889)
-      message.payload.should eql("ale")
-    end
-  end
-end
diff --git a/trunk/clients/ruby/spec/producer_spec.rb b/trunk/clients/ruby/spec/producer_spec.rb
deleted file mode 100644
index 947c792..0000000
--- a/trunk/clients/ruby/spec/producer_spec.rb
+++ /dev/null
@@ -1,109 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require File.dirname(__FILE__) + '/spec_helper'
-
-describe Producer do
-
-  before(:each) do
-    @mocked_socket = mock(TCPSocket)
-    TCPSocket.stub!(:new).and_return(@mocked_socket) # don't use a real socket
-    @producer = Producer.new
-  end
-
-  describe "Kafka Producer" do
-    it "should have a PRODUCE_REQUEST_ID" do
-      Producer::PRODUCE_REQUEST_ID.should eql(0)
-    end
-
-    it "should have a topic and a partition" do
-      @producer.should respond_to(:topic)
-      @producer.should respond_to(:partition)
-    end
-
-    it "should set a topic and partition on initialize" do
-      @producer = Producer.new({ :host => "localhost", :port => 9092, :topic => "testing" })
-      @producer.topic.should eql("testing")
-      @producer.partition.should eql(0)
-      @producer = Producer.new({ :topic => "testing", :partition => 3 })
-      @producer.partition.should eql(3)
-    end
-
-    it "should set default host and port if none is specified" do
-      @producer = Producer.new
-      @producer.host.should eql("localhost")
-      @producer.port.should eql(9092)
-    end
-
-    describe "Message Encoding" do
-      it "should encode a message" do
-        message = Kafka::Message.new("alejandro")
-        full_message = [message.magic].pack("C") + [message.calculate_checksum].pack("N") + message.payload
-        @producer.encode(message).should eql(full_message)
-      end
-      
-      it "should encode an empty message" do
-        message = Kafka::Message.new()
-        full_message = [message.magic].pack("C") + [message.calculate_checksum].pack("N") + message.payload.to_s
-        @producer.encode(message).should eql(full_message)
-      end
-    end
-
-    describe "Request Encoding" do
-      it "should binary encode an empty request" do
-        bytes = @producer.encode_request("test", 0, [])
-        bytes.length.should eql(20)
-        bytes.should eql("\000\000\000\020\000\000\000\004test\000\000\000\000\000\000\000\000")
-      end
-
-      it "should binary encode a request with a message, using a specific wire format" do
-        message = Kafka::Message.new("ale")
-        bytes = @producer.encode_request("test", 3, message)
-        data_size  = bytes[0, 4].unpack("N").shift
-        request_id = bytes[4, 2].unpack("n").shift
-        topic_length = bytes[6, 2].unpack("n").shift
-        topic = bytes[8, 4]
-        partition = bytes[12, 4].unpack("N").shift
-        messages_length = bytes[16, 4].unpack("N").shift
-        messages = bytes[20, messages_length]
-
-        bytes.length.should eql(32)
-        data_size.should eql(28)
-        request_id.should eql(0)
-        topic_length.should eql(4)
-        topic.should eql("test")
-        partition.should eql(3)
-        messages_length.should eql(12)
-      end
-    end
-  end
-
-  it "should send messages" do
-    @producer.should_receive(:write).and_return(32)
-    message = Kafka::Message.new("ale")
-    @producer.send(message).should eql(32)
-  end
-
-  describe "Message Batching" do
-    it "should batch messages and send them at once" do
-      message1 = Kafka::Message.new("one")
-      message2 = Kafka::Message.new("two")
-      @producer.should_receive(:send).with([message1, message2]).exactly(:once).and_return(nil)
-      @producer.batch do |messages|
-        messages << message1
-        messages << message2
-      end
-    end
-  end
-end
\ No newline at end of file
diff --git a/trunk/clients/ruby/spec/spec_helper.rb b/trunk/clients/ruby/spec/spec_helper.rb
deleted file mode 100644
index 5fd87d1..0000000
--- a/trunk/clients/ruby/spec/spec_helper.rb
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require 'rubygems'
-require 'kafka'
-
-include Kafka
\ No newline at end of file
diff --git a/trunk/config/consumer.properties b/trunk/config/consumer.properties
deleted file mode 100644
index a067ac0..0000000
--- a/trunk/config/consumer.properties
+++ /dev/null
@@ -1,29 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.consumer.ConsumerConfig for more details
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=127.0.0.1:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-#consumer group id
-groupid=test-consumer-group
-
-#consumer timeout
-#consumer.timeout.ms=5000
diff --git a/trunk/config/log4j.properties b/trunk/config/log4j.properties
deleted file mode 100644
index afe14af..0000000
--- a/trunk/config/log4j.properties
+++ /dev/null
@@ -1,30 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-log4j.rootLogger=INFO, stdout
-
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-#log4j.appender.fileAppender=org.apache.log4j.FileAppender
-#log4j.appender.fileAppender.File=kafka-request.log
-#log4j.appender.fileAppender.layout=org.apache.log4j.PatternLayout
-#log4j.appender.fileAppender.layout.ConversionPattern= %-4r [%t] %-5p %c %x - %m%n
-
-
-# Turn on all our debugging info
-#log4j.logger.kafka=INFO
-#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG
-
diff --git a/trunk/config/producer.properties b/trunk/config/producer.properties
deleted file mode 100644
index e94d78b..0000000
--- a/trunk/config/producer.properties
+++ /dev/null
@@ -1,80 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.producer.ProducerConfig for more details
-
-############################# Producer Basics #############################
-
-# need to set either broker.list or zk.connect
-
-# configure brokers statically
-# format: brokerid1:host1:port1,brokerid2:host2:port2 ...
-broker.list=0:localhost:9092
-
-# discover brokers from ZK
-#zk.connect=
-
-# zookeeper session timeout; default is 6000
-#zk.sessiontimeout.ms=
-
-# the max time that the client waits to establish a connection to zookeeper; default is 6000
-#zk.connectiontimeout.ms
-
-# name of the partitioner class for partitioning events; default partition spreads data randomly
-#partitioner.class=
-
-# specifies whether the messages are sent asynchronously (async) or synchronously (sync)
-producer.type=sync
-
-# specify the compression codec for all data generated: 0: no compression, 1: gzip
-compression.codec=0
-
-# message encoder
-serializer.class=kafka.serializer.StringEncoder
-
-# allow topic level compression
-#compressed.topics=
-
-# max message size; messages larger than that size are discarded; default is 1000000
-#max.message.size=
-
-
-############################# Async Producer #############################
-# maximum time, in milliseconds, for buffering data on the producer queue 
-#queue.time=
-
-# the maximum size of the blocking queue for buffering on the producer 
-#queue.size=
-
-# Timeout for event enqueue:
-# 0: events will be enqueued immediately or dropped if the queue is full
-# -ve: enqueue will block indefinitely if the queue is full
-# +ve: enqueue will block up to this many milliseconds if the queue is full
-#queue.enqueueTimeout.ms=
-
-# the number of messages batched at the producer 
-#batch.size=
-
-# the callback handler for one or multiple events 
-#callback.handler=
-
-# properties required to initialize the callback handler 
-#callback.handler.props=
-
-# the handler for events 
-#event.handler=
-
-# properties required to initialize the event handler 
-#event.handler.props=
-
diff --git a/trunk/config/server.properties b/trunk/config/server.properties
deleted file mode 100644
index a4f7fe7..0000000
--- a/trunk/config/server.properties
+++ /dev/null
@@ -1,116 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-############################# Server Basics #############################
-
-# The id of the broker. This must be set to a unique integer for each broker.
-brokerid=0
-
-# Hostname the broker will advertise to consumers. If not set, kafka will use the value returned
-# from InetAddress.getLocalHost().  If there are multiple interfaces getLocalHost
-# may not be what you want.
-#hostname=
-
-
-############################# Socket Server Settings #############################
-
-# The port the socket server listens on
-port=9092
-
-# The number of processor threads the socket server uses for receiving and answering requests. 
-# Defaults to the number of cores on the machine
-num.threads=8
-
-# The send buffer (SO_SNDBUF) used by the socket server
-socket.send.buffer=1048576
-
-# The receive buffer (SO_RCVBUF) used by the socket server
-socket.receive.buffer=1048576
-
-# The maximum size of a request that the socket server will accept (protection against OOM)
-max.socket.request.bytes=104857600
-
-
-############################# Log Basics #############################
-
-# The directory under which to store log files
-log.dir=/tmp/kafka-logs
-
-# The number of logical partitions per topic per server. More partitions allow greater parallelism
-# for consumption, but also mean more files.
-num.partitions=1
-
-# Overrides for the default given by num.partitions on a per-topic basis
-#topic.partition.count.map=topic1:3, topic2:4
-
-############################# Log Flush Policy #############################
-
-# The following configurations control the flush of data to disk. This is the most
-# important performance knob in kafka.
-# There are a few important trade-offs here:
-#    1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
-#    2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
-#    3. Throughput: The flush is generally the most expensive operation. 
-# The settings below allow one to configure the flush policy to flush data after a period of time or
-# every N messages (or both). This can be done globally and overridden on a per-topic basis.
-
-# The number of messages to accept before forcing a flush of data to disk
-log.flush.interval=10000
-
-# The maximum amount of time a message can sit in a log before we force a flush
-log.default.flush.interval.ms=1000
-
-# Per-topic overrides for log.default.flush.interval.ms
-#topic.flush.intervals.ms=topic1:1000, topic2:3000
-
-# The interval (in ms) at which logs are checked to see if they need to be flushed to disk.
-log.default.flush.scheduler.interval.ms=1000
-
-############################# Log Retention Policy #############################
-
-# The following configurations control the disposal of log segments. The policy can
-# be set to delete segments after a period of time, or after a given size has accumulated.
-# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
-# from the end of the log.
-
-# The minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
-# segments don't drop below log.retention.size.
-#log.retention.size=1073741824
-
-# The maximum size of a log segment file. When this size is reached a new log segment will be created.
-log.file.size=536870912
-
-# The interval at which log segments are checked to see if they can be deleted according 
-# to the retention policies
-log.cleanup.interval.mins=1
-
-############################# Zookeeper #############################
-
-# Enable connecting to zookeeper
-enable.zookeeper=true
-
-# Zk connection string (see zk docs for details).
-# This is a comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
-# You can also append an optional chroot string to the urls to specify the
-# root directory for all kafka znodes.
-zk.connect=localhost:2181
-
-# Timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
diff --git a/trunk/config/zookeeper.properties b/trunk/config/zookeeper.properties
deleted file mode 100644
index 74cbf90..0000000
--- a/trunk/config/zookeeper.properties
+++ /dev/null
@@ -1,20 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# the directory where the snapshot is stored.
-dataDir=/tmp/zookeeper
-# the port at which the clients will connect
-clientPort=2181
-# disable the per-ip limit on the number of connections since this is a non-production config
-maxClientCnxns=0
diff --git a/trunk/contrib/hadoop-consumer/LICENSE b/trunk/contrib/hadoop-consumer/LICENSE
deleted file mode 100644
index 6b0b127..0000000
--- a/trunk/contrib/hadoop-consumer/LICENSE
+++ /dev/null
@@ -1,203 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
diff --git a/trunk/contrib/hadoop-consumer/README b/trunk/contrib/hadoop-consumer/README
deleted file mode 100644
index 5395d38..0000000
--- a/trunk/contrib/hadoop-consumer/README
+++ /dev/null
@@ -1,66 +0,0 @@
-This is a Hadoop job that pulls data from a Kafka server into HDFS.
-
-It requires the following inputs from a configuration file
-(test/test.properties is an example; an illustrative sketch appears below)
-
-kafka.etl.topic : the topic to be fetched;
-
-input		: input directory containing topic offsets; it can be
-		  generated by DataGenerator. The number of files in this
-		  directory determines the number of mappers in the
-		  hadoop job;
-
-output		: output directory containing kafka data and updated 
-		  topic offsets;
-
-kafka.request.limit : it is used to limit the number of events fetched.
-
-KafkaETLRecordReader is a record reader associated with KafkaETLInputFormat.
-It fetches kafka data from the server. It starts from provided offsets 
-(specified by "input") and stops when it reaches the largest available offsets 
-or the specified limit (specified by "kafka.request.limit").
-
-KafkaETLJob contains some helper functions to initialize job configuration.
-
-SimpleKafkaETLJob sets up job properties and submits the Hadoop job.
-
-SimpleKafkaETLMapper dumps Kafka data into HDFS.
-
-HOW TO RUN:
-In order to run this, make sure the HADOOP_HOME environment variable points to 
-your hadoop installation directory.
-
-1. Compile using "sbt" to create a package for the hadoop consumer code.
-./sbt package
-
-2. Run the hadoop-setup.sh script that enables write permission on the 
-   required HDFS directory
-
-3. Produce test events to the Kafka server and generate offset files
-  1) Start kafka server [ Follow the quick start - 
-                        http://sna-projects.com/kafka/quickstart.php ]
-
-  2) Update test/test.properties to change the following parameters:  
-   kafka.etl.topic 	: topic name
-   event.count		: number of events to be generated
-   kafka.server.uri     : kafka server uri;
-   input                : hdfs directory of offset files
-
-  3) Produce test events to Kafka server and generate offset files
-   ./run-class.sh kafka.etl.impl.DataGenerator test/test.properties
-
-4. Fetch generated topic into HDFS:
-  1) Update test/test.properties to change the following parameters:
-	hadoop.job.ugi	: id and group 
-	input           : input location 
-	output	        : output location 
-	kafka.request.limit: limit the number of events to be fetched; 
-			     -1 means no limitation.
-        hdfs.default.classpath.dir : hdfs location of jars
-
-  2) Copy jars into HDFS
-   ./copy-jars.sh ${hdfs.default.classpath.dir}
-
-  3) Fetch data
-  ./run-class.sh kafka.etl.impl.SimpleKafkaETLJob test/test.properties
-
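An illustrative sketch of a test/test.properties file, assembled from the parameters described in the README above and the client.buffer.size / client.so.timeout keys read by KafkaETLContext later in this change. Every value below is a placeholder assumption, not a setting taken from the original tree:

   # hypothetical test/test.properties; all values are placeholders
   kafka.etl.topic=test-topic
   kafka.server.uri=tcp://localhost:9092
   event.count=1000
   input=/tmp/kafka/offsets
   output=/tmp/kafka/output
   kafka.request.limit=-1
   hdfs.default.classpath.dir=/tmp/kafka/lib
   hadoop.job.ugi=hadoop,hadoop
   client.buffer.size=1048576
   client.so.timeout=60000
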
diff --git a/trunk/contrib/hadoop-consumer/copy-jars.sh b/trunk/contrib/hadoop-consumer/copy-jars.sh
deleted file mode 100755
index e5de1dd..0000000
--- a/trunk/contrib/hadoop-consumer/copy-jars.sh
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-if [ $# -lt 1 ];
-then
-  echo "USAGE: $0 dir"
-  exit 1
-fi
-
-base_dir=$(dirname $0)/../..
-
-hadoop=${HADOOP_HOME}/bin/hadoop
-
-echo "$hadoop fs -rmr $1"
-$hadoop fs -rmr $1
-
-echo "$hadoop fs -mkdir $1"
-$hadoop fs -mkdir $1
-
-# include hadoop-consumer target jars
-for file in $base_dir/contrib/hadoop-consumer/target/scala_2.8.0/*.jar;
-do
-   echo "$hadoop fs -put $file $1/"
-   $hadoop fs -put $file $1/ 
-done
-
-# include the kafka core jar
-echo "$hadoop fs -put $base_dir/core/target/scala_2.8.0/kafka-*.jar $1/"
-$hadoop fs -put $base_dir/core/target/scala_2.8.0/kafka-*.jar $1/ 
-
-# include core lib jars
-for file in $base_dir/core/lib/*.jar;
-do
-   echo "$hadoop fs -put $file $1/"
-   $hadoop fs -put $file $1/ 
-done
-
-for file in $base_dir/core/lib_managed/scala_2.8.0/compile/*.jar;
-do
-   echo "$hadoop fs -put $file $1/"
-   $hadoop fs -put $file $1/ 
-done
-
-# include scala library jar
-echo "$hadoop fs -put $base_dir/project/boot/scala-2.8.0/lib/scala-library.jar $1/"
-$hadoop fs -put $base_dir/project/boot/scala-2.8.0/lib/scala-library.jar $1/
-
-local_dir=$(dirname $0)
-
-# include hadoop-consumer jars
-for file in $local_dir/lib/*.jar;
-do
-   echo "$hadoop fs -put $file $1/"
-   $hadoop fs -put $file $1/ 
-done
-
diff --git a/trunk/contrib/hadoop-consumer/hadoop-setup.sh b/trunk/contrib/hadoop-consumer/hadoop-setup.sh
deleted file mode 100755
index c855e66..0000000
--- a/trunk/contrib/hadoop-consumer/hadoop-setup.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-hadoop=${HADOOP_HOME}/bin/hadoop
-
-$hadoop fs -chmod ugoa+w /tmp
-
diff --git a/trunk/contrib/hadoop-consumer/lib/piggybank.jar b/trunk/contrib/hadoop-consumer/lib/piggybank.jar
deleted file mode 100644
index cbd46e0..0000000
--- a/trunk/contrib/hadoop-consumer/lib/piggybank.jar
+++ /dev/null
Binary files differ
diff --git a/trunk/contrib/hadoop-consumer/run-class.sh b/trunk/contrib/hadoop-consumer/run-class.sh
deleted file mode 100755
index bfb4744..0000000
--- a/trunk/contrib/hadoop-consumer/run-class.sh
+++ /dev/null
@@ -1,65 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-if [ $# -lt 1 ];
-then
-  echo "USAGE: $0 classname [opts]"
-  exit 1
-fi
-
-base_dir=$(dirname $0)/../..
-
-# include kafka jars
-for file in $base_dir/core/target/scala_2.8.0/kafka-*.jar
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/contrib/hadoop-consumer/lib_managed/scala_2.8.0/compile/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-local_dir=$(dirname $0)
-
-# include hadoop-consumer jars
-for file in $base_dir/contrib/hadoop-consumer/target/scala_2.8.0/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/contrib/hadoop-consumer/lib/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-CLASSPATH=$CLASSPATH:$base_dir/project/boot/scala-2.8.0/lib/scala-library.jar
-
-echo $CLASSPATH
-
-CLASSPATH=dist:$CLASSPATH:${HADOOP_HOME}/conf
-
-#if [ -z "$KAFKA_OPTS" ]; then
-#  KAFKA_OPTS="-Xmx512M -server -Dcom.sun.management.jmxremote"
-#fi
-
-if [ -z "$JAVA_HOME" ]; then
-  JAVA="java"
-else
-  JAVA="$JAVA_HOME/bin/java"
-fi
-
-$JAVA $KAFKA_OPTS -cp $CLASSPATH $@
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLContext.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLContext.java
deleted file mode 100644
index 1c18832..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLContext.java
+++ /dev/null
@@ -1,286 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.etl;
-
-
-import java.io.IOException;
-import java.net.URI;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Iterator;
-import java.util.List;
-import kafka.api.FetchRequest;
-import kafka.api.OffsetRequest;
-import kafka.common.ErrorMapping;
-import kafka.javaapi.MultiFetchResponse;
-import kafka.javaapi.consumer.SimpleConsumer;
-import kafka.javaapi.message.ByteBufferMessageSet;
-import kafka.message.MessageAndOffset;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.mapred.lib.MultipleOutputs;
-
-@SuppressWarnings({ "deprecation"})
-public class KafkaETLContext {
-    
-    static protected int MAX_RETRY_TIME = 1;
-    final static String CLIENT_BUFFER_SIZE = "client.buffer.size";
-    final static String CLIENT_TIMEOUT = "client.so.timeout";
-
-    final static int DEFAULT_BUFFER_SIZE = 1 * 1024 * 1024;
-    final static int DEFAULT_TIMEOUT = 60000; // one minute
-
-    final static KafkaETLKey DUMMY_KEY = new KafkaETLKey();
-
-    protected int _index; /*index of context*/
-    protected String _input = null; /*input string*/
-    protected KafkaETLRequest _request = null;
-    protected SimpleConsumer _consumer = null; /*simple consumer*/
-
-    protected long[] _offsetRange = {0, 0};  /*offset range*/
-    protected long _offset = Long.MAX_VALUE; /*current offset*/
-    protected long _count; /*current count*/
-
-    protected MultiFetchResponse _response = null;  /*fetch response*/
-    protected Iterator<MessageAndOffset> _messageIt = null; /*message iterator*/
-    protected Iterator<ByteBufferMessageSet> _respIterator = null;
-    protected int _retry = 0;
-    protected long _requestTime = 0; /*accumulative request time*/
-    protected long _startTime = -1;
-    
-    protected int _bufferSize;
-    protected int _timeout;
-    protected Reporter _reporter;
-    
-    protected MultipleOutputs _mos;
-    protected OutputCollector<KafkaETLKey, BytesWritable> _offsetOut = null;
-    
-    public long getTotalBytes() {
-        return (_offsetRange[1] > _offsetRange[0])? _offsetRange[1] - _offsetRange[0] : 0;
-    }
-    
-    public long getReadBytes() {
-        return _offset - _offsetRange[0];
-    }
-    
-    public long getCount() {
-        return _count;
-    }
-    
-    /**
-     * construct using input string
-     */
-    @SuppressWarnings("unchecked")
-    public KafkaETLContext(JobConf job, Props props, Reporter reporter, 
-                                    MultipleOutputs mos, int index, String input) 
-    throws Exception {
-        
-        _bufferSize = getClientBufferSize(props);
-        _timeout = getClientTimeout(props);
-        System.out.println("bufferSize=" +_bufferSize);
-        System.out.println("timeout=" + _timeout);
-        _reporter = reporter;
-        _mos = mos;
-        
-        // read topic and current offset from input
-        _index= index; 
-        _input = input;
-        _request = new KafkaETLRequest(input.trim());
-        
-        // read data from queue
-        URI uri = _request.getURI();
-        _consumer = new SimpleConsumer(uri.getHost(), uri.getPort(), _timeout, _bufferSize);
-        
-        // get available offset range
-        _offsetRange = getOffsetRange();
-        System.out.println("Connected to node " + uri 
-                + " beginning reading at offset " + _offsetRange[0]
-                + " latest offset=" + _offsetRange[1]);
-
-        _offset = _offsetRange[0];
-        _count = 0;
-        _requestTime = 0;
-        _retry = 0;
-        
-        _startTime = System.currentTimeMillis();
-    }
-    
-    public boolean hasMore () {
-        return _messageIt != null && _messageIt.hasNext() 
-                || _response != null && _respIterator.hasNext()
-                || _offset < _offsetRange[1]; 
-    }
-    
-    public boolean getNext(KafkaETLKey key, BytesWritable value) throws IOException {
-        if ( !hasMore() ) return false;
-        
-        boolean gotNext = get(key, value);
-
-        if(_response != null) {
-
-            while ( !gotNext && _respIterator.hasNext()) {
-                ByteBufferMessageSet msgSet = _respIterator.next();
-                if ( hasError(msgSet)) return false;
-                _messageIt = msgSet.iterator();
-                gotNext = get(key, value);
-            }
-        }
-        return gotNext;
-    }
-    
-    public boolean fetchMore () throws IOException {
-        if (!hasMore()) return false;
-        
-        FetchRequest fetchRequest = 
-            new FetchRequest(_request.getTopic(), _request.getPartition(), _offset, _bufferSize);
-        List<FetchRequest> array = new ArrayList<FetchRequest>();
-        array.add(fetchRequest);
-
-        long tempTime = System.currentTimeMillis();
-        _response = _consumer.multifetch(array);
-        if(_response != null)
-            _respIterator = _response.iterator();
-        _requestTime += (System.currentTimeMillis() - tempTime);
-        
-        return true;
-    }
-    
-    @SuppressWarnings("unchecked")
-    public void output(String fileprefix) throws IOException {
-       String offsetString = _request.toString(_offset);
-
-        if (_offsetOut == null)
-            _offsetOut = (OutputCollector<KafkaETLKey, BytesWritable>)
-                                    _mos.getCollector("offsets", fileprefix+_index, _reporter);
-        _offsetOut.collect(DUMMY_KEY, new BytesWritable(offsetString.getBytes("UTF-8")));
-        
-    }
-    
-    public void close() throws IOException {
-        if (_consumer != null) _consumer.close();
-        
-        String topic = _request.getTopic();
-        long endTime = System.currentTimeMillis();
-        _reporter.incrCounter(topic, "read-time(ms)", endTime - _startTime);
-        _reporter.incrCounter(topic, "request-time(ms)", _requestTime);
-        
-        long bytesRead = _offset - _offsetRange[0];
-        double megaRead = bytesRead / (1024.0*1024.0);
-        _reporter.incrCounter(topic, "data-read(mb)", (long) megaRead);
-        _reporter.incrCounter(topic, "event-count", _count);
-    }
-    
-    protected boolean get(KafkaETLKey key, BytesWritable value) throws IOException {
-        if (_messageIt != null && _messageIt.hasNext()) {
-            MessageAndOffset messageAndOffset = _messageIt.next();
-            
-            ByteBuffer buf = messageAndOffset.message().payload();
-            int origSize = buf.remaining();
-            byte[] bytes = new byte[origSize];
-            buf.get(bytes, 0, origSize);
-            value.set(bytes, 0, origSize);
-            
-            key.set(_index, _offset, messageAndOffset.message().checksum());
-            
-            _offset = messageAndOffset.offset();  //increase offset
-            _count ++;  //increase count
-            
-            return true;
-        }
-        else return false;
-    }
-    
-    /**
-     * Get offset ranges
-     */
-    protected long[] getOffsetRange() throws IOException {
-
-        /* get smallest and largest offsets*/
-        long[] range = new long[2];
-
-        long[] startOffsets = _consumer.getOffsetsBefore(_request.getTopic(), _request.getPartition(),
-                OffsetRequest.EarliestTime(), 1);
-        if (startOffsets.length != 1)
-            throw new IOException("input:" + _input + " Expected one smallest offset but got "
-                                            + startOffsets.length);
-        range[0] = startOffsets[0];
-        
-        long[] endOffsets = _consumer.getOffsetsBefore(_request.getTopic(), _request.getPartition(),
-                                        OffsetRequest.LatestTime(), 1);
-        if (endOffsets.length != 1)
-            throw new IOException("input:" + _input + " Expected one latest offset but got " 
-                                            + endOffsets.length);
-        range[1] = endOffsets[0];
-
-        /*adjust range based on input offsets*/
-        if ( _request.isValidOffset()) {
-            long startOffset = _request.getOffset();
-            if (startOffset > range[0]) {
-                System.out.println("Update starting offset with " + startOffset);
-                range[0] = startOffset;
-            }
-            else {
-                System.out.println("WARNING: given starting offset " + startOffset 
-                                            + " is smaller than the smallest one " + range[0] 
-                                            + ". Will ignore it.");
-            }
-        }
-        System.out.println("Using offset range [" + range[0] + ", " + range[1] + "]");
-        return range;
-    }
-    
-    /**
-     * Checks the error code of a fetched message set to determine
-     * whether reading should continue.
-     */
-    protected boolean hasError(ByteBufferMessageSet messages)
-            throws IOException {
-        int errorCode = messages.getErrorCode();
-        if (errorCode == ErrorMapping.OffsetOutOfRangeCode()) {
-            /* offset cannot cross the maximum offset (guaranteed by Kafka protocol).
-               Kafka server may delete old files from time to time */
-            System.err.println("WARNING: current offset=" + _offset + ". It is out of range.");
-
-            if (_retry >= MAX_RETRY_TIME)  return true;
-            _retry++;
-            // get the current offset range
-            _offsetRange = getOffsetRange();
-            _offset =  _offsetRange[0];
-            return false;
-        } else if (errorCode == ErrorMapping.InvalidMessageCode()) {
-            throw new IOException(_input + " current offset=" + _offset
-                    + " : invalid offset.");
-        } else if (errorCode == ErrorMapping.WrongPartitionCode()) {
-            throw new IOException(_input + " : wrong partition");
-        } else if (errorCode != ErrorMapping.NoError()) {
-            throw new IOException(_input + " current offset=" + _offset
-                    + " error:" + errorCode);
-        } else
-            return false;
-    }
-    
-    public static int getClientBufferSize(Props props) throws Exception {
-        return props.getInt(CLIENT_BUFFER_SIZE, DEFAULT_BUFFER_SIZE);
-    }
-
-    public static int getClientTimeout(Props props) throws Exception {
-        return props.getInt(CLIENT_TIMEOUT, DEFAULT_TIMEOUT);
-    }
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLInputFormat.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLInputFormat.java
deleted file mode 100644
index ddd6b72..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLInputFormat.java
+++ /dev/null
@@ -1,78 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.etl;
-
-
-import java.io.IOException;
-import java.net.URI;
-import java.util.Map;
-import kafka.consumer.SimpleConsumer;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.mapred.InputSplit;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.RecordReader;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.mapred.SequenceFileInputFormat;
-import org.apache.hadoop.mapred.lib.MultipleOutputs;
-
-
-@SuppressWarnings("deprecation")
-public class KafkaETLInputFormat 
-extends SequenceFileInputFormat<KafkaETLKey, BytesWritable> {
-
-    protected Props _props;
-    protected int _bufferSize;
-    protected int _soTimeout;
-
-    protected Map<Integer, URI> _nodes;
-    protected int _partition;
-    protected int _nodeId;
-    protected String _topic;
-    protected SimpleConsumer _consumer;
-
-    protected MultipleOutputs _mos;
-    protected OutputCollector<BytesWritable, BytesWritable> _offsetOut = null;
-
-    protected long[] _offsetRange;
-    protected long _startOffset;
-    protected long _offset;
-    protected boolean _toContinue = true;
-    protected int _retry;
-    protected long _timestamp;
-    protected long _count;
-    protected boolean _ignoreErrors = false;
-
-    @Override
-    public RecordReader<KafkaETLKey, BytesWritable> getRecordReader(InputSplit split,
-                                    JobConf job, Reporter reporter)
-                                    throws IOException {
-        return new KafkaETLRecordReader(split, job, reporter);
-    }
-
-    @Override
-    protected boolean isSplitable(FileSystem fs, Path file) {
-        return super.isSplitable(fs, file);
-    }
-
-    @Override
-    public InputSplit[] getSplits(JobConf conf, int numSplits) throws IOException {
-        return super.getSplits(conf, numSplits);
-    }
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLJob.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLJob.java
deleted file mode 100644
index 1a4bcba..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLJob.java
+++ /dev/null
@@ -1,172 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.etl;
-
-
-import java.net.URI;
-import org.apache.hadoop.filecache.DistributedCache;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.SequenceFileOutputFormat;
-import org.apache.hadoop.mapred.lib.MultipleOutputs;
-
-@SuppressWarnings("deprecation")
-public class KafkaETLJob {
-    
-    public static final String HADOOP_PREFIX = "hadoop-conf.";
-    /**
-     * Create a job configuration
-     */
-    @SuppressWarnings("rawtypes")
-    public static JobConf createJobConf(String name, String topic, Props props, Class classobj) 
-    throws Exception {
-        JobConf conf = getJobConf(name, props, classobj);
-        
-        conf.set("topic", topic);
-        
-        // input format
-        conf.setInputFormat(KafkaETLInputFormat.class);
-
-        //turn off mapper speculative execution
-        conf.setMapSpeculativeExecution(false);
-        
-        // setup multiple outputs
-        MultipleOutputs.addMultiNamedOutput(conf, "offsets", SequenceFileOutputFormat.class, 
-                    KafkaETLKey.class, BytesWritable.class);
-
-
-        return conf;
-    }
-    
-    /**
-     * Helper function to initialize a job configuration
-     */
-    public static JobConf getJobConf(String name, Props props, Class classobj) throws Exception {
-        JobConf conf = new JobConf();
-        // set custom class loader with custom find resource strategy.
-
-        conf.setJobName(name);
-        String hadoop_ugi = props.getProperty("hadoop.job.ugi", null);
-        if (hadoop_ugi != null) {
-            conf.set("hadoop.job.ugi", hadoop_ugi);
-        }
-
-        if (props.getBoolean("is.local", false)) {
-            conf.set("mapred.job.tracker", "local");
-            conf.set("fs.default.name", "file:///");
-            conf.set("mapred.local.dir", "/tmp/map-red");
-
-            info("Running locally, no hadoop jar set.");
-        } else {
-            setClassLoaderAndJar(conf, classobj);
-            info("Setting hadoop jar file for class:" + classobj + "  to " + conf.getJar());
-            info("*************************************************************************");
-            info("          Running on Real Hadoop Cluster(" + conf.get("mapred.job.tracker") + ")           ");
-            info("*************************************************************************");
-        }
-
-        // set JVM options if present
-        if (props.containsKey("mapred.child.java.opts")) {
-            conf.set("mapred.child.java.opts", props.getProperty("mapred.child.java.opts"));
-            info("mapred.child.java.opts set to " + props.getProperty("mapred.child.java.opts"));
-        }
-
-        // Adds External jars to hadoop classpath
-        String externalJarList = props.getProperty("hadoop.external.jarFiles", null);
-        if (externalJarList != null) {
-            String[] jarFiles = externalJarList.split(",");
-            for (String jarFile : jarFiles) {
-                info("Adding external jar file:" + jarFile);
-                DistributedCache.addFileToClassPath(new Path(jarFile), conf);
-            }
-        }
-
-        // Adds distributed cache files
-        String cacheFileList = props.getProperty("hadoop.cache.files", null);
-        if (cacheFileList != null) {
-            String[] cacheFiles = cacheFileList.split(",");
-            for (String cacheFile : cacheFiles) {
-                info("Adding Distributed Cache File:" + cacheFile);
-                DistributedCache.addCacheFile(new URI(cacheFile), conf);
-            }
-        }
-
-        // Adds distributed cache archives
-        String archiveFileList = props.getProperty("hadoop.cache.archives", null);
-        if (archiveFileList != null) {
-            String[] archiveFiles = archiveFileList.split(",");
-            for (String archiveFile : archiveFiles) {
-                info("Adding Distributed Cache Archive File:" + archiveFile);
-                DistributedCache.addCacheArchive(new URI(archiveFile), conf);
-            }
-        }
-
-        String hadoopCacheJarDir = props.getProperty("hdfs.default.classpath.dir", null);
-        if (hadoopCacheJarDir != null) {
-            FileSystem fs = FileSystem.get(conf);
-            if (fs != null) {
-                FileStatus[] status = fs.listStatus(new Path(hadoopCacheJarDir));
-
-                if (status != null) {
-                    for (int i = 0; i < status.length; ++i) {
-                        if (!status[i].isDir()) {
-                            Path path = new Path(hadoopCacheJarDir, status[i].getPath().getName());
-                            info("Adding Jar to Distributed Cache Archive File:" + path);
-
-                            DistributedCache.addFileToClassPath(path, conf);
-                        }
-                    }
-                } else {
-                    info("hdfs.default.classpath.dir " + hadoopCacheJarDir + " is empty.");
-                }
-            } else {
-                info("hdfs.default.classpath.dir " + hadoopCacheJarDir + " filesystem doesn't exist");
-            }
-        }
-
-        // May want to add this to HadoopUtils, but will await refactoring
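-        // e.g. a props entry "hadoop-conf.mapred.reduce.tasks=1" is copied into the JobConf as "mapred.reduce.tasks=1" (illustrative key)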
-        for (String key : props.stringPropertyNames()) {
-            String lowerCase = key.toLowerCase();
-            if (lowerCase.startsWith(HADOOP_PREFIX)) {
-                String newKey = key.substring(HADOOP_PREFIX.length());
-                conf.set(newKey, props.getProperty(key));
-            }
-        }
-
-        KafkaETLUtils.setPropsInJob(conf, props);
-        
-        return conf;
-    }
-
-    public static void info(String message) {
-        System.out.println(message);
-    }
-
-    public static void setClassLoaderAndJar(JobConf conf,
-            @SuppressWarnings("rawtypes") Class jobClass) {
-        conf.setClassLoader(Thread.currentThread().getContextClassLoader());
-        String jar = KafkaETLUtils.findContainingJar(jobClass, Thread
-                .currentThread().getContextClassLoader());
-        if (jar != null) {
-            conf.setJar(jar);
-        }
-    }
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLKey.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLKey.java
deleted file mode 100644
index aafecea..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLKey.java
+++ /dev/null
@@ -1,104 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.etl;
-
-
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import org.apache.hadoop.io.WritableComparable;
-
-public class KafkaETLKey implements WritableComparable<KafkaETLKey>{
-
-    protected int _inputIndex;
-    protected long _offset;
-    protected long _checksum;
-    
-    /**
-     * dummy empty constructor
-     */
-    public KafkaETLKey() {
-        _inputIndex = 0;
-        _offset = 0;
-        _checksum = 0;
-    }
-    
-    public KafkaETLKey (int index, long offset) {
-        _inputIndex =  index;
-        _offset = offset;
-        _checksum = 0;
-    }
-    
-    public KafkaETLKey (int index, long offset, long checksum) {
-        _inputIndex =  index;
-        _offset = offset;
-        _checksum = checksum;
-    }
-    
-    public void set(int index, long offset, long checksum) {
-        _inputIndex = index;
-        _offset = offset;
-        _checksum = checksum;
-    }
-    
-    public int getIndex() {
-        return _inputIndex;
-    }
-    
-    public long getOffset() {
-        return _offset;
-    }
-    
-    public long getChecksum() {
-        return _checksum;
-    }
-    
-    @Override
-    public void readFields(DataInput in) throws IOException {
-        _inputIndex = in.readInt(); 
-        _offset = in.readLong();
-        _checksum = in.readLong();
-    }
-
-    @Override
-    public void write(DataOutput out) throws IOException {
-        out.writeInt(_inputIndex);
-        out.writeLong(_offset);
-        out.writeLong(_checksum);
-    }
-
-    @Override
-    public int compareTo(KafkaETLKey o) {
-        if (_inputIndex != o._inputIndex)
-            return _inputIndex - o._inputIndex;
-        else {
-            if  (_offset > o._offset) return 1;
-            else if (_offset < o._offset) return -1;
-            else {
-                if  (_checksum > o._checksum) return 1;
-                else if (_checksum < o._checksum) return -1;
-                else return 0;
-            }
-        }
-    }
-    
-    @Override
-    public String toString() {
-        return "index=" + _inputIndex + " offset=" + _offset + " checksum=" + _checksum;
-    }
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLRecordReader.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLRecordReader.java
deleted file mode 100644
index 375429e..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLRecordReader.java
+++ /dev/null
@@ -1,180 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.etl;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.mapred.FileSplit;
-import org.apache.hadoop.mapred.InputSplit;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.mapred.SequenceFileRecordReader;
-import org.apache.hadoop.mapred.lib.MultipleOutputs;
-
-@SuppressWarnings({ "deprecation" })
-public class KafkaETLRecordReader 
-extends SequenceFileRecordReader<KafkaETLKey, BytesWritable> {
-
-    /* max number of retries */
-    protected Props _props;   /*properties*/
-    protected JobConf _job;
-    protected Reporter _reporter ;
-    protected MultipleOutputs _mos;
-    protected List<KafkaETLContext> _contextList;
-    protected int _contextIndex ;
-    
-    protected long _totalBytes;
-    protected long _readBytes;
-    protected long _readCounts;
-    
-    protected String _attemptId = null;
-    
-    private static long _limit = 100; /*for testing only*/
-    
-    public KafkaETLRecordReader(InputSplit split, JobConf job, Reporter reporter) 
-    throws IOException {
-       super(job, (FileSplit) split);
-       
-       _props = KafkaETLUtils.getPropsFromJob(job);
-       _contextList = new ArrayList<KafkaETLContext>();
-       _job = job;
-       _reporter = reporter;
-       _contextIndex = -1;
-       _mos = new MultipleOutputs(job);
-       try {
-           _limit = _props.getInt("kafka.request.limit", -1);
-           
-           /*get attempt id*/
-           String taskId = _job.get("mapred.task.id");
-           if (taskId == null) {
-               throw new IllegalArgumentException(
-                                 "Configuration does not contain the property mapred.task.id");
-           }
-           String[] parts = taskId.split("_");
-           if (    parts.length != 6 || !parts[0].equals("attempt") 
-                || (!"m".equals(parts[3]) && !"r".equals(parts[3]))) {
-                   throw new IllegalArgumentException(
-                                 "TaskAttemptId string : " + taskId + " is not properly formed");
-           }
-          _attemptId = parts[4]+parts[3];
-       }catch (Exception e) {
-           throw new IOException (e);
-       }
-    }
-
-    @Override
-    public synchronized void close() throws IOException {
-        super.close();
-        
-        /* now record some stats */
-        for (KafkaETLContext context: _contextList) {
-            context.output(_attemptId);
-            context.close();
-        }
-        
-        _mos.close();
-    }
-
-    @Override
-    public KafkaETLKey createKey() {
-        return super.createKey();
-    }
-
-    @Override
-    public BytesWritable createValue() {
-        return super.createValue();
-    }
-
-    @Override
-    public float getProgress() throws IOException {
-        if (_totalBytes == 0) return 0f;
-        
-        if (_contextIndex >= _contextList.size()) return 1f;
-        
-        if (_limit < 0) {
-            double p = ( _readBytes + getContext().getReadBytes() ) / ((double) _totalBytes);
-            return (float)p;
-        }
-        else {
-            double p = (_readCounts + getContext().getCount()) / ((double)_limit * _contextList.size());
-            return (float)p;
-        }
-    }
-
-    @Override
-    public synchronized boolean next(KafkaETLKey key, BytesWritable value)
-                                    throws IOException {
-    try{
-        if (_contextIndex < 0) { /* first call, get all requests */
-            System.out.println("RecordReader.next init()");
-            _totalBytes = 0;
-            
-            while ( super.next(key, value)) {
-                String input = new String(value.getBytes(), "UTF-8");
-                int index = _contextList.size();
-                KafkaETLContext context = new KafkaETLContext(
-                                              _job, _props, _reporter, _mos, index, input);
-                _contextList.add(context);
-                _totalBytes += context.getTotalBytes();
-            }
-            System.out.println("Number of requests=" + _contextList.size());
-            
-            _readBytes = 0;
-            _readCounts = 0;
-            _contextIndex = 0;
-        }
-        
-        while (_contextIndex < _contextList.size()) {
-            
-            KafkaETLContext currContext = getContext();
-            
-            while (currContext.hasMore() && 
-                       (_limit < 0 || currContext.getCount() < _limit)) {
-                
-                if (currContext.getNext(key, value)) {
-                    //System.out.println("RecordReader.next get (key,value)");
-                    return true;
-                }
-                else {
-                    //System.out.println("RecordReader.next fetch more");
-                    currContext.fetchMore();
-                }
-            }
-            
-            _readBytes += currContext.getReadBytes();
-            _readCounts += currContext.getCount();
-            _contextIndex ++;
-            System.out.println("RecordReader.next will get from request " + _contextIndex);
-       }
-    }catch (Exception e) {
-        throw new IOException (e);
-    }
-    return false;
-    }
-    
-    protected KafkaETLContext getContext() throws IOException{
-        if (_contextIndex >= _contextList.size()) 
-            throw new IOException ("context index " + _contextIndex + " is out of bound " 
-                                            + _contextList.size());
-        return _contextList.get(_contextIndex);
-    }
-
-    
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLRequest.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLRequest.java
deleted file mode 100644
index defb51b..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLRequest.java
+++ /dev/null
@@ -1,128 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.etl;
-
-import java.io.IOException;
-import java.net.URI;
-import java.net.URISyntaxException;
-import java.util.Map;
-
-public class KafkaETLRequest {
-    public static long DEFAULT_OFFSET = -1;
-    public static String DELIM = "\t";
-    
-    String _topic;
-    URI _uri;
-    int _partition;
-    long _offset = DEFAULT_OFFSET;
-    
-    public KafkaETLRequest() {
-        
-    }
-    
-    public KafkaETLRequest(String input) throws IOException {
-        //System.out.println("Init request from " + input);
-        String[] pieces = input.trim().split(DELIM);
-        if (pieces.length != 4)
-            throw new IOException( input + 
-                                            " : input must be in the form 'url" + DELIM +
-                                            "topic" + DELIM +"partition" + DELIM +"offset'");
-
-        try {
-            _uri = new URI (pieces[0]); 
-        }catch (java.net.URISyntaxException e) {
-            throw new IOException (e);
-        }
-        _topic = pieces[1];
-        _partition = Integer.valueOf(pieces[2]);
-        _offset = Long.valueOf(pieces[3]);
-    }
-    
-    public KafkaETLRequest(String node, String topic, String partition, String offset, 
-                                    Map<String, String> nodes) throws IOException {
-
-        Integer nodeId = Integer.parseInt(node);
-        String uri = nodes.get(nodeId.toString());
-        if (uri == null) throw new IOException ("Cannot form node for id " + nodeId);
-        
-        try {
-            _uri = new URI (uri); 
-        }catch (java.net.URISyntaxException e) {
-            throw new IOException (e);
-        }
-        _topic = topic;
-        _partition = Integer.valueOf(partition);
-        _offset = Long.valueOf(offset);
-    }
-    
-    public KafkaETLRequest(String topic, String uri, int partition) throws URISyntaxException {
-        _topic = topic;
-        _uri = new URI(uri);
-        _partition = partition;
-    }
-    
-    public void setDefaultOffset() {
-        _offset = DEFAULT_OFFSET;
-    }
-    
-    public void setOffset(long offset) {
-        _offset = offset;
-    }
-    
-    public String getTopic() { return _topic;}
-    public URI getURI () { return _uri;}
-    public int getPartition() { return _partition;}
-    
-    public long getOffset() { return _offset;}
-
-    public boolean isValidOffset() {
-        return _offset >= 0;
-    }
-    
-    @Override
-    public boolean equals(Object o) {
-        if (! (o instanceof KafkaETLRequest))
-            return false;
-        
-        KafkaETLRequest r = (KafkaETLRequest) o;
-        return this._topic.equals(r._topic) &&
-                    this._uri.equals(r._uri) &&
-                    this._partition == r._partition;
-    }
-
-    @Override
-    public int hashCode() {
-        return toString(0).hashCode();
-    }
-
-    @Override
-    public String toString() {
-        return toString(_offset);
-    }
-    
-
-    public String toString(long offset) {
-
-        return
-            _uri + DELIM +
-            _topic + DELIM +
-            _partition + DELIM +
-            offset;
-    }
-    
-
-}
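A note on the record format handled above: each offset record is a single line with four tab-separated fields (uri, topic, partition, offset), as enforced by the KafkaETLRequest(String) constructor and reproduced by toString(long). A hypothetical input line, with tabs shown as <TAB> and illustrative values:

   tcp://localhost:9092<TAB>test-topic<TAB>0<TAB>0

DataGenerator writes lines of this form into the "input" directory, and KafkaETLRecordReader hands each one to a KafkaETLContext.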
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLUtils.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLUtils.java
deleted file mode 100644
index 02d79a1..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/KafkaETLUtils.java
+++ /dev/null
@@ -1,205 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.etl;
-
-
-import java.io.BufferedReader;
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.FileNotFoundException;
-import java.io.FileWriter;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.InputStreamReader;
-import java.io.PrintWriter;
-import java.net.URL;
-import java.net.URLDecoder;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Enumeration;
-import java.util.List;
-import java.util.Properties;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.PathFilter;
-import org.apache.hadoop.io.BytesWritable;
-
-public class KafkaETLUtils {
-
-	public static PathFilter PATH_FILTER = new PathFilter() {
-		@Override
-		public boolean accept(Path path) {
-			return !path.getName().startsWith("_")
-					&& !path.getName().startsWith(".");
-		}
-	};
-
-	
-	public static Path getLastPath(Path path, FileSystem fs) throws IOException {
-
-		FileStatus[] statuses = fs.listStatus(path, PATH_FILTER);
-
-		if (statuses.length == 0) {
-			return path;
-		} else {
-			Arrays.sort(statuses);
-			return statuses[statuses.length - 1].getPath();
-		}
-	}
-
-	public static String getFileName(Path path) throws IOException {
-		String fullname = path.toUri().toString();
-		String[] parts = fullname.split(Path.SEPARATOR);
-		if (parts.length < 1)
-			throw new IOException("Invalid path " + fullname);
-		return parts[parts.length - 1];
-	}
-
-	public static List<String> readText(FileSystem fs, String inputFile)
-			throws IOException, FileNotFoundException {
-		Path path = new Path(inputFile);
-		return readText(fs, path);
-	}
-
-	public static List<String> readText(FileSystem fs, Path path)
-			throws IOException, FileNotFoundException {
-		if (!fs.exists(path)) {
-			throw new FileNotFoundException("File " + path + " doesn't exist!");
-		}
-		BufferedReader in = new BufferedReader(new InputStreamReader(
-				fs.open(path)));
-		List<String> buf = new ArrayList<String>();
-		String line = null;
-
-		while ((line = in.readLine()) != null) {
-			if (line.trim().length() > 0)
-				buf.add(new String(line.trim()));
-		}
-		in.close();
-		return buf;
-	}
-
-	public static void writeText(FileSystem fs, Path outPath, String content)
-			throws IOException {
-		long timestamp = System.currentTimeMillis();
-		String localFile = "/tmp/KafkaETL_tmp_" + timestamp;
-		PrintWriter writer = new PrintWriter(new FileWriter(localFile));
-		writer.println(content);
-		writer.close();
-
-		Path src = new Path(localFile);
-		fs.moveFromLocalFile(src, outPath);
-	}
-
-	public static Props getPropsFromJob(Configuration conf) {
-		String propsString = conf.get("kafka.etl.props");
-		if (propsString == null)
-			throw new UndefinedPropertyException(
-					"The required property kafka.etl.props was not found in the Configuration.");
-		try {
-			ByteArrayInputStream input = new ByteArrayInputStream(
-					propsString.getBytes("UTF-8"));
-			Properties properties = new Properties();
-			properties.load(input);
-			return new Props(properties);
-		} catch (IOException e) {
-			throw new RuntimeException("This is not possible!", e);
-		}
-	}
-
-	 public static void setPropsInJob(Configuration conf, Props props)
-	  {
-	    ByteArrayOutputStream output = new ByteArrayOutputStream();
-	    try
-	    {
-	      props.store(output);
-	      conf.set("kafka.etl.props", new String(output.toByteArray(), "UTF-8"));
-	    }
-	    catch (IOException e)
-	    {
-	      throw new RuntimeException("This is not possible!", e);
-	    }
-	  }
-	 
-	public static Props readProps(String file) throws IOException {
-		Path path = new Path(file);
-		FileSystem fs = path.getFileSystem(new Configuration());
-		if (fs.exists(path)) {
-			InputStream input = fs.open(path);
-			try {
-				// wrap it up in another layer so that the user can override
-				// properties
-				Props p = new Props(input);
-				return new Props(p);
-			} finally {
-				input.close();
-			}
-		} else {
-			return new Props();
-		}
-	}
-
-	public static String findContainingJar(
-			@SuppressWarnings("rawtypes") Class my_class, ClassLoader loader) {
-		String class_file = my_class.getName().replaceAll("\\.", "/")
-				+ ".class";
-		return findContainingJar(class_file, loader);
-	}
-
-	public static String findContainingJar(String fileName, ClassLoader loader) {
-		try {
-			for (@SuppressWarnings("rawtypes")
-			Enumeration itr = loader.getResources(fileName); itr
-					.hasMoreElements();) {
-				URL url = (URL) itr.nextElement();
-				// logger.info("findContainingJar finds url:" + url);
-				if ("jar".equals(url.getProtocol())) {
-					String toReturn = url.getPath();
-					if (toReturn.startsWith("file:")) {
-						toReturn = toReturn.substring("file:".length());
-					}
-					toReturn = URLDecoder.decode(toReturn, "UTF-8");
-					return toReturn.replaceAll("!.*$", "");
-				}
-			}
-		} catch (IOException e) {
-			throw new RuntimeException(e);
-		}
-		return null;
-	}
-
-    public static byte[] getBytes(BytesWritable val) {
-        
-        byte[] buffer = val.getBytes();
-        
-        /* FIXME: remove the following part once the JIRA below is fixed
-         * https://issues.apache.org/jira/browse/HADOOP-6298
-         */
-        long len = val.getLength();
-        byte [] bytes = buffer;
-        if (len < buffer.length) {
-            bytes = new byte[(int) len];
-            System.arraycopy(buffer, 0, bytes, 0, (int)len);
-        }
-        
-        return bytes;
-    }
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/Props.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/Props.java
deleted file mode 100644
index 0ba4ccf..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/Props.java
+++ /dev/null
@@ -1,461 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.etl;
-
-import java.io.BufferedInputStream;
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileNotFoundException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-import java.lang.reflect.Constructor;
-import java.net.URI;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import java.util.Set;
-import org.apache.log4j.Logger;
-
-public class Props extends Properties {
-
-	private static final long serialVersionUID = 1L;
-	private static Logger logger = Logger.getLogger(Props.class);
-	
-	/**
-	 * default constructor
-	 */
-	public Props() {
-		super();
-	}
-
-	/**
-	 * copy constructor 
-	 * @param props
-	 */
-	public Props(Props props) {
-		if (props != null) {
-			this.put(props);
-		}
-	}
-	
-	/**
-	 * construct props from a list of files
-	 * @param files		paths of files
-	 * @throws FileNotFoundException
-	 * @throws IOException
-	 */
-	public Props(String... files) throws FileNotFoundException, IOException {
-		this(Arrays.asList(files));
-	}
-
-	/**
-	 * construct props from a list of files
-	 * @param files		paths of files
-	 * @throws FileNotFoundException
-	 * @throws IOException
-	 */
-	public Props(List<String> files) throws FileNotFoundException, IOException {
-
-		for (int i = 0; i < files.size(); i++) {
-			InputStream input = new BufferedInputStream(new FileInputStream(
-					new File(files.get(i)).getAbsolutePath()));
-			super.load(input);
-			input.close();
-		}
-	}
-
-	/**
-	 * construct props from a list of input streams
-	 * @param inputStreams
-	 * @throws IOException
-	 */
-	public Props(InputStream... inputStreams) throws IOException {
-		for (InputStream stream : inputStreams)
-			super.load(stream);
-	}
-
-	/**
-	 * construct props from a list of maps
-	 * @param props
-	 */
-	public Props(Map<String, String>... props) {
-		for (int i = props.length - 1; i >= 0; i--)
-			super.putAll(props[i]);
-	}
-
-	/**
-	 * construct props from a list of Properties
-	 * @param properties
-	 */
-	public Props(Properties... properties) {
-		for (int i = properties.length - 1; i >= 0; i--){
-			this.put(properties[i]);
-		}
-	}
-
-	/**
-	 * build props from a list of strings and interpret them as
-	 * key, value, key, value,....
-	 * 
-	 * @param args
-	 * @return
-	 */
-	@SuppressWarnings("unchecked")
-	public static Props of(String... args) {
-		if (args.length % 2 != 0)
-			throw new IllegalArgumentException(
-					"Must have an equal number of keys and values.");
-		Map<String, String> vals = new HashMap<String, String>(args.length / 2);
-		for (int i = 0; i < args.length; i += 2)
-			vals.put(args[i], args[i + 1]);
-		return new Props(vals);
-	}
-
-	/**
-	 * Put the given Properties into the Props. 
-	 * 
-	 * @param properties
-	 *            The properties to put
-	 * 
-	 */
-	public void put(Properties properties) {
-		for (String propName : properties.stringPropertyNames()) {
-			super.put(propName, properties.getProperty(propName));
-		}
-	}
-
-	/**
-	 * get property of "key" and split the value by "," (surrounding whitespace is trimmed)
-	 * @param key		
-	 * @return
-	 */
-	public List<String> getStringList(String key) {
-		return getStringList(key, "\\s*,\\s*");
-	}
-
-	/**
-	 * get property of "key" and split the value by "sep"
-	 * @param key
-	 * @param sep
-	 * @return
-	 */
-	public List<String> getStringList(String key, String sep) {
-		String val =  super.getProperty(key);
-		if (val == null || val.trim().length() == 0)
-			return Collections.emptyList();
-
-		if (containsKey(key))
-			return Arrays.asList(val.split(sep));
-		else
-			throw new UndefinedPropertyException("Missing required property '"
-					+ key + "'");
-	}
-
-	/**
-	 * get string list with default value. default delimiter is ","
-	 * @param key
-	 * @param defaultValue
-	 * @return
-	 */
-	public List<String> getStringList(String key, List<String> defaultValue) {
-		if (containsKey(key))
-			return getStringList(key);
-		else
-			return defaultValue;
-	}
-
-	/**
-	 * get string list with default value
-	 * @param key
-	 * @param defaultValue
-	 * @return
-	 */
-	public List<String> getStringList(String key, List<String> defaultValue,
-			String sep) {
-		if (containsKey(key))
-			return getStringList(key, sep);
-		else
-			return defaultValue;
-	}
-
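-	/**
-	 * Look up "key" and coerce its value to the type of defaultValue: a value
-	 * already of that type is returned as-is; a String value is converted by
-	 * invoking the target type's String constructor; anything else raises
-	 * UndefinedPropertyException. If the key is absent, defaultValue is returned.
-	 */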
-	@SuppressWarnings("unchecked")
-	protected <T> T getValue(String key, T defaultValue) 
-	throws Exception {
-		
-		if (containsKey(key)) {
-			Object value = super.get(key);
-			if (value.getClass().isInstance(defaultValue)) {
-				return (T)value;
-			} else if (value instanceof String) {
-				// call constructor(String) to initialize it
-				@SuppressWarnings("rawtypes")
-				Constructor ct = defaultValue.getClass().getConstructor(String.class);
-				String v = ((String)value).trim();
-				Object ret = ct.newInstance(v);
-				return (T) ret;
-			}
-			else throw new UndefinedPropertyException ("Property " + key + 
-					": cannot convert value of " + value.getClass().getName() + 
-					" to " + defaultValue.getClass().getName());
-		}
-		else {
-			return defaultValue;
-		}
-	}
-
-	@SuppressWarnings("unchecked")
-	protected <T> T getValue(String key, Class<T> mclass) 
-	throws Exception {
-		
-		if (containsKey(key)) {
-			Object value = super.get(key);
-			if (value.getClass().equals(mclass)) {
-				return (T)value;
-			} else if (value instanceof String) {
-				// call constructor(String) to initialize it
-				@SuppressWarnings("rawtypes")
-				Constructor ct = mclass.getConstructor(String.class);
-				String v = ((String)value).trim();
-				Object ret = ct.newInstance(v);
-				return (T) ret;
-			}
-			else throw new UndefinedPropertyException ("Property " + key + 
-					": cannot convert value of " + value.getClass().getName() + 
-					" to " + mclass.getName());
-		}
-		else {
-			throw new UndefinedPropertyException ("Missing required property '"
-					+ key + "'");
-		}
-	}
-
-	/**
-	 * get boolean value
-	 * @param key
-	 * @param defaultValue
-	 * @return
-	 * @throws Exception 	if value is not of type boolean or string
-	 */
-	public Boolean getBoolean(String key, Boolean defaultValue) 
-	throws Exception {
-		return getValue (key, defaultValue);
-	}
-
-	/**
-	 * get boolean value
-	 * @param key
-	 * @return
-	 * @throws Exception 	if value is not of type boolean or string or 
-	 * 										if value doesn't exist
-	 */
-	public Boolean getBoolean(String key) throws Exception {
-		return getValue (key, Boolean.class);
-	}
-
-	/**
-	 * get long value
-	 * @param name
-	 * @param defaultValue
-	 * @return
-	 * @throws Exception 	if value is not of type long or string
-	 */
-	public Long getLong(String name, Long defaultValue) 
-	throws Exception {
-		return getValue(name, defaultValue);
-	}
-
-	/**
-	 * get long value
-	 * @param name
-	 * @return
-	 * @throws Exception 	if value is not of type long or string or 
-	 * 										if value doesn't exist
-	 */
-	public Long getLong(String name) throws Exception  {
-		return getValue (name, Long.class);
-	}
-
-	/**
-	 * get integer value
-	 * @param name
-	 * @param defaultValue
-	 * @return
-	 * @throws Exception 	if value is not of type integer or string
-	 */
-	public Integer getInt(String name, Integer defaultValue) 
-	throws Exception  {
-		return getValue(name, defaultValue);
-	}
-
-	/**
-	 * get integer value
-	 * @param name
-	 * @return
-	 * @throws Exception 	if value is not of type integer or string or 
-	 * 										if value doesn't exist
-	 */
-	public Integer getInt(String name) throws Exception {
-		return getValue (name, Integer.class);
-	}
-
-	/**
-	 * get double value
-	 * @param name
-	 * @param defaultValue
-	 * @return
-	 * @throws Exception 	if value is not of type double or string
-	 */
-	public Double getDouble(String name, double defaultValue) 
-	throws Exception {
-		return getValue(name, defaultValue);
-	}
-
-	/**
-	 * get double value
-	 * @param name
-	 * @return
-	 * @throws Exception 	if value is not of type double or string or 
-	 * 										if value doesn't exist
-	 */
-	public double getDouble(String name) throws Exception {
-		return getValue(name, Double.class);
-	}
-
-	/**
-	 * get URI value
-	 * @param name
-	 * @param defaultValue
-	 * @return
-	 * @throws Exception 	if value is not of type URI or string 
-	 */
-	public URI getUri(String name, URI defaultValue) throws Exception {
-		return getValue(name, defaultValue);
-	}
-
-	/**
-	 * get URI value
-	 * @param name
-	 * @param defaultValue
-	 * @return
-	 * @throws Exception 	if value is not of type URI or string 
-	 */
-	public URI getUri(String name, String defaultValue) 
-	throws Exception {
-		URI defaultV = new URI(defaultValue);
-		return getValue(name, defaultV);
-	}
-
-	/**
-	 * get URI value
-	 * @param name
-	 * @return
-	 * @throws Exception 	if value is not of type URI or string or 
-	 * 										if value doesn't exist
-	 */
-	public URI getUri(String name) throws Exception {
-		return getValue(name, URI.class);
-	}
-
-	/**
-	 * compare two props 
-	 * @param p
-	 * @return
-	 */
-	public boolean equalsProps(Props p) {
-		if (p == null) {
-			return false;
-		}
-
-		final Set<String> myKeySet = getKeySet();
-		for (String s : myKeySet) {
-			if (!get(s).equals(p.get(s))) {
-				return false;
-			}
-		}
-
-		return myKeySet.size() == p.getKeySet().size();
-	}
-
-
-	/**
-	 * Get a map of all properties by string prefix
-	 * 
-	 * @param prefix
-	 *            The string prefix
-	 */
-	public Map<String, String> getMapByPrefix(String prefix) {
-		Map<String, String> values = new HashMap<String, String>();
-
-		for (String key : super.stringPropertyNames()) {
-			if (key.startsWith(prefix)) {
-				values.put(key.substring(prefix.length()), super.getProperty(key));
-			}
-		}
-		return values;
-	}
-
-    /**
-     * Store all properties
-     * 
-     * @param out The stream to write to
-     * @throws IOException If there is an error writing
-     */
-    public void store(OutputStream out) throws IOException {
-           super.store(out, null);
-    }
-    
-    /**
-     * get all property names
-     * @return
-     */
-	public Set<String> getKeySet() {
-		return super.stringPropertyNames();
-	}
-
-	/**
-	 * log properties
-	 * @param comment
-	 */
-	public void logProperties(String comment) {
-		logger.info(comment);
-
-		for (String key : getKeySet()) {
-			logger.info("  key=" + key + " value=" + get(key));
-		}
-	}
-
-	/**
-	 * clone a Props
-	 * @param p
-	 * @return
-	 */
-	public static Props clone(Props p) {
-		return new Props(p);
-	}
-
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/UndefinedPropertyException.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/UndefinedPropertyException.java
deleted file mode 100644
index 9278122..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/UndefinedPropertyException.java
+++ /dev/null
@@ -1,28 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.etl;
-
-public class UndefinedPropertyException extends RuntimeException {
-
-	private static final long serialVersionUID = 1;
-
-	public UndefinedPropertyException(String message) {
-		super(message);
-	}
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/DataGenerator.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/DataGenerator.java
deleted file mode 100644
index 5166358..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/DataGenerator.java
+++ /dev/null
@@ -1,134 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.etl.impl;
-
-
-import java.net.URI;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Properties;
-import java.util.Random;
-import kafka.etl.KafkaETLKey;
-import kafka.etl.KafkaETLRequest;
-import kafka.etl.Props;
-import kafka.javaapi.message.ByteBufferMessageSet;
-import kafka.javaapi.producer.SyncProducer;
-import kafka.message.Message;
-import kafka.producer.SyncProducerConfig;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.SequenceFile;
-import org.apache.hadoop.mapred.JobConf;
-
-/**
- * Use this class to produce test events to a Kafka server. Each event contains a
- * random timestamp in text format.
- */
-@SuppressWarnings("deprecation")
-public class DataGenerator {
-
-	protected final static Random RANDOM = new Random(
-			System.currentTimeMillis());
-
-	protected Props _props;
-	protected SyncProducer _producer = null;
-	protected URI _uri = null;
-	protected String _topic;
-	protected int _count;
-	protected String _offsetsDir;
-	protected final int TCP_BUFFER_SIZE = 300 * 1000;
-	protected final int CONNECT_TIMEOUT = 20000; // ms
-	protected final int RECONNECT_INTERVAL = Integer.MAX_VALUE; // ms
-
-	public DataGenerator(String id, Props props) throws Exception {
-		_props = props;
-		_topic = props.getProperty("kafka.etl.topic");
-		System.out.println("topics=" + _topic);
-		_count = props.getInt("event.count");
-
-		_offsetsDir = _props.getProperty("input");
-		
-		// initialize kafka producer to generate count events
-		String serverUri = _props.getProperty("kafka.server.uri");
-		_uri = new URI (serverUri);
-		
-		System.out.println("server uri:" + _uri.toString());
-        Properties producerProps = new Properties();
-        producerProps.put("host", _uri.getHost());
-        producerProps.put("port", String.valueOf(_uri.getPort()));
-        producerProps.put("buffer.size", String.valueOf(TCP_BUFFER_SIZE));
-        producerProps.put("connect.timeout.ms", String.valueOf(CONNECT_TIMEOUT));
-        producerProps.put("reconnect.interval", String.valueOf(RECONNECT_INTERVAL));
-		_producer = new SyncProducer(new SyncProducerConfig(producerProps));
-			
-	}
-
-	public void run() throws Exception {
-
-		List<Message> list = new ArrayList<Message>();
-		for (int i = 0; i < _count; i++) {
-			Long timestamp = RANDOM.nextLong();
-			if (timestamp < 0) timestamp = -timestamp;
-			byte[] bytes = timestamp.toString().getBytes("UTF8");
-			Message message = new Message(bytes);
-			list.add(message);
-		}
-		// send events
-		System.out.println(" send " + list.size() + " " + _topic + " count events to " + _uri);
-		_producer.send(_topic, new ByteBufferMessageSet(kafka.message.NoCompressionCodec$.MODULE$, list));
-
-		// close the producer
-		_producer.close();
-		
-		// generate offset files
-		generateOffsets();
-	}
-
-    protected void generateOffsets() throws Exception {
-        JobConf conf = new JobConf();
-        conf.set("hadoop.job.ugi", _props.getProperty("hadoop.job.ugi"));
-        conf.setCompressMapOutput(false);
-        Path outPath = new Path(_offsetsDir + Path.SEPARATOR + "1.dat");
-        FileSystem fs = outPath.getFileSystem(conf);
-        if (fs.exists(outPath)) fs.delete(outPath);
-        
-        KafkaETLRequest request =
-            new KafkaETLRequest(_topic, "tcp://" + _uri.getHost() + ":" + _uri.getPort(), 0);
-
-        System.out.println("Dump " + request.toString() + " to " + outPath.toUri().toString());
-        byte[] bytes = request.toString().getBytes("UTF-8");
-        KafkaETLKey dummyKey = new KafkaETLKey();
-        SequenceFile.setCompressionType(conf, SequenceFile.CompressionType.NONE);
-        SequenceFile.Writer writer = SequenceFile.createWriter(fs, conf, outPath, 
-                                        KafkaETLKey.class, BytesWritable.class);
-        writer.append(dummyKey, new BytesWritable(bytes));
-        writer.close();
-    }
-    
-	public static void main(String[] args) throws Exception {
-
-		if (args.length < 1)
-			throw new Exception("Usage: <config_file>");
-
-		Props props = new Props(args[0]);
-		DataGenerator job = new DataGenerator("DataGenerator", props);
-		job.run();
-	}
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/SimpleKafkaETLJob.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/SimpleKafkaETLJob.java
deleted file mode 100644
index d269704..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/SimpleKafkaETLJob.java
+++ /dev/null
@@ -1,104 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.etl.impl;
-
-import kafka.etl.KafkaETLInputFormat;
-import kafka.etl.KafkaETLJob;
-import kafka.etl.Props;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.LongWritable;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapred.JobClient;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.RunningJob;
-import org.apache.hadoop.mapred.TextOutputFormat;
-
-/**
- * This is a simple Kafka ETL job which pulls text events generated by
- * DataGenerator and stores them in HDFS.
- */
-@SuppressWarnings("deprecation")
-public class SimpleKafkaETLJob {
-
-    protected String _name;
-    protected Props _props;
-    protected String _input;
-    protected String _output;
-    protected String _topic;
-    
-	public SimpleKafkaETLJob(String name, Props props) throws Exception {
-		_name = name;
-		_props = props;
-		
-		_input = _props.getProperty("input");
-		_output = _props.getProperty("output");
-		
-		_topic = props.getProperty("kafka.etl.topic");
-	}
-
-
-	protected JobConf createJobConf() throws Exception {
-		JobConf jobConf = KafkaETLJob.createJobConf("SimpleKafkaETL", _topic, _props, getClass());
-		
-		jobConf.setMapperClass(SimpleKafkaETLMapper.class);
-		KafkaETLInputFormat.setInputPaths(jobConf, new Path(_input));
-		
-		jobConf.setOutputKeyClass(LongWritable.class);
-		jobConf.setOutputValueClass(Text.class);
-		jobConf.setOutputFormat(TextOutputFormat.class);
-		TextOutputFormat.setCompressOutput(jobConf, false);
-		Path output = new Path(_output);
-		FileSystem fs = output.getFileSystem(jobConf);
-		if (fs.exists(output)) fs.delete(output);
-		TextOutputFormat.setOutputPath(jobConf, output);
-		
-		jobConf.setNumReduceTasks(0);
-		return jobConf;
-	}
-	
-    public void execute () throws Exception {
-        JobConf conf = createJobConf();
-        RunningJob runningJob = new JobClient(conf).submitJob(conf);
-        String id = runningJob.getJobID();
-        System.out.println("Hadoop job id=" + id);
-        runningJob.waitForCompletion();
-        
-        if (!runningJob.isSuccessful()) 
-            throw new Exception("Hadoop ETL job failed! Please check status on http://"
-                                         + conf.get("mapred.job.tracker") + "/jobdetails.jsp?jobid=" + id);
-    }
-
-	/**
-	 * for testing only
-	 * 
-	 * @param args
-	 * @throws Exception
-	 */
-	public static void main(String[] args) throws Exception {
-
-		if (args.length < 1)
-			throw new Exception("Usage: <config_file>");
-
-		Props props = new Props(args[0]);
-		SimpleKafkaETLJob job = new SimpleKafkaETLJob("SimpleKafkaETLJob",
-				props);
-		job.execute();
-	}
-
-}
diff --git a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/SimpleKafkaETLMapper.java b/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/SimpleKafkaETLMapper.java
deleted file mode 100644
index b0aadff..0000000
--- a/trunk/contrib/hadoop-consumer/src/main/java/kafka/etl/impl/SimpleKafkaETLMapper.java
+++ /dev/null
@@ -1,88 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.etl.impl;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import kafka.etl.KafkaETLKey;
-import kafka.etl.KafkaETLUtils;
-import kafka.message.Message;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.LongWritable;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.Mapper;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.Reporter;
-
-/**
- * Simple implementation of KafkaETLMapper. It assumes that 
- * input data are text timestamp (long).
- */
-@SuppressWarnings("deprecation")
-public class SimpleKafkaETLMapper implements
-Mapper<KafkaETLKey, BytesWritable, LongWritable, Text> {
-
-    protected long _count = 0;
-    
-	protected Text getData(Message message) throws IOException {
-		ByteBuffer buf = message.payload();
-		
-		byte[] array = new byte[buf.limit()];
-		buf.get(array);
-		
-		Text text = new Text( new String(array, "UTF8"));
-		return text;
-	}
-
-
-    @Override
-    public void map(KafkaETLKey key, BytesWritable val,
-            OutputCollector<LongWritable, Text> collector,
-            Reporter reporter) throws IOException {
-        
-         
-        byte[] bytes = KafkaETLUtils.getBytes(val);
-        
-        //check the checksum of message
-        Message message = new Message(bytes);
-        long checksum = key.getChecksum();
-        if (checksum != message.checksum()) 
-            throw new IOException ("Invalid message checksum " 
-                                            + message.checksum() + ". Expected " + key + ".");
-        Text data = getData (message);
-        _count ++;
-           
-        collector.collect(new LongWritable (_count), data);
-
-    }
-
-
-    @Override
-    public void configure(JobConf arg0) {
-        // TODO Auto-generated method stub
-        
-    }
-
-
-    @Override
-    public void close() throws IOException {
-        // TODO Auto-generated method stub
-        
-    }
-
-}
diff --git a/trunk/contrib/hadoop-consumer/test/test.properties b/trunk/contrib/hadoop-consumer/test/test.properties
deleted file mode 100644
index cdea8cc..0000000
--- a/trunk/contrib/hadoop-consumer/test/test.properties
+++ /dev/null
@@ -1,42 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# name of test topic
-kafka.etl.topic=SimpleTestEvent
-
-# hdfs location of jars
-hdfs.default.classpath.dir=/tmp/kafka/lib
-
-# number of test events to be generated
-event.count=1000
-
-# hadoop id and group
-hadoop.job.ugi=kafka,hadoop
-
-# kafka server uri
-kafka.server.uri=tcp://localhost:9092
-
-# hdfs location of input directory 
-input=/tmp/kafka/data
-
-# hdfs location of output directory
-output=/tmp/kafka/output
-
-# limit the number of events to be fetched;
-# value -1 means no limitation
-kafka.request.limit=-1
-
-# kafka parameters
-client.buffer.size=1048576
-client.so.timeout=60000
diff --git a/trunk/contrib/hadoop-producer/LICENSE b/trunk/contrib/hadoop-producer/LICENSE
deleted file mode 100644
index 6b0b127..0000000
--- a/trunk/contrib/hadoop-producer/LICENSE
+++ /dev/null
@@ -1,203 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
diff --git a/trunk/contrib/hadoop-producer/README.md b/trunk/contrib/hadoop-producer/README.md
deleted file mode 100644
index 6e57fde..0000000
--- a/trunk/contrib/hadoop-producer/README.md
+++ /dev/null
@@ -1,147 +0,0 @@
-Hadoop to Kafka Bridge
-======================
-
-What's new?
------------
-
-* Now supports Kafka's software load balancer (Kafka URIs are specified with
-  kafka+zk as the scheme, as described below)
-* Supports Kafka 0.7. Now uses the new Producer API, rather than the legacy
-  SyncProducer.
-
-What is it?
------------
-
-The Hadoop to Kafka bridge is a way to publish data from Hadoop to Kafka. There
-are two possible mechanisms, from easiest to most involved: writing a Pig
-script that publishes messages in Avro format, or rolling your own job using
-the Kafka `OutputFormat`.
-
-Note that there are no write-once semantics: any client of the data must handle
-messages in an idempotent manner. That is, because of node failures and
-Hadoop's failure recovery, it's possible that the same message is published
-multiple times in the same push.
-
-How do I use it?
-----------------
-
-With this bridge, Kafka topics are URIs and are specified in one of two
-formats: `kafka+zk://<zk-path>#<kafka-topic>`, which uses the software load
-balancer, or the legacy `kafka://<kafka-server>/<kafka-topic>` to connect to a
-specific Kafka broker.
-
-### Pig ###
-
-The Pig bridge writes data in binary Avro format, with one message created per input
-row. To push data via Kafka, store to the Kafka URI using `AvroKafkaStorage`
-with the Avro schema as its first argument. You'll need to register the
-appropriate Kafka JARs. Here is what an example Pig script looks like:
-
-    REGISTER hadoop-producer_2.8.0-0.7.0.jar;
-    REGISTER avro-1.4.0.jar;
-    REGISTER piggybank.jar;
-    REGISTER kafka-0.7.0.jar;
-    REGISTER jackson-core-asl-1.5.5.jar;
-    REGISTER jackson-mapper-asl-1.5.5.jar;
-    REGISTER zkclient-20110412.jar;
-    REGISTER zookeeper-3.3.4.jar;
-    REGISTER scala-library.jar;
-
-    member_info = LOAD 'member_info.tsv' as (member_id : int, name : chararray);
-    names = FOREACH member_info GENERATE name;
-    STORE names INTO 'kafka+zk://my-zookeeper:2181/kafka#member_info' USING kafka.bridge.AvroKafkaStorage('"string"');
-
-That's it! The Pig StoreFunc makes use of AvroStorage in Piggybank to convert
-from Pig's data model to the specified Avro schema.
-
-Further, multi-store is possible with KafkaStorage, so you can easily write to
-multiple topics and brokers in the same job:
-
-    SPLIT member_info INTO early_adopters IF member_id < 1000, others IF member_id >= 1000;
-    STORE early_adopters INTO 'kafka+zk://my-zookeeper:2181/kafka#early_adopters' USING AvroKafkaStorage('$schema');
-    STORE others INTO 'kafka://my-broker:9092,my-broker2:9092/others' USING AvroKafkaStorage('$schema');
-
-### KafkaOutputFormat ###
-
-KafkaOutputFormat is a Hadoop OutputFormat for publishing data via Kafka. It
-uses the newer 0.20 mapreduce APIs and simply pushes bytes (i.e.,
-BytesWritable). This is a lower-level method of publishing data, as it allows
-you to precisely control output.
-
-Here is an example that publishes some input text. With KafkaOutputFormat, the
-key is a NullWritable and is ignored; only values are published. Speculative
-execution is turned off by the OutputFormat.
-
-    import kafka.bridge.hadoop.KafkaOutputFormat;
-    
-    import org.apache.hadoop.fs.Path;
-    import org.apache.hadoop.io.BytesWritable;
-    import org.apache.hadoop.io.NullWritable;
-    import org.apache.hadoop.io.Text;
-    import org.apache.hadoop.mapreduce.Job;
-    import org.apache.hadoop.mapreduce.Mapper;
-    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
-    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
-    
-    import java.io.IOException;
-    
-    public class TextPublisher
-    {
-      public static void main(String[] args) throws Exception
-      {
-        if (args.length != 2) {
-          System.err.println("usage: <input path> <kafka output url>");
-          return;
-        }
-    
-        Job job = new Job();
-    
-        job.setJarByClass(TextPublisher.class);
-        job.setOutputKeyClass(NullWritable.class);
-        job.setOutputValueClass(BytesWritable.class);
-        job.setInputFormatClass(TextInputFormat.class);
-        job.setOutputFormatClass(KafkaOutputFormat.class);
-    
-        job.setMapperClass(TheMapper.class);
-        job.setNumReduceTasks(0);
-    
-        FileInputFormat.addInputPath(job, new Path(args[0]));
-        KafkaOutputFormat.setOutputPath(job, new Path(args[1]));
-    
-        if (!job.waitForCompletion(true)) {
-          throw new RuntimeException("Job failed!");
-        }
-      }
-    
-      public static class TheMapper extends Mapper<Object, Object, NullWritable, BytesWritable>
-      {
-        @Override
-        protected void map(Object key, Object value, Context context) throws IOException, InterruptedException
-        {
-          context.write(NullWritable.get(), new BytesWritable(((Text) value).getBytes()));
-        }
-      }
-    }
-
-What can I tune?
-----------------
-
-Normally, you needn't change any of these parameters:
-
-* kafka.output.queue_size: Bytes to queue in memory before pushing to the Kafka
-  producer (i.e., the batch size). Default is 10*1024*1024 (10MB).
-* kafka.output.connect_timeout: Connection timeout in milliseconds (see Kafka
-  producer docs). Default is 30*1000 (30s).
-* kafka.output.reconnect_timeout: Milliseconds to wait until attempting
-  reconnection (see Kafka producer docs). Default is 1000 (1s).
-* kafka.output.bufsize: Producer buffer size in bytes (see Kafka producer
-  docs). Default is 64*1024 (64KB). 
-* kafka.output.max_msgsize: Maximum message size in bytes (see Kafka producer
-  docs). Default is 1024*1024 (1MB).
-* kafka.output.compression_codec: The compression codec to use (see Kafka producer
-  docs). Default is 0 (no compression).
-
-For easier debugging, the above values as well as the Kafka broker information
-(either kafka.zk.connect or kafka.broker.list), the topic (kafka.output.topic),
-and the schema (kafka.output.schema) are injected into the job's configuration.
-
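-As a minimal sketch (not part of the bridge itself), the kafka.output.*
-parameters above can be overridden on the job configuration before the job is
-submitted; the ZooKeeper host and topic below are placeholders:
-
-    Job job = new Job();
-    org.apache.hadoop.conf.Configuration conf = job.getConfiguration();
-
-    // illustrative overrides; keys match the kafka.output.* parameters above
-    conf.setInt("kafka.output.queue_size", 10 * 1024 * 1024);  // ~10MB batches
-    conf.setInt("kafka.output.connect_timeout", 30 * 1000);    // 30s connect timeout
-    conf.setInt("kafka.output.bufsize", 64 * 1024);            // 64KB producer buffer
-    conf.setInt("kafka.output.max_msgsize", 1024 * 1024);      // 1MB max message size
-    conf.setInt("kafka.output.compression_codec", 0);          // 0 = no compression
-
-    // point the job at a (placeholder) Kafka URI, as in the TextPublisher example above
-    KafkaOutputFormat.setOutputPath(job, new Path("kafka+zk://my-zookeeper:2181/kafka#my_topic"));
-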
diff --git a/trunk/contrib/hadoop-producer/lib/piggybank.jar b/trunk/contrib/hadoop-producer/lib/piggybank.jar
deleted file mode 100644
index cbd46e0..0000000
--- a/trunk/contrib/hadoop-producer/lib/piggybank.jar
+++ /dev/null
Binary files differ
diff --git a/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/examples/TextPublisher.java b/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/examples/TextPublisher.java
deleted file mode 100644
index 5acbcee..0000000
--- a/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/examples/TextPublisher.java
+++ /dev/null
@@ -1,68 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.bridge.examples;
-
-
-import java.io.IOException;
-import kafka.bridge.hadoop.KafkaOutputFormat;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.NullWritable;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.Mapper;
-import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
-import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
-
-public class TextPublisher
-{
-  public static void main(String[] args) throws Exception
-  {
-    if (args.length != 2) {
-      System.err.println("usage: <input path> <kafka output url>");
-      return;
-    }
-
-    Job job = new Job();
-
-    job.setJarByClass(TextPublisher.class);
-    job.setOutputKeyClass(NullWritable.class);
-    job.setOutputValueClass(BytesWritable.class);
-    job.setInputFormatClass(TextInputFormat.class);
-    job.setOutputFormatClass(KafkaOutputFormat.class);
-
-    job.setMapperClass(TheMapper.class);
-    job.setNumReduceTasks(0);
-
-    FileInputFormat.addInputPath(job, new Path(args[0]));
-    KafkaOutputFormat.setOutputPath(job, new Path(args[1]));
-
-    if (!job.waitForCompletion(true)) {
-      throw new RuntimeException("Job failed!");
-    }
-  }
-
-  public static class TheMapper extends Mapper<Object, Object, NullWritable, BytesWritable>
-  {
-    @Override
-    protected void map(Object key, Object value, Context context) throws IOException, InterruptedException
-    {
-      context.write(NullWritable.get(), new BytesWritable(((Text) value).getBytes()));
-    }
-  }
-}
-
diff --git a/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/hadoop/KafkaOutputFormat.java b/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/hadoop/KafkaOutputFormat.java
deleted file mode 100644
index 4b9343f..0000000
--- a/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/hadoop/KafkaOutputFormat.java
+++ /dev/null
@@ -1,173 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.bridge.hadoop;
-
-
-import java.io.IOException;
-import java.net.URI;
-import java.util.Properties;
-import kafka.javaapi.producer.Producer;
-import kafka.message.Message;
-import kafka.producer.ProducerConfig;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.NullWritable;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.JobContext;
-import org.apache.hadoop.mapreduce.OutputCommitter;
-import org.apache.hadoop.mapreduce.OutputFormat;
-import org.apache.hadoop.mapreduce.RecordWriter;
-import org.apache.hadoop.mapreduce.TaskAttemptContext;
-import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
-import org.apache.log4j.Logger;
-
-public class KafkaOutputFormat<W extends BytesWritable> extends OutputFormat<NullWritable, W>
-{
-  private Logger log = Logger.getLogger(KafkaOutputFormat.class);
-
-  public static final String KAFKA_URL = "kafka.output.url";
-  /** Bytes to buffer before the OutputFormat does a send */
-  public static final int KAFKA_QUEUE_SIZE = 10*1024*1024;
-
-  /** Default value for Kafka's connect.timeout.ms */
-  public static final int KAFKA_PRODUCER_CONNECT_TIMEOUT = 30*1000;
-  /** Default value for Kafka's reconnect.interval*/
-  public static final int KAFKA_PRODUCER_RECONNECT_INTERVAL = 1000;
-  /** Default value for Kafka's buffer.size */
-  public static final int KAFKA_PRODUCER_BUFFER_SIZE = 64*1024;
-  /** Default value for Kafka's max.message.size */
-  public static final int KAFKA_PRODUCER_MAX_MESSAGE_SIZE = 1024*1024;
-  /** Default value for Kafka's producer.type */
-  public static final String KAFKA_PRODUCER_PRODUCER_TYPE = "sync";
-  /** Default value for Kafka's compression.codec */
-  public static final int KAFKA_PRODUCER_COMPRESSION_CODEC = 0;
-
-  public KafkaOutputFormat()
-  {
-    super();
-  }
-
-  public static void setOutputPath(Job job, Path outputUrl)
-  {
-    job.getConfiguration().set(KafkaOutputFormat.KAFKA_URL, outputUrl.toString());
-
-    job.getConfiguration().setBoolean("mapred.map.tasks.speculative.execution", false);
-    job.getConfiguration().setBoolean("mapred.reduce.tasks.speculative.execution", false);
-  }
-
-  public static Path getOutputPath(JobContext job)
-  {
-    String name = job.getConfiguration().get(KafkaOutputFormat.KAFKA_URL);
-    return name == null ? null : new Path(name);
-  }
-
-  @Override
-  public void checkOutputSpecs(JobContext jobContext) throws IOException, InterruptedException
-  {
-  }
-
-  @Override
-  public OutputCommitter getOutputCommitter(TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException
-  {
-    // Is there a programmatic way to get the temp dir? I see it hardcoded everywhere in Hadoop, Hive, and Pig.
-    return new FileOutputCommitter(new Path("/tmp/" + taskAttemptContext.getTaskAttemptID().getJobID().toString()), taskAttemptContext);
-  }
-
-  @Override
-  public RecordWriter<NullWritable, W> getRecordWriter(TaskAttemptContext context) throws IOException, InterruptedException
-  {
-    Path outputPath = getOutputPath(context);
-    if (outputPath == null)
-      throw new IllegalArgumentException("no kafka output url specified");
-    URI uri = URI.create(outputPath.toString());
-    Configuration job = context.getConfiguration();
-
-    Properties props = new Properties();
-    String topic;
-
-    final int queueSize = job.getInt("kafka.output.queue_size", KAFKA_QUEUE_SIZE);
-    final int timeout = job.getInt("kafka.output.connect_timeout", KAFKA_PRODUCER_CONNECT_TIMEOUT);
-    final int interval = job.getInt("kafka.output.reconnect_interval", KAFKA_PRODUCER_RECONNECT_INTERVAL);
-    final int bufSize = job.getInt("kafka.output.bufsize", KAFKA_PRODUCER_BUFFER_SIZE);
-    final int maxSize = job.getInt("kafka.output.max_msgsize", KAFKA_PRODUCER_MAX_MESSAGE_SIZE);
-    final String producerType = job.get("kafka.output.producer_type", KAFKA_PRODUCER_PRODUCER_TYPE);
-    final int compressionCodec = job.getInt("kafka.output.compression_codec", KAFKA_PRODUCER_COMPRESSION_CODEC);
-
-    job.setInt("kafka.output.queue_size", queueSize);
-    job.setInt("kafka.output.connect_timeout", timeout);
-    job.setInt("kafka.output.reconnect_interval", interval);
-    job.setInt("kafka.output.bufsize", bufSize);
-    job.setInt("kafka.output.max_msgsize", maxSize);
-    job.set("kafka.output.producer_type", producerType);
-    job.setInt("kafka.output.compression_codec", compressionCodec);
-
-    props.setProperty("producer.type", producerType);
-    props.setProperty("buffer.size", Integer.toString(bufSize));
-    props.setProperty("connect.timeout.ms", Integer.toString(timeout));
-    props.setProperty("reconnect.interval", Integer.toString(interval));
-    props.setProperty("max.message.size", Integer.toString(maxSize));
-    props.setProperty("compression.codec", Integer.toString(compressionCodec));
-
-    if (uri.getScheme().equals("kafka+zk")) {
-      // Software load balancer:
-      //  URL: kafka+zk://<zk connect path>#<kafka topic>
-      //  e.g. kafka+zk://kafka-zk:2181/kafka#foobar
-
-      String zkConnect = uri.getAuthority() + uri.getPath();
-
-      props.setProperty("zk.connect", zkConnect);
-      job.set("kafka.zk.connect", zkConnect);
-
-      topic = uri.getFragment();
-      if (topic == null)
-        throw new IllegalArgumentException("no topic specified in kafka uri fragment");
-
-      log.info(String.format("using kafka zk.connect %s (topic %s)", zkConnect, topic));
-    } else if (uri.getScheme().equals("kafka")) {
-      // using the legacy direct broker list
-      // URL: kafka://<kafka host>/<topic>
-      // e.g. kafka://kafka-server:9000,kafka-server2:9000/foobar
-
-      // Just enumerate broker_ids, as it really doesn't matter what they are as long as they're unique
-      // (KAFKA-258 will remove the broker_id requirement)
-      StringBuilder brokerListBuilder = new StringBuilder();
-      String delim = "";
-      int brokerId = 0;
-      for (String serverPort : uri.getAuthority().split(",")) {
-        brokerListBuilder.append(delim).append(String.format("%d:%s", brokerId, serverPort));
-        delim = ",";
-        brokerId++;
-      }
-      String brokerList = brokerListBuilder.toString();
-
-      props.setProperty("broker.list", brokerList);
-      job.set("kafka.broker.list", brokerList);
-
-      if (uri.getPath() == null || uri.getPath().length() <= 1)
-        throw new IllegalArgumentException("no topic specified in kafka uri");
-
-      topic = uri.getPath().substring(1);             // ignore the initial '/' in the path
-      job.set("kafka.output.topic", topic);
-      log.info(String.format("using kafka broker %s (topic %s)", brokerList, topic));
-    } else
-      throw new IllegalArgumentException("missing scheme from kafka uri (must be kafka:// or kafka+zk://)");
-
-    Producer<Integer, Message> producer = new Producer<Integer, Message>(new ProducerConfig(props));
-    return new KafkaRecordWriter<W>(producer, topic, queueSize);
-  }
-}
diff --git a/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/hadoop/KafkaRecordWriter.java b/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/hadoop/KafkaRecordWriter.java
deleted file mode 100644
index af9c650..0000000
--- a/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/hadoop/KafkaRecordWriter.java
+++ /dev/null
@@ -1,74 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.bridge.hadoop;
-
-
-import java.io.IOException;
-import java.util.LinkedList;
-import java.util.List;
-import kafka.javaapi.producer.Producer;
-import kafka.javaapi.producer.ProducerData;
-import kafka.message.Message;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.NullWritable;
-import org.apache.hadoop.mapreduce.RecordWriter;
-import org.apache.hadoop.mapreduce.TaskAttemptContext;
-
-public class KafkaRecordWriter<W extends BytesWritable> extends RecordWriter<NullWritable, W>
-{
-  protected Producer<Integer, Message> producer;
-  protected String topic;
-
-  protected List<ProducerData<Integer, Message>> msgList = new LinkedList<ProducerData<Integer, Message>>();
-  protected int totalSize = 0;
-  protected int queueSize;
-
-  public KafkaRecordWriter(Producer<Integer, Message> producer, String topic, int queueSize)
-  {
-    this.producer = producer;
-    this.topic = topic;
-    this.queueSize = queueSize;
-  }
-
-  protected void sendMsgList()
-  {
-    if (msgList.size() > 0) {
-      producer.send(msgList);
-      msgList.clear();
-      totalSize = 0;
-    }
-  }
-
-  @Override
-  public void write(NullWritable key, BytesWritable value) throws IOException, InterruptedException
-  {
-    Message msg = new Message(value.getBytes());
-    msgList.add(new ProducerData<Integer, Message>(this.topic, msg));
-    totalSize += msg.size();
-
-    // MultiProducerRequest only supports sending up to Short.MAX_VALUE messages in one batch
-    if (totalSize > queueSize || msgList.size() >= Short.MAX_VALUE)
-      sendMsgList();
-  }
-
-  @Override
-  public void close(TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException
-  {
-    sendMsgList();
-    producer.close();
-  }
-}
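
The removed writer batches messages and flushes on two conditions; the policy can be stated without any Kafka or Hadoop types. A rough sketch (class name illustrative, not from this patch):

    // Illustrative sketch of the KafkaRecordWriter flush policy above: flush when the
    // accumulated payload size exceeds queueSize, or when the batch reaches
    // Short.MaxValue messages (the MultiProducerRequest per-batch limit).
    final class BatchPolicySketch(queueSize: Int) {
      private var totalSize = 0
      private var count = 0

      // Record one message of the given size; returns true when the batch should be flushed.
      def offer(messageSize: Int): Boolean = {
        totalSize += messageSize
        count += 1
        if (totalSize > queueSize || count >= Short.MaxValue) {
          totalSize = 0
          count = 0
          true
        } else false
      }
    }

In the writer above the same check drives sendMsgList(), and close() flushes whatever remains before closing the producer.
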
diff --git a/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/pig/AvroKafkaStorage.java b/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/pig/AvroKafkaStorage.java
deleted file mode 100644
index faa1950..0000000
--- a/trunk/contrib/hadoop-producer/src/main/java/kafka/bridge/pig/AvroKafkaStorage.java
+++ /dev/null
@@ -1,117 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.bridge.pig;
-
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.io.OutputStream;
-import kafka.bridge.hadoop.KafkaOutputFormat;
-import kafka.bridge.hadoop.KafkaRecordWriter;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.Encoder;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.NullWritable;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.OutputFormat;
-import org.apache.hadoop.mapreduce.RecordWriter;
-import org.apache.pig.ResourceSchema;
-import org.apache.pig.StoreFunc;
-import org.apache.pig.data.Tuple;
-import org.apache.pig.piggybank.storage.avro.PigAvroDatumWriter;
-import org.apache.pig.piggybank.storage.avro.PigSchema2Avro;
-
-public class AvroKafkaStorage extends StoreFunc
-{
-  protected KafkaRecordWriter writer;
-  protected org.apache.avro.Schema avroSchema;
-  protected PigAvroDatumWriter datumWriter;
-  protected Encoder encoder;
-  protected ByteArrayOutputStream os;
-
-  public AvroKafkaStorage(String schema)
-  {
-    this.avroSchema = org.apache.avro.Schema.parse(schema);
-  }
-
-  @Override
-  public OutputFormat getOutputFormat() throws IOException
-  {
-    return new KafkaOutputFormat();
-  }
-
-  @Override
-  public String relToAbsPathForStoreLocation(String location, Path curDir) throws IOException
-  {
-    return location;
-  }
-
-  @Override
-  public void setStoreLocation(String uri, Job job) throws IOException
-  {
-    KafkaOutputFormat.setOutputPath(job, new Path(uri));
-  }
-
-  @Override
-  public void prepareToWrite(RecordWriter writer) throws IOException
-  {
-    if (this.avroSchema == null)
-      throw new IllegalStateException("avroSchema shouldn't be null");
-
-    this.writer = (KafkaRecordWriter) writer;
-    this.datumWriter = new PigAvroDatumWriter(this.avroSchema);
-    this.os = new ByteArrayOutputStream();
-    this.encoder = new BinaryEncoder(this.os);
-  }
-
-  @Override
-  public void cleanupOnFailure(String location, Job job) throws IOException
-  {
-  }
-
-  @Override
-  public void setStoreFuncUDFContextSignature(String signature)
-  {
-  }
-
-  @Override
-  public void checkSchema(ResourceSchema schema) throws IOException
-  {
-    this.avroSchema = PigSchema2Avro.validateAndConvert(avroSchema, schema);
-  }
-
-  protected void writeEnvelope(OutputStream os, Encoder enc) throws IOException
-  {
-  }
-
-  @Override
-  public void putNext(Tuple tuple) throws IOException
-  {
-    os.reset();
-    writeEnvelope(os, this.encoder);
-    datumWriter.write(tuple, this.encoder);
-    this.encoder.flush();
-
-    try {
-      this.writer.write(NullWritable.get(), new BytesWritable(this.os.toByteArray()));
-    }
-    catch (InterruptedException e) {
-      throw new IOException(e);
-    }
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/Kafka.scala b/trunk/core/src/main/scala/kafka/Kafka.scala
deleted file mode 100644
index 367cf51..0000000
--- a/trunk/core/src/main/scala/kafka/Kafka.scala
+++ /dev/null
@@ -1,55 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka
-
-import server.{KafkaConfig, KafkaServerStartable, KafkaServer}
-import utils.{Utils, Logging}
-
-
-object Kafka extends Logging {
-
-  def main(args: Array[String]): Unit = {
-
-    if (args.length != 1) {
-      println("USAGE: java [options] %s server.properties".format(classOf[KafkaServer].getSimpleName()))
-      System.exit(1)
-    }
-  
-    try {
-      val props = Utils.loadProps(args(0))
-      val serverConfig = new KafkaConfig(props)
-
-      val kafkaServerStartable = new KafkaServerStartable(serverConfig)
-
-      // attach shutdown handler to catch control-c
-      Runtime.getRuntime().addShutdownHook(new Thread() {
-        override def run() = {
-          kafkaServerStartable.shutdown
-          kafkaServerStartable.awaitShutdown
-        }
-      });
-
-      kafkaServerStartable.startup
-      kafkaServerStartable.awaitShutdown
-    }
-    catch {
-      case e => fatal(e)
-    }
-    System.exit(0)
-  }
-}
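
The entry point above follows a common start/await pattern: install a shutdown hook, start the server, then block until the hook runs. A self-contained sketch of that pattern, with the broker replaced by a stub so the snippet stands alone (names illustrative):

    import java.util.concurrent.CountDownLatch

    object ServerMainSketch {
      // Stub standing in for KafkaServerStartable: startup, shutdown, awaitShutdown.
      class StubServer {
        private val latch = new CountDownLatch(1)
        def startup(): Unit = println("started")
        def shutdown(): Unit = { println("shutting down"); latch.countDown() }
        def awaitShutdown(): Unit = latch.await()
      }

      def main(args: Array[String]): Unit = {
        val server = new StubServer
        // Ctrl-C (SIGINT/SIGTERM) runs the hook, which unblocks awaitShutdown below.
        Runtime.getRuntime.addShutdownHook(new Thread() {
          override def run(): Unit = server.shutdown()
        })
        server.startup()
        server.awaitShutdown()
      }
    }
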
diff --git a/trunk/core/src/main/scala/kafka/api/FetchRequest.scala b/trunk/core/src/main/scala/kafka/api/FetchRequest.scala
deleted file mode 100644
index 50c46a1..0000000
--- a/trunk/core/src/main/scala/kafka/api/FetchRequest.scala
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.api
-
-import java.nio._
-import kafka.network._
-import kafka.utils._
-
-object FetchRequest {
-    
-  def readFrom(buffer: ByteBuffer): FetchRequest = {
-    val topic = Utils.readShortString(buffer, "UTF-8")
-    val partition = buffer.getInt()
-    val offset = buffer.getLong()
-    val size = buffer.getInt()
-    new FetchRequest(topic, partition, offset, size)
-  }
-}
-
-class FetchRequest(val topic: String,
-                   val partition: Int,
-                   val offset: Long, 
-                   val maxSize: Int) extends Request(RequestKeys.Fetch) {
-  
-  def writeTo(buffer: ByteBuffer) {
-    Utils.writeShortString(buffer, topic, "UTF-8")
-    buffer.putInt(partition)
-    buffer.putLong(offset)
-    buffer.putInt(maxSize)
-  }
-  
-  def sizeInBytes(): Int = 2 + topic.length + 4 + 8 + 4
-
-  override def toString(): String= "FetchRequest(topic:" + topic + ", part:" + partition +" offset:" + offset +
-    " maxSize:" + maxSize + ")"
-}
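
The request body written by FetchRequest above is a short-prefixed topic string followed by three fixed-width fields; sizeInBytes (2 + topic.length + 4 + 8 + 4) assumes the topic name is ASCII, so character count equals byte count. A self-contained sketch of the same layout using plain NIO (names illustrative, not from this patch):

    import java.nio.ByteBuffer

    object FetchRequestLayoutSketch {
      // [int16 topic length][topic bytes, UTF-8][int32 partition][int64 offset][int32 maxSize]
      def encode(topic: String, partition: Int, offset: Long, maxSize: Int): ByteBuffer = {
        val topicBytes = topic.getBytes("UTF-8")
        val buf = ByteBuffer.allocate(2 + topicBytes.length + 4 + 8 + 4)
        buf.putShort(topicBytes.length.toShort)
        buf.put(topicBytes)
        buf.putInt(partition)
        buf.putLong(offset)
        buf.putInt(maxSize)
        buf.rewind()
        buf
      }

      def main(args: Array[String]): Unit =
        println(encode("foobar", 0, 0L, 1024 * 1024).remaining) // 2 + 6 + 4 + 8 + 4 = 24
    }
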
diff --git a/trunk/core/src/main/scala/kafka/api/MultiFetchRequest.scala b/trunk/core/src/main/scala/kafka/api/MultiFetchRequest.scala
deleted file mode 100644
index 6ecc619..0000000
--- a/trunk/core/src/main/scala/kafka/api/MultiFetchRequest.scala
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.api
-
-import java.nio._
-import kafka.network._
-
-object MultiFetchRequest {
-  def readFrom(buffer: ByteBuffer): MultiFetchRequest = {
-    val count = buffer.getShort
-    val fetches = new Array[FetchRequest](count)
-    for(i <- 0 until fetches.length)
-      fetches(i) = FetchRequest.readFrom(buffer)
-    new MultiFetchRequest(fetches)
-  }
-}
-
-class MultiFetchRequest(val fetches: Array[FetchRequest]) extends Request(RequestKeys.MultiFetch) {
-  def writeTo(buffer: ByteBuffer) {
-    if(fetches.length > Short.MaxValue)
-      throw new IllegalArgumentException("Number of requests in MultiFetchRequest exceeds " + Short.MaxValue + ".")
-    buffer.putShort(fetches.length.toShort)
-    for(fetch <- fetches)
-      fetch.writeTo(buffer)
-  }
-  
-  def sizeInBytes: Int = {
-    var size = 2
-    for(fetch <- fetches)
-      size += fetch.sizeInBytes
-    size
-  }
-
-
-  override def toString(): String = {
-    val buffer = new StringBuffer
-    for(fetch <- fetches) {
-      buffer.append(fetch.toString)
-      buffer.append(",")
-    }
-    buffer.toString
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/api/MultiFetchResponse.scala b/trunk/core/src/main/scala/kafka/api/MultiFetchResponse.scala
deleted file mode 100644
index 9eefa02..0000000
--- a/trunk/core/src/main/scala/kafka/api/MultiFetchResponse.scala
+++ /dev/null
@@ -1,52 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.api
-
-import java.nio._
-import collection.mutable
-import kafka.utils.IteratorTemplate
-import kafka.message._
-
-class MultiFetchResponse(val buffer: ByteBuffer, val numSets: Int, val offsets: Array[Long]) extends Iterable[ByteBufferMessageSet] {
-  private val messageSets = new mutable.ListBuffer[ByteBufferMessageSet]
-  
-  for(i <- 0 until numSets) {
-    val size = buffer.getInt()
-    val errorCode: Int = buffer.getShort()
-    val copy = buffer.slice()
-    val payloadSize = size - 2
-    copy.limit(payloadSize)
-    buffer.position(buffer.position + payloadSize)
-    messageSets += new ByteBufferMessageSet(copy, offsets(i), errorCode)
-  }
- 
-  def iterator : Iterator[ByteBufferMessageSet] = {
-    new IteratorTemplate[ByteBufferMessageSet] {
-      val iter = messageSets.iterator
-
-      override def makeNext(): ByteBufferMessageSet = {
-        if(iter.hasNext)
-          iter.next
-        else
-          return allDone
-      }
-    }
-  }
-
-  override def toString() = this.messageSets.toString
-}
diff --git a/trunk/core/src/main/scala/kafka/api/MultiProducerRequest.scala b/trunk/core/src/main/scala/kafka/api/MultiProducerRequest.scala
deleted file mode 100644
index 84c510c..0000000
--- a/trunk/core/src/main/scala/kafka/api/MultiProducerRequest.scala
+++ /dev/null
@@ -1,57 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.api
-
-import java.nio.ByteBuffer
-import kafka.network.Request
-
-object MultiProducerRequest {
-  def readFrom(buffer: ByteBuffer): MultiProducerRequest = {
-    val count = buffer.getShort
-    val produces = new Array[ProducerRequest](count)
-    for(i <- 0 until produces.length)
-      produces(i) = ProducerRequest.readFrom(buffer)
-    new MultiProducerRequest(produces)
-  }
-}
-
-class MultiProducerRequest(val produces: Array[ProducerRequest]) extends Request(RequestKeys.MultiProduce) {
-  def writeTo(buffer: ByteBuffer) {
-    if(produces.length > Short.MaxValue)
-      throw new IllegalArgumentException("Number of requests in MultiProducer exceeds " + Short.MaxValue + ".")    
-    buffer.putShort(produces.length.toShort)
-    for(produce <- produces)
-      produce.writeTo(buffer)
-  }
-
-  def sizeInBytes: Int = {
-    var size = 2
-    for(produce <- produces)
-      size += produce.sizeInBytes
-    size
-  }
-
-  override def toString(): String = {
-    val buffer = new StringBuffer
-    for(produce <- produces) {
-      buffer.append(produce.toString)
-      buffer.append(",")
-    }
-    buffer.toString
-  }  
-}
diff --git a/trunk/core/src/main/scala/kafka/api/OffsetRequest.scala b/trunk/core/src/main/scala/kafka/api/OffsetRequest.scala
deleted file mode 100644
index 747d205..0000000
--- a/trunk/core/src/main/scala/kafka/api/OffsetRequest.scala
+++ /dev/null
@@ -1,100 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.api
-
-import java.nio.ByteBuffer
-import kafka.utils.{nonthreadsafe, Utils}
-import kafka.network.{Send, Request}
-import java.nio.channels.GatheringByteChannel
-import kafka.common.ErrorMapping
-
-object OffsetRequest {
-  val SmallestTimeString = "smallest"
-  val LargestTimeString = "largest"
-  val LatestTime = -1L
-  val EarliestTime = -2L
-
-  def readFrom(buffer: ByteBuffer): OffsetRequest = {
-    val topic = Utils.readShortString(buffer, "UTF-8")
-    val partition = buffer.getInt()
-    val offset = buffer.getLong
-    val maxNumOffsets = buffer.getInt
-    new OffsetRequest(topic, partition, offset, maxNumOffsets)
-  }
-
-  def serializeOffsetArray(offsets: Array[Long]): ByteBuffer = {
-    val size = 4 + 8 * offsets.length
-    val buffer = ByteBuffer.allocate(size)
-    buffer.putInt(offsets.length)
-    for (i <- 0 until offsets.length)
-      buffer.putLong(offsets(i))
-    buffer.rewind
-    buffer
-  }
-
-  def deserializeOffsetArray(buffer: ByteBuffer): Array[Long] = {
-    val size = buffer.getInt
-    val offsets = new Array[Long](size)
-    for (i <- 0 until offsets.length)
-      offsets(i) = buffer.getLong
-    offsets
-  }
-}
-
-class OffsetRequest(val topic: String,
-                    val partition: Int,
-                    val time: Long,
-                    val maxNumOffsets: Int) extends Request(RequestKeys.Offsets) {
-
-  def writeTo(buffer: ByteBuffer) {
-    Utils.writeShortString(buffer, topic, "UTF-8")
-    buffer.putInt(partition)
-    buffer.putLong(time)
-    buffer.putInt(maxNumOffsets)
-  }
-
-  def sizeInBytes(): Int = 2 + topic.length + 4 + 8 + 4
-
-  override def toString(): String= "OffsetRequest(topic:" + topic + ", part:" + partition + ", time:" + time +
-          ", maxNumOffsets:" + maxNumOffsets + ")"
-}
-
-@nonthreadsafe
-private[kafka] class OffsetArraySend(offsets: Array[Long]) extends Send {
-  private var size: Long = offsets.foldLeft(4)((sum, _) => sum + 8)
-  private val header = ByteBuffer.allocate(6)
-  header.putInt(size.asInstanceOf[Int] + 2)
-  header.putShort(ErrorMapping.NoError.asInstanceOf[Short])
-  header.rewind()
-  private val contentBuffer = OffsetRequest.serializeOffsetArray(offsets)
-
-  var complete: Boolean = false
-
-  def writeTo(channel: GatheringByteChannel): Int = {
-    expectIncomplete()
-    var written = 0
-    if(header.hasRemaining)
-      written += channel.write(header)
-    if(!header.hasRemaining && contentBuffer.hasRemaining)
-      written += channel.write(contentBuffer)
-
-    if(!contentBuffer.hasRemaining)
-      complete = true
-    written
-  }
-}
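
The offsets payload used by OffsetRequest above is an int32 count followed by that many int64 offsets. A self-contained round trip of the same layout (names illustrative; no Kafka classes assumed):

    import java.nio.ByteBuffer

    object OffsetArraySketch {
      def serialize(offsets: Array[Long]): ByteBuffer = {
        val buf = ByteBuffer.allocate(4 + 8 * offsets.length)
        buf.putInt(offsets.length)
        offsets.foreach(o => buf.putLong(o))
        buf.rewind()
        buf
      }

      def deserialize(buf: ByteBuffer): Array[Long] =
        Array.fill(buf.getInt())(buf.getLong())

      def main(args: Array[String]): Unit = {
        val roundTripped = deserialize(serialize(Array(0L, 512L, 1024L)))
        println(roundTripped.mkString(",")) // 0,512,1024
      }
    }
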
diff --git a/trunk/core/src/main/scala/kafka/api/ProducerRequest.scala b/trunk/core/src/main/scala/kafka/api/ProducerRequest.scala
deleted file mode 100644
index 9574dce..0000000
--- a/trunk/core/src/main/scala/kafka/api/ProducerRequest.scala
+++ /dev/null
@@ -1,83 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.api
-
-import java.nio._
-import kafka.message._
-import kafka.network._
-import kafka.utils._
-
-object ProducerRequest {
-  val RandomPartition = -1
-  
-  def readFrom(buffer: ByteBuffer): ProducerRequest = {
-    val topic = Utils.readShortString(buffer, "UTF-8")
-    val partition = buffer.getInt
-    val messageSetSize = buffer.getInt
-    val messageSetBuffer = buffer.slice()
-    messageSetBuffer.limit(messageSetSize)
-    buffer.position(buffer.position + messageSetSize)
-    new ProducerRequest(topic, partition, new ByteBufferMessageSet(messageSetBuffer))
-  }
-}
-
-class ProducerRequest(val topic: String,
-                      val partition: Int,
-                      val messages: ByteBufferMessageSet) extends Request(RequestKeys.Produce) {
-
-  def writeTo(buffer: ByteBuffer) {
-    Utils.writeShortString(buffer, topic, "UTF-8")
-    buffer.putInt(partition)
-    buffer.putInt(messages.serialized.limit)
-    buffer.put(messages.serialized)
-    messages.serialized.rewind
-  }
-  
-  def sizeInBytes(): Int = 2 + topic.length + 4 + 4 + messages.sizeInBytes.asInstanceOf[Int]
-
-  def getTranslatedPartition(randomSelector: String => Int): Int = {
-    if (partition == ProducerRequest.RandomPartition)
-      return randomSelector(topic)
-    else 
-      return partition
-  }
-
-  override def toString: String = {
-    val builder = new StringBuilder()
-    builder.append("ProducerRequest(")
-    builder.append(topic + ",")
-    builder.append(partition + ",")
-    builder.append(messages.sizeInBytes)
-    builder.append(")")
-    builder.toString
-  }
-
-  override def equals(other: Any): Boolean = {
-    other match {
-      case that: ProducerRequest =>
-        (that canEqual this) && topic == that.topic && partition == that.partition &&
-                messages.equals(that.messages) 
-      case _ => false
-    }
-  }
-
-  def canEqual(other: Any): Boolean = other.isInstanceOf[ProducerRequest]
-
-  override def hashCode: Int = 31 + (17 * partition) + topic.hashCode + messages.hashCode
-
-}
diff --git a/trunk/core/src/main/scala/kafka/api/RequestKeys.scala b/trunk/core/src/main/scala/kafka/api/RequestKeys.scala
deleted file mode 100644
index 3e7e57d..0000000
--- a/trunk/core/src/main/scala/kafka/api/RequestKeys.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.api
-
-object RequestKeys {
-  val Produce: Short = 0
-  val Fetch: Short = 1
-  val MultiFetch: Short = 2
-  val MultiProduce: Short = 3
-  val Offsets: Short = 4
-}
diff --git a/trunk/core/src/main/scala/kafka/cluster/Broker.scala b/trunk/core/src/main/scala/kafka/cluster/Broker.scala
deleted file mode 100644
index be44b48..0000000
--- a/trunk/core/src/main/scala/kafka/cluster/Broker.scala
+++ /dev/null
@@ -1,48 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.cluster
-
-import kafka.utils._
-
-/**
- * A Kafka broker
- */
-private[kafka] object Broker {
-  def createBroker(id: Int, brokerInfoString: String): Broker = {
-    val brokerInfo = brokerInfoString.split(":")
-    new Broker(id, brokerInfo(0), brokerInfo(1), brokerInfo(2).toInt)
-  }
-}
-
-private[kafka] class Broker(val id: Int, val creatorId: String, val host: String, val port: Int) {
-  
-  override def toString(): String = new String("id:" + id + ",creatorId:" + creatorId + ",host:" + host + ",port:" + port)
-
-  def getZKString(): String = new String(creatorId + ":" + host + ":" + port)
-  
-  override def equals(obj: Any): Boolean = {
-    obj match {
-      case null => false
-      case n: Broker => id == n.id && host == n.host && port == n.port
-      case _ => false
-    }
-  }
-  
-  override def hashCode(): Int = Utils.hashcode(id, host, port)
-  
-}
diff --git a/trunk/core/src/main/scala/kafka/cluster/Cluster.scala b/trunk/core/src/main/scala/kafka/cluster/Cluster.scala
deleted file mode 100644
index 992c54e..0000000
--- a/trunk/core/src/main/scala/kafka/cluster/Cluster.scala
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.cluster
-
-import scala.collection._
-
-/**
- * The set of active brokers in the cluster
- */
-private[kafka] class Cluster {
-  
-  private val brokers = new mutable.HashMap[Int, Broker]
-  
-  def this(brokerList: Iterable[Broker]) {
-    this()
-	  for(broker <- brokerList)
-      brokers.put(broker.id, broker)
-  }
-
-  def getBroker(id: Int): Option[Broker] = brokers.get(id)
-  
-  def add(broker: Broker) = brokers.put(broker.id, broker)
-  
-  def remove(id: Int) = brokers.remove(id)
-  
-  def size = brokers.size
-  
-  override def toString(): String = 
-    "Cluster(" + brokers.values.mkString(", ") + ")"  
-}
diff --git a/trunk/core/src/main/scala/kafka/cluster/Partition.scala b/trunk/core/src/main/scala/kafka/cluster/Partition.scala
deleted file mode 100644
index 5d79b6c..0000000
--- a/trunk/core/src/main/scala/kafka/cluster/Partition.scala
+++ /dev/null
@@ -1,57 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.cluster
-
-object Partition {
-  def parse(s: String): Partition = {
-    val pieces = s.split("-")
-    if(pieces.length != 2)
-      throw new IllegalArgumentException("Expected name in the form x-y.")
-    new Partition(pieces(0).toInt, pieces(1).toInt)
-  }
-}
-
-class Partition(val brokerId: Int, val partId: Int) extends Ordered[Partition] {
-
-  def this(name: String) = {
-    this(1, 1)
-  }
-  
-  def name = brokerId + "-" + partId
-  
-  override def toString(): String = name
-
-  def compare(that: Partition) =
-    if (this.brokerId == that.brokerId)
-      this.partId - that.partId
-    else
-      this.brokerId - that.brokerId
-
-  override def equals(other: Any): Boolean = {
-    other match {
-      case that: Partition =>
-        (that canEqual this) && brokerId == that.brokerId && partId == that.partId
-      case _ => false
-    }
-  }
-
-  def canEqual(other: Any): Boolean = other.isInstanceOf[Partition]
-
-  override def hashCode: Int = 31 * (17 + brokerId) + partId
-
-}
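
Partition names have the form "<brokerId>-<partId>" and sort by broker id first, then partition id. A self-contained sketch of the same parsing and ordering (names illustrative, not from this patch):

    object PartitionOrderSketch {
      final case class Part(brokerId: Int, partId: Int) extends Ordered[Part] {
        def name: String = brokerId + "-" + partId
        def compare(that: Part): Int =
          if (brokerId == that.brokerId) partId - that.partId
          else brokerId - that.brokerId
      }

      def parse(s: String): Part = {
        val pieces = s.split("-")
        require(pieces.length == 2, "Expected name in the form x-y.")
        Part(pieces(0).toInt, pieces(1).toInt)
      }

      def main(args: Array[String]): Unit = {
        val sorted = List("1-0", "0-1", "0-0").map(parse).sorted
        println(sorted.map(_.name).mkString(", ")) // 0-0, 0-1, 1-0
      }
    }
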
diff --git a/trunk/core/src/main/scala/kafka/common/ConsumerReblanceFailedException.scala b/trunk/core/src/main/scala/kafka/common/ConsumerReblanceFailedException.scala
deleted file mode 100644
index ae5018d..0000000
--- a/trunk/core/src/main/scala/kafka/common/ConsumerReblanceFailedException.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.common
-
-/**
- * Thrown when the consumer cannot complete a rebalance after the configured
- * number of retries.
- */
-class ConsumerRebalanceFailedException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
\ No newline at end of file
diff --git a/trunk/core/src/main/scala/kafka/common/ErrorMapping.scala b/trunk/core/src/main/scala/kafka/common/ErrorMapping.scala
deleted file mode 100644
index ccadd31..0000000
--- a/trunk/core/src/main/scala/kafka/common/ErrorMapping.scala
+++ /dev/null
@@ -1,61 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.common
-
-import kafka.message.InvalidMessageException
-import java.nio.ByteBuffer
-import java.lang.Throwable
-
-/**
- * A bi-directional mapping between error codes and exceptions
- */
-object ErrorMapping {
-  val EmptyByteBuffer = ByteBuffer.allocate(0)
-
-  val UnknownCode = -1
-  val NoError = 0
-  val OffsetOutOfRangeCode = 1
-  val InvalidMessageCode = 2
-  val WrongPartitionCode = 3
-  val InvalidFetchSizeCode = 4
-
-  private val exceptionToCode = 
-    Map[Class[Throwable], Int](
-      classOf[OffsetOutOfRangeException].asInstanceOf[Class[Throwable]] -> OffsetOutOfRangeCode,
-      classOf[InvalidMessageException].asInstanceOf[Class[Throwable]] -> InvalidMessageCode,
-      classOf[InvalidPartitionException].asInstanceOf[Class[Throwable]] -> WrongPartitionCode,
-      classOf[InvalidMessageSizeException].asInstanceOf[Class[Throwable]] -> InvalidFetchSizeCode
-    ).withDefaultValue(UnknownCode)
-  
-  /* invert the mapping */
-  private val codeToException = 
-    (Map[Int, Class[Throwable]]() ++ exceptionToCode.iterator.map(p => (p._2, p._1))).withDefaultValue(classOf[UnknownException])
-  
-  def codeFor(exception: Class[Throwable]): Int = exceptionToCode(exception)
-  
-  def maybeThrowException(code: Int) =
-    if(code != 0)
-      throw codeToException(code).newInstance()
-}
-
-class InvalidTopicException(message: String) extends RuntimeException(message) {
-  def this() = this(null)  
-}
-
-class MessageSizeTooLargeException(message: String) extends RuntimeException(message) {
-}
diff --git a/trunk/core/src/main/scala/kafka/common/InvalidConfigException.scala b/trunk/core/src/main/scala/kafka/common/InvalidConfigException.scala
deleted file mode 100644
index 6437846..0000000
--- a/trunk/core/src/main/scala/kafka/common/InvalidConfigException.scala
+++ /dev/null
@@ -1,25 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.common
-
-/**
- * Indicates that the given config parameter has an invalid value
- */
-class InvalidConfigException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
diff --git a/trunk/core/src/main/scala/kafka/common/InvalidMessageSizeException.scala b/trunk/core/src/main/scala/kafka/common/InvalidMessageSizeException.scala
deleted file mode 100644
index 6a7bb47..0000000
--- a/trunk/core/src/main/scala/kafka/common/InvalidMessageSizeException.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.common
-
-/**
- * Indicates that a message has an invalid size
- */
-class InvalidMessageSizeException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
-
diff --git a/trunk/core/src/main/scala/kafka/common/InvalidPartitionException.scala b/trunk/core/src/main/scala/kafka/common/InvalidPartitionException.scala
deleted file mode 100644
index 440a358..0000000
--- a/trunk/core/src/main/scala/kafka/common/InvalidPartitionException.scala
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.common
-
-/**
- * Indicates that the partition id is not between 0 and numPartitions-1
- */
-class InvalidPartitionException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
diff --git a/trunk/core/src/main/scala/kafka/common/NoBrokersForPartitionException.scala b/trunk/core/src/main/scala/kafka/common/NoBrokersForPartitionException.scala
deleted file mode 100644
index 4577b29..0000000
--- a/trunk/core/src/main/scala/kafka/common/NoBrokersForPartitionException.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.common
-
-/**
- * Thrown when a request is made for a topic but no brokers with that topic
- * exist.
- */
-class NoBrokersForPartitionException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
\ No newline at end of file
diff --git a/trunk/core/src/main/scala/kafka/common/OffsetOutOfRangeException.scala b/trunk/core/src/main/scala/kafka/common/OffsetOutOfRangeException.scala
deleted file mode 100644
index 0a2514c..0000000
--- a/trunk/core/src/main/scala/kafka/common/OffsetOutOfRangeException.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.common
-
-/**
- * Indicates the client has requested a range no longer available on the server
- */
-class OffsetOutOfRangeException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
-
diff --git a/trunk/core/src/main/scala/kafka/common/UnavailableProducerException.scala b/trunk/core/src/main/scala/kafka/common/UnavailableProducerException.scala
deleted file mode 100644
index 885c98d..0000000
--- a/trunk/core/src/main/scala/kafka/common/UnavailableProducerException.scala
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.common
-
-/**
- * Indicates a producer pool initialization problem
-*/
-class UnavailableProducerException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
diff --git a/trunk/core/src/main/scala/kafka/common/UnknownCodecException.scala b/trunk/core/src/main/scala/kafka/common/UnknownCodecException.scala
deleted file mode 100644
index 7e66901..0000000
--- a/trunk/core/src/main/scala/kafka/common/UnknownCodecException.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.common
-
-/**
- * Indicates that a message specifies an unknown compression codec
- */
-class UnknownCodecException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
-
diff --git a/trunk/core/src/main/scala/kafka/common/UnknownException.scala b/trunk/core/src/main/scala/kafka/common/UnknownException.scala
deleted file mode 100644
index 6cf0fc9..0000000
--- a/trunk/core/src/main/scala/kafka/common/UnknownException.scala
+++ /dev/null
@@ -1,23 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.common
-
-/**
- * If we don't know what else it is, call it this
- */
-class UnknownException extends RuntimeException
diff --git a/trunk/core/src/main/scala/kafka/common/UnknownMagicByteException.scala b/trunk/core/src/main/scala/kafka/common/UnknownMagicByteException.scala
deleted file mode 100644
index 544d426..0000000
--- a/trunk/core/src/main/scala/kafka/common/UnknownMagicByteException.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.common
-
-/**
- * Indicates that a message has an unknown magic byte (message format version)
- */
-class UnknownMagicByteException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
-
diff --git a/trunk/core/src/main/scala/kafka/consumer/ConsoleConsumer.scala b/trunk/core/src/main/scala/kafka/consumer/ConsoleConsumer.scala
deleted file mode 100644
index 49a6f39..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/ConsoleConsumer.scala
+++ /dev/null
@@ -1,249 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import scala.collection.mutable._
-import scala.collection.JavaConversions._
-import org.I0Itec.zkclient._
-import joptsimple._
-import java.util.Properties
-import java.util.Random
-import java.io.PrintStream
-import kafka.message._
-import kafka.utils.{Utils, Logging}
-import kafka.utils.ZKStringSerializer
-
-/**
- * Consumer that dumps messages out to standard out.
- *
- */
-object ConsoleConsumer extends Logging {
-
-  def main(args: Array[String]) {
-    val parser = new OptionParser
-    val topicIdOpt = parser.accepts("topic", "The topic id to consume on.")
-                           .withRequiredArg
-                           .describedAs("topic")
-                           .ofType(classOf[String])
-    val whitelistOpt = parser.accepts("whitelist", "Whitelist of topics to include for consumption.")
-                             .withRequiredArg
-                             .describedAs("whitelist")
-                             .ofType(classOf[String])
-    val blacklistOpt = parser.accepts("blacklist", "Blacklist of topics to exclude from consumption.")
-                             .withRequiredArg
-                             .describedAs("blacklist")
-                             .ofType(classOf[String])
-    val zkConnectOpt = parser.accepts("zookeeper", "REQUIRED: The connection string for the zookeeper connection in the form host:port. " +
-                                      "Multiple URLS can be given to allow fail-over.")
-                           .withRequiredArg
-                           .describedAs("urls")
-                           .ofType(classOf[String])
-    val groupIdOpt = parser.accepts("group", "The group id to consume on.")
-                           .withRequiredArg
-                           .describedAs("gid")
-                           .defaultsTo("console-consumer-" + new Random().nextInt(100000))   
-                           .ofType(classOf[String])
-    val fetchSizeOpt = parser.accepts("fetch-size", "The amount of data to fetch in a single request.")
-                           .withRequiredArg
-                           .describedAs("size")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(1024 * 1024)   
-    val socketBufferSizeOpt = parser.accepts("socket-buffer-size", "The size of the TCP receive buffer.")
-                           .withRequiredArg
-                           .describedAs("size")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(2 * 1024 * 1024)
-    val consumerTimeoutMsOpt = parser.accepts("consumer-timeout-ms", "consumer throws a timeout exception after waiting this long " +
-                                              "without incoming messages")
-                           .withRequiredArg
-                           .describedAs("prop")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(-1)
-    val messageFormatterOpt = parser.accepts("formatter", "The name of a class to use for formatting kafka messages for display.")
-                           .withRequiredArg
-                           .describedAs("class")
-                           .ofType(classOf[String])
-                           .defaultsTo(classOf[NewlineMessageFormatter].getName)
-    val messageFormatterArgOpt = parser.accepts("property")
-                           .withRequiredArg
-                           .describedAs("prop")
-                           .ofType(classOf[String])
-    val resetBeginningOpt = parser.accepts("from-beginning", "If the consumer does not already have an established offset to consume from, " +
-        "start with the earliest message present in the log rather than the latest message.")
-    val autoCommitIntervalOpt = parser.accepts("autocommit.interval.ms", "The time interval at which to save the current offset in ms")
-                           .withRequiredArg
-                           .describedAs("ms")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(10*1000)
-    val maxMessagesOpt = parser.accepts("max-messages", "The maximum number of messages to consume before exiting. If not set, consumption is continual.")
-                           .withRequiredArg
-                           .describedAs("num_messages")
-                           .ofType(classOf[java.lang.Integer])
-    val skipMessageOnErrorOpt = parser.accepts("skip-message-on-error", "If there is an error when processing a message, " +
-        "skip it instead of halt.")
-
-    val options: OptionSet = tryParse(parser, args)
-    Utils.checkRequiredArgs(parser, options, zkConnectOpt)
-    
-    val topicOrFilterOpt = List(topicIdOpt, whitelistOpt, blacklistOpt).filter(options.has)
-    if (topicOrFilterOpt.size != 1) {
-      error("Exactly one of whitelist/blacklist/topic is required.")
-      parser.printHelpOn(System.err)
-      System.exit(1)
-    }
-    val topicArg = options.valueOf(topicOrFilterOpt.head)
-    val filterSpec = if (options.has(blacklistOpt))
-      new Blacklist(topicArg)
-    else
-      new Whitelist(topicArg)
-
-    val props = new Properties()
-    props.put("groupid", options.valueOf(groupIdOpt))
-    props.put("socket.buffersize", options.valueOf(socketBufferSizeOpt).toString)
-    props.put("fetch.size", options.valueOf(fetchSizeOpt).toString)
-    props.put("auto.commit", "true")
-    props.put("autocommit.interval.ms", options.valueOf(autoCommitIntervalOpt).toString)
-    props.put("autooffset.reset", if(options.has(resetBeginningOpt)) "smallest" else "largest")
-    props.put("zk.connect", options.valueOf(zkConnectOpt))
-    props.put("consumer.timeout.ms", options.valueOf(consumerTimeoutMsOpt).toString)
-    val config = new ConsumerConfig(props)
-    val skipMessageOnError = if (options.has(skipMessageOnErrorOpt)) true else false
-    
-    val messageFormatterClass = Class.forName(options.valueOf(messageFormatterOpt))
-    val formatterArgs = tryParseFormatterArgs(options.valuesOf(messageFormatterArgOpt))
-    
-    val maxMessages = if(options.has(maxMessagesOpt)) options.valueOf(maxMessagesOpt).intValue else -1
-
-    val connector = Consumer.create(config)
-
-    if(options.has(resetBeginningOpt))
-      tryCleanupZookeeper(options.valueOf(zkConnectOpt), options.valueOf(groupIdOpt))
-
-    Runtime.getRuntime.addShutdownHook(new Thread() {
-      override def run() {
-        connector.shutdown()
-        // if there is no group specified then avoid polluting zookeeper with persistent group data, this is a hack
-        if(!options.has(groupIdOpt))  
-          tryCleanupZookeeper(options.valueOf(zkConnectOpt), options.valueOf(groupIdOpt))
-      }
-    })
-
-    val stream = connector.createMessageStreamsByFilter(filterSpec).get(0)
-    val iter = if(maxMessages >= 0)
-      stream.slice(0, maxMessages)
-    else
-      stream
-
-    val formatter: MessageFormatter = messageFormatterClass.newInstance().asInstanceOf[MessageFormatter]
-    formatter.init(formatterArgs)
-
-    try {
-      for(messageAndTopic <- iter) {
-        try {
-          formatter.writeTo(messageAndTopic.message, System.out)
-        } catch {
-          case e =>
-            if (skipMessageOnError)
-              error("Error processing message, skipping this message: ", e)
-            else
-              throw e
-        }
-        if(System.out.checkError()) { 
-          // This means no one is listening to our output stream any more, time to shutdown
-          System.err.println("Unable to write to standard out, closing consumer.")
-          formatter.close()
-          connector.shutdown()
-          System.exit(1)
-        }
-      }
-    } catch {
-      case e => error("Error processing message, stopping consumer: ", e)
-    }
-      
-    System.out.flush()
-    formatter.close()
-    connector.shutdown()
-  }
-
-  def tryParse(parser: OptionParser, args: Array[String]) = {
-    try {
-      parser.parse(args : _*)
-    } catch {
-      case e: OptionException => {
-        Utils.croak(e.getMessage)
-        null
-      }
-    }
-  }
-  
-  def tryParseFormatterArgs(args: Iterable[String]): Properties = {
-    val splits = args.map(_ split "=").filterNot(_ == null).filterNot(_.length == 0)
-    if(!splits.forall(_.length == 2)) {
-      System.err.println("Invalid parser arguments: " + args.mkString(" "))
-      System.exit(1)
-    }
-    val props = new Properties
-    for(a <- splits)
-      props.put(a(0), a(1))
-    props
-  }
-  
-  trait MessageFormatter {
-    def writeTo(message: Message, output: PrintStream)
-    def init(props: Properties) {}
-    def close() {}
-  }
-  
-  class NewlineMessageFormatter extends MessageFormatter {
-    def writeTo(message: Message, output: PrintStream) {
-      val payload = message.payload
-      output.write(payload.array, payload.arrayOffset, payload.limit)
-      output.write('\n')
-    }
-  }
-
-  class ChecksumMessageFormatter extends MessageFormatter {
-    private var topicStr: String = _
-    
-    override def init(props: Properties) {
-      topicStr = props.getProperty("topic")
-      if (topicStr != null) 
-        topicStr = topicStr + "-"
-      else
-        topicStr = ""
-    }
-    
-    def writeTo(message: Message, output: PrintStream) {
-      val chksum = message.checksum
-      output.println(topicStr + "checksum:" + chksum)
-    }
-  }
-  
-  def tryCleanupZookeeper(zkUrl: String, groupId: String) {
-    try {
-      val dir = "/consumers/" + groupId
-      info("Cleaning up temporary zookeeper data under " + dir + ".")
-      val zk = new ZkClient(zkUrl, 30*1000, 30*1000, ZKStringSerializer)
-      zk.deleteRecursive(dir)
-      zk.close()
-    } catch {
-      case _ => // swallow
-    }
-  }
-   
-}
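
The --property options above are parsed as key=value pairs into a java.util.Properties that is handed to the formatter's init. A self-contained sketch of that parsing, minus the System.exit on malformed input (names illustrative, not from this patch):

    import java.util.Properties

    object FormatterArgsSketch {
      def parse(args: Iterable[String]): Properties = {
        val props = new Properties
        for (arg <- args) {
          val parts = arg.split("=", 2)
          require(parts.length == 2, "Invalid parser argument: " + arg)
          props.put(parts(0), parts(1))
        }
        props
      }

      def main(args: Array[String]): Unit = {
        val props = parse(List("topic=foobar"))
        println(props.getProperty("topic")) // foobar
      }
    }
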
diff --git a/trunk/core/src/main/scala/kafka/consumer/ConsumerConfig.scala b/trunk/core/src/main/scala/kafka/consumer/ConsumerConfig.scala
deleted file mode 100644
index c531cd1..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/ConsumerConfig.scala
+++ /dev/null
@@ -1,98 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import java.util.Properties
-import kafka.utils.{ZKConfig, Utils}
-import kafka.api.OffsetRequest
-object ConsumerConfig {
-  val SocketTimeout = 30 * 1000
-  val SocketBufferSize = 64*1024
-  val FetchSize = 1024 * 1024
-  val MaxFetchSize = 10*FetchSize
-  val DefaultFetcherBackoffMs = 1000
-  val AutoCommit = true
-  val AutoCommitInterval = 10 * 1000
-  val MaxQueuedChunks = 10
-  val MaxRebalanceRetries = 4
-  val AutoOffsetReset = OffsetRequest.SmallestTimeString
-  val ConsumerTimeoutMs = -1
-  val MirrorTopicsWhitelist = ""
-  val MirrorTopicsBlacklist = ""
-  val MirrorConsumerNumThreads = 1
-
-  val MirrorTopicsWhitelistProp = "mirror.topics.whitelist"
-  val MirrorTopicsBlacklistProp = "mirror.topics.blacklist"
-  val MirrorConsumerNumThreadsProp = "mirror.consumer.numthreads"
-}
-
-class ConsumerConfig(props: Properties) extends ZKConfig(props) {
-  import ConsumerConfig._
-
-  /** a string that uniquely identifies a set of consumers within the same consumer group */
-  val groupId = Utils.getString(props, "groupid")
-
-  /** consumer id: generated automatically if not set.
-   *  Set this explicitly only for testing purposes. */
-  val consumerId: Option[String] = /** TODO: can be written better in scala 2.8 */
-    if (Utils.getString(props, "consumerid", null) != null) Some(Utils.getString(props, "consumerid")) else None
-
-  /** the socket timeout for network requests */
-  val socketTimeoutMs = Utils.getInt(props, "socket.timeout.ms", SocketTimeout)
-  
-  /** the socket receive buffer for network requests */
-  val socketBufferSize = Utils.getInt(props, "socket.buffersize", SocketBufferSize)
-  
-  /** the number of bytes of messages to attempt to fetch */
-  val fetchSize = Utils.getInt(props, "fetch.size", FetchSize)
-  
-  /** to avoid repeatedly polling a broker node which has no new data
-      we will back off every time we get an empty set from the broker */
-  val fetcherBackoffMs: Long = Utils.getInt(props, "fetcher.backoff.ms", DefaultFetcherBackoffMs)
-  
-  /** if true, periodically commit to zookeeper the offset of messages already fetched by the consumer */
-  val autoCommit = Utils.getBoolean(props, "autocommit.enable", AutoCommit)
-  
-  /** the frequency in ms that the consumer offsets are committed to zookeeper */
-  val autoCommitIntervalMs = Utils.getInt(props, "autocommit.interval.ms", AutoCommitInterval)
-
-  /** max number of messages buffered for consumption */
-  val maxQueuedChunks = Utils.getInt(props, "queuedchunks.max", MaxQueuedChunks)
-
-  /** max number of retries during rebalance */
-  val maxRebalanceRetries = Utils.getInt(props, "rebalance.retries.max", MaxRebalanceRetries)
-
-  /** backoff time between retries during rebalance */
-  val rebalanceBackoffMs = Utils.getInt(props, "rebalance.backoff.ms", zkSyncTimeMs)
-
-  /* what to do if an offset is out of range.
-     smallest : automatically reset the offset to the smallest offset
-     largest : automatically reset the offset to the largest offset
-     anything else: throw exception to the consumer */
-  val autoOffsetReset = Utils.getString(props, "autooffset.reset", AutoOffsetReset)
-
-  /** throw a timeout exception to the consumer if no message is available for consumption after the specified interval */
-  val consumerTimeoutMs = Utils.getInt(props, "consumer.timeout.ms", ConsumerTimeoutMs)
-
-  /** Use shallow iterator over compressed messages directly. This feature should be used very carefully.
-   *  Typically, it's only used for mirroring raw messages from one kafka cluster to another to save the
-   *  overhead of decompression.
-   *  */
-  val enableShallowIterator = Utils.getBoolean(props, "shallowiterator.enable", false)
-}
-
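A minimal sketch of building a config from these settings; groupid is defined in this class, zk.connect comes from the ZKConfig base class, and the group name and host/port are illustrative only:

    import java.util.Properties
    import kafka.consumer.ConsumerConfig

    val props = new Properties
    props.put("groupid", "my-group")              // required: consumer group id
    props.put("zk.connect", "localhost:2181")     // required: zookeeper connection string
    props.put("autooffset.reset", "smallest")     // optional: reset to the smallest offset when out of range
    props.put("autocommit.interval.ms", "5000")   // optional: commit offsets every 5s instead of the 10s default
    val config = new ConsumerConfig(props)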
diff --git a/trunk/core/src/main/scala/kafka/consumer/ConsumerConnector.scala b/trunk/core/src/main/scala/kafka/consumer/ConsumerConnector.scala
deleted file mode 100644
index 94cb2f1..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/ConsumerConnector.scala
+++ /dev/null
@@ -1,92 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import scala.collection._
-import kafka.utils.{Utils, Logging}
-import kafka.serializer.{DefaultDecoder, Decoder}
-
-/**
- *  Main interface for consumer
- */
-trait ConsumerConnector {
-  /**
-   *  Create a list of MessageStreams for each topic.
-   *
- *  @param topicCountMap  a map of (topic, #streams) pairs
-   *  @param decoder Decoder to decode each Message to type T
-   *  @return a map of (topic, list of  KafkaStream) pairs.
-   *          The number of items in the list is #streams. Each stream supports
-   *          an iterator over message/metadata pairs.
-   */
-  def createMessageStreams[T](topicCountMap: Map[String,Int],
-                              decoder: Decoder[T] = new DefaultDecoder)
-    : Map[String,List[KafkaStream[T]]]
-
-  /**
-   *  Create a list of message streams for all topics that match a given filter.
-   *
-   *  @param topicFilter Either a Whitelist or Blacklist TopicFilter object.
-   *  @param numStreams Number of streams to return
-   *  @param decoder Decoder to decode each Message to type T
-   *  @return a list of KafkaStream each of which provides an
-   *          iterator over message/metadata pairs over allowed topics.
-   */
-  def createMessageStreamsByFilter[T](topicFilter: TopicFilter,
-                                      numStreams: Int = 1,
-                                      decoder: Decoder[T] = new DefaultDecoder)
-    : Seq[KafkaStream[T]]
-
-  /**
-   *  Commit the offsets of all broker partitions connected by this connector.
-   */
-  def commitOffsets
-  
-  /**
-   *  Shut down the connector
-   */
-  def shutdown()
-}
-
-object Consumer extends Logging {
-  private val consumerStatsMBeanName = "kafka:type=kafka.ConsumerStats"
-
-  /**
-   *  Create a ConsumerConnector
-   *
-   *  @param config  at the minimum, need to specify the groupid of the consumer and the zookeeper
-   *                 connection string zk.connect.
-   */
-  def create(config: ConsumerConfig): ConsumerConnector = {
-    val consumerConnect = new ZookeeperConsumerConnector(config)
-    Utils.registerMBean(consumerConnect, consumerStatsMBeanName)
-    consumerConnect
-  }
-
-  /**
-   *  Create a ConsumerConnector
-   *
-   *  @param config  at the minimum, need to specify the groupid of the consumer and the zookeeper
-   *                 connection string zk.connect.
-   */
-  def createJavaConsumerConnector(config: ConsumerConfig): kafka.javaapi.consumer.ConsumerConnector = {
-    val consumerConnect = new kafka.javaapi.consumer.ZookeeperConsumerConnector(config)
-    Utils.registerMBean(consumerConnect.underlying, consumerStatsMBeanName)
-    consumerConnect
-  }
-}
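A minimal end-to-end sketch of the high-level consumer API declared above, assuming the props built in the ConsumerConfig example and that DefaultDecoder yields raw Message objects; the topic name and stream count are illustrative and error handling is omitted:

    import kafka.consumer.{Consumer, ConsumerConfig}
    import kafka.serializer.DefaultDecoder

    val connector = Consumer.create(new ConsumerConfig(props))
    val streams = connector.createMessageStreams(Map("my-topic" -> 1), new DefaultDecoder)
    for (stream <- streams("my-topic"); msgAndMeta <- stream)   // blocks until messages arrive
      println(msgAndMeta.message)                               // each item carries the message and its topic
    connector.shutdown()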
diff --git a/trunk/core/src/main/scala/kafka/consumer/ConsumerIterator.scala b/trunk/core/src/main/scala/kafka/consumer/ConsumerIterator.scala
deleted file mode 100644
index 73e2794..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/ConsumerIterator.scala
+++ /dev/null
@@ -1,100 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import kafka.utils.{IteratorTemplate, Logging}
-import java.util.concurrent.{TimeUnit, BlockingQueue}
-import kafka.serializer.Decoder
-import java.util.concurrent.atomic.AtomicReference
-import kafka.message.{MessageAndOffset, MessageAndMetadata}
-
-
-/**
- * An iterator that blocks until a value can be read from the supplied queue.
- * The iterator takes a shutdownCommand object which can be added to the queue to trigger a shutdown
- *
- */
-class ConsumerIterator[T](private val channel: BlockingQueue[FetchedDataChunk],
-                          consumerTimeoutMs: Int,
-                          private val decoder: Decoder[T],
-                          val enableShallowIterator: Boolean)
-  extends IteratorTemplate[MessageAndMetadata[T]] with Logging {
-
-  private var current: AtomicReference[Iterator[MessageAndOffset]] = new AtomicReference(null)
-  private var currentTopicInfo:PartitionTopicInfo = null
-  private var consumedOffset: Long = -1L
-
-  override def next(): MessageAndMetadata[T] = {
-    val item = super.next()
-    if(consumedOffset < 0)
-      throw new IllegalStateException("Offset returned by the message set is invalid %d".format(consumedOffset))
-    currentTopicInfo.resetConsumeOffset(consumedOffset)
-    val topic = currentTopicInfo.topic
-    trace("Setting %s consumed offset to %d".format(topic, consumedOffset))
-    ConsumerTopicStat.getConsumerTopicStat(topic).recordMessagesPerTopic(1)
-    ConsumerTopicStat.getConsumerAllTopicStat().recordMessagesPerTopic(1)
-    item
-  }
-
-  protected def makeNext(): MessageAndMetadata[T] = {
-    var currentDataChunk: FetchedDataChunk = null
-    // if we don't have an iterator, get one
-    var localCurrent = current.get()
-    if(localCurrent == null || !localCurrent.hasNext) {
-      if (consumerTimeoutMs < 0)
-        currentDataChunk = channel.take
-      else {
-        currentDataChunk = channel.poll(consumerTimeoutMs, TimeUnit.MILLISECONDS)
-        if (currentDataChunk == null) {
-          // reset state to make the iterator re-iterable
-          resetState()
-          throw new ConsumerTimeoutException
-        }
-      }
-      if(currentDataChunk eq ZookeeperConsumerConnector.shutdownCommand) {
-        debug("Received the shutdown command")
-        channel.offer(currentDataChunk)
-        return allDone
-      } else {
-        currentTopicInfo = currentDataChunk.topicInfo
-        if (currentTopicInfo.getConsumeOffset != currentDataChunk.fetchOffset) {
-          error("consumed offset: %d doesn't match fetch offset: %d for %s;\n Consumer may lose data"
-                        .format(currentTopicInfo.getConsumeOffset, currentDataChunk.fetchOffset, currentTopicInfo))
-          currentTopicInfo.resetConsumeOffset(currentDataChunk.fetchOffset)
-        }
-        localCurrent = if (enableShallowIterator) currentDataChunk.messages.shallowIterator
-                       else currentDataChunk.messages.iterator
-        current.set(localCurrent)
-      }
-    }
-    val item = localCurrent.next()
-    consumedOffset = item.offset
-
-    new MessageAndMetadata(decoder.toEvent(item.message), currentTopicInfo.topic)
-  }
-
-  def clearCurrentChunk() {
-    try {
-      info("Clearing the current data chunk for this consumer iterator")
-      current.set(null)
-    }
-  }
-}
-
-class ConsumerTimeoutException() extends RuntimeException()
-
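When consumer.timeout.ms is set to a positive value, the iterator above throws ConsumerTimeoutException instead of blocking forever, and the state reset keeps it usable afterwards. A minimal sketch of draining a stream until it goes idle; stream is a KafkaStream obtained as in the ConsumerConnector example and process is a hypothetical caller-supplied function:

    val it = stream.iterator()
    try {
      while (it.hasNext)
        process(it.next().message)
    } catch {
      case e: ConsumerTimeoutException =>
        // nothing arrived within consumer.timeout.ms; the iterator can still be used later
    }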
diff --git a/trunk/core/src/main/scala/kafka/consumer/ConsumerTopicStat.scala b/trunk/core/src/main/scala/kafka/consumer/ConsumerTopicStat.scala
deleted file mode 100644
index 3a9de2a..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/ConsumerTopicStat.scala
+++ /dev/null
@@ -1,60 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import java.util.concurrent.atomic.AtomicLong
-import kafka.utils.{Pool, Utils, threadsafe, Logging}
-
-trait ConsumerTopicStatMBean {
-  def getMessagesPerTopic: Long
-  def getBytesPerTopic: Long
-}
-
-@threadsafe
-class ConsumerTopicStat extends ConsumerTopicStatMBean {
-  private val numCumulatedMessagesPerTopic = new AtomicLong(0)
-  private val numCumulatedBytesPerTopic = new AtomicLong(0)
-
-  def getMessagesPerTopic: Long = numCumulatedMessagesPerTopic.get
-
-  def recordMessagesPerTopic(nMessages: Int) = numCumulatedMessagesPerTopic.getAndAdd(nMessages)
-
-  def getBytesPerTopic: Long = numCumulatedBytesPerTopic.get
-
-  def recordBytesPerTopic(nBytes: Long) = numCumulatedBytesPerTopic.getAndAdd(nBytes)
-}
-
-object ConsumerTopicStat extends Logging {
-  private val stats = new Pool[String, ConsumerTopicStat]
-  private val allTopicStat = new ConsumerTopicStat
-  Utils.registerMBean(allTopicStat, "kafka:type=kafka.ConsumerAllTopicStat")
-
-  def getConsumerAllTopicStat(): ConsumerTopicStat = allTopicStat
-
-  def getConsumerTopicStat(topic: String): ConsumerTopicStat = {
-    var stat = stats.get(topic)
-    if (stat == null) {
-      stat = new ConsumerTopicStat
-      if (stats.putIfNotExists(topic, stat) == null)
-        Utils.registerMBean(stat, "kafka:type=kafka.ConsumerTopicStat." + topic)
-      else
-        stat = stats.get(topic)
-    }
-    return stat
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/consumer/FetchedDataChunk.scala b/trunk/core/src/main/scala/kafka/consumer/FetchedDataChunk.scala
deleted file mode 100644
index ea90d18..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/FetchedDataChunk.scala
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import kafka.message.ByteBufferMessageSet
-
-private[consumer] class FetchedDataChunk(val messages: ByteBufferMessageSet,
-                                         val topicInfo: PartitionTopicInfo,
-                                         val fetchOffset: Long)
diff --git a/trunk/core/src/main/scala/kafka/consumer/Fetcher.scala b/trunk/core/src/main/scala/kafka/consumer/Fetcher.scala
deleted file mode 100644
index 5e65df9..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/Fetcher.scala
+++ /dev/null
@@ -1,95 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import scala.collection._
-import kafka.cluster._
-import org.I0Itec.zkclient.ZkClient
-import java.util.concurrent.BlockingQueue
-import kafka.utils._
-import java.lang.IllegalStateException
-
-/**
- * The fetcher is a background thread that fetches data from a set of servers
- */
-private [consumer] class Fetcher(val config: ConsumerConfig, val zkClient : ZkClient) extends Logging {
-  private val EMPTY_FETCHER_THREADS = new Array[FetcherRunnable](0)
-  @volatile
-  private var fetcherThreads : Array[FetcherRunnable] = EMPTY_FETCHER_THREADS
-
-  /**
-   *  shutdown all fetcher threads
-   */
-  def stopConnectionsToAllBrokers = {
-    // shutdown the old fetcher threads, if any
-    for (fetcherThread <- fetcherThreads)
-      fetcherThread.shutdown
-    fetcherThreads = EMPTY_FETCHER_THREADS
-  }
-
-  def clearFetcherQueues(topicInfos: Iterable[PartitionTopicInfo], cluster: Cluster,
-                            queuesTobeCleared: Iterable[BlockingQueue[FetchedDataChunk]],
-                            messageStreams: Map[String,List[KafkaStream[_]]]) {
-
-    // Clear all but the currently iterated upon chunk in the consumer thread's queue
-    queuesTobeCleared.foreach(_.clear)
-    info("Cleared all relevant queues for this fetcher")
-
-    // Also clear the currently iterated upon chunk in the consumer threads
-    if(messageStreams != null)
-       messageStreams.foreach(_._2.foreach(s => s.clear()))
-
-    info("Cleared the data chunks in all the consumer message iterators")
-
-  }
-
-  def startConnections(topicInfos: Iterable[PartitionTopicInfo],
-                       cluster: Cluster) {
-    if (topicInfos == null)
-      return
-
-    // re-arrange by broker id
-    val m = new mutable.HashMap[Int, List[PartitionTopicInfo]]
-    for(info <- topicInfos) {
-      m.get(info.brokerId) match {
-        case None => m.put(info.brokerId, List(info))
-        case Some(lst) => m.put(info.brokerId, info :: lst)
-      }
-    }
-
-    // open a new fetcher thread for each broker
-    val ids = Set() ++ topicInfos.map(_.brokerId)
-    val brokers = ids.map { id =>
-      cluster.getBroker(id) match {
-        case Some(broker) => broker
-        case None => throw new IllegalStateException("Broker " + id + " is unavailable, fetchers could not be started")
-      }
-    }
-
-    fetcherThreads = new Array[FetcherRunnable](brokers.size)
-    var i = 0
-    for(broker <- brokers) {
-      val fetcherThread = new FetcherRunnable("FetchRunnable-" + i, zkClient, config, broker, m.get(broker.id).get)
-      fetcherThreads(i) = fetcherThread
-      fetcherThread.start
-      i +=1
-    }
-  }    
-}
-
-
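The manual accumulation in startConnections builds a Map[Int, List[PartitionTopicInfo]] keyed by broker id, which then drives one FetcherRunnable per broker. A minimal equivalent sketch of that grouping step, purely illustrative, with topicInfos as in startConnections:

    // one map entry per broker; each entry becomes one fetcher thread
    val byBroker: Map[Int, List[PartitionTopicInfo]] = topicInfos.toList.groupBy(_.brokerId)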
diff --git a/trunk/core/src/main/scala/kafka/consumer/FetcherRunnable.scala b/trunk/core/src/main/scala/kafka/consumer/FetcherRunnable.scala
deleted file mode 100644
index 6f0ea79..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/FetcherRunnable.scala
+++ /dev/null
@@ -1,141 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import java.util.concurrent.CountDownLatch
-import kafka.common.ErrorMapping
-import kafka.cluster.{Partition, Broker}
-import kafka.api.{OffsetRequest, FetchRequest}
-import org.I0Itec.zkclient.ZkClient
-import kafka.utils._
-import java.io.IOException
-
-class FetcherRunnable(val name: String,
-                      val zkClient : ZkClient,
-                      val config: ConsumerConfig,
-                      val broker: Broker,
-                      val partitionTopicInfos: List[PartitionTopicInfo])
-  extends Thread(name) with Logging {
-  private val shutdownLatch = new CountDownLatch(1)
-  private val simpleConsumer = new SimpleConsumer(broker.host, broker.port, config.socketTimeoutMs,
-    config.socketBufferSize)
-  @volatile
-  private var stopped = false
-
-  def shutdown(): Unit = {
-    stopped = true
-    interrupt
-    debug("awaiting shutdown on fetcher " + name)
-    shutdownLatch.await
-    debug("shutdown of fetcher " + name + " thread complete")
-  }
-
-  override def run() {
-    for (infopti <- partitionTopicInfos)
-      info(name + " start fetching topic: " + infopti.topic + " part: " + infopti.partition.partId + " offset: "
-        + infopti.getFetchOffset + " from " + broker.host + ":" + broker.port)
-
-    try {
-      while (!stopped) {
-        val fetches = partitionTopicInfos.map(info =>
-          new FetchRequest(info.topic, info.partition.partId, info.getFetchOffset, config.fetchSize))
-
-        trace("fetch request: " + fetches.toString)
-
-        val response = simpleConsumer.multifetch(fetches : _*)
-        trace("recevied response from fetch request: " + fetches.toString)
-
-        var read = 0L
-
-        for((messages, infopti) <- response.zip(partitionTopicInfos)) {
-          try {
-            var done = false
-            if(messages.getErrorCode == ErrorMapping.OffsetOutOfRangeCode) {
-              info("offset for " + infopti + " out of range")
-              // see if we can fix this error
-              val resetOffset = resetConsumerOffsets(infopti.topic, infopti.partition)
-              if(resetOffset >= 0) {
-                infopti.resetFetchOffset(resetOffset)
-                infopti.resetConsumeOffset(resetOffset)
-                done = true
-              }
-            }
-            if (!done)
-              read += infopti.enqueue(messages, infopti.getFetchOffset)
-          }
-          catch {
-            case e1: IOException =>
-              // something is wrong with the socket, re-throw the exception to stop the fetcher
-              throw e1
-            case e2 =>
-              if (!stopped) {
-                // this is likely a repeatable error, log it and trigger an exception in the consumer
-                error("error in FetcherRunnable for " + infopti, e2)
-                infopti.enqueueError(e2, infopti.getFetchOffset)
-              }
-              // re-throw the exception to stop the fetcher
-              throw e2
-          }
-        }
-
-        trace("fetched bytes: " + read)
-        if(read == 0) {
-          debug("backing off " + config.fetcherBackoffMs + " ms")
-          Thread.sleep(config.fetcherBackoffMs)
-        }
-      }
-    }
-    catch {
-      case e =>
-        if (stopped)
-          info("FecherRunnable " + this + " interrupted")
-        else
-          error("error in FetcherRunnable ", e)
-    }
-
-    info("stopping fetcher " + name + " to host " + broker.host)
-    Utils.swallow(logger.info, simpleConsumer.close)
-    shutdownComplete()
-  }
-
-  /**
-   * Record that the thread shutdown is complete
-   */
-  private def shutdownComplete() = shutdownLatch.countDown
-
-  private def resetConsumerOffsets(topic : String,
-                                   partition: Partition) : Long = {
-    var offset : Long = 0
-    config.autoOffsetReset match {
-      case OffsetRequest.SmallestTimeString => offset = OffsetRequest.EarliestTime
-      case OffsetRequest.LargestTimeString => offset = OffsetRequest.LatestTime
-      case _ => return -1
-    }
-
-    // get mentioned offset from the broker
-    val offsets = simpleConsumer.getOffsetsBefore(topic, partition.partId, offset, 1)
-    val topicDirs = new ZKGroupTopicDirs(config.groupId, topic)
-
-    // reset manually in zookeeper
-    info("updating partition " + partition.name + " for topic " + topic + " with " +
-            (if(offset == OffsetRequest.EarliestTime) "earliest " else "latest ") + "offset " + offsets(0))
-    ZkUtils.updatePersistentPath(zkClient, topicDirs.consumerOffsetDir + "/" + partition.name, offsets(0).toString)
-
-    offsets(0)
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/consumer/KafkaStream.scala b/trunk/core/src/main/scala/kafka/consumer/KafkaStream.scala
deleted file mode 100644
index 3ef0978..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/KafkaStream.scala
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-
-import java.util.concurrent.BlockingQueue
-import kafka.serializer.Decoder
-import kafka.message.MessageAndMetadata
-
-class KafkaStream[T](private val queue: BlockingQueue[FetchedDataChunk],
-                     consumerTimeoutMs: Int,
-                     private val decoder: Decoder[T],
-                     val enableShallowIterator: Boolean)
-   extends Iterable[MessageAndMetadata[T]] with java.lang.Iterable[MessageAndMetadata[T]] {
-
-  private val iter: ConsumerIterator[T] =
-    new ConsumerIterator[T](queue, consumerTimeoutMs, decoder, enableShallowIterator)
-
-  /**
-   *  Create an iterator over messages in the stream.
-   */
-  def iterator(): ConsumerIterator[T] = iter
-
-  /**
-   * This method clears the queue being iterated during the consumer rebalancing. This is mainly
-   * to reduce the number of duplicates received by the consumer
-   */
-  def clear() {
-    iter.clearCurrentChunk()
-  }
-
-}
diff --git a/trunk/core/src/main/scala/kafka/consumer/PartitionTopicInfo.scala b/trunk/core/src/main/scala/kafka/consumer/PartitionTopicInfo.scala
deleted file mode 100644
index 2a4caa7..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/PartitionTopicInfo.scala
+++ /dev/null
@@ -1,81 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import java.util.concurrent._
-import java.util.concurrent.atomic._
-import kafka.message._
-import kafka.cluster._
-import kafka.utils.Logging
-import kafka.common.ErrorMapping
-
-private[consumer] class PartitionTopicInfo(val topic: String,
-                                           val brokerId: Int,
-                                           val partition: Partition,
-                                           private val chunkQueue: BlockingQueue[FetchedDataChunk],
-                                           private val consumedOffset: AtomicLong,
-                                           private val fetchedOffset: AtomicLong,
-                                           private val fetchSize: AtomicInteger) extends Logging {
-
-  debug("initial consumer offset of " + this + " is " + consumedOffset.get)
-  debug("initial fetch offset of " + this + " is " + fetchedOffset.get)
-
-  def getConsumeOffset() = consumedOffset.get
-
-  def getFetchOffset() = fetchedOffset.get
-
-  def resetConsumeOffset(newConsumeOffset: Long) = {
-    consumedOffset.set(newConsumeOffset)
-    debug("reset consume offset of " + this + " to " + newConsumeOffset)
-  }
-
-  def resetFetchOffset(newFetchOffset: Long) = {
-    fetchedOffset.set(newFetchOffset)
-    debug("reset fetch offset of ( %s ) to %d".format(this, newFetchOffset))
-  }
-
-  /**
-   * Enqueue a message set for processing
-   * @return the number of valid bytes
-   */
-  def enqueue(messages: ByteBufferMessageSet, fetchOffset: Long): Long = {
-    val size = messages.validBytes
-    if(size > 0) {
-      // advance the fetched offset by the compressed data chunk size, not the decompressed message set size
-      trace("Updating fetch offset = " + fetchedOffset.get + " with size = " + size)
-      chunkQueue.put(new FetchedDataChunk(messages, this, fetchOffset))
-      val newOffset = fetchedOffset.addAndGet(size)
-      debug("updated fetch offset of ( %s ) to %d".format(this, newOffset))
-      ConsumerTopicStat.getConsumerTopicStat(topic).recordBytesPerTopic(size)
-      ConsumerTopicStat.getConsumerAllTopicStat().recordBytesPerTopic(size)
-    }
-    size
-  }
-
-  /**
-   *  add an empty message with the exception to the queue so that client can see the error
-   */
-  def enqueueError(e: Throwable, fetchOffset: Long) = {
-    val messages = new ByteBufferMessageSet(buffer = ErrorMapping.EmptyByteBuffer, initialOffset = 0,
-      errorCode = ErrorMapping.codeFor(e.getClass.asInstanceOf[Class[Throwable]]))
-    chunkQueue.put(new FetchedDataChunk(messages, this, fetchOffset))
-  }
-
-  override def toString(): String = topic + ":" + partition.toString + ": fetched offset = " + fetchedOffset.get +
-    ": consumed offset = " + consumedOffset.get
-}
diff --git a/trunk/core/src/main/scala/kafka/consumer/SimpleConsumer.scala b/trunk/core/src/main/scala/kafka/consumer/SimpleConsumer.scala
deleted file mode 100644
index 3064fae..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/SimpleConsumer.scala
+++ /dev/null
@@ -1,226 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import java.net._
-import java.nio.channels._
-import kafka.api._
-import kafka.message._
-import kafka.network._
-import kafka.utils._
-
-/**
- * A consumer of kafka messages
- */
-@threadsafe
-class SimpleConsumer(val host: String,
-                     val port: Int,
-                     val soTimeout: Int,
-                     val bufferSize: Int) extends Logging {
-  private var channel : SocketChannel = null
-  private val lock = new Object()
-
-  private def connect(): SocketChannel = {
-    close
-    val address = new InetSocketAddress(host, port)
-
-    val channel = SocketChannel.open
-    debug("Connected to " + address + " for fetching.")
-    channel.configureBlocking(true)
-    channel.socket.setReceiveBufferSize(bufferSize)
-    channel.socket.setSoTimeout(soTimeout)
-    channel.socket.setKeepAlive(true)
-    channel.socket.setTcpNoDelay(true)
-    channel.connect(address)
-    trace("requested receive buffer size=" + bufferSize + " actual receive buffer size= " + channel.socket.getReceiveBufferSize)
-    trace("soTimeout=" + soTimeout + " actual soTimeout= " + channel.socket.getSoTimeout)
-    
-    channel
-  }
-
-  private def close(channel: SocketChannel) = {
-    debug("Disconnecting from " + channel.socket.getRemoteSocketAddress())
-    Utils.swallow(logger.warn, channel.close())
-    Utils.swallow(logger.warn, channel.socket.close())
-  }
-
-  def close() {
-    lock synchronized {
-      if (channel != null)
-        close(channel)
-      channel = null
-    }
-  }
-
-  /**
-   *  Fetch a set of messages from a topic.
-   *
-   *  @param request  specifies the topic name, topic partition, starting byte offset, maximum bytes to be fetched.
-   *  @return a set of fetched messages
-   */
-  def fetch(request: FetchRequest): ByteBufferMessageSet = {
-    lock synchronized {
-      val startTime = SystemTime.nanoseconds
-      getOrMakeConnection()
-      var response: Tuple2[Receive,Int] = null
-      try {
-        sendRequest(request)
-        response = getResponse
-      } catch {
-        case e : java.io.IOException =>
-          info("Reconnect in fetch request due to socket error: ", e)
-          // retry once
-          try {
-            channel = connect
-            sendRequest(request)
-            response = getResponse
-          }catch {
-            case ioe: java.io.IOException => channel = null; throw ioe;
-          }
-        case e => throw e
-      }
-      val endTime = SystemTime.nanoseconds
-      SimpleConsumerStats.recordFetchRequest(endTime - startTime)
-      SimpleConsumerStats.recordConsumptionThroughput(response._1.buffer.limit)
-      new ByteBufferMessageSet(response._1.buffer, request.offset, response._2)
-    }
-  }
-
-  /**
-   *  Combine multiple fetch requests in one call.
-   *
-   *  @param fetches  a sequence of fetch requests.
-   *  @return a sequence of fetch responses
-   */
-  def multifetch(fetches: FetchRequest*): MultiFetchResponse = {
-    lock synchronized {
-      val startTime = SystemTime.nanoseconds
-      getOrMakeConnection()
-      var response: Tuple2[Receive,Int] = null
-      try {
-        sendRequest(new MultiFetchRequest(fetches.toArray))
-        response = getResponse
-      } catch {
-        case e : java.io.IOException =>
-          info("Reconnect in multifetch due to socket error: ", e)
-          // retry once
-          try {
-            channel = connect
-            sendRequest(new MultiFetchRequest(fetches.toArray))
-            response = getResponse
-          }catch {
-            case ioe: java.io.IOException => channel = null; throw ioe;
-          }
-        case e => throw e        
-      }
-      val endTime = SystemTime.nanoseconds
-      SimpleConsumerStats.recordFetchRequest(endTime - startTime)
-      SimpleConsumerStats.recordConsumptionThroughput(response._1.buffer.limit)
-
-      // error code will be set on individual messageset inside MultiFetchResponse
-      new MultiFetchResponse(response._1.buffer, fetches.length, fetches.toArray.map(f => f.offset))
-    }
-  }
-
-  /**
-   *  Get a list of valid offsets (up to maxSize) before the given time.
-   *  The result is a list of offsets, in descending order.
-   *
- *  @param time time in milliseconds (-1: from the latest offset available, -2: from the smallest offset available)
-   *  @return an array of offsets
-   */
-  def getOffsetsBefore(topic: String, partition: Int, time: Long, maxNumOffsets: Int): Array[Long] = {
-    lock synchronized {
-      getOrMakeConnection()
-      var response: Tuple2[Receive,Int] = null
-      try {
-        sendRequest(new OffsetRequest(topic, partition, time, maxNumOffsets))
-        response = getResponse
-      } catch {
-        case e : java.io.IOException =>
-          info("Reconnect in get offetset request due to socket error: ", e)
-          // retry once
-          try {
-            channel = connect
-            sendRequest(new OffsetRequest(topic, partition, time, maxNumOffsets))
-            response = getResponse
-          }catch {
-            case ioe: java.io.IOException => channel = null; throw ioe;
-          }
-      }
-      OffsetRequest.deserializeOffsetArray(response._1.buffer)
-    }
-  }
-
-  private def sendRequest(request: Request) = {
-    val send = new BoundedByteBufferSend(request)
-    send.writeCompletely(channel)
-  }
-
-  private def getResponse(): Tuple2[Receive,Int] = {
-    val response = new BoundedByteBufferReceive()
-    response.readCompletely(channel)
-
-    // this has the side effect of setting the initial position of buffer correctly
-    val errorCode: Int = response.buffer.getShort
-    (response, errorCode)
-  }
-
-  private def getOrMakeConnection() {
-    if(channel == null) {
-      channel = connect()
-    }
-  }
-}
-
-trait SimpleConsumerStatsMBean {
-  def getFetchRequestsPerSecond: Double
-  def getAvgFetchRequestMs: Double
-  def getMaxFetchRequestMs: Double
-  def getNumFetchRequests: Long  
-  def getConsumerThroughput: Double
-}
-
-@threadsafe
-class SimpleConsumerStats extends SimpleConsumerStatsMBean {
-  private val fetchRequestStats = new SnapshotStats
-
-  def recordFetchRequest(requestNs: Long) = fetchRequestStats.recordRequestMetric(requestNs)
-
-  def recordConsumptionThroughput(data: Long) = fetchRequestStats.recordThroughputMetric(data)
-
-  def getFetchRequestsPerSecond: Double = fetchRequestStats.getRequestsPerSecond
-
-  def getAvgFetchRequestMs: Double = fetchRequestStats.getAvgMetric / (1000.0 * 1000.0)
-
-  def getMaxFetchRequestMs: Double = fetchRequestStats.getMaxMetric / (1000.0 * 1000.0)
-
-  def getNumFetchRequests: Long = fetchRequestStats.getNumRequests
-
-  def getConsumerThroughput: Double = fetchRequestStats.getThroughput
-}
-
-object SimpleConsumerStats extends Logging {
-  private val simpleConsumerstatsMBeanName = "kafka:type=kafka.SimpleConsumerStats"
-  private val stats = new SimpleConsumerStats
-  Utils.registerMBean(stats, simpleConsumerstatsMBeanName)
-
-  def recordFetchRequest(requestMs: Long) = stats.recordFetchRequest(requestMs)
-  def recordConsumptionThroughput(data: Long) = stats.recordConsumptionThroughput(data)
-}
-
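A minimal sketch of the low-level fetch path defined above: ask a broker for its earliest available offset for one partition, then fetch a single chunk of messages starting there. Host, port, topic and partition are illustrative; the timeout and sizes reuse the ConsumerConfig defaults, and the sketch assumes the topic exists so at least one offset comes back:

    import kafka.api.{FetchRequest, OffsetRequest}
    import kafka.consumer.SimpleConsumer

    val consumer = new SimpleConsumer("localhost", 9092, 30 * 1000, 64 * 1024)
    val offsets  = consumer.getOffsetsBefore("my-topic", 0, OffsetRequest.EarliestTime, 1)
    val messages = consumer.fetch(new FetchRequest("my-topic", 0, offsets(0), 1024 * 1024))
    for (messageAndOffset <- messages)   // ByteBufferMessageSet iterates over message/offset pairs
      println("message at offset " + messageAndOffset.offset)
    consumer.close()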
diff --git a/trunk/core/src/main/scala/kafka/consumer/TopicCount.scala b/trunk/core/src/main/scala/kafka/consumer/TopicCount.scala
deleted file mode 100644
index 2ef13d4..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/TopicCount.scala
+++ /dev/null
@@ -1,176 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import scala.collection._
-import org.I0Itec.zkclient.ZkClient
-import java.util.regex.Pattern
-import kafka.utils.{SyncJSON, ZKGroupDirs, ZkUtils, Logging}
-
-
-private[kafka] trait TopicCount {
-  def getConsumerThreadIdsPerTopic: Map[String, Set[String]]
-
-  def dbString: String
-  
-  protected def makeConsumerThreadIdsPerTopic(consumerIdString: String,
-                                            topicCountMap: Map[String,  Int]) = {
-    val consumerThreadIdsPerTopicMap = new mutable.HashMap[String, Set[String]]()
-    for ((topic, nConsumers) <- topicCountMap) {
-      val consumerSet = new mutable.HashSet[String]
-      assert(nConsumers >= 1)
-      for (i <- 0 until nConsumers)
-        consumerSet += consumerIdString + "-" + i
-      consumerThreadIdsPerTopicMap.put(topic, consumerSet)
-    }
-    consumerThreadIdsPerTopicMap
-  }
-}
-
-private[kafka] object TopicCount extends Logging {
-
-  /*
-   * Example of whitelist topic count stored in ZooKeeper:
-   * Topics with whitetopic as prefix, and four streams: *4*whitetopic.*
-   *
-   * Example of blacklist topic count stored in ZooKeeper:
-   * Topics with blacktopic as prefix, and four streams: !4!blacktopic.*
-   */
-
-  val WHITELIST_MARKER = "*"
-  val BLACKLIST_MARKER = "!"
-  private val WHITELIST_PATTERN =
-    Pattern.compile("""\*(\p{Digit}+)\*(.*)""")
-  private val BLACKLIST_PATTERN =
-    Pattern.compile("""!(\p{Digit}+)!(.*)""")
-
-  def constructTopicCount(group: String,
-                          consumerId: String,
-                          zkClient: ZkClient) : TopicCount = {
-    val dirs = new ZKGroupDirs(group)
-    val topicCountString = ZkUtils.readData(zkClient, dirs.consumerRegistryDir + "/" + consumerId)
-    val hasWhitelist = topicCountString.startsWith(WHITELIST_MARKER)
-    val hasBlacklist = topicCountString.startsWith(BLACKLIST_MARKER)
-
-    if (hasWhitelist || hasBlacklist)
-      info("Constructing topic count for %s from %s using %s as pattern."
-        .format(consumerId, topicCountString,
-          if (hasWhitelist) WHITELIST_PATTERN else BLACKLIST_PATTERN))
-
-    if (hasWhitelist || hasBlacklist) {
-      val matcher = if (hasWhitelist)
-        WHITELIST_PATTERN.matcher(topicCountString)
-      else
-        BLACKLIST_PATTERN.matcher(topicCountString)
-      require(matcher.matches())
-      val numStreams = matcher.group(1).toInt
-      val regex = matcher.group(2)
-      val filter = if (hasWhitelist)
-        new Whitelist(regex)
-      else
-        new Blacklist(regex)
-
-      new WildcardTopicCount(zkClient, consumerId, filter, numStreams)
-    }
-    else {
-      var topMap : Map[String,Int] = null
-      try {
-        SyncJSON.parseFull(topicCountString) match {
-          case Some(m) => topMap = m.asInstanceOf[Map[String,Int]]
-          case None => throw new RuntimeException("error constructing TopicCount : " + topicCountString)
-        }
-      }
-      catch {
-        case e =>
-          error("error parsing consumer json string " + topicCountString, e)
-          throw e
-      }
-
-      new StaticTopicCount(consumerId, topMap)
-    }
-  }
-
-  def constructTopicCount(consumerIdString: String, topicCount: Map[String,  Int]) =
-    new StaticTopicCount(consumerIdString, topicCount)
-
-  def constructTopicCount(consumerIdString: String,
-                          filter: TopicFilter,
-                          numStreams: Int,
-                          zkClient: ZkClient) =
-    new WildcardTopicCount(zkClient, consumerIdString, filter, numStreams)
-
-}
-
-private[kafka] class StaticTopicCount(val consumerIdString: String,
-                                val topicCountMap: Map[String, Int])
-                                extends TopicCount {
-
-  def getConsumerThreadIdsPerTopic =
-    makeConsumerThreadIdsPerTopic(consumerIdString, topicCountMap)
-
-  override def equals(obj: Any): Boolean = {
-    obj match {
-      case null => false
-      case n: StaticTopicCount => consumerIdString == n.consumerIdString && topicCountMap == n.topicCountMap
-      case _ => false
-    }
-  }
-
-  /**
-   *  return json of
-   *  { "topic1" : 4,
-   *    "topic2" : 4
-   *  }
-   */
-  def dbString = {
-    val builder = new StringBuilder
-    builder.append("{ ")
-    var i = 0
-    for ( (topic, nConsumers) <- topicCountMap) {
-      if (i > 0)
-        builder.append(",")
-      builder.append("\"" + topic + "\": " + nConsumers)
-      i += 1
-    }
-    builder.append(" }")
-    builder.toString()
-  }
-}
-
-private[kafka] class WildcardTopicCount(zkClient: ZkClient,
-                                        consumerIdString: String,
-                                        topicFilter: TopicFilter,
-                                        numStreams: Int) extends TopicCount {
-  def getConsumerThreadIdsPerTopic = {
-    val wildcardTopics = ZkUtils.getChildrenParentMayNotExist(
-      zkClient, ZkUtils.BrokerTopicsPath).filter(topicFilter.isTopicAllowed(_))
-    makeConsumerThreadIdsPerTopic(consumerIdString,
-                                  Map(wildcardTopics.map((_, numStreams)): _*))
-  }
-
-  def dbString = {
-    val marker = topicFilter match {
-      case wl: Whitelist => TopicCount.WHITELIST_MARKER
-      case bl: Blacklist => TopicCount.BLACKLIST_MARKER
-    }
-
-    "%s%d%s%s".format(marker, numStreams, marker, topicFilter.regex)
-  }
-
-}
-
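A minimal sketch of the subscription strings stored in ZooKeeper by dbString above (consumer id and topics are illustrative; the classes are private[kafka], so this only compiles inside the kafka package):

    // static subscription: a JSON map of topic -> stream count
    new StaticTopicCount("group1_consumer1", Map("topic1" -> 4, "topic2" -> 4)).dbString
    // => { "topic1": 4,"topic2": 4 }

    // wildcard subscriptions: marker, stream count, marker, regex
    //   whitelist, four streams over topics matching whitetopic.* : *4*whitetopic.*
    //   blacklist, four streams over topics matching blacktopic.* : !4!blacktopic.*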
diff --git a/trunk/core/src/main/scala/kafka/consumer/TopicEventHandler.scala b/trunk/core/src/main/scala/kafka/consumer/TopicEventHandler.scala
deleted file mode 100644
index 2423f0a..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/TopicEventHandler.scala
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-trait TopicEventHandler[T] {
-
-  def handleTopicEvent(allTopics: Seq[T])
-
-}
diff --git a/trunk/core/src/main/scala/kafka/consumer/TopicFilter.scala b/trunk/core/src/main/scala/kafka/consumer/TopicFilter.scala
deleted file mode 100644
index cf3853b..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/TopicFilter.scala
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-
-import kafka.utils.Logging
-import java.util.regex.{PatternSyntaxException, Pattern}
-
-
-sealed abstract class TopicFilter(rawRegex: String) extends Logging {
-
-  val regex = rawRegex
-          .trim
-          .replace(',', '|')
-          .replace(" ", "")
-          .replaceAll("""^["']+""","")
-          .replaceAll("""["']+$""","") // property files may bring quotes
-
-  try {
-    Pattern.compile(regex)
-  }
-  catch {
-    case e: PatternSyntaxException =>
-      throw new RuntimeException(regex + " is an invalid regex.")
-  }
-
-  override def toString = regex
-
-  def requiresTopicEventWatcher: Boolean
-
-  def isTopicAllowed(topic: String): Boolean
-}
-
-case class Whitelist(rawRegex: String) extends TopicFilter(rawRegex) {
-  override def requiresTopicEventWatcher = !regex.matches("""[\p{Alnum}-|]+""")
-
-  override def isTopicAllowed(topic: String) = {
-    val allowed = topic.matches(regex)
-
-    debug("%s %s".format(
-      topic, if (allowed) "allowed" else "filtered"))
-
-    allowed
-  }
-
-
-}
-
-case class Blacklist(rawRegex: String) extends TopicFilter(rawRegex) {
-  override def requiresTopicEventWatcher = true
-
-  override def isTopicAllowed(topic: String) = {
-    val allowed = !topic.matches(regex)
-
-    debug("%s %s".format(
-      topic, if (allowed) "allowed" else "filtered"))
-
-    allowed
-  }
-}
-
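A minimal sketch of how the normalization above treats a comma-separated, quoted property value (topic names are illustrative):

    val filter = Whitelist("\"white1, white2\"")   // quotes and spaces stripped, ',' becomes '|'
    filter.regex                                   // => white1|white2
    filter.isTopicAllowed("white1")                // => true
    filter.isTopicAllowed("black1")                // => false
    filter.requiresTopicEventWatcher               // => false: plain alternation, no wildcard characters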
diff --git a/trunk/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala b/trunk/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala
deleted file mode 100644
index f7782df..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala
+++ /dev/null
@@ -1,810 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import java.util.concurrent._
-import java.util.concurrent.atomic._
-import locks.ReentrantLock
-import scala.collection._
-import kafka.cluster._
-import kafka.utils._
-import org.I0Itec.zkclient.exception.ZkNodeExistsException
-import java.net.InetAddress
-import org.I0Itec.zkclient.{IZkStateListener, IZkChildListener, ZkClient}
-import org.apache.zookeeper.Watcher.Event.KeeperState
-import kafka.api.OffsetRequest
-import java.util.UUID
-import kafka.serializer.Decoder
-import kafka.common.{ConsumerRebalanceFailedException, InvalidConfigException}
-import java.lang.IllegalStateException
-import kafka.utils.ZkUtils._
-
-
-/**
- * This class handles the consumer's interaction with zookeeper
- *
- * Directories:
- * 1. Consumer id registry:
- * /consumers/[group_id]/ids/[consumer_id] -> topic1,...topicN
- * A consumer has a unique consumer id within a consumer group. A consumer registers its id as an ephemeral znode
- * and puts all topics that it subscribes to as the value of the znode. The znode is deleted when the client is gone.
- * A consumer subscribes to event changes of the consumer id registry within its group.
- *
- * The consumer id is picked up from configuration, instead of the sequential id assigned by ZK. Generated sequential
- * ids are hard to recover during temporary connection loss to ZK, since it's difficult for the client to figure out
- * whether the creation of a sequential znode has succeeded or not. More details can be found at
- * (http://wiki.apache.org/hadoop/ZooKeeper/ErrorHandling)
- *
- * 2. Broker node registry:
- * /brokers/[0...N] --> { "host" : "host:port",
- *                        "topics" : {"topic1": ["partition1" ... "partitionN"], ...,
- *                                    "topicN": ["partition1" ... "partitionN"] } }
- * This is a list of all present brokers. A unique logical node id is configured on each broker node. A broker
- * node registers itself on start-up and creates a znode with the logical node id under /brokers. The value of the znode
- * is a JSON String that contains (1) the host name and the port the broker is listening to, (2) a list of topics that
- * the broker serves, (3) a list of logical partitions assigned to each topic on the broker.
- * A consumer subscribes to event changes of the broker node registry.
- *
- * 3. Partition owner registry:
- * /consumers/[group_id]/owner/[topic]/[broker_id-partition_id] --> consumer_node_id
- * This stores the mapping between broker partitions and consumers. Each partition is owned by a unique consumer
- * within a consumer group. The mapping is reestablished after each rebalancing.
- *
- * 4. Consumer offset tracking:
- * /consumers/[group_id]/offsets/[topic]/[broker_id-partition_id] --> offset_counter_value
- * Each consumer tracks the offset of the latest message consumed for each partition.
- *
- */
-private[kafka] object ZookeeperConsumerConnector {
-  val shutdownCommand: FetchedDataChunk = new FetchedDataChunk(null, null, -1L)
-}
-
-/**
- *  JMX interface for monitoring consumer
- */
-trait ZookeeperConsumerConnectorMBean {
-  def getPartOwnerStats: String
-  def getConsumerGroup: String
-  def getOffsetLag(topic: String, brokerId: Int, partitionId: Int): Long
-  def getConsumedOffset(topic: String, brokerId: Int, partitionId: Int): Long
-  def getLatestOffset(topic: String, brokerId: Int, partitionId: Int): Long
-}
-
-private[kafka] class ZookeeperConsumerConnector(val config: ConsumerConfig,
-                                                val enableFetcher: Boolean) // for testing only
-        extends ConsumerConnector with ZookeeperConsumerConnectorMBean
-        with Logging {
-  private val isShuttingDown = new AtomicBoolean(false)
-  private val rebalanceLock = new Object
-  private var fetcher: Option[Fetcher] = None
-  private var zkClient: ZkClient = null
-  private var topicRegistry = new Pool[String, Pool[Partition, PartitionTopicInfo]]
-  // topicThreadIdAndQueues : (topic,consumerThreadId) -> queue
-  private val topicThreadIdAndQueues = new Pool[Tuple2[String,String], BlockingQueue[FetchedDataChunk]]
-  private val scheduler = new KafkaScheduler(1, "Kafka-consumer-autocommit-", false)
-  private val messageStreamCreated = new AtomicBoolean(false)
-
-  private var sessionExpirationListener: ZKSessionExpireListener = null
-  private var loadBalancerListener: ZKRebalancerListener = null
-
-  private var wildcardTopicWatcher: ZookeeperTopicEventWatcher = null
-
-  val consumerIdString = {
-    var consumerUuid : String = null
-    config.consumerId match {
-      case Some(consumerId) => // for testing only
-        consumerUuid = consumerId
-      case None => // generate unique consumerId automatically
-        val uuid = UUID.randomUUID()
-        consumerUuid = "%s-%d-%s".format(
-          InetAddress.getLocalHost.getHostName, System.currentTimeMillis,
-          uuid.getMostSignificantBits().toHexString.substring(0,8))
-    }
-    config.groupId + "_" + consumerUuid
-  }
-  this.logIdent = consumerIdString + " "
-
-  connectZk()
-  createFetcher()
-  if (config.autoCommit) {
-    info("starting auto committer every " + config.autoCommitIntervalMs + " ms")
-    scheduler.scheduleWithRate(autoCommit, config.autoCommitIntervalMs, config.autoCommitIntervalMs)
-  }
-
-  def this(config: ConsumerConfig) = this(config, true)
-
-  def createMessageStreams[T](topicCountMap: Map[String,Int],
-                              decoder: Decoder[T])
-      : Map[String,List[KafkaStream[T]]] = {
-    if (messageStreamCreated.getAndSet(true))
-      throw new RuntimeException(this.getClass.getSimpleName +
-                                   " can create message streams at most once")
-    consume(topicCountMap, decoder)
-  }
-
-  def createMessageStreamsByFilter[T](topicFilter: TopicFilter, numStreams: Int, decoder: Decoder[T]) = {
-    val wildcardStreamsHandler = new WildcardStreamsHandler[T](topicFilter, numStreams, decoder)
-    wildcardStreamsHandler.streams
-  }
-
-  private def createFetcher() {
-    if (enableFetcher)
-      fetcher = Some(new Fetcher(config, zkClient))
-  }
-
-  private def connectZk() {
-    info("Connecting to zookeeper instance at " + config.zkConnect)
-    zkClient = new ZkClient(config.zkConnect, config.zkSessionTimeoutMs, config.zkConnectionTimeoutMs, ZKStringSerializer)
-  }
-
-  def shutdown() {
-    val canShutdown = isShuttingDown.compareAndSet(false, true)
-    if (canShutdown) {
-      info("ZKConsumerConnector shutting down")
-
-      if (wildcardTopicWatcher != null)
-        wildcardTopicWatcher.shutdown()
-      try {
-        scheduler.shutdownNow()
-        fetcher match {
-          case Some(f) => f.stopConnectionsToAllBrokers
-          case None =>
-        }
-        sendShutdownToAllQueues()
-        if (config.autoCommit)
-          commitOffsets()
-        if (zkClient != null) {
-          zkClient.close()
-          zkClient = null
-        }
-      }
-      catch {
-        case e =>
-          fatal("error during consumer connector shutdown", e)
-      }
-      info("ZKConsumerConnector shut down completed")
-    }
-  }
-
-  def consume[T](topicCountMap: scala.collection.Map[String,Int],
-                 decoder: Decoder[T])
-      : Map[String,List[KafkaStream[T]]] = {
-    debug("entering consume ")
-    if (topicCountMap == null)
-      throw new RuntimeException("topicCountMap is null")
-
-    val topicCount = TopicCount.constructTopicCount(consumerIdString, topicCountMap)
-
-    val topicThreadIds = topicCount.getConsumerThreadIdsPerTopic
-
-    // make a list of (queue,stream) pairs, one pair for each threadId
-    val queuesAndStreams = topicThreadIds.values.map(threadIdSet =>
-      threadIdSet.map(_ => {
-        val queue =  new LinkedBlockingQueue[FetchedDataChunk](config.maxQueuedChunks)
-        val stream = new KafkaStream[T](
-          queue, config.consumerTimeoutMs, decoder, config.enableShallowIterator)
-        (queue, stream)
-      })
-    ).flatten.toList
-
-    val dirs = new ZKGroupDirs(config.groupId)
-    registerConsumerInZK(dirs, consumerIdString, topicCount)
-    reinitializeConsumer(topicCount, queuesAndStreams)
-
-    loadBalancerListener.kafkaMessageAndMetadataStreams.asInstanceOf[Map[String, List[KafkaStream[T]]]]
-  }
-
-  private def registerConsumerInZK(dirs: ZKGroupDirs, consumerIdString: String, topicCount: TopicCount) {
-    info("begin registering consumer " + consumerIdString + " in ZK")
-    createEphemeralPathExpectConflict(zkClient,
-                                      dirs.consumerRegistryDir + "/" + consumerIdString,
-                                      topicCount.dbString)
-    info("end registering consumer " + consumerIdString + " in ZK")
-  }
-
-  private def sendShutdownToAllQueues() = {
-    for (queue <- topicThreadIdAndQueues.values) {
-      debug("Clearing up queue")
-      queue.clear()
-      queue.put(ZookeeperConsumerConnector.shutdownCommand)
-      debug("Cleared queue and sent shutdown command")
-    }
-  }
-
-  def autoCommit() {
-    trace("auto committing")
-    try {
-      commitOffsets()
-    }
-    catch {
-      case t: Throwable =>
-      // log it and let it go
-        error("exception during autoCommit: ", t)
-    }
-  }
-
-  def commitOffsets() {
-    if (zkClient == null) {
-      error("zk client is null. Cannot commit offsets")
-      return
-    }
-    for ((topic, infos) <- topicRegistry) {
-      val topicDirs = new ZKGroupTopicDirs(config.groupId, topic)
-      for (info <- infos.values) {
-        val newOffset = info.getConsumeOffset
-        try {
-          updatePersistentPath(zkClient, topicDirs.consumerOffsetDir + "/" + info.partition.name,
-            newOffset.toString)
-        }
-        catch {
-          case t: Throwable =>
-          // log it and let it go
-            warn("exception during commitOffsets",  t)
-        }
-        debug("Committed offset " + newOffset + " for topic " + info)
-      }
-    }
-  }
-
-  // for JMX
-  def getPartOwnerStats(): String = {
-    val builder = new StringBuilder
-    for ((topic, infos) <- topicRegistry) {
-      builder.append("\n" + topic + ": [")
-      val topicDirs = new ZKGroupTopicDirs(config.groupId, topic)
-      for(partition <- infos.values) {
-        builder.append("\n    {")
-        builder.append{partition.partition.name}
-        builder.append(",fetch offset:" + partition.getFetchOffset)
-        builder.append(",consumer offset:" + partition.getConsumeOffset)
-        builder.append("}")
-      }
-      builder.append("\n        ]")
-    }
-    builder.toString
-  }
-
-  // for JMX
-  def getConsumerGroup(): String = config.groupId
-
-  def getOffsetLag(topic: String, brokerId: Int, partitionId: Int): Long =
-    getLatestOffset(topic, brokerId, partitionId) - getConsumedOffset(topic, brokerId, partitionId)
-
-  def getConsumedOffset(topic: String, brokerId: Int, partitionId: Int): Long = {
-    val partition = new Partition(brokerId, partitionId)
-    val partitionInfos = topicRegistry.get(topic)
-    if (partitionInfos != null) {
-      val partitionInfo = partitionInfos.get(partition)
-      if (partitionInfo != null)
-        return partitionInfo.getConsumeOffset
-    }
-
-    //otherwise, try to get it from zookeeper
-    try {
-      val topicDirs = new ZKGroupTopicDirs(config.groupId, topic)
-      val znode = topicDirs.consumerOffsetDir + "/" + partition.name
-      val offsetString = readDataMaybeNull(zkClient, znode)
-      if (offsetString != null)
-        return offsetString.toLong
-      else
-        return -1
-    }
-    catch {
-      case e =>
-        error("error in getConsumedOffset JMX ", e)
-    }
-    return -2
-  }
-
-  def getLatestOffset(topic: String, brokerId: Int, partitionId: Int): Long =
-    earliestOrLatestOffset(topic, brokerId, partitionId, OffsetRequest.LatestTime)
-
-  private def earliestOrLatestOffset(topic: String, brokerId: Int, partitionId: Int, earliestOrLatest: Long): Long = {
-    var simpleConsumer: SimpleConsumer = null
-    var producedOffset: Long = -1L
-    try {
-      val cluster = getCluster(zkClient)
-      val broker = cluster.getBroker(brokerId) match {
-        case Some(b) => b
-        case None => throw new IllegalStateException("Broker " + brokerId + " is unavailable. Cannot issue " +
-          "getOffsetsBefore request")
-      }
-      simpleConsumer = new SimpleConsumer(broker.host, broker.port, ConsumerConfig.SocketTimeout,
-                                            ConsumerConfig.SocketBufferSize)
-      val offsets = simpleConsumer.getOffsetsBefore(topic, partitionId, earliestOrLatest, 1)
-      producedOffset = offsets(0)
-    }
-    catch {
-      case e =>
-        error("error in earliestOrLatestOffset() ", e)
-    }
-    finally {
-      if (simpleConsumer != null)
-        simpleConsumer.close
-    }
-    producedOffset
-  }
-
-  class ZKSessionExpireListener(val dirs: ZKGroupDirs,
-                                 val consumerIdString: String,
-                                 val topicCount: TopicCount,
-                                 val loadBalancerListener: ZKRebalancerListener)
-    extends IZkStateListener {
-    @throws(classOf[Exception])
-    def handleStateChanged(state: KeeperState) {
-      // do nothing, since zkclient will do reconnect for us.
-    }
-
-    /**
-     * Called after the zookeeper session has expired and a new session has been created. You would have to re-create
-     * any ephemeral nodes here.
-     *
-     * @throws Exception
-     *             On any error.
-     */
-    @throws(classOf[Exception])
-    def handleNewSession() {
-      /**
-       *  When we get a SessionExpired event, we lost all ephemeral nodes and zkclient has reestablished a
-       *  connection for us. We need to release the ownership of the current consumer and re-register this
-       *  consumer in the consumer registry and trigger a rebalance.
-       */
-      info("ZK expired; release old broker parition ownership; re-register consumer " + consumerIdString)
-      loadBalancerListener.resetState()
-      registerConsumerInZK(dirs, consumerIdString, topicCount)
-      // explicitly trigger load balancing for this consumer
-      loadBalancerListener.syncedRebalance()
-
-      // There is no need to resubscribe to child and state changes.
-      // The child change watchers will be set inside rebalance when we read the children list.
-    }
-
-  }
-
-  class ZKRebalancerListener(val group: String, val consumerIdString: String,
-                             val kafkaMessageAndMetadataStreams: mutable.Map[String,List[KafkaStream[_]]])
-    extends IZkChildListener {
-    private var isWatcherTriggered = false
-    private val lock = new ReentrantLock
-    private val cond = lock.newCondition()
-    private val watcherExecutorThread = new Thread(consumerIdString + "_watcher_executor") {
-      override def run() {
-        info("starting watcher executor thread for consumer " + consumerIdString)
-        var doRebalance = false
-        while (!isShuttingDown.get) {
-          try {
-            lock.lock()
-            try {
-              if (!isWatcherTriggered)
-                cond.await(1000, TimeUnit.MILLISECONDS) // wake up periodically so that it can check the shutdown flag
-            } finally {
-              doRebalance = isWatcherTriggered
-              isWatcherTriggered = false
-              lock.unlock()
-            }
-            if (doRebalance)
-              syncedRebalance
-          } catch {
-            case t => error("error during syncedRebalance", t)
-          }
-        }
-        info("stopping watcher executor thread for consumer " + consumerIdString)
-      }
-    }
-    watcherExecutorThread.start()
-
-    @throws(classOf[Exception])
-    def handleChildChange(parentPath : String, curChilds : java.util.List[String]) {
-      lock.lock()
-      try {
-        isWatcherTriggered = true
-        cond.signalAll()
-      } finally {
-        lock.unlock()
-      }
-    }
-
-    private def deletePartitionOwnershipFromZK(topic: String, partition: String) {
-      val topicDirs = new ZKGroupTopicDirs(group, topic)
-      val znode = topicDirs.consumerOwnerDir + "/" + partition
-      deletePath(zkClient, znode)
-      debug("Consumer " + consumerIdString + " releasing " + znode)
-    }
-
-    private def releasePartitionOwnership(localTopicRegistry: Pool[String, Pool[Partition, PartitionTopicInfo]])= {
-      info("Releasing partition ownership")
-      for ((topic, infos) <- localTopicRegistry) {
-        for(partition <- infos.keys)
-          deletePartitionOwnershipFromZK(topic, partition.toString)
-        localTopicRegistry.remove(topic)
-      }
-    }
-
-    def resetState() {
-      topicRegistry.clear
-    }
-
-    def syncedRebalance() {
-      rebalanceLock synchronized {
-        for (i <- 0 until config.maxRebalanceRetries) {
-          info("begin rebalancing consumer " + consumerIdString + " try #" + i)
-          var done = false
-          val cluster = getCluster(zkClient)
-          try {
-            done = rebalance(cluster)
-          }
-          catch {
-            case e =>
-              /** occasionally, we may hit a ZK exception because the ZK state is changing while we are iterating.
-               * For example, a ZK node can disappear between the time we get all children and the time we try to get
-               * the value of a child. Just let this go since another rebalance will be triggered.
-               **/
-              info("exception during rebalance ", e)
-          }
-          info("end rebalancing consumer " + consumerIdString + " try #" + i)
-          if (done) {
-            return
-          } else {
-              /* Here the cache is at risk of being stale. To make future rebalancing decisions correctly, we should
-               * clear the cache */
-              info("Rebalancing attempt failed. Clearing the cache before the next rebalancing operation is triggered")
-          }
-          // stop all fetchers and clear all the queues to avoid data duplication
-          closeFetchersForQueues(cluster, kafkaMessageAndMetadataStreams, topicThreadIdAndQueues.map(q => q._2))
-          Thread.sleep(config.rebalanceBackoffMs)
-        }
-      }
-
-      throw new ConsumerRebalanceFailedException(consumerIdString + " can't rebalance after " + config.maxRebalanceRetries +" retries")
-    }
-
-    private def rebalance(cluster: Cluster): Boolean = {
-      val myTopicThreadIdsMap = TopicCount.constructTopicCount(group, consumerIdString, zkClient).getConsumerThreadIdsPerTopic
-      val consumersPerTopicMap = getConsumersPerTopic(zkClient, group)
-      val partitionsPerTopicMap = getPartitionsForTopics(zkClient, myTopicThreadIdsMap.keys.iterator)
-
-      /**
-       * fetchers must be stopped to avoid data duplication, since if the current
-       * rebalancing attempt fails, the partitions that are released could be owned by another consumer.
-       * But if we don't stop the fetchers first, this consumer would continue returning data for released
-       * partitions in parallel. So, not stopping the fetchers leads to duplicate data.
-       */
-      closeFetchers(cluster, kafkaMessageAndMetadataStreams, myTopicThreadIdsMap)
-
-      releasePartitionOwnership(topicRegistry)
-
-      var partitionOwnershipDecision = new collection.mutable.HashMap[(String, String), String]()
-      var currentTopicRegistry = new Pool[String, Pool[Partition, PartitionTopicInfo]]
-
-      for ((topic, consumerThreadIdSet) <- myTopicThreadIdsMap) {
-        currentTopicRegistry.put(topic, new Pool[Partition, PartitionTopicInfo])
-
-        val topicDirs = new ZKGroupTopicDirs(group, topic)
-        val curConsumers = consumersPerTopicMap.get(topic).get
-        var curPartitions: List[String] = partitionsPerTopicMap.get(topic).get
-
-        val nPartsPerConsumer = curPartitions.size / curConsumers.size
-        val nConsumersWithExtraPart = curPartitions.size % curConsumers.size
-
-        info("Consumer " + consumerIdString + " rebalancing the following partitions: " + curPartitions +
-          " for topic " + topic + " with consumers: " + curConsumers)
-
-        for (consumerThreadId <- consumerThreadIdSet) {
-          val myConsumerPosition = curConsumers.findIndexOf(_ == consumerThreadId)
-          assert(myConsumerPosition >= 0)
-          val startPart = nPartsPerConsumer*myConsumerPosition + myConsumerPosition.min(nConsumersWithExtraPart)
-          val nParts = nPartsPerConsumer + (if (myConsumerPosition + 1 > nConsumersWithExtraPart) 0 else 1)
-
-          /**
-           *   Range-partition the sorted partitions to consumers for better locality.
-           *  The first few consumers pick up an extra partition, if any.
-           */
-          if (nParts <= 0)
-            warn("No broker partitions consumed by consumer thread " + consumerThreadId + " for topic " + topic)
-          else {
-            for (i <- startPart until startPart + nParts) {
-              val partition = curPartitions(i)
-              info(consumerThreadId + " attempting to claim partition " + partition)
-              addPartitionTopicInfo(currentTopicRegistry, topicDirs, partition, topic, consumerThreadId)
-              // record the partition ownership decision
-              partitionOwnershipDecision += ((topic, partition) -> consumerThreadId)
-            }
-          }
-        }
-      }
-
-      /**
-       * move the partition ownership here, since that can be used to indicate a truly successful rebalancing attempt.
-       * A rebalancing attempt is completed successfully only after the fetchers have been started correctly.
-       */
-      if(reflectPartitionOwnershipDecision(partitionOwnershipDecision.toMap)) {
-        info("Updating the cache")
-        debug("Partitions per topic cache " + partitionsPerTopicMap)
-        debug("Consumers per topic cache " + consumersPerTopicMap)
-        topicRegistry = currentTopicRegistry
-        updateFetcher(cluster)
-        true
-      } else {
-        false
-      }
-    }
-
-    private def closeFetchersForQueues(cluster: Cluster,
-                                       messageStreams: Map[String,List[KafkaStream[_]]],
-                                       queuesToBeCleared: Iterable[BlockingQueue[FetchedDataChunk]]) {
-      var allPartitionInfos = topicRegistry.values.map(p => p.values).flatten
-      fetcher match {
-        case Some(f) =>
-          f.stopConnectionsToAllBrokers
-          f.clearFetcherQueues(allPartitionInfos, cluster, queuesToBeCleared, messageStreams)
-          info("Committing all offsets after clearing the fetcher queues")
-          /**
-           * here, we need to commit offsets before stopping the consumer from returning any more messages
-           * from the current data chunk. Since partition ownership is not yet released, this commit offsets
-           * call will ensure that the offsets committed now will be used by the next consumer thread owning the partition
-           * for the current data chunk. Since the fetchers are already shutdown and this is the last chunk to be iterated
-           * by the consumer, there will be no more messages returned by this iterator until the rebalancing finishes
-           * successfully and the fetchers restart to fetch more data chunks
-           **/
-          commitOffsets
-        case None =>
-      }
-    }
-
-    private def closeFetchers(cluster: Cluster, messageStreams: Map[String,List[KafkaStream[_]]],
-                              relevantTopicThreadIdsMap: Map[String, Set[String]]) {
-      // only clear the fetcher queues for certain topic partitions that *might* no longer be served by this consumer
-      // after this rebalancing attempt
-      val queuesTobeCleared = topicThreadIdAndQueues.filter(q => relevantTopicThreadIdsMap.contains(q._1._1)).map(q => q._2)
-      closeFetchersForQueues(cluster, messageStreams, queuesTobeCleared)
-    }
-
-    private def updateFetcher(cluster: Cluster) {
-      // update partitions for fetcher
-      var allPartitionInfos : List[PartitionTopicInfo] = Nil
-      for (partitionInfos <- topicRegistry.values)
-        for (partition <- partitionInfos.values)
-          allPartitionInfos ::= partition
-      info("Consumer " + consumerIdString + " selected partitions : " +
-        allPartitionInfos.sortWith((s,t) => s.partition < t.partition).map(_.toString).mkString(","))
-
-      fetcher match {
-        case Some(f) =>
-          f.startConnections(allPartitionInfos, cluster)
-        case None =>
-      }
-    }
-
-    private def reflectPartitionOwnershipDecision(partitionOwnershipDecision: Map[(String, String), String]): Boolean = {
-      var successfullyOwnedPartitions : List[(String, String)] = Nil
-      val partitionOwnershipSuccessful = partitionOwnershipDecision.map { partitionOwner =>
-        val topic = partitionOwner._1._1
-        val partition = partitionOwner._1._2
-        val consumerThreadId = partitionOwner._2
-        val topicDirs = new ZKGroupTopicDirs(group, topic)
-        val partitionOwnerPath = topicDirs.consumerOwnerDir + "/" + partition
-        try {
-          createEphemeralPathExpectConflict(zkClient, partitionOwnerPath, consumerThreadId)
-          info(consumerThreadId + " successfully owned partition " + partition + " for topic " + topic)
-          successfullyOwnedPartitions ::= (topic, partition)
-          true
-        }
-        catch {
-          case e: ZkNodeExistsException =>
-            // The node hasn't been deleted by the original owner. So wait a bit and retry.
-            info("waiting for the partition ownership to be deleted: " + partition)
-            false
-          case e2 => throw e2
-        }
-      }
-      val hasPartitionOwnershipFailed = partitionOwnershipSuccessful.foldLeft(0)((sum, decision) => sum + (if(decision) 0 else 1))
-      /* even if one of the partition ownership attempt has failed, return false */
-      if(hasPartitionOwnershipFailed > 0) {
-        // remove all paths that we have owned in ZK
-        successfullyOwnedPartitions.foreach(topicAndPartition => deletePartitionOwnershipFromZK(topicAndPartition._1, topicAndPartition._2))
-        false
-      }
-      else true
-    }
-
-    private def addPartitionTopicInfo(currentTopicRegistry: Pool[String, Pool[Partition, PartitionTopicInfo]],
-                                      topicDirs: ZKGroupTopicDirs, partitionString: String,
-                                      topic: String, consumerThreadId: String) {
-      val partition = Partition.parse(partitionString)
-      val partTopicInfoMap = currentTopicRegistry.get(topic)
-
-      val znode = topicDirs.consumerOffsetDir + "/" + partition.name
-      val offsetString = readDataMaybeNull(zkClient, znode)
-      // If first time starting a consumer, set the initial offset based on the config
-      var offset : Long = 0L
-      if (offsetString == null)
-        offset = config.autoOffsetReset match {
-              case OffsetRequest.SmallestTimeString =>
-                  earliestOrLatestOffset(topic, partition.brokerId, partition.partId, OffsetRequest.EarliestTime)
-              case OffsetRequest.LargestTimeString =>
-                  earliestOrLatestOffset(topic, partition.brokerId, partition.partId, OffsetRequest.LatestTime)
-              case _ =>
-                  throw new InvalidConfigException("Wrong value in autoOffsetReset in ConsumerConfig")
-        }
-      else
-        offset = offsetString.toLong
-      val queue = topicThreadIdAndQueues.get((topic, consumerThreadId))
-      val consumedOffset = new AtomicLong(offset)
-      val fetchedOffset = new AtomicLong(offset)
-      val partTopicInfo = new PartitionTopicInfo(topic,
-                                                 partition.brokerId,
-                                                 partition,
-                                                 queue,
-                                                 consumedOffset,
-                                                 fetchedOffset,
-                                                 new AtomicInteger(config.fetchSize))
-      partTopicInfoMap.put(partition, partTopicInfo)
-      debug(partTopicInfo + " selected new offset " + offset)
-    }
-  }
-
-  private def reinitializeConsumer[T](
-      topicCount: TopicCount,
-      queuesAndStreams: List[(LinkedBlockingQueue[FetchedDataChunk],KafkaStream[T])]) {
-
-    val dirs = new ZKGroupDirs(config.groupId)
-
-    // listener to consumer and partition changes
-    if (loadBalancerListener == null) {
-      val topicStreamsMap = new mutable.HashMap[String,List[KafkaStream[T]]]
-      loadBalancerListener = new ZKRebalancerListener(
-        config.groupId, consumerIdString, topicStreamsMap.asInstanceOf[scala.collection.mutable.Map[String, List[KafkaStream[_]]]])
-    }
-
-    // register listener for session expired event
-    if (sessionExpirationListener == null)
-      sessionExpirationListener = new ZKSessionExpireListener(
-        dirs, consumerIdString, topicCount, loadBalancerListener)
-
-    val topicStreamsMap = loadBalancerListener.kafkaMessageAndMetadataStreams
-
-    // map of {topic -> Set(thread-1, thread-2, ...)}
-    val consumerThreadIdsPerTopic: Map[String, Set[String]] =
-      topicCount.getConsumerThreadIdsPerTopic
-
-    /*
-     * This usage of map flatten breaks up consumerThreadIdsPerTopic into
-     * a set of (topic, thread-id) pairs that we then use to construct
-     * the updated (topic, thread-id) -> queues map
-     */
-    implicit def getTopicThreadIds(v: (String, Set[String])): Set[(String, String)] = v._2.map((v._1, _))
-
-    // iterator over (topic, thread-id) tuples
-    val topicThreadIds: Iterable[(String, String)] =
-      consumerThreadIdsPerTopic.flatten
-
-    // list of (pairs of pairs): e.g., ((topic, thread-id),(queue, stream))
-    val threadQueueStreamPairs = topicCount match {
-      case wildTopicCount: WildcardTopicCount =>
-        for (tt <- topicThreadIds; qs <- queuesAndStreams) yield (tt -> qs)
-      case statTopicCount: StaticTopicCount => {
-        require(topicThreadIds.size == queuesAndStreams.size,
-          "Mismatch between thread ID count (%d) and queue count (%d)".format(
-          topicThreadIds.size, queuesAndStreams.size))
-        topicThreadIds.zip(queuesAndStreams)
-      }
-    }
-
-    threadQueueStreamPairs.foreach(e => {
-      val topicThreadId = e._1
-      val q = e._2._1
-      topicThreadIdAndQueues.put(topicThreadId, q)
-    })
-
-    val groupedByTopic = threadQueueStreamPairs.groupBy(_._1._1)
-    groupedByTopic.foreach(e => {
-      val topic = e._1
-      val streams = e._2.map(_._2._2).toList
-      topicStreamsMap += (topic -> streams)
-      debug("adding topic %s and %d streams to map.".format(topic, streams.size))
-    })
-
-    // listener to consumer and partition changes
-    zkClient.subscribeStateChanges(sessionExpirationListener)
-
-    zkClient.subscribeChildChanges(dirs.consumerRegistryDir, loadBalancerListener)
-
-    topicStreamsMap.foreach { topicAndStreams =>
-      // register on broker partition path changes
-      val partitionPath = BrokerTopicsPath + "/" + topicAndStreams._1
-      zkClient.subscribeChildChanges(partitionPath, loadBalancerListener)
-    }
-
-    // explicitly trigger load balancing for this consumer
-    loadBalancerListener.syncedRebalance()
-  }
-
-  class WildcardStreamsHandler[T](topicFilter: TopicFilter,
-                                  numStreams: Int,
-                                  decoder: Decoder[T])
-                                extends TopicEventHandler[String] {
-
-    if (messageStreamCreated.getAndSet(true))
-      throw new RuntimeException("Each consumer connector can create " +
-        "message streams by filter at most once.")
-
-    private val wildcardQueuesAndStreams = (1 to numStreams)
-      .map(e => {
-        val queue = new LinkedBlockingQueue[FetchedDataChunk](config.maxQueuedChunks)
-        val stream = new KafkaStream[T](
-          queue, config.consumerTimeoutMs, decoder, config.enableShallowIterator)
-        (queue, stream)
-    }).toList
-
-     // bootstrap with existing topics
-    private var wildcardTopics =
-      getChildrenParentMayNotExist(zkClient, BrokerTopicsPath)
-        .filter(topicFilter.isTopicAllowed)
-
-    private val wildcardTopicCount = TopicCount.constructTopicCount(
-      consumerIdString, topicFilter, numStreams, zkClient)
-
-    val dirs = new ZKGroupDirs(config.groupId)
-    registerConsumerInZK(dirs, consumerIdString, wildcardTopicCount)
-    reinitializeConsumer(wildcardTopicCount, wildcardQueuesAndStreams)
-
-    if (!topicFilter.requiresTopicEventWatcher) {
-      info("Not creating event watcher for trivial whitelist " + topicFilter)
-    }
-    else {
-      info("Creating topic event watcher for whitelist " + topicFilter)
-      wildcardTopicWatcher = new ZookeeperTopicEventWatcher(config, this)
-
-      /*
-       * Topic events will trigger subsequent synced rebalances. Also, the
-       * consumer will get registered only after an allowed topic becomes
-       * available.
-       */
-    }
-
-    def handleTopicEvent(allTopics: Seq[String]) {
-      debug("Handling topic event")
-
-      val updatedTopics = allTopics.filter(topicFilter.isTopicAllowed)
-
-      val addedTopics = updatedTopics filterNot (wildcardTopics contains)
-      if (addedTopics.nonEmpty)
-        info("Topic event: added topics = %s"
-                             .format(addedTopics))
-
-      /*
-       * TODO: Deleted topics are interesting (and will not be a concern until
-       * 0.8 release). We may need to remove these topics from the rebalance
-       * listener's map in reinitializeConsumer.
-       */
-      val deletedTopics = wildcardTopics filterNot (updatedTopics contains)
-      if (deletedTopics.nonEmpty)
-        info("Topic event: deleted topics = %s"
-                             .format(deletedTopics))
-
-      wildcardTopics = updatedTopics
-      info("Topics to consume = %s".format(wildcardTopics))
-
-      if (addedTopics.nonEmpty || deletedTopics.nonEmpty)
-        reinitializeConsumer(wildcardTopicCount, wildcardQueuesAndStreams)
-    }
-
-    def streams: Seq[KafkaStream[T]] =
-      wildcardQueuesAndStreams.map(_._2)
-  }
-}
-
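The ZooKeeper layout documented at the top of this class can be read back directly with zkclient. The following is a minimal sketch, not part of the original file, that resolves the /consumers/[group_id]/offsets/[topic]/[broker_id-partition_id] path described above and returns the committed offset; it assumes the ZkClient was constructed with ZKStringSerializer so znode data comes back as a String.

    import org.I0Itec.zkclient.ZkClient

    // Illustrative only: build the offset znode path documented above and read the
    // committed offset for one broker partition of a consumer group.
    object OffsetPathSketch {
      def committedOffset(zkClient: ZkClient, group: String, topic: String,
                          brokerId: Int, partitionId: Int): Option[Long] = {
        val path = "/consumers/" + group + "/offsets/" + topic + "/" + brokerId + "-" + partitionId
        Option(zkClient.readData[String](path, true)).map(_.toLong)   // null if the znode is absent
      }
    }
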
diff --git a/trunk/core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala b/trunk/core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala
deleted file mode 100644
index df83baa..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala
+++ /dev/null
@@ -1,105 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import scala.collection.JavaConversions._
-import kafka.utils.{ZkUtils, ZKStringSerializer, Logging}
-import org.I0Itec.zkclient.{IZkStateListener, IZkChildListener, ZkClient}
-import org.apache.zookeeper.Watcher.Event.KeeperState
-
-class ZookeeperTopicEventWatcher(val config:ConsumerConfig,
-    val eventHandler: TopicEventHandler[String]) extends Logging {
-
-  val lock = new Object()
-
-  private var zkClient: ZkClient = new ZkClient(config.zkConnect, config.zkSessionTimeoutMs,
-      config.zkConnectionTimeoutMs, ZKStringSerializer)
-
-  startWatchingTopicEvents()
-
-  private def startWatchingTopicEvents() {
-    val topicEventListener = new ZkTopicEventListener()
-    ZkUtils.makeSurePersistentPathExists(zkClient, ZkUtils.BrokerTopicsPath)
-
-    zkClient.subscribeStateChanges(
-      new ZkSessionExpireListener(topicEventListener))
-
-    val topics = zkClient.subscribeChildChanges(
-      ZkUtils.BrokerTopicsPath, topicEventListener).toList
-
-    // call to bootstrap topic list
-    topicEventListener.handleChildChange(ZkUtils.BrokerTopicsPath, topics)
-  }
-
-  private def stopWatchingTopicEvents() { zkClient.unsubscribeAll() }
-
-  def shutdown() {
-    lock.synchronized {
-      info("Shutting down topic event watcher.")
-      if (zkClient != null) {
-        stopWatchingTopicEvents()
-        zkClient.close()
-        zkClient = null
-      }
-      else
-        warn("Cannot shutdown already shutdown topic event watcher.")
-    }
-  }
-
-  class ZkTopicEventListener extends IZkChildListener {
-
-    @throws(classOf[Exception])
-    def handleChildChange(parent: String, children: java.util.List[String]) {
-      lock.synchronized {
-        try {
-          if (zkClient != null) {
-            val latestTopics = zkClient.getChildren(ZkUtils.BrokerTopicsPath).toList
-            debug("all topics: %s".format(latestTopics))
-
-            eventHandler.handleTopicEvent(latestTopics)
-          }
-        }
-        catch {
-          case e =>
-            error("error in handling child changes", e)
-        }
-      }
-    }
-
-  }
-
-  class ZkSessionExpireListener(val topicEventListener: ZkTopicEventListener)
-    extends IZkStateListener {
-
-    @throws(classOf[Exception])
-    def handleStateChanged(state: KeeperState) { }
-
-    @throws(classOf[Exception])
-    def handleNewSession() {
-      lock.synchronized {
-        if (zkClient != null) {
-          info(
-            "ZK expired: resubscribing topic event listener to topic registry")
-          zkClient.subscribeChildChanges(
-            ZkUtils.BrokerTopicsPath, topicEventListener)
-        }
-      }
-    }
-  }
-}
-
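A TopicEventHandler implementation is all that is needed to drive the watcher above. This is a minimal sketch, not from the original source, that simply logs the topic list each time the child-change listener fires; the trait shape (handleTopicEvent taking the full topic list) is taken from its use in ZookeeperConsumerConnector.

    import kafka.consumer.TopicEventHandler

    // Illustrative handler: the watcher calls handleTopicEvent with the freshly-read
    // list of children under /brokers/topics on every change.
    class LoggingTopicEventHandler extends TopicEventHandler[String] {
      def handleTopicEvent(allTopics: Seq[String]) {
        println("topics currently registered: " + allTopics.mkString(", "))
      }
    }

    // usage: new ZookeeperTopicEventWatcher(consumerConfig, new LoggingTopicEventHandler)
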
diff --git a/trunk/core/src/main/scala/kafka/consumer/package.html b/trunk/core/src/main/scala/kafka/consumer/package.html
deleted file mode 100644
index cb3d735..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/package.html
+++ /dev/null
@@ -1 +0,0 @@
-This is the consumer API for kafka.
\ No newline at end of file
diff --git a/trunk/core/src/main/scala/kafka/consumer/storage/MemoryOffsetStorage.scala b/trunk/core/src/main/scala/kafka/consumer/storage/MemoryOffsetStorage.scala
deleted file mode 100644
index d6ce868..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/storage/MemoryOffsetStorage.scala
+++ /dev/null
@@ -1,44 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer.storage
-
-import java.util.concurrent._
-import java.util.concurrent.atomic._
-import java.util.concurrent.locks._
-
-class MemoryOffsetStorage extends OffsetStorage {
-  
-  val offsetAndLock = new ConcurrentHashMap[(Int, String), (AtomicLong, Lock)]
-
-  def reserve(node: Int, topic: String): Long = {
-    val key = (node, topic)
-    if(!offsetAndLock.containsKey(key))
-      offsetAndLock.putIfAbsent(key, (new AtomicLong(0), new ReentrantLock))
-    val (offset, lock) = offsetAndLock.get(key)
-    lock.lock
-    offset.get
-  }
-
-  def commit(node: Int, topic: String, offset: Long) = {
-    val (highwater, lock) = offsetAndLock.get((node, topic))
-    highwater.set(offset)
-    lock.unlock
-    offset
-  }
-  
-}
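
As a usage note, not part of the original file: reserve and commit form a simple lock-then-store protocol. reserve locks the per-(node, topic) entry and returns the last committed offset; commit writes the new high-water mark and releases the lock. A minimal sketch:

    import kafka.consumer.storage.MemoryOffsetStorage

    object MemoryOffsetStorageSketch extends App {
      val storage = new MemoryOffsetStorage
      val start = storage.reserve(0, "events")     // locks the (node, topic) entry; 0 on first use
      val consumedUpTo = start + 100L              // pretend 100 units were consumed
      storage.commit(0, "events", consumedUpTo)    // stores the high-water mark and unlocks
    }
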
diff --git a/trunk/core/src/main/scala/kafka/consumer/storage/OffsetStorage.scala b/trunk/core/src/main/scala/kafka/consumer/storage/OffsetStorage.scala
deleted file mode 100644
index f9c9467..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/storage/OffsetStorage.scala
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer.storage
-
-
-/**
- * A method for storing offsets for the consumer. 
- * This is used to track the progress of the consumer in the stream.
- */
-trait OffsetStorage {
-
-  /**
-   * Reserve the offset for the given node and topic, locking it until commit is called.
-   * @return The offset reserved
-   */
-  def reserve(node: Int, topic: String): Long
-
-  /**
-   * Commit the new offset and release the reservation taken by reserve.
-   * @param offset The new offset
-   */
-  def commit(node: Int, topic: String, offset: Long)
-  
-}
diff --git a/trunk/core/src/main/scala/kafka/consumer/storage/OracleOffsetStorage.scala b/trunk/core/src/main/scala/kafka/consumer/storage/OracleOffsetStorage.scala
deleted file mode 100644
index eb966a2..0000000
--- a/trunk/core/src/main/scala/kafka/consumer/storage/OracleOffsetStorage.scala
+++ /dev/null
@@ -1,155 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer.storage.sql
-
-import java.sql._
-import kafka.utils._
-import kafka.consumer.storage.OffsetStorage
-
-/**
- * An offset storage implementation that uses an Oracle database to save offsets
- */
-@nonthreadsafe
-class OracleOffsetStorage(val connection: Connection) extends OffsetStorage with Logging {
-  
-  private val lock = new Object
-  connection.setAutoCommit(false)
-  
-  def reserve(node: Int, topic: String): Long = {
-    /* try to get and lock the offset, if it isn't there, make it */
-    val maybeOffset = selectExistingOffset(connection, node, topic)
-    val offset = maybeOffset match {
-      case Some(offset) => offset
-      case None => {
-        maybeInsertZeroOffset(connection, node, topic)
-        selectExistingOffset(connection, node, topic).get
-      }
-    }
-    
-    debug("Reserved node " + node + " for topic '" + topic + " offset " + offset)
-    
-    offset
-  }
-  
-  def commit(node: Int, topic: String, offset: Long) {
-    var success = false
-    try {
-      updateOffset(connection, node, topic, offset)
-      success = true
-    } finally {
-      commitOrRollback(connection, success)
-    }
-    if(logger.isDebugEnabled)
-      logger.debug("Updated node " + node + " for topic '" + topic + "' to " + offset)
-  }
-  
-  def close() {
-    Utils.swallow(logger.error, connection.close())
-  }
-  
-  /**
-   * Attempt to insert a zero offset for this node and topic if there isn't already an entry there
-   * @return true iff the row didn't already exist
-   */
-  private def maybeInsertZeroOffset(connection: Connection, node: Int, topic: String): Boolean = {
-    val stmt = connection.prepareStatement(
-      """insert into kafka_offsets (node, topic, offset) 
-         select ?, ?, 0 from dual where not exists 
-         (select null from kafka_offsets where node = ? and topic = ?)""")
-    stmt.setInt(1, node)
-    stmt.setString(2, topic)
-    stmt.setInt(3, node)
-    stmt.setString(4, topic)
-    val updated = stmt.executeUpdate()
-    if(updated > 1)
-      throw new IllegalStateException("More than one key updated by primary key!")
-    else
-      updated == 1
-  }
-  
-  /**
-   * Select and lock the existing offset entry for this node and topic, if one exists
-   * @return the existing offset, or None if no entry exists
-   */
-  private def selectExistingOffset(connection: Connection, node: Int, topic: String): Option[Long] = {
-    val stmt = connection.prepareStatement(
-        """select offset from kafka_offsets
-           where node = ? and topic = ?
-           for update""")
-    var results: ResultSet = null
-    try {
-      stmt.setInt(1, node)
-      stmt.setString(2, topic)
-      results = stmt.executeQuery()
-      if(!results.next()) {
-        None
-      } else {
-        val offset = results.getLong("offset")
-        if(results.next())
-          throw new IllegalStateException("More than one entry for primary key!")
-        Some(offset)
-      }
-    } finally {
-      close(stmt)
-      close(results)
-    }
-  }
-  
-  private def updateOffset(connection: Connection, 
-                           node: Int, 
-                           topic: String, 
-                           newOffset: Long): Unit = {
-    val stmt = connection.prepareStatement("update kafka_offsets set offset = ? where node = ? and topic = ?")
-    try {
-      stmt.setLong(1, newOffset)
-      stmt.setInt(2, node)
-      stmt.setString(3, topic)
-      val updated = stmt.executeUpdate()
-      if(updated != 1)
-        throw new IllegalStateException("Unexpected number of keys updated: " + updated)
-    } finally {
-      close(stmt)
-    }
-  }
-  
-  
-  private def commitOrRollback(connection: Connection, commit: Boolean) {
-    if(connection != null) {
-      if(commit)
-        Utils.swallow(logger.error, connection.commit())
-      else
-        Utils.swallow(logger.error, connection.rollback())
-    }
-  }
-  
-  private def close(rs: ResultSet) {
-    if(rs != null)
-      Utils.swallow(logger.error, rs.close())
-  }
-  
-  private def close(stmt: PreparedStatement) {
-    if(stmt != null)
-      Utils.swallow(logger.error, stmt.close())
-  }
-  
-  private def close(connection: Connection) {
-    if(connection != null)
-      Utils.swallow(logger.error, connection.close())
-  }
-  
-}
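
The class above assumes a kafka_offsets table keyed by (node, topic) with a numeric offset column; no DDL is shipped with the project, so the schema below is only inferred from the SQL in the class, with column types chosen as plausible Oracle defaults.

    import java.sql.Connection

    object CreateKafkaOffsetsTable {
      // Inferred sketch of the table OracleOffsetStorage expects; types are assumptions.
      def create(connection: Connection) {
        val stmt = connection.createStatement()
        try {
          stmt.execute(
            """create table kafka_offsets (
                 node   number(10)    not null,
                 topic  varchar2(255) not null,
                 offset number(19)    not null,
                 primary key (node, topic))""")
        } finally {
          stmt.close()
        }
      }
    }
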
diff --git a/trunk/core/src/main/scala/kafka/javaapi/Implicits.scala b/trunk/core/src/main/scala/kafka/javaapi/Implicits.scala
deleted file mode 100644
index 20ca193..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/Implicits.scala
+++ /dev/null
@@ -1,123 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.javaapi
-
-import kafka.serializer.Encoder
-import kafka.producer.async.QueueItem
-import kafka.utils.Logging
-
-private[javaapi] object Implicits extends Logging {
-  implicit def javaMessageSetToScalaMessageSet(messageSet: kafka.javaapi.message.ByteBufferMessageSet):
-     kafka.message.ByteBufferMessageSet = messageSet.underlying
-
-  implicit def scalaMessageSetToJavaMessageSet(messageSet: kafka.message.ByteBufferMessageSet):
-     kafka.javaapi.message.ByteBufferMessageSet = {
-    new kafka.javaapi.message.ByteBufferMessageSet(messageSet.getBuffer, messageSet.getInitialOffset,
-                                                   messageSet.getErrorCode)
-  }
-
-  implicit def toJavaSyncProducer(producer: kafka.producer.SyncProducer): kafka.javaapi.producer.SyncProducer = {
-    debug("Implicit instantiation of Java Sync Producer")
-    new kafka.javaapi.producer.SyncProducer(producer)
-  }
-
-  implicit def toSyncProducer(producer: kafka.javaapi.producer.SyncProducer): kafka.producer.SyncProducer = {
-    debug("Implicit instantiation of Sync Producer")
-    producer.underlying
-  }
-
-  implicit def toScalaEventHandler[T](eventHandler: kafka.javaapi.producer.async.EventHandler[T])
-       : kafka.producer.async.EventHandler[T] = {
-    new kafka.producer.async.EventHandler[T] {
-      override def init(props: java.util.Properties) { eventHandler.init(props) }
-      override def handle(events: Seq[QueueItem[T]], producer: kafka.producer.SyncProducer, encoder: Encoder[T]) {
-        import collection.JavaConversions._
-        eventHandler.handle(asList(events), producer, encoder)
-      }
-      override def close { eventHandler.close }
-    }
-  }
-
-  implicit def toJavaEventHandler[T](eventHandler: kafka.producer.async.EventHandler[T])
-    : kafka.javaapi.producer.async.EventHandler[T] = {
-    new kafka.javaapi.producer.async.EventHandler[T] {
-      override def init(props: java.util.Properties) { eventHandler.init(props) }
-      override def handle(events: java.util.List[QueueItem[T]], producer: kafka.javaapi.producer.SyncProducer,
-                          encoder: Encoder[T]) {
-        import collection.JavaConversions._
-        eventHandler.handle(asBuffer(events), producer, encoder)
-      }
-      override def close { eventHandler.close }
-    }
-  }
-
-  implicit def toScalaCbkHandler[T](cbkHandler: kafka.javaapi.producer.async.CallbackHandler[T])
-      : kafka.producer.async.CallbackHandler[T] = {
-    new kafka.producer.async.CallbackHandler[T] {
-      import collection.JavaConversions._
-      override def init(props: java.util.Properties) { cbkHandler.init(props)}
-      override def beforeEnqueue(data: QueueItem[T] = null.asInstanceOf[QueueItem[T]]): QueueItem[T] = {
-        cbkHandler.beforeEnqueue(data)
-      }
-      override def afterEnqueue(data: QueueItem[T] = null.asInstanceOf[QueueItem[T]], added: Boolean) {
-        cbkHandler.afterEnqueue(data, added)
-      }
-      override def afterDequeuingExistingData(data: QueueItem[T] = null): scala.collection.mutable.Seq[QueueItem[T]] = {
-        cbkHandler.afterDequeuingExistingData(data)
-      }
-      override def beforeSendingData(data: Seq[QueueItem[T]] = null): scala.collection.mutable.Seq[QueueItem[T]] = {
-        asList(cbkHandler.beforeSendingData(asList(data)))
-      }
-      override def lastBatchBeforeClose: scala.collection.mutable.Seq[QueueItem[T]] = {
-        asBuffer(cbkHandler.lastBatchBeforeClose)
-      }
-      override def close { cbkHandler.close }
-    }
-  }
-
-  implicit def toJavaCbkHandler[T](cbkHandler: kafka.producer.async.CallbackHandler[T])
-      : kafka.javaapi.producer.async.CallbackHandler[T] = {
-    new kafka.javaapi.producer.async.CallbackHandler[T] {
-      import collection.JavaConversions._
-      override def init(props: java.util.Properties) { cbkHandler.init(props)}
-      override def beforeEnqueue(data: QueueItem[T] = null.asInstanceOf[QueueItem[T]]): QueueItem[T] = {
-        cbkHandler.beforeEnqueue(data)
-      }
-      override def afterEnqueue(data: QueueItem[T] = null.asInstanceOf[QueueItem[T]], added: Boolean) {
-        cbkHandler.afterEnqueue(data, added)
-      }
-      override def afterDequeuingExistingData(data: QueueItem[T] = null)
-      : java.util.List[QueueItem[T]] = {
-        asList(cbkHandler.afterDequeuingExistingData(data))
-      }
-      override def beforeSendingData(data: java.util.List[QueueItem[T]] = null)
-      : java.util.List[QueueItem[T]] = {
-        asBuffer(cbkHandler.beforeSendingData(asBuffer(data)))
-      }
-      override def lastBatchBeforeClose: java.util.List[QueueItem[T]] = {
-        asList(cbkHandler.lastBatchBeforeClose)
-      }
-      override def close { cbkHandler.close }
-    }
-  }
-
-  implicit def toMultiFetchResponse(response: kafka.javaapi.MultiFetchResponse): kafka.api.MultiFetchResponse =
-    response.underlying
-
-  implicit def toJavaMultiFetchResponse(response: kafka.api.MultiFetchResponse): kafka.javaapi.MultiFetchResponse =
-    new kafka.javaapi.MultiFetchResponse(response.buffer, response.numSets, response.offsets)
-}
diff --git a/trunk/core/src/main/scala/kafka/javaapi/MultiFetchResponse.scala b/trunk/core/src/main/scala/kafka/javaapi/MultiFetchResponse.scala
deleted file mode 100644
index 3bf5f44..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/MultiFetchResponse.scala
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.javaapi
-
-import kafka.utils.IteratorTemplate
-import java.nio.ByteBuffer
-import message.ByteBufferMessageSet
-
-class MultiFetchResponse(buffer: ByteBuffer, numSets: Int, offsets: Array[Long]) extends java.lang.Iterable[ByteBufferMessageSet] {
-  val underlyingBuffer = ByteBuffer.wrap(buffer.array)
-    // this has the side effect of setting the initial position of buffer correctly
-  val errorCode = underlyingBuffer.getShort
-
-  import Implicits._
-  val underlying = new kafka.api.MultiFetchResponse(underlyingBuffer, numSets, offsets)
-
-  override def toString() = underlying.toString
-
-  def iterator : java.util.Iterator[ByteBufferMessageSet] = {
-    new IteratorTemplate[ByteBufferMessageSet] {
-      val iter = underlying.iterator
-      override def makeNext(): ByteBufferMessageSet = {
-        if(iter.hasNext)
-          iter.next
-        else
-          return allDone
-      }
-    }
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/javaapi/ProducerRequest.scala b/trunk/core/src/main/scala/kafka/javaapi/ProducerRequest.scala
deleted file mode 100644
index 802e410..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/ProducerRequest.scala
+++ /dev/null
@@ -1,52 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.javaapi
-
-import kafka.network.Request
-import kafka.api.RequestKeys
-import java.nio.ByteBuffer
-
-class ProducerRequest(val topic: String,
-                      val partition: Int,
-                      val messages: kafka.javaapi.message.ByteBufferMessageSet) extends Request(RequestKeys.Produce) {
-  import Implicits._
-  private val underlying = new kafka.api.ProducerRequest(topic, partition, messages)
-
-  def writeTo(buffer: ByteBuffer) { underlying.writeTo(buffer) }
-
-  def sizeInBytes(): Int = underlying.sizeInBytes
-
-  def getTranslatedPartition(randomSelector: String => Int): Int =
-    underlying.getTranslatedPartition(randomSelector)
-
-  override def toString: String =
-    underlying.toString
-
-  override def equals(other: Any): Boolean = {
-    other match {
-      case that: ProducerRequest =>
-        (that canEqual this) && topic == that.topic && partition == that.partition &&
-                messages.equals(that.messages)
-      case _ => false
-    }
-  }
-
-  def canEqual(other: Any): Boolean = other.isInstanceOf[ProducerRequest]
-
-  override def hashCode: Int = 31 + (17 * partition) + topic.hashCode + messages.hashCode
-
-}
diff --git a/trunk/core/src/main/scala/kafka/javaapi/consumer/ConsumerConnector.java b/trunk/core/src/main/scala/kafka/javaapi/consumer/ConsumerConnector.java
deleted file mode 100644
index afb6b0a..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/consumer/ConsumerConnector.java
+++ /dev/null
@@ -1,69 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.javaapi.consumer;
-
-
-import java.util.List;
-import java.util.Map;
-import kafka.consumer.KafkaStream;
-import kafka.consumer.TopicFilter;
-import kafka.message.Message;
-import kafka.serializer.Decoder;
-
-public interface ConsumerConnector {
-  /**
-   *  Create a list of MessageStreams of type T for each topic.
-   *
-   *  @param topicCountMap  a map of (topic, #streams) pairs
-   *  @param decoder a decoder that converts from Message to T
-   *  @return a map of (topic, list of  KafkaStream) pairs.
-   *          The number of items in the list is #streams. Each stream supports
-   *          an iterator over message/metadata pairs.
-   */
-  public <T> Map<String, List<KafkaStream<T>>> createMessageStreams(
-      Map<String, Integer> topicCountMap, Decoder<T> decoder);
-  public Map<String, List<KafkaStream<Message>>> createMessageStreams(
-      Map<String, Integer> topicCountMap);
-
-  /**
-   *  Create a list of MessageAndTopicStreams containing messages of type T.
-   *
-   *  @param topicFilter a TopicFilter that specifies which topics to
-   *                    subscribe to (encapsulates a whitelist or a blacklist).
-   *  @param numStreams the number of message streams to return.
-   *  @param decoder a decoder that converts from Message to T
-   *  @return a list of KafkaStream. Each stream supports an
-   *          iterator over its MessageAndMetadata elements.
-   */
-  public <T> List<KafkaStream<T>> createMessageStreamsByFilter(
-      TopicFilter topicFilter, int numStreams, Decoder<T> decoder);
-  public List<KafkaStream<Message>> createMessageStreamsByFilter(
-      TopicFilter topicFilter, int numStreams);
-  public List<KafkaStream<Message>> createMessageStreamsByFilter(
-      TopicFilter topicFilter);
-
-  /**
-   *  Commit the offsets of all broker partitions connected by this connector.
-   */
-  public void commitOffsets();
-
-  /**
-   *  Shut down the connector
-   */
-  public void shutdown();
-}
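
A hedged end-to-end sketch of the stream-creation contract documented above, written against the Scala API from this branch. Consumer.create, the zk.connect/groupid property names, and DefaultDecoder are assumed from the 0.7-era public API rather than shown in this diff.

    import java.util.Properties
    import kafka.consumer.{Consumer, ConsumerConfig}
    import kafka.serializer.DefaultDecoder

    object ConsumerSketch extends App {
      val props = new Properties()
      props.put("zk.connect", "localhost:2181")   // assumed property names for this release
      props.put("groupid", "example-group")
      val connector = Consumer.create(new ConsumerConfig(props))
      // one stream for topic "events"; DefaultDecoder yields raw Message objects
      val streams = connector.createMessageStreams(Map("events" -> 1), new DefaultDecoder)
      streams("events").head.foreach(_ => println("consumed one message"))   // blocks while waiting for data
      connector.shutdown()
    }
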
diff --git a/trunk/core/src/main/scala/kafka/javaapi/consumer/SimpleConsumer.scala b/trunk/core/src/main/scala/kafka/javaapi/consumer/SimpleConsumer.scala
deleted file mode 100644
index 9ba324d..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/consumer/SimpleConsumer.scala
+++ /dev/null
@@ -1,71 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.javaapi.consumer
-
-import kafka.utils.threadsafe
-import kafka.javaapi.message.ByteBufferMessageSet
-import kafka.javaapi.MultiFetchResponse
-import kafka.api.FetchRequest
-
-/**
- * A consumer of kafka messages
- */
-@threadsafe
-class SimpleConsumer(val host: String,
-                     val port: Int,
-                     val soTimeout: Int,
-                     val bufferSize: Int) {
-  val underlying = new kafka.consumer.SimpleConsumer(host, port, soTimeout, bufferSize)
-
-  /**
-   *  Fetch a set of messages from a topic.
-   *
-   *  @param request  specifies the topic name, topic partition, starting byte offset, maximum bytes to be fetched.
-   *  @return a set of fetched messages
-   */
-  def fetch(request: FetchRequest): ByteBufferMessageSet = {
-    import kafka.javaapi.Implicits._
-    underlying.fetch(request)
-  }
-
-  /**
-   *  Combine multiple fetch requests in one call.
-   *
-   *  @param fetches  a sequence of fetch requests.
-   *  @return a sequence of fetch responses
-   */
-  def multifetch(fetches: java.util.List[FetchRequest]): MultiFetchResponse = {
-    import scala.collection.JavaConversions._
-    import kafka.javaapi.Implicits._
-    underlying.multifetch(asBuffer(fetches): _*)
-  }
-
-  /**
-   *  Get a list of valid offsets (up to maxSize) before the given time.
-   *  The result is a list of offsets, in descending order.
-   *
- *  @param time time in milliseconds (-1 for the latest offset available, -2 for the smallest offset available)
-   *  @return an array of offsets
-   */
-  def getOffsetsBefore(topic: String, partition: Int, time: Long, maxNumOffsets: Int): Array[Long] =
-    underlying.getOffsetsBefore(topic, partition, time, maxNumOffsets)
-
-  def close() {
-    underlying.close
-  }
-}
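A hedged sketch of the wrapper above in use. The FetchRequest(topic, partition, offset, maxSize) constructor and the broker host/port are assumptions, not shown in this hunk:

    import kafka.api.FetchRequest
    import kafka.javaapi.consumer.SimpleConsumer

    // Assumed: a broker on localhost:9092 and the 0.7-era
    // FetchRequest(topic, partition, offset, maxSize) constructor.
    object SimpleConsumerExample {
      def main(args: Array[String]): Unit = {
        val consumer = new SimpleConsumer("localhost", 9092, 10000 /* soTimeout ms */, 64 * 1024 /* bufferSize */)
        try {
          // -2 asks for the smallest offset available, per the getOffsetsBefore doc above.
          val offsets = consumer.getOffsetsBefore("test-topic", 0, -2L, 1)
          val startOffset = if (offsets.nonEmpty) offsets(0) else 0L

          val messages = consumer.fetch(new FetchRequest("test-topic", 0, startOffset, 1024 * 1024))
          val it = messages.iterator
          while (it.hasNext())
            println(it.next().message) // each MessageAndOffset carries the message and its offset
        } finally {
          consumer.close()
        }
      }
    }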
diff --git a/trunk/core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala b/trunk/core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala
deleted file mode 100644
index f1a469b..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala
+++ /dev/null
@@ -1,109 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.javaapi.consumer
-
-import kafka.message.Message
-import kafka.serializer.{DefaultDecoder, Decoder}
-import kafka.consumer._
-import scala.collection.JavaConversions.asList
-
-
-/**
- * This class handles the consumer's interaction with zookeeper
- *
- * Directories:
- * 1. Consumer id registry:
- * /consumers/[group_id]/ids/[consumer_id] -> topic1,...topicN
- * A consumer has a unique consumer id within a consumer group. A consumer registers its id as an ephemeral znode
- * and puts all topics that it subscribes to as the value of the znode. The znode is deleted when the client is gone.
- * A consumer subscribes to event changes of the consumer id registry within its group.
- *
- * The consumer id is picked up from configuration, instead of the sequential id assigned by ZK. Generated sequential
- * ids are hard to recover during temporary connection loss to ZK, since it's difficult for the client to figure out
- * whether the creation of a sequential znode has succeeded or not. More details can be found at
- * (http://wiki.apache.org/hadoop/ZooKeeper/ErrorHandling)
- *
- * 2. Broker node registry:
- * /brokers/[0...N] --> { "host" : "host:port",
- *                        "topics" : {"topic1": ["partition1" ... "partitionN"], ...,
- *                                    "topicN": ["partition1" ... "partitionN"] } }
- * This is a list of all present broker nodes. A unique logical node id is configured on each broker node. A broker
- * node registers itself on start-up and creates a znode with the logical node id under /brokers. The value of the znode
- * is a JSON String that contains (1) the host name and the port the broker is listening to, (2) a list of topics that
- * the broker serves, (3) a list of logical partitions assigned to each topic on the broker.
- * A consumer subscribes to event changes of the broker node registry.
- *
- * 3. Partition owner registry:
- * /consumers/[group_id]/owner/[topic]/[broker_id-partition_id] --> consumer_node_id
- * This stores the mapping between broker partitions and consumers. Each partition is owned by a unique consumer
- * within a consumer group. The mapping is reestablished after each rebalancing.
- *
- * 4. Consumer offset tracking:
- * /consumers/[group_id]/offsets/[topic]/[broker_id-partition_id] --> offset_counter_value
- * Each consumer tracks the offset of the latest message consumed for each partition.
- *
-*/
-
-private[kafka] class ZookeeperConsumerConnector(val config: ConsumerConfig,
-                                 val enableFetcher: Boolean) // for testing only
-    extends ConsumerConnector {
-
-  val underlying = new kafka.consumer.ZookeeperConsumerConnector(config, enableFetcher)
-
-  def this(config: ConsumerConfig) = this(config, true)
-
- // for java client
-  def createMessageStreams[T](
-        topicCountMap: java.util.Map[String,java.lang.Integer],
-        decoder: Decoder[T])
-      : java.util.Map[String,java.util.List[KafkaStream[T]]] = {
-    import scala.collection.JavaConversions._
-
-    val scalaTopicCountMap: Map[String, Int] = Map.empty[String, Int] ++ asMap(topicCountMap.asInstanceOf[java.util.Map[String, Int]])
-    val scalaReturn = underlying.consume(scalaTopicCountMap, decoder)
-    val ret = new java.util.HashMap[String,java.util.List[KafkaStream[T]]]
-    for ((topic, streams) <- scalaReturn) {
-      var javaStreamList = new java.util.ArrayList[KafkaStream[T]]
-      for (stream <- streams)
-        javaStreamList.add(stream)
-      ret.put(topic, javaStreamList)
-    }
-    ret
-  }
-
-  def createMessageStreams(
-        topicCountMap: java.util.Map[String,java.lang.Integer])
-      : java.util.Map[String,java.util.List[KafkaStream[Message]]] =
-    createMessageStreams(topicCountMap, new DefaultDecoder)
-
-  def createMessageStreamsByFilter[T](topicFilter: TopicFilter, numStreams: Int, decoder: Decoder[T]) =
-    asList(underlying.createMessageStreamsByFilter(topicFilter, numStreams, decoder))
-
-  def createMessageStreamsByFilter(topicFilter: TopicFilter, numStreams: Int) =
-    createMessageStreamsByFilter(topicFilter, numStreams, new DefaultDecoder)
-
-  def createMessageStreamsByFilter(topicFilter: TopicFilter) =
-    createMessageStreamsByFilter(topicFilter, 1, new DefaultDecoder)
-
-  def commitOffsets() {
-    underlying.commitOffsets
-  }
-
-  def shutdown() {
-    underlying.shutdown
-  }
-}
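As a worked illustration of the registry layout documented above (all values are hypothetical): a group "group1" with a single consumer "consumer1" reading one stream of "topic1", where broker 0 hosts partition 3, would produce znodes roughly like:

    /consumers/group1/ids/consumer1        -> topic1
    /brokers/0                             -> { "host" : "broker-host:9092", "topics" : {"topic1": ["3"]} }
    /consumers/group1/owner/topic1/0-3     -> consumer1
    /consumers/group1/offsets/topic1/0-3   -> 42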
diff --git a/trunk/core/src/main/scala/kafka/javaapi/message/ByteBufferMessageSet.scala b/trunk/core/src/main/scala/kafka/javaapi/message/ByteBufferMessageSet.scala
deleted file mode 100644
index 7ebeb9c..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/message/ByteBufferMessageSet.scala
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.javaapi.message
-
-import java.nio.ByteBuffer
-import kafka.common.ErrorMapping
-import kafka.message._
-
-class ByteBufferMessageSet(private val buffer: ByteBuffer,
-                           private val initialOffset: Long = 0L,
-                           private val errorCode: Int = ErrorMapping.NoError) extends MessageSet {
-  val underlying: kafka.message.ByteBufferMessageSet = new kafka.message.ByteBufferMessageSet(buffer,
-                                                                                              initialOffset,
-                                                                                              errorCode)
-  def this(buffer: ByteBuffer) = this(buffer, 0L, ErrorMapping.NoError)
-
-  def this(compressionCodec: CompressionCodec, messages: java.util.List[Message]) {
-    this(MessageSet.createByteBuffer(compressionCodec, scala.collection.JavaConversions.asBuffer(messages): _*),
-         0L, ErrorMapping.NoError)
-  }
-
-  def this(messages: java.util.List[Message]) {
-    this(NoCompressionCodec, messages)
-  }
-
-  def validBytes: Long = underlying.validBytes
-
-  def serialized():ByteBuffer = underlying.serialized
-
-  def getInitialOffset = initialOffset
-
-  def getBuffer = buffer
-
-  def getErrorCode = errorCode
-
-  override def iterator: java.util.Iterator[MessageAndOffset] = new java.util.Iterator[MessageAndOffset] {
-    val underlyingIterator = underlying.iterator
-    override def hasNext(): Boolean = {
-      underlyingIterator.hasNext
-    }
-
-    override def next(): MessageAndOffset = {
-      underlyingIterator.next
-    }
-
-    override def remove = throw new UnsupportedOperationException("remove API on MessageSet is not supported")
-  }
-
-  override def toString: String = underlying.toString
-
-  def sizeInBytes: Long = underlying.sizeInBytes
-
-  override def equals(other: Any): Boolean = {
-    other match {
-      case that: ByteBufferMessageSet =>
-        (that canEqual this) && errorCode == that.errorCode && buffer.equals(that.buffer) && initialOffset == that.initialOffset
-      case _ => false
-    }
-  }
-
-  def canEqual(other: Any): Boolean = other.isInstanceOf[ByteBufferMessageSet]
-
-  override def hashCode: Int = 31 * (17 + errorCode) + buffer.hashCode + initialOffset.hashCode
-
-}
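A small sketch of building a message set from a Java list with the constructors above; the Message(Array[Byte]) constructor is an assumption from kafka.message, not shown in this hunk:

    import java.util.Arrays
    import kafka.javaapi.message.ByteBufferMessageSet
    import kafka.message.{Message, NoCompressionCodec}

    // Build an uncompressed set of two messages using only the constructors above.
    object MessageSetExample {
      def main(args: Array[String]): Unit = {
        val set = new ByteBufferMessageSet(
          NoCompressionCodec,
          Arrays.asList(new Message("hello".getBytes("UTF-8")),
                        new Message("world".getBytes("UTF-8"))))
        println("valid bytes: " + set.validBytes + ", size in bytes: " + set.sizeInBytes)
      }
    }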
diff --git a/trunk/core/src/main/scala/kafka/javaapi/message/MessageSet.scala b/trunk/core/src/main/scala/kafka/javaapi/message/MessageSet.scala
deleted file mode 100644
index 9c9c72f..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/message/MessageSet.scala
+++ /dev/null
@@ -1,55 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.javaapi.message
-
-
-import kafka.message.{MessageAndOffset, InvalidMessageException}
-
-
-/**
- * A set of messages. A message set has a fixed serialized form, though the container
- * for the bytes could be either in-memory or on disk. The format of each message is
- * as follows:
- * 4 byte size containing an integer N
- * N message bytes as described in the message class
- */
-abstract class MessageSet extends java.lang.Iterable[MessageAndOffset] {
-
-  /**
-   * Provides an iterator over the messages in this set
-   */
-  def iterator: java.util.Iterator[MessageAndOffset]
-
-  /**
-   * Gives the total size of this message set in bytes
-   */
-  def sizeInBytes: Long
-
-  /**
-   * Validate the checksum of all the messages in the set. Throws an InvalidMessageException if the checksum doesn't
-   * match the payload for any message.
-   */
-  def validate(): Unit = {
-    val thisIterator = this.iterator
-    while(thisIterator.hasNext) {
-      val messageAndOffset = thisIterator.next
-      if(!messageAndOffset.message.isValid)
-        throw new InvalidMessageException
-    }
-  }
-}
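As a worked example of the layout described above: a set holding a single message whose serialized form is 20 bytes occupies 24 bytes in total, a 4-byte size field containing the value 20 followed by the 20 message bytes; a following message, if any, starts immediately after with its own size prefix.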
diff --git a/trunk/core/src/main/scala/kafka/javaapi/producer/Producer.scala b/trunk/core/src/main/scala/kafka/javaapi/producer/Producer.scala
deleted file mode 100644
index faa420d..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/producer/Producer.scala
+++ /dev/null
@@ -1,122 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.javaapi.producer
-
-import kafka.utils.Utils
-import kafka.producer.async.QueueItem
-import java.util.Properties
-import kafka.producer.{ProducerPool, ProducerConfig, Partitioner}
-import kafka.serializer.Encoder
-
-class Producer[K,V](config: ProducerConfig,
-                    partitioner: Partitioner[K],
-                    producerPool: ProducerPool[V],
-                    populateProducerPool: Boolean = true) /* for testing purpose only. Applications should ideally */
-                                                          /* use the other constructor*/
-{
-
-  private val underlying = new kafka.producer.Producer[K,V](config, partitioner, producerPool, populateProducerPool, null)
-
-  /**
-   * This constructor can be used when all config parameters will be specified through the
-   * ProducerConfig object
-   * @param config Producer Configuration object
-   */
-  def this(config: ProducerConfig) = this(config, Utils.getObject(config.partitionerClass),
-    new ProducerPool[V](config, Utils.getObject(config.serializerClass)))
-
-  /**
-   * This constructor can be used to provide pre-instantiated objects for all config parameters
-   * that would otherwise be instantiated via reflection. i.e. encoder, partitioner, event handler and
-   * callback handler
-   * @param config Producer Configuration object
-   * @param encoder Encoder used to convert an object of type V to a kafka.message.Message
- * @param eventHandler the class that implements kafka.javaapi.producer.async.EventHandler[T] used to
-   * dispatch a batch of produce requests, using an instance of kafka.javaapi.producer.SyncProducer
-   * @param cbkHandler the class that implements kafka.javaapi.producer.async.CallbackHandler[T] used to inject
-   * callbacks at various stages of the kafka.javaapi.producer.AsyncProducer pipeline.
- * @param partitioner class that implements the kafka.producer.Partitioner[K], used to supply a custom
- * partitioning strategy on the message key (of type K) that is specified through the ProducerData[K, V]
- * object in the send API
-   */
-  def this(config: ProducerConfig,
-           encoder: Encoder[V],
-           eventHandler: kafka.javaapi.producer.async.EventHandler[V],
-           cbkHandler: kafka.javaapi.producer.async.CallbackHandler[V],
-           partitioner: Partitioner[K]) = {
-    this(config, partitioner,
-         new ProducerPool[V](config, encoder,
-                             new kafka.producer.async.EventHandler[V] {
-                               override def init(props: Properties) { eventHandler.init(props) }
-                               override def handle(events: Seq[QueueItem[V]], producer: kafka.producer.SyncProducer,
-                                                   encoder: Encoder[V]) {
-                                 import collection.JavaConversions._
-                                 import kafka.javaapi.Implicits._
-                                 eventHandler.handle(asList(events), producer, encoder)
-                               }
-                               override def close { eventHandler.close }
-                             },
-                             new kafka.producer.async.CallbackHandler[V] {
-                               import collection.JavaConversions._
-                               override def init(props: Properties) { cbkHandler.init(props)}
-                               override def beforeEnqueue(data: QueueItem[V] = null.asInstanceOf[QueueItem[V]]): QueueItem[V] = {
-                                 cbkHandler.beforeEnqueue(data)
-                               }
-                               override def afterEnqueue(data: QueueItem[V] = null.asInstanceOf[QueueItem[V]], added: Boolean) {
-                                 cbkHandler.afterEnqueue(data, added)
-                               }
-                               override def afterDequeuingExistingData(data: QueueItem[V] = null): scala.collection.mutable.Seq[QueueItem[V]] = {
-                                 cbkHandler.afterDequeuingExistingData(data)
-                               }
-                               override def beforeSendingData(data: Seq[QueueItem[V]] = null): scala.collection.mutable.Seq[QueueItem[V]] = {
-                                 asList(cbkHandler.beforeSendingData(asList(data)))
-                               }
-                               override def lastBatchBeforeClose: scala.collection.mutable.Seq[QueueItem[V]] = {
-                                 asBuffer(cbkHandler.lastBatchBeforeClose)
-                               }
-                               override def close { cbkHandler.close }
-                             }))
-  }
-
-  /**
-   * Sends the data to a single topic, partitioned by key, using either the
-   * synchronous or the asynchronous producer
-   * @param producerData the producer data object that encapsulates the topic, key and message data
-   */
-  def send(producerData: kafka.javaapi.producer.ProducerData[K,V]) {
-    import collection.JavaConversions._
-    underlying.send(new kafka.producer.ProducerData[K,V](producerData.getTopic, producerData.getKey,
-                                                         asBuffer(producerData.getData)))
-  }
-
-  /**
-   * Use this API to send data to multiple topics
-   * @param producerData list of producer data objects that encapsulate the topic, key and message data
-   */
-  def send(producerData: java.util.List[kafka.javaapi.producer.ProducerData[K,V]]) {
-    import collection.JavaConversions._
-    underlying.send(asBuffer(producerData).map(pd => new kafka.producer.ProducerData[K,V](pd.getTopic, pd.getKey,
-                                                         asBuffer(pd.getData))): _*)
-  }
-
-  /**
-   * Close API to close the producer pool connections to all Kafka brokers. Also closes
-   * the zookeeper client connection if one exists
-   */
-  def close = underlying.close
-}
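A hedged sketch of the single-topic send path above; the property names ("zk.connect", "serializer.class") and the StringEncoder class are assumed 0.7-era settings, not defined in this diff:

    import java.util.Properties
    import kafka.javaapi.producer.{Producer, ProducerData}
    import kafka.producer.ProducerConfig

    object ProducerExample {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("zk.connect", "localhost:2181")                      // assumed config key
        props.put("serializer.class", "kafka.serializer.StringEncoder") // assumed config key and encoder

        val producer = new Producer[String, String](new ProducerConfig(props))
        // Single-topic send; the key is left unset by using the (topic, data) constructor of ProducerData.
        producer.send(new ProducerData[String, String]("test-topic", "hello"))
        producer.close // close is declared without parentheses above
      }
    }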
diff --git a/trunk/core/src/main/scala/kafka/javaapi/producer/ProducerData.scala b/trunk/core/src/main/scala/kafka/javaapi/producer/ProducerData.scala
deleted file mode 100644
index 338e0a8..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/producer/ProducerData.scala
+++ /dev/null
@@ -1,34 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.javaapi.producer
-
-import scala.collection.JavaConversions._
-
-class ProducerData[K, V](private val topic: String,
-                         private val key: K,
-                         private val data: java.util.List[V]) {
-
-  def this(t: String, d: java.util.List[V]) = this(topic = t, key = null.asInstanceOf[K], data = d)
-
-  def this(t: String, d: V) = this(topic = t, key = null.asInstanceOf[K], data = asList(List(d)))
-
-  def getTopic: String = topic
-
-  def getKey: K = key
-
-  def getData: java.util.List[V] = data
-}
diff --git a/trunk/core/src/main/scala/kafka/javaapi/producer/SyncProducer.scala b/trunk/core/src/main/scala/kafka/javaapi/producer/SyncProducer.scala
deleted file mode 100644
index 9925728..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/producer/SyncProducer.scala
+++ /dev/null
@@ -1,48 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.javaapi.producer
-
-import kafka.producer.SyncProducerConfig
-import kafka.javaapi.message.ByteBufferMessageSet
-
-class SyncProducer(syncProducer: kafka.producer.SyncProducer) {
-
-  def this(config: SyncProducerConfig) = this(new kafka.producer.SyncProducer(config))
-
-  val underlying = syncProducer
-
-  def send(topic: String, partition: Int, messages: ByteBufferMessageSet) {
-    import kafka.javaapi.Implicits._
-    underlying.send(topic, partition, messages)
-  }
-
-  def send(topic: String, messages: ByteBufferMessageSet): Unit = send(topic,
-                                                                       kafka.api.ProducerRequest.RandomPartition,
-                                                                       messages)
-
-  def multiSend(produces: Array[kafka.javaapi.ProducerRequest]) {
-    import kafka.javaapi.Implicits._
-    val produceRequests = new Array[kafka.api.ProducerRequest](produces.length)
-    for(i <- 0 until produces.length)
-      produceRequests(i) = new kafka.api.ProducerRequest(produces(i).topic, produces(i).partition, produces(i).messages)
-    underlying.multiSend(produceRequests)
-  }
-
-  def close() {
-    underlying.close
-  }
-}
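A hedged sketch of the low-level send path above; the "host"/"port" property names for SyncProducerConfig and the byte-array Message constructor are assumptions, not shown in this diff:

    import java.util.Properties
    import kafka.javaapi.message.ByteBufferMessageSet
    import kafka.javaapi.producer.SyncProducer
    import kafka.message.{Message, NoCompressionCodec}
    import kafka.producer.SyncProducerConfig

    object SyncProducerExample {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("host", "localhost") // assumed config key
        props.put("port", "9092")      // assumed config key
        val producer = new SyncProducer(new SyncProducerConfig(props))

        val set = new ByteBufferMessageSet(
          NoCompressionCodec,
          java.util.Arrays.asList(new Message("hello".getBytes("UTF-8"))))

        // The two-argument send above defaults to ProducerRequest.RandomPartition.
        producer.send("test-topic", set)
        producer.close()
      }
    }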
diff --git a/trunk/core/src/main/scala/kafka/javaapi/producer/async/CallbackHandler.java b/trunk/core/src/main/scala/kafka/javaapi/producer/async/CallbackHandler.java
deleted file mode 100644
index 2b93974..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/producer/async/CallbackHandler.java
+++ /dev/null
@@ -1,77 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.javaapi.producer.async;
-
-
-import java.util.Properties;
-import kafka.producer.async.QueueItem;
-
-/**
- * Callback handler APIs for use in the async producer. The purpose is to
- * give the user some callback handles to insert custom functionality at
- * various stages as the data flows through the pipeline of the async producer
- */
-public interface CallbackHandler<T> {
-    /**
-     * Initializes the callback handler using a Properties object
-     * @param props the properties used to initialize the callback handler
-    */
-    public void init(Properties props);
-
-    /**
-     * Callback to process the data before it enters the batching queue
-     * of the asynchronous producer
-     * @param data the data sent to the producer
-     * @return the processed data that enters the queue
-    */
-    public QueueItem<T> beforeEnqueue(QueueItem<T> data);
-
-    /**
-     * Callback to process the data just after it enters the batching queue
-     * of the asynchronous producer
-     * @param data the data sent to the producer
-     * @param added flag that indicates if the data was successfully added to the queue
-    */
-    public void afterEnqueue(QueueItem<T> data, boolean added);
-
-    /**
-     * Callback to process the data item right after it has been dequeued by the
-     * background sender thread of the asynchronous producer
-     * @param data the data item dequeued from the async producer queue
-     * @return the processed list of data items that gets added to the data handled by the event handler
-     */
-    public java.util.List<QueueItem<T>> afterDequeuingExistingData(QueueItem<T> data);
-
-    /**
-     * Callback to process the batched data right before it is being processed by the
-     * handle API of the event handler
-     * @param data the batched data received by the event handler
-     * @return the processed batched data that gets processed by the handle() API of the event handler
-    */
-    public java.util.List<QueueItem<T>> beforeSendingData(java.util.List<QueueItem<T>> data);
-
-    /**
-     * Callback to process the last batch of data right before the producer send thread is shutdown
-     * @return the last batch of data that is sent to the EventHandler
-    */
-    public java.util.List<QueueItem<T>> lastBatchBeforeClose();
-
-    /**
-     * Cleans up and shuts down the callback handler
-    */
-    public void close();
-}
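The interface above is the extension point the javaapi Producer wires into its underlying async pipeline. A minimal pass-through implementation sketch (purely illustrative, it only counts successful enqueues and otherwise returns its input unchanged):

    import java.util.{Arrays, Collections, Properties, List => JList}
    import kafka.javaapi.producer.async.CallbackHandler
    import kafka.producer.async.QueueItem

    class CountingCallbackHandler[T] extends CallbackHandler[T] {
      @volatile private var enqueued = 0L
      def enqueuedCount: Long = enqueued

      override def init(props: Properties): Unit = {}

      override def beforeEnqueue(data: QueueItem[T]): QueueItem[T] = data

      override def afterEnqueue(data: QueueItem[T], added: Boolean): Unit =
        if (added) enqueued += 1

      override def afterDequeuingExistingData(data: QueueItem[T]): JList[QueueItem[T]] =
        if (data == null) Collections.emptyList[QueueItem[T]]() else Arrays.asList(data)

      override def beforeSendingData(data: JList[QueueItem[T]]): JList[QueueItem[T]] = data

      override def lastBatchBeforeClose(): JList[QueueItem[T]] =
        Collections.emptyList[QueueItem[T]]()

      override def close(): Unit = {}
    }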
diff --git a/trunk/core/src/main/scala/kafka/javaapi/producer/async/EventHandler.java b/trunk/core/src/main/scala/kafka/javaapi/producer/async/EventHandler.java
deleted file mode 100644
index 842799d..0000000
--- a/trunk/core/src/main/scala/kafka/javaapi/producer/async/EventHandler.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.javaapi.producer.async;
-
-
-import java.util.List;
-import java.util.Properties;
-import kafka.javaapi.producer.SyncProducer;
-import kafka.producer.async.QueueItem;
-import kafka.serializer.Encoder;
-
-/**
- * Handler that dispatches the batched data from the queue of the
- * asynchronous producer.
- */
-public interface EventHandler<T> {
-    /**
-     * Initializes the event handler using a Properties object
-     * @param props the properties used to initialize the event handler
-    */
-    public void init(Properties props);
-
-    /**
-     * Callback to dispatch the batched data and send it to a Kafka server
-     * @param events the data sent to the producer
-     * @param producer the low-level producer used to send the data
-    */
-    public void handle(List<QueueItem<T>> events, SyncProducer producer, Encoder<T> encoder);
-
-    /**
-     * Cleans up and shuts down the event handler
-    */
-    public void close();
-}
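A stub implementation sketch of the interface above; it only reports batch sizes and does not dispatch anything to the broker, so it shows the shape of a handler rather than a usable dispatcher:

    import java.util.{List => JList, Properties}
    import kafka.javaapi.producer.SyncProducer
    import kafka.javaapi.producer.async.EventHandler
    import kafka.producer.async.QueueItem
    import kafka.serializer.Encoder

    class LoggingEventHandler[T] extends EventHandler[T] {
      override def init(props: Properties): Unit = {}

      // A real handler would serialize the events with the encoder and send them
      // through the provided SyncProducer; this one only logs the batch size.
      override def handle(events: JList[QueueItem[T]], producer: SyncProducer, encoder: Encoder[T]): Unit =
        println("would dispatch a batch of " + events.size + " events")

      override def close(): Unit = {}
    }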
diff --git a/trunk/core/src/main/scala/kafka/log/Log.scala b/trunk/core/src/main/scala/kafka/log/Log.scala
deleted file mode 100644
index 5450699..0000000
--- a/trunk/core/src/main/scala/kafka/log/Log.scala
+++ /dev/null
@@ -1,412 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import java.util.concurrent.atomic._
-import java.text.NumberFormat
-import java.io._
-import kafka.message._
-import kafka.utils._
-import kafka.common._
-import kafka.api.OffsetRequest
-import java.util._
-import kafka.server.BrokerTopicStat
-
-private[log] object Log {
-  val FileSuffix = ".kafka"
-
-  /**
-   * Find a given range object in a list of ranges by a value in that range. Does a binary search over the ranges
- * but instead of checking for equality, looks within the range. Takes the array size as a parameter in case
- * the array grows while the search is in progress.
-   *
-   * TODO: This should move into SegmentList.scala
-   */
-  def findRange[T <: Range](ranges: Array[T], value: Long, arraySize: Int): Option[T] = {
-    if(ranges.size < 1)
-      return None
-
-    // check out of bounds
-    if(value < ranges(0).start || value > ranges(arraySize - 1).start + ranges(arraySize - 1).size)
-      throw new OffsetOutOfRangeException("offset " + value + " is out of range")
-
-    // check at the end
-    if (value == ranges(arraySize - 1).start + ranges(arraySize - 1).size)
-      return None
-
-    var low = 0
-    var high = arraySize - 1
-    while(low <= high) {
-      val mid = (high + low) / 2
-      val found = ranges(mid)
-      if(found.contains(value))
-        return Some(found)
-      else if (value < found.start)
-        high = mid - 1
-      else
-        low = mid + 1
-    }
-    None
-  }
-
-  def findRange[T <: Range](ranges: Array[T], value: Long): Option[T] =
-    findRange(ranges, value, ranges.length)
-
-  /**
- * Make a log segment file name from a byte offset. All this does is pad out the offset number with zeros
-   * so that ls sorts the files numerically
-   */
-  def nameFromOffset(offset: Long): String = {
-    val nf = NumberFormat.getInstance()
-    nf.setMinimumIntegerDigits(20)
-    nf.setMaximumFractionDigits(0)
-    nf.setGroupingUsed(false)
-    nf.format(offset) + Log.FileSuffix
-  }
-  
-  def getEmptyOffsets(request: OffsetRequest): Array[Long] = {
-    if (request.time == OffsetRequest.LatestTime || request.time == OffsetRequest.EarliestTime)
-      return Array(0L)
-    else
-      return Array()
-  }
-}
-
-/**
- * A segment file in the log directory. Each log segment consists of an open message set, a start offset and a size
- */
-private[log] class LogSegment(val file: File, val time: Time, val messageSet: FileMessageSet, val start: Long) extends Range {
-  var firstAppendTime: Option[Long] = None
-  @volatile var deleted = false
-  def size: Long = messageSet.highWaterMark
-
-  private def updateFirstAppendTime() {
-    if (firstAppendTime.isEmpty)
-      firstAppendTime = Some(time.milliseconds)
-  }
-
-  def append(messages: ByteBufferMessageSet) {
-    if (messages.sizeInBytes > 0) {
-      messageSet.append(messages)
-      updateFirstAppendTime()
-    }
-   }
-
-  override def toString() = "(file=" + file + ", start=" + start + ", size=" + size + ")"
-}
-
-
-/**
- * An append-only log for storing messages. 
- */
-@threadsafe
-private[log] class Log(val dir: File, val time: Time, val maxSize: Long, val maxMessageSize: Int,
-                       val flushInterval: Int, val rollIntervalMs: Long, val needRecovery: Boolean) extends Logging {
-  /* A lock that guards all modifications to the log */
-  private val lock = new Object
-
-  /* The current number of unflushed messages appended to the log */
-  private val unflushed = new AtomicInteger(0)
-
-   /* last time it was flushed */
-  private val lastflushedTime = new AtomicLong(System.currentTimeMillis)
-
-  /* The actual segments of the log */
-  private[log] val segments: SegmentList[LogSegment] = loadSegments()
-
-  /* The name of this log */
-  val name  = dir.getName()
-
-  private val logStats = new LogStats(this)
-
-  Utils.registerMBean(logStats, "kafka:type=kafka.logs." + dir.getName)
-
-  /* Load the log segments from the log files on disk */
-  private def loadSegments(): SegmentList[LogSegment] = {
-    // open all the segments read-only
-    val accum = new ArrayList[LogSegment]
-    val ls = dir.listFiles()
-    if(ls != null) {
-      for(file <- ls if file.isFile && file.toString.endsWith(Log.FileSuffix)) {
-        if(!file.canRead)
-          throw new IOException("Could not read file " + file)
-        val filename = file.getName()
-        val start = filename.substring(0, filename.length - Log.FileSuffix.length).toLong
-        val messageSet = new FileMessageSet(file, false)
-        accum.add(new LogSegment(file, time, messageSet, start))
-      }
-    }
-
-    if(accum.size == 0) {
-      // no existing segments, create a new mutable segment
-      val newFile = new File(dir, Log.nameFromOffset(0))
-      val set = new FileMessageSet(newFile, true)
-      accum.add(new LogSegment(newFile, time, set, 0))
-    } else {
-      // there is at least one existing segment, validate and recover them/it
-      // sort segments into ascending order for fast searching
-      Collections.sort(accum, new Comparator[LogSegment] {
-        def compare(s1: LogSegment, s2: LogSegment): Int = {
-          if(s1.start == s2.start) 0
-          else if(s1.start < s2.start) -1
-          else 1
-        }
-      })
-      validateSegments(accum)
-
-      //make the final section mutable and run recovery on it if necessary
-      val last = accum.remove(accum.size - 1)
-      last.messageSet.close()
-      info("Loading the last segment " + last.file.getAbsolutePath() + " in mutable mode, recovery " + needRecovery)
-      val mutable = new LogSegment(last.file, time, new FileMessageSet(last.file, true, new AtomicBoolean(needRecovery)), last.start)
-      accum.add(mutable)
-    }
-    new SegmentList(accum.toArray(new Array[LogSegment](accum.size)))
-  }
-
-  /**
-   * Check that the ranges and sizes add up, otherwise we have lost some data somewhere
-   */
-  private def validateSegments(segments: ArrayList[LogSegment]) {
-    lock synchronized {
-      for(i <- 0 until segments.size - 1) {
-        val curr = segments.get(i)
-        val next = segments.get(i+1)
-        if(curr.start + curr.size != next.start)
-          throw new IllegalStateException("The following segments don't validate: " +
-                  curr.file.getAbsolutePath() + ", " + next.file.getAbsolutePath())
-      }
-    }
-  }
-
-  /**
-   * The number of segments in the log
-   */
-  def numberOfSegments: Int = segments.view.length
-
-  /**
-   * Close this log
-   */
-  def close() {
-    lock synchronized {
-      for(seg <- segments.view)
-        seg.messageSet.close()
-    }
-  }
-
-  /**
-   * Append this message set to the active segment of the log, rolling over to a fresh segment if necessary.
-   * Returns the offset at which the messages are written.
-   */
-  def append(messages: ByteBufferMessageSet): Unit = {
-    // validate the messages
-    messages.verifyMessageSize(maxMessageSize)
-    var numberOfMessages = 0
-    for(messageAndOffset <- messages) {
-      if(!messageAndOffset.message.isValid)
-        throw new InvalidMessageException()
-      numberOfMessages += 1;
-    }
-
-    BrokerTopicStat.getBrokerTopicStat(getTopicName).recordMessagesIn(numberOfMessages)
-    BrokerTopicStat.getBrokerAllTopicStat.recordMessagesIn(numberOfMessages)
-    logStats.recordAppendedMessages(numberOfMessages)
-
-    // truncate the message set's buffer up to validBytes before appending it to the on-disk log
-    val validByteBuffer = messages.getBuffer.duplicate()
-    val messageSetValidBytes = messages.validBytes
-    if(messageSetValidBytes > Int.MaxValue || messageSetValidBytes < 0)
-      throw new InvalidMessageSizeException("Illegal length of message set " + messageSetValidBytes +
-        " Message set cannot be appended to log. Possible causes are corrupted produce requests")
-
-    validByteBuffer.limit(messageSetValidBytes.asInstanceOf[Int])
-    val validMessages = new ByteBufferMessageSet(validByteBuffer)
-
-    // they are valid, insert them in the log
-    lock synchronized {
-      try {
-        var segment = segments.view.last
-        maybeRoll(segment)
-        segment = segments.view.last
-        segment.append(validMessages)
-        maybeFlush(numberOfMessages)
-      }
-      catch {
-        case e: IOException =>
-          fatal("Halting due to unrecoverable I/O error while handling producer request", e)
-          Runtime.getRuntime.halt(1)
-        case e2 => throw e2
-      }
-    }
-  }
-
-
-  /**
-   * Read from the log file at the given offset
-   */
-  def read(offset: Long, length: Int): MessageSet = {
-    val view = segments.view
-    Log.findRange(view, offset, view.length) match {
-      case Some(segment) => segment.messageSet.read((offset - segment.start), length)
-      case _ => MessageSet.Empty
-    }
-  }
-
-  /**
-   * Delete any log segments matching the given predicate function
-   */
-  def markDeletedWhile(predicate: LogSegment => Boolean): Seq[LogSegment] = {
-    lock synchronized {
-      val view = segments.view
-      val deletable = view.takeWhile(predicate)
-      for(seg <- deletable)
-        seg.deleted = true
-      var numToDelete = deletable.size
-      // if we are deleting everything, create a new empty segment
-      if(numToDelete == view.size) {
-        if (view(numToDelete - 1).size > 0)
-          roll()
-        else {
-          // If the last segment to be deleted is empty and we roll the log, the new segment will have the same
-          // file name. So simply reuse the last segment and reset the modified time.
-          view(numToDelete - 1).file.setLastModified(SystemTime.milliseconds)
-          numToDelete -=1
-        }
-      }
-      segments.trunc(numToDelete)
-    }
-  }
-
-  /**
-   * Get the size of the log in bytes
-   */
-  def size: Long =
-    segments.view.foldLeft(0L)(_ + _.size)
-
-  /**
-   * The byte offset of the message that will be appended next.
-   */
-  def nextAppendOffset: Long = {
-    flush
-    val last = segments.view.last
-    last.start + last.size
-  }
-
-  /**
-   *  get the current high watermark of the log
-   */
-  def getHighwaterMark: Long = segments.view.last.messageSet.highWaterMark
-
-  /**
-   * Roll the log over if necessary
-   */
-  private def maybeRoll(segment: LogSegment) {
-    if((segment.messageSet.sizeInBytes > maxSize) ||
-       ((segment.firstAppendTime.isDefined) && (time.milliseconds - segment.firstAppendTime.get > rollIntervalMs)))
-      roll()
-  }
-
-  /**
-   * Create a new segment and make it active
-   */
-  def roll() {
-    lock synchronized {
-      val newOffset = nextAppendOffset
-      val newFile = new File(dir, Log.nameFromOffset(newOffset))
-      if (newFile.exists) {
-        warn("newly rolled logsegment " + newFile.getName + " already exists; deleting it first")
-        newFile.delete()
-      }
-      debug("Rolling log '" + name + "' to " + newFile.getName())
-      segments.append(new LogSegment(newFile, time, new FileMessageSet(newFile, true), newOffset))
-    }
-  }
-
-  /**
-   * Flush the log if necessary
-   */
-  private def maybeFlush(numberOfMessages : Int) {
-    if(unflushed.addAndGet(numberOfMessages) >= flushInterval) {
-      flush()
-    }
-  }
-
-  /**
-   * Flush this log file to the physical disk
-   */
-  def flush() : Unit = {
-    if (unflushed.get == 0) return
-
-    lock synchronized {
-      debug("Flushing log '" + name + "' last flushed: " + getLastFlushedTime + " current time: " +
-          System.currentTimeMillis)
-      segments.view.last.messageSet.flush()
-      unflushed.set(0)
-      lastflushedTime.set(System.currentTimeMillis)
-     }
-  }
-
-  def getOffsetsBefore(request: OffsetRequest): Array[Long] = {
-    val segsArray = segments.view
-    var offsetTimeArray: Array[Tuple2[Long, Long]] = null
-    if (segsArray.last.size > 0)
-      offsetTimeArray = new Array[Tuple2[Long, Long]](segsArray.length + 1)
-    else
-      offsetTimeArray = new Array[Tuple2[Long, Long]](segsArray.length)
-
-    for (i <- 0 until segsArray.length)
-      offsetTimeArray(i) = (segsArray(i).start, segsArray(i).file.lastModified)
-    if (segsArray.last.size > 0)
-      offsetTimeArray(segsArray.length) = (segsArray.last.start + segsArray.last.messageSet.highWaterMark, SystemTime.milliseconds)
-
-    var startIndex = -1
-    request.time match {
-      case OffsetRequest.LatestTime =>
-        startIndex = offsetTimeArray.length - 1
-      case OffsetRequest.EarliestTime =>
-        startIndex = 0
-      case _ =>
-          var isFound = false
-          debug("Offset time array = " + offsetTimeArray.foreach(o => "%d, %d".format(o._1, o._2)))
-          startIndex = offsetTimeArray.length - 1
-          while (startIndex >= 0 && !isFound) {
-            if (offsetTimeArray(startIndex)._2 <= request.time)
-              isFound = true
-            else
-              startIndex -=1
-          }
-    }
-
-    val retSize = request.maxNumOffsets.min(startIndex + 1)
-    val ret = new Array[Long](retSize)
-    for (j <- 0 until retSize) {
-      ret(j) = offsetTimeArray(startIndex)._1
-      startIndex -= 1
-    }
-    ret
-  }
- 
-  def getTopicName():String = {
-    name.substring(0, name.lastIndexOf("-"))
-  }
-
-  def getLastFlushedTime():Long = {
-    return lastflushedTime.get
-  }
-}
-  
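The segment-naming convention above (Log.nameFromOffset) pads the starting byte offset to 20 digits so a plain directory listing sorts segments numerically. A small demo, which must live in package kafka.log because the Log object is private[log]:

    package kafka.log

    // Illustrates the zero-padded segment naming used by Log.nameFromOffset.
    object SegmentNameDemo {
      def main(args: Array[String]): Unit = {
        println(Log.nameFromOffset(0L))    // prints 00000000000000000000.kafka
        println(Log.nameFromOffset(1025L)) // prints 00000000000000001025.kafka
      }
    }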
diff --git a/trunk/core/src/main/scala/kafka/log/LogManager.scala b/trunk/core/src/main/scala/kafka/log/LogManager.scala
deleted file mode 100644
index 822f879..0000000
--- a/trunk/core/src/main/scala/kafka/log/LogManager.scala
+++ /dev/null
@@ -1,357 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import java.io._
-import kafka.utils._
-import scala.actors.Actor
-import scala.collection._
-import java.util.concurrent.CountDownLatch
-import kafka.server.{KafkaConfig, KafkaZooKeeper}
-import kafka.common.{InvalidTopicException, InvalidPartitionException}
-import kafka.api.OffsetRequest
-
-/**
- * The guy who creates and hands out logs
- */
-@threadsafe
-private[kafka] class LogManager(val config: KafkaConfig,
-                                private val scheduler: KafkaScheduler,
-                                private val time: Time,
-                                val logRollDefaultIntervalMs: Long,
-                                val logCleanupIntervalMs: Long,
-                                val logCleanupDefaultAgeMs: Long,
-                                needRecovery: Boolean) extends Logging {
-
-  val logDir: File = new File(config.logDir)
-  private val numPartitions = config.numPartitions
-  private val logFileSizeMap = config.logFileSizeMap
-  private val flushInterval = config.flushInterval
-  private val topicPartitionsMap = config.topicPartitionsMap
-  private val logCreationLock = new Object
-  private val random = new java.util.Random
-  private var kafkaZookeeper: KafkaZooKeeper = null
-  private var zkActor: Actor = null
-  private val startupLatch: CountDownLatch = if (config.enableZookeeper) new CountDownLatch(1) else null
-  private val logFlusherScheduler = new KafkaScheduler(1, "kafka-logflusher-", false)
-  private val topicNameValidator = new TopicNameValidator(config)
-  private val logFlushIntervalMap = config.flushIntervalMap
-  private val logRetentionSizeMap = config.logRetentionSizeMap
-  private val logRetentionMsMap = getMsMap(config.logRetentionHoursMap)
-  private val logRollMsMap = getMsMap(config.logRollHoursMap)
-
-  /* Initialize a log for each subdirectory of the main log directory */
-  private val logs = new Pool[String, Pool[Int, Log]]()
-  if(!logDir.exists()) {
-    info("No log directory found, creating '" + logDir.getAbsolutePath() + "'")
-    logDir.mkdirs()
-  }
-  if(!logDir.isDirectory() || !logDir.canRead())
-    throw new IllegalArgumentException(logDir.getAbsolutePath() + " is not a readable log directory.")
-  val subDirs = logDir.listFiles()
-  if(subDirs != null) {
-    for(dir <- subDirs) {
-      if(!dir.isDirectory()) {
-        warn("Skipping unexplainable file '" + dir.getAbsolutePath() + "'--should it be there?")
-      } else {
-        info("Loading log '" + dir.getName() + "'")
-        val topic = Utils.getTopicPartition(dir.getName)._1
-        val rollIntervalMs = logRollMsMap.get(topic).getOrElse(this.logRollDefaultIntervalMs)
-        val maxLogFileSize = logFileSizeMap.get(topic).getOrElse(config.logFileSize)
-        val log = new Log(dir, time, maxLogFileSize, config.maxMessageSize, flushInterval, rollIntervalMs, needRecovery)
-        val topicPartition = Utils.getTopicPartition(dir.getName)
-        logs.putIfNotExists(topicPartition._1, new Pool[Int, Log]())
-        val parts = logs.get(topicPartition._1)
-        parts.put(topicPartition._2, log)
-      }
-    }
-  }
-  
-  /* Schedule the cleanup task to delete old logs */
-  if(scheduler != null) {
-    info("starting log cleaner every " + logCleanupIntervalMs + " ms")    
-    scheduler.scheduleWithRate(cleanupLogs, 60 * 1000, logCleanupIntervalMs)
-  }
-
-  if(config.enableZookeeper) {
-    kafkaZookeeper = new KafkaZooKeeper(config, this)
-    kafkaZookeeper.startup
-    zkActor = new Actor {
-      def act() {
-        loop {
-          receive {
-            case topic: String =>
-              try {
-                kafkaZookeeper.registerTopicInZk(topic)
-              }
-              catch {
-                case e => error(e) // log it and let it go
-              }
-            case StopActor =>
-              info("zkActor stopped")
-              exit
-          }
-        }
-      }
-    }
-    zkActor.start
-  }
-
-  case object StopActor
-
-  private def getMsMap(hoursMap: Map[String, Int]) : Map[String, Long] = {
-    var ret = new mutable.HashMap[String, Long]
-    for ( (topic, hour) <- hoursMap ) {
-      ret.put(topic, hour * 60 * 60 * 1000L)
-    }
-    ret
-  }
-
-  /**
-   *  Register this broker in ZK for the first time.
-   */
-  def startup() {
-    if(config.enableZookeeper) {
-      kafkaZookeeper.registerBrokerInZk()
-      for (topic <- getAllTopics)
-        kafkaZookeeper.registerTopicInZk(topic)
-      startupLatch.countDown
-    }
-    info("Starting log flusher every " + config.flushSchedulerThreadRate + " ms with the following overrides " + logFlushIntervalMap)
-    logFlusherScheduler.scheduleWithRate(flushAllLogs, config.flushSchedulerThreadRate, config.flushSchedulerThreadRate)
-  }
-
-  private def awaitStartup() {
-    if (config.enableZookeeper)
-      startupLatch.await
-  }
-
-  private def registerNewTopicInZK(topic: String) {
-    if (config.enableZookeeper)
-      zkActor ! topic 
-  }
-
-  /**
-   * Create a log for the given topic and the given partition
-   */
-  private def createLog(topic: String, partition: Int): Log = {
-    logCreationLock synchronized {
-      val d = new File(logDir, topic + "-" + partition)
-      d.mkdirs()
-      val rollIntervalMs = logRollMsMap.get(topic).getOrElse(this.logRollDefaultIntervalMs)
-      val maxLogFileSize = logFileSizeMap.get(topic).getOrElse(config.logFileSize)
-      new Log(d, time, maxLogFileSize, config.maxMessageSize, flushInterval, rollIntervalMs, false)
-    }
-  }
-
-  /**
-   * Return the Pool (partitions) for a specific log
-   */
-  private def getLogPool(topic: String, partition: Int): Pool[Int, Log] = {
-    awaitStartup
-    topicNameValidator.validate(topic)
-    if (partition < 0 || partition >= topicPartitionsMap.getOrElse(topic, numPartitions)) {
-      warn("Wrong partition " + partition + " valid partitions (0," +
-              (topicPartitionsMap.getOrElse(topic, numPartitions) - 1) + ")")
-      throw new InvalidPartitionException("wrong partition " + partition)
-    }
-    logs.get(topic)
-  }
-
-  /**
-   * Pick a random partition from the given topic
-   */
-  def chooseRandomPartition(topic: String): Int = {
-    random.nextInt(topicPartitionsMap.getOrElse(topic, numPartitions))
-  }
-
-  def getOffsets(offsetRequest: OffsetRequest): Array[Long] = {
-    val log = getLog(offsetRequest.topic, offsetRequest.partition)
-    if (log != null) return log.getOffsetsBefore(offsetRequest)
-    Log.getEmptyOffsets(offsetRequest)
-  }
-
-  /**
-   * Get the log if exists
-   */
-  def getLog(topic: String, partition: Int): Log = {
-    val parts = getLogPool(topic, partition)
-    if (parts == null) return null
-    parts.get(partition)
-  }
-
-  /**
-   * Create the log if it does not exist, if it exists just return it
-   */
-  def getOrCreateLog(topic: String, partition: Int): Log = {
-    var hasNewTopic = false
-    var parts = getLogPool(topic, partition)
-    if (parts == null) {
-      val found = logs.putIfNotExists(topic, new Pool[Int, Log])
-      if (found == null)
-        hasNewTopic = true
-      parts = logs.get(topic)
-    }
-    var log = parts.get(partition)
-    if(log == null) {
-      log = createLog(topic, partition)
-      val found = parts.putIfNotExists(partition, log)
-      if(found != null) {
-        // there was already somebody there
-        log.close()
-        log = found
-      }
-      else
-        info("Created log for '" + topic + "'-" + partition)
-    }
-
-    if (hasNewTopic)
-      registerNewTopicInZK(topic)
-    log
-  }
-  
-  /* Attempts to delete all provided segments from a log and returns how many it was able to delete */
-  private def deleteSegments(log: Log, segments: Seq[LogSegment]): Int = {
-    var total = 0
-    for(segment <- segments) {
-      info("Deleting log segment " + segment.file.getName() + " from " + log.name)
-      Utils.swallow(logger.warn, segment.messageSet.close())
-      if(!segment.file.delete()) {
-        warn("Delete failed.")
-      } else {
-        total += 1
-      }
-    }
-    total
-  }
-
-  /* Runs through the log removing segments older than a certain age */
-  private def cleanupExpiredSegments(log: Log): Int = {
-    val startMs = time.milliseconds
-    val topic = Utils.getTopicPartition(log.dir.getName)._1
-    val logCleanupThresholdMS = logRetentionMsMap.get(topic).getOrElse(this.logCleanupDefaultAgeMs)
-    val toBeDeleted = log.markDeletedWhile(startMs - _.file.lastModified > logCleanupThresholdMS)
-    val total = deleteSegments(log, toBeDeleted)
-    total
-  }
-
-  /**
-   *  Runs through the log removing segments until the size of the log
-   *  is at least logRetentionSize bytes in size
-   */
-  private def cleanupSegmentsToMaintainSize(log: Log): Int = {
-    val topic = Utils.getTopicPartition(log.dir.getName)._1
-    val maxLogRetentionSize = logRetentionSizeMap.get(topic).getOrElse(config.logRetentionSize)
-    if(maxLogRetentionSize < 0 || log.size < maxLogRetentionSize) return 0
-    var diff = log.size - maxLogRetentionSize
-    def shouldDelete(segment: LogSegment) = {
-      if(diff - segment.size >= 0) {
-        diff -= segment.size
-        true
-      } else {
-        false
-      }
-    }
-    val toBeDeleted = log.markDeletedWhile( shouldDelete )
-    val total = deleteSegments(log, toBeDeleted)
-    total
-  }
-
-  /**
-   * Delete any eligible logs. Return the number of segments deleted.
-   */
-  def cleanupLogs() {
-    debug("Beginning log cleanup...")
-    val iter = getLogIterator
-    var total = 0
-    val startMs = time.milliseconds
-    while(iter.hasNext) {
-      val log = iter.next
-      debug("Garbage collecting '" + log.name + "'")
-      total += cleanupExpiredSegments(log) + cleanupSegmentsToMaintainSize(log)
-    }
-    debug("Log cleanup completed. " + total + " files deleted in " + 
-                 (time.milliseconds - startMs) / 1000 + " seconds")
-  }
-  
-  /**
-   * Close all the logs
-   */
-  def close() {
-    logFlusherScheduler.shutdown()
-    val iter = getLogIterator
-    while(iter.hasNext)
-      iter.next.close()
-    if (config.enableZookeeper) {
-      zkActor ! StopActor
-      kafkaZookeeper.close
-    }
-  }
-  
-  private def getLogIterator(): Iterator[Log] = {
-    new IteratorTemplate[Log] {
-      val partsIter = logs.values.iterator
-      var logIter: Iterator[Log] = null
-
-      override def makeNext(): Log = {
-        while (true) {
-          if (logIter != null && logIter.hasNext)
-            return logIter.next
-          if (!partsIter.hasNext)
-            return allDone
-          logIter = partsIter.next.values.iterator
-        }
-        // should never reach here
-        assert(false)
-        return allDone
-      }
-    }
-  }
-
-  private def flushAllLogs() = {
-    debug("flushing the high watermark of all logs")
-
-    for (log <- getLogIterator)
-    {
-      try{
-        val timeSinceLastFlush = System.currentTimeMillis - log.getLastFlushedTime
-        var logFlushInterval = config.defaultFlushIntervalMs
-        if(logFlushIntervalMap.contains(log.getTopicName))
-          logFlushInterval = logFlushIntervalMap(log.getTopicName)
-        debug(log.getTopicName + " flush interval " + logFlushInterval +
-            " last flushed " + log.getLastFlushedTime + " time since last flush: " + timeSinceLastFlush)
-        if(timeSinceLastFlush >= logFlushInterval)
-          log.flush
-      }
-      catch {
-        case e =>
-          error("Error flushing topic " + log.getTopicName, e)
-          e match {
-            case _: IOException =>
-              fatal("Halting due to unrecoverable I/O error while flushing logs: " + e.getMessage, e)
-              Runtime.getRuntime.halt(1)
-            case _ =>
-          }
-      }
-    }
-  }
-
-
-  def getAllTopics(): Iterator[String] = logs.keys.iterator
-  def getTopicPartitionsMap() = topicPartitionsMap
-}
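
For context, a standalone sketch of the size-based retention rule implemented by cleanupSegmentsToMaintainSize above; the segment sizes, cap, and object name are made-up values, not part of the original tree. Segments are dropped oldest-first only while the remaining log would still be at or above the per-topic cap.

// Sketch of the size-based retention decision above (hypothetical numbers).
object RetentionRuleSketch extends App {
  val segmentSizes = Seq(300L, 300L, 200L, 200L)     // oldest segment first
  val maxLogRetentionSize = 600L                     // hypothetical per-topic cap
  var diff = segmentSizes.sum - maxLogRetentionSize  // 400 bytes over the cap
  // Mirror of shouldDelete: drop a segment only if the log stays >= the cap afterwards.
  val deleted = segmentSizes.takeWhile { size =>
    if (diff - size >= 0) { diff -= size; true } else false
  }
  println(deleted)  // List(300): dropping a second segment would take the log below the cap
}
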
diff --git a/trunk/core/src/main/scala/kafka/log/LogStats.scala b/trunk/core/src/main/scala/kafka/log/LogStats.scala
deleted file mode 100644
index 4ac40a0..0000000
--- a/trunk/core/src/main/scala/kafka/log/LogStats.scala
+++ /dev/null
@@ -1,44 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import java.util.concurrent.atomic.AtomicLong
-
-trait LogStatsMBean {
-  def getName(): String
-  def getSize(): Long
-  def getNumberOfSegments: Int
-  def getCurrentOffset: Long
-  def getNumAppendedMessages: Long
-}
-
-class LogStats(val log: Log) extends LogStatsMBean {
-  private val numCumulatedMessages = new AtomicLong(0)
-
-  def getName(): String = log.name
-  
-  def getSize(): Long = log.size
-  
-  def getNumberOfSegments: Int = log.numberOfSegments
-  
-  def getCurrentOffset: Long = log.getHighwaterMark
-  
-  def getNumAppendedMessages: Long = numCumulatedMessages.get
-
-  def recordAppendedMessages(nMessages: Int) = numCumulatedMessages.getAndAdd(nMessages)
-}
diff --git a/trunk/core/src/main/scala/kafka/log/SegmentList.scala b/trunk/core/src/main/scala/kafka/log/SegmentList.scala
deleted file mode 100644
index 989948d..0000000
--- a/trunk/core/src/main/scala/kafka/log/SegmentList.scala
+++ /dev/null
@@ -1,86 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import java.util.concurrent.atomic._
-import reflect._
-import scala.math._
-
-private[log] object SegmentList {
-  val MaxAttempts: Int = 20
-}
-
-/**
- * A copy-on-write list implementation that provides consistent views. The view() method
- * provides an immutable sequence representing a consistent state of the list. The user can do
- * iterative operations on this sequence such as binary search without locking all access to the list.
- * Even if the underlying list changes, an existing view remains unchanged
- */
-private[log] class SegmentList[T](seq: Seq[T])(implicit m: ClassManifest[T]) {
-  
-  val contents: AtomicReference[Array[T]] = new AtomicReference(seq.toArray)
-
-  /**
-   * Append the given items to the end of the list
-   */
-  def append(ts: T*)(implicit m: ClassManifest[T]) {
-    while(true){
-      val curr = contents.get()
-      val updated = new Array[T](curr.length + ts.length)
-      Array.copy(curr, 0, updated, 0, curr.length)
-      for(i <- 0 until ts.length)
-        updated(curr.length + i) = ts(i)
-      if(contents.compareAndSet(curr, updated))
-        return
-    }
-  }
-  
-  
-  /**
-   * Delete the first n items from the list
-   */
-  def trunc(newStart: Int): Seq[T] = {
-    if(newStart < 0)
-      throw new IllegalArgumentException("Starting index must be non-negative.")
-    var deleted: Array[T] = null
-    var done = false
-    while(!done) {
-      val curr = contents.get()
-      val newLength = max(curr.length - newStart, 0)
-      val updated = new Array[T](newLength)
-      Array.copy(curr, min(newStart, curr.length - 1), updated, 0, newLength)
-      if(contents.compareAndSet(curr, updated)) {
-        deleted = new Array[T](newStart)
-        Array.copy(curr, 0, deleted, 0, curr.length - newLength)
-        done = true
-      }
-    }
-    deleted
-  }
-  
-  /**
-   * Get a consistent view of the sequence
-   */
-  def view: Array[T] = contents.get()
-  
-  /**
-   * Nicer toString method
-   */
-  override def toString(): String = view.mkString("SegmentList(", ", ", ")")
-  
-}
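
A minimal usage sketch of the copy-on-write behaviour documented above; because the class is private[log], the sketch assumes it lives in package kafka.log, and the object name is made up.

package kafka.log

object SegmentListSketch extends App {
  val list = new SegmentList(Seq(1, 2, 3))
  val before = list.view             // consistent snapshot of the current contents
  list.append(4, 5)                  // lock-free append via compare-and-set
  val dropped = list.trunc(2)        // removes the first two items and returns them
  println(before.mkString(","))      // 1,2,3 -- the earlier view is untouched by later mutations
  println(list.view.mkString(","))   // 3,4,5
  println(dropped.mkString(","))     // 1,2
}
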
diff --git a/trunk/core/src/main/scala/kafka/log/package.html b/trunk/core/src/main/scala/kafka/log/package.html
deleted file mode 100644
index 0880be7..0000000
--- a/trunk/core/src/main/scala/kafka/log/package.html
+++ /dev/null
@@ -1 +0,0 @@
-The log management system for Kafka.
\ No newline at end of file
diff --git a/trunk/core/src/main/scala/kafka/message/ByteBufferBackedInputStream.scala b/trunk/core/src/main/scala/kafka/message/ByteBufferBackedInputStream.scala
deleted file mode 100644
index ce55c16..0000000
--- a/trunk/core/src/main/scala/kafka/message/ByteBufferBackedInputStream.scala
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import java.io.InputStream
-import java.nio.ByteBuffer
-
-class ByteBufferBackedInputStream(buffer:ByteBuffer) extends InputStream {
-  override def read():Int  = {
-    buffer.hasRemaining match {
-      case true =>
-        (buffer.get() & 0xFF)
-      case false => -1
-    }
-  }
-
-  override def read(bytes:Array[Byte], off:Int, len:Int):Int = {
-    buffer.hasRemaining match {
-      case true =>
-        // Read only what's left
-        val realLen = math.min(len, buffer.remaining())
-        buffer.get(bytes, off, realLen)
-        realLen
-      case false => -1
-    }
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/message/ByteBufferMessageSet.scala b/trunk/core/src/main/scala/kafka/message/ByteBufferMessageSet.scala
deleted file mode 100644
index 5afd6e1..0000000
--- a/trunk/core/src/main/scala/kafka/message/ByteBufferMessageSet.scala
+++ /dev/null
@@ -1,203 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import kafka.utils.Logging
-import java.nio.ByteBuffer
-import java.nio.channels._
-import kafka.utils.IteratorTemplate
-import kafka.common.{MessageSizeTooLargeException, InvalidMessageSizeException, ErrorMapping}
-
-/**
- * A sequence of messages stored in a byte buffer
- *
- * There are two ways to create a ByteBufferMessageSet
- *
- * Option 1: From a ByteBuffer which already contains the serialized message set. Consumers will use this method.
- *
- * Option 2: Give it a list of messages along with instructions relating to serialization format. Producers will use this method.
- * 
- */
-class ByteBufferMessageSet(private val buffer: ByteBuffer,
-                           private val initialOffset: Long = 0L,
-                           private val errorCode: Int = ErrorMapping.NoError) extends MessageSet with Logging {
-  private var shallowValidByteCount = -1L
-  if(sizeInBytes > Int.MaxValue)
-    throw new InvalidMessageSizeException("Message set cannot be larger than " + Int.MaxValue)
-
-  def this(compressionCodec: CompressionCodec, messages: Message*) {
-    this(MessageSet.createByteBuffer(compressionCodec, messages:_*), 0L, ErrorMapping.NoError)
-  }
-
-  def this(messages: Message*) {
-    this(NoCompressionCodec, messages: _*)
-  }
-
-  def getInitialOffset = initialOffset
-
-  def getBuffer = buffer
-
-  def getErrorCode = errorCode
-
-  def serialized(): ByteBuffer = buffer
-
-  def validBytes: Long = shallowValidBytes
-
-  private def shallowValidBytes: Long = {
-    if(shallowValidByteCount < 0) {
-      val iter = this.internalIterator(true)
-      while(iter.hasNext) {
-        val messageAndOffset = iter.next
-        shallowValidByteCount = messageAndOffset.offset
-      }
-    }
-    if(shallowValidByteCount < initialOffset) 0
-    else (shallowValidByteCount - initialOffset)
-  }
-  
-  /** Write the messages in this set to the given channel */
-  def writeTo(channel: GatheringByteChannel, offset: Long, size: Long): Long = {
-    buffer.mark()
-    val written = channel.write(buffer)
-    buffer.reset()
-    written
-  }
-
-  /** default iterator that iterates over decompressed messages */
-  override def iterator: Iterator[MessageAndOffset] = internalIterator()
-
-  /** iterator over compressed messages without decompressing */
-  def shallowIterator: Iterator[MessageAndOffset] = internalIterator(true)
-
-  def verifyMessageSize(maxMessageSize: Int) {
-    val shallowIter = internalIterator(true)
-    while(shallowIter.hasNext){
-      val messageAndOffset = shallowIter.next
-      val payloadSize = messageAndOffset.message.payloadSize
-      if ( payloadSize > maxMessageSize)
-        throw new MessageSizeTooLargeException("payload size of " + payloadSize + " larger than " + maxMessageSize)
-    }
-  }
-
-  /** When the isShallow flag is true, we do a shallow iteration: just traverse the first level of messages. This is used by the verifyMessageSize() function. */
-  private def internalIterator(isShallow: Boolean = false): Iterator[MessageAndOffset] = {
-    ErrorMapping.maybeThrowException(errorCode)
-    new IteratorTemplate[MessageAndOffset] {
-      var topIter = buffer.slice()
-      var currValidBytes = initialOffset
-      var innerIter:Iterator[MessageAndOffset] = null
-      var lastMessageSize = 0L
-
-      def innerDone():Boolean = (innerIter==null || !innerIter.hasNext)
-
-      def makeNextOuter: MessageAndOffset = {
-        if (topIter.remaining < 4) {
-          return allDone()
-        }
-        val size = topIter.getInt()
-        lastMessageSize = size
-
-        trace("Remaining bytes in iterator = " + topIter.remaining)
-        trace("size of data = " + size)
-
-        if(size < 0 || topIter.remaining < size) {
-          if (currValidBytes == initialOffset || size < 0)
-            throw new InvalidMessageSizeException("invalid message size: " + size + " only received bytes: " +
-              topIter.remaining + " at " + currValidBytes + "( possible causes (1) a single message larger than " +
-              "the fetch size; (2) log corruption )")
-          return allDone()
-        }
-        val message = topIter.slice()
-        message.limit(size)
-        topIter.position(topIter.position + size)
-        val newMessage = new Message(message)
-        if(!newMessage.isValid)
-          throw new InvalidMessageException("message is invalid, compression codec: " + newMessage.compressionCodec
-            + " size: " + size + " curr offset: " + currValidBytes + " init offset: " + initialOffset)
-
-        if(isShallow){
-          currValidBytes += 4 + size
-          trace("shallow iterator currValidBytes = " + currValidBytes)
-          new MessageAndOffset(newMessage, currValidBytes)
-        }
-        else{
-          newMessage.compressionCodec match {
-            case NoCompressionCodec =>
-              debug("Message is uncompressed. Valid byte count = %d".format(currValidBytes))
-              innerIter = null
-              currValidBytes += 4 + size
-              trace("currValidBytes = " + currValidBytes)
-              new MessageAndOffset(newMessage, currValidBytes)
-            case _ =>
-              debug("Message is compressed. Valid byte count = %d".format(currValidBytes))
-              innerIter = CompressionUtils.decompress(newMessage).internalIterator()
-              if (!innerIter.hasNext) {
-                currValidBytes += 4 + lastMessageSize
-                innerIter = null
-              }
-              makeNext()
-          }
-        }
-      }
-
-      override def makeNext(): MessageAndOffset = {
-        if(isShallow){
-          makeNextOuter
-        }
-        else{
-          val isInnerDone = innerDone()
-          debug("makeNext() in internalIterator: innerDone = " + isInnerDone)
-          isInnerDone match {
-            case true => makeNextOuter
-            case false => {
-              val messageAndOffset = innerIter.next
-              if (!innerIter.hasNext)
-                currValidBytes += 4 + lastMessageSize
-              new MessageAndOffset(messageAndOffset.message, currValidBytes)
-            }
-          }
-        }
-      }
-    }
-  }
-
-  def sizeInBytes: Long = buffer.limit
-  
-  override def toString: String = {
-    val builder = new StringBuilder()
-    builder.append("ByteBufferMessageSet(")
-    for(message <- this) {
-      builder.append(message)
-      builder.append(", ")
-    }
-    builder.append(")")
-    builder.toString
-  }
-
-  override def equals(other: Any): Boolean = {
-    other match {
-      case that: ByteBufferMessageSet =>
-        (that canEqual this) && errorCode == that.errorCode && buffer.equals(that.buffer) && initialOffset == that.initialOffset
-      case _ => false
-    }
-  }
-
-  override def canEqual(other: Any): Boolean = other.isInstanceOf[ByteBufferMessageSet]
-
-  override def hashCode: Int = 31 + (17 * errorCode) + buffer.hashCode + initialOffset.hashCode
-}
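
A small producer-style construction (option 2 above) followed by the default deep iteration, which decompresses the wrapper message; the payload strings and object name are illustrative only.

import kafka.message.{ByteBufferMessageSet, GZIPCompressionCodec, Message}

object ByteBufferMessageSetSketch extends App {
  // Wrap two messages in a single GZIP-compressed set.
  val set = new ByteBufferMessageSet(GZIPCompressionCodec,
                                     new Message("hello".getBytes),
                                     new Message("world".getBytes))
  // The default iterator decompresses, so both messages come back out with their offsets.
  for (messageAndOffset <- set) {
    val payload = messageAndOffset.message.payload
    val bytes = new Array[Byte](payload.remaining)
    payload.get(bytes)
    println(new String(bytes) + " @ " + messageAndOffset.offset)
  }
}
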
diff --git a/trunk/core/src/main/scala/kafka/message/CompressionCodec.scala b/trunk/core/src/main/scala/kafka/message/CompressionCodec.scala
deleted file mode 100644
index b71d4e9..0000000
--- a/trunk/core/src/main/scala/kafka/message/CompressionCodec.scala
+++ /dev/null
@@ -1,39 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.message
-
-object CompressionCodec {
-  def getCompressionCodec(codec: Int): CompressionCodec = {
-    codec match {
-      case NoCompressionCodec.codec => NoCompressionCodec
-      case GZIPCompressionCodec.codec => GZIPCompressionCodec
-      case SnappyCompressionCodec.codec => SnappyCompressionCodec
-      case _ => throw new kafka.common.UnknownCodecException("%d is an unknown compression codec".format(codec))
-    }
-  }
-}
-
-sealed trait CompressionCodec { def codec: Int }
-
-case object DefaultCompressionCodec extends CompressionCodec { val codec = GZIPCompressionCodec.codec }
-
-case object GZIPCompressionCodec extends CompressionCodec { val codec = 1 }
-
-case object SnappyCompressionCodec extends CompressionCodec { val codec = 2 }
-
-case object NoCompressionCodec extends CompressionCodec { val codec = 0 }
diff --git a/trunk/core/src/main/scala/kafka/message/CompressionUtils.scala b/trunk/core/src/main/scala/kafka/message/CompressionUtils.scala
deleted file mode 100644
index 607ca77..0000000
--- a/trunk/core/src/main/scala/kafka/message/CompressionUtils.scala
+++ /dev/null
@@ -1,160 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import java.io.ByteArrayOutputStream
-import java.io.IOException
-import java.io.InputStream
-import java.nio.ByteBuffer
-import kafka.utils._
-
-abstract sealed class CompressionFacade(inputStream: InputStream, outputStream: ByteArrayOutputStream) {
-  def close() = {
-    if (inputStream != null) inputStream.close()
-    if (outputStream != null) outputStream.close()
-  }	
-  def read(a: Array[Byte]): Int
-  def write(a: Array[Byte])
-}
-
-class GZIPCompression(inputStream: InputStream, outputStream: ByteArrayOutputStream)  extends CompressionFacade(inputStream,outputStream) {
-  import java.util.zip.GZIPInputStream
-  import java.util.zip.GZIPOutputStream
-  val gzipIn:GZIPInputStream = if (inputStream == null) null else new  GZIPInputStream(inputStream)
-  val gzipOut:GZIPOutputStream = if (outputStream == null) null else new  GZIPOutputStream(outputStream)
-
-  override def close() {
-    if (gzipIn != null) gzipIn.close()
-    if (gzipOut != null) gzipOut.close()
-    super.close()	
-  }
-
-  override def write(a: Array[Byte]) = {
-    gzipOut.write(a)
-  }
-
-  override def read(a: Array[Byte]): Int = {
-    gzipIn.read(a)
-  }
-}
-
-class SnappyCompression(inputStream: InputStream,outputStream: ByteArrayOutputStream)  extends CompressionFacade(inputStream,outputStream) {
-  import org.xerial.snappy.SnappyInputStream
-  import org.xerial.snappy.SnappyOutputStream
-  
-  val snappyIn:SnappyInputStream = if (inputStream == null) null else new SnappyInputStream(inputStream)
-  val snappyOut:SnappyOutputStream = if (outputStream == null) null else new  SnappyOutputStream(outputStream)
-
-  override def close() = {
-    if (snappyIn != null) snappyIn.close()
-    if (snappyOut != null) snappyOut.close()
-    super.close()	
-  }
-
-  override def write(a: Array[Byte]) = {
-    snappyOut.write(a)
-  }
-
-  override def read(a: Array[Byte]): Int = {
-    snappyIn.read(a)	
-  }
-
-}
-
-object CompressionFactory {
-  def apply(compressionCodec: CompressionCodec, stream: ByteArrayOutputStream): CompressionFacade = compressionCodec match {
-    case GZIPCompressionCodec => new GZIPCompression(null,stream)
-    case SnappyCompressionCodec => new SnappyCompression(null,stream)
-    case _ =>
-      throw new kafka.common.UnknownCodecException("Unknown Codec: " + compressionCodec)
-  }
-  def apply(compressionCodec: CompressionCodec, stream: InputStream): CompressionFacade = compressionCodec match {
-    case GZIPCompressionCodec => new GZIPCompression(stream,null)
-    case SnappyCompressionCodec => new SnappyCompression(stream,null)
-    case _ =>
-      throw new kafka.common.UnknownCodecException("Unknown Codec: " + compressionCodec)
-  }
-}
-
-object CompressionUtils extends Logging{
-
-  //specify the codec which is the default when DefaultCompressionCodec is used
-  private var defaultCodec: CompressionCodec = GZIPCompressionCodec
-
-  def compress(messages: Iterable[Message], compressionCodec: CompressionCodec = DefaultCompressionCodec): Message = {
-    val outputStream: ByteArrayOutputStream = new ByteArrayOutputStream()
-
-    debug("Allocating message byte buffer of size = " + MessageSet.messageSetSize(messages))
-
-    val cf: CompressionFacade =
-      if (compressionCodec == DefaultCompressionCodec) CompressionFactory(defaultCodec, outputStream)
-      else CompressionFactory(compressionCodec, outputStream)
-
-    val messageByteBuffer = ByteBuffer.allocate(MessageSet.messageSetSize(messages))
-    messages.foreach(m => m.serializeTo(messageByteBuffer))
-    messageByteBuffer.rewind
-
-    try {
-      cf.write(messageByteBuffer.array)
-    } catch {
-      case e: IOException => error("Error while writing to the GZIP output stream", e)
-      cf.close()
-      throw e
-    } finally {
-      cf.close()
-    }
-
-    val oneCompressedMessage:Message = new Message(outputStream.toByteArray, compressionCodec)
-    oneCompressedMessage
-   }
-
-  def decompress(message: Message): ByteBufferMessageSet = {
-    val outputStream:ByteArrayOutputStream = new ByteArrayOutputStream
-    val inputStream:InputStream = new ByteBufferBackedInputStream(message.payload)
-
-    val intermediateBuffer = new Array[Byte](1024)
-
-    val cf: CompressionFacade =
-      if (message.compressionCodec == DefaultCompressionCodec) CompressionFactory(defaultCodec, inputStream)
-      else CompressionFactory(message.compressionCodec, inputStream)
-
-    try {
-      Stream.continually(cf.read(intermediateBuffer)).takeWhile(_ > 0).foreach { dataRead =>
-        outputStream.write(intermediateBuffer, 0, dataRead)
-      }
-    }catch {
-      case e: IOException => error("Error while reading from the GZIP input stream", e)
-      cf.close()
-      throw e
-    } finally {
-      cf.close()
-    }
-
-    val outputBuffer = ByteBuffer.allocate(outputStream.size)
-    outputBuffer.put(outputStream.toByteArray)
-    outputBuffer.rewind
-    new ByteBufferMessageSet(outputBuffer)
-  }
-}
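
A round-trip sketch of the two helpers above: compress packs a batch into one wrapper Message, and decompress unwraps it back into a ByteBufferMessageSet (the object name is hypothetical).

import kafka.message.{CompressionUtils, GZIPCompressionCodec, Message}

object CompressionRoundTripSketch extends App {
  val originals = Seq(new Message("a".getBytes), new Message("b".getBytes))
  val wrapper = CompressionUtils.compress(originals, GZIPCompressionCodec)
  val unwrapped = CompressionUtils.decompress(wrapper)
  println(unwrapped.iterator.size)  // 2: both original messages survive the round trip
}
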
diff --git a/trunk/core/src/main/scala/kafka/message/FileMessageSet.scala b/trunk/core/src/main/scala/kafka/message/FileMessageSet.scala
deleted file mode 100644
index 7c9b4f8..0000000
--- a/trunk/core/src/main/scala/kafka/message/FileMessageSet.scala
+++ /dev/null
@@ -1,280 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import java.io._
-import java.nio._
-import java.nio.channels._
-import java.util.concurrent.atomic._
-
-import kafka.utils._
-
-/**
- * An on-disk message set. The set can be opened either mutably or immutably. Mutation attempts
- * will fail on an immutable message set. An optional offset and limit can be applied to the message
- * set; they control the starting position in the file and the effective length from which messages
- * will be read
- */
-@nonthreadsafe
-class FileMessageSet private[kafka](private[message] val channel: FileChannel,
-                                    private[message] val offset: Long,
-                                    private[message] val limit: Long,
-                                    val mutable: Boolean,
-                                    val needRecover: AtomicBoolean) extends MessageSet with Logging {
-  
-  private val setSize = new AtomicLong()
-  private val setHighWaterMark = new AtomicLong()
-  
-  if(mutable) {
-    if(limit < Long.MaxValue || offset > 0)
-      throw new IllegalArgumentException("Attempt to open a mutable message set with a view or offset, which is not allowed.")
-
-    if (needRecover.get) {
-      // set the file position to the end of the file for appending messages
-      val startMs = System.currentTimeMillis
-      val truncated = recover()
-      info("Recovery succeeded in " + (System.currentTimeMillis - startMs) / 1000 +
-                " seconds. " + truncated + " bytes truncated.")
-    }
-    else {
-      setSize.set(channel.size())
-      setHighWaterMark.set(sizeInBytes)
-      channel.position(channel.size)
-    }
-  } else {
-    setSize.set(scala.math.min(channel.size(), limit) - offset)
-    setHighWaterMark.set(sizeInBytes)
-    debug("initializing high water mark in immutable mode: " + highWaterMark)
-  }
-  
-  /**
-   * Create a file message set with no limit or offset
-   */
-  def this(channel: FileChannel, mutable: Boolean) = 
-    this(channel, 0, Long.MaxValue, mutable, new AtomicBoolean(false))
-  
-  /**
-   * Create a file message set with no limit or offset
-   */
-  def this(file: File, mutable: Boolean) = 
-    this(Utils.openChannel(file, mutable), mutable)
-  
-  /**
-   * Create a file message set with no limit or offset
-   */
-  def this(channel: FileChannel, mutable: Boolean, needRecover: AtomicBoolean) = 
-    this(channel, 0, Long.MaxValue, mutable, needRecover)
-  
-  /**
-   * Create a file message set with no limit or offset
-   */
-  def this(file: File, mutable: Boolean, needRecover: AtomicBoolean) = 
-    this(Utils.openChannel(file, mutable), mutable, needRecover)
-  
-  
-  /**
-   * Return a message set which is a view into this set starting from the given offset and with the given size limit.
-   */
-  def read(readOffset: Long, size: Long): MessageSet = {
-    new FileMessageSet(channel, this.offset + readOffset, scala.math.min(this.offset + readOffset + size, highWaterMark),
-      false, new AtomicBoolean(false))
-  }
-  
-  /**
-   * Write some of this set to the given channel, return the amount written
-   */
-  def writeTo(destChannel: GatheringByteChannel, writeOffset: Long, size: Long): Long = 
-    channel.transferTo(offset + writeOffset, scala.math.min(size, sizeInBytes), destChannel)
-  
-  /**
-   * Get an iterator over the messages in the set
-   */
-  override def iterator: Iterator[MessageAndOffset] = {
-    new IteratorTemplate[MessageAndOffset] {
-      var location = offset
-      
-      override def makeNext(): MessageAndOffset = {
-        // read the size of the item
-        val sizeBuffer = ByteBuffer.allocate(4)
-        channel.read(sizeBuffer, location)
-        if(sizeBuffer.hasRemaining)
-          return allDone()
-        
-        sizeBuffer.rewind()
-        val size: Int = sizeBuffer.getInt()
-        if (size < Message.MinHeaderSize)
-          return allDone()
-        
-        // read the item itself
-        val buffer = ByteBuffer.allocate(size)
-        channel.read(buffer, location + 4)
-        if(buffer.hasRemaining)
-          return allDone()
-        buffer.rewind()
-        
-        // increment the location and return the item
-        location += size + 4
-        new MessageAndOffset(new Message(buffer), location)
-      }
-    }
-  }
-  
-  /**
-   * The number of bytes taken up by this file set
-   */
-  def sizeInBytes(): Long = setSize.get()
-  
-  /**
-    * The high water mark
-    */
-  def highWaterMark(): Long = setHighWaterMark.get()
-
-  def checkMutable(): Unit = {
-    if(!mutable)
-      throw new IllegalStateException("Attempt to invoke mutation on immutable message set.")
-  }
-  
-  /**
-   * Append this message to the message set
-   */
-  def append(messages: MessageSet): Unit = {
-    checkMutable()
-    var written = 0L
-    while(written < messages.sizeInBytes)
-      written += messages.writeTo(channel, 0, messages.sizeInBytes)
-    setSize.getAndAdd(written)
-  }
- 
-  /**
-   * Commit all written data to the physical disk
-   */
-  def flush() = {
-    checkMutable()
-    val startTime = SystemTime.milliseconds
-    channel.force(true)
-    val elapsedTime = SystemTime.milliseconds - startTime
-    LogFlushStats.recordFlushRequest(elapsedTime)
-    debug("flush time " + elapsedTime)
-    setHighWaterMark.set(sizeInBytes)
-    debug("flush high water mark:" + highWaterMark)
-  }
-  
-  /**
-   * Close this message set
-   */
-  def close() = {
-    if(mutable)
-      flush()
-    channel.close()
-  }
-  
-  /**
-   * Recover log up to the last complete entry. Truncate off any bytes from any incomplete messages written
-   */
-  def recover(): Long = {
-    checkMutable()
-    val len = channel.size
-    val buffer = ByteBuffer.allocate(4)
-    var validUpTo: Long = 0
-    var next = 0L
-    do {
-      next = validateMessage(channel, validUpTo, len, buffer)
-      if(next >= 0)
-        validUpTo = next
-    } while(next >= 0)
-    channel.truncate(validUpTo)
-    setSize.set(validUpTo)
-    setHighWaterMark.set(validUpTo)
-    info("recover high water mark:" + highWaterMark)
-    /* This should not be necessary, but fixes bug 6191269 on some OSs. */
-    channel.position(validUpTo)
-    needRecover.set(false)    
-    len - validUpTo
-  }
-  
-  /**
-   * Read and validate a single message, returning the offset of the next valid message,
-   * or -1 if the message is incomplete or fails validation
-   */
-  private def validateMessage(channel: FileChannel, start: Long, len: Long, buffer: ByteBuffer): Long = {
-    buffer.rewind()
-    var read = channel.read(buffer, start)
-    if(read < 4)
-      return -1
-    
-    // check that we have sufficient bytes left in the file
-    val size = buffer.getInt(0)
-    if (size < Message.MinHeaderSize)
-      return -1
-    
-    val next = start + 4 + size
-    if(next > len)
-      return -1
-    
-    // read the message
-    val messageBuffer = ByteBuffer.allocate(size)
-    var curr = start + 4
-    while(messageBuffer.hasRemaining) {
-      read = channel.read(messageBuffer, curr)
-      if(read < 0)
-        throw new IllegalStateException("File size changed during recovery!")
-      else
-        curr += read
-    }
-    messageBuffer.rewind()
-    val message = new Message(messageBuffer)
-    if(!message.isValid)
-      return -1
-    else
-      next
-  }
-  
-}
-
-trait LogFlushStatsMBean {
-  def getFlushesPerSecond: Double
-  def getAvgFlushMs: Double
-  def getTotalFlushMs: Long
-  def getMaxFlushMs: Double
-  def getNumFlushes: Long
-}
-
-@threadsafe
-class LogFlushStats extends LogFlushStatsMBean {
-  private val flushRequestStats = new SnapshotStats
-
-  def recordFlushRequest(requestMs: Long) = flushRequestStats.recordRequestMetric(requestMs)
-
-  def getFlushesPerSecond: Double = flushRequestStats.getRequestsPerSecond
-
-  def getAvgFlushMs: Double = flushRequestStats.getAvgMetric
-
-  def getTotalFlushMs: Long = flushRequestStats.getTotalMetric
-
-  def getMaxFlushMs: Double = flushRequestStats.getMaxMetric
-
-  def getNumFlushes: Long = flushRequestStats.getNumRequests
-}
-
-object LogFlushStats extends Logging {
-  private val LogFlushStatsMBeanName = "kafka:type=kafka.LogFlushStats"
-  private val stats = new LogFlushStats
-  Utils.registerMBean(stats, LogFlushStatsMBeanName)
-
-  def recordFlushRequest(requestMs: Long) = stats.recordFlushRequest(requestMs)
-}
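
A minimal sketch of appending to and flushing a mutable on-disk set through the API above; the file path and object name are placeholders.

import java.io.File
import kafka.message.{ByteBufferMessageSet, FileMessageSet, Message}

object FileMessageSetSketch extends App {
  val onDisk = new FileMessageSet(new File("/tmp/filemessageset-sketch.kafka"), true)  // mutable
  onDisk.append(new ByteBufferMessageSet(new Message("hi".getBytes)))
  onDisk.flush()  // force the bytes to disk and advance the high water mark
  println("size " + onDisk.sizeInBytes + " bytes, high water mark " + onDisk.highWaterMark)
  onDisk.close()
}
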
diff --git a/trunk/core/src/main/scala/kafka/message/InvalidMessageException.scala b/trunk/core/src/main/scala/kafka/message/InvalidMessageException.scala
deleted file mode 100644
index 9f0d6e9..0000000
--- a/trunk/core/src/main/scala/kafka/message/InvalidMessageException.scala
+++ /dev/null
@@ -1,25 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-/**
- * Indicates that a message failed its checksum and is corrupt
- */
-class InvalidMessageException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
diff --git a/trunk/core/src/main/scala/kafka/message/Message.scala b/trunk/core/src/main/scala/kafka/message/Message.scala
deleted file mode 100644
index 272a0b6..0000000
--- a/trunk/core/src/main/scala/kafka/message/Message.scala
+++ /dev/null
@@ -1,180 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import java.nio._
-import kafka.utils._
-import kafka.common.UnknownMagicByteException
-
-/**
- * Message byte offsets
- */
-object Message {
-  val MagicVersion1: Byte = 0
-  val MagicVersion2: Byte = 1
-  val CurrentMagicValue: Byte = 1
-  val MagicOffset = 0
-  val MagicLength = 1
-  val AttributeOffset = MagicOffset + MagicLength
-  val AttributeLength = 1
-  /**
-   * Specifies the mask for the compression code. 2 bits to hold the compression codec.
-   * 0 is reserved to indicate no compression
-   */
-  val CompressionCodeMask: Int = 0x03
-
-
-  val NoCompression:Int = 0
-
-  /**
-   * Computes the offset of the CRC field based on the magic byte
-   * @param magic Specifies the magic byte value. Possible values are 0 and 1
-   *              0 for the header without an attributes byte
-   *              1 for the header with an attributes byte
-   */
-  def crcOffset(magic: Byte): Int = magic match {
-    case MagicVersion1 => MagicOffset + MagicLength
-    case MagicVersion2 => AttributeOffset + AttributeLength
-    case _ => throw new UnknownMagicByteException("Magic byte value of %d is unknown".format(magic))
-  }
-  
-  val CrcLength = 4
-
-  /**
-   * Computes the offset to the message payload based on the magic byte
-   * @param magic Specifies the magic byte value. Possible values are 0 and 1
-   *              0 for the header without an attributes byte
-   *              1 for the header with an attributes byte
-   */
-  def payloadOffset(magic: Byte): Int = crcOffset(magic) + CrcLength
-
-  /**
-   * Computes the size of the message header based on the magic byte
-   * @param magic Specifies the magic byte value. Possible values are 0 and 1
-   *              0 for the header without an attributes byte
-   *              1 for the header with an attributes byte
-   */
-  def headerSize(magic: Byte): Int = payloadOffset(magic)
-
-  /**
-   * Size of the header for magic byte 0. This is the minimum size of any message header
-   */
-  val MinHeaderSize = headerSize(0);
-}
-
-/**
- * A message. The format of an N byte message is the following:
- *
- * If magic byte is 0
- *
- * 1. 1 byte "magic" identifier to allow format changes
- *
- * 2. 4 byte CRC32 of the payload
- *
- * 3. N - 5 byte payload
- *
- * If magic byte is 1
- *
- * 1. 1 byte "magic" identifier to allow format changes
- *
- * 2. 1 byte "attributes" identifier to allow annotations on the message independent of the version (e.g. compression enabled, type of codec used)
- *
- * 3. 4 byte CRC32 of the payload
- *
- * 4. N - 6 byte payload
- * 
- */
-class Message(val buffer: ByteBuffer) {
-  
-  import kafka.message.Message._
-    
-  
-  private def this(checksum: Long, bytes: Array[Byte], compressionCodec: CompressionCodec) = {
-    this(ByteBuffer.allocate(Message.headerSize(Message.CurrentMagicValue) + bytes.length))
-    buffer.put(CurrentMagicValue)
-    var attributes:Byte = 0
-    if (compressionCodec.codec > 0) {
-      attributes =  (attributes | (Message.CompressionCodeMask & compressionCodec.codec)).toByte
-    }
-    buffer.put(attributes)
-    Utils.putUnsignedInt(buffer, checksum)
-    buffer.put(bytes)
-    buffer.rewind()
-  }
-
-  def this(checksum:Long, bytes:Array[Byte]) = this(checksum, bytes, NoCompressionCodec)
-  
-  def this(bytes: Array[Byte], compressionCodec: CompressionCodec) = {
-    //Note: we're not crc-ing the attributes header, so we're susceptible to bit-flipping there
-    this(Utils.crc32(bytes), bytes, compressionCodec)
-  }
-
-  def this(bytes: Array[Byte]) = this(bytes, NoCompressionCodec)
-  
-  def size: Int = buffer.limit
-  
-  def payloadSize: Int = size - headerSize(magic)
-  
-  def magic: Byte = buffer.get(MagicOffset)
-  
-  def attributes: Byte = buffer.get(AttributeOffset)
-  
-  def compressionCodec:CompressionCodec = {
-    magic match {
-      case 0 => NoCompressionCodec
-      case 1 => CompressionCodec.getCompressionCodec(buffer.get(AttributeOffset) & CompressionCodeMask)
-      case _ => throw new RuntimeException("Invalid magic byte " + magic)
-    }
-
-  }
-
-  def checksum: Long = Utils.getUnsignedInt(buffer, crcOffset(magic))
-  
-  def payload: ByteBuffer = {
-    var payload = buffer.duplicate
-    payload.position(headerSize(magic))
-    payload = payload.slice()
-    payload.limit(payloadSize)
-    payload.rewind()
-    payload
-  }
-  
-  def isValid: Boolean =
-    checksum == Utils.crc32(buffer.array, buffer.position + buffer.arrayOffset + payloadOffset(magic), payloadSize)
-
-  def serializedSize: Int = 4 /* int size*/ + buffer.limit
-   
-  def serializeTo(serBuffer:ByteBuffer) = {
-    serBuffer.putInt(buffer.limit)
-    serBuffer.put(buffer.duplicate)
-  }
-
-  override def toString(): String = 
-    "message(magic = %d, attributes = %d, crc = %d, payload = %s)".format(magic, attributes, checksum, payload)
-  
-  override def equals(any: Any): Boolean = {
-    any match {
-      case that: Message => size == that.size && attributes == that.attributes && checksum == that.checksum &&
-        payload == that.payload && magic == that.magic
-      case _ => false
-    }
-  }
-  
-  override def hashCode(): Int = buffer.hashCode
-  
-}
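
A quick sketch of constructing a magic-1 message and reading back the header fields described above (the object name is made up).

import kafka.message.{Message, NoCompressionCodec}

object MessageSketch extends App {
  val m = new Message("payload".getBytes)            // attributes byte + CRC32 + payload
  println(m.magic)                                   // 1, the current magic value
  println(m.compressionCodec == NoCompressionCodec)  // true: no codec bits set in attributes
  println(m.payloadSize)                             // 7, the length of "payload"
  println(m.isValid)                                 // true: stored CRC matches the payload CRC
}
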
diff --git a/trunk/core/src/main/scala/kafka/message/MessageAndMetadata.scala b/trunk/core/src/main/scala/kafka/message/MessageAndMetadata.scala
deleted file mode 100644
index 710308e..0000000
--- a/trunk/core/src/main/scala/kafka/message/MessageAndMetadata.scala
+++ /dev/null
@@ -1,21 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-case class MessageAndMetadata[T](message: T, topic: String = "")
-
diff --git a/trunk/core/src/main/scala/kafka/message/MessageAndOffset.scala b/trunk/core/src/main/scala/kafka/message/MessageAndOffset.scala
deleted file mode 100644
index d769fc6..0000000
--- a/trunk/core/src/main/scala/kafka/message/MessageAndOffset.scala
+++ /dev/null
@@ -1,22 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-
-case class MessageAndOffset(message: Message, offset: Long)
-
diff --git a/trunk/core/src/main/scala/kafka/message/MessageLengthException.scala b/trunk/core/src/main/scala/kafka/message/MessageLengthException.scala
deleted file mode 100644
index 752d1eb..0000000
--- a/trunk/core/src/main/scala/kafka/message/MessageLengthException.scala
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-/**
- * Indicates the presence of a message that exceeds the maximum acceptable
- * length (whatever that happens to be)
- */
-class MessageLengthException(message: String) extends RuntimeException(message)
diff --git a/trunk/core/src/main/scala/kafka/message/MessageSet.scala b/trunk/core/src/main/scala/kafka/message/MessageSet.scala
deleted file mode 100644
index bf45d91..0000000
--- a/trunk/core/src/main/scala/kafka/message/MessageSet.scala
+++ /dev/null
@@ -1,114 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import java.nio._
-import java.nio.channels._
-
-/**
- * Message set helper functions
- */
-object MessageSet {
-  
-  val LogOverhead = 4
-  val Empty: MessageSet = new ByteBufferMessageSet(ByteBuffer.allocate(0))
-  
-  /**
-   * The size of a message set containing the given messages
-   */
-  def messageSetSize(messages: Iterable[Message]): Int =
-    messages.foldLeft(0)(_ + entrySize(_))
-
-  /**
-   * The size of a list of messages
-   */
-  def messageSetSize(messages: java.util.List[Message]): Int = {
-    var size = 0
-    val iter = messages.iterator
-    while(iter.hasNext) {
-      val message = iter.next.asInstanceOf[Message]
-      size += entrySize(message)
-    }
-    size
-  }
-  
-  /**
-   * The size of a size-delimited entry in a message set
-   */
-  def entrySize(message: Message): Int = LogOverhead + message.size
-
-  def createByteBuffer(compressionCodec: CompressionCodec, messages: Message*): ByteBuffer =
-    compressionCodec match {
-      case NoCompressionCodec =>
-        val buffer = ByteBuffer.allocate(MessageSet.messageSetSize(messages))
-        for (message <- messages) {
-          message.serializeTo(buffer)
-        }
-        buffer.rewind
-        buffer
-      case _ =>
-        messages.size match {
-          case 0 =>
-            val buffer = ByteBuffer.allocate(MessageSet.messageSetSize(messages))
-            buffer.rewind
-            buffer
-          case _ =>
-            val message = CompressionUtils.compress(messages, compressionCodec)
-            val buffer = ByteBuffer.allocate(message.serializedSize)
-            message.serializeTo(buffer)
-            buffer.rewind
-            buffer
-        }
-    }
-}
-
-/**
- * A set of messages. A message set has a fixed serialized form, though the container
- * for the bytes could be either in-memory or on disk. The format of each message is
- * as follows:
- * 4 byte size containing an integer N
- * N message bytes as described in the message class
- */
-abstract class MessageSet extends Iterable[MessageAndOffset] {
-
-  /** Write the messages in this set to the given channel starting at the given offset byte. 
-    * Less than the complete amount may be written, but no more than maxSize can be. The number
-    * of bytes written is returned */
-  def writeTo(channel: GatheringByteChannel, offset: Long, maxSize: Long): Long
-  
-  /**
-   * Provides an iterator over the messages in this set
-   */
-  def iterator: Iterator[MessageAndOffset]
-  
-  /**
-   * Gives the total size of this message set in bytes
-   */
-  def sizeInBytes: Long
-  
-  /**
-   * Validate the checksum of all the messages in the set. Throws an InvalidMessageException if the checksum doesn't
-   * match the payload for any message.
-   */
-  def validate(): Unit = {
-    for(messageAndOffset <- this)
-      if(!messageAndOffset.message.isValid)
-        throw new InvalidMessageException
-  }
-  
-}
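
A sketch of the helper functions above: messageSetSize accounts for the 4-byte size prefix (LogOverhead) per message, and createByteBuffer produces a buffer that a ByteBufferMessageSet can iterate (the object name is made up).

import kafka.message.{ByteBufferMessageSet, Message, MessageSet, NoCompressionCodec}

object MessageSetHelpersSketch extends App {
  val msgs = Seq(new Message("abc".getBytes), new Message("de".getBytes))
  println(MessageSet.messageSetSize(msgs))  // sum of (4 + message.size) over both messages
  val buffer = MessageSet.createByteBuffer(NoCompressionCodec, msgs: _*)
  println(new ByteBufferMessageSet(buffer).iterator.size)  // 2
}
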
diff --git a/trunk/core/src/main/scala/kafka/message/package.html b/trunk/core/src/main/scala/kafka/message/package.html
deleted file mode 100644
index 19785ec3..0000000
--- a/trunk/core/src/main/scala/kafka/message/package.html
+++ /dev/null
@@ -1 +0,0 @@
-Messages and everything related to them.
\ No newline at end of file
diff --git a/trunk/core/src/main/scala/kafka/network/BoundedByteBufferReceive.scala b/trunk/core/src/main/scala/kafka/network/BoundedByteBufferReceive.scala
deleted file mode 100644
index 4b1ab56..0000000
--- a/trunk/core/src/main/scala/kafka/network/BoundedByteBufferReceive.scala
+++ /dev/null
@@ -1,92 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-import java.nio._
-import java.nio.channels._
-import kafka.utils._
-
-/**
- * Represents a communication between the client and server
- * 
- */
-@nonthreadsafe
-private[kafka] class BoundedByteBufferReceive(val maxSize: Int) extends Receive {
-  
-  private val sizeBuffer: ByteBuffer = ByteBuffer.allocate(4)
-  private var contentBuffer: ByteBuffer = null
-  
-  def this() = this(Int.MaxValue)
-  
-  var complete: Boolean = false
-  
-  /**
-   * Get the content buffer for this transmission
-   */
-  def buffer: ByteBuffer = {
-    expectComplete()
-    contentBuffer
-  }
-  
-  /**
-   * Read the bytes in this response from the given channel
-   */
-  def readFrom(channel: ReadableByteChannel): Int = {
-    expectIncomplete()
-    var read = 0
-    // have we read the request size yet?
-    if(sizeBuffer.remaining > 0)
-      read += Utils.read(channel, sizeBuffer)
-    // have we allocated the request buffer yet?
-    if(contentBuffer == null && !sizeBuffer.hasRemaining) {
-      sizeBuffer.rewind()
-      val size = sizeBuffer.getInt()
-      if(size <= 0)
-        throw new InvalidRequestException("%d is not a valid request size.".format(size))
-      if(size > maxSize)
-        throw new InvalidRequestException("Request of length %d is not valid, it is larger than the maximum size of %d bytes.".format(size, maxSize))
-      contentBuffer = byteBufferAllocate(size)
-    }
-    // if we have a buffer read some stuff into it
-    if(contentBuffer != null) {
-      read = Utils.read(channel, contentBuffer)
-      // did we get everything?
-      if(!contentBuffer.hasRemaining) {
-        contentBuffer.rewind()
-        complete = true
-      }
-    }
-    read
-  }
-
-  private def byteBufferAllocate(size: Int): ByteBuffer = {
-    var buffer: ByteBuffer = null
-    try {
-      buffer = ByteBuffer.allocate(size)
-    }
-    catch {
-      case e: OutOfMemoryError => {
-        logger.error("OOME with size " + size, e)
-        throw e
-      }
-      case e2 =>
-        throw e2
-    }
-    buffer
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/network/BoundedByteBufferSend.scala b/trunk/core/src/main/scala/kafka/network/BoundedByteBufferSend.scala
deleted file mode 100644
index 5e1eb5f..0000000
--- a/trunk/core/src/main/scala/kafka/network/BoundedByteBufferSend.scala
+++ /dev/null
@@ -1,56 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-import java.nio._
-import java.nio.channels._
-import kafka.utils._
-
-@nonthreadsafe
-private[kafka] class BoundedByteBufferSend(val buffer: ByteBuffer) extends Send {
-  
-  private var sizeBuffer = ByteBuffer.allocate(4)
-
-  // Avoid possibility of overflow for 2GB-4 byte buffer
-  if(buffer.remaining > Int.MaxValue - sizeBuffer.limit)
-    throw new IllegalArgumentException("Attempt to create a bounded buffer of " + buffer.remaining + " bytes, but the maximum " +
-                                       "allowable size for a bounded buffer is " + (Int.MaxValue - sizeBuffer.limit) + ".")    
-  sizeBuffer.putInt(buffer.limit)
-  sizeBuffer.rewind()
-
-  var complete: Boolean = false
-
-  def this(size: Int) = this(ByteBuffer.allocate(size))
-  
-  def this(request: Request) = {
-    this(request.sizeInBytes + 2)
-    buffer.putShort(request.id)
-    request.writeTo(buffer)
-    buffer.rewind()
-  }
-  
-  def writeTo(channel: GatheringByteChannel): Int = {
-    expectIncomplete()
-    var written = channel.write(Array(sizeBuffer, buffer))
-    // if we are done, mark it off
-    if(!buffer.hasRemaining)
-      complete = true    
-    written.asInstanceOf[Int]
-  }
-    
-}
diff --git a/trunk/core/src/main/scala/kafka/network/ByteBufferSend.scala b/trunk/core/src/main/scala/kafka/network/ByteBufferSend.scala
deleted file mode 100644
index af30042..0000000
--- a/trunk/core/src/main/scala/kafka/network/ByteBufferSend.scala
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-import java.nio._
-import java.nio.channels._
-import kafka.utils._
-
-@nonthreadsafe
-private[kafka] class ByteBufferSend(val buffer: ByteBuffer) extends Send {
-  
-  var complete: Boolean = false
-
-  def this(size: Int) = this(ByteBuffer.allocate(size))
-  
-  def writeTo(channel: GatheringByteChannel): Int = {
-    expectIncomplete()
-    var written = 0
-    written += channel.write(buffer)
-    if(!buffer.hasRemaining)
-      complete = true
-    written
-  }
-    
-}
diff --git a/trunk/core/src/main/scala/kafka/network/ConnectionConfig.scala b/trunk/core/src/main/scala/kafka/network/ConnectionConfig.scala
deleted file mode 100644
index cde7c09..0000000
--- a/trunk/core/src/main/scala/kafka/network/ConnectionConfig.scala
+++ /dev/null
@@ -1,27 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-trait ConnectionConfig {
-  val host: String
-  val port: Int
-  val sendBufferSize: Int = -1
-  val receiveBufferSize: Int = -1
-  val tcpNoDelay = true
-  val keepAlive = false  
-}
diff --git a/trunk/core/src/main/scala/kafka/network/Handler.scala b/trunk/core/src/main/scala/kafka/network/Handler.scala
deleted file mode 100644
index a030033..0000000
--- a/trunk/core/src/main/scala/kafka/network/Handler.scala
+++ /dev/null
@@ -1,33 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-private[kafka] object Handler {
-  
-  /**
-   * A request handler is a function that turns an incoming 
-   * transmission into an outgoing transmission
-   */
-  type Handler = Receive => Option[Send]
-  
-  /**
-   * A handler mapping finds the right Handler function for a given request
-   */
-  type HandlerMapping = (Short, Receive) => Handler
-
-}
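
Handler and HandlerMapping are plain function aliases, so a mapping is just a two-argument function returning a one-argument function. A minimal echo sketch (illustrative only; it assumes placement inside the kafka.network package so the private[kafka] types are visible):

    // Hypothetical mapping: ignore the request type id and send the remaining request bytes back.
    val echoMapping: Handler.HandlerMapping =
      (requestTypeId: Short, receive: Receive) =>
        (request: Receive) => Some(new BoundedByteBufferSend(request.buffer.slice()))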
diff --git a/trunk/core/src/main/scala/kafka/network/InvalidRequestException.scala b/trunk/core/src/main/scala/kafka/network/InvalidRequestException.scala
deleted file mode 100644
index 5197913..0000000
--- a/trunk/core/src/main/scala/kafka/network/InvalidRequestException.scala
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-class InvalidRequestException(val message: String) extends RuntimeException(message) {
-  
-  def this() = this("")
-  
-}
diff --git a/trunk/core/src/main/scala/kafka/network/Request.scala b/trunk/core/src/main/scala/kafka/network/Request.scala
deleted file mode 100644
index d403d35..0000000
--- a/trunk/core/src/main/scala/kafka/network/Request.scala
+++ /dev/null
@@ -1,28 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-import java.nio._
-
-private[kafka] abstract class Request(val id: Short) {
-
-  def sizeInBytes: Int
-  
-  def writeTo(buffer: ByteBuffer): Unit
-  
-}
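
Concrete requests only have to report their encoded size and serialize themselves into a buffer. A hedged sketch of a trivial subclass (the id 100 and the wire format are made up for illustration; sizeInBytes excludes the id short, matching how BoundedByteBufferSend sizes its buffer):

    import java.nio.ByteBuffer

    // Hypothetical request carrying one length-prefixed UTF-8 string.
    class EchoRequest(text: String) extends Request(100.toShort) {
      private val bytes = text.getBytes("UTF-8")

      def sizeInBytes: Int = 4 + bytes.length

      def writeTo(buffer: ByteBuffer): Unit = {
        buffer.putInt(bytes.length)
        buffer.put(bytes)
      }
    }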
diff --git a/trunk/core/src/main/scala/kafka/network/SocketServer.scala b/trunk/core/src/main/scala/kafka/network/SocketServer.scala
deleted file mode 100644
index 1bc6bc1..0000000
--- a/trunk/core/src/main/scala/kafka/network/SocketServer.scala
+++ /dev/null
@@ -1,354 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-import java.util.concurrent._
-import java.util.concurrent.atomic._
-import java.net._
-import java.io._
-import java.nio.channels._
-
-import kafka.utils._
-
-import org.apache.log4j.Logger
-import kafka.api.RequestKeys
-
-/**
- * An NIO socket server. The thread model is
- *   1 Acceptor thread that handles new connections
- *   N Processor threads that each have their own selectors and handle all requests from their connections synchronously
- */
-class SocketServer(val port: Int,
-                   val numProcessorThreads: Int,
-                   monitoringPeriodSecs: Int,
-                   private val handlerFactory: Handler.HandlerMapping,
-                   val sendBufferSize: Int,
-                   val receiveBufferSize: Int,
-                   val maxRequestSize: Int = Int.MaxValue) {
-
-  private val time = SystemTime
-  private val processors = new Array[Processor](numProcessorThreads)
-  private var acceptor: Acceptor = new Acceptor(port, processors, sendBufferSize, receiveBufferSize)
-  val stats: SocketServerStats = new SocketServerStats(1000L * 1000L * 1000L * monitoringPeriodSecs)
-
-  /**
-   * Start the socket server
-   */
-  def startup() {
-    for(i <- 0 until numProcessorThreads) {
-      processors(i) = new Processor(handlerFactory, time, stats, maxRequestSize)
-      Utils.newThread("kafka-processor-" + i, processors(i), false).start()
-    }
-    Utils.newThread("kafka-acceptor", acceptor, false).start()
-    acceptor.awaitStartup
-  }
-
-  /**
-   * Shutdown the socket server
-   */
-  def shutdown() = {
-    acceptor.shutdown
-    for(processor <- processors)
-      processor.shutdown
-  }
-
-}
-
-/**
- * A base class with some helper variables and methods
- */
-private[kafka] abstract class AbstractServerThread extends Runnable {
-
-  protected val selector = Selector.open();
-  protected val logger = Logger.getLogger(getClass())
-  private val startupLatch = new CountDownLatch(1)
-  private val shutdownLatch = new CountDownLatch(1)
-  private val alive = new AtomicBoolean(false)
-
-  /**
-   * Initiates a graceful shutdown by signaling to stop and waiting for the shutdown to complete
-   */
-  def shutdown(): Unit = {
-    alive.set(false)
-    selector.wakeup
-    shutdownLatch.await
-  }
-
-  /**
-   * Wait for the thread to completely start up
-   */
-  def awaitStartup(): Unit = startupLatch.await
-
-  /**
-   * Record that the thread startup is complete
-   */
-  protected def startupComplete() = {
-    alive.set(true)
-    startupLatch.countDown
-  }
-
-  /**
-   * Record that the thread shutdown is complete
-   */
-  protected def shutdownComplete() = shutdownLatch.countDown
-
-  /**
-   * Is the server still running?
-   */
-  protected def isRunning = alive.get
-
-}
-
-/**
- * Thread that accepts and configures new connections. Only one of these is needed
- */
-private[kafka] class Acceptor(val port: Int, private val processors: Array[Processor], val sendBufferSize: Int, val receiveBufferSize: Int) extends AbstractServerThread {
-
-  /**
-   * Accept loop that checks for new connection attempts
-   */
-  def run() {
-    val serverChannel = ServerSocketChannel.open()
-    serverChannel.configureBlocking(false)
-    serverChannel.socket.bind(new InetSocketAddress(port))
-    serverChannel.register(selector, SelectionKey.OP_ACCEPT);
-    logger.info("Awaiting connections on port " + port)
-    startupComplete()
-
-    var currentProcessor = 0
-    while(isRunning) {
-      val ready = selector.select(500)
-      if(ready > 0) {
-        val keys = selector.selectedKeys()
-        val iter = keys.iterator()
-        while(iter.hasNext && isRunning) {
-          var key: SelectionKey = null
-          try {
-            key = iter.next
-            iter.remove()
-
-            if(key.isAcceptable)
-              accept(key, processors(currentProcessor))
-            else
-              throw new IllegalStateException("Unrecognized key state for acceptor thread.")
-
-            // round robin to the next processor thread
-            currentProcessor = (currentProcessor + 1) % processors.length
-          } catch {
-            case e: Throwable => logger.error("Error in acceptor", e)
-          }
-        }
-      }
-    }
-    logger.debug("Closing server socket and selector.")
-    Utils.swallow(logger.error, serverChannel.close())
-    Utils.swallow(logger.error, selector.close())
-    shutdownComplete()
-  }
-
-  /*
-   * Accept a new connection
-   */
-  def accept(key: SelectionKey, processor: Processor) {
-    val serverSocketChannel = key.channel().asInstanceOf[ServerSocketChannel]
-    serverSocketChannel.socket().setReceiveBufferSize(receiveBufferSize)
-    
-    val socketChannel = serverSocketChannel.accept()
-    socketChannel.configureBlocking(false)
-    socketChannel.socket().setTcpNoDelay(true)
-    socketChannel.socket().setSendBufferSize(sendBufferSize)
-
-    if (logger.isDebugEnabled()) {
-      logger.debug("sendBufferSize: [" + socketChannel.socket().getSendBufferSize() 
-          + "] receiveBufferSize: [" + socketChannel.socket().getReceiveBufferSize() + "]")
-    }
-
-    processor.accept(socketChannel)
-  }
-}
-
-/**
- * Thread that processes all requests from a single connection. There are N of these running in parallel,
- * each of which has its own selector
- */
-private[kafka] class Processor(val handlerMapping: Handler.HandlerMapping,
-                               val time: Time,
-                               val stats: SocketServerStats,
-                               val maxRequestSize: Int) extends AbstractServerThread {
-
-  private val newConnections = new ConcurrentLinkedQueue[SocketChannel]();
-  private val requestLogger = Logger.getLogger("kafka.request.logger")
-
-  override def run() {
-    startupComplete()
-    while(isRunning) {
-      // setup any new connections that have been queued up
-      configureNewConnections()
-
-      val ready = selector.select(500)
-      if(ready > 0) {
-        val keys = selector.selectedKeys()
-        val iter = keys.iterator()
-        while(iter.hasNext && isRunning) {
-          var key: SelectionKey = null
-          try {
-            key = iter.next
-            iter.remove()
-
-            if(key.isReadable)
-              read(key)
-            else if(key.isWritable)
-              write(key)
-            else if(!key.isValid)
-              close(key)
-            else
-              throw new IllegalStateException("Unrecognized key state for processor thread.")
-          } catch {
-            case e: EOFException => {
-              logger.info("Closing socket connection to %s.".format(channelFor(key).socket.getInetAddress))
-              close(key)
-            }
-            case e: InvalidRequestException => {
-              logger.info("Closing socket connection to %s due to invalid request: %s".format(channelFor(key).socket.getInetAddress, e.getMessage))
-              close(key)
-            } case e: Throwable => {
-              logger.error("Closing socket for " + channelFor(key).socket.getInetAddress + " because of error", e)
-              close(key)
-            }
-          }
-        }
-      }
-    }
-    logger.debug("Closing selector.")
-    Utils.swallow(logger.info, selector.close())
-    shutdownComplete()
-  }
-
-  private def close(key: SelectionKey) {
-    val channel = key.channel.asInstanceOf[SocketChannel]
-    if(logger.isDebugEnabled)
-      logger.debug("Closing connection from " + channel.socket.getRemoteSocketAddress())
-    Utils.swallow(logger.info, channel.socket().close())
-    Utils.swallow(logger.info, channel.close())
-    key.attach(null)
-    Utils.swallow(logger.info, key.cancel())
-  }
-
-  /**
-   * Queue up a new connection for reading
-   */
-  def accept(socketChannel: SocketChannel) {
-    newConnections.add(socketChannel)
-    selector.wakeup()
-  }
-
-  /**
-   * Register any new connections that have been queued up
-   */
-  private def configureNewConnections() {
-    while(newConnections.size() > 0) {
-      val channel = newConnections.poll()
-      if(logger.isDebugEnabled())
-        logger.debug("Listening to new connection from " + channel.socket.getRemoteSocketAddress)
-      channel.register(selector, SelectionKey.OP_READ)
-    }
-  }
-
-  /**
-   * Handle a completed request producing an optional response
-   */
-  private def handle(key: SelectionKey, request: Receive): Option[Send] = {
-    val requestTypeId = request.buffer.getShort()
-    if(requestLogger.isTraceEnabled) {
-      requestTypeId match {
-        case RequestKeys.Produce =>
-          requestLogger.trace("Handling produce request from " + channelFor(key).socket.getRemoteSocketAddress())
-        case RequestKeys.Fetch =>
-          requestLogger.trace("Handling fetch request from " + channelFor(key).socket.getRemoteSocketAddress())
-        case RequestKeys.MultiFetch =>
-          requestLogger.trace("Handling multi-fetch request from " + channelFor(key).socket.getRemoteSocketAddress())
-        case RequestKeys.MultiProduce =>
-          requestLogger.trace("Handling multi-produce request from " + channelFor(key).socket.getRemoteSocketAddress())
-        case RequestKeys.Offsets =>
-          requestLogger.trace("Handling offset request from " + channelFor(key).socket.getRemoteSocketAddress())
-        case _ => throw new InvalidRequestException("No mapping found for handler id " + requestTypeId)
-      }
-    }
-    val handler = handlerMapping(requestTypeId, request)
-    if(handler == null)
-      throw new InvalidRequestException("No handler found for request")
-    val start = time.nanoseconds
-    val maybeSend = handler(request)
-    stats.recordRequest(requestTypeId, time.nanoseconds - start)
-    maybeSend
-  }
-
-  /*
-   * Process reads from ready sockets
-   */
-  def read(key: SelectionKey) {
-    val socketChannel = channelFor(key)
-    var request = key.attachment.asInstanceOf[Receive]
-    if(key.attachment == null) {
-      request = new BoundedByteBufferReceive(maxRequestSize)
-      key.attach(request)
-    }
-    val read = request.readFrom(socketChannel)
-    stats.recordBytesRead(read)
-    if(logger.isTraceEnabled)
-      logger.trace(read + " bytes read from " + socketChannel.socket.getRemoteSocketAddress())
-    if(read < 0) {
-      close(key)
-      return
-    } else if(request.complete) {
-      val maybeResponse = handle(key, request)
-      key.attach(null)
-      // if there is a response, send it, otherwise do nothing
-      if(maybeResponse.isDefined) {
-        key.attach(maybeResponse.getOrElse(None))
-        key.interestOps(SelectionKey.OP_WRITE)
-      }
-    } else {
-      // more reading to be done
-      key.interestOps(SelectionKey.OP_READ)
-      selector.wakeup()
-    }
-  }
-
-  /*
-   * Process writes to ready sockets
-   */
-  def write(key: SelectionKey) {
-    val response = key.attachment().asInstanceOf[Send]
-    val socketChannel = channelFor(key)
-    val written = response.writeTo(socketChannel)
-    stats.recordBytesWritten(written)
-    if(logger.isTraceEnabled)
-      logger.trace(written + " bytes written to " + socketChannel.socket.getRemoteSocketAddress())
-    if(response.complete) {
-      key.attach(null)
-      key.interestOps(SelectionKey.OP_READ)
-    } else {
-      key.interestOps(SelectionKey.OP_WRITE)
-      selector.wakeup()
-    }
-  }
-
-  private def channelFor(key: SelectionKey) = key.channel().asInstanceOf[SocketChannel]
-
-}
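
Wiring the server is a matter of handing a HandlerMapping to the constructor above; the remaining arguments are plain sizing knobs. A rough sketch with illustrative values only:

    // Hypothetical wiring; `mapping` can be any Handler.HandlerMapping, e.g. the echo sketch earlier.
    def startServer(mapping: Handler.HandlerMapping): SocketServer = {
      val server = new SocketServer(port = 9092,
                                    numProcessorThreads = 3,
                                    monitoringPeriodSecs = 60,
                                    handlerFactory = mapping,
                                    sendBufferSize = 100 * 1024,
                                    receiveBufferSize = 100 * 1024)
      server.startup()
      server
    }

startup() blocks until the acceptor thread is ready; shutdown() stops the acceptor and then each processor in turn.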
diff --git a/trunk/core/src/main/scala/kafka/network/SocketServerStats.scala b/trunk/core/src/main/scala/kafka/network/SocketServerStats.scala
deleted file mode 100644
index 2ec1fa9..0000000
--- a/trunk/core/src/main/scala/kafka/network/SocketServerStats.scala
+++ /dev/null
@@ -1,90 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-import kafka.utils._
-import kafka.api.RequestKeys
-
-trait SocketServerStatsMBean {
-  def getProduceRequestsPerSecond: Double
-  def getFetchRequestsPerSecond: Double
-  def getAvgProduceRequestMs: Double
-  def getMaxProduceRequestMs: Double
-  def getAvgFetchRequestMs: Double
-  def getMaxFetchRequestMs: Double
-  def getBytesReadPerSecond: Double
-  def getBytesWrittenPerSecond: Double
-  def getNumFetchRequests: Long
-  def getNumProduceRequests: Long
-  def getTotalBytesRead: Long
-  def getTotalBytesWritten: Long
-  def getTotalFetchRequestMs: Long
-  def getTotalProduceRequestMs: Long
-}
-
-@threadsafe
-class SocketServerStats(val monitorDurationNs: Long, val time: Time) extends SocketServerStatsMBean {
-  
-  def this(monitorDurationNs: Long) = this(monitorDurationNs, SystemTime)
-  val produceTimeStats = new SnapshotStats(monitorDurationNs)
-  val fetchTimeStats = new SnapshotStats(monitorDurationNs)
-  val produceBytesStats = new SnapshotStats(monitorDurationNs)
-  val fetchBytesStats = new SnapshotStats(monitorDurationNs)
-
-  def recordRequest(requestTypeId: Short, durationNs: Long) {
-    requestTypeId match {
-      case r if r == RequestKeys.Produce || r == RequestKeys.MultiProduce =>
-        produceTimeStats.recordRequestMetric(durationNs)
-      case r if r == RequestKeys.Fetch || r == RequestKeys.MultiFetch =>
-        fetchTimeStats.recordRequestMetric(durationNs)
-      case _ => /* not collecting; let go */
-    }
-  }
-  
-  def recordBytesWritten(bytes: Int): Unit = fetchBytesStats.recordRequestMetric(bytes)
-
-  def recordBytesRead(bytes: Int): Unit = produceBytesStats.recordRequestMetric(bytes)
-
-  def getProduceRequestsPerSecond: Double = produceTimeStats.getRequestsPerSecond
-  
-  def getFetchRequestsPerSecond: Double = fetchTimeStats.getRequestsPerSecond
-
-  def getAvgProduceRequestMs: Double = produceTimeStats.getAvgMetric / (1000.0 * 1000.0)
-  
-  def getMaxProduceRequestMs: Double = produceTimeStats.getMaxMetric / (1000.0 * 1000.0)
-
-  def getAvgFetchRequestMs: Double = fetchTimeStats.getAvgMetric / (1000.0 * 1000.0)
-
-  def getMaxFetchRequestMs: Double = fetchTimeStats.getMaxMetric / (1000.0 * 1000.0)
-
-  def getBytesReadPerSecond: Double = produceBytesStats.getAvgMetric
-  
-  def getBytesWrittenPerSecond: Double = fetchBytesStats.getAvgMetric
-
-  def getNumFetchRequests: Long = fetchTimeStats.getNumRequests
-
-  def getNumProduceRequests: Long = produceTimeStats.getNumRequests
-
-  def getTotalBytesRead: Long = produceBytesStats.getTotalMetric
-
-  def getTotalBytesWritten: Long = fetchBytesStats.getTotalMetric
-
-  def getTotalFetchRequestMs: Long = fetchTimeStats.getTotalMetric
-
-  def getTotalProduceRequestMs: Long = produceTimeStats.getTotalMetric
-}
diff --git a/trunk/core/src/main/scala/kafka/network/Transmission.scala b/trunk/core/src/main/scala/kafka/network/Transmission.scala
deleted file mode 100644
index 13f1b19..0000000
--- a/trunk/core/src/main/scala/kafka/network/Transmission.scala
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network
-
-import java.nio._
-import java.nio.channels._
-import kafka.utils.Logging
-
-/**
- * Represents a stateful transfer of data to or from the network
- */
-private[network] trait Transmission extends Logging {
-  
-  def complete: Boolean
-  
-  protected def expectIncomplete(): Unit = {
-    if(complete)
-      throw new IllegalStateException("This operation cannot be completed on a complete request.")
-  }
-  
-  protected def expectComplete(): Unit = {
-    if(!complete)
-      throw new IllegalStateException("This operation cannot be completed on an incomplete request.")
-  }
-  
-}
-
-/**
- * A transmission that is being received from a channel
- */
-private[kafka] trait Receive extends Transmission {
-  
-  def buffer: ByteBuffer
-  
-  def readFrom(channel: ReadableByteChannel): Int
-  
-  def readCompletely(channel: ReadableByteChannel): Int = {
-    var read = 0
-    while(!complete) {
-      read = readFrom(channel)
-      trace(read + " bytes read.")
-    }
-    read
-  }
-  
-}
-
-/**
- * A transmission that is being sent out to the channel
- */
-private[kafka] trait Send extends Transmission {
-    
-  def writeTo(channel: GatheringByteChannel): Int
-  
-  def writeCompletely(channel: GatheringByteChannel): Int = {
-    var written = 0
-    while(!complete) {
-      written = writeTo(channel)
-      trace(written + " bytes written.")
-    }
-    written
-  }
-    
-}
-
-/**
- * A set of composite sends, sent one after another
- */
-abstract class MultiSend[S <: Send](val sends: List[S]) extends Send {
-  val expectedBytesToWrite: Int
-  private var current = sends
-  var totalWritten = 0
-
-  def writeTo(channel: GatheringByteChannel): Int = {
-    expectIncomplete
-    val written = current.head.writeTo(channel)
-    totalWritten += written
-    if(current.head.complete)
-      current = current.tail
-    written
-  }
-  
-  def complete: Boolean = {
-    if (current == Nil) {
-      if (totalWritten != expectedBytesToWrite)
-        error("mismatch in sending bytes over socket; expected: " + expectedBytesToWrite + " actual: " + totalWritten)
-      return true
-    }
-    else
-      return false
-  }
-}
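
MultiSend only needs the list of sends and the expected total byte count, so composing the ByteBufferSend defined earlier is a one-class exercise. A hedged sketch (illustrative only; assumes kafka.network package placement):

    import java.nio.ByteBuffer

    // Hypothetical composite send: writes several buffers back to back.
    class ByteBufferMultiSend(buffers: List[ByteBuffer])
      extends MultiSend[ByteBufferSend](buffers.map(b => new ByteBufferSend(b))) {
      // Captured before any write, while each buffer is still full.
      val expectedBytesToWrite: Int = buffers.map(_.remaining).sum
    }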
diff --git a/trunk/core/src/main/scala/kafka/network/package.html b/trunk/core/src/main/scala/kafka/network/package.html
deleted file mode 100644
index bcb83d9..0000000
--- a/trunk/core/src/main/scala/kafka/network/package.html
+++ /dev/null
@@ -1,11 +0,0 @@
-The network server for kafka. No application specific code here, just general network server stuff.
-<br>
-The classes Receive and Send encapsulate the incoming and outgoing transmission of bytes. A Handler
-is a mapping between a Receive and a Send, and represents the user's hook to add logic for mapping requests
-to actual processing code. Any uncaught exceptions in the reading or writing of the transmissions will result in 
-the server logging an error and closing the offending socket. As a result it is the duty of the Handler
-implementation to catch and serialize any application-level errors that should be sent to the client.
-<br>
-This slightly lower-level interface that models sending and receiving rather than requests and responses
-is necessary in order to allow the send or receive to be overridden with a non-user-space writing of bytes
-using FileChannel.transferTo.
\ No newline at end of file
diff --git a/trunk/core/src/main/scala/kafka/producer/BrokerPartitionInfo.scala b/trunk/core/src/main/scala/kafka/producer/BrokerPartitionInfo.scala
deleted file mode 100644
index e04440a..0000000
--- a/trunk/core/src/main/scala/kafka/producer/BrokerPartitionInfo.scala
+++ /dev/null
@@ -1,59 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.producer
-
-import collection.mutable.Map
-import collection.SortedSet
-import kafka.cluster.{Broker, Partition}
-
-trait BrokerPartitionInfo {
-  /**
-   * Return the sorted set of broker partitions registered for the topic.
-   * @param topic the topic for which this information is to be returned
-   * @return a sorted set of Partition(brokerId, partitionId) objects. Returns an empty
-   * set if no brokers are available.
-   */  
-  def getBrokerPartitionInfo(topic: String = null): SortedSet[Partition]
-
-  /**
-   * Generate the host and port information for the broker identified
-   * by the given broker id 
-   * @param brokerId the broker for which the info is to be returned
-   * @return host and port of brokerId
-   */
-  def getBrokerInfo(brokerId: Int): Option[Broker]
-
-  /**
-   * Generate a mapping from broker id to the host and port for all brokers
-   * @return mapping from id to host and port of all brokers
-   */
-  def getAllBrokerInfo: Map[Int, Broker]
-
-  /**
-   * This is relevant to the ZKBrokerPartitionInfo. It updates the ZK cache
-   * by reading from zookeeper and recreating the data structures. This API
-   * is invoked by the producer, when it detects that the ZK cache of
-   * ZKBrokerPartitionInfo is stale.
-   *
-   */
-  def updateInfo
-
-  /**
-   * Cleanup
-   */
-  def close
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/ConfigBrokerPartitionInfo.scala b/trunk/core/src/main/scala/kafka/producer/ConfigBrokerPartitionInfo.scala
deleted file mode 100644
index f9ea604..0000000
--- a/trunk/core/src/main/scala/kafka/producer/ConfigBrokerPartitionInfo.scala
+++ /dev/null
@@ -1,96 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.producer
-
-import collection.mutable.HashMap
-import collection.mutable.Map
-import collection.SortedSet
-import kafka.cluster.{Broker, Partition}
-import kafka.common.InvalidConfigException
-
-private[producer] class ConfigBrokerPartitionInfo(config: ProducerConfig) extends BrokerPartitionInfo {
-  private val brokerPartitions: SortedSet[Partition] = getConfigTopicPartitionInfo
-  private val allBrokers = getConfigBrokerInfo
-
-  /**
-   * Return a sequence of (brokerId, numPartitions)
-   * @param topic this value is null 
-   * @return a sequence of (brokerId, numPartitions)
-   */
-  def getBrokerPartitionInfo(topic: String): SortedSet[Partition] = brokerPartitions
-
-  /**
-   * Generate the host and port information for the broker identified
-   * by the given broker id
-   * @param brokerId the broker for which the info is to be returned
-   * @return host and port of brokerId
-   */
-  def getBrokerInfo(brokerId: Int): Option[Broker] = {
-    allBrokers.get(brokerId)
-  }
-
-  /**
-   * Generate a mapping from broker id to the host and port for all brokers
-   * @return mapping from id to host and port of all brokers
-   */
-  def getAllBrokerInfo: Map[Int, Broker] = allBrokers
-
-  def close {}
-
-  def updateInfo = {}
-
-  /**
-   * Generate a sequence of (brokerId, numPartitions) for all brokers
-   * specified in the producer configuration
-   * @return sequence of (brokerId, numPartitions)
-   */
-  private def getConfigTopicPartitionInfo(): SortedSet[Partition] = {
-    val brokerInfoList = config.brokerList.split(",")
-    if(brokerInfoList.size == 0) throw new InvalidConfigException("broker.list is empty")
-    // check if each individual broker info is valid => (brokerId: brokerHost: brokerPort)
-    brokerInfoList.foreach { bInfo =>
-      val brokerInfo = bInfo.split(":")
-      if(brokerInfo.size < 3) throw new InvalidConfigException("broker.list has invalid value")
-    }
-    val brokerPartitions = brokerInfoList.map(bInfo => (bInfo.split(":").head.toInt, 1))
-    var brokerParts = SortedSet.empty[Partition]
-    brokerPartitions.foreach { bp =>
-      for(i <- 0 until bp._2) {
-        val bidPid = new Partition(bp._1, i)
-        brokerParts = brokerParts + bidPid
-      }
-    }
-    brokerParts
-  }
-
-  /**
-   * Generate the host and port information for all brokers
-   * specified in the producer configuration
-   * @return mapping from brokerId to (host, port) for all brokers
-   */
-  private def getConfigBrokerInfo(): Map[Int, Broker] = {
-    val brokerInfo = new HashMap[Int, Broker]()
-    val brokerInfoList = config.brokerList.split(",")
-    brokerInfoList.foreach{ bInfo =>
-      val brokerIdHostPort = bInfo.split(":")
-      brokerInfo += (brokerIdHostPort(0).toInt -> new Broker(brokerIdHostPort(0).toInt, brokerIdHostPort(1),
-        brokerIdHostPort(1), brokerIdHostPort(2).toInt))
-    }
-    brokerInfo
-  }
-
-}
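
The static broker.list parsed above is a comma-separated list of brokerId:host:port entries, with one partition assumed per listed broker. For illustration only (host names and ports are placeholders), a zookeeper-free producer configuration would look like:

    import java.util.Properties

    // Hypothetical static-broker configuration consumed by ConfigBrokerPartitionInfo.
    val props = new Properties()
    props.put("broker.list", "0:broker0.example.com:9092,1:broker1.example.com:9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    val config = new ProducerConfig(props)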
diff --git a/trunk/core/src/main/scala/kafka/producer/ConsoleProducer.scala b/trunk/core/src/main/scala/kafka/producer/ConsoleProducer.scala
deleted file mode 100644
index 533fe46..0000000
--- a/trunk/core/src/main/scala/kafka/producer/ConsoleProducer.scala
+++ /dev/null
@@ -1,144 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer
-
-import scala.collection.JavaConversions._
-import joptsimple._
-import java.util.Properties
-import java.io._
-import kafka.message._
-import kafka.serializer._
-
-object ConsoleProducer { 
-
-  def main(args: Array[String]) { 
-    val parser = new OptionParser
-    val topicOpt = parser.accepts("topic", "REQUIRED: The topic id to produce messages to.")
-                           .withRequiredArg
-                           .describedAs("topic")
-                           .ofType(classOf[String])
-    val zkConnectOpt = parser.accepts("zookeeper", "REQUIRED: The zookeeper connection string for the kafka zookeeper instance in the form HOST:PORT[/CHROOT].")
-                           .withRequiredArg
-                           .describedAs("connection_string")
-                           .ofType(classOf[String])
-    val syncOpt = parser.accepts("sync", "If set, message send requests to the brokers are sent synchronously, one at a time as they arrive.")
-    val compressOpt = parser.accepts("compress", "If set, message batches are sent compressed")
-    val batchSizeOpt = parser.accepts("batch-size", "Number of messages to send in a single batch if they are not being sent synchronously.")
-                             .withRequiredArg
-                             .describedAs("size")
-                             .ofType(classOf[java.lang.Integer])
-                             .defaultsTo(200)
-    val sendTimeoutOpt = parser.accepts("timeout", "If set and the producer is running in asynchronous mode, this gives the maximum amount of time" +
-                                                   " a message will queue awaiting sufficient batch size. The value is given in ms.")
-                               .withRequiredArg
-                               .describedAs("timeout_ms")
-                               .ofType(classOf[java.lang.Long])
-                               .defaultsTo(1000)
-    val messageEncoderOpt = parser.accepts("message-encoder", "The class name of the message encoder implementation to use.")
-                                 .withRequiredArg
-                                 .describedAs("encoder_class")
-                                 .ofType(classOf[java.lang.String])
-                                 .defaultsTo(classOf[StringEncoder].getName)
-    val messageReaderOpt = parser.accepts("line-reader", "The class name of the class to use for reading lines from standard in. " +
-                                                          "By default each line is read as a separate message.")
-                                  .withRequiredArg
-                                  .describedAs("reader_class")
-                                  .ofType(classOf[java.lang.String])
-                                  .defaultsTo(classOf[LineMessageReader].getName)
-    val propertyOpt = parser.accepts("property", "A mechanism to pass user-defined properties in the form key=value to the message reader. " + 
-                                                 "This allows custom configuration for a user-defined message reader.")
-                            .withRequiredArg
-                            .describedAs("prop")
-                            .ofType(classOf[String])
-
-
-    val options = parser.parse(args : _*)
-    for(arg <- List(topicOpt, zkConnectOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"")
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-
-    val topic = options.valueOf(topicOpt)
-    val zkConnect = options.valueOf(zkConnectOpt)
-    val sync = options.has(syncOpt)
-    val compress = options.has(compressOpt)
-    val batchSize = options.valueOf(batchSizeOpt)
-    val sendTimeout = options.valueOf(sendTimeoutOpt)
-    val encoderClass = options.valueOf(messageEncoderOpt)
-    val readerClass = options.valueOf(messageReaderOpt)
-    val cmdLineProps = parseLineReaderArgs(options.valuesOf(propertyOpt))
-
-    val props = new Properties()
-    props.put("zk.connect", zkConnect)
-    props.put("compression.codec", DefaultCompressionCodec.codec.toString)
-    props.put("producer.type", if(sync) "sync" else "async")
-    if(options.has(batchSizeOpt))
-      props.put("batch.size", batchSize.toString)
-    props.put("queue.time", sendTimeout.toString)
-    props.put("serializer.class", encoderClass)
-
-    val reader = Class.forName(readerClass).newInstance().asInstanceOf[MessageReader]
-    reader.init(System.in, cmdLineProps)
-
-    val producer = new Producer[Any, Any](new ProducerConfig(props))
-
-    Runtime.getRuntime.addShutdownHook(new Thread() {
-      override def run() {
-        producer.close()
-      }
-    })
-
-    var message: AnyRef = null
-    do { 
-      message = reader.readMessage()
-      if(message != null)
-        producer.send(new ProducerData(topic, message))
-    } while(message != null)
-  }
-
-  def parseLineReaderArgs(args: Iterable[String]): Properties = {
-    val splits = args.map(_ split "=").filterNot(_ == null).filterNot(_.length == 0)
-    if(!splits.forall(_.length == 2)) {
-      System.err.println("Invalid line reader properties: " + args.mkString(" "))
-      System.exit(1)
-    }
-    val props = new Properties
-    for(a <- splits)
-      props.put(a(0), a(1))
-    props
-  }
-
-  trait MessageReader { 
-    def init(inputStream: InputStream, props: Properties) {}
-    def readMessage(): AnyRef
-    def close() {}
-  }
-
-  class LineMessageReader extends MessageReader { 
-    var reader: BufferedReader = null
-
-    override def init(inputStream: InputStream, props: Properties) { 
-      reader = new BufferedReader(new InputStreamReader(inputStream))
-    }
-
-    override def readMessage() = reader.readLine()
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/DefaultPartitioner.scala b/trunk/core/src/main/scala/kafka/producer/DefaultPartitioner.scala
deleted file mode 100644
index 3459224..0000000
--- a/trunk/core/src/main/scala/kafka/producer/DefaultPartitioner.scala
+++ /dev/null
@@ -1,29 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer
-
-private[kafka] class DefaultPartitioner[T] extends Partitioner[T] {
-  private val random = new java.util.Random
-  
-  def partition(key: T, numPartitions: Int): Int = {
-    if(key == null)
-      random.nextInt(numPartitions)
-    else
-      math.abs(key.hashCode) % numPartitions
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/KafkaLog4jAppender.scala b/trunk/core/src/main/scala/kafka/producer/KafkaLog4jAppender.scala
deleted file mode 100644
index 417da27..0000000
--- a/trunk/core/src/main/scala/kafka/producer/KafkaLog4jAppender.scala
+++ /dev/null
@@ -1,94 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer
-
-import async.MissingConfigException
-import org.apache.log4j.spi.LoggingEvent
-import org.apache.log4j.AppenderSkeleton
-import org.apache.log4j.helpers.LogLog
-import kafka.utils.Logging
-import java.util.{Properties, Date}
-import scala.collection._
-
-class KafkaLog4jAppender extends AppenderSkeleton with Logging {
-  var port:Int = 0
-  var host:String = null
-  var topic:String = null
-  var serializerClass:String = null
-  var zkConnect:String = null
-  var brokerList:String = null
-  
-  private var producer: Producer[String, String] = null
-
-  def getTopic:String = topic
-  def setTopic(topic: String) { this.topic = topic }
-
-  def getZkConnect:String = zkConnect
-  def setZkConnect(zkConnect: String) { this.zkConnect = zkConnect }
-  
-  def getBrokerList:String = brokerList
-  def setBrokerList(brokerList: String) { this.brokerList = brokerList }
-  
-  def getSerializerClass:String = serializerClass
-  def setSerializerClass(serializerClass:String) { this.serializerClass = serializerClass }
-
-  override def activateOptions() {
-    val connectDiagnostic : mutable.ListBuffer[String] = mutable.ListBuffer();
-    // check for config parameter validity
-    val props = new Properties()
-    if( zkConnect == null) connectDiagnostic += "zkConnect"
-    else props.put("zk.connect", zkConnect);
-    if( brokerList == null) connectDiagnostic += "brokerList"
-    else if( props.isEmpty) props.put("broker.list", brokerList)
-    if(props.isEmpty )
-      throw new MissingConfigException(
-        connectDiagnostic mkString ("One of these connection properties must be specified: ", ", ", ".")
-      )
-    if(topic == null)
-      throw new MissingConfigException("topic must be specified by the Kafka log4j appender")
-    if(serializerClass == null) {
-      serializerClass = "kafka.serializer.StringEncoder"
-      LogLog.warn("Using default encoder - kafka.serializer.StringEncoder")
-    }
-    props.put("serializer.class", serializerClass)
-    val config : ProducerConfig = new ProducerConfig(props)
-    producer = new Producer[String, String](config)
-    LogLog.debug("Kafka producer connected to " + (if(config.zkConnect == null) config.brokerList else config.zkConnect))
-    LogLog.debug("Logging for topic: " + topic)
-  }
-  
-  override def append(event: LoggingEvent)  {
-    val message : String = if( this.layout == null) {
-      event.getRenderedMessage
-    }
-    else this.layout.format(event)
-    LogLog.debug("[" + new Date(event.getTimeStamp).toString + "]" + message)
-    val messageData : ProducerData[String, String] =
-      new ProducerData[String, String](topic, message)
-    producer.send(messageData);
-  }
-
-  override def close() {
-    if(!this.closed) {
-      this.closed = true
-      producer.close()
-    }
-  }
-
-  override def requiresLayout: Boolean = false
-}
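
The appender is normally configured through log4j properties, but as a purely programmatic sketch (topic name and zookeeper connection string are placeholders) it can also be attached by hand:

    import org.apache.log4j.Logger

    // Hypothetical programmatic setup of the appender defined above.
    val appender = new KafkaLog4jAppender()
    appender.setTopic("log4j-events")
    appender.setZkConnect("localhost:2181")
    appender.activateOptions()   // builds the underlying Producer[String, String]

    val logger = Logger.getLogger("example.Application")
    logger.addAppender(appender)
    logger.info("this line is published to the log4j-events topic")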
diff --git a/trunk/core/src/main/scala/kafka/producer/Partitioner.scala b/trunk/core/src/main/scala/kafka/producer/Partitioner.scala
deleted file mode 100644
index 40e9f05..0000000
--- a/trunk/core/src/main/scala/kafka/producer/Partitioner.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.producer
-
-trait Partitioner[T] {
-  /**
-   * Uses the key to calculate a partition bucket id for routing
-   * the data to the appropriate broker partition
-   * @return an integer between 0 and numPartitions-1
-   */
-  def partition(key: T, numPartitions: Int): Int
-}
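
Implementations only need to map a key to a bucket in the range [0, numPartitions). A hedged sketch of a custom partitioner keyed on the first character of a string key (empty or null keys fall back to partition 0):

    // Hypothetical partitioner; DefaultPartitioner above is used when none is configured.
    class FirstLetterPartitioner extends Partitioner[String] {
      def partition(key: String, numPartitions: Int): Int =
        if(key == null || key.isEmpty) 0
        else key.toLowerCase.charAt(0).toInt % numPartitions
    }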
diff --git a/trunk/core/src/main/scala/kafka/producer/Producer.scala b/trunk/core/src/main/scala/kafka/producer/Producer.scala
deleted file mode 100644
index dafa6d2..0000000
--- a/trunk/core/src/main/scala/kafka/producer/Producer.scala
+++ /dev/null
@@ -1,213 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.producer
-
-import async.{CallbackHandler, EventHandler}
-import kafka.serializer.Encoder
-import kafka.utils._
-import java.util.Properties
-import kafka.cluster.{Partition, Broker}
-import java.util.concurrent.atomic.AtomicBoolean
-import kafka.common.{NoBrokersForPartitionException, InvalidPartitionException}
-import kafka.api.ProducerRequest
-
-class Producer[K,V](config: ProducerConfig,
-                    partitioner: Partitioner[K],
-                    producerPool: ProducerPool[V],
-                    populateProducerPool: Boolean,
-                    private var brokerPartitionInfo: BrokerPartitionInfo) /* for testing purpose only. Applications should ideally */
-                                                          /* use the other constructor*/
-extends Logging {
-  private val hasShutdown = new AtomicBoolean(false)
-  private val random = new java.util.Random
-  // check if zookeeper based auto partition discovery is enabled
-  private val zkEnabled = Utils.propertyExists(config.zkConnect)
-  if(brokerPartitionInfo == null) {
-    zkEnabled match {
-      case true =>
-        val zkProps = new Properties()
-        zkProps.put("zk.connect", config.zkConnect)
-        zkProps.put("zk.sessiontimeout.ms", config.zkSessionTimeoutMs.toString)
-        zkProps.put("zk.connectiontimeout.ms", config.zkConnectionTimeoutMs.toString)
-        zkProps.put("zk.synctime.ms", config.zkSyncTimeMs.toString)
-        brokerPartitionInfo = new ZKBrokerPartitionInfo(new ZKConfig(zkProps), producerCbk)
-      case false =>
-        brokerPartitionInfo = new ConfigBrokerPartitionInfo(config)
-    }
-  }
-  // pool of producers, one per broker
-  if(populateProducerPool) {
-    val allBrokers = brokerPartitionInfo.getAllBrokerInfo
-    allBrokers.foreach(b => producerPool.addProducer(new Broker(b._1, b._2.host, b._2.host, b._2.port)))
-  }
-
-/**
- * This constructor can be used when all config parameters will be specified through the
- * ProducerConfig object
- * @param config Producer Configuration object
- */
-  def this(config: ProducerConfig) =  this(config, Utils.getObject(config.partitionerClass),
-    new ProducerPool[V](config, Utils.getObject(config.serializerClass)), true, null)
-
-  /**
-   * This constructor can be used to provide pre-instantiated objects for all config parameters
-   * that would otherwise be instantiated via reflection. i.e. encoder, partitioner, event handler and
-   * callback handler. If you use this constructor, encoder, eventHandler, callback handler and partitioner
-   * will not be picked up from the config.
-   * @param config Producer Configuration object
-   * @param encoder Encoder used to convert an object of type V to a kafka.message.Message. If this is null it
-   * throws an InvalidConfigException
-   * @param eventHandler the class that implements kafka.producer.async.IEventHandler[T] used to
-   * dispatch a batch of produce requests, using an instance of kafka.producer.SyncProducer. If this is null, it
-   * uses the DefaultEventHandler
-   * @param cbkHandler the class that implements kafka.producer.async.CallbackHandler[T] used to inject
-   * callbacks at various stages of the kafka.producer.AsyncProducer pipeline. If this is null, the producer does
-   * not use the callback handler and hence does not invoke any callbacks
-   * @param partitioner class that implements the kafka.producer.Partitioner[K], used to supply a custom
-   * partitioning strategy on the message key (of type K) that is specified through the ProducerData[K, T]
-   * object in the  send API. If this is null, producer uses DefaultPartitioner
-   */
-  def this(config: ProducerConfig,
-           encoder: Encoder[V],
-           eventHandler: EventHandler[V],
-           cbkHandler: CallbackHandler[V],
-           partitioner: Partitioner[K]) =
-    this(config, if(partitioner == null) new DefaultPartitioner[K] else partitioner,
-         new ProducerPool[V](config, encoder, eventHandler, cbkHandler), true, null)
-
-  /**
-   * Sends the data, partitioned by key to the topic using either the
-   * synchronous or the asynchronous producer
-   * @param producerData the producer data object that encapsulates the topic, key and message data
-   */
-  def send(producerData: ProducerData[K,V]*) {
-    zkEnabled match {
-      case true => zkSend(producerData: _*)
-      case false => configSend(producerData: _*)
-    }
-  }
-
-  private def zkSend(producerData: ProducerData[K,V]*) {
-    val producerPoolRequests = producerData.map { pd =>
-      var brokerIdPartition: Option[Partition] = None
-      var brokerInfoOpt: Option[Broker] = None
-
-      var numRetries: Int = 0
-      while(numRetries <= config.zkReadRetries && brokerInfoOpt.isEmpty) {
-        if(numRetries > 0) {
-          info("Try #" + numRetries + " ZK producer cache is stale. Refreshing it by reading from ZK again")
-          brokerPartitionInfo.updateInfo
-        }
-
-        val topicPartitionsList = getPartitionListForTopic(pd)
-        val totalNumPartitions = topicPartitionsList.length
-
-        val partitionId = getPartition(pd.getKey, totalNumPartitions)
-        brokerIdPartition = Some(topicPartitionsList(partitionId))
-        brokerInfoOpt = brokerPartitionInfo.getBrokerInfo(brokerIdPartition.get.brokerId)
-        numRetries += 1
-      }
-
-      brokerInfoOpt match {
-        case Some(brokerInfo) =>
-          debug("Sending message to broker " + brokerInfo.host + ":" + brokerInfo.port +
-                  " on partition " + brokerIdPartition.get.partId)
-        case None =>
-          throw new NoBrokersForPartitionException("Invalid Zookeeper state. Failed to get partition for topic: " +
-            pd.getTopic + " and key: " + pd.getKey)
-      }
-      producerPool.getProducerPoolData(pd.getTopic,
-        new Partition(brokerIdPartition.get.brokerId, brokerIdPartition.get.partId),
-        pd.getData)
-    }
-    producerPool.send(producerPoolRequests: _*)
-  }
-
-  private def configSend(producerData: ProducerData[K,V]*) {
-    val producerPoolRequests = producerData.map { pd =>
-    // find the broker partitions registered for this topic
-      val topicPartitionsList = getPartitionListForTopic(pd)
-      val totalNumPartitions = topicPartitionsList.length
-
-      val randomBrokerId = random.nextInt(totalNumPartitions)
-      val brokerIdPartition = topicPartitionsList(randomBrokerId)
-      val brokerInfo = brokerPartitionInfo.getBrokerInfo(brokerIdPartition.brokerId).get
-
-      debug("Sending message to broker " + brokerInfo.host + ":" + brokerInfo.port +
-                " on a randomly chosen partition")
-      val partition = ProducerRequest.RandomPartition
-      debug("Sending message to broker " + brokerInfo.host + ":" + brokerInfo.port + " on a partition " +
-          brokerIdPartition.partId)
-      producerPool.getProducerPoolData(pd.getTopic,
-        new Partition(brokerIdPartition.brokerId, partition),
-        pd.getData)
-    }
-    producerPool.send(producerPoolRequests: _*)
-  }
-
-  private def getPartitionListForTopic(pd: ProducerData[K,V]): Seq[Partition] = {
-    debug("Getting the number of broker partitions registered for topic: " + pd.getTopic)
-    val topicPartitionsList = brokerPartitionInfo.getBrokerPartitionInfo(pd.getTopic).toSeq
-    debug("Broker partitions registered for topic: " + pd.getTopic + " = " + topicPartitionsList)
-    val totalNumPartitions = topicPartitionsList.length
-    if(totalNumPartitions == 0) throw new NoBrokersForPartitionException("Partition = " + pd.getKey)
-    topicPartitionsList
-  }
-
-  /**
-   * Retrieves the partition id and throws an InvalidPartitionException if
-   * the value of partition is not between 0 and numPartitions-1
-   * @param key the partition key
-   * @param numPartitions the total number of available partitions
-   * @return the partition id
-   */
-  private def getPartition(key: K, numPartitions: Int): Int = {
-    if(numPartitions <= 0)
-      throw new InvalidPartitionException("Invalid number of partitions: " + numPartitions +
-              "\n Valid values are > 0")
-    val partition = if(key == null) random.nextInt(numPartitions)
-                    else partitioner.partition(key , numPartitions)
-    if(partition < 0 || partition >= numPartitions)
-      throw new InvalidPartitionException("Invalid partition id : " + partition +
-              "\n Valid values are in the range inclusive [0, " + (numPartitions-1) + "]")
-    partition
-  }
-  
-  /**
-   * Callback to add a new producer to the producer pool. Used by ZKBrokerPartitionInfo
-   * on registration of new broker in zookeeper
-   * @param bid the id of the broker
-   * @param host the hostname of the broker
-   * @param port the port of the broker
-   */
-  private def producerCbk(bid: Int, host: String, port: Int) =  {
-    if(populateProducerPool) producerPool.addProducer(new Broker(bid, host, host, port))
-    else debug("Skipping the callback since populateProducerPool = false")
-  }
-
-  /**
-   * Close API to close the producer pool connections to all Kafka brokers. Also closes
-   * the zookeeper client connection if one exists
-   */
-  def close() = {
-    val canShutdown = hasShutdown.compareAndSet(false, true)
-    if(canShutdown) {
-      producerPool.close
-      brokerPartitionInfo.close
-    }
-  }
-}
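
Putting the pieces together, the simplest path is the single-argument constructor plus send and close; ConsoleProducer and KafkaLog4jAppender above follow the same pattern. A hedged end-to-end sketch (connection string and topic are placeholders, and a running zookeeper/broker pair is assumed):

    import java.util.Properties

    // Hypothetical zookeeper-based send.
    val props = new Properties()
    props.put("zk.connect", "localhost:2181")
    props.put("serializer.class", "kafka.serializer.StringEncoder")

    val producer = new Producer[String, String](new ProducerConfig(props))
    try {
      producer.send(new ProducerData[String, String]("my-topic", "hello"))
    } finally {
      producer.close()
    }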
diff --git a/trunk/core/src/main/scala/kafka/producer/ProducerConfig.scala b/trunk/core/src/main/scala/kafka/producer/ProducerConfig.scala
deleted file mode 100644
index 8a5b53c..0000000
--- a/trunk/core/src/main/scala/kafka/producer/ProducerConfig.scala
+++ /dev/null
@@ -1,89 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer
-
-import async.AsyncProducerConfigShared
-import java.util.Properties
-import kafka.utils.{ZKConfig, Utils}
-import kafka.common.InvalidConfigException
-
-class ProducerConfig(val props: Properties) extends ZKConfig(props)
-        with AsyncProducerConfigShared with SyncProducerConfigShared{
-
-  /** For bypassing zookeeper-based auto partition discovery, use this config  *
-   *  to pass in static broker and per-broker partition information. Format:   *
-   *  brokerid1:host1:port1, brokerid2:host2:port2 */
-  val brokerList = Utils.getString(props, "broker.list", null)
-  if(Utils.propertyExists(brokerList) && Utils.getString(props, "partitioner.class", null) != null)
-    throw new InvalidConfigException("partitioner.class cannot be used when broker.list is set")
-
-  /**
-   * If DefaultEventHandler is used, this specifies the number of times to
-   * retry if an error is encountered during send. Currently, it is only
-   * appropriate when broker.list points to a VIP. If the zk.connect option
-   * is used instead, this will not have any effect because with the zk-based
-   * producer, brokers are not re-selected upon retry. So retries would go to
-   * the same (potentially still down) broker. (KAFKA-253 will help address
-   * this.)
-   */
-  val numRetries = Utils.getInt(props, "num.retries", 0)
-
-  /** If both broker.list and zk.connect options are specified, throw an exception */
-  if(Utils.propertyExists(brokerList) && Utils.propertyExists(zkConnect))
-    throw new InvalidConfigException("only one of broker.list and zk.connect can be specified")
-
-  if(!Utils.propertyExists(zkConnect) && !Utils.propertyExists(brokerList))
-    throw new InvalidConfigException("At least one of zk.connect or broker.list must be specified")
-
-  /** the partitioner class for partitioning events amongst sub-topics */
-  val partitionerClass = Utils.getString(props, "partitioner.class", "kafka.producer.DefaultPartitioner")
-
-  /** this parameter specifies whether the messages are sent asynchronously *
-   * or not. Valid values are - async for asynchronous send                 *
-   *                            sync for synchronous send                   */
-  val producerType = Utils.getString(props, "producer.type", "sync")
-
-  /**
-   * This parameter allows you to specify the compression codec for all data generated *
-   * by this producer. The default is NoCompressionCodec
-   */
-  val compressionCodec = Utils.getCompressionCodec(props, "compression.codec")
-
-  /** This parameter allows you to set whether compression should be turned *
-   *  on for particular topics
-   *
-   *  If the compression codec is anything other than NoCompressionCodec,
-   *
-   *    Enable compression only for specified topics if any
-   *
-   *    If the list of compressed topics is empty, then enable the specified compression codec for all topics
-   *
-   *  If the compression codec is NoCompressionCodec, compression is disabled for all topics
-   */
-  val compressedTopics = Utils.getCSVList(Utils.getString(props, "compressed.topics", null))
-
-  /**
-   * The producer using the zookeeper software load balancer maintains a ZK cache that gets
-   * updated by the zookeeper watcher listeners. During some events like a broker bounce, the
-   * producer ZK cache can get into an inconsistent state for a small time period. In this time
-   * period, it could end up picking a broker partition that is unavailable. When this happens, the
-   * ZK cache needs to be updated.
-   * This parameter specifies the number of times the producer attempts to refresh this ZK cache.
-   */
-  val zkReadRetries = Utils.getInt(props, "zk.read.num.retries", 3)
-}
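
For reference, a minimal sketch of the two mutually exclusive ways this config is populated; the property names are the ones read above, while host names and values are illustrative.

    import java.util.Properties

    // zookeeper-based discovery: brokers and partitions are found through zk.connect
    val zkProps = new Properties()
    zkProps.put("zk.connect", "localhost:2181")
    zkProps.put("producer.type", "async")                 // default is "sync"
    val zkConfig = new ProducerConfig(zkProps)

    // static broker list: bypasses zookeeper, so partitioner.class must not be set
    val staticProps = new Properties()
    staticProps.put("broker.list", "0:broker1.example.com:9092")
    staticProps.put("num.retries", "3")                   // only meaningful when broker.list points to a VIP
    val staticConfig = new ProducerConfig(staticProps)

    // supplying both zk.connect and broker.list, or neither, throws InvalidConfigException
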
diff --git a/trunk/core/src/main/scala/kafka/producer/ProducerData.scala b/trunk/core/src/main/scala/kafka/producer/ProducerData.scala
deleted file mode 100644
index 6034123..0000000
--- a/trunk/core/src/main/scala/kafka/producer/ProducerData.scala
+++ /dev/null
@@ -1,39 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer
-
-/**
- * Represents the data to be sent using the Producer send API
- * @param topic the topic under which the message is to be published
- * @param key the key used by the partitioner to pick a broker partition
- * @param data variable length data to be published as Kafka messages under topic
- */
-class ProducerData[K, V](private val topic: String,
-                         private val key: K,
-                         private val data: Seq[V]) {
-
-  def this(t: String, d: Seq[V]) = this(topic = t, key = null.asInstanceOf[K], data = d)
-
-  def this(t: String, d: V) = this(topic = t, key = null.asInstanceOf[K], data = List(d))
-
-  def getTopic: String = topic
-
-  def getKey: K = key
-
-  def getData: Seq[V] = data
-}
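
A quick sketch of the three constructors above; topic and payload values are illustrative.

    // keyed: the key is later handed to the partitioner to pick a broker partition
    val keyed   = new ProducerData[String, String]("clicks", "user-42", Seq("event-1", "event-2"))
    // unkeyed: the key defaults to null, so a partition is chosen randomly downstream
    val unkeyed = new ProducerData[String, String]("clicks", Seq("event-1"))
    // single message: wrapped into a one-element List by the auxiliary constructor
    val single  = new ProducerData[String, String]("clicks", "event-1")
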
diff --git a/trunk/core/src/main/scala/kafka/producer/ProducerPool.scala b/trunk/core/src/main/scala/kafka/producer/ProducerPool.scala
deleted file mode 100644
index ee2d44b..0000000
--- a/trunk/core/src/main/scala/kafka/producer/ProducerPool.scala
+++ /dev/null
@@ -1,179 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer
-
-import async._
-import java.util.Properties
-import kafka.serializer.Encoder
-import java.util.concurrent.{ConcurrentMap, ConcurrentHashMap}
-import kafka.cluster.{Partition, Broker}
-import kafka.api.ProducerRequest
-import kafka.common.{UnavailableProducerException, InvalidConfigException}
-import kafka.utils.{Utils, Logging}
-import kafka.message.{NoCompressionCodec, ByteBufferMessageSet}
-
-class ProducerPool[V](private val config: ProducerConfig,
-                      private val serializer: Encoder[V],
-                      private val syncProducers: ConcurrentMap[Int, SyncProducer],
-                      private val asyncProducers: ConcurrentMap[Int, AsyncProducer[V]],
-                      private val inputEventHandler: EventHandler[V] = null,
-                      private val cbkHandler: CallbackHandler[V] = null) extends Logging {
-
-  private var eventHandler = inputEventHandler
-  if(eventHandler == null)
-    eventHandler = new DefaultEventHandler(config, cbkHandler)
-
-  if(serializer == null)
-    throw new InvalidConfigException("serializer passed in is null!")
-
-  private var sync: Boolean = true
-  config.producerType match {
-    case "sync" =>
-    case "async" => sync = false
-    case _ => throw new InvalidConfigException("Valid values for producer.type are sync/async")
-  }
-
-  def this(config: ProducerConfig, serializer: Encoder[V],
-           eventHandler: EventHandler[V], cbkHandler: CallbackHandler[V]) =
-    this(config, serializer,
-         new ConcurrentHashMap[Int, SyncProducer](),
-         new ConcurrentHashMap[Int, AsyncProducer[V]](),
-         eventHandler, cbkHandler)
-
-  def this(config: ProducerConfig, serializer: Encoder[V]) = this(config, serializer,
-                                                                  new ConcurrentHashMap[Int, SyncProducer](),
-                                                                  new ConcurrentHashMap[Int, AsyncProducer[V]](),
-                                                                  Utils.getObject(config.eventHandler),
-                                                                  Utils.getObject(config.cbkHandler))
-  /**
-   * add a new producer, either synchronous or asynchronous, connecting
-   * to the specified broker 
-   * @param broker the broker (id, host and port) that the new producer should connect to
-   */
-  def addProducer(broker: Broker) {
-    val props = new Properties()
-    props.put("host", broker.host)
-    props.put("port", broker.port.toString)
-    props.putAll(config.props)
-    if(sync) {
-        val producer = new SyncProducer(new SyncProducerConfig(props))
-        info("Creating sync producer for broker id = " + broker.id + " at " + broker.host + ":" + broker.port)
-        syncProducers.put(broker.id, producer)
-    } else {
-        val producer = new AsyncProducer[V](new AsyncProducerConfig(props),
-                                            new SyncProducer(new SyncProducerConfig(props)),
-                                            serializer,
-                                            eventHandler, config.eventHandlerProps,
-                                            cbkHandler, config.cbkHandlerProps)
-        producer.start
-        info("Creating async producer for broker id = " + broker.id + " at " + broker.host + ":" + broker.port)
-        asyncProducers.put(broker.id, producer)
-    }
-  }
-
-  /**
-   * selects either a synchronous or an asynchronous producer for
-   * the specified broker id and calls the send API on the selected
-   * producer to publish the data to the specified broker partition
-   * @param poolData the producer pool request object
-   */
-  def send(poolData: ProducerPoolData[V]*) {
-    val distinctBrokers = poolData.map(pd => pd.getBidPid.brokerId).distinct
-    var remainingRequests = poolData.toSeq
-    distinctBrokers.foreach { bid =>
-      val requestsForThisBid = remainingRequests partition (_.getBidPid.brokerId == bid)
-      remainingRequests = requestsForThisBid._2
-
-      if(sync) {
-        val producerRequests = requestsForThisBid._1.map(req => new ProducerRequest(req.getTopic, req.getBidPid.partId,
-          new ByteBufferMessageSet(compressionCodec = config.compressionCodec,
-                                   messages = req.getData.map(d => serializer.toMessage(d)): _*)))
-        debug("Fetching sync producer for broker id: " + bid)
-        val producer = syncProducers.get(bid)
-        if(producer != null) {
-          if(producerRequests.size > 1)
-            producer.multiSend(producerRequests.toArray)
-          else
-            producer.send(topic = producerRequests(0).topic,
-                          partition = producerRequests(0).partition,
-                          messages = producerRequests(0).messages)
-          config.compressionCodec match {
-            case NoCompressionCodec => debug("Sending message to broker " + bid)
-            case _ => debug("Sending compressed messages to broker " + bid)
-          }
-        }else
-          throw new UnavailableProducerException("Producer pool has not been initialized correctly. " +
-            "Sync Producer for broker " + bid + " does not exist in the pool")
-      }else {
-        debug("Fetching async producer for broker id: " + bid)
-        val producer = asyncProducers.get(bid)
-        if(producer != null) {
-          requestsForThisBid._1.foreach { req =>
-            req.getData.foreach(d => producer.send(req.getTopic, d, req.getBidPid.partId))
-          }
-          if(logger.isDebugEnabled)
-            config.compressionCodec match {
-              case NoCompressionCodec => debug("Sending message")
-              case _ => debug("Sending compressed messages")
-            }
-        }
-        else
-          throw new UnavailableProducerException("Producer pool has not been initialized correctly. " +
-            "Async Producer for broker " + bid + " does not exist in the pool")
-      }
-    }
-  }
-
-  /**
-   * Closes all the producers in the pool
-   */
-  def close() = {
-    config.producerType match {
-      case "sync" =>
-        info("Closing all sync producers")
-        val iter = syncProducers.values.iterator
-        while(iter.hasNext)
-          iter.next.close
-      case "async" =>
-        info("Closing all async producers")
-        val iter = asyncProducers.values.iterator
-        while(iter.hasNext)
-          iter.next.close
-    }
-  }
-
-  /**
-   * This constructs and returns the request object for the producer pool
-   * @param topic the topic to which the data should be published
-   * @param bidPid the broker id and partition id
-   * @param data the data to be published
-   */
-  def getProducerPoolData(topic: String, bidPid: Partition, data: Seq[V]): ProducerPoolData[V] = {
-    new ProducerPoolData[V](topic, bidPid, data)
-  }
-
-  class ProducerPoolData[V](topic: String,
-                            bidPid: Partition,
-                            data: Seq[V]) {
-    def getTopic: String = topic
-    def getBidPid: Partition = bidPid
-    def getData: Seq[V] = data
-  }
-}
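
A minimal sketch of how a caller drives the pool, mirroring the Producer code earlier in this diff; the broker address, topic, and the choice of kafka.serializer.StringEncoder are assumptions for illustration.

    // config is a ProducerConfig; its producer.type setting decides sync versus async
    val pool = new ProducerPool[String](config, new kafka.serializer.StringEncoder)
    // register a producer for broker id 0 (the Broker constructor mirrors its use in this file)
    pool.addProducer(new Broker(0, "broker1.example.com", "broker1.example.com", 9092))
    // one request per (topic, broker/partition, batch of messages)
    val request = pool.getProducerPoolData("clicks", new Partition(0, 0), Seq("event-1", "event-2"))
    pool.send(request)
    pool.close()
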
diff --git a/trunk/core/src/main/scala/kafka/producer/SyncProducer.scala b/trunk/core/src/main/scala/kafka/producer/SyncProducer.scala
deleted file mode 100644
index f43685a..0000000
--- a/trunk/core/src/main/scala/kafka/producer/SyncProducer.scala
+++ /dev/null
@@ -1,229 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer
-
-import java.net._
-import java.nio.channels._
-import kafka.message._
-import kafka.network._
-import kafka.utils._
-import kafka.api._
-import scala.math._
-import java.nio.ByteBuffer
-import java.util.Random
-
-object SyncProducer {
-  val RequestKey: Short = 0
-  val randomGenerator = new Random
-}
-
-/*
- * Send a message set.
- */
-@threadsafe
-class SyncProducer(val config: SyncProducerConfig) extends Logging {
-  
-  private val MaxConnectBackoffMs = 60000
-  private var channel : SocketChannel = null
-  private var sentOnConnection = 0
-  /** make the time-based reconnect start at a random time */
-  private var lastConnectionTime = System.currentTimeMillis - SyncProducer.randomGenerator.nextDouble() * config.reconnectInterval
-
-  private val lock = new Object()
-  @volatile
-  private var shutdown: Boolean = false
-
-  trace("Instantiating Scala Sync Producer")
-
-  private def verifySendBuffer(buffer : ByteBuffer) = {
-    /**
-     * This seems a little convoluted, but the idea is to turn on verification simply by changing log4j settings.
-     * Also, when verification is turned on, care should be taken to see that the logs don't fill up with unnecessary
-     * data. So the rest of the logging is left at TRACE, while errors are logged at ERROR level.
-     */
-    if (logger.isDebugEnabled) {
-      trace("verifying sendbuffer of size " + buffer.limit)
-      val requestTypeId = buffer.getShort()
-      if (requestTypeId == RequestKeys.MultiProduce) {
-        try {
-          val request = MultiProducerRequest.readFrom(buffer)
-          for (produce <- request.produces) {
-            try {
-              for (messageAndOffset <- produce.messages)
-                if (!messageAndOffset.message.isValid)
-                  throw new InvalidMessageException("Message for topic " + produce.topic + " is invalid")
-            }
-            catch {
-              case e: Throwable =>
-                error("error iterating messages ", e)
-            }
-          }
-        }
-        catch {
-          case e: Throwable =>
-            error("error verifying sendbuffer ", e)
-        }
-      }
-    }
-  }
-
-  /**
-   * Common functionality for the public send methods
-   */
-  private def send(send: BoundedByteBufferSend) {
-    lock synchronized {
-      verifySendBuffer(send.buffer.slice)
-      val startTime = SystemTime.nanoseconds
-      getOrMakeConnection()
-
-      try {
-        send.writeCompletely(channel)
-      } catch {
-        case e : java.io.IOException =>
-          // no way to tell if write succeeded. Disconnect and re-throw exception to let client handle retry
-          disconnect()
-          throw e
-        case e2 =>
-          throw e2
-      }
-      // TODO: do we still need this?
-      sentOnConnection += 1
-
-      if(sentOnConnection >= config.reconnectInterval || (config.reconnectTimeInterval >= 0 && System.currentTimeMillis - lastConnectionTime >= config.reconnectTimeInterval)) {
-        disconnect()
-        channel = connect()
-        sentOnConnection = 0
-        lastConnectionTime = System.currentTimeMillis
-      }
-      val endTime = SystemTime.nanoseconds
-      SyncProducerStats.recordProduceRequest(endTime - startTime)
-    }
-  }
-
-  /**
-   * Send a message
-   */
-  def send(topic: String, partition: Int, messages: ByteBufferMessageSet) {
-    messages.verifyMessageSize(config.maxMessageSize)
-    val setSize = messages.sizeInBytes.asInstanceOf[Int]
-    trace("Got message set with " + setSize + " bytes to send")
-    send(new BoundedByteBufferSend(new ProducerRequest(topic, partition, messages)))
-  }
- 
-  def send(topic: String, messages: ByteBufferMessageSet): Unit = send(topic, ProducerRequest.RandomPartition, messages)
-
-  def multiSend(produces: Array[ProducerRequest]) {
-    for (request <- produces)
-      request.messages.verifyMessageSize(config.maxMessageSize)
-    val setSize = produces.foldLeft(0L)(_ + _.messages.sizeInBytes)
-    trace("Got multi message sets with " + setSize + " bytes to send")
-    send(new BoundedByteBufferSend(new MultiProducerRequest(produces)))
-  }
-
-  def close() = {
-    lock synchronized {
-      disconnect()
-      shutdown = true
-    }
-  }
-
-
-  /**
-   * Disconnect from current channel, closing connection.
-   * Side effect: channel field is set to null on successful disconnect
-   */
-  private def disconnect() {
-    try {
-      if(channel != null) {
-        info("Disconnecting from " + config.host + ":" + config.port)
-        Utils.swallow(logger.warn, channel.close())
-        Utils.swallow(logger.warn, channel.socket.close())
-        channel = null
-      }
-    } catch {
-      case e: Exception => error("Error on disconnect: ", e)
-    }
-  }
-    
-  private def connect(): SocketChannel = {
-    var connectBackoffMs = 1
-    val beginTimeMs = SystemTime.milliseconds
-    while(channel == null && !shutdown) {
-      try {
-        channel = SocketChannel.open()
-        channel.socket.setSendBufferSize(config.bufferSize)
-        channel.configureBlocking(true)
-        channel.socket.setSoTimeout(config.socketTimeoutMs)
-        channel.socket.setKeepAlive(true)
-        channel.connect(new InetSocketAddress(config.host, config.port))
-        info("Connected to " + config.host + ":" + config.port + " for producing")
-      }
-      catch {
-        case e: Exception => {
-          disconnect()
-          val endTimeMs = SystemTime.milliseconds
-          if ( (endTimeMs - beginTimeMs + connectBackoffMs) > config.connectTimeoutMs)
-          {
-            error("Producer connection to " +  config.host + ":" + config.port + " timing out after " + config.connectTimeoutMs + " ms", e)
-            throw e
-          }
-          error("Connection attempt to " +  config.host + ":" + config.port + " failed, next attempt in " + connectBackoffMs + " ms", e)
-          SystemTime.sleep(connectBackoffMs)
-          connectBackoffMs = min(10 * connectBackoffMs, MaxConnectBackoffMs)
-        }
-      }
-    }
-    channel
-  }
-
-  private def getOrMakeConnection() {
-    if(channel == null) {
-      channel = connect()
-    }
-  }
-}
-
-trait SyncProducerStatsMBean {
-  def getProduceRequestsPerSecond: Double
-  def getAvgProduceRequestMs: Double
-  def getMaxProduceRequestMs: Double
-  def getNumProduceRequests: Long
-}
-
-@threadsafe
-class SyncProducerStats extends SyncProducerStatsMBean {
-  private val produceRequestStats = new SnapshotStats
-
-  def recordProduceRequest(requestNs: Long) = produceRequestStats.recordRequestMetric(requestNs)
-
-  def getProduceRequestsPerSecond: Double = produceRequestStats.getRequestsPerSecond
-
-  def getAvgProduceRequestMs: Double = produceRequestStats.getAvgMetric / (1000.0 * 1000.0)
-
-  def getMaxProduceRequestMs: Double = produceRequestStats.getMaxMetric / (1000.0 * 1000.0)
-
-  def getNumProduceRequests: Long = produceRequestStats.getNumRequests
-}
-
-object SyncProducerStats extends Logging {
-  private val kafkaProducerstatsMBeanName = "kafka:type=kafka.KafkaProducerStats"
-  private val stats = new SyncProducerStats
-  Utils.swallow(logger.warn, Utils.registerMBean(stats, kafkaProducerstatsMBeanName))
-
-  def recordProduceRequest(requestMs: Long) = stats.recordProduceRequest(requestMs)
-}
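
A minimal sketch of a direct SyncProducer send; the address and payload are illustrative, and the ByteBufferMessageSet construction mirrors its use in ProducerPool above.

    import java.util.Properties
    import kafka.message.{ByteBufferMessageSet, Message, NoCompressionCodec}

    val props = new Properties()
    props.put("host", "broker1.example.com")
    props.put("port", "9092")
    val producer = new SyncProducer(new SyncProducerConfig(props))
    val messages = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
                                            messages = new Message("event-1".getBytes))
    producer.send("clicks", 0, messages)   // blocks until the request is fully written to the socket
    producer.close
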
diff --git a/trunk/core/src/main/scala/kafka/producer/SyncProducerConfig.scala b/trunk/core/src/main/scala/kafka/producer/SyncProducerConfig.scala
deleted file mode 100644
index 4b78a4b..0000000
--- a/trunk/core/src/main/scala/kafka/producer/SyncProducerConfig.scala
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer
-
-import kafka.utils.Utils
-import java.util.Properties
-
-class SyncProducerConfig(val props: Properties) extends SyncProducerConfigShared {
-  /** the broker to which the producer sends events */
-  val host = Utils.getString(props, "host")
-
-  /** the port on which the broker is running */
-  val port = Utils.getInt(props, "port")
-}
-
-trait SyncProducerConfigShared {
-  val props: Properties
-  
-  val bufferSize = Utils.getInt(props, "buffer.size", 100*1024)
-
-  val connectTimeoutMs = Utils.getInt(props, "connect.timeout.ms", 5000)
-
-  /** the socket timeout for network requests */
-  val socketTimeoutMs = Utils.getInt(props, "socket.timeout.ms", 30000)  
-
-  val reconnectInterval = Utils.getInt(props, "reconnect.interval", 30000)
-
-  /** negative reconnect time interval means disabling this time-based reconnect feature */
-  var reconnectTimeInterval = Utils.getInt(props, "reconnect.time.interval.ms", 1000*1000*10)
-
-  val maxMessageSize = Utils.getInt(props, "max.message.size", 1000000)
-}
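
A minimal sketch of a SyncProducerConfig: host and port are the only required properties, everything else falls back to the defaults read above (the address and override values are illustrative).

    import java.util.Properties

    val props = new Properties()
    props.put("host", "broker1.example.com")
    props.put("port", "9092")
    props.put("buffer.size", (64 * 1024).toString)   // override the 100 KB default socket send buffer
    props.put("reconnect.interval", "10000")         // reconnect after this many sends instead of 30000
    val syncConfig = new SyncProducerConfig(props)
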
diff --git a/trunk/core/src/main/scala/kafka/producer/ZKBrokerPartitionInfo.scala b/trunk/core/src/main/scala/kafka/producer/ZKBrokerPartitionInfo.scala
deleted file mode 100644
index 9e95b1c..0000000
--- a/trunk/core/src/main/scala/kafka/producer/ZKBrokerPartitionInfo.scala
+++ /dev/null
@@ -1,380 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.producer
-
-import kafka.utils.{ZKStringSerializer, ZkUtils, ZKConfig}
-import collection.mutable.HashMap
-import collection.mutable.Map
-import kafka.utils.Logging
-import collection.immutable.TreeSet
-import kafka.cluster.{Broker, Partition}
-import org.apache.zookeeper.Watcher.Event.KeeperState
-import org.I0Itec.zkclient.{IZkStateListener, IZkChildListener, ZkClient}
-import collection.SortedSet
-
-private[producer] object ZKBrokerPartitionInfo {
-
-  /**
-   * Generate the sorted set of (brokerId, partitionId) pairs for the list of brokers
-   * specified
-   * @param zkClient the zookeeper client used to read each broker's partition count
-   * @param topic the topic to which the brokers have registered
-   * @param brokerList the list of brokers for which the partition info is to be generated
-   * @return a sorted set of (brokerId, partitionId) pairs for the brokers in brokerList
-   */
-  private def getBrokerPartitions(zkClient: ZkClient, topic: String, brokerList: List[Int]): SortedSet[Partition] = {
-    val brokerTopicPath = ZkUtils.BrokerTopicsPath + "/" + topic
-    val numPartitions = brokerList.map(bid => ZkUtils.readData(zkClient, brokerTopicPath + "/" + bid).toInt)
-    val brokerPartitions = brokerList.zip(numPartitions)
-
-    val sortedBrokerPartitions = brokerPartitions.sortWith((id1, id2) => id1._1 < id2._1)
-
-    var brokerParts = SortedSet.empty[Partition]
-    sortedBrokerPartitions.foreach { bp =>
-      for(i <- 0 until bp._2) {
-        val bidPid = new Partition(bp._1, i)
-        brokerParts = brokerParts + bidPid
-      }
-    }
-    brokerParts
-  }
-}
-
-/**
- * If zookeeper based auto partition discovery is enabled, fetch broker info like
- * host, port, number of partitions from zookeeper
- */
-private[producer] class ZKBrokerPartitionInfo(config: ZKConfig, producerCbk: (Int, String, Int) => Unit) extends BrokerPartitionInfo with Logging {
-  private val zkWatcherLock = new Object
-  private val zkClient = new ZkClient(config.zkConnect, config.zkSessionTimeoutMs, config.zkConnectionTimeoutMs,
-    ZKStringSerializer)
-  // maintain a map from topic -> list of (broker, num_partitions) from zookeeper
-  private var topicBrokerPartitions = getZKTopicPartitionInfo
-  // maintain a map from broker id to the corresponding Broker object
-  private var allBrokers = getZKBrokerInfo
-
-  // use just the brokerTopicsListener for all watchers
-  private val brokerTopicsListener = new BrokerTopicsListener(topicBrokerPartitions, allBrokers)
-  // register listener for change of topics to keep topicsBrokerPartitions updated
-  zkClient.subscribeChildChanges(ZkUtils.BrokerTopicsPath, brokerTopicsListener)
-
-  // register listener for change of brokers for each topic to keep topicsBrokerPartitions updated
-  topicBrokerPartitions.keySet.foreach {topic =>
-    zkClient.subscribeChildChanges(ZkUtils.BrokerTopicsPath + "/" + topic, brokerTopicsListener)
-    debug("Registering listener on path: " + ZkUtils.BrokerTopicsPath + "/" + topic)
-  }
-
-  // register listener for new broker
-  zkClient.subscribeChildChanges(ZkUtils.BrokerIdsPath, brokerTopicsListener)
-
-  // register listener for session expired event
-  zkClient.subscribeStateChanges(new ZKSessionExpirationListener(brokerTopicsListener))
-
-  /**
-   * Return the sorted set of (brokerId, partitionId) pairs for a topic
-   * @param topic the topic for which this information is to be returned
-   * @return a sorted set of (brokerId, partitionId) pairs. Returns an empty
-   * set if no brokers are available.
-   */
-  def getBrokerPartitionInfo(topic: String): SortedSet[Partition] = {
-    zkWatcherLock synchronized {
-      val brokerPartitions = topicBrokerPartitions.get(topic)
-      var numBrokerPartitions = SortedSet.empty[Partition]
-      brokerPartitions match {
-        case Some(bp) =>
-          bp.size match {
-            case 0 => // no brokers currently registered for this topic. Find the list of all brokers in the cluster.
-              numBrokerPartitions = bootstrapWithExistingBrokers(topic)
-              topicBrokerPartitions += (topic -> numBrokerPartitions)
-            case _ => numBrokerPartitions = TreeSet[Partition]() ++ bp
-          }
-        case None =>  // no brokers currently registered for this topic. Find the list of all brokers in the cluster.
-          numBrokerPartitions = bootstrapWithExistingBrokers(topic)
-          topicBrokerPartitions += (topic -> numBrokerPartitions)
-      }
-      numBrokerPartitions
-    }
-  }
-
-  /**
-   * Generate the host and port information for the broker identified
-   * by the given broker id
-   * @param brokerId the broker for which the info is to be returned
-   * @return the Broker (host and port) registered under brokerId, or None if it is not registered
-   */
-  def getBrokerInfo(brokerId: Int): Option[Broker] =  {
-    zkWatcherLock synchronized {
-      allBrokers.get(brokerId)
-    }
-  }
-
-  /**
-   * Generate a mapping from broker id to the host and port for all brokers
-   * @return mapping from id to host and port of all brokers
-   */
-  def getAllBrokerInfo: Map[Int, Broker] = allBrokers
-
-  def close = zkClient.close
-
-  def updateInfo = {
-    zkWatcherLock synchronized {
-      topicBrokerPartitions = getZKTopicPartitionInfo
-      allBrokers = getZKBrokerInfo
-    }
-  }
-
-  private def bootstrapWithExistingBrokers(topic: String): scala.collection.immutable.SortedSet[Partition] = {
-   debug("Currently, no brokers are registered under topic: " + topic)
-    debug("Bootstrapping topic: " + topic + " with available brokers in the cluster with default " +
-      "number of partitions = 1")
-    val allBrokersIds = ZkUtils.getChildrenParentMayNotExist(zkClient, ZkUtils.BrokerIdsPath)
-    trace("List of all brokers currently registered in zookeeper = " + allBrokersIds.toString)
-    // since we do not have the information about the number of partitions on these brokers, just assume a single partition
-    // i.e. pick partition 0 from each broker as a candidate
-    val numBrokerPartitions = TreeSet[Partition]() ++ allBrokersIds.map(b => new Partition(b.toInt, 0))
-    // add the rest of the available brokers with default 1 partition for this topic, so all of the brokers
-    // participate in hosting this topic.
-    debug("Adding following broker id, partition id for NEW topic: " + topic + "=" + numBrokerPartitions.toString)
-    numBrokerPartitions
-  }
-
-  /**
-   * Generate the sorted set of (brokerId, partitionId) pairs for all topics
-   * registered in zookeeper
-   * @return a mapping from topic to its sorted set of (brokerId, partitionId) pairs
-   */
-  private def getZKTopicPartitionInfo(): collection.mutable.Map[String, SortedSet[Partition]] = {
-    val brokerPartitionsPerTopic = new HashMap[String, SortedSet[Partition]]()
-    ZkUtils.makeSurePersistentPathExists(zkClient, ZkUtils.BrokerTopicsPath)
-    val topics = ZkUtils.getChildrenParentMayNotExist(zkClient, ZkUtils.BrokerTopicsPath)
-    topics.foreach { topic =>
-    // find the number of broker partitions registered for this topic
-      val brokerTopicPath = ZkUtils.BrokerTopicsPath + "/" + topic
-      val brokerList = ZkUtils.getChildrenParentMayNotExist(zkClient, brokerTopicPath)
-      val numPartitions = brokerList.map(bid => ZkUtils.readData(zkClient, brokerTopicPath + "/" + bid).toInt)
-      val brokerPartitions = brokerList.map(bid => bid.toInt).zip(numPartitions)
-      val sortedBrokerPartitions = brokerPartitions.sortWith((id1, id2) => id1._1 < id2._1)
-      debug("Broker ids and # of partitions on each for topic: " + topic + " = " + sortedBrokerPartitions.toString)
-
-      var brokerParts = SortedSet.empty[Partition]
-      sortedBrokerPartitions.foreach { bp =>
-        for(i <- 0 until bp._2) {
-          val bidPid = new Partition(bp._1, i)
-          brokerParts = brokerParts + bidPid
-        }
-      }
-      brokerPartitionsPerTopic += (topic -> brokerParts)
-      debug("Sorted list of broker ids and partition ids on each for topic: " + topic + " = " + brokerParts.toString)
-    }
-    brokerPartitionsPerTopic
-  }
-
-  /**
-   * Generate a mapping from broker id to the corresponding Broker (host and port) for all brokers
-   * registered in zookeeper
-   * @return a mapping from brokerId to its Broker info
-   */
-  private def getZKBrokerInfo(): Map[Int, Broker] = {
-    val brokers = new HashMap[Int, Broker]()
-    val allBrokerIds = ZkUtils.getChildrenParentMayNotExist(zkClient, ZkUtils.BrokerIdsPath).map(bid => bid.toInt)
-    allBrokerIds.foreach { bid =>
-      val brokerInfo = ZkUtils.readData(zkClient, ZkUtils.BrokerIdsPath + "/" + bid)
-      brokers += (bid -> Broker.createBroker(bid, brokerInfo))
-    }
-    brokers
-  }
-
-  /**
-   * Listens to new broker registrations under a particular topic, in zookeeper and
-   * keeps the related data structures updated
-   */
-  class BrokerTopicsListener(val originalBrokerTopicsPartitionsMap: collection.mutable.Map[String, SortedSet[Partition]],
-                             val originalBrokerIdMap: Map[Int, Broker]) extends IZkChildListener with Logging {
-    private var oldBrokerTopicPartitionsMap = collection.mutable.Map.empty[String, SortedSet[Partition]] ++
-                                              originalBrokerTopicsPartitionsMap
-    private var oldBrokerIdMap = collection.mutable.Map.empty[Int, Broker] ++ originalBrokerIdMap
-
-    debug("[BrokerTopicsListener] Creating broker topics listener to watch the following paths - \n" +
-      "/broker/topics, /broker/topics/topic, /broker/ids")
-    debug("[BrokerTopicsListener] Initialized this broker topics listener with initial mapping of broker id to " +
-      "partition id per topic with " + oldBrokerTopicPartitionsMap.toString)
-
-    @throws(classOf[Exception])
-    def handleChildChange(parentPath : String, currentChildren : java.util.List[String]) {
-      val curChilds: java.util.List[String] = if(currentChildren != null) currentChildren
-                                              else new java.util.ArrayList[String]()
-
-      zkWatcherLock synchronized {
-        trace("Watcher fired for path: " + parentPath + " with change " + curChilds.toString)
-        import scala.collection.JavaConversions._
-
-        parentPath match {
-          case "/brokers/topics" =>        // this is a watcher for /broker/topics path
-            val updatedTopics = asBuffer(curChilds)
-            debug("[BrokerTopicsListener] List of topics changed at " + parentPath + " Updated topics -> " +
-                curChilds.toString)
-            debug("[BrokerTopicsListener] Old list of topics: " + oldBrokerTopicPartitionsMap.keySet.toString)
-            debug("[BrokerTopicsListener] Updated list of topics: " + updatedTopics.toSet.toString)
-            val newTopics = updatedTopics.toSet &~ oldBrokerTopicPartitionsMap.keySet
-            debug("[BrokerTopicsListener] List of newly registered topics: " + newTopics.toString)
-            newTopics.foreach { topic =>
-              val brokerTopicPath = ZkUtils.BrokerTopicsPath + "/" + topic
-              val brokerList = ZkUtils.getChildrenParentMayNotExist(zkClient, brokerTopicPath)
-              processNewBrokerInExistingTopic(topic, brokerList)
-              zkClient.subscribeChildChanges(ZkUtils.BrokerTopicsPath + "/" + topic,
-                brokerTopicsListener)
-            }
-          case "/brokers/ids"    =>        // this is a watcher for /broker/ids path
-            debug("[BrokerTopicsListener] List of brokers changed in the Kafka cluster " + parentPath +
-                "\t Currently registered list of brokers -> " + curChilds.toString)
-            processBrokerChange(parentPath, curChilds)
-          case _ =>
-            val pathSplits = parentPath.split("/")
-            val topic = pathSplits.last
-            if(pathSplits.length == 4 && pathSplits(2).equals("topics")) {
-              debug("[BrokerTopicsListener] List of brokers changed at " + parentPath + "\t Currently registered " +
-                  " list of brokers -> " + curChilds.toString + " for topic -> " + topic)
-              processNewBrokerInExistingTopic(topic, asBuffer(curChilds))
-            }
-        }
-
-        // update the data structures tracking older state values
-        oldBrokerTopicPartitionsMap = collection.mutable.Map.empty[String, SortedSet[Partition]] ++ topicBrokerPartitions
-        oldBrokerIdMap = collection.mutable.Map.empty[Int, Broker] ++  allBrokers
-      }
-    }
-
-    def processBrokerChange(parentPath: String, curChilds: Seq[String]) {
-      if(parentPath.equals(ZkUtils.BrokerIdsPath)) {
-        import scala.collection.JavaConversions._
-        val updatedBrokerList = asBuffer(curChilds).map(bid => bid.toInt)
-        val newBrokers = updatedBrokerList.toSet &~ oldBrokerIdMap.keySet
-        debug("[BrokerTopicsListener] List of newly registered brokers: " + newBrokers.toString)
-        newBrokers.foreach { bid =>
-          val brokerInfo = ZkUtils.readData(zkClient, ZkUtils.BrokerIdsPath + "/" + bid)
-          val brokerHostPort = brokerInfo.split(":")
-          allBrokers += (bid -> new Broker(bid, brokerHostPort(1), brokerHostPort(1), brokerHostPort(2).toInt))
-          debug("[BrokerTopicsListener] Invoking the callback for broker: " + bid)
-          producerCbk(bid, brokerHostPort(1), brokerHostPort(2).toInt)
-        }
-        // remove dead brokers from the in memory list of live brokers
-        val deadBrokers = oldBrokerIdMap.keySet &~ updatedBrokerList.toSet
-        debug("[BrokerTopicsListener] Deleting broker ids for dead brokers: " + deadBrokers.toString)
-        deadBrokers.foreach {bid =>
-          allBrokers = allBrokers - bid
-          // also remove this dead broker from particular topics
-          topicBrokerPartitions.keySet.foreach{ topic =>
-            topicBrokerPartitions.get(topic) match {
-              case Some(oldBrokerPartitionList) =>
-                val aliveBrokerPartitionList = oldBrokerPartitionList.filter(bp => bp.brokerId != bid)
-                topicBrokerPartitions += (topic -> aliveBrokerPartitionList)
-                debug("[BrokerTopicsListener] Removing dead broker ids for topic: " + topic + "\t " +
-                  "Updated list of broker id, partition id = " + aliveBrokerPartitionList.toString)
-              case None =>
-            }
-          }
-        }
-      }
-    }
-
-    /**
-     * Generate the updated set of (brokerId, partitionId) pairs for the new list of brokers
-     * registered under some topic
-     * @param topic the topic under which the registered brokers have changed
-     * @param curChilds the updated list of brokers registered for that topic
-     */
-    def processNewBrokerInExistingTopic(topic: String, curChilds: Seq[String]) = {
-      // find the old list of brokers for this topic
-      oldBrokerTopicPartitionsMap.get(topic) match {
-        case Some(brokersParts) =>
-          debug("[BrokerTopicsListener] Old list of brokers: " + brokersParts.map(bp => bp.brokerId).toString)
-        case None =>
-      }
-
-      val updatedBrokerList = curChilds.map(b => b.toInt)
-      import ZKBrokerPartitionInfo._
-      val updatedBrokerParts:SortedSet[Partition] = getBrokerPartitions(zkClient, topic, updatedBrokerList.toList)
-      debug("[BrokerTopicsListener] Currently registered list of brokers for topic: " + topic + " are " +
-          curChilds.toString)
-      // update the number of partitions on existing brokers
-      var mergedBrokerParts: SortedSet[Partition] = TreeSet[Partition]() ++ updatedBrokerParts
-      topicBrokerPartitions.get(topic) match {
-        case Some(oldBrokerParts) =>
-          debug("[BrokerTopicsListener] Unregistered list of brokers for topic: " + topic + " are " +
-            oldBrokerParts.toString)
-          mergedBrokerParts = oldBrokerParts ++ updatedBrokerParts
-        case None =>
-      }
-      // keep only brokers that are alive
-      mergedBrokerParts = mergedBrokerParts.filter(bp => allBrokers.contains(bp.brokerId))
-      topicBrokerPartitions += (topic -> mergedBrokerParts)
-      debug("[BrokerTopicsListener] List of broker partitions for topic: " + topic + " are " +
-          mergedBrokerParts.toString)
-    }
-
-    def resetState = {
-      trace("[BrokerTopicsListener] Before reseting broker topic partitions state " +
-          oldBrokerTopicPartitionsMap.toString)
-      oldBrokerTopicPartitionsMap = collection.mutable.Map.empty[String, SortedSet[Partition]] ++ topicBrokerPartitions
-      debug("[BrokerTopicsListener] After reseting broker topic partitions state " +
-          oldBrokerTopicPartitionsMap.toString)
-      trace("[BrokerTopicsListener] Before reseting broker id map state " + oldBrokerIdMap.toString)
-      oldBrokerIdMap = collection.mutable.Map.empty[Int, Broker] ++  allBrokers
-      debug("[BrokerTopicsListener] After reseting broker id map state " + oldBrokerIdMap.toString)
-    }
-  }
-
-  /**
-   * Handles the session expiration event in zookeeper
-   */
-  class ZKSessionExpirationListener(val brokerTopicsListener: BrokerTopicsListener)
-    extends IZkStateListener {
-
-    @throws(classOf[Exception])
-    def handleStateChanged(state: KeeperState) {
-      // do nothing, since zkclient will do reconnect for us.
-    }
-
-    /**
-     * Called after the zookeeper session has expired and a new session has been created. You would have to re-create
-     * any ephemeral nodes here.
-     *
-     * @throws Exception
-     *             On any error.
-     */
-    @throws(classOf[Exception])
-    def handleNewSession() {
-      /**
-       *  When we get a SessionExpired event, we lost all ephemeral nodes and zkclient has reestablished a
-       *  connection for us.
-       */
-      info("ZK expired; release old list of broker partitions for topics ")
-      topicBrokerPartitions = getZKTopicPartitionInfo
-      allBrokers = getZKBrokerInfo
-      brokerTopicsListener.resetState
-
-      // register listener for change of brokers for each topic to keep topicsBrokerPartitions updated
-      // NOTE: this is probably not required here. Since when we read from getZKTopicPartitionInfo() above,
-      // it automatically recreates the watchers there itself
-      topicBrokerPartitions.keySet.foreach(topic => zkClient.subscribeChildChanges(ZkUtils.BrokerTopicsPath + "/" + topic,
-        brokerTopicsListener))
-      // there is no need to re-register other listeners as they are listening on the child changes of
-      // permanent nodes
-    }
-
-  }
-
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducer.scala b/trunk/core/src/main/scala/kafka/producer/async/AsyncProducer.scala
deleted file mode 100644
index 54a5e9c..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducer.scala
+++ /dev/null
@@ -1,144 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer.async
-
-import java.util.concurrent.{TimeUnit, LinkedBlockingQueue}
-import kafka.utils.{Utils, Logging}
-import java.util.concurrent.atomic.AtomicBoolean
-import kafka.api.ProducerRequest
-import kafka.serializer.Encoder
-import java.util.{Random, Properties}
-import kafka.producer.{ProducerConfig, SyncProducer}
-
-object AsyncProducer {
-  val Shutdown = new Object
-  val Random = new Random
-  val ProducerMBeanName = "kafka.producer.Producer:type=AsyncProducerStats"
-  val ProducerQueueSizeMBeanName = "kafka.producer.Producer:type=AsyncProducerQueueSizeStats"
-}
-
-private[kafka] class AsyncProducer[T](config: AsyncProducerConfig,
-                                      producer: SyncProducer,
-                                      serializer: Encoder[T],
-                                      eventHandler: EventHandler[T] = null,
-                                      eventHandlerProps: Properties = null,
-                                      cbkHandler: CallbackHandler[T] = null,
-                                      cbkHandlerProps: Properties = null) extends Logging {
-  private val closed = new AtomicBoolean(false)
-  private val queue = new LinkedBlockingQueue[QueueItem[T]](config.queueSize)
-  // initialize the callback handlers
-  if(eventHandler != null)
-    eventHandler.init(eventHandlerProps)
-  if(cbkHandler != null)
-    cbkHandler.init(cbkHandlerProps)
-  private val asyncProducerID = AsyncProducer.Random.nextInt
-  private val sendThread = new ProducerSendThread("ProducerSendThread-" + asyncProducerID, queue,
-    serializer, producer,
-    if(eventHandler != null) eventHandler else new DefaultEventHandler[T](new ProducerConfig(config.props), cbkHandler),
-    cbkHandler, config.queueTime, config.batchSize, AsyncProducer.Shutdown)
-  sendThread.setDaemon(false)
-  Utils.swallow(logger.warn, Utils.registerMBean(
-    new AsyncProducerQueueSizeStats[T](queue), AsyncProducer.ProducerQueueSizeMBeanName + "-" + asyncProducerID))
-
-  def this(config: AsyncProducerConfig) {
-    this(config,
-      new SyncProducer(config),
-      Utils.getObject(config.serializerClass),
-      Utils.getObject(config.eventHandler),
-      config.eventHandlerProps,
-      Utils.getObject(config.cbkHandler),
-      config.cbkHandlerProps)
-  }
-
-  def start = sendThread.start
-
-  def send(topic: String, event: T) { send(topic, event, ProducerRequest.RandomPartition) }
-
-  def send(topic: String, event: T, partition:Int) {
-    AsyncProducerStats.recordEvent
-
-    if(closed.get)
-      throw new QueueClosedException("Attempt to add event to a closed queue.")
-
-    var data = new QueueItem(event, topic, partition)
-    if(cbkHandler != null)
-      data = cbkHandler.beforeEnqueue(data)
-
-    val added = config.enqueueTimeoutMs match {
-      case 0  =>
-        queue.offer(data)
-      case _  =>
-        try {
-          config.enqueueTimeoutMs < 0 match {
-          case true =>
-            queue.put(data)
-            true
-          case _ =>
-            queue.offer(data, config.enqueueTimeoutMs, TimeUnit.MILLISECONDS)
-          }
-        }
-        catch {
-          case e: InterruptedException =>
-            val msg = "%s interrupted during enqueue of event %s.".format(
-              getClass.getSimpleName, event.toString)
-            error(msg)
-            throw new AsyncProducerInterruptedException(msg)
-        }
-    }
-
-    if(cbkHandler != null)
-      cbkHandler.afterEnqueue(data, added)
-
-    if(!added) {
-      AsyncProducerStats.recordDroppedEvents
-      logger.error("Event queue is full of unsent messages, could not send event: " + event.toString)
-      throw new QueueFullException("Event queue is full of unsent messages, could not send event: " + event.toString)
-    }else {
-      if(logger.isTraceEnabled) {
-        logger.trace("Added event to send queue for topic: " + topic + ", partition: " + partition + ":" + event.toString)
-        logger.trace("Remaining queue size: " + queue.remainingCapacity)
-      }
-    }
-  }
-
-  def close = {
-    if(cbkHandler != null) {
-      cbkHandler.close
-      logger.info("Closed the callback handler")
-    }
-    closed.set(true)
-    queue.put(new QueueItem(AsyncProducer.Shutdown.asInstanceOf[T], null, -1))
-    if(logger.isDebugEnabled)
-      logger.debug("Added shutdown command to the queue")
-    sendThread.shutdown
-    sendThread.awaitShutdown
-    producer.close
-    logger.info("Closed AsyncProducer")
-  }
-
-  // for testing only
-  import org.apache.log4j.Level
-  def setLoggerLevel(level: Level) = logger.setLevel(level)
-}
-
-class QueueItem[T](data: T, topic: String, partition: Int) {
-  def getData: T = data
-  def getPartition: Int = partition
-  def getTopic:String = topic
-  override def toString = "topic: " + topic + ", partition: " + partition + ", data: " + data.toString
-}
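
A minimal, self-contained sketch of driving the AsyncProducer directly; the address, topic, and the kafka.serializer.StringEncoder setting are assumptions for illustration. The single-argument constructor pulls the SyncProducer and any handlers from the config.

    import java.util.Properties

    val props = new Properties()
    props.put("host", "broker1.example.com")
    props.put("port", "9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    val asyncProducer = new AsyncProducer[String](new AsyncProducerConfig(props))
    asyncProducer.start
    asyncProducer.send("clicks", "event-1")   // enqueued; the ProducerSendThread batches and ships it
    asyncProducer.close
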
diff --git a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerConfig.scala b/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerConfig.scala
deleted file mode 100644
index ca3e65e..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerConfig.scala
+++ /dev/null
@@ -1,61 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.producer.async
-
-import java.util.Properties
-import kafka.utils.Utils
-import kafka.producer.SyncProducerConfig
-
-class AsyncProducerConfig(override val props: Properties) extends SyncProducerConfig(props)
-        with AsyncProducerConfigShared {
-}
-
-trait AsyncProducerConfigShared {
-  val props: Properties
-
-  /* maximum time, in milliseconds, for buffering data on the producer queue */
-  val queueTime = Utils.getInt(props, "queue.time", 5000)
-
-  /** the maximum size of the blocking queue for buffering on the producer */
-  val queueSize = Utils.getInt(props, "queue.size", 10000)
-
-  /**
-   * Timeout for event enqueue:
-   * 0: events will be enqueued immediately or dropped if the queue is full
-   * -ve: enqueue will block indefinitely if the queue is full
-   * +ve: enqueue will block up to this many milliseconds if the queue is full
-   */
-  val enqueueTimeoutMs = Utils.getInt(props, "queue.enqueueTimeout.ms", 0)
-
-  /** the number of messages batched at the producer */
-  val batchSize = Utils.getInt(props, "batch.size", 200)
-
-  /** the serializer class for events */
-  val serializerClass = Utils.getString(props, "serializer.class", "kafka.serializer.DefaultEncoder")
-
-  /** the callback handler for one or multiple events */
-  val cbkHandler = Utils.getString(props, "callback.handler", null)
-
-  /** properties required to initialize the callback handler */
-  val cbkHandlerProps = Utils.getProps(props, "callback.handler.props", null)
-
-  /** the handler for events */
-  val eventHandler = Utils.getString(props, "event.handler", null)
-
-  /** properties required to initialize the event handler */
-  val eventHandlerProps = Utils.getProps(props, "event.handler.props", null)
-}
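
A minimal sketch of the async-side knobs read above; the broker address is illustrative, and host/port are needed only because AsyncProducerConfig extends SyncProducerConfig.

    import java.util.Properties

    val props = new Properties()
    props.put("host", "broker1.example.com")
    props.put("port", "9092")
    props.put("queue.time", "1000")               // flush buffered messages at least once a second...
    props.put("batch.size", "100")                // ...or as soon as 100 messages have accumulated
    props.put("queue.enqueueTimeout.ms", "-1")    // block instead of dropping when the queue is full
    val asyncConfig = new AsyncProducerConfig(props)
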
diff --git a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerInterruptedException.scala b/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerInterruptedException.scala
deleted file mode 100644
index 42944f4..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerInterruptedException.scala
+++ /dev/null
@@ -1,23 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer.async
-
-class AsyncProducerInterruptedException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
-
diff --git a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerStats.scala b/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerStats.scala
deleted file mode 100644
index 7c37256..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerStats.scala
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer.async
-
-import java.util.concurrent.atomic.AtomicInteger
-import java.util.concurrent.BlockingQueue
-import org.apache.log4j.Logger
-import kafka.utils.Utils
-
-class AsyncProducerStats extends AsyncProducerStatsMBean {
-  val droppedEvents = new AtomicInteger(0)
-  val numEvents = new AtomicInteger(0)
-
-  def getAsyncProducerEvents: Int = numEvents.get
-
-  def getAsyncProducerDroppedEvents: Int = droppedEvents.get
-
-  def recordDroppedEvents = droppedEvents.getAndAdd(1)
-
-  def recordEvent = numEvents.getAndAdd(1)
-}
-
-class AsyncProducerQueueSizeStats[T](private val queue: BlockingQueue[QueueItem[T]]) extends AsyncProducerQueueSizeStatsMBean {
-  def getAsyncProducerQueueSize: Int = queue.size
-}
-
-object AsyncProducerStats {
-  private val logger = Logger.getLogger(getClass())
-  private val stats = new AsyncProducerStats
-  Utils.registerMBean(stats, AsyncProducer.ProducerMBeanName)
-
-  def recordDroppedEvents = stats.recordDroppedEvents
-
-  def recordEvent = stats.recordEvent
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerStatsMBean.scala b/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerStatsMBean.scala
deleted file mode 100644
index 186f899..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/AsyncProducerStatsMBean.scala
+++ /dev/null
@@ -1,27 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer.async
-
-trait AsyncProducerStatsMBean {
-  def getAsyncProducerEvents: Int
-  def getAsyncProducerDroppedEvents: Int
-}
-
-trait AsyncProducerQueueSizeStatsMBean {
-  def getAsyncProducerQueueSize: Int
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/async/CallbackHandler.scala b/trunk/core/src/main/scala/kafka/producer/async/CallbackHandler.scala
deleted file mode 100644
index cf75b2d..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/CallbackHandler.scala
+++ /dev/null
@@ -1,75 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.producer.async
-
-import java.util.Properties
-
-/**
- * Callback handler APIs for use in the async producer. The purpose is to
- * give the user some callback handles to insert custom functionality at
- * various stages as the data flows through the pipeline of the async producer
- */
-trait CallbackHandler[T] {
-  /**
-   * Initializes the callback handler using a Properties object
-   * @param props properties used to initialize the callback handler
-   */
-  def init(props: Properties)
-
-  /**
-   * Callback to process the data before it enters the batching queue
-   * of the asynchronous producer
-   * @param data the data sent to the producer
-   * @return the processed data that enters the queue
-   */
-  def beforeEnqueue(data: QueueItem[T] = null.asInstanceOf[QueueItem[T]]): QueueItem[T]
-
-  /**
-   * Callback to process the data right after it enters the batching queue
-   * of the asynchronous producer
-   * @param data the data sent to the producer
-   * @param added flag that indicates if the data was successfully added to the queue
-   */
-  def afterEnqueue(data: QueueItem[T] = null.asInstanceOf[QueueItem[T]], added: Boolean)
-
-  /**
-   * Callback to process the data item right after it has been dequeued by the
-   * background sender thread of the asynchronous producer
-   * @param data the data item dequeued from the async producer queue
-   * @return the processed list of data items that gets added to the data handled by the event handler
-   */
-  def afterDequeuingExistingData(data: QueueItem[T] = null): scala.collection.mutable.Seq[QueueItem[T]]
-
-  /**
-   * Callback to process the batched data right before it is being sent by the
-   * handle API of the event handler
-   * @param data the batched data received by the event handler
-   * @return the processed batched data that gets sent by the handle() API of the event handler
-   */
-  def beforeSendingData(data: Seq[QueueItem[T]] = null): scala.collection.mutable.Seq[QueueItem[T]]
-
-  /**
-   * Callback to process the last batch of data right before the producer send thread is shutdown
-   * @return the last batch of data that is sent to the EventHandler
-  */
-  def lastBatchBeforeClose: scala.collection.mutable.Seq[QueueItem[T]]
-
-  /**
-   * Cleans up and shuts down the callback handler
-   */
-  def close
-}
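
Illustrative sketch (not part of the reverted sources): the CallbackHandler trait deleted above can be satisfied by a simple pass-through implementation. The package name below is made up for the example, and QueueItem is assumed from the same (deleted) kafka.producer.async package.

package kafka.producer.async.example

import java.util.Properties
import scala.collection.mutable.ListBuffer
import kafka.producer.async.{CallbackHandler, QueueItem}

// A pass-through handler: every callback returns its input unchanged.
class NoOpCallbackHandler[T] extends CallbackHandler[T] {
  override def init(props: Properties) { }

  override def beforeEnqueue(data: QueueItem[T]): QueueItem[T] = data

  override def afterEnqueue(data: QueueItem[T], added: Boolean) { }

  override def afterDequeuingExistingData(data: QueueItem[T]): scala.collection.mutable.Seq[QueueItem[T]] = {
    val buf = new ListBuffer[QueueItem[T]]
    if (data != null) buf += data        // wrap the single dequeued item
    buf
  }

  override def beforeSendingData(data: Seq[QueueItem[T]]): scala.collection.mutable.Seq[QueueItem[T]] = {
    val buf = new ListBuffer[QueueItem[T]]
    if (data != null) buf ++= data       // forward the batch untouched
    buf
  }

  override def lastBatchBeforeClose: scala.collection.mutable.Seq[QueueItem[T]] = new ListBuffer[QueueItem[T]]

  override def close { }
}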
diff --git a/trunk/core/src/main/scala/kafka/producer/async/DefaultEventHandler.scala b/trunk/core/src/main/scala/kafka/producer/async/DefaultEventHandler.scala
deleted file mode 100644
index 8d6664f..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/DefaultEventHandler.scala
+++ /dev/null
@@ -1,133 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer.async
-
-import collection.mutable.HashMap
-import collection.mutable.Map
-import kafka.api.ProducerRequest
-import kafka.serializer.Encoder
-import java.util.Properties
-import kafka.utils.Logging
-import kafka.producer.{ProducerConfig, SyncProducer}
-import kafka.message.{NoCompressionCodec, ByteBufferMessageSet}
-
-
-private[kafka] class DefaultEventHandler[T](val config: ProducerConfig,
-                                            val cbkHandler: CallbackHandler[T]) extends EventHandler[T] with Logging {
-
-  override def init(props: Properties) { }
-
-  override def handle(events: Seq[QueueItem[T]], syncProducer: SyncProducer, serializer: Encoder[T]) {
-    var processedEvents = events
-    if(cbkHandler != null)
-      processedEvents = cbkHandler.beforeSendingData(events)
-
-    if(logger.isTraceEnabled)
-      processedEvents.foreach(event => trace("Handling event for Topic: %s, Partition: %d"
-        .format(event.getTopic, event.getPartition)))
-
-    send(serialize(collate(processedEvents), serializer), syncProducer)
-  }
-
-  private def send(messagesPerTopic: Map[(String, Int), ByteBufferMessageSet], syncProducer: SyncProducer) {
-    if(messagesPerTopic.size > 0) {
-      val requests = messagesPerTopic.map(f => new ProducerRequest(f._1._1, f._1._2, f._2)).toArray
-
-      val maxAttempts = config.numRetries + 1
-      var attemptsRemaining = maxAttempts
-      var sent = false
-
-      while (attemptsRemaining > 0 && !sent) {
-        attemptsRemaining -= 1
-        try {
-          syncProducer.multiSend(requests)
-          trace("kafka producer sent messages for topics %s to broker %s:%d (on attempt %d)"
-                        .format(messagesPerTopic, syncProducer.config.host, syncProducer.config.port, maxAttempts - attemptsRemaining))
-          sent = true
-        }
-        catch {
-          case e => warn("Error sending messages, %d attempts remaining".format(attemptsRemaining), e)
-          if (attemptsRemaining == 0)
-            throw e
-        }
-      }
-    }
-  }
-
-  private def serialize(eventsPerTopic: Map[(String,Int), Seq[T]],
-                        serializer: Encoder[T]): Map[(String, Int), ByteBufferMessageSet] = {
-    val eventsPerTopicMap = eventsPerTopic.map(e => ((e._1._1, e._1._2) , e._2.map(l => serializer.toMessage(l))))
-    /** enforce the compressed.topics config here.
-     *  If the compression codec is anything other than NoCompressionCodec,
-     *    Enable compression only for specified topics if any
-     *    If the list of compressed topics is empty, then enable the specified compression codec for all topics
-     *  If the compression codec is NoCompressionCodec, compression is disabled for all topics
-     */
-
-    val messagesPerTopicPartition = eventsPerTopicMap.map { topicAndEvents =>
-      ((topicAndEvents._1._1, topicAndEvents._1._2),
-        config.compressionCodec match {
-          case NoCompressionCodec =>
-            trace("Sending %d messages with no compression to topic %s on partition %d"
-                .format(topicAndEvents._2.size, topicAndEvents._1._1, topicAndEvents._1._2))
-            new ByteBufferMessageSet(NoCompressionCodec, topicAndEvents._2: _*)
-          case _ =>
-            config.compressedTopics.size match {
-              case 0 =>
-                trace("Sending %d messages with compression codec %d to topic %s on partition %d"
-                    .format(topicAndEvents._2.size, config.compressionCodec.codec, topicAndEvents._1._1, topicAndEvents._1._2))
-                new ByteBufferMessageSet(config.compressionCodec, topicAndEvents._2: _*)
-              case _ =>
-                if(config.compressedTopics.contains(topicAndEvents._1._1)) {
-                  trace("Sending %d messages with compression codec %d to topic %s on partition %d"
-                      .format(topicAndEvents._2.size, config.compressionCodec.codec, topicAndEvents._1._1, topicAndEvents._1._2))
-                  new ByteBufferMessageSet(config.compressionCodec, topicAndEvents._2: _*)
-                }
-                else {
-                  trace("Sending %d messages to topic %s and partition %d with no compression as %s is not in compressed.topics - %s"
-                      .format(topicAndEvents._2.size, topicAndEvents._1._1, topicAndEvents._1._2, topicAndEvents._1._1,
-                      config.compressedTopics.toString))
-                  new ByteBufferMessageSet(NoCompressionCodec, topicAndEvents._2: _*)
-                }
-            }
-        })
-    }
-    messagesPerTopicPartition
-  }
-
-  private def collate(events: Seq[QueueItem[T]]): Map[(String,Int), Seq[T]] = {
-    val collatedEvents = new HashMap[(String, Int), Seq[T]]
-    val distinctTopics = events.map(e => e.getTopic).toSeq.distinct
-    val distinctPartitions = events.map(e => e.getPartition).distinct
-
-    var remainingEvents = events
-    distinctTopics foreach { topic =>
-      val topicEvents = remainingEvents partition (e => e.getTopic.equals(topic))
-      remainingEvents = topicEvents._2
-      distinctPartitions.foreach { p =>
-        val topicPartitionEvents = (topicEvents._1 partition (e => (e.getPartition == p)))._1
-        if(topicPartitionEvents.size > 0)
-          collatedEvents += ((topic, p) -> topicPartitionEvents.map(q => q.getData))
-      }
-    }
-    collatedEvents
-  }
-
-  override def close = {
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/async/EventHandler.scala b/trunk/core/src/main/scala/kafka/producer/async/EventHandler.scala
deleted file mode 100644
index e3c6b78..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/EventHandler.scala
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package kafka.producer.async
-
-import java.util.Properties
-import kafka.producer.SyncProducer
-import kafka.serializer.Encoder
-
-/**
- * Handler that dispatches the batched data from the queue of the
- * asynchronous producer.
- */
-trait EventHandler[T] {
-  /**
-   * Initializes the event handler using a Properties object
-   * @param props the properties used to initialize the event handler
-  */
-  def init(props: Properties) {}
-
-  /**
-   * Callback to dispatch the batched data and send it to a Kafka server
-   * @param events the data sent to the producer
-   * @param producer the low-level producer used to send the data
-  */
-  def handle(events: Seq[QueueItem[T]], producer: SyncProducer, encoder: Encoder[T])
-
-  /**
-   * Cleans up and shuts down the event handler
-  */
-  def close {}
-}
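
Illustrative sketch (not part of the reverted sources): a minimal EventHandler implementation that only logs each batch. A real handler would serialize every item with the encoder and dispatch the result through the SyncProducer, as the DefaultEventHandler above does; the example package name is hypothetical.

package kafka.producer.async.example

import kafka.producer.SyncProducer
import kafka.producer.async.{EventHandler, QueueItem}
import kafka.serializer.Encoder

// Logs each dequeued batch instead of sending it to a broker.
class LoggingEventHandler[T] extends EventHandler[T] {
  override def handle(events: Seq[QueueItem[T]], producer: SyncProducer, encoder: Encoder[T]) {
    events.foreach(e =>
      println("topic=%s partition=%d data=%s".format(e.getTopic, e.getPartition, e.getData)))
  }
}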
diff --git a/trunk/core/src/main/scala/kafka/producer/async/IllegalQueueStateException.scala b/trunk/core/src/main/scala/kafka/producer/async/IllegalQueueStateException.scala
deleted file mode 100644
index 9ecdf76..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/IllegalQueueStateException.scala
+++ /dev/null
@@ -1,25 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer.async
-
-/**
- * Indicates that the async producer queue is in an invalid state
- */
-class IllegalQueueStateException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/async/MissingConfigException.scala b/trunk/core/src/main/scala/kafka/producer/async/MissingConfigException.scala
deleted file mode 100644
index 304e0b2..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/MissingConfigException.scala
+++ /dev/null
@@ -1,23 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer.async
-
-/* Indicates any missing configuration parameter */
-class MissingConfigException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/async/ProducerSendThread.scala b/trunk/core/src/main/scala/kafka/producer/async/ProducerSendThread.scala
deleted file mode 100644
index 91c2fad..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/ProducerSendThread.scala
+++ /dev/null
@@ -1,129 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer.async
-
-import kafka.utils.{SystemTime, Logging}
-import java.util.concurrent.{TimeUnit, CountDownLatch, BlockingQueue}
-import collection.mutable.ListBuffer
-import kafka.serializer.Encoder
-import kafka.producer.SyncProducer
-
-private[async] class ProducerSendThread[T](val threadName: String,
-                                           val queue: BlockingQueue[QueueItem[T]],
-                                           val serializer: Encoder[T],
-                                           val underlyingProducer: SyncProducer,
-                                           val handler: EventHandler[T],
-                                           val cbkHandler: CallbackHandler[T],
-                                           val queueTime: Long,
-                                           val batchSize: Int,
-                                           val shutdownCommand: Any) extends Thread(threadName) with Logging {
-
-  private val shutdownLatch = new CountDownLatch(1)
-
-  override def run {
-
-    try {
-      val remainingEvents = processEvents
-      debug("Remaining events = " + remainingEvents.size)
-
-      // handle remaining events
-      if(remainingEvents.size > 0) {
-        debug("Dispatching last batch of %d events to the event handler".format(remainingEvents.size))
-        tryToHandle(remainingEvents)
-      }
-    }catch {
-      case e => error("Error in sending events: ", e)
-    }finally {
-      shutdownLatch.countDown
-    }
-  }
-
-  def awaitShutdown = shutdownLatch.await
-
-  def shutdown = {
-    handler.close
-    info("Shutdown thread complete")
-  }
-
-  private def processEvents(): Seq[QueueItem[T]] = {
-    var lastSend = SystemTime.milliseconds
-    var events = new ListBuffer[QueueItem[T]]
-    var full: Boolean = false
-
-    // drain the queue until you get a shutdown command
-    Stream.continually(queue.poll(scala.math.max(0, (lastSend + queueTime) - SystemTime.milliseconds), TimeUnit.MILLISECONDS))
-                      .takeWhile(item => if(item != null) item.getData != shutdownCommand else true).foreach {
-      currentQueueItem =>
-        val elapsed = (SystemTime.milliseconds - lastSend)
-        // check if the queue time is reached. This happens when the poll method above returns after a timeout and
-        // returns a null object
-        val expired = currentQueueItem == null
-        if(currentQueueItem != null)
-          trace("Dequeued item for topic %s and partition %d"
-              .format(currentQueueItem.getTopic, currentQueueItem.getPartition))
-
-        // handle the dequeued current item
-        if(cbkHandler != null)
-          events = events ++ cbkHandler.afterDequeuingExistingData(currentQueueItem)
-        else {
-          if (currentQueueItem != null)
-            events += currentQueueItem
-        }
-
-        // check if the batch size is reached
-        full = events.size >= batchSize
-
-        if(full || expired) {
-          if(expired) debug(elapsed + " ms elapsed. Queue time reached. Sending..")
-          if(full) debug("Batch full. Sending..")
-          // if either queue time has reached or batch size has reached, dispatch to event handler
-          tryToHandle(events)
-          lastSend = SystemTime.milliseconds
-          events = new ListBuffer[QueueItem[T]]
-        }
-    }
-    if(queue.size > 0)
-      throw new IllegalQueueStateException("Invalid queue state! After queue shutdown, %d remaining items in the queue"
-        .format(queue.size))
-    if(cbkHandler != null) {
-      info("Invoking the callback handler before handling the last batch of %d events".format(events.size))
-      val addedEvents = cbkHandler.lastBatchBeforeClose
-      logEvents("last batch before close", addedEvents)
-      events = events ++ addedEvents
-    }
-    events
-  }
-
-  def tryToHandle(events: Seq[QueueItem[T]]) {
-    try {
-      debug("Handling " + events.size + " events")
-      if(events.size > 0)
-        handler.handle(events, underlyingProducer, serializer)
-    }catch {
-      case e => error("Error in handling batch of " + events.size + " events", e)
-    }
-  }
-
-  private def logEvents(tag: String, events: Iterable[QueueItem[T]]) {
-    if(logger.isTraceEnabled) {
-      trace("events for " + tag + ":")
-      for (event <- events)
-        trace(event.getData.toString)
-    }
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/async/QueueClosedException.scala b/trunk/core/src/main/scala/kafka/producer/async/QueueClosedException.scala
deleted file mode 100644
index 7fe205f..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/QueueClosedException.scala
+++ /dev/null
@@ -1,23 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer.async
-
-/* Indicates that client is sending event to a closed queue */
-class QueueClosedException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
diff --git a/trunk/core/src/main/scala/kafka/producer/async/QueueFullException.scala b/trunk/core/src/main/scala/kafka/producer/async/QueueFullException.scala
deleted file mode 100644
index bb302b9..0000000
--- a/trunk/core/src/main/scala/kafka/producer/async/QueueFullException.scala
+++ /dev/null
@@ -1,23 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer.async
-
-/* Indicates the queue for sending messages is full of unsent messages */
-class QueueFullException(message: String) extends RuntimeException(message) {
-  def this() = this(null)
-}
diff --git a/trunk/core/src/main/scala/kafka/serializer/Decoder.scala b/trunk/core/src/main/scala/kafka/serializer/Decoder.scala
deleted file mode 100644
index 7d1c138..0000000
--- a/trunk/core/src/main/scala/kafka/serializer/Decoder.scala
+++ /dev/null
@@ -1,37 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.serializer
-
-import kafka.message.Message
-
-trait Decoder[T] {
-  def toEvent(message: Message):T
-}
-
-class DefaultDecoder extends Decoder[Message] {
-  def toEvent(message: Message):Message = message
-}
-
-class StringDecoder extends Decoder[String] {
-  def toEvent(message: Message):String = {
-    val buf = message.payload
-    val arr = new Array[Byte](buf.remaining)
-    buf.get(arr)
-    new String(arr)
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/serializer/Encoder.scala b/trunk/core/src/main/scala/kafka/serializer/Encoder.scala
deleted file mode 100644
index 222e51b..0000000
--- a/trunk/core/src/main/scala/kafka/serializer/Encoder.scala
+++ /dev/null
@@ -1,32 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.serializer
-
-import kafka.message.Message
-
-trait Encoder[T] {
-  def toMessage(event: T):Message
-}
-
-class DefaultEncoder extends Encoder[Message] {
-  override def toMessage(event: Message):Message = event
-}
-
-class StringEncoder extends Encoder[String] {
-  override def toMessage(event: String):Message = new Message(event.getBytes)
-}
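
Illustrative sketch (not part of the reverted sources): the Decoder.scala and Encoder.scala files deleted above pair up as a simple round trip between a String and a kafka.message.Message. The example object name and package are hypothetical.

package kafka.serializer.example

import kafka.serializer.{StringDecoder, StringEncoder}

object StringRoundTrip {
  def main(args: Array[String]) {
    val encoder = new StringEncoder
    val decoder = new StringDecoder
    // encode a String into a kafka.message.Message, then decode it back
    val message = encoder.toMessage("hello kafka")
    println(decoder.toEvent(message))   // prints "hello kafka"
  }
}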
diff --git a/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala b/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala
deleted file mode 100644
index db752dd..0000000
--- a/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala
+++ /dev/null
@@ -1,108 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.server
-
-import java.util.Properties
-import kafka.utils.{Utils, ZKConfig}
-import kafka.message.Message
-
-/**
- * Configuration settings for the kafka server
- */
-class KafkaConfig(props: Properties) extends ZKConfig(props) {
-  /* the port to listen and accept connections on */
-  val port: Int = Utils.getInt(props, "port", 6667)
-
-  /* hostname of broker. If not set, will pick up from the value returned from getLocalHost. If there are multiple interfaces getLocalHost may not be what you want. */
-  val hostName: String = Utils.getString(props, "hostname", null)
-
-  /* the broker id for this server */
-  val brokerId: Int = Utils.getInt(props, "brokerid")
-  
-  /* the SO_SNDBUF buffer of the socket server sockets */
-  val socketSendBuffer: Int = Utils.getInt(props, "socket.send.buffer", 100*1024)
-  
-  /* the SO_RCVBUF buffer of the socket server sockets */
-  val socketReceiveBuffer: Int = Utils.getInt(props, "socket.receive.buffer", 100*1024)
-  
-  /* the maximum number of bytes in a socket request */
-  val maxSocketRequestSize: Int = Utils.getIntInRange(props, "max.socket.request.bytes", 100*1024*1024, (1, Int.MaxValue))
-
-  /* the maximum size of message that the server can receive */
-  val maxMessageSize = Utils.getIntInRange(props, "max.message.size", 1000000, (0, Int.MaxValue))
-
-  /* the number of worker threads that the server uses for handling all client requests*/
-  val numThreads = Utils.getIntInRange(props, "num.threads", Runtime.getRuntime().availableProcessors, (1, Int.MaxValue))
-  
-  /* the interval in which to measure performance statistics */
-  val monitoringPeriodSecs = Utils.getIntInRange(props, "monitoring.period.secs", 600, (1, Int.MaxValue))
-  
-  /* the default number of log partitions per topic */
-  val numPartitions = Utils.getIntInRange(props, "num.partitions", 1, (1, Int.MaxValue))
-  
-  /* the directory in which the log data is kept */
-  val logDir = Utils.getString(props, "log.dir")
-  
-  /* the maximum size of a single log file */
-  val logFileSize = Utils.getIntInRange(props, "log.file.size", 1*1024*1024*1024, (Message.MinHeaderSize, Int.MaxValue))
-
-  /* the maximum size of a single log file for some specific topic */
-  val logFileSizeMap = Utils.getTopicFileSize(Utils.getString(props, "topic.log.file.size", ""))
-
-  /* the maximum time before a new log segment is rolled out */
-  val logRollHours = Utils.getIntInRange(props, "log.roll.hours", 24*7, (1, Int.MaxValue))
-
-  /* the number of hours before rolling out a new log segment for some specific topic */
-  val logRollHoursMap = Utils.getTopicRollHours(Utils.getString(props, "topic.log.roll.hours", ""))
-
-  /* the number of hours to keep a log file before deleting it */
-  val logRetentionHours = Utils.getIntInRange(props, "log.retention.hours", 24*7, (1, Int.MaxValue))
-
-  /* the number of hours to keep a log file before deleting it for some specific topic*/
-  val logRetentionHoursMap = Utils.getTopicRetentionHours(Utils.getString(props, "topic.log.retention.hours", ""))
-  
-  /* the maximum size of the log before deleting it */
-  val logRetentionSize = Utils.getLong(props, "log.retention.size", -1)
-
-  /* the maximum size of the log for some specific topic before deleting it */
-  val logRetentionSizeMap = Utils.getTopicRetentionSize(Utils.getString(props, "topic.log.retention.size", ""))
-
-  /* the frequency in minutes that the log cleaner checks whether any log is eligible for deletion */
-  val logCleanupIntervalMinutes = Utils.getIntInRange(props, "log.cleanup.interval.mins", 10, (1, Int.MaxValue))
-  
-  /* enable zookeeper registration in the server */
-  val enableZookeeper = Utils.getBoolean(props, "enable.zookeeper", true)
-
-  /* the number of messages accumulated on a log partition before messages are flushed to disk */
-  val flushInterval = Utils.getIntInRange(props, "log.flush.interval", 500, (1, Int.MaxValue))
-
-  /* the maximum time in ms that a message in selected topics is kept in memory before being flushed to disk, e.g., topic1:3000,topic2:6000 */
-  val flushIntervalMap = Utils.getTopicFlushIntervals(Utils.getString(props, "topic.flush.intervals.ms", ""))
-
-  /* the frequency in ms that the log flusher checks whether any log needs to be flushed to disk */
-  val flushSchedulerThreadRate = Utils.getInt(props, "log.default.flush.scheduler.interval.ms",  3000)
-
-  /* the maximum time in ms that a message in any topic is kept in memory before being flushed to disk */
-  val defaultFlushIntervalMs = Utils.getInt(props, "log.default.flush.interval.ms", flushSchedulerThreadRate)
-
-   /* the number of partitions for selected topics, e.g., topic1:8,topic2:16 */
-  val topicPartitionsMap = Utils.getTopicPartitions(Utils.getString(props, "topic.partition.count.map", ""))
-
-  /* the maximum length of a topic name */
-  val maxTopicNameLength = Utils.getIntInRange(props, "max.topic.name.length", 255, (1, Int.MaxValue))
-}
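
Illustrative sketch (not part of the reverted sources): constructing the KafkaConfig deleted above from a Properties object. The property names come from the file itself; zk.connect is assumed to be read by the ZKConfig base class, which is not shown in this diff, and the example package name is hypothetical.

package kafka.server.example

import java.util.Properties
import kafka.server.KafkaConfig

object ConfigExample {
  def main(args: Array[String]) {
    val props = new Properties
    props.put("brokerid", "0")                // required: no default in KafkaConfig
    props.put("log.dir", "/tmp/kafka-logs")   // required: no default in KafkaConfig
    props.put("port", "9092")                 // overrides the default of 6667
    props.put("zk.connect", "localhost:2181") // assumed to be consumed by the ZKConfig base class
    val config = new KafkaConfig(props)
    println("broker %d will listen on port %d".format(config.brokerId, config.port))
  }
}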
diff --git a/trunk/core/src/main/scala/kafka/server/KafkaRequestHandlers.scala b/trunk/core/src/main/scala/kafka/server/KafkaRequestHandlers.scala
deleted file mode 100644
index e537afb..0000000
--- a/trunk/core/src/main/scala/kafka/server/KafkaRequestHandlers.scala
+++ /dev/null
@@ -1,194 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.server
-
-import org.apache.log4j.Logger
-import kafka.log._
-import kafka.network._
-import kafka.message._
-import kafka.api._
-import kafka.common.{MessageSizeTooLargeException, ErrorMapping}
-import java.util.concurrent.atomic.AtomicLong
-import kafka.utils._
-
-/**
- * Logic to handle the various Kafka requests
- */
-private[kafka] class KafkaRequestHandlers(val logManager: LogManager) extends Logging {
-  
-  private val requestLogger = Logger.getLogger("kafka.request.logger")
-
-  def handlerFor(requestTypeId: Short, request: Receive): Handler.Handler = {
-    requestTypeId match {
-      case RequestKeys.Produce => handleProducerRequest _
-      case RequestKeys.Fetch => handleFetchRequest _
-      case RequestKeys.MultiFetch => handleMultiFetchRequest _
-      case RequestKeys.MultiProduce => handleMultiProducerRequest _
-      case RequestKeys.Offsets => handleOffsetRequest _
-      case _ => throw new IllegalStateException("No mapping found for handler id " + requestTypeId)
-    }
-  }
-  
-  def handleProducerRequest(receive: Receive): Option[Send] = {
-    val sTime = SystemTime.milliseconds
-    val request = ProducerRequest.readFrom(receive.buffer)
-
-    if(requestLogger.isTraceEnabled)
-      requestLogger.trace("Producer request " + request.toString)
-    handleProducerRequest(request, "ProduceRequest")
-    debug("kafka produce time " + (SystemTime.milliseconds - sTime) + " ms")
-    None
-  }
-
-  def handleMultiProducerRequest(receive: Receive): Option[Send] = {
-    val request = MultiProducerRequest.readFrom(receive.buffer)
-    if(requestLogger.isTraceEnabled)
-      requestLogger.trace("Multiproducer request " + request.toString)
-    request.produces.map(handleProducerRequest(_, "MultiProducerRequest"))
-    None
-  }
-
-  private def handleProducerRequest(request: ProducerRequest, requestHandlerName: String) = {
-    val partition = request.getTranslatedPartition(logManager.chooseRandomPartition)
-    try {
-      logManager.getOrCreateLog(request.topic, partition).append(request.messages)
-      trace(request.messages.sizeInBytes + " bytes written to logs.")
-      request.messages.foreach(m => trace("wrote message %s to disk".format(m.message.checksum)))
-      BrokerTopicStat.getBrokerTopicStat(request.topic).recordBytesIn(request.messages.sizeInBytes)
-      BrokerTopicStat.getBrokerAllTopicStat.recordBytesIn(request.messages.sizeInBytes)
-    }
-    catch {
-      case e: MessageSizeTooLargeException =>
-        warn(e.getMessage() + " on " + request.topic + ":" + partition)
-        BrokerTopicStat.getBrokerTopicStat(request.topic).recordFailedProduceRequest
-        BrokerTopicStat.getBrokerAllTopicStat.recordFailedProduceRequest
-      case t =>
-        error("Error processing " + requestHandlerName + " on " + request.topic + ":" + partition, t)
-        BrokerTopicStat.getBrokerTopicStat(request.topic).recordFailedProduceRequest
-        BrokerTopicStat.getBrokerAllTopicStat.recordFailedProduceRequest
-        throw t
-    }
-  }
-
-  def handleFetchRequest(request: Receive): Option[Send] = {
-    val fetchRequest = FetchRequest.readFrom(request.buffer)
-    if(requestLogger.isTraceEnabled)
-      requestLogger.trace("Fetch request " + fetchRequest.toString)
-    Some(readMessageSet(fetchRequest))
-  }
-  
-  def handleMultiFetchRequest(request: Receive): Option[Send] = {
-    val multiFetchRequest = MultiFetchRequest.readFrom(request.buffer)
-    if(requestLogger.isTraceEnabled)
-      requestLogger.trace("Multifetch request")
-    multiFetchRequest.fetches.foreach(req => requestLogger.trace(req.toString))
-    var responses = multiFetchRequest.fetches.map(fetch =>
-        readMessageSet(fetch)).toList
-    
-    Some(new MultiMessageSetSend(responses))
-  }
-
-  private def readMessageSet(fetchRequest: FetchRequest): MessageSetSend = {
-    var  response: MessageSetSend = null
-    try {
-      trace("Fetching log segment for topic, partition, offset, maxSize = " + fetchRequest)
-      val log = logManager.getLog(fetchRequest.topic, fetchRequest.partition)
-      if (log != null) {
-        response = new MessageSetSend(log.read(fetchRequest.offset, fetchRequest.maxSize))
-        BrokerTopicStat.getBrokerTopicStat(fetchRequest.topic).recordBytesOut(response.messages.sizeInBytes)
-        BrokerTopicStat.getBrokerAllTopicStat.recordBytesOut(response.messages.sizeInBytes)
-      }
-      else
-        response = new MessageSetSend()
-    }
-    catch {
-      case e =>
-        error("error when processing request " + fetchRequest, e)
-        BrokerTopicStat.getBrokerTopicStat(fetchRequest.topic).recordFailedFetchRequest
-        BrokerTopicStat.getBrokerAllTopicStat.recordFailedFetchRequest
-        response=new MessageSetSend(MessageSet.Empty, ErrorMapping.codeFor(e.getClass.asInstanceOf[Class[Throwable]]))
-    }
-    response
-  }
-
-  def handleOffsetRequest(request: Receive): Option[Send] = {
-    val offsetRequest = OffsetRequest.readFrom(request.buffer)
-    if(requestLogger.isTraceEnabled)
-      requestLogger.trace("Offset request " + offsetRequest.toString)
-    val offsets = logManager.getOffsets(offsetRequest)
-    val response = new OffsetArraySend(offsets)
-    Some(response)
-  }
-}
-
-trait BrokerTopicStatMBean {
-  def getMessagesIn: Long
-  def getBytesIn: Long
-  def getBytesOut: Long
-  def getFailedProduceRequest: Long
-  def getFailedFetchRequest: Long
-}
-
-@threadsafe
-class BrokerTopicStat extends BrokerTopicStatMBean {
-  private val numCumulatedMessagesIn = new AtomicLong(0)
-  private val numCumulatedBytesIn = new AtomicLong(0)
-  private val numCumulatedBytesOut = new AtomicLong(0)
-  private val numCumulatedFailedProduceRequests = new AtomicLong(0)
-  private val numCumulatedFailedFetchRequests = new AtomicLong(0)
-
-  def getMessagesIn: Long = numCumulatedMessagesIn.get
-
-  def recordMessagesIn(nMessages: Int) = numCumulatedMessagesIn.getAndAdd(nMessages)
-
-  def getBytesIn: Long = numCumulatedBytesIn.get
-
-  def recordBytesIn(nBytes: Long) = numCumulatedBytesIn.getAndAdd(nBytes)
-
-  def getBytesOut: Long = numCumulatedBytesOut.get
-
-  def recordBytesOut(nBytes: Long) = numCumulatedBytesOut.getAndAdd(nBytes)
-
-  def recordFailedProduceRequest = numCumulatedFailedProduceRequests.getAndIncrement
-
-  def getFailedProduceRequest = numCumulatedFailedProduceRequests.get()
-
-  def recordFailedFetchRequest = numCumulatedFailedFetchRequests.getAndIncrement
-
-  def getFailedFetchRequest = numCumulatedFailedFetchRequests.get()
-}
-
-object BrokerTopicStat extends Logging {
-  private val stats = new Pool[String, BrokerTopicStat]
-  private val allTopicStat = new BrokerTopicStat
-  Utils.registerMBean(allTopicStat, "kafka:type=kafka.BrokerAllTopicStat")
-
-  def getBrokerAllTopicStat(): BrokerTopicStat = allTopicStat
-
-  def getBrokerTopicStat(topic: String): BrokerTopicStat = {
-    var stat = stats.get(topic)
-    if (stat == null) {
-      stat = new BrokerTopicStat
-      if (stats.putIfNotExists(topic, stat) == null)
-        Utils.registerMBean(stat, "kafka:type=kafka.BrokerTopicStat." + topic)
-      else
-        stat = stats.get(topic)
-    }
-    return stat
-  }
-}
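
Illustrative sketch (not part of the reverted sources): recording and reading per-topic broker statistics with the BrokerTopicStat registry defined above. The example package and object names are hypothetical.

package kafka.server.example

import kafka.server.BrokerTopicStat

object TopicStatExample {
  def main(args: Array[String]) {
    // the first lookup for a topic creates the stat object and registers its MBean
    val stat = BrokerTopicStat.getBrokerTopicStat("my-topic")
    stat.recordMessagesIn(10)
    stat.recordBytesIn(4096L)
    println("bytes in so far: " + stat.getBytesIn)
  }
}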
diff --git a/trunk/core/src/main/scala/kafka/server/KafkaServer.scala b/trunk/core/src/main/scala/kafka/server/KafkaServer.scala
deleted file mode 100644
index 43f9577..0000000
--- a/trunk/core/src/main/scala/kafka/server/KafkaServer.scala
+++ /dev/null
@@ -1,116 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.server
-
-import kafka.log.LogManager
-import java.util.concurrent.CountDownLatch
-import java.util.concurrent.atomic.AtomicBoolean
-import kafka.utils.{Mx4jLoader, Utils, SystemTime, KafkaScheduler, Logging}
-import kafka.network.{SocketServerStats, SocketServer}
-import java.io.File
-
-/**
- * Represents the lifecycle of a single Kafka broker. Handles all functionality required
- * to start up and shutdown a single Kafka node.
- */
-class KafkaServer(val config: KafkaConfig) extends Logging {
-  val CLEAN_SHUTDOWN_FILE = ".kafka_cleanshutdown"
-  private var isShuttingDown = new AtomicBoolean(false)
-  private var shutdownLatch = new CountDownLatch(1)
-
-  private val statsMBeanName = "kafka:type=kafka.SocketServerStats"
-  
-  var socketServer: SocketServer = null
-  
-  val scheduler = new KafkaScheduler(1, "kafka-logcleaner-", false)
-  
-  private var logManager: LogManager = null
-
-  /**
-   * Start up API for bringing up a single instance of the Kafka server.
-   * Instantiates the LogManager, the SocketServer and the request handlers - KafkaRequestHandlers
-   */
-  def startup() {
-    info("Starting Kafka server...")
-    isShuttingDown = new AtomicBoolean(false)
-    shutdownLatch = new CountDownLatch(1)
-    var needRecovery = true
-    val cleanShutDownFile = new File(new File(config.logDir), CLEAN_SHUTDOWN_FILE)
-    if (cleanShutDownFile.exists) {
-      needRecovery = false
-      cleanShutDownFile.delete
-    }
-    logManager = new LogManager(config,
-                                scheduler,
-                                SystemTime,
-                                1000L * 60 * 60 * config.logRollHours,
-                                1000L * 60 * config.logCleanupIntervalMinutes,
-                                1000L * 60 * 60 * config.logRetentionHours,
-                                needRecovery)
-                                                    
-    val handlers = new KafkaRequestHandlers(logManager)
-    socketServer = new SocketServer(config.port,
-                                    config.numThreads,
-                                    config.monitoringPeriodSecs,
-                                    handlers.handlerFor,
-                                    config.socketSendBuffer,
-                                    config.socketReceiveBuffer,                                    
-                                    config.maxSocketRequestSize)
-    Utils.registerMBean(socketServer.stats, statsMBeanName)
-    socketServer.startup()
-    Mx4jLoader.maybeLoad
-    /**
-     *  Registers this broker in ZK. After this, consumers can connect to broker.
-     *  So this should happen after socket server start.
-     */
-    logManager.startup()
-    info("Kafka server started.")
-  }
-  
-  /**
-   * Shutdown API for shutting down a single instance of the Kafka server.
-   * Shuts down the LogManager, the SocketServer and the log cleaner scheduler thread
-   */
-  def shutdown() {
-    val canShutdown = isShuttingDown.compareAndSet(false, true);
-    if (canShutdown) {
-      info("Shutting down Kafka server")
-      scheduler.shutdown()
-      if (socketServer != null)
-        socketServer.shutdown()
-      Utils.unregisterMBean(statsMBeanName)
-      if (logManager != null)
-        logManager.close()
-
-      val cleanShutDownFile = new File(new File(config.logDir), CLEAN_SHUTDOWN_FILE)
-      cleanShutDownFile.createNewFile
-
-      shutdownLatch.countDown()
-      info("Kafka server shut down completed")
-    }
-  }
-  
-  /**
-   * After calling shutdown(), use this API to wait until the shutdown is complete
-   */
-  def awaitShutdown(): Unit = shutdownLatch.await()
-
-  def getLogManager(): LogManager = logManager
-
-  def getStats(): SocketServerStats = socketServer.stats
-}
diff --git a/trunk/core/src/main/scala/kafka/server/KafkaServerStartable.scala b/trunk/core/src/main/scala/kafka/server/KafkaServerStartable.scala
deleted file mode 100644
index 370e20d..0000000
--- a/trunk/core/src/main/scala/kafka/server/KafkaServerStartable.scala
+++ /dev/null
@@ -1,59 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.server
-
-import kafka.utils.Logging
-
-
-class KafkaServerStartable(val serverConfig: KafkaConfig) extends Logging {
-  private var server : KafkaServer = null
-
-  init
-
-  private def init() {
-    server = new KafkaServer(serverConfig)
-  }
-
-  def startup() {
-    try {
-      server.startup()
-    }
-    catch {
-      case e =>
-        fatal("Fatal error during KafkaServerStartable startup. Prepare to shutdown", e)
-        shutdown()
-    }
-  }
-
-  def shutdown() {
-    try {
-      server.shutdown()
-    }
-    catch {
-      case e =>
-        fatal("Fatal error during KafkaServerStartable shutdown. Prepare to halt", e)
-        Runtime.getRuntime.halt(1)
-    }
-  }
-
-  def awaitShutdown() {
-    server.awaitShutdown
-  }
-}
-
-
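Illustrative sketch (not part of the reverted sources): a hypothetical main that wires the deleted KafkaConfig and KafkaServerStartable together. The shutdown hook exists so the clean-shutdown marker described in KafkaServer.scala is written on exit; the package and object names are made up for the example.

package kafka.server.example

import java.io.FileInputStream
import java.util.Properties
import kafka.server.{KafkaConfig, KafkaServerStartable}

object BrokerMain {
  def main(args: Array[String]) {
    // load broker settings from a properties file given on the command line
    val props = new Properties
    val in = new FileInputStream(args(0))
    try { props.load(in) } finally { in.close() }

    val startable = new KafkaServerStartable(new KafkaConfig(props))

    // write the clean-shutdown marker on JVM exit so the next start skips log recovery
    Runtime.getRuntime.addShutdownHook(new Thread() {
      override def run() {
        startable.shutdown()
        startable.awaitShutdown()
      }
    })

    startable.startup()
  }
}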
diff --git a/trunk/core/src/main/scala/kafka/server/KafkaZooKeeper.scala b/trunk/core/src/main/scala/kafka/server/KafkaZooKeeper.scala
deleted file mode 100644
index 48aac3e..0000000
--- a/trunk/core/src/main/scala/kafka/server/KafkaZooKeeper.scala
+++ /dev/null
@@ -1,117 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.server
-
-import kafka.utils._
-import kafka.cluster.Broker
-import org.I0Itec.zkclient.{IZkStateListener, ZkClient}
-import org.I0Itec.zkclient.exception.ZkNodeExistsException
-import org.apache.zookeeper.Watcher.Event.KeeperState
-import kafka.log.LogManager
-import java.net.InetAddress
-
-/**
- * Handles the server's interaction with zookeeper. The server needs to register the following paths:
- *   /topics/[topic]/[node_id-partition_num]
- *   /brokers/[0...N] --> host:port
- * 
- */
-class KafkaZooKeeper(config: KafkaConfig, logManager: LogManager) extends Logging {
-  
-  val brokerIdPath = ZkUtils.BrokerIdsPath + "/" + config.brokerId
-  var zkClient: ZkClient = null
-  var topics: List[String] = Nil
-  val lock = new Object()
-  
-  def startup() {
-    /* start client */
-    info("connecting to ZK: " + config.zkConnect)
-    zkClient = new ZkClient(config.zkConnect, config.zkSessionTimeoutMs, config.zkConnectionTimeoutMs, ZKStringSerializer)
-    zkClient.subscribeStateChanges(new SessionExpireListener)
-  }
-
-  def registerBrokerInZk() {
-    info("Registering broker " + brokerIdPath)
-    val hostName = if (config.hostName == null) InetAddress.getLocalHost.getHostAddress else config.hostName
-    val creatorId = hostName + "-" + System.currentTimeMillis
-    val broker = new Broker(config.brokerId, creatorId, hostName, config.port)
-    try {
-      ZkUtils.createEphemeralPathExpectConflict(zkClient, brokerIdPath, broker.getZKString)
-    } catch {
-      case e: ZkNodeExistsException =>
-        throw new RuntimeException("A broker is already registered on the path " + brokerIdPath + ". This probably " + 
-                                   "indicates that you either have configured a brokerid that is already in use, or " + 
-                                   "else you have shutdown this broker and restarted it faster than the zookeeper " + 
-                                   "timeout so it appears to be re-registering.")
-    }
-    info("Registering broker " + brokerIdPath + " succeeded with " + broker)
-  }
-
-  def registerTopicInZk(topic: String) {
-    registerTopicInZkInternal(topic)
-    lock synchronized {
-      topics ::= topic
-    }
-  }
-
-  def registerTopicInZkInternal(topic: String) {
-    val brokerTopicPath = ZkUtils.BrokerTopicsPath + "/" + topic + "/" + config.brokerId
-    val numParts = logManager.getTopicPartitionsMap.getOrElse(topic, config.numPartitions)
-    info("Begin registering broker topic " + brokerTopicPath + " with " + numParts.toString + " partitions")
-    ZkUtils.createEphemeralPathExpectConflict(zkClient, brokerTopicPath, numParts.toString)
-    info("End registering broker topic " + brokerTopicPath)
-  }
-
-  /**
-   *  When we get a SessionExpired event, we lost all ephemeral nodes and zkclient has reestablished a
-   *  connection for us. We need to re-register this broker in the broker registry.
-   */
-  class SessionExpireListener() extends IZkStateListener {
-    @throws(classOf[Exception])
-    def handleStateChanged(state: KeeperState) {
-      // do nothing, since zkclient will do reconnect for us.
-    }
-
-    /**
-     * Called after the zookeeper session has expired and a new session has been created. You would have to re-create
-     * any ephemeral nodes here.
-     *
-     * @throws Exception
-     *             On any error.
-     */
-    @throws(classOf[Exception])
-    def handleNewSession() {
-      info("re-registering broker info in ZK for broker " + config.brokerId)
-      registerBrokerInZk()
-      lock synchronized {
-        info("re-registering broker topics in ZK for broker " + config.brokerId)
-        for (topic <- topics)
-          registerTopicInZkInternal(topic)
-      }
-      info("done re-registering broker")
-    }
-  }
-
-  def close() {
-    if (zkClient != null) {
-      info("Closing zookeeper client...")
-      zkClient.close()
-    }
-  } 
-  
-}
diff --git a/trunk/core/src/main/scala/kafka/server/MessageSetSend.scala b/trunk/core/src/main/scala/kafka/server/MessageSetSend.scala
deleted file mode 100644
index e300ad0..0000000
--- a/trunk/core/src/main/scala/kafka/server/MessageSetSend.scala
+++ /dev/null
@@ -1,71 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.server
-
-import java.nio._
-import java.nio.channels._
-import kafka.network._
-import kafka.message._
-import kafka.utils._
-import kafka.common.ErrorMapping
-
-/**
- * A zero-copy message response that writes the bytes needed directly from the file
- * wholly in kernel space
- */
-@nonthreadsafe
-private[server] class MessageSetSend(val messages: MessageSet, val errorCode: Int) extends Send {
-  
-  private var sent: Long = 0
-  private var size: Long = messages.sizeInBytes
-  private val header = ByteBuffer.allocate(6)
-  header.putInt(size.asInstanceOf[Int] + 2)
-  header.putShort(errorCode.asInstanceOf[Short])
-  header.rewind()
-  
-  var complete: Boolean = false
-
-  def this(messages: MessageSet) = this(messages, ErrorMapping.NoError)
-
-  def this() = this(MessageSet.Empty)
-
-  def writeTo(channel: GatheringByteChannel): Int = {
-    expectIncomplete()
-    var written = 0
-    if(header.hasRemaining)
-      written += channel.write(header)
-    if(!header.hasRemaining) {
-      val fileBytesSent = messages.writeTo(channel, sent, size - sent)
-      written += fileBytesSent.asInstanceOf[Int]
-      sent += fileBytesSent
-    }
-
-    if(logger.isTraceEnabled)
-      if (channel.isInstanceOf[SocketChannel]) {
-        val socketChannel = channel.asInstanceOf[SocketChannel]
-        logger.trace(sent + " bytes written to " + socketChannel.socket.getRemoteSocketAddress() + " expecting to send " + size + " bytes")
-      }
-
-    if(sent >= size)
-      complete = true
-    written
-  }
-  
-  def sendSize: Int = size.asInstanceOf[Int] + header.capacity
-  
-}
diff --git a/trunk/core/src/main/scala/kafka/server/MultiMessageSetSend.scala b/trunk/core/src/main/scala/kafka/server/MultiMessageSetSend.scala
deleted file mode 100644
index 8926f8b..0000000
--- a/trunk/core/src/main/scala/kafka/server/MultiMessageSetSend.scala
+++ /dev/null
@@ -1,36 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.server
-
-import kafka.network._
-import kafka.utils._
-
-/**
- * A set of message sets prefixed by size
- */
-@nonthreadsafe
-private[server] class MultiMessageSetSend(val sets: List[MessageSetSend]) extends MultiSend(new ByteBufferSend(6) :: sets) {
-  
-  val buffer = this.sends.head.asInstanceOf[ByteBufferSend].buffer
-  val allMessageSetSize: Int = sets.foldLeft(0)(_ + _.sendSize)
-  val expectedBytesToWrite: Int = 4 + 2 + allMessageSetSize
-  buffer.putInt(2 + allMessageSetSize)
-  buffer.putShort(0)
-  buffer.rewind()
-  
-}
diff --git a/trunk/core/src/main/scala/kafka/server/package.html b/trunk/core/src/main/scala/kafka/server/package.html
deleted file mode 100644
index ba4c706..0000000
--- a/trunk/core/src/main/scala/kafka/server/package.html
+++ /dev/null
@@ -1 +0,0 @@
-The kafka server.
\ No newline at end of file
diff --git a/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala b/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala
deleted file mode 100644
index d9fc023..0000000
--- a/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala
+++ /dev/null
@@ -1,161 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-
-import joptsimple._
-import org.I0Itec.zkclient.ZkClient
-import kafka.utils.{ZkUtils, ZKStringSerializer, Logging}
-import kafka.consumer.SimpleConsumer
-import collection.mutable.Map
-object ConsumerOffsetChecker extends Logging {
-
-  private val consumerMap: Map[String, Option[SimpleConsumer]] = Map()
-
-  private val BidPidPattern = """(\d+)-(\d+)""".r
-
-  private val BrokerIpPattern = """.*:(\d+\.\d+\.\d+\.\d+):(\d+$)""".r
-  // e.g., 127.0.0.1-1315436360737:127.0.0.1:9092
-
-  private def getConsumer(zkClient: ZkClient, bid: String): Option[SimpleConsumer] = {
-    val brokerInfo = ZkUtils.readDataMaybeNull(zkClient, "/brokers/ids/%s".format(bid))
-    val consumer = brokerInfo match {
-      case BrokerIpPattern(ip, port) =>
-        Some(new SimpleConsumer(ip, port.toInt, 10000, 100000))
-      case _ =>
-        error("Could not parse broker info %s".format(brokerInfo))
-        None
-    }
-    consumer
-  }
-
-  private def processPartition(zkClient: ZkClient,
-                               group: String, topic: String, bidPid: String) {
-    val offset = ZkUtils.readData(zkClient, "/consumers/%s/offsets/%s/%s".
-            format(group, topic, bidPid)).toLong
-    val owner = ZkUtils.readDataMaybeNull(zkClient, "/consumers/%s/owners/%s/%s".
-            format(group, topic, bidPid))
-    println("%s,%s,%s (Group,Topic,BrokerId-PartitionId)".format(group, topic, bidPid))
-    println("%20s%s".format("Owner = ", owner))
-    println("%20s%d".format("Consumer offset = ", offset))
-    println("%20s%,d (%,.2fG)".format("= ", offset, offset / math.pow(1024, 3)))
-
-    bidPid match {
-      case BidPidPattern(bid, pid) =>
-        val consumerOpt = consumerMap.getOrElseUpdate(
-          bid, getConsumer(zkClient, bid))
-        consumerOpt match {
-          case Some(consumer) =>
-            val logSize =
-              consumer.getOffsetsBefore(topic, pid.toInt, -1, 1).last.toLong
-            println("%20s%d".format("Log size = ", logSize))
-            println("%20s%,d (%,.2fG)".format("= ", logSize, logSize / math.pow(1024, 3)))
-
-            val lag = logSize - offset
-            println("%20s%d".format("Consumer lag = ", lag))
-            println("%20s%,d (%,.2fG)".format("= ", lag, lag / math.pow(1024, 3)))
-            println()
-          case None => // ignore
-        }
-      case _ =>
-        error("Could not parse broker/partition pair %s".format(bidPid))
-    }
-  }
-
-  private def processTopic(zkClient: ZkClient, group: String, topic: String) {
-    val bidsPids = ZkUtils.getChildrenParentMayNotExist(
-      zkClient, "/consumers/%s/offsets/%s".format(group, topic)).toList
-    bidsPids.sorted.foreach {
-      bidPid => processPartition(zkClient, group, topic, bidPid)
-    }
-  }
-
-  private def printBrokerInfo() {
-    println("BROKER INFO")
-    for ((bid, consumerOpt) <- consumerMap)
-      consumerOpt match {
-        case Some(consumer) =>
-          println("%s -> %s:%d".format(bid, consumer.host, consumer.port))
-        case None => // ignore
-      }
-  }
-
-  def main(args: Array[String]) {
-    val parser = new OptionParser()
-
-    val zkConnectOpt = parser.accepts("zkconnect", "ZooKeeper connect string.").
-            withRequiredArg().defaultsTo("localhost:2181").ofType(classOf[String]);
-    val topicsOpt = parser.accepts("topic",
-            "Comma-separated list of consumer topics (all topics if absent).").
-            withRequiredArg().ofType(classOf[String])
-    val groupOpt = parser.accepts("group", "Consumer group.").
-            withRequiredArg().ofType(classOf[String])
-    parser.accepts("help", "Print this message.")
-
-    val options = parser.parse(args : _*)
-
-    if (options.has("help")) {
-       parser.printHelpOn(System.out)
-       System.exit(0)
-    }
-
-    for (opt <- List(groupOpt))
-      if (!options.has(opt)) {
-        System.err.println("Missing required argument: %s".format(opt))
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-
-    val zkConnect = options.valueOf(zkConnectOpt)
-    val group = options.valueOf(groupOpt)
-    val topics = if (options.has(topicsOpt)) Some(options.valueOf(topicsOpt))
-      else None
-
-
-    var zkClient: ZkClient = null
-    try {
-      zkClient = new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer)
-
-      val topicList = topics match {
-        case Some(x) => x.split(",").view.toList
-        case None => ZkUtils.getChildren(
-          zkClient, "/consumers/%s/offsets".format(group)).toList
-      }
-
-      debug("zkConnect = %s; topics = %s; group = %s".format(
-        zkConnect, topicList.toString(), group))
-
-      topicList.sorted.foreach {
-        topic => processTopic(zkClient, group, topic)
-      }
-
-      printBrokerInfo()
-    }
-    finally {
-      for (consumerOpt <- consumerMap.values) {
-        consumerOpt match {
-          case Some(consumer) => consumer.close()
-          case None => // ignore
-        }
-      }
-      if (zkClient != null)
-        zkClient.close()
-    }
-  }
-}
-
diff --git a/trunk/core/src/main/scala/kafka/tools/ConsumerShell.scala b/trunk/core/src/main/scala/kafka/tools/ConsumerShell.scala
deleted file mode 100644
index 5eb5269..0000000
--- a/trunk/core/src/main/scala/kafka/tools/ConsumerShell.scala
+++ /dev/null
@@ -1,108 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import joptsimple._
-import kafka.utils.{Utils, Logging}
-import java.util.concurrent.CountDownLatch
-import kafka.consumer._
-import kafka.serializer.StringDecoder
-
-/**
- * Program to read using the rich consumer and dump the results to standard out
- */
-object ConsumerShell {
-  def main(args: Array[String]): Unit = {
-    
-    val parser = new OptionParser
-    val topicOpt = parser.accepts("topic", "REQUIRED: The topic to consume from.")
-                           .withRequiredArg
-                           .describedAs("topic")
-                           .ofType(classOf[String])
-    val consumerPropsOpt = parser.accepts("props", "REQUIRED: Properties file with the consumer properties.")
-                           .withRequiredArg
-                           .describedAs("properties")
-                           .ofType(classOf[String])
-    val partitionsOpt = parser.accepts("partitions", "Number of partitions to consume from.")
-                           .withRequiredArg
-                           .describedAs("count")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(1)
-    
-    val options = parser.parse(args : _*)
-    
-    for(arg <- List(topicOpt, consumerPropsOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"") 
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-    
-    val partitions = options.valueOf(partitionsOpt).intValue
-    val propsFile = options.valueOf(consumerPropsOpt)
-    val topic = options.valueOf(topicOpt)
-    
-    println("Starting consumer...")
-
-    val consumerConfig = new ConsumerConfig(Utils.loadProps(propsFile))
-    val consumerConnector: ConsumerConnector = Consumer.create(consumerConfig)
-    val topicMessageStreams = consumerConnector.createMessageStreams(Predef.Map(topic -> partitions), new StringDecoder)
-    var threadList = List[ZKConsumerThread]()
-    for ((topic, streamList) <- topicMessageStreams)
-      for (stream <- streamList)
-        threadList ::= new ZKConsumerThread(stream)
-
-    for (thread <- threadList)
-      thread.start
-
-    // attach shutdown handler to catch control-c
-    Runtime.getRuntime().addShutdownHook(new Thread() {
-      override def run() = {
-        consumerConnector.shutdown
-        threadList.foreach(_.shutdown)
-        println("consumer threads shutted down")        
-      }
-    })
-  }
-}
-
-class ZKConsumerThread(stream: KafkaStream[String]) extends Thread with Logging {
-  val shutdownLatch = new CountDownLatch(1)
-
-  override def run() {
-    println("Starting consumer thread..")
-    var count: Int = 0
-    try {
-      for (messageAndMetadata <- stream) {
-        println("consumed: " + messageAndMetadata.message)
-        count += 1
-      }
-    }catch {
-      case e:ConsumerTimeoutException => // this is ok
-      case oe: Exception => error("error in ZKConsumerThread", oe)
-    }
-    shutdownLatch.countDown
-    println("Received " + count + " messages")
-    println("thread shutdown !" )
-  }
-
-  def shutdown() {
-    shutdownLatch.await
-  }          
-}
diff --git a/trunk/core/src/main/scala/kafka/tools/DumpLogSegments.scala b/trunk/core/src/main/scala/kafka/tools/DumpLogSegments.scala
deleted file mode 100644
index 85ae0c5..0000000
--- a/trunk/core/src/main/scala/kafka/tools/DumpLogSegments.scala
+++ /dev/null
@@ -1,53 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import java.io._
-import kafka.message._
-import kafka.utils._
-
-object DumpLogSegments {
-
-  def main(args: Array[String]) {
-    var isNoPrint = false;
-    for(arg <- args)
-      if ("-noprint".compareToIgnoreCase(arg) == 0)
-        isNoPrint = true;
-
-    for(arg <- args) {
-      if (! ("-noprint".compareToIgnoreCase(arg) == 0) ) {
-        val file = new File(arg)
-        println("Dumping " + file)
-        val startOffset = file.getName().split("\\.")(0).toLong
-        var offset = 0L
-        println("Starting offset: " + startOffset)
-        val messageSet = new FileMessageSet(file, false)
-        for(messageAndOffset <- messageSet) {
-          val msg = messageAndOffset.message
-          println("offset: " + (startOffset + offset) + " isvalid: " + msg.isValid +
-                  " payloadsize: " + msg.payloadSize + " magic: " + msg.magic + " compresscodec: " + msg.compressionCodec)
-          if (!isNoPrint)
-            println("payload:\t" + Utils.toString(messageAndOffset.message.payload, "UTF-8"))
-          offset = messageAndOffset.offset
-        }
-        println("tail of the log is at offset: " + (startOffset + offset)) 
-      }
-    }
-  }
-  
-}
diff --git a/trunk/core/src/main/scala/kafka/tools/ExportZkOffsets.scala b/trunk/core/src/main/scala/kafka/tools/ExportZkOffsets.scala
deleted file mode 100644
index 725a4d3..0000000
--- a/trunk/core/src/main/scala/kafka/tools/ExportZkOffsets.scala
+++ /dev/null
@@ -1,123 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import java.io.FileWriter
-import joptsimple._
-import kafka.utils.{Logging, ZkUtils, ZKStringSerializer,ZKGroupTopicDirs}
-import org.I0Itec.zkclient.ZkClient
-
-
-/**
- *  A utility that retrieves the offsets of broker partitions from ZK and
- *  prints them to an output file in the following format:
- *  
- *  /consumers/group1/offsets/topic1/1-0:286894308
- *  /consumers/group1/offsets/topic1/2-0:284803985
- *  
- *  This utility expects 3 arguments:
- *  1. Zk host:port string
- *  2. group name (all groups implied if omitted)
- *  3. output filename
- *     
- *  To print debug message, add the following line to log4j.properties:
- *  log4j.logger.kafka.tools.ExportZkOffsets$=DEBUG
- *  (for eclipse debugging, copy log4j.properties to the binary directory in "core" such as core/bin)
- */
-object ExportZkOffsets extends Logging {
-
-  def main(args: Array[String]) {
-    val parser = new OptionParser
-
-    val zkConnectOpt = parser.accepts("zkconnect", "ZooKeeper connect string.")
-                            .withRequiredArg()
-                            .defaultsTo("localhost:2181")
-                            .ofType(classOf[String])
-    val groupOpt = parser.accepts("group", "Consumer group.")
-                            .withRequiredArg()
-                            .ofType(classOf[String])
-    val outFileOpt = parser.accepts("output-file", "Output file")
-                            .withRequiredArg()
-                            .ofType(classOf[String])
-    parser.accepts("help", "Print this message.")
-            
-    val options = parser.parse(args : _*)
-    
-    if (options.has("help")) {
-       parser.printHelpOn(System.out)
-       System.exit(0)
-    }
-    
-    for (opt <- List(zkConnectOpt, outFileOpt)) {
-      if (!options.has(opt)) {
-        System.err.println("Missing required argument: %s".format(opt))
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-    
-    val zkConnect  = options.valueOf(zkConnectOpt)
-    val groups     = options.valuesOf(groupOpt)
-    val outfile    = options.valueOf(outFileOpt)
-
-    var zkClient   : ZkClient    = null
-    val fileWriter : FileWriter  = new FileWriter(outfile)
-    
-    try {
-      zkClient = new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer)
-      
-      var consumerGroups: Seq[String] = null
-
-      if (groups.size == 0) {
-        consumerGroups = ZkUtils.getChildren(zkClient, ZkUtils.ConsumersPath).toList
-      }
-      else {
-        import scala.collection.JavaConversions._
-        consumerGroups = groups
-      }
-      
-      for (consumerGrp <- consumerGroups) {
-        val topicsList = getTopicsList(zkClient, consumerGrp)
-        
-        for (topic <- topicsList) {
-          val bidPidList = getBrokeridPartition(zkClient, consumerGrp, topic)
-          
-          for (bidPid <- bidPidList) {
-            val zkGrpTpDir = new ZKGroupTopicDirs(consumerGrp,topic)
-            val offsetPath = zkGrpTpDir.consumerOffsetDir + "/" + bidPid
-            val offsetVal  = ZkUtils.readDataMaybeNull(zkClient, offsetPath)
-            fileWriter.write(offsetPath + ":" + offsetVal + "\n")
-            debug(offsetPath + " => " + offsetVal)
-          }
-        }
-      }      
-    }
-    finally {      
-      fileWriter.flush()
-      fileWriter.close()
-    }
-  }
-
-  private def getBrokeridPartition(zkClient: ZkClient, consumerGroup: String, topic: String): List[String] = {
-    return ZkUtils.getChildrenParentMayNotExist(zkClient, "/consumers/%s/offsets/%s".format(consumerGroup, topic)).toList
-  }
-  
-  private def getTopicsList(zkClient: ZkClient, consumerGroup: String): List[String] = {
-    return ZkUtils.getChildren(zkClient, "/consumers/%s/offsets".format(consumerGroup)).toList
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/tools/GetOffsetShell.scala b/trunk/core/src/main/scala/kafka/tools/GetOffsetShell.scala
deleted file mode 100644
index 034b734..0000000
--- a/trunk/core/src/main/scala/kafka/tools/GetOffsetShell.scala
+++ /dev/null
@@ -1,73 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package kafka.tools
-
-import kafka.consumer._
-import joptsimple._
-import java.net.URI
-
-object GetOffsetShell {
-
-  def main(args: Array[String]): Unit = {
-    val parser = new OptionParser
-    val urlOpt = parser.accepts("server", "REQUIRED: The hostname of the server to connect to.")
-                           .withRequiredArg
-                           .describedAs("kafka://hostname:port")
-                           .ofType(classOf[String])
-    val topicOpt = parser.accepts("topic", "REQUIRED: The topic to get offset from.")
-                           .withRequiredArg
-                           .describedAs("topic")
-                           .ofType(classOf[String])
-    val partitionOpt = parser.accepts("partition", "partition id")
-                           .withRequiredArg
-                           .describedAs("partition id")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(0)
-    val timeOpt = parser.accepts("time", "fetch offsets before this timestamp")
-                           .withRequiredArg
-                           .describedAs("timestamp/-1(latest)/-2(earliest)")
-                           .ofType(classOf[java.lang.Long])
-    val nOffsetsOpt = parser.accepts("offsets", "number of offsets returned")
-                           .withRequiredArg
-                           .describedAs("count")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(1)
-
-    val options = parser.parse(args : _*)
-
-    for(arg <- List(urlOpt, topicOpt, timeOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"")
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-
-    val url = new URI(options.valueOf(urlOpt))
-    val topic = options.valueOf(topicOpt)
-    val partition = options.valueOf(partitionOpt).intValue
-    var time = options.valueOf(timeOpt).longValue
-    val nOffsets = options.valueOf(nOffsetsOpt).intValue
-    val consumer = new SimpleConsumer(url.getHost, url.getPort, 10000, 100000)
-    val offsets = consumer.getOffsetsBefore(topic, partition, time, nOffsets)
-    println("get " + offsets.length + " results")
-    for (offset <- offsets)
-      println(offset)
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/tools/ImportZkOffsets.scala b/trunk/core/src/main/scala/kafka/tools/ImportZkOffsets.scala
deleted file mode 100644
index 63519e1..0000000
--- a/trunk/core/src/main/scala/kafka/tools/ImportZkOffsets.scala
+++ /dev/null
@@ -1,112 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import java.io.BufferedReader
-import java.io.FileReader
-import joptsimple._
-import kafka.utils.{Logging, ZkUtils,ZKStringSerializer}
-import org.I0Itec.zkclient.ZkClient
-
-
-/**
- *  A utility that updates the offset of broker partitions in ZK.
- *  
- *  This utility expects 2 input files as arguments:
- *  1. consumer properties file
- *  2. a file containing partition offset data such as:
- *     (This output data file can be obtained by running kafka.tools.ExportZkOffsets)
- *  
- *     /consumers/group1/offsets/topic1/3-0:285038193
- *     /consumers/group1/offsets/topic1/1-0:286894308
- *     
- *  To print debug message, add the following line to log4j.properties:
- *  log4j.logger.kafka.tools.ImportZkOffsets$=DEBUG
- *  (for eclipse debugging, copy log4j.properties to the binary directory in "core" such as core/bin)
- */
-object ImportZkOffsets extends Logging {
-
-  def main(args: Array[String]) {
-    val parser = new OptionParser
-    
-    val zkConnectOpt = parser.accepts("zkconnect", "ZooKeeper connect string.")
-                            .withRequiredArg()
-                            .defaultsTo("localhost:2181")
-                            .ofType(classOf[String])
-    val inFileOpt = parser.accepts("input-file", "Input file")
-                            .withRequiredArg()
-                            .ofType(classOf[String])
-    parser.accepts("help", "Print this message.")
-            
-    val options = parser.parse(args : _*)
-    
-    if (options.has("help")) {
-       parser.printHelpOn(System.out)
-       System.exit(0)
-    }
-    
-    for (opt <- List(inFileOpt)) {
-      if (!options.has(opt)) {
-        System.err.println("Missing required argument: %s".format(opt))
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-    
-    val zkConnect           = options.valueOf(zkConnectOpt)
-    val partitionOffsetFile = options.valueOf(inFileOpt)
-
-    val zkClient = new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer)
-    val partitionOffsets: Map[String,String] = getPartitionOffsetsFromFile(partitionOffsetFile)
-
-    updateZkOffsets(zkClient, partitionOffsets)
-  }
-
-  private def getPartitionOffsetsFromFile(filename: String):Map[String,String] = {
-    val fr = new FileReader(filename)
-    val br = new BufferedReader(fr)
-    var partOffsetsMap: Map[String,String] = Map()
-    
-    var s: String = br.readLine()
-    while ( s != null && s.length() >= 1) {
-      val tokens = s.split(":")
-      
-      partOffsetsMap += tokens(0) -> tokens(1)
-      debug("adding node path [" + s + "]")
-      
-      s = br.readLine()
-    }
-    
-    return partOffsetsMap
-  }
-  
-  private def updateZkOffsets(zkClient: ZkClient, partitionOffsets: Map[String,String]): Unit = {
-    val cluster = ZkUtils.getCluster(zkClient)
-    var partitions: List[String] = Nil
-
-    for ((partition, offset) <- partitionOffsets) {
-      debug("updating [" + partition + "] with offset [" + offset + "]")
-      
-      try {
-        ZkUtils.updatePersistentPath(zkClient, partition, offset.toString)
-      } catch {
-        case e => e.printStackTrace()
-      }
-    }
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/tools/JmxTool.scala b/trunk/core/src/main/scala/kafka/tools/JmxTool.scala
deleted file mode 100644
index dbfa1d8..0000000
--- a/trunk/core/src/main/scala/kafka/tools/JmxTool.scala
+++ /dev/null
@@ -1,110 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package kafka.tools
-
-import java.util.Date
-import java.text.SimpleDateFormat
-import javax.management._
-import javax.management.remote._
-import joptsimple.OptionParser
-import scala.collection.JavaConversions._
-import scala.collection.mutable
-import scala.math._
-
-
-object JmxTool {
-
-  def main(args: Array[String]) {
-    // Parse command line
-    val parser = new OptionParser
-    val objectNameOpt = 
-      parser.accepts("object-name", "A JMX object name to use as a query. This can contain wild cards, and this option " +
-                                    "can be given multiple times to specify more than one query. If no objects are specified " +   
-                                    "all objects will be queried.")
-      .withRequiredArg
-      .describedAs("name")
-      .ofType(classOf[String])
-    val reportingIntervalOpt = parser.accepts("reporting-interval", "Interval in MS with which to poll jmx stats.")
-      .withRequiredArg
-      .describedAs("ms")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(5000)
-    val helpOpt = parser.accepts("help", "Print usage information.")
-    val dateFormatOpt = parser.accepts("date-format", "The date format to use for formatting the time field. " + 
-                                                      "See java.text.SimpleDateFormat for options.")
-      .withRequiredArg
-      .describedAs("format")
-      .ofType(classOf[String])
-      .defaultsTo("yyyy-MM-dd HH:mm:ss.SSS")
-    val jmxServiceUrlOpt = 
-      parser.accepts("jmx-url", "The url to connect to to poll JMX data. See Oracle javadoc for JMXServiceURL for details.")
-      .withRequiredArg
-      .describedAs("service-url")
-      .ofType(classOf[String])
-      .defaultsTo("service:jmx:rmi:///jndi/rmi://:9999/jmxrmi")
-
-    val options = parser.parse(args : _*)
-
-    if(options.has(helpOpt)) {
-      parser.printHelpOn(System.out)
-      System.exit(0)
-    }
-
-    val url = new JMXServiceURL(options.valueOf(jmxServiceUrlOpt))
-    val interval = options.valueOf(reportingIntervalOpt).intValue
-    val dateFormat = new SimpleDateFormat(options.valueOf(dateFormatOpt))
-    val jmxc = JMXConnectorFactory.connect(url, null)
-    val mbsc = jmxc.getMBeanServerConnection()
-
-    val queries: Iterable[ObjectName] = 
-      if(options.has(objectNameOpt))
-        options.valuesOf(objectNameOpt).map(new ObjectName(_))
-      else
-        List(null)
-    val names = queries.map((name: ObjectName) => asSet(mbsc.queryNames(name, null))).flatten
-    val attributes: Iterable[(ObjectName, Array[String])] = 
-      names.map((name: ObjectName) => (name, mbsc.getMBeanInfo(name).getAttributes().map(_.getName)))
-
-    // print csv header
-    val keys = List("time") ++ queryAttributes(mbsc, names).keys.toArray.sorted
-    println(keys.map("\"" + _ + "\"").mkString(", "))
-
-    while(true) {
-      val start = System.currentTimeMillis
-      val attributes = queryAttributes(mbsc, names)
-      attributes("time") = dateFormat.format(new Date)
-      println(keys.map(attributes(_)).mkString(", "))
-      val sleep = max(0, interval - (System.currentTimeMillis - start))
-      Thread.sleep(sleep)
-    }
-  }
-
-  def queryAttributes(mbsc: MBeanServerConnection, names: Iterable[ObjectName]) = {
-    var attributes = new mutable.HashMap[String, Any]()
-	for(name <- names) {
-	  val mbean = mbsc.getMBeanInfo(name)
-      for(attrObj <- mbsc.getAttributes(name, mbean.getAttributes.map(_.getName))) {
-        val attr = attrObj.asInstanceOf[Attribute]
-        attributes(name + ":" + attr.getName) = attr.getValue
-      }
-    }
-    attributes
-  }
-
-}
\ No newline at end of file
diff --git a/trunk/core/src/main/scala/kafka/tools/MirrorMaker.scala b/trunk/core/src/main/scala/kafka/tools/MirrorMaker.scala
deleted file mode 100644
index 3438f2c..0000000
--- a/trunk/core/src/main/scala/kafka/tools/MirrorMaker.scala
+++ /dev/null
@@ -1,171 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import kafka.message.Message
-import joptsimple.OptionParser
-import kafka.utils.{Utils, Logging}
-import kafka.producer.{ProducerData, ProducerConfig, Producer}
-import scala.collection.JavaConversions._
-import java.util.concurrent.CountDownLatch
-import kafka.consumer._
-
-
-object MirrorMaker extends Logging {
-
-  def main(args: Array[String]) {
-    
-    info ("Starting mirror maker")
-    val parser = new OptionParser
-
-    val consumerConfigOpt = parser.accepts("consumer.config",
-      "Consumer config to consume from a source cluster. " +
-      "You may specify multiple of these.")
-      .withRequiredArg()
-      .describedAs("config file")
-      .ofType(classOf[String])
-
-    val producerConfigOpt = parser.accepts("producer.config",
-      "Embedded producer config.")
-      .withRequiredArg()
-      .describedAs("config file")
-      .ofType(classOf[String])
-
-    val numProducersOpt = parser.accepts("num.producers",
-      "Number of producer instances")
-      .withRequiredArg()
-      .describedAs("Number of producers")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(1)
-    
-    val numStreamsOpt = parser.accepts("num.streams",
-      "Number of consumption streams.")
-      .withRequiredArg()
-      .describedAs("Number of threads")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(1)
-    
-    val whitelistOpt = parser.accepts("whitelist",
-      "Whitelist of topics to mirror.")
-      .withRequiredArg()
-      .describedAs("Java regex (String)")
-      .ofType(classOf[String])
-
-    val blacklistOpt = parser.accepts("blacklist",
-            "Blacklist of topics to mirror.")
-            .withRequiredArg()
-            .describedAs("Java regex (String)")
-            .ofType(classOf[String])
-
-    val helpOpt = parser.accepts("help", "Print this message.")
-
-    val options = parser.parse(args : _*)
-
-    if (options.has(helpOpt)) {
-      parser.printHelpOn(System.out)
-      System.exit(0)
-    }
-
-    Utils.checkRequiredArgs(
-      parser, options, consumerConfigOpt, producerConfigOpt)
-    if (List(whitelistOpt, blacklistOpt).count(options.has) != 1) {
-      println("Exactly one of whitelist or blacklist is required.")
-      System.exit(1)
-    }
-
-    val numStreams = options.valueOf(numStreamsOpt)
-
-    val producers = (1 to options.valueOf(numProducersOpt).intValue()).map(_ => {
-      val config = new ProducerConfig(
-        Utils.loadProps(options.valueOf(producerConfigOpt)))
-      new Producer[Null, Message](config)
-    })
-
-    val threads = {
-      val connectors = options.valuesOf(consumerConfigOpt).toList
-              .map(cfg => new ConsumerConfig(Utils.loadProps(cfg.toString)))
-              .map(new ZookeeperConsumerConnector(_))
-
-      Runtime.getRuntime.addShutdownHook(new Thread() {
-        override def run() {
-          connectors.foreach(_.shutdown())
-          producers.foreach(_.close())
-        }
-      })
-
-      val filterSpec = if (options.has(whitelistOpt))
-        new Whitelist(options.valueOf(whitelistOpt))
-      else
-        new Blacklist(options.valueOf(blacklistOpt))
-
-      val streams =
-        connectors.map(_.createMessageStreamsByFilter(filterSpec, numStreams.intValue()))
-
-      streams.flatten.zipWithIndex.map(streamAndIndex => {
-        new MirrorMakerThread(streamAndIndex._1, producers, streamAndIndex._2)
-      })
-    }
-
-    threads.foreach(_.start())
-
-    threads.foreach(_.awaitShutdown())
-  }
-
-  class MirrorMakerThread(stream: KafkaStream[Message],
-                          producers: Seq[Producer[Null, Message]],
-                          threadId: Int)
-          extends Thread with Logging {
-
-    private val shutdownLatch = new CountDownLatch(1)
-    private val threadName = "mirrormaker-" + threadId
-    private val producerSelector = Utils.circularIterator(producers)
-
-    this.setName(threadName)
-
-    override def run() {
-      try {
-        for (msgAndMetadata <- stream) {
-          val producer = producerSelector.next()
-          val pd = new ProducerData[Null, Message](
-            msgAndMetadata.topic, msgAndMetadata.message)
-          producer.send(pd)
-        }
-      }
-      catch {
-        case e =>
-          fatal("%s stream unexpectedly exited.", e)
-      }
-      finally {
-        shutdownLatch.countDown()
-        info("Stopped thread %s.".format(threadName))
-      }
-    }
-
-    def awaitShutdown() {
-      try {
-        shutdownLatch.await()
-      }
-      catch {
-        case e: InterruptedException => fatal(
-          "Shutdown of thread %s interrupted. This might leak data!"
-                  .format(threadName))
-      }
-    }
-  }
-}
-
diff --git a/trunk/core/src/main/scala/kafka/tools/ProducerShell.scala b/trunk/core/src/main/scala/kafka/tools/ProducerShell.scala
deleted file mode 100644
index 1d31ffa..0000000
--- a/trunk/core/src/main/scala/kafka/tools/ProducerShell.scala
+++ /dev/null
@@ -1,71 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import java.io._
-import joptsimple._
-import kafka.producer._
-import kafka.utils.Utils
-
-/**
- * Interactive shell for producing messages from the command line
- */
-object ProducerShell {
-
-  def main(args: Array[String]) {
-    
-    val parser = new OptionParser
-    val producerPropsOpt = parser.accepts("props", "REQUIRED: Properties file with the producer properties.")
-                           .withRequiredArg
-                           .describedAs("properties")
-                           .ofType(classOf[String])
-    val topicOpt = parser.accepts("topic", "REQUIRED: The topic to produce to.")
-                           .withRequiredArg
-                           .describedAs("topic")
-                           .ofType(classOf[String])
-    
-    val options = parser.parse(args : _*)
-    
-    for(arg <- List(producerPropsOpt, topicOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"") 
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-    
-    val propsFile = options.valueOf(producerPropsOpt)
-    val producerConfig = new ProducerConfig(Utils.loadProps(propsFile))
-    val topic = options.valueOf(topicOpt)
-    val producer = new Producer[String, String](producerConfig)
-
-    val input = new BufferedReader(new InputStreamReader(System.in))
-    var done = false
-    while(!done) {
-      val line = input.readLine()
-      if(line == null) {
-        done = true
-      } else {
-        val message = line.trim
-        producer.send(new ProducerData[String, String](topic, message))
-        println("Sent: %s (%d bytes)".format(line, message.getBytes.length))
-      }
-    }
-    producer.close()
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/tools/ReplayLogProducer.scala b/trunk/core/src/main/scala/kafka/tools/ReplayLogProducer.scala
deleted file mode 100644
index 1300cf6..0000000
--- a/trunk/core/src/main/scala/kafka/tools/ReplayLogProducer.scala
+++ /dev/null
@@ -1,209 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import joptsimple.OptionParser
-import java.util.concurrent.{Executors, CountDownLatch}
-import java.util.Properties
-import kafka.producer.async.DefaultEventHandler
-import kafka.serializer.DefaultEncoder
-import kafka.producer.{ProducerData, DefaultPartitioner, ProducerConfig, Producer}
-import kafka.consumer._
-import kafka.utils.{ZKStringSerializer, Logging}
-import kafka.api.OffsetRequest
-import org.I0Itec.zkclient._
-import kafka.message.{CompressionCodec, Message}
-
-object ReplayLogProducer extends Logging {
-
-  private val GROUPID: String = "replay-log-producer"
-
-  def main(args: Array[String]) {
-    val config = new Config(args)
-
-    val executor = Executors.newFixedThreadPool(config.numThreads)
-    val allDone = new CountDownLatch(config.numThreads)
-
-    // if there is no group specified then avoid polluting zookeeper with persistent group data; this is a hack
-    tryCleanupZookeeper(config.zkConnect, GROUPID)
-    Thread.sleep(500)
-
-    // consumer properties
-    val consumerProps = new Properties
-    consumerProps.put("groupid", GROUPID)
-    consumerProps.put("zk.connect", config.zkConnect)
-    consumerProps.put("consumer.timeout.ms", "10000")
-    consumerProps.put("autooffset.reset", OffsetRequest.SmallestTimeString)
-    consumerProps.put("fetch.size", (1024*1024).toString)
-    consumerProps.put("socket.buffer.size", (2 * 1024 * 1024).toString)
-    val consumerConfig = new ConsumerConfig(consumerProps)
-    val consumerConnector: ConsumerConnector = Consumer.create(consumerConfig)
-    val topicMessageStreams = consumerConnector.createMessageStreams(Predef.Map(config.inputTopic -> config.numThreads))
-    var threadList = List[ZKConsumerThread]()
-    for ((topic, streamList) <- topicMessageStreams)
-      for (stream <- streamList)
-        threadList ::= new ZKConsumerThread(config, stream)
-
-    for (thread <- threadList)
-      thread.start
-
-    threadList.foreach(_.shutdown)
-    consumerConnector.shutdown
-  }
-
-  class Config(args: Array[String]) {
-    val parser = new OptionParser
-    val zkConnectOpt = parser.accepts("zookeeper", "REQUIRED: The connection string for the zookeeper connection in the form host:port. " +
-      "Multiple URLS can be given to allow fail-over.")
-      .withRequiredArg
-      .describedAs("zookeeper url")
-      .ofType(classOf[String])
-      .defaultsTo("127.0.0.1:2181")
-    val brokerInfoOpt = parser.accepts("brokerinfo", "REQUIRED: broker info (either from zookeeper or a list).")
-      .withRequiredArg
-      .describedAs("broker.list=brokerid:hostname:port or zk.connect=host:port")
-      .ofType(classOf[String])
-    val inputTopicOpt = parser.accepts("inputtopic", "REQUIRED: The topic to consume from.")
-      .withRequiredArg
-      .describedAs("input-topic")
-      .ofType(classOf[String])
-    val outputTopicOpt = parser.accepts("outputtopic", "REQUIRED: The topic to produce to")
-      .withRequiredArg
-      .describedAs("output-topic")
-      .ofType(classOf[String])
-    val numMessagesOpt = parser.accepts("messages", "The number of messages to send.")
-      .withRequiredArg
-      .describedAs("count")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(-1)
-    val asyncOpt = parser.accepts("async", "If set, messages are sent asynchronously.")
-    val delayMSBtwBatchOpt = parser.accepts("delay-btw-batch-ms", "Delay in ms between 2 batch sends.")
-      .withRequiredArg
-      .describedAs("ms")
-      .ofType(classOf[java.lang.Long])
-      .defaultsTo(0)
-    val batchSizeOpt = parser.accepts("batch-size", "Number of messages to send in a single batch.")
-      .withRequiredArg
-      .describedAs("batch size")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(200)
-    val numThreadsOpt = parser.accepts("threads", "Number of sending threads.")
-      .withRequiredArg
-      .describedAs("threads")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(1)
-    val reportingIntervalOpt = parser.accepts("reporting-interval", "Interval at which to print progress info.")
-      .withRequiredArg
-      .describedAs("size")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(5000)
-    val compressionCodecOption = parser.accepts("compression-codec", "If set, messages are sent compressed")
-      .withRequiredArg
-      .describedAs("compression codec ")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(0)
-
-    val options = parser.parse(args : _*)
-    for(arg <- List(brokerInfoOpt, inputTopicOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"")
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-    val zkConnect = options.valueOf(zkConnectOpt)
-    val brokerInfo = options.valueOf(brokerInfoOpt)
-    val numMessages = options.valueOf(numMessagesOpt).intValue
-    val isAsync = options.has(asyncOpt)
-    val delayedMSBtwSend = options.valueOf(delayMSBtwBatchOpt).longValue
-    var batchSize = options.valueOf(batchSizeOpt).intValue
-    val numThreads = options.valueOf(numThreadsOpt).intValue
-    val inputTopic = options.valueOf(inputTopicOpt)
-    val outputTopic = options.valueOf(outputTopicOpt)
-    val reportingInterval = options.valueOf(reportingIntervalOpt).intValue
-    val compressionCodec = CompressionCodec.getCompressionCodec(options.valueOf(compressionCodecOption).intValue)
-  }
-
-  def tryCleanupZookeeper(zkUrl: String, groupId: String) {
-    try {
-      val dir = "/consumers/" + groupId
-      info("Cleaning up temporary zookeeper data under " + dir + ".")
-      val zk = new ZkClient(zkUrl, 30*1000, 30*1000, ZKStringSerializer)
-      zk.deleteRecursive(dir)
-      zk.close()
-    } catch {
-      case _ => // swallow
-    }
-  }
-
-  class ZKConsumerThread(config: Config, stream: KafkaStream[Message]) extends Thread with Logging {
-    val shutdownLatch = new CountDownLatch(1)
-    val props = new Properties()
-    val brokerInfoList = config.brokerInfo.split("=")
-    if (brokerInfoList(0) == "zk.connect")
-      props.put("zk.connect", brokerInfoList(1))
-    else
-      props.put("broker.list", brokerInfoList(1))
-    props.put("reconnect.interval", Integer.MAX_VALUE.toString)
-    props.put("buffer.size", (64*1024).toString)
-    props.put("compression.codec", config.compressionCodec.codec.toString)
-    props.put("batch.size", config.batchSize.toString)
-    props.put("queue.enqueueTimeout.ms", "-1")
-    
-    if(config.isAsync)
-      props.put("producer.type", "async")
-
-    val producerConfig = new ProducerConfig(props)
-    val producer = new Producer[Message, Message](producerConfig, new DefaultEncoder,
-                                                  new DefaultEventHandler[Message](producerConfig, null),
-                                                  null, new DefaultPartitioner[Message])
-
-    override def run() {
-      info("Starting consumer thread..")
-      var messageCount: Int = 0
-      try {
-        val iter =
-          if(config.numMessages >= 0)
-            stream.slice(0, config.numMessages)
-          else
-            stream
-        for (messageAndMetadata <- iter) {
-          try {
-            producer.send(new ProducerData[Message, Message](config.outputTopic, messageAndMetadata.message))
-            if (config.delayedMSBtwSend > 0 && (messageCount + 1) % config.batchSize == 0)
-              Thread.sleep(config.delayedMSBtwSend)
-            messageCount += 1
-          }catch {
-            case ie: Exception => error("Skipping this message", ie)
-          }
-        }
-      }catch {
-        case e: ConsumerTimeoutException => error("consumer thread timing out", e)
-      }
-      info("Sent " + messageCount + " messages")
-      shutdownLatch.countDown
-      info("thread finished execution !" )
-    }
-
-    def shutdown() {
-      shutdownLatch.await
-      producer.close
-    }
-
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/tools/SimpleConsumerShell.scala b/trunk/core/src/main/scala/kafka/tools/SimpleConsumerShell.scala
deleted file mode 100644
index 74218ec..0000000
--- a/trunk/core/src/main/scala/kafka/tools/SimpleConsumerShell.scala
+++ /dev/null
@@ -1,113 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import java.net.URI
-import joptsimple._
-import kafka.api.FetchRequest
-import kafka.utils._
-import kafka.consumer._
-
-/**
- * Command line program to dump out messages to standard out using the simple consumer
- */
-object SimpleConsumerShell extends Logging {
-
-  def main(args: Array[String]): Unit = {
-
-    val parser = new OptionParser
-    val urlOpt = parser.accepts("server", "REQUIRED: The hostname of the server to connect to.")
-                           .withRequiredArg
-                           .describedAs("kafka://hostname:port")
-                           .ofType(classOf[String])
-    val topicOpt = parser.accepts("topic", "REQUIRED: The topic to consume from.")
-                           .withRequiredArg
-                           .describedAs("topic")
-                           .ofType(classOf[String])
-    val partitionOpt = parser.accepts("partition", "The partition to consume from.")
-                           .withRequiredArg
-                           .describedAs("partition")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(0)
-    val offsetOpt = parser.accepts("offset", "The offset to start consuming from.")
-                           .withRequiredArg
-                           .describedAs("offset")
-                           .ofType(classOf[java.lang.Long])
-                           .defaultsTo(0L)
-    val fetchsizeOpt = parser.accepts("fetchsize", "The fetch size of each request.")
-                           .withRequiredArg
-                           .describedAs("fetchsize")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(1000000)
-    val printOffsetOpt = parser.accepts("print-offsets", "Print the offsets returned by the iterator")
-                           .withOptionalArg
-                           .describedAs("print offsets")
-                           .ofType(classOf[java.lang.Boolean])
-                           .defaultsTo(false)
-    val printMessageOpt = parser.accepts("print-messages", "Print the messages returned by the iterator")
-                           .withOptionalArg
-                           .describedAs("print messages")
-                           .ofType(classOf[java.lang.Boolean])
-                           .defaultsTo(false)
-
-    val options = parser.parse(args : _*)
-    
-    for(arg <- List(urlOpt, topicOpt)) {
-      if(!options.has(arg)) {
-        error("Missing required argument \"" + arg + "\"")
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-
-    val url = new URI(options.valueOf(urlOpt))
-    val topic = options.valueOf(topicOpt)
-    val partition = options.valueOf(partitionOpt).intValue
-    val startingOffset = options.valueOf(offsetOpt).longValue
-    val fetchsize = options.valueOf(fetchsizeOpt).intValue
-    val printOffsets = if(options.has(printOffsetOpt)) true else false
-    val printMessages = if(options.has(printMessageOpt)) true else false
-
-    info("Starting consumer...")
-    val consumer = new SimpleConsumer(url.getHost, url.getPort, 10000, 64*1024)
-    val thread = Utils.newThread("kafka-consumer", new Runnable() {
-      def run() {
-        var offset = startingOffset
-        while(true) {
-          val fetchRequest = new FetchRequest(topic, partition, offset, fetchsize)
-          val messageSets = consumer.multifetch(fetchRequest)
-          for (messages <- messageSets) {
-            debug("multi fetched " + messages.sizeInBytes + " bytes from offset " + offset)
-            var consumed = 0
-            for(messageAndOffset <- messages) {
-              if(printMessages)
-                info("consumed: " + Utils.toString(messageAndOffset.message.payload, "UTF-8"))
-              offset = messageAndOffset.offset
-              if(printOffsets)
-                info("next offset = " + offset)
-              consumed += 1
-            }
-          }
-        }
-      }
-    }, false);
-    thread.start()
-    thread.join()
-  }
-
-}
diff --git a/trunk/core/src/main/scala/kafka/tools/VerifyConsumerRebalance.scala b/trunk/core/src/main/scala/kafka/tools/VerifyConsumerRebalance.scala
deleted file mode 100644
index 2ad1a20..0000000
--- a/trunk/core/src/main/scala/kafka/tools/VerifyConsumerRebalance.scala
+++ /dev/null
@@ -1,135 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.tools
-
-import joptsimple.OptionParser
-import org.I0Itec.zkclient.ZkClient
-import kafka.utils.{Logging, ZKGroupTopicDirs, ZkUtils, ZKStringSerializer}
-
-object VerifyConsumerRebalance extends Logging {
-  def main(args: Array[String]) {
-    val parser = new OptionParser()
-
-    val zkConnectOpt = parser.accepts("zk.connect", "ZooKeeper connect string.").
-      withRequiredArg().defaultsTo("localhost:2181").ofType(classOf[String]);
-    val groupOpt = parser.accepts("group", "Consumer group.").
-      withRequiredArg().ofType(classOf[String])
-    parser.accepts("help", "Print this message.")
-
-    val options = parser.parse(args : _*)
-
-    if (options.has("help")) {
-      parser.printHelpOn(System.out)
-      System.exit(0)
-    }
-
-    for (opt <- List(groupOpt))
-      if (!options.has(opt)) {
-        System.err.println("Missing required argument: %s".format(opt))
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-
-    val zkConnect = options.valueOf(zkConnectOpt)
-    val group = options.valueOf(groupOpt)
-
-    var zkClient: ZkClient = null
-    try {
-      zkClient = new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer)
-
-      debug("zkConnect = %s; group = %s".format(zkConnect, group))
-
-      // check if the rebalancing operation succeeded.
-      try {
-        if(validateRebalancingOperation(zkClient, group))
-          println("Rebalance operation successful !")
-        else
-          println("Rebalance operation failed !")
-      } catch {
-        case e2: Throwable => error("Error while verifying current rebalancing operation", e2)
-      }
-    }
-    finally {
-      if (zkClient != null)
-        zkClient.close()
-    }
-  }
-
-  private def validateRebalancingOperation(zkClient: ZkClient, group: String): Boolean = {
-    info("Verifying rebalancing operation for consumer group " + group)
-    var rebalanceSucceeded: Boolean = true
-    /**
-     * A successful rebalancing operation would select an owner for each available partition.
-     * This means that for each partition registered under /brokers/topics/[topic]/[broker-id], an owner exists
-     * under /consumers/[consumer_group]/owners/[topic]/[broker_id-partition_id]
-     */
-    val consumersPerTopicMap = ZkUtils.getConsumersPerTopic(zkClient, group)
-    val partitionsPerTopicMap = ZkUtils.getPartitionsForTopics(zkClient, consumersPerTopicMap.keys.iterator)
-
-    partitionsPerTopicMap.foreach { partitionsForTopic =>
-      val topic = partitionsForTopic._1
-      val partitions = partitionsForTopic._2
-      val topicDirs = new ZKGroupTopicDirs(group, topic)
-      info("Alive partitions for topic %s are %s ".format(topic, partitions.toString))
-      info("Alive consumers for topic %s => %s ".format(topic, consumersPerTopicMap.get(topic)))
-      val partitionsWithOwners = ZkUtils.getChildrenParentMayNotExist(zkClient, topicDirs.consumerOwnerDir)
-      if(partitionsWithOwners.size == 0) {
-        error("No owners for any partitions for topic " + topic)
-        rebalanceSucceeded = false
-      }
-      debug("Children of " + topicDirs.consumerOwnerDir + " = " + partitionsWithOwners.toString)
-      val consumerIdsForTopic = consumersPerTopicMap.get(topic)
-
-      // for each available partition for topic, check if an owner exists
-      partitions.foreach { partition =>
-      // check if there is a node for [partition]
-        if(!partitionsWithOwners.exists(p => p.equals(partition))) {
-          error("No owner for topic %s partition %s".format(topic, partition))
-          rebalanceSucceeded = false
-        }
-        // try reading the partition owner path to see if a valid consumer id exists there
-        val partitionOwnerPath = topicDirs.consumerOwnerDir + "/" + partition
-        val partitionOwner = ZkUtils.readDataMaybeNull(zkClient, partitionOwnerPath)
-        if(partitionOwner == null) {
-          error("No owner for topic %s partition %s".format(topic, partition))
-          rebalanceSucceeded = false
-        }
-        else {
-          // check if the owner is a valid consumer id
-          consumerIdsForTopic match {
-            case Some(consumerIds) =>
-              if(!consumerIds.contains(partitionOwner)) {
-                error("Owner %s for topic %s partition %s is not a valid member of consumer " +
-                  "group %s".format(partitionOwner, topic, partition, group))
-                rebalanceSucceeded = false
-              }
-              else
-                info("Owner of topic %s partition %s is %s".format(topic, partition, partitionOwner))
-            case None => {
-              error("No consumer ids registered for topic " + topic)
-              rebalanceSucceeded = false
-            }
-          }
-        }
-      }
-
-    }
-
-    rebalanceSucceeded
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/utils/Annotations.scala b/trunk/core/src/main/scala/kafka/utils/Annotations.scala
deleted file mode 100644
index 28269eb..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Annotations.scala
+++ /dev/null
@@ -1,36 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-/* Some helpful annotations */
-
-/**
- * Indicates that the annotated class is meant to be threadsafe. For an abstract class it is a part of the interface that an implementation
- * must respect.
- */
-class threadsafe extends StaticAnnotation
-
-/**
- * Indicates that the annotated class is not threadsafe
- */
-class nonthreadsafe extends StaticAnnotation
-
-/**
- * Indicates that the annotated class is immutable
- */
-class immutable extends StaticAnnotation
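
A minimal usage sketch (not part of the original tree) showing how these annotations are applied; the Counter and BrokerEndpoint names below are illustrative only:

    import kafka.utils.{threadsafe, immutable}

    // The annotations carry no runtime behavior; they only document the intended
    // concurrency contract of the class they mark.
    @threadsafe
    class Counter {
      private val count = new java.util.concurrent.atomic.AtomicLong(0)
      def increment(): Long = count.incrementAndGet()
    }

    @immutable
    case class BrokerEndpoint(id: Int, host: String, port: Int)
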
diff --git a/trunk/core/src/main/scala/kafka/utils/DelayedItem.scala b/trunk/core/src/main/scala/kafka/utils/DelayedItem.scala
deleted file mode 100644
index 3d31d58..0000000
--- a/trunk/core/src/main/scala/kafka/utils/DelayedItem.scala
+++ /dev/null
@@ -1,49 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import java.util.concurrent._
-import scala.math._
-
-class DelayedItem[T](val item: T, delay: Long, unit: TimeUnit) extends Delayed {
-  
-  val delayMs = unit.toMillis(delay)
-  val createdMs = System.currentTimeMillis
-  
-  def this(item: T, delayMs: Long) = 
-    this(item, delayMs, TimeUnit.MILLISECONDS)
-
-  /**
-   * The remaining delay time
-   */
-  def getDelay(unit: TimeUnit): Long = {
-    val elapsedMs = (System.currentTimeMillis - createdMs)
-    unit.convert(max(delayMs - elapsedMs, 0), TimeUnit.MILLISECONDS)
-  }
-    
-  def compareTo(d: Delayed): Int = {
-    val delayed = d.asInstanceOf[DelayedItem[T]]
-    val myEnd = createdMs + delayMs
-    val yourEnd = delayed.createdMs + delayed.delayMs
-    
-    if(myEnd < yourEnd) -1
-    else if(myEnd > yourEnd) 1
-    else 0 
-  }
-  
-}
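
For reference, a small sketch (assumed usage, not from the original tree) of DelayedItem together with java.util.concurrent.DelayQueue, which is how Delayed implementations are typically consumed:

    import java.util.concurrent.{DelayQueue, TimeUnit}
    import kafka.utils.DelayedItem

    object DelayedItemExample {
      def main(args: Array[String]) {
        // A DelayQueue only releases items whose delay has expired, ordered by compareTo,
        // so the soonest-expiring item is handed back first.
        val queue = new DelayQueue[DelayedItem[String]]()
        queue.put(new DelayedItem("expires after 200 ms", 200, TimeUnit.MILLISECONDS))
        queue.put(new DelayedItem("expires after 50 ms", 50, TimeUnit.MILLISECONDS))
        println(queue.take().item)  // blocks ~50 ms
        println(queue.take().item)  // blocks until ~200 ms have passed
      }
    }
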
diff --git a/trunk/core/src/main/scala/kafka/utils/IteratorTemplate.scala b/trunk/core/src/main/scala/kafka/utils/IteratorTemplate.scala
deleted file mode 100644
index 3f110c3..0000000
--- a/trunk/core/src/main/scala/kafka/utils/IteratorTemplate.scala
+++ /dev/null
@@ -1,80 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-class State
-object DONE extends State
-object READY extends State
-object NOT_READY extends State
-object FAILED extends State
-
-/**
- * Transliteration of the iterator template in Google Collections. To implement an iterator
- * override makeNext and call allDone() when there are no more items
- */
-abstract class IteratorTemplate[T] extends Iterator[T] with java.util.Iterator[T] {
-  
-  private var state: State = NOT_READY
-  private var nextItem: Option[T] = None
-
-  def next(): T = {
-    if(!hasNext())
-      throw new NoSuchElementException()
-    state = NOT_READY
-    nextItem match {
-      case Some(item) => item
-      case None => throw new IllegalStateException("Expected item but none found.")
-    }
-  }
-  
-  def hasNext(): Boolean = {
-    if(state == FAILED)
-      throw new IllegalStateException("Iterator is in failed state")
-    state match {
-      case DONE => false
-      case READY => true
-      case _ => maybeComputeNext()
-    }
-  }
-  
-  protected def makeNext(): T
-  
-  def maybeComputeNext(): Boolean = {
-    state = FAILED
-    nextItem = Some(makeNext())
-    if(state == DONE) {
-      false
-    } else {
-      state = READY
-      true
-    }
-  }
-  
-  protected def allDone(): T = {
-    state = DONE
-    null.asInstanceOf[T]
-  }
-  
-  def remove = 
-    throw new UnsupportedOperationException("Removal not supported")
-
-  protected def resetState() {
-    state = NOT_READY
-  }
-}
-
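
A brief sketch of how a subclass is expected to use makeNext()/allDone(); the LineIterator class is a hypothetical example, not part of Kafka:

    import java.io.BufferedReader
    import kafka.utils.IteratorTemplate

    // Iterates over the lines of a reader; allDone() flags the DONE state and
    // returns a null sentinel that is never handed to callers.
    class LineIterator(reader: BufferedReader) extends IteratorTemplate[String] {
      protected def makeNext(): String = {
        val line = reader.readLine()
        if (line == null) allDone() else line
      }
    }

With that in place, new LineIterator(reader).foreach(println) works through the standard Scala Iterator API.
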
diff --git a/trunk/core/src/main/scala/kafka/utils/KafkaScheduler.scala b/trunk/core/src/main/scala/kafka/utils/KafkaScheduler.scala
deleted file mode 100644
index 07b9994..0000000
--- a/trunk/core/src/main/scala/kafka/utils/KafkaScheduler.scala
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import java.util.concurrent._
-import java.util.concurrent.atomic._
-
-/**
- * A scheduler for running jobs in the background
- * TODO: ScheduledThreadPoolExecutor notoriously swallows exceptions
- */
-class KafkaScheduler(val numThreads: Int, val baseThreadName: String, isDaemon: Boolean) extends Logging {
-  private val threadId = new AtomicLong(0)
-  private val executor = new ScheduledThreadPoolExecutor(numThreads, new ThreadFactory() {
-    def newThread(runnable: Runnable): Thread = {
-      val t = new Thread(runnable, baseThreadName + threadId.getAndIncrement)
-      t.setDaemon(isDaemon)
-      t
-    }
-  })
-  executor.setContinueExistingPeriodicTasksAfterShutdownPolicy(false)
-  executor.setExecuteExistingDelayedTasksAfterShutdownPolicy(false)
-
-  def scheduleWithRate(fun: () => Unit, delayMs: Long, periodMs: Long) =
-    executor.scheduleAtFixedRate(Utils.loggedRunnable(fun), delayMs, periodMs, TimeUnit.MILLISECONDS)
-
-  def shutdownNow() {
-    executor.shutdownNow()
-    info("force shutdown scheduler " + baseThreadName)
-  }
-
-  def shutdown() {
-    executor.shutdown()
-    info("shutdown scheduler " + baseThreadName)
-  }
-}
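
A minimal sketch, assuming the class above, of scheduling a periodic background task:

    import kafka.utils.KafkaScheduler

    object SchedulerExample {
      def main(args: Array[String]) {
        // One non-daemon worker thread; threads are named "example-0", "example-1", ...
        val scheduler = new KafkaScheduler(1, "example-", false)
        // Run the function immediately and then once per second.
        scheduler.scheduleWithRate(() => println("flushing logs"), 0, 1000)
        Thread.sleep(3000)
        scheduler.shutdown()
      }
    }
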
diff --git a/trunk/core/src/main/scala/kafka/utils/Log4jController.scala b/trunk/core/src/main/scala/kafka/utils/Log4jController.scala
deleted file mode 100644
index a015c81..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Log4jController.scala
+++ /dev/null
@@ -1,98 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-
-import org.apache.log4j.{Logger, Level, LogManager}
-import java.util
-
-
-object Log4jController {
-
-  private val controller = new Log4jController
-
-  Utils.registerMBean(controller, "kafka:type=kafka.Log4jController")
-
-}
-
-
-/**
- * An MBean that allows the user to dynamically alter log4j levels at runtime.
- * The companion object contains the singleton instance of this class and
- * registers the MBean. The [[kafka.utils.Logging]] trait forces initialization
- * of the companion object.
- */
-private class Log4jController extends Log4jControllerMBean {
-
-  def getLoggers = {
-    val lst = new util.ArrayList[String]()
-    lst.add("root=" + existingLogger("root").getLevel.toString)
-    val loggers = LogManager.getCurrentLoggers
-    while (loggers.hasMoreElements) {
-      val logger = loggers.nextElement().asInstanceOf[Logger]
-      if (logger != null) {
-        val level = logger.getLevel
-        lst.add("%s=%s".format(logger.getName, if (level != null) level.toString else "null"))
-      }
-    }
-    lst
-  }
-
-
-  private def newLogger(loggerName: String) =
-    if (loggerName == "root")
-      LogManager.getRootLogger
-    else LogManager.getLogger(loggerName)
-
-
-  private def existingLogger(loggerName: String) =
-    if (loggerName == "root")
-      LogManager.getRootLogger
-    else LogManager.exists(loggerName)
-
-
-  def getLogLevel(loggerName: String) = {
-    val log = existingLogger(loggerName)
-    if (log != null) {
-      val level = log.getLevel
-      if (level != null)
-        log.getLevel.toString
-      else "Null log level."
-    }
-    else "No such logger."
-  }
-
-
-  def setLogLevel(loggerName: String, level: String) = {
-    val log = newLogger(loggerName)
-    if (!loggerName.trim.isEmpty && !level.trim.isEmpty && log != null) {
-      log.setLevel(Level.toLevel(level.toUpperCase))
-      true
-    }
-    else false
-  }
-
-}
-
-
-private trait Log4jControllerMBean {
-  def getLoggers: java.util.List[String]
-  def getLogLevel(logger: String): String
-  def setLogLevel(logger: String, level: String): Boolean
-}
-
diff --git a/trunk/core/src/main/scala/kafka/utils/Logging.scala b/trunk/core/src/main/scala/kafka/utils/Logging.scala
deleted file mode 100644
index 3f69e54..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Logging.scala
+++ /dev/null
@@ -1,101 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import org.apache.log4j.Logger
-
-trait Logging {
-  val loggerName = this.getClass.getName
-  lazy val logger = Logger.getLogger(loggerName)
-
-  protected var logIdent = ""
-
-  // Force initialization to register Log4jControllerMBean
-  private val log4jController = Log4jController
-
-  private def msgWithLogIdent(msg: String) = "%s%s".format(logIdent, msg)
-
-  def trace(msg: => String): Unit = {
-    if (logger.isTraceEnabled())
-      logger.trace(msgWithLogIdent(msg))
-  }
-  def trace(e: => Throwable): Any = {
-    if (logger.isTraceEnabled())
-      logger.trace(logIdent,e)
-  }
-  def trace(msg: => String, e: => Throwable) = {
-    if (logger.isTraceEnabled())
-      logger.trace(msgWithLogIdent(msg),e)
-  }
-
-  def debug(msg: => String): Unit = {
-    if (logger.isDebugEnabled())
-      logger.debug(msgWithLogIdent(msg))
-  }
-  def debug(e: => Throwable): Any = {
-    if (logger.isDebugEnabled())
-      logger.debug(logIdent,e)
-  }
-  def debug(msg: => String, e: => Throwable) = {
-    if (logger.isDebugEnabled())
-      logger.debug(msgWithLogIdent(msg),e)
-  }
-
-  def info(msg: => String): Unit = {
-    if (logger.isInfoEnabled())
-      logger.info(msgWithLogIdent(msg))
-  }
-  def info(e: => Throwable): Any = {
-    if (logger.isInfoEnabled())
-      logger.info(logIdent,e)
-  }
-  def info(msg: => String,e: => Throwable) = {
-    if (logger.isInfoEnabled())
-      logger.info(msgWithLogIdent(msg),e)
-  }
-
-  def warn(msg: => String): Unit = {
-    logger.warn(msgWithLogIdent(msg))
-  }
-  def warn(e: => Throwable): Any = {
-    logger.warn(logIdent,e)
-  }
-  def warn(msg: => String, e: => Throwable) = {
-    logger.warn(msgWithLogIdent(msg),e)
-  }	
-
-  def error(msg: => String): Unit = {
-    logger.error(msgWithLogIdent(msg))
-  }		
-  def error(e: => Throwable): Any = {
-    logger.error(logIdent,e)
-  }
-  def error(msg: => String, e: => Throwable) = {
-    logger.error(msgWithLogIdent(msg),e)
-  }
-
-  def fatal(msg: => String): Unit = {
-    logger.fatal(msgWithLogIdent(msg))
-  }
-  def fatal(e: => Throwable): Any = {
-    logger.fatal(logIdent,e)
-  }	
-  def fatal(msg: => String, e: => Throwable) = {
-    logger.fatal(msgWithLogIdent(msg),e)
-  }
-}
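
A short illustrative sketch of mixing in the Logging trait; the Fetcher class below is hypothetical:

    import kafka.utils.Logging

    class Fetcher extends Logging {
      // logIdent is prepended to every message emitted by this instance.
      logIdent = "[fetcher-1] "

      def fetch(offset: Long) {
        debug("fetching from offset " + offset)
        try {
          // ... fetch work would go here ...
        } catch {
          case e: Exception => error("fetch failed at offset " + offset, e)
        }
      }
    }
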
diff --git a/trunk/core/src/main/scala/kafka/utils/MockTime.scala b/trunk/core/src/main/scala/kafka/utils/MockTime.scala
deleted file mode 100644
index 5296aba..0000000
--- a/trunk/core/src/main/scala/kafka/utils/MockTime.scala
+++ /dev/null
@@ -1,34 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import java.util.concurrent._
-
-class MockTime(@volatile var currentMs: Long) extends Time {
-  
-  def this() = this(System.currentTimeMillis)
-  
-  def milliseconds: Long = currentMs
-
-  def nanoseconds: Long = 
-    TimeUnit.NANOSECONDS.convert(currentMs, TimeUnit.MILLISECONDS)
-
-  def sleep(ms: Long): Unit = 
-    currentMs += ms
-  
-}
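
A sketch, under the obvious assumption that tests construct it directly, of using MockTime to make time-dependent code deterministic:

    import kafka.utils.MockTime

    object MockTimeExample {
      def main(args: Array[String]) {
        val time = new MockTime(1000L)   // start the clock at a fixed instant
        val start = time.milliseconds
        time.sleep(5000)                 // advances the clock without actually blocking
        assert(time.milliseconds - start == 5000)
        println("simulated elapsed time: " + (time.milliseconds - start) + " ms")
      }
    }
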
diff --git a/trunk/core/src/main/scala/kafka/utils/Mx4jLoader.scala b/trunk/core/src/main/scala/kafka/utils/Mx4jLoader.scala
deleted file mode 100644
index 64645b1..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Mx4jLoader.scala
+++ /dev/null
@@ -1,72 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-
-import java.lang.management.ManagementFactory
-import javax.management.ObjectName
-
-/**
- * If mx4j-tools is in the classpath call maybeLoad to load the HTTP interface of mx4j.
- *
- * The default port is 8082. To override that provide e.g. -Dmx4jport=8083
- * The default listen address is 0.0.0.0. To override that provide -Dmx4jaddress=127.0.0.1
- * This feature must be enabled with -Dkafka_mx4jenable=true
- *
- * This is a Scala port of org.apache.cassandra.utils.Mx4jTool written by Ran Tavory for CASSANDRA-1068
- * */
-object Mx4jLoader extends Logging {
-
-  def maybeLoad(): Boolean = {
-    if (!Utils.getBoolean(System.getProperties(), "kafka_mx4jenable", false))
-      return false
-    val address = System.getProperty("mx4jaddress", "0.0.0.0")
-    val port = Utils.getInt(System.getProperties(), "mx4jport", 8082)
-    try {
-      debug("Will try to load MX4j now, if it's in the classpath");
-
-      val mbs = ManagementFactory.getPlatformMBeanServer()
-      val processorName = new ObjectName("Server:name=XSLTProcessor")
-
-      val httpAdaptorClass = Class.forName("mx4j.tools.adaptor.http.HttpAdaptor")
-      val httpAdaptor = httpAdaptorClass.newInstance()
-      httpAdaptorClass.getMethod("setHost", classOf[String]).invoke(httpAdaptor, address.asInstanceOf[AnyRef])
-      httpAdaptorClass.getMethod("setPort", Integer.TYPE).invoke(httpAdaptor, port.asInstanceOf[AnyRef])
-
-      val httpName = new ObjectName("system:name=http")
-      mbs.registerMBean(httpAdaptor, httpName)
-
-      val xsltProcessorClass = Class.forName("mx4j.tools.adaptor.http.XSLTProcessor")
-      val xsltProcessor = xsltProcessorClass.newInstance()
-      httpAdaptorClass.getMethod("setProcessor", Class.forName("mx4j.tools.adaptor.http.ProcessorMBean")).invoke(httpAdaptor, xsltProcessor.asInstanceOf[AnyRef])
-      mbs.registerMBean(xsltProcessor, processorName)
-      httpAdaptorClass.getMethod("start").invoke(httpAdaptor)
-      info("mx4j successfuly loaded")
-      true
-    }
-    catch {
-	  case e: ClassNotFoundException => {
-        info("Will not load MX4J, mx4j-tools.jar is not in the classpath");
-      }
-      case e => {
-        warn("Could not start register mbean in JMX", e);
-      }
-    }
-    false
-  }
-}
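
An illustrative startup sketch (the property values are examples; mx4j-tools.jar must be on the classpath for the load to succeed):

    import kafka.utils.Mx4jLoader

    object Mx4jExample {
      def main(args: Array[String]) {
        // maybeLoad() is gated on the kafka_mx4jenable system property.
        System.setProperty("kafka_mx4jenable", "true")
        System.setProperty("mx4jaddress", "127.0.0.1")
        System.setProperty("mx4jport", "8083")
        if (Mx4jLoader.maybeLoad())
          println("MX4J HTTP adaptor started on 127.0.0.1:8083")
        else
          println("MX4J not loaded (disabled or mx4j-tools.jar missing)")
      }
    }
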
diff --git a/trunk/core/src/main/scala/kafka/utils/Pool.scala b/trunk/core/src/main/scala/kafka/utils/Pool.scala
deleted file mode 100644
index d62fa77..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Pool.scala
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import java.util.ArrayList
-import java.util.concurrent._
-import collection.JavaConversions
-
-class Pool[K,V] extends Iterable[(K, V)] {
-
-  private val pool = new ConcurrentHashMap[K, V]
-  
-  def this(m: collection.Map[K, V]) {
-    this()
-    for((k,v) <- m.elements)
-      pool.put(k, v)
-  }
-  
-  def put(k: K, v: V) = pool.put(k, v)
-  
-  def putIfNotExists(k: K, v: V) = pool.putIfAbsent(k, v)
-  
-  def contains(id: K) = pool.containsKey(id)
-  
-  def get(key: K): V = pool.get(key)
-  
-  def remove(key: K): V = pool.remove(key)
-  
-  def keys = JavaConversions.asSet(pool.keySet())
-  
-  def values: Iterable[V] = 
-    JavaConversions.asIterable(new ArrayList[V](pool.values()))
-  
-  def clear: Unit = pool.clear()
-  
-  override def size = pool.size
-  
-  override def iterator = new Iterator[(K,V)]() {
-    
-    private val iter = pool.entrySet.iterator
-    
-    def hasNext: Boolean = iter.hasNext
-    
-    def next: (K, V) = {
-      val n = iter.next
-      (n.getKey, n.getValue)
-    }
-    
-  }
-    
-}
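
A small usage sketch of the Pool wrapper (key/value names are illustrative):

    import kafka.utils.Pool

    object PoolExample {
      def main(args: Array[String]) {
        val pool = new Pool[String, String]()
        pool.put("broker-1", "host1:9092")
        // putIfNotExists delegates to ConcurrentHashMap.putIfAbsent, so it is an
        // atomic check-then-insert and safe to call from multiple threads.
        pool.putIfNotExists("broker-1", "host2:9092") // keeps the original mapping
        pool.putIfNotExists("broker-2", "host2:9092") // inserts, key was absent
        for ((k, v) <- pool)
          println(k + " -> " + v)
      }
    }
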
diff --git a/trunk/core/src/main/scala/kafka/utils/Range.scala b/trunk/core/src/main/scala/kafka/utils/Range.scala
deleted file mode 100644
index ca7d699..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Range.scala
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-
-/**
- * A generic range value with a start and end 
- */
-trait Range {
-  /** The first index in the range */
-  def start: Long
-  /** The total number of indexes in the range */
-  def size: Long
-  /** Return true iff the range is empty */
-  def isEmpty: Boolean = size == 0
-
-  /** Check whether the given value falls within the range */
-  def contains(value: Long): Boolean = {
-    if( (size == 0 && value == start) ||
-        (size > 0 && value >= start && value <= start + size - 1) )
-      return true
-    else
-      return false
-  }
-  
-  override def toString() = "(start=" + start + ", size=" + size + ")"
-}
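
A hypothetical concrete implementation of the trait (the OffsetRange name is illustrative only):

    import kafka.utils.Range

    // Describes a contiguous block of message offsets.
    case class OffsetRange(start: Long, size: Long) extends Range

    object RangeExample {
      def main(args: Array[String]) {
        val r = OffsetRange(100L, 10L)   // covers offsets 100..109
        println(r.contains(105L))        // true
        println(r.contains(110L))        // false, one past the end
      }
    }
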
diff --git a/trunk/core/src/main/scala/kafka/utils/Throttler.scala b/trunk/core/src/main/scala/kafka/utils/Throttler.scala
deleted file mode 100644
index e94230a..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Throttler.scala
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils;
-
-import scala.math._
-
-object Throttler extends Logging {
-  val DefaultCheckIntervalMs = 100L
-}
-
-/**
- * A class to measure and throttle the rate of some process. The throttler takes a desired rate-per-second
- * (the units of the process don't matter, it could be bytes or a count of some other thing), and will sleep for 
- * an appropriate amount of time when maybeThrottle() is called to attain the desired rate.
- * 
- * @param desiredRatePerSec: The rate we want to hit in units/sec
- * @param checkIntervalMs: The interval at which to check our rate
- * @param throttleDown: Does throttling increase or decrease our rate?
- * @param time: The time implementation to use
- */
-@nonthreadsafe
-class Throttler(val desiredRatePerSec: Double, 
-                val checkIntervalMs: Long, 
-                val throttleDown: Boolean, 
-                val time: Time) {
-  
-  private val lock = new Object
-  private var periodStartNs: Long = time.nanoseconds
-  private var observedSoFar: Double = 0.0
-  
-  def this(desiredRatePerSec: Double, throttleDown: Boolean) = 
-    this(desiredRatePerSec, Throttler.DefaultCheckIntervalMs, throttleDown, SystemTime)
-
-  def this(desiredRatePerSec: Double) = 
-    this(desiredRatePerSec, Throttler.DefaultCheckIntervalMs, true, SystemTime)
-  
-  def maybeThrottle(observed: Double) {
-    lock synchronized {
-      observedSoFar += observed
-      val now = time.nanoseconds
-      val elapsedNs = now - periodStartNs
-      // if we have completed an interval AND we have observed something, maybe
-      // we should take a little nap
-      if(elapsedNs > checkIntervalMs * Time.NsPerMs && observedSoFar > 0) {
-        val rateInSecs = (observedSoFar * Time.NsPerSec) / elapsedNs
-        val needAdjustment = !(throttleDown ^ (rateInSecs > desiredRatePerSec))
-        if(needAdjustment) {
-          // solve for the amount of time to sleep to make us hit the desired rate
-          val desiredRateMs = desiredRatePerSec / Time.MsPerSec.asInstanceOf[Double]
-          val elapsedMs = elapsedNs / Time.NsPerMs
-          val sleepTime = round(observedSoFar / desiredRateMs - elapsedMs)
-          if(sleepTime > 0) {
-            Throttler.debug("Natural rate is " + rateInSecs + " per second but desired rate is " + desiredRatePerSec + 
-                                     ", sleeping for " + sleepTime + " ms to compensate.")
-            time.sleep(sleepTime)
-          }
-        }
-        periodStartNs = now
-        observedSoFar = 0
-      }
-    }
-  }
-  
-}
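
A sketch of throttling a processing loop to a target byte rate (the sizes and the loop itself are illustrative):

    import kafka.utils.Throttler

    object ThrottlerExample {
      def main(args: Array[String]) {
        // Cap a cleanup-style loop at roughly 1 MB/sec of processed bytes.
        val throttler = new Throttler(1024.0 * 1024.0, true)
        var remaining = 10 * 1024 * 1024
        while (remaining > 0) {
          val chunk = math.min(64 * 1024, remaining)
          // ... process `chunk` bytes here ...
          throttler.maybeThrottle(chunk)   // sleeps if we are running above the desired rate
          remaining -= chunk
        }
      }
    }
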
diff --git a/trunk/core/src/main/scala/kafka/utils/Time.scala b/trunk/core/src/main/scala/kafka/utils/Time.scala
deleted file mode 100644
index 194cc1f..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Time.scala
+++ /dev/null
@@ -1,61 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-/**
- * Some common constants
- */
-object Time {
-  val NsPerUs = 1000
-  val UsPerMs = 1000
-  val MsPerSec = 1000
-  val NsPerMs = NsPerUs * UsPerMs
-  val NsPerSec = NsPerMs * MsPerSec
-  val UsPerSec = UsPerMs * MsPerSec
-  val SecsPerMin = 60
-  val MinsPerHour = 60
-  val HoursPerDay = 24
-  val SecsPerHour = SecsPerMin * MinsPerHour
-  val SecsPerDay = SecsPerHour * HoursPerDay
-  val MinsPerDay = MinsPerHour * HoursPerDay
-}
-
-/**
- * A mockable interface for time functions
- */
-trait Time {
-  
-  def milliseconds: Long
-
-  def nanoseconds: Long
-
-  def sleep(ms: Long)
-}
-
-/**
- * The normal system implementation of time functions
- */
-object SystemTime extends Time {
-  
-  def milliseconds: Long = System.currentTimeMillis
-  
-  def nanoseconds: Long = System.nanoTime
-  
-  def sleep(ms: Long): Unit = Thread.sleep(ms)
-  
-}
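
A small sketch of writing code against the Time trait so that production uses SystemTime while tests can substitute MockTime (the Stopwatch class is hypothetical):

    import kafka.utils.{Time, SystemTime}

    class Stopwatch(time: Time) {
      private val startMs = time.milliseconds
      def elapsedMs: Long = time.milliseconds - startMs
    }

    object StopwatchExample {
      def main(args: Array[String]) {
        val sw = new Stopwatch(SystemTime)
        Thread.sleep(100)
        println("elapsed ~" + sw.elapsedMs + " ms; one ms is " + Time.NsPerMs + " ns")
      }
    }
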
diff --git a/trunk/core/src/main/scala/kafka/utils/TopicNameValidator.scala b/trunk/core/src/main/scala/kafka/utils/TopicNameValidator.scala
deleted file mode 100644
index 581e0e3..0000000
--- a/trunk/core/src/main/scala/kafka/utils/TopicNameValidator.scala
+++ /dev/null
@@ -1,41 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import kafka.common.InvalidTopicException
-import util.matching.Regex
-import kafka.server.KafkaConfig
-
-class TopicNameValidator(config: KafkaConfig) {
-  private val illegalChars = "/" + '\u0000' + '\u0001' + "-" + '\u001F' + '\u007F' + "-" + '\u009F' +
-                          '\uD800' + "-" + '\uF8FF' + '\uFFF0' + "-" + '\uFFFF'
-  // Regex checks for illegal chars and "." and ".." filenames
-  private val rgx = new Regex("(^\\.{1,2}$)|[" + illegalChars + "]")
-
-  def validate(topic: String) {
-    if (topic.length <= 0)
-      throw new InvalidTopicException("topic name is illegal, can't be empty")
-    else if (topic.length > config.maxTopicNameLength)
-      throw new InvalidTopicException("topic name is illegal, can't be longer than " + config.maxTopicNameLength + " characters")
-
-    rgx.findFirstIn(topic) match {
-      case Some(t) => throw new InvalidTopicException("topic name " + topic + " is illegal, doesn't match expected regular expression")
-      case None =>
-    }
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/utils/UpdateOffsetsInZK.scala b/trunk/core/src/main/scala/kafka/utils/UpdateOffsetsInZK.scala
deleted file mode 100644
index ae0d86e..0000000
--- a/trunk/core/src/main/scala/kafka/utils/UpdateOffsetsInZK.scala
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import org.I0Itec.zkclient.ZkClient
-import kafka.consumer.{SimpleConsumer, ConsumerConfig}
-import kafka.cluster.Partition
-import kafka.api.OffsetRequest
-import java.lang.IllegalStateException
-
-/**
- *  A utility that updates the offset of every broker partition to the offset of latest log segment file, in ZK.
- */
-object UpdateOffsetsInZK {
-  val Earliest = "earliest"
-  val Latest = "latest"
-
-  def main(args: Array[String]) {
-    if(args.length < 3)
-      usage
-    val config = new ConsumerConfig(Utils.loadProps(args(1)))
-    val zkClient = new ZkClient(config.zkConnect, config.zkSessionTimeoutMs,
-        config.zkConnectionTimeoutMs, ZKStringSerializer)
-    args(0) match {
-      case Earliest => getAndSetOffsets(zkClient, OffsetRequest.EarliestTime, config, args(2))
-      case Latest => getAndSetOffsets(zkClient, OffsetRequest.LatestTime, config, args(2))
-      case _ => usage
-    }
-  }
-
-  private def getAndSetOffsets(zkClient: ZkClient, offsetOption: Long, config: ConsumerConfig, topic: String): Unit = {
-    val cluster = ZkUtils.getCluster(zkClient)
-    val partitionsPerTopicMap = ZkUtils.getPartitionsForTopics(zkClient, List(topic).iterator)
-    var partitions: List[String] = Nil
-
-    partitionsPerTopicMap.get(topic) match {
-      case Some(l) =>  partitions = l.sortWith((s,t) => s < t)
-      case _ => throw new RuntimeException("Can't find topic " + topic)
-    }
-
-    var numParts = 0
-    for (partString <- partitions) {
-      val part = Partition.parse(partString)
-      val broker = cluster.getBroker(part.brokerId) match {
-        case Some(b) => b
-        case None => throw new IllegalStateException("Broker " + part.brokerId + " is unavailable. Cannot issue " +
-          "getOffsetsBefore request")
-      }
-      val consumer = new SimpleConsumer(broker.host, broker.port, 10000, 100 * 1024)
-      val offsets = consumer.getOffsetsBefore(topic, part.partId, offsetOption, 1)
-      val topicDirs = new ZKGroupTopicDirs(config.groupId, topic)
-      
-      println("updating partition " + part.name + " with new offset: " + offsets(0))
-      ZkUtils.updatePersistentPath(zkClient, topicDirs.consumerOffsetDir + "/" + part.name, offsets(0).toString)
-      numParts += 1
-    }
-    println("updated the offset for " + numParts + " partitions")    
-  }
-
-  private def usage() = {
-    println("USAGE: " + UpdateOffsetsInZK.getClass.getName + " [earliest | latest] consumer.properties topic")
-    System.exit(1)
-  }
-}
diff --git a/trunk/core/src/main/scala/kafka/utils/Utils.scala b/trunk/core/src/main/scala/kafka/utils/Utils.scala
deleted file mode 100644
index 23d64b7..0000000
--- a/trunk/core/src/main/scala/kafka/utils/Utils.scala
+++ /dev/null
@@ -1,841 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import java.io._
-import java.nio._
-import java.nio.channels._
-import java.util.concurrent.atomic._
-import java.lang.management._
-import java.util.zip.CRC32
-import javax.management._
-import java.util.Properties
-import scala.collection._
-import scala.collection.mutable
-import kafka.message.{NoCompressionCodec, CompressionCodec}
-import org.I0Itec.zkclient.ZkClient
-import joptsimple.{OptionSpec, OptionSet, OptionParser}
-import util.parsing.json.JSON
-
-/**
- * Helper functions!
- */
-object Utils extends Logging {
-  /**
-   * Wrap the given function in a java.lang.Runnable
-   * @param fun A function
-   * @return A Runnable that just executes the function
-   */
-  def runnable(fun: () => Unit): Runnable = 
-    new Runnable() {
-      def run() = fun()
-    }
-  
-  /**
-   * Wrap the given function in a java.lang.Runnable that logs any errors encountered
-   * @param fun A function
-   * @return A Runnable that just executes the function
-   */
-  def loggedRunnable(fun: () => Unit): Runnable =
-    new Runnable() {
-      def run() = {
-        try {
-          fun()
-        }
-        catch {
-          case t =>
-            // log any error and the stack trace
-            error("error in loggedRunnable", t)
-        }
-      }
-    }
-
-  /**
-   * Create a daemon thread
-   * @param name The name of the thread
-   * @param runnable The runnable to execute in the background
-   * @return The unstarted thread
-   */
-  def daemonThread(name: String, runnable: Runnable): Thread = 
-    newThread(name, runnable, true)
-  
-  /**
-   * Create a daemon thread
-   * @param name The name of the thread
-   * @param fun The function to execute in the thread
-   * @return The unstarted thread
-   */
-  def daemonThread(name: String, fun: () => Unit): Thread = 
-    daemonThread(name, runnable(fun))
-  
-  /**
-   * Create a new thread
-   * @param name The name of the thread
-   * @param runnable The work for the thread to do
-   * @param daemon If true, the thread is a daemon and will not block JVM shutdown
-   * @return The unstarted thread
-   */
-  def newThread(name: String, runnable: Runnable, daemon: Boolean): Thread = {
-    val thread = new Thread(runnable, name) 
-    thread.setDaemon(daemon)
-    thread
-  }
-   
-  /**
-   * Read a byte array from the given offset and size in the buffer
-   * TODO: Should use System.arraycopy
-   */
-  def readBytes(buffer: ByteBuffer, offset: Int, size: Int): Array[Byte] = {
-    val bytes = new Array[Byte](size)
-    var i = 0
-    while(i < size) {
-      bytes(i) = buffer.get(offset + i)
-      i += 1
-    }
-    bytes
-  }
-  
-  /**
-   * Read size prefixed string where the size is stored as a 2 byte short.
-   * @param buffer The buffer to read from
-   * @param encoding The encoding in which to read the string
-   */
-  def readShortString(buffer: ByteBuffer, encoding: String): String = {
-    val size: Int = buffer.getShort()
-    if(size < 0)
-      return null
-    val bytes = new Array[Byte](size)
-    buffer.get(bytes)
-    new String(bytes, encoding)
-  }
-  
-  /**
-   * Write a size prefixed string where the size is stored as a 2 byte short
-   * @param buffer The buffer to write to
-   * @param string The string to write
-   * @param encoding The encoding in which to write the string
-   */
-  def writeShortString(buffer: ByteBuffer, string: String, encoding: String): Unit = {
-    if(string == null) {
-      buffer.putShort(-1)
-    } else if(string.length > Short.MaxValue) {
-      throw new IllegalArgumentException("String exceeds the maximum size of " + Short.MaxValue + ".")
-    } else {
-      buffer.putShort(string.length.asInstanceOf[Short])
-      buffer.put(string.getBytes(encoding))
-    }
-  }
-  
-  /**
-   * Read a properties file from the given path
-   * @param filename The path of the file to read
-   */
-  def loadProps(filename: String): Properties = {
-    val propStream = new FileInputStream(filename)
-    val props = new Properties()
-    props.load(propStream)
-    props
-  }
-  
-  /**
-   * Read a required integer property value or throw an exception if no such property is found
-   */
-  def getInt(props: Properties, name: String): Int = {
-    if(props.containsKey(name))
-      return getInt(props, name, -1)
-    else
-      throw new IllegalArgumentException("Missing required property '" + name + "'")
-  }
-  
-  /**
-   * Read an integer from the properties instance
-   * @param props The properties to read from
-   * @param name The property name
-   * @param default The default value to use if the property is not found
-   * @return the integer value
-   */
-  def getInt(props: Properties, name: String, default: Int): Int = 
-    getIntInRange(props, name, default, (Int.MinValue, Int.MaxValue))
-  
-  /**
-   * Read an integer from the properties instance. Throw an exception 
-   * if the value is not in the given range (inclusive)
-   * @param props The properties to read from
-   * @param name The property name
-   * @param default The default value to use if the property is not found
-   * @param range The range in which the value must fall (inclusive)
-   * @throws IllegalArgumentException If the value is not in the given range
-   * @return the integer value
-   */
-  def getIntInRange(props: Properties, name: String, default: Int, range: (Int, Int)): Int = {
-    val v = 
-      if(props.containsKey(name))
-        props.getProperty(name).toInt
-      else
-        default
-    if(v < range._1 || v > range._2)
-      throw new IllegalArgumentException(name + " has value " + v + " which is not in the range " + range + ".")
-    else
-      v
-  }
-  
-  /**
-   * Read a required long property value or throw an exception if no such property is found
-   */
-  def getLong(props: Properties, name: String): Long = {
-    if(props.containsKey(name))
-      return getLong(props, name, -1)
-    else
-      throw new IllegalArgumentException("Missing required property '" + name + "'")
-  }
-
-  /**
-   * Read a long from the properties instance
-   * @param props The properties to read from
-   * @param name The property name
-   * @param default The default value to use if the property is not found
-   * @return the long value
-   */
-  def getLong(props: Properties, name: String, default: Long): Long = 
-    getLongInRange(props, name, default, (Long.MinValue, Long.MaxValue))
-
-  /**
-   * Read a long from the properties instance. Throw an exception 
-   * if the value is not in the given range (inclusive)
-   * @param props The properties to read from
-   * @param name The property name
-   * @param default The default value to use if the property is not found
-   * @param range The range in which the value must fall (inclusive)
-   * @throws IllegalArgumentException If the value is not in the given range
-   * @return the long value
-   */
-  def getLongInRange(props: Properties, name: String, default: Long, range: (Long, Long)): Long = {
-    val v = 
-      if(props.containsKey(name))
-        props.getProperty(name).toLong
-      else
-        default
-    if(v < range._1 || v > range._2)
-      throw new IllegalArgumentException(name + " has value " + v + " which is not in the range " + range + ".")
-    else
-      v
-  }
-
-  /**
-   * Read a boolean value from the properties instance
-   * @param props The properties to read from
-   * @param name The property name
-   * @param default The default value to use if the property is not found
-   * @return the boolean value
-   */
-  def getBoolean(props: Properties, name: String, default: Boolean): Boolean = {
-    if(!props.containsKey(name))
-      default
-    else if("true" == props.getProperty(name))
-      true
-    else if("false" == props.getProperty(name))
-      false
-    else
-      throw new IllegalArgumentException("Unacceptable value for property '" + name + "', boolean values must be either 'true' or 'false" )
-  }
-  
-  /**
-   * Get a string property, or, if no such property is defined, return the given default value
-   */
-  def getString(props: Properties, name: String, default: String): String = {
-    if(props.containsKey(name))
-      props.getProperty(name)
-    else
-      default
-  }
-  
-  /**
-   * Get a string property or throw an exception if no such property is defined.
-   */
-  def getString(props: Properties, name: String): String = {
-    if(props.containsKey(name))
-      props.getProperty(name)
-    else
-      throw new IllegalArgumentException("Missing required property '" + name + "'")
-  }
-
-  /**
-   * Get a property of type java.util.Properties or throw an exception if no such property is defined.
-   */
-  def getProps(props: Properties, name: String): Properties = {
-    if(props.containsKey(name)) {
-      val propString = props.getProperty(name)
-      val propValues = propString.split(",")
-      val properties = new Properties
-      for(i <- 0 until propValues.length) {
-        val prop = propValues(i).split("=")
-        if(prop.length != 2)
-          throw new IllegalArgumentException("Illegal format of specifying properties '" + propValues(i) + "'")
-        properties.put(prop(0), prop(1))
-      }
-      properties
-    }
-    else
-      throw new IllegalArgumentException("Missing required property '" + name + "'")
-  }
-
-  /**
-   * Get a property of type java.util.Properties or return the default if no such property is defined
-   */
-  def getProps(props: Properties, name: String, default: Properties): Properties = {
-    if(props.containsKey(name)) {
-      val propString = props.getProperty(name)
-      val propValues = propString.split(",")
-      if(propValues.length < 1)
-        throw new IllegalArgumentException("Illegal format of specifying properties '" + propString + "'")
-      val properties = new Properties
-      for(i <- 0 until propValues.length) {
-        val prop = propValues(i).split("=")
-        if(prop.length != 2)
-          throw new IllegalArgumentException("Illegal format of specifying properties '" + propValues(i) + "'")
-        properties.put(prop(0), prop(1))
-      }
-      properties
-    }
-    else
-      default
-  }
-
-  /**
-   * Open a channel for the given file
-   */
-  def openChannel(file: File, mutable: Boolean): FileChannel = {
-    if(mutable)
-      new RandomAccessFile(file, "rw").getChannel()
-    else
-      new FileInputStream(file).getChannel()
-  }
-  
-  /**
-   * Do the given action and log any exceptions thrown without rethrowing them
-   * @param log The log method to use for logging. E.g. logger.warn
-   * @param action The action to execute
-   */
-  def swallow(log: (Object, Throwable) => Unit, action: => Unit) = {
-    try {
-      action
-    } catch {
-      case e: Throwable => log(e.getMessage(), e)
-    }
-  }
-  
-  /**
-   * Test if two byte buffers are equal. In this case equality means having
-   * the same bytes from the current position to the limit
-   */
-  def equal(b1: ByteBuffer, b2: ByteBuffer): Boolean = {
-    // two byte buffers are equal if their position is the same,
-    // their remaining bytes are the same, and their contents are the same
-    if(b1.position != b2.position)
-      return false
-    if(b1.remaining != b2.remaining)
-      return false
-    for(i <- 0 until b1.remaining)
-      if(b1.get(i) != b2.get(i))
-        return false
-    return true
-  }
-  
-  /**
-   * Translate the given buffer into a string
-   * @param buffer The buffer to translate
-   * @param encoding The encoding to use in translating bytes to characters
-   */
-  def toString(buffer: ByteBuffer, encoding: String): String = {
-    val bytes = new Array[Byte](buffer.remaining)
-    buffer.get(bytes)
-    new String(bytes, encoding)
-  }
-  
-  /**
-   * Print an error message and shutdown the JVM
-   * @param message The error message
-   */
-  def croak(message: String) {
-    System.err.println(message)
-    System.exit(1)
-  }
-  
-  /**
-   * Recursively delete the given file/directory and any subfiles (if any exist)
-   * @param file The root file at which to begin deleting
-   */
-  def rm(file: String): Unit = rm(new File(file))
-  
-  /**
-   * Recursively delete the given file/directory and any subfiles (if any exist)
-   * @param file The root file at which to begin deleting
-   */
-  def rm(file: File): Unit = {
-    if(file == null) {
-      return
-    } else if(file.isDirectory) {
-      val files = file.listFiles()
-      if(files != null) {
-        for(f <- files)
-          rm(f)
-      }
-      file.delete()
-    } else {
-      file.delete()
-    }
-  }
-  
-  /**
-   * Register the given mbean with the platform mbean server,
-   * unregistering any mbean that was there before. Note,
-   * this method will not throw an exception if the registration
-   * fails (since there is nothing you can do and it isn't fatal),
-   * instead it just returns false indicating the registration failed.
-   * @param mbean The object to register as an mbean
-   * @param name The name to register this mbean with
-   * @return true if the registration succeeded
-   */
-  def registerMBean(mbean: Object, name: String): Boolean = {
-    try {
-      val mbs = ManagementFactory.getPlatformMBeanServer()
-      mbs synchronized {
-        val objName = new ObjectName(name)
-        if(mbs.isRegistered(objName))
-          mbs.unregisterMBean(objName)
-        mbs.registerMBean(mbean, objName)
-        true
-      }
-    } catch {
-      case e: Exception => {
-        error("Failed to register Mbean " + name, e)
-        false
-      }
-    }
-  }
-  
-  /**
-   * Unregister the mbean with the given name, if there is one registered
-   * @param name The mbean name to unregister
-   */
-  def unregisterMBean(name: String) {
-    val mbs = ManagementFactory.getPlatformMBeanServer()
-    mbs synchronized {
-      val objName = new ObjectName(name)
-      if(mbs.isRegistered(objName))
-        mbs.unregisterMBean(objName)
-    }
-  }
-  
-  /**
-   * Read an unsigned integer from the current position in the buffer, 
-   * incrementing the position by 4 bytes
-   * @param buffer The buffer to read from
-   * @return The integer read, as a long to avoid signedness
-   */
-  def getUnsignedInt(buffer: ByteBuffer): Long = 
-    buffer.getInt() & 0xffffffffL
-  
-  /**
-   * Read an unsigned integer from the given position without modifying the buffer's
-   * position
-   * @param buffer The buffer to read from
-   * @param index the index from which to read the integer
-   * @return The integer read, as a long to avoid signedness
-   */
-  def getUnsignedInt(buffer: ByteBuffer, index: Int): Long = 
-    buffer.getInt(index) & 0xffffffffL
-  
-  /**
-   * Write the given long value as a 4 byte unsigned integer. Overflow is ignored.
-   * @param buffer The buffer to write to
-   * @param value The value to write
-   */
-  def putUnsignedInt(buffer: ByteBuffer, value: Long): Unit = 
-    buffer.putInt((value & 0xffffffffL).asInstanceOf[Int])
-  
-  /**
-   * Write the given long value as a 4 byte unsigned integer. Overflow is ignored.
-   * @param buffer The buffer to write to
-   * @param index The position in the buffer at which to begin writing
-   * @param value The value to write
-   */
-  def putUnsignedInt(buffer: ByteBuffer, index: Int, value: Long): Unit = 
-    buffer.putInt(index, (value & 0xffffffffL).asInstanceOf[Int])
-  
-  /**
-   * Compute the CRC32 of the byte array
-   * @param bytes The array to compute the checksum for
-   * @return The CRC32
-   */
-  def crc32(bytes: Array[Byte]): Long = crc32(bytes, 0, bytes.length)
-  
-  /**
-   * Compute the CRC32 of the segment of the byte array given by the specified size and offset
-   * @param bytes The bytes to checksum
-   * @param offset the offset at which to begin checksumming
-   * @param size the number of bytes to checksum
-   * @return The CRC32
-   */
-  def crc32(bytes: Array[Byte], offset: Int, size: Int): Long = {
-    val crc = new CRC32()
-    crc.update(bytes, offset, size)
-    crc.getValue()
-  }
-  
-  /**
-   * Compute the hash code for the given items
-   */
-  def hashcode(as: Any*): Int = {
-    if(as == null)
-      return 0
-    var h = 1
-    var i = 0
-    while(i < as.length) {
-      if(as(i) != null)
-        h = 31 * h + as(i).hashCode
-      i += 1
-    }
-    return h
-  }
-  
-  /**
-   * Group the given values by keys extracted with the given function
-   */
-  def groupby[K,V](vals: Iterable[V], f: V => K): Map[K,List[V]] = {
-    val m = new mutable.HashMap[K, List[V]]
-    for(v <- vals) {
-      val k = f(v)
-      m.get(k) match {
-        case Some(l: List[V]) => m.put(k, v :: l)
-        case None => m.put(k, List(v))
-      }
-    } 
-    m
-  }
-  
-  /**
-   * Read some bytes into the provided buffer, and return the number of bytes read. If the 
-   * channel has been closed or we get -1 on the read for any reason, throw an EOFException
-   */
-  def read(channel: ReadableByteChannel, buffer: ByteBuffer): Int = {
-    channel.read(buffer) match {
-      case -1 => throw new EOFException("Received -1 when reading from channel, socket has likely been closed.")
-      case n: Int => n
-    }
-  } 
-  
-  def notNull[V](v: V) = {
-    if(v == null)
-      throw new IllegalArgumentException("Value cannot be null.")
-    else
-      v
-  }
-
-  def getHostPort(hostport: String) : Tuple2[String, Int] = {
-    val splits = hostport.split(":")
-    (splits(0), splits(1).toInt)
-  }
-
-  def getTopicPartition(topicPartition: String) : Tuple2[String, Int] = {
-    val index = topicPartition.lastIndexOf('-')
-    (topicPartition.substring(0,index), topicPartition.substring(index+1).toInt)
-  }
-
-  def stackTrace(e: Throwable): String = {
-    val sw = new StringWriter
-    val pw = new PrintWriter(sw)
-    e.printStackTrace(pw)
-    sw.toString()
-  }
-
-  /**
-   * Parses a comma-separated string of key:value pairs and returns them as a map.
-   * The format of allCSVals is key1:val1, key2:val2, ...
-   */
-  private def getCSVMap[K, V](allCSVals: String, exceptionMsg:String, successMsg:String) :Map[K, V] = {
-    val map = new mutable.HashMap[K, V]
-    if("".equals(allCSVals))
-      return map
-    val csVals = allCSVals.split(",")
-    for(i <- 0 until csVals.length)
-    {
-     try{
-      val tempSplit = csVals(i).split(":")
-      info(successMsg + tempSplit(0) + " : " + Integer.parseInt(tempSplit(1).trim))
-      map += tempSplit(0).asInstanceOf[K] -> Integer.parseInt(tempSplit(1).trim).asInstanceOf[V]
-      } catch {
-          case _ =>  error(exceptionMsg + ": " + csVals(i))
-        }
-    }
-    map
-  }
-
-  def getCSVList(csvList: String): Seq[String] = {
-    if(csvList == null)
-      Seq.empty[String]
-    else {
-      csvList.split(",").filter(v => !v.equals(""))
-    }
-  }
-
-  def getTopicRetentionHours(retentionHours: String) : Map[String, Int] = {
-    val exceptionMsg = "Malformed token for topic.log.retention.hours in server.properties: "
-    val successMsg =  "The retention hours for "
-    val map: Map[String, Int] = getCSVMap(retentionHours, exceptionMsg, successMsg)
-    map.foreach{case(topic, hrs) =>
-                  require(hrs > 0, "Log retention hours value for topic " + topic + " is " + hrs +
-                                   " which is not greater than 0.")}
-    map
-  }
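For clarity, the per-topic override strings parsed by getCSVMap and the helpers above are comma-separated topic:value pairs; a hedged, REPL-style sketch (the topic names and values below are made up):

    // topic.log.retention.hours style string -> per-topic map; malformed tokens are logged and skipped.
    val retention = Utils.getTopicRetentionHours("topic1:24,topic2:168")
    // retention == Map("topic1" -> 24, "topic2" -> 168)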
-
-  def getTopicRollHours(rollHours: String) : Map[String, Int] = {
-    val exceptionMsg = "Malformed token for topic.log.roll.hours in server.properties: "
-    val successMsg =  "The roll hours for "
-    val map: Map[String, Int] = getCSVMap(rollHours, exceptionMsg, successMsg)
-    map.foreach{case(topic, hrs) =>
-                  require(hrs > 0, "Log roll hours value for topic " + topic + " is " + hrs +
-                                   " which is not greater than 0.")}
-    map
-  }
-
-  def getTopicFileSize(fileSizes: String): Map[String, Int] = {
-    val exceptionMsg = "Malformed token for topic.log.file.size in server.properties: "
-    val successMsg =  "The log file size for "
-    val map: Map[String, Int] = getCSVMap(fileSizes, exceptionMsg, successMsg)
-    map.foreach{case(topic, size) =>
-                  require(size > 0, "Log file size value for topic " + topic + " is " + size +
-                                   " which is not greater than 0.")}
-    map
-  }
-
-  def getTopicRetentionSize(retentionSizes: String): Map[String, Long] = {
-    val exceptionMsg = "Malformed token for topic.log.retention.size in server.properties: "
-    val successMsg =  "The log retention size for "
-    val map: Map[String, Long] = getCSVMap(retentionSizes, exceptionMsg, successMsg)
-    map.foreach{case(topic, size) =>
-                 require(size > 0, "Log retention size value for topic " + topic + " is " + size +
-                                   " which is not greater than 0.")}
-    map
-  }
-
-  def getTopicFlushIntervals(allIntervals: String) : Map[String, Int] = {
-    val exceptionMsg = "Malformed token for topic.flush.Intervals.ms in server.properties: "
-    val successMsg =  "The flush interval for "
-    val map: Map[String, Int] = getCSVMap(allIntervals, exceptionMsg, successMsg)
-    map.foreach{case(topic, interval) =>
-                  require(interval > 0, "Flush interval value for topic " + topic + " is " + interval +
-                                        " ms which is not greater than 0.")}
-    map
-  }
-
-  def getTopicPartitions(allPartitions: String) : Map[String, Int] = {
-    val exceptionMsg = "Malformed token for topic.partition.counts in server.properties: "
-    val successMsg =  "The number of partitions for topic  "
-    val map: Map[String, Int] = getCSVMap(allPartitions, exceptionMsg, successMsg)
-    map.foreach{case(topic, count) =>
-                  require(count > 0, "The number of partitions for topic " + topic + " is " + count +
-                                     " which is not greater than 0.")}
-    map
-  }
-
-  def getConsumerTopicMap(consumerTopicString: String) : Map[String, Int] = {
-    val exceptionMsg = "Malformed token for embeddedconsumer.topics in consumer.properties: "
-    val successMsg =  "The number of consumer threads for topic  "
-    val map: Map[String, Int] = getCSVMap(consumerTopicString, exceptionMsg, successMsg)
-    map.foreach{case(topic, count) =>
-                  require(count > 0, "The number of consumer threads for topic " + topic + " is " + count +
-                                     " which is not greater than 0.")}
-    map
-  }
-
-  def getObject[T<:AnyRef](className: String): T = {
-    className match {
-      case null => null.asInstanceOf[T]
-      case _ =>
-        val clazz = Class.forName(className)
-        val clazzT = clazz.asInstanceOf[Class[T]]
-        val constructors = clazzT.getConstructors
-        require(constructors.length == 1)
-        constructors.head.newInstance().asInstanceOf[T]
-    }
-  }
-
-  def propertyExists(prop: String): Boolean = {
-    if(prop == null)
-      false
-    else if(prop.compareTo("") == 0)
-      false
-    else true
-  }
-
-  def getCompressionCodec(props: Properties, codec: String): CompressionCodec = {
-    val codecValueString = props.getProperty(codec)
-    if(codecValueString == null)
-      NoCompressionCodec
-    else
-      CompressionCodec.getCompressionCodec(codecValueString.toInt)
-  }
-
-  def tryCleanupZookeeper(zkUrl: String, groupId: String) {
-    try {
-      val dir = "/consumers/" + groupId
-      logger.info("Cleaning up temporary zookeeper data under " + dir + ".")
-      val zk = new ZkClient(zkUrl, 30*1000, 30*1000, ZKStringSerializer)
-      zk.deleteRecursive(dir)
-      zk.close()
-    } catch {
-      case _ => // swallow
-    }
-  }
-
-  def checkRequiredArgs(parser: OptionParser, options: OptionSet, required: OptionSpec[_]*) {
-    for(arg <- required) {
-      if(!options.has(arg)) {
-        error("Missing required argument \"" + arg + "\"")
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-  }
-
-  /**
-   * Create a circular (looping) iterator over a collection.
-   * @param coll An iterable over the underlying collection.
-   * @return A circular iterator over the collection.
-   */
-  def circularIterator[T](coll: Iterable[T]) = {
-    val stream: Stream[T] =
-      for (forever <- Stream.continually(1); t <- coll) yield t
-    stream.iterator
-  }
-}
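A brief, hedged REPL-style sketch of the buffer and checksum helpers above (not part of the original file; it assumes the Utils object as defined):

    import java.nio.ByteBuffer

    // Write a value as an unsigned 32-bit integer and read it back as a Long with no sign flip.
    val buf = ByteBuffer.allocate(4)
    Utils.putUnsignedInt(buf, 0, 0xFFFFFFF0L)
    val readBack = Utils.getUnsignedInt(buf, 0)   // 4294967280L

    // Checksum an arbitrary payload.
    val checksum = Utils.crc32("hello".getBytes("UTF-8"))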
-
-class SnapshotStats(private val monitorDurationNs: Long = 600L * 1000L * 1000L * 1000L) {
-  private val time: Time = SystemTime
-
-  private val complete = new AtomicReference(new Stats())
-  private val current = new AtomicReference(new Stats())
-  private val total = new AtomicLong(0)
-  private val numCumulatedRequests = new AtomicLong(0)
-
-  def recordRequestMetric(requestNs: Long) {
-    val stats = current.get
-    stats.add(requestNs)
-    total.getAndAdd(requestNs)
-    numCumulatedRequests.getAndAdd(1)
-    val ageNs = time.nanoseconds - stats.start
-    // if the current stats are too old it is time to swap
-    if(ageNs >= monitorDurationNs) {
-      val swapped = current.compareAndSet(stats, new Stats())
-      if(swapped) {
-        complete.set(stats)
-        stats.end.set(time.nanoseconds)
-      }
-    }
-  }
-
-  def recordThroughputMetric(data: Long) {
-    val stats = current.get
-    stats.addData(data)
-    val ageNs = time.nanoseconds - stats.start
-    // if the current stats are too old it is time to swap
-    if(ageNs >= monitorDurationNs) {
-      val swapped = current.compareAndSet(stats, new Stats())
-      if(swapped) {
-        complete.set(stats)
-        stats.end.set(time.nanoseconds)
-      }
-    }
-  }
-
-  def getNumRequests(): Long = numCumulatedRequests.get
-
-  def getRequestsPerSecond: Double = {
-    val stats = complete.get
-    stats.numRequests / stats.durationSeconds
-  }
-
-  def getThroughput: Double = {
-    val stats = complete.get
-    stats.totalData / stats.durationSeconds
-  }
-
-  def getAvgMetric: Double = {
-    val stats = complete.get
-    if (stats.numRequests == 0) {
-      0
-    }
-    else {
-      stats.totalRequestMetric / stats.numRequests
-    }
-  }
-
-  def getTotalMetric: Long = total.get
-
-  def getMaxMetric: Double = complete.get.maxRequestMetric
-
-  class Stats {
-    val start = time.nanoseconds
-    var end = new AtomicLong(-1)
-    var numRequests = 0
-    var totalRequestMetric: Long = 0L
-    var maxRequestMetric: Long = 0L
-    var totalData: Long = 0L
-    private val lock = new Object()
-
-    def addData(data: Long) {
-      lock synchronized {
-        totalData += data
-      }
-    }
-
-    def add(requestNs: Long) {
-      lock synchronized {
-        numRequests +=1
-        totalRequestMetric += requestNs
-        maxRequestMetric = scala.math.max(maxRequestMetric, requestNs)
-      }
-    }
-
-    def durationSeconds: Double = (end.get - start) / (1000.0 * 1000.0 * 1000.0)
-
-    def durationMs: Double = (end.get - start) / (1000.0 * 1000.0)
-  }
-}
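A hedged sketch of how SnapshotStats is driven (the window size and timings are arbitrary example values; metrics derived from the completed window are only meaningful after at least one swap):

    val stats = new SnapshotStats(monitorDurationNs = 1000L * 1000L * 1000L) // 1 second window
    stats.recordRequestMetric(5L * 1000 * 1000)   // a 5 ms request, in nanoseconds
    stats.recordThroughputMetric(1024)            // 1 KB transferred
    val totalNs = stats.getTotalMetric            // cumulative request time recorded so far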
-
-/**
- *  A wrapper that synchronizes access to Scala's JSON parser, which is not thread-safe.
- */
-object SyncJSON extends Logging {
-  val myConversionFunc = {input : String => input.toInt}
-  JSON.globalNumberParser = myConversionFunc
-  val lock = new Object
-
-  def parseFull(input: String): Option[Any] = {
-    lock synchronized {
-      try {
-        JSON.parseFull(input)
-      } catch {
-        case t =>
-          throw new RuntimeException("Can't parse json string: %s".format(input), t)
-      }
-    }
-  }
-}
\ No newline at end of file
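And a one-line, hedged example of the synchronized JSON wrapper (numbers parse as Int because of the global number parser installed above):

    val parsed = SyncJSON.parseFull("""{"version": 1, "partitions": 2}""")  // Option[Any]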
diff --git a/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala b/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala
deleted file mode 100644
index caddb06..0000000
--- a/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala
+++ /dev/null
@@ -1,312 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import org.I0Itec.zkclient.ZkClient
-import org.I0Itec.zkclient.serialize.ZkSerializer
-import kafka.cluster.{Broker, Cluster}
-import scala.collection._
-import java.util.Properties
-import org.I0Itec.zkclient.exception.{ZkNodeExistsException, ZkNoNodeException, ZkMarshallingError}
-import kafka.consumer.TopicCount
-
-object ZkUtils extends Logging {
-  val ConsumersPath = "/consumers"
-  val BrokerIdsPath = "/brokers/ids"
-  val BrokerTopicsPath = "/brokers/topics"
-
-  /**
-   *  Make sure a persistent path exists in ZK. Create the path if it does not exist.
-   */
-  def makeSurePersistentPathExists(client: ZkClient, path: String) {
-    if (!client.exists(path))
-      client.createPersistent(path, true) // won't throw NoNodeException or NodeExistsException
-  }
-
-  /**
-   *  create the parent path
-   */
-  private def createParentPath(client: ZkClient, path: String): Unit = {
-    val parentDir = path.substring(0, path.lastIndexOf('/'))
-    if (parentDir.length != 0)
-      client.createPersistent(parentDir, true)
-  }
-
-  /**
-   * Create an ephemeral node with the given path and data. Create parents if necessary.
-   */
-  private def createEphemeralPath(client: ZkClient, path: String, data: String): Unit = {
-    try {
-      client.createEphemeral(path, data)
-    }
-    catch {
-      case e: ZkNoNodeException => {
-        createParentPath(client, path)
-        client.createEphemeral(path, data)
-      }
-    }
-  }
-
-  /**
-   * Create an ephemeral node with the given path and data.
-   * Throws a ZkNodeExistsException if the node already exists.
-   */
-  def createEphemeralPathExpectConflict(client: ZkClient, path: String, data: String): Unit = {
-    try {
-      createEphemeralPath(client, path, data)
-    }
-    catch {
-      case e: ZkNodeExistsException => {
-        // this can happen when there is connection loss; make sure the data is what we intend to write
-        var storedData: String = null
-        try {
-          storedData = readData(client, path)
-        }
-        catch {
-          case e1: ZkNoNodeException => // the node disappeared; treat as if the node existed and let the caller handle this
-          case e2 => throw e2
-        }
-        if (storedData == null || storedData != data) {
-          info("conflict in " + path + " data: " + data + " stored data: " + storedData)
-          throw e
-        }
-        else {
-          // otherwise, the creation succeeded, return normally
-          info(path + " exists with value " + data + " during connection loss; this is ok")
-        }
-      }
-      case e2 => throw e2
-    }
-  }
-
-  /**
-   * Update the value of a persistent node with the given path and data.
-   * Create the parent directory if necessary. Never throws ZkNodeExistsException.
-   */
-  def updatePersistentPath(client: ZkClient, path: String, data: String): Unit = {
-    try {
-      client.writeData(path, data)
-    }
-    catch {
-      case e: ZkNoNodeException => {
-        createParentPath(client, path)
-        try {
-          client.createPersistent(path, data)
-        }
-        catch {
-          case e: ZkNodeExistsException => client.writeData(path, data)
-          case e2 => throw e2
-        }
-      }
-      case e2 => throw e2
-    }
-  }
-
-  /**
-   * Update the value of an ephemeral node with the given path and data.
-   * Create the parent directory if necessary. Never throws ZkNodeExistsException.
-   */
-  def updateEphemeralPath(client: ZkClient, path: String, data: String): Unit = {
-    try {
-      client.writeData(path, data)
-    }
-    catch {
-      case e: ZkNoNodeException => {
-        createParentPath(client, path)
-        client.createEphemeral(path, data)
-      }
-      case e2 => throw e2
-    }
-  }
-
-  def deletePath(client: ZkClient, path: String) {
-    try {
-      client.delete(path)
-    }
-    catch {
-      case e: ZkNoNodeException =>
-        // this can happen during a connection loss event, return normally
-        info(path + " deleted during connection loss; this is ok")
-      case e2 => throw e2
-    }
-  }
-
-  def deletePathRecursive(client: ZkClient, path: String) {
-    try {
-      client.deleteRecursive(path)
-    }
-    catch {
-      case e: ZkNoNodeException =>
-        // this can happen during a connection loss event, return normally
-        info(path + " deleted during connection loss; this is ok")
-      case e2 => throw e2
-    }
-  }
-
-  def readData(client: ZkClient, path: String): String = {
-    client.readData(path)
-  }
-
-  def readDataMaybeNull(client: ZkClient, path: String): String = {
-    client.readData(path, true)
-  }
-
-  def getChildren(client: ZkClient, path: String): Seq[String] = {
-    import scala.collection.JavaConversions._
-    // triggers implicit conversion from java list to scala Seq
-    client.getChildren(path)
-  }
-
-  def getChildrenParentMayNotExist(client: ZkClient, path: String): Seq[String] = {
-    import scala.collection.JavaConversions._
-    // triggers implicit conversion from java list to scala Seq
-
-    var ret: java.util.List[String] = null
-    try {
-      ret = client.getChildren(path)
-    }
-    catch {
-      case e: ZkNoNodeException =>
-        return Nil
-      case e2 => throw e2
-    }
-    return ret
-  }
-
-  /**
-   * Check if the given path exists
-   */
-  def pathExists(client: ZkClient, path: String): Boolean = {
-    client.exists(path)
-  }
-
-  def getLastPart(path : String) : String = path.substring(path.lastIndexOf('/') + 1)
-
-  def getCluster(zkClient: ZkClient) : Cluster = {
-    val cluster = new Cluster
-    val nodes = getChildrenParentMayNotExist(zkClient, BrokerIdsPath)
-    for (node <- nodes) {
-      val brokerZKString = readData(zkClient, BrokerIdsPath + "/" + node)
-      cluster.add(Broker.createBroker(node.toInt, brokerZKString))
-    }
-    cluster
-  }
-
-  def getPartitionsForTopics(zkClient: ZkClient, topics: Iterator[String]): mutable.Map[String, List[String]] = {
-    val ret = new mutable.HashMap[String, List[String]]()
-    for (topic <- topics) {
-      var partList: List[String] = Nil
-      val brokers = getChildrenParentMayNotExist(zkClient, BrokerTopicsPath + "/" + topic)
-      for (broker <- brokers) {
-        val nParts = readData(zkClient, BrokerTopicsPath + "/" + topic + "/" + broker).toInt
-        for (part <- 0 until nParts)
-          partList ::= broker + "-" + part
-      }
-      partList = partList.sortWith((s,t) => s < t)
-      ret += (topic -> partList)
-    }
-    ret
-  }
-
-  def setupPartition(zkClient : ZkClient, brokerId: Int, host: String, port: Int, topic: String, nParts: Int) {
-    val brokerIdPath = BrokerIdsPath + "/" + brokerId
-    val broker = new Broker(brokerId, brokerId.toString, host, port)
-    createEphemeralPathExpectConflict(zkClient, brokerIdPath, broker.getZKString)
-    val brokerPartTopicPath = BrokerTopicsPath + "/" + topic + "/" + brokerId
-    createEphemeralPathExpectConflict(zkClient, brokerPartTopicPath, nParts.toString)    
-  }
-
-  def deletePartition(zkClient : ZkClient, brokerId: Int, topic: String) {
-    val brokerIdPath = BrokerIdsPath + "/" + brokerId
-    zkClient.delete(brokerIdPath)
-    val brokerPartTopicPath = BrokerTopicsPath + "/" + topic + "/" + brokerId
-    zkClient.delete(brokerPartTopicPath)
-  }
-
-  def getConsumersInGroup(zkClient: ZkClient, group: String): Seq[String] = {
-    val dirs = new ZKGroupDirs(group)
-    getChildren(zkClient, dirs.consumerRegistryDir)
-  }
-
-  def getConsumerTopicMaps(zkClient: ZkClient, group: String): Map[String, TopicCount] = {
-    val dirs = new ZKGroupDirs(group)
-    val consumersInGroup = getConsumersInGroup(zkClient, group)
-    val topicCountMaps = consumersInGroup.map(consumerId => TopicCount.constructTopicCount(consumerId,
-      ZkUtils.readData(zkClient, dirs.consumerRegistryDir + "/" + consumerId), zkClient))
-    consumersInGroup.zip(topicCountMaps).toMap
-  }
-
-  def getConsumersPerTopic(zkClient: ZkClient, group: String) : mutable.Map[String, List[String]] = {
-    val dirs = new ZKGroupDirs(group)
-    val consumers = getChildrenParentMayNotExist(zkClient, dirs.consumerRegistryDir)
-    val consumersPerTopicMap = new mutable.HashMap[String, List[String]]
-    for (consumer <- consumers) {
-      val topicCount = TopicCount.constructTopicCount(group, consumer, zkClient)
-      for ((topic, consumerThreadIdSet) <- topicCount.getConsumerThreadIdsPerTopic) {
-        for (consumerThreadId <- consumerThreadIdSet)
-          consumersPerTopicMap.get(topic) match {
-            case Some(curConsumers) => consumersPerTopicMap.put(topic, consumerThreadId :: curConsumers)
-            case _ => consumersPerTopicMap.put(topic, List(consumerThreadId))
-          }
-      }
-    }
-    for ( (topic, consumerList) <- consumersPerTopicMap )
-      consumersPerTopicMap.put(topic, consumerList.sortWith((s,t) => s < t))
-    consumersPerTopicMap
-  }
-}
-
-object ZKStringSerializer extends ZkSerializer {
-
-  @throws(classOf[ZkMarshallingError])
-  def serialize(data : Object) : Array[Byte] = data.asInstanceOf[String].getBytes("UTF-8")
-
-  @throws(classOf[ZkMarshallingError])
-  def deserialize(bytes : Array[Byte]) : Object = {
-    if (bytes == null)
-      null
-    else
-      new String(bytes, "UTF-8")
-  }
-}
-
-class ZKGroupDirs(val group: String) {
-  def consumerDir = ZkUtils.ConsumersPath
-  def consumerGroupDir = consumerDir + "/" + group
-  def consumerRegistryDir = consumerGroupDir + "/ids"
-}
-
-class ZKGroupTopicDirs(group: String, topic: String) extends ZKGroupDirs(group) {
-  def consumerOffsetDir = consumerGroupDir + "/offsets/" + topic
-  def consumerOwnerDir = consumerGroupDir + "/owners/" + topic
-}
-
-
-class ZKConfig(props: Properties) {
-  /** ZK host string */
-  val zkConnect = Utils.getString(props, "zk.connect", null)
-
-  /** zookeeper session timeout */
-  val zkSessionTimeoutMs = Utils.getInt(props, "zk.sessiontimeout.ms", 6000)
-
-  /** the max time that the client waits to establish a connection to zookeeper */
-  val zkConnectionTimeoutMs = Utils.getInt(props, "zk.connectiontimeout.ms",zkSessionTimeoutMs)
-
-  /** how far a ZK follower can be behind a ZK leader */
-  val zkSyncTimeMs = Utils.getInt(props, "zk.synctime.ms", 2000)
-}
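A hedged sketch of how the pieces in this file fit together; the connect string, broker id, and topic/group names below are placeholders, not values from the original code:

    import java.util.Properties
    import org.I0Itec.zkclient.ZkClient

    // Build a ZkClient from ZKConfig defaults (placeholder connect string).
    val props = new Properties()
    props.put("zk.connect", "localhost:2181")
    val zkConfig = new ZKConfig(props)
    val zkClient = new ZkClient(zkConfig.zkConnect, zkConfig.zkSessionTimeoutMs,
                                zkConfig.zkConnectionTimeoutMs, ZKStringSerializer)

    // The directory helpers resolve to paths like these:
    val dirs = new ZKGroupTopicDirs("group1", "topic1")
    // dirs.consumerRegistryDir == "/consumers/group1/ids"
    // dirs.consumerOffsetDir   == "/consumers/group1/offsets/topic1"

    // Register a broker's partition count for a topic, then read it back.
    ZkUtils.setupPartition(zkClient, brokerId = 0, host = "localhost", port = 9092,
                           topic = "topic1", nParts = 2)
    val nParts = ZkUtils.readData(zkClient, ZkUtils.BrokerTopicsPath + "/topic1/0").toInt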
diff --git a/trunk/core/src/main/scala/kafka/utils/package.html b/trunk/core/src/main/scala/kafka/utils/package.html
deleted file mode 100644
index a3d5829..0000000
--- a/trunk/core/src/main/scala/kafka/utils/package.html
+++ /dev/null
@@ -1 +0,0 @@
-Utility functions.
\ No newline at end of file
diff --git a/trunk/core/src/test/resources/log4j.properties b/trunk/core/src/test/resources/log4j.properties
deleted file mode 100644
index fd66977..0000000
--- a/trunk/core/src/test/resources/log4j.properties
+++ /dev/null
@@ -1,25 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-log4j.rootLogger=WARN, stdout
-
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
-
-log4j.logger.kafka=WARN
-
-# zkclient can be verbose; during debugging it is common to adjust it separately
-log4j.logger.org.I0Itec.zkclient.ZkClient=WARN
-log4j.logger.org.apache.zookeeper=WARN
diff --git a/trunk/core/src/test/resources/test-kafka-logs/MagicByte0-0/00000000000000000000.kafka b/trunk/core/src/test/resources/test-kafka-logs/MagicByte0-0/00000000000000000000.kafka
deleted file mode 100644
index e500258..0000000
--- a/trunk/core/src/test/resources/test-kafka-logs/MagicByte0-0/00000000000000000000.kafka
+++ /dev/null
Binary files differ
diff --git a/trunk/core/src/test/scala/other/kafka.log4j.properties b/trunk/core/src/test/scala/other/kafka.log4j.properties
deleted file mode 100644
index 1a53fd5..0000000
--- a/trunk/core/src/test/scala/other/kafka.log4j.properties
+++ /dev/null
@@ -1,22 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-log4j.rootLogger=INFO, KAFKA
-
-log4j.appender.KAFKA=kafka.log4j.KafkaAppender
-
-log4j.appender.KAFKA.Port=9092
-log4j.appender.KAFKA.Host=localhost
-log4j.appender.KAFKA.Topic=test-logger
-log4j.appender.KAFKA.Serializer=kafka.AppenderStringSerializer
diff --git a/trunk/core/src/test/scala/other/kafka/DeleteZKPath.scala b/trunk/core/src/test/scala/other/kafka/DeleteZKPath.scala
deleted file mode 100644
index 2554503..0000000
--- a/trunk/core/src/test/scala/other/kafka/DeleteZKPath.scala
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka
-
-import consumer.ConsumerConfig
-import utils.{ZKStringSerializer, ZkUtils, Utils}
-import org.I0Itec.zkclient.ZkClient
-
-object DeleteZKPath {
-  def main(args: Array[String]) {
-    if(args.length < 2) {
-      println("USAGE: " + DeleteZKPath.getClass.getName + " consumer.properties zk_path")
-      System.exit(1)
-    }
-
-    val config = new ConsumerConfig(Utils.loadProps(args(0)))
-    val zkPath = args(1)
-
-    val zkClient = new ZkClient(config.zkConnect, config.zkSessionTimeoutMs, config.zkConnectionTimeoutMs,
-      ZKStringSerializer)
-
-    try {
-      ZkUtils.deletePathRecursive(zkClient, zkPath);
-      System.out.println(zkPath + " is deleted")
-    } catch {
-      case e: Exception => System.err.println("Path not deleted " + e.printStackTrace())
-    }
-    
-  }
-}
diff --git a/trunk/core/src/test/scala/other/kafka/TestKafkaAppender.scala b/trunk/core/src/test/scala/other/kafka/TestKafkaAppender.scala
deleted file mode 100644
index 8328e99..0000000
--- a/trunk/core/src/test/scala/other/kafka/TestKafkaAppender.scala
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka
-
-import message.Message
-import org.apache.log4j.PropertyConfigurator
-import kafka.utils.Logging
-import serializer.Encoder
-
-object TestKafkaAppender extends Logging {
-  
-  def main(args:Array[String]) {
-    
-    if(args.length < 1) {
-      println("USAGE: " + TestKafkaAppender.getClass.getName + " log4j_config")
-      System.exit(1)
-    }
-
-    try {
-      PropertyConfigurator.configure(args(0))
-    } catch {
-      case e: Exception => System.err.println("KafkaAppender could not be initialized ! Exiting..")
-      e.printStackTrace()
-      System.exit(1)
-    }
-
-    for(i <- 1 to 10)
-      info("test")    
-  }
-}
-
-class AppenderStringSerializer extends Encoder[AnyRef] {
-  def toMessage(event: AnyRef):Message = new Message(event.asInstanceOf[String].getBytes)
-}
-
diff --git a/trunk/core/src/test/scala/other/kafka/TestLinearWriteSpeed.scala b/trunk/core/src/test/scala/other/kafka/TestLinearWriteSpeed.scala
deleted file mode 100644
index d6fc65d..0000000
--- a/trunk/core/src/test/scala/other/kafka/TestLinearWriteSpeed.scala
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka
-
-import java.io._
-import java.nio._
-import java.nio.channels._
-import joptsimple._
-
-object TestLinearWriteSpeed {
-
-  def main(args: Array[String]): Unit = {
-    val parser = new OptionParser
-    val bytesOpt = parser.accepts("bytes", "REQUIRED: The number of bytes to write.")
-                           .withRequiredArg
-                           .describedAs("num_bytes")
-                           .ofType(classOf[java.lang.Integer])
-    val sizeOpt = parser.accepts("size", "REQUIRED: The size of each write.")
-                           .withRequiredArg
-                           .describedAs("num_bytes")
-                           .ofType(classOf[java.lang.Integer])
-    val filesOpt = parser.accepts("files", "REQUIRED: The number of files.")
-                           .withRequiredArg
-                           .describedAs("num_files")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(1)
-    
-    val options = parser.parse(args : _*)
-    
-    for(arg <- List(bytesOpt, sizeOpt, filesOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"") 
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-
-    val bytesToWrite = options.valueOf(bytesOpt).intValue
-    val bufferSize = options.valueOf(sizeOpt).intValue
-    val numFiles = options.valueOf(filesOpt).intValue
-    val buffer = ByteBuffer.allocate(bufferSize)
-    while(buffer.hasRemaining)
-      buffer.put(123.asInstanceOf[Byte])
-    
-    val channels = new Array[FileChannel](numFiles)
-    for(i <- 0 until numFiles) {
-      val file = File.createTempFile("kafka-test", ".dat")
-      file.deleteOnExit()
-      channels(i) = new RandomAccessFile(file, "rw").getChannel()
-    }
-    
-    val begin = System.currentTimeMillis
-    for(i <- 0 until bytesToWrite / bufferSize) {
-      buffer.rewind()
-      channels(i % numFiles).write(buffer)
-    }
-    val ellapsedSecs = (System.currentTimeMillis - begin) / 1000.0
-    System.out.println(bytesToWrite / (1024 * 1024 * ellapsedSecs) + " MB per sec")
-  }
-  
-}
diff --git a/trunk/core/src/test/scala/other/kafka/TestLogPerformance.scala b/trunk/core/src/test/scala/other/kafka/TestLogPerformance.scala
deleted file mode 100644
index 95efe5f..0000000
--- a/trunk/core/src/test/scala/other/kafka/TestLogPerformance.scala
+++ /dev/null
@@ -1,56 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import kafka.message._
-import kafka.utils.{TestUtils, Utils, SystemTime}
-import kafka.server.KafkaConfig
-
-object TestLogPerformance {
-
-  def main(args: Array[String]): Unit = {
-    val props = TestUtils.createBrokerConfig(0, -1)
-    val config = new KafkaConfig(props)
-    if(args.length < 4)
-      Utils.croak("USAGE: java " + getClass().getName() + " num_messages message_size batch_size compression_codec")
-    val numMessages = args(0).toInt
-    val messageSize = args(1).toInt
-    val batchSize = args(2).toInt
-    val compressionCodec = CompressionCodec.getCompressionCodec(args(3).toInt)
-    val dir = TestUtils.tempDir()
-    val log = new Log(dir, SystemTime, 50*1024*1024, config.maxMessageSize, 5000000, 24*7*60*60*1000L, false)
-    val bytes = new Array[Byte](messageSize)
-    new java.util.Random().nextBytes(bytes)
-    val message = new Message(bytes)
-    val messages = new Array[Message](batchSize)
-    for(i <- 0 until batchSize)
-      messages(i) = message
-    val messageSet = new ByteBufferMessageSet(compressionCodec = compressionCodec, messages = messages: _*)
-    val numBatches = numMessages / batchSize
-    val start = System.currentTimeMillis()
-    for(i <- 0 until numBatches)
-      log.append(messageSet)
-    log.close()
-    val ellapsed = (System.currentTimeMillis() - start) / 1000.0
-    val writtenBytes = MessageSet.entrySize(message) * numMessages
-    println("message size = " + MessageSet.entrySize(message))
-    println("MB/sec: " + writtenBytes / ellapsed / (1024.0 * 1024.0))
-    Utils.rm(dir)
-  }
-  
-}
diff --git a/trunk/core/src/test/scala/other/kafka/TestTruncate.scala b/trunk/core/src/test/scala/other/kafka/TestTruncate.scala
deleted file mode 100644
index e90d57d..0000000
--- a/trunk/core/src/test/scala/other/kafka/TestTruncate.scala
+++ /dev/null
@@ -1,39 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka
-
-import java.io._
-import java.nio._
-
-/* This code tests that Java's FileChannel.truncate works correctly; it does not on some platforms. */
-object TestTruncate {
-
-  def main(args: Array[String]): Unit = {
-    val name = File.createTempFile("kafka", ".test")
-    name.deleteOnExit()
-    val file = new RandomAccessFile(name, "rw").getChannel()
-    val buffer = ByteBuffer.allocate(12)
-    buffer.putInt(4).putInt(4).putInt(4)
-    buffer.rewind()
-    file.write(buffer)
-    println("position prior to truncate: " + file.position)
-    file.truncate(4)
-    println("position after truncate to 4: " + file.position)
-  }
-  
-}
diff --git a/trunk/core/src/test/scala/other/kafka/TestZKConsumerOffsets.scala b/trunk/core/src/test/scala/other/kafka/TestZKConsumerOffsets.scala
deleted file mode 100644
index fa709de..0000000
--- a/trunk/core/src/test/scala/other/kafka/TestZKConsumerOffsets.scala
+++ /dev/null
@@ -1,74 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka
-
-import consumer._
-import message.Message
-import utils.Utils
-import java.util.concurrent.CountDownLatch
-
-object TestZKConsumerOffsets {
-  def main(args: Array[String]): Unit = {
-    if(args.length < 3) {
-      println("USAGE: " + TestZKConsumerOffsets.getClass.getName + " consumer.properties topic latest")
-      System.exit(1)
-    }
-    println("Starting consumer...")
-    val topic = args(1)
-    val autoOffsetReset = args(2)
-    val props = Utils.loadProps(args(0))
-    props.put("autooffset.reset", autoOffsetReset)
-    
-    val config = new ConsumerConfig(props)
-    val consumerConnector: ConsumerConnector = Consumer.create(config)
-    val topicMessageStreams = consumerConnector.createMessageStreams(Predef.Map(topic -> 1))
-    var threadList = List[ConsumerThread]()
-    for ((topic, streamList) <- topicMessageStreams)
-      for (stream <- streamList)
-        threadList ::= new ConsumerThread(stream)
-
-    for (thread <- threadList)
-      thread.start
-
-    // attach shutdown handler to catch control-c
-    Runtime.getRuntime().addShutdownHook(new Thread() {
-      override def run() = {
-        consumerConnector.shutdown
-        threadList.foreach(_.shutdown)
-        println("consumer threads shutted down")
-      }
-    })
-  }
-}
-
-private class ConsumerThread(stream: KafkaStream[Message]) extends Thread {
-  val shutdownLatch = new CountDownLatch(1)
-
-  override def run() {
-    println("Starting consumer thread..")
-    for (messageAndMetadata <- stream) {
-      println("consumed: " + Utils.toString(messageAndMetadata.message.payload, "UTF-8"))
-    }
-    shutdownLatch.countDown
-    println("thread shutdown !" )
-  }
-
-  def shutdown() {
-    shutdownLatch.await
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/consumer/TopicFilterTest.scala b/trunk/core/src/test/scala/unit/kafka/consumer/TopicFilterTest.scala
deleted file mode 100644
index 40a2bf7..0000000
--- a/trunk/core/src/test/scala/unit/kafka/consumer/TopicFilterTest.scala
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-
-import junit.framework.Assert._
-import org.scalatest.junit.JUnitSuite
-import org.junit.Test
-
-
-class TopicFilterTest extends JUnitSuite {
-
-  @Test
-  def testWhitelists() {
-
-    val topicFilter1 = new Whitelist("white1,white2")
-    assertFalse(topicFilter1.requiresTopicEventWatcher)
-    assertTrue(topicFilter1.isTopicAllowed("white2"))
-    assertFalse(topicFilter1.isTopicAllowed("black1"))
-
-    val topicFilter2 = new Whitelist(".+")
-    assertTrue(topicFilter2.requiresTopicEventWatcher)
-    assertTrue(topicFilter2.isTopicAllowed("alltopics"))
-    
-    val topicFilter3 = new Whitelist("white_listed-topic.+")
-    assertTrue(topicFilter3.requiresTopicEventWatcher)
-    assertTrue(topicFilter3.isTopicAllowed("white_listed-topic1"))
-    assertFalse(topicFilter3.isTopicAllowed("black1"))
-  }
-
-  @Test
-  def testBlacklists() {
-    val topicFilter1 = new Blacklist("black1")
-    assertTrue(topicFilter1.requiresTopicEventWatcher)
-  }
-}
\ No newline at end of file
diff --git a/trunk/core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala b/trunk/core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala
deleted file mode 100644
index 0df05d3..0000000
--- a/trunk/core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala
+++ /dev/null
@@ -1,288 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import junit.framework.Assert._
-import kafka.zk.ZooKeeperTestHarness
-import kafka.integration.KafkaServerTestHarness
-import kafka.server.KafkaConfig
-import scala.collection._
-import kafka.utils.{Utils, Logging}
-import kafka.utils.{TestZKUtils, TestUtils}
-import org.scalatest.junit.JUnit3Suite
-import org.apache.log4j.{Level, Logger}
-import kafka.message._
-import kafka.serializer.StringDecoder
-
-class ZookeeperConsumerConnectorTest extends JUnit3Suite with KafkaServerTestHarness with ZooKeeperTestHarness with Logging {
-
-  val zookeeperConnect = TestZKUtils.zookeeperConnect
-  val zkConnect = zookeeperConnect
-  val numNodes = 2
-  val numParts = 2
-  val topic = "topic1"
-  val configs =
-    for(props <- TestUtils.createBrokerConfigs(numNodes))
-    yield new KafkaConfig(props) {
-      override val enableZookeeper = true
-      override val numPartitions = numParts
-      override val zkConnect = zookeeperConnect
-    }
-  val group = "group1"
-  val consumer0 = "consumer0"
-  val consumer1 = "consumer1"
-  val consumer2 = "consumer2"
-  val consumer3 = "consumer3"
-  val nMessages = 2
-
-  def testBasic() {
-    val requestHandlerLogger = Logger.getLogger(classOf[kafka.server.KafkaRequestHandlers])
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    var actualMessages: List[Message] = Nil
-
-    // test consumer timeout logic
-    val consumerConfig0 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer0)) {
-      override val consumerTimeoutMs = 200
-    }
-    val zkConsumerConnector0 = new ZookeeperConsumerConnector(consumerConfig0, true)
-    val topicMessageStreams0 = zkConsumerConnector0.createMessageStreams(Predef.Map(topic -> numNodes*numParts/2))
-
-    // no messages to consume, we should hit timeout;
-    // also the iterator should be re-entrant, so loop it twice
-    for (i <- 0 until  2) {
-      try {
-        getMessages(nMessages*2, topicMessageStreams0)
-        fail("should get an exception")
-      }
-      catch {
-        case e: ConsumerTimeoutException => // this is ok
-        case e => throw e
-      }
-    }
-
-    zkConsumerConnector0.shutdown
-
-    // send some messages to each broker
-    val sentMessages1 = sendMessages(nMessages, "batch1")
-    // create a consumer
-    val consumerConfig1 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer1))
-    val zkConsumerConnector1 = new ZookeeperConsumerConnector(consumerConfig1, true)
-    val topicMessageStreams1 = zkConsumerConnector1.createMessageStreams(Predef.Map(topic -> numNodes*numParts/2))
-    val receivedMessages1 = getMessages(nMessages*2, topicMessageStreams1)
-    assertEquals(sentMessages1, receivedMessages1)
-    // commit consumed offsets
-    zkConsumerConnector1.commitOffsets
-
-    // create a consumer
-    val consumerConfig2 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer2))
-    val zkConsumerConnector2 = new ZookeeperConsumerConnector(consumerConfig2, true)
-    val topicMessageStreams2 = zkConsumerConnector2.createMessageStreams(Predef.Map(topic -> numNodes*numParts/2))
-    // send some messages to each broker
-    val sentMessages2 = sendMessages(nMessages, "batch2")
-    Thread.sleep(200)
-    val receivedMessages2_1 = getMessages(nMessages, topicMessageStreams1)
-    val receivedMessages2_2 = getMessages(nMessages, topicMessageStreams2)
-    val receivedMessages2 = (receivedMessages2_1 ::: receivedMessages2_2).sortWith((s,t) => s.checksum < t.checksum)
-    assertEquals(sentMessages2, receivedMessages2)
-
-    // create a consumer with empty map
-    val consumerConfig3 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer3))
-    val zkConsumerConnector3 = new ZookeeperConsumerConnector(consumerConfig3, true)
-    val topicMessageStreams3 = zkConsumerConnector3.createMessageStreams(new mutable.HashMap[String, Int]())
-    // send some messages to each broker
-    Thread.sleep(200)
-    val sentMessages3 = sendMessages(nMessages, "batch3")
-    Thread.sleep(200)
-    val receivedMessages3_1 = getMessages(nMessages, topicMessageStreams1)
-    val receivedMessages3_2 = getMessages(nMessages, topicMessageStreams2)
-    val receivedMessages3 = (receivedMessages3_1 ::: receivedMessages3_2).sortWith((s,t) => s.checksum < t.checksum)
-    assertEquals(sentMessages3, receivedMessages3)
-
-    zkConsumerConnector1.shutdown
-    zkConsumerConnector2.shutdown
-    zkConsumerConnector3.shutdown
-    info("all consumer connectors stopped")
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testCompression() {
-    val requestHandlerLogger = Logger.getLogger(classOf[kafka.server.KafkaRequestHandlers])
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    println("Sending messages for 1st consumer")
-    // send some messages to each broker
-    val sentMessages1 = sendMessages(nMessages, "batch1", DefaultCompressionCodec)
-    // create a consumer
-    val consumerConfig1 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer1))
-    val zkConsumerConnector1 = new ZookeeperConsumerConnector(consumerConfig1, true)
-    val topicMessageStreams1 = zkConsumerConnector1.createMessageStreams(Predef.Map(topic -> numNodes*numParts/2))
-    val receivedMessages1 = getMessages(nMessages*2, topicMessageStreams1)
-    assertEquals(sentMessages1, receivedMessages1)
-    // commit consumed offsets
-    zkConsumerConnector1.commitOffsets
-
-    println("Sending more messages for 2nd consumer")
-    // create a consumer
-    val consumerConfig2 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer2))
-    val zkConsumerConnector2 = new ZookeeperConsumerConnector(consumerConfig2, true)
-    val topicMessageStreams2 = zkConsumerConnector2.createMessageStreams(Predef.Map(topic -> numNodes*numParts/2))
-    // send some messages to each broker
-    val sentMessages2 = sendMessages(nMessages, "batch2", DefaultCompressionCodec)
-    Thread.sleep(200)
-    val receivedMessages2_1 = getMessages(nMessages, topicMessageStreams1)
-    val receivedMessages2_2 = getMessages(nMessages, topicMessageStreams2)
-    val receivedMessages2 = (receivedMessages2_1 ::: receivedMessages2_2).sortWith((s,t) => s.checksum < t.checksum)
-    assertEquals(sentMessages2, receivedMessages2)
-
-    // create a consumer with empty map
-    println("Sending more messages for 3rd consumer")
-    val consumerConfig3 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer3))
-    val zkConsumerConnector3 = new ZookeeperConsumerConnector(consumerConfig3, true)
-    val topicMessageStreams3 = zkConsumerConnector3.createMessageStreams(new mutable.HashMap[String, Int]())
-    // send some messages to each broker
-    Thread.sleep(200)
-    val sentMessages3 = sendMessages(nMessages, "batch3", DefaultCompressionCodec)
-    Thread.sleep(200)
-    val receivedMessages3_1 = getMessages(nMessages, topicMessageStreams1)
-    val receivedMessages3_2 = getMessages(nMessages, topicMessageStreams2)
-    val receivedMessages3 = (receivedMessages3_1 ::: receivedMessages3_2).sortWith((s,t) => s.checksum < t.checksum)
-    assertEquals(sentMessages3, receivedMessages3)
-
-    zkConsumerConnector1.shutdown
-    zkConsumerConnector2.shutdown
-    zkConsumerConnector3.shutdown
-    info("all consumer connectors stopped")
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testCompressionSetConsumption() {
-    val requestHandlerLogger = Logger.getLogger(classOf[kafka.server.KafkaRequestHandlers])
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    var actualMessages: List[Message] = Nil
-
-    // shutdown one server
-    servers.last.shutdown
-    Thread.sleep(500)
-
-    // send some messages to each broker
-    val sentMessages = sendMessages(configs.head, 200, "batch1", DefaultCompressionCodec)
-    // test consumer timeout logic
-    val consumerConfig0 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer0)) {
-      override val consumerTimeoutMs = 5000
-    }
-    val zkConsumerConnector0 = new ZookeeperConsumerConnector(consumerConfig0, true)
-    val topicMessageStreams0 = zkConsumerConnector0.createMessageStreams(Predef.Map(topic -> 1))
-    getMessages(100, topicMessageStreams0)
-    zkConsumerConnector0.shutdown
-    // at this point, only some part of the message set was consumed. So consumed offset should still be 0
-    // also fetched offset should be 0
-    val zkConsumerConnector1 = new ZookeeperConsumerConnector(consumerConfig0, true)
-    val topicMessageStreams1 = zkConsumerConnector1.createMessageStreams(Predef.Map(topic -> 1))
-    val receivedMessages = getMessages(400, topicMessageStreams1)
-    val sortedReceivedMessages = receivedMessages.sortWith((s,t) => s.checksum < t.checksum)
-    val sortedSentMessages = sentMessages.sortWith((s,t) => s.checksum < t.checksum)
-    assertEquals(sortedSentMessages, sortedReceivedMessages)
-    zkConsumerConnector1.shutdown
-
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testConsumerDecoder() {
-    val requestHandlerLogger = Logger.getLogger(classOf[kafka.server.KafkaRequestHandlers])
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    val sentMessages = sendMessages(nMessages, "batch1", NoCompressionCodec).
-      map(m => Utils.toString(m.payload, "UTF-8")).
-      sortWith((s, t) => s.compare(t) == -1)
-    val consumerConfig = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer1))
-
-    val zkConsumerConnector =
-      new ZookeeperConsumerConnector(consumerConfig, true)
-    val topicMessageStreams =
-      zkConsumerConnector.createMessageStreams(
-        Predef.Map(topic -> numNodes*numParts/2), new StringDecoder)
-
-    var receivedMessages: List[String] = Nil
-    for ((topic, messageStreams) <- topicMessageStreams) {
-      for (messageStream <- messageStreams) {
-        val iterator = messageStream.iterator
-        for (i <- 0 until nMessages * 2) {
-          assertTrue(iterator.hasNext())
-          val message = iterator.next().message
-          receivedMessages ::= message
-          debug("received message: " + message)
-        }
-      }
-    }
-    receivedMessages = receivedMessages.sortWith((s, t) => s.compare(t) == -1)
-    assertEquals(sentMessages, receivedMessages)
-
-    zkConsumerConnector.shutdown()
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def sendMessages(conf: KafkaConfig, messagesPerNode: Int, header: String, compression: CompressionCodec): List[Message]= {
-    var messages: List[Message] = Nil
-    val producer = TestUtils.createProducer("localhost", conf.port)
-    for (partition <- 0 until numParts) {
-      val ms = 0.until(messagesPerNode).map(x =>
-        new Message((header + conf.brokerId + "-" + partition + "-" + x).getBytes)).toArray
-      val mSet = new ByteBufferMessageSet(compressionCodec = compression, messages = ms: _*)
-      for (message <- ms)
-        messages ::= message
-      producer.send(topic, partition, mSet)
-    }
-    producer.close()
-    messages
-  }
-
-  def sendMessages(messagesPerNode: Int, header: String, compression: CompressionCodec = NoCompressionCodec): List[Message]= {
-    var messages: List[Message] = Nil
-    for(conf <- configs) {
-      messages ++= sendMessages(conf, messagesPerNode, header, compression)
-    }
-    messages.sortWith((s,t) => s.checksum < t.checksum)
-  }
-
-  def getMessages(nMessagesPerThread: Int, topicMessageStreams: Map[String,List[KafkaStream[Message]]]): List[Message]= {
-    var messages: List[Message] = Nil
-    for ((topic, messageStreams) <- topicMessageStreams) {
-      for (messageStream <- messageStreams) {
-        val iterator = messageStream.iterator
-        for (i <- 0 until nMessagesPerThread) {
-          assertTrue(iterator.hasNext)
-          val message = iterator.next.message
-          messages ::= message
-          debug("received message: " + Utils.toString(message.payload, "UTF-8"))
-        }
-      }
-    }
-    messages.sortWith((s,t) => s.checksum < t.checksum)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala b/trunk/core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala
deleted file mode 100644
index 38a615d..0000000
--- a/trunk/core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala
+++ /dev/null
@@ -1,223 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.integration
-
-import junit.framework.Assert._
-import kafka.zk.ZooKeeperTestHarness
-import java.nio.channels.ClosedByInterruptException
-import java.util.concurrent.atomic.AtomicInteger
-import kafka.utils.{ZKGroupTopicDirs, Logging}
-import kafka.consumer.{ConsumerTimeoutException, ConsumerConfig, ConsumerConnector, Consumer}
-import kafka.server.{KafkaRequestHandlers, KafkaServer, KafkaConfig}
-import org.apache.log4j.{Level, Logger}
-import org.scalatest.junit.JUnit3Suite
-import kafka.utils.{TestUtils, TestZKUtils}
-
-class AutoOffsetResetTest extends JUnit3Suite with ZooKeeperTestHarness with Logging {
-
-  val zkConnect = TestZKUtils.zookeeperConnect
-  val topic = "test_topic"
-  val group = "default_group"
-  val testConsumer = "consumer"
-  val brokerPort = 9892
-  val kafkaConfig = new KafkaConfig(TestUtils.createBrokerConfig(0, brokerPort))
-  var kafkaServer : KafkaServer = null
-  val numMessages = 10
-  val largeOffset = 10000
-  val smallOffset = -1
-  
-  val requestHandlerLogger = Logger.getLogger(classOf[KafkaRequestHandlers])
-
-  override def setUp() {
-    super.setUp()
-    kafkaServer = TestUtils.createServer(kafkaConfig)
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-  }
-
-  override def tearDown() {
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-    kafkaServer.shutdown
-    super.tearDown
-  }
-  
-  def testEarliestOffsetResetForward() = {
-    val producer = TestUtils.createProducer("localhost", brokerPort)
-
-    for(i <- 0 until numMessages) {
-      producer.send(topic, TestUtils.singleMessageSet("test".getBytes()))
-    }
-
-    // update offset in zookeeper for consumer to jump "forward" in time
-    val dirs = new ZKGroupTopicDirs(group, topic)
-    var consumerProps = TestUtils.createConsumerProperties(zkConnect, group, testConsumer)
-    consumerProps.put("autooffset.reset", "smallest")
-    consumerProps.put("consumer.timeout.ms", "2000")
-    val consumerConfig = new ConsumerConfig(consumerProps)
-    
-    TestUtils.updateConsumerOffset(consumerConfig, dirs.consumerOffsetDir + "/" + "0-0", largeOffset)
-    info("Updated consumer offset to " + largeOffset)
-
-    Thread.sleep(500)
-    val consumerConnector: ConsumerConnector = Consumer.create(consumerConfig)
-    val messageStreams = consumerConnector.createMessageStreams(Map(topic -> 1))
-
-    var threadList = List[Thread]()
-    val nMessages : AtomicInteger = new AtomicInteger(0)
-    for ((topic, streamList) <- messageStreams)
-      for (i <- 0 until streamList.length)
-        threadList ::= new Thread("kafka-zk-consumer-" + i) {
-          override def run() {
-
-            try {
-              for (message <- streamList(i)) {
-                nMessages.incrementAndGet
-              }
-            }
-            catch {
-              case te: ConsumerTimeoutException => info("Consumer thread timing out..")
-              case _: InterruptedException => 
-              case _: ClosedByInterruptException =>
-              case e => throw e
-            }
-          }
-
-        }
-
-
-    for (thread <- threadList)
-      thread.start
-
-    threadList(0).join(2000)
-
-    info("Asserting...")
-    assertEquals(numMessages, nMessages.get)
-    consumerConnector.shutdown
-  }
-
-  def testEarliestOffsetResetBackward() = {
-    val producer = TestUtils.createProducer("localhost", brokerPort)
-
-    for(i <- 0 until numMessages) {
-      producer.send(topic, TestUtils.singleMessageSet("test".getBytes()))
-    }
-
-    // update offset in zookeeper for consumer to jump "backward" in time
-    val dirs = new ZKGroupTopicDirs(group, topic)
-    var consumerProps = TestUtils.createConsumerProperties(zkConnect, group, testConsumer)
-    consumerProps.put("autooffset.reset", "smallest")
-    consumerProps.put("consumer.timeout.ms", "2000")
-    val consumerConfig = new ConsumerConfig(consumerProps)
-
-    TestUtils.updateConsumerOffset(consumerConfig, dirs.consumerOffsetDir + "/" + "0-0", smallOffset)
-    info("Updated consumer offset to " + smallOffset)
-
-
-    val consumerConnector: ConsumerConnector = Consumer.create(consumerConfig)
-    val messageStreams = consumerConnector.createMessageStreams(Map(topic -> 1))
-
-    var threadList = List[Thread]()
-    val nMessages : AtomicInteger = new AtomicInteger(0)
-    for ((topic, streamList) <- messageStreams)
-      for (i <- 0 until streamList.length)
-        threadList ::= new Thread("kafka-zk-consumer-" + i) {
-          override def run() {
-
-            try {
-              for (message <- streamList(i)) {
-                nMessages.incrementAndGet
-              }
-            }
-            catch {
-              case _: InterruptedException => 
-              case _: ClosedByInterruptException =>
-              case e => throw e
-            }
-          }
-
-        }
-
-
-    for (thread <- threadList)
-      thread.start
-
-    threadList(0).join(2000)
-
-    info("Asserting...")
-    assertEquals(numMessages, nMessages.get)
-    consumerConnector.shutdown
-  }
-
-  def testLatestOffsetResetForward() = {
-    val producer = TestUtils.createProducer("localhost", brokerPort)
-
-    for(i <- 0 until numMessages) {
-      producer.send(topic, TestUtils.singleMessageSet("test".getBytes()))
-    }
-
-    // update offset in zookeeper for consumer to jump "forward" in time
-    val dirs = new ZKGroupTopicDirs(group, topic)
-    var consumerProps = TestUtils.createConsumerProperties(zkConnect, group, testConsumer)
-    consumerProps.put("autooffset.reset", "largest")
-    consumerProps.put("consumer.timeout.ms", "2000")
-    val consumerConfig = new ConsumerConfig(consumerProps)
-
-    TestUtils.updateConsumerOffset(consumerConfig, dirs.consumerOffsetDir + "/" + "0-0", largeOffset)
-    info("Updated consumer offset to " + largeOffset)
-
-
-    val consumerConnector: ConsumerConnector = Consumer.create(consumerConfig)
-    val messageStreams = consumerConnector.createMessageStreams(Map(topic -> 1))
-
-    var threadList = List[Thread]()
-    val nMessages : AtomicInteger = new AtomicInteger(0)
-    for ((topic, streamList) <- messageStreams)
-      for (i <- 0 until streamList.length)
-        threadList ::= new Thread("kafka-zk-consumer-" + i) {
-          override def run() {
-
-            try {
-              for (message <- streamList(i)) {
-                nMessages.incrementAndGet
-              }
-            }
-            catch {
-              case _: InterruptedException => 
-              case _: ClosedByInterruptException =>
-              case e => throw e
-            }
-          }
-
-        }
-
-
-    for (thread <- threadList)
-      thread.start
-
-    threadList(0).join(2000)
-
-    info("Asserting...")
-
-    assertEquals(0, nMessages.get)
-    consumerConnector.shutdown
-  }
-
-  
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/integration/BackwardsCompatibilityTest.scala b/trunk/core/src/test/scala/unit/kafka/integration/BackwardsCompatibilityTest.scala
deleted file mode 100644
index 9febfc8..0000000
--- a/trunk/core/src/test/scala/unit/kafka/integration/BackwardsCompatibilityTest.scala
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.integration
-
-import kafka.server.{KafkaServer, KafkaConfig}
-import org.scalatest.junit.JUnit3Suite
-import org.apache.log4j.Logger
-import java.util.Properties
-import kafka.consumer.SimpleConsumer
-import kafka.utils.TestUtils
-import kafka.api.{OffsetRequest, FetchRequest}
-import junit.framework.Assert._
-
-class BackwardsCompatibilityTest extends JUnit3Suite {
-
-  val topic = "MagicByte0"
-  val group = "default_group"
-  val testConsumer = "consumer"
-  val kafkaProps = new Properties
-  val host = "localhost"
-  val port = TestUtils.choosePort
-  val loader = getClass.getClassLoader
-  val kafkaLogDir = loader.getResource("test-kafka-logs")
-  kafkaProps.put("brokerid", "12")
-  kafkaProps.put("port", port.toString)
-  kafkaProps.put("log.dir", kafkaLogDir.getPath)
-  val kafkaConfig =
-    new KafkaConfig(kafkaProps) {
-      override val enableZookeeper = false
-    }
-  var kafkaServer : KafkaServer = null
-  var simpleConsumer: SimpleConsumer = null
-
-  private val logger = Logger.getLogger(getClass())
-
-  override def setUp() {
-    super.setUp()
-    kafkaServer = TestUtils.createServer(kafkaConfig)
-    simpleConsumer = new SimpleConsumer(host, port, 1000000, 64*1024)
-  }
-
-  override def tearDown() {
-    simpleConsumer.close
-    kafkaServer.shutdown
-    super.tearDown
-  }
-
-  // test for reading data with magic byte 0
-  def testProtocolVersion0() {
-    val lastOffset = simpleConsumer.getOffsetsBefore(topic, 0, OffsetRequest.LatestTime, 1)
-    var fetchOffset: Long = 0L
-    var messageCount: Int = 0
-
-    while(fetchOffset < lastOffset(0)) {
-      val fetched = simpleConsumer.fetch(new FetchRequest(topic, 0, fetchOffset, 10000))
-      fetched.foreach(m => fetchOffset = m.offset)
-      messageCount += fetched.size
-    }
-    assertEquals(100, messageCount)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/integration/FetcherTest.scala b/trunk/core/src/test/scala/unit/kafka/integration/FetcherTest.scala
deleted file mode 100644
index 915af85..0000000
--- a/trunk/core/src/test/scala/unit/kafka/integration/FetcherTest.scala
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.consumer
-
-import java.util.concurrent._
-import java.util.concurrent.atomic._
-import scala.collection._
-import junit.framework.Assert._
-
-import kafka.cluster._
-import kafka.message._
-import kafka.server._
-import org.scalatest.junit.JUnit3Suite
-import kafka.integration.KafkaServerTestHarness
-import kafka.utils.TestUtils
-
-class FetcherTest extends JUnit3Suite with KafkaServerTestHarness {
-
-  val numNodes = 2
-  val configs = 
-    for(props <- TestUtils.createBrokerConfigs(numNodes))
-      yield new KafkaConfig(props) {
-        override val enableZookeeper = false
-      }
-  val messages = new mutable.HashMap[Int, ByteBufferMessageSet]
-  val topic = "topic"
-  val cluster = new Cluster(configs.map(c => new Broker(c.brokerId, c.brokerId.toString, "localhost", c.port)))
-  val shutdown = ZookeeperConsumerConnector.shutdownCommand
-  val queue = new LinkedBlockingQueue[FetchedDataChunk]
-  val topicInfos = configs.map(c => new PartitionTopicInfo(topic,
-                                                      c.brokerId,
-                                                      new Partition(c.brokerId, 0), 
-                                                      queue, 
-                                                      new AtomicLong(0), 
-                                                      new AtomicLong(0), 
-                                                      new AtomicInteger(0)))
-  
-  var fetcher: Fetcher = null
-
-  override def setUp() {
-    super.setUp
-    fetcher = new Fetcher(new ConsumerConfig(TestUtils.createConsumerProperties("", "", "")), null)
-    fetcher.stopConnectionsToAllBrokers
-    fetcher.startConnections(topicInfos, cluster)
-  }
-
-  override def tearDown() {
-    fetcher.stopConnectionsToAllBrokers
-    super.tearDown
-  }
-    
-  def testFetcher() {
-    val perNode = 2
-    var count = sendMessages(perNode)
-    fetch(count)
-    Thread.sleep(100)
-    assertQueueEmpty()
-    count = sendMessages(perNode)
-    fetch(count)
-    Thread.sleep(100)
-    assertQueueEmpty()
-  }
-  
-  def assertQueueEmpty(): Unit = assertEquals(0, queue.size)
-  
-  def sendMessages(messagesPerNode: Int): Int = {
-    var count = 0
-    for(conf <- configs) {
-      val producer = TestUtils.createProducer("localhost", conf.port)
-      val ms = 0.until(messagesPerNode).map(x => new Message((conf.brokerId * 5 + x).toString.getBytes)).toArray
-      val mSet = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = ms: _*)
-      messages += conf.brokerId -> mSet
-      producer.send(topic, mSet)
-      producer.close()
-      count += ms.size
-    }
-    count
-  }
-  
-  def fetch(expected: Int) {
-    var count = 0
-    while(true) {
-      val chunk = queue.poll(2L, TimeUnit.SECONDS)
-      assertNotNull("Timed out waiting for data chunk " + (count + 1), chunk)
-      for(message <- chunk.messages)
-        count += 1
-      if(count == expected)
-        return
-    }
-  }
-  
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/integration/KafkaServerTestHarness.scala b/trunk/core/src/test/scala/unit/kafka/integration/KafkaServerTestHarness.scala
deleted file mode 100644
index 6b825f5..0000000
--- a/trunk/core/src/test/scala/unit/kafka/integration/KafkaServerTestHarness.scala
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.integration
-
-import kafka.server._
-import kafka.utils.{Utils, TestUtils}
-import org.scalatest.junit.JUnit3Suite
-
-/**
- * A test harness that brings up some number of broker nodes
- */
-trait KafkaServerTestHarness extends JUnit3Suite {
-
-  val configs: List[KafkaConfig]
-  var servers: List[KafkaServer] = null
-
-  override def setUp() {
-    if(configs.size <= 0)
-      throw new IllegalArgumentException("Must supply at least one server config.")
-    servers = configs.map(TestUtils.createServer(_))
-    super.setUp
-  }
-
-  override def tearDown() {
-    super.tearDown
-    servers.map(server => server.shutdown())
-    servers.map(server => Utils.rm(server.config.logDir))
-  }
-
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/integration/LazyInitProducerTest.scala b/trunk/core/src/test/scala/unit/kafka/integration/LazyInitProducerTest.scala
deleted file mode 100644
index 12701f1..0000000
--- a/trunk/core/src/test/scala/unit/kafka/integration/LazyInitProducerTest.scala
+++ /dev/null
@@ -1,185 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.integration
-
-import scala.collection._
-import junit.framework.Assert._
-import kafka.common.OffsetOutOfRangeException
-import kafka.api.{ProducerRequest, FetchRequest}
-import kafka.server.{KafkaRequestHandlers, KafkaServer, KafkaConfig}
-import org.apache.log4j.{Level, Logger}
-import org.scalatest.junit.JUnit3Suite
-import kafka.utils.{TestUtils, Utils}
-import kafka.message.{NoCompressionCodec, Message, ByteBufferMessageSet}
-
-/**
- * End-to-end tests of the primitive APIs against a local server
- */
-class LazyInitProducerTest extends JUnit3Suite with ProducerConsumerTestHarness   {
-
-  val port = TestUtils.choosePort
-  val props = TestUtils.createBrokerConfig(0, port)
-  val config = new KafkaConfig(props) {
-                 override val enableZookeeper = false
-               }
-  val configs = List(config)
-  var servers: List[KafkaServer] = null
-  val requestHandlerLogger = Logger.getLogger(classOf[KafkaRequestHandlers])
-
-  override def setUp() {
-    super.setUp
-    if(configs.size <= 0)
-      throw new IllegalArgumentException("Must supply at least one server config.")
-    servers = configs.map(TestUtils.createServer(_))
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)    
-  }
-
-  override def tearDown() {
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-
-    super.tearDown    
-    servers.map(server => server.shutdown())
-    servers.map(server => Utils.rm(server.config.logDir))
-  }
-  
-  def testProduceAndFetch() {
-    // send some messages
-    val topic = "test"
-    val sent = new ByteBufferMessageSet(NoCompressionCodec,
-                                        new Message("hello".getBytes()), new Message("there".getBytes()))
-    producer.send(topic, sent)
-    sent.getBuffer.rewind
-    var fetched: ByteBufferMessageSet = null
-    while(fetched == null || fetched.validBytes == 0)
-      fetched = consumer.fetch(new FetchRequest(topic, 0, 0, 10000))
-    TestUtils.checkEquals(sent.iterator, fetched.iterator)
-
-    // send an invalid offset
-    var exceptionThrown = false
-    try {
-      val fetchedWithError = consumer.fetch(new FetchRequest(topic, 0, -1, 10000))
-      fetchedWithError.iterator
-    }
-    catch {
-      case e: OffsetOutOfRangeException => exceptionThrown = true
-      case e2 => throw e2
-    }
-    assertTrue(exceptionThrown)
-  }
-
-  def testProduceAndMultiFetch() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    {
-      val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics) {
-        val set = new ByteBufferMessageSet(NoCompressionCodec,
-                                           new Message(("a_" + topic).getBytes), new Message(("b_" + topic).getBytes))
-        messages += topic -> set
-        producer.send(topic, set)
-        set.getBuffer.rewind
-        fetches += new FetchRequest(topic, 0, 0, 10000)
-      }
-
-      // wait a bit for produced message to be available
-      Thread.sleep(200)
-      val response = consumer.multifetch(fetches: _*)
-      for((topic, resp) <- topics.zip(response.toList))
-        TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-    }
-
-    {
-      // send some invalid offsets
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, 0, -1, 10000)
-
-      var exceptionThrown = false
-      try {
-        val responses = consumer.multifetch(fetches: _*)
-        for(resp <- responses)
-          resp.iterator
-      }
-      catch {
-        case e: OffsetOutOfRangeException => exceptionThrown = true
-        case e2 => throw e2
-      }
-      assertTrue(exceptionThrown)
-    }
-  }
-
-  def testMultiProduce() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-    val fetches = new mutable.ArrayBuffer[FetchRequest]
-    var produceList: List[ProducerRequest] = Nil
-    for(topic <- topics) {
-      val set = new ByteBufferMessageSet(NoCompressionCodec,
-                                         new Message(("a_" + topic).getBytes), new Message(("b_" + topic).getBytes))
-      messages += topic -> set
-      produceList ::= new ProducerRequest(topic, 0, set)
-      fetches += new FetchRequest(topic, 0, 0, 10000)
-    }
-    producer.multiSend(produceList.toArray)
-
-    for (messageSet <- messages.values)
-      messageSet.getBuffer.rewind
-
-    // wait a bit for produced message to be available
-    Thread.sleep(200)
-    val response = consumer.multifetch(fetches: _*)
-    for((topic, resp) <- topics.zip(response.toList))
-      TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-  }
-
-  def testMultiProduceResend() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-    val fetches = new mutable.ArrayBuffer[FetchRequest]
-    var produceList: List[ProducerRequest] = Nil
-    for(topic <- topics) {
-      val set = new ByteBufferMessageSet(NoCompressionCodec,
-                                         new Message(("a_" + topic).getBytes), new Message(("b_" + topic).getBytes))
-      messages += topic -> set
-      produceList ::= new ProducerRequest(topic, 0, set)
-      fetches += new FetchRequest(topic, 0, 0, 10000)
-    }
-    producer.multiSend(produceList.toArray)
-
-    // resend the same multisend
-    producer.multiSend(produceList.toArray)
-
-    for (messageSet <- messages.values)
-      messageSet.getBuffer.rewind
-
-    // wait a bit for produced message to be available
-    Thread.sleep(750)
-    val response = consumer.multifetch(fetches: _*)
-    for((topic, resp) <- topics.zip(response.toList))
-      TestUtils.checkEquals(TestUtils.stackedIterator(messages(topic).map(m => m.message).iterator,
-                                                      messages(topic).map(m => m.message).iterator),
-                            resp.map(m => m.message).iterator)
-//      TestUtils.checkEquals(TestUtils.stackedIterator(messages(topic).iterator, messages(topic).iterator), resp.iterator)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/integration/LogCorruptionTest.scala b/trunk/core/src/test/scala/unit/kafka/integration/LogCorruptionTest.scala
deleted file mode 100644
index 24c24be..0000000
--- a/trunk/core/src/test/scala/unit/kafka/integration/LogCorruptionTest.scala
+++ /dev/null
@@ -1,111 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import kafka.server.KafkaConfig
-import java.io.File
-import java.nio.ByteBuffer
-import kafka.utils.Utils
-import kafka.api.FetchRequest
-import kafka.common.InvalidMessageSizeException
-import kafka.zk.ZooKeeperTestHarness
-import kafka.utils.{TestZKUtils, TestUtils}
-import kafka.consumer.{ZookeeperConsumerConnector, ConsumerConfig}
-import org.scalatest.junit.JUnit3Suite
-import kafka.integration.ProducerConsumerTestHarness
-import kafka.integration.KafkaServerTestHarness
-import org.apache.log4j.{Logger, Level}
-import kafka.message.{NoCompressionCodec, Message, ByteBufferMessageSet}
-
-class LogCorruptionTest extends JUnit3Suite with ProducerConsumerTestHarness with KafkaServerTestHarness with ZooKeeperTestHarness {
-  val zkConnect = TestZKUtils.zookeeperConnect  
-  val port = TestUtils.choosePort
-  val props = TestUtils.createBrokerConfig(0, port)
-  val config = new KafkaConfig(props) {
-                 override val hostName = "localhost"
-                 override val enableZookeeper = true
-               }
-  val configs = List(config)
-  val topic = "test"
-  val partition = 0
-
-  def testMessageSizeTooLarge() {
-    val requestHandlerLogger = Logger.getLogger(classOf[kafka.server.KafkaRequestHandlers])
-    val fetcherLogger = Logger.getLogger(classOf[kafka.consumer.FetcherRunnable])
-
-    requestHandlerLogger.setLevel(Level.FATAL)
-    fetcherLogger.setLevel(Level.FATAL)
-
-    // send some messages
-    val sent1 = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message("hello".getBytes()))
-    producer.send(topic, sent1)
-    Thread.sleep(200)
-
-    // corrupt the file on disk
-    val logFile = new File(config.logDir + File.separator + topic + "-" + partition, Log.nameFromOffset(0))
-    val byteBuffer = ByteBuffer.allocate(4)
-    byteBuffer.putInt(1000) // wrong message size
-    byteBuffer.rewind()
-    val channel = Utils.openChannel(logFile, true)
-    channel.write(byteBuffer)
-    channel.force(true)
-    channel.close
-
-    Thread.sleep(500)
-    // test SimpleConsumer
-    val messageSet = consumer.fetch(new FetchRequest(topic, partition, 0, 10000))
-    try {
-      for (msg <- messageSet)
-        fail("shouldn't reach here in SimpleConsumer since log file is corrupted.")
-      fail("shouldn't reach here in SimpleConsumer since log file is corrupted.")
-    }
-    catch {
-      case e: InvalidMessageSizeException => "This is good"
-    }
-
-    val messageSet2 = consumer.fetch(new FetchRequest(topic, partition, 0, 10000))
-    try {
-      for (msg <- messageSet2)
-        fail("shouldn't reach here in SimpleConsumer since log file is corrupted.")
-      fail("shouldn't reach here in SimpleConsumer since log file is corrupted.")
-    }
-    catch {
-      case e: InvalidMessageSizeException => println("This is good")
-    }
-
-    // test ZookeeperConsumer
-    val consumerConfig1 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, "group1", "consumer1", 10000))
-    val zkConsumerConnector1 = new ZookeeperConsumerConnector(consumerConfig1)
-    val topicMessageStreams1 = zkConsumerConnector1.createMessageStreams(Predef.Map(topic -> 1))
-    try {
-      for ((topic, messageStreams) <- topicMessageStreams1)
-        for (message <- messageStreams(0))
-          fail("shouldn't reach here in ZookeeperConsumer since log file is corrupted.")
-      fail("shouldn't reach here in ZookeeperConsumer since log file is corrupted.")
-    }
-    catch {
-      case e: InvalidMessageSizeException => "This is good"
-      case e: Exception => "This is fine too"
-    }
-
-    zkConsumerConnector1.shutdown
-    requestHandlerLogger.setLevel(Level.ERROR)
-    fetcherLogger.setLevel(Level.ERROR)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala b/trunk/core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala
deleted file mode 100644
index 48d01e9..0000000
--- a/trunk/core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala
+++ /dev/null
@@ -1,271 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.integration
-
-import scala.collection._
-import junit.framework.Assert._
-import kafka.api.{ProducerRequest, FetchRequest}
-import kafka.common.{OffsetOutOfRangeException, InvalidPartitionException}
-import kafka.server.{KafkaRequestHandlers, KafkaConfig}
-import org.apache.log4j.{Level, Logger}
-import org.scalatest.junit.JUnit3Suite
-import java.util.Properties
-import kafka.producer.{ProducerData, Producer, ProducerConfig}
-import kafka.serializer.StringDecoder
-import kafka.utils.TestUtils
-import kafka.message.{DefaultCompressionCodec, NoCompressionCodec, Message, ByteBufferMessageSet}
-import java.io.File
-
-/**
- * End-to-end tests of the primitive APIs against a local server
- */
-class PrimitiveApiTest extends JUnit3Suite with ProducerConsumerTestHarness with KafkaServerTestHarness {
-  
-  val port = TestUtils.choosePort
-  val props = TestUtils.createBrokerConfig(0, port)
-  val config = new KafkaConfig(props) {
-                 override val enableZookeeper = false
-               }
-  val configs = List(config)
-  val requestHandlerLogger = Logger.getLogger(classOf[KafkaRequestHandlers])
-
-  def testDefaultEncoderProducerAndFetch() {
-    val topic = "test-topic"
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("broker.list", "0:localhost:" + port)
-    val config = new ProducerConfig(props)
-
-    val stringProducer1 = new Producer[String, String](config)
-    stringProducer1.send(new ProducerData[String, String](topic, "test", Array("test-message")))
-    Thread.sleep(200)
-
-    var fetched = consumer.fetch(new FetchRequest(topic, 0, 0, 10000))
-    assertTrue(fetched.iterator.hasNext)
-
-    val fetchedMessageAndOffset = fetched.iterator.next
-    val stringDecoder = new StringDecoder
-    val fetchedStringMessage = stringDecoder.toEvent(fetchedMessageAndOffset.message)
-    assertEquals("test-message", fetchedStringMessage)
-  }
-
-  def testDefaultEncoderProducerAndFetchWithCompression() {
-    val topic = "test-topic"
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("broker.list", "0:localhost:" + port)
-    props.put("compression", "true")
-    val config = new ProducerConfig(props)
-
-    val stringProducer1 = new Producer[String, String](config)
-    stringProducer1.send(new ProducerData[String, String](topic, "test", Array("test-message")))
-    Thread.sleep(200)
-
-    var fetched = consumer.fetch(new FetchRequest(topic, 0, 0, 10000))
-    assertTrue(fetched.iterator.hasNext)
-
-    val fetchedMessageAndOffset = fetched.iterator.next
-    val stringDecoder = new StringDecoder
-    val fetchedStringMessage = stringDecoder.toEvent(fetchedMessageAndOffset.message)
-    assertEquals("test-message", fetchedStringMessage)
-  }
-
-  def testProduceAndMultiFetch() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    {
-      val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics) {
-        val set = new ByteBufferMessageSet(NoCompressionCodec,
-                                           new Message(("a_" + topic).getBytes), new Message(("b_" + topic).getBytes))
-        messages += topic -> set
-        producer.send(topic, set)
-        set.getBuffer.rewind
-        fetches += new FetchRequest(topic, 0, 0, 10000)
-      }
-
-      // wait a bit for produced message to be available
-      Thread.sleep(700)
-      val response = consumer.multifetch(fetches: _*)
-      for((topic, resp) <- topics.zip(response.toList))
-        TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-    }
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    {
-      // send some invalid offsets
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, 0, -1, 10000)
-
-      try {
-        val responses = consumer.multifetch(fetches: _*)
-        for(resp <- responses)
-          resp.iterator
-        fail("expect exception")
-      }
-      catch {
-        case e: OffsetOutOfRangeException => "this is good"
-      }
-    }    
-
-    {
-      // send some invalid partitions
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, -1, 0, 10000)
-
-      try {
-        val responses = consumer.multifetch(fetches: _*)
-        for(resp <- responses)
-          resp.iterator
-        fail("expect exception")
-      }
-      catch {
-        case e: InvalidPartitionException => "this is good"
-      }
-    }
-
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testProduceAndMultiFetchWithCompression() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    {
-      val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics) {
-        val set = new ByteBufferMessageSet(DefaultCompressionCodec,
-                                           new Message(("a_" + topic).getBytes), new Message(("b_" + topic).getBytes))
-        messages += topic -> set
-        producer.send(topic, set)
-        set.getBuffer.rewind
-        fetches += new FetchRequest(topic, 0, 0, 10000)
-      }
-
-      // wait a bit for produced message to be available
-      Thread.sleep(200)
-      val response = consumer.multifetch(fetches: _*)
-      for((topic, resp) <- topics.zip(response.toList))
-        TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-    }
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    {
-      // send some invalid offsets
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, 0, -1, 10000)
-
-      try {
-        val responses = consumer.multifetch(fetches: _*)
-        for(resp <- responses)
-          resp.iterator
-        fail("expect exception")
-      }
-      catch {
-        case e: OffsetOutOfRangeException => "this is good"
-      }
-    }
-
-    {
-      // send some invalid partitions
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, -1, 0, 10000)
-
-      try {
-        val responses = consumer.multifetch(fetches: _*)
-        for(resp <- responses)
-          resp.iterator
-        fail("expect exception")
-      }
-      catch {
-        case e: InvalidPartitionException => "this is good"
-      }
-    }
-
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testMultiProduce() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-    val fetches = new mutable.ArrayBuffer[FetchRequest]
-    var produceList: List[ProducerRequest] = Nil
-    for(topic <- topics) {
-      val set = new ByteBufferMessageSet(NoCompressionCodec,
-                                         new Message(("a_" + topic).getBytes), new Message(("b_" + topic).getBytes))
-      messages += topic -> set
-      produceList ::= new ProducerRequest(topic, 0, set)
-      fetches += new FetchRequest(topic, 0, 0, 10000)
-    }
-    producer.multiSend(produceList.toArray)
-
-    for (messageSet <- messages.values)
-      messageSet.getBuffer.rewind
-      
-    // wait a bit for produced message to be available
-    Thread.sleep(200)
-    val response = consumer.multifetch(fetches: _*)
-    for((topic, resp) <- topics.zip(response.toList))
-      TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-  }
-
-  def testMultiProduceWithCompression() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-    val fetches = new mutable.ArrayBuffer[FetchRequest]
-    var produceList: List[ProducerRequest] = Nil
-    for(topic <- topics) {
-      val set = new ByteBufferMessageSet(DefaultCompressionCodec,
-                                         new Message(("a_" + topic).getBytes), new Message(("b_" + topic).getBytes))
-      messages += topic -> set
-      produceList ::= new ProducerRequest(topic, 0, set)
-      fetches += new FetchRequest(topic, 0, 0, 10000)
-    }
-    producer.multiSend(produceList.toArray)
-
-    for (messageSet <- messages.values)
-      messageSet.getBuffer.rewind
-
-    // wait a bit for produced message to be available
-    Thread.sleep(200)
-    val response = consumer.multifetch(fetches: _*)
-    for((topic, resp) <- topics.zip(response.toList))
-      TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-  }
-
-  def testConsumerNotExistTopic() {
-    val newTopic = "new-topic"
-    val messageSetIter = consumer.fetch(new FetchRequest(newTopic, 0, 0, 10000)).iterator
-    assertTrue(messageSetIter.hasNext == false)
-    val logFile = new File(config.logDir, newTopic + "-0")
-    assertTrue(!logFile.exists)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/integration/ProducerConsumerTestHarness.scala b/trunk/core/src/test/scala/unit/kafka/integration/ProducerConsumerTestHarness.scala
deleted file mode 100644
index 76ae0b1..0000000
--- a/trunk/core/src/test/scala/unit/kafka/integration/ProducerConsumerTestHarness.scala
+++ /dev/null
@@ -1,52 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.integration
-
-import kafka.consumer.SimpleConsumer
-import org.scalatest.junit.JUnit3Suite
-import java.util.Properties
-import kafka.producer.{SyncProducerConfig, SyncProducer}
-
-trait ProducerConsumerTestHarness extends JUnit3Suite {
-  
-    val port: Int
-    val host = "localhost"
-    var producer: SyncProducer = null
-    var consumer: SimpleConsumer = null
-
-    override def setUp() {
-      val props = new Properties()
-      props.put("host", host)
-      props.put("port", port.toString)
-      props.put("buffer.size", "65536")
-      props.put("connect.timeout.ms", "100000")
-      props.put("reconnect.interval", "10000")
-      producer = new SyncProducer(new SyncProducerConfig(props))
-      consumer = new SimpleConsumer(host,
-                                   port,
-                                   1000000,
-                                   64*1024)
-      super.setUp
-    }
-
-   override def tearDown() {
-     super.tearDown
-     producer.close()
-     consumer.close()
-   }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala b/trunk/core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
deleted file mode 100644
index f7a4b15..0000000
--- a/trunk/core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
+++ /dev/null
@@ -1,124 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.javaapi.consumer
-
-import junit.framework.Assert._
-import kafka.zk.ZooKeeperTestHarness
-import kafka.integration.KafkaServerTestHarness
-import kafka.server.KafkaConfig
-import kafka.utils.{Utils, Logging}
-import kafka.utils.{TestZKUtils, TestUtils}
-import org.scalatest.junit.JUnit3Suite
-import scala.collection.JavaConversions._
-import kafka.javaapi.message.ByteBufferMessageSet
-import org.apache.log4j.{Level, Logger}
-import kafka.message.{NoCompressionCodec, CompressionCodec, Message}
-import kafka.consumer.{KafkaStream, ConsumerConfig}
-
-
-class ZookeeperConsumerConnectorTest extends JUnit3Suite with KafkaServerTestHarness with ZooKeeperTestHarness with Logging {
-
-  val zookeeperConnect = TestZKUtils.zookeeperConnect
-  val zkConnect = zookeeperConnect
-  val numNodes = 2
-  val numParts = 2
-  val topic = "topic1"
-  val configs =
-    for(props <- TestUtils.createBrokerConfigs(numNodes))
-    yield new KafkaConfig(props) {
-      override val enableZookeeper = true
-      override val numPartitions = numParts
-      override val zkConnect = zookeeperConnect
-    }
-  val group = "group1"
-  val consumer1 = "consumer1"
-  val nMessages = 2
-
-  def testBasic() {
-    val requestHandlerLogger = Logger.getLogger(classOf[kafka.server.KafkaRequestHandlers])
-    requestHandlerLogger.setLevel(Level.FATAL)
-    var actualMessages: List[Message] = Nil
-
-    // send some messages to each broker
-    val sentMessages1 = sendMessages(nMessages, "batch1")
-    // create a consumer
-    val consumerConfig1 = new ConsumerConfig(
-      TestUtils.createConsumerProperties(zkConnect, group, consumer1))
-    val zkConsumerConnector1 = new ZookeeperConsumerConnector(consumerConfig1, true)
-    val topicMessageStreams1 = zkConsumerConnector1.createMessageStreams(toJavaMap(Predef.Map(topic -> numNodes*numParts/2)))
-    val receivedMessages1 = getMessages(nMessages*2, topicMessageStreams1)
-    assertEquals(sentMessages1, receivedMessages1)
-
-    zkConsumerConnector1.shutdown
-    info("all consumer connectors stopped")
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def sendMessages(conf: KafkaConfig, messagesPerNode: Int, header: String, compressed: CompressionCodec): List[Message]= {
-    var messages: List[Message] = Nil
-    val producer = kafka.javaapi.Implicits.toJavaSyncProducer(TestUtils.createProducer("localhost", conf.port))
-    for (partition <- 0 until numParts) {
-      val ms = 0.until(messagesPerNode).map(x =>
-        new Message((header + conf.brokerId + "-" + partition + "-" + x).getBytes)).toArray
-      val mSet = new ByteBufferMessageSet(compressionCodec = compressed, messages = getMessageList(ms: _*))
-      for (message <- ms)
-        messages ::= message
-      producer.send(topic, partition, mSet)
-    }
-    producer.close()
-    messages
-  }
-
-  def sendMessages(messagesPerNode: Int, header: String, compressed: CompressionCodec = NoCompressionCodec): List[Message]= {
-    var messages: List[Message] = Nil
-    for(conf <- configs) {
-      messages ++= sendMessages(conf, messagesPerNode, header, compressed)
-    }
-    messages.sortWith((s,t) => s.checksum < t.checksum)
-  }
-
-  def getMessages(nMessagesPerThread: Int, jTopicMessageStreams: java.util.Map[String, java.util.List[KafkaStream[Message]]])
-  : List[Message]= {
-    var messages: List[Message] = Nil
-    val topicMessageStreams = asMap(jTopicMessageStreams)
-    for ((topic, messageStreams) <- topicMessageStreams) {
-      for (messageStream <- messageStreams) {
-        val iterator = messageStream.iterator
-        for (i <- 0 until nMessagesPerThread) {
-          assertTrue(iterator.hasNext)
-          val message = iterator.next.message
-          messages ::= message
-          debug("received message: " + Utils.toString(message.payload, "UTF-8"))
-        }
-      }
-    }
-    messages.sortWith((s,t) => s.checksum < t.checksum)
-  }
-
-  private def getMessageList(messages: Message*): java.util.List[Message] = {
-    val messageList = new java.util.ArrayList[Message]()
-    messages.foreach(m => messageList.add(m))
-    messageList
-  }
-
-  private def toJavaMap(scalaMap: Map[String, Int]): java.util.Map[String, java.lang.Integer] = {
-    val javaMap = new java.util.HashMap[String, java.lang.Integer]()
-    scalaMap.foreach(m => javaMap.put(m._1, m._2.asInstanceOf[java.lang.Integer]))
-    javaMap
-  }  
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/javaapi/integration/PrimitiveApiTest.scala b/trunk/core/src/test/scala/unit/kafka/javaapi/integration/PrimitiveApiTest.scala
deleted file mode 100644
index bb5daf4..0000000
--- a/trunk/core/src/test/scala/unit/kafka/javaapi/integration/PrimitiveApiTest.scala
+++ /dev/null
@@ -1,417 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.javaapi.integration
-
-import scala.collection._
-import kafka.api.FetchRequest
-import kafka.common.{InvalidPartitionException, OffsetOutOfRangeException}
-import kafka.server.{KafkaRequestHandlers, KafkaConfig}
-import org.apache.log4j.{Level, Logger}
-import org.scalatest.junit.JUnit3Suite
-import kafka.javaapi.message.ByteBufferMessageSet
-import kafka.javaapi.ProducerRequest
-import kafka.utils.TestUtils
-import kafka.message.{DefaultCompressionCodec, NoCompressionCodec, Message}
-
-/**
- * End-to-end tests of the primitive APIs against a local server
- */
-class PrimitiveApiTest extends JUnit3Suite with ProducerConsumerTestHarness with kafka.integration.KafkaServerTestHarness {
-  
-  val port = 9999
-  val props = TestUtils.createBrokerConfig(0, port)
-  val config = new KafkaConfig(props) {
-                 override val enableZookeeper = false
-               }
-  val configs = List(config)
-  val requestHandlerLogger = Logger.getLogger(classOf[KafkaRequestHandlers])
-
-  def testProduceAndFetch() {
-    // send some messages
-    val topic = "test"
-
-    // send an empty messageset first
-    val sent2 = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                         messages = getMessageList(Seq.empty[Message]: _*))
-    producer.send(topic, sent2)
-    Thread.sleep(200)
-    sent2.getBuffer.rewind
-    var fetched2 = consumer.fetch(new FetchRequest(topic, 0, 0, 10000))
-    TestUtils.checkEquals(sent2.iterator, fetched2.iterator)
-
-
-    // send some messages
-    val sent3 = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                         messages = getMessageList(new Message("hello".getBytes()),
-      new Message("there".getBytes())))
-    producer.send(topic, sent3)
-
-    Thread.sleep(200)
-    sent3.getBuffer.rewind
-    var fetched3: ByteBufferMessageSet = null
-    while(fetched3 == null || fetched3.validBytes == 0)
-      fetched3 = consumer.fetch(new FetchRequest(topic, 0, 0, 10000))
-    TestUtils.checkEquals(sent3.iterator, fetched3.iterator)
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    // send an invalid offset
-    try {
-      val fetchedWithError = consumer.fetch(new FetchRequest(topic, 0, -1, 10000))
-      fetchedWithError.iterator
-      fail("expect exception")
-    }
-    catch {
-      case e: OffsetOutOfRangeException => "this is good"
-    }
-
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testProduceAndFetchWithCompression() {
-    // send some messages
-    val topic = "test"
-
-    // send an empty messageset first
-    val sent2 = new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec,
-                                         messages = getMessageList(Seq.empty[Message]: _*))
-    producer.send(topic, sent2)
-    Thread.sleep(200)
-    sent2.getBuffer.rewind
-    var fetched2 = consumer.fetch(new FetchRequest(topic, 0, 0, 10000))
-    TestUtils.checkEquals(sent2.iterator, fetched2.iterator)
-
-
-    // send some messages
-    val sent3 = new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec,
-                                         messages = getMessageList(new Message("hello".getBytes()),
-      new Message("there".getBytes())))
-    producer.send(topic, sent3)
-
-    Thread.sleep(200)
-    sent3.getBuffer.rewind
-    var fetched3: ByteBufferMessageSet = null
-    while(fetched3 == null || fetched3.validBytes == 0)
-      fetched3 = consumer.fetch(new FetchRequest(topic, 0, 0, 10000))
-    TestUtils.checkEquals(sent3.iterator, fetched3.iterator)
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    // send an invalid offset
-    try {
-      val fetchedWithError = consumer.fetch(new FetchRequest(topic, 0, -1, 10000))
-      fetchedWithError.iterator
-      fail("expect exception")
-    }
-    catch {
-      case e: OffsetOutOfRangeException => "this is good"
-    }
-
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testProduceAndMultiFetch() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    {
-      val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics) {
-        val set = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                           messages = getMessageList(new Message(("a_" + topic).getBytes),
-                                                                     new Message(("b_" + topic).getBytes)))
-        messages += topic -> set
-        producer.send(topic, set)
-        set.getBuffer.rewind
-        fetches += new FetchRequest(topic, 0, 0, 10000)
-      }
-
-      // wait a bit for produced message to be available
-      Thread.sleep(200)
-      val response = consumer.multifetch(getFetchRequestList(fetches: _*))
-      val iter = response.iterator
-      for(topic <- topics) {
-        if (iter.hasNext) {
-          val resp = iter.next
-          TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-        }
-        else
-          fail("fewer responses than expected")
-      }
-    }
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    {
-      // send some invalid offsets
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, 0, -1, 10000)
-
-      try {
-        val responses = consumer.multifetch(getFetchRequestList(fetches: _*))
-        val iter = responses.iterator
-        while (iter.hasNext)
-          iter.next.iterator
-        fail("expect exception")
-      }
-      catch {
-        case e: OffsetOutOfRangeException => "this is good"
-      }
-    }    
-
-    {
-      // send some invalid partitions
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, -1, 0, 10000)
-
-      try {
-        val responses = consumer.multifetch(getFetchRequestList(fetches: _*))
-        val iter = responses.iterator
-        while (iter.hasNext)
-          iter.next.iterator
-        fail("expect exception")
-      }
-      catch {
-        case e: InvalidPartitionException => "this is good"
-      }
-    }
-
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testProduceAndMultiFetchWithCompression() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    {
-      val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics) {
-        val set = new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec,
-                                           messages = getMessageList(new Message(("a_" + topic).getBytes),
-                                                                     new Message(("b_" + topic).getBytes)))
-        messages += topic -> set
-        producer.send(topic, set)
-        set.getBuffer.rewind
-        fetches += new FetchRequest(topic, 0, 0, 10000)
-      }
-
-      // wait a bit for produced message to be available
-      Thread.sleep(200)
-      val response = consumer.multifetch(getFetchRequestList(fetches: _*))
-      val iter = response.iterator
-      for(topic <- topics) {
-        if (iter.hasNext) {
-          val resp = iter.next
-          TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-        }
-        else
-          fail("fewer responses than expected")
-      }
-    }
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    {
-      // send some invalid offsets
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, 0, -1, 10000)
-
-      try {
-        val responses = consumer.multifetch(getFetchRequestList(fetches: _*))
-        val iter = responses.iterator
-        while (iter.hasNext)
-          iter.next.iterator
-        fail("expect exception")
-      }
-      catch {
-        case e: OffsetOutOfRangeException => "this is good"
-      }
-    }
-
-    {
-      // send some invalid partitions
-      val fetches = new mutable.ArrayBuffer[FetchRequest]
-      for(topic <- topics)
-        fetches += new FetchRequest(topic, -1, 0, 10000)
-
-      try {
-        val responses = consumer.multifetch(getFetchRequestList(fetches: _*))
-        val iter = responses.iterator
-        while (iter.hasNext)
-          iter.next.iterator
-        fail("expect exception")
-      }
-      catch {
-        case e: InvalidPartitionException => "this is good"
-      }
-    }
-
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-  }
-
-  def testProduceAndMultiFetchJava() {
-    // send some messages
-    val topics = List("test1", "test2", "test3");
-    {
-      val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-      val fetches : java.util.ArrayList[FetchRequest] = new java.util.ArrayList[FetchRequest]
-      for(topic <- topics) {
-        val set = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                           messages = getMessageList(new Message(("a_" + topic).getBytes),
-                                                                     new Message(("b_" + topic).getBytes)))
-        messages += topic -> set
-        producer.send(topic, set)
-        set.getBuffer.rewind
-        fetches.add(new FetchRequest(topic, 0, 0, 10000))
-      }
-
-      // wait a bit for the produced messages to be available
-      Thread.sleep(200)
-      val response = consumer.multifetch(fetches)
-      val iter = response.iterator
-      for(topic <- topics) {
-        if (iter.hasNext) {
-          val resp = iter.next
-          TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-        }
-        else
-          fail("fewer responses than expected")
-      }
-    }
-  }
-
-  def testProduceAndMultiFetchJavaWithCompression() {
-    // send some messages
-    val topics = List("test1", "test2", "test3")
-    {
-      val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-      val fetches : java.util.ArrayList[FetchRequest] = new java.util.ArrayList[FetchRequest]
-      for(topic <- topics) {
-        val set = new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec,
-                                           messages = getMessageList(new Message(("a_" + topic).getBytes),
-                                                                     new Message(("b_" + topic).getBytes)))
-        messages += topic -> set
-        producer.send(topic, set)
-        set.getBuffer.rewind
-        fetches.add(new FetchRequest(topic, 0, 0, 10000))
-      }
-
-      // wait a bit for the produced messages to be available
-      Thread.sleep(200)
-      val response = consumer.multifetch(fetches)
-      val iter = response.iterator
-      for(topic <- topics) {
-        if (iter.hasNext) {
-          val resp = iter.next
-          TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-        }
-        else
-          fail("fewer responses than expected")
-      }
-    }
-  }
-
-  def testMultiProduce() {
-    // send some messages
-    val topics = List("test1", "test2", "test3")
-    val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-    val fetches = new mutable.ArrayBuffer[FetchRequest]
-    var produceList: List[ProducerRequest] = Nil
-    for(topic <- topics) {
-      val set = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                         messages = getMessageList(new Message(("a_" + topic).getBytes),
-                                                                   new Message(("b_" + topic).getBytes)))
-      messages += topic -> set
-      produceList ::= new ProducerRequest(topic, 0, set)
-      fetches += new FetchRequest(topic, 0, 0, 10000)
-    }
-    producer.multiSend(produceList.toArray)
-
-    for (messageSet <- messages.values)
-      messageSet.getBuffer.rewind
-      
-    // wait a bit for the produced messages to be available
-    Thread.sleep(200)
-    val response = consumer.multifetch(getFetchRequestList(fetches: _*))
-    val iter = response.iterator
-    for(topic <- topics) {
-      if (iter.hasNext) {
-        val resp = iter.next
-        TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-      }
-      else
-        fail("fewer responses than expected")
-    }
-  }
-
-  def testMultiProduceWithCompression() {
-    // send some messages
-    val topics = List("test1", "test2", "test3")
-    val messages = new mutable.HashMap[String, ByteBufferMessageSet]
-    val fetches = new mutable.ArrayBuffer[FetchRequest]
-    var produceList: List[ProducerRequest] = Nil
-    for(topic <- topics) {
-      val set = new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec,
-                                         messages = getMessageList(new Message(("a_" + topic).getBytes),
-                                                                   new Message(("b_" + topic).getBytes)))
-      messages += topic -> set
-      produceList ::= new ProducerRequest(topic, 0, set)
-      fetches += new FetchRequest(topic, 0, 0, 10000)
-    }
-    producer.multiSend(produceList.toArray)
-
-    for (messageSet <- messages.values)
-      messageSet.getBuffer.rewind
-
-    // wait a bit for the produced messages to be available
-    Thread.sleep(200)
-    val response = consumer.multifetch(getFetchRequestList(fetches: _*))
-    val iter = response.iterator
-    for(topic <- topics) {
-      if (iter.hasNext) {
-        val resp = iter.next
-        TestUtils.checkEquals(messages(topic).iterator, resp.iterator)
-      }
-      else
-        fail("fewer responses than expected")
-    }
-  }
-
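-  // Helpers that wrap Scala varargs into java.util.List instances for the javaapi calls above.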
-  private def getMessageList(messages: Message*): java.util.List[Message] = {
-    val messageList = new java.util.ArrayList[Message]()
-    messages.foreach(m => messageList.add(m))
-    messageList
-  }
-
-  private def getFetchRequestList(fetches: FetchRequest*): java.util.List[FetchRequest] = {
-    val fetchReqs = new java.util.ArrayList[FetchRequest]()
-    fetches.foreach(f => fetchReqs.add(f))
-    fetchReqs
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/javaapi/integration/ProducerConsumerTestHarness.scala b/trunk/core/src/test/scala/unit/kafka/javaapi/integration/ProducerConsumerTestHarness.scala
deleted file mode 100644
index 3b1aa2a..0000000
--- a/trunk/core/src/test/scala/unit/kafka/javaapi/integration/ProducerConsumerTestHarness.scala
+++ /dev/null
@@ -1,53 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.javaapi.integration
-
-import org.scalatest.junit.JUnit3Suite
-import java.util.Properties
-import kafka.producer.SyncProducerConfig
-import kafka.javaapi.producer.SyncProducer
-import kafka.javaapi.consumer.SimpleConsumer
-
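-// Sets up a javaapi SyncProducer and SimpleConsumer against a local broker for each test and closes them in tearDown.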
-trait ProducerConsumerTestHarness extends JUnit3Suite {
-  
-    val port: Int
-    val host = "localhost"
-    var producer: SyncProducer = null
-    var consumer: SimpleConsumer = null
-
-    override def setUp() {
-      val props = new Properties()
-      props.put("host", host)
-      props.put("port", port.toString)
-      props.put("buffer.size", "65536")
-      props.put("connect.timeout.ms", "100000")
-      props.put("reconnect.interval", "10000")
-      producer = new SyncProducer(new SyncProducerConfig(props))
-      consumer = new SimpleConsumer(host,
-                                   port,
-                                   1000000,
-                                   64*1024)
-      super.setUp
-    }
-
-   override def tearDown() {
-     super.tearDown
-     producer.close()
-     consumer.close()
-   }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/javaapi/message/BaseMessageSetTestCases.scala b/trunk/core/src/test/scala/unit/kafka/javaapi/message/BaseMessageSetTestCases.scala
deleted file mode 100644
index c48f7dc..0000000
--- a/trunk/core/src/test/scala/unit/kafka/javaapi/message/BaseMessageSetTestCases.scala
+++ /dev/null
@@ -1,74 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.javaapi.message
-
-import junit.framework.Assert._
-import org.scalatest.junit.JUnitSuite
-import org.junit.Test
-import kafka.utils.TestUtils
-import kafka.message.{DefaultCompressionCodec, NoCompressionCodec, CompressionCodec, Message}
-
-trait BaseMessageSetTestCases extends JUnitSuite {
-  
-  val messages = Array(new Message("abcd".getBytes()), new Message("efgh".getBytes()))
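-  // Implemented by concrete subclasses to build the MessageSet variant under test.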
-  def createMessageSet(messages: Seq[Message], compressed: CompressionCodec = NoCompressionCodec): MessageSet
-  def toMessageIterator(messageSet: MessageSet): Iterator[Message] = {
-    import scala.collection.JavaConversions._
-    val messages = asIterable(messageSet)
-    messages.map(m => m.message).iterator
-  }
-
-  @Test
-  def testWrittenEqualsRead {
-    val messageSet = createMessageSet(messages)
-    TestUtils.checkEquals(messages.iterator, toMessageIterator(messageSet))
-  }
-
-  @Test
-  def testIteratorIsConsistent() {
-    import scala.collection.JavaConversions._
-    val m = createMessageSet(messages)
-    // two iterators over the same set should give the same results
-    TestUtils.checkEquals(asIterator(m.iterator), asIterator(m.iterator))
-  }
-
-  @Test
-  def testIteratorIsConsistentWithCompression() {
-    import scala.collection.JavaConversions._
-    val m = createMessageSet(messages, DefaultCompressionCodec)
-    // two iterators over the same set should give the same results
-    TestUtils.checkEquals(asIterator(m.iterator), asIterator(m.iterator))
-  }
-
-  @Test
-  def testSizeInBytes() {
-    assertEquals("Empty message set should have 0 bytes.",
-                 0L,
-                 createMessageSet(Array[Message]()).sizeInBytes)
-    assertEquals("Predicted size should equal actual size.", 
-                 kafka.message.MessageSet.messageSetSize(messages).toLong,
-                 createMessageSet(messages).sizeInBytes)
-  }
-
-  @Test
-  def testSizeInBytesWithCompression () {
-    assertEquals("Empty message set should have 0 bytes.",
-                 0L,
-                 createMessageSet(Array[Message](), DefaultCompressionCodec).sizeInBytes)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/javaapi/message/ByteBufferMessageSetTest.scala b/trunk/core/src/test/scala/unit/kafka/javaapi/message/ByteBufferMessageSetTest.scala
deleted file mode 100644
index 86154d9..0000000
--- a/trunk/core/src/test/scala/unit/kafka/javaapi/message/ByteBufferMessageSetTest.scala
+++ /dev/null
@@ -1,86 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.javaapi.message
-
-import java.nio._
-import junit.framework.Assert._
-import org.junit.Test
-import kafka.message.{DefaultCompressionCodec, CompressionCodec, NoCompressionCodec, Message}
-
-class ByteBufferMessageSetTest extends kafka.javaapi.message.BaseMessageSetTestCases {
-
-  override def createMessageSet(messages: Seq[Message],
-                                compressed: CompressionCodec = NoCompressionCodec): ByteBufferMessageSet =
-    new ByteBufferMessageSet(compressed, getMessageList(messages: _*))
-  
-  @Test
-  def testValidBytes() {
-    val messageList = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                               messages = getMessageList(new Message("hello".getBytes()),
-                                                                      new Message("there".getBytes())))
-    val buffer = ByteBuffer.allocate(messageList.sizeInBytes.toInt + 2)
-    buffer.put(messageList.getBuffer)
-    buffer.putShort(4)
-    val messageListPlus = new ByteBufferMessageSet(buffer)
-    assertEquals("Adding invalid bytes shouldn't change byte count", messageList.validBytes, messageListPlus.validBytes)
-  }
-
-  @Test
-  def testValidBytesWithCompression () {
-    val messageList = new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec,
-                                               messages = getMessageList(new Message("hello".getBytes()),
-                                                                         new Message("there".getBytes())))
-    val buffer = ByteBuffer.allocate(messageList.sizeInBytes.toInt + 2)
-    buffer.put(messageList.getBuffer)
-    buffer.putShort(4)
-    val messageListPlus = new ByteBufferMessageSet(buffer, 0, 0)
-    assertEquals("Adding invalid bytes shouldn't change byte count", messageList.validBytes, messageListPlus.validBytes)
-  }
-
-  @Test
-  def testEquals() {
-    val messageList = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                            messages = getMessageList(new Message("hello".getBytes()),
-                                                                      new Message("there".getBytes())))
-    val moreMessages = new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                messages = getMessageList(new Message("hello".getBytes()),
-                                                                          new Message("there".getBytes())))
-
-    assertEquals(messageList, moreMessages)
-    assertTrue(messageList.equals(moreMessages))
-  }
-
-  @Test
-  def testEqualsWithCompression () {
-    val messageList = new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec,
-                                            messages = getMessageList(new Message("hello".getBytes()),
-                                                                      new Message("there".getBytes())))
-    val moreMessages = new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec,
-                                                messages = getMessageList(new Message("hello".getBytes()),
-                                                                          new Message("there".getBytes())))
-
-    assertEquals(messageList, moreMessages)
-    assertTrue(messageList.equals(moreMessages))
-  }
-
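-  // Wraps the given messages in a java.util.List for the javaapi ByteBufferMessageSet constructor.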
-  private def getMessageList(messages: Message*): java.util.List[Message] = {
-    val messageList = new java.util.ArrayList[Message]()
-    messages.foreach(m => messageList.add(m))
-    messageList
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/javaapi/producer/ProducerTest.scala b/trunk/core/src/test/scala/unit/kafka/javaapi/producer/ProducerTest.scala
deleted file mode 100644
index e749597..0000000
--- a/trunk/core/src/test/scala/unit/kafka/javaapi/producer/ProducerTest.scala
+++ /dev/null
@@ -1,633 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.javaapi.producer
-
-import java.util.Properties
-import org.apache.log4j.{Logger, Level}
-import kafka.server.{KafkaRequestHandlers, KafkaServer, KafkaConfig}
-import kafka.zk.EmbeddedZookeeper
-import kafka.utils.{TestZKUtils, TestUtils}
-import org.junit.{After, Before, Test}
-import junit.framework.Assert
-import org.easymock.EasyMock
-import kafka.utils.Utils
-import java.util.concurrent.ConcurrentHashMap
-import kafka.cluster.Partition
-import kafka.common.{UnavailableProducerException, InvalidPartitionException, InvalidConfigException}
-import org.scalatest.junit.JUnitSuite
-import kafka.producer.{SyncProducerConfig, Partitioner, ProducerConfig, DefaultPartitioner}
-import kafka.producer.ProducerPool
-import kafka.javaapi.message.ByteBufferMessageSet
-import kafka.producer.async.AsyncProducer
-import kafka.javaapi.Implicits._
-import kafka.serializer.{StringEncoder, Encoder}
-import kafka.javaapi.consumer.SimpleConsumer
-import kafka.api.FetchRequest
-import kafka.message.{NoCompressionCodec, Message}
-
-class ProducerTest extends JUnitSuite {
-  private val topic = "test-topic"
-  private val brokerId1 = 0
-  private val brokerId2 = 1  
-  private val port1 = 9092
-  private val port2 = 9093
-  private var server1: KafkaServer = null
-  private var server2: KafkaServer = null
-  private var producer1: SyncProducer = null
-  private var producer2: SyncProducer = null
-  private var consumer1: SimpleConsumer = null
-  private var consumer2: SimpleConsumer = null
-  private var zkServer:EmbeddedZookeeper = null
-  private val requestHandlerLogger = Logger.getLogger(classOf[KafkaRequestHandlers])
-
-  @Before
-  def setUp() {
-    // set up 2 brokers with 4 partitions each
-    zkServer = new EmbeddedZookeeper(TestZKUtils.zookeeperConnect)
-
-    val props1 = TestUtils.createBrokerConfig(brokerId1, port1)
-    val config1 = new KafkaConfig(props1) {
-      override val numPartitions = 4
-    }
-    server1 = TestUtils.createServer(config1)
-
-    val props2 = TestUtils.createBrokerConfig(brokerId2, port2)
-    val config2 = new KafkaConfig(props2) {
-      override val numPartitions = 4
-    }
-    server2 = TestUtils.createServer(config2)
-
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", port1.toString)
-
-    producer1 = new SyncProducer(new SyncProducerConfig(props))
-    val messages1 = new java.util.ArrayList[Message]
-    messages1.add(new Message("test".getBytes()))
-    producer1.send("test-topic", new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = messages1))
-
-    producer2 = new SyncProducer(new SyncProducerConfig(props) {
-      override val port = port2
-    })
-    val messages2 = new java.util.ArrayList[Message]
-    messages2.add(new Message("test".getBytes()))
-
-    producer2.send("test-topic", new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = messages2))
-
-    consumer1 = new SimpleConsumer("localhost", port1, 1000000, 64*1024)
-    consumer2 = new SimpleConsumer("localhost", port2, 1000000, 64*1024)
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    Thread.sleep(500)
-  }
-
-  @After
-  def tearDown() {
-    // restore the request handler logger level
-    requestHandlerLogger.setLevel(Level.ERROR)
-    server1.shutdown
-    server2.shutdown
-    Utils.rm(server1.config.logDir)
-    Utils.rm(server2.config.logDir)    
-    Thread.sleep(500)
-    zkServer.shutdown
-  }
-
-  @Test
-  def testSend() {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, kafka.producer.SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    // it should send to partition 0 (the first partition) on the second broker, i.e. broker2
-    val messageList = new java.util.ArrayList[Message]
-    messageList.add(new Message("test1".getBytes()))
-    syncProducer2.send(topic, 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = messageList))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    syncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-
-    syncProducers.put(brokerId1, syncProducer1)
-    syncProducers.put(brokerId2, syncProducer2)
-
-    val producerPool = new ProducerPool[String](config, serializer, syncProducers,
-      new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    val producer = new Producer[String, String](config, partitioner, producerPool, false)
-
-    val messagesContent = new java.util.ArrayList[String]
-    messagesContent.add("test1")
-    producer.send(new ProducerData[String, String](topic, "test", messagesContent))
-    producer.close
-
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-  }
-
-  @Test
-  def testSendSingleMessage() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("broker.list", "0:localhost:9092")
-
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, kafka.producer.SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    // it should send to a random partition due to use of broker.list
-    val messageList = new java.util.ArrayList[Message]
-    messageList.add(new Message("t".getBytes()))
-    syncProducer1.send(topic, -1, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = messageList))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-
-    syncProducers.put(brokerId1, syncProducer1)
-
-    val producerPool = new ProducerPool[String](config, serializer, syncProducers,
-      new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    val producer = new Producer[String, String](config, partitioner, producerPool, false)
-
-    producer.send(new ProducerData[String, String](topic, "t"))
-    producer.close
-
-    EasyMock.verify(syncProducer1)
-  }
-
-  @Test
-  def testInvalidPartition() {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val config = new ProducerConfig(props)
-
-    val richProducer = new Producer[String, String](config)
-    val messagesContent = new java.util.ArrayList[String]
-    messagesContent.add("test")
-    try {
-      richProducer.send(new ProducerData[String, String](topic, "test", messagesContent))
-      Assert.fail("Should fail with InvalidPartitionException")
-    }catch {
-      case e: InvalidPartitionException => // expected, do nothing
-    }
-  }
-
-  @Test
-  def testSyncProducerPool() {
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, kafka.producer.SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    val messageList = new java.util.ArrayList[Message]
-    messageList.add(new Message("test1".getBytes()))
-    syncProducer1.send("test-topic", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = messageList))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    syncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-
-    syncProducers.put(brokerId1, syncProducer1)
-    syncProducers.put(brokerId2, syncProducer2)
-
-    // default for producer.type is "sync"
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val producerPool = new ProducerPool[String](new ProducerConfig(props), new StringSerializer,
-      syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    producerPool.send(producerPool.getProducerPoolData("test-topic", new Partition(brokerId1, 0), Array("test1")))
-
-    producerPool.close
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-  }
-
-  @Test
-  def testAsyncProducerPool() {
-    // 2 async producers
-    val asyncProducers = new ConcurrentHashMap[Int, AsyncProducer[String]]()
-    val asyncProducer1 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    val asyncProducer2 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    asyncProducer1.send(topic, "test1", 0)
-    EasyMock.expectLastCall
-    asyncProducer1.close
-    EasyMock.expectLastCall
-    asyncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(asyncProducer1)
-    EasyMock.replay(asyncProducer2)
-
-    asyncProducers.put(brokerId1, asyncProducer1)
-    asyncProducers.put(brokerId2, asyncProducer2)
-
-    // change producer.type to "async"
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val producerPool = new ProducerPool[String](new ProducerConfig(props), new StringSerializer,
-      new ConcurrentHashMap[Int, kafka.producer.SyncProducer](), asyncProducers)
-    producerPool.send(producerPool.getProducerPoolData(topic, new Partition(brokerId1, 0), Array("test1")))
-
-    producerPool.close
-    EasyMock.verify(asyncProducer1)
-    EasyMock.verify(asyncProducer2)
-  }
-
-  @Test
-  def testSyncUnavailableProducerException() {
-    val syncProducers = new ConcurrentHashMap[Int, kafka.producer.SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    syncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-
-    syncProducers.put(brokerId2, syncProducer2)
-
-    // default for producer.type is "sync"
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val producerPool = new ProducerPool[String](new ProducerConfig(props), new StringSerializer,
-      syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    try {
-      producerPool.send(producerPool.getProducerPoolData("test-topic", new Partition(brokerId1, 0), Array("test1")))
-      Assert.fail("Should fail with UnavailableProducerException")
-    }catch {
-      case e: UnavailableProducerException => // expected
-    }
-
-    producerPool.close
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-  }
-
-  @Test
-  def testAsyncUnavailableProducerException() {
-    val asyncProducers = new ConcurrentHashMap[Int, AsyncProducer[String]]()
-    val asyncProducer1 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    val asyncProducer2 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    asyncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(asyncProducer1)
-    EasyMock.replay(asyncProducer2)
-
-    asyncProducers.put(brokerId2, asyncProducer2)
-
-    // change producer.type to "async"
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val producerPool = new ProducerPool[String](new ProducerConfig(props), new StringSerializer,
-      new ConcurrentHashMap[Int, kafka.producer.SyncProducer](), asyncProducers)
-    try {
-      producerPool.send(producerPool.getProducerPoolData(topic, new Partition(brokerId1, 0), Array("test1")))
-      Assert.fail("Should fail with UnavailableProducerException")
-    }catch {
-      case e: UnavailableProducerException => // expected
-    }
-
-    producerPool.close
-    EasyMock.verify(asyncProducer1)
-    EasyMock.verify(asyncProducer2)
-  }
-
-  @Test
-  def testConfigBrokerPartitionInfoWithPartitioner {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("broker.list", brokerId1 + ":" + "localhost" + ":" + port1 + ":" + 4 + "," +
-                                       brokerId2 + ":" + "localhost" + ":" + port2 + ":" + 4)
-
-    var config: ProducerConfig = null
-    try {
-      config = new ProducerConfig(props)
-      fail("should fail with InvalidConfigException due to presence of partitioner.class and broker.list")
-    }catch {
-      case e: InvalidConfigException => // expected
-    }
-  }
-
-  @Test
-  def testConfigBrokerPartitionInfo() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("broker.list", brokerId1 + ":" + "localhost" + ":" + port1)
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 async producers
-    val asyncProducers = new ConcurrentHashMap[Int, AsyncProducer[String]]()
-    val asyncProducer1 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    // it should send to a random partition due to use of broker.list
-    asyncProducer1.send(topic, "test1", -1)
-    EasyMock.expectLastCall
-    asyncProducer1.close
-    EasyMock.expectLastCall
-    EasyMock.replay(asyncProducer1)
-
-    asyncProducers.put(brokerId1, asyncProducer1)
-
-    val producerPool = new ProducerPool(config, serializer, new ConcurrentHashMap[Int, kafka.producer.SyncProducer](),
-      asyncProducers)
-    val producer = new Producer[String, String](config, partitioner, producerPool, false)
-
-    val messagesContent = new java.util.ArrayList[String]
-    messagesContent.add("test1")
-    producer.send(new ProducerData[String, String](topic, "test1", messagesContent))
-    producer.close
-
-    EasyMock.verify(asyncProducer1)
-  }
-
-  @Test
-  def testZKSendToNewTopic() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val serializer = new StringEncoder
-
-    val producer = new Producer[String, String](config)
-    try {
-      import scala.collection.JavaConversions._
-      producer.send(new ProducerData[String, String]("new-topic", "test", asList(Array("test1"))))
-      Thread.sleep(100)
-      producer.send(new ProducerData[String, String]("new-topic", "test", asList(Array("test1"))))
-      Thread.sleep(100)
-      // cross check if brokers got the messages
-      val messageSet1 = consumer1.fetch(new FetchRequest("new-topic", 0, 0, 10000)).iterator
-      Assert.assertTrue("Message set should have 1 message", messageSet1.hasNext)
-      Assert.assertEquals(new Message("test1".getBytes), messageSet1.next.message)
-      val messageSet2 = consumer2.fetch(new FetchRequest("new-topic", 0, 0, 10000)).iterator
-      Assert.assertTrue("Message set should have 1 message", messageSet2.hasNext)
-      Assert.assertEquals(new Message("test1".getBytes), messageSet2.next.message)
-    } catch {
-      case e: Exception => fail("Not expected")
-    }
-    producer.close
-  }
-
-  @Test
-  def testZKSendWithDeadBroker() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val serializer = new StringEncoder
-
-    val producer = new Producer[String, String](config)
-    try {
-      import scala.collection.JavaConversions._
-      producer.send(new ProducerData[String, String]("new-topic", "test", asList(Array("test1"))))
-      Thread.sleep(100)
-      // kill 2nd broker
-      server2.shutdown
-      Thread.sleep(100)
-      producer.send(new ProducerData[String, String]("new-topic", "test", asList(Array("test1"))))
-      Thread.sleep(100)
-      // cross check if brokers got the messages
-      val messageSet1 = consumer1.fetch(new FetchRequest("new-topic", 0, 0, 10000)).iterator
-      Assert.assertTrue("Message set should have 1 message", messageSet1.hasNext)
-      Assert.assertEquals(new Message("test1".getBytes), messageSet1.next.message)
-      Assert.assertTrue("Message set should have another message", messageSet1.hasNext)
-      Assert.assertEquals(new Message("test1".getBytes), messageSet1.next.message)
-    } catch {
-      case e: Exception => fail("Not expected")
-    }
-    producer.close
-  }
-
-  @Test
-  def testPartitionedSendToNewTopic() {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringEncoder
-
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, kafka.producer.SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    import scala.collection.JavaConversions._
-    syncProducer1.send("test-topic1", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                                  messages = asList(Array(new Message("test1".getBytes)))))
-    EasyMock.expectLastCall
-    syncProducer1.send("test-topic1", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                                  messages = asList(Array(new Message("test1".getBytes)))))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    syncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-
-    syncProducers.put(brokerId1, syncProducer1)
-    syncProducers.put(brokerId2, syncProducer2)
-
-    val producerPool = new ProducerPool(config, serializer, syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    val producer = new Producer[String, String](config, partitioner, producerPool, false)
-
-    producer.send(new ProducerData[String, String]("test-topic1", "test", asList(Array("test1"))))
-    Thread.sleep(100)
-
-    // now send again to this topic using a real producer; by now all brokers will have registered
-    // their partitions in zookeeper
-    producer1.send("test-topic1", new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                           messages = asList(Array(new Message("test".getBytes())))))
-    Thread.sleep(100)
-
-    // wait for zookeeper to register the new topic
-    producer.send(new ProducerData[String, String]("test-topic1", "test1", asList(Array("test1"))))
-    Thread.sleep(100)
-    producer.close
-
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-  }
-
-  @Test
-  def testPartitionedSendToNewBrokerInExistingTopic() {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, kafka.producer.SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    val syncProducer3 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    val messages1 = new java.util.ArrayList[Message]
-    messages1.add(new Message("test1".getBytes()))
-    syncProducer3.send("test-topic", 2, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = messages1))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    syncProducer2.close
-    EasyMock.expectLastCall
-    syncProducer3.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-    EasyMock.replay(syncProducer3)
-
-    syncProducers.put(brokerId1, syncProducer1)
-    syncProducers.put(brokerId2, syncProducer2)
-    syncProducers.put(2, syncProducer3)
-
-    val producerPool = new ProducerPool(config, serializer, syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    val producer = new Producer[String, String](config, partitioner, producerPool, false)
-
-    val serverProps = TestUtils.createBrokerConfig(2, 9094)
-    val serverConfig = new KafkaConfig(serverProps) {
-      override val numPartitions = 4
-    }
-    val server3 = TestUtils.createServer(serverConfig)
-
-    // send a message to the new broker to register it under topic "test-topic"
-    val tempProps = new Properties()
-    tempProps.put("host", "localhost")
-    tempProps.put("port", "9094")
-    val tempProducer = new kafka.producer.SyncProducer(new SyncProducerConfig(tempProps))
-    val messageList = new java.util.ArrayList[Message]
-    messageList.add(new Message("test".getBytes()))
-    tempProducer.send("test-topic", new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = messageList))
-    Thread.sleep(500)
-
-    val messagesContent = new java.util.ArrayList[String]
-    messagesContent.add("test1")
-    producer.send(new ProducerData[String, String]("test-topic", "test-topic", messagesContent))
-    producer.close
-
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-    EasyMock.verify(syncProducer3)
-
-    server3.shutdown
-    Utils.rm(server3.config.logDir)
-  }
-
-  @Test
-  def testDefaultPartitioner() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("broker.list", brokerId1 + ":" + "localhost" + ":" + port1)
-    val config = new ProducerConfig(props)
-    val partitioner = new DefaultPartitioner[String]
-    val serializer = new StringSerializer
-
-    // 2 async producers
-    val asyncProducers = new ConcurrentHashMap[Int, AsyncProducer[String]]()
-    val asyncProducer1 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    val asyncProducer2 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    // it should send to a random partition due to use of broker.list
-    asyncProducer1.send(topic, "test1", -1)
-    EasyMock.expectLastCall
-    asyncProducer1.close
-    EasyMock.expectLastCall
-    EasyMock.replay(asyncProducer1)
-
-    asyncProducers.put(brokerId1, asyncProducer1)
-
-    val producerPool = new ProducerPool(config, serializer, new ConcurrentHashMap[Int, kafka.producer.SyncProducer](),
-      asyncProducers)
-    val producer = new Producer[String, String](config, partitioner, producerPool, false)
-
-    val messagesContent = new java.util.ArrayList[String]
-    messagesContent.add("test1")
-    producer.send(new ProducerData[String, String](topic, "test", messagesContent))
-    producer.close
-
-    EasyMock.verify(asyncProducer1)
-  }
-}
-
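-// Simple encoder used by these tests: serializes a String event into a Message of its bytes.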
-class StringSerializer extends Encoder[String] {
-  def toEvent(message: Message):String = message.toString
-  def toMessage(event: String):Message = new Message(event.getBytes)
-  def getTopic(event: String): String = event.concat("-topic")
-}
-
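-// Always returns partition -1, used to exercise the InvalidPartitionException path.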
-class NegativePartitioner extends Partitioner[String] {
-  def partition(data: String, numPartitions: Int): Int = {
-    -1
-  }
-}
-
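-// Partitions deterministically by the key's length modulo the number of partitions.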
-class StaticPartitioner extends Partitioner[String] {
-  def partition(data: String, numPartitions: Int): Int = {
-    (data.length % numPartitions)
-  }
-}
-
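-// Partitions by the key's hashCode modulo the number of partitions.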
-class HashPartitioner extends Partitioner[String] {
-  def partition(data: String, numPartitions: Int): Int = {
-    (data.hashCode % numPartitions)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/javaapi/producer/SyncProducerTest.scala b/trunk/core/src/test/scala/unit/kafka/javaapi/producer/SyncProducerTest.scala
deleted file mode 100644
index 1923d24..0000000
--- a/trunk/core/src/test/scala/unit/kafka/javaapi/producer/SyncProducerTest.scala
+++ /dev/null
@@ -1,98 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.javaapi.producer
-
-import junit.framework.Assert
-import kafka.utils.SystemTime
-import kafka.utils.TestUtils
-import kafka.server.{KafkaServer, KafkaConfig}
-import org.apache.log4j.Logger
-import org.scalatest.junit.JUnitSuite
-import org.junit.{After, Before, Test}
-import java.util.Properties
-import kafka.producer.SyncProducerConfig
-import kafka.javaapi.message.ByteBufferMessageSet
-import kafka.javaapi.ProducerRequest
-import kafka.message.{NoCompressionCodec, Message}
-
-class SyncProducerTest extends JUnitSuite {
-  private var messageBytes = new Array[Byte](2)
-  private var server: KafkaServer = null
-  val simpleProducerLogger = Logger.getLogger(classOf[kafka.producer.SyncProducer])
-
-  @Before
-  def setUp() {
-    server = TestUtils.createServer(new KafkaConfig(TestUtils.createBrokerConfig(0, 9092))
-    {
-      override val enableZookeeper = false
-    })
-  }
-
-  @After
-  def tearDown() {
-    server.shutdown
-  }
-
-  @Test
-  def testReachableServer() {
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", "9092")
-    props.put("buffer.size", "102400")
-    props.put("connect.timeout.ms", "500")
-    props.put("reconnect.interval", "1000")
-    val producer = new SyncProducer(new SyncProducerConfig(props))
-    var failed = false
-    val firstStart = SystemTime.milliseconds
-    try {
-      producer.send("test", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                        messages = getMessageList(new Message(messageBytes))))
-    }catch {
-      case e: Exception => failed=true
-    }
-    Assert.assertFalse(failed)
-    failed = false
-    val firstEnd = SystemTime.milliseconds
-    Assert.assertTrue((firstEnd-firstStart) < 500)
-    val secondStart = SystemTime.milliseconds
-    try {
-      producer.send("test", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                        messages = getMessageList(new Message(messageBytes))))
-    }catch {
-      case e: Exception => failed = true
-    }
-    Assert.assertFalse(failed)
-    val secondEnd = SystemTime.milliseconds
-    Assert.assertTrue((secondEnd-secondStart) < 500)
-
-    try {
-      producer.multiSend(Array(new ProducerRequest("test", 0,
-        new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                 messages = getMessageList(new Message(messageBytes))))))
-    }catch {
-      case e: Exception => failed=true
-    }
-    Assert.assertFalse(failed)
-  }
-
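-  // Wraps a single Message in a java.util.List for the javaapi message set constructor.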
-  private def getMessageList(message: Message): java.util.List[Message] = {
-    val messageList = new java.util.ArrayList[Message]()
-    messageList.add(message)
-    messageList
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/log/LogManagerTest.scala b/trunk/core/src/test/scala/unit/kafka/log/LogManagerTest.scala
deleted file mode 100644
index 0506c80..0000000
--- a/trunk/core/src/test/scala/unit/kafka/log/LogManagerTest.scala
+++ /dev/null
@@ -1,233 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import java.io._
-import junit.framework.Assert._
-import kafka.server.KafkaConfig
-import org.scalatest.junit.JUnitSuite
-import org.junit.{After, Before, Test}
-import kafka.utils.{Utils, MockTime, TestUtils}
-import kafka.common.{InvalidTopicException, OffsetOutOfRangeException}
-import collection.mutable.ArrayBuffer
-
-class LogManagerTest extends JUnitSuite {
-
-  val time: MockTime = new MockTime()
-  val maxSegAge = 100
-  val maxLogAge = 1000
-  var logDir: File = null
-  var logManager: LogManager = null
-  var config:KafkaConfig = null
-
-  @Before
-  def setUp() {
-    val props = TestUtils.createBrokerConfig(0, -1)
-    config = new KafkaConfig(props) {
-                   override val logFileSize = 1024
-                   override val enableZookeeper = false
-                   override val flushInterval = 100
-                 }
-    logManager = new LogManager(config, null, time, maxSegAge, -1, maxLogAge, false)
-    logManager.startup
-    logDir = logManager.logDir
-  }
-
-  @After
-  def tearDown() {
-    logManager.close()
-    Utils.rm(logDir)
-  }
-  
-  @Test
-  def testCreateLog() {
-    val name = "kafka"
-    val log = logManager.getOrCreateLog(name, 0)
-    val logFile = new File(config.logDir, name + "-0")
-    assertTrue(logFile.exists)
-    log.append(TestUtils.singleMessageSet("test".getBytes()))
-  }
-
-  @Test
-  def testGetLog() {
-    val name = "kafka"
-    val log = logManager.getLog(name, 0)
-    val logFile = new File(config.logDir, name + "-0")
-    assertTrue(!logFile.exists)
-  }
-
-  @Test
-  def testInvalidTopicName() {
-    val invalidTopicNames = new ArrayBuffer[String]()
-    invalidTopicNames += ("", ".", "..")
-    var longName = "ATCG"
-    for (i <- 3 to 8)
-      longName += longName
-    invalidTopicNames += longName
-    val badChars = Array('/', '\u0000', '\u0001', '\u0018', '\u001F', '\u008F', '\uD805', '\uFFFA')
-    for (weirdChar <- badChars) {
-      invalidTopicNames += "Is" + weirdChar + "funny"
-    }
-
-    for (i <- 0 until invalidTopicNames.size) {
-      try {
-        logManager.getOrCreateLog(invalidTopicNames(i), 0)
-        fail("Should throw InvalidTopicException.")
-      }
-      catch {
-        case e: InvalidTopicException => "This is good."
-      }
-    }
-  }
-
-  @Test
-  def testCleanupExpiredSegments() {
-    val log = logManager.getOrCreateLog("cleanup", 0)
-    var offset = 0L
-    for(i <- 0 until 1000) {
-      var set = TestUtils.singleMessageSet("test".getBytes())
-      log.append(set)
-      offset += set.sizeInBytes
-    }
-    log.flush
-
-    assertTrue("There should be more than one segment now.", log.numberOfSegments > 1)
-
-    // update the last modified time of all log segments
-    val logSegments = log.segments.view
-    logSegments.foreach(s => s.file.setLastModified(time.currentMs))
-
-    time.currentMs += maxLogAge + 3000
-    logManager.cleanupLogs()
-    assertEquals("Now there should only be only one segment.", 1, log.numberOfSegments)
-    assertEquals("Should get empty fetch off new log.", 0L, log.read(offset, 1024).sizeInBytes)
-    try {
-      log.read(0, 1024)
-      fail("Should get exception from fetching earlier.")
-    } catch {
-      case e: OffsetOutOfRangeException => "This is good."
-    }
-    // log should still be appendable
-    log.append(TestUtils.singleMessageSet("test".getBytes()))
-  }
-
-  @Test
-  def testCleanupSegmentsToMaintainSize() {
-    val setSize = TestUtils.singleMessageSet("test".getBytes()).sizeInBytes
-    val retentionHours = 1
-    val retentionMs = 1000 * 60 * 60 * retentionHours
-    val props = TestUtils.createBrokerConfig(0, -1)
-    logManager.close
-    Thread.sleep(100)
-    config = new KafkaConfig(props) {
-      override val logFileSize = (10 * (setSize - 1)).asInstanceOf[Int] // each segment will be 10 messages
-      override val enableZookeeper = false
-      override val logRetentionSize = (5 * 10 * setSize + 10).asInstanceOf[Long]
-      override val logRetentionHours = retentionHours
-      override val flushInterval = 100
-    }
-    logManager = new LogManager(config, null, time, maxSegAge, -1, retentionMs, false)
-    logManager.startup
-
-    // create a log
-    val log = logManager.getOrCreateLog("cleanup", 0)
-    var offset = 0L
-
-    // add a bunch of messages that should be larger than the retentionSize
-    for(i <- 0 until 1000) {
-      val set = TestUtils.singleMessageSet("test".getBytes())
-      log.append(set)
-      offset += set.sizeInBytes
-    }
-    // flush to make sure it's written to disk, then sleep briefly before cleanup
-    log.flush
-    Thread.sleep(2000)
-
-    // should be exactly 100 full segments
-    assertEquals("There should be example 100 segments.", 100, log.numberOfSegments)
-
-    // this cleanup shouldn't find any expired segments but should delete some to reduce size
-    logManager.cleanupLogs()
-    assertEquals("Now there should be exactly 6 segments", 6, log.numberOfSegments)
-    assertEquals("Should get empty fetch off new log.", 0L, log.read(offset, 1024).sizeInBytes)
-    try {
-      log.read(0, 1024)
-      fail("Should get exception from fetching earlier.")
-    } catch {
-      case e: OffsetOutOfRangeException => "This is good."
-    }
-    // log should still be appendable
-    log.append(TestUtils.singleMessageSet("test".getBytes()))
-  }
-
-  @Test
-  def testTimeBasedFlush() {
-    val props = TestUtils.createBrokerConfig(0, -1)
-    logManager.close
-    Thread.sleep(100)
-    config = new KafkaConfig(props) {
-                   override val logFileSize = 1024 * 1024 * 1024
-                   override val enableZookeeper = false
-                   override val flushSchedulerThreadRate = 50
-                   override val flushInterval = Int.MaxValue
-                   override val flushIntervalMap = Utils.getTopicFlushIntervals("timebasedflush:100")
-                 }
-    logManager = new LogManager(config, null, time, maxSegAge, -1, maxLogAge, false)
-    logManager.startup
-    val log = logManager.getOrCreateLog("timebasedflush", 0)
-    for(i <- 0 until 200) {
-      var set = TestUtils.singleMessageSet("test".getBytes())
-      log.append(set)
-    }
-
-    assertTrue("The last flush time has to be within defaultflushInterval of current time ",
-                     (System.currentTimeMillis - log.getLastFlushedTime) < 100)
-  }
-
-  @Test
-  def testConfigurablePartitions() {
-    val props = TestUtils.createBrokerConfig(0, -1)
-    logManager.close
-    Thread.sleep(100)
-    config = new KafkaConfig(props) {
-                   override val logFileSize = 256
-                   override val enableZookeeper = false
-                   override val topicPartitionsMap = Utils.getTopicPartitions("testPartition:2")
-                   override val flushInterval = 100
-                 }
-    
-    logManager = new LogManager(config, null, time, maxSegAge, -1, maxLogAge, false)
-    logManager.startup
-    
-    for(i <- 0 until 2) {
-      val log = logManager.getOrCreateLog("testPartition", i)
-      for(i <- 0 until 250) {
-        var set = TestUtils.singleMessageSet("test".getBytes())
-        log.append(set)
-      }
-    }
-
-    try
-    {
-      val log = logManager.getOrCreateLog("testPartition", 2)
-      assertTrue("Should not come here", log != null)
-    } catch {
-       case _ =>
-    }
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/log/LogOffsetTest.scala b/trunk/core/src/test/scala/unit/kafka/log/LogOffsetTest.scala
deleted file mode 100644
index 3ba26d3..0000000
--- a/trunk/core/src/test/scala/unit/kafka/log/LogOffsetTest.scala
+++ /dev/null
@@ -1,215 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import java.io.File
-import kafka.utils._
-import kafka.server.{KafkaConfig, KafkaServer}
-import junit.framework.Assert._
-import java.util.{Random, Properties}
-import kafka.api.{FetchRequest, OffsetRequest}
-import collection.mutable.WrappedArray
-import kafka.consumer.SimpleConsumer
-import org.scalatest.junit.JUnitSuite
-import org.junit.{After, Before, Test}
-import kafka.message.{NoCompressionCodec, ByteBufferMessageSet, Message}
-import org.apache.log4j._
-
-object LogOffsetTest {
-  val random = new Random()  
-}
-
-class LogOffsetTest extends JUnitSuite {
-  var logDir: File = null
-  var topicLogDir: File = null
-  var server: KafkaServer = null
-  var logSize: Int = 100
-  val brokerPort: Int = 9099
-  var simpleConsumer: SimpleConsumer = null
-
-  @Before
-  def setUp() {
-    val config: Properties = createBrokerConfig(1, brokerPort)
-    val logDirPath = config.getProperty("log.dir")
-    logDir = new File(logDirPath)
-    
-    server = TestUtils.createServer(new KafkaConfig(config))
-    simpleConsumer = new SimpleConsumer("localhost", brokerPort, 1000000, 64*1024)
-  }
-
-  @After
-  def tearDown() {
-    simpleConsumer.close
-    server.shutdown
-    Utils.rm(logDir)
-  }
-
-  @Test
-  def testEmptyLogs() {
-    val messageSet: ByteBufferMessageSet = simpleConsumer.fetch(
-      new FetchRequest("test", 0, 0, 300 * 1024))
-    assertFalse(messageSet.iterator.hasNext)
-
-    val name = "test"
-    val logFile = new File(logDir, name + "-0")
-    
-    {
-      val offsets = simpleConsumer.getOffsetsBefore(name, 0, OffsetRequest.LatestTime, 10)
-      assertTrue( (Array(0L): WrappedArray[Long]) == (offsets: WrappedArray[Long]) )
-      assertTrue(!logFile.exists())
-    }
-
-    {
-      val offsets = simpleConsumer.getOffsetsBefore(name, 0, OffsetRequest.EarliestTime, 10)
-      assertTrue( (Array(0L): WrappedArray[Long]) == (offsets: WrappedArray[Long]) )
-      assertTrue(!logFile.exists())
-    }
-
-    {
-      val offsets = simpleConsumer.getOffsetsBefore(name, 0, SystemTime.milliseconds, 10)
-      assertEquals( 0, offsets.length )
-      assertTrue(!logFile.exists())
-    }
-  }
-
-  @Test
-  def testGetOffsetsBeforeLatestTime() {
-    val topicPartition = "kafka-" + 0
-    val topicPartitionPath = getLogDir.getAbsolutePath + "/" + topicPartition
-    val topic = topicPartition.split("-").head
-    val part = Integer.valueOf(topicPartition.split("-").last).intValue
-
-    val logManager = server.getLogManager
-    val log = logManager.getOrCreateLog(topic, part)
-
-    val message = new Message(Integer.toString(42).getBytes())
-    for(i <- 0 until 20)
-      log.append(new ByteBufferMessageSet(NoCompressionCodec, message))
-    log.flush()
-
-    Thread.sleep(100)
-
-    val offsetRequest = new OffsetRequest(topic, part, OffsetRequest.LatestTime, 10)
-
-    val offsets = log.getOffsetsBefore(offsetRequest)
-    assertTrue((Array(240L, 216L, 108L, 0L): WrappedArray[Long]) == (offsets: WrappedArray[Long]))
-
-    val consumerOffsets = simpleConsumer.getOffsetsBefore(topic, part,
-                                                          OffsetRequest.LatestTime, 10)
-    assertTrue((Array(240L, 216L, 108L, 0L): WrappedArray[Long]) == (consumerOffsets: WrappedArray[Long]))
-
-    // try to fetch using latest offset
-    val messageSet: ByteBufferMessageSet = simpleConsumer.fetch(
-      new FetchRequest(topic, 0, consumerOffsets.head, 300 * 1024))
-    assertFalse(messageSet.iterator.hasNext)
-  }
-
-  @Test
-  def testEmptyLogsGetOffsets() {
-    val topicPartition = "kafka-" + LogOffsetTest.random.nextInt(10)
-    val topicPartitionPath = getLogDir.getAbsolutePath + "/" + topicPartition
-    topicLogDir = new File(topicPartitionPath)
-    topicLogDir.mkdir
-
-    val topic = topicPartition.split("-").head
-    val part = Integer.valueOf(topicPartition.split("-").last).intValue
-
-    var offsetChanged = false
-    for(i <- 1 to 14) {
-      val consumerOffsets = simpleConsumer.getOffsetsBefore(topic, part,
-        OffsetRequest.EarliestTime, 1)
-
-      if(consumerOffsets(0) == 1) {
-        offsetChanged = true
-      }
-    }
-    assertFalse(offsetChanged)
-  }
-
-  @Test
-  def testGetOffsetsBeforeNow() {
-    val topicPartition = "kafka-" + LogOffsetTest.random.nextInt(10)
-    val topicPartitionPath = getLogDir.getAbsolutePath + "/" + topicPartition
-    val topic = topicPartition.split("-").head
-    val part = Integer.valueOf(topicPartition.split("-").last).intValue
-
-    val logManager = server.getLogManager
-    val log = logManager.getOrCreateLog(topic, part)
-    val message = new Message(Integer.toString(42).getBytes())
-    for(i <- 0 until 20)
-      log.append(new ByteBufferMessageSet(NoCompressionCodec, message))
-    log.flush()
-
-    val now = System.currentTimeMillis
-    Thread.sleep(100)
-
-    val offsetRequest = new OffsetRequest(topic, part, now, 10)
-    val offsets = log.getOffsetsBefore(offsetRequest)
-    assertTrue((Array(216L, 108L, 0L): WrappedArray[Long]) == (offsets: WrappedArray[Long]))
-
-    val consumerOffsets = simpleConsumer.getOffsetsBefore(topic, part, now, 10)
-    assertTrue((Array(216L, 108L, 0L): WrappedArray[Long]) == (consumerOffsets: WrappedArray[Long]))
-  }
-
-  @Test
-  def testGetOffsetsBeforeEarliestTime() {
-    val topicPartition = "kafka-" + LogOffsetTest.random.nextInt(10)
-    val topicPartitionPath = getLogDir.getAbsolutePath + "/" + topicPartition
-    val topic = topicPartition.split("-").head
-    val part = Integer.valueOf(topicPartition.split("-").last).intValue
-
-    val logManager = server.getLogManager
-    val log = logManager.getOrCreateLog(topic, part)
-    val message = new Message(Integer.toString(42).getBytes())
-    for(i <- 0 until 20)
-      log.append(new ByteBufferMessageSet(NoCompressionCodec, message))
-    log.flush()
-
-    Thread.sleep(100)
-
-    val offsetRequest = new OffsetRequest(topic, part,
-                                          OffsetRequest.EarliestTime, 10)
-    val offsets = log.getOffsetsBefore(offsetRequest)
-
-    assertTrue( (Array(0L): WrappedArray[Long]) == (offsets: WrappedArray[Long]) )
-
-    val consumerOffsets = simpleConsumer.getOffsetsBefore(topic, part,
-                                                          OffsetRequest.EarliestTime, 10)
-    assertTrue( (Array(0L): WrappedArray[Long]) == (consumerOffsets: WrappedArray[Long]) )
-  }
-
-  private def createBrokerConfig(nodeId: Int, port: Int): Properties = {
-    val props = new Properties
-    props.put("brokerid", nodeId.toString)
-    props.put("port", port.toString)
-    props.put("log.dir", getLogDir.getAbsolutePath)
-    props.put("log.flush.interval", "1")
-    props.put("enable.zookeeper", "false")
-    props.put("num.partitions", "20")
-    props.put("log.retention.hours", "10")
-    props.put("log.cleanup.interval.mins", "5")
-    props.put("log.file.size", logSize.toString)
-    props
-  }
-
-  private def getLogDir(): File = {
-    val dir = TestUtils.tempDir()
-    dir
-  }
-
-}
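
As a quick reference for the offset-lookup API this removed test exercised, a minimal sketch (host, port and topic are placeholders; the SimpleConsumer and OffsetRequest signatures are the 0.7-era ones used above):

    import kafka.api.OffsetRequest
    import kafka.consumer.SimpleConsumer

    // Ask the broker for up to 10 offsets at or before the latest point of partition 0.
    val consumer = new SimpleConsumer("localhost", 9099, 1000000, 64 * 1024)
    val offsets = consumer.getOffsetsBefore("kafka", 0, OffsetRequest.LatestTime, 10)
    println(offsets.mkString(", "))
    consumer.close
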
diff --git a/trunk/core/src/test/scala/unit/kafka/log/LogTest.scala b/trunk/core/src/test/scala/unit/kafka/log/LogTest.scala
deleted file mode 100644
index f448f60..0000000
--- a/trunk/core/src/test/scala/unit/kafka/log/LogTest.scala
+++ /dev/null
@@ -1,299 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import java.io._
-import java.util.ArrayList
-import junit.framework.Assert._
-import org.scalatest.junit.JUnitSuite
-import org.junit.{After, Before, Test}
-import kafka.utils.{Utils, TestUtils, Range, SystemTime, MockTime}
-import kafka.common.{MessageSizeTooLargeException, OffsetOutOfRangeException}
-import kafka.message.{NoCompressionCodec, ByteBufferMessageSet, Message}
-import kafka.server.KafkaConfig
-
-class LogTest extends JUnitSuite {
-  
-  var logDir: File = null
-  var config:KafkaConfig = null
-  @Before
-  def setUp() {
-    logDir = TestUtils.tempDir()
-    val props = TestUtils.createBrokerConfig(0, -1)
-    config = new KafkaConfig(props)
-  }
-
-  @After
-  def tearDown() {
-    Utils.rm(logDir)
-  }
-  
-  def createEmptyLogs(dir: File, offsets: Int*) = {
-    for(offset <- offsets)
-      new File(dir, Integer.toString(offset) + Log.FileSuffix).createNewFile()
-  }
-
-  /** Test that the size and time based log segment rollout works. */
-  @Test
-  def testTimeBasedLogRoll() {
-    val set = TestUtils.singleMessageSet("test".getBytes())
-    val rollMs = 1 * 60 * 60L
-    val time: MockTime = new MockTime()
-
-    // create a log
-    val log = new Log(logDir, time, 1000, config.maxMessageSize, 1000, rollMs, false)
-    time.currentMs += rollMs + 1
-
-    // segment age is less than its limit
-    log.append(set)
-    assertEquals("There should be exactly one segment.", 1, log.numberOfSegments)
-
-    log.append(set)
-    assertEquals("There should be exactly one segment.", 1, log.numberOfSegments)
-
-    // segment expires in age
-    time.currentMs += rollMs + 1
-    log.append(set)
-    assertEquals("There should be exactly 2 segments.", 2, log.numberOfSegments)
-
-    time.currentMs += rollMs + 1
-    val blank = Array[Message]()
-    log.append(new ByteBufferMessageSet(blank:_*))
-    assertEquals("There should be exactly 3 segments.", 3, log.numberOfSegments)
-
-    time.currentMs += rollMs + 1
-    // the last segment expired in age but was blank, so no new segment should be generated
-    log.append(set)
-    assertEquals("There should be exactly 3 segments.", 3, log.numberOfSegments)
-  }
-
-  @Test
-  def testSizeBasedLogRoll() {
-    val set = TestUtils.singleMessageSet("test".getBytes())
-    val setSize = set.sizeInBytes
-    val msgPerSeg = 10
-    val segSize = msgPerSeg * (setSize - 1).asInstanceOf[Int] // each segment will be 10 messages
-
-    // create a log
-    val log = new Log(logDir, SystemTime, segSize, config.maxMessageSize, 1000, 10000, false)
-    assertEquals("There should be exactly 1 segment.", 1, log.numberOfSegments)
-
-    // segments expire in size
-    for (i<- 1 to (msgPerSeg + 1)) {
-      log.append(set)
-    }
-    assertEquals("There should be exactly 2 segments.", 2, log.numberOfSegments)
-  }
-
-  @Test
-  def testLoadEmptyLog() {
-    createEmptyLogs(logDir, 0)
-    new Log(logDir, SystemTime, 1024, config.maxMessageSize, 1000, config.logRollHours*60*60*1000L, false)
-  }
-  
-  @Test
-  def testLoadInvalidLogsFails() {
-    createEmptyLogs(logDir, 0, 15)
-    try {
-      new Log(logDir, SystemTime, 1024, config.maxMessageSize, 1000, config.logRollHours*60*60*1000L, false)
-      fail("Allowed load of corrupt logs without complaint.")
-    } catch {
-      case e: IllegalStateException => "This is good"
-    }
-  }
-  
-  @Test
-  def testAppendAndRead() {
-    val log = new Log(logDir, SystemTime, 1024, config.maxMessageSize, 1000, config.logRollHours*60*60*1000L, false)
-    val message = new Message(Integer.toString(42).getBytes())
-    for(i <- 0 until 10)
-      log.append(new ByteBufferMessageSet(NoCompressionCodec, message))
-    log.flush()
-    val messages = log.read(0, 1024)
-    var current = 0
-    for(curr <- messages) {
-      assertEquals("Read message should equal written", message, curr.message)
-      current += 1
-    }
-    assertEquals(10, current)
-  }
-  
-  @Test
-  def testReadOutOfRange() {
-    createEmptyLogs(logDir, 1024)
-    val log = new Log(logDir, SystemTime, 1024, config.maxMessageSize, 1000, config.logRollHours*60*60*1000L, false)
-    assertEquals("Reading just beyond end of log should produce 0 byte read.", 0L, log.read(1024, 1000).sizeInBytes)
-    try {
-      log.read(0, 1024)
-      fail("Expected exception on invalid read.")
-    } catch {
-      case e: OffsetOutOfRangeException => "This is good."
-    }
-    try {
-      log.read(1025, 1000)
-      fail("Expected exception on invalid read.")
-    } catch {
-      case e: OffsetOutOfRangeException => "This is good."
-    }
-  }
-  
-  /** Test that writing and reading beyond the log size boundary works */
-  @Test
-  def testLogRolls() {
-    /* create a multipart log with 100 messages */
-    val log = new Log(logDir, SystemTime, 100, config.maxMessageSize, 1000, config.logRollHours*60*60*1000L, false)
-    val numMessages = 100
-    for(i <- 0 until numMessages)
-      log.append(TestUtils.singleMessageSet(Integer.toString(i).getBytes()))
-    log.flush
-    
-    /* now do successive reads and iterate over the resulting message sets counting the messages
-     * we should find exactly 100 messages.
-     */
-    var reads = 0
-    var current = 0
-    var offset = 0L
-    var readOffset = 0L
-    while(current < numMessages) {
-      val messages = log.read(readOffset, 1024*1024)
-      readOffset += messages.last.offset
-      current += messages.size
-      if(reads > 2*numMessages)
-        fail("Too many read attempts.")
-      reads += 1
-    }
-    assertEquals("We did not find all the messages we put in", numMessages, current)
-  }
-  
-  @Test
-  def testFindSegment() {
-    assertEquals("Search in empty segments list should find nothing", None, Log.findRange(makeRanges(), 45))
-    assertEquals("Search in segment list just outside the range of the last segment should find nothing",
-                 None, Log.findRange(makeRanges(5, 9, 12), 12))
-    try {
-      Log.findRange(makeRanges(35), 36)
-      fail("expect exception")
-    }
-    catch {
-      case e: OffsetOutOfRangeException => "this is good"
-    }
-
-    try {
-      Log.findRange(makeRanges(35,35), 36)
-    }
-    catch {
-      case e: OffsetOutOfRangeException => "this is good"
-    }
-
-    assertContains(makeRanges(5, 9, 12), 11)
-    assertContains(makeRanges(5), 4)
-    assertContains(makeRanges(5,8), 5)
-    assertContains(makeRanges(5,8), 6)
-  }
-  
-  /** Test corner cases of rolling logs */
-  @Test
-  def testEdgeLogRolls() {
-    {
-      // first test a log segment starting at 0
-      val log = new Log(logDir, SystemTime, 100, config.maxMessageSize, 1000, config.logRollHours*60*60*1000L, false)
-      val curOffset = log.nextAppendOffset
-      assertEquals(curOffset, 0)
-
-      // time goes by; the log file is deleted
-      log.markDeletedWhile(_ => true)
-
-      // we now have a new log; the starting offset of the new log should remain 0
-      assertEquals(curOffset, log.nextAppendOffset)
-    }
-
-    {
-      // second test an empty log segment starting at non-zero
-      val log = new Log(logDir, SystemTime, 100, config.maxMessageSize, 1000, config.logRollHours*60*60*1000L, false)
-      val numMessages = 1
-      for(i <- 0 until numMessages)
-        log.append(TestUtils.singleMessageSet(Integer.toString(i).getBytes()))
-
-      val curOffset = log.nextAppendOffset
-      // time goes by; the log file is deleted
-      log.markDeletedWhile(_ => true)
-
-      // we now have a new log
-      assertEquals(curOffset, log.nextAppendOffset)
-
-      // time goes by; the log file (which is empty) is deleted again
-      val deletedSegments = log.markDeletedWhile(_ => true)
-
-      // we shouldn't delete the last empty log segment.
-      assertTrue(deletedSegments.size == 0)
-
-      // we now have a new log
-      assertEquals(curOffset, log.nextAppendOffset)
-    }
-  }
-
-  @Test
-  def testMessageSizeCheck() {
-    val first = new ByteBufferMessageSet(NoCompressionCodec, new Message ("You".getBytes()), new Message("bethe".getBytes()))
-    val second = new ByteBufferMessageSet(NoCompressionCodec, new Message("change".getBytes()))
-
-    // append messages to log
-    val log = new Log(logDir, SystemTime, 100, 5, 1000, 24*7*60*60*1000L, false)
-
-    var ret =
-    try {
-      log.append(first)
-      true
-    }
-    catch {
-      case e: MessageSizeTooLargeException => false
-    }
-    assert(ret, "First messageset should pass.")
-
-    ret =
-    try {
-      log.append(second)
-      false
-    }
-    catch {
-      case e:MessageSizeTooLargeException => true
-    }
-    assert(ret, "Second message set should throw MessageSizeTooLargeException.")
-  }
-
-  def assertContains(ranges: Array[Range], offset: Long) = {
-    Log.findRange(ranges, offset) match {
-      case Some(range) => 
-        assertTrue(range + " does not contain " + offset, range.contains(offset))
-      case None => fail("No range found, but expected to find " + offset)
-    }
-  }
-  
-  class SimpleRange(val start: Long, val size: Long) extends Range
-  
-  def makeRanges(breaks: Int*): Array[Range] = {
-    val list = new ArrayList[Range]
-    var prior = 0
-    for(brk <- breaks) {
-      list.add(new SimpleRange(prior, brk - prior))
-      prior = brk
-    }
-    list.toArray(new Array[Range](list.size))
-  }
-  
-}
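
For orientation, the basic append/flush/read cycle the removed LogTest covered, as a minimal sketch (logDir and config are assumed to be set up as in the test's setUp):

    // Append a handful of messages, flush, then read them back from offset 0.
    val log = new Log(logDir, SystemTime, 1024, config.maxMessageSize, 1000, config.logRollHours * 60 * 60 * 1000L, false)
    for (i <- 0 until 3)
      log.append(new ByteBufferMessageSet(NoCompressionCodec, new Message(i.toString.getBytes)))
    log.flush()
    for (entry <- log.read(0, 1024))
      println(entry.message)
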
diff --git a/trunk/core/src/test/scala/unit/kafka/log/SegmentListTest.scala b/trunk/core/src/test/scala/unit/kafka/log/SegmentListTest.scala
deleted file mode 100644
index 9cc059f..0000000
--- a/trunk/core/src/test/scala/unit/kafka/log/SegmentListTest.scala
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.log
-
-import junit.framework.Assert._
-import org.junit.Test
-import org.scalatest.junit.JUnitSuite
-
-class SegmentListTest extends JUnitSuite {
-
-  @Test
-  def testAppend() {
-    val list = List(1, 2, 3, 4)
-    val sl = new SegmentList(list)
-    val view = sl.view
-    assertEquals(list, view.iterator.toList)
-    sl.append(5)
-    assertEquals("Appending to both should result in list that are still equals", 
-                 list ::: List(5), sl.view.iterator.toList)
-    assertEquals("But the prior view should still equal the original list", list, view.iterator.toList)
-  }
-  
-  @Test
-  def testTrunc() {
-    val hd = List(1,2,3)
-    val tail = List(4,5,6)
-    val sl = new SegmentList(hd ::: tail)
-    val view = sl.view
-    assertEquals(hd ::: tail, view.iterator.toList)
-    val deleted = sl.trunc(3)
-    assertEquals(tail, sl.view.iterator.toList)
-    assertEquals(hd, deleted.iterator.toList)
-    assertEquals("View should remain consistent", hd ::: tail, view.iterator.toList)
-  }
-  
-  @Test
-  def testTruncBeyondList() {
-    val sl = new SegmentList(List(1, 2))
-    sl.trunc(3)
-    assertEquals(0, sl.view.length)
-  }
-  
-}
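
The snapshot semantics the removed SegmentListTest verified, in a short sketch (same SegmentList API as above):

    // A view is a stable snapshot: later appends are only visible to views taken afterwards.
    val sl = new SegmentList(List(1, 2, 3))
    val view = sl.view
    sl.append(4)
    assert(view.iterator.toList == List(1, 2, 3))        // old view unchanged
    assert(sl.view.iterator.toList == List(1, 2, 3, 4))  // a new view sees the append
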
diff --git a/trunk/core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala b/trunk/core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala
deleted file mode 100644
index 7f67eb3..0000000
--- a/trunk/core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala
+++ /dev/null
@@ -1,254 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.log4j
-
-import org.apache.log4j.spi.LoggingEvent
-import org.apache.log4j.{PropertyConfigurator, Logger}
-import java.util.Properties
-import java.io.File
-import kafka.consumer.SimpleConsumer
-import kafka.server.{KafkaConfig, KafkaServer}
-import kafka.utils.{TestUtils, TestZKUtils,Utils, Logging}
-import kafka.zk.EmbeddedZookeeper
-import junit.framework.Assert._
-import kafka.api.FetchRequest
-import kafka.serializer.Encoder
-import kafka.message.Message
-import kafka.producer.async.MissingConfigException
-import org.scalatest.junit.JUnitSuite
-import org.junit.{After, Before, Test}
-
-class KafkaLog4jAppenderTest extends JUnitSuite with Logging {
-
-  var logDirZk: File = null
-  var logDirBl: File = null
-  //  var topicLogDir: File = null
-  var serverBl: KafkaServer = null
-  var serverZk: KafkaServer = null
-
-  var simpleConsumerZk: SimpleConsumer = null
-  var simpleConsumerBl: SimpleConsumer = null
-
-  val tLogger = Logger.getLogger(getClass())
-
-  private val brokerZk = 0
-  private val brokerBl = 1
-
-  private val ports = TestUtils.choosePorts(2)
-  private val (portZk, portBl) = (ports(0), ports(1))
-
-  private var zkServer:EmbeddedZookeeper = null
-
-  @Before
-  def setUp() {
-    zkServer = new EmbeddedZookeeper(TestZKUtils.zookeeperConnect)
-
-    val propsZk = TestUtils.createBrokerConfig(brokerZk, portZk)
-    val logDirZkPath = propsZk.getProperty("log.dir")
-    logDirZk = new File(logDirZkPath)
-    serverZk = TestUtils.createServer(new KafkaConfig(propsZk));
-
-    val propsBl: Properties = createBrokerConfig(brokerBl, portBl)
-    val logDirBlPath = propsBl.getProperty("log.dir")
-    logDirBl = new File(logDirBlPath)
-    serverBl = TestUtils.createServer(new KafkaConfig(propsBl))
-
-    Thread.sleep(100)
-
-    simpleConsumerZk = new SimpleConsumer("localhost", portZk, 1000000, 64*1024)
-    simpleConsumerBl = new SimpleConsumer("localhost", portBl, 1000000, 64*1024)
-  }
-
-  @After
-  def tearDown() {
-    simpleConsumerZk.close
-    simpleConsumerBl.close
-
-    serverZk.shutdown
-    serverBl.shutdown
-    Utils.rm(logDirZk)
-    Utils.rm(logDirBl)
-
-    Thread.sleep(500)
-    zkServer.shutdown
-    Thread.sleep(500)
-  }
-
-  @Test
-  def testKafkaLog4jConfigs() {
-    var props = new Properties()
-    props.put("log4j.rootLogger", "INFO")
-    props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
-    props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
-    props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
-    props.put("log4j.appender.KAFKA.Topic", "test-topic")
-    props.put("log4j.appender.KAFKA.SerializerClass", "kafka.log4j.AppenderStringEncoder")
-    props.put("log4j.logger.kafka.log4j", "INFO, KAFKA")
-
-    // port missing
-    try {
-      PropertyConfigurator.configure(props)
-      fail("Missing properties exception was expected !")
-    }catch {
-      case e: MissingConfigException =>
-    }
-
-    props = new Properties()
-    props.put("log4j.rootLogger", "INFO")
-    props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
-    props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
-    props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
-    props.put("log4j.appender.KAFKA.Topic", "test-topic")
-    props.put("log4j.appender.KAFKA.SerializerClass", "kafka.log4j.AppenderStringEncoder")
-    props.put("log4j.logger.kafka.log4j", "INFO, KAFKA")
-
-    // host missing
-    try {
-      PropertyConfigurator.configure(props)
-      fail("Missing properties exception was expected !")
-    }catch {
-      case e: MissingConfigException =>
-    }
-
-    props = new Properties()
-    props.put("log4j.rootLogger", "INFO")
-    props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
-    props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
-    props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
-    props.put("log4j.appender.KAFKA.SerializerClass", "kafka.log4j.AppenderStringEncoder")
-    props.put("log4j.appender.KAFKA.BrokerList", "0:localhost:"+portBl.toString)
-    props.put("log4j.logger.kafka.log4j", "INFO, KAFKA")
-
-    // topic missing
-    try {
-      PropertyConfigurator.configure(props)
-      fail("Missing properties exception was expected !")
-    }catch {
-      case e: MissingConfigException =>
-    }
-
-    props = new Properties()
-    props.put("log4j.rootLogger", "INFO")
-    props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
-    props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
-    props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
-    props.put("log4j.appender.KAFKA.BrokerList", "0:localhost:"+portBl.toString)
-    props.put("log4j.appender.KAFKA.Topic", "test-topic")
-    props.put("log4j.logger.kafka.log4j", "INFO, KAFKA")
-
-    // serializer missing
-    try {
-      PropertyConfigurator.configure(props)
-    } catch {
-      case e: MissingConfigException => fail("should default to kafka.serializer.StringEncoder")
-    }
-  }
-
-  @Test
-  def testBrokerListLog4jAppends() {
-    PropertyConfigurator.configure(getLog4jConfigWithBrokerList)
-
-    for(i <- 1 to 5)
-      info("test")
-
-    Thread.sleep(500)
-
-    var offset = 0L
-    val messages = simpleConsumerBl.fetch(new FetchRequest("test-topic", 0, offset, 1024*1024))
-
-    var count = 0
-    for(message <- messages) {
-      count = count + 1
-      offset += message.offset
-    }
-
-    assertEquals(5, count)
-  }
-
-  @Test
-  def testZkConnectLog4jAppends() {
-    PropertyConfigurator.configure(getLog4jConfigWithZkConnect)
-
-    for(i <- 1 to 5)
-      info("test")
-
-    Thread.sleep(500)
-
-    var offset = 0L
-    val messages = simpleConsumerZk.fetch(new FetchRequest("test-topic", 0, offset, 1024*1024))
-
-    var count = 0
-    for(message <- messages) {
-      count = count + 1
-      offset += message.offset
-    }
-
-    assertEquals(5, count)
-  }
-
-  private def getLog4jConfigWithBrokerList: Properties = {
-    var props = new Properties()
-    props.put("log4j.rootLogger", "INFO")
-    props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
-    props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
-    props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
-    props.put("log4j.appender.KAFKA.BrokerList", "0:localhost:"+portBl.toString)
-    props.put("log4j.appender.KAFKA.Topic", "test-topic")
-    props.put("log4j.logger.kafka.log4j", "INFO,KAFKA")
-    props
-  }
-
-  private def getLog4jConfigWithZkConnect: Properties = {
-    var props = new Properties()
-    props.put("log4j.rootLogger", "INFO")
-    props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
-    props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
-    props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
-    props.put("log4j.appender.KAFKA.ZkConnect", TestZKUtils.zookeeperConnect)
-    props.put("log4j.appender.KAFKA.Topic", "test-topic")
-    props.put("log4j.logger.kafka.log4j", "INFO,KAFKA")
-    props
-  }
-
-  private def createBrokerConfig(nodeId: Int, port: Int): Properties = {
-    val props = new Properties
-    props.put("brokerid", nodeId.toString)
-    props.put("port", port.toString)
-    props.put("log.dir", getLogDir.getAbsolutePath)
-    props.put("log.flush.interval", "1")
-    props.put("enable.zookeeper", "false")
-    props.put("num.partitions", "1")
-    props.put("log.retention.hours", "10")
-    props.put("log.cleanup.interval.mins", "5")
-    props.put("log.file.size", "1000")
-    props
-  }
-
-  private def getLogDir(): File = {
-    val dir = TestUtils.tempDir()
-    dir
-  }
-}
-
-class AppenderStringEncoder extends Encoder[LoggingEvent] {
-  def toMessage(event: LoggingEvent):Message = {
-    val logMessage = event.getMessage
-    new Message(logMessage.asInstanceOf[String].getBytes)
-  }
-}
-
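For reference, the appender wiring the removed test checked can be set up roughly like this (values mirror the deleted test; the broker id, host and port in BrokerList are placeholders):

    import java.util.Properties
    import org.apache.log4j.PropertyConfigurator

    val props = new Properties()
    props.put("log4j.rootLogger", "INFO")
    props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
    props.put("log4j.appender.KAFKA.layout", "org.apache.log4j.PatternLayout")
    props.put("log4j.appender.KAFKA.layout.ConversionPattern", "%-5p: %c - %m%n")
    props.put("log4j.appender.KAFKA.BrokerList", "0:localhost:9092")   // or ZkConnect instead of BrokerList
    props.put("log4j.appender.KAFKA.Topic", "test-topic")
    props.put("log4j.logger.kafka.log4j", "INFO,KAFKA")
    PropertyConfigurator.configure(props)   // omitting BrokerList/ZkConnect or Topic raises MissingConfigException
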
diff --git a/trunk/core/src/test/scala/unit/kafka/message/BaseMessageSetTestCases.scala b/trunk/core/src/test/scala/unit/kafka/message/BaseMessageSetTestCases.scala
deleted file mode 100644
index a6dc642..0000000
--- a/trunk/core/src/test/scala/unit/kafka/message/BaseMessageSetTestCases.scala
+++ /dev/null
@@ -1,69 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import junit.framework.Assert._
-import kafka.utils.TestUtils._
-import org.scalatest.junit.JUnitSuite
-import org.junit.Test
-
-trait BaseMessageSetTestCases extends JUnitSuite {
-  
-  val messages = Array(new Message("abcd".getBytes()), new Message("efgh".getBytes()))
-  
-  def createMessageSet(messages: Seq[Message]): MessageSet
-
-  @Test
-  def testWrittenEqualsRead {
-    val messageSet = createMessageSet(messages)
-    checkEquals(messages.iterator, messageSet.map(m => m.message).iterator)
-  }
-
-  @Test
-  def testIteratorIsConsistent() {
-    val m = createMessageSet(messages)
-    // two iterators over the same set should give the same results
-    checkEquals(m.iterator, m.iterator)
-  }
-
-  @Test
-  def testSizeInBytes() {
-    assertEquals("Empty message set should have 0 bytes.",
-                 0L,
-                 createMessageSet(Array[Message]()).sizeInBytes)
-    assertEquals("Predicted size should equal actual size.", 
-                 MessageSet.messageSetSize(messages).toLong, 
-                 createMessageSet(messages).sizeInBytes)
-  }
-
-  @Test
-  def testWriteTo() {
-    // test empty message set
-    testWriteToWithMessageSet(createMessageSet(Array[Message]()))
-    testWriteToWithMessageSet(createMessageSet(messages))
-  }
-
-  def testWriteToWithMessageSet(set: MessageSet) {
-    val channel = tempChannel()
-    val written = set.writeTo(channel, 0, 1024)
-    assertEquals("Expect to write the number of bytes in the set.", set.sizeInBytes, written)
-    val newSet = new FileMessageSet(channel, false)
-    checkEquals(set.iterator, newSet.iterator)
-  }
-  
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala b/trunk/core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala
deleted file mode 100644
index c81c356..0000000
--- a/trunk/core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala
+++ /dev/null
@@ -1,162 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import java.nio._
-import junit.framework.Assert._
-import org.junit.Test
-import kafka.utils.TestUtils
-import kafka.common.InvalidMessageSizeException
-
-class ByteBufferMessageSetTest extends BaseMessageSetTestCases {
-
-  override def createMessageSet(messages: Seq[Message]): ByteBufferMessageSet = 
-    new ByteBufferMessageSet(NoCompressionCodec, messages: _*)
-  
-  @Test
-  def testSmallFetchSize() {
-    // create a ByteBufferMessageSet that doesn't contain a full message
-    // iterating it should get an InvalidMessageSizeException
-    val messages = new ByteBufferMessageSet(NoCompressionCodec, new Message("01234567890123456789".getBytes()))
-    val buffer = messages.serialized.slice
-    buffer.limit(10)
-    val messageSetWithNoFullMessage = new ByteBufferMessageSet(buffer = buffer, initialOffset = 1000)
-    try {
-      for (message <- messageSetWithNoFullMessage)
-        fail("shouldn't see any message")
-    }
-    catch {
-      case e: InvalidMessageSizeException => //this is expected
-      case e2 => fail("shouldn't see any other exceptions")
-    }
-  }
-
-  @Test
-  def testValidBytes() {
-    {
-      val messages = new ByteBufferMessageSet(NoCompressionCodec, new Message("hello".getBytes()), new Message("there".getBytes()))
-      val buffer = ByteBuffer.allocate(messages.sizeInBytes.toInt + 2)
-      buffer.put(messages.serialized)
-      buffer.putShort(4)
-      val messagesPlus = new ByteBufferMessageSet(buffer)
-      assertEquals("Adding invalid bytes shouldn't change byte count", messages.validBytes, messagesPlus.validBytes)
-    }
-
-    // test valid bytes on empty ByteBufferMessageSet
-    {
-      assertEquals("Valid bytes on an empty ByteBufferMessageSet should return 0", 0,
-        MessageSet.Empty.asInstanceOf[ByteBufferMessageSet].validBytes)
-    }
-  }
-
-  @Test
-  def testEquals() {
-    var messages = new ByteBufferMessageSet(DefaultCompressionCodec, new Message("hello".getBytes()), new Message("there".getBytes()))
-    var moreMessages = new ByteBufferMessageSet(DefaultCompressionCodec, new Message("hello".getBytes()), new Message("there".getBytes()))
-
-    assertTrue(messages.equals(moreMessages))
-
-    messages = new ByteBufferMessageSet(NoCompressionCodec, new Message("hello".getBytes()), new Message("there".getBytes()))
-    moreMessages = new ByteBufferMessageSet(NoCompressionCodec, new Message("hello".getBytes()), new Message("there".getBytes()))
-
-    assertTrue(messages.equals(moreMessages))
-  }
-  
-
-  @Test
-  def testIterator() {
-    val messageList = List(
-        new Message("msg1".getBytes),
-        new Message("msg2".getBytes),
-        new Message("msg3".getBytes)
-      )
-
-    // test for uncompressed regular messages
-    {
-      val messageSet = new ByteBufferMessageSet(NoCompressionCodec, messageList: _*)
-      TestUtils.checkEquals[Message](messageList.iterator, TestUtils.getMessageIterator(messageSet.iterator))
-      //make sure ByteBufferMessageSet is re-iterable.
-      TestUtils.checkEquals[Message](messageList.iterator, TestUtils.getMessageIterator(messageSet.iterator))
-      //make sure the last offset after iteration is correct
-      assertEquals("offset of last message not expected", messageSet.last.offset, messageSet.serialized.limit)
-
-      //make sure shallow iterator is the same as deep iterator
-      TestUtils.checkEquals[Message](TestUtils.getMessageIterator(messageSet.shallowIterator),
-                                     TestUtils.getMessageIterator(messageSet.iterator))
-    }
-
-    // test for compressed regular messages
-    {
-      val messageSet = new ByteBufferMessageSet(DefaultCompressionCodec, messageList: _*)
-      TestUtils.checkEquals[Message](messageList.iterator, TestUtils.getMessageIterator(messageSet.iterator))
-      //make sure ByteBufferMessageSet is re-iterable.
-      TestUtils.checkEquals[Message](messageList.iterator, TestUtils.getMessageIterator(messageSet.iterator))
-      //make sure the last offset after iteration is correct
-      assertEquals("offset of last message not expected", messageSet.last.offset, messageSet.serialized.limit)
-
-      verifyShallowIterator(messageSet)
-    }
-
-    // test for mixed empty and non-empty messagesets uncompressed
-    {
-      val emptyMessageList : List[Message] = Nil
-      val emptyMessageSet = new ByteBufferMessageSet(NoCompressionCodec, emptyMessageList: _*)
-      val regularMessageSet = new ByteBufferMessageSet(NoCompressionCodec, messageList: _*)
-      val buffer = ByteBuffer.allocate(emptyMessageSet.serialized.limit + regularMessageSet.serialized.limit)
-      buffer.put(emptyMessageSet.serialized)
-      buffer.put(regularMessageSet.serialized)
-      buffer.rewind
-      val mixedMessageSet = new ByteBufferMessageSet(buffer, 0, 0)
-      TestUtils.checkEquals[Message](messageList.iterator, TestUtils.getMessageIterator(mixedMessageSet.iterator))
-      //make sure ByteBufferMessageSet is re-iterable.
-      TestUtils.checkEquals[Message](messageList.iterator, TestUtils.getMessageIterator(mixedMessageSet.iterator))
-      //make sure the last offset after iteration is correct
-      assertEquals("offset of last message not expected", mixedMessageSet.last.offset, mixedMessageSet.serialized.limit)
-
-      //make sure shallow iterator is the same as deep iterator
-      TestUtils.checkEquals[Message](TestUtils.getMessageIterator(mixedMessageSet.shallowIterator),
-                                     TestUtils.getMessageIterator(mixedMessageSet.iterator))
-    }
-
-    // test for mixed empty and non-empty messagesets compressed
-    {
-      val emptyMessageList : List[Message] = Nil
-      val emptyMessageSet = new ByteBufferMessageSet(DefaultCompressionCodec, emptyMessageList: _*)
-      val regularMessageSet = new ByteBufferMessageSet(DefaultCompressionCodec, messageList: _*)
-      val buffer = ByteBuffer.allocate(emptyMessageSet.serialized.limit + regularMessageSet.serialized.limit)
-      buffer.put(emptyMessageSet.serialized)
-      buffer.put(regularMessageSet.serialized)
-      buffer.rewind
-      val mixedMessageSet = new ByteBufferMessageSet(buffer, 0, 0)
-      TestUtils.checkEquals[Message](messageList.iterator, TestUtils.getMessageIterator(mixedMessageSet.iterator))
-      //make sure ByteBufferMessageSet is re-iterable.
-      TestUtils.checkEquals[Message](messageList.iterator, TestUtils.getMessageIterator(mixedMessageSet.iterator))
-      //make sure the last offset after iteration is correct
-      assertEquals("offset of last message not expected", mixedMessageSet.last.offset, mixedMessageSet.serialized.limit)
-
-      verifyShallowIterator(mixedMessageSet)
-    }
-  }
-
-  def verifyShallowIterator(messageSet: ByteBufferMessageSet) {
-      // make sure the offsets returned by a shallow iterator are a subset of those returned by a deep iterator
-      val shallowOffsets = messageSet.shallowIterator.map(msgAndOff => msgAndOff.offset).toSet
-      val deepOffsets = messageSet.iterator.map(msgAndOff => msgAndOff.offset).toSet
-      assertTrue(shallowOffsets.subsetOf(deepOffsets))
-  }
-}
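
The shallow-versus-deep distinction verified above, in a compact sketch (same ByteBufferMessageSet API as the removed test):

    // With a compressed set, the shallow iterator yields the wrapper messages while the deep
    // iterator yields the logical messages; shallow offsets are a subset of the deep offsets.
    val set = new ByteBufferMessageSet(DefaultCompressionCodec, new Message("m1".getBytes), new Message("m2".getBytes))
    val shallowOffsets = set.shallowIterator.map(_.offset).toSet
    val deepOffsets = set.iterator.map(_.offset).toSet
    assert(shallowOffsets.subsetOf(deepOffsets))
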
diff --git a/trunk/core/src/test/scala/unit/kafka/message/CompressionUtilsTest.scala b/trunk/core/src/test/scala/unit/kafka/message/CompressionUtilsTest.scala
deleted file mode 100644
index df96603..0000000
--- a/trunk/core/src/test/scala/unit/kafka/message/CompressionUtilsTest.scala
+++ /dev/null
@@ -1,75 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import kafka.utils.TestUtils
-import org.scalatest.junit.JUnitSuite
-import org.junit.Test
-import junit.framework.Assert._
-
-class CompressionUtilsTest extends JUnitSuite {
-
-  
-  @Test
-  def testSimpleCompressDecompress() {
-
-    val messages = List[Message](new Message("hi there".getBytes), new Message("I am fine".getBytes), new Message("I am not so well today".getBytes))
-
-    val message = CompressionUtils.compress(messages)
-
-    val decompressedMessages = CompressionUtils.decompress(message)
-
-    TestUtils.checkLength(decompressedMessages.iterator,3)
-
-    TestUtils.checkEquals(messages.iterator, TestUtils.getMessageIterator(decompressedMessages.iterator))
-  }
-
-  @Test
-  def testComplexCompressDecompress() {
-
-    val messages = List[Message](new Message("hi there".getBytes), new Message("I am fine".getBytes), new Message("I am not so well today".getBytes))
-
-    val message = CompressionUtils.compress(messages.slice(0, 2))
-
-    val complexMessages = List[Message](message):::messages.slice(2,3)
-
-    val complexMessage = CompressionUtils.compress(complexMessages)
-
-    val decompressedMessages = CompressionUtils.decompress(complexMessage)
-
-    TestUtils.checkLength(TestUtils.getMessageIterator(decompressedMessages.iterator),3)
-
-    TestUtils.checkEquals(messages.iterator, TestUtils.getMessageIterator(decompressedMessages.iterator))
-  }
-
-  @Test
-  def testSnappyCompressDecompressExplicit() {
-
-    val messages = List[Message](new Message("hi there".getBytes), new Message("I am fine".getBytes), new Message("I am not so well today".getBytes))
-
-    val message = CompressionUtils.compress(messages,SnappyCompressionCodec)
-
-    assertEquals(message.compressionCodec,SnappyCompressionCodec)
-
-    val decompressedMessages = CompressionUtils.decompress(message)
-
-    TestUtils.checkLength(decompressedMessages.iterator,3)
-
-    TestUtils.checkEquals(messages.iterator, TestUtils.getMessageIterator(decompressedMessages.iterator))
-  }
-}
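
The compress/decompress round trip covered by the removed test, as a minimal sketch (same CompressionUtils API as above):

    // Compress a batch into a single wrapper message, then recover the original messages.
    val msgs = List(new Message("a".getBytes), new Message("b".getBytes))
    val compressed = CompressionUtils.compress(msgs, SnappyCompressionCodec)   // compress(msgs) picks the default codec
    val decompressed = CompressionUtils.decompress(compressed)
    assert(decompressed.iterator.size == msgs.size)
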
diff --git a/trunk/core/src/test/scala/unit/kafka/message/FileMessageSetTest.scala b/trunk/core/src/test/scala/unit/kafka/message/FileMessageSetTest.scala
deleted file mode 100644
index a683963..0000000
--- a/trunk/core/src/test/scala/unit/kafka/message/FileMessageSetTest.scala
+++ /dev/null
@@ -1,84 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import java.nio._
-import junit.framework.Assert._
-import kafka.utils.TestUtils._
-import org.junit.Test
-
-class FileMessageSetTest extends BaseMessageSetTestCases {
-  
-  val messageSet = createMessageSet(messages)
-  
-  def createMessageSet(messages: Seq[Message]): FileMessageSet = {
-    val set = new FileMessageSet(tempFile(), true)
-    set.append(new ByteBufferMessageSet(NoCompressionCodec, messages: _*))
-    set.flush()
-    set
-  }
-
-  @Test
-  def testFileSize() {
-    assertEquals(messageSet.channel.size, messageSet.sizeInBytes)
-    messageSet.append(singleMessageSet("abcd".getBytes()))
-    assertEquals(messageSet.channel.size, messageSet.sizeInBytes)
-  }
-  
-  @Test
-  def testIterationOverPartialAndTruncation() {
-    testPartialWrite(0, messageSet)
-    testPartialWrite(2, messageSet)
-    testPartialWrite(4, messageSet)
-    testPartialWrite(5, messageSet)
-    testPartialWrite(6, messageSet)
-  }
-  
-  def testPartialWrite(size: Int, messageSet: FileMessageSet) {
-    val buffer = ByteBuffer.allocate(size)
-    val originalPosition = messageSet.channel.position
-    for(i <- 0 until size)
-      buffer.put(0.asInstanceOf[Byte])
-    buffer.rewind()
-    messageSet.channel.write(buffer)
-    // appending those bytes should not change the contents
-    checkEquals(messages.iterator, messageSet.map(m => m.message).iterator)
-    assertEquals("Unexpected number of bytes truncated", size.longValue, messageSet.recover())
-    assertEquals("File pointer should now be at the end of the file.", originalPosition, messageSet.channel.position)
-    // nor should recovery change the contents
-    checkEquals(messages.iterator, messageSet.map(m => m.message).iterator)
-  }
-  
-  @Test
-  def testIterationDoesntChangePosition() {
-    val position = messageSet.channel.position
-    checkEquals(messages.iterator, messageSet.map(m => m.message).iterator)
-    assertEquals(position, messageSet.channel.position)
-  }
-  
-  @Test
-  def testRead() {
-    val read = messageSet.read(0, messageSet.sizeInBytes)
-    checkEquals(messageSet.iterator, read.iterator)
-    val items = read.iterator.toList
-    val first = items.head
-    val read2 = messageSet.read(first.offset, messageSet.sizeInBytes)
-    checkEquals(items.tail.iterator, read2.iterator)
-  }
-  
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/message/MessageTest.scala b/trunk/core/src/test/scala/unit/kafka/message/MessageTest.scala
deleted file mode 100644
index 4e3184c..0000000
--- a/trunk/core/src/test/scala/unit/kafka/message/MessageTest.scala
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.message
-
-import java.util._
-import java.nio._
-import junit.framework.Assert._
-import org.scalatest.junit.JUnitSuite
-import org.junit.{Before, Test}
-import kafka.utils.TestUtils
-
-class MessageTest extends JUnitSuite {
-  
-  var message: Message = null
-  val payload = "some bytes".getBytes()
-
-  @Before
-  def setUp(): Unit = {
-    message = new Message(payload)
-  }
-  
-  @Test
-  def testFieldValues = {
-    TestUtils.checkEquals(ByteBuffer.wrap(payload), message.payload)
-    assertEquals(Message.CurrentMagicValue, message.magic)
-    assertEquals(69L, new Message(69, "hello".getBytes()).checksum)
-  }
-
-  @Test
-  def testChecksum() {
-    assertTrue("Auto-computed checksum should be valid", message.isValid)
-    val badChecksum = (message.checksum + 1) % Int.MaxValue
-    val invalid = new Message(badChecksum, payload)
-    assertEquals("Message should return written checksum", badChecksum, invalid.checksum)
-    assertFalse("Message with invalid checksum should be invalid", invalid.isValid)
-  }
-  
-  @Test
-  def testEquality() = {
-    assertFalse("Should not equal null", message.equals(null))
-    assertFalse("Should not equal a random string", message.equals("asdf"))
-    assertTrue("Should equal itself", message.equals(message))
-    val copy = new Message(message.checksum, payload)
-    assertTrue("Should equal another message with the same content.", message.equals(copy))
-  }
-  
-  @Test
-  def testIsHashable() = {
-    // this is silly, but why not
-    val m = new HashMap[Message,Boolean]()
-    m.put(message, true)
-    assertNotNull(m.get(message))
-  }
-  
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/network/SocketServerTest.scala b/trunk/core/src/test/scala/unit/kafka/network/SocketServerTest.scala
deleted file mode 100644
index cae6651..0000000
--- a/trunk/core/src/test/scala/unit/kafka/network/SocketServerTest.scala
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.network;
-
-import java.net._
-import java.io._
-import org.junit._
-import org.scalatest.junit.JUnitSuite
-import kafka.utils.TestUtils
-import java.util.Random
-import org.apache.log4j._
-
-class SocketServerTest extends JUnitSuite {
-
-  Logger.getLogger("kafka").setLevel(Level.INFO)
-
-  def echo(receive: Receive): Option[Send] = {
-    val id = receive.buffer.getShort
-    Some(new BoundedByteBufferSend(receive.buffer.slice))
-  }
-  
-  val server = new SocketServer(port = TestUtils.choosePort, 
-                                numProcessorThreads = 1, 
-                                monitoringPeriodSecs = 30, 
-                                handlerFactory = (requestId: Short, receive: Receive) => echo, 
-                                sendBufferSize = 300000,
-                                receiveBufferSize = 300000,
-                                maxRequestSize = 50)
-  server.startup()
-
-  def sendRequest(id: Short, request: Array[Byte]): Array[Byte] = {
-    val socket = new Socket("localhost", server.port)
-    val outgoing = new DataOutputStream(socket.getOutputStream)
-    outgoing.writeInt(request.length + 2)
-    outgoing.writeShort(id)
-    outgoing.write(request)
-    outgoing.flush()
-    val incoming = new DataInputStream(socket.getInputStream)
-    val len = incoming.readInt()
-    val response = new Array[Byte](len)
-    incoming.readFully(response)
-    socket.close()
-    response
-  }
-
-  @After
-  def cleanup() {
-    server.shutdown()
-  }
-
-  @Test
-  def simpleRequest() {
-    val response = new String(sendRequest(0, "hello".getBytes))
-    
-  }
-
-  @Test(expected=classOf[IOException])
-  def tooBigRequestIsRejected() {
-    val tooManyBytes = new Array[Byte](server.maxRequestSize + 1)
-    new Random().nextBytes(tooManyBytes)
-    sendRequest(0, tooManyBytes)
-  }
-
-}
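
The wire framing that sendRequest above relies on, spelled out as a client-side sketch (the port is a placeholder for the echo server's port):

    import java.io.DataOutputStream
    import java.net.Socket

    // Each request is a 4-byte length prefix (covering the 2-byte request id plus the payload),
    // followed by the 2-byte request id and the payload itself.
    val socket = new Socket("localhost", 9999)
    val out = new DataOutputStream(socket.getOutputStream)
    val payload = "hello".getBytes
    out.writeInt(payload.length + 2)
    out.writeShort(0)
    out.write(payload)
    out.flush()
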
diff --git a/trunk/core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala b/trunk/core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala
deleted file mode 100644
index 5268e12..0000000
--- a/trunk/core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala
+++ /dev/null
@@ -1,319 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer
-
-import junit.framework.Assert
-import java.util.Properties
-import org.easymock.EasyMock
-import kafka.api.ProducerRequest
-import org.apache.log4j.{Logger, Level}
-import org.junit.Test
-import org.scalatest.junit.JUnitSuite
-import kafka.producer.async._
-import kafka.serializer.Encoder
-import kafka.message.{NoCompressionCodec, ByteBufferMessageSet, Message}
-import kafka.utils.TestZKUtils
-
-class AsyncProducerTest extends JUnitSuite {
-
-  private val messageContent1 = "test"
-  private val topic1 = "test-topic"
-  private val message1: Message = new Message(messageContent1.getBytes)
-
-  private val messageContent2 = "test1"
-  private val topic2 = "test1$topic"
-  private val message2: Message = new Message(messageContent2.getBytes)
-  val asyncProducerLogger = Logger.getLogger(classOf[AsyncProducer[String]])
-
-  @Test
-  def testProducerQueueSize() {
-    val basicProducer = EasyMock.createMock(classOf[SyncProducer])
-    basicProducer.multiSend(EasyMock.aryEq(Array(new ProducerRequest(topic1, ProducerRequest.RandomPartition,
-      getMessageSetOfSize(List(message1), 10)))))
-    EasyMock.expectLastCall
-    basicProducer.close
-    EasyMock.expectLastCall
-    EasyMock.replay(basicProducer)
-
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", "9092")
-    props.put("queue.size", "10")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val config = new AsyncProducerConfig(props)
-
-    val producer = new AsyncProducer[String](config, basicProducer, new StringSerializer)
-
-    //temporarily set log4j to a higher level to avoid error in the output
-    producer.setLoggerLevel(Level.FATAL)
-
-    try {
-      for(i <- 0 until 11) {
-        producer.send(messageContent1 + "-topic", messageContent1)
-      }
-      Assert.fail("Queue should be full")
-    }
-    catch {
-      case e: QueueFullException => println("Queue is full..")
-    }
-    producer.start
-    producer.close
-    Thread.sleep(2000)
-    EasyMock.verify(basicProducer)
-    producer.setLoggerLevel(Level.ERROR)
-  }
-
-  @Test
-  def testAddAfterQueueClosed() {
-    val basicProducer = EasyMock.createMock(classOf[SyncProducer])
-    basicProducer.multiSend(EasyMock.aryEq(Array(new ProducerRequest(topic1, ProducerRequest.RandomPartition,
-      getMessageSetOfSize(List(message1), 10)))))
-    EasyMock.expectLastCall
-    basicProducer.close
-    EasyMock.expectLastCall
-    EasyMock.replay(basicProducer)
-
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", "9092")
-    props.put("queue.size", "10")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val config = new AsyncProducerConfig(props)
-
-    val producer = new AsyncProducer[String](config, basicProducer, new StringSerializer)
-
-    producer.start
-    for(i <- 0 until 10) {
-      producer.send(messageContent1 + "-topic", messageContent1)
-    }
-    producer.close
-
-    try {
-      producer.send(messageContent1 + "-topic", messageContent1)
-      Assert.fail("Queue should be closed")
-    } catch {
-      case e: QueueClosedException =>
-    }
-    EasyMock.verify(basicProducer)
-  }
-
-  @Test
-  def testBatchSize() {
-    val basicProducer = EasyMock.createStrictMock(classOf[SyncProducer])
-    basicProducer.multiSend(EasyMock.aryEq(Array(new ProducerRequest(topic1, ProducerRequest.RandomPartition,
-      getMessageSetOfSize(List(message1), 5)))))
-    EasyMock.expectLastCall.times(2)
-    basicProducer.multiSend(EasyMock.aryEq(Array(new ProducerRequest(topic1, ProducerRequest.RandomPartition,
-      getMessageSetOfSize(List(message1), 1)))))
-    EasyMock.expectLastCall
-    basicProducer.close
-    EasyMock.expectLastCall
-    EasyMock.replay(basicProducer)
-
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", "9092")
-    props.put("queue.size", "10")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("batch.size", "5")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new AsyncProducerConfig(props)
-
-    val producer = new AsyncProducer[String](config, basicProducer, new StringSerializer)
-
-    producer.start
-    for(i <- 0 until 10) {
-      producer.send(messageContent1 + "-topic", messageContent1)
-    }
-
-    Thread.sleep(100)
-    try {
-      producer.send(messageContent1 + "-topic", messageContent1)
-    } catch {
-      case e: QueueFullException =>
-        Assert.fail("Queue should not be full")
-    }
-
-    producer.close
-    EasyMock.verify(basicProducer)
-  }
-
-  @Test
-  def testQueueTimeExpired() {
-    val basicProducer = EasyMock.createMock(classOf[SyncProducer])
-    basicProducer.multiSend(EasyMock.aryEq(Array(new ProducerRequest(topic1, ProducerRequest.RandomPartition,
-      getMessageSetOfSize(List(message1), 3)))))
-    EasyMock.expectLastCall
-    basicProducer.close
-    EasyMock.expectLastCall
-    EasyMock.replay(basicProducer)
-
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", "9092")
-    props.put("queue.size", "10")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("queue.time", "200")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new AsyncProducerConfig(props)
-
-    val producer = new AsyncProducer[String](config, basicProducer, new StringSerializer)
-    val serializer = new StringSerializer
-
-    producer.start
-    for(i <- 0 until 3) {
-      producer.send(serializer.getTopic(messageContent1), messageContent1, ProducerRequest.RandomPartition)
-    }
-
-    Thread.sleep(300)
-    producer.close
-    EasyMock.verify(basicProducer)
-  }
-
-  @Test
-  def testSenderThreadShutdown() {
-    val syncProducerProps = new Properties()
-    syncProducerProps.put("host", "localhost")
-    syncProducerProps.put("port", "9092")
-    syncProducerProps.put("buffer.size", "1000")
-    syncProducerProps.put("connect.timeout.ms", "1000")
-    syncProducerProps.put("reconnect.interval", "1000")
-    val basicProducer = new MockProducer(new SyncProducerConfig(syncProducerProps))
-
-    val asyncProducerProps = new Properties()
-    asyncProducerProps.put("host", "localhost")
-    asyncProducerProps.put("port", "9092")
-    asyncProducerProps.put("queue.size", "10")
-    asyncProducerProps.put("serializer.class", "kafka.producer.StringSerializer")
-    asyncProducerProps.put("queue.time", "100")
-    asyncProducerProps.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new AsyncProducerConfig(asyncProducerProps)
-    val producer = new AsyncProducer[String](config, basicProducer, new StringSerializer)
-    producer.start
-    producer.send(messageContent1 + "-topic", messageContent1)
-    producer.close
-  }
-
-  @Test
-  def testCollateEvents() {
-    val basicProducer = EasyMock.createMock(classOf[SyncProducer])
-    basicProducer.multiSend(EasyMock.aryEq(Array(new ProducerRequest(topic2, ProducerRequest.RandomPartition,
-                                                                     getMessageSetOfSize(List(message2), 5)),
-                                                 new ProducerRequest(topic1, ProducerRequest.RandomPartition,
-                                                                     getMessageSetOfSize(List(message1), 5)))))
-    EasyMock.expectLastCall
-    basicProducer.close
-    EasyMock.expectLastCall
-    EasyMock.replay(basicProducer)
-
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", "9092")
-    props.put("queue.size", "50")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("batch.size", "10")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new AsyncProducerConfig(props)
-
-    val producer = new AsyncProducer[String](config, basicProducer, new StringSerializer)
-
-    producer.start
-    val serializer = new StringSerializer
-    for(i <- 0 until 5) {
-      producer.send(messageContent1 + "-topic", messageContent1)
-      producer.send(messageContent2 + "$topic", messageContent2, ProducerRequest.RandomPartition)
-    }
-
-    producer.close
-    EasyMock.verify(basicProducer)
-
-  }
-
-  @Test
-  def testCollateAndSerializeEvents() {
-    val basicProducer = EasyMock.createMock(classOf[SyncProducer])
-    basicProducer.multiSend(EasyMock.aryEq(Array(new ProducerRequest(topic2, 1,
-                                                                     getMessageSetOfSize(List(message2), 5)),
-                                                 new ProducerRequest(topic1, 0,
-                                                                     getMessageSetOfSize(List(message1), 5)),
-                                                 new ProducerRequest(topic1, 1,
-                                                                     getMessageSetOfSize(List(message1), 5)),
-                                                 new ProducerRequest(topic2, 0,
-                                                                     getMessageSetOfSize(List(message2), 5)))))
-
-    EasyMock.expectLastCall
-    basicProducer.close
-    EasyMock.expectLastCall
-    EasyMock.replay(basicProducer)
-
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", "9092")
-    props.put("queue.size", "50")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("batch.size", "20")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new AsyncProducerConfig(props)
-
-    val producer = new AsyncProducer[String](config, basicProducer, new StringSerializer)
-
-    producer.start
-    val serializer = new StringSerializer
-    for(i <- 0 until 5) {
-      producer.send(topic2, messageContent2, 0)
-      producer.send(topic2, messageContent2, 1)
-      producer.send(topic1, messageContent1, 0)
-      producer.send(topic1, messageContent1, 1)
-    }
-
-    producer.close
-    EasyMock.verify(basicProducer)
-
-  }
-
-  private def getMessageSetOfSize(messages: List[Message], counts: Int): ByteBufferMessageSet = {
-    var messageList = new Array[Message](counts)
-    for(message <- messages) {
-      for(i <- 0 until counts) {
-        messageList(i) = message
-      }
-    }
-    new ByteBufferMessageSet(NoCompressionCodec, messageList: _*)
-  }
-
-  class StringSerializer extends Encoder[String] {
-    def toMessage(event: String):Message = new Message(event.getBytes)
-    def getTopic(event: String): String = event.concat("-topic")
-  }
-
-  class MockProducer(override val config: SyncProducerConfig) extends SyncProducer(config) {
-    override def send(topic: String, messages: ByteBufferMessageSet): Unit = {
-      Thread.sleep(1000)
-    }
-    override def multiSend(produces: Array[ProducerRequest]) {
-      Thread.sleep(1000)
-    }
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/producer/ProducerMethodsTest.scala b/trunk/core/src/test/scala/unit/kafka/producer/ProducerMethodsTest.scala
deleted file mode 100644
index 908c567..0000000
--- a/trunk/core/src/test/scala/unit/kafka/producer/ProducerMethodsTest.scala
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package unit.kafka.producer
-
-import collection.immutable.SortedSet
-import java.util._
-import junit.framework.Assert._
-import kafka.cluster.Partition
-import kafka.common.NoBrokersForPartitionException
-import kafka.producer._
-import org.easymock.EasyMock
-import org.junit.Test
-import org.scalatest.junit.JUnitSuite
-import scala.collection.immutable.List
-
-class ProducerMethodsTest extends JUnitSuite {
-
-  @Test
-  def producerThrowsNoBrokersException() = {
-    val props = new Properties
-    props.put("broker.list", "placeholder") // Need to fake out having specified one
-    val config = new ProducerConfig(props)
-    val mockPartitioner = EasyMock.createMock(classOf[Partitioner[String]])
-    val mockProducerPool = EasyMock.createMock(classOf[ProducerPool[String]])
-    val mockBrokerPartitionInfo = EasyMock.createMock(classOf[kafka.producer.BrokerPartitionInfo])
-
-    EasyMock.expect(mockBrokerPartitionInfo.getBrokerPartitionInfo("the_topic")).andReturn(SortedSet[Partition]())
-    EasyMock.replay(mockBrokerPartitionInfo)
-
-    val producer = new Producer[String, String](config,mockPartitioner, mockProducerPool,false, mockBrokerPartitionInfo)
-
-    try {
-      val producerData = new ProducerData[String, String]("the_topic", "the_key", List("the_datum"))
-      producer.send(producerData)
-      fail("Should have thrown a NoBrokersForPartitionException.")
-    } catch {
-      case nb: NoBrokersForPartitionException => assertTrue(nb.getMessage.contains("the_key"))
-    }
-
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/producer/ProducerTest.scala b/trunk/core/src/test/scala/unit/kafka/producer/ProducerTest.scala
deleted file mode 100644
index 53e920c..0000000
--- a/trunk/core/src/test/scala/unit/kafka/producer/ProducerTest.scala
+++ /dev/null
@@ -1,702 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.producer
-
-import async.AsyncProducer
-import java.util.Properties
-import org.apache.log4j.{Logger, Level}
-import kafka.server.{KafkaRequestHandlers, KafkaServer, KafkaConfig}
-import kafka.zk.EmbeddedZookeeper
-import org.junit.{After, Before, Test}
-import junit.framework.Assert
-import org.easymock.EasyMock
-import java.util.concurrent.ConcurrentHashMap
-import kafka.cluster.Partition
-import org.scalatest.junit.JUnitSuite
-import kafka.common.{InvalidConfigException, UnavailableProducerException, InvalidPartitionException}
-import kafka.utils.{TestUtils, TestZKUtils, Utils}
-import kafka.serializer.{StringEncoder, Encoder}
-import kafka.consumer.SimpleConsumer
-import kafka.api.FetchRequest
-import kafka.message.{NoCompressionCodec, ByteBufferMessageSet, Message}
-
-class ProducerTest extends JUnitSuite {
-  private val topic = "test-topic"
-  private val brokerId1 = 0
-  private val brokerId2 = 1  
-  private val ports = TestUtils.choosePorts(2)
-  private val (port1, port2) = (ports(0), ports(1))
-  private var server1: KafkaServer = null
-  private var server2: KafkaServer = null
-  private var producer1: SyncProducer = null
-  private var producer2: SyncProducer = null
-  private var consumer1: SimpleConsumer = null
-  private var consumer2: SimpleConsumer = null
-  private var zkServer:EmbeddedZookeeper = null
-  private val requestHandlerLogger = Logger.getLogger(classOf[KafkaRequestHandlers])
-
-  @Before
-  def setUp() {
-    // set up 2 brokers with 4 partitions each
-    zkServer = new EmbeddedZookeeper(TestZKUtils.zookeeperConnect)
-
-    val props1 = TestUtils.createBrokerConfig(brokerId1, port1)
-    val config1 = new KafkaConfig(props1) {
-      override val numPartitions = 4
-    }
-    server1 = TestUtils.createServer(config1)
-
-    val props2 = TestUtils.createBrokerConfig(brokerId2, port2)
-    val config2 = new KafkaConfig(props2) {
-      override val numPartitions = 4
-    }
-    server2 = TestUtils.createServer(config2)
-
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", port1.toString)
-
-    producer1 = new SyncProducer(new SyncProducerConfig(props))
-    producer1.send("test-topic", new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                          messages = new Message("test".getBytes())))
-
-    producer2 = new SyncProducer(new SyncProducerConfig(props) {
-      override val port = port2
-    })
-    producer2.send("test-topic", new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                          messages = new Message("test".getBytes())))
-
-    consumer1 = new SimpleConsumer("localhost", port1, 1000000, 64*1024)
-    consumer2 = new SimpleConsumer("localhost", port2, 100, 64*1024)
-
-    // temporarily set request handler logger to a higher level
-    requestHandlerLogger.setLevel(Level.FATAL)
-
-    Thread.sleep(500)
-  }
-
-  @After
-  def tearDown() {
-    // restore request handler logger to its original level
-    requestHandlerLogger.setLevel(Level.ERROR)
-    server1.shutdown
-    server2.shutdown
-    Utils.rm(server1.config.logDir)
-    Utils.rm(server2.config.logDir)    
-    Thread.sleep(500)
-    zkServer.shutdown
-    Thread.sleep(500)
-  }
-
-  @Test
-  def testSend() {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[SyncProducer])
-    // it should send to partition 0 (first partition) on the second broker, i.e. broker 2
-    syncProducer2.send(topic, 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message("test1".getBytes)))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    syncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-
-    syncProducers.put(brokerId1, syncProducer1)
-    syncProducers.put(brokerId2, syncProducer2)
-
-    val producerPool = new ProducerPool(config, serializer, syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    val producer = new Producer[String, String](config, partitioner, producerPool, false, null)
-
-    producer.send(new ProducerData[String, String](topic, "test", Array("test1")))
-    producer.close
-
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-  }
-
-  @Test
-  def testSendSingleMessage() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("broker.list", "0:localhost:9092")
-
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, kafka.producer.SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[kafka.producer.SyncProducer])
-    // it should send to a random partition due to use of broker.list
-    syncProducer1.send(topic, -1, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message("t".getBytes())))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-
-    syncProducers.put(brokerId1, syncProducer1)
-
-    val producerPool = new ProducerPool[String](config, serializer, syncProducers,
-      new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    val producer = new Producer[String, String](config, partitioner, producerPool, false, null)
-
-    producer.send(new ProducerData[String, String](topic, "t"))
-    producer.close
-
-    EasyMock.verify(syncProducer1)
-  }
-
-  @Test
-  def testInvalidPartition() {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val config = new ProducerConfig(props)
-
-    val richProducer = new Producer[String, String](config)
-    try {
-      richProducer.send(new ProducerData[String, String](topic, "test", Array("test")))
-      Assert.fail("Should fail with InvalidPartitionException")
-    }catch {
-      case e: InvalidPartitionException => // expected, do nothing
-    }finally {
-      richProducer.close()
-    }
-  }
-
-  @Test
-  def testDefaultEncoder() {
-    val props = new Properties()
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val config = new ProducerConfig(props)
-
-    val stringProducer1 = new Producer[String, String](config)
-    try {
-      stringProducer1.send(new ProducerData[String, String](topic, "test", Array("test")))
-      fail("Should fail with ClassCastException due to incompatible Encoder")
-    } catch {
-      case e: ClassCastException =>
-    }finally {
-      stringProducer1.close()
-    }
-
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    val stringProducer2 = new Producer[String, String](new ProducerConfig(props))
-    stringProducer2.send(new ProducerData[String, String](topic, "test", Array("test")))
-    stringProducer2.close()
-
-    val messageProducer1 = new Producer[String, Message](config)
-    try {
-      messageProducer1.send(new ProducerData[String, Message](topic, "test", Array(new Message("test".getBytes))))
-    } catch {
-      case e: ClassCastException => fail("Should not fail with ClassCastException due to default Encoder")
-    }finally {
-      messageProducer1.close()
-    }
-  }
-
-  @Test
-  def testSyncProducerPool() {
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[SyncProducer])
-    syncProducer1.send("test-topic", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message("test1".getBytes)))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    syncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-
-    syncProducers.put(brokerId1, syncProducer1)
-    syncProducers.put(brokerId2, syncProducer2)
-
-    // default for producer.type is "sync"
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val producerPool = new ProducerPool[String](new ProducerConfig(props), new StringSerializer,
-      syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    producerPool.send(producerPool.getProducerPoolData("test-topic", new Partition(brokerId1, 0), Array("test1")))
-
-    producerPool.close
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-  }
-
-  @Test
-  def testAsyncProducerPool() {
-    // 2 async producers
-    val asyncProducers = new ConcurrentHashMap[Int, AsyncProducer[String]]()
-    val asyncProducer1 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    val asyncProducer2 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    asyncProducer1.send(topic, "test1", 0)
-    EasyMock.expectLastCall
-    asyncProducer1.close
-    EasyMock.expectLastCall
-    asyncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(asyncProducer1)
-    EasyMock.replay(asyncProducer2)
-
-    asyncProducers.put(brokerId1, asyncProducer1)
-    asyncProducers.put(brokerId2, asyncProducer2)
-
-    // change producer.type to "async"
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val producerPool = new ProducerPool[String](new ProducerConfig(props), new StringSerializer,
-      new ConcurrentHashMap[Int, SyncProducer](), asyncProducers)
-    producerPool.send(producerPool.getProducerPoolData(topic, new Partition(brokerId1, 0), Array("test1")))
-
-    producerPool.close
-    EasyMock.verify(asyncProducer1)
-    EasyMock.verify(asyncProducer2)
-  }
-
-  @Test
-  def testSyncUnavailableProducerException() {
-    val syncProducers = new ConcurrentHashMap[Int, SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[SyncProducer])
-    syncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-
-    syncProducers.put(brokerId2, syncProducer2)
-
-    // default for producer.type is "sync"
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val producerPool = new ProducerPool[String](new ProducerConfig(props), new StringSerializer,
-      syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    try {
-      producerPool.send(producerPool.getProducerPoolData("test-topic", new Partition(brokerId1, 0), Array("test1")))
-      Assert.fail("Should fail with UnavailableProducerException")
-    }catch {
-      case e: UnavailableProducerException => // expected
-    }
-
-    producerPool.close
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-  }
-
-  @Test
-  def testAsyncUnavailableProducerException() {
-    val asyncProducers = new ConcurrentHashMap[Int, AsyncProducer[String]]()
-    val asyncProducer1 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    val asyncProducer2 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    asyncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(asyncProducer1)
-    EasyMock.replay(asyncProducer2)
-
-    asyncProducers.put(brokerId2, asyncProducer2)
-
-    // change producer.type to "async"
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.NegativePartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    val producerPool = new ProducerPool[String](new ProducerConfig(props), new StringSerializer,
-      new ConcurrentHashMap[Int, SyncProducer](), asyncProducers)
-    try {
-      producerPool.send(producerPool.getProducerPoolData(topic, new Partition(brokerId1, 0), Array("test1")))
-      Assert.fail("Should fail with UnavailableProducerException")
-    }catch {
-      case e: UnavailableProducerException => // expected
-    }
-
-    producerPool.close
-    EasyMock.verify(asyncProducer1)
-    EasyMock.verify(asyncProducer2)
-  }
-
-  @Test
-  def testConfigBrokerPartitionInfoWithPartitioner {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("broker.list", brokerId1 + ":" + "localhost" + ":" + port1 + ":" + 4 + "," +
-                                       brokerId2 + ":" + "localhost" + ":" + port2 + ":" + 4)
-
-    var config: ProducerConfig = null
-    try {
-      config = new ProducerConfig(props)
-      fail("should fail with InvalidConfigException due to presence of partitioner.class and broker.list")
-    }catch {
-      case e: InvalidConfigException => // expected
-    }
-  }
-
-  @Test
-  def testConfigBrokerPartitionInfo() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("broker.list", brokerId1 + ":" + "localhost" + ":" + port1)
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 async producers
-    val asyncProducers = new ConcurrentHashMap[Int, AsyncProducer[String]]()
-    val asyncProducer1 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    // it should send to a random partition due to use of broker.list
-    asyncProducer1.send(topic, "test1", -1)
-    EasyMock.expectLastCall
-    asyncProducer1.close
-    EasyMock.expectLastCall
-    EasyMock.replay(asyncProducer1)
-
-    asyncProducers.put(brokerId1, asyncProducer1)
-
-    val producerPool = new ProducerPool(config, serializer, new ConcurrentHashMap[Int, SyncProducer](), asyncProducers)
-    val producer = new Producer[String, String](config, partitioner, producerPool, false, null)
-
-    producer.send(new ProducerData[String, String](topic, "test1", Array("test1")))
-    producer.close
-
-    EasyMock.verify(asyncProducer1)
-  }
-
-  @Test
-  def testZKSendToNewTopic() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val serializer = new StringEncoder
-
-    val producer = new Producer[String, String](config)
-    try {
-      // Available broker id, partition id at this stage should be (0,0), (1,0)
-      // this should send the message to broker 0 on partition 0
-      producer.send(new ProducerData[String, String]("new-topic", "test", Array("test1")))
-      Thread.sleep(100)
-      // Available broker id, partition id at this stage should be (0,0), (0,1), (0,2), (0,3), (1,0)
-      // Since 4 % 5 = 4, this should send the message to broker 1 on partition 0
-      producer.send(new ProducerData[String, String]("new-topic", "test", Array("test1")))
-      Thread.sleep(100)
-      // cross check if brokers got the messages
-      val messageSet1 = consumer1.fetch(new FetchRequest("new-topic", 0, 0, 10000)).iterator
-      Assert.assertTrue("Message set should have 1 message", messageSet1.hasNext)
-      Assert.assertEquals(new Message("test1".getBytes), messageSet1.next.message)
-      val messageSet2 = consumer2.fetch(new FetchRequest("new-topic", 0, 0, 10000)).iterator
-      Assert.assertTrue("Message set should have 1 message", messageSet2.hasNext)
-      Assert.assertEquals(new Message("test1".getBytes), messageSet2.next.message)
-    } catch {
-      case e: Exception => fail("Not expected", e)
-    }finally {
-      producer.close
-    }
-  }
-
-  @Test
-  def testZKSendWithDeadBroker() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val serializer = new StringEncoder
-
-    val producer = new Producer[String, String](config)
-    try {
-      // Available broker id, partition id at this stage should be (0,0), (1,0)
-      // Hence, this should send the message to broker 0 on partition 0
-      producer.send(new ProducerData[String, String]("new-topic", "test", Array("test1")))
-      Thread.sleep(100)
-      // kill 2nd broker
-      server2.shutdown
-      Thread.sleep(100)
-      // Available broker id, partition id at this stage should be (0,0), (0,1), (0,2), (0,3), (1,0)
-      // Since 4 % 5 = 4, in a normal case, it would send to broker 1 on partition 0. But since broker 1 is down,
-      // 4 % 4 = 0, so it should send the message to broker 0 on partition 0
-      producer.send(new ProducerData[String, String]("new-topic", "test", Array("test1")))
-      Thread.sleep(100)
-      // cross check if brokers got the messages
-      val messageSet1 = consumer1.fetch(new FetchRequest("new-topic", 0, 0, 10000)).iterator
-      Assert.assertTrue("Message set should have 1 message", messageSet1.hasNext)
-      Assert.assertEquals(new Message("test1".getBytes), messageSet1.next.message)
-      Assert.assertTrue("Message set should have another message", messageSet1.hasNext)
-      Assert.assertEquals(new Message("test1".getBytes), messageSet1.next.message)
-    } catch {
-      case e: Exception => fail("Not expected")
-    }finally {
-      producer.close
-    }
-  }
-
-  @Test
-  def testZKSendToExistingTopicWithNoBrokers() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.serializer.StringEncoder")
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val serializer = new StringEncoder
-
-    val producer = new Producer[String, String](config)
-    var server: KafkaServer = null
-
-    try {
-      // shutdown server1
-      server1.shutdown
-      Thread.sleep(100)
-      // Available broker id, partition id at this stage should be (1,0)
-      // this should send the message to broker 1 on partition 0
-      producer.send(new ProducerData[String, String]("new-topic", "test", Array("test")))
-      Thread.sleep(100)
-      // cross check if brokers got the messages
-      val messageSet1 = consumer2.fetch(new FetchRequest("new-topic", 0, 0, 10000)).iterator
-      Assert.assertTrue("Message set should have 1 message", messageSet1.hasNext)
-      Assert.assertEquals(new Message("test".getBytes), messageSet1.next.message)
-
-      // shutdown server2
-      server2.shutdown
-      Thread.sleep(100)
-      // delete the new-topic logs
-      Utils.rm(server2.config.logDir)
-      Thread.sleep(100)
-      // start it up again. So broker 2 exists under /broker/ids, but nothing exists under /broker/topics/new-topic
-      val props2 = TestUtils.createBrokerConfig(brokerId1, port1)
-      val config2 = new KafkaConfig(props2) {
-        override val numPartitions = 4
-      }
-      server = TestUtils.createServer(config2)
-      Thread.sleep(100)
-
-      // now there are no brokers registered under new-topic.
-      producer.send(new ProducerData[String, String]("new-topic", "test", Array("test")))
-      Thread.sleep(100)
-
-      // cross check if brokers got the messages
-      val messageSet2 = consumer1.fetch(new FetchRequest("new-topic", 0, 0, 10000)).iterator
-      Assert.assertTrue("Message set should have 1 message", messageSet2.hasNext)
-      Assert.assertEquals(new Message("test".getBytes), messageSet2.next.message)
-
-    } catch {
-      case e: Exception => fail("Not expected", e)
-    }finally {
-      server.shutdown
-      producer.close
-    }
-  }
-
-  @Test
-  def testPartitionedSendToNewTopic() {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[SyncProducer])
-    syncProducer1.send("test-topic1", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                                  messages = new Message("test1".getBytes)))
-    EasyMock.expectLastCall
-    syncProducer1.send("test-topic1", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                                  messages = new Message("test1".getBytes)))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    syncProducer2.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-
-    syncProducers.put(brokerId1, syncProducer1)
-    syncProducers.put(brokerId2, syncProducer2)
-
-    val producerPool = new ProducerPool(config, serializer, syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    val producer = new Producer[String, String](config, partitioner, producerPool, false, null)
-
-    producer.send(new ProducerData[String, String]("test-topic1", "test", Array("test1")))
-    Thread.sleep(100)
-
-    // now send again to this topic using a real producer, this time all brokers would have registered
-    // their partitions in zookeeper
-    producer1.send("test-topic1", new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                           messages = new Message("test".getBytes())))
-    Thread.sleep(100)
-
-    // wait for zookeeper to register the new topic
-    producer.send(new ProducerData[String, String]("test-topic1", "test1", Array("test1")))
-    Thread.sleep(100)
-    producer.close
-
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-  }
-
-  @Test
-  def testPartitionedSendToNewBrokerInExistingTopic() {
-    val props = new Properties()
-    props.put("partitioner.class", "kafka.producer.StaticPartitioner")
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-
-    val config = new ProducerConfig(props)
-    val partitioner = new StaticPartitioner
-    val serializer = new StringSerializer
-
-    // 2 sync producers
-    val syncProducers = new ConcurrentHashMap[Int, SyncProducer]()
-    val syncProducer1 = EasyMock.createMock(classOf[SyncProducer])
-    val syncProducer2 = EasyMock.createMock(classOf[SyncProducer])
-    val syncProducer3 = EasyMock.createMock(classOf[SyncProducer])
-    syncProducer3.send("test-topic", 2, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                                 messages = new Message("test1".getBytes)))
-    EasyMock.expectLastCall
-    syncProducer1.close
-    EasyMock.expectLastCall
-    syncProducer2.close
-    EasyMock.expectLastCall
-    syncProducer3.close
-    EasyMock.expectLastCall
-    EasyMock.replay(syncProducer1)
-    EasyMock.replay(syncProducer2)
-    EasyMock.replay(syncProducer3)
-
-    syncProducers.put(brokerId1, syncProducer1)
-    syncProducers.put(brokerId2, syncProducer2)
-    syncProducers.put(2, syncProducer3)
-
-    val producerPool = new ProducerPool(config, serializer, syncProducers, new ConcurrentHashMap[Int, AsyncProducer[String]]())
-    val producer = new Producer[String, String](config, partitioner, producerPool, false, null)
-
-    val port = TestUtils.choosePort
-    val serverProps = TestUtils.createBrokerConfig(2, port)
-    val serverConfig = new KafkaConfig(serverProps) {
-      override val numPartitions = 4
-    }
-
-    val server3 = TestUtils.createServer(serverConfig)
-    Thread.sleep(500)
-    // send a message to the new broker to register it under topic "test-topic"
-    val tempProps = new Properties()
-    tempProps.put("host", "localhost")
-    tempProps.put("port", port.toString)
-    val tempProducer = new SyncProducer(new SyncProducerConfig(tempProps))
-    tempProducer.send("test-topic", new ByteBufferMessageSet(compressionCodec = NoCompressionCodec,
-                                                             messages = new Message("test".getBytes())))
-
-    Thread.sleep(500)
-    producer.send(new ProducerData[String, String]("test-topic", "test-topic", Array("test1")))
-    producer.close
-
-    EasyMock.verify(syncProducer1)
-    EasyMock.verify(syncProducer2)
-    EasyMock.verify(syncProducer3)
-
-    server3.shutdown
-    Utils.rm(server3.config.logDir)
-  }
-
-  @Test
-  def testDefaultPartitioner() {
-    val props = new Properties()
-    props.put("serializer.class", "kafka.producer.StringSerializer")
-    props.put("producer.type", "async")
-    props.put("broker.list", brokerId1 + ":" + "localhost" + ":" + port1)
-    val config = new ProducerConfig(props)
-    val partitioner = new DefaultPartitioner[String]
-    val serializer = new StringSerializer
-
-    // 2 async producers
-    val asyncProducers = new ConcurrentHashMap[Int, AsyncProducer[String]]()
-    val asyncProducer1 = EasyMock.createMock(classOf[AsyncProducer[String]])
-    // it should send to a random partition due to use of broker.list
-    asyncProducer1.send(topic, "test1", -1)
-    EasyMock.expectLastCall
-    asyncProducer1.close
-    EasyMock.expectLastCall
-    EasyMock.replay(asyncProducer1)
-
-    asyncProducers.put(brokerId1, asyncProducer1)
-
-    val producerPool = new ProducerPool(config, serializer, new ConcurrentHashMap[Int, SyncProducer](), asyncProducers)
-    val producer = new Producer[String, String](config, partitioner, producerPool, false, null)
-
-    producer.send(new ProducerData[String, String](topic, "test", Array("test1")))
-    producer.close
-
-    EasyMock.verify(asyncProducer1)
-  }
-}
-
-class StringSerializer extends Encoder[String] {
-  def toEvent(message: Message):String = message.toString
-  def toMessage(event: String):Message = new Message(event.getBytes)
-  def getTopic(event: String): String = event.concat("-topic")
-}
-
-class NegativePartitioner extends Partitioner[String] {
-  def partition(data: String, numPartitions: Int): Int = {
-    -1
-  }
-}
-
-class StaticPartitioner extends Partitioner[String] {
-  def partition(data: String, numPartitions: Int): Int = {
-    (data.length % numPartitions)
-  }
-}
-
-class HashPartitioner extends Partitioner[String] {
-  def partition(data: String, numPartitions: Int): Int = {
-    (data.hashCode % numPartitions)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala b/trunk/core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala
deleted file mode 100644
index 8d65bb6..0000000
--- a/trunk/core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala
+++ /dev/null
@@ -1,134 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.producer
-
-import junit.framework.Assert
-import kafka.utils.SystemTime
-import kafka.utils.TestUtils
-import kafka.server.{KafkaServer, KafkaConfig}
-import org.apache.log4j.Logger
-import org.scalatest.junit.JUnitSuite
-import org.junit.{After, Before, Test}
-import kafka.common.MessageSizeTooLargeException
-import java.util.Properties
-import kafka.api.ProducerRequest
-import kafka.message.{NoCompressionCodec, DefaultCompressionCodec, Message, ByteBufferMessageSet}
-
-class SyncProducerTest extends JUnitSuite {
-  private var messageBytes =  new Array[Byte](2);
-  private var server: KafkaServer = null
-  val simpleProducerLogger = Logger.getLogger(classOf[SyncProducer])
-
-  @Before
-  def setUp() {
-    server = TestUtils.createServer(new KafkaConfig(TestUtils.createBrokerConfig(0, TestUtils.choosePort))
-    {
-      override val enableZookeeper = false
-    })
-  }
-
-  @After
-  def tearDown() {
-    server.shutdown
-  }
-
-  @Test
-  def testReachableServer() {
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", server.socketServer.port.toString)
-    props.put("buffer.size", "102400")
-    props.put("connect.timeout.ms", "500")
-    props.put("reconnect.interval", "1000")
-    val producer = new SyncProducer(new SyncProducerConfig(props))
-    var failed = false
-    val firstStart = SystemTime.milliseconds
-    try {
-      producer.send("test", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message(messageBytes)))
-    }catch {
-      case e: Exception => failed=true
-    }
-    Assert.assertFalse(failed)
-    failed = false
-    val firstEnd = SystemTime.milliseconds
-    Assert.assertTrue((firstEnd-firstStart) < 500)
-    val secondStart = SystemTime.milliseconds
-    try {
-      producer.send("test", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message(messageBytes)))
-    }catch {
-      case e: Exception => failed = true
-    }
-    Assert.assertFalse(failed)
-    val secondEnd = SystemTime.milliseconds
-    Assert.assertTrue((secondEnd-secondStart) < 500)
-
-    try {
-      producer.multiSend(Array(new ProducerRequest("test", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message(messageBytes)))))
-    }catch {
-      case e: Exception => failed=true
-    }
-    Assert.assertFalse(failed)
-  }
-
-  @Test
-  def testSingleMessageSizeTooLarge() {
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", server.socketServer.port.toString)
-    props.put("buffer.size", "102400")
-    props.put("connect.timeout.ms", "300")
-    props.put("reconnect.interval", "500")
-    props.put("max.message.size", "100")
-    val producer = new SyncProducer(new SyncProducerConfig(props))
-    val bytes = new Array[Byte](101)
-    var failed = false
-    try {
-      producer.send("test", 0, new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message(bytes)))
-    }catch {
-      case e: MessageSizeTooLargeException => failed = true
-    }
-    Assert.assertTrue(failed)
-  }
-
-  @Test
-  def testCompressedMessageSizeTooLarge() {
-    val props = new Properties()
-    props.put("host", "localhost")
-    props.put("port", server.socketServer.port.toString)
-    props.put("buffer.size", "102400")
-    props.put("connect.timeout.ms", "300")
-    props.put("reconnect.interval", "500")
-    props.put("max.message.size", "100")
-    val producer = new SyncProducer(new SyncProducerConfig(props))
-    val messages = new Array[Message](10)
-    import Array.fill
-    var a = 0
-    for( a <- 0 to  9){
-      val bytes = fill(20){a.asInstanceOf[Byte]}
-      messages(a) = new Message(bytes)
-    }
-    var failed = false
-    /** After compression, the compressed message has size 118 **/
-    try {
-      producer.send("test", 0, new ByteBufferMessageSet(compressionCodec = DefaultCompressionCodec, messages = messages: _*))
-    }catch {
-      case e: MessageSizeTooLargeException => failed = true
-    }
-    Assert.assertTrue(failed)
-  }
-}
\ No newline at end of file
diff --git a/trunk/core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala b/trunk/core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala
deleted file mode 100644
index 5b0aefe..0000000
--- a/trunk/core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala
+++ /dev/null
@@ -1,122 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.server
-
-import kafka.utils.TestUtils
-import java.io.File
-import kafka.utils.Utils
-import kafka.api.FetchRequest
-import kafka.producer.{SyncProducer, SyncProducerConfig}
-import kafka.consumer.SimpleConsumer
-import java.util.Properties
-import org.scalatest.junit.JUnitSuite
-import org.junit.Test
-import junit.framework.Assert._
-import kafka.message.{NoCompressionCodec, Message, ByteBufferMessageSet}
-
-class ServerShutdownTest extends JUnitSuite {
-  val port = TestUtils.choosePort
-
-  @Test
-  def testCleanShutdown() {
-    val props = TestUtils.createBrokerConfig(0, port)
-    val config = new KafkaConfig(props) {
-      override val enableZookeeper = false
-    }
-
-    val host = "localhost"
-    val topic = "test"
-    val sent1 = new ByteBufferMessageSet(NoCompressionCodec, new Message("hello".getBytes()), new Message("there".getBytes()))
-    val sent2 = new ByteBufferMessageSet(NoCompressionCodec, new Message("more".getBytes()), new Message("messages".getBytes()))
-
-    {
-      val producer = new SyncProducer(getProducerConfig(host,
-                                                        port,
-                                                        64*1024,
-                                                        100000,
-                                                        10000))
-      val consumer = new SimpleConsumer(host,
-                                        port,
-                                        1000000,
-                                        64*1024)
-
-      val server = new KafkaServer(config)
-      server.startup()
-
-      // send some messages
-      producer.send(topic, sent1)
-      sent1.getBuffer.rewind
-
-      Thread.sleep(200)
-      // do a clean shutdown
-      server.shutdown()
-      val cleanShutDownFile = new File(new File(config.logDir), server.CLEAN_SHUTDOWN_FILE)
-      assertTrue(cleanShutDownFile.exists)
-      producer.close()
-    }
-
-
-    {
-      val producer = new SyncProducer(getProducerConfig(host,
-                                                        port,
-                                                        64*1024,
-                                                        100000,
-                                                        10000))
-      val consumer = new SimpleConsumer(host,
-                                        port,
-                                        1000000,
-                                        64*1024)
-
-      val server = new KafkaServer(config)
-      server.startup()
-
-      // bring the server back again and read the messages
-      var fetched: ByteBufferMessageSet = null
-      while(fetched == null || fetched.validBytes == 0)
-        fetched = consumer.fetch(new FetchRequest(topic, 0, 0, 10000))
-      TestUtils.checkEquals(sent1.iterator, fetched.iterator)
-      val newOffset = fetched.validBytes
-
-      // send some more messages
-      producer.send(topic, sent2)
-      sent2.getBuffer.rewind
-
-      Thread.sleep(200)
-
-      fetched = null
-      while(fetched == null || fetched.validBytes == 0)
-        fetched = consumer.fetch(new FetchRequest(topic, 0, newOffset, 10000))
-      TestUtils.checkEquals(sent2.map(m => m.message).iterator, fetched.map(m => m.message).iterator)
-
-      server.shutdown()
-      Utils.rm(server.config.logDir)
-      producer.close()
-    }
-
-  }
-
-  private def getProducerConfig(host: String, port: Int, bufferSize: Int, connectTimeout: Int,
-                                reconnectInterval: Int): SyncProducerConfig = {
-    val props = new Properties()
-    props.put("host", host)
-    props.put("port", port.toString)
-    props.put("buffer.size", bufferSize.toString)
-    props.put("connect.timeout.ms", connectTimeout.toString)
-    props.put("reconnect.interval", reconnectInterval.toString)
-    new SyncProducerConfig(props)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/utils/TestUtils.scala b/trunk/core/src/test/scala/unit/kafka/utils/TestUtils.scala
deleted file mode 100644
index 25f6b49..0000000
--- a/trunk/core/src/test/scala/unit/kafka/utils/TestUtils.scala
+++ /dev/null
@@ -1,308 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import java.io._
-import java.net._
-import java.nio._
-import java.nio.channels._
-import java.util.Random
-import java.util.Properties
-import junit.framework.Assert._
-import kafka.server._
-import kafka.producer._
-import kafka.message._
-import org.I0Itec.zkclient.ZkClient
-import kafka.consumer.ConsumerConfig
-
-/**
- * Utility functions to help with testing
- */
-object TestUtils {
-  
-  val Letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
-  val Digits = "0123456789"
-  val LettersAndDigits = Letters + Digits
-  
-  /* A consistent random number generator to make tests repeatable */
-  val seededRandom = new Random(192348092834L)
-  val random = new Random()
-  
-  /**
-   * Choose a number of random available ports
-   */
-  def choosePorts(count: Int): List[Int] = {
-    val sockets = 
-      for(i <- 0 until count)
-        yield new ServerSocket(0)
-    val socketList = sockets.toList
-    val ports = socketList.map(_.getLocalPort)
-    socketList.map(_.close)
-    ports
-  }
-  
-  /**
-   * Choose an available port
-   */
-  def choosePort(): Int = choosePorts(1).head
-  
-  /**
-   * Create a temporary directory
-   */
-  def tempDir(): File = {
-    val ioDir = System.getProperty("java.io.tmpdir")
-    val f = new File(ioDir, "kafka-" + random.nextInt(1000000))
-    f.mkdirs()
-    f.deleteOnExit()
-    f
-  }
-  
-  /**
-   * Create a temporary file
-   */
-  def tempFile(): File = {
-    val f = File.createTempFile("kafka", ".tmp")
-    f.deleteOnExit()
-    f
-  }
-  
-  /**
-   * Create a temporary file and return an open file channel for this file
-   */
-  def tempChannel(): FileChannel = new RandomAccessFile(tempFile(), "rw").getChannel()
-  
-  /**
-   * Create a kafka server instance with appropriate test settings
-   * USING THIS IS A SIGN YOU ARE NOT WRITING A REAL UNIT TEST
-   * @param config The configuration of the server
-   */
-  def createServer(config: KafkaConfig): KafkaServer = {
-    val server = new KafkaServer(config)
-    server.startup()
-    server
-  }
-  
-  /**
-   * Create a test config for the given node id
-   */
-  def createBrokerConfigs(numConfigs: Int): List[Properties] = {
-    for((port, node) <- choosePorts(numConfigs).zipWithIndex)
-      yield createBrokerConfig(node, port)
-  }
-  
-  /**
-   * Create a test config for the given node id
-   */
-  def createBrokerConfig(nodeId: Int, port: Int): Properties = {
-    val props = new Properties
-    props.put("brokerid", nodeId.toString)
-    props.put("port", port.toString)
-    props.put("log.dir", TestUtils.tempDir().getAbsolutePath)
-    props.put("log.flush.interval", "1")
-    props.put("zk.connect", TestZKUtils.zookeeperConnect)
-    props
-  }
-  
-  /**
-   * Create a test config for a consumer
-   */
-  def createConsumerProperties(zkConnect: String, groupId: String, consumerId: String,
-                               consumerTimeout: Long = -1): Properties = {
-    val props = new Properties
-    props.put("zk.connect", zkConnect)
-    props.put("groupid", groupId)
-    props.put("consumerid", consumerId)
-    props.put("consumer.timeout.ms", consumerTimeout.toString)
-    props.put("zk.sessiontimeout.ms", "400")
-    props.put("zk.synctime.ms", "200")
-    props.put("autocommit.interval.ms", "1000")
-
-    props
-  }
-
-  /**
-   * Wrap the message in a message set
-   * @param payload The bytes of the message
-   */
-  def singleMessageSet(payload: Array[Byte]) = 
-    new ByteBufferMessageSet(compressionCodec = NoCompressionCodec, messages = new Message(payload))
-  
-  /**
-   * Generate an array of random bytes
-   * @param numBytes The size of the array
-   */
-  def randomBytes(numBytes: Int): Array[Byte] = {
-    val bytes = new Array[Byte](numBytes)
-    seededRandom.nextBytes(bytes)
-    bytes
-  }
-  
-  /**
-   * Generate a random string of letters and digits of the given length
-   * @param len The length of the string
-   * @return The random string
-   */
-  def randomString(len: Int): String = {
-    val b = new StringBuilder()
-    for(i <- 0 until len)
-      b.append(LettersAndDigits.charAt(seededRandom.nextInt(LettersAndDigits.length)))
-    b.toString
-  }
-
-  /**
-   * Check that the buffer content from buffer.position() to buffer.limit() is equal
-   */
-  def checkEquals(b1: ByteBuffer, b2: ByteBuffer) {
-    assertEquals("Buffers should have equal length", b1.limit - b1.position, b2.limit - b2.position)
-    for(i <- 0 until b1.limit - b1.position)
-      assertEquals("byte " + i + " byte not equal.", b1.get(b1.position + i), b2.get(b1.position + i))
-  }
-  
-  /**
-   * Throw an exception if the two iterators are of differing lengths or contain
-   * different messages on their Nth element
-   */
-  def checkEquals[T](expected: Iterator[T], actual: Iterator[T]) {
-    var length = 0
-    while(expected.hasNext && actual.hasNext) {
-      length += 1
-      assertEquals(expected.next, actual.next)
-    }
-    
-    if (expected.hasNext)
-    {
-     var length1 = length;
-     while (expected.hasNext)
-     {
-       expected.next
-       length1 += 1
-     }
-     assertFalse("Iterators have uneven length-- first has more: "+length1 + " > " + length, true);
-    }
-    
-    if (actual.hasNext)
-    {
-     var length2 = length;
-     while (actual.hasNext)
-     {
-       actual.next
-       length2 += 1
-     }
-     assertFalse("Iterators have uneven length-- second has more: "+length2 + " > " + length, true);
-    }
-  }
-
-  /**
-   *  Throw an exception if an iterable has different length than expected
-   *  
-   */
-  def checkLength[T](s1: Iterator[T], expectedLength:Int) {
-    var n = 0
-    while (s1.hasNext) {
-      n+=1
-      s1.next
-    }
-    assertEquals(expectedLength, n)
-  }
-
-  /**
-   * Throw an exception if the two iterators are of differing lengths or contain
-   * different messages on their Nth element
-   */
-  def checkEquals[T](s1: java.util.Iterator[T], s2: java.util.Iterator[T]) {
-    while(s1.hasNext && s2.hasNext)
-      assertEquals(s1.next, s2.next)
-    assertFalse("Iterators have uneven length--first has more", s1.hasNext)
-    assertFalse("Iterators have uneven length--second has more", s2.hasNext)
-  }
-
-  def stackedIterator[T](s: Iterator[T]*): Iterator[T] = {
-    new Iterator[T] {
-      var cur: Iterator[T] = null
-      val topIterator = s.iterator
-
-      def hasNext() : Boolean = {
-        while (true) {
-          if (cur == null) {
-            if (topIterator.hasNext)
-              cur = topIterator.next
-            else
-              return false
-          }
-          if (cur.hasNext)
-            return true
-          cur = null
-        }
-        // should never reach here
-        throw new RuntimeException("should not reach here")
-      }
-
-      def next() : T = cur.next
-    }
-  }
-
-  /**
-   * Create a hexadecimal string for the given bytes
-   */
-  def hexString(bytes: Array[Byte]): String = hexString(ByteBuffer.wrap(bytes))
-  
-  /**
-   * Create a hexadecimal string for the given bytes
-   */
-  def hexString(buffer: ByteBuffer): String = {
-    val builder = new StringBuilder("0x")
-    for(i <- 0 until buffer.limit)
-      builder.append(String.format("%x", Integer.valueOf(buffer.get(buffer.position + i))))
-    builder.toString
-  }
-  
-  /**
-   * Create a producer for the given host and port
-   */
-  def createProducer(host: String, port: Int): SyncProducer = {
-    val props = new Properties()
-    props.put("host", host)
-    props.put("port", port.toString)
-    props.put("buffer.size", "65536")
-    props.put("connect.timeout.ms", "100000")
-    props.put("reconnect.interval", "10000")
-    return new SyncProducer(new SyncProducerConfig(props))
-  }
-
-  def updateConsumerOffset(config : ConsumerConfig, path : String, offset : Long) = {
-    val zkClient = new ZkClient(config.zkConnect, config.zkSessionTimeoutMs, config.zkConnectionTimeoutMs, ZKStringSerializer)
-    ZkUtils.updatePersistentPath(zkClient, path, offset.toString)
-
-  }
-
-  def getMessageIterator(iter: Iterator[MessageAndOffset]): Iterator[Message] = {
-    new IteratorTemplate[Message] {
-      override def makeNext(): Message = {
-        if (iter.hasNext)
-          return iter.next.message
-        else
-          return allDone()
-      }
-    }
-  }
-
-}
-
-object TestZKUtils {
-  val zookeeperConnect = "127.0.0.1:2182"  
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/utils/UtilsTest.scala b/trunk/core/src/test/scala/unit/kafka/utils/UtilsTest.scala
deleted file mode 100644
index 771432e..0000000
--- a/trunk/core/src/test/scala/unit/kafka/utils/UtilsTest.scala
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.utils
-
-import org.apache.log4j.Logger
-import org.scalatest.junit.JUnitSuite
-import org.junit.Test
-import org.junit.Assert._
-
-
-class UtilsTest extends JUnitSuite {
-  
-  private val logger = Logger.getLogger(classOf[UtilsTest]) 
-
-  @Test
-  def testSwallow() {
-    Utils.swallow(logger.info, throw new IllegalStateException("test"))
-  }
-
-  @Test
-  def testCircularIterator() {
-    val l = List(1, 2)
-    val itl = Utils.circularIterator(l)
-    assertEquals(1, itl.next())
-    assertEquals(2, itl.next())
-    assertEquals(1, itl.next())
-    assertEquals(2, itl.next())
-    assertFalse(itl.hasDefiniteSize)
-
-    val s = Set(1, 2)
-    val its = Utils.circularIterator(s)
-    assertEquals(1, its.next())
-    assertEquals(2, its.next())
-    assertEquals(1, its.next())
-    assertEquals(2, its.next())
-    assertEquals(1, its.next())
-  }
-
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/zk/EmbeddedZookeeper.scala b/trunk/core/src/test/scala/unit/kafka/zk/EmbeddedZookeeper.scala
deleted file mode 100644
index 44eb492..0000000
--- a/trunk/core/src/test/scala/unit/kafka/zk/EmbeddedZookeeper.scala
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.zk
-
-import org.apache.zookeeper.server.ZooKeeperServer
-import org.apache.zookeeper.server.NIOServerCnxn
-import kafka.utils.TestUtils
-import java.net.InetSocketAddress
-import kafka.utils.Utils
-
-class EmbeddedZookeeper(val connectString: String) {
-  val snapshotDir = TestUtils.tempDir()
-  val logDir = TestUtils.tempDir()
-  val zookeeper = new ZooKeeperServer(snapshotDir, logDir, 3000)
-  val port = connectString.split(":")(1).toInt
-  val factory = new NIOServerCnxn.Factory(new InetSocketAddress("127.0.0.1", port))
-  factory.startup(zookeeper)
-
-  def shutdown() {
-    factory.shutdown()
-    Utils.rm(logDir)
-    Utils.rm(snapshotDir)
-  }
-  
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/zk/ZKEphemeralTest.scala b/trunk/core/src/test/scala/unit/kafka/zk/ZKEphemeralTest.scala
deleted file mode 100644
index cdc250b..0000000
--- a/trunk/core/src/test/scala/unit/kafka/zk/ZKEphemeralTest.scala
+++ /dev/null
@@ -1,57 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.zk
-
-import kafka.consumer.ConsumerConfig
-import org.I0Itec.zkclient.ZkClient
-import kafka.utils.{ZkUtils, ZKStringSerializer}
-import kafka.utils.{TestZKUtils, TestUtils}
-import org.junit.Assert
-import org.scalatest.junit.JUnit3Suite
-
-class ZKEphemeralTest extends JUnit3Suite with ZooKeeperTestHarness {
-  val zkConnect = TestZKUtils.zookeeperConnect
-  var zkSessionTimeoutMs = 1000
-
-  def testEphemeralNodeCleanup = {
-    val config = new ConsumerConfig(TestUtils.createConsumerProperties(zkConnect, "test", "1"))
-    var zkClient = new ZkClient(zkConnect, zkSessionTimeoutMs, config.zkConnectionTimeoutMs,
-                                ZKStringSerializer)
-
-    try {
-      ZkUtils.createEphemeralPathExpectConflict(zkClient, "/tmp/zktest", "node created")
-    } catch {                       
-      case e: Exception => println("Exception in creating ephemeral node")
-    }
-
-    var testData: String = null
-
-    testData = ZkUtils.readData(zkClient, "/tmp/zktest")
-    Assert.assertNotNull(testData)
-
-    zkClient.close
-
-    Thread.sleep(zkSessionTimeoutMs)
-
-    zkClient = new ZkClient(zkConnect, zkSessionTimeoutMs, config.zkConnectionTimeoutMs,
-                                ZKStringSerializer)
-
-    val nodeExists = ZkUtils.pathExists(zkClient, "/tmp/zktest")
-    Assert.assertFalse(nodeExists)
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/zk/ZKLoadBalanceTest.scala b/trunk/core/src/test/scala/unit/kafka/zk/ZKLoadBalanceTest.scala
deleted file mode 100644
index a2303da..0000000
--- a/trunk/core/src/test/scala/unit/kafka/zk/ZKLoadBalanceTest.scala
+++ /dev/null
@@ -1,127 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.zk
-
-import junit.framework.Assert._
-import java.util.Collections
-import kafka.consumer.{ConsumerConfig, ZookeeperConsumerConnector}
-import java.lang.Thread
-import org.scalatest.junit.JUnit3Suite
-import kafka.utils.{TestUtils, ZkUtils, ZKGroupTopicDirs, TestZKUtils}
-
-class ZKLoadBalanceTest extends JUnit3Suite with ZooKeeperTestHarness {
-  val zkConnect = TestZKUtils.zookeeperConnect
-  var dirs : ZKGroupTopicDirs = null
-  val topic = "topic1"
-  val group = "group1"
-  val firstConsumer = "consumer1"
-  val secondConsumer = "consumer2"
-
-  override def setUp() {
-    super.setUp()
-
-    dirs = new ZKGroupTopicDirs(group, topic)
-  }
-
-  def testLoadBalance() {
-    // create the first partition
-    ZkUtils.setupPartition(zkClient, 400, "broker1", 1111, "topic1", 1)
-    // add the first consumer
-    val consumerConfig1 = new ConsumerConfig(TestUtils.createConsumerProperties(zkConnect, group, firstConsumer))
-    val zkConsumerConnector1 = new ZookeeperConsumerConnector(consumerConfig1, false)
-    zkConsumerConnector1.createMessageStreams(Map(topic -> 1))
-
-    {
-      // check Partition Owner Registry
-      val actual_1 = getZKChildrenValues(dirs.consumerOwnerDir)
-      val expected_1 = List( ("400-0", "group1_consumer1-0") )
-      checkSetEqual(actual_1, expected_1)
-    }
-
-    // add a second consumer
-    val consumerConfig2 = new ConsumerConfig(TestUtils.createConsumerProperties(zkConnect, group, secondConsumer))
-    val ZKConsumerConnector2 = new ZookeeperConsumerConnector(consumerConfig2, false)
-    ZKConsumerConnector2.createMessageStreams(Map(topic -> 1))
-    // wait a bit to make sure rebalancing logic is triggered
-    Thread.sleep(200)
-
-    {
-      // check Partition Owner Registry
-      val actual_2 = getZKChildrenValues(dirs.consumerOwnerDir)
-      val expected_2 = List( ("400-0", "group1_consumer1-0") )
-      checkSetEqual(actual_2, expected_2)
-    }
-
-    {
-      // add a few more partitions
-      val brokers = List(
-        (200, "broker2", 1111, "topic1", 2),
-        (300, "broker3", 1111, "topic1", 2) )
-
-      for ((brokerID, host, port, topic, nParts) <- brokers)
-        ZkUtils.setupPartition(zkClient, brokerID, host, port, topic, nParts)
-
-
-      // wait a bit to make sure rebalancing logic is triggered
-      Thread.sleep(1000)
-      // check Partition Owner Registry
-      val actual_3 = getZKChildrenValues(dirs.consumerOwnerDir)
-      val expected_3 = List( ("200-0", "group1_consumer1-0"),
-                             ("200-1", "group1_consumer1-0"),
-                             ("300-0", "group1_consumer1-0"),
-                             ("300-1", "group1_consumer2-0"),
-                             ("400-0", "group1_consumer2-0") )
-      checkSetEqual(actual_3, expected_3)
-    }
-
-    {
-      // now delete a partition
-      ZkUtils.deletePartition(zkClient, 400, "topic1")
-
-      // wait a bit to make sure rebalancing logic is triggered
-      Thread.sleep(500)
-      // check Partition Owner Registry
-      val actual_4 = getZKChildrenValues(dirs.consumerOwnerDir)
-      val expected_4 = List( ("200-0", "group1_consumer1-0"),
-                             ("200-1", "group1_consumer1-0"),
-                             ("300-0", "group1_consumer2-0"),
-                             ("300-1", "group1_consumer2-0") )
-      checkSetEqual(actual_4, expected_4)
-    }
-
-    zkConsumerConnector1.shutdown
-    ZKConsumerConnector2.shutdown
-  }
-
-  private def getZKChildrenValues(path : String) : Seq[Tuple2[String,String]] = {
-    import scala.collection.JavaConversions
-    val children = zkClient.getChildren(path)
-    Collections.sort(children)
-    val childrenAsSeq : Seq[java.lang.String] = JavaConversions.asBuffer(children)
-    childrenAsSeq.map(partition =>
-      (partition, zkClient.readData(path + "/" + partition).asInstanceOf[String]))
-  }
-
-  private def checkSetEqual(actual : Seq[Tuple2[String,String]], expected : Seq[Tuple2[String,String]]) {
-    assertEquals(expected.length, actual.length)
-    for (i <- 0 until expected.length) {
-      assertEquals(expected(i)._1, actual(i)._1)
-      assertEquals(expected(i)._2, actual(i)._2)
-    }
-  }
-}
diff --git a/trunk/core/src/test/scala/unit/kafka/zk/ZooKeeperTestHarness.scala b/trunk/core/src/test/scala/unit/kafka/zk/ZooKeeperTestHarness.scala
deleted file mode 100644
index c85640d..0000000
--- a/trunk/core/src/test/scala/unit/kafka/zk/ZooKeeperTestHarness.scala
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.zk
-
-import org.scalatest.junit.JUnit3Suite
-import org.I0Itec.zkclient.ZkClient
-import kafka.utils.ZKStringSerializer
-
-trait ZooKeeperTestHarness extends JUnit3Suite {
-  val zkConnect: String
-  var zookeeper: EmbeddedZookeeper = null
-  var zkClient: ZkClient = null
-
-  override def setUp() {
-    zookeeper = new EmbeddedZookeeper(zkConnect)
-    zkClient = new ZkClient(zookeeper.connectString)
-    zkClient.setZkSerializer(ZKStringSerializer)
-    super.setUp
-  }
-
-  override def tearDown() {
-    super.tearDown
-    zkClient.close()
-    zookeeper.shutdown()
-  }
-
-}
diff --git a/trunk/examples/README b/trunk/examples/README
deleted file mode 100644
index d33f6c5..0000000
--- a/trunk/examples/README
+++ /dev/null
@@ -1,19 +0,0 @@
-This directory contains examples of client code that uses kafka.
-
-The default target for ant is kafka.examples.KafkaConsumerProducerDemo, which sends messages to and
-receives messages from the Kafka server.
-
-In order to run demo from SBT:
-   1. Start Zookeeper and the Kafka server
-   2. ./sbt from top-level kafka directory
-   3. Switch to the kafka java examples project -> project Kafka Java Examples
-   4. execute run -> run
-   5. For unlimited producer-consumer run, select option 1
-      For simple consumer demo, select option 2
-
-To run the demo using scripts: 
-
-   1. Start Zookeeper and the Kafka server
-   2. For simple consumer demo, run bin/java-simple-consumer-demo.sh
-   3. For unlimited producer-consumer run, run bin/java-producer-consumer-demo.sh
-
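For reference, the demo these steps launch boils down to the following minimal sketch. It mirrors kafka.examples.KafkaConsumerProducerDemo further down in this diff; Producer, Consumer, and KafkaProperties are the example classes defined alongside it, and only the class name DemoSketch is illustrative.

    package kafka.examples;

    public class DemoSketch {
      public static void main(String[] args) {
        // Producer thread: publishes "Message_N" strings to KafkaProperties.topic
        Producer producerThread = new Producer(KafkaProperties.topic);
        producerThread.start();

        // Consumer thread: prints every payload received on the same topic
        Consumer consumerThread = new Consumer(KafkaProperties.topic);
        consumerThread.start();
      }
    }

Both threads run until the process is killed, which is why the README describes the producer-consumer run as "unlimited".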
diff --git a/trunk/examples/bin/java-producer-consumer-demo.sh b/trunk/examples/bin/java-producer-consumer-demo.sh
deleted file mode 100755
index 29e01c2..0000000
--- a/trunk/examples/bin/java-producer-consumer-demo.sh
+++ /dev/null
@@ -1,59 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-base_dir=$(dirname $0)/../..
-
-for file in $base_dir/project/boot/scala-2.8.0/lib/*.jar;
-do
-  if [ ${file##*/} != "sbt-launch.jar" ]; then
-    CLASSPATH=$CLASSPATH:$file
-  fi
-done
-
-for file in $base_dir/core/lib_managed/scala_2.8.0/compile/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/core/lib/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/core/target/scala_2.8.0/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/examples/target/scala_2.8.0/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-echo $CLASSPATH
-
-if [ -z "$KAFKA_PERF_OPTS" ]; then
-  KAFKA_OPTS="-Xmx512M -server -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=3333 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
-fi
-
-if [ -z "$JAVA_HOME" ]; then
-  JAVA="java"
-else
-  JAVA="$JAVA_HOME/bin/java"
-fi
-
-$JAVA $KAFKA_OPTS -cp $CLASSPATH kafka.examples.KafkaConsumerProducerDemo $@
-
diff --git a/trunk/examples/bin/java-simple-consumer-demo.sh b/trunk/examples/bin/java-simple-consumer-demo.sh
deleted file mode 100755
index 4716a09..0000000
--- a/trunk/examples/bin/java-simple-consumer-demo.sh
+++ /dev/null
@@ -1,59 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-base_dir=$(dirname $0)/../..
-
-for file in $base_dir/project/boot/scala-2.8.0/lib/*.jar;
-do
-  if [ ${file##*/} != "sbt-launch.jar" ]; then
-    CLASSPATH=$CLASSPATH:$file
-  fi
-done
-
-for file in $base_dir/core/lib_managed/scala_2.8.0/compile/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/core/lib/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/core/target/scala_2.8.0/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $base_dir/examples/target/scala_2.8.0/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-echo $CLASSPATH
-
-if [ -z "$KAFKA_PERF_OPTS" ]; then
-  KAFKA_OPTS="-Xmx512M -server -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=3333 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
-fi
-
-if [ -z "$JAVA_HOME" ]; then
-  JAVA="java"
-else
-  JAVA="$JAVA_HOME/bin/java"
-fi
-
-$JAVA $KAFKA_OPTS -cp $CLASSPATH kafka.examples.SimpleConsumerDemo $@
-
diff --git a/trunk/examples/src/main/java/kafka/examples/Consumer.java b/trunk/examples/src/main/java/kafka/examples/Consumer.java
deleted file mode 100644
index cb01577..0000000
--- a/trunk/examples/src/main/java/kafka/examples/Consumer.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.examples;
-
-
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import kafka.consumer.ConsumerConfig;
-import kafka.consumer.ConsumerIterator;
-import kafka.consumer.KafkaStream;
-import kafka.javaapi.consumer.ConsumerConnector;
-import kafka.message.Message;
-
-
-public class Consumer extends Thread
-{
-  private final ConsumerConnector consumer;
-  private final String topic;
-  
-  public Consumer(String topic)
-  {
-    consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
-            createConsumerConfig());
-    this.topic = topic;
-  }
-
-  private static ConsumerConfig createConsumerConfig()
-  {
-    Properties props = new Properties();
-    props.put("zk.connect", KafkaProperties.zkConnect);
-    props.put("groupid", KafkaProperties.groupId);
-    props.put("zk.sessiontimeout.ms", "400");
-    props.put("zk.synctime.ms", "200");
-    props.put("autocommit.interval.ms", "1000");
-
-    return new ConsumerConfig(props);
-
-  }
- 
-  public void run() {
-    Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
-    topicCountMap.put(topic, new Integer(1));
-    Map<String, List<KafkaStream<Message>>> consumerMap = consumer.createMessageStreams(topicCountMap);
-    KafkaStream<Message> stream =  consumerMap.get(topic).get(0);
-    ConsumerIterator<Message> it = stream.iterator();
-    while(it.hasNext())
-      System.out.println(ExampleUtils.getMessage(it.next().message()));
-  }
-}
diff --git a/trunk/examples/src/main/java/kafka/examples/ExampleUtils.java b/trunk/examples/src/main/java/kafka/examples/ExampleUtils.java
deleted file mode 100644
index 34fd1c0..0000000
--- a/trunk/examples/src/main/java/kafka/examples/ExampleUtils.java
+++ /dev/null
@@ -1,32 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.examples;
-
-
-import java.nio.ByteBuffer;
-import kafka.message.Message;
-
-public class ExampleUtils
-{
-  public static String getMessage(Message message)
-  {
-    ByteBuffer buffer = message.payload();
-    byte [] bytes = new byte[buffer.remaining()];
-    buffer.get(bytes);
-    return new String(bytes);
-  }
-}
diff --git a/trunk/examples/src/main/java/kafka/examples/KafkaConsumerProducerDemo.java b/trunk/examples/src/main/java/kafka/examples/KafkaConsumerProducerDemo.java
deleted file mode 100644
index 1239394..0000000
--- a/trunk/examples/src/main/java/kafka/examples/KafkaConsumerProducerDemo.java
+++ /dev/null
@@ -1,30 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.examples;
-
-public class KafkaConsumerProducerDemo implements KafkaProperties
-{
-  public static void main(String[] args)
-  {
-    Producer producerThread = new Producer(KafkaProperties.topic);
-    producerThread.start();
-
-    Consumer consumerThread = new Consumer(KafkaProperties.topic);
-    consumerThread.start();
-    
-  }
-}
diff --git a/trunk/examples/src/main/java/kafka/examples/KafkaProperties.java b/trunk/examples/src/main/java/kafka/examples/KafkaProperties.java
deleted file mode 100644
index d9a2104..0000000
--- a/trunk/examples/src/main/java/kafka/examples/KafkaProperties.java
+++ /dev/null
@@ -1,31 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.examples;
-
-public interface KafkaProperties
-{
-  final static String zkConnect = "127.0.0.1:2181";
-  final static  String groupId = "group1";
-  final static String topic = "topic1";
-  final static String kafkaServerURL = "localhost";
-  final static int kafkaServerPort = 9092;
-  final static int kafkaProducerBufferSize = 64*1024;
-  final static int connectionTimeOut = 100000;
-  final static int reconnectInterval = 10000;
-  final static String topic2 = "topic2";
-  final static String topic3 = "topic3";
-}
diff --git a/trunk/examples/src/main/java/kafka/examples/Producer.java b/trunk/examples/src/main/java/kafka/examples/Producer.java
deleted file mode 100644
index 353a7eb..0000000
--- a/trunk/examples/src/main/java/kafka/examples/Producer.java
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.examples;
-
-
-import java.util.Properties;
-import kafka.javaapi.producer.ProducerData;
-import kafka.producer.ProducerConfig;
-
-public class Producer extends Thread
-{
-  private final kafka.javaapi.producer.Producer<Integer, String> producer;
-  private final String topic;
-  private final Properties props = new Properties();
-
-  public Producer(String topic)
-  {
-    props.put("serializer.class", "kafka.serializer.StringEncoder");
-    props.put("zk.connect", "localhost:2181");
-    // Use random partitioner. Don't need the key type. Just set it to Integer.
-    // The message is of type String.
-    producer = new kafka.javaapi.producer.Producer<Integer, String>(new ProducerConfig(props));
-    this.topic = topic;
-  }
-  
-  public void run() {
-    int messageNo = 1;
-    while(true)
-    {
-      String messageStr = "Message_" + messageNo;
-      producer.send(new ProducerData<Integer, String>(topic, messageStr));
-      messageNo++;
-    }
-  }
-
-}
diff --git a/trunk/examples/src/main/java/kafka/examples/SimpleConsumerDemo.java b/trunk/examples/src/main/java/kafka/examples/SimpleConsumerDemo.java
deleted file mode 100644
index c2b88da..0000000
--- a/trunk/examples/src/main/java/kafka/examples/SimpleConsumerDemo.java
+++ /dev/null
@@ -1,83 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package kafka.examples;
-
-
-import java.util.ArrayList;
-import java.util.List;
-import kafka.api.FetchRequest;
-import kafka.javaapi.MultiFetchResponse;
-import kafka.javaapi.consumer.SimpleConsumer;
-import kafka.javaapi.message.ByteBufferMessageSet;
-import kafka.message.MessageAndOffset;
-
-
-public class SimpleConsumerDemo
-{
-  private static void printMessages(ByteBufferMessageSet messageSet)
-  {
-    for (MessageAndOffset messageAndOffset : messageSet) {
-      System.out.println(ExampleUtils.getMessage(messageAndOffset.message()));
-    }
-  }
-
-  private static void generateData()
-  {
-    Producer producer2 = new Producer(KafkaProperties.topic2);
-    producer2.start();
-    Producer producer3 = new Producer(KafkaProperties.topic3);
-    producer3.start();
-    try
-    {
-      Thread.sleep(1000);
-    }
-    catch (InterruptedException e)
-    {
-      e.printStackTrace();
-    }
-  }
-  
-  public static void main(String[] args)
-  {
-    
-    generateData();
-    SimpleConsumer simpleConsumer = new SimpleConsumer(KafkaProperties.kafkaServerURL,
-                                                       KafkaProperties.kafkaServerPort,
-                                                       KafkaProperties.connectionTimeOut,
-                                                       KafkaProperties.kafkaProducerBufferSize);
-
-    System.out.println("Testing single fetch");
-    FetchRequest req = new FetchRequest(KafkaProperties.topic2, 0, 0L, 100);
-    ByteBufferMessageSet messageSet = simpleConsumer.fetch(req);
-    printMessages(messageSet);
-
-    System.out.println("Testing single multi-fetch");
-    req = new FetchRequest(KafkaProperties.topic2, 0, 0L, 100);
-    List<FetchRequest> list = new ArrayList<FetchRequest>();
-    list.add(req);
-    req = new FetchRequest(KafkaProperties.topic3, 0, 0L, 100);
-    list.add(req);
-    MultiFetchResponse response = simpleConsumer.multifetch(list);
-    int fetchReq = 0;
-    for (ByteBufferMessageSet resMessageSet : response )
-    {
-      System.out.println("Response from fetch request no: " + ++fetchReq);
-      printMessages(resMessageSet);
-    }
-  }
-
-}
diff --git a/trunk/lib/sbt-launch.jar b/trunk/lib/sbt-launch.jar
deleted file mode 100644
index 67ee369..0000000
--- a/trunk/lib/sbt-launch.jar
+++ /dev/null
Binary files differ
diff --git a/trunk/perf/config/log4j.properties b/trunk/perf/config/log4j.properties
deleted file mode 100644
index 542b739..0000000
--- a/trunk/perf/config/log4j.properties
+++ /dev/null
@@ -1,24 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-log4j.rootLogger=INFO, fileAppender
-
-log4j.appender.fileAppender=org.apache.log4j.FileAppender
-log4j.appender.fileAppender.File=perf.log
-log4j.appender.fileAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.fileAppender.layout.ConversionPattern=%m %n 
-
-# Turn on all our debugging info
-log4j.logger.kafka=INFO
-
diff --git a/trunk/perf/src/main/scala/kafka/perf/ConsumerPerformance.scala b/trunk/perf/src/main/scala/kafka/perf/ConsumerPerformance.scala
deleted file mode 100644
index 414c965..0000000
--- a/trunk/perf/src/main/scala/kafka/perf/ConsumerPerformance.scala
+++ /dev/null
@@ -1,196 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.perf
-
-import java.util.concurrent.CountDownLatch
-import java.util.concurrent.atomic.AtomicLong
-import java.nio.channels.ClosedByInterruptException
-import org.apache.log4j.Logger
-import kafka.message.Message
-import kafka.utils.Utils
-import java.util.{Random, Properties}
-import kafka.consumer._
-import java.text.SimpleDateFormat
-
-/**
- * Performance test for the full zookeeper consumer
- */
-object ConsumerPerformance {
-  private val logger = Logger.getLogger(getClass())
-
-  def main(args: Array[String]): Unit = {
-
-    val config = new ConsumerPerfConfig(args)
-    logger.info("Starting consumer...")
-    var totalMessagesRead = new AtomicLong(0)
-    var totalBytesRead = new AtomicLong(0)
-
-    if(!config.hideHeader) {
-      if(!config.showDetailedStats)
-        println("start.time, end.time, fetch.size, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec")
-      else
-        println("time, fetch.size, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec")
-    }
-
-    // clean up zookeeper state for this group id for every perf run
-    Utils.tryCleanupZookeeper(config.consumerConfig.zkConnect, config.consumerConfig.groupId)
-
-    val consumerConnector: ConsumerConnector = Consumer.create(config.consumerConfig)
-
-    val topicMessageStreams = consumerConnector.createMessageStreams(Predef.Map(config.topic -> config.numThreads))
-    var threadList = List[ConsumerPerfThread]()
-    for ((topic, streamList) <- topicMessageStreams)
-      for (i <- 0 until streamList.length)
-        threadList ::= new ConsumerPerfThread(i, "kafka-zk-consumer-" + i, streamList(i), config,
-                                              totalMessagesRead, totalBytesRead)
-
-    logger.info("Sleeping for 1000 seconds.")
-    Thread.sleep(1000)
-    logger.info("starting threads")
-    val startMs = System.currentTimeMillis
-    for (thread <- threadList)
-      thread.start
-
-    for (thread <- threadList)
-      thread.shutdown
-
-    val endMs = System.currentTimeMillis
-    val elapsedSecs = (endMs - startMs - config.consumerConfig.consumerTimeoutMs) / 1000.0
-    if(!config.showDetailedStats) {
-      val totalMBRead = (totalBytesRead.get*1.0)/(1024*1024)
-      println(("%s, %s, %d, %.4f, %.4f, %d, %.4f").format(config.dateFormat.format(startMs), config.dateFormat.format(endMs),
-        config.consumerConfig.fetchSize, totalMBRead, totalMBRead/elapsedSecs, totalMessagesRead.get,
-        totalMessagesRead.get/elapsedSecs))
-    }
-    System.exit(0)
-  }
-
-  class ConsumerPerfConfig(args: Array[String]) extends PerfConfig(args) {
-    val zkConnectOpt = parser.accepts("zookeeper", "REQUIRED: The connection string for the zookeeper connection in the form host:port. " +
-                                      "Multiple URLS can be given to allow fail-over.")
-                           .withRequiredArg
-                           .describedAs("urls")
-                           .ofType(classOf[String])
-    val groupIdOpt = parser.accepts("group", "The group id to consume on.")
-                           .withRequiredArg
-                           .describedAs("gid")
-                           .defaultsTo("perf-consumer-" + new Random().nextInt(100000))
-                           .ofType(classOf[String])
-    val fetchSizeOpt = parser.accepts("fetch-size", "The amount of data to fetch in a single request.")
-                           .withRequiredArg
-                           .describedAs("size")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(1024 * 1024)
-    val resetBeginningOffsetOpt = parser.accepts("from-latest", "If the consumer does not already have an established " +
-      "offset to consume from, start with the latest message present in the log rather than the earliest message.")
-    val socketBufferSizeOpt = parser.accepts("socket-buffer-size", "The size of the TCP receive (RECV) buffer.")
-                           .withRequiredArg
-                           .describedAs("size")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(2 * 1024 * 1024)
-    val numThreadsOpt = parser.accepts("threads", "Number of processing threads.")
-                           .withRequiredArg
-                           .describedAs("count")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(10)
-
-    val options = parser.parse(args : _*)
-
-    for(arg <- List(topicOpt, zkConnectOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"")
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-
-    val props = new Properties
-    props.put("groupid", options.valueOf(groupIdOpt))
-    props.put("socket.buffer.size", options.valueOf(socketBufferSizeOpt).toString)
-    props.put("fetch.size", options.valueOf(fetchSizeOpt).toString)
-    props.put("autooffset.reset", if(options.has(resetBeginningOffsetOpt)) "largest" else "smallest")
-    props.put("zk.connect", options.valueOf(zkConnectOpt))
-    props.put("consumer.timeout.ms", "5000")
-    val consumerConfig = new ConsumerConfig(props)
-    val numThreads = options.valueOf(numThreadsOpt).intValue
-    val topic = options.valueOf(topicOpt)
-    val numMessages = options.valueOf(numMessagesOpt).longValue
-    val reportingInterval = options.valueOf(reportingIntervalOpt).intValue
-    val showDetailedStats = options.has(showDetailedStatsOpt)
-    val dateFormat = new SimpleDateFormat(options.valueOf(dateFormatOpt))
-    val hideHeader = options.has(hideHeaderOpt)
-  }
-
-  class ConsumerPerfThread(threadId: Int, name: String, stream: KafkaStream[Message],
-                           config:ConsumerPerfConfig, totalMessagesRead: AtomicLong, totalBytesRead: AtomicLong)
-    extends Thread(name) {
-    private val shutdownLatch = new CountDownLatch(1)
-
-    def shutdown(): Unit = {
-      shutdownLatch.await
-    }
-
-    override def run() {
-      var bytesRead = 0L
-      var messagesRead = 0L
-      val startMs = System.currentTimeMillis
-      var lastReportTime: Long = startMs
-      var lastBytesRead = 0L
-      var lastMessagesRead = 0L
-
-      try {
-        for (messageAndMetadata <- stream if messagesRead < config.numMessages) {
-          messagesRead += 1
-          bytesRead += messageAndMetadata.message.payloadSize
-
-          if (messagesRead % config.reportingInterval == 0) {
-            if(config.showDetailedStats)
-              printMessage(threadId, bytesRead, lastBytesRead, messagesRead, lastMessagesRead, lastReportTime, System.currentTimeMillis)
-            lastReportTime = System.currentTimeMillis
-            lastMessagesRead = messagesRead
-            lastBytesRead = bytesRead
-          }
-        }
-      }
-      catch {
-        case _: InterruptedException =>
-        case _: ClosedByInterruptException =>
-        case _: ConsumerTimeoutException =>
-        case e => throw e
-      }
-      totalMessagesRead.addAndGet(messagesRead)
-      totalBytesRead.addAndGet(bytesRead)
-      if(config.showDetailedStats)
-        printMessage(threadId, bytesRead, lastBytesRead, messagesRead, lastMessagesRead, startMs, System.currentTimeMillis)
-      shutdownComplete
-    }
-
-    private def printMessage(id: Int, bytesRead: Long, lastBytesRead: Long, messagesRead: Long, lastMessagesRead: Long,
-                             startMs: Long, endMs: Long) = {
-      val elapsedMs = endMs - startMs
-      val totalMBRead = (bytesRead*1.0)/(1024*1024)
-      val mbRead = ((bytesRead - lastBytesRead)*1.0)/(1024*1024)
-      println(("%s, %d, %d, %.4f, %.4f, %d, %.4f").format(config.dateFormat.format(endMs), id,
-        config.consumerConfig.fetchSize, totalMBRead,
-        1000.0*(mbRead/elapsedMs), messagesRead, ((messagesRead - lastMessagesRead)/elapsedMs)*1000.0))
-    }
-
-    private def shutdownComplete() = shutdownLatch.countDown
-  }
-
-}
diff --git a/trunk/perf/src/main/scala/kafka/perf/PerfConfig.scala b/trunk/perf/src/main/scala/kafka/perf/PerfConfig.scala
deleted file mode 100644
index db2c1a1..0000000
--- a/trunk/perf/src/main/scala/kafka/perf/PerfConfig.scala
+++ /dev/null
@@ -1,48 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-package kafka.perf
-
-import joptsimple.OptionParser
-
-
-class PerfConfig(args: Array[String]) {
-  val parser = new OptionParser
-  val topicOpt = parser.accepts("topic", "REQUIRED: The topic to consume from.")
-    .withRequiredArg
-    .describedAs("topic")
-    .ofType(classOf[String])
-  val numMessagesOpt = parser.accepts("messages", "The number of messages to send or consume")
-    .withRequiredArg
-    .describedAs("count")
-    .ofType(classOf[java.lang.Long])
-    .defaultsTo(Long.MaxValue)
-  val reportingIntervalOpt = parser.accepts("reporting-interval", "Interval at which to print progress info.")
-    .withRequiredArg
-    .describedAs("size")
-    .ofType(classOf[java.lang.Integer])
-    .defaultsTo(5000)
-  val dateFormatOpt = parser.accepts("date-format", "The date format to use for formatting the time field. " +
-    "See java.text.SimpleDateFormat for options.")
-    .withRequiredArg
-    .describedAs("date format")
-    .ofType(classOf[String])
-    .defaultsTo("yyyy-MM-dd HH:mm:ss:SSS")
-  val showDetailedStatsOpt = parser.accepts("show-detailed-stats", "If set, stats are reported for each reporting " +
-    "interval as configured by reporting-interval")
-  val hideHeaderOpt = parser.accepts("hide-header", "If set, skips printing the header for the stats ")
-}
diff --git a/trunk/perf/src/main/scala/kafka/perf/ProducerPerformance.scala b/trunk/perf/src/main/scala/kafka/perf/ProducerPerformance.scala
deleted file mode 100644
index 5888f1e..0000000
--- a/trunk/perf/src/main/scala/kafka/perf/ProducerPerformance.scala
+++ /dev/null
@@ -1,237 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.perf
-
-import java.util.concurrent.{CountDownLatch, Executors}
-import java.util.concurrent.atomic.AtomicLong
-import kafka.producer._
-import org.apache.log4j.Logger
-import kafka.message.{CompressionCodec, Message}
-import java.text.SimpleDateFormat
-import java.util.{Random, Properties}
-import kafka.utils.Logging
-
-/**
- * Load test for the producer
- */
-object ProducerPerformance extends Logging {
-
-  def main(args: Array[String]) {
-
-    val logger = Logger.getLogger(getClass)
-    val config = new ProducerPerfConfig(args)
-    if(!config.isFixSize)
-      logger.info("WARN: Throughput will be slower due to changing message size per request")
-
-    val totalBytesSent = new AtomicLong(0)
-    val totalMessagesSent = new AtomicLong(0)
-    val executor = Executors.newFixedThreadPool(config.numThreads)
-    val allDone = new CountDownLatch(config.numThreads)
-    val startMs = System.currentTimeMillis
-    val rand = new java.util.Random
-
-    if(!config.hideHeader) {
-      if(!config.showDetailedStats)
-        println("start.time, end.time, compression, message.size, batch.size, total.data.sent.in.MB, MB.sec, " +
-          "total.data.sent.in.nMsg, nMsg.sec")
-      else
-        println("time, compression, thread.id, message.size, batch.size, total.data.sent.in.MB, MB.sec, " +
-          "total.data.sent.in.nMsg, nMsg.sec")
-    }
-
-    for(i <- 0 until config.numThreads) {
-      executor.execute(new ProducerThread(i, config, totalBytesSent, totalMessagesSent, allDone, rand))
-    }
-
-    allDone.await()
-    val endMs = System.currentTimeMillis
-    val elapsedSecs = (endMs - startMs) / 1000.0
-    if(!config.showDetailedStats) {
-      val totalMBSent = (totalBytesSent.get * 1.0)/ (1024 * 1024)
-      println(("%s, %s, %d, %d, %d, %.2f, %.4f, %d, %.4f").format(config.dateFormat.format(startMs),
-        config.dateFormat.format(endMs), config.compressionCodec.codec, config.messageSize, config.batchSize,
-        totalMBSent, totalMBSent/elapsedSecs, totalMessagesSent.get, totalMessagesSent.get/elapsedSecs))
-    }
-    System.exit(0)
-  }
-
-  class ProducerPerfConfig(args: Array[String]) extends PerfConfig(args) {
-    val brokerInfoOpt = parser.accepts("brokerinfo", "REQUIRED: broker info (either from zookeeper or a broker list).")
-      .withRequiredArg
-      .describedAs("broker.list=brokerid:hostname:port or zk.connect=host:port")
-      .ofType(classOf[String])
-    val messageSizeOpt = parser.accepts("message-size", "The size of each message.")
-      .withRequiredArg
-      .describedAs("size")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(100)
-    val varyMessageSizeOpt = parser.accepts("vary-message-size", "If set, message size will vary up to the given maximum.")
-    val asyncOpt = parser.accepts("async", "If set, messages are sent asynchronously.")
-    val batchSizeOpt = parser.accepts("batch-size", "Number of messages to send in a single batch.")
-      .withRequiredArg
-      .describedAs("size")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(200)
-    val numThreadsOpt = parser.accepts("threads", "Number of sending threads.")
-      .withRequiredArg
-      .describedAs("count")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(10)
-    val compressionCodecOption = parser.accepts("compression-codec", "If set, messages are sent compressed")
-      .withRequiredArg
-      .describedAs("compression codec ")
-      .ofType(classOf[java.lang.Integer])
-      .defaultsTo(0)
-
-    val options = parser.parse(args : _*)
-    for(arg <- List(topicOpt, brokerInfoOpt, numMessagesOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"")
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-    val topic = options.valueOf(topicOpt)
-    val numMessages = options.valueOf(numMessagesOpt).longValue
-    val reportingInterval = options.valueOf(reportingIntervalOpt).intValue
-    val showDetailedStats = options.has(showDetailedStatsOpt)
-    val dateFormat = new SimpleDateFormat(options.valueOf(dateFormatOpt))
-    val hideHeader = options.has(hideHeaderOpt)
-    val brokerInfo = options.valueOf(brokerInfoOpt)
-    val messageSize = options.valueOf(messageSizeOpt).intValue
-    val isFixSize = !options.has(varyMessageSizeOpt)
-    val isAsync = options.has(asyncOpt)
-    var batchSize = options.valueOf(batchSizeOpt).intValue
-    val numThreads = options.valueOf(numThreadsOpt).intValue
-    val compressionCodec = CompressionCodec.getCompressionCodec(options.valueOf(compressionCodecOption).intValue)
-  }
-
-  private def getStringOfLength(len: Int) : String = {
-    val strArray = new Array[Char](len)
-    for (i <- 0 until len)
-      strArray(i) = 'x'
-    return new String(strArray)
-  }
-
-  private def getByteArrayOfLength(len: Int): Array[Byte] = {
-    //new Array[Byte](len)
-    new Array[Byte]( if (len == 0) 5 else len )
-  }
-
-  class ProducerThread(val threadId: Int,
-                       val config: ProducerPerfConfig,
-                       val totalBytesSent: AtomicLong,
-                       val totalMessagesSent: AtomicLong,
-                       val allDone: CountDownLatch,
-                       val rand: Random) extends Runnable {
-    val props = new Properties()
-    val brokerInfoList = config.brokerInfo.split("=")
-    if (brokerInfoList(0) == "zk.connect") {
-      props.put("zk.connect", brokerInfoList(1))
-      props.put("zk.sessiontimeout.ms", "300000")
-    }
-    else
-      props.put("broker.list", brokerInfoList(1))
-    props.put("compression.codec", config.compressionCodec.codec.toString)
-    props.put("reconnect.interval", Integer.MAX_VALUE.toString)
-    props.put("buffer.size", (64*1024).toString)
-    if(config.isAsync) {
-      props.put("producer.type","async")
-      props.put("batch.size", config.batchSize.toString)
-      props.put("queue.enqueueTimeout.ms", "-1")
-    }
-    val producerConfig = new ProducerConfig(props)
-    val producer = new Producer[Message, Message](producerConfig)
-
-    override def run {
-      var bytesSent = 0L
-      var lastBytesSent = 0L
-      var nSends = 0
-      var lastNSends = 0
-      val message = new Message(new Array[Byte](config.messageSize))
-      var reportTime = System.currentTimeMillis()
-      var lastReportTime = reportTime
-      val messagesPerThread = if(!config.isAsync) config.numMessages / config.numThreads / config.batchSize
-                              else config.numMessages / config.numThreads
-      debug("Messages per thread = " + messagesPerThread)
-      var messageSet: List[Message] = Nil
-      if(config.isFixSize) {
-        for(k <- 0 until config.batchSize) {
-          messageSet ::= message
-        }
-      }
-      var j: Long = 0L
-      while(j < messagesPerThread) {
-        var strLength = config.messageSize
-        if (!config.isFixSize) {
-          for(k <- 0 until config.batchSize) {
-            strLength = rand.nextInt(config.messageSize)
-            val message = new Message(getByteArrayOfLength(strLength))
-            messageSet ::= message
-            bytesSent += message.payloadSize
-          }
-        }else if(!config.isAsync) {
-          bytesSent += config.batchSize*message.payloadSize
-        }
-        try  {
-          if(!config.isAsync) {
-            producer.send(new ProducerData[Message,Message](config.topic, null, messageSet))
-            if(!config.isFixSize) messageSet = Nil
-            nSends += config.batchSize
-          }else {
-            if(!config.isFixSize) {
-              strLength = rand.nextInt(config.messageSize)
-              val messageBytes = getByteArrayOfLength(strLength)
-              rand.nextBytes(messageBytes)
-              val message = new Message(messageBytes)
-              producer.send(new ProducerData[Message,Message](config.topic, message))
-              debug(config.topic + "-checksum:" + message.checksum)
-              bytesSent += message.payloadSize
-            }else {
-              producer.send(new ProducerData[Message,Message](config.topic, message))
-              debug(config.topic + "-checksum:" + message.checksum)
-              bytesSent += message.payloadSize
-            }
-            nSends += 1
-          }
-        }catch {
-          case e: Exception => e.printStackTrace
-        }
-        if(nSends % config.reportingInterval == 0) {
-          reportTime = System.currentTimeMillis()
-          val elapsed = (reportTime - lastReportTime)/ 1000.0
-          val mbBytesSent = ((bytesSent - lastBytesSent) * 1.0)/(1024 * 1024)
-          val numMessagesPerSec = (nSends - lastNSends) / elapsed
-          val mbPerSec = mbBytesSent / elapsed
-          val formattedReportTime = config.dateFormat.format(reportTime)
-          if(config.showDetailedStats)
-            println(("%s, %d, %d, %d, %d, %.2f, %.4f, %d, %.4f").format(formattedReportTime, config.compressionCodec.codec,
-              threadId, config.messageSize, config.batchSize, (bytesSent*1.0)/(1024 * 1024), mbPerSec, nSends, numMessagesPerSec))
-          lastReportTime = reportTime
-          lastBytesSent = bytesSent
-          lastNSends = nSends
-        }
-        j += 1
-      }
-      producer.close()
-      totalBytesSent.addAndGet(bytesSent)
-      totalMessagesSent.addAndGet(nSends)
-      allDone.countDown()
-    }
-  }
-}
diff --git a/trunk/perf/src/main/scala/kafka/perf/SimpleConsumerPerformance.scala b/trunk/perf/src/main/scala/kafka/perf/SimpleConsumerPerformance.scala
deleted file mode 100644
index ca8df59..0000000
--- a/trunk/perf/src/main/scala/kafka/perf/SimpleConsumerPerformance.scala
+++ /dev/null
@@ -1,141 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package kafka.perf
-
-import java.net.URI
-import kafka.utils._
-import kafka.consumer.SimpleConsumer
-import org.apache.log4j.Logger
-import kafka.api.{OffsetRequest, FetchRequest}
-import java.text.SimpleDateFormat
-
-/**
- * Performance test for the simple consumer
- */
-object SimpleConsumerPerformance {
-
-  def main(args: Array[String]) {
-    val logger = Logger.getLogger(getClass)
-    val config = new ConsumerPerfConfig(args)
-
-    if(!config.hideHeader) {
-      if(!config.showDetailedStats)
-        println("start.time, end.time, fetch.size, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec")
-      else
-        println("time, fetch.size, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec")
-    }
-
-    val consumer = new SimpleConsumer(config.url.getHost, config.url.getPort, 30*1000, 2*config.fetchSize)
-
-    // reset to latest or smallest offset
-    var offset: Long = if(config.fromLatest) consumer.getOffsetsBefore(config.topic, config.partition, OffsetRequest.LatestTime, 1).head
-                       else consumer.getOffsetsBefore(config.topic, config.partition, OffsetRequest.EarliestTime, 1).head
-
-    val startMs = System.currentTimeMillis
-    var done = false
-    var totalBytesRead = 0L
-    var totalMessagesRead = 0L
-    var consumedInterval = 0
-    var lastReportTime: Long = startMs
-    var lastBytesRead = 0L
-    var lastMessagesRead = 0L
-    while(!done) {
-      val messages = consumer.fetch(new FetchRequest(config.topic, config.partition, offset, config.fetchSize))
-      var messagesRead = 0
-      var bytesRead = 0
-
-      for(message <- messages) {
-        messagesRead += 1
-        bytesRead += message.message.payloadSize
-      }
-      
-      if(messagesRead == 0 || totalMessagesRead > config.numMessages)
-        done = true
-      else
-        offset += messages.validBytes
-      
-      totalBytesRead += bytesRead
-      totalMessagesRead += messagesRead
-      consumedInterval += messagesRead
-      
-      if(consumedInterval > config.reportingInterval) {
-        if(config.showDetailedStats) {
-          val reportTime = System.currentTimeMillis
-          val elapsed = (reportTime - lastReportTime)/1000.0
-          val totalMBRead = ((totalBytesRead-lastBytesRead)*1.0)/(1024*1024)
-          println(("%s, %d, %.4f, %.4f, %d, %.4f").format(config.dateFormat.format(reportTime), config.fetchSize,
-            (totalBytesRead*1.0)/(1024*1024), totalMBRead/elapsed,
-            totalMessagesRead, (totalMessagesRead-lastMessagesRead)/elapsed))
-        }
-        lastReportTime = SystemTime.milliseconds
-        lastBytesRead = totalBytesRead
-        lastMessagesRead = totalMessagesRead
-        consumedInterval = 0
-      }
-    }
-    val reportTime = System.currentTimeMillis
-    val elapsed = (reportTime - startMs) / 1000.0
-
-    if(!config.showDetailedStats) {
-      val totalMBRead = (totalBytesRead*1.0)/(1024*1024)
-      println(("%s, %s, %d, %.4f, %.4f, %d, %.4f").format(config.dateFormat.format(startMs),
-        config.dateFormat.format(reportTime), config.fetchSize, totalMBRead, totalMBRead/elapsed,
-        totalMessagesRead, totalMessagesRead/elapsed))
-    }
-    System.exit(0)
-  }
-
-  class ConsumerPerfConfig(args: Array[String]) extends PerfConfig(args) {
-    val urlOpt = parser.accepts("server", "REQUIRED: The hostname of the server to connect to.")
-                           .withRequiredArg
-                           .describedAs("kafka://hostname:port")
-                           .ofType(classOf[String])
-    val resetBeginningOffsetOpt = parser.accepts("from-latest", "If the consumer does not already have an established " +
-      "offset to consume from, start with the latest message present in the log rather than the earliest message.")
-    val partitionOpt = parser.accepts("partition", "The topic partition to consume from.")
-                           .withRequiredArg
-                           .describedAs("partition")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(0)
-    val fetchSizeOpt = parser.accepts("fetch-size", "REQUIRED: The fetch size to use for consumption.")
-                           .withRequiredArg
-                           .describedAs("bytes")
-                           .ofType(classOf[java.lang.Integer])
-                           .defaultsTo(1024*1024)
-
-    val options = parser.parse(args : _*)
-
-    for(arg <- List(topicOpt, urlOpt)) {
-      if(!options.has(arg)) {
-        System.err.println("Missing required argument \"" + arg + "\"")
-        parser.printHelpOn(System.err)
-        System.exit(1)
-      }
-    }
-    val url = new URI(options.valueOf(urlOpt))
-    val fetchSize = options.valueOf(fetchSizeOpt).intValue
-    val fromLatest = options.has(resetBeginningOffsetOpt)
-    val partition = options.valueOf(partitionOpt).intValue
-    val topic = options.valueOf(topicOpt)
-    val numMessages = options.valueOf(numMessagesOpt).longValue
-    val reportingInterval = options.valueOf(reportingIntervalOpt).intValue
-    val showDetailedStats = options.has(showDetailedStatsOpt)
-    val dateFormat = new SimpleDateFormat(options.valueOf(dateFormatOpt))
-    val hideHeader = options.has(hideHeaderOpt)
-  }
-}
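ConsumerPerfConfig above requires --server (kafka://hostname:port) and a topic, and defaults --partition to 0 and --fetch-size to 1048576 bytes. A sketch of an invocation, assuming the shared PerfConfig exposes the topic option as --topic (host, port and topic name are placeholders):

    $ bin/kafka-run-class.sh kafka.perf.SimpleConsumerPerformance \
        --server kafka://localhost:9092 \
        --topic test01 \
        --partition 0 \
        --fetch-size 1048576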
diff --git a/trunk/project/build.properties b/trunk/project/build.properties
deleted file mode 100644
index 36df2333..0000000
--- a/trunk/project/build.properties
+++ /dev/null
@@ -1,24 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#Project properties
-#Mon Feb 28 11:55:49 PST 2011
-project.name=Kafka
-sbt.version=0.7.5
-project.version=0.7.0
-build.scala.versions=2.8.0
-contrib.root.dir=contrib
-lib.dir=lib
-target.dir=target/scala_2.8.0
-dist.dir=dist
diff --git a/trunk/project/build/KafkaProject.scala b/trunk/project/build/KafkaProject.scala
deleted file mode 100644
index d0b52cf..0000000
--- a/trunk/project/build/KafkaProject.scala
+++ /dev/null
@@ -1,249 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import sbt._
-import scala.xml.{Node, Elem, NodeSeq}
-import scala.xml.transform.{RewriteRule, RuleTransformer}
-
-class KafkaProject(info: ProjectInfo) extends ParentProject(info) with IdeaProject {
-  lazy val core = project("core", "core-kafka", new CoreKafkaProject(_))
-  lazy val examples = project("examples", "java-examples", new KafkaExamplesProject(_), core)
-  lazy val contrib = project("contrib", "contrib", new ContribProject(_))
-  lazy val perf = project("perf", "perf", new KafkaPerfProject(_))
-
-  lazy val releaseZipTask = core.packageDistTask
-
-  val releaseZipDescription = "Compiles every sub project, runs unit tests, creates a deployable release zip file with dependencies, config, and scripts."
-  lazy val releaseZip = releaseZipTask dependsOn(core.corePackageAction, core.test, examples.examplesPackageAction,
-    contrib.producerPackageAction, contrib.consumerPackageAction) describedAs releaseZipDescription
-
-  val runRatDescription = "Runs Apache rat on Kafka"
-  lazy val runRatTask = task {
-    Runtime.getRuntime().exec("bin/run-rat.sh")
-    None
-  } describedAs runRatDescription
-
-  val rat = "org.apache.rat" % "apache-rat" % "0.8"
-
-  class CoreKafkaProject(info: ProjectInfo) extends DefaultProject(info)
-     with IdeaProject with CoreDependencies with TestDependencies with CompressionDependencies {
-   val corePackageAction = packageAllAction
-
-  // The issue: in moving from log4j 1.2.14 to 1.2.15, the developers added some features which required
-  // dependencies on various sun and javax packages.
-   override def ivyXML =
-    <dependencies>
-      <exclude module="javax"/>
-      <exclude module="jmxri"/>
-      <exclude module="jmxtools"/>
-      <exclude module="mail"/>
-      <exclude module="jms"/>
-      <dependency org="org.apache.zookeeper" name="zookeeper" rev="3.3.4">
-        <exclude module="log4j"/>
-        <exclude module="jline"/>
-      </dependency>
-      <dependency org="com.github.sgroschupf" name="zkclient" rev="0.1">
-      </dependency>
-    </dependencies>
-
-    override def artifactID = "kafka"
-    override def filterScalaJars = false
-
-    // build the executable jar's classpath.
-    // (why is it necessary to explicitly remove the target/{classes,resources} paths? hm.)
-    def dependentJars = {
-      val jars =
-      publicClasspath +++ mainDependencies.scalaJars --- mainCompilePath --- mainResourcesOutputPath
-      if (jars.get.find { jar => jar.name.startsWith("scala-library-") }.isDefined) {
-        // workaround bug in sbt: if the compiler is explicitly included, don't include 2 versions
-        // of the library.
-        jars --- jars.filter { jar =>
-          jar.absolutePath.contains("/boot/") && jar.name == "scala-library.jar"
-        }
-      } else {
-        jars
-      }
-    }
-
-    def dependentJarNames = dependentJars.getFiles.map(_.getName).filter(_.endsWith(".jar"))
-    override def manifestClassPath = Some(dependentJarNames.map { "libs/" + _ }.mkString(" "))
-
-    def distName = (artifactID + "-" + projectVersion.value)
-    def distPath = "dist" / distName ##
-
-    def configPath = "config" ##
-    def configOutputPath = distPath / "config"
-
-    def binPath = "bin" ##
-    def binOutputPath = distPath / "bin"
-
-    def distZipName = {
-      "%s-%s.zip".format(artifactID, projectVersion.value)
-    }
-
-    lazy val packageDistTask = task {
-      distPath.asFile.mkdirs()
-      (distPath / "libs").asFile.mkdirs()
-      binOutputPath.asFile.mkdirs()
-      configOutputPath.asFile.mkdirs()
-
-      FileUtilities.copyFlat(List(jarPath), distPath, log).left.toOption orElse
-              FileUtilities.copyFlat(dependentJars.get, distPath / "libs", log).left.toOption orElse
-              FileUtilities.copy((configPath ***).get, configOutputPath, log).left.toOption orElse
-              FileUtilities.copy((binPath ***).get, binOutputPath, log).left.toOption orElse
-              FileUtilities.zip((("dist" / distName) ##).get, "dist" / distZipName, true, log)
-      None
-    }
-
-    val PackageDistDescription = "Creates a deployable zip file with dependencies, config, and scripts."
-    lazy val packageDist = packageDistTask dependsOn(`package`, `test`) describedAs PackageDistDescription
-
-    val cleanDist = cleanTask("dist" ##) describedAs("Erase any packaged distributions.")
-    override def cleanAction = super.cleanAction dependsOn(cleanDist)
-
-    override def javaCompileOptions = super.javaCompileOptions ++
-      List(JavaCompileOption("-source"), JavaCompileOption("1.5"))
-
-    override def packageAction = super.packageAction dependsOn (testCompileAction)
-
-  }
-
-  class KafkaPerfProject(info: ProjectInfo) extends DefaultProject(info)
-     with IdeaProject
-     with CoreDependencies {
-    val perfPackageAction = packageAllAction
-    val dependsOnCore = core
-
-  // The issue: in moving from log4j 1.2.14 to 1.2.15, the developers added some features which required
-  // dependencies on various sun and javax packages.
-   override def ivyXML =
-    <dependencies>
-      <exclude module="javax"/>
-      <exclude module="jmxri"/>
-      <exclude module="jmxtools"/>
-      <exclude module="mail"/>
-      <exclude module="jms"/>
-    </dependencies>
-
-    override def artifactID = "kafka-perf"
-    override def filterScalaJars = false
-    override def javaCompileOptions = super.javaCompileOptions ++
-      List(JavaCompileOption("-Xlint:unchecked"))
-  }
-
-  class KafkaExamplesProject(info: ProjectInfo) extends DefaultProject(info)
-     with IdeaProject
-     with CoreDependencies {
-    val examplesPackageAction = packageAllAction
-    val dependsOnCore = core
-  // The issue: in moving from log4j 1.2.14 to 1.2.15, the developers added some features which required
-  // dependencies on various sun and javax packages.
-   override def ivyXML =
-    <dependencies>
-      <exclude module="javax"/>
-      <exclude module="jmxri"/>
-      <exclude module="jmxtools"/>
-      <exclude module="mail"/>
-      <exclude module="jms"/>
-    </dependencies>
-
-    override def artifactID = "kafka-java-examples"
-    override def filterScalaJars = false
-    override def javaCompileOptions = super.javaCompileOptions ++
-      List(JavaCompileOption("-Xlint:unchecked"))
-  }
-
-  class ContribProject(info: ProjectInfo) extends ParentProject(info) with IdeaProject {
-    lazy val hadoopProducer = project("hadoop-producer", "hadoop producer",
-                                      new HadoopProducerProject(_), core)
-    lazy val hadoopConsumer = project("hadoop-consumer", "hadoop consumer",
-                                      new HadoopConsumerProject(_), core)
-
-    val producerPackageAction = hadoopProducer.producerPackageAction
-    val consumerPackageAction = hadoopConsumer.consumerPackageAction
-
-    class HadoopProducerProject(info: ProjectInfo) extends DefaultProject(info)
-      with IdeaProject
-      with CoreDependencies with HadoopDependencies {
-      val producerPackageAction = packageAllAction
-      override def ivyXML =
-       <dependencies>
-         <exclude module="netty"/>
-           <exclude module="javax"/>
-           <exclude module="jmxri"/>
-           <exclude module="jmxtools"/>
-           <exclude module="mail"/>
-           <exclude module="jms"/>
-         <dependency org="org.apache.hadoop" name="hadoop-core" rev="0.20.2">
-           <exclude module="junit"/>
-         </dependency>
-         <dependency org="org.apache.pig" name="pig" rev="0.8.0">
-           <exclude module="junit"/>
-         </dependency>
-       </dependencies>
-
-    }
-
-    class HadoopConsumerProject(info: ProjectInfo) extends DefaultProject(info)
-      with IdeaProject
-      with CoreDependencies {
-      val consumerPackageAction = packageAllAction
-      override def ivyXML =
-       <dependencies>
-         <exclude module="netty"/>
-           <exclude module="javax"/>
-           <exclude module="jmxri"/>
-           <exclude module="jmxtools"/>
-           <exclude module="mail"/>
-           <exclude module="jms"/>
-           <exclude module=""/>
-         <dependency org="org.apache.hadoop" name="hadoop-core" rev="0.20.2">
-           <exclude module="junit"/>
-         </dependency>
-         <dependency org="org.apache.pig" name="pig" rev="0.8.0">
-           <exclude module="junit"/>
-         </dependency>
-       </dependencies>
-
-      val jodaTime = "joda-time" % "joda-time" % "1.6"
-    }
-  }
-
-  trait TestDependencies {
-    val easymock = "org.easymock" % "easymock" % "3.0" % "test"
-    val junit = "junit" % "junit" % "4.1" % "test"
-    val scalaTest = "org.scalatest" % "scalatest" % "1.2" % "test"
-  }
-
-  trait CoreDependencies {
-    val log4j = "log4j" % "log4j" % "1.2.15"
-    val jopt = "net.sf.jopt-simple" % "jopt-simple" % "3.2"
-  }
-  
-  trait HadoopDependencies {
-    val avro = "org.apache.avro" % "avro" % "1.4.0"
-    val commonsLogging = "commons-logging" % "commons-logging" % "1.0.4"
-    val jacksonCore = "org.codehaus.jackson" % "jackson-core-asl" % "1.5.5"
-    val jacksonMapper = "org.codehaus.jackson" % "jackson-mapper-asl" % "1.5.5"
-    val hadoop = "org.apache.hadoop" % "hadoop-core" % "0.20.2"
-  }
-
-  trait CompressionDependencies {
-    val snappy = "org.xerial.snappy" % "snappy-java" % "1.0.4.1"	
-  }
-
-}
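The build above layers packageDist and releaseZip on top of the stock sbt 0.7 actions. Assuming sbt 0.7's usual camelCase-to-hyphen action naming (so releaseZip would surface as release-zip), a release build would be driven roughly as:

    $ ./sbt update        # resolve managed dependencies
    $ ./sbt package       # compile and jar each sub-project
    $ ./sbt release-zip   # assumed action name for releaseZip: runs tests, then zips jars, libs, config and bin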
diff --git a/trunk/project/plugins/Plugins.scala b/trunk/project/plugins/Plugins.scala
deleted file mode 100644
index 0777d82..0000000
--- a/trunk/project/plugins/Plugins.scala
+++ /dev/null
@@ -1,23 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import sbt._
-
-class Plugins(info: ProjectInfo) extends PluginDefinition(info) {
-  val repo = "GH-pages repo" at "http://mpeltonen.github.com/maven/"
-  val idea = "com.github.mpeltonen" % "sbt-idea-plugin" % "0.1-SNAPSHOT"
-}
diff --git a/trunk/sbt b/trunk/sbt
deleted file mode 100755
index 7d8b41e..0000000
--- a/trunk/sbt
+++ /dev/null
@@ -1 +0,0 @@
-java -Xmx1024M -XX:MaxPermSize=512m -jar `dirname $0`/lib/sbt-launch.jar "$@"
diff --git a/trunk/system_test/broker_failure/README b/trunk/system_test/broker_failure/README
deleted file mode 100644
index e7ff738..0000000
--- a/trunk/system_test/broker_failure/README
+++ /dev/null
@@ -1,72 +0,0 @@
-** Please note that the following commands should be executed
-   after downloading the kafka source code to build all the
-   required binaries:
-   1. <kafka install dir>/ $ ./sbt update
-   2. <kafka install dir>/ $ ./sbt package
-
-   Now you are ready to follow the steps below.
-
-This script performs broker failure tests in an environment with
-Mirrored Source & Target clusters in a single machine:
-
-1. Start a cluster of Kafka source brokers
-2. Start a cluster of Kafka target brokers
-3. Start one or more Mirror Maker to create mirroring between
-   source and target clusters
-4. A producer produces batches of messages to the SOURCE brokers
-   in the background
-5. The Kafka SOURCE, TARGET brokers and Mirror Maker will be
-   terminated in a round-robin fashion, waiting for the consumer
-   to catch up.
-6. Repeat step 5 as many times as specified in the script
-7. An independent ConsoleConsumer in publish/subscribe mode to
-   consume messages from the SOURCE brokers cluster
-8. An independent ConsoleConsumer in publish/subscribe mode to
-   consume messages from the TARGET brokers cluster
-
-Expected results:
-==================
-There should not be any discrepancies when comparing the unique
-message checksums from the source ConsoleConsumer and the 
-target ConsoleConsumer.
-
-Notes:
-==================
-The number of Kafka SOURCE brokers can be increased as follows:
-1. Update the value of $num_kafka_source_server in this script
-2. Make sure that there is a corresponding number of prop files:
-   $base_dir/config/server_source{1..4}.properties
-
-The number of Kafka TARGET brokers can be increased as follows:
-1. Update the value of $num_kafka_target_server in this script
-2. Make sure that there is a corresponding number of prop files:
-   $base_dir/config/server_target{1..3}.properties
-
-Quick Start:
-==================
-In the directory <kafka home>/system_test/broker_failure,
-execute this script as follows:
-  $ bin/run-test.sh -n <num of iterations> -s <servers to bounce>
-
-num of iterations - the number of iterations that the test runs
-
-servers to bounce - the servers to be bounced in a round-robin fashion.
-
-    Values to be entered:
-        1 - source broker
-        2 - mirror maker
-        3 - target broker
-
-    Example:
-        * To bounce only mirror maker and target broker
-          in turns, enter the value 23.
-        * To bounce only mirror maker, enter the value 2.
-        * To run the test without bouncing, enter 0.
-
-At the end of the test, the received messages checksums in both
-SOURCE & TARGET will be compared. If all checksums are matched,
-the test is PASSED. Otherwise, the test is FAILED.
-
-In the event of failure, by default the brokers and zookeepers
-remain running to make it easier to debug the issue - hit Ctrl-C
-to shut them down. 
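Besides -n and -s, run-test.sh in this suite also accepts -b (messages per batch), -i (minimum producer sleep in seconds) and -x (maximum producer sleep in seconds), as defined by its getopts block; for example, with illustrative values:

  $ bin/run-test.sh -n 10 -s 12 -b 500 -i 5 -x 5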
diff --git a/trunk/system_test/broker_failure/bin/kafka-run-class.sh b/trunk/system_test/broker_failure/bin/kafka-run-class.sh
deleted file mode 100755
index 05f46b6..0000000
--- a/trunk/system_test/broker_failure/bin/kafka-run-class.sh
+++ /dev/null
@@ -1,67 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-if [ $# -lt 1 ];
-then
-  echo "USAGE: $0 classname [opts]"
-  exit 1
-fi
-
-base_dir=$(dirname $0)/..
-kafka_inst_dir=${base_dir}/../..
-
-for file in $kafka_inst_dir/project/boot/scala-2.8.0/lib/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $kafka_inst_dir/core/target/scala_2.8.0/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $kafka_inst_dir/core/lib/*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $kafka_inst_dir/perf/target/scala_2.8.0/kafka*.jar;
-do
-  CLASSPATH=$CLASSPATH:$file
-done
-
-for file in $kafka_inst_dir/core/lib_managed/scala_2.8.0/compile/*.jar;
-do
-  if [ ${file##*/} != "sbt-launch.jar" ]; then
-    CLASSPATH=$CLASSPATH:$file
-  fi
-done
-if [ -z "$KAFKA_JMX_OPTS" ]; then
-  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false  -Dcom.sun.management.jmxremote.ssl=false "
-fi
-if [ -z "$KAFKA_OPTS" ]; then
-  KAFKA_OPTS="-Xmx512M -server  -Dlog4j.configuration=file:$base_dir/config/log4j.properties"
-fi
-if [  $JMX_PORT ]; then
-  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
-fi
-if [ -z "$JAVA_HOME" ]; then
-  JAVA="java"
-else
-  JAVA="$JAVA_HOME/bin/java"
-fi
-
-$JAVA $KAFKA_OPTS $KAFKA_JMX_OPTS -cp $CLASSPATH $@
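This wrapper expects a fully qualified class name followed by that class's own arguments, and honours JMX_PORT, KAFKA_OPTS and KAFKA_JMX_OPTS from the environment. An illustrative invocation (the JMX port and checker arguments are placeholders):

    $ JMX_PORT=9999 bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
        --group group1 --zkconnect localhost:2181 --topic test01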
diff --git a/trunk/system_test/broker_failure/bin/run-test.sh b/trunk/system_test/broker_failure/bin/run-test.sh
deleted file mode 100755
index 368c052..0000000
--- a/trunk/system_test/broker_failure/bin/run-test.sh
+++ /dev/null
@@ -1,823 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# ===========
-# run-test.sh
-# ===========
- 
-# ====================================
-# Do not change the following
-# (keep this section at the beginning
-# of this script)
-# ====================================
-readonly system_test_root=$(dirname $0)/../..        # path of <kafka install>/system_test
-readonly common_dir=${system_test_root}/common       # common util scripts for system_test
-source   ${common_dir}/util.sh                       # include the util script
-
-readonly base_dir=$(dirname $0)/..                   # the base dir of this test suite
-readonly test_start_time="$(date +%s)"               # time starting this test
-readonly bounce_source_id=1
-readonly bounce_mir_mkr_id=2
-readonly bounce_target_id=3
-readonly log4j_prop_file=$base_dir/config/log4j.properties
-
-iter=1                                               # init a counter to keep track of iterations
-num_iterations=5                                     # total no. of iterations to run
-svr_to_bounce=0                                      # servers to bounce: 1-source 2-mirror_maker 3-target
-                                                     # 12 - source & mirror_maker
-                                                     # 13 - source & target
-
-# ====================================
-# No need to change the following
-# configurations in most cases
-# ====================================
-readonly zk_source_port=2181                         # source zk port
-readonly zk_target_port=2182                         # target zk port
-readonly test_topic=test01                           # topic used in this test
-readonly consumer_grp=group1                         # consumer group
-readonly source_console_consumer_grp=source
-readonly target_console_consumer_grp=target
-readonly message_size=5000
-readonly console_consumer_timeout_ms=15000
-readonly num_kafka_source_server=4                   # requires same no. of property files such as: 
-                                                     # $base_dir/config/server_source{1..4}.properties
-readonly num_kafka_target_server=3                   # requires same no. of property files such as: 
-                                                     # $base_dir/config/server_target{1..3}.properties
-readonly num_kafka_mirror_maker=3                    # any values greater than 0
-readonly wait_time_after_killing_broker=0            # wait after broker is stopped but before starting again
-readonly wait_time_after_restarting_broker=10
-
-# ====================================
-# Change the following as needed
-# ====================================
-num_msg_per_batch=500                                # no. of msg produced in each calling of ProducerPerformance
-producer_sleep_min=5                                 # min & max sleep time (in sec) between each 
-producer_sleep_max=5                                 # batch of messages sent from producer 
-
-# ====================================
-# zookeeper
-# ====================================
-pid_zk_source=
-pid_zk_target=
-zk_log4j_log=
-
-# ====================================
-# kafka source
-# ====================================
-kafka_source_pids=
-kafka_source_prop_files=
-kafka_source_log_files=
-kafka_topic_creation_log_file=$base_dir/kafka_topic_creation.log
-kafka_log4j_log=
-
-# ====================================
-# kafka target
-# ====================================
-kafka_target_pids=
-kafka_target_prop_files=
-kafka_target_log_files=
-
-# ====================================
-# mirror maker
-# ====================================
-kafka_mirror_maker_pids=
-kafka_mirror_maker_log_files=
-consumer_prop_file=$base_dir/config/whitelisttest.consumer.properties
-mirror_producer_prop_files=
-
-# ====================================
-# console consumer source
-# ====================================
-console_consumer_source_pid=
-console_consumer_source_log=$base_dir/console_consumer_source.log
-console_consumer_source_crc_log=$base_dir/console_consumer_source_crc.log
-console_consumer_source_crc_sorted_log=$base_dir/console_consumer_source_crc_sorted.log
-console_consumer_source_crc_sorted_uniq_log=$base_dir/console_consumer_source_crc_sorted_uniq.log
-
-# ====================================
-# console consumer target
-# ====================================
-console_consumer_target_pid=
-console_consumer_target_log=$base_dir/console_consumer_target.log
-console_consumer_target_crc_log=$base_dir/console_consumer_target_crc.log
-console_consumer_target_crc_sorted_log=$base_dir/console_consumer_target_crc_sorted.log
-console_consumer_target_crc_sorted_uniq_log=$base_dir/console_consumer_target_crc_sorted_uniq.log
-
-# ====================================
-# producer
-# ====================================
-background_producer_pid=
-producer_performance_log=$base_dir/producer_performance.log
-producer_performance_crc_log=$base_dir/producer_performance_crc.log
-producer_performance_crc_sorted_log=$base_dir/producer_performance_crc_sorted.log
-producer_performance_crc_sorted_uniq_log=$base_dir/producer_performance_crc_sorted_uniq.log
-tmp_file_to_stop_background_producer=/tmp/tmp_file_to_stop_background_producer
-
-# ====================================
-# test reports
-# ====================================
-checksum_diff_log=$base_dir/checksum_diff.log
-
-
-# ====================================
-# initialize prop and log files
-# ====================================
-initialize() {
-    for ((i=1; i<=$num_kafka_target_server; i++))
-    do
-        kafka_target_prop_files[${i}]=$base_dir/config/server_target${i}.properties
-        kafka_target_log_files[${i}]=$base_dir/kafka_target${i}.log
-        kafka_mirror_maker_log_files[${i}]=$base_dir/kafka_mirror_maker${i}.log
-    done
-
-    for ((i=1; i<=$num_kafka_source_server; i++))
-    do
-        kafka_source_prop_files[${i}]=$base_dir/config/server_source${i}.properties
-        kafka_source_log_files[${i}]=$base_dir/kafka_source${i}.log
-    done
-
-    for ((i=1; i<=$num_kafka_mirror_maker; i++))
-    do
-        mirror_producer_prop_files[${i}]=$base_dir/config/mirror_producer${i}.properties
-    done
-
-    zk_log4j_log=`grep "log4j.appender.zookeeperAppender.File=" $log4j_prop_file | awk -F '=' '{print $2}'`
-    kafka_log4j_log=`grep "log4j.appender.kafkaAppender.File=" $log4j_prop_file | awk -F '=' '{print $2}'`
-}
-
-# =========================================
-# cleanup
-# =========================================
-cleanup() {
-    info "cleaning up"
-
-    rm -rf $tmp_file_to_stop_background_producer
-    rm -rf $kafka_topic_creation_log_file
-
-    rm -rf /tmp/zookeeper_source
-    rm -rf /tmp/zookeeper_target
-
-    rm -rf /tmp/kafka-source{1..4}-logs
-    rm -rf /tmp/kafka-target{1..3}-logs
-
-    rm -rf $zk_log4j_log
-    rm -rf $kafka_log4j_log
-
-    for ((i=1; i<=$num_kafka_target_server; i++))
-    do
-        rm -rf ${kafka_target_log_files[${i}]}
-        rm -rf ${kafka_mirror_maker_log_files[${i}]}
-    done
-
-    rm -f $base_dir/zookeeper_source.log
-    rm -f $base_dir/zookeeper_target.log
-    rm -f $base_dir/kafka_source{1..4}.log
-
-    rm -f $producer_performance_log
-    rm -f $producer_performance_crc_log
-    rm -f $producer_performance_crc_sorted_log
-    rm -f $producer_performance_crc_sorted_uniq_log
-
-    rm -f $console_consumer_target_log
-    rm -f $console_consumer_source_log
-    rm -f $console_consumer_target_crc_log
-    rm -f $console_consumer_source_crc_log
-
-    rm -f $checksum_diff_log
-
-    rm -f $console_consumer_target_crc_sorted_log
-    rm -f $console_consumer_source_crc_sorted_log
-    rm -f $console_consumer_target_crc_sorted_uniq_log
-    rm -f $console_consumer_source_crc_sorted_uniq_log
-}
-
-# =========================================
-# wait_for_zero_consumer_lags
-# =========================================
-wait_for_zero_consumer_lags() {
-
-    this_group_name=$1
-    this_zk_port=$2
-
-    # no of times to check for zero lagging
-    no_of_zero_to_verify=3
-
-    while [ 'x' == 'x' ]
-    do
-        TOTAL_LAG=0
-        CONSUMER_LAGS=`$base_dir/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
-                       --group $this_group_name \
-                       --zkconnect localhost:$this_zk_port \
-                       --topic $test_topic \
-                       | grep "Consumer lag" | tr -d ' ' | cut -f2 -d '='`
-
-        for lag in $CONSUMER_LAGS;
-        do
-            TOTAL_LAG=$(($TOTAL_LAG + $lag))
-        done
-
-        info "$this_group_name console consumer TOTAL_LAG = $TOTAL_LAG"
-        if [ $TOTAL_LAG -eq 0 ]; then
-            if [ $no_of_zero_to_verify -eq 0 ]; then
-                echo
-                return 0
-            fi
-            no_of_zero_to_verify=$(($no_of_zero_to_verify - 1))
-        fi
-        sleep 1
-    done
-}
-
-# =========================================
-# create_topic
-# =========================================
-create_topic() {
-    this_topic_to_create=$1
-    this_zk_conn_str=$2
-    this_replica_factor=$3
-
-    info "creating topic [$this_topic_to_create] on [$this_zk_conn_str]"
-    $base_dir/../../bin/kafka-create-topic.sh \
-        --topic $this_topic_to_create \
-        --zookeeper $this_zk_conn_str \
-        --replica $this_replica_factor \
-        2> $kafka_topic_creation_log_file 
-}
-
-# =========================================
-# start_zk
-# =========================================
-start_zk() {
-    info "starting zookeepers"
-
-    $base_dir/../../bin/zookeeper-server-start.sh \
-        $base_dir/config/zookeeper_source.properties \
-        2>&1 > $base_dir/zookeeper_source.log &
-    pid_zk_source=$!
-
-    $base_dir/../../bin/zookeeper-server-start.sh \
-        $base_dir/config/zookeeper_target.properties \
-        2>&1 > $base_dir/zookeeper_target.log &
-    pid_zk_target=$!
-}
-
-# =========================================
-# start_source_servers_cluster
-# =========================================
-start_source_servers_cluster() {
-    info "starting source cluster"
-
-    for ((i=1; i<=$num_kafka_source_server; i++)) 
-    do
-        start_source_server $i
-    done
-}
-
-# =========================================
-# start_source_server
-# =========================================
-start_source_server() {
-    s_idx=$1
-
-    $base_dir/bin/kafka-run-class.sh kafka.Kafka \
-        ${kafka_source_prop_files[$s_idx]} \
-        2>&1 >> ${kafka_source_log_files[$s_idx]} &
-    kafka_source_pids[${s_idx}]=$!
-
-    info "  -> kafka_source_pids[$s_idx]: ${kafka_source_pids[$s_idx]}"
-}
-
-# =========================================
-# start_target_servers_cluster
-# =========================================
-start_target_servers_cluster() {
-    info "starting mirror cluster"
-
-    for ((i=1; i<=$num_kafka_target_server; i++))
-    do
-        start_target_server $i
-    done
-}
-
-# =========================================
-# start_target_server
-# =========================================
-start_target_server() {
-    s_idx=$1
-
-    $base_dir/bin/kafka-run-class.sh kafka.Kafka \
-        ${kafka_target_prop_files[${s_idx}]} \
-        2>&1 >> ${kafka_target_log_files[${s_idx}]} &
-    kafka_target_pids[$s_idx]=$!
-
-    info "  -> kafka_target_pids[$s_idx]: ${kafka_target_pids[$s_idx]}"
-}
-
-# =========================================
-# start_target_mirror_maker
-# =========================================
-start_target_mirror_maker() {
-    info "starting mirror maker"
-
-    for ((i=1; i<=$num_kafka_mirror_maker; i++))
-    do
-        start_mirror_maker $i
-    done
-}
-
-# =========================================
-# start_mirror_maker
-# =========================================
-start_mirror_maker() {
-    s_idx=$1
-
-    $base_dir/bin/kafka-run-class.sh kafka.tools.MirrorMaker \
-        --consumer.config $consumer_prop_file \
-        --producer.config ${mirror_producer_prop_files[${s_idx}]} \
-        --whitelist=\".*\" \
-        2>&1 >> ${kafka_mirror_maker_log_files[$s_idx]} &
-    kafka_mirror_maker_pids[${s_idx}]=$!
-
-    info "  -> kafka_mirror_maker_pids[$s_idx]: ${kafka_mirror_maker_pids[$s_idx]}"
-}
-
-# =========================================
-# start_console_consumer
-# =========================================
-start_console_consumer() {
-
-    this_consumer_grp=$1
-    this_consumer_zk_port=$2
-    this_consumer_log=$3
-
-    info "starting console consumers for $this_consumer_grp"
-
-    $base_dir/bin/kafka-run-class.sh kafka.consumer.ConsoleConsumer \
-        --zookeeper localhost:$this_consumer_zk_port \
-        --topic $test_topic \
-        --group $this_consumer_grp \
-        --from-beginning \
-        --consumer-timeout-ms $console_consumer_timeout_ms \
-        --formatter "kafka.consumer.ConsoleConsumer\$ChecksumMessageFormatter" \
-        2>&1 > ${this_consumer_log} &
-    console_consumer_pid=$!
-
-    info "  -> console consumer pid: $console_consumer_pid"
-}
-
-# =========================================
-# force_shutdown_background_producer
-# - to be called when user press Ctrl-C
-# =========================================
-force_shutdown_background_producer() {
-    info "force shutting down producer"
-    `ps auxw | grep "run\-test\|ProducerPerformance" | grep -v grep | awk '{print $2}' | xargs kill -9`
-}
-
-# =========================================
-# force_shutdown_consumer
-# - to be called when user press Ctrl-C
-# =========================================
-force_shutdown_consumer() {
-    info "force shutting down consumer"
-    `ps auxw | grep ChecksumMessageFormatter | grep -v grep | awk '{print $2}' | xargs kill -9`
-}
-
-# =========================================
-# shutdown_servers
-# =========================================
-shutdown_servers() {
-
-    info "shutting down mirror makers"
-    for ((i=1; i<=$num_kafka_mirror_maker; i++))
-    do
-        #info "stopping mm pid: ${kafka_mirror_maker_pids[$i]}"
-        if [ "x${kafka_mirror_maker_pids[$i]}" != "x" ]; then
-            kill_child_processes 0 ${kafka_mirror_maker_pids[$i]};
-        fi
-    done
-
-    info "shutting down target servers"
-    for ((i=1; i<=$num_kafka_target_server; i++))
-    do
-        if [ "x${kafka_target_pids[$i]}" != "x" ]; then
-            kill_child_processes 0 ${kafka_target_pids[$i]};
-        fi
-    done
-
-    info "shutting down source servers"
-    for ((i=1; i<=$num_kafka_source_server; i++))
-    do
-        if [ "x${kafka_source_pids[$i]}" != "x" ]; then
-            kill_child_processes 0 ${kafka_source_pids[$i]};
-        fi
-    done
-
-    info "shutting down zookeeper servers"
-    if [ "x${pid_zk_target}" != "x" ]; then kill_child_processes 0 ${pid_zk_target}; fi
-    if [ "x${pid_zk_source}" != "x" ]; then kill_child_processes 0 ${pid_zk_source}; fi
-}
-
-# =========================================
-# start_background_producer
-# =========================================
-start_background_producer() {
-
-    topic=$1
-
-    batch_no=0
-
-    while [ ! -e $tmp_file_to_stop_background_producer ]
-    do
-        sleeptime=
-
-        get_random_range $producer_sleep_min $producer_sleep_max
-        sleeptime=$?
-
-        batch_no=$(($batch_no + 1))
-
-        info "producing $num_msg_per_batch messages on topic '$topic'"
-        $base_dir/bin/kafka-run-class.sh \
-            kafka.perf.ProducerPerformance \
-            --brokerinfo zk.connect=localhost:2181 \
-            --topic $topic \
-            --messages $num_msg_per_batch \
-            --message-size $message_size \
-            --batch-size 50 \
-            --vary-message-size \
-            --threads 1 \
-            --reporting-interval $num_msg_per_batch \
-            --async \
-            2>&1 >> $base_dir/producer_performance.log    # appending all producers' msgs
-
-        sleep $sleeptime
-    done
-}
-
-# =========================================
-# cmp_checksum
-# =========================================
-cmp_checksum() {
-
-    cmp_result=0
-
-    grep ^checksum $console_consumer_source_log | tr -d ' ' | cut -f2 -d ':' > $console_consumer_source_crc_log
-    grep ^checksum $console_consumer_target_log | tr -d ' ' | cut -f2 -d ':' > $console_consumer_target_crc_log
-    grep checksum $producer_performance_log | tr ' ' '\n' | grep checksum | awk -F ':' '{print $2}' > $producer_performance_crc_log
-
-    sort $console_consumer_target_crc_log > $console_consumer_target_crc_sorted_log
-    sort $console_consumer_source_crc_log > $console_consumer_source_crc_sorted_log
-    sort $producer_performance_crc_log > $producer_performance_crc_sorted_log
-
-    sort -u $console_consumer_target_crc_log > $console_consumer_target_crc_sorted_uniq_log
-    sort -u $console_consumer_source_crc_log > $console_consumer_source_crc_sorted_uniq_log
-    sort -u $producer_performance_crc_log > $producer_performance_crc_sorted_uniq_log
-
-    msg_count_from_source_consumer=`cat $console_consumer_source_crc_log | wc -l | tr -d ' '`
-    uniq_msg_count_from_source_consumer=`cat $console_consumer_source_crc_sorted_uniq_log | wc -l | tr -d ' '`
-
-    msg_count_from_mirror_consumer=`cat $console_consumer_target_crc_log | wc -l | tr -d ' '`
-    uniq_msg_count_from_mirror_consumer=`cat $console_consumer_target_crc_sorted_uniq_log | wc -l | tr -d ' '`
-
-    uniq_msg_count_from_producer=`cat $producer_performance_crc_sorted_uniq_log | wc -l | tr -d ' '`
-
-    total_msg_published=`cat $producer_performance_crc_log | wc -l | tr -d ' '`
-
-    duplicate_msg_in_producer=$(( $total_msg_published - $uniq_msg_count_from_producer ))
-
-    crc_only_in_mirror_consumer=`comm -23 $console_consumer_target_crc_sorted_uniq_log $console_consumer_source_crc_sorted_uniq_log`
-    crc_only_in_source_consumer=`comm -13 $console_consumer_target_crc_sorted_uniq_log $console_consumer_source_crc_sorted_uniq_log`
-    crc_common_in_both_consumer=`comm -12 $console_consumer_target_crc_sorted_uniq_log $console_consumer_source_crc_sorted_uniq_log`
-
-    crc_only_in_producer=`comm -23 $producer_performance_crc_sorted_uniq_log $console_consumer_source_crc_sorted_uniq_log`
-
-    duplicate_mirror_crc=`comm -23 $console_consumer_target_crc_sorted_log $console_consumer_target_crc_sorted_uniq_log` 
-    no_of_duplicate_msg=$(( $msg_count_from_mirror_consumer - $uniq_msg_count_from_mirror_consumer \
-                          + $msg_count_from_source_consumer - $uniq_msg_count_from_source_consumer - \
-                          2*$duplicate_msg_in_producer ))
-
-    source_mirror_uniq_msg_diff=$(($uniq_msg_count_from_source_consumer - $uniq_msg_count_from_mirror_consumer))
-
-    echo ""
-    echo "========================================================"
-    echo "no. of messages published            : $total_msg_published"
-    echo "producer unique msg rec'd            : $uniq_msg_count_from_producer"
-    echo "source consumer msg rec'd            : $msg_count_from_source_consumer"
-    echo "source consumer unique msg rec'd     : $uniq_msg_count_from_source_consumer"
-    echo "mirror consumer msg rec'd            : $msg_count_from_mirror_consumer"
-    echo "mirror consumer unique msg rec'd     : $uniq_msg_count_from_mirror_consumer"
-    echo "total source/mirror duplicate msg    : $no_of_duplicate_msg"
-    echo "source/mirror uniq msg count diff    : $source_mirror_uniq_msg_diff"
-    echo "========================================================"
-    echo "(Please refer to $checksum_diff_log for more details)"
-    echo ""
-
-    echo "========================================================" >> $checksum_diff_log
-    echo "crc only in producer"                                     >> $checksum_diff_log 
-    echo "========================================================" >> $checksum_diff_log
-    echo "${crc_only_in_producer}"                                  >> $checksum_diff_log 
-    echo ""                                                         >> $checksum_diff_log
-    echo "========================================================" >> $checksum_diff_log
-    echo "crc only in source consumer"                              >> $checksum_diff_log 
-    echo "========================================================" >> $checksum_diff_log
-    echo "${crc_only_in_source_consumer}"                           >> $checksum_diff_log 
-    echo ""                                                         >> $checksum_diff_log
-    echo "========================================================" >> $checksum_diff_log
-    echo "crc only in mirror consumer"                              >> $checksum_diff_log
-    echo "========================================================" >> $checksum_diff_log
-    echo "${crc_only_in_mirror_consumer}"                           >> $checksum_diff_log   
-    echo ""                                                         >> $checksum_diff_log
-    echo "========================================================" >> $checksum_diff_log
-    echo "duplicate crc in mirror consumer"                         >> $checksum_diff_log
-    echo "========================================================" >> $checksum_diff_log
-    echo "${duplicate_mirror_crc}"                                  >> $checksum_diff_log
-
-    echo "================="
-    if [[ $source_mirror_uniq_msg_diff -eq 0 && $uniq_msg_count_from_source_consumer -gt 0 ]]; then
-        echo "## Test PASSED"
-    else
-        echo "## Test FAILED"
-    fi
-    echo "================="
-    echo
-
-    return $cmp_result
-}
-
-# =========================================
-# start_test
-# =========================================
-start_test() {
-
-    echo
-    info "==========================================================="
-    info "#### Starting Kafka Broker / Mirror Maker Failure Test #### (v1.0)"
-    info "==========================================================="
-    echo
-
-    start_zk
-    sleep 2
-
-    start_source_servers_cluster
-    sleep 2
-
-#    create_topic $test_topic localhost:$zk_source_port 1
-#    sleep 2
-
-    start_target_servers_cluster
-    sleep 2
-
-    start_target_mirror_maker
-    sleep 2
-
-    start_background_producer $test_topic &
-    background_producer_pid=$!
-
-    info "Started background producer pid [${background_producer_pid}]"
-    sleep 5
-   
-    # loop for no. of iterations specified in $num_iterations 
-    while [ $num_iterations -ge $iter ]
-    do
-        # if $svr_to_bounce is '0', it means no bouncing
-        if [[ $num_iterations -ge $iter && $svr_to_bounce -gt 0 ]]; then
-            idx=
-
-            # check which type of broker bouncing is requested: source, mirror_maker or target
-
-            # $svr_to_bounce contains $bounce_target_id - e.g. '3', '123', etc.
-            svr_idx=`expr index $svr_to_bounce $bounce_target_id`
-            if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
-                echo
-                info "=========================================="
-                info "Iteration $iter of ${num_iterations}"
-                info "=========================================="
-
-                # bounce target kafka broker
-                get_random_range 1 $num_kafka_target_server 
-                idx=$?
-
-                if [ "x${kafka_target_pids[$idx]}" != "x" ]; then
-                    echo
-                    info "#### Bouncing Kafka TARGET Broker ####"
-
-                    info "terminating kafka target[$idx] with process id ${kafka_target_pids[$idx]}"
-                    kill_child_processes 0 ${kafka_target_pids[$idx]}
-
-                    info "sleeping for ${wait_time_after_killing_broker}s"
-                    sleep $wait_time_after_killing_broker
-
-                    info "starting kafka target server"
-                    start_target_server $idx
-                fi
-                iter=$(($iter+1))
-                info "sleeping for ${wait_time_after_restarting_broker}s"
-                sleep $wait_time_after_restarting_broker
-             fi
-
-            # $svr_to_bounce contains $bounce_mir_mkr_id - e.g. '2', '123', etc.
-            svr_idx=`expr index $svr_to_bounce $bounce_mir_mkr_id`
-            if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
-                echo
-                info "=========================================="
-                info "Iteration $iter of ${num_iterations}"
-                info "=========================================="
-
-                # bounce mirror maker
-                get_random_range 1 $num_kafka_mirror_maker
-                idx=$?
-
-                if [ "x${kafka_mirror_maker_pids[$idx]}" != "x" ]; then
-                    echo
-                    info "#### Bouncing Kafka Mirror Maker ####"
-
-                    info "terminating kafka mirror maker [$idx] with process id ${kafka_mirror_maker_pids[$idx]}"
-                    kill_child_processes 0 ${kafka_mirror_maker_pids[$idx]}
-
-                    info "sleeping for ${wait_time_after_killing_broker}s"
-                    sleep $wait_time_after_killing_broker
-
-                    info "starting kafka mirror maker"
-                    start_mirror_maker $idx
-                fi
-                iter=$(($iter+1))
-                info "sleeping for ${wait_time_after_restarting_broker}s"
-                sleep $wait_time_after_restarting_broker
-             fi
-
-            # $svr_to_bounce contains $bounce_source_id - e.g. '1', '123', etc.
-            svr_idx=`expr index $svr_to_bounce $bounce_source_id`
-            if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
-                echo
-                info "=========================================="
-                info "Iteration $iter of ${num_iterations}"
-                info "=========================================="
-
-                # bounce source kafka broker
-                get_random_range 1 $num_kafka_source_server 
-                idx=$?
-
-                if [ "x${kafka_source_pids[$idx]}" != "x" ]; then
-                    echo
-                    info "#### Bouncing Kafka SOURCE Broker ####"
-
-                    info "terminating kafka source[$idx] with process id ${kafka_source_pids[$idx]}"
-                    kill_child_processes 0 ${kafka_source_pids[$idx]}
-
-                    info "sleeping for ${wait_time_after_killing_broker}s"
-                    sleep $wait_time_after_killing_broker
-
-                    info "starting kafka source server"
-                    start_source_server $idx
-                fi
-                iter=$(($iter+1))
-                info "sleeping for ${wait_time_after_restarting_broker}s"
-                sleep $wait_time_after_restarting_broker
-             fi
-        else
-            echo
-            info "=========================================="
-            info "Iteration $iter of ${num_iterations}"
-            info "=========================================="
-
-            info "No bouncing performed"
-            iter=$(($iter+1))
-            info "sleeping for ${wait_time_after_restarting_broker}s"
-            sleep $wait_time_after_restarting_broker
-        fi
-    done
-
-    # notify background producer to stop
-    `touch $tmp_file_to_stop_background_producer`
-
-    echo
-    info "Tests completed. Waiting for consumers to catch up "
-
-    # =======================================================
-    # remove the following 'sleep 30' when KAFKA-313 is fixed
-    # =======================================================
-    info "sleeping 30 sec"
-    sleep 30
-}
-
-# =========================================
-# print_usage
-# =========================================
-print_usage() {
-    echo
-    echo "Error : invalid no. of arguments"
-    echo "Usage : $0 -n <no. of iterations> -s <servers to bounce>"
-    echo
-    echo "  num of iterations - the number of iterations that the test runs"
-    echo
-    echo "  servers to bounce - the servers to be bounced in a round-robin fashion"
-    echo "      Values of the servers:"
-    echo "        0 - no bouncing"
-    echo "        1 - source broker"
-    echo "        2 - mirror maker"
-    echo "        3 - target broker"
-    echo "      Example:"
-    echo "        * To bounce only mirror maker and target broker"
-    echo "          in turns, enter the value 23"
-    echo "        * To bounce only mirror maker, enter the value 2"
-    echo "        * To run the test without bouncing, enter 0"
-    echo
-    echo "Usage Example : $0 -n 10 -s 12"
-    echo "  (run 10 iterations and bounce source broker (1) + mirror maker (2) in turn)"
-    echo
-}
-
-
-# =========================================
-#
-#         Main test begins here
-#
-# =========================================
-
-# get command line arguments
-while getopts "hb:i:n:s:x:" opt
-do
-    case $opt in
-      b)
-        num_msg_per_batch=$OPTARG
-        ;;
-      h)
-        print_usage
-        exit
-        ;;
-      i)
-        producer_sleep_min=$OPTARG
-        ;;
-      n)
-        num_iterations=$OPTARG
-        ;;
-      s)
-        svr_to_bounce=$OPTARG
-        ;;
-      x)
-        producer_sleep_max=$OPTARG
-        ;;
-      ?)
-        print_usage
-        exit
-        ;;
-    esac
-done
-
-# initialize and cleanup
-initialize
-cleanup
-sleep 5
-
-# Ctrl-c trap. Catches INT signal
-trap "shutdown_servers; force_shutdown_consumer; force_shutdown_background_producer; cmp_checksum; exit 0" INT
-
-# starting the test
-start_test
-
-# starting consumer to consume data in source
-start_console_consumer $source_console_consumer_grp $zk_source_port $console_consumer_source_log
-
-# starting consumer to consume data in target
-start_console_consumer $target_console_consumer_grp $zk_target_port $console_consumer_target_log
-
-# wait for zero source consumer lags
-wait_for_zero_consumer_lags $source_console_consumer_grp $zk_source_port
-
-# wait for zero target consumer lags
-wait_for_zero_consumer_lags $target_console_consumer_grp $zk_target_port
-
-# =======================================================
-# remove the following 'sleep 30' when KAFKA-313 is fixed
-# =======================================================
-info "sleeping 30 sec"
-sleep 30
-
-shutdown_servers
-
-cmp_checksum
-result=$?
-
-# ===============================================
-# Report the time taken
-# ===============================================
-test_end_time="$(date +%s)"
-total_test_time_sec=$(( $test_end_time - $test_start_time ))
-total_test_time_min=$(( $total_test_time_sec / 60 ))
-info "Total time taken: $total_test_time_min min for $num_iterations iterations"
-echo
-
-exit $result
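The pass/fail verdict above rests on running comm over sorted, de-duplicated checksum lists, as cmp_checksum does. A standalone sketch of that technique (file names are placeholders):

    # lines unique to the first file, e.g. checksums only the producer recorded
    $ comm -23 producer_crc_sorted_uniq.log source_crc_sorted_uniq.log
    # lines unique to the second file, e.g. checksums only the source consumer recorded
    $ comm -13 producer_crc_sorted_uniq.log source_crc_sorted_uniq.log
    # lines common to both files
    $ comm -12 producer_crc_sorted_uniq.log source_crc_sorted_uniq.log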
diff --git a/trunk/system_test/broker_failure/config/log4j.properties b/trunk/system_test/broker_failure/config/log4j.properties
deleted file mode 100644
index 23ece9b..0000000
--- a/trunk/system_test/broker_failure/config/log4j.properties
+++ /dev/null
@@ -1,86 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-log4j.rootLogger=INFO, stdout
-
-# ====================================
-# messages going to kafkaAppender
-# ====================================
-log4j.logger.kafka=DEBUG, kafkaAppender
-log4j.logger.org.I0Itec.zkclient.ZkClient=INFO, kafkaAppender
-log4j.logger.org.apache.zookeeper=INFO, kafkaAppender
-
-# ====================================
-# messages going to zookeeperAppender
-# ====================================
-# (comment out this line to redirect ZK-related messages to kafkaAppender
-#  to allow reading both Kafka and ZK debugging messages in a single file)
-log4j.logger.org.apache.zookeeper=INFO, zookeeperAppender
-
-# ====================================
-# stdout
-# ====================================
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-# ====================================
-# fileAppender
-# ====================================
-log4j.appender.fileAppender=org.apache.log4j.FileAppender
-log4j.appender.fileAppender.File=/tmp/kafka_all_request.log
-log4j.appender.fileAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.fileAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-# ====================================
-# kafkaAppender
-# ====================================
-log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.kafkaAppender.File=/tmp/kafka.log
-log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-log4j.additivity.kafka=true
-
-# ====================================
-# zookeeperAppender
-# ====================================
-log4j.appender.zookeeperAppender=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.zookeeperAppender.File=/tmp/zookeeper.log
-log4j.appender.zookeeperAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.zookeeperAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-log4j.additivity.org.apache.zookeeper=false
-
-# ====================================
-# other available debugging info 
-# ====================================
-#log4j.logger.kafka.server.EmbeddedConsumer$MirroringThread=TRACE
-#log4j.logger.kafka.server.KafkaRequestHandlers=TRACE
-#log4j.logger.kafka.producer.async.AsyncProducer=TRACE
-#log4j.logger.kafka.producer.async.ProducerSendThread=TRACE
-#log4j.logger.kafka.producer.async.DefaultEventHandler=TRACE
-
-log4j.logger.kafka.consumer=DEBUG
-log4j.logger.kafka.tools.VerifyConsumerRebalance=DEBUG
-log4j.logger.kafka.tools.ConsumerOffsetChecker=DEBUG
-
-# to print message checksum from ProducerPerformance
-log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG
-
-# to print socket buffer size validated by Kafka broker
-log4j.logger.kafka.network.Acceptor=DEBUG
-
-# to print socket buffer size validated by SimpleConsumer
-log4j.logger.kafka.consumer.SimpleConsumer=TRACE
-
diff --git a/trunk/system_test/broker_failure/config/mirror_producer.properties b/trunk/system_test/broker_failure/config/mirror_producer.properties
deleted file mode 100644
index 9ea68d0..0000000
--- a/trunk/system_test/broker_failure/config/mirror_producer.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-producer.type=async
-
-# to avoid dropping events if the queue is full, wait indefinitely
-queue.enqueueTimeout.ms=-1
-
diff --git a/trunk/system_test/broker_failure/config/mirror_producer1.properties b/trunk/system_test/broker_failure/config/mirror_producer1.properties
deleted file mode 100644
index 7f37db3..0000000
--- a/trunk/system_test/broker_failure/config/mirror_producer1.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-#broker.list=0:localhost:9081
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-producer.type=async
-
-# to avoid dropping events if the queue is full, wait indefinitely
-queue.enqueueTimeout.ms=-1
-
diff --git a/trunk/system_test/broker_failure/config/mirror_producer2.properties b/trunk/system_test/broker_failure/config/mirror_producer2.properties
deleted file mode 100644
index 047f840..0000000
--- a/trunk/system_test/broker_failure/config/mirror_producer2.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-#broker.list=0:localhost:9082
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-producer.type=async
-
-# to avoid dropping events if the queue is full, wait indefinitely
-queue.enqueueTimeout.ms=-1
-
diff --git a/trunk/system_test/broker_failure/config/mirror_producer3.properties b/trunk/system_test/broker_failure/config/mirror_producer3.properties
deleted file mode 100644
index 5e8b7dc..0000000
--- a/trunk/system_test/broker_failure/config/mirror_producer3.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-#broker.list=0:localhost:9083
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-producer.type=async
-
-# to avoid dropping events if the queue is full, wait indefinitely
-queue.enqueueTimeout.ms=-1
-
diff --git a/trunk/system_test/broker_failure/config/server_source1.properties b/trunk/system_test/broker_failure/config/server_source1.properties
deleted file mode 100644
index 6f7df9e..0000000
--- a/trunk/system_test/broker_failure/config/server_source1.properties
+++ /dev/null
@@ -1,81 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=1
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9091
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-source1-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=10000000
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based topic flusher rate in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# set sendBufferSize
-send.buffer.size=500000
-
-# set receiveBufferSize
-receive.buffer.size=500000
diff --git a/trunk/system_test/broker_failure/config/server_source2.properties b/trunk/system_test/broker_failure/config/server_source2.properties
deleted file mode 100644
index be5bb8b..0000000
--- a/trunk/system_test/broker_failure/config/server_source2.properties
+++ /dev/null
@@ -1,82 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=2
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9092
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-source2-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=10000000
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based topic flusher rate in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# set sendBufferSize
-send.buffer.size=500000
-
-# set receiveBufferSize
-receive.buffer.size=500000
-
diff --git a/trunk/system_test/broker_failure/config/server_source3.properties b/trunk/system_test/broker_failure/config/server_source3.properties
deleted file mode 100644
index 90610ad..0000000
--- a/trunk/system_test/broker_failure/config/server_source3.properties
+++ /dev/null
@@ -1,82 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=3
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9093
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-source3-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=10000000
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based topic flusher rate in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# set sendBufferSize
-send.buffer.size=500000
-
-# set receiveBufferSize
-receive.buffer.size=500000
-
diff --git a/trunk/system_test/broker_failure/config/server_source4.properties b/trunk/system_test/broker_failure/config/server_source4.properties
deleted file mode 100644
index c9f34f8..0000000
--- a/trunk/system_test/broker_failure/config/server_source4.properties
+++ /dev/null
@@ -1,82 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=4
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9094
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-source4-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=10000000
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based topic flusher rate in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# set sendBufferSize
-send.buffer.size=500000
-
-# set receiveBufferSize
-receive.buffer.size=500000
-
diff --git a/trunk/system_test/broker_failure/config/server_target1.properties b/trunk/system_test/broker_failure/config/server_target1.properties
deleted file mode 100644
index 87f84f1..0000000
--- a/trunk/system_test/broker_failure/config/server_target1.properties
+++ /dev/null
@@ -1,85 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=1
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9081
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-target1-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=10000000
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based topic flusher rate in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# topic partition count map
-# topic.partition.count.map=topic1:3, topic2:4
-
-# set sendBufferSize
-send.buffer.size=500000
-
-# set receiveBufferSize
-receive.buffer.size=500000
-
diff --git a/trunk/system_test/broker_failure/config/server_target2.properties b/trunk/system_test/broker_failure/config/server_target2.properties
deleted file mode 100644
index 4401414..0000000
--- a/trunk/system_test/broker_failure/config/server_target2.properties
+++ /dev/null
@@ -1,85 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=2
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9082
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-target2-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=10000000
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based topic flusher rate in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# topic partition count map
-# topic.partition.count.map=topic1:3, topic2:4
-
-# set sendBufferSize
-send.buffer.size=500000
-
-# set receiveBufferSize
-receive.buffer.size=500000
-
diff --git a/trunk/system_test/broker_failure/config/server_target3.properties b/trunk/system_test/broker_failure/config/server_target3.properties
deleted file mode 100644
index eee7c9d..0000000
--- a/trunk/system_test/broker_failure/config/server_target3.properties
+++ /dev/null
@@ -1,85 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=3
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9083
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-target3-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=10000000
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based topic flusher rate in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# topic partition count map
-# topic.partition.count.map=topic1:3, topic2:4
-
-# set sendBufferSize
-send.buffer.size=500000
-
-# set receiveBufferSize
-receive.buffer.size=500000
-
diff --git a/trunk/system_test/broker_failure/config/whitelisttest.consumer.properties b/trunk/system_test/broker_failure/config/whitelisttest.consumer.properties
deleted file mode 100644
index aaa3f7c..0000000
--- a/trunk/system_test/broker_failure/config/whitelisttest.consumer.properties
+++ /dev/null
@@ -1,29 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.consumer.ConsumerConfig for more details
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-#consumer group id
-groupid=group1
-
-mirror.topics.whitelist=test_1,test_2
-autooffset.reset=smallest
diff --git a/trunk/system_test/broker_failure/config/zookeeper_source.properties b/trunk/system_test/broker_failure/config/zookeeper_source.properties
deleted file mode 100644
index 76b02a2..0000000
--- a/trunk/system_test/broker_failure/config/zookeeper_source.properties
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# the directory where the snapshot is stored.
-dataDir=/tmp/zookeeper_source
-# the port at which the clients will connect
-clientPort=2181
diff --git a/trunk/system_test/broker_failure/config/zookeeper_target.properties b/trunk/system_test/broker_failure/config/zookeeper_target.properties
deleted file mode 100644
index 28561d95..0000000
--- a/trunk/system_test/broker_failure/config/zookeeper_target.properties
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# the directory where the snapshot is stored.
-dataDir=/tmp/zookeeper_target
-# the port at which the clients will connect
-clientPort=2182
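Note: taken together, the broker_failure config files above describe two local clusters. A rough sketch of the implied topology (summarized from the files in this diff, for orientation only):

    source cluster : zookeeper on port 2181, brokers 1-4 on ports 9091-9094, logs in /tmp/kafka-source{1..4}-logs
    target cluster : zookeeper on port 2182, brokers 1-3 on ports 9081-9083, logs in /tmp/kafka-target{1..3}-logs
    mirroring      : consumers read the source side (zk.connect=localhost:2181, whitelisttest.consumer.properties),
                     while the mirror_producer*.properties publish to the target side (zk.connect=localhost:2182)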
diff --git a/trunk/system_test/common/util.sh b/trunk/system_test/common/util.sh
deleted file mode 100644
index d2e4ec9..0000000
--- a/trunk/system_test/common/util.sh
+++ /dev/null
@@ -1,168 +0,0 @@
-#!/bin/bash
-
-# =========================================
-# info - print messages with timestamp
-# =========================================
-info() {
-    echo -e "$(date +"%Y-%m-%d %H:%M:%S") $*"
-}
-
-# =========================================
-# info_no_newline - print messages with
-# timestamp without newline
-# =========================================
-info_no_newline() {
-    echo -e -n "$(date +"%Y-%m-%d %H:%M:%S") $*"
-}
-
-# =========================================
-# get_random_range - return a random number
-#     between the lower & upper bounds
-# usage:
-#     get_random_range $lower $upper
-#     random_no=$?
-# =========================================
-get_random_range() {
-    lo=$1
-    up=$2
-    range=$(($up - $lo + 1))
-
-    return $(($(($RANDOM % range)) + $lo))
-}
-
-# =========================================
-# kill_child_processes - terminate a
-# process and its child processes
-# =========================================
-kill_child_processes() {
-    isTopmost=$1
-    curPid=$2
-    childPids=$(ps a -o pid= -o ppid= | grep "${curPid}$" | awk '{print $1;}')
-
-    for childPid in $childPids
-    do
-        kill_child_processes 0 $childPid
-    done
-    if [ $isTopmost -eq 0 ]; then
-        kill -15 $curPid 2> /dev/null
-    fi
-}
-
-# =========================================================================
-# generate_kafka_properties_files -
-# 1. it takes the following arguments and generates server_{1..n}.properties
-#    for the total no. of kafka brokers as specified in "num_server"; the
-#    resulting properties files will be located at: 
-#      <kafka home>/system_test/<test suite>/config
-# 2. the default values in the generated properties files will be copied
-#    from the settings in config/server.properties while the brokerid and
-#    server port will be incremented accordingly
-# 3. to generate properties files with non-default values such as 
-#    "socket.send.buffer=2097152", simply add the property with new value
-#    to the array variable kafka_properties_to_replace as shown below
-# =========================================================================
-generate_kafka_properties_files() {
-
-    test_suite_full_path=$1      # eg. <kafka home>/system_test/single_host_multi_brokers
-    num_server=$2                # total no. of brokers in the cluster
-    brokerid_to_start=$3         # this should be '0' in most cases
-    kafka_port_to_start=$4       # if 9091 is used, the rest would be 9092, 9093, ...
-
-    this_config_dir=${test_suite_full_path}/config
-
-    # info "test suite full path : $test_suite_full_path"
-    # info "broker id to start   : $brokerid_to_start"
-    # info "kafka port to start  : $kafka_port_to_start"
-    # info "num of server        : $num_server"
-    # info "config dir           : $this_config_dir"
-
-    # =============================================
-    # array to keep kafka properties statements
-    # from the file 'server.properties' need
-    # to be changed from their default values
-    # =============================================
-    # kafka_properties_to_replace     # DO NOT uncomment this line !!
-
-    # =============================================
-    # Uncomment the following kafka properties
-    # array element as needed to change the default
-    # values. Other kafka properties can be added
-    # in a similar fashion.
-    # =============================================
-    # kafka_properties_to_replace[1]="socket.send.buffer=2097152"
-    # kafka_properties_to_replace[2]="socket.receive.buffer=2097152"
-    # kafka_properties_to_replace[3]="num.partitions=3"
-    # kafka_properties_to_replace[4]="max.socket.request.bytes=10485760"
-
-    server_properties=`cat ${this_config_dir}/server.properties`
-
-    for ((i=1; i<=$num_server; i++))
-    do
-        # ======================
-        # update misc properties
-        # ======================
-        for ((j=1; j<=${#kafka_properties_to_replace[@]}; j++))
-        do
-            keyword_to_replace=`echo ${kafka_properties_to_replace[${j}]} | awk -F '=' '{print $1}'`
-            string_to_be_replaced=`echo "$server_properties" | grep $keyword_to_replace` 
-            # info "string to be replaced : [$string_to_be_replaced]"
-            # info "string to replace     : [${kafka_properties_to_replace[${j}]}]"
-
-            echo "${server_properties}" | \
-              sed -e "s/${string_to_be_replaced}/${kafka_properties_to_replace[${j}]}/g" \
-              >${this_config_dir}/server_${i}.properties
-
-            server_properties=`cat ${this_config_dir}/server_${i}.properties`
-        done
-
-        # ======================
-        # update brokerid
-        # ======================
-        keyword_to_replace="brokerid="
-        string_to_be_replaced=`echo "$server_properties" | grep $keyword_to_replace`
-        brokerid_idx=$(( $brokerid_to_start + $i - 1 ))
-        string_to_replace="${keyword_to_replace}${brokerid_idx}"
-        # info "string to be replaced : [${string_to_be_replaced}]"
-        # info "string to replace     : [${string_to_replace}]"
-
-        echo "${server_properties}" | \
-          sed -e "s/${string_to_be_replaced}/${string_to_replace}/g" \
-          >${this_config_dir}/server_${i}.properties
-
-        server_properties=`cat ${this_config_dir}/server_${i}.properties`
-
-        # ======================
-        # update kafka_port
-        # ======================
-        keyword_to_replace="port="
-        string_to_be_replaced=`echo "$server_properties" | grep $keyword_to_replace`
-        port_idx=$(( $kafka_port_to_start + $i - 1 ))
-        string_to_replace="${keyword_to_replace}${port_idx}"
-        # info "string to be replaced : [${string_to_be_replaced}]"
-        # info "string to replace     : [${string_to_replace}]"
-
-        echo "${server_properties}" | \
-          sed -e "s/${string_to_be_replaced}/${string_to_replace}/g" \
-          >${this_config_dir}/server_${i}.properties
-
-        server_properties=`cat ${this_config_dir}/server_${i}.properties`
-
-        # ======================
-        # update kafka_log dir
-        # ======================
-        keyword_to_replace="log.dir="
-        string_to_be_replaced=`echo "$server_properties" | grep $keyword_to_replace`
-        string_to_be_replaced=${string_to_be_replaced//\//\\\/}
-        string_to_replace="${keyword_to_replace}\/tmp\/kafka_server_${i}_logs"
-        # info "string to be replaced : [${string_to_be_replaced}]"
-        # info "string to replace     : [${string_to_replace}]"
-
-        echo "${server_properties}" | \
-          sed -e "s/${string_to_be_replaced}/${string_to_replace}/g" \
-          >${this_config_dir}/server_${i}.properties
-
-        server_properties=`cat ${this_config_dir}/server_${i}.properties`
-
-     done
-}
-
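Note: a minimal sketch of how generate_kafka_properties_files above might be invoked from a test suite. The suite path and counts are illustrative, and it assumes a config/server.properties template exists under the suite directory:

    #!/bin/bash
    # hypothetical helper placed in <kafka home>/system_test/<suite>/bin/
    source $(dirname $0)/../../common/util.sh
    suite_dir=$(dirname $0)/..                 # e.g. <kafka home>/system_test/single_host_multi_brokers
    generate_kafka_properties_files "$suite_dir" 3 0 9091
    # expected result: config/server_1.properties .. server_3.properties with
    # brokerid 0,1,2, ports 9091,9092,9093 and log.dir=/tmp/kafka_server_<i>_logs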
diff --git a/trunk/system_test/mirror_maker/README b/trunk/system_test/mirror_maker/README
deleted file mode 100644
index da53c14..0000000
--- a/trunk/system_test/mirror_maker/README
+++ /dev/null
@@ -1,22 +0,0 @@
-This test replicates messages from two source kafka clusters into one target
-kafka cluster using the mirror-maker tool.  At the end, the messages produced
-at the source brokers should match those at the target brokers.
-
-To run this test, do
-bin/run-test.sh
-
-In the event of failure, by default the brokers and zookeepers remain running
-to make it easier to debug the issue - hit Ctrl-C to shut them down. You can
-change this behavior by setting the action_on_fail flag in the script to "exit"
-or "proceed", in which case a snapshot of all the logs and directories is
-placed in the test's base directory.
-
-It is a good idea to run the test in a loop. E.g.:
-
-:>/tmp/mirrormaker_test.log
-for i in {1..10}; do echo "run $i"; ./bin/run-test.sh >> /tmp/mirrormaker_test.log 2>&1; done
-tail -F /tmp/mirrormaker_test.log
-
-grep -ic passed /tmp/mirrormaker_test.log
-grep -ic failed /tmp/mirrormaker_test.log
-
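Note: the action_on_fail flag mentioned above is the readonly variable near the top of bin/run-test.sh (reproduced later in this diff). A sketch of the three values it accepts, based on process_test_result in that script:

    readonly action_on_fail="proceed"   # on failure: snapshot logs/dirs, then continue with the next test
    # readonly action_on_fail="exit"    # on failure: snapshot logs/dirs and exit with the failing status
    # readonly action_on_fail="wait"    # on failure: leave servers running for debugging (hit Ctrl-c to quit)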
diff --git a/trunk/system_test/mirror_maker/bin/expected.out b/trunk/system_test/mirror_maker/bin/expected.out
deleted file mode 100644
index 0a1bbaf..0000000
--- a/trunk/system_test/mirror_maker/bin/expected.out
+++ /dev/null
@@ -1,18 +0,0 @@
-start the servers ...
-start producing messages ...
-wait for consumer to finish consuming ...
-[2011-05-17 14:49:11,605] INFO Creating async producer for broker id = 2 at localhost:9091 (kafka.producer.ProducerPool)
-[2011-05-17 14:49:11,606] INFO Creating async producer for broker id = 1 at localhost:9092 (kafka.producer.ProducerPool)
-[2011-05-17 14:49:11,607] INFO Creating async producer for broker id = 3 at localhost:9090 (kafka.producer.ProducerPool)
-thread 0: 400000 messages sent 3514012.1233 nMsg/sec 3.3453 MBs/sec
-[2011-05-17 14:49:34,382] INFO Closing all async producers (kafka.producer.ProducerPool)
-[2011-05-17 14:49:34,383] INFO Closed AsyncProducer (kafka.producer.async.AsyncProducer)
-[2011-05-17 14:49:34,384] INFO Closed AsyncProducer (kafka.producer.async.AsyncProducer)
-[2011-05-17 14:49:34,385] INFO Closed AsyncProducer (kafka.producer.async.AsyncProducer)
-Total Num Messages: 400000 bytes: 79859641 in 22.93 secs
-Messages/sec: 17444.3960
-MB/sec: 3.3214
-test passed
-stopping the servers
-bin/../../../bin/zookeeper-server-start.sh: line 9: 22584 Terminated              $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.server.quorum.QuorumPeerMain $@
-bin/../../../bin/zookeeper-server-start.sh: line 9: 22585 Terminated              $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.server.quorum.QuorumPeerMain $@
diff --git a/trunk/system_test/mirror_maker/bin/run-test.sh b/trunk/system_test/mirror_maker/bin/run-test.sh
deleted file mode 100644
index e4bbd81..0000000
--- a/trunk/system_test/mirror_maker/bin/run-test.sh
+++ /dev/null
@@ -1,357 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-readonly num_messages=10000
-readonly message_size=100
-readonly action_on_fail="proceed"
-# readonly action_on_fail="wait"
-
-readonly test_start_time="$(date +%s)"
-
-readonly base_dir=$(dirname $0)/..
-
-info() {
-    echo -e "$(date +"%Y-%m-%d %H:%M:%S") $*"
-}
-
-kill_child_processes() {
-    isTopmost=$1
-    curPid=$2
-    childPids=$(ps a -o pid= -o ppid= | grep "${curPid}$" | awk '{print $1;}')
-    for childPid in $childPids
-    do
-        kill_child_processes 0 $childPid
-    done
-    if [ $isTopmost -eq 0 ]; then
-        kill -15 $curPid 2> /dev/null
-    fi
-}
-
-cleanup() {
-    info "cleaning up"
-
-    pid_zk_source1=
-    pid_zk_source2=
-    pid_zk_target=
-    pid_kafka_source_1_1=
-    pid_kafka_source_1_2=
-    pid_kafka_source_2_1=
-    pid_kafka_source_2_2=
-    pid_kafka_target_1_1=
-    pid_kafka_target_1_2=
-    pid_producer=
-    pid_mirrormaker_1=
-    pid_mirrormaker_2=
-
-    rm -rf /tmp/zookeeper*
-
-    rm -rf /tmp/kafka*
-}
-
-begin_timer() {
-    t_begin=$(date +%s)
-}
-
-end_timer() {
-    t_end=$(date +%s)
-}
-
-start_zk() {
-    info "starting zookeepers"
-    $base_dir/../../bin/zookeeper-server-start.sh $base_dir/config/zookeeper_source_1.properties 2>&1 > $base_dir/zookeeper_source-1.log &
-    pid_zk_source1=$!
-    $base_dir/../../bin/zookeeper-server-start.sh $base_dir/config/zookeeper_source_2.properties 2>&1 > $base_dir/zookeeper_source-2.log &
-    pid_zk_source2=$!
-    $base_dir/../../bin/zookeeper-server-start.sh $base_dir/config/zookeeper_target.properties 2>&1 > $base_dir/zookeeper_target.log &
-    pid_zk_target=$!
-}
-
-start_source_servers() {
-    info "starting source cluster"
-
-    JMX_PORT=1111 $base_dir/../../bin/kafka-run-class.sh kafka.Kafka $base_dir/config/server_source_1_1.properties 2>&1 > $base_dir/kafka_source-1-1.log &
-    pid_kafka_source_1_1=$!
-    JMX_PORT=2222 $base_dir/../../bin/kafka-run-class.sh kafka.Kafka $base_dir/config/server_source_1_2.properties 2>&1 > $base_dir/kafka_source-1-2.log &
-    pid_kafka_source_1_2=$!
-    JMX_PORT=3333 $base_dir/../../bin/kafka-run-class.sh kafka.Kafka $base_dir/config/server_source_2_1.properties 2>&1 > $base_dir/kafka_source-2-1.log &
-    pid_kafka_source_2_1=$!
-    JMX_PORT=4444 $base_dir/../../bin/kafka-run-class.sh kafka.Kafka $base_dir/config/server_source_2_2.properties 2>&1 > $base_dir/kafka_source-2-2.log &
-    pid_kafka_source_2_2=$!
-}
-
-start_target_servers() {
-    info "starting mirror cluster"
-    JMX_PORT=5555 $base_dir/../../bin/kafka-run-class.sh kafka.Kafka $base_dir/config/server_target_1_1.properties 2>&1 > $base_dir/kafka_target-1-1.log &
-    pid_kafka_target_1_1=$!
-    JMX_PORT=6666 $base_dir/../../bin/kafka-run-class.sh kafka.Kafka $base_dir/config/server_target_1_2.properties 2>&1 > $base_dir/kafka_target-1-2.log &
-    pid_kafka_target_1_2=$!
-}
-
-shutdown_servers() {
-    info "stopping mirror-maker"
-    if [ "x${pid_mirrormaker_1}" != "x" ]; then kill_child_processes 0 ${pid_mirrormaker_1}; fi
-    # sleep to avoid rebalancing during shutdown
-    sleep 2
-    if [ "x${pid_mirrormaker_2}" != "x" ]; then kill_child_processes 0 ${pid_mirrormaker_2}; fi
-
-    info "stopping producer"
-    if [ "x${pid_producer}" != "x" ]; then kill_child_processes 0 ${pid_producer}; fi
-
-    info "shutting down target servers"
-    if [ "x${pid_kafka_target_1_1}" != "x" ]; then kill_child_processes 0 ${pid_kafka_target_1_1}; fi
-    if [ "x${pid_kafka_target_1_2}" != "x" ]; then kill_child_processes 0 ${pid_kafka_target_1_2}; fi
-    sleep 2
-
-    info "shutting down source servers"
-    if [ "x${pid_kafka_source_1_1}" != "x" ]; then kill_child_processes 0 ${pid_kafka_source_1_1}; fi
-    if [ "x${pid_kafka_source_1_2}" != "x" ]; then kill_child_processes 0 ${pid_kafka_source_1_2}; fi
-    if [ "x${pid_kafka_source_2_1}" != "x" ]; then kill_child_processes 0 ${pid_kafka_source_2_1}; fi
-    if [ "x${pid_kafka_source_2_2}" != "x" ]; then kill_child_processes 0 ${pid_kafka_source_2_2}; fi
-
-    info "shutting down zookeeper servers"
-    if [ "x${pid_zk_target}" != "x" ]; then kill_child_processes 0 ${pid_zk_target}; fi
-    if [ "x${pid_zk_source1}" != "x" ]; then kill_child_processes 0 ${pid_zk_source1}; fi
-    if [ "x${pid_zk_source2}" != "x" ]; then kill_child_processes 0 ${pid_zk_source2}; fi
-}
-
-start_producer() {
-    topic=$1
-    zk=$2
-    info "start producing messages for topic $topic to zookeeper $zk ..."
-    $base_dir/../../bin/kafka-run-class.sh kafka.perf.ProducerPerformance --brokerinfo zk.connect=$zk --topic $topic --messages $num_messages --message-size $message_size --batch-size 200 --vary-message-size --threads 1 --reporting-interval $num_messages --async 2>&1 > $base_dir/producer_performance.log &
-    pid_producer=$!
-}
-
-# Usage: wait_partition_done ([kafka-server] [topic] [partition-id])+
-wait_partition_done() {
-    n_tuples=$(($# / 3))
-
-    i=1
-    while (($#)); do
-        kafka_server[i]=$1
-        topic[i]=$2
-        partitionid[i]=$3
-        prev_offset[i]=0
-        info "\twaiting for partition on server ${kafka_server[i]}, topic ${topic[i]}, partition ${partitionid[i]}"
-        i=$((i+1))
-        shift 3
-    done
-
-    all_done=0
-
-    # set -x
-    while [[ $all_done != 1 ]]; do
-        sleep 4
-        i=$n_tuples
-        all_done=1
-        for ((i=1; i <= $n_tuples; i++)); do
-            cur_size=$($base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server ${kafka_server[i]} --topic ${topic[i]} --partition ${partitionid[i]} --time -1 --offsets 1 | tail -1)
-            if [ "x$cur_size" != "x${prev_offset[i]}" ]; then
-                all_done=0
-                prev_offset[i]=$cur_size
-            fi
-        done
-    done
-
-}
-
-cmp_logs() {
-    topic=$1
-    info "comparing source and target logs for topic $topic"
-    source_part0_size=$($base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9090 --topic $topic --partition 0 --time -1 --offsets 1 | tail -1)
-    source_part1_size=$($base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9091 --topic $topic --partition 0 --time -1 --offsets 1 | tail -1)
-    source_part2_size=$($base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic $topic --partition 0 --time -1 --offsets 1 | tail -1)
-    source_part3_size=$($base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9093 --topic $topic --partition 0 --time -1 --offsets 1 | tail -1)
-    target_part0_size=$($base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9094 --topic $topic --partition 0 --time -1 --offsets 1 | tail -1)
-    target_part1_size=$($base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9095 --topic $topic --partition 0 --time -1 --offsets 1 | tail -1)
-    if [ "x$source_part0_size" == "x" ]; then source_part0_size=0; fi
-    if [ "x$source_part1_size" == "x" ]; then source_part1_size=0; fi
-    if [ "x$source_part2_size" == "x" ]; then source_part2_size=0; fi
-    if [ "x$source_part3_size" == "x" ]; then source_part3_size=0; fi
-    if [ "x$target_part0_size" == "x" ]; then target_part0_size=0; fi
-    if [ "x$target_part1_size" == "x" ]; then target_part1_size=0; fi
-    expected_size=$(($source_part0_size + $source_part1_size + $source_part2_size + $source_part3_size))
-    actual_size=$(($target_part0_size + $target_part1_size))
-    if [ "x$expected_size" != "x$actual_size" ]
-    then
-        info "source size: $expected_size target size: $actual_size"
-        return 1
-    else
-        return 0
-    fi
-}
-
-take_fail_snapshot() {
-    snapshot_dir="$base_dir/failed-${snapshot_prefix}-${test_start_time}"
-    mkdir $snapshot_dir
-    for dir in /tmp/zookeeper_source{1..2} /tmp/zookeeper_target /tmp/kafka-source-{1..2}-{1..2}-logs /tmp/kafka-target{1..2}-logs; do
-        if [ -d $dir ]; then
-            cp -r $dir $snapshot_dir
-        fi
-    done
-}
-
-# Usage: process_test_result <result> <action_on_fail>
-# result: last test result
-# action_on_fail: (exit|wait|proceed)
-# ("wait" is useful if you want to troubleshoot using zookeeper)
-process_test_result() {
-    result=$1
-    if [ $1 -eq 0 ]; then
-        info "test passed"
-    else
-        info "test failed"
-        case "$2" in
-            "wait") info "waiting: hit Ctrl-c to quit"
-                wait
-                ;;
-            "exit") shutdown_servers
-                take_fail_snapshot
-                exit $result
-                ;;
-            *) shutdown_servers
-                take_fail_snapshot
-                info "proceeding"
-                ;;
-        esac
-    fi
-}
-
-test_whitelists() {
-    info "### Testing whitelists"
-    snapshot_prefix="whitelist-test"
-
-    cleanup
-    start_zk
-    start_source_servers
-    start_target_servers
-    sleep 4
-
-    info "starting mirror makers"
-    JMX_PORT=7777 $base_dir/../../bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config $base_dir/config/whitelisttest_1.consumer.properties --consumer.config $base_dir/config/whitelisttest_2.consumer.properties --producer.config $base_dir/config/mirror_producer.properties --whitelist="white.*" --num.streams 2 2>&1 > $base_dir/kafka_mirrormaker_1.log &
-    pid_mirrormaker_1=$!
-    JMX_PORT=8888 $base_dir/../../bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config $base_dir/config/whitelisttest_1.consumer.properties --consumer.config $base_dir/config/whitelisttest_2.consumer.properties --producer.config $base_dir/config/mirror_producer.properties --whitelist="white.*" --num.streams 2 2>&1 > $base_dir/kafka_mirrormaker_2.log &
-    pid_mirrormaker_2=$!
-
-    begin_timer
-
-    start_producer whitetopic01 localhost:2181
-    start_producer whitetopic01 localhost:2182
-    info "waiting for whitetopic01 producers to finish producing ..."
-    wait_partition_done kafka://localhost:9090 whitetopic01 0 kafka://localhost:9091 whitetopic01 0 kafka://localhost:9092 whitetopic01 0 kafka://localhost:9093 whitetopic01 0
-
-    start_producer whitetopic02 localhost:2181
-    start_producer whitetopic03 localhost:2181
-    start_producer whitetopic04 localhost:2182
-    info "waiting for whitetopic02,whitetopic03,whitetopic04 producers to finish producing ..."
-    wait_partition_done kafka://localhost:9090 whitetopic02 0 kafka://localhost:9091 whitetopic02 0 kafka://localhost:9090 whitetopic03 0 kafka://localhost:9091 whitetopic03 0 kafka://localhost:9092 whitetopic04 0 kafka://localhost:9093 whitetopic04 0
-
-    start_producer blacktopic01 localhost:2182
-    info "waiting for blacktopic01 producer to finish producing ..."
-    wait_partition_done kafka://localhost:9092 blacktopic01 0 kafka://localhost:9093 blacktopic01 0
-
-    info "waiting for consumer to finish consuming ..."
-
-    wait_partition_done kafka://localhost:9094 whitetopic01 0 kafka://localhost:9095 whitetopic01 0 kafka://localhost:9094 whitetopic02 0 kafka://localhost:9095 whitetopic02 0 kafka://localhost:9094 whitetopic03 0 kafka://localhost:9095 whitetopic03 0 kafka://localhost:9094 whitetopic04 0 kafka://localhost:9095 whitetopic04 0
-
-    end_timer
-    info "embedded consumer took $((t_end - t_begin)) seconds"
-
-    sleep 2
-
-    # if [[ -d /tmp/kafka-target-1-1-logs/blacktopic01 || /tmp/kafka-target-1-2-logs/blacktopic01 ]]; then
-    #     echo "blacktopic01 found on target cluster"
-    #     result=1
-    # else
-    #     cmp_logs whitetopic01 && cmp_logs whitetopic02 && cmp_logs whitetopic03 && cmp_logs whitetopic04
-    #     result=$?
-    # fi
-
-    cmp_logs blacktopic01
-
-    cmp_logs whitetopic01 && cmp_logs whitetopic02 && cmp_logs whitetopic03 && cmp_logs whitetopic04
-    result=$?
-
-    return $result
-}
-
-test_blacklists() {
-    info "### Testing blacklists"
-    snapshot_prefix="blacklist-test"
-    cleanup
-    start_zk
-    start_source_servers
-    start_target_servers
-    sleep 4
-
-    info "starting mirror maker"
-    $base_dir/../../bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config $base_dir/config/blacklisttest.consumer.properties --producer.config $base_dir/config/mirror_producer.properties --blacklist="black.*" --num.streams 2 2>&1 > $base_dir/kafka_mirrormaker_1.log &
-    pid_mirrormaker_1=$!
-
-    start_producer blacktopic01 localhost:2181
-    start_producer blacktopic02 localhost:2181
-    info "waiting for producer to finish producing blacktopic01,blacktopic02 ..."
-    wait_partition_done kafka://localhost:9090 blacktopic01 0 kafka://localhost:9091 blacktopic01 0 kafka://localhost:9090 blacktopic02 0 kafka://localhost:9091 blacktopic02 0
-
-    begin_timer
-
-    start_producer whitetopic01 localhost:2181
-    info "waiting for producer to finish producing whitetopic01 ..."
-    wait_partition_done kafka://localhost:9090 whitetopic01 0 kafka://localhost:9091 whitetopic01 0
-
-    info "waiting for consumer to finish consuming ..."
-    wait_partition_done kafka://localhost:9094 whitetopic01 0 kafka://localhost:9095 whitetopic01 0
-
-    end_timer
-
-    info "embedded consumer took $((t_end - t_begin)) seconds"
-
-    sleep 2
-
-    cmp_logs blacktopic01 || cmp_logs blacktopic02
-    if [ $? -eq 0 ]; then
-        return 1
-    fi
-    
-    cmp_logs whitetopic01
-    return $?
-}
-
-# main test begins
-
-echo "Test-$test_start_time"
-
-# Ctrl-c trap. Catches INT signal
-trap "shutdown_servers; exit 0" INT
-
-test_whitelists
-result=$?
-
-process_test_result $result $action_on_fail
-
-shutdown_servers
- 
-sleep 2
- 
-test_blacklists
-result=$?
-
-process_test_result $result $action_on_fail
-
-shutdown_servers
-
-exit $result
-
diff --git a/trunk/system_test/mirror_maker/config/blacklisttest.consumer.properties b/trunk/system_test/mirror_maker/config/blacklisttest.consumer.properties
deleted file mode 100644
index 6ea85ec..0000000
--- a/trunk/system_test/mirror_maker/config/blacklisttest.consumer.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.consumer.ConsumerConfig for more details
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-#consumer group id
-groupid=group1
-shallowiterator.enable=true
-
diff --git a/trunk/system_test/mirror_maker/config/mirror_producer.properties b/trunk/system_test/mirror_maker/config/mirror_producer.properties
deleted file mode 100644
index b74c631..0000000
--- a/trunk/system_test/mirror_maker/config/mirror_producer.properties
+++ /dev/null
@@ -1,30 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2183
-# broker.list=1:localhost:9094,2:localhost:9095
-
-# timeout in ms for connecting to zookeeper
-# zk.connectiontimeout.ms=1000000
-
-producer.type=async
-
-# to avoid dropping events if the queue is full, wait indefinitely
-queue.enqueueTimeout.ms=-1
-
-num.producers.per.broker=2
-
diff --git a/trunk/system_test/mirror_maker/config/server_source_1_1.properties b/trunk/system_test/mirror_maker/config/server_source_1_1.properties
deleted file mode 100644
index d89c4fb..0000000
--- a/trunk/system_test/mirror_maker/config/server_source_1_1.properties
+++ /dev/null
@@ -1,76 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=1
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9090
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-source-1-1-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=10000000
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based log flush scheduler interval in ms
-log.default.flush.scheduler.interval.ms=1000
-
diff --git a/trunk/system_test/mirror_maker/config/server_source_1_2.properties b/trunk/system_test/mirror_maker/config/server_source_1_2.properties
deleted file mode 100644
index 063d68b..0000000
--- a/trunk/system_test/mirror_maker/config/server_source_1_2.properties
+++ /dev/null
@@ -1,76 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=2
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9091
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-source-1-2-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=536870912
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based log flush scheduler interval in ms
-log.default.flush.scheduler.interval.ms=1000
-
diff --git a/trunk/system_test/mirror_maker/config/server_source_2_1.properties b/trunk/system_test/mirror_maker/config/server_source_2_1.properties
deleted file mode 100644
index 998b460..0000000
--- a/trunk/system_test/mirror_maker/config/server_source_2_1.properties
+++ /dev/null
@@ -1,76 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=1
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9092
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-source-2-1-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=536870912
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based log flush scheduler interval in ms
-log.default.flush.scheduler.interval.ms=1000
-
diff --git a/trunk/system_test/mirror_maker/config/server_source_2_2.properties b/trunk/system_test/mirror_maker/config/server_source_2_2.properties
deleted file mode 100644
index 81427ae..0000000
--- a/trunk/system_test/mirror_maker/config/server_source_2_2.properties
+++ /dev/null
@@ -1,76 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=2
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9093
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-source-2-2-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=536870912
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based log flush scheduler interval in ms
-log.default.flush.scheduler.interval.ms=1000
-
diff --git a/trunk/system_test/mirror_maker/config/server_target_1_1.properties b/trunk/system_test/mirror_maker/config/server_target_1_1.properties
deleted file mode 100644
index 0265f4e..0000000
--- a/trunk/system_test/mirror_maker/config/server_target_1_1.properties
+++ /dev/null
@@ -1,78 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=1
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9094
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-target-1-1-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=536870912
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2183
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based log flush scheduler interval in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# topic partition count map
-# topic.partition.count.map=topic1:3, topic2:4
diff --git a/trunk/system_test/mirror_maker/config/server_target_1_2.properties b/trunk/system_test/mirror_maker/config/server_target_1_2.properties
deleted file mode 100644
index a31e9ca..0000000
--- a/trunk/system_test/mirror_maker/config/server_target_1_2.properties
+++ /dev/null
@@ -1,78 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=2
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9095
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-target-1-2-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=536870912
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2183
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based log flush scheduler interval in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# topic partition count map
-# topic.partition.count.map=topic1:3, topic2:4
diff --git a/trunk/system_test/mirror_maker/config/whitelisttest_1.consumer.properties b/trunk/system_test/mirror_maker/config/whitelisttest_1.consumer.properties
deleted file mode 100644
index 6ea85ec..0000000
--- a/trunk/system_test/mirror_maker/config/whitelisttest_1.consumer.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.consumer.ConsumerConfig for more details
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-#consumer group id
-groupid=group1
-shallowiterator.enable=true
-
diff --git a/trunk/system_test/mirror_maker/config/whitelisttest_2.consumer.properties b/trunk/system_test/mirror_maker/config/whitelisttest_2.consumer.properties
deleted file mode 100644
index e11112f..0000000
--- a/trunk/system_test/mirror_maker/config/whitelisttest_2.consumer.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.consumer.ConsumerConfig for more details
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2182
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-#consumer group id
-groupid=group1
-shallowiterator.enable=true
-
diff --git a/trunk/system_test/mirror_maker/config/zookeeper_source_1.properties b/trunk/system_test/mirror_maker/config/zookeeper_source_1.properties
deleted file mode 100644
index f851796..0000000
--- a/trunk/system_test/mirror_maker/config/zookeeper_source_1.properties
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# the directory where the snapshot is stored.
-dataDir=/tmp/zookeeper_source-1
-# the port at which the clients will connect
-clientPort=2181
diff --git a/trunk/system_test/mirror_maker/config/zookeeper_source_2.properties b/trunk/system_test/mirror_maker/config/zookeeper_source_2.properties
deleted file mode 100644
index d534d18..0000000
--- a/trunk/system_test/mirror_maker/config/zookeeper_source_2.properties
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# the directory where the snapshot is stored.
-dataDir=/tmp/zookeeper_source-2
-# the port at which the clients will connect
-clientPort=2182
diff --git a/trunk/system_test/mirror_maker/config/zookeeper_target.properties b/trunk/system_test/mirror_maker/config/zookeeper_target.properties
deleted file mode 100644
index 55a7eb1..0000000
--- a/trunk/system_test/mirror_maker/config/zookeeper_target.properties
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# the directory where the snapshot is stored.
-dataDir=/tmp/zookeeper_target
-# the port at which the clients will connect
-clientPort=2183
diff --git a/trunk/system_test/producer_perf/README b/trunk/system_test/producer_perf/README
deleted file mode 100644
index be3bb51..0000000
--- a/trunk/system_test/producer_perf/README
+++ /dev/null
@@ -1,9 +0,0 @@
-This test produces a large number of messages to a broker. It measures the throughput and verifies
-that the amount of data received is as expected.
-
-To run this test, do
-bin/run-test.sh
-
-The expected output is given in expected.out. There are 2 things to pay attention to:
-1. The output should have a line "test passed".
-2. The throughput from the producer should be around 300,000 Messages/sec on a typical machine.
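
As a rough illustration only (not part of the deleted files above), the pass/fail check this README describes could be scripted along these lines; the working directory and the capture file name are assumptions:

    # run the producer perf test and capture its output (hypothetical wrapper)
    cd system_test/producer_perf
    bin/run-test.sh 2>&1 | tee run-test.out
    # the test prints "test passed" on success, so grep for that marker
    if grep -q "test passed" run-test.out; then
        echo "producer perf test passed"
    else
        echo "producer perf test FAILED"
        exit 1
    fi
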
diff --git a/trunk/system_test/producer_perf/bin/expected.out b/trunk/system_test/producer_perf/bin/expected.out
deleted file mode 100644
index 311d9b7..0000000
--- a/trunk/system_test/producer_perf/bin/expected.out
+++ /dev/null
@@ -1,32 +0,0 @@
-start the servers ...
-start producing 2000000 messages ...
-[2011-05-17 14:31:12,568] INFO Creating async producer for broker id = 0 at localhost:9092 (kafka.producer.ProducerPool)
-thread 0: 100000 messages sent 3272786.7779 nMsg/sec 3.1212 MBs/sec
-thread 0: 200000 messages sent 3685956.5057 nMsg/sec 3.5152 MBs/sec
-thread 0: 300000 messages sent 3717472.1190 nMsg/sec 3.5453 MBs/sec
-thread 0: 400000 messages sent 3730647.2673 nMsg/sec 3.5578 MBs/sec
-thread 0: 500000 messages sent 3730647.2673 nMsg/sec 3.5578 MBs/sec
-thread 0: 600000 messages sent 3722315.2801 nMsg/sec 3.5499 MBs/sec
-thread 0: 700000 messages sent 3718854.5928 nMsg/sec 3.5466 MBs/sec
-thread 0: 800000 messages sent 3714020.4271 nMsg/sec 3.5420 MBs/sec
-thread 0: 900000 messages sent 3713330.8578 nMsg/sec 3.5413 MBs/sec
-thread 0: 1000000 messages sent 3710575.1391 nMsg/sec 3.5387 MBs/sec
-thread 0: 1100000 messages sent 3711263.6853 nMsg/sec 3.5393 MBs/sec
-thread 0: 1200000 messages sent 3716090.6726 nMsg/sec 3.5439 MBs/sec
-thread 0: 1300000 messages sent 3709198.8131 nMsg/sec 3.5374 MBs/sec
-thread 0: 1400000 messages sent 3705762.4606 nMsg/sec 3.5341 MBs/sec
-thread 0: 1500000 messages sent 3701647.2330 nMsg/sec 3.5302 MBs/sec
-thread 0: 1600000 messages sent 3696174.4594 nMsg/sec 3.5249 MBs/sec
-thread 0: 1700000 messages sent 3703703.7037 nMsg/sec 3.5321 MBs/sec
-thread 0: 1800000 messages sent 3703017.9596 nMsg/sec 3.5315 MBs/sec
-thread 0: 1900000 messages sent 3700277.5208 nMsg/sec 3.5289 MBs/sec
-thread 0: 2000000 messages sent 3702332.4695 nMsg/sec 3.5308 MBs/sec
-[2011-05-17 14:33:01,102] INFO Closing all async producers (kafka.producer.ProducerPool)
-[2011-05-17 14:33:01,103] INFO Closed AsyncProducer (kafka.producer.async.AsyncProducer)
-Total Num Messages: 2000000 bytes: 400000000 in 108.678 secs
-Messages/sec: 18402.9886
-MB/sec: 3.5101
-wait for data to be persisted
-test passed
-bin/../../../bin/kafka-server-start.sh: line 11: 21110 Terminated              $(dirname $0)/kafka-run-class.sh kafka.Kafka $@
-bin/../../../bin/zookeeper-server-start.sh: line 9: 21109 Terminated              $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.server.quorum.QuorumPeerMain $@
diff --git a/trunk/system_test/producer_perf/bin/run-compression-test.sh b/trunk/system_test/producer_perf/bin/run-compression-test.sh
deleted file mode 100755
index d7c9231..0000000
--- a/trunk/system_test/producer_perf/bin/run-compression-test.sh
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-num_messages=2000000
-message_size=200
-
-base_dir=$(dirname $0)/..
-
-rm -rf /tmp/zookeeper
-rm -rf /tmp/kafka-logs
-
-echo "start the servers ..."
-$base_dir/../../bin/zookeeper-server-start.sh $base_dir/config/zookeeper.properties 2>&1 > $base_dir/zookeeper.log &
-$base_dir/../../bin/kafka-server-start.sh $base_dir/config/server.properties 2>&1 > $base_dir/kafka.log &
-
-sleep 4
-echo "start producing $num_messages messages ..."
-$base_dir/../../bin/kafka-run-class.sh kafka.tools.ProducerPerformance --brokerinfo broker.list=0:localhost:9092 --topic test01 --messages $num_messages --message-size $message_size --batch-size 200 --threads 1 --reporting-interval 100000 --async --compression-codec 1
-
-echo "wait for data to be persisted" 
-cur_offset="-1"
-quit=0
-while [ $quit -eq 0 ]
-do
-  sleep 2
-  target_size=`$base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic test01 --partition 0 --time -1 --offsets 1 | tail -1`
-  if [ $target_size -eq $cur_offset ]
-  then
-    quit=1
-  fi
-  cur_offset=$target_size
-done
-
-sleep 2
-actual_size=`$base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic test01 --partition 0 --time -1 --offsets 1 | tail -1`
-num_batches=`expr $num_messages \/ $message_size`
-expected_size=`expr $num_batches \* 262`
-
-if [ $actual_size != $expected_size ]
-then
-   echo "actual size: $actual_size expected size: $expected_size test failed!!! look at it!!!"
-else
-   echo "test passed"
-fi
-
-ps ax | grep -i 'kafka.kafka' | grep -v grep | awk '{print $1}' | xargs kill -15 > /dev/null
-sleep 2
-ps ax | grep -i 'QuorumPeerMain' | grep -v grep | awk '{print $1}' | xargs kill -15 > /dev/null
-
diff --git a/trunk/system_test/producer_perf/bin/run-test.sh b/trunk/system_test/producer_perf/bin/run-test.sh
deleted file mode 100755
index ad65563..0000000
--- a/trunk/system_test/producer_perf/bin/run-test.sh
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-num_messages=2000000
-message_size=200
-
-base_dir=$(dirname $0)/..
-
-rm -rf /tmp/zookeeper
-rm -rf /tmp/kafka-logs
-
-echo "start the servers ..."
-$base_dir/../../bin/zookeeper-server-start.sh $base_dir/config/zookeeper.properties 2>&1 > $base_dir/zookeeper.log &
-$base_dir/../../bin/kafka-server-start.sh $base_dir/config/server.properties 2>&1 > $base_dir/kafka.log &
-
-sleep 4
-echo "start producing $num_messages messages ..."
-$base_dir/../../bin/kafka-run-class.sh kafka.tools.ProducerPerformance --brokerinfo broker.list=0:localhost:9092 --topic test01 --messages $num_messages --message-size $message_size --batch-size 200 --threads 1 --reporting-interval 100000 --async
-
-echo "wait for data to be persisted" 
-cur_offset="-1"
-quit=0
-while [ $quit -eq 0 ]
-do
-  sleep 2
-  target_size=`$base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic test01 --partition 0 --time -1 --offsets 1 | tail -1`
-  if [ $target_size -eq $cur_offset ]
-  then
-    quit=1
-  fi
-  cur_offset=$target_size
-done
-
-sleep 2
-actual_size=`$base_dir/../../bin/kafka-run-class.sh kafka.tools.GetOffsetShell --server kafka://localhost:9092 --topic test01 --partition 0 --time -1 --offsets 1 | tail -1`
-msg_full_size=`expr $message_size + 10`
-expected_size=`expr $num_messages \* $msg_full_size`
-
-if [ $actual_size != $expected_size ]
-then
-   echo "actual size: $actual_size expected size: $expected_size test failed!!! look at it!!!"
-else
-   echo "test passed"
-fi
-
-ps ax | grep -i 'kafka.kafka' | grep -v grep | awk '{print $1}' | xargs kill -15 > /dev/null
-sleep 2
-ps ax | grep -i 'QuorumPeerMain' | grep -v grep | awk '{print $1}' | xargs kill -15 > /dev/null
-
diff --git a/trunk/system_test/producer_perf/config/server.properties b/trunk/system_test/producer_perf/config/server.properties
deleted file mode 100644
index abd0765..0000000
--- a/trunk/system_test/producer_perf/config/server.properties
+++ /dev/null
@@ -1,78 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-# the id of the broker
-brokerid=0
-
-# hostname of broker. If not set, will pick up from the value returned
-# from getLocalHost.  If there are multiple interfaces getLocalHost
-# may not be what you want.
-# hostname=
-
-# number of logical partitions on this broker
-num.partitions=1
-
-# the port the socket server runs on
-port=9092
-
-# the number of processor threads the socket server uses. Defaults to the number of cores on the machine
-num.threads=8
-
-# the directory in which to store log files
-log.dir=/tmp/kafka-logs
-
-# the send buffer used by the socket server 
-socket.send.buffer=1048576
-
-# the receive buffer used by the socket server
-socket.receive.buffer=1048576
-
-# the maximum size of a log segment
-log.file.size=536870912
-
-# the interval between running cleanup on the logs
-log.cleanup.interval.mins=1
-
-# the minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-#the number of messages to accept without flushing the log to disk
-log.flush.interval=600
-
-#set the following properties to use zookeeper
-
-# enable connecting to zookeeper
-enable.zookeeper=true
-
-# zk connection string
-# comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
-zk.connect=localhost:2181
-
-# timeout in ms for connecting to zookeeper
-zk.connectiontimeout.ms=1000000
-
-# time based topic flush intervals in ms
-#topic.flush.intervals.ms=topic:1000
-
-# default time based flush interval in ms
-log.default.flush.interval.ms=1000
-
-# time based log flush scheduler interval in ms
-log.default.flush.scheduler.interval.ms=1000
-
-# topic partition count map
-# topic.partition.count.map=topic1:3, topic2:4
diff --git a/trunk/system_test/producer_perf/config/zookeeper.properties b/trunk/system_test/producer_perf/config/zookeeper.properties
deleted file mode 100644
index bd3fe84..0000000
--- a/trunk/system_test/producer_perf/config/zookeeper.properties
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-#    http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# the directory where the snapshot is stored.
-dataDir=/tmp/zookeeper
-# the port at which the clients will connect
-clientPort=2181