Avro-1.3.3-rc0 release.

git-svn-id: https://svn.apache.org/repos/asf/avro/tags/release-1.3.3-rc0@952068 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/CHANGES.txt b/CHANGES.txt
index 3e1d596..e5fd377 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,6 @@
 Avro Change Log
 
-Avro 1.3.3 (Unreleased)
+Avro 1.3.3 (7 June 2010)
 
   IMPROVEMENTS
 
@@ -28,6 +28,10 @@
 
     AVRO-504. ruby impl could stand better error messages on schema parsing (jmhodges)
 
+    AVRO-556. Improved poor performance of Reader::readBytes.
+    (Dave Wright via sbanacho)
+
+
   BUG FIXES
     AVRO-461. Skipping primitives in the ruby side (jmhodges)
 
diff --git a/branch-1.3/.gitignore b/branch-1.3/.gitignore
deleted file mode 100644
index 30e1107..0000000
--- a/branch-1.3/.gitignore
+++ /dev/null
@@ -1,5 +0,0 @@
-*.pyc
-.svn
-/build
-test-output
-/dist
diff --git a/branch-1.3/BUILD.txt b/branch-1.3/BUILD.txt
deleted file mode 100644
index db34b68..0000000
--- a/branch-1.3/BUILD.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-Apache Avro Build Instructions
-
-REQUIREMENTS
-
-The following packages must be installed before Avro can be built:
-
- - Java: JDK 1.6 and 1.5 (for Forrest docs)
- - Python: 2.5 or greater, python-setuptools for dist target
- - C: gcc, autoconf, automake, libtool, asciidoc, source-highlight
- - C++: g++, flex, bison, libboost-dev
- - Ruby: ruby 1.8.6 or greater, ruby-dev, gem, rake, echoe, yajl-ruby
- - Apache Ant 1.7
- - Apache Forrest 0.8 (for documentation, requires Java 1.5)
- - md5sum, sha1sum, used by top-level dist target
-
-BUILDING
-
-Once the requirements are installed, build.sh can be used as follows:
-
- './build.sh test' runs tests for all languages
- './build.sh dist' creates all release distribution files in dist/
- './build.sh clean' removes all generated artifacts
diff --git a/branch-1.3/CHANGES.txt b/branch-1.3/CHANGES.txt
deleted file mode 100644
index 3e1d596..0000000
--- a/branch-1.3/CHANGES.txt
+++ /dev/null
@@ -1,1010 +0,0 @@
-Avro Change Log
-
-Avro 1.3.3 (Unreleased)
-
-  IMPROVEMENTS
-
-    AVRO-525. remove unused imports (Esteve Fernandez via hammer)
-
-    AVRO-526. Fall back to pure Python StringIO if cStringIO is not available
-    (Esteve Fernandez via hammer)
-
-    AVRO-560. Python impl should include system errors in every protocol (hammer)
-
-    AVRO-486. DataFile.open for the ruby side (jmhodges)
-
-    AVRO-559. Handle read_union error where the list index of the union branch 
-    to follow exceeds the size of the union schema (hammer)
-
-    AVRO-491. Doing doubles and floats better in the ruby impl. (jmhodges)
-
-    AVRO-450. HTTP IPC for ruby. (jmhodges)
-
-    AVRO-514. Removing unnecessary ruby StringIO calls. (jmhodges)
-
-    AVRO-511. Ruby implementation passes the rpc interop tests.
-
-    AVRO-543. Schema comparison is hella slow on the Ruby side. (jmhodges)
-
-    AVRO-504. ruby impl could stand better error messages on schema parsing (jmhodges)
-
-  BUG FIXES
-    AVRO-461. Skipping primitives in the ruby side (jmhodges)
-
-    AVRO-496. python sample_http_client.py is broken (Jeff Hodges via hammer)
-
-    AVRO-527. Undefined variable "schm" error (Esteve Fernandez via hammer)
-
-    AVRO-548. Python client should handle CLIENT handshake match status
-    correctly. (hammer)
-
-    AVRO-555. Missing license headers in some ruby source
-    files. (jmhodges)
-
-    AVRO-554 Fixing syncing in ruby data file writing. (Grant Rodgers
-    via jmhodges)
-
-    AVRO-562 ruby side had busted client handshaking. (jmhodges)
-
-    AVRO-517. Resolving Decoder fails in some cases. (thiru) 
-
-    AVRO-524. DataFileWriter.appendTo leads to intermittent IOException during write() (thiru)
-
-    AVRO-499. Java: Fix protocol reflection to reject interfaces with
-    multiple methods of the same name.  (cutting)
-
-    AVRO-489. Skipping complex objects in the ruby impl. (jmhodges)
-
-    AVRO-555. Missing license headers in some ruby source
-    files. (jmhodges)
-
-    AVRO-500. ruby side dev packaging (jmhodges)
-
-    AVRO-516. ruby: buffer length should not be little-endian in socket rpc (jmhodges)
- 
-Avro 1.3.2 (31 March 2010)
-
-  IMPROVEMENTS
-
-    AVRO-449. CMake-based build system for Avro/C (Bruce Mitchener via massie)
-
-    AVRO-418. avro.h generates errors when included in C++ code 
-    (Bruce Mitchener via massie)
-
-    AVRO-480. avro_flush() is in the header, but not implemented
-    (Bruce Mitchener via massie)
-
-    AVRO-481. Buildbot warning fixes (Bruce Mitchener via massie)
-
-    AVRO-451. Try to use hashlib in Python implementation and fall
-    back to md5 if we can't find it (Bruce Mitchener via hammer)
-
-    AVRO-423. HTTPTransceiver does not reuse connections
-    (Eric Evans via hammer)
-
-    AVRO-490. Add Ant task to deploy Java artifacts to Maven repo. (cutting)
-
-  BUG FIXES
-
-    AVRO-479. Fix 'sign' target in top-level build.sh to generate md5
-    checksums. (cutting)
-
-    AVRO-487. Fix Java reflect protocols to transmit error messages. (cutting)
-
-Avro 1.3.1 (16 March 2010)
-
-  NEW FEATURES
-
-    AVRO-432. Add @Nullable annotation to Java reflect API. (cutting)
-
-  IMPROVEMENTS
-
-    AVRO-426. Include a ruby gem in distributions.
-    (Ryan King via cutting)
-
-    AVRO-439. Remove unused headers from being checked in configure.in
-    (Bruce Mitchener via massie)
-
-    AVRO-438. Clarify spec.  (Amichai Rothman via cutting)
-
-    AVRO-445. avro_size_data() to pre-calculate the size of an 
-    avro_datum_t in serialized form (Bruce Mitchener via massie)
-
-    AVRO-443. Endianness is determined at configure time rather 
-    than compile time (Bruce Mitchener via massie)
-  
-    AVRO-448. encoding_binary.c doesn't build on big endian platforms
-    (Bruce Mitchener via massie)
- 
-    AVRO-442. sizeof void* and sizeof long detected at configure time
-    (Bruce Mitchener via massie)
-
-    AVRO-444. Fix warnings (Bruce Mitchener via massie)
-
-    AVRO-452. Include cleanup (Bruce Mitchener via massie)
-
-    AVRO-453. More warning cleanup (Bruce Mitchener via massie)
-
-    AVRO-440. config.h output not correctly used (Bruce Mitchener via massie)
-
-    AVRO-460. Performance improvement to write_long() (Bruce Mitchener via massie)
-    AVRO-455. Update Java dependencies. (David Dabbs via cutting)
-
-    AVRO-446. Add a build.sh task that signs and checksums artifacts. (cutting)
-
-    AVRO-454. Change DataFileStream to implement Closeable. (cutting)
-
-  BUG FIXES
-
-    AVRO-424. Fix the specification of the deflate codec.
-    (Scott Carey via cutting)
-
-    AVRO-431. Fix Java's mvn-install Ant target to work in clean build.
-    (cutting)
-
-    AVRO-437. Fix some typos in docs. (Amichai Rothman via cutting)
-
-    AVRO-433. Fix exceptions in Java reflect RPC. (cutting)
-
-Avro 1.3.0 (24 February 2010)
-
-  INCOMPATIBLE CHANGES
-
-    AVRO-185. Java's specific API no longer depends on reflection.
-    This reverses the inheritance of most classes in the specific and
-    reflect packages. (cutting)
-
-    AVRO-201.  Move Python data file code into its own module.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-80.  Java reflect API no longer uses Avro-specific classes
-    for string and array.  Instead now Java strings and arrays or
-    Lists are used. (cutting)
-
-    AVRO-237.  Reflect API now represents any Java Collection as an
-    Avro array.  Also inherited fields are included in records, and
-    inherited methods in protocols.  Finally, Java shorts are
-    supported as integers.  (cutting)
-
-    AVRO-160. Revised data file format and Java API.  Simplified
-    format now permits streaming but no longer supports multiple
-    schemas per file.  Java API for reading is iterator-based.
-
-    AVRO-278. Changed GenericRecord API and implementation to be
-    array-based rather than Map-based. (cutting)
-
-    AVRO-163. Re-organized source tree into separate directories for
-    each language.  (cutting)
-
-    AVRO-344. Complete rewrite of C implementation (massie)
-
-    AVRO-349. Fix C++ build for post-AVRO-163. (sbanacho)
-
-    AVRO-374. Remove and ignore files that are created by autoreconf. (sbanacho)
-
-    AVRO-387. Add IndexedRecord interface, common to both specific and
-    generic records, so that toString() and hashCode() implementations
-    can be shared.  Also fix toString() and hashCode() to not throw
-    NPE for uninitialized records. (cutting)
-
-  NEW FEATURES
-
-    AVRO-151. Validating Avro schema parser for C (massie)
-
-    AVRO-158. Permit appending to a data file from Java.  (cutting)
-
-    AVRO-154. Add 'induce' sub-command to avroj command line tool.
-    (Philip Zeyliger via cutting)
-
-    AVRO-245. Add four new avroj commands:
-      - fromjson Reads JSON records and writes to an Avro data file.
-      - tojson  Dumps an Avro data file as JSON, one record per line.
-      - fragtojson Renders a binary-encoded Avro datum as JSON.
-      - jsontofrag Renders a JSON-encoded Avro datum as binary.
-    (Philip Zeyliger via cutting)
-
-    AVRO-272. Extend RPCContext to include message. 
-    (Philip Zeyliger via cutting)
-
-    AVRO-258. Add GenAvro language tool.  (Todd Lipcon via cutting)
-
-    AVRO-267. Add two new avroj commands: rpcsend and rpcreceive.
-    (Philip Zeyliger via cutting)
-
-    AVRO-271. Add a Java local RPC transceiver. (Philip Zeyliger via cutting)
-
-    AVRO-273, AVRO-275, & AVRO-279. Add Java RPC statistics collection
-    and display. (Philip Zeyliger via cutting)
-
-    AVRO-152. Add support for documentation strings to schemas,
-    protocols, and messages. (Philip Zeyliger via cutting)
-
-    AVRO-274. Make Java's data file sync interval configurable.  (cutting)
-  
-    AVRO-346. Add function to validate a datum against a schema. (massie)
-
-    AVRO-306. Add Ruby implementation. (Jeff Hodges via cutting)
-    
-    AVRO-135. Add compression to data files. (philz)
-
-    AVRO-368. Reserve avro.* in object container files, and 
-    rename existing reserved words. (philz)
-
-    AVRO-380. Avro Container File format change: add block size to block
-    descriptor.  (Scott Carey via philz)
-
-    AVRO-322. Add a working client and server to Python implementation 
-    using HTTP as a transport (hammer)
-
-    AVRO-287. Make RPC interop tests work with new Python implementation
-    (hammer)
-
-    AVRO-136. Add support for building/releasing python eggs (hammer)
-
-    AVRO-414. Add Java support for concatenating and appending data
-    files. (Scott Carey via cutting)
-
-  IMPROVEMENTS
-
-    AVRO-157. Changes from code review comments for C++. (sbanacho)
-
-    AVRO-168. Correct shared library versioning for C implementation (massie)
-
-    AVRO-142. Remove some Java unused fields and imports.  Start
-    running checkstyle on Java test code.  (Philip Zeyliger via cutting)
-
-    AVRO-147. Use configure to create makefile for C++ builds. (sbanacho)
-
-    AVRO-155. Make python avro.io.DataFileReader iterable.
-    (Jeff Hammerbacher via sharad)
-
-    AVRO-148. Add ant target to build C++ project.  (sbanacho)
-
-    AVRO-166. Improve error checking in Java schema parser.
-    (Philip Zeyliger via cutting)
-
-    AVRO-167. Refactor Java SpecificCompiler to simplify testing, and
-    add some tests. (Philip Zeyliger via cutting)
-
-    AVRO-146. Add support for using Eclipse to develop Avro's Java.
-    (Philip Zeyliger via cutting)
-
-    AVRO-149. Add Java command-line executable, "avroj".
-    (Philip Zeyliger via cutting)
-
-    AVRO-175. Split the avro_io interface into two interfaces: avro_reader 
-    and avro_writer (massie)
-
-    AVRO-179. Add units tests for all Avro C primitives (massie)
-
-    AVRO-177. Upgrade Java dependencies to recent versions. (cutting)
-
-    AVRO-180. Enhance code generator script and unit tests. (sbanacho)
-
-    AVRO-186. Full read-path interoperability test (massie)
-
-    AVRO-187. Move top-level source files into separate directories 
-    for easier maintenance (massie)
-
-    AVRO-188. Need to update svn ignores (massie)
-
-    AVRO-190. Use fixed size C++ types for Avro fixed types. (sbanacho)
-
-    AVRO-192. Improved errors for Java schema parsing problems. (cutting)
-
-    AVRO-195. Complex type support for write streams (massie)
-
-    AVRO-197. Add mapping of name to index for records and enums. (sbanacho)
-
-    AVRO-204. Change the way symbolic references are tracked. (sbanacho)
-
-    AVRO-205. APIs for checking schema resolution. (sbanacho)
-
-    AVRO-203. Reformat license in Python sources.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-199. Make Python test schemas more readable.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-216. Formatting cleanups to schema.py.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-202. Add __all__ listing to Python module, to ease import.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-229. Change Java to implement Flushable and Closeable
-    interfaces where appropriate.  (tomwhite via cutting)
-
-    AVRO-231. Tutorial added to C++ docs. (sbanacho)
-
-    AVRO-220. Dynamic schema resolution from writer to reader. (sbanacho)
-
-    AVRO-213. Add Apache RAT to tests, to validate licenses.  (cutting)
-
-    AVRO-233. Elaborate Java tool API. (Philip Zeyliger via cutting)
-
-    AVRO-236. Add protocol support to avroj induce tool.  (cutting)
-
-    AVRO-234. C++ code cleanup. (sbanacho)
-    
-    AVRO-240. In Python, if simplejson is not available, try using
-    2.6's built-in json module. (Jeff Hammerbacher via cutting)
-
-    AVRO-242. In Java, add support for extensible string-valued
-    properties to schemas.  (cutting)
-
-    AVRO-241. In Java, add a union annotation for reflection. (cutting)
-
-    AVRO-249. In reflection, implement Java short as an int whose
-    "java-class" property is set to java.lang.Short. (cutting)
-
-    AVRO-247. In reflection, add Stringable annotation to indicate
-    classes that can be represented by an Avro string.  (cutting)
-    
-    AVRO-246 Java schema parser should take schema from InputStream
-    in addition to file. (thiru)
-
-    AVRO-250. Make reflect's Union annotation applicable to message
-    parameters and return types too.  (cutting)
-
-    AVRO-253. Improve documentation of schema names in specification. (cutting)
-
-    AVRO-257. Remove some dead Java code and un-needed casts.
-    (Kevin Oliver via cutting)
-
-    AVRO-263. Change avroj command line tools to return exit codes.
-    (Todd Lipcon via cutting)
-
-    AVRO-260. Upgrade to Jackson 1.4.0. (cutting)
-
-    AVRO-269. Use java compiler to validate specific compiler's output.
-    (Philip Zeyliger via cutting)
-
-    AVRO-219. Rework Python API.  (Jeff Hammerbacher via cutting)
-
-    AVRO-264. Rework Python RPC.  (Jeff Hammerbacher via cutting)
-
-    AVRO-75. Clarify that missing values with no default values cause
-    errors, and fix Java implementation.  (cutting)
-
-    AVRO-259. Add null schema check in GenericData.Record and
-    GenericData.Array constructors. (Kevin Oliver via cutting)
-
-    AVRO-294. Clarify that bytes and fixed are unsigned, and how their
-    JSON default values are interpreted.  (Jeff Hammerbacher & cutting)
-
-    AVRO-298. Fix Java's DatumReader and DatumWriter APIs to better
-    use generics.  (philz via cutting)
-
-    AVRO-288. Implement schema resolution for Python parameters.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-282. Improve avroj build to better specify dependencies.
-    (philz via cutting)
-
-    AVRO-309. Fix python build, post-AVRO-163. (cutting)
-
-    AVRO-310. Improve top-level build.sh. (cutting)
-
-    AVRO-317. Restore Java data interop tests. (cutting)
-
-    AVRO-320. Rename avroj to be avro-tools.  Also add LICENSE.txt and
-    NOTICE.txt to it, print the NOTICE.txt and version in help, and
-    include the tools jar in distributions.  (cutting)
-
-    AVRO-314. Add mvn-install ant task to publish jar to local Maven
-    repository.  (Aaron Kimball via cutting)
-
-    AVRO-243. Use automake generated Makefile.in. (sbanacho)
-
-    AVRO-198. Fix specification of protocol name, also clarify which
-    properties are required.  (cutting)
-
-    AVRO-336. Check that appropriate schemas are passed to
-    GenericData#Record and #Array.  (cutting)
-
-    AVRO-353. Publish the C API to avro-doc package when 'dist' target run (massie)
-
-    AVRO-359. Add support for encoding/decoding arrays and maps (massie)
-
-    AVRO-360. Standardize on Linux coding style instead of GNU (massie)
-
-    AVRO-362. Add test to ensure Python implementation handles Union schema
-    with two fixed types of different names (hammer)
-
-    AVRO-364. Add support for encoding/decoding records (massie)
-
-    AVRO-367. Complete memory management for the C implementation (massie)
-
-    AVRO-369. Add support for encoding/decoding enum values (massie)
-
-    AVRO-370. Add support for encoding/decoding fixed data (massie)
-
-    AVRO-371. Add support for encoding/decoding unions (massie)
-
-    AVRO-377. Add getters and setters for all Avro datum types (massie)
-
-    AVRO-378. Add example code to the C implementation and update 
-              documentation (massie)
-
-    AVRO-379. Changed record getter/setter API to match other datatypes (massie)
-
-    AVRO-381. Update documentation to talk about reference counting and 
-              memory management (massie)
-
-    AVRO-384. Add schema projection to the C implementation (massie)
-
-    AVRO-388. Using ResolvingDecoder in GenericDatumReader (thiru)
-
-    AVRO-386. Python implementation of compression (philz)
-
-    AVRO-394. Simplify and consolidate all data structures into hash
-    tables (massie)
-
-    AVRO-393.  Add a constructor for Utf8 that accepts byte[].
-    (Jeff Hodges via cutting)
-
-    AVRO-395. Add a cscope Makefile target (Eli Collins via massie)
-
-    AVRO-397. Whitespace change and comment clarification in
-    datafile.py (hammer)
-
-    AVRO-340. Define usage of HTTP as RPC transport in spec.  (cutting)
-
-    AVRO-342. Document that Java's socket and datagram RPC transports
-    are non-standard.  (cutting)
-
-    AVRO-208. Clarify that enum symbols must be unique.  (cutting)
-
-    AVRO-321. Restore Java RPC interop tests. (cutting)
-
-    AVRO-402. Add method for writing avro_schema_t structure to an 
-              avro_writer_t (massie)
-
-    AVRO-398. avro_read_file doesn't detect eof (Eli Collins via massie)
-
-    AVRO-403. Add file object container support to C implementation (massie)
-
-    AVRO-400. Adding warning for unused parameters (Eli Collins via massie)
-
-    AVRO-409. Update contact database example to use a file object 
-              container for C implementation (massie)
-
-    AVRO-420. Add namespace support to C implementation (massie)
-
-    AVRO-261. Allow Schemas to be immutable (thiru)
-
-    AVRO-412. Allow schema validation to be optional (massie)
-
-    AVRO-295. JsonEncoder is not flushed after writing using ReflectDatumWriter (thiru)
-
-    AVRO-416. Produce Java source archive.  (Ryan Rawson via cutting)
-
-    AVRO-417. Produce Java documentation archive.  (Scott Carey via cutting)
-
-    AVRO-428. Improve file read performance by buffering data (massie)
-
-    AVRO-430. Remove subversion directories from Avro C tarball (massie)
-
-  OPTIMIZATIONS
-
-    AVRO-172. More efficient schema processing (massie)
-
-    AVRO-291. Set NODELAY in Java's SocketTransceiver.
-    (Eric Evans via cutting)
-
-    AVRO-315. Performance improvements to BinaryDecoder (thiru)
-
-    AVRO-316. Optimizing inner loop functions of Avro io (thiru)
-
-    AVRO-328. Performance improvements to the validating encoder/decoder for nested records (thiru)
-
-    AVRO-345. Optimization for ResolvingDecoder (thiru)
-
-    AVRO-363. TestSchema had two tests disabled; new test for named schemas 
-    named after primitives. (philz)
-
-    AVRO-354. Performance improvement to BinaryDecoder.readInt() (Kevin Oliver via thiru)
-
-    AVRO-343. Minor fixes to Eclipse config after build re-org (philz)
-
-    AVRO-383. Optimizing ResolvingDecoder for default values (thiru)
-
-    AVRO-411, AVRO-413. Add Ruby data file interop tests. (Jeff Hodges
-    via cutting)
-
-    AVRO-399. Make data file interop tests work with the Python implementation (hammer)
-
-    AVRO-392. Overhaul of Java binary decoder to significantly improve
-    performance.  (Scott Carey via cutting)
-
-  BUG FIXES
- 
-    AVRO-176. Safeguard against bad istreams before reading. (sbanacho)
-
-    AVRO-141.  Fix a NullPointerException in ReflectData#isRecord().
-    (Isabel Drost via cutting)
-
-    AVRO-156.  Fix broken links to Wiki in documentation.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-165.  Fix an equals implementation in TestReflect.
-    (Philip Zeyliger via cutting)
-
-    AVRO-169.  Fix a typo in the spec. (Jeff Hammerbacher via cutting)
-
-    AVRO-189. test-c target fails (massie)
-
-    AVRO-182. Fix Java's generic and specific implementations of
-    equals() and hashCode() to be consistent with compareTo().
-    (cutting)
-
-    AVRO-193. Fix 'ant test-avroj' on Ubuntu 9.10. (cutting)
-
-    AVRO-171. Fix Java's Protocol#toString() to correctly handle
-    forward-references. (cutting)
-
-    AVRO-191. Explicitly include stdint.h for C++. (cutting via sbanacho)
-
-    AVRO-194. C++ varint encoding buffer too small. (sbanacho)
-
-    AVRO-210. Memory leak with recursive schemas when constructed 
-    by hand. (sbanacho)
-
-    AVRO-211. Nested schema does not get parsed in C++. (sbanacho)
-
-    AVRO-222. Fix Python interop tests broken by AVRO-201.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-223. Fix test-avroj on Mac OS X.  (Philip Zeyliger via cutting)
-
-    AVRO-224. Code cleanup: cleaner distinction between public and private 
-    methods (massie)
-
-    AVRO-221. Mangle Java reserved words in generated code to avoid
-    name conflicts.  (Philip Zeyliger via cutting)
-
-    AVRO-225. In generated Java, use dollar-sign, not underscore, to
-    prevent name conflicts. (cutting)
-
-    AVRO-227. Fix a typo in the spec document.  (Todd Lipcon via cutting)
-
-    AVRO-232. Fix C++ build in cygwin. (sbanacho)
-
-    AVRO-238. Fix so that slf4j-simple is only required by tests.  (cutting)
-
-    AVRO-184. Better eclipse configuration support. (thiru)
-
-    AVRO-256. Use fully-qualified class names in generated Java code
-    to eliminate name conflicts. (cutting)
-
-    AVRO-255. Fix Java so that, when parsing schemas, unspecified
-    namespaces are defaulted to nearest enclosing namespace. (cutting)
-
-    AVRO-262. Fix two typos in the spec.  (Jeff Hammerbacher via cutting)
-
-    AVRO-276. Fix GenAvro to specify file encoding as UTF-8.
-    (Philip Zeyliger via cutting)
-
-    AVRO-280. Fix file header schema in specification.  Also fix
-    "forrestdoc" build target to work on clean checkout.
-    (Jeff Hammerbacher & cutting)	 
-
-    AVRO-292. Fix Python skipping of ints and longs.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-289. Fix Python schema resolution.
-    (Jeff Hammerbacher via cutting)
-
-    AVRO-281. Symlink in build.xml does not work well with Cygwin (thiru)
-
-    AVRO-299. Fix Python numeric promotion.  (Jeff Hammerbacher via cutting)
-
-    AVRO-207. Fix Python to detect duplicate enum symbols and add
-    tests for duplicates in unions.  (Jeff Hammerbacher via cutting)
-
-    AVRO-313. Default values for fields of records and array (or map) don't work with ResolvingDecoder (thiru)
-
-    AVRO-47. Use void* for byte sequences. (sbanacho)
-
-    AVRO-337. ant test-java fails in Cygwin due to CRLF v LF problem (thiru)
- 
-    AVRO-347. Add the --unsafe flag to asciidoc in order to include source/header files (massie)
-
-    AVRO-352. Incorrect binary encoding for strings and bytes (massie)
-
-    AVRO-356. RAT fails with "Unknown license" error (massie)
-
-    AVRO-355. io.Perf test harness uses different random number seeds for each run (Kevin Oliver via thiru)
-
-    AVRO-375. Initializing uninitialized encoders fails (thiru)
-
-    AVRO-373. EOF detection broken in JsonDecoder (thiru)
-
-    AVRO-382. Avro hashCode throws a NullPointerException when fields are uninitialized (Michael Armbrust via philz)
-
-    AVRO-385. Initializing uninitialized BlockingBinaryEncoder fails (thiru)
-    
-    AVRO-389. ResolvingDecoder does not resolve enum well (thiru)
-
-    AVRO-390. ResolvingDecoder does not handle default values for records well (thiru)
-
-    AVRO-361. Specific Compiler fails to handle union with two fixed branches (Scott Carey via philz)
-
-    AVRO-350. Fix GenericData.Record#get(String) to return null rather than
-    throw NPE when passed a field name that's not in the record.
-    (Kevin Oliver via cutting)
-
-    AVRO-401. Fix a typo in the specification.  (Tom White via cutting)
-
-    AVRO-408. lang/c/build.sh 'dist' broken (massie)
-
-    AVRO-407. Fix a bug in the Java data file reader. (Scott Carey via cutting)
-
-    AVRO-415. Fix Ruby to work with Ruby 1.8.6.
-
-    AVRO-421. Fix some dist target issues.  (cutting)
-
-    AVRO-422. Build c++ docs in correct location. (sbanacho)
-
-Avro 1.2.0 (14 October 2009)
-
-  INCOMPATIBLE CHANGES
-
-    AVRO-115. Remove RPC's session notion to facilitate the use of
-    stateless transports like UDP and HTTP.  Add a UDP transport.
-    (cutting)
-
-    AVRO-120. Improved package and namespace handling, including:
-
-     * Removed explicit package-name specification from specific and
-       reflect public APIs.  Package names are now determined either
       by namespace declarations or by Java classes, as appropriate.
-
-     * Changed the specific compiler to generate separate java files
-       per class, rather than nested classes.  This permits generated
-       classes to be in packages declared in their schema namespaces.
-
-     * Fix namespace defaulting.  The default namespace is declared in
-       the outermost schema or protocol.  Nested schemas can now
-       declare different namespaces than the default.
-
-     * Names may now be specified with a dotted notation, e.g.,
-       "foo.bar.Baz", to indicate the name "Baz" in namespace
-       "foo.bar".  This permits one to refer to schemas in a namespace
-       other than the default.
-
-  NEW FEATURES
-
-    AVRO-121.  Permit reflect and specific datum readers to read data
-    written with a different version of the schema than is current.
-    (cutting)
-
-    AVRO-129.  Add HTTP-based RPC client and server.  (cutting)
-
-    AVRO-24.  Add a simple bulk-data benchmark.  (cutting)
-
-    AVRO-139. Refactor HTTP servlet to separate, public class. (cutting)
-
-  IMPROVEMENTS
-
-    AVRO-99.  Use Boost framework for C++ unit tests.
-    (Scott Banachowski via cutting)
-
-    AVRO-116.  Make C++ compatible with Boost 1.32.
-    (Scott Banachowski via cutting)
-
-    AVRO-119.  Add Java GenericData.Array#toString() implementation,
-    to facilitate debugging. (cutting)
-
-    AVRO-118.  JSON encoder and decoder now permit one to write
-    multiple instances without flushing or explicitly resetting the
-    codec between each instance.  (Thiruvalluvan M. G. via cutting)
-
-    AVRO-133.  Update version number in specification document and
-    documentation tab automatically from build version. (cutting)
-
-    AVRO-131.  Permit specification of JUnit test output format.
-    (Giridharan Kesavan via cutting)
-
-    AVRO-134.  Update data file format specification to include
-    reserved metadata keys "codec" and "sync".  The only codec
-    currently defined is "null".  (Thiruvalluvan M. G. via cutting)
-
-    AVRO-138.  Add a "unit-test-java" Ant target that runs tests
-    without running checkstyle or javadoc.  (Thiruvalluvan M. G. via
-    cutting)
-
-    AVRO-140.  Add javadoc to public classes with none.  (cutting)
-
-  OPTIMIZATIONS
-
-  BUG FIXES
-
-    AVRO-132.  Fix multi-threading race condition when threads share schema objects.
-    (sbanacho)
-
-    AVRO-113.  Fix endian bug with C++ integer/long varint codec.
-    (Scott Banachowski via cutting)
-
-    AVRO-117.  Fix memory leak in C++ JSON parser.
-    (Scott Banachowski via cutting)
-
-    AVRO-122.  Fix so that, when multiple Ant targets are specified on
-    the command line that depend on ivy, ivy does not fail.  (phunt
-    via cutting)
-
-    AVRO-123.  Fix Java's specific protocol compiler so that
-    parameters and return types are unboxed. (cutting)
-
-    AVRO-125.  Fix sample protocol in specification document to use
-    the correct syntax.  (cutting)
-
-    AVRO-101.  Add Java reflect API test case using nested classes.
-    (Eelco Hillenius via cutting)
-
-    AVRO-124.  Remove Ivy's jar from distributions.  (cutting)
-
-    AVRO-137. Suppress warnings in generated java code. (cutting via sharad)
-
-Avro 1.1.0 (8 September 2009)
-
-  INCOMPATIBLE CHANGES
-
-    AVRO-110. GenericData and ReflectData have been converted to use a
-    singleton pattern.  Calls to static methods on these classes must
-    be replaced with calls on the singleton instance. (cutting)
-
-    AVRO-41. GenericArray's constructor now requires a Schema, so that
-    it may implement Comparable consistently with AVRO-108. (cutting)
-
-    AVRO-41. Several GenericDatumWriter methods (instanceOf(),
-    isRecord(), etc.) have been moved to GenericData, where they can
-    better be shared with comparators.  Applications which subclassed
-    GenericDatumWriter overriding these methods must now instead
-    subclass GenericData and pass their subclass to
-    GenericDatumWriter. (cutting)
-
-    AVRO-41. SpecificRecord's schema() method has been renamed
-    getSchema(), since it now implements the new GenericContainer
-    interface shared with GenericRecord. (cutting)
-
-  NEW FEATURES
-
-    AVRO-50. Implement JSON data codec in Java. (Thiruvalluvan
-    M. G. & cutting)
-
-    AVRO-76. Add Java RPC plugin framework.  (George Porter)
-
-    AVRO-104. Permit null fields in Java reflection.
-    (Eelco Hillenius via cutting)
-
-    AVRO-92. Describe JSON data encoding in specification
-    document. (cutting)
-
-    AVRO-108.  Add Java implementation of binary comparator.
-    (cutting)
-
-    AVRO-41. Java generic and specific data instances now implement
-    Comparable.  The implementation is consistent with the binary
-    comparator added in AVRO-108. (cutting)
-
-    AVRO-109.  Add Java support for controlling sort order via schema
-    annotations.  Record fields now support an "order" attribute whose
-    possible values are "increasing" (the default), "decreasing", and
-    "ignore".  (cutting)
-
-    AVRO-111.  Document sort ordering in the specification. (cutting)
-
-  IMPROVEMENTS
-
-    AVRO-71.  C++: make deserializer more generic.  (Scott Banachowski
-    via cutting)
-
-    AVRO-60. Fix C JSON parser to correctly handle escapes and
-    multi-byte characters.  Add tests.  (Matt Massie via cutting)
-
-    AVRO-54. Re-upgrade to testng 5.9 and re-enable listeners. (cutting)
-
-    AVRO-82. Add checkstyle to java compilation.  (Thiruvalluvan
-    M. G. via cutting)
-
-    AVRO-81. Switch back from TestNG to JUnit. (Konstantin Boudnik via
-    cutting)
-
-    AVRO-84, AVRO-85.  Clarify a few things in the specification
-    document.  (Thiruvalluvan M. G. and cutting)
-
-    AVRO-89. In fields of Java generated classes, use unboxed numeric
-    types.  (cutting)
-
-    AVRO-83. In generated Java code, elide unions with null. (cutting)
-
-    AVRO-98. Fix C++ schema parser to permit JSON attributes in any
-    order and to ignore extra attributes. (Scott Banachowski via cutting)
-
-  OPTIMIZATIONS
-
-  BUG FIXES
-
-    AVRO-78. Fix Java reflect to work on non-public fields. (cutting)
-
-    AVRO-79. Specify format for default fixed values, and implement
-    correctly in Java.  (Thiruvalluvan M. G. via cutting)
-
-    AVRO-87. Fix broken links in javadoc introduced by AVRO-82.  Also
-    change test-java build target to fail on javadoc warnings.
-    (Thiruvalluvan M. G. and cutting)
-
-    AVRO-90. Fix Java's JSON codec to correctly encode unions. (cutting)
-
-    AVRO-95. Fix writing of Java reflect-based unions.  Also extend
-    DataFileWriter to permit adding branches to a union schema while
-    writing.
-
-    AVRO-88. Fix Java's BlockingBinaryEncoder to correctly override
-    writeEnum().  (Ravi Gummadi via cutting)
-
-    AVRO-61. Add Python support for reading blocked data.
-    (Ravi Gummadi via cutting)
-
-    AVRO-97.  Fix various C++ bugs.  (Scott Banachowski via cutting)
-
-    AVRO-100.  In spec, remove warning about blocking being draft. (cutting)
-
-    AVRO-107.  Fix Protocol#equals() and Protocol#hashCode() to
-    consider the protocol's types, and also fix Schema#equals() to not
-    throw ClassCastException when a fixed schema is compared to
-    non-fixed. (cutting)
-
-    AVRO-112.  Turn off C++ debug output.  (Scott Banachowski via cutting)
-
-    AVRO-114.  Fix "cdoc" Ant target to correctly run doxygen.
-    (Matt Massie via cutting)
-
-Avro 1.0.0 -- 9 July 2009
-
-  INCOMPATIBLE CHANGES
-
-    AVRO-1. Record fields are now defined with JSON arrays, rather
-    than JSON objects, since fields are ordered.  (cutting & sharad)
-
-    AVRO-9. Restrict map keys to strings.  (cutting & sharad)
-
-    AVRO-2. Optimized RPC handshake protocol for Java.  (cutting)
-
-    AVRO-57. Make ValueWriter an abstract class named Encoder and make
-    ValueReader an abstract class named Decoder, and add concrete
-    implementations named BinaryEncoder and BinaryDecoder. (cutting)
-
-    AVRO-46. Optimized RPC handshake protocol for Python.  (sharad)
-
-    AVRO-66.  Add per-call RPC metadata to Java and Python. (George
-    Porter & cutting)
-
-  NEW FEATURES
-
-    AVRO-6. Permit easier implementation of alternate generic data
-    representations, especially records with integer-indexed fields.
-    (Hong Tang via cutting)
-
-    AVRO-8. Add Java support for default values. (cutting)
-
-    AVRO-33.  C support for primitive types.  (Matt Massie via cutting)
-
-    AVRO-18.  Add support for enum types.  (cutting & sharad)
-
-    AVRO-10.  Add Java support for fixed-sized types. (cutting)
-
-    AVRO-38.  Add Python support for fixed-sized types. (sharad)
-
-    AVRO-42.  Add partial C++ implementation. (Scott Banachowski via cutting)
-
-    AVRO-25.  Add blocking value writer that permits arbitrarily long
-    arrays and maps to be efficiently written as sequences of blocks.
-    (Thiruvalluvan M. G. via cutting)
-
-    AVRO-48.  Add JSON parser for C.  (Matt Massie via cutting)
-
-    AVRO-29.  Add to Java a validating encoder & decoder, and a
-    resolving decoder.  (Thiruvalluvan M. G. & Raymie Stata)
-
-    AVRO-67.  Add per-call RPC metadata to spec. (George Porter via cutting)
-
-    AVRO-28. Add Python support for default values. (sharad via cutting)
-
-  IMPROVEMENTS
-
-    AVRO-11.  Re-implement specific and reflect datum readers and
-    writers to leverage AVRO-6.  (cutting)
-
-    AVRO-13. Use dictionary instead of if-else in validate. (sharad)
-
-    AVRO-5. Add java versus python RPC interoperability tests. 
-    (sharad)
-
-    AVRO-16.  Minor documentation improvements.  (cutting)
-
-    AVRO-15. Override __eq__() and __hash__() in Schema classes.
-    (sharad)
-
-    AVRO-26. Switch tests from JUnit to TestNG.  (Konstantin Boudnik
-    via cutting)
-
-    AVRO-34. Upgrade to Jackson version 1.0.0.  (cutting)
-
-    AVRO-37. Add C api docs.  Also link to py docs.  (Matt Massie & cutting)
-
-    AVRO-32. Java specific generated record classes now implement
-    equals() and hashCode().  (cutting)
-
-    AVRO-48. Remove unused imports and annotations.
-    (Thiruvalluvan M. G. via cutting)
-
-    AVRO-53. Use Ivy to retrieve Java dependencies.  (cutting)
-
-    AVRO-56. Use Jackson to generate JSON from Java.  (cutting)
-
-    AVRO-36. Correctly encode and decode binary default values.
-    (cutting)
-
-    AVRO-59.  C++: make serializer more generic.  (Scott Banachowski
-    via cutting)
-
-    AVRO-68. Add license headers to C sources and improve C packaging.
-    (Matt Massie via cutting)
-
-    AVRO-351. Shorten induce tool description; add check to avoid overly verbose 
-    descriptions. (philz)
-
-  OPTIMIZATIONS
-
-  BUG FIXES
-
-    AVRO-3. Fix ValueReader to throw an exception at EOF.
-    (Pat Hunt via cutting)
-    
-    AVRO-4. Fix so that specific code generation works under Eclipse.
-    (Pat Hunt via cutting)
-
-    AVRO-14. Fix so that EOF is not thrown when one attempts to read
-    an empty buffer.  (sharad via cutting)
-
-    AVRO-31. Fix Java package imports in generated specific classes.
-    (sharad via cutting)
-
-    AVRO-21. Default Java namespace from containing definition. (cutting)
-
-    AVRO-12. Fix recursive schemas in Java so that equals() and
-    hashCode() do not cause a stack overflow.  (cutting)
-
-    AVRO-22. When parsing schemas in Java, do not permit anonymous
-    schemas.  (cutting)
-
-    AVRO-39. Fix bug in Java record schema toString().  (sharad)
-
-    AVRO-40. Fix typo in specification, where 'unsigned' was used where
-    'signed' was intended.  (cutting)
-
-    AVRO-44. Fix so that 'ant clean' works even if C has not been
-    built.  (Matt Massie via cutting)
-
-    AVRO-45. Fix c++ compilation so that python script need not be
-    made executable.  (Scott Banachowski via cutting)
-
-    AVRO-51. Fix testio.py to exit correctly. (Philip Zeyliger 
-    via sharad)
-
-    AVRO-55. Fix two spec document typos.  (cutting)
-
-    AVRO-69. Make C's install-sh script executable.  (Matt Massie via cutting)
-
-    AVRO-70. Add license header to json_schema.y.  (Matt Massie via cutting)
-
-    AVRO-74. Add missing license headers in C++.  (cutting)
-
-    AVRO-73. Workaround in python to fix simplejson bug on Mac OS. (sharad)
-
-    AVRO-64. Fix socket and parser issue on Mac OS. (sharad)
-
-    AVRO-77. Fix C unit tests on Mac OS.  (Matt Massie via cutting)
diff --git a/branch-1.3/DIST_README.txt b/branch-1.3/DIST_README.txt
deleted file mode 100644
index 57ecf8d..0000000
--- a/branch-1.3/DIST_README.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-Apache Avro Distribution 
-
-Avro is a data serialization system.
-
-This distribution contains the following files:
-
-  - avro-src-x.y.z.tar.gz contains the full source for Avro, including
-    all programming language implementations, documentation source, etc.
-
-  - avro-doc-x.y.z.tar.gz contains Avro's pre-built documentation.
-
-  - the c/, cpp/, java/, py/, and ruby/ subdirectories contain
-    pre-built, language-specific binaries, bundles, etc. as
-    conveniences.
diff --git a/branch-1.3/LICENSE.txt b/branch-1.3/LICENSE.txt
deleted file mode 100644
index 5e4a8e5..0000000
--- a/branch-1.3/LICENSE.txt
+++ /dev/null
@@ -1,224 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-License for the Jansson C JSON parser used in the C implementation:
-
-Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in
-all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE.
diff --git a/branch-1.3/NOTICE.txt b/branch-1.3/NOTICE.txt
deleted file mode 100644
index e601a8e..0000000
--- a/branch-1.3/NOTICE.txt
+++ /dev/null
@@ -1,9 +0,0 @@
-Apache Avro
-Copyright 2010 The Apache Software Foundation
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
-
-C JSON parsing provided by Jansson and
-written by Petri Lehtinen. The original software is
-available from http://www.digip.org/jansson/.
diff --git a/branch-1.3/build.sh b/branch-1.3/build.sh
deleted file mode 100755
index 1257cc8..0000000
--- a/branch-1.3/build.sh
+++ /dev/null
@@ -1,139 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -e						  # exit on error
-
-cd `dirname "$0"`				  # connect to root
-
-VERSION=`cat share/VERSION.txt`
-
-function usage {
-  echo "Usage: $0 {test|dist|sign|clean}"
-  exit 1
-}
-
-if [ $# -eq 0 ]
-then
-  usage
-fi
-
-set -x						  # echo commands
-
-for target in "$@"
-do
-
-case "$target" in
-
-    test)
-	# run lang-specific tests
-	(cd lang/java; ant test)
-	(cd lang/py; ant test)
-	(cd lang/c; ./build.sh test)
-	(cd lang/c++; ./build.sh test)
-	(cd lang/ruby; rake test)
-
-	# create interop test data
-	(cd lang/java; ant interop-data-generate)
-	(cd lang/py; ant interop-data-generate)
-	(cd lang/c; ./build.sh interop-data-generate)
-	#(cd lang/c++; make interop-data-generate)
-	(cd lang/ruby; rake generate_interop)
-
-	# run interop data tests
-	(cd lang/java; ant interop-data-test)
-	(cd lang/py; ant interop-data-test)
-	(cd lang/c; ./build.sh interop-data-test)
-	#(cd lang/c++; make interop-data-test)
-	(cd lang/ruby; rake interop)
-
-	# run interop rpc tests
-	/bin/bash share/test/interop/bin/test_rpc_interop.sh
-
-	;;
-
-    dist)
-	# build source tarball
-	mkdir -p build
-	rm -rf build/avro-src-$VERSION
-	svn export --force . build/avro-src-$VERSION
-	(cd lang/java; ant rat)
-
-	mkdir -p dist
-        (cd build; tar czf ../dist/avro-src-$VERSION.tar.gz avro-src-$VERSION)
-
-	# build lang-specific artifacts
-	(cd lang/java; ant dist)
-
-	(cd lang/py; ant dist)
-
-	(cd lang/c; ./build.sh dist)
-
-	(cd lang/c++; ./build.sh dist)
-
-	(cd lang/ruby; rake dist)
-
-	# build docs
-	(cd doc; ant)
-	(cd build; tar czf ../dist/avro-doc-$VERSION.tar.gz avro-doc-$VERSION)
-
-	cp DIST_README.txt dist/README.txt
-	;;
-
-    sign)
-
-	set +x
-
-	echo -n "Enter password: "
-	stty -echo
-	read password
-	stty echo
-
-	for f in $(find dist -type f \
-	    \! -name '*.sha1' \! -name '*.asc' \! -name '*.txt' );
-	do
-	    md5sum $f > $f.md5
-	    sha1sum $f > $f.sha1
-	    gpg --passphrase $password --armor --output $f.asc --detach-sig $f
-	done
-
-	set -x
-	;;
-
-    clean)
-	rm -rf build dist
-	(cd doc; ant clean)
-
-	(cd lang/java; ant clean)
-
-	(cd lang/py; ant clean)
-
-	(cd lang/c; ./build.sh clean)
-
-	(cd lang/c++; ./build.sh clean)
-
-	(cd lang/ruby; rake clean)
-
-	;;
-
-    *)
-        usage
-        ;;
-esac
-
-done
-
-exit 0
diff --git a/branch-1.3/doc/build.xml b/branch-1.3/doc/build.xml
deleted file mode 100644
index 87486af..0000000
--- a/branch-1.3/doc/build.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-<?xml version="1.0"?>
-
-<project name="doc" default="doc" basedir=".">
-  
-  <!-- Load user's default properties. -->
-  <property file="${user.home}/build.properties" />
-  
-  <loadresource property="version">
-    <file file="${basedir}/../share/VERSION.txt"/>
-  </loadresource>
-
-  <property name="build.dir" value="../build/avro-doc-${version}"/>
-
-  <target name="doc" depends="forrest.check" description="Generate forrest-based documentation. To use, specify -Dforrest.home=&lt;base of Apache Forrest installation&gt; on the command line." if="forrest.home">
-    <mkdir dir="${build.dir}"/>
-    <echo  file="../build/avro.ent" append="false">
-      &lt;!ENTITY AvroVersion "${version}"&gt;
-    </echo>
-    <exec executable="${forrest.home}/bin/forrest" failonerror="true">
-      <env key="JAVA_HOME" value="${java5.home}"/>
-      <arg value="-Dproject.content-dir=src"/>
-      <arg value="-Dproject.site=../${build.dir}/"/>
-    </exec>
-  </target>
-
-  <target name="forrest.check" unless="forrest.home" depends="java5.check">
-    <fail message="'forrest.home' is not defined. Please pass -Dforrest.home=&lt;base of Apache Forrest installation&gt; to Ant on the command-line." />
-  </target>
-
-  <target name="java5.check" unless="java5.home">
-    <fail message="'java5.home' is not defined.  Forrest requires Java 5.  Please pass -Djava5.home=&lt;base of Java 5 distribution&gt; to Ant on the command-line." />
-  </target>
-
-  <target name="clean" description="Delete build files, and their directories">
-    <delete dir="${basedir}/build"/>
-  </target>
-
-</project>
diff --git a/branch-1.3/doc/src/content/xdocs/genavro.xml b/branch-1.3/doc/src/content/xdocs/genavro.xml
deleted file mode 100644
index d40f19f..0000000
--- a/branch-1.3/doc/src/content/xdocs/genavro.xml
+++ /dev/null
@@ -1,362 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd" [
-  <!ENTITY % avro-entities PUBLIC "-//Apache//ENTITIES Avro//EN"
-	   "../../../../build/avro.ent">
-  %avro-entities;
-]>
-<document>
-  <header>
-    <title>Avro &AvroVersion; GenAvro Language Docs</title>
-  </header>
-  <body>
-
-    <section id="preamble">
-      <title>Introduction</title>
-
-      <p>This document defines GenAvro, an experimental higher-level language for authoring Avro schemata.
-      Before reading this document, you should have familiarity with the concepts of schemata and protocols,
-      as well as the various primitive and complex types available in Avro.
-      </p>
-      <p>
-        <strong>N.B.</strong> This feature is considered experimental in the current version of Avro and the language
-        has not been finalized. Although major changes are unlikely, some syntax may change in future
-        versions of Avro.
-      </p>
-    </section>
-
-    <section id="overview">
-      <title>Overview</title>
-      <section id="overview_purpose">
-        <title>Purpose</title>
-        <p>The aim of the GenAvro language is to enable developers to author schemata in a way that
-        feels more similar to common programming languages like Java, C++, or Python. Additionally,
-        the GenAvro language may feel more familiar for those users who have previously used the
-        interface description languages (IDLs) in other frameworks like Thrift, Protocol Buffers, or CORBA.
-        </p>
-      </section>
-      <section id="overview_usage">
-        <title>Usage</title>
-        <p>
-          Each GenAvro file defines a single Avro Protocol, and thus generates as its output a JSON-format
-          Avro Protocol file with extension <code>.avpr</code>.
-        </p>
-        <p>
-          To convert a <code>.genavro</code> file into a <code>.avpr</code> file, it must be processed by the
-          <code>avroj genavro</code> tool. For example:
-        </p>
-        <source>
-$ java -jar avroj-1.3.0.jar genavro src/test/genavro/input/namespaces.genavro /tmp/namespaces.avpr
-$ head /tmp/namespaces.avpr
-{
-  "protocol" : "TestNamespace",
-  "namespace" : "avro.test.protocol",
-        </source>
-        <p>
-          The <code>avroj genavro</code> tool can also process input to and from <em>stdin</em> and <em>stdout</em>.
-          See <code>avroj genavro --help</code> for full usage information.
-        </p>
-      </section>
-    </section> <!-- end overview -->
-
-    <section id="defining_protocol">
-      <title>Defining a Protocol in GenAvro</title>
-
-      <p>A GenAvro file consists of exactly one protocol definition. The minimal protocol is defined
-      by the following code:
-      </p>
-      <source>
-protocol MyProtocol {
-}
-      </source>
-      <p>
-        This is equivalent to (and generates) the following JSON protocol definition:
-      </p>
-  <!--  "namespace" : null, TODO: this is generated but shouldn't be - AVRO-263 -->
-      <source>
-{
-"protocol" : "MyProtocol",
-  "types" : [ ],
-  "messages" : {
-  }
-}
-      </source>
-      <p>
-        The namespace of the protocol may be changed using the <code>@namespace</code> annotation:
-      </p>
-      <source>
-@namespace("mynamespace")
-protocol MyProtocol {
-}
-      </source>
-      <p>
-        This notation is used throughout GenAvro as a way of specifying properties for the annotated element,
-        as will be described later in this document.
-      </p>
-      <p>
-        Protocols in GenAvro can contain the following items:
-      </p>
-        <ul>
-          <li>Definitions of named schemata, including <em>record</em>s, <em>error</em>s, <em>enum</em>s, and <em>fixed</em>s.</li>
-          <li>Definitions of RPC messages</li>
-        </ul>
-    </section>
-    <section id="format_enums">
-      <title>Defining an Enumeration</title>
-      <p>
-        Enums are defined in GenAvro using a syntax similar to C or Java:
-      </p>
-      <source>
-enum Suit {
-  SPADES, DIAMONDS, CLUBS, HEARTS
-}
-      </source>
-      <p>
-        Note that, unlike the JSON format, anonymous enums cannot be defined.
-      </p>
-    </section>
-    <section id="format_fixed">
-      <title>Defining a Fixed Length Field</title>
-      <p>
-        Fixed fields are defined using the following syntax:
-      </p>
-      <source>
-fixed MD5(16);
-      </source>
-      <p>This example defines a fixed-length type called <code>MD5</code> which contains 16 bytes.</p>
-    </section>
-
-    <section id="format_records">
-      <title>Defining Records and Errors</title>
-      <p>
-        Records are defined in GenAvro using a syntax similar to a <code>struct</code> definition in C:
-      </p>
-      <source>
-record Employee {
-  string name;
-  boolean active;
-  long salary;
-}
-      </source>
-      <p>
-        The above example defines a record with the name &ldquo;Employee&rdquo; with three fields.
-      </p>
-      <p>
-        To define an error, simply use the keyword <code>error</code> instead of <code>record</code>.
-        For example:
-      </p>
-      <source>
-error Kaboom {
-  string explanation;
-  int result_code;
-}
-      </source>
-      <p>
-        Each field in a record or error consists of a type and a name, along with optional property annotations.
-      </p>
-      <p>A type reference in GenAvro must be one of:</p>
-      <ul>
-        <li>A primitive type</li>
-        <li>A named schema defined prior to this usage in the same Protocol</li>
-        <li>A complex type (array, map, or union)</li>
-      </ul>
-
-      <section id="primitive_types">
-        <title>Primitive Types</title>
-        <p>The primitive types supported by GenAvro are the same as those supported by Avro's JSON format.
-        This list includes <code>int</code>, <code>long</code>, <code>string</code>, <code>boolean</code>,
-        <code>float</code>, <code>double</code>, <code>null</code>, and <code>bytes</code>.
-        </p>
-      </section>
-
-      <section id="schema_references">
-        <title>References to Named Schemata</title>
-        <p>If a named schema has already been defined in the same GenAvro file, it may be referenced by name
-        as if it were a primitive type:
-        </p>
-        <source>
-record Card {
-  Suit suit; // refers to the enum Suit defined above
-  int number;
-}
-        </source>
-      </section>
-      <section id="complex_types">
-        <title>Complex Types</title>
-
-        <section id="arrays">
-          <title>Arrays</title>
-          <p>
-            Array types are written in a manner that will seem familiar to C++ or Java programmers. An array of
-            any type <code>t</code> is denoted <code>array&lt;t&gt;</code>. For example, an array of strings is
-            denoted <code>array&lt;string&gt;</code>, and a multidimensional array of <code>Foo</code> records
-            would be <code>array&lt;array&lt;Foo&gt;&gt;</code>.
-          </p>
-        </section>
-
-        <section id="maps">
-          <title>Maps</title>
-          <p>Map types are written similarly to array types. A map that contains values of type
-          <code>t</code> is written <code>map&lt;t&gt;</code>. As in the JSON schema format, all
-          maps contain <code>string</code>-type keys.</p>
-        </section>
-
-        <section id="unions">
-          <title>Unions</title>
-          <p>Union types are denoted as <code>union { typeA, typeB, typeC, ... }</code>. For example,
-          this record contains a string field that is optional (unioned with <code>null</code>):
-          </p>
-          <source>
-record RecordWithUnion {
-  union { null, string } optionalString;
-}
-          </source>
-          <p>
-            Note that the same restrictions apply to GenAvro unions as apply to unions defined in the
-            JSON format; namely, a union may not contain multiple elements of the same type.
-          </p>
-        </section> <!-- unions -->
-      </section> <!-- complex types -->
-    </section> <!-- how to define records -->
-    <section id="define_messages">
-      <title>Defining RPC Messages</title>
-      <p>The syntax to define an RPC message within a GenAvro protocol is similar to the syntax for
-      a method declaration within a C header file or a Java interface. To define an RPC message
-      <code>add</code> which takes two arguments named <code>foo</code> and <code>bar</code>,
-      returning an <code>int</code>, simply include the following definition within the protocol:
-      </p>
-      <source>
-int add(int foo, int bar);
-      </source>
-      <p>To define a message with no response, you may use the alias <code>void</code>, equivalent
-      to the Avro <code>null</code> type:
-      </p>
-      <source>
-void logMessage(string message);
-      </source>
-      <p>
-        If you have previously defined an error type within the same protocol, you may declare that
-        a message can throw this error using the syntax:
-      </p>
-      <source>
-void goKaboom() throws Kaboom;
-      </source>
-    </section> <!-- define messages -->
-    <section id="minutiae">
-      <title>Other Language Features</title>
-      <section id="minutiae_comments">
-        <title>Comments</title>
-        <p>All Java-style comments are supported within a GenAvro file. Any text following
-        <code>//</code> on a line is ignored, as is any text between <code>/*</code> and
-        <code>*/</code>, possibly spanning multiple lines.</p>
-      </section>
-      <section id="minutiae_escaping">
-        <title>Escaping Identifiers</title>
-        <p>Occasionally, one will need to use a reserved language keyword as an identifier. In order
-        to do so, backticks (<code>`</code>) may be used to escape the identifier. For example, to define
-        a message with the literal name <em>error</em>, you may write:
-        </p>
-        <source>
-void `error`();
-        </source>
-        <p>This syntax is allowed anywhere an identifier is expected.</p>
-      </section>
-      <section id="minutiae_annotations">
-        <title>Annotations for Ordering and Namespaces</title>
-        <p>
-          Java-style annotations may be used to add additional properties to types throughout
-          GenAvro. For example, to specify the sort order of a field within a record, one may use
-          the <code>@order</code> annotation as follows:
-        </p>
-        <source>
-record MyRecord {
-  @order("ascending")
-  string myAscendingSortField;
-
-  @order("descending")
-  string myDescendingField;
-
-  @order("ignore")
-  string myIgnoredField;
-}
-        </source>
-        <p>Similarly, a <code>@namespace</code> annotation may be used to modify the namespace
-        when defining a named schema. For example:
-        </p>
-        <source>
-@namespace("org.apache.avro.firstNamespace")
-protocol MyProto {
-  @namespace("org.apache.avro.someOtherNamespace")
-  record Foo {}
-
-  record Bar {}
-}
-        </source>
-        <p>
-          This example defines a protocol in the <code>firstNamespace</code> namespace. The record <code>Foo</code> will be
-          defined in <code>someOtherNamespace</code> and <code>Bar</code> will be defined in <code>firstNamespace</code>
-          as it inherits its default from its container.
-        </p>
-      </section>
-    </section>
-    <section id="example">
-      <title>Complete Example</title>
-      <p>The following is a complete example of a GenAvro file that shows most of the above features:</p>
-      <source>
-/**
- * An example protocol in GenAvro
- */
-@namespace("org.apache.avro.test")
-protocol Simple {
-  enum Kind {
-    FOO,
-    BAR, // the bar enum value
-    BAZ
-  }
-
-  fixed MD5(16);
-
-  record TestRecord {
-    @order("ignore")
-    string name;
-
-    @order("descending")
-    Kind kind;
-
-    MD5 hash;
-
-    union { MD5, null } nullableHash;
-
-    array&lt;long&gt; arrayOfLongs;
-  }
-
-  error TestError {
-    string message;
-  }
-
-  string hello(string greeting);
-  TestRecord echo(TestRecord `record`);
-  int add(int arg1, int arg2);
-  bytes echoBytes(bytes data);
-  void `error`() throws TestError;
-}
-      </source>
-      <p>Additional examples may be found in the Avro source tree under the <code>src/test/genavro/input</code> directory.</p>
-    </section>
-  </body>
-</document>
diff --git a/branch-1.3/doc/src/content/xdocs/index.xml b/branch-1.3/doc/src/content/xdocs/index.xml
deleted file mode 100644
index 5f06c8f..0000000
--- a/branch-1.3/doc/src/content/xdocs/index.xml
+++ /dev/null
@@ -1,87 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-  -->
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
-<document>
-  <header>
-    <title>Welcome to Avro!</title>
-  </header>
-  <body>
-    <section id="intro">
-      <title>Introduction</title>
-      <p>Avro is a data serialization system.</p>
-      <p>Avro provides:</p>
-	<ul>
-	  <li>Rich data structures.</li>
-	  <li>A compact, fast, binary data format.</li>
-	  <li>A container file, to store persistent data.</li>
-	  <li>Remote procedure call (RPC).</li>
-	  <li>Simple integration with dynamic languages.  Code
-	    generation is not required to read or write data files nor
-	    to use or implement RPC protocols.  Code generation is an
-	    optional optimization, only worth implementing for
-	    statically typed languages.</li>
-	</ul>
-    </section>
-    <section id="schemas">
-      <title>Schemas</title>
-      <p>Avro relies on <em>schemas</em>.  When Avro data is read, the
-	schema used when writing it is always present.  This permits
-	each datum to be written with no per-value overheads, making
-	serialization both fast and small.  This also facilitates use
-	with dynamic, scripting languages, since data, together with
-	its schema, is fully self-describing.</p>
-      <p>When Avro data is stored in a file, its schema is stored with
-	it, so that files may be processed later by any program.  If
-	the program reading the data expects a different schema this
-	can be easily resolved, since both schemas are present.</p>
-      <p>When Avro is used in RPC, the client and server exchange
-	schemas in the connection handshake.  (This can be optimized
-	so that, for most calls, no schemas are actually transmitted.)
-	Since both client and server have the other's full
-	schema, correspondence between same-named fields, missing
-	fields, extra fields, etc. can all be easily resolved.</p>
-      <p>Avro schemas are defined with
-	<a href="http://www.json.org/">JSON</a>.  This
-	facilitates implementation in languages that already have
-	JSON libraries.</p>
-    </section>
-    <section id="compare">
-      <title>Comparison with other systems</title>
-      <p>Avro provides functionality similar to systems such
-	as <a href="http://incubator.apache.org/thrift/">Thrift</a>,
-	<a href="http://code.google.com/protobuf/">Protocol
-	  Buffers</a>, etc.  Avro differs from these systems in the
-	  following fundamental aspects.</p>
-      <ul>
-	<li><em>Dynamic typing</em>: Avro does not require that code
-	  be generated.  Data is always accompanied by a schema that
-	  permits full processing of that data without code
-	  generation, static datatypes, etc.  This facilitates
-	  construction of generic data-processing systems and
-	  languages.</li>
-	<li><em>Untagged data</em>: Since the schema is present when
-	  data is read, considerably less type information need be
-	  encoded with data, resulting in smaller serialization size.</li>
-	<li><em>No manually-assigned field IDs</em>: When a schema
-	  changes, both the old and new schema are always present when
-	  processing data, so differences may be resolved
-	  symbolically, using field names.</li>
-      </ul>
-    </section>
-  </body>
-</document>
diff --git a/branch-1.3/doc/src/content/xdocs/site.xml b/branch-1.3/doc/src/content/xdocs/site.xml
deleted file mode 100644
index 6d5246d..0000000
--- a/branch-1.3/doc/src/content/xdocs/site.xml
+++ /dev/null
@@ -1,79 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<!--
-Forrest site.xml
-
-This file contains an outline of the site's information content.  It is used to:
-- Generate the website menus (though these can be overridden - see docs)
-- Provide semantic, location-independent aliases for internal 'site:' URIs, eg
-<link href="site:changes"> links to changes.html (or ../changes.html if in
-  subdir).
-- Provide aliases for external URLs in the external-refs section.  Eg, <link
-  href="ext:cocoon"> links to http://cocoon.apache.org/ 
-
-See http://forrest.apache.org/docs/linking.html for more info
--->
-<!-- The label attribute of the outer "site" element will only show
-  in the linkmap (linkmap.html).
-  Use elements project-name and group-name in skinconfig to change name of 
-  your site or project that is usually shown at the top of page.
-  No matter what you configure for the href attribute, Forrest will
-  always use index.html when you request http://yourHost/
-  See FAQ: "How can I use a start-up-page other than index.html?"
--->
-
-<site label="Avro" href="" xmlns="http://apache.org/forrest/linkmap/1.0">
-
-  <docs label="Documentation"> 
-    <overview   label="Overview"          href="index.html" />
-    <spec       label="Specification"     href="spec.html" />
-    <java-api   label="Java API"          href="ext:api/java/index" />
-    <c-api      label="C API"             href="ext:api/c/index" />
-    <cpp-api    label="C++ API"           href="ext:api/cpp/index" />
-    <genavro    label="GenAvro language"  href="genavro.html" />
-    <wiki       label="Wiki"              href="ext:wiki" />
-    <faq        label="FAQ"               href="ext:faq" />
-  </docs>
-  
-  <external-refs>
-    <site      href="http://hadoop.apache.org/avro/"/>
-    <lists     href="http://hadoop.apache.org/avro/mailing_lists.html"/>
-    <archive   href="http://mail-archives.apache.org/mod_mbox/hadoop-avro-commits/"/>
-    <releases  href="http://hadoop.apache.org/avro/releases.html">
-      <download href="#Download" />
-    </releases>
-    <jira      href="http://hadoop.apache.org/avro/issue_tracking.html"/>
-    <wiki      href="http://wiki.apache.org/hadoop/Avro/" />
-    <faq       href="http://wiki.apache.org/hadoop/Avro/FAQ" />
-    <json      href="http://www.json.org/" />
-    <vint      href="http://lucene.apache.org/java/2_4_0/fileformats.html#VInt"/>
-    <zigzag    href="http://code.google.com/apis/protocolbuffers/docs/encoding.html#types"/>
-    <api href="api/">
-      <c href="c/">
-	<index href="index.html" />
-      </c>
-      <cpp href="cpp/">
-	<index href="html/index.html" />
-      </cpp>
-      <java href="java/">
-	<index href="index.html" />
-      </java>
-    </api>
-  </external-refs>
- 
-</site>
diff --git a/branch-1.3/doc/src/content/xdocs/spec.xml b/branch-1.3/doc/src/content/xdocs/spec.xml
deleted file mode 100644
index bf2ebe9..0000000
--- a/branch-1.3/doc/src/content/xdocs/spec.xml
+++ /dev/null
@@ -1,1056 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd" [
-  <!ENTITY % avro-entities PUBLIC "-//Apache//ENTITIES Avro//EN"
-	   "../../../../build/avro.ent">
-  %avro-entities;
-]>
-<document>
-  <header>
-    <title>Avro &AvroVersion; Specification</title>
-  </header>
-  <body>
-
-    <section id="preamble">
-      <title>Introduction</title>
-
-      <p>This document defines Avro.  It is intended to be the
-        authoritative specification. Implementations of Avro must
-        adhere to this document.
-      </p>
-
-    </section>
-
-    <section id="schemas">
-      <title>Schema Declaration</title>
-      <p>A Schema is represented in <a href="ext:json">JSON</a> by one of:</p>
-      <ul>
-        <li>A JSON string, naming a defined type.</li>
-        
-        <li>A JSON object, of the form:
-          
-          <source>{"type": "<em>typeName</em>" ...<em>attributes</em>...}</source>
-
-          where <em>typeName</em> is either a primitive or derived
-          type name, as defined below.  Attributes not defined in this
-          document are permitted as metadata, but must not affect
-          the format of serialized data.
-          </li>
-        <li>A JSON array, representing a union of embedded types.</li>
-      </ul>
-
-      <section id="schema_primitive">
-        <title>Primitive Types</title>
-        <p>The set of primitive type names is:</p>
-        <ul>
-          <li><code>null</code>: no value</li>
-          <li><code>boolean</code>: a binary value</li>
-          <li><code>int</code>: 32-bit signed integer</li>
-          <li><code>long</code>: 64-bit signed integer</li>
-          <li><code>float</code>: single precision (32-bit) IEEE 754 floating-point number</li>
-          <li><code>double</code>: double precision (64-bit) IEEE 754 floating-point number</li>
-          <li><code>bytes</code>: sequence of 8-bit unsigned bytes</li>
-          <li><code>string</code>: unicode character sequence</li>
-        </ul>
-        
-        <p>Primitive types have no specified attributes.</p>
-        
-        <p>Primitive type names are also defined type names.  Thus, for
-          example, the schema "string" is equivalent to:</p>
-        
-        <source>{"type": "string"}</source>
-
-      </section>
-
-      <section id="schema_complex">
-        <title>Complex Types</title>
-        
-        <p>Avro supports six kinds of complex types: records, enums,
-        arrays, maps, unions and fixed.</p>
-
-        <section id="schema_record">
-          <title>Records</title>
-          
-	  <p>Records use the type name "record" and support the
-	  following attributes:</p>
-	  <ul>
-	    <li><code>name</code>: a JSON string providing the name
-	    of the record (required).</li>
-	    <li><code>namespace</code>: a JSON string that qualifies
-	    the name (optional).</li>
-	    <li><code>doc</code>: a JSON string providing documentation to the
-	    user of this schema (optional).</li>
-	    <li><code>fields</code>: a JSON array, listing fields (required).
-	    Each field is a JSON object with the following attributes:
-	      <ul>
-		<li><code>name</code>: a JSON string providing the name
-		  of the field (required).</li>
-		<li><code>doc</code>: a JSON string describing this field
-                  for users (optional).</li>
-		<li><code>type</code>: a JSON object defining a schema, or
-		  a JSON string naming a record definition
-		  (required).</li>
-		<li><code>default</code>: a default value for this
-		  field, used when reading instances that lack this
-		  field (optional).  Permitted values depend on the
-		  field's schema type, according to the table below.
-		  Default values for union fields correspond to the
-		  first schema in the union. Default values for bytes
-		  and fixed fields are JSON strings, where Unicode
-		  code points 0-255 are mapped to unsigned 8-bit byte
-		  values 0-255.
-		  <table class="right">
-		    <caption>field default values</caption>
-		    <tr><th>avro type</th><th>json type</th><th>example</th></tr>
-		    <tr><td>null</td><td>null</td><td>null</td></tr>
-		    <tr><td>boolean</td><td>boolean</td><td>true</td></tr>
-		    <tr><td>int,long</td><td>integer</td><td>1</td></tr>
-		    <tr><td>float,double</td><td>number</td><td>1.1</td></tr>
-		    <tr><td>bytes</td><td>string</td><td>"\u00FF"</td></tr>
-		    <tr><td>string</td><td>string</td><td>"foo"</td></tr>
-		    <tr><td>record</td><td>object</td><td>{"a": 1}</td></tr>
-		    <tr><td>enum</td><td>string</td><td>"FOO"</td></tr>
-		    <tr><td>array</td><td>array</td><td>[1]</td></tr>
-		    <tr><td>map</td><td>object</td><td>{"a": 1}</td></tr>
-		    <tr><td>fixed</td><td>string</td><td>"\u00ff"</td></tr>
-		  </table>
-		</li>
-		<li><code>order</code>: specifies how this field
-		  impacts sort ordering of this record (optional).
-		  Valid values are "ascending" (the default),
-		  "descending", or "ignore".  For more details on how
-		  this is used, see the <a href="#order">sort
-		  order</a> section below.</li>
-	      </ul>
-	    </li>
-	  </ul>
-
-	  <p>For example, a linked-list of 64-bit values may be defined with:</p>
-	  <source>
-{
-  "type": "record", 
-  "name": "LongList",
-  "fields" : [
-    {"name": "value", "type": "long"},             // each element has a long
-    {"name": "next", "type": ["LongList", "null"]} // optional next element
-  ]
-}
-	  </source>
-	</section>
-        
-        <section>
-          <title>Enums</title>
-          
-	  <p>Enums use the type name "enum" and support the following
-	  attributes:</p>
-	  <ul>
-	    <li><code>name</code>: a JSON string providing the name
-	    of the enum (required).</li>
-	    <li><code>namespace</code>: a JSON string that qualifies the name (optional).</li>
-	    <li><code>doc</code>: a JSON string providing documentation to the
-	    user of this schema (optional).</li>
-	    <li><code>symbols</code>: a JSON array, listing symbols,
-	    as JSON strings (required).  All symbols in an enum must
-	    be unique; duplicates are prohibited.</li>
-	  </ul>
-	  <p>For example, playing card suits might be defined with:</p>
-	  <source>
-{ "type": "enum",
-  "name": "Suit",
-  "symbols" : ["SPADES", "HEARTS", "DIAMONDS", "CLUBS"]
-}
-	  </source>
-	</section>
-        
-        <section>
-          <title>Arrays</title>
-          <p>Arrays use the type name <code>"array"</code> and support
-          a single attribute:</p>
-	  <ul>
-            <li><code>items</code>: the schema of the array's items.</li>
-	  </ul>
-	  <p>For example, an array of strings is declared
-	  with:</p>
-	  <source>{"type": "array", "items": "string"}</source>
-	</section>
-
-        <section>
-          <title>Maps</title>
-          <p>Maps use the type name <code>"map"</code> and support
-          one attribute:</p>
-	  <ul>
-            <li><code>values</code>: the schema of the map's values.</li>
-	  </ul>
-	  <p>Map keys are assumed to be strings.</p>
-	  <p>For example, a map from string to long is declared
-	  with:</p>
-	  <source>{"type": "map", "values": "long"}</source>
-	</section>
-
-        <section>
-          <title>Unions</title>
-          <p>Unions, as mentioned above, are represented using JSON
-          arrays.  For example, <code>["string", "null"]</code>
-          declares a schema which may be either a string or null.</p>
-	  <p>Unions may not contain more than one schema with the same
-	  type, except for the named types record, fixed and enum.  For
-	  example, unions containing two array types or two map types
-	  are not permitted, but two types with different names are
-	  permitted.  (Names permit efficient resolution when reading
-	  and writing unions.)</p>
-	  <p>Unions may not immediately contain other unions.</p>
-        </section>
-
-        <section>
-          <title>Fixed</title>
-          <p>Fixed uses the type name <code>"fixed"</code> and supports
-          the following attributes:</p>
-	  <ul>
-	    <li><code>name</code>: a string naming this fixed (required).</li>
-	    <li><code>namespace</code>: a string that qualifies the name (optional).</li>
-            <li><code>size</code>: an integer, specifying the number
-            of bytes per value (required).</li>
-	  </ul>
-	  <p>For example, a 16-byte quantity may be declared with:</p>
-	  <source>{"type": "fixed", "size": 16, "name": "md5"}</source>
-	</section>
-
-
-      </section> <!-- end complex types -->
-
-      <section>
-	<title>Names</title>
-        <p>Records, enums and fixed are named types.  Each has
-          a <em>fullname</em> that is composed of two parts:
-          a <em>name</em> and a <em>namespace</em>.  Equality of names
-          is defined on the fullname.</p>
-	<p>The name portion of a fullname, as well as record field names, must:</p>
-	<ul>
-          <li>start with <code>[A-Za-z_]</code></li>
-          <li>subsequently contain only <code>[A-Za-z0-9_]</code></li>
-	</ul>
-        <p>A namespace is a dot-separated sequence of such names.</p>
-        <p>In record, enum and fixed definitions, the fullname is
-        determined in one of the following ways:</p>
-	<ul>
-	  <li>A name and namespace are both specified.  For example,
-	  one might use <code>"name": "X", "namespace":
-	  "org.foo"</code> to indicate the
-	  fullname <code>org.foo.X</code>.</li>
-	  <li>A fullname is specified.  If the name specified contains
-	  a dot, then it is assumed to be a fullname, and any
-	  namespace also specified is ignored.  For example,
-	  use <code>"name": "org.foo.X"</code> to indicate the
-	  fullname <code>org.foo.X</code>.</li>
-	  <li>A name only is specified, i.e., a name that contains no
-	  dots.  In this case the namespace is taken from the most
-	  tightly enclosing schema or protocol.  For example,
-	  if <code>"name": "X"</code> is specified, and this occurs
-	  within a field of the record definition
-	  of <code>org.foo.Y</code>, then the fullname
-	  is <code>org.foo.X</code>.</li>
-	</ul>
-	<p>References to previously defined names are as in the latter
-	two cases above: if they contain a dot they are a fullname; if
-	they do not contain a dot, the namespace is the namespace of
-	the enclosing definition.</p>
-	<p>Primitive type names have no namespace and their names may
-	not be defined in any namespace.  A schema may only contain
-	multiple definitions of a fullname if the definitions are
-	equivalent.</p>
-      </section>
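The three resolution rules above can be sketched as a small Python function. This is an illustrative sketch only; the function and parameter names are ours, not part of the specification:

```python
def fullname(name, namespace=None, enclosing=None):
    # Rule 2: a name containing a dot is already a fullname, and any
    # "namespace" attribute specified alongside it is ignored.
    if "." in name:
        return name
    # Rules 1 and 3: qualify with the explicit namespace if given,
    # else with the namespace of the most tightly enclosing definition.
    ns = namespace or enclosing
    return ns + "." + name if ns else name

print(fullname("X", namespace="org.foo"))        # org.foo.X
print(fullname("org.foo.X", namespace="other"))  # org.foo.X
print(fullname("X", enclosing="org.foo"))        # org.foo.X
```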
-
-    </section> <!-- end schemas -->
-
-    <section>
-      <title>Data Serialization</title>
-
-      <p>Avro data is always serialized with its schema.  Files that
-	store Avro data should always also include the schema for that
-	data in the same file.  Avro-based remote procedure call (RPC)
-	systems must also guarantee that remote recipients of data
-	have a copy of the schema used to write that data.</p>
-
-      <p>Because the schema used to write data is always available
-	when the data is read, Avro data itself is not tagged with
-	type information.  The schema is required to parse data.</p>
-
-      <p>In general, both serialization and deserialization proceed as
-      a depth-first, left-to-right traversal of the schema,
-      serializing primitive types as they are encountered.</p>
-
-      <section>
-	<title>Encodings</title>
-	<p>Avro specifies two serialization encodings: binary and
-	  JSON.  Most applications will use the binary encoding, as it
-	  is smaller and faster.  But, for debugging and web-based
-	  applications, the JSON encoding may sometimes be
-	  appropriate.</p>
-      </section>
-
-      <section id="binary_encoding">
-        <title>Binary Encoding</title>
-
-	<section id="binary_encode_primitive">
-          <title>Primitive Types</title>
-          <p>Primitive types are encoded in binary as follows:</p>
-          <ul>
-            <li><code>null</code> is written as zero bytes.</li>
-            <li>a <code>boolean</code> is written as a single byte whose
-              value is either <code>0</code> (false) or <code>1</code>
-              (true).</li>
-            <li><code>int</code> and <code>long</code> values are written
-              using <a href="ext:vint">variable-length</a>
-	      <a href="ext:zigzag">zig-zag</a> coding.  Some examples:
-	      <table class="right">
-		<tr><th>value</th><th>hex</th></tr>
-		<tr><td><code> 0</code></td><td><code>00</code></td></tr>
-		<tr><td><code>-1</code></td><td><code>01</code></td></tr>
-		<tr><td><code> 1</code></td><td><code>02</code></td></tr>
-		<tr><td><code>-2</code></td><td><code>03</code></td></tr>
-		<tr><td><code> 2</code></td><td><code>04</code></td></tr>
-		<tr><td colspan="2"><code>...</code></td></tr>
-		<tr><td><code>-64</code></td><td><code>7f</code></td></tr>
-		<tr><td><code> 64</code></td><td><code>&nbsp;80 01</code></td></tr>
-		<tr><td colspan="2"><code>...</code></td></tr>
-	      </table>
-	    </li>
-            <li>a <code>float</code> is written as 4 bytes. The float is
-              converted into a 32-bit integer using a method equivalent
-              to <a href="http://java.sun.com/javase/6/docs/api/java/lang/Float.html#floatToIntBits%28float%29">Java's floatToIntBits</a> and then encoded
-              in little-endian format.</li>
-            <li>a <code>double</code> is written as 8 bytes. The double
-              is converted into a 64-bit integer using a method equivalent
-              to <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#doubleToLongBits%28double%29">Java's
-		doubleToLongBits</a> and then encoded in little-endian
-              format.</li>
-            <li><code>bytes</code> are encoded as
-              a <code>long</code> followed by that many bytes of data.
-            </li>
-            <li>a <code>string</code> is encoded as
-              a <code>long</code> followed by that many bytes of UTF-8
-              encoded character data.
-              <p>For example, the three-character string "foo" would
-              be encoded as the long value 3 (encoded as
-              hex <code>06</code>) followed by the UTF-8 encoding of
-              'f', 'o', and 'o' (the hex bytes <code>66 6f
-              6f</code>):
-              </p>
-              <source>06 66 6f 6f</source>
-            </li>
-          </ul>
-
-	</section>
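The int/long, bytes, and string rules above can be sketched in Python. This is an illustrative sketch, not part of the specification; the helper names (`zigzag`, `encode_long`, `encode_string`) are ours:

```python
def zigzag(n):
    # Zig-zag map a signed 64-bit value onto an unsigned one:
    # 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    return (n << 1) ^ (n >> 63)

def encode_long(n):
    # Variable-length encoding of the zig-zag value: seven bits per
    # byte, with the high bit set on every byte except the last.
    z = zigzag(n)
    out = bytearray()
    while z > 0x7F:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def encode_string(s):
    # A string is a long byte-count followed by the UTF-8 data.
    data = s.encode("utf-8")
    return encode_long(len(data)) + data

print(encode_long(64).hex(" "))       # 80 01, as in the table above
print(encode_string("foo").hex(" "))  # 06 66 6f 6f
```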
-
-
-	<section id="binary_encode_complex">
-          <title>Complex Types</title>
-          <p>Complex types are encoded in binary as follows:</p>
-
-          <section>
-            <title>Records</title>
-	    <p>A record is encoded by encoding the values of its
-	      fields in the order that they are declared.  In other
-	      words, a record is encoded as just the concatenation of
-	      the encodings of its fields.  Field values are encoded per
-	      their schema.</p>
-	    <p>For example, the record schema</p>
-	    <source>
-{
-  "type": "record",
-  "name": "test",
-  "fields" : [
-    {"name": "a", "type": "long"},
-    {"name": "b", "type": "string"}
-  ]
-}
-	    </source>
-	    <p>An instance of this record whose <code>a</code> field has
-	      value 27 (encoded as hex <code>36</code>) and
-	      whose <code>b</code> field has value "foo" (encoded as hex
-	      bytes <code>06 66 6f 6f</code>), would be encoded simply
-	      as the concatenation of these, namely the hex byte
-	      sequence:</p>
-	    <source>36 06 66 6f 6f</source>
-	  </section>
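That encoding can be reproduced with a short Python sketch (illustrative only; `encode_long` repeats the zig-zag variable-length rule from the primitive types section, and `encode_test_record` is a hand-written encoder for this one schema):

```python
def encode_long(n):
    # Zig-zag map, then variable-length encode (seven bits per byte).
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while z > 0x7F:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def encode_test_record(a, b):
    # A record is the concatenation of its fields' encodings, in
    # declaration order: here the long "a", then the string "b".
    data = b.encode("utf-8")
    return encode_long(a) + encode_long(len(data)) + data

print(encode_test_record(27, "foo").hex(" "))  # 36 06 66 6f 6f
```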
-          
-          <section>
-            <title>Enums</title>
-            <p>An enum is encoded by an <code>int</code>, representing
-              the zero-based position of the symbol in the schema.</p>
-	    <p>For example, consider the enum:</p>
-	    <source>
-	      {"type": "enum", "name": "Foo", "symbols": ["A", "B", "C", "D"] }
-	    </source>
-	    <p>This would be encoded by an <code>int</code> between
-	      zero and three, with zero indicating "A" and three
-	      indicating "D".</p>
-	  </section>
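A sketch of this rule in Python (illustrative; the helper names are ours, and `encode_long` is the zig-zag variable-length encoder used for any Avro int):

```python
def encode_long(n):
    # Zig-zag variable-length encoding, as for any Avro int/long.
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while z > 0x7F:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

SYMBOLS = ["A", "B", "C", "D"]

def encode_enum(symbol):
    # An enum is its symbol's zero-based position, encoded as an int.
    return encode_long(SYMBOLS.index(symbol))

print(encode_enum("D").hex())  # 06
```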
-
-
-          <section>
-            <title>Arrays</title>
-            <p>Arrays are encoded as a series of <em>blocks</em>.
-              Each block consists of a <code>long</code> <em>count</em>
-              value, followed by that many array items.  A block with
-              count zero indicates the end of the array.  Each item is
-              encoded per the array's item schema.</p>
-
-            <p>If a block's count is negative, its absolute value is used,
-              and the count is followed immediately by a <code>long</code>
-              block <em>size</em> indicating the number of bytes in the
-              block.  This block size permits fast skipping through data,
-              e.g., when projecting a record to a subset of its fields.</p>
-
-            <p>For example, given the array schema</p>
-            <source>{"type": "array", "items": "long"}</source>
-            <p>an array containing the items 3 and 27 could be encoded
-              as the long value 2 (encoded as hex <code>04</code>) followed
-              by long values 3 and 27 (encoded as hex <code>06 36</code>),
-              terminated by zero:</p>
-            <source>04 06 36 00</source>
-
-            <p>The blocked representation permits one to read and write
-              arrays larger than can be buffered in memory, since one can
-              start writing items without knowing the full length of the
-              array.</p>
-
-          </section>
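The single-block encoding described above can be sketched as follows. The helper names are ours, not from any Avro library, and only the simple non-negative-count form is shown (no block sizes):

```python
def encode_long(n: int) -> bytes:
    """Avro's variable-length zig-zag encoding for longs."""
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while z > 0x7F:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def encode_long_array(items) -> bytes:
    """Write all items as one block, then a zero count to terminate."""
    out = bytearray()
    if items:
        out += encode_long(len(items))   # block count
        for item in items:
            out += encode_long(item)     # each item per the item schema
    out += encode_long(0)                # end-of-array marker
    return bytes(out)

print(encode_long_array([3, 27]).hex())  # -> 04063600
```

A streaming writer would instead emit a block per buffered batch of items, which is what makes arrays larger than memory writable.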
-
-	  <section>
-            <title>Maps</title>
-            <p>Maps are encoded as a series of <em>blocks</em>.  Each
-              block consists of a <code>long</code> <em>count</em>
-              value, followed by that many key/value pairs.  A block
-              with count zero indicates the end of the map.  Each item
-              is encoded per the map's value schema.</p>
-	    
-            <p>If a block's count is negative, its absolute value is used,
-              and the count is followed immediately by a <code>long</code>
-              block <em>size</em> indicating the number of bytes in the
-              block.  This block size permits fast skipping through data,
-              e.g., when projecting a record to a subset of its fields.</p>
-	    
-            <p>The blocked representation permits one to read and write
-              maps larger than can be buffered in memory, since one can
-              start writing items without knowing the full length of the
-              map.</p>
-	
-	  </section>
-
-          <section>
-            <title>Unions</title>
-            <p>A union is encoded by first writing a <code>long</code>
-              value indicating the zero-based position within the
-              union of the schema of its value.  The value is then
-              encoded per the indicated schema within the union.</p>
-            <p>For example, the union
-              schema <code>["string","null"]</code> would encode:</p>
-            <ul>
-              <li><code>null</code> as the integer 1 (the index of
-                "null" in the union, encoded as
-                hex <code>02</code>): <source>02</source></li>
-              <li>the string <code>"a"</code> as zero (the index of
-                "string" in the union), followed by the serialized string:
-                <source>00 02 61</source></li>
-            </ul>
-          </section>
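The two bullet cases above can be sketched for the `["string", "null"]` union; `encode_string_or_null` and `encode_long` are hypothetical helpers, not library functions:

```python
def encode_long(n: int) -> bytes:
    """Avro's variable-length zig-zag encoding."""
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while z > 0x7F:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def encode_string_or_null(value) -> bytes:
    """Union ["string", "null"]: branch index first, then the value's encoding."""
    if value is None:
        return encode_long(1)            # "null" is at index 1 -> hex 02
    data = value.encode("utf-8")
    return encode_long(0) + encode_long(len(data)) + data  # index 0, then string

print(encode_string_or_null(None).hex())  # -> 02
print(encode_string_or_null("a").hex())   # -> 000261
```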
-
-          <section>
-            <title>Fixed</title>
-            <p>Fixed instances are encoded using the number of bytes
-              declared in the schema.</p>
-          </section>
-
-        </section> <!-- end complex types -->
-
-      </section>
-
-      <section id="json_encoding">
-        <title>JSON Encoding</title>
-        
-        <p>Except for unions, the JSON encoding is the same as is used
-        to encode <a href="#schema_record">field default
-        values</a>.</p>
-
-        <p>The value of a union is encoded in JSON as follows:</p>
-
-        <ul>
-          <li>if its type is <code>null</code>, then it is encoded as
-          a JSON null;</li>
-          <li>otherwise it is encoded as a JSON object with one
-          name/value pair whose name is the type's name and whose
-          value is the recursively encoded value.  For Avro's named
-          types (record, fixed or enum) the user-specified name is
-          used; for other types the type name is used.</li>
-        </ul>
-          
-        <p>For example, the union
-          schema <code>["null","string","Foo"]</code>, where Foo is a
-          record name, would encode:</p>
-        <ul>
-          <li><code>null</code> as <code>null</code>;</li>
-          <li>the string <code>"a"</code> as
-            <code>{"string": "a"}</code>; and</li>
-          <li>a Foo instance as <code>{"Foo": {...}}</code>,
-          where <code>{...}</code> indicates the JSON encoding of a
-          Foo instance.</li>
-        </ul>
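The union cases above amount to a one-line wrapper. This is a sketch of the JSON encoding rule only; `json_encode_union` is a name we made up for illustration:

```python
import json

def json_encode_union(value, branch_name: str) -> str:
    """JSON-encode a union value: bare null, otherwise a single-pair object
    whose key is the branch's type (or record/enum/fixed) name."""
    if value is None:
        return "null"
    return json.dumps({branch_name: value})

print(json_encode_union(None, "null"))   # -> null
print(json_encode_union("a", "string"))  # -> {"string": "a"}
```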
-
-        <p>Note that a schema is still required to correctly process
-        JSON-encoded data.  For example, the JSON encoding does not
-        distinguish between <code>int</code>
-        and <code>long</code>, <code>float</code>
-        and <code>double</code>, records and maps, enums and strings,
-        etc.</p>
-
-      </section>
-
-    </section>
-
-    <section id="order">
-      <title>Sort Order</title>
-
-      <p>Avro defines a standard sort order for data.  This permits
-        data written by one system to be efficiently sorted by another
-        system.  This can be an important optimization, as sort order
-        comparisons are sometimes the most frequent per-object
-        operation.  Note also that Avro binary-encoded data can be
-        efficiently ordered without deserializing it to objects.</p>
-
-      <p>Data items may only be compared if they have identical
-        schemas.  Pairwise comparisons are implemented recursively
-        with a depth-first, left-to-right traversal of the schema.
-        The first mismatch encountered determines the order of the
-        items.</p>
-
-      <p>Two items with the same schema are compared according to the
-        following rules.</p>
-      <ul>
-        <li><code>null</code> data is always equal.</li>
-        <li><code>boolean</code> data is ordered with false before true.</li>
-        <li><code>int</code>, <code>long</code>, <code>float</code>
-          and <code>double</code> data is ordered by ascending numeric
-          value.</li>
-        <li><code>bytes</code> and <code>fixed</code> data are
-          compared lexicographically by unsigned 8-bit values.</li>
-        <li><code>string</code> data is compared lexicographically by
-          Unicode code point.  Note that since UTF-8 is used as the
-          binary encoding for strings, sorting of bytes and string
-          binary data is identical.</li>
-        <li><code>array</code> data is compared lexicographically by
-          element.</li>
-        <li><code>enum</code> data is ordered by the symbol's position
-          in the enum schema.  For example, an enum whose symbols are
-          <code>["z", "a"]</code> would sort <code>"z"</code> values
-          before <code>"a"</code> values.</li>
-        <li><code>union</code> data is first ordered by the branch
-          within the union, and, within that, by the type of the
-          branch.  For example, an <code>["int", "string"]</code>
-          union would order all int values before all string values,
-          with the ints and strings themselves ordered as defined
-          above.</li>
-        <li><code>record</code> data is ordered lexicographically by
-          field.  If a field specifies that its order is:
-          <ul>
-            <li><code>"ascending"</code>, then the order of its values
-              is unaltered.</li>
-            <li><code>"descending"</code>, then the order of its values
-              is reversed.</li>
-            <li><code>"ignore"</code>, then its values are ignored
-              when sorting.</li>
-          </ul>
-        </li>
-        <li><code>map</code> data may not be compared.  It is an error
-          to attempt to compare data containing maps unless those maps
-          are in an <code>"order":"ignore"</code> record field.
-        </li>
-      </ul>
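The enum rule above trips people up because it is positional, not alphabetical. A minimal sketch (the helper name is ours) using the `["z", "a"]` example:

```python
def compare_enum(a: str, b: str, symbols) -> int:
    """Order enum values by symbol position in the schema, not by name.
    Returns -1, 0, or 1, like a classic comparator."""
    i, j = symbols.index(a), symbols.index(b)
    return (i > j) - (i < j)

print(compare_enum("z", "a", ["z", "a"]))  # -> -1 ("z" sorts first)
```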
-    </section>
-
-    <section>
-      <title>Object Container Files</title>
-      <p>Avro includes a simple object container file format.  A file
-      has a schema, and all objects stored in the file must be written
-      according to that schema, using binary encoding.  Objects are
-      stored in blocks that may be compressed.  Synchronization markers
-      are used between blocks to permit efficient splitting of files
-      for MapReduce processing.</p>
-
-      <p>Files may include arbitrary user-specified metadata.</p>
-
-      <p>A file consists of:</p>
-      <ul>
-        <li>A <em>file header</em>, followed by</li>
-        <li>one or more <em>file data blocks</em>.</li>
-      </ul>
-
-      <p>A file header consists of:</p>
-      <ul>
-        <li>Four bytes, ASCII 'O', 'b', 'j', followed by 1.</li>
-        <li><em>file metadata</em>, including the schema.</li>
-        <li>The 16-byte, randomly-generated sync marker for this file.</li>
-      </ul>
-
-      <p>File metadata consists of:</p>
-      <ul>
-        <li>A long indicating the number of metadata key/value pairs.</li>
-        <li>For each pair, a string key and bytes value.</li>
-      </ul>
-
-      <p>All metadata properties that start with "avro." are reserved.
-      The following file metadata properties are currently used:</p>
-      <ul>
-        <li><strong>avro.schema</strong> contains the schema of objects
-        stored in the file, as JSON data (required).</li>
-        <li><strong>avro.codec</strong> contains the name of the
-        compression codec used to compress blocks, as a string.
-        Implementations are required to support the following codecs:
-        "null" and "deflate".  If the codec is absent, it is assumed
-        to be "null".  The codecs are described in more detail below.</li>
-      </ul>
-
-      <p>A file header is thus described by the following schema:</p>
-      <source>
-{"type": "record", "name": "org.apache.avro.file.Header",
- "fields" : [
-   {"name": "magic", "type": {"type": "fixed", "name": "Magic", "size": 4}},
-   {"name": "meta", "type": {"type": "map", "values": "bytes"}},
-   {"name": "sync", "type": {"type": "fixed", "name": "Sync", "size": 16}}
-  ]
-}
-      </source>
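As a quick illustration of the header layout, the four magic bytes can be validated before reading anything else. This is a sketch; `check_magic` is our own name, not an Avro library function:

```python
import io

def check_magic(stream) -> None:
    """The magic is ASCII 'O', 'b', 'j' followed by the single byte 1."""
    magic = stream.read(4)
    if magic != b"Obj\x01":
        raise ValueError("not an Avro object container file")

check_magic(io.BytesIO(b"Obj\x01" + b"...rest of file..."))  # passes silently
```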
-
-      <p>A file data block consists of:</p>
-      <ul>
-        <li>A long indicating the count of objects in this block.</li>
-        <li>A long indicating the size in bytes of the serialized objects
-        in the current block, after any codec is applied.</li>
-        <li>The serialized objects.  If a codec is specified, this is
-        compressed by that codec.</li>
-        <li>The file's 16-byte sync marker.</li>
-      </ul>
-          <p>Thus, each block's binary data can be efficiently extracted or skipped without
-          deserializing the contents.  The combination of block size, object counts, and
-          sync markers enables detection of corrupt blocks and helps ensure data integrity.</p>
-      <section>
-      <title>Required Codecs</title>
-        <section>
-        <title>null</title>
-        <p>The "null" codec simply passes through data uncompressed.</p>
-        </section>
-
-        <section>
-        <title>deflate</title>
-        <p>The "deflate" codec writes the data block using the
-        deflate algorithm as specified in 
-        <a href="http://www.isi.edu/in-notes/rfc1951.txt">RFC 1951</a>,
-        and typically implemented using the zlib library.  Note that this
-        format (unlike the "zlib format" in RFC 1950) does not have a
-        checksum.
-        </p>
-        </section>
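The checksum-free raw deflate format (as opposed to the zlib wrapper) is what a negative window-bits value selects in zlib-based implementations. A minimal round-trip sketch, with helper names of our own choosing:

```python
import zlib

def deflate_block(data: bytes) -> bytes:
    """Raw deflate per RFC 1951: wbits of -15 suppresses the zlib
    header and Adler-32 checksum of RFC 1950."""
    c = zlib.compressobj(9, zlib.DEFLATED, -15)
    return c.compress(data) + c.flush()

def inflate_block(data: bytes) -> bytes:
    return zlib.decompress(data, -15)

block = b"avro " * 200
assert inflate_block(deflate_block(block)) == block
```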
-      </section>
-
-    </section>
-
-    <section>
-      <title>Protocol Declaration</title>
-      <p>Avro protocols describe RPC interfaces.  Like schemas, they are
-      defined with JSON text.</p>
-
-      <p>A protocol is a JSON object with the following attributes:</p>
-      <ul>
-        <li><em>protocol</em>, a string, the name of the protocol
-        (required);</li>
-        <li><em>namespace</em>, an optional string that qualifies the name;</li>
-        <li><em>doc</em>, an optional string describing this protocol;</li>
-        <li><em>types</em>, an optional list of definitions of named types
-          (records, enums, fixed and errors).  An error definition is
-          just like a record definition except it uses "error" instead
-          of "record".  Note that forward references to named types
-          are not permitted.</li>
-        <li><em>messages</em>, an optional JSON object whose keys are
-          message names and whose values are objects whose attributes
-          are described below.  No two messages may have the same
-          name.</li>
-      </ul>
-      <p>The name and namespace qualification rules defined for schema objects
-	apply to protocols as well.</p>
-
-      <section>
-        <title>Messages</title>
-        <p>A message has attributes:</p>
-        <ul>
-          <li>a <em>doc</em>, an optional description of the message,</li>
-          <li>a <em>request</em>, a list of named,
-            typed <em>parameter</em> schemas (this has the same form
-            as the fields of a record declaration);</li>
-          <li>a <em>response</em> schema; and</li> 
-          <li>an optional union of <em>error</em> schemas.</li>
-        </ul>
-        <p>A request parameter list is processed equivalently to an
-          anonymous record.  Since record field lists may vary between
-          reader and writer, request parameters may also differ
-          between the caller and responder, and such differences are
-          resolved in the same manner as record field differences.</p>
-      </section>
-      <section>
-        <title>Sample Protocol</title>
-        <p>For example, one may define a simple HelloWorld protocol with:</p>
-        <source>
-{
-  "namespace": "com.acme",
-  "protocol": "HelloWorld",
-  "doc": "Protocol Greetings",
-
-  "types": [
-    {"name": "Greeting", "type": "record", "fields": [
-      {"name": "message", "type": "string"}]},
-    {"name": "Curse", "type": "error", "fields": [
-      {"name": "message", "type": "string"}]}
-  ],
-
-  "messages": {
-    "hello": {
-      "doc": "Say hello.",
-      "request": [{"name": "greeting", "type": "Greeting" }],
-      "response": "Greeting",
-      "errors": ["Curse"]
-    }
-  }
-}
-        </source>
-      </section>
-    </section>
-
-    <section>
-      <title>Protocol Wire Format</title>
-
-      <section>
-        <title>Message Transport</title>
-        <p>Messages may be transmitted via
-        different <em>transport</em> mechanisms.</p>
-
-        <p>To the transport, a <em>message</em> is an opaque byte sequence.</p>
-
-        <p>A transport is a system that supports:</p>
-        <ul>
-          <li><strong>transmission of request messages</strong>
-          </li>
-          <li><strong>receipt of corresponding response messages</strong>
-            <p>Servers will send a response message back to the client
-            corresponding to each request message.  The mechanism of
-            that correspondence is transport-specific.  For example,
-            in HTTP it might be implicit, since HTTP directly supports
-            requests and responses.  But a transport that multiplexes
-            many client threads over a single socket would need to tag
-            messages with unique identifiers.</p>
-          </li>
-        </ul>
-
-        <section>
-          <title>HTTP as Transport</title>
-          <p>When
-            <a href="http://www.w3.org/Protocols/rfc2616/rfc2616.html">HTTP</a>
-            is used as a transport, each Avro message exchange is an
-            HTTP request/response pair.  All messages of an Avro
-            protocol should share a single URL at an HTTP server.
-            Other protocols may also use that URL.  Both normal and
-            error Avro response messages should use the 200 (OK)
-            response code.  The chunked encoding may be used for
-            requests and responses, but, regardless, the Avro request
-            and response are the entire content of an HTTP request and
-            response.  The HTTP Content-Type of requests and responses
-            should be specified as "avro/binary".  Requests should be
-            made using the POST method.</p>
-        </section>
-      </section>
-
-      <section>
-        <title>Message Framing</title>
-        <p>Avro messages are <em>framed</em> as a list of buffers.</p>
-        <p>Framing is a layer between messages and the transport.
-        It exists to optimize certain operations.</p>
-
-        <p>The format of framed message data is:</p>
-        <ul>
-          <li>a series of <em>buffers</em>, where each buffer consists of:
-            <ul>
-              <li>a four-byte, big-endian <em>buffer length</em>, followed by</li>
-              <li>that many bytes of <em>buffer data</em>.</li>
-            </ul>
-          </li>
-          <li>A message is always terminated by a zero-length buffer.</li>
-        </ul>
-
-        <p>Framing is transparent to request and response message
-        formats (described below).  Any message may be presented as a
-        single or multiple buffers.</p>
-
-        <p>Framing can permit readers to more efficiently get
-        different buffers from different sources and writers to
-        more efficiently store different buffers to different
-        destinations.  In particular, it can reduce the number of
-        times large binary objects are copied.  For example, if an RPC
-        parameter consists of a megabyte of file data, that data can
-        be copied directly to a socket from a file descriptor, and, on
-        the other end, it could be written directly to a file
-        descriptor, never entering user space.</p>
-
-        <p>A simple, recommended framing policy is for writers to
-        create a new segment whenever a single binary object is
-        written that is larger than a normal output buffer.  Small
-        objects are then appended in buffers, while larger objects are
-        written as their own buffers.  When a reader then tries to
-        read a large object the runtime can hand it an entire buffer
-        directly, without having to copy it.</p>
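The buffer list described above can be sketched in a few lines; `frame_message` and the 8 KiB buffer size are our own illustrative choices, not mandated by the spec:

```python
import struct

def frame_message(payload: bytes, buffer_size: int = 8192) -> bytes:
    """Split a message into length-prefixed buffers and terminate with a
    zero-length buffer, per the framing layout above."""
    out = bytearray()
    for i in range(0, len(payload), buffer_size):
        chunk = payload[i:i + buffer_size]
        out += struct.pack(">I", len(chunk))  # four-byte, big-endian length
        out += chunk
    out += struct.pack(">I", 0)               # zero-length buffer ends the message
    return bytes(out)

print(frame_message(b"hi"))  # -> b'\x00\x00\x00\x02hi\x00\x00\x00\x00'
```

A real writer following the recommended policy would additionally start a fresh buffer for any single large binary object, so readers can hand it over without copying.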
-      </section>
-
-      <section>
-        <title>Handshake</title>
-
-        <p>RPC requests and responses are prefixed by handshakes.  The
-        purpose of the handshake is to ensure that the client and the
-        server have each other's protocol definition, so that the
-        client can correctly deserialize responses, and the server can
-        correctly deserialize requests.  Both clients and servers
-        should maintain a cache of recently seen protocols, so that,
-        in most cases, a handshake will be completed without extra
-        round-trip network exchanges or the transmission of full
-        protocol text.</p>
-
-        <p>The handshake process uses the following record schemas:</p>
-
-        <source>
-{
-  "type": "record",
-  "name": "HandshakeRequest", "namespace":"org.apache.avro.ipc",
-  "fields": [
-    {"name": "clientHash",
-     "type": {"type": "fixed", "name": "MD5", "size": 16}},
-    {"name": "clientProtocol", "type": ["null", "string"]},
-    {"name": "serverHash", "type": "MD5"},
-    {"name": "meta", "type": ["null", {"type": "map", "values": "bytes"}]}
-  ]
-}
-{
-  "type": "record",
-  "name": "HandshakeResponse", "namespace": "org.apache.avro.ipc",
-  "fields": [
-    {"name": "match",
-     "type": {"type": "enum", "name": "HandshakeMatch",
-              "symbols": ["BOTH", "CLIENT", "NONE"]}},
-    {"name": "serverProtocol",
-     "type": ["null", "string"]},
-    {"name": "serverHash",
-     "type": ["null", {"type": "fixed", "name": "MD5", "size": 16}]},
-    {"name": "meta",
-     "type": ["null", {"type": "map", "values": "bytes"}]}
-  ]
-}
-        </source>
-
-        <ul>
-          <li>A client first prefixes each request with
-          a <code>HandshakeRequest</code> containing just the hash of
-          its protocol and of the server's protocol
-          (<code>clientHash!=null, clientProtocol=null,
-          serverHash!=null</code>), where the hashes are 128-bit MD5
-          hashes of the JSON protocol text. If a client has never
-          connected to a given server, it sends its hash as a guess of
-          the server's hash, otherwise it sends the hash that it
-          previously obtained from this server.</li>
-
-          <li>The server responds with
-          a <code>HandshakeResponse</code> containing one of:
-            <ul>
-              <li><code>match=BOTH, serverProtocol=null,
-              serverHash=null</code> if the client sent the valid hash
-              of the server's protocol and the server knows what
-              protocol corresponds to the client's hash. In this case,
-              the request is complete and the response data
-              immediately follows the HandshakeResponse.</li>
-
-              <li><code>match=CLIENT, serverProtocol!=null,
-              serverHash!=null</code> if the server has previously
-              seen the client's protocol, but the client sent an
-              incorrect hash of the server's protocol. The request is
-              complete and the response data immediately follows the
-              HandshakeResponse. The client must use the returned
-              protocol to process the response and should also cache
-              that protocol and its hash for future interactions with
-              this server.</li>
-
-              <li><code>match=NONE</code> if the server has not
-              previously seen the client's protocol.
-              The <code>serverHash</code>
-              and <code>serverProtocol</code> may also be non-null if
-              the server's protocol hash was incorrect.
-
-              <p>In this case the client must then re-submit its request
-              with its protocol text (<code>clientHash!=null,
-              clientProtocol!=null, serverHash!=null</code>) and the
-              server should respond with a successful match
-              (<code>match=BOTH, serverProtocol=null,
-              serverHash=null</code>) as above.</p>
-              </li>
-            </ul>
-          </li>
-        </ul>
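The hashes exchanged above are plain MD5 digests of the protocol's JSON text, which fit the handshake's `fixed("MD5", size: 16)` field. A sketch with a hypothetical protocol string (any real client hashes the exact text it exchanges):

```python
import hashlib

# Hypothetical protocol text for illustration only.
protocol_text = '{"protocol": "HelloWorld", "namespace": "com.acme", "messages": {}}'

client_hash = hashlib.md5(protocol_text.encode("utf-8")).digest()
assert len(client_hash) == 16  # matches the fixed 16-byte handshake field
```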
-
-        <p>The <code>meta</code> field is reserved for future
-        handshake enhancements.</p>
-
-      </section>
-
-      <section>
-        <title>Call Format</title>
-        <p>A <em>call</em> consists of a request message paired with
-        its resulting response or error message.  Requests and
-        responses contain extensible metadata, and both kinds of
-        messages are framed as described above.</p>
-
-        <p>The format of a call request is:</p>
-        <ul>
-          <li><em>request metadata</em>, a map with values of
-          type <code>bytes</code></li>
-          <li>the <em>message name</em>, an Avro string,
-          followed by</li>
-          <li>the message <em>parameters</em>.  Parameters are
-          serialized according to the message's request
-          declaration.</li>
-        </ul>
-
-        <p>The format of a call response is:</p>
-        <ul>
-          <li><em>response metadata</em>, a map with values of
-          type <code>bytes</code></li>
-          <li>a one-byte <em>error flag</em> boolean, followed by either:
-            <ul>
-              <li>if the error flag is false, the message <em>response</em>,
-                serialized per the message's response schema.</li>
-              <li>if the error flag is true, the <em>error</em>,
-              serialized per the message's error union schema.</li>
-            </ul>
-          </li>
-        </ul>
-      </section>
-
-    </section>
-
-    <section>
-      <title>Schema Resolution</title>
-
-      <p>A reader of Avro data, whether from an RPC or a file, can
-        always parse that data because its schema is provided.  But
-        that schema may not be exactly the schema that was expected.
-        For example, if the data was written with a different version
-        of the software than the one that reads it, then records may
-        have had fields added or removed.  This section specifies how
-        such schema differences should be resolved.</p>
-
-      <p>We call the schema used to write the data
-        the <em>writer's</em> schema, and the schema that the
-        application expects the <em>reader's</em> schema.  Differences
-        between these should be resolved as follows:</p>
-
-      <ul>
-        <li><p>It is an error if the two schemas do not <em>match</em>.</p>
-          <p>To match, one of the following must hold:</p>
-          <ul>
-            <li>both schemas are arrays whose item types match</li>
-            <li>both schemas are maps whose value types match</li>
-            <li>both schemas are enums whose names match</li>
-            <li>both schemas are fixed whose sizes and names match</li>
-            <li>both schemas are records with the same name</li>
-            <li>either schema is a union</li>
-            <li>both schemas have the same primitive type</li>
-            <li>the writer's schema may be <em>promoted</em> to the
-              reader's as follows:
-              <ul>
-                <li>int is promotable to long, float, or double</li>
-                <li>long is promotable to float or double</li>
-                <li>float is promotable to double</li>
-                </ul>
-            </li>
-          </ul>
-        </li>
-
-        <li><strong>if both are records:</strong>
-          <ul>
-            <li>the ordering of fields may be different: fields are
-              matched by name.</li>
-            
-            <li>schemas for fields with the same name in both records
-              are resolved recursively.</li>
-            
-            <li>if the writer's record contains a field with a name
-              not present in the reader's record, the writer's value
-              for that field is ignored.</li>
-            
-            <li>if the reader's record schema has a field that
-              contains a default value, and the writer's schema does not
-              have a field with the same name, then the reader should
-              use the default value from its field.</li>
-
-            <li>if the reader's record schema has a field with no
-              default value, and the writer's schema does not have a field
-              with the same name, an error is signalled.</li>
-          </ul>
-        </li>
-
-        <li><strong>if both are enums:</strong>
-          <p>if the writer's symbol is not present in the reader's
-            enum, then an error is signalled.</p>
-        </li>
-
-        <li><strong>if both are arrays:</strong>
-          <p>This resolution algorithm is applied recursively to the reader's and
-            writer's array item schemas.</p>
-        </li>
-
-        <li><strong>if both are maps:</strong>
-          <p>This resolution algorithm is applied recursively to the reader's and
-            writer's value schemas.</p>
-        </li>
-
-        <li><strong>if both are unions:</strong>
-          <p>The first schema in the reader's union that matches the
-            selected writer's union schema is recursively resolved
-            against it.  If none match, an error is signalled.</p>
-        </li>
-
-        <li><strong>if reader's is a union, but writer's is not</strong>
-          <p>The first schema in the reader's union that matches the
-            writer's schema is recursively resolved against it.  If none
-            match, an error is signalled.</p>
-        </li>
-          
-        <li><strong>if writer's is a union, but reader's is not</strong>
-          <p>If the reader's schema matches the selected writer's schema,
-            it is recursively resolved against it.  If they do not
-            match, an error is signalled.</p>
-        </li>
-          
-      </ul>
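The primitive-promotion rules in the match list above reduce to a small lookup table; the names below are ours, for illustration:

```python
# Writer -> allowed reader promotions, per the match rules above.
PROMOTIONS = {
    "int": {"long", "float", "double"},
    "long": {"float", "double"},
    "float": {"double"},
}

def primitive_match(writer: str, reader: str) -> bool:
    """Primitives match if identical, or if the writer's type is
    promotable to the reader's type."""
    return writer == reader or reader in PROMOTIONS.get(writer, set())

print(primitive_match("int", "double"))   # -> True
print(primitive_match("double", "float")) # -> False (no demotion)
```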
-
-      <p>A schema's "doc" fields are ignored for the purposes of schema resolution.  Hence,
-        the "doc" portion of a schema may be dropped at serialization.</p>
-
-    </section>
-  </body>
-</document>
diff --git a/branch-1.3/doc/src/content/xdocs/tabs.xml b/branch-1.3/doc/src/content/xdocs/tabs.xml
deleted file mode 100644
index 39a026e..0000000
--- a/branch-1.3/doc/src/content/xdocs/tabs.xml
+++ /dev/null
@@ -1,39 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<!DOCTYPE tabs PUBLIC "-//APACHE//DTD Cocoon Documentation Tab V1.1//EN" "http://forrest.apache.org/dtd/tab-cocoon-v11.dtd" [
-  <!ENTITY % avro-entities PUBLIC "-//Apache//ENTITIES Avro//EN"
-	   "../../../../build/avro.ent">
-  %avro-entities;
-]>
-
-<tabs software="Avro"
-      title="Avro"
-      copyright="The Apache Software Foundation"
-      xmlns:xlink="http://www.w3.org/1999/xlink">
-
-  <!-- The rules are:
-    @dir will always have /index.html added.
-    @href is not modified unless it is root-relative and obviously specifies a
-    directory (ends in '/'), in which case /index.html will be added
-  -->
-
-  <tab label="Project" href="http://hadoop.apache.org/avro/" />
-  <tab label="Wiki" href="http://wiki.apache.org/hadoop/Avro/" />
-  <tab label="Avro &AvroVersion; Documentation" dir="" />  
-  
-</tabs>
diff --git a/branch-1.3/doc/src/resources/images/avro-logo.png b/branch-1.3/doc/src/resources/images/avro-logo.png
deleted file mode 100644
index c9b7e75..0000000
--- a/branch-1.3/doc/src/resources/images/avro-logo.png
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/doc/src/resources/images/favicon.ico b/branch-1.3/doc/src/resources/images/favicon.ico
deleted file mode 100644
index 161bcf7..0000000
--- a/branch-1.3/doc/src/resources/images/favicon.ico
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/doc/src/resources/images/hadoop-logo.jpg b/branch-1.3/doc/src/resources/images/hadoop-logo.jpg
deleted file mode 100644
index 809525d..0000000
--- a/branch-1.3/doc/src/resources/images/hadoop-logo.jpg
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/doc/src/skinconf.xml b/branch-1.3/doc/src/skinconf.xml
deleted file mode 100644
index 20e1984..0000000
--- a/branch-1.3/doc/src/skinconf.xml
+++ /dev/null
@@ -1,350 +0,0 @@
-<?xml version="1.0"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-<!--
-Skin configuration file. This file contains details of your project,
-which will be used to configure the chosen Forrest skin.
--->
-
-<!DOCTYPE skinconfig PUBLIC "-//APACHE//DTD Skin Configuration V0.6-3//EN" "http://forrest.apache.org/dtd/skinconfig-v06-3.dtd">
-<skinconfig>
-  <!-- To enable lucene search add provider="lucene" (default is google).
-    Add box-location="alt" to move the search box to an alternate location
-    (if the skin supports it) and box-location="all" to show it in all
-    available locations on the page.  Remove the <search> element to show
-    no search box. @domain will enable sitesearch for the specific domain with google.
-    In other words google will search the @domain for the query string.
-
-  -->
-  <search name="Avro" domain="hadoop.apache.org" provider="google"/>
-
-  <!-- Disable the print link? If enabled, invalid HTML 4.0.1 -->
-  <disable-print-link>true</disable-print-link>  
-  <!-- Disable the PDF link? -->
-  <disable-pdf-link>false</disable-pdf-link>
-  <!-- Disable the POD link? -->
-  <disable-pod-link>true</disable-pod-link>
-  <!-- Disable the Text link? FIXME: NOT YET IMPLEMENETED. -->
-  <disable-txt-link>true</disable-txt-link>
-  <!-- Disable the xml source link? -->
-  <!-- The xml source link makes it possible to access the xml rendition
-    of the source frim the html page, and to have it generated statically.
-    This can be used to enable other sites and services to reuse the
-    xml format for their uses. Keep this disabled if you don't want other
-    sites to easily reuse your pages.-->
-  <disable-xml-link>true</disable-xml-link>
-
-  <!-- Disable navigation icons on all external links? -->
-  <disable-external-link-image>true</disable-external-link-image>
-
-  <!-- Disable w3c compliance links? 
-    Use e.g. align="center" to move the compliance links logos to 
-    an alternate location default is left.
-    (if the skin supports it) -->
-  <disable-compliance-links>true</disable-compliance-links>
-
-  <!-- Render mailto: links unrecognisable by spam harvesters? -->
-  <obfuscate-mail-links>false</obfuscate-mail-links>
-
-  <!-- Disable the javascript facility to change the font size -->
-  <disable-font-script>true</disable-font-script>
-
-  <!-- project logo -->
-  <project-name>Avro</project-name>
-  <project-description>Serialization System</project-description>
-  <project-url>http://hadoop.apache.org/avro/</project-url>
-  <project-logo>images/avro-logo.png</project-logo>
-
-  <!-- group logo -->
-  <group-name>Hadoop</group-name>
-  <group-description>Apache Hadoop</group-description>
-  <group-url>http://hadoop.apache.org/</group-url>
-  <group-logo>images/hadoop-logo.jpg</group-logo>
-
-  <!-- optional host logo (e.g. sourceforge logo)
-       default skin: renders it at the bottom-left corner -->
-  <host-url></host-url>
-  <host-logo></host-logo>
-
-  <!-- relative url of a favicon file, normally favicon.ico -->
-  <favicon-url>images/favicon.ico</favicon-url>
-
-  <!-- The following are used to construct a copyright statement -->
-  <year>2009</year>
-  <vendor>The Apache Software Foundation.</vendor>
-  <copyright-link>http://www.apache.org/licenses/</copyright-link>
-
-  <!-- Some skins use this to form a 'breadcrumb trail' of links.
-    Use location="alt" to move the trail to an alternate location
-    (if the skin supports it).
-	  Omit the location attribute to display the trail in the default location.
-	  Use location="none" to not display the trail (if the skin supports it).
-    For some skins just set the attributes to blank.
-  -->
-  <trail>
-    <link1 name="Apache" href="http://www.apache.org/"/>
-    <link2 name="Hadoop" href="http://hadoop.apache.org/"/>
-    <link3 name="Avro" href="http://hadoop.apache.org/avro/"/>
-  </trail>
-
-  <!-- Configure the TOC, i.e. the Table of Contents.
-  @max-depth
-   how many "section" levels need to be included in the
-   generated Table of Contents (TOC). 
-  @min-sections
-   Minimum required to create a TOC.
-  @location ("page","menu","page,menu", "none")
-   Where to show the TOC.
-  -->
-  <toc max-depth="2" min-sections="1" location="page"/>
-
-  <!-- Heading types can be clean|underlined|boxed  -->
-  <headings type="clean"/>
-  
-  <!-- The optional feedback element will be used to construct a
-    feedback link in the footer with the page pathname appended:
-    <a href="@href">{@to}</a>
-  <feedback to="webmaster@foo.com"
-    href="mailto:webmaster@foo.com?subject=Feedback&#160;" >
-    Send feedback about the website to:
-  </feedback>
-    -->
-  <!--
-    extra-css - here you can define custom css-elements that are 
-    a. overriding the fallback elements or 
-    b. adding the css definition from new elements that you may have 
-       used in your documentation.
-    -->
-  <extra-css>
-    <!--Example of b. 
-        To define the css definition of a new element that you may have used
-        in the class attribute of a <p> node. 
-        e.g. <p class="quote"/>
-    -->
-    p.quote {
-      margin-left: 2em;
-      padding: .5em;
-      background-color: #f0f0f0;
-      font-family: monospace;
-    }
-    table.right {
-      text-align: right;
-      display: block;
-    }
-  </extra-css>
-
-  <colors>
-  <!-- These values are used for the generated CSS files. -->
-
-  <!-- Krysalis -->
-<!--
-    <color name="header"    value="#FFFFFF"/>
-
-    <color name="tab-selected" value="#a5b6c6" link="#000000" vlink="#000000" hlink="#000000"/>
-    <color name="tab-unselected" value="#F7F7F7"  link="#000000" vlink="#000000" hlink="#000000"/>
-    <color name="subtab-selected" value="#a5b6c6"  link="#000000" vlink="#000000" hlink="#000000"/>
-    <color name="subtab-unselected" value="#a5b6c6"  link="#000000" vlink="#000000" hlink="#000000"/>
-
-    <color name="heading" value="#a5b6c6"/>
-    <color name="subheading" value="#CFDCED"/>
-        
-    <color name="navstrip" value="#CFDCED" font="#000000" link="#000000" vlink="#000000" hlink="#000000"/>
-    <color name="toolbox" value="#a5b6c6"/>
-    <color name="border" value="#a5b6c6"/>
-        
-    <color name="menu" value="#F7F7F7" link="#000000" vlink="#000000" hlink="#000000"/>    
-    <color name="dialog" value="#F7F7F7"/>
-            
-    <color name="body"    value="#ffffff" link="#0F3660" vlink="#009999" hlink="#000066"/>
-    
-    <color name="table" value="#a5b6c6"/>    
-    <color name="table-cell" value="#ffffff"/>    
-    <color name="highlight" value="#ffff00"/>
-    <color name="fixme" value="#cc6600"/>
-    <color name="note" value="#006699"/>
-    <color name="warning" value="#990000"/>
-    <color name="code" value="#a5b6c6"/>
-        
-    <color name="footer" value="#a5b6c6"/>
--->
-  
-  <!-- Forrest -->
-<!--
-    <color name="header"    value="#294563"/>
-
-    <color name="tab-selected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
-    <color name="tab-unselected" value="#b5c7e7" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
-    <color name="subtab-selected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
-    <color name="subtab-unselected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
-
-    <color name="heading" value="#294563"/>
-    <color name="subheading" value="#4a6d8c"/>
-        
-    <color name="navstrip" value="#cedfef" font="#0F3660" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
-    <color name="toolbox" value="#4a6d8c"/>
-    <color name="border" value="#294563"/>
-    
-    <color name="menu" value="#4a6d8c" font="#cedfef" link="#ffffff" vlink="#ffffff" hlink="#ffcf00"/>    
-    <color name="dialog" value="#4a6d8c"/>
-            
-    <color name="body" value="#ffffff"  link="#0F3660" vlink="#009999" hlink="#000066"/>
-    
-    <color name="table" value="#7099C5"/>    
-    <color name="table-cell" value="#f0f0ff"/>    
-    <color name="highlight" value="#ffff00"/>
-    <color name="fixme" value="#cc6600"/>
-    <color name="note" value="#006699"/>
-    <color name="warning" value="#990000"/>
-    <color name="code" value="#CFDCED"/>
-        
-    <color name="footer" value="#cedfef"/>
--->
-
-  <!-- Collabnet --> 
-<!--
-    <color name="header"    value="#003366"/>
-
-    <color name="tab-selected" value="#dddddd" link="#555555" vlink="#555555" hlink="#555555"/>
-    <color name="tab-unselected" value="#999999" link="#ffffff" vlink="#ffffff" hlink="#ffffff"/>
-    <color name="subtab-selected" value="#cccccc" link="#000000" vlink="#000000" hlink="#000000"/>
-    <color name="subtab-unselected" value="#cccccc" link="#555555" vlink="#555555" hlink="#555555"/>
-
-    <color name="heading" value="#003366"/>
-    <color name="subheading" value="#888888"/>
-    
-    <color name="navstrip" value="#dddddd" font="#555555"/>
-    <color name="toolbox" value="#dddddd" font="#555555"/>
-    <color name="border" value="#999999"/>
-    
-    <color name="menu" value="#ffffff"/>    
-    <color name="dialog" value="#eeeeee"/>
-            
-    <color name="body"      value="#ffffff"/>
-    
-    <color name="table" value="#ccc"/>    
-    <color name="table-cell" value="#ffffff"/>   
-    <color name="highlight" value="#ffff00"/>
-    <color name="fixme" value="#cc6600"/>
-    <color name="note" value="#006699"/>
-    <color name="warning" value="#990000"/>
-    <color name="code" value="#003366"/>
-        
-    <color name="footer" value="#ffffff"/>
--->
- <!-- Lenya using pelt-->
-<!--
-    <color name="header" value="#ffffff"/>
-
-    <color name="tab-selected" value="#4C6C8F" link="#ffffff" vlink="#ffffff" hlink="#ffffff"/>
-    <color name="tab-unselected" value="#E5E4D9" link="#000000" vlink="#000000" hlink="#000000"/>
-    <color name="subtab-selected" value="#000000" link="#000000" vlink="#000000" hlink="#000000"/>
-    <color name="subtab-unselected" value="#E5E4D9" link="#000000" vlink="#000000" hlink="#000000"/>
-
-    <color name="heading" value="#E5E4D9"/>
-    <color name="subheading" value="#000000"/>
-    <color name="published" value="#4C6C8F" font="#FFFFFF"/>
-    <color name="feedback" value="#4C6C8F" font="#FFFFFF" align="center"/>
-    <color name="navstrip" value="#E5E4D9" font="#000000"/>
-
-    <color name="toolbox" value="#CFDCED" font="#000000"/>
-
-    <color name="border" value="#999999"/>
-    <color name="menu" value="#4C6C8F" font="#ffffff" link="#ffffff" vlink="#ffffff" hlink="#ffffff" current="#FFCC33" />    
-    <color name="menuheading" value="#cfdced" font="#000000" />
-    <color name="searchbox" value="#E5E4D9" font="#000000"/>
-    
-    <color name="dialog" value="#CFDCED"/>
-    <color name="body" value="#ffffff" />            
-    
-    <color name="table" value="#ccc"/>    
-    <color name="table-cell" value="#ffffff"/>   
-    <color name="highlight" value="#ffff00"/>
-    <color name="fixme" value="#cc6600"/>
-    <color name="note" value="#006699"/>
-    <color name="warning" value="#990000"/>
-    <color name="code" value="#003366"/>
-        
-    <color name="footer" value="#E5E4D9"/>
--->
-  </colors>
- 
-  <!-- Settings specific to PDF output. -->
-  <pdf>
-    <!-- 
-       Supported page sizes are a0, a1, a2, a3, a4, a5, executive,
-       folio, legal, ledger, letter, quarto, tabloid (default letter).
-       Supported page orientations are portrait, landscape (default
-       portrait).
-       Supported text alignments are left, right, justify (default left).
-    -->
-    <page size="letter" orientation="portrait" text-align="left"/>
-
-    <!--
-       Margins can be specified for top, bottom, inner, and outer
-       edges. If double-sided="false", the inner edge is always left
-       and the outer is always right. If double-sided="true", the
-       inner edge will be left on odd pages, right on even pages,
-       the outer edge vice versa.
-       Specified below are the default settings.
-    -->
-    <margins double-sided="false">
-      <top>1in</top>
-      <bottom>1in</bottom>
-      <inner>1.25in</inner>
-      <outer>1in</outer>
-    </margins>
-
-    <!--
-      Print the URL text next to all links going outside the file
-    -->
-    <show-external-urls>false</show-external-urls>
-
-    <!--
-      Disable the copyright footer on each page of the PDF.
-      A footer is composed for each page. By default, a "credit" with role=pdf
-      will be used, as explained below. Otherwise a copyright statement
-      will be generated. This latter can be disabled.
-    -->
-    <disable-copyright-footer>false</disable-copyright-footer>
-  </pdf>
-
-  <!-- Credits are typically rendered as a set of small clickable
-    images in the page footer.
-    Use box-location="alt" to move the credit to an alternate location
-    (if the skin supports it). 
-  -->
-  <credits>
-    <credit box-location="alt">
-      <name>Built with Apache Forrest</name>
-      <url>http://forrest.apache.org/</url>
-      <image>images/built-with-forrest-button.png</image>
-      <width>88</width>
-      <height>31</height>
-    </credit>
-    <!-- A credit with @role="pdf" will be used to compose a footer
-     for each page in the PDF, using either "name" or "url" or both.
-    -->
-    <!--
-    <credit role="pdf">
-      <name>Built with Apache Forrest</name>
-      <url>http://forrest.apache.org/</url>
-    </credit>
-    -->
-  </credits>
-
-</skinconfig>
diff --git a/branch-1.3/lang/c++/.gitignore b/branch-1.3/lang/c++/.gitignore
deleted file mode 100644
index 403b1d0..0000000
--- a/branch-1.3/lang/c++/.gitignore
+++ /dev/null
@@ -1,5 +0,0 @@
-autom4te.cache
-Makefile.in
-configure
-INSTALL
-aclocal.m4
diff --git a/branch-1.3/lang/c++/AUTHORS b/branch-1.3/lang/c++/AUTHORS
deleted file mode 100644
index 7b45e94..0000000
--- a/branch-1.3/lang/c++/AUTHORS
+++ /dev/null
@@ -1,4 +0,0 @@
-
-See http://hadoop.apache.org/avro/ for a list of authors
-
-
diff --git a/branch-1.3/lang/c++/COPYING b/branch-1.3/lang/c++/COPYING
deleted file mode 100644
index d645695..0000000
--- a/branch-1.3/lang/c++/COPYING
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/branch-1.3/lang/c++/ChangeLog b/branch-1.3/lang/c++/ChangeLog
deleted file mode 100644
index 317b99f..0000000
--- a/branch-1.3/lang/c++/ChangeLog
+++ /dev/null
@@ -1 +0,0 @@
-Refer to CHANGES.txt in the root of avro repository for change log
diff --git a/branch-1.3/lang/c++/Doxyfile b/branch-1.3/lang/c++/Doxyfile
deleted file mode 100644
index 26b23d3..0000000
--- a/branch-1.3/lang/c++/Doxyfile
+++ /dev/null
@@ -1,1226 +0,0 @@
-# Doxyfile 1.4.3
-
-# This file describes the settings to be used by the documentation system
-# doxygen (www.doxygen.org) for a project
-#
-# All text after a hash (#) is considered a comment and will be ignored
-# The format is:
-#       TAG = value [value, ...]
-# For lists items can also be appended using:
-#       TAG += value [value, ...]
-# Values that contain spaces should be placed between quotes (" ")
-
-#---------------------------------------------------------------------------
-# Project related configuration options
-#---------------------------------------------------------------------------
-
-# The PROJECT_NAME tag is a single word (or a sequence of words surrounded 
-# by quotes) that should identify the project.
-
-PROJECT_NAME           =  "Avro C++"
-
-# The PROJECT_NUMBER tag can be used to enter a project or revision number. 
-# This could be handy for archiving the generated documentation or 
-# if some version control system is used.
-
-PROJECT_NUMBER         = 
-
-# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) 
-# base path where the generated documentation will be put. 
-# If a relative path is entered, it will be relative to the location 
-# where doxygen was started. If left blank the current directory will be used.
-
-OUTPUT_DIRECTORY       =  doc
-
-# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 
-# 4096 sub-directories (in 2 levels) under the output directory of each output 
-# format and will distribute the generated files over these directories. 
-# Enabling this option can be useful when feeding doxygen a huge amount of 
-# source files, where putting all generated files in the same directory would 
-# otherwise cause performance problems for the file system.
-
-CREATE_SUBDIRS         = NO
-
-# The OUTPUT_LANGUAGE tag is used to specify the language in which all 
-# documentation generated by doxygen is written. Doxygen will use this 
-# information to generate all constant output in the proper language. 
-# The default language is English, other supported languages are: 
-# Brazilian, Catalan, Chinese, Chinese-Traditional, Croatian, Czech, Danish, 
-# Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese, 
-# Japanese-en (Japanese with English messages), Korean, Korean-en, Norwegian, 
-# Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish, 
-# Swedish, and Ukrainian.
-
-OUTPUT_LANGUAGE        = English
-
-# This tag can be used to specify the encoding used in the generated output. 
-# The encoding is not always determined by the language that is chosen, 
-# but also whether or not the output is meant for Windows or non-Windows users. 
-# In case there is a difference, setting the USE_WINDOWS_ENCODING tag to YES 
-# forces the Windows encoding (this is the default for the Windows binary), 
-# whereas setting the tag to NO uses a Unix-style encoding (the default for 
-# all platforms other than Windows).
-
-USE_WINDOWS_ENCODING   = NO
-
-# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will 
-# include brief member descriptions after the members that are listed in 
-# the file and class documentation (similar to JavaDoc). 
-# Set to NO to disable this.
-
-BRIEF_MEMBER_DESC      = YES
-
-# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend 
-# the brief description of a member or function before the detailed description. 
-# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the 
-# brief descriptions will be completely suppressed.
-
-REPEAT_BRIEF           = YES
-
-# This tag implements a quasi-intelligent brief description abbreviator 
-# that is used to form the text in various listings. Each string 
-# in this list, if found as the leading text of the brief description, will be 
-# stripped from the text and the result after processing the whole list, is 
-# used as the annotated text. Otherwise, the brief description is used as-is. 
-# If left blank, the following values are used ("$name" is automatically 
-# replaced with the name of the entity): "The $name class" "The $name widget" 
-# "The $name file" "is" "provides" "specifies" "contains" 
-# "represents" "a" "an" "the"
-
-ABBREVIATE_BRIEF       = 
-
-# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then 
-# Doxygen will generate a detailed section even if there is only a brief 
-# description.
-
-ALWAYS_DETAILED_SEC    = NO
-
-# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all 
-# inherited members of a class in the documentation of that class as if those 
-# members were ordinary class members. Constructors, destructors and assignment 
-# operators of the base classes will not be shown.
-
-INLINE_INHERITED_MEMB  = NO
-
-# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full 
-# path before files name in the file list and in the header files. If set 
-# to NO the shortest path that makes the file name unique will be used.
-
-FULL_PATH_NAMES        = YES
-
-# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag 
-# can be used to strip a user-defined part of the path. Stripping is 
-# only done if one of the specified strings matches the left-hand part of 
-# the path. The tag can be used to show relative paths in the file list. 
-# If left blank the directory from which doxygen is run is used as the 
-# path to strip.
-
-STRIP_FROM_PATH        = 
-
-# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of 
-# the path mentioned in the documentation of a class, which tells 
-# the reader which header file to include in order to use a class. 
-# If left blank only the name of the header file containing the class 
-# definition is used. Otherwise one should specify the include paths that 
-# are normally passed to the compiler using the -I flag.
-
-STRIP_FROM_INC_PATH    = 
-
-# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter 
-# (but less readable) file names. This can be useful if your file system 
-# doesn't support long names, as on DOS, Mac, or CD-ROM.
-
-SHORT_NAMES            = NO
-
-# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen 
-# will interpret the first line (until the first dot) of a JavaDoc-style 
-# comment as the brief description. If set to NO, the JavaDoc 
-# comments will behave just like the Qt-style comments (thus requiring an 
-# explicit @brief command for a brief description).
-
-JAVADOC_AUTOBRIEF      = YES
-
-# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen 
-# treat a multi-line C++ special comment block (i.e. a block of //! or /// 
-# comments) as a brief description. This used to be the default behaviour. 
-# The new default is to treat a multi-line C++ comment block as a detailed 
-# description. Set this tag to YES if you prefer the old behaviour instead.
-
-MULTILINE_CPP_IS_BRIEF = NO
-
-# If the DETAILS_AT_TOP tag is set to YES then Doxygen 
-# will output the detailed description near the top, like JavaDoc.
-# If set to NO, the detailed description appears after the member 
-# documentation.
-
-DETAILS_AT_TOP         = NO
-
-# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented 
-# member inherits the documentation from any documented member that it 
-# re-implements.
-
-INHERIT_DOCS           = YES
-
-# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC 
-# tag is set to YES, then doxygen will reuse the documentation of the first 
-# member in the group (if any) for the other members of the group. By default 
-# all members of a group must be documented explicitly.
-
-DISTRIBUTE_GROUP_DOC   = NO
-
-# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce 
-# a new page for each member. If set to NO, the documentation of a member will 
-# be part of the file/class/namespace that contains it.
-
-SEPARATE_MEMBER_PAGES  = NO
-
-# The TAB_SIZE tag can be used to set the number of spaces in a tab. 
-# Doxygen uses this value to replace tabs by spaces in code fragments.
-
-TAB_SIZE               = 4
-
-# This tag can be used to specify a number of aliases that acts 
-# as commands in the documentation. An alias has the form "name=value". 
-# For example adding "sideeffect=\par Side Effects:\n" will allow you to 
-# put the command \sideeffect (or @sideeffect) in the documentation, which 
-# will result in a user-defined paragraph with heading "Side Effects:". 
-# You can put \n's in the value part of an alias to insert newlines.
-
-ALIASES                = 
-
-# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C 
-# sources only. Doxygen will then generate output that is more tailored for C. 
-# For instance, some of the names that are used will be different. The list 
-# of all members will be omitted, etc.
-
-OPTIMIZE_OUTPUT_FOR_C  = NO
-
-# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java sources 
-# only. Doxygen will then generate output that is more tailored for Java. 
-# For instance, namespaces will be presented as packages, qualified scopes 
-# will look different, etc.
-
-OPTIMIZE_OUTPUT_JAVA   = NO
-
-# Set the SUBGROUPING tag to YES (the default) to allow class member groups of 
-# the same type (for instance a group of public functions) to be put as a 
-# subgroup of that type (e.g. under the Public Functions section). Set it to 
-# NO to prevent subgrouping. Alternatively, this can be done per class using 
-# the \nosubgrouping command.
-
-SUBGROUPING            = YES
-
-#---------------------------------------------------------------------------
-# Build related configuration options
-#---------------------------------------------------------------------------
-
-# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in 
-# documentation are documented, even if no documentation was available. 
-# Private class members and static file members will be hidden unless 
-# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES
-
-EXTRACT_ALL            = NO
-
-# If the EXTRACT_PRIVATE tag is set to YES all private members of a class 
-# will be included in the documentation.
-
-EXTRACT_PRIVATE        = NO
-
-# If the EXTRACT_STATIC tag is set to YES all static members of a file 
-# will be included in the documentation.
-
-EXTRACT_STATIC         = NO
-
-# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) 
-# defined locally in source files will be included in the documentation. 
-# If set to NO only classes defined in header files are included.
-
-EXTRACT_LOCAL_CLASSES  = YES
-
-# This flag is only useful for Objective-C code. When set to YES local 
-# methods, which are defined in the implementation section but not in 
-# the interface are included in the documentation. 
-# If set to NO (the default) only methods in the interface are included.
-
-EXTRACT_LOCAL_METHODS  = NO
-
-# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all 
-# undocumented members of documented classes, files or namespaces. 
-# If set to NO (the default) these members will be included in the 
-# various overviews, but no documentation section is generated. 
-# This option has no effect if EXTRACT_ALL is enabled.
-
-HIDE_UNDOC_MEMBERS     = NO
-
-# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all 
-# undocumented classes that are normally visible in the class hierarchy. 
-# If set to NO (the default) these classes will be included in the various 
-# overviews. This option has no effect if EXTRACT_ALL is enabled.
-
-HIDE_UNDOC_CLASSES     = NO
-
-# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all 
-# friend (class|struct|union) declarations. 
-# If set to NO (the default) these declarations will be included in the 
-# documentation.
-
-HIDE_FRIEND_COMPOUNDS  = NO
-
-# If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any 
-# documentation blocks found inside the body of a function. 
-# If set to NO (the default) these blocks will be appended to the 
-# function's detailed documentation block.
-
-HIDE_IN_BODY_DOCS      = NO
-
-# The INTERNAL_DOCS tag determines if documentation 
-# that is typed after a \internal command is included. If the tag is set 
-# to NO (the default) then the documentation will be excluded. 
-# Set it to YES to include the internal documentation.
-
-INTERNAL_DOCS          = NO
-
-# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate 
-# file names in lower-case letters. If set to YES upper-case letters are also 
-# allowed. This is useful if you have classes or files whose names only differ 
-# in case and if your file system supports case sensitive file names. Windows 
-# and Mac users are advised to set this option to NO.
-
-CASE_SENSE_NAMES       = YES
-
-# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen 
-# will show members with their full class and namespace scopes in the 
-# documentation. If set to YES the scope will be hidden.
-
-HIDE_SCOPE_NAMES       = NO
-
-# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen 
-# will put a list of the files that are included by a file in the documentation 
-# of that file.
-
-SHOW_INCLUDE_FILES     = YES
-
-# If the INLINE_INFO tag is set to YES (the default) then a tag [inline] 
-# is inserted in the documentation for inline members.
-
-INLINE_INFO            = YES
-
-# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen 
-# will sort the (detailed) documentation of file and class members 
-# alphabetically by member name. If set to NO the members will appear in 
-# declaration order.
-
-SORT_MEMBER_DOCS       = YES
-
-# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the 
-# brief documentation of file, namespace and class members alphabetically 
-# by member name. If set to NO (the default) the members will appear in 
-# declaration order.
-
-SORT_BRIEF_DOCS        = NO
-
-# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be 
-# sorted by fully-qualified names, including namespaces. If set to 
-# NO (the default), the class list will be sorted only by class name, 
-# not including the namespace part. 
-# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
-# Note: This option applies only to the class list, not to the 
-# alphabetical list.
-
-SORT_BY_SCOPE_NAME     = NO
-
-# The GENERATE_TODOLIST tag can be used to enable (YES) or 
-# disable (NO) the todo list. This list is created by putting \todo 
-# commands in the documentation.
-
-GENERATE_TODOLIST      = YES
-
-# The GENERATE_TESTLIST tag can be used to enable (YES) or 
-# disable (NO) the test list. This list is created by putting \test 
-# commands in the documentation.
-
-GENERATE_TESTLIST      = YES
-
-# The GENERATE_BUGLIST tag can be used to enable (YES) or 
-# disable (NO) the bug list. This list is created by putting \bug 
-# commands in the documentation.
-
-GENERATE_BUGLIST       = YES
-
-# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or 
-# disable (NO) the deprecated list. This list is created by putting 
-# \deprecated commands in the documentation.
-
-GENERATE_DEPRECATEDLIST= YES
-
-# The ENABLED_SECTIONS tag can be used to enable conditional 
-# documentation sections, marked by \if sectionname ... \endif.
-
-ENABLED_SECTIONS       = 
-
-# The MAX_INITIALIZER_LINES tag determines the maximum number of lines 
-# the initial value of a variable or define consists of for it to appear in 
-# the documentation. If the initializer consists of more lines than specified 
-# here it will be hidden. Use a value of 0 to hide initializers completely. 
-# The appearance of the initializer of individual variables and defines in the 
-# documentation can be controlled using \showinitializer or \hideinitializer 
-# command in the documentation regardless of this setting.
-
-MAX_INITIALIZER_LINES  = 30
-
-# Set the SHOW_USED_FILES tag to NO to disable the list of files generated 
-# at the bottom of the documentation of classes and structs. If set to YES the 
-# list will mention the files that were used to generate the documentation.
-
-SHOW_USED_FILES        = YES
-
-# If the sources in your project are distributed over multiple directories 
-# then setting the SHOW_DIRECTORIES tag to YES will show the directory hierarchy 
-# in the documentation.
-
-SHOW_DIRECTORIES       = YES
-
-# The FILE_VERSION_FILTER tag can be used to specify a program or script that 
-# doxygen should invoke to get the current version for each file (typically from the 
-# version control system). Doxygen will invoke the program by executing (via 
-# popen()) the command <command> <input-file>, where <command> is the value of 
-# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file 
-# provided by doxygen. Whatever the program writes to standard output 
-# is used as the file version. See the manual for examples.
-
-FILE_VERSION_FILTER    = 
-
-#---------------------------------------------------------------------------
-# configuration options related to warning and progress messages
-#---------------------------------------------------------------------------
-
-# The QUIET tag can be used to turn on/off the messages that are generated 
-# by doxygen. Possible values are YES and NO. If left blank NO is used.
-
-QUIET                  = NO
-
-# The WARNINGS tag can be used to turn on/off the warning messages that are 
-# generated by doxygen. Possible values are YES and NO. If left blank 
-# NO is used.
-
-WARNINGS               = YES
-
-# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings 
-# for undocumented members. If EXTRACT_ALL is set to YES then this flag will 
-# automatically be disabled.
-
-WARN_IF_UNDOCUMENTED   = YES
-
-# If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for 
-# potential errors in the documentation, such as not documenting some 
-# parameters in a documented function, or documenting parameters that 
-# don't exist or using markup commands wrongly.
-
-WARN_IF_DOC_ERROR      = YES
-
-# The WARN_NO_PARAMDOC option can be enabled to get warnings for 
-# functions that are documented, but have no documentation for their parameters 
-# or return value. If set to NO (the default) doxygen will only warn about 
-# wrong or incomplete parameter documentation, but not about the absence of 
-# documentation.
-
-WARN_NO_PARAMDOC       = NO
-
-# The WARN_FORMAT tag determines the format of the warning messages that 
-# doxygen can produce. The string should contain the $file, $line, and $text 
-# tags, which will be replaced by the file and line number from which the 
-# warning originated and the warning text. Optionally the format may contain 
-# $version, which will be replaced by the version of the file (if it could 
-# be obtained via FILE_VERSION_FILTER)
-
-WARN_FORMAT            = "$file:$line: $text"
-
-# The WARN_LOGFILE tag can be used to specify a file to which warning 
-# and error messages should be written. If left blank the output is written 
-# to stderr.
-
-WARN_LOGFILE           = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the input files
-#---------------------------------------------------------------------------
-
-# The INPUT tag can be used to specify the files and/or directories that contain 
-# documented source files. You may enter file names like "myfile.cpp" or 
-# directories like "/usr/src/myproject". Separate the files or directories 
-# with spaces.
-
-INPUT                  = MainPage.dox api
-
-# If the value of the INPUT tag contains directories, you can use the 
-# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp 
-# and *.h) to filter out the source-files in the directories. If left 
-# blank the following patterns are tested: 
-# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx 
-# *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm
-
-FILE_PATTERNS          = 
-
-# The RECURSIVE tag can be used to specify whether or not subdirectories 
-# should be searched for input files as well. Possible values are YES and NO. 
-# If left blank NO is used.
-
-RECURSIVE              = NO
-
-# The EXCLUDE tag can be used to specify files and/or directories that should 
-# be excluded from the INPUT source files. This way you can easily exclude a 
-# subdirectory from a directory tree whose root is specified with the INPUT tag.
-
-EXCLUDE                = 
-
-# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or 
-# directories that are symbolic links (a Unix filesystem feature) are excluded 
-# from the input.
-
-EXCLUDE_SYMLINKS       = NO
-
-# If the value of the INPUT tag contains directories, you can use the 
-# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude 
-# certain files from those directories.
-
-EXCLUDE_PATTERNS       = 
-
-# The EXAMPLE_PATH tag can be used to specify one or more files or 
-# directories that contain example code fragments that are included (see 
-# the \include command).
-
-EXAMPLE_PATH           = 
-
-# If the value of the EXAMPLE_PATH tag contains directories, you can use the 
-# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp 
-# and *.h) to filter out the source-files in the directories. If left 
-# blank all files are included.
-
-EXAMPLE_PATTERNS       = 
-
-# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be 
-# searched for input files to be used with the \include or \dontinclude 
-# commands irrespective of the value of the RECURSIVE tag. 
-# Possible values are YES and NO. If left blank NO is used.
-
-EXAMPLE_RECURSIVE      = NO
-
-# The IMAGE_PATH tag can be used to specify one or more files or 
-# directories that contain images that are included in the documentation (see 
-# the \image command).
-
-IMAGE_PATH             = 
-
-# The INPUT_FILTER tag can be used to specify a program that doxygen should 
-# invoke to filter for each input file. Doxygen will invoke the filter program 
-# by executing (via popen()) the command <filter> <input-file>, where <filter> 
-# is the value of the INPUT_FILTER tag, and <input-file> is the name of an 
-# input file. Doxygen will then use the output that the filter program writes 
-# to standard output.  If FILTER_PATTERNS is specified, this tag will be 
-# ignored.
-
-INPUT_FILTER           = 
-
-# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern 
-# basis.  Doxygen will compare the file name with each pattern and apply the 
-# filter if there is a match.  The filters are a list of the form: 
-# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further 
-# info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER 
-# is applied to all files.
-
-FILTER_PATTERNS        = 
-
-# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using 
-# INPUT_FILTER) will be used to filter the input files when producing source 
-# files to browse (i.e. when SOURCE_BROWSER is set to YES).
-
-FILTER_SOURCE_FILES    = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to source browsing
-#---------------------------------------------------------------------------
-
-# If the SOURCE_BROWSER tag is set to YES then a list of source files will 
-# be generated. Documented entities will be cross-referenced with these sources. 
-# Note: To get rid of all source code in the generated output, make sure also 
-# VERBATIM_HEADERS is set to NO.
-
-SOURCE_BROWSER         = NO
-
-# Setting the INLINE_SOURCES tag to YES will include the body 
-# of functions and classes directly in the documentation.
-
-INLINE_SOURCES         = NO
-
-# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct 
-# doxygen to hide any special comment blocks from generated source code 
-# fragments. Normal C and C++ comments will always remain visible.
-
-STRIP_CODE_COMMENTS    = YES
-
-# If the REFERENCED_BY_RELATION tag is set to YES (the default) 
-# then for each documented function all documented 
-# functions referencing it will be listed.
-
-REFERENCED_BY_RELATION = YES
-
-# If the REFERENCES_RELATION tag is set to YES (the default) 
-# then for each documented function all documented entities 
-# called/used by that function will be listed.
-
-REFERENCES_RELATION    = YES
-
-# If the USE_HTAGS tag is set to YES then the references to source code 
-# will point to the HTML generated by the htags(1) tool instead of doxygen 
-# built-in source browser. The htags tool is part of GNU's global source 
-# tagging system (see http://www.gnu.org/software/global/global.html). You 
-# will need version 4.8.6 or higher.
-
-USE_HTAGS              = NO
-
-# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen 
-# will generate a verbatim copy of the header file for each class for 
-# which an include is specified. Set to NO to disable this.
-
-VERBATIM_HEADERS       = YES
-
-#---------------------------------------------------------------------------
-# configuration options related to the alphabetical class index
-#---------------------------------------------------------------------------
-
-# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index 
-# of all compounds will be generated. Enable this if the project 
-# contains a lot of classes, structs, unions or interfaces.
-
-ALPHABETICAL_INDEX     = NO
-
-# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then 
-# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns 
-# in which this list will be split (can be a number in the range [1..20])
-
-COLS_IN_ALPHA_INDEX    = 5
-
-# In case all classes in a project start with a common prefix, all 
-# classes will be put under the same header in the alphabetical index. 
-# The IGNORE_PREFIX tag can be used to specify one or more prefixes that 
-# should be ignored while generating the index headers.
-
-IGNORE_PREFIX          = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the HTML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_HTML tag is set to YES (the default) Doxygen will 
-# generate HTML output.
-
-GENERATE_HTML          = YES
-
-# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `html' will be used as the default path.
-
-HTML_OUTPUT            = html
-
-# The HTML_FILE_EXTENSION tag can be used to specify the file extension for 
-# each generated HTML page (for example: .htm,.php,.asp). If it is left blank 
-# doxygen will generate files with .html extension.
-
-HTML_FILE_EXTENSION    = .html
-
-# The HTML_HEADER tag can be used to specify a personal HTML header for 
-# each generated HTML page. If it is left blank doxygen will generate a 
-# standard header.
-
-HTML_HEADER            = 
-
-# The HTML_FOOTER tag can be used to specify a personal HTML footer for 
-# each generated HTML page. If it is left blank doxygen will generate a 
-# standard footer.
-
-HTML_FOOTER            = 
-
-# The HTML_STYLESHEET tag can be used to specify a user-defined cascading 
-# style sheet that is used by each HTML page. It can be used to 
-# fine-tune the look of the HTML output. If the tag is left blank doxygen 
-# will generate a default style sheet. Note that doxygen will try to copy 
-# the style sheet file to the HTML output directory, so don't put your own 
-# stylesheet in the HTML output directory as well, or it will be erased!
-
-HTML_STYLESHEET        = 
-
-# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes, 
-# files or namespaces will be aligned in HTML using tables. If set to 
-# NO a bullet list will be used.
-
-HTML_ALIGN_MEMBERS     = YES
-
-# If the GENERATE_HTMLHELP tag is set to YES, additional index files 
-# will be generated that can be used as input for tools like the 
-# Microsoft HTML help workshop to generate a compressed HTML help file (.chm) 
-# of the generated HTML documentation.
-
-GENERATE_HTMLHELP      = NO
-
-# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can 
-# be used to specify the file name of the resulting .chm file. You 
-# can add a path in front of the file if the result should not be 
-# written to the html output directory.
-
-CHM_FILE               = 
-
-# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can 
-# be used to specify the location (absolute path including file name) of 
-# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run 
-# the HTML help compiler on the generated index.hhp.
-
-HHC_LOCATION           = 
-
-# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag 
-# controls if a separate .chi index file is generated (YES) or that 
-# it should be included in the master .chm file (NO).
-
-GENERATE_CHI           = NO
-
-# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag 
-# controls whether a binary table of contents is generated (YES) or a 
-# normal table of contents (NO) in the .chm file.
-
-BINARY_TOC             = NO
-
-# The TOC_EXPAND flag can be set to YES to add extra items for group members 
-# to the contents of the HTML help documentation and to the tree view.
-
-TOC_EXPAND             = NO
-
-# The DISABLE_INDEX tag can be used to turn on/off the condensed index at 
-# top of each HTML page. The value NO (the default) enables the index and 
-# the value YES disables it.
-
-DISABLE_INDEX          = NO
-
-# This tag can be used to set the number of enum values (range [1..20]) 
-# that doxygen will group on one line in the generated HTML documentation.
-
-ENUM_VALUES_PER_LINE   = 4
-
-# If the GENERATE_TREEVIEW tag is set to YES, a side panel will be
-# generated containing a tree-like index structure (just like the one that 
-# is generated for HTML Help). For this to work a browser that supports 
-# JavaScript, DHTML, CSS and frames is required (for instance Mozilla 1.0+, 
-# Netscape 6.0+, Internet explorer 5.0+, or Konqueror). Windows users are 
-# probably better off using the HTML help feature.
-
-GENERATE_TREEVIEW      = NO
-
-# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be 
-# used to set the initial width (in pixels) of the frame in which the tree 
-# is shown.
-
-TREEVIEW_WIDTH         = 250
-
-#---------------------------------------------------------------------------
-# configuration options related to the LaTeX output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will 
-# generate LaTeX output.
-
-GENERATE_LATEX         = NO
-
-# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `latex' will be used as the default path.
-
-LATEX_OUTPUT           = latex
-
-# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be 
-# invoked. If left blank `latex' will be used as the default command name.
-
-LATEX_CMD_NAME         = latex
-
-# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to 
-# generate index for LaTeX. If left blank `makeindex' will be used as the 
-# default command name.
-
-MAKEINDEX_CMD_NAME     = makeindex
-
-# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact 
-# LaTeX documents. This may be useful for small projects and may help to 
-# save some trees in general.
-
-COMPACT_LATEX          = NO
-
-# The PAPER_TYPE tag can be used to set the paper type that is used 
-# by the printer. Possible values are: a4, a4wide, letter, legal and 
-# executive. If left blank a4wide will be used.
-
-PAPER_TYPE             = a4wide
-
-# The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX 
-# packages that should be included in the LaTeX output.
-
-EXTRA_PACKAGES         = 
-
-# The LATEX_HEADER tag can be used to specify a personal LaTeX header for 
-# the generated latex document. The header should contain everything until 
-# the first chapter. If it is left blank doxygen will generate a 
-# standard header. Notice: only use this tag if you know what you are doing!
-
-LATEX_HEADER           = 
-
-# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated 
-# is prepared for conversion to pdf (using ps2pdf). The pdf file will 
-# contain links (just like the HTML output) instead of page references 
-# This makes the output suitable for online browsing using a pdf viewer.
-
-PDF_HYPERLINKS         = NO
-
-# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of 
-# plain latex in the generated Makefile. Set this option to YES to get a 
-# higher quality PDF documentation.
-
-USE_PDFLATEX           = NO
-
-# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode 
-# command to the generated LaTeX files. This will instruct LaTeX to keep 
-# running if errors occur, instead of asking the user for help. 
-# This option is also used when generating formulas in HTML.
-
-LATEX_BATCHMODE        = NO
-
-# If LATEX_HIDE_INDICES is set to YES then doxygen will not 
-# include the index chapters (such as File Index, Compound Index, etc.) 
-# in the output.
-
-LATEX_HIDE_INDICES     = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the RTF output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output 
-# The RTF output is optimized for Word 97 and may not look very pretty with 
-# other RTF readers or editors.
-
-GENERATE_RTF           = NO
-
-# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `rtf' will be used as the default path.
-
-RTF_OUTPUT             = rtf
-
-# If the COMPACT_RTF tag is set to YES Doxygen generates more compact 
-# RTF documents. This may be useful for small projects and may help to 
-# save some trees in general.
-
-COMPACT_RTF            = NO
-
-# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated 
-# will contain hyperlink fields. The RTF file will 
-# contain links (just like the HTML output) instead of page references. 
-# This makes the output suitable for online browsing using WORD or other 
-# programs which support those fields. 
-# Note: wordpad (write) and others do not support links.
-
-RTF_HYPERLINKS         = NO
-
-# Load stylesheet definitions from file. Syntax is similar to doxygen's 
-# config file, i.e. a series of assignments. You only have to provide 
-# replacements, missing definitions are set to their default value.
-
-RTF_STYLESHEET_FILE    = 
-
-# Set optional variables used in the generation of an rtf document. 
-# Syntax is similar to doxygen's config file.
-
-RTF_EXTENSIONS_FILE    = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the man page output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_MAN tag is set to YES (the default) Doxygen will 
-# generate man pages.
-
-GENERATE_MAN           = NO
-
-# The MAN_OUTPUT tag is used to specify where the man pages will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `man' will be used as the default path.
-
-MAN_OUTPUT             = man
-
-# The MAN_EXTENSION tag determines the extension that is added to 
-# the generated man pages (default is the subroutine's section .3)
-
-MAN_EXTENSION          = .3
-
-# If the MAN_LINKS tag is set to YES and Doxygen generates man output, 
-# then it will generate one additional man file for each entity 
-# documented in the real man page(s). These additional files 
-# only source the real man page, but without them the man command 
-# would be unable to find the correct page. The default is NO.
-
-MAN_LINKS              = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the XML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_XML tag is set to YES Doxygen will 
-# generate an XML file that captures the structure of 
-# the code including all documentation.
-
-GENERATE_XML           = NO
-
-# The XML_OUTPUT tag is used to specify where the XML pages will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `xml' will be used as the default path.
-
-XML_OUTPUT             = xml
-
-# The XML_SCHEMA tag can be used to specify an XML schema, 
-# which can be used by a validating XML parser to check the 
-# syntax of the XML files.
-
-XML_SCHEMA             = 
-
-# The XML_DTD tag can be used to specify an XML DTD, 
-# which can be used by a validating XML parser to check the 
-# syntax of the XML files.
-
-XML_DTD                = 
-
-# If the XML_PROGRAMLISTING tag is set to YES Doxygen will 
-# dump the program listings (including syntax highlighting 
-# and cross-referencing information) to the XML output. Note that 
-# enabling this will significantly increase the size of the XML output.
-
-XML_PROGRAMLISTING     = YES
-
-#---------------------------------------------------------------------------
-# configuration options for the AutoGen Definitions output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will 
-# generate an AutoGen Definitions (see autogen.sf.net) file 
-# that captures the structure of the code including all 
-# documentation. Note that this feature is still experimental 
-# and incomplete at the moment.
-
-GENERATE_AUTOGEN_DEF   = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the Perl module output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_PERLMOD tag is set to YES Doxygen will 
-# generate a Perl module file that captures the structure of 
-# the code including all documentation. Note that this 
-# feature is still experimental and incomplete at the 
-# moment.
-
-GENERATE_PERLMOD       = NO
-
-# If the PERLMOD_LATEX tag is set to YES Doxygen will generate 
-# the necessary Makefile rules, Perl scripts and LaTeX code to be able 
-# to generate PDF and DVI output from the Perl module output.
-
-PERLMOD_LATEX          = NO
-
-# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be 
-# nicely formatted so it can be parsed by a human reader.  This is useful 
-# if you want to understand what is going on.  On the other hand, if this 
-# tag is set to NO the size of the Perl module output will be much smaller 
-# and Perl will parse it just the same.
-
-PERLMOD_PRETTY         = YES
-
-# The names of the make variables in the generated doxyrules.make file 
-# are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. 
-# This is useful so different doxyrules.make files included by the same 
-# Makefile don't overwrite each other's variables.
-
-PERLMOD_MAKEVAR_PREFIX = 
-
-#---------------------------------------------------------------------------
-# Configuration options related to the preprocessor   
-#---------------------------------------------------------------------------
-
-# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will 
-# evaluate all C-preprocessor directives found in the sources and include 
-# files.
-
-ENABLE_PREPROCESSING   = YES
-
-# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro 
-# names in the source code. If set to NO (the default) only conditional 
-# compilation will be performed. Macro expansion can be done in a controlled 
-# way by setting EXPAND_ONLY_PREDEF to YES.
-
-MACRO_EXPANSION        = NO
-
-# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES 
-# then the macro expansion is limited to the macros specified with the 
-# PREDEFINED and EXPAND_AS_PREDEFINED tags.
-
-EXPAND_ONLY_PREDEF     = NO
-
-# If the SEARCH_INCLUDES tag is set to YES (the default) the include files 
-# in the INCLUDE_PATH (see below) will be searched if a #include is found.
-
-SEARCH_INCLUDES        = YES
-
-# The INCLUDE_PATH tag can be used to specify one or more directories that 
-# contain include files that are not input files but should be processed by 
-# the preprocessor.
-
-INCLUDE_PATH           = 
-
-# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard 
-# patterns (like *.h and *.hpp) to filter out the header-files in the 
-# directories. If left blank, the patterns specified with FILE_PATTERNS will 
-# be used.
-
-INCLUDE_FILE_PATTERNS  = 
-
-# The PREDEFINED tag can be used to specify one or more macro names that 
-# are defined before the preprocessor is started (similar to the -D option of 
-# gcc). The argument of the tag is a list of macros of the form: name 
-# or name=definition (no spaces). If the definition and the = are 
-# omitted =1 is assumed. To prevent a macro definition from being 
-# undefined via #undef or recursively expanded use the := operator 
-# instead of the = operator.
-
-PREDEFINED             = 
-
-# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then 
-# this tag can be used to specify a list of macro names that should be expanded. 
-# The macro definition that is found in the sources will be used. 
-# Use the PREDEFINED tag if you want to use a different macro definition.
-
-EXPAND_AS_DEFINED      = 
-
-# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then 
-# doxygen's preprocessor will remove all function-like macros that are alone 
-# on a line, have an all uppercase name, and do not end with a semicolon. Such 
-# function macros are typically used for boiler-plate code, and will confuse 
-# the parser if not removed.
-
-SKIP_FUNCTION_MACROS   = YES
-
-#---------------------------------------------------------------------------
-# Configuration::additions related to external references   
-#---------------------------------------------------------------------------
-
-# The TAGFILES option can be used to specify one or more tagfiles. 
-# Optionally an initial location of the external documentation 
-# can be added for each tagfile. The format of a tag file without 
-# this location is as follows: 
-#   TAGFILES = file1 file2 ... 
-# Adding location for the tag files is done as follows: 
-#   TAGFILES = file1=loc1 "file2 = loc2" ... 
-# where "loc1" and "loc2" can be relative or absolute paths or 
-# URLs. If a location is present for each tag, the installdox tool 
-# does not have to be run to correct the links.
-# Note that each tag file must have a unique name
-# (where the name does NOT include the path)
-# If a tag file is not located in the directory in which doxygen 
-# is run, you must also specify the path to the tagfile here.
-
-TAGFILES               = 
-
-# When a file name is specified after GENERATE_TAGFILE, doxygen will create 
-# a tag file that is based on the input files it reads.
-
-GENERATE_TAGFILE       = 
-
-# If the ALLEXTERNALS tag is set to YES all external classes will be listed 
-# in the class index. If set to NO only the inherited external classes 
-# will be listed.
-
-ALLEXTERNALS           = NO
-
-# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed 
-# in the modules index. If set to NO, only the current project's groups will 
-# be listed.
-
-EXTERNAL_GROUPS        = YES
-
-# The PERL_PATH should be the absolute path and name of the perl script 
-# interpreter (i.e. the result of `which perl').
-
-PERL_PATH              = /usr/bin/perl
-
-#---------------------------------------------------------------------------
-# Configuration options related to the dot tool   
-#---------------------------------------------------------------------------
-
-# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will 
-# generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base 
-# or super classes. Setting the tag to NO turns the diagrams off. Note that 
-# this option is superseded by the HAVE_DOT option below. This is only a 
-# fallback. It is recommended to install and use dot, since it yields more 
-# powerful graphs.
-
-CLASS_DIAGRAMS         = YES
-
-# If set to YES, the inheritance and collaboration graphs will hide 
-# inheritance and usage relations if the target is undocumented 
-# or is not a class.
-
-HIDE_UNDOC_RELATIONS   = YES
-
-# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is 
-# available from the path. This tool is part of Graphviz, a graph visualization 
-# toolkit from AT&T and Lucent Bell Labs. The other options in this section 
-# have no effect if this option is set to NO (the default)
-
-HAVE_DOT               = NO
-
-# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for each documented class showing the direct and 
-# indirect inheritance relations. Setting this tag to YES will force 
-# the CLASS_DIAGRAMS tag to NO.
-
-CLASS_GRAPH            = YES
-
-# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for each documented class showing the direct and 
-# indirect implementation dependencies (inheritance, containment, and 
-# class references variables) of the class with other documented classes.
-
-COLLABORATION_GRAPH    = YES
-
-# If the GROUP_GRAPHS and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for groups, showing the direct groups dependencies
-
-GROUP_GRAPHS           = YES
-
-# If the UML_LOOK tag is set to YES doxygen will generate inheritance and 
-# collaboration diagrams in a style similar to the OMG's Unified Modeling 
-# Language.
-
-UML_LOOK               = NO
-
-# If set to YES, the inheritance and collaboration graphs will show the 
-# relations between templates and their instances.
-
-TEMPLATE_RELATIONS     = NO
-
-# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT 
-# tags are set to YES then doxygen will generate a graph for each documented 
-# file showing the direct and indirect include dependencies of the file with 
-# other documented files.
-
-INCLUDE_GRAPH          = YES
-
-# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and 
-# HAVE_DOT tags are set to YES then doxygen will generate a graph for each 
-# documented header file showing the documented files that directly or 
-# indirectly include this file.
-
-INCLUDED_BY_GRAPH      = YES
-
-# If the CALL_GRAPH and HAVE_DOT tags are set to YES then doxygen will 
-# generate a call dependency graph for every global function or class method. 
-# Note that enabling this option will significantly increase the time of a run. 
-# So in most cases it will be better to enable call graphs for selected 
-# functions only using the \callgraph command.
-
-CALL_GRAPH             = NO
-
-# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen 
-# will show a graphical hierarchy of all classes instead of a textual one.
-
-GRAPHICAL_HIERARCHY    = YES
-
-# If the DIRECTORY_GRAPH, SHOW_DIRECTORIES and HAVE_DOT tags are set to YES 
-# then doxygen will show the dependencies a directory has on other directories 
-# in a graphical way. The dependency relations are determined by the #include
-# relations between the files in the directories.
-
-DIRECTORY_GRAPH        = YES
-
-# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images 
-# generated by dot. Possible values are png, jpg, or gif
-# If left blank png will be used.
-
-DOT_IMAGE_FORMAT       = png
-
-# The tag DOT_PATH can be used to specify the path where the dot tool can be 
-# found. If left blank, it is assumed the dot tool can be found in the path.
-
-DOT_PATH               = 
-
-# The DOTFILE_DIRS tag can be used to specify one or more directories that 
-# contain dot files that are included in the documentation (see the 
-# \dotfile command).
-
-DOTFILE_DIRS           = 
-
-# The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width 
-# (in pixels) of the graphs generated by dot. If a graph becomes larger than 
-# this value, doxygen will try to truncate the graph, so that it fits within 
-# the specified constraint. Beware that most browsers cannot cope with very 
-# large images.
-
-MAX_DOT_GRAPH_WIDTH    = 1024
-
-# The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allowed height 
-# (in pixels) of the graphs generated by dot. If a graph becomes larger than 
-# this value, doxygen will try to truncate the graph, so that it fits within 
-# the specified constraint. Beware that most browsers cannot cope with very 
-# large images.
-
-MAX_DOT_GRAPH_HEIGHT   = 1024
-
-# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the 
-# graphs generated by dot. A depth value of 3 means that only nodes reachable 
-# from the root by following a path via at most 3 edges will be shown. Nodes 
-# that lay further from the root node will be omitted. Note that setting this 
-# option to 1 or 2 may greatly reduce the computation time needed for large 
-# code bases. Also note that a graph may be further truncated if the graph's 
-# image dimensions are not sufficient to fit the graph (see MAX_DOT_GRAPH_WIDTH 
-# and MAX_DOT_GRAPH_HEIGHT). If 0 is used for the depth value (the default), 
-# the graph is not depth-constrained.
-
-MAX_DOT_GRAPH_DEPTH    = 0
-
-# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent 
-# background. This is disabled by default, which results in a white background. 
-# Warning: Depending on the platform used, enabling this option may lead to 
-# badly anti-aliased labels on the edges of a graph (i.e. they become hard to 
-# read).
-
-DOT_TRANSPARENT        = NO
-
-# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output 
-# files in one run (i.e. multiple -o and -T options on the command line). This 
-# makes dot run faster, but since only newer versions of dot (>1.8.10) 
-# support this, this feature is disabled by default.
-
-DOT_MULTI_TARGETS      = NO
-
-# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will 
-# generate a legend page explaining the meaning of the various boxes and 
-# arrows in the dot generated graphs.
-
-GENERATE_LEGEND        = YES
-
-# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will 
-# remove the intermediate dot files that are used to generate 
-# the various graphs.
-
-DOT_CLEANUP            = YES
-
-#---------------------------------------------------------------------------
-# Configuration::additions related to the search engine   
-#---------------------------------------------------------------------------
-
-# The SEARCHENGINE tag specifies whether or not a search engine should be 
-# used. If set to NO the values of all tags below this one will be ignored.
-
-SEARCHENGINE           = NO
diff --git a/branch-1.3/lang/c++/MainPage.dox b/branch-1.3/lang/c++/MainPage.dox
deleted file mode 100644
index 479ba8b..0000000
--- a/branch-1.3/lang/c++/MainPage.dox
+++ /dev/null
@@ -1,376 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*!
-\mainpage
-
-\htmlonly
-
-<H2>Introduction to Avro C++</H2>
-
-<P>Avro is a data serialization system. See
-<A HREF="http://hadoop.apache.org/avro/docs/current/">http://hadoop.apache.org/avro/docs/current/</A>
-for background information.</P>
-<P>This is the documentation for a C++ implementation of Avro. The
-library includes:</P>
-<UL>
-	<LI><P>objects for assembling schemas programmatically 
-	</P>
-	<LI><P>objects for reading and writing data, that may be used to
-	build custom serializers and parsers</P>
-	<LI><P>an object that validates the data against a schema during
-	serialization (used primarily for debugging)</P>
-	<LI><P>an object that reads a schema during parsing, and notifies
-	the reader which type (and name or other attributes) to expect next,
-	used for debugging or for building dynamic parsers that don't know a
-	priori which data to expect</P>
-	<LI><P>a code generation tool that creates C++ objects from a
-	schema, and the code to convert back and forth between the
-	serialized data and the object</P>
-	<LI><P>a parser that can convert data written in one schema to a C++
-	object with a different schema</P>
-</UL>
-
-<H2>Getting started with Avro C++</H2>
-
-<P>Although Avro does not require use of code generation, the easiest
-way to get started with the Avro C++ library is to use the code
-generation tool. The code generator reads a schema, and outputs a C++
-object to represent the data for the schema. It also creates the code
-to serialize this object, and to deserialize it... all the heavy
-coding is done for you. Even if you wish to write custom serializers
-or parsers using the core C++ libraries, the generated code can serve
-as an example of how to use these libraries.</P>
-<P>Let's walk through an example, using a simple schema. Use the
-schema that represents an imaginary number:</P>
-<PRE>{
-  &quot;type&quot;: &quot;record&quot;, 
-  &quot;name&quot;: &quot;complex&quot;,
-  &quot;fields&quot; : [
-    {&quot;name&quot;: &quot;real&quot;, &quot;type&quot;: &quot;double&quot;},    
-    {&quot;name&quot;: &quot;imaginary&quot;, &quot;type&quot; : &quot;double&quot;}
-  ]
-}</PRE><P>
-Assume this JSON representation of the schema is stored in a file
-called example. Generating the code is a two-step process:</P>
-<PRE>precompile &lt; example &gt; example.flat</PRE><P>
-The precompile step converts the schema into an intermediate format
-that is used by the code generator. This intermediate file is just a
-text-based representation of the schema, flattened by a
-depth-first-traverse of the tree structure of the schema types.</P>
-<PRE>python scripts/gen-cppcode.py --input=example.flat --output=example.hh --namespace=Math</PRE><P>
-This tells the code generator to read your flattened schema as its
-input, and generate a C++ header file in example.hh. The optional
-argument namespace will put the objects in that namespace (if you
-don't specify a namespace, you will still get a default namespace of
-avrouser).</P>
-<P>Here's the start of the generated code:</P>
-<PRE>namespace Math {
-
-struct complex {
-
-    complex () :
-        real(),
-        imaginary()
-    { } 
-
-    double real;
-    double imaginary;
-};</PRE><P>
-This is the C++ representation of the schema. It creates a structure
-for the record, a default constructor, and a member for each field of
-the record.</P>
-<P>There is some other output that we can ignore for now. Let's look
-at an example of serializing this data:</P>
-<PRE>void serializeMyData()
-{
-    Math::complex c;
-    c.real = 10.0;
-    c.imaginary = 20.0;
-
-    // Declare the stream to which to serialize the data
-    std::ostringstream os;
-
-    // Ostreamer wraps a stream so that Avro serializer can use it
-    avro::Ostreamer ostreamer(os);
-    
-    // Writer is the object that will do the actual I/O
-    avro::Writer writer(ostreamer);
-
-    // This will invoke the writer on my object
-    avro::serialize(writer, c);
-
-    // At this point, the ostringstream "os" stores the serialized data!
-}</PRE><P>
-Using the generated code, all that is required to serialize the data
-is to call avro::serialize() on the object. There is some setup
-required to tell where to write the data. The Ostreamer object is a
-simple object that understands how to write to STL ostreams. It is
-derived from a virtual base class called OutputStreamer. You can
-derive from OutputStreamer to create an object that can write to any
-kind of buffer you wish.</P>
-<P>Now let's do the inverse, and read the serialized data into our
-object:</P>
-<PRE>void parseMyData(const std::string &amp;myData)
-{
-    Math::complex c;
-
-    // Assume the serialized data is being passed as the contents of a string
-    // (Note: this may not be the best way since the data is binary)
-
-    // Declare a stream from which to read the serialized data
-    std::istringstream is(myData);
-
-    // Istreamer wraps a stream so that Avro parser can use it
-    avro::Istreamer istreamer(is);
-    
-    // Reader is the object that will do the actual I/O
-    avro::Reader reader(istreamer);
-
-    // This will invoke the reader on my object
-    avro::parse(reader, c);
-
-    // At this point, c is populated with the deserialized data!
-}</PRE><P>
-In case you're wondering how avro::serialize() and avro::parse()
-handled the custom data type, the answer is in the generated code. It
-created the following functions:</P>
-<PRE>template &lt;typename Serializer&gt;
-inline void serialize(Serializer &amp;s, const complex &amp;val, const boost::true_type &amp;) {
-    s.writeRecord();
-    serialize(s, val.real);
-    serialize(s, val.imaginary);
-}
-
-template &lt;typename Parser&gt;
-inline void parse(Parser &amp;p, complex &amp;val, const boost::true_type &amp;) {
-    p.readRecord();
-    parse(p, val.real);
-    parse(p, val.imaginary);
-}</PRE><P>
-It also adds the following to the avro namespace:</P>
-<PRE>template &lt;&gt; struct is_serializable&lt;Math::complex&gt; : public boost::true_type{};</PRE><P>
-This sets up a type trait for the complex structure, telling Avro
-that this object has serialize and parse functions available.</P>
-
-<H2>Reading a Json schema</H2>
-
-<P>The above section demonstrated pretty much all that's needed to
-know to get started reading and writing objects using the Avro C++
-code generator. The following sections will cover some more
-information.</P>
-<P>The library provides some utilities to read a schema that is
-stored in a JSON file or string. Take a look:</P>
-<PRE>void readSchema()
-{
-    // My schema is stored in a file called "example"
-    std::ifstream in("example");
-
-    avro::ValidSchema mySchema;
-    avro::compileJsonSchema(in, mySchema);
-}
- </PRE><P>
-This reads the file, and parses the JSON schema into an object of
-type avro::ValidSchema. If, for some reason, the schema is not valid,
-the ValidSchema object will not be set, and an exception will be
-thrown. 
-</P>
-
-<H2>To validate or not to validate</H2>
-
-<P>The last section showed how to create a ValidSchema object from a
-schema stored in JSON. You may wonder, what can I use the ValidSchema
-for?</P>
-<P>One use is to ensure that the writer is actually writing the types
-that match what the schema expects. Let's revisit the serialize
-function from above, but this time checking against our schema.</P>
-<PRE>void serializeMyData(const ValidSchema &amp;mySchema)
-{
-    Math::complex c;
-    c.real = 10.0;
-    c.imaginary = 20.0;
-
-    std::ostringstream os;
-    avro::Ostreamer ostreamer(os);
-    
-    // ValidatingWriter will make sure our serializer is writing the correct types
-    avro::ValidatingWriter writer(mySchema, ostreamer);
-
-    try {
-        avro::serialize(writer, c);
-        // At this point, the ostringstream "os" stores the serialized data!
-    } 
-    catch (avro::Exception &amp;e) {
-        std::cerr &lt;&lt; "ValidatingWriter encountered an error: " &lt;&lt; e.what();
-    }  
-}</PRE><P>
-The difference between this code and the previous version is that the
-Writer object was replaced with a ValidatingWriter. If the serializer
-function mistakenly writes a type that does not match the schema, the
-ValidatingWriter will throw an exception. 
-</P>
-<P>The ValidatingWriter will incur more processing overhead while
-writing your data. For the generated code, it's not necessary to use
-validation, because (hopefully!) the mechanically generated code will
-match the schema. Nevertheless it is nice while debugging to have the
-added safety of validation, especially when writing and testing your
-own serializing code.</P>
-<P>The ValidSchema may also be used when parsing data. In addition to
-making sure that the parser reads types that match the schema, it
-provides an interface to query the next type to expect, and the
-field's name if it is a member of a record.</P>
-<P>The following code is not very flexible, but it does demonstrate
-the API:</P>
-<PRE>void parseMyData(const std::string &amp;myData, const avro::ValidSchema &amp;mySchema)
-{
-    std::istringstream is(myData);
-    avro::Istreamer istreamer(is);
-    
-    // Manually parse data, the Parser object binds the data to the schema
-    avro::Parser&lt;ValidatingReader&gt; parser(mySchema, istreamer);
-
-    assert( parser.nextType() == AVRO_RECORD);
-    
-    // Begin parsing
-    parser.beginRecord();
-   
-    Math::complex c;
-
-    assert( parser.currentRecordName() == "complex");
-    for(int i=0; i &lt; 2; ++i) {
-        assert( parser.nextType() == AVRO_DOUBLE);
-        if(parser.nextFieldName() == "real") {
-            c.real = parser.readDouble();
-        } 
-        else if (parser.nextFieldName() == "imaginary") {
-            c.imaginary = parser.readDouble();
-        } 
-        else {
-            std::cout &lt;&lt; "I did not expect that!\n";
-        }
-    }
-}</PRE><P>
-The above code shows that if you don't know the schema at compile
-time, you can still write code that parses the data, by reading the
-schema at runtime and querying the ValidatingReader to discover what
-is in the serialized data.</P>
-
-<H2>Programmatically creating schemas</H2>
-
-<P>You can use objects to create schemas in your code. There are
-schema objects for each primitive and compound type, and they all
-share a common base class called Schema.</P>
-<P>Here's an example, of creating a schema for an array of records of
-complex data types:</P>
-<PRE>void createMySchema()
-{
-    // First construct our complex data type:
-    avro::RecordSchema myRecord("complex");
-   
-    // Now populate my record with fields (each field is another schema):
-    myRecord.addField("real", avro::DoubleSchema());
-    myRecord.addField("imaginary", avro::DoubleSchema());
-
-    // The complex record is the same as used above, let's make a schema 
-    // for an array of these records
-  
-    avro::ArraySchema complexArray(myRecord); </PRE><P>
-The above code created our schema, but at this point it is possible
-that a schema is not valid (a record may not have any fields, or some
-field names may not be unique, etc.) In order to use the schema, you
-need to convert it to the ValidSchema object:</P>
-<PRE>   // this will throw if the schema is invalid!
-   avro::ValidSchema validComplexArray(complexArray);
-
-   // now that I have my schema, what does it look like in JSON?
-   // print it to the screen
-   validComplexArray.toJson(std::cout);
-}</PRE><P>
-When the above code executes, it prints:</P>
-<PRE>{
-    &quot;type&quot;: &quot;array&quot;,
-    &quot;items&quot;: {
-        &quot;type&quot;: &quot;record&quot;,
-        &quot;name&quot;: &quot;complex&quot;,
-        &quot;fields&quot;: [
-            {
-                &quot;name&quot;: &quot;real&quot;,
-                &quot;type&quot;: &quot;double&quot; 
-            },
-            {
-                &quot;name&quot;: &quot;imaginary&quot;,
-                &quot;type&quot;: &quot;double&quot; 
-            } 
-        ]
-    }
-}
-</PRE>
-
-<H2>Converting from one schema to another</H2>
-
-<P>The Avro spec provides rules for dealing with schemas that are not
-exactly the same (for example, the schema may evolve over time, and
-the data my program now expects may differ from the data stored
-previously with the older version).</P>
-<P>The code generation tool may help again in this case.  For each
-structure it generates, it creates a special indexing structure that
-may be used to read the data, even if the data was written with a
-different schema.</P>
-<P>In example.hh, this indexing structure looks like:</P>
-<PRE>class complex_Layout : public avro::CompoundOffset {
-  public:
-    complex_Layout(size_t offset = 0) :
-        CompoundOffset(offset)
-    {
-        add(new avro::Offset(offset + offsetof(complex, real)));
-        add(new avro::Offset(offset + offsetof(complex, imaginary)));
-    }
-}; 
-</PRE>
-<P>Let's say my data was previously written with floats instead of
-doubles.  According to the schema resolution rules, the schemas are
-compatible, because floats are promotable to doubles.  As long as
-both the old and the new schemas are available, a dynamic parser may
-be created that reads the data to the code generated structure.</P>
-<PRE>void dynamicParse(const std::string &amp;data,
-                  const avro::ValidSchema &amp;writerSchema, 
-                  const avro::ValidSchema &amp;readerSchema) {
-
-    // Instantiate the Layout object
-    Math::complex_Layout layout;
-
-    // Create a schema parser that is aware of my type's layout, and both schemas
-    avro::ResolverSchema resolverSchema(writerSchema, readerSchema, layout);
-
-    // Setup the reader
-    std::istringstream is(data);
-    avro::Istreamer istreamer(is);
-    avro::ResolvingReader reader(resolverSchema, istreamer);
-
-    Math::complex c;
-    
-    // Do the parse
-    avro::parse(reader, c);
-
-    // At this point, c is populated with the deserialized data!
-}
-</PRE>
-
-\endhtmlonly
-
-*/
-
diff --git a/branch-1.3/lang/c++/Makefile.am b/branch-1.3/lang/c++/Makefile.am
deleted file mode 100644
index 3c7e54f..0000000
--- a/branch-1.3/lang/c++/Makefile.am
+++ /dev/null
@@ -1,157 +0,0 @@
-
-AM_CXXFLAGS= -I$(top_srcdir)/api $(BOOST_CPPFLAGS)
-ACLOCAL_AMFLAGS=-I m4
-
-BOOST_CPPFLAGS = @BOOST_CPPFLAGS@
-BOOST_LDFLAGS = @BOOST_LDFLAGS@
-BOOST_REGEX_LIB = @BOOST_REGEX_LIB@
-PYTHON = @PYTHON@
-
-library_includedir=$(includedir)/avrocpp
-library_include_HEADERS = \
-api/AvroParse.hh \
-api/AvroSerialize.hh \
-api/AvroTraits.hh \
-api/Boost.hh \
-api/Compiler.hh \
-api/CompilerNode.hh \
-api/Exception.hh \
-api/InputStreamer.hh \
-api/Layout.hh \
-api/Node.hh \
-api/NodeConcepts.hh \
-api/NodeImpl.hh \
-api/OutputStreamer.hh \
-api/Parser.hh \
-api/Reader.hh \
-api/Resolver.hh \
-api/ResolverSchema.hh \
-api/ResolvingReader.hh \
-api/Schema.hh \
-api/SchemaResolution.hh \
-api/Serializer.hh \
-api/SymbolMap.hh \
-api/Types.hh \
-api/ValidSchema.hh \
-api/ValidatingReader.hh \
-api/ValidatingWriter.hh \
-api/Validator.hh \
-api/Writer.hh \
-api/Zigzag.hh 
-
-BUILT_SOURCES = AvroYacc.h testgen.hh testgen2.hh
-
-bin_PROGRAMS = precompile testparser 
-bin_SCRIPTS = scripts/gen-cppcode.py
-
-precompile_SOURCES = test/precompile.cc
-
-precompile_LDFLAGS = -static $(BOOST_LDFLAGS)
-precompile_LDADD = $(top_builddir)/libavrocpp.la $(BOOST_REGEX_LIB)
-
-testparser_SOURCES = test/testparser.cc
-
-testparser_LDFLAGS = -static -no-install $(BOOST_LDFLAGS)
-testparser_LDADD = $(top_builddir)/libavrocpp.la $(BOOST_REGEX_LIB)
-
-lib_LTLIBRARIES = libavrocpp.la
-
-libavrocpp_la_SOURCES = \
-api/AvroParse.hh \
-api/AvroSerialize.hh \
-api/AvroTraits.hh \
-api/Boost.hh \
-api/Compiler.hh \
-api/CompilerNode.hh \
-api/Exception.hh \
-api/InputStreamer.hh \
-api/Layout.hh \
-api/Node.hh \
-api/NodeConcepts.hh \
-api/NodeImpl.hh \
-api/OutputStreamer.hh \
-api/Parser.hh \
-api/Reader.hh \
-api/Resolver.hh \
-api/ResolverSchema.hh \
-api/ResolvingReader.hh \
-api/Schema.hh \
-api/SchemaResolution.hh \
-api/Serializer.hh \
-api/SymbolMap.hh \
-api/Types.hh \
-api/ValidSchema.hh \
-api/ValidatingReader.hh \
-api/ValidatingWriter.hh \
-api/Validator.hh \
-api/Writer.hh \
-api/Zigzag.hh \
-impl/Compiler.cc \
-impl/CompilerNode.cc \
-impl/Node.cc \
-impl/NodeImpl.cc \
-impl/Resolver.cc \
-impl/ResolverSchema.cc \
-impl/Schema.cc \
-impl/Types.cc \
-impl/ValidSchema.cc \
-impl/ValidatingReader.cc \
-impl/ValidatingWriter.cc \
-impl/Validator.cc \
-impl/Zigzag.cc \
-parser/AvroYacc.yy \
-parser/AvroLex.ll 
-
-# libavrocpp_la_LDFLAGS = -export-dynamic
-
-AM_LFLAGS= -o$(LEX_OUTPUT_ROOT).c
-AM_YFLAGS = -d
-
-check_PROGRAMS = unittest testgen 
-
-TESTS=unittest testgen
-TESTS_ENVIRONMENT = top_srcdir=$(top_srcdir)
-
-unittest_SOURCES = test/unittest.cc
-unittest_LDFLAGS = -static -no-install $(BOOST_LDFLAGS)
-unittest_LDADD = $(top_builddir)/libavrocpp.la $(BOOST_REGEX_LIB)
-
-testgen_SOURCES = test/testgen.cc testgen.hh testgen2.hh
-testgen_CXXFLAGS = $(AM_CXXFLAGS) -Wno-invalid-offsetof  
-testgen_LDFLAGS = -static -no-install $(BOOST_LDFLAGS)
-testgen_LDADD = $(top_builddir)/libavrocpp.la $(BOOST_REGEX_LIB)
-
-# Make sure we never package up '.svn' directories
-dist-hook:
-	find $(distdir) -name '.svn' | xargs rm -rf
-
-testgen.hh : bigrecord.precompile 
-	$(PYTHON) $(top_srcdir)/scripts/gen-cppcode.py -n testgen -i $< -o $@
-
-testgen2.hh : bigrecord2.precompile 
-	$(PYTHON) $(top_srcdir)/scripts/gen-cppcode.py -n testgen2 -i $< -o $@
-
-bigrecord.precompile: $(top_srcdir)/jsonschemas/bigrecord precompile$(EXEEXT)
-	$(top_builddir)/precompile$(EXEEXT) < $< > $@
-
-bigrecord2.precompile: $(top_srcdir)/jsonschemas/bigrecord2 precompile$(EXEEXT) 
-	$(top_builddir)/precompile$(EXEEXT) < $< > $@
-
-DOXYGEN_INPUTS= $(top_srcdir)/MainPage.dox $(patsubst %,$(top_srcdir)/%, $(library_include_HEADERS))
-
-CPP_DOC_DIR ?= "$(top_builddir)/doc"
-
-doc: $(CPP_DOC_DIR)/html/index.html
-
-$(CPP_DOC_DIR)/html/index.html: $(DOXYGEN_INPUTS) $(srcdir)/Doxyfile
-	(cat $(srcdir)/Doxyfile; echo "OUTPUT_DIRECTORY=$(CPP_DOC_DIR)";  echo "INPUT=$(DOXYGEN_INPUTS)") | $(DOXYGEN) - ;
-
-EXTRA_DIST=jsonschemas scripts
-
-CLEANFILES=bigrecord.precompile bigrecord2.precompile testgen.hh testgen2.hh AvroLex.cc AvroYacc.cc AvroYacc.h test.avro
-
-clean-local: clean-local-check
-.PHONY: clean-local-check
-clean-local-check:
-	-rm -rf $(CPP_DOC_DIR)
-
diff --git a/branch-1.3/lang/c++/NEWS b/branch-1.3/lang/c++/NEWS
deleted file mode 100644
index e70f439..0000000
--- a/branch-1.3/lang/c++/NEWS
+++ /dev/null
@@ -1,5 +0,0 @@
-
-For news, visit the Avro web site at
-http://hadoop.apache.org/avro/
-
-
diff --git a/branch-1.3/lang/c++/README b/branch-1.3/lang/c++/README
deleted file mode 100644
index b4378c0..0000000
--- a/branch-1.3/lang/c++/README
+++ /dev/null
@@ -1,54 +0,0 @@
-Avro C++ README.txt
-
-The C++ port is thus far incomplete.  Currently, it contains:
-
- - Serializer/Parser- objects for writing/reading raw binary.
-
- - xxxSchema- objects for composing schemas.
-
- - ValidSchema- a schema object that has been converted to a parse tree
-   (with some sanity checks). 
-
- - ValidSchema.toJson() writes the schema as a json object.
-
- - ValidatingSerializer/ValidatingParser- check that reads/writes
-   match the expected schema type (more expensive than the raw
-   serializer/parser but they detect errors, and allow dynamic
-   discovery of parsed data/attributes).
-
- - Compiler (compileJsonSchema())- converts a Json string schema to a
-   ValidSchema.
-
- - Code Generation (experimental) - given a schema it generates C++
-   objects of the same data types, and the code to serialize and parse
-   it.
-
-What's missing: Defaults are not yet supported, and the file and rpc
-containers are not yet implemented. Documentation is sparse.
-
-INSTRUCTIONS
-
-Compiling requires the boost headers and the boost regex library.
-Additionally, generating the avro spec compiler requires flex and bison.
-
-There is a configure file generated by autoconf.  Use:
-
-    ./configure 
-
-to generate a Makefile.
-
-If it doesn't work, either you are missing some packages (boost, flex or bison),
-or you need to help configure locate them.
-
-Type ./configure --help if you need help.
-
-If the Makefile is configured correctly, then you can make and run tests:
-
-    make
-    make check
-
-Before running tests, you may need to set LD_LIBRARY_PATH to the boost
-libraries if they are staged instead of installed.
-
-The install target is not yet set up (coming soon!)
-
diff --git a/branch-1.3/lang/c++/api/AvroParse.hh b/branch-1.3/lang/c++/api/AvroParse.hh
deleted file mode 100644
index c9a5fd0..0000000
--- a/branch-1.3/lang/c++/api/AvroParse.hh
+++ /dev/null
@@ -1,85 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_AvroParse_hh__
-#define avro_AvroParse_hh__
-
-#include <boost/static_assert.hpp>
-#include "AvroTraits.hh"
-#include "ResolvingReader.hh"
-
-/// \file
-///
-/// Standalone parse functions for Avro types.
-
-namespace avro {
-    
-/// The main parse entry point function.  Takes a parser (either validating or
-/// plain) and the object that should receive the parsed data.
-
-template <typename Reader, typename T>
-void parse(Reader &p, T& val)
-{
-    parse(p, val, is_serializable<T>());
-}
-
-template <typename T>
-void parse(ResolvingReader &p, T& val)
-{
-    translatingParse(p, val, is_serializable<T>());
-}
-
-/// If the type trait is not is_serializable, force the compiler to complain.
-
-template <typename Reader, typename T>
-void parse(Reader &p, T& val, const boost::false_type &)
-{
-    BOOST_STATIC_ASSERT(sizeof(T)==0);
-}
-
-template <typename Reader, typename T>
-void translatingParse(Reader &p, T& val, const boost::false_type &)
-{
-    BOOST_STATIC_ASSERT(sizeof(T)==0);
-}
-
-// @{
-
-/// The remainder of the file includes default implementations for serializable types.
-
-
-template <typename Reader, typename T>
-void parse(Reader &p, T &val, const boost::true_type &) {
-    p.readValue(val);
-}
-
-template <typename Reader>
-void parse(Reader &p, std::vector<uint8_t> &val, const boost::true_type &) {
-    p.readBytes(val);
-}
-
-template<typename T>
-void translatingParse(ResolvingReader &p, T& val, const boost::true_type &) {
-    p.parse(val);
-}
-
-// @}
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/AvroSerialize.hh b/branch-1.3/lang/c++/api/AvroSerialize.hh
deleted file mode 100644
index 0af03da..0000000
--- a/branch-1.3/lang/c++/api/AvroSerialize.hh
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_AvroSerialize_hh__
-#define avro_AvroSerialize_hh__
-
-#include <boost/static_assert.hpp>
-#include "AvroTraits.hh"
-
-/// \file
-///
-/// Standalone serialize functions for Avro types.
-
-namespace avro {
-    
-/// The main serializer entry point function.  Takes a serializer (either validating or
-/// plain) and the object that should be serialized.
-
-template <typename Writer, typename T>
-void serialize(Writer &s, const T& val)
-{
-    serialize(s, val, is_serializable<T>());
-}
-
-/// If the type trait is not is_serializable, force the compiler to complain.
-
-template <typename Writer, typename T>
-void serialize(Writer &s, const T& val, const boost::false_type &)
-{
-    BOOST_STATIC_ASSERT(sizeof(T)==0);
-}
-
-/// The remainder of the file includes default implementations for serializable types.
-
-// @{
-
-template <typename Writer, typename T>
-void serialize(Writer &s, T val, const boost::true_type &) {
-    s.writeValue(val);
-}
-
-template <typename Writer>
-void serialize(Writer &s, const std::vector<uint8_t> &val, const boost::true_type &) {
-    s.writeBytes(&val[0], val.size());
-}
-
-// @}
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/AvroTraits.hh b/branch-1.3/lang/c++/api/AvroTraits.hh
deleted file mode 100644
index d1da17e..0000000
--- a/branch-1.3/lang/c++/api/AvroTraits.hh
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_AvroTraits_hh__
-#define avro_AvroTraits_hh__
-
-#include "Boost.hh"
-#include "Types.hh"
-
-/// \file
-///
-/// Define an is_serializable trait for types we can serialize natively. 
-/// New types will need to define the trait as well.
-
-namespace avro {
-
-template <typename T>
-struct is_serializable : public boost::false_type{};
-
-template <typename T>
-struct is_promotable : public boost::false_type{};
-
-template <typename T>
-struct type_to_avro {
-    static const Type type = AVRO_NUM_TYPES;
-};
-
-#define DEFINE_PRIMITIVE(CTYPE, AVROTYPE) \
-template <> \
-struct is_serializable<CTYPE> : public boost::true_type{}; \
-\
-template <> \
-struct type_to_avro<CTYPE> { \
-    static const Type type = AVROTYPE; \
-};
-
-#define DEFINE_PROMOTABLE_PRIMITIVE(CTYPE, AVROTYPE) \
-template <> \
-struct is_promotable<CTYPE> : public boost::true_type{}; \
-\
-DEFINE_PRIMITIVE(CTYPE, AVROTYPE)
-
-DEFINE_PROMOTABLE_PRIMITIVE(int32_t, AVRO_INT)
-DEFINE_PROMOTABLE_PRIMITIVE(int64_t, AVRO_LONG)
-DEFINE_PROMOTABLE_PRIMITIVE(float, AVRO_FLOAT)
-DEFINE_PRIMITIVE(double, AVRO_DOUBLE)
-DEFINE_PRIMITIVE(bool, AVRO_BOOL)
-DEFINE_PRIMITIVE(Null, AVRO_NULL)
-DEFINE_PRIMITIVE(std::string, AVRO_STRING)
-DEFINE_PRIMITIVE(std::vector<uint8_t>, AVRO_BYTES)
-
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Boost.hh b/branch-1.3/lang/c++/api/Boost.hh
deleted file mode 100644
index 19e41f3..0000000
--- a/branch-1.3/lang/c++/api/Boost.hh
+++ /dev/null
@@ -1,113 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Boost_hh__
-#define avro_Boost_hh__
-
-#include <boost/version.hpp>
-
-#define BOOST_MINOR_VERSION ( BOOST_VERSION / 100 % 1000 )
-
-#if (BOOST_MINOR_VERSION < 33)
-
-/* 
- * In boost 1.33, boost introduced the type trait definitions for true_type and
- * false_type, the pointer containers, and allowed boost::any objects to
- * return references.
- *
- * In order to support earlier versions of boost, if these do not exist we just
- * create them here.
- */
-
-#define AVRO_BOOST_NO_ANYREF
-#define AVRO_BOOST_NO_TRAIT
-#define AVRO_BOOST_NO_PTRVECTOR
-
-#else
-#endif
-
-#include <boost/any.hpp>
-
-#ifdef AVRO_BOOST_NO_TRAIT
-// this is copied directly from boost documentation
-namespace boost {
-    template <class T, T val>
-    struct integral_constant {
-        typedef integral_constant<T, val>  type;
-        typedef T                          value_type;
-        static const T value = val;
-    };
-
-    typedef integral_constant<bool, true>  true_type;
-    typedef integral_constant<bool, false> false_type;
-} // namespace boost
-#else 
-#include <boost/type_traits.hpp>
-#endif // AVRO_BOOST_NO_TRAIT
-
-#ifdef AVRO_BOOST_NO_PTRVECTOR
-#include <vector>
-// this implements a minimal subset of ptr_vector (the parts of the API used by avro)
-namespace boost {
-    template <class T>
-    class ptr_vector {
-      public:
-        ptr_vector() : ptrs_() {}
-        ~ptr_vector() {
-            for(size_t i=0; i < ptrs_.size(); ++i) {
-                delete ptrs_[i];
-            }
-        }
-        void push_back(T *v) {
-            ptrs_.push_back(v);
-        }
-        void pop_back() {
-            T *toDelete = ptrs_.back();
-            ptrs_.pop_back();
-            delete toDelete;
-        }
-        const T& back() const {
-            return *ptrs_.back();
-        };
-        T& back() {
-            return *ptrs_.back();
-        };
-        bool empty() const {
-            return ptrs_.empty();
-        }
-        const T& at(size_t index) const {
-            return *(ptrs_.at(index));
-        }
-        const T& operator[](size_t index) const {
-            return *(ptrs_[index]);
-        }
-        size_t size() const {
-            return ptrs_.size();
-        }
-        void reserve(size_t elems) {
-            ptrs_.reserve(elems);
-        }
-      private:
-        std::vector<T *> ptrs_;
-    };
-} // namespace boost
-#else 
-#include <boost/ptr_container/ptr_vector.hpp>
-#endif // AVRO_BOOST_NO_PTRVECTOR
-
-#endif // avro_Boost_hh__
diff --git a/branch-1.3/lang/c++/api/Compiler.hh b/branch-1.3/lang/c++/api/Compiler.hh
deleted file mode 100644
index c730a8a..0000000
--- a/branch-1.3/lang/c++/api/Compiler.hh
+++ /dev/null
@@ -1,110 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Compiler_hh__
-#define avro_Compiler_hh__
-
-#include "Boost.hh"
-
-#include <FlexLexer.h>
-#include "Types.hh"
-#include "Node.hh"
-#include "CompilerNode.hh"
-
-namespace avro {
-
-/// This class is used to implement an avro spec parser using a flex/bison
-/// compiler.  In order for the lexer to be reentrant, this class provides a
-/// lexer object for each parse.  The bison parser also uses this class to
-/// build up an avro parse tree as the avro spec is parsed.
-    
-class CompilerContext {
-
-
-  public:
-
-    CompilerContext(std::istream &is) :
-        lexer_(&is)
-    {}
-
-    /// Called by the lexer whenever it encounters text that is not a symbol it recognizes
-    /// (names, fieldnames, values to be converted to integers, etc).
-    void setText(const char *text) {
-        text_ = text;
-    }
-
-    void addNamedType();
-
-    void startType();
-    void stopType();
-
-    void addType(avro::Type type);
-
-    void setSizeAttribute();
-    void setNameAttribute();
-    void setSymbolsAttribute();
-
-    void setFieldsAttribute();
-    void setItemsAttribute();
-    void setValuesAttribute();
-    void setTypesAttribute();
-
-    void textContainsFieldName();
-
-    const FlexLexer &lexer() const {
-        return lexer_;
-    }
-    FlexLexer &lexer() {
-        return lexer_;
-    }
-
-    const NodePtr &getRoot() const {
-        return root_;
-    }
-
-  private:
-
-    typedef boost::ptr_vector<CompilerNode> Stack;
-
-    void add(const NodePtr &node);
-
-    yyFlexLexer lexer_;
-    std::string text_;
-    
-    NodePtr   root_;
-    Stack     stack_;
-};
-
-class ValidSchema;
-
-/// Given a stream containing a JSON schema, compiles the schema to a
-/// ValidSchema object.  Throws if the schema cannot be compiled to a valid
-/// schema.
-
-void compileJsonSchema(std::istream &is, ValidSchema &schema);
-
-/// Non-throwing version of compileJsonSchema.  
-///
-/// \return True if no error, false if error (with the error string set)
-///
-
-bool compileJsonSchema(std::istream &is, ValidSchema &schema, std::string &error);
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/CompilerNode.hh b/branch-1.3/lang/c++/api/CompilerNode.hh
deleted file mode 100644
index b9faf98..0000000
--- a/branch-1.3/lang/c++/api/CompilerNode.hh
+++ /dev/null
@@ -1,128 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_CompilerNode_hh__
-#define avro_CompilerNode_hh__
-
-#include "NodeConcepts.hh"
-#include "Node.hh"
-
-namespace avro {
-
-
-/// This is a generic "untyped" node that may store values for all possible
-/// attributes of Avro complex types.  This allows a Node to be assembled by
-/// the compiler, before it knows what attributes the Node actually contains.
-/// All the Avro types defined (see NodeImpl) may be copy constructed from a
-/// CompilerNode, at which time the attributes actually required by the Avro
-/// type are copied from the CompilerNode, and all unused attributes are
-/// dropped.
-
-class CompilerNode
-{
-
-  public:
-
-    enum AttributeType {
-        NONE,
-        FIELDS,
-        VALUES,
-        ITEMS,
-        TYPES
-    };
-
-    CompilerNode() :
-        type_(AVRO_NUM_TYPES),
-        attributeType_(NONE)
-    {}
-
-    CompilerNode(const CompilerNode &rhs) :
-        type_(rhs.type_),
-        attributeType_(rhs.attributeType_)
-    {}
-
-
-    AttributeType attributeType() const {
-        return attributeType_;
-    }
-
-    void setAttributeType(AttributeType attributeType) {
-        attributeType_ = attributeType;
-    }
-
-    Type type() const {
-        return type_;
-    }
-
-    void setType(Type type) {
-        type_ = type;
-    } 
-
-    void addNode(const NodePtr &node) {
-        switch(attributeType_) {
-          case FIELDS:
-            fieldsAttribute_.add(node);
-            break;
-          case VALUES:
-            valuesAttribute_.add(node);
-            break;
-          case ITEMS:
-            itemsAttribute_.add(node);
-            break;
-          case TYPES:
-            typesAttribute_.add(node);
-            break;
-
-          default:
-            throw Exception("Can't add node if the attribute type is not set");
-        }
-    }
-
-
-    // attribute used by records, enums, symbols, and fixed:
-    concepts::SingleAttribute<std::string> nameAttribute_;
-
-    // attribute used by fixed:
-    concepts::SingleAttribute<int> sizeAttribute_;
-
-    // attributes used by records:
-    concepts::MultiAttribute<NodePtr>     fieldsAttribute_;
-    concepts::MultiAttribute<std::string> fieldsNamesAttribute_;
-
-    // attribute used by enums:
-    concepts::MultiAttribute<std::string> symbolsAttribute_;
-
-    // attribute used by arrays:
-    concepts::SingleAttribute<NodePtr> itemsAttribute_;
-
-    // attribute used by maps:
-    concepts::SingleAttribute<NodePtr> valuesAttribute_;
-
-    // attribute used by unions:
-    concepts::MultiAttribute<NodePtr> typesAttribute_;
-
-    Type type_;
-    AttributeType attributeType_;
-
-};
-
-NodePtr nodeFromCompilerNode(CompilerNode &compilerNode);
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Exception.hh b/branch-1.3/lang/c++/api/Exception.hh
deleted file mode 100644
index 1149c88..0000000
--- a/branch-1.3/lang/c++/api/Exception.hh
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Exception_hh__
-#define avro_Exception_hh__
-
-#include <stdexcept>
-#include <boost/format.hpp>
-
-namespace avro {
-
-/// Wrapper for std::runtime_error that provides a convenience constructor
-/// for boost::format objects
-
-class Exception : public virtual std::runtime_error
-{
-  public:
-
-    Exception(const std::string &msg) :
-        std::runtime_error(msg)
-    { }
-
-    Exception(const boost::format &msg) :
-        std::runtime_error( boost::str(msg))
-    { }  
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/InputStreamer.hh b/branch-1.3/lang/c++/api/InputStreamer.hh
deleted file mode 100644
index 2007113..0000000
--- a/branch-1.3/lang/c++/api/InputStreamer.hh
+++ /dev/null
@@ -1,94 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_InputStreamer_hh__
-#define avro_InputStreamer_hh__
-
-#include <iostream>
-#include <stdint.h>
-
-namespace avro {
-
-///
-/// A generic object for reading inputs from a stream.  Serves as a base class,
-/// so that avro parser objects can read from different sources (for example,
-/// istreams or blocks of memory), with each derived class providing the
-/// implementation for its source.
-///
-/// Right now this class is very bare-bones.
-///
-    
-class InputStreamer {
-
-  public:
-
-    virtual ~InputStreamer()
-    { }
-
-    virtual size_t readByte(uint8_t &byte) = 0;
-    virtual size_t readWord(uint32_t &word) = 0;
-    virtual size_t readLongWord(uint64_t &word) = 0;
-    virtual size_t readBytes(void *bytes, size_t size) = 0;
-};
-
-
-///
-/// An implementation of InputStreamer that uses a std::istream for input.
-///
-/// Right now this class is very bare-bones, without much in the way of error
-/// handling.
-///
-    
-class IStreamer : public InputStreamer {
-
-  public:
-
-    IStreamer(std::istream &is) :
-        is_(is)
-    {}
-
-    size_t readByte(uint8_t &byte) {
-        char val;
-        is_.get(val);
-        byte = val;
-        return 1;
-    }
-
-    size_t readWord(uint32_t &word) {
-        is_.read(reinterpret_cast<char *>(&word), sizeof(word));
-        return is_.gcount();
-    }
-
-    size_t readLongWord(uint64_t &word) {
-        is_.read(reinterpret_cast<char *>(&word), sizeof(word));
-        return is_.gcount();
-    }
-
-    size_t readBytes(void *bytes, size_t size) {
-        is_.read(reinterpret_cast<char *>(bytes), size);
-        return is_.gcount();
-    }
-
-  private:
-
-    std::istream &is_;
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Layout.hh b/branch-1.3/lang/c++/api/Layout.hh
deleted file mode 100644
index d2b00b7..0000000
--- a/branch-1.3/lang/c++/api/Layout.hh
+++ /dev/null
@@ -1,84 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Layout_hh__
-#define avro_Layout_hh__
-
-#include <boost/noncopyable.hpp>
-#include "Boost.hh"
-
-/// \file Layout.hh
-///
-
-namespace avro {
-    
-class Layout : private boost::noncopyable {
-
-  protected:
-
-    Layout(size_t offset = 0) :
-        offset_(offset)
-    {}
-
-  public:
-
-    size_t offset() const {
-        return offset_;
-    }
-
-    virtual ~Layout() {}
-
-  private:
-
-    const size_t offset_;
-};
-
-class PrimitiveLayout : public Layout {
-
-  public:
-
-    PrimitiveLayout(size_t offset = 0) :
-        Layout(offset)
-    {}
-};
-
-class CompoundLayout : public Layout {
-
-  public:
-
-    CompoundLayout(size_t offset = 0) :
-        Layout(offset)
-    {}
-
-    void add(Layout *layout) {
-        layouts_.push_back(layout);
-    }
-
-    const Layout &at (size_t idx) const {
-        return layouts_.at(idx);
-    }
-
-  private:
-
-    boost::ptr_vector<Layout> layouts_;
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Node.hh b/branch-1.3/lang/c++/api/Node.hh
deleted file mode 100644
index d4f1311..0000000
--- a/branch-1.3/lang/c++/api/Node.hh
+++ /dev/null
@@ -1,141 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Node_hh__
-#define avro_Node_hh__
-
-#include <cassert>
-#include <boost/noncopyable.hpp>
-#include <boost/shared_ptr.hpp>
-
-#include "Exception.hh"
-#include "Types.hh"
-#include "SchemaResolution.hh"
-
-namespace avro {
-
-class Node;
-
-typedef boost::shared_ptr<Node> NodePtr;
-
-
-/// Node is the building block for parse trees.  Each node represents an avro
-/// type.  Compound types have leaf nodes that represent the types they are
-/// composed of.
-///
-/// The user does not use the Node object directly, they interface with Schema
-/// objects.
-///
-/// The Node object uses reference-counted pointers.  This is so that schemas 
-/// may be reused in other schemas, without needing to worry about memory
-/// deallocation for nodes that are added to multiple schema parse trees.
-///
-/// Node has minimal implementation, serving as an abstract base class for
-/// different node types.
-///
-
-class Node : private boost::noncopyable
-{
-  public:
-
-    Node(Type type) :
-        type_(type),
-        locked_(false)
-    {}
-
-    virtual ~Node();
-
-    Type type() const {
-        return type_;
-    }
-
-    void lock() {
-        locked_ = true;
-    }
-
-    bool locked() const {
-        return locked_;
-    }
-
-    virtual bool hasName() const = 0;
-
-    void setName(const std::string &name) {
-        checkLock();
-        checkName(name);
-        doSetName(name);
-    }
-    virtual const std::string &name() const = 0;
-
-    void addLeaf(const NodePtr &newLeaf) {
-        checkLock();
-        doAddLeaf(newLeaf);
-    }
-    virtual size_t leaves() const = 0;
-    virtual const NodePtr& leafAt(int index) const = 0;
-
-    void addName(const std::string &name) {
-        checkLock();
-        checkName(name);
-        doAddName(name);
-    }
-    virtual size_t names() const = 0;
-    virtual const std::string &nameAt(int index) const = 0;
-    virtual bool nameIndex(const std::string &name, size_t &index) const = 0;
-
-    void setFixedSize(int size) {
-        checkLock();
-        doSetFixedSize(size);
-    }
-    virtual int fixedSize() const = 0;
-
-    virtual bool isValid() const = 0;
-
-    virtual SchemaResolution resolve(const Node &reader) const = 0;
-
-    virtual void printJson(std::ostream &os, int depth) const = 0;
-
-    virtual void printBasicInfo(std::ostream &os) const = 0;
-
-  protected:
-
-    friend class ValidSchema;
-
-    virtual void setLeafToSymbolic(int index, const NodePtr &node) = 0;
-
-    void checkLock() const {
-        if(locked()) {
-            throw Exception("Cannot modify locked schema");
-        }
-    }
-
-    void checkName(const std::string &name) const;
-
-    virtual void doSetName(const std::string &name) = 0;
-    virtual void doAddLeaf(const NodePtr &newLeaf) = 0;
-    virtual void doAddName(const std::string &name) = 0;
-    virtual void doSetFixedSize(int size) = 0;
-
-  private:
-
-    const Type type_;
-    bool locked_;
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/NodeConcepts.hh b/branch-1.3/lang/c++/api/NodeConcepts.hh
deleted file mode 100644
index 7e645cb..0000000
--- a/branch-1.3/lang/c++/api/NodeConcepts.hh
+++ /dev/null
@@ -1,216 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_NodeConcepts_hh__
-#define avro_NodeConcepts_hh__
-
-#include <vector>
-#include <map>
-#include "Exception.hh"
-
-namespace avro {
-
-
-///
-/// The concept classes are used to simplify NodeImpl.  Different avro types
-/// carry different attributes, such as names, or field names for record
-/// members.  By selecting the concept class NoAttribute vs. Attribute, the
-/// NodeImpl object can enable or disable an attribute while the code stays
-/// the same in either case.
-///
-/// Furthermore, attributes may have different types; for example, most
-/// attributes are strings, but fixed types have a size attribute, which is
-/// an integer.
-///
-/// Since compound types are composed of other types, the leaf attribute
-/// concepts extend a NodeImpl to include leaf nodes, and attributes for leaf
-/// nodes, which are used to build parse trees.
-///
-///
-
-namespace concepts {
-
-template <typename Attribute>
-struct NoAttribute
-{
-    static const bool hasAttribute = false;
-
-    size_t size() const {
-        return 0;
-    }
-
-    void add(const Attribute &attr) {
-        // There must be an add function for the generic NodeImpl, but the
-        // Node APIs ensure that it is never called; the throw here is
-        // just in case.
-        throw Exception("This type does not have attribute");
-    }
-
-    const Attribute &get(size_t index = 0) const {
-        // There must be a get function for the generic NodeImpl, but the
-        // Node APIs ensure that it is never called; the throw here is
-        // just in case.
-        throw Exception("This type does not have attribute");
-        // even though this code is unreachable, the compiler requires it
-        static const Attribute empty = Attribute();
-        return empty;
-    }
-
-};
-
-template<typename Attribute>
-struct SingleAttribute
-{
-    static const bool hasAttribute = true;
-
-    SingleAttribute() : attr_(), size_(0)
-    { }
-
-    // copy constructing from another single attribute is allowed
-    SingleAttribute(const SingleAttribute<Attribute> &rhs) : 
-        attr_(rhs.attr_), size_(rhs.size_)
-    { }
-
-    // copy constructing from a NoAttribute is allowed
-    SingleAttribute(const NoAttribute<Attribute> &rhs) : 
-        attr_(), size_(0)
-    { }
-
-    size_t size() const {
-        return size_;
-    }
-
-    void add(const Attribute &attr) {
-        if(size_ == 0) {
-            size_ = 1;
-        }
-        else {
-            throw Exception("SingleAttribute can only be set once");
-        }
-        attr_ = attr;
-    }
-
-    const Attribute &get(size_t index = 0) const {
-        if(index != 0) {
-            throw Exception("SingleAttribute has only 1 value");
-        }
-        return attr_;
-    }
-
-  private:
-
-    template<typename T> friend class MultiAttribute;
-
-    Attribute attr_;
-    int       size_;
-};
-
-template<typename Attribute>
-struct MultiAttribute
-{
-    static const bool hasAttribute = true;
-
-    MultiAttribute() 
-    { }
-
-    // copy constructing from a SingleAttribute is allowed; it
-    // pushes the attribute
-    MultiAttribute(const SingleAttribute<Attribute> &rhs)
-    {
-        // since map is the only type that does this, we know its
-        // final size will be two, so reserve
-        attrs_.reserve(2);
-        attrs_.push_back(rhs.attr_);
-    }
-
-    MultiAttribute(const MultiAttribute<Attribute> &rhs)  :
-        attrs_(rhs.attrs_)
-    { }
-
-    MultiAttribute(const NoAttribute<Attribute> &rhs)
-    {}
-
-    size_t size() const {
-        return attrs_.size();
-    }
-
-    void add(const Attribute &attr) {
-        attrs_.push_back(attr); 
-    }
-
-    const Attribute &get(size_t index = 0) const {
-        return attrs_.at(index);
-    }
-
-    Attribute &get(size_t index) {
-        return attrs_.at(index);
-    }
-
-  private:
-
-    std::vector<Attribute> attrs_;
-};
-
-
-template<typename T>
-struct NameIndexConcept {
-
-    bool lookup(const std::string &name, size_t &index) const {
-        throw Exception("Name index does not exist");
-        return 0;
-    }
-
-    bool add(const std::string &name, size_t index) {
-        throw Exception("Name index does not exist");
-        return false;
-    }
-};
-
-template<>
-struct NameIndexConcept < MultiAttribute<std::string> > 
-{
-    typedef std::map<std::string, size_t> IndexMap;
-
-    bool lookup(const std::string &name, size_t &index) const {
-        IndexMap::const_iterator iter = map_.find(name); 
-        if(iter == map_.end()) {
-            return false;
-        }
-        index = iter->second;
-        return true;
-    }
-
-    bool add(const std::string &name, size_t index) {
-        bool added = false;
-        IndexMap::iterator lb = map_.lower_bound(name); 
-        if(lb == map_.end() || map_.key_comp()(name, lb->first)) {
-            map_.insert(lb, IndexMap::value_type(name, index));
-            added = true;
-        }
-        return added;
-    }
-
-  private:
-
-    IndexMap map_;
-};
-
-} // namespace concepts
-} // namespace avro
-
-#endif
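[Editor's note: the attribute concepts deleted above enable or disable a node attribute at compile time while NodeImpl's code stays identical. A minimal standalone sketch of the same pattern follows; the names `NoAttr`, `OneAttr`, and `NodeSketch` are simplified stand-ins invented here, not part of the Avro API.]

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>

// Disabled attribute: present in the interface, but throws if actually used.
template <typename T>
struct NoAttr {
    static const bool hasAttribute = false;
    size_t size() const { return 0; }
    void add(const T &) { throw std::runtime_error("no attribute"); }
    const T &get() const { throw std::runtime_error("no attribute"); }
};

// Exactly one value, settable once; mirrors SingleAttribute.
template <typename T>
struct OneAttr {
    static const bool hasAttribute = true;
    OneAttr() : set_(false) {}
    size_t size() const { return set_ ? 1 : 0; }
    void add(const T &v) {
        if (set_) throw std::runtime_error("already set");
        val_ = v;
        set_ = true;
    }
    const T &get() const { return val_; }
  private:
    T val_;
    bool set_;
};

// A node-like holder whose name support is chosen by the template argument;
// the member function bodies are identical for both concept choices.
template <class NameConcept>
struct NodeSketch {
    bool hasName() const { return NameConcept::hasAttribute; }
    NameConcept name_;
};
```

Instantiating `NodeSketch<OneAttr<std::string> >` yields a named node and `NodeSketch<NoAttr<std::string> >` an anonymous one, with no runtime branching in the shared code.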
diff --git a/branch-1.3/lang/c++/api/NodeImpl.hh b/branch-1.3/lang/c++/api/NodeImpl.hh
deleted file mode 100644
index 268bb8a..0000000
--- a/branch-1.3/lang/c++/api/NodeImpl.hh
+++ /dev/null
@@ -1,433 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_NodeImpl_hh__
-#define avro_NodeImpl_hh__
-
-#include <limits>
-#include <boost/weak_ptr.hpp>
-
-#include "Node.hh"
-#include "NodeConcepts.hh"
-
-namespace avro {
-
-/// Implementation details for Node.  NodeImpl represents all the avro types,
-/// whose properties are enabled or disabled by selecting concept classes.
-
-template 
-< 
-    class NameConcept,
-    class LeavesConcept,
-    class LeafNamesConcept,
-    class SizeConcept
->
-class NodeImpl : public Node
-{
-
-  protected:
-
-    NodeImpl(Type type) :
-        Node(type),
-        nameAttribute_(),
-        leafAttributes_(),
-        leafNameAttributes_(),
-        sizeAttribute_()
-    { }
-
-    NodeImpl(Type type, 
-             const NameConcept &name, 
-             const LeavesConcept &leaves, 
-             const LeafNamesConcept &leafNames,
-             const SizeConcept &size) :
-        Node(type),
-        nameAttribute_(name),
-        leafAttributes_(leaves),
-        leafNameAttributes_(leafNames),
-        sizeAttribute_(size)
-    { }
-
-    bool hasName() const {
-        return NameConcept::hasAttribute;
-    }
-
-    void doSetName(const std::string &name) {
-        nameAttribute_.add(name);
-    }
-    
-    const std::string &name() const {
-        return nameAttribute_.get();
-    }
-
-    void doAddLeaf(const NodePtr &newLeaf) { 
-        leafAttributes_.add(newLeaf);
-    }
-
-    size_t leaves() const {
-        return leafAttributes_.size();
-    }
-
-    const NodePtr &leafAt(int index) const { 
-        return leafAttributes_.get(index);
-    }
-
-    void doAddName(const std::string &name) { 
-        if(! nameIndex_.add(name, leafNameAttributes_.size())) {
-            throw Exception(boost::format("Cannot add duplicate name: %1%") % name);
-        }
-        leafNameAttributes_.add(name);
-    }
-
-    size_t names() const {
-        return leafNameAttributes_.size();
-    }
-
-    const std::string &nameAt(int index) const { 
-        return leafNameAttributes_.get(index);
-    }
-
-    bool nameIndex(const std::string &name, size_t &index) const {
-        return nameIndex_.lookup(name, index);
-    }
-
-    void doSetFixedSize(int size) {
-        sizeAttribute_.add(size);
-    }
-
-    int fixedSize() const {
-        return sizeAttribute_.get();
-    }
-
-    virtual bool isValid() const = 0;
-
-    void printBasicInfo(std::ostream &os) const;
-
-    void setLeafToSymbolic(int index, const NodePtr &node);
-   
-    SchemaResolution furtherResolution(const Node &node) const;
-
-    NameConcept nameAttribute_;
-    LeavesConcept leafAttributes_;
-    LeafNamesConcept leafNameAttributes_;
-    SizeConcept sizeAttribute_;
-    concepts::NameIndexConcept<LeafNamesConcept> nameIndex_;
-};
-
-typedef concepts::NoAttribute<std::string>     NoName;
-typedef concepts::SingleAttribute<std::string> HasName;
-
-typedef concepts::NoAttribute<NodePtr>      NoLeaves;
-typedef concepts::SingleAttribute<NodePtr>  SingleLeaf;
-typedef concepts::MultiAttribute<NodePtr>   MultiLeaves;
-
-typedef concepts::NoAttribute<std::string>     NoLeafNames;
-typedef concepts::MultiAttribute<std::string>  LeafNames;
-
-typedef concepts::NoAttribute<int>     NoSize;
-typedef concepts::SingleAttribute<int> HasSize;
-
-typedef NodeImpl< NoName,  NoLeaves,    NoLeafNames,  NoSize  > NodeImplPrimitive;
-typedef NodeImpl< HasName, NoLeaves,    NoLeafNames,  NoSize  > NodeImplSymbolic;
-
-typedef NodeImpl< HasName, MultiLeaves, LeafNames,    NoSize  > NodeImplRecord;
-typedef NodeImpl< HasName, NoLeaves,    LeafNames,    NoSize  > NodeImplEnum;
-typedef NodeImpl< NoName,  SingleLeaf,  NoLeafNames,  NoSize  > NodeImplArray;
-typedef NodeImpl< NoName,  MultiLeaves, NoLeafNames,  NoSize  > NodeImplMap;
-typedef NodeImpl< NoName,  MultiLeaves, NoLeafNames,  NoSize  > NodeImplUnion;
-typedef NodeImpl< HasName, NoLeaves,    NoLeafNames,  HasSize > NodeImplFixed;
-
-class NodePrimitive : public NodeImplPrimitive
-{
-  public:
-
-    explicit NodePrimitive(Type type) :
-        NodeImplPrimitive(type)
-    { }
-
-    SchemaResolution resolve(const Node &reader)  const;
-
-    void printJson(std::ostream &os, int depth) const;
-
-    bool isValid() const {
-        return true;
-    }
-};
-
-class NodeSymbolic : public NodeImplSymbolic
-{
-    typedef boost::weak_ptr<Node> NodeWeakPtr;
-
-  public:
-
-    NodeSymbolic() :
-        NodeImplSymbolic(AVRO_SYMBOLIC)
-    { }
-
-    explicit NodeSymbolic(const HasName &name) :
-        NodeImplSymbolic(AVRO_SYMBOLIC, name, NoLeaves(), NoLeafNames(), NoSize())
-    { }
-
-    SchemaResolution resolve(const Node &reader)  const;
-
-    void printJson(std::ostream &os, int depth) const;
-
-    bool isValid() const {
-        return (nameAttribute_.size() == 1);
-    }
-
-    bool isSet() const {
-         return (actualNode_.lock() != 0);
-    }
-
-    NodePtr getNode() const {
-        NodePtr node = actualNode_.lock();
-        if(!node) {
-            throw Exception(boost::format("Could not follow symbol %1%") % name());
-        }
-        return node;
-    }
-
-    void setNode(const NodePtr &node) {
-        actualNode_ = node;
-    }
-
-  protected:
-
-    NodeWeakPtr actualNode_;
-
-};
-
-class NodeRecord : public NodeImplRecord
-{
-  public:
-
-    NodeRecord() :
-        NodeImplRecord(AVRO_RECORD) 
-    { }
-
-    NodeRecord(const HasName &name, const MultiLeaves &fields, const LeafNames &fieldsNames) :
-        NodeImplRecord(AVRO_RECORD, name, fields, fieldsNames, NoSize())
-    { 
-        for(size_t i=0; i < leafNameAttributes_.size(); ++i) {
-            if(!nameIndex_.add(leafNameAttributes_.get(i), i)) {
-                 throw Exception(boost::format("Cannot add duplicate name: %1%") % leafNameAttributes_.get(i));
-            }
-        }
-    }
-
-    SchemaResolution resolve(const Node &reader)  const;
-
-    void printJson(std::ostream &os, int depth) const;
-
-    bool isValid() const {
-        return (
-                (nameAttribute_.size() == 1) && 
-                (leafAttributes_.size() > 0) &&
-                (leafAttributes_.size() == leafNameAttributes_.size())
-               );
-    }
-};
-
-class NodeEnum : public NodeImplEnum
-{
-  public:
-
-    NodeEnum() :
-        NodeImplEnum(AVRO_ENUM) 
-    { }
-
-    NodeEnum(const HasName &name, const LeafNames &symbols) :
-        NodeImplEnum(AVRO_ENUM, name, NoLeaves(), symbols, NoSize())
-    { 
-        for(size_t i=0; i < leafNameAttributes_.size(); ++i) {
-            if(!nameIndex_.add(leafNameAttributes_.get(i), i)) {
-                 throw Exception(boost::format("Cannot add duplicate name: %1%") % leafNameAttributes_.get(i));
-            }
-        }
-    }
-        
-    SchemaResolution resolve(const Node &reader)  const;
-
-    void printJson(std::ostream &os, int depth) const;
-
-    bool isValid() const {
-        return (
-                (nameAttribute_.size() == 1) && 
-                (leafNameAttributes_.size() > 0) 
-               );
-    }
-};
-
-class NodeArray : public NodeImplArray
-{
-  public:
-
-    NodeArray() :
-        NodeImplArray(AVRO_ARRAY)
-    { }
-
-    explicit NodeArray(const SingleLeaf &items) :
-        NodeImplArray(AVRO_ARRAY, NoName(), items, NoLeafNames(), NoSize())
-    { }
-
-    SchemaResolution resolve(const Node &reader)  const;
-
-    void printJson(std::ostream &os, int depth) const;
-
-    bool isValid() const {
-        return (leafAttributes_.size() == 1);
-    }
-};
-
-class NodeMap : public NodeImplMap
-{
-  public:
-
-    NodeMap() :
-        NodeImplMap(AVRO_MAP)
-    { 
-         NodePtr key(new NodePrimitive(AVRO_STRING));
-         doAddLeaf(key);
-    }
-
-    explicit NodeMap(const SingleLeaf &values) :
-        NodeImplMap(AVRO_MAP, NoName(), values, NoLeafNames(), NoSize())
-    { 
-        // need to add the key for the map too
-        NodePtr key(new NodePrimitive(AVRO_STRING));
-        doAddLeaf(key);
-
-        // key goes before value
-        std::swap(leafAttributes_.get(0), leafAttributes_.get(1));
-    }
-
-    SchemaResolution resolve(const Node &reader)  const;
-
-    void printJson(std::ostream &os, int depth) const;
-
-    bool isValid() const {
-        return (leafAttributes_.size() == 2);
-    }
-};
-
-class NodeUnion : public NodeImplUnion
-{
-  public:
-
-    NodeUnion() :
-        NodeImplUnion(AVRO_UNION)
-    { }
-
-    explicit NodeUnion(const MultiLeaves &types) :
-        NodeImplUnion(AVRO_UNION, NoName(), types, NoLeafNames(), NoSize())
-    { }
-
-    SchemaResolution resolve(const Node &reader)  const;
-
-    void printJson(std::ostream &os, int depth) const;
-
-    bool isValid() const {
-        return (leafAttributes_.size() > 1);
-    }
-};
-
-class NodeFixed : public NodeImplFixed
-{
-  public:
-
-    NodeFixed() :
-        NodeImplFixed(AVRO_FIXED)
-    { }
-
-    NodeFixed(const HasName &name, const HasSize &size) :
-        NodeImplFixed(AVRO_FIXED, name, NoLeaves(), NoLeafNames(), size)
-    { }
-
-    SchemaResolution resolve(const Node &reader)  const;
-
-    void printJson(std::ostream &os, int depth) const;
-
-    bool isValid() const {
-        return (
-                (nameAttribute_.size() == 1) && 
-                (sizeAttribute_.size() == 1) 
-               );
-    }
-};
-
-template < class A, class B, class C, class D >
-inline void 
-NodeImpl<A,B,C,D>::setLeafToSymbolic(int index, const NodePtr &node)
-{
-    if(!B::hasAttribute) {
-        throw Exception("Cannot change leaf node for nonexistent leaf");
-    } 
-
-    NodePtr &replaceNode = const_cast<NodePtr &>(leafAttributes_.get(index));
-    if(replaceNode->name() != node->name()) {
-        throw Exception("Symbolic name does not match the name of the schema it references");
-    }
-
-    NodePtr symbol(new NodeSymbolic);
-    NodeSymbolic *ptr = static_cast<NodeSymbolic *> (symbol.get());
-
-    ptr->setName(node->name());
-    ptr->setNode(node);
-    replaceNode.swap(symbol);
-}
-
-template < class A, class B, class C, class D >
-inline void 
-NodeImpl<A,B,C,D>::printBasicInfo(std::ostream &os) const
-{
-    os << type();
-    if(hasName()) {
-        os << " " << nameAttribute_.get();
-    }
-    if(D::hasAttribute) {
-        os << " " << sizeAttribute_.get();
-    }
-    os << '\n';
-    int count = leaves();
-    count = count ? count : names();
-    for(int i= 0; i < count; ++i) {
-        if( C::hasAttribute ) {
-            os << "name " << nameAt(i) << '\n';
-        }
-        if( type() != AVRO_SYMBOLIC && leafAttributes_.hasAttribute) {
-            leafAt(i)->printBasicInfo(os);
-        }
-    }
-    if(isCompound(type())) {
-        os << "end " << type() << '\n';
-    }
-}
-
-
-inline NodePtr resolveSymbol(const NodePtr &node) 
-{
-    if(node->type() != AVRO_SYMBOLIC) {
-        throw Exception("Only symbolic nodes may be resolved");
-    }
-    boost::shared_ptr<NodeSymbolic> symNode = boost::static_pointer_cast<NodeSymbolic>(node);
-    return symNode->getNode();
-}
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/OutputStreamer.hh b/branch-1.3/lang/c++/api/OutputStreamer.hh
deleted file mode 100644
index 3c30dfd..0000000
--- a/branch-1.3/lang/c++/api/OutputStreamer.hh
+++ /dev/null
@@ -1,127 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_OutputStreamer_hh__
-#define avro_OutputStreamer_hh__
-
-#include <iostream>
-#include <stdint.h>
-
-namespace avro {
-
-///
-/// A generic object for outputting data to a stream.
-///
-/// Serves as a base class, so that avro serializer objects can write to
-/// different sinks (for example, ostreams or blocks of memory), while the
-/// derived class provides the implementation for each sink.
-///
-/// Right now this class is very bare-bones.
-///
-    
-class OutputStreamer {
-
-  public:
-
-    virtual ~OutputStreamer()
-    { }
-
-    virtual size_t writeByte(uint8_t byte) = 0;
-    virtual size_t writeWord(uint32_t word) = 0;
-    virtual size_t writeLongWord(uint64_t word) = 0;
-    virtual size_t writeBytes(const void *bytes, size_t size) = 0;
-};
-
-
-///
-/// An implementation of OutputStreamer that writes bytes to the screen as an
-/// ASCII representation of their hex digits, used for debugging.
-///
-    
-class ScreenStreamer : public OutputStreamer {
-
-    size_t writeByte(uint8_t byte) {
-        std::cout << "0x" << std::hex << static_cast<int32_t>(byte) << std::dec << " ";
-        return 1;
-    }
-
-    size_t writeWord(uint32_t word) {
-        ScreenStreamer::writeBytes(&word, sizeof(word));
-        return sizeof(uint32_t);
-    }
-
-    size_t writeLongWord(uint64_t word) {
-        ScreenStreamer::writeBytes(&word, sizeof(word));
-        return sizeof(uint64_t);
-    }
-
-    size_t writeBytes(const void *bytes, size_t size) {
-        const uint8_t *ptr = reinterpret_cast<const uint8_t *>(bytes);
-        for (size_t i= 0; i < size; ++i) {
-            ScreenStreamer::writeByte(*ptr++);
-        }
-        std::cout << std::endl;
-        return size;
-    }
-
-};
-
-///
-/// An implementation of OutputStreamer that writes bytes to a std::ostream for
-/// output.
-///
-/// Right now this class is very bare-bones, without much in the way of error
-/// handling.
-///
-    
-class OStreamer : public OutputStreamer {
-
-  public:
-
-    OStreamer(std::ostream &os) :
-        os_(os)
-    {}
-
-    size_t writeByte(uint8_t byte) {
-        os_.put(byte);
-        return 1;
-    }
-
-    size_t writeWord(uint32_t word) {
-        os_.write(reinterpret_cast<char *>(&word), sizeof(word));
-        return sizeof(uint32_t);
-    }
-
-    size_t writeLongWord(uint64_t word) {
-        os_.write(reinterpret_cast<char *>(&word), sizeof(word));
-        return sizeof(uint64_t);
-    }
-
-    size_t writeBytes(const void *bytes, size_t size) {
-        os_.write(reinterpret_cast<const char *>(bytes), size);
-        return size;
-    }
-
-  private:
-
-    std::ostream &os_;
-};
-
-} // namespace avro
-
-#endif
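[Editor's note: the ScreenStreamer deleted above prints each byte as "0x" plus its hex digits for debugging. The same formatting as a standalone helper, returning a string instead of writing to stdout; `hexDump` is a hypothetical name introduced here, not part of the Avro API.]

```cpp
#include <cstddef>
#include <sstream>
#include <stdint.h>
#include <string>

// Format a byte buffer the way ScreenStreamer prints it:
// "0x" + lowercase hex digits for each byte, space-separated.
std::string hexDump(const void *bytes, size_t size) {
    const uint8_t *p = static_cast<const uint8_t *>(bytes);
    std::ostringstream os;
    for (size_t i = 0; i < size; ++i) {
        os << "0x" << std::hex << static_cast<int>(p[i]) << " ";
    }
    return os.str();
}
```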
diff --git a/branch-1.3/lang/c++/api/Parser.hh b/branch-1.3/lang/c++/api/Parser.hh
deleted file mode 100644
index 41f1255..0000000
--- a/branch-1.3/lang/c++/api/Parser.hh
+++ /dev/null
@@ -1,145 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Parser_hh__
-#define avro_Parser_hh__
-
-#include "Reader.hh"
-#include "ValidatingReader.hh"
-
-namespace avro {
-
-///
-/// Class that wraps a Reader or ValidatingReader with an interface that uses
-/// explicit read* names instead of readValue.
-///
-
-template<class Reader>
-class Parser : private boost::noncopyable
-{
-
-  public:
-
-    /// Constructor only works with Reader
-    explicit Parser(InputStreamer &in) :
-        reader_(in)
-    {}
-
-    /// Constructor only works with ValidatingReader
-    Parser(const ValidSchema &schema, InputStreamer &in) :
-        reader_(schema, in)
-    {}
-
-    void readNull() {
-        Null null;
-        reader_.readValue(null);
-    }
-
-    bool readBool() {
-        bool val;
-        reader_.readValue(val);
-        return val;
-    }
-
-    int32_t readInt() {
-        int32_t val;
-        reader_.readValue(val);
-        return val;
-    }
-
-    int64_t readLong() {
-        int64_t val;
-        reader_.readValue(val);
-        return val;
-    }
-
-    float readFloat() {
-        float val;
-        reader_.readValue(val);
-        return val;
-    }
-
-    double readDouble() {
-        double val;
-        reader_.readValue(val);
-        return val;
-    }
-
-    void readString(std::string &val) {
-        reader_.readValue(val);
-    }
-
-    void readBytes(std::vector<uint8_t> &val) {
-        reader_.readBytes(val);
-    }
-
-    template <size_t N>
-    void readFixed(uint8_t (&val)[N]) {
-        reader_.readFixed(val);
-    }
-
-    template<size_t N>
-    void readFixed(boost::array<uint8_t, N> &val) {
-        reader_.readFixed(val);
-    }
-
-    void readRecord() { 
-        reader_.readRecord();
-    }
-
-    int64_t readArrayBlockSize() {
-        return reader_.readArrayBlockSize();
-    }
-
-    int64_t readUnion() { 
-        return reader_.readUnion();
-    }
-
-    int64_t readEnum() {
-        return reader_.readEnum();
-    }
-
-    int64_t readMapBlockSize() {
-        return reader_.readMapBlockSize();
-    }
-
-  private:
-
-    friend Type nextType(Parser<ValidatingReader> &p);
-    friend bool currentRecordName(Parser<ValidatingReader> &p, std::string &name);
-    friend bool nextFieldName(Parser<ValidatingReader> &p, std::string &name);
-
-    Reader reader_;
-
-};
-
-inline Type nextType(Parser<ValidatingReader> &p) {
-    return p.reader_.nextType();
-}
-
-inline bool currentRecordName(Parser<ValidatingReader> &p, std::string &name) {
-    return p.reader_.currentRecordName(name);
-}
-
-inline bool nextFieldName(Parser<ValidatingReader> &p, std::string &name) {
-    return p.reader_.nextFieldName(name);
-}
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Reader.hh b/branch-1.3/lang/c++/api/Reader.hh
deleted file mode 100644
index cdf14aa..0000000
--- a/branch-1.3/lang/c++/api/Reader.hh
+++ /dev/null
@@ -1,167 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Reader_hh__
-#define avro_Reader_hh__
-
-#include <stdint.h>
-#include <vector>
-#include <boost/noncopyable.hpp>
-
-#include "InputStreamer.hh"
-#include "Zigzag.hh"
-#include "Types.hh"
-
-namespace avro {
-
-///
-/// Parses from an avro encoding to the requested type.  Assumes the next item
-/// in the avro binary data is the expected type.
-///
-
-class Reader : private boost::noncopyable
-{
-
-  public:
-
-    explicit Reader(InputStreamer &in) :
-        in_(in)
-    {}
-
-    void readValue(Null &) {}
-
-    void readValue(bool &val) {
-        uint8_t ival;
-        in_.readByte(ival);
-        val = (ival != 0);
-    }
-
-    void readValue(int32_t &val) {
-        uint32_t encoded = readVarInt();
-        val = decodeZigzag32(encoded);
-    }
-
-    void readValue(int64_t &val) {
-        uint64_t encoded = readVarInt();
-        val = decodeZigzag64(encoded);
-    }
-
-    void readValue(float &val) {
-        union { 
-            float f;
-            uint32_t i;
-        } v;
-        in_.readWord(v.i);
-        val = v.f;
-    }
-
-    void readValue(double &val) {
-        union { 
-            double d;
-            uint64_t i;
-        } v;
-        in_.readLongWord(v.i);
-        val = v.d;
-    }
-
-    void readValue(std::string &val) {
-        int64_t size = readSize();
-        val.clear();
-        val.reserve(size);
-        uint8_t bval;
-        for(size_t bytes = 0; bytes < static_cast<size_t>(size); bytes++) {
-            in_.readByte(bval);
-            val.push_back(bval);
-        }
-    }
-
-    void readBytes(std::vector<uint8_t> &val) {
-        int64_t size = readSize();
-        val.clear();
-        val.reserve(size);
-        uint8_t bval;
-        for(size_t bytes = 0; bytes < static_cast<size_t>(size); bytes++) {
-            in_.readByte(bval);
-            val.push_back(bval);
-        }
-    }
-
-    void readFixed(uint8_t *val, size_t size) {
-        for(size_t bytes = 0; bytes < size; bytes++) {
-            in_.readByte(val[bytes]);
-        }
-    }
-
-    template <size_t N>
-    void readFixed(uint8_t (&val)[N]) {
-        readFixed(val, N);
-    }
-  
-    template <size_t N>
-    void readFixed(boost::array<uint8_t, N> &val) {
-        readFixed(val.c_array(), N);
-    }
-  
-    void readRecord() { }
-
-    int64_t readArrayBlockSize() {
-        return readSize();
-    }
-
-    int64_t readUnion() { 
-        return readSize();
-    }
-
-    int64_t readEnum() {
-        return readSize();
-    }
-
-    int64_t readMapBlockSize() {
-        return readSize();
-    }
-
-  private:
-
-    int64_t readSize() {
-        int64_t size(0);
-        readValue(size);
-        return size;
-    }
-
-    uint64_t readVarInt() {
-        uint64_t encoded = 0;
-        uint8_t val = 0;
-        int shift = 0;
-        do {
-            in_.readByte(val);
-            uint64_t newbits = static_cast<uint64_t>(val & 0x7f) << shift;
-            encoded |= newbits;
-            shift += 7;
-        } while (val & 0x80);
-
-        return encoded;
-    }
-
-    InputStreamer &in_;
-
-};
-
-
-} // namespace avro
-
-#endif
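[Editor's note: Reader::readVarInt above, together with the zigzag decode it feeds, implements Avro's standard variable-length integer encoding. A self-contained sketch of the decode math follows; the real `decodeZigzag64` lives in Zigzag.hh, and `decodeVarInt` here reads from a vector rather than an InputStreamer for illustration.]

```cpp
#include <cstddef>
#include <stdint.h>
#include <vector>

// Decode a base-128 varint: 7 payload bits per byte, least-significant
// group first; the high bit of each byte marks continuation.
uint64_t decodeVarInt(const std::vector<uint8_t> &bytes) {
    uint64_t encoded = 0;
    int shift = 0;
    for (size_t i = 0; i < bytes.size(); ++i) {
        encoded |= static_cast<uint64_t>(bytes[i] & 0x7f) << shift;
        shift += 7;
        if (!(bytes[i] & 0x80)) {
            break;  // continuation bit clear: this was the last byte
        }
    }
    return encoded;
}

// Zigzag maps signed to unsigned so small magnitudes encode small:
// 0->0, -1->1, 1->2, -2->3, ...  Decoding inverts that mapping.
int64_t decodeZigzag64(uint64_t v) {
    return static_cast<int64_t>(v >> 1) ^ -static_cast<int64_t>(v & 1);
}
```

For example, the two-byte sequence 0x96 0x01 carries payload groups 0x16 and 0x01, giving 22 + (1 << 7) = 150; zigzag then maps even results to non-negative values and odd results to negative ones.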
diff --git a/branch-1.3/lang/c++/api/Resolver.hh b/branch-1.3/lang/c++/api/Resolver.hh
deleted file mode 100644
index e665338..0000000
--- a/branch-1.3/lang/c++/api/Resolver.hh
+++ /dev/null
@@ -1,55 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Resolver_hh__
-#define avro_Resolver_hh__
-
-#include <boost/noncopyable.hpp>
-#include <stdint.h>
-
-/// \file Resolver.hh
-///
-
-namespace avro {
-
-class Reader;
-class ValidSchema;
-class Layout;
-    
-class Resolver : private boost::noncopyable
-{
-
-  public:
-
-    virtual void parse(Reader &reader, uint8_t *address) const = 0;
-    virtual ~Resolver() {}
-
-};
-
-Resolver *constructResolver(
-        const ValidSchema &rwriterSchema,
-        const ValidSchema &readerSchema,
-        const Layout &readerLayout
-    );
-
-
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/ResolverSchema.hh b/branch-1.3/lang/c++/api/ResolverSchema.hh
deleted file mode 100644
index 3bb5a19..0000000
--- a/branch-1.3/lang/c++/api/ResolverSchema.hh
+++ /dev/null
@@ -1,56 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_ResolverSchema_hh__
-#define avro_ResolverSchema_hh__
-
-#include <boost/noncopyable.hpp>
-#include <boost/shared_ptr.hpp>
-#include <stdint.h>
-#include "Boost.hh"
-
-/// \file ResolverSchema.hh
-///
-
-namespace avro {
-    
-class ValidSchema;
-class Reader;
-class Layout;
-class Resolver;
-
-class ResolverSchema {
-
-  public:
-
-    ResolverSchema(const ValidSchema &writer, const ValidSchema &reader, const Layout &readerLayout);
-
-  private:
-
-    friend class ResolvingReader;
-
-    void parse(Reader &reader, uint8_t *address); 
-
-    boost::shared_ptr<Resolver> resolver_;
-
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/ResolvingReader.hh b/branch-1.3/lang/c++/api/ResolvingReader.hh
deleted file mode 100644
index 3a873eb..0000000
--- a/branch-1.3/lang/c++/api/ResolvingReader.hh
+++ /dev/null
@@ -1,56 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_ResolvingReader_hh__
-#define avro_ResolvingReader_hh__
-
-#include <stdint.h>
-#include <boost/noncopyable.hpp>
-
-#include "ResolverSchema.hh"
-#include "Reader.hh"
-
-namespace avro {
-
-class InputStreamer;
-
-class ResolvingReader : private boost::noncopyable
-{
-
-  public:
-
-    ResolvingReader(const ResolverSchema &schema, InputStreamer &in) :
-        reader_(in),
-        schema_(schema)
-    {}
-
-    template<typename T>
-    void parse(T &object) {
-        schema_.parse(reader_, reinterpret_cast<uint8_t *>(&object));
-    }
-
-  private:
-
-    Reader reader_;
-    ResolverSchema schema_;
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Schema.hh b/branch-1.3/lang/c++/api/Schema.hh
deleted file mode 100644
index d226c3c..0000000
--- a/branch-1.3/lang/c++/api/Schema.hh
+++ /dev/null
@@ -1,162 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Schema_hh__ 
-#define avro_Schema_hh__ 
-
-#include "NodeImpl.hh"
-
-/// \file
-///
-/// Schemas for representing all the avro types.  The compound schema objects
-/// allow composition from other schemas.
-///
-
-namespace avro {
-
-class ValidSchema;
-
-
-/// The root Schema object is a base class.  Nobody constructs this class directly.
-
-class Schema 
-{
-  public:
-
-    virtual ~Schema();
-
-    Type type() const {
-        return node_->type();
-    }
-
-    const NodePtr &root() const {
-        return node_;
-    }
-
-    NodePtr &root() {
-        return node_;
-    }
-
-  protected:
-
-    friend void compileJsonSchema(std::istream &is, ValidSchema &schema);
-
-    Schema();
-    explicit Schema(const NodePtr &node);
-    explicit Schema(Node *node);
-
-    NodePtr node_;
-};
-
-class NullSchema : public Schema
-{
-  public:
-    NullSchema(): Schema(new NodePrimitive(AVRO_NULL)) {}
-};
-
-class BoolSchema : public Schema
-{
-  public:
-    BoolSchema(): Schema(new NodePrimitive(AVRO_BOOL)) {}
-};
-
-class IntSchema : public Schema
-{
-  public:
-    IntSchema(): Schema(new NodePrimitive(AVRO_INT)) {}
-};
-
-class LongSchema : public Schema
-{
-  public:
-    LongSchema(): Schema(new NodePrimitive(AVRO_LONG)) {}
-};
-
-class FloatSchema : public Schema
-{
-  public:
-    FloatSchema(): Schema(new NodePrimitive(AVRO_FLOAT)) {}
-};
-
-class DoubleSchema : public Schema
-{
-  public:
-    DoubleSchema(): Schema(new NodePrimitive(AVRO_DOUBLE)) {}
-};
-
-class StringSchema : public Schema
-{
-  public:
-    StringSchema(): Schema(new NodePrimitive(AVRO_STRING)) {}
-};
-
-class BytesSchema : public Schema
-{
-  public:
-    BytesSchema(): Schema(new NodePrimitive(AVRO_BYTES)) {}
-};
-
-class RecordSchema : public Schema
-{
-  public:
-
-    RecordSchema(const std::string &name);
-    void addField(const std::string &name, const Schema &fieldSchema);
-};
-
-class EnumSchema : public Schema
-{
-  public:
-
-    EnumSchema(const std::string &name);
-    void addSymbol(const std::string &symbol);
-};
-
-class ArraySchema : public Schema
-{
-  public:
-
-    ArraySchema(const Schema &itemsSchema);
-};
-
-class MapSchema : public Schema
-{
-  public:
-
-    MapSchema(const Schema &valuesSchema);
-};
-
-
-class UnionSchema : public Schema
-{
-  public:
-
-    UnionSchema();
-    void addType(const Schema &typeSchema);
-};
-
-class FixedSchema : public Schema
-{
-  public:
-
-    FixedSchema(int size, const std::string &name);
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/SchemaResolution.hh b/branch-1.3/lang/c++/api/SchemaResolution.hh
deleted file mode 100644
index 781e757..0000000
--- a/branch-1.3/lang/c++/api/SchemaResolution.hh
+++ /dev/null
@@ -1,53 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_SchemaResolution_hh__
-#define avro_SchemaResolution_hh__
-
-namespace avro {
-
-
-enum SchemaResolution {
-
-    /// The schemas definitely do not match
-    
-    RESOLVE_NO_MATCH, 
-
-    /// The schemas match at a cursory level
-    ///
-    /// For records and enums, this means the name is the same, but it does not
-    /// necessarily mean that every symbol or field is an exact match.
-    
-    RESOLVE_MATCH,    
-
-    /// For primitives, a match may occur if the type is promotable.  This means
-    /// the writer matches the reader if the writer's type can be promoted to the specified type.
-    
-    //@{
-    
-    RESOLVE_PROMOTABLE_TO_LONG,
-    RESOLVE_PROMOTABLE_TO_FLOAT,
-    RESOLVE_PROMOTABLE_TO_DOUBLE,
-
-    //@}
-
-};
-
-} // namespace avro 
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Serializer.hh b/branch-1.3/lang/c++/api/Serializer.hh
deleted file mode 100644
index b8a3beb..0000000
--- a/branch-1.3/lang/c++/api/Serializer.hh
+++ /dev/null
@@ -1,126 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Serializer_hh__
-#define avro_Serializer_hh__
-
-#include <boost/noncopyable.hpp>
-
-#include "Writer.hh"
-#include "ValidatingWriter.hh"
-
-namespace avro {
-
-/// Class that wraps a Writer or ValidatingWriter with an interface that uses
-/// explicit write* names instead of writeValue
-
-template<class Writer>
-class Serializer : private boost::noncopyable
-{
-
-  public:
-
-    /// Constructor only works with Writer
-    explicit Serializer(OutputStreamer &out) :
-        writer_(out)
-    {}
-
-    /// Constructor only works with ValidatingWriter
-    Serializer(const ValidSchema &schema, OutputStreamer &out) :
-        writer_(schema, out)
-    {}
-
-    void writeNull() {
-        writer_.writeValue(Null());
-    }
-
-    void writeBool(bool val) {
-        writer_.writeValue(val);
-    }
-
-    void writeInt(int32_t val) {
-        writer_.writeValue(val);
-    }
-
-    void writeLong(int64_t val) {
-        writer_.writeValue(val);
-    }
-
-    void writeFloat(float val) {
-        writer_.writeValue(val);
-    }
-
-    void writeDouble(double val) {
-        writer_.writeValue(val);
-    }
-
-    void writeBytes(const void *val, size_t size) {
-        writer_.writeBytes(val, size);
-    }
-
-    template <size_t N>
-    void writeFixed(const uint8_t (&val)[N]) {
-        writer_.writeFixed(val);
-    }
-
-    template <size_t N>
-    void writeFixed(const boost::array<uint8_t, N> &val) {
-        writer_.writeFixed(val);
-    }
-
-    void writeString(const std::string &val) {
-        writer_.writeValue(val);
-    }
-
-    void writeRecord() {
-        writer_.writeRecord();
-    }
-
-    void writeArrayBlock(int64_t size) {
-        writer_.writeArrayBlock(size);
-    }
-
-    void writeArrayEnd() {
-        writer_.writeArrayEnd();
-    }
-
-    void writeMapBlock(int64_t size) {
-        writer_.writeMapBlock(size);
-    }
-
-    void writeMapEnd() {
-        writer_.writeMapEnd();
-    }
-
-    void writeUnion(int64_t choice) {
-        writer_.writeUnion(choice);
-    }
-
-    void writeEnum(int64_t choice) {
-        writer_.writeEnum(choice);
-    }
-
-  private:
-
-    Writer writer_;
-
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/SymbolMap.hh b/branch-1.3/lang/c++/api/SymbolMap.hh
deleted file mode 100644
index fa8bc3a..0000000
--- a/branch-1.3/lang/c++/api/SymbolMap.hh
+++ /dev/null
@@ -1,82 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_SymbolMap_hh__
-#define avro_SymbolMap_hh__
-
-#include <map>
-#include <boost/noncopyable.hpp>
-
-#include "Node.hh"
-#include "Schema.hh"
-
-namespace avro {
-
-/// Avro schemas can refer to types that were previously defined by name in
-/// the same avro schema.  To resolve such references, named types are stored
-/// in a map so that the actual type may be looked up by name.  This class
-/// implements the symbolic-name-to-node mapping.
-///
-
-class SymbolMap : private boost::noncopyable
-{
-
-  public:
-
-    SymbolMap()
-    {}
-
-    bool registerSymbol(const NodePtr &node) {
-
-        if(node->type() == AVRO_SYMBOLIC) {
-            throw Exception("Node must not be a symbolic name");
-        }
-        const std::string name = node->name();
-        if(name.empty()) {
-            throw Exception("Node must have a name to be registered");
-        }
-        bool added = false;
-        MapImpl::iterator lb = map_.lower_bound(name);
-
-        if(lb == map_.end() || map_.key_comp()(name, lb->first)) {
-            map_.insert(lb, std::make_pair(name, node));
-            added = true; 
-        }
-        return added;
-    }
-
-    bool hasSymbol(const std::string &name) const {
-        return map_.find(name) != map_.end();
-    }
-
-    NodePtr locateSymbol(const std::string &name) const {
-        MapImpl::const_iterator iter = map_.find(name);
-        return (iter == map_.end()) ? NodePtr() : iter->second;
-    }
-
-  private:
-
-    typedef std::map<std::string, NodePtr> MapImpl;
-
-    MapImpl map_;
-};
-
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Types.hh b/branch-1.3/lang/c++/api/Types.hh
deleted file mode 100644
index 07da341..0000000
--- a/branch-1.3/lang/c++/api/Types.hh
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Types_hh__
-#define avro_Types_hh__
-
-#include <iostream>
-
-namespace avro {
-
-enum Type {
-
-    AVRO_STRING,
-    AVRO_BYTES,
-    AVRO_INT,
-    AVRO_LONG,
-    AVRO_FLOAT,
-    AVRO_DOUBLE,
-    AVRO_BOOL,
-    AVRO_NULL,
-
-    AVRO_RECORD,
-    AVRO_ENUM,
-    AVRO_ARRAY,
-    AVRO_MAP,
-    AVRO_UNION,
-    AVRO_FIXED,
-
-    AVRO_NUM_TYPES, // marker
-    
-    // The following is a pseudo-type used in implementation
-    
-    AVRO_SYMBOLIC = AVRO_NUM_TYPES
-
-};
-
-inline bool isPrimitive(Type t) {
-    return (t >= AVRO_STRING) && (t < AVRO_RECORD);
-}
-
-inline bool isCompound(Type t) {
-    return (t >= AVRO_RECORD) && (t < AVRO_NUM_TYPES);
-}
-
-inline bool isAvroType(Type t) {
-    return (t >= AVRO_STRING) && (t < AVRO_NUM_TYPES);
-}
-
-inline bool isAvroTypeOrPseudoType(Type t) {
-    return (t >= AVRO_STRING) && (t <= AVRO_NUM_TYPES);
-}
-
-
-std::ostream &operator<< (std::ostream &os, avro::Type type);
-
-/// define a type to identify Null in template functions
-struct Null { };
-
-std::ostream& operator<< (std::ostream &os, const Null &null);
-
-} // namespace avro
-
-
-#endif
diff --git a/branch-1.3/lang/c++/api/ValidSchema.hh b/branch-1.3/lang/c++/api/ValidSchema.hh
deleted file mode 100644
index 909c4e1..0000000
--- a/branch-1.3/lang/c++/api/ValidSchema.hh
+++ /dev/null
@@ -1,68 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_ValidSchema_hh__ 
-#define avro_ValidSchema_hh__ 
-
-#include <boost/noncopyable.hpp>
-
-#include "Node.hh"
-
-namespace avro {
-
-class Schema;
-class SymbolMap;
-
-/// A ValidSchema is essentially an immutable Schema that has passed some
-/// minimum of sanity checks.  Once validated, any Schema that is part of
-/// this ValidSchema is considered locked, and cannot be modified (an attempt
-/// to modify a locked Schema will throw).  Also, as it is validated, any
-/// recursive duplications of schemas are replaced with symbolic links to the
-/// original.
-///
-/// Once a Schema is converted to a valid schema it can be used in validating
-/// parsers/serializers, converted to a json schema, etc.
-///
-
-class ValidSchema : private boost::noncopyable
-{
-  public:
-
-    explicit ValidSchema(const Schema &schema);
-    ValidSchema();
-
-    void setSchema(const Schema &schema);
-
-    const NodePtr &root() const {
-        return root_;
-    }
-
-    void toJson(std::ostream &os) const;
-
-    void toFlatList(std::ostream &os) const;
-
-  protected:
-
-    bool validate(const NodePtr &node, SymbolMap &symbolMap);
-
-    NodePtr root_;
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/ValidatingReader.hh b/branch-1.3/lang/c++/api/ValidatingReader.hh
deleted file mode 100644
index e301200..0000000
--- a/branch-1.3/lang/c++/api/ValidatingReader.hh
+++ /dev/null
@@ -1,126 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_ValidatingReader_hh__
-#define avro_ValidatingReader_hh__
-
-#include <stdint.h>
-#include <vector>
-#include <boost/noncopyable.hpp>
-
-#include "Reader.hh"
-#include "Validator.hh"
-#include "AvroTraits.hh"
-
-namespace avro {
-
-class ValidSchema;
-class InputStreamer;
-
-/// As an avro object is being parsed from binary data to its C++
-/// representation, this parser will walk the parse tree and ensure that the
-/// correct type is being asked for.  If the user attempts to parse a type that
-/// does not match what the schema says, an exception will be thrown.  
-///
-/// The ValidatingReader object can also report what the next type is, so that
-/// callers can dynamically discover the contents.  It also reports the names
-/// of records and their fields, where they exist.
-///
-
-class ValidatingReader : private boost::noncopyable
-{
-
-  public:
-
-    ValidatingReader(const ValidSchema &schema, InputStreamer &in);
-
-    template<typename T>
-    void readValue(T &val) {
-        checkSafeToGet(type_to_avro<T>::type);
-        reader_.readValue(val);
-        validator_.advance();
-    }
-
-    void readBytes(std::vector<uint8_t> &val) {
-        checkSafeToGet(AVRO_BYTES);
-        validator_.advance();
-        reader_.readBytes(val);
-    }
-
-    void readFixed(uint8_t *val, size_t size) {
-        checkSafeToGet(AVRO_FIXED);
-        checkSizeExpected(size);
-        validator_.advance();
-        reader_.readFixed(val, size);
-    }
-
-    template <size_t N>
-    void readFixed(uint8_t (&val)[N]) {
-        readFixed(val, N);
-    }
-
-    template<size_t N>
-    void readFixed(boost::array<uint8_t, N> &val) {
-        readFixed(val.c_array(), N);
-    }
-
-    void readRecord();
-
-    int64_t readArrayBlockSize();
-
-    int64_t readUnion();
-
-    int64_t readEnum();
-
-    int64_t readMapBlockSize();
-
-    Type nextType() const{
-        return validator_.nextTypeExpected();
-    }
-
-    bool currentRecordName(std::string &name) const {
-        return validator_.getCurrentRecordName(name);
-    }
-
-    bool nextFieldName(std::string &name) const {
-        return validator_.getNextFieldName(name);
-    }
-
-  private:
-
-    int64_t readCount();
-
-    void checkSafeToGet(Type type) const {
-        if(validator_.nextTypeExpected() != type) {
-            throw Exception("Type does not match");
-        }
-    }
-
-    void checkSizeExpected(int size) const {
-        if(validator_.nextSizeExpected() != size) {
-            throw Exception("Wrong size for fixed");
-        }
-    }
-
-    Validator validator_;
-    Reader reader_;
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/ValidatingWriter.hh b/branch-1.3/lang/c++/api/ValidatingWriter.hh
deleted file mode 100644
index 585b91d..0000000
--- a/branch-1.3/lang/c++/api/ValidatingWriter.hh
+++ /dev/null
@@ -1,110 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_ValidatingWriter_hh__
-#define avro_ValidatingWriter_hh__
-
-#include <boost/noncopyable.hpp>
-
-#include "Writer.hh"
-#include "Validator.hh"
-#include "AvroTraits.hh"
-
-namespace avro {
-
-class ValidSchema;
-class OutputStreamer;
-
-/// This class walks the parse tree as data is being serialized, and throws if
-/// an attempt to serialize a value does not match the type expected in the
-/// schema.
-
-class ValidatingWriter : private boost::noncopyable
-{
-
-  public:
-
-    ValidatingWriter(const ValidSchema &schema, OutputStreamer &out);
-
-    template<typename T>
-    void writeValue(T val) {
-        checkSafeToPut(type_to_avro<T>::type);
-        writer_.writeValue(val);
-        validator_.advance();
-    }
-
-    void writeValue(const std::string &val) {
-        checkSafeToPut(type_to_avro<std::string>::type);
-        writer_.writeValue(val);
-        validator_.advance();
-    }
-
-    void writeBytes(const void *val, size_t size);
-
-    template <size_t N>
-    void writeFixed(const uint8_t (&val)[N]) {
-        checkSafeToPut(AVRO_FIXED);
-        checkSizeExpected(N);
-        writer_.writeFixed(val);
-        validator_.advance();
-    }
-
-    template <size_t N>
-    void writeFixed(const boost::array<uint8_t, N> &val) {
-        checkSafeToPut(AVRO_FIXED);
-        checkSizeExpected(val.size());
-        writer_.writeFixed(val);
-        validator_.advance();
-    }
-
-    void writeRecord();
-
-    void writeArrayBlock(int64_t size);
-    void writeArrayEnd();
-
-    void writeMapBlock(int64_t size);
-    void writeMapEnd();
-
-    void writeUnion(int64_t choice);
-
-    void writeEnum(int64_t choice);
-
-  private:
-
-    void writeCount(int64_t count);
-
-    void checkSafeToPut(Type type) const {
-        if(! validator_.typeIsExpected(type)) {
-            throw Exception("Type does not match schema");
-        }
-    }
-
-    void checkSizeExpected(int size) const {
-        if(validator_.nextSizeExpected() != size) {
-            throw Exception("Wrong size for fixed");
-        }
-    }
-
-    Validator validator_;
-    Writer writer_;
-
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Validator.hh b/branch-1.3/lang/c++/api/Validator.hh
deleted file mode 100644
index e7749e0..0000000
--- a/branch-1.3/lang/c++/api/Validator.hh
+++ /dev/null
@@ -1,112 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Validating_hh__
-#define avro_Validating_hh__
-
-#include <boost/noncopyable.hpp>
-#include <vector>
-#include <stdint.h>
-
-#include "Types.hh"
-#include "Node.hh"
-
-namespace avro {
-
-class ValidSchema;
-class OutputStreamer;
-
-/// This class is used by both the ValidatingWriter and ValidatingReader
-/// objects.  It advances the parse tree (which contains the logic for how to
-/// advance through the various compound types: for example, a record must
-/// advance through all of its leaf nodes, but a union skips to just one), and
-/// reports which type is expected next.
-
-class Validator : private boost::noncopyable
-{
-    typedef uint32_t flag_t;
-
-  public:
-
-    explicit Validator(const ValidSchema &schema);
-
-    void advance();
-    void advanceWithCount(int64_t val);
-
-    bool typeIsExpected(Type type) const {
-        return (expectedTypesFlag_ & typeToFlag(type));
-    }
-
-    Type nextTypeExpected() const {
-        return nextType_;
-    }
-
-    int nextSizeExpected() const;
-
-    bool getCurrentRecordName(std::string &name) const;
-    bool getNextFieldName(std::string &name) const;
-
-  private:
-
-    flag_t typeToFlag(Type type) const {
-        flag_t flag = (1L << type);
-        return flag;
-    }
-
-    void setupOperation(const NodePtr &node);
-
-    void setWaitingForCount();
-
-    void recordAdvance();
-    void enumAdvance();
-    void countingAdvance();
-    void unionAdvance();
-    void fixedAdvance();
-
-    void setupFlag(Type type);
-
-    const ValidSchema &schema_;
-
-    // This object only keeps a reference to the schema, so to ensure the
-    // schema's parse tree is not deleted, also keep a shared pointer to the
-    // root of the tree.
-
-    const NodePtr parseTree_;
-
-    Type nextType_; 
-    flag_t expectedTypesFlag_;
-    bool compoundStarted_;
-    bool waitingForCount_;
-    int64_t count_;
-
-    struct CompoundType {
-        explicit CompoundType(const NodePtr &n) :
-            node(n), pos(0)
-        {}
-        NodePtr node;  ///< save the node
-        size_t  pos; ///< track the leaf position to visit
-    };
-
-    std::vector<CompoundType> compoundStack_;
-    std::vector<size_t> counters_;
-
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Writer.hh b/branch-1.3/lang/c++/api/Writer.hh
deleted file mode 100644
index 29d97eb..0000000
--- a/branch-1.3/lang/c++/api/Writer.hh
+++ /dev/null
@@ -1,133 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Writer_hh__
-#define avro_Writer_hh__
-
-#include <boost/noncopyable.hpp>
-
-#include "OutputStreamer.hh"
-#include "Zigzag.hh"
-#include "Types.hh"
-
-namespace avro {
-
-/// Class for writing avro data to a stream.
-
-class Writer : private boost::noncopyable
-{
-
-  public:
-
-    explicit Writer(OutputStreamer &out) :
-        out_(out)
-    {}
-
-    void writeValue(const Null &) {}
-
-    void writeValue(bool val) {
-        int8_t byte = (val != 0);
-        out_.writeByte(byte);
-    }
-
-    void writeValue(int32_t val) {
-        boost::array<uint8_t, 5> bytes;
-        size_t size = encodeInt32(val, bytes);
-        out_.writeBytes(bytes.data(), size);
-    }
-
-    void writeValue(int64_t val) {
-        boost::array<uint8_t, 10> bytes;
-        size_t size = encodeInt64(val, bytes);
-        out_.writeBytes(bytes.data(), size);
-    }
-
-    void writeValue(float val) {
-        union {
-            float f;
-            int32_t i;
-        } v;
-    
-        v.f = val;
-        out_.writeWord(v.i);
-    }
-
-    void writeValue(double val) {
-        union {
-            double d;
-            int64_t i;
-        } v;
-        
-        v.d = val;
-        out_.writeLongWord(v.i);
-    }
-
-    void writeValue(const std::string &val) {
-        writeBytes(val.c_str(), val.size());
-    }
-
-    void writeBytes(const void *val, size_t size) {
-        this->writeValue(static_cast<int64_t>(size));
-        out_.writeBytes(val, size);
-    }
-
-    template <size_t N>
-    void writeFixed(const uint8_t (&val)[N]) {
-        out_.writeBytes(val, N);
-    }
-
-    template <size_t N>
-    void writeFixed(const boost::array<uint8_t, N> &val) {
-        out_.writeBytes(val.data(), val.size());
-    }
-
-    void writeRecord() {}
-
-    void writeArrayBlock(int64_t size) {
-        this->writeValue(static_cast<int64_t>(size));
-    }
-
-    void writeArrayEnd() {
-        out_.writeByte(0);
-    }
-
-    void writeMapBlock(int64_t size) {
-        this->writeValue(static_cast<int64_t>(size));
-    }
-
-    void writeMapEnd() {
-        out_.writeByte(0);
-    }
-
-    void writeUnion(int64_t choice) {
-        this->writeValue(static_cast<int64_t>(choice));
-    }
-
-    void writeEnum(int64_t choice) {
-        this->writeValue(static_cast<int64_t>(choice));
-    }
-
-  private:
-
-    OutputStreamer &out_;
-
-};
-
-} // namespace avro
-
-#endif
diff --git a/branch-1.3/lang/c++/api/Zigzag.hh b/branch-1.3/lang/c++/api/Zigzag.hh
deleted file mode 100644
index b6f10a9..0000000
--- a/branch-1.3/lang/c++/api/Zigzag.hh
+++ /dev/null
@@ -1,41 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef avro_Encoding_hh__
-#define avro_Encoding_hh__
-
-#include <stdint.h>
-#include <boost/array.hpp>
-
-/// \file
-/// Functions for encoding and decoding integers with zigzag compression
-
-namespace avro {
-
-uint64_t encodeZigzag64(int64_t input);
-int64_t decodeZigzag64(uint64_t input);
-
-uint32_t encodeZigzag32(int32_t input);
-int32_t decodeZigzag32(uint32_t input);
-
-size_t encodeInt32(int32_t input, boost::array<uint8_t, 5> &output);
-size_t encodeInt64(int64_t input, boost::array<uint8_t, 10> &output);
-
-} // namespace avro
-
-#endif
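The deleted `Zigzag.hh` only declares the functions. For reference, the standard zigzag mapping they implement (as specified by the Avro binary encoding: 0, -1, 1, -2, 2 map to 0, 1, 2, 3, 4) can be sketched as:

```cpp
#include <cstdint>

// Zigzag interleaves negative and positive values so integers of small
// magnitude encode to small unsigned values, which a varint then keeps short.
// (The arithmetic right shift of a negative value is technically
// implementation-defined before C++20 but is arithmetic on all major compilers.)
uint32_t encodeZigzag32(int32_t input) {
    return (static_cast<uint32_t>(input) << 1) ^ static_cast<uint32_t>(input >> 31);
}

int32_t decodeZigzag32(uint32_t input) {
    return static_cast<int32_t>(input >> 1) ^ -static_cast<int32_t>(input & 1);
}

uint64_t encodeZigzag64(int64_t input) {
    return (static_cast<uint64_t>(input) << 1) ^ static_cast<uint64_t>(input >> 63);
}

int64_t decodeZigzag64(uint64_t input) {
    return static_cast<int64_t>(input >> 1) ^ -static_cast<int64_t>(input & 1);
}
```

These are sketches of the specified mapping, not the exact deleted implementation bodies, which this diff does not show.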
diff --git a/branch-1.3/lang/c++/build.sh b/branch-1.3/lang/c++/build.sh
deleted file mode 100755
index d70f172..0000000
--- a/branch-1.3/lang/c++/build.sh
+++ /dev/null
@@ -1,145 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -e # exit on error
-
-function usage {
-  echo "Usage: $0 {test|dist|clean}"
-  exit 1
-}
-
-if [ $# -eq 0 ]
-then
-  usage
-fi
-
-cd `dirname "$0"` # connect to root
-
-VERSION=`cat ../../share/VERSION.txt`
-
-root_dir=$(pwd)
-build_dir="../../build/avro-cpp-$VERSION"
-dist_dir="../../dist/cpp"
-doc_dir="../../build/avro-doc-$VERSION/api/cpp/html"
-
-tarfile=avro-cpp-$VERSION.tar.gz
-
-set -x # echo commands
-
-for target in "$@"
-do
-
-function do_autoreconf {
-    if [ ! -f configure ]; then
-        autoreconf -f -i
-    fi
-    if [ ! -f configure ]; then
-        exit 1
-    fi
-}
-
-function check_dir {
-    if [ ! -d $1 ]; then
-        mkdir -p $1
-    fi
-    if [ ! -d $1 ]; then
-        exit 1
-    fi
-}
-
-function do_configure {
-
-    do_autoreconf
-
-    check_dir $build_dir
-
-    if [ ! -f $build_dir/Makefile ]; then
-        (cd $build_dir && ../../lang/c++/configure)
-    fi
-
-    if [ ! -f $build_dir/Makefile ]; then
-        exit 1
-    fi
-}
-
-function do_build {
-    (cd $build_dir && make)
-    (cd $build_dir && make check)
-}
-
-function do_docs {
-    check_dir $doc_dir
-    (cd $build_dir && make doc)
-    if [ ! -f $build_dir/doc/html/index.html ]; then
-        exit 1
-    fi
-    cp -rf $build_dir/doc/html/* $doc_dir/
-}
-
-function do_tar_file {
-    check_dir $dist_dir
-    (cd $build_dir && make dist)
-    if [ ! -f $build_dir/$tarfile ]; then
-        exit 1
-    fi
-    cp -f $build_dir/$tarfile $dist_dir/$tarfile
-}
-
-function do_dist {
-    (cd $build_dir && make)
-    (cd $build_dir && make check)
-    do_docs
-    do_tar_file
-}
-
-function do_clean {
-
-    if [ -d $build_dir ]; then
-        rm -rf $build_dir
-    fi
-    if [ -d $doc_dir ]; then
-        rm -rf $doc_dir
-    fi
-    rm -rf $dist_dir/$tarfile
-    rm -rf $dist_dir/$tarfile.md5
-
-}
-
-case "$target" in
-
-
-    test)
-    do_configure
-    do_build
-	;;
-
-    dist)
-    do_configure
-    do_dist
-    ;;
-
-    clean)
-    do_clean 
-	;;
-
-    *)
-        usage
-esac
-
-done
-
-exit 0
diff --git a/branch-1.3/lang/c++/config/.gitignore b/branch-1.3/lang/c++/config/.gitignore
deleted file mode 100644
index 72e8ffc..0000000
--- a/branch-1.3/lang/c++/config/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-*
diff --git a/branch-1.3/lang/c++/configure.in b/branch-1.3/lang/c++/configure.in
deleted file mode 100644
index c34e665..0000000
--- a/branch-1.3/lang/c++/configure.in
+++ /dev/null
@@ -1,49 +0,0 @@
-#                                               -*- Autoconf -*-
-# Process this file with autoconf to produce a configure script.
-
-AC_PREREQ(2.61)
-AC_INIT([avro-cpp], m4_esyscmd([cat ../../share/VERSION.txt]), [avro-devel@apache.org])
-AC_CONFIG_AUX_DIR([config])
-AM_INIT_AUTOMAKE
-AC_CONFIG_SRCDIR([api/AvroParse.hh])
-AC_CONFIG_MACRO_DIR([m4])
-
-# Checks for programs.
-AC_PROG_CXX
-AC_PROG_LIBTOOL
-AC_PROG_LEX
-AC_PROG_YACC
-
-AC_CHECK_PROG(DOXYGEN, doxygen, doxygen)
-AC_CHECK_PROG(PYTHON, python, python)
-
-# Checks for libraries.
-AX_BOOST_BASE([1.32.0])
-AX_BOOST_REGEX
-
-# Checks for header files.
-AC_FUNC_ALLOCA
-AC_HEADER_STDC
-AC_CHECK_HEADERS([inttypes.h libintl.h malloc.h stddef.h stdint.h stdlib.h string.h unistd.h])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_HEADER_STDBOOL
-AC_C_CONST
-AC_C_INLINE
-AC_TYPE_INT16_T
-AC_TYPE_INT32_T
-AC_TYPE_INT64_T
-AC_TYPE_INT8_T
-AC_TYPE_SIZE_T
-AC_TYPE_UINT16_T
-AC_TYPE_UINT32_T
-AC_TYPE_UINT64_T
-AC_TYPE_UINT8_T
-
-# Checks for library functions.
-AC_FUNC_MALLOC
-AC_FUNC_REALLOC
-AC_CHECK_FUNCS([memset])
-
-AC_CONFIG_FILES([Makefile])
-AC_OUTPUT
diff --git a/branch-1.3/lang/c++/impl/Compiler.cc b/branch-1.3/lang/c++/impl/Compiler.cc
deleted file mode 100644
index f0be89a..0000000
--- a/branch-1.3/lang/c++/impl/Compiler.cc
+++ /dev/null
@@ -1,190 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "Compiler.hh"
-#include "InputStreamer.hh"
-#include "Types.hh"
-#include "Schema.hh"
-#include "ValidSchema.hh"
-
-extern void yyparse(void *ctx);
-
-namespace avro {
-
-// #define DEBUG_VERBOSE
-
-void
-compileJsonSchema(std::istream &is, ValidSchema &schema)
-{
-    if(!is.good()) {
-        throw Exception("Input stream is not good");
-    }
-
-    CompilerContext myctx(is);
-    yyparse(&myctx);
-
-    Schema s(myctx.getRoot());
-    schema.setSchema(s);
-}
-
-bool
-compileJsonSchema(std::istream &is, ValidSchema &schema, std::string &error)
-{
-    bool success = false;
-    if(!is.good()) {
-        error = "Input stream is not good";
-        return false;
-    }
-
-    try {
-        compileJsonSchema(is, schema);
-        success = true;
-    }
-    catch (Exception &e) {
-        error = e.what();
-    }
-
-    return success;
-}
-
-void 
-CompilerContext::add(const NodePtr &node)
-{
-    if(stack_.empty() ) {
-        root_ = node;
-    }
-    else {
-        stack_.back().addNode(node);
-    }   
-}
-
-void
-CompilerContext::startType()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Start type definition\n";
-#endif
-    stack_.push_back(new CompilerNode());
-}
-
-void
-CompilerContext::stopType()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Stop type " << stack_.back().type() << '\n';
-#endif
-
-    assert(!stack_.empty());
-    NodePtr nodePtr(nodeFromCompilerNode(stack_.back()));
-    stack_.pop_back();
-    add(nodePtr);
-}
-
-void 
-CompilerContext::addType(Type type)
-{    
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Setting type to " << type << '\n';
-#endif
-    stack_.back().setType(type);
-}
-
-void 
-CompilerContext::setSizeAttribute()
-{
-    int size = atol(text_.c_str()); 
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Setting size to " << size << '\n';
-#endif
-    stack_.back().sizeAttribute_.add(size);
-}
-
-void 
-CompilerContext::addNamedType()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Adding named type " << text_ << '\n';
-#endif
-    stack_.back().setType(AVRO_SYMBOLIC);
-    stack_.back().nameAttribute_.add(text_);
-}
-
-void 
-CompilerContext::setNameAttribute()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Setting name to " << text_ << '\n';
-#endif
-    stack_.back().nameAttribute_.add(text_);
-}
-
-void 
-CompilerContext::setSymbolsAttribute()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Adding enum symbol " << text_ << '\n';
-#endif
-    stack_.back().symbolsAttribute_.add(text_);
-}
-
-void 
-CompilerContext::setValuesAttribute()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Ready for map type\n";
-#endif
-    stack_.back().setAttributeType(CompilerNode::VALUES);
-}
-
-void 
-CompilerContext::setTypesAttribute()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Ready for union types\n";
-#endif
-    stack_.back().setAttributeType(CompilerNode::TYPES);
-}
-
-void 
-CompilerContext::setItemsAttribute()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Ready for array type\n";
-#endif
-    stack_.back().setAttributeType(CompilerNode::ITEMS);
-}
-
-void 
-CompilerContext::setFieldsAttribute()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Ready for record fields\n";
-#endif
-    stack_.back().setAttributeType(CompilerNode::FIELDS);
-}
-
-void 
-CompilerContext::textContainsFieldName()
-{
-#ifdef DEBUG_VERBOSE
-    std::cerr << "Setting field name to " << text_ << '\n';
-#endif
-    stack_.back().fieldsNamesAttribute_.add(text_);
-}
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/impl/CompilerNode.cc b/branch-1.3/lang/c++/impl/CompilerNode.cc
deleted file mode 100644
index 59390a9..0000000
--- a/branch-1.3/lang/c++/impl/CompilerNode.cc
+++ /dev/null
@@ -1,73 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#include "CompilerNode.hh"
-#include "NodeImpl.hh"
-
-namespace avro {
-
-NodePtr
-nodeFromCompilerNode(CompilerNode &node)
-{
-    NodePtr ptr;
-
-    switch(node.type()) {
-
-      case AVRO_ARRAY:
-        ptr.reset ( new NodeArray(node.itemsAttribute_));
-        break;
-    
-      case AVRO_ENUM:
-        ptr.reset ( new NodeEnum(node.nameAttribute_, node.symbolsAttribute_));
-        break;
-
-      case AVRO_FIXED:
-        ptr.reset ( new NodeFixed(node.nameAttribute_, node.sizeAttribute_));
-        break;
-    
-      case AVRO_MAP:
-        ptr.reset ( new NodeMap(node.valuesAttribute_));
-        break;
-
-      case AVRO_RECORD:
-        ptr.reset ( new NodeRecord(node.nameAttribute_, node.fieldsAttribute_, node.fieldsNamesAttribute_));
-        break;
-    
-      case AVRO_UNION:
-        ptr.reset ( new NodeUnion(node.typesAttribute_));
-        break;
-    
-      case AVRO_SYMBOLIC:
-        ptr.reset ( new NodeSymbolic(node.nameAttribute_));
-        break;
-
-      default:
-        if(isPrimitive(node.type())) {
-            ptr.reset ( new NodePrimitive(node.type()));        
-        }
-        else {
-            throw Exception("Unknown type in nodeFromCompilerNode");
-        }
-        break;
-    }
-
-    return ptr;
-}
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/impl/Node.cc b/branch-1.3/lang/c++/impl/Node.cc
deleted file mode 100644
index 7b754ec..0000000
--- a/branch-1.3/lang/c++/impl/Node.cc
+++ /dev/null
@@ -1,36 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <boost/regex.hpp>
-#include "Node.hh"
-
-namespace avro {
-
-Node::~Node()
-{ }
-
-void 
-Node::checkName(const std::string &name) const
-{
-    const boost::regex exp("[A-Za-z_][A-Za-z0-9_]*");
-    if(!name.empty() && !boost::regex_match(name, exp)) {
-        throw Exception("Names must match [A-Za-z_][A-Za-z0-9_]*");
-    }
-}
-
-} // namespace avro
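The deleted `Node::checkName` accepts an empty name (anonymous types) and otherwise requires `[A-Za-z_][A-Za-z0-9_]*`. A self-contained sketch of the same rule, using `std::regex` instead of the `boost::regex` dependency the original carried (the function name here is hypothetical):

```cpp
#include <regex>
#include <string>

// Same validation rule as Node::checkName above, expressed as a predicate:
// empty names pass (anonymous types); non-empty names must be a C-style
// identifier matching [A-Za-z_][A-Za-z0-9_]*.
bool isValidAvroName(const std::string &name) {
    static const std::regex exp("[A-Za-z_][A-Za-z0-9_]*");
    return name.empty() || std::regex_match(name, exp);
}
```

Note that `regex_match` anchors at both ends, so a name like `"bad name"` fails even though it contains a valid prefix; the original throws an `Exception` in that case rather than returning false.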
diff --git a/branch-1.3/lang/c++/impl/NodeImpl.cc b/branch-1.3/lang/c++/impl/NodeImpl.cc
deleted file mode 100644
index e9187da..0000000
--- a/branch-1.3/lang/c++/impl/NodeImpl.cc
+++ /dev/null
@@ -1,319 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#include "NodeImpl.hh"
-
-namespace avro {
-
-
-template < class A, class B, class C, class D >
-SchemaResolution 
-NodeImpl<A,B,C,D>::furtherResolution(const Node &reader) const
-{
-    SchemaResolution match = RESOLVE_NO_MATCH;
-
-    if(reader.type() == AVRO_SYMBOLIC) {
-    
-        // resolve the symbolic type, and check again
-        const NodePtr &node = reader.leafAt(0);
-        match = resolve(*node);
-    }
-    else if(reader.type() == AVRO_UNION) {
-
-        // in this case, need to see if there is an exact match for the
-        // writer's type, or if not, the first one that can be promoted to a
-        // match
-        
-        for(size_t i= 0; i < reader.leaves(); ++i)  {
-
-            const NodePtr &node = reader.leafAt(i);
-            SchemaResolution thisMatch = resolve(*node);
-
-            // if matched then the search is done
-            if(thisMatch == RESOLVE_MATCH) {
-                match = thisMatch;
-                break;
-            }
-
-            // thisMatch is either no match, or promotable, this will set match to 
-            // promotable if it hasn't been set already
-            if (match == RESOLVE_NO_MATCH) {
-                match = thisMatch;
-            }
-        }
-    }
-
-    return match;
-}
-
-SchemaResolution 
-NodePrimitive::resolve(const Node &reader) const
-{
-    if(type() == reader.type()) {
-        return RESOLVE_MATCH;
-    }
-
-    switch ( type() ) {
-
-      case AVRO_INT:
-
-        if( reader.type() == AVRO_LONG ) { 
-            return RESOLVE_PROMOTABLE_TO_LONG;
-        }   
-
-        // fall-through intentional
-
-      case AVRO_LONG:
- 
-        if (reader.type() == AVRO_FLOAT) {
-            return RESOLVE_PROMOTABLE_TO_FLOAT;
-        }   
-
-        // fall-through intentional
-
-      case AVRO_FLOAT:
-
-        if (reader.type() == AVRO_DOUBLE) {
-            return RESOLVE_PROMOTABLE_TO_DOUBLE;
-        }   
-
-      default:
-        break;
-    }   
-
-    return furtherResolution(reader);
-}
-
-SchemaResolution 
-NodeRecord::resolve(const Node &reader) const
-{
-    if(reader.type() == AVRO_RECORD) {
-        if(name() == reader.name()) {
-            return RESOLVE_MATCH;
-        }
-    }
-    return furtherResolution(reader);
-}
-
-SchemaResolution 
-NodeEnum::resolve(const Node &reader) const
-{
-    if(reader.type() == AVRO_ENUM) {
-        return (name() == reader.name()) ? RESOLVE_MATCH : RESOLVE_NO_MATCH;
-    }
-    return furtherResolution(reader);
-}
-
-SchemaResolution 
-NodeArray::resolve(const Node &reader) const
-{
-    if(reader.type() == AVRO_ARRAY) {
-        const NodePtr &arrayType = leafAt(0);
-        return arrayType->resolve(*reader.leafAt(0));
-    }
-    return furtherResolution(reader);
-}
-
-SchemaResolution 
-NodeMap::resolve(const Node &reader) const
-{
-    if(reader.type() == AVRO_MAP) {
-        const NodePtr &mapType = leafAt(1);
-        return mapType->resolve(*reader.leafAt(1));
-    }
-    return furtherResolution(reader);
-}
-
-SchemaResolution
-NodeUnion::resolve(const Node &reader) const 
-{
-
-    // If the writer is union, resolution only needs to occur when the selected
-    // type of the writer is known, so this function is not very helpful.
-    //
-    // In this case, this function returns if there is a possible match given
-    // any writer type, so just search type by type returning the best match
-    // found.
-    
-    SchemaResolution match = RESOLVE_NO_MATCH;
-    for(size_t i=0; i < leaves(); ++i) {
-        const NodePtr &node = leafAt(i);
-        SchemaResolution thisMatch = node->resolve(reader);
-        if(thisMatch == RESOLVE_MATCH) {
-            match = thisMatch;
-            break;
-        }
-        if(match == RESOLVE_NO_MATCH) {
-            match = thisMatch;
-        }
-    }
-    return match;
-}
-
-SchemaResolution 
-NodeFixed::resolve(const Node &reader) const
-{
-    if(reader.type() == AVRO_FIXED) {
-        return (
-                (reader.fixedSize() == fixedSize()) &&
-                (reader.name() == name()) 
-            ) ? 
-            RESOLVE_MATCH : RESOLVE_NO_MATCH;
-    }
-    return furtherResolution(reader);
-}
-
-SchemaResolution 
-NodeSymbolic::resolve(const Node &reader) const
-{
-    const NodePtr &node = leafAt(0);
-    return node->resolve(reader);
-}
-
-// Wrap an indentation in a struct for ostream operator<< 
-struct indent { 
-    indent(int depth) :
-        d(depth)
-    { }
-    int d; 
-};
-
-/// ostream operator for indent
-std::ostream& operator <<(std::ostream &os, indent x)
-{
-    static const std::string spaces("    ");
-    while(x.d--) {
-        os << spaces; 
-    }
-    return os;
-}
-
-void 
-NodePrimitive::printJson(std::ostream &os, int depth) const
-{
-    os << '\"' << type() << '\"';
-}
-
-void 
-NodeSymbolic::printJson(std::ostream &os, int depth) const
-{
-    os << '\"' << nameAttribute_.get() << '\"';
-}
-
-void 
-NodeRecord::printJson(std::ostream &os, int depth) const
-{
-    os << "{\n";
-    os << indent(++depth) << "\"type\": \"record\",\n";
-    if(!nameAttribute_.get().empty()) {
-        os << indent(depth) << "\"name\": \"" << nameAttribute_.get() << "\",\n";
-    }
-    os << indent(depth) << "\"fields\": [\n";
-
-    int fields = leafAttributes_.size();
-    ++depth;
-    for(int i = 0; i < fields; ++i) {
-        if(i > 0) {
-            os << indent(depth) << "},\n";
-        }
-        os << indent(depth) << "{\n";
-        os << indent(++depth) << "\"name\": \"" << leafNameAttributes_.get(i) << "\",\n";
-        os << indent(depth) << "\"type\": ";
-        leafAttributes_.get(i)->printJson(os, depth);
-        os << '\n';
-        --depth;
-    }
-    os << indent(depth) << "}\n";
-    os << indent(--depth) << "]\n";
-    os << indent(--depth) << '}';
-}
-
-void 
-NodeEnum::printJson(std::ostream &os, int depth) const
-{
-    os << "{\n";
-    os << indent(++depth) << "\"type\": \"enum\",\n";
-    if(!nameAttribute_.get().empty()) {
-        os << indent(depth) << "\"name\": \"" << nameAttribute_.get() << "\",\n";
-    }
-    os << indent(depth) << "\"symbols\": [\n";
-
-    int names = leafNameAttributes_.size();
-    ++depth;
-    for(int i = 0; i < names; ++i) {
-        if(i > 0) {
-            os << ",\n";
-        }
-        os << indent(depth) << '\"' << leafNameAttributes_.get(i) << '\"';
-    }
-    os << '\n';
-    os << indent(--depth) << "]\n";
-    os << indent(--depth) << '}';
-}
-
-void 
-NodeArray::printJson(std::ostream &os, int depth) const
-{
-    os << "{\n";
-    os << indent(depth+1) << "\"type\": \"array\",\n";
-    os << indent(depth+1) <<  "\"items\": ";
-    leafAttributes_.get()->printJson(os, depth+1);
-    os << '\n';
-    os << indent(depth) << '}';
-}
-
-void 
-NodeMap::printJson(std::ostream &os, int depth) const
-{
-    os << "{\n";
-    os << indent(depth+1) <<"\"type\": \"map\",\n";
-    os << indent(depth+1) << "\"values\": ";
-    leafAttributes_.get(1)->printJson(os, depth+1);
-    os << '\n';
-    os << indent(depth) << '}';
-}
-
-void 
-NodeUnion::printJson(std::ostream &os, int depth) const
-{
-    os << "[\n";
-    int fields = leafAttributes_.size();
-    ++depth;
-    for(int i = 0; i < fields; ++i) {
-        if(i > 0) {
-            os << ",\n";
-        }
-        os << indent(depth);
-        leafAttributes_.get(i)->printJson(os, depth);
-    }
-    os << '\n';
-    os << indent(--depth) << ']';
-}
-
-void 
-NodeFixed::printJson(std::ostream &os, int depth) const
-{
-    os << "{\n";
-    os << indent(++depth) << "\"type\": \"fixed\",\n";
-    os << indent(depth) << "\"size\": " << sizeAttribute_.get() << ",\n";
-    os << indent(depth) << "\"name\": \"" << nameAttribute_.get() << "\"\n";
-    os << indent(--depth) << '}';
-}
-
-} // namespace avro
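The intentional fall-throughs in `NodePrimitive::resolve` above implement the Avro promotion ladder: a writer's int can be read as long, float, or double; long as float or double; float as double; never the reverse. The chain can be isolated into a standalone sketch (the enum and function names below are illustrative, not the library's):

```cpp
// Standalone sketch of the promotion ladder in NodePrimitive::resolve:
// writer int -> long -> float -> double, via the same fall-through chain.
enum PrimType { T_INT, T_LONG, T_FLOAT, T_DOUBLE, T_STRING };
enum Resolution { NO_MATCH, MATCH, PROMOTABLE };

Resolution resolvePrimitive(PrimType writer, PrimType reader) {
    if (writer == reader) {
        return MATCH;
    }
    switch (writer) {
      case T_INT:
        if (reader == T_LONG) return PROMOTABLE;
        // fall-through intentional
      case T_LONG:
        if (reader == T_FLOAT) return PROMOTABLE;
        // fall-through intentional
      case T_FLOAT:
        if (reader == T_DOUBLE) return PROMOTABLE;
      default:
        break;
    }
    return NO_MATCH;
}
```

The fall-throughs are what make a writer int promotable all the way to double: each case re-tests the reader type against the next rung. The real `resolve` additionally defers to `furtherResolution` so a symbolic or union reader gets unwrapped before giving up; this sketch returns `NO_MATCH` directly.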
diff --git a/branch-1.3/lang/c++/impl/Resolver.cc b/branch-1.3/lang/c++/impl/Resolver.cc
deleted file mode 100644
index 4fbde13..0000000
--- a/branch-1.3/lang/c++/impl/Resolver.cc
+++ /dev/null
@@ -1,867 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "Resolver.hh"
-#include "Layout.hh"
-#include "NodeImpl.hh"
-#include "ValidSchema.hh"
-#include "Reader.hh"
-#include "Boost.hh"
-#include "AvroTraits.hh"
-
-namespace avro {
-
-class ResolverFactory;
-typedef boost::shared_ptr<Resolver> ResolverPtr;
-typedef boost::ptr_vector<Resolver> ResolverPtrVector;
-
-// #define DEBUG_VERBOSE
-
-#ifdef DEBUG_VERBOSE
-#define DEBUG_OUT(str) std::cout << str << '\n'
-#else
-class NoOp {};
-template<typename T> NoOp& operator<<(NoOp &noOp, const T&) {
-    return noOp;
-}
-NoOp noop;
-#define DEBUG_OUT(str) noop << str 
-#endif
-
-template<typename T>
-class PrimitiveSkipper : public Resolver
-{
-  public:
-
-    PrimitiveSkipper() : 
-        Resolver()
-    {}
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        T val;
-        reader.readValue(val);
-        DEBUG_OUT("Skipping " << val);
-    }
-};
-
-template<typename T>
-class PrimitiveParser : public Resolver
-{
-  public:
-
-    PrimitiveParser(const PrimitiveLayout &offset) : 
-        Resolver(),
-        offset_(offset.offset())
-    {}
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        T* location = reinterpret_cast<T *> (address + offset_);
-        reader.readValue(*location);
-        DEBUG_OUT("Reading " << *location);
-    }
-
-  private:
-
-    size_t offset_;
-};
-
-template<typename WT, typename RT>
-class PrimitivePromoter : public Resolver
-{
-  public:
-
-    PrimitivePromoter(const PrimitiveLayout &offset) : 
-        Resolver(),
-        offset_(offset.offset())
-    {}
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        parseIt<WT>(reader, address);
-    }
-
-  private:
-
-    void parseIt(Reader &reader, uint8_t *address, const boost::true_type &) const
-    {
-        WT val;
-        reader.readValue(val);
-        RT *location = reinterpret_cast<RT *> (address + offset_);
-        *location = static_cast<RT>(val);
-        DEBUG_OUT("Promoting " << val);
-    }
-
-    void parseIt(Reader &reader, uint8_t *address, const boost::false_type &) const
-    { }
-
-    template<typename T>
-    void parseIt(Reader &reader, uint8_t *address) const
-    {
-        parseIt(reader, address, is_promotable<T>());
-    }
-
-    size_t offset_;
-};
-
-template <>
-class PrimitiveSkipper<std::vector<uint8_t> > : public Resolver
-{
-  public:
-
-    PrimitiveSkipper() : 
-        Resolver()
-    {}
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        std::vector<uint8_t> val;
-        reader.readBytes(val);
-        DEBUG_OUT("Skipping bytes");
-    }
-};
-
-template <>
-class PrimitiveParser<std::vector<uint8_t> > : public Resolver
-{
-  public:
-
-    PrimitiveParser(const PrimitiveLayout &offset) : 
-        Resolver(),
-        offset_(offset.offset()) 
-    {}
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        std::vector<uint8_t> *location = reinterpret_cast<std::vector<uint8_t> *> (address + offset_);
-        reader.readBytes(*location);
-        DEBUG_OUT("Reading bytes");
-    }
-
-  private:
-
-    size_t offset_;
-};
-
-class RecordSkipper : public Resolver
-{
-  public:
-
-    RecordSkipper(ResolverFactory &factory, const NodePtr &writer);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Skipping record");
-
-        reader.readRecord();
-        size_t steps = resolvers_.size();
-        for(size_t i = 0; i < steps; ++i) {
-            resolvers_[i].parse(reader, address);
-        }
-    }
-
-  protected:
-    
-    ResolverPtrVector resolvers_;
-
-};
-
-class RecordParser : public Resolver
-{
-  public:
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Reading record");
-
-        reader.readRecord();
-        size_t steps = resolvers_.size();
-        for(size_t i = 0; i < steps; ++i) {
-            resolvers_[i].parse(reader, address);
-        }
-    }
-
-    RecordParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets);
-
-  protected:
-    
-    ResolverPtrVector resolvers_;
-
-};
-
-
-class MapSkipper : public Resolver
-{
-  public:
-
-    MapSkipper(ResolverFactory &factory, const NodePtr &writer);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Skipping map");
-
-        std::string key;
-        int64_t size = 0;
-        do {
-            size = reader.readMapBlockSize();
-            for(int64_t i = 0; i < size; ++i) {
-                reader.readValue(key);
-                resolver_->parse(reader, address);
-            }
-        } while (size != 0);
-    }
-
-  protected:
-
-    ResolverPtr resolver_;
-};
-
-
-class MapParser : public Resolver
-{
-  public:
-
-    typedef uint8_t *(*GenericMapSetter)(uint8_t *map, const std::string &key);
-
-    MapParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Reading map");
-
-        uint8_t *mapAddress = address + offset_;
-
-        std::string key;
-        GenericMapSetter* setter = reinterpret_cast<GenericMapSetter *> (address + setFuncOffset_);
-
-        int64_t size = 0;
-        do {
-            size = reader.readMapBlockSize();
-            for(int64_t i = 0; i < size; ++i) {
-                reader.readValue(key);
-
-                // create a new map entry and get the address
-                uint8_t *location = (*setter)(mapAddress, key);
-                resolver_->parse(reader, location);
-            }
-        } while (size != 0);
-    }
-
-  protected:
-    
-    ResolverPtr  resolver_;
-    size_t          offset_;
-    size_t          setFuncOffset_;
-};
-
-class ArraySkipper : public Resolver
-{
-  public:
-
-    ArraySkipper(ResolverFactory &factory, const NodePtr &writer);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Skipping array");
-
-        int64_t size = 0;
-        do {
-            size = reader.readArrayBlockSize();
-            for(int64_t i = 0; i < size; ++i) {
-                resolver_->parse(reader, address);
-            }
-        } while (size != 0);
-    }
-
-  protected:
-   
-    ResolverPtr resolver_;
-};
-
-typedef uint8_t *(*GenericArraySetter)(uint8_t *array);
-
-class ArrayParser : public Resolver
-{
-  public:
-
-    ArrayParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Reading array");
-
-        uint8_t *arrayAddress = address + offset_;
-
-        GenericArraySetter* setter = reinterpret_cast<GenericArraySetter *> (address + setFuncOffset_);
-
-        int64_t size = 0;
-        do {
-            size = reader.readArrayBlockSize();
-            for(int64_t i = 0; i < size; ++i) {
-                // create a new map entry and get the address
-                uint8_t *location = (*setter)(arrayAddress);
-                resolver_->parse(reader, location);
-            }
-        } while (size != 0);
-    }
-
-  protected:
-    
-    ArrayParser() :
-        Resolver()
-    {}
-    
-    ResolverPtr resolver_;
-    size_t         offset_;
-    size_t         setFuncOffset_;
-};
-
-class EnumSkipper : public Resolver
-{
-  public:
-
-    EnumSkipper(ResolverFactory &factory, const NodePtr &writer) :
-        Resolver()
-    { }
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        int64_t val = reader.readEnum();
-        DEBUG_OUT("Skipping enum" << val);
-    }
-};
-
-class EnumParser : public Resolver
-{
-  public:
-
-    enum EnumRepresentation {
-        VAL
-    };
-
-    EnumParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets) :
-        Resolver(),
-        offset_(offsets.at(0).offset()),
-        readerSize_(reader->names())
-    { 
-        const size_t writerSize = writer->names();
-
-        mapping_.reserve(writerSize);
-
-        for(size_t i = 0; i < writerSize; ++i) {
-            const std::string &name = writer->nameAt(i);
-            size_t readerIndex = readerSize_;
-            reader->nameIndex(name, readerIndex);
-            mapping_.push_back(readerIndex);
-        }
-    }
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        int64_t val = reader.readEnum();
-        assert(static_cast<size_t>(val) < mapping_.size());
-
-        if(mapping_[val] < readerSize_) {
-            EnumRepresentation* location = reinterpret_cast<EnumRepresentation *> (address + offset_);
-            *location = static_cast<EnumRepresentation>(mapping_[val]);
-            DEBUG_OUT("Setting enum " << *location);
-        }
-    }
-
-protected:
-
-    size_t offset_;
-    size_t readerSize_;
-    std::vector<size_t> mapping_;
-    
-};
-
-class UnionSkipper : public Resolver
-{
-  public:
-
-    UnionSkipper(ResolverFactory &factory, const NodePtr &writer);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Skipping union");
-        int64_t choice = reader.readUnion();
-        resolvers_[choice].parse(reader, address);
-    }
-
-  protected:
-    
-    ResolverPtrVector resolvers_;
-};
-
-
-class UnionParser : public Resolver
-{
-  public:
-
-    typedef uint8_t *(*GenericUnionSetter)(uint8_t *, int64_t);
-
-    UnionParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Reading union");
-        int64_t writerChoice = reader.readUnion();
-        int64_t *readerChoice = reinterpret_cast<int64_t *>(address + choiceOffset_);
-
-        *readerChoice = choiceMapping_[writerChoice];
-        GenericUnionSetter* setter = reinterpret_cast<GenericUnionSetter *> (address + setFuncOffset_);
-        uint8_t *value = reinterpret_cast<uint8_t *> (address + offset_);
-        uint8_t *location = (*setter)(value, *readerChoice);
-
-        resolvers_[writerChoice].parse(reader, location);
-    }
-
-  protected:
-    
-    ResolverPtrVector resolvers_;
-    std::vector<int64_t> choiceMapping_;
-    size_t offset_;
-    size_t choiceOffset_;
-    size_t setFuncOffset_;
-};
-
-class UnionToNonUnionParser : public Resolver
-{
-  public:
-
-    typedef uint8_t *(*GenericUnionSetter)(uint8_t *, int64_t);
-
-    UnionToNonUnionParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const Layout &offsets);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Reading union to non-union");
-        int64_t choice = reader.readUnion();
-        resolvers_[choice].parse(reader, address);
-    }
-
-  protected:
-    
-    ResolverPtrVector resolvers_;
-};
-
-class NonUnionToUnionParser : public Resolver
-{
-  public:
-
-    typedef uint8_t *(*GenericUnionSetter)(uint8_t *, int64_t);
-
-    NonUnionToUnionParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets);
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Reading non-union to union");
-
-        int64_t *choice = reinterpret_cast<int64_t *>(address + choiceOffset_);
-        *choice = choice_;
-        GenericUnionSetter* setter = reinterpret_cast<GenericUnionSetter *> (address + setFuncOffset_);
-        uint8_t *value = reinterpret_cast<uint8_t *> (address + offset_);
-        uint8_t *location = (*setter)(value, choice_);
-
-        resolver_->parse(reader, location);
-    }
-
-  protected:
-    
-    ResolverPtr resolver_;
-    size_t choice_;
-    size_t offset_;
-    size_t choiceOffset_;
-    size_t setFuncOffset_;
-};
-
-class FixedSkipper : public Resolver
-{
-  public:
-
-    FixedSkipper(ResolverFactory &factory, const NodePtr &writer) :
-        Resolver() 
-    {
-        size_ = writer->fixedSize();
-    }
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Skipping fixed");
-        std::vector<uint8_t> val(size_);
-        reader.readFixed(&val[0], size_);
-    }
-
-  protected:
-
-    int size_;
-    
-};
-
-class FixedParser : public Resolver
-{
-  public:
-
-    FixedParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets) :
-        Resolver() 
-    {
-        size_ = writer->fixedSize();
-        offset_ = offsets.at(0).offset();
-    }
-
-    virtual void parse(Reader &reader, uint8_t *address) const
-    {
-        DEBUG_OUT("Reading fixed");
-        uint8_t *location = reinterpret_cast<uint8_t *> (address + offset_);
-        reader.readFixed(location, size_);
-    }
-
-  protected:
-
-    int size_;
-    size_t offset_;
-    
-};
-
-
-class ResolverFactory : private boost::noncopyable {
-
-    template<typename T>
-    Resolver*
-    constructPrimitiveSkipper(const NodePtr &writer) 
-    {
-        return new PrimitiveSkipper<T>();
-    }
-
-    template<typename T>
-    Resolver*
-    constructPrimitive(const NodePtr &writer, const NodePtr &reader, const Layout &offset)
-    {
-        Resolver *instruction;
-
-        SchemaResolution match = writer->resolve(*reader);
-
-        if (match == RESOLVE_NO_MATCH) {
-            instruction = new PrimitiveSkipper<T>();
-        } 
-        else if (reader->type() == AVRO_UNION) {
-            const CompoundLayout &compoundLayout = static_cast<const CompoundLayout &>(offset);
-            instruction = new NonUnionToUnionParser(*this, writer, reader, compoundLayout);
-        }
-        else if (match == RESOLVE_MATCH) {
-            const PrimitiveLayout &primitiveLayout = static_cast<const PrimitiveLayout &>(offset);
-            instruction = new PrimitiveParser<T>(primitiveLayout);
-        }
-        else if(match == RESOLVE_PROMOTABLE_TO_LONG) {
-            const PrimitiveLayout &primitiveLayout = static_cast<const PrimitiveLayout &>(offset);
-            instruction = new PrimitivePromoter<T, int64_t>(primitiveLayout);
-        }
-        else if(match == RESOLVE_PROMOTABLE_TO_FLOAT) {
-            const PrimitiveLayout &primitiveLayout = static_cast<const PrimitiveLayout &>(offset);
-            instruction = new PrimitivePromoter<T, float>(primitiveLayout);
-        }
-        else if(match == RESOLVE_PROMOTABLE_TO_DOUBLE) {
-            const PrimitiveLayout &primitiveLayout = static_cast<const PrimitiveLayout &>(offset);
-            instruction = new PrimitivePromoter<T, double>(primitiveLayout);
-        }
-        else {
-            assert(0);
-        }
-        return instruction;
-    }
-
-    template<typename Skipper>
-    Resolver*
-    constructCompoundSkipper(const NodePtr &writer) 
-    {
-        return new Skipper(*this, writer);
-    }
-
-
-    template<typename Parser, typename Skipper>
-    Resolver*
-    constructCompound(const NodePtr &writer, const NodePtr &reader, const Layout &offset)
-    {
-        Resolver *instruction;
-
-        SchemaResolution match = RESOLVE_NO_MATCH;
-
-        match = writer->resolve(*reader);
-
-        if (match == RESOLVE_NO_MATCH) {
-            instruction = new Skipper(*this, writer);
-        }
-        else if(writer->type() != AVRO_UNION && reader->type() == AVRO_UNION) {
-            const CompoundLayout &compoundLayout = dynamic_cast<const CompoundLayout &>(offset);
-            instruction = new NonUnionToUnionParser(*this, writer, reader, compoundLayout);
-        }
-        else if(writer->type() == AVRO_UNION && reader->type() != AVRO_UNION) {
-            instruction = new UnionToNonUnionParser(*this, writer, reader, offset);
-        }
-        else {
-            const CompoundLayout &compoundLayout = dynamic_cast<const CompoundLayout &>(offset);
-            instruction = new Parser(*this, writer, reader, compoundLayout);
-        } 
-
-        return instruction;
-    }
-
-  public:
-
-    Resolver *
-    construct(const NodePtr &writer, const NodePtr &reader, const Layout &offset)
-    {
-
-        typedef Resolver* (ResolverFactory::*BuilderFunc)(const NodePtr &writer, const NodePtr &reader, const Layout &offset);
-
-        NodePtr currentWriter = (writer->type() == AVRO_SYMBOLIC) ?
-            resolveSymbol(writer) : writer;
-
-        NodePtr currentReader = (reader->type() == AVRO_SYMBOLIC) ?
-            resolveSymbol(reader) : reader;
-
-        static const BuilderFunc funcs[] = {
-            &ResolverFactory::constructPrimitive<std::string>, 
-            &ResolverFactory::constructPrimitive<std::vector<uint8_t> >,
-            &ResolverFactory::constructPrimitive<int32_t>,
-            &ResolverFactory::constructPrimitive<int64_t>,
-            &ResolverFactory::constructPrimitive<float>,
-            &ResolverFactory::constructPrimitive<double>,
-            &ResolverFactory::constructPrimitive<bool>,
-            &ResolverFactory::constructPrimitive<Null>,
-            &ResolverFactory::constructCompound<RecordParser, RecordSkipper>,
-            &ResolverFactory::constructCompound<EnumParser, EnumSkipper>,
-            &ResolverFactory::constructCompound<ArrayParser, ArraySkipper>,
-            &ResolverFactory::constructCompound<MapParser, MapSkipper>,
-            &ResolverFactory::constructCompound<UnionParser, UnionSkipper>,
-            &ResolverFactory::constructCompound<FixedParser, FixedSkipper>
-        };
-
-        BOOST_STATIC_ASSERT( (sizeof(funcs)/sizeof(BuilderFunc)) == (AVRO_NUM_TYPES) );
-
-        BuilderFunc func = funcs[currentWriter->type()];
-        assert(func);
-
-        return  ((this)->*(func))(currentWriter, currentReader, offset);
-    }
-
-    Resolver *
-    skipper(const NodePtr &writer) 
-    {
-
-        typedef Resolver* (ResolverFactory::*BuilderFunc)(const NodePtr &writer);
-
-        NodePtr currentWriter = (writer->type() == AVRO_SYMBOLIC) ?
-            writer->leafAt(0) : writer;
-
-        static const BuilderFunc funcs[] = {
-            &ResolverFactory::constructPrimitiveSkipper<std::string>, 
-            &ResolverFactory::constructPrimitiveSkipper<std::vector<uint8_t> >,
-            &ResolverFactory::constructPrimitiveSkipper<int32_t>,
-            &ResolverFactory::constructPrimitiveSkipper<int64_t>,
-            &ResolverFactory::constructPrimitiveSkipper<float>,
-            &ResolverFactory::constructPrimitiveSkipper<double>,
-            &ResolverFactory::constructPrimitiveSkipper<bool>,
-            &ResolverFactory::constructPrimitiveSkipper<Null>,
-            &ResolverFactory::constructCompoundSkipper<RecordSkipper>,
-            &ResolverFactory::constructCompoundSkipper<EnumSkipper>,
-            &ResolverFactory::constructCompoundSkipper<ArraySkipper>,
-            &ResolverFactory::constructCompoundSkipper<MapSkipper>,
-            &ResolverFactory::constructCompoundSkipper<UnionSkipper>,
-            &ResolverFactory::constructCompoundSkipper<FixedSkipper>
-        };
-
-        BOOST_STATIC_ASSERT( (sizeof(funcs)/sizeof(BuilderFunc)) == (AVRO_NUM_TYPES) );
-
-        BuilderFunc func = funcs[currentWriter->type()];
-        assert(func);
-
-        return  ((this)->*(func))(currentWriter);
-    }
-};
-
-
-RecordSkipper::RecordSkipper(ResolverFactory &factory, const NodePtr &writer) :
-    Resolver() 
-{
-    size_t leaves = writer->leaves();
-    resolvers_.reserve(leaves);
-    for(size_t i = 0; i < leaves; ++i) {
-        const NodePtr &w = writer->leafAt(i);
-        resolvers_.push_back(factory.skipper(w));
-    }
-}
-
-RecordParser::RecordParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets) :
-    Resolver()
-{
-    size_t leaves = writer->leaves();
-    resolvers_.reserve(leaves);
-    for(size_t i = 0; i < leaves; ++i) {
-    
-        const NodePtr &w = writer->leafAt(i);
-
-        const std::string &name = writer->nameAt(i);
-
-        size_t readerIndex = 0;
-        bool found = reader->nameIndex(name, readerIndex);
-
-        if(found) {
-            const NodePtr &r = reader->leafAt(readerIndex);
-            resolvers_.push_back(factory.construct(w, r, offsets.at(readerIndex)));
-        }
-        else {
-            resolvers_.push_back(factory.skipper(w));
-        }
-    }
-}
-
-MapSkipper::MapSkipper(ResolverFactory &factory, const NodePtr &writer) :
-    Resolver(),
-    resolver_(factory.skipper(writer->leafAt(1)))
-{ }
-
-MapParser::MapParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets) :
-    Resolver(),
-    resolver_(factory.construct(writer->leafAt(1), reader->leafAt(1), offsets.at(1))),
-    offset_(offsets.offset()),
-    setFuncOffset_( offsets.at(0).offset())
-{ }
-
-ArraySkipper::ArraySkipper(ResolverFactory &factory, const NodePtr &writer) :
-    Resolver(),
-    resolver_(factory.skipper(writer->leafAt(0)))
-{ }
-
-ArrayParser::ArrayParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets) :
-    Resolver(),
-    resolver_(factory.construct(writer->leafAt(0), reader->leafAt(0), offsets.at(1))),
-    offset_(offsets.offset()),
-    setFuncOffset_(offsets.at(0).offset())
-{ }
-
-UnionSkipper::UnionSkipper(ResolverFactory &factory, const NodePtr &writer) :
-    Resolver() 
-{
-    size_t leaves = writer->leaves();
-    resolvers_.reserve(leaves);
-    for(size_t i = 0; i < leaves; ++i) {
-    const NodePtr &w = writer->leafAt(i);
-        resolvers_.push_back(factory.skipper(w));
-    }
-}
-
-namespace {
-
-// assumes the writer is NOT a union, and the reader IS a union
-
-SchemaResolution    
-checkUnionMatch(const NodePtr &writer, const NodePtr &reader, size_t &index)
-{
-    SchemaResolution bestMatch = RESOLVE_NO_MATCH;
- 
-    index = 0;
-    size_t leaves = reader->leaves();
-
-    for(size_t i=0; i < leaves; ++i) {
-
-        const NodePtr &leaf = reader->leafAt(i);
-        SchemaResolution newMatch = writer->resolve(*leaf);
-
-        if(newMatch == RESOLVE_MATCH) {
-            bestMatch = newMatch;
-            index = i;
-            break;
-        }
-        if(bestMatch == RESOLVE_NO_MATCH) {
-            bestMatch = newMatch;
-            index = i;
-        }
-    }
-
-    return bestMatch;
-}
-
-} // anonymous namespace
-
-UnionParser::UnionParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets) :
-    Resolver(),
-    offset_(offsets.offset()),
-    choiceOffset_(offsets.at(0).offset()),
-    setFuncOffset_(offsets.at(1).offset())
-{
-
-    size_t leaves = writer->leaves();
-    resolvers_.reserve(leaves);
-    choiceMapping_.reserve(leaves);
-    for(size_t i = 0; i < leaves; ++i) {
-
-        // for each writer, we need a schema match for the reader
-        const NodePtr &w = writer->leafAt(i);
-        size_t index = 0;
-
-        SchemaResolution match = checkUnionMatch(w, reader, index);
-
-        if(match == RESOLVE_NO_MATCH) {
-            resolvers_.push_back(factory.skipper(w));
-            // push back a nonsensical number
-            choiceMapping_.push_back(reader->leaves());
-        }
-        else {
-            const NodePtr &r = reader->leafAt(index);
-            resolvers_.push_back(factory.construct(w, r, offsets.at(index+2)));
-            choiceMapping_.push_back(index);
-        }
-    }
-}
-
-NonUnionToUnionParser::NonUnionToUnionParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const CompoundLayout &offsets) :
-    Resolver(),
-    offset_(offsets.offset()),
-    choiceOffset_(offsets.at(0).offset()),
-    setFuncOffset_(offsets.at(1).offset())
-{
-
-    SchemaResolution bestMatch = checkUnionMatch(writer, reader, choice_);
-    assert(bestMatch != RESOLVE_NO_MATCH);
-    resolver_.reset(factory.construct(writer, reader->leafAt(choice_), offsets.at(choice_+2)));
-}
-
-UnionToNonUnionParser::UnionToNonUnionParser(ResolverFactory &factory, const NodePtr &writer, const NodePtr &reader, const Layout &offsets) :
-    Resolver()
-{
-    size_t leaves = writer->leaves();
-    resolvers_.reserve(leaves);
-    for(size_t i = 0; i < leaves; ++i) {
-        const NodePtr &w = writer->leafAt(i);
-        resolvers_.push_back(factory.construct(w, reader, offsets));
-    }
-}
-
-Resolver *constructResolver(const ValidSchema &writerSchema,
-                                    const ValidSchema &readerSchema,
-                                    const Layout &readerLayout)
-{
-    ResolverFactory factory;
-    return factory.construct(writerSchema.root(), readerSchema.root(), readerLayout);
-}
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/impl/ResolverSchema.cc b/branch-1.3/lang/c++/impl/ResolverSchema.cc
deleted file mode 100644
index e18ecc7..0000000
--- a/branch-1.3/lang/c++/impl/ResolverSchema.cc
+++ /dev/null
@@ -1,39 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "ResolverSchema.hh"
-#include "Resolver.hh"
-#include "ValidSchema.hh"
-
-namespace avro {
-
-ResolverSchema::ResolverSchema(
-        const ValidSchema &writerSchema, 
-        const ValidSchema &readerSchema, 
-        const Layout &readerLayout) :
-    resolver_(constructResolver(writerSchema, readerSchema, readerLayout))
-{ }
-
-void
-ResolverSchema::parse(Reader &reader, uint8_t *address) 
-{
-    resolver_->parse(reader, address);
-}
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/impl/Schema.cc b/branch-1.3/lang/c++/impl/Schema.cc
deleted file mode 100644
index a76df70..0000000
--- a/branch-1.3/lang/c++/impl/Schema.cc
+++ /dev/null
@@ -1,111 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#include "Schema.hh"
-
-namespace avro {
-
-Schema::Schema() 
-{ }
-
-Schema::~Schema() 
-{ }
-
-Schema::Schema(const NodePtr &node) :
-    node_(node)
-{ }
-
-Schema::Schema(Node *node) :
-    node_(node)
-{ }
-
-RecordSchema::RecordSchema(const std::string &name) :
-    Schema(new NodeRecord)
-{
-    node_->setName(name);
-}
-
-void
-RecordSchema::addField(const std::string &name, const Schema &fieldSchema) 
-{
-    // add the name first. it will throw if the name is a duplicate, preventing
-    // the leaf from being added
-    node_->addName(name);
-
-    node_->addLeaf(fieldSchema.root());
-}
-
-EnumSchema::EnumSchema(const std::string &name) :
-    Schema(new NodeEnum)
-{
-    node_->setName(name);
-}
-
-void
-EnumSchema::addSymbol(const std::string &symbol)
-{
-    node_->addName(symbol);
-}
-
-ArraySchema::ArraySchema(const Schema &itemsSchema) :
-    Schema(new NodeArray)
-{
-    node_->addLeaf(itemsSchema.root());
-}
-
-MapSchema::MapSchema(const Schema &valuesSchema) :
-    Schema(new NodeMap)
-{
-    node_->addLeaf(valuesSchema.root());
-}
-
-UnionSchema::UnionSchema() :
-    Schema(new NodeUnion)
-{ }
-
-void
-UnionSchema::addType(const Schema &typeSchema) 
-{
-    if(typeSchema.type() == AVRO_UNION) {
-        throw Exception("Cannot add unions to unions");
-    }
-
-    if(typeSchema.type() == AVRO_RECORD) {
-        // check for duplicate records
-        size_t types = node_->leaves();
-        for(size_t i = 0; i < types; ++i) {
-            const NodePtr &leaf = node_->leafAt(i);
-            // TODO, more checks?
-            if(leaf->type() == AVRO_RECORD && leaf->name() == typeSchema.root()->name()) {
-                throw Exception("Records in unions cannot have duplicate names");
-            }
-        }
-    }
-
-    node_->addLeaf(typeSchema.root());
-}
-
-FixedSchema::FixedSchema(int size, const std::string &name) :
-    Schema(new NodeFixed)
-{
-    node_->setFixedSize(size);
-    node_->setName(name);
-}
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/impl/Types.cc b/branch-1.3/lang/c++/impl/Types.cc
deleted file mode 100644
index 60a2d3e..0000000
--- a/branch-1.3/lang/c++/impl/Types.cc
+++ /dev/null
@@ -1,71 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <iostream>
-#include <boost/static_assert.hpp>
-#include "Types.hh"
-
-namespace avro {
-
-namespace strings {
-const std::string typeToString[] = {
-    "string",
-    "bytes",
-    "int",
-    "long",
-    "float",
-    "double",
-    "boolean",
-    "null",
-    "record",
-    "enum",
-    "array",
-    "map",
-    "union",
-    "fixed",
-    "symbolic"
-};
-
-BOOST_STATIC_ASSERT( (sizeof(typeToString)/sizeof(std::string)) == (AVRO_NUM_TYPES+1) );
-
-} // namespace strings
-
-
-// this static assert exists because a 32 bit integer is used as a bit-flag for each type,
-// and it would be a problem for this flag if we ever supported more than 32 types
-BOOST_STATIC_ASSERT( AVRO_NUM_TYPES < 32 );
-
-std::ostream &operator<< (std::ostream &os, Type type)
-{
-    if(isAvroTypeOrPseudoType(type)) {
-        os << strings::typeToString[type];
-    }
-    else {
-        os << static_cast<int>(type);
-    }
-    return os;
-}
-
-std::ostream &operator<< (std::ostream &os, const Null &)
-{
-    os << "(null value)";
-    return os;
-}
-
-} // namespace avro
-
diff --git a/branch-1.3/lang/c++/impl/ValidSchema.cc b/branch-1.3/lang/c++/impl/ValidSchema.cc
deleted file mode 100644
index e92fae7..0000000
--- a/branch-1.3/lang/c++/impl/ValidSchema.cc
+++ /dev/null
@@ -1,110 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <boost/format.hpp>
-
-#include "ValidSchema.hh"
-#include "SymbolMap.hh"
-#include "Schema.hh"
-#include "Node.hh"
-
-namespace avro {
-
-    ValidSchema::ValidSchema(const Schema &schema) :
-    root_(schema.root())
-{
-    SymbolMap symbolMap;
-    validate(root_, symbolMap);
-}
-
-ValidSchema::ValidSchema() :
-   root_(NullSchema().root()) 
-{ }
-
-void
-ValidSchema::setSchema(const Schema &schema)
-{
-    const NodePtr &node(schema.root());
-    SymbolMap symbolMap;
-    validate(schema.root(), symbolMap);
-    root_ = node;
-}
-
-bool
-ValidSchema::validate(const NodePtr &node, SymbolMap &symbolMap) 
-{
-    if(!node) {
-        root_.reset(new NodePrimitive(AVRO_NULL));
-    }
-
-    if(!node->isValid()) {
-        throw Exception( boost::format("Schema is invalid, due to bad node of type %1%") % node->type());
-    }
-    if(node->hasName()) {
-        if(node->type() == AVRO_SYMBOLIC) {
-            if(!symbolMap.hasSymbol(node->name())) {
-                throw Exception( boost::format("Symbolic name \"%1%\" is unknown") % node->name());
-            }
-
-            boost::shared_ptr<NodeSymbolic> symNode = boost::static_pointer_cast<NodeSymbolic>(node);
-
-            // if the symbolic link is already resolved, we return true,
-            // otherwise returning false will force it to be resolved
-            return symNode->isSet();
-        }
-        bool registered = symbolMap.registerSymbol(node);
-        if(!registered) {
-            return false;
-        }
-    }
-    node->lock();
-    size_t leaves = node->leaves();
-    for(size_t i = 0; i < leaves; ++i) {
-        const NodePtr &leaf(node->leafAt(i));
-
-        if(! validate(leaf, symbolMap)) {
-
-            // if validate returns false it means a node with this name already
-            // existed in the map, instead of keeping this node twice in the
-            // map (which could potentially create circular shared pointer
-            // links that could not be easily freed), replace this node with a
-            // symbolic link to the original one.
-            
-            NodePtr redirect = symbolMap.locateSymbol(leaf->name());
-            node->setLeafToSymbolic(i, redirect);
-        }
-    }
-
-    return true;
-}
-
-void 
-ValidSchema::toJson(std::ostream &os) const
-{ 
-    root_->printJson(os, 0);
-    os << '\n';
-}
-
-void 
-ValidSchema::toFlatList(std::ostream &os) const
-{ 
-    root_->printBasicInfo(os);
-}
-
-} // namespace avro
-
diff --git a/branch-1.3/lang/c++/impl/ValidatingReader.cc b/branch-1.3/lang/c++/impl/ValidatingReader.cc
deleted file mode 100644
index a89ed56..0000000
--- a/branch-1.3/lang/c++/impl/ValidatingReader.cc
+++ /dev/null
@@ -1,81 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <boost/static_assert.hpp>
-
-#include "ValidatingReader.hh"
-#include "ValidSchema.hh"
-#include "OutputStreamer.hh"
-
-namespace avro {
-
-ValidatingReader::ValidatingReader(const ValidSchema &schema, InputStreamer &in) :
-    validator_(schema),
-    reader_(in)
-{ }
-
-int64_t
-ValidatingReader::readCount()
-{
-    checkSafeToGet(AVRO_LONG);
-    int64_t val;
-    reader_.readValue(val);
-    validator_.advanceWithCount(val);
-    return val;
-}
-
-void 
-ValidatingReader::readRecord()
-{
-    checkSafeToGet(AVRO_RECORD);
-    validator_.advance();
-}
-
-int64_t 
-ValidatingReader::readUnion()
-{
-    checkSafeToGet(AVRO_UNION);
-    validator_.advance();
-    return readCount();
-}
-
-int64_t 
-ValidatingReader::readEnum()
-{
-    checkSafeToGet(AVRO_ENUM);
-    validator_.advance();
-    return readCount();
-}
-
-int64_t 
-ValidatingReader::readMapBlockSize()
-{
-    checkSafeToGet(AVRO_MAP);
-    validator_.advance();
-    return readCount();
-}
-
-int64_t 
-ValidatingReader::readArrayBlockSize()
-{
-    checkSafeToGet(AVRO_ARRAY);
-    validator_.advance();
-    return readCount();
-}
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/impl/ValidatingWriter.cc b/branch-1.3/lang/c++/impl/ValidatingWriter.cc
deleted file mode 100644
index 5c39a9b..0000000
--- a/branch-1.3/lang/c++/impl/ValidatingWriter.cc
+++ /dev/null
@@ -1,101 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <boost/static_assert.hpp>
-
-#include "ValidatingWriter.hh"
-#include "ValidSchema.hh"
-#include "OutputStreamer.hh"
-#include "AvroTraits.hh"
-
-namespace avro {
-
-ValidatingWriter::ValidatingWriter(const ValidSchema &schema, OutputStreamer &out) :
-    validator_(schema),
-    writer_(out)
-{ }
-
-void
-ValidatingWriter::writeBytes(const void *val, size_t size)
-{
-    checkSafeToPut(AVRO_BYTES);
-    writer_.writeBytes(val, size);
-    validator_.advance();
-}
-
-void 
-ValidatingWriter::writeCount(int64_t count)
-{
-    checkSafeToPut(AVRO_LONG);
-    writer_.writeValue(count);
-    validator_.advanceWithCount(count);
-}
-
-void 
-ValidatingWriter::writeRecord()
-{
-    checkSafeToPut(AVRO_RECORD);
-    validator_.advance();
-}
-
-void 
-ValidatingWriter::writeArrayBlock(int64_t size)
-{
-    checkSafeToPut(AVRO_ARRAY);
-    validator_.advance();
-    writeCount(size);
-}
-
-void 
-ValidatingWriter::writeArrayEnd()
-{
-    writeArrayBlock(0);
-}
-
-void 
-ValidatingWriter::writeMapBlock(int64_t size)
-{
-    checkSafeToPut(AVRO_MAP);
-    validator_.advance();
-    writeCount(size);
-}
-
-void 
-ValidatingWriter::writeMapEnd()
-{
-    writeMapBlock(0);
-}
-
-void 
-ValidatingWriter::writeUnion(int64_t choice)
-{
-    checkSafeToPut(AVRO_UNION);
-    validator_.advance();
-    writeCount(choice);
-}
-
-void 
-ValidatingWriter::writeEnum(int64_t choice)
-{
-    checkSafeToPut(AVRO_ENUM);
-    validator_.advance();
-    writeCount(choice);
-}
-
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/impl/Validator.cc b/branch-1.3/lang/c++/impl/Validator.cc
deleted file mode 100644
index f8bc166..0000000
--- a/branch-1.3/lang/c++/impl/Validator.cc
+++ /dev/null
@@ -1,302 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <boost/static_assert.hpp>
-
-#include "Validator.hh"
-#include "ValidSchema.hh"
-#include "NodeImpl.hh"
-
-namespace avro {
-
-Validator::Validator(const ValidSchema &schema) :
-    schema_(schema),
-    parseTree_(schema.root()),
-    nextType_(AVRO_NULL),
-    expectedTypesFlag_(0),
-    compoundStarted_(false),
-    waitingForCount_(false),
-    count_(0)
-{
-    setupOperation(parseTree_);
-}
-
-void 
-Validator::setWaitingForCount()
-{
-    waitingForCount_ = true;
-    count_ = 0;
-    expectedTypesFlag_ = typeToFlag(AVRO_INT) | typeToFlag(AVRO_LONG);
-    nextType_ = AVRO_LONG;
-}
-
-void
-Validator::recordAdvance()
-{
-    // record doesn't use this flag because it doesn't need to set
-    // up anything at the start, but just clear it
-    compoundStarted_ = false;
-
-    // determine the next record entry to process
-    size_t index = (compoundStack_.back().pos)++;
-
-    const NodePtr &node = compoundStack_.back().node;
-    if(index < node->leaves() ) {
-        setupOperation(node->leafAt(index));
-    }
-    else {
-        // done with this record, remove it from the processing stack
-        compoundStack_.pop_back();
-    }
-}
-
-void
-Validator::enumAdvance()
-{
-    if(compoundStarted_) {
-        setWaitingForCount();
-        compoundStarted_ = false;
-    }
-    else {
-        waitingForCount_ = false;
-        compoundStack_.pop_back();
-    }
-}
-
-void
-Validator::countingAdvance()
-{
-    const NodePtr &node = compoundStack_.back().node;
-
-    if(compoundStarted_) {
-        setWaitingForCount();
-        compoundStarted_ = false;
-    }
-    else if(waitingForCount_) {
-        waitingForCount_ = false;
-        if(count_ == 0) {
-            compoundStack_.pop_back();
-        }
-        else {
-            counters_.push_back(count_);
-            setupOperation(node->leafAt(0));
-        }
-    }
-    else {
-
-        size_t index = ++(compoundStack_.back().pos);
-
-        if(index < node->leaves() ) {
-            setupOperation(node->leafAt(index));
-        }
-        else {
-            compoundStack_.back().pos = 0;
-            int count = --counters_.back();
-            if(count == 0) {
-                counters_.pop_back();
-                compoundStarted_ = true;
-                nextType_ = node->type();
-                expectedTypesFlag_ = typeToFlag(nextType_);
-            }
-            else {
-                setupOperation(node->leafAt(0));
-            }
-        }
-    }
-}
-
-void
-Validator::unionAdvance()
-{
-    if(compoundStarted_) {
-        setWaitingForCount();
-        compoundStarted_ = false;
-    }
-    else {
-        waitingForCount_ = false;
-        NodePtr node = compoundStack_.back().node;
-
-        if(count_ < static_cast<int64_t>(node->leaves())) {
-            compoundStack_.pop_back();
-            setupOperation(node->leafAt(count_));
-        }
-        else {
-            throw Exception("Union out of range");
-        }
-    }
-}
-
-void
-Validator::fixedAdvance()
-{
-    compoundStarted_ = false;
-    compoundStack_.pop_back();
-}
-
-int 
-Validator::nextSizeExpected() const
-{
-    return compoundStack_.back().node->fixedSize();
-}
-
-void
-Validator::advance()
-{
-    typedef void (Validator::*AdvanceFunc)();
-
-    // only the compound types need advance functions here
-    static const AdvanceFunc funcs[] = {
-        0, // string
-        0, // bytes
-        0, // int
-        0, // long
-        0, // float
-        0, // double
-        0, // bool
-        0, // null
-        &Validator::recordAdvance,
-        &Validator::enumAdvance,
-        &Validator::countingAdvance,
-        &Validator::countingAdvance,
-        &Validator::unionAdvance,
-        &Validator::fixedAdvance
-    };
-    BOOST_STATIC_ASSERT( (sizeof(funcs)/sizeof(AdvanceFunc)) == (AVRO_NUM_TYPES) );
-
-    expectedTypesFlag_ = 0;
-    // loop until we encounter a next expected type, or we've exited all compound types 
-    while(!expectedTypesFlag_ && !compoundStack_.empty() ) {
-    
-        Type type = compoundStack_.back().node->type();
-
-        AdvanceFunc func = funcs[type];
-
-        // only compound functions are put on the status stack so it is ok to
-        // assume that func is not null
-        assert(func);
-
-        ((this)->*(func))();
-    }
-
-    if(compoundStack_.empty()) {
-        nextType_ = AVRO_NULL;
-    }
-}
-
-void
-Validator::advanceWithCount(int64_t count) 
-{
-    if(!waitingForCount_) {
-        throw Exception("Not expecting count");
-    }
-    else if(count_ < 0) {
-        throw Exception("Count cannot be negative");
-    }
-    count_ = count;
-
-    advance();
-}
-
-void
-Validator::setupFlag(Type type)
-{
-    // use flags instead of strictly types, so that we can be more lax about the type
-    // (for example, a long should be able to accept an int type, but not vice versa)
-    static const flag_t flags[] = {
-        typeToFlag(AVRO_STRING) | typeToFlag(AVRO_BYTES),
-        typeToFlag(AVRO_STRING) | typeToFlag(AVRO_BYTES),
-        typeToFlag(AVRO_INT),
-        typeToFlag(AVRO_INT) | typeToFlag(AVRO_LONG),
-        typeToFlag(AVRO_FLOAT),
-        typeToFlag(AVRO_DOUBLE),
-        typeToFlag(AVRO_BOOL),
-        typeToFlag(AVRO_NULL),
-        typeToFlag(AVRO_RECORD),
-        typeToFlag(AVRO_ENUM),
-        typeToFlag(AVRO_ARRAY),
-        typeToFlag(AVRO_MAP),
-        typeToFlag(AVRO_UNION),
-        typeToFlag(AVRO_FIXED)
-    };
-    BOOST_STATIC_ASSERT( (sizeof(flags)/sizeof(flag_t)) == (AVRO_NUM_TYPES) );
-
-    expectedTypesFlag_ = flags[type];
-}
-
-void
-Validator::setupOperation(const NodePtr &node)
-{
-    nextType_ = node->type();
-
-    if(nextType_ == AVRO_SYMBOLIC) {
-        NodePtr actualNode = resolveSymbol(node);
-        assert(actualNode);
-        setupOperation(actualNode);
-        return;
-    }
-
-    assert(nextType_ < AVRO_SYMBOLIC);
-
-    setupFlag(nextType_);
-
-    if(!isPrimitive(nextType_)) {
-        compoundStack_.push_back(CompoundType(node));
-        compoundStarted_ = true;
-    }
-}
-
-bool 
-Validator::getCurrentRecordName(std::string &name) const
-{
-    bool found = false;
-    name.clear();
-
-    int idx = -1;
-    // if the top of the stack is a record I want this record name
-    if(!compoundStack_.empty() && (isPrimitive(nextType_) || nextType_ == AVRO_RECORD)) {
-        idx = compoundStack_.size() -1;
-    }
-    else {
-        idx = compoundStack_.size() -2;
-    }
-    
-    if(idx >= 0 && compoundStack_[idx].node->type() == AVRO_RECORD) {
-        name = compoundStack_[idx].node->name();
-        found = true;
-    }
-    return found;
-}
-
-bool 
-Validator::getNextFieldName(std::string &name) const
-{
-    bool found = false;
-    name.clear();
-    int idx = isCompound(nextType_) ? compoundStack_.size()-2 : compoundStack_.size()-1;
-    if(idx >= 0 && compoundStack_[idx].node->type() == AVRO_RECORD) {
-        size_t pos = compoundStack_[idx].pos-1;
-        const NodePtr &node = compoundStack_[idx].node;
-        if(pos < node->leaves()) {
-            name = node->nameAt(pos);
-            found = true;
-        }
-    }
-    return found;
-}
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/impl/Zigzag.cc b/branch-1.3/lang/c++/impl/Zigzag.cc
deleted file mode 100644
index dffb654..0000000
--- a/branch-1.3/lang/c++/impl/Zigzag.cc
+++ /dev/null
@@ -1,84 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#include "Zigzag.hh"
-
-namespace avro {
-
-uint64_t 
-encodeZigzag64(int64_t input)
-{
-    return ((input << 1) ^ (input >> 63));
-}
-
-int64_t 
-decodeZigzag64(uint64_t input)
-{
-    return ((input >> 1) ^ -(input & 1));
-}
-
-uint32_t 
-encodeZigzag32(int32_t input)
-{
-    return ((input << 1) ^ (input >> 31));
-}
-
-int32_t 
-decodeZigzag32(uint32_t input)
-{
-    return ((input >> 1) ^ -(input & 1));
-}
-
-size_t 
-encodeInt64(int64_t input, boost::array<uint8_t, 10> &output)
-{
-    // get the zigzag encoding 
-    uint64_t val = encodeZigzag64(input);
-
-    // put values in an array of bytes with variable length encoding
-    const int mask  = 0x7F;
-    output[0] = val & mask;
-    size_t bytesOut = 1;
-    while( val >>=7 ) {
-        output[bytesOut-1] |= 0x80;
-        output[bytesOut++] = (val & mask);
-    }
-
-    return bytesOut;
-}
-
-size_t 
-encodeInt32(int32_t input, boost::array<uint8_t, 5> &output)
-{
-    // get the zigzag encoding 
-    uint32_t val = encodeZigzag32(input);
-
-    // put values in an array of bytes with variable length encoding
-    const int mask  = 0x7F;
-    output[0] = val & mask;
-    size_t bytesOut = 1;
-    while( val >>=7 ) {
-        output[bytesOut-1] |= 0x80;
-        output[bytesOut++] = (val & mask);
-    }
-
-    return bytesOut;
-}
-
-} // namespace avro
diff --git a/branch-1.3/lang/c++/jsonschemas/array b/branch-1.3/lang/c++/jsonschemas/array
deleted file mode 100644
index e5c3761..0000000
--- a/branch-1.3/lang/c++/jsonschemas/array
+++ /dev/null
@@ -1 +0,0 @@
-{ "type" : "array", "items" : "int" , "name":"test" }
diff --git a/branch-1.3/lang/c++/jsonschemas/bigrecord b/branch-1.3/lang/c++/jsonschemas/bigrecord
deleted file mode 100644
index 815a32d..0000000
--- a/branch-1.3/lang/c++/jsonschemas/bigrecord
+++ /dev/null
@@ -1,100 +0,0 @@
-{
-    "type": "record",
-    "name": "RootRecord",
-    "fields": [
-        {
-            "name": "mylong",
-            "type": "long"
-        },
-        {
-            "name": "nestedrecord",
-            "type": {
-                "type": "record",
-                "name": "Nested",
-                "fields": [
-                    {
-                        "name": "inval1",
-                        "type": "double"
-                    },
-                    {
-                        "name": "inval2",
-                        "type": "string"
-                    },
-                    {
-                        "name": "inval3",
-                        "type": "int"
-                    }
-                ]
-            }
-        },
-        {
-            "name": "mymap",
-            "type": {
-                "type": "map",
-                "values": "int"
-            }
-        },
-        {
-            "name": "myarray",
-            "type": {
-                "type": "array",
-                "items": "double"
-            }
-        },
-        {
-            "name": "myenum",
-            "type": {
-                "type": "enum",
-                "name": "ExampleEnum",
-                "symbols": [
-                    "zero",
-                    "one",
-                    "two",
-                    "three"
-                ]
-            }
-        },
-        {
-            "name": "myunion",
-            "type": [
-                "null",
-                {
-                    "type": "map",
-                    "values": "int"
-                },
-                "float"
-            ]
-        },
-        {
-            "name": "anotherunion",
-            "type": [
-                "bytes",
-                "null"
-            ]
-        },
-        {
-            "name": "mybool",
-            "type": "boolean"
-        },
-        {
-            "name": "anothernested",
-            "type": "Nested"
-        },
-        {
-            "name": "myfixed",
-            "type": {
-                "type": "fixed",
-                "size": 16,
-                "name": "md5"
-            }
-        },
-        {
-            "name": "anotherint",
-            "type": "int"
-        },
-        {
-            "name": "bytes",
-            "type": "bytes"
-        }
-    ]
-}
diff --git a/branch-1.3/lang/c++/jsonschemas/bigrecord2 b/branch-1.3/lang/c++/jsonschemas/bigrecord2
deleted file mode 100644
index 73b9a99..0000000
--- a/branch-1.3/lang/c++/jsonschemas/bigrecord2
+++ /dev/null
@@ -1,100 +0,0 @@
-{
-    "type": "record",
-    "name": "RootRecord",
-    "fields": [
-        {
-            "name": "mylong",
-            "type": "double"
-        },
-        {
-            "name": "anotherint",
-            "type": "int"
-        },
-        {
-            "name": "bytes",
-            "type": "bytes"
-        },
-        {
-            "name": "nestedrecord",
-            "type": {
-                "type": "record",
-                "name": "Nested",
-                "fields": [
-                    {
-                        "name": "inval3",
-                        "type": "int"
-                    },
-                    {
-                        "name": "inval2",
-                        "type": "string"
-                    },
-                    {
-                        "name": "inval1",
-                        "type": "double"
-                    }
-                ]
-            }
-        },
-        {
-            "name": "mymap",
-            "type": {
-                "type": "map",
-                "values": "long"
-            }
-        },
-        {
-            "name": "myarray",
-            "type": {
-                "type": "array",
-                "items": "double"
-            }
-        },
-        {
-            "name": "myenum",
-            "type": {
-                "type": "enum",
-                "name": "ExampleEnum",
-                "symbols": [
-                    "three",
-                    "two",
-                    "one",
-                    "zero"
-                ]
-            }
-        },
-        {
-            "name": "myunion",
-            "type": [
-                "null",
-                "float",
-                {
-                    "type": "map",
-                    "values": "float"
-                }
-            ]
-        },
-        {
-            "name": "anotherunion",
-            "type": "bytes"
-        },
-        {
-            "name": "anothernested",
-            "type": "Nested"
-        },
-        {
-            "name": "newbool",
-            "type": "boolean"
-        },
-        {
-            "name": "myfixed",
-            "type": [
-                "float",
-                {
-                 "type": "fixed",
-                 "size": 16,
-                 "name": "md5"
-                }
-            ]
-        }
-    ]
-}
diff --git a/branch-1.3/lang/c++/jsonschemas/enum b/branch-1.3/lang/c++/jsonschemas/enum
deleted file mode 100644
index d979c86..0000000
--- a/branch-1.3/lang/c++/jsonschemas/enum
+++ /dev/null
@@ -1,10 +0,0 @@
-        {
-            "type": "enum",
-            "symbols": [
-                "zero",
-                "int",
-                "two",
-                "three"
-            ],
-            "name": "myenum"
-        }
diff --git a/branch-1.3/lang/c++/jsonschemas/fixed b/branch-1.3/lang/c++/jsonschemas/fixed
deleted file mode 100644
index 2fe4cbc..0000000
--- a/branch-1.3/lang/c++/jsonschemas/fixed
+++ /dev/null
@@ -1 +0,0 @@
-{"type": "fixed", "size" : 16, "name":"md5" }
diff --git a/branch-1.3/lang/c++/jsonschemas/int b/branch-1.3/lang/c++/jsonschemas/int
deleted file mode 100644
index a21274f..0000000
--- a/branch-1.3/lang/c++/jsonschemas/int
+++ /dev/null
@@ -1 +0,0 @@
-"int"
diff --git a/branch-1.3/lang/c++/jsonschemas/map b/branch-1.3/lang/c++/jsonschemas/map
deleted file mode 100644
index 6c84e95..0000000
--- a/branch-1.3/lang/c++/jsonschemas/map
+++ /dev/null
@@ -1,5 +0,0 @@
-{
-    "type": "map",
-    "values": {"type":"int"},
-    "name": "noname"
-}
diff --git a/branch-1.3/lang/c++/jsonschemas/nested b/branch-1.3/lang/c++/jsonschemas/nested
deleted file mode 100644
index 5aa6a75..0000000
--- a/branch-1.3/lang/c++/jsonschemas/nested
+++ /dev/null
@@ -1,17 +0,0 @@
-{
-    "type": "record",
-    "name": "LongList",
-    "fields": [
-        {
-            "name": "value",
-            "type": "long"
-        },
-        {
-            "name": "next",
-            "type": [
-                "LongList",
-                "null"
-            ]
-        }
-    ]
-}
diff --git a/branch-1.3/lang/c++/jsonschemas/nested.error b/branch-1.3/lang/c++/jsonschemas/nested.error
deleted file mode 100644
index 5c82308..0000000
--- a/branch-1.3/lang/c++/jsonschemas/nested.error
+++ /dev/null
@@ -1,17 +0,0 @@
-{
-    "type": "record",
-    "name": "LongList",
-    "fields": [
-        {
-            "name": "value",
-            "type": "long"
-        },
-        {
-            "name": "next",
-            "type": [
-                "LongLister",
-                "null"
-            ]
-        }
-    ]
-}
diff --git a/branch-1.3/lang/c++/jsonschemas/recinrec b/branch-1.3/lang/c++/jsonschemas/recinrec
deleted file mode 100644
index 84b673e..0000000
--- a/branch-1.3/lang/c++/jsonschemas/recinrec
+++ /dev/null
@@ -1,18 +0,0 @@
-{
-  "type": "record", 
-  "name": "Rec1",
-  "fields" : [
-    {"name": "val1", "type": "long"},           
-    {"name": "val2", "type": {
-          "type": "record", 
-          "name": "Rec2",
-          "fields" : [
-            {"name": "inval1", "type": "double"},           
-            {"name": "inval2", "type": "int" }
-          ]
-      }
-    },
-    {"name": "val3", "type": "float"}
-  ]
-}
-
diff --git a/branch-1.3/lang/c++/jsonschemas/record b/branch-1.3/lang/c++/jsonschemas/record
deleted file mode 100644
index f2be606..0000000
--- a/branch-1.3/lang/c++/jsonschemas/record
+++ /dev/null
@@ -1,9 +0,0 @@
-{
-  "type": "record", 
-  "name": "LongList",
-  "fields" : [
-    {"name": "value", "type": "long"},           
-    {"type": "int", "name": "next", "metadata" : "two"}
-  ]
-}
-
diff --git a/branch-1.3/lang/c++/jsonschemas/record2 b/branch-1.3/lang/c++/jsonschemas/record2
deleted file mode 100644
index 788478f..0000000
--- a/branch-1.3/lang/c++/jsonschemas/record2
+++ /dev/null
@@ -1,10 +0,0 @@
-{
-  "type": "record", 
-  "name": "LongList",
-  "fields" : [
-    {"name": "value", "type": "long"},           
-    {"name": "next", "type": ["int", "float"] },
-    {"name": "hello", "type": {"type" : "array" , "items" :"float"}}
-  ]
-}
-
diff --git a/branch-1.3/lang/c++/jsonschemas/union b/branch-1.3/lang/c++/jsonschemas/union
deleted file mode 100644
index ebf26d0..0000000
--- a/branch-1.3/lang/c++/jsonschemas/union
+++ /dev/null
@@ -1 +0,0 @@
-[ "int" , "long" , "float" ]
diff --git a/branch-1.3/lang/c++/jsonschemas/unionwithmap b/branch-1.3/lang/c++/jsonschemas/unionwithmap
deleted file mode 100644
index 33e0c6d..0000000
--- a/branch-1.3/lang/c++/jsonschemas/unionwithmap
+++ /dev/null
@@ -1 +0,0 @@
-[ "int" , "long" , {"type":"map", "values":[ "int", "long" ] } ]
diff --git a/branch-1.3/lang/c++/jsonschemas/verboseint b/branch-1.3/lang/c++/jsonschemas/verboseint
deleted file mode 100644
index 5dccae9..0000000
--- a/branch-1.3/lang/c++/jsonschemas/verboseint
+++ /dev/null
@@ -1 +0,0 @@
-{ "type": "int", "metadata1" : "ju\"nk", metadata2: 123, metadata3 : {"ju{nk"}, "name":"hello", metadata4: ["he]ll}o"]}
diff --git a/branch-1.3/lang/c++/m4/README b/branch-1.3/lang/c++/m4/README
deleted file mode 100644
index 7129fcb..0000000
--- a/branch-1.3/lang/c++/m4/README
+++ /dev/null
@@ -1,3 +0,0 @@
-The macros in this directory came from http://www.nongnu.org/autoconf-archive/index.html
-
-Please refer to the files for their licensing info.
diff --git a/branch-1.3/lang/c++/m4/m4_ax_boost_base.m4 b/branch-1.3/lang/c++/m4/m4_ax_boost_base.m4
deleted file mode 100644
index b2a00b8..0000000
--- a/branch-1.3/lang/c++/m4/m4_ax_boost_base.m4
+++ /dev/null
@@ -1,219 +0,0 @@
-# ===========================================================================
-#          http://www.nongnu.org/autoconf-archive/ax_boost_base.html
-# ===========================================================================
-#
-# SYNOPSIS
-#
-#   AX_BOOST_BASE([MINIMUM-VERSION])
-#
-# DESCRIPTION
-#
-#   Test for the Boost C++ libraries of a particular version (or newer)
-#
-#   If no path to the installed boost library is given the macro searchs
-#   under /usr, /usr/local, /opt and /opt/local and evaluates the
-#   $BOOST_ROOT environment variable. Further documentation is available at
-#   <http://randspringer.de/boost/index.html>.
-#
-#   This macro calls:
-#
-#     AC_SUBST(BOOST_CPPFLAGS) / AC_SUBST(BOOST_LDFLAGS)
-#
-#   And sets:
-#
-#     HAVE_BOOST
-#
-# LICENSE
-#
-#   Copyright (c) 2008 Thomas Porschberg <thomas@randspringer.de>
-#
-#   Copying and distribution of this file, with or without modification, are
-#   permitted in any medium without royalty provided the copyright notice
-#   and this notice are preserved.
-
-AC_DEFUN([AX_BOOST_BASE],
-[
-AC_ARG_WITH([boost],
-	AS_HELP_STRING([--with-boost@<:@=DIR@:>@], [use boost (default is yes) - it is possible to specify the root directory for boost (optional)]),
-	[
-    if test "$withval" = "no"; then
-		want_boost="no"
-    elif test "$withval" = "yes"; then
-        want_boost="yes"
-        ac_boost_path=""
-    else
-	    want_boost="yes"
-        ac_boost_path="$withval"
-	fi
-    ],
-    [want_boost="yes"])
-
-
-AC_ARG_WITH([boost-libdir],
-        AS_HELP_STRING([--with-boost-libdir=LIB_DIR],
-        [Force given directory for boost libraries. Note that this will overwrite library path detection, so use this parameter only if default library detection fails and you know exactly where your boost libraries are located.]),
-        [
-        if test -d $withval
-        then
-                ac_boost_lib_path="$withval"
-        else
-                AC_MSG_ERROR(--with-boost-libdir expected directory name)
-        fi
-        ],
-        [ac_boost_lib_path=""]
-)
-
-if test "x$want_boost" = "xyes"; then
-	boost_lib_version_req=ifelse([$1], ,1.20.0,$1)
-	boost_lib_version_req_shorten=`expr $boost_lib_version_req : '\([[0-9]]*\.[[0-9]]*\)'`
-	boost_lib_version_req_major=`expr $boost_lib_version_req : '\([[0-9]]*\)'`
-	boost_lib_version_req_minor=`expr $boost_lib_version_req : '[[0-9]]*\.\([[0-9]]*\)'`
-	boost_lib_version_req_sub_minor=`expr $boost_lib_version_req : '[[0-9]]*\.[[0-9]]*\.\([[0-9]]*\)'`
-	if test "x$boost_lib_version_req_sub_minor" = "x" ; then
-		boost_lib_version_req_sub_minor="0"
-    	fi
-	WANT_BOOST_VERSION=`expr $boost_lib_version_req_major \* 100000 \+  $boost_lib_version_req_minor \* 100 \+ $boost_lib_version_req_sub_minor`
-	AC_MSG_CHECKING(for boostlib >= $boost_lib_version_req)
-	succeeded=no
-
-	dnl first we check the system location for boost libraries
-	dnl this location ist chosen if boost libraries are installed with the --layout=system option
-	dnl or if you install boost with RPM
-	if test "$ac_boost_path" != ""; then
-		BOOST_LDFLAGS="-L$ac_boost_path/lib"
-		BOOST_CPPFLAGS="-I$ac_boost_path/include"
-	else
-		for ac_boost_path_tmp in /usr /usr/local /opt /opt/local ; do
-			if test -d "$ac_boost_path_tmp/include/boost" && test -r "$ac_boost_path_tmp/include/boost"; then
-				BOOST_LDFLAGS="-L$ac_boost_path_tmp/lib"
-				BOOST_CPPFLAGS="-I$ac_boost_path_tmp/include"
-				break;
-			fi
-		done
-	fi
-
-    dnl overwrite ld flags if we have required special directory with
-    dnl --with-boost-libdir parameter
-    if test "$ac_boost_lib_path" != ""; then
-       BOOST_LDFLAGS="-L$ac_boost_lib_path"
-    fi
-
-	CPPFLAGS_SAVED="$CPPFLAGS"
-	CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS"
-	export CPPFLAGS
-
-	LDFLAGS_SAVED="$LDFLAGS"
-	LDFLAGS="$LDFLAGS $BOOST_LDFLAGS"
-	export LDFLAGS
-
-	AC_LANG_PUSH(C++)
-     	AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[
-	@%:@include <boost/version.hpp>
-	]], [[
-	#if BOOST_VERSION >= $WANT_BOOST_VERSION
-	// Everything is okay
-	#else
-	#  error Boost version is too old
-	#endif
-	]])],[
-        AC_MSG_RESULT(yes)
-	succeeded=yes
-	found_system=yes
-       	],[
-       	])
-	AC_LANG_POP([C++])
-
-
-
-	dnl if we found no boost with system layout we search for boost libraries
-	dnl built and installed without the --layout=system option or for a staged(not installed) version
-	if test "x$succeeded" != "xyes"; then
-		_version=0
-		if test "$ac_boost_path" != ""; then
-			if test -d "$ac_boost_path" && test -r "$ac_boost_path"; then
-				for i in `ls -d $ac_boost_path/include/boost-* 2>/dev/null`; do
-					_version_tmp=`echo $i | sed "s#$ac_boost_path##" | sed 's/\/include\/boost-//' | sed 's/_/./'`
-					V_CHECK=`expr $_version_tmp \> $_version`
-					if test "$V_CHECK" = "1" ; then
-						_version=$_version_tmp
-					fi
-					VERSION_UNDERSCORE=`echo $_version | sed 's/\./_/'`
-					BOOST_CPPFLAGS="-I$ac_boost_path/include/boost-$VERSION_UNDERSCORE"
-				done
-			fi
-		else
-			for ac_boost_path in /usr /usr/local /opt /opt/local ; do
-				if test -d "$ac_boost_path" && test -r "$ac_boost_path"; then
-					for i in `ls -d $ac_boost_path/include/boost-* 2>/dev/null`; do
-						_version_tmp=`echo $i | sed "s#$ac_boost_path##" | sed 's/\/include\/boost-//' | sed 's/_/./'`
-						V_CHECK=`expr $_version_tmp \> $_version`
-						if test "$V_CHECK" = "1" ; then
-							_version=$_version_tmp
-	               					best_path=$ac_boost_path
-						fi
-					done
-				fi
-			done
-
-			VERSION_UNDERSCORE=`echo $_version | sed 's/\./_/'`
-			BOOST_CPPFLAGS="-I$best_path/include/boost-$VERSION_UNDERSCORE"
-            if test "$ac_boost_lib_path" = ""
-            then
-               BOOST_LDFLAGS="-L$best_path/lib"
-            fi
-
-	    		if test "x$BOOST_ROOT" != "x"; then
-				if test -d "$BOOST_ROOT" && test -r "$BOOST_ROOT" && test -d "$BOOST_ROOT/stage/lib" && test -r "$BOOST_ROOT/stage/lib"; then
-					version_dir=`expr //$BOOST_ROOT : '.*/\(.*\)'`
-					stage_version=`echo $version_dir | sed 's/boost_//' | sed 's/_/./g'`
-			        	stage_version_shorten=`expr $stage_version : '\([[0-9]]*\.[[0-9]]*\)'`
-					V_CHECK=`expr $stage_version_shorten \>\= $_version`
-                    if test "$V_CHECK" = "1" -a "$ac_boost_lib_path" = "" ; then
-						AC_MSG_NOTICE(We will use a staged boost library from $BOOST_ROOT)
-						BOOST_CPPFLAGS="-I$BOOST_ROOT"
-						BOOST_LDFLAGS="-L$BOOST_ROOT/stage/lib"
-					fi
-				fi
-	    		fi
-		fi
-
-		CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS"
-		export CPPFLAGS
-		LDFLAGS="$LDFLAGS $BOOST_LDFLAGS"
-		export LDFLAGS
-
-		AC_LANG_PUSH(C++)
-	     	AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[
-		@%:@include <boost/version.hpp>
-		]], [[
-		#if BOOST_VERSION >= $WANT_BOOST_VERSION
-		// Everything is okay
-		#else
-		#  error Boost version is too old
-		#endif
-		]])],[
-        	AC_MSG_RESULT(yes)
-		succeeded=yes
-		found_system=yes
-       		],[
-	       	])
-		AC_LANG_POP([C++])
-	fi
-
-	if test "$succeeded" != "yes" ; then
-		if test "$_version" = "0" ; then
-			AC_MSG_ERROR([[We could not detect the boost libraries (version $boost_lib_version_req_shorten or higher). If you have a staged boost library (still not installed) please specify \$BOOST_ROOT in your environment and do not give a PATH to --with-boost option.  If you are sure you have boost installed, then check your version number looking in <boost/version.hpp>. See http://randspringer.de/boost for more documentation.]])
-		else
-			AC_MSG_NOTICE([Your boost libraries seems to old (version $_version).])
-		fi
-	else
-		AC_SUBST(BOOST_CPPFLAGS)
-		AC_SUBST(BOOST_LDFLAGS)
-		AC_DEFINE(HAVE_BOOST,,[define if the Boost library is available])
-	fi
-
-        CPPFLAGS="$CPPFLAGS_SAVED"
-       	LDFLAGS="$LDFLAGS_SAVED"
-fi
-
-])
diff --git a/branch-1.3/lang/c++/m4/m4_ax_boost_regex.m4 b/branch-1.3/lang/c++/m4/m4_ax_boost_regex.m4
deleted file mode 100644
index faebd13..0000000
--- a/branch-1.3/lang/c++/m4/m4_ax_boost_regex.m4
+++ /dev/null
@@ -1,105 +0,0 @@
-# ===========================================================================
-#         http://www.nongnu.org/autoconf-archive/ax_boost_regex.html
-# ===========================================================================
-#
-# SYNOPSIS
-#
-#   AX_BOOST_REGEX
-#
-# DESCRIPTION
-#
-#   Test for Regex library from the Boost C++ libraries. The macro requires
-#   a preceding call to AX_BOOST_BASE. Further documentation is available at
-#   <http://randspringer.de/boost/index.html>.
-#
-#   This macro calls:
-#
-#     AC_SUBST(BOOST_REGEX_LIB)
-#
-#   And sets:
-#
-#     HAVE_BOOST_REGEX
-#
-# LICENSE
-#
-#   Copyright (c) 2008 Thomas Porschberg <thomas@randspringer.de>
-#   Copyright (c) 2008 Michael Tindal
-#
-#   Copying and distribution of this file, with or without modification, are
-#   permitted in any medium without royalty provided the copyright notice
-#   and this notice are preserved.
-
-AC_DEFUN([AX_BOOST_REGEX],
-[
-	AC_ARG_WITH([boost-regex],
-	AS_HELP_STRING([--with-boost-regex@<:@=special-lib@:>@],
-                   [use the Regex library from boost - it is possible to specify a certain library for the linker
-                        e.g. --with-boost-regex=boost_regex-gcc-mt-d-1_33_1 ]),
-        [
-        if test "$withval" = "no"; then
-			want_boost="no"
-        elif test "$withval" = "yes"; then
-            want_boost="yes"
-            ax_boost_user_regex_lib=""
-        else
-		    want_boost="yes"
-        	ax_boost_user_regex_lib="$withval"
-		fi
-        ],
-        [want_boost="yes"]
-	)
-
-	if test "x$want_boost" = "xyes"; then
-        AC_REQUIRE([AC_PROG_CC])
-		CPPFLAGS_SAVED="$CPPFLAGS"
-		CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS"
-		export CPPFLAGS
-
-		LDFLAGS_SAVED="$LDFLAGS"
-		LDFLAGS="$LDFLAGS $BOOST_LDFLAGS"
-		export LDFLAGS
-
-        AC_CACHE_CHECK(whether the Boost::Regex library is available,
-					   ax_cv_boost_regex,
-        [AC_LANG_PUSH([C++])
-			 AC_COMPILE_IFELSE(AC_LANG_PROGRAM([[@%:@include <boost/regex.hpp>
-												]],
-                                   [[boost::regex r(); return 0;]]),
-                   ax_cv_boost_regex=yes, ax_cv_boost_regex=no)
-         AC_LANG_POP([C++])
-		])
-		if test "x$ax_cv_boost_regex" = "xyes"; then
-			AC_DEFINE(HAVE_BOOST_REGEX,,[define if the Boost::Regex library is available])
-            BOOSTLIBDIR=`echo $BOOST_LDFLAGS | sed -e 's/@<:@^\/@:>@*//'`
-            if test "x$ax_boost_user_regex_lib" = "x"; then
-                for libextension in `ls $BOOSTLIBDIR/libboost_regex*.{so,a}* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^lib\(boost_regex.*\)\.so.*$;\1;' -e 's;^lib\(boost_regex.*\)\.a*$;\1;'` ; do
-                     ax_lib=${libextension}
-				    AC_CHECK_LIB($ax_lib, exit,
-                                 [BOOST_REGEX_LIB="-l$ax_lib"; AC_SUBST(BOOST_REGEX_LIB) link_regex="yes"; break],
-                                 [link_regex="no"])
-  				done
-                if test "x$link_regex" != "xyes"; then
-                for libextension in `ls $BOOSTLIBDIR/boost_regex*.{dll,a}* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^\(boost_regex.*\)\.dll.*$;\1;' -e 's;^\(boost_regex.*\)\.a*$;\1;'` ; do
-                     ax_lib=${libextension}
-				    AC_CHECK_LIB($ax_lib, exit,
-                                 [BOOST_REGEX_LIB="-l$ax_lib"; AC_SUBST(BOOST_REGEX_LIB) link_regex="yes"; break],
-                                 [link_regex="no"])
-  				done
-                fi
-
-            else
-               for ax_lib in $ax_boost_user_regex_lib boost_regex-$ax_boost_user_regex_lib; do
-				      AC_CHECK_LIB($ax_lib, main,
-                                   [BOOST_REGEX_LIB="-l$ax_lib"; AC_SUBST(BOOST_REGEX_LIB) link_regex="yes"; break],
-                                   [link_regex="no"])
-               done
-            fi
-			if test "x$link_regex" != "xyes"; then
-				AC_MSG_ERROR(Could not link against $ax_lib !)
-			fi
-		fi
-
-		CPPFLAGS="$CPPFLAGS_SAVED"
-    	LDFLAGS="$LDFLAGS_SAVED"
-	fi
-])
diff --git a/branch-1.3/lang/c++/parser/AvroLex.ll b/branch-1.3/lang/c++/parser/AvroLex.ll
deleted file mode 100644
index 63207aa..0000000
--- a/branch-1.3/lang/c++/parser/AvroLex.ll
+++ /dev/null
@@ -1,203 +0,0 @@
-%{
-/*
- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements.  See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership.  The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License.  You may obtain a copy of the License at
- 
- http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-// some systems do not provide an EOF definition 
-#ifndef EOF
-#define EOF (-1)
-#endif
-
-#include "AvroYacc.h"
-
-// this undef is a hack for my mac implementation
-#undef yyFlexLexer
-#include "Compiler.hh"
-
-#define YY_STACK_USED 1
-
-using std::cin;
-using std::cout;
-using std::cerr;
-
-%}
-
-%option c++
-%option noyywrap
-
-%{
-
-int yylex(int *val, void *ctx)
-{
-    avro::CompilerContext *c = static_cast<avro::CompilerContext *>(ctx);
-    int ret = c->lexer().yylex();
-    if( ret > AVRO_LEX_OUTPUT_TEXT_BEGIN && ret < AVRO_LEX_OUTPUT_TEXT_END ) { 
-        c->setText( c->lexer().YYText()) ;
-    }
-    return ret;
-}
-
-%}
-
-%x READTYPE
-%x STARTTYPE
-%x STARTSCHEMA
-%x READNAME
-%x READFIELD
-%x READFIELDS
-%x READFIELDNAME
-%x READSYMBOLS
-%x READSYMBOL
-%x READSIZE
-%x INUNION
-%x INOBJECT
-%x READMETADATA
-%x SKIPJSONSTRING
-%x SKIPJSONARRAY
-%x SKIPJSONOBJECT
-
-ws [ \t\r\n]
-nonws [^ \t\r\n]
-delim {ws}*:{ws}*
-avrotext [a-zA-Z_][a-zA-Z0-9_]*
-startunion \[
-startobject \{
-integer [0-9]+
-anytext .*
-
-%%
-<READTYPE>int                   return AVRO_LEX_INT;
-<READTYPE>long                  return AVRO_LEX_LONG;
-<READTYPE>null                  return AVRO_LEX_NULL;
-<READTYPE>boolean               return AVRO_LEX_BOOL;
-<READTYPE>float                 return AVRO_LEX_FLOAT;
-<READTYPE>double                return AVRO_LEX_DOUBLE;
-<READTYPE>string                return AVRO_LEX_STRING;
-<READTYPE>bytes                 return AVRO_LEX_BYTES;
-<READTYPE>record                return AVRO_LEX_RECORD;
-<READTYPE>enum                  return AVRO_LEX_ENUM;
-<READTYPE>map                   return AVRO_LEX_MAP;
-<READTYPE>array                 return AVRO_LEX_ARRAY;
-<READTYPE>fixed                 return AVRO_LEX_FIXED;
-<READTYPE>{avrotext}            return AVRO_LEX_NAMED_TYPE;
-<READTYPE>\"                    yy_pop_state(); 
-
-<READNAME>{avrotext}            return AVRO_LEX_NAME;
-<READNAME>\"                    yy_pop_state();
-
-<READSYMBOL>{avrotext}          return AVRO_LEX_SYMBOL;
-<READSYMBOL>\"                  yy_pop_state();
-
-<READFIELDNAME>{avrotext}       return AVRO_LEX_FIELD_NAME;
-<READFIELDNAME>\"               yy_pop_state();
-
-<READFIELD>\"type\"{delim}      yy_push_state(STARTSCHEMA); 
-<READFIELD>\"name\"{delim}\"    yy_push_state(READFIELDNAME); 
-<READFIELD>\}                   yy_pop_state(); return AVRO_LEX_FIELD_END;
-<READFIELD>,                    return yytext[0];
-<READFIELD>{nonws}+{delim}      yy_push_state(READMETADATA); return AVRO_LEX_METADATA;
-<READFIELD>{ws}                 ;
-
-<READFIELDS>\{                  yy_push_state(READFIELD); return AVRO_LEX_FIELD;
-<READFIELDS>\]                  yy_pop_state(); return AVRO_LEX_FIELDS_END;
-<READFIELDS>,                   return yytext[0];
-<READFIELDS>{ws}                ;
-
-<READSYMBOLS>\"                 yy_push_state(READSYMBOL); 
-<READSYMBOLS>,                  return yytext[0];
-<READSYMBOLS>\]                 yy_pop_state(); return AVRO_LEX_SYMBOLS_END;
-<READSYMBOLS>{ws}               ;
-
-<READSIZE>{integer}             yy_pop_state(); return AVRO_LEX_SIZE;
-
-<INUNION>\"                     yy_push_state(READTYPE); return AVRO_LEX_SIMPLE_TYPE;
-<INUNION>{startobject}          yy_push_state(INOBJECT); return yytext[0];
-<INUNION>\]                     yy_pop_state(); return yytext[0];
-<INUNION>,                      return yytext[0];
-<INUNION>{ws}                   ;
-
-<SKIPJSONSTRING>\"              yy_pop_state();
-<SKIPJSONSTRING>\\.             ;
-<SKIPJSONSTRING>[^\"\\]+        ;
-
-<SKIPJSONOBJECT>\}              yy_pop_state();
-<SKIPJSONOBJECT>\{              yy_push_state(SKIPJSONOBJECT);
-<SKIPJSONOBJECT>\"              yy_push_state(SKIPJSONSTRING);
-<SKIPJSONOBJECT>[^\{\}\"]+      ;
-
-<SKIPJSONARRAY>\]               yy_pop_state();
-<SKIPJSONARRAY>\[               yy_push_state(SKIPJSONARRAY);
-<SKIPJSONARRAY>\"               yy_push_state(SKIPJSONSTRING);
-<SKIPJSONARRAY>[^\[\]\"]+       ;  
-
-<READMETADATA>\"                yy_pop_state(); yy_push_state(SKIPJSONSTRING);
-<READMETADATA>\{                yy_pop_state(); yy_push_state(SKIPJSONOBJECT);
-<READMETADATA>\[                yy_pop_state(); yy_push_state(SKIPJSONARRAY);
-<READMETADATA>[^\"\{\[,\}]+     yy_pop_state();
-
-<INOBJECT>\"type\"{delim}       yy_push_state(STARTTYPE); return AVRO_LEX_TYPE;
-<INOBJECT>\"name\"{delim}\"     yy_push_state(READNAME); 
-<INOBJECT>\"size\"{delim}       yy_push_state(READSIZE);
-<INOBJECT>\"items\"{delim}      yy_push_state(STARTSCHEMA); return AVRO_LEX_ITEMS;
-<INOBJECT>\"values\"{delim}     yy_push_state(STARTSCHEMA); return AVRO_LEX_VALUES;
-<INOBJECT>\"fields\"{delim}\[   yy_push_state(READFIELDS); return AVRO_LEX_FIELDS; 
-<INOBJECT>\"symbols\"{delim}\[  yy_push_state(READSYMBOLS); return AVRO_LEX_SYMBOLS;
-<INOBJECT>,                     return yytext[0];
-<INOBJECT>\}                    yy_pop_state(); return yytext[0];
-<INOBJECT>{nonws}+{delim}       yy_push_state(READMETADATA); return AVRO_LEX_METADATA;
-<INOBJECT>{ws}                  ;
-
-<STARTTYPE>\"                   yy_pop_state(); yy_push_state(READTYPE); 
-<STARTTYPE>{startunion}         yy_pop_state(); yy_push_state(INUNION); return yytext[0];
-<STARTTYPE>{startobject}        yy_pop_state(); yy_push_state(INOBJECT); return yytext[0];
-
-<STARTSCHEMA>\"                 yy_pop_state(); yy_push_state(READTYPE); return AVRO_LEX_SIMPLE_TYPE;
-<STARTSCHEMA>{startunion}       yy_pop_state(); yy_push_state(INUNION); return yytext[0];
-<STARTSCHEMA>{startobject}      yy_pop_state(); yy_push_state(INOBJECT); return yytext[0];
-
-{startobject}                   yy_push_state(INOBJECT); return yytext[0];
-{startunion}                    yy_push_state(INUNION); return yytext[0];
-\"                              yy_push_state(READTYPE); return AVRO_LEX_SIMPLE_TYPE;
-{ws}                            ;
-<<EOF>>                         {
-#if !YY_FLEX_SUBMINOR_VERSION || YY_FLEX_SUBMINOR_VERSION < 27
-// Versions of flex before 3.5.27 do not free their stack when done, so free it explicitly.
-// Note that those earlier versions did not actually define a subminor macro.
-                                    if (yy_start_stack) {
-                                        yy_flex_free(yy_start_stack);
-                                        yy_start_stack = 0;
-                                    }
-#endif
-#if YY_FLEX_SUBMINOR_VERSION > 35
-// At this time, 3.5.35 is the latest version.
-#warning "Warning:  untested version of flex"
-#endif
-#if YY_FLEX_SUBMINOR_VERSION >= 31 && YY_FLEX_SUBMINOR_VERSION < 34
-// Versions of flex starting with 3.5.31 do not free yy_buffer_stack, so do so 
-// explicitly (yy_delete_buffer must be called first to free the pointers stored on the stack; it is
-// then safe to free the stack itself).  This was fixed in 3.5.34.
-                                    if(yy_buffer_stack) {
-                                        yy_delete_buffer(YY_CURRENT_BUFFER);
-                                        yyfree(yy_buffer_stack);
-                                        yy_buffer_stack = 0;
-                                    }
-#endif
-                                    yyterminate();
-                                }
-
-%%
-
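The deleted lexer above tracks nesting with a stack of flex start conditions: the `SKIPJSON*` states push a state per opening `"`/`{`/`[` via `yy_push_state` and pop at the matching closer, so arbitrary JSON metadata can be skipped without parsing it. A minimal Python sketch of that same state-stack idea (not part of the Avro sources; the function name is illustrative):

```python
def skip_json_value(text, pos):
    """Return the index just past the JSON string/object/array at text[pos].

    Mirrors the SKIPJSONSTRING/SKIPJSONOBJECT/SKIPJSONARRAY start conditions:
    each opener pushes the closer we are waiting for, each closer pops it.
    """
    opener = text[pos]
    if opener == '"':                      # SKIPJSONSTRING
        stack = ['"']
        pos += 1
        while stack:
            c = text[pos]
            if c == '\\':                  # escaped character: consume both
                pos += 2
                continue
            if c == '"':
                stack.pop()
            pos += 1
        return pos
    if opener in '{[':                     # SKIPJSONOBJECT / SKIPJSONARRAY
        closers = {'{': '}', '[': ']'}
        stack = [closers[opener]]
        pos += 1
        while stack:
            c = text[pos]
            if c == '"':                   # strings may contain { } [ ]
                pos = skip_json_value(text, pos)
                continue
            if c in closers:
                stack.append(closers[c])   # nested object/array: push
            elif c == stack[-1]:
                stack.pop()                # matching closer: pop
            pos += 1
        return pos
    raise ValueError("not a JSON string/object/array at pos %d" % pos)
```

As in the flex rules, strings are scanned first so that brackets inside them (e.g. `"x]"`) never disturb the bracket stack.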
diff --git a/branch-1.3/lang/c++/parser/AvroYacc.yy b/branch-1.3/lang/c++/parser/AvroYacc.yy
deleted file mode 100644
index 20bd629..0000000
--- a/branch-1.3/lang/c++/parser/AvroYacc.yy
+++ /dev/null
@@ -1,200 +0,0 @@
-%{
-/*
- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements.  See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership.  The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License.  You may obtain a copy of the License at
- 
- http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-#include <boost/format.hpp>
-#include "Compiler.hh"
-#include "Exception.hh"
-
-#define YYLEX_PARAM ctx
-#define YYPARSE_PARAM ctx
-
-void yyerror(const char *str)
-{
-    throw avro::Exception(boost::format("Parser error: %1%") % str);
-}
- 
-extern void *lexer; 
-extern int yylex(int *, void *);
-  
-avro::CompilerContext &context(void *ctx) { 
-    return *static_cast<avro::CompilerContext *>(ctx);
-};
-  
-%}
-
-%pure-parser
-%error-verbose
-
-%token AVRO_LEX_INT AVRO_LEX_LONG 
-%token AVRO_LEX_FLOAT AVRO_LEX_DOUBLE
-%token AVRO_LEX_BOOL AVRO_LEX_NULL 
-%token AVRO_LEX_BYTES AVRO_LEX_STRING 
-%token AVRO_LEX_RECORD AVRO_LEX_ENUM AVRO_LEX_ARRAY AVRO_LEX_MAP AVRO_LEX_UNION AVRO_LEX_FIXED
-
-%token AVRO_LEX_METADATA
-
-%token AVRO_LEX_SYMBOLS AVRO_LEX_SYMBOLS_END
-%token AVRO_LEX_FIELDS AVRO_LEX_FIELDS_END AVRO_LEX_FIELD AVRO_LEX_FIELD_END
-
-%token AVRO_LEX_TYPE AVRO_LEX_ITEMS AVRO_LEX_VALUES 
-
-// Tokens that output text:
-%token AVRO_LEX_OUTPUT_TEXT_BEGIN
-%token AVRO_LEX_NAME
-%token AVRO_LEX_NAMED_TYPE
-%token AVRO_LEX_FIELD_NAME
-%token AVRO_LEX_SYMBOL
-%token AVRO_LEX_SIZE
-%token AVRO_LEX_OUTPUT_TEXT_END
-
-%token AVRO_LEX_SIMPLE_TYPE
-
-%%
-
-avroschema: 
-        simpleprimitive | object | union_t
-        ;
- 
-primitive:
-        AVRO_LEX_INT    { context(ctx).addType(avro::AVRO_INT); }
-        |
-        AVRO_LEX_LONG   { context(ctx).addType(avro::AVRO_LONG); }
-        |
-        AVRO_LEX_FLOAT  { context(ctx).addType(avro::AVRO_FLOAT); }
-        |
-        AVRO_LEX_DOUBLE { context(ctx).addType(avro::AVRO_DOUBLE); }
-        |
-        AVRO_LEX_BOOL   { context(ctx).addType(avro::AVRO_BOOL); }
-        |
-        AVRO_LEX_NULL   { context(ctx).addType(avro::AVRO_NULL); }
-        |
-        AVRO_LEX_BYTES  { context(ctx).addType(avro::AVRO_BYTES); }
-        |
-        AVRO_LEX_STRING { context(ctx).addType(avro::AVRO_STRING); }
-        |
-        AVRO_LEX_NAMED_TYPE { context(ctx).addNamedType(); }
-        ;
-
-simpleprimitive:
-        AVRO_LEX_SIMPLE_TYPE { context(ctx).startType(); } primitive { context(ctx).stopType(); }
-        ;
-
-primitive_t:
-        AVRO_LEX_TYPE primitive
-        ;
-
-array_t:
-        AVRO_LEX_TYPE AVRO_LEX_ARRAY { context(ctx).addType(avro::AVRO_ARRAY); }
-        ;
-
-enum_t: 
-        AVRO_LEX_TYPE AVRO_LEX_ENUM { context(ctx).addType(avro::AVRO_ENUM); }
-        ;
-
-fixed_t:
-        AVRO_LEX_TYPE AVRO_LEX_FIXED { context(ctx).addType(avro::AVRO_FIXED); }
-        ;
-
-map_t: 
-        AVRO_LEX_TYPE AVRO_LEX_MAP { context(ctx).addType(avro::AVRO_MAP); }
-        ;
-
-record_t: 
-        AVRO_LEX_TYPE AVRO_LEX_RECORD { context(ctx).addType(avro::AVRO_RECORD); }
-        ;
-
-type_attribute:
-        array_t | enum_t | fixed_t | map_t | record_t | primitive_t
-        ;
-
-union_t:
-        '[' { context(ctx).startType(); context(ctx).addType(avro::AVRO_UNION); context(ctx).setTypesAttribute(); } 
-        unionlist
-        ']' { context(ctx).stopType(); }
-        ;
-
-object: 
-        '{' { context(ctx).startType(); } 
-         attributelist
-        '}' { context(ctx).stopType(); }
-        ;
-        
-name_attribute:
-        AVRO_LEX_NAME { context(ctx).setNameAttribute(); }
-        ;
-
-size_attribute:
-        AVRO_LEX_SIZE { context(ctx).setSizeAttribute(); }
-        ;
-
-values_attribute:
-        AVRO_LEX_VALUES { context(ctx).setValuesAttribute(); } avroschema 
-        ;
-
-fields_attribute:
-        AVRO_LEX_FIELDS { context(ctx).setFieldsAttribute(); } fieldslist AVRO_LEX_FIELDS_END
-        ;
-
-items_attribute:
-        AVRO_LEX_ITEMS { context(ctx).setItemsAttribute(); } avroschema
-        ;
-
-symbols_attribute:
-        AVRO_LEX_SYMBOLS symbollist AVRO_LEX_SYMBOLS_END
-        ;
-
-attribute:
-        type_attribute | name_attribute | fields_attribute | items_attribute | size_attribute | values_attribute | symbols_attribute | AVRO_LEX_METADATA
-        ;
-
-attributelist: 
-        attribute | attributelist ',' attribute
-        ;
-
-symbol:
-        AVRO_LEX_SYMBOL { context(ctx).setSymbolsAttribute(); }
-        ;
-
-symbollist:
-        symbol | symbollist ',' symbol
-        ;
-
-fieldsetting:
-        fieldname | avroschema | AVRO_LEX_METADATA
-        ;
-
-fieldsettinglist:
-        fieldsetting | fieldsettinglist ',' fieldsetting 
-        ;
-
-fields:
-        AVRO_LEX_FIELD fieldsettinglist AVRO_LEX_FIELD_END
-        ;   
-
-fieldname:
-        AVRO_LEX_FIELD_NAME { context(ctx).textContainsFieldName(); }
-        ;
-
-fieldslist:
-        fields | fieldslist ',' fields
-        ;
-
-unionlist: 
-        avroschema | unionlist ',' avroschema
-        ;
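The grammar above does no tree-building of its own; its semantic actions drive a `CompilerContext` through paired `startType()`/`stopType()` calls with `addType()` in between (see the `union_t` and `simpleprimitive` rules). This toy Python walker (illustrative only, not Avro's API) shows the callback sequence those rules produce for a simple union schema:

```python
def compile_schema(node, calls):
    """Record the startType/addType/stopType call order for a schema node.

    A list models the union_t rule ('[' unionlist ']'); anything else models
    simpleprimitive, where each member gets its own startType/stopType pair.
    """
    calls.append("startType")
    if isinstance(node, list):                 # union_t
        calls.append("addType(union)")
        for branch in node:                    # unionlist: one schema per branch
            compile_schema(branch, calls)
    else:                                      # simpleprimitive
        calls.append("addType(%s)" % node)
    calls.append("stopType")
    return calls
```

For `["null", "int"]` this yields a union `startType`, then a nested pair per branch, matching how the real actions bracket every type with `context(ctx).startType()`/`stopType()`.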
diff --git a/branch-1.3/lang/c++/scripts/gen-cppcode.py b/branch-1.3/lang/c++/scripts/gen-cppcode.py
deleted file mode 100644
index e53ec33..0000000
--- a/branch-1.3/lang/c++/scripts/gen-cppcode.py
+++ /dev/null
@@ -1,679 +0,0 @@
-#!/usr/bin/python
-
-#Licensed to the Apache Software Foundation (ASF) under one
-#or more contributor license agreements.  See the NOTICE file
-#distributed with this work for additional information
-#regarding copyright ownership.  The ASF licenses this file
-#to you under the Apache License, Version 2.0 (the
-#"License"); you may not use this file except in compliance
-#with the License.  You may obtain a copy of the License at
-#
-#http://www.apache.org/licenses/LICENSE-2.0
-#
-#Unless required by applicable law or agreed to in writing, software
-#distributed under the License is distributed on an "AS IS" BASIS,
-#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#See the License for the specific language governing permissions and
-#limitations under the License.
-
-done = False
-
-headers = '''
-#include <stdint.h>
-#include <string>
-#include <vector>
-#include <map>
-#include "Boost.hh"
-#include "Exception.hh"
-#include "AvroSerialize.hh"
-#include "AvroParse.hh"
-#include "Layout.hh"
-'''
-
-typeToC= { 'int' : 'int32_t', 'long' :'int64_t', 'float' : 'float', 'double' : 'double', 
-'boolean' : 'bool', 'null': 'avro::Null', 'string' : 'std::string', 'bytes' : 'std::vector<uint8_t>'} 
-
-structList = []
-structNames = {} 
-forwardDeclareList = []
-
-def addStruct(name, declaration) :
-    if not structNames.has_key(name) :
-        structNames[name] = True
-        structList.append(declaration)
-
-def addForwardDeclare(declaration) :
-    code = 'struct ' + declaration + ';'
-    forwardDeclareList.append(code)
-
-def doPrimitive(type):
-    return (typeToC[type], type)
-
-def doSymbolic(args):
-    addForwardDeclare(args[1])
-    return (args[1], args[1])
-
-def addLayout(name, type, var) :
-    result = '        add(new $offsetType$(offset + offsetof($name$, $var$)));\n'
-    result = result.replace('$name$', name)
-    if typeToC.has_key(type) : 
-        offsetType = 'avro::PrimitiveLayout'
-    else :
-        offsetType = type+ '_Layout'
-    result = result.replace('$offsetType$', offsetType)
-    result = result.replace('$var$', var)
-    return result;
-
-def addSimpleLayout(type) :
-    result = '        add(new $offsetType$);\n'
-    if typeToC.has_key(type) : 
-        offsetType = 'avro::PrimitiveLayout'
-    else :
-        offsetType = type+ '_Layout'
-    return result.replace('$offsetType$', offsetType)
-
-recordfieldTemplate = '$type$ $name$\n'
-recordTemplate = '''struct $name$ {
-
-    $name$ () :
-$initializers$
-    { }
-
-$recordfields$};
-
-template <typename Serializer>
-inline void serialize(Serializer &s, const $name$ &val, const boost::true_type &) {
-    s.writeRecord();
-$serializefields$
-}
-
-template <typename Parser>
-inline void parse(Parser &p, $name$ &val, const boost::true_type &) {
-    p.readRecord();
-$parsefields$
-}
-
-class $name$_Layout : public avro::CompoundLayout {
-  public:
-    $name$_Layout(size_t offset = 0) :
-        CompoundLayout(offset)
-    {
-$offsetlist$    }
-}; 
-'''
-
-def doRecord(args):
-    structDef = recordTemplate;
-    typename = args[1];
-    structDef = structDef.replace('$name$', typename);
-    fields = ''
-    serializefields = ''
-    parsefields = ''
-    initlist = ''
-    offsetlist = ''
-    end = False
-    while not end:
-        line = getNextLine()
-        if line[0] == 'end': 
-            end = True
-            initlist = initlist.rstrip(',\n')
-        elif line[0] == 'name':
-            fieldname = line[1]
-            fieldline = getNextLine()
-            fieldtypename, fieldtype = processType(fieldline)
-            fields += '    ' +  fieldtypename + ' ' + fieldname + ';\n'
-            serializefields += '    serialize(s, val.' + fieldname + ');\n'
-            initlist += '        ' + fieldname + '(),\n'
-            parsefields += '    parse(p, val.' + fieldname + ');\n'
-            offsetlist += addLayout(typename, fieldtype, fieldname)
-    structDef = structDef.replace('$initializers$', initlist)
-    structDef = structDef.replace('$recordfields$', fields)
-    structDef = structDef.replace('$serializefields$', serializefields)
-    structDef = structDef.replace('$parsefields$', parsefields)
-    structDef = structDef.replace('$offsetlist$', offsetlist)
-    addStruct(typename, structDef)
-    return (typename,typename)
-
-uniontypestemplate = 'typedef $type$ Choice$N$Type'
-unionTemplate = '''struct $name$ {
-
-$typedeflist$
-    typedef void* (*GenericSetter)($name$ *, int64_t);
-
-    $name$() : 
-        choice(0), 
-        value(T0()),
-        genericSetter(&$name$::genericSet)
-    { }
-
-$setfuncs$
-#ifdef AVRO_BOOST_NO_ANYREF
-    template<typename T>
-    const T &getValue() const {
-        const T *ptr = boost::any_cast<T>(&value);
-        return *ptr;
-    }
-#else
-    template<typename T>
-    const T &getValue() const {
-        return boost::any_cast<const T&>(value);
-    }
-#endif
-
-    static void *genericSet($name$ *u, int64_t choice) {
-        boost::any *val = &(u->value);
-        void *data;
-        switch (choice) {$switch$
-        }
-        return data;
-    }
-
-    int64_t choice; 
-    boost::any value;
-    GenericSetter genericSetter;
-};
-
-template <typename Serializer>
-inline void serialize(Serializer &s, const $name$ &val, const boost::true_type &) {
-    s.writeUnion(val.choice);
-    switch(val.choice) {
-$switchserialize$
-    default :
-        throw avro::Exception("Unrecognized union choice");
-    }
-}
-
-template <typename Parser>
-inline void parse(Parser &p, $name$ &val, const boost::true_type &) {
-    val.choice = p.readUnion();
-    switch(val.choice) {
-$switchparse$
-    default :
-        throw avro::Exception("Unrecognized union choice");
-    }
-}
-
-class $name$_Layout : public avro::CompoundLayout {
-  public:
-    $name$_Layout(size_t offset = 0) :
-        CompoundLayout(offset)
-    {
-        add(new avro::PrimitiveLayout(offset + offsetof($name$, choice)));
-        add(new avro::PrimitiveLayout(offset + offsetof($name$, genericSetter)));
-$offsetlist$    }
-}; 
-'''
-
-unionser = '      case $choice$:\n        serialize(s, val.getValue< $type$ >());\n        break;\n'
-unionpar = '      case $choice$:\n        { $type$ chosenVal; parse(p, chosenVal); val.value = chosenVal; }\n        break;\n'
-
-setfunc =  '''    void set_$name$(const $type$ &val) {
-        choice = $N$;
-        value =  val;
-    };\n'''
-
-switcher = '''\n          case $N$:
-            *val = T$N$();
-            data = boost::any_cast<T$N$>(val);
-            break;'''
-
-
-def doUnion(args):
-    structDef = unionTemplate
-    uniontypes = ''
-    switchserialize= ''
-    switchparse= ''
-    typename = 'Union_of'
-    setters = ''
-    switches = ''
-    offsetlist = ''
-    i = 0
-    end = False
-    while not end:
-        line = getNextLine()
-        if line[0] == 'end': end = True
-        else :
-            uniontype, name = processType(line)
-            typename += '_' + name
-            uniontypes += '    ' + 'typedef ' + uniontype + ' T' + str(i) + ';\n'
-            switch = unionser
-            switch = switch.replace('$choice$', str(i))
-            switch = switch.replace('$type$', uniontype)
-            switchserialize += switch 
-            switch = unionpar
-            switch = switch.replace('$choice$', str(i))
-            switch = switch.replace('$type$', uniontype)
-            switchparse += switch 
-            setter = setfunc
-            setter = setter.replace('$name$', name)
-            setter = setter.replace('$type$', uniontype)
-            setter = setter.replace('$N$', str(i))
-            setters += setter
-            switch = switcher
-            switches += switch.replace('$N$', str(i))
-            offsetlist += addSimpleLayout(name)
-        i+= 1
-    structDef = structDef.replace('$name$', typename)
-    structDef = structDef.replace('$typedeflist$', uniontypes)
-    structDef = structDef.replace('$switchserialize$', switchserialize)
-    structDef = structDef.replace('$switchparse$', switchparse)
-    structDef = structDef.replace('$setfuncs$', setters)
-    structDef = structDef.replace('$switch$', switches)
-    structDef = structDef.replace('$offsetlist$', offsetlist)
-    addStruct(typename, structDef)
-    return (typename,typename)
-
-enumTemplate = '''struct $name$ {
-
-    enum EnumSymbols {
-        $enumsymbols$
-    };
-
-    $name$() : 
-        value($firstsymbol$) 
-    { }
-
-    EnumSymbols value;
-};
-
-template <typename Serializer>
-inline void serialize(Serializer &s, const $name$ &val, const boost::true_type &) {
-    s.writeEnum(val.value);
-}
-
-template <typename Parser>
-inline void parse(Parser &p, $name$ &val, const boost::true_type &) {
-    val.value = static_cast<$name$::EnumSymbols>(p.readEnum());
-}
-
-class $name$_Layout : public avro::CompoundLayout {
-  public:
-    $name$_Layout(size_t offset = 0) :
-        CompoundLayout(offset)
-    {
-        add(new avro::PrimitiveLayout(offset + offsetof($name$, value)));
-    }
-}; 
-'''
-
-def doEnum(args):
-    structDef = enumTemplate;
-    typename = args[1]
-    structDef = structDef.replace('$name$', typename)
-    end = False
-    symbols = '';
-    firstsymbol = '';
-    while not end:
-        line = getNextLine()
-        if line[0] == 'end': end = True
-        elif line[0] == 'name':
-            if symbols== '' :
-                firstsymbol = line[1]
-            else :
-                symbols += ', '
-            symbols += line[1]
-        else: print "error"
-    structDef = structDef.replace('$enumsymbols$', symbols);
-    structDef = structDef.replace('$firstsymbol$', firstsymbol);
-    addStruct(typename, structDef)
-    return (typename,typename)
-
-arrayTemplate = '''struct $name$ {
-    typedef $valuetype$ ValueType;
-    typedef std::vector<ValueType> ArrayType;
-    typedef ValueType* (*GenericSetter)($name$ *);
-    
-    $name$() :
-        value(),
-        genericSetter(&$name$::genericSet)
-    { }
-
-    static ValueType *genericSet($name$ *array) {
-        array->value.push_back(ValueType());
-        return &array->value.back();
-    }
-
-    void addValue(const ValueType &val) {
-        value.push_back(val);
-    }
-
-    ArrayType value;
-    GenericSetter genericSetter;
-
-};
-
-template <typename Serializer>
-inline void serialize(Serializer &s, const $name$ &val, const boost::true_type &) {
-    const size_t size = val.value.size();
-    if(size) {
-        s.writeArrayBlock(size);
-        for(size_t i = 0; i < size; ++i) {
-            serialize(s, val.value[i]);
-        }
-    }
-    s.writeArrayEnd();
-}
-
-template <typename Parser>
-inline void parse(Parser &p, $name$ &val, const boost::true_type &) {
-    val.value.clear();
-    while(1) {
-        int size = p.readArrayBlockSize();
-        if(size > 0) {
-            val.value.reserve(val.value.size() + size);
-            while (size-- > 0) { 
-                val.value.push_back($name$::ValueType());
-                parse(p, val.value.back());
-            }
-        }
-        else {
-            break;
-        }
-    } 
-}
-
-class $name$_Layout : public avro::CompoundLayout {
-  public:
-    $name$_Layout(size_t offset = 0) :
-        CompoundLayout(offset)
-    {
-        add(new avro::PrimitiveLayout(offset + offsetof($name$, genericSetter)));
-$offsetlist$    }
-}; 
-'''
-
-def doArray(args):
-    structDef = arrayTemplate
-    line = getNextLine()
-    arraytype, typename = processType(line)
-    offsetlist = addSimpleLayout(typename)
-    typename = 'Array_of_' + typename
-
-    structDef = structDef.replace('$name$', typename)
-    structDef = structDef.replace('$valuetype$', arraytype)
-    structDef = structDef.replace('$offsetlist$', offsetlist)
-
-    line = getNextLine()
-    if line[0] != 'end': print 'error'
-
-    addStruct(typename, structDef)
-    return (typename,typename)
-
-mapTemplate = '''struct $name$ {
-    typedef $valuetype$ ValueType;
-    typedef std::map<std::string, ValueType> MapType;
-    typedef ValueType* (*GenericSetter)($name$ *, const std::string &);
-    
-    $name$() :
-        value(),
-        genericSetter(&$name$::genericSet)
-    { }
-
-    void addValue(const std::string &key, const ValueType &val) {
-        value.insert(MapType::value_type(key, val));
-    }
-
-    static ValueType *genericSet($name$ *map, const std::string &key) { 
-        map->value[key] = ValueType();
-        return &(map->value[key]);
-    }
-
-    MapType value;
-    GenericSetter genericSetter;
-
-};
-
-template <typename Serializer>
-inline void serialize(Serializer &s, const $name$ &val, const boost::true_type &) {
-    if(val.value.size()) {
-        s.writeMapBlock(val.value.size());
-        $name$::MapType::const_iterator iter = val.value.begin();
-        $name$::MapType::const_iterator end  = val.value.end();
-        while(iter!=end) {
-            serialize(s, iter->first);
-            serialize(s, iter->second);
-            ++iter;
-        }
-    }
-    s.writeMapEnd();
-}
-
-template <typename Parser>
-inline void parse(Parser &p, $name$ &val, const boost::true_type &) {
-    val.value.clear();
-    while(1) {
-        int size = p.readMapBlockSize();
-        if(size > 0) {
-            while (size-- > 0) { 
-                std::string key;
-                parse(p, key);
-                $name$::ValueType m;
-                parse(p, m);
-                val.value.insert($name$::MapType::value_type(key, m));
-            }
-        }
-        else {
-            break;
-        }
-    } 
-}
-
-class $name$_Layout : public avro::CompoundLayout {
-  public:
-    $name$_Layout(size_t offset = 0) :
-        CompoundLayout(offset)
-    {
-        add(new avro::PrimitiveLayout(offset + offsetof($name$, genericSetter)));
-$offsetlist$    }
-}; 
-'''
-
-def doMap(args):
-    structDef = mapTemplate
-    line = getNextLine() # must be string
-    line = getNextLine()
-    maptype, typename = processType(line);
-
-    offsetlist = addSimpleLayout(typename)
-    typename = 'Map_of_' + typename
-
-    structDef = structDef.replace('$name$', typename)
-    structDef = structDef.replace('$valuetype$', maptype)
-    structDef = structDef.replace('$offsetlist$', offsetlist)
-
-    line = getNextLine()
-    if line[0] != 'end': print 'error'
-    addStruct(typename, structDef)
-    return (typename,typename)
-    
-fixedTemplate = '''struct $name$ {
-    enum {
-        fixedSize = $N$
-    };
-
-    $name$() {
-        memset(value, 0, sizeof(value));
-    }
-    
-    uint8_t value[fixedSize];
-};
-
-template <typename Serializer>
-inline void serialize(Serializer &s, const $name$ &val, const boost::true_type &) {
-    s.writeFixed(val.value);
-}
-
-template <typename Parser>
-inline void parse(Parser &p, $name$ &val, const boost::true_type &) {
-    p.readFixed(val.value);
-}
-
-class $name$_Layout : public avro::CompoundLayout {
-  public:
-    $name$_Layout(size_t offset = 0) :
-        CompoundLayout(offset)
-    {
-        add(new avro::PrimitiveLayout(offset + offsetof($name$, value)));
-    }
-}; 
-'''
-
-def doFixed(args):
-    structDef = fixedTemplate
-    typename = args[1]
-    size = args[2]
-
-    line = getNextLine()
-    if line[0] != 'end': print 'error'
-
-    structDef = structDef.replace('$name$', typename)
-    structDef = structDef.replace('$N$', size)
-    addStruct(typename, structDef)
-    return (typename,typename)
-
-primitiveTemplate = '''struct $name$ {
-    $type$ value;
-};
-
-template <typename Serializer>
-inline void serialize(Serializer &s, const $name$ &val, const boost::true_type &) {
-    s.writeValue(val.value);
-}
-
-template <typename Parser>
-inline void parse(Parser &p, $name$ &val, const boost::true_type &) {
-    p.readValue(val.value);
-}
-
-class $name$_Layout : public avro::CompoundLayout {
-  public:
-    $name$_Layout(size_t offset = 0) :
-        CompoundLayout(offset)
-    {
-        add(new avro::PrimitiveLayout(offset + offsetof($name$, value)));
-    }
-}; 
-'''
-
-def doPrimitiveStruct(type):
-    structDef = primitiveTemplate
-    name =  type.capitalize()
-    structDef = structDef.replace('$name$', name);
-    structDef = structDef.replace('$type$', typeToC[type]);
-    addStruct(name, structDef)
-
-compoundBuilder= { 'record' : doRecord, 'union' : doUnion, 'enum' : doEnum, 
-'map' : doMap, 'array' : doArray, 'fixed' : doFixed, 'symbolic' : doSymbolic } 
-
-def processType(inputs) :
-    type = inputs[0]
-    if typeToC.has_key(type) : 
-        result = doPrimitive(type)
-    else :
-        func = compoundBuilder[type]
-        result = func(inputs)
-    return result
-
-def generateCode() :
-    inputs = getNextLine()
-    type = inputs[0]
-    if typeToC.has_key(type) : 
-        doPrimitiveStruct(type)
-    else :
-        func = compoundBuilder[type]
-        func(inputs)
-
-def getNextLine():
-    try:
-        line = raw_input()
-    except:
-        line = '';
-        globals()["done"] = True
-
-    if line == '':
-        globals()["done"] = True
-    return line.split(' ')
-    
-def writeHeader():
-    print "#ifndef %s_AvroGenerated_hh__" % namespace
-    print "#define %s_AvroGenerated_hh__" % namespace
-    print headers
-    print "namespace %s {\n" % namespace
-
-    for x in forwardDeclareList:
-        print "%s\n" % x
-
-    for x in structList:
-        print "/*----------------------------------------------------------------------------------*/\n"
-        print "%s\n" % x
-
-    print "\n} // namespace %s\n" % namespace
-
-    print "namespace avro {\n"
-    for x in structNames:
-        print 'template <> struct is_serializable<%s::%s> : public boost::true_type{};' % (namespace, x)
-
-    print "\n} // namespace avro\n"
-
-    print "#endif // %s_AvroGenerated_hh__" % namespace
-
-
-def usage():
-    print "-h, --help            print this helpful message"
-    print "-i, --input=FILE      input file to read (default is stdin)"
-    print "-o, --output=PATH     output file to generate (default is stdout)"
-    print "-n, --namespace=LABEL namespace for schema (default is avrouser)"
-
-if __name__ == "__main__":
-    from sys import argv
-    import getopt,sys
-
-    try:
-        opts, args = getopt.getopt(argv[1:], "hi:o:n:", ["help", "input=", "output=", "namespace="])
-
-    except getopt.GetoptError, err:
-        print str(err) 
-        usage()
-        sys.exit(2)
-
-    namespace = 'avrouser'
-
-    savein = sys.stdin              
-    saveout = sys.stdout              
-    inputFile = False
-    outputFile = False
-
-    for o, a in opts:
-        if o in ("-i", "--input"):
-            try:
-                inputFile = open(a, 'r')
-                sys.stdin = inputFile
-            except:
-                print "Could not open file " + a
-                sys.exit() 
-        elif o in ("-o", "--output"):
-            try:
-                outputFile = open(a, 'w')
-                sys.stdout = outputFile
-            except:
-                print "Could not open file " + a
-        elif o in ("-n", "--namespace"):
-            namespace = a
-        elif o in ("-h", "--help"):
-            usage()
-            sys.exit()
-        else:
-            print "Unhandled option: " + o
-            usage()
-            sys.exit()
-
-    generateCode()
-    writeHeader()
-
-    sys.stdin = savein
-    sys.stdout = saveout
-    if inputFile:
-        inputFile.close()
-    if outputFile:
-        outputFile.close()
-
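The generator deleted above builds each primitive wrapper by substituting `$name$` and `$type$` into a template string (see `doPrimitiveStruct` and `typeToC`). A standalone sketch of that substitution step, with an illustrative trimmed-down template and only a subset of the type mapping:

```python
# Sketch of the template substitution performed by the removed
# generator's doPrimitiveStruct(); the template here is illustrative,
# not the original C++ output.
primitive_template = """struct $name$ {
    $type$ value;
};"""

# Subset of the generator's typeToC mapping (Avro type -> C++ type).
type_to_c = {
    "int": "int32_t",
    "long": "int64_t",
    "float": "float",
    "double": "double",
}

def primitive_struct(avro_type):
    # Capitalize the Avro type name and substitute it into the template,
    # mirroring the string.replace() calls in the deleted script.
    name = avro_type.capitalize()
    return (primitive_template
            .replace("$name$", name)
            .replace("$type$", type_to_c[avro_type]))

print(primitive_struct("long"))
```

The same pattern (one template per schema kind, filled by `replace`) drives the record, union, enum, map, array, and fixed builders dispatched through `compoundBuilder`.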
diff --git a/branch-1.3/lang/c++/test/precompile.cc b/branch-1.3/lang/c++/test/precompile.cc
deleted file mode 100644
index 35dc142..0000000
--- a/branch-1.3/lang/c++/test/precompile.cc
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-
-#include "Compiler.hh"
-#include "ValidSchema.hh"
-
-int main()
-{
-    int ret = 0;
-    try {
-        avro::ValidSchema schema;
-        avro::compileJsonSchema(std::cin, schema);
-
-        schema.toFlatList(std::cout);
-    }
-    catch (std::exception &e) {
-        std::cerr << "Failed to parse or compile schema: " << e.what() << std::endl;
-        ret = 1;
-    }
-
-    return ret;
-}
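The `precompile.cc` tool removed above reads a JSON schema from stdin, compiles it, and emits a flat listing via `toFlatList`. A loose, illustrative analogue of that flattening idea — a depth-first walk over a parsed JSON schema — can be sketched in Python (the output format here is invented, not the C++ tool's):

```python
import json

# Illustrative analogue of precompile.cc: parse a JSON schema and emit
# its nodes in a flat, depth-first order. Not the toFlatList() format,
# just the same idea in miniature.
def flatten(schema, out):
    if isinstance(schema, dict) and schema.get("type") == "record":
        out.append("record " + schema["name"])
        for field in schema["fields"]:
            flatten(field["type"], out)
    else:
        # Primitive types appear either as bare strings or {"type": ...}.
        out.append(schema if isinstance(schema, str) else schema.get("type"))
    return out

# The "complex" record used in unittest.cc's createExampleSchema().
complex_schema = json.loads(
    '{"type": "record", "name": "complex", "fields": ['
    '{"name": "real", "type": "double"},'
    '{"name": "imaginary", "type": "double"}]}'
)
print(flatten(complex_schema, []))
```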
diff --git a/branch-1.3/lang/c++/test/testgen.cc b/branch-1.3/lang/c++/test/testgen.cc
deleted file mode 100644
index 6d1ae69..0000000
--- a/branch-1.3/lang/c++/test/testgen.cc
+++ /dev/null
@@ -1,570 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <string.h>
-#include <stdlib.h>
-#include <fstream>
-#include <sstream>
-#include <boost/test/included/unit_test_framework.hpp>
-
-#include "testgen.hh" // < generated header
-#include "testgen2.hh" // < generated header
-
-#include "OutputStreamer.hh"
-#include "InputStreamer.hh"
-#include "Serializer.hh"
-#include "Writer.hh"
-#include "ValidatingWriter.hh"
-#include "Reader.hh"
-#include "ValidatingReader.hh"
-#include "Node.hh"
-#include "ValidSchema.hh"
-#include "Compiler.hh"
-#include "ResolvingReader.hh"
-#include "ResolverSchema.hh"
-
-std::string gWriter ("jsonschemas/bigrecord");
-std::string gReader ("jsonschemas/bigrecord2");
-
-void printRecord(testgen::RootRecord &record)
-{
-    using namespace testgen;
-    std::cout << "mylong " << record.mylong << '\n';
-    std::cout << "inval1 " << record.nestedrecord.inval1 << '\n';
-    std::cout << "inval2 " << record.nestedrecord.inval2 << '\n';
-    std::cout << "inval3 " << record.nestedrecord.inval3 << '\n';
-
-    Map_of_int::MapType::const_iterator mapiter = record.mymap.value.begin();
-    while(mapiter != record.mymap.value.end()){
-        std::cout << "mymap " << mapiter->first << " " << mapiter->second << '\n';
-        ++mapiter;
-    }
-
-    Array_of_double::ArrayType::iterator arrayiter = record.myarray.value.begin();
-    while(arrayiter != record.myarray.value.end()) {
-        std::cout << "myarray " << *arrayiter << '\n';
-        ++arrayiter;
-    }
-
-    std::cout << "myeum = " << record.myenum.value << '\n';
-
-    if(record.myunion.choice == 1) {
-        const Map_of_int &theMap = record.myunion.getValue<Map_of_int>();
-        mapiter = theMap.value.begin();
-        while(mapiter != theMap.value.end()){
-            std::cout << "unionmap " << mapiter->first << " " << mapiter->second << '\n';
-            ++mapiter;
-        }
-    }
-
-    if(record.anotherunion.choice == 0) {
-        std::cout << "unionbytes ";
-        const std::vector<uint8_t> &val = record.anotherunion.getValue< std::vector<uint8_t> >();
-        for(size_t i = 0; i < val.size(); ++i) {
-            std::cout << i << ":" << static_cast<int>(val[i]) << " ";
-        }
-        std::cout  << '\n';
-    }
-    
-    std::cout << "mybool " << record.mybool << '\n';
-    std::cout << "inval1 " << record.anothernested.inval1 << '\n';
-    std::cout << "inval2 " << record.anothernested.inval2 << '\n';
-    std::cout << "inval3 " << record.anothernested.inval3 << '\n';
-
-    std::cout << "fixed ";
-    for(size_t i = 0; i < record.myfixed.fixedSize; ++i) {
-        std::cout << i << ":" << static_cast<int>(record.myfixed.value[i]) << " ";
-    }
-    std::cout  << '\n';
-
-    std::cout << "anotherint " << record.anotherint << '\n';
-
-    std::cout << "bytes ";
-    for(size_t i = 0; i < record.bytes.size(); ++i) {
-        std::cout << i << ":" << static_cast<int>(record.bytes[i]) << " ";
-    }
-    std::cout  << '\n';
-}
-
-void printRecord(testgen2::RootRecord &record)
-{
-    using namespace testgen2;
-    std::cout << "mylong " << record.mylong << '\n';
-    std::cout << "inval1 " << record.nestedrecord.inval1 << '\n';
-    std::cout << "inval2 " << record.nestedrecord.inval2 << '\n';
-    std::cout << "inval3 " << record.nestedrecord.inval3 << '\n';
-
-    Map_of_long::MapType::const_iterator mapiter = record.mymap.value.begin();
-    while(mapiter != record.mymap.value.end()){
-        std::cout << "mymap " << mapiter->first << " " << mapiter->second << '\n';
-        ++mapiter;
-    }
-
-    Array_of_double::ArrayType::iterator arrayiter = record.myarray.value.begin();
-    while(arrayiter != record.myarray.value.end()) {
-        std::cout << "myarray " << *arrayiter << '\n';
-        ++arrayiter;
-    }
-
-    std::cout << "myeum = " << record.myenum.value << '\n';
-
-    if(record.myunion.choice == 1) {
-        const Map_of_float &theMap = record.myunion.getValue<Map_of_float>();
-        Map_of_float::MapType::const_iterator mapiter = theMap.value.begin();
-        while(mapiter != theMap.value.end()){
-            std::cout << "unionmap " << mapiter->first << " " << mapiter->second << '\n';
-            ++mapiter;
-        }
-    }
-
-    std::cout << "unionbytes ";
-    const std::vector<uint8_t> &val = record.anotherunion;
-    for(size_t i = 0; i < val.size(); ++i) {
-        std::cout << i << ":" << static_cast<int>(val[i]) << " ";
-    }
-    std::cout  << '\n';
-    
-    std::cout << "inval1 " << record.anothernested.inval1 << '\n';
-    std::cout << "inval2 " << record.anothernested.inval2 << '\n';
-    std::cout << "inval3 " << record.anothernested.inval3 << '\n';
-
-    if(record.myfixed.choice == 1) {
-        const md5 &myfixed = record.myfixed.getValue<md5>();
-        std::cout << "fixed ";
-        for(size_t i = 0; i < myfixed.fixedSize; ++i) {
-            std::cout << i << ":" << static_cast<int>(myfixed.value[i]) << " ";
-        }
-        std::cout  << '\n';
-    }
-
-    std::cout << "anotherint " << record.anotherint << '\n';
-
-    std::cout << "bytes ";
-    for(size_t i = 0; i < record.bytes.size(); ++i) {
-        std::cout << i << ":" << static_cast<int>(record.bytes[i]) << " ";
-    }
-    std::cout  << '\n';
-    std::cout << "newbool " << record.newbool << '\n';
-}
-
-
-void setRecord(testgen::RootRecord &myRecord) 
-{
-    using namespace testgen;
-
-    uint8_t fixed[] =  {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
-
-    myRecord.mylong = 212;
-    myRecord.nestedrecord.inval1 = std::numeric_limits<double>::min();
-    myRecord.nestedrecord.inval2 = "hello world";
-    myRecord.nestedrecord.inval3 = std::numeric_limits<int32_t>::max();
-
-    Map_of_int::GenericSetter setter = myRecord.mymap.genericSetter;
-    Map_of_int::ValueType *val = setter(&myRecord.mymap, "one");
-    *val = 100;
-    val = setter(&myRecord.mymap, "two");
-    *val = 200;
-
-    myRecord.myarray.addValue(3434.9);
-    myRecord.myarray.addValue(7343.9);
-    myRecord.myarray.addValue(-63445.9);
-    myRecord.myenum.value = testgen::ExampleEnum::one;
-    testgen::Map_of_int map;
-    map.addValue("one", 1);
-    map.addValue("two", 2);
-    myRecord.myunion.set_Map_of_int(map);
-    std::vector<uint8_t> vec;
-    vec.push_back(1);
-    vec.push_back(2);
-    myRecord.anotherunion.set_bytes(vec);
-    myRecord.mybool = true;
-    myRecord.anothernested.inval1 = std::numeric_limits<double>::max();
-    myRecord.anothernested.inval2 = "goodbye world";
-    myRecord.anothernested.inval3 = std::numeric_limits<int32_t>::min();
-    memcpy(myRecord.myfixed.value, fixed, testgen::md5::fixedSize);
-    myRecord.anotherint = 4534;
-    myRecord.bytes.push_back(10);
-    myRecord.bytes.push_back(20);
-}
-
-struct TestCodeGenerator {
-
-    void serializeToScreen()
-    {
-        std::cout << "Serialize:\n";
-        avro::ScreenStreamer os;
-        avro::Writer writer(os);
-
-        avro::serialize(writer, myRecord_);
-        std::cout << "end Serialize\n";
-    }
-
-    void serializeToScreenValid()
-    {
-        std::cout << "Validated Serialize:\n";
-        avro::ScreenStreamer os;
-        avro::ValidatingWriter writer(schema_, os);
-
-        avro::serialize(writer, myRecord_);
-        std::cout << "end Validated Serialize\n";
-    }
-
-    void checkArray(const testgen::Array_of_double &a1, const testgen::Array_of_double &a2) 
-    {
-        BOOST_CHECK_EQUAL(a1.value.size(), 3U);
-        BOOST_CHECK_EQUAL(a1.value.size(), a2.value.size());
-        for(size_t i = 0; i < a1.value.size(); ++i) {
-            BOOST_CHECK_EQUAL(a1.value[i], a2.value[i]);
-        }
-    }
-
-    void checkMap(const testgen::Map_of_int &map1, const testgen::Map_of_int &map2) 
-    {
-        BOOST_CHECK_EQUAL(map1.value.size(), map2.value.size());
-        testgen::Map_of_int::MapType::const_iterator iter1 = map1.value.begin();
-        testgen::Map_of_int::MapType::const_iterator end   = map1.value.end();
-        testgen::Map_of_int::MapType::const_iterator iter2 = map2.value.begin();
-
-        while(iter1 != end) {
-            BOOST_CHECK_EQUAL(iter1->first, iter2->first);
-            BOOST_CHECK_EQUAL(iter1->second, iter2->second);
-            ++iter1;
-            ++iter2;
-        }
-    }
-
-    void checkBytes(const std::vector<uint8_t> &v1, const std::vector<uint8_t> &v2)
-    {
-        BOOST_CHECK_EQUAL(v1.size(), 2U);
-        BOOST_CHECK_EQUAL(v1.size(), v2.size());
-        for(size_t i = 0; i < v1.size(); ++i) {
-            BOOST_CHECK_EQUAL(v1[i], v2[i]);
-        }
-    }
-
-    void checkNested(const testgen::Nested &rec1, const testgen::Nested &rec2)
-    {
-        BOOST_CHECK_EQUAL(rec1.inval1, rec2.inval1);
-        BOOST_CHECK_EQUAL(rec1.inval2, rec2.inval2);
-        BOOST_CHECK_EQUAL(rec1.inval3, rec2.inval3);
-    }
-
-    void checkOk(const testgen::RootRecord &rec1, const testgen::RootRecord &rec2)
-    {
-        BOOST_CHECK_EQUAL(rec1.mylong, rec1.mylong);
-
-        checkNested(rec1.nestedrecord, rec2.nestedrecord);
-        checkMap(rec1.mymap, rec2.mymap);
-        checkArray(rec1.myarray, rec2.myarray);
-
-        BOOST_CHECK_EQUAL(rec1.myenum.value, rec2.myenum.value);
-
-        BOOST_CHECK_EQUAL(rec1.myunion.choice, rec2.myunion.choice);
-        // in this test I know choice was 1
-        {
-            BOOST_CHECK_EQUAL(rec1.myunion.choice, 1);
-            checkMap(rec1.myunion.getValue<testgen::Map_of_int>(), rec2.myunion.getValue<testgen::Map_of_int>());
-        }
-
-        BOOST_CHECK_EQUAL(rec1.anotherunion.choice, rec2.anotherunion.choice);
-        // in this test I know choice was 0
-        {
-            BOOST_CHECK_EQUAL(rec1.anotherunion.choice, 0);
-            typedef std::vector<uint8_t> mytype;
-            checkBytes(rec1.anotherunion.getValue<mytype>(), rec2.anotherunion.getValue<testgen::Union_of_bytes_null::T0>());
-        }
-
-        checkNested(rec1.anothernested, rec2.anothernested);
-
-        BOOST_CHECK_EQUAL(rec1.mybool, rec2.mybool);
-
-        for(int i = 0; i < static_cast<int>(testgen::md5::fixedSize); ++i) {
-            BOOST_CHECK_EQUAL(rec1.myfixed.value[i], rec2.myfixed.value[i]);
-        }
-        BOOST_CHECK_EQUAL(rec1.anotherint, rec2.anotherint);
-
-        checkBytes(rec1.bytes, rec2.bytes);
-    }
-
-    void testParser()
-    {
-        std::ostringstream ostring;
-        avro::OStreamer os(ostring);
-        avro::Writer s (os);
-
-        avro::serialize(s, myRecord_); 
-
-        testgen::RootRecord inRecord;
-        std::istringstream istring(ostring.str());
-        avro::IStreamer is(istring);
-        avro::Reader p(is);
-        avro::parse(p, inRecord);
-
-        checkOk(myRecord_, inRecord);
-    }
-
-
-    void testParserValid()
-    {
-        std::ostringstream ostring;
-        avro::OStreamer os(ostring);
-        avro::ValidatingWriter s (schema_, os);
-
-        avro::serialize(s, myRecord_);
-
-        testgen::RootRecord inRecord;
-        std::istringstream istring(ostring.str());
-        avro::IStreamer is(istring);
-        avro::ValidatingReader p(schema_, is);
-        avro::parse(p, inRecord);
-
-        //checkOk(myRecord_, inRecord);
-    }
-
-    void testNameIndex()
-    {
-        const avro::NodePtr &node = schema_.root();
-        size_t index = 0;
-        bool found = node->nameIndex("anothernested", index);
-        BOOST_CHECK_EQUAL(found, true);
-        BOOST_CHECK_EQUAL(index, 8U);
-
-        found = node->nameIndex("myenum", index);
-        BOOST_CHECK_EQUAL(found, true);
-        BOOST_CHECK_EQUAL(index, 4U);
-
-        const avro::NodePtr &enumNode = node->leafAt(index);
-        found = enumNode->nameIndex("one", index); 
-        BOOST_CHECK_EQUAL(found, true);
-        BOOST_CHECK_EQUAL(index, 1U);
-    }
-
-    void test() 
-    {
-        std::cout << "Running code generation tests\n";
-
-        testNameIndex();
-
-        serializeToScreen();
-        serializeToScreenValid();
-
-        testParser();
-        testParserValid();
-
-        std::cout << "Finished code generation tests\n";
-    }
-
-    TestCodeGenerator() 
-    {
-        setRecord(myRecord_);
-        std::ifstream in(gWriter.c_str());
-        avro::compileJsonSchema(in, schema_);
-    }
-
-    testgen::RootRecord myRecord_;
-    avro::ValidSchema schema_;
-
-};
-
-struct TestSchemaResolving {
-
-    void checkArray(const testgen::Array_of_double &a1, const testgen2::Array_of_double &a2) 
-    {
-        BOOST_CHECK_EQUAL(a1.value.size(), 3U);
-        BOOST_CHECK_EQUAL(a1.value.size(), a2.value.size());
-        for(size_t i = 0; i < a1.value.size(); ++i) {
-            BOOST_CHECK_EQUAL(a1.value[i], a2.value[i]);
-        }
-    }
-
-    void checkMap(const testgen::Map_of_int &map1, const testgen2::Map_of_long &map2) 
-    {
-        BOOST_CHECK_EQUAL(map1.value.size(), map2.value.size());
-        testgen::Map_of_int::MapType::const_iterator iter1 = map1.value.begin();
-        testgen::Map_of_int::MapType::const_iterator end   = map1.value.end();
-        testgen2::Map_of_long::MapType::const_iterator iter2 = map2.value.begin();
-
-        while(iter1 != end) {
-            BOOST_CHECK_EQUAL(iter1->first, iter2->first);
-            BOOST_CHECK_EQUAL(static_cast<float>(iter1->second), iter2->second);
-            ++iter1;
-            ++iter2;
-        }
-    }
-
-    void checkMap(const testgen::Map_of_int &map1, const testgen2::Map_of_float &map2) 
-    {
-        BOOST_CHECK_EQUAL(map1.value.size(), map2.value.size());
-        testgen::Map_of_int::MapType::const_iterator iter1 = map1.value.begin();
-        testgen::Map_of_int::MapType::const_iterator end   = map1.value.end();
-        testgen2::Map_of_float::MapType::const_iterator iter2 = map2.value.begin();
-
-        while(iter1 != end) {
-            BOOST_CHECK_EQUAL(iter1->first, iter2->first);
-            BOOST_CHECK_EQUAL(static_cast<int64_t>(iter1->second), iter2->second);
-            ++iter1;
-            ++iter2;
-        }
-    }
-
-    void checkBytes(const std::vector<uint8_t> &v1, const std::vector<uint8_t> &v2)
-    {
-        BOOST_CHECK_EQUAL(v1.size(), 2U);
-        BOOST_CHECK_EQUAL(v1.size(), v2.size());
-        for(size_t i = 0; i < v1.size(); ++i) {
-            BOOST_CHECK_EQUAL(v1[i], v2[i]);
-        }
-    }
-
-    void checkNested(const testgen::Nested &rec1, const testgen2::Nested &rec2)
-    {
-        BOOST_CHECK_EQUAL(rec1.inval1, rec2.inval1);
-        BOOST_CHECK_EQUAL(rec1.inval2, rec2.inval2);
-        BOOST_CHECK_EQUAL(rec1.inval3, rec2.inval3);
-    }
-
-    void checkOk(const testgen::RootRecord &rec1, const testgen2::RootRecord &rec2)
-    {
-        BOOST_CHECK_EQUAL(rec1.mylong, rec1.mylong);
-
-        checkNested(rec1.nestedrecord, rec2.nestedrecord);
-        checkMap(rec1.mymap, rec2.mymap);
-        checkArray(rec1.myarray, rec2.myarray);
-
-        // enum was remapped from 1 to 2
-        BOOST_CHECK_EQUAL(rec1.myenum.value, 1);
-        BOOST_CHECK_EQUAL(rec2.myenum.value, 2);
-
-        // in this test I know choice was 1
-        {
-            BOOST_CHECK_EQUAL(rec1.myunion.choice, 1);
-            BOOST_CHECK_EQUAL(rec2.myunion.choice, 2);
-            checkMap(rec1.myunion.getValue<testgen::Map_of_int>(), rec2.myunion.getValue<testgen2::Map_of_float>());
-        }
-
-        {
-            BOOST_CHECK_EQUAL(rec1.anotherunion.choice, 0);
-            typedef std::vector<uint8_t> mytype;
-            checkBytes(rec1.anotherunion.getValue<mytype>(), rec2.anotherunion);
-        }
-
-        checkNested(rec1.anothernested, rec2.anothernested);
-
-        BOOST_CHECK_EQUAL(rec2.newbool, false);
-
-        BOOST_CHECK_EQUAL(rec2.myfixed.choice, 1);
-        {
-            const testgen2::md5 &myfixed2 = rec2.myfixed.getValue<testgen2::md5>();
-            for(int i = 0; i < static_cast<int>(testgen::md5::fixedSize); ++i) {
-                BOOST_CHECK_EQUAL(rec1.myfixed.value[i], myfixed2.value[i]);
-            }
-        }
-    }
-
-    std::string serializeWriteRecordToString()
-    {
-        std::ostringstream ostring;
-        avro::OStreamer os(ostring);
-        avro::Writer s (os);
-        avro::serialize(s, writeRecord_);
-        return ostring.str();
-    }
-
-    void parseData(const std::string &data, avro::ResolverSchema &xSchema)
-    {
-        std::istringstream istring(data);
-        avro::IStreamer is(istring);
-        avro::ResolvingReader r(xSchema, is);
-
-        avro::parse(r, readRecord_);
-    }
-
-    void test()
-    {
-        std::cout << "Running schema resolution tests\n";
-        testgen2::RootRecord_Layout layout;
-
-        avro::ResolverSchema xSchema(writerSchema_, readerSchema_, layout);
-
-        printRecord(writeRecord_);
-
-        std::string writtenData = serializeWriteRecordToString();
-        parseData(writtenData, xSchema);
-
-        printRecord(readRecord_);
-
-        checkOk(writeRecord_, readRecord_);
-        std::cout << "Finished schema resolution tests\n";
-    }
-
-    TestSchemaResolving()
-    {
-        setRecord(writeRecord_);
-        std::ifstream win(gWriter.c_str());
-        avro::compileJsonSchema(win, writerSchema_);
-
-        std::ifstream rin(gReader.c_str());
-        avro::compileJsonSchema(rin, readerSchema_);
-    }
-
-    testgen::RootRecord writeRecord_;
-    avro::ValidSchema writerSchema_;
-
-    testgen2::RootRecord readRecord_;
-    avro::ValidSchema readerSchema_;
-};
-
-template<typename T>
-void addTestCase(boost::unit_test::test_suite &test)
-{
-    boost::shared_ptr<T> newtest( new T );
-    test.add( BOOST_CLASS_TEST_CASE( &T::test, newtest ));
-}
-
-boost::unit_test::test_suite*
-init_unit_test_suite( int argc, char* argv[] ) 
-{
-    using namespace boost::unit_test;
-
-    const char *srcPath = getenv("top_srcdir");
-
-    if(srcPath) {
-        std::string srcPathStr(srcPath);
-        gWriter = srcPathStr + '/' + gWriter;
-        gReader = srcPathStr + '/' + gReader;
-    }
-    else {
-        if(argc > 1) {
-            gWriter = argv[1];
-        }
-
-        if(argc > 2) {
-            gReader = argv[2];
-        }
-    }
-    std::cout << "Using writer schema " << gWriter << std::endl;
-    std::cout << "Using reader schema " << gReader << std::endl;
-
-    test_suite* test= BOOST_TEST_SUITE( "Avro C++ unit test suite" );
-
-    addTestCase<TestCodeGenerator>(*test);
-    addTestCase<TestSchemaResolving>(*test);
-
-    return test;
-}
-
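The `TestSchemaResolving` suite above reads data written with the `bigrecord` schema through the `bigrecord2` reader schema, and its `checkMap` overloads cast the writer's `int` map values to the reader's `long` and `float`. That relies on Avro's schema-resolution promotion rules; a minimal check of those rules, per the Avro specification:

```python
# Avro schema resolution lets a reader "promote" the writer's numeric
# type: int -> long/float/double, long -> float/double, float -> double.
PROMOTIONS = {
    "int": {"long", "float", "double"},
    "long": {"float", "double"},
    "float": {"double"},
}

def numeric_match(writer_type, reader_type):
    # Exact match, or a spec-sanctioned promotion of the writer's type.
    return (writer_type == reader_type
            or reader_type in PROMOTIONS.get(writer_type, set()))

print(numeric_match("int", "long"))   # the Map_of_int -> Map_of_long case
print(numeric_match("long", "int"))  # narrowing is never allowed
```

Promotion is one-directional: the reader may widen what the writer wrote, never narrow it, which is why the test's reader-side maps hold `long` and `float` while the written data holds `int`.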
diff --git a/branch-1.3/lang/c++/test/testparser.cc b/branch-1.3/lang/c++/test/testparser.cc
deleted file mode 100644
index 1df384f..0000000
--- a/branch-1.3/lang/c++/test/testparser.cc
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-
-#include "Compiler.hh"
-#include "ValidSchema.hh"
-
-int main()
-{
-    int ret = 0;
-    try {
-        avro::ValidSchema schema;
-        avro::compileJsonSchema(std::cin, schema);
-
-        schema.toJson(std::cout);
-    }
-    catch (std::exception &e) {
-        std::cerr << "Failed to parse or compile schema: " << e.what() << std::endl;
-        ret = 1;
-    }
-
-    return ret;
-}
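The `unittest.cc` file deleted next includes `Zigzag.hh` and round-trips values such as `writeLong(7010728798977672067LL)` and `writeInt(-3456)`. On the wire, Avro encodes `int` and `long` as a zig-zag mapping followed by a little-endian base-128 varint; a self-contained sketch of that encoding:

```python
# Avro int/long wire format: zig-zag maps signed -> unsigned, then a
# little-endian base-128 varint (7 data bits per byte, MSB = "more").
def encode_long(n):
    z = (n << 1) ^ (n >> 63)        # zig-zag (Python ints are unbounded,
                                    # so n >> 63 yields 0 or -1 as needed)
    out = bytearray()
    while z > 0x7F:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def decode_long(data):
    z = shift = 0
    for b in data:
        z |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:            # MSB clear: last byte of the varint
            break
    return (z >> 1) ^ -(z & 1)      # undo zig-zag

# Values exercised by the deleted test's writeEncoding():
for v in (7010728798977672067, -3456, 0, -1):
    assert decode_long(encode_long(v)) == v
```

Zig-zag keeps small-magnitude negatives small on the wire (`-1` becomes `1`, `-2` becomes `3`), so the varint stays short in the common case.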
diff --git a/branch-1.3/lang/c++/test/unittest.cc b/branch-1.3/lang/c++/test/unittest.cc
deleted file mode 100644
index a9582d8..0000000
--- a/branch-1.3/lang/c++/test/unittest.cc
+++ /dev/null
@@ -1,799 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include <iostream>
-#include <fstream>
-#include <sstream>
-#include <boost/test/included/unit_test_framework.hpp>
-
-#include "Zigzag.hh"
-#include "Node.hh"
-#include "Schema.hh"
-#include "ValidSchema.hh"
-#include "OutputStreamer.hh"
-#include "Serializer.hh"
-#include "Parser.hh"
-#include "SymbolMap.hh"
-#include "Compiler.hh"
-#include "SchemaResolution.hh"
-
-#include "AvroSerialize.hh"
-
-using namespace avro;
-
-static const uint8_t fixeddata[16] = {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
-
-struct TestSchema
-{
-    TestSchema() 
-    {}
-
-    void createExampleSchema()
-    {
-        // First construct our complex data type:
-        avro::RecordSchema myRecord("complex");
-   
-        // Now populate my record with fields (each field is another schema):
-        myRecord.addField("real", avro::DoubleSchema());
-        myRecord.addField("imaginary", avro::DoubleSchema());
-
-        // The complex record is the same as used above, let's make a schema 
-        // for an array of these record
-  
-        avro::ArraySchema complexArray(myRecord); 
-
-        avro::ValidSchema validComplexArray(complexArray);
-        validComplexArray.toJson(std::cout);
-    }
-
-    void buildSchema()
-    {
-        RecordSchema record("RootRecord");
-
-        record.addField("mylong", LongSchema());
-
-        IntSchema intSchema;
-        avro::MapSchema map = MapSchema(IntSchema());
-
-        record.addField("mymap", map);
-
-        ArraySchema array = ArraySchema(DoubleSchema());
-
-        const std::string s("myarray");
-        record.addField(s, array);
-
-        EnumSchema myenum("ExampleEnum");
-        myenum.addSymbol("zero");
-        myenum.addSymbol("one");
-        myenum.addSymbol("two");
-        myenum.addSymbol("three");
-
-        bool caught = false;
-        try {
-            myenum.addSymbol("three");
-        }
-        catch(Exception &e) {
-            std::cout << "(intentional) exception: " << e.what() << '\n';
-            caught = true;
-        }
-        BOOST_CHECK_EQUAL(caught, true);
-
-        record.addField("myenum", myenum); 
-
-        UnionSchema onion;
-        onion.addType(NullSchema());
-        onion.addType(map);
-        onion.addType(FloatSchema());
-       
-        record.addField("myunion", onion); 
-
-        RecordSchema nestedRecord("NestedRecord");
-        nestedRecord.addField("floatInNested", FloatSchema());
-
-        record.addField("nested", nestedRecord);
-
-        record.addField("mybool", BoolSchema());
-        FixedSchema fixed(16, "fixed16");
-        record.addField("myfixed", fixed);
-
-        caught = false;
-        try {
-            record.addField("mylong", LongSchema());
-        }
-        catch(Exception &e) {
-            std::cout << "(intentional) exception: " << e.what() << '\n';
-            caught = true;
-        }
-        BOOST_CHECK_EQUAL(caught, true);
-
-        record.addField("mylong2", LongSchema());
-
-        record.addField("anotherint", intSchema);
-
-        schema_.setSchema(record);
-    }
-
-    void checkNameLookup() {
-        NodePtr node = schema_.root();
-
-        size_t index = 0;
-        bool found = node->nameIndex("mylongxxx", index);
-        BOOST_CHECK_EQUAL(found, false);
-
-        found = node->nameIndex("mylong", index);
-        BOOST_CHECK_EQUAL(found, true);
-        BOOST_CHECK_EQUAL(index, 0U);
-
-        found = node->nameIndex("mylong2", index);
-        BOOST_CHECK_EQUAL(found, true);
-        BOOST_CHECK_EQUAL(index, 8U);
-
-        found = node->nameIndex("myenum", index);
-        BOOST_CHECK_EQUAL(found, true);
-        NodePtr enumNode = node->leafAt(index);
-
-        found = enumNode->nameIndex("one", index);
-        BOOST_CHECK_EQUAL(found, true);
-        BOOST_CHECK_EQUAL(index, 1U);
-
-        found = enumNode->nameIndex("three", index);
-        BOOST_CHECK_EQUAL(found, true);
-        BOOST_CHECK_EQUAL(index, 3U);
-
-        found = enumNode->nameIndex("four", index);
-        BOOST_CHECK_EQUAL(found, false);
-    }
-
-    template<typename Serializer>
-    void printUnion(Serializer &s, int path)
-    {
-        s.writeUnion(path);
-        if(path == 0) {
-            std::cout << "Null in union\n";
-            s.writeNull();
-        }
-        else if(path == 1) {
-            std::cout << "Map in union\n";
-            s.writeMapBlock(2);
-            s.writeString("Foo");
-            s.writeInt(16);
-            s.writeString("Bar");
-            s.writeInt(17);
-            s.writeMapBlock(1);
-            s.writeString("FooBar");
-            s.writeInt(18);
-            s.writeMapEnd();
-        }
-        else {
-            std::cout << "Float in union\n";
-            s.writeFloat(200.);
-        }
-    }
-
-    template<typename Serializer>
-    void writeEncoding(Serializer &s, int path)
-    {
-        std::cout << "Record\n";
-        s.writeRecord();
-        s.writeInt(1000);
-
-        std::cout << "Map\n";
-        s.writeMapBlock(2);
-        s.writeString(std::string("Foo"));
-        s.writeInt(16);
-        s.writeString(std::string("Bar"));
-        s.writeInt(17);
-        s.writeMapEnd();
-
-        std::cout << "Array\n";
-        s.writeArrayBlock(2);
-        s.writeDouble(100.0);
-        s.writeDouble(1000.0);
-        s.writeArrayEnd();
-
-        std::cout << "Enum\n";
-        s.writeEnum(3);
-
-        std::cout << "Union\n";
-        printUnion(s, path);
-
-        std::cout << "Record\n";
-        s.writeRecord();
-        s.writeFloat(-101.101f);
-
-        std::cout << "Bool\n";
-        s.writeBool(true);
-
-        std::cout << "Fixed16\n";
-        
-        s.writeFixed(fixeddata);
-
-        std::cout << "Long\n";
-        s.writeLong(7010728798977672067LL);
-
-        std::cout << "Int\n";
-        s.writeInt(-3456);
-    }
-
-    void printEncoding() {
-        std::cout << "Encoding\n";
-        ScreenStreamer os;
-        Serializer<Writer> s(os);
-        writeEncoding(s, 0);
-    }
-
-    void printValidatingEncoding(int path)
-    {
-        std::cout << "Validating Encoding " << path << "\n";
-        ScreenStreamer os;
-        Serializer<ValidatingWriter> s(schema_, os);
-        writeEncoding(s, path);
-    }
-
-    void saveValidatingEncoding(int path) 
-    {
-        std::ofstream out("test.avro");
-        OStreamer os(out);
-        Serializer<ValidatingWriter> s(schema_, os);
-        writeEncoding(s, path);
-    }
-
-    void printNext(Parser<Reader> &p) {
-        // no-op printer
-    }
-
-    void printNext(Parser<ValidatingReader> &p)
-    {
-        std::cout << "Next: \"" << nextType(p);
-        std::string recordName;
-        std::string fieldName;
-        if( currentRecordName(p, recordName) ) {
-            std::cout << "\" record: \"" << recordName;
-        }
-        if( nextFieldName(p, fieldName) ) {
-            std::cout << "\" field: \"" << fieldName;
-        }
-        std::cout << "\"\n";
-
-    }
-
-    template <typename Parser>
-    void readMap(Parser &p)
-    {
-        int64_t size = 0;
-        do { 
-            printNext(p);
-            size = p.readMapBlockSize();
-            std::cout << "Size " << size << '\n';
-            for(int64_t i=0; i < size; ++i) {
-                std::string key;
-                printNext(p);
-                p.readString(key);
-                printNext(p);
-                int32_t intval = p.readInt();
-                std::cout << key << ":" << intval << '\n';
-            }
-        } while (size != 0);
-    }
-
-    template <typename Parser>
-    void readArray(Parser &p)
-    {
-        int64_t size = 0;
-        double d = 0.0;
-        do {
-            printNext(p);
-            size = p.readArrayBlockSize();
-            std::cout << "Size " << size << '\n';
-            for(int64_t i=0; i < size; ++i) {
-                printNext(p);
-                d = p.readDouble();
-                std::cout << i << ":" << d << '\n';
-            }
-        } while(size != 0);
-        BOOST_CHECK_EQUAL(d, 1000.0);
-    }
-
-    template <typename Parser>
-    void readNestedRecord(Parser &p)
-    {
-        printNext(p);
-        p.readRecord();
-        printNext(p);
-        float f = p.readFloat();
-        std::cout << f << '\n';
-        BOOST_CHECK_EQUAL(f, -101.101f);
-    }
-
-    template <typename Parser>
-    void readFixed(Parser &p) {
-
-        boost::array<uint8_t, 16> input;
-        p.readFixed<16>(input);
-        BOOST_CHECK_EQUAL(input.size(), 16U);
-
-        for(int i=0; i< 16; ++i) {
-            std::cout << static_cast<int>(input[i]) << ' ';
-        }
-        std::cout << '\n';
-    }
-
-    template <typename Parser>
-    void readData(Parser &p)
-    {
-        printNext(p);
-        p.readRecord();
-
-        printNext(p);
-        int64_t longval = p.readLong();
-        std::cout << longval << '\n';
-        BOOST_CHECK_EQUAL(longval, 1000);
-
-        readMap(p);
-        readArray(p);
-
-        printNext(p);
-        longval = p.readEnum();
-        std::cout << "Enum choice " << longval << '\n';
-
-        printNext(p);
-        longval = p.readUnion();
-        std::cout << "Union path " << longval << '\n';
-        readMap(p);
-
-        readNestedRecord(p);
-
-        printNext(p);
-        bool boolval = p.readBool();
-        std::cout << boolval << '\n';
-        BOOST_CHECK_EQUAL(boolval, true);
-
-        printNext(p);
-        readFixed(p);
-
-        printNext(p);
-        longval = p.readLong();
-        std::cout << longval << '\n';
-        BOOST_CHECK_EQUAL(longval, 7010728798977672067LL);
-
-        printNext(p);
-        int32_t intval = p.readInt();
-        std::cout << intval << '\n';
-        BOOST_CHECK_EQUAL(intval, -3456);
-    }
-
-    void readRawData() {
-        std::ifstream in("test.avro");
-        IStreamer ins(in);
-        Parser<Reader> p(ins);
-        readData(p);
-    }
-
-    void readValidatedData()
-    {
-        std::ifstream in("test.avro");
-        IStreamer ins(in);
-        Parser<ValidatingReader> p(schema_, ins);
-        readData(p);
-    }
-
-    void test()
-    {
-        std::cout << "Before\n";
-        schema_.toJson(std::cout);
-        schema_.toFlatList(std::cout);
-        buildSchema();
-        std::cout << "After\n";
-        schema_.toJson(std::cout);
-        schema_.toFlatList(std::cout);
-
-        checkNameLookup();
-
-        printEncoding();
-        printValidatingEncoding(0);
-        printValidatingEncoding(1);
-        printValidatingEncoding(2);
-
-        saveValidatingEncoding(1);
-        readRawData();
-        readValidatedData();
-
-        createExampleSchema();
-    }
-
-    ValidSchema schema_;
-};
-
-struct TestEncoding {
-
-    void compare(int32_t val) {
-        uint32_t encoded = encodeZigzag32(val);
-        BOOST_CHECK_EQUAL(decodeZigzag32(encoded), val);
-    }
-
-    void compare(int64_t val) {
-        uint64_t encoded = encodeZigzag64(val);
-        BOOST_CHECK_EQUAL(decodeZigzag64(encoded), val);
-    }
-
-    template<typename IntType>
-    void testEncoding(IntType start, IntType stop)
-    {
-        std::cout << "testing from " << start << " to " << stop << " inclusive\n";
-        IntType val = start;
-        IntType diff = stop - start + 1;
-
-        for(IntType i = 0; i < diff; ++i) {
-            compare(val+i);
-        }
-    }
-
-    template<typename IntType>
-    void testEncoding()
-    {
-        testEncoding<IntType>(std::numeric_limits<IntType>::min(), std::numeric_limits<IntType>::min() + 1000);
-        testEncoding<IntType>(-1000, 1000);
-        testEncoding<IntType>(std::numeric_limits<IntType>::max()-1000, std::numeric_limits<IntType>::max());
-    }
-
-    void test() {
-        testEncoding<int32_t>();
-        testEncoding<int64_t>();
-    }
-
-};
-
-struct TestSymbolMap
-{
-    TestSymbolMap()
-    {}
-
-    void test() 
-    {
-        std::cout << "TestSymbolMap\n";
-        std::string name("myrecord");
-
-        RecordSchema rec(name);
-
-        NodePtr node = map_.locateSymbol(name);
-        BOOST_CHECK(node == 0);
-
-        map_.registerSymbol(rec.root());
-
-        node = map_.locateSymbol(name);
-        BOOST_CHECK(node);
-        BOOST_CHECK_EQUAL(node->name(), name);
-        std::cout << "Found " << name << " registered\n";
-    }
-
-    SymbolMap map_;
-};
-
-struct TestNested
-{
-    TestNested()
-    {}
-
-    void createSchema() 
-    {
-        std::cout << "TestNested\n";
-        RecordSchema rec("LongList");
-        rec.addField("value", LongSchema());
-        UnionSchema next;
-        next.addType(NullSchema());
-        next.addType(rec);
-        rec.addField("next", next);
-        rec.addField("end", BoolSchema());
-
-        schema_.setSchema(rec);
-        schema_.toJson(std::cout);
-        schema_.toFlatList(std::cout);
-    }
-
-    void serializeNoRecurse(OutputStreamer &os)
-    {
-        std::cout << "No recurse\n";
-        Serializer<ValidatingWriter> s(schema_, os);
-        s.writeRecord();
-        s.writeLong(1);
-        s.writeUnion(0);
-        s.writeNull();
-        s.writeBool(true);
-    }
-
-    void serializeRecurse(OutputStreamer &os)
-    {
-        std::cout << "Recurse\n";
-        Serializer<ValidatingWriter> s(schema_, os);
-        s.writeRecord();
-        s.writeLong(1);
-        s.writeUnion(1);
-        {
-            s.writeRecord();
-            s.writeLong(2);
-            s.writeUnion(1);
-            {
-                s.writeRecord();
-                s.writeLong(3);
-                s.writeUnion(0);
-            }
-            s.writeNull();
-        }
-        s.writeBool(true);
-    }
-
-    void validatingParser(InputStreamer &is) 
-    {
-        Parser<ValidatingReader> p(schema_, is);
-        int64_t val = 0;
-        int64_t path = 0;
-    
-        do {
-            p.readRecord();
-            val = p.readLong();
-            std::cout << "longval = " << val << '\n';
-            path = p.readUnion();
-        } while(path == 1);
-
-        p.readNull();
-        bool b = p.readBool();
-        std::cout << "bval = " << b << '\n';
-    }
-
-    void testToScreen() {
-        ScreenStreamer os;
-        serializeNoRecurse(os);
-        serializeRecurse(os);
-    }
-
-    void testParseNoRecurse() {
-        std::ostringstream ostring;
-        OStreamer os(ostring);
-        serializeNoRecurse(os);
-        std::cout << "ParseNoRecurse\n";
-
-        std::istringstream istring(ostring.str());
-        IStreamer is(istring);
-        validatingParser(is);
-    }
-
-    void testParseRecurse() {
-        std::ostringstream ostring;
-        OStreamer os(ostring);
-        serializeRecurse(os);
-        std::cout << "ParseRecurse\n";
-
-        std::istringstream istring(ostring.str());
-        IStreamer is(istring);
-        validatingParser(is);
-    }
-
-
-    void test() {
-        createSchema();
-        testToScreen();
-
-        testParseNoRecurse();
-        testParseRecurse();
-
-    }
-
-    ValidSchema schema_;
-};
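In Avro's JSON schema notation, the recursive record that `createSchema()` assembles programmatically would read approximately as follows (field order as added above; a sketch for orientation, not output captured from `toJson()`):

```json
{
  "type": "record",
  "name": "LongList",
  "fields": [
    {"name": "value", "type": "long"},
    {"name": "next", "type": ["null", "LongList"]},
    {"name": "end", "type": "boolean"}
  ]
}
```

The self-reference by name in the `next` union is what lets `serializeRecurse()` nest records to arbitrary depth, terminated by choosing the `null` branch.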
-
-struct TestGenerated
-{
-    TestGenerated()
-    {}
-
-    void test() 
-    {
-        std::cout << "TestGenerated\n";
-
-        int32_t val = 100;
-        float   f   = 200.0;
-
-        ScreenStreamer os;
-        Writer writer(os);
-
-        serialize(writer, val);
-        serialize(writer, Null());
-        serialize(writer, f);
-        
-    }
-};
-
-struct TestBadStuff
-{
-    void testBadFile() 
-    {
-        std::cout << "TestBadFile\n";
-
-        avro::ValidSchema schema;
-        std::ifstream in("agjoewejefkjs");
-        std::string error;
-        bool result = avro::compileJsonSchema(in, schema, error);
-        BOOST_CHECK_EQUAL(result, false);
-        std::cout << "(intentional) error: " << error << '\n';
-    }
-
-    void testBadSchema()
-    {
-        std::cout << "TestBadSchema\n";
-
-        std::string str ("{ \"type\" : \"wrong\" }");
-        std::istringstream in(str);
-
-        avro::ValidSchema schema;
-        std::string error;
-        bool result = avro::compileJsonSchema(in, schema, error);
-        BOOST_CHECK_EQUAL(result, false);
-        std::cout << "(intentional) error: " << error << '\n';
-    }
-
-    void test() 
-    {
-        std::cout << "TestBadStuff\n";
-        testBadFile();
-        testBadSchema();
-    }
-};
-
-struct TestResolution
-{
-    TestResolution() :
-        int_(IntSchema()), 
-        long_(LongSchema()),
-        bool_(BoolSchema()), 
-        float_(FloatSchema()), 
-        double_(DoubleSchema()),
-
-        mapOfInt_(MapSchema(IntSchema())),
-        mapOfDouble_(MapSchema(DoubleSchema())),
-
-        arrayOfLong_(ArraySchema(LongSchema())),
-        arrayOfFloat_(ArraySchema(FloatSchema()))
-    {
-        {
-            EnumSchema one("one");
-            one.addSymbol("X");
-            enumOne_.setSchema(one);
-
-            EnumSchema two("two");
-            two.addSymbol("Y");
-            enumTwo_.setSchema(two);
-        }
-    
-        {
-            UnionSchema one;
-            one.addType(IntSchema());
-            one.addType(FloatSchema());
-            unionOne_.setSchema(one);
-
-            UnionSchema two;
-            two.addType(IntSchema());
-            two.addType(DoubleSchema());
-            unionTwo_.setSchema(two);
-        }
-    }
-
-    SchemaResolution resolve(const ValidSchema &writer, const ValidSchema &reader)
-    {
-        return writer.root()->resolve(*reader.root());
-    }
-
-    void test() 
-    {
-        std::cout << "TestResolution\n";
-
-        BOOST_CHECK_EQUAL(resolve(long_, long_), RESOLVE_MATCH); 
-        BOOST_CHECK_EQUAL(resolve(long_, bool_), RESOLVE_NO_MATCH); 
-        BOOST_CHECK_EQUAL(resolve(bool_, long_), RESOLVE_NO_MATCH); 
-
-        BOOST_CHECK_EQUAL(resolve(int_, long_), RESOLVE_PROMOTABLE_TO_LONG); 
-        BOOST_CHECK_EQUAL(resolve(long_, int_), RESOLVE_NO_MATCH); 
-
-        BOOST_CHECK_EQUAL(resolve(int_, float_), RESOLVE_PROMOTABLE_TO_FLOAT); 
-        BOOST_CHECK_EQUAL(resolve(float_, int_), RESOLVE_NO_MATCH); 
-
-        BOOST_CHECK_EQUAL(resolve(int_, double_), RESOLVE_PROMOTABLE_TO_DOUBLE); 
-        BOOST_CHECK_EQUAL(resolve(double_, int_), RESOLVE_NO_MATCH); 
-
-        BOOST_CHECK_EQUAL(resolve(long_, float_), RESOLVE_PROMOTABLE_TO_FLOAT); 
-        BOOST_CHECK_EQUAL(resolve(float_, long_), RESOLVE_NO_MATCH); 
-
-        BOOST_CHECK_EQUAL(resolve(long_, double_), RESOLVE_PROMOTABLE_TO_DOUBLE); 
-        BOOST_CHECK_EQUAL(resolve(double_, long_), RESOLVE_NO_MATCH); 
-
-        BOOST_CHECK_EQUAL(resolve(float_, double_), RESOLVE_PROMOTABLE_TO_DOUBLE); 
-        BOOST_CHECK_EQUAL(resolve(double_, float_), RESOLVE_NO_MATCH); 
-
-        BOOST_CHECK_EQUAL(resolve(int_, mapOfInt_), RESOLVE_NO_MATCH);
-        BOOST_CHECK_EQUAL(resolve(mapOfInt_, int_), RESOLVE_NO_MATCH);
-
-        BOOST_CHECK_EQUAL(resolve(mapOfInt_, mapOfInt_), RESOLVE_MATCH);
-        BOOST_CHECK_EQUAL(resolve(mapOfDouble_, mapOfInt_), RESOLVE_NO_MATCH);
-        BOOST_CHECK_EQUAL(resolve(mapOfInt_, mapOfDouble_), RESOLVE_PROMOTABLE_TO_DOUBLE);
-
-        BOOST_CHECK_EQUAL(resolve(long_, arrayOfLong_), RESOLVE_NO_MATCH);
-        BOOST_CHECK_EQUAL(resolve(arrayOfLong_, long_), RESOLVE_NO_MATCH);
-
-        BOOST_CHECK_EQUAL(resolve(arrayOfLong_, arrayOfLong_), RESOLVE_MATCH);
-        BOOST_CHECK_EQUAL(resolve(arrayOfFloat_, arrayOfLong_), RESOLVE_NO_MATCH);
-        BOOST_CHECK_EQUAL(resolve(arrayOfLong_, arrayOfFloat_), RESOLVE_PROMOTABLE_TO_FLOAT);
-
-        BOOST_CHECK_EQUAL(resolve(enumOne_, enumOne_), RESOLVE_MATCH);
-        BOOST_CHECK_EQUAL(resolve(enumOne_, enumTwo_), RESOLVE_NO_MATCH);
-
-        BOOST_CHECK_EQUAL(resolve(float_, unionOne_), RESOLVE_MATCH);
-        BOOST_CHECK_EQUAL(resolve(double_, unionOne_), RESOLVE_NO_MATCH);
-        BOOST_CHECK_EQUAL(resolve(float_, unionTwo_), RESOLVE_PROMOTABLE_TO_DOUBLE);
-
-        BOOST_CHECK_EQUAL(resolve(unionOne_, float_), RESOLVE_MATCH);
-        BOOST_CHECK_EQUAL(resolve(unionOne_, double_), RESOLVE_PROMOTABLE_TO_DOUBLE);
-        BOOST_CHECK_EQUAL(resolve(unionTwo_, float_), RESOLVE_PROMOTABLE_TO_FLOAT);
-        BOOST_CHECK_EQUAL(resolve(unionOne_, unionTwo_), RESOLVE_MATCH);
-    }
-
-  private:
-
-    ValidSchema int_;
-    ValidSchema long_;
-    ValidSchema bool_;
-    ValidSchema float_;
-    ValidSchema double_;
-
-    ValidSchema mapOfInt_;
-    ValidSchema mapOfDouble_;
-
-    ValidSchema arrayOfLong_;
-    ValidSchema arrayOfFloat_;
-
-    ValidSchema enumOne_;
-    ValidSchema enumTwo_;
-
-    ValidSchema unionOne_;
-    ValidSchema unionTwo_;
-};
-
-
-template<typename T>
-void addTestCase(boost::unit_test::test_suite &test) 
-{
-    boost::shared_ptr<T> newtest( new T );
-    test.add( BOOST_CLASS_TEST_CASE( &T::test, newtest ));
-}
-
-boost::unit_test::test_suite*
-init_unit_test_suite( int argc, char* argv[] ) 
-{
-    using namespace boost::unit_test;
-
-    test_suite* test= BOOST_TEST_SUITE( "Avro C++ unit test suite" );
-
-    addTestCase<TestEncoding>(*test);
-    addTestCase<TestSchema>(*test);
-    addTestCase<TestSymbolMap>(*test);
-    addTestCase<TestNested>(*test);
-    addTestCase<TestGenerated>(*test);
-    addTestCase<TestBadStuff>(*test);
-    addTestCase<TestResolution>(*test);
-
-    return test;
-}
-
diff --git a/branch-1.3/lang/c/.gitignore b/branch-1.3/lang/c/.gitignore
deleted file mode 100644
index 99e575c..0000000
--- a/branch-1.3/lang/c/.gitignore
+++ /dev/null
@@ -1,10 +0,0 @@
-Makefile.in
-aclocal.m4
-autom4te.cache
-config
-config.h.in
-configure
-*.lo
-*.o
-INSTALL
-cscope.*
diff --git a/branch-1.3/lang/c/AUTHORS b/branch-1.3/lang/c/AUTHORS
deleted file mode 100644
index 7b45e94..0000000
--- a/branch-1.3/lang/c/AUTHORS
+++ /dev/null
@@ -1,4 +0,0 @@
-
-See http://hadoop.apache.org/avro/ for a list of authors
-
-
diff --git a/branch-1.3/lang/c/CMakeLists.txt b/branch-1.3/lang/c/CMakeLists.txt
deleted file mode 100644
index e831498..0000000
--- a/branch-1.3/lang/c/CMakeLists.txt
+++ /dev/null
@@ -1,41 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-cmake_minimum_required(VERSION 2.4)
-project(AvroC)
-enable_testing()
-file(READ "${CMAKE_CURRENT_SOURCE_DIR}/../../share/VERSION.txt" AVRO_VERSION)
-
-if(APPLE)
-    set(CMAKE_OSX_ARCHITECTURES "ppc;i386;x86_64" CACHE STRING "Build architectures for Mac OS X" FORCE) 
-endif(APPLE)
-
-if(CMAKE_COMPILER_IS_GNUCC)
-    add_definitions(-W -Wall)
-endif(CMAKE_COMPILER_IS_GNUCC)
-
-include_directories(${AvroC_SOURCE_DIR}/src)
-include_directories(${AvroC_SOURCE_DIR}/jansson/src)
-
-add_subdirectory(src)
-add_subdirectory(examples)
-add_subdirectory(tests)
-
-add_custom_target(pretty
-    "${CMAKE_COMMAND}" -P "${CMAKE_CURRENT_SOURCE_DIR}/cmake_pretty.cmake")
-
diff --git a/branch-1.3/lang/c/COPYING b/branch-1.3/lang/c/COPYING
deleted file mode 100644
index d645695..0000000
--- a/branch-1.3/lang/c/COPYING
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/branch-1.3/lang/c/ChangeLog b/branch-1.3/lang/c/ChangeLog
deleted file mode 100644
index fd40910..0000000
--- a/branch-1.3/lang/c/ChangeLog
+++ /dev/null
@@ -1,4 +0,0 @@
-
-
-
-
diff --git a/branch-1.3/lang/c/Makefile.am b/branch-1.3/lang/c/Makefile.am
deleted file mode 100644
index 4ed5a2f..0000000
--- a/branch-1.3/lang/c/Makefile.am
+++ /dev/null
@@ -1,22 +0,0 @@
-EXTRA_DIST=version.sh m4
-SUBDIRS = docs jansson src tests examples
-
-# Make sure we never package up '.svn' directories
-dist-hook:
-	find $(distdir) -name '.svn' | xargs rm -rf
-
-# Linux kernel source indent format options
-INDENT_OPTS=-nbad -bap -nbc -bbo -hnl -br -brs -c33 -cd33 -ncdb -ce -ci4 \
--cli0 -d0 -di1 -nfc1 -i8 -ip0 -l80 -lp -npcs -nprs -npsl -sai \
--saf -saw -ncs -nsc -sob -nfca -cp33 -ss -ts8
-
-cscope:
-	cd $(top_srcdir) ; \
-	find src tests examples jansson -iname '*.[chsSly]' > cscope.files ; \
-	cscope -kqb
-
-pretty:
-	@for dir in src tests examples; do \
-	indent $(INDENT_OPTS) $(top_srcdir)/$$dir/*.[c,h]; \
-	mv $(top_srcdir)/$$dir/*~ /tmp; \
-	done
diff --git a/branch-1.3/lang/c/NEWS b/branch-1.3/lang/c/NEWS
deleted file mode 100644
index e70f439..0000000
--- a/branch-1.3/lang/c/NEWS
+++ /dev/null
@@ -1,5 +0,0 @@
-
-For news, visit the Avro web site at
-http://hadoop.apache.org/avro/
-
-
diff --git a/branch-1.3/lang/c/README b/branch-1.3/lang/c/README
deleted file mode 100644
index 4a28425..0000000
--- a/branch-1.3/lang/c/README
+++ /dev/null
@@ -1,8 +0,0 @@
-    ___                      ______
-   /   |_   ___________     / ____/
-  / /| | | / / ___/ __ \   / /     
- / ___ | |/ / /  / /_/ /  / /___   
-/_/  |_|___/_/   \____/   \____/   
-
-======================================================
-Please see the documentation in the "docs" directory.
diff --git a/branch-1.3/lang/c/build.sh b/branch-1.3/lang/c/build.sh
deleted file mode 100755
index 77687b0..0000000
--- a/branch-1.3/lang/c/build.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-set -e						  # exit on error
-#set -x		
-
-root_dir=$(pwd)
-build_dir="../../build/c"
-dist_dir="../../dist/c"
-version=$(./version.sh project)
-tarball="avro-c-$version.tar.gz"
-doc_dir="../../build/avro-doc-$version/api/c"
-
-function autoreconf_check {
-  if [ ! -f configure ]; then
-    autoreconf -f -i
-  fi
-}
-
-function prepare_build {
-  autoreconf_check
-  clean
-  mkdir -p $build_dir
-  (cd $build_dir && $root_dir/configure)
-}
-
-function clean {
-  if [ -d $build_dir ]; then
-	find $build_dir | xargs chmod 755
-	rm -rf $build_dir
-  fi
-}
-
-case "$1" in
-
-    interop-data-generate)
-	prepare_build
-	make -C $build_dir
-	$build_dir/tests/generate_interop_data "../../share/test/schemas/interop.avsc"  "../../build/interop/data"
-	;;
-
-    interop-data-test)
-	prepare_build
-	make -C $build_dir
-	$build_dir/tests/test_interop_data "../../build/interop/data"
-	;;
-
-    test)
-	prepare_build
-	make -C $build_dir check
-        clean
-	;;
-
-    dist)
-	prepare_build
-	make -C $build_dir distcheck
-	if [ ! -d $dist_dir ]; then 
-           mkdir -p $dist_dir 
-        fi
-	if [ ! -d $doc_dir ]; then
-           mkdir -p $doc_dir
-	fi
-	mv $build_dir/$tarball $dist_dir
-	cp $build_dir/docs/*.html $doc_dir
-        clean
-	;;
-
-    clean)
-        clean
-	;;
-
-    *)
-        echo "Usage: $0 {interop-data-generate|interop-data-test|test|dist|clean}"
-        exit 1
-esac
-
-exit 0
diff --git a/branch-1.3/lang/c/cmake_pretty.cmake b/branch-1.3/lang/c/cmake_pretty.cmake
deleted file mode 100644
index aee633e..0000000
--- a/branch-1.3/lang/c/cmake_pretty.cmake
+++ /dev/null
@@ -1,33 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-
-# Linux kernel source indent format options
-set(INDENT_OPTS -nbad -bap -nbc -bbo -hnl -br -brs -c33 -cd33 -ncdb -ce -ci4
-    -cli0 -d0 -di1 -nfc1 -i8 -ip0 -l80 -lp -npcs -nprs -npsl -sai
-    -saf -saw -ncs -nsc -sob -nfca -cp33 -ss -ts8)
-
-foreach(dir src tests examples)
-	exec_program(indent
-                 ARGS ${INDENT_OPTS} ${CMAKE_CURRENT_SOURCE_DIR}/${dir}/*.[c,h]
-                 OUTPUT_VARIABLE indent_output
-                 RETURN_VALUE ret)
-    message(STATUS ${indent_output})
-	# TODO: mv ${CMAKE_CURRENT_SOURCE_DIR}/${dir}/*~ /tmp; \
-endforeach()
-
diff --git a/branch-1.3/lang/c/config/.gitignore b/branch-1.3/lang/c/config/.gitignore
deleted file mode 100644
index 72e8ffc..0000000
--- a/branch-1.3/lang/c/config/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-*
diff --git a/branch-1.3/lang/c/configure.in b/branch-1.3/lang/c/configure.in
deleted file mode 100644
index 49f1fbe..0000000
--- a/branch-1.3/lang/c/configure.in
+++ /dev/null
@@ -1,51 +0,0 @@
-#                                               -*- Autoconf -*-
-# Process this file with autoconf to produce a configure script.
-
-AC_PREREQ(2.59)
-AC_INIT([avro-c], m4_esyscmd([./version.sh project]), [http://issues.apache.org/jira/browse/AVRO])
-AC_CONFIG_AUX_DIR([config])
-AM_INIT_AUTOMAKE
-AC_CONFIG_SRCDIR([src/avro.h])
-AC_CONFIG_MACRO_DIR([m4])
-AC_CONFIG_HEADER([config.h])
-
-LIBAVRO_VERSION=m4_esyscmd([./version.sh libtool])
-AC_SUBST(LIBAVRO_VERSION)
-AC_DEFINE_UNQUOTED(LIBAVRO_VERSION, $LIBAVRO_VERSION, [Libtool version info for libavro])
-
-# Checks for programs.
-AC_PROG_CC
-AC_PROG_CXX
-AM_PROG_CC_C_O
-AC_PROG_LIBTOOL
-
-# Checks for header files.
-AC_HEADER_STDC
-AC_CHECK_HEADERS([limits.h stdint.h stdlib.h string.h])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-AC_TYPE_SIZE_T
-
-AC_CONFIG_FILES([
-   Makefile
-   tests/Makefile
-   docs/Makefile
-   src/Makefile
-   examples/Makefile
-])
-AC_CONFIG_SUBDIRS([jansson])
-AC_OUTPUT
-
-echo
-echo
-# figlet -f slant "Avro C"
-echo "    ___                      ______"
-echo "   /   |_   ___________     / ____/"
-echo "  / /| | | / / ___/ __ \   / /     "
-echo " / ___ | |/ / /  / /_/ /  / /___   "
-echo "/_/  |_|___/_/   \____/   \____/   "
-echo
-echo "Version: $VERSION"
-echo "Library: $LIBAVRO_VERSION" 	       
-echo
diff --git a/branch-1.3/lang/c/docs/Makefile.am b/branch-1.3/lang/c/docs/Makefile.am
deleted file mode 100644
index a200c21..0000000
--- a/branch-1.3/lang/c/docs/Makefile.am
+++ /dev/null
@@ -1,8 +0,0 @@
-EXTRA_DIST = index.txt index.html
-
-SUFFIXES = .html
-
-.txt.html:
-	asciidoc --unsafe --attribute=avro_version=@VERSION@ --attribute=libavro_version=@LIBAVRO_VERSION@ --attribute=toc -n -o $@ $<
-
-docs: index.html
diff --git a/branch-1.3/lang/c/docs/index.txt b/branch-1.3/lang/c/docs/index.txt
deleted file mode 100644
index 7c08ea0..0000000
--- a/branch-1.3/lang/c/docs/index.txt
+++ /dev/null
@@ -1,241 +0,0 @@
-Avro C
-======
-
-The current version of Avro is +{avro_version}+.  The current version of +libavro+ is +{libavro_version}+.
-This document was created +{docdate}+.
-
-== Introduction to Avro
-
-Avro is a data serialization system.
-
-Avro provides:
-
-* Rich data structures.
-* A compact, fast, binary data format.
-* A container file, to store persistent data.
-* Remote procedure call (RPC).
-
-This document will focus on the C implementation of Avro.  To learn more about
-Avro in general, http://hadoop.apache.org/avro/[visit the Avro website].
-
-== Introduction to Avro C 
-
-....
-    ___                      ______
-   /   |_   ___________     / ____/
-  / /| | | / / ___/ __ \   / /     
- / ___ | |/ / /  / /_/ /  / /___   
-/_/  |_|___/_/   \____/   \____/   
-                                   
-....
-
-[quote,Waldi Ravens,(walra%moacs11 @ nl.net) 94/03/18]
-____
-A C program is like a fast dance on a newly waxed dance floor by people carrying razors.
-____
-
-The C implementation has been tested on +MacOSX+ and +Linux+ but, over
-time, the number of supported OSes should grow.  Please let us know if
-you're using +Avro C+ on other systems. There are no dependencies on 
-external libraries.  We embedded http://www.digip.org/jansson/[Jansson] into
-+Avro C+ for parsing JSON into schema structures.  
-
-The C implementation supports:
-
-* binary encoding/decoding of all primitive and complex data types
-* storage to an Avro Object Container File
-* schema resolution, promotion and projection
-* validating and non-validating mode for writing Avro data
-
-The C implementation is lacking:
-
-* RPC
-
-To learn about the API, take a look at the examples and reference files
-later in this document.
-
-We're always looking for contributions so, if you're a C hacker, please
-feel free to http://hadoop.apache.org/avro/[submit patches to the
-project].
-
-== Reference Counting
-
-+Avro C+ does reference counting for all schema and data objects.
-When the number of references drops to zero, the memory is freed.
-
-For example, to create and free a string, you would use:
-----
-avro_datum_t string = avro_string("This is my string");
-
-...
-avro_datum_decref(string);
-----
-
-Things get a little more complicated when you consider more elaborate
-schema and data structures.
-
-For example, let's say that you create a record with a single
-string field:
-----
-avro_datum_t example = avro_record("Example");
-avro_datum_t solo_field = avro_string("Example field value");
-
-avro_record_set(example, "solo", solo_field);
-
-...
-avro_datum_decref(example);
-----
-
-In this example, the +solo_field+ datum would *not* be freed since it
-has two references: the original reference and a reference inside
-the +Example+ record.  The +avro_datum_decref(example)+ call drops
-the reference count to one.  If you are finished with the +solo_field+
-datum, then you need to call +avro_datum_decref(solo_field)+ to
-release the remaining reference and free it.
-
-== Wrap It and Give It
-
-You'll notice that some datatypes can be "wrapped" and "given".  This
-allows C programmers the freedom to decide who is responsible for
-the memory.  Let's take strings for example.
-
-To create a string datum, you have three different methods:
-----
-avro_datum_t avro_string(const char *str);
-avro_datum_t avro_wrapstring(const char *str);
-avro_datum_t avro_givestring(const char *str);
-----
-
-If you use +avro_string+, then +Avro C+ will make a copy of your
-string and free it when the datum is dereferenced.  In some cases,
-especially when dealing with large amounts of data, you want 
-to avoid this memory copy.  That's where +avro_wrapstring+ and
-+avro_givestring+ can help.
-
-If you use +avro_wrapstring+, then +Avro C+ will do no memory 
-management at all.  It will just save a pointer to your data and
-it's your responsibility to free the string.  
-
-WARNING: When using +avro_wrapstring+, do not free the string 
-before you dereference the string datum with +avro_datum_decref()+.
-
-Lastly, if you use +avro_givestring+ then +Avro C+ will free the
-string later when the datum is dereferenced.  In a sense, you
-are "giving" responsibility for freeing the string to +Avro C+.
-
-[WARNING] 
-===============================
-Don't "give" +Avro C+ a string that you haven't allocated from the heap with e.g. +malloc+ or +strdup+.
-
-For example, *don't* do this:
-----
-avro_datum_t bad_idea = avro_givestring("This isn't allocated on the heap");
-----
-===============================
-
-== Schema Validation
-
-If you want to write a datum, you would use the following function
-
-[source,c]
-----
-int avro_write_data(avro_writer_t writer,
-                    avro_schema_t writers_schema, avro_datum_t datum);
-----
-
-If you pass in a +writers_schema+, then your +datum+ will be validated *before*
-it is sent to the +writer+.  This check ensures that your data has the 
-correct format.  If you are certain your datum is correct, you can pass
-a +NULL+ value for +writers_schema+ and +Avro C+ will not validate before
-writing.
-
-NOTE: Data written to an Avro Object Container File is always validated.
-
-== Examples
-
-[quote,Dante Hicks]
-____
-I'm not even supposed to be here today!
-____
-
-Imagine you're a freelance hacker in Leonardo, New Jersey and you've
-been approached by the owner of the local *Quick Stop Convenience* store.
-He wants you to create a contact database in case he needs to call
-employees to work on their day off.
-
-You might build a simple contact system using Avro C like the following...
-
-[source,c]
-----
-include::../examples/quickstop.c[]
-----
-
-When you compile and run this program, you should get the following output
-
-----
-Successfully added Hicks, Dante id=1
-Successfully added Graves, Randal id=2
-Successfully added Loughran, Veronica id=3
-Successfully added Bree, Caitlin id=4
-Successfully added Silent, Bob id=5
-Successfully added ???, Jay id=6
-
-Avro is compact. Here is the data for all 6 people.
-| 02 0A 44 61 6E 74 65 0A | 48 69 63 6B 73 1C 28 35 |	..Dante.Hicks.(5
-| 35 35 29 20 31 32 33 2D | 34 35 36 37 40 04 0C 52 |	55) 123-4567@..R
-| 61 6E 64 61 6C 0C 47 72 | 61 76 65 73 1C 28 35 35 |	andal.Graves.(55
-| 35 29 20 31 32 33 2D 35 | 36 37 38 3C 06 10 56 65 |	5) 123-5678<..Ve
-| 72 6F 6E 69 63 61 10 4C | 6F 75 67 68 72 61 6E 1C |	ronica.Loughran.
-| 28 35 35 35 29 20 31 32 | 33 2D 30 39 38 37 38 08 |	(555) 123-09878.
-| 0E 43 61 69 74 6C 69 6E | 08 42 72 65 65 1C 28 35 |	.Caitlin.Bree.(5
-| 35 35 29 20 31 32 33 2D | 32 33 32 33 36 0A 06 42 |	55) 123-23236..B
-| 6F 62 0C 53 69 6C 65 6E | 74 1C 28 35 35 35 29 20 |	ob.Silent.(555) 
-| 31 32 33 2D 36 34 32 32 | 3A 0C 06 4A 61 79 06 3F |	123-6422:..Jay.?
-| 3F 3F 1C 28 35 35 35 29 | 20 31 32 33 2D 39 31 38 |	??.(555) 123-918
-| 32 34 .. .. .. .. .. .. | .. .. .. .. .. .. .. .. |	24..............
-
-Now let's read all the records back out
-1 |           Dante |           Hicks |  (555) 123-4567 | 32
-2 |          Randal |          Graves |  (555) 123-5678 | 30
-3 |        Veronica |        Loughran |  (555) 123-0987 | 28
-4 |         Caitlin |            Bree |  (555) 123-2323 | 27
-5 |             Bob |          Silent |  (555) 123-6422 | 29
-6 |             Jay |             ??? |  (555) 123-9182 | 26
-
-
-Use projection to print only the First name and phone numbers
-          Dante |  (555) 123-4567 | 
-         Randal |  (555) 123-5678 | 
-       Veronica |  (555) 123-0987 | 
-        Caitlin |  (555) 123-2323 | 
-            Bob |  (555) 123-6422 | 
-            Jay |  (555) 123-9182 | 
-----
-
-The *Quick Stop* owner was so pleased, he asked you to create a 
-movie database for his *RST Video* store.
-
-== Reference files
-
-=== avro.h
-
-The +avro.h+ header file contains the complete public API
-for +Avro C+.  The documentation is rather sparse right now
-but we'll be adding more information soon.
-
-[source,c]
-----
-include::../src/avro.h[]
-----
-
-=== test_avro_data.c 
-
-Another good way to learn how to encode/decode data in +Avro C+ is
-to look at the +test_avro_data.c+ unit test.  This simple unit test
-checks that all the Avro types can be encoded/decoded correctly.
-
-[source,c]
-----
-include::../tests/test_avro_data.c[]
-----
-
diff --git a/branch-1.3/lang/c/examples/CMakeLists.txt b/branch-1.3/lang/c/examples/CMakeLists.txt
deleted file mode 100644
index 17ad7f8..0000000
--- a/branch-1.3/lang/c/examples/CMakeLists.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-
-add_executable(quickstop quickstop.c)
-target_link_libraries(quickstop avro-static)
-
diff --git a/branch-1.3/lang/c/examples/Makefile.am b/branch-1.3/lang/c/examples/Makefile.am
deleted file mode 100644
index 474ae5b..0000000
--- a/branch-1.3/lang/c/examples/Makefile.am
+++ /dev/null
@@ -1,14 +0,0 @@
-# -pedantic
-AM_CPPFLAGS=-I$(top_srcdir)/src 
-AM_CFLAGS=-Wall -Wextra -Wunused-parameter
-ACLOCAL_AMFLAGS=-I m4
-
-check_PROGRAMS=quickstop
-TESTS=$(check_PROGRAMS)
-
-examples_LDADD=$(top_builddir)/src/libavro.la
-
-quickstop_SOURCES=quickstop.c
-quickstop_LDADD=$(examples_LDADD)
-
-CLEANFILES=quickstop.db
diff --git a/branch-1.3/lang/c/examples/quickstop.c b/branch-1.3/lang/c/examples/quickstop.c
deleted file mode 100644
index 3d4f959..0000000
--- a/branch-1.3/lang/c/examples/quickstop.c
+++ /dev/null
@@ -1,200 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License.
- */
-
-#include <avro.h>
-#include <inttypes.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <unistd.h>
-
-avro_schema_t person_schema;
-int64_t id = 0;
-
-/* A simple schema for our tutorial */
-#define PERSON_SCHEMA \
-"{\"type\":\"record\",\
-  \"name\":\"Person\",\
-  \"fields\":[\
-     {\"name\": \"ID\", \"type\": \"long\"},\
-     {\"name\": \"First\", \"type\": \"string\"},\
-     {\"name\": \"Last\", \"type\": \"string\"},\
-     {\"name\": \"Phone\", \"type\": \"string\"},\
-     {\"name\": \"Age\", \"type\": \"int\"}]}"
-
-/* Parse schema into a schema data structure */
-void init_schema(void)
-{
-	avro_schema_error_t error;
-	if (avro_schema_from_json(PERSON_SCHEMA, sizeof(PERSON_SCHEMA),
-				  &person_schema, &error)) {
-		fprintf(stderr, "Unable to parse person schema\n");
-		exit(EXIT_FAILURE);
-	}
-}
-
-/* Create a datum to match the person schema and save it */
-void
-add_person(avro_file_writer_t db, const char *first, const char *last,
-	   const char *phone, int32_t age)
-{
-	avro_datum_t person = avro_record("Person", NULL);
-
-	avro_datum_t id_datum = avro_int64(++id);
-	avro_datum_t first_datum = avro_string(first);
-	avro_datum_t last_datum = avro_string(last);
-	avro_datum_t age_datum = avro_int32(age);
-	avro_datum_t phone_datum = avro_string(phone);
-
-	if (avro_record_set(person, "ID", id_datum)
-	    || avro_record_set(person, "First", first_datum)
-	    || avro_record_set(person, "Last", last_datum)
-	    || avro_record_set(person, "Age", age_datum)
-	    || avro_record_set(person, "Phone", phone_datum)) {
-		fprintf(stderr, "Unable to create Person datum structure");
-		exit(EXIT_FAILURE);
-	}
-
-	if (avro_file_writer_append(db, person)) {
-		fprintf(stderr,
-			"Unable to write Person datum to memory buffer");
-		exit(EXIT_FAILURE);
-	}
-
-	/* Decrement all our references to prevent memory from leaking */
-	avro_datum_decref(id_datum);
-	avro_datum_decref(first_datum);
-	avro_datum_decref(last_datum);
-	avro_datum_decref(age_datum);
-	avro_datum_decref(phone_datum);
-	avro_datum_decref(person);
-
-	fprintf(stdout, "Successfully added %s, %s id=%"PRId64"\n", last, first, id);
-}
-
-int print_person(avro_file_reader_t db, avro_schema_t reader_schema)
-{
-	int rval;
-	avro_datum_t person;
-
-	rval = avro_file_reader_read(db, reader_schema, &person);
-	if (rval == 0) {
-		int64_t i64;
-		int32_t i32;
-		char *p;
-		avro_datum_t id_datum, first_datum, last_datum, phone_datum,
-		    age_datum;
-
-		if (avro_record_get(person, "ID", &id_datum) == 0) {
-			avro_int64_get(id_datum, &i64);
-			fprintf(stdout, "%"PRId64" | ", i64);
-		}
-		if (avro_record_get(person, "First", &first_datum) == 0) {
-			avro_string_get(first_datum, &p);
-			fprintf(stdout, "%15s | ", p);
-		}
-		if (avro_record_get(person, "Last", &last_datum) == 0) {
-			avro_string_get(last_datum, &p);
-			fprintf(stdout, "%15s | ", p);
-		}
-		if (avro_record_get(person, "Phone", &phone_datum) == 0) {
-			avro_string_get(phone_datum, &p);
-			fprintf(stdout, "%15s | ", p);
-		}
-		if (avro_record_get(person, "Age", &age_datum) == 0) {
-			avro_int32_get(age_datum, &i32);
-			fprintf(stdout, "%d", i32);
-		}
-		fprintf(stdout, "\n");
-
-		/* We no longer need this memory */
-		avro_datum_decref(person);
-	}
-	return rval;
-}
-
-int main(void)
-{
-	int rval;
-	avro_file_reader_t dbreader;
-	avro_file_writer_t db;
-	avro_schema_t projection_schema, first_name_schema, phone_schema;
-	int64_t i;
-	const char *dbname = "quickstop.db";
-
-	/* Initialize the schema structure from JSON */
-	init_schema();
-
-	/* Delete the database if it exists */
-	unlink(dbname);
-	/* Create a new database */
-	rval = avro_file_writer_create(dbname, person_schema, &db);
-	if (rval) {
-		fprintf(stderr, "There was an error creating %s\n", dbname);
-		exit(EXIT_FAILURE);
-	}
-	/* Add people to the database */
-	add_person(db, "Dante", "Hicks", "(555) 123-4567", 32);
-	add_person(db, "Randal", "Graves", "(555) 123-5678", 30);
-	add_person(db, "Veronica", "Loughran", "(555) 123-0987", 28);
-	add_person(db, "Caitlin", "Bree", "(555) 123-2323", 27);
-	add_person(db, "Bob", "Silent", "(555) 123-6422", 29);
-	add_person(db, "Jay", "???", "(555) 123-9182", 26);
-	avro_file_writer_close(db);
-
-	fprintf(stdout, "\nNow let's read all the records back out\n");
-
-	/* Read all the records and print them */
-	avro_file_reader(dbname, &dbreader);
-	for (i = 0; i < id; i++) {
-		if (print_person(dbreader, NULL)) {
-			fprintf(stderr, "Error printing person\n");
-			exit(EXIT_FAILURE);
-		}
-	}
-	avro_file_reader_close(dbreader);
-
-	/* You can also use projection to decode only the data you are
-	   interested in.  This is particularly useful when you have
-	   huge data sets and you're only interested in particular fields,
-	   e.g. your contacts' first names and phone numbers */
-	projection_schema = avro_schema_record("Person", NULL);
-	first_name_schema = avro_schema_string();
-	phone_schema = avro_schema_string();
-	avro_schema_record_field_append(projection_schema, "First",
-					first_name_schema);
-	avro_schema_record_field_append(projection_schema, "Phone",
-					phone_schema);
-
-	/* Read only the record you're interested in */
-	fprintf(stdout,
-		"\n\nUse projection to print only the First name and phone numbers\n");
-	avro_file_reader(dbname, &dbreader);
-	for (i = 0; i < id; i++) {
-		if (print_person(dbreader, projection_schema)) {
-			fprintf(stderr, "Error printing person\n");
-			exit(EXIT_FAILURE);
-		}
-	}
-	avro_file_reader_close(dbreader);
-	avro_schema_decref(first_name_schema);
-	avro_schema_decref(phone_schema);
-	avro_schema_decref(projection_schema);
-
-	/* We don't need this schema anymore */
-	avro_schema_decref(person_schema);
-	return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/.gitignore b/branch-1.3/lang/c/jansson/.gitignore
deleted file mode 100644
index df81091..0000000
--- a/branch-1.3/lang/c/jansson/.gitignore
+++ /dev/null
@@ -1,6 +0,0 @@
-config.guess
-config.sub
-depcomp
-install-sh
-ltmain.sh
-missing
diff --git a/branch-1.3/lang/c/jansson/CHANGES b/branch-1.3/lang/c/jansson/CHANGES
deleted file mode 100644
index abe0062..0000000
--- a/branch-1.3/lang/c/jansson/CHANGES
+++ /dev/null
@@ -1,71 +0,0 @@
-Version 1.1.3, released 2009-12-18
-
-* Encode reals correctly, so that first encoding and then decoding a
-  real always produces the same value
-* Don't export private symbols in libjansson.so
-
-
-Version 1.1.2, released 2009-11-08
-
-* Fix a bug where an error message was not produced if the input file
-  could not be opened in json_load_file()
-* Fix an assertion failure in decoder caused by a minus sign without a
-  digit after it
-* Remove an unneeded include for stdint.h in jansson.h
-
-
-Version 1.1.1, released 2009-10-26
-
-* All documentation files were not distributed with v1.1; build
-  documentation in make distcheck to prevent this in the future
-* Fix v1.1 release date in CHANGES
-
-
-Version 1.1, released 2009-10-20
-
-* API additions and improvements:
-  - Extend array and object APIs
-  - Add functions to modify integer, real and string values
-  - Improve argument validation
-  - Use unsigned int instead of uint32_t for encoding flags
-* Enhance documentation
-  - Add getting started guide and tutorial
-  - Fix some typos
-  - General clarifications and cleanup
-* Check for integer and real overflows and underflows in decoder
-* Make singleton values thread-safe (true, false and null)
-* Enhance circular reference handling
-* Don't define -std=c99 in AM_CFLAGS
-* Add C++ guards to jansson.h
-* Minor performance and portability improvements
-* Expand test coverage
-
-
-Version 1.0.4, released 2009-10-11
-
-* Relax Autoconf version requirement to 2.59
-* Make Jansson compile on platforms where plain char is unsigned
-* Fix API tests for object
-
-
-Version 1.0.3, released 2009-09-14
-
-* Check for integer and real overflows and underflows in decoder
-* Use the Python json module for tests, or simplejson if the json
-  module is not found
-* Distribute changelog (this file)
-
-
-Version 1.0.2, released 2009-09-08
-
-* Handle EOF correctly in decoder
-
-
-Version 1.0.1, released 2009-09-04
-
-* Fixed broken json_is_boolean()
-
-
-Version 1.0, released 2009-08-25
-
-* Initial release
diff --git a/branch-1.3/lang/c/jansson/LICENSE b/branch-1.3/lang/c/jansson/LICENSE
deleted file mode 100644
index 6c6dbed..0000000
--- a/branch-1.3/lang/c/jansson/LICENSE
+++ /dev/null
@@ -1,19 +0,0 @@
-Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in
-all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE.
diff --git a/branch-1.3/lang/c/jansson/Makefile.am b/branch-1.3/lang/c/jansson/Makefile.am
deleted file mode 100644
index e64f397..0000000
--- a/branch-1.3/lang/c/jansson/Makefile.am
+++ /dev/null
@@ -1,7 +0,0 @@
-EXTRA_DIST = CHANGES LICENSE README.rst
-SUBDIRS = doc src test
-
-distcheck-hook:
-	sphinx-build -b html -W \
-		$(distdir)/doc \
-		$(distdir)/_build/doc/.build/html
diff --git a/branch-1.3/lang/c/jansson/README.rst b/branch-1.3/lang/c/jansson/README.rst
deleted file mode 100644
index ebd3941..0000000
--- a/branch-1.3/lang/c/jansson/README.rst
+++ /dev/null
@@ -1,63 +0,0 @@
-Jansson README
-==============
-
-Jansson_ is a C library for encoding, decoding and manipulating JSON
-data. Its main features and design principles are:
-
-- Simple and intuitive API and data model
-
-- Comprehensive documentation
-
-- No dependencies on other libraries
-
-- Full Unicode support (UTF-8)
-
-- Extensive test suite
-
-Jansson is licensed under the `MIT license`_; see LICENSE in the
-source distribution for details.
-
-
-Compilation and Installation
-----------------------------
-
-If you obtained a source tarball, just use the standard autotools
-commands::
-
-   $ ./configure && make && make install
-
-If the source has been checked out from a Git repository, the
-./configure script has to be generated first. The easiest way is to use
-autoreconf::
-
-   $ autoreconf -i
-
-To run the test suite, invoke::
-
-   $ make check
-
-Python_ is required to run the tests.
-
-
-Documentation
--------------
-
-Documentation is in the ``doc/`` subdirectory. It's written in
-reStructuredText_ with Sphinx_ annotations, so reading it in plain may
-be inconvenient. For this reason, prebuilt HTML documentation is
-available at http://www.digip.org/jansson/doc/.
-
-To generate HTML documentation yourself, invoke::
-
-   cd doc/
-   sphinx-build . .build/html
-
-... and point your browser to ``.build/html/index.html``. Sphinx_ is
-required to generate the documentation.
-
-
-.. _Jansson: http://www.digip.org/jansson/
-.. _`MIT license`: http://www.opensource.org/licenses/mit-license.php
-.. _Python: http://www.python.org/
-.. _reStructuredText: http://docutils.sourceforge.net/rst.html
-.. _Sphinx: http://sphinx.pocoo.org/
diff --git a/branch-1.3/lang/c/jansson/configure.ac b/branch-1.3/lang/c/jansson/configure.ac
deleted file mode 100644
index d2d7cdc..0000000
--- a/branch-1.3/lang/c/jansson/configure.ac
+++ /dev/null
@@ -1,29 +0,0 @@
-AC_PREREQ([2.59])
-AC_INIT([jansson], [1.1.3], [petri@digip.org])
-
-AM_INIT_AUTOMAKE([foreign])
-
-AC_CONFIG_SRCDIR([src/value.c])
-AC_CONFIG_HEADERS([config.h])
-
-# Checks for programs.
-AC_PROG_CC
-AC_PROG_LIBTOOL
-
-# Checks for libraries.
-
-# Checks for header files.
-
-# Checks for typedefs, structures, and compiler characteristics.
-
-# Checks for library functions.
-
-AC_CONFIG_FILES([
-        Makefile
-        doc/Makefile
-        src/Makefile
-        test/Makefile
-        test/testdata/Makefile
-        test/testprogs/Makefile
-])
-AC_OUTPUT
diff --git a/branch-1.3/lang/c/jansson/doc/Makefile.am b/branch-1.3/lang/c/jansson/doc/Makefile.am
deleted file mode 100644
index b00ee56..0000000
--- a/branch-1.3/lang/c/jansson/doc/Makefile.am
+++ /dev/null
@@ -1,7 +0,0 @@
-EXTRA_DIST = \
-	conf.py apiref.rst gettingstarted.rst github_commits.c index.rst \
-	tutorial.rst ext/refcounting.py
-
-clean-local:
-	rm -rf .build
-	rm -f ext/refcounting.pyc
diff --git a/branch-1.3/lang/c/jansson/doc/README b/branch-1.3/lang/c/jansson/doc/README
deleted file mode 100644
index 3b10d2a..0000000
--- a/branch-1.3/lang/c/jansson/doc/README
+++ /dev/null
@@ -1,5 +0,0 @@
-To build the documentation, invoke
-
-    sphinx-build . .build/html
-
-in this directory. Then point your browser to .build/html/index.html.
diff --git a/branch-1.3/lang/c/jansson/doc/apiref.rst b/branch-1.3/lang/c/jansson/doc/apiref.rst
deleted file mode 100644
index 3ba5fbf..0000000
--- a/branch-1.3/lang/c/jansson/doc/apiref.rst
+++ /dev/null
@@ -1,610 +0,0 @@
-.. _apiref:
-
-*************
-API Reference
-*************
-
-.. highlight:: c
-
-Preliminaries
-=============
-
-All declarations are in :file:`jansson.h`, so it's enough to
-
-::
-
-   #include <jansson.h>
-
-in each source file.
-
-All constants are prefixed with ``JSON_`` and other identifiers with
-``json_``. Type names are suffixed with ``_t`` and ``typedef``\ 'd so
-that the ``struct`` keyword need not be used.
-
-
-Value Representation
-====================
-
-The JSON specification (:rfc:`4627`) defines the following data types:
-*object*, *array*, *string*, *number*, *boolean*, and *null*. JSON
-types are used dynamically; arrays and objects can hold any other data
-type, including themselves. For this reason, Jansson's type system is
-also dynamic in nature. There's one C type to represent all JSON
-values, and this structure knows the type of the JSON value it holds.
-
-.. ctype:: json_t
-
-  This data structure is used throughout the library to represent all
-  JSON values. It always contains the type of the JSON value it holds
-  and the value's reference count. The rest depends on the type of the
-  value.
-
-Objects of :ctype:`json_t` are always used through a pointer. There
-are APIs for querying the type, manipulating the reference count, and
-for constructing and manipulating values of different types.
-
-Unless noted otherwise, all API functions return an error value if an
-error occurs. Depending on the function's signature, the error value
-is either *NULL* or -1. Invalid arguments or invalid input are
-apparent sources for errors. Memory allocation and I/O operations may
-also cause errors.
-
-
-Type
-----
-
-The type of a JSON value is queried and tested using the following
-functions:
-
-.. ctype:: enum json_type
-
-   The type of a JSON value. The following members are defined:
-
-   +-------------------------+
-   | :const:`JSON_OBJECT`    |
-   +-------------------------+
-   | :const:`JSON_ARRAY`     |
-   +-------------------------+
-   | :const:`JSON_STRING`    |
-   +-------------------------+
-   | :const:`JSON_INTEGER`   |
-   +-------------------------+
-   | :const:`JSON_REAL`      |
-   +-------------------------+
-   | :const:`JSON_TRUE`      |
-   +-------------------------+
-   | :const:`JSON_FALSE`     |
-   +-------------------------+
-   | :const:`JSON_NULL`      |
-   +-------------------------+
-
-   These correspond to JSON object, array, string, number, boolean and
-   null. A number is represented by either a value of the type
-   :const:`JSON_INTEGER` or of the type :const:`JSON_REAL`. A true
-   boolean value is represented by a value of the type
-   :const:`JSON_TRUE` and false by a value of the type
-   :const:`JSON_FALSE`.
-
-.. cfunction:: int json_typeof(const json_t *json)
-
-   Return the type of the JSON value (a :ctype:`json_type` cast to
-   :ctype:`int`). *json* MUST NOT be *NULL*. This function is actually
-   implemented as a macro for speed.
-
-.. cfunction:: json_is_object(const json_t *json)
-               json_is_array(const json_t *json)
-               json_is_string(const json_t *json)
-               json_is_integer(const json_t *json)
-               json_is_real(const json_t *json)
-               json_is_true(const json_t *json)
-               json_is_false(const json_t *json)
-               json_is_null(const json_t *json)
-
-   These functions (actually macros) return true (non-zero) for values
-   of the given type, and false (zero) for values of other types and
-   for *NULL*.
-
-.. cfunction:: json_is_number(const json_t *json)
-
-   Returns true for values of types :const:`JSON_INTEGER` and
-   :const:`JSON_REAL`, and false for other types and for *NULL*.
-
-.. cfunction:: json_is_boolean(const json_t *json)
-
-   Returns true for types :const:`JSON_TRUE` and :const:`JSON_FALSE`,
-   and false for values of other types and for *NULL*.
-
-
-.. _apiref-reference-count:
-
-Reference Count
----------------
-
-The reference count is used to track whether a value is still in use
-or not. When a value is created, its reference count is set to 1. If
-a reference to a value is kept (e.g. a value is stored somewhere for
-later use), its reference count is incremented, and when the value is
-no longer needed, the reference count is decremented. When the
-reference count drops to zero, there are no references left, and the
-value can be destroyed.
-
-The following functions are used to manipulate the reference count.
-
-.. cfunction:: json_t *json_incref(json_t *json)
-
-   Increment the reference count of *json* if it's not *NULL*.
-   Returns *json*.
-
-.. cfunction:: void json_decref(json_t *json)
-
-   Decrement the reference count of *json*. As soon as a call to
-   :cfunc:`json_decref()` drops the reference count to zero, the value
-   is destroyed and it can no longer be used.
-
-Functions creating new JSON values set the reference count to 1. These
-functions are said to return a **new reference**. Other functions
-returning (existing) JSON values do not normally increase the
-reference count. These functions are said to return a **borrowed
-reference**. So, if the user wants to hold on to a value returned as
-a borrowed reference, they must call :cfunc:`json_incref`. As soon as
-the value is no longer needed, :cfunc:`json_decref` should be called
-to release the reference.
-
-Normally, all functions accepting a JSON value as an argument will
-manage the reference, i.e. increase and decrease the reference count
-as needed. However, some functions **steal** the reference, i.e. they
-have the same result as if the user called :cfunc:`json_decref()` on
-the argument right after calling the function. These are usually
-convenience functions for adding new values to containers without
-having to worry about the reference count.
-
-In the following sections it is clearly documented whether a function
-will return a new or borrowed reference or steal a reference to its
-argument.
-
-
-Circular References
--------------------
-
-A circular reference is created when an object or an array is,
-directly or indirectly, inserted inside itself. The direct case is
-simple::
-
-  json_t *obj = json_object();
-  json_object_set(obj, "foo", obj);
-
-Jansson will refuse to do this, and :cfunc:`json_object_set()` (and
-all the other such functions for objects and arrays) will return with
-an error status. The indirect case is the dangerous one::
-
-  json_t *arr1 = json_array(), *arr2 = json_array();
-  json_array_append(arr1, arr2);
-  json_array_append(arr2, arr1);
-
-In this example, the array ``arr2`` is contained in the array
-``arr1``, and vice versa. Jansson cannot check for this kind of
-indirect circular references without a performance hit, so it's up to
-the user to avoid them.
-
-If a circular reference is created, the memory consumed by the values
-cannot be freed by :cfunc:`json_decref()`. The reference counts never
-drop to zero because the values keep circular references to each
-other. Moreover, trying to encode the values with any of the
-encoding functions will fail. The encoder detects circular references
-and returns an error status.
-
-
-True, False and Null
-====================
-
-These values are implemented as singletons, so each of these functions
-returns the same value each time.
-
-.. cfunction:: json_t *json_true(void)
-
-   .. refcounting:: new
-
-   Returns the JSON true value.
-
-.. cfunction:: json_t *json_false(void)
-
-   .. refcounting:: new
-
-   Returns the JSON false value.
-
-.. cfunction:: json_t *json_null(void)
-
-   .. refcounting:: new
-
-   Returns the JSON null value.
-
-
-String
-======
-
-.. cfunction:: json_t *json_string(const char *value)
-
-   .. refcounting:: new
-
-   Returns a new JSON string, or *NULL* on error. *value* must be a
-   valid UTF-8 encoded Unicode string.
-
-.. cfunction:: const char *json_string_value(const json_t *string)
-
-   Returns the associated value of *string* as a null terminated UTF-8
-   encoded string, or *NULL* if *string* is not a JSON string.
-
-.. cfunction:: int json_string_set(const json_t *string, const char *value)
-
-   Sets the associated value of *string* to *value*. *value* must be a
-   valid UTF-8 encoded Unicode string. Returns 0 on success and -1 on
-   error.
-
-   .. versionadded:: 1.1
-
-
-Number
-======
-
-.. cfunction:: json_t *json_integer(int value)
-
-   .. refcounting:: new
-
-   Returns a new JSON integer, or *NULL* on error.
-
-.. cfunction:: int json_integer_value(const json_t *integer)
-
-   Returns the associated value of *integer*, or 0 if *integer* is
-   not a JSON integer.
-
-.. cfunction:: int json_integer_set(const json_t *integer, int value)
-
-   Sets the associated value of *integer* to *value*. Returns 0 on
-   success and -1 if *integer* is not a JSON integer.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: json_t *json_real(double value)
-
-   .. refcounting:: new
-
-   Returns a new JSON real, or *NULL* on error.
-
-.. cfunction:: double json_real_value(const json_t *real)
-
-   Returns the associated value of *real*, or 0.0 if *real* is not a
-   JSON real.
-
-.. cfunction:: int json_real_set(const json_t *real, double value)
-
-   Sets the associated value of *real* to *value*. Returns 0 on
-   success and -1 if *real* is not a JSON real.
-
-   .. versionadded:: 1.1
-
-In addition to the functions above, there's a common query function
-for integers and reals:
-
-.. cfunction:: double json_number_value(const json_t *json)
-
-   Returns the associated value of the JSON integer or JSON real
-   *json*, cast to double regardless of the actual type. If *json* is
-   neither JSON real nor JSON integer, 0.0 is returned.
-
-
-Array
-=====
-
-A JSON array is an ordered collection of other JSON values.
-
-.. cfunction:: json_t *json_array(void)
-
-   .. refcounting:: new
-
-   Returns a new JSON array, or *NULL* on error. Initially, the array
-   is empty.
-
-.. cfunction:: unsigned int json_array_size(const json_t *array)
-
-   Returns the number of elements in *array*, or 0 if *array* is
-   *NULL* or not a JSON array.
-
-.. cfunction:: json_t *json_array_get(const json_t *array, unsigned int index)
-
-   .. refcounting:: borrow
-
-   Returns the element in *array* at position *index*. The valid range
-   for *index* is from 0 to the return value of
-   :cfunc:`json_array_size()` minus 1. If *array* is not a JSON array,
-   if *array* is *NULL*, or if *index* is out of range, *NULL* is
-   returned.
-
-.. cfunction:: int json_array_set(json_t *array, unsigned int index, json_t *value)
-
-   Replaces the element in *array* at position *index* with *value*.
-   The valid range for *index* is from 0 to the return value of
-   :cfunc:`json_array_size()` minus 1. Returns 0 on success and -1 on
-   error.
-
-.. cfunction:: int json_array_set_new(json_t *array, unsigned int index, json_t *value)
-
-   Like :cfunc:`json_array_set()` but steals the reference to *value*.
-   This is useful when *value* is newly created and not used after
-   the call.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: int json_array_append(json_t *array, json_t *value)
-
-   Appends *value* to the end of *array*, growing the size of *array*
-   by 1. Returns 0 on success and -1 on error.
-
-.. cfunction:: int json_array_append_new(json_t *array, json_t *value)
-
-   Like :cfunc:`json_array_append()` but steals the reference to
-   *value*. This is useful when *value* is newly created and not used
-   after the call.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: int json_array_insert(json_t *array, unsigned int index, json_t *value)
-
-   Inserts *value* to *array* at position *index*, shifting the
-   elements at *index* and after it one position towards the end of
-   the array. Returns 0 on success and -1 on error.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: int json_array_insert_new(json_t *array, unsigned int index, json_t *value)
-
-   Like :cfunc:`json_array_insert()` but steals the reference to
-   *value*. This is useful when *value* is newly created and not used
-   after the call.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: int json_array_remove(json_t *array, unsigned int index)
-
-   Removes the element in *array* at position *index*, shifting the
-   elements after *index* one position towards the start of the array.
-   Returns 0 on success and -1 on error.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: int json_array_clear(json_t *array)
-
-   Removes all elements from *array*. Returns 0 on success and -1 on
-   error.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: int json_array_extend(json_t *array, json_t *other_array)
-
-   Appends all elements in *other_array* to the end of *array*.
-   Returns 0 on success and -1 on error.
-
-   .. versionadded:: 1.1
-
-
-Object
-======
-
-A JSON object is a dictionary of key-value pairs, where the key is a
-Unicode string and the value is any JSON value.
-
-.. cfunction:: json_t *json_object(void)
-
-   .. refcounting:: new
-
-   Returns a new JSON object, or *NULL* on error. Initially, the
-   object is empty.
-
-.. cfunction:: unsigned int json_object_size(const json_t *object)
-
-   Returns the number of elements in *object*, or 0 if *object* is not
-   a JSON object.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: json_t *json_object_get(const json_t *object, const char *key)
-
-   .. refcounting:: borrow
-
-   Get a value corresponding to *key* from *object*. Returns *NULL* if
-   *key* is not found and on error.
-
-.. cfunction:: int json_object_set(json_t *object, const char *key, json_t *value)
-
-   Set the value of *key* to *value* in *object*. *key* must be a
-   valid null terminated UTF-8 encoded Unicode string. If there
-   already is a value for *key*, it is replaced by the new value.
-   Returns 0 on success and -1 on error.
-
-.. cfunction:: int json_object_set_new(json_t *object, const char *key, json_t *value)
-
-   Like :cfunc:`json_object_set()` but steals the reference to
-   *value*. This is useful when *value* is newly created and not used
-   after the call.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: int json_object_del(json_t *object, const char *key)
-
-   Delete *key* from *object* if it exists. Returns 0 on success, or
-   -1 if *key* was not found.
-
-
-.. cfunction:: int json_object_clear(json_t *object)
-
-   Remove all elements from *object*. Returns 0 on success and -1 if
-   *object* is not a JSON object.
-
-   .. versionadded:: 1.1
-
-.. cfunction:: int json_object_update(json_t *object, json_t *other)
-
-   Update *object* with the key-value pairs from *other*, overwriting
-   existing keys. Returns 0 on success or -1 on error.
-
-   .. versionadded:: 1.1
-
-
-The following functions implement an iteration protocol for objects:
-
-.. cfunction:: void *json_object_iter(json_t *object)
-
-   Returns an opaque iterator which can be used to iterate over all
-   key-value pairs in *object*, or *NULL* if *object* is empty.
-
-.. cfunction:: void *json_object_iter_next(json_t *object, void *iter)
-
-   Returns an iterator pointing to the next key-value pair in *object*
-   after *iter*, or *NULL* if the whole object has been iterated
-   through.
-
-.. cfunction:: const char *json_object_iter_key(void *iter)
-
-   Extract the associated key from *iter*.
-
-.. cfunction:: json_t *json_object_iter_value(void *iter)
-
-   .. refcounting:: borrow
-
-   Extract the associated value from *iter*.
-
-The iteration protocol can be used for example as follows::
-
-   /* obj is a JSON object */
-   const char *key;
-   json_t *value;
-   void *iter = json_object_iter(obj);
-   while(iter)
-   {
-       key = json_object_iter_key(iter);
-       value = json_object_iter_value(iter);
-       /* use key and value ... */
-       iter = json_object_iter_next(obj, iter);
-   }
-
-
-Encoding
-========
-
-This section describes the functions that can be used to encode
-values to JSON. Only objects and arrays can be encoded, since they are
-the only valid "root" values of a JSON text.
-
-Each function takes a *flags* parameter that controls some aspects of
-how the data is encoded. Its default value is 0. The following macros
-can be ORed together to obtain *flags*.
-
-``JSON_INDENT(n)``
-   Pretty-print the result, indenting arrays and objects by *n*
-   spaces. The valid range for *n* is between 0 and 255, other values
-   result in an undefined output. If ``JSON_INDENT`` is not used or
-   *n* is 0, no pretty-printing is done and the result is a compact
-   representation.
-
-The following functions perform the actual JSON encoding. The result
-is in UTF-8.
-
-.. cfunction:: char *json_dumps(const json_t *root, unsigned long flags)
-
-   Returns the JSON representation of *root* as a string, or *NULL* on
-   error. *flags* is described above. The return value must be freed
-   by the caller using :cfunc:`free()`.
-
-.. cfunction:: int json_dumpf(const json_t *root, FILE *output, unsigned long flags)
-
-   Write the JSON representation of *root* to the stream *output*.
-   *flags* is described above. Returns 0 on success and -1 on error.
-   If an error occurs, something may have already been written to
-   *output*. In this case, the output is undefined and most likely not
-   valid JSON.
-
-.. cfunction:: int json_dump_file(const json_t *json, const char *path, unsigned long flags)
-
-   Write the JSON representation of *json* to the file *path*. If
-   *path* already exists, it is overwritten. *flags* is described
-   above. Returns 0 on success and -1 on error.
-
-
-Decoding
-========
-
-This section describes the functions that can be used to decode JSON
-text to the Jansson representation of JSON data. The JSON
-specification requires that a JSON text is either a serialized array
-or object, and this requirement is also enforced with the following
-functions.
-
-The only supported character encoding is UTF-8 (which ASCII is a
-subset of).
-
-.. ctype:: json_error_t
-
-   This data structure is used to return information on decoding
-   errors from the decoding functions. Its definition is repeated
-   here::
-
-      #define JSON_ERROR_TEXT_LENGTH  160
-
-      typedef struct {
-          char text[JSON_ERROR_TEXT_LENGTH];
-          int line;
-      } json_error_t;
-
-   *line* is the line number on which the error occurred, or -1 if
-   this information is not available. *text* contains the error
-   message (in UTF-8), or an empty string if a message is not
-   available.
-
-   The normal use of :ctype:`json_error_t` is to allocate it on the
-   stack, and pass a pointer to it to a decoding function. Example::
-
-      int main() {
-          json_t *json;
-          json_error_t error;
-
-          json = json_load_file("/path/to/file.json", &error);
-          if(!json) {
-              /* the error variable contains error information */
-          }
-          ...
-      }
-
-   Also note that if the decoding succeeded (``json != NULL`` in the
-   above example), the contents of ``error`` are unspecified.
-
-   All decoding functions also accept *NULL* as the
-   :ctype:`json_error_t` pointer, in which case no error information
-   is returned to the caller.
-
-The following functions perform the actual JSON decoding.
-
-.. cfunction:: json_t *json_loads(const char *input, json_error_t *error)
-
-   .. refcounting:: new
-
-   Decodes the JSON string *input* and returns the array or object it
-   contains, or *NULL* on error, in which case *error* is filled with
-   information about the error. See above for discussion on the
-   *error* parameter.
-
-.. cfunction:: json_t *json_loadf(FILE *input, json_error_t *error)
-
-   .. refcounting:: new
-
-   Decodes the JSON text in stream *input* and returns the array or
-   object it contains, or *NULL* on error, in which case *error* is
-   filled with information about the error. See above for discussion
-   on the *error* parameter.
-
-.. cfunction:: json_t *json_load_file(const char *path, json_error_t *error)
-
-   .. refcounting:: new
-
-   Decodes the JSON text in file *path* and returns the array or
-   object it contains, or *NULL* on error, in which case *error* is
-   filled with information about the error. See above for discussion
-   on the *error* parameter.
diff --git a/branch-1.3/lang/c/jansson/doc/conf.py b/branch-1.3/lang/c/jansson/doc/conf.py
deleted file mode 100644
index 377e75d..0000000
--- a/branch-1.3/lang/c/jansson/doc/conf.py
+++ /dev/null
@@ -1,191 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Jansson documentation build configuration file, created by
-# sphinx-quickstart on Thu Jul 30 11:35:32 2009.
-#
-# This file is execfile()d with the current directory set to its containing dir.
-#
-# The contents of this file are pickled, so don't put values in the namespace
-# that aren't pickleable (module imports are okay, they're removed automatically).
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-
-import sys, os
-sys.path.insert(0, os.path.abspath('ext'))
-
-# If your extensions (or modules documented by autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#sys.path.append(os.path.abspath('.'))
-
-# General configuration
-# ---------------------
-
-# Add any Sphinx extension module names here, as strings. They can be extensions
-# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = ['refcounting']
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = []
-
-# The suffix of source filenames.
-source_suffix = '.rst'
-
-# The encoding of source files.
-#source_encoding = 'utf-8'
-
-# The master toctree document.
-master_doc = 'index'
-
-# General information about the project.
-project = u'Jansson'
-copyright = u'2009, Petri Lehtinen'
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = '1.1'
-# The full version, including alpha/beta/rc tags.
-release = '1.1.3'
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-#today = ''
-# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
-
-# List of documents that shouldn't be included in the build.
-#unused_docs = []
-
-# List of directories, relative to source directory, that shouldn't be searched
-# for source files.
-exclude_trees = ['.build']
-
-# The reST default role (used for this markup: `text`) to use for all documents.
-#default_role = None
-
-# If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
-
-# If true, the current module name will be prepended to all description
-# unit titles (such as .. function::).
-#add_module_names = True
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-#show_authors = False
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-
-# Options for HTML output
-# -----------------------
-
-# The style sheet to use for HTML and HTML Help pages. A file of that name
-# must exist either in Sphinx' static/ path, or in one of the custom paths
-# given in html_static_path.
-html_style = 'default.css'
-
-# The name for this set of Sphinx documents.  If None, it defaults to
-# "<project> v<release> documentation".
-#html_title = None
-
-# A shorter title for the navigation bar.  Default is the same as html_title.
-#html_short_title = None
-
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
-#html_logo = None
-
-# The name of an image file (within the static path) to use as favicon of the
-# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
-# pixels large.
-#html_favicon = None
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = []
-
-# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
-# using the given strftime format.
-#html_last_updated_fmt = '%b %d, %Y'
-
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-#html_use_smartypants = True
-
-# Custom sidebar templates, maps document names to template names.
-#html_sidebars = {}
-
-# Additional templates that should be rendered to pages, maps page names to
-# template names.
-#html_additional_pages = {}
-
-# If false, no module index is generated.
-#html_use_modindex = True
-
-# If false, no index is generated.
-#html_use_index = True
-
-# If true, the index is split into individual pages for each letter.
-#html_split_index = False
-
-# If true, the reST sources are included in the HTML build as _sources/<name>.
-#html_copy_source = True
-
-# If true, an OpenSearch description file will be output, and all pages will
-# contain a <link> tag referring to it.  The value of this option must be the
-# base URL from which the finished HTML is served.
-#html_use_opensearch = ''
-
-# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
-#html_file_suffix = ''
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = 'Janssondoc'
-
-
-# Options for LaTeX output
-# ------------------------
-
-# The paper size ('letter' or 'a4').
-#latex_paper_size = 'letter'
-
-# The font size ('10pt', '11pt' or '12pt').
-#latex_font_size = '10pt'
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title, author, document class [howto/manual]).
-latex_documents = [
-  ('index', 'Jansson.tex', ur'Jansson Documentation',
-   ur'Petri Lehtinen', 'manual'),
-]
-
-# The name of an image file (relative to this directory) to place at the top of
-# the title page.
-#latex_logo = None
-
-# For "manual" documents, if this is true, then toplevel headings are parts,
-# not chapters.
-#latex_use_parts = False
-
-# Additional stuff for the LaTeX preamble.
-#latex_preamble = ''
-
-# Documents to append as an appendix to all manuals.
-#latex_appendices = []
-
-# If false, no module index is generated.
-#latex_use_modindex = True
diff --git a/branch-1.3/lang/c/jansson/doc/ext/refcounting.py b/branch-1.3/lang/c/jansson/doc/ext/refcounting.py
deleted file mode 100644
index 506b2ee..0000000
--- a/branch-1.3/lang/c/jansson/doc/ext/refcounting.py
+++ /dev/null
@@ -1,59 +0,0 @@
-"""
-    refcounting
-    ~~~~~~~~~~~
-
-    Reference count annotations for C API functions. Has the same
-    result as the sphinx.ext.refcounting extension but works for all
-    functions regardless of the signature, and the reference counting
-    information is written inline with the documentation instead of a
-    separate file.
-
-    Adds a new directive "refcounting". The directive has no content
-    and one required positional parameter: "new" or "borrow".
-
-    Example:
-
-    .. cfunction:: json_t *json_object(void)
-
-       .. refcounting:: new
-
-       <description of the json_object function>
-
-    :copyright: Copyright 2009 Petri Lehtinen <petri@digip.org>
-    :license: MIT, see LICENSE for details.
-"""
-
-from docutils import nodes
-
-class refcounting(nodes.emphasis): pass
-
-def visit(self, node):
-    self.visit_emphasis(node)
-
-def depart(self, node):
-    self.depart_emphasis(node)
-
-def html_visit(self, node):
-    self.body.append(self.starttag(node, 'em', '', CLASS='refcount'))
-
-def html_depart(self, node):
-    self.body.append('</em>')
-
-
-def refcounting_directive(name, arguments, options, content, lineno,
-                   content_offset, block_text, state, state_machine):
-    if arguments[0] == 'borrow':
-        text = 'Return value: Borrowed reference.'
-    elif arguments[0] == 'new':
-        text = 'Return value: New reference.'
-    else:
-        raise ValueError('Valid arguments: new, borrow')
-
-    return [refcounting(text, text)]
-
-def setup(app):
-    app.add_node(refcounting,
-                 html=(html_visit, html_depart),
-                 latex=(visit, depart),
-                 text=(visit, depart))
-    app.add_directive('refcounting', refcounting_directive, 0, (1, 0, 0))
diff --git a/branch-1.3/lang/c/jansson/doc/gettingstarted.rst b/branch-1.3/lang/c/jansson/doc/gettingstarted.rst
deleted file mode 100644
index 00dd16a..0000000
--- a/branch-1.3/lang/c/jansson/doc/gettingstarted.rst
+++ /dev/null
@@ -1,104 +0,0 @@
-***************
-Getting Started
-***************
-
-.. highlight:: c
-
-Compiling and Installing Jansson
-================================
-
-The Jansson source is available at
-http://www.digip.org/jansson/releases/.
-
-Unpack the source tarball and change to the source directory:
-
-.. parsed-literal::
-
-    bunzip2 -c jansson-|release|.tar.bz2 | tar xf -
-    cd jansson-|release|
-
-The source uses GNU Autotools (autoconf_, automake_, libtool_), so
-compiling and installing is extremely simple::
-
-    ./configure
-    make
-    make check
-    make install
-
-To change the destination directory (``/usr/local`` by default), use
-the ``--prefix=DIR`` argument to ``./configure``. See ``./configure
---help`` for the list of all possible installation options. (There are
-no options to customize the resulting Jansson binary.)
-
-The command ``make check`` runs the test suite distributed with
-Jansson. Python_ is required to run the tests. This step is not
-strictly necessary, but it may find possible problems that Jansson has
-on your platform. If any problems are found, please report them.
-
-If you obtained the source from a Git repository (or any other source
-control system), there's no ``./configure`` script as it's not kept in
-version control. To create the script, Autotools needs to be
-bootstrapped. There are many ways to do this, but the easiest one is
-to use ``autoreconf``::
-
-    autoreconf -vi
-
-This command creates the ``./configure`` script, which can then be
-used as described in the previous section.
-
-.. _autoconf: http://www.gnu.org/software/autoconf/
-.. _automake: http://www.gnu.org/software/automake/
-.. _libtool: http://www.gnu.org/software/libtool/
-.. _Python: http://www.python.org/
-
-
-Installing Prebuilt Binary Packages
------------------------------------
-
-Binary ``.deb`` packages for Ubuntu are available in `this PPA`_ at
-Launchpad_. Follow the instructions in the PPA ("Technical details
-about this PPA" link) to take the PPA into use. Then install the -dev
-package::
-
-  sudo apt-get install libjansson-dev
-
-.. _this PPA: http://launchpad.net/~petri/+archive/ppa
-.. _Launchpad: http://launchpad.net/
-
-
-Building the Documentation
---------------------------
-
-(This subsection describes how to build the HTML documentation you are
-currently reading, so it can be safely skipped.)
-
-Documentation is in the ``doc/`` subdirectory. It's written in
-reStructuredText_ with Sphinx_ annotations. To generate the HTML
-documentation, invoke::
-
-   cd doc/
-   sphinx-build . .build/html
-
-... and point your browser to ``.build/html/index.html``. Sphinx_ is
-required to generate the documentation.
-
-.. _reStructuredText: http://docutils.sourceforge.net/rst.html
-.. _Sphinx: http://sphinx.pocoo.org/
-
-
-Compiling Programs Using Jansson
-================================
-
-Jansson provides a single C header file, :file:`jansson.h`, so it's
-enough to put the line
-
-::
-
-    #include <jansson.h>
-
-in the beginning of every source file that uses Jansson.
-
-There's also just one library to link with, ``libjansson``. Compile and
-link the program as follows::
-
-    cc -o prog prog.c -ljansson
diff --git a/branch-1.3/lang/c/jansson/doc/github_commits.c b/branch-1.3/lang/c/jansson/doc/github_commits.c
deleted file mode 100644
index 75b5d34..0000000
--- a/branch-1.3/lang/c/jansson/doc/github_commits.c
+++ /dev/null
@@ -1,171 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include <jansson.h>
-#include <curl/curl.h>
-
-#define BUFFER_SIZE  (256 * 1024)  /* 256 KB */
-
-#define URL_FORMAT   "http://github.com/api/v2/json/commits/list/%s/%s/master"
-#define URL_SIZE     256
-
-/* Return the offset of the first newline in text or the length of
-   text if there's no newline */
-static int newline_offset(const char *text)
-{
-    const char *newline = strchr(text, '\n');
-    if(!newline)
-        return strlen(text);
-    else
-        return (int)(newline - text);
-}
-
-struct write_result
-{
-    char *data;
-    int pos;
-};
-
-static size_t write_response(void *ptr, size_t size, size_t nmemb, void *stream)
-{
-    struct write_result *result = (struct write_result *)stream;
-
-    if(result->pos + size * nmemb >= BUFFER_SIZE - 1)
-    {
-        fprintf(stderr, "error: too small buffer\n");
-        return 0;
-    }
-
-    memcpy(result->data + result->pos, ptr, size * nmemb);
-    result->pos += size * nmemb;
-
-    return size * nmemb;
-}
-
-static char *request(const char *url)
-{
-    CURL *curl;
-    CURLcode status;
-    char *data;
-    long code;
-
-    curl = curl_easy_init();
-    data = malloc(BUFFER_SIZE);
-    if(!curl || !data)
-    {
-        if(curl)
-            curl_easy_cleanup(curl);
-        free(data);
-        return NULL;
-    }
-
-    struct write_result write_result = {
-        .data = data,
-        .pos = 0
-    };
-
-    curl_easy_setopt(curl, CURLOPT_URL, url);
-    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_response);
-    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &write_result);
-
-    status = curl_easy_perform(curl);
-    if(status != 0)
-    {
-        fprintf(stderr, "error: unable to request data from %s:\n", url);
-        fprintf(stderr, "%s\n", curl_easy_strerror(status));
-        free(data);
-        curl_easy_cleanup(curl);
-        return NULL;
-    }
-
-    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
-    if(code != 200)
-    {
-        fprintf(stderr, "error: server responded with code %ld\n", code);
-        free(data);
-        curl_easy_cleanup(curl);
-        return NULL;
-    }
-
-    curl_easy_cleanup(curl);
-    curl_global_cleanup();
-
-    /* zero-terminate the result */
-    data[write_result.pos] = '\0';
-
-    return data;
-}
-
-int main(int argc, char *argv[])
-{
-    unsigned int i;
-    char *text;
-    char url[URL_SIZE];
-
-    json_t *root;
-    json_error_t error;
-    json_t *commits;
-
-    if(argc != 3)
-    {
-        fprintf(stderr, "usage: %s USER REPOSITORY\n\n", argv[0]);
-        fprintf(stderr, "List commits at USER's REPOSITORY.\n\n");
-        return 2;
-    }
-
-    snprintf(url, URL_SIZE, URL_FORMAT, argv[1], argv[2]);
-
-    text = request(url);
-    if(!text)
-        return 1;
-
-    root = json_loads(text, &error);
-    free(text);
-
-    if(!root)
-    {
-        fprintf(stderr, "error: on line %d: %s\n", error.line, error.text);
-        return 1;
-    }
-
-    commits = json_object_get(root, "commits");
-    if(!json_is_array(commits))
-    {
-        fprintf(stderr, "error: commits is not an array\n");
-        return 1;
-    }
-
-    for(i = 0; i < json_array_size(commits); i++)
-    {
-        json_t *commit, *id, *message;
-        const char *message_text;
-
-        commit = json_array_get(commits, i);
-        if(!json_is_object(commit))
-        {
-            fprintf(stderr, "error: commit %d is not an object\n", i + 1);
-            return 1;
-        }
-
-        id = json_object_get(commit, "id");
-        if(!json_is_string(id))
-        {
-            fprintf(stderr, "error: commit %d: id is not a string\n", i + 1);
-            return 1;
-        }
-
-        message = json_object_get(commit, "message");
-        if(!json_is_string(message))
-        {
-            fprintf(stderr, "error: commit %d: message is not a string\n", i + 1);
-            return 1;
-        }
-
-        message_text = json_string_value(message);
-        printf("%.8s %.*s\n",
-               json_string_value(id),
-               newline_offset(message_text),
-               message_text);
-    }
-
-    json_decref(root);
-    return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/doc/index.rst b/branch-1.3/lang/c/jansson/doc/index.rst
deleted file mode 100644
index 1ae9bf3..0000000
--- a/branch-1.3/lang/c/jansson/doc/index.rst
+++ /dev/null
@@ -1,44 +0,0 @@
-Jansson Documentation
-=====================
-
-This is the documentation for Jansson_ |release|, last updated |today|.
-
-Introduction
-------------
-
-Jansson_ is a C library for encoding, decoding and manipulating JSON
-data. Its main features and design principles are:
-
-- Simple and intuitive API and data model
-
-- Comprehensive documentation
-
-- No dependencies on other libraries
-
-- Full Unicode support (UTF-8)
-
-- Extensive test suite
-
-Jansson is licensed under the `MIT license`_; see LICENSE in the
-source distribution for details.
-
-
-.. _`MIT license`: http://www.opensource.org/licenses/mit-license.php
-.. _Jansson: http://www.digip.org/jansson/
-
-Contents
---------
-
-.. toctree::
-   :maxdepth: 2
-
-   gettingstarted
-   tutorial
-   apiref
-
-
-Indices and Tables
-==================
-
-* :ref:`genindex`
-* :ref:`search`
diff --git a/branch-1.3/lang/c/jansson/doc/tutorial.rst b/branch-1.3/lang/c/jansson/doc/tutorial.rst
deleted file mode 100644
index aaeb554..0000000
--- a/branch-1.3/lang/c/jansson/doc/tutorial.rst
+++ /dev/null
@@ -1,275 +0,0 @@
-.. _tutorial:
-
-********
-Tutorial
-********
-
-.. highlight:: c
-
-In this tutorial, we create a program that fetches the latest commits
-of a repository in GitHub_ over the web. One of the response formats
-supported by `GitHub API`_ is JSON, so the result can be parsed using
-Jansson.
-
-To stick to the scope of this tutorial, we will only cover the
-parts of the program related to handling JSON data. For reference, the
-full source code is available:
-:download:`github_commits.c`. To compile it (on Unix-like systems with
-gcc), use the following command::
-
-    gcc -o github_commits github_commits.c -ljansson -lcurl
-
-libcurl_ is used to communicate over the web, so it is required to
-compile the program.
-
-The command line syntax is::
-
-    github_commits USER REPOSITORY
-
-``USER`` is a GitHub user ID and ``REPOSITORY`` is the repository
-name. Please note that the GitHub API is rate limited, so if you run
-the program too many times within a short period of time, the server
-starts to respond with an error.
-
-.. _GitHub: http://github.com/
-.. _GitHub API: http://develop.github.com/
-.. _libcurl: http://curl.haxx.se/
-
-
-.. _tutorial-github-commits-api:
-
-The GitHub Commits API
-======================
-
-The `GitHub commits API`_ is used by sending HTTP requests to URLs
-starting with ``http://github.com/api/v2/json/commits/``. Our program
-only lists the latest commits, so the rest of the URL is
-``list/USER/REPOSITORY/BRANCH``, where ``USER``, ``REPOSITORY`` and
-``BRANCH`` are the GitHub user ID, the name of the repository, and the
-name of the branch whose commits are to be listed, respectively.
-
-GitHub responds with a JSON object of the following form:
-
-.. code-block:: none
-
-    {
-        "commits": [
-            {
-                "id": "<the commit ID>",
-                "message": "<the commit message>",
-                <more fields, not important to this tutorial>
-            },
-            {
-                "id": "<the commit ID>",
-                "message": "<the commit message>",
-                <more fields, not important to this tutorial>
-            },
-            <more commits...>
-        ]
-    }
-
-In our program, the HTTP request is sent using the following
-function::
-
-    static char *request(const char *url);
-
-It takes the URL as a parameter, performs an HTTP GET request, and
-returns a newly allocated string that contains the response body. If
-the request fails, an error message is printed to stderr and the
-return value is *NULL*. For full details, refer to :download:`the code
-<github_commits.c>`, as the actual implementation is not important
-here.
-
-.. _GitHub commits API: http://develop.github.com/p/commits.html
-
-.. _tutorial-the-program:
-
-The Program
-===========
-
-First the includes::
-
-    #include <string.h>
-    #include <jansson.h>
-
-Like all the programs using Jansson, we need to include
-:file:`jansson.h`.
-
-The following definitions are used to build the GitHub commits API
-request URL::
-
-   #define URL_FORMAT   "http://github.com/api/v2/json/commits/list/%s/%s/master"
-   #define URL_SIZE     256
-
-The following function is used when formatting the result to find the
-first newline in the commit message::
-
-    /* Return the offset of the first newline in text or the length of
-       text if there's no newline */
-    static int newline_offset(const char *text)
-    {
-        const char *newline = strchr(text, '\n');
-        if(!newline)
-            return strlen(text);
-        else
-            return (int)(newline - text);
-    }
-
-The main function follows. We first declare some variables and check
-the command line parameters::
-
-    unsigned int i;
-    char *text;
-    char url[URL_SIZE];
-
-    json_t *root;
-    json_error_t error;
-    json_t *commits;
-
-    if(argc != 3)
-    {
-        fprintf(stderr, "usage: %s USER REPOSITORY\n\n", argv[0]);
-        fprintf(stderr, "List commits at USER's REPOSITORY.\n\n");
-        return 2;
-    }
-
-Then we build the request URL using the user and repository names
-given as command line parameters::
-
-    snprintf(url, URL_SIZE, URL_FORMAT, argv[1], argv[2]);
-
-This uses the ``URL_SIZE`` and ``URL_FORMAT`` constants defined above.
-Now we're ready to actually request the JSON data over the web::
-
-    text = request(url);
-    if(!text)
-        return 1;
-
-If an error occurs, our function ``request`` prints the error and
-returns *NULL*, so it's enough to just return 1 from the main
-function.
-
-Next we'll call :cfunc:`json_loads()` to decode the JSON text we got
-as a response::
-
-    root = json_loads(text, &error);
-    free(text);
-
-    if(!root)
-    {
-        fprintf(stderr, "error: on line %d: %s\n", error.line, error.text);
-        return 1;
-    }
-
-We don't need the JSON text anymore, so we can free the ``text``
-variable right after decoding it. If :cfunc:`json_loads()` fails, it
-returns *NULL* and sets error information to the :ctype:`json_error_t`
-structure given as the second parameter. In this case, our program
-prints the error information out and returns 1 from the main function.
-
-Now we're ready to extract the data out of the decoded JSON response.
-The structure of the response JSON was explained in section
-:ref:`tutorial-github-commits-api`.
-
-First, we'll extract the ``commits`` array from the JSON response::
-
-    commits = json_object_get(root, "commits");
-    if(!json_is_array(commits))
-    {
-        fprintf(stderr, "error: commits is not an array\n");
-        return 1;
-    }
-
-This is the array that contains objects describing the latest commits in
-the repository. We check that the returned value really is an array.
-If the key ``commits`` doesn't exist, :cfunc:`json_object_get()`
-returns *NULL*, but :cfunc:`json_is_array()` handles this case, too.
-
-Then we proceed to loop over all the commits in the array::
-
-    for(i = 0; i < json_array_size(commits); i++)
-    {
-        json_t *commit, *id, *message;
-        const char *message_text;
-
-        commit = json_array_get(commits, i);
-        if(!json_is_object(commit))
-        {
-            fprintf(stderr, "error: commit %d is not an object\n", i + 1);
-            return 1;
-        }
-    ...
-
-The function :cfunc:`json_array_size()` returns the size of a JSON
-array. First, we again declare some variables and then extract the
-i'th element of the ``commits`` array using :cfunc:`json_array_get()`.
-We also check that the resulting value is a JSON object.
-
-Next we'll extract the commit ID and commit message, and check that
-they both are JSON strings::
-
-        id = json_object_get(commit, "id");
-        if(!json_is_string(id))
-        {
-            fprintf(stderr, "error: commit %d: id is not a string\n", i + 1);
-            return 1;
-        }
-
-        message = json_object_get(commit, "message");
-        if(!json_is_string(message))
-        {
-            fprintf(stderr, "error: commit %d: message is not a string\n", i + 1);
-            return 1;
-        }
-    ...
-
-And finally, we'll print the first 8 characters of the commit ID and
-the first line of the commit message. A C-style string is extracted
-from a JSON string using :cfunc:`json_string_value()`::
-
-        message_text = json_string_value(message);
-        printf("%.8s %.*s\n",
-               json_string_value(id),
-               newline_offset(message_text),
-               message_text);
-    }
-
-After sending the HTTP request, we decoded the JSON text using
-:cfunc:`json_loads()`, remember? It returns a *new reference* to the
-JSON value it decodes. When we're finished with the value, we'll need
-to decrease the reference count using :cfunc:`json_decref()`. This way
-Jansson can release the resources::
-
-    json_decref(root);
-    return 0;
-
-For a detailed explanation of reference counting in Jansson, see
-:ref:`apiref-reference-count` in :ref:`apiref`.
-
-The program's ready; let's test it and view the latest commits in
-Jansson's repository::
-
-    $ ./github_commits akheron jansson
-    86dc1d62 Fix indentation
-    b67e130f json_dumpf: Document the output shortage on error
-    4cd77771 Enhance handling of circular references
-    79009e62 json_dumps: Close the strbuffer if dumping fails
-    76999799 doc: Fix a small typo in apiref
-    22af193a doc/Makefile.am: Remove *.pyc in clean
-    951d091f Make integer, real and string mutable
-    185e107d Don't use non-portable asprintf()
-    ca7703fb Merge branch '1.0'
-    12cd4e8c jansson 1.0.4
-    <etc...>
-
-
-Conclusion
-==========
-
-In this tutorial, we implemented a program that fetches the latest
-commits of a GitHub repository using the GitHub commits API. Jansson
-was used to decode the JSON response and to extract the commit data.
-
-This tutorial only covered a small part of Jansson. For example, we
-did not create or manipulate JSON values at all. Proceed to
-:ref:`apiref` to explore all features of Jansson.
diff --git a/branch-1.3/lang/c/jansson/src/Makefile.am b/branch-1.3/lang/c/jansson/src/Makefile.am
deleted file mode 100644
index 489bf9e..0000000
--- a/branch-1.3/lang/c/jansson/src/Makefile.am
+++ /dev/null
@@ -1,20 +0,0 @@
-noinst_HEADERS = jansson.h
-
-noinst_LTLIBRARIES = libjansson.la
-libjansson_la_SOURCES = \
-	dump.c \
-	hashtable.c \
-	hashtable.h \
-	jansson_private.h \
-	load.c \
-	strbuffer.c \
-	strbuffer.h \
-	utf.c \
-	utf.h \
-	util.h \
-	value.c
-libjansson_la_LDFLAGS = \
-	-export-symbols-regex '^json_' \
-	-version-info 1:2:1
-
-AM_CFLAGS = -Wall -Wextra -Werror
diff --git a/branch-1.3/lang/c/jansson/src/dump.c b/branch-1.3/lang/c/jansson/src/dump.c
deleted file mode 100644
index bd12a7b..0000000
--- a/branch-1.3/lang/c/jansson/src/dump.c
+++ /dev/null
@@ -1,321 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#define _GNU_SOURCE
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include <jansson.h>
-#include "jansson_private.h"
-#include "strbuffer.h"
-
-#define MAX_INTEGER_STR_LENGTH  100
-#define MAX_REAL_STR_LENGTH     100
-
-typedef int (*dump_func)(const char *buffer, int size, void *data);
-
-struct string
-{
-    char *buffer;
-    int length;
-    int size;
-};
-
-static int dump_to_strbuffer(const char *buffer, int size, void *data)
-{
-    return strbuffer_append_bytes((strbuffer_t *)data, buffer, size);
-}
-
-static int dump_to_file(const char *buffer, int size, void *data)
-{
-    FILE *dest = (FILE *)data;
-    if(fwrite(buffer, size, 1, dest) != 1)
-        return -1;
-    return 0;
-}
-
-/* 256 spaces (the maximum indentation size) */
-static char whitespace[] = "                                                                                                                                                                                                                                                                ";
-
-static int dump_indent(unsigned long flags, int depth, dump_func dump, void *data)
-{
-    if(JSON_INDENT(flags) > 0)
-    {
-        int i, ws_count = JSON_INDENT(flags);
-
-        if(dump("\n", 1, data))
-            return -1;
-
-        for(i = 0; i < depth; i++)
-        {
-            if(dump(whitespace, ws_count, data))
-                return -1;
-        }
-    }
-    return 0;
-}
-
-static int dump_string(const char *str, dump_func dump, void *data)
-{
-    const char *end;
-
-    if(dump("\"", 1, data))
-        return -1;
-
-    end = str;
-    while(1)
-    {
-        const char *text;
-        char seq[7];
-        int length;
-
-        while(*end && *end != '\\' && *end != '"' && (unsigned char)*end > 0x1F)
-            end++;
-
-        if(end != str) {
-            if(dump(str, end - str, data))
-                return -1;
-        }
-
-        if(!*end)
-            break;
-
-        /* handle \, ", and control codes */
-        length = 2;
-        switch(*end)
-        {
-            case '\\': text = "\\\\"; break;
-            case '\"': text = "\\\""; break;
-            case '\b': text = "\\b"; break;
-            case '\f': text = "\\f"; break;
-            case '\n': text = "\\n"; break;
-            case '\r': text = "\\r"; break;
-            case '\t': text = "\\t"; break;
-            default:
-            {
-                sprintf(seq, "\\u00%02x", *end);
-                text = seq;
-                length = 6;
-                break;
-            }
-        }
-
-        if(dump(text, length, data))
-            return -1;
-
-        end++;
-        str = end;
-    }
-
-    return dump("\"", 1, data);
-}
-
-static int do_dump(const json_t *json, unsigned long flags, int depth,
-                   dump_func dump, void *data)
-{
-    switch(json_typeof(json)) {
-        case JSON_NULL:
-            return dump("null", 4, data);
-
-        case JSON_TRUE:
-            return dump("true", 4, data);
-
-        case JSON_FALSE:
-            return dump("false", 5, data);
-
-        case JSON_INTEGER:
-        {
-            char buffer[MAX_INTEGER_STR_LENGTH];
-            int size;
-
-            size = snprintf(buffer, MAX_INTEGER_STR_LENGTH, "%d", json_integer_value(json));
-            if(size >= MAX_INTEGER_STR_LENGTH)
-                return -1;
-
-            return dump(buffer, size, data);
-        }
-
-        case JSON_REAL:
-        {
-            char buffer[MAX_REAL_STR_LENGTH];
-            int size;
-
-            size = snprintf(buffer, MAX_REAL_STR_LENGTH, "%.17g",
-                            json_real_value(json));
-            if(size >= MAX_REAL_STR_LENGTH)
-                return -1;
-
-            /* Make sure there's a dot or 'e' in the output. Otherwise
-               a real is converted to an integer when decoding */
-            if(strchr(buffer, '.') == NULL &&
-               strchr(buffer, 'e') == NULL)
-            {
-                if(size + 2 >= MAX_REAL_STR_LENGTH) {
-                    /* No space to append ".0" */
-                    return -1;
-                }
-                buffer[size] = '.';
-                buffer[size + 1] = '0';
-                size += 2;
-            }
-
-            return dump(buffer, size, data);
-        }
-
-        case JSON_STRING:
-            return dump_string(json_string_value(json), dump, data);
-
-        case JSON_ARRAY:
-        {
-            int i;
-            int n;
-            json_array_t *array;
-
-            /* detect circular references */
-            array = json_to_array(json);
-            if(array->visited)
-                return -1;
-            array->visited = 1;
-
-            n = json_array_size(json);
-
-            if(dump("[", 1, data))
-                return -1;
-            if(n == 0)
-                return dump("]", 1, data);
-            if(dump_indent(flags, depth + 1, dump, data))
-                return -1;
-
-            for(i = 0; i < n; ++i) {
-                if(do_dump(json_array_get(json, i), flags, depth + 1,
-                           dump, data))
-                    return -1;
-
-                if(i < n - 1)
-                {
-                    if(dump(",", 1, data) ||
-                       dump_indent(flags, depth + 1, dump, data))
-                        return -1;
-                }
-                else
-                {
-                    if(dump_indent(flags, depth, dump, data))
-                        return -1;
-                }
-            }
-
-            array->visited = 0;
-            return dump("]", 1, data);
-        }
-
-        case JSON_OBJECT:
-        {
-            json_object_t *object;
-            void *iter;
-
-            /* detect circular references */
-            object = json_to_object(json);
-            if(object->visited)
-                return -1;
-            object->visited = 1;
-
-            iter = json_object_iter((json_t *)json);
-
-            if(dump("{", 1, data))
-                return -1;
-            if(!iter)
-                return dump("}", 1, data);
-            if(dump_indent(flags, depth + 1, dump, data))
-                return -1;
-
-            while(iter)
-            {
-                void *next = json_object_iter_next((json_t *)json, iter);
-
-                if(dump_string(json_object_iter_key(iter), dump, data) ||
-                   dump(": ", 2, data) ||
-                   do_dump(json_object_iter_value(iter), flags, depth + 1,
-                           dump, data))
-                    return -1;
-
-                if(next)
-                {
-                    if(dump(",", 1, data) ||
-                       dump_indent(flags, depth + 1, dump, data))
-                        return -1;
-                }
-                else
-                {
-                    if(dump_indent(flags, depth, dump, data))
-                        return -1;
-                }
-
-                iter = next;
-            }
-
-            object->visited = 0;
-            return dump("}", 1, data);
-        }
-
-        default:
-            /* not reached */
-            return -1;
-    }
-}
-
-
-char *json_dumps(const json_t *json, unsigned long flags)
-{
-    strbuffer_t strbuff;
-    char *result;
-
-    if(!json_is_array(json) && !json_is_object(json))
-        return NULL;
-
-    if(strbuffer_init(&strbuff))
-        return NULL;
-
-    if(do_dump(json, flags, 0, dump_to_strbuffer, (void *)&strbuff)) {
-        strbuffer_close(&strbuff);
-        return NULL;
-    }
-
-    if(dump_to_strbuffer("\n", 1, (void *)&strbuff)) {
-        strbuffer_close(&strbuff);
-        return NULL;
-    }
-
-    result = strdup(strbuffer_value(&strbuff));
-    strbuffer_close(&strbuff);
-
-    return result;
-}
-
-int json_dumpf(const json_t *json, FILE *output, unsigned long flags)
-{
-    if(!json_is_array(json) && !json_is_object(json))
-        return -1;
-
-    if(do_dump(json, flags, 0, dump_to_file, (void *)output))
-        return -1;
-    return dump_to_file("\n", 1, (void *)output);
-}
-
-int json_dump_file(const json_t *json, const char *path, unsigned long flags)
-{
-    int result;
-
-    FILE *output = fopen(path, "w");
-    if(!output)
-        return -1;
-
-    result = json_dumpf(json, output, flags);
-
-    fclose(output);
-    return result;
-}
diff --git a/branch-1.3/lang/c/jansson/src/hashtable.c b/branch-1.3/lang/c/jansson/src/hashtable.c
deleted file mode 100644
index 05dc167..0000000
--- a/branch-1.3/lang/c/jansson/src/hashtable.c
+++ /dev/null
@@ -1,339 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * This library is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <stdlib.h>
-#include "hashtable.h"
-
-typedef struct hashtable_list list_t;
-typedef struct hashtable_pair pair_t;
-typedef struct hashtable_bucket bucket_t;
-
-#define container_of(ptr_, type_, member_)                      \
-    ((type_ *)((char *)ptr_ - (size_t)&((type_ *)0)->member_))
-
-#define list_to_pair(list_)  container_of(list_, pair_t, list)
-
-static inline void list_init(list_t *list)
-{
-    list->next = list;
-    list->prev = list;
-}
-
-static inline void list_insert(list_t *list, list_t *node)
-{
-    node->next = list;
-    node->prev = list->prev;
-    list->prev->next = node;
-    list->prev = node;
-}
-
-static inline void list_remove(list_t *list)
-{
-    list->prev->next = list->next;
-    list->next->prev = list->prev;
-}
-
-static inline int bucket_is_empty(hashtable_t *hashtable, bucket_t *bucket)
-{
-    return bucket->first == &hashtable->list && bucket->first == bucket->last;
-}
-
-static void insert_to_bucket(hashtable_t *hashtable, bucket_t *bucket,
-                             list_t *list)
-{
-    if(bucket_is_empty(hashtable, bucket))
-    {
-        list_insert(&hashtable->list, list);
-        bucket->first = bucket->last = list;
-    }
-    else
-    {
-        list_insert(bucket->first, list);
-        bucket->first = list;
-    }
-}
-
-static unsigned int primes[] = {
-    5, 13, 23, 53, 97, 193, 389, 769, 1543, 3079, 6151, 12289, 24593,
-    49157, 98317, 196613, 393241, 786433, 1572869, 3145739, 6291469,
-    12582917, 25165843, 50331653, 100663319, 201326611, 402653189,
-    805306457, 1610612741
-};
-static const unsigned int num_primes = sizeof(primes) / sizeof(unsigned int);
-
-static inline unsigned int num_buckets(hashtable_t *hashtable)
-{
-    return primes[hashtable->num_buckets];
-}
-
-
-static pair_t *hashtable_find_pair(hashtable_t *hashtable, bucket_t *bucket,
-                                   const void *key, unsigned int hash)
-{
-    list_t *list;
-    pair_t *pair;
-
-    if(bucket_is_empty(hashtable, bucket))
-        return NULL;
-
-    list = bucket->first;
-    while(1)
-    {
-        pair = list_to_pair(list);
-        if(pair->hash == hash && hashtable->cmp_keys(pair->key, key))
-            return pair;
-
-        if(list == bucket->last)
-            break;
-
-        list = list->next;
-    }
-
-    return NULL;
-}
-
-/* returns 0 on success, -1 if key was not found */
-static int hashtable_do_del(hashtable_t *hashtable,
-                            const void *key, unsigned int hash)
-{
-    pair_t *pair;
-    bucket_t *bucket;
-    unsigned int index;
-
-    index = hash % num_buckets(hashtable);
-    bucket = &hashtable->buckets[index];
-
-    pair = hashtable_find_pair(hashtable, bucket, key, hash);
-    if(!pair)
-        return -1;
-
-    if(&pair->list == bucket->first && &pair->list == bucket->last)
-        bucket->first = bucket->last = &hashtable->list;
-
-    else if(&pair->list == bucket->first)
-        bucket->first = pair->list.next;
-
-    else if(&pair->list == bucket->last)
-        bucket->last = pair->list.prev;
-
-    list_remove(&pair->list);
-
-    if(hashtable->free_key)
-        hashtable->free_key(pair->key);
-    if(hashtable->free_value)
-        hashtable->free_value(pair->value);
-
-    free(pair);
-    hashtable->size--;
-
-    return 0;
-}
-
-static void hashtable_do_clear(hashtable_t *hashtable)
-{
-    list_t *list, *next;
-    pair_t *pair;
-
-    for(list = hashtable->list.next; list != &hashtable->list; list = next)
-    {
-        next = list->next;
-        pair = list_to_pair(list);
-        if(hashtable->free_key)
-            hashtable->free_key(pair->key);
-        if(hashtable->free_value)
-            hashtable->free_value(pair->value);
-        free(pair);
-    }
-}
-
-static int hashtable_do_rehash(hashtable_t *hashtable)
-{
-    list_t *list, *next;
-    pair_t *pair;
-    unsigned int i, index, new_size;
-
-    free(hashtable->buckets);
-
-    hashtable->num_buckets++;
-    new_size = num_buckets(hashtable);
-
-    hashtable->buckets = malloc(new_size * sizeof(bucket_t));
-    if(!hashtable->buckets)
-        return -1;
-
-    for(i = 0; i < num_buckets(hashtable); i++)
-    {
-        hashtable->buckets[i].first = hashtable->buckets[i].last =
-            &hashtable->list;
-    }
-
-    list = hashtable->list.next;
-    list_init(&hashtable->list);
-
-    for(; list != &hashtable->list; list = next) {
-        next = list->next;
-        pair = list_to_pair(list);
-        index = pair->hash % new_size;
-        insert_to_bucket(hashtable, &hashtable->buckets[index], &pair->list);
-    }
-
-    return 0;
-}
-
-
-hashtable_t *hashtable_create(key_hash_fn hash_key, key_cmp_fn cmp_keys,
-                              free_fn free_key, free_fn free_value)
-{
-    hashtable_t *hashtable = malloc(sizeof(hashtable_t));
-    if(!hashtable)
-        return NULL;
-
-    if(hashtable_init(hashtable, hash_key, cmp_keys, free_key, free_value))
-    {
-        free(hashtable);
-        return NULL;
-    }
-
-    return hashtable;
-}
-
-void hashtable_destroy(hashtable_t *hashtable)
-{
-    hashtable_close(hashtable);
-    free(hashtable);
-}
-
-int hashtable_init(hashtable_t *hashtable,
-                   key_hash_fn hash_key, key_cmp_fn cmp_keys,
-                   free_fn free_key, free_fn free_value)
-{
-    unsigned int i;
-
-    hashtable->size = 0;
-    hashtable->num_buckets = 0;  /* index to primes[] */
-    hashtable->buckets = malloc(num_buckets(hashtable) * sizeof(bucket_t));
-    if(!hashtable->buckets)
-        return -1;
-
-    list_init(&hashtable->list);
-
-    hashtable->hash_key = hash_key;
-    hashtable->cmp_keys = cmp_keys;
-    hashtable->free_key = free_key;
-    hashtable->free_value = free_value;
-
-    for(i = 0; i < num_buckets(hashtable); i++)
-    {
-        hashtable->buckets[i].first = hashtable->buckets[i].last =
-            &hashtable->list;
-    }
-
-    return 0;
-}
-
-void hashtable_close(hashtable_t *hashtable)
-{
-    hashtable_do_clear(hashtable);
-    free(hashtable->buckets);
-}
-
-int hashtable_set(hashtable_t *hashtable, void *key, void *value)
-{
-    pair_t *pair;
-    bucket_t *bucket;
-    unsigned int hash, index;
-
-    hash = hashtable->hash_key(key);
-
-    /* if the key already exists, delete it */
-    hashtable_do_del(hashtable, key, hash);
-
-    /* rehash if the load ratio exceeds 1 */
-    if(hashtable->size >= num_buckets(hashtable))
-        if(hashtable_do_rehash(hashtable))
-            return -1;
-
-    pair = malloc(sizeof(pair_t));
-    if(!pair)
-        return -1;
-
-    pair->key = key;
-    pair->value = value;
-    pair->hash = hash;
-    list_init(&pair->list);
-
-    index = hash % num_buckets(hashtable);
-    bucket = &hashtable->buckets[index];
-
-    insert_to_bucket(hashtable, bucket, &pair->list);
-
-    hashtable->size++;
-    return 0;
-}
-
-void *hashtable_get(hashtable_t *hashtable, const void *key)
-{
-    pair_t *pair;
-    unsigned int hash;
-    bucket_t *bucket;
-
-    hash = hashtable->hash_key(key);
-    bucket = &hashtable->buckets[hash % num_buckets(hashtable)];
-
-    pair = hashtable_find_pair(hashtable, bucket, key, hash);
-    if(!pair)
-        return NULL;
-
-    return pair->value;
-}
-
-int hashtable_del(hashtable_t *hashtable, const void *key)
-{
-    unsigned int hash = hashtable->hash_key(key);
-    return hashtable_do_del(hashtable, key, hash);
-}
-
-void hashtable_clear(hashtable_t *hashtable)
-{
-    unsigned int i;
-
-    hashtable_do_clear(hashtable);
-
-    for(i = 0; i < num_buckets(hashtable); i++)
-    {
-        hashtable->buckets[i].first = hashtable->buckets[i].last =
-            &hashtable->list;
-    }
-
-    list_init(&hashtable->list);
-    hashtable->size = 0;
-}
-
-void *hashtable_iter(hashtable_t *hashtable)
-{
-    return hashtable_iter_next(hashtable, &hashtable->list);
-}
-
-void *hashtable_iter_next(hashtable_t *hashtable, void *iter)
-{
-    list_t *list = (list_t *)iter;
-    if(list->next == &hashtable->list)
-        return NULL;
-    return list->next;
-}
-
-void *hashtable_iter_key(void *iter)
-{
-    pair_t *pair = list_to_pair((list_t *)iter);
-    return pair->key;
-}
-
-void *hashtable_iter_value(void *iter)
-{
-    pair_t *pair = list_to_pair((list_t *)iter);
-    return pair->value;
-}
diff --git a/branch-1.3/lang/c/jansson/src/hashtable.h b/branch-1.3/lang/c/jansson/src/hashtable.h
deleted file mode 100644
index 81a0af5..0000000
--- a/branch-1.3/lang/c/jansson/src/hashtable.h
+++ /dev/null
@@ -1,188 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * This library is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#ifndef HASHTABLE_H
-#define HASHTABLE_H
-
-typedef unsigned int (*key_hash_fn)(const void *key);
-typedef int (*key_cmp_fn)(const void *key1, const void *key2);
-typedef void (*free_fn)(void *key);
-
-struct hashtable_list {
-    struct hashtable_list *prev;
-    struct hashtable_list *next;
-};
-
-struct hashtable_pair {
-    void *key;
-    void *value;
-    unsigned int hash;
-    struct hashtable_list list;
-};
-
-struct hashtable_bucket {
-    struct hashtable_list *first;
-    struct hashtable_list *last;
-};
-
-typedef struct hashtable {
-    unsigned int size;
-    struct hashtable_bucket *buckets;
-    unsigned int num_buckets;  /* index to primes[] */
-    struct hashtable_list list;
-
-    key_hash_fn hash_key;
-    key_cmp_fn cmp_keys;  /* returns non-zero for equal keys */
-    free_fn free_key;
-    free_fn free_value;
-} hashtable_t;
-
-/**
- * hashtable_create - Create a hashtable object
- *
- * @hash_key: The key hashing function
- * @cmp_keys: The key compare function. Returns non-zero for equal and
- *     zero for unequal keys
- * @free_key: If non-NULL, called for a key that is no longer referenced.
- * @free_value: If non-NULL, called for a value that is no longer referenced.
- *
- * Returns a new hashtable object that should be freed with
- * hashtable_destroy when it's no longer used, or NULL on failure (out
- * of memory).
- */
-hashtable_t *hashtable_create(key_hash_fn hash_key, key_cmp_fn cmp_keys,
-                              free_fn free_key, free_fn free_value);
-
-/**
- * hashtable_destroy - Destroy a hashtable object
- *
- * @hashtable: The hashtable
- *
- * Destroys a hashtable created with hashtable_create().
- */
-void hashtable_destroy(hashtable_t *hashtable);
-
-/**
- * hashtable_init - Initialize a hashtable object
- *
- * @hashtable: The (statically allocated) hashtable object
- * @hash_key: The key hashing function
- * @cmp_keys: The key compare function. Returns non-zero for equal and
- *     zero for unequal keys
- * @free_key: If non-NULL, called for a key that is no longer referenced.
- * @free_value: If non-NULL, called for a value that is no longer referenced.
- *
- * Initializes a statically allocated hashtable object. The object
- * should be cleared with hashtable_close when it's no longer used.
- *
- * Returns 0 on success, -1 on error (out of memory).
- */
-int hashtable_init(hashtable_t *hashtable,
-                   key_hash_fn hash_key, key_cmp_fn cmp_keys,
-                   free_fn free_key, free_fn free_value);
-
-/**
- * hashtable_close - Release all resources used by a hashtable object
- *
- * @hashtable: The hashtable
- *
- * Destroys a statically allocated hashtable object.
- */
-void hashtable_close(hashtable_t *hashtable);
-
-/**
- * hashtable_set - Add/modify value in hashtable
- *
- * @hashtable: The hashtable object
- * @key: The key
- * @value: The value
- *
- * If a value with the given key already exists, its value is replaced
- * with the new value.
- *
- * Key and value are "stolen" in the sense that the hashtable frees them
- * automatically when they are no longer used. The freeing is
- * accomplished by calling the free_key and free_value functions that
- * were supplied to hashtable_create. If one or both of the free
- * functions is NULL, the corresponding item is not "stolen".
- *
- * Returns 0 on success, -1 on failure (out of memory).
- */
-int hashtable_set(hashtable_t *hashtable, void *key, void *value);
-
-/**
- * hashtable_get - Get a value associated with a key
- *
- * @hashtable: The hashtable object
- * @key: The key
- *
- * Returns value if it is found, or NULL otherwise.
- */
-void *hashtable_get(hashtable_t *hashtable, const void *key);
-
-/**
- * hashtable_del - Remove a value from the hashtable
- *
- * @hashtable: The hashtable object
- * @key: The key
- *
- * Returns 0 on success, or -1 if the key was not found.
- */
-int hashtable_del(hashtable_t *hashtable, const void *key);
-
-/**
- * hashtable_clear - Clear hashtable
- *
- * @hashtable: The hashtable object
- *
- * Removes all items from the hashtable.
- */
-void hashtable_clear(hashtable_t *hashtable);
-
-/**
- * hashtable_iter - Iterate over hashtable
- *
- * @hashtable: The hashtable object
- *
- * Returns an opaque iterator to the first element in the hashtable.
- * The iterator should be passed to hashtable_iter_* functions.
- * The hashtable items are not iterated over in any particular order.
- *
- * There's no need to free the iterator in any way. The iterator is
- * valid as long as the item that is referenced by the iterator is not
- * deleted. Other values may be added or deleted. In particular,
- * hashtable_iter_next() may be called on an iterator, and after that
- * the key/value pair pointed to by the old iterator may be deleted.
- */
-void *hashtable_iter(hashtable_t *hashtable);
-
-/**
- * hashtable_iter_next - Advance an iterator
- *
- * @hashtable: The hashtable object
- * @iter: The iterator
- *
- * Returns a new iterator pointing to the next element in the
- * hashtable or NULL if the whole hashtable has been iterated over.
- */
-void *hashtable_iter_next(hashtable_t *hashtable, void *iter);
-
-/**
- * hashtable_iter_key - Retrieve the key pointed to by an iterator
- *
- * @iter: The iterator
- */
-void *hashtable_iter_key(void *iter);
-
-/**
- * hashtable_iter_value - Retrieve the value pointed to by an iterator
- *
- * @iter: The iterator
- */
-void *hashtable_iter_value(void *iter);
-
-#endif
diff --git a/branch-1.3/lang/c/jansson/src/jansson.h b/branch-1.3/lang/c/jansson/src/jansson.h
deleted file mode 100644
index ac6736f..0000000
--- a/branch-1.3/lang/c/jansson/src/jansson.h
+++ /dev/null
@@ -1,154 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#ifndef JANSSON_H
-#define JANSSON_H
-
-#include <stdio.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/* types */
-
-typedef enum {
-    JSON_OBJECT,
-    JSON_ARRAY,
-    JSON_STRING,
-    JSON_INTEGER,
-    JSON_REAL,
-    JSON_TRUE,
-    JSON_FALSE,
-    JSON_NULL
-} json_type;
-
-typedef struct {
-    json_type type;
-    unsigned long refcount;
-} json_t;
-
-#define json_typeof(json)      ((json)->type)
-#define json_is_object(json)   (json && json_typeof(json) == JSON_OBJECT)
-#define json_is_array(json)    (json && json_typeof(json) == JSON_ARRAY)
-#define json_is_string(json)   (json && json_typeof(json) == JSON_STRING)
-#define json_is_integer(json)  (json && json_typeof(json) == JSON_INTEGER)
-#define json_is_real(json)     (json && json_typeof(json) == JSON_REAL)
-#define json_is_number(json)   (json_is_integer(json) || json_is_real(json))
-#define json_is_true(json)     (json && json_typeof(json) == JSON_TRUE)
-#define json_is_false(json)    (json && json_typeof(json) == JSON_FALSE)
-#define json_is_boolean(json)  (json_is_true(json) || json_is_false(json))
-#define json_is_null(json)     (json && json_typeof(json) == JSON_NULL)
-
-/* construction, destruction, reference counting */
-
-json_t *json_object(void);
-json_t *json_array(void);
-json_t *json_string(const char *value);
-json_t *json_integer(int value);
-json_t *json_real(double value);
-json_t *json_true(void);
-json_t *json_false(void);
-json_t *json_null(void);
-
-static inline json_t *json_incref(json_t *json)
-{
-    if(json && json->refcount != (unsigned int)-1)
-        ++json->refcount;
-    return json;
-}
-
-/* do not call json_delete directly */
-void json_delete(json_t *json);
-
-static inline void json_decref(json_t *json)
-{
-    if(json && json->refcount != (unsigned int)-1 && --json->refcount == 0)
-        json_delete(json);
-}
-
-
-/* getters, setters, manipulation */
-
-unsigned int json_object_size(const json_t *object);
-json_t *json_object_get(const json_t *object, const char *key);
-int json_object_set_new(json_t *object, const char *key, json_t *value);
-int json_object_del(json_t *object, const char *key);
-int json_object_clear(json_t *object);
-int json_object_update(json_t *object, json_t *other);
-void *json_object_iter(json_t *object);
-void *json_object_iter_next(json_t *object, void *iter);
-const char *json_object_iter_key(void *iter);
-json_t *json_object_iter_value(void *iter);
-
-static inline
-int json_object_set(json_t *object, const char *key, json_t *value)
-{
-    return json_object_set_new(object, key, json_incref(value));
-}
-
-unsigned int json_array_size(const json_t *array);
-json_t *json_array_get(const json_t *array, unsigned int index);
-int json_array_set_new(json_t *array, unsigned int index, json_t *value);
-int json_array_append_new(json_t *array, json_t *value);
-int json_array_insert_new(json_t *array, unsigned int index, json_t *value);
-int json_array_remove(json_t *array, unsigned int index);
-int json_array_clear(json_t *array);
-int json_array_extend(json_t *array, json_t *other);
-
-static inline
-int json_array_set(json_t *array, unsigned int index, json_t *value)
-{
-    return json_array_set_new(array, index, json_incref(value));
-}
-
-static inline
-int json_array_append(json_t *array, json_t *value)
-{
-    return json_array_append_new(array, json_incref(value));
-}
-
-static inline
-int json_array_insert(json_t *array, unsigned int index, json_t *value)
-{
-    return json_array_insert_new(array, index, json_incref(value));
-}
-
-const char *json_string_value(const json_t *string);
-int json_integer_value(const json_t *integer);
-double json_real_value(const json_t *real);
-double json_number_value(const json_t *json);
-
-int json_string_set(const json_t *string, const char *value);
-int json_integer_set(const json_t *integer, int value);
-int json_real_set(const json_t *real, double value);
-
-
-/* loading, printing */
-
-#define JSON_ERROR_TEXT_LENGTH  160
-
-typedef struct {
-    char text[JSON_ERROR_TEXT_LENGTH];
-    int line;
-} json_error_t;
-
-json_t *json_loads(const char *input, json_error_t *error);
-json_t *json_loadf(FILE *input, json_error_t *error);
-json_t *json_load_file(const char *path, json_error_t *error);
-
-#define JSON_INDENT(n)   (n & 0xFF)
-
-char *json_dumps(const json_t *json, unsigned long flags);
-int json_dumpf(const json_t *json, FILE *output, unsigned long flags);
-int json_dump_file(const json_t *json, const char *path, unsigned long flags);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
diff --git a/branch-1.3/lang/c/jansson/src/jansson_private.h b/branch-1.3/lang/c/jansson/src/jansson_private.h
deleted file mode 100644
index 317f05a..0000000
--- a/branch-1.3/lang/c/jansson/src/jansson_private.h
+++ /dev/null
@@ -1,55 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#ifndef JANSSON_PRIVATE_H
-#define JANSSON_PRIVATE_H
-
-#include "jansson.h"
-#include "hashtable.h"
-
-#define container_of(ptr_, type_, member_)  \
-    ((type_ *)((char *)ptr_ - (size_t)&((type_ *)0)->member_))
-
-typedef struct {
-    json_t json;
-    hashtable_t hashtable;
-    int visited;
-} json_object_t;
-
-typedef struct {
-    json_t json;
-    unsigned int size;
-    unsigned int entries;
-    json_t **table;
-    int visited;
-} json_array_t;
-
-typedef struct {
-    json_t json;
-    char *value;
-} json_string_t;
-
-typedef struct {
-    json_t json;
-    double value;
-} json_real_t;
-
-typedef struct {
-    json_t json;
-    int value;
-} json_integer_t;
-
-#define json_to_object(json_)  container_of(json_, json_object_t, json)
-#define json_to_array(json_)   container_of(json_, json_array_t, json)
-#define json_to_string(json_)  container_of(json_, json_string_t, json)
-#define json_to_real(json_)   container_of(json_, json_real_t, json)
-#define json_to_integer(json_) container_of(json_, json_integer_t, json)
-
-int json_object_set_nocheck(json_t *json, const char *key, json_t *value);
-json_t *json_string_nocheck(const char *value);
-
-#endif
diff --git a/branch-1.3/lang/c/jansson/src/load.c b/branch-1.3/lang/c/jansson/src/load.c
deleted file mode 100644
index 005e03c..0000000
--- a/branch-1.3/lang/c/jansson/src/load.c
+++ /dev/null
@@ -1,885 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#define _GNU_SOURCE
-#include <ctype.h>
-#include <errno.h>
-#include <limits.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <stdarg.h>
-#include <assert.h>
-
-#include <jansson.h>
-#include "jansson_private.h"
-#include "strbuffer.h"
-#include "utf.h"
-
-#define TOKEN_INVALID         -1
-#define TOKEN_EOF              0
-#define TOKEN_STRING         256
-#define TOKEN_INTEGER        257
-#define TOKEN_REAL           258
-#define TOKEN_TRUE           259
-#define TOKEN_FALSE          260
-#define TOKEN_NULL           261
-
-/* read one byte from stream, return EOF on end of file */
-typedef int (*get_func)(void *data);
-
-/* return non-zero if end of file has been reached */
-typedef int (*eof_func)(void *data);
-
-typedef struct {
-    get_func get;
-    eof_func eof;
-    void *data;
-    int stream_pos;
-    char buffer[5];
-    int buffer_pos;
-} stream_t;
-
-
-typedef struct {
-    stream_t stream;
-    strbuffer_t saved_text;
-    int token;
-    int line, column;
-    union {
-        char *string;
-        int integer;
-        double real;
-    } value;
-} lex_t;
-
-
-/*** error reporting ***/
-
-static void error_init(json_error_t *error)
-{
-    if(error)
-    {
-        error->text[0] = '\0';
-        error->line = -1;
-    }
-}
-
-static void error_set(json_error_t *error, const lex_t *lex,
-                      const char *msg, ...)
-{
-    va_list ap;
-    char text[JSON_ERROR_TEXT_LENGTH];
-
-    if(!error || error->text[0] != '\0') {
-        /* error already set */
-        return;
-    }
-
-    va_start(ap, msg);
-    vsnprintf(text, JSON_ERROR_TEXT_LENGTH, msg, ap);
-    va_end(ap);
-
-    if(lex)
-    {
-        const char *saved_text = strbuffer_value(&lex->saved_text);
-        error->line = lex->line;
-        if(saved_text && saved_text[0])
-        {
-            if(lex->saved_text.length <= 20) {
-                snprintf(error->text, JSON_ERROR_TEXT_LENGTH,
-                         "%s near '%s'", text, saved_text);
-            }
-            else
-                snprintf(error->text, JSON_ERROR_TEXT_LENGTH, "%s", text);
-        }
-        else
-        {
-            snprintf(error->text, JSON_ERROR_TEXT_LENGTH,
-                     "%s near end of file", text);
-        }
-    }
-    else
-    {
-        error->line = -1;
-        snprintf(error->text, JSON_ERROR_TEXT_LENGTH, "%s", text);
-    }
-}
-
-
-/*** lexical analyzer ***/
-
-void stream_init(stream_t *stream, get_func get, eof_func eof, void *data)
-{
-    stream->get = get;
-    stream->eof = eof;
-    stream->data = data;
-    stream->stream_pos = 0;
-    stream->buffer[0] = '\0';
-    stream->buffer_pos = 0;
-}
-
-static char stream_get(stream_t *stream, json_error_t *error)
-{
-    char c;
-
-    if(!stream->buffer[stream->buffer_pos])
-    {
-        stream->buffer[0] = stream->get(stream->data);
-        stream->buffer_pos = 0;
-
-        c = stream->buffer[0];
-
-        if((unsigned char)c >= 0x80 && c != (char)EOF)
-        {
-            /* multi-byte UTF-8 sequence */
-            int i, count;
-
-            count = utf8_check_first(c);
-            if(!count)
-                goto out;
-
-            assert(count >= 2);
-
-            for(i = 1; i < count; i++)
-                stream->buffer[i] = stream->get(stream->data);
-
-            if(!utf8_check_full(stream->buffer, count))
-                goto out;
-
-            stream->stream_pos += count;
-            stream->buffer[count] = '\0';
-        }
-        else {
-            stream->buffer[1] = '\0';
-            stream->stream_pos++;
-        }
-    }
-
-    return stream->buffer[stream->buffer_pos++];
-
-out:
-    error_set(error, NULL, "unable to decode byte 0x%x at position %d",
-              (unsigned char)c, stream->stream_pos);
-
-    stream->buffer[0] = EOF;
-    stream->buffer[1] = '\0';
-    stream->buffer_pos = 1;
-
-    return EOF;
-}
-
-static void stream_unget(stream_t *stream, char c)
-{
-    assert(stream->buffer_pos > 0);
-    stream->buffer_pos--;
-    assert(stream->buffer[stream->buffer_pos] == c);
-}
-
-
-static int lex_get(lex_t *lex, json_error_t *error)
-{
-    return stream_get(&lex->stream, error);
-}
-
-static int lex_eof(lex_t *lex)
-{
-    return lex->stream.eof(lex->stream.data);
-}
-
-static void lex_save(lex_t *lex, char c)
-{
-    strbuffer_append_byte(&lex->saved_text, c);
-}
-
-static int lex_get_save(lex_t *lex, json_error_t *error)
-{
-    char c = stream_get(&lex->stream, error);
-    lex_save(lex, c);
-    return c;
-}
-
-static void lex_unget_unsave(lex_t *lex, char c)
-{
-    char d;
-    stream_unget(&lex->stream, c);
-    d = strbuffer_pop(&lex->saved_text);
-    assert(c == d);
-}
-
-static void lex_save_cached(lex_t *lex)
-{
-    while(lex->stream.buffer[lex->stream.buffer_pos] != '\0')
-    {
-        lex_save(lex, lex->stream.buffer[lex->stream.buffer_pos]);
-        lex->stream.buffer_pos++;
-    }
-}
-
-/* assumes that str points to 'u' plus at least 4 valid hex digits */
-static int decode_unicode_escape(const char *str)
-{
-    int i;
-    int value = 0;
-
-    assert(str[0] == 'u');
-
-    for(i = 1; i <= 4; i++) {
-        char c = str[i];
-        value <<= 4;
-        if(isdigit(c))
-            value += c - '0';
-        else if(islower(c))
-            value += c - 'a' + 10;
-        else if(isupper(c))
-            value += c - 'A' + 10;
-        else
-            assert(0);
-    }
-
-    return value;
-}
-
-static void lex_scan_string(lex_t *lex, json_error_t *error)
-{
-    char c;
-    const char *p;
-    char *t;
-    int i;
-
-    lex->value.string = NULL;
-    lex->token = TOKEN_INVALID;
-
-    c = lex_get_save(lex, error);
-
-    while(c != '"') {
-        if(c == (char)EOF) {
-            lex_unget_unsave(lex, c);
-            if(lex_eof(lex))
-                error_set(error, lex, "premature end of input");
-            goto out;
-        }
-
-        else if((unsigned char)c <= 0x1F) {
-            /* control character */
-            lex_unget_unsave(lex, c);
-            if(c == '\n')
-                error_set(error, lex, "unexpected newline");
-            else
-                error_set(error, lex, "control character 0x%x", c);
-            goto out;
-        }
-
-        else if(c == '\\') {
-            c = lex_get_save(lex, error);
-            if(c == 'u') {
-                c = lex_get_save(lex, error);
-                for(i = 0; i < 4; i++) {
-                    if(!isxdigit(c)) {
-                        lex_unget_unsave(lex, c);
-                        error_set(error, lex, "invalid escape");
-                        goto out;
-                    }
-                    c = lex_get_save(lex, error);
-                }
-            }
-            else if(c == '"' || c == '\\' || c == '/' || c == 'b' ||
-                    c == 'f' || c == 'n' || c == 'r' || c == 't')
-                c = lex_get_save(lex, error);
-            else {
-                lex_unget_unsave(lex, c);
-                error_set(error, lex, "invalid escape");
-                goto out;
-            }
-        }
-        else
-            c = lex_get_save(lex, error);
-    }
-
-    /* the actual value is at most of the same length as the source
-       string, because:
-         - shortcut escapes (e.g. "\t") (length 2) are converted to 1 byte
-         - a single \uXXXX escape (length 6) is converted to at most 3 bytes
- *         - two \uXXXX escapes (length 12) forming a UTF-16 surrogate pair
-           are converted to 4 bytes
-    */
-    lex->value.string = malloc(lex->saved_text.length + 1);
-    if(!lex->value.string) {
-        /* this is not very nice, since TOKEN_INVALID is returned */
-        goto out;
-    }
-
-    /* the target */
-    t = lex->value.string;
-
-    /* + 1 to skip the " */
-    p = strbuffer_value(&lex->saved_text) + 1;
-
-    while(*p != '"') {
-        if(*p == '\\') {
-            p++;
-            if(*p == 'u') {
-                char buffer[4];
-                int length;
-                int value;
-
-                value = decode_unicode_escape(p);
-                p += 5;
-
-                if(0xD800 <= value && value <= 0xDBFF) {
-                    /* surrogate pair */
-                    if(*p == '\\' && *(p + 1) == 'u') {
-                        int value2 = decode_unicode_escape(++p);
-                        p += 5;
-
-                        if(0xDC00 <= value2 && value2 <= 0xDFFF) {
-                            /* valid second surrogate */
-                            value =
-                                ((value - 0xD800) << 10) +
-                                (value2 - 0xDC00) +
-                                0x10000;
-                        }
-                        else {
-                            /* invalid second surrogate */
-                            error_set(error, lex,
-                                      "invalid Unicode '\\u%04X\\u%04X'",
-                                      value, value2);
-                            goto out;
-                        }
-                    }
-                    else {
-                        /* no second surrogate */
-                        error_set(error, lex, "invalid Unicode '\\u%04X'",
-                                  value);
-                        goto out;
-                    }
-                }
-                else if(0xDC00 <= value && value <= 0xDFFF) {
-                    error_set(error, lex, "invalid Unicode '\\u%04X'", value);
-                    goto out;
-                }
-                else if(value == 0)
-                {
-                    error_set(error, lex, "\\u0000 is not allowed");
-                    goto out;
-                }
-
-                if(utf8_encode(value, buffer, &length))
-                    assert(0);
-
-                memcpy(t, buffer, length);
-                t += length;
-            }
-            else {
-                switch(*p) {
-                    case '"': case '\\': case '/':
-                        *t = *p; break;
-                    case 'b': *t = '\b'; break;
-                    case 'f': *t = '\f'; break;
-                    case 'n': *t = '\n'; break;
-                    case 'r': *t = '\r'; break;
-                    case 't': *t = '\t'; break;
-                    default: assert(0);
-                }
-                t++;
-                p++;
-            }
-        }
-        else
-            *(t++) = *(p++);
-    }
-    *t = '\0';
-    lex->token = TOKEN_STRING;
-    return;
-
-out:
-    free(lex->value.string);
-}
-
-static int lex_scan_number(lex_t *lex, char c, json_error_t *error)
-{
-    const char *saved_text;
-    char *end;
-    double value;
-
-    lex->token = TOKEN_INVALID;
-
-    if(c == '-')
-        c = lex_get_save(lex, error);
-
-    if(c == '0') {
-        c = lex_get_save(lex, error);
-        if(isdigit(c)) {
-            lex_unget_unsave(lex, c);
-            goto out;
-        }
-    }
-    else if(isdigit(c)) {
-        c = lex_get_save(lex, error);
-        while(isdigit(c))
-            c = lex_get_save(lex, error);
-    }
-    else {
-      lex_unget_unsave(lex, c);
-      goto out;
-    }
-
-    if(c != '.' && c != 'E' && c != 'e') {
-        long value;
-
-        lex_unget_unsave(lex, c);
-
-        saved_text = strbuffer_value(&lex->saved_text);
-        value = strtol(saved_text, &end, 10);
-        assert(end == saved_text + lex->saved_text.length);
-
-        if((value == LONG_MAX && errno == ERANGE) || value > INT_MAX) {
-            error_set(error, lex, "too big integer");
-            goto out;
-        }
-        else if((value == LONG_MIN && errno == ERANGE) || value < INT_MIN) {
-            error_set(error, lex, "too big negative integer");
-            goto out;
-        }
-
-        lex->token = TOKEN_INTEGER;
-        lex->value.integer = (int)value;
-        return 0;
-    }
-
-    if(c == '.') {
-        c = lex_get(lex, error);
-        if(!isdigit(c))
-            goto out;
-        lex_save(lex, c);
-
-        c = lex_get_save(lex, error);
-        while(isdigit(c))
-            c = lex_get_save(lex, error);
-    }
-
-    if(c == 'E' || c == 'e') {
-        c = lex_get_save(lex, error);
-        if(c == '+' || c == '-')
-            c = lex_get_save(lex, error);
-
-        if(!isdigit(c)) {
-            lex_unget_unsave(lex, c);
-            goto out;
-        }
-
-        c = lex_get_save(lex, error);
-        while(isdigit(c))
-            c = lex_get_save(lex, error);
-    }
-
-    lex_unget_unsave(lex, c);
-
-    saved_text = strbuffer_value(&lex->saved_text);
-    value = strtod(saved_text, &end);
-    assert(end == saved_text + lex->saved_text.length);
-
-    if(value == 0 && errno == ERANGE) {
-        error_set(error, lex, "real number underflow");
-        goto out;
-    }
-
-    /* Cannot test for +/-HUGE_VAL because the HUGE_VAL constant is
-       only defined in C99 mode, so we rely on errno alone. */
-    else if(errno == ERANGE) {
-        error_set(error, lex, "real number overflow");
-        goto out;
-    }
-
-    lex->token = TOKEN_REAL;
-    lex->value.real = value;
-    return 0;
-
-out:
-    return -1;
-}
-
-static int lex_scan(lex_t *lex, json_error_t *error)
-{
-    char c;
-
-    strbuffer_clear(&lex->saved_text);
-
-    if(lex->token == TOKEN_STRING) {
-        free(lex->value.string);
-        lex->value.string = NULL;
-    }
-
-    c = lex_get(lex, error);
-    while(c == ' ' || c == '\t' || c == '\n' || c == '\r')
-    {
-        if(c == '\n')
-            lex->line++;
-
-        c = lex_get(lex, error);
-    }
-
-    if(c == (char)EOF) {
-        if(lex_eof(lex))
-            lex->token = TOKEN_EOF;
-        else
-            lex->token = TOKEN_INVALID;
-        goto out;
-    }
-
-    lex_save(lex, c);
-
-    if(c == '{' || c == '}' || c == '[' || c == ']' || c == ':' || c == ',')
-        lex->token = c;
-
-    else if(c == '"')
-        lex_scan_string(lex, error);
-
-    else if(isdigit(c) || c == '-') {
-        if(lex_scan_number(lex, c, error))
-            goto out;
-    }
-
-    else if(isupper(c) || islower(c)) {
-        /* eat up the whole identifier for clearer error messages */
-        const char *saved_text;
-
-        c = lex_get_save(lex, error);
-        while(isupper(c) || islower(c))
-            c = lex_get_save(lex, error);
-        lex_unget_unsave(lex, c);
-
-        saved_text = strbuffer_value(&lex->saved_text);
-
-        if(strcmp(saved_text, "true") == 0)
-            lex->token = TOKEN_TRUE;
-        else if(strcmp(saved_text, "false") == 0)
-            lex->token = TOKEN_FALSE;
-        else if(strcmp(saved_text, "null") == 0)
-            lex->token = TOKEN_NULL;
-        else
-            lex->token = TOKEN_INVALID;
-    }
-
-    else {
-        /* save the rest of the input UTF-8 sequence so that the error
-           message contains valid UTF-8 */
-        lex_save_cached(lex);
-        lex->token = TOKEN_INVALID;
-    }
-
-out:
-    return lex->token;
-}
-
-static char *lex_steal_string(lex_t *lex)
-{
-    char *result = NULL;
-    if(lex->token == TOKEN_STRING)
-    {
-        result = lex->value.string;
-        lex->value.string = NULL;
-    }
-    return result;
-}
-
-static int lex_init(lex_t *lex, get_func get, eof_func eof, void *data)
-{
-    stream_init(&lex->stream, get, eof, data);
-    if(strbuffer_init(&lex->saved_text))
-        return -1;
-
-    lex->token = TOKEN_INVALID;
-    lex->line = 1;
-
-    return 0;
-}
-
-static void lex_close(lex_t *lex)
-{
-    if(lex->token == TOKEN_STRING)
-        free(lex->value.string);
-    strbuffer_close(&lex->saved_text);
-}
-
-
-/*** parser ***/
-
-static json_t *parse_value(lex_t *lex, json_error_t *error);
-
-static json_t *parse_object(lex_t *lex, json_error_t *error)
-{
-    json_t *object = json_object();
-    if(!object)
-        return NULL;
-
-    lex_scan(lex, error);
-    if(lex->token == '}')
-        return object;
-
-    while(1) {
-        char *key;
-        json_t *value;
-
-        if(lex->token != TOKEN_STRING) {
-            error_set(error, lex, "string or '}' expected");
-            goto error;
-        }
-
-        key = lex_steal_string(lex);
-        if(!key)
-            return NULL;
-
-        lex_scan(lex, error);
-        if(lex->token != ':') {
-            free(key);
-            error_set(error, lex, "':' expected");
-            goto error;
-        }
-
-        lex_scan(lex, error);
-        value = parse_value(lex, error);
-        if(!value) {
-            free(key);
-            goto error;
-        }
-
-        if(json_object_set_nocheck(object, key, value)) {
-            free(key);
-            json_decref(value);
-            goto error;
-        }
-
-        json_decref(value);
-        free(key);
-
-        lex_scan(lex, error);
-        if(lex->token != ',')
-            break;
-
-        lex_scan(lex, error);
-    }
-
-    if(lex->token != '}') {
-        error_set(error, lex, "'}' expected");
-        goto error;
-    }
-
-    return object;
-
-error:
-    json_decref(object);
-    return NULL;
-}
-
-static json_t *parse_array(lex_t *lex, json_error_t *error)
-{
-    json_t *array = json_array();
-    if(!array)
-        return NULL;
-
-    lex_scan(lex, error);
-    if(lex->token == ']')
-        return array;
-
-    while(lex->token) {
-        json_t *elem = parse_value(lex, error);
-        if(!elem)
-            goto error;
-
-        if(json_array_append(array, elem)) {
-            json_decref(elem);
-            goto error;
-        }
-        json_decref(elem);
-
-        lex_scan(lex, error);
-        if(lex->token != ',')
-            break;
-
-        lex_scan(lex, error);
-    }
-
-    if(lex->token != ']') {
-        error_set(error, lex, "']' expected");
-        goto error;
-    }
-
-    return array;
-
-error:
-    json_decref(array);
-    return NULL;
-}
-
-static json_t *parse_value(lex_t *lex, json_error_t *error)
-{
-    json_t *json;
-
-    switch(lex->token) {
-        case TOKEN_STRING: {
-            json = json_string_nocheck(lex->value.string);
-            break;
-        }
-
-        case TOKEN_INTEGER: {
-            json = json_integer(lex->value.integer);
-            break;
-        }
-
-        case TOKEN_REAL: {
-            json = json_real(lex->value.real);
-            break;
-        }
-
-        case TOKEN_TRUE:
-            json = json_true();
-            break;
-
-        case TOKEN_FALSE:
-            json = json_false();
-            break;
-
-        case TOKEN_NULL:
-            json = json_null();
-            break;
-
-        case '{':
-            json = parse_object(lex, error);
-            break;
-
-        case '[':
-            json = parse_array(lex, error);
-            break;
-
-        case TOKEN_INVALID:
-            error_set(error, lex, "invalid token");
-            return NULL;
-
-        default:
-            error_set(error, lex, "unexpected token");
-            return NULL;
-    }
-
-    if(!json)
-        return NULL;
-
-    return json;
-}
-
-static json_t *parse_json(lex_t *lex, json_error_t *error)
-{
-    error_init(error);
-
-    lex_scan(lex, error);
-    if(lex->token != '[' && lex->token != '{') {
-        error_set(error, lex, "'[' or '{' expected");
-        return NULL;
-    }
-
-    return parse_value(lex, error);
-}
-
-typedef struct
-{
-    const char *data;
-    int pos;
-} string_data_t;
-
-static int string_get(void *data)
-{
-    char c;
-    string_data_t *stream = (string_data_t *)data;
-    c = stream->data[stream->pos];
-    if(c == '\0')
-        return EOF;
-    else
-    {
-        stream->pos++;
-        return c;
-    }
-}
-
-static int string_eof(void *data)
-{
-    string_data_t *stream = (string_data_t *)data;
-    return (stream->data[stream->pos] == '\0');
-}
-
-json_t *json_loads(const char *string, json_error_t *error)
-{
-    lex_t lex;
-    json_t *result;
-
-    string_data_t stream_data = {
-        .data = string,
-        .pos = 0
-    };
-
-    if(lex_init(&lex, string_get, string_eof, (void *)&stream_data))
-        return NULL;
-
-    result = parse_json(&lex, error);
-    if(!result)
-        goto out;
-
-    lex_scan(&lex, error);
-    if(lex.token != TOKEN_EOF) {
-        error_set(error, &lex, "end of file expected");
-        json_decref(result);
-        result = NULL;
-    }
-
-out:
-    lex_close(&lex);
-    return result;
-}
-
-json_t *json_loadf(FILE *input, json_error_t *error)
-{
-    lex_t lex;
-    json_t *result;
-
-    if(lex_init(&lex, (get_func)fgetc, (eof_func)feof, input))
-        return NULL;
-
-    result = parse_json(&lex, error);
-    if(!result)
-        goto out;
-
-    lex_scan(&lex, error);
-    if(lex.token != TOKEN_EOF) {
-        error_set(error, &lex, "end of file expected");
-        json_decref(result);
-        result = NULL;
-    }
-
-out:
-    lex_close(&lex);
-    return result;
-}
-
-json_t *json_load_file(const char *path, json_error_t *error)
-{
-    json_t *result;
-    FILE *fp;
-
-    error_init(error);
-
-    fp = fopen(path, "r");
-    if(!fp)
-    {
-        error_set(error, NULL, "unable to open %s: %s",
-                  path, strerror(errno));
-        return NULL;
-    }
-
-    result = json_loadf(fp, error);
-
-    fclose(fp);
-    return result;
-}
diff --git a/branch-1.3/lang/c/jansson/src/strbuffer.c b/branch-1.3/lang/c/jansson/src/strbuffer.c
deleted file mode 100644
index 1a71a53..0000000
--- a/branch-1.3/lang/c/jansson/src/strbuffer.c
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#define _GNU_SOURCE
-#include <stdlib.h>
-#include <string.h>
-#include "strbuffer.h"
-#include "util.h"
-
-#define STRBUFFER_MIN_SIZE  16
-#define STRBUFFER_FACTOR    2
-
-int strbuffer_init(strbuffer_t *strbuff)
-{
-    strbuff->size = STRBUFFER_MIN_SIZE;
-    strbuff->length = 0;
-
-    strbuff->value = malloc(strbuff->size);
-    if(!strbuff->value)
-        return -1;
-
-    /* initialize to empty */
-    strbuff->value[0] = '\0';
-    return 0;
-}
-
-void strbuffer_close(strbuffer_t *strbuff)
-{
-    free(strbuff->value);
-    strbuff->size = 0;
-    strbuff->length = 0;
-    strbuff->value = NULL;
-}
-
-void strbuffer_clear(strbuffer_t *strbuff)
-{
-    strbuff->length = 0;
-    strbuff->value[0] = '\0';
-}
-
-const char *strbuffer_value(const strbuffer_t *strbuff)
-{
-    return strbuff->value;
-}
-
-char *strbuffer_steal_value(strbuffer_t *strbuff)
-{
-    char *result = strbuff->value;
-    strbuffer_init(strbuff);
-    return result;
-}
-
-int strbuffer_append(strbuffer_t *strbuff, const char *string)
-{
-    return strbuffer_append_bytes(strbuff, string, strlen(string));
-}
-
-int strbuffer_append_byte(strbuffer_t *strbuff, char byte)
-{
-    return strbuffer_append_bytes(strbuff, &byte, 1);
-}
-
-int strbuffer_append_bytes(strbuffer_t *strbuff, const char *data, int size)
-{
-    if(strbuff->length + size >= strbuff->size)
-    {
-        strbuff->size = max(strbuff->size * STRBUFFER_FACTOR,
-                            strbuff->length + size + 1);
-
-        strbuff->value = realloc(strbuff->value, strbuff->size);
-        if(!strbuff->value)
-            return -1;
-    }
-
-    memcpy(strbuff->value + strbuff->length, data, size);
-    strbuff->length += size;
-    strbuff->value[strbuff->length] = '\0';
-
-    return 0;
-}
-
-char strbuffer_pop(strbuffer_t *strbuff)
-{
-    if(strbuff->length > 0) {
-        char c = strbuff->value[--strbuff->length];
-        strbuff->value[strbuff->length] = '\0';
-        return c;
-    }
-    else
-        return '\0';
-}
diff --git a/branch-1.3/lang/c/jansson/src/strbuffer.h b/branch-1.3/lang/c/jansson/src/strbuffer.h
deleted file mode 100644
index 4afefa9..0000000
--- a/branch-1.3/lang/c/jansson/src/strbuffer.h
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#ifndef STRBUFFER_H
-#define STRBUFFER_H
-
-typedef struct {
-    char *value;
-    int length;   /* bytes used */
-    int size;     /* bytes allocated */
-} strbuffer_t;
-
-int strbuffer_init(strbuffer_t *strbuff);
-void strbuffer_close(strbuffer_t *strbuff);
-
-void strbuffer_clear(strbuffer_t *strbuff);
-
-const char *strbuffer_value(const strbuffer_t *strbuff);
-char *strbuffer_steal_value(strbuffer_t *strbuff);
-
-int strbuffer_append(strbuffer_t *strbuff, const char *string);
-int strbuffer_append_byte(strbuffer_t *strbuff, char byte);
-int strbuffer_append_bytes(strbuffer_t *strbuff, const char *data, int size);
-
-char strbuffer_pop(strbuffer_t *strbuff);
-
-#endif
diff --git a/branch-1.3/lang/c/jansson/src/utf.c b/branch-1.3/lang/c/jansson/src/utf.c
deleted file mode 100644
index cf2e8e4..0000000
--- a/branch-1.3/lang/c/jansson/src/utf.c
+++ /dev/null
@@ -1,159 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <string.h>
-
-int utf8_encode(int codepoint, char *buffer, int *size)
-{
-    if(codepoint < 0)
-        return -1;
-    else if(codepoint < 0x80)
-    {
-        buffer[0] = (char)codepoint;
-        *size = 1;
-    }
-    else if(codepoint < 0x800)
-    {
-        buffer[0] = 0xC0 + ((codepoint & 0x7C0) >> 6);
-        buffer[1] = 0x80 + ((codepoint & 0x03F));
-        *size = 2;
-    }
-    else if(codepoint < 0x10000)
-    {
-        buffer[0] = 0xE0 + ((codepoint & 0xF000) >> 12);
-        buffer[1] = 0x80 + ((codepoint & 0x0FC0) >> 6);
-        buffer[2] = 0x80 + ((codepoint & 0x003F));
-        *size = 3;
-    }
-    else if(codepoint <= 0x10FFFF)
-    {
-        buffer[0] = 0xF0 + ((codepoint & 0x1C0000) >> 18);
-        buffer[1] = 0x80 + ((codepoint & 0x03F000) >> 12);
-        buffer[2] = 0x80 + ((codepoint & 0x000FC0) >> 6);
-        buffer[3] = 0x80 + ((codepoint & 0x00003F));
-        *size = 4;
-    }
-    else
-        return -1;
-
-    return 0;
-}
-
-int utf8_check_first(char byte)
-{
-    unsigned char u = (unsigned char)byte;
-
-    if(u < 0x80)
-        return 1;
-
-    if(0x80 <= u && u <= 0xBF) {
-        /* second, third or fourth byte of a multi-byte
-           sequence, i.e. a "continuation byte" */
-        return 0;
-    }
-    else if(u == 0xC0 || u == 0xC1) {
-        /* overlong encoding of an ASCII byte */
-        return 0;
-    }
-    else if(0xC2 <= u && u <= 0xDF) {
-        /* 2-byte sequence */
-        return 2;
-    }
-
-    else if(0xE0 <= u && u <= 0xEF) {
-        /* 3-byte sequence */
-        return 3;
-    }
-    else if(0xF0 <= u && u <= 0xF4) {
-        /* 4-byte sequence */
-        return 4;
-    }
-    else { /* u >= 0xF5 */
-        /* Restricted (start of 4-, 5- or 6-byte sequence) or invalid
-           UTF-8 */
-        return 0;
-    }
-}
-
-int utf8_check_full(const char *buffer, int size)
-{
-    int i, value = 0;
-    unsigned char u = (unsigned char)buffer[0];
-
-    if(size == 2)
-    {
-        value = u & 0x1F;
-    }
-    else if(size == 3)
-    {
-        value = u & 0xF;
-    }
-    else if(size == 4)
-    {
-        value = u & 0x7;
-    }
-    else
-        return 0;
-
-    for(i = 1; i < size; i++)
-    {
-        u = (unsigned char)buffer[i];
-
-        if(u < 0x80 || u > 0xBF) {
-            /* not a continuation byte */
-            return 0;
-        }
-
-        value = (value << 6) + (u & 0x3F);
-    }
-
-    if(value > 0x10FFFF) {
-        /* not in Unicode range */
-        return 0;
-    }
-
-    else if(0xD800 <= value && value <= 0xDFFF) {
-        /* invalid code point (UTF-16 surrogate halves) */
-        return 0;
-    }
-
-    else if((size == 2 && value < 0x80) ||
-            (size == 3 && value < 0x800) ||
-            (size == 4 && value < 0x10000)) {
-        /* overlong encoding */
-        return 0;
-    }
-
-    return 1;
-}
-
-int utf8_check_string(const char *string, int length)
-{
-    int i;
-
-    if(length == -1)
-        length = strlen(string);
-
-    for(i = 0; i < length; i++)
-    {
-        int count = utf8_check_first(string[i]);
-        if(count == 0)
-            return 0;
-        else if(count > 1)
-        {
-            if(i + count > length)
-                return 0;
-
-            if(!utf8_check_full(&string[i], count))
-                return 0;
-
-            i += count - 1;
-        }
-    }
-
-    return 1;
-}
diff --git a/branch-1.3/lang/c/jansson/src/utf.h b/branch-1.3/lang/c/jansson/src/utf.h
deleted file mode 100644
index 75d7b6e..0000000
--- a/branch-1.3/lang/c/jansson/src/utf.h
+++ /dev/null
@@ -1,18 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#ifndef UTF_H
-#define UTF_H
-
-int utf8_encode(int codepoint, char *buffer, int *size);
-
-int utf8_check_first(char byte);
-int utf8_check_full(const char *buffer, int size);
-
-int utf8_check_string(const char *string, int length);
-
-#endif
diff --git a/branch-1.3/lang/c/jansson/src/util.h b/branch-1.3/lang/c/jansson/src/util.h
deleted file mode 100644
index 66066c5..0000000
--- a/branch-1.3/lang/c/jansson/src/util.h
+++ /dev/null
@@ -1,13 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#ifndef UTIL_H
-#define UTIL_H
-
-#define max(a, b)  ((a) > (b) ? (a) : (b))
-
-#endif
diff --git a/branch-1.3/lang/c/jansson/src/value.c b/branch-1.3/lang/c/jansson/src/value.c
deleted file mode 100644
index 6497f85..0000000
--- a/branch-1.3/lang/c/jansson/src/value.c
+++ /dev/null
@@ -1,676 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#define _GNU_SOURCE
-#include <stdlib.h>
-#include <string.h>
-
-#include <jansson.h>
-#include "hashtable.h"
-#include "jansson_private.h"
-#include "utf.h"
-#include "util.h"
-
-
-static inline void json_init(json_t *json, json_type type)
-{
-    json->type = type;
-    json->refcount = 1;
-}
-
-
-/*** object ***/
-
-static unsigned int hash_string(const void *key)
-{
-    const char *str = (const char *)key;
-    unsigned int hash = 5381;
-    unsigned int c;
-
-    while((c = (unsigned int)*str))
-    {
-        hash = ((hash << 5) + hash) + c;
-        str++;
-    }
-
-    return hash;
-}
-
-static int string_equal(const void *key1, const void *key2)
-{
-    return strcmp((const char *)key1, (const char *)key2) == 0;
-}
-
-static void value_decref(void *value)
-{
-    json_decref((json_t *)value);
-}
-
-json_t *json_object(void)
-{
-    json_object_t *object = malloc(sizeof(json_object_t));
-    if(!object)
-        return NULL;
-    json_init(&object->json, JSON_OBJECT);
-
-    if(hashtable_init(&object->hashtable, hash_string, string_equal,
-                      free, value_decref))
-    {
-        free(object);
-        return NULL;
-    }
-
-    object->visited = 0;
-
-    return &object->json;
-}
-
-static void json_delete_object(json_object_t *object)
-{
-    hashtable_close(&object->hashtable);
-    free(object);
-}
-
-unsigned int json_object_size(const json_t *json)
-{
-    json_object_t *object;
-
-    if(!json_is_object(json))
-        return -1;
-
-    object = json_to_object(json);
-    return object->hashtable.size;
-}
-
-json_t *json_object_get(const json_t *json, const char *key)
-{
-    json_object_t *object;
-
-    if(!json_is_object(json))
-        return NULL;
-
-    object = json_to_object(json);
-    return hashtable_get(&object->hashtable, key);
-}
-
-int json_object_set_new_nocheck(json_t *json, const char *key, json_t *value)
-{
-    json_object_t *object;
-
-    if(!key || !value)
-        return -1;
-
-    if(!json_is_object(json) || json == value)
-    {
-        json_decref(value);
-        return -1;
-    }
-    object = json_to_object(json);
-
-    if(hashtable_set(&object->hashtable, strdup(key), value))
-    {
-        json_decref(value);
-        return -1;
-    }
-
-    return 0;
-}
-
-int json_object_set_nocheck(json_t *json, const char *key, json_t *value)
-{
-    return json_object_set_new_nocheck(json, key, json_incref(value));
-}
-
-int json_object_set_new(json_t *json, const char *key, json_t *value)
-{
-    if(!key || !utf8_check_string(key, -1))
-    {
-        json_decref(value);
-        return -1;
-    }
-
-    return json_object_set_new_nocheck(json, key, value);
-}
-
-int json_object_del(json_t *json, const char *key)
-{
-    json_object_t *object;
-
-    if(!json_is_object(json))
-        return -1;
-
-    object = json_to_object(json);
-    return hashtable_del(&object->hashtable, key);
-}
-
-int json_object_clear(json_t *json)
-{
-    json_object_t *object;
-
-    if(!json_is_object(json))
-        return -1;
-
-    object = json_to_object(json);
-    hashtable_clear(&object->hashtable);
-
-    return 0;
-}
-
-int json_object_update(json_t *object, json_t *other)
-{
-    void *iter;
-
-    if(!json_is_object(object) || !json_is_object(other))
-        return -1;
-
-    iter = json_object_iter(other);
-    while(iter) {
-        const char *key;
-        json_t *value;
-
-        key = json_object_iter_key(iter);
-        value = json_object_iter_value(iter);
-
-        if(json_object_set(object, key, value))
-            return -1;
-
-        iter = json_object_iter_next(other, iter);
-    }
-
-    return 0;
-}
-
-void *json_object_iter(json_t *json)
-{
-    json_object_t *object;
-
-    if(!json_is_object(json))
-        return NULL;
-
-    object = json_to_object(json);
-    return hashtable_iter(&object->hashtable);
-}
-
-void *json_object_iter_next(json_t *json, void *iter)
-{
-    json_object_t *object;
-
-    if(!json_is_object(json) || iter == NULL)
-        return NULL;
-
-    object = json_to_object(json);
-    return hashtable_iter_next(&object->hashtable, iter);
-}
-
-const char *json_object_iter_key(void *iter)
-{
-    if(!iter)
-        return NULL;
-
-    return (const char *)hashtable_iter_key(iter);
-}
-
-json_t *json_object_iter_value(void *iter)
-{
-    if(!iter)
-        return NULL;
-
-    return (json_t *)hashtable_iter_value(iter);
-}
-
-
-/*** array ***/
-
-json_t *json_array(void)
-{
-    json_array_t *array = malloc(sizeof(json_array_t));
-    if(!array)
-        return NULL;
-    json_init(&array->json, JSON_ARRAY);
-
-    array->entries = 0;
-    array->size = 8;
-
-    array->table = malloc(array->size * sizeof(json_t *));
-    if(!array->table) {
-        free(array);
-        return NULL;
-    }
-
-    array->visited = 0;
-
-    return &array->json;
-}
-
-static void json_delete_array(json_array_t *array)
-{
-    unsigned int i;
-
-    for(i = 0; i < array->entries; i++)
-        json_decref(array->table[i]);
-
-    free(array->table);
-    free(array);
-}
-
-unsigned int json_array_size(const json_t *json)
-{
-    if(!json_is_array(json))
-        return 0;
-
-    return json_to_array(json)->entries;
-}
-
-json_t *json_array_get(const json_t *json, unsigned int index)
-{
-    json_array_t *array;
-    if(!json_is_array(json))
-        return NULL;
-    array = json_to_array(json);
-
-    if(index >= array->entries)
-        return NULL;
-
-    return array->table[index];
-}
-
-int json_array_set_new(json_t *json, unsigned int index, json_t *value)
-{
-    json_array_t *array;
-
-    if(!value)
-        return -1;
-
-    if(!json_is_array(json) || json == value)
-    {
-        json_decref(value);
-        return -1;
-    }
-    array = json_to_array(json);
-
-    if(index >= array->entries)
-    {
-        json_decref(value);
-        return -1;
-    }
-
-    json_decref(array->table[index]);
-    array->table[index] = value;
-
-    return 0;
-}
-
-static void array_move(json_array_t *array, unsigned int dest,
-                       unsigned int src, unsigned int count)
-{
-    memmove(&array->table[dest], &array->table[src], count * sizeof(json_t *));
-}
-
-static void array_copy(json_t **dest, unsigned int dpos,
-                       json_t **src, unsigned int spos,
-                       unsigned int count)
-{
-    memcpy(&dest[dpos], &src[spos], count * sizeof(json_t *));
-}
-
-static json_t **json_array_grow(json_array_t *array,
-                                unsigned int amount,
-                                int copy)
-{
-    unsigned int new_size;
-    json_t **old_table, **new_table;
-
-    if(array->entries + amount <= array->size)
-        return array->table;
-
-    old_table = array->table;
-
-    new_size = max(array->size + amount, array->size * 2);
-    new_table = malloc(new_size * sizeof(json_t *));
-    if(!new_table)
-        return NULL;
-
-    array->size = new_size;
-    array->table = new_table;
-
-    if(copy) {
-        array_copy(array->table, 0, old_table, 0, array->entries);
-        free(old_table);
-        return array->table;
-    }
-
-    return old_table;
-}
-
-int json_array_append_new(json_t *json, json_t *value)
-{
-    json_array_t *array;
-
-    if(!value)
-        return -1;
-
-    if(!json_is_array(json) || json == value)
-    {
-        json_decref(value);
-        return -1;
-    }
-    array = json_to_array(json);
-
-    if(!json_array_grow(array, 1, 1)) {
-        json_decref(value);
-        return -1;
-    }
-
-    array->table[array->entries] = value;
-    array->entries++;
-
-    return 0;
-}
-
-int json_array_insert_new(json_t *json, unsigned int index, json_t *value)
-{
-    json_array_t *array;
-    json_t **old_table;
-
-    if(!value)
-        return -1;
-
-    if(!json_is_array(json) || json == value) {
-        json_decref(value);
-        return -1;
-    }
-    array = json_to_array(json);
-
-    if(index > array->entries) {
-        json_decref(value);
-        return -1;
-    }
-
-    old_table = json_array_grow(array, 1, 0);
-    if(!old_table) {
-        json_decref(value);
-        return -1;
-    }
-
-    if(old_table != array->table) {
-        array_copy(array->table, 0, old_table, 0, index);
-        array_copy(array->table, index + 1, old_table, index,
-                   array->entries - index);
-        free(old_table);
-    }
-    else
-        array_move(array, index + 1, index, array->entries - index);
-
-    array->table[index] = value;
-    array->entries++;
-
-    return 0;
-}
-
-int json_array_remove(json_t *json, unsigned int index)
-{
-    json_array_t *array;
-
-    if(!json_is_array(json))
-        return -1;
-    array = json_to_array(json);
-
-    if(index >= array->entries)
-        return -1;
-
-    json_decref(array->table[index]);
-
-    array_move(array, index, index + 1, array->entries - index);
-    array->entries--;
-
-    return 0;
-}
-
-int json_array_clear(json_t *json)
-{
-    json_array_t *array;
-    unsigned int i;
-
-    if(!json_is_array(json))
-        return -1;
-    array = json_to_array(json);
-
-    for(i = 0; i < array->entries; i++)
-        json_decref(array->table[i]);
-
-    array->entries = 0;
-    return 0;
-}
-
-int json_array_extend(json_t *json, json_t *other_json)
-{
-    json_array_t *array, *other;
-    unsigned int i;
-
-    if(!json_is_array(json) || !json_is_array(other_json))
-        return -1;
-    array = json_to_array(json);
-    other = json_to_array(other_json);
-
-    if(!json_array_grow(array, other->entries, 1))
-        return -1;
-
-    for(i = 0; i < other->entries; i++)
-        json_incref(other->table[i]);
-
-    array_copy(array->table, array->entries, other->table, 0, other->entries);
-
-    array->entries += other->entries;
-    return 0;
-}
-
-
-/*** string ***/
-
-json_t *json_string_nocheck(const char *value)
-{
-    json_string_t *string;
-
-    if(!value)
-        return NULL;
-
-    string = malloc(sizeof(json_string_t));
-    if(!string)
-        return NULL;
-    json_init(&string->json, JSON_STRING);
-
-    string->value = strdup(value);
-    if(!string->value) {
-        free(string);
-        return NULL;
-    }
-
-    return &string->json;
-}
-
-json_t *json_string(const char *value)
-{
-    if(!value || !utf8_check_string(value, -1))
-        return NULL;
-
-    return json_string_nocheck(value);
-}
-
-const char *json_string_value(const json_t *json)
-{
-    if(!json_is_string(json))
-        return NULL;
-
-    return json_to_string(json)->value;
-}
-
-int json_string_set(const json_t *json, const char *value)
-{
-    char *dup;
-    json_string_t *string;
-
-    if(!json_is_string(json) || !value || !utf8_check_string(value, -1))
-        return -1;
-
-    dup = strdup(value);
-    if(!dup)
-        return -1;
-
-    string = json_to_string(json);
-    free(string->value);
-    string->value = dup;
-
-    return 0;
-}
-
-static void json_delete_string(json_string_t *string)
-{
-    free(string->value);
-    free(string);
-}
-
-
-/*** integer ***/
-
-json_t *json_integer(int value)
-{
-    json_integer_t *integer = malloc(sizeof(json_integer_t));
-    if(!integer)
-        return NULL;
-    json_init(&integer->json, JSON_INTEGER);
-
-    integer->value = value;
-    return &integer->json;
-}
-
-int json_integer_value(const json_t *json)
-{
-    if(!json_is_integer(json))
-        return 0;
-
-    return json_to_integer(json)->value;
-}
-
-int json_integer_set(const json_t *json, int value)
-{
-    if(!json_is_integer(json))
-        return -1;
-
-    json_to_integer(json)->value = value;
-
-    return 0;
-}
-
-static void json_delete_integer(json_integer_t *integer)
-{
-    free(integer);
-}
-
-
-/*** real ***/
-
-json_t *json_real(double value)
-{
-    json_real_t *real = malloc(sizeof(json_real_t));
-    if(!real)
-        return NULL;
-    json_init(&real->json, JSON_REAL);
-
-    real->value = value;
-    return &real->json;
-}
-
-double json_real_value(const json_t *json)
-{
-    if(!json_is_real(json))
-        return 0;
-
-    return json_to_real(json)->value;
-}
-
-int json_real_set(const json_t *json, double value)
-{
-    if(!json_is_real(json))
-        return 0;
-
-    json_to_real(json)->value = value;
-
-    return 0;
-}
-
-static void json_delete_real(json_real_t *real)
-{
-    free(real);
-}
-
-
-/*** number ***/
-
-double json_number_value(const json_t *json)
-{
-    if(json_is_integer(json))
-        return json_integer_value(json);
-    else if(json_is_real(json))
-        return json_real_value(json);
-    else
-        return 0.0;
-}
-
-
-/*** simple values ***/
-
-json_t *json_true(void)
-{
-    static json_t the_true = {
-        .type = JSON_TRUE,
-        .refcount = (unsigned int)1
-    };
-    return &the_true;
-}
-
-
-json_t *json_false(void)
-{
-    static json_t the_false = {
-        .type = JSON_FALSE,
-        .refcount = (unsigned int)1
-    };
-    return &the_false;
-}
-
-
-json_t *json_null(void)
-{
-    static json_t the_null = {
-        .type = JSON_NULL,
-        .refcount = (unsigned int)1
-    };
-    return &the_null;
-}
-
-
-/*** deletion ***/
-
-void json_delete(json_t *json)
-{
-    if(json_is_object(json))
-        json_delete_object(json_to_object(json));
-
-    else if(json_is_array(json))
-        json_delete_array(json_to_array(json));
-
-    else if(json_is_string(json))
-        json_delete_string(json_to_string(json));
-
-    else if(json_is_integer(json))
-        json_delete_integer(json_to_integer(json));
-
-    else if(json_is_real(json))
-        json_delete_real(json_to_real(json));
-
-    /* json_delete is not called for true, false or null */
-}
diff --git a/branch-1.3/lang/c/jansson/test/Makefile.am b/branch-1.3/lang/c/jansson/test/Makefile.am
deleted file mode 100644
index ed11110..0000000
--- a/branch-1.3/lang/c/jansson/test/Makefile.am
+++ /dev/null
@@ -1,22 +0,0 @@
-DIST_SUBDIRS = testprogs testdata
-SUBDIRS = testprogs
-
-check_PROGRAMS = loadf_dumpf loads_dumps load_file_dump_file
-
-AM_CPPFLAGS = -I$(top_srcdir)/src
-AM_CFLAGS = -Wall -Werror
-LDFLAGS = -static  # for speed and Valgrind
-LDADD = ../src/libjansson.la
-
-TESTS = test-api test-invalid test-valid
-
-EXTRA_DIST = \
-	test-api \
-	test-invalid \
-	test-valid \
-	run-test \
-	json-compare.py \
-	split-testfile.py
-
-clean-local:
-	rm -rf testlogs
diff --git a/branch-1.3/lang/c/jansson/test/json-compare.py b/branch-1.3/lang/c/jansson/test/json-compare.py
deleted file mode 100755
index f91530e..0000000
--- a/branch-1.3/lang/c/jansson/test/json-compare.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/usr/bin/python
-#
-# Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
-#
-# Jansson is free software; you can redistribute it and/or modify
-# it under the terms of the MIT license. See LICENSE for details.
-
-import sys
-try:
-    import json
-except ImportError:
-    import simplejson as json
-
-def load(filename):
-    try:
-        jsonfile = open(filename)
-    except IOError, err:
-        print >>sys.stderr, "unable to load %s: %s" % \
-            (filename, err.strerror)
-        sys.exit(1)
-
-    try:
-        jsondata = json.load(jsonfile)
-    except ValueError, err:
-        print "%s is malformed: %s" % (filename, err)
-        sys.exit(1)
-    finally:
-        jsonfile.close()
-
-    return jsondata
-
-def main():
-    if len(sys.argv) != 3:
-        print >>sys.stderr, "usage: %s json1 json2" % sys.argv[0]
-        return 2
-
-    json1 = load(sys.argv[1])
-    json2 = load(sys.argv[2])
-    if json1 == json2:
-        return 0
-    else:
-        return 1
-
-if __name__ == '__main__':
-    sys.exit(main() or 0)
diff --git a/branch-1.3/lang/c/jansson/test/load_file_dump_file.c b/branch-1.3/lang/c/jansson/test/load_file_dump_file.c
deleted file mode 100644
index ac55afe..0000000
--- a/branch-1.3/lang/c/jansson/test/load_file_dump_file.c
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <stdio.h>
-#include <jansson.h>
-
-int main(int argc, char *argv[])
-{
-    json_t *json;
-    json_error_t error;
-
-    if(argc != 3) {
-        fprintf(stderr, "usage: %s infile outfile\n", argv[0]);
-        return 2;
-    }
-
-    json = json_load_file(argv[1], &error);
-    if(!json) {
-        fprintf(stderr, "%d\n%s\n", error.line, error.text);
-        return 1;
-    }
-
-    json_dump_file(json, argv[2], 0);
-    json_decref(json);
-
-    return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/test/loadf_dumpf.c b/branch-1.3/lang/c/jansson/test/loadf_dumpf.c
deleted file mode 100644
index 75b889f..0000000
--- a/branch-1.3/lang/c/jansson/test/loadf_dumpf.c
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <stdio.h>
-#include <jansson.h>
-
-int main(int argc, char *argv[])
-{
-    json_t *json;
-    json_error_t error;
-
-    if(argc != 1) {
-        fprintf(stderr, "usage: %s\n", argv[0]);
-        return 2;
-    }
-
-    json = json_loadf(stdin, &error);
-    if(!json) {
-        fprintf(stderr, "%d\n%s\n", error.line, error.text);
-        return 1;
-    }
-
-    /* loadf_dumpf indents, others don't, so dumping with and without
-       indenting is tested */
-    json_dumpf(json, stdout, JSON_INDENT(4));
-    json_decref(json);
-
-    return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/test/loads_dumps.c b/branch-1.3/lang/c/jansson/test/loads_dumps.c
deleted file mode 100644
index fefe1ba..0000000
--- a/branch-1.3/lang/c/jansson/test/loads_dumps.c
+++ /dev/null
@@ -1,47 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <jansson.h>
-
-#define BUFFER_SIZE (256 * 1024)
-
-int main(int argc, char *argv[])
-{
-    json_t *json;
-    json_error_t error;
-    int count;
-    char buffer[BUFFER_SIZE];
-    char *result;
-
-    if(argc != 1) {
-        fprintf(stderr, "usage: %s\n", argv[0]);
-        return 2;
-    }
-
-    count = fread(buffer, 1, BUFFER_SIZE, stdin);
-    if(count < 0 || count >= BUFFER_SIZE) {
-        fprintf(stderr, "unable to read input\n");
-        return 1;
-    }
-    buffer[count] = '\0';
-
-    json = json_loads(buffer, &error);
-    if(!json) {
-        fprintf(stderr, "%d\n%s\n", error.line, error.text);
-        return 1;
-    }
-
-    result = json_dumps(json, 0);
-    json_decref(json);
-
-    puts(result);
-    free(result);
-
-    return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/test/run-test b/branch-1.3/lang/c/jansson/test/run-test
deleted file mode 100644
index cd74ee2..0000000
--- a/branch-1.3/lang/c/jansson/test/run-test
+++ /dev/null
@@ -1,57 +0,0 @@
-# Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
-#
-# Jansson is free software; you can redistribute it and/or modify
-# it under the terms of the MIT license. See LICENSE for details.
-
-VALGRIND_CMDLINE="valgrind --leak-check=full --show-reachable=yes --track-origins=yes -q"
-
-run_testprog() {
-    local prog=$1
-    local prefix=$2
-    if [ -n "$VALGRIND" ]; then
-        local runner="$VALGRIND_CMDLINE "
-    fi
-
-    case "$prog" in
-        load_file_dump_file)
-            $runner./$prog \
-                $prefix.in \
-                $prefix.$prog.stdout \
-                2>$prefix.$prog.stderr
-            ;;
-        *)
-            $runner./$prog \
-                <$prefix.in \
-                >$prefix.$prog.stdout \
-                2>$prefix.$prog.stderr
-            ;;
-    esac
-
-    if [ -n "$VALGRIND" ]; then
-        # Check for Valgrind error output. The valgrind option
-        # --error-exitcode is not enough because Valgrind doesn't
-        # think unfreed allocs are errors.
-        if grep -E -q '^==[0-9]+== ' $prefix.$prog.stderr; then
-            echo "### $prefix ($prog) failed:" >&2
-            echo "valgrind detected an error" >&2
-            echo "for details, see test/$prefix.$prog.stderr" >&2
-            exit 1
-        fi
-    fi
-}
-
-for testfile in $TESTFILES; do
-    tmpdir="testlogs/`basename $testfile`"
-    rm -rf $tmpdir
-    mkdir -p $tmpdir
-    if echo "$testfile" | grep -q -E -e '-strip$'; then
-        opts="--strip"
-    fi
-    ${srcdir}/split-testfile.py $opts $testfile $tmpdir | while read name; do
-        run_test loadf_dumpf $tmpdir/$name
-        run_test loads_dumps $tmpdir/$name
-        run_test load_file_dump_file $tmpdir/$name
-        printf '.'
-    done || exit 1
-    echo
-done
diff --git a/branch-1.3/lang/c/jansson/test/split-testfile.py b/branch-1.3/lang/c/jansson/test/split-testfile.py
deleted file mode 100755
index 97a489e..0000000
--- a/branch-1.3/lang/c/jansson/test/split-testfile.py
+++ /dev/null
@@ -1,68 +0,0 @@
-#!/usr/bin/python
-#
-# Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
-#
-# Jansson is free software; you can redistribute it and/or modify
-# it under the terms of the MIT license. See LICENSE for details.
-
-import os
-import sys
-from optparse import OptionParser
-
-def strip_file(filename):
-    with open(filename) as fobj:
-        data = fobj.read()
-    with open(filename, 'w') as fobj:
-        fobj.write(data.strip())
-
-def open_files(outdir, i, name):
-    basename = '%02d_%s' % (i, name)
-    print basename
-    input_path = os.path.join(outdir, basename + '.in')
-    output_path = os.path.join(outdir, basename + '.out')
-    return open(input_path, 'w'), open(output_path, 'w')
-
-def main():
-    parser = OptionParser('usage: %prog [options] inputfile outputdir')
-    parser.add_option('--strip', help='strip whitespace from input',
-                      action='store_true', default=False)
-    options, args = parser.parse_args()
-
-    if len(args) != 2:
-        parser.print_help()
-        return 2
-
-    infile = os.path.normpath(args[0])
-    outdir = os.path.normpath(args[1])
-
-    if not os.path.exists(outdir):
-        print >>sys.stderr, 'output directory %r does not exist!' % outdir
-        return 1
-
-    n = 0
-    current = None
-    input, output = None, None
-
-    for line in open(infile):
-        if line.startswith('==== '):
-            n += 1
-            if input is not None and output is not None:
-                input.close()
-                output.close()
-                if options.strip:
-                    strip_file(input.name)
-            input, output = open_files(outdir, n, line[5:line.find(' ====\n')])
-            current = input
-        elif line == '====\n':
-            current = output
-        else:
-            current.write(line)
-
-    if input is not None and output is not None:
-        input.close()
-        output.close()
-
-    print >>sys.stderr, "%s: %d test cases" % (infile, n)
-
-if __name__ == '__main__':
-    sys.exit(main() or 0)
diff --git a/branch-1.3/lang/c/jansson/test/test-api b/branch-1.3/lang/c/jansson/test/test-api
deleted file mode 100755
index 997bef1..0000000
--- a/branch-1.3/lang/c/jansson/test/test-api
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/sh
-#
-# Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
-#
-# Jansson is free software; you can redistribute it and/or modify
-# it under the terms of the MIT license. See LICENSE for details.
-
-VALGRIND_CMDLINE="valgrind --leak-check=full --show-reachable=yes --track-origins=yes -q"
-LOGDIR="testlogs/api"
-N=`find testprogs -type f | wc -l`
-
-echo "testprogs: $N tests"
-
-rm -rf $LOGDIR
-mkdir -p $LOGDIR
-
-if [ -n "$VALGRIND" ]; then
-    runner="$VALGRIND_CMDLINE "
-fi
-
-i=1
-failed=
-for prog in testprogs/*; do
-    [ -x $prog ] || continue
-    t=`basename $prog`
-    logbase="testlogs/api/`printf '%02d-%s' $i $t`"
-    if ! $runner./$prog >$logbase.stdout 2>$logbase.stderr; then
-        echo >&2
-        echo "### $prog failed:" >&2
-        cat $logbase.stderr
-        exit 1
-    fi
-    if [ -n "$VALGRIND" ]; then
-        # Check for Valgrind error output. The valgrind option
-        # --error-exitcode is not enough because Valgrind doesn't
-        # think unfreed allocs are errors.
-        if grep -E -q '^==[0-9]+== ' $logbase.stderr; then
-            echo "### $prog failed:" >&2
-            echo "valgrind detected an error" >&2
-            echo "for details, see test/$logbase.stderr" >&2
-            exit 1
-        fi
-    fi
-    printf '.'
-done
-echo
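The test-api script above greps stderr for Valgrind's `==<pid>==` prefix because `--error-exitcode` alone does not flag unfreed allocations. A small Python analogue of that check (the regex mirrors the script's grep pattern; function name is mine):

```python
import re

# Valgrind prefixes its diagnostics with "==<pid>== "; matching that in
# captured stderr catches leak reports that --error-exitcode would miss.
VALGRIND_LINE = re.compile(r'^==[0-9]+== ', re.MULTILINE)

def valgrind_complained(stderr_text):
    return bool(VALGRIND_LINE.search(stderr_text))

assert valgrind_complained("==1234== 8 bytes definitely lost\n")
assert not valgrind_complained("all tests passed\n")
```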
diff --git a/branch-1.3/lang/c/jansson/test/test-invalid b/branch-1.3/lang/c/jansson/test/test-invalid
deleted file mode 100755
index b83c2e7..0000000
--- a/branch-1.3/lang/c/jansson/test/test-invalid
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/sh
-#
-# Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
-#
-# Jansson is free software; you can redistribute it and/or modify
-# it under the terms of the MIT license. See LICENSE for details.
-
-TESTFILES="${srcdir}/testdata/invalid ${srcdir}/testdata/invalid-strip ${srcdir}/testdata/invalid-unicode"
-
-run_test() {
-    local prog=$1
-    local prefix=$2
-
-    run_testprog $prog $prefix
-    if ! cmp $prefix.out $prefix.$prog.stderr >/dev/null; then
-        echo >&2
-        echo "### $prefix ($prog) failed:" >&2
-        cat $prefix.in >&2
-        echo "### expected output:" >&2
-        cat $prefix.out >&2
-        echo "### actual output:" >&2
-        cat $prefix.$prog.stderr >&2
-        exit 1
-    fi
-}
-
-. ${srcdir}/run-test
diff --git a/branch-1.3/lang/c/jansson/test/test-valid b/branch-1.3/lang/c/jansson/test/test-valid
deleted file mode 100755
index 46d4edf..0000000
--- a/branch-1.3/lang/c/jansson/test/test-valid
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/bin/sh
-#
-# Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
-#
-# Jansson is free software; you can redistribute it and/or modify
-# it under the terms of the MIT license. See LICENSE for details.
-
-TESTFILES="${srcdir}/testdata/valid ${srcdir}/testdata/valid-strip"
-
-run_test() {
-    local prog=$1
-    local prefix=$2
-
-    run_testprog $prog $prefix
-
-    if ! ${srcdir}/json-compare.py $prefix.in $prefix.$prog.stdout \
-        >$prefix.$prog.cmp-stdout
-    then
-        echo >&2
-        echo "### $prefix ($prog) failed:" >&2
-        cat $prefix.in >&2
-        if [ -f $prefix.$prog.stdout ]; then
-            echo "### output:" >&2
-            cat $prefix.$prog.stdout >&2
-        fi
-        if [ -s $prefix.$prog.stdout ]; then
-            echo "### compare output:" >&2
-            cat $prefix.$prog.cmp-stdout >&2
-        fi
-        exit 1
-    fi
-}
-
-. ${srcdir}/run-test
diff --git a/branch-1.3/lang/c/jansson/test/testdata/Makefile.am b/branch-1.3/lang/c/jansson/test/testdata/Makefile.am
deleted file mode 100644
index bb84cf5..0000000
--- a/branch-1.3/lang/c/jansson/test/testdata/Makefile.am
+++ /dev/null
@@ -1 +0,0 @@
-EXTRA_DIST = invalid invalid-strip invalid-unicode valid valid-strip
diff --git a/branch-1.3/lang/c/jansson/test/testdata/invalid b/branch-1.3/lang/c/jansson/test/testdata/invalid
deleted file mode 100644
index 7fede67..0000000
--- a/branch-1.3/lang/c/jansson/test/testdata/invalid
+++ /dev/null
@@ -1,235 +0,0 @@
-==== empty ====
-====
-1
-'[' or '{' expected near end of file
-==== null ====
-null
-====
-1
-'[' or '{' expected near 'null'
-==== lone-open-brace ====
-{
-====
-2
-string or '}' expected near end of file
-==== lone-open-bracket ====
-[
-====
-2
-']' expected near end of file
-==== bracket-comma ====
-[,
-====
-1
-unexpected token near ','
-==== bracket-one-comma ====
-[1,
-====
-2
-']' expected near end of file
-==== unterminated-string ====
-["a
-====
-1
-unexpected newline near '"a'
-==== unterminated-array ====
-["a"
-====
-2
-']' expected near end of file
-==== apostrophe ====
-['
-====
-1
-invalid token near '''
-==== brace-comma ====
-{,
-====
-1
-string or '}' expected near ','
-==== unterminated-empty-key ====
-{"
-====
-1
-unexpected newline near '"'
-==== unterminated-key ====
-{"a
-====
-1
-unexpected newline near '"a'
-==== object-no-colon ====
-{"a"
-====
-2
-':' expected near end of file
-==== object-apostrophes ====
-{'a'
-====
-1
-string or '}' expected near '''
-==== object-no-value ====
-{"a":
-====
-2
-unexpected token near end of file
-==== object-unterminated-value ====
-{"a":"a
-====
-1
-unexpected newline near '"a'
-==== object-garbage-at-end ====
-{"a":"a" 123}
-====
-1
-'}' expected near '123'
-==== unterminated-object-and-array ====
-{[
-====
-1
-string or '}' expected near '['
-==== unterminated-array-and-object ====
-[{
-====
-2
-string or '}' expected near end of file
-==== object-in-unterminated-array ====
-[{}
-====
-2
-']' expected near end of file
-==== extra-comma-in-array ====
-[1,]
-====
-1
-unexpected token near ']'
-==== extra-comma-in-multiline-array ====
-[1,
-2,
-3,
-4,
-5,
-]
-====
-6
-unexpected token near ']'
-==== real-truncated-at-point ====
-[1.]
-====
-1
-invalid token near '1.'
-==== real-truncated-at-e ====
-[1e]
-====
-1
-invalid token near '1e'
-==== real-garbage-after-e ====
-[1ea]
-====
-1
-invalid token near '1e'
-==== real-positive-overflow ====
-[123123e100000]
-====
-1
-real number overflow near '123123e100000'
-==== real-negative-overflow ====
-[-123123e100000]
-====
-1
-real number overflow near '-123123e100000'
-==== real-underflow ====
-[123e-10000000]
-====
-1
-real number underflow near '123e-10000000'
-==== integer-starting-with-zero ====
-[012]
-====
-1
-invalid token near '0'
-==== negative-integer-starting-with-zero ====
-[-012]
-====
-1
-invalid token near '-0'
-==== too-big-positive-integer ====
-[123123123123123]
-====
-1
-too big integer near '123123123123123'
-==== too-big-negative-integer ====
-[-123123123123123]
-====
-1
-too big negative integer near '-123123123123123'
-==== invalid-identifier ====
-[troo
-====
-1
-invalid token near 'troo'
-==== minus-sign-without-number ====
-[-foo]
-====
-1
-invalid token near '-'
-==== invalid-negative-integer ====
-[-123foo]
-====
-1
-']' expected near 'foo'
-==== invalid-negative-real ====
-[-123.123foo]
-====
-1
-']' expected near 'foo'
-==== invalid-escape ====
-["\a <-- invalid escape"]
-====
-1
-invalid escape near '"\'
-==== tab-character-in-string ====
-["	 <-- tab character"]
-====
-1
-control character 0x9 near '"'
-==== null-byte-in-string ====
-["\u0000 (null byte not allowed)"]
-====
-1
-\u0000 is not allowed
-==== truncated-unicode-surrogate ====
-["\uDADA (first surrogate without the second)"]
-====
-1
-invalid Unicode '\uDADA'
-==== invalid-second-surrogate ====
-["\uD888\u3210 (first surrogate and invalid second surrogate)"]
-====
-1
-invalid Unicode '\uD888\u3210'
-==== lone-second-surrogate ====
-["\uDFAA (second surrogate on it's own)"]
-====
-1
-invalid Unicode '\uDFAA'
-==== unicode-identifier ====

-====
-1
-'[' or '{' expected near 'å'
-==== ascii-unicode-identifier ====
-aå
-====
-1
-'[' or '{' expected near 'a'
-==== garbage-at-the-end ====
-[1,2,3]foo
-====
-1
-end of file expected near 'foo'
-==== garbage-after-newline ====
-[1,2,3]
-foo
-====
-2
-end of file expected near 'foo'
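Many of the invalid-JSON fixtures above are rejected by strict parsers in general, not just by Jansson. As a cross-check, Python's `json` module refuses similar inputs (with different error messages, and note `json.loads` does accept top-level scalars like `null`, unlike the parser under test here):

```python
import json

# Spot-check a few of the invalid cases against Python's strict JSON parser.
for text in ['[,', '[1,]', '[1.]', '[012]', '{"a":', '[1,2,3]foo']:
    try:
        json.loads(text)
    except ValueError as exc:
        print('rejected %r: %s' % (text, exc))
    else:
        raise AssertionError('unexpectedly accepted %r' % text)
```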
diff --git a/branch-1.3/lang/c/jansson/test/testdata/invalid-strip b/branch-1.3/lang/c/jansson/test/testdata/invalid-strip
deleted file mode 100644
index 1efdc25..0000000
--- a/branch-1.3/lang/c/jansson/test/testdata/invalid-strip
+++ /dev/null
@@ -1,235 +0,0 @@
-==== empty ====
-====
-1
-'[' or '{' expected near end of file
-==== null ====
-null
-====
-1
-'[' or '{' expected near 'null'
-==== lone-open-brace ====
-{
-====
-1
-string or '}' expected near end of file
-==== lone-open-bracket ====
-[
-====
-1
-']' expected near end of file
-==== bracket-comma ====
-[,
-====
-1
-unexpected token near ','
-==== bracket-one-comma ====
-[1,
-====
-1
-']' expected near end of file
-==== unterminated-string ====
-["a
-====
-1
-premature end of input near '"a'
-==== unterminated-array ====
-["a"
-====
-1
-']' expected near end of file
-==== apostrophe ====
-['
-====
-1
-invalid token near '''
-==== brace-comma ====
-{,
-====
-1
-string or '}' expected near ','
-==== unterminated-empty-key ====
-{"
-====
-1
-premature end of input near '"'
-==== unterminated-key ====
-{"a
-====
-1
-premature end of input near '"a'
-==== object-no-colon ====
-{"a"
-====
-1
-':' expected near end of file
-==== object-apostrophes ====
-{'a'
-====
-1
-string or '}' expected near '''
-==== object-no-value ====
-{"a":
-====
-1
-unexpected token near end of file
-==== object-unterminated-value ====
-{"a":"a
-====
-1
-premature end of input near '"a'
-==== object-garbage-at-end ====
-{"a":"a" 123}
-====
-1
-'}' expected near '123'
-==== unterminated-object-and-array ====
-{[
-====
-1
-string or '}' expected near '['
-==== unterminated-array-and-object ====
-[{
-====
-1
-string or '}' expected near end of file
-==== object-in-unterminated-array ====
-[{}
-====
-1
-']' expected near end of file
-==== extra-comma-in-array ====
-[1,]
-====
-1
-unexpected token near ']'
-==== extra-comma-in-multiline-array ====
-[1,
-2,
-3,
-4,
-5,
-]
-====
-6
-unexpected token near ']'
-==== real-truncated-at-point ====
-[1.]
-====
-1
-invalid token near '1.'
-==== real-truncated-at-e ====
-[1e]
-====
-1
-invalid token near '1e'
-==== real-garbage-after-e ====
-[1ea]
-====
-1
-invalid token near '1e'
-==== real-positive-overflow ====
-[123123e100000]
-====
-1
-real number overflow near '123123e100000'
-==== real-negative-overflow ====
-[-123123e100000]
-====
-1
-real number overflow near '-123123e100000'
-==== real-underflow ====
-[123e-10000000]
-====
-1
-real number underflow near '123e-10000000'
-==== integer-starting-with-zero ====
-[012]
-====
-1
-invalid token near '0'
-==== negative-integer-starting-with-zero ====
-[-012]
-====
-1
-invalid token near '-0'
-==== too-big-positive-integer ====
-[123123123123123]
-====
-1
-too big integer near '123123123123123'
-==== too-big-negative-integer ====
-[-123123123123123]
-====
-1
-too big negative integer near '-123123123123123'
-==== invalid-identifier ====
-[troo
-====
-1
-invalid token near 'troo'
-==== minus-sign-without-number ====
-[-foo]
-====
-1
-invalid token near '-'
-==== invalid-negative-integer ====
-[-123foo]
-====
-1
-']' expected near 'foo'
-==== invalid-negative-real ====
-[-123.123foo]
-====
-1
-']' expected near 'foo'
-==== invalid-escape ====
-["\a <-- invalid escape"]
-====
-1
-invalid escape near '"\'
-==== tab-character-in-string ====
-["	 <-- tab character"]
-====
-1
-control character 0x9 near '"'
-==== null-byte-in-string ====
-["\u0000 (null byte not allowed)"]
-====
-1
-\u0000 is not allowed
-==== truncated-unicode-surrogate ====
-["\uDADA (first surrogate without the second)"]
-====
-1
-invalid Unicode '\uDADA'
-==== invalid-second-surrogate ====
-["\uD888\u3210 (first surrogate and invalid second surrogate)"]
-====
-1
-invalid Unicode '\uD888\u3210'
-==== lone-second-surrogate ====
-["\uDFAA (second surrogate on it's own)"]
-====
-1
-invalid Unicode '\uDFAA'
-==== unicode-identifier ====

-====
-1
-'[' or '{' expected near 'å'
-==== ascii-unicode-identifier ====
-aå
-====
-1
-'[' or '{' expected near 'a'
-==== garbage-at-the-end ====
-[1,2,3]foo
-====
-1
-end of file expected near 'foo'
-==== garbage-after-newline ====
-[1,2,3]
-foo
-====
-2
-end of file expected near 'foo'
diff --git a/branch-1.3/lang/c/jansson/test/testdata/invalid-unicode b/branch-1.3/lang/c/jansson/test/testdata/invalid-unicode
deleted file mode 100644
index 84db51f..0000000
--- a/branch-1.3/lang/c/jansson/test/testdata/invalid-unicode
+++ /dev/null
@@ -1,90 +0,0 @@
-==== lone-invalid-utf-8 ====

-====
--1
-unable to decode byte 0xe5 at position 0
-==== invalid-utf-8-in-string ====
-["å <-- invalid UTF-8"]
-====
--1
-unable to decode byte 0xe5 at position 2
-==== invalid-utf-8-in-array ====
-[å]
-====
--1
-unable to decode byte 0xe5 at position 1
-==== invalid-utf-8-in-identifier ====
-[aå]
-====
--1
-unable to decode byte 0xe5 at position 2
-==== invalid-utf-8-in-escape ====
-["\uå"]
-====
--1
-unable to decode byte 0xe5 at position 4
-==== invalid-utf-8-after-backslash ====
-["\å"]
-====
--1
-unable to decode byte 0xe5 at position 3
-==== invalid-utf-8-in-int ====
-[0å]
-====
--1
-unable to decode byte 0xe5 at position 2
-==== invalid-utf-8-in-bigger-int ====
-[123å]
-====
--1
-unable to decode byte 0xe5 at position 4
-==== invalid-utf-8-in-real-after-e ====
-[1eå]
-====
--1
-unable to decode byte 0xe5 at position 3
-==== invalid-utf-8-in-exponent ====
-[1e1å]
-====
--1
-unable to decode byte 0xe5 at position 4
-==== lone-utf-8-continuation-byte ====
-[""]
-====
--1
-unable to decode byte 0x81 at position 2
-==== overlong-ascii-encoding ====
-["Á"]
-====
--1
-unable to decode byte 0xc1 at position 2
-==== restricted-utf-8 ====
-["ý"]
-====
--1
-unable to decode byte 0xfd at position 2
-==== not-in-unicode-range ====
-["ô¿¿¿"]
-====
--1
-unable to decode byte 0xf4 at position 2
-==== overlong-3-byte-encoding ====
-["à€¢ <-- overlong encoding"]
-====
--1
-unable to decode byte 0xe0 at position 2
-==== overlong-4-byte-encoding ====
-["ð€€¢ <-- overlong encoding"]
-====
--1
-unable to decode byte 0xf0 at position 2
-==== truncated-utf-8 ====
-["àÿ <-- truncated UTF-8"]
-====
--1
-unable to decode byte 0xe0 at position 2
-==== encoded-surrogate-half ====
-["í¢« <-- encoded surrogate half"]
-====
--1
-unable to decode byte 0xed at position 2
diff --git a/branch-1.3/lang/c/jansson/test/testdata/valid b/branch-1.3/lang/c/jansson/test/testdata/valid
deleted file mode 100644
index 50cfc63..0000000
--- a/branch-1.3/lang/c/jansson/test/testdata/valid
+++ /dev/null
@@ -1,68 +0,0 @@
-==== empty-string ====
-[""]
-==== short-string ====
-["a"]
-==== simple-ascii-string ====
-["abcdefghijklmnopqrstuvwxyz1234567890 "]
-==== utf-8-string ====
-["€þıœəßð some utf-8 ĸʒ×ŋµåäö𝄞"]
-==== string-escapes ====
-["\"\\\/\b\f\n\r\t"]
-==== one-byte-utf-8 ====
-["\u002c one-byte UTF-8"]
-==== two-byte-utf-8 ====
-["\u0123 two-byte UTF-8"]
-==== three-byte-utf-8 ====
-["\u0821 three-byte UTF-8"]
-==== utf-surrogate-four-byte-encoding ====
-["\uD834\uDD1E surrogate, four-byte UTF-8"]
-==== escaped-utf-control-char ====
-["\u0012 escaped control character"]
-==== simple-int-0 ====
-[0]
-==== simple-int-1 ====
-[1]
-==== simple-int-123 ====
-[123]
-==== negative-zero ====
-[-0]
-==== negative-one ====
-[-1]
-==== negative-int ====
-[-123]
-==== simple-real ====
-[123.456789]
-==== real-exponent ====
-[123e45]
-==== real-capital-e ====
-[1E22]
-==== real-positive-exponent ====
-[1e+2]
-==== real-negative-exponent ====
-[1e-2]
-==== real-capital-e-positive-exponent ====
-[1E+2]
-==== real-capital-e-negative-exponent ====
-[1E-2]
-==== real-fraction-exponent ====
-[123.456e78]
-==== true ====
-[true]
-==== false ====
-[false]
-==== null ====
-[null]
-==== empty-array ====
-[]
-==== empty-object-in-array ====
-[{}]
-==== complex-array ====
-[1,2,3,4,
-"a", "b", "c",
-{"foo": "bar", "core": "dump"},
-true, false, true, true, null, false
-]
-==== empty-object ====
-{}
-==== simple-object ====
-{"a":[]}
diff --git a/branch-1.3/lang/c/jansson/test/testdata/valid-strip b/branch-1.3/lang/c/jansson/test/testdata/valid-strip
deleted file mode 100644
index 50cfc63..0000000
--- a/branch-1.3/lang/c/jansson/test/testdata/valid-strip
+++ /dev/null
@@ -1,68 +0,0 @@
-==== empty-string ====
-[""]
-==== short-string ====
-["a"]
-==== simple-ascii-string ====
-["abcdefghijklmnopqrstuvwxyz1234567890 "]
-==== utf-8-string ====
-["€þıœəßð some utf-8 ĸʒ×ŋµåäö𝄞"]
-==== string-escapes ====
-["\"\\\/\b\f\n\r\t"]
-==== one-byte-utf-8 ====
-["\u002c one-byte UTF-8"]
-==== two-byte-utf-8 ====
-["\u0123 two-byte UTF-8"]
-==== three-byte-utf-8 ====
-["\u0821 three-byte UTF-8"]
-==== utf-surrogate-four-byte-encoding ====
-["\uD834\uDD1E surrogate, four-byte UTF-8"]
-==== escaped-utf-control-char ====
-["\u0012 escaped control character"]
-==== simple-int-0 ====
-[0]
-==== simple-int-1 ====
-[1]
-==== simple-int-123 ====
-[123]
-==== negative-zero ====
-[-0]
-==== negative-one ====
-[-1]
-==== negative-int ====
-[-123]
-==== simple-real ====
-[123.456789]
-==== real-exponent ====
-[123e45]
-==== real-capital-e ====
-[1E22]
-==== real-positive-exponent ====
-[1e+2]
-==== real-negative-exponent ====
-[1e-2]
-==== real-capital-e-positive-exponent ====
-[1E+2]
-==== real-capital-e-negative-exponent ====
-[1E-2]
-==== real-fraction-exponent ====
-[123.456e78]
-==== true ====
-[true]
-==== false ====
-[false]
-==== null ====
-[null]
-==== empty-array ====
-[]
-==== empty-object-in-array ====
-[{}]
-==== complex-array ====
-[1,2,3,4,
-"a", "b", "c",
-{"foo": "bar", "core": "dump"},
-true, false, true, true, null, false
-]
-==== empty-object ====
-{}
-==== simple-object ====
-{"a":[]}
diff --git a/branch-1.3/lang/c/jansson/test/testprogs/Makefile.am b/branch-1.3/lang/c/jansson/test/testprogs/Makefile.am
deleted file mode 100644
index 41807ed..0000000
--- a/branch-1.3/lang/c/jansson/test/testprogs/Makefile.am
+++ /dev/null
@@ -1,12 +0,0 @@
-check_PROGRAMS = test_array test_load test_simple test_number test_object
-
-test_array_SOURCES = test_array.c util.h
-test_load_SOURCES = test_load.c util.h
-test_simple_SOURCES = test_simple.c util.h
-test_number_SOURCES = test_number.c util.h
-test_object_SOURCES = test_object.c util.h
-
-AM_CPPFLAGS = -I$(top_srcdir)/src
-AM_CFLAGS = -Wall -Werror
-LDFLAGS = -static  # for speed and Valgrind
-LDADD = ../../src/libjansson.la
diff --git a/branch-1.3/lang/c/jansson/test/testprogs/test_array.c b/branch-1.3/lang/c/jansson/test/testprogs/test_array.c
deleted file mode 100644
index 45bfefd..0000000
--- a/branch-1.3/lang/c/jansson/test/testprogs/test_array.c
+++ /dev/null
@@ -1,400 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <jansson.h>
-#include "util.h"
-
-static void test_misc(void)
-{
-    json_t *array, *five, *seven, *value;
-    int i;
-
-    array = json_array();
-    five = json_integer(5);
-    seven = json_integer(7);
-
-    if(!array)
-        fail("unable to create array");
-    if(!five || !seven)
-        fail("unable to create integer");
-
-    if(json_array_size(array) != 0)
-        fail("empty array has nonzero size");
-
-    if(!json_array_append(array, NULL))
-        fail("able to append NULL");
-
-    if(json_array_append(array, five))
-        fail("unable to append");
-
-    if(json_array_size(array) != 1)
-        fail("wrong array size");
-
-    value = json_array_get(array, 0);
-    if(!value)
-        fail("unable to get item");
-    if(value != five)
-        fail("got wrong value");
-
-    if(json_array_append(array, seven))
-        fail("unable to append value");
-
-    if(json_array_size(array) != 2)
-        fail("wrong array size");
-
-    value = json_array_get(array, 1);
-    if(!value)
-        fail("unable to get item");
-    if(value != seven)
-        fail("got wrong value");
-
-    if(json_array_set(array, 0, seven))
-        fail("unable to set value");
-
-    if(!json_array_set(array, 0, NULL))
-        fail("able to set NULL");
-
-    if(json_array_size(array) != 2)
-        fail("wrong array size");
-
-    value = json_array_get(array, 0);
-    if(!value)
-        fail("unable to get item");
-    if(value != seven)
-        fail("got wrong value");
-
-    if(json_array_get(array, 2) != NULL)
-        fail("able to get value out of bounds");
-
-    if(!json_array_set(array, 2, seven))
-        fail("able to set value out of bounds");
-
-    for(i = 2; i < 30; i++) {
-        if(json_array_append(array, seven))
-            fail("unable to append value");
-
-        if(json_array_size(array) != i + 1)
-            fail("wrong array size");
-    }
-
-    for(i = 0; i < 30; i++) {
-        value = json_array_get(array, i);
-        if(!value)
-            fail("unable to get item");
-        if(value != seven)
-            fail("got wrong value");
-    }
-
-    if(json_array_set_new(array, 15, json_integer(123)))
-        fail("unable to set new value");
-
-    value = json_array_get(array, 15);
-    if(!json_is_integer(value) || json_integer_value(value) != 123)
-        fail("json_array_set_new works incorrectly");
-
-    if(!json_array_set_new(array, 15, NULL))
-        fail("able to set_new NULL value");
-
-    if(json_array_append_new(array, json_integer(321)))
-        fail("unable to append new value");
-
-    value = json_array_get(array, json_array_size(array) - 1);
-    if(!json_is_integer(value) || json_integer_value(value) != 321)
-        fail("json_array_append_new works incorrectly");
-
-    if(!json_array_append_new(array, NULL))
-        fail("able to append_new NULL value");
-
-    json_decref(five);
-    json_decref(seven);
-    json_decref(array);
-}
-
-static void test_insert(void)
-{
-    json_t *array, *five, *seven, *eleven, *value;
-    int i;
-
-    array = json_array();
-    five = json_integer(5);
-    seven = json_integer(7);
-    eleven = json_integer(11);
-
-    if(!array)
-        fail("unable to create array");
-    if(!five || !seven || !eleven)
-        fail("unable to create integer");
-
-
-    if(!json_array_insert(array, 1, five))
-        fail("able to insert value out of bounds");
-
-
-    if(json_array_insert(array, 0, five))
-        fail("unable to insert value in an empty array");
-
-    if(json_array_get(array, 0) != five)
-        fail("json_array_insert works incorrectly");
-
-    if(json_array_size(array) != 1)
-        fail("array size is invalid after insertion");
-
-
-    if(json_array_insert(array, 1, seven))
-        fail("unable to insert value at the end of an array");
-
-    if(json_array_get(array, 0) != five)
-        fail("json_array_insert works incorrectly");
-
-    if(json_array_get(array, 1) != seven)
-        fail("json_array_insert works incorrectly");
-
-    if(json_array_size(array) != 2)
-        fail("array size is invalid after insertion");
-
-
-    if(json_array_insert(array, 1, eleven))
-        fail("unable to insert value in the middle of an array");
-
-    if(json_array_get(array, 0) != five)
-        fail("json_array_insert works incorrectly");
-
-    if(json_array_get(array, 1) != eleven)
-        fail("json_array_insert works incorrectly");
-
-    if(json_array_get(array, 2) != seven)
-        fail("json_array_insert works incorrectly");
-
-    if(json_array_size(array) != 3)
-        fail("array size is invalid after insertion");
-
-
-    if(json_array_insert_new(array, 2, json_integer(123)))
-        fail("unable to insert value in the middle of an array");
-
-    value = json_array_get(array, 2);
-    if(!json_is_integer(value) || json_integer_value(value) != 123)
-        fail("json_array_insert_new works incorrectly");
-
-    if(json_array_size(array) != 4)
-        fail("array size is invalid after insertion");
-
-
-    for(i = 0; i < 20; i++) {
-        if(json_array_insert(array, 0, seven))
-            fail("unable to insert value at the begining of an array");
-    }
-
-    for(i = 0; i < 20; i++) {
-        if(json_array_get(array, i) != seven)
-            fail("json_aray_insert works incorrectly");
-    }
-
-    if(json_array_size(array) != 24)
-        fail("array size is invalid after loop insertion");
-
-    json_decref(five);
-    json_decref(seven);
-    json_decref(eleven);
-    json_decref(array);
-}
-
-static void test_remove(void)
-{
-    json_t *array, *five, *seven;
-
-    array = json_array();
-    five = json_integer(5);
-    seven = json_integer(7);
-
-    if(!array)
-        fail("unable to create array");
-    if(!five)
-        fail("unable to create integer");
-    if(!seven)
-        fail("unable to create integer");
-
-
-    if(!json_array_remove(array, 0))
-        fail("able to remove an unexisting index");
-
-
-    if(json_array_append(array, five))
-        fail("unable to append");
-
-    if(!json_array_remove(array, 1))
-        fail("able to remove an unexisting index");
-
-    if(json_array_remove(array, 0))
-        fail("unable to remove");
-
-    if(json_array_size(array) != 0)
-        fail("array size is invalid after removing");
-
-
-    if(json_array_append(array, five) ||
-       json_array_append(array, seven) ||
-       json_array_append(array, five) ||
-       json_array_append(array, seven))
-        fail("unable to append");
-
-    if(json_array_remove(array, 2))
-        fail("unable to remove");
-
-    if(json_array_size(array) != 3)
-        fail("array size is invalid after removing");
-
-    if(json_array_get(array, 0) != five ||
-       json_array_get(array, 1) != seven ||
-       json_array_get(array, 2) != seven)
-        fail("remove works incorrectly");
-
-    json_decref(five);
-    json_decref(seven);
-    json_decref(array);
-}
-
-static void test_clear(void)
-{
-    json_t *array, *five, *seven;
-    int i;
-
-    array = json_array();
-    five = json_integer(5);
-    seven = json_integer(7);
-
-    if(!array)
-        fail("unable to create array");
-    if(!five || !seven)
-        fail("unable to create integer");
-
-    for(i = 0; i < 10; i++) {
-        if(json_array_append(array, five))
-            fail("unable to append");
-    }
-    for(i = 0; i < 10; i++) {
-        if(json_array_append(array, seven))
-            fail("unable to append");
-    }
-
-    if(json_array_size(array) != 20)
-        fail("array size is invalid after appending");
-
-    if(json_array_clear(array))
-        fail("unable to clear");
-
-    if(json_array_size(array) != 0)
-        fail("array size is invalid after clearing");
-
-    json_decref(five);
-    json_decref(seven);
-    json_decref(array);
-}
-
-static void test_extend(void)
-{
-    json_t *array1, *array2, *five, *seven;
-    int i;
-
-    array1 = json_array();
-    array2 = json_array();
-    five = json_integer(5);
-    seven = json_integer(7);
-
-    if(!array1 || !array2)
-        fail("unable to create array");
-    if(!five || !seven)
-        fail("unable to create integer");
-
-    for(i = 0; i < 10; i++) {
-        if(json_array_append(array1, five))
-            fail("unable to append");
-    }
-    for(i = 0; i < 10; i++) {
-        if(json_array_append(array2, seven))
-            fail("unable to append");
-    }
-
-    if(json_array_size(array1) != 10 || json_array_size(array2) != 10)
-        fail("array size is invalid after appending");
-
-    if(json_array_extend(array1, array2))
-        fail("unable to extend");
-
-    for(i = 0; i < 10; i++) {
-        if(json_array_get(array1, i) != five)
-            fail("invalid array contents after extending");
-    }
-    for(i = 10; i < 20; i++) {
-        if(json_array_get(array1, i) != seven)
-            fail("invalid array contents after extending");
-    }
-
-    json_decref(five);
-    json_decref(seven);
-    json_decref(array1);
-    json_decref(array2);
-}
-
-static void test_circular()
-{
-    json_t *array1, *array2;
-
-    /* the simple cases are checked */
-
-    array1 = json_array();
-    if(!array1)
-        fail("unable to create array");
-
-    if(json_array_append(array1, array1) == 0)
-        fail("able to append self");
-
-    if(json_array_insert(array1, 0, array1) == 0)
-        fail("able to insert self");
-
-    if(json_array_append_new(array1, json_true()))
-        fail("failed to append true");
-
-    if(json_array_set(array1, 0, array1) == 0)
-        fail("able to set self");
-
-    json_decref(array1);
-
-
-    /* create circular references */
-
-    array1 = json_array();
-    array2 = json_array();
-    if(!array1 || !array2)
-        fail("unable to create array");
-
-    if(json_array_append(array1, array2) ||
-       json_array_append(array2, array1))
-        fail("unable to append");
-
-    /* circularity is detected when dumping */
-    if(json_dumps(array1, 0) != NULL)
-        fail("able to dump circulars");
-
-    /* decref twice to deal with the circular references */
-    json_decref(array1);
-    json_decref(array2);
-    json_decref(array1);
-}
-
-
-int main()
-{
-    test_misc();
-    test_insert();
-    test_remove();
-    test_clear();
-    test_extend();
-    test_circular();
-
-    return 0;
-}
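test_array.c above exercises Jansson's two append flavors: `json_array_append()` takes its own reference to the value (the caller keeps theirs), while `json_array_append_new()` steals the caller's reference. A toy Python model of that ownership rule (names loosely mirror Jansson; the refcount class is mine and greatly simplified):

```python
# Toy model of Jansson's borrow-vs-steal reference semantics.

class Ref:
    def __init__(self):
        self.count = 1          # a fresh value starts with one reference

    def incref(self):
        self.count += 1

    def decref(self):
        self.count -= 1

def array_append(arr, val):
    """Borrowing append: the array takes its own reference."""
    val.incref()
    arr.append(val)

def array_append_new(arr, val):
    """Stealing append: the caller's reference moves into the array."""
    arr.append(val)

v = Ref()
a = []
array_append(a, v)           # caller still owns one ref, array owns another
assert v.count == 2
array_append_new(a, Ref())   # no decref needed by the caller afterwards
v.decref()                   # caller releases; array's reference remains
assert v.count == 1
```

This is why the C tests above call `json_decref(five)` after appending: the array holds its own reference, so the test must drop the one it created.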
diff --git a/branch-1.3/lang/c/jansson/test/testprogs/test_load.c b/branch-1.3/lang/c/jansson/test/testprogs/test_load.c
deleted file mode 100644
index 4d8fa88..0000000
--- a/branch-1.3/lang/c/jansson/test/testprogs/test_load.c
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <jansson.h>
-#include <string.h>
-#include "util.h"
-
-int main()
-{
-    json_t *json;
-    json_error_t error;
-
-    json = json_load_file("/path/to/nonexistent/file.json", &error);
-    if(error.line != -1)
-        fail("json_load_file returned an invalid line number");
-    if(strcmp(error.text, "unable to open /path/to/nonexistent/file.json: No such file or directory") != 0)
-        fail("json_load_file returned an invalid error message");
-
-    return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/test/testprogs/test_number.c b/branch-1.3/lang/c/jansson/test/testprogs/test_number.c
deleted file mode 100644
index a634125..0000000
--- a/branch-1.3/lang/c/jansson/test/testprogs/test_number.c
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <jansson.h>
-#include "util.h"
-
-int main()
-{
-    json_t *integer, *real;
-    int i;
-    double d;
-
-    integer = json_integer(5);
-    real = json_real(100.1);
-
-    if(!integer)
-        fail("unable to create integer");
-    if(!real)
-        fail("unable to create real");
-
-    i = json_integer_value(integer);
-    if(i != 5)
-        fail("wrong integer value");
-
-    d = json_real_value(real);
-    if(d != 100.1)
-        fail("wrong real value");
-
-    d = json_number_value(integer);
-    if(d != 5.0)
-        fail("wrong number value");
-    d = json_number_value(real);
-    if(d != 100.1)
-        fail("wrong number value");
-
-    json_decref(integer);
-    json_decref(real);
-
-    return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/test/testprogs/test_object.c b/branch-1.3/lang/c/jansson/test/testprogs/test_object.c
deleted file mode 100644
index 57650e8..0000000
--- a/branch-1.3/lang/c/jansson/test/testprogs/test_object.c
+++ /dev/null
@@ -1,298 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <jansson.h>
-#include <string.h>
-#include "util.h"
-
-static void test_clear()
-{
-    json_t *object, *ten;
-
-    object = json_object();
-    ten = json_integer(10);
-
-    if(!object)
-        fail("unable to create object");
-    if(!ten)
-        fail("unable to create integer");
-
-    if(json_object_set(object, "a", ten) ||
-       json_object_set(object, "b", ten) ||
-       json_object_set(object, "c", ten) ||
-       json_object_set(object, "d", ten) ||
-       json_object_set(object, "e", ten))
-        fail("unable to set value");
-
-    if(json_object_size(object) != 5)
-        fail("invalid size");
-
-    json_object_clear(object);
-
-    if(json_object_size(object) != 0)
-        fail("invalid size after clear");
-
-    json_decref(ten);
-    json_decref(object);
-}
-
-static void test_update()
-{
-    json_t *object, *other, *nine, *ten;
-
-    object = json_object();
-    other = json_object();
-
-    nine = json_integer(9);
-    ten = json_integer(10);
-
-    if(!object || !other)
-        fail("unable to create object");
-    if(!nine || !ten)
-        fail("unable to create integer");
-
-
-    /* update an empty object with an empty object */
-
-    if(json_object_update(object, other))
-        fail("unable to update an empty object with an empty object");
-
-    if(json_object_size(object) != 0)
-        fail("invalid size after update");
-
-    if(json_object_size(other) != 0)
-        fail("invalid size for updater after update");
-
-
-    /* update an empty object with a nonempty object */
-
-    if(json_object_set(other, "a", ten) ||
-       json_object_set(other, "b", ten) ||
-       json_object_set(other, "c", ten) ||
-       json_object_set(other, "d", ten) ||
-       json_object_set(other, "e", ten))
-        fail("unable to set value");
-
-    if(json_object_update(object, other))
-        fail("unable to update an empty object");
-
-    if(json_object_size(object) != 5)
-        fail("invalid size after update");
-
-    if(json_object_get(object, "a") != ten ||
-       json_object_get(object, "b") != ten ||
-       json_object_get(object, "c") != ten ||
-       json_object_get(object, "d") != ten ||
-       json_object_get(object, "e") != ten)
-        fail("update works incorrectly");
-
-
-    /* perform the same update again */
-
-    if(json_object_update(object, other))
-        fail("unable to repeat an identical update");
-
-    if(json_object_size(object) != 5)
-        fail("invalid size after update");
-
-    if(json_object_get(object, "a") != ten ||
-       json_object_get(object, "b") != ten ||
-       json_object_get(object, "c") != ten ||
-       json_object_get(object, "d") != ten ||
-       json_object_get(object, "e") != ten)
-        fail("update works incorrectly");
-
-
-    /* update a nonempty object with a nonempty object with both old
-       and new keys */
-
-    if(json_object_clear(other))
-        fail("clear failed");
-
-    if(json_object_set(other, "a", nine) ||
-       json_object_set(other, "b", nine) ||
-       json_object_set(other, "f", nine) ||
-       json_object_set(other, "g", nine) ||
-       json_object_set(other, "h", nine))
-        fail("unable to set value");
-
-    if(json_object_update(object, other))
-        fail("unable to update a nonempty object");
-
-    if(json_object_size(object) != 8)
-        fail("invalid size after update");
-
-    if(json_object_get(object, "a") != nine ||
-       json_object_get(object, "b") != nine ||
-       json_object_get(object, "f") != nine ||
-       json_object_get(object, "g") != nine ||
-       json_object_get(object, "h") != nine)
-        fail("update works incorrectly");
-
-    json_decref(nine);
-    json_decref(ten);
-    json_decref(other);
-    json_decref(object);
-}
-
-static void test_circular()
-{
-    json_t *object1, *object2;
-
-    object1 = json_object();
-    object2 = json_object();
-    if(!object1 || !object2)
-        fail("unable to create object");
-
-    /* the simple case is checked */
-    if(json_object_set(object1, "a", object1) == 0)
-        fail("able to set self");
-
-    /* create circular references */
-    if(json_object_set(object1, "a", object2) ||
-       json_object_set(object2, "a", object1))
-        fail("unable to set value");
-
-    /* circularity is detected when dumping */
-    if(json_dumps(object1, 0) != NULL)
-        fail("able to dump circulars");
-
-    /* decref twice to deal with the circular references */
-    json_decref(object1);
-    json_decref(object2);
-    json_decref(object1);
-}
-
-static void test_misc()
-{
-    json_t *object, *string, *other_string, *value;
-    void *iter;
-
-    object = json_object();
-    string = json_string("test");
-    other_string = json_string("other");
-
-    if(!object)
-        fail("unable to create object");
-    if(!string || !other_string)
-        fail("unable to create string");
-
-    if(json_object_get(object, "a"))
-        fail("value for nonexisting key");
-
-    if(json_object_set(object, "a", string))
-        fail("unable to set value");
-
-    if(!json_object_set(object, NULL, string))
-        fail("able to set NULL key");
-
-    if(!json_object_set(object, "a", NULL))
-        fail("able to set NULL value");
-
-    iter = json_object_iter(object);
-    if(!iter)
-        fail("unable to get iterator");
-
-    if(strcmp(json_object_iter_key(iter), "a"))
-        fail("iterating failed: wrong key");
-    if(json_object_iter_value(iter) != string)
-        fail("iterating failed: wrong value");
-    if(json_object_iter_next(object, iter) != NULL)
-        fail("able to iterate over the end");
-
-    /* invalid UTF-8 in key */
-    if(!json_object_set(object, "a\xefz", string))
-        fail("able to set invalid unicode key");
-
-    value = json_object_get(object, "a");
-    if(!value)
-        fail("no value for existing key");
-    if(value != string)
-        fail("got different value than what was added");
-
-    /* "a", "lp" and "px" collide in a five-bucket hashtable */
-    if(json_object_set(object, "b", string) ||
-       json_object_set(object, "lp", string) ||
-       json_object_set(object, "px", string))
-        fail("unable to set value");
-
-    value = json_object_get(object, "a");
-    if(!value)
-        fail("no value for existing key");
-    if(value != string)
-        fail("got different value than what was added");
-
-    if(json_object_set(object, "a", other_string))
-        fail("unable to replace an existing key");
-
-    value = json_object_get(object, "a");
-    if(!value)
-        fail("no value for existing key");
-    if(value != other_string)
-        fail("got different value than what was set");
-
-    if(!json_object_del(object, "nonexisting"))
-        fail("able to delete a nonexisting key");
-
-    if(json_object_del(object, "px"))
-        fail("unable to delete an existing key");
-
-    if(json_object_del(object, "a"))
-        fail("unable to delete an existing key");
-
-    if(json_object_del(object, "lp"))
-        fail("unable to delete an existing key");
-
-
-    /* add many keys to initiate rehashing */
-
-    if(json_object_set(object, "a", string))
-        fail("unable to set value");
-
-    if(json_object_set(object, "lp", string))
-        fail("unable to set value");
-
-    if(json_object_set(object, "px", string))
-        fail("unable to set value");
-
-    if(json_object_set(object, "c", string))
-        fail("unable to set value");
-
-    if(json_object_set(object, "d", string))
-        fail("unable to set value");
-
-    if(json_object_set(object, "e", string))
-        fail("unable to set value");
-
-
-    if(json_object_set_new(object, "foo", json_integer(123)))
-        fail("unable to set new value");
-
-    value = json_object_get(object, "foo");
-    if(!json_is_integer(value) || json_integer_value(value) != 123)
-        fail("json_object_set_new works incorrectly");
-
-    if(!json_object_set_new(object, NULL, json_integer(432)))
-        fail("able to set_new NULL key");
-
-    if(!json_object_set_new(object, "foo", NULL))
-        fail("able to set_new NULL value");
-
-    json_decref(string);
-    json_decref(other_string);
-    json_decref(object);
-}
-
-int main()
-{
-    test_misc();
-    test_clear();
-    test_update();
-    test_circular();
-
-    return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/test/testprogs/test_simple.c b/branch-1.3/lang/c/jansson/test/testprogs/test_simple.c
deleted file mode 100644
index 9d7691e..0000000
--- a/branch-1.3/lang/c/jansson/test/testprogs/test_simple.c
+++ /dev/null
@@ -1,127 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#include <string.h>
-#include <jansson.h>
-#include "util.h"
-
-/* Call the simple functions not covered by other tests of the public API */
-int main()
-{
-    json_t *value;
-
-    value = json_integer(1);
-    if(json_typeof(value) != JSON_INTEGER)
-        fail("json_typeof failed");
-
-    if(json_is_object(value))
-        fail("json_is_object failed");
-
-    if(json_is_array(value))
-        fail("json_is_array failed");
-
-    if(json_is_string(value))
-        fail("json_is_string failed");
-
-    if(!json_is_integer(value))
-        fail("json_is_integer failed");
-
-    if(json_is_real(value))
-        fail("json_is_real failed");
-
-    if(!json_is_number(value))
-        fail("json_is_number failed");
-
-    if(json_is_true(value))
-        fail("json_is_true failed");
-
-    if(json_is_false(value))
-        fail("json_is_false failed");
-
-    if(json_is_boolean(value))
-        fail("json_is_boolean failed");
-
-    if(json_is_null(value))
-        fail("json_is_null failed");
-
-    json_decref(value);
-
-
-    value = json_string("foo");
-    if(!value)
-        fail("json_string failed");
-    if(strcmp(json_string_value(value), "foo"))
-        fail("invalid string value");
-
-    if(json_string_set(value, "bar"))
-        fail("json_string_set failed");
-    if(strcmp(json_string_value(value), "bar"))
-        fail("invalid string value");
-
-    json_decref(value);
-
-    value = json_string(NULL);
-    if(value)
-        fail("json_string(NULL) failed");
-
-    /* invalid UTF-8  */
-    value = json_string("a\xefz");
-    if(value)
-        fail("json_string(<invalid utf-8>) failed");
-
-
-    value = json_integer(123);
-    if(!value)
-        fail("json_integer failed");
-    if(json_integer_value(value) != 123)
-        fail("invalid integer value");
-    if(json_number_value(value) != 123.0)
-        fail("invalid number value");
-
-    if(json_integer_set(value, 321))
-        fail("json_integer_set failed");
-    if(json_integer_value(value) != 321)
-        fail("invalid integer value");
-    if(json_number_value(value) != 321.0)
-        fail("invalid number value");
-
-    json_decref(value);
-
-    value = json_real(123.123);
-    if(!value)
-        fail("json_real failed");
-    if(json_real_value(value) != 123.123)
-        fail("invalid real value");
-    if(json_number_value(value) != 123.123)
-        fail("invalid number value");
-
-    if(json_real_set(value, 321.321))
-        fail("json_real_set failed");
-    if(json_real_value(value) != 321.321)
-        fail("invalid real value");
-    if(json_number_value(value) != 321.321)
-        fail("invalid number value");
-
-    json_decref(value);
-
-    value = json_true();
-    if(!value)
-        fail("json_true failed");
-    json_decref(value);
-
-    value = json_false();
-    if(!value)
-        fail("json_false failed");
-    json_decref(value);
-
-    value = json_null();
-    if(!value)
-        fail("json_null failed");
-    json_decref(value);
-
-    return 0;
-}
diff --git a/branch-1.3/lang/c/jansson/test/testprogs/util.h b/branch-1.3/lang/c/jansson/test/testprogs/util.h
deleted file mode 100644
index ec8e0e4..0000000
--- a/branch-1.3/lang/c/jansson/test/testprogs/util.h
+++ /dev/null
@@ -1,20 +0,0 @@
-/*
- * Copyright (c) 2009 Petri Lehtinen <petri@digip.org>
- *
- * Jansson is free software; you can redistribute it and/or modify
- * it under the terms of the MIT license. See LICENSE for details.
- */
-
-#ifndef TESTPROGS_UTIL_H
-#define TESTPROGS_UTIL_H
-
-#include <stdio.h>
-#include <stdlib.h>
-
-#define fail(msg)                                                \
-    do {                                                         \
-        fprintf(stderr, "%s:%s:%d: %s\n",                        \
-                __FILE__, __FUNCTION__, __LINE__, msg);          \
-        exit(1);                                                 \
-    } while(0)
-
-#endif
diff --git a/branch-1.3/lang/c/m4/.gitignore b/branch-1.3/lang/c/m4/.gitignore
deleted file mode 100644
index 72e8ffc..0000000
--- a/branch-1.3/lang/c/m4/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-*
diff --git a/branch-1.3/lang/c/src/CMakeLists.txt b/branch-1.3/lang/c/src/CMakeLists.txt
deleted file mode 100644
index 854d504..0000000
--- a/branch-1.3/lang/c/src/CMakeLists.txt
+++ /dev/null
@@ -1,75 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-
-set(AVRO_SRC
-    avro.h
-    avro_private.h
-    datafile.c
-    datum.c
-    datum.h
-    datum_equal.c
-    datum_read.c
-    datum_size.c
-    datum_skip.c
-    datum_validate.c
-    datum_write.c
-    dump.c
-    dump.h
-    encoding.h
-    encoding_binary.c
-    io.c
-    schema.c
-    schema.h
-    schema_equal.c
-    st.c
-    st.h
-)
-
-set(JANSSON_SRC
-    ../jansson/src/dump.c
-    ../jansson/src/hashtable.c
-    ../jansson/src/hashtable.h
-    ../jansson/src/jansson.h
-    ../jansson/src/jansson_private.h
-    ../jansson/src/load.c
-    ../jansson/src/strbuffer.c
-    ../jansson/src/strbuffer.h
-    ../jansson/src/utf.c
-    ../jansson/src/utf.h
-    ../jansson/src/util.h
-    ../jansson/src/value.c
-)
-
-source_group(Avro FILES ${AVRO_SRC})
-source_group(Jansson FILES ${JANSSON_SRC})
-
-add_library(avro-static STATIC ${AVRO_SRC} ${JANSSON_SRC})
-set_target_properties(avro-static PROPERTIES OUTPUT_NAME avro)
-add_library(avro-shared SHARED ${AVRO_SRC} ${JANSSON_SRC})
-set_target_properties(avro-shared PROPERTIES
-        OUTPUT_NAME avro
-        SOVERSION ${AVRO_VERSION})
-
-install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/avro.h DESTINATION include)
-install(TARGETS avro-static avro-shared
-        RUNTIME DESTINATION bin
-        LIBRARY DESTINATION lib
-        ARCHIVE DESTINATION lib
-)
-
diff --git a/branch-1.3/lang/c/src/Makefile.am b/branch-1.3/lang/c/src/Makefile.am
deleted file mode 100644
index 3bb1293..0000000
--- a/branch-1.3/lang/c/src/Makefile.am
+++ /dev/null
@@ -1,17 +0,0 @@
-# -pedantic
-AM_CPPFLAGS=-I$(top_srcdir)/jansson/src
-AM_CFLAGS=-Wall -Wextra -Wunused-parameter
-ACLOCAL_AMFLAGS=-I m4
-
-include_HEADERS = avro.h
-
-lib_LTLIBRARIES = libavro.la
-libavro_la_SOURCES = st.c st.h schema.c schema.h schema_equal.c \
-datum.c datum_equal.c datum_validate.c datum_read.c datum_skip.c datum_write.c datum_size.c datum.h \
-io.c dump.c dump.h encoding_binary.c \
-avro_private.h encoding.h datafile.c
-libavro_la_LIBADD = $(top_builddir)/jansson/src/.libs/libjansson.a
-libavro_la_LDFLAGS = \
-        -version-info $(LIBAVRO_VERSION) \
-        -release $(VERSION) \
-        -export-dynamic
diff --git a/branch-1.3/lang/c/src/avro.h b/branch-1.3/lang/c/src/avro.h
deleted file mode 100644
index 8831431..0000000
--- a/branch-1.3/lang/c/src/avro.h
+++ /dev/null
@@ -1,285 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-#ifndef AVRO_H
-#define AVRO_H
-#ifdef __cplusplus
-extern "C" {
-#define CLOSE_EXTERN }
-#else
-#define CLOSE_EXTERN
-#endif
-
-#include <stdio.h>
-#include <stdint.h>
-
-enum avro_type_t {
-	AVRO_STRING,
-	AVRO_BYTES,
-	AVRO_INT32,
-	AVRO_INT64,
-	AVRO_FLOAT,
-	AVRO_DOUBLE,
-	AVRO_BOOLEAN,
-	AVRO_NULL,
-	AVRO_RECORD,
-	AVRO_ENUM,
-	AVRO_FIXED,
-	AVRO_MAP,
-	AVRO_ARRAY,
-	AVRO_UNION,
-	AVRO_LINK
-};
-typedef enum avro_type_t avro_type_t;
-
-enum avro_class_t {
-	AVRO_SCHEMA,
-	AVRO_DATUM
-};
-typedef enum avro_class_t avro_class_t;
-
-struct avro_obj_t {
-	avro_type_t type;
-	avro_class_t class_type;
-	unsigned long refcount;
-};
-
-#define avro_classof(obj)     ((obj)->class_type)
-#define is_avro_schema(obj)   (obj && avro_classof(obj) == AVRO_SCHEMA)
-#define is_avro_datum(obj)    (obj && avro_classof(obj) == AVRO_DATUM)
-
-#define avro_typeof(obj)      ((obj)->type)
-#define is_avro_string(obj)   (obj && avro_typeof(obj) == AVRO_STRING)
-#define is_avro_bytes(obj)    (obj && avro_typeof(obj) == AVRO_BYTES)
-#define is_avro_int32(obj)    (obj && avro_typeof(obj) == AVRO_INT32)
-#define is_avro_int64(obj)    (obj && avro_typeof(obj) == AVRO_INT64)
-#define is_avro_float(obj)    (obj && avro_typeof(obj) == AVRO_FLOAT)
-#define is_avro_double(obj)   (obj && avro_typeof(obj) == AVRO_DOUBLE)
-#define is_avro_boolean(obj)  (obj && avro_typeof(obj) == AVRO_BOOLEAN)
-#define is_avro_null(obj)     (obj && avro_typeof(obj) == AVRO_NULL)
-#define is_avro_primitive(obj)(is_avro_string(obj) \
-                             ||is_avro_bytes(obj) \
-                             ||is_avro_int32(obj) \
-                             ||is_avro_int64(obj) \
-                             ||is_avro_float(obj) \
-                             ||is_avro_double(obj) \
-                             ||is_avro_boolean(obj) \
-                             ||is_avro_null(obj))
-#define is_avro_record(obj)   (obj && avro_typeof(obj) == AVRO_RECORD)
-#define is_avro_enum(obj)     (obj && avro_typeof(obj) == AVRO_ENUM)
-#define is_avro_fixed(obj)    (obj && avro_typeof(obj) == AVRO_FIXED)
-#define is_avro_named_type(obj)(is_avro_record(obj) \
-                              ||is_avro_enum(obj) \
-                              ||is_avro_fixed(obj))
-#define is_avro_map(obj)      (obj && avro_typeof(obj) == AVRO_MAP)
-#define is_avro_array(obj)    (obj && avro_typeof(obj) == AVRO_ARRAY)
-#define is_avro_union(obj)    (obj && avro_typeof(obj) == AVRO_UNION)
-#define is_avro_complex_type(obj) (!(is_avro_primitive(obj)))
-#define is_avro_link(obj)     (obj && avro_typeof(obj) == AVRO_LINK)
-
-typedef struct avro_reader_t_ *avro_reader_t;
-typedef struct avro_writer_t_ *avro_writer_t;
-
-/*
- * schema 
- */
-typedef struct avro_obj_t *avro_schema_t;
-
-avro_schema_t avro_schema_string(void);
-avro_schema_t avro_schema_bytes(void);
-avro_schema_t avro_schema_int(void);
-avro_schema_t avro_schema_long(void);
-avro_schema_t avro_schema_float(void);
-avro_schema_t avro_schema_double(void);
-avro_schema_t avro_schema_boolean(void);
-avro_schema_t avro_schema_null(void);
-
-avro_schema_t avro_schema_record(const char *name, const char *space);
-avro_schema_t avro_schema_record_field_get(const avro_schema_t
-					   record, const char *field_name);
-int avro_schema_record_field_append(const avro_schema_t record,
-				    const char *field_name,
-				    const avro_schema_t type);
-
-avro_schema_t avro_schema_enum(const char *name);
-int avro_schema_enum_symbol_append(const avro_schema_t
-				   enump, const char *symbol);
-
-avro_schema_t avro_schema_fixed(const char *name, const int64_t len);
-avro_schema_t avro_schema_map(const avro_schema_t values);
-avro_schema_t avro_schema_array(const avro_schema_t items);
-
-avro_schema_t avro_schema_union(void);
-int avro_schema_union_append(const avro_schema_t
-			     union_schema, const avro_schema_t schema);
-
-avro_schema_t avro_schema_link(avro_schema_t schema);
-
-typedef struct avro_schema_error_t_ *avro_schema_error_t;
-int avro_schema_from_json(const char *jsontext,
-			  const int32_t len,
-			  avro_schema_t * schema, avro_schema_error_t * error);
-int avro_schema_to_json(avro_schema_t schema, avro_writer_t out);
-
-int avro_schema_to_specific(avro_schema_t schema, const char *prefix);
-
-const char *avro_schema_name(const avro_schema_t schema);
-avro_schema_t avro_schema_copy(avro_schema_t schema);
-int avro_schema_equal(avro_schema_t a, avro_schema_t b);
-
-avro_schema_t avro_schema_incref(avro_schema_t schema);
-void avro_schema_decref(avro_schema_t schema);
-
-/*
- * io 
- */
-avro_reader_t avro_reader_file(FILE * fp);
-avro_writer_t avro_writer_file(FILE * fp);
-avro_reader_t avro_reader_memory(const char *buf, int64_t len);
-avro_writer_t avro_writer_memory(const char *buf, int64_t len);
-
-int avro_read(avro_reader_t reader, void *buf, int64_t len);
-int avro_skip(avro_reader_t reader, int64_t len);
-int avro_write(avro_writer_t writer, void *buf, int64_t len);
-
-void avro_writer_reset(avro_writer_t writer);
-int64_t avro_writer_tell(avro_writer_t writer);
-void avro_writer_flush(avro_writer_t writer);
-
-void avro_writer_dump(avro_writer_t writer, FILE * fp);
-void avro_reader_dump(avro_reader_t reader, FILE * fp);
-
-void avro_reader_free(avro_reader_t reader);
-void avro_writer_free(avro_writer_t writer);
-
-/*
- * datum 
- */
-
-/* constructors */
-typedef struct avro_obj_t *avro_datum_t;
-avro_datum_t avro_string(const char *str);
-avro_datum_t avro_wrapstring(const char *str);
-avro_datum_t avro_givestring(const char *str);
-avro_datum_t avro_bytes(const char *buf, int64_t len);
-avro_datum_t avro_wrapbytes(const char *buf, int64_t len);
-avro_datum_t avro_givebytes(const char *buf, int64_t len);
-avro_datum_t avro_int32(int32_t i);
-avro_datum_t avro_int64(int64_t l);
-avro_datum_t avro_float(float f);
-avro_datum_t avro_double(double d);
-avro_datum_t avro_boolean(int8_t i);
-avro_datum_t avro_null(void);
-avro_datum_t avro_record(const char *name, const char *space);
-avro_datum_t avro_enum(const char *name, int i);
-avro_datum_t avro_fixed(const char *name, const char *bytes,
-			const int64_t size);
-avro_datum_t avro_wrapfixed(const char *name, const char *bytes,
-			    const int64_t size);
-avro_datum_t avro_givefixed(const char *name, const char *bytes,
-			    const int64_t size);
-avro_datum_t avro_map(void);
-avro_datum_t avro_array(void);
-avro_datum_t avro_union(int64_t discriminant, const avro_datum_t datum);
-
-/* getters */
-int avro_string_get(avro_datum_t datum, char **p);
-int avro_bytes_get(avro_datum_t datum, char **bytes, int64_t * size);
-int avro_int32_get(avro_datum_t datum, int32_t * i);
-int avro_int64_get(avro_datum_t datum, int64_t * l);
-int avro_float_get(avro_datum_t datum, float *f);
-int avro_double_get(avro_datum_t datum, double *d);
-int avro_boolean_get(avro_datum_t datum, int8_t * i);
-
-int avro_fixed_get(avro_datum_t datum, char **bytes, int64_t * size);
-int avro_record_get(const avro_datum_t record, const char *field_name,
-		    avro_datum_t * value);
-int avro_map_get(const avro_datum_t datum, const char *key,
-		 avro_datum_t * value);
-
-/* setters */
-int avro_string_set(avro_datum_t datum, const char *p);
-int avro_givestring_set(avro_datum_t datum, const char *p);
-int avro_wrapstring_set(avro_datum_t datum, const char *p);
-
-int avro_bytes_set(avro_datum_t datum, const char *bytes, const int64_t size);
-int avro_givebytes_set(avro_datum_t datum, const char *bytes,
-		       const int64_t size);
-int avro_wrapbytes_set(avro_datum_t datum, const char *bytes,
-		       const int64_t size);
-
-int avro_int32_set(avro_datum_t datum, const int32_t i);
-int avro_int64_set(avro_datum_t datum, const int64_t l);
-int avro_float_set(avro_datum_t datum, const float f);
-int avro_double_set(avro_datum_t datum, const double d);
-int avro_boolean_set(avro_datum_t datum, const int8_t i);
-
-int avro_fixed_set(avro_datum_t datum, const char *bytes, const int64_t size);
-int avro_givefixed_set(avro_datum_t datum, const char *bytes,
-		       const int64_t size);
-int avro_wrapfixed_set(avro_datum_t datum, const char *bytes,
-		       const int64_t size);
-
-int avro_record_set(const avro_datum_t record, const char *field_name,
-		    const avro_datum_t value);
-int avro_map_set(const avro_datum_t map, const char *key,
-		 const avro_datum_t value);
-int avro_array_append_datum(const avro_datum_t array_datum,
-			    const avro_datum_t datum);
-
-/* reference counting */
-avro_datum_t avro_datum_incref(avro_datum_t value);
-void avro_datum_decref(avro_datum_t value);
-
-void avro_datum_print(avro_datum_t value, FILE * fp);
-
-int avro_datum_equal(avro_datum_t a, avro_datum_t b);
-
-int avro_schema_match(avro_schema_t writers_schema,
-		      avro_schema_t readers_schema);
-
-int avro_schema_datum_validate(avro_schema_t
-			       expected_schema, avro_datum_t datum);
-
-int avro_read_data(avro_reader_t reader,
-		   avro_schema_t writer_schema,
-		   avro_schema_t reader_schema, avro_datum_t * datum);
-int avro_skip_data(avro_reader_t reader, avro_schema_t writer_schema);
-int avro_write_data(avro_writer_t writer,
-		    avro_schema_t writer_schema, avro_datum_t datum);
-int64_t avro_size_data(avro_writer_t writer,
-		       avro_schema_t writer_schema, avro_datum_t datum);
-
-/* File object container */
-typedef struct avro_file_reader_t_ *avro_file_reader_t;
-typedef struct avro_file_writer_t_ *avro_file_writer_t;
-
-int avro_file_writer_create(const char *path, avro_schema_t schema,
-			    avro_file_writer_t * writer);
-int avro_file_writer_open(const char *path, avro_file_writer_t * writer);
-int avro_file_reader(const char *path, avro_file_reader_t * reader);
-
-int avro_file_writer_append(avro_file_writer_t writer, avro_datum_t datum);
-int avro_file_writer_sync(avro_file_writer_t writer);
-int avro_file_writer_flush(avro_file_writer_t writer);
-int avro_file_writer_close(avro_file_writer_t writer);
-
-int avro_file_reader_read(avro_file_reader_t reader,
-			  avro_schema_t readers_schema, avro_datum_t * datum);
-int avro_file_reader_close(avro_file_reader_t reader);
-
-CLOSE_EXTERN
-#endif
diff --git a/branch-1.3/lang/c/src/avro_private.h b/branch-1.3/lang/c/src/avro_private.h
deleted file mode 100644
index 0d3d124..0000000
--- a/branch-1.3/lang/c/src/avro_private.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-#ifndef AVRO_PRIVATE_H
-#define AVRO_PRIVATE_H
-
-#include "avro.h"
-
-#ifdef HAVE_CONFIG_H
-/* This is only true for now in the autotools build */
-#include "config.h"
-#endif
-
-#define check(rval, call) { rval = call; if(rval) return rval; }
-
-#define AVRO_UNUSED(var) (void)var;
-
-#define container_of(ptr_, type_, member_)  \
-    ((type_ *)((char *)ptr_ - (size_t)&((type_ *)0)->member_))
-
-#endif
diff --git a/branch-1.3/lang/c/src/datafile.c b/branch-1.3/lang/c/src/datafile.c
deleted file mode 100644
index 920a10f..0000000
--- a/branch-1.3/lang/c/src/datafile.c
+++ /dev/null
@@ -1,362 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include "encoding.h"
-#include <stdio.h>
-#include <stdlib.h>
-#include <errno.h>
-#include <fcntl.h>
-#include <time.h>
-#include <string.h>
-
-struct avro_file_reader_t_ {
-	avro_schema_t writers_schema;
-	avro_reader_t reader;
-	char sync[16];
-	int64_t blocks_read;
-	int64_t blocks_total;
-	int64_t current_blocklen;
-};
-
-struct avro_file_writer_t_ {
-	avro_schema_t writers_schema;
-	avro_writer_t writer;
-	char sync[16];
-	int block_count;
-	avro_writer_t datum_writer;
-	char datum_buffer[16 * 1024];
-};
-
-/* TODO: should we just read /dev/random? */
-static void generate_sync(avro_file_writer_t w)
-{
-	unsigned int i;
-	srand(time(NULL));
-	for (i = 0; i < sizeof(w->sync); i++) {
-		w->sync[i] = ((double)rand() / (RAND_MAX + 1.0)) * 255;
-	}
-}
-
-static int write_sync(avro_file_writer_t w)
-{
-	return avro_write(w->writer, w->sync, sizeof(w->sync));
-}
-
-static int write_header(avro_file_writer_t w)
-{
-	int rval;
-	uint8_t version = 1;
-	/* TODO: remove this static buffer */
-	avro_writer_t schema_writer;
-	char schema_buf[64 * 1024];
-	const avro_encoding_t *enc = &avro_binary_encoding;
-
-	/* Generate random sync */
-	generate_sync(w);
-
-	check(rval, avro_write(w->writer, "Obj", 3));
-	check(rval, avro_write(w->writer, &version, 1));
-
-	check(rval, enc->write_long(w->writer, 3));
-	check(rval, enc->write_string(w->writer, "avro.sync"));
-	check(rval, enc->write_bytes(w->writer, w->sync, sizeof(w->sync)));
-	check(rval, enc->write_string(w->writer, "avro.codec"));
-	check(rval, enc->write_bytes(w->writer, "null", 4));
-	check(rval, enc->write_string(w->writer, "avro.schema"));
-	schema_writer = avro_writer_memory(schema_buf, sizeof(schema_buf));
-	rval = avro_schema_to_json(w->writers_schema, schema_writer);
-	if (rval) {
-		avro_writer_free(schema_writer);
-		return rval;
-	}
-	check(rval,
-	      enc->write_bytes(w->writer, schema_buf,
-			       avro_writer_tell(schema_writer)));
-	check(rval, enc->write_long(w->writer, 0));
-	return write_sync(w);
-}
-
-static int
-file_writer_init_fp(const char *path, const char *mode, avro_file_writer_t w)
-{
-	FILE *fp = fopen(path, mode);
-	if (!fp) {
-		return errno;
-	}
-	w->writer = avro_writer_file(fp);
-	if (!w->writer) {
-		return ENOMEM;
-	}
-	return 0;
-}
-
-static int
-file_writer_create(const char *path, avro_schema_t schema, avro_file_writer_t w)
-{
-	int rval = file_writer_init_fp(path, "wx", w);
-	if (rval) {
-		check(rval, file_writer_init_fp(path, "w", w));
-	}
-
-	w->datum_writer =
-	    avro_writer_memory(w->datum_buffer, sizeof(w->datum_buffer));
-	if (!w->datum_writer) {
-		avro_writer_free(w->writer);
-		return ENOMEM;
-	}
-
-	w->writers_schema = schema;
-	return write_header(w);
-}
-
-int
-avro_file_writer_create(const char *path, avro_schema_t schema,
-			avro_file_writer_t * writer)
-{
-	avro_file_writer_t w;
-	int rval;
-	if (!path || !is_avro_schema(schema) || !writer) {
-		return EINVAL;
-	}
-	w = malloc(sizeof(struct avro_file_writer_t_));
-	if (!w) {
-		return ENOMEM;
-	}
-	rval = file_writer_create(path, schema, w);
-	if (rval) {
-		free(w);
-		return rval;
-	}
-	*writer = w;
-	return 0;
-}
-
-static int file_read_header(avro_reader_t reader,
-			    avro_schema_t * writers_schema, char *sync,
-			    int synclen)
-{
-	int rval;
-	avro_schema_t meta_schema;
-	avro_schema_t meta_values_schema;
-	avro_datum_t meta;
-	char magic[4];
-	avro_datum_t schema_bytes;
-	char *p;
-	int64_t len;
-	avro_schema_error_t schema_error;
-
-	check(rval, avro_read(reader, magic, sizeof(magic)));
-	if (magic[0] != 'O' || magic[1] != 'b' || magic[2] != 'j'
-	    || magic[3] != 1) {
-		return EILSEQ;
-	}
-	meta_values_schema = avro_schema_bytes();
-	meta_schema = avro_schema_map(meta_values_schema);
-	rval = avro_read_data(reader, meta_schema, NULL, &meta);
-	if (rval) {
-		return EILSEQ;
-	}
-	check(rval, avro_map_get(meta, "avro.schema", &schema_bytes));
-	avro_bytes_get(schema_bytes, &p, &len);
-	check(rval,
-	      avro_schema_from_json(p, len, writers_schema, &schema_error));
-	avro_schema_decref(meta);
-	return avro_read(reader, sync, synclen);
-}
-
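The magic test in `file_read_header` can equivalently be written as a `memcmp` against the four bytes the writer emits ("Obj" plus the version byte 1). A minimal sketch, with `check_magic` as an illustrative helper name rather than part of the library:

```c
#include <string.h>

/* Sketch of the container-file magic check performed above:
 * the first four bytes must be 'O','b','j' followed by version 1. */
static int check_magic(const char magic[4])
{
	static const char expected[4] = { 'O', 'b', 'j', 1 };
	return memcmp(magic, expected, sizeof(expected)) == 0;
}
```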
-static int file_writer_open(const char *path, avro_file_writer_t w)
-{
-	int rval;
-	FILE *fp;
-	avro_reader_t reader;
-
-	fp = fopen(path, "r");
-	if (!fp) {
-		return errno;
-	}
-	reader = avro_reader_file(fp);
-	if (!reader) {
-		return ENOMEM;
-	}
-	rval =
-	    file_read_header(reader, &w->writers_schema, w->sync,
-			     sizeof(w->sync));
-	avro_reader_free(reader);
-	if (rval) {
-		return rval;
-	}
-	/* Position to end of file and get ready to write */
-	check(rval, file_writer_init_fp(path, "a", w));
-	w->datum_writer =
-	    avro_writer_memory(w->datum_buffer, sizeof(w->datum_buffer));
-	if (!w->datum_writer) {
-		avro_writer_free(w->writer);
-		return ENOMEM;
-	}
-	w->block_count = 0;
-	return 0;
-}
-
-int avro_file_writer_open(const char *path, avro_file_writer_t * writer)
-{
-	avro_file_writer_t w;
-	int rval;
-	if (!path || !writer) {
-		return EINVAL;
-	}
-	w = malloc(sizeof(struct avro_file_writer_t_));
-	if (!w) {
-		return ENOMEM;
-	}
-	rval = file_writer_open(path, w);
-	if (rval) {
-		free(w);
-		return rval;
-	}
-
-	*writer = w;
-	return 0;
-}
-
-static int file_read_block_count(avro_file_reader_t r)
-{
-	int rval;
-	const avro_encoding_t *enc = &avro_binary_encoding;
-	check(rval, enc->read_long(r->reader, &r->blocks_total));
-	check(rval, enc->read_long(r->reader, &r->current_blocklen));
-	r->blocks_read = 0;
-	return 0;
-}
-
-int avro_file_reader(const char *path, avro_file_reader_t * reader)
-{
-	int rval;
-	FILE *fp;
-	avro_file_reader_t r = malloc(sizeof(struct avro_file_reader_t_));
-	if (!r) {
-		return ENOMEM;
-	}
-
-	fp = fopen(path, "r");
-	if (!fp) {
-		rval = errno;
-		free(r);
-		return rval;
-	}
-	r->reader = avro_reader_file(fp);
-	if (!r->reader) {
-		fclose(fp);
-		free(r);
-		return ENOMEM;
-	}
-	rval = file_read_header(r->reader, &r->writers_schema, r->sync,
-				sizeof(r->sync));
-	if (rval == 0) {
-		rval = file_read_block_count(r);
-	}
-	if (rval == 0) {
-		*reader = r;
-	} else {
-		avro_reader_free(r->reader);
-		free(r);
-	}
-	return rval;
-}
-
-static int file_write_block(avro_file_writer_t w)
-{
-	const avro_encoding_t *enc = &avro_binary_encoding;
-	int rval;
-
-	if (w->block_count) {
-		int64_t blocklen = avro_writer_tell(w->datum_writer);
-		/* Write the block count */
-		check(rval, enc->write_long(w->writer, w->block_count));
-		/* Write the block length */
-		check(rval, enc->write_long(w->writer, blocklen));
-		/* Write the block */
-		check(rval, avro_write(w->writer, w->datum_buffer, blocklen));
-		/* Write the sync marker */
-		check(rval, write_sync(w));
-		/* Reset the datum writer */
-		avro_writer_reset(w->datum_writer);
-		w->block_count = 0;
-	}
-	return 0;
-}
-
-int avro_file_writer_append(avro_file_writer_t w, avro_datum_t datum)
-{
-	int rval;
-	if (!w || !datum) {
-		return EINVAL;
-	}
-	rval = avro_write_data(w->datum_writer, w->writers_schema, datum);
-	if (rval) {
-		check(rval, file_write_block(w));
-		rval =
-		    avro_write_data(w->datum_writer, w->writers_schema, datum);
-		if (rval) {
-			/* TODO: if the encoded datum is larger than our
-			   buffer, just write a single large datum */
-			return rval;
-		}
-	}
-	w->block_count++;
-	return 0;
-}
-
-int avro_file_writer_sync(avro_file_writer_t w)
-{
-	return file_write_block(w);
-}
-
-int avro_file_writer_flush(avro_file_writer_t w)
-{
-	int rval;
-	check(rval, file_write_block(w));
-	avro_writer_flush(w->writer);
-	return 0;
-}
-
-int avro_file_writer_close(avro_file_writer_t w)
-{
-	int rval;
-	check(rval, avro_file_writer_flush(w));
-	avro_writer_free(w->writer);
-	free(w);
-	return 0;
-}
-
-int avro_file_reader_read(avro_file_reader_t r, avro_schema_t readers_schema,
-			  avro_datum_t * datum)
-{
-	int rval;
-	char sync[16];
-
-	if (!r || !datum) {
-		return EINVAL;
-	}
-
-	check(rval,
-	      avro_read_data(r->reader, r->writers_schema, readers_schema,
-			     datum));
-	r->blocks_read++;
-
-	if (r->blocks_read == r->blocks_total) {
-		check(rval, avro_read(r->reader, sync, sizeof(sync)));
-		if (memcmp(r->sync, sync, sizeof(r->sync)) != 0) {
-			/* wrong sync bytes */
-			return EILSEQ;
-		}
-		/* For now, ignore errors (e.g. EOF) */
-		file_read_block_count(r);
-	}
-	return 0;
-}
-
-int avro_file_reader_close(avro_file_reader_t reader)
-{
-	avro_reader_free(reader->reader);
-	free(reader);
-	return 0;
-}
diff --git a/branch-1.3/lang/c/src/datum.c b/branch-1.3/lang/c/src/datum.c
deleted file mode 100644
index 2e62ecc..0000000
--- a/branch-1.3/lang/c/src/datum.c
+++ /dev/null
@@ -1,860 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <stdlib.h>
-#include <string.h>
-#include <errno.h>
-#include "datum.h"
-#include "encoding.h"
-
-#define DEFAULT_TABLE_SIZE 32
-
-static void avro_datum_init(avro_datum_t datum, avro_type_t type)
-{
-	datum->type = type;
-	datum->class_type = AVRO_DATUM;
-	datum->refcount = 1;
-}
-
-static avro_datum_t avro_string_private(char *str,
-					void (*string_free) (void *ptr))
-{
-	struct avro_string_datum_t *datum =
-	    malloc(sizeof(struct avro_string_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->s = str;
-	datum->free = string_free;
-
-	avro_datum_init(&datum->obj, AVRO_STRING);
-	return &datum->obj;
-}
-
-avro_datum_t avro_string(const char *str)
-{
-	char *p = strdup(str);
-	if (!p) {
-		return NULL;
-	}
-	return avro_string_private(p, free);
-}
-
-avro_datum_t avro_givestring(const char *str)
-{
-	return avro_string_private((char *)str, free);
-}
-
-avro_datum_t avro_wrapstring(const char *str)
-{
-	return avro_string_private((char *)str, NULL);
-}
-
-int avro_string_get(avro_datum_t datum, char **p)
-{
-	if (!(is_avro_datum(datum) && is_avro_string(datum)) || !p) {
-		return EINVAL;
-	}
-	*p = avro_datum_to_string(datum)->s;
-	return 0;
-}
-
-static int avro_string_set_private(avro_datum_t datum, const char *p,
-				   void (*string_free) (void *ptr))
-{
-	struct avro_string_datum_t *string;
-	if (!(is_avro_datum(datum) && is_avro_string(datum)) || !p) {
-		return EINVAL;
-	}
-	string = avro_datum_to_string(datum);
-	if (string->free) {
-		string->free(string->s);
-	}
-	string->free = string_free;
-	string->s = (char *)p;
-	return 0;
-}
-
-int avro_string_set(avro_datum_t datum, const char *p)
-{
-	char *string_copy = strdup(p);
-	int rval;
-	if (!string_copy) {
-		return ENOMEM;
-	}
-	rval = avro_string_set_private(datum, string_copy, free);
-	if (rval) {
-		free(string_copy);
-	}
-	return rval;
-}
-
-int avro_givestring_set(avro_datum_t datum, const char *p)
-{
-	return avro_string_set_private(datum, p, free);
-}
-
-int avro_wrapstring_set(avro_datum_t datum, const char *p)
-{
-	return avro_string_set_private(datum, p, NULL);
-}
-
-static avro_datum_t avro_bytes_private(char *bytes, int64_t size,
-				       void (*bytes_free) (void *ptr))
-{
-	struct avro_bytes_datum_t *datum;
-	datum = malloc(sizeof(struct avro_bytes_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->bytes = bytes;
-	datum->size = size;
-	datum->free = bytes_free;
-
-	avro_datum_init(&datum->obj, AVRO_BYTES);
-	return &datum->obj;
-}
-
-avro_datum_t avro_bytes(const char *bytes, int64_t size)
-{
-	char *bytes_copy = malloc(size);
-	if (!bytes_copy) {
-		return NULL;
-	}
-	memcpy(bytes_copy, bytes, size);
-	return avro_bytes_private(bytes_copy, size, free);
-}
-
-avro_datum_t avro_givebytes(const char *bytes, int64_t size)
-{
-	return avro_bytes_private((char *)bytes, size, free);
-}
-
-avro_datum_t avro_wrapbytes(const char *bytes, int64_t size)
-{
-	return avro_bytes_private((char *)bytes, size, NULL);
-}
-
-static int avro_bytes_set_private(avro_datum_t datum, const char *bytes,
-				  const int64_t size,
-				  void (*bytes_free) (void *ptr))
-{
-	struct avro_bytes_datum_t *b;
-
-	AVRO_UNUSED(size);
-
-	if (!(is_avro_datum(datum) && is_avro_bytes(datum))) {
-		return EINVAL;
-	}
-
-	b = avro_datum_to_bytes(datum);
-	if (b->free) {
-		b->free(b->bytes);
-	}
-
-	b->free = bytes_free;
-	b->bytes = (char *)bytes;
-	return 0;
-}
-
-int avro_bytes_set(avro_datum_t datum, const char *bytes, const int64_t size)
-{
-	int rval;
-	char *bytes_copy = malloc(size);
-	if (!bytes_copy) {
-		return ENOMEM;
-	}
-	memcpy(bytes_copy, bytes, size);
-	rval = avro_bytes_set_private(datum, bytes_copy, size, free);
-	if (rval) {
-		free(bytes_copy);
-	}
-	return rval;
-}
-
-int avro_givebytes_set(avro_datum_t datum, const char *bytes,
-		       const int64_t size)
-{
-	return avro_bytes_set_private(datum, bytes, size, free);
-}
-
-int avro_wrapbytes_set(avro_datum_t datum, const char *bytes,
-		       const int64_t size)
-{
-	return avro_bytes_set_private(datum, bytes, size, NULL);
-}
-
-int avro_bytes_get(avro_datum_t datum, char **bytes, int64_t * size)
-{
-	if (!(is_avro_datum(datum) && is_avro_bytes(datum)) || !bytes || !size) {
-		return EINVAL;
-	}
-	*bytes = avro_datum_to_bytes(datum)->bytes;
-	*size = avro_datum_to_bytes(datum)->size;
-	return 0;
-}
-
-avro_datum_t avro_int32(int32_t i)
-{
-	struct avro_int32_datum_t *datum =
-	    malloc(sizeof(struct avro_int32_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->i32 = i;
-
-	avro_datum_init(&datum->obj, AVRO_INT32);
-	return &datum->obj;
-}
-
-int avro_int32_get(avro_datum_t datum, int32_t * i)
-{
-	if (!(is_avro_datum(datum) && is_avro_int32(datum)) || !i) {
-		return EINVAL;
-	}
-	*i = avro_datum_to_int32(datum)->i32;
-	return 0;
-}
-
-int avro_int32_set(avro_datum_t datum, const int32_t i)
-{
-	struct avro_int32_datum_t *intp;
-	if (!(is_avro_datum(datum) && is_avro_int32(datum))) {
-		return EINVAL;
-	}
-	intp = avro_datum_to_int32(datum);
-	intp->i32 = i;
-	return 0;
-}
-
-avro_datum_t avro_int64(int64_t l)
-{
-	struct avro_int64_datum_t *datum =
-	    malloc(sizeof(struct avro_int64_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->i64 = l;
-
-	avro_datum_init(&datum->obj, AVRO_INT64);
-	return &datum->obj;
-}
-
-int avro_int64_get(avro_datum_t datum, int64_t * l)
-{
-	if (!(is_avro_datum(datum) && is_avro_int64(datum)) || !l) {
-		return EINVAL;
-	}
-	*l = avro_datum_to_int64(datum)->i64;
-	return 0;
-}
-
-int avro_int64_set(avro_datum_t datum, const int64_t l)
-{
-	struct avro_int64_datum_t *longp;
-	if (!(is_avro_datum(datum) && is_avro_int64(datum))) {
-		return EINVAL;
-	}
-	longp = avro_datum_to_int64(datum);
-	longp->i64 = l;
-	return 0;
-}
-
-avro_datum_t avro_float(float f)
-{
-	struct avro_float_datum_t *datum =
-	    malloc(sizeof(struct avro_float_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->f = f;
-
-	avro_datum_init(&datum->obj, AVRO_FLOAT);
-	return &datum->obj;
-}
-
-int avro_float_set(avro_datum_t datum, const float f)
-{
-	struct avro_float_datum_t *floatp;
-	if (!(is_avro_datum(datum) && is_avro_float(datum))) {
-		return EINVAL;
-	}
-	floatp = avro_datum_to_float(datum);
-	floatp->f = f;
-	return 0;
-}
-
-int avro_float_get(avro_datum_t datum, float *f)
-{
-	if (!(is_avro_datum(datum) && is_avro_float(datum)) || !f) {
-		return EINVAL;
-	}
-	*f = avro_datum_to_float(datum)->f;
-	return 0;
-}
-
-avro_datum_t avro_double(double d)
-{
-	struct avro_double_datum_t *datum =
-	    malloc(sizeof(struct avro_double_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->d = d;
-
-	avro_datum_init(&datum->obj, AVRO_DOUBLE);
-	return &datum->obj;
-}
-
-int avro_double_set(avro_datum_t datum, const double d)
-{
-	struct avro_double_datum_t *doublep;
-	if (!(is_avro_datum(datum) && is_avro_double(datum))) {
-		return EINVAL;
-	}
-	doublep = avro_datum_to_double(datum);
-	doublep->d = d;
-	return 0;
-}
-
-int avro_double_get(avro_datum_t datum, double *d)
-{
-	if (!(is_avro_datum(datum) && is_avro_double(datum)) || !d) {
-		return EINVAL;
-	}
-	*d = avro_datum_to_double(datum)->d;
-	return 0;
-}
-
-avro_datum_t avro_boolean(int8_t i)
-{
-	struct avro_boolean_datum_t *datum =
-	    malloc(sizeof(struct avro_boolean_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->i = i;
-	avro_datum_init(&datum->obj, AVRO_BOOLEAN);
-	return &datum->obj;
-}
-
-int avro_boolean_set(avro_datum_t datum, const int8_t i)
-{
-	struct avro_boolean_datum_t *booleanp;
-	if (!(is_avro_datum(datum) && is_avro_boolean(datum))) {
-		return EINVAL;
-	}
-	booleanp = avro_datum_to_boolean(datum);
-	booleanp->i = i;
-	return 0;
-}
-
-int avro_boolean_get(avro_datum_t datum, int8_t * i)
-{
-	if (!(is_avro_datum(datum) && is_avro_boolean(datum)) || !i) {
-		return EINVAL;
-	}
-	*i = avro_datum_to_boolean(datum)->i;
-	return 0;
-}
-
-avro_datum_t avro_null(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_NULL,
-		.class_type = AVRO_DATUM,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_datum_t avro_union(int64_t discriminant, avro_datum_t value)
-{
-	struct avro_union_datum_t *datum =
-	    malloc(sizeof(struct avro_union_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->discriminant = discriminant;
-	datum->value = avro_datum_incref(value);
-
-	avro_datum_init(&datum->obj, AVRO_UNION);
-	return &datum->obj;
-}
-
-avro_datum_t avro_record(const char *name, const char *space)
-{
-	struct avro_record_datum_t *datum =
-	    malloc(sizeof(struct avro_record_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->name = strdup(name);
-	if (!datum->name) {
-		free(datum);
-		return NULL;
-	}
-	datum->space = space ? strdup(space) : NULL;
-	if (space && !datum->space) {
-		free((void *)datum->name);
-		free((void *)datum);
-		return NULL;
-	}
-	datum->field_order = st_init_numtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!datum->field_order) {
-		if (space) {
-			free((void *)datum->space);
-		}
-		free((char *)datum->name);
-		free(datum);
-		return NULL;
-	}
-	datum->fields_byname = st_init_strtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!datum->fields_byname) {
-		st_free_table(datum->field_order);
-		if (space) {
-			free((void *)datum->space);
-		}
-		free((char *)datum->name);
-		free(datum);
-		return NULL;
-	}
-
-	avro_datum_init(&datum->obj, AVRO_RECORD);
-	return &datum->obj;
-}
-
-int
-avro_record_get(const avro_datum_t datum, const char *field_name,
-		avro_datum_t * field)
-{
-	union {
-		avro_datum_t field;
-		st_data_t data;
-	} val;
-	if (is_avro_datum(datum) && is_avro_record(datum) && field_name) {
-		if (st_lookup
-		    (avro_datum_to_record(datum)->fields_byname,
-		     (st_data_t) field_name, &(val.data))) {
-			*field = val.field;
-			return 0;
-		}
-	}
-	return EINVAL;
-}
-
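`avro_record_get` above (like `avro_map_get` later in this file) routes pointers through `st_data_t` via a union rather than a cast. A self-contained sketch of that round-trip, assuming `st_data_t` is a pointer-sized unsigned integer as in the bundled `st.h` (`st_data_sketch_t` and `roundtrip` are illustrative names):

```c
#include <stdint.h>

/* Sketch of the union punning used above to carry an avro_datum_t
 * through st's integer-sized table slots without a pointer cast. */
typedef uintptr_t st_data_sketch_t;

static void *roundtrip(void *p)
{
	union {
		void *ptr;
		st_data_sketch_t data;
	} val;
	val.ptr = p;                       /* what st_insert stores */
	st_data_sketch_t slot = val.data;  /* the table's raw slot  */
	val.data = slot;                   /* what st_lookup yields */
	return val.ptr;
}
```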
-int
-avro_record_set(const avro_datum_t datum, const char *field_name,
-		const avro_datum_t field_value)
-{
-	char *key = (char *)field_name;
-	avro_datum_t old_field;
-
-	if (is_avro_datum(datum) && is_avro_record(datum) && field_name) {
-		if (avro_record_get(datum, field_name, &old_field) == 0) {
-			/* Overriding old value */
-			avro_datum_decref(old_field);
-		} else {
-			/* Inserting new value */
-			struct avro_record_datum_t *record =
-			    avro_datum_to_record(datum);
-			key = strdup(field_name);
-			if (!key) {
-				return ENOMEM;
-			}
-			st_insert(record->field_order,
-				  record->field_order->num_entries,
-				  (st_data_t) key);
-		}
-		avro_datum_incref(field_value);
-		st_insert(avro_datum_to_record(datum)->fields_byname,
-			  (st_data_t) key, (st_data_t) field_value);
-		return 0;
-	}
-	return EINVAL;
-}
-
-avro_datum_t avro_enum(const char *name, int i)
-{
-	struct avro_enum_datum_t *datum =
-	    malloc(sizeof(struct avro_enum_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->name = strdup(name);
-	if (!datum->name) {
-		free(datum);
-		return NULL;
-	}
-	datum->value = i;
-
-	avro_datum_init(&datum->obj, AVRO_ENUM);
-	return &datum->obj;
-}
-
-static avro_datum_t avro_fixed_private(const char *name, const char *bytes,
-				       const int64_t size,
-				       void (*fixed_free) (void *ptr))
-{
-	struct avro_fixed_datum_t *datum =
-	    malloc(sizeof(struct avro_fixed_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->name = strdup(name);
-	if (!datum->name) {
-		free(datum);
-		return NULL;
-	}
-	datum->size = size;
-	datum->bytes = (char *)bytes;
-	datum->free = fixed_free;
-
-	avro_datum_init(&datum->obj, AVRO_FIXED);
-	return &datum->obj;
-}
-
-avro_datum_t avro_fixed(const char *name, const char *bytes, const int64_t size)
-{
-	char *bytes_copy = malloc(size);
-	if (!bytes_copy) {
-		return NULL;
-	}
-	memcpy(bytes_copy, bytes, size);
-	return avro_fixed_private(name, bytes_copy, size, free);
-}
-
-avro_datum_t avro_wrapfixed(const char *name, const char *bytes,
-			    const int64_t size)
-{
-	return avro_fixed_private(name, bytes, size, NULL);
-}
-
-avro_datum_t avro_givefixed(const char *name, const char *bytes,
-			    const int64_t size)
-{
-	return avro_fixed_private(name, bytes, size, free);
-}
-
-static int avro_fixed_set_private(avro_datum_t datum, const char *bytes,
-				  const int64_t size,
-				  void (*fixed_free) (void *ptr))
-{
-	struct avro_fixed_datum_t *fixed;
-
-	AVRO_UNUSED(size);
-
-	if (!(is_avro_datum(datum) && is_avro_fixed(datum))) {
-		return EINVAL;
-	}
-
-	fixed = avro_datum_to_fixed(datum);
-	if (fixed->free) {
-		fixed->free(fixed->bytes);
-	}
-
-	fixed->free = fixed_free;
-	fixed->bytes = (char *)bytes;
-	return 0;
-}
-
-int avro_fixed_set(avro_datum_t datum, const char *bytes, const int64_t size)
-{
-	int rval;
-	char *bytes_copy = malloc(size);
-	if (!bytes_copy) {
-		return ENOMEM;
-	}
-	memcpy(bytes_copy, bytes, size);
-	rval = avro_fixed_set_private(datum, bytes_copy, size, free);
-	if (rval) {
-		free(bytes_copy);
-	}
-	return rval;
-}
-
-int avro_givefixed_set(avro_datum_t datum, const char *bytes,
-		       const int64_t size)
-{
-	return avro_fixed_set_private(datum, bytes, size, free);
-}
-
-int avro_wrapfixed_set(avro_datum_t datum, const char *bytes,
-		       const int64_t size)
-{
-	return avro_fixed_set_private(datum, bytes, size, NULL);
-}
-
-int avro_fixed_get(avro_datum_t datum, char **bytes, int64_t * size)
-{
-	if (!(is_avro_datum(datum) && is_avro_fixed(datum)) || !bytes || !size) {
-		return EINVAL;
-	}
-	*bytes = avro_datum_to_fixed(datum)->bytes;
-	*size = avro_datum_to_fixed(datum)->size;
-	return 0;
-}
-
-avro_datum_t avro_map(void)
-{
-	struct avro_map_datum_t *datum =
-	    malloc(sizeof(struct avro_map_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->map = st_init_strtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!datum->map) {
-		free(datum);
-		return NULL;
-	}
-
-	avro_datum_init(&datum->obj, AVRO_MAP);
-	return &datum->obj;
-}
-
-int
-avro_map_get(const avro_datum_t datum, const char *key, avro_datum_t * value)
-{
-	struct avro_map_datum_t *map;
-	union {
-		avro_datum_t datum;
-		st_data_t data;
-	} val;
-
-	if (!(is_avro_datum(datum) && is_avro_map(datum) && key && value)) {
-		return EINVAL;
-	}
-
-	map = avro_datum_to_map(datum);
-	if (st_lookup(map->map, (st_data_t) key, &(val.data))) {
-		*value = val.datum;
-		return 0;
-	}
-	return EINVAL;
-}
-
-int
-avro_map_set(const avro_datum_t datum, const char *key,
-	     const avro_datum_t value)
-{
-	char *save_key = (char *)key;
-	avro_datum_t old_datum;
-
-	if (!is_avro_datum(datum) || !is_avro_map(datum) || !key
-	    || !is_avro_datum(value)) {
-		return EINVAL;
-	}
-
-	if (avro_map_get(datum, key, &old_datum) == 0) {
-		/* Overwriting an old value */
-		avro_datum_decref(old_datum);
-	} else {
-		/* Inserting a new value */
-		save_key = strdup(key);
-		if (!save_key) {
-			return ENOMEM;
-		}
-	}
-	avro_datum_incref(value);
-	st_insert(avro_datum_to_map(datum)->map, (st_data_t) save_key,
-		  (st_data_t) value);
-	return 0;
-}
-
-avro_datum_t avro_array(void)
-{
-	struct avro_array_datum_t *datum =
-	    malloc(sizeof(struct avro_array_datum_t));
-	if (!datum) {
-		return NULL;
-	}
-	datum->els = st_init_numtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!datum->els) {
-		free(datum);
-		return NULL;
-	}
-
-	avro_datum_init(&datum->obj, AVRO_ARRAY);
-	return &datum->obj;
-}
-
-int
-avro_array_append_datum(const avro_datum_t array_datum,
-			const avro_datum_t datum)
-{
-	struct avro_array_datum_t *array;
-	if (!is_avro_datum(array_datum) || !is_avro_array(array_datum)
-	    || !is_avro_datum(datum)) {
-		return EINVAL;
-	}
-	array = avro_datum_to_array(array_datum);
-	st_insert(array->els, array->els->num_entries,
-		  (st_data_t) avro_datum_incref(datum));
-	return 0;
-}
-
-static int char_datum_free_foreach(char *key, avro_datum_t datum, void *arg)
-{
-	AVRO_UNUSED(arg);
-
-	avro_datum_decref(datum);
-	free(key);
-	return ST_DELETE;
-}
-
-static int array_free_foreach(int i, avro_datum_t datum, void *arg)
-{
-	AVRO_UNUSED(i);
-	AVRO_UNUSED(arg);
-
-	avro_datum_decref(datum);
-	return ST_DELETE;
-}
-
-static void avro_datum_free(avro_datum_t datum)
-{
-	if (is_avro_datum(datum)) {
-		switch (avro_typeof(datum)) {
-		case AVRO_STRING:{
-				struct avro_string_datum_t *string;
-				string = avro_datum_to_string(datum);
-				if (string->free) {
-					string->free(string->s);
-				}
-				free(string);
-			}
-			break;
-		case AVRO_BYTES:{
-				struct avro_bytes_datum_t *bytes;
-				bytes = avro_datum_to_bytes(datum);
-				if (bytes->free) {
-					bytes->free(bytes->bytes);
-				}
-				free(bytes);
-			}
-			break;
-		case AVRO_INT32:{
-				struct avro_int32_datum_t *i;
-				i = avro_datum_to_int32(datum);
-				free(i);
-			}
-			break;
-		case AVRO_INT64:{
-				struct avro_int64_datum_t *l;
-				l = avro_datum_to_int64(datum);
-				free(l);
-			}
-			break;
-		case AVRO_FLOAT:{
-				struct avro_float_datum_t *f;
-				f = avro_datum_to_float(datum);
-				free(f);
-			}
-			break;
-		case AVRO_DOUBLE:{
-				struct avro_double_datum_t *d;
-				d = avro_datum_to_double(datum);
-				free(d);
-			}
-			break;
-		case AVRO_BOOLEAN:{
-				struct avro_boolean_datum_t *b;
-				b = avro_datum_to_boolean(datum);
-				free(b);
-			}
-			break;
-		case AVRO_NULL:
-			/* Nothing allocated */
-			break;
-
-		case AVRO_RECORD:{
-				struct avro_record_datum_t *record;
-				record = avro_datum_to_record(datum);
-				free((void *)record->name);
-				if (record->space) {
-					free((void *)record->space);
-				}
-				st_foreach(record->fields_byname,
-					   char_datum_free_foreach, 0);
-				st_free_table(record->field_order);
-				st_free_table(record->fields_byname);
-				free(record);
-			}
-			break;
-		case AVRO_ENUM:{
-				struct avro_enum_datum_t *enump;
-				enump = avro_datum_to_enum(datum);
-				free((void *)enump->name);
-				free(enump);
-			}
-			break;
-		case AVRO_FIXED:{
-				struct avro_fixed_datum_t *fixed;
-				fixed = avro_datum_to_fixed(datum);
-				free((void *)fixed->name);
-				if (fixed->free) {
-					fixed->free((void *)fixed->bytes);
-				}
-				free(fixed);
-			}
-			break;
-		case AVRO_MAP:{
-				struct avro_map_datum_t *map;
-				map = avro_datum_to_map(datum);
-				st_foreach(map->map, char_datum_free_foreach,
-					   0);
-				st_free_table(map->map);
-				free(map);
-			}
-			break;
-		case AVRO_ARRAY:{
-				struct avro_array_datum_t *array;
-				array = avro_datum_to_array(datum);
-				st_foreach(array->els, array_free_foreach, 0);
-				st_free_table(array->els);
-				free(array);
-			}
-			break;
-		case AVRO_UNION:{
-				struct avro_union_datum_t *unionp;
-				unionp = avro_datum_to_union(datum);
-				avro_datum_decref(unionp->value);
-				free(unionp);
-			}
-			break;
-		case AVRO_LINK:{
-				/* TODO */
-			}
-			break;
-		}
-	}
-}
-
-avro_datum_t avro_datum_incref(avro_datum_t datum)
-{
-	if (datum && datum->refcount != (unsigned int)-1) {
-		++datum->refcount;
-	}
-	return datum;
-}
-
-void avro_datum_decref(avro_datum_t datum)
-{
-	if (datum && datum->refcount != (unsigned int)-1
-	    && --datum->refcount == 0) {
-		avro_datum_free(datum);
-	}
-}
-
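The `(unsigned int)-1` guard in `avro_datum_incref`/`avro_datum_decref` above marks immortal singletons, such as the static datum returned by `avro_null`, that must never be mutated or freed. A minimal self-contained sketch of the same convention (illustrative types and names, not the library's):

```c
#include <stdlib.h>

struct refobj {
	unsigned refcount;	/* (unsigned)-1 marks an immortal object */
};

static struct refobj *refobj_incref(struct refobj *o)
{
	if (o && o->refcount != (unsigned)-1)
		++o->refcount;
	return o;
}

/* Returns 1 if the object was freed, 0 otherwise. */
static int refobj_decref(struct refobj *o)
{
	if (o && o->refcount != (unsigned)-1 && --o->refcount == 0) {
		free(o);
		return 1;
	}
	return 0;
}
```

Heap objects start at a count of 1 and die when it reaches 0; statically allocated sentinels pass through both functions untouched.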
-void avro_datum_print(avro_datum_t value, FILE * fp)
-{
-	AVRO_UNUSED(value);
-	AVRO_UNUSED(fp);
-}
diff --git a/branch-1.3/lang/c/src/datum.h b/branch-1.3/lang/c/src/datum.h
deleted file mode 100644
index 581258f..0000000
--- a/branch-1.3/lang/c/src/datum.h
+++ /dev/null
@@ -1,114 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#ifndef AVRO_DATUM_H
-#define AVRO_DATUM_H
-#include "avro.h"		/* for avro_schema_t */
-#include "avro_private.h"
-#include "st.h"
-
-struct avro_string_datum_t {
-	struct avro_obj_t obj;
-	char *s;
-	void (*free) (void *ptr);
-};
-
-struct avro_bytes_datum_t {
-	struct avro_obj_t obj;
-	char *bytes;
-	int64_t size;
-	void (*free) (void *ptr);
-};
-
-struct avro_int32_datum_t {
-	struct avro_obj_t obj;
-	int32_t i32;
-};
-
-struct avro_int64_datum_t {
-	struct avro_obj_t obj;
-	int64_t i64;
-};
-
-struct avro_float_datum_t {
-	struct avro_obj_t obj;
-	float f;
-};
-
-struct avro_double_datum_t {
-	struct avro_obj_t obj;
-	double d;
-};
-
-struct avro_boolean_datum_t {
-	struct avro_obj_t obj;
-	int8_t i;
-};
-
-struct avro_fixed_datum_t {
-	struct avro_obj_t obj;
-	char *name;
-	char *bytes;
-	int64_t size;
-	void (*free) (void *ptr);
-};
-
-struct avro_map_datum_t {
-	struct avro_obj_t obj;
-	st_table *map;
-};
-
-struct avro_record_datum_t {
-	struct avro_obj_t obj;
-	const char *name;
-	const char *space;
-	st_table *field_order;
-	st_table *fields_byname;
-};
-
-struct avro_enum_datum_t {
-	struct avro_obj_t obj;
-	const char *name;
-	int value;
-};
-
-struct avro_array_datum_t {
-	struct avro_obj_t obj;
-	st_table *els;
-};
-
-struct avro_union_datum_t {
-	struct avro_obj_t obj;
-	int64_t discriminant;
-	avro_datum_t value;
-};
-
-#define avro_datum_to_string(datum_)    (container_of(datum_, struct avro_string_datum_t, obj))
-#define avro_datum_to_bytes(datum_)     (container_of(datum_, struct avro_bytes_datum_t, obj))
-#define avro_datum_to_int32(datum_)     (container_of(datum_, struct avro_int32_datum_t, obj))
-#define avro_datum_to_int64(datum_)     (container_of(datum_, struct avro_int64_datum_t, obj))
-#define avro_datum_to_float(datum_)     (container_of(datum_, struct avro_float_datum_t, obj))
-#define avro_datum_to_double(datum_)    (container_of(datum_, struct avro_double_datum_t, obj))
-#define avro_datum_to_boolean(datum_)   (container_of(datum_, struct avro_boolean_datum_t, obj))
-#define avro_datum_to_fixed(datum_)     (container_of(datum_, struct avro_fixed_datum_t, obj))
-#define avro_datum_to_map(datum_)       (container_of(datum_, struct avro_map_datum_t, obj))
-#define avro_datum_to_record(datum_)    (container_of(datum_, struct avro_record_datum_t, obj))
-#define avro_datum_to_enum(datum_)      (container_of(datum_, struct avro_enum_datum_t, obj))
-#define avro_datum_to_array(datum_)     (container_of(datum_, struct avro_array_datum_t, obj))
-#define avro_datum_to_union(datum_)	(container_of(datum_, struct avro_union_datum_t, obj))
-
-#endif
diff --git a/branch-1.3/lang/c/src/datum_equal.c b/branch-1.3/lang/c/src/datum_equal.c
deleted file mode 100644
index b797b59..0000000
--- a/branch-1.3/lang/c/src/datum_equal.c
+++ /dev/null
@@ -1,180 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <string.h>
-#include "datum.h"
-
-static int
-array_equal(struct avro_array_datum_t *a, struct avro_array_datum_t *b)
-{
-	long i;
-
-	if (a->els->num_entries != b->els->num_entries) {
-		return 0;
-	}
-	for (i = 0; i < a->els->num_entries; i++) {
-		union {
-			st_data_t data;
-			avro_datum_t datum;
-		} ael, bel;
-		st_lookup(a->els, i, &ael.data);
-		st_lookup(b->els, i, &bel.data);
-		if (!avro_datum_equal(ael.datum, bel.datum)) {
-			return 0;
-		}
-	}
-	return 1;
-}
-
-struct st_equal_args {
-	int rval;
-	st_table *st;
-};
-
-static int
-st_equal_foreach(char *key, avro_datum_t datum, struct st_equal_args *args)
-{
-	union {
-		avro_datum_t datum_other;
-		st_data_t data;
-	} val;
-	if (!st_lookup(args->st, (st_data_t) key, &(val.data))) {
-		args->rval = 0;
-		return ST_STOP;
-	}
-	if (!avro_datum_equal(datum, val.datum_other)) {
-		args->rval = 0;
-		return ST_STOP;
-	}
-	return ST_CONTINUE;
-}
-
-static int map_equal(struct avro_map_datum_t *a, struct avro_map_datum_t *b)
-{
-	struct st_equal_args args = { 1, b->map };
-	if (a->map->num_entries != b->map->num_entries) {
-		return 0;
-	}
-	st_foreach(a->map, st_equal_foreach, (st_data_t) & args);
-	return args.rval;
-}
-
-static int record_equal(struct avro_record_datum_t *a,
-			struct avro_record_datum_t *b)
-{
-	struct st_equal_args args = { 1, b->fields_byname };
-	if (strcmp(a->name, b->name)) {
-		/* They have different names */
-		return 0;
-	}
-	if (a->space && b->space) {
-		/* They have different namespaces */
-		if (strcmp(a->space, b->space)) {
-			return 0;
-		}
-	} else if (a->space || b->space) {
-		/* One has a namespace, one doesn't */
-		return 0;
-	}
-	if (a->fields_byname->num_entries != b->fields_byname->num_entries) {
-		return 0;
-	}
-	st_foreach(a->fields_byname, st_equal_foreach, (st_data_t) & args);
-	return args.rval;
-}
-
-static int enum_equal(struct avro_enum_datum_t *a, struct avro_enum_datum_t *b)
-{
-	return strcmp(a->name, b->name) == 0 && a->value == b->value;
-}
-
-static int fixed_equal(struct avro_fixed_datum_t *a,
-		       struct avro_fixed_datum_t *b)
-{
-	return a->size == b->size && memcmp(a->bytes, b->bytes, a->size) == 0;
-}
-
-static int union_equal(struct avro_union_datum_t *a,
-		       struct avro_union_datum_t *b)
-{
-	/* Unions are equal only when they select the same branch */
-	return a->discriminant == b->discriminant
-	    && avro_datum_equal(a->value, b->value);
-}
-
-int avro_datum_equal(const avro_datum_t a, const avro_datum_t b)
-{
-	if (!(is_avro_datum(a) && is_avro_datum(b))) {
-		return 0;
-	}
-	if (avro_typeof(a) != avro_typeof(b)) {
-		return 0;
-	}
-	switch (avro_typeof(a)) {
-	case AVRO_STRING:
-		return strcmp(avro_datum_to_string(a)->s,
-			      avro_datum_to_string(b)->s) == 0;
-	case AVRO_BYTES:
-		return (avro_datum_to_bytes(a)->size ==
-			avro_datum_to_bytes(b)->size)
-		    && memcmp(avro_datum_to_bytes(a)->bytes,
-			      avro_datum_to_bytes(b)->bytes,
-			      avro_datum_to_bytes(a)->size) == 0;
-	case AVRO_INT32:
-		return avro_datum_to_int32(a)->i32 ==
-		    avro_datum_to_int32(b)->i32;
-	case AVRO_INT64:
-		return avro_datum_to_int64(a)->i64 ==
-		    avro_datum_to_int64(b)->i64;
-	case AVRO_FLOAT:
-		return avro_datum_to_float(a)->f == avro_datum_to_float(b)->f;
-	case AVRO_DOUBLE:
-		return avro_datum_to_double(a)->d == avro_datum_to_double(b)->d;
-	case AVRO_BOOLEAN:
-		return avro_datum_to_boolean(a)->i ==
-		    avro_datum_to_boolean(b)->i;
-	case AVRO_NULL:
-		return 1;
-	case AVRO_ARRAY:
-		return array_equal(avro_datum_to_array(a),
-				   avro_datum_to_array(b));
-	case AVRO_MAP:
-		return map_equal(avro_datum_to_map(a), avro_datum_to_map(b));
-
-	case AVRO_RECORD:
-		return record_equal(avro_datum_to_record(a),
-				    avro_datum_to_record(b));
-
-	case AVRO_ENUM:
-		return enum_equal(avro_datum_to_enum(a), avro_datum_to_enum(b));
-
-	case AVRO_FIXED:
-		return fixed_equal(avro_datum_to_fixed(a),
-				   avro_datum_to_fixed(b));
-
-	case AVRO_UNION:
-		return union_equal(avro_datum_to_union(a),
-				   avro_datum_to_union(b));
-
-	case AVRO_LINK:
-		/*
-		 * TODO 
-		 */
-		return 0;
-	}
-	return 0;
-}
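
The equality helpers above repeatedly funnel `avro_datum_t` pointers through the st hash table's generic `st_data_t` slots via a local union. A minimal, self-contained sketch of that punning idiom (the `slot_t`/`datum_t` names are illustrative stand-ins, not the library's):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the st library's generic slot type and a datum pointer;
 * in the Avro source these are st_data_t and avro_datum_t. */
typedef uintptr_t slot_t;
struct datum { int i32; };
typedef struct datum *datum_t;

/* Store a pointer into a generic slot and recover it through a union,
 * mirroring the { st_data_t data; avro_datum_t datum; } idiom above. */
static slot_t store(datum_t d)
{
	union { slot_t data; datum_t datum; } u;
	u.datum = d;
	return u.data;
}

static datum_t load(slot_t s)
{
	union { slot_t data; datum_t datum; } u;
	u.data = s;
	return u.datum;
}
```

The union round-trip avoids the cast-based conversions that would otherwise draw alignment and strict-aliasing warnings when a pointer is squeezed into the table's integer slot type.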
diff --git a/branch-1.3/lang/c/src/datum_read.c b/branch-1.3/lang/c/src/datum_read.c
deleted file mode 100644
index cb77729..0000000
--- a/branch-1.3/lang/c/src/datum_read.c
+++ /dev/null
@@ -1,448 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <stdlib.h>
-#include <errno.h>
-#include <string.h>
-#include "encoding.h"
-#include "schema.h"
-#include "datum.h"
-
-int
-avro_schema_match(avro_schema_t writers_schema, avro_schema_t readers_schema)
-{
-	if (!is_avro_schema(writers_schema) || !is_avro_schema(readers_schema)) {
-		return 0;
-	}
-
-	switch (avro_typeof(writers_schema)) {
-	case AVRO_UNION:
-		return 1;
-
-	case AVRO_INT32:
-		return is_avro_int32(readers_schema)
-		    || is_avro_int64(readers_schema)
-		    || is_avro_float(readers_schema)
-		    || is_avro_double(readers_schema);
-
-	case AVRO_INT64:
-		return is_avro_int64(readers_schema)
-		    || is_avro_float(readers_schema)
-		    || is_avro_double(readers_schema);
-
-	case AVRO_FLOAT:
-		return is_avro_float(readers_schema)
-		    || is_avro_double(readers_schema);
-
-	case AVRO_STRING:
-	case AVRO_BYTES:
-	case AVRO_DOUBLE:
-	case AVRO_BOOLEAN:
-	case AVRO_NULL:
-		return avro_typeof(writers_schema) ==
-		    avro_typeof(readers_schema);
-
-	case AVRO_RECORD:
-		return is_avro_record(readers_schema)
-		    && strcmp(avro_schema_name(writers_schema),
-			      avro_schema_name(readers_schema)) == 0;
-
-	case AVRO_FIXED:
-		return is_avro_fixed(readers_schema)
-		    && strcmp(avro_schema_name(writers_schema),
-			      avro_schema_name(readers_schema)) == 0
-		    && (avro_schema_to_fixed(writers_schema))->size ==
-		    (avro_schema_to_fixed(readers_schema))->size;
-
-	case AVRO_ENUM:
-		return is_avro_enum(readers_schema)
-		    && strcmp(avro_schema_to_enum(writers_schema)->name,
-			      avro_schema_to_enum(readers_schema)->name) == 0;
-
-	case AVRO_MAP:
-		return is_avro_map(readers_schema)
-		    && avro_typeof(avro_schema_to_map(writers_schema)->values)
-		    == avro_typeof(avro_schema_to_map(readers_schema)->values);
-
-	case AVRO_ARRAY:
-		return is_avro_array(readers_schema)
-		    && avro_typeof(avro_schema_to_array(writers_schema)->items)
-		    == avro_typeof(avro_schema_to_array(readers_schema)->items);
-
-	case AVRO_LINK:
-		/*
-		 * TODO 
-		 */
-		break;
-	}
-
-	return 0;
-}
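
`avro_schema_match` above encodes Avro's schema-resolution promotions for the numeric types: int promotes to long, float, or double; long to float or double; float to double. A toy reduction of just that numeric ladder, using an illustrative enum rather than the library's schema types:

```c
#include <assert.h>

/* Illustrative numeric ladder; the ordering mirrors Avro promotion. */
typedef enum { T_INT32, T_INT64, T_FLOAT, T_DOUBLE } toy_type;

/* A writer's numeric type matches any reader type at or above it. */
static int toy_match(toy_type writer, toy_type reader)
{
	switch (writer) {
	case T_INT32:	return reader >= T_INT32;
	case T_INT64:	return reader >= T_INT64;
	case T_FLOAT:	return reader >= T_FLOAT;
	case T_DOUBLE:	return reader == T_DOUBLE;
	}
	return 0;
}
```

Because the enum is declared in promotion order, each case collapses to a single comparison; the real function spells the readers out with `is_avro_*` predicates instead.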
-
-static int
-read_enum(avro_reader_t reader, const avro_encoding_t * enc,
-	  struct avro_enum_schema_t *writers_schema,
-	  struct avro_enum_schema_t *readers_schema, avro_datum_t * datum)
-{
-	int rval;
-	int64_t index;
-
-	AVRO_UNUSED(readers_schema);
-
-	check(rval, enc->read_long(reader, &index));
-	*datum = avro_enum(writers_schema->name, index);
-	return 0;
-}
-
-static int
-read_array(avro_reader_t reader, const avro_encoding_t * enc,
-	   struct avro_array_schema_t *writers_schema,
-	   struct avro_array_schema_t *readers_schema, avro_datum_t * datum)
-{
-	int rval;
-	int64_t i;
-	int64_t block_count;
-	int64_t block_size;
-	avro_datum_t array_datum;
-
-	rval = enc->read_long(reader, &block_count);
-	if (rval) {
-		return rval;
-	}
-
-	array_datum = avro_array();
-	while (block_count != 0) {
-		if (block_count < 0) {
-			block_count = block_count * -1;
-			rval = enc->read_long(reader, &block_size);
-			if (rval) {
-				avro_datum_decref(array_datum);
-				return rval;
-			}
-		}
-
-		for (i = 0; i < block_count; i++) {
-			avro_datum_t datum;
-
-			rval =
-			    avro_read_data(reader, writers_schema->items,
-					   readers_schema->items, &datum);
-			if (rval) {
-				avro_datum_decref(array_datum);
-				return rval;
-			}
-			rval = avro_array_append_datum(array_datum, datum);
-			if (rval) {
-				avro_datum_decref(datum);
-				avro_datum_decref(array_datum);
-				return rval;
-			}
-			avro_datum_decref(datum);
-		}
-
-		rval = enc->read_long(reader, &block_count);
-		if (rval) {
-			avro_datum_decref(array_datum);
-			return rval;
-		}
-	}
-	*datum = array_datum;
-	return 0;
-}
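
`read_array` and `read_map` above walk Avro's blocked-sequence framing: each block opens with an item count, a negative count is followed by the block's size in bytes (so a skipper can jump the whole block), and a zero count terminates the sequence. A toy walk over the framing longs alone, with the items elided and `total_items` as a hypothetical name:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sum the item counts in a stream of framing longs, normalizing the
 * negative-count form the way the readers above do.  The array here
 * holds only the values that enc->read_long would return. */
static int64_t total_items(const int64_t *framing, size_t n)
{
	int64_t total = 0;
	size_t pos = 0;
	while (pos < n) {
		int64_t count = framing[pos++];
		if (count == 0)
			break;		/* a zero count ends the sequence */
		if (count < 0) {
			count = -count;	/* negative: byte size follows */
			pos++;		/* consume (and ignore) the size */
		}
		total += count;
	}
	return total;
}
```

The byte size after a negative count is what lets `avro_skip_data` hop over container blocks without decoding each element.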
-
-static int
-read_map(avro_reader_t reader, const avro_encoding_t * enc,
-	 struct avro_map_schema_t *writers_schema,
-	 struct avro_map_schema_t *readers_schema, avro_datum_t * datum)
-{
-	int rval;
-	int64_t i, block_count;
-	avro_datum_t map = avro_map();
-
-	rval = enc->read_long(reader, &block_count);
-	if (rval) {
-		return rval;
-	}
-	while (block_count != 0) {
-		int64_t block_size;
-		if (block_count < 0) {
-			block_count = block_count * -1;
-			rval = enc->read_long(reader, &block_size);
-			if (rval) {
-				avro_datum_decref(map);
-				return rval;
-			}
-		}
-		for (i = 0; i < block_count; i++) {
-			char *key;
-			avro_datum_t value;
-			rval = enc->read_string(reader, &key);
-			if (rval) {
-				avro_datum_decref(map);
-				return rval;
-			}
-			rval =
-			    avro_read_data(reader,
-					   avro_schema_to_map(writers_schema)->
-					   values,
-					   avro_schema_to_map(readers_schema)->
-					   values, &value);
-			if (rval) {
-				free(key);
-				avro_datum_decref(map);
-				return rval;
-			}
-			rval = avro_map_set(map, key, value);
-			if (rval) {
-				free(key);
-				avro_datum_decref(value);
-				avro_datum_decref(map);
-				return rval;
-			}
-			avro_datum_decref(value);
-			free(key);
-		}
-		rval = enc->read_long(reader, &block_count);
-		if (rval) {
-			avro_datum_decref(map);
-			return rval;
-		}
-	}
-	*datum = map;
-	return 0;
-}
-
-static int
-read_union(avro_reader_t reader, const avro_encoding_t * enc,
-	   struct avro_union_schema_t *writers_schema,
-	   struct avro_union_schema_t *readers_schema, avro_datum_t * datum)
-{
-	int rval;
-	int64_t discriminant;
-	avro_datum_t value;
-	union {
-		st_data_t data;
-		avro_schema_t schema;
-	} val;
-
-	AVRO_UNUSED(readers_schema);
-
-	check(rval, enc->read_long(reader, &discriminant));
-	if (!st_lookup(writers_schema->branches, discriminant, &val.data)) {
-		return EILSEQ;
-	}
-	check(rval, avro_read_data(reader, val.schema, NULL, &value));
-	*datum = avro_union(discriminant, value);
-	avro_datum_decref(value);
-	return 0;
-}
-
-/* TODO: handle default values in fields */
-static int
-read_record(avro_reader_t reader, const avro_encoding_t * enc,
-	    struct avro_record_schema_t *writers_schema,
-	    struct avro_record_schema_t *readers_schema, avro_datum_t * datum)
-{
-	int rval;
-	long i;
-	avro_datum_t record;
-	avro_datum_t field_datum;
-
-	AVRO_UNUSED(enc);
-
-	record = *datum =
-	    avro_record(writers_schema->name, writers_schema->space);
-	for (i = 0; i < writers_schema->fields->num_entries; i++) {
-		union {
-			st_data_t data;
-			struct avro_record_field_t *field;
-		} rfield, wfield;
-		st_lookup(writers_schema->fields, i, &wfield.data);
-		if (st_lookup
-		    (readers_schema->fields_byname,
-		     (st_data_t) wfield.field->name, &rfield.data)) {
-			rval =
-			    avro_read_data(reader, wfield.field->type,
-					   rfield.field->type, &field_datum);
-			if (rval) {
-				return rval;
-			}
-			rval =
-			    avro_record_set(record, wfield.field->name,
-					    field_datum);
-			if (rval) {
-				return rval;
-			}
-			avro_datum_decref(field_datum);
-		} else {
-			rval = avro_skip_data(reader, wfield.field->type);
-			if (rval) {
-				return rval;
-			}
-		}
-	}
-	return 0;
-}
-
-int
-avro_read_data(avro_reader_t reader, avro_schema_t writers_schema,
-	       avro_schema_t readers_schema, avro_datum_t * datum)
-{
-	int rval = EINVAL;
-	const avro_encoding_t *enc = &avro_binary_encoding;
-
-	if (!reader || !is_avro_schema(writers_schema) || !datum) {
-		return EINVAL;
-	}
-
-	if (readers_schema == NULL) {
-		readers_schema = writers_schema;
-	} else if (!avro_schema_match(writers_schema, readers_schema)) {
-		return EINVAL;
-	}
-
-	switch (avro_typeof(writers_schema)) {
-	case AVRO_NULL:
-		rval = enc->read_null(reader);
-		if (!rval) {
-			*datum = avro_null();
-		}
-		break;
-
-	case AVRO_BOOLEAN:
-		{
-			int8_t b;
-			rval = enc->read_boolean(reader, &b);
-			if (!rval) {
-				*datum = avro_boolean(b);
-			}
-		}
-		break;
-
-	case AVRO_STRING:
-		{
-			char *s;
-			rval = enc->read_string(reader, &s);
-			if (!rval) {
-				*datum = avro_givestring(s);
-			}
-		}
-		break;
-
-	case AVRO_INT32:
-		{
-			int32_t i;
-			rval = enc->read_int(reader, &i);
-			if (!rval) {
-				*datum = avro_int32(i);
-			}
-		}
-		break;
-
-	case AVRO_INT64:
-		{
-			int64_t l;
-			rval = enc->read_long(reader, &l);
-			if (!rval) {
-				*datum = avro_int64(l);
-			}
-		}
-		break;
-
-	case AVRO_FLOAT:
-		{
-			float f;
-			rval = enc->read_float(reader, &f);
-			if (!rval) {
-				*datum = avro_float(f);
-			}
-		}
-		break;
-
-	case AVRO_DOUBLE:
-		{
-			double d;
-			rval = enc->read_double(reader, &d);
-			if (!rval) {
-				*datum = avro_double(d);
-			}
-		}
-		break;
-
-	case AVRO_BYTES:
-		{
-			char *bytes;
-			int64_t len;
-			rval = enc->read_bytes(reader, &bytes, &len);
-			if (!rval) {
-				*datum = avro_givebytes(bytes, len);
-			}
-		}
-		break;
-
-	case AVRO_FIXED:{
-			char *bytes;
-			const char *name =
-			    avro_schema_to_fixed(writers_schema)->name;
-			int64_t size =
-			    avro_schema_to_fixed(writers_schema)->size;
-
-			bytes = malloc(size);
-			if (!bytes) {
-				return ENOMEM;
-			}
-			rval = avro_read(reader, bytes, size);
-			if (!rval) {
-				*datum = avro_givefixed(name, bytes, size);
-			} else {
-				free(bytes);
-			}
-		}
-		break;
-
-	case AVRO_ENUM:
-		rval =
-		    read_enum(reader, enc, avro_schema_to_enum(writers_schema),
-			      avro_schema_to_enum(readers_schema), datum);
-		break;
-
-	case AVRO_ARRAY:
-		rval =
-		    read_array(reader, enc,
-			       avro_schema_to_array(writers_schema),
-			       avro_schema_to_array(readers_schema), datum);
-		break;
-
-	case AVRO_MAP:
-		rval =
-		    read_map(reader, enc, avro_schema_to_map(writers_schema),
-			     avro_schema_to_map(readers_schema), datum);
-		break;
-
-	case AVRO_UNION:
-		rval =
-		    read_union(reader, enc,
-			       avro_schema_to_union(writers_schema),
-			       avro_schema_to_union(readers_schema), datum);
-		break;
-
-	case AVRO_RECORD:
-		rval =
-		    read_record(reader, enc,
-				avro_schema_to_record(writers_schema),
-				avro_schema_to_record(readers_schema), datum);
-		break;
-
-	case AVRO_LINK:
-		rval =
-		    avro_read_data(reader,
-				   (avro_schema_to_link(writers_schema))->to,
-				   readers_schema, datum);
-		break;
-	}
-
-	return rval;
-}
diff --git a/branch-1.3/lang/c/src/datum_size.c b/branch-1.3/lang/c/src/datum_size.c
deleted file mode 100644
index 8efb74b..0000000
--- a/branch-1.3/lang/c/src/datum_size.c
+++ /dev/null
@@ -1,295 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <errno.h>
-#include <assert.h>
-#include <string.h>
-#include "schema.h"
-#include "datum.h"
-#include "encoding.h"
-
-#define size_check(rval, call) { rval = call; if(rval) return rval; }
-#define size_accum(rval, size, call) { rval = call; if (rval < 0) return rval; else size += rval; }
-
-static int64_t size_datum(avro_writer_t writer, const avro_encoding_t * enc,
-			  avro_schema_t writers_schema, avro_datum_t datum);
-
-static int64_t
-size_record(avro_writer_t writer, const avro_encoding_t * enc,
-	    struct avro_record_schema_t *schema, avro_datum_t datum)
-{
-	int rval;
-	long i;
-	int64_t size;
-	avro_datum_t field_datum;
-
-	size = 0;
-	if (schema) {
-		for (i = 0; i < schema->fields->num_entries; i++) {
-			union {
-				st_data_t data;
-				struct avro_record_field_t *field;
-			} val;
-			st_lookup(schema->fields, i, &val.data);
-			size_check(rval,
-				   avro_record_get(datum, val.field->name,
-						   &field_datum));
-			size_accum(rval, size,
-				   size_datum(writer, enc, val.field->type,
-					      field_datum));
-		}
-	} else {
-		/* No schema.  Just size the record datum as-is */
-		struct avro_record_datum_t *record =
-		    avro_datum_to_record(datum);
-		for (i = 0; i < record->field_order->num_entries; i++) {
-			union {
-				st_data_t data;
-				char *name;
-			} val;
-			st_lookup(record->field_order, i, &val.data);
-			size_check(rval,
-				   avro_record_get(datum, val.name,
-						   &field_datum));
-			size_accum(rval, size,
-				   size_datum(writer, enc, NULL, field_datum));
-		}
-	}
-	return size;
-}
-
-static int64_t
-size_enum(avro_writer_t writer, const avro_encoding_t * enc,
-	  struct avro_enum_schema_t *enump, struct avro_enum_datum_t *datum)
-{
-	AVRO_UNUSED(enump);
-
-	return enc->size_long(writer, datum->value);
-}
-
-struct size_map_args {
-	int rval;
-	int64_t size;
-	avro_writer_t writer;
-	const avro_encoding_t *enc;
-	avro_schema_t values_schema;
-};
-
-static int
-size_map_foreach(char *key, avro_datum_t datum, struct size_map_args *args)
-{
-	int rval = args->enc->size_string(args->writer, key);
-	if (rval < 0) {
-		args->rval = rval;
-		return ST_STOP;
-	} else {
-		args->size += rval;
-	}
-	rval = size_datum(args->writer, args->enc, args->values_schema, datum);
-	if (rval < 0) {
-		args->rval = rval;
-		return ST_STOP;
-	} else {
-		args->size += rval;
-	}
-	return ST_CONTINUE;
-}
-
-static int64_t
-size_map(avro_writer_t writer, const avro_encoding_t * enc,
-	 struct avro_map_schema_t *writers_schema,
-	 struct avro_map_datum_t *datum)
-{
-	int rval;
-	int64_t size;
-	struct size_map_args args = { 0, 0, writer, enc,
-		writers_schema ? writers_schema->values : NULL
-	};
-
-	size = 0;
-	if (datum->map->num_entries) {
-		size_accum(rval, size,
-			   enc->size_long(writer, datum->map->num_entries));
-		st_foreach(datum->map, size_map_foreach, (st_data_t) & args);
-		size += args.size;
-	}
-	if (!args.rval) {
-		size_accum(rval, size, enc->size_long(writer, 0));
-	}
-	return size;
-}
-
-static int64_t
-size_array(avro_writer_t writer, const avro_encoding_t * enc,
-	   struct avro_array_schema_t *schema, struct avro_array_datum_t *array)
-{
-	int rval;
-	long i;
-	int64_t size;
-
-	size = 0;
-	if (array->els->num_entries) {
-		size_accum(rval, size,
-			   enc->size_long(writer, array->els->num_entries));
-		for (i = 0; i < array->els->num_entries; i++) {
-			union {
-				st_data_t data;
-				avro_datum_t datum;
-			} val;
-			st_lookup(array->els, i, &val.data);
-			size_accum(rval, size,
-				   size_datum(writer, enc,
-					      schema ? schema->items : NULL,
-					      val.datum));
-		}
-	}
-	size_accum(rval, size, enc->size_long(writer, 0));
-	return size;
-}
-
-static int64_t
-size_union(avro_writer_t writer, const avro_encoding_t * enc,
-	   struct avro_union_schema_t *schema,
-	   struct avro_union_datum_t *unionp)
-{
-	int rval;
-	int64_t size;
-	avro_schema_t write_schema = NULL;
-
-	size = 0;
-	size_accum(rval, size, enc->size_long(writer, unionp->discriminant));
-	if (schema) {
-		union {
-			st_data_t data;
-			avro_schema_t schema;
-		} val;
-		if (!st_lookup
-		    (schema->branches, unionp->discriminant, &val.data)) {
-			return -EINVAL;
-		}
-		write_schema = val.schema;
-	}
-	size_accum(rval, size,
-		   size_datum(writer, enc, write_schema, unionp->value));
-	return size;
-}
-
-static int64_t size_datum(avro_writer_t writer, const avro_encoding_t * enc,
-			  avro_schema_t writers_schema, avro_datum_t datum)
-{
-	if (is_avro_schema(writers_schema) && is_avro_link(writers_schema)) {
-		return size_datum(writer, enc,
-				  (avro_schema_to_link(writers_schema))->to,
-				  datum);
-	}
-
-	switch (avro_typeof(datum)) {
-	case AVRO_NULL:
-		return enc->size_null(writer);
-
-	case AVRO_BOOLEAN:
-		return enc->size_boolean(writer,
-					 avro_datum_to_boolean(datum)->i);
-
-	case AVRO_STRING:
-		return enc->size_string(writer, avro_datum_to_string(datum)->s);
-
-	case AVRO_BYTES:
-		return enc->size_bytes(writer,
-				       avro_datum_to_bytes(datum)->bytes,
-				       avro_datum_to_bytes(datum)->size);
-
-	case AVRO_INT32:
-	case AVRO_INT64:{
-			int64_t val = avro_typeof(datum) == AVRO_INT32 ?
-			    avro_datum_to_int32(datum)->i32 :
-			    avro_datum_to_int64(datum)->i64;
-			if (is_avro_schema(writers_schema)) {
-				/* handle promotion */
-				if (is_avro_float(writers_schema)) {
-					return enc->size_float(writer,
-							       (float)val);
-				} else if (is_avro_double(writers_schema)) {
-					return enc->size_double(writer,
-								(double)val);
-				}
-			}
-			return enc->size_long(writer, val);
-		}
-
-	case AVRO_FLOAT:{
-			float val = avro_datum_to_float(datum)->f;
-			if (is_avro_schema(writers_schema)
-			    && is_avro_double(writers_schema)) {
-				/* handle promotion */
-				return enc->size_double(writer, (double)val);
-			}
-			return enc->size_float(writer, val);
-		}
-
-	case AVRO_DOUBLE:
-		return enc->size_double(writer, avro_datum_to_double(datum)->d);
-
-	case AVRO_RECORD:
-		return size_record(writer, enc,
-				   avro_schema_to_record(writers_schema),
-				   datum);
-
-	case AVRO_ENUM:
-		return size_enum(writer, enc,
-				 avro_schema_to_enum(writers_schema),
-				 avro_datum_to_enum(datum));
-
-	case AVRO_FIXED:
-		return avro_datum_to_fixed(datum)->size;
-
-	case AVRO_MAP:
-		return size_map(writer, enc,
-				avro_schema_to_map(writers_schema),
-				avro_datum_to_map(datum));
-
-	case AVRO_ARRAY:
-		return size_array(writer, enc,
-				  avro_schema_to_array(writers_schema),
-				  avro_datum_to_array(datum));
-
-	case AVRO_UNION:
-		return size_union(writer, enc,
-				  avro_schema_to_union(writers_schema),
-				  avro_datum_to_union(datum));
-
-	case AVRO_LINK:
-		break;
-	}
-
-	return 0;
-}
-
-int64_t avro_size_data(avro_writer_t writer, avro_schema_t writers_schema,
-		       avro_datum_t datum)
-{
-	if (!writer || !is_avro_datum(datum)) {
-		return -EINVAL;
-	}
-	/* Only validate datum if a writer's schema is provided */
-	if (is_avro_schema(writers_schema)
-	    && !avro_schema_datum_validate(writers_schema, datum)) {
-		return -EINVAL;
-	}
-	return size_datum(writer, &avro_binary_encoding, writers_schema, datum);
-}
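
`avro_size_data` computes encoded sizes without writing anything; for longs that reduces to the byte length of the zig-zag varint encoding that `enc->size_long` reports. A self-contained sketch of that computation (an illustrative function, not the library's):

```c
#include <assert.h>
#include <stdint.h>

/* Byte length of Avro's zig-zag varint encoding of a signed 64-bit
 * value: zig-zag maps small-magnitude values (positive or negative)
 * to small unsigned codes, then each byte carries 7 payload bits. */
static int64_t varint_size(int64_t v)
{
	uint64_t z = ((uint64_t)v << 1) ^ (uint64_t)(v >> 63);	/* zig-zag */
	int64_t len = 1;
	while (z >>= 7)
		len++;
	return len;
}
```

Running a sizing pass with these per-field lengths lets callers preallocate buffers before a second, actual write pass.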
diff --git a/branch-1.3/lang/c/src/datum_skip.c b/branch-1.3/lang/c/src/datum_skip.c
deleted file mode 100644
index ef4e53b..0000000
--- a/branch-1.3/lang/c/src/datum_skip.c
+++ /dev/null
@@ -1,227 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <stdlib.h>
-#include <errno.h>
-#include <string.h>
-#include "encoding.h"
-#include "schema.h"
-
-static int skip_array(avro_reader_t reader, const avro_encoding_t * enc,
-		      struct avro_array_schema_t *writers_schema)
-{
-	int rval;
-	int64_t i;
-	int64_t block_count;
-	int64_t block_size;
-
-	rval = enc->read_long(reader, &block_count);
-	if (rval) {
-		return rval;
-	}
-
-	while (block_count != 0) {
-		if (block_count < 0) {
-			block_count = block_count * -1;
-			rval = enc->read_long(reader, &block_size);
-			if (rval) {
-				return rval;
-			}
-		}
-
-		for (i = 0; i < block_count; i++) {
-			rval = avro_skip_data(reader, writers_schema->items);
-			if (rval) {
-				return rval;
-			}
-		}
-
-		rval = enc->read_long(reader, &block_count);
-		if (rval) {
-			return rval;
-		}
-	}
-	return 0;
-}
-
-static int skip_map(avro_reader_t reader, const avro_encoding_t * enc,
-		    struct avro_map_schema_t *writers_schema)
-{
-	int rval;
-	int64_t i, block_count;
-
-	rval = enc->read_long(reader, &block_count);
-	if (rval) {
-		return rval;
-	}
-	while (block_count != 0) {
-		int64_t block_size;
-		if (block_count < 0) {
-			block_count = block_count * -1;
-			rval = enc->read_long(reader, &block_size);
-			if (rval) {
-				return rval;
-			}
-		}
-		for (i = 0; i < block_count; i++) {
-			rval = enc->skip_string(reader);
-			if (rval) {
-				return rval;
-			}
-			rval =
-			    avro_skip_data(reader,
-					   avro_schema_to_map(writers_schema)->
-					   values);
-			if (rval) {
-				return rval;
-			}
-		}
-		rval = enc->read_long(reader, &block_count);
-		if (rval) {
-			return rval;
-		}
-	}
-	return 0;
-}
-
-static int skip_union(avro_reader_t reader, const avro_encoding_t * enc,
-		      struct avro_union_schema_t *writers_schema)
-{
-	int rval;
-	int64_t index;
-	union {
-		st_data_t data;
-		avro_schema_t schema;
-	} val;
-
-	rval = enc->read_long(reader, &index);
-	if (rval) {
-		return rval;
-	}
-	if (!st_lookup(writers_schema->branches, index, &val.data)) {
-		return EILSEQ;
-	}
-	return avro_skip_data(reader, val.schema);
-}
-
-static int skip_record(avro_reader_t reader, const avro_encoding_t * enc,
-		       struct avro_record_schema_t *writers_schema)
-{
-	int rval;
-	long i;
-
-	AVRO_UNUSED(enc);
-
-	for (i = 0; i < writers_schema->fields->num_entries; i++) {
-		union {
-			st_data_t data;
-			struct avro_record_field_t *field;
-		} val;
-		st_lookup(writers_schema->fields, i, &val.data);
-		rval = avro_skip_data(reader, val.field->type);
-		if (rval) {
-			return rval;
-		}
-	}
-	return 0;
-}
-
-int avro_skip_data(avro_reader_t reader, avro_schema_t writers_schema)
-{
-	int rval = EINVAL;
-	const avro_encoding_t *enc = &avro_binary_encoding;
-
-	if (!reader || !is_avro_schema(writers_schema)) {
-		return EINVAL;
-	}
-
-	switch (avro_typeof(writers_schema)) {
-	case AVRO_NULL:
-		rval = enc->skip_null(reader);
-		break;
-
-	case AVRO_BOOLEAN:
-		rval = enc->skip_boolean(reader);
-		break;
-
-	case AVRO_STRING:
-		rval = enc->skip_string(reader);
-		break;
-
-	case AVRO_INT32:
-		rval = enc->skip_int(reader);
-		break;
-
-	case AVRO_INT64:
-		rval = enc->skip_long(reader);
-		break;
-
-	case AVRO_FLOAT:
-		rval = enc->skip_float(reader);
-		break;
-
-	case AVRO_DOUBLE:
-		rval = enc->skip_double(reader);
-		break;
-
-	case AVRO_BYTES:
-		rval = enc->skip_bytes(reader);
-		break;
-
-	case AVRO_FIXED:
-		rval =
-		    avro_skip(reader,
-			      avro_schema_to_fixed(writers_schema)->size);
-		break;
-
-	case AVRO_ENUM:
-		rval = enc->skip_long(reader);
-		break;
-
-	case AVRO_ARRAY:
-		rval =
-		    skip_array(reader, enc,
-			       avro_schema_to_array(writers_schema));
-		break;
-
-	case AVRO_MAP:
-		rval =
-		    skip_map(reader, enc, avro_schema_to_map(writers_schema));
-		break;
-
-	case AVRO_UNION:
-		rval =
-		    skip_union(reader, enc,
-			       avro_schema_to_union(writers_schema));
-		break;
-
-	case AVRO_RECORD:
-		rval =
-		    skip_record(reader, enc,
-				avro_schema_to_record(writers_schema));
-		break;
-
-	case AVRO_LINK:
-		rval =
-		    avro_skip_data(reader,
-				   (avro_schema_to_link(writers_schema))->to);
-		break;
-	}
-
-	return rval;
-}
diff --git a/branch-1.3/lang/c/src/datum_validate.c b/branch-1.3/lang/c/src/datum_validate.c
deleted file mode 100644
index a3f92e3..0000000
--- a/branch-1.3/lang/c/src/datum_validate.c
+++ /dev/null
@@ -1,193 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <limits.h>
-#include <errno.h>
-#include <string.h>
-#include "schema.h"
-#include "datum.h"
-#include "st.h"
-
-struct validate_st {
-	avro_schema_t expected_schema;
-	int rval;
-};
-
-static int
-schema_map_validate_foreach(char *key, avro_datum_t datum,
-			    struct validate_st *vst)
-{
-	AVRO_UNUSED(key);
-
-	if (!avro_schema_datum_validate(vst->expected_schema, datum)) {
-		vst->rval = 0;
-		return ST_STOP;
-	}
-	return ST_CONTINUE;
-}
-
-int
-avro_schema_datum_validate(avro_schema_t expected_schema, avro_datum_t datum)
-{
-	int rval;
-	long i;
-
-	if (!is_avro_schema(expected_schema) || !is_avro_datum(datum)) {
-		return EINVAL;
-	}
-
-	switch (avro_typeof(expected_schema)) {
-	case AVRO_NULL:
-		return is_avro_null(datum);
-
-	case AVRO_BOOLEAN:
-		return is_avro_boolean(datum);
-
-	case AVRO_STRING:
-		return is_avro_string(datum);
-
-	case AVRO_BYTES:
-		return is_avro_bytes(datum);
-
-	case AVRO_INT32:
-		return is_avro_int32(datum)
-		    || (is_avro_int64(datum)
-			&& (INT_MIN <= avro_datum_to_int64(datum)->i64
-			    && avro_datum_to_int64(datum)->i64 <= INT_MAX));
-
-	case AVRO_INT64:
-		return is_avro_int32(datum) || is_avro_int64(datum);
-
-	case AVRO_FLOAT:
-		return is_avro_int32(datum) || is_avro_int64(datum)
-		    || is_avro_float(datum);
-
-	case AVRO_DOUBLE:
-		return is_avro_int32(datum) || is_avro_int64(datum)
-		    || is_avro_float(datum) || is_avro_double(datum);
-
-	case AVRO_FIXED:
-		return (is_avro_fixed(datum)
-			&& (avro_schema_to_fixed(expected_schema)->size ==
-			    avro_datum_to_fixed(datum)->size));
-
-	case AVRO_ENUM:
-		if (is_avro_enum(datum)) {
-			long value = avro_datum_to_enum(datum)->value;
-			long max_value =
-			    avro_schema_to_enum(expected_schema)->symbols->
-			    num_entries;
-			/* valid symbols are 0 .. num_entries - 1 */
-			return 0 <= value && value < max_value;
-		}
-		return 0;
-
-	case AVRO_ARRAY:
-		if (is_avro_array(datum)) {
-			struct avro_array_datum_t *array =
-			    avro_datum_to_array(datum);
-
-			for (i = 0; i < array->els->num_entries; i++) {
-				union {
-					st_data_t data;
-					avro_datum_t datum;
-				} val;
-				st_lookup(array->els, i, &val.data);
-				if (!avro_schema_datum_validate
-				    ((avro_schema_to_array
-				      (expected_schema))->items, val.datum)) {
-					return 0;
-				}
-			}
-			return 1;
-		}
-		return 0;
-
-	case AVRO_MAP:
-		if (is_avro_map(datum)) {
-			struct validate_st vst =
-			    { avro_schema_to_map(expected_schema)->values, 1
-			};
-			st_foreach(avro_datum_to_map(datum)->map,
-				   schema_map_validate_foreach,
-				   (st_data_t) & vst);
-			return vst.rval;
-		}
-		break;
-
-	case AVRO_UNION:
-		if (is_avro_union(datum)) {
-			struct avro_union_schema_t *union_schema =
-			    avro_schema_to_union(expected_schema);
-			struct avro_union_datum_t *union_datum =
-			    avro_datum_to_union(datum);
-			union {
-				st_data_t data;
-				avro_schema_t schema;
-			} val;
-
-			if (!st_lookup
-			    (union_schema->branches, union_datum->discriminant,
-			     &val.data)) {
-				return 0;
-			}
-			return avro_schema_datum_validate(val.schema,
-							  union_datum->value);
-		}
-		break;
-
-	case AVRO_RECORD:
-		if (is_avro_record(datum)) {
-			struct avro_record_schema_t *record_schema =
-			    avro_schema_to_record(expected_schema);
-			for (i = 0; i < record_schema->fields->num_entries; i++) {
-				avro_datum_t field_datum;
-				union {
-					st_data_t data;
-					struct avro_record_field_t *field;
-				} val;
-				st_lookup(record_schema->fields, i, &val.data);
-
-				rval =
-				    avro_record_get(datum, val.field->name,
-						    &field_datum);
-				if (rval) {
-					/*
-					 * TODO: check for default values 
-					 */
-					return rval;
-				}
-				if (!avro_schema_datum_validate
-				    (val.field->type, field_datum)) {
-					return 0;
-				}
-			}
-			return 1;
-		}
-		break;
-
-	case AVRO_LINK:
-		{
-			return
-			    avro_schema_datum_validate((avro_schema_to_link
-							(expected_schema))->to,
-						       datum);
-		}
-		break;
-	}
-	return 0;
-}
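
The `AVRO_INT32` branch above accepts an int64 datum only when its value fits in 32 bits. The same range check as a standalone predicate (the name is illustrative):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* True when a 64-bit value is representable as a 32-bit int, the
 * condition the AVRO_INT32 validation case applies to int64 datums. */
static int fits_int32(int64_t v)
{
	return INT_MIN <= v && v <= INT_MAX;
}
```

Doing the comparison in 64-bit arithmetic, as the validator does, avoids the truncation a cast-then-compare would silently introduce.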
diff --git a/branch-1.3/lang/c/src/datum_write.c b/branch-1.3/lang/c/src/datum_write.c
deleted file mode 100644
index 9ddeebd..0000000
--- a/branch-1.3/lang/c/src/datum_write.c
+++ /dev/null
@@ -1,287 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <errno.h>
-#include <assert.h>
-#include <string.h>
-#include "schema.h"
-#include "datum.h"
-#include "encoding.h"
-
-static int write_datum(avro_writer_t writer, const avro_encoding_t * enc,
-		       avro_schema_t writers_schema, avro_datum_t datum);
-
-static int
-write_record(avro_writer_t writer, const avro_encoding_t * enc,
-	     struct avro_record_schema_t *schema, avro_datum_t datum)
-{
-	int rval;
-	long i;
-	avro_datum_t field_datum;
-
-	if (schema) {
-		for (i = 0; i < schema->fields->num_entries; i++) {
-			union {
-				st_data_t data;
-				struct avro_record_field_t *field;
-			} val;
-			st_lookup(schema->fields, i, &val.data);
-			check(rval,
-			      avro_record_get(datum, val.field->name,
-					      &field_datum));
-			check(rval,
-			      write_datum(writer, enc, val.field->type,
-					  field_datum));
-		}
-	} else {
-		/* No schema.  Just write the record datum */
-		struct avro_record_datum_t *record =
-		    avro_datum_to_record(datum);
-		for (i = 0; i < record->field_order->num_entries; i++) {
-			union {
-				st_data_t data;
-				char *name;
-			} val;
-			st_lookup(record->field_order, i, &val.data);
-			check(rval,
-			      avro_record_get(datum, val.name, &field_datum));
-			check(rval,
-			      write_datum(writer, enc, NULL, field_datum));
-		}
-	}
-	return 0;
-}
-
-static int
-write_enum(avro_writer_t writer, const avro_encoding_t * enc,
-	   struct avro_enum_schema_t *enump, struct avro_enum_datum_t *datum)
-{
-	AVRO_UNUSED(enump);
-
-	return enc->write_long(writer, datum->value);
-}
-
-struct write_map_args {
-	int rval;
-	avro_writer_t writer;
-	const avro_encoding_t *enc;
-	avro_schema_t values_schema;
-};
-
-static int
-write_map_foreach(char *key, avro_datum_t datum, struct write_map_args *args)
-{
-	int rval = args->enc->write_string(args->writer, key);
-	if (rval) {
-		args->rval = rval;
-		return ST_STOP;
-	}
-	rval = write_datum(args->writer, args->enc, args->values_schema, datum);
-	if (rval) {
-		args->rval = rval;
-		return ST_STOP;
-	}
-	return ST_CONTINUE;
-}
-
-static int
-write_map(avro_writer_t writer, const avro_encoding_t * enc,
-	  struct avro_map_schema_t *writers_schema,
-	  struct avro_map_datum_t *datum)
-{
-	int rval;
-	struct write_map_args args =
-	    { 0, writer, enc, writers_schema ? writers_schema->values : NULL };
-
-	if (datum->map->num_entries) {
-		rval = enc->write_long(writer, datum->map->num_entries);
-		if (rval) {
-			return rval;
-		}
-		st_foreach(datum->map, write_map_foreach, (st_data_t) & args);
-	}
-	if (!args.rval) {
-		rval = enc->write_long(writer, 0);
-		if (rval) {
-			return rval;
-		}
-		return 0;
-	}
-	return args.rval;
-}
-
-static int
-write_array(avro_writer_t writer, const avro_encoding_t * enc,
-	    struct avro_array_schema_t *schema,
-	    struct avro_array_datum_t *array)
-{
-	int rval;
-	long i;
-
-	if (array->els->num_entries) {
-		rval = enc->write_long(writer, array->els->num_entries);
-		if (rval) {
-			return rval;
-		}
-		for (i = 0; i < array->els->num_entries; i++) {
-			union {
-				st_data_t data;
-				avro_datum_t datum;
-			} val;
-			st_lookup(array->els, i, &val.data);
-			check(rval,
-			      write_datum(writer, enc,
-					  schema ? schema->items : NULL,
-					  val.datum));
-		}
-	}
-	return enc->write_long(writer, 0);
-}
-
-static int
-write_union(avro_writer_t writer, const avro_encoding_t * enc,
-	    struct avro_union_schema_t *schema,
-	    struct avro_union_datum_t *unionp)
-{
-	int rval;
-	avro_schema_t write_schema = NULL;
-
-	check(rval, enc->write_long(writer, unionp->discriminant));
-	if (schema) {
-		union {
-			st_data_t data;
-			avro_schema_t schema;
-		} val;
-		if (!st_lookup
-		    (schema->branches, unionp->discriminant, &val.data)) {
-			return EINVAL;
-		}
-		write_schema = val.schema;
-	}
-	return write_datum(writer, enc, write_schema, unionp->value);
-}
-
-static int write_datum(avro_writer_t writer, const avro_encoding_t * enc,
-		       avro_schema_t writers_schema, avro_datum_t datum)
-{
-	if (is_avro_schema(writers_schema) && is_avro_link(writers_schema)) {
-		return write_datum(writer, enc,
-				   (avro_schema_to_link(writers_schema))->to,
-				   datum);
-	}
-
-	switch (avro_typeof(datum)) {
-	case AVRO_NULL:
-		return enc->write_null(writer);
-
-	case AVRO_BOOLEAN:
-		return enc->write_boolean(writer,
-					  avro_datum_to_boolean(datum)->i);
-
-	case AVRO_STRING:
-		return enc->write_string(writer,
-					 avro_datum_to_string(datum)->s);
-
-	case AVRO_BYTES:
-		return enc->write_bytes(writer,
-					avro_datum_to_bytes(datum)->bytes,
-					avro_datum_to_bytes(datum)->size);
-
-	case AVRO_INT32:
-	case AVRO_INT64:{
-			int64_t val = avro_typeof(datum) == AVRO_INT32 ?
-			    avro_datum_to_int32(datum)->i32 :
-			    avro_datum_to_int64(datum)->i64;
-			if (is_avro_schema(writers_schema)) {
-				/* handle promotion */
-				if (is_avro_float(writers_schema)) {
-					return enc->write_float(writer,
-								(float)val);
-				} else if (is_avro_double(writers_schema)) {
-					return enc->write_double(writer,
-								 (double)val);
-				}
-			}
-			return enc->write_long(writer, val);
-		}
-
-	case AVRO_FLOAT:{
-			float val = avro_datum_to_float(datum)->f;
-			if (is_avro_schema(writers_schema)
-			    && is_avro_double(writers_schema)) {
-				/* handle promotion */
-				return enc->write_double(writer, (double)val);
-			}
-			return enc->write_float(writer, val);
-		}
-
-	case AVRO_DOUBLE:
-		return enc->write_double(writer,
-					 avro_datum_to_double(datum)->d);
-
-	case AVRO_RECORD:
-		return write_record(writer, enc,
-				    avro_schema_to_record(writers_schema),
-				    datum);
-
-	case AVRO_ENUM:
-		return write_enum(writer, enc,
-				  avro_schema_to_enum(writers_schema),
-				  avro_datum_to_enum(datum));
-
-	case AVRO_FIXED:
-		return avro_write(writer,
-				  avro_datum_to_fixed(datum)->bytes,
-				  avro_datum_to_fixed(datum)->size);
-
-	case AVRO_MAP:
-		return write_map(writer, enc,
-				 avro_schema_to_map(writers_schema),
-				 avro_datum_to_map(datum));
-
-	case AVRO_ARRAY:
-		return write_array(writer, enc,
-				   avro_schema_to_array(writers_schema),
-				   avro_datum_to_array(datum));
-
-	case AVRO_UNION:
-		return write_union(writer, enc,
-				   avro_schema_to_union(writers_schema),
-				   avro_datum_to_union(datum));
-
-	case AVRO_LINK:
-		break;
-	}
-
-	return 0;
-}
-
-int avro_write_data(avro_writer_t writer, avro_schema_t writers_schema,
-		    avro_datum_t datum)
-{
-	if (!writer || !is_avro_datum(datum)) {
-		return EINVAL;
-	}
-	/* Only validate datum if a writer's schema is provided */
-	if (is_avro_schema(writers_schema)
-	    && !avro_schema_datum_validate(writers_schema, datum)) {
-		return EINVAL;
-	}
-	return write_datum(writer, &avro_binary_encoding,
-			   writers_schema, datum);
-}
diff --git a/branch-1.3/lang/c/src/dump.c b/branch-1.3/lang/c/src/dump.c
deleted file mode 100644
index 5ce8132..0000000
--- a/branch-1.3/lang/c/src/dump.c
+++ /dev/null
@@ -1,56 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include "dump.h"
-#include <ctype.h>
-#include <string.h>
-
-static void dump_line(FILE * out, const caddr_t addr, const long len)
-{
-	int i;
-	fprintf(out, "|");
-	for (i = 0; i < 16; i++) {
-		if (i < len) {
-			fprintf(out, " %02X", ((uint8_t *) addr)[i]);
-		} else {
-			fprintf(out, " ..");
-		}
-		if (!((i + 1) % 8)) {
-			fprintf(out, " |");
-		}
-	}
-	fprintf(out, "\t");
-	for (i = 0; i < 16; i++) {
-		char c = 0x7f & ((uint8_t *) addr)[i];
-		if (i < len && isprint(c)) {
-			fprintf(out, "%c", c);
-		} else {
-			fprintf(out, ".");
-		}
-	}
-}
-
-void dump(FILE * out, const caddr_t addr, const long len)
-{
-	int i;
-	for (i = 0; i < len; i += 16) {
-		dump_line(out, addr + i, (len - i) < 16 ? (len - i) : 16);
-		fprintf(out, "\n");
-	}
-	fflush(out);
-}
diff --git a/branch-1.3/lang/c/src/dump.h b/branch-1.3/lang/c/src/dump.h
deleted file mode 100644
index 975375b..0000000
--- a/branch-1.3/lang/c/src/dump.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#ifndef DUMP_H
-#define DUMP_H
-
-#include <stdio.h>
-#include <sys/types.h>
-
-void dump(FILE * out, const caddr_t addr, const long len);
-
-#endif
diff --git a/branch-1.3/lang/c/src/encoding.h b/branch-1.3/lang/c/src/encoding.h
deleted file mode 100644
index 3f02302..0000000
--- a/branch-1.3/lang/c/src/encoding.h
+++ /dev/null
@@ -1,98 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-#ifndef AVRO_ENCODING_H
-#define AVRO_ENCODING_H
-#include "avro.h"
-
-/*
- * TODO: this will need more functions when JSON encoding is added 
- */
-struct avro_encoding_t {
-	const char *description;
-	/*
-	 * string 
-	 */
-	int (*read_string) (avro_reader_t reader, char **s);
-	int (*skip_string) (avro_reader_t reader);
-	int (*write_string) (avro_writer_t writer, const char *s);
-	 int64_t(*size_string) (avro_writer_t writer, const char *s);
-	/*
-	 * bytes 
-	 */
-	int (*read_bytes) (avro_reader_t reader, char **bytes, int64_t * len);
-	int (*skip_bytes) (avro_reader_t reader);
-	int (*write_bytes) (avro_writer_t writer,
-			    const char *bytes, const int64_t len);
-	 int64_t(*size_bytes) (avro_writer_t writer,
-			       const char *bytes, const int64_t len);
-	/*
-	 * int 
-	 */
-	int (*read_int) (avro_reader_t reader, int32_t * i);
-	int (*skip_int) (avro_reader_t reader);
-	int (*write_int) (avro_writer_t writer, const int32_t i);
-	 int64_t(*size_int) (avro_writer_t writer, const int32_t i);
-	/*
-	 * long 
-	 */
-	int (*read_long) (avro_reader_t reader, int64_t * l);
-	int (*skip_long) (avro_reader_t reader);
-	int (*write_long) (avro_writer_t writer, const int64_t l);
-	 int64_t(*size_long) (avro_writer_t writer, const int64_t l);
-	/*
-	 * float 
-	 */
-	int (*read_float) (avro_reader_t reader, float *f);
-	int (*skip_float) (avro_reader_t reader);
-	int (*write_float) (avro_writer_t writer, const float f);
-	 int64_t(*size_float) (avro_writer_t writer, const float f);
-	/*
-	 * double 
-	 */
-	int (*read_double) (avro_reader_t reader, double *d);
-	int (*skip_double) (avro_reader_t reader);
-	int (*write_double) (avro_writer_t writer, const double d);
-	 int64_t(*size_double) (avro_writer_t writer, const double d);
-	/*
-	 * boolean 
-	 */
-	int (*read_boolean) (avro_reader_t reader, int8_t * b);
-	int (*skip_boolean) (avro_reader_t reader);
-	int (*write_boolean) (avro_writer_t writer, const int8_t b);
-	 int64_t(*size_boolean) (avro_writer_t writer, const int8_t b);
-	/*
-	 * null 
-	 */
-	int (*read_null) (avro_reader_t reader);
-	int (*skip_null) (avro_reader_t reader);
-	int (*write_null) (avro_writer_t writer);
-	 int64_t(*size_null) (avro_writer_t writer);
-};
-typedef struct avro_encoding_t avro_encoding_t;
-
-#define AVRO_WRITE(writer, buf, len) \
-{ int rval = avro_write( writer, buf, len ); if(rval) return rval; }
-#define AVRO_READ(reader, buf, len)  \
-{ int rval = avro_read( reader, buf, len ); if(rval) return rval; }
-#define AVRO_SKIP(reader, len) \
-{ int rval = avro_skip( reader, len); if (rval) return rval; }
-
-extern const avro_encoding_t avro_binary_encoding;	/* in
-							 * encoding_binary 
-							 */
-
-#endif
diff --git a/branch-1.3/lang/c/src/encoding_binary.c b/branch-1.3/lang/c/src/encoding_binary.c
deleted file mode 100644
index 819711e..0000000
--- a/branch-1.3/lang/c/src/encoding_binary.c
+++ /dev/null
@@ -1,431 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include "encoding.h"
-#include <stdlib.h>
-#include <limits.h>
-#include <errno.h>
-#include <string.h>
-#include <sys/types.h>
-
-#define MAX_VARINT_BUF_SIZE 10
-
-static int read_long(avro_reader_t reader, int64_t * l)
-{
-	uint64_t value = 0;
-	uint8_t b;
-	int offset = 0;
-	do {
-		if (offset == MAX_VARINT_BUF_SIZE) {
-			/*
-			 * illegal byte sequence 
-			 */
-			return EILSEQ;
-		}
-		AVRO_READ(reader, &b, 1);
-		value |= (int64_t) (b & 0x7F) << (7 * offset);
-		++offset;
-	}
-	while (b & 0x80);
-	*l = ((value >> 1) ^ -(value & 1));
-	return 0;
-}
-
-static int skip_long(avro_reader_t reader)
-{
-	uint8_t b;
-	int offset = 0;
-	do {
-		if (offset == MAX_VARINT_BUF_SIZE) {
-			return EILSEQ;
-		}
-		AVRO_READ(reader, &b, 1);
-		++offset;
-	}
-	while (b & 0x80);
-	return 0;
-}
-
-static int write_long(avro_writer_t writer, int64_t l)
-{
-	char buf[MAX_VARINT_BUF_SIZE];
-	uint8_t bytes_written = 0;
-	uint64_t n = (l << 1) ^ (l >> 63);
-	while (n & ~0x7F) {
-		buf[bytes_written++] = (char)((((uint8_t) n) & 0x7F) | 0x80);
-		n >>= 7;
-	}
-	buf[bytes_written++] = (char)n;
-	AVRO_WRITE(writer, buf, bytes_written);
-	return 0;
-}
-
-static int64_t size_long(avro_writer_t writer, int64_t l)
-{
-	AVRO_UNUSED(writer);
-
-	int64_t len = 0;
-	uint64_t n = (l << 1) ^ (l >> 63);
-	while (n & ~0x7F) {
-		len++;
-		n >>= 7;
-	}
-	len++;
-	return len;
-}
-
-static int read_int(avro_reader_t reader, int32_t * i)
-{
-	int64_t l;
-	int rval = read_long(reader, &l);
-	if (rval) {
-		return rval;
-	}
-	if (!(INT_MIN <= l && l <= INT_MAX)) {
-		return ERANGE;
-	}
-	*i = l;
-	return 0;
-}
-
-static int skip_int(avro_reader_t reader)
-{
-	return skip_long(reader);
-}
-
-static int write_int(avro_writer_t writer, const int32_t i)
-{
-	int64_t l = i;
-	return write_long(writer, l);
-}
-
-static int64_t size_int(avro_writer_t writer, const int32_t i)
-{
-	int64_t l = i;
-	return size_long(writer, l);
-}
-
-static int read_bytes(avro_reader_t reader, char **bytes, int64_t * len)
-{
-	int rval = read_long(reader, len);
-	if (rval) {
-		return rval;
-	}
-	*bytes = malloc(*len + 1);
-	if (!*bytes) {
-		return ENOMEM;
-	}
-	(*bytes)[*len] = '\0';
-	AVRO_READ(reader, *bytes, *len);
-	return 0;
-}
-
-static int skip_bytes(avro_reader_t reader)
-{
-	int64_t len;
-	int rval = read_long(reader, &len);
-	if (rval) {
-		return rval;
-	}
-	AVRO_SKIP(reader, len);
-	return 0;
-}
-
-static int
-write_bytes(avro_writer_t writer, const char *bytes, const int64_t len)
-{
-	int rval;
-	if (len < 0) {
-		return EINVAL;
-	}
-	rval = write_long(writer, len);
-	if (rval) {
-		return rval;
-	}
-	AVRO_WRITE(writer, (char *)bytes, len);
-	return 0;
-}
-
-static int64_t
-size_bytes(avro_writer_t writer, const char *bytes, const int64_t len)
-{
-	AVRO_UNUSED(bytes);
-
-	return size_long(writer, len) + len;
-}
-
-static int read_string(avro_reader_t reader, char **s)
-{
-	int64_t len;
-	return read_bytes(reader, s, &len);
-}
-
-static int skip_string(avro_reader_t reader)
-{
-	return skip_bytes(reader);
-}
-
-static int write_string(avro_writer_t writer, const char *s)
-{
-	int64_t len = strlen(s);
-	return write_bytes(writer, s, len);
-}
-
-static int64_t size_string(avro_writer_t writer, const char *s)
-{
-	int64_t len = strlen(s);
-	return size_bytes(writer, s, len);
-}
-
-static int read_float(avro_reader_t reader, float *f)
-{
-#if BYTE_ORDER == BIG_ENDIAN
-	uint8_t buf[4];
-#endif
-	union {
-		float f;
-		int32_t i;
-	} v;
-#if BYTE_ORDER == BIG_ENDIAN
-	AVRO_READ(reader, buf, 4);
-	v.i = ((int32_t) buf[0] << 0)
-	    | ((int32_t) buf[1] << 8)
-	    | ((int32_t) buf[2] << 16) | ((int32_t) buf[3] << 24);
-#else
-	AVRO_READ(reader, (void *)&v.i, 4);
-#endif
-	*f = v.f;
-	return 0;
-}
-
-static int skip_float(avro_reader_t reader)
-{
-	AVRO_SKIP(reader, 4);
-	return 0;
-}
-
-static int write_float(avro_writer_t writer, const float f)
-{
-#if BYTE_ORDER == BIG_ENDIAN
-	uint8_t buf[4];
-#endif
-	union {
-		float f;
-		int32_t i;
-	} v;
-
-	v.f = f;
-#if BYTE_ORDER == BIG_ENDIAN
-	buf[0] = (uint8_t) (v.i >> 0);
-	buf[1] = (uint8_t) (v.i >> 8);
-	buf[2] = (uint8_t) (v.i >> 16);
-	buf[3] = (uint8_t) (v.i >> 24);
-	AVRO_WRITE(writer, buf, 4);
-#else
-	AVRO_WRITE(writer, (void *)&v.i, 4);
-#endif
-	return 0;
-}
-
-static int64_t size_float(avro_writer_t writer, const float f)
-{
-	AVRO_UNUSED(writer);
-	AVRO_UNUSED(f);
-
-	return 4;
-}
-
-static int read_double(avro_reader_t reader, double *d)
-{
-#if BYTE_ORDER == BIG_ENDIAN
-	uint8_t buf[8];
-#endif
-	union {
-		double d;
-		int64_t l;
-	} v;
-
-#if BYTE_ORDER == BIG_ENDIAN
-	AVRO_READ(reader, buf, 8);
-	v.l = ((int64_t) buf[0] << 0)
-	    | ((int64_t) buf[1] << 8)
-	    | ((int64_t) buf[2] << 16)
-	    | ((int64_t) buf[3] << 24)
-	    | ((int64_t) buf[4] << 32)
-	    | ((int64_t) buf[5] << 40)
-	    | ((int64_t) buf[6] << 48) | ((int64_t) buf[7] << 56);
-#else
-	AVRO_READ(reader, (void *)&v.l, 8);
-#endif
-	*d = v.d;
-	return 0;
-}
-
-static int skip_double(avro_reader_t reader)
-{
-	AVRO_SKIP(reader, 8);
-	return 0;
-}
-
-static int write_double(avro_writer_t writer, const double d)
-{
-#if BYTE_ORDER == BIG_ENDIAN
-	uint8_t buf[8];
-#endif
-	union {
-		double d;
-		int64_t l;
-	} v;
-
-	v.d = d;
-#if BYTE_ORDER == BIG_ENDIAN
-	buf[0] = (uint8_t) (v.l >> 0);
-	buf[1] = (uint8_t) (v.l >> 8);
-	buf[2] = (uint8_t) (v.l >> 16);
-	buf[3] = (uint8_t) (v.l >> 24);
-	buf[4] = (uint8_t) (v.l >> 32);
-	buf[5] = (uint8_t) (v.l >> 40);
-	buf[6] = (uint8_t) (v.l >> 48);
-	buf[7] = (uint8_t) (v.l >> 56);
-	AVRO_WRITE(writer, buf, 8);
-#else
-	AVRO_WRITE(writer, (void *)&v.l, 8);
-#endif
-	return 0;
-}
-
-static int64_t size_double(avro_writer_t writer, const double d)
-{
-	AVRO_UNUSED(writer);
-	AVRO_UNUSED(d);
-
-	return 8;
-}
-
-static int read_boolean(avro_reader_t reader, int8_t * b)
-{
-	AVRO_READ(reader, b, 1);
-	return 0;
-}
-
-static int skip_boolean(avro_reader_t reader)
-{
-	AVRO_SKIP(reader, 1);
-	return 0;
-}
-
-static int write_boolean(avro_writer_t writer, const int8_t b)
-{
-	AVRO_WRITE(writer, (char *)&b, 1);
-	return 0;
-}
-
-static int64_t size_boolean(avro_writer_t writer, const int8_t b)
-{
-	AVRO_UNUSED(writer);
-	AVRO_UNUSED(b);
-
-	return 1;
-}
-
-static int read_skip_null(avro_reader_t reader)
-{
-	/*
-	 * no-op 
-	 */
-	AVRO_UNUSED(reader);
-
-	return 0;
-}
-
-static int write_null(avro_writer_t writer)
-{
-	/*
-	 * no-op 
-	 */
-	AVRO_UNUSED(writer);
-
-	return 0;
-}
-
-static int64_t size_null(avro_writer_t writer)
-{
-	AVRO_UNUSED(writer);
-
-	return 0;
-}
-
-const avro_encoding_t avro_binary_encoding = {
-	.description = "BINARY FORMAT",
-	/*
-	 * string 
-	 */
-	.read_string = read_string,
-	.skip_string = skip_string,
-	.write_string = write_string,
-	.size_string = size_string,
-	/*
-	 * bytes 
-	 */
-	.read_bytes = read_bytes,
-	.skip_bytes = skip_bytes,
-	.write_bytes = write_bytes,
-	.size_bytes = size_bytes,
-	/*
-	 * int 
-	 */
-	.read_int = read_int,
-	.skip_int = skip_int,
-	.write_int = write_int,
-	.size_int = size_int,
-	/*
-	 * long 
-	 */
-	.read_long = read_long,
-	.skip_long = skip_long,
-	.write_long = write_long,
-	.size_long = size_long,
-	/*
-	 * float 
-	 */
-	.read_float = read_float,
-	.skip_float = skip_float,
-	.write_float = write_float,
-	.size_float = size_float,
-	/*
-	 * double 
-	 */
-	.read_double = read_double,
-	.skip_double = skip_double,
-	.write_double = write_double,
-	.size_double = size_double,
-	/*
-	 * boolean 
-	 */
-	.read_boolean = read_boolean,
-	.skip_boolean = skip_boolean,
-	.write_boolean = write_boolean,
-	.size_boolean = size_boolean,
-	/*
-	 * null 
-	 */
-	.read_null = read_skip_null,
-	.skip_null = read_skip_null,
-	.write_null = write_null,
-	.size_null = size_null
-};
diff --git a/branch-1.3/lang/c/src/io.c b/branch-1.3/lang/c/src/io.c
deleted file mode 100644
index d6604b2..0000000
--- a/branch-1.3/lang/c/src/io.c
+++ /dev/null
@@ -1,364 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <stdio.h>
-#include <stdlib.h>
-#include <errno.h>
-#include <string.h>
-#include "dump.h"
-
-enum avro_io_type_t {
-	AVRO_FILE_IO,
-	AVRO_MEMORY_IO
-};
-typedef enum avro_io_type_t avro_io_type_t;
-
-struct avro_reader_t_ {
-	avro_io_type_t type;
-	unsigned long refcount;
-};
-
-struct avro_writer_t_ {
-	avro_io_type_t type;
-	unsigned long refcount;
-};
-
-struct _avro_reader_file_t {
-	struct avro_reader_t_ reader;
-	FILE *fp;
-	char *cur;
-	char *end;
-	char buffer[4096];
-};
-
-struct _avro_writer_file_t {
-	struct avro_writer_t_ writer;
-	FILE *fp;
-};
-
-struct _avro_reader_memory_t {
-	struct avro_reader_t_ reader;
-	const char *buf;
-	int64_t len;
-	int64_t read;
-};
-
-struct _avro_writer_memory_t {
-	struct avro_writer_t_ writer;
-	const char *buf;
-	int64_t len;
-	int64_t written;
-};
-
-#define avro_io_typeof(obj)      ((obj)->type)
-#define is_memory_io(obj)        (obj && avro_io_typeof(obj) == AVRO_MEMORY_IO)
-#define is_file_io(obj)          (obj && avro_io_typeof(obj) == AVRO_FILE_IO)
-
-#define avro_reader_to_memory(reader_)  container_of(reader_, struct _avro_reader_memory_t, reader)
-#define avro_reader_to_file(reader_)    container_of(reader_, struct _avro_reader_file_t, reader)
-#define avro_writer_to_memory(writer_)  container_of(writer_, struct _avro_writer_memory_t, writer)
-#define avro_writer_to_file(writer_)    container_of(writer_, struct _avro_writer_file_t, writer)
-
-static void reader_init(avro_reader_t reader, avro_io_type_t type)
-{
-	reader->type = type;
-	reader->refcount = 1;
-}
-
-static void writer_init(avro_writer_t writer, avro_io_type_t type)
-{
-	writer->type = type;
-	writer->refcount = 1;
-}
-
-avro_reader_t avro_reader_file(FILE * fp)
-{
-	struct _avro_reader_file_t *file_reader =
-	    malloc(sizeof(struct _avro_reader_file_t));
-	if (!file_reader) {
-		return NULL;
-	}
-	memset(file_reader, 0, sizeof(struct _avro_reader_file_t));
-	file_reader->fp = fp;
-	reader_init(&file_reader->reader, AVRO_FILE_IO);
-	return &file_reader->reader;
-}
-
-avro_writer_t avro_writer_file(FILE * fp)
-{
-	struct _avro_writer_file_t *file_writer =
-	    malloc(sizeof(struct _avro_writer_file_t));
-	if (!file_writer) {
-		return NULL;
-	}
-	file_writer->fp = fp;
-	writer_init(&file_writer->writer, AVRO_FILE_IO);
-	return &file_writer->writer;
-}
-
-avro_reader_t avro_reader_memory(const char *buf, int64_t len)
-{
-	struct _avro_reader_memory_t *mem_reader =
-	    malloc(sizeof(struct _avro_reader_memory_t));
-	if (!mem_reader) {
-		return NULL;
-	}
-	mem_reader->buf = buf;
-	mem_reader->len = len;
-	mem_reader->read = 0;
-	reader_init(&mem_reader->reader, AVRO_MEMORY_IO);
-	return &mem_reader->reader;
-}
-
-avro_writer_t avro_writer_memory(const char *buf, int64_t len)
-{
-	struct _avro_writer_memory_t *mem_writer =
-	    malloc(sizeof(struct _avro_writer_memory_t));
-	if (!mem_writer) {
-		return NULL;
-	}
-	mem_writer->buf = buf;
-	mem_writer->len = len;
-	mem_writer->written = 0;
-	writer_init(&mem_writer->writer, AVRO_MEMORY_IO);
-	return &mem_writer->writer;
-}
-
-static int
-avro_read_memory(struct _avro_reader_memory_t *reader, void *buf, int64_t len)
-{
-	if (len > 0) {
-		if ((reader->len - reader->read) < len) {
-			return ENOSPC;
-		}
-		memcpy(buf, reader->buf + reader->read, len);
-		reader->read += len;
-	}
-	return 0;
-}
-
-#define bytes_available(reader) (reader->end - reader->cur)
-#define buffer_reset(reader) {reader->cur = reader->end = reader->buffer;}
-
-static int
-avro_read_file(struct _avro_reader_file_t *reader, void *buf, int64_t len)
-{
-	int64_t needed = len;
-	void *p = buf;
-	int rval;
-
-	if (len == 0) {
-		return 0;
-	}
-
-	if (needed > (int64_t) sizeof(reader->buffer)) {
-		if (bytes_available(reader) > 0) {
-			memcpy(p, reader->cur, bytes_available(reader));
-			p += bytes_available(reader);
-			needed -= bytes_available(reader);
-			buffer_reset(reader);
-		}
-		rval = fread(p, 1, needed, reader->fp);
-		if (rval != needed) {
-			return -1;
-		}
-		return 0;
-	} else if (needed <= bytes_available(reader)) {
-		memcpy(p, reader->cur, needed);
-		reader->cur += needed;
-		return 0;
-	} else {
-		memcpy(p, reader->cur, bytes_available(reader));
-		p += bytes_available(reader);
-		needed -= bytes_available(reader);
-
-		rval =
-		    fread(reader->buffer, 1, sizeof(reader->buffer),
-			  reader->fp);
-		if (rval == 0) {
-			return -1;
-		}
-		reader->cur = reader->buffer;
-		reader->end = reader->cur + rval;
-
-		if (bytes_available(reader) < needed) {
-			return -1;
-		}
-		memcpy(p, reader->cur, needed);
-		reader->cur += needed;
-		return 0;
-	}
-	return -1;
-}
-
-int avro_read(avro_reader_t reader, void *buf, int64_t len)
-{
-	if (buf && len >= 0) {
-		if (is_memory_io(reader)) {
-			return avro_read_memory(avro_reader_to_memory(reader),
-						buf, len);
-		} else if (is_file_io(reader)) {
-			return avro_read_file(avro_reader_to_file(reader), buf,
-					      len);
-		}
-	}
-	return EINVAL;
-}
-
-static int avro_skip_memory(struct _avro_reader_memory_t *reader, int64_t len)
-{
-	if (len > 0) {
-		if ((reader->len - reader->read) < len) {
-			return ENOSPC;
-		}
-		reader->read += len;
-	}
-	return 0;
-}
-
-static int avro_skip_file(struct _avro_reader_file_t *reader, int64_t len)
-{
-	int rval;
-	int64_t needed = len;
-
-	if (len == 0) {
-		return 0;
-	}
-	if (needed <= bytes_available(reader)) {
-		reader->cur += needed;
-	} else {
-		needed -= bytes_available(reader);
-		buffer_reset(reader);
-		rval = fseek(reader->fp, needed, SEEK_CUR);
-		if (rval < 0) {
-			return rval;
-		}
-	}
-	return 0;
-}
-
-int avro_skip(avro_reader_t reader, int64_t len)
-{
-	if (len >= 0) {
-		if (is_memory_io(reader)) {
-			return avro_skip_memory(avro_reader_to_memory(reader),
-						len);
-		} else if (is_file_io(reader)) {
-			return avro_skip_file(avro_reader_to_file(reader), len);
-		}
-	}
-	return 0;
-}
-
-static int
-avro_write_memory(struct _avro_writer_memory_t *writer, void *buf, int64_t len)
-{
-	if (len) {
-		if ((writer->len - writer->written) < len) {
-			return ENOSPC;
-		}
-		memcpy((void *)(writer->buf + writer->written), buf, len);
-		writer->written += len;
-	}
-	return 0;
-}
-
-static int
-avro_write_file(struct _avro_writer_file_t *writer, void *buf, int64_t len)
-{
-	int rval;
-	if (len > 0) {
-		rval = fwrite(buf, len, 1, writer->fp);
-		if (rval == 0) {
-			return feof(writer->fp) ? -1 : 0;
-		}
-	}
-	return 0;
-}
-
-int avro_write(avro_writer_t writer, void *buf, int64_t len)
-{
-	if (buf && len >= 0) {
-		if (is_memory_io(writer)) {
-			return avro_write_memory(avro_writer_to_memory(writer),
-						 buf, len);
-		} else if (is_file_io(writer)) {
-			return avro_write_file(avro_writer_to_file(writer), buf,
-					       len);
-		}
-	}
-	return EINVAL;
-}
-
-void avro_writer_reset(avro_writer_t writer)
-{
-	if (is_memory_io(writer)) {
-		avro_writer_to_memory(writer)->written = 0;
-	}
-}
-
-int64_t avro_writer_tell(avro_writer_t writer)
-{
-	if (is_memory_io(writer)) {
-		return avro_writer_to_memory(writer)->written;
-	}
-	return -1;
-}
-
-void avro_writer_flush(avro_writer_t writer)
-{
-	if (is_file_io(writer)) {
-		fflush(avro_writer_to_file(writer)->fp);
-	}
-}
-
-void avro_writer_dump(avro_writer_t writer, FILE * fp)
-{
-	if (is_memory_io(writer)) {
-		dump(fp, (char *)avro_writer_to_memory(writer)->buf,
-		     avro_writer_to_memory(writer)->written);
-	}
-}
-
-void avro_reader_dump(avro_reader_t reader, FILE * fp)
-{
-	if (is_memory_io(reader)) {
-		dump(fp, (char *)avro_reader_to_memory(reader)->buf,
-		     avro_reader_to_memory(reader)->read);
-	}
-}
-
-void avro_reader_free(avro_reader_t reader)
-{
-	if (is_memory_io(reader)) {
-		free(avro_reader_to_memory(reader));
-	} else if (is_file_io(reader)) {
-		fclose(avro_reader_to_file(reader)->fp);
-		free(avro_reader_to_file(reader));
-	}
-}
-
-void avro_writer_free(avro_writer_t writer)
-{
-	if (is_memory_io(writer)) {
-		free(avro_writer_to_memory(writer));
-	} else if (is_file_io(writer)) {
-		fclose(avro_writer_to_file(writer)->fp);
-		free(avro_writer_to_file(writer));
-	}
-}
diff --git a/branch-1.3/lang/c/src/schema.c b/branch-1.3/lang/c/src/schema.c
deleted file mode 100644
index 09339ac..0000000
--- a/branch-1.3/lang/c/src/schema.c
+++ /dev/null
@@ -1,1212 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <inttypes.h>
-#include <stdlib.h>
-#include <string.h>
-#include <errno.h>
-#include <ctype.h>
-
-#include "jansson.h"
-#include "st.h"
-#include "schema.h"
-
-#define DEFAULT_TABLE_SIZE 32
-
-struct avro_schema_error_t_ {
-	st_table *named_schemas;
-	json_error_t json_error;
-};
-
-static void avro_schema_init(avro_schema_t schema, avro_type_t type)
-{
-	schema->type = type;
-	schema->class_type = AVRO_SCHEMA;
-	schema->refcount = 1;
-}
-
-static int is_avro_id(const char *name)
-{
-	size_t i, len;
-	if (name) {
-		len = strlen(name);
-		if (len < 1) {
-			return 0;
-		}
-		for (i = 0; i < len; i++) {
-			if (!(isalpha(name[i])
-			      || name[i] == '_' || (i && isdigit(name[i])))) {
-				return 0;
-			}
-		}
-		/*
-		 * starts with [A-Za-z_] subsequent [A-Za-z0-9_] 
-		 */
-		return 1;
-	}
-	return 0;
-}
-
-static int record_free_foreach(int i, struct avro_record_field_t *field,
-			       void *arg)
-{
-	AVRO_UNUSED(i);
-	AVRO_UNUSED(arg);
-
-	free(field->name);
-	avro_schema_decref(field->type);
-	free(field);
-	return ST_DELETE;
-}
-
-static int enum_free_foreach(int i, char *sym, void *arg)
-{
-	AVRO_UNUSED(i);
-	AVRO_UNUSED(arg);
-
-	free(sym);
-	return ST_DELETE;
-}
-
-static int union_free_foreach(int i, avro_schema_t schema, void *arg)
-{
-	AVRO_UNUSED(i);
-	AVRO_UNUSED(arg);
-
-	avro_schema_decref(schema);
-	return ST_DELETE;
-}
-
-static void avro_schema_free(avro_schema_t schema)
-{
-	if (is_avro_schema(schema)) {
-		switch (avro_typeof(schema)) {
-		case AVRO_STRING:
-		case AVRO_BYTES:
-		case AVRO_INT32:
-		case AVRO_INT64:
-		case AVRO_FLOAT:
-		case AVRO_DOUBLE:
-		case AVRO_BOOLEAN:
-		case AVRO_NULL:
-			/* no memory allocated for primitives */
-			return;
-
-		case AVRO_RECORD:{
-				struct avro_record_schema_t *record;
-				record = avro_schema_to_record(schema);
-				free(record->name);
-				if (record->space) {
-					free(record->space);
-				}
-				st_foreach(record->fields, record_free_foreach,
-					   0);
-				st_free_table(record->fields_byname);
-				st_free_table(record->fields);
-				free(record);
-			}
-			break;
-
-		case AVRO_ENUM:{
-				struct avro_enum_schema_t *enump;
-				enump = avro_schema_to_enum(schema);
-				free(enump->name);
-				st_foreach(enump->symbols, enum_free_foreach,
-					   0);
-				st_free_table(enump->symbols);
-				st_free_table(enump->symbols_byname);
-				free(enump);
-			}
-			break;
-
-		case AVRO_FIXED:{
-				struct avro_fixed_schema_t *fixed;
-				fixed = avro_schema_to_fixed(schema);
-				free((char *)fixed->name);
-				free(fixed);
-			}
-			break;
-
-		case AVRO_MAP:{
-				struct avro_map_schema_t *map;
-				map = avro_schema_to_map(schema);
-				avro_schema_decref(map->values);
-				free(map);
-			}
-			break;
-
-		case AVRO_ARRAY:{
-				struct avro_array_schema_t *array;
-				array = avro_schema_to_array(schema);
-				avro_schema_decref(array->items);
-				free(array);
-			}
-			break;
-		case AVRO_UNION:{
-				struct avro_union_schema_t *unionp;
-				unionp = avro_schema_to_union(schema);
-				st_foreach(unionp->branches, union_free_foreach,
-					   0);
-				st_free_table(unionp->branches);
-				free(unionp);
-			}
-			break;
-
-		case AVRO_LINK:{
-				struct avro_link_schema_t *link;
-				link = avro_schema_to_link(schema);
-				avro_schema_decref(link->to);
-				free(link);
-			}
-			break;
-		}
-	}
-}
-
-avro_schema_t avro_schema_incref(avro_schema_t schema)
-{
-	if (schema && schema->refcount != (unsigned int)-1) {
-		++schema->refcount;
-	}
-	return schema;
-}
-
-void avro_schema_decref(avro_schema_t schema)
-{
-	if (schema && schema->refcount != (unsigned int)-1
-	    && --schema->refcount == 0) {
-		avro_schema_free(schema);
-	}
-}
-
-avro_schema_t avro_schema_string(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_STRING,
-		.class_type = AVRO_SCHEMA,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_schema_t avro_schema_bytes(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_BYTES,
-		.class_type = AVRO_SCHEMA,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_schema_t avro_schema_int(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_INT32,
-		.class_type = AVRO_SCHEMA,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_schema_t avro_schema_long(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_INT64,
-		.class_type = AVRO_SCHEMA,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_schema_t avro_schema_float(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_FLOAT,
-		.class_type = AVRO_SCHEMA,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_schema_t avro_schema_double(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_DOUBLE,
-		.class_type = AVRO_SCHEMA,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_schema_t avro_schema_boolean(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_BOOLEAN,
-		.class_type = AVRO_SCHEMA,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_schema_t avro_schema_null(void)
-{
-	static struct avro_obj_t obj = {
-		.type = AVRO_NULL,
-		.class_type = AVRO_SCHEMA,
-		.refcount = 1
-	};
-	return &obj;
-}
-
-avro_schema_t avro_schema_fixed(const char *name, const int64_t size)
-{
-	struct avro_fixed_schema_t *fixed =
-	    malloc(sizeof(struct avro_fixed_schema_t));
-	if (!fixed) {
-		return NULL;
-	}
-	if (!is_avro_id(name)) {
-		return NULL;
-	}
-	fixed->name = strdup(name);
-	fixed->size = size;
-	avro_schema_init(&fixed->obj, AVRO_FIXED);
-	return &fixed->obj;
-}
-
-avro_schema_t avro_schema_union(void)
-{
-	struct avro_union_schema_t *schema =
-	    malloc(sizeof(struct avro_union_schema_t));
-	if (!schema) {
-		return NULL;
-	}
-	schema->branches = st_init_numtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!schema->branches) {
-		free(schema);
-		return NULL;
-	}
-
-	avro_schema_init(&schema->obj, AVRO_UNION);
-	return &schema->obj;
-}
-
-int
-avro_schema_union_append(const avro_schema_t union_schema,
-			 const avro_schema_t schema)
-{
-	struct avro_union_schema_t *unionp;
-	if (!union_schema || !schema || !is_avro_union(union_schema)) {
-		return EINVAL;
-	}
-	unionp = avro_schema_to_union(union_schema);
-	st_insert(unionp->branches, unionp->branches->num_entries,
-		  (st_data_t) schema);
-	avro_schema_incref(schema);
-	return 0;
-}
-
-avro_schema_t avro_schema_array(const avro_schema_t items)
-{
-	struct avro_array_schema_t *array =
-	    malloc(sizeof(struct avro_array_schema_t));
-	if (!array) {
-		return NULL;
-	}
-	array->items = avro_schema_incref(items);
-	avro_schema_init(&array->obj, AVRO_ARRAY);
-	return &array->obj;
-}
-
-avro_schema_t avro_schema_map(const avro_schema_t values)
-{
-	struct avro_map_schema_t *map =
-	    malloc(sizeof(struct avro_map_schema_t));
-	if (!map) {
-		return NULL;
-	}
-	map->values = avro_schema_incref(values);
-	avro_schema_init(&map->obj, AVRO_MAP);
-	return &map->obj;
-}
-
-avro_schema_t avro_schema_enum(const char *name)
-{
-	struct avro_enum_schema_t *enump;
-
-	if (!is_avro_id(name)) {
-		return NULL;
-	}
-	enump = malloc(sizeof(struct avro_enum_schema_t));
-	if (!enump) {
-		return NULL;
-	}
-	enump->name = strdup(name);
-	if (!enump->name) {
-		free(enump);
-		return NULL;
-	}
-	enump->symbols = st_init_numtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!enump->symbols) {
-		free(enump->name);
-		free(enump);
-		return NULL;
-	}
-	enump->symbols_byname = st_init_strtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!enump->symbols_byname) {
-		st_free_table(enump->symbols);
-		free(enump->name);
-		free(enump);
-		return NULL;
-	}
-	avro_schema_init(&enump->obj, AVRO_ENUM);
-	return &enump->obj;
-}
-
-int
-avro_schema_enum_symbol_append(const avro_schema_t enum_schema,
-			       const char *symbol)
-{
-	struct avro_enum_schema_t *enump;
-	char *sym;
-	long idx;
-	if (!enum_schema || !symbol || !is_avro_enum(enum_schema)) {
-		return EINVAL;
-	}
-	enump = avro_schema_to_enum(enum_schema);
-	sym = strdup(symbol);
-	if (!sym) {
-		return ENOMEM;
-	}
-	idx = enump->symbols->num_entries;
-	st_insert(enump->symbols, (st_data_t) idx, (st_data_t) sym);
-	st_insert(enump->symbols_byname, (st_data_t) sym, (st_data_t) idx);
-	return 0;
-}
-
-int
-avro_schema_record_field_append(const avro_schema_t record_schema,
-				const char *field_name,
-				const avro_schema_t field_schema)
-{
-	struct avro_record_schema_t *record;
-	struct avro_record_field_t *new_field;
-	if (!field_name || !field_schema || !is_avro_schema(record_schema)
-	    || !is_avro_record(record_schema) || record_schema == field_schema
-	    || !is_avro_id(field_name)) {
-		return EINVAL;
-	}
-	record = avro_schema_to_record(record_schema);
-	new_field = malloc(sizeof(struct avro_record_field_t));
-	if (!new_field) {
-		return ENOMEM;
-	}
-	new_field->name = strdup(field_name);
-	new_field->type = avro_schema_incref(field_schema);
-	st_insert(record->fields, record->fields->num_entries,
-		  (st_data_t) new_field);
-	st_insert(record->fields_byname, (st_data_t) new_field->name,
-		  (st_data_t) new_field);
-	return 0;
-}
-
-avro_schema_t avro_schema_record(const char *name, const char *space)
-{
-	struct avro_record_schema_t *record;
-	if (!is_avro_id(name)) {
-		return NULL;
-	}
-	record = malloc(sizeof(struct avro_record_schema_t));
-	if (!record) {
-		return NULL;
-	}
-	record->name = strdup(name);
-	if (!record->name) {
-		free(record);
-		return NULL;
-	}
-	record->space = space ? strdup(space) : NULL;
-	if (space && !record->space) {
-		free(record->name);
-		free(record);
-		return NULL;
-	}
-	record->fields = st_init_numtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!record->fields) {
-		if (record->space) {
-			free(record->space);
-		}
-		free(record->name);
-		free(record);
-		return NULL;
-	}
-	record->fields_byname = st_init_strtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!record->fields_byname) {
-		st_free_table(record->fields);
-		free(record->name);
-		free(record);
-		return NULL;
-	}
-
-	avro_schema_init(&record->obj, AVRO_RECORD);
-	return &record->obj;
-}
-
-static int
-save_named_schemas(const char *name, avro_schema_t schema,
-		   avro_schema_error_t * error)
-{
-	st_table *st = (*error)->named_schemas;
-	return st_insert(st, (st_data_t) name, (st_data_t) schema);
-}
-
-static avro_schema_t
-find_named_schemas(const char *name, avro_schema_error_t * error)
-{
-	st_table *st = (*error)->named_schemas;
-	union {
-		avro_schema_t schema;
-		st_data_t data;
-	} val;
-	if (st_lookup(st, (st_data_t) name, &(val.data))) {
-		return val.schema;
-	}
-	return NULL;
-};
-
-avro_schema_t avro_schema_link(avro_schema_t to)
-{
-	struct avro_link_schema_t *link;
-	if (!is_avro_named_type(to)) {
-		return NULL;
-	}
-	link = malloc(sizeof(struct avro_link_schema_t));
-	if (!link) {
-		return NULL;
-	}
-	link->to = avro_schema_incref(to);
-	avro_schema_init(&link->obj, AVRO_LINK);
-	return &link->obj;
-}
-
-static int
-avro_type_from_json_t(json_t * json, avro_type_t * type,
-		      avro_schema_error_t * error, avro_schema_t * named_type)
-{
-	json_t *json_type;
-	const char *type_str;
-
-	if (json_is_array(json)) {
-		*type = AVRO_UNION;
-		return 0;
-	} else if (json_is_object(json)) {
-		json_type = json_object_get(json, "type");
-	} else {
-		json_type = json;
-	}
-	if (!json_is_string(json_type)) {
-		return EINVAL;
-	}
-	type_str = json_string_value(json_type);
-	if (!type_str) {
-		return EINVAL;
-	}
-	/*
-	 * TODO: gperf/re2c this 
-	 */
-	if (strcmp(type_str, "string") == 0) {
-		*type = AVRO_STRING;
-	} else if (strcmp(type_str, "bytes") == 0) {
-		*type = AVRO_BYTES;
-	} else if (strcmp(type_str, "int") == 0) {
-		*type = AVRO_INT32;
-	} else if (strcmp(type_str, "long") == 0) {
-		*type = AVRO_INT64;
-	} else if (strcmp(type_str, "float") == 0) {
-		*type = AVRO_FLOAT;
-	} else if (strcmp(type_str, "double") == 0) {
-		*type = AVRO_DOUBLE;
-	} else if (strcmp(type_str, "boolean") == 0) {
-		*type = AVRO_BOOLEAN;
-	} else if (strcmp(type_str, "null") == 0) {
-		*type = AVRO_NULL;
-	} else if (strcmp(type_str, "record") == 0) {
-		*type = AVRO_RECORD;
-	} else if (strcmp(type_str, "enum") == 0) {
-		*type = AVRO_ENUM;
-	} else if (strcmp(type_str, "array") == 0) {
-		*type = AVRO_ARRAY;
-	} else if (strcmp(type_str, "map") == 0) {
-		*type = AVRO_MAP;
-	} else if (strcmp(type_str, "fixed") == 0) {
-		*type = AVRO_FIXED;
-	} else if ((*named_type = find_named_schemas(type_str, error))) {
-		*type = AVRO_LINK;
-	} else {
-		return EINVAL;
-	}
-	return 0;
-}
-
-static int
-avro_schema_from_json_t(json_t * json, avro_schema_t * schema,
-			avro_schema_error_t * error)
-{
-	avro_type_t type = 0;
-	unsigned int i;
-	avro_schema_t named_schemas = NULL;
-
-	if (avro_type_from_json_t(json, &type, error, &named_schemas)) {
-		return EINVAL;
-	}
-
-	switch (type) {
-	case AVRO_LINK:
-		*schema = avro_schema_link(named_schemas);
-		break;
-
-	case AVRO_STRING:
-		*schema = avro_schema_string();
-		break;
-
-	case AVRO_BYTES:
-		*schema = avro_schema_bytes();
-		break;
-
-	case AVRO_INT32:
-		*schema = avro_schema_int();
-		break;
-
-	case AVRO_INT64:
-		*schema = avro_schema_long();
-		break;
-
-	case AVRO_FLOAT:
-		*schema = avro_schema_float();
-		break;
-
-	case AVRO_DOUBLE:
-		*schema = avro_schema_double();
-		break;
-
-	case AVRO_BOOLEAN:
-		*schema = avro_schema_boolean();
-		break;
-
-	case AVRO_NULL:
-		*schema = avro_schema_null();
-		break;
-
-	case AVRO_RECORD:
-		{
-			json_t *json_name = json_object_get(json, "name");
-			json_t *json_namespace =
-			    json_object_get(json, "namespace");
-			json_t *json_fields = json_object_get(json, "fields");
-			unsigned int num_fields;
-			const char *record_name;
-			const char *record_namespace;
-
-			if (!json_is_string(json_name)) {
-				return EINVAL;
-			}
-			if (!json_is_array(json_fields)) {
-				return EINVAL;
-			}
-			num_fields = json_array_size(json_fields);
-			if (num_fields == 0) {
-				return EINVAL;
-			}
-			record_name = json_string_value(json_name);
-			if (!record_name) {
-				return EINVAL;
-			}
-			if (json_is_string(json_namespace)) {
-				record_namespace =
-				    json_string_value(json_namespace);
-			} else {
-				record_namespace = NULL;
-			}
-			*schema =
-			    avro_schema_record(record_name, record_namespace);
-			if (save_named_schemas(record_name, *schema, error)) {
-				return ENOMEM;
-			}
-			for (i = 0; i < num_fields; i++) {
-				json_t *json_field =
-				    json_array_get(json_fields, i);
-				json_t *json_field_name;
-				json_t *json_field_type;
-				avro_schema_t json_field_type_schema;
-				int field_rval;
-
-				if (!json_is_object(json_field)) {
-					avro_schema_decref(*schema);
-					return EINVAL;
-				}
-				json_field_name =
-				    json_object_get(json_field, "name");
-				if (!json_field_name) {
-					avro_schema_decref(*schema);
-					return EINVAL;
-				}
-				json_field_type =
-				    json_object_get(json_field, "type");
-				if (!json_field_type) {
-					avro_schema_decref(*schema);
-					return EINVAL;
-				}
-				field_rval =
-				    avro_schema_from_json_t(json_field_type,
-							    &json_field_type_schema,
-							    error);
-				if (field_rval) {
-					avro_schema_decref(*schema);
-					return field_rval;
-				}
-				field_rval =
-				    avro_schema_record_field_append(*schema,
-								    json_string_value
-								    (json_field_name),
-								    json_field_type_schema);
-				if (field_rval != 0) {
-					avro_schema_decref(*schema);
-					return field_rval;
-				}
-			}
-		}
-		break;
-
-	case AVRO_ENUM:
-		{
-			json_t *json_name = json_object_get(json, "name");
-			json_t *json_symbols = json_object_get(json, "symbols");
-			const char *name;
-			unsigned int num_symbols;
-
-			if (!json_is_string(json_name)) {
-				return EINVAL;
-			}
-			if (!json_is_array(json_symbols)) {
-				return EINVAL;
-			}
-
-			name = json_string_value(json_name);
-			if (!name) {
-				return EINVAL;
-			}
-			num_symbols = json_array_size(json_symbols);
-			if (num_symbols == 0) {
-				return EINVAL;
-			}
-			*schema = avro_schema_enum(name);
-			if (save_named_schemas(name, *schema, error)) {
-				return ENOMEM;
-			}
-			for (i = 0; i < num_symbols; i++) {
-				int enum_rval;
-				json_t *json_symbol =
-				    json_array_get(json_symbols, i);
-				const char *symbol;
-				if (!json_is_string(json_symbol)) {
-					avro_schema_decref(*schema);
-					return EINVAL;
-				}
-				symbol = json_string_value(json_symbol);
-				enum_rval =
-				    avro_schema_enum_symbol_append(*schema,
-								   symbol);
-				if (enum_rval != 0) {
-					avro_schema_decref(*schema);
-					return enum_rval;
-				}
-			}
-		}
-		break;
-
-	case AVRO_ARRAY:
-		{
-			int items_rval;
-			json_t *json_items = json_object_get(json, "items");
-			avro_schema_t items_schema;
-			if (!json_items) {
-				return EINVAL;
-			}
-			items_rval =
-			    avro_schema_from_json_t(json_items, &items_schema,
-						    error);
-			if (items_rval) {
-				return items_rval;
-			}
-			*schema = avro_schema_array(items_schema);
-		}
-		break;
-
-	case AVRO_MAP:
-		{
-			int values_rval;
-			json_t *json_values = json_object_get(json, "values");
-			avro_schema_t values_schema;
-
-			if (!json_values) {
-				return EINVAL;
-			}
-			values_rval =
-			    avro_schema_from_json_t(json_values, &values_schema,
-						    error);
-			if (values_rval) {
-				return values_rval;
-			}
-			*schema = avro_schema_map(values_schema);
-		}
-		break;
-
-	case AVRO_UNION:
-		{
-			unsigned int num_schemas = json_array_size(json);
-			avro_schema_t s;
-			if (num_schemas == 0) {
-				return EINVAL;
-			}
-			*schema = avro_schema_union();
-			for (i = 0; i < num_schemas; i++) {
-				int schema_rval;
-				json_t *schema_json = json_array_get(json, i);
-				if (!schema_json) {
-					return EINVAL;
-				}
-				schema_rval =
-				    avro_schema_from_json_t(schema_json, &s,
-							    error);
-				if (schema_rval != 0) {
-					avro_schema_decref(*schema);
-					return schema_rval;
-				}
-				schema_rval =
-				    avro_schema_union_append(*schema, s);
-				if (schema_rval != 0) {
-					avro_schema_decref(*schema);
-					return schema_rval;
-				}
-			}
-		}
-		break;
-
-	case AVRO_FIXED:
-		{
-			json_t *json_size = json_object_get(json, "size");
-			json_t *json_name = json_object_get(json, "name");
-			int size;
-			const char *name;
-			if (!json_is_integer(json_size)) {
-				return EINVAL;
-			}
-			if (!json_is_string(json_name)) {
-				return EINVAL;
-			}
-			size = json_integer_value(json_size);
-			name = json_string_value(json_name);
-			*schema = avro_schema_fixed(name, size);
-			if (save_named_schemas(name, *schema, error)) {
-				return ENOMEM;
-			}
-		}
-		break;
-
-	default:
-		return EINVAL;
-	}
-	return 0;
-}
-
-int
-avro_schema_from_json(const char *jsontext, const int32_t len,
-		      avro_schema_t * schema, avro_schema_error_t * e)
-{
-	json_t *root;
-	int rval = 0;
-	avro_schema_error_t error;
-
-	AVRO_UNUSED(len);
-
-	if (!jsontext || !schema) {
-		return EINVAL;
-	}
-
-	error = malloc(sizeof(struct avro_schema_error_t_));
-	if (!error) {
-		return ENOMEM;
-	}
-	*e = error;
-
-	error->named_schemas = st_init_strtable_with_size(DEFAULT_TABLE_SIZE);
-	if (!error->named_schemas) {
-		free(error);
-		return ENOMEM;
-	}
-
-	root = json_loads(jsontext, &error->json_error);
-	if (!root) {
-		st_free_table(error->named_schemas);
-		free(error);
-		return EINVAL;
-	}
-
-	/*
-	 * json_dumpf(root, stderr, 0); 
-	 */
-	rval = avro_schema_from_json_t(root, schema, e);
-	json_decref(root);
-	st_free_table(error->named_schemas);
-	if (rval == 0) {
-		/* no need for an error return */
-		free(error);
-	}
-	return rval;
-}
-
-avro_schema_t avro_schema_copy(avro_schema_t schema)
-{
-	long i;
-	avro_schema_t new_schema = NULL;
-	if (!schema) {
-		return NULL;
-	}
-	switch (avro_typeof(schema)) {
-	case AVRO_STRING:
-	case AVRO_BYTES:
-	case AVRO_INT32:
-	case AVRO_INT64:
-	case AVRO_FLOAT:
-	case AVRO_DOUBLE:
-	case AVRO_BOOLEAN:
-	case AVRO_NULL:
-		/*
-		 * No need to copy primitives since they're static 
-		 */
-		new_schema = schema;
-		break;
-
-	case AVRO_RECORD:
-		{
-			struct avro_record_schema_t *record_schema =
-			    avro_schema_to_record(schema);
-			new_schema =
-			    avro_schema_record(record_schema->name,
-					       record_schema->space);
-			for (i = 0; i < record_schema->fields->num_entries; i++) {
-				union {
-					st_data_t data;
-					struct avro_record_field_t *field;
-				} val;
-				st_lookup(record_schema->fields, i, &val.data);
-				avro_schema_t type_copy =
-				    avro_schema_copy(val.field->type);
-				avro_schema_record_field_append(new_schema,
-								val.field->name,
-								type_copy);
-			}
-		}
-		break;
-
-	case AVRO_ENUM:
-		{
-			struct avro_enum_schema_t *enum_schema =
-			    avro_schema_to_enum(schema);
-			new_schema = avro_schema_enum(enum_schema->name);
-			for (i = 0; i < enum_schema->symbols->num_entries; i++) {
-				union {
-					st_data_t data;
-					char *sym;
-				} val;
-				st_lookup(enum_schema->symbols, i, &val.data);
-				avro_schema_enum_symbol_append(new_schema,
-							       val.sym);
-			}
-		}
-		break;
-
-	case AVRO_FIXED:
-		{
-			struct avro_fixed_schema_t *fixed_schema =
-			    avro_schema_to_fixed(schema);
-			new_schema =
-			    avro_schema_fixed(fixed_schema->name,
-					      fixed_schema->size);
-		}
-		break;
-
-	case AVRO_MAP:
-		{
-			struct avro_map_schema_t *map_schema =
-			    avro_schema_to_map(schema);
-			avro_schema_t values_copy =
-			    avro_schema_copy(map_schema->values);
-			if (!values_copy) {
-				return NULL;
-			}
-			new_schema = avro_schema_map(values_copy);
-		}
-		break;
-
-	case AVRO_ARRAY:
-		{
-			struct avro_array_schema_t *array_schema =
-			    avro_schema_to_array(schema);
-			avro_schema_t items_copy =
-			    avro_schema_copy(array_schema->items);
-			if (!items_copy) {
-				return NULL;
-			}
-			new_schema = avro_schema_array(items_copy);
-		}
-		break;
-
-	case AVRO_UNION:
-		{
-			struct avro_union_schema_t *union_schema =
-			    avro_schema_to_union(schema);
-
-			new_schema = avro_schema_union();
-			for (i = 0; i < union_schema->branches->num_entries;
-			     i++) {
-				avro_schema_t schema_copy;
-				union {
-					st_data_t data;
-					avro_schema_t schema;
-				} val;
-				st_lookup(union_schema->branches, i, &val.data);
-				schema_copy = avro_schema_copy(val.schema);
-				if (avro_schema_union_append
-				    (new_schema, schema_copy)) {
-					avro_schema_decref(new_schema);
-					return NULL;
-				}
-			}
-		}
-		break;
-
-	case AVRO_LINK:
-		{
-			struct avro_link_schema_t *link_schema =
-			    avro_schema_to_link(schema);
-			/*
-			 * TODO: use an avro_schema_copy of to instead of pointing to
-			 * the same reference 
-			 */
-			avro_schema_incref(link_schema->to);
-			new_schema = avro_schema_link(link_schema->to);
-		}
-		break;
-
-	default:
-		return NULL;
-	}
-	return new_schema;
-}
-
-const char *avro_schema_name(const avro_schema_t schema)
-{
-	if (is_avro_record(schema)) {
-		return (avro_schema_to_record(schema))->name;
-	} else if (is_avro_enum(schema)) {
-		return (avro_schema_to_enum(schema))->name;
-	} else if (is_avro_fixed(schema)) {
-		return (avro_schema_to_fixed(schema))->name;
-	}
-	return NULL;
-}
-
-/* simple helper for writing strings */
-static int avro_write_str(avro_writer_t out, const char *str)
-{
-	return avro_write(out, (char *)str, strlen(str));
-}
-
-static int write_field(avro_writer_t out, struct avro_record_field_t *field)
-{
-	int rval;
-	check(rval, avro_write_str(out, "{\"name\":\""));
-	check(rval, avro_write_str(out, field->name));
-	check(rval, avro_write_str(out, "\",\"type\":"));
-	check(rval, avro_schema_to_json(field->type, out));
-	return avro_write_str(out, "}");
-}
-
-static int write_record(avro_writer_t out, struct avro_record_schema_t *record)
-{
-	int rval;
-	long i;
-
-	check(rval, avro_write_str(out, "{\"type\":\"record\",\"name\":\""));
-	check(rval, avro_write_str(out, record->name));
-	check(rval, avro_write_str(out, "\","));
-	if (record->space) {
-		check(rval, avro_write_str(out, "\"namespace\":\""));
-		check(rval, avro_write_str(out, record->space));
-		check(rval, avro_write_str(out, "\","));
-	}
-	check(rval, avro_write_str(out, "\"fields\":["));
-	for (i = 0; i < record->fields->num_entries; i++) {
-		union {
-			st_data_t data;
-			struct avro_record_field_t *field;
-		} val;
-		st_lookup(record->fields, i, &val.data);
-		if (i) {
-			check(rval, avro_write_str(out, ","));
-		}
-		check(rval, write_field(out, val.field));
-	}
-	return avro_write_str(out, "]}");
-}
-
-static int write_enum(avro_writer_t out, struct avro_enum_schema_t *enump)
-{
-	int rval;
-	long i;
-	check(rval, avro_write_str(out, "{\"type\":\"enum\",\"name\":\""));
-	check(rval, avro_write_str(out, enump->name));
-	check(rval, avro_write_str(out, "\",\"symbols\":["));
-
-	for (i = 0; i < enump->symbols->num_entries; i++) {
-		union {
-			st_data_t data;
-			char *sym;
-		} val;
-		st_lookup(enump->symbols, i, &val.data);
-		if (i) {
-			check(rval, avro_write_str(out, ","));
-		}
-		check(rval, avro_write_str(out, "\""));
-		check(rval, avro_write_str(out, val.sym));
-		check(rval, avro_write_str(out, "\""));
-	}
-	return avro_write_str(out, "]}");
-}
-static int write_fixed(avro_writer_t out, struct avro_fixed_schema_t *fixed)
-{
-	int rval;
-	char size[16];
-	check(rval, avro_write_str(out, "{\"type\":\"fixed\",\"name\":\""));
-	check(rval, avro_write_str(out, fixed->name));
-	check(rval, avro_write_str(out, "\",\"size\":"));
-	snprintf(size, sizeof(size), "%"PRId64, fixed->size);
-	check(rval, avro_write_str(out, size));
-	return avro_write_str(out, "}");
-}
-static int write_map(avro_writer_t out, struct avro_map_schema_t *map)
-{
-	int rval;
-	check(rval, avro_write_str(out, "{\"type\":\"map\",\"values\":"));
-	check(rval, avro_schema_to_json(map->values, out));
-	return avro_write_str(out, "}");
-}
-static int write_array(avro_writer_t out, struct avro_array_schema_t *array)
-{
-	int rval;
-	check(rval, avro_write_str(out, "{\"type\":\"array\",\"items\":"));
-	check(rval, avro_schema_to_json(array->items, out));
-	return avro_write_str(out, "}");
-}
-static int write_union(avro_writer_t out, struct avro_union_schema_t *unionp)
-{
-	int rval;
-	long i;
-	check(rval, avro_write_str(out, "["));
-
-	for (i = 0; i < unionp->branches->num_entries; i++) {
-		union {
-			st_data_t data;
-			avro_schema_t schema;
-		} val;
-		st_lookup(unionp->branches, i, &val.data);
-		if (i) {
-			check(rval, avro_write_str(out, ","));
-		}
-		check(rval, avro_schema_to_json(val.schema, out));
-	}
-	return avro_write_str(out, "]");
-}
-static int write_link(avro_writer_t out, struct avro_link_schema_t *link)
-{
-	int rval;
-	check(rval, avro_write_str(out, "\""));
-	check(rval, avro_write_str(out, avro_schema_name(link->to)));
-	return avro_write_str(out, "\"");
-}
-
-int avro_schema_to_json(avro_schema_t schema, avro_writer_t out)
-{
-	int rval;
-
-	if (!is_avro_schema(schema) || !out) {
-		return EINVAL;
-	}
-
-	if (is_avro_primitive(schema)) {
-		check(rval, avro_write_str(out, "{\"type\":\""));
-	}
-
-	switch (avro_typeof(schema)) {
-	case AVRO_STRING:
-		check(rval, avro_write_str(out, "string"));
-		break;
-	case AVRO_BYTES:
-		check(rval, avro_write_str(out, "bytes"));
-		break;
-	case AVRO_INT32:
-		check(rval, avro_write_str(out, "int"));
-		break;
-	case AVRO_INT64:
-		check(rval, avro_write_str(out, "long"));
-		break;
-	case AVRO_FLOAT:
-		check(rval, avro_write_str(out, "float"));
-		break;
-	case AVRO_DOUBLE:
-		check(rval, avro_write_str(out, "double"));
-		break;
-	case AVRO_BOOLEAN:
-		check(rval, avro_write_str(out, "boolean"));
-		break;
-	case AVRO_NULL:
-		check(rval, avro_write_str(out, "null"));
-		break;
-	case AVRO_RECORD:
-		return write_record(out, avro_schema_to_record(schema));
-	case AVRO_ENUM:
-		return write_enum(out, avro_schema_to_enum(schema));
-	case AVRO_FIXED:
-		return write_fixed(out, avro_schema_to_fixed(schema));
-	case AVRO_MAP:
-		return write_map(out, avro_schema_to_map(schema));
-	case AVRO_ARRAY:
-		return write_array(out, avro_schema_to_array(schema));
-	case AVRO_UNION:
-		return write_union(out, avro_schema_to_union(schema));
-	case AVRO_LINK:
-		return write_link(out, avro_schema_to_link(schema));
-	}
-
-	if (is_avro_primitive(schema)) {
-		return avro_write_str(out, "\"}");
-	}
-	return EINVAL;
-}
diff --git a/branch-1.3/lang/c/src/schema.h b/branch-1.3/lang/c/src/schema.h
deleted file mode 100644
index 847073a..0000000
--- a/branch-1.3/lang/c/src/schema.h
+++ /dev/null
@@ -1,81 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-#ifndef AVRO_SCHEMA_H
-#define AVRO_SCHEMA_H
-
-#include "avro.h"
-#include "avro_private.h"
-#include "st.h"
-
-struct avro_record_field_t {
-	char *name;
-	avro_schema_t type;
-	/*
-	 * TODO: default values 
-	 */
-};
-
-struct avro_record_schema_t {
-	struct avro_obj_t obj;
-	char *name;
-	char *space;
-	st_table *fields;
-	st_table *fields_byname;
-};
-
-struct avro_enum_schema_t {
-	struct avro_obj_t obj;
-	char *name;
-	st_table *symbols;
-	st_table *symbols_byname;
-};
-
-struct avro_array_schema_t {
-	struct avro_obj_t obj;
-	avro_schema_t items;
-};
-
-struct avro_map_schema_t {
-	struct avro_obj_t obj;
-	avro_schema_t values;
-};
-
-struct avro_union_schema_t {
-	struct avro_obj_t obj;
-	st_table *branches;
-};
-
-struct avro_fixed_schema_t {
-	struct avro_obj_t obj;
-	const char *name;
-	int64_t size;
-};
-
-struct avro_link_schema_t {
-	struct avro_obj_t obj;
-	avro_schema_t to;
-};
-
-#define avro_schema_to_record(schema_)  (container_of(schema_, struct avro_record_schema_t, obj))
-#define avro_schema_to_enum(schema_)    (container_of(schema_, struct avro_enum_schema_t, obj))
-#define avro_schema_to_array(schema_)   (container_of(schema_, struct avro_array_schema_t, obj))
-#define avro_schema_to_map(schema_)     (container_of(schema_, struct avro_map_schema_t, obj))
-#define avro_schema_to_union(schema_)   (container_of(schema_, struct avro_union_schema_t, obj))
-#define avro_schema_to_fixed(schema_)   (container_of(schema_, struct avro_fixed_schema_t, obj))
-#define avro_schema_to_link(schema_)    (container_of(schema_, struct avro_link_schema_t, obj))
-
-#endif
diff --git a/branch-1.3/lang/c/src/schema_equal.c b/branch-1.3/lang/c/src/schema_equal.c
deleted file mode 100644
index 564096a..0000000
--- a/branch-1.3/lang/c/src/schema_equal.c
+++ /dev/null
@@ -1,191 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include "schema.h"
-#include <string.h>
-
-static int
-schema_record_equal(struct avro_record_schema_t *a,
-		    struct avro_record_schema_t *b)
-{
-	long i;
-	if (strcmp(a->name, b->name)) {
-		/*
-		 * They have different names 
-		 */
-		return 0;
-	}
-	if (a->space && b->space) {
-		/* They have different namespaces */
-		if (strcmp(a->space, b->space)) {
-			return 0;
-		}
-	} else if (a->space || b->space) {
-		/* One has a namespace, one doesn't */
-		return 0;
-	}
-	for (i = 0; i < a->fields->num_entries; i++) {
-		union {
-			st_data_t data;
-			struct avro_record_field_t *f;
-		} fa, fb;
-		st_lookup(a->fields, i, &fa.data);
-		if (!st_lookup(b->fields, i, &fb.data)) {
-			return 0;
-		}
-		if (strcmp(fa.f->name, fb.f->name)) {
-			/*
-			 * They have fields with different names 
-			 */
-			return 0;
-		}
-		if (!avro_schema_equal(fa.f->type, fb.f->type)) {
-			/*
-			 * They have fields with different schemas 
-			 */
-			return 0;
-		}
-	}
-	return 1;
-}
-
-static int
-schema_enum_equal(struct avro_enum_schema_t *a, struct avro_enum_schema_t *b)
-{
-	long i;
-	if (strcmp(a->name, b->name)) {
-		/*
-		 * They have different names 
-		 */
-		return 0;
-	}
-	for (i = 0; i < a->symbols->num_entries; i++) {
-		union {
-			st_data_t data;
-			char *sym;
-		} sa, sb;
-		st_lookup(a->symbols, i, &sa.data);
-		if (!st_lookup(b->symbols, i, &sb.data)) {
-			return 0;
-		}
-		if (strcmp(sa.sym, sb.sym) != 0) {
-			/*
-			 * They have different symbol names 
-			 */
-			return 0;
-		}
-	}
-	return 1;
-}
-
-static int
-schema_fixed_equal(struct avro_fixed_schema_t *a, struct avro_fixed_schema_t *b)
-{
-	if (strcmp(a->name, b->name)) {
-		/*
-		 * They have different names 
-		 */
-		return 0;
-	}
-	return (a->size == b->size);
-}
-
-static int
-schema_map_equal(struct avro_map_schema_t *a, struct avro_map_schema_t *b)
-{
-	return avro_schema_equal(a->values, b->values);
-}
-
-static int
-schema_array_equal(struct avro_array_schema_t *a, struct avro_array_schema_t *b)
-{
-	return avro_schema_equal(a->items, b->items);
-}
-
-static int
-schema_union_equal(struct avro_union_schema_t *a, struct avro_union_schema_t *b)
-{
-	long i;
-	for (i = 0; i < a->branches->num_entries; i++) {
-		union {
-			st_data_t data;
-			avro_schema_t schema;
-		} ab, bb;
-		st_lookup(a->branches, i, &ab.data);
-		if (!st_lookup(b->branches, i, &bb.data)) {
-			return 0;
-		}
-		if (!avro_schema_equal(ab.schema, bb.schema)) {
-			/*
-			 * They don't have the same schema types 
-			 */
-			return 0;
-		}
-	}
-	return 1;
-}
-
-static int
-schema_link_equal(struct avro_link_schema_t *a, struct avro_link_schema_t *b)
-{
-	/*
-	 * NOTE: links can only be used for named types. They are used in
-	 * recursive schemas so we just check the name of the schema pointed
-	 * to instead of a deep check.  Otherwise, we recurse forever... 
-	 */
-	return (strcmp(avro_schema_name(a->to), avro_schema_name(b->to)) == 0);
-}
-
-int avro_schema_equal(avro_schema_t a, avro_schema_t b)
-{
-	if (!a || !b) {
-		/*
-		 * this is an error. protecting from segfault. 
-		 */
-		return 0;
-	} else if (a == b) {
-		/*
-		 * an object is equal to itself 
-		 */
-		return 1;
-	} else if (avro_typeof(a) != avro_typeof(b)) {
-		return 0;
-	} else if (is_avro_record(a)) {
-		return schema_record_equal(avro_schema_to_record(a),
-					   avro_schema_to_record(b));
-	} else if (is_avro_enum(a)) {
-		return schema_enum_equal(avro_schema_to_enum(a),
-					 avro_schema_to_enum(b));
-	} else if (is_avro_fixed(a)) {
-		return schema_fixed_equal(avro_schema_to_fixed(a),
-					  avro_schema_to_fixed(b));
-	} else if (is_avro_map(a)) {
-		return schema_map_equal(avro_schema_to_map(a),
-					avro_schema_to_map(b));
-	} else if (is_avro_array(a)) {
-		return schema_array_equal(avro_schema_to_array(a),
-					  avro_schema_to_array(b));
-	} else if (is_avro_union(a)) {
-		return schema_union_equal(avro_schema_to_union(a),
-					  avro_schema_to_union(b));
-	} else if (is_avro_link(a)) {
-		return schema_link_equal(avro_schema_to_link(a),
-					 avro_schema_to_link(b));
-	}
-	return 1;
-}
diff --git a/branch-1.3/lang/c/src/schema_specific.c b/branch-1.3/lang/c/src/schema_specific.c
deleted file mode 100644
index 34ada84..0000000
--- a/branch-1.3/lang/c/src/schema_specific.c
+++ /dev/null
@@ -1,230 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <stdio.h>
-#include <errno.h>
-#include <string.h>
-#include "schema.h"
-
-enum specific_state {
-	START_STATE,
-};
-typedef enum specific_state specific_state;
-
-struct specific_ctx {
-	FILE *header;
-	FILE *source;
-	int depth;
-	specific_state state;
-};
-typedef struct specific_ctx specific_ctx;
-
-static void indent(specific_ctx * ctx, FILE * fp)
-{
-	int i;
-	for (i = 0; i < ctx->depth; i++) {
-		fprintf(fp, "   ");
-	}
-}
-
-static int avro_schema_to_source(avro_schema_t schema, specific_ctx * ctx)
-{
-	switch (schema->type) {
-	default:
-		return 0;
-	}
-	return EINVAL;
-}
-
-static int avro_schema_to_header(avro_schema_t schema, specific_ctx * ctx)
-{
-	size_t i;
-	FILE *fp = ctx->header;
-
-	indent(ctx, fp);
-	ctx->depth++;
-
-	if (is_avro_primitive(schema) && !ctx->name) {
-		return 0;
-	}
-
-	switch (schema->type) {
-	case AVRO_STRING:
-		fprintf(fp, "char *%s;\n", ctx->name);
-		break;
-
-	case AVRO_BYTES:
-		fprintf(fp, "struct %s { size_t %s_len; char *%s_val } %s;\n",
-			ctx->name, ctx->name, ctx->name, ctx->name);
-		break;
-
-	case AVRO_INT:
-		fprintf(fp, "int %s;\n", ctx->name);
-		break;
-
-	case AVRO_LONG:
-		fprintf(fp, "long %s;\n", ctx->name);
-		break;
-
-	case AVRO_FLOAT:
-		fprintf(fp, "float %s;\n", ctx->name);
-		break;
-
-	case AVRO_DOUBLE:
-		fprintf(fp, "double %s;\n", ctx->name);
-		break;
-
-	case AVRO_BOOLEAN:
-		fprintf(fp, "int %s; /* boolean */\n", ctx->name);
-		break;
-
-	case AVRO_NULL:
-		break;
-
-	case AVRO_RECORD:
-		{
-			struct schema_record_t *record_schema =
-			    avro_schema_to_record(schema);
-			fprintf(fp, "struct %s {\n", record_schema->name);
-			for (i = 0; i < record_schema->num_fields; i++) {
-				struct record_field_t *field =
-				    record_schema->fields[i];
-				ctx->name = field->name;
-				avro_schema_to_header(field->type, ctx);
-				ctx->name = NULL;
-			}
-			fprintf(fp, "};\n");
-			fprintf(fp, "typedef struct %s %s;\n\n",
-				record_schema->name, record_schema->name);
-		}
-		break;
-
-	case AVRO_ENUM:
-		{
-			struct schema_enum_t *enum_schema =
-			    avro_schema_to_enum(schema);
-			fprintf(fp, "enum %s {\n", enum_schema->name);
-			ctx->depth++;
-			for (i = 0; i < enum_schema->num_symbols; i++) {
-				indent(ctx, fp);
-				fprintf(fp, "%s = %ld,\n",
-					enum_schema->symbols[i], i);
-			}
-			ctx->depth--;
-			fprintf(fp, "};\n");
-			fprintf(fp, "typedef enum %s %s;\n\n",
-				enum_schema->name, enum_schema->name);
-		}
-		break;
-
-	case AVRO_FIXED:
-		{
-			struct schema_fixed_t *fixed_schema =
-			    avro_schema_to_fixed(schema);
-			fprintf(fp, "char %s[%ld];\n", fixed_schema->name,
-				fixed_schema->size);
-		}
-		break;
-
-	case AVRO_MAP:
-		{
-
-		}
-		break;
-
-	case AVRO_ARRAY:
-		{
-			struct schema_array_t *array_schema =
-			    avro_schema_to_array(schema);
-			if (!ctx->name) {
-				break;
-			}
-			fprintf(fp, "struct { size_t %s_len; ", ctx->name);
-			if (is_avro_named_type(array_schema->items)) {
-				fprintf(fp, "%s",
-					avro_schema_name(array_schema->items));
-			} else if (is_avro_link(array_schema->items)) {
-				struct schema_link_t *link_schema =
-				    avro_schema_to_link(array_schema->items);
-				fprintf(fp, "struct %s",
-					avro_schema_name(link_schema->to));
-			} else {
-				avro_schema_to_header(array_schema->items, ctx);
-			}
-			fprintf(fp, " *%s_val;} %s;\n", ctx->name, ctx->name);
-		}
-		break;
-	case AVRO_UNION:
-		{
-			struct schema_union_t *union_schema =
-			    avro_schema_to_array(schema);
-			if (!ctx->name) {
-				break;
-			}
-			fprintf(fp, "union {\n");
-			for (i = 0; i < union_schema->num_schemas; i++) {
-				avro_schema_to_header(union_schema->schemas[i],
-						      ctx);
-			}
-			fprintf(fp, "%s_u;\n");
-		}
-		break;
-	case AVRO_LINK:
-		break;
-	default:
-		return EINVAL;
-	}
-
-	ctx->depth--;
-	return 0;
-}
-
-int avro_schema_to_specific(avro_schema_t schema, const char *prefix)
-{
-	specific_ctx ctx;
-	char buf[1024];
-	int rval;
-
-	if (!schema) {
-		return EINVAL;
-	}
-
-	memset(&ctx, 0, sizeof(ctx));
-	snprintf(buf, sizeof(buf), "%s_avro.h", prefix);
-	ctx.header = fopen(buf, "w");
-	if (!ctx.header) {
-		return errno;
-	}
-	snprintf(buf, sizeof(buf), "%s_avro.c", prefix);
-	ctx.source = fopen(buf, "w");
-	if (!ctx.source) {
-		return errno;
-	}
-
-	rval = avro_schema_to_header(schema, &ctx);
-	if (rval) {
-		goto out;
-	}
-
-	rval = avro_schema_to_source(schema, &ctx);
-
-      out:
-	fclose(ctx.header);
-	fclose(ctx.source);
-	return rval;
-}
diff --git a/branch-1.3/lang/c/src/st.c b/branch-1.3/lang/c/src/st.c
deleted file mode 100644
index a767279..0000000
--- a/branch-1.3/lang/c/src/st.c
+++ /dev/null
@@ -1,574 +0,0 @@
-/*
- * This is a public domain general purpose hash table package written by
- * Peter Moore @ UCB. 
- */
-
-/*
- * static char sccsid[] = "@(#) st.c 5.1 89/12/14 Crucible"; 
- */
-
-#include "avro_private.h"
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include "st.h"
-
-typedef struct st_table_entry st_table_entry;
-
-struct st_table_entry {
-	unsigned int hash;
-	st_data_t key;
-	st_data_t record;
-	st_table_entry *next;
-};
-
-#define ST_DEFAULT_MAX_DENSITY 5
-#define ST_DEFAULT_INIT_TABLE_SIZE 11
-
-	/*
-	 * DEFAULT_MAX_DENSITY is the default for the largest we allow the
-	 * average number of items per bin before increasing the number of
-	 * bins
-	 *
-	 * DEFAULT_INIT_TABLE_SIZE is the default for the number of bins
-	 * allocated initially
-	 *
-	 */
-static int numcmp(long, long);
-static int numhash(long);
-static struct st_hash_type type_numhash = {
-	numcmp,
-	numhash,
-};
-
-/*
- * extern int strcmp(const char *, const char *); 
- */
-static int strhash(const char *);
-static struct st_hash_type type_strhash = {
-	strcmp,
-	strhash,
-};
-
-static void rehash(st_table *);
-
-#ifdef RUBY
-#define malloc xmalloc
-#define calloc xcalloc
-#endif
-
-#define alloc(type) (type*)malloc((unsigned)sizeof(type))
-#define Calloc(n,s) (char*)calloc((n),(s))
-
-#define EQUAL(table,x,y) ((x)==(y) || (*table->type->compare)((x),(y)) == 0)
-
-#define do_hash(key,table) (unsigned int)(*(table)->type->hash)((key))
-#define do_hash_bin(key,table) (do_hash(key, table)%(table)->num_bins)
-
-/*
- * MINSIZE is the minimum size of a dictionary.
- */
-
-#define MINSIZE 8
-
-/*
- * Table of prime numbers 2^n+a, 2<=n<=30. 
- */
-static long primes[] = {
-	8 + 3,
-	16 + 3,
-	32 + 5,
-	64 + 3,
-	128 + 3,
-	256 + 27,
-	512 + 9,
-	1024 + 9,
-	2048 + 5,
-	4096 + 3,
-	8192 + 27,
-	16384 + 43,
-	32768 + 3,
-	65536 + 45,
-	131072 + 29,
-	262144 + 3,
-	524288 + 21,
-	1048576 + 7,
-	2097152 + 17,
-	4194304 + 15,
-	8388608 + 9,
-	16777216 + 43,
-	33554432 + 35,
-	67108864 + 15,
-	134217728 + 29,
-	268435456 + 3,
-	536870912 + 11,
-	1073741824 + 85,
-	0
-};
-
-static int new_size(size)
-int size;
-{
-	unsigned int i;
-
-#if 0
-	for (i = 3; i < 31; i++) {
-		if ((1 << i) > size)
-			return 1 << i;
-	}
-	return -1;
-#else
-	int newsize;
-
-	for (i = 0, newsize = MINSIZE;
-	     i < sizeof(primes) / sizeof(primes[0]); i++, newsize <<= 1) {
-		if (newsize > size)
-			return primes[i];
-	}
-	/*
-	 * Ran out of polynomials 
-	 */
-	return -1;		/* should raise exception */
-#endif
-}
-
-#ifdef HASH_LOG
-static int collision = 0;
-static int init_st = 0;
-
-static void stat_col()
-{
-	FILE *f = fopen("/tmp/col", "w");
-	fprintf(f, "collision: %d\n", collision);
-	fclose(f);
-}
-#endif
-
-st_table *st_init_table_with_size(type, size)
-struct st_hash_type *type;
-int size;
-{
-	st_table *tbl;
-
-#ifdef HASH_LOG
-	if (init_st == 0) {
-		init_st = 1;
-		atexit(stat_col);
-	}
-#endif
-
-	size = new_size(size);	/* round up to prime number */
-
-	tbl = alloc(st_table);
-	tbl->type = type;
-	tbl->num_entries = 0;
-	tbl->num_bins = size;
-	tbl->bins = (st_table_entry **) Calloc(size, sizeof(st_table_entry *));
-
-	return tbl;
-}
-
-st_table *st_init_table(type)
-struct st_hash_type *type;
-{
-	return st_init_table_with_size(type, 0);
-}
-
-st_table *st_init_numtable(void)
-{
-	return st_init_table(&type_numhash);
-}
-
-st_table *st_init_numtable_with_size(size)
-int size;
-{
-	return st_init_table_with_size(&type_numhash, size);
-}
-
-st_table *st_init_strtable(void)
-{
-	return st_init_table(&type_strhash);
-}
-
-st_table *st_init_strtable_with_size(size)
-int size;
-{
-	return st_init_table_with_size(&type_strhash, size);
-}
-
-void st_free_table(table)
-st_table *table;
-{
-	register st_table_entry *ptr, *next;
-	int i;
-
-	for (i = 0; i < table->num_bins; i++) {
-		ptr = table->bins[i];
-		while (ptr != 0) {
-			next = ptr->next;
-			free(ptr);
-			ptr = next;
-		}
-	}
-	free(table->bins);
-	free(table);
-}
-
-#define PTR_NOT_EQUAL(table, ptr, hash_val, key) \
-((ptr) != 0 && (ptr->hash != (hash_val) || !EQUAL((table), (key), (ptr)->key)))
-
-#ifdef HASH_LOG
-#define COLLISION collision++
-#else
-#define COLLISION
-#endif
-
-#define FIND_ENTRY(table, ptr, hash_val, bin_pos) do {\
-    bin_pos = hash_val%(table)->num_bins;\
-    ptr = (table)->bins[bin_pos];\
-    if (PTR_NOT_EQUAL(table, ptr, hash_val, key)) {\
-	COLLISION;\
-	while (PTR_NOT_EQUAL(table, ptr->next, hash_val, key)) {\
-	    ptr = ptr->next;\
-	}\
-	ptr = ptr->next;\
-    }\
-} while (0)
-
-int st_lookup(table, key, value)
-st_table *table;
-register st_data_t key;
-st_data_t *value;
-{
-	unsigned int hash_val, bin_pos;
-	register st_table_entry *ptr;
-
-	hash_val = do_hash(key, table);
-	FIND_ENTRY(table, ptr, hash_val, bin_pos);
-
-	if (ptr == 0) {
-		return 0;
-	} else {
-		if (value != 0)
-			*value = ptr->record;
-		return 1;
-	}
-}
-
-#define ADD_DIRECT(table, key, value, hash_val, bin_pos)\
-do {\
-    st_table_entry *entry;\
-    if (table->num_entries/(table->num_bins) > ST_DEFAULT_MAX_DENSITY) {\
-	rehash(table);\
-        bin_pos = hash_val % table->num_bins;\
-    }\
-    \
-    entry = alloc(st_table_entry);\
-    \
-    entry->hash = hash_val;\
-    entry->key = key;\
-    entry->record = value;\
-    entry->next = table->bins[bin_pos];\
-    table->bins[bin_pos] = entry;\
-    table->num_entries++;\
-} while (0)
-
-int st_insert(table, key, value)
-register st_table *table;
-register st_data_t key;
-st_data_t value;
-{
-	unsigned int hash_val, bin_pos;
-	register st_table_entry *ptr;
-
-	hash_val = do_hash(key, table);
-	FIND_ENTRY(table, ptr, hash_val, bin_pos);
-
-	if (ptr == 0) {
-		ADD_DIRECT(table, key, value, hash_val, bin_pos);
-		return 0;
-	} else {
-		ptr->record = value;
-		return 1;
-	}
-}
-
-void st_add_direct(table, key, value)
-st_table *table;
-st_data_t key;
-st_data_t value;
-{
-	unsigned int hash_val, bin_pos;
-
-	hash_val = do_hash(key, table);
-	bin_pos = hash_val % table->num_bins;
-	ADD_DIRECT(table, key, value, hash_val, bin_pos);
-}
-
-static void rehash(table)
-register st_table *table;
-{
-	register st_table_entry *ptr, *next, **new_bins;
-	int i, old_num_bins = table->num_bins, new_num_bins;
-	unsigned int hash_val;
-
-	new_num_bins = new_size(old_num_bins + 1);
-	new_bins =
-	    (st_table_entry **) Calloc(new_num_bins, sizeof(st_table_entry *));
-
-	for (i = 0; i < old_num_bins; i++) {
-		ptr = table->bins[i];
-		while (ptr != 0) {
-			next = ptr->next;
-			hash_val = ptr->hash % new_num_bins;
-			ptr->next = new_bins[hash_val];
-			new_bins[hash_val] = ptr;
-			ptr = next;
-		}
-	}
-	free(table->bins);
-	table->num_bins = new_num_bins;
-	table->bins = new_bins;
-}
-
-st_table *st_copy(old_table)
-st_table *old_table;
-{
-	st_table *new_table;
-	st_table_entry *ptr, *entry;
-	int i, num_bins = old_table->num_bins;
-
-	new_table = alloc(st_table);
-	if (new_table == 0) {
-		return 0;
-	}
-
-	*new_table = *old_table;
-	new_table->bins = (st_table_entry **)
-	    Calloc((unsigned)num_bins, sizeof(st_table_entry *));
-
-	if (new_table->bins == 0) {
-		free(new_table);
-		return 0;
-	}
-
-	for (i = 0; i < num_bins; i++) {
-		new_table->bins[i] = 0;
-		ptr = old_table->bins[i];
-		while (ptr != 0) {
-			entry = alloc(st_table_entry);
-			if (entry == 0) {
-				free(new_table->bins);
-				free(new_table);
-				return 0;
-			}
-			*entry = *ptr;
-			entry->next = new_table->bins[i];
-			new_table->bins[i] = entry;
-			ptr = ptr->next;
-		}
-	}
-	return new_table;
-}
-
-int st_delete(table, key, value)
-register st_table *table;
-register st_data_t *key;
-st_data_t *value;
-{
-	unsigned int hash_val;
-	st_table_entry *tmp;
-	register st_table_entry *ptr;
-
-	hash_val = do_hash_bin(*key, table);
-	ptr = table->bins[hash_val];
-
-	if (ptr == 0) {
-		if (value != 0)
-			*value = 0;
-		return 0;
-	}
-
-	if (EQUAL(table, *key, ptr->key)) {
-		table->bins[hash_val] = ptr->next;
-		table->num_entries--;
-		if (value != 0)
-			*value = ptr->record;
-		*key = ptr->key;
-		free(ptr);
-		return 1;
-	}
-
-	for (; ptr->next != 0; ptr = ptr->next) {
-		if (EQUAL(table, ptr->next->key, *key)) {
-			tmp = ptr->next;
-			ptr->next = ptr->next->next;
-			table->num_entries--;
-			if (value != 0)
-				*value = tmp->record;
-			*key = tmp->key;
-			free(tmp);
-			return 1;
-		}
-	}
-
-	return 0;
-}
-
-int st_delete_safe(table, key, value, never)
-register st_table *table;
-register st_data_t *key;
-st_data_t *value;
-st_data_t never;
-{
-	unsigned int hash_val;
-	register st_table_entry *ptr;
-
-	hash_val = do_hash_bin(*key, table);
-	ptr = table->bins[hash_val];
-
-	if (ptr == 0) {
-		if (value != 0)
-			*value = 0;
-		return 0;
-	}
-
-	for (; ptr != 0; ptr = ptr->next) {
-		if ((ptr->key != never) && EQUAL(table, ptr->key, *key)) {
-			table->num_entries--;
-			*key = ptr->key;
-			if (value != 0)
-				*value = ptr->record;
-			ptr->key = ptr->record = never;
-			return 1;
-		}
-	}
-
-	return 0;
-}
-
-static int delete_never(key, value, never)
-st_data_t key, value, never;
-{
-	AVRO_UNUSED(key);
-
-	if (value == never)
-		return ST_DELETE;
-	return ST_CONTINUE;
-}
-
-void st_cleanup_safe(table, never)
-st_table *table;
-st_data_t never;
-{
-	int num_entries = table->num_entries;
-
-	st_foreach(table, delete_never, never);
-	table->num_entries = num_entries;
-}
-
-int st_foreach(table, func, arg)
-st_table *table;
-int (*func) ();
-st_data_t arg;
-{
-	st_table_entry *ptr, *last, *tmp;
-	enum st_retval retval;
-	int i;
-
-	for (i = 0; i < table->num_bins; i++) {
-		last = 0;
-		for (ptr = table->bins[i]; ptr != 0;) {
-			retval = (*func) (ptr->key, ptr->record, arg);
-			switch (retval) {
-			case ST_CHECK:	/* check if hash is modified during
-					 * iteration */
-				tmp = 0;
-				if (i < table->num_bins) {
-					for (tmp = table->bins[i]; tmp;
-					     tmp = tmp->next) {
-						if (tmp == ptr)
-							break;
-					}
-				}
-				if (!tmp) {
-					/*
-					 * call func with error notice 
-					 */
-					return 1;
-				}
-				/*
-				 * fall through 
-				 */
-			case ST_CONTINUE:
-				last = ptr;
-				ptr = ptr->next;
-				break;
-			case ST_STOP:
-				return 0;
-			case ST_DELETE:
-				tmp = ptr;
-				if (last == 0) {
-					table->bins[i] = ptr->next;
-				} else {
-					last->next = ptr->next;
-				}
-				ptr = ptr->next;
-				free(tmp);
-				table->num_entries--;
-			}
-		}
-	}
-	return 0;
-}
-
-static int strhash(string)
-register const char *string;
-{
-	register int c;
-
-#ifdef HASH_ELFHASH
-	register unsigned int h = 0, g;
-
-	while ((c = *string++) != '\0') {
-		h = (h << 4) + c;
-		if (g = h & 0xF0000000)
-			h ^= g >> 24;
-		h &= ~g;
-	}
-	return h;
-#elif defined(HASH_PERL)
-	register int val = 0;
-
-	while ((c = *string++) != '\0') {
-		val += c;
-		val += (val << 10);
-		val ^= (val >> 6);
-	}
-	val += (val << 3);
-	val ^= (val >> 11);
-
-	return val + (val << 15);
-#else
-	register int val = 0;
-
-	while ((c = *string++) != '\0') {
-		val = val * 997 + c;
-	}
-
-	return val + (val >> 5);
-#endif
-}
-
-static int numcmp(x, y)
-long x, y;
-{
-	return x != y;
-}
-
-static int numhash(n)
-long n;
-{
-	return n;
-}
diff --git a/branch-1.3/lang/c/src/st.h b/branch-1.3/lang/c/src/st.h
deleted file mode 100644
index ebc4df6..0000000
--- a/branch-1.3/lang/c/src/st.h
+++ /dev/null
@@ -1,69 +0,0 @@
-/*
- * This is a public domain general purpose hash table package written by
- * Peter Moore @ UCB. 
- */
-
-/*
- * @(#) st.h 5.1 89/12/14 
- */
-
-#ifndef ST_INCLUDED
-#define ST_INCLUDED
-
-#include <stdint.h>		/* for uintptr_t */
-
-typedef uintptr_t st_data_t;
-typedef struct st_table st_table;
-
-struct st_hash_type {
-	int (*compare) ();
-	int (*hash) ();
-};
-
-struct st_table {
-	struct st_hash_type *type;
-	int num_bins;
-	int num_entries;
-	struct st_table_entry **bins;
-};
-
-#define st_is_member(table,key) st_lookup(table,key,(st_data_t *)0)
-
-enum st_retval { ST_CONTINUE, ST_STOP, ST_DELETE, ST_CHECK };
-
-#ifndef _
-# define _(args) args
-#endif
-#ifndef ANYARGS
-# ifdef __cplusplus
-#   define ANYARGS ...
-# else
-#   define ANYARGS
-# endif
-#endif
-
-st_table *st_init_table _((struct st_hash_type *));
-st_table *st_init_table_with_size _((struct st_hash_type *, int));
-st_table *st_init_numtable _((void));
-st_table *st_init_numtable_with_size _((int));
-st_table *st_init_strtable _((void));
-st_table *st_init_strtable_with_size _((int));
-int st_delete _((st_table *, st_data_t *, st_data_t *));
-int st_delete_safe _((st_table *, st_data_t *, st_data_t *, st_data_t));
-int st_insert _((st_table *, st_data_t, st_data_t));
-int st_lookup _((st_table *, st_data_t, st_data_t *));
-int st_foreach _((st_table *, int (*)(ANYARGS), st_data_t));
-void st_add_direct _((st_table *, st_data_t, st_data_t));
-void st_free_table _((st_table *));
-void st_cleanup_safe _((st_table *, st_data_t));
-st_table *st_copy _((st_table *));
-
-#define ST_NUMCMP	((int (*)()) 0)
-#define ST_NUMHASH	((int (*)()) -2)
-
-#define st_numcmp	ST_NUMCMP
-#define st_numhash	ST_NUMHASH
-
-int st_strhash();
-
-#endif				/* ST_INCLUDED */
diff --git a/branch-1.3/lang/c/tests/CMakeLists.txt b/branch-1.3/lang/c/tests/CMakeLists.txt
deleted file mode 100644
index e3ff6bf..0000000
--- a/branch-1.3/lang/c/tests/CMakeLists.txt
+++ /dev/null
@@ -1,35 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-
-add_executable(generate_interop_data generate_interop_data.c)
-target_link_libraries(generate_interop_data avro-static)
-add_executable(test_interop_data test_interop_data.c)
-target_link_libraries(test_interop_data avro-static)
-
-add_executable(test_avro_schema test_avro_schema.c)
-target_link_libraries(test_avro_schema avro-static)
-add_test(test_avro_schema ${CMAKE_COMMAND} -E chdir ${AvroC_SOURCE_DIR}/tests ${CMAKE_CURRENT_BINARY_DIR}/test_avro_schema)
-
-add_executable(test_avro_data test_avro_data.c)
-target_link_libraries(test_avro_data avro-static)
-add_test(test_avro_data ${CMAKE_COMMAND} -E chdir ${AvroC_SOURCE_DIR}/tests ${CMAKE_CURRENT_BINARY_DIR}/test_avro_data)
-
-add_executable(test_cpp test_cpp.cpp)
-target_link_libraries(test_cpp avro-static)
-add_test(test_cpp ${CMAKE_COMMAND} -E chdir ${AvroC_SOURCE_DIR}/tests ${CMAKE_CURRENT_BINARY_DIR}/test_cpp)
diff --git a/branch-1.3/lang/c/tests/Makefile.am b/branch-1.3/lang/c/tests/Makefile.am
deleted file mode 100644
index 97efc04..0000000
--- a/branch-1.3/lang/c/tests/Makefile.am
+++ /dev/null
@@ -1,29 +0,0 @@
-# -pedantic
-AM_CPPFLAGS=-I$(top_srcdir)/src
-AM_CFLAGS=-Wall -Wextra -Wunused-parameter
-ACLOCAL_AMFLAGS=-I m4
-
-EXTRA_DIST=schema_tests test_valgrind
-
-check_PROGRAMS=test_avro_schema test_avro_data test_cpp
-
-noinst_PROGRAMS=generate_interop_data test_interop_data
-
-test_LDADD=$(top_builddir)/src/libavro.la
-
-test_avro_schema_SOURCES=test_avro_schema.c
-test_avro_schema_LDADD=$(test_LDADD)
-
-test_avro_data_SOURCES=test_avro_data.c
-test_avro_data_LDADD=$(test_LDADD)
-
-test_cpp_SOURCES=test_cpp.cpp
-test_cpp_LDADD=$(test_LDADD)
-
-generate_interop_data_SOURCES=generate_interop_data.c
-generate_interop_data_LDADD=$(test_LDADD)
-
-test_interop_data_SOURCES=test_interop_data.c
-test_interop_data_LDADD=$(test_LDADD)
-
-TESTS=$(check_PROGRAMS) test_valgrind
diff --git a/branch-1.3/lang/c/tests/generate_interop_data.c b/branch-1.3/lang/c/tests/generate_interop_data.c
deleted file mode 100644
index 217b78f..0000000
--- a/branch-1.3/lang/c/tests/generate_interop_data.c
+++ /dev/null
@@ -1,112 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <stdio.h>
-#include <stdlib.h>
-#include <unistd.h>
-
-int main(int argc, char *argv[])
-{
-	int rval;
-	avro_file_writer_t file_writer;
-	avro_file_reader_t file_reader;
-	char outpath[128];
-	FILE *fp;
-	char jsontext[16 * 1024];
-	avro_schema_t schema;
-	avro_schema_error_t schema_error;
-	avro_datum_t interop;
-	avro_datum_t array_datum;
-	avro_datum_t node_datum;
-	avro_datum_t union_datum;
-	avro_datum_t out_datum;
-	enum Kind {
-		KIND_A,
-		KIND_B,
-		KIND_C
-	};
-
-	if (argc != 3) {
-		exit(EXIT_FAILURE);
-	}
-	snprintf(outpath, sizeof(outpath), "%s/c.avro", argv[2]);
-	fprintf(stderr, "Writing to %s\n", outpath);
-
-	fp = fopen(argv[1], "r");
-	rval = fread(jsontext, 1, sizeof(jsontext) - 1, fp);
-	jsontext[rval] = '\0';
-
-	check(rval,
-	      avro_schema_from_json(jsontext, rval, &schema, &schema_error));
-	check(rval, avro_file_writer_create(outpath, schema, &file_writer));
-
-	/* TODO: create a method for generating random data from schema */
-	interop = avro_record("Interop", "org.apache.avro");
-	avro_record_set(interop, "intField", avro_int32(42));
-	avro_record_set(interop, "longField", avro_int64(4242));
-	avro_record_set(interop, "stringField",
-			avro_wrapstring("Follow your bliss."));
-	avro_record_set(interop, "boolField", avro_boolean(1));
-	avro_record_set(interop, "floatField", avro_float(3.14159265));
-	avro_record_set(interop, "doubleField", avro_double(2.71828183));
-	avro_record_set(interop, "bytesField", avro_bytes("abcd", 4));
-	avro_record_set(interop, "nullField", avro_null());
-
-	array_datum = avro_array();
-	avro_array_append_datum(array_datum, avro_double(1.0));
-	avro_array_append_datum(array_datum, avro_double(2.0));
-	avro_array_append_datum(array_datum, avro_double(3.0));
-	avro_record_set(interop, "arrayField", array_datum);
-
-	avro_record_set(interop, "mapField", avro_map());
-	union_datum = avro_union(1, avro_double(1.61803399));
-	avro_record_set(interop, "unionField", union_datum);
-	avro_record_set(interop, "enumField", avro_enum("Kind", KIND_A));
-	avro_record_set(interop, "fixedField",
-			avro_fixed("MD5", "1234567890123456", 16));
-
-	node_datum = avro_record("Node", NULL);
-	avro_record_set(node_datum, "label",
-			avro_wrapstring("If you label me, you negate me."));
-	avro_record_set(node_datum, "children", avro_array());
-	avro_record_set(interop, "recordField", node_datum);
-
-	rval = avro_file_writer_append(file_writer, interop);
-	if (rval) {
-		fprintf(stderr, "Unable to append data to interop file!\n");
-		exit(EXIT_FAILURE);
-	} else {
-		fprintf(stderr, "Successfully appended datum to file\n");
-	}
-
-	check(rval, avro_file_writer_close(file_writer));
-	fprintf(stderr, "Closed writer.\n");
-
-	check(rval, avro_file_reader(outpath, &file_reader));
-	fprintf(stderr, "Re-reading datum to verify\n");
-	check(rval, avro_file_reader_read(file_reader, NULL, &out_datum));
-	fprintf(stderr, "Verifying datum...");
-	if (!avro_datum_equal(interop, out_datum)) {
-		fprintf(stderr, "fail!\n");
-		exit(EXIT_FAILURE);
-	}
-	fprintf(stderr, "ok\n");
-	check(rval, avro_file_reader_close(file_reader));
-	fprintf(stderr, "Closed reader.\n");
-	return 0;
-}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/enum_nonarray_symbols b/branch-1.3/lang/c/tests/schema_tests/fail/enum_nonarray_symbols
deleted file mode 100644
index f4dae95..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/enum_nonarray_symbols
+++ /dev/null
@@ -1,3 +0,0 @@
-{"type": "enum",
- "name": "Status",
- "symbols": "Normal Caution Critical"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/enum_nonstring_name b/branch-1.3/lang/c/tests/schema_tests/fail/enum_nonstring_name
deleted file mode 100644
index baa13d9..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/enum_nonstring_name
+++ /dev/null
@@ -1,3 +0,0 @@
-{"type": "enum",
- "name": [ 0, 1, 1, 2, 3, 5, 8 ],
- "symbols": ["Golden", "Mean"]}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/enum_without_name b/branch-1.3/lang/c/tests/schema_tests/fail/enum_without_name
deleted file mode 100644
index 57f6853..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/enum_without_name
+++ /dev/null
@@ -1,3 +0,0 @@
-{"type": "enum"
-  "symbols" : ["I", "will", "fail", "no", "name"]
-}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/fixed_without_name b/branch-1.3/lang/c/tests/schema_tests/fail/fixed_without_name
deleted file mode 100644
index fbf96ab..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/fixed_without_name
+++ /dev/null
@@ -1,2 +0,0 @@
-{"type": "fixed",
- "size": 314}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/fixed_without_size b/branch-1.3/lang/c/tests/schema_tests/fail/fixed_without_size
deleted file mode 100644
index 15c5e78..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/fixed_without_size
+++ /dev/null
@@ -1,2 +0,0 @@
-{"type": "fixed",
- "name": "Missing size"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/illegal_type b/branch-1.3/lang/c/tests/schema_tests/fail/illegal_type
deleted file mode 100644
index e65c046..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/illegal_type
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"panther"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/invalid_avro_id b/branch-1.3/lang/c/tests/schema_tests/fail/invalid_avro_id
deleted file mode 100644
index c684e7d..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/invalid_avro_id
+++ /dev/null
@@ -1,3 +0,0 @@
-{ "name" : "2d2",
-  "type": "enum",
-  "symbols" : [ "c3po" ] }
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_field_missing_name b/branch-1.3/lang/c/tests/schema_tests/fail/record_with_field_missing_name
deleted file mode 100644
index ba62d52..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_field_missing_name
+++ /dev/null
@@ -1,5 +0,0 @@
-{"type": "record",
- "name": "Address",
- "fields": [
-     {"type": "string"},
-     {"type": "string", "name": "City"}]}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_field_missing_type b/branch-1.3/lang/c/tests/schema_tests/fail/record_with_field_missing_type
deleted file mode 100644
index b449f3b..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_field_missing_type
+++ /dev/null
@@ -1,5 +0,0 @@
-{"type": "record",
- "name": "Event",
- "fields": [ 
-    { "name": "Sponsor"},
-    { "name": "City", "type": "string"}]}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_invalid_reference b/branch-1.3/lang/c/tests/schema_tests/fail/record_with_invalid_reference
deleted file mode 100644
index 49b3590..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_invalid_reference
+++ /dev/null
@@ -1,7 +0,0 @@
-{ "type": "record",
-  "name": "recursive",
-  "fields": [
-    { "name": "label", "type": "string" },
-    { "name": "children", "type": {"type": "array", "items": "foobar"} }
-  ]
-}
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_nonarray_fields b/branch-1.3/lang/c/tests/schema_tests/fail/record_with_nonarray_fields
deleted file mode 100644
index b81fbe3..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_nonarray_fields
+++ /dev/null
@@ -1,3 +0,0 @@
-{ "type": "record",
-  "fields": "His vision, from the constantly passing bars,"
-  "name", "Rainer" }
diff --git a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_nonstring_name b/branch-1.3/lang/c/tests/schema_tests/fail/record_with_nonstring_name
deleted file mode 100644
index 0ded9c5..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/fail/record_with_nonstring_name
+++ /dev/null
@@ -1,3 +0,0 @@
-{"name": ["Tom", "Jerry"],
- "type": "record",
- "fields": [ {"name": "name", "type": "string"} ]}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/array b/branch-1.3/lang/c/tests/schema_tests/pass/array
deleted file mode 100644
index d695049..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/array
+++ /dev/null
@@ -1 +0,0 @@
-{"type": "array", "items": "long"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/boolean_full b/branch-1.3/lang/c/tests/schema_tests/pass/boolean_full
deleted file mode 100644
index 69d3579..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/boolean_full
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"boolean"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/bytes_full b/branch-1.3/lang/c/tests/schema_tests/pass/bytes_full
deleted file mode 100644
index 3b91ef0..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/bytes_full
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"bytes"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/double_full b/branch-1.3/lang/c/tests/schema_tests/pass/double_full
deleted file mode 100644
index dbd22f7..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/double_full
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"double"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/enum b/branch-1.3/lang/c/tests/schema_tests/pass/enum
deleted file mode 100644
index 749b0a3..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/enum
+++ /dev/null
@@ -1,4 +0,0 @@
-{ "type": "enum",
-  "name": "three_stooges",
-  "symbols" : [ "Moe", "Larry", "Curly" ]
-}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/extra_attributes b/branch-1.3/lang/c/tests/schema_tests/pass/extra_attributes
deleted file mode 100644
index 49885b9..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/extra_attributes
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"string", "ignored": "value"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/fixed b/branch-1.3/lang/c/tests/schema_tests/pass/fixed
deleted file mode 100644
index 0449ebc..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/fixed
+++ /dev/null
@@ -1 +0,0 @@
-{"type": "fixed", "size": 16, "name": "md5"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/float_full b/branch-1.3/lang/c/tests/schema_tests/pass/float_full
deleted file mode 100644
index fbd1164..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/float_full
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"float"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/int_full b/branch-1.3/lang/c/tests/schema_tests/pass/int_full
deleted file mode 100644
index 92b134d..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/int_full
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"int"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/interop.avsc b/branch-1.3/lang/c/tests/schema_tests/pass/interop.avsc
deleted file mode 100644
index 8cfbba2..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/interop.avsc
+++ /dev/null
@@ -1,28 +0,0 @@
-{"type": "record", "name":"Interop", "namespace": "org.apache.avro",
-  "fields": [
-      {"name": "intField", "type": "int"},
-      {"name": "longField", "type": "long"},
-      {"name": "stringField", "type": "string"},
-      {"name": "boolField", "type": "boolean"},
-      {"name": "floatField", "type": "float"},
-      {"name": "doubleField", "type": "double"},
-      {"name": "bytesField", "type": "bytes"},
-      {"name": "nullField", "type": "null"},
-      {"name": "arrayField", "type": {"type": "array", "items": "double"}},
-      {"name": "mapField", "type":
-       {"type": "map", "values":
-        {"type": "record", "name": "Foo",
-         "fields": [{"name": "label", "type": "string"}]}}},
-      {"name": "unionField", "type":
-       ["boolean", "double", {"type": "array", "items": "bytes"}]},
-      {"name": "enumField", "type":
-       {"type": "enum", "name": "Kind", "symbols": ["A","B","C"]}},
-      {"name": "fixedField", "type":
-       {"type": "fixed", "name": "MD5", "size": 16}},
-      {"name": "recordField", "type":
-       {"type": "record", "name": "Node",
-        "fields": [
-            {"name": "label", "type": "string"},
-            {"name": "children", "type": {"type": "array", "items": "Node"}}]}}
-  ]
-}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/long_full b/branch-1.3/lang/c/tests/schema_tests/pass/long_full
deleted file mode 100644
index ccfd917..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/long_full
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"long"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/map b/branch-1.3/lang/c/tests/schema_tests/pass/map
deleted file mode 100644
index 436d961..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/map
+++ /dev/null
@@ -1 +0,0 @@
-{"type" : "map", "values": "long"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/null_full b/branch-1.3/lang/c/tests/schema_tests/pass/null_full
deleted file mode 100644
index cae8767..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/null_full
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"null"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/record b/branch-1.3/lang/c/tests/schema_tests/pass/record
deleted file mode 100644
index 43ac456..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/record
+++ /dev/null
@@ -1,5 +0,0 @@
-{"name": "person",
- "type": "record",
- "fields": [ {"name": "height", "type": "long"},
-             {"name": "weight", "type": "long"},
-             {"name": "name", "type": "string"}]}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/record_fields_with_defaults b/branch-1.3/lang/c/tests/schema_tests/pass/record_fields_with_defaults
deleted file mode 100644
index 545ccbb..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/record_fields_with_defaults
+++ /dev/null
@@ -1,6 +0,0 @@
-{"name": "person",
- "type": "record",
- "fields": [ {"name": "height", "type": "long"},
-             {"name": "weight", "type": "long"},
-             {"name": "name", "type": "string"},
-             {"name": "hacker", "type": "boolean", "default": false}]}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/recursive_record b/branch-1.3/lang/c/tests/schema_tests/pass/recursive_record
deleted file mode 100644
index 0967bb4..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/recursive_record
+++ /dev/null
@@ -1,7 +0,0 @@
-{ "type": "record",
-  "name": "recursive",
-  "fields": [
-    { "name": "label", "type": "string" },
-    { "name": "children", "type": {"type": "array", "items": "recursive"} }
-  ]
-}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/string_extra_attributes b/branch-1.3/lang/c/tests/schema_tests/pass/string_extra_attributes
deleted file mode 100644
index 49885b9..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/string_extra_attributes
+++ /dev/null
@@ -1 +0,0 @@
-{"type":"string", "ignored": "value"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/string_full b/branch-1.3/lang/c/tests/schema_tests/pass/string_full
deleted file mode 100644
index 5566b9f..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/string_full
+++ /dev/null
@@ -1 +0,0 @@
-{"type": "string"}
diff --git a/branch-1.3/lang/c/tests/schema_tests/pass/union b/branch-1.3/lang/c/tests/schema_tests/pass/union
deleted file mode 100644
index ef2b6ec..0000000
--- a/branch-1.3/lang/c/tests/schema_tests/pass/union
+++ /dev/null
@@ -1 +0,0 @@
-["string", "long", "null"]
diff --git a/branch-1.3/lang/c/tests/test_avro_data.c b/branch-1.3/lang/c/tests/test_avro_data.c
deleted file mode 100644
index 1aed8ae..0000000
--- a/branch-1.3/lang/c/tests/test_avro_data.c
+++ /dev/null
@@ -1,356 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <inttypes.h>
-#include <limits.h>
-#include <stdlib.h>
-#include <string.h>
-#include <time.h>
-
-char buf[4096];
-avro_reader_t reader;
-avro_writer_t writer;
-
-typedef int (*avro_test) (void);
-
-void init_rand(void)
-{
-	srand(time(NULL));
-}
-
-double rand_number(double from, double to)
-{
-	double range = to - from;
-	return from + ((double)rand() / (RAND_MAX + 1.0)) * range;
-}
-
-int64_t rand_int64(void)
-{
-	return (int64_t) rand_number(LONG_MIN, LONG_MAX);
-}
-
-int32_t rand_int32(void)
-{
-	return (int32_t) rand_number(INT_MIN, INT_MAX);
-}
-
-void
-write_read_check(avro_schema_t writers_schema,
-		 avro_schema_t readers_schema, avro_datum_t datum, char *type)
-{
-	avro_datum_t datum_out;
-	int validate;
-
-	for (validate = 0; validate <= 1; validate++) {
-
-		reader = avro_reader_memory(buf, sizeof(buf));
-		writer = avro_writer_memory(buf, sizeof(buf));
-
-		/* Validating read/write */
-		if (avro_write_data
-		    (writer, validate ? writers_schema : NULL, datum)) {
-			fprintf(stderr, "Unable to write %s validate=%d\n",
-				type, validate);
-			exit(EXIT_FAILURE);
-		}
-		int64_t size =
-		    avro_size_data(writer, validate ? writers_schema : NULL,
-				   datum);
-		if (size != avro_writer_tell(writer)) {
-			fprintf(stderr,
-				"Unable to calculate size %s validate=%d (%"PRId64" != %"PRId64")\n",
-				type, validate, size, avro_writer_tell(writer));
-			exit(EXIT_FAILURE);
-		}
-		if (avro_read_data
-		    (reader, writers_schema, readers_schema, &datum_out)) {
-			fprintf(stderr, "Unable to read %s validate=%d\n", type,
-				validate);
-			exit(EXIT_FAILURE);
-		}
-		if (!avro_datum_equal(datum, datum_out)) {
-			fprintf(stderr,
-				"Unable to encode/decode %s validate=%d\n",
-				type, validate);
-			exit(EXIT_FAILURE);
-		}
-
-		avro_reader_dump(reader, stderr);
-		avro_datum_decref(datum_out);
-		avro_reader_free(reader);
-		avro_writer_free(writer);
-	}
-}
-
-static int test_string(void)
-{
-	unsigned int i;
-	const char *strings[] = { "Four score and seven years ago",
-		"our father brought forth on this continent",
-		"a new nation", "conceived in Liberty",
-		"and dedicated to the proposition that all men are created equal."
-	};
-	avro_schema_t writer_schema = avro_schema_string();
-	for (i = 0; i < sizeof(strings) / sizeof(strings[0]); i++) {
-		avro_datum_t datum = avro_wrapstring(strings[i]);
-		write_read_check(writer_schema, NULL, datum, "string");
-		avro_datum_decref(datum);
-	}
-	avro_schema_decref(writer_schema);
-	return 0;
-}
-
-static int test_bytes(void)
-{
-	char bytes[] = { 0xDE, 0xAD, 0xBE, 0xEF };
-	avro_schema_t writer_schema = avro_schema_bytes();
-	avro_datum_t datum = avro_wrapbytes(bytes, sizeof(bytes));
-
-	write_read_check(writer_schema, NULL, datum, "bytes");
-	avro_datum_decref(datum);
-	avro_schema_decref(writer_schema);
-	return 0;
-}
-
-static int test_int32(void)
-{
-	int i;
-	avro_schema_t writer_schema = avro_schema_int();
-	for (i = 0; i < 100; i++) {
-		avro_datum_t datum = avro_int32(rand_int32());
-		write_read_check(writer_schema, NULL, datum, "int");
-		avro_datum_decref(datum);
-	}
-	avro_schema_decref(writer_schema);
-	return 0;
-}
-
-static int test_int64(void)
-{
-	int i;
-	avro_schema_t writer_schema = avro_schema_long();
-	for (i = 0; i < 100; i++) {
-		avro_datum_t datum = avro_int64(rand_int64());
-		write_read_check(writer_schema, NULL, datum, "long");
-		avro_datum_decref(datum);
-	}
-	avro_schema_decref(writer_schema);
-	return 0;
-}
-
-static int test_double(void)
-{
-	int i;
-	avro_schema_t schema = avro_schema_double();
-	for (i = 0; i < 100; i++) {
-		avro_datum_t datum = avro_double(rand_number(-1.0E10, 1.0E10));
-		write_read_check(schema, NULL, datum, "double");
-		avro_datum_decref(datum);
-	}
-	avro_schema_decref(schema);
-	return 0;
-}
-
-static int test_float(void)
-{
-	int i;
-	avro_schema_t schema = avro_schema_float();
-	for (i = 0; i < 100; i++) {
-		avro_datum_t datum = avro_float((float)rand_number(-1.0E10, 1.0E10));
-		write_read_check(schema, NULL, datum, "float");
-		avro_datum_decref(datum);
-	}
-	avro_schema_decref(schema);
-	return 0;
-}
-
-static int test_boolean(void)
-{
-	int i;
-	avro_schema_t schema = avro_schema_boolean();
-	for (i = 0; i <= 1; i++) {
-		avro_datum_t datum = avro_boolean(i);
-		write_read_check(schema, NULL, datum, "boolean");
-		avro_datum_decref(datum);
-	}
-	avro_schema_decref(schema);
-	return 0;
-}
-
-static int test_null(void)
-{
-	avro_schema_t schema = avro_schema_null();
-	avro_datum_t datum = avro_null();
-	write_read_check(schema, NULL, datum, "null");
-	avro_datum_decref(datum);
-	return 0;
-}
-
-static int test_record(void)
-{
-	avro_schema_t schema = avro_schema_record("person", NULL);
-	avro_datum_t datum = avro_record("person", NULL);
-	avro_datum_t name_datum, age_datum;
-
-	avro_schema_record_field_append(schema, "name", avro_schema_string());
-	avro_schema_record_field_append(schema, "age", avro_schema_int());
-
-	name_datum = avro_wrapstring("Joseph Campbell");
-	age_datum = avro_int32(83);
-
-	avro_record_set(datum, "name", name_datum);
-	avro_record_set(datum, "age", age_datum);
-
-	write_read_check(schema, NULL, datum, "record");
-
-	avro_datum_decref(name_datum);
-	avro_datum_decref(age_datum);
-	avro_datum_decref(datum);
-	avro_schema_decref(schema);
-	return 0;
-}
-
-static int test_enum(void)
-{
-	enum avro_languages {
-		AVRO_C,
-		AVRO_CPP,
-		AVRO_PYTHON,
-		AVRO_RUBY,
-		AVRO_JAVA
-	};
-	avro_schema_t schema = avro_schema_enum("language");
-	avro_datum_t datum = avro_enum("language", AVRO_C);
-
-	avro_schema_enum_symbol_append(schema, "C");
-	avro_schema_enum_symbol_append(schema, "C++");
-	avro_schema_enum_symbol_append(schema, "Python");
-	avro_schema_enum_symbol_append(schema, "Ruby");
-	avro_schema_enum_symbol_append(schema, "Java");
-
-	write_read_check(schema, NULL, datum, "enum");
-	avro_datum_decref(datum);
-	avro_schema_decref(schema);
-	return 0;
-}
-
-static int test_array(void)
-{
-	int i, rval;
-	avro_schema_t schema = avro_schema_array(avro_schema_int());
-	avro_datum_t datum = avro_array();
-
-	for (i = 0; i < 10; i++) {
-		avro_datum_t i32_datum = avro_int32(i);
-		rval = avro_array_append_datum(datum, i32_datum);
-		avro_datum_decref(i32_datum);
-		if (rval) {
-			exit(EXIT_FAILURE);
-		}
-	}
-
-	write_read_check(schema, NULL, datum, "array");
-	avro_datum_decref(datum);
-	avro_schema_decref(schema);
-	return 0;
-}
-
-static int test_map(void)
-{
-	avro_schema_t schema = avro_schema_map(avro_schema_long());
-	avro_datum_t datum = avro_map();
-	int64_t i = 0;
-	char *nums[] =
-	    { "zero", "one", "two", "three", "four", "five", "six", NULL };
-	while (nums[i]) {
-		avro_datum_t i_datum = avro_int64(i);
-		avro_map_set(datum, nums[i], i_datum);
-		avro_datum_decref(i_datum);
-		i++;
-	}
-	write_read_check(schema, NULL, datum, "map");
-	avro_datum_decref(datum);
-	avro_schema_decref(schema);
-	return 0;
-}
-
-static int test_union(void)
-{
-	avro_schema_t schema = avro_schema_union();
-	avro_datum_t union_datum;
-	avro_datum_t datum;
-
-	avro_schema_union_append(schema, avro_schema_string());
-	avro_schema_union_append(schema, avro_schema_int());
-	avro_schema_union_append(schema, avro_schema_null());
-
-	datum = avro_wrapstring("Follow your bliss.");
-	union_datum = avro_union(0, datum);
-
-	write_read_check(schema, NULL, union_datum, "union");
-	avro_datum_decref(union_datum);
-	avro_datum_decref(datum);
-	avro_schema_decref(schema);
-	return 0;
-}
-
-static int test_fixed(void)
-{
-	char bytes[] = { 0xD, 0xA, 0xD, 0xA, 0xB, 0xA, 0xB, 0xA };
-	avro_schema_t schema = avro_schema_fixed("msg", sizeof(bytes));
-	avro_datum_t datum = avro_wrapfixed("msg", bytes, sizeof(bytes));
-	write_read_check(schema, NULL, datum, "fixed");
-	avro_datum_decref(datum);
-	avro_schema_decref(schema);
-	return 0;
-}
-
-int main(void)
-{
-	unsigned int i;
-	struct avro_tests {
-		char *name;
-		avro_test func;
-	} tests[] = {
-		{
-		"string", test_string}, {
-		"bytes", test_bytes}, {
-		"int", test_int32}, {
-		"long", test_int64}, {
-		"float", test_float}, {
-		"double", test_double}, {
-		"boolean", test_boolean}, {
-		"null", test_null}, {
-		"record", test_record}, {
-		"enum", test_enum}, {
-		"array", test_array}, {
-		"map", test_map}, {
-		"fixed", test_fixed}, {
-		"union", test_union}
-	};
-
-	init_rand();
-	for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
-		struct avro_tests *test = tests + i;
-		fprintf(stderr, "**** Running %s tests ****\n", test->name);
-		if (test->func() != 0) {
-			return EXIT_FAILURE;
-		}
-	}
-	return EXIT_SUCCESS;
-}
diff --git a/branch-1.3/lang/c/tests/test_avro_schema.c b/branch-1.3/lang/c/tests/test_avro_schema.c
deleted file mode 100644
index bbf1333..0000000
--- a/branch-1.3/lang/c/tests/test_avro_schema.c
+++ /dev/null
@@ -1,132 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0 
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- * implied.  See the License for the specific language governing
- * permissions and limitations under the License. 
- */
-
-#include "avro_private.h"
-#include <stdio.h>
-#include <stdlib.h>
-#include <sys/types.h>
-#include <dirent.h>
-
-int test_cases = 0;
-avro_writer_t avro_stderr;
-
-static void run_tests(char *dirpath, int should_pass)
-{
-	char jsontext[4096];
-	size_t jsonlen, rval;
-	char filepath[1024];
-	DIR *dir;
-	struct dirent *dent;
-	FILE *fp;
-	avro_schema_t schema;
-	avro_schema_error_t avro_schema_error;
-
-	dir = opendir(dirpath);
-	if (dir == NULL) {
-		fprintf(stderr, "Unable to open '%s'\n", dirpath);
-		exit(EXIT_FAILURE);
-	}
-	do {
-		dent = readdir(dir);
-		if (dent && dent->d_name[0] != '.') {
-			int test_rval;
-			snprintf(filepath, sizeof(filepath), "%s/%s", dirpath,
-				 dent->d_name);
-			fprintf(stderr, "TEST %s...", filepath);
-			jsonlen = 0;
-			fp = fopen(filepath, "r");
-			if (!fp) {
-				fprintf(stderr, "can't open!\n");
-				exit(EXIT_FAILURE);
-			}
-			rval = fread(jsontext, 1, sizeof(jsontext) - 1, fp);
-			jsontext[rval] = '\0';
-			test_rval =
-			    avro_schema_from_json(jsontext, jsonlen, &schema,
-						  &avro_schema_error);
-			test_cases++;
-			if (test_rval == 0) {
-				if (should_pass) {
-					avro_schema_t schema_copy =
-					    avro_schema_copy(schema);
-					fprintf(stderr, "pass\n");
-					avro_schema_to_json(schema,
-							    avro_stderr);
-					fprintf(stderr, "\n");
-					if (!avro_schema_equal
-					    (schema, schema_copy)) {
-						fprintf(stderr,
-							"failed to avro_schema_equal(schema,avro_schema_copy())\n");
-						exit(EXIT_FAILURE);
-					}
-					avro_schema_decref(schema_copy);
-					avro_schema_decref(schema);
-				} else {
-					/*
-					 * Unexpected success 
-					 */
-					fprintf(stderr,
-						"fail! (shouldn't succeed but did)\n");
-					exit(EXIT_FAILURE);
-				}
-			} else {
-				if (should_pass) {
-					fprintf(stderr,
-						"fail! (should have succeeded but didn't)\n");
-					exit(EXIT_FAILURE);
-				} else {
-					fprintf(stderr, "pass\n");
-				}
-			}
-		}
-	}
-	while (dent != NULL);
-}
-
-int main(int argc, char *argv[])
-{
-	char *srcdir = getenv("srcdir");
-	char path[1024];
-
-	AVRO_UNUSED(argc);
-	AVRO_UNUSED(argv);
-
-	if (!srcdir) {
-		srcdir = ".";
-	}
-
-	avro_stderr = avro_writer_file(stderr);
-
-	/*
-	 * Run the tests that should pass 
-	 */
-	snprintf(path, sizeof(path), "%s/schema_tests/pass", srcdir);
-	fprintf(stderr, "RUNNING %s\n", path);
-	run_tests(path, 1);
-	snprintf(path, sizeof(path), "%s/schema_tests/fail", srcdir);
-	fprintf(stderr, "RUNNING %s\n", path);
-	run_tests(path, 0);
-
-	fprintf(stderr, "==================================================\n");
-	fprintf(stderr,
-		"Finished running %d schema test cases successfully \n",
-		test_cases);
-	fprintf(stderr, "==================================================\n");
-
-	avro_writer_free(avro_stderr);
-	return EXIT_SUCCESS;
-}
diff --git a/branch-1.3/lang/c/tests/test_cpp.cpp b/branch-1.3/lang/c/tests/test_cpp.cpp
deleted file mode 100644
index 5c2fe50..0000000
--- a/branch-1.3/lang/c/tests/test_cpp.cpp
+++ /dev/null
@@ -1,10 +0,0 @@
-#include "avro.h"
-
-int main(int argc, char **argv)
-{
-    (void)argc;
-    (void)argv;
-
-	return 0;
-}
-
diff --git a/branch-1.3/lang/c/tests/test_interop_data.c b/branch-1.3/lang/c/tests/test_interop_data.c
deleted file mode 100644
index d7bd75a..0000000
--- a/branch-1.3/lang/c/tests/test_interop_data.c
+++ /dev/null
@@ -1,8 +0,0 @@
-#include "avro_private.h"
-#include <stdio.h>
-#include <unistd.h>
-
-int main(void)
-{
-	return 0;
-}
diff --git a/branch-1.3/lang/c/tests/test_valgrind b/branch-1.3/lang/c/tests/test_valgrind
deleted file mode 100755
index d2b1f44..0000000
--- a/branch-1.3/lang/c/tests/test_valgrind
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/sh
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to you under the Apache License, Version 2.0 
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.  See the License for the specific language governing
-# permissions and limitations under the License. 
-set +e
-set -x
-
-if ! which valgrind; then
-	echo "Unable to find valgrind installed. Test will not run."
-	# This special exit value will show that we skipped this test
-	exit 77
-fi
-
-LD_LIBRARY_PATH="../src/.libs/" valgrind --leak-check=full --show-reachable=yes -q .libs/test_avro_data 2>&1 |\
-grep -E '^==[0-9]+== '
-if [ $? -eq 0 ]; then
-	# Expression found. Test failed.
-	exit 1
-else
-	# We're all clean
-	exit 0
-fi
diff --git a/branch-1.3/lang/c/version.sh b/branch-1.3/lang/c/version.sh
deleted file mode 100755
index fe6c5fc..0000000
--- a/branch-1.3/lang/c/version.sh
+++ /dev/null
@@ -1,58 +0,0 @@
-#!/bin/bash
-#
-# This script is used to generate version numbers for autotools
-#
-# The top-level main version is collected from the top-level build.xml
-#
-# The information for libtool is maintained manually since
-# the public API for the C library can change independent of the project
-#
-# Do each of these steps in order and libtool will do the right thing
-# (1) If there are changes to libavro:
-#         libavro_micro_version++
-#         libavro_interface_age++ 
-#         libavro_binary_age++
-# (2) If any functions have been added:
-#         libavro_interface_age = 0
-# (3) If backwards compatibility has been broken:
-#         libavro_binary_age = 0
-#         libavro_interface_age = 0
-#
-libavro_micro_version=21
-libavro_interface_age=0
-libavro_binary_age=0
-
-# IGNORE EVERYTHING ELSE FROM HERE DOWN.........
-if test $# != 1; then
-	echo "USAGE: $0 CMD"
-  	echo "  where CMD is one of: project, libtool, libcurrent, librevision, libage"
-	exit 1
-fi
-
-# http://sources.redhat.com/autobook/autobook/autobook_91.html
-# 'Current' is the most recent interface number that this library implements
-libcurrent=$(($libavro_micro_version - $libavro_interface_age))
-# The implementation number of the 'current' interface
-librevision=$libavro_interface_age
-# The difference between the newest and oldest interfaces that this library implements
-# In other words, the library implements all the interface numbers in the range from 
-# number 'current - age' to current
-libage=$(($libavro_binary_age - $libavro_interface_age))
-
-if test "$1" = "project"; then
-	project_ver="undef"
-        version_file="../../share/VERSION.txt"
-	if test -f $version_file; then
-		project_ver=$(cat $version_file)
-	fi
-	printf "%s" $project_ver
-elif test "$1" = "libtool"; then
-	# useful for the -version-info flag for libtool
-	printf "%d:%d:%d" $libcurrent $librevision $libage
-elif test "$1" = "libcurrent"; then
-	printf "%d" $libcurrent
-elif test "$1" = "librevision"; then
-	printf "%d" $librevision
-elif test "$1" = "libage"; then
-	printf "%d" $libage
-fi
diff --git a/branch-1.3/lang/java/.eclipse_launchers/AllTests.launch b/branch-1.3/lang/java/.eclipse_launchers/AllTests.launch
deleted file mode 100644
index f66750c..0000000
--- a/branch-1.3/lang/java/.eclipse_launchers/AllTests.launch
+++ /dev/null
@@ -1,36 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<launchConfiguration type="org.eclipse.jdt.junit.launchconfig">
-<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_PATHS">
-<listEntry value="/avro"/>
-</listAttribute>
-<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_TYPES">
-<listEntry value="4"/>
-</listAttribute>
-<listAttribute key="org.eclipse.debug.ui.favoriteGroups">
-<listEntry value="org.eclipse.debug.ui.launchGroup.debug"/>
-<listEntry value="org.eclipse.debug.ui.launchGroup.run"/>
-</listAttribute>
-<stringAttribute key="org.eclipse.jdt.junit.CONTAINER" value="=avro"/>
-<booleanAttribute key="org.eclipse.jdt.junit.KEEPRUNNING_ATTR" value="false"/>
-<stringAttribute key="org.eclipse.jdt.junit.TESTNAME" value=""/>
-<stringAttribute key="org.eclipse.jdt.junit.TEST_KIND" value="org.eclipse.jdt.junit.loader.junit4"/>
-<stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value=""/>
-<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="avro"/>
-<stringAttribute key="org.eclipse.jdt.launching.VM_ARGUMENTS" value="-Dtest.dir=${workspace_loc:avro}/build/test -Dtest.genavro.dir=${workspace_loc:avro}/src/test/genavro"/>
-</launchConfiguration>
diff --git a/branch-1.3/lang/java/.gitignore b/branch-1.3/lang/java/.gitignore
deleted file mode 100644
index 92c171c..0000000
--- a/branch-1.3/lang/java/.gitignore
+++ /dev/null
@@ -1,19 +0,0 @@
-#   Licensed to the Apache Software Foundation (ASF) under one or more
-#   contributor license agreements.  See the NOTICE file distributed with
-#   this work for additional information regarding copyright ownership.
-#   The ASF licenses this file to You under the Apache License, Version 2.0
-#   (the "License"); you may not use this file except in compliance with
-#   the License.  You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License.
-
-/build
-/.classpath
-/.eclipse
-/.project
diff --git a/branch-1.3/lang/java/build.xml b/branch-1.3/lang/java/build.xml
deleted file mode 100644
index 32c0c1b..0000000
--- a/branch-1.3/lang/java/build.xml
+++ /dev/null
@@ -1,616 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<project name="Avro" default="compile"
-    xmlns:ivy="antlib:org.apache.ivy.ant"
-    xmlns:mvn="urn:maven-artifact-ant">
-
-  <!-- Load user's default properties. -->
-  <property file="${user.home}/build.properties" />
-
-  <property name="Org" value="Apache"/>
-  <property name="org" value="apache"/>
-  <property name="Name" value="Avro"/>
-  <property name="name" value="avro"/>
-
-  <loadresource property="version">
-    <file file="${basedir}/../../share/VERSION.txt"/>
-  </loadresource>
-  <property name="fullname" value="${name}-${version}"/>
-  <property name="year" value="2010"/>
-
-  <property name="share.dir" value="${basedir}/../../share"/>  	
-  <property name="top.build" value="${basedir}/../../build"/>  	
-  <property name="dist.dir" value="${basedir}/../../dist/java"/>
-
-  <property name="src.dir" value="${basedir}/src"/>  	
-  <property name="java.src.dir" value="${src.dir}/java"/>  	
-  <property name="build.dir" value="${basedir}/build"/>
-  <property name="lib.dir" value="${basedir}/lib"/>
-
-  <property name="build.classes" value="${build.dir}/classes"/>
-  <property name="build.doc" value="${build.dir}/doc"/>
-  <property name="build.javadoc" value="${build.doc}/api/java"/>
-  <property name="build.javadoc.log" value="${build.dir}/javadoc.log"/>
-
-  <property name="test.count" value="100"/>
-  <property name="test.junit.output.format" value="plain"/>
-  <property name="test.java.src.dir" value="${basedir}/src/test/java"/>
-  <property name="test.schemata.dir" value="${share.dir}/test/schemas"/>
-  <property name="test.genavro.dir" value="${basedir}/src/test/genavro"/>
-  <property name="test.genavro.mode" value="run" />
-  <property name="test.java.build.dir" value="${build.dir}/test"/>
-  <property name="test.java.generated.build.dir" value="${test.java.build.dir}/generated"/>
-  <property name="test.java.generated.dir" value="${test.java.generated.build.dir}/src"/>
-  <property name="test.java.classes" value="${test.java.build.dir}/classes"/>
-  <property name="test.java.generated.classes" value="${test.java.generated.build.dir}/classes"/>
-  <property name="test.java.include" value="Test*"/>
-
-  <property name="javadoc.link.java"
-	    value="http://java.sun.com/javase/6/docs/api/"/>
-  <property name="javadoc.link.jackson"
-	    value="http://jackson.codehaus.org/0.9.3/javadoc/"/>
-  <property name="javadoc.link.servlet"
-	    value="http://java.sun.com/products/servlet/2.3/javadoc/"/>
-  <property name="javadoc.packages" value="org.${org}.${name}.*"/>
-
-  <property name="javac.encoding" value="ISO-8859-1"/>
-  <property name="javac.debug" value="on"/>
-  <property name="javac.optimize" value="on"/>
-  <property name="javac.deprecation" value="off"/>
-  <property name="javac.version" value="1.6"/>
-  <property name="javac.args" value=""/>
-  <property name="javac.args.warnings" value="-Xlint:unchecked"/>
-
-  <property name="ivy.version" value="2.1.0"/>
-  <property name="ivy.url"
-	    value="http://repo2.maven.org/maven2/org/apache/ivy/ivy" />
-  <property name="ivy.home" value="${user.home}/.ant" />
-  <property name="ivy.lib" value="${build.dir}/lib"/>
-  <property name="ivy.test.lib" value="${build.dir}/test/lib"/>
-  <property name="mvn.repo"
-	    value="https://repository.apache.org/content/repositories/snapshots"/>
-
-  <!-- Eclipse properties -->
-  <property name="build.dir.eclipse" value=".eclipse"/>
-  <property name="build.dir.eclipse-main-classes" value="${build.dir.eclipse}/classes-main"/>
-  <property name="build.dir.eclipse-main-generated-classes" value="${build.dir.eclipse}/classes-main-generated"/>
-  <property name="build.dir.eclipse-test-classes" value="${build.dir.eclipse}/classes-test"/>
-
-  <!-- the normal classpath -->
-  <path id="libs">
-    <fileset dir="${ivy.lib}">
-      <include name="**/*.jar" />
-    </fileset>
-  </path>
-
-  <path id="java.classpath">
-    <pathelement location="${build.classes}"/>
-    <fileset dir="${lib.dir}">
-      <include name="**/*.jar" />
-      <exclude name="**/excluded/" />
-    </fileset>
-    <fileset dir="${ant.home}/lib">
-      <include name="ant.jar" />
-    </fileset>
-    <path refid="libs" />
-  </path>
-
-  <path id="test.libs">
-    <fileset dir="${ivy.test.lib}">
-      <include name="**/*.jar" />
-    </fileset>
-  </path>
-
-  <path id="test.java.classpath">
-    <pathelement location="${test.java.classes}" />
-    <pathelement location="${test.java.generated.classes}" />
-    <path refid="java.classpath"/>
-    <path refid="test.libs"/>
-  </path>
-
-  <target name="init">
-    <mkdir dir="${build.dir}"/>
-    <mkdir dir="${build.classes}"/>
- 
-    <mkdir dir="${test.java.build.dir}"/>
-    <mkdir dir="${test.java.classes}"/>
-    <mkdir dir="${test.java.generated.classes}"/>
-
-    <mkdir dir="${ivy.lib}"/>
-    <mkdir dir="${ivy.test.lib}"/>
-    <condition property="ivy.jar.exists">
-      <available file="${lib.dir}/ivy-${ivy.version}.jar"/>
-    </condition>
-  </target>
-
-  <target name="javacc" depends="ivy-retrieve-build">
-    <mkdir dir="${build.dir}/src/org/apache/avro/genavro"/>
-    <copy tofile="${ivy.lib}/javacc.jar" file="${ivy.lib}/javacc-5.0.jar"
-        overwrite="true"/>
-    <javacc target="${java.src.dir}/org/apache/avro/genavro/genavro.jj"
-            outputdirectory="${build.dir}/src/org/apache/avro/genavro"
-            javacchome="${ivy.lib}"/>
-  </target>
-
-  <target name="compile" depends="javacc,ivy-retrieve">
-    <java-compiler
-       excludes="**/ipc/** **/*Requestor.java **/*Responder.java **/tool/**">
-      <src path="${build.dir}/src"/>
-      <src path="${java.src.dir}"/>
-    </java-compiler>
-    <java-avro-compiler/>
-    <java-compiler>
-      <src path="${build.dir}/src"/>
-      <src path="${java.src.dir}"/>
-    </java-compiler>
-    <copy todir="${build.classes}"> 
-      <fileset file="${basedir}/../../NOTICE.txt"/>
-      <fileset file="${basedir}/../../share/VERSION.txt"/>
-    </copy>
-  </target>
-
-  <target name="ivy-download" unless="ivy.jar.exists" depends="init">
-    <delete dir="${lib.dir}"
-	    includes="ivy-*.jar" excludes="ivy-${ivy.version}.jar"/>
-    <get src="${ivy.url}/${ivy.version}/ivy-${ivy.version}.jar"
-         dest="${lib.dir}/ivy-${ivy.version}.jar" usetimestamp="true"/>
-  </target>
-
-  <target name="ivy-init" depends="ivy-download" unless="ivy.initialized">
-    <taskdef resource="org/apache/ivy/ant/antlib.xml"
-             uri="antlib:org.apache.ivy.ant" classpathref="java.classpath"/>
-    <!-- ensure that ivy taskdef is only run once, otw ant will error -->
-    <property name="ivy.initialized" value="true"/>
-  </target>
-
-  <target name="ivy-retrieve" depends="init,ivy-init">
-    <ivy:retrieve type="jar" conf="default"
-		  pattern="${ivy.lib}/[artifact]-[revision].[ext]"/>
-  </target>
-
-  <target name="ivy-retrieve-tools" depends="init,ivy-init">
-    <!-- Place in separate directory, since these artificats will
-         be packaged in the tools jar. -->
-    <ivy:retrieve type="jar" conf="tools"
-		  pattern="${ivy.lib}/tools/[artifact]-[revision].[ext]"/>
-  </target>
-
-  <target name="ivy-retrieve-test" depends="init,ivy-init">
-    <ivy:retrieve type="jar" conf="test"
-		  pattern="${ivy.test.lib}/[artifact]-[revision].[ext]"/>
-  </target>
-
-  <target name="ivy-retrieve-build" depends="init,ivy-init">
-    <ivy:retrieve type="jar" conf="build"
-		  pattern="${ivy.lib}/[artifact]-[revision].[ext]"/>
-  </target>
-
-  <macrodef name="java-compiler">
-    <attribute name="dest" default="${build.classes}"/>
-    <attribute name="includes" default="**/*.java"/>
-    <attribute name="excludes" default=""/>
-    <attribute name="classpath" default="java.classpath"/>
-    <element name="src" implicit="yes"/>
-    <sequential>
-      <javac 
-	 destdir="@{dest}"
-         includes="@{includes}"
-         excludes="@{excludes}" 
-	 encoding="${javac.encoding}" 
-	 debug="${javac.debug}"
-	 optimize="${javac.optimize}"
-	 target="${javac.version}"
-	 source="${javac.version}"
-	 deprecation="${javac.deprecation}">
-	<compilerarg line="${javac.args} ${javac.args.warnings}" />
-	<classpath refid="@{classpath}"/>
-        <src />
-      </javac>
-    </sequential>
-  </macrodef>
-
-  <target name="jar" depends="compile" description="Build jar file.">
-    <jar jarfile="${build.dir}/${fullname}.jar"
-         basedir="${build.classes}">
-      <manifest>
-        <section name="org/${org}/${name}">
-          <attribute name="Implementation-Title" value="${Name}"/>
-          <attribute name="Implementation-Version" value="${version}"/>
-          <attribute name="Implementation-Vendor" value="${Org}"/>
-        </section>
-      </manifest>
-    </jar>
-  </target>
-  
-  <target name="checkstyle-java" depends="compile">
-    <taskdef resource="checkstyletask.properties">
-      <classpath refid="test.java.classpath" />
-    </taskdef>
-    <checkstyle config="${java.src.dir}/checkstyle.xml"
-		classpathref="test.java.classpath">
-	<fileset dir="${java.src.dir}">
-          <include name="**/*.java"/>
-        </fileset>
-	<fileset dir="${test.java.src.dir}">
-          <include name="**/*.java"/>
-        </fileset>
-    </checkstyle>
-  </target>
-
-  <target name="compile-test-java" depends="ivy-retrieve-test,ivy-retrieve-tools,compile">
-    <java-avro-compiler src="${test.schemata.dir}"
-			generated="${test.java.generated.dir}"
-			dest="${test.java.generated.classes}"
-			classpath="test.java.classpath"/>
-    <java-compiler dest="${test.java.classes}"
-		   classpath="test.java.classpath">
-      <src path="${test.java.src.dir}" />
-    </java-compiler>
-    <taskdef
-       name="paranamer" 
-       classname="com.thoughtworks.paranamer.ant.ParanamerGeneratorTask">
-      <classpath refid="java.classpath" />
-    </taskdef>
-    <paranamer sourceDirectory="${test.java.generated.dir}"
-	       outputDirectory="${test.java.generated.classes}"/>
-    <paranamer sourceDirectory="${test.java.src.dir}"
-	       outputDirectory="${test.java.classes}"/>
-  </target>
-
-  <macrodef name="java-avro-compiler">
-    <attribute name="src" default="${share.dir}/schemas"/>
-    <attribute name="generated" default="${build.dir}/src"/>
-    <attribute name="dest" default="${build.classes}"/>
-    <attribute name="classpath" default="java.classpath"/>
-
-    <sequential>
-      <taskdef name="protocol"
-	       classname="org.apache.avro.specific.ProtocolTask">
-	<classpath refid="java.classpath" />
-      </taskdef>
-      <taskdef name="schema" classname="org.apache.avro.specific.SchemaTask">
-	<classpath refid="java.classpath" />
-      </taskdef>
-
-      <mkdir dir="@{generated}"/>
-      
-      <protocol destdir="@{generated}">
-	<fileset dir="@{src}">
-	  <include name="**/*.avpr" />
-	</fileset>
-      </protocol>
-      
-      <schema destdir="@{generated}">
-	<fileset dir="@{src}">
-	  <include name="**/*.avsc" />
-	</fileset>
-      </schema>
-
-      <java-compiler dest="@{dest}"
-		     classpath="@{classpath}">
-        <src path="@{generated}" />
-      </java-compiler>
-    </sequential>
-  </macrodef>
-
-  <target name="test" depends="test-java,test-tools"/>
-
-  <target name="rat" depends="ivy-retrieve-test">
-    <typedef resource="org/apache/rat/anttasks/antlib.xml"
-             uri="antlib:org.apache.rat.anttasks">
-      <classpath refid="test.java.classpath"/>
-    </typedef>
-
-    <rat:report xmlns:rat="antlib:org.apache.rat.anttasks"
-		reportFile="${top.build}/rat-report.log">
-      <fileset dir="${top.build}/avro-src-${version}/"
-	       excludesfile="${share.dir}/rat-excludes.txt"/>
-    </rat:report>
-    <condition property="rat.passed">
-      <isfileselected file="${top.build}/rat-report.log">
-	<containsregexp expression="^0 Unknown Licenses"/>
-      </isfileselected>
-    </condition>
-    <fail unless="rat.passed">Unknown licenses: See build/rat-report.log.</fail>
-  </target>
-
-  <macrodef name="test-runner">
-    <attribute name="files.location" />
-    <attribute name="tests.pattern" />
-    <attribute name="test.dir" default="${test.java.build.dir}" />
-    <sequential>
-      <junit showoutput="yes"
-             printsummary="withOutAndErr"
-             haltonfailure="no"
-             fork="yes" forkMode="once"
-             errorProperty="tests.failed" failureProperty="tests.failed">
-        <sysproperty key="test.count" value="${test.count}"/>
-        <sysproperty key="test.dir" value="@{test.dir}"/>
-        <sysproperty key="share.dir" value="${share.dir}"/>
-        <sysproperty key="test.validate" value="${test.validate}"/>
-        <sysproperty key="test.genavro.dir" value="${test.genavro.dir}" />
-        <sysproperty key="test.genavro.mode" value="${test.genavro.mode}" />
-        <classpath refid="test.java.classpath"/>
-        <formatter type="${test.junit.output.format}"/>
-        <batchtest todir="${test.java.build.dir}" unless="testcase">
-          <fileset dir="@{files.location}"
-		   includes="@{tests.pattern}"
-		   excludes="**/${test.java.exclude}.java" />
-        </batchtest>
-        <batchtest todir="${test.java.build.dir}" if="testcase">
-          <fileset dir="@{files.location}" includes="**/${testcase}.java"/>
-        </batchtest>
-      </junit>
-      <fail if="tests.failed">Tests Failed!</fail>
-    </sequential>
-  </macrodef>
-
-  <target name="test-java" depends="unit-test-java,checkstyle-java,javadoc"
- 	  description="Run java tests" />
-
-  <target name="unit-test-java" depends="compile-test-java"
- 	  description="Run java unit tests">
-    <test-runner files.location="${test.java.src.dir}" tests.pattern="**/${test.java.include}.java"/>
-  </target>
-
-  <target name="interop-data-generate" depends="compile-test-java"
-	  description="Generate java interop data files.">
-    <mkdir dir="${top.build}/interop/data"/>
-    <java classname="org.apache.avro.RandomData"
-      classpathref="test.java.classpath">
-      <arg value="${share.dir}/test/schemas/interop.avsc"/>
-      <arg value="${top.build}/interop/data/java.avro"/>
-      <arg value="${test.count}"/>
-    </java>
-  </target>
-
-  <target name="interop-data-test"
-    description="Run java data file interoperability tests">
-    <test-runner files.location="${test.java.classes}"
-		 tests.pattern="**/TestDataFile$InteropTest.class"
-		 test.dir="${top.build}/interop/data" />
-  </target>
-
-  <target name="interop-rpc-start" depends="compile-test-java"
-    description="Start the daemons for rpc interoperability tests">
-    <delete dir="${share.dir}/test/interop/rpc/java"/>
-    <mkdir dir="${share.dir}/test/interop/rpc/java"/>
-    <!-- Start the servers. As servers block the ant main thread, these need 
-    to be created in parallel threads--> 
-    <parallel>
-      <daemons>
-        <java classname="org.apache.avro.TestProtocolSpecific$InteropTest">
-          <classpath refid="test.java.classpath"/>
-          <sysproperty key="test.dir"
-		       value="${share.dir}/test/interop/rpc/java"/>
-        </java>
-      </daemons>
-
-      <!-- Give some time to start -->
-      <sequential>
-        <sleep seconds="2"/>
-      </sequential>
-    </parallel>
-  </target>
-
-  <target name="interop-rpc-test" 
-    description="Run java rpc interoperability tests">
-    <test-runner files.location="${test.java.classes}" tests.pattern="**/TestProtocolSpecific$InteropTest.class" />
-  </target>
-
-  <target name="tools" depends="compile,ivy-retrieve-tools"
-	  description="Build standalone tools jar file">
-    <jar jarfile="${build.dir}/avro-tools-${version}.jar">
-      <manifest>
-        <attribute name="Main-Class" value="org.apache.avro.tool.Main"/>
-        <attribute name="Implementation-Title" value="${Name}"/>
-        <attribute name="Implementation-Version" value="${version}"/>
-        <attribute name="Implementation-Vendor" value="${Org}"/>
-      </manifest>
-      <fileset dir="${build.classes}" />
-      <zipgroupfileset dir="${ivy.lib}/tools" includes="*.jar"/>
-    </jar>
-    <chmod file="${build.dir}/avro-tools-${version}.jar" perm="ugo+x"/>
-  </target>
-
-  <target name="test-tools" depends="tools,compile-test-java"
-   description="Tests tools">
-    <exec executable="${basedir}/src/test/bin/test_tools.sh"
-          failonerror="true">
-      <env key="TOOLS" value="${build.dir}/avro-tools-${version}.jar"/>
-      <env key="TMPDIR" value="${test.java.build.dir}/tools"/>
-    </exec>
-  </target>
-
-  <target name="javadoc" depends="compile" description="Generate javadoc">
-    <mkdir dir="${build.javadoc}"/>
-    <record name="${build.javadoc.log}" action="start"/>
-    <javadoc
-      overview="${java.src.dir}/overview.html"
-      packagenames="org.${org}.${name}.*"
-      destdir="${build.javadoc}"
-      author="true"
-      version="true"
-      use="true"
-      windowtitle="${Name} ${version} API"
-      doctitle="${Name} ${version} API"
-      bottom="Copyright &amp;copy; ${year} The ${Org} Software Foundation"
-      >
-        <packageset dir="${java.src.dir}"/>
-	
-        <link href="${javadoc.link.java}"/>
-        <link href="${javadoc.link.jackson}"/>
-        <link href="${javadoc.link.servlet}"/>
-
-        <classpath >
-          <path refid="java.classpath" />
-        </classpath>
-
-    </javadoc>
-    <record name="${build.javadoc.log}" action="stop"/>
-    <condition property="javadoc.warnings">
-      <isfileselected file="${build.javadoc.log}">
-	<contains text=": warning - "/>
-      </isfileselected>
-    </condition>
-    <fail if="javadoc.warnings">Javadoc warnings!</fail>
-  </target>
-
-  <target name="source">
-    <jar jarfile="${build.dir}/${fullname}-sources.jar">
-      <fileset dir="${java.src.dir}" includes="**/*.java"/>
-    </jar>
-  </target>
-
-  <target name="javadoc-jar" depends="javadoc">
-    <jar jarfile="${build.dir}/${fullname}-javadoc.jar">
-      <fileset dir="${build.javadoc}" includes="**/*"/>
-    </jar>
-  </target>
-
-  <target name="pom" depends="ivy-init">
-    <ivy:makepom ivyfile="${basedir}/ivy.xml"
-		 pomfile="${dist.dir}/${fullname}.pom">
-      <mapping conf="default" scope="compile"/>
-      <mapping conf="test" scope="test"/>
-    </ivy:makepom>
-  </target>
-
-  <target name="dist" depends="jar, tools, javadoc, pom, source, javadoc-jar"
-	  description="Build distribution">
-    <mkdir dir="${dist.dir}"/>
-
-    <copy todir="${dist.dir}"> 
-      <fileset file="${build.dir}/${fullname}.jar"/>
-      <fileset file="${build.dir}/${fullname}-sources.jar"/>
-      <fileset file="${build.dir}/${fullname}-javadoc.jar"/>
-      <fileset file="${build.dir}/avro-tools-${version}.jar"/>
-    </copy>
-    <chmod file="${dist.dir}/avro-tools-${version}.jar" perm="ugo+x"/>
-
-    <copy todir="${top.build}/avro-doc-${version}/api/java">
-      <fileset dir="${build.javadoc}"/>
-    </copy>
-
-  </target>
-
-  <target name="clean" description="Delete build files, and their directories">
-    <delete dir="${build.dir}"/>
-  </target>
-
-  <!-- Begin Eclipse targets -->
-  <property name="ant-eclipse.dir" value="${build.dir}/ant-eclipse"/>
-  <property name="ant-eclipse.lib.dir" value="${ant-eclipse.dir}/lib"/>
-
-  <available file="${ant-eclipse.lib.dir}/ant-eclipse-1.0-jvm1.2.jar"
-        property="ant-eclipse.jar.exists"/>
-
-  <path id="ant-eclipse.classpath">
-    <fileset dir="${ant-eclipse.lib.dir}">
-      <include name="**/*.jar" />
-    </fileset>
-  </path>
-
-  <target name="ant-eclipse-retrieve" unless="ant-eclipse.jar.exists"
-          depends="ivy-init"
-          description="Retrieves the ant-eclipse binary.">
-    <mkdir dir="${ant-eclipse.lib.dir}"/>
-    <ivy:settings id="ivy.ant-eclipse.settings"
-        file="${basedir}/ivysettings-ant-eclipse.xml" />
-    <ivy:retrieve conf="eclipse"
-        settingsRef="ivy.ant-eclipse.settings"
-		pattern="${build.dir}/[module]-[revision].[ext]"/>
-
-    <untar src="${build.dir}/ant-eclipse-1.0.bz2"
-           dest="${ant-eclipse.dir}" compression="bzip2">
-      <patternset>
-        <include name="lib/ant-eclipse-1.0-jvm1.2.jar"/>
-      </patternset>
-    </untar>
-    <delete file="${build.dir}/ant-eclipse-1.0.bz2" />
-  </target>
-
-  <target name="eclipse" 
-          depends="compile-test-java,ant-eclipse-retrieve"
-          description="Create eclipse project files">
-    <taskdef name="eclipse"
-             classname="prantl.ant.eclipse.EclipseTask"
-             classpathref="ant-eclipse.classpath" />
-    <eclipse updatealways="true">
-      <project name="${ant.project.name}" />
-      <classpath>
-        <!-- Use Eclipse to compile Avro runtime -->
-        <source path="${java.src.dir}"
-                output="${build.dir.eclipse-main-classes}" />
-        <source path="${test.java.src.dir}"
-                output="${build.dir.eclipse-test-classes}" />
-        <!-- Even though these are generated, Eclipse can compile these -->
-        <source path="${build.dir}/src"
-                output="${build.dir.eclipse-main-generated-classes}" />
-        <output path="${build.dir.eclipse-main-classes}" />
-        <!-- Tests require that the generated code has paranamer applied to it,
-             so we use the ant build for that. -->
-        <library path="${test.java.generated.classes}" source="${test.java.generated.dir}"/>
-        <library pathref="libs" exported="true" />
-        <library pathref="test.libs" exported="false" />
-       </classpath>
-    </eclipse>
-  </target>
-
-  <target name="clean-eclipse" description="Clean eclipse files">
-    <delete file=".classpath" />
-    <delete file=".eclipse" />
-    <delete file=".project" />
-    <delete dir=".settings" />
-    <delete dir="${ant-eclipse.lib.dir}" />
-    <delete dir="${build.dir.eclipse}" />
-  </target>
-  <!-- End Eclipse targets -->
-
-  <target name="mvn-install" depends="jar,pom,source,javadoc-jar"
-	  description="Installs avro to local m2 cache">
-    <typedef resource="org/apache/maven/artifact/ant/antlib.xml"
-	     uri="urn:maven-artifact-ant"
-	     classpathref="java.classpath"/>
-    <mvn:pom file="${dist.dir}/${fullname}.pom" id="avro"/>
-    <mvn:install file="${build.dir}/${fullname}.jar">
-      <attach file="${build.dir}/${fullname}-sources.jar"
-           classifier="sources" />
-      <attach file="${build.dir}/${fullname}-javadoc.jar"
-           classifier="javadoc" />
-      <pom refid="avro"/>
-    </mvn:install>
-  </target>
-
-  <target name="mvn-deploy" depends="jar,pom,source,javadoc-jar"
-     description="Deploys Avro to Maven repo.">
-    <typedef resource="org/apache/maven/artifact/ant/antlib.xml"
-	     uri="urn:maven-artifact-ant"
-	     classpathref="java.classpath"/>
-    <mvn:pom file="${dist.dir}/${fullname}.pom" id="avro"/>
-    <mvn:deploy file="${build.dir}/${fullname}.jar">
-      <remoteRepository id="avro.mvn.repo" url="${mvn.repo}"/>
-      <attach file="${build.dir}/${fullname}-sources.jar"
-           classifier="sources" />
-      <attach file="${build.dir}/${fullname}-javadoc.jar"
-           classifier="javadoc" />
-      <pom refid="avro"/>
-    </mvn:deploy>
-  </target>
-
-</project>
diff --git a/branch-1.3/lang/java/ivy.xml b/branch-1.3/lang/java/ivy.xml
deleted file mode 100644
index a72c0a6..0000000
--- a/branch-1.3/lang/java/ivy.xml
+++ /dev/null
@@ -1,66 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<ivy-module version="2.0"
-            xmlns:e="http://ant.apache.org/ivy/extra">
-
-  <info organisation="org.apache.hadoop"
-    module="${name}" revision="${version}">
-    <license name="Apache 2.0"/>
-    <ivyauthor name="Apache Hadoop" url="http://hadoop.apache.org"/>
-    <description>Avro</description>
-  </info>
-
-  <configurations defaultconfmapping="default">
-    <conf name="default"/> <!-- "runtime" configuration -->
-    <conf name="test"/>
-    <conf name="eclipse"/>
-    <conf name="tools" extends="default"/>
-    <conf name="build" extends="default"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="org.codehaus.jackson" name="jackson-mapper-asl"
-        rev="1.4.2"/>
-    <dependency org="org.slf4j" name="slf4j-api" rev="1.5.11"/>
-    <dependency org="org.slf4j" name="slf4j-simple" rev="1.5.11"
-        conf="test->default;tools->default"/>
-    <dependency org="com.thoughtworks.paranamer" name="paranamer"
-        rev="2.2"/>
-    <dependency org="com.thoughtworks.paranamer" name="paranamer-ant"
-        rev="2.2"/>
-    <dependency org="org.mortbay.jetty" name="jetty"
-        rev="6.1.22"/>
-    <dependency org="junit" name="junit" rev="4.8.1" conf="test->default"/>
-    <dependency org="checkstyle" name="checkstyle" rev="5.0"
-        conf="test->default"/>
-    <dependency org="org.apache.rat" name="apache-rat" rev="0.6"
-        conf="test->default"/>
-    <dependency org="ant-eclipse" name="ant-eclipse" rev="1.0"
-        conf="eclipse->default">
-        <artifact name="ant-eclipse" type="bz2" />
-    </dependency>
-    <dependency org="net.java.dev.javacc" name="javacc" rev="5.0" 
-        conf="build->default" />
-    <dependency org="commons-lang" name="commons-lang" rev="2.5" />
-    <dependency org="org.apache.maven" name="maven-ant-tasks" rev="2.1.0"
-        conf="build->default"/>
-    <dependency org="net.sf.jopt-simple" name="jopt-simple" rev="3.2"
-        conf="build->default;test->default;tools->default"/>
-  </dependencies>
-
-</ivy-module>
diff --git a/branch-1.3/lang/java/ivysettings-ant-eclipse.xml b/branch-1.3/lang/java/ivysettings-ant-eclipse.xml
deleted file mode 100644
index 8ea34c2..0000000
--- a/branch-1.3/lang/java/ivysettings-ant-eclipse.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivysettings>
-    <settings defaultResolver="ant-eclipse"/>
-    <resolvers>
-        <url name="ant-eclipse">
-            <artifact pattern="http://downloads.sourceforge.net/project/[module]/[module]/[revision]/[module]-[revision].bin.tar.[ext]" />
-        </url>
-    </resolvers>
-</ivysettings>
diff --git a/branch-1.3/lang/java/lib/.gitignore b/branch-1.3/lang/java/lib/.gitignore
deleted file mode 100644
index 55f1e9b..0000000
--- a/branch-1.3/lang/java/lib/.gitignore
+++ /dev/null
@@ -1,16 +0,0 @@
-#   Licensed to the Apache Software Foundation (ASF) under one or more
-#   contributor license agreements.  See the NOTICE file distributed with
-#   this work for additional information regarding copyright ownership.
-#   The ASF licenses this file to You under the Apache License, Version 2.0
-#   (the "License"); you may not use this file except in compliance with
-#   the License.  You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License.
-
-/ivy-*.jar
diff --git a/branch-1.3/lang/java/lib/ant-1.6.5-LICENSE.txt b/branch-1.3/lang/java/lib/ant-1.6.5-LICENSE.txt
deleted file mode 100644
index d645695..0000000
--- a/branch-1.3/lang/java/lib/ant-1.6.5-LICENSE.txt
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/branch-1.3/lang/java/lib/asm-3.0-LICENSE.txt b/branch-1.3/lang/java/lib/asm-3.0-LICENSE.txt
deleted file mode 100644
index d0cd82d..0000000
--- a/branch-1.3/lang/java/lib/asm-3.0-LICENSE.txt
+++ /dev/null
@@ -1,28 +0,0 @@
-
- ASM: a very small and fast Java bytecode manipulation framework
- Copyright (c) 2000-2005 INRIA, France Telecom
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
- 1. Redistributions of source code must retain the above copyright
-    notice, this list of conditions and the following disclaimer.
- 2. Redistributions in binary form must reproduce the above copyright
-    notice, this list of conditions and the following disclaimer in the
-    documentation and/or other materials provided with the distribution.
- 3. Neither the name of the copyright holders nor the names of its
-    contributors may be used to endorse or promote products derived from
-    this software without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
- LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
- THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/branch-1.3/lang/java/lib/jackson-1.0.1-LICENSE.txt b/branch-1.3/lang/java/lib/jackson-1.0.1-LICENSE.txt
deleted file mode 100644
index d645695..0000000
--- a/branch-1.3/lang/java/lib/jackson-1.0.1-LICENSE.txt
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/branch-1.3/lang/java/lib/paranamer-1.5-LICENSE.txt b/branch-1.3/lang/java/lib/paranamer-1.5-LICENSE.txt
deleted file mode 100644
index 80d4325..0000000
--- a/branch-1.3/lang/java/lib/paranamer-1.5-LICENSE.txt
+++ /dev/null
@@ -1,38 +0,0 @@
-Copyright (c) 2006 Paul Hammant & ThoughtWorks, Inc
-All rights reserved.
-
-You may consider all of ParaNamer as public domain which is explained here (courtesy of Creative Commons):
-
-      Copyright-Only Dedication (based on United States law) or Public Domain Certification
-
-  The person or persons who have associated work with this document (the "Dedicator" or
-  "Certifier") hereby either (a) certifies that, to the best of his knowledge, the work of authorship
-  identified is in the public domain of the country from which the work is published, or (b)
-  hereby dedicates whatever copyright the dedicators holds in the work of authorship identified
-  below (the "Work") to the public domain. A certifier, moreover, dedicates any copyright interest
-  he may have in the associated work, and for these purposes, is described as a "dedicator" below.
-
-  A certifier has taken reasonable steps to verify the copyright status of this work. Certifier
-  recognizes that his good faith efforts may not shield him from liability if in fact the work
-  certified is not in the public domain.
-
-  Dedicator makes this dedication for the benefit of the public at large and to the detriment of
-  the Dedicator's heirs and successors. Dedicator intends this dedication to be an overt act of
-  relinquishment in perpetuity of all present and future rights under copyright law, whether
-  vested or contingent, in the Work. Dedicator understands that such relinquishment of all rights
-  includes the relinquishment of all rights to enforce (by lawsuit or otherwise) those copyrights in
-  the Work.
-
-  Dedicator recognizes that, once placed in the public domain, the Work may be freely
-  reproduced, distributed, transmitted, used, modified, built upon, or otherwise exploited by
-  anyone for any purpose, commercial or non-commercial, and in any way, including by
-  methods that have not yet been invented or conceived.
-
-See the full license at http://creativecommons.org/licenses/publicdomain/
-
-My own notes: Feel free to"
-
-  * contribute code back to the project (or not).
-  * fork all or portions of the project, and give it a license of your choosing.
-  * link to it (import) for the purposes of an 'extend' or 'use' from a project of any license
-  * credit the ParaNamer project (or not if you prefer) when you use it.
\ No newline at end of file
diff --git a/branch-1.3/lang/java/lib/qdox-1.9.1-LICENSE.txt b/branch-1.3/lang/java/lib/qdox-1.9.1-LICENSE.txt
deleted file mode 100644
index d645695..0000000
--- a/branch-1.3/lang/java/lib/qdox-1.9.1-LICENSE.txt
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/branch-1.3/lang/java/lib/slf4j-1.5.8-LICENSE.txt b/branch-1.3/lang/java/lib/slf4j-1.5.8-LICENSE.txt
deleted file mode 100644
index f1c9e37..0000000
--- a/branch-1.3/lang/java/lib/slf4j-1.5.8-LICENSE.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-Copyright (c) 2004-2008 QOS.ch
-All rights reserved.
-
-Permission is hereby granted, free  of charge, to any person obtaining
-a  copy  of this  software  and  associated  documentation files  (the
-"Software"), to  deal in  the Software without  restriction, including
-without limitation  the rights to  use, copy, modify,  merge, publish,
-distribute,  sublicense, and/or sell  copies of  the Software,  and to
-permit persons to whom the Software  is furnished to do so, subject to
-the following conditions:
-
-The  above  copyright  notice  and  this permission  notice  shall  be
-included in all copies or substantial portions of the Software.
-
-THE  SOFTWARE IS  PROVIDED  "AS  IS", WITHOUT  WARRANTY  OF ANY  KIND,
-EXPRESS OR  IMPLIED, INCLUDING  BUT NOT LIMITED  TO THE  WARRANTIES OF
-MERCHANTABILITY,    FITNESS    FOR    A   PARTICULAR    PURPOSE    AND
-NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-OF CONTRACT, TORT OR OTHERWISE,  ARISING FROM, OUT OF OR IN CONNECTION
-WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-
-
diff --git a/branch-1.3/lang/java/src/java/checkstyle.xml b/branch-1.3/lang/java/src/java/checkstyle.xml
deleted file mode 100644
index a51089a..0000000
--- a/branch-1.3/lang/java/src/java/checkstyle.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<!DOCTYPE module PUBLIC
-    "-//Puppy Crawl//DTD Check Configuration 1.2//EN"
-    "http://www.puppycrawl.com/dtds/configuration_1_2.dtd">
-
-<module name="Checker">
-    <module name="FileTabCharacter"/>
-    <module name="NewlineAtEndOfFile">
-        <property name="lineSeparator" value="lf"/>
-    </module>
-
-    <module name="TreeWalker">
-        <module name="ConstantName"/>
-        <module name="LocalFinalVariableName"/>
-        <module name="LocalVariableName"/>
-        <module name="MemberName"/>
-        <module name="MethodName"/>
-        <module name="PackageName"/>
-        <module name="ParameterName"/>
-        <module name="StaticVariableName"/>
-        <module name="TypeName"/>
-
-        <module name="AvoidStarImport"/>
-        <module name="RedundantImport"/>
-        <module name="UnusedImports"/>
-
-        <module name="RedundantModifier"/>
-
-        <module name="EmptyStatement"/>
-        <module name="IllegalInstantiation"/>
-        <module name="RedundantThrows"/>
-        <module name="SimplifyBooleanExpression"/>
-        <module name="SimplifyBooleanReturn"/>
-
-        <module name="InterfaceIsType"/>
-
-        <module name="ArrayTypeStyle"/>
-        <module name="UpperEll"/>
-
-    </module>
-</module>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/AvroRuntimeException.java b/branch-1.3/lang/java/src/java/org/apache/avro/AvroRuntimeException.java
deleted file mode 100644
index abbaaa0..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/AvroRuntimeException.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro;
-
-/** Base Avro exception. */
-public class AvroRuntimeException extends RuntimeException {
-  public AvroRuntimeException(Throwable cause) { super(cause); }
-  public AvroRuntimeException(String message) { super(message); }
-  public AvroRuntimeException(String message, Throwable cause) {
-    super(message, cause);
-  }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/AvroTypeException.java b/branch-1.3/lang/java/src/java/org/apache/avro/AvroTypeException.java
deleted file mode 100644
index a4a1982..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/AvroTypeException.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro;
-
-
-/** Thrown when an illegal type is used. */
-public class AvroTypeException extends AvroRuntimeException {
-  public AvroTypeException(String message) { super(message); }
-  public AvroTypeException(String message, Throwable cause) {
-    super(message, cause);
-  }
-}
-
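The two deleted files above define Avro's unchecked exception root and its type-error subclass. As a standalone sketch (package declarations dropped and constructors trimmed so it compiles without the Avro jar), catching the base type also catches the subclass:

```java
// Trimmed standalone copies of the two exception classes above;
// only the String constructors are kept.
class AvroRuntimeException extends RuntimeException {
  AvroRuntimeException(String message) { super(message); }
}

class AvroTypeException extends AvroRuntimeException {
  AvroTypeException(String message) { super(message); }
}

public class ExceptionDemo {
  public static void main(String[] args) {
    try {
      throw new AvroTypeException("expected int, found string");
    } catch (AvroRuntimeException e) {
      // The base class catch clause handles the subclass as well.
      System.out.println("caught: " + e.getMessage());
    }
  }
}
```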
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/Protocol.java b/branch-1.3/lang/java/src/java/org/apache/avro/Protocol.java
deleted file mode 100644
index 1b7aeee..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/Protocol.java
+++ /dev/null
@@ -1,388 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.io.ByteArrayInputStream;
-import java.io.File;
-import java.io.StringWriter;
-import java.io.IOException;
-import java.security.MessageDigest;
-import java.util.ArrayList;
-import java.util.Iterator;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Collection;
-
-import org.apache.avro.Schema.Field;
-import org.codehaus.jackson.JsonNode;
-import org.codehaus.jackson.JsonParser;
-import org.codehaus.jackson.JsonGenerator;
-
-/** A set of messages forming an application protocol.
- * <p> A protocol consists of:
- * <ul>
- * <li>a <i>name</i> for the protocol;
- * <li>an optional <i>namespace</i>, further qualifying the name;
- * <li>a list of <i>types</i>, or named {@link Schema schemas};
- * <li>a list of <i>errors</i>, or named {@link Schema schemas} for exceptions;
- * <li>a list of named <i>messages</i>, each of which specifies,
- *   <ul>
- *   <li><i>request</i>, the parameter schemas;
- *   <li><i>response</i>, the response schema;
- *   <li><i>errors</i>, a list of potential error schema names.
- *   </ul>
- * </ul>
- */
-public class Protocol {
-  /** The version of the protocol specification implemented here. */
-  public static final long VERSION = 1;
-
-  /** A protocol message. */
-  public class Message {
-    private String name;
-    private String doc;
-    private Schema request;
-    private Schema response;
-    private Schema errors;
-    
-    /** Construct a message. */
-    private Message(String name, String doc, Schema request,
-                    Schema response, Schema errors) {
-      this.name = name;
-      this.doc = doc;
-      this.request = request;
-      this.response = response;
-      this.errors = errors;
-    }
-
-    /** The name of this message. */
-    public String getName() { return name; }
-    /** The parameters of this message. */
-    public Schema getRequest() { return request; }
-    /** The returned data. */
-    public Schema getResponse() { return response; }
-    /** Errors that might be thrown. */
-    public Schema getErrors() { return errors; }
-    
-    public String toString() {
-      try {
-        StringWriter writer = new StringWriter();
-        JsonGenerator gen = Schema.FACTORY.createJsonGenerator(writer);
-        toJson(gen);
-        gen.flush();
-        return writer.toString();
-      } catch (IOException e) {
-        throw new AvroRuntimeException(e);
-      }
-    }
-    void toJson(JsonGenerator gen) throws IOException {
-      gen.writeStartObject();
-
-      gen.writeFieldName("request");
-      request.fieldsToJson(types, gen);
-
-      gen.writeFieldName("response");
-      response.toJson(types, gen);
-
-      List<Schema> errTypes = errors.getTypes();  // elide system error
-      if (errTypes.size() > 1) {
-        Schema errs = Schema.createUnion(errTypes.subList(1, errTypes.size()));
-        gen.writeFieldName("errors");
-        errs.toJson(types, gen);
-      }
-
-      gen.writeEndObject();
-    }
-
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      if (!(o instanceof Message)) return false;
-      Message that = (Message)o;
-      return this.name.equals(that.name)
-        && this.request.equals(that.request)
-        && this.response.equals(that.response)
-        && this.errors.equals(that.errors);
-    }
-
-    public int hashCode() {
-      return name.hashCode()
-        + request.hashCode() + response.hashCode() + errors.hashCode();
-    }
-
-    public String getDoc() {
-      return doc;
-    }
-
-  }
-
-  private String name;
-  private String namespace;
-  private String doc;
-
-  private Schema.Names types = new Schema.Names();
-  private Map<String,Message> messages = new LinkedHashMap<String,Message>();
-  private byte[] md5;
-
-  /** An error that can be thrown by any message. */
-  public static final Schema SYSTEM_ERROR = Schema.create(Schema.Type.STRING);
-
-  /** Union type for generating system errors. */
-  public static final Schema SYSTEM_ERRORS;
-  static {
-    List<Schema> errors = new ArrayList<Schema>();
-    errors.add(SYSTEM_ERROR);
-    SYSTEM_ERRORS = Schema.createUnion(errors);
-  }
-
-  private Protocol() {}
-
-  public Protocol(String name, String namespace) {
-    this.name = name;
-    this.namespace = namespace;
-  }
-
-  /** The name of this protocol. */
-  public String getName() { return name; }
-
-  /** The namespace of this protocol.  Qualifies its name. */
-  public String getNamespace() { return namespace; }
-  
-  /** Doc string for this protocol. */
-  public String getDoc() { return doc; }
-
-  /** The types of this protocol. */
-  public Collection<Schema> getTypes() { return types.values(); }
-
-  /** Returns the named type. */
-  public Schema getType(String name) { return types.get(name); }
-
-  /** Set the types of this protocol. */
-  public void setTypes(Collection<Schema> newTypes) {
-    types = new Schema.Names();
-    for (Schema s : newTypes)
-      types.add(s);
-  }
-
-  /** The messages of this protocol. */
-  public Map<String,Message> getMessages() { return messages; }
-
-  public Message createMessage(String name, String doc, Schema request,
-                               Schema response, Schema errors) {
-    return new Message(name, doc, request, response, errors);
-  }
-
-
-  public boolean equals(Object o) {
-    if (o == this) return true;
-    if (!(o instanceof Protocol)) return false;
-    Protocol that = (Protocol)o;
-    return this.name.equals(that.name)
-      && this.namespace.equals(that.namespace)
-      && this.types.equals(that.types)
-      && this.messages.equals(that.messages);
-  }
-  
-  public int hashCode() {
-    return name.hashCode() + namespace.hashCode()
-      + types.hashCode() + messages.hashCode();
-  }
-
-  /** Render this as <a href="http://json.org/">JSON</a>.*/
-  @Override
-  public String toString() { return toString(false); }
-
-  /** Render this as <a href="http://json.org/">JSON</a>.
-   * @param pretty if true, pretty-print JSON.
-   */
-  public String toString(boolean pretty) {
-    try {
-      StringWriter writer = new StringWriter();
-      JsonGenerator gen = Schema.FACTORY.createJsonGenerator(writer);
-      if (pretty) gen.useDefaultPrettyPrinter();
-      toJson(gen);
-      gen.flush();
-      return writer.toString();
-    } catch (IOException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-  void toJson(JsonGenerator gen) throws IOException {
-    types.space(namespace);
-
-    gen.writeStartObject();
-    gen.writeStringField("protocol", name);
-    gen.writeStringField("namespace", namespace);
-    
-    gen.writeArrayFieldStart("types");
-    Schema.Names resolved = new Schema.Names(namespace);
-    for (Schema type : types.values())
-      if (!resolved.contains(type))
-        type.toJson(resolved, gen);
-    gen.writeEndArray();
-    
-    gen.writeObjectFieldStart("messages");
-    for (Map.Entry<String,Message> e : messages.entrySet()) {
-      gen.writeFieldName(e.getKey());
-      e.getValue().toJson(gen);
-    }
-    gen.writeEndObject();
-    gen.writeEndObject();
-  }
-
-  /** Return the MD5 hash of the text of this protocol. */
-  public byte[] getMD5() {
-    if (md5 == null)
-      try {
-        md5 = MessageDigest.getInstance("MD5")
-          .digest(this.toString().getBytes("UTF-8"));
-      } catch (Exception e) {
-        throw new AvroRuntimeException(e);
-      }
-    return md5;
-  }
-
-  /** Read a protocol from a Json file. */
-  public static Protocol parse(File file) throws IOException {
-    return parse(Schema.FACTORY.createJsonParser(file));
-  }
-
-  /** Read a protocol from a Json string. */
-  public static Protocol parse(String string) {
-    try {
-      return parse(Schema.FACTORY.createJsonParser
-                   (new ByteArrayInputStream(string.getBytes("UTF-8"))));
-    } catch (IOException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  private static Protocol parse(JsonParser parser) {
-    try {
-      Protocol protocol = new Protocol();
-      protocol.parse(Schema.MAPPER.readTree(parser));
-      return protocol;
-    } catch (IOException e) {
-      throw new SchemaParseException(e);
-    }
-  }
-
-  private void parse(JsonNode json) {
-    parseNamespace(json);
-    parseName(json);
-    parseTypes(json);
-    parseMessages(json);
-    parseDoc(json);
-  }
-
-  private void parseNamespace(JsonNode json) {
-    JsonNode nameNode = json.get("namespace");
-    if (nameNode == null) return;                 // no namespace defined
-    this.namespace = nameNode.getTextValue();
-    types.space(this.namespace);
-  }
-  
-  private void parseDoc(JsonNode json) {
-    this.doc = parseDocNode(json);
-  }
-
-  private String parseDocNode(JsonNode json) {
-    JsonNode nameNode = json.get("doc");
-    if (nameNode == null) return null;                 // no doc defined
-    return nameNode.getTextValue();
-  }
-
-  private void parseName(JsonNode json) {
-    JsonNode nameNode = json.get("protocol");
-    if (nameNode == null)
-      throw new SchemaParseException("No protocol name specified: "+json);
-    this.name = nameNode.getTextValue();
-  }
-
-  private void parseTypes(JsonNode json) {
-    JsonNode defs = json.get("types");
-    if (defs == null) return;                    // no types defined
-    if (!defs.isArray())
-      throw new SchemaParseException("Types not an array: "+defs);
-    for (JsonNode type : defs) {
-      if (!type.isObject())
-        throw new SchemaParseException("Type not an object: "+type);
-      Schema.parse(type, types);
-    }
-  }
-
-  private void parseMessages(JsonNode json) {
-    JsonNode defs = json.get("messages");
-    if (defs == null) return;                    // no messages defined
-    for (Iterator<String> i = defs.getFieldNames(); i.hasNext();) {
-      String prop = i.next();
-      this.messages.put(prop, parseMessage(prop, defs.get(prop)));
-    }
-  }
-
-  private Message parseMessage(String messageName, JsonNode json) {
-    JsonNode requestNode = json.get("request");
-    if (requestNode == null || !requestNode.isArray())
-      throw new SchemaParseException("No request specified: "+json);
-    List<Field> fields = new ArrayList<Field>();
-    for (JsonNode field : requestNode) {
-      JsonNode fieldNameNode = field.get("name");
-      if (fieldNameNode == null)
-        throw new SchemaParseException("No param name: "+field);
-      JsonNode fieldTypeNode = field.get("type");
-      if (fieldTypeNode == null)
-        throw new SchemaParseException("No param type: "+field);
-      String name = fieldNameNode.getTextValue();
-      fields.add(new Field(name, Schema.parse(fieldTypeNode,types),
-                           null /* message fields don't have docs */,
-                           field.get("default")));
-    }
-    Schema request = Schema.createRecord(fields);
-    
-    JsonNode responseNode = json.get("response");
-    if (responseNode == null)
-      throw new SchemaParseException("No response specified: "+json);
-    Schema response = Schema.parse(responseNode, types);
-
-    List<Schema> errs = new ArrayList<Schema>();
-    errs.add(SYSTEM_ERROR);                       // every method can throw
-    JsonNode decls = json.get("errors");
-    if (decls != null) {
-      if (!decls.isArray())
-        throw new SchemaParseException("Errors not an array: "+json);
-      for (JsonNode decl : decls) {
-        String name = decl.getTextValue();
-        Schema schema = this.types.get(name);
-        if (schema == null)
-          throw new SchemaParseException("Undefined error: "+name);
-        if (!schema.isError())
-          throw new SchemaParseException("Not an error: "+name);
-        errs.add(schema);
-      }
-    }
-    String doc = parseDocNode(json);
-    return new Message(messageName, doc, request, response,
-                       Schema.createUnion(errs));
-  }
-
-  public static void main(String[] args) throws Exception {
-    System.out.println(Protocol.parse(new File(args[0])));
-  }
-
-}
-
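`getMD5()` in the Protocol class above fingerprints a protocol by hashing the UTF-8 bytes of its JSON rendering. A minimal stdlib-only sketch of that step (the JSON literal here is a placeholder, not a complete protocol document):

```java
import java.security.MessageDigest;

public class ProtocolHashDemo {
  // Mirrors Protocol.getMD5(): MD5 over the UTF-8 bytes of the protocol's
  // JSON text. "json" stands in for the output of Protocol.toString().
  static byte[] md5Of(String json) throws Exception {
    return MessageDigest.getInstance("MD5").digest(json.getBytes("UTF-8"));
  }

  public static void main(String[] args) throws Exception {
    byte[] digest = md5Of("{\"protocol\":\"Echo\",\"namespace\":\"demo\"}");
    // An MD5 digest is always 16 bytes, whatever the input length.
    System.out.println(digest.length);
  }
}
```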
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/Schema.java b/branch-1.3/lang/java/src/java/org/apache/avro/Schema.java
deleted file mode 100644
index 2e821c9..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/Schema.java
+++ /dev/null
@@ -1,1087 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.io.File;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.StringReader;
-import java.io.StringWriter;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.Iterator;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.IdentityHashMap;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.JsonNode;
-import org.codehaus.jackson.JsonParseException;
-import org.codehaus.jackson.JsonParser;
-import org.codehaus.jackson.JsonGenerator;
-import org.codehaus.jackson.map.ObjectMapper;
-
-/** An abstract data type.
- * <p>A schema may be one of:
- * <ul>
- * <li>A <i>record</i>, mapping field names to field value data;
- * <li>An <i>enum</i>, containing one of a small set of symbols;
- * <li>An <i>array</i> of values, all of the same schema;
- * <li>A <i>map</i>, containing string/value pairs, of a declared schema;
- * <li>A <i>union</i> of other schemas;
- * <li>A <i>fixed</i> sized binary object;
- * <li>A unicode <i>string</i>;
- * <li>A sequence of <i>bytes</i>;
- * <li>A 32-bit signed <i>int</i>;
- * <li>A 64-bit signed <i>long</i>;
- * <li>A 32-bit IEEE single-<i>float</i>; or
- * <li>A 64-bit IEEE <i>double</i>-float; or
- * <li>A <i>boolean</i>; or
- * <li><i>null</i>.
- * </ul>
- * 
- * A schema can be constructed using one of its static <tt>createXXX</tt>
- * methods. The schema objects are <i>logically</i> immutable.
- * There are only two mutating methods - {@link #setFields(List)} and
- * {@link #addProp(String, String)}. The following restrictions apply on these
- * two methods.
- * <ul>
- * <li> {@link #setFields(List)}, can be called at most once. This method exists
- * in order to enable clients to build recursive schemas.
- * <li> {@link #addProp(String, String)} can be called with property names
- * that are not present already. It is not possible to change or delete an
- * existing property.
- * </ul>
- */
-public abstract class Schema {
-  static final JsonFactory FACTORY = new JsonFactory();
-  static final ObjectMapper MAPPER = new ObjectMapper(FACTORY);
-
-  static {
-    FACTORY.enable(JsonParser.Feature.ALLOW_COMMENTS);
-    FACTORY.setCodec(MAPPER);
-  }
-
-  /** The type of a schema. */
-  public enum Type {
-    RECORD, ENUM, ARRAY, MAP, UNION, FIXED, STRING, BYTES,
-      INT, LONG, FLOAT, DOUBLE, BOOLEAN, NULL;
-    private String name;
-    private Type() { this.name = this.name().toLowerCase(); }
-  };
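Each `Type` constant above caches its lowercase name, which is the spelling that appears in schema JSON (`"int"`, `"string"`, and so on). A trimmed sketch of that constructor pattern:

```java
public class TypeNameDemo {
  // Trimmed copy of the enum pattern above: name() returns the uppercase
  // constant name, so the constructor caches the lowercase JSON spelling.
  enum Type {
    RECORD, INT, STRING;
    final String jsonName;
    Type() { this.jsonName = this.name().toLowerCase(); }
  }

  public static void main(String[] args) {
    System.out.println(Type.STRING.jsonName); // prints "string"
  }
}
```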
-
-  private final Type type;
-  Map<String,String> props = new HashMap<String,String>(1);
-
-  Schema(Type type) { this.type = type; }
-
-  /** Create a schema for a primitive type. */
-  public static Schema create(Type type) {
-    switch (type) {
-    case STRING:  return new StringSchema();
-    case BYTES:   return new BytesSchema();
-    case INT:     return new IntSchema();
-    case LONG:    return new LongSchema();
-    case FLOAT:   return new FloatSchema();
-    case DOUBLE:  return new DoubleSchema();
-    case BOOLEAN: return new BooleanSchema();
-    case NULL:    return new NullSchema();
-    default: throw new AvroRuntimeException("Can't create a: "+type);
-    }
-  }
-
-  private static final Set<String> RESERVED_PROPS = new HashSet<String>();
-  static {
-    Collections.addAll(RESERVED_PROPS,
-                       "fields", "items", "name", "namespace",
-                       "size", "symbols", "values", "type");
-  }
-
-  /**
-   * Returns the value of the named property in this schema.
-   * Returns <tt>null</tt> if there is no property with that name.
-   */
-  public synchronized String getProp(String name) {
-    return props.get(name);
-  }
-
-  /**
-   * Adds a property with the given name <tt>name</tt> and
-   * value <tt>value</tt>. Neither <tt>name</tt> nor <tt>value</tt> can be
-   * <tt>null</tt>. It is illegal to add a property if another with
-   * the same name but different value already exists in this schema.
-   * 
-   * @param name The name of the property to add
-   * @param value The value for the property to add
-   */
-  public synchronized void addProp(String name, String value) {
-    if (RESERVED_PROPS.contains(name)) {
-      throw new AvroRuntimeException("Can't set a reserved property: " + name);
-    }
-    
-    if (value == null) {
-      throw new AvroRuntimeException(
-          "Can't set a null value for property: " + name);
-    }
-    
-    String v = props.get(name);
-    if (v != null) {
-      if (! v.equals(value)) {
-        throw new AvroRuntimeException("Can't overwrite property: " + name);
-      }
-    } else {
-      props.put(name, value);
-    }
-  }
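`addProp()` above enforces write-once properties: reserved names and null values are rejected, re-adding the same value is a harmless no-op, and changing an existing value throws. A standalone sketch of those same rules (the class and property names here are illustrative, not Avro API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WriteOnceProps {
  // Same reserved-name idea as Schema.RESERVED_PROPS, shortened.
  static final Set<String> RESERVED = new HashSet<String>();
  static { Collections.addAll(RESERVED, "name", "type", "fields"); }

  final Map<String, String> props = new HashMap<String, String>();

  // Mirrors the addProp() logic above: set once, never change or delete.
  void addProp(String name, String value) {
    if (RESERVED.contains(name))
      throw new RuntimeException("Can't set a reserved property: " + name);
    if (value == null)
      throw new RuntimeException("Can't set a null value for property: " + name);
    String v = props.get(name);
    if (v != null && !v.equals(value))
      throw new RuntimeException("Can't overwrite property: " + name);
    props.put(name, value);
  }

  public static void main(String[] args) {
    WriteOnceProps p = new WriteOnceProps();
    p.addProp("order", "ascending");
    p.addProp("order", "ascending");    // same value again: allowed no-op
    try {
      p.addProp("order", "descending"); // different value: rejected
    } catch (RuntimeException e) {
      System.out.println(e.getMessage());
    }
  }
}
```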
-
-  /** Create an anonymous record schema. */
-  public static Schema createRecord(List<Field> fields) {
-    Schema result = createRecord(null, null, null, false);
-    result.setFields(fields);
-    return result;
-  }
-
-  /** Create a named record schema. */
-  public static Schema createRecord(String name, String doc, String namespace,
-                                    boolean isError) {
-    return new RecordSchema(new Name(name, namespace), doc, isError);
-  }
-
-  /** Create an enum schema. */
-  public static Schema createEnum(String name, String doc, String namespace,
-                                  List<String> values) {
-    return new EnumSchema(new Name(name, namespace), doc,
-        new LockableArrayList<String>(values));
-  }
-
-  /** Create an array schema. */
-  public static Schema createArray(Schema elementType) {
-    return new ArraySchema(elementType);
-  }
-
-  /** Create a map schema. */
-  public static Schema createMap(Schema valueType) {
-    return new MapSchema(valueType);
-  }
-
-  /** Create a union schema. */
-  public static Schema createUnion(List<Schema> types) {
-    return new UnionSchema(new LockableArrayList<Schema>(types));
-  }
-
-  /** Create a fixed schema. */
-  public static Schema createFixed(String name, String doc, String space,
-      int size) {
-    return new FixedSchema(new Name(name, space), doc, size);
-  }
-
-  /** Return the type of this schema. */
-  public Type getType() { return type; }
-
-  /**
-   * If this is a record, returns the Field with the given name
-   * <tt>fieldname</tt>. If there is no field by that name, <tt>null</tt>
-   * is returned.
-   */
-  public Field getField(String fieldname) {
-    throw new AvroRuntimeException("Not a record: "+this);
-  }
-
-  /**
-   * If this is a record, returns the fields in it. The returned
-   * list is in the order of their positions.
-   */
-  public List<Field> getFields() {
-    throw new AvroRuntimeException("Not a record: "+this);
-  }
-
-  /**
-   * If this is a record, set its fields. The fields can be set
-   * only once in a schema.
-   */
-  public void setFields(List<Field> fields) {
-    throw new AvroRuntimeException("Not a record: "+this);
-  }
-
-  /** If this is an enum, return its symbols. */
-  public List<String> getEnumSymbols() {
-    throw new AvroRuntimeException("Not an enum: "+this);
-  }    
-
-  /** If this is an enum, return a symbol's ordinal value. */
-  public int getEnumOrdinal(String symbol) {
-    throw new AvroRuntimeException("Not an enum: "+this);
-  }    
-  
-  /** If this is an enum, returns true if it contains given symbol. */
-  public boolean hasEnumSymbol(String symbol) {
-    throw new AvroRuntimeException("Not an enum: "+this);
-  }
-
-  /** If this is a record, enum or fixed, returns its name, otherwise the name
-   * of the primitive type. */
-  public String getName() { return type.name; }
-
-  /** If this is a record, enum, or fixed, returns its docstring,
-   * if available.  Otherwise, returns null. */
-  public String getDoc() {
-    return null;
-  }
-
-  /** If this is a record, enum or fixed, returns its namespace, if any. */
-  public String getNamespace() {
-    throw new AvroRuntimeException("Not a named type: "+this);
-  }
-
-  /** If this is a record, enum or fixed, returns its namespace-qualified name,
-   * if any. */
-  public String getFullName() {
-    throw new AvroRuntimeException("Not a named type: "+this);
-  }
-
-  /** Returns true if this record is an error type. */
-  public boolean isError() {
-    throw new AvroRuntimeException("Not a record: "+this);
-  }
-
-  /** If this is an array, returns its element type. */
-  public Schema getElementType() {
-    throw new AvroRuntimeException("Not an array: "+this);
-  }
-
-  /** If this is a map, returns its value type. */
-  public Schema getValueType() {
-    throw new AvroRuntimeException("Not a map: "+this);
-  }
-
-  /** If this is a union, returns its types. */
-  public List<Schema> getTypes() {
-    throw new AvroRuntimeException("Not a union: "+this);
-  }
-
-  /** If this is fixed, returns its size. */
-  public int getFixedSize() {
-    throw new AvroRuntimeException("Not fixed: "+this);
-  }
-
-  /** Render this as <a href="http://json.org/">JSON</a>.*/
-  @Override
-  public String toString() { return toString(false); }
-
-  /** Render this as <a href="http://json.org/">JSON</a>.
-   * @param pretty if true, pretty-print JSON.
-   */
-  public String toString(boolean pretty) {
-    try {
-      StringWriter writer = new StringWriter();
-      JsonGenerator gen = FACTORY.createJsonGenerator(writer);
-      if (pretty) gen.useDefaultPrettyPrinter();
-      toJson(new Names(), gen);
-      gen.flush();
-      return writer.toString();
-    } catch (IOException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  void toJson(Names names, JsonGenerator gen) throws IOException {
-    if (props.size() == 0) {                      // no props defined
-      gen.writeString(getName());                 // just write name
-    } else {
-      gen.writeStartObject();
-      gen.writeStringField("type", getName());
-      writeProps(gen);
-      gen.writeEndObject();
-    }
-  }
-
-  void writeProps(JsonGenerator gen) throws IOException {
-    for (Map.Entry<String,String> e : props.entrySet())
-      gen.writeStringField(e.getKey(), e.getValue());
-  }
-
-  void fieldsToJson(Names names, JsonGenerator gen) throws IOException {
-    throw new AvroRuntimeException("Not a record: "+this);
-  }
-
-  public boolean equals(Object o) {
-    if (o == this) return true;
-    if (!(o instanceof Schema)) return false;
-    Schema that = (Schema)o;
-    if (!(this.type == that.type)) return false;
-    return props.equals(that.props);
-  }
-  public int hashCode() { return getType().hashCode() + props.hashCode(); }
-
-  /** A field within a record. */
-  public static class Field {
-
-    /** How values of this field should be ordered when sorting records. */
-    public enum Order {
-      ASCENDING, DESCENDING, IGNORE;
-      private String name;
-      private Order() { this.name = this.name().toLowerCase(); }
-    };
-
-    private final String name;    // name of the field.
-    private int position = -1;
-    private final Schema schema;
-    private final String doc;
-    private final JsonNode defaultValue;
-    private final Order order;
-
-    public Field(String name, Schema schema, String doc,
-        JsonNode defaultValue) {
-      this(name, schema, doc, defaultValue, Order.ASCENDING);
-    }
-    public Field(String name, Schema schema, String doc,
-        JsonNode defaultValue, Order order) {
-      this.name = name;
-      this.schema = schema;
-      this.doc = doc;
-      this.defaultValue = defaultValue;
-      this.order = order;
-    }
-    public String name() { return name; }
-    /** The position of this field within the record. */
-    public int pos() { return position; }
-    /** This field's {@link Schema}. */
-    public Schema schema() { return schema; }
-    /** Field's documentation within the record, if set. May return null. */
-    public String doc() { return doc; }
-    public JsonNode defaultValue() { return defaultValue; }
-    public Order order() { return order; }
-    public boolean equals(Object other) {
-      if (other == this) return true;
-      if (!(other instanceof Field)) return false;
-      Field that = (Field) other;
-      return (position == that.position) &&
-        (schema.equals(that.schema)) &&
-        (defaultValue == null
-         ? that.defaultValue == null
-         : (defaultValue.equals(that.defaultValue)));
-    }
-    public int hashCode() { return schema.hashCode(); }
-  }
-
-  private static class Name {
-    private final String name;
-    private final String space;
-    private final String full;
-    public Name(String name, String space) {
-      if (name == null) {                         // anonymous
-        this.name = this.space = this.full = null;
-        return;
-      }
-      int lastDot = name.lastIndexOf('.');
-      if (lastDot < 0) {                          // unqualified name
-        this.space = space;                       // use default space
-        this.name = name;
-      } else {                                    // qualified name
-        this.space = name.substring(0, lastDot);  // get space from name
-        this.name = name.substring(lastDot+1, name.length());
-      }
-      this.full = (this.space == null) ? this.name : this.space+"."+this.name;
-    }
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      if (!(o instanceof Name)) return false;
-      Name that = (Name)o;
-      return full==null ? that.full==null : full.equals(that.full);
-    }
-    public int hashCode() {
-      return full==null ? 0 : full.hashCode();
-    }
-    public String toString() { return full; }
-    public void writeName(Names names, JsonGenerator gen) throws IOException {
-      if (name != null) gen.writeStringField("name", name);
-      if (space != null) {
-        if (!space.equals(names.space()))
-          gen.writeStringField("namespace", space);
-        if (names.space() == null)                // default namespace
-          names.space(space);
-      }
-    }
-  }
-
-  private static abstract class NamedSchema extends Schema {
-    private final Name name;
-    private final String doc;
-    public NamedSchema(Type type, Name name, String doc) {
-      super(type);
-      this.name = name;
-      this.doc = doc;
-      if (PRIMITIVES.containsKey(name.full)) {
-        throw new AvroTypeException("Schemas may not be named after primitives: " + name.full);
-      }
-    }
-    public String getName() { return name.name; }
-    public String getDoc() { return doc; }
-    public String getNamespace() { return name.space; }
-    public String getFullName() { return name.full; }
-    public boolean writeNameRef(Names names, JsonGenerator gen)
-      throws IOException {
-      if (this.equals(names.get(name))) {
-        if (name.space == null || name.space.equals(names.space()))
-          gen.writeString(name.name);             // in default namespace
-        else
-          gen.writeString(name.full);             // use fully-qualified name
-        return true;
-      } else if (name.name != null) {
-        names.put(name, this);
-      }
-      return false;
-    }
-    public void writeName(Names names, JsonGenerator gen) throws IOException {
-      name.writeName(names, gen);
-    }
-    public boolean equalNames(NamedSchema that) {
-      return this.name.equals(that.name);
-    }
-    public int hashCode() {
-      return getType().hashCode() + name.hashCode() + props.hashCode();
-    }
-  }
-
-  private static class SeenPair {
-    private Object s1; private Object s2;
-    private SeenPair(Object s1, Object s2) { this.s1 = s1; this.s2 = s2; }
-    public boolean equals(Object o) {
-      return this.s1 == ((SeenPair)o).s1 && this.s2 == ((SeenPair)o).s2;
-    }
-    public int hashCode() {
-      return System.identityHashCode(s1) + System.identityHashCode(s2);
-    }
-  }
-
-  private static final ThreadLocal<Set> SEEN_EQUALS = new ThreadLocal<Set>() {
-    protected Set initialValue() { return new HashSet(); }
-  };
-  private static final ThreadLocal<Map> SEEN_HASHCODE = new ThreadLocal<Map>() {
-    protected Map initialValue() { return new IdentityHashMap(); }
-  };
-
-  @SuppressWarnings(value="unchecked")
-  private static class RecordSchema extends NamedSchema {
-    private List<Field> fields;
-    private Map<String, Field> fieldMap;
-    private final boolean isError;
-    public RecordSchema(Name name, String doc, boolean isError) {
-      super(Type.RECORD, name, doc);
-      this.isError = isError;
-    }
-    public boolean isError() { return isError; }
-
-    @Override
-    public Field getField(String fieldname) {
-      return fieldMap.get(fieldname);
-    }
-
-    @Override
-    public List<Field> getFields() {
-      return fields;
-    }
-
-    @Override
-    public void setFields(List<Field> fields) {
-      if (this.fields != null) {
-        throw new AvroRuntimeException("Fields are already set");
-      }
-      int i = 0;
-      fieldMap = new HashMap<String, Field>();
-      LockableArrayList ff = new LockableArrayList();
-      for (Field f : fields) {
-        if (f.position != -1)
-          throw new AvroRuntimeException("Field already used: " + f);
-        f.position = i++;
-        fieldMap.put(f.name(), f);
-        ff.add(f);
-      }
-      this.fields = ff.lock();
-    }
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      if (!(o instanceof RecordSchema)) return false;
-      RecordSchema that = (RecordSchema)o;
-      if (!equalNames(that)) return false;
-      if (!props.equals(that.props)) return false;
-      Set seen = SEEN_EQUALS.get();
-      SeenPair here = new SeenPair(this, o);
-      if (seen.contains(here)) return true;       // prevent stack overflow
-      try {
-        seen.add(here);
-        return fields.equals(((RecordSchema)o).fields);
-      } finally {
-        seen.remove(here);
-      }
-    }
-    public int hashCode() {
-      Map seen = SEEN_HASHCODE.get();
-      if (seen.containsKey(this)) return 0;       // prevent stack overflow
-      try {
-        seen.put(this, this);
-        return super.hashCode() + fields.hashCode();
-      } finally {
-        seen.remove(this);
-      }
-    }
-    void toJson(Names names, JsonGenerator gen) throws IOException {
-      if (writeNameRef(names, gen)) return;
-      gen.writeStartObject();
-      gen.writeStringField("type", isError?"error":"record");
-      writeName(names, gen);
-      gen.writeFieldName("fields");
-      fieldsToJson(names, gen);
-      writeProps(gen);
-      gen.writeEndObject();
-    }
-
-    void fieldsToJson(Names names, JsonGenerator gen) throws IOException {
-      gen.writeStartArray();
-      for (Field f : fields) {
-        gen.writeStartObject();
-        gen.writeStringField("name", f.name());
-        gen.writeFieldName("type");
-        f.schema().toJson(names, gen);
-        if (f.defaultValue() != null) {
-          gen.writeFieldName("default");
-          gen.writeTree(f.defaultValue());
-        }
-        if (f.order() != Field.Order.ASCENDING)
-          gen.writeStringField("order", f.order().name);
-        gen.writeEndObject();
-      }
-      gen.writeEndArray();
-    }
-  }
-
-  private static class EnumSchema extends NamedSchema {
-    private final List<String> symbols;
-    private final Map<String,Integer> ordinals;
-    public EnumSchema(Name name, String doc,
-        LockableArrayList<String> symbols) {
-      super(Type.ENUM, name, doc);
-      this.symbols = symbols.lock();
-      this.ordinals = new HashMap<String,Integer>();
-      int i = 0;
-      for (String symbol : symbols)
-        if (ordinals.put(symbol, i++) != null)
-          throw new SchemaParseException("Duplicate enum symbol: "+symbol);
-    }
-    public List<String> getEnumSymbols() { return symbols; }
-    public boolean hasEnumSymbol(String symbol) { 
-      return ordinals.containsKey(symbol); }
-    public int getEnumOrdinal(String symbol) { return ordinals.get(symbol); }
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      if (!(o instanceof EnumSchema)) return false;
-      EnumSchema that = (EnumSchema)o;
-      return equalNames(that)
-        && symbols.equals(that.symbols)
-        && props.equals(that.props);
-    }
-    public int hashCode() { return super.hashCode() + symbols.hashCode(); }
-    void toJson(Names names, JsonGenerator gen) throws IOException {
-      if (writeNameRef(names, gen)) return;
-      gen.writeStartObject();
-      gen.writeStringField("type", "enum");
-      writeName(names, gen);
-      gen.writeArrayFieldStart("symbols");
-      for (String symbol : symbols)
-        gen.writeString(symbol);
-      gen.writeEndArray();
-      writeProps(gen);
-      gen.writeEndObject();
-    }
-  }
-
-  private static class ArraySchema extends Schema {
-    private final Schema elementType;
-    public ArraySchema(Schema elementType) {
-      super(Type.ARRAY);
-      this.elementType = elementType;
-    }
-    public Schema getElementType() { return elementType; }
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      if (!(o instanceof ArraySchema)) return false;
-      ArraySchema that = (ArraySchema)o;
-      return elementType.equals(that.elementType) && props.equals(that.props);
-    }
-    public int hashCode() {
-      return getType().hashCode() + elementType.hashCode() + props.hashCode();
-    }
-    void toJson(Names names, JsonGenerator gen) throws IOException {
-      gen.writeStartObject();
-      gen.writeStringField("type", "array");
-      gen.writeFieldName("items");
-      elementType.toJson(names, gen);
-      writeProps(gen);
-      gen.writeEndObject();
-    }
-  }
-
-  private static class MapSchema extends Schema {
-    private final Schema valueType;
-    public MapSchema(Schema valueType) {
-      super(Type.MAP);
-      this.valueType = valueType;
-    }
-    public Schema getValueType() { return valueType; }
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      if (!(o instanceof MapSchema)) return false;
-      MapSchema that = (MapSchema)o;
-      return valueType.equals(that.valueType) && props.equals(that.props);
-    }
-    public int hashCode() {
-      return getType().hashCode() + valueType.hashCode() + props.hashCode();
-    }
-    void toJson(Names names, JsonGenerator gen) throws IOException {
-      gen.writeStartObject();
-      gen.writeStringField("type", "map");
-      gen.writeFieldName("values");
-      valueType.toJson(names, gen);
-      writeProps(gen);
-      gen.writeEndObject();
-    }
-  }
-
-  private static class UnionSchema extends Schema {
-    private final List<Schema> types;
-    public UnionSchema(LockableArrayList<Schema> types) {
-      super(Type.UNION);
-      this.types = types.lock();
-      int seen = 0;
-      Set<String> seenNames = new HashSet<String>();
-      for (Schema type : types) {                 // check legality of union
-        switch (type.getType()) {
-        case UNION: 
-          throw new AvroRuntimeException("Nested union: "+this);
-        case RECORD:
-        case FIXED:
-        case ENUM:
-          String fullname = type.getFullName();
-          if (fullname != null) {
-            if (seenNames.add(fullname)) {
-              continue;
-            } else {
-              throw new AvroRuntimeException("Duplicate name in union: " + fullname);
-            }
-          } else {
-            throw new AvroRuntimeException("Nameless Record, Fixed, or Enum in union: "+this);
-          }
-        default:
-          int mask = 1 << type.getType().ordinal();
-          if ((seen & mask) != 0)
-            throw new AvroRuntimeException("Ambiguous union: "+this);
-          seen |= mask;
-        }
-      }
-    }
-    public List<Schema> getTypes() { return types; }
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      if (!(o instanceof UnionSchema)) return false;
-      UnionSchema that = (UnionSchema)o;
-      return types.equals(that.types) && props.equals(that.props);
-    }
-    public int hashCode() {
-      return getType().hashCode() + types.hashCode() + props.hashCode();
-    }
-    
-    @Override
-    public void addProp(String name, String value) {
-      throw new AvroRuntimeException("Can't set properties on a union: "+this);
-    }
-    
-    void toJson(Names names, JsonGenerator gen) throws IOException {
-      gen.writeStartArray();
-      for (Schema type : types)
-        type.toJson(names, gen);
-      gen.writeEndArray();
-    }
-  }
-
-  private static class FixedSchema extends NamedSchema {
-    private final int size;
-    public FixedSchema(Name name, String doc, int size) {
-      super(Type.FIXED, name, doc);
-      if (size < 0)
-        throw new IllegalArgumentException("Invalid fixed size: "+size);
-      this.size = size;
-    }
-    public int getFixedSize() { return size; }
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      if (!(o instanceof FixedSchema)) return false;
-      FixedSchema that = (FixedSchema)o;
-      return equalNames(that) && size == that.size && props.equals(that.props);
-    }
-    public int hashCode() { return super.hashCode() + size; }
-    void toJson(Names names, JsonGenerator gen) throws IOException {
-      if (writeNameRef(names, gen)) return;
-      gen.writeStartObject();
-      gen.writeStringField("type", "fixed");
-      writeName(names, gen);
-      gen.writeNumberField("size", size);
-      writeProps(gen);
-      gen.writeEndObject();
-    }
-  }
-
-  private static class StringSchema extends Schema {
-    public StringSchema() { super(Type.STRING); }
-  }
-
-  private static class BytesSchema extends Schema {
-    public BytesSchema() { super(Type.BYTES); }
-  }
-
-  private static class IntSchema extends Schema {
-    public IntSchema() { super(Type.INT); }
-  }
-
-  private static class LongSchema extends Schema {
-    public LongSchema() { super(Type.LONG); }
-  }
-
-  private static class FloatSchema extends Schema {
-    public FloatSchema() { super(Type.FLOAT); }
-  }
-
-  private static class DoubleSchema extends Schema {
-    public DoubleSchema() { super(Type.DOUBLE); }
-  }
-
-  private static class BooleanSchema extends Schema {
-    public BooleanSchema() { super(Type.BOOLEAN); }
-  }
-  
-  private static class NullSchema extends Schema {
-    public NullSchema() { super(Type.NULL); }
-  }
-
-  /**
-   * Constructs a Schema object from JSON schema file <tt>file</tt>.
-   * The contents of <tt>file</tt> are expected to be in UTF-8 format.
-   * @param file  The file to read the schema from.
-   * @return  The freshly built Schema.
-   * @throws IOException if there was trouble reading the contents
-   * @throws JsonParseException if the contents are invalid
-   */
-  public static Schema parse(File file) throws IOException {
-    JsonParser parser = FACTORY.createJsonParser(file);
-    try {
-      return Schema.parse(MAPPER.readTree(parser), new Names());
-    } catch (JsonParseException e) {
-      throw new SchemaParseException(e);
-    }
-  }
-
-  /**
-   * Constructs a Schema object from JSON schema stream <tt>in</tt>.
-   * The contents of <tt>in</tt> are expected to be in UTF-8 format.
-   * @param in  The input stream to read the schema from.
-   * @return  The freshly built Schema.
-   * @throws IOException if there was trouble reading the contents
-   * @throws JsonParseException if the contents are invalid
-   */
-  public static Schema parse(InputStream in) throws IOException {
-    JsonParser parser = FACTORY.createJsonParser(in);
-    try {
-      return Schema.parse(MAPPER.readTree(parser), new Names());
-    } catch (JsonParseException e) {
-      throw new SchemaParseException(e);
-    }
-  }
-
-  /** Construct a schema from <a href="http://json.org/">JSON</a> text. */
-  public static Schema parse(String jsonSchema) {
-    return parse(parseJson(jsonSchema), new Names());
-  }
-
-  static final Map<String,Type> PRIMITIVES = new HashMap<String,Type>();
-  static {
-    PRIMITIVES.put("string",  Type.STRING);
-    PRIMITIVES.put("bytes",   Type.BYTES);
-    PRIMITIVES.put("int",     Type.INT);
-    PRIMITIVES.put("long",    Type.LONG);
-    PRIMITIVES.put("float",   Type.FLOAT);
-    PRIMITIVES.put("double",  Type.DOUBLE);
-    PRIMITIVES.put("boolean", Type.BOOLEAN);
-    PRIMITIVES.put("null",    Type.NULL);
-  }
-
-  static class Names extends LinkedHashMap<Name, Schema> {
-    private String space;                         // default namespace
-
-    public Names() {}
-    public Names(String space) { this.space = space; }
-
-    public String space() { return space; }
-    public void space(String space) { this.space = space; }
-
-    @Override
-    public Schema get(Object o) {
-      Name name;
-      if (o instanceof String) {
-        Type primitive = PRIMITIVES.get((String)o);
-        if (primitive != null) return Schema.create(primitive);
-        name = new Name((String)o, space);
-      } else {
-        name = (Name)o;
-      }
-      return super.get(name);
-    }
-    public boolean contains(Schema schema) {
-      return get(((NamedSchema)schema).name) != null;
-    }
-    public void add(Schema schema) {
-      put(((NamedSchema)schema).name, schema);
-    }
-    @Override
-    public Schema put(Name name, Schema schema) {
-      if (containsKey(name))
-        throw new SchemaParseException("Can't redefine: "+name);
-      return super.put(name, schema);
-    }
-  }
-
-  /** @see #parse(String) */
-  static Schema parse(JsonNode schema, Names names) {
-    if (schema.isTextual()) {                     // name
-      Schema result = names.get(schema.getTextValue());
-      if (result == null)
-        throw new SchemaParseException("Undefined name: "+schema);
-      return result;
-    } else if (schema.isObject()) {
-      Schema result;
-      String type = getRequiredText(schema, "type", "No type");
-      Name name = null;
-      String savedSpace = null;
-      String doc = null;
-      if (type.equals("record") || type.equals("error")
-          || type.equals("enum") || type.equals("fixed")) {
-        String space = getOptionalText(schema, "namespace");
-        doc = getOptionalText(schema, "doc");
-        if (space == null)
-          space = names.space();
-        name = new Name(getRequiredText(schema, "name", "No name in schema"),
-                        space);
-        if (name.space != null) {                 // set default namespace
-          savedSpace = names.space();
-          names.space(name.space);
-        }
-      }
-      if (PRIMITIVES.containsKey(type)) {         // primitive
-        result = create(PRIMITIVES.get(type));
-      } else if (type.equals("record") || type.equals("error")) { // record
-        List<Field> fields = new ArrayList<Field>();
-        result = new RecordSchema(name, doc, type.equals("error"));
-        if (name != null) names.add(result);
-        JsonNode fieldsNode = schema.get("fields");
-        if (fieldsNode == null || !fieldsNode.isArray())
-          throw new SchemaParseException("Record has no fields: "+schema);
-        for (JsonNode field : fieldsNode) {
-          String fieldName = getRequiredText(field, "name", "No field name");
-          String fieldDoc = getOptionalText(field, "doc");
-          JsonNode fieldTypeNode = field.get("type");
-          if (fieldTypeNode == null)
-            throw new SchemaParseException("No field type: "+field);
-          Schema fieldSchema = parse(fieldTypeNode, names);
-          Field.Order order = Field.Order.ASCENDING;
-          JsonNode orderNode = field.get("order");
-          if (orderNode != null)
-            order = Field.Order.valueOf(orderNode.getTextValue().toUpperCase());
-          fields.add(new Field(fieldName, fieldSchema,
-              fieldDoc, field.get("default"), order));
-        }
-        result.setFields(fields);
-      } else if (type.equals("enum")) {           // enum
-        JsonNode symbolsNode = schema.get("symbols");
-        if (symbolsNode == null || !symbolsNode.isArray())
-          throw new SchemaParseException("Enum has no symbols: "+schema);
-        LockableArrayList<String> symbols = new LockableArrayList<String>();
-        for (JsonNode n : symbolsNode)
-          symbols.add(n.getTextValue());
-        result = new EnumSchema(name, doc, symbols);
-        if (name != null) names.add(result);
-      } else if (type.equals("array")) {          // array
-        JsonNode itemsNode = schema.get("items");
-        if (itemsNode == null)
-          throw new SchemaParseException("Array has no items type: "+schema);
-        result = new ArraySchema(parse(itemsNode, names));
-      } else if (type.equals("map")) {            // map
-        JsonNode valuesNode = schema.get("values");
-        if (valuesNode == null)
-          throw new SchemaParseException("Map has no values type: "+schema);
-        result = new MapSchema(parse(valuesNode, names));
-      } else if (type.equals("fixed")) {          // fixed
-        JsonNode sizeNode = schema.get("size");
-        if (sizeNode == null || !sizeNode.isInt())
-          throw new SchemaParseException("Invalid or no size: "+schema);
-        result = new FixedSchema(name, doc, sizeNode.getIntValue());
-        if (name != null) names.add(result);
-      } else
-        throw new SchemaParseException("Type not supported: "+type);
-      Iterator<String> i = schema.getFieldNames();
-      while (i.hasNext()) {                       // add properties
-        String prop = i.next();
-        String value = schema.get(prop).getTextValue();
-        if (!RESERVED_PROPS.contains(prop) && value != null) // ignore reserved
-          result.addProp(prop, value);
-      }
-      if (savedSpace != null)
-        names.space(savedSpace);                  // restore space
-      return result;
-    } else if (schema.isArray()) {                // union
-      LockableArrayList<Schema> types =
-        new LockableArrayList<Schema>(schema.size());
-      for (JsonNode typeNode : schema)
-        types.add(parse(typeNode, names));
-      return new UnionSchema(types);
-    } else {
-      throw new SchemaParseException("Schema not yet supported: "+schema);
-    }
-  }
-
-  /** Extracts the text value associated with a key from the container
-   * JsonNode, and throws {@link SchemaParseException} if it doesn't exist.
-   *
-   * @param container Container in which to find the key.
-   * @param key Key to look for in the container.
-   * @param error String to prepend to the SchemaParseException message.
-   * @return The text value associated with the key.
-   */
-  private static String getRequiredText(JsonNode container, String key,
-      String error) {
-    String out = getOptionalText(container, key);
-    if (null == out) {
-      throw new SchemaParseException(error + ": " + container);
-    }
-    return out;
-  }
-
-  /** Extracts the text value associated with a key from the container JsonNode. */
-  private static String getOptionalText(JsonNode container, String key) {
-    JsonNode jsonNode = container.get(key);
-    return jsonNode != null ? jsonNode.getTextValue() : null;
-  }
-
-  static JsonNode parseJson(String s) {
-    try {
-      return MAPPER.readTree(FACTORY.createJsonParser(new StringReader(s)));
-    } catch (JsonParseException e) {
-      throw new RuntimeException(e);
-    } catch (IOException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  /**
-   * No change is permitted on a LockableArrayList once lock() has been
-   * called on it.
-   *
-   * <p>This class keeps a boolean variable <tt>locked</tt> which is set
-   * to <tt>true</tt> in the lock() method. It's legal to call lock()
-   * any number of times; any lock() other than the first is a no-op.
-   *
-   * <p>This class throws <tt>IllegalStateException</tt> if a mutating
-   * operation is performed after it has been locked. Since modifications
-   * through an iterator also use the list's mutating operations, this
-   * effectively blocks all modifications.
-   *
-   * @param <E> The element type of the list.
-   */
-  static class LockableArrayList<E> extends ArrayList<E> {
-    private static final long serialVersionUID = 1L;
-    private boolean locked = false;
-    
-    public LockableArrayList() {
-    }
-
-    public LockableArrayList(int size) {
-      super(size);
-    }
-
-    public LockableArrayList(List<E> types) {
-      super(types);
-    }
-
-    public List<E> lock() {
-      locked = true;
-      return this;
-    }
-
-    private void ensureUnlocked() {
-      if (locked) {
-        throw new IllegalStateException();
-      }
-    }
-
-    public boolean add(E e) {
-      ensureUnlocked();
-      return super.add(e);
-    }
-    
-    public boolean remove(Object o) {
-      ensureUnlocked();
-      return super.remove(o);
-    }
-    
-    public E remove(int index) {
-      ensureUnlocked();
-      return super.remove(index);
-    }
-      
-    public boolean addAll(Collection<? extends E> c) {
-      ensureUnlocked();
-      return super.addAll(c);
-    }
-    
-    public boolean addAll(int index, Collection<? extends E> c) {
-      ensureUnlocked();
-      return super.addAll(index, c);
-    }
-    
-    public boolean removeAll(Collection<?> c) {
-      ensureUnlocked();
-      return super.removeAll(c);
-    }
-    
-    public boolean retainAll(Collection<?> c) {
-      ensureUnlocked();
-      return super.retainAll(c);
-    }
-    
-    public void clear() {
-      ensureUnlocked();
-      super.clear();
-    }
-
-  }
-  
-}
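For reference, the lock-once pattern that `LockableArrayList` (the last inner class of the deleted Schema.java above) uses to make field, symbol, and union-branch lists immutable after construction can be sketched standalone. The class and method names below are hypothetical, and only one mutator is overridden for brevity; the real class guards every mutating operation:

```java
import java.util.ArrayList;

// Standalone sketch of the lock-once pattern used by Schema.LockableArrayList:
// mutators are permitted until lock() is first called, after which every
// mutating operation throws IllegalStateException.
public class LockableListSketch<E> extends ArrayList<E> {
  private boolean locked = false;

  public LockableListSketch<E> lock() {
    locked = true;                 // repeated lock() calls are no-ops
    return this;
  }

  private void ensureUnlocked() {
    if (locked) throw new IllegalStateException("list is locked");
  }

  @Override
  public boolean add(E e) {
    ensureUnlocked();
    return super.add(e);
  }

  public static void main(String[] args) {
    LockableListSketch<String> list = new LockableListSketch<String>();
    list.add("a");                 // allowed before lock()
    list.lock();
    boolean threw = false;
    try {
      list.add("b");               // rejected after lock()
    } catch (IllegalStateException expected) {
      threw = true;
    }
    if (!threw || list.size() != 1) throw new AssertionError();
  }
}
```

Because iterator-based modification funnels through the list's own mutators, locking the mutators is enough to freeze the list without copying it.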
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/SchemaParseException.java b/branch-1.3/lang/java/src/java/org/apache/avro/SchemaParseException.java
deleted file mode 100644
index 60a413d..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/SchemaParseException.java
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro;
-
-/** Thrown for errors parsing schemas and protocols. */
-public class SchemaParseException extends AvroRuntimeException {
-  public SchemaParseException(Throwable cause) { super(cause); }
-  public SchemaParseException(String message) { super(message); }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/Codec.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/Codec.java
deleted file mode 100644
index 7dad14e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/Codec.java
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-
-/** 
- * Interface for Avro-supported compression codecs for data files.
- *
- * This is currently exclusively an internal-facing API.
- */
-abstract class Codec {
-  /** Name of the codec; written to the file's metadata. */
-  abstract String getName();
-  /** Compresses the input data */
-  abstract ByteBuffer compress(ByteBuffer uncompressedData) throws IOException;
-  /** Decompress the data  */
-  abstract ByteBuffer decompress(ByteBuffer compressedData) throws IOException;
-  /** 
-   * Codecs must implement an equals() method.  Two codecs, A and B, are equal
-   * if: the result of A and B decompressing content compressed by A is the same
-   * AND the result of A and B decompressing content compressed by B is the same.
-   **/
-  @Override
-  public abstract boolean equals(Object other);
-  /** 
-   * Codecs must implement a hashCode() method that is consistent with equals().*/
-  @Override
-  public abstract int hashCode();
-  @Override
-  public String toString() {
-    return getName();
-  }
-}
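For reference, the contract the deleted `Codec` class spells out — a compress/decompress round trip must preserve the bytes, and `equals()`/`hashCode()` must be consistent — can be sketched as below. `IdentityCodec` is a hypothetical stand-in (Avro's `Codec` is package-private), modeled on the "null" codec.

```java
import java.nio.ByteBuffer;

// Hypothetical stand-in for Avro's package-private Codec, illustrating the
// contract: round-tripping bytes through compress/decompress preserves them,
// and equals()/hashCode() are consistent with each other.
public class IdentityCodec {
    String getName() { return "null"; }

    ByteBuffer compress(ByteBuffer uncompressed) {
        return uncompressed;                      // "null" codec: pass-through
    }

    ByteBuffer decompress(ByteBuffer compressed) {
        return compressed;
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof IdentityCodec;    // all null codecs behave alike
    }

    @Override
    public int hashCode() {
        return getName().hashCode();              // consistent with equals()
    }

    public static void main(String[] args) {
        IdentityCodec codec = new IdentityCodec();
        ByteBuffer data = ByteBuffer.wrap("avro".getBytes());
        ByteBuffer roundTrip = codec.decompress(codec.compress(data.duplicate()));
        System.out.println(roundTrip.equals(data));   // true: bytes preserved
    }
}
```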
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/CodecFactory.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/CodecFactory.java
deleted file mode 100644
index be1c942..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/CodecFactory.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.zip.Deflater;
-
-import org.apache.avro.AvroRuntimeException;
-
-/** Encapsulates the ability to specify and configure a compression codec. */
-public abstract class CodecFactory {
-  /** Null codec, for no compression. */
-  public static CodecFactory nullCodec() { 
-    return NullCodec.OPTION; 
-  };
-  
-  /** Deflate codec, with specific compression.
-   * compressionLevel should be between 1 and 9, inclusive. */
-  public static CodecFactory deflateCodec(int compressionLevel) { 
-    return new DeflateCodec.Option(compressionLevel); 
-  };
-  
-  /** Creates internal Codec. */
-  protected abstract Codec createInstance();
-  
-  /** Mapping of string names (stored in file metadata) to codecs. 
-   * Note that currently options (like compression level)
-   * are not recoverable. */
-  private static final Map<String, CodecFactory> REGISTERED = 
-    new HashMap<String, CodecFactory>();
-
-  private static final int DEFAULT_DEFLATE_LEVEL = Deflater.DEFAULT_COMPRESSION;
-
-  static {
-    addCodec("null", nullCodec());
-    addCodec("deflate", deflateCodec(DEFAULT_DEFLATE_LEVEL));
-  }
-
-  /** Maps a codec name to a CodecFactory. */
-  public static CodecFactory fromString(String s) {
-    CodecFactory o = REGISTERED.get(s);
-    if (o == null) {
-      throw new AvroRuntimeException("Unrecognized codec: " + s);
-    }
-    return o;
-  }
-  
-  /** Adds a new codec implementation.  If name already had
-   * a codec associated with it, returns the previous codec. */
-  public static CodecFactory addCodec(String name, CodecFactory c) {
-    return REGISTERED.put(name, c);
-  }
-  
-  @Override
-  public String toString() {
-    Codec instance = this.createInstance();
-    return instance.toString();
-  }
-  
-}
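The deleted `CodecFactory` keeps a static name-to-factory registry seeded with "null" and "deflate", and `fromString` rejects unknown names. A minimal sketch of that registry pattern, using plain strings as hypothetical stand-ins for the factory objects:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of CodecFactory's registry pattern: a static map from codec
// names (as stored in file metadata) to factories.  The String values here
// are hypothetical stand-ins for real factory instances.
public class CodecRegistry {
    private static final Map<String, String> REGISTERED =
        new HashMap<String, String>();

    static {
        // mirrors CodecFactory's static initializer
        REGISTERED.put("null", "NullCodec");
        REGISTERED.put("deflate", "DeflateCodec");
    }

    // mirrors CodecFactory.fromString: an unrecognized name is an error
    public static String fromString(String name) {
        String codec = REGISTERED.get(name);
        if (codec == null) {
            throw new RuntimeException("Unrecognized codec: " + name);
        }
        return codec;
    }

    public static void main(String[] args) {
        System.out.println(fromString("deflate"));   // DeflateCodec
    }
}
```

Note that, as the original comment says, options such as the deflate compression level are not stored in the file, so they cannot be recovered through this lookup.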
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileConstants.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileConstants.java
deleted file mode 100644
index b6c5758..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileConstants.java
+++ /dev/null
@@ -1,38 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.file;
-
-/**
- * Constants used in data files.
- */
-class DataFileConstants {
-  public static final byte VERSION = 1;
-  public static final byte[] MAGIC = new byte[] {
-    (byte)'O', (byte)'b', (byte)'j', VERSION
-  };
-  public static final long FOOTER_BLOCK = -1;
-  public static final int SYNC_SIZE = 16;
-  public static final int DEFAULT_SYNC_INTERVAL = 1000*SYNC_SIZE; 
-
-  public static final String SCHEMA = "avro.schema";
-  public static final String CODEC = "avro.codec";
-  public static final String NULL_CODEC = "null";
-  public static final String DEFLATE_CODEC = "deflate";
-
-}
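The four magic bytes in `DataFileConstants` ('O', 'b', 'j', then the version byte 1) are what a reader checks first; `DataFileStream.initialize` rejects anything else with "Not a data file." A self-contained sketch of that check (the helper name is hypothetical):

```java
import java.util.Arrays;

// Sketch of how a reader validates the container-file header against the
// magic bytes from DataFileConstants: 'O', 'b', 'j', then the version byte.
public class MagicCheck {
    static final byte VERSION = 1;
    static final byte[] MAGIC = new byte[] {
        (byte) 'O', (byte) 'b', (byte) 'j', VERSION
    };

    // mirrors DataFileStream.initialize(): exact prefix match required
    static boolean isAvroDataFile(byte[] header) {
        return header.length >= MAGIC.length
            && Arrays.equals(Arrays.copyOf(header, MAGIC.length), MAGIC);
    }

    public static void main(String[] args) {
        System.out.println(isAvroDataFile(new byte[] { 'O', 'b', 'j', 1 }));  // true
        System.out.println(isAvroDataFile("json".getBytes()));                // false
    }
}
```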
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileReader.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileReader.java
deleted file mode 100644
index 32654b8..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileReader.java
+++ /dev/null
@@ -1,169 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.IOException;
-import java.io.EOFException;
-import java.io.InputStream;
-import java.io.File;
-
-import org.apache.avro.io.DecoderFactory;
-import org.apache.avro.io.DatumReader;
-import static org.apache.avro.file.DataFileConstants.SYNC_SIZE;
-
-/** Random access to files written with {@link DataFileWriter}.
- * @see DataFileWriter
- */
-public class DataFileReader<D> extends DataFileStream<D> {
-  private SeekableInputStream sin;
-  private long blockStart;
-
-  /** Construct a reader for a file. */
-  public DataFileReader(File file, DatumReader<D> reader) throws IOException {
-    this(new SeekableFileInput(file), reader);
-  }
-
-  /** Construct a reader for a file. */
-  public DataFileReader(SeekableInput sin, DatumReader<D> reader)
-    throws IOException {
-    super(reader);
-    this.sin = new SeekableInputStream(sin);
-    initialize(this.sin);
-  }
-
-  /** Move to a specific, known synchronization point, one returned from {@link
-   * DataFileWriter#sync()} while writing.  If synchronization points were not
-   * saved while writing a file, use {@link #sync(long)} instead. */
-  public void seek(long position) throws IOException {
-    sin.seek(position);
-    vin = DecoderFactory.defaultFactory().createBinaryDecoder(this.sin, vin);
-    blockRemaining = 0;
-    blockStart = position;
-  }
-
-  /** Move to the next synchronization point after a position. To process a
-   * range of file entries, call this with the starting position, then check
-   * {@link #pastSync(long)} with the end point before each call to {@link
-   * #next()}. */
-  public void sync(long position) throws IOException {
-    seek(position);
-    try {
-      int i=0, b;
-      InputStream in = vin.inputStream();
-      vin.readFixed(syncBuffer);
-      do {
-        int j = 0;
-        for (; j < SYNC_SIZE; j++) {
-          if (sync[j] != syncBuffer[(i+j)%SYNC_SIZE])
-            break;
-        }
-        if (j == SYNC_SIZE) {                     // matched a complete sync
-          blockStart = position + i + SYNC_SIZE;
-          return;
-        }
-        b = in.read();
-        syncBuffer[i++%SYNC_SIZE] = (byte)b;
-      } while (b != -1);
-    } catch (EOFException e) {
-      // fall through
-    }
-    // no match or EOF: set block start to the end position
-    blockStart = sin.tell();
-  }
-
-  @Override
-  protected void blockFinished() throws IOException {
-    blockStart = sin.tell() - vin.inputStream().available();
-  }
-
-  /** Return true if past the next synchronization point after a position. */ 
-  public boolean pastSync(long position) throws IOException {
-    return ((blockStart >= position+SYNC_SIZE)||(blockStart >= sin.length()));
-  }
-
-  private static class SeekableInputStream extends InputStream 
-  implements SeekableInput {
-    private final byte[] oneByte = new byte[1];
-    private SeekableInput in;
-
-    SeekableInputStream(SeekableInput in) throws IOException {
-      this.in = in;
-    }
-    
-    @Override
-    public void seek(long p) throws IOException {
-      if (p < 0)
-        throw new IOException("Illegal seek: " + p);
-      in.seek(p);
-    }
-
-    @Override
-    public long tell() throws IOException {
-      return in.tell();
-    }
-
-    @Override
-    public long length() throws IOException {
-      return in.length();
-    }
-
-    @Override
-    public int read(byte[] b) throws IOException {
-      return in.read(b, 0, b.length);
-    }
-    
-    @Override
-    public int read(byte[] b, int off, int len) throws IOException {
-      return in.read(b, off, len);
-    }
-
-    @Override
-    public int read() throws IOException {
-      int n = read(oneByte, 0, 1);
-      if (n == 1) {
-        return oneByte[0] & 0xff;
-      } else {
-        return n;
-      }
-    }
-
-    @Override
-    public long skip(long skip) throws IOException {
-      long position = in.tell();
-      long length = in.length();
-      long remaining = length - position;
-      if (remaining > skip) {
-        in.seek(skip);
-        return in.tell() - position;
-      } else {
-        in.seek(remaining);
-        return in.tell() - position;
-      }
-    }
-
-    @Override
-    public int available() throws IOException {
-      long remaining = (in.length() - in.tell());
-      return (remaining > Integer.MAX_VALUE) ? Integer.MAX_VALUE
-          : (int) remaining;
-    }
-  }
-}
-
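`DataFileReader.sync(long)` above finds the next block boundary by reading one byte at a time into a 16-byte circular buffer and comparing it, starting at the oldest byte, against the file's sync marker. The same scan over an in-memory array (the `nextSync` helper is hypothetical, but the ring-buffer arithmetic mirrors the deleted code):

```java
// Sketch of the circular-buffer scan in DataFileReader.sync(long): fill a
// SYNC_SIZE ring buffer, then slide it one byte at a time until the 16-byte
// sync marker lines up.
public class SyncScan {
    static final int SYNC_SIZE = 16;

    // Returns the offset just past the first sync marker at or after start,
    // or -1 if none is found before the end of data.
    static long nextSync(byte[] data, int start, byte[] sync) {
        byte[] ring = new byte[SYNC_SIZE];
        int i = 0;
        int pos = start;
        // pre-fill the ring, as readFixed(syncBuffer) does
        for (; pos < data.length && i < SYNC_SIZE; pos++, i++) ring[i] = data[pos];
        if (i < SYNC_SIZE) return -1;             // fewer than 16 bytes left
        i = 0;                                    // bytes consumed past the fill
        while (true) {
            int j = 0;
            for (; j < SYNC_SIZE; j++) {
                // oldest byte lives at index i % SYNC_SIZE
                if (sync[j] != ring[(i + j) % SYNC_SIZE]) break;
            }
            if (j == SYNC_SIZE) return start + i + SYNC_SIZE;  // matched
            if (pos >= data.length) return -1;                 // "EOF"
            ring[i++ % SYNC_SIZE] = data[pos++];  // slide window by one byte
        }
    }

    public static void main(String[] args) {
        byte[] sync = new byte[SYNC_SIZE];
        for (int k = 0; k < SYNC_SIZE; k++) sync[k] = (byte) (k + 1);
        byte[] data = new byte[20];
        data[0] = 9; data[1] = 9;                 // two bytes before the marker
        System.arraycopy(sync, 0, data, 2, SYNC_SIZE);
        System.out.println(nextSync(data, 0, sync));   // 18: just past the marker
    }
}
```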
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileStream.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileStream.java
deleted file mode 100644
index 0edf0cb..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileStream.java
+++ /dev/null
@@ -1,269 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.EOFException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.Closeable;
-import java.io.UnsupportedEncodingException;
-import java.nio.ByteBuffer;
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.Map;
-import java.util.NoSuchElementException;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.Schema;
-import org.apache.avro.io.DecoderFactory;
-import org.apache.avro.io.BinaryDecoder;
-import org.apache.avro.io.DatumReader;
-
-/** Streaming access to files written by {@link DataFileWriter}.  Use {@link
- * DataFileReader} for file-based input.
- * @see DataFileWriter
- */
-public class DataFileStream<D> implements Iterator<D>, Iterable<D>, Closeable {
-
-  private Schema schema;
-  private DatumReader<D> reader;
-  private long blockSize;
-  private boolean availableBlock = false;
-
-  /** Decoder on raw input stream.  (Used for metadata.) */
-  BinaryDecoder vin;
-  /** Secondary decoder, for datums.
-   *  (Different from vin for block segments.) */
-  BinaryDecoder datumIn = null;
-
-  Map<String,byte[]> meta = new HashMap<String,byte[]>();
-
-  long blockRemaining;                          // # entries remaining in block
-  byte[] sync = new byte[DataFileConstants.SYNC_SIZE];
-  byte[] syncBuffer = new byte[DataFileConstants.SYNC_SIZE];
-  private Codec codec;
-
-  /** Construct a reader for an input stream.  For file-based input, use 
-   * {@link DataFileReader}.  This stream buffers internally, so wrapping
-   * with a {@link java.io.BufferedInputStream} is not necessary. */
-  public DataFileStream(InputStream in, DatumReader<D> reader)
-    throws IOException {
-    this.reader = reader;
-    initialize(in);
-  }
-
-  /**
-   * Create an uninitialized DataFileStream.
-   */
-  protected DataFileStream(DatumReader<D> reader) throws IOException {
-    this.reader = reader;
-  }
-  
-  void initialize(InputStream in) throws IOException {
-    this.vin = DecoderFactory.defaultFactory().createBinaryDecoder(in, vin);
-    byte[] magic = new byte[DataFileConstants.MAGIC.length];
-    try {
-      vin.readFixed(magic);                         // read magic
-    } catch (IOException e) {
-      throw new IOException("Not a data file.");
-    }
-    if (!Arrays.equals(DataFileConstants.MAGIC, magic))
-      throw new IOException("Not a data file.");
-
-    long l = vin.readMapStart();                  // read meta data
-    if (l > 0) {
-      do {
-        for (long i = 0; i < l; i++) {
-          String key = vin.readString(null).toString();
-          ByteBuffer value = vin.readBytes(null);
-          byte[] bb = new byte[value.remaining()];
-          value.get(bb);
-          meta.put(key, bb);
-        }
-      } while ((l = vin.mapNext()) != 0);
-    }
-    vin.readFixed(sync);                          // read sync
-
-    this.codec = resolveCodec();
-    this.schema = Schema.parse(getMetaString(DataFileConstants.SCHEMA));
-
-    reader.setSchema(schema);
-  }
-
-  Codec resolveCodec() {
-    String codecStr = getMetaString(DataFileConstants.CODEC);
-    if (codecStr != null) {
-      return CodecFactory.fromString(codecStr).createInstance();
-    } else {
-      return CodecFactory.nullCodec().createInstance();
-    }
-  }
-
-  /** Return the schema used in this file. */
-  public Schema getSchema() { return schema; }
-
-  /** Return the value of a metadata property. */
-  public byte[] getMeta(String key) {
-    return meta.get(key);
-  }
-  /** Return the value of a metadata property. */
-  public String getMetaString(String key) {
-    byte[] value = getMeta(key);
-    if (value == null) {
-      return null;
-    }
-    try {
-      return new String(value, "UTF-8");
-    } catch (UnsupportedEncodingException e) {
-      throw new RuntimeException(e);
-    }
-  }
-  /** Return the value of a metadata property. */
-  public long getMetaLong(String key) {
-    return Long.parseLong(getMetaString(key));
-  }
-
-  /** Returns an iterator over entries in this file.  Note that this iterator
-   * is shared with other users of the file: it does not contain a separate
-   * pointer into the file. */
-  public Iterator<D> iterator() { return this; }
-
-  private DataBlock block = null;
-  /** True if more entries remain in this file. */
-  public boolean hasNext() {
-    try {
-      if (blockRemaining == 0) {
-        // check that the previous block was finished
-        if (null != datumIn) {
-          boolean atEnd = datumIn.isEnd();
-          if (!atEnd) {
-            throw new IOException("Block read partially, the data may be corrupt");
-          }
-        }
-        if (hasNextBlock()) {
-          block = nextBlock(block);
-          ByteBuffer blockBuffer = ByteBuffer.wrap(block.data, 0, block.blockSize);
-          blockBuffer = codec.decompress(blockBuffer);
-          datumIn = DecoderFactory.defaultFactory().createBinaryDecoder(
-              blockBuffer.array(), blockBuffer.arrayOffset() +
-              blockBuffer.position(), blockBuffer.remaining(), datumIn);
-        }
-      }
-      return blockRemaining != 0;
-    } catch (EOFException e) {                    // at EOF
-      return false;
-    } catch (IOException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  /** Read the next datum in the file.
-   * @throws NoSuchElementException if no more remain in the file.
-   */
-  public D next() {
-    try {
-      return next(null);
-    } catch (IOException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  /** Read the next datum from the file.
-   * @param reuse an instance to reuse.
-   * @throws NoSuchElementException if no more remain in the file.
-   */
-  public D next(D reuse) throws IOException {
-    if (!hasNext())
-      throw new NoSuchElementException();
-    D result = reader.read(reuse, datumIn);
-    if (0 == --blockRemaining) {
-      blockFinished();
-    }
-    return result;
-  }
-
-  protected void blockFinished() throws IOException {
-    // nothing for the stream impl
-  }
-
-  boolean hasNextBlock() {
-    try {
-      if (availableBlock) return true;
-      if (vin.isEnd()) return false;
-      blockRemaining = vin.readLong();      // read block count
-      blockSize = vin.readLong();           // read block size
-      if (blockSize > Integer.MAX_VALUE ||
-          blockSize < 0) {
-        throw new IOException("Block size invalid or too large for this " +
-          "implementation: " + blockSize);
-      }
-      availableBlock = true;
-      return true;
-    } catch (EOFException eof) {
-      return false;
-    } catch (IOException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  DataBlock nextBlock(DataBlock reuse) throws IOException {
-    if (!hasNextBlock()) {
-      throw new NoSuchElementException();
-    }
-    if (reuse == null || reuse.data.length < (int) blockSize) {
-      reuse = new DataBlock(blockRemaining, (int) blockSize);
-    } else {
-      reuse.numEntries = blockRemaining;
-      reuse.blockSize = (int)blockSize;
-    }
-    // throws if it can't read the size requested
-    vin.readFixed(reuse.data, 0, reuse.blockSize);
-    skipSync();
-    availableBlock = false;
-    return reuse;
-  }
-
-  void skipSync() throws IOException {
-    vin.readFixed(syncBuffer);
-    if (!Arrays.equals(syncBuffer, sync))
-      throw new IOException("Invalid sync!");
-  }
-
-  /** Not supported. */
-  public void remove() { throw new UnsupportedOperationException(); }
-
-  /** Close this reader. */
-  public void close() throws IOException {
-    vin.inputStream().close();
-  }
-
-  static class DataBlock {
-    byte[] data;
-    long numEntries;
-    int blockSize;
-    DataBlock(long numEntries, int blockSize) {
-      this.data = new byte[blockSize];
-      this.numEntries = numEntries;
-      this.blockSize = blockSize;
-    }
-  }
-
-}
-
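Each block that `hasNextBlock`/`nextBlock` above consumes is framed as two longs (entry count, then byte size) read via `vin.readLong()`, followed by the payload and the 16-byte sync marker. Avro encodes those longs as zig-zag varints; a sketch of the decoding step (the helper and its cursor convention are hypothetical, but the encoding is Avro's):

```java
// Sketch of the zig-zag varint decoding behind BinaryDecoder.readLong, which
// DataFileStream uses to read each block's entry count and byte size.
public class VarintLong {
    // pos is a one-element cursor, advanced past the bytes consumed.
    static long readZigZagLong(byte[] buf, int[] pos) {
        long n = 0;
        int shift = 0;
        byte b;
        do {
            b = buf[pos[0]++];
            n |= (long) (b & 0x7f) << shift;      // 7 payload bits per byte
            shift += 7;
        } while ((b & 0x80) != 0);                // high bit set: more bytes
        return (n >>> 1) ^ -(n & 1);              // undo zig-zag: back to signed
    }

    public static void main(String[] args) {
        // zig-zag(1000) = 2000, encoded as the varint 0xD0 0x0F
        int[] pos = new int[] { 0 };
        System.out.println(readZigZagLong(new byte[] { (byte) 0xD0, 0x0F }, pos));  // 1000
    }
}
```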
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileWriter.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileWriter.java
deleted file mode 100644
index f749138..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/DataFileWriter.java
+++ /dev/null
@@ -1,371 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.BufferedOutputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.Closeable;
-import java.io.File;
-import java.io.FileDescriptor;
-import java.io.FileNotFoundException;
-import java.io.FileOutputStream;
-import java.io.FilterOutputStream;
-import java.io.Flushable;
-import java.io.IOException;
-import java.io.OutputStream;
-import java.io.RandomAccessFile;
-import java.io.UnsupportedEncodingException;
-import java.nio.ByteBuffer;
-import java.rmi.server.UID;
-import java.security.MessageDigest;
-import java.security.NoSuchAlgorithmException;
-import java.util.HashMap;
-import java.util.Map;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.Schema;
-import org.apache.avro.file.DataFileStream.DataBlock;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.io.Encoder;
-
-/** Stores in a file a sequence of data conforming to a schema.  The schema is
- * stored in the file with the data.  Each datum in a file is of the same
- * schema.  Data is written with a {@link DatumWriter}.  Data is grouped into
- * <i>blocks</i>.  A synchronization marker is written between blocks, so that
- * files may be split.  Blocks may be compressed.  Extensible metadata is
- * stored at the end of the file.  Files may be appended to.
- * @see DataFileReader
- */
-public class DataFileWriter<D> implements Closeable, Flushable {
-  private Schema schema;
-  private DatumWriter<D> dout;
-
-  private BufferedFileOutputStream out;
-  private Encoder vout;
-
-  private final Map<String,byte[]> meta = new HashMap<String,byte[]>();
-
-  private long blockCount;                       // # entries in current block
-
-  private NonCopyingByteArrayOutputStream buffer;
-  private Encoder bufOut;
-
-  private byte[] sync;                          // 16 random bytes
-  private int syncInterval = DataFileConstants.DEFAULT_SYNC_INTERVAL;
-
-  private boolean isOpen;
-  private Codec codec;
-
-  /** Construct a writer, not yet open. */
-  public DataFileWriter(DatumWriter<D> dout) {
-    this.dout = dout;
-  }
-  
-  private void assertOpen() {
-    if (!isOpen) throw new AvroRuntimeException("not open");
-  }
-  private void assertNotOpen() {
-    if (isOpen) throw new AvroRuntimeException("already open");
-  }
-  
-  /** 
-   * Configures this writer to use the given codec. 
-   * May not be reset after writes have begun.
-   */
-  public DataFileWriter<D> setCodec(CodecFactory c) {
-    assertNotOpen();
-    this.codec = c.createInstance();
-    setMetaInternal(DataFileConstants.CODEC, codec.getName());
-    return this;
-  }
-
-  /**
-   * Set the synchronization interval for this file, in bytes. 
-   * Valid values range from 32 to 2^30, inclusive.
-   * Suggested values are between 2K and 2M.
-   * 
-   * Invalid values throw an IllegalArgumentException.
-   * 
-   * @param syncInterval 
-   *   the approximate number of uncompressed bytes to write in each block
-   * @return 
-   *   this DataFileWriter
-   */
-  public DataFileWriter<D> setSyncInterval(int syncInterval) {
-    if (syncInterval < 32 || syncInterval > (1 << 30)) {
-      throw new IllegalArgumentException("Invalid syncInterval value: " + syncInterval);
-    }
-    this.syncInterval = syncInterval;
-    return this;
-  }
-
-  /** Open a new file for data matching a schema. */
-  public DataFileWriter<D> create(Schema schema, File file) throws IOException {
-    return create(schema, new FileOutputStream(file));
-  }
-
-  /** Open a new file for data matching a schema. */
-  public DataFileWriter<D> create(Schema schema, OutputStream outs)
-    throws IOException {
-    assertNotOpen();
-
-    this.schema = schema;
-    setMetaInternal(DataFileConstants.SCHEMA, schema.toString());
-    this.sync = generateSync();
-
-    init(outs);
-
-    out.write(DataFileConstants.MAGIC);           // write magic
-
-    vout.writeMapStart();                         // write metadata
-    vout.setItemCount(meta.size());
-    for (Map.Entry<String,byte[]> entry : meta.entrySet()) {
-      vout.startItem();
-      vout.writeString(entry.getKey());
-      vout.writeBytes(entry.getValue());
-    }
-    vout.writeMapEnd();
-    vout.flush(); //vout may be buffered, flush before writing to out
-
-    out.write(sync);                              // write initial sync
-
-    return this;
-  }
-
-  /** Open a writer appending to an existing file. */
-  public DataFileWriter<D> appendTo(File file) throws IOException {
-    assertNotOpen();
-    if (!file.exists())
-      throw new FileNotFoundException("Not found: "+file);
-    RandomAccessFile raf = new RandomAccessFile(file, "r");
-    FileDescriptor fd = raf.getFD();
-    DataFileReader<D> reader =
-      new DataFileReader<D>(new SeekableFileInput(fd),
-                            new GenericDatumReader<D>());
-    this.schema = reader.getSchema();
-    this.sync = reader.sync;
-    this.meta.putAll(reader.meta);
-    byte[] codecBytes = this.meta.get(DataFileConstants.CODEC);
-    if (codecBytes != null) {
-      String strCodec = new String(codecBytes, "UTF-8");
-      this.codec = CodecFactory.fromString(strCodec).createInstance();
-    } else {
-      this.codec = CodecFactory.nullCodec().createInstance();
-    }
-    raf.close();
-
-    init(new FileOutputStream(file, true));
-
-    return this;
-  }
-
-  private void init(OutputStream outs) throws IOException {
-    this.out = new BufferedFileOutputStream(outs);
-    this.vout = new BinaryEncoder(out);
-    dout.setSchema(schema);
-    buffer = new NonCopyingByteArrayOutputStream(
-        Math.min((int)(syncInterval * 1.25), Integer.MAX_VALUE/2 -1));
-    this.bufOut = new BinaryEncoder(buffer);
-    if (this.codec == null) {
-      this.codec = CodecFactory.nullCodec().createInstance();
-    }
-    this.isOpen = true;
-  }
-
-  private static byte[] generateSync() {
-    try {
-      MessageDigest digester = MessageDigest.getInstance("MD5");
-      long time = System.currentTimeMillis();
-      digester.update((new UID()+"@"+time).getBytes());
-      return digester.digest();
-    } catch (NoSuchAlgorithmException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  private DataFileWriter<D> setMetaInternal(String key, byte[] value) {
-    assertNotOpen();
-    meta.put(key, value);
-    return this;
-  }
-  
-  public DataFileWriter<D> setMetaInternal(String key, String value) {
-    try {
-      return setMetaInternal(key, value.getBytes("UTF-8"));
-    } catch (UnsupportedEncodingException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  /** Set a metadata property. */
-  public DataFileWriter<D> setMeta(String key, byte[] value) {
-    if (isReserved(key)) {
-      throw new AvroRuntimeException("Cannot set reserved meta key: " + key);
-    }
-    return setMetaInternal(key, value);
-  }
-  
-  private boolean isReserved(String key) {
-    return key.startsWith("avro.");
-  }
-
-  /** Set a metadata property. */
-  public DataFileWriter<D> setMeta(String key, String value) {
-    try {
-      return setMeta(key, value.getBytes("UTF-8"));
-    } catch (UnsupportedEncodingException e) {
-      throw new RuntimeException(e);
-    }
-  }
-  /** Set a metadata property. */
-  public DataFileWriter<D> setMeta(String key, long value) {
-    return setMeta(key, Long.toString(value));
-  }
-
-  /** Append a datum to the file. */
-  public void append(D datum) throws IOException {
-    assertOpen();
-    dout.write(datum, bufOut);
-    blockCount++;
-    if (buffer.size() >= syncInterval)
-      writeBlock();
-  }
-
-  /**
-   * Appends data from another file.  otherFile must have the same schema.
-   * Data blocks will be copied without de-serializing data.  If the codecs
-   * of the two files are compatible, data blocks are copied directly without
-   * decompression.  If the codecs are not compatible, blocks from otherFile
-   * are uncompressed and then compressed using this file's codec.
-   * <p/>
-   * If the recompress flag is set all blocks are decompressed and then compressed
-   * using this file's codec.  This is useful when the two files have compatible
-   * compression codecs but different codec options.  For example, one might
-   * append a file compressed with deflate at compression level 1 to a file with
-   * deflate at compression level 7.  If <i>recompress</i> is false, blocks
-   * will be copied without changing the compression level.  If true, they will
-   * be converted to the new compression level.
-   * @param otherFile
-   * @param recompress
-   * @throws IOException
-   */
-  public void appendAllFrom(DataFileStream<D> otherFile, boolean recompress) throws IOException {
-    assertOpen();
-    // make sure other file has same schema
-    Schema otherSchema = otherFile.getSchema();
-    if (!this.schema.equals(otherSchema)) {
-      throw new IOException("Schema from file " + otherFile + " does not match");
-    }
-    // flush anything written so far
-    writeBlock();
-    Codec otherCodec = otherFile.resolveCodec();
-    DataBlock nextBlockRaw = null;
-    if (codec.equals(otherCodec) && !recompress) {
-      // copy raw bytes
-      while(otherFile.hasNextBlock()) {
-        nextBlockRaw = otherFile.nextBlock(nextBlockRaw);
-        writeRawBlock(nextBlockRaw);
-      }
-    } else {
-      while(otherFile.hasNextBlock()) {
-        nextBlockRaw = otherFile.nextBlock(nextBlockRaw);
-        ByteBuffer uncompressedData = otherCodec.decompress(ByteBuffer.wrap(
-            nextBlockRaw.data, 0, nextBlockRaw.blockSize));
-        ByteBuffer compressed = codec.compress(uncompressedData);
-        nextBlockRaw.data = compressed.array();
-        nextBlockRaw.blockSize = compressed.remaining();
-        writeRawBlock(nextBlockRaw);
-      }
-    }
-  }
-  
-  private void writeRawBlock(DataBlock rawBlock) throws IOException {
-    vout.writeLong(rawBlock.numEntries);
-    vout.writeLong(rawBlock.blockSize);
-    vout.writeFixed(rawBlock.data, 0, rawBlock.blockSize);
-    vout.writeFixed(sync);
-  }
-  
-  private void writeBlock() throws IOException {
-    if (blockCount > 0) {
-      vout.writeLong(blockCount);
-      ByteBuffer uncompressed = buffer.getByteArrayAsByteBuffer();
-      ByteBuffer block = codec.compress(uncompressed);
-      vout.writeLong(block.remaining());
-      vout.writeFixed(block.array(), block.position() + block.arrayOffset(), 
-          block.remaining());
-      buffer.reset();
-      blockCount = 0;
-      vout.writeFixed(sync);
-    }
-  }
-
-  /** Return the current position as a value that may be passed to {@link
-   * DataFileReader#seek(long)}.  Forces the end of the current block,
-   * emitting a synchronization marker. */
-  public long sync() throws IOException {
-    assertOpen();
-    writeBlock();
-    return out.tell();
-  }
-
-  /** Flush the current state of the file. */
-  public void flush() throws IOException {
-    sync();
-    vout.flush();
-    out.flush();
-  }
-
-  /** Close the file. */
-  public void close() throws IOException {
-    flush();
-    out.close();
-    isOpen = false;
-  }
-
-  private class BufferedFileOutputStream extends BufferedOutputStream {
-    private long position;                         // start of buffer
-
-    private class PositionFilter extends FilterOutputStream {
-      public PositionFilter(OutputStream out) throws IOException { super(out); }
-      public void write(byte[] b, int off, int len) throws IOException {
-        out.write(b, off, len);
-        position += len;                           // update on write
-      }
-    }
-
-    public BufferedFileOutputStream(OutputStream out) throws IOException {
-      super(null);
-      this.out = new PositionFilter(out);
-    }
-
-    public long tell() { return position+count; }
-  }
-
-  static class NonCopyingByteArrayOutputStream extends ByteArrayOutputStream {
-    NonCopyingByteArrayOutputStream(int initialSize) {
-      super(initialSize);
-    }
-    ByteBuffer getByteArrayAsByteBuffer() {
-      return ByteBuffer.wrap(buf, 0, count);
-    }
-  }
-
-}
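The `appendAllFrom` javadoc above describes a two-path copy: raw block transfer when the codecs are compatible and `recompress` is false, otherwise decompress-then-recompress with this file's codec. A minimal Python sketch of that decision logic follows; the codec classes here are illustrative stand-ins, not Avro's `Codec` API. Note that, like the Java `DeflateCodec.equals` below (which compares only `nowrap`), the stand-in treats two deflate codecs at different levels as equal — which is exactly why the `recompress` flag exists.

```python
import zlib

class NullCodec:
    """Stand-in for Avro's null codec: bytes pass through untouched."""
    def compress(self, data): return data
    def decompress(self, data): return data
    def __eq__(self, other): return type(self) is type(other)

class DeflateCodec:
    """Stand-in for Avro's deflate codec (raw RFC 1951, no zlib wrapper)."""
    def __init__(self, level=6): self.level = level
    def compress(self, data):
        c = zlib.compressobj(self.level, zlib.DEFLATED, -15)
        return c.compress(data) + c.flush()
    def decompress(self, data):
        return zlib.decompressobj(-15).decompress(data)
    # Mirrors the Java equals(): compression level is NOT compared,
    # so deflate-level-1 and deflate-level-7 count as "compatible".
    def __eq__(self, other): return type(self) is type(other)

def append_blocks(blocks, other_codec, own_codec, recompress=False):
    """Mimic appendAllFrom: copy raw blocks when the codecs are
    compatible and recompress is off; otherwise decompress with the
    source codec and recompress with this file's codec."""
    out = []
    for block in blocks:
        if own_codec == other_codec and not recompress:
            out.append(block)   # raw byte copy, no codec work at all
        else:
            out.append(own_codec.compress(other_codec.decompress(block)))
    return out
```

With `recompress=True`, even "compatible" deflate blocks are pushed through decompress/compress, converting e.g. level-1 blocks into level-7 blocks as the javadoc describes.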
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/DeflateCodec.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/DeflateCodec.java
deleted file mode 100644
index 29bd8a9..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/DeflateCodec.java
+++ /dev/null
@@ -1,145 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.ByteArrayInputStream;
-import java.io.EOFException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.ByteBuffer;
-import java.util.zip.Deflater;
-import java.util.zip.DeflaterOutputStream;
-import java.util.zip.Inflater;
-import java.util.zip.InflaterInputStream;
-
-import org.apache.avro.file.DataFileWriter.NonCopyingByteArrayOutputStream;
-
-/** 
- * Implements DEFLATE (RFC1951) compression and decompression. 
- *
- * Note that there is a distinction between RFC1951 (deflate)
- * and RFC1950 (zlib).  zlib adds an extra 2-byte header
- * at the front, and a 4-byte checksum at the end.  The
- * code here, by passing "true" as the "nowrap" option to
- * {@link Inflater} and {@link Deflater}, is using
- * RFC1951.
- */
-class DeflateCodec extends Codec {
-  
-  static class Option extends CodecFactory {
-    private int compressionLevel;
-
-    public Option(int compressionLevel) {
-      this.compressionLevel = compressionLevel;
-    }
-
-    @Override
-    protected Codec createInstance() {
-      return new DeflateCodec(compressionLevel);
-    }
-  }
-
-  NonCopyingByteArrayOutputStream compressionBuffer;
-  private Deflater deflater;
-  private Inflater inflater;
-  //currently only do 'nowrap' -- RFC 1951, not zlib
-  private boolean nowrap = true; 
-  private int compressionLevel;
-
-  public DeflateCodec(int compressionLevel) {
-    this.compressionLevel = compressionLevel;
-  }
-
-  @Override
-  String getName() {
-    return DataFileConstants.DEFLATE_CODEC;
-  }
-
-  @Override
-  ByteBuffer compress(ByteBuffer data) throws IOException {
-    if (compressionBuffer == null) {
-      compressionBuffer = new NonCopyingByteArrayOutputStream(
-          data.remaining());
-    }
-    if (deflater == null) {
-      deflater = new Deflater(compressionLevel, nowrap);
-    }
-    // Pass output through deflate
-    DeflaterOutputStream deflaterStream = 
-      new DeflaterOutputStream(compressionBuffer, deflater);
-    deflaterStream.write(data.array(),
-        data.position() + data.arrayOffset(),
-        data.limit() + data.arrayOffset());
-    deflaterStream.finish();
-    ByteBuffer result = compressionBuffer.getByteArrayAsByteBuffer();
-    deflater.reset();
-    compressionBuffer.reset();
-    return result;
-  }
-
-  @Override
-  ByteBuffer decompress(ByteBuffer data) throws IOException {
-    if (compressionBuffer == null) {
-      compressionBuffer = new NonCopyingByteArrayOutputStream(
-          data.remaining());
-    }
-    if (inflater == null) {
-      inflater = new Inflater(nowrap);
-    }
-    InputStream uncompressed = new InflaterInputStream(
-        new ByteArrayInputStream(data.array(),
-            data.position() + data.arrayOffset(),
-            data.remaining()), inflater);
-    int read;
-    byte[] buff = new byte[2048];
-    try {
-      while (true) {
-        read = uncompressed.read(buff);
-        if (read < 0) break;
-        compressionBuffer.write(buff, 0, read);
-      } 
-    } catch (EOFException e) {
-      // sometimes InflaterInputStream.read
-      // throws this instead of returning -1
-    }
-    ByteBuffer result = compressionBuffer.getByteArrayAsByteBuffer();
-    inflater.reset();
-    compressionBuffer.reset();
-    return result;
-  }
-
-  @Override
-  public int hashCode() {
-    return nowrap ? 0 : 1;
-  }
-
-  @Override
-  public boolean equals(Object obj) {
-    if (this == obj)
-      return true;
-    if (getClass() != obj.getClass())
-      return false;
-    DeflateCodec other = (DeflateCodec)obj;
-    return (this.nowrap == other.nowrap);
-  }
-
-  @Override
-  public String toString() {
-    return getName() + "-" + compressionLevel;
-  }
-}
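The class comment above distinguishes raw RFC 1951 deflate (the `nowrap=true` mode this codec uses) from RFC 1950 zlib framing, which adds a 2-byte header and a 4-byte checksum. Python's `zlib` exposes the same switch through the sign of the window-bits argument, so the distinction can be demonstrated directly; this is a stdlib illustration of the format difference, not Avro code.

```python
import zlib

def deflate(data: bytes, level: int = 6, nowrap: bool = True) -> bytes:
    """Compress as DeflateCodec does: nowrap=True yields a raw RFC 1951
    stream; nowrap=False adds the RFC 1950 zlib header and checksum."""
    wbits = -zlib.MAX_WBITS if nowrap else zlib.MAX_WBITS
    c = zlib.compressobj(level, zlib.DEFLATED, wbits)
    return c.compress(data) + c.flush()

def inflate(data: bytes, nowrap: bool = True) -> bytes:
    """Decompress, selecting raw deflate or zlib framing to match."""
    wbits = -zlib.MAX_WBITS if nowrap else zlib.MAX_WBITS
    return zlib.decompressobj(wbits).decompress(data)
```

A raw-deflate reader cannot consume a zlib-framed stream (and vice versa), which is why the codec must pick one convention and why Avro's deflate codec standardizes on RFC 1951.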
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/LengthLimitedInputStream.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/LengthLimitedInputStream.java
deleted file mode 100644
index a4b4199..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/LengthLimitedInputStream.java
+++ /dev/null
@@ -1,85 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.FilterInputStream;
-import java.io.IOException;
-import java.io.InputStream;
-
-/** Represents a substream of certain length. */
-class LengthLimitedInputStream extends FilterInputStream {
-
-  /** Bytes remaining. */
-  private long remaining;
-
-  protected LengthLimitedInputStream(InputStream in, long maxLength) {
-    super(in);
-    remaining = maxLength;
-  }
-
-  @Override
-  public int read() throws IOException {
-    if (remaining > 0) {
-      int v = super.read();
-      if (v != -1) {
-        remaining--;
-      }
-      return v;
-    }
-    return -1;
-  }
-
-  @Override
-  public int read(byte[] b) throws IOException {
-    return read(b, 0, b.length);
-  }
-
-  /**
-   * Returns at most Integer.MAX_VALUE.
-   */
-  private int remainingInt() {
-    return (int)Math.min(remaining, Integer.MAX_VALUE);
-  }
-
-  @Override
-  public int read(byte[] b, int off, int len) throws IOException {
-    if (remaining == 0) {
-      return -1;
-    }
-    if (len > remaining) {
-      len = remainingInt();
-    }
-    int v = super.read(b, off, len);
-    if (v != -1) {
-      remaining -= v;
-    }
-    return v;
-  }
-
-  @Override
-  public int available() throws IOException {
-    return Math.min(super.available(), remainingInt());
-  }
-
-  @Override
-  public long skip(long n) throws IOException {
-    long v = super.skip(Math.min(remaining, n));
-    remaining -= v;
-    return v;
-  }
-}
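`LengthLimitedInputStream` above presents a fixed-length window of an underlying stream: reads are capped at the remaining budget and end-of-stream is reported once it is spent, without consuming bytes past the limit. A short Python sketch of the same idea (names are illustrative, not part of any Avro API):

```python
import io

class LengthLimitedReader:
    """Expose at most max_length bytes of an underlying stream,
    then report end-of-stream, leaving later bytes unconsumed."""
    def __init__(self, raw, max_length):
        self.raw = raw
        self.remaining = max_length

    def read(self, n=-1):
        if self.remaining <= 0:
            return b""          # EOF once the byte budget is spent
        if n < 0 or n > self.remaining:
            n = self.remaining  # clamp the request to the budget
        chunk = self.raw.read(n)
        self.remaining -= len(chunk)
        return chunk
```

This is the same clamping the Java `read(byte[], int, int)` override performs with `remainingInt()`, which also caps the reported budget at `Integer.MAX_VALUE` for the `int`-returning stream methods.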
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/NullCodec.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/NullCodec.java
deleted file mode 100644
index 307b973..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/NullCodec.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-
-/** Implements "null" (pass through) codec. */
-final class NullCodec extends Codec {
-  
-  private static final NullCodec INSTANCE = new NullCodec();
-
-  static class Option extends CodecFactory {
-    @Override
-    protected Codec createInstance() {
-      return INSTANCE;
-    }
-  }
-
-  /** No options available for NullCodec. */
-  public static final CodecFactory OPTION = new Option();
-
-  @Override
-  String getName() {
-    return DataFileConstants.NULL_CODEC;
-  }
-
-  @Override
-  ByteBuffer compress(ByteBuffer buffer) throws IOException {
-    return buffer;
-  }
-
-  @Override
-  ByteBuffer decompress(ByteBuffer data) throws IOException {
-    return data;
-  }
-
-  @Override
-  public boolean equals(Object other) {
-    if (this == other)
-      return true;
-    return (this.getClass() == other.getClass());
-  }
-
-  @Override
-  public int hashCode() {
-    return 2;
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/SeekableFileInput.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/SeekableFileInput.java
deleted file mode 100644
index afa0675..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/SeekableFileInput.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileDescriptor;
-import java.io.IOException;
-
-/** A {@link FileInputStream} that implements {@link SeekableInput}. */
-public class SeekableFileInput
-  extends FileInputStream implements SeekableInput {
-
-  public SeekableFileInput(File file) throws IOException { super(file); }
-  public SeekableFileInput(FileDescriptor fd) throws IOException { super(fd); }
-
-  public void seek(long p) throws IOException { getChannel().position(p); }
-  public long tell() throws IOException { return getChannel().position(); }
-  public long length() throws IOException { return getChannel().size(); }
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/SeekableInput.java b/branch-1.3/lang/java/src/java/org/apache/avro/file/SeekableInput.java
deleted file mode 100644
index 73fa69a..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/SeekableInput.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import java.io.IOException;
-
-/** An InputStream that supports seek and tell. */
-public interface SeekableInput {
-
-  /** Set the position for the next {@link java.io.InputStream#read(byte[],int,int) read()}. */
-  void seek(long p) throws IOException;
-
-  /** Return the position of the next {@link java.io.InputStream#read(byte[],int,int) read()}. */
-  long tell() throws IOException;
-
-  /** Return the length of the file. */
-  long length() throws IOException;
-
-  /** Equivalent to {@link java.io.InputStream#read(byte[],int,int)}. */
-  int read(byte[] b, int off, int len) throws IOException;
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/file/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/file/package.html
deleted file mode 100644
index be67e66..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/file/package.html
+++ /dev/null
@@ -1,23 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-A container file for Avro data.
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/genavro/genavro.jj b/branch-1.3/lang/java/src/java/org/apache/avro/genavro/genavro.jj
deleted file mode 100644
index 7b2f4c3..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/genavro/genavro.jj
+++ /dev/null
@@ -1,1374 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- * Some portions of this file were modeled after the example Java 1.5
- * parser included with JavaCC. The following license applies to those
- * portions:
- *
- * Copyright (c) 2006, Sun Microsystems, Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- *     * Redistributions of source code must retain the above copyright notice,
- *       this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of the Sun Microsystems, Inc. nor the names of its
- *       contributors may be used to endorse or promote products derived from
- *       this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
- * THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-options {
-  JAVA_UNICODE_ESCAPE = true;
-  UNICODE_INPUT = true;
-  ERROR_REPORTING = true;
-  STATIC = false;
-  FORCE_LA_CHECK = true;
-  JDK_VERSION = "1.6";
-}
-
-PARSER_BEGIN(GenAvro)
-
-package org.apache.avro.genavro;
-
-import java.io.*;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.*;
-import org.apache.avro.Protocol;
-import org.apache.avro.Protocol.*;
-
-import org.apache.commons.lang.StringEscapeUtils;
-
-/**
- * Grammar to parse a higher-level language into an Avro Schema.
- *
- * Note: each instance is not thread-safe, but multiple separate
- * instances are safely independent.
- */
-public class GenAvro
-{
-  String namespace;
-  Map<String,Schema> names = new LinkedHashMap<String,Schema>();
-}
-
-PARSER_END(GenAvro)
-
-/* WHITE SPACE */
-
-SKIP :
-{
-  " "
-| "\t"
-| "\n"
-| "\r"
-| "\f"
-}
-
-/* COMMENTS */
-
-MORE :
-{
-  <"/**" ~["/"]> { input_stream.backup(1); } : IN_FORMAL_COMMENT
-|
-  "/*" : IN_MULTI_LINE_COMMENT
-}
-
-SPECIAL_TOKEN :
-{
-  <SINGLE_LINE_COMMENT: "//" (~["\n", "\r"])* ("\n" | "\r" | "\r\n")?>
-}
-
-<IN_FORMAL_COMMENT>
-SPECIAL_TOKEN :
-{
-  <FORMAL_COMMENT: "*/" > : DEFAULT
-}
-
-<IN_MULTI_LINE_COMMENT>
-SPECIAL_TOKEN :
-{
-  <MULTI_LINE_COMMENT: "*/" > : DEFAULT
-}
-
-<IN_FORMAL_COMMENT,IN_MULTI_LINE_COMMENT>
-MORE :
-{
-  < ~[] >
-}
-
-/* RESERVED WORDS AND LITERALS */
-
-TOKEN :
-{
-  < ARRAY: "array" >
-| < BOOLEAN: "boolean" >
-| < DOUBLE: "double" >
-| < ENUM: "enum" >
-| < ERROR: "error" >
-| < FALSE: "false" >
-| < FIXED: "fixed" >
-| < FLOAT: "float" >
-| < INT: "int" >
-| < LONG: "long" >
-| < MAP: "map" >
-| < BYTES: "bytes" >
-| < STRING: "string" >
-| < NULL: "null" >
-| < PROTOCOL: "protocol" >
-| < RECORD: "record" >
-| < THROWS: "throws" >
-| < TRUE: "true" >
-| < UNION: "union" >
-| < VOID: "void" >
-}
-
-/* LITERALS */
-
-TOKEN :
-{
-  < INTEGER_LITERAL:
-        <DECIMAL_LITERAL> (["l","L"])?
-      | <HEX_LITERAL> (["l","L"])?
-      | <OCTAL_LITERAL> (["l","L"])?
-  >
-|
-  < #DECIMAL_LITERAL: ["1"-"9"] (["0"-"9"])* >
-|
-  < #HEX_LITERAL: "0" ["x","X"] (["0"-"9","a"-"f","A"-"F"])+ >
-|
-  < #OCTAL_LITERAL: "0" (["0"-"7"])* >
-|
-  < FLOATING_POINT_LITERAL:
-        <DECIMAL_FLOATING_POINT_LITERAL>
-      | <HEXADECIMAL_FLOATING_POINT_LITERAL>
-  >
-|
-  < #DECIMAL_FLOATING_POINT_LITERAL:
-        (["0"-"9"])+ "." (["0"-"9"])* (<DECIMAL_EXPONENT>)? (["f","F","d","D"])?
-      | "." (["0"-"9"])+ (<DECIMAL_EXPONENT>)? (["f","F","d","D"])?
-      | (["0"-"9"])+ <DECIMAL_EXPONENT> (["f","F","d","D"])?
-      | (["0"-"9"])+ (<DECIMAL_EXPONENT>)? ["f","F","d","D"]
-  >
-|
-  < #DECIMAL_EXPONENT: ["e","E"] (["+","-"])? (["0"-"9"])+ >
-|
-  < #HEXADECIMAL_FLOATING_POINT_LITERAL:
-        "0" ["x", "X"] (["0"-"9","a"-"f","A"-"F"])+ (".")? <HEXADECIMAL_EXPONENT> (["f","F","d","D"])?
-      | "0" ["x", "X"] (["0"-"9","a"-"f","A"-"F"])* "." (["0"-"9","a"-"f","A"-"F"])+ <HEXADECIMAL_EXPONENT> (["f","F","d","D"])?
-  >
-|
-  < #HEXADECIMAL_EXPONENT: ["p","P"] (["+","-"])? (["0"-"9"])+ >
-|
-  < CHARACTER_LITERAL:
-      "'"
-      (   (~["'","\\","\n","\r"])
-        | ("\\"
-            ( ["n","t","b","r","f","\\","'","\""]
-            | ["0"-"7"] ( ["0"-"7"] )?
-            | ["0"-"3"] ["0"-"7"] ["0"-"7"]
-            )
-          )
-      )
-      "'"
-  >
-|
-  < STRING_LITERAL:
-      "\""
-      (   (~["\"","\\","\n","\r"])
-        | ("\\"
-            ( ["n","t","b","r","f","\\","'","\""]
-            | ["0"-"7"] ( ["0"-"7"] )?
-            | ["0"-"3"] ["0"-"7"] ["0"-"7"]
-            )
-          )
-      )*
-      "\""
-  >
-}
-
-/* IDENTIFIERS */
-
-TOKEN :
-{
-  < IDENTIFIER: <LETTER> (<PART_LETTER>)* >
-|
-  < #LETTER:
-      [  // all chars for which Character.isIdentifierStart is true
-         "$",
-         "A"-"Z",
-         "_",
-         "a"-"z",
-         "\u00a2"-"\u00a5",
-         "\u00aa",
-         "\u00b5",
-         "\u00ba",
-         "\u00c0"-"\u00d6",
-         "\u00d8"-"\u00f6",
-         "\u00f8"-"\u021f",
-         "\u0222"-"\u0233",
-         "\u0250"-"\u02ad",
-         "\u02b0"-"\u02b8",
-         "\u02bb"-"\u02c1",
-         "\u02d0"-"\u02d1",
-         "\u02e0"-"\u02e4",
-         "\u02ee",
-         "\u037a",
-         "\u0386",
-         "\u0388"-"\u038a",
-         "\u038c",
-         "\u038e"-"\u03a1",
-         "\u03a3"-"\u03ce",
-         "\u03d0"-"\u03d7",
-         "\u03da"-"\u03f3",
-         "\u0400"-"\u0481",
-         "\u048c"-"\u04c4",
-         "\u04c7"-"\u04c8",
-         "\u04cb"-"\u04cc",
-         "\u04d0"-"\u04f5",
-         "\u04f8"-"\u04f9",
-         "\u0531"-"\u0556",
-         "\u0559",
-         "\u0561"-"\u0587",
-         "\u05d0"-"\u05ea",
-         "\u05f0"-"\u05f2",
-         "\u0621"-"\u063a",
-         "\u0640"-"\u064a",
-         "\u0671"-"\u06d3",
-         "\u06d5",
-         "\u06e5"-"\u06e6",
-         "\u06fa"-"\u06fc",
-         "\u0710",
-         "\u0712"-"\u072c",
-         "\u0780"-"\u07a5",
-         "\u0905"-"\u0939",
-         "\u093d",
-         "\u0950",
-         "\u0958"-"\u0961",
-         "\u0985"-"\u098c",
-         "\u098f"-"\u0990",
-         "\u0993"-"\u09a8",
-         "\u09aa"-"\u09b0",
-         "\u09b2",
-         "\u09b6"-"\u09b9",
-         "\u09dc"-"\u09dd",
-         "\u09df"-"\u09e1",
-         "\u09f0"-"\u09f3",
-         "\u0a05"-"\u0a0a",
-         "\u0a0f"-"\u0a10",
-         "\u0a13"-"\u0a28",
-         "\u0a2a"-"\u0a30",
-         "\u0a32"-"\u0a33",
-         "\u0a35"-"\u0a36",
-         "\u0a38"-"\u0a39",
-         "\u0a59"-"\u0a5c",
-         "\u0a5e",
-         "\u0a72"-"\u0a74",
-         "\u0a85"-"\u0a8b",
-         "\u0a8d",
-         "\u0a8f"-"\u0a91",
-         "\u0a93"-"\u0aa8",
-         "\u0aaa"-"\u0ab0",
-         "\u0ab2"-"\u0ab3",
-         "\u0ab5"-"\u0ab9",
-         "\u0abd",
-         "\u0ad0",
-         "\u0ae0",
-         "\u0b05"-"\u0b0c",
-         "\u0b0f"-"\u0b10",
-         "\u0b13"-"\u0b28",
-         "\u0b2a"-"\u0b30",
-         "\u0b32"-"\u0b33",
-         "\u0b36"-"\u0b39",
-         "\u0b3d",
-         "\u0b5c"-"\u0b5d",
-         "\u0b5f"-"\u0b61",
-         "\u0b85"-"\u0b8a",
-         "\u0b8e"-"\u0b90",
-         "\u0b92"-"\u0b95",
-         "\u0b99"-"\u0b9a",
-         "\u0b9c",
-         "\u0b9e"-"\u0b9f",
-         "\u0ba3"-"\u0ba4",
-         "\u0ba8"-"\u0baa",
-         "\u0bae"-"\u0bb5",
-         "\u0bb7"-"\u0bb9",
-         "\u0c05"-"\u0c0c",
-         "\u0c0e"-"\u0c10",
-         "\u0c12"-"\u0c28",
-         "\u0c2a"-"\u0c33",
-         "\u0c35"-"\u0c39",
-         "\u0c60"-"\u0c61",
-         "\u0c85"-"\u0c8c",
-         "\u0c8e"-"\u0c90",
-         "\u0c92"-"\u0ca8",
-         "\u0caa"-"\u0cb3",
-         "\u0cb5"-"\u0cb9",
-         "\u0cde",
-         "\u0ce0"-"\u0ce1",
-         "\u0d05"-"\u0d0c",
-         "\u0d0e"-"\u0d10",
-         "\u0d12"-"\u0d28",
-         "\u0d2a"-"\u0d39",
-         "\u0d60"-"\u0d61",
-         "\u0d85"-"\u0d96",
-         "\u0d9a"-"\u0db1",
-         "\u0db3"-"\u0dbb",
-         "\u0dbd",
-         "\u0dc0"-"\u0dc6",
-         "\u0e01"-"\u0e30",
-         "\u0e32"-"\u0e33",
-         "\u0e3f"-"\u0e46",
-         "\u0e81"-"\u0e82",
-         "\u0e84",
-         "\u0e87"-"\u0e88",
-         "\u0e8a",
-         "\u0e8d",
-         "\u0e94"-"\u0e97",
-         "\u0e99"-"\u0e9f",
-         "\u0ea1"-"\u0ea3",
-         "\u0ea5",
-         "\u0ea7",
-         "\u0eaa"-"\u0eab",
-         "\u0ead"-"\u0eb0",
-         "\u0eb2"-"\u0eb3",
-         "\u0ebd",
-         "\u0ec0"-"\u0ec4",
-         "\u0ec6",
-         "\u0edc"-"\u0edd",
-         "\u0f00",
-         "\u0f40"-"\u0f47",
-         "\u0f49"-"\u0f6a",
-         "\u0f88"-"\u0f8b",
-         "\u1000"-"\u1021",
-         "\u1023"-"\u1027",
-         "\u1029"-"\u102a",
-         "\u1050"-"\u1055",
-         "\u10a0"-"\u10c5",
-         "\u10d0"-"\u10f6",
-         "\u1100"-"\u1159",
-         "\u115f"-"\u11a2",
-         "\u11a8"-"\u11f9",
-         "\u1200"-"\u1206",
-         "\u1208"-"\u1246",
-         "\u1248",
-         "\u124a"-"\u124d",
-         "\u1250"-"\u1256",
-         "\u1258",
-         "\u125a"-"\u125d",
-         "\u1260"-"\u1286",
-         "\u1288",
-         "\u128a"-"\u128d",
-         "\u1290"-"\u12ae",
-         "\u12b0",
-         "\u12b2"-"\u12b5",
-         "\u12b8"-"\u12be",
-         "\u12c0",
-         "\u12c2"-"\u12c5",
-         "\u12c8"-"\u12ce",
-         "\u12d0"-"\u12d6",
-         "\u12d8"-"\u12ee",
-         "\u12f0"-"\u130e",
-         "\u1310",
-         "\u1312"-"\u1315",
-         "\u1318"-"\u131e",
-         "\u1320"-"\u1346",
-         "\u1348"-"\u135a",
-         "\u13a0"-"\u13f4",
-         "\u1401"-"\u166c",
-         "\u166f"-"\u1676",
-         "\u1681"-"\u169a",
-         "\u16a0"-"\u16ea",
-         "\u1780"-"\u17b3",
-         "\u17db",
-         "\u1820"-"\u1877",
-         "\u1880"-"\u18a8",
-         "\u1e00"-"\u1e9b",
-         "\u1ea0"-"\u1ef9",
-         "\u1f00"-"\u1f15",
-         "\u1f18"-"\u1f1d",
-         "\u1f20"-"\u1f45",
-         "\u1f48"-"\u1f4d",
-         "\u1f50"-"\u1f57",
-         "\u1f59",
-         "\u1f5b",
-         "\u1f5d",
-         "\u1f5f"-"\u1f7d",
-         "\u1f80"-"\u1fb4",
-         "\u1fb6"-"\u1fbc",
-         "\u1fbe",
-         "\u1fc2"-"\u1fc4",
-         "\u1fc6"-"\u1fcc",
-         "\u1fd0"-"\u1fd3",
-         "\u1fd6"-"\u1fdb",
-         "\u1fe0"-"\u1fec",
-         "\u1ff2"-"\u1ff4",
-         "\u1ff6"-"\u1ffc",
-         "\u203f"-"\u2040",
-         "\u207f",
-         "\u20a0"-"\u20af",
-         "\u2102",
-         "\u2107",
-         "\u210a"-"\u2113",
-         "\u2115",
-         "\u2119"-"\u211d",
-         "\u2124",
-         "\u2126",
-         "\u2128",
-         "\u212a"-"\u212d",
-         "\u212f"-"\u2131",
-         "\u2133"-"\u2139",
-         "\u2160"-"\u2183",
-         "\u3005"-"\u3007",
-         "\u3021"-"\u3029",
-         "\u3031"-"\u3035",
-         "\u3038"-"\u303a",
-         "\u3041"-"\u3094",
-         "\u309d"-"\u309e",
-         "\u30a1"-"\u30fe",
-         "\u3105"-"\u312c",
-         "\u3131"-"\u318e",
-         "\u31a0"-"\u31b7",
-         "\u3400"-"\u4db5",
-         "\u4e00"-"\u9fa5",
-         "\ua000"-"\ua48c",
-         "\uac00"-"\ud7a3",
-         "\uf900"-"\ufa2d",
-         "\ufb00"-"\ufb06",
-         "\ufb13"-"\ufb17",
-         "\ufb1d",
-         "\ufb1f"-"\ufb28",
-         "\ufb2a"-"\ufb36",
-         "\ufb38"-"\ufb3c",
-         "\ufb3e",
-         "\ufb40"-"\ufb41",
-         "\ufb43"-"\ufb44",
-         "\ufb46"-"\ufbb1",
-         "\ufbd3"-"\ufd3d",
-         "\ufd50"-"\ufd8f",
-         "\ufd92"-"\ufdc7",
-         "\ufdf0"-"\ufdfb",
-         "\ufe33"-"\ufe34",
-         "\ufe4d"-"\ufe4f",
-         "\ufe69",
-         "\ufe70"-"\ufe72",
-         "\ufe74",
-         "\ufe76"-"\ufefc",
-         "\uff04",
-         "\uff21"-"\uff3a",
-         "\uff3f",
-         "\uff41"-"\uff5a",
-         "\uff65"-"\uffbe",
-         "\uffc2"-"\uffc7",
-         "\uffca"-"\uffcf",
-         "\uffd2"-"\uffd7",
-         "\uffda"-"\uffdc",
-         "\uffe0"-"\uffe1",
-         "\uffe5"-"\uffe6"
-      ]
-  >
-|
-  < #PART_LETTER:
-      [  // all chars for which Character.isIdentifierPart is true
-         "\u0000"-"\u0008",
-         "\u000e"-"\u001b",
-         "$",
-         "0"-"9",
-         "A"-"Z",
-         "_",
-         "a"-"z",
-         "\u007f"-"\u009f",
-         "\u00a2"-"\u00a5",
-         "\u00aa",
-         "\u00b5",
-         "\u00ba",
-         "\u00c0"-"\u00d6",
-         "\u00d8"-"\u00f6",
-         "\u00f8"-"\u021f",
-         "\u0222"-"\u0233",
-         "\u0250"-"\u02ad",
-         "\u02b0"-"\u02b8",
-         "\u02bb"-"\u02c1",
-         "\u02d0"-"\u02d1",
-         "\u02e0"-"\u02e4",
-         "\u02ee",
-         "\u0300"-"\u034e",
-         "\u0360"-"\u0362",
-         "\u037a",
-         "\u0386",
-         "\u0388"-"\u038a",
-         "\u038c",
-         "\u038e"-"\u03a1",
-         "\u03a3"-"\u03ce",
-         "\u03d0"-"\u03d7",
-         "\u03da"-"\u03f3",
-         "\u0400"-"\u0481",
-         "\u0483"-"\u0486",
-         "\u048c"-"\u04c4",
-         "\u04c7"-"\u04c8",
-         "\u04cb"-"\u04cc",
-         "\u04d0"-"\u04f5",
-         "\u04f8"-"\u04f9",
-         "\u0531"-"\u0556",
-         "\u0559",
-         "\u0561"-"\u0587",
-         "\u0591"-"\u05a1",
-         "\u05a3"-"\u05b9",
-         "\u05bb"-"\u05bd",
-         "\u05bf",
-         "\u05c1"-"\u05c2",
-         "\u05c4",
-         "\u05d0"-"\u05ea",
-         "\u05f0"-"\u05f2",
-         "\u0621"-"\u063a",
-         "\u0640"-"\u0655",
-         "\u0660"-"\u0669",
-         "\u0670"-"\u06d3",
-         "\u06d5"-"\u06dc",
-         "\u06df"-"\u06e8",
-         "\u06ea"-"\u06ed",
-         "\u06f0"-"\u06fc",
-         "\u070f"-"\u072c",
-         "\u0730"-"\u074a",
-         "\u0780"-"\u07b0",
-         "\u0901"-"\u0903",
-         "\u0905"-"\u0939",
-         "\u093c"-"\u094d",
-         "\u0950"-"\u0954",
-         "\u0958"-"\u0963",
-         "\u0966"-"\u096f",
-         "\u0981"-"\u0983",
-         "\u0985"-"\u098c",
-         "\u098f"-"\u0990",
-         "\u0993"-"\u09a8",
-         "\u09aa"-"\u09b0",
-         "\u09b2",
-         "\u09b6"-"\u09b9",
-         "\u09bc",
-         "\u09be"-"\u09c4",
-         "\u09c7"-"\u09c8",
-         "\u09cb"-"\u09cd",
-         "\u09d7",
-         "\u09dc"-"\u09dd",
-         "\u09df"-"\u09e3",
-         "\u09e6"-"\u09f3",
-         "\u0a02",
-         "\u0a05"-"\u0a0a",
-         "\u0a0f"-"\u0a10",
-         "\u0a13"-"\u0a28",
-         "\u0a2a"-"\u0a30",
-         "\u0a32"-"\u0a33",
-         "\u0a35"-"\u0a36",
-         "\u0a38"-"\u0a39",
-         "\u0a3c",
-         "\u0a3e"-"\u0a42",
-         "\u0a47"-"\u0a48",
-         "\u0a4b"-"\u0a4d",
-         "\u0a59"-"\u0a5c",
-         "\u0a5e",
-         "\u0a66"-"\u0a74",
-         "\u0a81"-"\u0a83",
-         "\u0a85"-"\u0a8b",
-         "\u0a8d",
-         "\u0a8f"-"\u0a91",
-         "\u0a93"-"\u0aa8",
-         "\u0aaa"-"\u0ab0",
-         "\u0ab2"-"\u0ab3",
-         "\u0ab5"-"\u0ab9",
-         "\u0abc"-"\u0ac5",
-         "\u0ac7"-"\u0ac9",
-         "\u0acb"-"\u0acd",
-         "\u0ad0",
-         "\u0ae0",
-         "\u0ae6"-"\u0aef",
-         "\u0b01"-"\u0b03",
-         "\u0b05"-"\u0b0c",
-         "\u0b0f"-"\u0b10",
-         "\u0b13"-"\u0b28",
-         "\u0b2a"-"\u0b30",
-         "\u0b32"-"\u0b33",
-         "\u0b36"-"\u0b39",
-         "\u0b3c"-"\u0b43",
-         "\u0b47"-"\u0b48",
-         "\u0b4b"-"\u0b4d",
-         "\u0b56"-"\u0b57",
-         "\u0b5c"-"\u0b5d",
-         "\u0b5f"-"\u0b61",
-         "\u0b66"-"\u0b6f",
-         "\u0b82"-"\u0b83",
-         "\u0b85"-"\u0b8a",
-         "\u0b8e"-"\u0b90",
-         "\u0b92"-"\u0b95",
-         "\u0b99"-"\u0b9a",
-         "\u0b9c",
-         "\u0b9e"-"\u0b9f",
-         "\u0ba3"-"\u0ba4",
-         "\u0ba8"-"\u0baa",
-         "\u0bae"-"\u0bb5",
-         "\u0bb7"-"\u0bb9",
-         "\u0bbe"-"\u0bc2",
-         "\u0bc6"-"\u0bc8",
-         "\u0bca"-"\u0bcd",
-         "\u0bd7",
-         "\u0be7"-"\u0bef",
-         "\u0c01"-"\u0c03",
-         "\u0c05"-"\u0c0c",
-         "\u0c0e"-"\u0c10",
-         "\u0c12"-"\u0c28",
-         "\u0c2a"-"\u0c33",
-         "\u0c35"-"\u0c39",
-         "\u0c3e"-"\u0c44",
-         "\u0c46"-"\u0c48",
-         "\u0c4a"-"\u0c4d",
-         "\u0c55"-"\u0c56",
-         "\u0c60"-"\u0c61",
-         "\u0c66"-"\u0c6f",
-         "\u0c82"-"\u0c83",
-         "\u0c85"-"\u0c8c",
-         "\u0c8e"-"\u0c90",
-         "\u0c92"-"\u0ca8",
-         "\u0caa"-"\u0cb3",
-         "\u0cb5"-"\u0cb9",
-         "\u0cbe"-"\u0cc4",
-         "\u0cc6"-"\u0cc8",
-         "\u0cca"-"\u0ccd",
-         "\u0cd5"-"\u0cd6",
-         "\u0cde",
-         "\u0ce0"-"\u0ce1",
-         "\u0ce6"-"\u0cef",
-         "\u0d02"-"\u0d03",
-         "\u0d05"-"\u0d0c",
-         "\u0d0e"-"\u0d10",
-         "\u0d12"-"\u0d28",
-         "\u0d2a"-"\u0d39",
-         "\u0d3e"-"\u0d43",
-         "\u0d46"-"\u0d48",
-         "\u0d4a"-"\u0d4d",
-         "\u0d57",
-         "\u0d60"-"\u0d61",
-         "\u0d66"-"\u0d6f",
-         "\u0d82"-"\u0d83",
-         "\u0d85"-"\u0d96",
-         "\u0d9a"-"\u0db1",
-         "\u0db3"-"\u0dbb",
-         "\u0dbd",
-         "\u0dc0"-"\u0dc6",
-         "\u0dca",
-         "\u0dcf"-"\u0dd4",
-         "\u0dd6",
-         "\u0dd8"-"\u0ddf",
-         "\u0df2"-"\u0df3",
-         "\u0e01"-"\u0e3a",
-         "\u0e3f"-"\u0e4e",
-         "\u0e50"-"\u0e59",
-         "\u0e81"-"\u0e82",
-         "\u0e84",
-         "\u0e87"-"\u0e88",
-         "\u0e8a",
-         "\u0e8d",
-         "\u0e94"-"\u0e97",
-         "\u0e99"-"\u0e9f",
-         "\u0ea1"-"\u0ea3",
-         "\u0ea5",
-         "\u0ea7",
-         "\u0eaa"-"\u0eab",
-         "\u0ead"-"\u0eb9",
-         "\u0ebb"-"\u0ebd",
-         "\u0ec0"-"\u0ec4",
-         "\u0ec6",
-         "\u0ec8"-"\u0ecd",
-         "\u0ed0"-"\u0ed9",
-         "\u0edc"-"\u0edd",
-         "\u0f00",
-         "\u0f18"-"\u0f19",
-         "\u0f20"-"\u0f29",
-         "\u0f35",
-         "\u0f37",
-         "\u0f39",
-         "\u0f3e"-"\u0f47",
-         "\u0f49"-"\u0f6a",
-         "\u0f71"-"\u0f84",
-         "\u0f86"-"\u0f8b",
-         "\u0f90"-"\u0f97",
-         "\u0f99"-"\u0fbc",
-         "\u0fc6",
-         "\u1000"-"\u1021",
-         "\u1023"-"\u1027",
-         "\u1029"-"\u102a",
-         "\u102c"-"\u1032",
-         "\u1036"-"\u1039",
-         "\u1040"-"\u1049",
-         "\u1050"-"\u1059",
-         "\u10a0"-"\u10c5",
-         "\u10d0"-"\u10f6",
-         "\u1100"-"\u1159",
-         "\u115f"-"\u11a2",
-         "\u11a8"-"\u11f9",
-         "\u1200"-"\u1206",
-         "\u1208"-"\u1246",
-         "\u1248",
-         "\u124a"-"\u124d",
-         "\u1250"-"\u1256",
-         "\u1258",
-         "\u125a"-"\u125d",
-         "\u1260"-"\u1286",
-         "\u1288",
-         "\u128a"-"\u128d",
-         "\u1290"-"\u12ae",
-         "\u12b0",
-         "\u12b2"-"\u12b5",
-         "\u12b8"-"\u12be",
-         "\u12c0",
-         "\u12c2"-"\u12c5",
-         "\u12c8"-"\u12ce",
-         "\u12d0"-"\u12d6",
-         "\u12d8"-"\u12ee",
-         "\u12f0"-"\u130e",
-         "\u1310",
-         "\u1312"-"\u1315",
-         "\u1318"-"\u131e",
-         "\u1320"-"\u1346",
-         "\u1348"-"\u135a",
-         "\u1369"-"\u1371",
-         "\u13a0"-"\u13f4",
-         "\u1401"-"\u166c",
-         "\u166f"-"\u1676",
-         "\u1681"-"\u169a",
-         "\u16a0"-"\u16ea",
-         "\u1780"-"\u17d3",
-         "\u17db",
-         "\u17e0"-"\u17e9",
-         "\u180b"-"\u180e",
-         "\u1810"-"\u1819",
-         "\u1820"-"\u1877",
-         "\u1880"-"\u18a9",
-         "\u1e00"-"\u1e9b",
-         "\u1ea0"-"\u1ef9",
-         "\u1f00"-"\u1f15",
-         "\u1f18"-"\u1f1d",
-         "\u1f20"-"\u1f45",
-         "\u1f48"-"\u1f4d",
-         "\u1f50"-"\u1f57",
-         "\u1f59",
-         "\u1f5b",
-         "\u1f5d",
-         "\u1f5f"-"\u1f7d",
-         "\u1f80"-"\u1fb4",
-         "\u1fb6"-"\u1fbc",
-         "\u1fbe",
-         "\u1fc2"-"\u1fc4",
-         "\u1fc6"-"\u1fcc",
-         "\u1fd0"-"\u1fd3",
-         "\u1fd6"-"\u1fdb",
-         "\u1fe0"-"\u1fec",
-         "\u1ff2"-"\u1ff4",
-         "\u1ff6"-"\u1ffc",
-         "\u200c"-"\u200f",
-         "\u202a"-"\u202e",
-         "\u203f"-"\u2040",
-         "\u206a"-"\u206f",
-         "\u207f",
-         "\u20a0"-"\u20af",
-         "\u20d0"-"\u20dc",
-         "\u20e1",
-         "\u2102",
-         "\u2107",
-         "\u210a"-"\u2113",
-         "\u2115",
-         "\u2119"-"\u211d",
-         "\u2124",
-         "\u2126",
-         "\u2128",
-         "\u212a"-"\u212d",
-         "\u212f"-"\u2131",
-         "\u2133"-"\u2139",
-         "\u2160"-"\u2183",
-         "\u3005"-"\u3007",
-         "\u3021"-"\u302f",
-         "\u3031"-"\u3035",
-         "\u3038"-"\u303a",
-         "\u3041"-"\u3094",
-         "\u3099"-"\u309a",
-         "\u309d"-"\u309e",
-         "\u30a1"-"\u30fe",
-         "\u3105"-"\u312c",
-         "\u3131"-"\u318e",
-         "\u31a0"-"\u31b7",
-         "\u3400"-"\u4db5",
-         "\u4e00"-"\u9fa5",
-         "\ua000"-"\ua48c",
-         "\uac00"-"\ud7a3",
-         "\uf900"-"\ufa2d",
-         "\ufb00"-"\ufb06",
-         "\ufb13"-"\ufb17",
-         "\ufb1d"-"\ufb28",
-         "\ufb2a"-"\ufb36",
-         "\ufb38"-"\ufb3c",
-         "\ufb3e",
-         "\ufb40"-"\ufb41",
-         "\ufb43"-"\ufb44",
-         "\ufb46"-"\ufbb1",
-         "\ufbd3"-"\ufd3d",
-         "\ufd50"-"\ufd8f",
-         "\ufd92"-"\ufdc7",
-         "\ufdf0"-"\ufdfb",
-         "\ufe20"-"\ufe23",
-         "\ufe33"-"\ufe34",
-         "\ufe4d"-"\ufe4f",
-         "\ufe69",
-         "\ufe70"-"\ufe72",
-         "\ufe74",
-         "\ufe76"-"\ufefc",
-         "\ufeff",
-         "\uff04",
-         "\uff10"-"\uff19",
-         "\uff21"-"\uff3a",
-         "\uff3f",
-         "\uff41"-"\uff5a",
-         "\uff65"-"\uffbe",
-         "\uffc2"-"\uffc7",
-         "\uffca"-"\uffcf",
-         "\uffd2"-"\uffd7",
-         "\uffda"-"\uffdc",
-         "\uffe0"-"\uffe1",
-         "\uffe5"-"\uffe6",
-         "\ufff9"-"\ufffb"
-      ]
-  >
-}
-
-/* SEPARATORS */
-
-TOKEN :
-{
-  < LPAREN: "(" >
-| < RPAREN: ")" >
-| < LBRACE: "{" >
-| < RBRACE: "}" >
-| < SEMICOLON: ";" >
-| < COMMA: "," >
-| < AT: "@" >
-| < DOT: "." >
-}
-
-TOKEN :
-{
-  < LT: "<" >
-| < GT: ">" >
-| < TICK: "`" >
-}
-
-
-/********************************************
- * THE GENAVRO LANGUAGE GRAMMAR STARTS HERE *
- ********************************************/
-
-/**
- * The input to genavro is a CompilationUnit, which is currently
- * just a single Protocol.
- */
-Protocol CompilationUnit():
-{
-  Protocol p;
-}
-{
-  p = ProtocolDeclaration()
-  ( < "\u001a" > )?
-  ( <STUFF_TO_IGNORE: ~[]> )?
-  <EOF>
-  { return p; }
-}
-
-/*
- * Declaration syntax follows.
- */
-Schema NamedSchemaDeclaration():
-{
-  Schema s;
-  Map<String, String> props = new HashMap<String, String>();
-  String savedSpace = this.namespace;
-}
-{
-  ( SchemaProperty(props) )*
-  {
-    if (props.containsKey("namespace")) {
-      this.namespace = props.get("namespace");
-    }
-  }
- (
-     s = FixedDeclaration()
-   | s = EnumDeclaration()
-   | s = RecordDeclaration()
- )
- {
-   this.namespace = savedSpace;
-   return s;
- }
-}
-
-Schema UnionDefinition():
-{
-  Schema s;
-  List<Schema> schemata = new ArrayList<Schema>();
-}
-{
- // TODO should probably disallow other unions here in the parser?
-
-  "union"
-  "{"
-  s = Type()
-  { schemata.add(s); }
-
-  (
-    ","
-    s = Type()
-    { schemata.add(s); }
-  )*
-  "}"
-  {
-    return Schema.createUnion(schemata);
-  }
-}
-
-
-Protocol ProtocolDeclaration():
-{
-  String name;
-  Protocol p;
-  Map<String, String> props = new HashMap<String, String>();
-}
-{
-  ( SchemaProperty(props) )*
-  {
-    if (props.containsKey("namespace")) {
-      namespace = props.get("namespace");
-    }
-  }
- "protocol"
-   name = Identifier()
- {
-   p = new Protocol(name, namespace);
- }
- ProtocolBody(p)
- {
-   return p;
- }
-}
-
-
-Schema EnumDeclaration():
-{
-  String name;
-  List<String> symbols;
-}
-{
-  "enum"
-  name = Identifier()
-  symbols = EnumBody()
-  {
-    Schema s = Schema.createEnum(name, null, this.namespace, symbols);
-    names.put(s.getFullName(), s);
-    return s;
-  }
-}
-
-List<String> EnumBody():
-{
-  List<String> symbols = new ArrayList<String>();
-}
-{
-  "{"
-  [ EnumConstant(symbols) ( LOOKAHEAD(2) "," EnumConstant(symbols) )* ]
-  "}"
-  {
-    return symbols;
-  }
-}
-
-void EnumConstant(List<String> symbols):
-{
-  String sym;
-}
-{
-  sym = Identifier() { symbols.add(sym); }
-}
-
-void ProtocolBody(Protocol p):
-{
-  Schema schema;
-  Message message;
-  List<Schema> schemata = new ArrayList<Schema>();
-}
-{
-  "{"
-  (
-      schema = NamedSchemaDeclaration()
-      {
-        schemata.add(schema);
-      }
-
-    | message = MessageDeclaration(p)
-      {
-        // TODO this is kind of sketch!!
-        p.getMessages().put(message.getName(), message);
-      }
-  ) *
-  "}"
-
-  {
-    p.setTypes(schemata);
-  }
-}
-
-
-Schema FixedDeclaration():
-{
-  String name;
-  Token sizeTok;
-}
-{
-  "fixed" name = Identifier() "(" sizeTok = <INTEGER_LITERAL> ")"
-  ";"
-  {
-    Schema s = Schema.createFixed(name, null, this.namespace,
-                                  Integer.parseInt(sizeTok.image));
-    names.put(s.getFullName(), s);
-    return s;
-  }
-}
-
-Schema RecordDeclaration():
-{
-  String name;
-  List<Field> fields = new ArrayList<Field>();
-  boolean isError;
-}
-{
-  (
-    "record" { isError = false; }
-    | "error" { isError = true; }
-  )
-  name = Identifier()
-  {
-    Schema result = Schema.createRecord(
-      name, null, this.namespace, isError);
-    names.put(result.getFullName(), result);
-  }
-  "{"
-    ( FieldDeclaration(fields) )*
-  "}"
-  {
-    result.setFields(fields);
-    return result;
-  }
-}
-
-void SchemaProperty(Map<String, String> properties):
-{
-  String key;
-  String val;
-}
-{
-  "@" key = Identifier() "(" val = Literal() ")"
-  {
-    if (properties.containsKey(key)) {
-      throw new ParseException("Property '" + key + "' already specified " +
-                               " at line " + token.beginLine + ", column " +
-                               token.beginColumn);
-    }
-    properties.put(key, val);
-  }
-}
-
-
-void FieldDeclaration(List<Field> fields):
-{
-  Schema type;
-  Map<String, String> props = new HashMap<String, String>();
-}
-{
-  // TODO should we be able to specify properties on any Type?
-  // or just on field declarations as done here
-
-  ( SchemaProperty(props) )*
-  type = Type()
-  VariableDeclarator(type, fields) ( "," VariableDeclarator(type, fields) )*
-  ";"
-  {
-    for (Map.Entry<String, String> propEntry : props.entrySet()) {
-      type.addProp(propEntry.getKey(), propEntry.getValue());
-    }
-  }
-}
-
-void VariableDeclarator(Schema type, List<Field> fields):
-{
-  String name;
-}
-{
-  name = Identifier()
-  {
-    fields.add(new Field(name, type, null, null));
-  }
-}
-
-
-Message MessageDeclaration(Protocol p):
-{
-  String name;
-  Schema request;
-  Schema response;
-  List<Schema> errorSchemata = new ArrayList<Schema>();
-  errorSchemata.add(Protocol.SYSTEM_ERROR);
-}
-{
-  response = ResultType()
-  name = Identifier()
-  request = FormalParameters()
-  [ "throws" ErrorList(errorSchemata) ]
-  ";"
-  {
-    Schema errors = Schema.createUnion(errorSchemata);
-    return p.createMessage(name, null, request, response, errors);
-  }
-}
-
-void ErrorList(List<Schema> errors):
-{
-  Schema s;
-}
-{
-  s = ReferenceType() { errors.add(s); }
-  ( "," s = ReferenceType() { errors.add(s); } )*
-}
-
-Schema FormalParameters():
-{
-  List<Field> fields = new ArrayList<Field>();
-}
-{
-  (
-    "(" [ FormalParameter(fields) ( "," FormalParameter(fields) )* ] ")"
-  )
-  {
-    return Schema.createRecord(fields);
-  }
-}
-
-void FormalParameter(List<Field> fields):
-{
-  Schema type;
-}
-{
-  type = Type()
-  VariableDeclarator(type, fields)
-}
-
-Schema Type():
-{
-  Schema s;
-}
-{
-  (
-      LOOKAHEAD(2) s = ReferenceType()
-    | s = PrimitiveType()
-    | s = UnionDefinition()
-    | s = ArrayType()
-    | s = MapType()
-  )
-  {
-    return s;
-  }
-}
-
-Schema ArrayType():
-{
-  Schema elemSchema;
-}
-{
-  "array" "<" elemSchema = Type() ">"
-  {
-    return Schema.createArray(elemSchema);
-  }
-}
-
-Schema MapType():
-{
-  Schema elemSchema;
-}
-{
-  "map" "<" elemSchema = Type() ">"
-  {
-    return Schema.createMap(elemSchema);
-  }
-}
-
-/**
- * A reference to some other existing type
- */
-Schema ReferenceType():
-{
-  String part;
-  Token tok;
-  StringBuilder sb = new StringBuilder();
-}
-{
-  (
-    part = Identifier() { sb.append(part); }
-    ("." tok = AnyIdentifier() { sb.append(".").append(tok.image); })*
-    )
-  {
-    String name = sb.toString();
-    if ((name.indexOf('.') == -1) && namespace != null)
-      name = namespace + "." + name;
-    Schema type = names.get(name);
-    if (type == null) {
-      throw new ParseException("Undefined name '" + name + "'" +
-                               " at line " + token.beginLine + ", column " +
-                               token.beginColumn);
-    }
-    return type;
-  }
-}
-
-Schema PrimitiveType():
-{}
-{
-  "boolean" { return Schema.create(Type.BOOLEAN); }
-| "bytes" { return Schema.create(Type.BYTES); }
-| "int" { return Schema.create(Type.INT); }
-| "string" { return Schema.create(Type.STRING); }
-| "float" { return Schema.create(Type.FLOAT); }
-| "double" { return Schema.create(Type.DOUBLE); }
-| "long" { return Schema.create(Type.LONG); }
-| "null" { return Schema.create(Type.NULL); }
-}
-
-/**
- * Result types are like other types, except we provide "void" as
- * an alias of "null"
- */
-Schema ResultType():
-{
-  Schema schema;
-}
-{
-  LOOKAHEAD(2)
-    "void"          { return Schema.create(Type.NULL); }
-  | schema = Type() { return schema; }
-}
-
-
-String Literal():
-{ Token t; }
-{
-  (
-    t = <INTEGER_LITERAL>
-  | t = <FLOATING_POINT_LITERAL>
-  | t = <CHARACTER_LITERAL>
-  | t = BooleanLiteral()
-  | t = NullLiteral()
-  )
-  {
-    return t.image;
-  }
-| t = <STRING_LITERAL>
-  {
-    String betweenQuotes = t.image.substring(1, t.image.length() - 1);
-    return StringEscapeUtils.unescapeJavaScript(betweenQuotes);
-  }
-}
-
-Token BooleanLiteral():
-{ Token t; }
-{
-  (   t = "true"
-    | t = "false"
-  )
-  {
-    return t;
-  }
-}
-
-Token NullLiteral():
-{
-  Token t;
-}
-{
-  t = "null" { return t; }
-}
-
-String Identifier():
-{
-  Token t;
-}
-{
-  ( t = <IDENTIFIER> { return t.image; } )
-| ( "`" t = AnyIdentifier() "`" {
-    return t.image;
-  })
-}
-
-Token AnyIdentifier():
-{
-  Token t;
-}
-{
-  (t = <ARRAY> |
-   t = <BOOLEAN> |
-   t = <DOUBLE> |
-   t = <ENUM> |
-   t = <ERROR> |
-   t = <FALSE> |
-   t = <FIXED> |
-   t = <FLOAT> |
-   t = <INT> |
-   t = <LONG> |
-   t = <MAP> |
-   t = <BYTES> |
-   t = <STRING> |
-   t = <PROTOCOL> |
-   t = <RECORD> |
-   t = <THROWS> |
-   t = <TRUE> |
-   t = <UNION> |
-   t = <VOID> |
-   t = <IDENTIFIER>)
-  {
-    return t;
-  }
-}
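The `ReferenceType` production above resolves an unqualified name against the current `@namespace` before looking it up. That resolution rule can be sketched in isolation (class and method names here are hypothetical, not part of Avro):

```java
public class NameResolution {
    // Mirrors ReferenceType: a name without a dot picks up the current
    // namespace; a dotted name is taken as already fully qualified.
    static String qualify(String name, String namespace) {
        if (name.indexOf('.') == -1 && namespace != null)
            return namespace + "." + name;
        return name;
    }

    public static void main(String[] args) {
        System.out.println(qualify("Card", "org.example"));          // org.example.Card
        System.out.println(qualify("other.ns.Card", "org.example")); // other.ns.Card
    }
}
```

Only after this qualification does the grammar consult the `names` map, which is why an unresolved reference reports the fully qualified name in its `ParseException`.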
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericArray.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericArray.java
deleted file mode 100644
index 028f1d1..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericArray.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-
-/** An array of objects. */
-public interface GenericArray<T> extends Iterable<T>, GenericContainer {
-  /** The number of elements contained in this array. */
-  long size();
-
-  /** Reset the size of the array to zero. */
-  void clear();
-
-  /** Add an element to this array. */
-  void add(T element);
-
-  /** The current content of the location where {@link #add(Object)} would next
-   * store an element, if any.  This permits reuse of arrays and their elements
-   * without allocating new objects. */
-  T peek();
-}
-
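The `peek()` contract in the interface above exists so callers can recycle elements: `clear()` resets the logical size without discarding old elements, and `peek()` hands back the element the next `add()` would overwrite. A minimal sketch of that reuse pattern (a hypothetical class, not the Avro implementation) might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the reuse pattern GenericArray.peek() enables.
public class ReusableArray<T> {
    private final List<T> elements = new ArrayList<>();
    private int size = 0;

    public void clear() { size = 0; }   // elements are kept for reuse
    public void add(T e) {
        if (size < elements.size()) elements.set(size, e);
        else elements.add(e);
        size++;
    }
    // The element add() would next overwrite, if one is available.
    public T peek() { return size < elements.size() ? elements.get(size) : null; }
    public int size() { return size; }
}
```

A reader would call `peek()`, refill the returned object in place if it is non-null, and then `add()` it back, avoiding an allocation per element on every decode pass.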
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericContainer.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericContainer.java
deleted file mode 100644
index e1c2472..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericContainer.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-import org.apache.avro.Schema;
-
-/** Contains data of other types. */
-public interface GenericContainer {
-  /** The schema of this instance. */
-  Schema getSchema();
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericData.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericData.java
deleted file mode 100644
index 94e46f9..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericData.java
+++ /dev/null
@@ -1,491 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-import java.nio.ByteBuffer;
-import java.util.Iterator;
-import java.util.Arrays;
-import java.util.Map;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-import org.apache.avro.Schema.Type;
-import org.apache.avro.util.Utf8;
-import org.apache.avro.io.BinaryData;
-
-/** Utilities for generic Java data. */
-public class GenericData {
-
-  private static final GenericData INSTANCE = new GenericData();
-  
-  /** Return the singleton instance. */
-  public static GenericData get() { return INSTANCE; }
-
-  protected GenericData() {}
-  
-  /** Default implementation of {@link GenericRecord}. */
-  public static class Record implements GenericRecord, Comparable<Record> {
-    private final Schema schema;
-    private final Object[] values;
-    public Record(Schema schema) {
-      if (schema == null || !Type.RECORD.equals(schema.getType()))
-        throw new AvroRuntimeException("Not a record schema: "+schema);
-      this.schema = schema;
-      this.values = new Object[schema.getFields().size()];
-    }
-    @Override public Schema getSchema() { return schema; }
-    @Override public void put(String key, Object value) {
-      values[schema.getField(key).pos()] = value;
-    }
-    @Override public void put(int i, Object v) { values[i] = v; }
-    @Override public Object get(String key) {
-      Field field = schema.getField(key);
-      if (field == null) return null;
-      return values[field.pos()];
-    }
-    @Override public Object get(int i) { return values[i]; }
-    @Override public boolean equals(Object o) {
-      if (o == this) return true;                 // identical object
-      if (!(o instanceof Record)) return false;   // not a record
-      Record that = (Record)o;
-      if (!schema.getFullName().equals(that.schema.getFullName()))
-        return false;                             // not the same schema
-      return this.compareTo(that) == 0;
-    }
-    @Override public int hashCode() {
-      return GenericData.get().hashCode(this, schema);
-    }
-    @Override public int compareTo(Record that) {
-      return GenericData.get().compare(this, that, schema);
-    }
-    @Override public String toString() {
-      return GenericData.get().toString(this);
-    }
-  }
-
-  /** Default implementation of {@link GenericArray}. */
-  @SuppressWarnings(value="unchecked")
-  public static class Array<T>
-    implements GenericArray<T>, Comparable<Array<T>> {
-    private static final Object[] EMPTY = new Object[0];
-    private final Schema schema;
-    private int size;
-    private Object[] elements = EMPTY;
-    public Array(int capacity, Schema schema) {
-      if (schema == null || !Type.ARRAY.equals(schema.getType()))
-        throw new AvroRuntimeException("Not an array schema: "+schema);
-      this.schema = schema;
-      if (capacity != 0)
-        elements = new Object[capacity];
-    }
-    public Schema getSchema() { return schema; }
-    public long size() { return size; }
-    public void clear() { size = 0; }
-    public Iterator<T> iterator() {
-      return new Iterator<T>() {
-        private int position = 0;
-        public boolean hasNext() { return position < size; }
-        public T next() { return (T)elements[position++]; }
-        public void remove() { throw new UnsupportedOperationException(); }
-      };
-    }
-    public void add(T o) {
-      if (size == elements.length) {
-        Object[] newElements = new Object[(size * 3)/2 + 1];
-        System.arraycopy(elements, 0, newElements, 0, size);
-        elements = newElements;
-      }
-      elements[size++] = o;
-    }
-    public T peek() {
-      return (size < elements.length) ? (T)elements[size] : null;
-    }
-    public int hashCode() {
-      return GenericData.get().hashCode(this, schema);
-    }
-    public boolean equals(Object o) {
-      if (o == this) return true;                 // identical object
-      if (!(o instanceof Array)) return false;    // not an array
-      Array that = (Array)o;
-      if (!schema.equals(that.schema))
-        return false;                             // not the same schema
-      return this.compareTo(that) == 0;
-    }
-    public int compareTo(Array<T> that) {
-      return GenericData.get().compare(this, that, this.getSchema());
-    }
-    public String toString() {
-      StringBuffer buffer = new StringBuffer();
-      buffer.append("[");
-      int count = 0;
-      for (T e : this) {
-        buffer.append(e==null ? "null" : e.toString());
-        if (++count < size())
-          buffer.append(", ");
-      }
-      buffer.append("]");
-      return buffer.toString();
-    }
-  }
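`Array.add` above grows the backing store only when it is full, using the policy new capacity = `(size*3)/2 + 1`. Isolating that formula (the class name here is hypothetical) shows the capacity sequence starting from an empty array:

```java
public class GrowthDemo {
    // Mirrors Array.add's growth step: called with the old capacity
    // (== size at the moment of overflow), returns the new capacity.
    static int nextCapacity(int size) { return (size * 3) / 2 + 1; }

    public static void main(String[] args) {
        int cap = 0;
        for (int i = 0; i < 5; i++) {
            cap = nextCapacity(cap);
            System.out.print(cap + " ");   // 1 2 4 7 11
        }
    }
}
```

The `+ 1` term matters for small arrays: without it, a capacity of 0 or 1 would never grow, since integer division makes `(1*3)/2 == 1`.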
-
-  /** Default implementation of {@link GenericFixed}. */
-  public static class Fixed implements GenericFixed, Comparable<Fixed> {
-    private byte[] bytes;
-
-    public Fixed(Schema schema) { bytes(new byte[schema.getFixedSize()]); }
-    public Fixed(byte[] bytes) { bytes(bytes); }
-
-    protected Fixed() {}
-    public void bytes(byte[] bytes) { this.bytes = bytes; }
-
-    public byte[] bytes() { return bytes; }
-
-    public boolean equals(Object o) {
-      if (o == this) return true;
-      return o instanceof GenericFixed
-        && Arrays.equals(bytes, ((GenericFixed)o).bytes());
-    }
-
-    public int hashCode() { return Arrays.hashCode(bytes); }
-
-    public String toString() { return Arrays.toString(bytes); }
-
-    public int compareTo(Fixed that) {
-      return BinaryData.compareBytes(this.bytes, 0, this.bytes.length,
-                                     that.bytes, 0, that.bytes.length);
-    }
-  }
-
-  /** Returns true if a Java datum matches a schema. */
-  public boolean validate(Schema schema, Object datum) {
-    switch (schema.getType()) {
-    case RECORD:
-      if (!(datum instanceof IndexedRecord)) return false;
-      IndexedRecord fields = (IndexedRecord)datum;
-      for (Field f : schema.getFields()) {
-        if (!validate(f.schema(), fields.get(f.pos())))
-          return false;
-      }
-      return true;
-    case ENUM:
-      return schema.getEnumSymbols().contains(datum);
-    case ARRAY:
-      if (!(datum instanceof GenericArray)) return false;
-      for (Object element : (GenericArray)datum)
-        if (!validate(schema.getElementType(), element))
-          return false;
-      return true;
-    case MAP:
-      if (!(datum instanceof Map)) return false;
-      @SuppressWarnings(value="unchecked")
-      Map<Object,Object> map = (Map<Object,Object>)datum;
-      for (Map.Entry<Object,Object> entry : map.entrySet())
-        if (!validate(schema.getValueType(), entry.getValue()))
-          return false;
-      return true;
-    case UNION:
-      for (Schema type : schema.getTypes())
-        if (validate(type, datum))
-          return true;
-      return false;
-    case FIXED:
-      return datum instanceof GenericFixed
-        && ((GenericFixed)datum).bytes().length==schema.getFixedSize();
-    case STRING:  return isString(datum);
-    case BYTES:   return isBytes(datum);
-    case INT:     return datum instanceof Integer;
-    case LONG:    return datum instanceof Long;
-    case FLOAT:   return datum instanceof Float;
-    case DOUBLE:  return datum instanceof Double;
-    case BOOLEAN: return datum instanceof Boolean;
-    case NULL:    return datum == null;
-    default: return false;
-    }
-  }
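The `UNION` case in `validate()` above accepts a datum as soon as any branch accepts it. That rule can be mirrored with plain predicates standing in for branch schemas (a sketch with hypothetical names, not the Avro API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class UnionValidate {
    // Mirrors validate()'s UNION case: a datum matches a union iff it
    // matches at least one branch.
    static boolean validateUnion(List<Predicate<Object>> branches, Object datum) {
        for (Predicate<Object> branch : branches)
            if (branch.test(datum))
                return true;
        return false;
    }

    // A ["null", "int"] union, expressed as two branch predicates.
    static final List<Predicate<Object>> NULL_OR_INT = Arrays.asList(
        d -> d == null,               // NULL branch
        d -> d instanceof Integer);   // INT branch

    static boolean isNullOrInt(Object d) { return validateUnion(NULL_OR_INT, d); }

    public static void main(String[] args) {
        System.out.println(isNullOrInt(42));    // true
        System.out.println(isNullOrInt(null));  // true
        System.out.println(isNullOrInt("x"));   // false
    }
}
```

This first-match semantics is why `validate()` returns `false` for a union only after every branch has been tried.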
-
-  /** Renders a Java datum as <a href="http://www.json.org/">JSON</a>. */
-  public String toString(Object datum) {
-    StringBuilder buffer = new StringBuilder();
-    toString(datum, buffer);
-    return buffer.toString();
-  }
-  /** Renders a Java datum as <a href="http://www.json.org/">JSON</a>. */
-  protected void toString(Object datum, StringBuilder buffer) {
-    if (datum instanceof IndexedRecord) {
-      buffer.append("{");
-      int count = 0;
-      IndexedRecord record = (IndexedRecord)datum;
-      for (Field f : record.getSchema().getFields()) {
-        toString(f.name(), buffer);
-        buffer.append(": ");
-        toString(record.get(f.pos()), buffer);
-        if (++count < record.getSchema().getFields().size())
-          buffer.append(", ");
-      }
-      buffer.append("}");
-    } else if (datum instanceof GenericArray) {
-      GenericArray array = (GenericArray)datum;
-      buffer.append("[");
-      long last = array.size()-1;
-      int i = 0;
-      for (Object element : array) {
-        toString(element, buffer);
-        if (i++ < last)
-          buffer.append(", ");
-      }        
-      buffer.append("]");
-    } else if (datum instanceof Map) {
-      buffer.append("{");
-      int count = 0;
-      @SuppressWarnings(value="unchecked")
-      Map<Object,Object> map = (Map<Object,Object>)datum;
-      for (Map.Entry<Object,Object> entry : map.entrySet()) {
-        toString(entry.getKey(), buffer);
-        buffer.append(": ");
-        toString(entry.getValue(), buffer);
-        if (++count < map.size())
-          buffer.append(", ");
-      }
-      buffer.append("}");
-    } else if (datum instanceof Utf8 || datum instanceof String) {
-      buffer.append("\"");
-      buffer.append(datum);                       // TODO: properly escape!
-      buffer.append("\"");
-    } else if (datum instanceof ByteBuffer) {
-      buffer.append("{\"bytes\": \"");
-      ByteBuffer bytes = (ByteBuffer)datum;
-      for (int i = bytes.position(); i < bytes.limit(); i++)
-        buffer.append((char)bytes.get(i));
-      buffer.append("\"}");
-    } else {
-      buffer.append(datum);
-    }
-  }
-
-  /** Create a schema given an example datum. */
-  public Schema induce(Object datum) {
-    if (datum instanceof IndexedRecord) {
-      return ((IndexedRecord)datum).getSchema();
-    } else if (datum instanceof GenericArray) {
-      Schema elementType = null;
-      for (Object element : (GenericArray)datum) {
-        if (elementType == null) {
-          elementType = induce(element);
-        } else if (!elementType.equals(induce(element))) {
-          throw new AvroTypeException("No mixed type arrays.");
-        }
-      }
-      if (elementType == null) {
-        throw new AvroTypeException("Empty array: "+datum);
-      }
-      return Schema.createArray(elementType);
-
-    } else if (datum instanceof Map) {
-      @SuppressWarnings(value="unchecked")
-      Map<Object,Object> map = (Map<Object,Object>)datum;
-      Schema value = null;
-      for (Map.Entry<Object,Object> entry : map.entrySet()) {
-        if (value == null) {
-          value = induce(entry.getValue());
-        } else if (!value.equals(induce(entry.getValue()))) {
-          throw new AvroTypeException("No mixed type map values.");
-        }
-      }
-      if (value == null) {
-        throw new AvroTypeException("Empty map: "+datum);
-      }
-      return Schema.createMap(value);
-    } else if (datum instanceof GenericFixed) {
-      return Schema.createFixed(null, null, null,
-                                ((GenericFixed)datum).bytes().length);
-    }
-    else if (datum instanceof Utf8)       return Schema.create(Type.STRING);
-    else if (datum instanceof ByteBuffer) return Schema.create(Type.BYTES);
-    else if (datum instanceof Integer)    return Schema.create(Type.INT);
-    else if (datum instanceof Long)       return Schema.create(Type.LONG);
-    else if (datum instanceof Float)      return Schema.create(Type.FLOAT);
-    else if (datum instanceof Double)     return Schema.create(Type.DOUBLE);
-    else if (datum instanceof Boolean)    return Schema.create(Type.BOOLEAN);
-    else if (datum == null)               return Schema.create(Type.NULL);
-
-    else throw new AvroTypeException("Can't create schema for: "+datum);
-  }
-
-  /** Return the index for a datum within a union.  Implemented with {@link
-   * #instanceOf(Schema,Object)}.*/
-  public int resolveUnion(Schema union, Object datum) {
-    int i = 0;
-    for (Schema type : union.getTypes()) {
-      if (instanceOf(type, datum))
-        return i;
-      i++;
-    }
-    throw new AvroRuntimeException("Not in union "+union+": "+datum);
-  }
-
-  /** Called by {@link #resolveUnion(Schema,Object)}.  May be overridden for
-      alternate data representations.*/
-  protected boolean instanceOf(Schema schema, Object datum) {
-    switch (schema.getType()) {
-    case RECORD:
-      if (!isRecord(datum)) return false;
-      return (schema.getName() == null) ||
-        schema.getName().equals(getRecordSchema(datum).getName());
-    case ENUM:    return isEnum(datum);
-    case ARRAY:   return isArray(datum);
-    case MAP:     return isMap(datum);
-    case FIXED:   return isFixed(datum);
-    case STRING:  return isString(datum);
-    case BYTES:   return isBytes(datum);
-    case INT:     return datum instanceof Integer;
-    case LONG:    return datum instanceof Long;
-    case FLOAT:   return datum instanceof Float;
-    case DOUBLE:  return datum instanceof Double;
-    case BOOLEAN: return datum instanceof Boolean;
-    case NULL:    return datum == null;
-    default: throw new AvroRuntimeException("Unexpected type: " +schema);
-    }
-  }
-
-  /** Called by the default implementation of {@link #instanceOf}.*/
-  protected boolean isArray(Object datum) {
-    return datum instanceof GenericArray;
-  }
-
-  /** Called by the default implementation of {@link #instanceOf}.*/
-  protected boolean isRecord(Object datum) {
-    return datum instanceof IndexedRecord;
-  }
-
-  /** Called to obtain the schema of a record.  By default calls
-   * {@link GenericContainer#getSchema()}.  May be overridden for alternate record
-   * representations. */
-  protected Schema getRecordSchema(Object record) {
-    return ((GenericContainer)record).getSchema();
-  }
-
-  /** Called by the default implementation of {@link #instanceOf}.*/
-  protected boolean isEnum(Object datum) {
-    return datum instanceof String;
-  }
-  
-  /** Called by the default implementation of {@link #instanceOf}.*/
-  protected boolean isMap(Object datum) {
-    return datum instanceof Map;
-  }
-  
-  /** Called by the default implementation of {@link #instanceOf}.*/
-  protected boolean isFixed(Object datum) {
-    return datum instanceof GenericFixed;
-  }
-
-  /** Called by the default implementation of {@link #instanceOf}.*/
-  protected boolean isString(Object datum) {
-    return datum instanceof Utf8;
-  }
-
-  /** Called by the default implementation of {@link #instanceOf}.*/
-  protected boolean isBytes(Object datum) {
-    return datum instanceof ByteBuffer;
-  }
-
-  /** Compute a hash code according to a schema, consistent with {@link
-   * #compare(Object,Object,Schema)}. */
-  public int hashCode(Object o, Schema s) {
-    if (o == null) return 0;                      // incomplete datum
-    int hashCode = 1;
-    switch (s.getType()) {
-    case RECORD:
-      IndexedRecord r = (IndexedRecord)o;
-      for (Field f : s.getFields()) {
-        if (f.order() == Field.Order.IGNORE)
-          continue;
-        hashCode = hashCodeAdd(hashCode, r.get(f.pos()), f.schema());
-      }
-      return hashCode;
-    case ARRAY:
-      GenericArray a = (GenericArray)o;
-      Schema elementType = a.getSchema().getElementType();
-      for (Object e : a)
-        hashCode = hashCodeAdd(hashCode, e, elementType);
-      return hashCode;
-    case UNION:
-      return hashCode(o, s.getTypes().get(resolveUnion(s, o)));
-    case NULL:
-      return 0;
-    default:
-      return o.hashCode();
-    }
-  }
-
-  /** Add the hash code for an object into an accumulated hash code. */
-  protected int hashCodeAdd(int hashCode, Object o, Schema s) {
-    return 31*hashCode + hashCode(o, s);
-  }
-
-  /** Compare objects according to their schema.  If equal, return zero.  If
-   * greater-than, return 1, if less than return -1.  Order is consistent with
-   * that of {@link BinaryData#compare(byte[], int, byte[], int, Schema)}.
-   */
-  @SuppressWarnings(value="unchecked")
-  public int compare(Object o1, Object o2, Schema s) {
-    if (o1 == o2) return 0;
-    switch (s.getType()) {
-    case RECORD:
-      IndexedRecord r1 = (IndexedRecord)o1;
-      IndexedRecord r2 = (IndexedRecord)o2;
-      for (Field f : s.getFields()) {
-        if (f.order() == Field.Order.IGNORE)
-          continue;                               // ignore this field
-        int pos = f.pos();
-        int compare = compare(r1.get(pos), r2.get(pos), f.schema());
-        if (compare != 0)                         // not equal
-          return f.order() == Field.Order.DESCENDING ? -compare : compare;
-      }
-      return 0;
-    case ENUM:
-      return s.getEnumOrdinal((String)o1) - s.getEnumOrdinal((String)o2);
-    case ARRAY:
-      GenericArray a1 = (GenericArray)o1;
-      GenericArray a2 = (GenericArray)o2;
-      Iterator e1 = a1.iterator();
-      Iterator e2 = a2.iterator();
-      Schema elementType = a1.getSchema().getElementType();
-      while(e1.hasNext() && e2.hasNext()) {
-        int compare = compare(e1.next(), e2.next(), elementType);
-        if (compare != 0) return compare;
-      }
-      return e1.hasNext() ? 1 : (e2.hasNext() ? -1 : 0);
-    case MAP:
-      throw new AvroRuntimeException("Can't compare maps!");
-    case UNION:
-      int i1 = resolveUnion(s, o1);
-      int i2 = resolveUnion(s, o2);
-      return (i1 == i2)
-        ? compare(o1, o2, s.getTypes().get(i1))
-        : i1 - i2;
-    case NULL:
-      return 0;
-    default:
-      return ((Comparable)o1).compareTo(o2);
-    }
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericDatumReader.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericDatumReader.java
deleted file mode 100644
index 91ff301..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericDatumReader.java
+++ /dev/null
@@ -1,358 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.Map;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.ResolvingDecoder;
-import org.apache.avro.util.Utf8;
-
-/** {@link DatumReader} for generic Java objects. */
-public class GenericDatumReader<D> implements DatumReader<D> {
-  private Schema actual;
-  private Schema expected;
-  private Object resolver;
-
-  public GenericDatumReader() {}
-
-  public GenericDatumReader(Schema actual) {
-    this.actual = actual;
-    this.expected = actual;
-  }
-
-  public GenericDatumReader(Schema actual, Schema expected)
-    throws IOException {
-    this.actual = actual;
-    this.expected = expected;
-  }
-
-  @Override
-  public void setSchema(Schema actual) {
-    this.actual = actual;
-    if (expected == null) {
-      expected = actual;
-    }
-    resolver = null;
-  }
-
-  public void setExpected(Schema expected) throws IOException {
-    this.expected = expected;
-  }
-
-  @SuppressWarnings("unchecked")
-  public D read(D reuse, Decoder in) throws IOException {
-    if (resolver == null) {
-      resolver = ResolvingDecoder.resolve(actual, expected);
-    }
-    ResolvingDecoder r = new ResolvingDecoder(resolver, in);
-    D result = (D) read(reuse, expected, r);
-    r.drain();
-    return result;
-  }
-  
-  /** Called to read data.*/
-  protected Object read(Object old, Schema expected,
-      ResolvingDecoder in) throws IOException {
-    switch (expected.getType()) {
-    case RECORD:  return readRecord(old, expected, in);
-    case ENUM:    return readEnum(expected, in);
-    case ARRAY:   return readArray(old, expected, in);
-    case MAP:     return readMap(old, expected, in);
-    case UNION:   return read(old, expected.getTypes().get(in.readIndex()), in);
-    case FIXED:   return readFixed(old, expected, in);
-    case STRING:  return readString(old, expected, in);
-    case BYTES:   return readBytes(old, in);
-    case INT:     return readInt(old, expected, in);
-    case LONG:    return in.readLong();
-    case FLOAT:   return in.readFloat();
-    case DOUBLE:  return in.readDouble();
-    case BOOLEAN: return in.readBoolean();
-    case NULL:    in.readNull(); return null;
-    default: throw new AvroRuntimeException("Unknown type: " + expected);
-    }
-  }
-
-  /** Called to read a record instance. May be overridden for alternate record
-   * representations.*/
-  protected Object readRecord(Object old, Schema expected, 
-      ResolvingDecoder in) throws IOException {
-    Object record = newRecord(old, expected);
-    
-    for (Field f : in.readFieldOrder()) {
-      int pos = f.pos();
-      String name = f.name();
-      Object oldDatum = (old != null) ? getField(record, name, pos) : null;
-      setField(record, name, pos, read(oldDatum, f.schema(), in));
-    }
-
-    return record;
-  }
-
-  /** Called by the default implementation of {@link #readRecord} to set a
-   * record field's value on a record instance.  The default implementation is
-   * for {@link IndexedRecord}.*/
-  protected void setField(Object record, String name, int position, Object o) {
-    ((IndexedRecord)record).put(position, o);
-  }
-  
-  /** Called by the default implementation of {@link #readRecord} to retrieve a
-   * record field value from a reused instance.  The default implementation is
-   * for {@link IndexedRecord}.*/
-  protected Object getField(Object record, String name, int position) {
-    return ((IndexedRecord)record).get(position);
-  }
-
-  /** Called by the default implementation of {@link #readRecord} to remove a
-   * record field value from a reused instance.  The default implementation is
-   * for {@link GenericRecord}.*/
-  protected void removeField(Object record, String field, int position) {
-    ((GenericRecord)record).put(position, null);
-  }
-  
-  /** Called to read an enum value. May be overridden for alternate enum
-   * representations.  By default, returns the symbol as a String. */
-  protected Object readEnum(Schema expected, Decoder in) throws IOException {
-    return createEnum(expected.getEnumSymbols().get(in.readEnum()), expected);
-  }
-
-  /** Called to create an enum value. May be overridden for alternate enum
-   * representations.  By default, returns the symbol as a String. */
-  protected Object createEnum(String symbol, Schema schema) { return symbol; }
-
-  /** Called to read an array instance.  May be overridden for alternate array
-   * representations.*/
-  protected Object readArray(Object old, Schema expected,
-      ResolvingDecoder in) throws IOException {
-    Schema expectedType = expected.getElementType();
-    long l = in.readArrayStart();
-    long base = 0;
-    if (l > 0) {
-      Object array = newArray(old, (int) l, expected);
-      do {
-        for (long i = 0; i < l; i++) {
-          addToArray(array, base + i, read(peekArray(array), expectedType, in));
-        }
-        base += l;
-      } while ((l = in.arrayNext()) > 0);
-      return array;
-    } else {
-      return newArray(old, 0, expected);
-    }
-  }
-
-  /** Called by the default implementation of {@link #readArray} to retrieve a
-   * value from a reused instance.  The default implementation is for {@link
-   * GenericArray}.*/
-  @SuppressWarnings("unchecked")
-  protected Object peekArray(Object array) {
-    return ((GenericArray) array).peek();
-  }
-
-  /** Called by the default implementation of {@link #readArray} to add a value.
-   * The default implementation is for {@link GenericArray}.*/
-  @SuppressWarnings("unchecked")
-  protected void addToArray(Object array, long pos, Object e) {
-    ((GenericArray) array).add(e);
-  }
-  
-  /** Called to read a map instance.  May be overridden for alternate map
-   * representations.*/
-  protected Object readMap(Object old, Schema expected,
-      ResolvingDecoder in) throws IOException {
-    Schema eValue = expected.getValueType();
-    long l = in.readMapStart();
-    Object map = newMap(old, (int) l);
-    if (l > 0) {
-      do {
-        for (int i = 0; i < l; i++) {
-          addToMap(map, readString(null, in), read(null, eValue, in));
-        }
-      } while ((l = in.mapNext()) > 0);
-    }
-    return map;
-  }
-
-  /** Called by the default implementation of {@link #readMap} to add a
-   * key/value pair.  The default implementation is for {@link Map}.*/
-  @SuppressWarnings("unchecked")
-  protected void addToMap(Object map, Object key, Object value) {
-    ((Map) map).put(key, value);
-  }
-  
-  /** Called to read a fixed value. May be overridden for alternate fixed
-   * representations.  By default, returns {@link GenericFixed}. */
-  protected Object readFixed(Object old, Schema expected, Decoder in)
-    throws IOException {
-    GenericFixed fixed = (GenericFixed)createFixed(old, expected);
-    in.readFixed(fixed.bytes(), 0, expected.getFixedSize());
-    return fixed;
-  }
-
-  /** Called to create a fixed value. May be overridden for alternate fixed
-   * representations.  By default, returns {@link GenericFixed}. */
-  protected Object createFixed(Object old, Schema schema) {
-    if ((old instanceof GenericFixed)
-        && ((GenericFixed)old).bytes().length == schema.getFixedSize())
-      return old;
-    return new GenericData.Fixed(schema);
-  }
-
-  /** Called to create a fixed value. May be overridden for alternate fixed
-   * representations.  By default, returns {@link GenericFixed}. */
-  protected Object createFixed(Object old, byte[] bytes, Schema schema) {
-    GenericFixed fixed = (GenericFixed)createFixed(old, schema);
-    System.arraycopy(bytes, 0, fixed.bytes(), 0, schema.getFixedSize());
-    return fixed;
-  }
-  /**
-   * Called to create new record instances. Subclasses may override to use a
-   * different record implementation. The returned instance must conform to the
-   * schema provided. If the old object contains fields not present in the
-   * schema, they should either be removed from the old object, or it should
-   * create a new instance that conforms to the schema. By default, this returns
-   * a {@link GenericData.Record}.
-   */
-  protected Object newRecord(Object old, Schema schema) {
-    if (old instanceof IndexedRecord) {
-      IndexedRecord record = (IndexedRecord)old;
-      if (record.getSchema() == schema)
-        return record;
-    }
-    return new GenericData.Record(schema);
-  }
-
-  /** Called to create new array instances.  Subclasses may override to use a
-   * different array implementation.  By default, this returns a {@link
-   * GenericData.Array}.*/
-  @SuppressWarnings("unchecked")
-  protected Object newArray(Object old, int size, Schema schema) {
-    if (old instanceof GenericArray) {
-      ((GenericArray) old).clear();
-      return old;
-    } else return new GenericData.Array(size, schema);
-  }
-
-  /** Called to create new map instances.  Subclasses may override to use a
-   * different map implementation.  By default, this returns a {@link
-   * HashMap}.*/
-  @SuppressWarnings("unchecked")
-  protected Object newMap(Object old, int size) {
-    if (old instanceof Map) {
-      ((Map) old).clear();
-      return old;
-    } else return new HashMap<Object, Object>(size);
-  }
-
-  /** Called to read strings.  Subclasses may override to use a different
-   * string representation.  By default, this calls {@link
-   * #readString(Object,Decoder)}.*/
-  protected Object readString(Object old, Schema expected,
-                              Decoder in) throws IOException {
-    return readString(old, in);
-  }
-  /** Called to read strings.  Subclasses may override to use a different
-   * string representation.  By default, this calls {@link
-   * Decoder#readString(Utf8)}.*/
-  protected Object readString(Object old, Decoder in) throws IOException {
-    return in.readString((Utf8)old);
-  }
-
-  /** Called to create a string from a default value.  Subclasses may override
-   * to use a different string representation.  By default, this calls {@link
-   * Utf8#Utf8(String)}.*/
-  protected Object createString(String value) { return new Utf8(value); }
-
-  /** Called to read byte arrays.  Subclasses may override to use a different
-   * byte array representation.  By default, this calls {@link
-   * Decoder#readBytes(ByteBuffer)}.*/
-  protected Object readBytes(Object old, Decoder in) throws IOException {
-    return in.readBytes((ByteBuffer)old);
-  }
-
-  /** Called to read integers.  Subclasses may override to use a different
-   * integer representation.  By default, this calls {@link
-   * Decoder#readInt()}.*/
-  protected Object readInt(Object old, Schema expected, Decoder in)
-    throws IOException {
-    return in.readInt();
-  }
-
-  /** Called to create byte arrays from default values.  Subclasses may
-   * override to use a different byte array representation.  By default, this
-   * calls {@link ByteBuffer#wrap(byte[])}.*/
-  protected Object createBytes(byte[] value) { return ByteBuffer.wrap(value); }
-
-  /** Skip an instance of a schema. */
-  public static void skip(Schema schema, Decoder in) throws IOException {
-    switch (schema.getType()) {
-    case RECORD:
-      for (Field field : schema.getFields())
-        skip(field.schema(), in);
-      break;
-    case ENUM:
-      in.readInt();
-      break;
-    case ARRAY:
-      Schema elementType = schema.getElementType();
-      for (long l = in.skipArray(); l > 0; l = in.skipArray()) {
-        for (long i = 0; i < l; i++) {
-          skip(elementType, in);
-        }
-      }
-      break;
-    case MAP:
-      Schema value = schema.getValueType();
-      for (long l = in.skipMap(); l > 0; l = in.skipMap()) {
-        for (long i = 0; i < l; i++) {
-          in.skipString();
-          skip(value, in);
-        }
-      }
-      break;
-    case UNION:
-      skip(schema.getTypes().get((int)in.readIndex()), in);
-      break;
-    case FIXED:
-      in.skipFixed(schema.getFixedSize());
-      break;
-    case STRING:
-      in.skipString();
-      break;
-    case BYTES:
-      in.skipBytes();
-      break;
-    case INT:     in.readInt();           break;
-    case LONG:    in.readLong();          break;
-    case FLOAT:   in.readFloat();         break;
-    case DOUBLE:  in.readDouble();        break;
-    case BOOLEAN: in.readBoolean();       break;
-    case NULL:                            break;
-    default: throw new RuntimeException("Unknown type: "+schema);
-    }
-  }
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericDatumWriter.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericDatumWriter.java
deleted file mode 100644
index 020b3c2..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericDatumWriter.java
+++ /dev/null
@@ -1,197 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.Iterator;
-import java.util.Map;
-import java.util.Map.Entry;
-
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.util.Utf8;
-
-/** {@link DatumWriter} for generic Java objects. */
-public class GenericDatumWriter<D> implements DatumWriter<D> {
-  private final GenericData data;
-  private Schema root;
-
-  public GenericDatumWriter() { this(GenericData.get()); }
-
-  protected GenericDatumWriter(GenericData data) { this.data = data; }
-
-  public GenericDatumWriter(Schema root) {
-    this();
-    setSchema(root);
-  }
-
-  protected GenericDatumWriter(Schema root, GenericData data) {
-    this(data);
-    setSchema(root);
-  }
-
-  public void setSchema(Schema root) { this.root = root; }
-
-  public void write(D datum, Encoder out) throws IOException {
-    write(root, datum, out);
-  }
-  
-  /** Called to write data.*/
-  protected void write(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    switch (schema.getType()) {
-    case RECORD: writeRecord(schema, datum, out); break;
-    case ENUM:   writeEnum(schema, datum, out);   break;
-    case ARRAY:  writeArray(schema, datum, out);  break;
-    case MAP:    writeMap(schema, datum, out);    break;
-    case UNION:
-      int index = data.resolveUnion(schema, datum);
-      out.writeIndex(index);
-      write(schema.getTypes().get(index), datum, out);
-      break;
-    case FIXED:   writeFixed(schema, datum, out);   break;
-    case STRING:  writeString(schema, datum, out);  break;
-    case BYTES:   writeBytes(datum, out);           break;
-    case INT:     out.writeInt((Integer)datum);     break;
-    case LONG:    out.writeLong((Long)datum);       break;
-    case FLOAT:   out.writeFloat((Float)datum);     break;
-    case DOUBLE:  out.writeDouble((Double)datum);   break;
-    case BOOLEAN: out.writeBoolean((Boolean)datum); break;
-    case NULL:    out.writeNull();                  break;
-    default: error(schema,datum);
-    }
-  }
-
-  /** Called to write a record.  May be overridden for alternate record
-   * representations.*/
-  protected void writeRecord(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    for (Field field : schema.getFields()) {
-      write(field.schema(), getField(datum, field.name(), field.pos()), out);
-    }
-  }
-  
-  /** Called by the default implementation of {@link #writeRecord} to retrieve
-   * a record field value.  The default implementation is for {@link
-   * IndexedRecord}.*/
-  protected Object getField(Object record, String field, int position) {
-    return ((IndexedRecord) record).get(position);
-  }
-  
-  /** Called to write an enum value.  May be overridden for alternate enum
-   * representations.*/
-  protected void writeEnum(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    out.writeEnum(schema.getEnumOrdinal((String)datum));
-  }
-  
-  /** Called to write an array.  May be overridden for alternate array
-   * representations.*/
-  protected void writeArray(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    Schema element = schema.getElementType();
-    long size = getArraySize(datum);
-    out.writeArrayStart();
-    out.setItemCount(size);
-    for (Iterator<? extends Object> it = getArrayElements(datum); it.hasNext();) {
-      out.startItem();
-      write(element, it.next(), out);
-    }
-    out.writeArrayEnd();
-  }
-
-  /** Called by the default implementation of {@link #writeArray} to get the
-   * size of an array.  The default implementation is for {@link
-   * GenericArray}.*/
-  @SuppressWarnings("unchecked")
-  protected long getArraySize(Object array) {
-    return ((GenericArray) array).size();
-  }
-
-  /** Called by the default implementation of {@link #writeArray} to enumerate
-   * array elements.  The default implementation is for {@link GenericArray}.*/
-  @SuppressWarnings("unchecked")
-  protected Iterator<? extends Object> getArrayElements(Object array) {
-    return ((GenericArray) array).iterator();
-  }
-  
-  /** Called to write a map.  May be overridden for alternate map
-   * representations.*/
-  protected void writeMap(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    Schema value = schema.getValueType();
-    int size = getMapSize(datum);
-    out.writeMapStart();
-    out.setItemCount(size);
-    for (Map.Entry<Object,Object> entry : getMapEntries(datum)) {
-      out.startItem();
-      writeString(entry.getKey(), out);
-      write(value, entry.getValue(), out);
-    }
-    out.writeMapEnd();
-  }
-
-  /** Called by the default implementation of {@link #writeMap} to get the size
-   * of a map.  The default implementation is for {@link Map}.*/
-  @SuppressWarnings("unchecked")
-  protected int getMapSize(Object map) {
-    return ((Map) map).size();
-  }
-
-  /** Called by the default implementation of {@link #writeMap} to enumerate
-   * map elements.  The default implementation is for {@link Map}.*/
-  @SuppressWarnings("unchecked")
-  protected Iterable<Map.Entry<Object,Object>> getMapEntries(Object map) {
-    return ((Map) map).entrySet();
-  }
-  
-  /** Called to write a string.  May be overridden for alternate string
-   * representations.*/
-  protected void writeString(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    writeString(datum, out);
-  }
-  /** Called to write a string.  May be overridden for alternate string
-   * representations.*/
-  protected void writeString(Object datum, Encoder out) throws IOException {
-    out.writeString((Utf8)datum);
-  }
-
-  /** Called to write a bytes value.  May be overridden for alternate bytes
-   * representations.*/
-  protected void writeBytes(Object datum, Encoder out) throws IOException {
-    out.writeBytes((ByteBuffer)datum);
-  }
-
-  /** Called to write a fixed value.  May be overridden for alternate fixed
-   * representations.*/
-  protected void writeFixed(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    out.writeFixed(((GenericFixed)datum).bytes(), 0, schema.getFixedSize());
-  }
-  
-  private void error(Schema schema, Object datum) {
-    throw new AvroTypeException("Not a "+schema+": "+datum);
-  }
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericFixed.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericFixed.java
deleted file mode 100644
index d487041..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericFixed.java
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-/** Fixed-size data. */
-public interface GenericFixed {
-  /** Return the data. */
-  byte[] bytes();
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericRecord.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericRecord.java
deleted file mode 100644
index aa8c193..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericRecord.java
+++ /dev/null
@@ -1,27 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-/** A generic instance of a record schema.  Fields are accessible by name as
- * well as by index. */
-public interface GenericRecord extends IndexedRecord {
-  /** Set the value of a field given its name. */
-  void put(String key, Object v);
-  /** Return the value of a field given its name. */
-  Object get(String key);
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericRequestor.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericRequestor.java
deleted file mode 100644
index c984c0f..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericRequestor.java
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.generic;
-
-import java.io.IOException;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.Requestor;
-import org.apache.avro.ipc.Transceiver;
-
-/** {@link Requestor} implementation for generic Java data. */
-public class GenericRequestor extends Requestor {
-  public GenericRequestor(Protocol protocol, Transceiver transceiver)
-    throws IOException {
-    super(protocol, transceiver);
-  }
-
-  @Override
-  public Object request(String messageName, Object request)
-    throws IOException {
-    try {
-      return super.request(messageName, request);
-    } catch (Exception e) {
-      if (e instanceof RuntimeException)
-        throw (RuntimeException)e;
-      if (e instanceof IOException)
-        throw (IOException)e;
-      throw new AvroRemoteException(e);
-    }
-  }
-
-  @Override
-  public void writeRequest(Schema schema, Object request, Encoder out)
-    throws IOException {
-    new GenericDatumWriter<Object>(schema).write(request, out);
-  }
-
-  @Override
-  public Object readResponse(Schema schema, Decoder in) throws IOException {
-    return new GenericDatumReader<Object>(schema).read(null, in);
-  }
-
-  @Override
-  public AvroRemoteException readError(Schema schema, Decoder in)
-    throws IOException {
-    return new AvroRemoteException(new GenericDatumReader<Object>(schema).read(null,in));
-  }
-
-}
-
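The request override in GenericRequestor above narrows a broadly caught Exception back to its concrete unchecked and IO types before wrapping anything else. The same translation pattern in isolation (names here are illustrative, not Avro API):

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class ExceptionTranslation {
  /** Rethrow known exception types unchanged; wrap everything else once. */
  static Object callWithTranslation(Callable<Object> task) throws IOException {
    try {
      return task.call();
    } catch (Exception e) {
      if (e instanceof RuntimeException)
        throw (RuntimeException) e;         // unchecked: propagate as-is
      if (e instanceof IOException)
        throw (IOException) e;              // declared checked type: propagate as-is
      throw new IOException(e);             // anything else: wrap exactly once
    }
  }
}
```

The instanceof checks keep the original exception type visible to callers; only genuinely unexpected checked exceptions get wrapped.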
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericResponder.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericResponder.java
deleted file mode 100644
index 2e8d3f6..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/GenericResponder.java
+++ /dev/null
@@ -1,57 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.generic;
-
-import java.io.IOException;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.Responder;
-
-/** {@link Responder} implementation for generic Java data. */
-public abstract class GenericResponder extends Responder {
-
-  public GenericResponder(Protocol local) {
-    super(local);
-  }
-
-  @Override
-  public Object readRequest(Schema schema, Decoder in) throws IOException {
-    return new GenericDatumReader<Object>(schema).read(null, in);
-  }
-
-  @Override
-  public void writeResponse(Schema schema, Object response, Encoder out)
-    throws IOException {
-    new GenericDatumWriter<Object>(schema).write(response, out);
-  }
-
-  @Override
-  public void writeError(Schema schema, Object error,
-                         Encoder out) throws IOException {
-    if (error instanceof AvroRemoteException)
-      error = ((AvroRemoteException)error).getValue();
-    new GenericDatumWriter<Object>(schema).write(error, out);
-  }
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/IndexedRecord.java b/branch-1.3/lang/java/src/java/org/apache/avro/generic/IndexedRecord.java
deleted file mode 100644
index e3e93d8..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/IndexedRecord.java
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-/** A record implementation that permits field access by integer index.*/
-public interface IndexedRecord extends GenericContainer {
-  /** Set the value of a field given its position in the schema. */
-  void put(int i, Object v);
-  /** Return the value of a field given its position in the schema. */
-  Object get(int i);
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/generic/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/generic/package.html
deleted file mode 100644
index 24c5d0e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/generic/package.html
+++ /dev/null
@@ -1,43 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-A generic representation for Avro data.
-
-<p>This representation is best for applications which deal with
-  dynamic data, whose schemas are not known until runtime.
-
-<p>Avro schemas are mapped to Java types as follows:
-<ul>
-<li>Schema records are implemented as {@link org.apache.avro.generic.GenericRecord}.
-<li>Schema enums are implemented as {@link java.lang.String}.
-<li>Schema arrays are implemented as {@link org.apache.avro.generic.GenericArray}.
-<li>Schema maps are implemented as {@link java.util.Map}.
-<li>Schema fixed are implemented as {@link org.apache.avro.generic.GenericFixed}.
-<li>Schema strings are implemented as {@link org.apache.avro.util.Utf8}.
-<li>Schema bytes are implemented as {@link java.nio.ByteBuffer}.
-<li>Schema ints are implemented as {@link java.lang.Integer}.
-<li>Schema longs are implemented as {@link java.lang.Long}.
-<li>Schema floats are implemented as {@link java.lang.Float}.
-<li>Schema doubles are implemented as {@link java.lang.Double}.
-<li>Schema booleans are implemented as {@link java.lang.Boolean}.
-</ul>
-
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryData.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryData.java
deleted file mode 100644
index 49f732d..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryData.java
+++ /dev/null
@@ -1,187 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.io.BinaryDecoder.BufferAccessor;
-
-/** Utilities for binary-encoded data. */
-public class BinaryData {
-
-  private BinaryData() {}                      // no public ctor
-
-  private static class Decoders {
-    private final BufferAccessor b1, b2;
-    private final BinaryDecoder d1, d2;
-    public Decoders() {
-      this.d1 = new BinaryDecoder(new byte[0], 0, 0);
-      this.d2 = new BinaryDecoder(new byte[0], 0, 0);
-      this.b1 = d1.getBufferAccessor();
-      this.b2 = d2.getBufferAccessor();
-    }
-    public void set(byte[] data1, int off1, int len1,
-                    byte[] data2, int off2, int len2) {
-      this.d1.init(data1, off1, len1);
-      this.d2.init(data2, off2, len2);
-    }
-  }
-
-  private static final ThreadLocal<Decoders> DECODERS
-    = new ThreadLocal<Decoders>() {
-    @Override protected Decoders initialValue() { return new Decoders(); }
-  };
-
-  /** Compare binary encoded data.  If equal, return zero.  If greater,
-   * return a positive value; if less, return a negative value.  Order is
-   * consistent with that of {@link
-   * org.apache.avro.generic.GenericData#compare(Object, Object, Schema)}.*/
-  public static int compare(byte[] b1, int s1,
-                            byte[] b2, int s2,
-                            Schema schema) {
-    Decoders decoders = DECODERS.get();
-    decoders.set(b1, s1, b1.length, b2, s2, b2.length);
-    try {
-      return compare(decoders, schema);
-    } catch (IOException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  /** If equal, return zero.  If greater, return a positive value; if less,
-   * return a negative value. */
-  private static int compare(Decoders d, Schema schema) throws IOException {
-    Decoder d1 = d.d1; Decoder d2 = d.d2;
-    switch (schema.getType()) {
-    case RECORD: {
-      for (Field field : schema.getFields()) {
-        if (field.order() == Field.Order.IGNORE) {
-          GenericDatumReader.skip(field.schema(), d1);
-          GenericDatumReader.skip(field.schema(), d2);
-          continue;
-        }
-        int c = compare(d, field.schema());
-        if (c != 0)
-          return (field.order() != Field.Order.DESCENDING) ? c : -c;
-      }
-      return 0;
-    }
-    case ENUM: case INT: {
-      int i1 = d1.readInt();
-      int i2 = d2.readInt();
-      return i1 == i2 ? 0 : (i1 > i2 ? 1 : -1);
-    }
-    case LONG: {
-      long l1 = d1.readLong();
-      long l2 = d2.readLong();
-      return l1 == l2 ? 0 : (l1 > l2 ? 1 : -1);
-    }
-    case ARRAY: {
-      long i = 0;                                 // position in array
-      long r1 = 0, r2 = 0;                        // remaining in current block
-      long l1 = 0, l2 = 0;                        // total array length
-      while (true) {
-        if (r1 == 0) {                            // refill block(s)
-          r1 = d1.readLong();
-          if (r1 < 0) { r1 = -r1; d1.readLong(); }
-          l1 += r1;
-        }
-        if (r2 == 0) {
-          r2 = d2.readLong();
-          if (r2 < 0) { r2 = -r2; d2.readLong(); }
-          l2 += r2;
-        }
-        if (r1 == 0 || r2 == 0)                   // empty block: done
-          return (l1 == l2) ? 0 : ((l1 > l2) ? 1 : -1);
-        long l = Math.min(l1, l2);
-        while (i < l) {                           // compare to end of block
-          int c = compare(d, schema.getElementType());
-          if (c != 0) return c;
-          i++; r1--; r2--;
-        }
-      }
-    }
-    case MAP:
-      throw new AvroRuntimeException("Can't compare maps!");
-    case UNION: {
-      int i1 = d1.readInt();
-      int i2 = d2.readInt();
-      if (i1 == i2) {
-        return compare(d, schema.getTypes().get(i1));
-      } else {
-        return i1 - i2;
-      }
-    }
-    case FIXED: {
-      int size = schema.getFixedSize();
-      int c = compareBytes(d.b1.getBuf(), d.b1.getPos(), size,
-                           d.b2.getBuf(), d.b2.getPos(), size);
-      d.d1.skipFixed(size);
-      d.d2.skipFixed(size);
-      return c;
-    }
-    case STRING: case BYTES: {
-      int l1 = d1.readInt();
-      int l2 = d2.readInt();
-      int c = compareBytes(d.b1.getBuf(), d.b1.getPos(), l1,
-                           d.b2.getBuf(), d.b2.getPos(), l2);
-      d.d1.skipFixed(l1);
-      d.d2.skipFixed(l2);
-      return c;
-    }
-    case FLOAT: {
-      float f1 = d1.readFloat();
-      float f2 = d2.readFloat();
-      return (f1 == f2) ? 0 : ((f1 > f2) ? 1 : -1);
-    }
-    case DOUBLE: {
-      double f1 = d1.readDouble();
-      double f2 = d2.readDouble();
-      return (f1 == f2) ? 0 : ((f1 > f2) ? 1 : -1);
-    }
-    case BOOLEAN:
-      boolean b1 = d1.readBoolean();
-      boolean b2 = d2.readBoolean();
-      return (b1 == b2) ? 0 : (b1 ? 1 : -1);
-    case NULL:
-      return 0;
-    default:
-      throw new AvroRuntimeException("Unexpected schema to compare!");
-    }
-  }
-
-  /** Lexicographically compare bytes.  If equal, return zero.  If greater-than,
-   * return a positive value, if less than return a negative value. */
-  public static int compareBytes(byte[] b1, int s1, int l1,
-                                 byte[] b2, int s2, int l2) {
-    int end1 = s1 + l1;
-    int end2 = s2 + l2;
-    for (int i = s1, j = s2; i < end1 && j < end2; i++, j++) {
-      int a = (b1[i] & 0xff);
-      int b = (b2[j] & 0xff);
-      if (a != b) {
-        return a - b;
-      }
-    }
-    return l1 - l2;
-  }
-
-}
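compareBytes above orders byte ranges lexicographically over unsigned byte values, falling back to length when one range is a prefix of the other. The same logic in isolation, for experimentation outside Avro:

```java
public class LexCompare {
  /** Lexicographic unsigned-byte comparison; mirrors BinaryData.compareBytes. */
  static int compareBytes(byte[] b1, int s1, int l1,
                          byte[] b2, int s2, int l2) {
    int end1 = s1 + l1, end2 = s2 + l2;
    for (int i = s1, j = s2; i < end1 && j < end2; i++, j++) {
      int a = b1[i] & 0xff;   // mask so 0x80..0xff sort after 0x00..0x7f
      int b = b2[j] & 0xff;
      if (a != b) return a - b;
    }
    return l1 - l2;           // shared prefix: the shorter range sorts first
  }
}
```

The `& 0xff` mask is the essential detail: without it, Java's signed bytes would sort 0x80..0xff before 0x00..0x7f.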
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryDecoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryDecoder.java
deleted file mode 100644
index 90f31c6..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryDecoder.java
+++ /dev/null
@@ -1,983 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.EOFException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.util.Utf8;
-
-/** A {@link Decoder} for binary-format data.
- * <p/>
- * Instances are created using {@link DecoderFactory}.
- * <p/>
- * This class may read-ahead and buffer bytes from the source beyond what is
- * required to serve its read methods.  See {@link #inputStream} and 
- * {@link DecoderFactory#configureDirectDecoder(boolean)}.
- * The number of unused bytes in the buffer can be accessed by
- * inputStream().remaining(), if the BinaryDecoder is not 'direct'.
- * 
- * @see Encoder
- */
-
-public class BinaryDecoder extends Decoder {
-  private ByteSource source = null;
-  // we keep the buffer and its state variables in this class and not in a
-  // container class for performance reasons. This improves performance
-  // over a container object by about 5% to 15%
-  // for example, we could have a FastBuffer class with these state variables
-  // and keep a private FastBuffer member here. This simplifies the
-  // "detach source" code and source access to the buffer, but
-  // hurts performance.
-  private byte[] buf = null;
-  private int minPos = 0;
-  private int pos = 0;
-  private int limit = 0;
-
-  BufferAccessor getBufferAccessor() {
-    return new BufferAccessor(this);
-  }
-  
-  /** protected constructor for child classes */
-  protected BinaryDecoder() {
-  }
-
-  /**
-   * @deprecated Use {@link DecoderFactory} to create BinaryDecoder instances and
-   *             reinitialize them
-   */
-  @Deprecated
-  public BinaryDecoder(InputStream in) {
-    super();
-    init(in);
-  }
-
-  BinaryDecoder(int bufferSize, InputStream in) {
-    super();
-    init(bufferSize, in);
-  }
-
-  BinaryDecoder(byte[] data, int offset, int length) {
-    super();
-    init(data, offset, length);
-  }
-
-  /**
-   * @deprecated Use {@link DecoderFactory} to create BinaryDecoder instances and
-   *             reinitialize them
-   */
-  @Override
-  @Deprecated
-  public void init(InputStream in) {
-    init(DecoderFactory.DEFAULT_BUFFER_SIZE, in);
-  }
-
-  void init(int bufferSize, InputStream in) {
-    configureSource(bufferSize, new InputStreamByteSource(in));
-  }
-
-  void init(byte[] data, int offset, int length) {
-    configureSource(DecoderFactory.DEFAULT_BUFFER_SIZE, new ByteArrayByteSource(
-        data, offset, length));
-  }
-
-  /**
-   * Initializes this decoder with a new ByteSource. Detaches the old source (if
-   * it exists) from this Decoder. The old source's state no longer depends on
-   * this Decoder and its InputStream interface will continue to drain the
-   * remaining buffer and source data.
-   * <p/>
-   * The decoder will read from the new source. The source will generally
-   * replace the buffer with its own. If the source allocates a new buffer, it
-   * will create it with size bufferSize.
-   */
-  private void configureSource(int bufferSize, ByteSource source) {
-    if (null != this.source) {
-      this.source.detach();
-    }
-    source.attach(bufferSize, this);
-    this.source = source;
-  }
-
-  @Override
-  public void readNull() throws IOException {
-  }
-
-  @Override
-  public boolean readBoolean() throws IOException {
-    // inlined, shorter version of ensureBounds
-    if (limit == pos) {
-      limit = source.tryReadRaw(buf, 0, buf.length);
-      pos = 0;
-      if (limit == 0) {
-        throw new EOFException();
-      }
-    }
-    int n = buf[pos++] & 0xff;
-    return n == 1;
-  }
-
-  @Override
-  public int readInt() throws IOException {
-    ensureBounds(5); // won't throw index out of bounds
-    int len = 1;
-    int b = buf[pos] & 0xff;
-    int n = b & 0x7f;
-    if (b > 0x7f) {
-      b = buf[pos + len++] & 0xff;
-      n ^= (b & 0x7f) << 7;
-      if (b > 0x7f) {
-        b = buf[pos + len++] & 0xff;
-        n ^= (b & 0x7f) << 14;
-        if (b > 0x7f) {
-          b = buf[pos + len++] & 0xff;
-          n ^= (b & 0x7f) << 21;
-          if (b > 0x7f) {
-            b = buf[pos + len++] & 0xff;
-            n ^= (b & 0x7f) << 28;
-          }
-          if (b > 0x7f) {
-            throw new IOException("Invalid int encoding");
-          }
-        }
-      }
-    }
-    if (pos + len > limit) {
-      throw new EOFException();
-    }
-    pos += len;
-    return (n >>> 1) ^ -(n & 1); // back to two's-complement
-  }
-
-  @Override
-  public long readLong() throws IOException {
-    ensureBounds(10);
-    int len = 1;
-    int b = buf[pos] & 0xff;
-    int n = b & 0x7f;
-    long l;
-    if (b > 0x7f) {
-      b = buf[pos + len++] & 0xff;
-      n ^= (b & 0x7f) << 7;
-      if (b > 0x7f) {
-        b = buf[pos + len++] & 0xff;
-        n ^= (b & 0x7f) << 14;
-        if (b > 0x7f) {
-          b = buf[pos + len++] & 0xff;
-          n ^= (b & 0x7f) << 21;
-          // only the low 28 bits can be set, so this won't carry
-          // the sign bit to the long
-          l = n;
-          if (b > 0x7f) {
-            b = buf[pos + len++] & 0xff;
-            l ^= (b & 0x7fL) << 28;
-            if (b > 0x7f) {
-              b = buf[pos + len++] & 0xff;
-              l ^= (b & 0x7fL) << 35;
-              if (b > 0x7f) {
-                b = buf[pos + len++] & 0xff;
-                l ^= (b & 0x7fL) << 42;
-                if (b > 0x7f) {
-                  b = buf[pos + len++] & 0xff;
-                  l ^= (b & 0x7fL) << 49;
-                  if (b > 0x7f) {
-                    b = buf[pos + len++] & 0xff;
-                    l ^= (b & 0x7fL) << 56;
-                    if (b > 0x7f) {
-                      b = buf[pos + len++] & 0xff;
-                      l ^= (b & 0x7fL) << 63;
-                    }
-                    if (b > 0x7f) {
-                      throw new IOException("Invalid long encoding");
-                    }
-                  }
-                }
-              }
-            }
-          }
-        } else {
-          l = n;
-        }
-      } else {
-        l = n;
-      }
-    } else {
-      l = n;
-    }
-    if (pos + len > limit) {
-      throw new EOFException();
-    }
-    pos += len;
-    return (l >>> 1) ^ -(l & 1); // back to two's-complement
-  }
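The readInt and readLong bodies above end with `(n >>> 1) ^ -(n & 1)`, undoing Avro's zigzag mapping of signed values onto unsigned varints. A standalone sketch of both directions, independent of the decoder's buffering:

```java
public class ZigZag {
  /** Map a signed int onto an unsigned value: 0, -1, 1, -2 -> 0, 1, 2, 3. */
  static int encode(int n) {
    return (n << 1) ^ (n >> 31);
  }

  /** Inverse mapping, as applied at the end of readInt above. */
  static int decode(int u) {
    return (u >>> 1) ^ -(u & 1);
  }
}
```

Zigzag keeps small-magnitude values (positive or negative) small, so they fit in few varint bytes.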
-
-  @Override
-  public float readFloat() throws IOException {
-    ensureBounds(4);
-    int len = 1;
-    int n = (buf[pos] & 0xff) | ((buf[pos + len++] & 0xff) << 8)
-        | ((buf[pos + len++] & 0xff) << 16) | ((buf[pos + len++] & 0xff) << 24);
-    if ((pos + 4) > limit) {
-      throw new EOFException();
-    }
-    pos += 4;
-    return Float.intBitsToFloat(n);
-  }
-
-  @Override
-  public double readDouble() throws IOException {
-    ensureBounds(8);
-    int len = 1;
-    int n1 = (buf[pos] & 0xff) | ((buf[pos + len++] & 0xff) << 8)
-        | ((buf[pos + len++] & 0xff) << 16) | ((buf[pos + len++] & 0xff) << 24);
-    int n2 = (buf[pos + len++] & 0xff) | ((buf[pos + len++] & 0xff) << 8)
-        | ((buf[pos + len++] & 0xff) << 16) | ((buf[pos + len++] & 0xff) << 24);
-    if ((pos + 8) > limit) {
-      throw new EOFException();
-    }
-    pos += 8;
-    return Double.longBitsToDouble((((long) n1) & 0xffffffffL)
-        | (((long) n2) << 32));
-  }
-
-  @Override
-  public Utf8 readString(Utf8 old) throws IOException {
-    int length = readInt();
-    Utf8 result = (old != null ? old : new Utf8());
-    result.setLength(length);
-    doReadBytes(result.getBytes(), 0, length);
-    return result;
-  }
-
-  @Override
-  public void skipString() throws IOException {
-    doSkipBytes(readInt());
-  }
-
-  @Override
-  public ByteBuffer readBytes(ByteBuffer old) throws IOException {
-    int length = readInt();
-    ByteBuffer result;
-    if (old != null && length <= old.capacity()) {
-      result = old;
-      result.clear();
-    } else {
-      result = ByteBuffer.allocate(length);
-    }
-    doReadBytes(result.array(), result.position(), length);
-    result.limit(length);
-    return result;
-  }
-
-  @Override
-  public void skipBytes() throws IOException {
-    doSkipBytes(readInt());
-  }
-
-  @Override
-  public void readFixed(byte[] bytes, int start, int length) throws IOException {
-    doReadBytes(bytes, start, length);
-  }
-
-  @Override
-  public void skipFixed(int length) throws IOException {
-    doSkipBytes(length);
-  }
-
-  @Override
-  public int readEnum() throws IOException {
-    return readInt();
-  }
-
-  protected void doSkipBytes(long length) throws IOException {
-    int remaining = limit - pos;
-    if (length <= remaining) {
-      pos += length;
-    } else {
-      limit = pos = 0;
-      length -= remaining;
-      source.skipSourceBytes(length);
-    }
-  }
-
-  /**
-   * Reads <tt>length</tt> bytes into <tt>bytes</tt> starting at <tt>start</tt>.
-   * 
-   * @throws EOFException
-   *           If there are not enough bytes left in the source.
-   * @throws IOException
-   */
-  protected void doReadBytes(byte[] bytes, int start, int length)
-      throws IOException {
-    int remaining = limit - pos;
-    if (length <= remaining) {
-      System.arraycopy(buf, pos, bytes, start, length);
-      pos += length;
-    } else {
-      // read the rest of the buffer
-      System.arraycopy(buf, pos, bytes, start, remaining);
-      start += remaining;
-      length -= remaining;
-      pos = limit;
-      // finish from the byte source
-      source.readRaw(bytes, start, length);
-    }
-  }
-
-  /**
-   * Returns the number of items to follow in the current array or map. Returns
-   * 0 if there are no more items in the current array and the array/map has
-   * ended.
-   * 
-   * @throws IOException
-   */
-  protected long doReadItemCount() throws IOException {
-    long result = readLong();
-    if (result < 0) {
-      readLong(); // Consume byte-count if present
-      result = -result;
-    }
-    return result;
-  }
-
-  /**
-   * Reads the count of items in the current array or map and skips those
-   * items when possible. If the items can be skipped (the stream includes a
-   * byte count for the block), this repeats until no items are left in the
-   * array or map. If the items cannot be skipped (no byte count is present
-   * in the stream), this returns the count of items found; the client must
-   * then skip those items individually.
-   * @return Zero if there are no more items to skip and end of array/map is
-   *         reached. Positive number if some items are found that cannot be
-   *         skipped and the client needs to skip them individually.
-   * @throws IOException
-   */
-  private long doSkipItems() throws IOException {
-    long result = readInt();
-    while (result < 0) {
-      long bytecount = readLong();
-      doSkipBytes(bytecount);
-      result = readInt();
-    }
-    return result;
-  }
-
-  @Override
-  public long readArrayStart() throws IOException {
-    return doReadItemCount();
-  }
-
-  @Override
-  public long arrayNext() throws IOException {
-    return doReadItemCount();
-  }
-
-  @Override
-  public long skipArray() throws IOException {
-    return doSkipItems();
-  }
-
-  @Override
-  public long readMapStart() throws IOException {
-    return doReadItemCount();
-  }
-
-  @Override
-  public long mapNext() throws IOException {
-    return doReadItemCount();
-  }
-
-  @Override
-  public long skipMap() throws IOException {
-    return doSkipItems();
-  }
-
-  @Override
-  public int readIndex() throws IOException {
-    return readInt();
-  }
-
-  /**
-   * Returns true if the current BinaryDecoder is at the end of its source data and
-   * cannot read any further without throwing an EOFException or other
-   * IOException.
-   * <p/>
-   * Not all implementations of BinaryDecoder support isEnd(). Implementations that do
-   * not support isEnd() will throw a
-   * {@link java.lang.UnsupportedOperationException}.
-   */
-  public boolean isEnd() throws IOException {
-    if (limit - pos > 0) {
-      // buffer not empty, not at end.
-      return false;
-    } else {
-      if (source.isEof()) {
-        return true;
-      }
-      // read from source.
-      int read = source.tryReadRaw(buf, 0, buf.length);
-      pos = 0;
-      limit = read;
-      if (0 == read) {
-        // nothing left
-        return true;
-      }
-      return false;
-    }
-  }
-
-  /**
-   * Ensures that buf[pos + num - 1] is not out of the buffer array bounds.
-   * However, buf[pos + num -1] may be >= limit if there is not enough data left
-   * in the source to fill the array with num bytes.
-   * <p/>
-   * This method allows readers to read ahead by num bytes safely without
-   * checking for EOF at each byte. However, readers must ensure that their
-   * reads are valid by checking that their read did not advance past the limit
-   * before adjusting pos.
-   * <p/>
-   * num must be less than the buffer size and greater than 0
-   */
-  private void ensureBounds(int num) throws IOException {
-    int remaining = limit - pos;
-    if (remaining < num) {
-      // move remaining to front
-      source.compactAndFill(buf, pos, minPos, remaining);
-    }
-  }
-
-  /**
-   * Returns an {@link java.io.InputStream} that is aware of any buffering that
-   * may occur in this BinaryDecoder. Readers that need to interleave decoding 
-   * Avro data with other reads must access this InputStream to do so unless
-   * the implementation is 'direct' and does not read beyond the minimum bytes
-   * necessary from the source.  
-   * <p>
-   * See {@link DecoderFactory#configureDirectDecoder(boolean)}
-   */
-  public InputStream inputStream() {
-    return source;
-  }
-
-  /**
-   * BufferAccessor is used by BinaryDecoder to enable {@link ByteSource}s and
-   * the InputStream returned by {@link BinaryDecoder#inputStream} to access
-   * the BinaryDecoder's buffer. When a BufferAccessor is created, it is
-   * attached to a BinaryDecoder and its buffer, and its accessors directly
-   * reference the BinaryDecoder's buffer. When detach() is called, it
-   * snapshots the BinaryDecoder's buffer state into its own fields. The
-   * BinaryDecoder only detaches a BufferAccessor when it is initializing to a
-   * new ByteSource. Therefore, a client that is using the InputStream
-   * returned by BinaryDecoder.inputStream can continue to use that stream
-   * after the BinaryDecoder has been reinitialized to read from new data.
-   */
-  static class BufferAccessor {
-    private final BinaryDecoder decoder;
-    private byte[] buf;
-    private int pos;
-    private int limit;
-    boolean detached = false;
-
-    private BufferAccessor(BinaryDecoder decoder) {
-      this.decoder = decoder;
-    }
-
-    void detach() {
-      this.buf = decoder.buf;
-      this.pos = decoder.pos;
-      this.limit = decoder.limit;
-      detached = true;
-    }
-
-    int getPos() {
-      if (detached)
-        return this.pos;
-      else
-        return decoder.pos;
-    }
-
-    int getLim() {
-      if (detached)
-        return this.limit;
-      else
-        return decoder.limit;
-    }
-
-    byte[] getBuf() {
-      if (detached)
-        return this.buf;
-      else
-        return decoder.buf;
-    }
-
-    void setPos(int pos) {
-      if (detached)
-        this.pos = pos;
-      else
-        decoder.pos = pos;
-    }
-
-    void setLimit(int limit) {
-      if (detached)
-        this.limit = limit;
-      else
-        decoder.limit = limit;
-    }
-
-    void setBuf(byte[] buf, int offset, int length) {
-      if (detached) {
-        this.buf = buf;
-        this.limit = offset + length;
-        this.pos = offset;
-      } else {
-        decoder.buf = buf;
-        decoder.limit = offset + length;
-        decoder.pos = offset;
-        decoder.minPos = offset;
-      }
-    }
-  }
-
-  /**
-   * ByteSource abstracts the source of data from the core workings of
-   * BinaryDecoder. This matters for performance because InputStream's API
-   * has several quirks that get in the way: in general, InputStream does not
-   * require that as many bytes as possible be read when filling a buffer.
-   * <p/>
-   * InputStream's terminating conditions for a read are two-fold: EOFException
-   * and '-1' on the return from read(). Implementations are supposed to return
-   * '-1' on EOF but often do not. The extra terminating conditions cause extra
-   * conditionals on both sides of the API, and slow performance significantly.
-   * <p/>
-   * ByteSource implementations provide read() and skip() variants that have
-   * stronger guarantees than InputStream, freeing client code to be simplified
-   * and faster.
-   * <p/>
-   * {@link #skipSourceBytes} and {@link #readRaw} are guaranteed to have read
-   * or skipped as many bytes as possible, or to throw EOFException.
-   * {@link #trySkipBytes} and {@link #tryReadRaw} are guaranteed to attempt to
-   * read or skip as many bytes as possible and never throw EOFException, while
-   * returning the exact number of bytes skipped or read. {@link #isEof}
-   * returns true if all the source bytes have been read or skipped. This
-   * condition can also be detected by a client if an EOFException is thrown
-   * from {@link #skipSourceBytes} or {@link #readRaw}, or if
-   * {@link #trySkipBytes} or {@link #tryReadRaw} return 0.
-   * <p/>
-   * A ByteSource also implements the InputStream contract for use by APIs that
-   * require it. The InputStream interface must take into account buffering in
-   * any decoder that this ByteSource is attached to. The other methods do not
-   * account for buffering.
-   */
-
-  abstract static class ByteSource extends InputStream {
-    // maintain a reference to the buffer, so that if this
-    // source is detached from the Decoder, and a client still
-    // has a reference to it via inputStream(), bytes are not
-    // lost
-    protected BufferAccessor ba;
-
-    protected ByteSource() {
-    }
-
-    abstract boolean isEof();
-
-    protected void attach(int bufferSize, BinaryDecoder decoder) {
-      decoder.buf = new byte[bufferSize];
-      decoder.pos = 0;
-      decoder.minPos = 0;
-      decoder.limit = 0;
-      this.ba = new BufferAccessor(decoder);
-      return;
-    }
-
-    protected void detach() {
-      ba.detach();
-    }
-
-    /**
-     * Skips length bytes from the source. If length bytes cannot be skipped
-     * due to end of file/stream/channel/etc., an EOFException is thrown.
-     * 
-     * @param length
-     *          the number of bytes to attempt to skip
-     * @throws IOException
-     *           if an error occurs
-     * @throws EOFException
-     *           if length bytes cannot be skipped
-     */
-    protected abstract void skipSourceBytes(long length) throws IOException;
-
-    /**
-     * Attempts to skip <i>skipLength</i> bytes from the source. Returns the
-     * actual number of bytes skipped. This method must attempt to skip as many
-     * bytes as possible up to <i>skipLength</i> bytes. Skipping 0 bytes signals
-     * end of stream/channel/file/etc.
-     * 
-     * @param skipLength
-     *          the number of bytes to attempt to skip
-     * @return the count of actual bytes skipped.
-     */
-    protected abstract long trySkipBytes(long skipLength) throws IOException;
-
-    /**
-     * Reads raw from the source, into a byte[]. Used for reads that are larger
-     * than the buffer, or otherwise unbuffered. This is a mandatory read -- if
-     * there are not enough bytes in the source, EOFException is thrown.
-     * 
-     * @throws IOException
-     *           if an error occurs
-     * @throws EOFException
-     *           if len bytes cannot be read
-     * */
-    protected abstract void readRaw(byte[] data, int off, int len)
-        throws IOException;
-
-    /**
-     * Attempts to copy up to <i>len</i> bytes from the source into data,
-     * starting at index <i>off</i>. Returns the actual number of bytes copied
-     * which may be between 0 and <i>len</i>.
-     * <p/>
-     * This method must attempt to read as much as possible from the source.
-     * Returns 0 when at the end of stream/channel/file/etc.
-     * 
-     * @throws IOException
-     *           if an error occurs reading
-     **/
-    protected abstract int tryReadRaw(byte[] data, int off, int len)
-        throws IOException;
-
-    /**
-     * If this source buffers, compacts the buffer by placing the
-     * <i>remaining</i> bytes starting at <i>pos</i> at <i>minPos</i>. This may
-     * be done in the current buffer, or may replace the buffer with a new one.
-     * 
-     * The end result must be a buffer with at least 16 bytes of remaining space.
-     * 
-     * @param pos
-     * @param minPos
-     * @param remaining
-     * @throws IOException
-     */
-    protected void compactAndFill(byte[] buf, int pos, int minPos, int remaining)
-        throws IOException {
-      System.arraycopy(buf, pos, buf, minPos, remaining);
-      ba.setPos(minPos);
-      int newLimit = remaining
-          + tryReadRaw(buf, minPos + remaining, buf.length - remaining);
-      ba.setLimit(newLimit);
-    }
-
-    @Override
-    public int read(byte[] b, int off, int len) throws IOException {
-      int lim = ba.getLim();
-      int pos = ba.getPos();
-      byte[] buf = ba.getBuf();
-      int remaining = (lim - pos);
-      if (remaining >= len) {
-        System.arraycopy(buf, pos, b, off, len);
-        pos = pos + len;
-        ba.setPos(pos);
-        return len;
-      } else {
-        // flush buffer to array
-        System.arraycopy(buf, pos, b, off, remaining);
-        pos = pos + remaining;
-        ba.setPos(pos);
-        // get the rest from the stream (skip array)
-        int inputRead = remaining
-            + tryReadRaw(b, off + remaining, len - remaining);
-        if (inputRead == 0) {
-          return -1;
-        } else {
-          return inputRead;
-        }
-      }
-    }
-
-    @Override
-    public long skip(long n) throws IOException {
-      int lim = ba.getLim();
-      int pos = ba.getPos();
-      int remaining = lim - pos;
-      if (remaining > n) {
-        pos += n;
-        ba.setPos(pos);
-        return n;
-      } else {
-        pos = lim;
-        ba.setPos(pos);
-        long skipped = trySkipBytes(n - remaining);
-        return skipped + remaining;
-      }
-    }
-
-    /**
-     * returns the number of bytes remaining that this BinaryDecoder has
-     * buffered from its source
-     */
-    @Override
-    public int available() throws IOException {
-      return (ba.getLim() - ba.getPos());
-    }
-  }
-
-  private static class InputStreamByteSource extends ByteSource {
-    private InputStream in;
-    protected boolean isEof = false;
-
-    private InputStreamByteSource(InputStream in) {
-      super();
-      this.in = in;
-    }
-
-    @Override
-    protected void skipSourceBytes(long length) throws IOException {
-      boolean readZero = false;
-      while (length > 0) {
-        long n = in.skip(length);
-        if (n > 0) {
-          length -= n;
-          continue;
-        }
-        // The InputStream contract is weak: skip() returning zero
-        // "might" mean EOF. Check for two zero results in a row;
-        // otherwise we could loop forever waiting for -1 from some
-        // implementations, while others spuriously return 0 without EOF.
-        if (n == 0) {
-          if (readZero) {
-            isEof = true;
-            throw new EOFException();
-          }
-          readZero = true;
-          continue;
-        }
-        // read negative
-        isEof = true;
-        throw new EOFException();
-      }
-    }
-
-    @Override
-    protected long trySkipBytes(long length) throws IOException {
-      long leftToSkip = length;
-      try {
-        boolean readZero = false;
-        while (leftToSkip > 0) {
-          long n = in.skip(leftToSkip);
-          if (n > 0) {
-            leftToSkip -= n;
-            continue;
-          }
-          // The InputStream contract is weak: skip() returning zero
-          // "might" mean EOF. Check for two zero results in a row;
-          // otherwise we could loop forever waiting for -1 from some
-          // implementations, while others spuriously return 0 without EOF.
-          if (n == 0) {
-            if (readZero) {
-              isEof = true;
-              break;
-            }
-            readZero = true;
-            continue;
-          }
-          // read negative
-          isEof = true;
-          break;
-
-        }
-      } catch (EOFException eof) {
-        isEof = true;
-      }
-      return length - leftToSkip;
-    }
-
-    @Override
-    protected void readRaw(byte[] data, int off, int len) throws IOException {
-      while (len > 0) {
-        int read = in.read(data, off, len);
-        if (read < 0) {
-          isEof = true;
-          throw new EOFException();
-        }
-        len -= read;
-        off += read;
-      }
-    }
-
-    @Override
-    protected int tryReadRaw(byte[] data, int off, int len) throws IOException {
-      int leftToCopy = len;
-      try {
-        while (leftToCopy > 0) {
-          int read = in.read(data, off, leftToCopy);
-          if (read < 0) {
-            isEof = true;
-            break;
-          }
-          leftToCopy -= read;
-          off += read;
-        }
-      } catch (EOFException eof) {
-        isEof = true;
-      }
-      return len - leftToCopy;
-    }
-
-    @Override
-    public int read() throws IOException {
-      if (ba.getLim() - ba.getPos() == 0) {
-        return in.read();
-      } else {
-        int position = ba.getPos();
-        int result = ba.getBuf()[position] & 0xff;
-        ba.setPos(position + 1);
-        return result;
-      }
-    }
-
-    @Override
-    public boolean isEof() {
-      return isEof;
-    }
-
-    @Override
-    public void close() throws IOException {
-      in.close();
-    }
-  }
-
-  /**
-   * This byte source is special. It will avoid copying data by using the
-   * source's byte[] as a buffer in the decoder.
-   * 
-   */
-  private static class ByteArrayByteSource extends ByteSource {
-    private byte[] data;
-    private int position;
-    private int max;
-    private boolean compacted = false;
-
-    private ByteArrayByteSource(byte[] data, int start, int len) {
-      super();
-      // make sure data is not too small, otherwise getLong may try to
-      // read 10 bytes and get an index out of bounds.
-      if (data.length < 16 || len < 16) {
-        this.data = new byte[16];
-        System.arraycopy(data, start, this.data, 0, len);
-        this.position = 0;
-        this.max = len;
-      } else {
-        // use the array passed in
-        this.data = data;
-        this.position = start;
-        this.max = start + len;
-      }
-    }
-
-    @Override
-    protected void attach(int bufferSize, BinaryDecoder decoder) {
-      // buffer size is not used here, the byte[] source is the buffer.
-      decoder.buf = this.data;
-      decoder.pos = this.position;
-      decoder.minPos = this.position;
-      decoder.limit = this.max;
-      this.ba = new BufferAccessor(decoder);
-      return;
-    }
-
-    @Override
-    protected void skipSourceBytes(long length) throws IOException {
-      long skipped = trySkipBytes(length);
-      if (skipped < length) {
-        throw new EOFException();
-      }
-    }
-
-    @Override
-    protected long trySkipBytes(long length) throws IOException {
-      // the byte[] source doubles as the decoder's buffer; skip within it
-      max = ba.getLim();
-      position = ba.getPos();
-      long remaining = max - position;
-      if (remaining >= length) {
-        position += length;
-        ba.setPos(position);
-        return length;
-      } else {
-        position += remaining;
-        ba.setPos(position);
-        return remaining;
-      }
-    }
-
-    @Override
-    protected void readRaw(byte[] data, int off, int len) throws IOException {
-      int read = tryReadRaw(data, off, len);
-      if (read < len) {
-        throw new EOFException();
-      }
-    }
-
-    @Override
-    protected int tryReadRaw(byte[] data, int off, int len) throws IOException {
-      // the buffer is shared, nothing to read
-      return 0;
-    }
-
-    @Override
-    protected void compactAndFill(byte[] buf, int pos, int minPos, int remaining)
-        throws IOException {
-      // this implementation does not want to mutate the array passed in,
-      // so it makes a new tiny buffer unless it has been compacted once before
-      if (!compacted) {
-        // assumes ensureBounds is never called with a size more than 16
-        byte[] tinybuf = new byte[remaining + 16];
-        System.arraycopy(buf, pos, tinybuf, 0, remaining);
-        ba.setBuf(tinybuf, 0, remaining);
-        compacted = true;
-      }
-    }
-
-    @Override
-    public int read() throws IOException {
-      max = ba.getLim();
-      position = ba.getPos();
-      if (position >= max) {
-        return -1;
-      } else {
-        int result = ba.getBuf()[position++] & 0xff;
-        ba.setPos(position);
-        return result;
-      }
-    }
-
-    @Override
-    public void close() throws IOException {
-      ba.setPos(ba.getLim()); // effectively set isEof to true
-    }
-
-    @Override
-    public boolean isEof() {
-      int remaining = ba.getLim() - ba.getPos();
-      return (remaining == 0);
-    }
-  }
-}
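The ByteSource contract above exists because a plain InputStream.read may legitimately return fewer bytes than requested, forcing callers to loop and check two different end conditions (-1 and EOFException). A minimal standalone sketch of the two read guarantees follows; the helper names readFully and tryReadFully are ours, mirroring what readRaw and tryReadRaw promise, not Avro's API.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyDemo {
  // Mirror of tryReadRaw's guarantee: read as much as possible,
  // return the count actually read, never throw EOFException.
  static int tryReadFully(InputStream in, byte[] data, int off, int len)
      throws IOException {
    int left = len;
    while (left > 0) {
      int read = in.read(data, off, left);
      if (read < 0) break; // end of stream
      left -= read;
      off += read;
    }
    return len - left;
  }

  // Mirror of readRaw's guarantee: fill exactly len bytes or throw.
  static void readFully(InputStream in, byte[] data, int off, int len)
      throws IOException {
    if (tryReadFully(in, data, off, len) < len) throw new EOFException();
  }

  public static void main(String[] args) throws IOException {
    byte[] src = { 1, 2, 3, 4, 5 };
    byte[] dst = new byte[5];
    readFully(new ByteArrayInputStream(src), dst, 0, 5);
    // Asking for more than is available reports a short count, no exception.
    int n = tryReadFully(new ByteArrayInputStream(src), new byte[8], 0, 8);
    System.out.println(n); // 5
  }
}
```

With these stronger guarantees, client code has exactly one termination check per call instead of guarding every read against both a short count and a spurious sentinel.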
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryEncoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryEncoder.java
deleted file mode 100644
index a3cadeb..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/BinaryEncoder.java
+++ /dev/null
@@ -1,253 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-import java.io.OutputStream;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.ipc.ByteBufferOutputStream;
-import org.apache.avro.util.Utf8;
-
-/**
- * Low-level support for serializing Avro values.
- *
- * This class has two types of methods.  One type of method supports
- * the writing of leaf values (for example, {@link #writeLong} and
- * {@link #writeString}).  These methods have analogs in {@link
- * Decoder}.
- *
- * The other type supports the writing of maps and arrays.
- * These methods are {@link #writeArrayStart}, {@link
- * #startItem}, and {@link #writeArrayEnd} (and similar methods for
- * maps).  Some implementations of {@link Encoder} handle the
- * buffering required to break large maps and arrays into blocks,
- * which is necessary for applications that want to do streaming.
- * (See {@link #writeArrayStart} for details on these methods.)
- *
- *  @see Decoder
- */
-public class BinaryEncoder extends Encoder {
-  protected OutputStream out;
-  
-  private interface ByteWriter {
-    void write(ByteBuffer bytes) throws IOException;
-  }
-  
-  private static final class SimpleByteWriter implements ByteWriter {
-    private final OutputStream out;
-
-    public SimpleByteWriter(OutputStream out) {
-      this.out = out;
-    }
-
-    @Override
-    public void write(ByteBuffer bytes) throws IOException {
-      encodeLong(bytes.remaining(), out);
-      out.write(bytes.array(), bytes.position(), bytes.remaining());
-    }
-  }
-
-  private static final class ReuseByteWriter implements ByteWriter {
-    private final ByteBufferOutputStream bbout;
-    public ReuseByteWriter(ByteBufferOutputStream bbout) {
-      this.bbout = bbout;
-    }
-
-    @Override
-    public void write(ByteBuffer bytes) throws IOException {
-      encodeLong(bytes.remaining(), bbout);
-      bbout.writeBuffer(bytes);
-    }
-  }
-  
-  private final ByteWriter byteWriter;
-
-  /** Create a writer that sends its output to the underlying stream
-   *  <code>out</code>. */
-  public BinaryEncoder(OutputStream out) {
-    this.out = out;
-    this.byteWriter = (out instanceof ByteBufferOutputStream) ?
-        new ReuseByteWriter((ByteBufferOutputStream) out) :
-          new SimpleByteWriter(out);
-  }
-
-  @Override
-  public void init(OutputStream out) throws IOException {
-    flush();
-    this.out = out;
-  }
-
-  @Override
-  public void flush() throws IOException {
-    if (out != null) {
-      out.flush();
-    }
-  }
-
-  @Override
-  public void writeNull() throws IOException { }
-  
-  @Override
-  public void writeBoolean(boolean b) throws IOException {
-    out.write(b ? 1 : 0);
-  }
-
-  @Override
-  public void writeInt(int n) throws IOException {
-    encodeLong(n, out);
-  }
-
-  @Override
-  public void writeLong(long n) throws IOException {
-    encodeLong(n, out);
-  }
-  
-  @Override
-  public void writeFloat(float f) throws IOException {
-    encodeFloat(f, out);
-  }
-
-  @Override
-  public void writeDouble(double d) throws IOException {
-    encodeDouble(d, out);
-  }
-
-  @Override
-  public void writeString(Utf8 utf8) throws IOException {
-    encodeLong(utf8.getLength(), out);
-    out.write(utf8.getBytes(), 0, utf8.getLength());
-  }
-
-  @Override
-  public void writeBytes(ByteBuffer bytes) throws IOException {
-    byteWriter.write(bytes);
-  }
-  
-  @Override
-  public void writeBytes(byte[] bytes, int start, int len) throws IOException {
-    encodeLong(len, out);
-    out.write(bytes, start, len);
-  }
-  
-  @Override
-  public void writeFixed(byte[] bytes, int start, int len) throws IOException {
-    out.write(bytes, start, len);
-  }
-
-  @Override
-  public void writeEnum(int e) throws IOException {
-    encodeLong(e, out);
-  }
-
-  @Override
-  public void writeArrayStart() throws IOException {
-  }
-
-  @Override
-  public void setItemCount(long itemCount) throws IOException {
-    if (itemCount > 0) {
-      writeLong(itemCount);
-    }
-  }
-  
-  @Override
-  public void startItem() throws IOException {
-  }
-
-  @Override
-  public void writeArrayEnd() throws IOException {
-    encodeLong(0, out);
-  }
-
-  @Override
-  public void writeMapStart() throws IOException {
-  }
-
-  @Override
-  public void writeMapEnd() throws IOException {
-    encodeLong(0, out);
-  }
-
-  @Override
-  public void writeIndex(int unionIndex) throws IOException {
-    encodeLong(unionIndex, out);
-  }
-  
-  protected static void encodeLong(long n, OutputStream o) throws IOException {
-    n = (n << 1) ^ (n >> 63); // move sign to low-order bit
-    while ((n & ~0x7F) != 0) {
-      o.write((byte)((n & 0x7f) | 0x80));
-      n >>>= 7;
-    }
-    o.write((byte)n);
-  }
-
-  protected static int encodeLong(long n, byte[] b, int pos) {
-    n = (n << 1) ^ (n >> 63); // move sign to low-order bit
-    while ((n & ~0x7F) != 0) {
-      b[pos++] = (byte)((n & 0x7f) | 0x80);
-      n >>>= 7;
-    }
-    b[pos++] = (byte) n;
-    return pos;
-  }
-
-  protected static void encodeFloat(float f, OutputStream o) throws IOException {
-    int bits = Float.floatToRawIntBits(f);
-    o.write((int)(bits      ) & 0xFF);
-    o.write((int)(bits >>  8) & 0xFF);
-    o.write((int)(bits >> 16) & 0xFF);
-    o.write((int)(bits >> 24) & 0xFF);
-  }
-
-  protected static int encodeFloat(float f, byte[] b, int pos) {
-    int bits = Float.floatToRawIntBits(f);
-    b[pos++] = (byte)((bits      ) & 0xFF);
-    b[pos++] = (byte)((bits >>  8) & 0xFF);
-    b[pos++] = (byte)((bits >> 16) & 0xFF);
-    b[pos++] = (byte)((bits >> 24) & 0xFF);
-    return pos;
-  }
-
-  protected static void encodeDouble(double d, OutputStream o) throws IOException {
-    long bits = Double.doubleToRawLongBits(d);
-    o.write((int)(bits      ) & 0xFF);
-    o.write((int)(bits >>  8) & 0xFF);
-    o.write((int)(bits >> 16) & 0xFF);
-    o.write((int)(bits >> 24) & 0xFF);
-    o.write((int)(bits >> 32) & 0xFF);
-    o.write((int)(bits >> 40) & 0xFF);
-    o.write((int)(bits >> 48) & 0xFF);
-    o.write((int)(bits >> 56) & 0xFF);
-  }
-
-  protected static int encodeDouble(double d, byte[] b, int pos) {
-    long bits = Double.doubleToRawLongBits(d);
-    b[pos++] = (byte)((bits      ) & 0xFF);
-    b[pos++] = (byte)((bits >>  8) & 0xFF);
-    b[pos++] = (byte)((bits >> 16) & 0xFF);
-    b[pos++] = (byte)((bits >> 24) & 0xFF);
-    b[pos++] = (byte)((bits >> 32) & 0xFF);
-    b[pos++] = (byte)((bits >> 40) & 0xFF);
-    b[pos++] = (byte)((bits >> 48) & 0xFF);
-    b[pos++] = (byte)((bits >> 56) & 0xFF);
-    return pos;
-  }
-}
-
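The encodeLong routines above implement Avro's zig-zag variable-length integer format: the sign bit is folded into the low-order bit so small magnitudes of either sign encode in few bytes, then the value is emitted seven bits at a time, least-significant group first, with the high bit of each byte marking "more bytes follow". A round-trip sketch of the scheme; the helper names (zigZagEncode, writeVarLong, readVarLong) are ours for illustration, not Avro's API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ZigZagDemo {
  // Map signed to unsigned so small magnitudes stay small:
  // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
  static long zigZagEncode(long n) {
    return (n << 1) ^ (n >> 63);
  }

  static long zigZagDecode(long n) {
    return (n >>> 1) ^ -(n & 1);
  }

  // Emit 7 bits per byte, least-significant group first; the high
  // bit of each byte flags that more bytes follow.
  static void writeVarLong(long n, ByteArrayOutputStream out) {
    n = zigZagEncode(n);
    while ((n & ~0x7FL) != 0) {
      out.write((int) ((n & 0x7F) | 0x80));
      n >>>= 7;
    }
    out.write((int) n);
  }

  static long readVarLong(ByteArrayInputStream in) throws IOException {
    long n = 0;
    int shift = 0;
    int b;
    do {
      b = in.read();
      if (b < 0) throw new IOException("unexpected EOF");
      n |= (long) (b & 0x7F) << shift;
      shift += 7;
    } while ((b & 0x80) != 0);
    return zigZagDecode(n);
  }

  public static void main(String[] args) throws IOException {
    long[] samples = { 0, 1, -1, 64, -64, Long.MAX_VALUE, Long.MIN_VALUE };
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (long v : samples) writeVarLong(v, out);
    ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray());
    for (long v : samples) {
      if (readVarLong(in) != v) throw new AssertionError("round-trip failed for " + v);
    }
    System.out.println("ok");
  }
}
```

For example, 1 zig-zags to 2 and encodes as the single byte 0x02, while -1 zig-zags to 1 and encodes as 0x01; a long never needs more than 10 bytes.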
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/BlockingBinaryEncoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/BlockingBinaryEncoder.java
deleted file mode 100644
index 1a58ee1..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/BlockingBinaryEncoder.java
+++ /dev/null
@@ -1,621 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.io.OutputStream;
-import java.nio.ByteBuffer;
-import java.util.Arrays;
-
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.util.Utf8;
-
-/** A {@link Encoder} that writes large arrays and maps as a sequence of
- * blocks.  So long as individual primitive values fit in memory, arbitrarily
- * long arrays and maps may be written and subsequently read without exhausting
- * memory.  Values are buffered until the specified block size would be
- * exceeded, minimizing block overhead.
- * @see Encoder
- */
-public class BlockingBinaryEncoder extends BinaryEncoder {
-
- /* Implementation note:
-  *
-  * Blocking is complicated because of nesting.  If a large, nested
-  * value overflows your buffer, you've got to do a lot of dancing
-  * around to output the blocks correctly.
-  *
-  * To handle this complexity, this class keeps a stack of blocked
-  * values: each time a new block is started (e.g., by a call to
-  * {@link #writeArrayStart}), an entry is pushed onto this stack.
-  *
-  * In this stack, we keep track of the state of a block.  Blocks can
-  * be in two states.  "Regular" blocks have a non-zero byte count.
-  * "Overflow" blocks help us deal with the case where a block
-  * contains a value that's too big to buffer.  In this case, the
-  * block contains only one item, and we give it an unknown
-  * byte-count.  Because these values (1,unknown) are fixed, we're
-  * able to write the header for these overflow blocks to the
-  * underlying stream without seeing the entire block.  After writing
-  * this header, we've freed our buffer space to be fully devoted to
-  * blocking the large, inner value.
-  */
-
-  private static class BlockedValue {
-    public enum State {
-      /**
-       * The bottom element of our stack represents being _outside_
-       * of a blocked value.
-       */
-      ROOT,
-
-      /**
-       * Represents the "regular" case, i.e., a blocked-value whose
-       * current block is fully contained in the buffer.  In this
-       * case, {@link BlockedValue#start} points to the start of the
-       * block's _data_ -- but no room has been left for a header!
-       * When this block is terminated, its data will have to be
-       * moved over a bit to make room for the header. */
-      REGULAR,
-
-      /**
-       * Represents a blocked-value whose current block is in the
-       * overflow state.  In this case, {@link BlockedValue#start} is zero. The
-       * header for such a block has _already been written_ (we've
-       * written out a header indicating that the block has a single
-       * item, and we put a "zero" down for the byte-count to indicate
-       * that we don't know the physical length of the buffer).  Any blocks
-       * _containing_ this block must be in the {@link #OVERFLOW}
-       * state. */
-      OVERFLOW
-    };
-
-    /** The type of this blocked value (ARRAY or MAP). */
-    public Schema.Type type;
-
-    /** The state of this BlockedValue */
-    public State state;
-    
-    /** The location in the buffer where this blocked value starts */
-    public int start;
-
-    /**
-     * The index one past the last byte for the previous item. If this
-     * is the first item, this is same as {@link #start}.
-     */
-    public int lastFullItem;
-    
-    /**
-     * Number of items in this blocked value that are stored
-     * in the buffer.
-     */
-    public int items;
-
-    /** Number of items left to write. */
-    public long itemsLeftToWrite;
-
-    /** Create a ROOT instance. */
-    public BlockedValue() {
-      this.type = null;
-      this.state = BlockedValue.State.ROOT;
-      this.start = this.lastFullItem = 0;
-      this.items = 1; // Makes various assertions work out
-    }
-    
-    /**
-     * Check invariants of <code>this</code> and also the
-     * <code>BlockedValue</code> containing <code>this</code>.
-     */
-    public boolean check(BlockedValue prev, int pos) {
-      assert state != State.ROOT || type == null;
-      assert (state == State.ROOT ||
-              type == Schema.Type.ARRAY || type == Schema.Type.MAP);
-
-      assert 0 <= items;
-      assert 0 != items || start == pos;         // 0==itms ==> start==pos
-      assert 1 < items || start == lastFullItem; // itms<=1 ==> start==lFI
-      assert items <= 1 || start <= lastFullItem; // 1<itms ==> start<=lFI
-      assert lastFullItem <= pos;
-
-      switch (state) {
-      case ROOT:
-          assert start == 0;
-          assert prev == null;
-          break;
-      case REGULAR:
-          assert start >= 0;
-          assert prev.lastFullItem <= start;
-          assert 1 <= prev.items;
-          break;
-      case OVERFLOW:
-          assert start == 0;
-          assert items == 1;
-          assert prev.state == State.ROOT || prev.state == State.OVERFLOW;
-          break;
-      }
-      return false;
-    }
-  }
-
-  /**
-   * The buffer to hold the bytes before being written into the underlying
-   * stream.
-   */
-  private byte[] buf;
-  
-  /**
-   * Index into the location in {@link #buf}, where next byte can be written.
-   */
-  private int pos;
-  
-  /**
-   * The state stack.
-   */
-  private BlockedValue[] blockStack;
-  private int stackTop = -1;
-  private static final int STACK_STEP = 10;
-
-  private static final class EncoderBuffer extends ByteArrayOutputStream {
-    public byte[] buffer() {
-      return buf;
-    }
-    
-    public int length() {
-      return count;
-    }
-  }
-  
-  private EncoderBuffer encoderBuffer = new EncoderBuffer();
-
-  private boolean check() {
-    assert out != null;
-    assert buf != null;
-    assert MIN_BUFFER_SIZE <= buf.length;
-    assert 0 <= pos;
-    assert pos <= buf.length : pos + " " + buf.length;
-
-    assert blockStack != null;
-    BlockedValue prev = null;
-    for (int i = 0; i <= stackTop; i++) {
-      BlockedValue v = blockStack[i];
-      v.check(prev, pos);
-      prev = v;
-    }
-    return true;
-  }
-
-  private static final int DEFAULT_BUFFER_SIZE = 64 * 1024;
-  private static final int MIN_BUFFER_SIZE = 64;
-
-  public BlockingBinaryEncoder(OutputStream out) {
-    this(out, DEFAULT_BUFFER_SIZE);
-  }
-
-  public BlockingBinaryEncoder(OutputStream out, int bufferSize) {
-    super(out);
-    if (bufferSize < MIN_BUFFER_SIZE) {
-      throw new IllegalArgumentException("Buffer size too small.");
-    }
-    this.buf = new byte[bufferSize];
-    this.pos = 0;
-    blockStack = new BlockedValue[0];
-    expandStack();
-    BlockedValue bv = blockStack[++stackTop];
-    bv.type = null;
-    bv.state = BlockedValue.State.ROOT;
-    bv.start = bv.lastFullItem = 0;
-    bv.items = 1;
-
-    assert check();
-  }
-
-  private void expandStack() {
-    int oldLength = blockStack.length;
-    blockStack = Arrays.copyOf(blockStack,
-        blockStack.length + STACK_STEP);
-    for (int i = oldLength; i < blockStack.length; i++) {
-      blockStack[i] = new BlockedValue();
-    }
-  }
-
-  /** Redirect output (and reset the parser state if we're checking). */
-  @Override
-  public void init(OutputStream out) throws IOException {
-    super.init(out);
-    this.pos = 0;
-    this.stackTop = 0;
-
-    assert check();
-  }
-
-  @Override
-  public void flush() throws IOException {
-    if (out != null) {
-      BlockedValue bv = blockStack[stackTop];
-      if (bv.state == BlockedValue.State.ROOT) {
-        out.write(buf, 0, pos);
-        pos = 0;
-      } else {
-        while (bv.state != BlockedValue.State.OVERFLOW) {
-          compact();
-        }
-      }
-      out.flush();
-    }
-
-    assert check();
-  }
-
-  @Override
-  public void writeBoolean(boolean b) throws IOException {
-    if (buf.length < (pos + 1)) ensure(1);
-    buf[pos++] = (byte)(b ? 1 : 0);
-
-    assert check();
-  }
-
-  @Override
-  public void writeInt(int n) throws IOException {
-    if (pos + 5 > buf.length) {
-      ensure(5);
-    }
-    pos = encodeLong(n, buf, pos);
-
-    assert check();
-  }
-
-  @Override
-  public void writeLong(long n) throws IOException {
-    if (pos + 10 > buf.length) {
-      ensure(10);
-    }
-    pos = encodeLong(n, buf, pos);
-
-    assert check();
-  }
-    
-  @Override
-  public void writeFloat(float f) throws IOException {
-    if (pos + 4 > buf.length) {
-      ensure(4);
-    }
-    pos = encodeFloat(f, buf, pos);
-
-    assert check();
-  }
-
-  @Override
-  public void writeDouble(double d) throws IOException {
-    if (pos + 8 > buf.length) {
-      ensure(8);
-    }
-    pos = encodeDouble(d, buf, pos);
-
-    assert check();
-  }
-
-  @Override
-  public void writeString(Utf8 utf8) throws IOException {
-    writeBytes(utf8.getBytes(), 0, utf8.getLength());
-
-    assert check();
-  }
-
-  @Override
-  public void writeBytes(ByteBuffer bytes) throws IOException {
-    writeBytes(bytes.array(), bytes.position(), bytes.remaining());
-
-    assert check();
-  }
-  
-  @Override
-  public void writeFixed(byte[] bytes, int start, int len) throws IOException {
-    doWriteBytes(bytes, start, len);
-
-    assert check();
-  }
-  
-  @Override
-  public void writeEnum(int e) throws IOException {
-    writeInt(e);
-  }
-
-  @Override
-  public void writeBytes(byte[] bytes, int start, int len) throws IOException {
-    if (pos + 5 > buf.length) {
-      ensure(5);
-    }
-    pos = encodeLong(len, buf, pos);
-    doWriteBytes(bytes, start, len);
-
-    assert check();
-  }
-
-  @Override
-  public void writeArrayStart() throws IOException {
-    if (stackTop + 1 == blockStack.length) {
-      expandStack();
-    }
-
-    BlockedValue bv = blockStack[++stackTop];
-    bv.type = Schema.Type.ARRAY;
-    bv.state = BlockedValue.State.REGULAR;
-    bv.start = bv.lastFullItem = pos;
-    bv.items = 0;
-
-    assert check();
-  }
-
-  @Override
-  public void setItemCount(long itemCount) throws IOException {
-    BlockedValue v = blockStack[stackTop];
-    assert v.type == Schema.Type.ARRAY || v.type == Schema.Type.MAP;
-    assert v.itemsLeftToWrite == 0;
-    v.itemsLeftToWrite = itemCount;
-
-    assert check();
-  }
-  
-  @Override
-  public void startItem() throws IOException {
-    if (blockStack[stackTop].state == BlockedValue.State.OVERFLOW) {
-      finishOverflow();
-    }
-    BlockedValue t = blockStack[stackTop];
-    t.items++;
-    t.lastFullItem = pos;
-    t.itemsLeftToWrite--;
-
-    assert check();
-  }
-
-  @Override
-  public void writeArrayEnd() throws IOException {
-    BlockedValue top = blockStack[stackTop];
-    if (top.type != Schema.Type.ARRAY) {
-      throw new AvroTypeException("Called writeArrayEnd outside of an array.");
-    }
-    if (top.itemsLeftToWrite != 0) {
-      throw new AvroTypeException("Failed to write expected number of array elements.");
-    }
-    endBlockedValue();
-
-    assert check();
-  }
-
-  @Override
-  public void writeMapStart() throws IOException {
-    if (stackTop + 1 == blockStack.length) {
-      expandStack();
-    }
-
-    BlockedValue bv = blockStack[++stackTop];
-    bv.type = Schema.Type.MAP;
-    bv.state = BlockedValue.State.REGULAR;
-    bv.start = bv.lastFullItem = pos;
-    bv.items = 0;
-
-    assert check();
-  }
-
-  @Override
-  public void writeMapEnd() throws IOException {
-    BlockedValue top = blockStack[stackTop];
-    if (top.type != Schema.Type.MAP) {
-      throw new AvroTypeException("Called writeMapEnd outside of a map.");
-    }
-    if (top.itemsLeftToWrite != 0) {
-      throw new AvroTypeException("Failed to write expected number of map entries.");
-    }
-    endBlockedValue();
-    
-    assert check();
-  }
-
-  @Override
-  public void writeIndex(int unionIndex) throws IOException {
-    if (pos + 5 > buf.length) {
-      ensure(5);
-    }
-    pos = encodeLong(unionIndex, buf, pos);
-
-    assert check();
-  }
-
-  private void endBlockedValue() throws IOException {
-    for (; ;) {
-      assert check();
-      BlockedValue t = blockStack[stackTop];
-      assert t.state != BlockedValue.State.ROOT;
-      if (t.state == BlockedValue.State.OVERFLOW) {
-        finishOverflow();
-      }
-      assert t.state == BlockedValue.State.REGULAR;
-      if (0 < t.items) {
-        int byteCount = pos - t.start;
-        if (t.start == 0 &&
-          blockStack[stackTop - 1].state
-            != BlockedValue.State.REGULAR) { // Lucky us -- don't have to move
-          encodeLong(-t.items, out);
-          encodeLong(byteCount, out);
-        } else {
-          encodeLong(-t.items, encoderBuffer);
-          encodeLong(byteCount, encoderBuffer);
-          final int headerSize = encoderBuffer.length();
-          if (buf.length >= pos + headerSize) {
-            pos += headerSize;
-            final int m = t.start;
-            System.arraycopy(buf, m, buf, m + headerSize, byteCount);
-            System.arraycopy(encoderBuffer.buffer(), 0, buf, m, headerSize);
-            encoderBuffer.reset();
-          } else {
-            encoderBuffer.reset();
-            compact();
-            continue;
-          }
-        }
-      }
-      stackTop--;
-      if (buf.length < (pos + 1)) ensure(1);
-      buf[pos++] = 0;   // Sentinel for last block in a blocked value
-      assert check();
-      if (blockStack[stackTop].state == BlockedValue.State.ROOT) {
-        flush();
-      }
-      return;
-    }
-  }
-
-  /**
-   * Called when we've finished writing the last item in an overflow
-   * buffer.  When this is finished, the top of the stack will be
-   * an empty block in the "regular" state.
-   * @throws IOException
-   */
-  private void finishOverflow() throws IOException {
-    BlockedValue s = blockStack[stackTop];
-    if (s.state != BlockedValue.State.OVERFLOW) {
-      throw new IllegalStateException("Not an overflow block");
-    }
-    assert check();
-
-    // Flush any remaining data for this block
-    out.write(buf, 0, pos);
-    pos = 0;
-
-    // Reset top of stack to be in REGULAR mode
-    s.state = BlockedValue.State.REGULAR;
-    s.start = s.lastFullItem = 0;
-    s.items = 0;
-    assert check();
-  }
-
-  private void ensure(int l) throws IOException {
-    if (buf.length < l) {
-      throw new IllegalArgumentException("Too big: " + l);
-    }
-    while (buf.length < (pos + l)) {
-      if (blockStack[stackTop].state == BlockedValue.State.REGULAR) {
-        compact();
-      } else {
-        out.write(buf, 0, pos);
-        pos = 0;
-      }
-    }
-  }
-
-  private void doWriteBytes(byte[] bytes, int start, int len)
-    throws IOException {
-    if (len < buf.length) {
-      ensure(len);
-      System.arraycopy(bytes, start, buf, pos, len);
-      pos += len;
-    } else {
-      ensure(buf.length);
-      assert blockStack[stackTop].state == BlockedValue.State.ROOT ||
-        blockStack[stackTop].state == BlockedValue.State.OVERFLOW;
-      write(bytes, start, len);
-    }
-    assert check();
-  }
-
-  private void write(byte[] b, int off, int len) throws IOException {
-    if (blockStack[stackTop].state == BlockedValue.State.ROOT) {
-      out.write(b, off, len);
-    } else {
-      assert check();
-      while (buf.length < (pos + len)) {
-        if (blockStack[stackTop].state == BlockedValue.State.REGULAR) {
-          compact();
-        } else {
-          out.write(buf, 0, pos);
-          pos = 0;
-          if (buf.length <= len) {
-            out.write(b, off, len);
-            len = 0;
-          }
-        }
-      }
-      System.arraycopy(b, off, buf, pos, len);
-      pos += len;
-      assert check();
-    }
-  }
-
-  /** Only call if there are REGULAR-state values on the stack. */
-  private void compact() throws IOException {
-    assert check();
-
-    // Find first REGULAR-state value
-    BlockedValue s = null;
-    int i;
-    for (i = 1; i <= stackTop; i++) {
-      s = blockStack[i];
-      if (s.state == BlockedValue.State.REGULAR) break;
-    }
-    assert s != null;
-
-    // We're going to transition "s" into the overflow state.  To do
-    // this, we're going to flush any bytes prior to "s", then write
-    // any full items of "s" into a block, start an overflow
-    // block, write any remaining bytes of "s" up to the start of the
-    // next more deeply-nested blocked-value, and finally move over
-    // any remaining bytes (which will be from more deeply-nested
-    // blocked values).
-
-    // Flush any bytes prior to "s"
-    out.write(buf, 0, s.start);
-
-    // Write any full items of "s"
-    if (1 < s.items) {
-      encodeLong(-(s.items - 1), out);
-      encodeLong(s.lastFullItem - s.start, out);
-      out.write(buf, s.start, s.lastFullItem - s.start);
-      s.start = s.lastFullItem;
-      s.items = 1;
-    }
-
-    // Start an overflow block for s
-    encodeLong(1, out);
-
-    // Write any remaining bytes for "s", up to the next-most
-    // deeply-nested value
-    BlockedValue n = ((i + 1) <= stackTop ?
-        blockStack[i + 1] : null);
-    int end = (n == null ? pos : n.start);
-    out.write(buf, s.lastFullItem, end - s.lastFullItem);
-
-    // Move over any bytes that remain (and adjust indices)
-    System.arraycopy(buf, end, buf, 0, pos - end);
-    for (int j = i + 1; j <= stackTop; j++) {
-        n = blockStack[j];
-        n.start -= end;
-        n.lastFullItem -= end;
-    }
-    pos -= end;
-
-    assert s.items == 1;
-    s.start = s.lastFullItem = 0;
-    s.state = BlockedValue.State.OVERFLOW;
-
-    assert check();
-  }
-}
-
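The `encodeLong` calls in `BlockingBinaryEncoder` write Avro's variable-length zig-zag integer encoding into the buffer. A minimal sketch of that encoding is below; the real implementation lives in the parent encoder, so this standalone method is illustrative only, not the actual Avro API:

```java
public class ZigZagVarint {
  /** Writes n into buf at pos using zig-zag varint coding; returns the new position. */
  public static int encodeLong(long n, byte[] buf, int pos) {
    n = (n << 1) ^ (n >> 63);              // zig-zag: small magnitudes get small codes
    while ((n & ~0x7FL) != 0) {            // emit 7 bits per byte, high bit = "more follows"
      buf[pos++] = (byte) ((n & 0x7F) | 0x80);
      n >>>= 7;
    }
    buf[pos++] = (byte) n;                 // final byte has its high bit clear
    return pos;
  }
}
```

This also explains the headroom constants above: a 64-bit value needs at most 10 varint bytes (hence `writeLong` ensures 10) and a 32-bit value at most 5 (hence `writeInt` ensures 5).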
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/DatumReader.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/DatumReader.java
deleted file mode 100644
index 9ac16f9..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/DatumReader.java
+++ /dev/null
@@ -1,38 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-
-import org.apache.avro.Schema;
-
-/** Read data of a schema.
- * <p>Determines the in-memory data representation.
- */
-public interface DatumReader<D> {
-
-  /** Set the schema. */
-  void setSchema(Schema schema);
-
-  /** Read a datum.  Traverse the schema, depth-first, reading all leaf values
-   * in the schema into a datum that is returned.  If the provided datum is
-   * non-null it may be reused and returned. */
-  D read(D reuse, Decoder in) throws IOException;
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/DatumWriter.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/DatumWriter.java
deleted file mode 100644
index c30bf82..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/DatumWriter.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-
-import org.apache.avro.Schema;
-
-/** Write data of a schema.
- * <p>Implemented for different in-memory data representations.
- */
-public interface DatumWriter<D> {
-
-  /** Set the schema. */
-  void setSchema(Schema schema);
-
-  /** Write a datum.  Traverse the schema, depth first, writing each leaf value
-   * in the schema from the datum to the output. */
-  void write(D datum, Encoder out) throws IOException;
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/Decoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/Decoder.java
deleted file mode 100644
index 6a9c43c..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/Decoder.java
+++ /dev/null
@@ -1,287 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.util.Utf8;
-
-/**
- * Low-level support for de-serializing Avro values.
- * <p>
- *  This class has two types of methods.  One type supports
- *  reading leaf values (for example, {@link #readLong} and
- *  {@link #readString}).
- *
- *  The other type supports reading maps and arrays.  These
- *  methods include {@link #readArrayStart}, {@link #arrayNext},
- *  and similar methods for maps.  See {@link #readArrayStart} for
- *  details on these methods.
- *
- *  @see Encoder
- */
-
-public abstract class Decoder {
-  
-  /** Start reading against a different input stream.  Stateful
-    * subclasses will reset their state to its initial value. 
-    * <p/>
-    * A Decoder may read ahead beyond the minimum bytes necessary,
-    * similar to a BufferedInputStream.
-    * In order to interleave reads on this stream's data between
-    * Decoder and another reader, the other reader must use the
-    * buffer-aware input stream returned by the Decoder implementation,
-    * if applicable.
-   * @throws IOException */
-  public abstract void init(InputStream in) throws IOException;
-
-  /**
-   * "Reads" a null value.  (Doesn't actually read anything, but
-   * advances the state of the parser if the implementation is
-   * stateful.)
-   *  @throws AvroTypeException If this is a stateful reader and
-   *          null is not the type of the next value to be read
-   */
-  public abstract void readNull() throws IOException;
-
-  /**
-   * Reads a boolean value written by {@link Encoder#writeBoolean}.
-   * @throws AvroTypeException If this is a stateful reader and
-   * boolean is not the type of the next value to be read
-   */
-
-  public abstract boolean readBoolean() throws IOException;
-
-  /**
-   * Reads an integer written by {@link Encoder#writeInt}.
-   * @throws AvroTypeException If encoded value is larger than
-   *          32-bits
-   * @throws AvroTypeException If this is a stateful reader and
-   *          int is not the type of the next value to be read
-   */
-  public abstract int readInt() throws IOException;
-
-  /**
-   * Reads a long written by {@link Encoder#writeLong}.
-   * @throws AvroTypeException If this is a stateful reader and
-   *          long is not the type of the next value to be read
-   */
-  public abstract long readLong() throws IOException;
-
-  /**
-   * Reads a float written by {@link Encoder#writeFloat}.
-   * @throws AvroTypeException If this is a stateful reader and
- * float is not the type of the next value to be read
-   */
-  public abstract float readFloat() throws IOException;
-
-  /**
-   * Reads a double written by {@link Encoder#writeDouble}.
-   * @throws AvroTypeException If this is a stateful reader and
- *           double is not the type of the next value to be read
-   */
-  public abstract double readDouble() throws IOException;
-    
-  /**
-   * Reads a char-string written by {@link Encoder#writeString}.
-   * @throws AvroTypeException If this is a stateful reader and
-   * char-string is not the type of the next value to be read
-   */
-  public abstract Utf8 readString(Utf8 old) throws IOException;
-    
-  /**
-   * Discards a char-string written by {@link Encoder#writeString}.
-   *  @throws AvroTypeException If this is a stateful reader and
-   *          char-string is not the type of the next value to be read
-   */
-  public abstract void skipString() throws IOException;
-
-  /**
-   * Reads a byte-string written by {@link Encoder#writeBytes}.
- * If <tt>old</tt> is not null and has sufficient capacity to take in
-   * the bytes being read, the bytes are returned in <tt>old</tt>.
-   * @throws AvroTypeException If this is a stateful reader and
-   *          byte-string is not the type of the next value to be read
-   */
-  public abstract ByteBuffer readBytes(ByteBuffer old) throws IOException;
-
-  /**
-   * Discards a byte-string written by {@link Encoder#writeBytes}.
-   *  @throws AvroTypeException If this is a stateful reader and
-   *          byte-string is not the type of the next value to be read
-   */
-  public abstract void skipBytes() throws IOException;
-  
-  /**
-   * Reads fixed sized binary object.
-   * @param bytes The buffer to store the contents being read.
-   * @param start The position where the data needs to be written.
-   * @param length  The size of the binary object.
-   * @throws AvroTypeException If this is a stateful reader and
-   *          fixed sized binary object is not the type of the next
-   *          value to be read or the length is incorrect.
-   * @throws IOException
-   */
-  public abstract void readFixed(byte[] bytes, int start, int length)
-    throws IOException;
-
-  /**
-   * A shorthand for <tt>readFixed(bytes, 0, bytes.length)</tt>.
-   * @throws AvroTypeException If this is a stateful reader and
-   *          fixed sized binary object is not the type of the next
-   *          value to be read or the length is incorrect.
-   * @throws IOException
-   */
-  public void readFixed(byte[] bytes) throws IOException {
-    readFixed(bytes, 0, bytes.length);
-  }
-  
-  /**
-   * Discards fixed sized binary object.
-   * @param length  The size of the binary object to be skipped.
-   * @throws AvroTypeException If this is a stateful reader and
-   *          fixed sized binary object is not the type of the next
-   *          value to be read or the length is incorrect.
-   * @throws IOException
-   */
-  public abstract void skipFixed(int length) throws IOException;
-
-  /**
-   * Reads an enumeration.
-   * @return The enumeration's value.
-   * @throws AvroTypeException If this is a stateful reader and
-   *          enumeration is not the type of the next value to be read.
-   * @throws IOException
-   */
-  public abstract int readEnum() throws IOException;
-  
-  /**
-   * Reads and returns the size of the first block of an array.  If
-   * this method returns non-zero, then the caller should read the
-   * indicated number of items, and then call {@link
-   * #arrayNext} to find out the number of items in the next
-   * block.  The typical pattern for consuming an array looks like:
-   * <pre>
-   *   for(long i = in.readArrayStart(); i != 0; i = in.arrayNext()) {
-   *     for (long j = 0; j < i; j++) {
-   *       read next element of the array;
-   *     }
-   *   }
-   * </pre>
-   *  @throws AvroTypeException If this is a stateful reader and
-   *          array is not the type of the next value to be read */
-  public abstract long readArrayStart() throws IOException;
-
-  /**
- * Processes the next block of an array and returns the number of items
- * in the block, letting the caller read those items.
-   * @throws AvroTypeException When called outside of an
-   *         array context
-   */
-  public abstract long arrayNext() throws IOException;
-
-  /**
-   * Used for quickly skipping through an array.  Note you can
-   * either skip the entire array, or read the entire array (with
-   * {@link #readArrayStart}), but you can't mix the two on the
-   * same array.
-   *
-   * This method will skip through as many items as it can, all of
-   * them if possible.  It will return zero if there are no more
-   * items to skip through, or an item count if it needs the client's
-   * help in skipping.  The typical usage pattern is:
-   * <pre>
- *   for(long i = in.skipArray(); i != 0; i = in.skipArray()) {
-   *     for (long j = 0; j < i; j++) {
-   *       read and discard the next element of the array;
-   *     }
-   *   }
-   * </pre>
-   * Note that this method can automatically skip through items if a
-   * byte-count is found in the underlying data, or if a schema has
-   * been provided to the implementation, but
-   * otherwise the client will have to skip through items itself.
-   *
-   *  @throws AvroTypeException If this is a stateful reader and
-   *          array is not the type of the next value to be read
-   */
-  public abstract long skipArray() throws IOException;
-
-  /**
-   * Reads and returns the size of the next block of map-entries.
-   * Similar to {@link #readArrayStart}.
-   *
-   *  As an example, let's say you want to read a map of records,
- *  the record consisting of an int field and a boolean field.
-   *  Your code would look something like this:
-   * <pre>
-   *   Map<String,Record> m = new HashMap<String,Record>();
-   *   Record reuse = new Record();
- *   for(long i = in.readMapStart(); i != 0; i = in.mapNext()) {
- *     for (long j = 0; j < i; j++) {
- *       String key = in.readString(null).toString();
-   *       reuse.intField = in.readInt();
-   *       reuse.boolField = in.readBoolean();
-   *       m.put(key, reuse);
-   *     }
-   *   }
-   * </pre>
-   * @throws AvroTypeException If this is a stateful reader and
-   *         map is not the type of the next value to be read
-   */
-  public abstract long readMapStart() throws IOException;
-
-  /**
-   * Processes the next block of map entries and returns the count of them.
-   * Similar to {@link #arrayNext}.  See {@link #readMapStart} for details.
-   * @throws AvroTypeException When called outside of a
-   *         map context
-   */
-  public abstract long mapNext() throws IOException;
-
-  /**
-   * Support for quickly skipping through a map similar to {@link #skipArray}.
-   *
-   * As an example, let's say you want to skip a map of records,
- * the record consisting of an int field and a boolean field.
-   * Your code would look something like this:
-   * <pre>
-   *   for(long i = in.skipMap(); i != 0; i = in.skipMap()) {
-   *     for (long j = 0; j < i; j++) {
-   *       in.skipString();  // Discard key
-   *       in.readInt(); // Discard int-field of value
-   *       in.readBoolean(); // Discard boolean-field of value
-   *     }
-   *   }
-   * </pre>
-   *  @throws AvroTypeException If this is a stateful reader and
- *          map is not the type of the next value to be read */
-
-  public abstract long skipMap() throws IOException;
-
-  /**
-   * Reads the tag of a union written by {@link Encoder#writeIndex}.
-   * @throws AvroTypeException If this is a stateful reader and
-   *         union is not the type of the next value to be read
-   */
-  public abstract int readIndex() throws IOException;
-}
-
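The block-at-a-time protocol documented in `readArrayStart`/`arrayNext` can be modeled in miniature. The `Blocks` class below is a hypothetical stand-in for a real `Decoder`, not part of the Avro API; it only demonstrates the documented nested-loop consumption pattern, where each call yields the next block's item count and a zero count terminates the array:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class BlockedArrayDemo {
  /** Hypothetical decoder stand-in: items arrive in blocks, each preceded by its count. */
  static final class Blocks {
    private final Deque<long[]> blocks = new ArrayDeque<>();
    long[] current;
    Blocks(long[]... bs) { for (long[] b : bs) blocks.add(b); }
    long readArrayStart() { return next(); }
    long arrayNext() { return next(); }
    private long next() {
      current = blocks.poll();               // zero means no more blocks
      return current == null ? 0 : current.length;
    }
  }

  /** Consumes every item using the pattern from the readArrayStart javadoc. */
  static List<Long> readAll(Blocks in) {
    List<Long> out = new ArrayList<>();
    for (long i = in.readArrayStart(); i != 0; i = in.arrayNext()) {
      for (long j = 0; j < i; j++) {
        out.add(in.current[(int) j]);        // "read next element of the array"
      }
    }
    return out;
  }
}
```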
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/DecoderFactory.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/DecoderFactory.java
deleted file mode 100644
index 2870631..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/DecoderFactory.java
+++ /dev/null
@@ -1,232 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.InputStream;
-
-/**
- * A factory for creating and configuring {@link Decoder}s.
- * <p/>
- * Factories are thread-safe, and are generally cached by applications for
- * performance reasons. Multiple instances are only required if multiple
- * concurrent configurations are needed.
- * 
- * @see Decoder
- */
-
-public class DecoderFactory {
-  private static final DecoderFactory DEFAULT_FACTORY = new DefaultDecoderFactory();
-  static final int DEFAULT_BUFFER_SIZE = 32 * 1000;
-
-  int binaryDecoderBufferSize = DEFAULT_BUFFER_SIZE;
-  boolean preferDirect = false;
-
-  /** Constructor for factory instances */
-  public DecoderFactory() {
-    super();
-  }
-
-  /**
- * Returns an immutable static DecoderFactory configured with default settings.
-   * All mutating methods throw IllegalArgumentExceptions. All creator methods
-   * create objects with default settings.
-   */
-  public static DecoderFactory defaultFactory() {
-    return DEFAULT_FACTORY;
-  }
-
-  /**
-   * Configures this factory to use the specified buffer size when creating
-   * Decoder instances that buffer their input. The default buffer size is
-   * 32000 bytes.
-   * 
-   * @param size The preferred buffer size. Valid values are in the range [32,
- *          16*1024*1024]. Values outside this range are rounded to the nearest
-   *          value in the range. Values less than 512 or greater than 1024*1024
-   *          are not recommended.
-   * @return This factory, to enable method chaining:
-   * <pre>
- * DecoderFactory myFactory = new DecoderFactory().configureDecoderBufferSize(4096);
-   * </pre>
-   */
-  public DecoderFactory configureDecoderBufferSize(int size) {
-    if (size < 32)
-      size = 32;
-    if (size > 16 * 1024 * 1024)
-      size = 16 * 1024 * 1024;
-    this.binaryDecoderBufferSize = size;
-    return this;
-  }
-  
-  /**
-   * Returns this factory's configured preferred buffer size.  Used when creating
-   * Decoder instances that buffer. See {@link #configureDecoderBufferSize}
-   * @return The preferred buffer size, in bytes.
-   */
-  public int getConfiguredBufferSize() {
-    return this.binaryDecoderBufferSize;
-  }
-  
-  /**
-   * Configures this factory to create "direct" BinaryDecoder instances when applicable.
-   * <p/>
-   * The default is false, since buffering or 'read-ahead' decoders can be 
-   * twice as fast.  In most cases a normal BinaryDecoder is sufficient in combination with
-   * {@link BinaryDecoder#inputStream()} which provides a buffer-aware view on
-   * the data.
-   * <p/>
-   * A "direct" BinaryDecoder does not read ahead from an InputStream or other data source
-   * that cannot be rewound.  From the perspective of a client, a "direct" decoder
-   * must never read beyond the minimum necessary bytes to service a {@link BinaryDecoder}
-   * API read request.  
-   * <p/>
-   * In the case that the performance of a normal BinaryDecoder does not outweigh the
-   * inconvenience of its buffering semantics, a "direct" decoder can be
-   * used.
-   * <p/>
-   * Generally, this distinction only applies to BinaryDecoder that read from an InputStream.
-   * @param useDirect If true, this factory will generate "direct" BinaryDecoder
-   * implementations when applicable. If false (the default) the faster buffering
-   * implementations will be generated.
-   * @return This factory, to enable method chaining:
-   * <pre>
-   * DecoderFactory myFactory = new DecoderFactory().configureDirectDecoder(true);
-   * </pre>
-   */
-  public DecoderFactory configureDirectDecoder(boolean useDirect) {
-    this.preferDirect = useDirect;
-    return this;
-  }
-
-  /**
-   * Creates or reinitializes a {@link BinaryDecoder} with the input stream
-   * provided as the source of data. If <i>reuse</i> is provided, it will be
-   * reinitialized to the given input stream.
-   * <p/>
-   * If this factory is configured to create "direct" BinaryDecoder instances,
-   * this will return a non-buffering variant. Otherwise, this instance will
-   * buffer the number of bytes configured by this factory, reading up to that
-   * many bytes from the InputStream ahead of the minimum required for Decoder
-   * API requests. {@link BinaryDecoder#inputStream()} provides a view on the data
-   * that is buffer-aware, for users that need to access possibly buffered data
-   * outside of the Decoder API.
-   * 
-   * @param in The InputStream to initialize to
- * @param reuse The BinaryDecoder to <i>attempt</i> to reuse given the factory
-   *          configuration. A specific BinaryDecoder implementation may not be
-   *          compatible with reuse. For example, a BinaryDecoder created as
-   *          'direct' can not be reinitialized to function in a non-'direct'
- *          mode. If <i>reuse</i> is null a new instance is always created.
-   * @return A BinaryDecoder that uses <i>in</i> as its source of data. If
-   *         <i>reuse</i> is null, this will be a new instance. If <i>reuse</i>
-   *         is not null, then it may be reinitialized if compatible, otherwise
-   *         a new instance will be returned.
-   *         <p/>
-   *         example:
-   * 
-   *         <pre>
-   * DecoderFactory factory = new DecoderFactory();
-   * Decoder d = factory.createBinaryDecoder(input, null); // a new BinaryDecoder
- * d = factory.createBinaryDecoder(input2, d); // reinitializes d to read from input2
-   * factory.configureDirectDecoder(true);
-   * Decoder d2 = factory.createBinaryDecoder(input3, d); // a new BinaryDecoder
-   * </pre>
-   * 
- *         <i>d2</i> above is not a reused instance of <i>d</i> because the
- *         latter is not 'direct' and can't be reused to create a 'direct'
- *         instance. Users must be careful to use the BinaryDecoder returned
- *         by the factory, and must not assume that the instance passed in as
- *         <i>reuse</i> was reinitialized.
- */
-  public BinaryDecoder createBinaryDecoder(InputStream in, BinaryDecoder reuse) {
-    if (null == reuse) {
-      if (preferDirect) {
-        return new DirectBinaryDecoder(in);
-      } else {
-        return new BinaryDecoder(binaryDecoderBufferSize, in);
-      }
-    } else {
-      if (!preferDirect) {
-        if(reuse.getClass() == BinaryDecoder.class) {
-          reuse.init(binaryDecoderBufferSize, in);
-          return reuse;
-        } else {
-          return new BinaryDecoder(binaryDecoderBufferSize, in);
-        }
-      } else {
-        if (reuse.getClass() == DirectBinaryDecoder.class) {
-          ((DirectBinaryDecoder)reuse).init(in);
-          return reuse;
-        } else {
-          return new DirectBinaryDecoder(in);
-        }
-      }
-    }
-  }
-
-  /**
-   * Creates or reinitializes a {@link BinaryDecoder} with the byte array
-   * provided as the source of data. If <i>reuse</i> is provided, it will
-   * attempt to reinitialize <i>reuse</i> to the new byte array. This instance
-   * will use the provided byte array as its buffer.
-   * {@link BinaryDecoder#inputStream()} provides a view on the data that is
-   * buffer-aware and can provide a view of the data not yet read by Decoder API
-   * methods.
-   * 
-   * @param bytes The byte array to initialize to
-   * @param offset The offset to start reading from
-   * @param length The maximum number of bytes to read from the byte array
-   * @param reuse The BinaryDecoder to attempt to reinitialize. If null, a new
-   *          BinaryDecoder is created.
-   * @return A BinaryDecoder that uses <i>bytes</i> as its source of data. If
-   *         <i>reuse</i> is null, this will be a new instance. <i>reuse</i> may
-   *         be reinitialized if appropriate, otherwise a new instance is
-   *         returned. Clients must not assume that <i>reuse</i> is
-   *         reinitialized and returned.
-   */
-  public BinaryDecoder createBinaryDecoder(byte[] bytes, int offset,
-      int length, BinaryDecoder reuse) {
-    if (null != reuse && reuse.getClass() == BinaryDecoder.class) {
-      reuse.init(bytes, offset, length);
-      return reuse;
-    } else {
-      return new BinaryDecoder(bytes, offset, length);
-    }
-  }
-
-  /**
-   * This method is shorthand for
-   * <pre>
-   * createBinaryDecoder(bytes, 0, bytes.length, reuse);
-   * </pre> {@link #createBinaryDecoder(byte[], int, int, BinaryDecoder)}
-   */
-  public BinaryDecoder createBinaryDecoder(byte[] bytes, BinaryDecoder reuse) {
-    return createBinaryDecoder(bytes, 0, bytes.length, reuse);
-  }
-
-  private static class DefaultDecoderFactory extends DecoderFactory {
-    @Override
-    public DecoderFactory configureDecoderBufferSize(int bufferSize) {
-      throw new IllegalArgumentException("This Factory instance is Immutable");
-    }
-    @Override
-    public DecoderFactory configureDirectDecoder(boolean arg0) {
-      throw new IllegalArgumentException("This Factory instance is Immutable");
-    }
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/DirectBinaryDecoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/DirectBinaryDecoder.java
deleted file mode 100644
index 366d42e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/DirectBinaryDecoder.java
+++ /dev/null
@@ -1,203 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.EOFException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.ipc.ByteBufferInputStream;
-
-/**
- *  A non-buffering version of {@link BinaryDecoder}.
- *  <p/>
- *  This implementation will not read-ahead from the provided InputStream
- *  beyond the minimum required to service the API requests.
- *
- *  @see Encoder
- */
-
-class DirectBinaryDecoder extends BinaryDecoder {
-  private InputStream in;
-
-  private class ByteReader {
-    public ByteBuffer read(ByteBuffer old, int length) throws IOException {
-      ByteBuffer result;
-      if (old != null && length <= old.capacity()) {
-        result = old;
-        result.clear();
-      } else {
-        result = ByteBuffer.allocate(length);
-      }
-      doReadBytes(result.array(), result.position(), length);
-      result.limit(length);
-      return result;
-    }
-  }
-
-  private class ReuseByteReader extends ByteReader {
-    private final ByteBufferInputStream bbi;
-    
-    public ReuseByteReader(ByteBufferInputStream bbi) {
-      this.bbi = bbi;
-    }
-    
-    @Override
-    public ByteBuffer read(ByteBuffer old, int length) throws IOException {
-      if (old != null) {
-        return super.read(old, length);
-      } else {
-        return bbi.readBuffer(length);
-      }
-    }
-    
-  }
-
-  private final ByteReader byteReader;
-
-  DirectBinaryDecoder(InputStream in) {
-    super();
-    this.in = in;
-    byteReader = (in instanceof ByteBufferInputStream) ?
-        new ReuseByteReader((ByteBufferInputStream) in) : new ByteReader();
-  }
-
-  @Override
-  public void init(InputStream in) {
-    this.in = in;
-  }
-
-  @Override
-  public boolean readBoolean() throws IOException {
-    int n = in.read();
-    if (n < 0) {
-      throw new EOFException();
-    }
-    return n == 1;
-  }
-
-  @Override
-  public int readInt() throws IOException {
-    int n = 0;
-    int b;
-    int shift = 0;
-    do {
-      b = in.read();
-      if (b >= 0) {
-         n |= (b & 0x7F) << shift;
-         if ((b & 0x80) == 0) {
-           return (n >>> 1) ^ -(n & 1); // back to two's-complement
-         }
-      } else {
-        throw new EOFException();
-      }
-      shift += 7;
-    } while (shift < 32);
-    throw new IOException("Invalid int encoding");
-    
-  }
-
-  @Override
-  public long readLong() throws IOException {
-    long n = 0;
-    int b;
-    int shift = 0;
-    do { 
-      b = in.read();
-      if (b >= 0) {
-         n |= (b & 0x7FL) << shift;
-         if ((b & 0x80) == 0) {
-           return (n >>> 1) ^ -(n & 1); // back to two's-complement
-         }
-      } else {
-        throw new EOFException();
-      }
-      shift += 7;
-    } while (shift < 64);
-    throw new IOException("Invalid long encoding");
-  }
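The `readInt`/`readLong` loops above decode Avro's variable-length zig-zag integers one byte at a time. As a standalone illustration (not part of the patch; the class name `ZigZagVarint` is hypothetical), the same loop can be exercised against a plain `InputStream`:

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

/** Sketch of the zig-zag varint decoding used by readLong() above. */
public class ZigZagVarint {
    public static long readLong(InputStream in) throws IOException {
        long n = 0;
        int b;
        int shift = 0;
        do {
            b = in.read();
            if (b < 0) throw new EOFException();
            n |= (b & 0x7FL) << shift;       // accumulate 7 payload bits per byte
            if ((b & 0x80) == 0) {
                return (n >>> 1) ^ -(n & 1); // undo zig-zag: back to two's-complement
            }
            shift += 7;
        } while (shift < 64);
        throw new IOException("Invalid long encoding");
    }

    public static void main(String[] args) throws IOException {
        // zig-zag maps 0->0, -1->1, 1->2, -2->3, ... so 2 encodes as the single byte 0x04
        long v = readLong(new ByteArrayInputStream(new byte[]{0x04}));
        if (v != 2) throw new AssertionError(v);
        // 300 zig-zag-encodes to 600, which spans two bytes: 0xD8 0x04
        long w = readLong(new ByteArrayInputStream(new byte[]{(byte) 0xD8, 0x04}));
        if (w != 300) throw new AssertionError(w);
    }
}
```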
-
-  private final byte[] buf = new byte[8];
-
-  @Override
-  public float readFloat() throws IOException {
-    doReadBytes(buf, 0, 4);
-    int n = (((int) buf[0]) & 0xff)
-      |  ((((int) buf[1]) & 0xff) << 8)
-      |  ((((int) buf[2]) & 0xff) << 16)
-      |  ((((int) buf[3]) & 0xff) << 24);
-    return Float.intBitsToFloat(n);
-  }
-
-  @Override
-  public double readDouble() throws IOException {
-    doReadBytes(buf, 0, 8);
-    long n = (((long) buf[0]) & 0xff)
-      |  ((((long) buf[1]) & 0xff) << 8)
-      |  ((((long) buf[2]) & 0xff) << 16)
-      |  ((((long) buf[3]) & 0xff) << 24)
-      |  ((((long) buf[4]) & 0xff) << 32)
-      |  ((((long) buf[5]) & 0xff) << 40)
-      |  ((((long) buf[6]) & 0xff) << 48)
-      |  ((((long) buf[7]) & 0xff) << 56);
-    return Double.longBitsToDouble(n);
-  }
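`readFloat`/`readDouble` above reassemble IEEE 754 bit patterns from little-endian byte order, as Avro's binary encoding specifies. A minimal standalone sketch of the same byte assembly (not part of the patch; `LittleEndianFloat` is a hypothetical name):

```java
/** Sketch: little-endian byte assembly as used by readFloat() above. */
public class LittleEndianFloat {
    public static float readFloat(byte[] buf) {
        // least-significant byte first, exactly as in DirectBinaryDecoder.readFloat
        int n = (buf[0] & 0xff)
              | ((buf[1] & 0xff) << 8)
              | ((buf[2] & 0xff) << 16)
              | ((buf[3] & 0xff) << 24);
        return Float.intBitsToFloat(n);
    }

    public static void main(String[] args) {
        // 1.0f has bit pattern 0x3F800000; little-endian bytes are 00 00 80 3F
        float f = readFloat(new byte[]{0x00, 0x00, (byte) 0x80, 0x3F});
        if (f != 1.0f) throw new AssertionError(f);
    }
}
```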
-
-  @Override
-  public ByteBuffer readBytes(ByteBuffer old) throws IOException {
-    int length = readInt();
-    return byteReader.read(old, length);
-  }
-
-  @Override
-  protected void doSkipBytes(long length) throws IOException {
-    while (length > 0) {
-      long n = in.skip(length);
-      if (n <= 0) {
-        throw new EOFException();
-      }
-      length -= n;
-    }
-  }
-
-  @Override
-  protected void doReadBytes(byte[] bytes, int start, int length)
-    throws IOException {
-    for (;;) {
-      int n = in.read(bytes, start, length);
-      if (n == length || length == 0) {
-        return;
-      } else if (n < 0) {
-        throw new EOFException();
-      }
-      start += n;
-      length -= n;
-    }
-  }
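`doSkipBytes` and `doReadBytes` above both loop because `InputStream.read`/`skip` may transfer fewer bytes than requested. The read-fully pattern in isolation (not part of the patch; `ReadFully` is a hypothetical name):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

/** Sketch: the read-fully loop pattern used by doReadBytes() above. */
public class ReadFully {
    public static void readFully(InputStream in, byte[] bytes, int start, int length)
            throws IOException {
        while (length > 0) {
            int n = in.read(bytes, start, length); // may return fewer bytes than asked
            if (n < 0) throw new EOFException();
            start += n;
            length -= n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] dst = new byte[3];
        readFully(new ByteArrayInputStream(new byte[]{7, 8, 9}), dst, 0, 3);
        if (dst[0] != 7 || dst[2] != 9) throw new AssertionError();
    }
}
```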
-
-  @Override
-  public InputStream inputStream() {
-    return in;
-  }
-
-  @Override
-  public boolean isEnd() throws IOException {
-    throw new UnsupportedOperationException();
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/Encoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/Encoder.java
deleted file mode 100644
index ac9a73a..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/Encoder.java
+++ /dev/null
@@ -1,271 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.Flushable;
-import java.io.IOException;
-import java.io.OutputStream;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.util.Utf8;
-
-/**
- * Low-level support for serializing Avro values.
- *
- * This class has two types of methods.  One type of methods support
- * the writing of leaf values (for example, {@link #writeLong} and
- * {@link #writeString}).  These methods have analogs in {@link
- * Decoder}.
- *
- * The other type of methods support the writing of maps and arrays.
- * These methods are {@link #writeArrayStart}, {@link
- * #startItem}, and {@link #writeArrayEnd} (and similar methods for
- * maps).  Some implementations of {@link Encoder} handle the
- * buffering required to break large maps and arrays into blocks,
- * which is necessary for applications that want to do streaming.
- * (See {@link #writeArrayStart} for details on these methods.)
- *
- *  @see Decoder
- */
-public abstract class Encoder implements Flushable {
-
-  /** Redirect output (and reset the parser state if we're checking). */
-  public abstract void init(OutputStream out) throws IOException;
-
-  /**
-   * "Writes" a null value.  (Doesn't actually write anything, but
-   * advances the state of the parser if this class is stateful.)
-   * @throws AvroTypeException If this is a stateful writer and a
-   *         null is not expected
-   */
-  public abstract void writeNull() throws IOException;
-  
-  /**
-   * Write a boolean value.
-   * @throws AvroTypeException If this is a stateful writer and a
-   * boolean is not expected
-   */
-  public abstract void writeBoolean(boolean b) throws IOException;
-
-  /**
-   * Writes a 32-bit integer.
-   * @throws AvroTypeException If this is a stateful writer and an
-   * integer is not expected
-   */
-  public abstract void writeInt(int n) throws IOException;
-
-  /**
-   * Write a 64-bit integer.
-   * @throws AvroTypeException If this is a stateful writer and a
-   * long is not expected
-   */
-  public abstract void writeLong(long n) throws IOException;
-  
-  /** Write a float.
-   * @throws IOException 
-   * @throws AvroTypeException If this is a stateful writer and a
-   * float is not expected
-   */
-  public abstract void writeFloat(float f) throws IOException;
-
-  /**
-   * Write a double.
-   * @throws AvroTypeException If this is a stateful writer and a
-   * double is not expected
-   */
-  public abstract void writeDouble(double d) throws IOException;
-
-  /**
-   * Write a Unicode character string.
-   * @throws AvroTypeException If this is a stateful writer and a
-   * char-string is not expected
-   */
-  public abstract void writeString(Utf8 utf8) throws IOException;
-
-  /**
-   * Write a Unicode character string.
-   * @throws AvroTypeException If this is a stateful writer and a
-   * char-string is not expected
-   */
-  public void writeString(String str) throws IOException {
-    writeString(new Utf8(str));
-  }
-
-  /**
-   * Write a byte string.
-   * @throws AvroTypeException If this is a stateful writer and a
-   *         byte-string is not expected
-   */
-  public abstract void writeBytes(ByteBuffer bytes) throws IOException;
-  
-  /**
-   * Write a byte string.
-   * @throws AvroTypeException If this is a stateful writer and a
-   * byte-string is not expected
-   */
-  public abstract void writeBytes(byte[] bytes, int start, int len) throws IOException;
-  
-  /**
-   * Writes a byte string.
-   * Equivalent to <tt>writeBytes(bytes, 0, bytes.length)</tt>
-   * @throws IOException 
-   * @throws AvroTypeException If this is a stateful writer and a
-   * byte-string is not expected
-   */
-  public void writeBytes(byte[] bytes) throws IOException {
-    writeBytes(bytes, 0, bytes.length);
-  }
-
-  /**
-   * Writes a fixed size binary object.
-   * @param bytes The contents to write
-   * @param start The position within <tt>bytes</tt> where the contents
-   * start.
-   * @param len The number of bytes to write.
-   * @throws AvroTypeException If this is a stateful writer and a
-   * byte-string is not expected
-   * @throws IOException
-   */
-  public abstract void writeFixed(byte[] bytes, int start, int len) throws IOException;
-
-  /**
-   * A shorthand for <tt>writeFixed(bytes, 0, bytes.length)</tt>
-   * @param bytes
-   */
-  public void writeFixed(byte[] bytes) throws IOException {
-    writeFixed(bytes, 0, bytes.length);
-  }
-  
-  /**
-   * Writes an enumeration.
-   * @param e
-   * @throws AvroTypeException If this is a stateful writer and an enumeration
-   * is not expected or <tt>e</tt> is out of range.
-   * @throws IOException
-   */
-  public abstract void writeEnum(int e) throws IOException;
-
-  /** Call this method to start writing an array.
-   *
-   *  When starting to serialize an array, call {@link
-   *  #writeArrayStart}. Then, before writing any data for any item
-   *  call {@link #setItemCount} followed by a sequence of
-   *  {@link #startItem()} and the item itself. The number of
-   *  {@link #startItem()} should match the number specified in
-   *  {@link #setItemCount}.
-   *  When actually writing the data of the item, you can call any {@link
-   *  Encoder} method (e.g., {@link #writeLong}).  When all items
-   *  of the array have been written, call {@link #writeArrayEnd}.
-   *
-   *  As an example, let's say you want to write an array of records,
-   *  the record consisting of a Long field and a Boolean field.
-   *  Your code would look something like this:
-   *  <pre>
-   *  out.writeArrayStart();
-   *  out.setItemCount(list.size());
-   *  for (Record r : list) {
-   *    out.startItem();
-   *    out.writeLong(r.longField);
-   *    out.writeBoolean(r.boolField);
-   *  }
-   *  out.writeArrayEnd();
-   *  </pre>
-   *  @throws AvroTypeException If this is a stateful writer and an
-   *          array is not expected
-   */
-  public abstract void writeArrayStart() throws IOException;
-
-  /**
-   * Call this method before writing a batch of items in an array or a map.
-   * Then for each item, call {@link #startItem()} followed by any of the
-   * other write methods of {@link Encoder}. The number of calls
-   * to {@link #startItem()} must be equal to the count specified
-   * in {@link #setItemCount}. Once a batch is completed you
-   * can start another batch with {@link #setItemCount}.
-   * 
-   * @param itemCount The number of {@link #startItem()} calls to follow.
-   * @throws IOException
-   */
-  public abstract void setItemCount(long itemCount) throws IOException;
-  
-  /**
-   * Start a new item of an array or map.
-   * See {@link #writeArrayStart} for usage information.
-   * @throws AvroTypeException If called outside of an array or map context
-   */
-  public abstract void startItem() throws IOException;
-
-  /**
-   * Call this method to finish writing an array.
-   * See {@link #writeArrayStart} for usage information.
-   *
-   * @throws AvroTypeException If items written does not match count
-   *          provided to {@link #writeArrayStart}
-   * @throws AvroTypeException If not currently inside an array
-   */
-  public abstract void writeArrayEnd() throws IOException;
-
-  /**
-   * Call this to start a new map.  See
-   * {@link #writeArrayStart} for details on usage.
-   *
-   * As an example of usage, let's say you want to write a map of
-   * records, the record consisting of a Long field and a Boolean
-   * field.  Your code would look something like this:
-   * <pre>
-   * out.writeMapStart();
-   * out.setItemCount(map.size());
-   * for (Map.Entry&lt;String,Record&gt; entry : map.entrySet()) {
-   *   out.startItem();
-   *   out.writeString(entry.getKey());
-   *   out.writeLong(entry.getValue().longField);
-   *   out.writeBoolean(entry.getValue().boolField);
-   * }
-   * out.writeMapEnd();
-   * </pre>
-   * @throws AvroTypeException If this is a stateful writer and a
-   * map is not expected
-   */
-  public abstract void writeMapStart() throws IOException;
-
-  /**
-   * Call this method to terminate the inner-most, currently-opened
-   * map.  See {@link #writeArrayStart} for more details.
-   *
-   * @throws AvroTypeException If items written does not match count
-   *          provided to {@link #writeMapStart}
-   * @throws AvroTypeException If not currently inside a map
-   */
-  public abstract void writeMapEnd() throws IOException;
-
-  /** Call this method to write the tag of a union.
-   *
-   * As an example of usage, let's say you want to write a union,
-   * whose second branch is a record consisting of an Long field and
-   * a Boolean field.  Your code would look something like this:
-   * <pre>
-   * out.writeIndex(1);
-   * out.writeLong(record.longField);
-   * out.writeBoolean(record.boolField);
-   * </pre>
-   * @throws AvroTypeException If this is a stateful writer and a
-   * union is not expected
-   */
-  public abstract void writeIndex(int unionIndex) throws IOException;
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/JsonDecoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/JsonDecoder.java
deleted file mode 100644
index cc84e97..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/JsonDecoder.java
+++ /dev/null
@@ -1,426 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-import java.io.EOFException;
-import java.io.InputStream;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.io.parsing.JsonGrammarGenerator;
-import org.apache.avro.io.parsing.Parser;
-import org.apache.avro.io.parsing.Symbol;
-import org.apache.avro.util.Utf8;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.JsonParser;
-import org.codehaus.jackson.JsonToken;
-
-/** A {@link Decoder} for Avro's JSON data encoding. */
-public class JsonDecoder extends ParsingDecoder
-  implements Parser.ActionHandler {
-  private JsonParser in;
-  private static JsonFactory jsonFactory = new JsonFactory();
-  
-  static final String CHARSET = "ISO-8859-1";
-
-  JsonDecoder(Symbol root, InputStream in) throws IOException {
-    super(root);
-    init(in);
-  }
-  
-  JsonDecoder(Symbol root, String in) throws IOException {
-    super(root);
-    init(in);
-  }
-
-  /** Creates a new JsonDecoder based on an InputStream. */
-  public JsonDecoder(Schema schema, InputStream in) throws IOException {
-    this(new JsonGrammarGenerator().generate(schema), in);
-  }
-  
-  /** Creates a new JsonDecoder based on a String input. */
-  public JsonDecoder(Schema schema, String in) throws IOException {
-    this(new JsonGrammarGenerator().generate(schema), in);
-  }
-
-  private void advance(Symbol symbol) throws IOException {
-    this.parser.processTrailingImplicitActions();
-    if (in.getCurrentToken() == null && this.parser.depth() == 1)
-      throw new EOFException();
-    parser.advance(symbol);
-  }
-
-  @Override
-  public void init(InputStream in) throws IOException {
-    parser.reset();
-    this.in = jsonFactory.createJsonParser(in);
-    this.in.nextToken();
-  }
-  
-  /** Re-initializes to start reading from a new String input. */
-  public void init(String in) throws IOException {
-    parser.reset();
-    this.in = new JsonFactory().createJsonParser(in);
-    this.in.nextToken();
-  }
-
-  @Override
-  public void readNull() throws IOException {
-    advance(Symbol.NULL);
-    if (in.getCurrentToken() == JsonToken.VALUE_NULL) {
-      in.nextToken();
-    } else {
-      throw error("null");
-    }
-  }
-
-  @Override
-  public boolean readBoolean() throws IOException {
-    advance(Symbol.BOOLEAN);
-    JsonToken t = in.getCurrentToken(); 
-    if (t == JsonToken.VALUE_TRUE || t == JsonToken.VALUE_FALSE) {
-      in.nextToken();
-      return t == JsonToken.VALUE_TRUE;
-    } else {
-      throw error("boolean");
-    }
-  }
-
-  @Override
-  public int readInt() throws IOException {
-    advance(Symbol.INT);
-    if (in.getCurrentToken() == JsonToken.VALUE_NUMBER_INT) {
-      int result = in.getIntValue();
-      in.nextToken();
-      return result;
-    } else {
-      throw error("int");
-    }
-  }
-    
-  @Override
-  public long readLong() throws IOException {
-    advance(Symbol.LONG);
-    if (in.getCurrentToken() == JsonToken.VALUE_NUMBER_INT) {
-      long result = in.getLongValue();
-      in.nextToken();
-      return result;
-    } else {
-      throw error("long");
-    }
-  }
-
-  @Override
-  public float readFloat() throws IOException {
-    advance(Symbol.FLOAT);
-    if (in.getCurrentToken() == JsonToken.VALUE_NUMBER_FLOAT) {
-      float result = in.getFloatValue();
-      in.nextToken();
-      return result;
-    } else {
-      throw error("float");
-    }
-  }
-
-  @Override
-  public double readDouble() throws IOException {
-    advance(Symbol.DOUBLE);
-    if (in.getCurrentToken() == JsonToken.VALUE_NUMBER_FLOAT) {
-      double result = in.getDoubleValue();
-      in.nextToken();
-      return result;
-    } else {
-      throw error("double");
-    }
-  }
-    
-  @Override
-  public Utf8 readString(Utf8 old) throws IOException {
-    advance(Symbol.STRING);
-    if (parser.topSymbol() == Symbol.MAP_KEY_MARKER) {
-      parser.advance(Symbol.MAP_KEY_MARKER);
-      if (in.getCurrentToken() != JsonToken.FIELD_NAME) {
-        throw error("map-key");
-      }
-    } else {
-      if (in.getCurrentToken() != JsonToken.VALUE_STRING) {
-        throw error("string");
-      }
-    }
-    String result = in.getText();
-    in.nextToken();
-    return new Utf8(result);
-  }
-
-  @Override
-  public void skipString() throws IOException {
-    advance(Symbol.STRING);
-    if (parser.topSymbol() == Symbol.MAP_KEY_MARKER) {
-      parser.advance(Symbol.MAP_KEY_MARKER);
-      if (in.getCurrentToken() != JsonToken.FIELD_NAME) {
-        throw error("map-key");
-      }
-    } else {
-      if (in.getCurrentToken() != JsonToken.VALUE_STRING) {
-        throw error("string");
-      }
-    }
-    in.nextToken();
-  }
-
-  @Override
-  public ByteBuffer readBytes(ByteBuffer old) throws IOException {
-    advance(Symbol.BYTES);
-    if (in.getCurrentToken() == JsonToken.VALUE_STRING) {
-      byte[] result = readByteArray();
-      in.nextToken();
-      return ByteBuffer.wrap(result);
-    } else {
-      throw error("bytes");
-    }
-  }
-
-  private byte[] readByteArray() throws IOException {
-    byte[] result = in.getText().getBytes(CHARSET);
-    return result;
-  }
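`readByteArray` above relies on `CHARSET = "ISO-8859-1"`: that charset maps each char 0..255 to exactly one byte, so arbitrary binary data round-trips through a JSON string without loss. A standalone sketch (not part of the patch; `Latin1Bytes` is a hypothetical name):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

/** Sketch: why ISO-8859-1 round-trips arbitrary bytes through a String. */
public class Latin1Bytes {
    public static String fromBytes(byte[] b) {
        return new String(b, StandardCharsets.ISO_8859_1); // one byte -> one char
    }

    public static byte[] toBytes(String s) {
        return s.getBytes(StandardCharsets.ISO_8859_1);    // one char -> one byte
    }

    public static void main(String[] args) {
        byte[] data = {0x00, 0x7F, (byte) 0x80, (byte) 0xFF};
        if (!Arrays.equals(data, toBytes(fromBytes(data)))) {
            throw new AssertionError("round-trip failed");
        }
    }
}
```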
-
-  @Override
-  public void skipBytes() throws IOException {
-    advance(Symbol.BYTES);
-    if (in.getCurrentToken() == JsonToken.VALUE_STRING) {
-      in.nextToken();
-    } else {
-      throw error("bytes");
-    }
-  }
-
-  private void checkFixed(int size) throws IOException {
-    advance(Symbol.FIXED);
-    Symbol.IntCheckAction top = (Symbol.IntCheckAction) parser.popSymbol();
-    if (size != top.size) {
-      throw new AvroTypeException(
-        "Incorrect length for fixed binary: expected " +
-        top.size + " but received " + size + " bytes.");
-    }
-  }
-    
-  @Override
-  public void readFixed(byte[] bytes, int start, int len) throws IOException {
-    checkFixed(len);
-    if (in.getCurrentToken() == JsonToken.VALUE_STRING) {
-      byte[] result = readByteArray();
-      in.nextToken();
-      if (result.length != len) {
-        throw new AvroTypeException("Expected fixed length " + len
-            + ", but got " + result.length);
-      }
-      System.arraycopy(result, 0, bytes, start, len);
-    } else {
-      throw error("fixed");
-    }
-  }
-
-  @Override
-  public void skipFixed(int length) throws IOException {
-    checkFixed(length);
-    doSkipFixed(length);
-  }
-
-  private void doSkipFixed(int length) throws IOException {
-    if (in.getCurrentToken() == JsonToken.VALUE_STRING) {
-      byte[] result = readByteArray();
-      in.nextToken();
-      if (result.length != length) {
-        throw new AvroTypeException("Expected fixed length " + length
-            + ", but got " + result.length);
-      }
-    } else {
-      throw error("fixed");
-    }
-  }
-
-  @Override
-  protected void skipFixed() throws IOException {
-    advance(Symbol.FIXED);
-    Symbol.IntCheckAction top = (Symbol.IntCheckAction) parser.popSymbol();
-    doSkipFixed(top.size);
-  }
-
-  @Override
-  public int readEnum() throws IOException {
-    advance(Symbol.ENUM);
-    Symbol.EnumLabelsAction top = (Symbol.EnumLabelsAction) parser.popSymbol();
-    if (in.getCurrentToken() == JsonToken.VALUE_STRING) {
-      int n = top.findLabel(in.getText());
-      if (n >= 0) {
-        in.nextToken();
-        return n;
-      }
-      throw new AvroTypeException("Unknown symbol in enum " + in.getText());
-    } else {
-      throw error("enum");
-    }
-  }
-
-  @Override
-  public long readArrayStart() throws IOException {
-    advance(Symbol.ARRAY_START);
-    if (in.getCurrentToken() == JsonToken.START_ARRAY) {
-      in.nextToken();
-      return doArrayNext();
-    } else {
-      throw error("array-start");
-    }
-  }
-
-  @Override
-  public long arrayNext() throws IOException {
-    advance(Symbol.ITEM_END);
-    return doArrayNext();
-  }
-
-  private long doArrayNext() throws IOException {
-    if (in.getCurrentToken() == JsonToken.END_ARRAY) {
-      parser.advance(Symbol.ARRAY_END);
-      in.nextToken();
-      return 0;
-    } else {
-      return 1;
-    }
-  }
-
-  @Override
-  public long skipArray() throws IOException {
-    advance(Symbol.ARRAY_START);
-    if (in.getCurrentToken() == JsonToken.START_ARRAY) {
-      in.skipChildren();
-      in.nextToken();
-      advance(Symbol.ARRAY_END);    
-    } else {
-      throw error("array-start");
-    }
-    return 0;
-  }
-
-  @Override
-  public long readMapStart() throws IOException {
-    advance(Symbol.MAP_START);
-    if (in.getCurrentToken() == JsonToken.START_OBJECT) {
-      in.nextToken();
-      return doMapNext();
-    } else {
-      throw error("map-start");
-    }
-  }
-
-  @Override
-  public long mapNext() throws IOException {
-    advance(Symbol.ITEM_END);
-    return doMapNext();
-  }
-
-  private long doMapNext() throws IOException {
-    if (in.getCurrentToken() == JsonToken.END_OBJECT) {
-      in.nextToken();
-      advance(Symbol.MAP_END);
-      return 0;
-    } else {
-      return 1;
-    }
-  }
-
-  @Override
-  public long skipMap() throws IOException {
-    advance(Symbol.MAP_START);
-    if (in.getCurrentToken() == JsonToken.START_OBJECT) {
-      in.skipChildren();
-      in.nextToken();
-      advance(Symbol.MAP_END);    
-    } else {
-      throw error("map-start");
-    }
-    return 0;
-  }
-
-  @Override
-  public int readIndex() throws IOException {
-    advance(Symbol.UNION);
-    Symbol.Alternative a = (Symbol.Alternative) parser.popSymbol();
-    
-    String label;
-    if (in.getCurrentToken() == JsonToken.VALUE_NULL) {
-      label = "null";
-    } else if (in.getCurrentToken() == JsonToken.START_OBJECT &&
-               in.nextToken() == JsonToken.FIELD_NAME) {
-      label = in.getText();
-      in.nextToken();
-      parser.pushSymbol(Symbol.UNION_END);
-    } else {
-      throw error("start-union");
-    }
-    int n = a.findLabel(label);
-    if (n < 0)
-      throw new AvroTypeException("Unknown union branch " + label);
-    parser.pushSymbol(a.getSymbol(n));
-    return n;
-  }
-
-  public Symbol doAction(Symbol input, Symbol top) throws IOException {
-    if (top instanceof Symbol.FieldAdjustAction) {
-      Symbol.FieldAdjustAction fa = (Symbol.FieldAdjustAction) top;
-      if (in.getCurrentToken() == JsonToken.FIELD_NAME) {
-        String fn = in.getCurrentName();
-        if (fa.fname.equals(fn)) {
-          in.nextToken();
-          return null;
-        } else {
-          throw new AvroTypeException("Expected field name " + fa.fname +
-              " got " + in.getCurrentName());
-        }
-      }
-    } else if (top == Symbol.RECORD_START) {
-      if (in.getCurrentToken() == JsonToken.START_OBJECT) {
-        in.nextToken();
-      } else {
-        throw error("record-start");
-      }
-    } else if (top == Symbol.RECORD_END || top == Symbol.UNION_END) {
-      if (in.getCurrentToken() == JsonToken.END_OBJECT) {
-        in.nextToken();
-      } else {
-        throw error(top == Symbol.RECORD_END ? "record-end" : "union-end");
-      }
-    } else {
-      throw new AvroTypeException("Unknown action symbol " + top);
-    }
-    return null;
-  }
-
-  private AvroTypeException error(String type) {
-    return new AvroTypeException("Expected " + type +
-        ". Got " + in.getCurrentToken());
-  }
-
-  public void setItemCount(long itemCount) throws IOException {
-  }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/JsonEncoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/JsonEncoder.java
deleted file mode 100644
index 23d7a60..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/JsonEncoder.java
+++ /dev/null
@@ -1,245 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-import java.io.OutputStream;
-import java.nio.ByteBuffer;
-import java.util.BitSet;
-
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.io.parsing.JsonGrammarGenerator;
-import org.apache.avro.io.parsing.Parser;
-import org.apache.avro.io.parsing.Symbol;
-import org.apache.avro.util.Utf8;
-import org.codehaus.jackson.JsonEncoding;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.JsonGenerator;
-
-/** An {@link Encoder} for Avro's JSON data encoding. */
-public class JsonEncoder extends ParsingEncoder implements Parser.ActionHandler {
-  final Parser parser;
-  private JsonGenerator out;
-  /**
-   * Has anything been written into the collections?
-   */
-  protected BitSet isEmpty = new BitSet();
-
-  public JsonEncoder(Schema sc, OutputStream out) throws IOException {
-    this(sc, getJsonGenerator(out));
-  }
-
-  public JsonEncoder(Schema sc, JsonGenerator out) throws IOException {
-    this.out = out;
-    this.parser =
-      new Parser(new JsonGrammarGenerator().generate(sc), this);
-  }
-
-  @Override
-  public void flush() throws IOException {
-    parser.processImplicitActions();
-    if (out != null) {
-      out.flush();
-    }
-  }
-
-  @Override
-  public void init(OutputStream out) throws IOException {
-    flush();
-    this.out = getJsonGenerator(out);
-  }
-
-  private static JsonGenerator getJsonGenerator(OutputStream out)
-    throws IOException {
-    return out == null ? null :
-      new JsonFactory().createJsonGenerator(out, JsonEncoding.UTF8);
-  }
-
-  @Override
-  public void writeNull() throws IOException {
-    parser.advance(Symbol.NULL);
-    out.writeNull();
-  }
-
-  @Override
-  public void writeBoolean(boolean b) throws IOException {
-    parser.advance(Symbol.BOOLEAN);
-    out.writeBoolean(b);
-  }
-
-  @Override
-  public void writeInt(int n) throws IOException {
-    parser.advance(Symbol.INT);
-    out.writeNumber(n);
-  }
-
-  @Override
-  public void writeLong(long n) throws IOException {
-    parser.advance(Symbol.LONG);
-    out.writeNumber(n);
-  }
-
-  @Override
-  public void writeFloat(float f) throws IOException {
-    parser.advance(Symbol.FLOAT);
-    out.writeNumber(f);
-  }
-
-  @Override
-  public void writeDouble(double d) throws IOException {
-    parser.advance(Symbol.DOUBLE);
-    out.writeNumber(d);
-  }
-
-  @Override
-  public void writeString(Utf8 utf8) throws IOException {
-    parser.advance(Symbol.STRING);
-    if (parser.topSymbol() == Symbol.MAP_KEY_MARKER) {
-      parser.advance(Symbol.MAP_KEY_MARKER);
-      out.writeFieldName(utf8.toString());
-    } else {
-      out.writeString(utf8.toString());
-    }
-  }
-
-  @Override
-  public void writeBytes(ByteBuffer bytes) throws IOException {
-    if (bytes.hasArray()) {
-      writeBytes(bytes.array(), bytes.arrayOffset() + bytes.position(), bytes.remaining());
-    } else {
-      byte[] b = new byte[bytes.remaining()];
-      for (int i = 0; i < b.length; i++) {
-        b[i] = bytes.get();
-      }
-      writeBytes(b);
-    }
-  }
-
-  @Override
-  public void writeBytes(byte[] bytes, int start, int len) throws IOException {
-    parser.advance(Symbol.BYTES);
-    writeByteArray(bytes, start, len);
-  }
-
-  private void writeByteArray(byte[] bytes, int start, int len)
-    throws IOException {
-    out.writeString(
-        new String(bytes, start, len, JsonDecoder.CHARSET));
-  }
-
-  @Override
-  public void writeFixed(byte[] bytes, int start, int len) throws IOException {
-    parser.advance(Symbol.FIXED);
-    Symbol.IntCheckAction top = (Symbol.IntCheckAction) parser.popSymbol();
-    if (len != top.size) {
-      throw new AvroTypeException(
-        "Incorrect length for fixed binary: expected " +
-        top.size + " but received " + len + " bytes.");
-    }
-    writeByteArray(bytes, start, len);
-  }
-
-  @Override
-  public void writeEnum(int e) throws IOException {
-    parser.advance(Symbol.ENUM);
-    Symbol.EnumLabelsAction top = (Symbol.EnumLabelsAction) parser.popSymbol();
-    if (e < 0 || e >= top.size) {
-      throw new AvroTypeException(
-          "Enumeration out of range: max is " +
-          top.size + " but received " + e);
-    }
-    out.writeString(top.getLabel(e));
-  }
-
-  @Override
-  public void writeArrayStart() throws IOException {
-    parser.advance(Symbol.ARRAY_START);
-    out.writeStartArray();
-    push();
-    isEmpty.set(depth());
-  }
-
-  @Override
-  public void writeArrayEnd() throws IOException {
-    if (! isEmpty.get(pos)) {
-      parser.advance(Symbol.ITEM_END);
-    }
-    pop();
-    parser.advance(Symbol.ARRAY_END);
-    out.writeEndArray();
-  }
-
-  @Override
-  public void writeMapStart() throws IOException {
-    push();
-    isEmpty.set(depth());
-
-    parser.advance(Symbol.MAP_START);
-    out.writeStartObject();
-  }
-
-  @Override
-  public void writeMapEnd() throws IOException {
-    if (! isEmpty.get(pos)) {
-      parser.advance(Symbol.ITEM_END);
-    }
-    pop();
-
-    parser.advance(Symbol.MAP_END);
-    out.writeEndObject();
-  }
-
-  @Override
-  public void startItem() throws IOException {
-    if (! isEmpty.get(pos)) {
-      parser.advance(Symbol.ITEM_END);
-    }
-    super.startItem();
-    isEmpty.clear(depth());
-  }
-
-  @Override
-  public void writeIndex(int unionIndex) throws IOException {
-    parser.advance(Symbol.UNION);
-    Symbol.Alternative top = (Symbol.Alternative) parser.popSymbol();
-    Symbol symbol = top.getSymbol(unionIndex);
-    if (symbol != Symbol.NULL) {
-      out.writeStartObject();
-      out.writeFieldName(top.getLabel(unionIndex));
-      parser.pushSymbol(Symbol.UNION_END);
-    }
-    parser.pushSymbol(symbol);
-  }
-
-  @Override
-  public Symbol doAction(Symbol input, Symbol top) throws IOException {
-    if (top instanceof Symbol.FieldAdjustAction) {
-      Symbol.FieldAdjustAction fa = (Symbol.FieldAdjustAction) top;
-      out.writeFieldName(fa.fname);
-    } else if (top == Symbol.RECORD_START) {
-      out.writeStartObject();
-    } else if (top == Symbol.RECORD_END || top == Symbol.UNION_END) {
-      out.writeEndObject();
-    } else {
-      throw new AvroTypeException("Unknown action symbol " + top);
-    }
-    return null;
-  }
-}
-
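The `writeIndex` method in the JsonEncoder deleted above wraps every non-null union branch in a single-key JSON object named after the branch, while a null branch is emitted bare. A minimal plain-Java sketch of that output shape (the class and method names here are hypothetical; real Avro emits through Jackson's `JsonGenerator` rather than string concatenation):

```java
// Illustrates Avro's JSON encoding of unions, as produced by writeIndex:
// null branches have no wrapper; other branches become {"<label>": value}.
public class UnionJsonSketch {
  public static String encodeUnionBranch(String branchLabel, String jsonValue) {
    if ("null".equals(branchLabel)) {
      return "null";  // null union branch: written bare, no enclosing object
    }
    // non-null branch: single-key object keyed by the branch's type name
    return "{\"" + branchLabel + "\":" + jsonValue + "}";
  }

  public static void main(String[] args) {
    System.out.println(encodeUnionBranch("null", null));
    System.out.println(encodeUnionBranch("string", "\"hi\""));
  }
}
```

This wrapper object is why `writeIndex` pushes `Symbol.UNION_END` for non-null branches: the encoder must later emit the matching `writeEndObject()`.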
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/ParsingDecoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/ParsingDecoder.java
deleted file mode 100644
index 6c39909..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/ParsingDecoder.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-
-import org.apache.avro.io.parsing.SkipParser;
-import org.apache.avro.io.parsing.Symbol;
-import org.apache.avro.io.parsing.Parser.ActionHandler;
-import org.apache.avro.io.parsing.SkipParser.SkipHandler;
-
-/** Base class for <a href="parsing/package-summary.html">parser</a>-based
- * {@link Decoder}s. */
-public abstract class ParsingDecoder extends Decoder
-  implements ActionHandler, SkipHandler {
-  protected final SkipParser parser;
-
-  public ParsingDecoder(Symbol root) throws IOException {
-    this.parser = new SkipParser(root, this, this);
-  }
-
-  protected abstract void skipFixed() throws IOException;
-  
-  public void skipAction() throws IOException {
-    parser.popSymbol();
-  }
-
-  public void skipTopSymbol() throws IOException {
-    Symbol top = parser.topSymbol();
-    if (top == Symbol.NULL) {
-      readNull();
-    } else if (top == Symbol.BOOLEAN) {
-      readBoolean();
-    } else if (top == Symbol.INT) {
-      readInt();
-    } else if (top == Symbol.LONG) {
-      readLong();
-    } else if (top == Symbol.FLOAT) {
-      readFloat();
-    } else if (top == Symbol.DOUBLE) {
-      readDouble();
-    } else if (top == Symbol.STRING) {
-      skipString();
-    } else if (top == Symbol.BYTES) {
-      skipBytes();
-    } else if (top == Symbol.ENUM)  {
-      readEnum();
-    } else if (top == Symbol.FIXED) {
-      skipFixed();
-    } else if (top == Symbol.UNION) {
-      readIndex();
-    } else if (top == Symbol.ARRAY_START) {
-      skipArray();
-    } else if (top == Symbol.MAP_START) {
-      skipMap();
-    }
-  }
-  
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/ParsingEncoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/ParsingEncoder.java
deleted file mode 100644
index 3bdb04b..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/ParsingEncoder.java
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-import java.util.Arrays;
-
-import org.apache.avro.AvroTypeException;
-
-/** Base class for <a href="parsing/package-summary.html">parser</a>-based
- * {@link Encoder}s. */
-public abstract class ParsingEncoder extends Encoder {
-  /**
-   * Tracks the number of items that remain to be written in
-   * the collections (array or map).
-   */
-  private long[] counts = new long[10];
-  
-  protected int pos = -1;
-
-  @Override
-  public void setItemCount(long itemCount) throws IOException {
-    if (counts[pos] != 0) {
-      throw new AvroTypeException("Incorrect number of items written. " +
-          counts[pos] + " more required.");
-    }
-    counts[pos] = itemCount;
-  }
-
-  @Override
-  public void startItem() throws IOException {
-    counts[pos]--;
-  }
-
-  /** Push a new collection on to the stack. */
-  protected final void push() {
-    if (pos + 1 == counts.length) {
-      counts = Arrays.copyOf(counts, pos + 10);
-    }
-    counts[++pos] = 0;
-  }
-  
-  protected final void pop() {
-    if (counts[pos] != 0) {
-      throw new AvroTypeException("Incorrect number of items written. " +
-          counts[pos] + " more required.");
-    }
-    pos--;
-  }
-  
-  protected final int depth() {
-    return pos;
-  }
-}
-
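The ParsingEncoder deleted above validates collection writes with a small stack of pending item counts: `setItemCount` records the caller's promise for the current array or map, `startItem` consumes one item, and `pop` rejects any mismatch when the collection closes. A self-contained re-sketch of that bookkeeping (class name hypothetical):

```java
import java.util.Arrays;

// Mirrors ParsingEncoder's counts[] stack: one pending-item counter per
// open array/map, checked when the collection is closed.
public class CountStack {
  private long[] counts = new long[10];
  private int pos = -1;

  public void push() {                     // open a new array/map
    if (pos + 1 == counts.length) {
      counts = Arrays.copyOf(counts, pos + 10);
    }
    counts[++pos] = 0;
  }
  public void setItemCount(long n) {       // promise n items for this level
    if (counts[pos] != 0) {
      throw new IllegalStateException(counts[pos] + " more items required first.");
    }
    counts[pos] = n;
  }
  public void startItem() {                // write one item
    counts[pos]--;
  }
  public void pop() {                      // close: were all promised items written?
    if (counts[pos] != 0) {
      throw new IllegalStateException(
          "Incorrect number of items written. " + counts[pos] + " more required.");
    }
    pos--;
  }
}
```

Writing fewer items than promised leaves a nonzero counter, so `pop()` throws; this is how the validating and JSON encoders catch under-filled arrays and maps before corrupt output is produced.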
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/ResolvingDecoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/ResolvingDecoder.java
deleted file mode 100644
index cfd76c1..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/ResolvingDecoder.java
+++ /dev/null
@@ -1,248 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.io.parsing.ResolvingGrammarGenerator;
-import org.apache.avro.io.parsing.Symbol;
-
-/**
- * {@link Decoder} that performs type-resolution between the reader's and
- * writer's schemas.
- *
- * <p>When resolving schemas, this class will return the values of fields in
- * _writer's_ order, not the reader's order.  (However, it returns _only_ the
- * reader's fields, not any extra fields the writer may have written.)  To help
- * clients handle fields that appear to be coming out of order, this class
- * defines the method {@link #readFieldOrder}.
- *
- * <p>See the <a href="doc-files/parsing.html">parser documentation</a> for
- *  information on how this works.
- */
-public class ResolvingDecoder extends ValidatingDecoder {
-
-  private Decoder backup;
-  
-  public ResolvingDecoder(Schema writer, Schema reader, Decoder in)
-    throws IOException {
-    this(resolve(writer, reader), in);
-  }
-  
-  /**
-   * Constructs a <tt>ResolvingDecoder</tt> using the given resolver.
-   * The resolver must have been returned by a previous call to
-   * {@link #resolve(Schema, Schema)}.
-   * @param resolver  The resolver to use.
-   * @param in  The underlying decoder.
-   * @throws IOException
-   */
-  public ResolvingDecoder(Object resolver, Decoder in)
-    throws IOException {
-    super((Symbol) resolver, in);
-  }
-
-  /**
-   * Produces an opaque resolver that can be used to construct a new
-   * {@link ResolvingDecoder#ResolvingDecoder(Object, Decoder)}. The
-   * returned Object is immutable and hence can be simultaneously used
-   * in many ResolvingDecoders. This method is reasonably expensive, so
-   * users are encouraged to cache the result.
-   * 
-   * @param writer  The writer's schema.
-   * @param reader  The reader's schema.
-   * @return  The opaque resolver.
-   * @throws IOException
-   */
-  public static Object resolve(Schema writer, Schema reader)
-    throws IOException {
-    return new ResolvingGrammarGenerator().generate(writer, reader);
-  }
-
-  /** Returns the actual order in which the reader's fields will be
-   * returned to the reader.
-   *
-   * This method is useful because {@link ResolvingDecoder}
-   * returns values in the order written by the writer, rather than
-   * the order expected by the reader.  This method allows readers
-   * to figure out what fields to expect.  Let's say the reader is
-   * expecting a three-field record, the first field is a long, the
-   * second a string, and the third an array.  In this case, a
-   * typical usage might be as follows:
-   * <pre>
-   *   Schema.Field[] fieldOrder = in.readFieldOrder();
-   *   for (int i = 0; i &lt; 3; i++) {
-   *     switch (fieldOrder[i].pos()) {
-   *     case 0:
-   *       foo(in.readLong());
-   *       break;
-   *     case 1:
-   *       someVariable = in.readString();
-   *       break;
-   *     case 2:
-   *       bar(in); // The code of "bar" will read an array-of-int
-   *       break;
-   *     }
-   *   }
-   * </pre>
-   * Note that {@link ResolvingDecoder} will return only the
-   * fields expected by the reader, not other fields that may have
-   * been written by the writer.  Thus, the iteration-count of "3" in
-   * the above loop will always be correct.
-   *
-   * Throws a runtime exception if we're not just about to read the
-   * field of a record.  Also, this method will consume the field
-   * information, and thus may only be called <em>once</em> before
-   * reading the field value.  (However, if the client knows the
-   * order of incoming fields, then the client does <em>not</em>
-   * need to call this method but rather can just start reading the
-   * field values.)
-   *
-   * @throws AvroTypeException If we're not starting a new record
-   *                               
-   */
-  public final Schema.Field[] readFieldOrder() throws IOException {
-    return ((Symbol.FieldOrderAction) parser.advance(Symbol.FIELD_ACTION)).
-      fields;
-  }
-  
-  /**
-   * Consume any more data that has been written by the writer but not
-   * needed by the reader so that the underlying decoder is in proper
-   * shape for the next record. This situation happens when, for example,
-   * the writer writes a record with two fields and the reader needs only the
-   * first field.
-   * 
-   * This function should be called after completely decoding an object but
-   * before the next object can be decoded from the same underlying decoder
-   * either directly or through another resolving decoder. If the same resolving
-   * decoder is used for the next object as well, calling this method is
-   * optional; the state of this resolving decoder ensures that any leftover
-   * portions are consumed before the next object is decoded.
-   * @throws IOException
-   */
-  public final void drain() throws IOException {
-    parser.processImplicitActions();
-  }
-
-  @Override
-  public long readLong() throws IOException {
-    Symbol actual = parser.advance(Symbol.LONG);
-    if (actual == Symbol.INT) {
-      return in.readInt();
-    } else if (actual == Symbol.DOUBLE) {
-      return (long) in.readDouble();
-    } else {
-      assert actual == Symbol.LONG;
-      return in.readLong();
-    }
-  }
-
-  @Override
-  public double readDouble() throws IOException {
-    Symbol actual = parser.advance(Symbol.DOUBLE);
-    if (actual == Symbol.INT) {
-      return (double) in.readInt();
-    } else if (actual == Symbol.LONG) {
-      return (double) in.readLong();
-    } else if (actual == Symbol.FLOAT) {
-      return (double) in.readFloat();
-    } else {
-      assert actual == Symbol.DOUBLE;
-      return in.readDouble();
-    }
-  }
-  
-  @Override
-  public int readEnum() throws IOException {
-    parser.advance(Symbol.ENUM);
-    Symbol.EnumAdjustAction top = (Symbol.EnumAdjustAction) parser.popSymbol();
-    int n = in.readEnum();
-    Object o = top.adjustments[n];
-    if (o instanceof Integer) {
-      return ((Integer) o).intValue();
-    } else {
-      throw new AvroTypeException((String) o);
-    }
-  }
-    
-  @Override
-  public int readIndex() throws IOException {
-    parser.advance(Symbol.UNION);
-    Symbol.UnionAdjustAction top = (Symbol.UnionAdjustAction) parser.popSymbol();
-    parser.pushSymbol(top.symToParse);
-    return top.rindex;
-  }
-
-  @Override
-  public Symbol doAction(Symbol input, Symbol top) throws IOException {
-    if (top instanceof Symbol.FieldOrderAction) {
-      return input == Symbol.FIELD_ACTION ? top : null;
-    } else if (top instanceof Symbol.ResolvingAction) {
-      Symbol.ResolvingAction t = (Symbol.ResolvingAction) top;
-      if (t.reader != input) {
-        throw new AvroTypeException("Found " + t.reader + " while looking for "
-                                    + input);
-      } else {
-        return t.writer;
-      }
-    } else if (top instanceof Symbol.SkipAction) {
-      Symbol symToSkip = ((Symbol.SkipAction) top).symToSkip;
-      parser.skipSymbol(symToSkip);
-    } else if (top instanceof Symbol.WriterUnionAction) {
-      Symbol.Alternative branches = (Symbol.Alternative) parser.popSymbol();
-      parser.pushSymbol(branches.getSymbol(in.readIndex()));
-    } else if (top instanceof Symbol.ErrorAction) {
-      throw new AvroTypeException(((Symbol.ErrorAction) top).msg);
-    } else if (top instanceof Symbol.DefaultStartAction) {
-      Symbol.DefaultStartAction dsa = (Symbol.DefaultStartAction) top;
-      backup = in;
-      in = DecoderFactory.defaultFactory()
-        .createBinaryDecoder(dsa.contents, null);
-    } else if (top == Symbol.DEFAULT_END_ACTION) {
-      in = backup;
-    } else {
-      throw new AvroTypeException("Unknown action: " + top);
-    }
-    return null;
-  }
-
-  @Override
-  public void skipAction() throws IOException {
-    Symbol top = parser.popSymbol();
-    if (top instanceof Symbol.ResolvingAction) {
-      parser.pushSymbol(((Symbol.ResolvingAction) top).writer);
-    } else if (top instanceof Symbol.SkipAction) {
-      parser.pushSymbol(((Symbol.SkipAction) top).symToSkip);
-    } else if (top instanceof Symbol.WriterUnionAction) {
-      Symbol.Alternative branches = (Symbol.Alternative) parser.popSymbol();
-      parser.pushSymbol(branches.getSymbol(in.readIndex()));
-    } else if (top instanceof Symbol.ErrorAction) {
-      throw new AvroTypeException(((Symbol.ErrorAction) top).msg);
-    } else if (top instanceof Symbol.DefaultStartAction) {
-      Symbol.DefaultStartAction dsa = (Symbol.DefaultStartAction) top;
-      backup = in;
-      in = DecoderFactory.defaultFactory()
-        .createBinaryDecoder(dsa.contents, null);
-    } else if (top == Symbol.DEFAULT_END_ACTION) {
-      in = backup;
-    }
-  }
-}
-
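The `readLong`/`readDouble` overrides in the ResolvingDecoder above implement Avro's numeric promotions: when the writer stored a narrower type, the reader's call is served by widening the decoded value. A stand-alone sketch of that dispatch, using a plain enum in place of grammar symbols (all names hypothetical, no Avro dependency):

```java
// Widens a decoded value to double following the promotion handling of
// ResolvingDecoder.readDouble: the writer's actual type selects the cast.
public class NumericPromotion {
  public enum WriterType { INT, LONG, FLOAT, DOUBLE }

  public static double readAsDouble(WriterType writer, Number decoded) {
    switch (writer) {
      case INT:    return (double) decoded.intValue();
      case LONG:   return (double) decoded.longValue();
      case FLOAT:  return (double) decoded.floatValue();
      default:     return decoded.doubleValue();  // DOUBLE: no widening needed
    }
  }
}
```

In the real decoder the writer's type arrives as the symbol returned by `parser.advance(Symbol.DOUBLE)`, but the widening logic is the same.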
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/ValidatingDecoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/ValidatingDecoder.java
deleted file mode 100644
index a219f6e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/ValidatingDecoder.java
+++ /dev/null
@@ -1,232 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.io.parsing.Parser;
-import org.apache.avro.io.parsing.Symbol;
-import org.apache.avro.io.parsing.ValidatingGrammarGenerator;
-import org.apache.avro.util.Utf8;
-
-/**
- * An implementation of {@link Decoder} that ensures that the sequence
- * of operations conforms to a schema.
- */
-public class ValidatingDecoder extends ParsingDecoder
-  implements Parser.ActionHandler {
-  protected Decoder in;
-
-  ValidatingDecoder(Symbol root, Decoder in) throws IOException {
-    super(root);
-    this.in = in;
-  }
-
-  public ValidatingDecoder(Schema schema, Decoder in) throws IOException {
-    this(new ValidatingGrammarGenerator().generate(schema), in);
-  }
-
-  @Override
-  public void init(InputStream in) throws IOException {
-    parser.reset();
-    this.in.init(in);
-  }
-
-  @Override
-  public void readNull() throws IOException {
-    parser.advance(Symbol.NULL);
-    in.readNull();
-  }
-    
-  @Override
-  public boolean readBoolean() throws IOException {
-    parser.advance(Symbol.BOOLEAN);
-    return in.readBoolean();
-  }
-
-  @Override
-  public int readInt() throws IOException {
-    parser.advance(Symbol.INT);
-    return in.readInt();
-  }
-    
-  @Override
-  public long readLong() throws IOException {
-    parser.advance(Symbol.LONG);
-    return in.readLong();
-  }
-
-  @Override
-  public float readFloat() throws IOException {
-    parser.advance(Symbol.FLOAT);
-    return in.readFloat();
-  }
-
-  @Override
-  public double readDouble() throws IOException {
-    parser.advance(Symbol.DOUBLE);
-    return in.readDouble();
-  }
-    
-  @Override
-  public Utf8 readString(Utf8 old) throws IOException {
-    parser.advance(Symbol.STRING);
-    return in.readString(old);
-  }
-
-  @Override
-  public void skipString() throws IOException {
-    parser.advance(Symbol.STRING);
-    in.skipString();
-  }
-
-  @Override
-  public ByteBuffer readBytes(ByteBuffer old) throws IOException {
-    parser.advance(Symbol.BYTES);
-    return in.readBytes(old);
-  }
-
-  @Override
-  public void skipBytes() throws IOException {
-    parser.advance(Symbol.BYTES);
-    in.skipBytes();
-  }
-
-  private void checkFixed(int size) throws IOException {
-    parser.advance(Symbol.FIXED);
-    Symbol.IntCheckAction top = (Symbol.IntCheckAction) parser.popSymbol();
-    if (size != top.size) {
-      throw new AvroTypeException(
-        "Incorrect length for fixed binary: expected " +
-        top.size + " but received " + size + " bytes.");
-    }
-  }
-    
-  @Override
-  public void readFixed(byte[] bytes, int start, int len) throws IOException {
-    checkFixed(len);
-    in.readFixed(bytes, start, len);
-  }
-
-  @Override
-  public void skipFixed(int length) throws IOException {
-    checkFixed(length);
-    in.skipFixed(length);
-  }
-
-  protected void skipFixed() throws IOException {
-    parser.advance(Symbol.FIXED);
-    Symbol.IntCheckAction top = (Symbol.IntCheckAction) parser.popSymbol();
-    in.skipFixed(top.size);
-  }
-
-  @Override
-  public int readEnum() throws IOException {
-    parser.advance(Symbol.ENUM);
-    Symbol.IntCheckAction top = (Symbol.IntCheckAction) parser.popSymbol();
-    int result = in.readEnum();
-    if (result < 0 || result >= top.size) {
-      throw new AvroTypeException(
-          "Enumeration out of range: max is " +
-          top.size + " but received " + result);
-    }
-    return result;
-  }
-
-  @Override
-  public long readArrayStart() throws IOException {
-    parser.advance(Symbol.ARRAY_START);
-    long result = in.readArrayStart();
-    if (result == 0) {
-      parser.advance(Symbol.ARRAY_END);
-    }
-    return result;
-  }
-
-  @Override
-  public long arrayNext() throws IOException {
-    parser.processTrailingImplicitActions();
-    long result = in.arrayNext();
-    if (result == 0) {
-      parser.advance(Symbol.ARRAY_END);
-    }
-    return result;
-  }
-
-  @Override
-  public long skipArray() throws IOException {
-    parser.advance(Symbol.ARRAY_START);
-    for (long c = in.skipArray(); c != 0; c = in.skipArray()) {
-      while (c-- > 0) {
-        parser.skipRepeater();
-      }
-    }
-    parser.advance(Symbol.ARRAY_END);    
-    return 0;
-  }
-
-  @Override
-  public long readMapStart() throws IOException {
-    parser.advance(Symbol.MAP_START);
-    long result = in.readMapStart();
-    if (result == 0) {
-      parser.advance(Symbol.MAP_END);
-    }
-    return result;
-  }
-
-  @Override
-  public long mapNext() throws IOException {
-    parser.processTrailingImplicitActions();
-    long result = in.mapNext();
-    if (result == 0) {
-      parser.advance(Symbol.MAP_END);
-    }
-    return result;
-  }
-
-  @Override
-  public long skipMap() throws IOException {
-    parser.advance(Symbol.MAP_START);
-    for (long c = in.skipMap(); c != 0; c = in.skipMap()) {
-      while (c-- > 0) {
-        parser.skipRepeater();
-      }
-    }
-    parser.advance(Symbol.MAP_END);
-    return 0;
-  }
-
-  @Override
-  public int readIndex() throws IOException {
-    parser.advance(Symbol.UNION);
-    Symbol.Alternative top = (Symbol.Alternative) parser.popSymbol();
-    int result = in.readIndex();
-    parser.pushSymbol(top.getSymbol(result));
-    return result;
-  }
-  
-  public Symbol doAction(Symbol input, Symbol top) throws IOException {
-    return null;
-  }
-}
-
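The ValidatingDecoder above interposes a grammar parser between the caller and the raw decoder: every `read*` call first advances the parser past the expected terminal, and a mismatch raises `AvroTypeException`. The core check can be sketched with a plain queue of expected terminals (hypothetical names, no Avro dependency; the real parser handles nesting and implicit actions, not just a flat sequence):

```java
import java.util.ArrayDeque;

// Each read*() call declares the terminal it expects; advance() compares
// it against the next symbol of the schema-derived sequence, the way
// parser.advance does in ValidatingDecoder.
public class TinyValidator {
  private final ArrayDeque<String> expected = new ArrayDeque<>();

  public TinyValidator(String... schemaTerminals) {
    for (String t : schemaTerminals) expected.addLast(t);
  }

  public void advance(String terminal) {
    String top = expected.pollFirst();
    if (!terminal.equals(top)) {
      throw new IllegalArgumentException(
          "Attempt to process a " + terminal + " when a " + top + " was expected.");
    }
  }
}
```

Reading a `long` where the schema says `string` fails immediately, before any bytes are misinterpreted, which is the whole point of the validating wrappers.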
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/ValidatingEncoder.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/ValidatingEncoder.java
deleted file mode 100644
index df86752..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/ValidatingEncoder.java
+++ /dev/null
@@ -1,203 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.io;
-
-import java.io.IOException;
-import java.io.OutputStream;
-import java.nio.ByteBuffer;
-import java.util.BitSet;
-
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.io.parsing.ValidatingGrammarGenerator;
-import org.apache.avro.io.parsing.Parser;
-import org.apache.avro.io.parsing.Symbol;
-import org.apache.avro.util.Utf8;
-
-/**
- * An implementation of {@link Encoder} that ensures that the sequence
- * of operations conforms to a schema.
- */
-public class ValidatingEncoder extends ParsingEncoder 
-  implements Parser.ActionHandler {
-  protected final Encoder out;
-  protected final Parser parser;
-  /**
-   * Has anything been written into the collections?
-   */
-  protected BitSet isEmpty = new BitSet();
-
-  ValidatingEncoder(Symbol root, Encoder out) throws IOException {
-    this.out = out;
-    this.parser = new Parser(root, this);
-  }
-
-  public ValidatingEncoder(Schema schema, Encoder in) throws IOException {
-    this(new ValidatingGrammarGenerator().generate(schema), in);
-  }
-
-  @Override
-  public void flush() throws IOException {
-    out.flush();
-  }
-
-  @Override
-  public void init(OutputStream out) throws IOException {
-    flush();
-    parser.reset();
-    this.out.init(out);
-  }
-
-  @Override
-  public void writeNull() throws IOException {
-    parser.advance(Symbol.NULL);
-    out.writeNull();
-  }
-
-  @Override
-  public void writeBoolean(boolean b) throws IOException {
-    parser.advance(Symbol.BOOLEAN);
-    out.writeBoolean(b);
-  }
-
-  @Override
-  public void writeInt(int n) throws IOException {
-    parser.advance(Symbol.INT);
-    out.writeInt(n);
-  }
-
-  @Override
-  public void writeLong(long n) throws IOException {
-    parser.advance(Symbol.LONG);
-    out.writeLong(n);
-  }
-
-  @Override
-  public void writeFloat(float f) throws IOException {
-    parser.advance(Symbol.FLOAT);
-    out.writeFloat(f);
-  }
-
-  @Override
-  public void writeDouble(double d) throws IOException {
-    parser.advance(Symbol.DOUBLE);
-    out.writeDouble(d);
-  }
-
-  @Override
-  public void writeString(Utf8 utf8) throws IOException {
-    parser.advance(Symbol.STRING);
-    out.writeString(utf8);
-  }
-
-  @Override
-  public void writeBytes(ByteBuffer bytes) throws IOException {
-    parser.advance(Symbol.BYTES);
-    out.writeBytes(bytes);
-  }
-
-  @Override
-  public void writeBytes(byte[] bytes, int start, int len) throws IOException {
-    parser.advance(Symbol.BYTES);
-    out.writeBytes(bytes, start, len);
-  }
-
-  @Override
-  public void writeFixed(byte[] bytes, int start, int len) throws IOException {
-    parser.advance(Symbol.FIXED);
-    Symbol.IntCheckAction top = (Symbol.IntCheckAction) parser.popSymbol();
-    if (len != top.size) {
-      throw new AvroTypeException(
-        "Incorrect length for fixed binary: expected " +
-        top.size + " but received " + len + " bytes.");
-    }
-    out.writeFixed(bytes, start, len);
-  }
-
-  @Override
-  public void writeEnum(int e) throws IOException {
-    parser.advance(Symbol.ENUM);
-    Symbol.IntCheckAction top = (Symbol.IntCheckAction) parser.popSymbol();
-    if (e < 0 || e >= top.size) {
-      throw new AvroTypeException(
-          "Enumeration out of range: max is " +
-          top.size + " but received " + e);
-    }
-    out.writeEnum(e);
-  }
-
-  @Override
-  public void writeArrayStart() throws IOException {
-    push();
-    parser.advance(Symbol.ARRAY_START);
-    out.writeArrayStart();
-  }
-
-  @Override
-  public void writeArrayEnd() throws IOException {
-    parser.advance(Symbol.ARRAY_END);
-    out.writeArrayEnd();
-    pop();
-  }
-
-  @Override
-  public void writeMapStart() throws IOException {
-    push();
-    parser.advance(Symbol.MAP_START);
-    out.writeMapStart();
-  }
-
-  @Override
-  public void writeMapEnd() throws IOException {
-    parser.advance(Symbol.MAP_END);
-    out.writeMapEnd();
-    pop();
-  }
-
-  @Override
-  public void setItemCount(long itemCount) throws IOException {
-    super.setItemCount(itemCount);
-    out.setItemCount(itemCount);
-  }
-
-  @Override
-  public void startItem() throws IOException {
-    super.startItem();
-    out.startItem();
-  }
-
-  @Override
-  public void writeIndex(int unionIndex) throws IOException {
-    parser.advance(Symbol.UNION);
-    Symbol.Alternative top = (Symbol.Alternative) parser.popSymbol();
-    parser.pushSymbol(top.getSymbol(unionIndex));
-    out.writeIndex(unionIndex);
-  }
-
-  @Override
-  public Symbol doAction(Symbol input, Symbol top) throws IOException {
-    return null;
-  }
-
-  /** Returns true if nothing has yet been written into the current collection. */
-  protected final boolean isTopEmpty() {
-    return isEmpty.get(pos);
-  }
-}
-
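The deleted ValidatingEncoder above is a decorator: every `writeXyz` first advances a grammar parser and only then delegates to the wrapped `Encoder`, so an out-of-order write fails fast. A hypothetical, self-contained mini-sketch of that validate-then-delegate pattern (all names here are illustrative, not the real Avro API):

```java
import java.util.List;

// Toy stand-in for the parser: a cursor over the flattened expected sequence.
class SequenceValidator {
    private final List<String> expected; // e.g. ["INT", "STRING"] for {int, string}
    private int pos = 0;

    SequenceValidator(List<String> expected) { this.expected = expected; }

    // Mirrors parser.advance(Symbol.X): check the call order, then the caller
    // delegates the actual write to the wrapped encoder.
    void advance(String symbol) {
        if (pos >= expected.size() || !expected.get(pos).equals(symbol)) {
            throw new IllegalStateException(
                "Attempt to write a " + symbol + " when a "
                + (pos < expected.size() ? expected.get(pos) : "<end>")
                + " was expected.");
        }
        pos++;
    }

    boolean complete() { return pos == expected.size(); }
}

public class ValidateDemo {
    public static void main(String[] args) {
        SequenceValidator v = new SequenceValidator(List.of("INT", "STRING"));
        v.advance("INT");     // ok: matches schema order
        v.advance("STRING");  // ok
        System.out.println(v.complete()); // true
    }
}
```

Writing `STRING` before `INT` here would throw, just as the real class throws `AvroTypeException` when the call sequence strays from the schema.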
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/io/package.html
deleted file mode 100644
index 6cfb489..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/package.html
+++ /dev/null
@@ -1,23 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-Input/output utilities.
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/JsonGrammarGenerator.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/JsonGrammarGenerator.java
deleted file mode 100644
index 3dad82c..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/JsonGrammarGenerator.java
+++ /dev/null
@@ -1,100 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io.parsing;
-
-import java.util.HashMap;
-import java.util.Map;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-
-/**
- * The class that generates a grammar suitable to parse Avro data
- * in JSON format.
- */
-
-public class JsonGrammarGenerator extends ValidatingGrammarGenerator {
-  /**
-   * Returns the non-terminal that is the start symbol
-   * for the grammar for the given schema <tt>sc</tt>.
-   */
-  public Symbol generate(Schema schema) {
-    return Symbol.root(generate(schema, new HashMap<LitS, Symbol>()));
-  }
-
-  /**
-   * Returns the non-terminal that is the start symbol
-   * for the grammar of the given schema <tt>sc</tt>. If there is already an entry
-   * for the given schema in the given map <tt>seen</tt> then
-   * that entry is returned. Otherwise a new symbol is generated and
-   * an entry is inserted into the map.
-   * @param sc    The schema for which the start symbol is required
-   * @param seen  A map of schema to symbol mapping done so far.
-   * @return      The start symbol for the schema
-   */
-  public Symbol generate(Schema sc, Map<LitS, Symbol> seen) {
-    switch (sc.getType()) {
-    case NULL:
-    case BOOLEAN:
-    case INT:
-    case LONG:
-    case FLOAT:
-    case DOUBLE:
-    case STRING:
-    case BYTES:
-    case FIXED:
-    case UNION:
-      return super.generate(sc, seen);
-    case ENUM:
-      return Symbol.seq(new Symbol.EnumLabelsAction(sc.getEnumSymbols()),
-          Symbol.ENUM);
-    case ARRAY:
-      return Symbol.seq(Symbol.repeat(Symbol.ARRAY_END,
-              Symbol.ITEM_END, generate(sc.getElementType(), seen)),
-          Symbol.ARRAY_START);
-    case MAP:
-      return Symbol.seq(Symbol.repeat(Symbol.MAP_END,
-              Symbol.ITEM_END, generate(sc.getValueType(), seen),
-              Symbol.MAP_KEY_MARKER, Symbol.STRING),
-          Symbol.MAP_START);
-    case RECORD: {
-      LitS wsc = new LitS(sc);
-      Symbol rresult = seen.get(wsc);
-      if (rresult == null) {
-        Symbol[] production = new Symbol[sc.getFields().size() * 2 + 2];
-        rresult = Symbol.seq(production);
-        seen.put(wsc, rresult);
-
-        int i = production.length;
-        int n = 0;
-        production[--i] = Symbol.RECORD_START;
-        for (Field f : sc.getFields()) {
-          production[--i] = new Symbol.FieldAdjustAction(n, f.name());
-          production[--i] = generate(f.schema(), seen);
-          n++;
-        }
-        production[--i] = Symbol.RECORD_END;
-      }
-      return rresult;
-    }
-    default:
-      throw new RuntimeException("Unexpected schema type");
-    }
-  }
-}
-
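The RECORD case above fills its `production` array from the end (`production[--i] = ...`) so that, after the whole array is bulk-copied onto the parsing stack, symbols pop off in schema-declaration order. A standalone sketch of that backwards-fill pattern, with illustrative placeholder symbols rather than the real `Symbol` objects:

```java
import java.util.Arrays;
import java.util.List;

public class ProductionDemo {
    // Builds a record production back-to-front, like JsonGrammarGenerator does.
    static String[] recordProduction(List<String> fields) {
        String[] production = new String[fields.size() + 2];
        int i = production.length;
        production[--i] = "RECORD_START";  // highest index: popped first
        for (String f : fields) {
            production[--i] = f;           // fields then pop in declaration order
        }
        production[--i] = "RECORD_END";    // index 0: popped last
        return production;
    }

    public static void main(String[] args) {
        // Stored back-to-front; a stack popping from the end of the array
        // yields RECORD_START, name, age, RECORD_END.
        System.out.println(Arrays.toString(recordProduction(List.of("name", "age"))));
    }
}
```

The same trick is why `Symbol.seq(production)` can be registered in `seen` before the array is filled: recursive references to the record resolve to the (still-empty) sequence, which is completed afterwards.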
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/Parser.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/Parser.java
deleted file mode 100644
index 796de3e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/Parser.java
+++ /dev/null
@@ -1,192 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io.parsing;
-
-import java.io.IOException;
-import java.util.Arrays;
-
-import org.apache.avro.AvroTypeException;
-
-/**
- * Parser is the class that maintains the stack for parsing. This class
- * is used by encoders, which are not required to skip.
- */
-public class Parser {
-  /**
-   * The parser knows how to handle the terminal and non-terminal
-   * symbols. But it needs help from outside to handle implicit
-   * and explicit actions. The clients implement this interface to
-   * provide this help.
-   */
-  public interface ActionHandler {
-    /**
-     * Handle the action symbol <tt>top</tt> when the <tt>input</tt> is
-     * about to be taken off the stack.
-     * @param input The input symbol from the caller of advance
-     * @param top The symbol at the top the stack.
-     * @return  <tt>null</tt> if advance() is to continue processing the
-     * stack. If not <tt>null</tt> the return value will be returned
-     * by advance().
-     * @throws IOException
-     */
-    Symbol doAction(Symbol input, Symbol top) throws IOException;
-  }
-
-  protected final ActionHandler symbolHandler;
-  protected Symbol[] stack;
-  protected int pos;
-
-  public Parser(Symbol root, ActionHandler symbolHandler)
-    throws IOException {
-    this.symbolHandler = symbolHandler;
-    this.stack = new Symbol[5]; // Start small to make sure expansion code works
-    this.stack[0] = root;
-    this.pos = 1;
-  }
-
-  /**
-   * If there is not sufficient room in the stack, use this to expand it.
-   */
-  private void expandStack() {
-    stack = Arrays.copyOf(stack, stack.length+Math.max(stack.length,1024));
-  }
-
-  /**
-   * Recursively replaces the symbol at the top of the stack with its
-   * production, until the top is a terminal. Then checks if the
-   * top symbol matches the supplied terminal symbol <tt>input</tt>.
-   * @param input The symbol to match against the terminal at the
-   * top of the stack.
-   * @return The terminal symbol at the top of the stack unless an
-   * implicit action resulted in another symbol, in which case that
-   * symbol is returned.
-   */
-  public final Symbol advance(Symbol input) throws IOException {
-    for (; ;) {
-      Symbol top = stack[--pos];
-      if (top == input) {
-        return top; // A common case
-      }
-
-      Symbol.Kind k = top.kind;
-      if (k == Symbol.Kind.IMPLICIT_ACTION) {
-        Symbol result = symbolHandler.doAction(input, top);
-        if (result != null) {
-          return result;
-        }
-      } else if (k == Symbol.Kind.TERMINAL) {
-        throw new AvroTypeException("Attempt to process a "
-                + input + " when a "
-                + top + " was expected.");
-      } else if (k == Symbol.Kind.REPEATER
-          && input == ((Symbol.Repeater) top).end) {
-        return input;
-      } else {
-        pushProduction(top);
-      }
-    }
-  }
-  
-  /**
-   * Performs any implicit actions at the top of the stack, expanding any
-   * production (other than the root) that may be encountered.
-   * This method will fail if there are any repeaters on the stack.
-   * @throws IOException
-   */
-  public final void processImplicitActions() throws IOException {
-     while (pos > 1) {
-      Symbol top = stack[pos - 1];
-      if (top.kind == Symbol.Kind.IMPLICIT_ACTION) {
-        pos--;
-        symbolHandler.doAction(null, top);
-      } else if (top.kind != Symbol.Kind.TERMINAL) {
-        pos--;
-        pushProduction(top);
-      } else {
-        break;
-      }
-    }
-  }
-
-  /**
-   * Performs any "trailing" implicit actions at the top of the stack.
-   */
-  public final void processTrailingImplicitActions() throws IOException {
-    while (pos >= 1) {
-      Symbol top = stack[pos - 1];
-      if (top.kind == Symbol.Kind.IMPLICIT_ACTION 
-        && ((Symbol.ImplicitAction) top).isTrailing) {
-        pos--;
-        symbolHandler.doAction(null, top);
-      } else {
-        break;
-      }
-    }
-  }
-
-  /**
-   * Pushes the production for the given symbol <tt>sym</tt>
-   * on to the parsing stack, expanding the stack if needed.
-   * @param sym The symbol whose production is to be pushed.
-   */
-  public final void pushProduction(Symbol sym) {
-    Symbol[] p = sym.production;
-    while (pos + p.length > stack.length) {
-      expandStack();
-    }
-    System.arraycopy(p, 0, stack, pos, p.length);
-    pos += p.length;
-  }
-
-  /**
-   * Pops and returns the top symbol from the stack.
-   */
-  public Symbol popSymbol() {
-    return stack[--pos];
-  }
-  
-  /**
-   * Returns the top symbol from the stack.
-   */
-  public Symbol topSymbol() {
-    return stack[pos - 1];
-  }
-  
-  /**
-   * Pushes <tt>sym</tt> on to the stack.
-   */
-  public void pushSymbol(Symbol sym) {
-    if (pos == stack.length) {
-      expandStack();
-    }
-    stack[pos++] = sym;
-  }
-  
-  /**
-   * Returns the depth of the stack.
-   */
-  public int depth() {
-    return pos;
-  }
-
-  public void reset() {
-    pos = 1;
-  }
-}
-
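Parser.java above keeps its pushdown stack in a plain `Symbol[]` that starts deliberately small, doubles (at least) on demand via `Arrays.copyOf`, and pushes whole productions with one `System.arraycopy`. A minimal, self-contained sketch of that growable-stack pattern, using `String` in place of `Symbol` (names are illustrative):

```java
import java.util.Arrays;

class MiniStack {
    private String[] stack = new String[2]; // start tiny, like the 5-slot Parser stack
    private int pos = 0;

    void push(String sym) {
        if (pos == stack.length) expand(1);
        stack[pos++] = sym;
    }

    // Mirrors pushProduction: ensure room first, then one bulk copy.
    void pushProduction(String[] production) {
        while (pos + production.length > stack.length) expand(production.length);
        System.arraycopy(production, 0, stack, pos, production.length);
        pos += production.length;
    }

    String pop() { return stack[--pos]; }
    int depth() { return pos; }

    private void expand(int needed) {
        // Grow by at least the current size, so repeated pushes stay amortized O(1).
        stack = Arrays.copyOf(stack, stack.length + Math.max(stack.length, needed));
    }
}

public class StackDemo {
    public static void main(String[] args) {
        MiniStack s = new MiniStack();
        s.push("root");
        s.pushProduction(new String[] {"RECORD_END", "b", "a", "RECORD_START"});
        System.out.println(s.pop());   // RECORD_START: last array element, popped first
        System.out.println(s.depth()); // 4
    }
}
```

Because productions are stored back-to-front (see the grammar generators), the bulk copy leaves the first expected symbol on top of the stack.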
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/ResolvingGrammarGenerator.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/ResolvingGrammarGenerator.java
deleted file mode 100644
index 2e50f42..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/ResolvingGrammarGenerator.java
+++ /dev/null
@@ -1,448 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io.parsing;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.Encoder;
-import org.codehaus.jackson.JsonNode;
-
-/**
- * The class that generates a resolving grammar to resolve between two
- * schemas.
- */
-public class ResolvingGrammarGenerator extends ValidatingGrammarGenerator {
-  /**
-   * Resolves the writer schema <tt>writer</tt> and the reader schema
-   * <tt>reader</tt> and returns the start symbol for the grammar generated. 
-   * @param writer    The schema used by the writer
-   * @param reader    The schema used by the reader
-   * @return          The start symbol for the resolving grammar
-   * @throws IOException 
-   */
-  public final Symbol generate(Schema writer, Schema reader)
-    throws IOException {
-    return Symbol.root(generate(writer, reader, new HashMap<LitS, Symbol>()));
-  }
-  
-  /**
-   * Resolves the writer schema <tt>writer</tt> and the reader schema
-   * <tt>reader</tt> and returns the start symbol for the grammar generated.
-   * If there is already a symbol in the map <tt>seen</tt> for resolving the
-   * two schemas, then that symbol is returned. Otherwise a new symbol is
-   * generated and returned.
-   * @param writer    The schema used by the writer
-   * @param reader    The schema used by the reader
-   * @param seen      The &lt;reader-schema, writer-schema&gt; to symbol
-   * map of start symbols of resolving grammars so far.
-   * @return          The start symbol for the resolving grammar
-   * @throws IOException 
-   */
-  public Symbol generate(Schema writer, Schema reader,
-                                Map<LitS, Symbol> seen) throws IOException
-  {
-    final Schema.Type writerType = writer.getType();
-    final Schema.Type readerType = reader.getType();
-
-    if (writerType == readerType) {
-      switch (writerType) {
-      case NULL:
-        return Symbol.NULL;
-      case BOOLEAN:
-        return Symbol.BOOLEAN;
-      case INT:
-        return Symbol.INT;
-      case LONG:
-        return Symbol.LONG;
-      case FLOAT:
-        return Symbol.FLOAT;
-      case DOUBLE:
-        return Symbol.DOUBLE;
-      case STRING:
-        return Symbol.STRING;
-      case BYTES:
-        return Symbol.BYTES;
-      case FIXED:
-        if (writer.getName().equals(reader.getName())
-            && writer.getFixedSize() == reader.getFixedSize()) {
-          return Symbol.seq(new Symbol.IntCheckAction(writer.getFixedSize()),
-              Symbol.FIXED);
-        }
-        break;
-
-      case ENUM:
-        if (writer.getName() == null
-                || writer.getName().equals(reader.getName())) {
-          return Symbol.seq(mkEnumAdjust(writer.getEnumSymbols(),
-                  reader.getEnumSymbols()), Symbol.ENUM);
-        }
-        break;
-
-      case ARRAY:
-        return Symbol.seq(Symbol.repeat(Symbol.ARRAY_END,
-                generate(writer.getElementType(),
-                reader.getElementType(), seen)),
-            Symbol.ARRAY_START);
-      
-      case MAP:
-        return Symbol.seq(Symbol.repeat(Symbol.MAP_END,
-                generate(writer.getValueType(),
-                reader.getValueType(), seen), Symbol.STRING),
-            Symbol.MAP_START);
-      case RECORD:
-        return resolveRecords(writer, reader, seen);
-      case UNION:
-        return resolveUnion(writer, reader, seen);
-      default:
-        throw new AvroTypeException("Unknown type for schema: " + writerType);
-      }
-    } else {  // writer and reader are of different types
-      if (writerType == Schema.Type.UNION) {
-        return resolveUnion(writer, reader, seen);
-      }
-  
-      switch (readerType) {
-      case LONG:
-        switch (writerType) {
-        case INT:
-        case DOUBLE:
-        case FLOAT:
-          return Symbol.resolve(super.generate(writer, seen), Symbol.LONG);
-        }
-        break;
-  
-      case DOUBLE:
-        switch (writerType) {
-        case INT:
-        case LONG:
-        case FLOAT:
-          return Symbol.resolve(super.generate(writer, seen), Symbol.DOUBLE);
-        }
-        break;
-  
-      case UNION:
-        int j = bestBranch(reader, writer);
-        if (j >= 0) {
-          Symbol s = generate(writer, reader.getTypes().get(j), seen);
-          return Symbol.seq(new Symbol.UnionAdjustAction(j, s), Symbol.UNION);
-        }
-        break;
-      case NULL:
-      case BOOLEAN:
-      case INT:
-      case STRING:
-      case FLOAT:
-      case BYTES:
-      case ENUM:
-      case ARRAY:
-      case MAP:
-      case RECORD:
-        break;
-      default:
-        throw new RuntimeException("Unexpected schema type: " + readerType);
-      }
-    }
-    return Symbol.error("Found " + writer + ", expecting " + reader);
-  }
-
-  private Symbol resolveUnion(Schema writer, Schema reader,
-      Map<LitS, Symbol> seen) throws IOException {
-    List<Schema> alts = writer.getTypes();
-    final int size = alts.size();
-    Symbol[] symbols = new Symbol[size];
-    String[] labels = new String[size];
-
-    /**
-     * We construct a symbol without filling the arrays. Please see
-     * {@link Symbol#production} for the reason.
-     */
-    int i = 0;
-    for (Schema w : alts) {
-      symbols[i] = generate(w, reader, seen);
-      labels[i] = w.getName();
-      i++;
-    }
-    return Symbol.seq(Symbol.alt(symbols, labels),
-        new Symbol.WriterUnionAction());
-  }
-
-  private Symbol resolveRecords(Schema writer, Schema reader,
-      Map<LitS, Symbol> seen) throws IOException {
-    LitS wsc = new LitS2(writer, reader);
-    Symbol result = seen.get(wsc);
-    if (result == null) {
-      List<Field> wfields = writer.getFields();
-      List<Field> rfields = reader.getFields();
-
-      // First, compute reordering of reader fields, plus
-      // number elements in the result's production
-      Field[] reordered = new Field[rfields.size()];
-      int ridx = 0;
-      int count = 1 + wfields.size();
-
-      for (Field f : wfields) {
-        Field rdrField = reader.getField(f.name());
-        if (rdrField != null) {
-          reordered[ridx++] = rdrField;
-        }
-      }
-
-      for (Field rf : rfields) {
-        String fname = rf.name();
-        if (writer.getField(fname) == null) {
-          if (rf.defaultValue() == null) {
-            result = Symbol.error("Found " + writer + ", expecting " + reader);
-            seen.put(wsc, result);
-            return result;
-          } else {
-            reordered[ridx++] = rf;
-            count += 3;
-          }
-        }
-      }
-
-      Symbol[] production = new Symbol[count];
-      production[--count] = new Symbol.FieldOrderAction(reordered);
-
-      /**
-       * We construct a symbol without filling the array. Please see
-       * {@link Symbol#production} for the reason.
-       */
-      result = Symbol.seq(production);
-      seen.put(wsc, result);
-
-      /*
-       * For now every field in read-record with no default value
-       * must be in write-record.
-       * Write record may have additional fields, which will be
-       * skipped during read.
-       */
-
-      // Handle all the writer's fields
-      for (Field wf : wfields) {
-        String fname = wf.name();
-        Field rf = reader.getField(fname);
-        if (rf == null) {
-          production[--count] =
-            new Symbol.SkipAction(super.generate(wf.schema(),
-              seen));
-        } else {
-          production[--count] =
-            generate(wf.schema(), rf.schema(), seen);
-        }
-      }
-
-      // Add default values for fields missing from Writer
-      for (Field rf : rfields) {
-        String fname = rf.name();
-        Field wf = writer.getField(fname);
-        if (wf == null) {
-          byte[] bb = getBinary(rf.schema(), rf.defaultValue());
-          production[--count] = new Symbol.DefaultStartAction(bb);
-          production[--count] = generate(rf.schema(), rf.schema(), seen);
-          production[--count] = Symbol.DEFAULT_END_ACTION;
-        }
-      }
-    }
-    return result;
-  }
-
-  /**
-   * Returns the Avro binary encoded version of <tt>n</tt> according to
-   * the schema <tt>s</tt>.
-   * @param s The schema for encoding
-   * @param n The Json node that has the value to be encoded.
-   * @return  The binary encoded version of <tt>n</tt>.
-   * @throws IOException
-   */
-  private static byte[] getBinary(Schema s, JsonNode n) throws IOException {
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    Encoder e = new BinaryEncoder(out);
-    encode(e, s, n);
-    return out.toByteArray();
-  }
-  
-  /**
-   * Encodes the given Json node <tt>n</tt> on to the encoder <tt>e</tt>
-   * according to the schema <tt>s</tt>.
-   * @param e The encoder to encode into.
-   * @param s The schema for the object being encoded.
-   * @param n The Json node to encode.
-   * @throws IOException
-   */
-  
-  static void encode(Encoder e, Schema s, JsonNode n)
-    throws IOException {
-    switch (s.getType()) {
-    case RECORD:
-      for (Field f : s.getFields()) {
-        String name = f.name();
-        JsonNode v = n.get(name);
-        if (v == null) {
-          v = f.defaultValue();
-        }
-        if (v == null) {
-          throw new AvroTypeException("No default value for: " + name);
-        }
-        encode(e, f.schema(), v);
-      }
-      break;
-    case ENUM:
-      e.writeEnum(s.getEnumOrdinal(n.getTextValue()));
-      break;
-    case ARRAY:
-      e.writeArrayStart();
-      e.setItemCount(n.size());
-      Schema i = s.getElementType();
-      for (JsonNode node : n) {
-        e.startItem();
-        encode(e, i, node);
-      }
-      e.writeArrayEnd();
-      break;
-    case MAP:
-      e.writeMapStart();
-      e.setItemCount(n.size());
-      Schema v = s.getValueType();
-      for (Iterator<String> it = n.getFieldNames(); it.hasNext();) {
-        e.startItem();
-        String key = it.next();
-        e.writeString(key);
-        encode(e, v, n.get(key));
-      }
-      e.writeMapEnd();
-      break;
-    case UNION:
-      e.writeIndex(0);
-      encode(e, s.getTypes().get(0), n);
-      break;
-    case FIXED:
-      byte[] bb = n.getTextValue().getBytes("ISO-8859-1");
-      if (bb.length != s.getFixedSize()) {
-        bb = Arrays.copyOf(bb, s.getFixedSize());
-      }
-      e.writeFixed(bb);
-      break;
-    case STRING:
-      e.writeString(n.getTextValue());
-      break;
-    case BYTES:
-      e.writeBytes(n.getTextValue().getBytes("ISO-8859-1"));
-      break;
-    case INT:
-      e.writeInt(n.getIntValue());
-      break;
-    case LONG:
-      e.writeLong(n.getLongValue());
-      break;
-    case FLOAT:
-      e.writeFloat((float) n.getDoubleValue());
-      break;
-    case DOUBLE:
-      e.writeDouble(n.getDoubleValue());
-      break;
-    case BOOLEAN:
-      e.writeBoolean(n.getBooleanValue());
-      break;
-    case NULL:
-      e.writeNull();
-      break;
-    }
-  }
-
-  private static Symbol mkEnumAdjust(List<String> wsymbols,
-      List<String> rsymbols){
-    Object[] adjustments = new Object[wsymbols.size()];
-    for (int i = 0; i < adjustments.length; i++) {
-      int j = rsymbols.indexOf(wsymbols.get(i));
-      adjustments[i] = (j == -1 ? "No match for " + wsymbols.get(i)
-                                : new Integer(j));
-    }
-    return new Symbol.EnumAdjustAction(rsymbols.size(), adjustments);
-  }
-
-  private static int bestBranch(Schema r, Schema w) {
-    Schema.Type vt = w.getType();
-      // first scan for exact match
-      int j = 0;
-      for (Schema b : r.getTypes()) {
-        if (vt == b.getType())
-          if (vt == Schema.Type.RECORD) {
-            String vname = w.getName();
-            if (vname == null || vname.equals(b.getName()))
-              return j;
-          } else
-            return j;
-        j++;
-      }
-
-      // then scan match via numeric promotion
-      j = 0;
-      for (Schema b : r.getTypes()) {
-        switch (vt) {
-        case INT:
-          switch (b.getType()) {
-          case LONG: case DOUBLE:
-            return j;
-          }
-          break;
-        case LONG:
-        case FLOAT:
-          switch (b.getType()) {
-          case DOUBLE:
-            return j;
-          }
-          break;
-        }
-        j++;
-      }
-      return -1;
-  }
-
-  /**
-   * Clever trick which differentiates items put into
-   * <code>seen</code> by {@link ValidatingGrammarGenerator#generate(Schema, Map)}
-   * from those put in by {@link ResolvingGrammarGenerator#generate(Schema, Schema, Map)}.
-   */
-   static class LitS2 extends ValidatingGrammarGenerator.LitS {
-     public Schema expected;
-     public LitS2(Schema actual, Schema expected) {
-       super(actual);
-       this.expected = expected;
-     }
-     public boolean equals(Object o) {
-       if (! (o instanceof LitS2)) return false;
-       LitS2 other = (LitS2) o;
-       return actual == other.actual && expected == other.expected;
-     }
-     public int hashCode() {
-       return super.hashCode() + expected.hashCode();
-     }
-   }
-}
-
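The `bestBranch` method above resolves a writer value against a reader union in two passes: exact type match first, then Avro's numeric promotions (int to long or double; long and float to double). A hypothetical standalone sketch of that two-pass search, using type-name strings instead of `Schema` objects (the real method additionally matches records and named types by name):

```java
import java.util.List;

class UnionMatcher {
    static int bestBranch(List<String> readerTypes, String writerType) {
        // Pass 1: an exact type match wins immediately.
        for (int j = 0; j < readerTypes.size(); j++) {
            if (readerTypes.get(j).equals(writerType)) return j;
        }
        // Pass 2: fall back to numeric promotion.
        for (int j = 0; j < readerTypes.size(); j++) {
            String r = readerTypes.get(j);
            switch (writerType) {
                case "int":
                    if (r.equals("long") || r.equals("double")) return j;
                    break;
                case "long":
                case "float":
                    if (r.equals("double")) return j;
                    break;
            }
        }
        return -1; // no compatible branch
    }
}

public class UnionDemo {
    public static void main(String[] args) {
        // Writer wrote an int; the reader union has no int, but long is promotable.
        System.out.println(UnionMatcher.bestBranch(List.of("string", "long"), "int")); // 1
        System.out.println(UnionMatcher.bestBranch(List.of("string"), "int"));         // -1
    }
}
```

A return of `-1` corresponds to the generator emitting `Symbol.error(...)`, which surfaces as a type error only if that branch is actually read.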
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/SkipParser.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/SkipParser.java
deleted file mode 100644
index 5afbe60..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/SkipParser.java
+++ /dev/null
@@ -1,96 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io.parsing;
-
-import java.io.IOException;
-
-/**
- * A parser that is capable of skipping as well as reading and writing.
- * This class is used by decoders, which (unlike encoders) are required
- * to implement methods to skip.
- */
-public class SkipParser extends Parser {
-  /**
-   * The clients implement this interface to skip symbols and actions.
-   */
-  public interface SkipHandler {
-    /**
-     * Skips the action at the top of the stack.
-     */
-    void skipAction() throws IOException;
-
-    /**
-     * Skips the symbol at the top of the stack.
-     */
-    void skipTopSymbol() throws IOException;
-  }
-  
-  private final SkipHandler skipHandler;
-
-  public SkipParser(Symbol root, ActionHandler symbolHandler,
-      SkipHandler skipHandler)
-    throws IOException {
-    super(root, symbolHandler);
-    this.skipHandler = skipHandler;
-  }
-
-  /**
-   * Skips data by calling <code>skipXyz</code> or
-   * <code>readXyz</code> methods on <code>this</code>, until the
-   * parser stack reaches the target level.
-   */
-  public final void skipTo(int target) throws IOException {
-    outer:
-    while (target < pos) {
-      Symbol top = stack[pos - 1];
-      while (top.kind != Symbol.Kind.TERMINAL) {
-        if (top.kind == Symbol.Kind.IMPLICIT_ACTION
-            || top.kind == Symbol.Kind.EXPLICIT_ACTION) {
-          skipHandler.skipAction();
-        } else {
-          --pos;
-          pushProduction(top);
-        }
-        continue outer;
-      }
-      skipHandler.skipTopSymbol();
-    }
-  }
-
-  /**
-   * Skips the repeater at the top of the stack.
-   */
-  public final void skipRepeater() throws IOException {
-    int target = pos;
-    Symbol repeater = stack[--pos];
-    assert repeater.kind == Symbol.Kind.REPEATER;
-    pushProduction(repeater);
-    skipTo(target);
-  }
-
-  /**
-   * Pushes the given symbol on to the stack and skips it.
-   * @param symToSkip The symbol that should be skipped.
-   */
-  public final void skipSymbol(Symbol symToSkip) throws IOException {
-    int target = pos;
-    pushSymbol(symToSkip);
-    skipTo(target);
-  }
-}
-
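`skipTo` above drives the stack back down to a target depth: non-terminals are expanded in place, while terminals and actions are handed to the `SkipHandler` to be consumed without decoding. An illustrative, self-contained sketch of that loop with a toy grammar (real symbols and handlers are far richer; these names are made up):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

class MiniSkipper {
    // Toy grammar: a non-terminal maps to its production; terminals map to nothing.
    private final Map<String, String[]> productions;
    final Deque<String> stack = new ArrayDeque<>();
    int skippedTerminals = 0; // stands in for SkipHandler.skipTopSymbol() calls

    MiniSkipper(Map<String, String[]> productions) { this.productions = productions; }

    void skipTo(int target) {
        while (stack.size() > target) {
            String top = stack.pop();
            String[] p = productions.get(top);
            if (p == null) {
                skippedTerminals++;                 // terminal: consume (skip) it
            } else {
                for (String s : p) stack.push(s);   // non-terminal: expand in place
            }
        }
    }
}

public class SkipDemo {
    public static void main(String[] args) {
        MiniSkipper sk = new MiniSkipper(Map.of("record", new String[] {"int", "string"}));
        sk.stack.push("record");  // skip a whole record without decoding its fields
        sk.skipTo(0);
        System.out.println(sk.skippedTerminals); // 2
    }
}
```

`skipRepeater` and `skipSymbol` are thin wrappers over the same idea: push the thing to skip, then `skipTo` the depth recorded just before the push.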
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/Symbol.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/Symbol.java
deleted file mode 100644
index 7c346b6..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/Symbol.java
+++ /dev/null
@@ -1,550 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io.parsing;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.NoSuchElementException;
-
-import org.apache.avro.Schema;
-
-/**
- * Symbol is the base of all symbols (terminals and non-terminals) of
- * the grammar.
- */
-public abstract class Symbol {
-  /*
-   * The type of symbol.
-   */
-  public enum Kind {
-    /** terminal symbols which have no productions */
-    TERMINAL,
-    /** Start symbol for some grammar */
-    ROOT,
-    /** non-terminal symbol which is a sequence of one or more other symbols */
-    SEQUENCE,
-    /** non-terminal to represent the contents of an array or map */
-    REPEATER,
-    /** non-terminal to represent the union */
-    ALTERNATIVE,
-    /** non-terminal action symbol which is automatically consumed */
-    IMPLICIT_ACTION,
-    /** non-terminal action symbol which is explicitly consumed */
-    EXPLICIT_ACTION
-  };
-
-  /// The kind of this symbol.
-  public final Kind kind;
-
-  /**
-   * The production for this symbol. If this symbol is a terminal
-   * this is <tt>null</tt>. Otherwise this holds the sequence of
-   * symbols that forms the production for this symbol. The
-   * sequence is in the reverse order of the production, which makes it
-   * easy to copy onto the parsing stack.
-   * 
-   * Please note that this field is final, so the production for a symbol
-   * must be known before that symbol is constructed. This requirement
-   * cannot be met for symbols which are recursive (e.g. a record that
-   * holds a union, one branch of which is the record itself). To resolve
-   * this problem, we initialize the symbol with an array of nulls and fill
-   * in the symbols later. Not clean, but it works. The alternative is to
-   * make this field non-final, but keeping it final, and the symbol thus
-   * immutable, gives some comfort. See the various generators for how
-   * records are built.
-   */
-  public final Symbol[] production;
-  /**
-   * Constructs a new symbol of the given kind <tt>kind</tt>.
-   */
-  protected Symbol(Kind kind) {
-    this(kind, null);
-  }
-    
-    
-  protected Symbol(Kind kind, Symbol[] production) {
-    this.production = production;
-    this.kind = kind;
-  }
-
-  /**
-   * A convenience method to construct a root symbol.
-   */
-  static Symbol root(Symbol... symbols) {
-    return new Root(symbols);
-  }
-  /**
-   * A convenience method to construct a sequence.
-   * @param production  The constituent symbols of the sequence.
-   */
-  static Symbol seq(Symbol... production) {
-    return new Sequence(production);
-  }
-
-  /**
-   * A convenience method to construct a repeater.
-   * @param symsToRepeat The symbols to repeat in the repeater.
-   */
-  static Symbol repeat(Symbol endSymbol, Symbol... symsToRepeat) {
-    return new Repeater(endSymbol, symsToRepeat);
-  }
-
-  /**
-   *  A convenience method to construct a union.
-   */
-  static Symbol alt(Symbol[] symbols, String[] labels) {
-    return new Alternative(symbols, labels);
-  }
-
-  /**
-   * A convenience method to construct an ErrorAction.
-   * @param e The error message.
-   */
-  static Symbol error(String e) {
-    return new ErrorAction(e);
-  }
-  
-  /**
-   * A convenience method to construct a ResolvingAction.
-   * @param w The writer symbol
-   * @param r The reader symbol
-   */
-  static Symbol resolve(Symbol w, Symbol r) {
-    return new ResolvingAction(w, r);
-  }
-  
-  private static class Fixup {
-    public final Symbol[] symbols;
-    public final int pos;
-    
-    public Fixup(Symbol[] symbols, int pos) {
-      this.symbols = symbols;
-      this.pos = pos;
-    }
-  }
-  
-  public Symbol flatten(Map<Sequence, Sequence> map,
-      Map<Sequence, List<Fixup>> map2) {
-    return this;
-  }
-  
-  public int flattenedSize() {
-    return 1;
-  }
-  
-  /**
-   * Flattens the given sub-array of symbols into another sub-array of
-   * symbols. Every <tt>Sequence</tt> in the input is recursively replaced
-   * by its production. Non-<tt>Sequence</tt> symbols that internally hold
-   * other symbols have those internal symbols flattened as well.
-   * 
-   * The algorithm does a few tricks to handle recursive symbol definitions.
-   * In order to avoid infinite recursion with recursive symbols, we keep a map
-   * of Symbol->Symbol. Before fully constructing a flattened symbol for a
-   * <tt>Sequence</tt> we insert an empty output symbol into the map and then
-   * start filling the production for the <tt>Sequence</tt>. If the same
-   * <tt>Sequence</tt> is encountered due to recursion, we simply return the
-   * (empty) output <tt>Sequence</tt> from the map. Then we actually fill out
-   * the production for the <tt>Sequence</tt>.
-   * As part of the flattening process we copy the productions of
-   * <tt>Sequence</tt>s into larger arrays. If the original <tt>Sequence</tt>
-   * has not been fully constructed yet, we copy a run of <tt>null</tt>s.
-   * A fix-up remembers each such <tt>null</tt> patch. The fix-ups are
-   * finally filled in when we know the symbols that occupy those patches.
-   *  
-   * @param in  The array of input symbols to flatten
-   * @param start The position where the input sub-array starts.
-   * @param out The output that receives the flattened list of symbols. The
-   * output array should have sufficient space to receive the expanded sub-array
-   * of symbols.
-   * @param skip  The position where the output sub-array starts.
-   * @param map A map of symbols which have already been expanded. Useful for
-   * handling recursive definitions and for caching.
-   * @param map2  A map used to store the list of fix-ups.
-   */
-  static void flatten(Symbol[] in, int start,
-      Symbol[] out, int skip,
-      Map<Sequence, Sequence> map,
-      Map<Sequence, List<Fixup>> map2) {
-    for (int i = start, j = skip; i < in.length; i++) {
-      Symbol s = in[i].flatten(map, map2);
-      if (s instanceof Sequence) {
-        Symbol[] p = s.production;
-        List<Fixup> l = map2.get(s);
-        if (l == null) {
-          System.arraycopy(p, 0, out, j, p.length);
-        } else {
-          l.add(new Fixup(out, j));
-        }
-        j += p.length;
-      } else {
-        out[j++] = s;
-      }
-    }
-  }
-
-  /**
-   * Returns the amount of space required to flatten the given
-   * sub-array of symbols.
-   * @param symbols The array of input symbols.
-   * @param start The index where the subarray starts.
-   * @return  The number of symbols that will be produced if one expands
-   * the given input.
-   */
-  protected static int flattenedSize(Symbol[] symbols, int start) {
-    int result = 0;
-    for (int i = start; i < symbols.length; i++) {
-      if (symbols[i] instanceof Sequence) {
-        Sequence s = (Sequence) symbols[i];
-        result += s.flattenedSize();
-      } else {
-        result += 1;
-      }
-    }
-    return result;
-  }
-
-  private static class Terminal extends Symbol {
-    private final String printName;
-    public Terminal(String printName) {
-      super(Kind.TERMINAL);
-      this.printName = printName;
-    }
-    public String toString() { return printName; }
-  }
-
-  public static class ImplicitAction extends Symbol {
-    /**
-     * Set to <tt>true</tt> if and only if this implicit action is
-     * a trailing action; that is, an action that follows a
-     * real symbol. E.g. {@link Symbol#DEFAULT_END_ACTION}.
-     */
-    public final boolean isTrailing;
-
-    private ImplicitAction() {
-      this(false);
-    }
-    
-    private ImplicitAction(boolean isTrailing) {
-      super(Kind.IMPLICIT_ACTION);
-      this.isTrailing = isTrailing;
-    }
-  }
-  
-  protected static class Root extends Symbol {
-    private Root(Symbol... symbols) {
-      super(Kind.ROOT, makeProduction(symbols));
-      production[0] = this;
-    }
-
-    private static Symbol[] makeProduction(Symbol[] symbols) {
-      Symbol[] result = new Symbol[flattenedSize(symbols, 0) + 1];
-      flatten(symbols, 0, result, 1,
-          new HashMap<Sequence, Sequence>(),
-          new HashMap<Sequence, List<Fixup>>());
-      return result;
-    }
-  }
-  
-  protected static class Sequence extends Symbol implements Iterable<Symbol> {
-    private Sequence(Symbol[] productions) {
-      super(Kind.SEQUENCE, productions);
-    }
-
-    public Symbol get(int index) {
-      return production[index];
-    }
-    
-    public int size() {
-      return production.length;
-    }
-    
-    public Iterator<Symbol> iterator() {
-      return new Iterator<Symbol>() {
-        private int pos = production.length;
-        
-        public boolean hasNext() {
-          return 0 < pos;
-        }
-        
-        public Symbol next() {
-          if (0 < pos) {
-            return production[--pos];
-          } else {
-            throw new NoSuchElementException();
-          }
-        }
-        
-        public void remove() {
-          throw new UnsupportedOperationException();
-        }
-      };
-    }
-    @Override
-    public Sequence flatten(Map<Sequence, Sequence> map,
-        Map<Sequence, List<Fixup>> map2) {
-      Sequence result = map.get(this);
-      if (result == null) {
-        result = new Sequence(new Symbol[flattenedSize()]);
-        map.put(this, result);
-        List<Fixup> l = new ArrayList<Fixup>();
-        map2.put(result, l);
-        
-        flatten(production, 0,
-            result.production, 0, map, map2);
-        for (Fixup f : l) {
-          System.arraycopy(result.production, 0, f.symbols, f.pos,
-              result.production.length);
-        }
-        map2.remove(result);
-      }
-      return result;
-    }
-
-    @Override
-    public final int flattenedSize() {
-      return flattenedSize(production, 0);
-    }
-  }
-
-  public static class Repeater extends Symbol {
-    public final Symbol end;
-   
-    private Repeater(Symbol end, Symbol... sequenceToRepeat) {
-      super(Kind.REPEATER, makeProduction(sequenceToRepeat));
-      this.end = end;
-      production[0] = this;
-    }
-    
-    private static Symbol[] makeProduction(Symbol[] p) {
-      Symbol[] result = new Symbol[p.length + 1];
-      System.arraycopy(p, 0, result, 1, p.length);
-      return result;
-    }
-    
-    @Override
-    public Repeater flatten(Map<Sequence, Sequence> map,
-        Map<Sequence, List<Fixup>> map2) {
-      Repeater result =
-        new Repeater(end, new Symbol[flattenedSize(production, 1)]);
-      flatten(production, 1, result.production, 1, map, map2);
-      return result;
-    }
-
-  }
-    
-  public static class Alternative extends Symbol {
-    public final Symbol[] symbols;
-    public final String[] labels;
-    private Alternative(Symbol[] symbols, String[] labels) {
-      super(Kind.ALTERNATIVE);
-      this.symbols = symbols;
-      this.labels = labels;
-    }
-    
-    public Symbol getSymbol(int index) {
-      return symbols[index];
-    }
-    
-    public String getLabel(int index) {
-      return labels[index];
-    }
-    
-    public int size() {
-      return symbols.length;
-    }
-
-    public int findLabel(String label) {
-      if (label != null) {
-        for (int i = 0; i < labels.length; i++) {
-          if (label.equals(labels[i])) {
-            return i;
-          }
-        }
-      }
-      return -1;
-    }
-
-    @Override
-    public Alternative flatten(Map<Sequence, Sequence> map,
-        Map<Sequence, List<Fixup>> map2) {
-      Symbol[] ss = new Symbol[symbols.length];
-      for (int i = 0; i < ss.length; i++) {
-        ss[i] = symbols[i].flatten(map, map2);
-      }
-      return new Alternative(ss, labels);
-    }
-  }
-
-  public static class ErrorAction extends ImplicitAction {
-    public final String msg;
-    private ErrorAction(String msg) {
-      this.msg = msg;
-    }
-  }
-
-  public static class IntCheckAction extends Symbol {
-    public final int size;
-    public IntCheckAction(int size) {
-      super(Kind.EXPLICIT_ACTION);
-      this.size = size;
-    }
-  }
-
-  public static class EnumAdjustAction extends IntCheckAction {
-    public final Object[] adjustments;
-    public EnumAdjustAction(int rsymCount, Object[] adjustments) {
-      super(rsymCount);
-      this.adjustments = adjustments;
-    }
-  }
-
-  public static class WriterUnionAction extends ImplicitAction {
-  }
-
-  public static class ResolvingAction extends ImplicitAction {
-    public final Symbol writer;
-    public final Symbol reader;
-    private ResolvingAction(Symbol writer, Symbol reader) {
-      this.writer = writer;
-      this.reader = reader;
-    }
-    
-    @Override
-    public ResolvingAction flatten(Map<Sequence, Sequence> map,
-        Map<Sequence, List<Fixup>> map2) {
-      return new ResolvingAction(writer.flatten(map, map2),
-          reader.flatten(map, map2));
-    }
-
-  }
-  
-  public static class SkipAction extends ImplicitAction {
-    public final Symbol symToSkip;
-    public SkipAction(Symbol symToSkip) {
-      this.symToSkip = symToSkip;
-    }
-    
-    @Override
-    public SkipAction flatten(Map<Sequence, Sequence> map,
-        Map<Sequence, List<Fixup>> map2) {
-      return new SkipAction(symToSkip.flatten(map, map2));
-    }
-
-  }
-
-  public static class FieldAdjustAction extends ImplicitAction {
-    public final int rindex;
-    public final String fname;
-    public FieldAdjustAction(int rindex, String fname) {
-      this.rindex = rindex;
-      this.fname = fname;
-    }
-  }
-  
-  public static final class FieldOrderAction extends ImplicitAction {
-    public final Schema.Field[] fields;
-    public FieldOrderAction(Schema.Field[] fields) {
-      this.fields = fields;
-    }
-  }
-
-  public static class DefaultStartAction extends ImplicitAction {
-    public final byte[] contents;
-    public DefaultStartAction(byte[] contents) {
-      this.contents = contents;
-    }
-  }
-
-  public static class UnionAdjustAction extends ImplicitAction {
-    public final int rindex;
-    public final Symbol symToParse;
-    public UnionAdjustAction(int rindex, Symbol symToParse) {
-      this.rindex = rindex;
-      this.symToParse = symToParse;
-    }
-    
-    @Override
-    public UnionAdjustAction flatten(Map<Sequence, Sequence> map,
-        Map<Sequence, List<Fixup>> map2) {
-      return new UnionAdjustAction(rindex, symToParse.flatten(map, map2));
-    }
-    
-  }
-
-  /** For JSON. */
-  public static class EnumLabelsAction extends IntCheckAction {
-    public final List<String> symbols;
-    public EnumLabelsAction(List<String> symbols) {
-      super(symbols.size());
-      this.symbols = symbols;
-    }
-    
-    public String getLabel(int n) {
-      return symbols.get(n);
-    }
-    
-    public int findLabel(String l) {
-      if (l != null) {
-        for (int i = 0; i < symbols.size(); i++) {
-          if (l.equals(symbols.get(i))) {
-            return i;
-          }
-        }
-      }
-      return -1;
-    }
-  }
-
-  /**
-   * The terminal symbols for the grammar.
-   */
-  public static final Symbol NULL = new Symbol.Terminal("null");
-  public static final Symbol BOOLEAN = new Symbol.Terminal("boolean");
-  public static final Symbol INT = new Symbol.Terminal("int");
-  public static final Symbol LONG = new Symbol.Terminal("long");
-  public static final Symbol FLOAT = new Symbol.Terminal("float");
-  public static final Symbol DOUBLE = new Symbol.Terminal("double");
-  public static final Symbol STRING = new Symbol.Terminal("string");
-  public static final Symbol BYTES = new Symbol.Terminal("bytes");
-  public static final Symbol FIXED = new Symbol.Terminal("fixed");
-  public static final Symbol ENUM = new Symbol.Terminal("enum");
-  public static final Symbol UNION = new Symbol.Terminal("union");
-
-  public static final Symbol ARRAY_START = new Symbol.Terminal("array-start");
-  public static final Symbol ARRAY_END = new Symbol.Terminal("array-end");
-  public static final Symbol MAP_START = new Symbol.Terminal("map-start");
-  public static final Symbol MAP_END = new Symbol.Terminal("map-end");
-  public static final Symbol ITEM_END = new Symbol.Terminal("item-end");
-
-  /* a pseudo terminal used by parsers */
-  public static final Symbol FIELD_ACTION =
-    new Symbol.Terminal("field-action");
-
-  public static final Symbol RECORD_START = new ImplicitAction(false);
-  public static final Symbol RECORD_END = new ImplicitAction(true);
-  public static final Symbol UNION_END = new ImplicitAction(true);
-  
-  public static final Symbol DEFAULT_END_ACTION = new ImplicitAction(true);
-  public static final Symbol MAP_KEY_MARKER =
-    new Symbol.Terminal("map-key-marker");
-}
-
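The core splice performed by <tt>Symbol.flatten</tt> above can be sketched standalone. This is a hedged illustration using plain lists in place of the real Avro classes; the real implementation additionally handles recursive schemas via the fix-up mechanism, which this sketch omits.

```java
import java.util.ArrayList;
import java.util.List;

public class FlattenSketch {
    // A "symbol" is either a terminal (String) or a sequence (List).
    @SuppressWarnings("unchecked")
    static List<Object> flatten(List<Object> production) {
        List<Object> out = new ArrayList<>();
        for (Object s : production) {
            if (s instanceof List) {
                out.addAll(flatten((List<Object>) s)); // splice sequence in
            } else {
                out.add(s);                            // keep terminal as-is
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object> seq = List.of("int", List.of("string", List.of("bytes")));
        System.out.println(flatten(seq)); // [int, string, bytes]
    }
}
```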
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/ValidatingGrammarGenerator.java b/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/ValidatingGrammarGenerator.java
deleted file mode 100644
index 1ba14c6..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/ValidatingGrammarGenerator.java
+++ /dev/null
@@ -1,137 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io.parsing;
-
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-
-/**
- * The class that generates a validating grammar.
- */
-public class ValidatingGrammarGenerator {
-  /**
-   * Returns the non-terminal that is the start symbol
-   * for the grammar for the given schema <tt>schema</tt>.
-   */
-  public Symbol generate(Schema schema) {
-    return Symbol.root(generate(schema, new HashMap<LitS, Symbol>()));
-  }
-
-  /**
-   * Returns the non-terminal that is the start symbol
-   * for the grammar for the given schema <tt>sc</tt>. If there is already an entry
-   * for the given schema in the given map <tt>seen</tt> then
-   * that entry is returned. Otherwise a new symbol is generated and
-   * an entry is inserted into the map.
-   * @param sc    The schema for which the start symbol is required
-   * @param seen  A map of schema to symbol mapping done so far.
-   * @return      The start symbol for the schema
-   */
-  public Symbol generate(Schema sc, Map<LitS, Symbol> seen) {
-    switch (sc.getType()) {
-    case NULL:
-      return Symbol.NULL;
-    case BOOLEAN:
-      return Symbol.BOOLEAN;
-    case INT:
-      return Symbol.INT;
-    case LONG:
-      return Symbol.LONG;
-    case FLOAT:
-      return Symbol.FLOAT;
-    case DOUBLE:
-      return Symbol.DOUBLE;
-    case STRING:
-      return Symbol.STRING;
-    case BYTES:
-      return Symbol.BYTES;
-    case FIXED:
-      return Symbol.seq(new Symbol.IntCheckAction(sc.getFixedSize()),
-          Symbol.FIXED);
-    case ENUM:
-      return Symbol.seq(new Symbol.IntCheckAction(sc.getEnumSymbols().size()),
-          Symbol.ENUM);
-    case ARRAY:
-      return Symbol.seq(Symbol.repeat(Symbol.ARRAY_END, generate(sc.getElementType(), seen)),
-          Symbol.ARRAY_START);
-    case MAP:
-      return Symbol.seq(Symbol.repeat(Symbol.MAP_END,
-              generate(sc.getValueType(), seen), Symbol.STRING),
-          Symbol.MAP_START);
-    case RECORD: {
-      LitS wsc = new LitS(sc);
-      Symbol rresult = seen.get(wsc);
-      if (rresult == null) {
-        Symbol[] production = new Symbol[sc.getFields().size()];
-
-        /**
-         * We construct a symbol without filling the array. Please see
-         * {@link Symbol#production} for the reason.
-         */
-        rresult = Symbol.seq(production);
-        seen.put(wsc, rresult);
-
-        int i = production.length;
-        for (Field f : sc.getFields()) {
-          production[--i] = generate(f.schema(), seen);
-        }
-      }
-      return rresult;
-    }
-    case UNION:
-      List<Schema> subs = sc.getTypes();
-      Symbol[] symbols = new Symbol[subs.size()];
-      String[] labels = new String[subs.size()];
-      
-      int i = 0;
-      for (Schema b : sc.getTypes()) {
-        symbols[i] = generate(b, seen);
-        labels[i] = b.getName();
-        i++;
-      }
-      return Symbol.seq(Symbol.alt(symbols, labels), Symbol.UNION);
-
-    default:
-      throw new RuntimeException("Unexpected schema type");
-    }
-  }
-
-  /** A wrapper around Schema that does "==" equality. */
-  static class LitS {
-    public final Schema actual;
-    public LitS(Schema actual) { this.actual = actual; }
-    
-    /**
-     * Two LitS are equal if and only if their underlying schema is
-     * the same (not merely equal).
-     */
-    public boolean equals(Object o) {
-      if (! (o instanceof LitS)) return false;
-      return actual == ((LitS)o).actual;
-    }
-    
-    public int hashCode() {
-      return actual.hashCode();
-    }
-  }
-}
-
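The <tt>LitS</tt> wrapper above gives a map keys that compare by reference, so two structurally equal but distinct schema objects stay distinct in the cache. A standalone sketch of the same pattern (hedged: this variant uses <code>System.identityHashCode</code>, whereas the real LitS delegates to the wrapped object's value-based <code>hashCode</code>):

```java
import java.util.HashMap;
import java.util.Map;

public class IdentityKeySketch {
    // Wrapper that compares by reference ("==") rather than by equals().
    static final class Ref<T> {
        final T value;
        Ref(T value) { this.value = value; }
        @Override public boolean equals(Object o) {
            return o instanceof Ref && ((Ref<?>) o).value == value;
        }
        @Override public int hashCode() {
            return System.identityHashCode(value);
        }
    }

    public static void main(String[] args) {
        String a = new String("schema");
        String b = new String("schema");  // equal value, different object
        Map<Ref<String>, Integer> cache = new HashMap<>();
        cache.put(new Ref<>(a), 1);
        cache.put(new Ref<>(b), 2);
        System.out.println(cache.size()); // 2: identity keys stay distinct
    }
}
```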
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/doc-files/parsing.html b/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/doc-files/parsing.html
deleted file mode 100644
index d8ba34d..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/doc-files/parsing.html
+++ /dev/null
@@ -1,615 +0,0 @@
-<html>

-<!--

-   Licensed to the Apache Software Foundation (ASF) under one or more

-   contributor license agreements.  See the NOTICE file distributed with

-   this work for additional information regarding copyright ownership.

-   The ASF licenses this file to You under the Apache License, Version 2.0

-   (the "License"); you may not use this file except in compliance with

-   the License.  You may obtain a copy of the License at

-

-       http://www.apache.org/licenses/LICENSE-2.0

-

-   Unless required by applicable law or agreed to in writing, software

-   distributed under the License is distributed on an "AS IS" BASIS,

-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

-   See the License for the specific language governing permissions and

-   limitations under the License.

--->

-<head>

-<title>Avro schemas as LL(1) CFG definitions</title>

-</head>

-<body>

-

-<center><h1>Avro schemas as LL(1) CFG definitions</h1></center>

-

-This document shows how an Avro schema can be interpreted as the definition of a context-free grammar in LL(1).  We use such an interpretation for two use-cases.  In one use-case, we use them to validate readers and writers of data against a single Avro schema.  Specifically, sequences of <code>Encoder.writeXyz</code> methods can be validated against a schema, and similarly sequences of <code>Decoder.readXyz</code> methods can be validated against a schema.

-

-The second use-case is using grammars to perform schema resolution.  For this use-case, we've developed a subclass of <code>Decoder</code> which takes two Avro schemas as input -- a reader and a writer schema.  This subclass accepts an input stream written according to the writer schema, and presents it to a client expecting the reader schema.  If the writer writes a long, for example, where the reader expects a double, then the <code>Decoder.readDouble</code> method will convert the writer's long into a double.

-

-This document looks at grammars in the context of these two use-cases.  We first look at the single-schema case, then the double-schema case.  In the future, we believe the interpretation of Avro schemas as CFGs will find other uses (for example, to determine whether or not a schema admits finite-sized values).

-

-

-<h1>The interpretation</h1>

-

-<p> We parse a schema into a set of JSON objects.  For each record, map, array, union schema inside this set, this parse is going to generate a unique identifier "n<sub>i</sub>" (the "pointer" to the schema).  By convention, n<sub>0</sub> is the identifier for the "top-level" schema (i.e., the schema we want to read or write).  In addition, where n<sub>i</sub> is a union, the parse will generate a unique identifier "b<sub>ij</sub>" for each branch of the union.

-

-<p> A context-free grammar (CFG) consists of a set of terminal-symbols, a set of non-terminal symbols, a set of productions, and a start symbol.  Here's how we interpret an Avro schema as a CFG:

-

-<p> <b>Terminal symbols:</b> The terminal symbols of the CFG consist of <code>null</code>, <code>bool</code>, <code>int</code>, <code>long</code>, <code>float</code>, <code>double</code>, <code>string</code>, <code>bytes</code>, <code>enum</code>, <code>fixed</code>, <code>arraystart</code>, <code>arrayend</code>, <code>mapstart</code>, <code>mapend</code>, and <code>union</code>.  In addition, we define the special terminals <code>"1"</code>, <code>"2"</code>, <code>"3"</code>, <code>...</code> which designate the "tag" of a union (i.e., which branch of the union is actually being written or was found in the data).

-

-<p> Below, we use the variable <i>P</i> to represent any one of <code>null</code>, <code>bool</code>, <code>int</code>, <code>long</code>, <code>float</code>, <code>double</code>, <code>string</code>, <code>bytes</code> (i.e., the "primitives").

-

-<p><b>Non-terminal symbols:</b> The non-terminal symbols of the CFG consist of the identifiers n<sub>i</sub>, u<sub>i</sub>, r<sub>i</sub>, e<sub>i</sub>, f<sub>i</sub> and r<sub>p</sub> (there is a non-terminal r<sub>p</sub> for each symbol in <i>P</i>).

-

-<p><b>Productions:</b> The productions of the CFG are as follows:

-

-<p><i>Records:</i> If n<sub>i</sub> is a record-schema, then it defines the following production:

-<br>&nbsp;&nbsp;&nbsp;n<sub>i</sub> ::= sym(f<sub>i1</sub>) sym(f<sub>i2</sub>) .. sym(f<sub>im</sub>)

-<br>where f<sub>ij</sub> is field "j" of record n<sub>i</sub>, and sym(f<sub>ij</sub>) is the appropriate member of <i>P</i> if f<sub>ij</sub> is a primitive type, or the appropriate n<sub>k</sub> for some k if f<sub>ij</sub> is a map, array, union, or record schema.

-

-<p><i>Arrays:</i> If n<sub>i</sub> is an array schema, then it defines the following productions:

-<br> &nbsp;&nbsp;&nbsp;n<sub>i</sub> ::= <code>arraystart</code> r<sub>i</sub> <code>arrayend</code>

-<br> &nbsp;&nbsp;&nbsp;r<sub>i</sub> ::= sym(n<sub>i</sub>) r<sub>i</sub> | &#949;

-<br> where "sym(n<sub>i</sub>)" is either some <i>P</i>, if this is an array of primitives, or the non-terminal associated with the schema of the element-type of n<sub>i</sub>.

-

-<p><i>Maps:</i> If n<sub>i</sub> is a map schema of element type <i>P</i>, then it defines the following production:

-<br>&nbsp;&nbsp;&nbsp;n<sub>i</sub> ::= <code>mapstart</code> r<sub>i</sub> <code>mapend</code>

-<br>&nbsp;&nbsp;&nbsp;r<sub>i</sub> ::= <code>string</code> sym(n<sub>i</sub>) r<sub>i</sub> | &#949;

-<br> where "sym(n<sub>i</sub>)" is either some <i>P</i>, if the value-type is a primitive, or the non-terminal associated with the schema of the value-type of n<sub>i</sub>.

-

-<p><i>Unions:</i> If n<sub>i</sub> is a union schema, then it defines the following productions:

-<br>&nbsp;&nbsp;&nbsp;n<sub>i</sub> ::= <code>union</code> u<sub>i</sub>

-<br>&nbsp;&nbsp;&nbsp;u<sub>i</sub> ::= 1 sym(b<sub>i1</sub>) | 2 sym(b<sub>i2</sub>) | ... | j sym(b<sub>ij</sub>)

-<br> where "1", "2", "3" are the tags for the union, b<sub>ij</sub> is branch "j" of union n<sub>i</sub>, and sym(b<sub>ij</sub>) is the appropriate member of <i>P</i> if b<sub>ij</sub> is a primitive type, or the appropriate n<sub>k</sub> if b<sub>ij</sub> is a map, array, union, or record schema.  (The introduction of the terminal symbol <code>union</code> plus the introduction of the additional non-terminal u<sub>i</sub> is a convenience to our parsing implementation.)

-

-<p><i>Enums:</i> If n<sub>i</sub> is an enum schema, then it defines the following productions:

-<br>&nbsp;&nbsp;&nbsp;n<sub>i</sub> ::= <code>enum</code> e<sub>i</sub>

-<br>&nbsp;&nbsp;&nbsp;e<sub>i</sub> ::= &#949;

-

-<br> Here there is no real production for e<sub>i</sub>. The symbol is used to associate some meta information such as the number of values in the enumeration.

-

-<p><i>Fixed:</i> If n<sub>i</sub> is a fixed binary schema, then it defines the following productions:

-<br>&nbsp;&nbsp;&nbsp;n<sub>i</sub> ::= <code>fixed</code> f<sub>i</sub>

-<br>&nbsp;&nbsp;&nbsp;f<sub>i</sub> ::= &#949;

-

-<br> Here there is no real production for f<sub>i</sub>. The symbol is used to associate some meta information such as the size of the fixed binary.

-

-<p><b>Start symbol:</b> the starting symbol of the grammar is n<sub>0</sub>.

-

-<p>

-The grammar defined by the above transformation is LL(1).  (Proof: The only alternatives in these grammars are for the u<sub>i</sub> ("union") symbols and the r<sub>i</sub> ("repeating") symbols.  For a union, the alternative productions correspond to the branches of the union.  Each alternative production for a union starts with a unique tag-terminal, so by looking at the very first terminal one can decide which of the productions to select.  In the case of the r<sub>i</sub>, there are two alternative productions, the second of which is &#949;.  Since these only appear inside <code>arraystart</code>/<code>arrayend</code> or <code>mapstart</code>/<code>mapend</code> pairs, the <code>arrayend</code> or <code>mapend</code> symbol predicts that &#949; should be selected, and any other terminal symbol predicts that the other production should be selected.)

-

-Here's an example.  Consider the schema:

-<pre>

-{

-  "type":"record", "name":"foo",

-  "fields":[

-    {"name":"bar","type":"double"},

-    {"name":"baz","type":{"type":"array", "items":"string"}},

-    {"name":"zip",

-     "type":{"type":"map",

-     "values":["null",{"type":"array", "items":"bytes"},"foo"]}}

-  ]

-}

-</pre>

-This schema generates the following grammar:

-<pre>

-  n0 ::= double n1 n2

-  r1 ::= string r1 | &#949;

-  n1 ::= arraystart r1 arrayend

-  r2 ::= string n3 r2 | &#949;

-  n2 ::= mapstart r2 mapend

-  u3 ::= 1 null | 2 n4 | 3 n0

-  n3 ::= union u3

-  r4 ::= bytes r4 | &#949;

-  n4 ::= arraystart r4 arrayend

-</pre>

-The symbol "n0" is the start-symbol for this grammar.
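As a hedged, self-contained illustration, the n1/r1 portion of this grammar (the array-of-strings field "baz") can be validated with the predict-and-expand loop that the rest of this document describes. Symbol names and token strings here are ad-hoc stand-ins, not the real Avro parser types.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class ArrayGrammarSketch {
    // Validates a token stream against:
    //   n1 ::= arraystart r1 arrayend
    //   r1 ::= string r1 | (empty)
    static boolean accepts(List<String> tokens) {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("n1");
        int i = 0;
        while (!stack.isEmpty()) {
            String top = stack.pop();
            String look = i < tokens.size() ? tokens.get(i) : null;
            if (top.equals("n1")) {            // single production: expand
                stack.push("arrayend");
                stack.push("r1");
                stack.push("arraystart");
            } else if (top.equals("r1")) {     // predict on the lookahead
                if ("string".equals(look)) {   // r1 ::= string r1
                    stack.push("r1");
                    stack.push("string");
                }                              // else r1 ::= (empty)
            } else {                           // terminal: must match input
                if (!top.equals(look)) return false;
                i++;
            }
        }
        return i == tokens.size();
    }

    public static void main(String[] args) {
        System.out.println(accepts(
            List.of("arraystart", "string", "string", "arrayend"))); // true
        System.out.println(accepts(
            List.of("arraystart", "string", "int", "arrayend")));    // false
    }
}
```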

-

-<H1>Reminder on LL(1) parsing</H1>

-

-While there's lots of material on the Web on table-driven LL(1) parsing, it all tends to overcomplicate things.  The best discussion I've found is in <i><a href=http://www.amazon.com/Crafting-Compiler-C-Charles-Fischer/dp/0805321667>Crafting a compiler</a></i>, by Fischer and LeBlanc (my copy is from 1988 -- I hope the quality hasn't slid since then).  Here's a quick summary.

-

-Parsing is the process of attempting to prove that a string can be derived from a grammar.  Top-down parsing attempts this proof in a top-down manner.  You start with the start symbol of the grammar and you ask yourself "Hey, given the input string, how can I derive this start symbol?"

-

-Now, in general, the start-symbol can be derived from one of a finite number of alternative productions:

-<br>&nbsp;&nbsp;&nbsp;S ::= A<sub>11</sub> A<sub>12</sub> .. A<sub>1n<sub>1</sub></sub> | A<sub>21</sub> .. A<sub>2n<sub>2</sub></sub> | ... | A<sub>m1</sub> .. A<sub>mn<sub>m</sub></sub>

-<br>So the question of deriving the symbol "S" comes down to asking "Hey, given the input I'm looking at, which of these productions for S could have produced that input?"  The key property of LL(1) grammars is that this question is easy to answer.  All you need to do is look at the first token in your input, and that token tells you which of these alternatives could've produced that input.  (This token is sometimes called the "lookahead symbol.")

-

-<p>So the idea is that you put your start symbol on the stack to initialize things.  You pop that symbol off the stack, ask which production for S could've produced the input you're looking at, push that production back on the stack, and repeat.  Let's fill in the details.

-

-<p>The parsing table for this parsing procedure is a function of two inputs and one output:

-<pre>

-   T: Non-terminal x Terminal --> Production

-</pre>

-Remember, a "production" is a sequence of symbols -- a mix of terminals and non-terminals -- that derive a non-terminal in the grammar.

-

-<p>This function <code>T</code> takes a non-terminal and a terminal, and returns a production for the non-terminal.  The non-terminal is the symbol you're trying to derive (from the top of the parsing stack); the terminal is the current symbol in your input stream (the lookahead symbol).  If <code>X</code> is the non-terminal and <code>a</code> is the lookahead symbol, then the output is the unique production for <code>X</code> that can produce the input symbol <code>a</code>.  (This function can also return the special result "Error" to indicate there is no such production, i.e., we can't parse the input.)

-

-<p>If you have such a table, then your parsing code looks like this:

-<pre>

-parse(Table T, TokenStream in):

-  Stack stack = new Stack(Table.startSymbol);

-  for (Token t = in.next(); t != EOF; t = in.next())

-    advance(stack, T, t);

-

-advance(Stack stack, Table T, Token t):

-  X = stack.pop();

-  while (! isTerminal(X)):

-    if T(X,t) yields production Y<sub>1</sub> Y<sub>2</sub> ... Y<sub>n</sub>:

-      // push production in reverse order, so we leave looking for

-      // the first symbol of the production

-      stack.push(Y<sub>n</sub>);

-      ...;

-      stack.push(Y<sub>2</sub>);

-      stack.push(Y<sub>1</sub>);

-    else, T(X,t) is undefined, so throw an error;

-    X = stack.pop(); // Repeat until we find a terminal

-

-  if X == t then return

-  else throw an error;

-</pre>
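The pseudocode above can be made concrete.  Here is a small runnable Python version, specialized to a tiny grammar (n0 ::= arraystart r arrayend; r ::= int r | &#949;); the function and symbol names are ours, chosen for illustration:

```python
TERMINALS = {"arraystart", "arrayend", "int"}

def table(X, t):
    """The LL(1) table T(X, t): return the production predicted by lookahead t."""
    if X == "n0" and t == "arraystart":
        return ["arraystart", "r", "arrayend"]
    if X == "r" and t == "int":
        return ["int", "r"]
    if X == "r" and t == "arrayend":
        return []                              # the epsilon production
    raise SyntaxError("no production for %s with lookahead %s" % (X, t))

def advance(stack, t):
    X = stack.pop()
    while X not in TERMINALS:
        # Push the production in reverse order, so we leave looking
        # for its first symbol.
        for sym in reversed(table(X, t)):
            stack.append(sym)
        X = stack.pop()
    if X != t:
        raise SyntaxError("expected %s, got %s" % (X, t))

def parse(tokens, start="n0"):
    stack = [start]
    for t in tokens:
        advance(stack, t)
    return not stack                           # accepted iff stack is empty
```

For example, `parse(["arraystart", "int", "int", "arrayend"])` succeeds, while a `double` token in the same position raises an error.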

-

-

-

-<h1>Parsing tables for Avro</h1>

-

-Traditionally, the parsing table for an LL(1) grammar is defined as follows:

-<pre>

-  T(A,y) = A ::= X<sub>1</sub> ... X<sub>n</sub>  -- if y is in Predict(A ::= X<sub>1</sub> ... X<sub>n</sub>)

-  T(A,y) = Error              -- otherwise

-</pre>

-where <code>Predict(A ::= X<sub>1</sub> ... X<sub>n</sub>)</code> returns the unique first symbol that predicts this particular production for <code>A</code>.

-

-<p>But in our case, almost all productions have a single alternative.  If a non-terminal symbol <code>A</code> is on the top of the stack, then we don't even have to look at the input to figure out which production could derive <code>A</code> because there's only one such production!  Thus, we can define a special parsing table for Avro-induced grammars as follows:

-<pre>

-  T(A,y) = A ::= sym(f<sub>i1</sub>) sym(f<sub>i2</sub>) .. sym(f<sub>im</sub>) -- if A is a record schema

-  T(A,y) = A ::= <code>arraystart</code> r<sub>i</sub> <code>arrayend</code>       -- if A is the non-terminal for an array schema

-  T(A,y) = A ::= <code>mapstart</code> r<sub>i</sub> <code>mapend</code>           -- if A is the non-terminal for a map schema

-  T(A,y) = A ::= <code>union</code> u<sub>i</sub>                     -- if A is a union schema

-  T(A,y) = A ::= y sym(b<sub>ij</sub>)                   -- if A is a u<sub>i</sub> non-terminal (note the "y" inside this production)

-  T(A,y) = Error                              -- if A is a u<sub>i</sub> non-terminal and "y" isn't

-                                              --   the tag of any of the branches of the corresponding union

-  T(A,y) = A ::= n<sub>i</sub> r<sub>i</sub>    -- if A is r<sub>i</sub> and y is neither <code>arrayend</code> nor <code>mapend</code>

-  T(A,y) = A ::= &#949;  -- if A is r<sub>i</sub> and y is either <code>arrayend</code> or <code>mapend</code>

-</pre>

-Note that only the last three rules for <code>T(A,y)</code> consider the lookahead symbol (i.e., only the last three rules actually look at the value of <code>y</code>).  These are the rules for dealing with productions that have alternatives, i.e., the rules for unions (where there is an alternative for each branch) and the rules for repeaters (where there is one alternative for the "repeat" case and another alternative for the "end" case).

-

-<p>The nice thing about this alternative formulation of the parsing table is that we don't actually have to compute the predict sets -- computing them is not conceptually hard, but it would be a pile of code to write, test, and maintain.

-

-<p>It should be noted that the resulting parsing table catches errors in different states than the traditional LL(1) parsing table does.  For example, let's say our schema is simply an array of ints, which induces the following grammar:

-<pre>

-  n<sub>0</sub> ::= <code>arraystart</code> r<sub>int</sub> <code>arrayend</code>

-  r<sub>int</sub> ::= int r<sub>int</sub> | &#949;

-</pre>

-The traditional LL(1) table would be:

-<pre>

-  T(n<sub>0</sub>,<code>arraystart</code>) = n<sub>0</sub> ::= <code>arraystart</code> r<sub>int</sub> <code>arrayend</code>

-  T(r<sub>int</sub>,int) = r<sub>int</sub> ::= int r<sub>int</sub>

-  T(r<sub>int</sub>,<code>arrayend</code>) = &#949;

-  T(A,y) = Error -- if (A,y) is none of the above

-</pre>

-while our parser table would be:

-<pre>

-  T'(n<sub>0</sub>,y) = n<sub>0</sub> ::= <code>arraystart</code> r<sub>int</sub> <code>arrayend</code> -- for all y

-  T'(r<sub>int</sub>,y) = r<sub>int</sub> ::= int r<sub>int</sub>             -- for all y other than <code>arrayend</code>

-  T'(r<sub>int</sub>,<code>arrayend</code>) = &#949;

-</pre>

-Note that <code>T</code> is defined as <code>Error</code> for a lot of <code>(A,y)</code> pairs, but <code>T'</code> is defined as <code>Error</code> for <i>none</i> of them.  How can this be?

-

-<p>The difference is that <code>T</code> catches many errors when terminals fail to appear in Predict sets, while <code>T'</code> catches those errors when terminals fail to match the corresponding terminals on the parser stack.  For example, let's say <code>r<sub>int</sub></code> is on the top of the parser stack, and the symbol <code>double</code> arrives (which means, in practice, that a <code>writeDouble</code> call is encountered).  In this case, a parser with the standard table will catch the error right away, because <code>double</code> is not in the predict-set for <code>r<sub>int</sub></code>.  A parser with our alternative table will first replace the <code>r<sub>int</sub></code> on the stack with the sequence <code>int&nbsp;r<sub>int</sub></code> (with <code>int</code> on the top of the stack).  It will <em>then</em> throw an error, because the input symbol <code>double</code> does not match the terminal <code>int</code> that's now on the top of the stack.

-

-<p>However, we believe that our modified parser will accept exactly the same set of strings as the standard parser.
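This equivalence can be checked mechanically for the array-of-ints example.  The sketch below (our code, not Avro's) encodes both tables and verifies that they accept and reject the same token strings, merely failing from different states:

```python
TERMINALS = {"arraystart", "arrayend", "int"}

def T(X, t):                               # traditional LL(1) table
    if X == "n0" and t == "arraystart":
        return ["arraystart", "r", "arrayend"]
    if X == "r" and t == "int":
        return ["int", "r"]
    if X == "r" and t == "arrayend":
        return []
    raise SyntaxError("predict-set error at %s/%s" % (X, t))

def T_prime(X, t):                         # Avro-style table: only the
    if X == "n0":                          # repeater looks at the input
        return ["arraystart", "r", "arrayend"]
    if X == "r":
        return [] if t == "arrayend" else ["int", "r"]
    raise SyntaxError("unknown non-terminal %s" % X)

def accepts(table, tokens):
    stack = ["n0"]
    try:
        for t in tokens:
            X = stack.pop()
            while X not in TERMINALS:
                for sym in reversed(table(X, t)):
                    stack.append(sym)
                X = stack.pop()
            if X != t:                     # T' catches errors here instead
                raise SyntaxError("expected %s, got %s" % (X, t))
        return not stack
    except SyntaxError:
        return False

for tokens in (["arraystart", "arrayend"],
               ["arraystart", "int", "arrayend"],
               ["arraystart", "double", "arrayend"],
               ["int"]):
    assert accepts(T, tokens) == accepts(T_prime, tokens)
```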

-

-

-<h1>Induction rules</h1>

-

-<p>The first section ("The interpretation") informally describes the grammar generated by an Avro schema.  This section provides a more formal description using a set of induction rules.  The earlier description is fine for explaining how a single Avro schema generates a grammar, but soon we're going to describe how two schemas together define a "resolving" grammar, and for that we'll need the more formal mechanism described here.

-

-<p>The terminal and non-terminal symbols in our grammar are as described in the first section.  Our induction rules will define a function "C(S)=&lt;G,a&gt;", which takes an Avro schema "S" and returns a pair consisting of a set of productions "G" and a symbol "a".  This symbol "a" -- which is either a terminal, or a non-terminal defined by G -- generates the values described by schema S.

-

-<p>The first rule applies to all Avro primitive types:

-

-<table align=center>

-  <tr align=center><td><i>p</i> in {<code>null</code>, <code>boolean</code>, <code>int</code>, <code>long</code>, <code>double</code>, <code>string</code>, <code>bytes</code>}</td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(<i>p</i>)=&lt;{}, <i>p</i>&gt;</td></tr>

-</table>

-

-<p>This first rule generates no productions; it simply returns the terminal symbol corresponding to the schema's primitive type.

-

-<p>The next rule applies to record schemas:

-

-<table align=center>

-  <tr><td align=center>

-  <table cellspacing=0 cellpadding=0><tr><td>S=</td><td><code>{"type":"record", "name":</code>a<code>,</code></td></tr>

-         <tr><td></td><td><code>"fields":[{"name":</code>F<sub>1</sub><code>, "type":</code>S<sub>1</sub><code>},</code>...<code>, {"name":</code>F<sub>n</sub><code>, "type":</code>S<sub>n</sub><code>}]}</code></td></tr></table></td></tr>

-  <tr align=center><td>C(S<sub>j</sub>)=&lt;G<sub>j</sub>, f<sub>j</sub>&gt;</td></tr>

-  <tr align=center><td><hr></td></tr>

-  <tr align=center><td>C(S)=&lt;G<sub>1</sub> &#8746; ... &#8746; G<sub>n</sub> &#8746; {a::=f<sub>1</sub> f<sub>2</sub> ... f<sub>n</sub>}, a&gt;</td></tr>


-</table>

-

-<p>In this case, the set of output productions consists of all the productions generated by the field-types of the record, plus a production that defines the non-terminal "a" to be the sequence of field-types.  We return "a" as the grammar symbol representing this record-schema.

-

-<p>Next, we define the rule for arrays:

-

-<table align=center>

-  <tr align=center><td>S=<code>{"type":"array", "items":S<sub>e</sub>}</code></td></tr>

-  <tr align=center><td>C(S<sub>e</sub>)=&lt;G<sub>e</sub>,e&gt;</td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(S)=&lt;G<sub>e</sub> &#8746; {r ::= e r, r ::= &#949;, a ::= <code>arraystart</code> r <code>arrayend</code>}, a&gt;</td></tr>

-</table>

-

-<p>For arrays, the set of output productions again contains all productions generated by the element-type.  In addition, we define <em>two</em> productions for "r", which represents the repetition of this element-type.  The first production is the recursive case, which consists of the element-type followed by "r" all over again.  The second is the base case, which is the empty production.  Having defined this repetition, we can then define "a" as this repetition bracketed by the terminal symbols <code>arraystart</code> and <code>arrayend</code>.

-

-<p>The rule for maps is almost identical to that for arrays:

-

-<table align=center>

-  <tr align=center><td>S=<code>{"type":"map", "values":S<sub>e</sub>}</code></td></tr>

-  <tr align=center><td>C(S<sub>e</sub>)=&lt;G<sub>e</sub>,e&gt;</td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(S)=&lt;G<sub>e</sub> &#8746; {r ::= <code>string</code> e r, r ::= &#949;, a ::= <code>mapstart</code> r <code>mapend</code>}, a&gt;</td></tr>

-</table>

-

-<p>The only difference from arrays is that map-elements consist of a <code>string</code> key together with an element-type (vs. just an element-type).

-

-<p>The rule for unions:

-<table align=center>

-<tr align=center>

- <td>S=[S<sub>1</sub>, S<sub>2</sub><code>, ..., S<sub>n</sub>]</td>

-</tr>

-<tr align=center>

- <td>C(S<sub>j</sub>)=&lt;G<sub>j</sub>, b<sub>j</sub>&gt;</td>

-</tr>

-<tr align=center><td><hr></td></tr>

-<tr align=center><td>C(S)=&lt;G<sub>1</sub> &#8746; ... &#8746; G<sub>n</sub> &#8746; {u::=1 b<sub>1</sub>, u::=2 b<sub>2</sub>, ..., u::=n b<sub>n</sub>, a::=<code>union</code> u}, a&gt;</td></tr>

-</table>

-

-<p>In this rule, we again accumulate the productions (G<sub>j</sub>) generated by each of the sub-schemas contained by the top-level schema.  If there are "n" branches, we define "n" different productions for the non-terminal symbol "u", one for each branch in the union.  These per-branch productions consist of the index of the branch (1 for the first branch, 2 for the second, and so forth), followed by the symbol representing the schema of that branch.  With these productions for "u" defined, we can define "a" as simply the terminal-symbol <code>union</code> followed by the non-terminal "u".

-

-

-<p>The rule for fixed size binaries:

-<table align=center>

-<tr align=center>

- <td>S=<code>{"type":"fixed", "name":a, "size":s}</code></td>

-</tr>

-<tr align=center><td><hr></td></tr>

-<tr align=center><td>C(S)=&lt;{a::=<code>fixed</code> f, f::=&#949;}, a&gt;</td></tr>

-</table>

-

-<p>In this rule, we define a new non-terminal f which has the size of the fixed binary associated with it.

-

-<p>The rule for enums:

-<table align=center>

-<tr align=center>

- <td>S=<code>{"type":"enum", "name":a, "symbols":["s1", "s2", "s3", ...]}</code></td>

-</tr>

-<tr align=center><td><hr></td></tr>

-<tr align=center><td>C(S)=&lt;{a::=<code>enum</code> e, e::=&#949;}, a&gt;</td></tr>

-</table>

-

-<p>In this rule, we define a new non-terminal e which has the range of enum values associated with it.

-

-<h1>Resolution using action symbols</h1>

-

-We want to use grammars to represent Avro's rules for schema resolution.  To do this, we need a way to encode certain actions that the parser should perform as part of the resolution.  In particular:

-

-<ul>

-<li> <b>Resolver action:</b> when the writer writes a primitive type that can be promoted into the reader's type, we use a "resolver action" to aid in this process.  This is used for only a limited number of cases: int->long, int->double, long->double, and double->long.

-

-<p> <li> <b>Skip action:</b> when the writer's schema for a record contains fields that are not in the reader's schema, we need to skip them.  "Skip actions" are used for this purpose.

-

-<p> <li> <b>Field action:</b> the fields of a record can appear in different orders in the reader's and writer's schemas.  In the API we're designing, to support streaming, fields will be returned to the reader in the order generated by the writer; we need to help the reader map this back to its own field-order.  Field actions support this requirement.

-

-<p> <li> <b>Reader union actions:</b> the reader's schema can have a union where the writer's schema did not.  For example, the writer's schema might call for simply a long, while the reader's schema calls for a union that contains a long among other things.  The reader should experience the writer's long values as if they came from a union.  Reader union actions support this requirement.

-

-<p> <li> <b>Writer union actions</b> are the dual of the previous case: the writer may write a union where the reader expects just one branch of the union.  Writer union actions help bridge such situations.

-

-<p> <li> <b>Enum actions:</b> when both the reader's and writer's schemas are enumerations, enum actions are used to map the writer's numeric value to the reader's numeric value.

-

-<p> <li> <b>Error actions:</b> in general, errors in schema resolution can only be detected when data is being read.  For example, if the writer writes a <code>[long,&nbsp;string]</code> union, and the reader is expecting just a <code>long</code>, an error is only reported when the writer sends a string rather than a long.  Further, the Avro spec recommends that <em>all</em> errors be detected at read-time, even if they could be detected earlier.  Error actions support the deferral of errors.

-</ul>

-

-<p>These actions will become "action symbols" in our grammar.  Action symbols are symbols that cause our parser to perform special activities when they appear on the top of the parsing stack.  For example, when the skip-action makes it to the top of the stack, the parser will automatically skip the next value in the input stream.  (Again, Fischer and LeBlanc have a nice description of action symbols.)

-

-<p>We're going to use induction rules to define a grammar.  This time, our induction rules will define a two-argument function "C(W,R)=&lt;G,a&gt;", which takes two schemas, the writer's and the reader's respectively.  The results of this function are the same as they were for the single-schema case.

-

-<p>The first rule applies to all Avro primitive types:

-

-<table align=center>

-  <tr align=center><td><i>p</i> in {<code>null</code>, <code>boolean</code>, <code>int</code>, <code>long</code>, <code>double</code>, <code>string</code>, <code>bytes</code>}</td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(<i>p</i>,<i>p</i>)=&lt;{}, <i>p</i>&gt;</td></tr>

-</table>

-

-<p> In this case, the writer and reader schemas agree, so the resulting grammar should just expect the agreed-upon primitive type.

-

-<p>The next rule deals with resolution of primitive types:

-

-<table align=center>

-  <tr align=center><td>w in {<code>int</code>, <code>long</code>, <code>double</code>}</td></tr>

-  <tr align=center><td>r in {<code>long</code>, <code>double</code>}</td></tr>

-  <tr align=center><td>w != r</td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(w,r)=&lt;{}, ResolverAction(w,r)&gt;</td></tr>

-</table>

-

-<p> When this parameterized action is encountered, the parser will resolve the writer's value into the reader's expected-type for that value.  In the parsing loop, when we encounter this symbol, we use the "r" parameter of this symbol to check that the reader is asking for the right type of value, and we use the "w" parameter to figure out how to parse the data in the input stream.
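At run time, a resolver action thus amounts to: decode using the writer's type, then convert to the reader's type.  A minimal sketch, with hypothetical names, covering only the int->long, int->double, and long->double promotions:

```python
# Map of (writer type, reader type) -> conversion to apply after decoding.
PROMOTIONS = {("int", "long"): int,
              ("int", "double"): float,
              ("long", "double"): float}

def resolve(writer_type, reader_type, raw_value):
    """Convert a value decoded as writer_type into reader_type."""
    if writer_type == reader_type:
        return raw_value
    try:
        return PROMOTIONS[(writer_type, reader_type)](raw_value)
    except KeyError:
        raise TypeError("cannot resolve %s -> %s" % (writer_type, reader_type))
```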

-

-<p>One final possibility for primitive types is incompatible types:

-

-<table align=center>

-  <tr align=center><td>The w,r pair does not fit the previous two rules, AND neither</td></tr>

-  <tr align=center><td>of the pair is a union, AND the pair aren't both compounds</td></tr>

-  <tr align=center><td>of the same type (i.e., two arrays, two records, or two maps)</td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(w,r)=&lt;{}, ErrorAction&gt;</td></tr>

-</table>

-

-<p> When this error action is encountered, the parser will throw an error.  Keep in mind that this symbol might be generated in the middle of a recursive call to "C".  For example, if the reader's schema is long, and the writer's is [long,&nbsp;string], we'll generate an error symbol for the string-branch of the union; if this branch occurs in actual input, an error will then be generated.

-

-<p>The next rule deals with resolution of fixed-size binaries:

-

-<table align=center>

-  <tr align=center><td>w = {"type":"fixed", "name":"n1", "size":s1}</td></tr>

-  <tr align=center><td>r = {"type":"fixed", "name":"n2", "size":s2}</td></tr>

-  <tr align=center><td>n1 != n2 or s1 != s2</td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(w,r)=&lt;{}, ErrorAction&gt;</td></tr>

-</table>

-

-<table align=center>

-  <tr align=center><td>w = {"type":"fixed", "name":"n1", "size":s1}</td></tr>

-  <tr align=center><td>r = {"type":"fixed", "name":"n2", "size":s2}</td></tr>

-  <tr align=center><td>n1 == n2 and s1 == s2</td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(w,r)=&lt;{ a::=<code>fixed</code> f, f::=&#949;}, a&gt;</td></tr>

-</table>

-

-If the names are identical and the sizes are identical, then they match; otherwise, an error is generated.

-

-<p>The next rule deals with resolution of enums:

-

-<table align=center>

-  <tr align=center><td>w = {"type":"enum", "symbols":[sw<sub>1</sub>, sw<sub>2</sub>, ..., sw<sub>m</sub>] }</td></tr>

-  <tr align=center><td>r = {"type":"enum", "symbols":[sr<sub>1</sub>, sr<sub>2</sub>, ..., sr<sub>n</sub>] }</td></tr>

-  <tr align=center><td>f<sub>i</sub> = EnumAction(i, j) if sw<sub>i</sub> == sr<sub>j</sub></td></tr>

-  <tr align=center><td>f<sub>i</sub> = ErrorAction if sw<sub>i</sub> does not match any sr<sub>j</sub></td></tr>

-  <tr><td><hr></td></tr>

-  <tr align=center><td>C(w,r)=&lt;{ a::=<code>enum</code> e, e::=&#949;}, a&gt;</td></tr>

-</table>

-

-The symbol e has the set of actions f<sub>i</sub> associated with it. It chooses the right action based on the runtime data.
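The set of f<sub>i</sub> actions can be computed from the two symbol lists as follows (an illustrative sketch; the tuple-based action representation is ours, not Avro's implementation):

```python
def enum_actions(writer_symbols, reader_symbols):
    """For each writer symbol i, produce an EnumAction(i, j) mapping the
    writer's index to the reader's, or an ErrorAction if no match exists."""
    reader_index = {s: j for j, s in enumerate(reader_symbols)}
    actions = []
    for i, s in enumerate(writer_symbols):
        if s in reader_index:
            actions.append(("EnumAction", i, reader_index[s]))
        else:
            actions.append(("ErrorAction", i, None))
    return actions
```

Note that a writer symbol missing from the reader's list only causes an error if that symbol actually appears in the data, matching the deferred-error policy described above.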

-

-<p>Now that we have rules for primitive types, we can define rules for compound types.  First, let's look at records:

-

-<table align=center>

-<tr>

- <td align=center>

-  <table cellspacing=0 cellpadding=0>

-  <tr>

-   <td>W=</td>

-   <td><code>{"type":"record","name":</code>w<code>,</code></td>

-  </tr>

-  <tr>

-    <td></td>

-    <td><code>"fields":[{"name":</code>E<sub>1</sub><code>,

-                         "type":</code>S<sub>1</sub><code>},</code>...<code>,

-                         {"name":</code>E<sub>n</sub><code>,

-                         "type":</code>S<sub>n</sub><code>}]}</code></td>

-  </tr>

-  </table>

- </td>

-</tr>

-<tr>

- <td align=center>

-  <table cellspacing=0 cellpadding=0>

-  <tr>

-   <td>R=</td>

-   <td><code>{"type":"record", "name":</code>r<code>,</code></td>

-  </tr>

-  <tr>

-   <td></td>

-   <td><code>"fields":[{"name":</code>F<sub>1</sub><code>,

-                        "type":</code>T<sub>1</sub><code>},</code>...<code>,

-                       {"name":</code>F<sub>m</sub><code>,

-                        "type":</code>T<sub>m</sub><code>}]}</code>

-   </td>

-  </tr>

-  </table>

- </td>

-</tr>

-<tr align=center>

- <td>{F<sub>1</sub>, ..., F<sub>m</sub>} is a

-     subset of {E<sub>1</sub>, ..., E<sub>n</sub>}</td>

-</tr>

-<tr>

- <td align=center>

-  C(S<sub>j</sub>, T<sub>i</sub>)

-   = &lt;G<sub>j</sub>, f<sub>j</sub>&gt;

-   -- for all E<sub>j</sub>=F<sub>i</sub></td>

-</tr>

-<tr>

- <td align=center>

-  C(S<sub>j</sub>)

-   = &lt;G<sub>j</sub>, f<sub>j</sub>&gt;

-   -- for all E<sub>j</sub> not in {F<sub>1</sub>, ..., F<sub>m</sub>}</td>

-</tr>

-<tr>

- <td align=center>

-  <table>

-  <tr>

-   <td rowspan=2 valign=middle>f'<sub>j</sub>=</td>

-   <td><sub>/ </sub>FieldAction(i, E<sub>i</sub>) f<sub>j</sub>

-        &nbsp;&nbsp;-- if E<sub>j</sub>=F<sub>i</sub></td>

-  </tr>

-  <tr>

-   <td><sup>\ </sup>SkipAction(f<sub>j</sub>)

-    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;-- if E<sub>j</sub> not in {F<sub>1</sub>, ..., F<sub>m</sub>}</td>

-  </tr>

-  </table>

- </td>

-</tr>

-<tr align=center><td><hr></td></tr>

-<tr align=center><td>C(W,R)=&lt;G<sub>1</sub> &#8746; G<sub>2</sub> &#8746; ... &#8746; G<sub>n</sub> &#8746; { w::=f'<sub>1</sub> f'<sub>2</sub> ... f'<sub>n</sub> }, w&gt;</td></tr>

-</table>

-

-<p>The substance of this rule lies in the definition of the "f'<sub>j</sub>".  If the writer's field E<sub>j</sub> is not a member of the reader's schema, then a skip-action is generated, which will cause the parser to automatically skip over the field without the reader knowing.  (In this case, note that we use the <em>single</em>-argument version of "C", i.e., the version defined in the previous section!)

-

-If the writer's field E<sub>j</sub> <em>is</em> a member of the reader's schema, then "f'<sub>j</sub>" is a two-symbol sequence: the first symbol is a (parameterized) field-action, which is used to tell the reader which of its own fields is coming next, followed by the symbol for parsing the value written by the writer.

-

-<p>The above rule for records works only when the reader's and writer's records have the same name, and the reader's fields are a subset of the writer's.  In other cases, an error is produced.
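The per-field choice between a field action and a skip action can be sketched as follows (hypothetical names; fields are represented as (name, schema) pairs):

```python
def record_actions(writer_fields, reader_fields):
    """Walk the writer's fields in writer order: emit a FieldAction carrying
    the reader's field index when the reader has the field, and a SkipAction
    carrying the field's schema when it does not."""
    reader_index = {name: i for i, (name, _) in enumerate(reader_fields)}
    plan = []
    for name, schema in writer_fields:
        if name in reader_index:
            plan.append(("FieldAction", reader_index[name], name))
        else:
            plan.append(("SkipAction", schema))
    return plan

# The reader sees fields in writer order; the FieldAction index tells it
# which of its own fields each value belongs to.
writer = [("a", "int"), ("b", "string"), ("c", "double")]
reader = [("c", "double"), ("a", "int")]       # reordered, missing "b"
plan = record_actions(writer, reader)
```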

-

-<p> The rule for arrays is straightforward:

-

-<table align=center>

-<tr align=center>

- <td>W=<code>{"type":"array", "items":S<sub>w</sub>}</code></td>

-</tr>

-<tr align=center>

- <td>R=<code>{"type":"array", "items":S<sub>r</sub>}</code></td>

-</tr>

-<tr align=center>

- <td>C(S<sub>w</sub>, S<sub>r</sub>)=&lt;G<sub>e</sub>,e&gt;</td>

-</tr>

-<tr><td><hr></td></tr>

-<tr align=center><td>C(W,R)=&lt;G<sub>e</sub> &#8746; {r ::= e r, r ::= &#949;, a ::= <code>arraystart</code> r <code>arrayend</code>}, a&gt;</td></tr>

-</table>

-

-<p>Here the rule is largely the same as for the single-schema case, although the recursive use of G may result in productions that are very different.  The rule for maps changes in a similarly-small way, so we don't bother to detail that case in this document.

-

-<p>The final rules are for unions.  Let's first look at the case where the writer is a union but the reader is not:

-

-<table align=center>

-<tr align=center>

- <td>W=[S<sub>1</sub>, ..., S<sub>n</sub>]</td>

-</tr>

-<tr align=center>

- <td>R is not a union schema</td>

-</tr>

-<tr align=center>

- <td>C(S<sub>j</sub>,R)=&lt;G<sub>j</sub>, b<sub>j</sub>&gt;</td>

-</tr>

-<tr><td><hr></td></tr>

-<tr align=center><td>C(W,R)=&lt;G<sub>1</sub> &#8746; G<sub>2</sub> &#8746; ... &#8746; G<sub>n</sub> &#8746; {a::=WriterUnionAction(b<sub>1</sub>, b<sub>2</sub>, ..., b<sub>n</sub>)}, a&gt;</td></tr>

-</table>

-

-<p>Here, a writer-union action is generated that looks much like a union did in the single-schema case.  However, unlike in that case, the writer-union action will cause the parser to automatically interpret the writer's union value.

-

-<p> Now let's look when the reader expects a union.  The first of these cases is an error case:

-

-<table align=center>

-<tr align=center>

- <td>W is not a union schema</td>

-</tr>

-<tr align=center>

- <td>R=[R<sub>1</sub>, ..., R<sub>n</sub>]</td>

-</tr>

-<tr><td>W does not resolve to any of the branches of R</td></tr>

-<tr><td><hr></td></tr>

-<tr><td align=center>C(W,R)=&lt;{}, ErrorAction&gt;</td></tr>

-</table>

-

-<p>In this case, there's no way to resolve the two schemas, so we generate an error action to remind us of this fact at run-time.  (Again, this error action might be under a branch of a containing union, and thus might never be triggered at run-time, so it wouldn't be correct to signal an error at "compile" time.)

-

-<p> Here's the non-error case:

-

-<table align=center>

-<tr align=center>

- <td>W is not a union schema</td>

-</tr>

-<tr align=center>

- <td>R=[R<sub>1</sub>, ..., R<sub>n</sub>]</td>

-</tr>

-<tr><td align=center>Branch "j" of R is the best match for W</td></tr>

-<tr><td align=center>C(W,R<sub>j</sub>)=&lt;&nbsp;G,w&gt;</td></tr>

-<tr><td><hr></td></tr>

-<tr><td align=center>C(W,R)=&lt;G, ReaderUnionAction(j,w)&gt;</td></tr>

-</table>

-

-<p> In this case, we can decide at "compile time" which of the branches of the reader's union will be the best match for the value that's going to be written by the writer.  We then generate a reader union action, which tells the parser, first, which branch number of the reader's union to report to the reader, and second, which symbol to use to parse the writer's actual value.
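A sketch of the best-match selection (the real resolution rules are richer; this toy version only knows exact matches and numeric promotions, and the names are ours):

```python
# Reader types each writer type may promote to, per the resolver-action rule.
PROMOTABLE = {"int": {"int", "long", "double"},
              "long": {"long", "double"}}

def best_branch(writer_type, reader_branches):
    """Return (index, branch) of the first compatible reader branch,
    or None, which would become an ErrorAction."""
    for j, branch in enumerate(reader_branches):
        if branch == writer_type or branch in PROMOTABLE.get(writer_type, ()):
            return j, branch
    return None
```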

-

-<p> The interesting case is when the writer's and reader's schemas are both unions:

-

-<table align=center>

-<tr align=center>

- <td>W=[W<sub>1</sub>, ..., W<sub>n</sub>]</td>

-</tr>

-<tr align=center>

- <td>R=[R<sub>1</sub>, ..., R<sub>m</sub>]</td>

-</tr>

-<tr align=center>

- <td>C(W<sub>j</sub>, R)=&lt;G<sub>j</sub>, b<sub>j</sub>&gt;</td>

-</tr>

-<tr><td><hr></td></tr>

-<tr align=center><td>C(W,R)=&lt;G<sub>1</sub> &#8746; ... &#8746; G<sub>n</sub> &#8746; {u::=1 b<sub>1</sub>, u::=2 b<sub>2</sub>, ..., u::=n b<sub>n</sub>, a::=<code>union</code> u}, a&gt;</td></tr>

-</table>

-

-<p> Note that in the inductive case ("C(W<sub>j</sub>, R)"), each <i>branch</i> of the writer ("W<sub>j</sub>") is compared to the <em>entire union</em> of the reader ("R").  Thus, one of the two previous cases (the error case or the reader-union case) gets generated for each branch of the writer's union.

-

-

-<h1>Resolving parser</h1>

-

-Here's a stylized version of the actual parsing code, with comments, to illustrate how a resolving-grammar is actually used.  To better understand this code, compare it to the simple code for "advance" given earlier in this document.

-

-<pre>

-  Symbol advance(Stack stack, Table T, Symbol t, TokenStream in):

-    Symbol X = stack.pop();

-    while (! isTerminal(X)):

-      case X:

-        FieldAction:

-          // In this case, the main parsing loop can "ask" for the

-          // field information by passing a FieldAction symbol as

-          // "t".  If it does, it'll get the (parameterized) symbol

-          // from the parsing table.  If it doesn't ask for this

-          // information, then the information will be ignored.

-          if (isFieldAction(t)) return X;

-

-        SkipAction(productionToSkip):

-          // In this case we automatically skip the production we've

-          // been asked to skip

-          in.skip(productionToSkip);

-

-        WriterUnionAction(b_1, b_2, ..., b_n):

-          // In this case, we read from the token input-stream to

-          // determine the actual branch written by the writer.

-          // We then push this branch on the parsing stack, to tell

-          // the parser what type of value to look for

-          int i = in.readIndex();

-          stack.push(b_i);

-

-        NonTerminal:

-          if T(X,t) yields production Y<sub>1</sub> Y<sub>2</sub> ... Y<sub>n</sub>:

-            // push production in reverse order, so we leave looking for

-            // the first symbol of the production

-            stack.push(Y<sub>n</sub>);

-            ...;

-            stack.push(Y<sub>2</sub>);

-            stack.push(Y<sub>1</sub>);

-          else, T(X,t) is undefined, so throw an error;

-

-      X = stack.pop();

-    }

-    // We've left the loop, so X is a terminal symbol:

-    case X:

-      ResolverAction(w,r):

-        // If reader is looking for an "r", then the reader's

-        // looking for the right thing according to the reader's

-        // schema, but return the type actually written so the

-        // proper conversion can happen.

-        if (r == t) return w;

-

-      ReaderUnionAction(index,writerSym):

-        // Reader-union actions are allowed where the reader

-        // is expecting a union.  In this case, we return the

-        // (parameterized!) reader-union-action symbol and 

-        // the code above figures out what to do

-        if (t == union) return X;

-

-      ErrorAction:

-        throw the deferred error;

-      

-    // Fall-through case:

-    if (X == t) then return X

-    else throw an error

-</pre>

diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/package.html
deleted file mode 100644
index 2a879ab..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/io/parsing/package.html
+++ /dev/null
@@ -1,40 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-Implementation of Avro schemas as LL(1) grammars.
-
-<p>See
-<a href="doc-files/parsing.html">parser documentation</a> for details on how
-this is achieved.
-
-<p>
-The classes in this package are used by
-{@link org.apache.avro.io.ValidatingEncoder},
-{@link org.apache.avro.io.ValidatingDecoder},
-{@link org.apache.avro.io.ResolvingDecoder},
-{@link org.apache.avro.io.JsonEncoder} and
-{@link org.apache.avro.io.JsonDecoder}.
-
-<p>
-Unless one plans to generate a variation of the grammar or use a grammar
-directly, one need not understand these classes.
-
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/AvroRemoteException.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/AvroRemoteException.java
deleted file mode 100644
index cd85514..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/AvroRemoteException.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-
-/** Base class for exceptions thrown to client by server. */
-public class AvroRemoteException extends IOException {
-  private Object value;
-
-  protected AvroRemoteException() {}
-
-  public AvroRemoteException(Throwable value) {
-    this(value.toString());
-    initCause(value);
-  }
-
-  public AvroRemoteException(Object value) {
-    super(value != null ? value.toString() : null);
-    this.value = value;
-  }
-  
-  public Object getValue() { return value; }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ByteBufferInputStream.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ByteBufferInputStream.java
deleted file mode 100644
index 6def9a7..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ByteBufferInputStream.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.EOFException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.ByteBuffer;
-import java.util.List;
-
-/** Utility to present {@link ByteBuffer} data as an {@link InputStream}.*/
-public class ByteBufferInputStream extends InputStream {
-  private List<ByteBuffer> buffers;
-  private int current;
-
-  public ByteBufferInputStream(List<ByteBuffer> buffers) {
-    this.buffers = buffers;
-  }
-
-  /** @see InputStream#read()
-   * @throws EOFException if EOF is reached. */
-  public int read() throws IOException {
-    return getBuffer().get() & 0xff;
-  }
-
-  /** @see InputStream#read(byte[], int, int)
-   * @throws EOFException if EOF is reached before reading all the bytes. */
-  public int read(byte[] b, int off, int len) throws IOException {
-    if (len == 0) return 0;
-    ByteBuffer buffer = getBuffer();
-    int remaining = buffer.remaining();
-    if (len > remaining) {
-      buffer.get(b, off, remaining);
-      return remaining;
-    } else {
-      buffer.get(b, off, len);
-      return len;
-    }
-  }
-
-  /** Read a buffer from the input without copying, if possible.
-   * @throws EOFException if EOF is reached before reading all the bytes. */
-  public ByteBuffer readBuffer(int length) throws IOException {
-    if (length == 0) return ByteBuffer.allocate(0);
-    ByteBuffer buffer = getBuffer();
-    if (buffer.remaining() == length) {           // can return current as-is?
-      current++;
-      return buffer;                              // return w/o copying
-    }
-    // punt: allocate a new buffer & copy into it
-    ByteBuffer result = ByteBuffer.allocate(length);
-    int start = 0;
-    while (start < length)
-      start += read(result.array(), start, length-start);
-    return result;
-  }
-
-  /** Returns the next non-empty buffer.
-   * @throws EOFException if EOF is reached before reading all the bytes.
-   */
-  private ByteBuffer getBuffer() throws IOException {
-    while (current < buffers.size()) {
-      ByteBuffer buffer = buffers.get(current);
-      if (buffer.hasRemaining())
-        return buffer;
-      current++;
-    }
-    throw new EOFException();
-  }
-}
-
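The deleted class above presents a list of ByteBuffers as one logical InputStream, draining each buffer in turn and skipping empty ones. A minimal standalone sketch of that contract (this helper is illustrative, not Avro's class) shows the ordering and the same `& 0xff` masking used by `read()`:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.List;

// Drain a list of ByteBuffers as one byte stream, in list order.
class BufferStreamSketch {
  static byte[] drain(List<ByteBuffer> buffers) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (ByteBuffer b : buffers)
      while (b.hasRemaining())                    // empty buffers are skipped
        out.write(b.get() & 0xff);                // same masking as read()
    return out.toByteArray();
  }
}
```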
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ByteBufferOutputStream.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ByteBufferOutputStream.java
deleted file mode 100644
index e21b01b..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ByteBufferOutputStream.java
+++ /dev/null
@@ -1,91 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.io.OutputStream;
-import java.nio.ByteBuffer;
-
-import java.util.ArrayList;
-import java.util.List;
-
-/** Utility to collect data written to an {@link OutputStream} in {@link
- * ByteBuffer}s.*/
-public class ByteBufferOutputStream extends OutputStream {
-  public static final int BUFFER_SIZE = 8192;
-
-  private List<ByteBuffer> buffers;
-
-  public ByteBufferOutputStream() {
-    reset();
-  }
-
-  /** Returns all data written and resets the stream to be empty. */
-  public List<ByteBuffer> getBufferList() {
-    List<ByteBuffer> result = buffers;
-    reset();
-    for (ByteBuffer buffer : result) buffer.flip();
-    return result;
-  }
-
-  public void reset() {
-    buffers = new ArrayList<ByteBuffer>(1);
-    buffers.add(ByteBuffer.allocate(BUFFER_SIZE));
-  }
-
-  public void write(ByteBuffer buffer) {
-    buffers.add(buffer);
-  }
-
-  public void write(int b) {
-    ByteBuffer buffer = buffers.get(buffers.size()-1);
-    if (buffer.remaining() < 1) {
-      buffer = ByteBuffer.allocate(BUFFER_SIZE);
-      buffers.add(buffer);
-    }
-    buffer.put((byte)b);
-  }
-
-  public void write(byte[] b, int off, int len) {
-    ByteBuffer buffer = buffers.get(buffers.size()-1);
-    int remaining = buffer.remaining();
-    while (len > remaining) {
-      buffer.put(b, off, remaining);
-      len -= remaining;
-      off += remaining;
-      buffer = ByteBuffer.allocate(BUFFER_SIZE);
-      buffers.add(buffer);
-      remaining = buffer.remaining();
-    }
-    buffer.put(b, off, len);
-  }
-
-  /** Add a buffer to the output without copying, if possible.
-   * Sets buffer's position to its limit.
-   */
-  public void writeBuffer(ByteBuffer buffer) throws IOException {
-    if (buffer.remaining() < BUFFER_SIZE) {
-      write(buffer.array(), buffer.position(), buffer.remaining());
-    } else {
-      buffers.add(buffer);                        // append w/o copying
-    }
-    buffer.position(buffer.limit());              // mark data as consumed
-  }
-}
-
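The `write(byte[], int, int)` method above splits a large write across fixed-size buffers, filling the current buffer and allocating a new one as needed. A self-contained sketch of that splitting loop (the small chunk size is an assumption chosen so the split is easy to observe; the real class uses BUFFER_SIZE = 8192):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Split one write across fixed-capacity buffers, as in the class above.
class ChunkWriteSketch {
  static List<ByteBuffer> writeChunked(byte[] data, int chunkSize) {
    List<ByteBuffer> buffers = new ArrayList<>();
    ByteBuffer buffer = ByteBuffer.allocate(chunkSize);
    buffers.add(buffer);
    int off = 0, len = data.length;
    int remaining = buffer.remaining();
    while (len > remaining) {                     // fill, then roll over
      buffer.put(data, off, remaining);
      len -= remaining;
      off += remaining;
      buffer = ByteBuffer.allocate(chunkSize);
      buffers.add(buffer);
      remaining = buffer.remaining();
    }
    buffer.put(data, off, len);                   // tail fits in last buffer
    return buffers;
  }
}
```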
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/DatagramServer.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/DatagramServer.java
deleted file mode 100644
index bae403e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/DatagramServer.java
+++ /dev/null
@@ -1,80 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.net.SocketAddress;
-import java.nio.channels.ClosedChannelException;
-import java.nio.channels.DatagramChannel;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/** A datagram-based server implementation. This uses a simple, non-standard
- * wire protocol and is not intended for production services. */
-public class DatagramServer extends Thread implements Server {
-  private static final Logger LOG =
-    LoggerFactory.getLogger(DatagramServer.class);
-
-  private final Responder responder;
-  private final DatagramChannel channel;
-  private final Transceiver transceiver;
-
-  public DatagramServer(Responder responder, SocketAddress addr)
-    throws IOException {
-    String name = "DatagramServer on "+addr;
-
-    this.responder = responder;
-
-    this.channel = DatagramChannel.open();
-    channel.socket().bind(addr);
-
-    this.transceiver = new DatagramTransceiver(channel);
-
-    setName(name);
-    setDaemon(true);
-    start();
-  }
-
-  public int getPort() { return channel.socket().getLocalPort(); }
-
-  public void run() {
-    while (true) {
-      try {
-        transceiver.writeBuffers(responder.respond(transceiver.readBuffers()));
-      } catch (ClosedChannelException e) {
-        return;
-      } catch (IOException e) {
-        LOG.warn("unexpected error", e);
-        throw new RuntimeException(e);
-      }
-    }
-  }
-  
-  public void close() { this.interrupt(); }
-
-  public static void main(String[] arg) throws Exception {
-    DatagramServer server = new DatagramServer(null, new InetSocketAddress(0));
-    System.out.println("started");
-    server.join();
-  }
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/DatagramTransceiver.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/DatagramTransceiver.java
deleted file mode 100644
index db4c1f0..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/DatagramTransceiver.java
+++ /dev/null
@@ -1,85 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.util.List;
-import java.util.ArrayList;
-import java.io.IOException;
-import java.net.SocketAddress;
-import java.nio.ByteBuffer;
-import java.nio.channels.DatagramChannel;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/** A datagram-based {@link Transceiver} implementation. This uses a simple,
- * non-standard wire protocol and is not intended for production services. */
-public class DatagramTransceiver extends Transceiver {
-  private static final Logger LOG
-    = LoggerFactory.getLogger(DatagramTransceiver.class);
-
-  private static final int MAX_SIZE = 16 * 1024;
-
-  private DatagramChannel channel;
-  private SocketAddress remote;
-  private ByteBuffer buffer = ByteBuffer.allocate(MAX_SIZE);
-
-  public String getRemoteName() { return remote.toString(); }
-
-  public DatagramTransceiver(SocketAddress remote) throws IOException {
-    this(DatagramChannel.open());
-    this.remote = remote;
-  }
-
-  public DatagramTransceiver(DatagramChannel channel) {
-    this.channel = channel;
-  }
-
-  public synchronized List<ByteBuffer> readBuffers() throws IOException {
-    buffer.clear();
-    remote = channel.receive(buffer);
-    LOG.info("received from "+remote);
-    buffer.flip();
-    List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
-    while (true) {
-      int length = buffer.getInt();
-      if (length == 0) {                          // end of buffers
-        return buffers;
-      }
-      ByteBuffer chunk = buffer.slice();          // use data without copying
-      chunk.limit(length);
-      buffer.position(buffer.position()+length);
-      buffers.add(chunk);
-    }
-  }
-
-  public synchronized void writeBuffers(List<ByteBuffer> buffers)
-    throws IOException {
-    buffer.clear();
-    for (ByteBuffer b : buffers) {
-      buffer.putInt(b.remaining());
-      buffer.put(b);                              // copy data.  sigh.
-    }
-    buffer.putInt(0);
-    buffer.flip();
-    channel.send(buffer, remote);
-    LOG.info("sent to "+remote);
-  }
-
-}
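The `readBuffers`/`writeBuffers` pair above frames each buffer as a 4-byte length plus payload, terminated by a zero length, and reads chunks back out as slices without copying. A standalone round-trip sketch of that framing (names and the capacity parameter are illustrative, not Avro's API):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Length-prefixed framing with a zero-length terminator, as above.
class FramingSketch {
  static ByteBuffer frame(List<ByteBuffer> buffers, int capacity) {
    ByteBuffer out = ByteBuffer.allocate(capacity);
    for (ByteBuffer b : buffers) {
      out.putInt(b.remaining());
      out.put(b.duplicate());                     // keep caller's position
    }
    out.putInt(0);                                // end-of-buffers marker
    out.flip();
    return out;
  }

  static List<ByteBuffer> unframe(ByteBuffer in) {
    List<ByteBuffer> buffers = new ArrayList<>();
    while (true) {
      int length = in.getInt();
      if (length == 0)                            // end of buffers
        return buffers;
      ByteBuffer chunk = in.slice();              // view, no copy
      chunk.limit(length);
      in.position(in.position() + length);
      buffers.add(chunk);
    }
  }
}
```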
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/HttpServer.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/HttpServer.java
deleted file mode 100644
index 3478b5c..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/HttpServer.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-
-import org.apache.avro.AvroRuntimeException;
-
-import org.mortbay.jetty.servlet.Context;
-import org.mortbay.jetty.servlet.ServletHolder;
-
-/** An HTTP-based RPC {@link Server}. */
-public class HttpServer implements Server {
-  private org.mortbay.jetty.Server server;
-
-  /** Starts a server on the named port. */
-  public HttpServer(Responder responder, int port) throws IOException {
-    this(new ResponderServlet(responder), port);
-  }
-
-  /** Starts a server on the named port. */
-  public HttpServer(ResponderServlet servlet, int port) throws IOException {
-    this.server = new org.mortbay.jetty.Server(port);
-    new Context(server,"/").addServlet(new ServletHolder(servlet), "/*");
-    try {
-      server.start();
-    } catch (Exception e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  @Override
-  public int getPort() { return server.getConnectors()[0].getLocalPort(); }
-
-  @Override
-  public void close() {
-    try {
-      server.stop();
-    } catch (Exception e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/HttpTransceiver.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/HttpTransceiver.java
deleted file mode 100644
index 211131e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/HttpTransceiver.java
+++ /dev/null
@@ -1,113 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-import java.io.EOFException;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.List;
-import java.net.URL;
-import java.net.HttpURLConnection;
-
-/** An HTTP-based {@link Transceiver} implementation. */
-public class HttpTransceiver extends Transceiver {
-  static final String CONTENT_TYPE = "avro/binary"; 
-
-  private URL url;
-  private HttpURLConnection connection;
-  
-  public HttpTransceiver(URL url) { this.url = url; }
-
-  public String getRemoteName() { return this.url.toString(); }
-
-  @Override
-  public synchronized List<ByteBuffer> transceive(List<ByteBuffer> request)
-    throws IOException {
-    this.connection = (HttpURLConnection)url.openConnection();
-    connection.setRequestMethod("POST");
-    connection.setRequestProperty("Content-Type", CONTENT_TYPE);
-    connection.setRequestProperty("Content-Length",
-                                  Integer.toString(getLength(request)));
-    connection.setDoOutput(true);
-    //LOG.info("Connecting to: "+url);
-    return super.transceive(request);
-  }
-
-  public synchronized List<ByteBuffer> readBuffers() throws IOException {
-    return readBuffers(connection.getInputStream());
-  }
-
-  public synchronized void writeBuffers(List<ByteBuffer> buffers)
-    throws IOException {
-    writeBuffers(buffers, connection.getOutputStream());
-  }
-
-  static int getLength(List<ByteBuffer> buffers) {
-    int length = 0;
-    for (ByteBuffer buffer : buffers) {
-      length += 4;
-      length += buffer.remaining();
-    }
-    length += 4;
-    return length;
-  }
-
-  static List<ByteBuffer> readBuffers(InputStream in)
-    throws IOException {
-    List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
-    while (true) {
-      int length = (in.read()<<24)+(in.read()<<16)+(in.read()<<8)+in.read();
-      if (length == 0) {                       // end of buffers
-        return buffers;
-      }
-      ByteBuffer buffer = ByteBuffer.allocate(length);
-      while (buffer.hasRemaining()) {
-        int p = buffer.position();
-        int i = in.read(buffer.array(), p, buffer.remaining());
-        if (i < 0)
-          throw new EOFException("Unexpected EOF");
-        buffer.position(p+i);
-      }
-      buffer.flip();
-      buffers.add(buffer);
-    }
-  }
-
-  static void writeBuffers(List<ByteBuffer> buffers, OutputStream out)
-    throws IOException {
-    for (ByteBuffer buffer : buffers) {
-      writeLength(buffer.limit(), out);           // length-prefix
-      out.write(buffer.array(), buffer.position(), buffer.remaining());
-      buffer.position(buffer.limit());
-    }
-    writeLength(0, out);                          // null-terminate
-  }
-
-  private static void writeLength(int length, OutputStream out)
-    throws IOException {
-    out.write(0xff & (length >>> 24));
-    out.write(0xff & (length >>> 16));
-    out.write(0xff & (length >>> 8));
-    out.write(0xff & length);
-  }
-}
-
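The HTTP transceiver above hand-rolls a big-endian 4-byte length prefix via `writeLength` and reassembles it with shifts in `readBuffers`. A byte-array round-trip sketch of that encoding (helper names are hypothetical; the stream-based originals are shown in the class above):

```java
// Big-endian 4-byte length prefix, encoded and decoded symmetrically.
class LengthPrefixSketch {
  static byte[] encode(int length) {
    return new byte[] {
      (byte)(length >>> 24), (byte)(length >>> 16),
      (byte)(length >>> 8),  (byte)length };
  }

  static int decode(byte[] b) {
    return ((b[0] & 0xff) << 24) + ((b[1] & 0xff) << 16)
         + ((b[2] & 0xff) << 8)  +  (b[3] & 0xff);
  }
}
```

Note that the byte-array form must mask each byte with `0xff` before shifting, since Java bytes are signed; the stream-based reader above relies on `InputStream.read()` already returning an unsigned value in 0..255.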
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/LocalTransceiver.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/LocalTransceiver.java
deleted file mode 100644
index fa93ac8..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/LocalTransceiver.java
+++ /dev/null
@@ -1,52 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.List;
-
-/** Implementation of IPC that remains in process. */
-public class LocalTransceiver extends Transceiver {
-  private Responder responder;
-
-  public LocalTransceiver(Responder responder) {
-    this.responder = responder;
-  }
-
-  @Override
-  public String getRemoteName() {
-    return "local";
-  }
-
-  @Override
-  public List<ByteBuffer> transceive(List<ByteBuffer> request)
-      throws IOException {
-    return responder.respond(request);
-  }
-
-  @Override
-  public List<ByteBuffer> readBuffers() throws IOException {
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public void writeBuffers(List<ByteBuffer> buffers) throws IOException {
-    throw new UnsupportedOperationException();
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/RPCContext.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/RPCContext.java
deleted file mode 100644
index a6ae303..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/RPCContext.java
+++ /dev/null
@@ -1,157 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc;
-
-import java.nio.ByteBuffer;
-import java.util.HashMap;
-import java.util.Map;
-
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.util.Utf8;
-
-/**
- * This class represents the context of an RPC call or RPC handshake.
- * Designed to provide information to RPC plugin writers,
- * this class encapsulates information about the rpc exchange,
- * including handshake and call metadata.
- *
- */
-public class RPCContext {
-  
-  protected Map<Utf8,ByteBuffer> requestHandshakeMeta, responseHandshakeMeta;
-  protected Map<Utf8,ByteBuffer> requestCallMeta, responseCallMeta;
-  
-  protected Object response;
-  protected Exception error;
-  private Message message;
-  
-  /**
-   * This is an access method for the handshake state
-   * provided by the client to the server.
-   * @return a map representing handshake state from
-   * the client to the server
-   */
-  public Map<Utf8,ByteBuffer> requestHandshakeMeta() {
-    if (requestHandshakeMeta == null) {
-      requestHandshakeMeta = new HashMap<Utf8,ByteBuffer>();
-    }
-    return requestHandshakeMeta;
-  }
-  
-  void setRequestHandshakeMeta(Map<Utf8,ByteBuffer> newmeta) {
-    requestHandshakeMeta = newmeta;
-  }
-  
-  /**
-   * This is an access method for the handshake state
- * provided by the server back to the client.
-   * @return a map representing handshake state from
-   * the server to the client
-   */
-  public Map<Utf8,ByteBuffer> responseHandshakeMeta() {
-    if (responseHandshakeMeta == null) {
-      responseHandshakeMeta = new HashMap<Utf8,ByteBuffer>();
-    }
-    return responseHandshakeMeta;
-  }
-  
-  void setResponseHandshakeMeta(Map<Utf8,ByteBuffer> newmeta) {
-    responseHandshakeMeta = newmeta;
-  }
-  
-  /**
-   * This is an access method for the per-call state
-   * provided by the client to the server.
-   * @return a map representing per-call state from
-   * the client to the server
-   */
-  public Map<Utf8,ByteBuffer> requestCallMeta() {
-    if (requestCallMeta == null) {
-      requestCallMeta = new HashMap<Utf8,ByteBuffer>();
-    }
-    return requestCallMeta;
-  }
-  
-  void setRequestCallMeta(Map<Utf8,ByteBuffer> newmeta) {
-    requestCallMeta = newmeta;
-  }
-  
-  /**
-   * This is an access method for the per-call state
-   * provided by the server back to the client.
-   * @return a map representing per-call state from
-   * the server to the client
-   */
-  public Map<Utf8,ByteBuffer> responseCallMeta() {
-    if (responseCallMeta == null) {
-      responseCallMeta = new HashMap<Utf8,ByteBuffer>();
-    }
-    return responseCallMeta;
-  }
-  
-  void setResponseCallMeta(Map<Utf8,ByteBuffer> newmeta) {
-    responseCallMeta = newmeta;
-  }
-  
-  void setResponse(Object response) {
-    this.response = response;
-    this.error = null;
-  }
-  
-  /**
-   * The response object generated at the server,
-   * if it exists.  If an exception was generated,
-   * this will be null.
-   * @return the response created by this RPC, or
-   * null if an exception was generated
-   */
-  public Object response() {
-    return response;
-  }
-  
-  void setError(Exception error) {
-    this.response = null;
-    this.error = error;
-  }
-  
-  /**
-   * The exception generated at the server,
-   * or null if no such exception has occurred
-   * @return the exception generated at the server, or
-   * null if no such exception
-   */
-  public Exception error() {
-    return error;
-  }
-  
-  /**
-   * Indicates whether an exception was generated
-   * at the server
-   * @return true if an exception was generated at
-   * the server, or false if not
-   */
-  public boolean isError() {
-    return error != null;
-  }
-
-  public void setMessage(Message message) {
-    this.message = message;    
-  }
-  
-  public Message getMessage() { return message; }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/RPCPlugin.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/RPCPlugin.java
deleted file mode 100644
index 288b8be..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/RPCPlugin.java
+++ /dev/null
@@ -1,74 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc;
-
-/**
- * An instrumentation API for RPC metadata.  Each of these methods
- * is invoked at key points during the RPC exchange.  Additionally,
- * path-based <em>metadata</em> is passed along with the RPC call
- * and can be set or queried by subsequent instrumentation points.
- */
-public class RPCPlugin {
-  
-  /**
-   * Called on the client before the initial RPC handshake to
-   * setup any handshake metadata for this plugin
-   * @param context the handshake rpc context
-   */
-  public void clientStartConnect(RPCContext context) { }
-  
-  /**
-   * Called on the server during the RPC handshake
-   * @param context the handshake rpc context
-   */
-  public void serverConnecting(RPCContext context) { }
-  
-  /**
-   * Called on the client after the initial RPC handshake
-   * @param context the handshake rpc context
-   */
-  public void clientFinishConnect(RPCContext context) { }
-  
-  /**
-   * This method is invoked at the client before it issues the RPC call.
-   * @param context the per-call rpc context (in/out parameter)
-   */
-  public void clientSendRequest(RPCContext context) { }
-  
-  /**
-   * This method is invoked at the RPC server when the request is received,
-   * but before the call itself is executed
-   * @param context the per-call rpc context (in/out parameter)
-   */
-  public void serverReceiveRequest(RPCContext context) { }
-  
-  /**
-   * This method is invoked at the server after the call is executed,
-   * but before the response is returned to the client
-   * @param context the per-call rpc context (in/out parameter)
-   */
-  public void serverSendResponse(RPCContext context) { }
-    
-  /**
-   * This method is invoked at the client after the call is executed,
-   * and after the client receives the response
-   * @param context the per-call rpc context
-   */
-  public void clientReceiveResponse(RPCContext context) { }
-  
-}
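The plugin class above is a set of no-op lifecycle hooks that an instrumentation plugin overrides. A toy, self-contained illustration of that pattern (the classes and the StringBuilder "context" here are invented stand-ins, not Avro's RPCPlugin/RPCContext API):

```java
// Hook pattern: a base class with no-op lifecycle methods that a
// subclass overrides to observe points in an exchange.
class HookSketch {
  static class Plugin {
    void clientSendRequest(StringBuilder log) { }     // no-op by default
    void clientReceiveResponse(StringBuilder log) { } // no-op by default
  }

  static class LoggingPlugin extends Plugin {
    @Override void clientSendRequest(StringBuilder log) { log.append("send;"); }
    @Override void clientReceiveResponse(StringBuilder log) { log.append("recv;"); }
  }

  // A framework would invoke the hooks around the real call.
  static String exchange(Plugin p) {
    StringBuilder log = new StringBuilder();
    p.clientSendRequest(log);                     // before the call goes out
    p.clientReceiveResponse(log);                 // after the response arrives
    return log.toString();
  }
}
```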
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Requestor.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Requestor.java
deleted file mode 100644
index 0e1467a..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Requestor.java
+++ /dev/null
@@ -1,231 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.DecoderFactory;
-import org.apache.avro.io.BinaryDecoder;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.specific.SpecificDatumReader;
-import org.apache.avro.specific.SpecificDatumWriter;
-import org.apache.avro.util.Utf8;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/** Base class for the client side of a protocol interaction. */
-public abstract class Requestor {
-  private static final Logger LOG = LoggerFactory.getLogger(Requestor.class);
-
-  private static final Schema META =
-    Schema.createMap(Schema.create(Schema.Type.BYTES));
-  private static final GenericDatumReader<Map<Utf8,ByteBuffer>> META_READER =
-    new GenericDatumReader<Map<Utf8,ByteBuffer>>(META);
-  private static final GenericDatumWriter<Map<Utf8,ByteBuffer>> META_WRITER =
-    new GenericDatumWriter<Map<Utf8,ByteBuffer>>(META);
-
-  private Protocol local;
-  private Protocol remote;
-  private boolean sendLocalText;
-  private Transceiver transceiver;
-  
-  protected List<RPCPlugin> rpcMetaPlugins;
-
-  public Protocol getLocal() { return local; }
-  public Protocol getRemote() { return remote; }
-  public Transceiver getTransceiver() { return transceiver; }
-
-  protected Requestor(Protocol local, Transceiver transceiver)
-    throws IOException {
-    this.local = local;
-    this.transceiver = transceiver;
-    this.rpcMetaPlugins =
-      Collections.synchronizedList(new ArrayList<RPCPlugin>());
-  }
-  
-  /**
-   * Adds a new plugin to manipulate RPC metadata.  Plugins
-   * are executed in the order that they are added.
-   * @param plugin a plugin that will manipulate RPC metadata
-   */
-  public void addRPCPlugin(RPCPlugin plugin) {
-    rpcMetaPlugins.add(plugin);
-  }
-
-  /** Writes a request message and reads a response or error message. */
-  public Object request(String messageName, Object request)
-    throws Exception {
-    BinaryDecoder in = null;
-    Message m;
-    RPCContext context = new RPCContext();
-    do {
-      ByteBufferOutputStream bbo = new ByteBufferOutputStream();
-      Encoder out = new BinaryEncoder(bbo);
-
-      writeHandshake(out);                      // prepend handshake
-
-      // use local protocol to write request
-      m = getLocal().getMessages().get(messageName);
-      if (m == null)
-        throw new AvroRuntimeException("Not a local message: "+messageName);
-      context.setMessage(m);
-      
-      for (RPCPlugin plugin : rpcMetaPlugins) {
-        plugin.clientSendRequest(context);
-      }
-      
-      META_WRITER.write(context.requestCallMeta(), out);
-      out.writeString(m.getName());       // write message name
-      writeRequest(m.getRequest(), request, out); // write request payload
-      
-      List<ByteBuffer> response =                 // transceive
-        getTransceiver().transceive(bbo.getBufferList());
-      
-      ByteBufferInputStream bbi = new ByteBufferInputStream(response);
-      in = DecoderFactory.defaultFactory().createBinaryDecoder(bbi, in);
-    } while (!readHandshake(in));
-
-    // use remote protocol to read response
-    m = getRemote().getMessages().get(messageName);
-    if (m == null)
-      throw new AvroRuntimeException("Not a remote message: "+messageName);
-    context.setRequestCallMeta(META_READER.read(null, in));
-    
-    if (!in.readBoolean()) {                      // no error
-      Object response = readResponse(m.getResponse(), in);
-      context.setResponse(response);
-      for (RPCPlugin plugin : rpcMetaPlugins) {
-        plugin.clientReceiveResponse(context);
-      }
-      return response;
-      
-    } else {
-      Exception error = readError(m.getErrors(), in);
-      context.setError(error);
-      for (RPCPlugin plugin : rpcMetaPlugins) {
-        plugin.clientReceiveResponse(context);
-      }
-      throw error;
-    }
-    
-  }
-
-  private static final Map<String,MD5> REMOTE_HASHES =
-    Collections.synchronizedMap(new HashMap<String,MD5>());
-  private static final Map<MD5,Protocol> REMOTE_PROTOCOLS =
-    Collections.synchronizedMap(new HashMap<MD5,Protocol>());
-
-  private static final SpecificDatumWriter<HandshakeRequest> HANDSHAKE_WRITER =
-    new SpecificDatumWriter<HandshakeRequest>(HandshakeRequest.class);
-
-  private static final SpecificDatumReader<HandshakeResponse> HANDSHAKE_READER =
-    new SpecificDatumReader<HandshakeResponse>(HandshakeResponse.class);
-
-  private void writeHandshake(Encoder out) throws IOException {
-    MD5 localHash = new MD5();
-    localHash.bytes(local.getMD5());
-    String remoteName = transceiver.getRemoteName();
-    MD5 remoteHash = REMOTE_HASHES.get(remoteName);
-    remote = REMOTE_PROTOCOLS.get(remoteHash);
-    if (remoteHash == null) {                     // guess remote is local
-      remoteHash = localHash;
-      remote = local;
-    }
-    HandshakeRequest handshake = new HandshakeRequest();
-    handshake.clientHash = localHash;
-    handshake.serverHash = remoteHash;
-    if (sendLocalText)
-      handshake.clientProtocol = new Utf8(local.toString());
-    
-    RPCContext context = new RPCContext();
-    for (RPCPlugin plugin : rpcMetaPlugins) {
-      plugin.clientStartConnect(context);
-    }
-    handshake.meta = context.requestHandshakeMeta();
-    
-    HANDSHAKE_WRITER.write(handshake, out);
-  }
-
-  private boolean readHandshake(Decoder in) throws IOException {
-    boolean established = false;
-    HandshakeResponse handshake = HANDSHAKE_READER.read(null, in);
-    switch (handshake.match) {
-    case BOTH:
-      established = true;
-      break;
-    case CLIENT:
-      LOG.debug("Handshake match = CLIENT");
-      setRemote(handshake);
-      established = true;
-      break;
-    case NONE:
-      LOG.debug("Handshake match = NONE");
-      setRemote(handshake);
-      sendLocalText = true;
-      break;
-    default:
-      throw new AvroRuntimeException("Unexpected match: "+handshake.match);
-    }
-    
-    RPCContext context = new RPCContext();
-    if (handshake.meta != null) {
-      context.setResponseHandshakeMeta((Map<Utf8, ByteBuffer>) handshake.meta);
-    }
-      
-    for (RPCPlugin plugin : rpcMetaPlugins) {
-      plugin.clientFinishConnect(context);
-    }
-    return established;
-  }
-
-  private void setRemote(HandshakeResponse handshake) {
-    remote = Protocol.parse(handshake.serverProtocol.toString());
-    MD5 remoteHash = (MD5)handshake.serverHash;
-    REMOTE_HASHES.put(transceiver.getRemoteName(), remoteHash);
-    if (!REMOTE_PROTOCOLS.containsKey(remoteHash))
-      REMOTE_PROTOCOLS.put(remoteHash, remote);
-  }
-
-  /** Writes a request message. */
-  public abstract void writeRequest(Schema schema, Object request,
-                                    Encoder out) throws IOException;
-
-  /** Reads a response message. */
-  public abstract Object readResponse(Schema schema, Decoder in)
-    throws IOException;
-
-  /** Reads an error message. */
-  public abstract Exception readError(Schema schema, Decoder in)
-    throws IOException;
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Responder.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Responder.java
deleted file mode 100644
index 57c7842..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Responder.java
+++ /dev/null
@@ -1,207 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.util.Utf8;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.DecoderFactory;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.specific.SpecificDatumReader;
-import org.apache.avro.specific.SpecificDatumWriter;
-
-/** Base class for the server side of a protocol interaction. */
-public abstract class Responder {
-  private static final Logger LOG = LoggerFactory.getLogger(Responder.class);
-
-  private static final Schema META =
-    Schema.createMap(Schema.create(Schema.Type.BYTES));
-  private static final GenericDatumReader<Map<Utf8,ByteBuffer>> META_READER =
-    new GenericDatumReader<Map<Utf8,ByteBuffer>>(META);
-  private static final GenericDatumWriter<Map<Utf8,ByteBuffer>> META_WRITER =
-    new GenericDatumWriter<Map<Utf8,ByteBuffer>>(META);
-
-  private Map<MD5,Protocol> protocols
-    = Collections.synchronizedMap(new HashMap<MD5,Protocol>());
-
-  private Protocol local;
-  private MD5 localHash;
-  protected List<RPCPlugin> rpcMetaPlugins;
-
-  protected Responder(Protocol local) {
-    this.local = local;
-    this.localHash = new MD5();
-    localHash.bytes(local.getMD5());
-    protocols.put(localHash, local);
-    this.rpcMetaPlugins =
-      Collections.synchronizedList(new ArrayList<RPCPlugin>());
-  }
-
-  public Protocol getLocal() { return local; }
-  
-  /**
-   * Adds a new plugin to manipulate per-call metadata.  Plugins
-   * are executed in the order that they are added.
-   * @param plugin a plugin that will manipulate RPC metadata
-   */
-  public void addRPCPlugin(RPCPlugin plugin) {
-    rpcMetaPlugins.add(plugin);
-  }
-
-  /** Called by a server to deserialize a request, compute and serialize
-   * a response or error. */
-  public List<ByteBuffer> respond(List<ByteBuffer> buffers) throws IOException {
-    Decoder in = DecoderFactory.defaultFactory().createBinaryDecoder(
-        new ByteBufferInputStream(buffers), null);
-    ByteBufferOutputStream bbo = new ByteBufferOutputStream();
-    Encoder out = new BinaryEncoder(bbo);
-    Exception error = null;
-    RPCContext context = new RPCContext();
-    try {
-      Protocol remote = handshake(in, out);
-      if (remote == null)                        // handshake failed
-        return bbo.getBufferList();
-
-      // read request using remote protocol specification
-      context.setRequestCallMeta(META_READER.read(null, in));
-      String messageName = in.readString(null).toString();
-      Message m = remote.getMessages().get(messageName);
-      if (m == null)
-        throw new AvroRuntimeException("No such remote message: "+messageName);
-      
-      context.setMessage(m);
-      
-      Object request = readRequest(m.getRequest(), in);
-      
-      for (RPCPlugin plugin : rpcMetaPlugins) {
-        plugin.serverReceiveRequest(context);
-      }
-
-      // create response using local protocol specification
-      m = getLocal().getMessages().get(messageName);
-      if (m == null)
-        throw new AvroRuntimeException("No message named "+messageName
-                                       +" in "+getLocal());
-      Object response = null;
-      try {
-        response = respond(m, request);
-        context.setResponse(response);
-      } catch (Exception e) {
-        error = e;
-        context.setError(error);
-      }
-      
-      for (RPCPlugin plugin : rpcMetaPlugins) {
-        plugin.serverSendResponse(context);
-      }
-      
-      META_WRITER.write(context.responseCallMeta(), out);
-      out.writeBoolean(error != null);
-      if (error == null)
-        writeResponse(m.getResponse(), response, out);
-      else
-        writeError(m.getErrors(), error, out);
-
-    } catch (Exception e) {                       // system error
-      LOG.warn("system error", e);
-      context.setError(e);
-      bbo = new ByteBufferOutputStream();
-      out = new BinaryEncoder(bbo);
-      META_WRITER.write(context.responseCallMeta(), out);
-      out.writeBoolean(true);
-      writeError(Protocol.SYSTEM_ERRORS, new Utf8(e.toString()), out);
-    }
-      
-    return bbo.getBufferList();
-  }
-
-  private SpecificDatumWriter<HandshakeResponse> handshakeWriter =
-    new SpecificDatumWriter<HandshakeResponse>(HandshakeResponse.class);
-  private SpecificDatumReader<HandshakeRequest> handshakeReader =
-    new SpecificDatumReader<HandshakeRequest>(HandshakeRequest.class);
-
-  private Protocol handshake(Decoder in, Encoder out)
-    throws IOException {
-    HandshakeRequest request = (HandshakeRequest)handshakeReader.read(null, in);
-    Protocol remote = protocols.get(request.clientHash);
-    if (remote == null && request.clientProtocol != null) {
-      remote = Protocol.parse(request.clientProtocol.toString());
-      protocols.put(request.clientHash, remote);
-    }
-    HandshakeResponse response = new HandshakeResponse();
-    if (localHash.equals(request.serverHash)) {
-      response.match =
-        remote == null ? HandshakeMatch.NONE : HandshakeMatch.BOTH;
-    } else {
-      response.match =
-        remote == null ? HandshakeMatch.NONE : HandshakeMatch.CLIENT;
-    }
-    if (response.match != HandshakeMatch.BOTH) {
-      response.serverProtocol = new Utf8(local.toString());
-      response.serverHash = localHash;
-    }
-    
-    RPCContext context = new RPCContext();
-    context.setRequestHandshakeMeta((Map<Utf8, ByteBuffer>) request.meta);
-    
-    for (RPCPlugin plugin : rpcMetaPlugins) {
-      plugin.serverConnecting(context);
-    }
-    response.meta = context.responseHandshakeMeta();
-    
-    handshakeWriter.write(response, out);
-    return remote;
-  }
-
-  /** Computes the response for a message. */
-  public abstract Object respond(Message message, Object request)
-    throws Exception;
-
-  /** Reads a request message. */
-  public abstract Object readRequest(Schema schema, Decoder in)
-    throws IOException;
-
-  /** Writes a response message. */
-  public abstract void writeResponse(Schema schema, Object response,
-                                     Encoder out) throws IOException;
-
-  /** Writes an error message. */
-  public abstract void writeError(Schema schema, Object error,
-                                  Encoder out) throws IOException;
-
-}
-
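The match computed in `Responder.handshake` depends on two independent facts: whether the server recognizes the client's protocol hash, and whether the client guessed the server's hash correctly. The decision table can be isolated as a small standalone sketch (`HandshakeMatchSketch` is a hypothetical helper, not part of Avro):

```java
// Standalone sketch of the match decision in Responder.handshake.
// serverHashMatches: the client's guess of the server hash equals the
// server's actual hash; clientKnown: the server can resolve the client
// hash to a protocol (from its cache or from clientProtocol text).
public class HandshakeMatchSketch {
  public enum Match { BOTH, CLIENT, NONE }

  public static Match decide(boolean serverHashMatches, boolean clientKnown) {
    if (!clientKnown) {
      return Match.NONE;          // client must retry, sending its protocol text
    }
    return serverHashMatches
        ? Match.BOTH              // both sides resolved; the call proceeds
        : Match.CLIENT;           // server returns its protocol to the client
  }
}
```

This matches the code above: only `BOTH` lets the response omit `serverProtocol` and `serverHash`; in the other two cases the server sends its protocol back so the client can update its caches.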
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ResponderServlet.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ResponderServlet.java
deleted file mode 100644
index 53f6094..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/ResponderServlet.java
+++ /dev/null
@@ -1,56 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.List;
-
-import javax.servlet.ServletException;
-import javax.servlet.http.HttpServlet;
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-
-import org.apache.avro.AvroRuntimeException;
-
-/** An {@link HttpServlet} that responds to Avro RPC requests. */
-public class ResponderServlet extends HttpServlet {
-  private Responder responder;
-
-  public ResponderServlet(Responder responder) throws IOException {
-    this.responder = responder;
-  }
-
-  @Override
-  protected void doPost(HttpServletRequest request,
-                        HttpServletResponse response)
-    throws IOException, ServletException {
-    response.setContentType(HttpTransceiver.CONTENT_TYPE);
-    List<ByteBuffer> requestBufs =
-      HttpTransceiver.readBuffers(request.getInputStream());
-    try {
-      List<ByteBuffer> responseBufs = responder.respond(requestBufs);
-      response.setContentLength(HttpTransceiver.getLength(responseBufs));
-      HttpTransceiver.writeBuffers(responseBufs, response.getOutputStream());
-    } catch (AvroRuntimeException e) {
-      throw new ServletException(e);
-    }
-  }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Server.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Server.java
deleted file mode 100644
index 50692b9..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Server.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-/** A server listening on a port. */
-public interface Server {
-  /** The port this server runs on. */
-  int getPort();
-
-  /** Stop this server. */
-  void close();
-  
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/SocketServer.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/SocketServer.java
deleted file mode 100644
index 4bf2d8e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/SocketServer.java
+++ /dev/null
@@ -1,112 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.net.SocketAddress;
-import java.nio.channels.ClosedChannelException;
-import java.nio.channels.ServerSocketChannel;
-import java.nio.channels.SocketChannel;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/** A socket-based server implementation. This uses a simple, non-standard wire
- * protocol and is not intended for production services. */
-public class SocketServer extends Thread implements Server {
-  private static final Logger LOG = LoggerFactory.getLogger(SocketServer.class);
-
-  private Responder responder;
-  private ServerSocketChannel channel;
-  private ThreadGroup group;
-
-  public SocketServer(Responder responder, SocketAddress addr)
-    throws IOException {
-    String name = "SocketServer on "+addr;
-
-    this.responder = responder;
-    this.group = new ThreadGroup(name);
-    this.channel = ServerSocketChannel.open();
-
-    channel.socket().bind(addr);
-
-    setName(name);
-    setDaemon(true);
-    start();
-  }
-
-  public int getPort() { return channel.socket().getLocalPort(); }
-
-  public void run() {
-    LOG.info("starting "+channel.socket().getInetAddress());
-    while (true) {
-      try {
-        new Connection(channel.accept());
-      } catch (ClosedChannelException e) {
-        return;
-      } catch (IOException e) {
-        LOG.warn("unexpected error", e);
-        throw new RuntimeException(e);
-      } finally {
-        LOG.info("stopping "+channel.socket().getInetAddress());
-      }
-    }
-  }
-
-  public void close() {
-    group.interrupt();
-  }
-
-  private class Connection extends SocketTransceiver implements Runnable {
-
-    public Connection(SocketChannel channel) throws IOException {
-      super(channel);
-
-      Thread thread = new Thread(group, this);
-      thread.setName("Connection to "+channel.socket().getRemoteSocketAddress());
-      thread.setDaemon(true);
-      thread.start();
-    }
-
-    public void run() {
-      try {
-        try {
-          while (true) {
-            writeBuffers(responder.respond(readBuffers()));
-          }
-        } catch (ClosedChannelException e) {
-          return;
-        } finally {
-          close();
-        }
-      } catch (IOException e) {
-        LOG.warn("unexpected error", e);
-      }
-    }
-
-  }
-  
-  public static void main(String[] arg) throws Exception {
-    SocketServer server = new SocketServer(null, new InetSocketAddress(0));
-    System.out.println("started");
-    server.join();
-  }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/SocketTransceiver.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/SocketTransceiver.java
deleted file mode 100644
index 9df6c81..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/SocketTransceiver.java
+++ /dev/null
@@ -1,99 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.IOException;
-import java.net.SocketAddress;
-import java.nio.channels.SocketChannel;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.List;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/** A socket-based {@link Transceiver} implementation.  This uses a simple,
- * non-standard wire protocol and is not intended for production services. */
-public class SocketTransceiver extends Transceiver {
-  private static final Logger LOG
-    = LoggerFactory.getLogger(SocketTransceiver.class);
-
-  private SocketChannel channel;
-  private ByteBuffer header = ByteBuffer.allocate(4);
-  
-  public SocketTransceiver(SocketAddress address) throws IOException {
-    this(SocketChannel.open(address));
-  }
-
-  public SocketTransceiver(SocketChannel channel) throws IOException {
-    this.channel = channel;
-    this.channel.socket().setTcpNoDelay(true);
-    LOG.info("open to "+getRemoteName());
-  }
-
-  public String getRemoteName() {
-    return channel.socket().getRemoteSocketAddress().toString();
-  }
-
-  public synchronized List<ByteBuffer> readBuffers() throws IOException {
-    List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
-    while (true) {
-      header.clear();
-      while (header.hasRemaining()) {
-        channel.read(header);
-      }
-      header.flip();
-      int length = header.getInt();
-      if (length == 0) {                       // end of buffers
-        return buffers;
-      }
-      ByteBuffer buffer = ByteBuffer.allocate(length);
-      while (buffer.hasRemaining()) {
-        channel.read(buffer);
-      }
-      buffer.flip();
-      buffers.add(buffer);
-    }
-  }
-
-  public synchronized void writeBuffers(List<ByteBuffer> buffers)
-    throws IOException {
-    for (ByteBuffer buffer : buffers) {
-      writeLength(buffer.limit());                // length-prefix
-      channel.write(buffer);
-    }
-    writeLength(0);                               // null-terminate
-  }
-
-  private void writeLength(int length) throws IOException {
-    header.clear();
-    header.putInt(length);
-    header.flip();
-    channel.write(header);
-  }
-
-  public void close() throws IOException {
-    if (channel.isOpen()) {
-      LOG.info("closing to "+getRemoteName());
-      channel.close();
-    }
-  }
-
-}
-
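The wire framing used by `SocketTransceiver.writeBuffers` and `readBuffers` above is simple: each buffer is preceded by a 4-byte big-endian length, and a zero length terminates the list. A standalone sketch of that framing, using only `java.nio` (`FramingSketch` is a hypothetical helper, not part of Avro):

```java
// Standalone sketch of the SocketTransceiver wire framing: each buffer is
// length-prefixed with a 4-byte big-endian int, and a zero length ends
// the list of buffers.
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class FramingSketch {
  // Serializes buffers with the length-prefix framing of writeBuffers.
  public static byte[] frame(List<byte[]> buffers) {
    int total = 4;                       // trailing zero terminator
    for (byte[] b : buffers) total += 4 + b.length;
    ByteBuffer out = ByteBuffer.allocate(total);
    for (byte[] b : buffers) {
      out.putInt(b.length);              // length-prefix
      out.put(b);
    }
    out.putInt(0);                       // null-terminate, as writeBuffers does
    return out.array();
  }

  // Reverses the framing, as readBuffers does.
  public static List<byte[]> unframe(byte[] data) {
    ByteBuffer in = ByteBuffer.wrap(data);
    List<byte[]> buffers = new ArrayList<byte[]>();
    while (true) {
      int length = in.getInt();
      if (length == 0) return buffers;   // end of buffers
      byte[] b = new byte[length];
      in.get(b);
      buffers.add(b);
    }
  }
}
```

As the class comment warns, this framing is non-standard and carries no checksums or negotiation, which is why the socket transceiver is not intended for production services.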
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Transceiver.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Transceiver.java
deleted file mode 100644
index bc79bc0..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/Transceiver.java
+++ /dev/null
@@ -1,44 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.ipc;
-
-import java.io.Closeable;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.List;
-
-/** Base class for transmitters and receivers of raw binary messages. */
-public abstract class Transceiver implements Closeable {
-
-  public abstract String getRemoteName();
-
-  public synchronized List<ByteBuffer> transceive(List<ByteBuffer> request)
-    throws IOException {
-    writeBuffers(request);
-    return readBuffers();
-  }
-
-  public abstract List<ByteBuffer> readBuffers() throws IOException;
-
-  public abstract void writeBuffers(List<ByteBuffer> buffers)
-    throws IOException;
-
-  public void close() throws IOException {}
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/package.html
deleted file mode 100644
index f246ebd..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/package.html
+++ /dev/null
@@ -1,23 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-Support for inter-process calls.
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/FloatHistogram.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/FloatHistogram.java
deleted file mode 100644
index 15fd094..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/FloatHistogram.java
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-/**
- * Specific implementation of histogram for floats,
- * which also keeps track of basic summary statistics.
- * @param <B> bucket description type
- */
-class FloatHistogram<B> extends Histogram<B, Float> {
-  private float runningSum;
-  private float runningSumOfSquares;
-
-  public FloatHistogram(Segmenter<B, Float> segmenter) {
-    super(segmenter);
-  }
-
-  @Override
-  public void add(Float value) {
-    super.add(value);
-    runningSum += value;
-    runningSumOfSquares += value*value;
-  }
-
-  public float getMean() {
-    if (totalCount == 0) {
-      return Float.NaN;
-    }
-    return runningSum / totalCount;
-  }
-
-  public float getUnbiasedStdDev() {
-    if (totalCount <= 1) {
-      return Float.NaN;
-    }
-    float mean = getMean();
-    return (float)Math.sqrt((runningSumOfSquares - totalCount*mean*mean)/(totalCount - 1));
-  }
-}
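The deleted `FloatHistogram` keeps a running sum and running sum of squares so that the mean and unbiased standard deviation can be derived without storing individual samples. A minimal standalone sketch of that technique (class and method names here are illustrative, not Avro APIs):

```java
// Running-sum statistics: mean = sum/n, unbiased stddev uses an (n - 1)
// denominator, computed from the sum of squares (same idea as FloatHistogram,
// but with double accumulators to reduce rounding error).
public class RunningStats {
    private double sum;           // running sum of values
    private double sumOfSquares;  // running sum of squared values
    private long count;

    public void add(double value) {
        sum += value;
        sumOfSquares += value * value;
        count++;
    }

    public double mean() {
        return count == 0 ? Double.NaN : sum / count;
    }

    public double unbiasedStdDev() {
        if (count <= 1) return Double.NaN;
        double m = mean();
        return Math.sqrt((sumOfSquares - count * m * m) / (count - 1));
    }

    public static void main(String[] args) {
        RunningStats s = new RunningStats();
        for (double v : new double[] {2, 4, 4, 4, 5, 5, 7, 9}) s.add(v);
        System.out.println(s.mean());            // 5.0
        System.out.println(s.unbiasedStdDev());  // ~2.138 (sqrt(32/7))
    }
}
```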
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/Histogram.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/Histogram.java
deleted file mode 100644
index 4f26f8b..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/Histogram.java
+++ /dev/null
@@ -1,205 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-import java.util.Iterator;
-import java.util.Map;
-import java.util.SortedSet;
-import java.util.TreeMap;
-
-/**
- * Represents a histogram of values.  This class uses a {@link Segmenter}
- * to determine which bucket to place a given value into.
- *
- * Note that Histogram, by itself, is not synchronized.
- * @param <B> Bucket type.  Often String, since buckets are typically
- * used for their toString() representation.
- * @param <T> Type of value
- */
-class Histogram<B, T> {
-  private Segmenter<B, T> segmenter;
-  private int[] counts;
-  protected int totalCount;
-
-  /**
-   * Interface to determine which bucket to place a value in.
-   *
-   * Segmenters should be immutable, so many histograms can re-use
-   * the same segmenter.
-   */
-  interface Segmenter<B, T> {
-    /** Number of buckets to use. */
-    int size();
-    /**
-     * Which bucket to place value in.
-     *
-     * @return Index of bucket for the value.  At least 0 and less than size().
-     * @throws SegmenterException if value does not fit in a bucket.
-     */
-    int segment(T value);
-    /**
-     * Returns an iterator of buckets. The order of iteration
-     * is consistent with the segment numbers.
-     */
-    Iterator<B> getBuckets();
-  }
-
-  public static class SegmenterException extends RuntimeException {
-    public SegmenterException(String s) {
-      super(s);
-    }
-  }
-
-  public static class TreeMapSegmenter<T extends Comparable<T>>
-      implements Segmenter<String, T> {
-    private TreeMap<T, Integer> index = new TreeMap<T, Integer>();
-    public TreeMapSegmenter(SortedSet<T> leftEndpoints) {
-      if (leftEndpoints.isEmpty()) {
-        throw new IllegalArgumentException(
-            "Endpoints must not be empty: " + leftEndpoints);
-      }
-      int i = 0;
-      for (T t : leftEndpoints) {
-        index.put(t, i++);
-      }
-    }
-
-    public int segment(T value) {
-      Map.Entry<T, Integer> e = index.floorEntry(value);
-      if (e == null) {
-        throw new SegmenterException("Could not find bucket for: " + value);
-      }
-      return e.getValue();
-    }
-
-    @Override
-    public int size() {
-      return index.size();
-    }
-
-    private String rangeAsString(T a, T b) {
-      return String.format("[%s,%s)", a, b == null ? "infinity" : b);
-    }
-
-    @Override
-    public Iterator<String> getBuckets() {
-      return new Iterator<String>() {
-        Iterator<T> it = index.keySet().iterator();
-        T cur = it.next(); // there's always at least one element
-
-        @Override
-        public boolean hasNext() {
-          return it.hasNext();
-        }
-
-        @Override
-        public String next() {
-          T left = cur;
-          cur = it.hasNext() ? it.next() : null;
-          return rangeAsString(left, cur);
-        }
-
-        @Override
-        public void remove() {
-          throw new UnsupportedOperationException();
-
-        }
-      };
-    }
-  }
-
-  /**
-   * Creates a histogram using the specified segmenter.
-   */
-  public Histogram(Segmenter<B, T> segmenter) {
-    this.segmenter = segmenter;
-    this.counts = new int[segmenter.size()];
-  }
-
-  /** Tallies a value in the histogram. */
-  public void add(T value) {
-    int i = segmenter.segment(value);
-    counts[i]++;
-    totalCount++;
-  }
-
-  /**
-   * Returns the underlying bucket values.
-   */
-  public int[] getHistogram() {
-    return counts;
-  }
-
-  /** Returns the total count of entries. */
-  public int getCount() {
-    return totalCount;
-  }
-
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    boolean first = true;
-    for (Entry<B> e : entries()) {
-      if (!first) {
-        sb.append(";");
-      } else {
-        first = false;
-      }
-      sb.append(e.bucket).append("=").append(e.count);
-    }
-    return sb.toString();
-  }
-
-  static class Entry<B> {
-    public Entry(B bucket, int count) {
-      this.bucket = bucket;
-      this.count = count;
-    }
-    B bucket;
-    int count;
-  }
-
-  private class EntryIterator implements Iterable<Entry<B>>, Iterator<Entry<B>> {
-    int i = 0;
-    Iterator<B> bucketNameIterator = segmenter.getBuckets();
-
-    @Override
-    public Iterator<Entry<B>> iterator() {
-      return this;
-    }
-
-    @Override
-    public boolean hasNext() {
-      return i < segmenter.size();
-    }
-
-    @Override
-    public Entry<B> next() {
-      return new Entry<B>(bucketNameIterator.next(), counts[i++]);
-    }
-
-    @Override
-    public void remove() {
-      throw new UnsupportedOperationException();
-    }
-
-  }
-
-  public Iterable<Entry<B>> entries() {
-    return new EntryIterator();
-  }
-}
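The `TreeMapSegmenter` above resolves a value to a bucket by looking up the greatest left endpoint that does not exceed the value, via `TreeMap.floorEntry`. A self-contained sketch of that lookup (`Buckets` is a hypothetical name; the real class throws `SegmenterException` where this one returns -1):

```java
import java.util.Map;
import java.util.TreeMap;

// Map a value to the bucket whose left endpoint is the greatest one <= value.
public class Buckets {
    private final TreeMap<Double, Integer> index = new TreeMap<>();

    public Buckets(double... leftEndpoints) {
        for (int i = 0; i < leftEndpoints.length; i++) {
            index.put(leftEndpoints[i], i);
        }
    }

    /** Returns the bucket index, or -1 when the value is below every endpoint. */
    public int segment(double value) {
        Map.Entry<Double, Integer> e = index.floorEntry(value);
        return e == null ? -1 : e.getValue();
    }

    public static void main(String[] args) {
        Buckets b = new Buckets(0, 25, 50, 100);
        System.out.println(b.segment(30));   // 1, i.e. the [25,50) bucket
        System.out.println(b.segment(100));  // 3, the open-ended last bucket
        System.out.println(b.segment(-1));   // -1, below every endpoint
    }
}
```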
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/StatsPlugin.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/StatsPlugin.java
deleted file mode 100644
index b78b446..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/StatsPlugin.java
+++ /dev/null
@@ -1,121 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.TreeSet;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentMap;
-
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.ipc.RPCContext;
-import org.apache.avro.ipc.RPCPlugin;
-import org.apache.avro.ipc.stats.Histogram.Segmenter;
-import org.apache.avro.ipc.stats.Stopwatch.Ticks;
-
-/**
- * Collects count and latency statistics about RPC calls.  Keeps
- * data for every method.
- *
- * This uses milliseconds as the standard unit of measure
- * throughout the class, stored in floats.
- */
-public class StatsPlugin extends RPCPlugin {
-  /** Static declaration of histogram buckets. */
-  static final Segmenter<String, Float> DEFAULT_SEGMENTER =
-    new Histogram.TreeMapSegmenter<Float>(new TreeSet<Float>(Arrays.asList(
-            0f,
-           25f,
-           50f,
-           75f,
-          100f,
-          200f,
-          300f,
-          500f,
-          750f,
-         1000f, // 1 second
-         2000f,
-         5000f,
-        10000f,
-        60000f, // 1 minute
-       600000f)));
-
-  /** Per-method histograms.
-   * Must be accessed while holding a lock on methodTimings. */
-  Map<Message, FloatHistogram<?>> methodTimings =
-    new HashMap<Message, FloatHistogram<?>>();
-
-  /** RPCs in flight. */
-  ConcurrentMap<RPCContext, Stopwatch> activeRpcs =
-    new ConcurrentHashMap<RPCContext, Stopwatch>();
-  private Ticks ticks;
-
-  private Segmenter<?, Float> segmenter;
-
-  /** Construct a plugin with custom Ticks and Segmenter implementations. */
-  StatsPlugin(Ticks ticks, Segmenter<?, Float> segmenter) {
-    this.segmenter = segmenter;
-    this.ticks = ticks;
-  }
-
-  /** Construct a plugin with default (system) ticks, and default
-   * histogram segmentation. */
-  public StatsPlugin() {
-    this(Stopwatch.SYSTEM_TICKS, DEFAULT_SEGMENTER);
-  }
-
-  @Override
-  public void serverReceiveRequest(RPCContext context) {
-    Stopwatch t = new Stopwatch(ticks);
-    t.start();
-    this.activeRpcs.put(context, t);
-  }
-
-  @Override
-  public void serverSendResponse(RPCContext context) {
-    Stopwatch t = this.activeRpcs.remove(context);
-    t.stop();
-    publish(context, t);
-  }
-
-  /** Adds timing to the histograms. */
-  private void publish(RPCContext context, Stopwatch t) {
-    Message message = context.getMessage();
-    if (message == null) throw new IllegalArgumentException();
-    synchronized(methodTimings) {
-      FloatHistogram<?> h = methodTimings.get(context.getMessage());
-      if (h == null) {
-        h = createNewHistogram();
-        methodTimings.put(context.getMessage(), h);
-      }
-      h.add(nanosToMillis(t.elapsedNanos()));
-    }
-  }
-
-  @SuppressWarnings("unchecked")
-  private FloatHistogram<?> createNewHistogram() {
-    return new FloatHistogram(segmenter);
-  }
-
-  /** Converts nanoseconds to milliseconds. */
-  static float nanosToMillis(long elapsedNanos) {
-    return elapsedNanos / 1000000.0f;
-  }
-}
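`StatsPlugin.publish()` lazily creates a per-method histogram under a lock and records elapsed time converted from nanoseconds to milliseconds. The same get-or-create-then-record pattern can be sketched with a plain count/sum accumulator in place of the histogram (`MethodTimings` and its fields are illustrative, not Avro APIs):

```java
import java.util.HashMap;
import java.util.Map;

// Per-key timing accumulation: lazily create an accumulator under a lock,
// then record the elapsed time in milliseconds, as StatsPlugin.publish() does.
public class MethodTimings {
    // method name -> {call count, total milliseconds}
    private final Map<String, double[]> timings = new HashMap<>();

    /** Converts nanoseconds to milliseconds, as StatsPlugin.nanosToMillis does. */
    static double nanosToMillis(long nanos) {
        return nanos / 1_000_000.0;
    }

    public void publish(String method, long elapsedNanos) {
        synchronized (timings) {
            double[] acc = timings.get(method);
            if (acc == null) {          // lazily create, like createNewHistogram()
                acc = new double[2];
                timings.put(method, acc);
            }
            acc[0]++;
            acc[1] += nanosToMillis(elapsedNanos);
        }
    }

    public double meanMillis(String method) {
        synchronized (timings) {
            double[] acc = timings.get(method);
            return acc == null ? Double.NaN : acc[1] / acc[0];
        }
    }

    public static void main(String[] args) {
        MethodTimings t = new MethodTimings();
        t.publish("ping", 2_000_000L);  // 2 ms
        t.publish("ping", 4_000_000L);  // 4 ms
        System.out.println(t.meanMillis("ping"));  // 3.0
    }
}
```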
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/StatsServlet.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/StatsServlet.java
deleted file mode 100644
index 642e374..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/StatsServlet.java
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-import java.io.IOException;
-import java.io.Writer;
-import java.util.Map.Entry;
-
-import javax.servlet.ServletException;
-import javax.servlet.http.HttpServlet;
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.ipc.RPCContext;
-
-/**
- * Exposes information provided by a StatsPlugin as
- * a web page.
- *
- * This class follows the same synchronization conventions
- * as StatsPlugin, to avoid requiring StatsPlugin to serve
- * a copy of the data.
- */
-public class StatsServlet extends HttpServlet {
-  private final StatsPlugin statsPlugin;
-
-  public StatsServlet(StatsPlugin statsPlugin) {
-    this.statsPlugin = statsPlugin;
-  }
-
-  @Override
-  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
-      throws ServletException, IOException {
-    resp.setContentType("text/html");
-    writeStats(resp.getWriter());
-  }
-
-  void writeStats(Writer w) throws IOException {
-    w.append("<html><head><title>Avro RPC Stats</title></head>");
-    w.append("<body><h1>Avro RPC Stats</h1>");
-
-    w.append("<h2>Active RPCs</h2>");
-    w.append("<ol>");
-    for (Entry<RPCContext, Stopwatch> rpc : this.statsPlugin.activeRpcs.entrySet()) {
-      writeActiveRpc(w, rpc.getKey(), rpc.getValue());
-    }
-    w.append("</ol>");
-
-    w.append("<h2>Per-method Timing</h2>");
-    synchronized(this.statsPlugin.methodTimings) {
-      for (Entry<Message, FloatHistogram<?>> e :
-        this.statsPlugin.methodTimings.entrySet()) {
-        writeMethod(w, e.getKey(), e.getValue());
-      }
-    }
-    w.append("</body></html>");
-  }
-
-  private void writeActiveRpc(Writer w, RPCContext rpc, Stopwatch stopwatch) throws IOException {
-    w.append("<li>").append(rpc.getMessage().getName()).append(": ");
-    w.append(formatMillis(StatsPlugin.nanosToMillis(stopwatch.elapsedNanos())));
-    w.append("</li>");
-  }
-
-  private void writeMethod(Writer w, Message message, FloatHistogram<?> hist) throws IOException {
-    w.append("<h3>").append(message.getName()).append("</h3>");
-    w.append("<p>Number of calls: ");
-    w.append(Integer.toString(hist.getCount()));
-    w.append("</p><p>Average Duration: ");
-    w.append(formatMillis(hist.getMean()));
-    w.append("</p>");
-    w.append("<p>Std Dev: ");
-    w.append(formatMillis(hist.getUnbiasedStdDev()));
-    w.append("</p>");
-
-    w.append("<dl>");
-
-    for (Histogram.Entry<?> e : hist.entries()) {
-      w.append("<dt>");
-      w.append(e.bucket.toString());
-      w.append("</dt>");
-      w.append("<dd>").append(Integer.toString(e.count));
-      w.append("</dd>");
-    }
-    w.append("</dl>");
-  }
-
-  private CharSequence formatMillis(float millis) {
-    return String.format("%.0fms", millis);
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/Stopwatch.java b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/Stopwatch.java
deleted file mode 100644
index e15cac6..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/Stopwatch.java
+++ /dev/null
@@ -1,74 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-/** Encapsulates the passing of time. */
-class Stopwatch  {
-  /** Encapsulates ticking time sources. */
-  interface Ticks {
-    /**
-     * Returns a number of "ticks" in nanoseconds.
-     * This should be monotonically non-decreasing.
-     */
-    long ticks();
-  }
-
-  /** Default System time source. */
-  public final static Ticks SYSTEM_TICKS = new SystemTicks();
-
-  private Ticks ticks;
-  private long start;
-  private long elapsed = -1;
-  private boolean running;
-
-  public Stopwatch(Ticks ticks) {
-    this.ticks = ticks;
-  }
-
-  /** Returns nanoseconds that have elapsed since start(). */
-  public long elapsedNanos() {
-    if (running) {
-      return this.ticks.ticks() - start;
-    } else {
-      if (elapsed == -1) throw new IllegalStateException();
-      return elapsed;
-    }
-  }
-
-  /** Starts the stopwatch. */
-  public void start() {
-    if (running) throw new IllegalStateException();
-    start = ticks.ticks();
-    running = true;
-  }
-
-  /** Stops the stopwatch and calculates the elapsed time. */
-  public void stop() {
-    if (!running) throw new IllegalStateException();
-    elapsed = ticks.ticks() - start;
-    running = false;
-  }
-
-  /** Implementation of Ticks using System.nanoTime(). */
-  private static class SystemTicks implements Ticks {
-    @Override
-    public long ticks() {
-      return System.nanoTime();
-    }
-  }
-}
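`Stopwatch` takes its time source through the `Ticks` interface rather than calling `System.nanoTime()` directly, which makes elapsed-time logic deterministic under test. A compact sketch of that injection pattern (`FakeTicks` and `timeNanos` are hypothetical names, not part of Avro):

```java
// Injecting the time source makes timing code testable: a fake clock
// advances only when the test says so.
public class TickDemo {
    interface Ticks { long ticks(); }

    /** Hypothetical fake time source: returns a manually advanced counter. */
    static class FakeTicks implements Ticks {
        long now;
        public long ticks() { return now; }
    }

    /** Times a task against an injected Ticks, mirroring Stopwatch's logic. */
    static long timeNanos(Ticks ticks, Runnable work) {
        long start = ticks.ticks();
        work.run();
        return ticks.ticks() - start;
    }

    public static void main(String[] args) {
        FakeTicks fake = new FakeTicks();
        long elapsed = timeNanos(fake, () -> fake.now += 42);
        System.out.println(elapsed);  // 42, exactly the fake advance
    }
}
```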
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/package.html
deleted file mode 100644
index b27b866..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/ipc/stats/package.html
+++ /dev/null
@@ -1,23 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-Utilities to collect and display IPC statistics.
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/package.html
deleted file mode 100644
index 692bba9..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/package.html
+++ /dev/null
@@ -1,44 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>Avro kernel classes.
-
-<p>A {@link org.apache.avro.Schema} provides an abstract definition of
-  a data type.
-
-<p>The in-memory representation of data is determined by {@link
-  org.apache.avro.io.DatumReader} and {@link
-  org.apache.avro.io.DatumWriter} implementations.  Generic
-  implementations are provided in the {@link org.apache.avro.generic}
-  package.  A {@link org.apache.avro.specific.SpecificCompiler
-  compiler} can generate specific java classes and interfaces for
-  schemas and protocols.  Schemas may be automatically generated for
-  existing Java classes by reflection using the {@link
-  org.apache.avro.reflect} package.
-
-<p>Data of a given schema is always serialized identically, regardless
-  of its in-memory representation, by traversing the schema and
-  writing leaf values from the data structure with a {@link
-  org.apache.avro.io.Encoder}.  Deserializing similarly proceeds by
-  traversing the schema, reading leaf values with a {@link
-  org.apache.avro.io.Decoder} and storing them in an in-memory data
-  structure.
-
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/InduceSchemaTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/InduceSchemaTool.java
deleted file mode 100644
index 33ec716..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/InduceSchemaTool.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.reflect;
-
-import java.io.File;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.net.URL;
-import java.net.URLClassLoader;
-import java.util.List;
-
-import org.apache.avro.tool.Tool;
-
-/**
- * Utility to induce a schema from a class or a protocol from an interface.
- */
-public class InduceSchemaTool implements Tool {
-
-  @Override
-  public int run(InputStream in, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-    if (args.size() == 0 || args.size() > 2) {
-      System.err.println("Usage: [colon-delimited-classpath] classname");
-      return 1;
-    }
-    ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
-    String className;
-    if (args.size() == 2) {
-      String classpaths = args.get(0);
-      className = args.get(1);
-      if (!classpaths.isEmpty()) {
-        String[] paths = args.get(0).split(":");
-        URL[] urls = new URL[paths.length];
-        for (int i = 0; i < paths.length; ++i) {
-          urls[i] = new File(paths[i]).toURI().toURL();
-        }
-        classLoader = URLClassLoader.newInstance(urls, classLoader);
-      }
-    } else {
-      className = args.get(0);
-    }
-
-    Class<?> klass = classLoader.loadClass(className);
-    if (klass.isInterface()) {
-      System.out.println(ReflectData.get().getProtocol(klass).toString(true));
-    } else {
-      System.out.println(ReflectData.get().getSchema(klass).toString(true));
-    }
-    return 0;
-  }
-
-  @Override
-  public String getName() {
-    return "induce";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Induce schema/protocol from Java class/interface via reflection.";
-  }
-}
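`InduceSchemaTool` splits its optional first argument on colons and converts each path to a `file:` URL for a `URLClassLoader`. That conversion can be sketched on its own (`ClasspathDemo`/`toUrls` are illustrative names; the real tool wires the result into `URLClassLoader.newInstance`):

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;

// Turn a colon-delimited classpath string into file: URLs, as the tool
// does before constructing a URLClassLoader.
public class ClasspathDemo {
    static URL[] toUrls(String classpath) {
        String[] paths = classpath.split(":");
        URL[] urls = new URL[paths.length];
        for (int i = 0; i < paths.length; i++) {
            try {
                urls[i] = new File(paths[i]).toURI().toURL();
            } catch (MalformedURLException e) {
                throw new IllegalArgumentException(paths[i], e);
            }
        }
        return urls;
    }

    public static void main(String[] args) {
        URL[] urls = toUrls("lib/a.jar:lib/b.jar");
        System.out.println(urls.length);            // 2
        System.out.println(urls[0].getProtocol());  // file
    }
}
```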
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Nullable.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Nullable.java
deleted file mode 100644
index bf62beb..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Nullable.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.reflect;
-
-import java.lang.annotation.Documented;
-import java.lang.annotation.ElementType;
-import java.lang.annotation.Retention;
-import java.lang.annotation.RetentionPolicy;
-import java.lang.annotation.Target;
-
-/** Declares that null is a valid value for a Java type.  Causes an Avro union
- * with null to be used.  May be applied to parameters, fields and methods (to
- * declare the return type).
- */
-@Retention(RetentionPolicy.RUNTIME)
-@Target({ElementType.PARAMETER, ElementType.METHOD, ElementType.FIELD})
-@Documented
-public @interface Nullable {}
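`@Nullable` is retained at runtime precisely so that reflection-based schema induction can observe it on fields, parameters, and methods. A minimal sketch of how a RUNTIME-retained annotation is detected via reflection (`MaybeNull` and `AnnotationDemo` are hypothetical stand-ins, not the Avro types):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A RUNTIME-retained annotation is visible to Field.isAnnotationPresent,
// which is how reflection-driven tools decide to emit a union with null.
public class AnnotationDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface MaybeNull {}

    static class Record {
        @MaybeNull String comment;  // annotated: nullable
        int id;                     // unannotated: required
    }

    static boolean isNullable(String fieldName) {
        try {
            return Record.class.getDeclaredField(fieldName)
                    .isAnnotationPresent(MaybeNull.class);
        } catch (NoSuchFieldException e) {
            throw new IllegalArgumentException(fieldName, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(isNullable("comment")); // true
        System.out.println(isNullable("id"));      // false
    }
}
```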
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectData.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectData.java
deleted file mode 100644
index 53e1534..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectData.java
+++ /dev/null
@@ -1,430 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.reflect;
-
-import java.lang.reflect.Field;
-import java.lang.reflect.Method;
-import java.lang.reflect.Modifier;
-import java.lang.reflect.Type;
-import java.lang.reflect.ParameterizedType;
-import java.lang.reflect.GenericArrayType;
-import java.lang.annotation.Annotation;
-import java.util.Collection;
-import java.util.List;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.LinkedHashMap;
-import java.util.Map;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.generic.GenericFixed;
-import org.apache.avro.specific.SpecificData;
-import org.apache.avro.specific.FixedSize;
-import org.apache.avro.ipc.AvroRemoteException;
-
-import com.thoughtworks.paranamer.CachingParanamer;
-import com.thoughtworks.paranamer.Paranamer;
-
-/** Utilities to use existing Java classes and interfaces via reflection. */
-public class ReflectData extends SpecificData {
-  
-  /** {@link ReflectData} implementation that permits null field values.  The
-   * schema generated for each field is a union of its declared type and
-   * null. */
-  public static class AllowNull extends ReflectData {
-
-    private static final AllowNull INSTANCE = new AllowNull();
-
-    /** Return the singleton instance. */
-    public static AllowNull get() { return INSTANCE; }
-
-    protected Schema createFieldSchema(Field field, Map<String, Schema> names) {
-      Schema schema = super.createFieldSchema(field, names);
-      return makeNullable(schema);
-    }
-  }
-  
-  private static final ReflectData INSTANCE = new ReflectData();
-
-  protected ReflectData() {}
-  
-  /** Return the singleton instance. */
-  public static ReflectData get() { return INSTANCE; }
-
-  @Override
-  protected boolean isRecord(Object datum) {
-    if (datum == null) return false;
-    return getSchema(datum.getClass()).getType() == Schema.Type.RECORD;
-  }
-
-  @Override
-  protected boolean isArray(Object datum) {
-    return (datum instanceof Collection) || datum.getClass().isArray();
-  }
-
-  @Override
-  protected boolean isString(Object datum) {
-    return datum instanceof String;
-  }
-
-  @Override
-  protected boolean isBytes(Object datum) {
-    if (datum == null) return false;
-    Class c = datum.getClass();
-    return c.isArray() && c.getComponentType() == Byte.TYPE;
-  }
-
-  @Override
-  protected Schema getRecordSchema(Object record) {
-    return getSchema(record.getClass());
-  }
-
-  @Override
-  public boolean validate(Schema schema, Object datum) {
-    switch (schema.getType()) {
-    case RECORD:
-      if (datum == null) return false;
-      Class c = datum.getClass(); 
-      for (Schema.Field f : schema.getFields()) {
-        try {
-          if (!validate(f.schema(),
-                        getField(c, f.name()).get(datum)))
-          return false;
-        } catch (IllegalAccessException e) {
-          throw new AvroRuntimeException(e);
-        }
-      }
-      return true;
-    case ARRAY:
-      if (datum instanceof Collection) {          // collection
-        for (Object element : (Collection)datum)
-          if (!validate(schema.getElementType(), element))
-            return false;
-        return true;
-      } else if (datum.getClass().isArray()) {    // array
-        int length = java.lang.reflect.Array.getLength(datum);
-        for (int i = 0; i < length; i++)
-          if (!validate(schema.getElementType(),
-                        java.lang.reflect.Array.get(datum, i)))
-            return false;
-        return true;
-      }
-      return false;
-    default:
-      return super.validate(schema, datum);
-    }
-  }
-
-  private static final Map<Class,Map<String,Field>> FIELD_CACHE =
-    new ConcurrentHashMap<Class,Map<String,Field>>();
-
-  /** Return the named field of the provided class.  Implementation caches
-   * values, since this is used at runtime to get and set fields. */
-  protected static Field getField(Class c, String name) {
-    Map<String,Field> fields = FIELD_CACHE.get(c);
-    if (fields == null) {
-      fields = new ConcurrentHashMap<String,Field>();
-      FIELD_CACHE.put(c, fields);
-    }
-    Field f = fields.get(name);
-    if (f == null) {
-      f = findField(c, name);
-      fields.put(name, f);
-    }
-    return f;
-  }
-
-  private static Field findField(Class c, String name) {
-    do {
-      try {
-        Field f = c.getDeclaredField(name);
-        f.setAccessible(true);
-        return f;
-      } catch (NoSuchFieldException e) {}
-      c = c.getSuperclass();
-    } while (c != null);
-    throw new AvroRuntimeException("No field named "+name+" in class or its superclasses");
-  }
-
-  static final String CLASS_PROP = "java-class";
-  static final String ELEMENT_PROP = "java-element-class";
-
-  static Class getClassProp(Schema schema, String prop) {
-    String name = schema.getProp(prop);
-    if (name == null) return null;
-    try {
-      return Class.forName(name);
-    } catch (ClassNotFoundException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  private static final Class BYTES_CLASS = new byte[0].getClass();
-
-  @Override
-  public Class getClass(Schema schema) {
-    switch (schema.getType()) {
-    case ARRAY:
-      Class collectionClass = getClassProp(schema, CLASS_PROP);
-      if (collectionClass != null)
-        return collectionClass;
-      return java.lang.reflect.Array.newInstance(getClass(schema.getElementType()),0).getClass();
-    case STRING:  return String.class;
-    case BYTES:   return BYTES_CLASS;
-    case INT:
-      if (Short.class.getName().equals(schema.getProp(CLASS_PROP)))
-        return Short.TYPE;
-    default:
-      return super.getClass(schema);
-    }
-  }
-
-  @Override
-  @SuppressWarnings(value="unchecked")
-  protected Schema createSchema(Type type, Map<String,Schema> names) {
-    if (type instanceof GenericArrayType) {                  // generic array
-      Type component = ((GenericArrayType)type).getGenericComponentType();
-      if (component == Byte.TYPE)                            // byte array
-        return Schema.create(Schema.Type.BYTES);           
-      Schema result = Schema.createArray(createSchema(component, names));
-      setElement(result, component);
-      return result;
-    } else if (type instanceof ParameterizedType) {
-      ParameterizedType ptype = (ParameterizedType)type;
-      Class raw = (Class)ptype.getRawType();
-      Type[] params = ptype.getActualTypeArguments();
-      if (Map.class.isAssignableFrom(raw)) {                 // Map
-        Type key = params[0];
-        Type value = params[1];
-        if (!(key == String.class))
-          throw new AvroTypeException("Map key class not String: "+key);
-        return Schema.createMap(createSchema(value, names));
-      } else if (Collection.class.isAssignableFrom(raw)) {   // Collection
-        if (params.length != 1)
-          throw new AvroTypeException("No array type specified.");
-        Schema schema = Schema.createArray(createSchema(params[0], names));
-        schema.addProp(CLASS_PROP, raw.getName());
-        return schema;
-      }
-    } else if ((type == Short.class) || (type == Short.TYPE)) {
-      Schema result = Schema.create(Schema.Type.INT);
-      result.addProp(CLASS_PROP, Short.class.getName());
-      return result;
-    } else if (type instanceof Class) {                      // Class
-      Class<?> c = (Class<?>)type;
-      if (c.isPrimitive() || Number.class.isAssignableFrom(c)
-          || c == Void.class || c == Boolean.class)          // primitive
-        return super.createSchema(type, names);
-      if (c.isArray()) {                                     // array
-        Class component = c.getComponentType();
-        if (component == Byte.TYPE)                          // byte array
-          return Schema.create(Schema.Type.BYTES);
-        Schema result = Schema.createArray(createSchema(component, names));
-        setElement(result, component);
-        return result;
-      }
-      if (c == String.class)                                 // String
-        return Schema.create(Schema.Type.STRING);
-      String fullName = c.getName();
-      Schema schema = names.get(fullName);
-      if (schema == null) {
-        String name = c.getSimpleName();
-        String space = c.getPackage().getName();
-        if (c.getEnclosingClass() != null)                   // nested class
-          space = c.getEnclosingClass().getName() + "$";
-        Union union = c.getAnnotation(Union.class);
-        if (union != null) {                                 // union annotated
-          return getAnnotatedUnion(union, names);
-        } else if (c.isAnnotationPresent(Stringable.class)){ // Stringable
-          Schema result = Schema.create(Schema.Type.STRING);
-          result.addProp(CLASS_PROP, c.getName());
-          return result;
-        } else if (c.isEnum()) {                             // Enum
-          List<String> symbols = new ArrayList<String>();
-          Enum[] constants = (Enum[])c.getEnumConstants();
-          for (int i = 0; i < constants.length; i++)
-            symbols.add(constants[i].name());
-          schema = Schema.createEnum(name, null /* doc */, space, symbols);
-        } else if (GenericFixed.class.isAssignableFrom(c)) { // fixed
-          int size = c.getAnnotation(FixedSize.class).value();
-          schema = Schema.createFixed(name, null /* doc */, space, size);
-        } else {                                             // record
-          List<Schema.Field> fields = new ArrayList<Schema.Field>();
-          boolean error = Throwable.class.isAssignableFrom(c);
-          schema = Schema.createRecord(name, null /* doc */, space, error);
-          names.put(c.getName(), schema);
-          for (Field field : getFields(c))
-            if ((field.getModifiers()&(Modifier.TRANSIENT|Modifier.STATIC))==0){
-              Schema fieldSchema = createFieldSchema(field, names);
-              fields.add(new Schema.Field(field.getName(),
-                  fieldSchema, null /* doc */, null));
-            }
-          if (error)                              // add Throwable message
-            fields.add(new Schema.Field("detailMessage", THROWABLE_MESSAGE,
-                                        null, null));
-          schema.setFields(fields);
-        }
-        names.put(fullName, schema);
-      }
-      return schema;
-    }
-    return super.createSchema(type, names);
-  }
-
-  private static final Schema THROWABLE_MESSAGE =
-    makeNullable(Schema.create(Schema.Type.STRING));
-
-  // if array element type is a class with a union annotation, note it
-  // this is required because we cannot set a property on the union itself 
-  @SuppressWarnings(value="unchecked")
-  private void setElement(Schema schema, Type element) {
-    if (!(element instanceof Class)) return;
-    Class<?> c = (Class<?>)element;
-    Union union = c.getAnnotation(Union.class);
-    if (union != null)                          // element is annotated union
-      schema.addProp(ELEMENT_PROP, c.getName());
-  }
-
-  // construct a schema from a union annotation
-  private Schema getAnnotatedUnion(Union union, Map<String,Schema> names) {
-    List<Schema> branches = new ArrayList<Schema>();
-    for (Class branch : union.value())
-      branches.add(createSchema(branch, names));
-    return Schema.createUnion(branches);
-  }
-
-  /** Create and return a union of the null schema and the provided schema. */
-  public static Schema makeNullable(Schema schema) {
-    return Schema.createUnion(Arrays.asList(Schema.create(Schema.Type.NULL),
-                                            schema));
-  }
-
-  // Return the non-static, non-transient fields of this class and its
-  // superclasses to serialize.
-  // Not cached, since this is only used to create schemas, which are cached.
-  private Collection<Field> getFields(Class recordClass) {
-    Map<String,Field> fields = new LinkedHashMap<String,Field>();
-    Class c = recordClass;
-    do {
-      if (c.getPackage().getName().startsWith("java."))
-        break;                                    // skip java built-in classes
-      for (Field field : c.getDeclaredFields())
-        if ((field.getModifiers() & (Modifier.TRANSIENT|Modifier.STATIC)) == 0)
-          if (fields.put(field.getName(), field) != null)
-            throw new AvroTypeException(c+" contains two fields named: "+field);
-      c = c.getSuperclass();
-    } while (c != null);
-    return fields.values();
-  }
-
-  /** Create a schema for a field. */
-  protected Schema createFieldSchema(Field field, Map<String, Schema> names) {
-    Schema schema = createSchema(field.getGenericType(), names);
-    if (field.isAnnotationPresent(Nullable.class))           // nullable
-      schema = makeNullable(schema);
-    return schema;
-  }
-
-  /** Return the protocol for a Java interface.
-   * <p>Note that this requires that <a
-   * href="http://paranamer.codehaus.org/">Paranamer</a> is run over compiled
-   * interface declarations, since Java 6 reflection does not provide access to
-   * method parameter names.  See Avro's build.xml for an example. */
-  @Override
-  public Protocol getProtocol(Class iface) {
-    Protocol protocol =
-      new Protocol(iface.getSimpleName(), iface.getPackage().getName()); 
-    Map<String,Schema> names = new LinkedHashMap<String,Schema>();
-    Map<String,Message> messages = protocol.getMessages();
-    for (Method method : iface.getMethods())
-      if ((method.getModifiers() & Modifier.STATIC) == 0) {
-        String name = method.getName();
-        if (messages.containsKey(name))
-          throw new AvroTypeException("Two methods with same name: "+name);
-        messages.put(name, getMessage(method, protocol, names));
-      }
-
-    // reverse types, since they were defined in reference order
-    List<Schema> types = new ArrayList<Schema>();
-    types.addAll(names.values());
-    Collections.reverse(types);
-    protocol.setTypes(types);
-
-    return protocol;
-  }
-
-  private final Paranamer paranamer = new CachingParanamer();
-
-  private Message getMessage(Method method, Protocol protocol,
-                             Map<String,Schema> names) {
-    List<Schema.Field> fields = new ArrayList<Schema.Field>();
-    String[] paramNames = paranamer.lookupParameterNames(method);
-    Type[] paramTypes = method.getGenericParameterTypes();
-    Annotation[][] annotations = method.getParameterAnnotations();
-    for (int i = 0; i < paramTypes.length; i++) {
-      Schema paramSchema = getSchema(paramTypes[i], names);
-      for (int j = 0; j < annotations[i].length; j++)
-        if (annotations[i][j] instanceof Union)
-          paramSchema = getAnnotatedUnion(((Union)annotations[i][j]), names);
-        else if (annotations[i][j] instanceof Nullable)
-          paramSchema = makeNullable(paramSchema);
-      String paramName =  paramNames.length == paramTypes.length
-        ? paramNames[i]
-        : paramSchema.getName()+i;
-      fields.add(new Schema.Field(paramName, paramSchema,
-        null /* doc */, null));
-    }
-    Schema request = Schema.createRecord(fields);
-
-    Union union = method.getAnnotation(Union.class);
-    Schema response = union == null
-      ? getSchema(method.getGenericReturnType(), names)
-      : getAnnotatedUnion(union, names);
-    if (method.isAnnotationPresent(Nullable.class))          // nullable
-      response = makeNullable(response);
-
-    List<Schema> errs = new ArrayList<Schema>();
-    errs.add(Protocol.SYSTEM_ERROR);              // every method can throw
-    for (Type err : method.getGenericExceptionTypes())
-      if (err != AvroRemoteException.class) 
-        errs.add(getSchema(err, names));
-    Schema errors = Schema.createUnion(errs);
-
-    return protocol.createMessage(method.getName(), null /* doc */, request, response, errors);
-  }
-
-  private Schema getSchema(Type type, Map<String,Schema> names) {
-    try {
-      return createSchema(type, names);
-    } catch (AvroTypeException e) {               // friendly exception
-      throw new AvroTypeException("Error getting schema for "+type+": "
-                                  +e.getMessage(), e);
-    }
-  }
-
-  @Override
-  public int compare(Object o1, Object o2, Schema s) {
-    throw new UnsupportedOperationException();
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectDatumReader.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectDatumReader.java
deleted file mode 100644
index 7b8dd41..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectDatumReader.java
+++ /dev/null
@@ -1,146 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.reflect;
-
-import java.io.IOException;
-import java.util.Collection;
-import java.util.ArrayList;
-import java.lang.reflect.Array;
-import java.lang.reflect.InvocationTargetException;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.Schema;
-import org.apache.avro.specific.SpecificDatumReader;
-import org.apache.avro.io.Decoder;
-
-/**
- * {@link org.apache.avro.io.DatumReader DatumReader} for existing classes via
- * Java reflection.
- */
-public class ReflectDatumReader<T> extends SpecificDatumReader<T> {
-  public ReflectDatumReader() {}
-
-  public ReflectDatumReader(Class<T> c) {
-    this(ReflectData.get().getSchema(c));
-  }
-
-  public ReflectDatumReader(Schema root) {
-    super(root);
-  }
-
-  @Override
-  protected void setField(Object record, String name, int position, Object o) {
-    try {
-      ReflectData.getField(record.getClass(), name).set(record, o);
-    } catch (IllegalAccessException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  @Override
-  protected Object getField(Object record, String name, int position) {
-    try {
-      return ReflectData.getField(record.getClass(), name).get(record);
-    } catch (IllegalAccessException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  @Override
-  @SuppressWarnings(value="unchecked")
-  protected Object newArray(Object old, int size, Schema schema) {
-    ReflectData data = ReflectData.get();
-    Class collectionClass = ReflectData.getClassProp(schema, ReflectData.CLASS_PROP);
-    if (collectionClass != null) {
-      if (old instanceof Collection) {
-        ((Collection)old).clear();
-        return old;
-      }
-      if (collectionClass.isAssignableFrom(ArrayList.class))
-        return new ArrayList();
-      return newInstance(collectionClass);
-    }
-    Class elementClass = ReflectData.getClassProp(schema, ReflectData.ELEMENT_PROP);
-    if (elementClass == null)
-      elementClass = data.getClass(schema.getElementType());
-    return Array.newInstance(elementClass, size);
-  }
-
-  @Override
-  protected Object peekArray(Object array) {
-    return null;
-  }
-
-  @Override
-  @SuppressWarnings(value="unchecked")
-  protected void addToArray(Object array, long pos, Object e) {
-    if (array instanceof Collection) {
-      ((Collection)array).add(e);
-    } else {
-      Array.set(array, (int)pos, e);
-    }
-  }
-
-  @Override
-  @SuppressWarnings(value="unchecked")
-  protected Object readString(Object old, Schema s,
-                              Decoder in) throws IOException {
-    String value = (String)readString(null, in);
-    Class c = ReflectData.getClassProp(s, ReflectData.CLASS_PROP);
-    if (c != null)                                // Stringable annotated class
-      try {                                       // use String-arg ctor
-        return c.getConstructor(String.class).newInstance(value);
-      } catch (NoSuchMethodException e) {
-        throw new AvroRuntimeException(e);
-      } catch (InstantiationException e) {
-        throw new AvroRuntimeException(e);
-      } catch (IllegalAccessException e) {
-        throw new AvroRuntimeException(e);
-      } catch (InvocationTargetException e) {
-        throw new AvroRuntimeException(e);
-      }
-    return value;
-  }
-
-  @Override
-  protected Object readString(Object old, Decoder in) throws IOException {
-    return super.readString(null, in).toString();
-  }
-
-  @Override
-  protected Object createString(String value) { return value; }
-
-  @Override
-  protected Object readBytes(Object old, Decoder in) throws IOException {
-    ByteBuffer bytes = in.readBytes(null);
-    byte[] result = new byte[bytes.remaining()];
-    bytes.get(result);
-    return result;
-  }
-
-  @Override
-  protected Object readInt(Object old,
-                           Schema expected, Decoder in) throws IOException {
-    Object value = in.readInt();
-    if (Short.class.getName().equals(expected.getProp(ReflectData.CLASS_PROP)))
-      value = ((Integer)value).shortValue();
-    return value;
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectDatumWriter.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectDatumWriter.java
deleted file mode 100644
index 9541311..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectDatumWriter.java
+++ /dev/null
@@ -1,125 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.reflect;
-
-import java.lang.reflect.Array;
-import java.io.IOException;
-import java.util.Iterator;
-import java.util.Collection;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.Schema;
-import org.apache.avro.specific.SpecificDatumWriter;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.util.Utf8;
-
-/**
- * {@link org.apache.avro.io.DatumWriter DatumWriter} for existing classes
- * via Java reflection.
- */
-public class ReflectDatumWriter<T> extends SpecificDatumWriter<T> {
-  public ReflectDatumWriter() {
-    this(ReflectData.get());
-  }
-
-  public ReflectDatumWriter(Class<T> c) {
-    this(c, ReflectData.get());
-  }
-
-  public ReflectDatumWriter(Class<T> c, ReflectData data) {
-    this(data.getSchema(c), data);
-  }
-
-  public ReflectDatumWriter(Schema root) {
-    this(root, ReflectData.get());
-  }
-
-  protected ReflectDatumWriter(Schema root, ReflectData reflectData) {
-    super(root, reflectData);
-  }
-  
-  protected ReflectDatumWriter(ReflectData reflectData) {
-    super(reflectData);
-  }
-  
-  @Override
-  protected Object getField(Object record, String name, int position) {
-    try {
-      return ReflectData.getField(record.getClass(), name).get(record);
-    } catch (IllegalAccessException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-  
-  @Override
-  @SuppressWarnings("unchecked")
-  protected long getArraySize(Object array) {
-    if (array instanceof Collection)
-      return ((Collection)array).size();
-    return Array.getLength(array);
-  }
-
-  @Override
-  @SuppressWarnings("unchecked")
-  protected Iterator<Object> getArrayElements(final Object array) {
-    if (array instanceof Collection)
-      return ((Collection<Object>)array).iterator();
-    return new Iterator<Object>() {
-      private int i = 0;
-      private final int length = Array.getLength(array);
-      public boolean hasNext() { return i < length; }
-      public Object next() { return Array.get(array, i++); }
-      public void remove() { throw new UnsupportedOperationException(); }
-    };
-  }
-
-  @Override
-  protected void writeString(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    if (schema.getProp(ReflectData.CLASS_PROP) != null) // Stringable annotated
-      datum = datum.toString();                         // call toString()
-    writeString(datum, out);
-  }
-
-  @Override
-  protected void writeString(Object datum, Encoder out) throws IOException {
-    out.writeString(new Utf8((String)datum));
-  }
-
-  @Override
-  protected void writeBytes(Object datum, Encoder out) throws IOException {
-    out.writeBytes((byte[])datum);
-  }
-
-  @Override
-  protected void write(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    if (datum instanceof Short)
-      datum = ((Short)datum).intValue();
-    try {
-      super.write(schema, datum, out);
-    } catch (NullPointerException e) {            // improve error message
-      NullPointerException result =
-        new NullPointerException("in "+schema.getName()+" "+e.getMessage());
-      result.initCause(e);
-      throw result;
-    }
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectRequestor.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectRequestor.java
deleted file mode 100644
index 8145089..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectRequestor.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.reflect;
-
-import java.io.IOException;
-import java.lang.reflect.Proxy;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.specific.SpecificRequestor;
-import org.apache.avro.ipc.Transceiver;
-
-/** A {@link org.apache.avro.ipc.Requestor} for existing interfaces. */
-public class ReflectRequestor extends SpecificRequestor {
-  
-  public ReflectRequestor(Class<?> iface, Transceiver transceiver)
-    throws IOException {
-    this(ReflectData.get().getProtocol(iface), transceiver);
-  }
-
-  protected ReflectRequestor(Protocol protocol, Transceiver transceiver)
-    throws IOException {
-    super(protocol, transceiver);
-  }
-    
-  @Override
-  protected DatumWriter<Object> getDatumWriter(Schema schema) {
-    return new ReflectDatumWriter<Object>(schema);
-  }
-
-  @Override
-  protected DatumReader<Object> getDatumReader(Schema schema) {
-    return new ReflectDatumReader<Object>(schema);
-  }
-
-  /** Create a proxy instance whose methods invoke RPCs. */
-  public static Object getClient(Class<?> iface, Transceiver transceiver)
-    throws IOException {
-    return getClient(iface, transceiver, ReflectData.get());
-  }
-
-  /** Create a proxy instance whose methods invoke RPCs. */
-  public static Object getClient(Class<?> iface, Transceiver transceiver,
-                                 ReflectData reflectData)
-    throws IOException {
-    Protocol protocol = reflectData.getProtocol(iface);
-    return Proxy.newProxyInstance(iface.getClassLoader(),
-                                  new Class[] { iface },
-                                  new ReflectRequestor(protocol, transceiver));
-  }
-  
-  /** Create a proxy instance whose methods invoke RPCs. */
-  public static Object getClient(Class<?> iface, ReflectRequestor rreq)
-    throws IOException {
-    return Proxy.newProxyInstance(iface.getClassLoader(),
-                                  new Class[] { iface }, rreq);
-  }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectResponder.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectResponder.java
deleted file mode 100644
index 200ea7d..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/ReflectResponder.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.reflect;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Protocol;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.specific.SpecificResponder;
-
-/** {@link org.apache.avro.ipc.Responder} for existing interfaces.*/
-public class ReflectResponder extends SpecificResponder {
-  public ReflectResponder(Class iface, Object impl) {
-    super(ReflectData.get().getProtocol(iface), impl, ReflectData.get());
-  }
-  
-  public ReflectResponder(Protocol protocol, Object impl) {
-    super(protocol, impl, ReflectData.get());
-  }
-
-  @Override
-  protected DatumWriter<Object> getDatumWriter(Schema schema) {
-    return new ReflectDatumWriter<Object>(schema);
-  }
-
-  @Override
-  protected DatumReader<Object> getDatumReader(Schema schema) {
-    return new ReflectDatumReader<Object>(schema);
-  }
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Stringable.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Stringable.java
deleted file mode 100644
index e734671..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Stringable.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.reflect;
-
-import java.lang.annotation.Documented;
-import java.lang.annotation.ElementType;
-import java.lang.annotation.Retention;
-import java.lang.annotation.RetentionPolicy;
-import java.lang.annotation.Target;
-
-/**
- * Declares that a class should be represented by an Avro string.  Its {@link
- * Object#toString()} method will be used to convert it to a string, and its
- * single String-argument constructor will be used to create instances.
- */
-@Retention(RetentionPolicy.RUNTIME)
-@Target({ElementType.TYPE})
-@Documented
-public @interface Stringable {}
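The contract this annotation relies on — serialize via `toString()`, deserialize via a single-String constructor (as `ReflectDatumReader.readString` does with `c.getConstructor(String.class)`) — can be exercised with any JDK class that satisfies it. A minimal sketch, using `java.net.URI` as a stand-in for a `@Stringable`-annotated class; this is illustrative, not Avro code:

```java
import java.lang.reflect.Constructor;
import java.net.URI;

/** Sketch of the Stringable round trip: write side calls toString(),
 *  read side rebuilds the object via the single-String constructor. */
public class StringableRoundTrip {
  static Object roundTrip(Object datum) throws Exception {
    String encoded = datum.toString();                       // writer side
    Constructor<?> ctor = datum.getClass().getConstructor(String.class);
    return ctor.newInstance(encoded);                        // reader side
  }

  public static void main(String[] args) throws Exception {
    URI original = new URI("http://example.com/path");
    System.out.println(original.equals(roundTrip(original))); // true
  }
}
```

The same mechanism fails fast (here with `NoSuchMethodException`, in Avro wrapped as `AvroRuntimeException`) when the class lacks the String constructor.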
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Union.java b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Union.java
deleted file mode 100644
index b22fa66..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/Union.java
+++ /dev/null
@@ -1,39 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.reflect;
-
-import java.lang.annotation.Documented;
-import java.lang.annotation.ElementType;
-import java.lang.annotation.Retention;
-import java.lang.annotation.RetentionPolicy;
-import java.lang.annotation.Target;
-
-/**
- * Declares that a Java type should be represented by an Avro union schema.
- * May be used for base classes or interfaces whose instantiable subclasses
- * are listed in the parameters to the @Union annotation.  If applied to a
- * method parameter, this determines the reflected message parameter type.
- * If applied to a method, this determines its return type.
- */
-@Retention(RetentionPolicy.RUNTIME)
-@Target({ElementType.TYPE, ElementType.PARAMETER, ElementType.METHOD})
-@Documented
-public @interface Union {
-  /** The instantiable classes that compose this union. */
-  Class[] value();
-}
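How such an annotation is consumed can be sketched with plain JDK reflection: read the branch classes off the annotated base type, as `ReflectData.getAnnotatedUnion` does before building the union schema. The `@Branches` annotation below is a hypothetical stand-in for Avro's `@Union`, used so the sketch runs without Avro on the classpath:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;

/** Sketch: recover the instantiable branch classes of a union-annotated
 *  base type via reflection (mirrors ReflectData.getAnnotatedUnion). */
public class UnionSketch {
  @Retention(RetentionPolicy.RUNTIME)
  @interface Branches { Class<?>[] value(); }   // stand-in for @Union

  @Branches({Integer.class, String.class})
  interface IntOrString {}

  static Class<?>[] branchesOf(Class<?> c) {
    Branches u = c.getAnnotation(Branches.class);
    return u == null ? new Class<?>[0] : u.value();
  }

  public static void main(String[] args) {
    // prints [class java.lang.Integer, class java.lang.String]
    System.out.println(Arrays.toString(branchesOf(IntOrString.class)));
  }
}
```

In Avro proper, each branch class would then be passed through `createSchema` and the results combined with `Schema.createUnion`.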
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/reflect/package.html
deleted file mode 100644
index 450c12a..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/reflect/package.html
+++ /dev/null
@@ -1,70 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-Use Java reflection to generate schemas and protocols for existing
-classes.
-
-<p>This API is not recommended except as a stepping stone for
-systems that currently use Java interfaces to define RPC protocols.
-For new RPC systems, the {@link org.apache.avro.specific specific} API
-is preferred.  For systems that process dynamic data, the {@link
-org.apache.avro.generic generic} API is probably best.
-
-<p>Java types are mapped to Avro schemas as follows:
-
-<ul>
-
-<li><b>Classes</b> are mapped to Avro records. Only concrete classes
-  with a no-argument constructor are supported.  Fields are not
-  permitted to be null.  All inherited fields that are not static or
-  transient are used.</li>
-
-<li><b>Arrays</b> are mapped to Avro array schemas.  If an array's
-  elements are a union defined by the {@link
-  org.apache.avro.reflect.Union Union} annotation, the "java-element-class"
-  property is set to the union's class, e.g.:
-  <pre>{"type": "array", "java-element-class": "org.acme.Foo"}</pre>
-</li>
-
-<li><b>Collection</b> implementations are mapped to Avro array schemas
-  with the "java-class" property set to the collection
-  implementation, e.g.:
-  <pre>{"type": "array", "java-class": "java.util.ArrayList"}</pre>
-</li>
-
-<li><b>{@link java.lang.String}</b> is mapped to an Avro string schema.</li>
-
-<li><b>byte[]</b> is mapped to an Avro bytes schema.</li>
-
-<li><b>short</b> is mapped to an Avro int schema with the "java-class"
-  property set to "java.lang.Short", e.g.:
-  <pre>{"type": "int", "java-class": "java.lang.Short"}</pre>
-</li>
-
-<li>All other types are mapped as in the {@link org.apache.avro.generic
-  generic} API.</li>
-
-</ul>
-
-The {@link org.apache.avro.reflect.Union Union} annotation can be used
-to support reflection of schemas for interfaces, abstract base classes
-and other uses of polymorphism.
-
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/FixedSize.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/FixedSize.java
deleted file mode 100644
index 8ccd185..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/FixedSize.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import java.lang.annotation.Documented;
-import java.lang.annotation.ElementType;
-import java.lang.annotation.Retention;
-import java.lang.annotation.RetentionPolicy;
-import java.lang.annotation.Target;
-
-/**
- * Declares the size of implementations of
- * {@link org.apache.avro.generic.GenericFixed GenericFixed}.
- */
-@Retention(RetentionPolicy.RUNTIME)
-@Target({ElementType.TYPE})
-@Documented
-public @interface FixedSize {
-  /** The declared size of instances of classes with this annotation. */
-  int value();
-}
-
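The `FixedSize` annotation removed above relies on `RetentionPolicy.RUNTIME` so that a fixed type's declared size survives compilation and can be read back reflectively. A minimal stdlib-only sketch of the same pattern (the names `FixedSizeDemo`, `Size`, and `Md5` are illustrative stand-ins, not Avro APIs):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class FixedSizeDemo {
  // Stand-in for FixedSize: RUNTIME retention keeps the declared size
  // available to reflection after compilation.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  @interface Size { int value(); }

  @Size(16)
  static class Md5 {}

  // Read the declared size back off the annotated class.
  static int declaredSize(Class<?> c) {
    return c.getAnnotation(Size.class).value();
  }

  public static void main(String[] args) {
    System.out.println(declaredSize(Md5.class)); // prints 16
  }
}
```

This is the same mechanism a runtime can use to size a buffer for a fixed type without instantiating it first.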
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/ProtocolTask.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/ProtocolTask.java
deleted file mode 100644
index 0be1d57..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/ProtocolTask.java
+++ /dev/null
@@ -1,80 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.ArrayList;
-
-import org.apache.avro.AvroRuntimeException;
-import org.apache.tools.ant.BuildException;
-import org.apache.tools.ant.DirectoryScanner;
-import org.apache.tools.ant.Project;
-import org.apache.tools.ant.Task;
-import org.apache.tools.ant.types.FileSet;
-
-/** Ant task to generate Java interface and classes for a protocol. */
-public class ProtocolTask extends Task {
-  private File src;
-  private File dest = new File(".");
-  private final ArrayList<FileSet> filesets = new ArrayList<FileSet>();
-  
-  /** Set the schema file. */
-  public void setFile(File file) { this.src = file; }
-  
-  /** Set the output directory. */
-  public void setDestdir(File dir) { this.dest = dir; }
-  
-  /** Add a fileset. */
-  public void addFileset(FileSet set) { filesets.add(set); }
-  
-  /** Run the compiler. */
-  public void execute() {
-    if (src == null && filesets.size()==0)
-      throw new BuildException("No file or fileset specified.");
-
-    if (src != null)
-      compile(src);
-
-    Project myProject = getProject();
-    for (int i = 0; i < filesets.size(); i++) {
-      FileSet fs = filesets.get(i);
-      DirectoryScanner ds = fs.getDirectoryScanner(myProject);
-      File dir = fs.getDir(myProject);
-      String[] srcs = ds.getIncludedFiles();
-      for (int j = 0; j < srcs.length; j++) {
-        compile(new File(dir, srcs[j]));
-      }
-    }
-  }
-  
-  protected void doCompile(File file, File dir) throws IOException {
-    SpecificCompiler.compileProtocol(file, dir);
-  }
-
-  private void compile(File file) {
-    try {
-      doCompile(file, dest);
-    } catch (AvroRuntimeException e) {
-      throw new BuildException(e);
-    } catch (IOException e) {
-      throw new BuildException(e);
-    }
-  }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SchemaTask.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SchemaTask.java
deleted file mode 100644
index ddb2a43..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SchemaTask.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import java.io.File;
-import java.io.IOException;
-
-/** Ant task to generate Java classes for a schema. */
-public class SchemaTask extends ProtocolTask {
-  protected void doCompile(File src, File dest) throws IOException {
-    SpecificCompiler.compileSchema(src, dest);
-  }
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificCompiler.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificCompiler.java
deleted file mode 100644
index d6063ae..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificCompiler.java
+++ /dev/null
@@ -1,423 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import java.io.File;
-import java.io.FileWriter;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.tool.Tool;
-
-/**
- * Generate specific Java interfaces and classes for protocols and schemas.
- *
- * Java reserved keywords are mangled so that generated code still compiles.
- */
-public class SpecificCompiler {
-  private final Set<Schema> queue = new HashSet<Schema>();
-  private final Protocol protocol;
-
-  /* List of Java reserved words from
-   * http://java.sun.com/docs/books/jls/third_edition/html/lexical.html. */
-  private static final Set<String> RESERVED_WORDS = new HashSet<String>(
-      Arrays.asList(new String[] {
-          "abstract", "assert", "boolean", "break", "byte", "case", "catch",
-          "char", "class", "const", "continue", "default", "do", "double",
-          "else", "enum", "extends", "false", "final", "finally", "float",
-          "for", "goto", "if", "implements", "import", "instanceof", "int",
-          "interface", "long", "native", "new", "null", "package", "private",
-          "protected", "public", "return", "short", "static", "strictfp",
-          "super", "switch", "synchronized", "this", "throw", "throws",
-          "transient", "true", "try", "void", "volatile", "while"
-        }));
-
-  public SpecificCompiler(Protocol protocol) {
-    // enqueue all types
-    for (Schema s : protocol.getTypes()) {
-      enqueue(s);
-    }
-    this.protocol = protocol;
-  }
-
-  public SpecificCompiler(Schema schema) {
-    enqueue(schema);
-    this.protocol = null;
-  }
-
-  /**
-   * Captures output file path and contents.
-   */
-  static class OutputFile {
-    String path;
-    String contents;
-
-    /**
-     * Writes the contents to the path under the destination directory,
-     * creating parent directories as necessary.  Returns the created file.
-     */
-    File writeToDestination(File destDir) throws IOException {
-      File f = new File(destDir, path);
-      f.getParentFile().mkdirs();
-      FileWriter fw = new FileWriter(f);
-      try {
-        fw.write(contents);
-      } finally {
-        fw.close();
-      }
-      return f;
-    }
-  }
-
-  /**
-   * Generates Java interface and classes for a protocol.
-   * @param src the source Avro protocol file
-   * @param dest the directory to place generated files in
-   */
-  public static void compileProtocol(File src, File dest) throws IOException {
-    Protocol protocol = Protocol.parse(src);
-    SpecificCompiler compiler = new SpecificCompiler(protocol);
-    compiler.compileToDestination(dest);
-  }
-
-  /** Generates Java classes for a schema. */
-  public static void compileSchema(File src, File dest) throws IOException {
-    Schema schema = Schema.parse(src);
-    SpecificCompiler compiler = new SpecificCompiler(schema);
-    compiler.compileToDestination(dest);
-  }
-
-  static String mangle(String word) {
-    if (RESERVED_WORDS.contains(word)) {
-      return word + "$";
-    }
-    return word;
-  }
-
-  /** Recursively enqueue schemas that need a class generated. */
-  private void enqueue(Schema schema) {
-    if (queue.contains(schema)) return;
-    switch (schema.getType()) {
-    case RECORD:
-      queue.add(schema);
-      for (Schema.Field field : schema.getFields())
-        enqueue(field.schema());
-      break;
-    case MAP:
-      enqueue(schema.getValueType());
-      break;
-    case ARRAY:
-      enqueue(schema.getElementType());
-      break;
-    case UNION:
-      for (Schema s : schema.getTypes())
-        enqueue(s);
-      break;
-    case ENUM:
-    case FIXED:
-      queue.add(schema);
-      break;
-    case STRING: case BYTES:
-    case INT: case LONG:
-    case FLOAT: case DOUBLE:
-    case BOOLEAN: case NULL:
-      break;
-    default: throw new RuntimeException("Unknown type: "+schema);
-    }
-  }
-
-  /** Generate java classes for enqueued schemas. */
-  Collection<OutputFile> compile() {
-    List<OutputFile> out = new ArrayList<OutputFile>();
-    for (Schema schema : queue) {
-      out.add(compile(schema));
-    }
-    if (protocol != null) {
-      out.add(compileInterface(protocol));
-    }
-    return out;
-  }
-
-  private void compileToDestination(File dst) throws IOException {
-    for (Schema schema : queue) {
-      OutputFile o = compile(schema);
-      o.writeToDestination(dst);
-    }
-    if (protocol != null) {
-      compileInterface(protocol).writeToDestination(dst);
-    }
-  }
-
-  private OutputFile compileInterface(Protocol protocol) {
-    OutputFile outputFile = new OutputFile();
-    String mangledName = mangle(protocol.getName());
-    outputFile.path = makePath(mangledName, protocol.getNamespace());
-    StringBuilder out = new StringBuilder();
-    header(out, protocol.getNamespace());
-    doc(out, 1, protocol.getDoc());
-    line(out, 0, "public interface " + mangledName + " {");
-    line(out, 1, "public static final org.apache.avro.Protocol PROTOCOL = org.apache.avro.Protocol.parse(\""
-           +esc(protocol)+"\");");
-    for (Map.Entry<String,Message> e : protocol.getMessages().entrySet()) {
-      String name = e.getKey();
-      Message message = e.getValue();
-      Schema request = message.getRequest();
-      Schema response = message.getResponse();
-      doc(out, 1, e.getValue().getDoc());
-      line(out, 1, unbox(response)+" "+ mangle(name)+"("+params(request)+")");
-      line(out, 2,"throws org.apache.avro.ipc.AvroRemoteException"+errors(message.getErrors())+";");
-    }
-    line(out, 0, "}");
-
-    outputFile.contents = out.toString();
-    return outputFile;
-  }
-
-
-  static String makePath(String name, String space) {
-    if (space == null || space.isEmpty()) {
-      return name + ".java";
-    } else {
-      return space.replace('.', File.separatorChar) + File.separatorChar
-        + name + ".java";
-    }
-  }
-
-  private void header(StringBuilder out, String namespace) {
-    if(namespace != null) {
-      line(out, 0, "package "+namespace+";\n");
-    }
-    line(out, 0, "@SuppressWarnings(\"all\")");
-  }
-
-  private String params(Schema request) {
-    StringBuilder b = new StringBuilder();
-    int count = 0;
-    for (Schema.Field param : request.getFields()) {
-      String paramName = mangle(param.name());
-      b.append(unbox(param.schema()));
-      b.append(" ");
-      b.append(paramName);
-      if (++count < request.getFields().size())
-        b.append(", ");
-    }
-    return b.toString();
-  }
-
-  private String errors(Schema errs) {
-    StringBuilder b = new StringBuilder();
-    for (Schema error : errs.getTypes().subList(1, errs.getTypes().size())) {
-      b.append(", ");
-      b.append(mangle(error.getFullName()));
-    }
-    return b.toString();
-  }
-
-  private OutputFile compile(Schema schema) {
-    OutputFile outputFile = new OutputFile();
-    String name = mangle(schema.getName());
-    outputFile.path = makePath(name, schema.getNamespace());
-    StringBuilder out = new StringBuilder();
-    header(out, schema.getNamespace());
-    switch (schema.getType()) {
-    case RECORD:
-      doc(out, 0, schema.getDoc());
-      line(out, 0, "public class "+name+
-           (schema.isError()
-            ? " extends org.apache.avro.specific.SpecificExceptionBase"
-             : " extends org.apache.avro.specific.SpecificRecordBase")
-           +" implements org.apache.avro.specific.SpecificRecord {");
-      // schema definition
-      line(out, 1, "public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse(\""
-           +esc(schema)+"\");");
-      // field declarations
-      for (Schema.Field field : schema.getFields()) {
-        doc(out, 1, field.doc());
-        line(out, 1, "public " + unbox(field.schema()) + " "
-             + mangle(field.name()) + ";");
-      }
-      // schema method
-      line(out, 1, "public org.apache.avro.Schema getSchema() { return SCHEMA$; }");
-      // get method
-      line(out, 1, "public java.lang.Object get(int field$) {");
-      line(out, 2, "switch (field$) {");
-      int i = 0;
-      for (Schema.Field field : schema.getFields())
-        line(out, 2, "case "+(i++)+": return "+mangle(field.name())+";");
-      line(out, 2, "default: throw new org.apache.avro.AvroRuntimeException(\"Bad index\");");
-      line(out, 2, "}");
-      line(out, 1, "}");
-      // set method
-      line(out, 1, "@SuppressWarnings(value=\"unchecked\")");
-      line(out, 1, "public void put(int field$, java.lang.Object value$) {");
-      line(out, 2, "switch (field$) {");
-      i = 0;
-      for (Schema.Field field : schema.getFields())
-        line(out, 2, "case "+(i++)+": "+mangle(field.name())+" = ("+
-             type(field.schema())+")value$; break;");
-      line(out, 2, "default: throw new org.apache.avro.AvroRuntimeException(\"Bad index\");");
-      line(out, 2, "}");
-      line(out, 1, "}");
-      line(out, 0, "}");
-      break;
-    case ENUM:
-      doc(out, 0, schema.getDoc());
-      line(out, 0, "public enum "+name+" { ");
-      StringBuilder b = new StringBuilder();
-      int count = 0;
-      for (String symbol : schema.getEnumSymbols()) {
-        b.append(mangle(symbol));
-        if (++count < schema.getEnumSymbols().size())
-          b.append(", ");
-      }
-      line(out, 1, b.toString());
-      line(out, 0, "}");
-      break;
-    case FIXED:
-      doc(out, 0, schema.getDoc());
-      line(out, 0, "@org.apache.avro.specific.FixedSize("+schema.getFixedSize()+")");
-      line(out, 0, "public class "+name+" extends org.apache.avro.specific.SpecificFixed {}");
-      break;
-    case MAP: case ARRAY: case UNION: case STRING: case BYTES:
-    case INT: case LONG: case FLOAT: case DOUBLE: case BOOLEAN: case NULL:
-      break;
-    default: throw new RuntimeException("Unknown type: "+schema);
-    }
-
-    outputFile.contents = out.toString();
-    return outputFile;
-  }
-
-  private void doc(StringBuilder out, int indent, String doc) {
-    if (doc != null) {
-      line(out, indent, "/** " + escapeForJavaDoc(doc) + " */");
-    }
-  }
-
-  /** Be sure that generated code will compile by replacing
-   * end-comment markers with the appropriate HTML entity. */
-  private String escapeForJavaDoc(String doc) {
-    return doc.replace("*/", "*&#47;");
-  }
-
-  private static final Schema NULL_SCHEMA = Schema.create(Schema.Type.NULL);
-
-  private String type(Schema schema) {
-    switch (schema.getType()) {
-    case RECORD:
-    case ENUM:
-    case FIXED:
-      return mangle(schema.getFullName());
-    case ARRAY:
-      return "org.apache.avro.generic.GenericArray<"+type(schema.getElementType())+">";
-    case MAP:
-      return "java.util.Map<org.apache.avro.util.Utf8,"+type(schema.getValueType())+">";
-    case UNION:
-      List<Schema> types = schema.getTypes();     // elide unions with null
-      if ((types.size() == 2) && types.contains(NULL_SCHEMA))
-        return type(types.get(types.get(0).equals(NULL_SCHEMA) ? 1 : 0));
-      return "java.lang.Object";
-    case STRING:  return "org.apache.avro.util.Utf8";
-    case BYTES:   return "java.nio.ByteBuffer";
-    case INT:     return "java.lang.Integer";
-    case LONG:    return "java.lang.Long";
-    case FLOAT:   return "java.lang.Float";
-    case DOUBLE:  return "java.lang.Double";
-    case BOOLEAN: return "java.lang.Boolean";
-    case NULL:    return "java.lang.Void";
-    default: throw new RuntimeException("Unknown type: "+schema);
-    }
-  }
-
-  private String unbox(Schema schema) {
-    switch (schema.getType()) {
-    case INT:     return "int";
-    case LONG:    return "long";
-    case FLOAT:   return "float";
-    case DOUBLE:  return "double";
-    case BOOLEAN: return "boolean";
-    default:      return type(schema);
-    }
-  }
-
-  private void line(StringBuilder out, int indent, String text) {
-    for (int i = 0; i < indent; i ++) {
-      out.append("  ");
-    }
-    out.append(text);
-    out.append("\n");
-  }
-
-  static String esc(Object o) {
-    return o.toString().replace("\"", "\\\"");
-  }
-
-  public static void main(String[] args) throws Exception {
-    //compileSchema(new File(args[0]), new File(args[1]));
-    compileProtocol(new File(args[0]), new File(args[1]));
-  }
-
-  /**
-   * Implementation of Tool for inclusion by the "avro-tools" runner.
-   */
-  public static class SpecificCompilerTool implements Tool {
-    @Override
-    public int run(InputStream in, PrintStream out, PrintStream err,
-        List<String> args) throws Exception {
-      if (args.size() != 3) {
-        System.err.println("Expected 3 arguments: (schema|protocol) inputfile outputdir");
-        return 1;
-      }
-      String method = args.get(0);
-      File input = new File(args.get(1));
-      File output = new File(args.get(2));
-      if ("schema".equals(method)) {
-        compileSchema(input, output);
-      } else if ("protocol".equals(method)) {
-        compileProtocol(input, output);
-      } else {
-        System.err.println("Expected \"schema\" or \"protocol\".");
-        return 1;
-      }
-      return 0;
-    }
-
-    @Override
-    public String getName() {
-      return "compile";
-    }
-
-    @Override
-    public String getShortDescription() {
-      return "Generates Java code for the given schema.";
-    }
-  }
-
-}
-
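Two small helpers in the deleted `SpecificCompiler` carry most of its naming logic: `mangle` appends `$` to names that collide with Java keywords, and `makePath` turns a name plus namespace into a source-file path. A stdlib-only sketch of both (the class name `NameMangleDemo` and the abbreviated keyword set are illustrative; the real compiler lists every Java reserved word):

```java
import java.io.File;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class NameMangleDemo {
  // Abbreviated reserved-word set; SpecificCompiler enumerates them all.
  static final Set<String> RESERVED =
      new HashSet<String>(Arrays.asList("class", "int", "new"));

  // Append '$' so a schema name that collides with a keyword still compiles.
  static String mangle(String word) {
    return RESERVED.contains(word) ? word + "$" : word;
  }

  // Map a name and namespace to a .java path, mirroring makePath: dots in
  // the namespace become directory separators.
  static String makePath(String name, String space) {
    if (space == null || space.isEmpty()) return name + ".java";
    return space.replace('.', File.separatorChar) + File.separatorChar
        + name + ".java";
  }

  public static void main(String[] args) {
    System.out.println(mangle("class"));             // prints class$
    System.out.println(makePath("Foo", "org.acme")); // org/acme/Foo.java on Unix
  }
}
```

Mangling with a trailing `$` rather than renaming keeps generated identifiers recognizably tied to the schema names they came from.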
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificData.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificData.java
deleted file mode 100644
index 157dd3d..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificData.java
+++ /dev/null
@@ -1,216 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import java.util.Map;
-import java.util.List;
-import java.util.WeakHashMap;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.LinkedHashMap;
-import java.nio.ByteBuffer;
-import java.lang.reflect.ParameterizedType;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Protocol;
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.AvroTypeException;
-import org.apache.avro.Schema.Type;
-import org.apache.avro.generic.GenericData;
-import org.apache.avro.generic.GenericArray;
-import org.apache.avro.util.Utf8;
-
-/** Utilities for generated Java classes and interfaces. */
-public class SpecificData extends GenericData {
-
-  private static final SpecificData INSTANCE = new SpecificData();
-
-  protected SpecificData() {}
-  
-  /** Return the singleton instance. */
-  public static SpecificData get() { return INSTANCE; }
-
-  @Override
-  protected boolean isRecord(Object datum) {
-    return datum instanceof SpecificRecord;
-  }
-
-  @Override
-  protected Schema getRecordSchema(Object record) {
-    return ((SpecificRecord)record).getSchema();
-  }
-
-  @Override
-  protected boolean isEnum(Object datum) {
-    return datum instanceof Enum;
-  }
-
-  @Override
-  public boolean validate(Schema schema, Object datum) {
-    switch (schema.getType()) {
-    case ENUM:
-      return datum instanceof Enum
-        && schema.getEnumSymbols().contains(((Enum)datum).name());
-    default:
-      return super.validate(schema, datum);
-    }
-  }
-
-  private Map<String,Class> classCache = new ConcurrentHashMap<String,Class>();
-
-  private static final Schema NULL_SCHEMA = Schema.create(Schema.Type.NULL);
-
-  /** Return the class that implements a schema. */
-  public Class getClass(Schema schema) {
-    switch (schema.getType()) {
-    case FIXED:
-    case RECORD:
-    case ENUM:
-      String name = schema.getFullName();
-      Class c = classCache.get(name);
-      if (c == null) {
-        try {
-          c = Class.forName(getClassName(schema));
-          classCache.put(name, c);
-        } catch (ClassNotFoundException e) {
-          throw new AvroRuntimeException(e);
-        }
-      }
-      return c;
-    case ARRAY:   return GenericArray.class;
-    case MAP:     return Map.class;
-    case UNION:
-      List<Schema> types = schema.getTypes();     // elide unions with null
-      if ((types.size() == 2) && types.contains(NULL_SCHEMA))
-        return getClass(types.get(types.get(0).equals(NULL_SCHEMA) ? 1 : 0));
-      return Object.class;
-    case STRING:  return Utf8.class;
-    case BYTES:   return ByteBuffer.class;
-    case INT:     return Integer.TYPE;
-    case LONG:    return Long.TYPE;
-    case FLOAT:   return Float.TYPE;
-    case DOUBLE:  return Double.TYPE;
-    case BOOLEAN: return Boolean.TYPE;
-    case NULL:    return Void.TYPE;
-    default: throw new AvroRuntimeException("Unknown type: "+schema);
-    }
-  }
-
-  /** Returns the Java class name indicated by a schema's name and namespace. */
-  public String getClassName(Schema schema) {
-    String namespace = schema.getNamespace();
-    String name = schema.getName();
-    if (namespace == null)
-      return name;
-    String dot = namespace.endsWith("$") ? "" : ".";
-    return namespace + dot + name;
-  }
-
-  private final WeakHashMap<java.lang.reflect.Type,Schema> schemaCache =
-    new WeakHashMap<java.lang.reflect.Type,Schema>();
-
-  /** Find the schema for a Java type. */
-  public Schema getSchema(java.lang.reflect.Type type) {
-    Schema schema = schemaCache.get(type);
-    if (schema == null) {
-      schema = createSchema(type, new LinkedHashMap<String,Schema>());
-      schemaCache.put(type, schema);
-    }
-    return schema;
-  }
-
-  /** Create the schema for a Java type. */
-  @SuppressWarnings(value="unchecked")
-  protected Schema createSchema(java.lang.reflect.Type type,
-                                Map<String,Schema> names) {
-    if (type == Utf8.class)
-      return Schema.create(Type.STRING);
-    else if (type == ByteBuffer.class)
-      return Schema.create(Type.BYTES);
-    else if ((type == Integer.class) || (type == Integer.TYPE))
-      return Schema.create(Type.INT);
-    else if ((type == Long.class) || (type == Long.TYPE))
-      return Schema.create(Type.LONG);
-    else if ((type == Float.class) || (type == Float.TYPE))
-      return Schema.create(Type.FLOAT);
-    else if ((type == Double.class) || (type == Double.TYPE))
-      return Schema.create(Type.DOUBLE);
-    else if ((type == Boolean.class) || (type == Boolean.TYPE))
-      return Schema.create(Type.BOOLEAN);
-    else if ((type == Void.class) || (type == Void.TYPE))
-      return Schema.create(Type.NULL);
-    else if (type instanceof ParameterizedType) {
-      ParameterizedType ptype = (ParameterizedType)type;
-      Class raw = (Class)ptype.getRawType();
-      java.lang.reflect.Type[] params = ptype.getActualTypeArguments();
-      if (GenericArray.class.isAssignableFrom(raw)) { // array
-        if (params.length != 1)
-          throw new AvroTypeException("No array type specified.");
-        return Schema.createArray(createSchema(params[0], names));
-      } else if (Map.class.isAssignableFrom(raw)) {   // map
-        java.lang.reflect.Type key = params[0];
-        java.lang.reflect.Type value = params[1];
-        if (!(key == Utf8.class))
-          throw new AvroTypeException("Map key class not Utf8: "+key);
-        return Schema.createMap(createSchema(value, names));
-      } else {
-        return createSchema(raw, names);
-      }
-    } else if (type instanceof Class) {               // class
-      Class c = (Class)type;
-      String fullName = c.getName();
-      Schema schema = names.get(fullName);
-      if (schema == null)
-        try {
-          schema = (Schema)(c.getDeclaredField("SCHEMA$").get(null));
-        } catch (NoSuchFieldException e) {
-          throw new AvroRuntimeException(e);
-        } catch (IllegalAccessException e) {
-          throw new AvroRuntimeException(e);
-        }
-      names.put(fullName, schema);
-      return schema;
-    }
-    throw new AvroTypeException("Unknown type: "+type);
-  }
-
-  /** Return the protocol for a Java interface. */
-  public Protocol getProtocol(Class iface) {
-    try {
-      return (Protocol)(iface.getDeclaredField("PROTOCOL").get(null));
-    } catch (NoSuchFieldException e) {
-      throw new AvroRuntimeException(e);
-    } catch (IllegalAccessException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-  @Override
-  public int compare(Object o1, Object o2, Schema s) {
-    switch (s.getType()) {
-    case ENUM:
-      return ((Enum)o1).ordinal() - ((Enum)o2).ordinal();
-    default:
-      return super.compare(o1, o2, s);
-    }
-  }
-
-}
-
-
-
-
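Both `SpecificData.getClass` and `SpecificCompiler.type` above contain the same "elide unions with null" branch: a two-branch union of `null` and some type `T` maps to the Java type for `T` instead of `Object`. A stdlib-only sketch of that branch using schema names as strings (`UnionElisionDemo` and `elide` are illustrative names, not Avro APIs):

```java
import java.util.Arrays;
import java.util.List;

public class UnionElisionDemo {
  // A two-branch union containing "null" collapses to its non-null branch;
  // any other union falls back to java.lang.Object, as in the code above.
  static String elide(List<String> branches) {
    if (branches.size() == 2 && branches.contains("null")) {
      return branches.get("null".equals(branches.get(0)) ? 1 : 0);
    }
    return "java.lang.Object";
  }

  public static void main(String[] args) {
    System.out.println(elide(Arrays.asList("null", "string"))); // string
    System.out.println(elide(Arrays.asList("int", "string")));  // java.lang.Object
  }
}
```

Eliding these unions lets the common "optional field" pattern surface as a plain (nullable) Java reference type rather than an untyped `Object`.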
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificDatumReader.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificDatumReader.java
deleted file mode 100644
index 327b7ff..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificDatumReader.java
+++ /dev/null
@@ -1,89 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import java.lang.reflect.Constructor;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-
-import org.apache.avro.Schema;
-import org.apache.avro.generic.GenericDatumReader;
-
-/** {@link org.apache.avro.io.DatumReader DatumReader} for generated Java classes. */
-public class SpecificDatumReader<T> extends GenericDatumReader<T> {
-  public SpecificDatumReader() {}
-
-  public SpecificDatumReader(Class<T> c) {
-    this(SpecificData.get().getSchema(c));
-  }
-
-  public SpecificDatumReader(Schema schema) {
-    super(schema);
-  }
-
-  @Override
-  protected Object newRecord(Object old, Schema schema) {
-    Class c = SpecificData.get().getClass(schema);
-    return (c.isInstance(old) ? old : newInstance(c));
-  }
-
-  @Override
-  protected void setField(Object record, String name, int position, Object o) {
-    ((SpecificRecord)record).put(position, o);
-  }
-  @Override
-  protected Object getField(Object record, String name, int position) {
-    return ((SpecificRecord)record).get(position);
-  }
-
-  @Override
-  @SuppressWarnings("unchecked")
-  protected Object createEnum(String symbol, Schema schema) {
-    return Enum.valueOf(SpecificData.get().getClass(schema), symbol);
-  }
-
-  @Override
-  protected Object createFixed(Object old, Schema schema) {
-    Class c = SpecificData.get().getClass(schema);
-    return c.isInstance(old) ? old : newInstance(c);
-  }
-
-  private static final Class<?>[] EMPTY_ARRAY = new Class[]{};
-  private static final Map<Class,Constructor> CTOR_CACHE =
-    new ConcurrentHashMap<Class,Constructor>();
-
-  /** Create an instance of a class. */
-  @SuppressWarnings("unchecked")
-  protected static Object newInstance(Class c) {
-    Object result;
-    try {
-      Constructor meth = (Constructor)CTOR_CACHE.get(c);
-      if (meth == null) {
-        meth = c.getDeclaredConstructor(EMPTY_ARRAY);
-        meth.setAccessible(true);
-        CTOR_CACHE.put(c, meth);
-      }
-      result = meth.newInstance();
-    } catch (Exception e) {
-      throw new RuntimeException(e);
-    }
-    return result;
-  }
-
-}
-
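The deleted `SpecificDatumReader` avoids repeated reflective lookups by caching each class's no-arg `Constructor` in a `ConcurrentHashMap`. A minimal stand-alone sketch of the same pattern, using only the JDK (class and method names here are illustrative, not Avro API):

```java
import java.lang.reflect.Constructor;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CtorCache {
  private static final Map<Class<?>, Constructor<?>> CACHE =
      new ConcurrentHashMap<>();

  /** Instantiate c via its (possibly non-public) no-arg constructor. */
  public static Object newInstance(Class<?> c) {
    try {
      Constructor<?> ctor = CACHE.get(c);
      if (ctor == null) {               // benign race: at worst a duplicate lookup
        ctor = c.getDeclaredConstructor();
        ctor.setAccessible(true);
        CACHE.put(c, ctor);
      }
      return ctor.newInstance();
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    // Second call for the same class hits the cache instead of reflection.
    System.out.println(newInstance(StringBuilder.class).getClass().getSimpleName());
  }
}
```

The cache trades a possible duplicate `getDeclaredConstructor` call under contention for lock-free reads, which is the same trade-off the deleted code makes.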
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificDatumWriter.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificDatumWriter.java
deleted file mode 100644
index 22a827f..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificDatumWriter.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import java.io.IOException;
-
-import org.apache.avro.Schema;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.Encoder;
-
-/** {@link org.apache.avro.io.DatumWriter DatumWriter} for generated Java classes. */
-public class SpecificDatumWriter<T> extends GenericDatumWriter<T> {
-  public SpecificDatumWriter() {}
-
-  public SpecificDatumWriter(Class<T> c) {
-    super(SpecificData.get().getSchema(c), SpecificData.get());
-  }
-  
-  public SpecificDatumWriter(Schema schema) {
-    super(schema, SpecificData.get());
-  }
-  
-  protected SpecificDatumWriter(Schema root, SpecificData specificData) {
-    super(root, specificData);
-  }
-  
-  protected SpecificDatumWriter(SpecificData specificData) {
-    super(specificData);
-  }
-  
-  @Override
-  protected Object getField(Object record, String name, int position) {
-    return ((SpecificRecord)record).get(position);
-  }
-
-  @Override
-  protected void writeEnum(Schema schema, Object datum, Encoder out)
-    throws IOException {
-    out.writeEnum(((Enum)datum).ordinal());
-  }
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificExceptionBase.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificExceptionBase.java
deleted file mode 100644
index b3cd436..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificExceptionBase.java
+++ /dev/null
@@ -1,44 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.specific;
-
-import org.apache.avro.Schema;
-import org.apache.avro.ipc.AvroRemoteException;
-
-/** Base class for specific exceptions. */
-public abstract class SpecificExceptionBase extends AvroRemoteException
-  implements SpecificRecord {
-
-  public abstract Schema getSchema();
-  public abstract Object get(int field);
-  public abstract void put(int field, Object value);
-
-  public boolean equals(Object that) {
-    if (that == this) return true;                        // identical object
-    if (!(that instanceof SpecificExceptionBase)) return false; // not a record
-    if (this.getClass() != that.getClass()) return false; // not same schema
-    return SpecificData.get().compare(this, that, this.getSchema()) == 0;
-  }
-
-  public int hashCode() {
-    return SpecificData.get().hashCode(this, this.getSchema());
-  }
-
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificFixed.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificFixed.java
deleted file mode 100644
index 312e956..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificFixed.java
+++ /dev/null
@@ -1,28 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import org.apache.avro.generic.GenericData;
-
-/** Base class for generated fixed-sized data classes. */
-public abstract class SpecificFixed extends GenericData.Fixed {
-  public SpecificFixed() {
-    bytes(new byte[getClass().getAnnotation(FixedSize.class).value()]);
-  }
-}
-
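`SpecificFixed` sizes its byte buffer from a `@FixedSize` annotation on the generated subclass, read reflectively in the base constructor. A self-contained sketch of that annotation-driven sizing, with illustrative names in place of Avro's:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class FixedDemo {
  /** Stand-in for Avro's FixedSize annotation. */
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Size { int value(); }

  /** Base class allocates its buffer from the subclass's annotation. */
  public static abstract class FixedBase {
    public final byte[] bytes;
    protected FixedBase() {
      bytes = new byte[getClass().getAnnotation(Size.class).value()];
    }
  }

  @Size(16)  // e.g. an MD5 digest is always 16 bytes
  public static class Md5 extends FixedBase {}

  public static void main(String[] args) {
    System.out.println(new Md5().bytes.length);
  }
}
```

`getClass()` in the base constructor resolves to the concrete subclass, so each generated fixed type carries its size declaratively rather than in code.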
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRecord.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRecord.java
deleted file mode 100644
index ad78df2..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRecord.java
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import org.apache.avro.generic.IndexedRecord;
-
-/** Implemented by generated record classes. Permits efficient access to
- * fields.*/
-public interface SpecificRecord extends IndexedRecord {
-}
-
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRecordBase.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRecordBase.java
deleted file mode 100644
index d0a6705..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRecordBase.java
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import org.apache.avro.Schema;
-
-/** Base class for generated record classes. */
-public abstract class SpecificRecordBase
-  implements SpecificRecord, Comparable<SpecificRecord> {
-
-  public abstract Schema getSchema();
-  public abstract Object get(int field);
-  public abstract void put(int field, Object value);
-
-  @Override
-  public boolean equals(Object that) {
-    if (that == this) return true;                        // identical object
-    if (!(that instanceof SpecificRecord)) return false;  // not a record
-    if (this.getClass() != that.getClass()) return false; // not same schema
-    return this.compareTo((SpecificRecord)that) == 0;
-  }
-    
-  @Override
-  public int hashCode() {
-    return SpecificData.get().hashCode(this, this.getSchema());
-  }
-
-  @Override
-  public int compareTo(SpecificRecord that) {
-    return SpecificData.get().compare(this, that, this.getSchema());
-  }
-
-  @Override
-  public String toString() {
-    return SpecificData.get().toString(this);
-  }
-
-}
-
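`SpecificRecordBase` defines `equals` in terms of `compareTo`, after first rejecting objects of a different concrete class (a different schema). A reduced sketch of that pattern with a toy record type (names are illustrative):

```java
public class RecordEquality {
  /** Mirrors SpecificRecordBase: equality delegates to comparison. */
  public static abstract class Base implements Comparable<Base> {
    @Override public boolean equals(Object that) {
      if (that == this) return true;                        // identical object
      if (!(that instanceof Base)) return false;            // not a record
      if (this.getClass() != that.getClass()) return false; // not same schema
      return this.compareTo((Base) that) == 0;
    }
    // Trivial but consistent with equals; a real impl hashes the fields.
    @Override public int hashCode() { return 0; }
  }

  public static class IntRecord extends Base {
    public final int value;
    public IntRecord(int value) { this.value = value; }
    @Override public int compareTo(Base that) {
      return Integer.compare(value, ((IntRecord) that).value);
    }
  }

  public static void main(String[] args) {
    System.out.println(new IntRecord(3).equals(new IntRecord(3)));
  }
}
```

The class check before the cast means `compareTo` is only ever asked to order records of the same type, which is why the unchecked cast inside it is safe.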
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRequestor.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRequestor.java
deleted file mode 100644
index c55afbc..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificRequestor.java
+++ /dev/null
@@ -1,109 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.specific;
-
-import java.io.IOException;
-import java.lang.reflect.Method;
-import java.lang.reflect.Proxy;
-import java.lang.reflect.InvocationHandler;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.Transceiver;
-import org.apache.avro.ipc.Requestor;
-
-/** {@link org.apache.avro.ipc.Requestor Requestor} for generated interfaces. */
-public class SpecificRequestor extends Requestor implements InvocationHandler {
-  
-  public SpecificRequestor(Class<?> iface, Transceiver transceiver)
-    throws IOException {
-    this(SpecificData.get().getProtocol(iface), transceiver);
-  }
-
-  protected SpecificRequestor(Protocol protocol, Transceiver transceiver)
-    throws IOException {
-    super(protocol, transceiver);
-  }
-
-  @Override
-  public Object invoke(Object proxy, Method method, Object[] args)
-    throws Throwable {
-    return request(method.getName(), args);
-  }
-
-  protected DatumWriter<Object> getDatumWriter(Schema schema) {
-    return new SpecificDatumWriter<Object>(schema);
-  }
-
-  protected DatumReader<Object> getDatumReader(Schema schema) {
-    return new SpecificDatumReader<Object>(schema);
-  }
-
-  @Override
-  public void writeRequest(Schema schema, Object request, Encoder out)
-    throws IOException {
-    Object[] args = (Object[])request;
-    int i = 0;
-    for (Schema.Field param : schema.getFields())
-      getDatumWriter(param.schema()).write(args[i++], out);
-  }
-    
-  @Override
-  public Object readResponse(Schema schema, Decoder in) throws IOException {
-    return getDatumReader(schema).read(null, in);
-  }
-
-  @Override
-  public Exception readError(Schema schema, Decoder in)
-    throws IOException {
-    Object value = getDatumReader(schema).read(null, in);
-    if (value instanceof Exception)
-      return (Exception)value;
-    return new AvroRemoteException(value);
-  }
-
-  /** Create a proxy instance whose methods invoke RPCs. */
-  public static Object getClient(Class<?> iface, Transceiver transceiver)
-    throws IOException {
-    return getClient(iface, transceiver, SpecificData.get());
-  }
-
-  /** Create a proxy instance whose methods invoke RPCs. */
-  public static Object getClient(Class<?> iface, Transceiver transceiver,
-                                 SpecificData specificData)
-    throws IOException {
-    Protocol protocol = specificData.getProtocol(iface);
-    return Proxy.newProxyInstance(iface.getClassLoader(),
-                                  new Class[] { iface },
-                                  new SpecificRequestor(protocol, transceiver));
-  }
-  
-  /** Create a proxy instance whose methods invoke RPCs. */
-  public static Object getClient(Class<?> iface, SpecificRequestor requestor)
-    throws IOException {
-    return Proxy.newProxyInstance(iface.getClassLoader(),
-                                  new Class[] { iface }, requestor);
-  }
-}
-
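`SpecificRequestor` turns every call on a generated interface into an RPC by implementing `InvocationHandler` and handing itself to `Proxy.newProxyInstance`. A minimal JDK-only sketch of that dynamic-proxy wiring (the interface and handler body are stand-ins, not Avro API):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
  public interface Greeter { String hello(String name); }

  public static Greeter client() {
    // Stand-in for request(method.getName(), args): just echo the call.
    InvocationHandler handler = (proxy, method, args) ->
        method.getName() + "(" + args[0] + ")";
    return (Greeter) Proxy.newProxyInstance(
        Greeter.class.getClassLoader(),
        new Class<?>[] { Greeter.class },
        handler);
  }

  public static void main(String[] args) {
    System.out.println(client().hello("avro"));
  }
}
```

Callers program against the plain interface; the handler is the single funnel where the requestor serializes the method name and arguments onto the wire.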
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificResponder.java b/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificResponder.java
deleted file mode 100644
index b21a623..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/SpecificResponder.java
+++ /dev/null
@@ -1,103 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.specific;
-
-import java.io.IOException;
-import java.lang.reflect.InvocationTargetException;
-import java.lang.reflect.Method;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Protocol;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.ipc.Responder;
-
-/** {@link org.apache.avro.ipc.Responder Responder} for generated interfaces.*/
-public class SpecificResponder extends Responder {
-  private Object impl;
-  private SpecificData data;
-
-  public SpecificResponder(Class iface, Object impl) {
-    this(SpecificData.get().getProtocol(iface), impl);
-  }
-    
-  public SpecificResponder(Protocol protocol, Object impl) {
-    this(protocol, impl, SpecificData.get());
-  }
-
-  protected SpecificResponder(Protocol protocol, Object impl,
-                              SpecificData data) {
-    super(protocol);
-    this.impl = impl;
-    this.data = data;
-  }
-
-  protected DatumWriter<Object> getDatumWriter(Schema schema) {
-    return new SpecificDatumWriter<Object>(schema);
-  }
-
-  protected DatumReader<Object> getDatumReader(Schema schema) {
-    return new SpecificDatumReader<Object>(schema);
-  }
-
-  @Override
-  public Object readRequest(Schema schema, Decoder in) throws IOException {
-    Object[] args = new Object[schema.getFields().size()];
-    int i = 0;
-    for (Schema.Field param : schema.getFields())
-      args[i++] = getDatumReader(param.schema()).read(null, in);
-    return args;
-  }
-
-  @Override
-  public void writeResponse(Schema schema, Object response, Encoder out)
-    throws IOException {
-    getDatumWriter(schema).write(response, out);
-  }
-
-  @Override
-  public void writeError(Schema schema, Object error,
-                         Encoder out) throws IOException {
-    getDatumWriter(schema).write(error, out);
-  }
-
-  @Override
-  public Object respond(Message message, Object request) throws Exception {
-    Class[] paramTypes = new Class[message.getRequest().getFields().size()];
-    int i = 0;
-    try {
-      for (Schema.Field param: message.getRequest().getFields())
-        paramTypes[i++] = data.getClass(param.schema());
-      Method method = impl.getClass().getMethod(message.getName(), paramTypes);
-      return method.invoke(impl, (Object[])request);
-    } catch (InvocationTargetException e) {
-      throw (Exception)e.getTargetException();
-    } catch (NoSuchMethodException e) {
-      throw new AvroRuntimeException(e);
-    } catch (IllegalAccessException e) {
-      throw new AvroRuntimeException(e);
-    }
-  }
-
-}
-
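`SpecificResponder.respond` dispatches an incoming message by reflective method lookup on the implementation object, unwrapping `InvocationTargetException` so the implementation's own exception reaches the caller. A stand-alone sketch of that dispatch-and-unwrap step (method and class names are illustrative):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class Dispatcher {
  /** Look up name(types...) on impl and invoke it with args. */
  public static Object respond(Object impl, String name,
                               Class<?>[] types, Object[] args) throws Exception {
    try {
      Method method = impl.getClass().getMethod(name, types);
      return method.invoke(impl, args);
    } catch (InvocationTargetException e) {
      // Re-throw what the implementation threw, not the reflection wrapper.
      throw (Exception) e.getTargetException();
    }
  }

  public static void main(String[] args) throws Exception {
    // Dispatch String.substring(1) on the receiver "hello".
    System.out.println(respond("hello", "substring",
        new Class<?>[] { int.class }, new Object[] { 1 }));
  }
}
```

Unwrapping matters for RPC: the responder serializes the implementation's declared error back to the client, which it could not do with the opaque reflection wrapper.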
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/specific/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/specific/package.html
deleted file mode 100644
index 6485e15..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/specific/package.html
+++ /dev/null
@@ -1,40 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-Generate specific Java classes for schemas and protocols.
-
-<p>This API is recommended for most RPC uses and for data applications
-  that always use the same datatypes, i.e., whose schemas are known at
-  compile time.  For data applications that accept dynamic datatypes,
-  the {@link org.apache.avro.generic generic} API is recommended.
-
-<p>Avro types are mapped to Java as follows:
-
-<ul>
-
-<li>Record, enum, and fixed schemas generate Java class definitions.</li>
-
-<li>All other types are mapped as in the {@link org.apache.avro.generic
-  generic} API.</li>
-
-</ul>
-
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/BinaryFragmentToJsonTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/BinaryFragmentToJsonTool.java
deleted file mode 100644
index 57f58d9..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/BinaryFragmentToJsonTool.java
+++ /dev/null
@@ -1,85 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.FileInputStream;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.util.List;
-
-import org.apache.avro.Schema;
-import org.apache.avro.io.DecoderFactory;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.JsonEncoder;
-import org.codehaus.jackson.JsonEncoding;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.JsonGenerator;
-
-/** Converts an input file from Avro binary into JSON. */
-public class BinaryFragmentToJsonTool implements Tool {
-  @Override
-  public int run(InputStream stdin, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-    if (args.size() != 2) {
-      err.println("Expected 2 arguments: schema binary_data_file");
-      err.println("Use '-' as binary_data_file for stdin.");
-      return 1;
-    }
-    Schema schema = Schema.parse(args.get(0));
-    InputStream input;
-    boolean needsClosing;
-    if (args.get(1).equals("-")) {
-      input = stdin;
-      needsClosing = false;
-    } else {
-      input = new FileInputStream(args.get(1));
-      needsClosing = true;
-    }
-    try {
-      DatumReader<Object> reader = new GenericDatumReader<Object>(schema);
-      Object datum = reader.read(null,
-          DecoderFactory.defaultFactory().createBinaryDecoder(input, null));
-      DatumWriter<Object> writer = new GenericDatumWriter<Object>(schema);
-      JsonGenerator g =
-        new JsonFactory().createJsonGenerator(out, JsonEncoding.UTF8);
-      g.useDefaultPrettyPrinter();
-      writer.write(datum, new JsonEncoder(schema, g));
-      g.flush();
-      out.println();
-      out.flush();
-    } finally {
-      if (needsClosing) {
-        input.close();
-      }
-    }
-    return 0;
-  }
-
-  @Override
-  public String getName() {
-    return "fragtojson";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Renders a binary-encoded Avro datum as JSON.";
-  }
-}
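Several of these tools share the convention that the argument `-` means "read from stdin" while anything else is opened as a file, with the caller closing only streams it opened. A minimal sketch of that helper (the class name is illustrative; the deleted tools use a similar `Util.fileOrStdin`):

```java
import java.io.ByteArrayInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FileOrStdin {
  /** "-" selects the provided stdin stream; anything else opens a file. */
  public static InputStream open(String arg, InputStream stdin) throws IOException {
    return arg.equals("-") ? stdin : new FileInputStream(arg);
  }

  public static void main(String[] args) throws IOException {
    InputStream stdin = new ByteArrayInputStream(new byte[0]);
    // The tool must not close stdin, so callers compare by identity.
    System.out.println(open("-", stdin) == stdin);
  }
}
```

Returning the caller's own stdin by identity is what lets the `needsClosing` / `input != stdin` checks in the tools above decide whether `close()` is safe.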
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileGetSchemaTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileGetSchemaTool.java
deleted file mode 100644
index 256a22a..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileGetSchemaTool.java
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.File;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.util.List;
-
-import org.apache.avro.file.DataFileReader;
-import org.apache.avro.generic.GenericDatumReader;
-
-/** Reads a data file to get its schema. */
-public class DataFileGetSchemaTool implements Tool {
-
-  @Override
-  public String getName() {
-    return "getschema";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Prints out schema of an Avro data file.";
-  }
-
-  @Override
-  public int run(InputStream stdin, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-    if (args.size() != 1) {
-      err.println("Expected 1 argument: input_file");
-      return 1;
-    }
-    DataFileReader<Void> reader =
-      new DataFileReader<Void>(new File(args.get(0)),
-                               new GenericDatumReader<Void>());
-    out.println(reader.getSchema().toString(true));
-    return 0;
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileReadTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileReadTool.java
deleted file mode 100644
index 5efad48..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileReadTool.java
+++ /dev/null
@@ -1,78 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.File;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.util.List;
-
-import org.apache.avro.Schema;
-import org.apache.avro.file.DataFileReader;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.io.JsonEncoder;
-import org.codehaus.jackson.JsonGenerator;
-
-/** Reads a data file and dumps to JSON */
-public class DataFileReadTool implements Tool {
-
-  @Override
-  public String getName() {
-    return "tojson";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Dumps an Avro data file as JSON, one record per line.";
-  }
-
-  @Override
-  public int run(InputStream stdin, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-    if (args.size() != 1) {
-      // Unlike other commands, "-" can't be used for stdin, because
-      // we can only use seekable files.
-      err.println("Expected 1 argument: input_file.");
-      return 1;
-    }
-
-    GenericDatumReader<Object> reader = new GenericDatumReader<Object>();
-    DataFileReader<Object> fileReader =
-      new DataFileReader<Object>(new File(args.get(0)), reader);
-    try {
-      Schema schema = fileReader.getSchema();
-      DatumWriter<Object> writer = new GenericDatumWriter<Object>(schema);
-      Encoder encoder = new JsonEncoder(schema, (JsonGenerator)null);
-      for (Object datum : fileReader) {
-        // init() recreates the internal Jackson JsonGenerator
-        encoder.init(out);
-        writer.write(datum, encoder);
-        encoder.flush();
-        out.println();
-      }
-      out.flush();
-    } finally {
-      fileReader.close();
-    }
-    return 0;
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileWriteTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileWriteTool.java
deleted file mode 100644
index cb7553c..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/DataFileWriteTool.java
+++ /dev/null
@@ -1,98 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.EOFException;
-import java.io.InputStream;
-import java.io.DataInputStream;
-import java.io.PrintStream;
-import java.util.List;
-
-import joptsimple.OptionParser;
-import joptsimple.OptionSet;
-import joptsimple.OptionSpec;
-
-import org.apache.avro.Schema;
-import org.apache.avro.file.CodecFactory;
-import org.apache.avro.file.DataFileWriter;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.JsonDecoder;
-
-/** Reads newline-delimited JSON records and writes an Avro data file. */
-public class DataFileWriteTool implements Tool {
-
-  @Override
-  public String getName() {
-    return "fromjson";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Reads JSON records and writes an Avro data file.";
-  }
-
-  @Override
-  public int run(InputStream stdin, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-
-    OptionParser p = new OptionParser();
-    OptionSpec<String> codec =
-      p.accepts("codec", "Compression codec")
-      .withRequiredArg()
-      .defaultsTo("null")
-      .ofType(String.class);
-    OptionSet opts = p.parse(args.toArray(new String[0]));
-
-    if (opts.nonOptionArguments().size() != 2) {
-      err.println("Expected 2 args: schema input_file");
-      p.printHelpOn(err);
-      return 1;
-    }
-
-    Schema schema = Schema.parse(args.get(0));
-    DatumReader<Object> reader = new GenericDatumReader<Object>(schema);
-
-    InputStream input = Util.fileOrStdin(args.get(1), stdin);
-    try {
-      DataInputStream din = new DataInputStream(input);
-      DataFileWriter<Object> writer =
-        new DataFileWriter<Object>(new GenericDatumWriter<Object>());
-      writer.setCodec(CodecFactory.fromString(codec.value(opts)));
-      writer.create(schema, out);
-      Decoder decoder = new JsonDecoder(schema, din);
-      Object datum;
-      while (true) {
-        try {
-          datum = reader.read(null, decoder);
-        } catch (EOFException e) {
-          break;
-        }
-        writer.append(datum);
-      }
-      writer.close();
-    } finally {
-      if (input != stdin) {
-        input.close();
-      }
-    }
-    return 0;
-  }
-}
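DataFileWriteTool terminates its record loop by catching EOFException rather than testing for end-of-input up front. A minimal self-contained sketch of that pattern (DataInputStream.readInt stands in for reader.read(null, decoder), and the class name is ours, not part of Avro):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

// Sketch of the read-until-EOFException loop used by DataFileWriteTool.
public class EofLoopDemo {
  static int sumInts(DataInputStream din) throws IOException {
    int sum = 0;
    while (true) {
      try {
        sum += din.readInt();     // one "record" per iteration
      } catch (EOFException e) {
        break;                    // normal termination: input exhausted
      }
    }
    return sum;
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    DataOutputStream dout = new DataOutputStream(buf);
    dout.writeInt(1); dout.writeInt(2); dout.writeInt(3);
    DataInputStream din =
        new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
    System.out.println(sumInts(din));  // prints 6
  }
}
```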
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/GenAvroTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/GenAvroTool.java
deleted file mode 100644
index 59daca7..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/GenAvroTool.java
+++ /dev/null
@@ -1,78 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.tool;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.genavro.GenAvro;
-
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.util.List;
-
-/**
- * Tool implementation for generating Avro JSON schemata from
- * genavro format files.
- */
-public class GenAvroTool implements Tool {
-  @Override
-  public int run(InputStream in, PrintStream out, PrintStream err,
-                  List<String> args) throws Exception {
-
-    InputStream parseIn = in;
-    PrintStream parseOut = out;
-
-    if (args.size() > 2 ||
-        (args.size() == 1 && (args.get(0).equals("--help") ||
-                              args.get(0).equals("-help")))) {
-      err.println("Usage: GenAvroTool [in] [out]");
-      err.println("");
-      err.println("If an output path is not specified, outputs to stdout.");
-      err.println("If no input or output is specified, takes input from");
-      err.println("stdin and outputs to stdout.");
-      err.println("The special path \"-\" may also be specified to refer to");
-      err.println("stdin and stdout.");
-      return -1;
-    }
-
-    if (args.size() >= 1 && ! "-".equals(args.get(0))) {
-      parseIn = new FileInputStream(args.get(0));
-    }
-    if (args.size() == 2 && ! "-".equals(args.get(1))) {
-      parseOut = new PrintStream(new FileOutputStream(args.get(1)));
-    }
-
-
-    GenAvro parser = new GenAvro(parseIn);
-    Protocol p = parser.CompilationUnit();
-    parseOut.print(p.toString(true));
-    return 0;
-  }
-
-  @Override
-  public String getName() {
-    return "genavro";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Generates a JSON schema from a GenAvro file";
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/JsonToBinaryFragmentTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/JsonToBinaryFragmentTool.java
deleted file mode 100644
index d3baeb7..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/JsonToBinaryFragmentTool.java
+++ /dev/null
@@ -1,77 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.FileInputStream;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.util.List;
-
-import org.apache.avro.Schema;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.JsonDecoder;
-
-/** Tool to convert JSON data into the binary form. */
-public class JsonToBinaryFragmentTool implements Tool {
-  @Override
-  public int run(InputStream stdin, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-    if (args.size() != 2) {
-      err.println("Expected 2 arguments: schema json_data_file");
-      err.println("Use '-' as json_data_file for stdin.");
-      return 1;
-    }
-    Schema schema = Schema.parse(args.get(0));
-    InputStream input;
-    boolean needsClosing;
-    if (args.get(1).equals("-")) {
-      input = stdin;
-      needsClosing = false;
-    } else {
-      input = new FileInputStream(args.get(1));
-      needsClosing = true;
-    }
-    
-    try {
-      GenericDatumReader<Object> reader =
-          new GenericDatumReader<Object>(schema);
-      Object datum = reader.read(null, new JsonDecoder(schema, input));
-
-      GenericDatumWriter<Object> writer =
-          new GenericDatumWriter<Object>(schema);
-      writer.write(datum, new BinaryEncoder(out));
-    } finally {
-      if (needsClosing) {
-        input.close();
-      }
-    }
-    return 0;
-  }
-
-  @Override
-  public String getName() {
-    return "jsontofrag";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Renders a JSON-encoded Avro datum as binary.";
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/Main.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/Main.java
deleted file mode 100644
index fc9ee53..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/Main.java
+++ /dev/null
@@ -1,97 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.util.Arrays;
-import java.util.Map;
-import java.util.TreeMap;
-
-import java.io.InputStream;
-
-import org.apache.avro.reflect.InduceSchemaTool;
-import org.apache.avro.specific.SpecificCompiler.SpecificCompilerTool;
-
-/** Command-line driver.*/
-public class Main {
-  /**
-   * Available tools, initialized in constructor.
-   */
-  final Map<String, Tool> tools;
-
-  int maxLen = 0;
-
-  Main() {
-    tools = new TreeMap<String, Tool>();
-    for (Tool tool : new Tool[] {
-        new SpecificCompilerTool(),
-        new InduceSchemaTool(),
-        new JsonToBinaryFragmentTool(),
-        new BinaryFragmentToJsonTool(),
-        new DataFileReadTool(),
-        new DataFileWriteTool(),
-        new DataFileGetSchemaTool(),
-        new GenAvroTool(),
-        new RpcReceiveTool(),
-        new RpcSendTool()
-        }) {
-      Tool prev = tools.put(tool.getName(), tool);
-      if (prev != null) {
-        throw new AssertionError(
-            "Two tools with identical names: " + tool + ", " + prev);
-      }
-      maxLen = Math.max(tool.getName().length(), maxLen);
-    }
-  }
-
-  public static void main(String[] args) throws Exception {
-    int rc = new Main().run(args);
-    System.exit(rc);
-  }
-
-  /**
-   * Delegates to tool specified on the command-line.
-   */
-  private int run(String[] args) throws Exception {
-    if (args.length != 0) {
-      Tool tool = tools.get(args[0]);
-      if (tool != null) {
-        return tool.run(
-          System.in, System.out, System.err, Arrays.asList(args).subList(1, args.length));
-      }
-    }
-    System.err.print("Version ");
-    printStream(Main.class.getClassLoader().getResourceAsStream("VERSION.txt"));
-    System.err.print(" of ");
-    printStream(Main.class.getClassLoader().getResourceAsStream("NOTICE.txt"));
-    System.err.println("----------------");
-
-    System.err.println("Available tools:");
-    for (Tool k : tools.values()) {
-      System.err.printf("%" + maxLen + "s  %s\n", k.getName(), k.getShortDescription());
-    }
-
-    return 1;
-  }
-
-  private static void printStream(InputStream in) throws Exception {
-    byte[] buffer = new byte[1024];
-    for (int i = in.read(buffer); i != -1; i = in.read(buffer))
-      System.err.write(buffer, 0, i);
-  }
-
-}
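Main's constructor keeps its tools in a sorted map keyed by name and fails fast on duplicate names. A minimal sketch of that registry pattern (NamedTool is a stand-in for the real Tool interface):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of Main's tool registry: tools keyed by name in a sorted map,
// with a fail-fast check for duplicate names.
public class ToolRegistryDemo {
  interface NamedTool { String getName(); }

  static Map<String, NamedTool> register(NamedTool... tools) {
    Map<String, NamedTool> registry = new TreeMap<String, NamedTool>();
    for (NamedTool tool : tools) {
      NamedTool prev = registry.put(tool.getName(), tool);
      if (prev != null)
        throw new AssertionError(
            "Two tools with identical names: " + tool.getName());
    }
    return registry;
  }

  public static void main(String[] args) {
    // TreeMap keeps the usage listing alphabetical regardless of
    // registration order.
    System.out.println(register(() -> "tojson", () -> "fromjson").keySet());
    // prints [fromjson, tojson]
  }
}
```

The sorted map is what makes the "Available tools" listing come out in alphabetical order without any explicit sort.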
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/RpcReceiveTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/RpcReceiveTool.java
deleted file mode 100644
index 6e7c39a..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/RpcReceiveTool.java
+++ /dev/null
@@ -1,176 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.io.File;
-import java.util.List;
-import java.util.concurrent.CountDownLatch;
-import java.net.URI;
-
-import joptsimple.OptionParser;
-import joptsimple.OptionSet;
-import joptsimple.OptionSpec;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.generic.GenericResponder;
-import org.apache.avro.io.JsonEncoder;
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.HttpServer;
-import org.codehaus.jackson.JsonEncoding;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.JsonGenerator;
-
-/**
- * Receives one RPC call and responds.  (The moral equivalent
- * of "netcat".)
- */
-public class RpcReceiveTool implements Tool {
-  private PrintStream out;
-  private Object response;
-  /** Used to communicate between server thread (responder) and run() */
-  private CountDownLatch latch;
-  private Message expectedMessage;
-  HttpServer server;
-
-  @Override
-  public String getName() {
-    return "rpcreceive";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Opens an HTTP RPC server and listens for one message.";
-  }
-  
-  private class SinkResponder extends GenericResponder {
-
-    public SinkResponder(Protocol local) {
-      super(local);
-    }
-
-    @Override
-    public Object respond(Message message, Object request)
-    throws AvroRemoteException {
-      if (!message.equals(expectedMessage)) {
-        out.println(String.format("Expected message '%s' but received '%s'.", 
-            expectedMessage.getName(), message.getName()));
-        latch.countDown();
-        throw new IllegalArgumentException("Unexpected message.");
-      }
-      out.print(message.getName());
-      out.print("\t");
-      try {
-        JsonGenerator jsonGenerator = new JsonFactory().createJsonGenerator(
-            out, JsonEncoding.UTF8);
-        JsonEncoder jsonEncoder = new JsonEncoder(message.getRequest(), jsonGenerator);
-
-        GenericDatumWriter<Object> writer = new GenericDatumWriter<Object>(
-            message.getRequest());
-        writer.write(request, jsonEncoder);
-        jsonGenerator.flush();
-        jsonEncoder.flush();
-        out.flush();
-      } catch (IOException e) {
-        throw new RuntimeException(e);
-      }
-      out.println();
-      new Thread() {
-        public void run() {
-          try {
-            Thread.sleep(1000);
-          } catch (InterruptedException e) {}
-          latch.countDown();
-        }
-      }.start();
-      return response;
-    }
-  }
-  
-  @Override
-  public int run(InputStream in, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-    // Split up into two functions for easier testing.
-    int r = run1(in, out, err, args);
-    if (r != 0) {
-      return r;
-    }
-    return run2(err);
-  }
-
-  int run1(InputStream in, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-    OptionParser p = new OptionParser();
-    OptionSpec<String> file =
-      p.accepts("file", "Data file containing response datum.")
-      .withRequiredArg()
-      .ofType(String.class);
-    OptionSpec<String> data =
-      p.accepts("data", "JSON-encoded response datum.")
-      .withRequiredArg()
-      .ofType(String.class);
-    OptionSet opts = p.parse(args.toArray(new String[0]));
-    args = opts.nonOptionArguments();
-
-    if (args.size() != 3) {
-      err.println("Usage: uri protocol_file message_name (-data d | -file f)");
-      p.printHelpOn(err);
-      return 1;
-    }
-
-    URI uri = new URI(args.get(0));
-    Protocol protocol = Protocol.parse(new File(args.get(1)));
-    String messageName = args.get(2);
-    expectedMessage = protocol.getMessages().get(messageName);
-    if (expectedMessage == null) {
-      err.println(String.format("No message named '%s' found in protocol '%s'.",
-          messageName, protocol));
-      return 1;
-    }
-    if (data.value(opts) != null) {
-      this.response =
-        Util.jsonToGenericDatum(expectedMessage.getResponse(),
-                                data.value(opts));
-    } else if (file.value(opts) != null) {
-      this.response = Util.datumFromFile(expectedMessage.getResponse(),
-                                         file.value(opts));
-    } else {
-      err.println("One of -data or -file must be specified.");
-      return 1;
-    }
-    
-    this.out = out;
-    
-    latch = new CountDownLatch(1);
-    server = new HttpServer(new SinkResponder(protocol), uri.getPort());
-    out.println("Port: " + server.getPort());
-    return 0;
-  }
-  
-  int run2(PrintStream err) throws InterruptedException {
-    latch.await();
-    err.println("Closing server.");
-    server.close();
-    return 0;
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/RpcSendTool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/RpcSendTool.java
deleted file mode 100644
index b040abd..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/RpcSendTool.java
+++ /dev/null
@@ -1,123 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.io.File;
-import java.net.URI;
-import java.util.List;
-
-import joptsimple.OptionParser;
-import joptsimple.OptionSet;
-import joptsimple.OptionSpec;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.generic.GenericRequestor;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.io.JsonEncoder;
-import org.apache.avro.ipc.HttpTransceiver;
-import org.codehaus.jackson.JsonEncoding;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.JsonGenerator;
-
-/**
- * Sends a single RPC message.
- */
-public class RpcSendTool implements Tool {
-  @Override
-  public String getName() {
-    return "rpcsend";
-  }
-
-  @Override
-  public String getShortDescription() {
-    return "Sends a single RPC message.";
-  }
-
-  @Override
-  public int run(InputStream in, PrintStream out, PrintStream err,
-      List<String> args) throws Exception {
-    OptionParser p = new OptionParser();
-    OptionSpec<String> file =
-      p.accepts("file", "Data file containing request parameters.")
-      .withRequiredArg()
-      .ofType(String.class);
-    OptionSpec<String> data =
-      p.accepts("data", "JSON-encoded request parameters.")
-      .withRequiredArg()
-      .ofType(String.class);
-    OptionSet opts = p.parse(args.toArray(new String[0]));
-    args = opts.nonOptionArguments();
-
-    if (args.size() != 3) {
-      err.println("Usage: uri protocol_file message_name (-data d | -file f)");
-      p.printHelpOn(err);
-      return 1;
-    }
-
-    URI uri = new URI(args.get(0));
-    Protocol protocol = Protocol.parse(new File(args.get(1)));
-    String messageName = args.get(2);
-    Message message = protocol.getMessages().get(messageName);
-    if (message == null) {
-      err.println(String.format("No message named '%s' found in protocol '%s'.",
-          messageName, protocol));
-      return 1;
-    }
-    
-    Object datum;
-    if (data.value(opts) != null) {
-      datum = Util.jsonToGenericDatum(message.getRequest(), data.value(opts));
-    } else if (file.value(opts) != null) {
-      datum = Util.datumFromFile(message.getRequest(), file.value(opts));
-    } else {
-      err.println("One of -data or -file must be specified.");
-      return 1;
-    }
-
-    GenericRequestor client = makeClient(protocol, uri);
-    Object response = client.request(message.getName(), datum);
-    dumpJson(out, message.getResponse(), response);
-    return 0;
-  }
-
-  private void dumpJson(PrintStream out, Schema schema, Object datum) 
-  throws IOException {
-    DatumWriter<Object> writer = new GenericDatumWriter<Object>(schema);
-    JsonGenerator g =
-      new JsonFactory().createJsonGenerator(out, JsonEncoding.UTF8);
-    g.useDefaultPrettyPrinter();
-    writer.write(datum, new JsonEncoder(schema, g));
-    g.flush();
-    out.println();
-    out.flush();
-  }
-
-  private GenericRequestor makeClient(Protocol protocol, URI uri) 
-  throws IOException {
-    HttpTransceiver transceiver = 
-      new HttpTransceiver(uri.toURL());
-    GenericRequestor requestor = new GenericRequestor(protocol, transceiver);
-    return requestor;
-  }
-}
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/Tool.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/Tool.java
deleted file mode 100644
index 3500a1f..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/Tool.java
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.InputStream;
-import java.io.PrintStream;
-import java.util.List;
-
-/**
- * Command-line "avro-tools" utilities should implement this
- * interface for delegation by {@link Main}.
- */
-public interface Tool {
-  /**
-   * Runs the tool with supplied arguments.  Input and output streams
-   * are customizable for easier testing.
-   *
-   * @param in Input stream to read data (typically System.in).
-   * @param out Output of tool (typically System.out).
-   * @param err Error stream (typically System.err).
-   * @param args Non-null list of arguments.
-   * @return result code (0 for success)
-   * @throws Exception Just like main(), tools may throw Exception.
-   */
-  int run(InputStream in, PrintStream out, PrintStream err, List<String> args) throws Exception;
-
-  /**
-   * Name of tool, to be used in listings.
-   */
-  String getName();
-
-  /**
-   * 1-line description to be used in command listings.
-   */
-  String getShortDescription();
-}
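The point of passing the streams into run(), as the Javadoc notes, is testability. A hypothetical Tool-style implementation (not part of Avro) showing how in-memory streams substitute for System.in/out:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.util.Collections;
import java.util.List;

// Hypothetical Tool-style implementation: because streams are injected
// rather than hard-wired to System.in/out, run() is trivially testable.
public class UpperTool {
  public int run(InputStream in, PrintStream out, PrintStream err,
                 List<String> args) throws IOException {
    BufferedReader reader =
        new BufferedReader(new InputStreamReader(in, "UTF-8"));
    String line;
    while ((line = reader.readLine()) != null)
      out.println(line.toUpperCase());
    return 0;                                   // 0 signals success
  }

  /** Runs the tool over an in-memory string, as a test would. */
  static String runOnString(String input) {
    try {
      ByteArrayOutputStream buf = new ByteArrayOutputStream();
      new UpperTool().run(new ByteArrayInputStream(input.getBytes("UTF-8")),
                          new PrintStream(buf, true, "UTF-8"), System.err,
                          Collections.<String>emptyList());
      return buf.toString("UTF-8");
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.print(runOnString("hello\n"));  // prints HELLO
  }
}
```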
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/Util.java b/branch-1.3/lang/java/src/java/org/apache/avro/tool/Util.java
deleted file mode 100644
index a7d9e1e..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/Util.java
+++ /dev/null
@@ -1,71 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileNotFoundException;
-import java.io.IOException;
-import java.io.InputStream;
-
-import org.apache.avro.Schema;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.io.JsonDecoder;
-import org.apache.avro.file.DataFileReader;
-
-/** Static utility methods for tools. */
-class Util {
-  /**
-   * Returns stdin if filename is "-", else opens the file
-   * and returns an InputStream for it.
-   */
-  static InputStream fileOrStdin(String filename, InputStream stdin) 
-      throws FileNotFoundException {
-    if (filename.equals("-")) {
-      return stdin;
-    } else {
-      return new FileInputStream(new File(filename));
-    }
-  }
-  
-  /** 
-   * Converts a String JSON object into a generic datum.
-   * 
-   * This is inefficient (creates extra objects), so should be used 
-   * sparingly.
-   */
-  static Object jsonToGenericDatum(Schema schema, String jsonData) throws IOException {
-    GenericDatumReader<Object> reader = 
-      new GenericDatumReader<Object>(schema);
-    Object datum = reader.read(null, new JsonDecoder(schema, jsonData));
-    return datum;
-  }
-
-  /** Reads and returns the first datum in a data file. */
-  static Object datumFromFile(Schema schema, String file) throws IOException {
-    DataFileReader<Object> in =
-      new DataFileReader<Object>(new File(file),
-                                 new GenericDatumReader<Object>(schema));
-    try {
-      return in.next();
-    } finally {
-      in.close();
-    }
-  }
-
-}
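Util.fileOrStdin implements the "-" convention the tools' usage messages describe. A self-contained sketch (class name ours), which also shows why callers like DataFileWriteTool only close the stream when it is not stdin:

```java
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;

// Sketch of Util.fileOrStdin's "-" convention: the literal path "-"
// selects the supplied stdin stream, anything else opens a file.
public class FileOrStdinDemo {
  static InputStream fileOrStdin(String filename, InputStream stdin)
      throws FileNotFoundException {
    return "-".equals(filename)
        ? stdin
        : new FileInputStream(new File(filename));
  }

  public static void main(String[] args) throws FileNotFoundException {
    InputStream fake = new ByteArrayInputStream(new byte[0]);
    // "-" returns the caller's stdin unchanged, so the caller must not
    // close it (closing stdin would break any subsequent reads).
    System.out.println(fileOrStdin("-", fake) == fake);  // prints true
  }
}
```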
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/tool/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/tool/package.html
deleted file mode 100644
index b08ebae..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/tool/package.html
+++ /dev/null
@@ -1,23 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-Avro command-line tool.
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/util/Utf8.java b/branch-1.3/lang/java/src/java/org/apache/avro/util/Utf8.java
deleted file mode 100644
index 352a040..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/util/Utf8.java
+++ /dev/null
@@ -1,94 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.util;
-
-import java.io.UnsupportedEncodingException;
-
-import org.apache.avro.io.BinaryData;
-import org.apache.avro.reflect.Stringable;
-
-/** A Utf8 string. */
-@Stringable
-public class Utf8 implements Comparable<Utf8> {
-  private static final byte[] EMPTY = new byte[0];
-
-  byte[] bytes = EMPTY;
-  int length;
-
-  public Utf8() {}
-
-  public Utf8(String string) {
-    try {
-      this.bytes = string.getBytes("UTF-8");
-    } catch (UnsupportedEncodingException e) {
-      throw new RuntimeException(e);
-    }
-    this.length = bytes.length;
-  }
-
-  public Utf8(byte[] bytes) {
-    this.bytes = bytes;
-    this.length = bytes.length;
-  }
-
-  public byte[] getBytes() { return bytes; }
-  public int getLength() { return length; }
-
-  public Utf8 setLength(int newLength) {
-    if (this.length < newLength) {
-      byte[] newBytes = new byte[newLength];
-      System.arraycopy(bytes, 0, newBytes, 0, this.length);
-      this.bytes = newBytes;
-    }
-    this.length = newLength;
-    return this;
-  }
-
-  public String toString() {
-    try {
-      return new String(bytes, 0, length, "UTF-8");
-    } catch (UnsupportedEncodingException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  public boolean equals(Object o) {
-    if (o == this) return true;
-    if (!(o instanceof Utf8)) return false;
-    Utf8 that = (Utf8)o;
-    if (!(this.length == that.length)) return false;
-    byte[] thatBytes = that.bytes;
-    for (int i = 0; i < this.length; i++)
-      if (bytes[i] != thatBytes[i])
-        return false;
-    return true;
-  }
-
-  public int hashCode() {
-    int hash = length;
-    for (int i = 0; i < this.length; i++)
-      hash += bytes[i] & 0xFF;
-    return hash;
-  }
-
-  public int compareTo(Utf8 that) {
-    return BinaryData.compareBytes(this.bytes, 0, this.length,
-                                   that.bytes, 0, that.length);
-  }
-
-}
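Utf8.compareTo delegates to BinaryData.compareBytes, which orders byte arrays lexicographically treating each byte as unsigned. A self-contained sketch of that comparison (an assumption about its semantics; the real method also takes offsets and lengths):

```java
// Sketch of the unsigned lexicographic comparison behind Utf8.compareTo.
public class Utf8CompareDemo {
  static int compareBytes(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      // Mask to compare as unsigned: (byte) 0xFF must sort after 0x01,
      // even though it is -1 as a signed Java byte.
      int diff = (a[i] & 0xFF) - (b[i] & 0xFF);
      if (diff != 0) return diff;
    }
    return a.length - b.length;  // a shorter prefix sorts first
  }

  public static void main(String[] args) {
    System.out.println(compareBytes(new byte[]{1}, new byte[]{(byte) 0xFF}) < 0);
    // prints true: 0xFF is 255 unsigned, not -1
  }
}
```

The unsigned masking is what makes byte-wise comparison of UTF-8 data agree with comparison of the decoded code points.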
diff --git a/branch-1.3/lang/java/src/java/org/apache/avro/util/package.html b/branch-1.3/lang/java/src/java/org/apache/avro/util/package.html
deleted file mode 100644
index 638e187..0000000
--- a/branch-1.3/lang/java/src/java/org/apache/avro/util/package.html
+++ /dev/null
@@ -1,23 +0,0 @@
-<html>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<body>
-Common utility classes.
-</body>
-</html>
diff --git a/branch-1.3/lang/java/src/java/overview.html b/branch-1.3/lang/java/src/java/overview.html
deleted file mode 100644
index d64872e..0000000
--- a/branch-1.3/lang/java/src/java/overview.html
+++ /dev/null
@@ -1,88 +0,0 @@
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-<html>
-<head>
-   <title>Avro</title>
-</head>
-<body>Avro is a data serialization system.
-
-  <h2>Overview</h2>
-
-  <p>Avro provides:
-    <ul>
-      <li>Rich data structures.
-      <li>A compact, fast, binary data format.
-      <li>A container file, to store persistent data.
-      <li>Remote procedure call (RPC).
-      <li>Simple integration with dynamic languages.  Code generation
-      is not required to read or write data files nor to use or
-      implement RPC protocols.  Code generation is an optional
-      optimization, only worth implementing for statically typed
-      languages.
-    </ul>  
-
-  <h2>Schemas</h2>
-
-  <p>Avro relies on <i>{@link org.apache.avro.Schema schemas}</i>.
-  When Avro data is read, the schema used when writing it is always
-  present.  This permits each datum to be written with no per-value
-  overheads, making serialization both fast and small.  This also
-  facilitates use with dynamic, scripting languages, since data,
-  together with its schema, is fully self-describing.
-
-  <p>When Avro data is stored in a {@link
-  org.apache.avro.file.DataFileWriter file}, its schema is stored with
-  it, so that files may be processed later by any program.  If the
-  program reading the data expects a different schema this can be
-  easily resolved, since both schemas are present.
-
-  <p>When Avro is used in {@link org.apache.avro.ipc RPC}, the client
-    and server exchange schemas in the connection handshake.  (This
-    can be optimized so that, for most calls, no schemas are actually
-    transmitted.)  Since client and server both have the other's
-    full schema, correspondence between same-named fields, missing
-    fields, extra fields, etc. can all be easily resolved.
-
-  <p>Avro schemas are defined
-  with <a href="http://www.json.org/">JSON</a>.  This facilitates
-  implementation in languages that already have JSON libraries.
-
-  <h2>Comparison with other systems</h2>
-
-  Avro provides functionality similar to systems such
-  as <a href="http://incubator.apache.org/thrift/">Thrift</a>,
-  <a href="http://code.google.com/protobuf/">Protocol Buffers</a>,
-  etc.  Avro differs from these systems in the following fundamental
-  aspects.
-  <ul>
-    <li><i>Dynamic typing</i>: Avro does not require that code be
-    generated.  Data is always accompanied by a schema that permits
-    full processing of that data without code generation, static
-    datatypes, etc.  This facilitates construction of generic
-    data-processing systems and languages.
-    <li><i>Untagged data</i>: Since the schema is present when data is
-    read, considerably less type information need be encoded with
-    data, resulting in smaller serialization size.</li>
-    <li><i>No manually-assigned field IDs</i>: When a schema changes,
-    both the old and new schema are always present when processing
-    data, so differences may be resolved symbolically, using field
-    names.
-  </ul>  
-
-</body>
-</html>
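The deleted overview above states that Avro schemas are defined with JSON. As a minimal sketch of what such a schema looks like (the `User` record and `example.avro` namespace are made-up illustrative names, not part of this codebase):

```python
import json

# A hypothetical Avro record schema; Avro schemas are plain JSON documents.
# The names "User" and "example.avro" are illustrative only.
schema_json = """
{
  "type": "record",
  "name": "User",
  "namespace": "example.avro",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "favorite_number", "type": ["int", "null"]}
  ]
}
"""
schema = json.loads(schema_json)
assert schema["type"] == "record"
assert [f["name"] for f in schema["fields"]] == ["name", "favorite_number"]
```

Because the schema is ordinary JSON, any language with a JSON parser can read it, which is the point the overview makes about dynamic-language integration.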
diff --git a/branch-1.3/lang/java/src/test/bin/gen_rpc_interop.sh b/branch-1.3/lang/java/src/test/bin/gen_rpc_interop.sh
deleted file mode 100755
index 7897dd8..0000000
--- a/branch-1.3/lang/java/src/test/bin/gen_rpc_interop.sh
+++ /dev/null
@@ -1,76 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -e						  # exit on error
-
-cd `dirname "$0"`/../../..			  # change to lang/java
-
-VERSION=`cat ../../share/VERSION.txt`
-
-set -x						  # echo commands
-
-function write_request() {
-    messageName="$1"
-    callName="$2"
-    params="$3"
-    data="$4"
-    
-    outdir=../../share/test/interop/rpc/$messageName/$callName
-    mkdir -p $outdir
-
-    outfile=$outdir/request.avro
-
-    schema='{"type":"record","name":"'$messageName'","fields":'$params'}'
-
-    echo -n "$data" | \
-	java -jar build/avro-tools-$VERSION.jar fromjson "$schema" - > $outfile
-}
-
-function write_response() {
-    messageName="$1"
-    callName="$2"
-    schema="$3"
-    data="$4"
-    
-    outdir=../../share/test/interop/rpc/$messageName/$callName
-    mkdir -p $outdir
-
-    outfile=$outdir/response.avro
-
-    echo -n "$data" | \
-	java -jar build/avro-tools-$VERSION.jar fromjson "$schema" - > $outfile
-}
-
-write_request hello world \
-    '[{"name": "greeting", "type": "string"}]' \
-    '{"greeting": "Hello World!"}'
-
-write_response hello world '"string"' '"Hello World"'
-
-write_request echo foo \
-    '[{"name": "record", "type": {"name": "org.apache.avro.test.TestRecord", "type": "record", "fields": [ {"name": "name", "type": "string", "order": "ignore"}, {"name": "kind", "type": {"name": "Kind", "type": "enum", "symbols": ["FOO","BAR","BAZ"]}, "order": "descending"}, {"name": "hash", "type": {"name": "MD5", "type": "fixed", "size": 16}} ] }}]' \
-    '{"record": {"name": "Foo", "kind": "FOO", "hash": "0123456789012345"}}'
-
-write_response echo foo \
-    '{"name": "org.apache.avro.test.TestRecord", "type": "record", "fields": [ {"name": "name", "type": "string", "order": "ignore"}, {"name": "kind", "type": {"name": "Kind", "type": "enum", "symbols": ["FOO","BAR","BAZ"]}, "order": "descending"}, {"name": "hash", "type": {"name": "MD5", "type": "fixed", "size": 16}} ]}' \
-    '{"name": "Foo", "kind": "FOO", "hash": "0123456789012345"}'
-
-write_request add onePlusOne \
-    '[{"name": "arg1", "type": "int"}, {"name": "arg2", "type": "int"}]' \
-    '{"arg1": 1, "arg2": 1}'
-
-write_response add onePlusOne '"int"' 2
diff --git a/branch-1.3/lang/java/src/test/bin/test_tools.sh b/branch-1.3/lang/java/src/test/bin/test_tools.sh
deleted file mode 100755
index 0df6aab..0000000
--- a/branch-1.3/lang/java/src/test/bin/test_tools.sh
+++ /dev/null
@@ -1,87 +0,0 @@
-#!/usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Tests avro-tools commands.
-
-# Echo all commands, so that test failure location is clear.
-set -o xtrace
-# This script will exit on any false return value.
-set -o errexit
-
-if [ "$JAVA_HOME" = "" ]; then
-  echo "Error: JAVA_HOME is not set."
-  exit 1
-fi
-
-if [ "$TOOLS" = "" ]; then
-  echo "Error: TOOLS is not set."
-  exit 1
-fi
-
-if [ "$TMPDIR" = "" ]; then
-  echo "Error: TMPDIR is not set."
-  exit 1
-fi
-
-CMD="$JAVA_HOME/bin/java -jar $TOOLS"
-
-######################################################################
-# Clean up temp directory.
-rm -rf $TMPDIR
-
-######################################################################
-$CMD compile protocol ../../share/test/schemas/namespace.avpr $TMPDIR/namespace
-# Check that the expected names were generated
-[ "MD5.java TestError.java TestNamespace.java TestRecord.java " = \
-  "$(find $TMPDIR/namespace -name "*.java" \
-    | awk -F "/" '{ print $NF }' | sort | tr '\n' ' ')" ]
-$CMD compile schema ../../share/test/schemas/interop.avsc $TMPDIR/schema
-[ "Foo.java Interop.java Kind.java MD5.java Node.java " = \
-  "$(find $TMPDIR/schema -name "*.java" \
-    | awk -F "/" '{ print $NF }' | sort | tr '\n' ' ')" ]
-######################################################################
-# Testing induce schema
-$CMD induce build/test/classes org.apache.avro.BarRecord \
- | tr -d '\n ' | grep -q -F  '{"type":"record","name":"BarRecord"'
-######################################################################
-# Test induce protocol
-$CMD induce build/test/classes 'org.apache.avro.TestReflect$C' \
- | tr -d '\n ' | grep -q -F  '{"protocol":"C"'
-######################################################################
-# Test to/from avro (both fragments and data files)
-$CMD jsontofrag '"string"' <(echo '"Long string implies readable length encoding."') \
- | cmp -s - <(echo -n 'ZLong string implies readable length encoding.')
-$CMD fragtojson '"string"' <(printf \\006foo) \
- | cmp -s - <(echo '"foo"')
-# And test that stdin support (via "-") works too
-echo '"The identity function"' \
-  | $CMD jsontofrag '"string"' - \
-  | $CMD fragtojson '"string"' - \
-  | cmp -s - <(echo '"The identity function"')
-
-$CMD fromjson '"string"' <(echo '"foo"'; echo '"bar"') \
-  > $TMPDIR/data_file_write.avro
-$CMD tojson $TMPDIR/data_file_write.avro \
-  | cmp -s - <(echo '"foo"'; echo '"bar"')
-$CMD getschema $TMPDIR/data_file_write.avro \
-  | cmp -s - <(echo '"string"')
-######################################################################
-
-$CMD 2>&1 | grep -q "Available tools:"
-$CMD doesnotexist 2>&1 | grep -q "Available tools:"
-! $CMD 2>&1 > /dev/null
-! $CMD 2>&1 doesnotexist 2>&1 > /dev/null
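The `jsontofrag` check in the deleted script above compares against output beginning with `Z` because Avro prefixes a string with its byte length as a zigzag-encoded varint: the test sentence is 45 bytes, zigzag(45) = 90, and byte 90 is ASCII `Z`. A sketch of that encoding (not the project's own implementation, just the documented Avro binary format):

```python
def encode_long(n: int) -> bytes:
    """Zigzag-then-varint encoding of a long, as in the Avro binary format."""
    n = (n << 1) ^ (n >> 63)           # zigzag: small magnitudes -> small codes
    out = bytearray()
    while n & ~0x7F:
        out.append((n & 0x7F) | 0x80)  # emit low 7 bits with continuation bit
        n >>= 7
    out.append(n)
    return bytes(out)

s = "Long string implies readable length encoding."
assert encode_long(len(s)) == b"Z"     # len 45 -> zigzag 90 -> single byte 'Z'
assert encode_long(3) == b"\x06"       # matches the `printf \\006foo` fragment
```

The same prefix explains the `fragtojson` case: `\006` is zigzag-encoded 3, the length of `foo`.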
diff --git a/branch-1.3/lang/java/src/test/genavro/input/interop.genavro b/branch-1.3/lang/java/src/test/genavro/input/interop.genavro
deleted file mode 100644
index e2d3764..0000000
--- a/branch-1.3/lang/java/src/test/genavro/input/interop.genavro
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-// Currently genavro supports only protocols.
-@namespace("org.apache.avro")
-protocol InteropProtocol {
-  record Foo {
-    string label;
-  }
-
-  enum Kind { A, B, C }
-  fixed MD5(16);
-
-  record Node {
-    string label;
-    array<Node> children;
-  }
-
-  record Interop {
-    int intField;
-    long longField;
-    string stringField;
-    boolean boolField;
-    float floatField;
-    double doubleField;
-    null nullField;
-    array<double> arrayField;
-    map<Foo> mapField;
-    union { boolean, double, array<bytes> } unionFIeld;
-    Kind enumField;
-    MD5 fixedField;
-    Node recordField;
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/genavro/input/mr_events.genavro b/branch-1.3/lang/java/src/test/genavro/input/mr_events.genavro
deleted file mode 100644
index 63568fb..0000000
--- a/branch-1.3/lang/java/src/test/genavro/input/mr_events.genavro
+++ /dev/null
@@ -1,72 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/**
- * Genavro format for a particular protocol found in Hadoop MapReduce.
- * Used as a test case/example to show that we can express real-world stuff more
- * succinctly.
- */
-@namespace("org.apache.hadoop.mapreduce.jobhistory")
-protocol Events {
-  record JhCounter {
-    string name;
-    string displayName;
-    long value;
-  }
-
-  record JhCounterGroup {
-    string name;
-    string displayName;
-    array<JhCounter> counts;
-  }
-
-  record JhCounters {
-    string name;
-    array<JhCounterGroup> groups;
-  }
-
-  record JobFinished {
-    string jobid;
-    long finishTime;
-    int finishedMaps;
-    int finishedReduces;
-    int failedMaps;
-    int failedReduces;
-    JhCounters totalCounters;
-    JhCounters mapCounters;
-    JhCounters reduceCounters;
-  }
-
-  record JobInited {
-    string jobid;
-    long launchTime;
-    int totalMaps;
-    int totalReduces;
-    string jobStatus;
-  }
-
-  record JobSubmitted {
-    string jobid;
-    string jobName;
-    string userName;
-    long submitTime;
-    string jobConfPath;
-  }
-
-  // ... TODO continue
-}
diff --git a/branch-1.3/lang/java/src/test/genavro/input/namespaces.genavro b/branch-1.3/lang/java/src/test/genavro/input/namespaces.genavro
deleted file mode 100644
index fe371b4..0000000
--- a/branch-1.3/lang/java/src/test/genavro/input/namespaces.genavro
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-@namespace("avro.test.protocol")
-protocol TestNamespace {
-  @namespace("avro.test.fixed")
-  fixed FixedInOtherNamespace(16);
-
-  fixed FixedInThisNamespace(16);
-
-  @namespace("avro.test.record")
-  record RecordInOtherNamespace {}
-
-  @namespace("avro.test.error")
-  error ErrorInOtherNamespace {}
-
-  @namespace("avro.test.enum")
-  enum EnumInOtherNamespace { FOO }
-
-  record RefersToOthers {
-    avro.test.fixed.FixedInOtherNamespace someFixed;
-    avro.test.record.RecordInOtherNamespace someRecord;
-    avro.test.error.ErrorInOtherNamespace someError;
-    avro.test.enum.EnumInOtherNamespace someEnum;
-    FixedInThisNamespace thisFixed;
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/genavro/input/reservedwords.genavro b/branch-1.3/lang/java/src/test/genavro/input/reservedwords.genavro
deleted file mode 100644
index 3f07cbe..0000000
--- a/branch-1.3/lang/java/src/test/genavro/input/reservedwords.genavro
+++ /dev/null
@@ -1,22 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-protocol Foo {
-  void `error`();
-  void `void`();
-}
diff --git a/branch-1.3/lang/java/src/test/genavro/input/simple.genavro b/branch-1.3/lang/java/src/test/genavro/input/simple.genavro
deleted file mode 100644
index 03f8940..0000000
--- a/branch-1.3/lang/java/src/test/genavro/input/simple.genavro
+++ /dev/null
@@ -1,53 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/**
- * A simple test case.
- */
-@namespace("org.apache.avro.test")
-protocol Simple {
-  enum Kind {
-    FOO,
-    BAR, // the bar enum value
-    BAZ
-  }
-
-  fixed MD5(16);
-
-  record TestRecord {
-    @order("ignore")
-    string name;
-
-    @order("descending")
-    Kind kind;
-
-    MD5 hash;
-
-    union { MD5, null} nullableHash;
-  }
-
-  error TestError {
-    string message;
-  }
-
-  string hello(string greeting);
-  TestRecord echo(TestRecord `record`);
-  int add(int arg1, int arg2);
-  bytes echoBytes(bytes data);
-  void `error`() throws TestError;
-}
diff --git a/branch-1.3/lang/java/src/test/genavro/input/unicode.genavro b/branch-1.3/lang/java/src/test/genavro/input/unicode.genavro
deleted file mode 100644
index c7c40f1..0000000
--- a/branch-1.3/lang/java/src/test/genavro/input/unicode.genavro
+++ /dev/null
@@ -1,29 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/**
-* This is a test that UTF8 functions correctly.
-* このテストでは、UTF - 8で正しく機能している。
-* 这是一个测试,UTF - 8的正常运行。
-*/
-protocol Протоколы {
-  record Структура {
-    string Строковый;
-    string 文字列;
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/genavro/output/interop.avpr b/branch-1.3/lang/java/src/test/genavro/output/interop.avpr
deleted file mode 100644
index abb940d..0000000
--- a/branch-1.3/lang/java/src/test/genavro/output/interop.avpr
+++ /dev/null
@@ -1,87 +0,0 @@
-{
-  "protocol" : "InteropProtocol",
-  "namespace" : "org.apache.avro",
-  "types" : [ {
-    "type" : "record",
-    "name" : "Foo",
-    "fields" : [ {
-      "name" : "label",
-      "type" : "string"
-    } ]
-  }, {
-    "type" : "enum",
-    "name" : "Kind",
-    "symbols" : [ "A", "B", "C" ]
-  }, {
-    "type" : "fixed",
-    "name" : "MD5",
-    "size" : 16
-  }, {
-    "type" : "record",
-    "name" : "Node",
-    "fields" : [ {
-      "name" : "label",
-      "type" : "string"
-    }, {
-      "name" : "children",
-      "type" : {
-        "type" : "array",
-        "items" : "Node"
-      }
-    } ]
-  }, {
-    "type" : "record",
-    "name" : "Interop",
-    "fields" : [ {
-      "name" : "intField",
-      "type" : "int"
-    }, {
-      "name" : "longField",
-      "type" : "long"
-    }, {
-      "name" : "stringField",
-      "type" : "string"
-    }, {
-      "name" : "boolField",
-      "type" : "boolean"
-    }, {
-      "name" : "floatField",
-      "type" : "float"
-    }, {
-      "name" : "doubleField",
-      "type" : "double"
-    }, {
-      "name" : "nullField",
-      "type" : "null"
-    }, {
-      "name" : "arrayField",
-      "type" : {
-        "type" : "array",
-        "items" : "double"
-      }
-    }, {
-      "name" : "mapField",
-      "type" : {
-        "type" : "map",
-        "values" : "Foo"
-      }
-    }, {
-      "name" : "unionFIeld",
-      "type" : [ "boolean", "double", {
-        "type" : "array",
-        "items" : "bytes"
-      } ]
-    }, {
-      "name" : "enumField",
-      "type" : "Kind"
-    }, {
-      "name" : "fixedField",
-      "type" : "MD5"
-    }, {
-      "name" : "recordField",
-      "type" : "Node"
-    } ]
-  } ],
-  "messages" : {
-  }
-}
\ No newline at end of file
diff --git a/branch-1.3/lang/java/src/test/genavro/output/mr_events.avpr b/branch-1.3/lang/java/src/test/genavro/output/mr_events.avpr
deleted file mode 100644
index 7091947..0000000
--- a/branch-1.3/lang/java/src/test/genavro/output/mr_events.avpr
+++ /dev/null
@@ -1,118 +0,0 @@
-{
-  "protocol" : "Events",
-  "namespace" : "org.apache.hadoop.mapreduce.jobhistory",
-  "types" : [ {
-    "type" : "record",
-    "name" : "JhCounter",
-    "fields" : [ {
-      "name" : "name",
-      "type" : "string"
-    }, {
-      "name" : "displayName",
-      "type" : "string"
-    }, {
-      "name" : "value",
-      "type" : "long"
-    } ]
-  }, {
-    "type" : "record",
-    "name" : "JhCounterGroup",
-    "fields" : [ {
-      "name" : "name",
-      "type" : "string"
-    }, {
-      "name" : "displayName",
-      "type" : "string"
-    }, {
-      "name" : "counts",
-      "type" : {
-        "type" : "array",
-        "items" : "JhCounter"
-      }
-    } ]
-  }, {
-    "type" : "record",
-    "name" : "JhCounters",
-    "fields" : [ {
-      "name" : "name",
-      "type" : "string"
-    }, {
-      "name" : "groups",
-      "type" : {
-        "type" : "array",
-        "items" : "JhCounterGroup"
-      }
-    } ]
-  }, {
-    "type" : "record",
-    "name" : "JobFinished",
-    "fields" : [ {
-      "name" : "jobid",
-      "type" : "string"
-    }, {
-      "name" : "finishTime",
-      "type" : "long"
-    }, {
-      "name" : "finishedMaps",
-      "type" : "int"
-    }, {
-      "name" : "finishedReduces",
-      "type" : "int"
-    }, {
-      "name" : "failedMaps",
-      "type" : "int"
-    }, {
-      "name" : "failedReduces",
-      "type" : "int"
-    }, {
-      "name" : "totalCounters",
-      "type" : "JhCounters"
-    }, {
-      "name" : "mapCounters",
-      "type" : "JhCounters"
-    }, {
-      "name" : "reduceCounters",
-      "type" : "JhCounters"
-    } ]
-  }, {
-    "type" : "record",
-    "name" : "JobInited",
-    "fields" : [ {
-      "name" : "jobid",
-      "type" : "string"
-    }, {
-      "name" : "launchTime",
-      "type" : "long"
-    }, {
-      "name" : "totalMaps",
-      "type" : "int"
-    }, {
-      "name" : "totalReduces",
-      "type" : "int"
-    }, {
-      "name" : "jobStatus",
-      "type" : "string"
-    } ]
-  }, {
-    "type" : "record",
-    "name" : "JobSubmitted",
-    "fields" : [ {
-      "name" : "jobid",
-      "type" : "string"
-    }, {
-      "name" : "jobName",
-      "type" : "string"
-    }, {
-      "name" : "userName",
-      "type" : "string"
-    }, {
-      "name" : "submitTime",
-      "type" : "long"
-    }, {
-      "name" : "jobConfPath",
-      "type" : "string"
-    } ]
-  } ],
-  "messages" : {
-  }
-}
\ No newline at end of file
diff --git a/branch-1.3/lang/java/src/test/genavro/output/namespaces.avpr b/branch-1.3/lang/java/src/test/genavro/output/namespaces.avpr
deleted file mode 100644
index 81a220d..0000000
--- a/branch-1.3/lang/java/src/test/genavro/output/namespaces.avpr
+++ /dev/null
@@ -1,50 +0,0 @@
-{
-  "protocol" : "TestNamespace",
-  "namespace" : "avro.test.protocol",
-  "types" : [ {
-    "type" : "fixed",
-    "name" : "FixedInOtherNamespace",
-    "namespace" : "avro.test.fixed",
-    "size" : 16
-  }, {
-    "type" : "fixed",
-    "name" : "FixedInThisNamespace",
-    "size" : 16
-  }, {
-    "type" : "record",
-    "name" : "RecordInOtherNamespace",
-    "namespace" : "avro.test.record",
-    "fields" : [ ]
-  }, {
-    "type" : "error",
-    "name" : "ErrorInOtherNamespace",
-    "namespace" : "avro.test.error",
-    "fields" : [ ]
-  }, {
-    "type" : "enum",
-    "name" : "EnumInOtherNamespace",
-    "namespace" : "avro.test.enum",
-    "symbols" : [ "FOO" ]
-  }, {
-    "type" : "record",
-    "name" : "RefersToOthers",
-    "fields" : [ {
-      "name" : "someFixed",
-      "type" : "avro.test.fixed.FixedInOtherNamespace"
-    }, {
-      "name" : "someRecord",
-      "type" : "avro.test.record.RecordInOtherNamespace"
-    }, {
-      "name" : "someError",
-      "type" : "avro.test.error.ErrorInOtherNamespace"
-    }, {
-      "name" : "someEnum",
-      "type" : "avro.test.enum.EnumInOtherNamespace"
-    }, {
-      "name" : "thisFixed",
-      "type" : "FixedInThisNamespace"
-    } ]
-  } ],
-  "messages" : {
-  }
-}
\ No newline at end of file
diff --git a/branch-1.3/lang/java/src/test/genavro/output/reservedwords.avpr b/branch-1.3/lang/java/src/test/genavro/output/reservedwords.avpr
deleted file mode 100644
index c8ccbbb..0000000
--- a/branch-1.3/lang/java/src/test/genavro/output/reservedwords.avpr
+++ /dev/null
@@ -1,15 +0,0 @@
-{
-  "protocol" : "Foo",
-  "namespace" : null,
-  "types" : [ ],
-  "messages" : {
-    "error" : {
-      "request" : [ ],
-      "response" : "null"
-    },
-    "void" : {
-      "request" : [ ],
-      "response" : "null"
-    }
-  }
-}
\ No newline at end of file
diff --git a/branch-1.3/lang/java/src/test/genavro/output/simple.avpr b/branch-1.3/lang/java/src/test/genavro/output/simple.avpr
deleted file mode 100644
index 83b55bd..0000000
--- a/branch-1.3/lang/java/src/test/genavro/output/simple.avpr
+++ /dev/null
@@ -1,78 +0,0 @@
-{
-  "protocol" : "Simple",
-  "namespace" : "org.apache.avro.test",
-  "types" : [ {
-    "type" : "enum",
-    "name" : "Kind",
-    "symbols" : [ "FOO", "BAR", "BAZ" ],
-    "order" : "descending"
-  }, {
-    "type" : "fixed",
-    "name" : "MD5",
-    "size" : 16
-  }, {
-    "type" : "record",
-    "name" : "TestRecord",
-    "fields" : [ {
-      "name" : "name",
-      "type" : {
-        "type" : "string",
-        "order" : "ignore"
-      }
-    }, {
-      "name" : "kind",
-      "type" : "Kind"
-    }, {
-      "name" : "hash",
-      "type" : "MD5"
-    }, {
-      "name" : "nullableHash",
-      "type" : [ "MD5", "null" ]
-    } ]
-  }, {
-    "type" : "error",
-    "name" : "TestError",
-    "fields" : [ {
-      "name" : "message",
-      "type" : "string"
-    } ]
-  } ],
-  "messages" : {
-    "hello" : {
-      "request" : [ {
-        "name" : "greeting",
-        "type" : "string"
-      } ],
-      "response" : "string"
-    },
-    "echo" : {
-      "request" : [ {
-        "name" : "record",
-        "type" : "TestRecord"
-      } ],
-      "response" : "TestRecord"
-    },
-    "add" : {
-      "request" : [ {
-        "name" : "arg1",
-        "type" : "int"
-      }, {
-        "name" : "arg2",
-        "type" : "int"
-      } ],
-      "response" : "int"
-    },
-    "echoBytes" : {
-      "request" : [ {
-        "name" : "data",
-        "type" : "bytes"
-      } ],
-      "response" : "bytes"
-    },
-    "error" : {
-      "request" : [ ],
-      "response" : "null",
-      "errors" : [ "TestError" ]
-    }
-  }
-}
\ No newline at end of file
diff --git a/branch-1.3/lang/java/src/test/genavro/output/unicode.avpr b/branch-1.3/lang/java/src/test/genavro/output/unicode.avpr
deleted file mode 100644
index 5f15da4..0000000
--- a/branch-1.3/lang/java/src/test/genavro/output/unicode.avpr
+++ /dev/null
@@ -1,17 +0,0 @@
-{
-  "protocol" : "Протоколы",
-  "namespace" : null,
-  "types" : [ {
-    "type" : "record",
-    "name" : "Структура",
-    "fields" : [ {
-      "name" : "Строковый",
-      "type" : "string"
-    }, {
-      "name" : "文字列",
-      "type" : "string"
-    } ]
-  } ],
-  "messages" : {
-  }
-}
\ No newline at end of file
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/AvroTestUtil.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/AvroTestUtil.java
deleted file mode 100644
index 2d5b361..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/AvroTestUtil.java
+++ /dev/null
@@ -1,32 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.io.File;
-
-/** Utilities for Avro tests. */
-public class AvroTestUtil {
-  static final File TMPDIR = new File(System.getProperty("test.dir", "/tmp"));
-
-  private AvroTestUtil() { }
-  
-  /** Create a temporary file in a test-appropriate directory. */
-  public static File tempFile(String name) {
-    return new File(TMPDIR, name);
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/BarRecord.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/BarRecord.java
deleted file mode 100644
index 70bdd83..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/BarRecord.java
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-public class BarRecord {
-  private String beerMsg;
-
-  public BarRecord() {
-  }
-
-  public BarRecord(String beerMsg) {
-    this.beerMsg = beerMsg;
-  }
-
-  @Override
-  public boolean equals(Object that) {
-    if (that instanceof BarRecord) {
-      if (this.beerMsg == null) {
-        return ((BarRecord) that).beerMsg == null;
-      } else {
-        return this.beerMsg.equals(((BarRecord) that).beerMsg);
-      }
-    }
-    return false;
-  }
-
-  @Override
-  public int hashCode() {
-    return beerMsg.hashCode();
-  }
-
-  @Override
-  public String toString() {
-    return BarRecord.class.getSimpleName() + "{msg=" + beerMsg + "}";
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/FooRecord.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/FooRecord.java
deleted file mode 100644
index 0447f01..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/FooRecord.java
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-public class FooRecord {
-  private int fooCount;
-
-  public FooRecord() {
-  }
-
-  public FooRecord(int fooCount) {
-    this.fooCount = fooCount;
-  }
-
-  @Override
-  public boolean equals(Object that) {
-    if (that instanceof FooRecord) {
-      return this.fooCount == ((FooRecord) that).fooCount;
-    }
-    return false;
-  }
-
-  @Override
-  public int hashCode() {
-    return fooCount;
-  }
-
-  @Override
-  public String toString() {
-    return FooRecord.class.getSimpleName() + "{count=" + fooCount + "}";
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/GenerateBlockingData.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/GenerateBlockingData.java
deleted file mode 100644
index 6d72385..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/GenerateBlockingData.java
+++ /dev/null
@@ -1,85 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.BlockingBinaryEncoder;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.io.Encoder;
-import java.io.ByteArrayOutputStream;
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-
-/**
- * Generates file with objects of a specific schema(that doesn't contain nesting
- * of arrays and maps) with random data. This is only for testing.
- * Generated file contains the count of objects of the specified schema followed
- * by objects serialized using BlockingBinaryEncoder. No other metadata is
- * written to the file. See interoptests.py for more details(interoptests.py
- * reads the file generated here and validates the contents).
- */
-public class GenerateBlockingData {
-  private static final int SYNC_INTERVAL = 1000;
-  private static ByteArrayOutputStream buffer =
-                      new ByteArrayOutputStream(2*SYNC_INTERVAL);
-  
-  private static Encoder bufOut = new BlockingBinaryEncoder(buffer);
-  private static int blockCount;
-
-  private static void writeBlock(Encoder vout, FileOutputStream out)
-               throws IOException{
-    vout.writeLong(blockCount);
-    bufOut.flush();
-    buffer.writeTo(out);
-    buffer.reset();
-    blockCount = 0;
-  }
-  
-  public static void main(String[] args) throws Exception {
-    if(args.length != 3) {
-      System.out.println(
-          "Usage: GenerateBlockingData <schemafile> <outputfile> <count>");
-      System.exit(-1);
-    }
-    
-    Schema sch = Schema.parse(new File(args[0]));
-    File outputFile = new File(args[1]);
-    int numObjects = Integer.parseInt(args[2]);
-    
-    FileOutputStream out = new FileOutputStream(outputFile, false);
-    DatumWriter<Object> dout = new GenericDatumWriter<Object>();
-    dout.setSchema(sch);
-    Encoder vout = new BinaryEncoder(out);
-    vout.writeLong(numObjects); // metadata:the count of objects in the file
-    
-    for (Object datum : new RandomData(sch, numObjects)) {
-      dout.write(datum, bufOut);
-      blockCount++;
-      if (buffer.size() >= SYNC_INTERVAL) {
-        writeBlock(vout, out);
-      }
-    }
-    if (blockCount > 0) {
-      writeBlock(vout, out);
-    }
-    out.flush();
-    out.close();
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/RPCMetaTestPlugin.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/RPCMetaTestPlugin.java
deleted file mode 100644
index 967bb10..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/RPCMetaTestPlugin.java
+++ /dev/null
@@ -1,177 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.nio.ByteBuffer;
-import java.util.Map;
-
-import junit.framework.Assert;
-
-import org.apache.avro.ipc.RPCContext;
-import org.apache.avro.ipc.RPCPlugin;
-import org.apache.avro.util.Utf8;
-
-/**
- * An implementation of an RPC metadata plugin API designed for unit testing.
- * This plugin tests handshake and call state by passing a string as metadata,
- * slowly building it up at each instrumentation point, testing it as it goes.
- * Finally, after the call or handshake is complete, the constructed string is
- * tested.
- */
-public final class RPCMetaTestPlugin extends RPCPlugin {
-  
-  protected final Utf8 key;
-  
-  public RPCMetaTestPlugin(String keyname) {
-    key = new Utf8(keyname);
-  }
-  
-  @Override
-  public void clientStartConnect(RPCContext context) {
-    ByteBuffer buf = ByteBuffer.wrap("ap".getBytes());
-    context.requestHandshakeMeta().put(key, buf);
-  }
-  
-  @Override
-  public void serverConnecting(RPCContext context) {
-    
-    Assert.assertNotNull(context.requestHandshakeMeta());
-    Assert.assertNotNull(context.responseHandshakeMeta());
-    
-    if (!context.requestHandshakeMeta().containsKey(key)) return;
-    
-    ByteBuffer buf = context.requestHandshakeMeta().get(key);
-    Assert.assertNotNull(buf);
-    Assert.assertNotNull(buf.array());
-    
-    String partialstr = new String(buf.array());
-    Assert.assertNotNull(partialstr);
-    Assert.assertEquals("partial string mismatch", "ap", partialstr);
-    
-    buf = ByteBuffer.wrap((partialstr + "ac").getBytes());
-    Assert.assertTrue(buf.remaining() > 0);
-    context.responseHandshakeMeta().put(key, buf);
-  }
-  
-  @Override
-  public void clientFinishConnect(RPCContext context) {
-    Map<Utf8,ByteBuffer> handshakeMeta = context.responseHandshakeMeta();
-    
-    Assert.assertNotNull(handshakeMeta);
-    
-    if (!handshakeMeta.containsKey(key)) return;
-    
-    ByteBuffer buf = handshakeMeta.get(key);
-    Assert.assertNotNull(buf);
-    Assert.assertNotNull(buf.array());
-    
-    String partialstr = new String(buf.array());
-    Assert.assertNotNull(partialstr);
-    Assert.assertEquals("partial string mismatch", "apac", partialstr);
-    
-    buf = ByteBuffer.wrap((partialstr + "he").getBytes());
-    Assert.assertTrue(buf.remaining() > 0);
-    handshakeMeta.put(key, buf);
-    
-    checkRPCMetaMap(handshakeMeta);
-  }
-  
-  @Override
-  public void clientSendRequest(RPCContext context) { 
-    ByteBuffer buf = ByteBuffer.wrap("ap".getBytes());
-    context.requestCallMeta().put(key, buf);
-    Assert.assertNotNull(context.getMessage());
-  }
-  
-  @Override
-  public void serverReceiveRequest(RPCContext context) {
-    Map<Utf8,ByteBuffer> meta = context.requestCallMeta();
-    
-    Assert.assertNotNull(meta);
-    
-    if (!meta.containsKey(key)) return;
-    
-    ByteBuffer buf = meta.get(key);
-    Assert.assertNotNull(buf);
-    Assert.assertNotNull(buf.array());
-    
-    String partialstr = new String(buf.array());
-    Assert.assertNotNull(partialstr);
-    Assert.assertEquals("partial string mismatch", "ap", partialstr);
-    
-    buf = ByteBuffer.wrap((partialstr + "a").getBytes());
-    Assert.assertTrue(buf.remaining() > 0);
-    meta.put(key, buf);
-    
-    Assert.assertNotNull(context.getMessage());
-  }
-  
-  @Override
-  public void serverSendResponse(RPCContext context) {
-    Assert.assertNotNull(context.requestCallMeta());
-    Assert.assertNotNull(context.responseCallMeta());
-    
-    if (!context.requestCallMeta().containsKey(key)) return;
-    
-    ByteBuffer buf = context.requestCallMeta().get(key);
-    Assert.assertNotNull(buf);
-    Assert.assertNotNull(buf.array());
-    
-    String partialstr = new String(buf.array());
-    Assert.assertNotNull(partialstr);
-    Assert.assertEquals("partial string mismatch", "apa", partialstr);
-    
-    buf = ByteBuffer.wrap((partialstr + "c").getBytes());
-    Assert.assertTrue(buf.remaining() > 0);
-    context.responseCallMeta().put(key, buf);
-  }
-  
-  @Override
-  public void clientReceiveResponse(RPCContext context) {
-    Assert.assertNotNull(context.responseCallMeta());
-    
-    if (!context.responseCallMeta().containsKey(key)) return;
-    
-    ByteBuffer buf = context.responseCallMeta().get(key);
-    Assert.assertNotNull(buf);
-    Assert.assertNotNull(buf.array());
-    
-    String partialstr = new String(buf.array());
-    Assert.assertNotNull(partialstr);
-    Assert.assertEquals("partial string mismatch", "apac", partialstr);
-    
-    buf = ByteBuffer.wrap((partialstr + "he").getBytes());
-    Assert.assertTrue(buf.remaining() > 0);
-    context.responseCallMeta().put(key, buf);
-    
-    checkRPCMetaMap(context.responseCallMeta());
-  }
-  
-  protected void checkRPCMetaMap(Map<Utf8,ByteBuffer> rpcMeta) {
-    Assert.assertNotNull(rpcMeta);
-    Assert.assertTrue("key not present in map", rpcMeta.containsKey(key));
-    
-    ByteBuffer keybuf = rpcMeta.get(key);
-    Assert.assertNotNull(keybuf);
-    Assert.assertTrue("key BB had nothing remaining", keybuf.remaining() > 0);
-    
-    String str = new String(keybuf.array());
-    Assert.assertEquals("apache", str);
-  }
-  
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/RandomData.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/RandomData.java
deleted file mode 100644
index 582308f..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/RandomData.java
+++ /dev/null
@@ -1,141 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.io.File;
-import java.nio.ByteBuffer;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Random;
-
-import org.apache.avro.file.DataFileWriter;
-import org.apache.avro.generic.GenericArray;
-import org.apache.avro.generic.GenericData;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.generic.GenericRecord;
-import org.apache.avro.util.Utf8;
-
-/** Generates schema data as Java objects with random values. */
-public class RandomData implements Iterable<Object> {
-  private final Schema root;
-  private final long seed;
-  private final int count;
-
-  public RandomData(Schema schema, int count) {
-    this(schema, count, System.currentTimeMillis());
-  }
-
-  public RandomData(Schema schema, int count, long seed) {
-    this.root = schema;
-    this.seed = seed;
-    this.count = count;
-  }
-  
-  public Iterator<Object> iterator() {
-    return new Iterator<Object>() {
-      private int n;
-      private Random random = new Random(seed);
-      public boolean hasNext() { return n < count; }
-      public Object next() {
-        n++;
-        return generate(root, random, 0);
-      }
-      public void remove() { throw new UnsupportedOperationException(); }
-    };
-  }
-  
-  @SuppressWarnings(value="unchecked")
-  private static Object generate(Schema schema, Random random, int d) {
-    switch (schema.getType()) {
-    case RECORD:
-      GenericRecord record = new GenericData.Record(schema);
-      for (Schema.Field field : schema.getFields())
-        record.put(field.name(), generate(field.schema(), random, d+1));
-      return record;
-    case ENUM:
-      List<String> symbols = schema.getEnumSymbols();
-      return symbols.get(random.nextInt(symbols.size()));
-    case ARRAY:
-      int length = (random.nextInt(5)+2)-d;
-      GenericArray<Object> array =
-        new GenericData.Array(length<=0?0:length, schema);
-      for (int i = 0; i < length; i++)
-        array.add(generate(schema.getElementType(), random, d+1));
-      return array;
-    case MAP:
-      length = (random.nextInt(5)+2)-d;
-      Map<Object,Object> map = new HashMap<Object,Object>(length<=0?0:length);
-      for (int i = 0; i < length; i++) {
-        map.put(randomUtf8(random, 40),
-                generate(schema.getValueType(), random, d+1));
-      }
-      return map;
-    case UNION:
-      List<Schema> types = schema.getTypes();
-      return generate(types.get(random.nextInt(types.size())), random, d);
-    case FIXED:
-      byte[] bytes = new byte[schema.getFixedSize()];
-      random.nextBytes(bytes);
-      return new GenericData.Fixed(bytes);
-    case STRING:  return randomUtf8(random, 40);
-    case BYTES:   return randomBytes(random, 40);
-    case INT:     return random.nextInt();
-    case LONG:    return random.nextLong();
-    case FLOAT:   return random.nextFloat();
-    case DOUBLE:  return random.nextDouble();
-    case BOOLEAN: return random.nextBoolean();
-    case NULL:    return null;
-    default: throw new RuntimeException("Unknown type: "+schema);
-    }
-  }
-
-  private static Utf8 randomUtf8(Random rand, int maxLength) {
-    Utf8 utf8 = new Utf8().setLength(rand.nextInt(maxLength));
-    for (int i = 0; i < utf8.getLength(); i++) {
-      utf8.getBytes()[i] = (byte)('a'+rand.nextInt('z'-'a'));
-    }
-    return utf8;
-  }
-
-  private static ByteBuffer randomBytes(Random rand, int maxLength) {
-    ByteBuffer bytes = ByteBuffer.allocate(rand.nextInt(maxLength));
-    bytes.limit(bytes.capacity());
-    rand.nextBytes(bytes.array());
-    return bytes;
-  }
-
-  public static void main(String[] args) throws Exception {
-    if(args.length != 3) {
-      System.out.println("Usage: RandomData <schemafile> <outputfile> <count>");
-      System.exit(-1);
-    }
-    Schema sch = Schema.parse(new File(args[0]));
-    DataFileWriter<Object> writer =
-      new DataFileWriter<Object>(new GenericDatumWriter<Object>())
-      .create(sch, new File(args[1]));
-    try {
-      for (Object datum : new RandomData(sch, Integer.parseInt(args[2]))) {
-        writer.append(datum);
-      }
-    } finally {
-      writer.close();
-    }
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/SimpleException.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/SimpleException.java
deleted file mode 100644
index fd1b8c3..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/SimpleException.java
+++ /dev/null
@@ -1,25 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-/** This should be a static nested class in TestProtocolReflect, but that
- * breaks CheckStyle (http://jira.codehaus.org/browse/MPCHECKSTYLE-20). */
-public class SimpleException extends Exception {
-  SimpleException() {}
-  SimpleException(String message) { super(message) ; }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestBulkData.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestBulkData.java
deleted file mode 100644
index ccdb64f..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestBulkData.java
+++ /dev/null
@@ -1,115 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.specific.SpecificRequestor;
-import org.apache.avro.specific.SpecificResponder;
-import org.apache.avro.ipc.HttpServer;
-import org.apache.avro.ipc.HttpTransceiver;
-import org.apache.avro.ipc.Server;
-import org.apache.avro.ipc.Transceiver;
-
-import org.junit.After;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
-
-import java.io.IOException;
-import java.net.URL;
-import java.nio.ByteBuffer;
-import java.util.Random;
-
-import org.apache.avro.test.BulkData;
-
-public class TestBulkData {
-  private static final long COUNT =
-    Integer.parseInt(System.getProperty("test.count", "10"));
-  private static final int SIZE =
-    Integer.parseInt(System.getProperty("test.size", "65536"));
-
-  private static final ByteBuffer DATA = ByteBuffer.allocate(SIZE);
-  {
-    Random rand = new Random();
-    DATA.limit(DATA.capacity());
-    DATA.position(0);
-    rand.nextBytes(DATA.array());
-  }
-
-  public static class BulkDataImpl implements BulkData {
-    public ByteBuffer read() { return DATA.duplicate(); }
-    public Void write(ByteBuffer data) {
-      Assert.assertEquals(SIZE, data.remaining());
-      return null;
-    }
-  }
-
-  private static Server server;
-  private static Transceiver client;
-  private static BulkData proxy;
-
-  @Before
-  public void startServer() throws Exception {
-    server =
-      new HttpServer(new SpecificResponder(BulkData.class, new BulkDataImpl()),
-                     0);
-    client =
-      new HttpTransceiver(new URL("http://127.0.0.1:"+server.getPort()+"/"));
-    proxy = (BulkData)SpecificRequestor.getClient(BulkData.class, client);
-  }
-
-  @Test
-  public void testRead() throws IOException {
-    for (int i = 0; i < COUNT; i++)
-      Assert.assertEquals(SIZE, proxy.read().remaining());
-  }
-
-  @Test
-  public void testWrite() throws IOException {
-    for (int i = 0; i < COUNT; i++)
-      proxy.write(DATA.duplicate());
-  }
-
-  @After
-  public void stopServer() throws Exception {
-    server.close();
-  }
-
-  public static void main(String[] args) throws Exception {
-    TestBulkData test = new TestBulkData();
-    test.startServer();
-    System.out.println("READ");
-    long start = System.currentTimeMillis();
-    test.testRead();
-    printStats(start);
-    System.out.println("WRITE");
-    start = System.currentTimeMillis();
-    test.testWrite();
-    printStats(start);
-    test.stopServer();
-  }
-
-  private static void printStats(long start) {
-    double seconds = (System.currentTimeMillis()-start)/1000.0;
-    System.out.println("seconds = "+(int)seconds);
-    System.out.println("requests/second = "+(int)(COUNT/seconds));
-    double megabytes = (COUNT*SIZE)/(1024*1024.0);
-    System.out.println("MB = "+(int)megabytes);
-    System.out.println("MB/second = "+ (int)(megabytes/seconds));
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestCompare.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestCompare.java
deleted file mode 100644
index ee4bb4a..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestCompare.java
+++ /dev/null
@@ -1,240 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.junit.Test;
-import static org.junit.Assert.assertEquals;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-
-import org.apache.avro.generic.GenericArray;
-import org.apache.avro.generic.GenericData;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.specific.SpecificData;
-import org.apache.avro.specific.SpecificDatumWriter;
-import org.apache.avro.io.BinaryData;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.util.Utf8;
-
-import org.apache.avro.test.TestRecord;
-import org.apache.avro.test.Kind;
-import org.apache.avro.test.MD5;
-
-public class TestCompare {
-
-  @Test
-  public void testNull() throws Exception {
-    Schema schema = Schema.parse("\"null\"");
-    byte[] b = render(null, schema, new GenericDatumWriter<Object>());
-    assertEquals(0, BinaryData.compare(b, 0, b, 0, schema));
-  }
-
-  @Test
-  public void testBoolean() throws Exception {
-    check("\"boolean\"", Boolean.FALSE, Boolean.TRUE);
-  }
-
-  @Test
-  public void testString() throws Exception {
-    check("\"string\"", new Utf8("a"), new Utf8("b"));
-    check("\"string\"", new Utf8("a"), new Utf8("ab"));
-  }
-
-  @Test
-  public void testBytes() throws Exception {
-    check("\"bytes\"",
-          ByteBuffer.wrap(new byte[]{}),
-          ByteBuffer.wrap(new byte[]{1}));
-    check("\"bytes\"",
-          ByteBuffer.wrap(new byte[]{1}),
-          ByteBuffer.wrap(new byte[]{2}));
-    check("\"bytes\"",
-          ByteBuffer.wrap(new byte[]{1,2}),
-          ByteBuffer.wrap(new byte[]{2}));
-  }
-
-  @Test
-  public void testInt() throws Exception {
-    check("\"int\"", new Integer(-1), new Integer(0));
-    check("\"int\"", new Integer(0), new Integer(1));
-  }
-
-  @Test
-  public void testLong() throws Exception {
-    check("\"long\"", new Long(11), new Long(12));
-    check("\"long\"", new Long(-1), new Long(1));
-  }
-
-  @Test
-  public void testFloat() throws Exception {
-    check("\"float\"", new Float(1.1), new Float(1.2));
-    check("\"float\"", new Float(-1.1), new Float(1.0));
-  }
-
-  @Test
-  public void testDouble() throws Exception {
-    check("\"double\"", new Double(1.2), new Double(1.3));
-    check("\"double\"", new Double(-1.2), new Double(1.3));
-  }
-
-  @Test
-  public void testArray() throws Exception {
-    String json = "{\"type\":\"array\", \"items\": \"long\"}";
-    Schema schema = Schema.parse(json);
-    GenericArray<Long> a1 = new GenericData.Array<Long>(1, schema);
-    a1.add(1L);
-    GenericArray<Long> a2 = new GenericData.Array<Long>(1, schema);
-    a2.add(1L);
-    a2.add(0L);
-    check(json, a1, a2);
-  }
-
-  @Test
-  public void testRecord() throws Exception {
-    String fields = " \"fields\":["
-      +"{\"name\":\"f\",\"type\":\"int\",\"order\":\"ignore\"},"
-      +"{\"name\":\"g\",\"type\":\"int\",\"order\":\"descending\"},"
-      +"{\"name\":\"h\",\"type\":\"int\"}]}";
-    String recordJson = "{\"type\":\"record\", \"name\":\"Test\","+fields;
-    Schema schema = Schema.parse(recordJson);
-    GenericData.Record r1 = new GenericData.Record(schema);
-    r1.put("f", 1);
-    r1.put("g", 13);
-    r1.put("h", 41);
-    GenericData.Record r2 = new GenericData.Record(schema);
-    r2.put("f", 0);
-    r2.put("g", 12);
-    r2.put("h", 41);
-    check(recordJson, r1, r2);
-    r2.put("f", 0);
-    r2.put("g", 13);
-    r2.put("h", 42);
-    check(recordJson, r1, r2);
-
-    String record2Json = "{\"type\":\"record\", \"name\":\"Test2\","+fields;
-    Schema schema2 = Schema.parse(record2Json);
-    GenericData.Record r3= new GenericData.Record(schema2);
-    r3.put("f", 1);
-    r3.put("g", 13);
-    r3.put("h", 41);
-    assert(!r1.equals(r3));                       // same fields, diff name
-  }
-
-  @Test
-  public void testEnum() throws Exception {
-    check("{\"type\":\"enum\", \"name\":\"Test\",\"symbols\": [\"A\", \"B\"]}",
-          "A", "B");
-  }
-
-  @Test
-  public void testFixed() throws Exception {
-    check("{\"type\": \"fixed\", \"name\":\"Test\", \"size\": 1}",
-          new GenericData.Fixed(new byte[]{(byte)'a'}),
-          new GenericData.Fixed(new byte[]{(byte)'b'}));
-  }
-
-  @Test
-  public void testUnion() throws Exception {
-    check("[\"string\", \"long\"]", new Utf8("a"), new Utf8("b"), false);
-    check("[\"string\", \"long\"]", new Long(1), new Long(2), false);
-    check("[\"string\", \"long\"]", new Utf8("a"), new Long(1), false);
-  }
-
-  @Test
-  public void testSpecificRecord() throws Exception {
-    TestRecord s1 = new TestRecord();
-    TestRecord s2 = new TestRecord();
-    s1.name = new Utf8("foo");
-    s1.kind = Kind.BAZ;
-    s1.hash = new MD5();
-    s1.hash.bytes(new byte[] {0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5});
-    s2.name = new Utf8("bar");
-    s2.kind = Kind.BAR;
-    s2.hash = new MD5();
-    s2.hash.bytes(new byte[] {0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,6});
-    Schema schema = SpecificData.get().getSchema(TestRecord.class);
-
-    check(schema, s1, s2, true, new SpecificDatumWriter<TestRecord>(schema),
-          SpecificData.get());
-    s2.kind = Kind.BAZ;
-    check(schema, s1, s2, true, new SpecificDatumWriter<TestRecord>(schema),
-          SpecificData.get());
-  }  
-
-  private static <T> void check(String schemaJson, T o1, T o2)
-    throws Exception {
-    check(schemaJson, o1, o2, true);
-  }
-
-  private static <T> void check(String schemaJson, T o1, T o2,
-                            boolean comparable)
-    throws Exception {
-    check(Schema.parse(schemaJson), o1, o2, comparable,
-          new GenericDatumWriter<T>(), GenericData.get());
-  }
-
-  private static <T> void check(Schema schema, T o1, T o2,
-                            boolean comparable,
-                            DatumWriter<T> writer,
-                            GenericData comparator)
-    throws Exception {
-
-    byte[] b1 = render(o1, schema, writer);
-    byte[] b2 = render(o2, schema, writer);
-    assertEquals(-1, BinaryData.compare(b1, 0, b2, 0, schema));
-    assertEquals(1, BinaryData.compare(b2, 0, b1, 0, schema));
-    assertEquals(0, BinaryData.compare(b1, 0, b1, 0, schema));
-    assertEquals(0, BinaryData.compare(b2, 0, b2, 0, schema));
-
-    assertEquals(-1, compare(o1, o2, schema, comparable, comparator));
-    assertEquals(1, compare(o2, o1, schema, comparable, comparator));
-    assertEquals(0, compare(o1, o1, schema, comparable, comparator));
-    assertEquals(0, compare(o2, o2, schema, comparable, comparator));
-
-    assert(o1.equals(o1));
-    assert(o2.equals(o2));
-    assert(!o1.equals(o2));
-    assert(!o2.equals(o1));
-    assert(!o1.equals(new Object()));
-    assert(!o2.equals(new Object()));
-    assert(!o1.equals(null));
-    assert(!o2.equals(null));
-
-    assert(o1.hashCode() != o2.hashCode());
-  }
-
-  @SuppressWarnings(value="unchecked")
-  private static int compare(Object o1, Object o2, Schema schema,
-                             boolean comparable, GenericData comparator) {
-    return comparable
-      ? ((Comparable)o1).compareTo(o2)
-      : comparator.compare(o1, o2, schema);
-  }
-
-  private static <T> byte[] render(T datum, Schema schema,
-                               DatumWriter<T> writer)
-    throws IOException {
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    writer.setSchema(schema);
-    writer.write(datum, new BinaryEncoder(out));
-    return out.toByteArray();
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFile.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFile.java
deleted file mode 100644
index 26f56e8..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFile.java
+++ /dev/null
@@ -1,240 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Random;
-
-import org.apache.avro.file.CodecFactory;
-import org.apache.avro.file.DataFileReader;
-import org.apache.avro.file.DataFileWriter;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.specific.SpecificDatumReader;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-import org.junit.runners.Parameterized.Parameters;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-@RunWith(Parameterized.class)
-public class TestDataFile {
-  private static final Logger LOG =
-    LoggerFactory.getLogger(TestDataFile.class);
-
-  CodecFactory codec = null;
-  public TestDataFile(CodecFactory codec) {
-    this.codec = codec;
-    LOG.info("Running with codec: " + codec);
-  }
-
-  @Parameters
-  public static List<Object[]> codecs() {
-    List<Object[]> r = new ArrayList<Object[]>();
-    r.add(new Object[] { null });
-    r.add(new Object[] { CodecFactory.deflateCodec(0) });
-    r.add(new Object[] { CodecFactory.deflateCodec(1) });
-    r.add(new Object[] { CodecFactory.deflateCodec(9) });
-    r.add(new Object[] { CodecFactory.nullCodec() });
-    return r;
-  }
-
-  private static final int COUNT =
-    Integer.parseInt(System.getProperty("test.count", "200"));
-  private static final boolean VALIDATE =
-    !"false".equals(System.getProperty("test.validate", "true"));
-  private static final File DIR
-    = new File(System.getProperty("test.dir", "/tmp"));
-  private static final File DATAFILE_DIR
-    = new File(System.getProperty("test.dir", "/tmp"));
-  private static final long SEED = System.currentTimeMillis();
-  private static final String SCHEMA_JSON =
-    "{\"type\": \"record\", \"name\": \"Test\", \"fields\": ["
-    +"{\"name\":\"stringField\", \"type\":\"string\"},"
-    +"{\"name\":\"longField\", \"type\":\"long\"}]}";
-  private static final Schema SCHEMA = Schema.parse(SCHEMA_JSON);
-
-  private File makeFile() {
-    return new File(DIR, "test-" + codec + ".avro");
-  }
-
-  @Test
-  public void testGenericWrite() throws IOException {
-    DataFileWriter<Object> writer =
-      new DataFileWriter<Object>(new GenericDatumWriter<Object>())
-      .setSyncInterval(100);
-    if (codec != null) {
-      writer.setCodec(codec);
-    }
-    writer.create(SCHEMA, makeFile());
-    try {
-      int count = 0;
-      for (Object datum : new RandomData(SCHEMA, COUNT, SEED)) {
-        writer.append(datum);
-        if (++count%(COUNT/3) == 0)
-          writer.sync();                          // force some syncs mid-file
-      }
-    } finally {
-      writer.close();
-    }
-  }
-
-  @Test
-  public void testGenericRead() throws IOException {
-    DataFileReader<Object> reader =
-      new DataFileReader<Object>(makeFile(), new GenericDatumReader<Object>());
-    try {
-      Object datum = null;
-      if (VALIDATE) {
-        for (Object expected : new RandomData(SCHEMA, COUNT, SEED)) {
-          datum = reader.next(datum);
-          assertEquals(expected, datum);
-        }
-      } else {
-        for (int i = 0; i < COUNT; i++) {
-          datum = reader.next(datum);
-        }
-      }
-    } finally {
-      reader.close();
-    }
-  }
-
-  @Test
-  public void testSplits() throws IOException {
-    File file = makeFile();
-    DataFileReader<Object> reader =
-      new DataFileReader<Object>(file, new GenericDatumReader<Object>());
-    Random rand = new Random(SEED);
-    try {
-      int splits = 10;                            // number of splits
-      int length = (int)file.length();            // length of file
-      int end = length;                           // end of split
-      int remaining = end;                        // bytes remaining
-      int count = 0;                              // count of entries
-      while (remaining > 0) {
-        int start = Math.max(0, end - rand.nextInt(2*length/splits));
-        reader.sync(start);                       // count entries in split
-        while (!reader.pastSync(end)) {
-          reader.next();
-          count++;
-        }
-        remaining -= end-start;
-        end = start;
-      }
-      assertEquals(COUNT, count);
-    } finally {
-      reader.close();
-    }
-  }
-
-  @Test
-  public void testGenericAppend() throws IOException {
-    File file = makeFile();
-    long start = file.length();
-    DataFileWriter<Object> writer =
-      new DataFileWriter<Object>(new GenericDatumWriter<Object>())
-      .appendTo(file);
-    try {
-      for (Object datum : new RandomData(SCHEMA, COUNT, SEED+1)) {
-        writer.append(datum);
-      }
-    } finally {
-      writer.close();
-    }
-    DataFileReader<Object> reader =
-      new DataFileReader<Object>(file, new GenericDatumReader<Object>());
-    try {
-      reader.seek(start);
-      Object datum = null;
-      if (VALIDATE) {
-        for (Object expected : new RandomData(SCHEMA, COUNT, SEED+1)) {
-          datum = reader.next(datum);
-          assertEquals(expected, datum);
-        }
-      } else {
-        for (int i = 0; i < COUNT; i++) {
-          datum = reader.next(datum);
-        }
-      }
-    } finally {
-      reader.close();
-    }
-  }
-
-  protected void readFile(File f, DatumReader<Object> datumReader)
-    throws IOException {
-    System.out.println("Reading "+ f.getName());
-    DataFileReader<Object> reader = new DataFileReader<Object>(f, datumReader);
-    for (Object datum : reader) {
-      assertNotNull(datum);
-    }
-  }
-
-  public static void main(String[] args) throws Exception {
-    File input = new File(args[0]);
-    Schema projection = null;
-    if (args.length > 1)
-      projection = Schema.parse(new File(args[1]));
-    TestDataFile tester = new TestDataFile(null);
-    tester.readFile(input, new GenericDatumReader<Object>(null, projection));
-    long start = System.currentTimeMillis();
-    for (int i = 0; i < 4; i++)
-      tester.readFile(input, new GenericDatumReader<Object>(null, projection));
-    System.out.println("Time: "+(System.currentTimeMillis()-start));
-  }
-
-  public static class InteropTest {
-
-  @Test
-    public void testGeneratedGeneric() throws IOException {
-      System.out.println("Reading with generic:");
-      readFiles(new GenericDatumReader<Object>());
-    }
-
-  @Test
-    public void testGeneratedSpecific() throws IOException {
-      System.out.println("Reading with specific:");
-      readFiles(new SpecificDatumReader<Object>());
-    }
-
-  // Can't use same Interop.java as specific for reflect, since its stringField
-  // has type Utf8, which reflect would try to assign a String to.  We could
-  // fix this by defining a reflect-specific version of Interop.java, but we'd
-  // need to put it on a different classpath than the specific one.
-
-  // @Test
-  //   public void testGeneratedReflect() throws IOException {
-  //     readFiles(new ReflectDatumReader(Interop.class));
-  //   }
-
-    private void readFiles(DatumReader<Object> datumReader) throws IOException {
-      TestDataFile test = new TestDataFile(null);
-      for (File f : DATAFILE_DIR.listFiles())
-        test.readFile(f, datumReader);
-    }
-  }
-}
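The `testSplits` method above relies on an invariant of sync-marker files: if contiguous byte ranges partition the file, and each range reads only the blocks whose sync point falls inside it, then every record is read exactly once regardless of how the split boundaries are chosen. A simplified stand-alone model of that accounting — the class and its block-offset representation are our assumption, not the `DataFileReader` API — is:

```java
/** Simplified model of sync-based split reading (assumption: each
 *  block boundary carries recordsPerBlock records; not the real API). */
public class SplitCounting {

  /** Counts records in blocks whose sync offset falls in [start, end). */
  static int countInSplit(long[] blockStarts, int recordsPerBlock,
                          long start, long end) {
    int count = 0;
    for (long b : blockStarts)
      if (b >= start && b < end)
        count += recordsPerBlock;
    return count;
  }

  public static void main(String[] args) {
    long[] blocks = {0, 120, 260, 390, 500};  // sync offsets in the file
    int perBlock = 40;
    long[] cuts = {0, 200, 450, 600};         // arbitrary split boundaries
    int total = 0;
    for (int i = 0; i + 1 < cuts.length; i++)
      total += countInSplit(blocks, perBlock, cuts[i], cuts[i + 1]);
    System.out.println(total);                // 5 blocks * 40 records = 200
  }
}
```

The test walks the splits back-to-front with random sizes, but the invariant is the same: each block belongs to exactly one split, so the counts sum to `COUNT`.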
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileConcat.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileConcat.java
deleted file mode 100644
index 0a893a9..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileConcat.java
+++ /dev/null
@@ -1,154 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import static org.junit.Assert.assertEquals;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.avro.file.CodecFactory;
-import org.apache.avro.file.DataFileReader;
-import org.apache.avro.file.DataFileWriter;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-import org.junit.runners.Parameterized.Parameters;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-@RunWith(Parameterized.class)
-public class TestDataFileConcat {
-  private static final Logger LOG =
-    LoggerFactory.getLogger(TestDataFileConcat.class);
-
-  CodecFactory codec = null;
-  CodecFactory codec2 = null;
-  boolean recompress;
-  public TestDataFileConcat(CodecFactory codec, CodecFactory codec2, Boolean recompress) {
-    this.codec = codec;
-    this.codec2 = codec2;
-    this.recompress = recompress;
-    LOG.info("Testing concatenating files, " + codec2 + " into " + codec + 
-        " with recompress=" + recompress);
-  }
-
-  @Parameters
-  public static List<Object[]> codecs() {
-    List<Object[]> r = new ArrayList<Object[]>();
-    r.add(new Object[] { null , null, false});
-    r.add(new Object[] { null , null, true});
-    r.add(new Object[]
-        { CodecFactory.deflateCodec(1), CodecFactory.deflateCodec(6), false });
-    r.add(new Object[]
-        { CodecFactory.deflateCodec(1), CodecFactory.deflateCodec(6), true });
-    r.add(new Object[]
-        { CodecFactory.deflateCodec(3), CodecFactory.nullCodec(), false });
-    r.add(new Object[]
-        { CodecFactory.nullCodec(), CodecFactory.deflateCodec(6), false });
-    return r;
-  }
-
-  private static final int COUNT =
-    Integer.parseInt(System.getProperty("test.count", "200"));
-  private static final boolean VALIDATE =
-    !"false".equals(System.getProperty("test.validate", "true"));
-  private static final File DIR
-    = new File(System.getProperty("test.dir", "/tmp"));
-  private static final long SEED = System.currentTimeMillis();
-
-  private static final String SCHEMA_JSON =
-    "{\"type\": \"record\", \"name\": \"Test\", \"fields\": ["
-    +"{\"name\":\"stringField\", \"type\":\"string\"},"
-    +"{\"name\":\"longField\", \"type\":\"long\"}]}";
-  private static final Schema SCHEMA = Schema.parse(SCHEMA_JSON);
-
-  private File makeFile(String name) {
-    return new File(DIR, "test-" + name + ".avro");
-  }
-
-  @Test
-  public void testConcatenateFiles() throws IOException {
-    File file1 = makeFile((codec == null ? "null" : codec.toString()) + "-A");
-    File file2 = makeFile((codec2 == null ? "null" : codec2.toString()) + "-B");
-    DataFileWriter<Object> writer =
-      new DataFileWriter<Object>(new GenericDatumWriter<Object>())
-      .setSyncInterval(500);
-    if (codec != null) {
-      writer.setCodec(codec);
-    }
-    writer.create(SCHEMA, file1);
-    try {
-      for (Object datum : new RandomData(SCHEMA, COUNT, SEED)) {
-        writer.append(datum);
-      }
-    } finally {
-      writer.close();
-    }
-    DataFileWriter<Object> writer2 =
-      new DataFileWriter<Object>(new GenericDatumWriter<Object>())
-      .setSyncInterval(500);
-    if (codec2 != null) {
-      writer2.setCodec(codec2);
-    }
-    writer2.create(SCHEMA, file2);
-    try {
-      for (Object datum : new RandomData(SCHEMA, COUNT, SEED+1)) {
-        writer2.append(datum);
-      }
-    } finally {
-      writer2.close();
-    }
-    DataFileWriter<Object> concatInto =
-      new DataFileWriter<Object>(new GenericDatumWriter<Object>())
-      .setSyncInterval(500);
-    concatInto.appendTo(file1);
-    DataFileReader<Object> concatFrom =
-      new DataFileReader<Object>(file2, new GenericDatumReader<Object>());
-    concatInto.appendAllFrom(concatFrom, recompress);
-    concatInto.close();
-
-    DataFileReader<Object> concat =
-      new DataFileReader<Object>(file1, new GenericDatumReader<Object>());
-
-    try {
-      Object datum = null;
-      if (VALIDATE) {
-        for (Object expected : new RandomData(SCHEMA, COUNT, SEED)) {
-          datum = concat.next(datum);
-          assertEquals(expected, datum);
-        }
-        for (Object expected : new RandomData(SCHEMA, COUNT, SEED+1)) {
-          datum = concat.next(datum);
-          assertEquals(expected, datum);
-        }
-      } else {
-        for (int i = 0; i < COUNT*2; i++) {
-          datum = concat.next(datum);
-        }
-      }
-    } finally {
-      concat.close();
-    }
-
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileDeflate.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileDeflate.java
deleted file mode 100644
index 12efbf3..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileDeflate.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-
-import org.apache.avro.Schema.Type;
-import org.apache.avro.file.CodecFactory;
-import org.apache.avro.file.DataFileStream;
-import org.apache.avro.file.DataFileWriter;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.util.Utf8;
-import org.junit.Test;
-
-/** Simple test of DataFileWriter and DataFileStream with the deflate codec. */
-public class TestDataFileDeflate {
-  @Test
-  public void testWriteAndRead() throws IOException {
-    Schema schema = Schema.create(Type.STRING);
-
-    // Write it
-    DataFileWriter<Utf8> w = new DataFileWriter<Utf8>(new GenericDatumWriter<Utf8>(schema));
-    w.setCodec(CodecFactory.deflateCodec(6));
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    w.create(schema, baos);
-    w.append(new Utf8("hello world"));
-    w.append(new Utf8("hello moon"));
-    w.sync();
-    w.append(new Utf8("bye bye world"));
-    w.append(new Utf8("bye bye moon"));
-    w.close();
-
-    // Read it
-    DataFileStream<Utf8> r = new DataFileStream<Utf8>(
-        new ByteArrayInputStream(baos.toByteArray()),
-        new GenericDatumReader<Utf8>(schema));
-    assertEquals("hello world", r.next().toString());
-    assertEquals("hello moon", r.next().toString());
-    assertEquals("bye bye world", r.next().toString());
-    assertEquals("bye bye moon", r.next().toString());
-    assertFalse(r.hasNext());
-  }
-}
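The test above drives `CodecFactory.deflateCodec(6)`; underneath, that codec compresses each file block with `java.util.zip`. A stand-alone sketch of the same deflate round-trip — class name and buffer size are our choices, independent of Avro — is:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Raw deflate round-trip, the mechanism behind the deflate codec
 *  (illustrative sketch; not Avro code). */
public class DeflateRoundTrip {

  static byte[] compress(byte[] data, int level) {
    Deflater deflater = new Deflater(level);  // level 0-9, as in deflateCodec
    deflater.setInput(data);
    deflater.finish();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[256];
    while (!deflater.finished())
      out.write(buf, 0, deflater.deflate(buf));
    deflater.end();
    return out.toByteArray();
  }

  static byte[] decompress(byte[] data) {
    Inflater inflater = new Inflater();
    inflater.setInput(data);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[256];
    try {
      while (!inflater.finished())
        out.write(buf, 0, inflater.inflate(buf));
    } catch (DataFormatException e) {
      throw new RuntimeException("corrupt deflate stream", e);
    }
    inflater.end();
    return out.toByteArray();
  }

  public static void main(String[] args) {
    byte[] original = "hello world".getBytes(StandardCharsets.UTF_8);
    byte[] unpacked = decompress(compress(original, 6));
    System.out.println(new String(unpacked, StandardCharsets.UTF_8));
  }
}
```

The test's `sync()` call mid-stream matters because each block is compressed independently; a reader can decompress any block on its own once it finds the sync marker.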
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileMeta.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileMeta.java
deleted file mode 100644
index 28f446a..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileMeta.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import static org.junit.Assert.assertEquals;
-
-import java.io.ByteArrayOutputStream;
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.IOException;
-
-import junit.framework.Assert;
-
-import org.apache.avro.Schema.Type;
-import org.apache.avro.file.DataFileStream;
-import org.apache.avro.file.DataFileWriter;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.junit.Test;
-
-public class TestDataFileMeta {
-  @Test(expected=AvroRuntimeException.class)
-  public void testUseReservedMeta() {
-    DataFileWriter<?> w = new DataFileWriter<Object>(new GenericDatumWriter<Object>());
-    w.setMeta("avro.foo", "bar");
-  }
-
-  @Test()
-  public void testUseMeta() throws IOException {
-    DataFileWriter<?> w = new DataFileWriter<Object>(new GenericDatumWriter<Object>());
-    File f = AvroTestUtil.tempFile("testDataFileMeta.avro");
-    w.setMeta("hello", "bar");
-    w.create(Schema.create(Type.NULL), f);
-    w.close();
-
-    DataFileStream<Void> r = new DataFileStream<Void>(new FileInputStream(f), new GenericDatumReader<Void>());
-    assertEquals("bar", r.getMetaString("hello"));
-  }
-
-  @Test(expected=AvroRuntimeException.class)
-  public void testUseMetaAfterCreate() throws IOException {
-    DataFileWriter<?> w = new DataFileWriter<Object>(new GenericDatumWriter<Object>());
-    w.create(Schema.create(Type.NULL), new ByteArrayOutputStream());
-    w.setMeta("foo", "bar");
-  }
-
-  @Test
-  public void testBlockSizeSetInvalid() {
-    int exceptions = 0;
-    for (int i = -1; i < 33; i++) {
-      // 33 invalid, one valid
-      try {
-        new DataFileWriter<Object>(new GenericDatumWriter<Object>()).setSyncInterval(i);
-      } catch (IllegalArgumentException iae) {
-        exceptions++;
-      }
-    }
-    Assert.assertEquals(33, exceptions);
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileReflect.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileReflect.java
deleted file mode 100644
index 78ff014..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestDataFileReflect.java
+++ /dev/null
@@ -1,185 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-
-import org.apache.avro.file.DataFileReader;
-import org.apache.avro.file.DataFileWriter;
-import org.apache.avro.file.SeekableFileInput;
-import org.apache.avro.reflect.ReflectData;
-import org.apache.avro.reflect.ReflectDatumReader;
-import org.apache.avro.reflect.ReflectDatumWriter;
-import org.junit.Assert;
-import org.junit.Test;
-
-public class TestDataFileReflect {
-
-  private static final File DIR = new File(System.getProperty("test.dir",
-      "/tmp"));
-  private static final File FILE = new File(DIR, "test.avro");
-
-  /*
-   * Test that using multiple schemas in one file works when a union is
-   * created before any records are written.
-   */
-  @Test
-  public void testMultiReflectWithUnionBeforeWriting() throws IOException {
-    FileOutputStream fos = new FileOutputStream(FILE);
-
-    ReflectData reflectData = ReflectData.get();
-    List<Schema> schemas = Arrays.asList(new Schema[] {
-        reflectData.getSchema(FooRecord.class),
-        reflectData.getSchema(BarRecord.class) });
-    Schema union = Schema.createUnion(schemas);
-    DataFileWriter<Object> writer =
-      new DataFileWriter<Object>(new ReflectDatumWriter<Object>(union))
-      .create(union, fos);
-
-    // test writing to a file
-    CheckList<Object> check = new CheckList<Object>();
-    write(writer, new BarRecord("One beer please"), check);
-    write(writer, new FooRecord(10), check);
-    write(writer, new BarRecord("Two beers please"), check);
-    write(writer, new FooRecord(20), check);
-    writer.close();
-
-    ReflectDatumReader<Object> din = new ReflectDatumReader<Object>();
-    SeekableFileInput sin = new SeekableFileInput(FILE);
-    DataFileReader<Object> reader = new DataFileReader<Object>(sin, din);
-    int count = 0;
-    for (Object datum : reader)
-      check.assertEquals(datum, count++);
-    Assert.assertEquals(count, check.size());
-    reader.close();
-  }
-
-  /*
-   * Test writing a record with a field that is null.
-   */
-  @Test
-  public void testNull() throws IOException {
-    FileOutputStream fos = new FileOutputStream(FILE);
-
-    ReflectData reflectData = ReflectData.AllowNull.get();
-    Schema schema = reflectData.getSchema(BarRecord.class);
-    DataFileWriter<BarRecord> writer = new DataFileWriter<BarRecord>
-      (new ReflectDatumWriter<BarRecord>(BarRecord.class, reflectData))
-      .create(schema, fos);
-
-    // test writing to a file
-    CheckList<BarRecord> check = new CheckList<BarRecord>();
-    write(writer, new BarRecord("One beer please"), check);
-    // null record here, fails when using the default reflectData instance
-    write(writer, new BarRecord(), check);
-    write(writer, new BarRecord("Two beers please"), check);
-    writer.close();
-
-    ReflectDatumReader<BarRecord> din = new ReflectDatumReader<BarRecord>();
-    SeekableFileInput sin = new SeekableFileInput(FILE);
-    DataFileReader<BarRecord> reader = new DataFileReader<BarRecord>(sin, din);
-    int count = 0;
-    for (BarRecord datum : reader)
-      check.assertEquals(datum, count++);
-    Assert.assertEquals(count, check.size());
-    reader.close();
-  }
-
-  /*
-   * Test that writing out and reading in a nested class works.
-   */
-  @Test
-  public void testNestedClass() throws IOException {
-    FileOutputStream fos = new FileOutputStream(FILE);
-
-    Schema schema = ReflectData.get().getSchema(BazRecord.class);
-    DataFileWriter<BazRecord> writer =
-      new DataFileWriter<BazRecord>(new ReflectDatumWriter<BazRecord>(schema))
-      .create(schema, fos);
-
-    // test writing to a file
-    CheckList<BazRecord> check = new CheckList<BazRecord>();
-    write(writer, new BazRecord(10), check);
-    write(writer, new BazRecord(20), check);
-    writer.close();
-
-    ReflectDatumReader<BazRecord> din = new ReflectDatumReader<BazRecord>();
-    SeekableFileInput sin = new SeekableFileInput(FILE);
-    DataFileReader<BazRecord> reader = new DataFileReader<BazRecord>(sin, din);
-    int count = 0;
-    for (BazRecord datum : reader)
-      check.assertEquals(datum, count++);
-    Assert.assertEquals(count, check.size());
-    reader.close();
-  }
-
-  private <T> void write(DataFileWriter<T> writer, T o, CheckList<T> l)
-      throws IOException {
-    writer.append(l.addAndReturn(o));
-  }
-
-  @SuppressWarnings("serial")
-  private static class CheckList<T> extends ArrayList<T> {
-    T addAndReturn(T check) {
-      add(check);
-      return check;
-    }
-
-    void assertEquals(Object toCheck, int i) {
-      Assert.assertNotNull(toCheck);
-      Object o = get(i);
-      Assert.assertNotNull(o);
-      Assert.assertEquals(toCheck, o);
-    }
-  }
-
-  private static class BazRecord {
-    private int nbr;
-
-    @SuppressWarnings("unused")
-    public BazRecord() {
-    }
-
-    public BazRecord(int nbr) {
-      this.nbr = nbr;
-    }
-
-    @Override
-    public boolean equals(Object that) {
-      if (that instanceof BazRecord) {
-        return this.nbr == ((BazRecord) that).nbr;
-      }
-      return false;
-    }
-
-    @Override
-    public int hashCode() {
-      return nbr;
-    }
-
-    @Override
-    public String toString() {
-      return BazRecord.class.getSimpleName() + "{cnt=" + nbr + "}";
-    }
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestNamespaceReflect.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestNamespaceReflect.java
deleted file mode 100644
index b8cf477..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestNamespaceReflect.java
+++ /dev/null
@@ -1,39 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.ipc.SocketServer;
-import org.apache.avro.ipc.SocketTransceiver;
-import org.apache.avro.reflect.ReflectRequestor;
-import org.apache.avro.reflect.ReflectResponder;
-import org.apache.avro.test.namespace.TestNamespace;
-import org.junit.Before;
-
-import java.net.InetSocketAddress;
-
-public class TestNamespaceReflect extends TestNamespaceSpecific {
-
-  @Before
-  public void testStartServer() throws Exception {
-    server = new SocketServer(new ReflectResponder(TestNamespace.class, new TestImpl()),
-                              new InetSocketAddress(0));
-    client = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    proxy = (TestNamespace)ReflectRequestor.getClient(TestNamespace.class, client);
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestNamespaceSpecific.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestNamespaceSpecific.java
deleted file mode 100644
index 70972ad..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestNamespaceSpecific.java
+++ /dev/null
@@ -1,91 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.SocketServer;
-import org.apache.avro.ipc.SocketTransceiver;
-import org.apache.avro.ipc.Transceiver;
-import org.apache.avro.specific.SpecificRequestor;
-import org.apache.avro.specific.SpecificResponder;
-import org.apache.avro.test.namespace.TestNamespace;
-import org.apache.avro.test.util.MD5;
-import org.apache.avro.test.errors.TestError;
-import org.apache.avro.test.namespace.TestRecord;
-import org.apache.avro.util.Utf8;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-
-import java.io.IOException;
-import java.net.InetSocketAddress;
-
-public class TestNamespaceSpecific {
-
-  public static class TestImpl implements TestNamespace {
-    public TestRecord echo(TestRecord record) { return record; }
-    public Void error() throws AvroRemoteException {
-      TestError error = new TestError();
-      error.message = new Utf8("an error");
-      throw error;
-    }
-  }
-
-  protected static SocketServer server;
-  protected static Transceiver client;
-  protected static TestNamespace proxy;
-
-  @Before
-  public void testStartServer() throws Exception {
-    server = new SocketServer(new SpecificResponder(TestNamespace.class, new TestImpl()),
-                              new InetSocketAddress(0));
-    client = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    proxy = (TestNamespace)SpecificRequestor.getClient(TestNamespace.class, client);
-  }
-
-  @Test
-  public void testEcho() throws IOException {
-    TestRecord record = new TestRecord();
-    record.hash = new MD5();
-    System.arraycopy(new byte[]{0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5}, 0,
-                     record.hash.bytes(), 0, 16);
-    TestRecord echoed = proxy.echo(record);
-    assertEquals(record, echoed);
-    assertEquals(record.hashCode(), echoed.hashCode());
-  }
-
-  @Test
-  public void testError() throws IOException {
-    TestError error = null;
-    try {
-      proxy.error();
-    } catch (TestError e) {
-      error = e;
-    }
-    assertNotNull(error);
-    assertEquals("an error", error.message.toString());
-  }
-
-  @After
-  public void testStopServer() throws IOException {
-    client.close();
-    server.close();
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolDatagram.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolDatagram.java
deleted file mode 100644
index 6a2c28f..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolDatagram.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.util.Random;
-import org.apache.avro.ipc.DatagramServer;
-import org.apache.avro.ipc.DatagramTransceiver;
-import org.apache.avro.specific.SpecificRequestor;
-import org.apache.avro.specific.SpecificResponder;
-import org.apache.avro.test.Simple;
-import org.junit.Before;
-
-import java.net.InetSocketAddress;
-
-public class TestProtocolDatagram extends TestProtocolSpecific {
-
-  @Before
-  public void testStartServer() throws Exception {
-    server =
-      new DatagramServer(new SpecificResponder(Simple.class, new TestImpl()),
-                         new InetSocketAddress("localhost",
-                                               new Random().nextInt(10000)+10000));
-    client = new DatagramTransceiver(new InetSocketAddress("localhost", server.getPort()));
-    proxy = (Simple)SpecificRequestor.getClient(Simple.class, client);
-  }
-
-}
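Unlike the socket tests, which bind to port 0 and let the OS pick, the datagram variant above draws its own port with `new Random().nextInt(10000) + 10000`, which always lands in [10000, 20000). A minimal sketch of that arithmetic:

```java
class PortPicker {
    /** Pick a test port the same way the datagram test above does. */
    public static int randomPort(java.util.Random random) {
        // nextInt(10000) yields 0..9999, so the sum is 10000..19999.
        return random.nextInt(10000) + 10000;
    }
}
```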
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolGeneric.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolGeneric.java
deleted file mode 100644
index bc12ea2..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolGeneric.java
+++ /dev/null
@@ -1,195 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.Schema.Field;
-import org.apache.avro.generic.GenericData;
-import org.apache.avro.generic.GenericRecord;
-import org.apache.avro.generic.GenericRequestor;
-import org.apache.avro.generic.GenericResponder;
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.SocketServer;
-import org.apache.avro.ipc.SocketTransceiver;
-import org.apache.avro.ipc.Transceiver;
-import org.apache.avro.util.Utf8;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.File;
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Random;
-
-public class TestProtocolGeneric {
-  private static final Logger LOG
-    = LoggerFactory.getLogger(TestProtocolGeneric.class);
-
-  protected static final File FILE = new File("../../share/test/schemas/simple.avpr");
-  protected static final Protocol PROTOCOL;
-  static {
-    try {
-      PROTOCOL = Protocol.parse(FILE);
-    } catch (IOException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  protected static class TestResponder extends GenericResponder {
-    public TestResponder() { super(PROTOCOL); }
-    public Object respond(Message message, Object request)
-      throws AvroRemoteException {
-      GenericRecord params = (GenericRecord)request;
-
-      if ("hello".equals(message.getName())) {
-        LOG.info("hello: "+params.get("greeting"));
-        return new Utf8("goodbye");
-      }
-
-      if ("echo".equals(message.getName())) {
-        Object record = params.get("record");
-        LOG.info("echo: "+record);
-        return record;
-      }
-
-      if ("echoBytes".equals(message.getName())) {
-        Object data = params.get("data");
-        LOG.info("echoBytes: "+data);
-        return data;
-      }
-
-      if ("error".equals(message.getName())) {
-        GenericRecord error =
-          new GenericData.Record(PROTOCOL.getType("TestError"));
-        error.put("message", new Utf8("an error"));
-        throw new AvroRemoteException(error);
-      }
-      
-      throw new AvroRuntimeException("unexpected message: "+message.getName());
-    }
-
-  }
-
-  protected static SocketServer server;
-  protected static Transceiver client;
-  protected static GenericRequestor requestor;
-
-  @Before
-  public void testStartServer() throws Exception {
-    server = new SocketServer(new TestResponder(), new InetSocketAddress(0));
-    client = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    requestor = new GenericRequestor(PROTOCOL, client);
-  }
-
-  @Test
-  public void testHello() throws IOException {
-    GenericRecord params = 
-      new GenericData.Record(PROTOCOL.getMessages().get("hello").getRequest());
-    params.put("greeting", new Utf8("bob"));
-    Utf8 response = (Utf8)requestor.request("hello", params);
-    assertEquals(new Utf8("goodbye"), response);
-  }
-
-  @Test
-  public void testEcho() throws IOException {
-    GenericRecord record =
-      new GenericData.Record(PROTOCOL.getType("TestRecord"));
-    record.put("name", new Utf8("foo"));
-    record.put("kind", "BAR");
-    record.put("hash", new GenericData.Fixed
-               (new byte[]{0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5}));
-    GenericRecord params =
-      new GenericData.Record(PROTOCOL.getMessages().get("echo").getRequest());
-    params.put("record", record);
-    Object echoed = requestor.request("echo", params);
-    assertEquals(record, echoed);
-  }
-
-  @Test
-  public void testEchoBytes() throws IOException {
-    Random random = new Random();
-    int length = random.nextInt(1024*16);
-    GenericRecord params =
-      new GenericData.Record(PROTOCOL.getMessages().get("echoBytes").getRequest());
-    ByteBuffer data = ByteBuffer.allocate(length);
-    random.nextBytes(data.array());
-    data.flip();
-    params.put("data", data);
-    Object echoed = requestor.request("echoBytes", params);
-    assertEquals(data, echoed);
-  }
-
-  @Test
-  public void testError() throws IOException {
-    GenericRecord params =
-      new GenericData.Record(PROTOCOL.getMessages().get("error").getRequest());
-    AvroRemoteException error = null;
-    try {
-      requestor.request("error", params);
-    } catch (AvroRemoteException e) {
-      error = e;
-    }
-    assertNotNull(error);
-    assertEquals("an error", ((GenericRecord)error.getValue()).get("message").toString());
-  }
-
-  @Test
-  /** Construct and use a different protocol whose "hello" method has an extra
-      argument to check that schema is sent to parse request. */
-  public void testHandshake() throws IOException {
-    Protocol protocol = new Protocol("Simple", "org.apache.avro.test");
-    List<Field> fields = new ArrayList<Field>();
-    fields.add(new Schema.Field("extra", Schema.create(Schema.Type.BOOLEAN),
-                   null, null));
-    fields.add(new Schema.Field("greeting", Schema.create(Schema.Type.STRING),
-                   null, null));
-    Protocol.Message message =
-      protocol.createMessage("hello",
-                             null /* doc */,
-                             Schema.createRecord(fields),
-                             Schema.create(Schema.Type.STRING),
-                             Schema.createUnion(new ArrayList<Schema>()));
-    protocol.getMessages().put("hello", message);
-    Transceiver t
-      = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    try {
-      GenericRequestor r = new GenericRequestor(protocol, t);
-      GenericRecord params = new GenericData.Record(message.getRequest());
-      params.put("extra", Boolean.TRUE);
-      params.put("greeting", new Utf8("bob"));
-      Utf8 response = (Utf8)r.request("hello", params);
-      assertEquals(new Utf8("goodbye"), response);
-    } finally {
-      t.close();
-    }
-  }
-
-  @After
-  public void testStopServer() throws IOException {
-    client.close();
-    server.close();
-  }
-}
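One java.nio subtlety in testEchoBytes above: `random.nextBytes(data.array())` writes through the backing array, which does not advance the buffer's position, so the following `flip()` sets the limit to 0 and the buffer round-trips as empty. A standalone sketch contrasting that with the put()-then-flip() idiom:

```java
import java.nio.ByteBuffer;

class FlipDemo {
    /** Fill via the backing array: position never moves, so flip() leaves 0 readable bytes. */
    public static int remainingAfterArrayFill(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.allocate(bytes.length);
        System.arraycopy(bytes, 0, buf.array(), 0, bytes.length);
        buf.flip(); // limit = position = 0
        return buf.remaining();
    }

    /** Fill via put(): position advances, so flip() exposes all the data. */
    public static int remainingAfterPut(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.allocate(bytes.length);
        buf.put(bytes);
        buf.flip(); // limit = old position = bytes.length, position = 0
        return buf.remaining();
    }
}
```

The assertion in the test still holds because an empty buffer echoes back as an empty buffer; the sketch just shows why `flip()` after an array-level fill yields zero remaining bytes.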
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolGenericMeta.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolGenericMeta.java
deleted file mode 100644
index 12adbe9..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolGenericMeta.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.net.InetSocketAddress;
-
-import org.apache.avro.generic.GenericRequestor;
-import org.apache.avro.ipc.Responder;
-import org.apache.avro.ipc.SocketServer;
-import org.apache.avro.ipc.SocketTransceiver;
-import org.junit.Before;
-
-public class TestProtocolGenericMeta extends TestProtocolGeneric {
-  
-  @Before
-  public void testStartServer() throws Exception {
-    Responder responder = new TestResponder();
-    responder.addRPCPlugin(new RPCMetaTestPlugin("key1"));
-    responder.addRPCPlugin(new RPCMetaTestPlugin("key2"));
-    server = new SocketServer(responder, new InetSocketAddress(0));
-    
-    client = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    requestor = new GenericRequestor(PROTOCOL, client);
-    requestor.addRPCPlugin(new RPCMetaTestPlugin("key1"));
-    requestor.addRPCPlugin(new RPCMetaTestPlugin("key2"));
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolHttp.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolHttp.java
deleted file mode 100644
index 1729580..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolHttp.java
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.ipc.HttpServer;
-import org.apache.avro.ipc.HttpTransceiver;
-import org.apache.avro.specific.SpecificRequestor;
-import org.apache.avro.specific.SpecificResponder;
-import org.apache.avro.test.Simple;
-import org.junit.Before;
-
-import java.net.URL;
-
-public class TestProtocolHttp extends TestProtocolSpecific {
-
-  @Before
-  public void testStartServer() throws Exception {
-    server =
-      new HttpServer(new SpecificResponder(Simple.class, new TestImpl()), 0);
-    client =
-      new HttpTransceiver(new URL("http://127.0.0.1:"+server.getPort()+"/"));
-    proxy = (Simple)SpecificRequestor.getClient(Simple.class, client);
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolParsing.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolParsing.java
deleted file mode 100644
index 46ea074..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolParsing.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import static org.junit.Assert.assertEquals;
-
-import java.io.File;
-import java.io.IOException;
-
-import org.junit.Test;
-
-public class TestProtocolParsing {
-  public static Protocol getSimpleProtocol() throws IOException {
-    File file = new File("../../share/test/schemas/simple.avpr");
-    Protocol protocol = Protocol.parse(file);
-    return protocol;
-  }
-  
-  @Test
-  public void testParsing() throws IOException {
-    Protocol protocol = getSimpleProtocol();
-    
-    assertEquals(protocol.getDoc(), "Protocol used for testing.");
-    assertEquals(5, protocol.getMessages().size());
-    assertEquals("Pretend you're in a cave!", protocol.getMessages().get("echo").getDoc());    
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolReflect.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolReflect.java
deleted file mode 100644
index eb75191..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolReflect.java
+++ /dev/null
@@ -1,126 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.ipc.Server;
-import org.apache.avro.ipc.Transceiver;
-import org.apache.avro.ipc.SocketServer;
-import org.apache.avro.ipc.SocketTransceiver;
-import org.apache.avro.reflect.ReflectRequestor;
-import org.apache.avro.reflect.ReflectResponder;
-
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertArrayEquals;
-import static org.junit.Assert.assertNotNull;
-
-import java.net.InetSocketAddress;
-import java.util.Random;
-import java.io.IOException;
-
-public class TestProtocolReflect {
-
-  public static class TestRecord {
-    private String name;
-    public int hashCode() { return this.name.hashCode(); }
-    public boolean equals(Object that) {
-      return this.name.equals(((TestRecord)that).name);
-    }
-  }
-
-  public interface Simple {
-    String hello(String greeting);
-    TestRecord echo(TestRecord record);
-    int add(int arg1, int arg2);
-    byte[] echoBytes(byte[] data);
-    void error() throws SimpleException;
-  }
-  
-  public static class TestImpl implements Simple {
-    public String hello(String greeting) { return "goodbye"; }
-    public int add(int arg1, int arg2) { return arg1 + arg2; }
-    public TestRecord echo(TestRecord record) { return record; }
-    public byte[] echoBytes(byte[] data) { return data; }
-    public void error() throws SimpleException {
-      throw new SimpleException("foo");
-    }
-  }
-
-  protected static Server server;
-  protected static Transceiver client;
-  protected static Simple proxy;
-
-  @Before
-  public void testStartServer() throws Exception {
-    server = new SocketServer(new ReflectResponder(Simple.class, new TestImpl()),
-                              new InetSocketAddress(0));
-    client = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    proxy = (Simple)ReflectRequestor.getClient(Simple.class, client);
-  }
-
-  @Test
-  public void testHello() throws IOException {
-    String response = proxy.hello("bob");
-    assertEquals("goodbye", response);
-  }
-
-  @Test
-  public void testEcho() throws IOException {
-    TestRecord record = new TestRecord();
-    record.name = "foo";
-    TestRecord echoed = proxy.echo(record);
-    assertEquals(record, echoed);
-  }
-
-  @Test
-  public void testAdd() throws IOException {
-    int result = proxy.add(1, 2);
-    assertEquals(3, result);
-  }
-
-  @Test
-  public void testEchoBytes() throws IOException {
-    Random random = new Random();
-    int length = random.nextInt(1024*16);
-    byte[] data = new byte[length];
-    random.nextBytes(data);
-    byte[] echoed = proxy.echoBytes(data);
-    assertArrayEquals(data, echoed);
-  }
-
-  @Test
-  public void testError() throws IOException {
-    SimpleException error = null;
-    try {
-      proxy.error();
-    } catch (SimpleException e) {
-      error = e;
-    }
-    assertNotNull(error);
-    assertEquals("foo", error.getMessage());
-  }
-
-  @After
-  public void testStopServer() throws IOException {
-    client.close();
-    server.close();
-  }
-
-}
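TestRecord above defines name-based equals and hashCode so that a record compares equal to its copy after an RPC round trip. A self-contained sketch of that contract (NamedRecord is an illustrative stand-in, not an Avro class):

```java
class NamedRecord {
    private final String name;

    public NamedRecord(String name) { this.name = name; }

    // equals and hashCode must agree: equal names imply equal hash codes.
    @Override public int hashCode() { return name.hashCode(); }

    @Override public boolean equals(Object o) {
        return o instanceof NamedRecord && name.equals(((NamedRecord) o).name);
    }
}
```

Keeping the two methods derived from the same field is what lets testEcho assert both `assertEquals(record, echoed)` and equality of hash codes on a deserialized copy.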
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolReflectMeta.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolReflectMeta.java
deleted file mode 100644
index 44e24a5..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolReflectMeta.java
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.ipc.SocketServer;
-import org.apache.avro.ipc.SocketTransceiver;
-import org.apache.avro.reflect.ReflectRequestor;
-import org.apache.avro.reflect.ReflectResponder;
-import org.apache.avro.test.Simple;
-import org.junit.Before;
-
-import java.net.InetSocketAddress;
-
-public class TestProtocolReflectMeta extends TestProtocolReflect {
-
-  @Before
-  public void testStartServer() throws Exception {
-    ReflectResponder rresp = new ReflectResponder(Simple.class, new TestImpl());
-    rresp.addRPCPlugin(new RPCMetaTestPlugin("key1"));
-    rresp.addRPCPlugin(new RPCMetaTestPlugin("key2"));
-    server = new SocketServer(rresp, new InetSocketAddress(0));
-    
-    client = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    ReflectRequestor requestor = new ReflectRequestor(Simple.class, client);
-    requestor.addRPCPlugin(new RPCMetaTestPlugin("key1"));
-    requestor.addRPCPlugin(new RPCMetaTestPlugin("key2"));
-    proxy = (Simple)ReflectRequestor.getClient(Simple.class, (ReflectRequestor)requestor);
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolSpecific.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolSpecific.java
deleted file mode 100644
index a253ef5..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolSpecific.java
+++ /dev/null
@@ -1,181 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.Server;
-import org.apache.avro.ipc.SocketServer;
-import org.apache.avro.ipc.SocketTransceiver;
-import org.apache.avro.ipc.Transceiver;
-import org.apache.avro.specific.SpecificRequestor;
-import org.apache.avro.specific.SpecificResponder;
-import org.apache.avro.test.Simple;
-import org.apache.avro.test.Kind;
-import org.apache.avro.test.MD5;
-import org.apache.avro.test.TestError;
-import org.apache.avro.test.TestRecord;
-import org.apache.avro.util.Utf8;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-
-import java.io.File;
-import java.io.FileReader;
-import java.io.FileWriter;
-import java.io.IOException;
-import java.io.LineNumberReader;
-import java.net.InetSocketAddress;
-import java.nio.ByteBuffer;
-import java.util.Random;
-
-
-public class TestProtocolSpecific {
-
-  protected static final File SERVER_PORTS_DIR
-  = new File(System.getProperty("test.dir", "/tmp")+"/server-ports/");
-
-  public static class TestImpl implements Simple {
-    public Utf8 hello(Utf8 greeting) { return new Utf8("goodbye"); }
-    public int add(int arg1, int arg2) { return arg1 + arg2; }
-    public TestRecord echo(TestRecord record) { return record; }
-    public ByteBuffer echoBytes(ByteBuffer data) { return data; }
-    public Void error() throws AvroRemoteException {
-      TestError error = new TestError();
-      error.message = new Utf8("an error");
-      throw error;
-    }
-  }
-
-  protected static Server server;
-  protected static Transceiver client;
-  protected static Simple proxy;
-
-  @Before
-  public void testStartServer() throws Exception {
-    server = new SocketServer(new SpecificResponder(Simple.class, new TestImpl()),
-                              new InetSocketAddress(0));
-    client = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    proxy = (Simple)SpecificRequestor.getClient(Simple.class, client);
-  }
-
-  @Test
-  public void testHello() throws IOException {
-    Utf8 response = proxy.hello(new Utf8("bob"));
-    assertEquals(new Utf8("goodbye"), response);
-  }
-
-  @Test
-  public void testHashCode() throws IOException {
-    TestError error = new TestError();
-    error.hashCode();
-  }
-
-  @Test
-  public void testEcho() throws IOException {
-    TestRecord record = new TestRecord();
-    record.name = new Utf8("foo");
-    record.kind = Kind.BAR;
-    record.hash = new MD5();
-    System.arraycopy(new byte[]{0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5}, 0,
-                     record.hash.bytes(), 0, 16);
-    TestRecord echoed = proxy.echo(record);
-    assertEquals(record, echoed);
-    assertEquals(record.hashCode(), echoed.hashCode());
-  }
-
-  @Test
-  public void testAdd() throws IOException {
-    int result = proxy.add(1, 2);
-    assertEquals(3, result);
-  }
-
-  @Test
-  public void testEchoBytes() throws IOException {
-    Random random = new Random();
-    int length = random.nextInt(1024*16);
-    ByteBuffer data = ByteBuffer.allocate(length);
-    random.nextBytes(data.array());
-    data.flip();
-    ByteBuffer echoed = proxy.echoBytes(data);
-    assertEquals(data, echoed);
-  }
-
-  @Test
-  public void testEmptyEchoBytes() throws IOException {
-    ByteBuffer data = ByteBuffer.allocate(0);
-    ByteBuffer echoed = proxy.echoBytes(data);
-    data.flip();
-    assertEquals(data, echoed);
-  }
-
-  @Test
-  public void testError() throws IOException {
-    TestError error = null;
-    try {
-      proxy.error();
-    } catch (TestError e) {
-      error = e;
-    }
-    assertNotNull(error);
-    assertEquals("an error", error.message.toString());
-  }
-
-  @After
-  public void testStopServer() throws IOException {
-    client.close();
-    server.close();
-  }
-
-  public static class InteropTest {
-
-  @Test
-    public void testClient() throws Exception {
-      for (File f : SERVER_PORTS_DIR.listFiles()) {
-        LineNumberReader reader = new LineNumberReader(new FileReader(f));
-        int port = Integer.parseInt(reader.readLine());
-        System.out.println("Validating java client to "+
-            f.getName()+" - " + port);
-        Transceiver client = new SocketTransceiver(
-            new InetSocketAddress("localhost", port));
-        proxy = (Simple)SpecificRequestor.getClient(Simple.class, client);
-        TestProtocolSpecific proto = new TestProtocolSpecific();
-        proto.testHello();
-        proto.testEcho();
-        proto.testEchoBytes();
-        proto.testError();
-        System.out.println("Done! Validated java client to "+
-            f.getName()+" - " + port);
-      }
-    }
-
-    /**
-     * Starts the RPC server.
-     */
-    public static void main(String[] args) throws Exception {
-      SocketServer server = new SocketServer(
-          new SpecificResponder(Simple.class, new TestImpl()),
-          new InetSocketAddress(0));
-      File portFile = new File(SERVER_PORTS_DIR, "java-port");
-      FileWriter w = new FileWriter(portFile);
-      w.write(Integer.toString(server.getPort()));
-      w.close();
-    }
-  }
-}
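The InteropTest above coordinates servers and clients through small files: each server writes its bound port to a file under SERVER_PORTS_DIR, and clients read the first line back with a LineNumberReader. A self-contained sketch of that exchange (the temp-file path here is illustrative, not the test.dir used above):

```java
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.LineNumberReader;
import java.io.UncheckedIOException;

class PortFile {
    /** Write the server's bound port as the file's only line. */
    public static void write(File f, int port) {
        try (FileWriter w = new FileWriter(f)) {
            w.write(Integer.toString(port));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Read the port back, as the interop client above does. */
    public static int read(File f) {
        try (LineNumberReader reader = new LineNumberReader(new FileReader(f))) {
            return Integer.parseInt(reader.readLine());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Note the sketch closes its streams via try-with-resources; the original `main` above leaves the FileWriter to `w.close()` but never closes the reader side.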
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolSpecificMeta.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolSpecificMeta.java
deleted file mode 100644
index 76d1c81..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestProtocolSpecificMeta.java
+++ /dev/null
@@ -1,46 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import java.net.InetSocketAddress;
-
-import org.apache.avro.ipc.Responder;
-import org.apache.avro.ipc.SocketServer;
-import org.apache.avro.ipc.SocketTransceiver;
-import org.apache.avro.specific.SpecificRequestor;
-import org.apache.avro.specific.SpecificResponder;
-import org.apache.avro.test.Simple;
-import org.junit.Before;
-
-
-public class TestProtocolSpecificMeta extends TestProtocolSpecific {
-  
-  @Before
-  public void testStartServer() throws Exception {
-    Responder responder = new SpecificResponder(Simple.class, new TestImpl());
-    responder.addRPCPlugin(new RPCMetaTestPlugin("key1"));
-    responder.addRPCPlugin(new RPCMetaTestPlugin("key2"));
-    server = new SocketServer(responder, new InetSocketAddress(0));
-    
-    client = new SocketTransceiver(new InetSocketAddress(server.getPort()));
-    SpecificRequestor req = new SpecificRequestor(Simple.class, client);
-    req.addRPCPlugin(new RPCMetaTestPlugin("key1"));
-    req.addRPCPlugin(new RPCMetaTestPlugin("key2"));
-    proxy = (Simple)SpecificRequestor.getClient(Simple.class, (SpecificRequestor)req);
-  }
-}
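The *Meta test variants above attach RPCMetaTestPlugin instances to both responder and requestor so that each call carries per-plugin metadata. A minimal, Avro-free sketch of that decorate-the-call-metadata shape (the Plugin interface and method names here are hypothetical, not the RPCPlugin API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class MetaPlugins {
    /** Hypothetical analogue of addRPCPlugin: each plugin stamps its key into the call metadata. */
    interface Plugin {
        void decorate(Map<String, String> meta);
    }

    static Plugin keyPlugin(final String key) {
        return new Plugin() {
            public void decorate(Map<String, String> meta) { meta.put(key, "present"); }
        };
    }

    /** Run every registered plugin over one call's metadata map, in order. */
    static Map<String, String> run(List<Plugin> plugins) {
        Map<String, String> meta = new HashMap<String, String>();
        for (Plugin p : plugins) p.decorate(meta);
        return meta;
    }
}
```

Registering the same plugin keys on both ends, as the tests do with "key1" and "key2", lets each side verify that the other's metadata survived the round trip.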
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestReflect.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestReflect.java
deleted file mode 100644
index 0833b6e..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestReflect.java
+++ /dev/null
@@ -1,515 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertTrue;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.lang.reflect.Type;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.avro.Schema.Field;
-import org.apache.avro.TestReflect.SampleRecord.AnotherSampleRecord;
-import org.apache.avro.io.DecoderFactory;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.reflect.ReflectData;
-import org.apache.avro.reflect.ReflectDatumReader;
-import org.apache.avro.reflect.ReflectDatumWriter;
-import org.apache.avro.reflect.Stringable;
-import org.apache.avro.reflect.Nullable;
-import org.apache.avro.reflect.Union;
-import org.junit.Test;
-
-public class TestReflect {
-
-  // test primitive type inference
-  @Test public void testVoid() {
-    check(Void.TYPE, "\"null\"");
-    check(Void.class, "\"null\"");
-  }
-
-  @Test public void testBoolean() {
-    check(Boolean.TYPE, "\"boolean\"");
-    check(Boolean.class, "\"boolean\"");
-  }
-
-  @Test public void testInt() {
-    check(Integer.TYPE, "\"int\"");
-    check(Integer.class, "\"int\"");
-  }
-
-  @Test public void testLong() {
-    check(Long.TYPE, "\"long\"");
-    check(Long.class, "\"long\"");
-  }
-
-  @Test public void testFloat() {
-    check(Float.TYPE, "\"float\"");
-    check(Float.class, "\"float\"");
-  }
-
-  @Test public void testDouble() {
-    check(Double.TYPE, "\"double\"");
-    check(Double.class, "\"double\"");
-  }
-
-  @Test public void testString() {
-    check("Foo", "\"string\"");
-  }
-
-  @Test public void testBytes() {
-    check(new byte[0], "\"bytes\"");
-  }
-
-  // test map, array and list type inference
-  public static class R1 {
-    private Map<String,String> mapField = new HashMap<String,String>();
-    private String[] arrayField = new String[] { "foo" };
-    private List<String> listField = new ArrayList<String>();
-
-    {
-      mapField.put("foo", "bar");
-      listField.add("foo");
-    }
-    
-    public boolean equals(Object o) {
-      if (!(o instanceof R1)) return false;
-      R1 that = (R1)o;
-      return mapField.equals(that.mapField)
-        && Arrays.equals(this.arrayField, that.arrayField) 
-        &&  listField.equals(that.listField);
-    }
-  }
-
-  @Test public void testMap() throws Exception {
-    check(R1.class.getDeclaredField("mapField").getGenericType(),
-          "{\"type\":\"map\",\"values\":\"string\"}");
-  }
-
-  @Test public void testArray() throws Exception {
-    check(R1.class.getDeclaredField("arrayField").getGenericType(),
-          "{\"type\":\"array\",\"items\":\"string\"}");
-  }
-  @Test public void testList() throws Exception {
-    check(R1.class.getDeclaredField("listField").getGenericType(),
-          "{\"type\":\"array\",\"items\":\"string\""
-          +",\"java-class\":\"java.util.List\"}");
-  }
-
-  @Test public void testR1() throws Exception {
-    checkReadWrite(new R1());
-  }
-
-  // test record, array and list i/o
-  public static class R2 {
-    private String[] arrayField;
-    private Collection<String> collectionField;
-    
-    public boolean equals(Object o) {
-      if (!(o instanceof R2)) return false;
-      R2 that = (R2)o;
-      return Arrays.equals(this.arrayField, that.arrayField) 
-        &&  collectionField.equals(that.collectionField);
-    }
-  }
-
-  @Test public void testR2() throws Exception {
-    R2 r2 = new R2();
-    r2.arrayField = new String[] {"foo"};
-    r2.collectionField = new ArrayList<String>();
-    r2.collectionField.add("foo");
-    checkReadWrite(r2);
-  }
-
-  // test array i/o of unboxed type
-  public static class R3 {
-    private int[] intArray;
-    
-    public boolean equals(Object o) {
-      if (!(o instanceof R3)) return false;
-      R3 that = (R3)o;
-      return Arrays.equals(this.intArray, that.intArray);
-    }
-  }
-
-  @Test public void testR3() throws Exception {
-    R3 r3 = new R3();
-    r3.intArray = new int[] {1};
-    checkReadWrite(r3);
-  }
-
-  // test inherited fields & short datatype
-  public static class R4 {
-    public short value;
-    public short[] shorts;
-    
-    public boolean equals(Object o) {
-      if (!(o instanceof R4)) return false;
-      R4 that = (R4)o;
-      return this.value == that.value
-        && Arrays.equals(this.shorts, that.shorts);
-    }
-  }
-
-  public static class R5 extends R4 {}
-
-  @Test public void testR5() throws Exception {
-    R5 r5 = new R5();
-    r5.value = 1;
-    r5.shorts = new short[] {3,255,256,Short.MAX_VALUE,Short.MIN_VALUE};
-    checkReadWrite(r5);
-  }
-
-  // test union annotation on a class
-  @Union({R7.class, R8.class})
-  public static class R6 {}
-
-  public static class R7 extends R6 {
-    public int value;
-    public boolean equals(Object o) {
-      if (!(o instanceof R7)) return false;
-      return this.value == ((R7)o).value;
-    }
-  }
-  public static class R8 extends R6 {
-    public float value;
-    public boolean equals(Object o) {
-      if (!(o instanceof R8)) return false;
-      return this.value == ((R8)o).value;
-    }
-  }
-
-  // test arrays of union annotated class
-  public static class R9  {
-    public R6[] r6s;
-    public boolean equals(Object o) {
-      if (!(o instanceof R9)) return false;
-      return Arrays.equals(this.r6s, ((R9)o).r6s);
-    }
-  }
-
-  @Test public void testR6() throws Exception {
-    R7 r7 = new R7();
-    r7.value = 1;
-    checkReadWrite(r7, ReflectData.get().getSchema(R6.class));
-    R8 r8 = new R8();
-    r8.value = 1;
-    checkReadWrite(r8, ReflectData.get().getSchema(R6.class));
-    R9 r9 = new R9();
-    r9.r6s = new R6[] {r7, r8};
-    checkReadWrite(r9, ReflectData.get().getSchema(R9.class));
-  }
-
-  // test union annotation on methods and parameters
-  public static interface P0 {
-    @Union({Void.class,String.class})
-      String foo(@Union({Void.class,String.class}) String s);
-  }
-
-  @Test public void testP0() throws Exception {
-    Protocol p0 = ReflectData.get().getProtocol(P0.class);
-    Protocol.Message message = p0.getMessages().get("foo");
-    // check response schema is union
-    Schema response = message.getResponse();
-    assertEquals(Schema.Type.UNION, response.getType());
-    assertEquals(Schema.Type.NULL, response.getTypes().get(0).getType());
-    assertEquals(Schema.Type.STRING, response.getTypes().get(1).getType());
-    // check request schema is union
-    Schema request = message.getRequest();
-    Field field = request.getField("s");
-    assertNotNull("field 's' should not be null", field);
-    Schema param = field.schema();
-    assertEquals(Schema.Type.UNION, param.getType());
-    assertEquals(Schema.Type.NULL, param.getTypes().get(0).getType());
-    assertEquals(Schema.Type.STRING, param.getTypes().get(1).getType());
-    // check union erasure
-    assertEquals(String.class, ReflectData.get().getClass(response));
-    assertEquals(String.class, ReflectData.get().getClass(param));
-  }
-
-  // test Stringable annotation
-  @Stringable public static class R10 {
-    private String text;
-    public R10(String text) { this.text = text; }
-    public String toString() { return text; }
-    public boolean equals(Object o) {
-      if (!(o instanceof R10)) return false;
-      return this.text.equals(((R10)o).text);
-    }
-  }
-  
-  @Test public void testR10() throws Exception {
-    Schema r10Schema = ReflectData.get().getSchema(R10.class);
-    assertEquals(Schema.Type.STRING, r10Schema.getType());
-    assertEquals(R10.class.getName(), r10Schema.getProp("java-class"));
-    checkReadWrite(new R10("foo"), r10Schema);
-  }
-
-  // test Nullable annotation on field
-  public static class R11 {
-    @Nullable private String text;
-    public boolean equals(Object o) {
-      if (!(o instanceof R11)) return false;
-      R11 that = (R11)o;
-      if (this.text == null) return that.text == null;
-      return this.text.equals(that.text);
-    }
-  }
-  
-  @Test public void testR11() throws Exception {
-    Schema r11Record = ReflectData.get().getSchema(R11.class);
-    assertEquals(Schema.Type.RECORD, r11Record.getType());
-    Schema r11Field = r11Record.getField("text").schema();
-    assertEquals(Schema.Type.UNION, r11Field.getType());
-    assertEquals(Schema.Type.NULL, r11Field.getTypes().get(0).getType());
-    Schema r11String = r11Field.getTypes().get(1);
-    assertEquals(Schema.Type.STRING, r11String.getType());
-    R11 r11 = new R11();
-    checkReadWrite(r11, r11Record);
-    r11.text = "foo";
-    checkReadWrite(r11, r11Record);
-  }
-
-  // test nullable annotation on methods and parameters
-  public static interface P1 {
-    @Nullable String foo(@Nullable String s);
-  }
-
-  @Test public void testP1() throws Exception {
-    Protocol p1 = ReflectData.get().getProtocol(P1.class);
-    Protocol.Message message = p1.getMessages().get("foo");
-    // check response schema is union
-    Schema response = message.getResponse();
-    assertEquals(Schema.Type.UNION, response.getType());
-    assertEquals(Schema.Type.NULL, response.getTypes().get(0).getType());
-    assertEquals(Schema.Type.STRING, response.getTypes().get(1).getType());
-    // check request schema is union
-    Schema request = message.getRequest();
-    Field field = request.getField("s");
-    assertNotNull("field 's' should not be null", field);
-    Schema param = field.schema();
-    assertEquals(Schema.Type.UNION, param.getType());
-    assertEquals(Schema.Type.NULL, param.getTypes().get(0).getType());
-    assertEquals(Schema.Type.STRING, param.getTypes().get(1).getType());
-    // check union erasure
-    assertEquals(String.class, ReflectData.get().getClass(response));
-    assertEquals(String.class, ReflectData.get().getClass(param));
-  }
-
-  // test error
-  public static class E1 extends Exception {}
-  public static interface P2 {
-    void error() throws E1;
-  }
-
-  @Test public void testP2() throws Exception {
-    Schema e1 = ReflectData.get().getSchema(E1.class);
-    assertEquals(Schema.Type.RECORD, e1.getType());
-    assertTrue(e1.isError());
-    Field message = e1.getField("detailMessage");
-    assertNotNull("field 'detailMessage' should not be null", message);
-    Schema messageSchema = message.schema();
-    assertEquals(Schema.Type.UNION, messageSchema.getType());
-    assertEquals(Schema.Type.NULL, messageSchema.getTypes().get(0).getType());
-    assertEquals(Schema.Type.STRING, messageSchema.getTypes().get(1).getType());
-
-    Protocol p2 = ReflectData.get().getProtocol(P2.class);
-    Protocol.Message m = p2.getMessages().get("error");
-    // check error schema is union
-    Schema response = m.getErrors();
-    assertEquals(Schema.Type.UNION, response.getType());
-    assertEquals(Schema.Type.STRING, response.getTypes().get(0).getType());
-    assertEquals(e1, response.getTypes().get(1));
-  }
-
-  void checkReadWrite(Object object) throws Exception {
-    checkReadWrite(object, ReflectData.get().getSchema(object.getClass()));
-  }
-  void checkReadWrite(Object object, Schema s) throws Exception {
-    ReflectDatumWriter<Object> writer = new ReflectDatumWriter<Object>(s);
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    writer.write(object, new BinaryEncoder(out));
-    ReflectDatumReader<Object> reader = new ReflectDatumReader<Object>(s);
-    Object after =
-      reader.read(null, DecoderFactory.defaultFactory().createBinaryDecoder(
-          out.toByteArray(), null));
-    assertEquals(object, after);
-  }
-
-  public static enum E { A, B };
-  @Test public void testEnum() throws Exception {
-    check(E.class, "{\"type\":\"enum\",\"name\":\"E\",\"namespace\":"
-          +"\"org.apache.avro.TestReflect$\",\"symbols\":[\"A\",\"B\"]}");
-  }
-
-  public static class R { int a; long b; }
-  @Test public void testRecord() throws Exception {
-    check(R.class, "{\"type\":\"record\",\"name\":\"R\",\"namespace\":"
-          +"\"org.apache.avro.TestReflect$\",\"fields\":["
-          +"{\"name\":\"a\",\"type\":\"int\"},"
-          +"{\"name\":\"b\",\"type\":\"long\"}]}");
-  }
-
-  private void check(Object o, String schemaJson) {
-    check(o.getClass(), schemaJson);
-  }
-
-  private void check(Type type, String schemaJson) {
-    assertEquals(schemaJson, ReflectData.get().getSchema(type).toString());
-  }
-
-  @Test
-  public void testRecordIO() throws IOException {
-    Schema schm = ReflectData.get().getSchema(SampleRecord.class);
-    ReflectDatumWriter<SampleRecord> writer = 
-      new ReflectDatumWriter<SampleRecord>(schm);
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    SampleRecord record = new SampleRecord();
-    record.x = 5;
-    record.y = 10;
-    writer.write(record, new BinaryEncoder(out));
-    ReflectDatumReader<SampleRecord> reader = 
-      new ReflectDatumReader<SampleRecord>(schm);
-    SampleRecord decoded =
-      reader.read(null, DecoderFactory.defaultFactory().createBinaryDecoder(
-          out.toByteArray(), null));
-    assertEquals(record, decoded);
-  }
-
-  @Test
-  public void testRecordWithNullIO() throws IOException {
-    ReflectData reflectData = ReflectData.AllowNull.get();
-    Schema schm = reflectData.getSchema(AnotherSampleRecord.class);
-    ReflectDatumWriter<AnotherSampleRecord> writer = 
-      new ReflectDatumWriter<AnotherSampleRecord>(schm);
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    // keep record.a null and see if that works
-    BinaryEncoder e = new BinaryEncoder(out);
-    AnotherSampleRecord a = new AnotherSampleRecord();
-    writer.write(a, e);
-    AnotherSampleRecord b = new AnotherSampleRecord(10);
-    writer.write(b, e);
-    e.flush();
-    ReflectDatumReader<AnotherSampleRecord> reader = 
-      new ReflectDatumReader<AnotherSampleRecord>(schm);
-    ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray());
-    Decoder d = DecoderFactory.defaultFactory().createBinaryDecoder(in, null);
-    AnotherSampleRecord decoded = reader.read(null, d);
-    assertEquals(a, decoded);
-    decoded = reader.read(null, d);
-    assertEquals(b, decoded);
-  }
-
-  public static class SampleRecord {
-    public int x = 1;
-    private int y = 2;
-
-    public int hashCode() {
-      return x + y;
-    }
-
-    public boolean equals(Object obj) {
-      if (this == obj)
-        return true;
-      if (obj == null)
-        return false;
-      if (getClass() != obj.getClass())
-        return false;
-      final SampleRecord other = (SampleRecord)obj;
-      if (x != other.x)
-        return false;
-      if (y != other.y)
-        return false;
-      return true;
-    }
-    
-    public static class AnotherSampleRecord {
-      private Integer a = null;
-      private SampleRecord s = null;
-
-      public AnotherSampleRecord() {
-      }
-
-      AnotherSampleRecord(Integer a) {
-        this.a = a;
-        this.s = new SampleRecord();
-      }
-
-      public int hashCode() {
-        int hash = (a != null ? a.hashCode() : 0);
-        hash += (s != null ? s.hashCode() : 0);
-        return hash;
-      }
-
-      public boolean equals(Object other) {
-        if (other instanceof AnotherSampleRecord) {
-          AnotherSampleRecord o = (AnotherSampleRecord) other;
-          if ( (this.a == null && o.a != null) ||
-               (this.a != null && !this.a.equals(o.a)) ||
-               (this.s == null && o.s != null) ||
-               (this.s != null && !this.s.equals(o.s)) ) {
-            return false;
-          }
-          return true;
-        } else {
-          return false;
-        }
-      }
-    }
-  }
-
-  public static class X { int i; }
-  public static class B1 { X x; }
-  public static class B2 { X x; }
-  public static class A { B1 b1; B2 b2; }
-  public static interface C { void foo(A a); }
-
-  @Test
-  public void testForwardReference() {
-    ReflectData data = ReflectData.get();
-    Protocol reflected = data.getProtocol(C.class);
-    Protocol reparsed = Protocol.parse(reflected.toString());
-    assertEquals(reflected, reparsed);
-    assert(reparsed.getTypes().contains(data.getSchema(A.class)));
-    assert(reparsed.getTypes().contains(data.getSchema(B1.class)));
-    assert(reparsed.getTypes().contains(data.getSchema(B2.class)));
-    assert(reparsed.getTypes().contains(data.getSchema(X.class)));
-  }
-
-  public static interface P3 {
-    void m1();
-    void m1(int x);
-  }
-
-  @Test(expected=AvroTypeException.class)
-  public void testOverloadedMethod() { 
-    ReflectData.get().getProtocol(P3.class);
-  }
-
-
-}
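Aside (not part of the recorded diff): the deleted `testMap`/`testList` cases above exercise schema inference that is ultimately driven by plain JDK generic-type introspection. A minimal stdlib-only sketch of that mechanism, with no Avro classes involved — the class and field names here are illustrative, not from the Avro sources:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Shows the reflection calls that schema inference of the kind tested
// above can build on: a field's generic type exposes its type arguments.
public class GenericTypeDemo {
  private Map<String, String> mapField = new HashMap<String, String>();
  private List<String> listField;

  public static void main(String[] args) throws Exception {
    // A Map<String,String> field reports a ParameterizedType; its second
    // type argument (the value type) is what a map schema's "values" uses.
    Type mapType =
        GenericTypeDemo.class.getDeclaredField("mapField").getGenericType();
    ParameterizedType pt = (ParameterizedType) mapType;
    System.out.println(pt.getActualTypeArguments()[1]);

    // A List<String> field likewise carries its element type at runtime.
    Type listType =
        GenericTypeDemo.class.getDeclaredField("listField").getGenericType();
    System.out.println(listType);
  }
}
```

Running it prints the value type of the map field and the full generic type of the list field, the same information `R1.class.getDeclaredField(...).getGenericType()` hands to the checks in `testMap` and `testList`.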
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestSchema.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/TestSchema.java
deleted file mode 100644
index ffc648c..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/TestSchema.java
+++ /dev/null
@@ -1,616 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro;
-
-import org.junit.Test;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.fail;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-
-import org.apache.avro.Schema.Type;
-import org.apache.avro.generic.GenericArray;
-import org.apache.avro.generic.GenericData;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.io.DecoderFactory;
-import org.apache.avro.io.DatumReader;
-import org.apache.avro.io.DatumWriter;
-import org.apache.avro.io.Decoder;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.JsonDecoder;
-import org.apache.avro.io.JsonEncoder;
-import org.apache.avro.specific.TestSpecificCompiler;
-import org.apache.avro.util.Utf8;
-
-public class TestSchema {
-
-  public static final String BASIC_ENUM_SCHEMA = "{\"type\":\"enum\", \"name\":\"Test\","
-            +"\"symbols\": [\"A\", \"B\"]}";
-
-  public static final String SCHEMA_WITH_DOC_TAGS = "{\n"
-      + "  \"type\": \"record\",\n"
-      + "  \"name\": \"outer_record\",\n"
-      + "  \"doc\": \"This is not a world record.\",\n"
-      + "  \"fields\": [\n"
-      + "    { \"type\": { \"type\": \"fixed\", \"doc\": \"Very Inner Fixed\", "
-      + "                  \"name\": \"very_inner_fixed\", \"size\": 1 },\n"
-      + "      \"doc\": \"Inner Fixed\", \"name\": \"inner_fixed\" },\n"
-      + "    { \"type\": \"string\",\n"
-      + "      \"name\": \"inner_string\",\n"
-      + "      \"doc\": \"Inner String\" },\n"
-      + "    { \"type\": { \"type\": \"enum\", \"doc\": \"Very Inner Enum\", \n"
-      + "                  \"name\": \"very_inner_enum\", \n"
-      + "                  \"symbols\": [ \"A\", \"B\", \"C\" ] },\n"
-      + "      \"doc\": \"Inner Enum\", \"name\": \"inner_enum\" },\n"
-      + "    { \"type\": [\"string\", \"int\"], \"doc\": \"Inner Union\", \n"
-      + "      \"name\": \"inner_union\" }\n" + "  ]\n" + "}\n";
-
-  private static final int COUNT =
-    Integer.parseInt(System.getProperty("test.count", "10"));
-
-  @Test
-  public void testNull() throws Exception {
-    assertEquals(Schema.create(Type.NULL), Schema.parse("\"null\""));
-    assertEquals(Schema.create(Type.NULL), Schema.parse("{\"type\":\"null\"}"));
-    check("\"null\"", "null", null);
-  }
-
-  @Test
-  public void testBoolean() throws Exception {
-    assertEquals(Schema.create(Type.BOOLEAN), Schema.parse("\"boolean\""));
-    assertEquals(Schema.create(Type.BOOLEAN),
-                 Schema.parse("{\"type\":\"boolean\"}"));
-    check("\"boolean\"", "true", Boolean.TRUE);
-  }
-
-  @Test
-  public void testString() throws Exception {
-    assertEquals(Schema.create(Type.STRING), Schema.parse("\"string\""));
-    assertEquals(Schema.create(Type.STRING),
-                 Schema.parse("{\"type\":\"string\"}"));
-    check("\"string\"", "\"foo\"", new Utf8("foo"));
-  }
-
-  @Test
-  public void testBytes() throws Exception {
-    assertEquals(Schema.create(Type.BYTES), Schema.parse("\"bytes\""));
-    assertEquals(Schema.create(Type.BYTES),
-                 Schema.parse("{\"type\":\"bytes\"}"));
-    check("\"bytes\"", "\"\\u0000ABC\\u00FF\"",
-          ByteBuffer.wrap(new byte[]{0,65,66,67,-1}));
-  }
-
-  @Test
-  public void testInt() throws Exception {
-    assertEquals(Schema.create(Type.INT), Schema.parse("\"int\""));
-    assertEquals(Schema.create(Type.INT), Schema.parse("{\"type\":\"int\"}"));
-    check("\"int\"", "9", new Integer(9));
-  }
-
-  @Test
-  public void testLong() throws Exception {
-    assertEquals(Schema.create(Type.LONG), Schema.parse("\"long\""));
-    assertEquals(Schema.create(Type.LONG), Schema.parse("{\"type\":\"long\"}"));
-    check("\"long\"", "11", new Long(11));
-  }
-
-  @Test
-  public void testFloat() throws Exception {
-    assertEquals(Schema.create(Type.FLOAT), Schema.parse("\"float\""));
-    assertEquals(Schema.create(Type.FLOAT),
-                 Schema.parse("{\"type\":\"float\"}"));
-    check("\"float\"", "1.1", new Float(1.1));
-  }
-
-  @Test
-  public void testDouble() throws Exception {
-    assertEquals(Schema.create(Type.DOUBLE), Schema.parse("\"double\""));
-    assertEquals(Schema.create(Type.DOUBLE),
-                 Schema.parse("{\"type\":\"double\"}"));
-    check("\"double\"", "1.2", new Double(1.2));
-  }
-
-  @Test
-  public void testArray() throws Exception {
-    String json = "{\"type\":\"array\", \"items\": \"long\"}";
-    Schema schema = Schema.parse(json);
-    GenericArray<Long> array = new GenericData.Array<Long>(1, schema);
-    array.add(1L);
-    check(json, "[1]", array);
-    checkParseError("{\"type\":\"array\"}");      // items required
-  }
-
-  @Test
-  public void testMap() throws Exception {
-    HashMap<Utf8,Long> map = new HashMap<Utf8,Long>();
-    map.put(new Utf8("a"), 1L);
-    check("{\"type\":\"map\", \"values\":\"long\"}", "{\"a\":1}", map);
-    checkParseError("{\"type\":\"map\"}");        // values required
-  }
-
-  @Test
-  public void testRecord() throws Exception {
-    String recordJson = "{\"type\":\"record\", \"name\":\"Test\", \"fields\":"
-      +"[{\"name\":\"f\", \"type\":\"long\"}]}";
-    Schema schema = Schema.parse(recordJson);
-    GenericData.Record record = new GenericData.Record(schema);
-    record.put("f", 11L);
-    check(recordJson, "{\"f\":11}", record, false);
-    checkParseError("{\"type\":\"record\"}");
-    checkParseError("{\"type\":\"record\",\"name\":\"X\"}");
-    checkParseError("{\"type\":\"record\",\"name\":\"X\",\"fields\":\"Y\"}");
-    checkParseError("{\"type\":\"record\",\"name\":\"X\",\"fields\":"
-                    +"[{\"name\":\"f\"}]}");       // no type
-    checkParseError("{\"type\":\"record\",\"name\":\"X\",\"fields\":"
-                    +"[{\"type\":\"long\"}]}");    // no name
-  }
-
-  @Test
-  public void testEnum() throws Exception {
-    check(BASIC_ENUM_SCHEMA, "\"B\"", "B", false);
-    checkParseError("{\"type\":\"enum\"}");        // symbols required
-    checkParseError("{\"type\":\"enum\",\"symbols\": [\"X\",\"X\"]}");
-  }
-
-  @Test
-  public void testFixed() throws Exception {
-    check("{\"type\": \"fixed\", \"name\":\"Test\", \"size\": 1}", "\"a\"",
-          new GenericData.Fixed(new byte[]{(byte)'a'}), false);
-    checkParseError("{\"type\":\"fixed\"}");        // size required
-  }
-
-  @Test
-  public void testRecursive() throws Exception {
-    check("{\"type\": \"record\", \"name\": \"Node\", \"fields\": ["
-          +"{\"name\":\"label\", \"type\":\"string\"},"
-          +"{\"name\":\"children\", \"type\":"
-          +"{\"type\": \"array\", \"items\": \"Node\" }}]}",
-          false);
-  }
-
-  @Test
-  public void testRecursiveEquals() throws Exception {
-    String jsonSchema = "{\"type\":\"record\", \"name\":\"List\", \"fields\": ["
-      +"{\"name\":\"next\", \"type\":\"List\"}]}";
-    Schema s1 = Schema.parse(jsonSchema);
-    Schema s2 = Schema.parse(jsonSchema);
-    assertEquals(s1, s2);
-    s1.hashCode();                                // test no stackoverflow
-  }
-
-  @Test
-  public void testLisp() throws Exception {
-    check("{\"type\": \"record\", \"name\": \"Lisp\", \"fields\": ["
-          +"{\"name\":\"value\", \"type\":[\"null\", \"string\","
-          +"{\"type\": \"record\", \"name\": \"Cons\", \"fields\": ["
-          +"{\"name\":\"car\", \"type\":\"Lisp\"},"
-          +"{\"name\":\"cdr\", \"type\":\"Lisp\"}]}]}]}",
-          false);
-  }
-
-  @Test
-  public void testUnion() throws Exception {
-    check("[\"string\", \"long\"]", false);
-    checkDefault("[\"double\", \"long\"]", "1.1", new Double(1.1));
-
-    // check union json
-    String record = "{\"type\":\"record\",\"name\":\"Foo\",\"fields\":[]}";
-    String fixed = "{\"type\":\"fixed\",\"name\":\"Bar\",\"size\": 1}";
-    String enu = "{\"type\":\"enum\",\"name\":\"Baz\",\"symbols\": [\"X\"]}";
-    Schema union = Schema.parse("[\"null\",\"string\","
-                                +record+","+ enu+","+fixed+"]");
-    checkJson(union, null, "null");
-    checkJson(union, new Utf8("foo"), "{\"string\":\"foo\"}");
-    checkJson(union,
-              new GenericData.Record(Schema.parse(record)),
-              "{\"Foo\":{}}");
-    checkJson(union,
-              new GenericData.Fixed(new byte[]{(byte)'a'}),
-              "{\"Bar\":\"a\"}");
-    checkJson(union, "X", "{\"Baz\":\"X\"}");
-  }
-
-  @Test
-  public void testComplexUnions() throws Exception {
-    // one of each unnamed type and two of named types
-    String partial = "[\"int\", \"long\", \"float\", \"double\", \"boolean\", \"bytes\"," +
-    " \"string\", {\"type\":\"array\", \"items\": \"long\"}," +
-    " {\"type\":\"map\", \"values\":\"long\"}";
-    String namedTypes = ", {\"type\":\"record\",\"name\":\"Foo\",\"fields\":[]}," +
-    " {\"type\":\"fixed\",\"name\":\"Bar\",\"size\": 1}," +
-    " {\"type\":\"enum\",\"name\":\"Baz\",\"symbols\": [\"X\"]}";
-    
-    String namedTypes2 = ", {\"type\":\"record\",\"name\":\"Foo2\",\"fields\":[]}," +
-    " {\"type\":\"fixed\",\"name\":\"Bar2\",\"size\": 1}," +
-    " {\"type\":\"enum\",\"name\":\"Baz2\",\"symbols\": [\"X\"]}";
-    
-    check(partial + namedTypes + "]", false);
-    check(partial + namedTypes + namedTypes2 + "]", false); 
-    checkParseError(partial + namedTypes + namedTypes + "]");
-    
-    // fail with two branches of the same unnamed type
-    checkUnionError(new Schema[] {Schema.create(Type.INT), Schema.create(Type.INT)});
-    checkUnionError(new Schema[] {Schema.create(Type.LONG), Schema.create(Type.LONG)});
-    checkUnionError(new Schema[] {Schema.create(Type.FLOAT), Schema.create(Type.FLOAT)});
-    checkUnionError(new Schema[] {Schema.create(Type.DOUBLE), Schema.create(Type.DOUBLE)});
-    checkUnionError(new Schema[] {Schema.create(Type.BOOLEAN), Schema.create(Type.BOOLEAN)});
-    checkUnionError(new Schema[] {Schema.create(Type.BYTES), Schema.create(Type.BYTES)});
-    checkUnionError(new Schema[] {Schema.create(Type.STRING), Schema.create(Type.STRING)});
-    checkUnionError(new Schema[] {Schema.createArray(Schema.create(Type.INT)), 
-        Schema.createArray(Schema.create(Type.INT))});
-    checkUnionError(new Schema[] {Schema.createMap(Schema.create(Type.INT)), 
-        Schema.createMap(Schema.create(Type.INT))});
-    
-    List<String> symbols = new ArrayList<String>();
-    symbols.add("NOTHING");
-    
-    // succeed with two branches of the same named type, if different names
-    buildUnion(new Schema[] {Schema.createRecord("Foo", null, "org.test", false),
-        Schema.createRecord("Foo2", null, "org.test", false)});
-    buildUnion(new Schema[] {Schema.createEnum("Bar", null, "org.test", symbols),
-        Schema.createEnum("Bar2", null, "org.test", symbols)});
-    buildUnion(new Schema[] {Schema.createFixed("Baz", null, "org.test", 2),
-        Schema.createFixed("Baz2", null, "org.test", 1)});
-
-    // fail with two branches of the same named type, but same names
-    checkUnionError(new Schema[] {Schema.createRecord("Foo", null, "org.test", false),
-        Schema.createRecord("Foo", null, "org.test", false)});
-    checkUnionError(new Schema[] {Schema.createEnum("Bar", null, "org.test", symbols),
-        Schema.createEnum("Bar", null, "org.test", symbols)});
-    checkUnionError(new Schema[] {Schema.createFixed("Baz", null, "org.test", 2),
-        Schema.createFixed("Baz", null, "org.test", 1)});
-    
-    Schema union = buildUnion(new Schema[] {Schema.create(Type.INT)});
-    // fail if creating a union of a union
-    checkUnionError(new Schema[] {union});
-  }
-  
-  @Test
-  public void testComplexProp() throws Exception {
-    String json = "{\"type\":\"null\", \"foo\": [0]}";
-    Schema s = Schema.parse(json);
-    assertEquals(null, s.getProp("foo"));
-  }
-  
-  @Test
-  public void testParseInputStream() throws IOException {
-    Schema s = Schema.parse(
-        new ByteArrayInputStream("\"boolean\"".getBytes("UTF-8")));
-    assertEquals(Schema.parse("\"boolean\""), s);
-  }
-
-  @Test
-  public void testNamespaceScope() throws Exception {
-    String z = "{\"type\":\"record\",\"name\":\"Z\",\"fields\":[]}";
-    String y = "{\"type\":\"record\",\"name\":\"q.Y\",\"fields\":["
-      +"{\"name\":\"f\",\"type\":"+z+"}]}";
-    String x = "{\"type\":\"record\",\"name\":\"p.X\",\"fields\":["
-      +"{\"name\":\"f\",\"type\":"+y+"},"
-      +"{\"name\":\"g\",\"type\":"+z+"}"
-      +"]}";
-    Schema xs = Schema.parse(x);
-    Schema ys = xs.getField("f").schema();
-    assertEquals("p.Z", xs.getField("g").schema().getFullName());
-    assertEquals("q.Z", ys.getField("f").schema().getFullName());
-  }
-
-  private static void checkParseError(String json) {
-    try {
-      Schema.parse(json);
-    } catch (SchemaParseException e) {
-      return;
-    }
-    fail("Should not have parsed: "+json);
-  }
-
-  private static void checkUnionError(Schema[] branches) {
-    List<Schema> branchList = Arrays.asList(branches);
-    try {
-      Schema.createUnion(branchList);
-      fail("Union should not have constructed from: " + branchList);
-    } catch (AvroRuntimeException are) {
-      return;
-    }
-  }
-
-  private static Schema buildUnion(Schema[] branches) {
-    List<Schema> branchList = Arrays.asList(branches);
-    return Schema.createUnion(branchList);
-  }
-
-  /**
-   * Makes sure that "doc" tags are transcribed in the schemas.
-   * Note that there are docs both for fields and for the records
-   * themselves.
-   */
-  @Test
-  public void testDocs() {
-    Schema schema = Schema.parse(SCHEMA_WITH_DOC_TAGS);
-    assertEquals("This is not a world record.", schema.getDoc());
-    assertEquals("Inner Fixed", schema.getField("inner_fixed").doc());
-    assertEquals("Very Inner Fixed", schema.getField("inner_fixed").schema().getDoc());
-    assertEquals("Inner String", schema.getField("inner_string").doc());
-    assertEquals("Inner Enum", schema.getField("inner_enum").doc());
-    assertEquals("Very Inner Enum", schema.getField("inner_enum").schema().getDoc());
-    assertEquals("Inner Union", schema.getField("inner_union").doc());
-  }
-
-  private static void check(String schemaJson, String defaultJson,
-                            Object defaultValue) throws Exception {
-    check(schemaJson, defaultJson, defaultValue, true);
-  }
-  private static void check(String schemaJson, String defaultJson,
-                            Object defaultValue, boolean induce)
-    throws Exception {
-    check(schemaJson, induce);
-    checkDefault(schemaJson, defaultJson, defaultValue);
-  }
-
-  private static void check(String jsonSchema, boolean induce)
-    throws Exception {
-    Schema schema = Schema.parse(jsonSchema);
-    checkProp(schema);
-    for (Object datum : new RandomData(schema, COUNT)) {
-
-      if (induce) {
-        Schema induced = GenericData.get().induce(datum);
-        assertEquals("Induced schema does not match.", schema, induced);
-      }
-        
-      assertTrue("Datum does not validate against schema "+datum,
-                 GenericData.get().validate(schema, datum));
-
-      checkBinary(schema, datum,
-                  new GenericDatumWriter<Object>(),
-                  new GenericDatumReader<Object>());
-      checkJson(schema, datum,
-                  new GenericDatumWriter<Object>(),
-                  new GenericDatumReader<Object>());
-
-      // Check that we can generate the code for every schema we see.
-      TestSpecificCompiler.assertCompiles(schema, false);
-    }
-  }
-
-  private static void checkProp(Schema s0) throws Exception {
-    if(s0.getType().equals(Schema.Type.UNION)) return; // unions have no props
-    assertEquals(null, s0.getProp("foo"));
-    Schema s1 = Schema.parse(s0.toString());
-    s1.addProp("foo", "bar");
-    assertEquals("bar", s1.getProp("foo"));
-    assertFalse(s0.equals(s1));
-    Schema s2 = Schema.parse(s1.toString());
-    assertEquals("bar", s2.getProp("foo"));
-    assertEquals(s1, s2);
-    assertFalse(s0.equals(s2));
-  }
-
-  private static void checkBinary(Schema schema, Object datum,
-                                  DatumWriter<Object> writer,
-                                  DatumReader<Object> reader)
-    throws IOException {
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    writer.setSchema(schema);
-    writer.write(datum, new BinaryEncoder(out));
-    byte[] data = out.toByteArray();
-
-    reader.setSchema(schema);
-        
-    Object decoded =
-      reader.read(null, DecoderFactory.defaultFactory().createBinaryDecoder(
-          data, null));
-      
-    assertEquals("Decoded data does not match.", datum, decoded);
-  }
-
-  private static void checkJson(Schema schema, Object datum,
-                                DatumWriter<Object> writer,
-                                DatumReader<Object> reader)
-    throws IOException {
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    Encoder encoder = new JsonEncoder(schema, out);
-    writer.setSchema(schema);
-    writer.write(datum, encoder);
-    writer.write(datum, encoder);
-    encoder.flush();
-    byte[] data = out.toByteArray();
-
-    reader.setSchema(schema);
-    Decoder decoder = new JsonDecoder(schema, new ByteArrayInputStream(data));
-    Object decoded = reader.read(null, decoder);
-    assertEquals("Decoded data does not match.", datum, decoded);
-
-    decoded = reader.read(decoded, decoder);
-    assertEquals("Decoded data does not match.", datum, decoded);
-  }
-
-  private static void checkJson(Schema schema, Object datum,
-                                String json) throws Exception {
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    Encoder encoder = new JsonEncoder(schema, out);
-    DatumWriter<Object> writer = new GenericDatumWriter<Object>();
-    writer.setSchema(schema);
-    writer.write(datum, encoder);
-    encoder.flush();
-    byte[] data = out.toByteArray();
-
-    String encoded = new String(data, "UTF-8");
-    assertEquals("Encoded data does not match.", json, encoded);
-
-    DatumReader<Object> reader = new GenericDatumReader<Object>();
-    reader.setSchema(schema);
-    Object decoded =
-      reader.read(null, new JsonDecoder(schema,new ByteArrayInputStream(data)));
-      
-    assertEquals("Decoded data does not match.", datum, decoded);
-  }
-
-  private static final Schema ACTUAL =            // an empty record schema
-    Schema.parse("{\"type\":\"record\", \"name\":\"Foo\", \"fields\":[]}");
-
-  @SuppressWarnings(value="unchecked")
-  private static void checkDefault(String schemaJson, String defaultJson,
-                                   Object defaultValue) throws Exception {
-    String recordJson =
-      "{\"type\":\"record\", \"name\":\"Foo\", \"fields\":[{\"name\":\"f\", "
-    +"\"type\":"+schemaJson+", "
-    +"\"default\":"+defaultJson+"}]}";
-    Schema expected = Schema.parse(recordJson);
-    DatumReader in = new GenericDatumReader(ACTUAL, expected);
-    GenericData.Record record = (GenericData.Record)
-      in.read(null, DecoderFactory.defaultFactory().createBinaryDecoder(
-          new byte[0], null));
-    assertEquals("Wrong default.", defaultValue, record.get("f"));
-    assertEquals("Wrong toString", expected, Schema.parse(expected.toString()));
-  }
-
-  @Test(expected=AvroTypeException.class)
-  public void testNoDefaultField() throws Exception {
-    Schema expected =
-      Schema.parse("{\"type\":\"record\", \"name\":\"Foo\", \"fields\":"+
-                   "[{\"name\":\"f\", \"type\": \"string\"}]}");
-    DatumReader<Object> in = new GenericDatumReader<Object>(ACTUAL, expected);
-    in.read(null, DecoderFactory.defaultFactory().createBinaryDecoder(
-        new ByteArrayInputStream(new byte[0]), null));
-  }
-
-  @Test
-  public void testEnumMismatch() throws Exception {
-    Schema actual = Schema.parse
-      ("{\"type\":\"enum\",\"name\":\"E\",\"symbols\":[\"X\",\"Y\"]}");
-    Schema expected = Schema.parse
-      ("{\"type\":\"enum\",\"name\":\"E\",\"symbols\":[\"Y\",\"Z\"]}");
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    DatumWriter<Object> writer = new GenericDatumWriter<Object>(actual);
-    Encoder encoder = new BinaryEncoder(out);
-    writer.write("Y", encoder);
-    writer.write("X", encoder);
-    byte[] data = out.toByteArray();
-    Decoder decoder = DecoderFactory.defaultFactory().createBinaryDecoder(
-        data, null);
-    DatumReader<String> in = new GenericDatumReader<String>(actual, expected);
-    assertEquals("Wrong value", "Y", in.read(null, decoder));
-    try {
-      in.read(null, decoder);
-      fail("Should have thrown exception.");
-    } catch (AvroTypeException e) {
-      // expected
-    }
-  }
-
-  @Test(expected=AvroTypeException.class)
-  public void testRecordWithPrimitiveName() {
-    Schema.parse("{\"type\":\"record\", \"name\":\"string\", \"fields\": []}");
-  }
-  
-  @Test(expected=AvroTypeException.class)
-  public void testEnumWithPrimitiveName() {
-    Schema.parse("{\"type\":\"enum\", \"name\":\"null\", \"symbols\": [\"A\"]}");
-  }
-  
-  private static Schema enumSchema() {
-    return Schema.parse("{ \"type\": \"enum\", \"name\": \"e\", "
-        + "\"symbols\": [\"a\", \"b\"]}");
-  }
-
-  @Test(expected=AvroRuntimeException.class)
-  public void testImmutability1() {
-    Schema s = enumSchema();
-    s.addProp("p1", "1");
-    s.addProp("p1", "2");
-  }
-  
-  @Test(expected=AvroRuntimeException.class)
-  public void testImmutability2() {
-    Schema s = enumSchema();
-    s.addProp("p1", null);
-  }
-
-  private static List<String> lockedArrayList() {
-    return new Schema.LockableArrayList<String>(Arrays.asList(new String[] {
-        "a", "b", "c" })).lock();
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList1() {
-    lockedArrayList().add("p");
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList2() {
-    lockedArrayList().remove("a");
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList3() {
-    lockedArrayList().addAll(Arrays.asList(new String[] { "p" }));
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList4() {
-    lockedArrayList().addAll(0,
-        Arrays.asList(new String[] { "p" }));
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList5() {
-    lockedArrayList().
-      removeAll(Arrays.asList(new String[] { "a" }));
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList6() {
-    lockedArrayList().
-      retainAll(Arrays.asList(new String[] { "a" }));
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList7() {
-    lockedArrayList().clear();
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList8() {
-    lockedArrayList().iterator().remove();
-  }
-
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList9() {
-    Iterator<String> it = lockedArrayList().iterator();
-    it.next();
-    it.remove();
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testLockedArrayList10() {
-    lockedArrayList().remove(1);
-  }
-}
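The `testLockedArrayList*` cases above exercise `Schema.LockableArrayList`, a list that rejects every mutation once `lock()` has been called. A minimal sketch of that pattern follows; the class and method names here are illustrative, and Avro's real implementation also guards `addAll`, `removeAll`, `retainAll`, and the iterators, which this sketch omits for brevity.

```java
import java.util.ArrayList;

// Sketch of a list that becomes immutable once lock() is called.
// Illustrative only; Avro's Schema.LockableArrayList guards more methods.
public class LockableList<E> extends ArrayList<E> {
  private boolean locked = false;

  // After lock() returns, any mutation throws IllegalStateException.
  public LockableList<E> lock() {
    locked = true;
    return this;
  }

  private void ensureUnlocked() {
    if (locked)
      throw new IllegalStateException("list is locked");
  }

  @Override
  public boolean add(E e) {
    ensureUnlocked();
    return super.add(e);
  }

  @Override
  public boolean remove(Object o) {
    ensureUnlocked();
    return super.remove(o);
  }

  @Override
  public void clear() {
    ensureUnlocked();
    super.clear();
  }

  public static void main(String[] args) {
    LockableList<String> l = new LockableList<String>();
    l.add("a");
    l.lock();
    try {
      l.add("b");
      throw new AssertionError("expected IllegalStateException");
    } catch (IllegalStateException expected) {
      // mutation after lock() is rejected, as the tests above assert
    }
    System.out.println(l.size()); // still 1
  }
}
```

Locking after construction lets Schema build its field lists mutably, then hand them out without defensive copies.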
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/file/TestLengthLimitedInputStream.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/file/TestLengthLimitedInputStream.java
deleted file mode 100644
index 2494428..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/file/TestLengthLimitedInputStream.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.file;
-
-import static org.junit.Assert.assertEquals;
-
-import java.io.ByteArrayInputStream;
-import java.io.IOException;
-import java.io.InputStream;
-
-import org.junit.Before;
-import org.junit.Test;
-
-public class TestLengthLimitedInputStream {
-  InputStream raw;
-
-  @Before
-  public void setupRawStream() {
-    byte[] buf = new byte[128];
-    for (int i = 0; i < 128; ++i) {
-      buf[i] = (byte)i;
-    }
-    raw = new ByteArrayInputStream(buf);
-  }
-
-  @Test
-  public void testAvailable() throws IOException {
-    InputStream is = new LengthLimitedInputStream(raw, 10);
-    assertEquals(10, is.available());
-    is.skip(100);
-    assertEquals(0, is.available());
-  }
-
-  @Test
-  public void testRead() throws IOException {
-    InputStream is = new LengthLimitedInputStream(raw, 10);
-    byte[] x = new byte[12];
-    assertEquals(0, is.read());
-    assertEquals(9, is.read(x));
-    assertEquals(-1, is.read(x));
-    assertEquals(9, x[8]);
-  }
-}
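The test above pins down the contract of `LengthLimitedInputStream`: a wrapper that exposes at most `limit` bytes of an underlying stream, returning -1 once the limit is exhausted. A minimal sketch matching the behavior those assertions encode (the class name here is illustrative, not Avro's):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of an InputStream that exposes at most `limit` bytes of the
// underlying stream; mirrors the behavior the deleted test asserts.
public class BoundedInputStream extends FilterInputStream {
  private long remaining;

  public BoundedInputStream(InputStream in, long limit) {
    super(in);
    this.remaining = limit;
  }

  @Override
  public int available() throws IOException {
    return (int) Math.min(in.available(), remaining);
  }

  @Override
  public int read() throws IOException {
    if (remaining <= 0) return -1;          // limit exhausted
    int b = in.read();
    if (b >= 0) remaining--;
    return b;
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    if (remaining <= 0) return -1;
    int n = in.read(b, off, (int) Math.min(len, remaining));
    if (n > 0) remaining -= n;
    return n;
  }

  @Override
  public long skip(long n) throws IOException {
    long skipped = in.skip(Math.min(n, remaining));
    remaining -= skipped;
    return skipped;
  }

  public static void main(String[] args) throws IOException {
    byte[] buf = new byte[128];
    for (int i = 0; i < 128; ++i) buf[i] = (byte) i;
    InputStream is = new BoundedInputStream(new ByteArrayInputStream(buf), 10);
    System.out.println(is.read());      // 0: first byte
    byte[] x = new byte[12];
    System.out.println(is.read(x));     // 9: only 9 bytes of the limit remain
    System.out.println(is.read(x));     // -1: limit exhausted
  }
}
```

Avro's data files use this kind of wrapper so a block reader cannot run past the end of its block into the next one.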
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/genavro/TestGenAvro.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/genavro/TestGenAvro.java
deleted file mode 100644
index 545d222..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/genavro/TestGenAvro.java
+++ /dev/null
@@ -1,157 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.avro.genavro;
-
-import org.junit.Before;
-import org.junit.Test;
-import static org.junit.Assert.*;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.io.BufferedReader;
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileWriter;
-import java.io.IOException;
-import java.io.InputStreamReader;
-
-import org.apache.avro.Protocol;
-
-/**
- * Simple test harness for GenAvro.
- * This relies on an input/ and output/ directory. Inside
- * the input/ directory are .genavro files. Each file should have
- * a corresponding .avpr file in output/. When the test is run,
- * it generates and stringifies each .genavro file and compares
- * it to the expected output, failing if the two differ.
- *
- * To make it simpler to write these tests, you can run
- *   ant -Dtestcase=TestGenAvro -Dtest.genavro.mode=write
- * which will *replace* all expected output.
- */
-public class TestGenAvro {
-  private static final File TEST_DIR =
-    new File(System.getProperty("test.genavro.dir"));
-
-  private static final File TEST_INPUT_DIR =
-    new File(TEST_DIR, "input");
-
-  private static final File TEST_OUTPUT_DIR =
-    new File(TEST_DIR, "output");
-
-  private static final String TEST_MODE =
-    System.getProperty("test.genavro.mode", "run");
-
-  private List<GenTest> tests;
-
-  @Before
-  public void loadTests() {
-    assertTrue(TEST_DIR.exists());
-    assertTrue(TEST_INPUT_DIR.exists());
-    assertTrue(TEST_OUTPUT_DIR.exists());
-
-    tests = new ArrayList<GenTest>();
-    for (File inF : TEST_INPUT_DIR.listFiles()) {
-      if (!inF.getName().endsWith(".genavro")) continue;
-      if (inF.getName().startsWith(".")) continue;
-
-      File outF = new File(
-        TEST_OUTPUT_DIR,
-        inF.getName().replaceFirst("\\.genavro$", ".avpr"));
-      tests.add(new GenTest(inF, outF));
-    }
-  }
-
-  @Test
-  public void runTests() throws Exception {
-    if (! "run".equals(TEST_MODE)) return;
-
-    int passed = 0, failed = 0;
-    
-    for (GenTest t : tests) {
-      try {
-        t.run();
-        passed++;
-      } catch (Exception e) {
-        failed++;
-        e.printStackTrace(System.err);
-      }
-    }
-
-    if (failed > 0) {
-      fail(String.valueOf(failed) + " tests failed");
-    }
-  }
-
-  @Test
-  public void writeTests() throws Exception {
-    if (! "write".equals(TEST_MODE)) return;
-
-    for (GenTest t : tests) {
-      t.write();
-    }
-  }
-
-
-  /**
-   * An individual comparison test
-   */
-  private static class GenTest {
-    private final File in, expectedOut;
-
-    public GenTest(File in, File expectedOut) {
-      this.in = in;
-      this.expectedOut = expectedOut;
-    }
-
-    private String generate() throws Exception {
-      GenAvro parser = new GenAvro(new FileInputStream(in), "UTF-8");
-      Protocol p = parser.CompilationUnit();
-      return p.toString(true);
-    }
-
-    public void run() throws Exception {
-      String output = generate();
-      String slurped = slurp(expectedOut);
-      assertEquals(slurped.trim(), output.replace("\r", "").trim());
-    }
-
-    public void write() throws Exception {
-      writeFile(expectedOut, generate());
-    }
-
-    private static String slurp(File f) throws IOException {
-      BufferedReader in = new BufferedReader(
-          new InputStreamReader(new FileInputStream(f), "UTF-8"));
-
-      String line = null;
-      StringBuilder builder = new StringBuilder();
-      while ((line = in.readLine()) != null) {
-        builder.append(line).append('\n');
-      }
-      return builder.toString();
-    }
-
-    private static void writeFile(File f, String s) throws IOException {
-      FileWriter w = new FileWriter(f);
-      w.write(s);
-      w.close();
-    }
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/generic/TestGenericData.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/generic/TestGenericData.java
deleted file mode 100644
index 7759cb8..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/generic/TestGenericData.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-import java.util.ArrayList;
-import java.util.List;
-
-import static org.junit.Assert.*;
-
-import java.util.Arrays;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-import org.apache.avro.AvroRuntimeException;
-import org.apache.avro.Schema.Type;
-
-import org.junit.Test;
-
-public class TestGenericData {
-  
-  @Test(expected=AvroRuntimeException.class)
-  public void testRecordConstructorNullSchema() throws Exception {
-    new GenericData.Record(null);
-  }
-
-  @Test(expected=AvroRuntimeException.class)
-  public void testRecordConstructorWrongSchema() throws Exception {
-    new GenericData.Record(Schema.create(Schema.Type.INT));
-  }
-
-  @Test(expected=AvroRuntimeException.class)
-  public void testArrayConstructorNullSchema() throws Exception {
-    new GenericData.Array<Object>(1, null);
-  }
-
-  @Test(expected=AvroRuntimeException.class)
-  public void testArrayConstructorWrongSchema() throws Exception {
-    new GenericData.Array<Object>(1, Schema.create(Schema.Type.INT));
-  }
-  
-  /** Make sure that even with nulls, hashCode() doesn't throw NPE. */
-  @Test
-  public void testHashCode() {
-    GenericData.get().hashCode(null, Schema.create(Type.NULL));
-    GenericData.get().hashCode(null, Schema.createUnion(
-        Arrays.asList(Schema.create(Type.BOOLEAN), Schema.create(Type.STRING))));
-  }
-
-  @Test
-  public void testRecordGetFieldDoesntExist() throws Exception {
-    List<Field> fields = new ArrayList<Field>();
-    Schema schema = Schema.createRecord(fields);
-    GenericData.Record record = new GenericData.Record(schema);
-    assertNull(record.get("does not exist"));
-  }
-    
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/generic/TestGenericDatumWriter.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/generic/TestGenericDatumWriter.java
deleted file mode 100644
index fdec8c6..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/generic/TestGenericDatumWriter.java
+++ /dev/null
@@ -1,52 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.generic;
-
-import static org.junit.Assert.assertEquals;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-
-import org.apache.avro.Schema;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.io.JsonDecoder;
-import org.apache.avro.io.JsonEncoder;
-import org.junit.Test;
-
-public class TestGenericDatumWriter {
-  @Test
-  public void testWrite() throws IOException {
-    String json = "{\"type\": \"record\", \"name\": \"r\", \"fields\": ["
-      + "{ \"name\": \"f1\", \"type\": \"long\" }"
-      + "]}";
-    Schema s = Schema.parse(json);
-    GenericRecord r = new GenericData.Record(s);
-    r.put("f1", 100L);
-    ByteArrayOutputStream bao = new ByteArrayOutputStream();
-    GenericDatumWriter<GenericRecord> w =
-      new GenericDatumWriter<GenericRecord>(s);
-    Encoder e = new JsonEncoder(s, bao);
-    w.write(r, e);
-    e.flush();
-    
-    Object o = new GenericDatumReader<GenericRecord>(s).read(null,
-        new JsonDecoder(s, new ByteArrayInputStream(bao.toByteArray())));
-    assertEquals(r, o);
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/Perf.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/Perf.java
deleted file mode 100644
index 661c745..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/Perf.java
+++ /dev/null
@@ -1,574 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.ByteArrayOutputStream;
-import java.io.EOFException;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import java.util.Random;
-
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Field;
-import org.apache.avro.generic.GenericDatumReader;
-
-/**
- * Performance tests for various low level operations of
- * Avro encoding and decoding.
- */
-public class Perf {
-  private static final int COUNT = 100000; // needs to be a multiple of 4
-  private static final int CYCLES = 500;
-  
-  public static void main(String[] args) throws IOException {
-    List<Test> tests = new ArrayList<Test>();
-    ReadInt readIntTest = null;
-    for (String a : args) {
-      if (a.equals("-i")) {
-        readIntTest = new ReadInt();
-        tests.add(readIntTest);
-      } else if (a.equals("-f")) {
-        tests.add(new ReadFloat());
-      } else if (a.equals("-d")) {
-        tests.add(new ReadDouble());
-      } else if (a.equals("-l")) {
-        tests.add(new ReadLong());
-      } else if (a.equals("-ls")) {
-        tests.add(new ReadLongSmall(readIntTest));
-      } else if (a.equals("-b")) {
-        tests.add(new ReadBoolean());
-      } else if (a.equals("-R")) {
-        tests.add(new RepeaterTest());
-      } else if (a.equals("-N")) {
-        tests.add(new NestedRecordTest());
-      } else if (a.equals("-S")) {
-        tests.add(new ResolverTest());
-      } else if (a.equals("-M")) {
-        tests.add(new MigrationTest());
-      } else if (a.equals("-G")) {
-        tests.add(new GenericReaderTest());
-      } else if (a.equals("-Gd")) {
-        tests.add(new GenericReaderWithDefaultTest());
-      } else if (a.equals("-Go")) {
-        tests.add(new GenericReaderWithOutOfOrderTest());
-      } else if (a.equals("-Gp")) {
-        tests.add(new GenericReaderWithPromotionTest());
-      } else {
-        usage();
-        System.exit(1);
-      }
-    }
-    if (tests.isEmpty()) {
-      readIntTest = new ReadInt();
-      tests.addAll(Arrays.asList(new Test[] {
-          readIntTest, 
-          new ReadLongSmall(readIntTest), 
-          new ReadLong(),
-          new ReadFloat(), 
-          new ReadDouble(),
-          new ReadBoolean(),
-          new RepeaterTest(), new NestedRecordTest(),
-          new ResolverTest(), new MigrationTest(),
-          new GenericReaderTest(), new GenericReaderWithDefaultTest(),
-          new GenericReaderWithOutOfOrderTest(),
-          new GenericReaderWithPromotionTest()
-      }));
-    }
-    
-    for (int k = 0; k < tests.size(); k++) {
-      Test t = tests.get(k);
-      // get everything to compile once 
-      t.read();
-    }
-    for (int k = 0; k < tests.size(); k++) {
-      Test t = tests.get(k);
-      // warmup JVM
-      for (int i = 0; i < t.cycles; i++) {
-        t.read();
-      }
-      // test
-      long s = 0;
-      for (int i = 0; i < t.cycles; i++) {
-        s += t.read();
-      }
-      s /= 1000;
-      double entries = (t.cycles * (double) t.count);
-      double bytes = t.cycles * (double) t.data.length;
-      System.out.println(t.name + ": " + (s / 1000) + " ms, "
-          + (entries / s) + " million entries/sec.  "
-          + (bytes / s) + " million bytes/sec");
-      tests.set(k, null);
-    }
-  }
-  
-  private abstract static class Test {
-
-    /**
-     * Name of the test.
-     */
-    public final String name;
-    public final int count;
-    public final int cycles;
-    protected byte[] data;
-    protected static DecoderFactory factory = new DecoderFactory();
-    
-    /**
-     * Reads the contents and returns the time taken in nanoseconds.
-     * @return  The time taken to complete the operation.
-     * @throws IOException
-     */
-    abstract long read() throws IOException;
-    
-    public Test(String name, int cycles, int count) {
-      this.name = name;
-      this.cycles = cycles;
-      this.count = count;
-    }
-    
-    protected void generateRepeaterData(Encoder e) throws IOException {
-      e.writeArrayStart();
-      e.setItemCount(count);
-      Random r = newRandom();
-      for (int i = 0; i < count; i++) {
-        e.writeDouble(r.nextDouble());
-        e.writeDouble(r.nextDouble());
-        e.writeDouble(r.nextDouble());
-        e.writeInt(r.nextInt());
-        e.writeInt(r.nextInt());
-        e.writeInt(r.nextInt());
-      }
-      e.writeArrayEnd();
-    }
-  }
-  
-  private static abstract class DecoderTest extends Test {
-    public final Schema schema;
-    public DecoderTest(String name, String json) throws IOException {
-      this(name, json, 1);
-    }
-    public DecoderTest(String name, String json, int factor) throws IOException {
-      super(name, CYCLES, COUNT/factor);
-      this.schema = Schema.parse(json);
-      ByteArrayOutputStream bao = new ByteArrayOutputStream();
-      Encoder e = new BinaryEncoder(bao);
-      genData(e);
-      e.flush();
-      data = bao.toByteArray();
-    }
-
-    @Override
-    public final long read() throws IOException {
-      Decoder d = getDecoder();
-      long t = System.nanoTime();
-      for (long l = d.readArrayStart(); l > 0; l = d.arrayNext()) {
-        for (int j = 0; j < l; j++) {
-          readInternal(d);
-        }
-      }
-      return (System.nanoTime() - t);
-    }
-    
-    protected Decoder getDecoder() throws IOException {
-      return newDecoder(data);
-    }
-
-    protected static Decoder newDecoder(byte[] data) {
-      return factory.createBinaryDecoder(data, null);
-    }
-
-    abstract void genData(Encoder e) throws IOException;
-    abstract void readInternal(Decoder d) throws IOException;
-  }
-  
-  /**
-   * Use a fixed value seed for random number generation
-   * to allow for better cross-run comparisons.
-   */
-  private static final long SEED = 19781210;
-
-  protected static Random newRandom() {
-    return new Random(SEED);
-  }
-
-  private static class ReadInt extends DecoderTest {
-    public ReadInt() throws IOException {
-      this("ReadInt", "{ \"type\": \"array\", \"items\": \"int\"} ");
-    }
-
-    public ReadInt(String name, String schema) throws IOException {
-      super(name, schema);
-    }
-
-    @Override void genData(Encoder e) throws IOException {
-      e.writeArrayStart();
-      e.setItemCount((count/4) * 4); //next lowest multiple of 4  
-      Random r = newRandom();
-      for (int i = 0; i < count/4; i++) {
-        e.writeInt(r.nextInt(50)); // fits in 1 byte
-        e.writeInt(r.nextInt(5000)); // fits in 2 bytes
-        e.writeInt(r.nextInt(500000)); // fits in 3 bytes
-        e.writeInt(r.nextInt(150000000)); // most in 4, some in 5
-      }
-      e.writeArrayEnd();
-    }
-
-    @Override
-    void readInternal(Decoder d) throws IOException {
-       d.readInt();
-    }
-  }
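The byte-size comments in `ReadInt.genData` ("fits in 1 byte", "fits in 2 bytes", ...) follow from Avro's int encoding: zig-zag mapping followed by a base-128 varint, 7 payload bits per byte. A small sketch of that sizing logic (helper names here are illustrative, not Avro API):

```java
// Sketch of zig-zag varint sizing, which explains the byte-count
// comments in ReadInt.genData. Helper names are illustrative;
// Avro's BinaryEncoder implements this scheme internally.
public class ZigZagVarint {
  // Map signed int to unsigned so small magnitudes get small codes:
  // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
  static int zigZag(int n) {
    return (n << 1) ^ (n >> 31);
  }

  // Number of bytes the varint encoding of n occupies (7 bits per byte).
  static int encodedLength(int n) {
    int z = zigZag(n);
    int len = 1;
    while ((z & ~0x7F) != 0) {  // more than 7 bits remain
      z >>>= 7;
      len++;
    }
    return len;
  }

  public static void main(String[] args) {
    System.out.println(encodedLength(49));        // 1 byte  (nextInt(50))
    System.out.println(encodedLength(4999));      // 2 bytes (nextInt(5000))
    System.out.println(encodedLength(499999));    // 3 bytes (nextInt(500000))
    System.out.println(encodedLength(149999999)); // 5 bytes (above 2^27 - 1)
  }
}
```

Non-negative n maps to 2n under zig-zag, so values below 2^27 fit in four bytes; that is why `nextInt(150000000)` produces "most in 4, some in 5".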
-
-  // This is the same data as ReadInt, but using readLong.
-  private static class ReadLongSmall extends DecoderTest {
-    public ReadLongSmall(ReadInt dataFrom) throws IOException {
-      super("ReadLongSmall", "{ \"type\": \"array\", \"items\": \"long\"} ");
-      data = dataFrom.data;
-    }
-    @Override void genData(Encoder e) throws IOException {
-    }
-    @Override
-    void readInternal(Decoder d) throws IOException {
-       d.readLong();
-    }
-  }
- 
-  // this tests reading Longs that are sometimes very large
-  private static class ReadLong extends DecoderTest {
-    public ReadLong() throws IOException {
-      super("ReadLong", "{ \"type\": \"array\", \"items\": \"long\"} ");
-    }
-    
-    @Override
-    void genData(Encoder e) throws IOException {
-      e.writeArrayStart();
-      e.setItemCount((count / 4) *4);
-      Random r = newRandom();
-      for (int i = 0; i < count /4; i++) {
-        e.writeLong(r.nextLong() % 0x7FL); // half fit in 1, half in 2 
-        e.writeLong(r.nextLong() % 0x1FFFFFL); // half fit in <=3, half in 4
-        e.writeLong(r.nextLong() % 0x3FFFFFFFFL); // half in <=5, half in 6
-        e.writeLong(r.nextLong() % 0x1FFFFFFFFFFFFL); // half in <=8, half in 9 
-      }
-      e.writeArrayEnd();
-    }
-
-    @Override
-    void readInternal(Decoder d) throws IOException {
-       d.readLong();
-    }
-  }
-  
-  private static class ReadFloat extends DecoderTest {
-    public ReadFloat() throws IOException {
-      super("ReadFloat", "{ \"type\": \"array\", \"items\": \"float\"} ");
-    }
-
-    @Override
-    void genData(Encoder e) throws IOException {
-      e.writeArrayStart();
-      e.setItemCount(count);
-      Random r = newRandom();
-      for (int i = 0; i < count; i++) {
-        e.writeFloat(r.nextFloat());
-      }
-      e.writeArrayEnd();
-    }
-
-    @Override
-    void readInternal(Decoder d) throws IOException {
-      d.readFloat();
-    }
-  }
-
-  private static class ReadBoolean extends DecoderTest {
-    public ReadBoolean() throws IOException {
-      super("ReadBoolean", "{ \"type\": \"array\", \"items\": \"boolean\"} ");
-    }
-    @Override void genData(Encoder e) throws IOException {
-      e.writeArrayStart();
-      e.setItemCount(count);
-      Random r = newRandom();
-      for (int i = 0; i < count; i++) {
-        e.writeBoolean(r.nextBoolean());
-      }
-      e.writeArrayEnd();
-    }
-    @Override
-    void readInternal(Decoder d) throws IOException {
-      d.readBoolean();
-    }
-  }
-
-  private static class ReadDouble extends DecoderTest {
-    public ReadDouble() throws IOException {
-      super("ReadDouble", "{ \"type\": \"array\", \"items\": \"double\"} ");
-    }
-    
-    @Override
-    void genData(Encoder e) throws IOException {
-      e.writeArrayStart();
-      e.setItemCount(count);
-      Random r = newRandom();
-      for (int i = 0; i < count; i++) {
-        e.writeDouble(r.nextFloat());
-      }
-      e.writeArrayEnd();
-    }
-    @Override
-    void readInternal(Decoder d) throws IOException {
-      d.readDouble();
-    }
-  }
-  
-  private static final String REPEATER_SCHEMA =
-    "{ \"type\": \"array\", \"items\":\n"
-    + "{ \"type\": \"record\", \"name\": \"R\", \"fields\": [\n"
-    + "{ \"name\": \"f1\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f2\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f3\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f4\", \"type\": \"int\" },\n"
-    + "{ \"name\": \"f5\", \"type\": \"int\" },\n"
-    + "{ \"name\": \"f6\", \"type\": \"int\" }\n"
-    + "] } }";
-
-  private static class RepeaterTest extends DecoderTest {
-    public RepeaterTest() throws IOException {
-      this("RepeaterTest");
-    }
-    
-    public RepeaterTest(String name) throws IOException {
-      super(name, REPEATER_SCHEMA, 6);
-    }
-    
-    @Override
-    protected void genData(Encoder e) throws IOException {
-      generateRepeaterData(e);
-    }
-    
-    @Override
-    protected void readInternal(Decoder d) throws IOException {
-      d.readDouble();
-      d.readDouble();
-      d.readDouble();
-      d.readInt();
-      d.readInt();
-      d.readInt();
-    }
-    
-    @Override
-    protected Decoder getDecoder() throws IOException {
-      return new ValidatingDecoder(schema, super.getDecoder());
-    }
-    
-  }
-  
-  private static class ResolverTest extends RepeaterTest {
-
-    public ResolverTest() throws IOException {
-      super("ResolverTest");
-    }
-    
-    @Override
-    protected Decoder getDecoder() throws IOException {
-      return new ResolvingDecoder(schema, schema, newDecoder(data));
-    }
-    
-  }
-
-  private static final String MIGRATION_SCHEMA_WITH_DEFAULT =
-    "{ \"type\": \"array\", \"items\":\n"
-    + "{ \"type\": \"record\", \"name\": \"R\", \"fields\": [\n"
-    + "{ \"name\": \"f1\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f2\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f3\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f4\", \"type\": \"int\" },\n"
-    + "{ \"name\": \"f5\", \"type\": \"int\" },\n"
-    + "{ \"name\": \"f6\", \"type\": \"int\" },\n"
-    + "{ \"name\": \"f7\", \"type\": \"string\", "
-      + "\"default\": \"undefined\" },\n"
-    + "{ \"name\": \"f8\", \"type\": \"string\","
-      + "\"default\": \"undefined\" }\n"
-    + "] } }";
-
-  private static final String MIGRATION_SCHEMA_WITH_OUT_OF_ORDER =
-    "{ \"type\": \"array\", \"items\":\n"
-    + "{ \"type\": \"record\", \"name\": \"R\", \"fields\": [\n"
-    + "{ \"name\": \"f1\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f3\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f5\", \"type\": \"int\" },\n"
-    + "{ \"name\": \"f2\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f4\", \"type\": \"int\" },\n"
-    + "{ \"name\": \"f6\", \"type\": \"int\" }\n"
-    + "] } }";
-
-  private static final String MIGRATION_SCHEMA_WITH_PROMOTION =
-    "{ \"type\": \"array\", \"items\":\n"
-    + "{ \"type\": \"record\", \"name\": \"R\", \"fields\": [\n"
-    + "{ \"name\": \"f1\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f2\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f3\", \"type\": \"double\" },\n"
-    + "{ \"name\": \"f4\", \"type\": \"long\" },\n"
-    + "{ \"name\": \"f5\", \"type\": \"long\" },\n"
-    + "{ \"name\": \"f6\", \"type\": \"long\" }\n"
-    + "] } }";
-
-
-  /**
-   * Tests the performance of introducing default values.
-   */
-  private static class MigrationTest extends RepeaterTest {
-    private final Schema readerSchema;
-    public MigrationTest() throws IOException {
-      super("MigrationTest");
-      readerSchema = Schema.parse(MIGRATION_SCHEMA_WITH_DEFAULT);
-    }
-    
-    @Override
-    protected Decoder getDecoder() throws IOException {
-      return new ResolvingDecoder(schema, readerSchema, newDecoder(data));
-    }
-    
-    @Override
-    protected void readInternal(Decoder d) throws IOException {
-      ResolvingDecoder r = (ResolvingDecoder) d;
-      Field[] ff = r.readFieldOrder();
-      for (Field f : ff) {
-        if (f.pos() < 3) {
-          r.readDouble();
-        } else if (f.pos() < 6) {
-          r.readInt();
-        } else {
-          r.readString(null);
-        }
-      }
-    }
-  }
-  
-  private static class GenericReaderTest extends Test {
-    public final Schema writerSchema;
-
-    public GenericReaderTest() throws IOException {
-      this("GenericReaderTest");
-    }
-
-    public GenericReaderTest(String name) throws IOException {
-      super(name, CYCLES, COUNT/12);
-      this.writerSchema = Schema.parse(REPEATER_SCHEMA);
-      ByteArrayOutputStream bao = new ByteArrayOutputStream();
-      Encoder e = new BinaryEncoder(bao);
-      generateRepeaterData(e);
-      data = bao.toByteArray();
-    }
-
-    @Override
-    public final long read() throws IOException {
-      GenericDatumReader<Object> r = getReader();
-      long t = System.nanoTime();
-      Decoder d =
-        DecoderFactory.defaultFactory().createBinaryDecoder(data, null);
-      Object reuse = null;
-      for (; ;) {
-        try {
-          reuse = r.read(reuse, d);
-        } catch (EOFException e) {
-          break;
-        }
-      }
-      
-      return (System.nanoTime() - t);
-    }
-    
-    protected GenericDatumReader<Object> getReader() throws IOException {
-      return new GenericDatumReader<Object>(writerSchema);
-    }
-  }
-
-  private static class GenericReaderWithMigrationTest extends GenericReaderTest {
-    private final Schema readerSchema;
-    protected GenericReaderWithMigrationTest(String name, String readerSchema)
-      throws IOException {
-      super(name);
-      this.readerSchema = Schema.parse(readerSchema);
-    }
-    
-    protected GenericDatumReader<Object> getReader() throws IOException {
-      return new GenericDatumReader<Object>(writerSchema, readerSchema);
-    }
-  }
-
-  private static class GenericReaderWithDefaultTest extends
-    GenericReaderWithMigrationTest {
-    public GenericReaderWithDefaultTest() throws IOException {
-      super("GenericReaderTestWithDefaultTest", MIGRATION_SCHEMA_WITH_DEFAULT);
-    }
-  }
-
-  private static class GenericReaderWithOutOfOrderTest extends
-    GenericReaderWithMigrationTest {
-    public GenericReaderWithOutOfOrderTest() throws IOException {
-      super("GenericReaderTestWithOutOfOrderTest",
-          MIGRATION_SCHEMA_WITH_OUT_OF_ORDER);
-    }
-  }
-
-  private static class GenericReaderWithPromotionTest extends
-    GenericReaderWithMigrationTest {
-    public GenericReaderWithPromotionTest() throws IOException {
-      super("GenericReaderTestWithPromotionTest",
-          MIGRATION_SCHEMA_WITH_PROMOTION);
-    }
-  }
-
-  private static class NestedRecordTest extends ReadInt {
-    public NestedRecordTest() throws IOException {
-      super("NestedRecordTest",
-        "{ \"type\": \"array\", \"items\": \n"
-        + "{ \"type\": \"record\", \"name\": \"r1\", \n"
-        + "\"fields\": \n"
-        + "[ { \"name\": \"f1\", \"type\": \"int\" } ] } } ");
-    }
-
-    @Override
-    public Decoder getDecoder() throws IOException {
-      return new ValidatingDecoder(schema, super.getDecoder());
-    }
-  }
-
-  private static void usage() {
-    System.out.println("Usage: Perf { -i | -ls | -l | -f | -d | -b | -R | -N " +
-      "| -S | -M | -G | -Gd | -Go | Gp }");
-    System.out.println("  -i readInt()");
-    System.out.println("  -ls readLongSmall()");
-    System.out.println("  -l readLong()");
-    System.out.println("  -f readFloat()");
-    System.out.println("  -d readDouble()");
-    System.out.println("  -b readBoolean()");
-    System.out.println("  -R repeater in validating decoder");
-    System.out.println("  -N nested record in validating decoder");
-    System.out.println("  -S resolving decoder");
-    System.out.println("  -M resolving decoder (with default fields)");
-    System.out.println("  -G GenericDatumReader");
-    System.out.println("  -Gd GenericDatumReader (with default fields)");
-    System.out.println("  -Go GenericDatumReader (with out-of-order fields)");
-    System.out.println("  -Gp GenericDatumReader (with promotion fields)");
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBinaryDecoder.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBinaryDecoder.java
deleted file mode 100644
index e0c0249..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBinaryDecoder.java
+++ /dev/null
@@ -1,402 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.EOFException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-
-import org.apache.avro.RandomData;
-import org.apache.avro.Schema;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.avro.util.Utf8;
-import org.junit.Assert;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-import org.junit.runners.Parameterized.Parameters;
-
-@RunWith(Parameterized.class)
-public class TestBinaryDecoder {
-  // prime number buffer size so that looping tests hit the buffer edge
-  // at different points in the loop.
-  DecoderFactory factory;
-  public TestBinaryDecoder(boolean useDirect) {
-    factory = new DecoderFactory().configureDecoderBufferSize(521);
-    factory.configureDirectDecoder(useDirect);
-  }
-  
-  @Parameters
-  public static Collection<Object[]> data() {
-    return Arrays.asList(new Object[][] {
-        { true },
-        { false },
-    });
-  }
-  
-  private Decoder newDecoderWithNoData() throws IOException {
-    return newDecoder(new byte[0]);
-  }
-
-  private Decoder newDecoder(byte[] bytes, int start, int len)
-    throws IOException {
-    return factory.createBinaryDecoder(bytes, start, len, null);
-    
-  }
-
-  private Decoder newDecoder(InputStream in) {
-    return factory.createBinaryDecoder(in, null);
-  }
-
-  private Decoder newDecoder(byte[] bytes) throws IOException {
-    return factory.createBinaryDecoder(bytes, null);
-  }
-
-  /** Verify EOFException throw at EOF */
-
-  @Test(expected=EOFException.class)
-  public void testEOFBoolean() throws IOException {
-    newDecoderWithNoData().readBoolean();
-  }
-  
-  @Test(expected=EOFException.class)
-  public void testEOFInt() throws IOException {
-    newDecoderWithNoData().readInt();
-  }
-  
-  @Test(expected=EOFException.class)
-  public void testEOFLong() throws IOException {
-    newDecoderWithNoData().readLong();
-  }
-  
-  @Test(expected=EOFException.class)
-  public void testEOFFloat() throws IOException {
-    newDecoderWithNoData().readFloat();
-  }
-  
-  @Test(expected=EOFException.class)
-  public void testEOFDouble() throws IOException {
-    newDecoderWithNoData().readDouble();
-  }
-  
-  @Test(expected=EOFException.class)
-  public void testEOFBytes() throws IOException {
-    newDecoderWithNoData().readBytes(null);
-  }
-  
-  @Test(expected=EOFException.class)
-  public void testEOFString() throws IOException {
-    newDecoderWithNoData().readString(new Utf8("a"));
-  }
-  
-  @Test(expected=EOFException.class)
-  public void testEOFFixed() throws IOException {
-    newDecoderWithNoData().readFixed(new byte[1]);
-  }
-
-  @Test(expected=EOFException.class)
-  public void testEOFEnum() throws IOException {
-    newDecoderWithNoData().readEnum();
-  }
-  
-  private static byte[] data = null;
-  private static int seed = -1;
-  private static Schema schema = null;
-  private static int count = 200;
-  private static ArrayList<Object> records = new ArrayList<Object>(count);
-  @BeforeClass
-  public static void generateData() throws IOException {
-    seed = (int)System.currentTimeMillis();
-    // note some tests (testSkipping) rely on this explicitly
-    String jsonSchema =
-      "{\"type\": \"record\", \"name\": \"Test\", \"fields\": ["
-      +"{\"name\":\"intField\", \"type\":\"int\"},"
-      +"{\"name\":\"bytesField\", \"type\":\"bytes\"},"
-      +"{\"name\":\"booleanField\", \"type\":\"boolean\"},"
-      +"{\"name\":\"stringField\", \"type\":\"string\"},"
-      +"{\"name\":\"floatField\", \"type\":\"float\"},"
-      +"{\"name\":\"doubleField\", \"type\":\"double\"},"
-      +"{\"name\":\"arrayField\", \"type\": " +
-          "{\"type\":\"array\", \"items\":\"boolean\"}},"
-      +"{\"name\":\"longField\", \"type\":\"long\"}]}";
-    schema = Schema.parse(jsonSchema);
-    GenericDatumWriter<Object> writer = new GenericDatumWriter<Object>();
-    writer.setSchema(schema);
-    ByteArrayOutputStream baos = new ByteArrayOutputStream(8192);
-    BinaryEncoder encoder = new BinaryEncoder(baos);
-    
-    for (Object datum : new RandomData(schema, count, seed)) {
-      writer.write(datum, encoder);
-      records.add(datum);
-    }
-    data = baos.toByteArray();
-  }
-
-  @Test
-  public void testDecodeFromSources() throws IOException {
-    GenericDatumReader<Object> reader = new GenericDatumReader<Object>();
-    reader.setSchema(schema);
-    
-    ByteArrayInputStream is = new ByteArrayInputStream(data);
-    ByteArrayInputStream is2 = new ByteArrayInputStream(data);
-    ByteArrayInputStream is3 = new ByteArrayInputStream(data);
-
-    Decoder fromInputStream = newDecoder(is);
-    Decoder fromArray = newDecoder(data);
-    
-    byte[] data2 = new byte[data.length + 30];
-    Arrays.fill(data2, (byte)0xff);
-    System.arraycopy(data, 0, data2, 15, data.length);
-
-    Decoder fromOffsetArray = newDecoder(data2, 15, data.length);
-
-    BinaryDecoder initOnInputStream = factory.createBinaryDecoder(
-        new byte[50], 0, 30, null);
-    initOnInputStream = factory.createBinaryDecoder(is2, initOnInputStream);
-    BinaryDecoder initOnArray = factory.createBinaryDecoder(is3, null);
-    initOnArray = factory.createBinaryDecoder(
-        data, 0, data.length, initOnArray);
-    
-    for (Object datum : records) {
-      Assert.assertEquals(
-          "InputStream based BinaryDecoder result does not match",
-          datum, reader.read(null, fromInputStream));
-      Assert.assertEquals(
-          "Array based BinaryDecoder result does not match",
-          datum, reader.read(null, fromArray));
-      Assert.assertEquals(
-          "offset Array based BinaryDecoder result does not match",
-          datum, reader.read(null, fromOffsetArray));
-      Assert.assertEquals(
-          "InputStream initialized BinaryDecoder result does not match",
-          datum, reader.read(null, initOnInputStream));
-      Assert.assertEquals(
-          "Array initialized BinaryDecoder result does not match",
-          datum, reader.read(null, initOnArray));
-    }
-  }
-
-  @Test
-  public void testInputStreamProxy() throws IOException {
-    Decoder d = newDecoder(data);
-    if (d instanceof BinaryDecoder) {
-      BinaryDecoder bd = (BinaryDecoder) d;
-      InputStream test = bd.inputStream();
-      InputStream check = new ByteArrayInputStream(data);
-      validateInputStreamReads(test, check);
-      bd = factory.createBinaryDecoder(data, bd);
-      test = bd.inputStream();
-      check = new ByteArrayInputStream(data);
-      validateInputStreamSkips(test, check);
-      // with input stream sources
-      bd = factory.createBinaryDecoder(new ByteArrayInputStream(data), bd);
-      test = bd.inputStream();
-      check = new ByteArrayInputStream(data);
-      validateInputStreamReads(test, check);
-      bd = factory.createBinaryDecoder(new ByteArrayInputStream(data), bd);
-      test = bd.inputStream();
-      check = new ByteArrayInputStream(data);
-      validateInputStreamSkips(test, check);
-    }
-  }
-
-  @Test
-  public void testInputStreamProxyDetached() throws IOException {
-    Decoder d = newDecoder(data);
-    if (d instanceof BinaryDecoder) {
-      BinaryDecoder bd = (BinaryDecoder) d;
-      InputStream test = bd.inputStream();
-      InputStream check = new ByteArrayInputStream(data);
-      // detach input stream and decoder from old source
-      factory.createBinaryDecoder(new byte[56], null);
-      InputStream bad = bd.inputStream();
-      InputStream check2 = new ByteArrayInputStream(data);
-      validateInputStreamReads(test, check);
-      Assert.assertFalse(bad.read() == check2.read());
-    }
-  }
-  
-  @Test
-  public void testInputStreamPartiallyUsed() throws IOException {
-    BinaryDecoder bd = factory.createBinaryDecoder(
-        new ByteArrayInputStream(data), null);
-    InputStream test = bd.inputStream();
-    InputStream check = new ByteArrayInputStream(data);
-    // triggers buffer fill if unused and tests isEnd()
-    try {
-      Assert.assertFalse(bd.isEnd()); 
-    } catch (UnsupportedOperationException e) {
-      // this is ok if its a DirectBinaryDecoder.
-      if (bd.getClass() != DirectBinaryDecoder.class) {
-        throw e;
-      }
-    }
-    bd.readFloat(); // use data, and otherwise trigger buffer fill
-    check.skip(4); // skip the same # of bytes here
-    validateInputStreamReads(test, check);
-  }
-
-  private void validateInputStreamReads(InputStream test, InputStream check)
-      throws IOException {
-    byte[] bt = new byte[7];
-    byte[] bc = new byte[7]; 
-    while (true) {
-      int t = test.read();
-      int c = check.read();
-      Assert.assertEquals(c, t);
-      if (-1 == t) break;
-      t = test.read(bt);
-      c = check.read(bc);
-      Assert.assertEquals(c, t);
-      Assert.assertArrayEquals(bt, bc);
-      if (-1 == t) break;
-      t = test.read(bt, 1, 4);
-      c = check.read(bc, 1, 4);
-      Assert.assertEquals(c, t);
-      Assert.assertArrayEquals(bt, bc);
-      if (-1 == t) break;
-    }
-    Assert.assertEquals(0, test.skip(5));
-    Assert.assertEquals(0, test.available());
-    Assert.assertFalse(test.getClass() != ByteArrayInputStream.class && test.markSupported());
-    test.close();
-  }
-  
-  private void validateInputStreamSkips(InputStream test, InputStream check) throws IOException {
-    while(true) {
-      long t2 = test.skip(19);
-      long c2 = check.skip(19);
-      Assert.assertEquals(c2, t2);
-      if (0 == t2) break;
-    }
-    Assert.assertEquals(-1, test.read());
-  }
-
-  @Test
-  public void testBadIntEncoding() throws IOException {
-    byte[] badint = new byte[5];
-    Arrays.fill(badint, (byte)0xff);
-    Decoder bd = factory.createBinaryDecoder(badint, null);
-    String message = "";
-    try {
-      bd.readInt();
-    } catch (IOException ioe) {
-      message = ioe.getMessage();
-    }
-    Assert.assertEquals("Invalid int encoding", message);
-  }
-
-  @Test
-  public void testBadLongEncoding() throws IOException {
-    byte[] badint = new byte[10];
-    Arrays.fill(badint, (byte)0xff);
-    Decoder bd = factory.createBinaryDecoder(badint, null);
-    String message = "";
-    try {
-      bd.readLong();
-    } catch (IOException ioe) {
-      message = ioe.getMessage();
-    }
-    Assert.assertEquals("Invalid long encoding", message);
-  }
-
-  @Test(expected=EOFException.class)
-  public void testIntTooShort() throws IOException {
-    byte[] badint = new byte[4];
-    Arrays.fill(badint, (byte)0xff);
-    newDecoder(badint).readInt();
-  }
-
-  @Test(expected=EOFException.class)
-  public void testLongTooShort() throws IOException {
-    byte[] badint = new byte[9];
-    Arrays.fill(badint, (byte)0xff);
-    newDecoder(badint).readLong();
-  }
-  
-  @Test(expected=EOFException.class)
-  public void testFloatTooShort() throws IOException {
-    byte[] badint = new byte[3];
-    Arrays.fill(badint, (byte)0xff);
-    newDecoder(badint).readInt();
-  }
-
-  @Test(expected=EOFException.class)
-  public void testDoubleTooShort() throws IOException {
-    byte[] badint = new byte[7];
-    Arrays.fill(badint, (byte)0xff);
-    newDecoder(badint).readLong();
-  }
-
-  @Test
-  public void testSkipping() throws IOException {
-    Decoder d = newDecoder(data);
-    skipGenerated(d);
-    if (d instanceof BinaryDecoder) {
-      BinaryDecoder bd = (BinaryDecoder) d;
-      try {
-        Assert.assertTrue(bd.isEnd());
-      } catch (UnsupportedOperationException e) {
-        // this is ok if its a DirectBinaryDecoder.
-        if (bd.getClass() != DirectBinaryDecoder.class) {
-          throw e;
-        }
-      }
-      bd = factory.createBinaryDecoder(new ByteArrayInputStream(data), bd);
-      skipGenerated(bd);
-      try {
-        Assert.assertTrue(bd.isEnd());
-      } catch (UnsupportedOperationException e) {
-        // this is ok if its a DirectBinaryDecoder.
-        if (bd.getClass() != DirectBinaryDecoder.class) {
-          throw e;
-        }
-      }
-    }
-  }
-
-  private void skipGenerated(Decoder bd) throws IOException {
-    for (int i = 0; i < records.size(); i++) {
-      bd.readInt();
-      bd.skipBytes();
-      bd.skipFixed(1);
-      bd.skipString();
-      bd.skipFixed(4);
-      bd.skipFixed(8);
-      long leftover = bd.skipArray();
-      // booleans are one byte, array trailer is one byte
-      bd.skipFixed((int)leftover + 1); 
-      bd.skipFixed(0);
-      bd.readLong();
-    }
-    EOFException eof = null;
-    try {
-      bd.skipFixed(4);
-    } catch (EOFException e) {
-      eof = e;
-    }
-    Assert.assertTrue(null != eof);
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBlockingIO.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBlockingIO.java
deleted file mode 100644
index 5bb9868..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBlockingIO.java
+++ /dev/null
@@ -1,475 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.Stack;
-import java.util.Collection;
-import java.util.Arrays;
-
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.JsonParser;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.assertFalse;
-
-@RunWith(Parameterized.class)
-public class TestBlockingIO {
-  private static final String UTF_8 = "UTF-8";
-
-  private final int iSize;
-  private final int iDepth;
-  private final String sInput;
-
-  public TestBlockingIO (int sz, int dp, String inp) {
-    this.iSize = sz;
-    this.iDepth = dp;
-    this.sInput = inp;
-  }
-  
-  private static class Tests {
-    private final JsonParser parser;
-    private final Decoder input;
-    private final int depth;
-    public Tests(int bufferSize, int depth, String input)
-      throws IOException {
-  
-      this.depth = depth;
-      byte[] in = input.getBytes("UTF-8");
-      JsonFactory f = new JsonFactory();
-      JsonParser p = f.createJsonParser(
-          new ByteArrayInputStream(input.getBytes("UTF-8")));
-      
-      ByteArrayOutputStream os = new ByteArrayOutputStream();
-      Encoder cos = new BlockingBinaryEncoder(os, bufferSize);
-      serialize(cos, p, os);
-      cos.flush();
-      
-      byte[] bb = os.toByteArray();
-      // dump(bb);
-      this.input = DecoderFactory.defaultFactory().createBinaryDecoder(bb, null);
-      this.parser =  f.createJsonParser(new ByteArrayInputStream(in));
-    }
-    
-    public void scan() throws IOException {
-      Stack<S> countStack = new Stack<S>();
-      long count = 0;
-      while (parser.nextToken() != null) {
-        switch (parser.getCurrentToken()) {
-        case END_ARRAY:
-          assertEquals(0, count);
-          assertTrue(countStack.peek().isArray);
-          count = countStack.pop().count;
-          break;
-        case END_OBJECT:
-          assertEquals(0, count);
-          assertFalse(countStack.peek().isArray);
-          count = countStack.pop().count;
-          break;
-        case START_ARRAY:
-          countStack.push(new S(count, true));
-          count = input.readArrayStart();
-          continue;
-        case VALUE_STRING:
-        {
-          String s = parser.getText();
-          int n = s.getBytes(UTF_8).length;
-          checkString(s, input, n);
-          break;
-        }
-        case FIELD_NAME:
-        {
-          String s = parser.getCurrentName();
-          int n = s.getBytes(UTF_8).length;
-          checkString(s, input, n);
-          continue;
-        }
-        case START_OBJECT:
-          countStack.push(new S(count, false));
-          count = input.readMapStart();
-          if (count < 0) {
-            count = -count;
-            input.readLong();  // byte count
-          }
-          continue;
-        default:
-          throw new RuntimeException("Unsupported: " + parser.getCurrentToken());
-        }
-        count--;
-        if (count == 0) {
-          count = countStack.peek().isArray ? input.arrayNext() :
-            input.mapNext();
-        }
-      }
-    }
-
-    public void skip(int skipLevel) throws IOException {
-      Stack<S> countStack = new Stack<S>();
-      long count = 0;
-      while (parser.nextToken() != null) {
-        switch (parser.getCurrentToken()) {
-        case END_ARRAY:
-          // assertEquals(0, count);
-          assertTrue(countStack.peek().isArray);
-          count = countStack.pop().count;
-          break;
-        case END_OBJECT:
-          // assertEquals(0, count);
-          assertFalse(countStack.peek().isArray);
-          count = countStack.pop().count;
-          break;
-        case START_ARRAY:
-          if (countStack.size() == skipLevel) {
-            skipArray(parser, input, depth - skipLevel);
-            break;
-          } else {
-            countStack.push(new S(count, true));
-            count = input.readArrayStart();
-            continue;
-          }
-        case VALUE_STRING:
-        {
-          if (countStack.size() == skipLevel) {
-            input.skipBytes();
-          } else {
-            String s = parser.getText();
-            int n = s.getBytes(UTF_8).length;
-            checkString(s, input, n);
-          }
-          break;
-        }
-        case FIELD_NAME:
-        {
-          String s = parser.getCurrentName();
-          int n = s.getBytes(UTF_8).length;
-          checkString(s, input, n);
-          continue;
-        }
-        case START_OBJECT:
-          if (countStack.size() == skipLevel) {
-            skipMap(parser, input, depth - skipLevel);
-            break;
-          } else {
-            countStack.push(new S(count, false));
-            count = input.readMapStart();
-            if (count < 0) {
-              count = -count;
-              input.readLong();  // byte count
-            }
-            continue;
-          }
-        default:
-          throw new RuntimeException("Unsupported: " + parser.getCurrentToken());
-        }
-        count--;
-        if (count == 0) {
-          count = countStack.peek().isArray ? input.arrayNext() :
-            input.mapNext();
-        }
-      }
-    }
-  }
-
-  protected static void dump(byte[] bb) {
-    int col = 0;
-    for (byte b : bb) {
-      if (col % 16 == 0) {
-        System.out.println();
-      }
-      col++;
-      System.out.print(Integer.toHexString(b & 0xff) + " ");
-    }
-    System.out.println();
-  }
-
-  private static class S {
-    public final long count;
-    public final boolean isArray;
-    
-    public S(long count, boolean isArray) {
-      this.count = count;
-      this.isArray = isArray;
-    }
-  }
-
-  @Test
-  public void testScan() throws IOException {
-    Tests t = new Tests(iSize, iDepth, sInput);
-    t.scan();
-  }
-
-  @Test
-  public void testSkip1() throws IOException {
-    testSkip(iSize, iDepth, sInput, 0);
-  }
-
-  @Test
-  public void testSkip2() throws IOException {
-    testSkip(iSize, iDepth, sInput, 1);
-  }
-
-  @Test
-  public void testSkip3() throws IOException {
-    testSkip(iSize, iDepth, sInput, 2);
-  }
-
-  private void testSkip(int bufferSize, int depth, String input,
-      int skipLevel)
-    throws IOException {
-    Tests t = new Tests(bufferSize, depth, input);
-    t.skip(skipLevel);
-  }
-
-  private static void skipMap(JsonParser parser, Decoder input, int depth)
-    throws IOException {
-    for (long l = input.skipMap(); l != 0; l = input.skipMap()) {
-      for (long i = 0; i < l; i++) {
-        if (depth == 0) {
-          input.skipBytes();
-        } else {
-          skipArray(parser, input, depth - 1);
-        }
-      }
-    }
-    parser.skipChildren();
-  }
-
-  private static void skipArray(JsonParser parser, Decoder input, int depth)
-    throws IOException {
-    for (long l = input.skipArray(); l != 0; l = input.skipArray()) {
-      for (long i = 0; i < l; i++) {
-        if (depth == 1) {
-          input.skipBytes();
-        } else {
-          skipArray(parser, input, depth - 1);
-        }
-      }
-    }
-    parser.skipChildren();
-  }
- 
-  private static void checkString(String s, Decoder input, int n)
-    throws IOException {
-    ByteBuffer buf = input.readBytes(null);
-    assertEquals(n, buf.remaining());
-    String s2 = new String(buf.array(), buf.position(),
-        buf.remaining(), UTF_8);
-    assertEquals(s, s2);
-  }
-  
-  private static void serialize(Encoder cos, JsonParser p,
-      ByteArrayOutputStream os)
-    throws IOException {
-    boolean[] isArray = new boolean[100];
-    int[] counts = new int[100];
-    int stackTop = -1;
-    
-    while (p.nextToken() != null) {
-      switch (p.getCurrentToken()) {
-      case END_ARRAY:
-        assertTrue(isArray[stackTop]);
-        cos.writeArrayEnd();
-        stackTop--;
-        break;
-      case END_OBJECT:
-        assertFalse(isArray[stackTop]);
-        cos.writeMapEnd();
-        stackTop--;
-        break;
-      case START_ARRAY:
-        if (stackTop >= 0 && isArray[stackTop]) {
-          cos.setItemCount(1);
-          cos.startItem();
-          counts[stackTop]++;
-        }
-        cos.writeArrayStart();
-        isArray[++stackTop] = true;
-        counts[stackTop] = 0;
-        continue;
-      case VALUE_STRING:
-        if (stackTop >= 0 && isArray[stackTop]) {
-          cos.setItemCount(1);
-          cos.startItem();
-          counts[stackTop]++;
-        }
-        byte[] bb = p.getText().getBytes(UTF_8);
-        cos.writeBytes(bb);
-        break;
-      case START_OBJECT:
-        if (stackTop >= 0 && isArray[stackTop]) {
-          cos.setItemCount(1);
-          cos.startItem();
-          counts[stackTop]++;
-        }
-        cos.writeMapStart();
-        isArray[++stackTop] = false;
-        counts[stackTop] = 0;
-        continue;
-      case FIELD_NAME:
-        cos.setItemCount(1);
-        cos.startItem();
-        counts[stackTop]++;
-        cos.writeBytes(p.getCurrentName().getBytes(UTF_8));
-        break;
-      default:
-        throw new RuntimeException("Unsupported: " + p.getCurrentToken());
-      }
-    }
-  }
-
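The `serialize()` helper above tracks JSON nesting with two parallel arrays (`isArray[]`, `counts[]`) indexed by `stackTop`, acting as a manual stack: each level remembers whether it is an array or an object and how many items have been emitted there. The same bookkeeping can be sketched with a `Deque` and no Avro/Jackson dependency; the names below are illustrative only and do not appear in the Avro sources.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the stack discipline used by serialize() above:
// one frame per open JSON array/object, counting the items emitted at
// that level. Opening a child inside an array counts as one item there.
final class NestingTracker {
  private static final class Frame {
    final boolean isArray;
    int count;
    Frame(boolean isArray) { this.isArray = isArray; }
  }

  private final Deque<Frame> stack = new ArrayDeque<>();

  void open(boolean array) {
    // A container opening inside an enclosing array is itself an item.
    if (!stack.isEmpty() && stack.peek().isArray) {
      stack.peek().count++;
    }
    stack.push(new Frame(array));
  }

  void item() {
    if (!stack.isEmpty()) {
      stack.peek().count++;
    }
  }

  int close() {
    // Returns the number of items emitted at the level just closed.
    return stack.pop().count;
  }

  int depth() {
    return stack.size();
  }
}
```

This is the same invariant `serialize()` maintains before each `cos.setItemCount(1)` / `cos.startItem()` pair: a value only counts as an item of the enclosing level when that level is an array (or, for maps, per field name).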
-  @Parameterized.Parameters
-  public static Collection<Object[]> data() {
-    return Arrays.asList (new Object[][] {
-        { 64, 0, "" },
-        { 64, 0, jss(0, 'a') },
-        { 64, 0, jss(3, 'a') },
-        { 64, 0, jss(64, 'a') },
-        { 64, 0, jss(65, 'a') },
-        { 64, 0, jss(100, 'a') },
-        { 64, 1, "[]" },
-        { 64, 1, "[" + jss(0, 'a') + "]" },
-        { 64, 1, "[" + jss(3, 'a') + "]" },
-        { 64, 1, "[" + jss(61, 'a') + "]" },
-        { 64, 1, "[" + jss(62, 'a') + "]" },
-        { 64, 1, "[" + jss(64, 'a') + "]" },
-        { 64, 1, "[" + jss(65, 'a') + "]" },
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(0, '0') + "]" },
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(10, '0') + "]" },
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(63, '0') + "]" },
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(64, '0') + "]" },
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(65, '0') + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(0, '0') + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(10, '0') + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(51, '0') + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(52, '0') + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(54, '0') + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(55, '0') + "]" },
-
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(0, 'a') + "," + jss(0, '0')
-               + "]" },
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(0, 'a') + "," + jss(63, '0')
-               + "]" },
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(0, 'a') + "," + jss(64, '0')
-               + "]" },
-        { 64, 1, "[" + jss(0, 'a') + "," + jss(0, 'a') + "," + jss(65, '0')
-                 + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(20, 'A') + "," + jss(10, '0')
-                 + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(20, 'A') + "," + jss(23, '0')
-                 + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(20, 'A') + "," + jss(24, '0')
-                 + "]" },
-        { 64, 1, "[" + jss(10, 'a') + "," + jss(20, 'A') + "," + jss(25, '0')
-                 + "]" },
-        { 64, 2, "[[]]"},
-        { 64, 2, "[[" + jss(0, 'a') + "], []]" },
-        { 64, 2, "[[" + jss(10, 'a') + "], []]" },
-        { 64, 2, "[[" + jss(59, 'a') + "], []]" },
-        { 64, 2, "[[" + jss(60, 'a') + "], []]" },
-        { 64, 2, "[[" + jss(100, 'a') + "], []]" },
-        { 64, 2, "[[" + jss(10, '0') + ", " + jss(53, 'a') + "], []]" },
-        { 64, 2, "[[" + jss(10, '0') + ", " + jss(54, 'a') + "], []]" },
-        { 64, 2, "[[" + jss(10, '0') + ", " + jss(55, 'a') + "], []]" },
-
-        { 64, 2, "[[], [" + jss(0, 'a') + "]]" },
-        { 64, 2, "[[], [" + jss(10, 'a') + "]]" },
-        { 64, 2, "[[], [" + jss(63, 'a') + "]]" },
-        { 64, 2, "[[], [" + jss(64, 'a') + "]]" },
-        { 64, 2, "[[], [" + jss(65, 'a') + "]]" },
-        { 64, 2, "[[], [" + jss(10, '0') + ", " + jss(53, 'a') + "]]" },
-        { 64, 2, "[[], [" + jss(10, '0') + ", " + jss(54, 'a') + "]]" },
-        { 64, 2, "[[], [" + jss(10, '0') + ", " + jss(55, 'a') + "]]" },
-
-        { 64, 2, "[[" + jss(10, '0') + "]]"},
-        { 64, 2, "[[" + jss(62, '0') + "]]"},
-        { 64, 2, "[[" + jss(63, '0') + "]]"},
-        { 64, 2, "[[" + jss(64, '0') + "]]"},
-        { 64, 2, "[[" + jss(10, 'a') + ", " + jss(10, '0') + "]]"},
-        { 64, 2, "[[" + jss(10, 'a') + ", " + jss(52, '0') + "]]"},
-        { 64, 2, "[[" + jss(10, 'a') + ", " + jss(53, '0') + "]]"},
-        { 64, 2, "[[" + jss(10, 'a') + ", " + jss(54, '0') + "]]"},
-        { 64, 3, "[[[" + jss(10, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(62, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(63, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(64, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(10, 'a') + ", " + jss(10, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(10, 'a') + ", " + jss(52, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(10, 'a') + ", " + jss(53, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(10, 'a') + "], [" + jss(54, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(10, 'a') + "], [" + jss(10, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(10, 'a') + "], [" + jss(52, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(10, 'a') + "], [" + jss(53, '0') + "]]]"},
-        { 64, 3, "[[[" + jss(10, 'a') + "], [" + jss(54, '0') + "]]]"},
-
-        { 64, 2, "[[\"p\"], [\"mn\"]]"},
-        { 64, 2, "[[\"pqr\"], [\"mn\"]]"},
-        { 64, 2, "[[\"pqrstuvwxyz\"], [\"mn\"]]"},
-        { 64, 2, "[[\"abc\", \"pqrstuvwxyz\"], [\"mn\"]]"},
-        { 64, 2, "[[\"mn\"], [\"\"]]"},
-        { 64, 2, "[[\"mn\"], \"abc\"]"},
-        { 64, 2, "[[\"mn\"], \"abcdefghijk\"]"},
-        { 64, 2, "[[\"mn\"], \"pqr\", \"abc\"]"},
-        { 64, 2, "[[\"mn\"]]"},
-        { 64, 2, "[[\"p\"], [\"mnopqrstuvwx\"]]"},
-        { 64, 2, "[[\"pqr\"], [\"mnopqrstuvwx\"]]"},
-        { 64, 2, "[[\"pqrstuvwxyz\"], [\"mnopqrstuvwx\"]]"},
-        { 64, 2, "[[\"abc\"], \"pqrstuvwxyz\", [\"mnopqrstuvwx\"]]"},
-        { 64, 2, "[[\"mnopqrstuvwx\"], [\"\"]]"},
-        { 64, 2, "[[\"mnopqrstuvwx\"], [\"abc\"]]"},
-        { 64, 2, "[[\"mnopqrstuvwx\"], [\"abcdefghijk\"]]"},
-        { 64, 2, "[[\"mnopqrstuvwx\"], [\"pqr\", \"abc\"]]"},
-        { 100, 2, "[[\"pqr\", \"mnopqrstuvwx\"]]"},
-        { 100, 2, "[[\"pqr\", \"ab\", \"mnopqrstuvwx\"]]"},
-        { 64, 2, "[[[\"pqr\"]], [[\"ab\"], [\"mnopqrstuvwx\"]]]"},
-
-        { 64, 1, "{}" },
-        { 64, 1, "{\"n\": \"v\"}" },
-        { 64, 1, "{\"n1\": \"v\", \"n2\": []}" },
-        { 100, 1, "{\"n1\": \"v\", \"n2\": []}" },
-        { 100, 1, "{\"n1\": \"v\", \"n2\": [\"abc\"]}" },
-    });
-  }
-
-  /**
-   * Returns a new JSON string literal whose contents are {@code n}
-   * consecutive characters following {@code c} (cycling a-z, 0-9, A-Z).
-   */
-  private static String jss(final int n, char c) {
-    char[] cc = new char[n + 2];
-    cc[0] = cc[n + 1] = '"';
-    for (int i = 1; i < n + 1; i++) {
-      if (c == 'Z') {
-        c = 'a';
-      } else if (c == 'z') {
-        c = '0';
-      } else if (c == '9') {
-        c = 'A';
-      } else {
-        c++;
-      }
-      cc[i] = c;
-    }
-    return new String(cc);
-  }
-}
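One subtlety of the `jss()` helper above: because the cycle advances `c` before writing, the generated string starts with the character *after* `c`, wrapping `z` to `0`, `9` to `A`, and `Z` back to `a`. A standalone copy (duplicated here purely for demonstration) makes the outputs concrete.

```java
// Standalone copy of the jss() test helper above, for demonstration:
// emits a quoted JSON string of n characters that follow c in the
// cycle a..z -> 0..9 -> A..Z -> a. Note the increment happens before
// the write, so jss(3, 'a') yields "bcd" (quoted), not "abc".
public class Jss {
  static String jss(final int n, char c) {
    char[] cc = new char[n + 2];
    cc[0] = cc[n + 1] = '"';
    for (int i = 1; i < n + 1; i++) {
      if (c == 'Z') {
        c = 'a';
      } else if (c == 'z') {
        c = '0';
      } else if (c == '9') {
        c = 'A';
      } else {
        c++;
      }
      cc[i] = c;
    }
    return new String(cc);
  }
}
```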
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBlockingIO2.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBlockingIO2.java
deleted file mode 100644
index f989ddb..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestBlockingIO2.java
+++ /dev/null
@@ -1,160 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.util.Collection;
-import java.util.Arrays;
-
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-
-/**
- * This class has more exhaustive tests for Blocking IO. The reason
- * we have both TestBlockingIO and TestBlockingIO2 is that with the
- * mnemonics used in TestBlockingIO2 it is hard to test skip()
- * operations, and with the test infrastructure of TestBlockingIO
- * it is hard to test enums, unions, etc.
- */
-@RunWith(Parameterized.class)
-public class TestBlockingIO2 {
-  private final Decoder decoder;
-  private final String calls;
-  private Object[] values;
-  
-  public TestBlockingIO2 (int bufferSize, int skipLevel, String calls)
-    throws IOException {
-
-    ByteArrayOutputStream os = new ByteArrayOutputStream();
-    Encoder encoder = new BlockingBinaryEncoder(os, bufferSize);
-    this.values = TestValidatingIO.randomValues(calls);
-
-    TestValidatingIO.generate(encoder, calls, values);
-    encoder.flush();
-    
-    byte[] bb = os.toByteArray();
-    
-    decoder = DecoderFactory.defaultFactory().createBinaryDecoder(bb, null);
-    this.calls = calls;
-  }
-    
-  @Test
-  public void testScan() throws IOException {
-    TestValidatingIO.check(decoder, calls, values, -1);
-  }
-
-  @Parameterized.Parameters
-  public static Collection<Object[]> data() {
-    return Arrays.asList (new Object[][] {
-        { 64, 0, "" },
-        { 64, 0, "S0" },
-        { 64, 0, "S3" },
-        { 64, 0, "S64" },
-        { 64, 0, "S65" },
-        { 64, 0, "S100" },
-        { 64, 1, "[]" },
-        { 64, 1, "[c1sS0]" },
-        { 64, 1, "[c1sS3]" },
-        { 64, 1, "[c1sS61]" },
-        { 64, 1, "[c1sS62]" },
-        { 64, 1, "[c1sS64]" },
-        { 64, 1, "[c1sS65]" },
-        { 64, 1, "[c2sS0sS0]" },
-        { 64, 1, "[c2sS0sS10]" },
-        { 64, 1, "[c2sS0sS63]" },
-        { 64, 1, "[c2sS0sS64]" },
-        { 64, 1, "[c2sS0sS65]" },
-        { 64, 1, "[c2sS10sS0]" },
-        { 64, 1, "[c2sS10sS10]" },
-        { 64, 1, "[c2sS10sS51]" },
-        { 64, 1, "[c2sS10sS52]" },
-        { 64, 1, "[c2sS10sS54]" },
-        { 64, 1, "[c2sS10sS55]" },
-        { 64, 1, "[c3sS0sS0sS0]" },
-        { 64, 1, "[c3sS0sS0sS63]" },
-        { 64, 1, "[c3sS0sS0sS64]" },
-        { 64, 1, "[c3sS0sS0sS65]" },
-        { 64, 1, "[c3sS10sS20sS10]" },
-        { 64, 1, "[c3sS10sS20sS23]" },
-        { 64, 1, "[c3sS10sS20sS24]" },
-        { 64, 1, "[c3sS10sS20sS25]" },
-        { 64, 1, "[c1s[]]" },
-        { 64, 1, "[c1s[c1sS0]]" },
-        { 64, 1, "[c1s[c1sS10]]" },
-        { 64, 1, "[c2s[c1sS10]s[]]" },
-        { 64, 1, "[c2s[c1sS59]s[]]" },
-        { 64, 1, "[c2s[c1sS60]s[]]" },
-        { 64, 1, "[c2s[c1sS100]s[]]" },
-        { 64, 1, "[c2s[c2sS10sS53]s[]]" },
-        { 64, 1, "[c2s[c2sS10sS54]s[]]" },
-        { 64, 1, "[c2s[c2sS10sS55]s[]]" },
-
-        { 64, 1, "[c2s[]s[c1sS0]]" },
-        { 64, 1, "[c2s[]s[c1sS10]]" },
-        { 64, 1, "[c2s[]s[c1sS63]]" },
-        { 64, 1, "[c2s[]s[c1sS64]]" },
-        { 64, 1, "[c2s[]s[c1sS65]]" },
-        { 64, 1, "[c2s[]s[c2sS10sS53]]" },
-        { 64, 1, "[c2s[]s[c2sS10sS54]]" },
-        { 64, 1, "[c2s[]s[c2sS10sS55]]" },
-
-        { 64, 1, "[c1s[c1sS10]]" },
-        { 64, 1, "[c1s[c1sS62]]" },
-        { 64, 1, "[c1s[c1sS63]]" },
-        { 64, 1, "[c1s[c1sS64]]" },
-
-        { 64, 1, "[c1s[c2sS10sS10]]" },
-        { 64, 1, "[c1s[c2sS10sS52]]" },
-        { 64, 1, "[c1s[c2sS10sS53]]" },
-        { 64, 1, "[c1s[c2sS10sS54]]" },
-
-        { 64, 1, "[c1s[c1s[c1sS10]]]" },
-        { 64, 1, "[c1s[c1s[c1sS62]]]" },
-        { 64, 1, "[c1s[c1s[c1sS63]]]" },
-        { 64, 1, "[c1s[c1s[c1sS64]]]" },
-
-        { 64, 1, "[c1s[c1s[c2sS10sS10]]]" },
-        { 64, 1, "[c1s[c1s[c2sS10sS52]]]" },
-        { 64, 1, "[c1s[c1s[c2sS10sS53]]]" },
-        { 64, 1, "[c1s[c1s[c2sS10sS54]]]" },
-
-        { 64, 1, "[c1s[c2sS10s[c1sS10]]]" },
-        { 64, 1, "[c1s[c2sS10s[c1sS52]]]" },
-        { 64, 1, "[c1s[c2sS10s[c1sS53]]]" },
-        { 64, 1, "[c1s[c2sS10s[c1sS54]]]" },
-
-        { 64, 1, "{}" },
-        { 64, 1, "{c1sK5S1}" },
-        { 64, 1, "{c1sK5[]}" },
-        { 100, 1, "{c1sK5[]}" },
-        { 100, 1, "{c1sK5[c1sS10]}" },
-
-        { 100, 1, "{c1sK5e10}" },
-        { 100, 1, "{c1sK5U1S10}" },
-        { 100, 1, "{c1sK5f10S10}" },
-        { 100, 1, "{c1sK5NS10}" },
-        { 100, 1, "{c1sK5BS10}" },
-        { 100, 1, "{c1sK5IS10}" },
-        { 100, 1, "{c1sK5LS10}" },
-        { 100, 1, "{c1sK5FS10}" },
-        { 100, 1, "{c1sK5DS10}" },
-    });
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestEncoders.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestEncoders.java
deleted file mode 100644
index 8a0dd44..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestEncoders.java
+++ /dev/null
@@ -1,55 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.io.OutputStream;
-
-import org.apache.avro.Schema;
-import org.junit.Test;
-
-public class TestEncoders {
-  @Test
-  public void testBinaryEncoderInit() throws IOException {
-    OutputStream out = null;
-    new BinaryEncoder(out).init(new ByteArrayOutputStream());
-  }
-
-  @Test
-  public void testBlockingBinaryEncoderInit() throws IOException {
-    OutputStream out = null;
-    new BlockingBinaryEncoder(out).init(new ByteArrayOutputStream());
-  }
-
-  @Test
-  public void testJsonEncoderInit() throws IOException {
-    Schema s = Schema.parse("\"int\"");
-    OutputStream out = null;
-    new JsonEncoder(s, out).init(new ByteArrayOutputStream());
-  }
-
-  @Test
-  public void testValidatingEncoderInit() throws IOException {
-    Schema s = Schema.parse("\"int\"");
-    OutputStream out = null;
-    Encoder e = new BinaryEncoder(out);
-    new ValidatingEncoder(s, e).init(new ByteArrayOutputStream());
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestResolvingIO.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestResolvingIO.java
deleted file mode 100644
index 72f55b3..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestResolvingIO.java
+++ /dev/null
@@ -1,211 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import java.io.ByteArrayInputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.util.Arrays;
-import java.util.Collection;
-
-import org.apache.avro.Schema;
-import org.apache.avro.io.TestValidatingIO.Encoding;
-import org.junit.Test;
-import org.junit.runners.Parameterized;
-import org.junit.runner.RunWith;
-
-@RunWith(Parameterized.class)
-public class TestResolvingIO {
-
-  protected final Encoding eEnc;
-  protected final int iSkipL;
-  protected final String sJsWrtSchm;
-  protected final String sWrtCls;
-  protected final String sJsRdrSchm;
-  protected final String sRdrCls;
-
-  public TestResolvingIO (Encoding encoding,
-      int skipLevel, String jsonWriterSchema,
-      String writerCalls,
-      String jsonReaderSchema, String readerCalls
-  ) {
-    this.eEnc = encoding;
-    this.iSkipL = skipLevel;
-    this.sJsWrtSchm = jsonWriterSchema;
-    this.sWrtCls = writerCalls;
-    this.sJsRdrSchm = jsonReaderSchema;
-    this.sRdrCls = readerCalls;
-  }
-  
-  @Test
-  public void testIdentical() throws IOException {
-    performTest(eEnc, iSkipL, sJsWrtSchm, sWrtCls, sJsRdrSchm, sRdrCls);
-  }
-
-  private static final int COUNT = 10;
-
-  @Test
-  public void testCompatible() throws IOException {
-    performTest(eEnc, iSkipL, sJsWrtSchm, sWrtCls, sJsRdrSchm, sRdrCls);
-  }
-
-  private void performTest(Encoding encoding,
-      int skipLevel, String jsonWriterSchema,
-      String writerCalls, 
-      String jsonReaderSchema, String readerCalls)
-  throws IOException {
-    for (int i = 0; i < COUNT; i++) {
-      testOnce(jsonWriterSchema, writerCalls,
-          jsonReaderSchema, readerCalls, encoding, skipLevel);
-    }
-  }
-  
-  private void testOnce(String jsonWriterSchema,
-      String writerCalls,
-      String jsonReaderSchema,
-      String readerCalls,
-      Encoding encoding,
-      int skipLevel) throws IOException {
-    Object[] values = TestValidatingIO.randomValues(writerCalls);
-    Schema writerSchema = Schema.parse(jsonWriterSchema);
-    byte[] bytes = TestValidatingIO.make(writerSchema, writerCalls,
-        values, encoding);
-    Schema readerSchema = Schema.parse(jsonReaderSchema);
-    check(writerSchema, readerSchema, bytes, readerCalls,
-        values,
-        encoding, skipLevel);
-  }
-
-  static void check(Schema wsc, Schema rsc, byte[] bytes,
-      String calls, Object[] values, Encoding encoding,
-      int skipLevel)
-      throws IOException {
-    // TestValidatingIO.dump(bytes);
-    // System.out.println(new String(bytes, "UTF-8"));
-    Decoder bvi = null;
-    switch (encoding) {
-    case BINARY:
-    case BLOCKING_BINARY:
-      bvi = DecoderFactory.defaultFactory().createBinaryDecoder(bytes, null);
-      break;
-    case JSON:
-      InputStream in = new ByteArrayInputStream(bytes);
-      bvi = new JsonDecoder(wsc, in);
-      break;
-    }
-    Decoder vi = new ResolvingDecoder(wsc, rsc, bvi);
-    TestValidatingIO.check(vi, calls, values, skipLevel);
-  }
-  
-  @Parameterized.Parameters
-  public static Collection<Object[]> data2() {
-    return Arrays.asList(TestValidatingIO.convertTo2dArray(encodings, skipLevels, testSchemas()));
-  }
-
-  static Object[][] encodings = new Object[][] { { Encoding.BINARY },
-          { Encoding.BLOCKING_BINARY }, { Encoding.JSON } };
-  static Object[][] skipLevels =
-    new Object[][] { { -1 }, { 0 }, { 1 }, { 2 }  };
-  private static Object[][] testSchemas() {
-    // The mnemonics are the same as {@link TestValidatingIO#testSchemas}
-    return new Object[][] {
-        // { "\"int\"", "I", "\"float\"", "F" }, // makes sense?
-        { "\"int\"", "I", "\"double\"", "D" },
-        // { "\"long\"", "L", "\"float\"", "F" }, // And this?
-        { "\"long\"", "L", "\"double\"", "D" },
-        { "\"float\"", "F", "\"double\"", "D" },
-        { "\"double\"", "D", "\"long\"", "L" },
-
-        { "{\"type\":\"array\", \"items\": \"int\"}", "[]",
-          "{\"type\":\"array\", \"items\": \"long\"}", "[]", },
-        { "{\"type\":\"array\", \"items\": \"int\"}", "[]",
-          "{\"type\":\"array\", \"items\": \"double\"}", "[]" },
-        { "{\"type\":\"array\", \"items\": \"long\"}", "[]",
-          "{\"type\":\"array\", \"items\": \"double\"}", "[]" },
-        { "{\"type\":\"array\", \"items\": \"float\"}", "[]",
-          "{\"type\":\"array\", \"items\": \"double\"}", "[]" },
-
-        { "{\"type\":\"array\", \"items\": \"int\"}", "[c1sI]",
-          "{\"type\":\"array\", \"items\": \"long\"}", "[c1sL]" },
-        { "{\"type\":\"array\", \"items\": \"int\"}", "[c1sI]",
-          "{\"type\":\"array\", \"items\": \"double\"}", "[c1sD]" },
-        { "{\"type\":\"array\", \"items\": \"long\"}", "[c1sL]",
-          "{\"type\":\"array\", \"items\": \"double\"}", "[c1sD]" },
-        { "{\"type\":\"array\", \"items\": \"float\"}", "[c1sF]",
-          "{\"type\":\"array\", \"items\": \"double\"}", "[c1sD]" },
-
-        { "{\"type\":\"map\", \"values\": \"int\"}", "{}",
-          "{\"type\":\"map\", \"values\": \"long\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"int\"}", "{}",
-          "{\"type\":\"map\", \"values\": \"double\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"long\"}", "{}",
-          "{\"type\":\"map\", \"values\": \"double\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"float\"}", "{}",
-          "{\"type\":\"map\", \"values\": \"double\"}", "{}" },
-
-        { "{\"type\":\"map\", \"values\": \"int\"}", "{c1sK5I}",
-          "{\"type\":\"map\", \"values\": \"long\"}", "{c1sK5L}" },
-        { "{\"type\":\"map\", \"values\": \"int\"}", "{c1sK5I}",
-          "{\"type\":\"map\", \"values\": \"double\"}", "{c1sK5D}" },
-        { "{\"type\":\"map\", \"values\": \"long\"}", "{c1sK5L}",
-          "{\"type\":\"map\", \"values\": \"double\"}", "{c1sK5D}" },
-        { "{\"type\":\"map\", \"values\": \"float\"}", "{c1sK5F}",
-          "{\"type\":\"map\", \"values\": \"double\"}", "{c1sK5D}" },
-
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"int\"}]}", "I",
-          "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"long\"}]}", "L" },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"int\"}]}", "I",
-          "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"double\"}]}", "D" },
-
-        // multi-field record with promotions
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f0\", \"type\":\"boolean\"},"
-          + "{\"name\":\"f1\", \"type\":\"int\"},"
-          + "{\"name\":\"f2\", \"type\":\"float\"},"
-          + "{\"name\":\"f3\", \"type\":\"string\"}]}", "BIFS",
-          "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f0\", \"type\":\"boolean\"},"
-          + "{\"name\":\"f1\", \"type\":\"long\"},"
-          + "{\"name\":\"f2\", \"type\":\"double\"},"
-          + "{\"name\":\"f3\", \"type\":\"string\"}]}", "BLDS" },
-
-        { "[\"int\"]", "U0I",
-              "[\"long\"]", "U0L" },
-        { "[\"int\"]", "U0I",
-              "[\"double\"]", "U0D" },
-        { "[\"long\"]", "U0L",
-              "[\"double\"]", "U0D" },
-        { "[\"float\"]", "U0F",
-              "[\"double\"]", "U0D" },
-
-        { "\"int\"", "I", "[\"int\"]", "U0I" },
-
-        { "[\"int\"]", "U0I", "\"int\"", "I" },
-        { "[\"int\"]", "U0I", "\"long\"", "L" },
-
-        { "[\"boolean\", \"int\"]", "U1I",
-              "[\"boolean\", \"long\"]", "U1L" },
-        { "[\"boolean\", \"int\"]", "U1I",
-              "[\"long\", \"boolean\"]", "U0L" },
-    };
-  }
-}
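Most of the writer/reader pairs in `testSchemas()` above exercise Avro's numeric type promotions, which form a chain: a writer's `int` may be read as `long`, `float`, or `double`; `long` as `float` or `double`; and `float` as `double`. A minimal stdlib sketch of that rule follows; it is illustrative only and is not the real `ResolvingDecoder` logic (which also handles records, unions, enums, and defaults).

```java
import java.util.Arrays;
import java.util.List;

// Sketch of Avro's numeric promotion chain as exercised by the schema
// pairs above: a writer type is readable as itself or any type to its
// right in int -> long -> float -> double. Not the real resolver.
public class Promotion {
  private static final List<String> ORDER =
      Arrays.asList("int", "long", "float", "double");

  static boolean promotable(String writer, String reader) {
    int w = ORDER.indexOf(writer);
    int r = ORDER.indexOf(reader);
    // Equal types or a rightward move along the chain are allowed.
    return w >= 0 && r >= 0 && w <= r;
  }
}
```

This also explains the commented-out `double`-to-`long` style pairs above: moving leftward along the chain is not a promotion, so such cases are resolution errors rather than conversions.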
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestResolvingIOResolving.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestResolvingIOResolving.java
deleted file mode 100644
index 35382c5..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestResolvingIOResolving.java
+++ /dev/null
@@ -1,180 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import org.junit.Test;
-import org.junit.runners.Parameterized;
-import org.junit.runner.RunWith;
-import org.apache.avro.Schema;
-
-import java.io.IOException;
-import java.util.Collection;
-import java.util.Arrays;
-
-@RunWith(Parameterized.class)
-public class TestResolvingIOResolving {
-  protected TestValidatingIO.Encoding eEnc;
-  protected final int iSkipL;
-  protected final String sJsWrtSchm;
-  protected final String sWrtCls;
-  protected final String sJsRdrSchm;
-  protected final String sRdrCls;
-
-  protected final Object[] oaWrtVals;
-  protected final Object[] oaRdrVals;
-
-  public TestResolvingIOResolving(TestValidatingIO.Encoding encoding,
-      int skipLevel, String jsonWriterSchema,
-      String writerCalls,
-      Object[] writerValues,
-      String jsonReaderSchema, String readerCalls,
-      Object[] readerValues
-  ) {
-    this.eEnc = encoding;
-    this.iSkipL = skipLevel;
-    this.sJsWrtSchm = jsonWriterSchema;
-    this.sWrtCls = writerCalls;
-    this.oaWrtVals = writerValues;
-    this.sJsRdrSchm = jsonReaderSchema;
-    this.sRdrCls = readerCalls;
-    this.oaRdrVals = readerValues;
-  }
-
-  @Test
-  public void testResolving()
-    throws IOException {
-    Schema writerSchema = Schema.parse(sJsWrtSchm);
-    byte[] bytes = TestValidatingIO.make(writerSchema, sWrtCls,
-        oaWrtVals, TestValidatingIO.Encoding.BINARY);
-    Schema readerSchema = Schema.parse(sJsRdrSchm);
-    TestResolvingIO.check(writerSchema, readerSchema, bytes, sRdrCls,
-        oaRdrVals,
-        TestValidatingIO.Encoding.BINARY, iSkipL);
-  }
-
-  @Parameterized.Parameters
-  public static Collection<Object[]> data3() {
-    Collection<Object[]> ret = Arrays.asList(
-                TestValidatingIO.convertTo2dArray(TestResolvingIO.encodings,
-                                TestResolvingIO.skipLevels,
-        dataForResolvingTests()));
-    return ret;
-  }
-
-  private static Object[][] dataForResolvingTests() {
-    // The mnemonics are the same as {@link TestValidatingIO#testSchemas}
-    return new Object[][] {
-        // Projection
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"string\"},"
-          + "{\"name\":\"f2\", \"type\":\"string\"},"
-          + "{\"name\":\"f3\", \"type\":\"int\"}]}", "S10S10IS10S10I",
-          new Object[] { "s1", "s2", 100, "t1", "t2", 200 },
-          "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"string\" },"
-          + "{\"name\":\"f2\", \"type\":\"string\"}]}", "RS10S10RS10S10",
-          new Object[] { "s1", "s2", "t1", "t2" } },
-        // Reordered fields
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"int\"},"
-          + "{\"name\":\"f2\", \"type\":\"string\"}]}", "IS10",
-          new Object[] { 10, "hello" },
-          "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f2\", \"type\":\"string\" },"
-          + "{\"name\":\"f1\", \"type\":\"long\"}]}", "RLS10",
-          new Object[] { 10L, "hello" } },
-
-        // Default values
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":[]}", "",
-          new Object[] { },
-          "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"int\", \"default\": 100}]}", "RI",
-          new Object[] { 100 } },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-            + "{\"name\":\"f2\", \"type\":\"int\"}]}", "I",
-          new Object[] { 10 },
-          "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"int\", \"default\": 101},"
-          + "{\"name\":\"f2\", \"type\":\"int\"}]}", "RII",
-          new Object[] { 10, 101 } },
-        { "{\"type\":\"record\",\"name\":\"outer\",\"fields\":["
-            + "{\"name\": \"g1\", " +
-                        "\"type\":{\"type\":\"record\",\"name\":\"inner\",\"fields\":["
-                + "{\"name\":\"f2\", \"type\":\"int\"}]}}, "
-            + "{\"name\": \"g2\", \"type\": \"long\"}]}", "IL",
-          new Object[] { 10, 11L },
-          "{\"type\":\"record\",\"name\":\"outer\",\"fields\":["
-            + "{\"name\": \"g1\", " +
-                        "\"type\":{\"type\":\"record\",\"name\":\"inner\",\"fields\":["
-                + "{\"name\":\"f1\", \"type\":\"int\", \"default\": 101},"
-                + "{\"name\":\"f2\", \"type\":\"int\"}]}}, "
-          + "{\"name\": \"g2\", \"type\": \"long\"}]}", "RRIIL",
-          new Object[] { 10, 101, 11L } },
-        // Default value for a record.
-        { "{\"type\":\"record\",\"name\":\"outer\",\"fields\":["
-            + "{\"name\": \"g2\", \"type\": \"long\"}]}", "L",
-          new Object[] { 11L },
-          "{\"type\":\"record\",\"name\":\"outer\",\"fields\":["
-            + "{\"name\": \"g1\", " +
-                "\"type\":{\"type\":\"record\",\"name\":\"inner\",\"fields\":["
-              + "{\"name\":\"f1\", \"type\":\"int\" },"
-              + "{\"name\":\"f2\", \"type\":\"int\"}] }, "
-              + "\"default\": { \"f1\": 10, \"f2\": 101 } }, "
-            + "{\"name\": \"g2\", \"type\": \"long\"}]}", "RLRII",
-          new Object[] { 11L, 10, 101} },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":[]}", "",
-          new Object[] { },
-          "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":{ \"type\": \"array\", \"items\": \"int\" }, "
-            + "\"default\": [100]}]}", "[c1sI]",
-          new Object[] { 100 } },
-        { "{ \"type\": \"array\", \"items\": {\"type\":\"record\","
-            + "\"name\":\"r\",\"fields\":[]} }", "[c1s]",
-            new Object[] { },
-          "{ \"type\": \"array\", \"items\": {\"type\":\"record\","
-            + "\"name\":\"r\",\"fields\":["
-            + "{\"name\":\"f\", \"type\":\"int\", \"default\": 100}]} }",
-            "[c1sI]",
-          new Object[] { 100 } },
-        // Enum resolution
-        { "{\"type\":\"enum\",\"name\":\"e\",\"symbols\":[\"x\",\"y\",\"z\"]}",
-            "e2",
-            new Object[] {  },
-            "{\"type\":\"enum\",\"name\":\"e\",\"symbols\":[ \"y\", \"z\" ]}",
-            "e1",
-            new Object[] {  } },
-        { "{\"type\":\"enum\",\"name\":\"e\",\"symbols\":[ \"x\", \"y\" ]}",
-            "e1",
-            new Object[] {  },
-            "{\"type\":\"enum\",\"name\":\"e\",\"symbols\":[ \"y\", \"z\" ]}",
-            "e0",
-            new Object[] {  } },
-
-        // Union
-        { "\"int\"", "I", new Object[] { 100 },
-            "[ \"long\", \"int\"]", "U1I", new Object[] { 100 } },
-        { "[ \"long\", \"int\"]", "U1I", new Object[] { 100 } ,
-            "\"int\"", "I", new Object[] { 100 } },
-        // Union + promotion
-        { "\"int\"", "I", new Object[] { 100 },
-            "[ \"long\", \"string\"]", "U0L", new Object[] { 100L } },
-        { "[ \"int\", \"string\"]", "U0I", new Object[] { 100 },
-            "\"long\"", "L", new Object[] { 100 } },
-    };
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestValidatingIO.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestValidatingIO.java
deleted file mode 100644
index fc90d80..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/TestValidatingIO.java
+++ /dev/null
@@ -1,852 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertArrayEquals;
-import static org.junit.Assert.fail;
-import org.apache.avro.Schema;
-import org.apache.avro.util.Utf8;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Random;
-
-@RunWith(Parameterized.class)
-public class TestValidatingIO {
-  enum Encoding {
-    BINARY,
-    BLOCKING_BINARY,
-    JSON,
-  }
-
-  private Encoding eEnc;
-  private int iSkipL;
-  private String sJsSch;
-  private String sCl;
-
-  public TestValidatingIO (Encoding enc, int skip, String js, String cls) {
-    this.eEnc = enc;
-    this.iSkipL = skip;
-    this.sJsSch = js;
-    this.sCl = cls;
-  }
-  private static final int COUNT = 1;
-  
-  @Test
-  public void testMain() throws IOException {
-    for (int i = 0; i < COUNT; i++) {
-      testOnce(Schema.parse(sJsSch), sCl, iSkipL, eEnc);
-    }
-  }
-
-  private void testOnce(Schema schema, String calls,
-      int skipLevel,
-      Encoding encoding)
-    throws IOException {
-    Object[] values = randomValues(calls);
-    byte[] bytes = make(schema, calls, values, encoding);
-    check(schema, bytes, calls, values, skipLevel, encoding);
-  }
-
-  public static byte[] make(Schema sc, String calls,
-      Object[] values, Encoding encoding) throws IOException {
-    ByteArrayOutputStream ba = new ByteArrayOutputStream();
-    Encoder bvo = null;
-    switch (encoding) {
-    case BINARY:
-      bvo = new BinaryEncoder(ba);
-      break;
-    case BLOCKING_BINARY:
-      bvo = new BlockingBinaryEncoder(ba);
-      break;
-    case JSON:
-      bvo = new JsonEncoder(sc, ba);
-      break;
-    }
-        
-    Encoder vo = new ValidatingEncoder(sc, bvo);
-    generate(vo, calls, values);
-    vo.flush();
-    return ba.toByteArray();
-  }
-
-  public static class InputScanner {
-    private final char[] chars;
-    private int cpos = 0;
-    
-    public InputScanner(char[] chars) {
-      this.chars = chars;
-    }
-    
-    public boolean next() {
-      if (cpos < chars.length) {
-        cpos++;
-      }
-      return cpos != chars.length;
-    }
-    
-    public char cur() {
-      return chars[cpos];
-    }
-    
-    public boolean isDone() {
-      return cpos == chars.length;
-    }
-  }
-  public static void generate(Encoder vw, String calls,
-      Object[] values) throws IOException {
-    InputScanner cs = new InputScanner(calls.toCharArray());
-    int p = 0;
-    while (! cs.isDone()) {
-      char c = cs.cur();
-      cs.next();
-      switch (c) {
-      case 'N':
-        vw.writeNull();
-        break;
-      case 'B':
-        boolean b = (Boolean) values[p++];
-        vw.writeBoolean(b);
-        break;
-      case 'I':
-        int ii = (Integer) values[p++];
-        vw.writeInt(ii);
-        break;
-      case 'L':
-        long l = (Long) values[p++];
-        vw.writeLong(l);
-        break;
-      case 'F':
-        float f = (Float) values[p++];
-        vw.writeFloat(f);
-        break;
-      case 'D':
-        double d = (Double) values[p++];
-        vw.writeDouble(d);
-        break;
-      case 'S':
-        {
-          extractInt(cs);
-          String s = (String) values[p++];
-          vw.writeString(new Utf8(s));
-          break;
-        }
-      case 'K':
-      {
-        extractInt(cs);
-        String s = (String) values[p++];
-        vw.writeString(s);
-        break;
-      }
-      case 'b':
-        {
-          extractInt(cs);
-          byte[] bb = (byte[]) values[p++];
-          vw.writeBytes(bb);
-          break;
-        }
-      case 'f':
-        {
-          extractInt(cs);
-          byte[] bb = (byte[]) values[p++];
-          vw.writeFixed(bb);
-          break;
-        }
-      case 'e':
-        {
-          int e = extractInt(cs);
-          vw.writeEnum(e);
-          break;
-        }
-      case '[':
-        vw.writeArrayStart();
-        break;
-      case ']':
-        vw.writeArrayEnd();
-        break;
-      case '{':
-        vw.writeMapStart();
-        break;
-      case '}':
-        vw.writeMapEnd();
-        break;
-      case 'c':
-        vw.setItemCount(extractInt(cs));
-        break;
-      case 's':
-        vw.startItem();
-        break;
-      case 'U':
-        {
-          vw.writeIndex(extractInt(cs));
-          break;
-        }
-      default:
-        fail();
-        break;
-      }
-    }
-  }
-
-  public static Object[] randomValues(String calls) {
-    Random r = new Random();
-    InputScanner cs = new InputScanner(calls.toCharArray());
-    List<Object> result = new ArrayList<Object>();
-    while (! cs.isDone()) {
-      char c = cs.cur();
-      cs.next();
-      switch (c) {
-      case 'N':
-        break;
-      case 'B':
-        result.add(r.nextBoolean());
-        break;
-      case 'I':
-        result.add(r.nextInt());
-        break;
-      case 'L':
-        result.add(r.nextLong());
-        break;
-      case 'F':
-        result.add(r.nextFloat());
-        break;
-      case 'D':
-        result.add(r.nextDouble());
-        break;
-      case 'S':
-      case 'K':
-        result.add(nextString(r, extractInt(cs)));
-        break;
-      case 'b':
-      case 'f':
-        result.add(nextBytes(r, extractInt(cs)));
-        break;
-      case 'e':
-      case 'c':
-      case 'U':
-        extractInt(cs);
-        // fall through: these calls carry an argument but no random value
-      case '[':
-      case ']':
-      case '{':
-      case '}':
-      case 's':
-        break;
-      default:
-        fail();
-        break;
-      }
-    }
-    return result.toArray();
-  }
-
-  private static int extractInt(InputScanner sc) {
-    int r = 0;
-    while (! sc.isDone()) {
-      if (Character.isDigit(sc.cur())) {
-        r = r * 10 + sc.cur() - '0';
-        sc.next();
-      } else {
-        break;
-      }
-    }
-    return r;
-  }
-
-  private static byte[] nextBytes(Random r, int length) {
-    byte[] bb = new byte[length];
-    r.nextBytes(bb);
-    return bb;
-  }
-
-  private static String nextString(Random r, int length) {
-    char[] cc = new char[length];
-    for (int i = 0; i < length; i++) {
-      cc[i] = (char) ('A' + r.nextInt(26));
-    }
-    return new String(cc);
-  }
-
-  private static void check(Schema sc, byte[] bytes, String calls,
-      Object[] values, final int skipLevel, Encoding encoding)
-    throws IOException {
-    // dump(bytes);
-    // System.out.println(new String(bytes, "UTF-8"));
-    Decoder bvi = null;
-    switch (encoding) {
-    case BINARY:
-    case BLOCKING_BINARY:
-      bvi = DecoderFactory.defaultFactory().createBinaryDecoder(bytes, null);
-      break;
-    case JSON:
-      InputStream in = new ByteArrayInputStream(bytes);
-      bvi = new JsonDecoder(sc, in);
-    }
-    Decoder vi = new ValidatingDecoder(sc, bvi);
-    check(vi, calls, values, skipLevel);
-  }
-  
-  public static void check(Decoder vi, String calls,
-      Object[] values, final int skipLevel) throws IOException {
-    InputScanner cs = new InputScanner(calls.toCharArray());
-    int p = 0;
-    int level = 0;
-    long[] counts = new long[100];
-    boolean[] isArray = new boolean[100];
-    boolean[] isEmpty = new boolean[100];
-    while (! cs.isDone()) {
-      final char c = cs.cur();
-      cs.next();
-      switch (c) {
-      case 'N':
-        vi.readNull();
-        break;
-      case 'B':
-        boolean b = ((Boolean) values[p++]).booleanValue();
-        assertEquals(b, vi.readBoolean());
-        break;
-      case 'I':
-        int ii = ((Integer) values[p++]).intValue();
-        assertEquals(ii, vi.readInt());
-        break;
-      case 'L':
-        long l = longValue(values[p++]);
-        assertEquals(l, vi.readLong());
-        break;
-      case 'F':
-        float f = floatValue(values[p++]);
-        assertEquals(f, vi.readFloat(), Math.abs(f / 1000));
-        break;
-      case 'D':
-        double d = doubleValue(values[p++]);
-        assertEquals(d, vi.readDouble(), Math.abs(d / 1000));
-        break;
-      case 'S':
-        extractInt(cs);
-        if (level == skipLevel) {
-          vi.skipString();
-          p++;
-        } else {
-          String s = (String) values[p++];
-          assertEquals(new Utf8(s), vi.readString(null));
-        }
-        break;
-      case 'K':
-        extractInt(cs);
-        if (level == skipLevel) {
-          vi.skipString();
-          p++;
-        } else {
-          String s = (String) values[p++];
-          assertEquals(new Utf8(s), vi.readString(null));
-        }
-        break;
-      case 'b':
-        extractInt(cs);
-        if (level == skipLevel) {
-          vi.skipBytes();
-          p++;
-        } else {
-          byte[] bb = (byte[]) values[p++];
-          ByteBuffer bb2 = vi.readBytes(null);
-          byte[] actBytes = new byte[bb2.remaining()];
-          System.arraycopy(bb2.array(), bb2.position(), actBytes,
-              0, bb2.remaining());
-          assertArrayEquals(bb, actBytes);
-        }
-        break;
-      case 'f':
-        {
-          int len = extractInt(cs);
-          if (level == skipLevel) {
-            vi.skipFixed(len);
-            p++;
-          } else {
-            byte[] bb = (byte[]) values[p++];
-            byte[] actBytes = new byte[len];
-            vi.readFixed(actBytes);
-            assertArrayEquals(bb, actBytes);
-          }
-        }
-        break;
-      case 'e':
-      {
-        int e = extractInt(cs);
-        if (level == skipLevel) {
-          vi.readEnum();
-        } else {
-          assertEquals(e, vi.readEnum());
-        }
-      }
-      break;
-      case '[':
-        if (level == skipLevel) {
-          p += skip(cs, vi, true);
-          break;
-        } else {
-          level++;
-          counts[level] = vi.readArrayStart();
-          isArray[level] = true;
-          isEmpty[level] = counts[level] == 0;
-          continue;
-        }
-      case '{':
-        if (level == skipLevel) {
-          p += skip(cs, vi, false);
-          break;
-        } else {
-          level++;
-          counts[level] = vi.readMapStart();
-          isArray[level] = false;
-          isEmpty[level] = counts[level] == 0;
-          continue;
-        }
-      case ']':
-        assertEquals(0, counts[level]);
-        if (! isEmpty[level]) {
-          assertEquals(0, vi.arrayNext());
-        }
-        level--;
-        break;
-      case '}':
-        assertEquals(0, counts[level]);
-        if (! isEmpty[level]) {
-          assertEquals(0, vi.mapNext());
-        }
-        level--;
-        break;
-      case 's':
-        if (counts[level] == 0) {
-          if (isArray[level]) {
-            counts[level] = vi.arrayNext();
-          } else {
-            counts[level] = vi.mapNext();
-          }
-        }
-        counts[level]--;
-        continue;
-      case 'c':
-        extractInt(cs);
-        continue;
-      case 'U':
-        {
-          int idx = extractInt(cs);
-          assertEquals(idx, vi.readIndex());
-          continue;
-        }
-      case 'R':
-          ((ResolvingDecoder) vi).readFieldOrder();
-          continue;
-      default:
-        fail();
-      }
-    }
-    assertEquals(values.length, p);
-  }
-  
-  private static float floatValue(Object object) {
-    return (object instanceof Integer) ? ((Integer) object).floatValue() :
-      (object instanceof Long) ? ((Long) object).floatValue() :
-      ((Float) object).floatValue();
-  }
-
-  private static double doubleValue(Object object) {
-    return (object instanceof Double) ? ((Double) object).doubleValue() :
-      (object instanceof Float) ? ((Float) object).doubleValue() :
-      (object instanceof Long) ? ((Long) object).doubleValue() :
-      ((Integer) object).doubleValue();
-  }
-
-  private static long longValue(Object object) {
-    return (object instanceof Long) ? ((Long) object).longValue() :
-      (object instanceof Double) ? ((Double) object).longValue() :
-      ((Integer) object).longValue();
-  }
-
-  private static int skip(InputScanner cs, Decoder vi, boolean isArray)
-    throws IOException {
-    final char end = isArray ? ']' : '}';
-    if (isArray) {
-      assertEquals(0, vi.skipArray());
-    } else {
-      assertEquals(0, vi.skipMap());
-    }
-    int level = 0;
-    int p = 0;
-    while (! cs.isDone()) {
-      char c = cs.cur();
-      cs.next();
-      switch (c) {
-      case '[':
-      case '{':
-        ++level;
-        break;
-      case ']':
-      case '}':
-        if (c == end && level == 0) {
-          return p;
-        }
-        level--;
-        break;
-      case 'B':
-      case 'I':
-      case 'L':
-      case 'F':
-      case 'D':
-      case 'S':
-      case 'K':
-      case 'b':
-      case 'f':
-      case 'e':
-        p++;
-        break;
-      }
-    }
-    throw new RuntimeException("Don't know how to skip");
-  }
-
-  @Parameterized.Parameters
-  public static Collection<Object[]> data() {
-    return Arrays.asList(convertTo2dArray(encodings, skipLevels, testSchemas()));
-  }
-  
-  private static Object[][] encodings = new Object[][] {
-      { Encoding.BINARY }, { Encoding.BLOCKING_BINARY },
-      { Encoding.JSON }
-    }; 
-
-  private static Object[][] skipLevels = new Object[][] {
-      { -1 }, { 0 }, { 1 }, { 2 },
-  };
-  
-  public static Object[][] convertTo2dArray(final Object[][]... values) {
-    ArrayList<Object[]> ret = new ArrayList<Object[]>();
-
-    Iterator<Object[]> iter = cartesian(values);
-    while (iter.hasNext()) {
-      Object[] objects = iter.next();
-      ret.add(objects);
-    }
-    Object[][] retArrays = new Object[ret.size()][];
-    for (int i = 0; i < ret.size(); i++) {
-      retArrays[i] = ret.get(i);
-    }
-    return retArrays;
-  }
-  /**
-   * Returns the Cartesian product of input sequences.
-   */
-  public static Iterator<Object[]> cartesian(final Object[][]... values) {
-    return new Iterator<Object[]>() {
-      private int[] pos = new int[values.length];
-      @Override
-      public boolean hasNext() {
-        return pos[0] < values[0].length;
-      }
-
-      @Override
-      public Object[] next() {
-        Object[][] v = new Object[values.length][];
-        for (int i = 0; i < v.length; i++) {
-          v[i] = values[i][pos[i]];
-        }
-        for (int i = v.length - 1; i >= 0; i--) {
-          if (++pos[i] == values[i].length) {
-            if (i != 0) {
-              pos[i] = 0;
-            }
-          } else {
-            break;
-          }
-        }
-        return concat(v);
-      }
-
-      @Override
-      public void remove() {
-        throw new UnsupportedOperationException();
-      }
-    };
-  }
-  
-  /**
-   * Concatenates the input sequences in order and forms a longer sequence.
-   */
-  public static Object[] concat(Object[]... oo) {
-    int l = 0;
-    for (Object[] o : oo) {
-      l += o.length;
-    }
-    Object[] result = new Object[l];
-    l = 0;
-    for (Object[] o : oo) {
-      System.arraycopy(o, 0, result, l, o.length);
-      l += o.length;
-    }
-    return result;
-  }
-
-  /**
-   * Pastes incoming tables to form a wider table. All incoming tables
-   * should be of same height.
-   */
-  static Object[][] paste(Object[][]... in) {
-    Object[][] result = new Object[in[0].length][];
-    Object[][] cc = new Object[in.length][];
-    for (int i = 0; i < result.length; i++) {
-      for (int j = 0; j < cc.length; j++) {
-        cc[j] = in[j][i];
-      }
-      result[i] = concat(cc);
-    }
-    return result;
-  }
-
-  public static Object[][] testSchemas() {
-    /**
-     * The first argument is a schema.
-     * The second one is a sequence of (single character) mnemonics:
-     * N  null
-     * B  boolean
-     * I  int
-     * L  long
-     * F  float
-     * D  double
-     * K followed by integer - key-name (and its length) in a map
-     * S followed by integer - string and its length
-     * b followed by integer - bytes and length
-     * f followed by integer - fixed and length
-     * c  Number of items to follow in an array/map.
-     * U followed by integer - Union and its branch
-     * e followed by integer - Enum and its value
-     * [  Start array
-     * ]  End array
-     * {  Start map
-     * }  End map
-     * s  start item
-     */
-    return new Object[][] {
-        { "\"null\"", "N" },
-        { "\"boolean\"", "B" },
-        { "\"int\"", "I" },
-        { "\"long\"", "L" },
-        { "\"float\"", "F" },
-        { "\"double\"", "D" },
-        { "\"string\"", "S0" },
-        { "\"string\"", "S10" },
-        { "\"bytes\"", "b0" },
-        { "\"bytes\"", "b10" },
-        { "{\"type\":\"fixed\", \"name\":\"fi\", \"size\": 1}", "f1" },
-        { "{\"type\":\"fixed\", \"name\":\"fi\", \"size\": 10}", "f10" },
-        { "{\"type\":\"enum\", \"name\":\"en\", \"symbols\":[\"v1\", \"v2\"]}",
-            "e1" },
-
-        { "{\"type\":\"array\", \"items\": \"boolean\"}", "[]", },
-        { "{\"type\":\"array\", \"items\": \"int\"}", "[]", },
-        { "{\"type\":\"array\", \"items\": \"long\"}", "[]", },
-        { "{\"type\":\"array\", \"items\": \"float\"}", "[]", },
-        { "{\"type\":\"array\", \"items\": \"double\"}", "[]", },
-        { "{\"type\":\"array\", \"items\": \"string\"}", "[]", },
-        { "{\"type\":\"array\", \"items\": \"bytes\"}", "[]", },
-        { "{\"type\":\"array\", \"items\":{\"type\":\"fixed\", "
-          + "\"name\":\"fi\", \"size\": 10}}", "[]" },
-
-        { "{\"type\":\"array\", \"items\": \"boolean\"}", "[c1sB]" },
-        { "{\"type\":\"array\", \"items\": \"int\"}", "[c1sI]" },
-        { "{\"type\":\"array\", \"items\": \"long\"}", "[c1sL]" },
-        { "{\"type\":\"array\", \"items\": \"float\"}", "[c1sF]" },
-        { "{\"type\":\"array\", \"items\": \"double\"}", "[c1sD]" },
-        { "{\"type\":\"array\", \"items\": \"string\"}", "[c1sS10]" },
-        { "{\"type\":\"array\", \"items\": \"bytes\"}", "[c1sb10]" },
-        { "{\"type\":\"array\", \"items\": \"int\"}", "[c1sIc1sI]" },
-        { "{\"type\":\"array\", \"items\": \"int\"}", "[c2sIsI]" },
-        { "{\"type\":\"array\", \"items\":{\"type\":\"fixed\", "
-          + "\"name\":\"fi\", \"size\": 10}}", "[c2sf10sf10]" },
-
-        { "{\"type\":\"map\", \"values\": \"boolean\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"int\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"long\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"float\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"double\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"string\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": \"bytes\"}", "{}" },
-        { "{\"type\":\"map\", \"values\": "
-          + "{\"type\":\"array\", \"items\":\"int\"}}", "{}" },
-
-        { "{\"type\":\"map\", \"values\": \"boolean\"}", "{c1sK5B}" },
-        { "{\"type\":\"map\", \"values\": \"int\"}", "{c1sK5I}" },
-        { "{\"type\":\"map\", \"values\": \"long\"}", "{c1sK5L}" },
-        { "{\"type\":\"map\", \"values\": \"float\"}", "{c1sK5F}" },
-        { "{\"type\":\"map\", \"values\": \"double\"}", "{c1sK5D}" },
-        { "{\"type\":\"map\", \"values\": \"string\"}", "{c1sK5S10}" },
-        { "{\"type\":\"map\", \"values\": \"bytes\"}", "{c1sK5b10}" },
-        { "{\"type\":\"map\", \"values\": "
-          + "{\"type\":\"array\", \"items\":\"int\"}}", "{c1sK5[c3sIsIsI]}" },
-
-        { "{\"type\":\"map\", \"values\": \"boolean\"}",
-            "{c1sK5Bc2sK5BsK5B}" },
-
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"boolean\"}]}", "B" },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"int\"}]}", "I" },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"long\"}]}", "L" },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"float\"}]}", "F" },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"double\"}]}", "D" },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"string\"}]}", "S10" },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f\", \"type\":\"bytes\"}]}", "b10" },
-
-        // multi-field records
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"int\"},"
-          + "{\"name\":\"f2\", \"type\":\"double\"},"
-          + "{\"name\":\"f3\", \"type\":\"string\"}]}", "IDS10" },
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f0\", \"type\":\"null\"},"
-          + "{\"name\":\"f1\", \"type\":\"boolean\"},"
-          + "{\"name\":\"f2\", \"type\":\"int\"},"
-          + "{\"name\":\"f3\", \"type\":\"long\"},"
-          + "{\"name\":\"f4\", \"type\":\"float\"},"
-          + "{\"name\":\"f5\", \"type\":\"double\"},"
-          + "{\"name\":\"f6\", \"type\":\"string\"},"
-          + "{\"name\":\"f7\", \"type\":\"bytes\"}]}",
-            "NBILFDS10b25" },
-        
-        // record of records
-        { "{\"type\":\"record\",\"name\":\"outer\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":{\"type\":\"record\", "
-          + "\"name\":\"inner\", \"fields\":["
-          + "{\"name\":\"g1\", \"type\":\"int\"}, {\"name\":\"g2\", "
-          + "\"type\":\"double\"}]}},"
-          + "{\"name\":\"f2\", \"type\":\"string\"},"
-          + "{\"name\":\"f3\", \"type\":\"inner\"}]}",
-          "IDS10ID" },
-        // record with array
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"long\"},"
-          + "{\"name\":\"f2\", "
-          + "\"type\":{\"type\":\"array\", \"items\":\"int\"}}]}",
-          "L[c1sI]" },
-
-        // record with map
-        { "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"long\"},"
-          + "{\"name\":\"f2\", "
-          + "\"type\":{\"type\":\"map\", \"values\":\"int\"}}]}",
-          "L{c1sK5I}" },
-
-        // array of records
-        { "{\"type\":\"array\", \"items\":"
-            + "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"long\"},"
-          + "{\"name\":\"f2\", \"type\":\"null\"}]}}",
-            "[c2sLNsLN]" },
-
-        { "{\"type\":\"array\", \"items\":"
-            + "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"long\"},"
-          + "{\"name\":\"f2\", "
-          + "\"type\":{\"type\":\"array\", \"items\":\"int\"}}]}}",
-            "[c2sL[c1sI]sL[c2sIsI]]" },
-        { "{\"type\":\"array\", \"items\":"
-            + "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"long\"},"
-          + "{\"name\":\"f2\", "
-          + "\"type\":{\"type\":\"map\", \"values\":\"int\"}}]}}",
-            "[c2sL{c1sK5I}sL{c2sK5IsK5I}]" },
-        { "{\"type\":\"array\", \"items\":"
-            + "{\"type\":\"record\",\"name\":\"r\",\"fields\":["
-          + "{\"name\":\"f1\", \"type\":\"long\"},"
-          + "{\"name\":\"f2\", "
-          + "\"type\":[\"null\", \"int\"]}]}}",
-            "[c2sLU0NsLU1I]" },
-
-        { "[\"boolean\"]", "U0B" },
-        { "[\"int\"]", "U0I" },
-        { "[\"long\"]", "U0L" },
-        { "[\"float\"]", "U0F" },
-        { "[\"double\"]", "U0D" },
-        { "[\"string\"]", "U0S10" },
-        { "[\"bytes\"]", "U0b10" },
-
-        { "[\"null\", \"int\"]", "U0N" },
-        { "[\"boolean\", \"int\"]", "U0B" },
-        { "[\"boolean\", \"int\"]", "U1I" },
-        { "[\"boolean\", {\"type\":\"array\", \"items\":\"int\"} ]",
-          "U0B" },
-
-        { "[\"boolean\", {\"type\":\"array\", \"items\":\"int\"} ]",
-            "U1[c1sI]" },
-          
-        // Recursion
-        { "{\"type\": \"record\", \"name\": \"Node\", \"fields\": ["
-          + "{\"name\":\"label\", \"type\":\"string\"},"
-          + "{\"name\":\"children\", \"type\":"
-          + "{\"type\": \"array\", \"items\": \"Node\" }}]}",
-          "S10[c1sS10[]]" },
-          
-        { "{\"type\": \"record\", \"name\": \"Lisp\", \"fields\": ["
-          + "{\"name\":\"value\", \"type\":[\"null\", \"string\","
-          + "{\"type\": \"record\", \"name\": \"Cons\", \"fields\": ["
-          + "{\"name\":\"car\", \"type\":\"Lisp\"},"
-          + "{\"name\":\"cdr\", \"type\":\"Lisp\"}]}]}]}",
-          "U0N"},
-        { "{\"type\": \"record\", \"name\": \"Lisp\", \"fields\": ["
-          + "{\"name\":\"value\", \"type\":[\"null\", \"string\","
-          + "{\"type\": \"record\", \"name\": \"Cons\", \"fields\": ["
-          + "{\"name\":\"car\", \"type\":\"Lisp\"},"
-          + "{\"name\":\"cdr\", \"type\":\"Lisp\"}]}]}]}",
-          "U1S10"},
-        { "{\"type\": \"record\", \"name\": \"Lisp\", \"fields\": ["
-          + "{\"name\":\"value\", \"type\":[\"null\", \"string\","
-          + "{\"type\": \"record\", \"name\": \"Cons\", \"fields\": ["
-          + "{\"name\":\"car\", \"type\":\"Lisp\"},"
-          + "{\"name\":\"cdr\", \"type\":\"Lisp\"}]}]}]}",
-          "U2U1S10U0N"},
-    };
-  }
-  
-  static void dump(byte[] bb) {
-    int col = 0;
-    for (byte b : bb) {
-      if (col % 16 == 0) {
-        System.out.println();
-      }
-      col++;
-      System.out.print(Integer.toHexString(b & 0xff) + " ");
-    }
-    System.out.println();
-  }
-
-}
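The deleted TestValidatingIO above drives every Encoder/Decoder operation from a compact call string (for example `"[c2sIsI]"` means: start array, count 2, item, int, item, int, end array), where the letters `S`, `K`, `b`, `f`, `e`, `c` and `U` carry a trailing decimal argument parsed by `extractInt`. A minimal standalone sketch of that tokenization, with class and method names invented for illustration (they are not part of Avro):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative re-implementation of the call-string scanning used by the
 * deleted TestValidatingIO: one character selects an operation, and some
 * operations take a decimal suffix (length, count, enum value, union branch).
 */
public class CallStringDemo {
  /** Splits "[c2sIsI]" into ["[", "c2", "s", "I", "s", "I", "]"]. */
  public static List<String> tokenize(String calls) {
    List<String> ops = new ArrayList<String>();
    int i = 0;
    while (i < calls.length()) {
      char c = calls.charAt(i++);
      // S, K, b, f, e, c and U are followed by a decimal argument.
      if ("SKbfecU".indexOf(c) >= 0) {
        int start = i;
        while (i < calls.length() && Character.isDigit(calls.charAt(i))) {
          i++;
        }
        ops.add(c + calls.substring(start, i));
      } else {
        ops.add(String.valueOf(c));
      }
    }
    return ops;
  }

  public static void main(String[] args) {
    System.out.println(tokenize("[c2sIsI]"));   // [[, c2, s, I, s, I, ]]
  }
}
```

Grouping the argument with its operation up front is what lets `generate` and `check` in the test stay simple one-character switches.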
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/parsing/TestResolvingGrammarGenerator.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/io/parsing/TestResolvingGrammarGenerator.java
deleted file mode 100644
index a6a7b03..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/io/parsing/TestResolvingGrammarGenerator.java
+++ /dev/null
@@ -1,93 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.io.parsing;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.io.StringReader;
-import java.util.Arrays;
-import java.util.Collection;
-
-import org.apache.avro.Schema;
-import org.apache.avro.io.BinaryEncoder;
-import org.apache.avro.io.Encoder;
-import org.apache.avro.io.ValidatingEncoder;
-import org.codehaus.jackson.JsonFactory;
-import org.codehaus.jackson.JsonNode;
-import org.codehaus.jackson.map.ObjectMapper;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-
-@RunWith(Parameterized.class)
-public class TestResolvingGrammarGenerator {
-  private final Schema schema;
-  private final JsonNode data;
-  
-  public TestResolvingGrammarGenerator(String jsonSchema, String jsonData)
-    throws IOException {
-    this.schema = Schema.parse(jsonSchema);
-    JsonFactory factory = new JsonFactory();
-    ObjectMapper mapper = new ObjectMapper(factory);
-
-    this.data = mapper.readTree(new StringReader(jsonData));
-  }
-
-  @Test
-  public void test() throws IOException {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    Encoder e = new ValidatingEncoder(schema, new BinaryEncoder(baos));
-    
-    ResolvingGrammarGenerator.encode(e, schema, data);
-  }
-  
-  @Parameterized.Parameters
-  public static Collection<Object[]> data() {
-    Collection<Object[]> ret = Arrays.asList(
-        new Object[][] {
-            { "{ \"type\": \"record\", \"name\": \"r\", \"fields\": [ "
-                + " { \"name\" : \"f1\", \"type\": \"int\" }, "
-                + " { \"name\" : \"f2\", \"type\": \"float\" } "
-                + "] } }",
-              "{ \"f2\": 10.4, \"f1\": 10 } " },
-            { "{ \"type\": \"enum\", \"name\": \"e\", \"symbols\": "
-                + "[ \"s1\", \"s2\"] }", " \"s1\" " },
-            { "{ \"type\": \"enum\", \"name\": \"e\", \"symbols\": "
-                + "[ \"s1\", \"s2\"] }", " \"s2\" " },
-            { "{ \"type\": \"fixed\", \"name\": \"f\", \"size\": 10 }",
-              "\"hello\"" },
-            { "{ \"type\": \"array\", \"items\": \"int\" }",
-              "[ 10, 20, 30 ]" },
-            { "{ \"type\": \"map\", \"values\": \"int\" }",
-              "{ \"k1\": 10, \"k2\": 20, \"k3\": 30 }" },
-            { "[ \"int\", \"long\" ]", "10" },
-            { "\"string\"", "\"hello\"" },
-            { "\"bytes\"", "\"hello\"" },
-            { "\"int\"", "10" },
-            { "\"long\"", "10" },
-            { "\"float\"", "10.0" },
-            { "\"double\"", "10.0" },
-            { "\"boolean\"", "true" },
-            { "\"boolean\"", "false" },
-            { "\"null\"", "null" },
-            }
-        );
-    return ret;
-  }
-
-}
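Both deleted test classes use JUnit's Parameterized runner; TestValidatingIO additionally builds its parameter matrix by crossing encodings, skip levels and schemas with an odometer-style `cartesian()` iterator. The carry logic can be sketched standalone (class and variable names here are illustrative, not Avro API):

```java
import java.util.ArrayList;
import java.util.List;

/** Odometer-style cartesian product over several axes of choice rows. */
public class CartesianDemo {
  public static List<Object[]> cartesian(Object[][]... axes) {
    List<Object[]> rows = new ArrayList<Object[]>();
    int[] pos = new int[axes.length];
    while (pos[0] < axes[0].length) {
      // Concatenate the currently selected row from each axis.
      List<Object> row = new ArrayList<Object>();
      for (int i = 0; i < axes.length; i++) {
        for (Object o : axes[i][pos[i]]) {
          row.add(o);
        }
      }
      rows.add(row.toArray());
      // Advance the rightmost index; carry leftwards on overflow.
      for (int i = axes.length - 1; i >= 0; i--) {
        if (++pos[i] < axes[i].length) {
          break;
        }
        if (i != 0) {
          pos[i] = 0;   // reset and carry; pos[0] overflowing ends the loop
        }
      }
    }
    return rows;
  }

  public static void main(String[] args) {
    Object[][] enc = { { "BINARY" }, { "JSON" } };
    Object[][] skip = { { -1 }, { 0 } };
    System.out.println(cartesian(enc, skip).size());   // 4 combinations
  }
}
```

With 3 encodings, 4 skip levels and roughly 80 schemas, this is how the original test expands to nearly a thousand parameterized cases from three small tables.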
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/TestLocalTransceiver.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/TestLocalTransceiver.java
deleted file mode 100644
index a2adc6c..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/TestLocalTransceiver.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc;
-
-import static org.junit.Assert.assertEquals;
-
-import java.io.IOException;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.generic.GenericData;
-import org.apache.avro.generic.GenericRecord;
-import org.apache.avro.generic.GenericRequestor;
-import org.apache.avro.generic.GenericResponder;
-import org.apache.avro.util.Utf8;
-import org.junit.Test;
-
-public class TestLocalTransceiver {
-
-  Protocol protocol = Protocol.parse("" + "{\"protocol\": \"Minimal\", "
-      + "\"messages\": { \"m\": {"
-      + "   \"request\": [{\"name\": \"x\", \"type\": \"string\"}], "
-      + "   \"response\": \"string\"} } }");
-
-  static class TestResponder extends GenericResponder {
-    public TestResponder(Protocol local) {
-      super(local);
-    }
-
-    @Override
-    public Object respond(Message message, Object request)
-        throws AvroRemoteException {
-      assertEquals(new Utf8("hello"), ((GenericRecord) request).get("x"));
-      return new Utf8("there");
-    }
-
-  }
-
-  @Test
-  public void testSingleRpc() throws IOException {
-    Transceiver t = new LocalTransceiver(new TestResponder(protocol));
-    GenericRecord params = new GenericData.Record(protocol.getMessages().get(
-        "m").getRequest());
-    params.put("x", new Utf8("hello"));
-    GenericRequestor r = new GenericRequestor(protocol, t);
-    assertEquals(new Utf8("there"), r.request("m", params));
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/FakeTicks.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/FakeTicks.java
deleted file mode 100644
index 442890e..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/FakeTicks.java
+++ /dev/null
@@ -1,35 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-import org.apache.avro.ipc.stats.Stopwatch.Ticks;
-
-/** Implements Ticks with manual time-winding. */
-class FakeTicks implements Ticks {
-  long time = 0;
-
-  @Override
-  public long ticks() {
-    return time;
-  }
-
-  public void passTime(long nanos) {
-    time += nanos;
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/StatsPluginOverhead.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/StatsPluginOverhead.java
deleted file mode 100644
index 702016f..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/StatsPluginOverhead.java
+++ /dev/null
@@ -1,103 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-import java.io.IOException;
-import java.net.URL;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.generic.GenericRequestor;
-import org.apache.avro.generic.GenericResponder;
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.HttpServer;
-import org.apache.avro.ipc.HttpTransceiver;
-import org.apache.avro.ipc.Responder;
-import org.apache.avro.ipc.Transceiver;
-
-/**
- * Naively measures overhead of using the stats plugin.
- *
- * The API used is the generic one.
- * The protocol is the "null" protocol: null is sent
- * and returned.
- */
-public class StatsPluginOverhead {
-  /** Number of RPCs per iteration. */
-  private static final int COUNT = 100000;
-  private static final Protocol NULL_PROTOCOL = Protocol.parse(
-      "{\"protocol\": \"null\", "
-      + "\"messages\": { \"null\": {"
-      + "   \"request\": [], "
-      + "   \"response\": \"null\"} } }");
-
-  private static class IdentityResponder extends GenericResponder {
-    public IdentityResponder(Protocol local) {
-      super(local);
-    }
-
-    @Override
-    public Object respond(Message message, Object request)
-        throws AvroRemoteException {
-      return request;
-    }
-  }
-
-  public static void main(String[] args) throws Exception {
-    double with = sendRpcs(true)/1000000000.0;
-    double without = sendRpcs(false)/1000000000.0;
-
-    System.out.println(String.format(
-        "Overhead: %f%%.  RPC/s: %f (with) vs %f (without).  " +
-        "RPC time (ms): %f vs %f",
-        100*(with - without)/(without),
-        COUNT/with,
-        COUNT/without,
-        1000*with/COUNT,
-        1000*without/COUNT));
-  }
-
-  /** Sends RPCs and returns nanos elapsed. */
-  private static long sendRpcs(boolean withPlugin) throws IOException {
-    HttpServer server = createServer(withPlugin);
-    Transceiver t =
-      new HttpTransceiver(new URL("http://127.0.0.1:"+server.getPort()+"/"));
-    GenericRequestor requestor = new GenericRequestor(NULL_PROTOCOL, t);
-
-    long now = System.nanoTime();
-    for (int i = 0; i < COUNT; ++i) {
-      requestor.request("null", null);
-    }
-    long elapsed = System.nanoTime() - now;
-    t.close();
-    server.close();
-    return elapsed;
-  }
-
-  /** Starts an Avro server. */
-  private static HttpServer createServer(boolean withPlugin)
-      throws IOException {
-    Responder r = new IdentityResponder(NULL_PROTOCOL);
-    if (withPlugin) {
-      r.addRPCPlugin(new StatsPlugin());
-    }
-    // Start Avro server
-    return new HttpServer(r, 0);
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestHistogram.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestHistogram.java
deleted file mode 100644
index 52a36d1..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestHistogram.java
+++ /dev/null
@@ -1,94 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-import static org.junit.Assert.assertArrayEquals;
-import static org.junit.Assert.assertEquals;
-
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Iterator;
-import java.util.List;
-import java.util.TreeSet;
-
-import org.apache.avro.ipc.stats.Histogram.Entry;
-import org.apache.avro.ipc.stats.Histogram.Segmenter;
-import org.junit.Test;
-
-public class TestHistogram {
-
-  @Test
-  public void testBasicOperation() {
-    Segmenter<String, Integer> s = new Histogram.TreeMapSegmenter<Integer>(
-        new TreeSet<Integer>(Arrays.asList(0, 1, 2, 4, 8, 16)));
-
-    Histogram<String, Integer> h = new Histogram<String, Integer>(s);
-
-    for(int i = 0; i < 20; ++i) {
-      h.add(i);
-    }
-    assertEquals(20, h.getCount());
-    assertArrayEquals(new int[] { 1, 1, 2, 4, 8, 4 }, h.getHistogram());
-
-    assertEquals("[0,1)=1;[1,2)=1;[2,4)=2;[4,8)=4;[8,16)=8;[16,infinity)=4", h.toString());
-
-    List<Entry<String>> entries = new ArrayList<Entry<String>>();
-    for (Entry<String> entry : h.entries()) {
-      entries.add(entry);
-    }
-    assertEquals("[0,1)", entries.get(0).bucket);
-    assertEquals(4, entries.get(5).count);
-    assertEquals(6, entries.size());
-  }
-
-  @Test(expected=Histogram.SegmenterException.class)
-  public void testBadValue() {
-    Segmenter<String, Long> s = new Histogram.TreeMapSegmenter<Long>(
-        new TreeSet<Long>(Arrays.asList(0L, 1L, 2L, 4L, 8L, 16L)));
-
-    Histogram<String, Long> h = new Histogram<String, Long>(s);
-    h.add(-1L);
-  }
-
-  /** Only has one bucket */
-  static class SingleBucketSegmenter implements Segmenter<String, Float >{
-    @Override
-    public Iterator<String> getBuckets() {
-      return Arrays.asList("X").iterator();
-    }
-
-    @Override
-    public int segment(Float value) { return 0; }
-
-    @Override
-    public int size() { return 1; }
-  }
-
-  @Test
-  public void testFloatHistogram() {
-    FloatHistogram<String> h = new FloatHistogram<String>(new SingleBucketSegmenter());
-    h.add(12.0f);
-    h.add(10.0f);
-    h.add(20.0f);
-
-    assertEquals(3, h.getCount());
-    assertEquals(14.0f, h.getMean(), 0.0001);
-    assertEquals(5.291f, h.getUnbiasedStdDev(), 0.001);
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestStatsPluginAndServlet.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestStatsPluginAndServlet.java
deleted file mode 100644
index d954a6e..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestStatsPluginAndServlet.java
+++ /dev/null
@@ -1,180 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import java.io.IOException;
-import java.io.StringWriter;
-
-import org.apache.avro.Protocol;
-import org.apache.avro.Protocol.Message;
-import org.apache.avro.generic.GenericData;
-import org.apache.avro.generic.GenericRecord;
-import org.apache.avro.generic.GenericRequestor;
-import org.apache.avro.generic.GenericResponder;
-import org.apache.avro.ipc.AvroRemoteException;
-import org.apache.avro.ipc.HttpServer;
-import org.apache.avro.ipc.LocalTransceiver;
-import org.apache.avro.ipc.RPCContext;
-import org.apache.avro.ipc.Responder;
-import org.apache.avro.ipc.Transceiver;
-import org.junit.Test;
-import org.mortbay.jetty.Server;
-import org.mortbay.jetty.servlet.Context;
-import org.mortbay.jetty.servlet.ServletHolder;
-import org.mortbay.log.Log;
-
-public class TestStatsPluginAndServlet {
-  Protocol protocol = Protocol.parse("" + "{\"protocol\": \"Minimal\", "
-      + "\"messages\": { \"m\": {"
-      + "   \"request\": [{\"name\": \"x\", \"type\": \"int\"}], "
-      + "   \"response\": \"int\"} } }");
-  Message message = protocol.getMessages().get("m");
-
-  private static final long MS = 1000*1000L;
-
-  /** Returns an HTML string. */
-  private String generateServletResponse(StatsPlugin statsPlugin)
-      throws IOException {
-    StatsServlet servlet = new StatsServlet(statsPlugin);
-    StringWriter w = new StringWriter();
-    servlet.writeStats(w);
-    String o = w.toString();
-    return o;
-  }
-
-  /** Expects 0 and returns 1. */
-  static class TestResponder extends GenericResponder {
-    public TestResponder(Protocol local) {
-      super(local);
-    }
-
-    @Override
-    public Object respond(Message message, Object request)
-        throws AvroRemoteException {
-      assertEquals(0, ((GenericRecord) request).get("x"));
-      return 1;
-    }
-
-  }
-
-  private void makeRequest(Transceiver t) throws IOException {
-    GenericRecord params = new GenericData.Record(protocol.getMessages().get(
-        "m").getRequest());
-    params.put("x", 0);
-    GenericRequestor r = new GenericRequestor(protocol, t);
-    assertEquals(1, r.request("m", params));
-  }
-
-  @Test
-  public void testFullServerPath() throws IOException {
-    Responder r = new TestResponder(protocol);
-    StatsPlugin statsPlugin = new StatsPlugin();
-    r.addRPCPlugin(statsPlugin);
-    Transceiver t = new LocalTransceiver(r);
-
-    for (int i = 0; i < 10; ++i) {
-      makeRequest(t);
-    }
-
-    String o = generateServletResponse(statsPlugin);
-    assertTrue(o.contains("Number of calls: 10"));
-  }
-
-  @Test
-  public void testMultipleRPCs() throws IOException {
-    FakeTicks t = new FakeTicks();
-    StatsPlugin statsPlugin = new StatsPlugin(t, StatsPlugin.DEFAULT_SEGMENTER);
-    RPCContext context1 = makeContext();
-    RPCContext context2 = makeContext();
-    statsPlugin.serverReceiveRequest(context1);
-    t.passTime(100*MS); // first takes 100ms
-    statsPlugin.serverReceiveRequest(context2);
-    String r = generateServletResponse(statsPlugin);
-    // Check in progress RPCs
-    assertTrue(r.contains("m: 0ms"));
-    assertTrue(r.contains("m: 100ms"));
-    statsPlugin.serverSendResponse(context1);
-    t.passTime(900*MS); // second takes 900ms
-    statsPlugin.serverSendResponse(context2);
-
-    r = generateServletResponse(statsPlugin);
-    assertTrue(r.contains("Average Duration: 500ms"));
-  }
-
-  private RPCContext makeContext() {
-    RPCContext context = new RPCContext();
-    context.setMessage(message);
-    return context;
-  }
-
-  /** Sleeps as requested. */
-  private static class SleepyResponder extends GenericResponder {
-    public SleepyResponder(Protocol local) {
-      super(local);
-    }
-
-    @Override
-    public Object respond(Message message, Object request)
-        throws AvroRemoteException {
-      try {
-        Thread.sleep((Long)((GenericRecord)request).get("millis"));
-      } catch (InterruptedException e) {
-        throw new AvroRemoteException(e);
-      }
-      return null;
-    }
-  }
-
-  /**
-   * Demo program for using RPC stats.  Tool can be used (as below)
-   * to trigger RPCs.
-   * <pre>
-   * java -jar build/avro-tools-*.jar rpcsend '{"protocol":"sleepy","namespace":null,"types":[],"messages":{"sleep":{"request":[{"name":"millis","type":"long"}],"response":"null"}}}' sleep localhost 7002 '{"millis": 20000}'
-   * </pre>
-   * @param args
-   * @throws Exception
-   */
-  public static void main(String[] args) throws Exception {
-    if (args.length == 0) {
-      args = new String[] { "7002", "7003" };
-    }
-    Protocol protocol = Protocol.parse("{\"protocol\": \"sleepy\", "
-        + "\"messages\": { \"sleep\": {"
-        + "   \"request\": [{\"name\": \"millis\", \"type\": \"long\"}], "
-        + "   \"response\": \"null\"} } }");
-    Log.info("Using protocol: " + protocol.toString());
-    Responder r = new SleepyResponder(protocol);
-    StatsPlugin p = new StatsPlugin();
-    r.addRPCPlugin(p);
-
-    // Start Avro server
-    new HttpServer(r, Integer.parseInt(args[0]));
-
-    // Ideally we could use the same Jetty server
-    Server httpServer = new Server(Integer.parseInt(args[1]));
-    new Context(httpServer, "/").addServlet(
-        new ServletHolder(new StatsServlet(p)), "/*");
-    httpServer.start();
-    while(true) {
-      Thread.sleep(60*1000);
-    }
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestStopwatch.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestStopwatch.java
deleted file mode 100644
index 62f1580..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/ipc/stats/TestStopwatch.java
+++ /dev/null
@@ -1,80 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.ipc.stats;
-
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import org.junit.Test;
-
-public class TestStopwatch {
-  @Test
-  public void testNormal() {
-    FakeTicks f = new FakeTicks();
-    Stopwatch s = new Stopwatch(f);
-    f.passTime(10);
-    s.start();
-    f.passTime(20);
-    assertEquals(20, s.elapsedNanos());
-    f.passTime(40);
-    s.stop();
-    f.passTime(80);
-    assertEquals(60, s.elapsedNanos());
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testNotStarted1() {
-    FakeTicks f = new FakeTicks();
-    Stopwatch s = new Stopwatch(f);
-    s.elapsedNanos();
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testNotStarted2() {
-    FakeTicks f = new FakeTicks();
-    Stopwatch s = new Stopwatch(f);
-    s.stop();
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testTwiceStarted() {
-    FakeTicks f = new FakeTicks();
-    Stopwatch s = new Stopwatch(f);
-    s.start();
-    s.start();
-  }
-
-  @Test(expected=IllegalStateException.class)
-  public void testTwiceStopped() {
-    FakeTicks f = new FakeTicks();
-    Stopwatch s = new Stopwatch(f);
-    s.start();
-    s.stop();
-    s.stop();
-  }
-
-  @Test
-  public void testSystemStopwatch() {
-    Stopwatch s = new Stopwatch(Stopwatch.SYSTEM_TICKS);
-    s.start();
-    s.stop();
-    assertTrue(s.elapsedNanos() >= 0);
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/specific/TestSpecificCompiler.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/specific/TestSpecificCompiler.java
deleted file mode 100644
index 91ebaf1..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/specific/TestSpecificCompiler.java
+++ /dev/null
@@ -1,241 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Iterator;
-import java.util.List;
-
-import javax.tools.JavaCompiler;
-import javax.tools.StandardJavaFileManager;
-import javax.tools.ToolProvider;
-import javax.tools.JavaCompiler.CompilationTask;
-
-import org.apache.avro.AvroTestUtil;
-import org.apache.avro.Protocol;
-import org.apache.avro.Schema;
-import org.apache.avro.TestProtocolParsing;
-import org.apache.avro.TestSchema;
-import org.apache.avro.specific.SpecificCompiler.OutputFile;
-import org.junit.Test;
-
-
-public class TestSpecificCompiler {
-  @Test
-  public void testEsc() {
-    assertEquals("\\\"", SpecificCompiler.esc("\""));
-  }
-
-  @Test
-  public void testMakePath() {
-    assertEquals("foo/bar/Baz.java".replace("/", File.separator), SpecificCompiler.makePath("Baz", "foo.bar"));
-    assertEquals("baz.java", SpecificCompiler.makePath("baz", ""));
-  }
-
-  @Test
-  public void testPrimitiveSchemaGeneratesNothing() {
-    assertEquals(0, new SpecificCompiler(Schema.parse("\"double\"")).compile().size());
-  }
-
-  @Test
-  public void testSimpleEnumSchema() throws IOException {
-    Collection<OutputFile> outputs = new SpecificCompiler(Schema.parse(TestSchema.BASIC_ENUM_SCHEMA)).compile();
-    assertEquals(1, outputs.size());
-    OutputFile o = outputs.iterator().next();
-    assertEquals(o.path, "Test.java");
-    assertTrue(o.contents.contains("public enum Test"));
-    assertCompilesWithJavaCompiler(outputs);
-  }
-
-  @Test
-  public void testMangleIfReserved() {
-    assertEquals("foo", SpecificCompiler.mangle("foo"));
-    assertEquals("goto$", SpecificCompiler.mangle("goto"));
-  }
-
-  @Test
-  public void testManglingForProtocols() throws IOException {
-    String protocolDef = "" +
-      "{ \"protocol\": \"default\",\n" +
-      "  \"types\":\n" +
-      "    [\n" +
-      "      {\n" +
-      "       \"name\": \"finally\",\n" +
-      "       \"type\": \"error\",\n" +
-      "       \"fields\": [{\"name\": \"catch\", \"type\": \"boolean\"}]\n" +
-      "      }\n" +
-      "    ],\n" +
-      "  \"messages\": { \"goto\":\n" +
-      "    { \"request\": [{\"name\": \"break\", \"type\": \"string\"}],\n" +
-      "      \"response\": \"string\",\n" +
-      "      \"errors\": [\"finally\"]\n" +
-      "    }" +
-      "   }\n" +
-      "}\n";
-    Collection<OutputFile> c =
-      new SpecificCompiler(Protocol.parse(protocolDef)).compile();
-    Iterator<OutputFile> i = c.iterator();
-    String errType = i.next().contents;
-    String protocol = i.next().contents;
-
-    assertTrue(errType.contains("public class finally$ extends org.apache.avro.specific.SpecificExceptionBase"));
-    assertTrue(errType.contains("public boolean catch$;"));
-
-    assertTrue(protocol.contains("org.apache.avro.util.Utf8 goto$(org.apache.avro.util.Utf8 break$)"));
-    assertTrue(protocol.contains("public interface default$"));
-    assertTrue(protocol.contains("throws org.apache.avro.ipc.AvroRemoteException, finally$"));
-    
-    assertCompilesWithJavaCompiler(c);
-
-  }
-
-  @Test
-  public void testManglingForRecords() throws IOException {
-    String schema = "" +
-      "{ \"name\": \"volatile\", \"type\": \"record\", " +
-      "  \"fields\": [ {\"name\": \"package\", \"type\": \"string\" }," +
-      "                {\"name\": \"short\", \"type\": \"volatile\" } ] }";
-    Collection<OutputFile> c =
-      new SpecificCompiler(Schema.parse(schema)).compile();
-    assertEquals(1, c.size());
-    String contents = c.iterator().next().contents;
-
-    assertTrue(contents.contains("public org.apache.avro.util.Utf8 package$;"));
-    assertTrue(contents.contains("class volatile$ extends"));
-    assertTrue(contents.contains("volatile$ short$;"));
-    
-    assertCompilesWithJavaCompiler(c);
-  }
-
-  @Test
-  public void testManglingForEnums() throws IOException {
-    String enumSchema = "" +
-      "{ \"name\": \"instanceof\", \"type\": \"enum\"," +
-      "  \"symbols\": [\"new\", \"super\", \"switch\"] }";
-    Collection<OutputFile> c =
-      new SpecificCompiler(Schema.parse(enumSchema)).compile();
-    assertEquals(1, c.size());
-    String contents = c.iterator().next().contents;
-
-    assertTrue(contents.contains("new$"));
-    
-    assertCompilesWithJavaCompiler(c);
-  }
-
-  @Test
-  public void testSchemaWithDocs() {
-    Collection<OutputFile> outputs = new SpecificCompiler(
-        Schema.parse(TestSchema.SCHEMA_WITH_DOC_TAGS)).compile();
-    assertEquals(3, outputs.size());
-    int count = 0;
-    for (OutputFile o : outputs) {
-      if (o.path.endsWith("outer_record.java")) {
-        count++;
-        assertTrue(o.contents.contains("/** This is not a world record. */"));
-        assertTrue(o.contents.contains("/** Inner Fixed */"));
-        assertTrue(o.contents.contains("/** Inner Enum */"));
-        assertTrue(o.contents.contains("/** Inner String */"));
-      }
-      if (o.path.endsWith("very_inner_fixed.java")) {
-        count++;
-        assertTrue(o.contents.contains("/** Very Inner Fixed */"));
-        assertTrue(o.contents.contains("@org.apache.avro.specific.FixedSize(1)"));
-      }
-      if (o.path.endsWith("very_inner_enum.java")) {
-        count++;
-        assertTrue(o.contents.contains("/** Very Inner Enum */"));
-      }
-    }
- 
-    assertEquals(3, count);
-  }
-  
-  @Test
-  public void testProtocolWithDocs() throws IOException {
-    Protocol protocol = TestProtocolParsing.getSimpleProtocol();
-    Collection<OutputFile> out = new SpecificCompiler(protocol).compile();
-    assertEquals(5, out.size());
-    int count = 0;
-    for (OutputFile o : out) {
-      if (o.path.endsWith("Simple.java")) {
-        count++;
-        assertTrue(o.contents.contains("/** Protocol used for testing. */"));
-        assertTrue(o.contents.contains("/** Send a greeting */"));
-      }
-    }
-    assertEquals("Missed generated protocol!", 1, count);
-  }
-
-  /**
-   * Checks that a schema passes through the SpecificCompiler, and,
-   * optionally, uses the system's Java compiler to check
-   * that the generated code is valid.
-   */
-  public static void
-      assertCompiles(Schema schema, boolean useJavaCompiler) 
-  throws IOException {
-    Collection<OutputFile> outputs = new SpecificCompiler(schema).compile();
-    assertTrue(null != outputs);
-    if (useJavaCompiler) {
-      assertCompilesWithJavaCompiler(outputs);
-    }
-  }
-  
-  /**
-   * Checks that a protocol passes through the SpecificCompiler,
-   * and, optionally, uses the system's Java compiler to check
-   * that the generated code is valid.
-   */
-  public static void assertCompiles(Protocol protocol, boolean useJavaCompiler)
-  throws IOException {
-    Collection<OutputFile> outputs = new SpecificCompiler(protocol).compile();
-    assertTrue(null != outputs);
-    if (useJavaCompiler) {
-      assertCompilesWithJavaCompiler(outputs);
-    }
-  }
-  
-  /** Uses the system's java compiler to actually compile the generated code. */
-  static void assertCompilesWithJavaCompiler(Collection<OutputFile> outputs) 
-  throws IOException {
-    if (outputs.isEmpty()) {
-      return;               // Nothing to compile!
-    }
-    File dstDir = AvroTestUtil.tempFile("realCompiler");
-    List<File> javaFiles = new ArrayList<File>();
-    for (OutputFile o : outputs) {
-      javaFiles.add(o.writeToDestination(dstDir));
-    }
-
-    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
-    StandardJavaFileManager fileManager = 
-      compiler.getStandardFileManager(null, null, null);
-    
-    CompilationTask cTask = compiler.getTask(null, fileManager, null, null, 
-        null,
-        fileManager.getJavaFileObjects(
-            javaFiles.toArray(new File[javaFiles.size()])));
-    assertTrue(cTask.call());
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/specific/TestSpecificData.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/specific/TestSpecificData.java
deleted file mode 100644
index 89a8ad8..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/specific/TestSpecificData.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.specific;
-
-import static org.junit.Assert.*;
-
-import org.junit.Test;
-
-import org.apache.avro.test.TestRecord;
-
-public class TestSpecificData {
-  
-  @Test
-  /** Make sure that even with nulls, hashCode() doesn't throw NPE. */
-  public void testHashCode() {
-    new TestRecord().hashCode();
-    SpecificData.get().hashCode(null, TestRecord.SCHEMA$);
-  }
-
-  @Test
-  /** Make sure that even with nulls, toString() doesn't throw NPE. */
-  public void testToString() {
-    new TestRecord().toString();
-  }
-
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestDataFileTools.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestDataFileTools.java
deleted file mode 100644
index 81849e3..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestDataFileTools.java
+++ /dev/null
@@ -1,205 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.fail;
-
-import java.io.ByteArrayOutputStream;
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.io.PrintStream;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.List;
-
-import org.apache.avro.AvroTestUtil;
-import org.apache.avro.Schema;
-import org.apache.avro.Schema.Type;
-import org.apache.avro.file.DataFileReader;
-import org.apache.avro.file.DataFileWriter;
-import org.apache.avro.generic.GenericDatumReader;
-import org.apache.avro.generic.GenericDatumWriter;
-import org.apache.tools.ant.filters.StringInputStream;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-public class TestDataFileTools {
-  static final int COUNT = 10;
-  static File sampleFile;
-  static String jsonData;
-  static Schema schema;
-  
-  @BeforeClass
-  public static void writeSampleFile() throws IOException {
-    sampleFile = AvroTestUtil.tempFile(
-      TestDataFileTools.class.getName() + ".avro");
-    schema = Schema.create(Type.INT);
-    
-    DataFileWriter<Object> writer
-      = new DataFileWriter<Object>(new GenericDatumWriter<Object>(schema))
-      .create(schema, sampleFile);
-    StringBuilder builder = new StringBuilder();
-
-    for (int i = 0; i < COUNT; ++i) {
-      builder.append(Integer.toString(i));
-      builder.append("\n");
-      writer.append(i);
-    }
-
-    writer.flush();
-    writer.close();
-    
-    jsonData = builder.toString();
-  }
-  
-  @Test
-  public void testRead() throws Exception {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    PrintStream p = new PrintStream(baos);
-    new DataFileReadTool().run(
-        null, // stdin
-        p, // stdout
-        null, // stderr
-        Arrays.asList(sampleFile.getPath()));
-    assertEquals(jsonData.toString(), baos.toString("UTF-8").
-        replace("\r", ""));
-  }
-  
-  @Test
-  public void testGetSchema() throws Exception {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    PrintStream p = new PrintStream(baos);
-    new DataFileGetSchemaTool().run(
-        null, // stdin
-        p, // stdout
-        null, // stderr
-        Arrays.asList(sampleFile.getPath()));
-    assertEquals(schema.toString() + "\n",
-        baos.toString("UTF-8").replace("\r", ""));
-  }
-  
-  @Test
-  public void testWriteWithDeflate() throws Exception {
-    testWrite("deflate", Arrays.asList("--codec", "deflate"), "deflate");
-  }
-  
-  @Test
-  public void testWrite() throws Exception {
-    testWrite("plain", Collections.<String>emptyList(), "null");
-  }
-  
-  public void testWrite(String name, List<String> extra, String expectedCodec) 
-      throws Exception {
-    File outFile = AvroTestUtil.tempFile(
-        TestDataFileTools.class + ".testWrite." + name + ".avro");
-    FileOutputStream fout = new FileOutputStream(outFile);
-    PrintStream out = new PrintStream(fout);
-    List<String> args = new ArrayList<String>();
-    args.add(schema.toString());
-    args.add("-");
-    args.addAll(extra);
-    new DataFileWriteTool().run(
-        new StringInputStream(jsonData),
-        new PrintStream(out), // stdout
-        null, // stderr
-        args);
-    out.close();
-    fout.close();
-    
-    // Read it back, and make sure it's valid.
-    GenericDatumReader<Object> reader = new GenericDatumReader<Object>();
-    DataFileReader<Object> fileReader = new DataFileReader<Object>(outFile,reader);
-    int i = 0;
-    for (Object datum : fileReader) {
-      assertEquals(i, datum);
-      i++;
-    }
-    assertEquals(COUNT, i);
-    assertEquals(schema, fileReader.getSchema());
-    String codecStr = fileReader.getMetaString("avro.codec");
-    if (null == codecStr) {
-      codecStr = "null";
-    }
-    assertEquals(expectedCodec, codecStr);
-  }
-  
-  @Test
-  public void testFailureOnWritingPartialJSONValues() throws Exception {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    PrintStream out = new PrintStream(baos);
-    try {
-      new DataFileWriteTool().run(
-          new StringInputStream("{"),
-          new PrintStream(out), // stdout
-          null, // stderr
-          Arrays.asList("{ \"type\":\"record\", \"fields\":" +
-                        "[{\"name\":\"foo\", \"type\":\"string\"}], " +
-                        "\"name\":\"boring\" }", "-"));
-      fail("Expected exception.");
-    } catch (IOException expected) {
-      // expected
-    }
-  }
-  
-  @Test
-  public void testWritingZeroJsonValues() throws Exception {
-    File outFile = writeToAvroFile("zerojsonvalues",
-        schema.toString(),
-        "");
-    assertEquals(0, countRecords(outFile));
-  }
-  
-  private int countRecords(File outFile) throws IOException {
-    GenericDatumReader<Object> reader = new GenericDatumReader<Object>();
-    DataFileReader<Object> fileReader = 
-      new DataFileReader<Object>(outFile,reader);
-    int i = 0;
-    for (@SuppressWarnings("unused") Object datum : fileReader) {
-      i++;
-    }
-    return i;
-  }
-
-  @Test
-  public void testDifferentSeparatorsBetweenJsonRecords() throws Exception {
-    File outFile = writeToAvroFile(
-        "seperators", 
-        "{ \"type\":\"array\", \"items\":\"int\" }", 
-        "[]    [] []\n[][3]     ");
-    assertEquals(5, countRecords(outFile));
-  }
-  
-  public File writeToAvroFile(String testName, String schema, String json) throws Exception {
-    File outFile = AvroTestUtil.tempFile(
-        TestDataFileTools.class + "." + testName + ".avro");
-    FileOutputStream fout = new FileOutputStream(outFile);
-    PrintStream out = new PrintStream(fout);
-    new DataFileWriteTool().run(
-        new StringInputStream(json),
-        new PrintStream(out), // stdout
-        null, // stderr
-        Arrays.asList(schema, "-"));
-    out.close();
-    fout.close();
-    return outFile;
-  }
-  
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestJsonToFromBinaryFragmentTools.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestJsonToFromBinaryFragmentTools.java
deleted file mode 100644
index 98dfbdd..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestJsonToFromBinaryFragmentTools.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import static org.junit.Assert.assertEquals;
-
-import java.io.ByteArrayInputStream;
-import java.io.ByteArrayOutputStream;
-import java.io.PrintStream;
-import java.util.Arrays;
-
-import org.apache.avro.Schema;
-import org.junit.Test;
-
-/**
- * Tests both {@link JsonToBinaryFragmentTool} 
- * and {@link BinaryFragmentToJsonTool}.
- */
-public class TestJsonToFromBinaryFragmentTools {
-  private static final Schema STRING_SCHEMA = Schema.parse("\"string\"");
-  private static final String UTF8 = "utf-8";
-  private static final String AVRO = 
-    "ZLong string implies readable length encoding.";
-  private static final String JSON = 
-    "\"Long string implies readable length encoding.\"";
-
-  @Test
-  public void testBinaryToJson() throws Exception {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    PrintStream p = new PrintStream(baos);
-    
-    new BinaryFragmentToJsonTool().run(
-        new ByteArrayInputStream(AVRO.getBytes(UTF8)), // stdin
-        p, // stdout
-        null, // stderr
-        Arrays.asList(STRING_SCHEMA.toString(), "-"));
-    assertEquals(JSON + "\n", baos.toString(UTF8).replace("\r", ""));
-  }
-  
-  @Test
-  public void testJsonToBinary() throws Exception {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    PrintStream p = new PrintStream(baos);
-    new JsonToBinaryFragmentTool().run(
-        new ByteArrayInputStream(JSON.getBytes(UTF8)), // stdin
-        p, // stdout
-        null, // stderr
-        Arrays.asList(STRING_SCHEMA.toString(), "-"));
-    assertEquals(AVRO, baos.toString(UTF8));
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestMain.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestMain.java
deleted file mode 100644
index c990a42..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestMain.java
+++ /dev/null
@@ -1,35 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import static org.junit.Assert.fail;
-
-import org.junit.Test;
-
-public class TestMain {
-  @Test
-  /** Make sure that tool descriptions fit in 80 characters. */
-  public void testToolDescriptionLength() {
-    Main m = new Main();
-    for (Tool t : m.tools.values()) {
-      if (m.maxLen + 2 + t.getShortDescription().length() > 80) {
-        fail("Tool description too long: " + t.getName());
-      }
-    }
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestRpcReceiveAndSendTools.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestRpcReceiveAndSendTools.java
deleted file mode 100644
index cb2c68c..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/tool/TestRpcReceiveAndSendTools.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.tool;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import java.io.ByteArrayOutputStream;
-import java.io.PrintStream;
-import java.util.Arrays;
-
-import org.junit.Test;
-
-public class TestRpcReceiveAndSendTools {
-  
-  /**
-   * Starts a server (using the tool) and sends a single message to it.
-   */
-  @Test
-  public void testServeAndSend() throws Exception {
-    String protocolFile =
-      System.getProperty("share.dir") + "/test/schemas/simple.avpr";
-    ByteArrayOutputStream baos1 = new ByteArrayOutputStream();
-    PrintStream p1 = new PrintStream(baos1);
-    RpcReceiveTool receive = new RpcReceiveTool();
-    receive.run1(null, p1, System.err, 
-                 Arrays.asList("http://0.0.0.0:0/",
-                               protocolFile, "hello",
-                               "-data", "\"Hello!\""));
-    ByteArrayOutputStream baos2 = new ByteArrayOutputStream();
-    PrintStream p2 = new PrintStream(baos2);
-    RpcSendTool send = new RpcSendTool();
-    send.run(null, p2, System.err,
-             Arrays.asList("http://127.0.0.1:"+receive.server.getPort()+"/",
-                           protocolFile, "hello",  
-                           "-data", "{ \"greeting\": \"Hi!\" }"));
-    receive.run2(System.err);
-    
-    assertTrue(baos1.toString("UTF-8").replace("\r", "")
-               .endsWith("hello\t{\"greeting\":\"Hi!\"}\n"));
-    assertEquals("\"Hello!\"\n", baos2.toString("UTF-8").replace("\r", ""));
-  }
-}
diff --git a/branch-1.3/lang/java/src/test/java/org/apache/avro/util/TestUtf8.java b/branch-1.3/lang/java/src/test/java/org/apache/avro/util/TestUtf8.java
deleted file mode 100644
index 0a10666..0000000
--- a/branch-1.3/lang/java/src/test/java/org/apache/avro/util/TestUtf8.java
+++ /dev/null
@@ -1,32 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.avro.util;
-
-import org.junit.Test;
-import static org.junit.Assert.assertEquals;
-
-public class TestUtf8 {
-  @Test public void testByteConstructor() throws Exception {
-    byte[] bs = "Foo".getBytes("UTF-8");
-    Utf8 u = new Utf8(bs);
-    assertEquals(bs.length, u.getLength());
-    for (int i=0; i<bs.length; i++) {
-      assertEquals(bs[i], u.getBytes()[i]);
-    }
-  }
-}
diff --git a/branch-1.3/lang/py/build.xml b/branch-1.3/lang/py/build.xml
deleted file mode 100644
index 057b3a2..0000000
--- a/branch-1.3/lang/py/build.xml
+++ /dev/null
@@ -1,100 +0,0 @@
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<project name="Avro" default="dist">
- 
-  <!-- Load user's default properties. -->
-  <property file="${user.home}/build.properties" />
-
-  <property name="share.dir" value="${basedir}/../../share"/>
-  <property name="dist.dir" value="${basedir}/../../dist/py"/>
-  <property name="top.build" value="${basedir}/../../build"/>
-
-  <loadresource property="version">
-    <file file="${share.dir}/VERSION.txt"/>
-  </loadresource>
-
-  <path id="java.classpath">
-    <fileset dir="lib">
-      <include name="**/*.jar" />
-    </fileset>
-  </path>
-
-  <path id="test.path">
-    <pathelement location="${basedir}/src"/>
-    <pathelement location="${basedir}/test"/>
-    <pathelement location="${basedir}/lib"/>
-  </path>
-
-  <target name="init">
-    <copy todir="${basedir}/src/avro">
-      <fileset dir="${share.dir}/schemas/org/apache/avro/ipc">
-      	<include name="**/*.avsc"/>
-      </fileset>
-    </copy>
-  </target>
-
-  <target name="test" depends="init" description="Run python unit tests">
-    <taskdef name="py-test" classname="org.pyant.tasks.PythonTestTask"
-	     classpathref="java.classpath"/>
-    <py-test python="python" pythonpathref="test.path" >
-      <fileset dir="${basedir}/test">
-        <include name="test_*.py"/>
-        <exclude name="test_datafile_interop.py"/>
-      </fileset>
-    </py-test>
-  </target>
-
-  <target name="interop-data-test" description="Run python interop data tests">
-    <taskdef name="py-test" classname="org.pyant.tasks.PythonTestTask"
-	     classpathref="java.classpath"/>
-    <py-test python="python" pythonpathref="test.path" >
-      <fileset dir="${basedir}/test">
-        <include name="test_datafile_interop.py"/>
-      </fileset>
-    </py-test>
-  </target>
-
-  <target name="interop-data-generate"
-          description="Generate Python interop data files.">
-    <mkdir dir="${top.build}/interop/data"/>
-    <exec executable="python">
-      <env key="PYTHONPATH" value="$PYTHONPATH:${basedir}/src"/>
-      <arg value="${basedir}/test/gen_interop_data.py"/>
-      <arg value="${share.dir}/test/schemas/interop.avsc"/>
-      <arg value="${top.build}/interop/data/py.avro"/>
-    </exec>
-  </target>
-
-  <target name="dist" description="Build egg">
-    <mkdir dir="${dist.dir}"/>
-    <exec executable="python" failonerror="true">
-      <arg value="setup.py"/>
-      <arg value="bdist_egg"/>
-      <arg value="--dist-dir=${dist.dir}"/>
-    </exec>
-  </target>
-
-  <target name="clean" description="Delete build files, and their directories">
-    <delete>
-      <fileset dir="src" includes="**/*.pyc" />
-      <fileset dir="${basedir}/src/avro" includes="**/*.avsc"/>
-    </delete>
-  </target>
-
-</project>
diff --git a/branch-1.3/lang/py/lib/pyAntTasks-1.3-LICENSE.txt b/branch-1.3/lang/py/lib/pyAntTasks-1.3-LICENSE.txt
deleted file mode 100644
index d645695..0000000
--- a/branch-1.3/lang/py/lib/pyAntTasks-1.3-LICENSE.txt
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/branch-1.3/lang/py/lib/pyAntTasks-1.3.jar b/branch-1.3/lang/py/lib/pyAntTasks-1.3.jar
deleted file mode 100644
index 53a7877..0000000
--- a/branch-1.3/lang/py/lib/pyAntTasks-1.3.jar
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/lang/py/lib/simplejson/LICENSE.txt b/branch-1.3/lang/py/lib/simplejson/LICENSE.txt
deleted file mode 100644
index ad95f29..0000000
--- a/branch-1.3/lang/py/lib/simplejson/LICENSE.txt
+++ /dev/null
@@ -1,19 +0,0 @@
-Copyright (c) 2006 Bob Ippolito
-
-Permission is hereby granted, free of charge, to any person obtaining a copy of
-this software and associated documentation files (the "Software"), to deal in
-the Software without restriction, including without limitation the rights to
-use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
-of the Software, and to permit persons to whom the Software is furnished to do
-so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/branch-1.3/lang/py/lib/simplejson/__init__.py b/branch-1.3/lang/py/lib/simplejson/__init__.py
deleted file mode 100644
index d5b4d39..0000000
--- a/branch-1.3/lang/py/lib/simplejson/__init__.py
+++ /dev/null
@@ -1,318 +0,0 @@
-r"""JSON (JavaScript Object Notation) <http://json.org> is a subset of
-JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data
-interchange format.
-
-:mod:`simplejson` exposes an API familiar to users of the standard library
-:mod:`marshal` and :mod:`pickle` modules. It is the externally maintained
-version of the :mod:`json` library contained in Python 2.6, but maintains
-compatibility with Python 2.4 and Python 2.5 and (currently) has
-significant performance advantages, even without using the optional C
-extension for speedups.
-
-Encoding basic Python object hierarchies::
-
-    >>> import simplejson as json
-    >>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
-    '["foo", {"bar": ["baz", null, 1.0, 2]}]'
-    >>> print json.dumps("\"foo\bar")
-    "\"foo\bar"
-    >>> print json.dumps(u'\u1234')
-    "\u1234"
-    >>> print json.dumps('\\')
-    "\\"
-    >>> print json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True)
-    {"a": 0, "b": 0, "c": 0}
-    >>> from StringIO import StringIO
-    >>> io = StringIO()
-    >>> json.dump(['streaming API'], io)
-    >>> io.getvalue()
-    '["streaming API"]'
-
-Compact encoding::
-
-    >>> import simplejson as json
-    >>> json.dumps([1,2,3,{'4': 5, '6': 7}], separators=(',',':'))
-    '[1,2,3,{"4":5,"6":7}]'
-
-Pretty printing::
-
-    >>> import simplejson as json
-    >>> s = json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=4)
-    >>> print '\n'.join([l.rstrip() for l in  s.splitlines()])
-    {
-        "4": 5,
-        "6": 7
-    }
-
-Decoding JSON::
-
-    >>> import simplejson as json
-    >>> obj = [u'foo', {u'bar': [u'baz', None, 1.0, 2]}]
-    >>> json.loads('["foo", {"bar":["baz", null, 1.0, 2]}]') == obj
-    True
-    >>> json.loads('"\\"foo\\bar"') == u'"foo\x08ar'
-    True
-    >>> from StringIO import StringIO
-    >>> io = StringIO('["streaming API"]')
-    >>> json.load(io)[0] == 'streaming API'
-    True
-
-Specializing JSON object decoding::
-
-    >>> import simplejson as json
-    >>> def as_complex(dct):
-    ...     if '__complex__' in dct:
-    ...         return complex(dct['real'], dct['imag'])
-    ...     return dct
-    ...
-    >>> json.loads('{"__complex__": true, "real": 1, "imag": 2}',
-    ...     object_hook=as_complex)
-    (1+2j)
-    >>> import decimal
-    >>> json.loads('1.1', parse_float=decimal.Decimal) == decimal.Decimal('1.1')
-    True
-
-Specializing JSON object encoding::
-
-    >>> import simplejson as json
-    >>> def encode_complex(obj):
-    ...     if isinstance(obj, complex):
-    ...         return [obj.real, obj.imag]
-    ...     raise TypeError(repr(obj) + " is not JSON serializable")
-    ...
-    >>> json.dumps(2 + 1j, default=encode_complex)
-    '[2.0, 1.0]'
-    >>> json.JSONEncoder(default=encode_complex).encode(2 + 1j)
-    '[2.0, 1.0]'
-    >>> ''.join(json.JSONEncoder(default=encode_complex).iterencode(2 + 1j))
-    '[2.0, 1.0]'
-
-
-Using simplejson.tool from the shell to validate and pretty-print::
-
-    $ echo '{"json":"obj"}' | python -m simplejson.tool
-    {
-        "json": "obj"
-    }
-    $ echo '{ 1.2:3.4}' | python -m simplejson.tool
-    Expecting property name: line 1 column 2 (char 2)
-"""
-__version__ = '2.0.9'
-__all__ = [
-    'dump', 'dumps', 'load', 'loads',
-    'JSONDecoder', 'JSONEncoder',
-]
-
-__author__ = 'Bob Ippolito <bob@redivi.com>'
-
-from decoder import JSONDecoder
-from encoder import JSONEncoder
-
-_default_encoder = JSONEncoder(
-    skipkeys=False,
-    ensure_ascii=True,
-    check_circular=True,
-    allow_nan=True,
-    indent=None,
-    separators=None,
-    encoding='utf-8',
-    default=None,
-)
-
-def dump(obj, fp, skipkeys=False, ensure_ascii=True, check_circular=True,
-        allow_nan=True, cls=None, indent=None, separators=None,
-        encoding='utf-8', default=None, **kw):
-    """Serialize ``obj`` as a JSON formatted stream to ``fp`` (a
-    ``.write()``-supporting file-like object).
-
-    If ``skipkeys`` is true then ``dict`` keys that are not basic types
-    (``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
-    will be skipped instead of raising a ``TypeError``.
-
-    If ``ensure_ascii`` is false, then some chunks written to ``fp``
-    may be ``unicode`` instances, subject to normal Python ``str`` to
-    ``unicode`` coercion rules. Unless ``fp.write()`` explicitly
-    understands ``unicode`` (as in ``codecs.getwriter()``) this is likely
-    to cause an error.
-
-    If ``check_circular`` is false, then the circular reference check
-    for container types will be skipped and a circular reference will
-    result in an ``OverflowError`` (or worse).
-
-    If ``allow_nan`` is false, then it will be a ``ValueError`` to
-    serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``)
-    in strict compliance of the JSON specification, instead of using the
-    JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
-
-    If ``indent`` is a non-negative integer, then JSON array elements and object
-    members will be pretty-printed with that indent level. An indent level
-    of 0 will only insert newlines. ``None`` is the most compact representation.
-
-    If ``separators`` is an ``(item_separator, dict_separator)`` tuple
-    then it will be used instead of the default ``(', ', ': ')`` separators.
-    ``(',', ':')`` is the most compact JSON representation.
-
-    ``encoding`` is the character encoding for str instances, default is UTF-8.
-
-    ``default(obj)`` is a function that should return a serializable version
-    of obj or raise TypeError. The default simply raises TypeError.
-
-    To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
-    ``.default()`` method to serialize additional types), specify it with
-    the ``cls`` kwarg.
-
-    """
-    # cached encoder
-    if (not skipkeys and ensure_ascii and
-        check_circular and allow_nan and
-        cls is None and indent is None and separators is None and
-        encoding == 'utf-8' and default is None and not kw):
-        iterable = _default_encoder.iterencode(obj)
-    else:
-        if cls is None:
-            cls = JSONEncoder
-        iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
-            check_circular=check_circular, allow_nan=allow_nan, indent=indent,
-            separators=separators, encoding=encoding,
-            default=default, **kw).iterencode(obj)
-    # could accelerate with writelines in some versions of Python, at
-    # a debuggability cost
-    for chunk in iterable:
-        fp.write(chunk)
-
-
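The cached-encoder fast path above kicks in whenever ``dump`` is called with all defaults, and any non-default argument builds a fresh encoder instead. A minimal sketch of both paths, using the standard-library ``json`` module (the Python 3 descendant of this simplejson code, with the same ``dump`` signature shape) rather than the deleted Python 2 module itself:

```python
import io
import json  # stdlib descendant of simplejson; same dump/dumps API shape

# all-defaults call: serviced by the module-level cached encoder
buf = io.StringIO()
json.dump(['streaming API'], buf)
assert buf.getvalue() == '["streaming API"]'

# non-default arguments (sort_keys, separators) bypass the cache
buf2 = io.StringIO()
json.dump({'b': 1, 'a': 2}, buf2, sort_keys=True, separators=(',', ':'))
assert buf2.getvalue() == '{"a":2,"b":1}'
```

Either way the encoder yields chunks that are written one by one to the ``.write()``-supporting object, as the loop at the end of ``dump`` shows.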
-def dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,
-        allow_nan=True, cls=None, indent=None, separators=None,
-        encoding='utf-8', default=None, **kw):
-    """Serialize ``obj`` to a JSON formatted ``str``.
-
-    If ``skipkeys`` is true then ``dict`` keys that are not basic types
-    (``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
-    will be skipped instead of raising a ``TypeError``.
-
-    If ``ensure_ascii`` is false, then the return value will be a
-    ``unicode`` instance subject to normal Python ``str`` to ``unicode``
-    coercion rules instead of being escaped to an ASCII ``str``.
-
-    If ``check_circular`` is false, then the circular reference check
-    for container types will be skipped and a circular reference will
-    result in an ``OverflowError`` (or worse).
-
-    If ``allow_nan`` is false, then it will be a ``ValueError`` to
-    serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``) in
-    strict compliance of the JSON specification, instead of using the
-    JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
-
-    If ``indent`` is a non-negative integer, then JSON array elements and
-    object members will be pretty-printed with that indent level. An indent
-    level of 0 will only insert newlines. ``None`` is the most compact
-    representation.
-
-    If ``separators`` is an ``(item_separator, dict_separator)`` tuple
-    then it will be used instead of the default ``(', ', ': ')`` separators.
-    ``(',', ':')`` is the most compact JSON representation.
-
-    ``encoding`` is the character encoding for str instances, default is UTF-8.
-
-    ``default(obj)`` is a function that should return a serializable version
-    of obj or raise TypeError. The default simply raises TypeError.
-
-    To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
-    ``.default()`` method to serialize additional types), specify it with
-    the ``cls`` kwarg.
-
-    """
-    # cached encoder
-    if (not skipkeys and ensure_ascii and
-        check_circular and allow_nan and
-        cls is None and indent is None and separators is None and
-        encoding == 'utf-8' and default is None and not kw):
-        return _default_encoder.encode(obj)
-    if cls is None:
-        cls = JSONEncoder
-    return cls(
-        skipkeys=skipkeys, ensure_ascii=ensure_ascii,
-        check_circular=check_circular, allow_nan=allow_nan, indent=indent,
-        separators=separators, encoding=encoding, default=default,
-        **kw).encode(obj)
-
-
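The ``default(obj)`` hook described in the docstring mirrors the ``encode_complex`` example from the module docstring; the sketch below reproduces it against the stdlib ``json`` module, which kept this hook unchanged:

```python
import json  # stdlib json exposes the same default= hook described above

def encode_complex(obj):
    # called only for objects the encoder cannot serialize natively
    if isinstance(obj, complex):
        return [obj.real, obj.imag]
    raise TypeError(repr(obj) + " is not JSON serializable")

# complex is not a JSON type, so the encoder falls back to the hook
assert json.dumps(2 + 1j, default=encode_complex) == '[2.0, 1.0]'
```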
-_default_decoder = JSONDecoder(encoding=None, object_hook=None)
-
-
-def load(fp, encoding=None, cls=None, object_hook=None, parse_float=None,
-        parse_int=None, parse_constant=None, **kw):
-    """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
-    a JSON document) to a Python object.
-
-    If the contents of ``fp`` is encoded with an ASCII based encoding other
-    than utf-8 (e.g. latin-1), then an appropriate ``encoding`` name must
-    be specified. Encodings that are not ASCII based (such as UCS-2) are
-    not allowed, and should be wrapped with
-    ``codecs.getreader(encoding)(fp)``, or simply decoded to a ``unicode``
-    object and passed to ``loads()``.
-
-    ``object_hook`` is an optional function that will be called with the
-    result of any object literal decode (a ``dict``). The return value of
-    ``object_hook`` will be used instead of the ``dict``. This feature
-    can be used to implement custom decoders (e.g. JSON-RPC class hinting).
-
-    To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
-    kwarg.
-
-    """
-    return loads(fp.read(),
-        encoding=encoding, cls=cls, object_hook=object_hook,
-        parse_float=parse_float, parse_int=parse_int,
-        parse_constant=parse_constant, **kw)
-
-
-def loads(s, encoding=None, cls=None, object_hook=None, parse_float=None,
-        parse_int=None, parse_constant=None, **kw):
-    """Deserialize ``s`` (a ``str`` or ``unicode`` instance containing a JSON
-    document) to a Python object.
-
-    If ``s`` is a ``str`` instance and is encoded with an ASCII based encoding
-    other than utf-8 (e.g. latin-1) then an appropriate ``encoding`` name
-    must be specified. Encodings that are not ASCII based (such as UCS-2)
-    are not allowed and should be decoded to ``unicode`` first.
-
-    ``object_hook`` is an optional function that will be called with the
-    result of any object literal decode (a ``dict``). The return value of
-    ``object_hook`` will be used instead of the ``dict``. This feature
-    can be used to implement custom decoders (e.g. JSON-RPC class hinting).
-
-    ``parse_float``, if specified, will be called with the string
-    of every JSON float to be decoded. By default this is equivalent to
-    float(num_str). This can be used to use another datatype or parser
-    for JSON floats (e.g. decimal.Decimal).
-
-    ``parse_int``, if specified, will be called with the string
-    of every JSON int to be decoded. By default this is equivalent to
-    int(num_str). This can be used to use another datatype or parser
-    for JSON integers (e.g. float).
-
-    ``parse_constant``, if specified, will be called with one of the
-    following strings: -Infinity, Infinity, NaN, null, true, false.
-    This can be used to raise an exception if invalid JSON numbers
-    are encountered.
-
-    To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
-    kwarg.
-
-    """
-    if (cls is None and encoding is None and object_hook is None and
-            parse_int is None and parse_float is None and
-            parse_constant is None and not kw):
-        return _default_decoder.decode(s)
-    if cls is None:
-        cls = JSONDecoder
-    if object_hook is not None:
-        kw['object_hook'] = object_hook
-    if parse_float is not None:
-        kw['parse_float'] = parse_float
-    if parse_int is not None:
-        kw['parse_int'] = parse_int
-    if parse_constant is not None:
-        kw['parse_constant'] = parse_constant
-    return cls(encoding=encoding, **kw).decode(s)
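The ``object_hook`` and ``parse_float`` hooks documented above can be exercised end to end; this sketch uses the stdlib ``json`` module, which inherited both parameters from simplejson:

```python
import json
from decimal import Decimal

# object_hook rewrites every decoded JSON object (dict) before it is returned
def as_complex(dct):
    if '__complex__' in dct:
        return complex(dct['real'], dct['imag'])
    return dct

assert json.loads('{"__complex__": true, "real": 1, "imag": 2}',
                  object_hook=as_complex) == (1 + 2j)

# parse_float swaps the float parser, here for lossless Decimal parsing
assert json.loads('1.1', parse_float=Decimal) == Decimal('1.1')
```

As the function body shows, any such hook (or a non-default ``cls``/``encoding``) forces a fresh ``JSONDecoder`` instead of the cached ``_default_decoder``.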
diff --git a/branch-1.3/lang/py/lib/simplejson/_speedups.c b/branch-1.3/lang/py/lib/simplejson/_speedups.c
deleted file mode 100644
index 23b5f4a..0000000
--- a/branch-1.3/lang/py/lib/simplejson/_speedups.c
+++ /dev/null
@@ -1,2329 +0,0 @@
-#include "Python.h"
-#include "structmember.h"
-#if PY_VERSION_HEX < 0x02060000 && !defined(Py_TYPE)
-#define Py_TYPE(ob)     (((PyObject*)(ob))->ob_type)
-#endif
-#if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)
-typedef int Py_ssize_t;
-#define PY_SSIZE_T_MAX INT_MAX
-#define PY_SSIZE_T_MIN INT_MIN
-#define PyInt_FromSsize_t PyInt_FromLong
-#define PyInt_AsSsize_t PyInt_AsLong
-#endif
-#ifndef Py_IS_FINITE
-#define Py_IS_FINITE(X) (!Py_IS_INFINITY(X) && !Py_IS_NAN(X))
-#endif
-
-#ifdef __GNUC__
-#define UNUSED __attribute__((__unused__))
-#else
-#define UNUSED
-#endif
-
-#define DEFAULT_ENCODING "utf-8"
-
-#define PyScanner_Check(op) PyObject_TypeCheck(op, &PyScannerType)
-#define PyScanner_CheckExact(op) (Py_TYPE(op) == &PyScannerType)
-#define PyEncoder_Check(op) PyObject_TypeCheck(op, &PyEncoderType)
-#define PyEncoder_CheckExact(op) (Py_TYPE(op) == &PyEncoderType)
-
-static PyTypeObject PyScannerType;
-static PyTypeObject PyEncoderType;
-
-typedef struct _PyScannerObject {
-    PyObject_HEAD
-    PyObject *encoding;
-    PyObject *strict;
-    PyObject *object_hook;
-    PyObject *parse_float;
-    PyObject *parse_int;
-    PyObject *parse_constant;
-} PyScannerObject;
-
-static PyMemberDef scanner_members[] = {
-    {"encoding", T_OBJECT, offsetof(PyScannerObject, encoding), READONLY, "encoding"},
-    {"strict", T_OBJECT, offsetof(PyScannerObject, strict), READONLY, "strict"},
-    {"object_hook", T_OBJECT, offsetof(PyScannerObject, object_hook), READONLY, "object_hook"},
-    {"parse_float", T_OBJECT, offsetof(PyScannerObject, parse_float), READONLY, "parse_float"},
-    {"parse_int", T_OBJECT, offsetof(PyScannerObject, parse_int), READONLY, "parse_int"},
-    {"parse_constant", T_OBJECT, offsetof(PyScannerObject, parse_constant), READONLY, "parse_constant"},
-    {NULL}
-};
-
-typedef struct _PyEncoderObject {
-    PyObject_HEAD
-    PyObject *markers;
-    PyObject *defaultfn;
-    PyObject *encoder;
-    PyObject *indent;
-    PyObject *key_separator;
-    PyObject *item_separator;
-    PyObject *sort_keys;
-    PyObject *skipkeys;
-    int fast_encode;
-    int allow_nan;
-} PyEncoderObject;
-
-static PyMemberDef encoder_members[] = {
-    {"markers", T_OBJECT, offsetof(PyEncoderObject, markers), READONLY, "markers"},
-    {"default", T_OBJECT, offsetof(PyEncoderObject, defaultfn), READONLY, "default"},
-    {"encoder", T_OBJECT, offsetof(PyEncoderObject, encoder), READONLY, "encoder"},
-    {"indent", T_OBJECT, offsetof(PyEncoderObject, indent), READONLY, "indent"},
-    {"key_separator", T_OBJECT, offsetof(PyEncoderObject, key_separator), READONLY, "key_separator"},
-    {"item_separator", T_OBJECT, offsetof(PyEncoderObject, item_separator), READONLY, "item_separator"},
-    {"sort_keys", T_OBJECT, offsetof(PyEncoderObject, sort_keys), READONLY, "sort_keys"},
-    {"skipkeys", T_OBJECT, offsetof(PyEncoderObject, skipkeys), READONLY, "skipkeys"},
-    {NULL}
-};
-
-static Py_ssize_t
-ascii_escape_char(Py_UNICODE c, char *output, Py_ssize_t chars);
-static PyObject *
-ascii_escape_unicode(PyObject *pystr);
-static PyObject *
-ascii_escape_str(PyObject *pystr);
-static PyObject *
-py_encode_basestring_ascii(PyObject* self UNUSED, PyObject *pystr);
-void init_speedups(void);
-static PyObject *
-scan_once_str(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssize_t *next_idx_ptr);
-static PyObject *
-scan_once_unicode(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssize_t *next_idx_ptr);
-static PyObject *
-_build_rval_index_tuple(PyObject *rval, Py_ssize_t idx);
-static PyObject *
-scanner_new(PyTypeObject *type, PyObject *args, PyObject *kwds);
-static int
-scanner_init(PyObject *self, PyObject *args, PyObject *kwds);
-static void
-scanner_dealloc(PyObject *self);
-static int
-scanner_clear(PyObject *self);
-static PyObject *
-encoder_new(PyTypeObject *type, PyObject *args, PyObject *kwds);
-static int
-encoder_init(PyObject *self, PyObject *args, PyObject *kwds);
-static void
-encoder_dealloc(PyObject *self);
-static int
-encoder_clear(PyObject *self);
-static int
-encoder_listencode_list(PyEncoderObject *s, PyObject *rval, PyObject *seq, Py_ssize_t indent_level);
-static int
-encoder_listencode_obj(PyEncoderObject *s, PyObject *rval, PyObject *obj, Py_ssize_t indent_level);
-static int
-encoder_listencode_dict(PyEncoderObject *s, PyObject *rval, PyObject *dct, Py_ssize_t indent_level);
-static PyObject *
-_encoded_const(PyObject *const);
-static void
-raise_errmsg(char *msg, PyObject *s, Py_ssize_t end);
-static PyObject *
-encoder_encode_string(PyEncoderObject *s, PyObject *obj);
-static int
-_convertPyInt_AsSsize_t(PyObject *o, Py_ssize_t *size_ptr);
-static PyObject *
-_convertPyInt_FromSsize_t(Py_ssize_t *size_ptr);
-static PyObject *
-encoder_encode_float(PyEncoderObject *s, PyObject *obj);
-
-#define S_CHAR(c) (c >= ' ' && c <= '~' && c != '\\' && c != '"')
-#define IS_WHITESPACE(c) (((c) == ' ') || ((c) == '\t') || ((c) == '\n') || ((c) == '\r'))
-
-#define MIN_EXPANSION 6
-#ifdef Py_UNICODE_WIDE
-#define MAX_EXPANSION (2 * MIN_EXPANSION)
-#else
-#define MAX_EXPANSION MIN_EXPANSION
-#endif
-
-static int
-_convertPyInt_AsSsize_t(PyObject *o, Py_ssize_t *size_ptr)
-{
-    /* PyObject to Py_ssize_t converter */
-    *size_ptr = PyInt_AsSsize_t(o);
-    if (*size_ptr == -1 && PyErr_Occurred())
-        return 0;
-    return 1;
-}
-
-static PyObject *
-_convertPyInt_FromSsize_t(Py_ssize_t *size_ptr)
-{
-    /* Py_ssize_t to PyObject converter */
-    return PyInt_FromSsize_t(*size_ptr);
-}
-
-static Py_ssize_t
-ascii_escape_char(Py_UNICODE c, char *output, Py_ssize_t chars)
-{
-    /* Escape unicode code point c to ASCII escape sequences
-    in char *output. output must have at least 12 bytes unused to
-    accommodate an escaped surrogate pair "\uXXXX\uXXXX" */
-    output[chars++] = '\\';
-    switch (c) {
-        case '\\': output[chars++] = (char)c; break;
-        case '"': output[chars++] = (char)c; break;
-        case '\b': output[chars++] = 'b'; break;
-        case '\f': output[chars++] = 'f'; break;
-        case '\n': output[chars++] = 'n'; break;
-        case '\r': output[chars++] = 'r'; break;
-        case '\t': output[chars++] = 't'; break;
-        default:
-#ifdef Py_UNICODE_WIDE
-            if (c >= 0x10000) {
-                /* UTF-16 surrogate pair */
-                Py_UNICODE v = c - 0x10000;
-                c = 0xd800 | ((v >> 10) & 0x3ff);
-                output[chars++] = 'u';
-                output[chars++] = "0123456789abcdef"[(c >> 12) & 0xf];
-                output[chars++] = "0123456789abcdef"[(c >>  8) & 0xf];
-                output[chars++] = "0123456789abcdef"[(c >>  4) & 0xf];
-                output[chars++] = "0123456789abcdef"[(c      ) & 0xf];
-                c = 0xdc00 | (v & 0x3ff);
-                output[chars++] = '\\';
-            }
-#endif
-            output[chars++] = 'u';
-            output[chars++] = "0123456789abcdef"[(c >> 12) & 0xf];
-            output[chars++] = "0123456789abcdef"[(c >>  8) & 0xf];
-            output[chars++] = "0123456789abcdef"[(c >>  4) & 0xf];
-            output[chars++] = "0123456789abcdef"[(c      ) & 0xf];
-    }
-    return chars;
-}
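The surrogate-pair arithmetic in ``ascii_escape_char``'s default branch can be restated in Python. ``escape_char`` below is a hypothetical stand-in covering only that branch (the ``\uXXXX`` output, not the ``\n``/``\t`` shorthands), and its output is checked against what ``json.dumps`` emits with ``ensure_ascii`` on:

```python
import json

def escape_char(c):
    """Escape one code point the way ascii_escape_char's default branch does."""
    cp = ord(c)
    if cp >= 0x10000:
        # split a non-BMP code point into a UTF-16 surrogate pair
        v = cp - 0x10000
        hi = 0xD800 | ((v >> 10) & 0x3FF)
        lo = 0xDC00 | (v & 0x3FF)
        return '\\u%04x\\u%04x' % (hi, lo)
    return '\\u%04x' % cp

# matches json.dumps for the same code points (ensure_ascii=True default)
assert '"%s"' % escape_char('\u1234') == json.dumps('\u1234')
assert '"%s"' % escape_char('\U0001F600') == json.dumps('\U0001F600')
```

This is why the C caller must reserve 12 bytes per input character: a surrogate pair expands to the full ``\uXXXX\uXXXX`` sequence.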
-
-static PyObject *
-ascii_escape_unicode(PyObject *pystr)
-{
-    /* Take a PyUnicode pystr and return a new ASCII-only escaped PyString */
-    Py_ssize_t i;
-    Py_ssize_t input_chars;
-    Py_ssize_t output_size;
-    Py_ssize_t max_output_size;
-    Py_ssize_t chars;
-    PyObject *rval;
-    char *output;
-    Py_UNICODE *input_unicode;
-
-    input_chars = PyUnicode_GET_SIZE(pystr);
-    input_unicode = PyUnicode_AS_UNICODE(pystr);
-
-    /* One char input can be up to 6 chars output, estimate 4 of these */
-    output_size = 2 + (MIN_EXPANSION * 4) + input_chars;
-    max_output_size = 2 + (input_chars * MAX_EXPANSION);
-    rval = PyString_FromStringAndSize(NULL, output_size);
-    if (rval == NULL) {
-        return NULL;
-    }
-    output = PyString_AS_STRING(rval);
-    chars = 0;
-    output[chars++] = '"';
-    for (i = 0; i < input_chars; i++) {
-        Py_UNICODE c = input_unicode[i];
-        if (S_CHAR(c)) {
-            output[chars++] = (char)c;
-        }
-        else {
-            chars = ascii_escape_char(c, output, chars);
-        }
-        if (output_size - chars < (1 + MAX_EXPANSION)) {
-            /* There's more than four, so let's resize by a lot */
-            Py_ssize_t new_output_size = output_size * 2;
-            /* This is an upper bound */
-            if (new_output_size > max_output_size) {
-                new_output_size = max_output_size;
-            }
-            /* Make sure that the output size changed before resizing */
-            if (new_output_size != output_size) {
-                output_size = new_output_size;
-                if (_PyString_Resize(&rval, output_size) == -1) {
-                    return NULL;
-                }
-                output = PyString_AS_STRING(rval);
-            }
-        }
-    }
-    output[chars++] = '"';
-    if (_PyString_Resize(&rval, chars) == -1) {
-        return NULL;
-    }
-    return rval;
-}
-
-static PyObject *
-ascii_escape_str(PyObject *pystr)
-{
-    /* Take a PyString pystr and return a new ASCII-only escaped PyString */
-    Py_ssize_t i;
-    Py_ssize_t input_chars;
-    Py_ssize_t output_size;
-    Py_ssize_t chars;
-    PyObject *rval;
-    char *output;
-    char *input_str;
-
-    input_chars = PyString_GET_SIZE(pystr);
-    input_str = PyString_AS_STRING(pystr);
-
-    /* Fast path for a string that's already ASCII */
-    for (i = 0; i < input_chars; i++) {
-        Py_UNICODE c = (Py_UNICODE)(unsigned char)input_str[i];
-        if (!S_CHAR(c)) {
-            /* If we have to escape something, scan the string for unicode */
-            Py_ssize_t j;
-            for (j = i; j < input_chars; j++) {
-                c = (Py_UNICODE)(unsigned char)input_str[j];
-                if (c > 0x7f) {
-                    /* We hit a non-ASCII character, bail to unicode mode */
-                    PyObject *uni;
-                    uni = PyUnicode_DecodeUTF8(input_str, input_chars, "strict");
-                    if (uni == NULL) {
-                        return NULL;
-                    }
-                    rval = ascii_escape_unicode(uni);
-                    Py_DECREF(uni);
-                    return rval;
-                }
-            }
-            break;
-        }
-    }
-
-    if (i == input_chars) {
-        /* Input is already ASCII */
-        output_size = 2 + input_chars;
-    }
-    else {
-        /* One char input can be up to 6 chars output, estimate 4 of these */
-        output_size = 2 + (MIN_EXPANSION * 4) + input_chars;
-    }
-    rval = PyString_FromStringAndSize(NULL, output_size);
-    if (rval == NULL) {
-        return NULL;
-    }
-    output = PyString_AS_STRING(rval);
-    output[0] = '"';
-
-    /* We know that everything up to i is ASCII already */
-    chars = i + 1;
-    memcpy(&output[1], input_str, i);
-
-    for (; i < input_chars; i++) {
-        Py_UNICODE c = (Py_UNICODE)(unsigned char)input_str[i];
-        if (S_CHAR(c)) {
-            output[chars++] = (char)c;
-        }
-        else {
-            chars = ascii_escape_char(c, output, chars);
-        }
-        /* An ASCII char can't possibly expand to a surrogate! */
-        if (output_size - chars < (1 + MIN_EXPANSION)) {
-            /* There's more than four, so let's resize by a lot */
-            output_size *= 2;
-            if (output_size > 2 + (input_chars * MIN_EXPANSION)) {
-                output_size = 2 + (input_chars * MIN_EXPANSION);
-            }
-            if (_PyString_Resize(&rval, output_size) == -1) {
-                return NULL;
-            }
-            output = PyString_AS_STRING(rval);
-        }
-    }
-    output[chars++] = '"';
-    if (_PyString_Resize(&rval, chars) == -1) {
-        return NULL;
-    }
-    return rval;
-}
-
-static void
-raise_errmsg(char *msg, PyObject *s, Py_ssize_t end)
-{
-    /* Use the Python function simplejson.decoder.errmsg to raise a nice
-    looking ValueError exception */
-    static PyObject *errmsg_fn = NULL;
-    PyObject *pymsg;
-    if (errmsg_fn == NULL) {
-        PyObject *decoder = PyImport_ImportModule("simplejson.decoder");
-        if (decoder == NULL)
-            return;
-        errmsg_fn = PyObject_GetAttrString(decoder, "errmsg");
-        Py_DECREF(decoder);
-        if (errmsg_fn == NULL)
-            return;
-    }
-    pymsg = PyObject_CallFunction(errmsg_fn, "(zOO&)", msg, s, _convertPyInt_FromSsize_t, &end);
-    if (pymsg) {
-        PyErr_SetObject(PyExc_ValueError, pymsg);
-        Py_DECREF(pymsg);
-    }
-}
-
-static PyObject *
-join_list_unicode(PyObject *lst)
-{
-    /* return u''.join(lst) */
-    static PyObject *joinfn = NULL;
-    if (joinfn == NULL) {
-        PyObject *ustr = PyUnicode_FromUnicode(NULL, 0);
-        if (ustr == NULL)
-            return NULL;
-
-        joinfn = PyObject_GetAttrString(ustr, "join");
-        Py_DECREF(ustr);
-        if (joinfn == NULL)
-            return NULL;
-    }
-    return PyObject_CallFunctionObjArgs(joinfn, lst, NULL);
-}
-
-static PyObject *
-join_list_string(PyObject *lst)
-{
-    /* return ''.join(lst) */
-    static PyObject *joinfn = NULL;
-    if (joinfn == NULL) {
-        PyObject *ustr = PyString_FromStringAndSize(NULL, 0);
-        if (ustr == NULL)
-            return NULL;
-
-        joinfn = PyObject_GetAttrString(ustr, "join");
-        Py_DECREF(ustr);
-        if (joinfn == NULL)
-            return NULL;
-    }
-    return PyObject_CallFunctionObjArgs(joinfn, lst, NULL);
-}
-
-static PyObject *
-_build_rval_index_tuple(PyObject *rval, Py_ssize_t idx) {
-    /* return (rval, idx) tuple, stealing reference to rval */
-    PyObject *tpl;
-    PyObject *pyidx;
-    /*
-    steal a reference to rval, returns (rval, idx)
-    */
-    if (rval == NULL) {
-        return NULL;
-    }
-    pyidx = PyInt_FromSsize_t(idx);
-    if (pyidx == NULL) {
-        Py_DECREF(rval);
-        return NULL;
-    }
-    tpl = PyTuple_New(2);
-    if (tpl == NULL) {
-        Py_DECREF(pyidx);
-        Py_DECREF(rval);
-        return NULL;
-    }
-    PyTuple_SET_ITEM(tpl, 0, rval);
-    PyTuple_SET_ITEM(tpl, 1, pyidx);
-    return tpl;
-}
-
-static PyObject *
-scanstring_str(PyObject *pystr, Py_ssize_t end, char *encoding, int strict, Py_ssize_t *next_end_ptr)
-{
-    /* Read the JSON string from PyString pystr.
-    end is the index of the first character after the quote.
-    encoding is the encoding of pystr (must be an ASCII superset)
-    if strict is zero then literal control characters are allowed
-    *next_end_ptr is a return-by-reference index of the character
-        after the end quote
-
-    Return value is a new PyString (if ASCII-only) or PyUnicode
-    */
-    PyObject *rval;
-    Py_ssize_t len = PyString_GET_SIZE(pystr);
-    Py_ssize_t begin = end - 1;
-    Py_ssize_t next = begin;
-    int has_unicode = 0;
-    char *buf = PyString_AS_STRING(pystr);
-    PyObject *chunks = PyList_New(0);
-    if (chunks == NULL) {
-        goto bail;
-    }
-    if (end < 0 || len <= end) {
-        PyErr_SetString(PyExc_ValueError, "end is out of bounds");
-        goto bail;
-    }
-    while (1) {
-        /* Find the end of the string or the next escape */
-        Py_UNICODE c = 0;
-        PyObject *chunk = NULL;
-        for (next = end; next < len; next++) {
-            c = (unsigned char)buf[next];
-            if (c == '"' || c == '\\') {
-                break;
-            }
-            else if (strict && c <= 0x1f) {
-                raise_errmsg("Invalid control character at", pystr, next);
-                goto bail;
-            }
-            else if (c > 0x7f) {
-                has_unicode = 1;
-            }
-        }
-        if (!(c == '"' || c == '\\')) {
-            raise_errmsg("Unterminated string starting at", pystr, begin);
-            goto bail;
-        }
-        /* Pick up this chunk if it's not zero length */
-        if (next != end) {
-            PyObject *strchunk = PyString_FromStringAndSize(&buf[end], next - end);
-            if (strchunk == NULL) {
-                goto bail;
-            }
-            if (has_unicode) {
-                chunk = PyUnicode_FromEncodedObject(strchunk, encoding, NULL);
-                Py_DECREF(strchunk);
-                if (chunk == NULL) {
-                    goto bail;
-                }
-            }
-            else {
-                chunk = strchunk;
-            }
-            if (PyList_Append(chunks, chunk)) {
-                Py_DECREF(chunk);
-                goto bail;
-            }
-            Py_DECREF(chunk);
-        }
-        next++;
-        if (c == '"') {
-            end = next;
-            break;
-        }
-        if (next == len) {
-            raise_errmsg("Unterminated string starting at", pystr, begin);
-            goto bail;
-        }
-        c = buf[next];
-        if (c != 'u') {
-            /* Non-unicode backslash escapes */
-            end = next + 1;
-            switch (c) {
-                case '"': break;
-                case '\\': break;
-                case '/': break;
-                case 'b': c = '\b'; break;
-                case 'f': c = '\f'; break;
-                case 'n': c = '\n'; break;
-                case 'r': c = '\r'; break;
-                case 't': c = '\t'; break;
-                default: c = 0;
-            }
-            if (c == 0) {
-                raise_errmsg("Invalid \\escape", pystr, end - 2);
-                goto bail;
-            }
-        }
-        else {
-            c = 0;
-            next++;
-            end = next + 4;
-            if (end >= len) {
-                raise_errmsg("Invalid \\uXXXX escape", pystr, next - 1);
-                goto bail;
-            }
-            /* Decode 4 hex digits */
-            for (; next < end; next++) {
-                Py_UNICODE digit = buf[next];
-                c <<= 4;
-                switch (digit) {
-                    case '0': case '1': case '2': case '3': case '4':
-                    case '5': case '6': case '7': case '8': case '9':
-                        c |= (digit - '0'); break;
-                    case 'a': case 'b': case 'c': case 'd': case 'e':
-                    case 'f':
-                        c |= (digit - 'a' + 10); break;
-                    case 'A': case 'B': case 'C': case 'D': case 'E':
-                    case 'F':
-                        c |= (digit - 'A' + 10); break;
-                    default:
-                        raise_errmsg("Invalid \\uXXXX escape", pystr, end - 5);
-                        goto bail;
-                }
-            }
-#ifdef Py_UNICODE_WIDE
-            /* Surrogate pair */
-            if ((c & 0xfc00) == 0xd800) {
-                Py_UNICODE c2 = 0;
-                if (end + 6 >= len) {
-                    raise_errmsg("Unpaired high surrogate", pystr, end - 5);
-                    goto bail;
-                }
-                if (buf[next++] != '\\' || buf[next++] != 'u') {
-                    raise_errmsg("Unpaired high surrogate", pystr, end - 5);
-                    goto bail;
-                }
-                end += 6;
-                /* Decode 4 hex digits */
-                for (; next < end; next++) {
-                    c2 <<= 4;
-                    Py_UNICODE digit = buf[next];
-                    switch (digit) {
-                        case '0': case '1': case '2': case '3': case '4':
-                        case '5': case '6': case '7': case '8': case '9':
-                            c2 |= (digit - '0'); break;
-                        case 'a': case 'b': case 'c': case 'd': case 'e':
-                        case 'f':
-                            c2 |= (digit - 'a' + 10); break;
-                        case 'A': case 'B': case 'C': case 'D': case 'E':
-                        case 'F':
-                            c2 |= (digit - 'A' + 10); break;
-                        default:
-                            raise_errmsg("Invalid \\uXXXX escape", pystr, end - 5);
-                            goto bail;
-                    }
-                }
-                if ((c2 & 0xfc00) != 0xdc00) {
-                    raise_errmsg("Unpaired high surrogate", pystr, end - 5);
-                    goto bail;
-                }
-                c = 0x10000 + (((c - 0xd800) << 10) | (c2 - 0xdc00));
-            }
-            else if ((c & 0xfc00) == 0xdc00) {
-                raise_errmsg("Unpaired low surrogate", pystr, end - 5);
-                goto bail;
-            }
-#endif
-        }
-        if (c > 0x7f) {
-            has_unicode = 1;
-        }
-        if (has_unicode) {
-            chunk = PyUnicode_FromUnicode(&c, 1);
-            if (chunk == NULL) {
-                goto bail;
-            }
-        }
-        else {
-            char c_char = Py_CHARMASK(c);
-            chunk = PyString_FromStringAndSize(&c_char, 1);
-            if (chunk == NULL) {
-                goto bail;
-            }
-        }
-        if (PyList_Append(chunks, chunk)) {
-            Py_DECREF(chunk);
-            goto bail;
-        }
-        Py_DECREF(chunk);
-    }
-
-    rval = join_list_string(chunks);
-    if (rval == NULL) {
-        goto bail;
-    }
-    Py_CLEAR(chunks);
-    *next_end_ptr = end;
-    return rval;
-bail:
-    *next_end_ptr = -1;
-    Py_XDECREF(chunks);
-    return NULL;
-}
-
-
-static PyObject *
-scanstring_unicode(PyObject *pystr, Py_ssize_t end, int strict, Py_ssize_t *next_end_ptr)
-{
-    /* Read the JSON string from PyUnicode pystr.
-    end is the index of the first character after the quote.
-    if strict is zero then literal control characters are allowed
-    *next_end_ptr is a return-by-reference index of the character
-        after the end quote
-
-    Return value is a new PyUnicode
-    */
-    PyObject *rval;
-    Py_ssize_t len = PyUnicode_GET_SIZE(pystr);
-    Py_ssize_t begin = end - 1;
-    Py_ssize_t next = begin;
-    const Py_UNICODE *buf = PyUnicode_AS_UNICODE(pystr);
-    PyObject *chunks = PyList_New(0);
-    if (chunks == NULL) {
-        goto bail;
-    }
-    if (end < 0 || len <= end) {
-        PyErr_SetString(PyExc_ValueError, "end is out of bounds");
-        goto bail;
-    }
-    while (1) {
-        /* Find the end of the string or the next escape */
-        Py_UNICODE c = 0;
-        PyObject *chunk = NULL;
-        for (next = end; next < len; next++) {
-            c = buf[next];
-            if (c == '"' || c == '\\') {
-                break;
-            }
-            else if (strict && c <= 0x1f) {
-                raise_errmsg("Invalid control character at", pystr, next);
-                goto bail;
-            }
-        }
-        if (!(c == '"' || c == '\\')) {
-            raise_errmsg("Unterminated string starting at", pystr, begin);
-            goto bail;
-        }
-        /* Pick up this chunk if it's not zero length */
-        if (next != end) {
-            chunk = PyUnicode_FromUnicode(&buf[end], next - end);
-            if (chunk == NULL) {
-                goto bail;
-            }
-            if (PyList_Append(chunks, chunk)) {
-                Py_DECREF(chunk);
-                goto bail;
-            }
-            Py_DECREF(chunk);
-        }
-        next++;
-        if (c == '"') {
-            end = next;
-            break;
-        }
-        if (next == len) {
-            raise_errmsg("Unterminated string starting at", pystr, begin);
-            goto bail;
-        }
-        c = buf[next];
-        if (c != 'u') {
-            /* Non-unicode backslash escapes */
-            end = next + 1;
-            switch (c) {
-                case '"': break;
-                case '\\': break;
-                case '/': break;
-                case 'b': c = '\b'; break;
-                case 'f': c = '\f'; break;
-                case 'n': c = '\n'; break;
-                case 'r': c = '\r'; break;
-                case 't': c = '\t'; break;
-                default: c = 0;
-            }
-            if (c == 0) {
-                raise_errmsg("Invalid \\escape", pystr, end - 2);
-                goto bail;
-            }
-        }
-        else {
-            c = 0;
-            next++;
-            end = next + 4;
-            if (end >= len) {
-                raise_errmsg("Invalid \\uXXXX escape", pystr, next - 1);
-                goto bail;
-            }
-            /* Decode 4 hex digits */
-            for (; next < end; next++) {
-                Py_UNICODE digit = buf[next];
-                c <<= 4;
-                switch (digit) {
-                    case '0': case '1': case '2': case '3': case '4':
-                    case '5': case '6': case '7': case '8': case '9':
-                        c |= (digit - '0'); break;
-                    case 'a': case 'b': case 'c': case 'd': case 'e':
-                    case 'f':
-                        c |= (digit - 'a' + 10); break;
-                    case 'A': case 'B': case 'C': case 'D': case 'E':
-                    case 'F':
-                        c |= (digit - 'A' + 10); break;
-                    default:
-                        raise_errmsg("Invalid \\uXXXX escape", pystr, end - 5);
-                        goto bail;
-                }
-            }
-#ifdef Py_UNICODE_WIDE
-            /* Surrogate pair */
-            if ((c & 0xfc00) == 0xd800) {
-                Py_UNICODE c2 = 0;
-                if (end + 6 >= len) {
-                    raise_errmsg("Unpaired high surrogate", pystr, end - 5);
-                    goto bail;
-                }
-                if (buf[next++] != '\\' || buf[next++] != 'u') {
-                    raise_errmsg("Unpaired high surrogate", pystr, end - 5);
-                    goto bail;
-                }
-                end += 6;
-                /* Decode 4 hex digits */
-                for (; next < end; next++) {
-                    c2 <<= 4;
-                    Py_UNICODE digit = buf[next];
-                    switch (digit) {
-                        case '0': case '1': case '2': case '3': case '4':
-                        case '5': case '6': case '7': case '8': case '9':
-                            c2 |= (digit - '0'); break;
-                        case 'a': case 'b': case 'c': case 'd': case 'e':
-                        case 'f':
-                            c2 |= (digit - 'a' + 10); break;
-                        case 'A': case 'B': case 'C': case 'D': case 'E':
-                        case 'F':
-                            c2 |= (digit - 'A' + 10); break;
-                        default:
-                            raise_errmsg("Invalid \\uXXXX escape", pystr, end - 5);
-                            goto bail;
-                    }
-                }
-                if ((c2 & 0xfc00) != 0xdc00) {
-                    raise_errmsg("Unpaired high surrogate", pystr, end - 5);
-                    goto bail;
-                }
-                c = 0x10000 + (((c - 0xd800) << 10) | (c2 - 0xdc00));
-            }
-            else if ((c & 0xfc00) == 0xdc00) {
-                raise_errmsg("Unpaired low surrogate", pystr, end - 5);
-                goto bail;
-            }
-#endif
-        }
-        chunk = PyUnicode_FromUnicode(&c, 1);
-        if (chunk == NULL) {
-            goto bail;
-        }
-        if (PyList_Append(chunks, chunk)) {
-            Py_DECREF(chunk);
-            goto bail;
-        }
-        Py_DECREF(chunk);
-    }
-
-    rval = join_list_unicode(chunks);
-    if (rval == NULL) {
-        goto bail;
-    }
-    Py_DECREF(chunks);
-    *next_end_ptr = end;
-    return rval;
-bail:
-    *next_end_ptr = -1;
-    Py_XDECREF(chunks);
-    return NULL;
-}
-
-PyDoc_STRVAR(pydoc_scanstring,
-    "scanstring(basestring, end, encoding, strict=True) -> (str, end)\n"
-    "\n"
-    "Scan the string s for a JSON string. End is the index of the\n"
-    "character in s after the quote that started the JSON string.\n"
-    "Unescapes all valid JSON string escape sequences and raises ValueError\n"
-    "on attempt to decode an invalid string. If strict is False then literal\n"
-    "control characters are allowed in the string.\n"
-    "\n"
-    "Returns a tuple of the decoded string and the index of the character in s\n"
-    "after the end quote."
-);
-
-static PyObject *
-py_scanstring(PyObject* self UNUSED, PyObject *args)
-{
-    PyObject *pystr;
-    PyObject *rval;
-    Py_ssize_t end;
-    Py_ssize_t next_end = -1;
-    char *encoding = NULL;
-    int strict = 1;
-    if (!PyArg_ParseTuple(args, "OO&|zi:scanstring", &pystr, _convertPyInt_AsSsize_t, &end, &encoding, &strict)) {
-        return NULL;
-    }
-    if (encoding == NULL) {
-        encoding = DEFAULT_ENCODING;
-    }
-    if (PyString_Check(pystr)) {
-        rval = scanstring_str(pystr, end, encoding, strict, &next_end);
-    }
-    else if (PyUnicode_Check(pystr)) {
-        rval = scanstring_unicode(pystr, end, strict, &next_end);
-    }
-    else {
-        PyErr_Format(PyExc_TypeError,
-                     "first argument must be a string, not %.80s",
-                     Py_TYPE(pystr)->tp_name);
-        return NULL;
-    }
-    return _build_rval_index_tuple(rval, next_end);
-}
-
-PyDoc_STRVAR(pydoc_encode_basestring_ascii,
-    "encode_basestring_ascii(basestring) -> str\n"
-    "\n"
-    "Return an ASCII-only JSON representation of a Python string"
-);
-
-static PyObject *
-py_encode_basestring_ascii(PyObject* self UNUSED, PyObject *pystr)
-{
-    /* Return an ASCII-only JSON representation of a Python string */
-    /* METH_O */
-    if (PyString_Check(pystr)) {
-        return ascii_escape_str(pystr);
-    }
-    else if (PyUnicode_Check(pystr)) {
-        return ascii_escape_unicode(pystr);
-    }
-    else {
-        PyErr_Format(PyExc_TypeError,
-                     "first argument must be a string, not %.80s",
-                     Py_TYPE(pystr)->tp_name);
-        return NULL;
-    }
-}
-
-static void
-scanner_dealloc(PyObject *self)
-{
-    /* Deallocate scanner object */
-    scanner_clear(self);
-    Py_TYPE(self)->tp_free(self);
-}
-
-static int
-scanner_traverse(PyObject *self, visitproc visit, void *arg)
-{
-    PyScannerObject *s;
-    assert(PyScanner_Check(self));
-    s = (PyScannerObject *)self;
-    Py_VISIT(s->encoding);
-    Py_VISIT(s->strict);
-    Py_VISIT(s->object_hook);
-    Py_VISIT(s->parse_float);
-    Py_VISIT(s->parse_int);
-    Py_VISIT(s->parse_constant);
-    return 0;
-}
-
-static int
-scanner_clear(PyObject *self)
-{
-    PyScannerObject *s;
-    assert(PyScanner_Check(self));
-    s = (PyScannerObject *)self;
-    Py_CLEAR(s->encoding);
-    Py_CLEAR(s->strict);
-    Py_CLEAR(s->object_hook);
-    Py_CLEAR(s->parse_float);
-    Py_CLEAR(s->parse_int);
-    Py_CLEAR(s->parse_constant);
-    return 0;
-}
-
-static PyObject *
-_parse_object_str(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssize_t *next_idx_ptr) {
-    /* Read a JSON object from PyString pystr.
-    idx is the index of the first character after the opening curly brace.
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the closing curly brace.
-
-    Returns a new PyObject (usually a dict, but object_hook can change that)
-    */
-    char *str = PyString_AS_STRING(pystr);
-    Py_ssize_t end_idx = PyString_GET_SIZE(pystr) - 1;
-    PyObject *rval = PyDict_New();
-    PyObject *key = NULL;
-    PyObject *val = NULL;
-    char *encoding = PyString_AS_STRING(s->encoding);
-    int strict = PyObject_IsTrue(s->strict);
-    Py_ssize_t next_idx;
-    if (rval == NULL)
-        return NULL;
-
-    /* skip whitespace after { */
-    while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-    /* only loop if the object is non-empty */
-    if (idx <= end_idx && str[idx] != '}') {
-        while (idx <= end_idx) {
-            /* read key */
-            if (str[idx] != '"') {
-                raise_errmsg("Expecting property name", pystr, idx);
-                goto bail;
-            }
-            key = scanstring_str(pystr, idx + 1, encoding, strict, &next_idx);
-            if (key == NULL)
-                goto bail;
-            idx = next_idx;
-
-            /* skip whitespace between key and : delimiter, read :, skip whitespace */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-            if (idx > end_idx || str[idx] != ':') {
-                raise_errmsg("Expecting : delimiter", pystr, idx);
-                goto bail;
-            }
-            idx++;
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-            /* read any JSON data type */
-            val = scan_once_str(s, pystr, idx, &next_idx);
-            if (val == NULL)
-                goto bail;
-
-            if (PyDict_SetItem(rval, key, val) == -1)
-                goto bail;
-
-            Py_CLEAR(key);
-            Py_CLEAR(val);
-            idx = next_idx;
-
-            /* skip whitespace before } or , */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-            /* bail if the object is closed or we didn't get the , delimiter */
-            if (idx > end_idx) break;
-            if (str[idx] == '}') {
-                break;
-            }
-            else if (str[idx] != ',') {
-                raise_errmsg("Expecting , delimiter", pystr, idx);
-                goto bail;
-            }
-            idx++;
-
-            /* skip whitespace after , delimiter */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-        }
-    }
-    /* verify that idx < end_idx, str[idx] should be '}' */
-    if (idx > end_idx || str[idx] != '}') {
-        raise_errmsg("Expecting object", pystr, end_idx);
-        goto bail;
-    }
-    /* if object_hook is not None: rval = object_hook(rval) */
-    if (s->object_hook != Py_None) {
-        val = PyObject_CallFunctionObjArgs(s->object_hook, rval, NULL);
-        if (val == NULL)
-            goto bail;
-        Py_DECREF(rval);
-        rval = val;
-        val = NULL;
-    }
-    *next_idx_ptr = idx + 1;
-    return rval;
-bail:
-    Py_XDECREF(key);
-    Py_XDECREF(val);
-    Py_DECREF(rval);
-    return NULL;
-}
-
-static PyObject *
-_parse_object_unicode(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssize_t *next_idx_ptr) {
-    /* Read a JSON object from PyUnicode pystr.
-    idx is the index of the first character after the opening curly brace.
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the closing curly brace.
-
-    Returns a new PyObject (usually a dict, but object_hook can change that)
-    */
-    Py_UNICODE *str = PyUnicode_AS_UNICODE(pystr);
-    Py_ssize_t end_idx = PyUnicode_GET_SIZE(pystr) - 1;
-    PyObject *val = NULL;
-    PyObject *rval = PyDict_New();
-    PyObject *key = NULL;
-    int strict = PyObject_IsTrue(s->strict);
-    Py_ssize_t next_idx;
-    if (rval == NULL)
-        return NULL;
-
-    /* skip whitespace after { */
-    while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-    /* only loop if the object is non-empty */
-    if (idx <= end_idx && str[idx] != '}') {
-        while (idx <= end_idx) {
-            /* read key */
-            if (str[idx] != '"') {
-                raise_errmsg("Expecting property name", pystr, idx);
-                goto bail;
-            }
-            key = scanstring_unicode(pystr, idx + 1, strict, &next_idx);
-            if (key == NULL)
-                goto bail;
-            idx = next_idx;
-
-            /* skip whitespace between key and : delimiter, read :, skip whitespace */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-            if (idx > end_idx || str[idx] != ':') {
-                raise_errmsg("Expecting : delimiter", pystr, idx);
-                goto bail;
-            }
-            idx++;
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-            /* read any JSON term */
-            val = scan_once_unicode(s, pystr, idx, &next_idx);
-            if (val == NULL)
-                goto bail;
-
-            if (PyDict_SetItem(rval, key, val) == -1)
-                goto bail;
-
-            Py_CLEAR(key);
-            Py_CLEAR(val);
-            idx = next_idx;
-
-            /* skip whitespace before } or , */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-            /* bail if the object is closed or we didn't get the , delimiter */
-            if (idx > end_idx) break;
-            if (str[idx] == '}') {
-                break;
-            }
-            else if (str[idx] != ',') {
-                raise_errmsg("Expecting , delimiter", pystr, idx);
-                goto bail;
-            }
-            idx++;
-
-            /* skip whitespace after , delimiter */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-        }
-    }
-
-    /* verify that idx < end_idx, str[idx] should be '}' */
-    if (idx > end_idx || str[idx] != '}') {
-        raise_errmsg("Expecting object", pystr, end_idx);
-        goto bail;
-    }
-
-    /* if object_hook is not None: rval = object_hook(rval) */
-    if (s->object_hook != Py_None) {
-        val = PyObject_CallFunctionObjArgs(s->object_hook, rval, NULL);
-        if (val == NULL)
-            goto bail;
-        Py_DECREF(rval);
-        rval = val;
-        val = NULL;
-    }
-    *next_idx_ptr = idx + 1;
-    return rval;
-bail:
-    Py_XDECREF(key);
-    Py_XDECREF(val);
-    Py_DECREF(rval);
-    return NULL;
-}
-
-static PyObject *
-_parse_array_str(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssize_t *next_idx_ptr) {
-    /* Read a JSON array from PyString pystr.
-    idx is the index of the first character after the opening brace.
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the closing brace.
-
-    Returns a new PyList
-    */
-    char *str = PyString_AS_STRING(pystr);
-    Py_ssize_t end_idx = PyString_GET_SIZE(pystr) - 1;
-    PyObject *val = NULL;
-    PyObject *rval = PyList_New(0);
-    Py_ssize_t next_idx;
-    if (rval == NULL)
-        return NULL;
-
-    /* skip whitespace after [ */
-    while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-    /* only loop if the array is non-empty */
-    if (idx <= end_idx && str[idx] != ']') {
-        while (idx <= end_idx) {
-
-            /* read any JSON term and de-tuplefy the (rval, idx) */
-            val = scan_once_str(s, pystr, idx, &next_idx);
-            if (val == NULL)
-                goto bail;
-
-            if (PyList_Append(rval, val) == -1)
-                goto bail;
-
-            Py_CLEAR(val);
-            idx = next_idx;
-
-            /* skip whitespace between term and , */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-            /* bail if the array is closed or we didn't get the , delimiter */
-            if (idx > end_idx) break;
-            if (str[idx] == ']') {
-                break;
-            }
-            else if (str[idx] != ',') {
-                raise_errmsg("Expecting , delimiter", pystr, idx);
-                goto bail;
-            }
-            idx++;
-
-            /* skip whitespace after , */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-        }
-    }
-
-    /* verify that idx < end_idx, str[idx] should be ']' */
-    if (idx > end_idx || str[idx] != ']') {
-        raise_errmsg("Expecting object", pystr, end_idx);
-        goto bail;
-    }
-    *next_idx_ptr = idx + 1;
-    return rval;
-bail:
-    Py_XDECREF(val);
-    Py_DECREF(rval);
-    return NULL;
-}
-
-static PyObject *
-_parse_array_unicode(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssize_t *next_idx_ptr) {
-    /* Read a JSON array from PyString pystr.
-    idx is the index of the first character after the opening brace.
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the closing brace.
-
-    Returns a new PyList
-    */
-    Py_UNICODE *str = PyUnicode_AS_UNICODE(pystr);
-    Py_ssize_t end_idx = PyUnicode_GET_SIZE(pystr) - 1;
-    PyObject *val = NULL;
-    PyObject *rval = PyList_New(0);
-    Py_ssize_t next_idx;
-    if (rval == NULL)
-        return NULL;
-
-    /* skip whitespace after [ */
-    while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-    /* only loop if the array is non-empty */
-    if (idx <= end_idx && str[idx] != ']') {
-        while (idx <= end_idx) {
-
-            /* read any JSON term  */
-            val = scan_once_unicode(s, pystr, idx, &next_idx);
-            if (val == NULL)
-                goto bail;
-
-            if (PyList_Append(rval, val) == -1)
-                goto bail;
-
-            Py_CLEAR(val);
-            idx = next_idx;
-
-            /* skip whitespace between term and , */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-
-            /* bail if the array is closed or we didn't get the , delimiter */
-            if (idx > end_idx) break;
-            if (str[idx] == ']') {
-                break;
-            }
-            else if (str[idx] != ',') {
-                raise_errmsg("Expecting , delimiter", pystr, idx);
-                goto bail;
-            }
-            idx++;
-
-            /* skip whitespace after , */
-            while (idx <= end_idx && IS_WHITESPACE(str[idx])) idx++;
-        }
-    }
-
-    /* verify that idx < end_idx, str[idx] should be ']' */
-    if (idx > end_idx || str[idx] != ']') {
-        raise_errmsg("Expecting object", pystr, end_idx);
-        goto bail;
-    }
-    *next_idx_ptr = idx + 1;
-    return rval;
-bail:
-    Py_XDECREF(val);
-    Py_DECREF(rval);
-    return NULL;
-}
-
-static PyObject *
-_parse_constant(PyScannerObject *s, char *constant, Py_ssize_t idx, Py_ssize_t *next_idx_ptr) {
-    /* Read a JSON constant from PyString pystr.
-    constant is the constant string that was found
-        ("NaN", "Infinity", "-Infinity").
-    idx is the index of the first character of the constant
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the constant.
-
-    Returns the result of parse_constant
-    */
-    PyObject *cstr;
-    PyObject *rval;
-    /* constant is "NaN", "Infinity", or "-Infinity" */
-    cstr = PyString_InternFromString(constant);
-    if (cstr == NULL)
-        return NULL;
-
-    /* rval = parse_constant(constant) */
-    rval = PyObject_CallFunctionObjArgs(s->parse_constant, cstr, NULL);
-    idx += PyString_GET_SIZE(cstr);
-    Py_DECREF(cstr);
-    *next_idx_ptr = idx;
-    return rval;
-}
-
-static PyObject *
-_match_number_str(PyScannerObject *s, PyObject *pystr, Py_ssize_t start, Py_ssize_t *next_idx_ptr) {
-    /* Read a JSON number from PyString pystr.
-    idx is the index of the first character of the number
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the number.
-
-    Returns a new PyObject representation of that number:
-        PyInt, PyLong, or PyFloat.
-        May return other types if parse_int or parse_float are set
-    */
-    char *str = PyString_AS_STRING(pystr);
-    Py_ssize_t end_idx = PyString_GET_SIZE(pystr) - 1;
-    Py_ssize_t idx = start;
-    int is_float = 0;
-    PyObject *rval;
-    PyObject *numstr;
-
-    /* read a sign if it's there, make sure it's not the end of the string */
-    if (str[idx] == '-') {
-        idx++;
-        if (idx > end_idx) {
-            PyErr_SetNone(PyExc_StopIteration);
-            return NULL;
-        }
-    }
-
-    /* read as many integer digits as we find as long as it doesn't start with 0 */
-    if (str[idx] >= '1' && str[idx] <= '9') {
-        idx++;
-        while (idx <= end_idx && str[idx] >= '0' && str[idx] <= '9') idx++;
-    }
-    /* if it starts with 0 we only expect one integer digit */
-    else if (str[idx] == '0') {
-        idx++;
-    }
-    /* no integer digits, error */
-    else {
-        PyErr_SetNone(PyExc_StopIteration);
-        return NULL;
-    }
-
-    /* if the next char is '.' followed by a digit then read all float digits */
-    if (idx < end_idx && str[idx] == '.' && str[idx + 1] >= '0' && str[idx + 1] <= '9') {
-        is_float = 1;
-        idx += 2;
-        while (idx <= end_idx && str[idx] >= '0' && str[idx] <= '9') idx++;
-    }
-
-    /* if the next char is 'e' or 'E' then maybe read the exponent (or backtrack) */
-    if (idx < end_idx && (str[idx] == 'e' || str[idx] == 'E')) {
-
-        /* save the index of the 'e' or 'E' just in case we need to backtrack */
-        Py_ssize_t e_start = idx;
-        idx++;
-
-        /* read an exponent sign if present */
-        if (idx < end_idx && (str[idx] == '-' || str[idx] == '+')) idx++;
-
-        /* read all digits */
-        while (idx <= end_idx && str[idx] >= '0' && str[idx] <= '9') idx++;
-
-        /* if we got a digit, then parse as float. if not, backtrack */
-        if (str[idx - 1] >= '0' && str[idx - 1] <= '9') {
-            is_float = 1;
-        }
-        else {
-            idx = e_start;
-        }
-    }
-
-    /* copy the section we determined to be a number */
-    numstr = PyString_FromStringAndSize(&str[start], idx - start);
-    if (numstr == NULL)
-        return NULL;
-    if (is_float) {
-        /* parse as a float using a fast path if available, otherwise call user defined method */
-        if (s->parse_float != (PyObject *)&PyFloat_Type) {
-            rval = PyObject_CallFunctionObjArgs(s->parse_float, numstr, NULL);
-        }
-        else {
-            rval = PyFloat_FromDouble(PyOS_ascii_atof(PyString_AS_STRING(numstr)));
-        }
-    }
-    else {
-        /* parse as an int using a fast path if available, otherwise call user defined method */
-        if (s->parse_int != (PyObject *)&PyInt_Type) {
-            rval = PyObject_CallFunctionObjArgs(s->parse_int, numstr, NULL);
-        }
-        else {
-            rval = PyInt_FromString(PyString_AS_STRING(numstr), NULL, 10);
-        }
-    }
-    Py_DECREF(numstr);
-    *next_idx_ptr = idx;
-    return rval;
-}
-
-static PyObject *
-_match_number_unicode(PyScannerObject *s, PyObject *pystr, Py_ssize_t start, Py_ssize_t *next_idx_ptr) {
-    /* Read a JSON number from PyUnicode pystr.
-    idx is the index of the first character of the number
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the number.
-
-    Returns a new PyObject representation of that number:
-        PyInt, PyLong, or PyFloat.
-        May return other types if parse_int or parse_float are set
-    */
-    Py_UNICODE *str = PyUnicode_AS_UNICODE(pystr);
-    Py_ssize_t end_idx = PyUnicode_GET_SIZE(pystr) - 1;
-    Py_ssize_t idx = start;
-    int is_float = 0;
-    PyObject *rval;
-    PyObject *numstr;
-
-    /* read a sign if it's there, make sure it's not the end of the string */
-    if (str[idx] == '-') {
-        idx++;
-        if (idx > end_idx) {
-            PyErr_SetNone(PyExc_StopIteration);
-            return NULL;
-        }
-    }
-
-    /* read as many integer digits as we find as long as it doesn't start with 0 */
-    if (str[idx] >= '1' && str[idx] <= '9') {
-        idx++;
-        while (idx <= end_idx && str[idx] >= '0' && str[idx] <= '9') idx++;
-    }
-    /* if it starts with 0 we only expect one integer digit */
-    else if (str[idx] == '0') {
-        idx++;
-    }
-    /* no integer digits, error */
-    else {
-        PyErr_SetNone(PyExc_StopIteration);
-        return NULL;
-    }
-
-    /* if the next char is '.' followed by a digit then read all float digits */
-    if (idx < end_idx && str[idx] == '.' && str[idx + 1] >= '0' && str[idx + 1] <= '9') {
-        is_float = 1;
-        idx += 2;
-        while (idx < end_idx && str[idx] >= '0' && str[idx] <= '9') idx++;
-    }
-
-    /* if the next char is 'e' or 'E' then maybe read the exponent (or backtrack) */
-    if (idx < end_idx && (str[idx] == 'e' || str[idx] == 'E')) {
-        Py_ssize_t e_start = idx;
-        idx++;
-
-        /* read an exponent sign if present */
-        if (idx < end_idx && (str[idx] == '-' || str[idx] == '+')) idx++;
-
-        /* read all digits */
-        while (idx <= end_idx && str[idx] >= '0' && str[idx] <= '9') idx++;
-
-        /* if we got a digit, then parse as float. if not, backtrack */
-        if (str[idx - 1] >= '0' && str[idx - 1] <= '9') {
-            is_float = 1;
-        }
-        else {
-            idx = e_start;
-        }
-    }
-
-    /* copy the section we determined to be a number */
-    numstr = PyUnicode_FromUnicode(&str[start], idx - start);
-    if (numstr == NULL)
-        return NULL;
-    if (is_float) {
-        /* parse as a float using a fast path if available, otherwise call user defined method */
-        if (s->parse_float != (PyObject *)&PyFloat_Type) {
-            rval = PyObject_CallFunctionObjArgs(s->parse_float, numstr, NULL);
-        }
-        else {
-            rval = PyFloat_FromString(numstr, NULL);
-        }
-    }
-    else {
-        /* no fast path for unicode -> int, just call */
-        rval = PyObject_CallFunctionObjArgs(s->parse_int, numstr, NULL);
-    }
-    Py_DECREF(numstr);
-    *next_idx_ptr = idx;
-    return rval;
-}
-
-static PyObject *
-scan_once_str(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssize_t *next_idx_ptr)
-{
-    /* Read one JSON term (of any kind) from PyString pystr.
-    idx is the index of the first character of the term
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the number.
-
-    Returns a new PyObject representation of the term.
-    */
-    char *str = PyString_AS_STRING(pystr);
-    Py_ssize_t length = PyString_GET_SIZE(pystr);
-    if (idx >= length) {
-        PyErr_SetNone(PyExc_StopIteration);
-        return NULL;
-    }
-    switch (str[idx]) {
-        case '"':
-            /* string */
-            return scanstring_str(pystr, idx + 1,
-                PyString_AS_STRING(s->encoding),
-                PyObject_IsTrue(s->strict),
-                next_idx_ptr);
-        case '{':
-            /* object */
-            return _parse_object_str(s, pystr, idx + 1, next_idx_ptr);
-        case '[':
-            /* array */
-            return _parse_array_str(s, pystr, idx + 1, next_idx_ptr);
-        case 'n':
-            /* null */
-            if ((idx + 3 < length) && str[idx + 1] == 'u' && str[idx + 2] == 'l' && str[idx + 3] == 'l') {
-                Py_INCREF(Py_None);
-                *next_idx_ptr = idx + 4;
-                return Py_None;
-            }
-            break;
-        case 't':
-            /* true */
-            if ((idx + 3 < length) && str[idx + 1] == 'r' && str[idx + 2] == 'u' && str[idx + 3] == 'e') {
-                Py_INCREF(Py_True);
-                *next_idx_ptr = idx + 4;
-                return Py_True;
-            }
-            break;
-        case 'f':
-            /* false */
-            if ((idx + 4 < length) && str[idx + 1] == 'a' && str[idx + 2] == 'l' && str[idx + 3] == 's' && str[idx + 4] == 'e') {
-                Py_INCREF(Py_False);
-                *next_idx_ptr = idx + 5;
-                return Py_False;
-            }
-            break;
-        case 'N':
-            /* NaN */
-            if ((idx + 2 < length) && str[idx + 1] == 'a' && str[idx + 2] == 'N') {
-                return _parse_constant(s, "NaN", idx, next_idx_ptr);
-            }
-            break;
-        case 'I':
-            /* Infinity */
-            if ((idx + 7 < length) && str[idx + 1] == 'n' && str[idx + 2] == 'f' && str[idx + 3] == 'i' && str[idx + 4] == 'n' && str[idx + 5] == 'i' && str[idx + 6] == 't' && str[idx + 7] == 'y') {
-                return _parse_constant(s, "Infinity", idx, next_idx_ptr);
-            }
-            break;
-        case '-':
-            /* -Infinity */
-            if ((idx + 8 < length) && str[idx + 1] == 'I' && str[idx + 2] == 'n' && str[idx + 3] == 'f' && str[idx + 4] == 'i' && str[idx + 5] == 'n' && str[idx + 6] == 'i' && str[idx + 7] == 't' && str[idx + 8] == 'y') {
-                return _parse_constant(s, "-Infinity", idx, next_idx_ptr);
-            }
-            break;
-    }
-    /* Didn't find a string, object, array, or named constant. Look for a number. */
-    return _match_number_str(s, pystr, idx, next_idx_ptr);
-}
-
-static PyObject *
-scan_once_unicode(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssize_t *next_idx_ptr)
-{
-    /* Read one JSON term (of any kind) from PyUnicode pystr.
-    idx is the index of the first character of the term
-    *next_idx_ptr is a return-by-reference index to the first character after
-        the number.
-
-    Returns a new PyObject representation of the term.
-    */
-    Py_UNICODE *str = PyUnicode_AS_UNICODE(pystr);
-    Py_ssize_t length = PyUnicode_GET_SIZE(pystr);
-    if (idx >= length) {
-        PyErr_SetNone(PyExc_StopIteration);
-        return NULL;
-    }
-    switch (str[idx]) {
-        case '"':
-            /* string */
-            return scanstring_unicode(pystr, idx + 1,
-                PyObject_IsTrue(s->strict),
-                next_idx_ptr);
-        case '{':
-            /* object */
-            return _parse_object_unicode(s, pystr, idx + 1, next_idx_ptr);
-        case '[':
-            /* array */
-            return _parse_array_unicode(s, pystr, idx + 1, next_idx_ptr);
-        case 'n':
-            /* null */
-            if ((idx + 3 < length) && str[idx + 1] == 'u' && str[idx + 2] == 'l' && str[idx + 3] == 'l') {
-                Py_INCREF(Py_None);
-                *next_idx_ptr = idx + 4;
-                return Py_None;
-            }
-            break;
-        case 't':
-            /* true */
-            if ((idx + 3 < length) && str[idx + 1] == 'r' && str[idx + 2] == 'u' && str[idx + 3] == 'e') {
-                Py_INCREF(Py_True);
-                *next_idx_ptr = idx + 4;
-                return Py_True;
-            }
-            break;
-        case 'f':
-            /* false */
-            if ((idx + 4 < length) && str[idx + 1] == 'a' && str[idx + 2] == 'l' && str[idx + 3] == 's' && str[idx + 4] == 'e') {
-                Py_INCREF(Py_False);
-                *next_idx_ptr = idx + 5;
-                return Py_False;
-            }
-            break;
-        case 'N':
-            /* NaN */
-            if ((idx + 2 < length) && str[idx + 1] == 'a' && str[idx + 2] == 'N') {
-                return _parse_constant(s, "NaN", idx, next_idx_ptr);
-            }
-            break;
-        case 'I':
-            /* Infinity */
-            if ((idx + 7 < length) && str[idx + 1] == 'n' && str[idx + 2] == 'f' && str[idx + 3] == 'i' && str[idx + 4] == 'n' && str[idx + 5] == 'i' && str[idx + 6] == 't' && str[idx + 7] == 'y') {
-                return _parse_constant(s, "Infinity", idx, next_idx_ptr);
-            }
-            break;
-        case '-':
-            /* -Infinity */
-            if ((idx + 8 < length) && str[idx + 1] == 'I' && str[idx + 2] == 'n' && str[idx + 3] == 'f' && str[idx + 4] == 'i' && str[idx + 5] == 'n' && str[idx + 6] == 'i' && str[idx + 7] == 't' && str[idx + 8] == 'y') {
-                return _parse_constant(s, "-Infinity", idx, next_idx_ptr);
-            }
-            break;
-    }
-    /* Didn't find a string, object, array, or named constant. Look for a number. */
-    return _match_number_unicode(s, pystr, idx, next_idx_ptr);
-}
-
-static PyObject *
-scanner_call(PyObject *self, PyObject *args, PyObject *kwds)
-{
-    /* Python callable interface to scan_once_{str,unicode} */
-    PyObject *pystr;
-    PyObject *rval;
-    Py_ssize_t idx;
-    Py_ssize_t next_idx = -1;
-    static char *kwlist[] = {"string", "idx", NULL};
-    PyScannerObject *s;
-    assert(PyScanner_Check(self));
-    s = (PyScannerObject *)self;
-    if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO&:scan_once", kwlist, &pystr, _convertPyInt_AsSsize_t, &idx))
-        return NULL;
-
-    if (PyString_Check(pystr)) {
-        rval = scan_once_str(s, pystr, idx, &next_idx);
-    }
-    else if (PyUnicode_Check(pystr)) {
-        rval = scan_once_unicode(s, pystr, idx, &next_idx);
-    }
-    else {
-        PyErr_Format(PyExc_TypeError,
-                 "first argument must be a string, not %.80s",
-                 Py_TYPE(pystr)->tp_name);
-        return NULL;
-    }
-    return _build_rval_index_tuple(rval, next_idx);
-}
-
-static PyObject *
-scanner_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
-{
-    PyScannerObject *s;
-    s = (PyScannerObject *)type->tp_alloc(type, 0);
-    if (s != NULL) {
-        s->encoding = NULL;
-        s->strict = NULL;
-        s->object_hook = NULL;
-        s->parse_float = NULL;
-        s->parse_int = NULL;
-        s->parse_constant = NULL;
-    }
-    return (PyObject *)s;
-}
-
-static int
-scanner_init(PyObject *self, PyObject *args, PyObject *kwds)
-{
-    /* Initialize Scanner object */
-    PyObject *ctx;
-    static char *kwlist[] = {"context", NULL};
-    PyScannerObject *s;
-
-    assert(PyScanner_Check(self));
-    s = (PyScannerObject *)self;
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:make_scanner", kwlist, &ctx))
-        return -1;
-
-    /* PyString_AS_STRING is used on encoding */
-    s->encoding = PyObject_GetAttrString(ctx, "encoding");
-    if (s->encoding == Py_None) {
-        Py_DECREF(Py_None);
-        s->encoding = PyString_InternFromString(DEFAULT_ENCODING);
-    }
-    else if (PyUnicode_Check(s->encoding)) {
-        PyObject *tmp = PyUnicode_AsEncodedString(s->encoding, NULL, NULL);
-        Py_DECREF(s->encoding);
-        s->encoding = tmp;
-    }
-    if (s->encoding == NULL || !PyString_Check(s->encoding))
-        goto bail;
-
-    /* All of these will fail "gracefully" so we don't need to verify them */
-    s->strict = PyObject_GetAttrString(ctx, "strict");
-    if (s->strict == NULL)
-        goto bail;
-    s->object_hook = PyObject_GetAttrString(ctx, "object_hook");
-    if (s->object_hook == NULL)
-        goto bail;
-    s->parse_float = PyObject_GetAttrString(ctx, "parse_float");
-    if (s->parse_float == NULL)
-        goto bail;
-    s->parse_int = PyObject_GetAttrString(ctx, "parse_int");
-    if (s->parse_int == NULL)
-        goto bail;
-    s->parse_constant = PyObject_GetAttrString(ctx, "parse_constant");
-    if (s->parse_constant == NULL)
-        goto bail;
-
-    return 0;
-
-bail:
-    Py_CLEAR(s->encoding);
-    Py_CLEAR(s->strict);
-    Py_CLEAR(s->object_hook);
-    Py_CLEAR(s->parse_float);
-    Py_CLEAR(s->parse_int);
-    Py_CLEAR(s->parse_constant);
-    return -1;
-}
-
-PyDoc_STRVAR(scanner_doc, "JSON scanner object");
-
-static
-PyTypeObject PyScannerType = {
-    PyObject_HEAD_INIT(NULL)
-    0,                    /* tp_internal */
-    "simplejson._speedups.Scanner",       /* tp_name */
-    sizeof(PyScannerObject), /* tp_basicsize */
-    0,                    /* tp_itemsize */
-    scanner_dealloc, /* tp_dealloc */
-    0,                    /* tp_print */
-    0,                    /* tp_getattr */
-    0,                    /* tp_setattr */
-    0,                    /* tp_compare */
-    0,                    /* tp_repr */
-    0,                    /* tp_as_number */
-    0,                    /* tp_as_sequence */
-    0,                    /* tp_as_mapping */
-    0,                    /* tp_hash */
-    scanner_call,         /* tp_call */
-    0,                    /* tp_str */
-    0,/* PyObject_GenericGetAttr, */                    /* tp_getattro */
-    0,/* PyObject_GenericSetAttr, */                    /* tp_setattro */
-    0,                    /* tp_as_buffer */
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,   /* tp_flags */
-    scanner_doc,          /* tp_doc */
-    scanner_traverse,                    /* tp_traverse */
-    scanner_clear,                    /* tp_clear */
-    0,                    /* tp_richcompare */
-    0,                    /* tp_weaklistoffset */
-    0,                    /* tp_iter */
-    0,                    /* tp_iternext */
-    0,                    /* tp_methods */
-    scanner_members,                    /* tp_members */
-    0,                    /* tp_getset */
-    0,                    /* tp_base */
-    0,                    /* tp_dict */
-    0,                    /* tp_descr_get */
-    0,                    /* tp_descr_set */
-    0,                    /* tp_dictoffset */
-    scanner_init,                    /* tp_init */
-    0,/* PyType_GenericAlloc, */        /* tp_alloc */
-    scanner_new,          /* tp_new */
-    0,/* PyObject_GC_Del, */              /* tp_free */
-};
-
-static PyObject *
-encoder_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
-{
-    PyEncoderObject *s;
-    s = (PyEncoderObject *)type->tp_alloc(type, 0);
-    if (s != NULL) {
-        s->markers = NULL;
-        s->defaultfn = NULL;
-        s->encoder = NULL;
-        s->indent = NULL;
-        s->key_separator = NULL;
-        s->item_separator = NULL;
-        s->sort_keys = NULL;
-        s->skipkeys = NULL;
-    }
-    return (PyObject *)s;
-}
-
-static int
-encoder_init(PyObject *self, PyObject *args, PyObject *kwds)
-{
-    /* initialize Encoder object */
-    static char *kwlist[] = {"markers", "default", "encoder", "indent", "key_separator", "item_separator", "sort_keys", "skipkeys", "allow_nan", NULL};
-
-    PyEncoderObject *s;
-    PyObject *allow_nan;
-
-    assert(PyEncoder_Check(self));
-    s = (PyEncoderObject *)self;
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwds, "OOOOOOOOO:make_encoder", kwlist,
-        &s->markers, &s->defaultfn, &s->encoder, &s->indent, &s->key_separator, &s->item_separator, &s->sort_keys, &s->skipkeys, &allow_nan))
-        return -1;
-
-    Py_INCREF(s->markers);
-    Py_INCREF(s->defaultfn);
-    Py_INCREF(s->encoder);
-    Py_INCREF(s->indent);
-    Py_INCREF(s->key_separator);
-    Py_INCREF(s->item_separator);
-    Py_INCREF(s->sort_keys);
-    Py_INCREF(s->skipkeys);
-    s->fast_encode = (PyCFunction_Check(s->encoder) && PyCFunction_GetFunction(s->encoder) == (PyCFunction)py_encode_basestring_ascii);
-    s->allow_nan = PyObject_IsTrue(allow_nan);
-    return 0;
-}
-
-static PyObject *
-encoder_call(PyObject *self, PyObject *args, PyObject *kwds)
-{
-    /* Python callable interface to encode_listencode_obj */
-    static char *kwlist[] = {"obj", "_current_indent_level", NULL};
-    PyObject *obj;
-    PyObject *rval;
-    Py_ssize_t indent_level;
-    PyEncoderObject *s;
-    assert(PyEncoder_Check(self));
-    s = (PyEncoderObject *)self;
-    if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO&:_iterencode", kwlist,
-        &obj, _convertPyInt_AsSsize_t, &indent_level))
-        return NULL;
-    rval = PyList_New(0);
-    if (rval == NULL)
-        return NULL;
-    if (encoder_listencode_obj(s, rval, obj, indent_level)) {
-        Py_DECREF(rval);
-        return NULL;
-    }
-    return rval;
-}
-
-static PyObject *
-_encoded_const(PyObject *obj)
-{
-    /* Return the JSON string representation of None, True, False */
-    if (obj == Py_None) {
-        static PyObject *s_null = NULL;
-        if (s_null == NULL) {
-            s_null = PyString_InternFromString("null");
-        }
-        Py_INCREF(s_null);
-        return s_null;
-    }
-    else if (obj == Py_True) {
-        static PyObject *s_true = NULL;
-        if (s_true == NULL) {
-            s_true = PyString_InternFromString("true");
-        }
-        Py_INCREF(s_true);
-        return s_true;
-    }
-    else if (obj == Py_False) {
-        static PyObject *s_false = NULL;
-        if (s_false == NULL) {
-            s_false = PyString_InternFromString("false");
-        }
-        Py_INCREF(s_false);
-        return s_false;
-    }
-    else {
-        PyErr_SetString(PyExc_ValueError, "not a const");
-        return NULL;
-    }
-}
-
-static PyObject *
-encoder_encode_float(PyEncoderObject *s, PyObject *obj)
-{
-    /* Return the JSON representation of a PyFloat */
-    double i = PyFloat_AS_DOUBLE(obj);
-    if (!Py_IS_FINITE(i)) {
-        if (!s->allow_nan) {
-            PyErr_SetString(PyExc_ValueError, "Out of range float values are not JSON compliant");
-            return NULL;
-        }
-        if (i > 0) {
-            return PyString_FromString("Infinity");
-        }
-        else if (i < 0) {
-            return PyString_FromString("-Infinity");
-        }
-        else {
-            return PyString_FromString("NaN");
-        }
-    }
-    /* Use a better float format here? */
-    return PyObject_Repr(obj);
-}
-
-static PyObject *
-encoder_encode_string(PyEncoderObject *s, PyObject *obj)
-{
-    /* Return the JSON representation of a string */
-    if (s->fast_encode)
-        return py_encode_basestring_ascii(NULL, obj);
-    else
-        return PyObject_CallFunctionObjArgs(s->encoder, obj, NULL);
-}
-
-static int
-_steal_list_append(PyObject *lst, PyObject *stolen)
-{
-    /* Append stolen and then decrement its reference count */
-    int rval = PyList_Append(lst, stolen);
-    Py_DECREF(stolen);
-    return rval;
-}
-
-static int
-encoder_listencode_obj(PyEncoderObject *s, PyObject *rval, PyObject *obj, Py_ssize_t indent_level)
-{
-    /* Encode Python object obj to a JSON term, rval is a PyList */
-    PyObject *newobj;
-    int rv;
-
-    if (obj == Py_None || obj == Py_True || obj == Py_False) {
-        PyObject *cstr = _encoded_const(obj);
-        if (cstr == NULL)
-            return -1;
-        return _steal_list_append(rval, cstr);
-    }
-    else if (PyString_Check(obj) || PyUnicode_Check(obj))
-    {
-        PyObject *encoded = encoder_encode_string(s, obj);
-        if (encoded == NULL)
-            return -1;
-        return _steal_list_append(rval, encoded);
-    }
-    else if (PyInt_Check(obj) || PyLong_Check(obj)) {
-        PyObject *encoded = PyObject_Str(obj);
-        if (encoded == NULL)
-            return -1;
-        return _steal_list_append(rval, encoded);
-    }
-    else if (PyFloat_Check(obj)) {
-        PyObject *encoded = encoder_encode_float(s, obj);
-        if (encoded == NULL)
-            return -1;
-        return _steal_list_append(rval, encoded);
-    }
-    else if (PyList_Check(obj) || PyTuple_Check(obj)) {
-        return encoder_listencode_list(s, rval, obj, indent_level);
-    }
-    else if (PyDict_Check(obj)) {
-        return encoder_listencode_dict(s, rval, obj, indent_level);
-    }
-    else {
-        PyObject *ident = NULL;
-        if (s->markers != Py_None) {
-            int has_key;
-            ident = PyLong_FromVoidPtr(obj);
-            if (ident == NULL)
-                return -1;
-            has_key = PyDict_Contains(s->markers, ident);
-            if (has_key) {
-                if (has_key != -1)
-                    PyErr_SetString(PyExc_ValueError, "Circular reference detected");
-                Py_DECREF(ident);
-                return -1;
-            }
-            if (PyDict_SetItem(s->markers, ident, obj)) {
-                Py_DECREF(ident);
-                return -1;
-            }
-        }
-        newobj = PyObject_CallFunctionObjArgs(s->defaultfn, obj, NULL);
-        if (newobj == NULL) {
-            Py_XDECREF(ident);
-            return -1;
-        }
-        rv = encoder_listencode_obj(s, rval, newobj, indent_level);
-        Py_DECREF(newobj);
-        if (rv) {
-            Py_XDECREF(ident);
-            return -1;
-        }
-        if (ident != NULL) {
-            if (PyDict_DelItem(s->markers, ident)) {
-                Py_XDECREF(ident);
-                return -1;
-            }
-            Py_XDECREF(ident);
-        }
-        return rv;
-    }
-}
-
-static int
-encoder_listencode_dict(PyEncoderObject *s, PyObject *rval, PyObject *dct, Py_ssize_t indent_level)
-{
-    /* Encode Python dict dct to a JSON term, rval is a PyList */
-    static PyObject *open_dict = NULL;
-    static PyObject *close_dict = NULL;
-    static PyObject *empty_dict = NULL;
-    PyObject *kstr = NULL;
-    PyObject *ident = NULL;
-    PyObject *key, *value;
-    Py_ssize_t pos;
-    int skipkeys;
-    Py_ssize_t idx;
-
-    if (open_dict == NULL || close_dict == NULL || empty_dict == NULL) {
-        open_dict = PyString_InternFromString("{");
-        close_dict = PyString_InternFromString("}");
-        empty_dict = PyString_InternFromString("{}");
-        if (open_dict == NULL || close_dict == NULL || empty_dict == NULL)
-            return -1;
-    }
-    if (PyDict_Size(dct) == 0)
-        return PyList_Append(rval, empty_dict);
-
-    if (s->markers != Py_None) {
-        int has_key;
-        ident = PyLong_FromVoidPtr(dct);
-        if (ident == NULL)
-            goto bail;
-        has_key = PyDict_Contains(s->markers, ident);
-        if (has_key) {
-            if (has_key != -1)
-                PyErr_SetString(PyExc_ValueError, "Circular reference detected");
-            goto bail;
-        }
-        if (PyDict_SetItem(s->markers, ident, dct)) {
-            goto bail;
-        }
-    }
-
-    if (PyList_Append(rval, open_dict))
-        goto bail;
-
-    if (s->indent != Py_None) {
-        /* TODO: DOES NOT RUN */
-        indent_level += 1;
-        /*
-            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
-            separator = _item_separator + newline_indent
-            buf += newline_indent
-        */
-    }
-
-    /* TODO: C speedup not implemented for sort_keys */
-
-    pos = 0;
-    skipkeys = PyObject_IsTrue(s->skipkeys);
-    idx = 0;
-    while (PyDict_Next(dct, &pos, &key, &value)) {
-        PyObject *encoded;
-
-        if (PyString_Check(key) || PyUnicode_Check(key)) {
-            Py_INCREF(key);
-            kstr = key;
-        }
-        else if (PyFloat_Check(key)) {
-            kstr = encoder_encode_float(s, key);
-            if (kstr == NULL)
-                goto bail;
-        }
-        else if (PyInt_Check(key) || PyLong_Check(key)) {
-            kstr = PyObject_Str(key);
-            if (kstr == NULL)
-                goto bail;
-        }
-        else if (key == Py_True || key == Py_False || key == Py_None) {
-            kstr = _encoded_const(key);
-            if (kstr == NULL)
-                goto bail;
-        }
-        else if (skipkeys) {
-            continue;
-        }
-        else {
-            /* TODO: include repr of key */
-            PyErr_SetString(PyExc_ValueError, "keys must be a string");
-            goto bail;
-        }
-
-        if (idx) {
-            if (PyList_Append(rval, s->item_separator))
-                goto bail;
-        }
-
-        encoded = encoder_encode_string(s, kstr);
-        Py_CLEAR(kstr);
-        if (encoded == NULL)
-            goto bail;
-        if (PyList_Append(rval, encoded)) {
-            Py_DECREF(encoded);
-            goto bail;
-        }
-        Py_DECREF(encoded);
-        if (PyList_Append(rval, s->key_separator))
-            goto bail;
-        if (encoder_listencode_obj(s, rval, value, indent_level))
-            goto bail;
-        idx += 1;
-    }
-    if (ident != NULL) {
-        if (PyDict_DelItem(s->markers, ident))
-            goto bail;
-        Py_CLEAR(ident);
-    }
-    if (s->indent != Py_None) {
-        /* TODO: DOES NOT RUN */
-        indent_level -= 1;
-        /*
-            yield '\n' + (' ' * (_indent * _current_indent_level))
-        */
-    }
-    if (PyList_Append(rval, close_dict))
-        goto bail;
-    return 0;
-
-bail:
-    Py_XDECREF(kstr);
-    Py_XDECREF(ident);
-    return -1;
-}
-
-
-static int
-encoder_listencode_list(PyEncoderObject *s, PyObject *rval, PyObject *seq, Py_ssize_t indent_level)
-{
-    /* Encode Python list seq to a JSON term, rval is a PyList */
-    static PyObject *open_array = NULL;
-    static PyObject *close_array = NULL;
-    static PyObject *empty_array = NULL;
-    PyObject *ident = NULL;
-    PyObject *s_fast = NULL;
-    Py_ssize_t num_items;
-    PyObject **seq_items;
-    Py_ssize_t i;
-
-    if (open_array == NULL || close_array == NULL || empty_array == NULL) {
-        open_array = PyString_InternFromString("[");
-        close_array = PyString_InternFromString("]");
-        empty_array = PyString_InternFromString("[]");
-        if (open_array == NULL || close_array == NULL || empty_array == NULL)
-            return -1;
-    }
-    ident = NULL;
-    s_fast = PySequence_Fast(seq, "_iterencode_list needs a sequence");
-    if (s_fast == NULL)
-        return -1;
-    num_items = PySequence_Fast_GET_SIZE(s_fast);
-    if (num_items == 0) {
-        Py_DECREF(s_fast);
-        return PyList_Append(rval, empty_array);
-    }
-
-    if (s->markers != Py_None) {
-        int has_key;
-        ident = PyLong_FromVoidPtr(seq);
-        if (ident == NULL)
-            goto bail;
-        has_key = PyDict_Contains(s->markers, ident);
-        if (has_key) {
-            if (has_key != -1)
-                PyErr_SetString(PyExc_ValueError, "Circular reference detected");
-            goto bail;
-        }
-        if (PyDict_SetItem(s->markers, ident, seq)) {
-            goto bail;
-        }
-    }
-
-    seq_items = PySequence_Fast_ITEMS(s_fast);
-    if (PyList_Append(rval, open_array))
-        goto bail;
-    if (s->indent != Py_None) {
-        /* TODO: DOES NOT RUN */
-        indent_level += 1;
-        /*
-            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
-            separator = _item_separator + newline_indent
-            buf += newline_indent
-        */
-    }
-    for (i = 0; i < num_items; i++) {
-        PyObject *obj = seq_items[i];
-        if (i) {
-            if (PyList_Append(rval, s->item_separator))
-                goto bail;
-        }
-        if (encoder_listencode_obj(s, rval, obj, indent_level))
-            goto bail;
-    }
-    if (ident != NULL) {
-        if (PyDict_DelItem(s->markers, ident))
-            goto bail;
-        Py_CLEAR(ident);
-    }
-    if (s->indent != Py_None) {
-        /* TODO: DOES NOT RUN */
-        indent_level -= 1;
-        /*
-            yield '\n' + (' ' * (_indent * _current_indent_level))
-        */
-    }
-    if (PyList_Append(rval, close_array))
-        goto bail;
-    Py_DECREF(s_fast);
-    return 0;
-
-bail:
-    Py_XDECREF(ident);
-    Py_DECREF(s_fast);
-    return -1;
-}
-
-static void
-encoder_dealloc(PyObject *self)
-{
-    /* Deallocate Encoder */
-    encoder_clear(self);
-    Py_TYPE(self)->tp_free(self);
-}
-
-static int
-encoder_traverse(PyObject *self, visitproc visit, void *arg)
-{
-    PyEncoderObject *s;
-    assert(PyEncoder_Check(self));
-    s = (PyEncoderObject *)self;
-    Py_VISIT(s->markers);
-    Py_VISIT(s->defaultfn);
-    Py_VISIT(s->encoder);
-    Py_VISIT(s->indent);
-    Py_VISIT(s->key_separator);
-    Py_VISIT(s->item_separator);
-    Py_VISIT(s->sort_keys);
-    Py_VISIT(s->skipkeys);
-    return 0;
-}
-
-static int
-encoder_clear(PyObject *self)
-{
-    /* Deallocate Encoder */
-    PyEncoderObject *s;
-    assert(PyEncoder_Check(self));
-    s = (PyEncoderObject *)self;
-    Py_CLEAR(s->markers);
-    Py_CLEAR(s->defaultfn);
-    Py_CLEAR(s->encoder);
-    Py_CLEAR(s->indent);
-    Py_CLEAR(s->key_separator);
-    Py_CLEAR(s->item_separator);
-    Py_CLEAR(s->sort_keys);
-    Py_CLEAR(s->skipkeys);
-    return 0;
-}
-
-PyDoc_STRVAR(encoder_doc, "_iterencode(obj, _current_indent_level) -> iterable");
-
-static
-PyTypeObject PyEncoderType = {
-    PyObject_HEAD_INIT(NULL)
-    0,                    /* tp_internal */
-    "simplejson._speedups.Encoder",       /* tp_name */
-    sizeof(PyEncoderObject), /* tp_basicsize */
-    0,                    /* tp_itemsize */
-    encoder_dealloc, /* tp_dealloc */
-    0,                    /* tp_print */
-    0,                    /* tp_getattr */
-    0,                    /* tp_setattr */
-    0,                    /* tp_compare */
-    0,                    /* tp_repr */
-    0,                    /* tp_as_number */
-    0,                    /* tp_as_sequence */
-    0,                    /* tp_as_mapping */
-    0,                    /* tp_hash */
-    encoder_call,         /* tp_call */
-    0,                    /* tp_str */
-    0,                    /* tp_getattro */
-    0,                    /* tp_setattro */
-    0,                    /* tp_as_buffer */
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,   /* tp_flags */
-    encoder_doc,          /* tp_doc */
-    encoder_traverse,     /* tp_traverse */
-    encoder_clear,        /* tp_clear */
-    0,                    /* tp_richcompare */
-    0,                    /* tp_weaklistoffset */
-    0,                    /* tp_iter */
-    0,                    /* tp_iternext */
-    0,                    /* tp_methods */
-    encoder_members,      /* tp_members */
-    0,                    /* tp_getset */
-    0,                    /* tp_base */
-    0,                    /* tp_dict */
-    0,                    /* tp_descr_get */
-    0,                    /* tp_descr_set */
-    0,                    /* tp_dictoffset */
-    encoder_init,         /* tp_init */
-    0,                    /* tp_alloc */
-    encoder_new,          /* tp_new */
-    0,                    /* tp_free */
-};
-
-static PyMethodDef speedups_methods[] = {
-    {"encode_basestring_ascii",
-        (PyCFunction)py_encode_basestring_ascii,
-        METH_O,
-        pydoc_encode_basestring_ascii},
-    {"scanstring",
-        (PyCFunction)py_scanstring,
-        METH_VARARGS,
-        pydoc_scanstring},
-    {NULL, NULL, 0, NULL}
-};
-
-PyDoc_STRVAR(module_doc,
-"simplejson speedups\n");
-
-void
-init_speedups(void)
-{
-    PyObject *m;
-    PyScannerType.tp_new = PyType_GenericNew;
-    if (PyType_Ready(&PyScannerType) < 0)
-        return;
-    PyEncoderType.tp_new = PyType_GenericNew;
-    if (PyType_Ready(&PyEncoderType) < 0)
-        return;
-    m = Py_InitModule3("_speedups", speedups_methods, module_doc);
-    Py_INCREF((PyObject*)&PyScannerType);
-    PyModule_AddObject(m, "make_scanner", (PyObject*)&PyScannerType);
-    Py_INCREF((PyObject*)&PyEncoderType);
-    PyModule_AddObject(m, "make_encoder", (PyObject*)&PyEncoderType);
-}
diff --git a/branch-1.3/lang/py/lib/simplejson/decoder.py b/branch-1.3/lang/py/lib/simplejson/decoder.py
deleted file mode 100644
index b769ea4..0000000
--- a/branch-1.3/lang/py/lib/simplejson/decoder.py
+++ /dev/null
@@ -1,354 +0,0 @@
-"""Implementation of JSONDecoder
-"""
-import re
-import sys
-import struct
-
-from simplejson.scanner import make_scanner
-try:
-    from simplejson._speedups import scanstring as c_scanstring
-except ImportError:
-    c_scanstring = None
-
-__all__ = ['JSONDecoder']
-
-FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL
-
-def _floatconstants():
-    _BYTES = '7FF80000000000007FF0000000000000'.decode('hex')
-    if sys.byteorder != 'big':
-        _BYTES = _BYTES[:8][::-1] + _BYTES[8:][::-1]
-    nan, inf = struct.unpack('dd', _BYTES)
-    return nan, inf, -inf
-
-NaN, PosInf, NegInf = _floatconstants()
-
-
-def linecol(doc, pos):
-    lineno = doc.count('\n', 0, pos) + 1
-    if lineno == 1:
-        colno = pos
-    else:
-        colno = pos - doc.rindex('\n', 0, pos)
-    return lineno, colno
-
-
-def errmsg(msg, doc, pos, end=None):
-    # Note that this function is called from _speedups
-    lineno, colno = linecol(doc, pos)
-    if end is None:
-        #fmt = '{0}: line {1} column {2} (char {3})'
-        #return fmt.format(msg, lineno, colno, pos)
-        fmt = '%s: line %d column %d (char %d)'
-        return fmt % (msg, lineno, colno, pos)
-    endlineno, endcolno = linecol(doc, end)
-    #fmt = '{0}: line {1} column {2} - line {3} column {4} (char {5} - {6})'
-    #return fmt.format(msg, lineno, colno, endlineno, endcolno, pos, end)
-    fmt = '%s: line %d column %d - line %d column %d (char %d - %d)'
-    return fmt % (msg, lineno, colno, endlineno, endcolno, pos, end)
-
-
-_CONSTANTS = {
-    '-Infinity': NegInf,
-    'Infinity': PosInf,
-    'NaN': NaN,
-}
-
-STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
-BACKSLASH = {
-    '"': u'"', '\\': u'\\', '/': u'/',
-    'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
-}
-
-DEFAULT_ENCODING = "utf-8"
-
-def py_scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match):
-    """Scan the string s for a JSON string. End is the index of the
-    character in s after the quote that started the JSON string.
-    Unescapes all valid JSON string escape sequences and raises ValueError
-    on attempt to decode an invalid string. If strict is False then literal
-    control characters are allowed in the string.
-
-    Returns a tuple of the decoded string and the index of the character in s
-    after the end quote."""
-    if encoding is None:
-        encoding = DEFAULT_ENCODING
-    chunks = []
-    _append = chunks.append
-    begin = end - 1
-    while 1:
-        chunk = _m(s, end)
-        if chunk is None:
-            raise ValueError(
-                errmsg("Unterminated string starting at", s, begin))
-        end = chunk.end()
-        content, terminator = chunk.groups()
-        # Content contains zero or more unescaped string characters
-        if content:
-            if not isinstance(content, unicode):
-                content = unicode(content, encoding)
-            _append(content)
-        # Terminator is the end of string, a literal control character,
-        # or a backslash denoting that an escape sequence follows
-        if terminator == '"':
-            break
-        elif terminator != '\\':
-            if strict:
-                msg = "Invalid control character %r at" % (terminator,)
-                #msg = "Invalid control character {0!r} at".format(terminator)
-                raise ValueError(errmsg(msg, s, end))
-            else:
-                _append(terminator)
-                continue
-        try:
-            esc = s[end]
-        except IndexError:
-            raise ValueError(
-                errmsg("Unterminated string starting at", s, begin))
-        # If not a unicode escape sequence, must be in the lookup table
-        if esc != 'u':
-            try:
-                char = _b[esc]
-            except KeyError:
-                msg = "Invalid \\escape: " + repr(esc)
-                raise ValueError(errmsg(msg, s, end))
-            end += 1
-        else:
-            # Unicode escape sequence
-            esc = s[end + 1:end + 5]
-            next_end = end + 5
-            if len(esc) != 4:
-                msg = "Invalid \\uXXXX escape"
-                raise ValueError(errmsg(msg, s, end))
-            uni = int(esc, 16)
-            # Check for surrogate pair on UCS-4 systems
-            if 0xd800 <= uni <= 0xdbff and sys.maxunicode > 65535:
-                msg = "Invalid \\uXXXX\\uXXXX surrogate pair"
-                if not s[end + 5:end + 7] == '\\u':
-                    raise ValueError(errmsg(msg, s, end))
-                esc2 = s[end + 7:end + 11]
-                if len(esc2) != 4:
-                    raise ValueError(errmsg(msg, s, end))
-                uni2 = int(esc2, 16)
-                uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00))
-                next_end += 6
-            char = unichr(uni)
-            end = next_end
-        # Append the unescaped character
-        _append(char)
-    return u''.join(chunks), end
-
-
-# Use speedup if available
-scanstring = c_scanstring or py_scanstring
-
-WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS)
-WHITESPACE_STR = ' \t\n\r'
-
-def JSONObject((s, end), encoding, strict, scan_once, object_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
-    pairs = {}
-    # Use a slice to prevent IndexError from being raised, the following
-    # check will raise a more specific ValueError if the string is empty
-    nextchar = s[end:end + 1]
-    # Normally we expect nextchar == '"'
-    if nextchar != '"':
-        if nextchar in _ws:
-            end = _w(s, end).end()
-            nextchar = s[end:end + 1]
-        # Trivial empty object
-        if nextchar == '}':
-            return pairs, end + 1
-        elif nextchar != '"':
-            raise ValueError(errmsg("Expecting property name", s, end))
-    end += 1
-    while True:
-        key, end = scanstring(s, end, encoding, strict)
-
-        # To skip some function call overhead we optimize the fast paths where
-        # the JSON key separator is ": " or just ":".
-        if s[end:end + 1] != ':':
-            end = _w(s, end).end()
-            if s[end:end + 1] != ':':
-                raise ValueError(errmsg("Expecting : delimiter", s, end))
-
-        end += 1
-
-        try:
-            if s[end] in _ws:
-                end += 1
-                if s[end] in _ws:
-                    end = _w(s, end + 1).end()
-        except IndexError:
-            pass
-
-        try:
-            value, end = scan_once(s, end)
-        except StopIteration:
-            raise ValueError(errmsg("Expecting object", s, end))
-        pairs[key] = value
-
-        try:
-            nextchar = s[end]
-            if nextchar in _ws:
-                end = _w(s, end + 1).end()
-                nextchar = s[end]
-        except IndexError:
-            nextchar = ''
-        end += 1
-
-        if nextchar == '}':
-            break
-        elif nextchar != ',':
-            raise ValueError(errmsg("Expecting , delimiter", s, end - 1))
-
-        try:
-            nextchar = s[end]
-            if nextchar in _ws:
-                end += 1
-                nextchar = s[end]
-                if nextchar in _ws:
-                    end = _w(s, end + 1).end()
-                    nextchar = s[end]
-        except IndexError:
-            nextchar = ''
-
-        end += 1
-        if nextchar != '"':
-            raise ValueError(errmsg("Expecting property name", s, end - 1))
-
-    if object_hook is not None:
-        pairs = object_hook(pairs)
-    return pairs, end
-
-def JSONArray((s, end), scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
-    values = []
-    nextchar = s[end:end + 1]
-    if nextchar in _ws:
-        end = _w(s, end + 1).end()
-        nextchar = s[end:end + 1]
-    # Look-ahead for trivial empty array
-    if nextchar == ']':
-        return values, end + 1
-    _append = values.append
-    while True:
-        try:
-            value, end = scan_once(s, end)
-        except StopIteration:
-            raise ValueError(errmsg("Expecting object", s, end))
-        _append(value)
-        nextchar = s[end:end + 1]
-        if nextchar in _ws:
-            end = _w(s, end + 1).end()
-            nextchar = s[end:end + 1]
-        end += 1
-        if nextchar == ']':
-            break
-        elif nextchar != ',':
-            raise ValueError(errmsg("Expecting , delimiter", s, end))
-
-        try:
-            if s[end] in _ws:
-                end += 1
-                if s[end] in _ws:
-                    end = _w(s, end + 1).end()
-        except IndexError:
-            pass
-
-    return values, end
-
-class JSONDecoder(object):
-    """Simple JSON <http://json.org> decoder
-
-    Performs the following translations in decoding by default:
-
-    +---------------+-------------------+
-    | JSON          | Python            |
-    +===============+===================+
-    | object        | dict              |
-    +---------------+-------------------+
-    | array         | list              |
-    +---------------+-------------------+
-    | string        | unicode           |
-    +---------------+-------------------+
-    | number (int)  | int, long         |
-    +---------------+-------------------+
-    | number (real) | float             |
-    +---------------+-------------------+
-    | true          | True              |
-    +---------------+-------------------+
-    | false         | False             |
-    +---------------+-------------------+
-    | null          | None              |
-    +---------------+-------------------+
-
-    It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as
-    their corresponding ``float`` values, which is outside the JSON spec.
-
-    """
-
-    def __init__(self, encoding=None, object_hook=None, parse_float=None,
-            parse_int=None, parse_constant=None, strict=True):
-        """``encoding`` determines the encoding used to interpret any ``str``
-        objects decoded by this instance (utf-8 by default).  It has no
-        effect when decoding ``unicode`` objects.
-
-        Note that currently only encodings that are a superset of ASCII work,
-        strings of other encodings should be passed in as ``unicode``.
-
-        ``object_hook``, if specified, will be called with the result
-        of every JSON object decoded and its return value will be used in
-        place of the given ``dict``.  This can be used to provide custom
-        deserializations (e.g. to support JSON-RPC class hinting).
-
-        ``parse_float``, if specified, will be called with the string
-        of every JSON float to be decoded. By default this is equivalent to
-        float(num_str). This can be used to use another datatype or parser
-        for JSON floats (e.g. decimal.Decimal).
-
-        ``parse_int``, if specified, will be called with the string
-        of every JSON int to be decoded. By default this is equivalent to
-        int(num_str). This can be used to use another datatype or parser
-        for JSON integers (e.g. float).
-
-        ``parse_constant``, if specified, will be called with one of the
-        following strings: -Infinity, Infinity, NaN.
-        This can be used to raise an exception if invalid JSON numbers
-        are encountered.
-
-        """
-        self.encoding = encoding
-        self.object_hook = object_hook
-        self.parse_float = parse_float or float
-        self.parse_int = parse_int or int
-        self.parse_constant = parse_constant or _CONSTANTS.__getitem__
-        self.strict = strict
-        self.parse_object = JSONObject
-        self.parse_array = JSONArray
-        self.parse_string = scanstring
-        self.scan_once = make_scanner(self)
-
-    def decode(self, s, _w=WHITESPACE.match):
-        """Return the Python representation of ``s`` (a ``str`` or ``unicode``
-        instance containing a JSON document)
-
-        """
-        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
-        end = _w(s, end).end()
-        if end != len(s):
-            raise ValueError(errmsg("Extra data", s, end, len(s)))
-        return obj
-
-    def raw_decode(self, s, idx=0):
-        """Decode a JSON document from ``s`` (a ``str`` or ``unicode`` beginning
-        with a JSON document) and return a 2-tuple of the Python
-        representation and the index in ``s`` where the document ended.
-
-        This can be used to decode a JSON document from a string that may
-        have extraneous data at the end.
-
-        """
-        try:
-            obj, end = self.scan_once(s, idx)
-        except StopIteration:
-            raise ValueError("No JSON object could be decoded")
-        return obj, end
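The trickiest part of the deleted `py_scanstring` is the surrogate-pair branch: two `\uXXXX` escapes are combined into one code point above U+FFFF. The arithmetic can be illustrated in isolation (a Python 3 sketch; the deleted module targets Python 2, and the function name here is ours, not simplejson's):

```python
def decode_surrogate_pair(high, low):
    """Combine a UTF-16 surrogate pair into a single code point.

    Mirrors the arithmetic in py_scanstring: the high surrogate
    contributes the top 10 bits, the low surrogate the bottom 10,
    offset from U+10000.
    """
    assert 0xd800 <= high <= 0xdbff, "not a high surrogate"
    assert 0xdc00 <= low <= 0xdfff, "not a low surrogate"
    return 0x10000 + (((high - 0xd800) << 10) | (low - 0xdc00))

# "\ud83d\ude00" is the escaped form of U+1F600
assert decode_surrogate_pair(0xd83d, 0xde00) == 0x1F600
# The lowest pair maps to the first astral code point, U+10000
assert chr(decode_surrogate_pair(0xd800, 0xdc00)) == '\U00010000'
```

The `sys.maxunicode > 65535` guard in the original exists because narrow (UCS-2) Python 2 builds cannot represent the combined code point as a single character.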
diff --git a/branch-1.3/lang/py/lib/simplejson/encoder.py b/branch-1.3/lang/py/lib/simplejson/encoder.py
deleted file mode 100644
index cf58290..0000000
--- a/branch-1.3/lang/py/lib/simplejson/encoder.py
+++ /dev/null
@@ -1,440 +0,0 @@
-"""Implementation of JSONEncoder
-"""
-import re
-
-try:
-    from simplejson._speedups import encode_basestring_ascii as c_encode_basestring_ascii
-except ImportError:
-    c_encode_basestring_ascii = None
-try:
-    from simplejson._speedups import make_encoder as c_make_encoder
-except ImportError:
-    c_make_encoder = None
-
-ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]')
-ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])')
-HAS_UTF8 = re.compile(r'[\x80-\xff]')
-ESCAPE_DCT = {
-    '\\': '\\\\',
-    '"': '\\"',
-    '\b': '\\b',
-    '\f': '\\f',
-    '\n': '\\n',
-    '\r': '\\r',
-    '\t': '\\t',
-}
-for i in range(0x20):
-    #ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i))
-    ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,))
-
-# Assume this produces an infinity on all machines (probably not guaranteed)
-INFINITY = float('1e66666')
-FLOAT_REPR = repr
-
-def encode_basestring(s):
-    """Return a JSON representation of a Python string
-
-    """
-    def replace(match):
-        return ESCAPE_DCT[match.group(0)]
-    return '"' + ESCAPE.sub(replace, s) + '"'
-
-
-def py_encode_basestring_ascii(s):
-    """Return an ASCII-only JSON representation of a Python string
-
-    """
-    if isinstance(s, str) and HAS_UTF8.search(s) is not None:
-        s = s.decode('utf-8')
-    def replace(match):
-        s = match.group(0)
-        try:
-            return ESCAPE_DCT[s]
-        except KeyError:
-            n = ord(s)
-            if n < 0x10000:
-                #return '\\u{0:04x}'.format(n)
-                return '\\u%04x' % (n,)
-            else:
-                # surrogate pair
-                n -= 0x10000
-                s1 = 0xd800 | ((n >> 10) & 0x3ff)
-                s2 = 0xdc00 | (n & 0x3ff)
-                #return '\\u{0:04x}\\u{1:04x}'.format(s1, s2)
-                return '\\u%04x\\u%04x' % (s1, s2)
-    return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"'
-
-
-encode_basestring_ascii = c_encode_basestring_ascii or py_encode_basestring_ascii
-
-class JSONEncoder(object):
-    """Extensible JSON <http://json.org> encoder for Python data structures.
-
-    Supports the following objects and types by default:
-
-    +-------------------+---------------+
-    | Python            | JSON          |
-    +===================+===============+
-    | dict              | object        |
-    +-------------------+---------------+
-    | list, tuple       | array         |
-    +-------------------+---------------+
-    | str, unicode      | string        |
-    +-------------------+---------------+
-    | int, long, float  | number        |
-    +-------------------+---------------+
-    | True              | true          |
-    +-------------------+---------------+
-    | False             | false         |
-    +-------------------+---------------+
-    | None              | null          |
-    +-------------------+---------------+
-
-    To extend this to recognize other objects, subclass and implement a
-    ``.default()`` method that returns a serializable object for ``o`` if
-    possible; otherwise it should call the superclass implementation (to
-    raise ``TypeError``).
-
-    """
-    item_separator = ', '
-    key_separator = ': '
-    def __init__(self, skipkeys=False, ensure_ascii=True,
-            check_circular=True, allow_nan=True, sort_keys=False,
-            indent=None, separators=None, encoding='utf-8', default=None):
-        """Constructor for JSONEncoder, with sensible defaults.
-
-        If skipkeys is false, then it is a TypeError to attempt
-        encoding of keys that are not str, int, long, float or None.  If
-        skipkeys is True, such items are simply skipped.
-
-        If ensure_ascii is true, the output is guaranteed to be str
-        objects with all incoming unicode characters escaped.  If
-        ensure_ascii is false, the output will be a unicode object.
-
-        If check_circular is true, then lists, dicts, and custom encoded
-        objects will be checked for circular references during encoding to
-        prevent an infinite recursion (which would cause an OverflowError).
-        Otherwise, no such check takes place.
-
-        If allow_nan is true, then NaN, Infinity, and -Infinity will be
-        encoded as such.  This behavior is not JSON specification compliant,
-        but is consistent with most JavaScript based encoders and decoders.
-        Otherwise, it will be a ValueError to encode such floats.
-
-        If sort_keys is true, then the output of dictionaries will be
-        sorted by key; this is useful for regression tests to ensure
-        that JSON serializations can be compared on a day-to-day basis.
-
-        If indent is a non-negative integer, then JSON array
-        elements and object members will be pretty-printed with that
-        indent level.  An indent level of 0 will only insert newlines.
-        None is the most compact representation.
-
-        If specified, separators should be a (item_separator, key_separator)
-        tuple.  The default is (', ', ': ').  To get the most compact JSON
-        representation you should specify (',', ':') to eliminate whitespace.
-
-        If specified, default is a function that gets called for objects
-        that can't otherwise be serialized.  It should return a JSON encodable
-        version of the object or raise a ``TypeError``.
-
-        If encoding is not None, then all input strings will be
-        transformed into unicode using that encoding prior to JSON-encoding.
-        The default is UTF-8.
-
-        """
-
-        self.skipkeys = skipkeys
-        self.ensure_ascii = ensure_ascii
-        self.check_circular = check_circular
-        self.allow_nan = allow_nan
-        self.sort_keys = sort_keys
-        self.indent = indent
-        if separators is not None:
-            self.item_separator, self.key_separator = separators
-        if default is not None:
-            self.default = default
-        self.encoding = encoding
-
-    def default(self, o):
-        """Implement this method in a subclass such that it returns
-        a serializable object for ``o``, or calls the base implementation
-        (to raise a ``TypeError``).
-
-        For example, to support arbitrary iterators, you could
-        implement default like this::
-
-            def default(self, o):
-                try:
-                    iterable = iter(o)
-                except TypeError:
-                    pass
-                else:
-                    return list(iterable)
-                return JSONEncoder.default(self, o)
-
-        """
-        raise TypeError(repr(o) + " is not JSON serializable")
-
-    def encode(self, o):
-        """Return a JSON string representation of a Python data structure.
-
-        >>> JSONEncoder().encode({"foo": ["bar", "baz"]})
-        '{"foo": ["bar", "baz"]}'
-
-        """
-        # This is for extremely simple cases and benchmarks.
-        if isinstance(o, basestring):
-            if isinstance(o, str):
-                _encoding = self.encoding
-                if (_encoding is not None
-                        and not (_encoding == 'utf-8')):
-                    o = o.decode(_encoding)
-            if self.ensure_ascii:
-                return encode_basestring_ascii(o)
-            else:
-                return encode_basestring(o)
-        # This doesn't pass the iterator directly to ''.join() because the
-        # exceptions aren't as detailed.  The list call should be roughly
-        # equivalent to the PySequence_Fast that ''.join() would do.
-        chunks = self.iterencode(o, _one_shot=True)
-        if not isinstance(chunks, (list, tuple)):
-            chunks = list(chunks)
-        return ''.join(chunks)
-
-    def iterencode(self, o, _one_shot=False):
-        """Encode the given object and yield each string
-        representation as available.
-
-        For example::
-
-            for chunk in JSONEncoder().iterencode(bigobject):
-                mysocket.write(chunk)
-
-        """
-        if self.check_circular:
-            markers = {}
-        else:
-            markers = None
-        if self.ensure_ascii:
-            _encoder = encode_basestring_ascii
-        else:
-            _encoder = encode_basestring
-        if self.encoding != 'utf-8':
-            def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding):
-                if isinstance(o, str):
-                    o = o.decode(_encoding)
-                return _orig_encoder(o)
-
-        def floatstr(o, allow_nan=self.allow_nan, _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY):
-            # Check for specials.  Note that this type of test is processor- and/or
-            # platform-specific, so do tests which don't depend on the internals.
-
-            if o != o:
-                text = 'NaN'
-            elif o == _inf:
-                text = 'Infinity'
-            elif o == _neginf:
-                text = '-Infinity'
-            else:
-                return _repr(o)
-
-            if not allow_nan:
-                raise ValueError(
-                    "Out of range float values are not JSON compliant: " +
-                    repr(o))
-
-            return text
-
-
-        if _one_shot and c_make_encoder is not None and not self.indent and not self.sort_keys:
-            _iterencode = c_make_encoder(
-                markers, self.default, _encoder, self.indent,
-                self.key_separator, self.item_separator, self.sort_keys,
-                self.skipkeys, self.allow_nan)
-        else:
-            _iterencode = _make_iterencode(
-                markers, self.default, _encoder, self.indent, floatstr,
-                self.key_separator, self.item_separator, self.sort_keys,
-                self.skipkeys, _one_shot)
-        return _iterencode(o, 0)
-
-def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot,
-        ## HACK: hand-optimized bytecode; turn globals into locals
-        False=False,
-        True=True,
-        ValueError=ValueError,
-        basestring=basestring,
-        dict=dict,
-        float=float,
-        id=id,
-        int=int,
-        isinstance=isinstance,
-        list=list,
-        long=long,
-        str=str,
-        tuple=tuple,
-    ):
-
-    def _iterencode_list(lst, _current_indent_level):
-        if not lst:
-            yield '[]'
-            return
-        if markers is not None:
-            markerid = id(lst)
-            if markerid in markers:
-                raise ValueError("Circular reference detected")
-            markers[markerid] = lst
-        buf = '['
-        if _indent is not None:
-            _current_indent_level += 1
-            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
-            separator = _item_separator + newline_indent
-            buf += newline_indent
-        else:
-            newline_indent = None
-            separator = _item_separator
-        first = True
-        for value in lst:
-            if first:
-                first = False
-            else:
-                buf = separator
-            if isinstance(value, basestring):
-                yield buf + _encoder(value)
-            elif value is None:
-                yield buf + 'null'
-            elif value is True:
-                yield buf + 'true'
-            elif value is False:
-                yield buf + 'false'
-            elif isinstance(value, (int, long)):
-                yield buf + str(value)
-            elif isinstance(value, float):
-                yield buf + _floatstr(value)
-            else:
-                yield buf
-                if isinstance(value, (list, tuple)):
-                    chunks = _iterencode_list(value, _current_indent_level)
-                elif isinstance(value, dict):
-                    chunks = _iterencode_dict(value, _current_indent_level)
-                else:
-                    chunks = _iterencode(value, _current_indent_level)
-                for chunk in chunks:
-                    yield chunk
-        if newline_indent is not None:
-            _current_indent_level -= 1
-            yield '\n' + (' ' * (_indent * _current_indent_level))
-        yield ']'
-        if markers is not None:
-            del markers[markerid]
-
-    def _iterencode_dict(dct, _current_indent_level):
-        if not dct:
-            yield '{}'
-            return
-        if markers is not None:
-            markerid = id(dct)
-            if markerid in markers:
-                raise ValueError("Circular reference detected")
-            markers[markerid] = dct
-        yield '{'
-        if _indent is not None:
-            _current_indent_level += 1
-            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
-            item_separator = _item_separator + newline_indent
-            yield newline_indent
-        else:
-            newline_indent = None
-            item_separator = _item_separator
-        first = True
-        if _sort_keys:
-            items = dct.items()
-            items.sort(key=lambda kv: kv[0])
-        else:
-            items = dct.iteritems()
-        for key, value in items:
-            if isinstance(key, basestring):
-                pass
-            # JavaScript is weakly typed for these, so it makes sense to
-            # also allow them.  Many encoders seem to do something like this.
-            elif isinstance(key, float):
-                key = _floatstr(key)
-            elif key is True:
-                key = 'true'
-            elif key is False:
-                key = 'false'
-            elif key is None:
-                key = 'null'
-            elif isinstance(key, (int, long)):
-                key = str(key)
-            elif _skipkeys:
-                continue
-            else:
-                raise TypeError("key " + repr(key) + " is not a string")
-            if first:
-                first = False
-            else:
-                yield item_separator
-            yield _encoder(key)
-            yield _key_separator
-            if isinstance(value, basestring):
-                yield _encoder(value)
-            elif value is None:
-                yield 'null'
-            elif value is True:
-                yield 'true'
-            elif value is False:
-                yield 'false'
-            elif isinstance(value, (int, long)):
-                yield str(value)
-            elif isinstance(value, float):
-                yield _floatstr(value)
-            else:
-                if isinstance(value, (list, tuple)):
-                    chunks = _iterencode_list(value, _current_indent_level)
-                elif isinstance(value, dict):
-                    chunks = _iterencode_dict(value, _current_indent_level)
-                else:
-                    chunks = _iterencode(value, _current_indent_level)
-                for chunk in chunks:
-                    yield chunk
-        if newline_indent is not None:
-            _current_indent_level -= 1
-            yield '\n' + (' ' * (_indent * _current_indent_level))
-        yield '}'
-        if markers is not None:
-            del markers[markerid]
-
-    def _iterencode(o, _current_indent_level):
-        if isinstance(o, basestring):
-            yield _encoder(o)
-        elif o is None:
-            yield 'null'
-        elif o is True:
-            yield 'true'
-        elif o is False:
-            yield 'false'
-        elif isinstance(o, (int, long)):
-            yield str(o)
-        elif isinstance(o, float):
-            yield _floatstr(o)
-        elif isinstance(o, (list, tuple)):
-            for chunk in _iterencode_list(o, _current_indent_level):
-                yield chunk
-        elif isinstance(o, dict):
-            for chunk in _iterencode_dict(o, _current_indent_level):
-                yield chunk
-        else:
-            if markers is not None:
-                markerid = id(o)
-                if markerid in markers:
-                    raise ValueError("Circular reference detected")
-                markers[markerid] = o
-            o = _default(o)
-            for chunk in _iterencode(o, _current_indent_level):
-                yield chunk
-            if markers is not None:
-                del markers[markerid]
-
-    return _iterencode
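The `floatstr` closure inside `iterencode` above relies on `o != o` as a portable NaN test, since NaN is the only float value unequal to itself. A standalone sketch of the same logic (Python 3; `INFINITY` is built with `float('inf')` here rather than the original's `float('1e66666')` overflow trick):

```python
INFINITY = float('inf')

def floatstr(o, allow_nan=True):
    """Render a float as JSON text, mirroring the deleted helper.

    NaN/Infinity/-Infinity are detected with value comparisons only,
    so no platform-specific bit inspection is needed.
    """
    if o != o:          # only NaN fails to equal itself
        text = 'NaN'
    elif o == INFINITY:
        text = 'Infinity'
    elif o == -INFINITY:
        text = '-Infinity'
    else:
        return repr(o)
    if not allow_nan:
        raise ValueError(
            "Out of range float values are not JSON compliant: " + repr(o))
    return text

assert floatstr(float('nan')) == 'NaN'
assert floatstr(float('-inf')) == '-Infinity'
assert floatstr(1.5) == '1.5'
```

As the docstring notes, emitting these three names is outside the JSON specification but matches most JavaScript encoders; `allow_nan=False` turns them into errors instead.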
diff --git a/branch-1.3/lang/py/lib/simplejson/scanner.py b/branch-1.3/lang/py/lib/simplejson/scanner.py
deleted file mode 100644
index adbc6ec..0000000
--- a/branch-1.3/lang/py/lib/simplejson/scanner.py
+++ /dev/null
@@ -1,65 +0,0 @@
-"""JSON token scanner
-"""
-import re
-try:
-    from simplejson._speedups import make_scanner as c_make_scanner
-except ImportError:
-    c_make_scanner = None
-
-__all__ = ['make_scanner']
-
-NUMBER_RE = re.compile(
-    r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?',
-    (re.VERBOSE | re.MULTILINE | re.DOTALL))
-
-def py_make_scanner(context):
-    parse_object = context.parse_object
-    parse_array = context.parse_array
-    parse_string = context.parse_string
-    match_number = NUMBER_RE.match
-    encoding = context.encoding
-    strict = context.strict
-    parse_float = context.parse_float
-    parse_int = context.parse_int
-    parse_constant = context.parse_constant
-    object_hook = context.object_hook
-
-    def _scan_once(string, idx):
-        try:
-            nextchar = string[idx]
-        except IndexError:
-            raise StopIteration
-
-        if nextchar == '"':
-            return parse_string(string, idx + 1, encoding, strict)
-        elif nextchar == '{':
-            return parse_object((string, idx + 1), encoding, strict, _scan_once, object_hook)
-        elif nextchar == '[':
-            return parse_array((string, idx + 1), _scan_once)
-        elif nextchar == 'n' and string[idx:idx + 4] == 'null':
-            return None, idx + 4
-        elif nextchar == 't' and string[idx:idx + 4] == 'true':
-            return True, idx + 4
-        elif nextchar == 'f' and string[idx:idx + 5] == 'false':
-            return False, idx + 5
-
-        m = match_number(string, idx)
-        if m is not None:
-            integer, frac, exp = m.groups()
-            if frac or exp:
-                res = parse_float(integer + (frac or '') + (exp or ''))
-            else:
-                res = parse_int(integer)
-            return res, m.end()
-        elif nextchar == 'N' and string[idx:idx + 3] == 'NaN':
-            return parse_constant('NaN'), idx + 3
-        elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity':
-            return parse_constant('Infinity'), idx + 8
-        elif nextchar == '-' and string[idx:idx + 9] == '-Infinity':
-            return parse_constant('-Infinity'), idx + 9
-        else:
-            raise StopIteration
-
-    return _scan_once
-
-make_scanner = c_make_scanner or py_make_scanner
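The number path of `py_make_scanner` captures the integer part, fraction, and exponent as separate regex groups, then dispatches to `parse_int` or `parse_float` depending on which groups matched. A minimal standalone version of that dispatch (Python 3; `scan_number` is our illustrative name, not part of the deleted module):

```python
import re

# Same pattern as the deleted scanner: integer part, optional
# fraction, optional exponent, each in its own capture group.
NUMBER_RE = re.compile(r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?')

def scan_number(s, idx=0):
    """Parse a JSON number starting at idx; return (value, end index).

    The result is an int unless a fraction or exponent group matched,
    which is exactly the rule _scan_once applies above.
    """
    m = NUMBER_RE.match(s, idx)
    if m is None:
        raise ValueError("not a JSON number at index %d" % idx)
    integer, frac, exp = m.groups()
    if frac or exp:
        return float(integer + (frac or '') + (exp or '')), m.end()
    return int(integer), m.end()

assert scan_number('42,') == (42, 2)        # stops at the delimiter
assert scan_number('-1.5e3') == (-1500.0, 6)
```

Returning `m.end()` alongside the value is what lets the scanner resume parsing at the first character after the number, the same `(value, end)` convention every parse function in this module follows.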
diff --git a/branch-1.3/lang/py/lib/simplejson/tool.py b/branch-1.3/lang/py/lib/simplejson/tool.py
deleted file mode 100644
index 9044331..0000000
--- a/branch-1.3/lang/py/lib/simplejson/tool.py
+++ /dev/null
@@ -1,37 +0,0 @@
-r"""Command-line tool to validate and pretty-print JSON
-
-Usage::
-
-    $ echo '{"json":"obj"}' | python -m simplejson.tool
-    {
-        "json": "obj"
-    }
-    $ echo '{ 1.2:3.4}' | python -m simplejson.tool
-    Expecting property name: line 1 column 2 (char 2)
-
-"""
-import sys
-import simplejson
-
-def main():
-    if len(sys.argv) == 1:
-        infile = sys.stdin
-        outfile = sys.stdout
-    elif len(sys.argv) == 2:
-        infile = open(sys.argv[1], 'rb')
-        outfile = sys.stdout
-    elif len(sys.argv) == 3:
-        infile = open(sys.argv[1], 'rb')
-        outfile = open(sys.argv[2], 'wb')
-    else:
-        raise SystemExit(sys.argv[0] + " [infile [outfile]]")
-    try:
-        obj = simplejson.load(infile)
-    except ValueError, e:
-        raise SystemExit(e)
-    simplejson.dump(obj, outfile, sort_keys=True, indent=4)
-    outfile.write('\n')
-
-
-if __name__ == '__main__':
-    main()
diff --git a/branch-1.3/lang/py/setup.py b/branch-1.3/lang/py/setup.py
deleted file mode 100755
index 6029744..0000000
--- a/branch-1.3/lang/py/setup.py
+++ /dev/null
@@ -1,42 +0,0 @@
-#! /usr/bin/env python
-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-try:
-  from setuptools import setup
-except ImportError:
-  from distutils.core import setup
-
-VERSION_FILE='../../share/VERSION.txt'
-
-setup(
-  name = 'avro',
-  version = file(VERSION_FILE, 'r').read(),
-  packages = ['avro',],
-  package_dir = {'avro': 'src/avro'},
-
-  # Project uses simplejson, so ensure that it gets installed or upgraded
-  # on the target machine
-  install_requires = ['simplejson >= 2.0.9'],
-
-  # metadata for upload to PyPI
-  author = 'Apache Avro',
-  author_email = 'avro-dev@hadoop.apache.org',
-  description = 'Avro is a serialization and RPC framework.',
-  license = 'Apache License 2.0',
-  keywords = 'avro serialization rpc',
-  url = 'http://hadoop.apache.org/avro',
-)
diff --git a/branch-1.3/lang/py/src/avro/__init__.py b/branch-1.3/lang/py/src/avro/__init__.py
deleted file mode 100644
index da51d9b..0000000
--- a/branch-1.3/lang/py/src/avro/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-__all__ = ['schema', 'io', 'datafile', 'protocol', 'ipc']
-
diff --git a/branch-1.3/lang/py/src/avro/datafile.py b/branch-1.3/lang/py/src/avro/datafile.py
deleted file mode 100644
index 8b68dc2..0000000
--- a/branch-1.3/lang/py/src/avro/datafile.py
+++ /dev/null
@@ -1,324 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Read/Write Avro File Object Containers.
-"""
-import zlib
-import uuid
-try:
-  from cStringIO import StringIO
-except ImportError:
-  from StringIO import StringIO
-from avro import schema
-from avro import io
-
-#
-# Constants
-#
-
-VERSION = 1
-MAGIC = 'Obj' + chr(VERSION)
-MAGIC_SIZE = len(MAGIC)
-SYNC_SIZE = 16
-SYNC_INTERVAL = 1000 * SYNC_SIZE # TODO(hammer): make configurable
-META_SCHEMA = schema.parse("""\
-{"type": "record", "name": "org.apache.avro.file.Header",
- "fields" : [
-   {"name": "magic", "type": {"type": "fixed", "name": "magic", "size": %d}},
-   {"name": "meta", "type": {"type": "map", "values": "bytes"}},
-   {"name": "sync", "type": {"type": "fixed", "name": "sync", "size": %d}}]}
-""" % (MAGIC_SIZE, SYNC_SIZE))
-VALID_CODECS = ['null', 'deflate']
-VALID_ENCODINGS = ['binary'] # not used yet
-
-CODEC_KEY = "avro.codec"
-SCHEMA_KEY = "avro.schema"
-
-#
-# Exceptions
-#
-
-class DataFileException(schema.AvroException):
-  """
-  Raised when there's a problem reading or writing file object containers.
-  """
-  def __init__(self, fail_msg):
-    schema.AvroException.__init__(self, fail_msg)
-
-#
-# Write Path
-#
-
-class DataFileWriter(object):
-  @staticmethod
-  def generate_sync_marker():
-    return uuid.uuid4().bytes
-
-  # TODO(hammer): make 'encoder' a metadata property
-  def __init__(self, writer, datum_writer, writers_schema=None, codec='null'):
-    """
-    If the schema is not present, presume we're appending.
-
-    @param writer: File-like object to write into.
-    """
-    self._writer = writer
-    self._encoder = io.BinaryEncoder(writer)
-    self._datum_writer = datum_writer
-    self._buffer_writer = StringIO()
-    self._buffer_encoder = io.BinaryEncoder(self._buffer_writer)
-    self._block_count = 0
-    self._meta = {}
-
-    if writers_schema is not None:
-      if codec not in VALID_CODECS:
-        raise DataFileException("Unknown codec: %r" % codec)
-      self._sync_marker = DataFileWriter.generate_sync_marker()
-      self.set_meta('avro.codec', codec)
-      self.set_meta('avro.schema', str(writers_schema))
-      self.datum_writer.writers_schema = writers_schema
-      self._write_header()
-    else:
-      # open writer for reading to collect metadata
-      dfr = DataFileReader(writer, io.DatumReader())
-      
-      # TODO(hammer): collect arbitrary metadata
-      # collect metadata
-      self._sync_marker = dfr.sync_marker
-      self.set_meta('avro.codec', dfr.get_meta('avro.codec'))
-
-      # get schema used to write existing file
-      schema_from_file = dfr.get_meta('avro.schema')
-      self.set_meta('avro.schema', schema_from_file)
-      self.datum_writer.writers_schema = schema.parse(schema_from_file)
-
-      # seek to the end of the file and prepare for writing
-      writer.seek(0, 2)
-
-  # read-only properties
-  writer = property(lambda self: self._writer)
-  encoder = property(lambda self: self._encoder)
-  datum_writer = property(lambda self: self._datum_writer)
-  buffer_writer = property(lambda self: self._buffer_writer)
-  buffer_encoder = property(lambda self: self._buffer_encoder)
-  sync_marker = property(lambda self: self._sync_marker)
-  meta = property(lambda self: self._meta)
-
-  # read/write properties
-  def set_block_count(self, new_val):
-    self._block_count = new_val
-  block_count = property(lambda self: self._block_count, set_block_count)
-
-  # utility functions to read/write metadata entries
-  def get_meta(self, key):
-    return self._meta.get(key)
-  def set_meta(self, key, val):
-    self._meta[key] = val
-
-  def _write_header(self):
-    header = {'magic': MAGIC,
-              'meta': self.meta,
-              'sync': self.sync_marker}
-    self.datum_writer.write_data(META_SCHEMA, header, self.encoder)
-
-  # TODO(hammer): make a schema for blocks and use datum_writer
-  def _write_block(self):
-    if self.block_count > 0:
-      # write number of items in block
-      self.encoder.write_long(self.block_count)
-
-      # write block contents
-      uncompressed_data = self.buffer_writer.getvalue()
-      if self.get_meta(CODEC_KEY) == 'null':
-        compressed_data = uncompressed_data
-      elif self.get_meta(CODEC_KEY) == 'deflate':
-        # zlib.compress wraps raw deflate data in a 2-byte header and
-        # a 4-byte Adler-32 trailer; strip the header (raw inflate on
-        # the read side ignores any trailing checksum bytes).
-        compressed_data = zlib.compress(uncompressed_data)[2:-1]
-      else:
-        fail_msg = '"%s" codec is not supported.' % self.get_meta(CODEC_KEY)
-        raise DataFileException(fail_msg)
-
-      # Write length of block
-      self.encoder.write_long(len(compressed_data))
-
-      # Write block
-      self.writer.write(compressed_data)
-
-      # write sync marker
-      self.writer.write(self.sync_marker)
-
-      # reset buffer
-      self.buffer_writer.truncate(0) 
-      self.block_count = 0
-
-  def append(self, datum):
-    """Append a datum to the file."""
-    self.datum_writer.write(datum, self.buffer_encoder)
-    self.block_count += 1
-
-    # if the data to write is larger than the sync interval, write the block
-    if self.buffer_writer.tell() >= SYNC_INTERVAL:
-      self._write_block()
-
-  def sync(self):
-    """
-    Return the current position as a value that may be passed to
-    DataFileReader.seek(long). Forces the end of the current block,
-    emitting a synchronization marker.
-    """
-    self._write_block()
-    return self.writer.tell()
-
-  def flush(self):
-    """Flush the current state of the file, including metadata."""
-    self._write_block()
-    self.writer.flush()
-
-  def close(self):
-    """Close the file."""
-    self.flush()
-    self.writer.close()
-
-class DataFileReader(object):
-  """Read files written by DataFileWriter."""
-  # TODO(hammer): allow user to specify expected schema?
-  # TODO(hammer): allow user to specify the encoder
-  def __init__(self, reader, datum_reader):
-    self._reader = reader
-    self._raw_decoder = io.BinaryDecoder(reader)
-    self._datum_decoder = None # Maybe reset at every block.
-    self._datum_reader = datum_reader
-    
-    # read the header: magic, meta, sync
-    self._read_header()
-
-    # ensure codec is valid
-    self.codec = self.get_meta('avro.codec')
-    if self.codec is None:
-      self.codec = "null"
-    if self.codec not in VALID_CODECS:
-      raise DataFileException('Unknown codec: %s.' % self.codec)
-
-    # get file length
-    self._file_length = self.determine_file_length()
-
-    # get ready to read
-    self._block_count = 0
-    self.datum_reader.writers_schema = schema.parse(self.get_meta(SCHEMA_KEY))
-  
-  def __iter__(self):
-    return self
-
-  # read-only properties
-  reader = property(lambda self: self._reader)
-  raw_decoder = property(lambda self: self._raw_decoder)
-  datum_decoder = property(lambda self: self._datum_decoder)
-  datum_reader = property(lambda self: self._datum_reader)
-  sync_marker = property(lambda self: self._sync_marker)
-  meta = property(lambda self: self._meta)
-  file_length = property(lambda self: self._file_length)
-
-  # read/write properties
-  def set_block_count(self, new_val):
-    self._block_count = new_val
-  block_count = property(lambda self: self._block_count, set_block_count)
-
-  # utility functions to read/write metadata entries
-  def get_meta(self, key):
-    return self._meta.get(key)
-  def set_meta(self, key, val):
-    self._meta[key] = val
-
-  def determine_file_length(self):
-    """
-    Get file length and leave file cursor where we found it.
-    """
-    remember_pos = self.reader.tell()
-    self.reader.seek(0, 2)
-    file_length = self.reader.tell()
-    self.reader.seek(remember_pos)
-    return file_length
-
-  def is_EOF(self):
-    return self.reader.tell() == self.file_length
-
-  def _read_header(self):
-    # seek to the beginning of the file to get magic block
-    self.reader.seek(0, 0) 
-
-    # read header into a dict
-    header = self.datum_reader.read_data(
-      META_SCHEMA, META_SCHEMA, self.raw_decoder)
-
-    # check magic number
-    if header.get('magic') != MAGIC:
-      fail_msg = "Not an Avro data file: %s doesn't match %s."\
-                 % (header.get('magic'), MAGIC)
-      raise schema.AvroException(fail_msg)
-
-    # set metadata
-    self._meta = header['meta']
-
-    # set sync marker
-    self._sync_marker = header['sync']
-
-  def _read_block_header(self):
-    self.block_count = self.raw_decoder.read_long()
-    if self.codec == "null":
-      # Skip a long; we don't need to use the length.
-      self.raw_decoder.skip_long()
-      self._datum_decoder = self._raw_decoder
-    else:
-      # Compressed data is stored as (length, data), which
-      # corresponds to how the "bytes" type is encoded.
-      data = self.raw_decoder.read_bytes()
-      # -15 is the log of the window size; negative indicates
-      # "raw" (no zlib headers) decompression.  See zlib.h.
-      uncompressed = zlib.decompress(data, -15)
-      self._datum_decoder = io.BinaryDecoder(StringIO(uncompressed))
-
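The codec framing above (writer strips the zlib wrapper, reader inflates with a negative window size) can be exercised standalone. This is a Python 3 sketch (the file itself is Python 2); `deflate_frame`/`inflate_frame` are illustrative names, not Avro API:

```python
import zlib

def deflate_frame(data: bytes) -> bytes:
    # Mirror of _write_block: zlib.compress emits a 2-byte header and
    # an Adler-32 trailer around raw deflate data; [2:-1] strips the
    # header, as the writer above does.
    return zlib.compress(data)[2:-1]

def inflate_frame(data: bytes) -> bytes:
    # Mirror of _read_block_header: -15 selects "raw" (headerless)
    # inflate with the maximum 32 KiB window; trailing checksum
    # bytes after the deflate stream are ignored.
    return zlib.decompress(data, -15)

payload = b'avro block payload' * 10
assert inflate_frame(deflate_frame(payload)) == payload
```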
-  def _skip_sync(self):
-    """
-    Read the length of the sync marker; if it matches the sync marker,
-    return True. Otherwise, seek back to where we started and return False.
-    """
-    proposed_sync_marker = self.reader.read(SYNC_SIZE)
-    if proposed_sync_marker != self.sync_marker:
-      self.reader.seek(-SYNC_SIZE, 1)
-      return False
-    else:
-      return True
-
-  # TODO(hammer): handle block of length zero
-  # TODO(hammer): clean this up with recursion
-  def next(self):
-    """Return the next datum in the file."""
-    if self.block_count == 0:
-      if self.is_EOF():
-        raise StopIteration
-      elif self._skip_sync():
-        if self.is_EOF(): raise StopIteration
-        self._read_block_header()
-      else:
-        self._read_block_header()
-
-    datum = self.datum_reader.read(self.datum_decoder) 
-    self.block_count -= 1
-    return datum
-
-  def close(self):
-    """Close this reader."""
-    self.reader.close()
diff --git a/branch-1.3/lang/py/src/avro/io.py b/branch-1.3/lang/py/src/avro/io.py
deleted file mode 100644
index 55e1563..0000000
--- a/branch-1.3/lang/py/src/avro/io.py
+++ /dev/null
@@ -1,848 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Input/Output utilities, including:
-
- * i/o-specific constants
- * i/o-specific exceptions
- * schema validation
- * leaf value encoding and decoding
- * datum reader/writer stuff (?)
-
-Also includes a generic representation for data, which
-uses the following mapping:
-
-  * Schema records are implemented as dict.
-  * Schema arrays are implemented as list.
-  * Schema maps are implemented as dict.
-  * Schema strings are implemented as unicode.
-  * Schema bytes are implemented as str.
-  * Schema ints are implemented as int.
-  * Schema longs are implemented as long.
-  * Schema floats are implemented as float.
-  * Schema doubles are implemented as float.
-  * Schema booleans are implemented as bool. 
-"""
-import struct
-from avro import schema
-
-#
-# Constants
-#
-
-INT_MIN_VALUE = -(1 << 31)
-INT_MAX_VALUE = (1 << 31) - 1
-LONG_MIN_VALUE = -(1 << 63)
-LONG_MAX_VALUE = (1 << 63) - 1
-
-# TODO(hammer): shouldn't ! be < for little-endian (according to spec?)
-STRUCT_INT = struct.Struct('!I')     # big-endian unsigned int
-STRUCT_LONG = struct.Struct('!Q')    # big-endian unsigned long long
-STRUCT_FLOAT = struct.Struct('!f')   # big-endian float
-STRUCT_DOUBLE = struct.Struct('!d')  # big-endian double
-
-#
-# Exceptions
-#
-
-class AvroTypeException(schema.AvroException):
-  """Raised when datum is not an example of schema."""
-  def __init__(self, expected_schema, datum):
-    fail_msg = "The datum %s is not an example of the schema %s"\
-               % (datum, expected_schema)
-    schema.AvroException.__init__(self, fail_msg)
-
-class SchemaResolutionException(schema.AvroException):
-  def __init__(self, fail_msg, writers_schema=None, readers_schema=None):
-    if writers_schema: fail_msg += "\nWriter's Schema: %s" % writers_schema
-    if readers_schema: fail_msg += "\nReader's Schema: %s" % readers_schema
-    schema.AvroException.__init__(self, fail_msg)
-
-#
-# Validate
-#
-
-def validate(expected_schema, datum):
-  """Determine if a python datum is an instance of a schema."""
-  schema_type = expected_schema.type
-  if schema_type == 'null':
-    return datum is None
-  elif schema_type == 'boolean':
-    return isinstance(datum, bool)
-  elif schema_type == 'string':
-    return isinstance(datum, basestring)
-  elif schema_type == 'bytes':
-    return isinstance(datum, str)
-  elif schema_type == 'int':
-    return ((isinstance(datum, int) or isinstance(datum, long)) 
-            and INT_MIN_VALUE <= datum <= INT_MAX_VALUE)
-  elif schema_type == 'long':
-    return ((isinstance(datum, int) or isinstance(datum, long)) 
-            and LONG_MIN_VALUE <= datum <= LONG_MAX_VALUE)
-  elif schema_type in ['float', 'double']:
-    return (isinstance(datum, int) or isinstance(datum, long)
-            or isinstance(datum, float))
-  elif schema_type == 'fixed':
-    return isinstance(datum, str) and len(datum) == expected_schema.size
-  elif schema_type == 'enum':
-    return datum in expected_schema.symbols
-  elif schema_type == 'array':
-    return (isinstance(datum, list) and
-      False not in [validate(expected_schema.items, d) for d in datum])
-  elif schema_type == 'map':
-    return (isinstance(datum, dict) and
-      False not in [isinstance(k, basestring) for k in datum.keys()] and
-      False not in
-        [validate(expected_schema.values, v) for v in datum.values()])
-  elif schema_type in ['union', 'error_union']:
-    return True in [validate(s, datum) for s in expected_schema.schemas]
-  elif schema_type in ['record', 'error', 'request']:
-    return (isinstance(datum, dict) and
-      False not in
-        [validate(f.type, datum.get(f.name)) for f in expected_schema.fields])
-
-#
-# Decoder/Encoder
-#
-
-class BinaryDecoder(object):
-  """Read leaf values."""
-  def __init__(self, reader):
-    """
-    reader is a Python object on which we can call read, seek, and tell.
-    """
-    self._reader = reader
-
-  # read-only properties
-  reader = property(lambda self: self._reader)
-
-  def read(self, n):
-    """
-    Read n bytes.
-    """
-    return self.reader.read(n)
-
-  def read_null(self):
-    """
-    null is written as zero bytes
-    """
-    return None
-
-  def read_boolean(self):
-    """
-    a boolean is written as a single byte 
-    whose value is either 0 (false) or 1 (true).
-    """
-    return ord(self.read(1)) == 1
-
-  def read_int(self):
-    """
-    int and long values are written using variable-length, zig-zag coding.
-    """
-    return self.read_long()
-
-  def read_long(self):
-    """
-    int and long values are written using variable-length, zig-zag coding.
-    """
-    b = ord(self.read(1))
-    n = b & 0x7F
-    shift = 7
-    while (b & 0x80) != 0:
-      b = ord(self.read(1))
-      n |= (b & 0x7F) << shift
-      shift += 7
-    datum = (n >> 1) ^ -(n & 1)
-    return datum
-
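The decode loop above can be reproduced over a byte string; a Python 3 sketch (indexing `bytes` replaces `ord`), with `decode_zigzag_varint` a hypothetical helper name:

```python
def decode_zigzag_varint(data: bytes) -> int:
    # Same steps as read_long above: accumulate 7-bit groups
    # (low group first), then undo the zig-zag mapping.
    n, shift, i = 0, 0, 0
    while True:
        b = data[i]
        n |= (b & 0x7F) << shift
        shift += 7
        i += 1
        if not b & 0x80:
            break
    return (n >> 1) ^ -(n & 1)

assert decode_zigzag_varint(b'\x00') == 0
assert decode_zigzag_varint(b'\x01') == -1
assert decode_zigzag_varint(b'\x02') == 1
assert decode_zigzag_varint(b'\x80\x01') == 64
```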
-  def read_float(self):
-    """
-    A float is written as 4 bytes.
-    The float is converted into a 32-bit integer using a method equivalent to
-    Java's floatToIntBits and then encoded in little-endian format.
-    """
-    bits = (((ord(self.read(1)) & 0xffL)) |
-      ((ord(self.read(1)) & 0xffL) <<  8) |
-      ((ord(self.read(1)) & 0xffL) << 16) |
-      ((ord(self.read(1)) & 0xffL) << 24))
-    return STRUCT_FLOAT.unpack(STRUCT_INT.pack(bits))[0]
-
-  def read_double(self):
-    """
-    A double is written as 8 bytes.
-    The double is converted into a 64-bit integer using a method equivalent to
-    Java's doubleToLongBits and then encoded in little-endian format.
-    """
-    bits = (((ord(self.read(1)) & 0xffL)) |
-      ((ord(self.read(1)) & 0xffL) <<  8) |
-      ((ord(self.read(1)) & 0xffL) << 16) |
-      ((ord(self.read(1)) & 0xffL) << 24) |
-      ((ord(self.read(1)) & 0xffL) << 32) |
-      ((ord(self.read(1)) & 0xffL) << 40) |
-      ((ord(self.read(1)) & 0xffL) << 48) |
-      ((ord(self.read(1)) & 0xffL) << 56))
-    return STRUCT_DOUBLE.unpack(STRUCT_LONG.pack(bits))[0]
-
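The byte-by-byte assembly above is reading a little-endian IEEE-754 value; as a sketch, the same result comes from a single little-endian `struct` format (`<d`), rather than packing through a big-endian integer:

```python
import struct

def read_double_le(data: bytes) -> float:
    # The eight bytes are consumed low byte first above, i.e. the
    # on-disk layout is a little-endian IEEE-754 double.
    return struct.unpack('<d', data[:8])[0]

def write_double_le(value: float) -> bytes:
    return struct.pack('<d', value)

assert read_double_le(write_double_le(2.5)) == 2.5
```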
-  def read_bytes(self):
-    """
-    Bytes are encoded as a long followed by that many bytes of data. 
-    """
-    return self.read(self.read_long())
-
-  def read_utf8(self):
-    """
-    A string is encoded as a long followed by
-    that many bytes of UTF-8 encoded character data.
-    """
-    return unicode(self.read_bytes(), "utf-8")
-
-  def skip_null(self):
-    pass
-
-  def skip_boolean(self):
-    self.skip(1)
-
-  def skip_int(self):
-    self.skip_long()
-
-  def skip_long(self):
-    b = ord(self.read(1))
-    while (b & 0x80) != 0:
-      b = ord(self.read(1))
-
-  def skip_float(self):
-    self.skip(4)
-
-  def skip_double(self):
-    self.skip(8)
-
-  def skip_bytes(self):
-    self.skip(self.read_long())
-
-  def skip_utf8(self):
-    self.skip_bytes()
-
-  def skip(self, n):
-    self.reader.seek(self.reader.tell() + n)
-
-class BinaryEncoder(object):
-  """Write leaf values."""
-  def __init__(self, writer):
-    """
-    writer is a Python object on which we can call write.
-    """
-    self._writer = writer
-
-  # read-only properties
-  writer = property(lambda self: self._writer)
-
-  def write(self, datum):
-    """Write an abritrary datum."""
-    self.writer.write(datum)
-
-  def write_null(self, datum):
-    """
-    null is written as zero bytes
-    """
-    pass
-  
-  def write_boolean(self, datum):
-    """
-    a boolean is written as a single byte 
-    whose value is either 0 (false) or 1 (true).
-    """
-    if datum:
-      self.write(chr(1))
-    else:
-      self.write(chr(0))
-
-  def write_int(self, datum):
-    """
-    int and long values are written using variable-length, zig-zag coding.    
-    """
-    self.write_long(datum)
-
-  def write_long(self, datum):
-    """
-    int and long values are written using variable-length, zig-zag coding.
-    """
-    datum = (datum << 1) ^ (datum >> 63)
-    while (datum & ~0x7F) != 0:
-      self.write(chr((datum & 0x7f) | 0x80))
-      datum >>= 7
-    self.write(chr(datum))
-
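The encode side mirrors the decode loop; a Python 3 sketch emitting `bytes` instead of `chr()` characters (`encode_zigzag_varint` is an illustrative name):

```python
def encode_zigzag_varint(datum: int) -> bytes:
    # Same zig-zag mapping and 7-bit groups as write_long above;
    # assumes datum fits in a signed 64-bit long.
    datum = (datum << 1) ^ (datum >> 63)
    out = bytearray()
    while datum & ~0x7F:
        out.append((datum & 0x7F) | 0x80)  # continuation bit set
        datum >>= 7
    out.append(datum)
    return bytes(out)

assert encode_zigzag_varint(0) == b'\x00'
assert encode_zigzag_varint(-1) == b'\x01'
assert encode_zigzag_varint(1) == b'\x02'
assert encode_zigzag_varint(64) == b'\x80\x01'
```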
-  def write_float(self, datum):
-    """
-    A float is written as 4 bytes.
-    The float is converted into a 32-bit integer using a method equivalent to
-    Java's floatToIntBits and then encoded in little-endian format.
-    """
-    bits = STRUCT_INT.unpack(STRUCT_FLOAT.pack(datum))[0]
-    self.write(chr((bits) & 0xFF))
-    self.write(chr((bits >> 8) & 0xFF))
-    self.write(chr((bits >> 16) & 0xFF))
-    self.write(chr((bits >> 24) & 0xFF))
-
-  def write_double(self, datum):
-    """
-    A double is written as 8 bytes.
-    The double is converted into a 64-bit integer using a method equivalent to
-    Java's doubleToLongBits and then encoded in little-endian format.
-    """
-    bits = STRUCT_LONG.unpack(STRUCT_DOUBLE.pack(datum))[0]
-    self.write(chr((bits) & 0xFF))
-    self.write(chr((bits >> 8) & 0xFF))
-    self.write(chr((bits >> 16) & 0xFF))
-    self.write(chr((bits >> 24) & 0xFF))
-    self.write(chr((bits >> 32) & 0xFF))
-    self.write(chr((bits >> 40) & 0xFF))
-    self.write(chr((bits >> 48) & 0xFF))
-    self.write(chr((bits >> 56) & 0xFF))
-
-  def write_bytes(self, datum):
-    """
-    Bytes are encoded as a long followed by that many bytes of data. 
-    """
-    self.write_long(len(datum))
-    self.write(struct.pack('%ds' % len(datum), datum))
-
-  def write_utf8(self, datum):
-    """
-    A string is encoded as a long followed by
-    that many bytes of UTF-8 encoded character data.
-    """
-    datum = datum.encode("utf-8")
-    self.write_bytes(datum)
-
-#
-# DatumReader/Writer
-#
-
-class DatumReader(object):
-  """Deserialize Avro-encoded data into a Python data structure."""
-  @staticmethod
-  def check_props(schema_one, schema_two, prop_list):
-    for prop in prop_list:
-      if getattr(schema_one, prop) != getattr(schema_two, prop):
-        return False
-    return True
-
-  @staticmethod
-  def match_schemas(writers_schema, readers_schema):
-    w_type = writers_schema.type
-    r_type = readers_schema.type
-    if 'union' in [w_type, r_type] or 'error_union' in [w_type, r_type]:
-      return True
-    elif (w_type in schema.PRIMITIVE_TYPES and r_type in schema.PRIMITIVE_TYPES
-          and w_type == r_type):
-      return True
-    elif (w_type == r_type == 'record' and
-          DatumReader.check_props(writers_schema, readers_schema, 
-                                  ['fullname'])):
-      return True
-    elif (w_type == r_type == 'error' and
-          DatumReader.check_props(writers_schema, readers_schema, 
-                                  ['fullname'])):
-      return True
-    elif (w_type == r_type == 'request'):
-      return True
-    elif (w_type == r_type == 'fixed' and 
-          DatumReader.check_props(writers_schema, readers_schema, 
-                                  ['fullname', 'size'])):
-      return True
-    elif (w_type == r_type == 'enum' and 
-          DatumReader.check_props(writers_schema, readers_schema, 
-                                  ['fullname'])):
-      return True
-    elif (w_type == r_type == 'map' and 
-          DatumReader.check_props(writers_schema.values,
-                                  readers_schema.values, ['type'])):
-      return True
-    elif (w_type == r_type == 'array' and 
-          DatumReader.check_props(writers_schema.items,
-                                  readers_schema.items, ['type'])):
-      return True
-    
-    # Handle schema promotion
-    if w_type == 'int' and r_type in ['long', 'float', 'double']:
-      return True
-    elif w_type == 'long' and r_type in ['float', 'double']:
-      return True
-    elif w_type == 'float' and r_type == 'double':
-      return True
-    return False
-
-  def __init__(self, writers_schema=None, readers_schema=None):
-    """
-    As defined in the Avro specification, we call the schema encoded
-    in the data the "writer's schema", and the schema expected by the
-    reader the "reader's schema".
-    """
-    self._writers_schema = writers_schema
-    self._readers_schema = readers_schema 
-
-  # read/write properties
-  def set_writers_schema(self, writers_schema):
-    self._writers_schema = writers_schema
-  writers_schema = property(lambda self: self._writers_schema,
-                            set_writers_schema)
-  def set_readers_schema(self, readers_schema):
-    self._readers_schema = readers_schema
-  readers_schema = property(lambda self: self._readers_schema,
-                            set_readers_schema)
-  
-  def read(self, decoder):
-    if self.readers_schema is None:
-      self.readers_schema = self.writers_schema
-    return self.read_data(self.writers_schema, self.readers_schema, decoder)
-
-  def read_data(self, writers_schema, readers_schema, decoder):
-    # schema matching
-    if not DatumReader.match_schemas(writers_schema, readers_schema):
-      fail_msg = 'Schemas do not match.'
-      raise SchemaResolutionException(fail_msg, writers_schema, readers_schema)
-
-    # schema resolution: reader's schema is a union, writer's schema is not
-    if (writers_schema.type not in ['union', 'error_union']
-        and readers_schema.type in ['union', 'error_union']):
-      for s in readers_schema.schemas:
-        if DatumReader.match_schemas(writers_schema, s):
-          return self.read_data(writers_schema, s, decoder)
-      fail_msg = 'Schemas do not match.'
-      raise SchemaResolutionException(fail_msg, writers_schema, readers_schema)
-
-    # function dispatch for reading data based on type of writer's schema
-    if writers_schema.type == 'null':
-      return decoder.read_null()
-    elif writers_schema.type == 'boolean':
-      return decoder.read_boolean()
-    elif writers_schema.type == 'string':
-      return decoder.read_utf8()
-    elif writers_schema.type == 'int':
-      return decoder.read_int()
-    elif writers_schema.type == 'long':
-      return decoder.read_long()
-    elif writers_schema.type == 'float':
-      return decoder.read_float()
-    elif writers_schema.type == 'double':
-      return decoder.read_double()
-    elif writers_schema.type == 'bytes':
-      return decoder.read_bytes()
-    elif writers_schema.type == 'fixed':
-      return self.read_fixed(writers_schema, readers_schema, decoder)
-    elif writers_schema.type == 'enum':
-      return self.read_enum(writers_schema, readers_schema, decoder)
-    elif writers_schema.type == 'array':
-      return self.read_array(writers_schema, readers_schema, decoder)
-    elif writers_schema.type == 'map':
-      return self.read_map(writers_schema, readers_schema, decoder)
-    elif writers_schema.type in ['union', 'error_union']:
-      return self.read_union(writers_schema, readers_schema, decoder)
-    elif writers_schema.type in ['record', 'error', 'request']:
-      return self.read_record(writers_schema, readers_schema, decoder)
-    else:
-      fail_msg = "Cannot read unknown schema type: %s" % writers_schema.type
-      raise schema.AvroException(fail_msg)
-
-  def skip_data(self, writers_schema, decoder):
-    if writers_schema.type == 'null':
-      return decoder.skip_null()
-    elif writers_schema.type == 'boolean':
-      return decoder.skip_boolean()
-    elif writers_schema.type == 'string':
-      return decoder.skip_utf8()
-    elif writers_schema.type == 'int':
-      return decoder.skip_int()
-    elif writers_schema.type == 'long':
-      return decoder.skip_long()
-    elif writers_schema.type == 'float':
-      return decoder.skip_float()
-    elif writers_schema.type == 'double':
-      return decoder.skip_double()
-    elif writers_schema.type == 'bytes':
-      return decoder.skip_bytes()
-    elif writers_schema.type == 'fixed':
-      return self.skip_fixed(writers_schema, decoder)
-    elif writers_schema.type == 'enum':
-      return self.skip_enum(writers_schema, decoder)
-    elif writers_schema.type == 'array':
-      return self.skip_array(writers_schema, decoder)
-    elif writers_schema.type == 'map':
-      return self.skip_map(writers_schema, decoder)
-    elif writers_schema.type in ['union', 'error_union']:
-      return self.skip_union(writers_schema, decoder)
-    elif writers_schema.type in ['record', 'error', 'request']:
-      return self.skip_record(writers_schema, decoder)
-    else:
-      fail_msg = "Unknown schema type: %s" % writers_schema.type
-      raise schema.AvroException(fail_msg)
-
-  def read_fixed(self, writers_schema, readers_schema, decoder):
-    """
-    Fixed instances are encoded using the number of bytes declared
-    in the schema.
-    """
-    return decoder.read(writers_schema.size)
-
-  def skip_fixed(self, writers_schema, decoder):
-    return decoder.skip(writers_schema.size)
-
-  def read_enum(self, writers_schema, readers_schema, decoder):
-    """
-    An enum is encoded by an int, representing the zero-based position
-    of the symbol in the schema.
-    """
-    # read data
-    index_of_symbol = decoder.read_int()
-    read_symbol = writers_schema.symbols[index_of_symbol]
-
-    # schema resolution
-    if read_symbol not in readers_schema.symbols:
-      fail_msg = "Symbol %s not present in Reader's Schema" % read_symbol
-      raise SchemaResolutionException(fail_msg, writers_schema, readers_schema)
-
-    return read_symbol
-
-  def skip_enum(self, writers_schema, decoder):
-    return decoder.skip_int()
-
-  def read_array(self, writers_schema, readers_schema, decoder):
-    """
-    Arrays are encoded as a series of blocks.
-
-    Each block consists of a long count value,
-    followed by that many array items.
-    A block with count zero indicates the end of the array.
-    Each item is encoded per the array's item schema.
-
-    If a block's count is negative,
-    then the count is followed immediately by a long block size,
-    indicating the number of bytes in the block.
-    The actual count in this case
-    is the absolute value of the count written.
-    """
-    read_items = []
-    block_count = decoder.read_long()
-    while block_count != 0:
-      if block_count < 0:
-        block_count = -block_count
-        block_size = decoder.read_long()
-      for i in range(block_count):
-        read_items.append(self.read_data(writers_schema.items,
-                                         readers_schema.items, decoder))
-      block_count = decoder.read_long()
-    return read_items
-
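The block framing described in the docstring above can be decoded by hand; a self-contained Python 3 sketch (`read_int_array` and `read_varint_zigzag` are illustrative names), restricted to arrays of ints:

```python
import io

def read_varint_zigzag(buf) -> int:
    # One zig-zag varint long, as the decoder's read_long does.
    b = buf.read(1)[0]
    n, shift = b & 0x7F, 7
    while b & 0x80:
        b = buf.read(1)[0]
        n |= (b & 0x7F) << shift
        shift += 7
    return (n >> 1) ^ -(n & 1)

def read_int_array(buf) -> list:
    items = []
    count = read_varint_zigzag(buf)        # block count
    while count != 0:
        if count < 0:                      # negative count: a block
            count = -count                 # size in bytes follows
            read_varint_zigzag(buf)        # (ignored when decoding items)
        for _ in range(count):
            items.append(read_varint_zigzag(buf))
        count = read_varint_zigzag(buf)    # zero terminates the array
    return items

# [1, 2] as: count=2 (0x04), item 1 (0x02), item 2 (0x04), terminator (0x00)
assert read_int_array(io.BytesIO(b'\x04\x02\x04\x00')) == [1, 2]
```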
-  def skip_array(self, writers_schema, decoder):
-    block_count = decoder.read_long()
-    while block_count != 0:
-      if block_count < 0:
-        block_size = decoder.read_long()
-        decoder.skip(block_size)
-      else:
-        for i in range(block_count):
-          self.skip_data(writers_schema.items, decoder)
-      block_count = decoder.read_long()
-
-  def read_map(self, writers_schema, readers_schema, decoder):
-    """
-    Maps are encoded as a series of blocks.
-
-    Each block consists of a long count value,
-    followed by that many key/value pairs.
-    A block with count zero indicates the end of the map.
-    Each item is encoded per the map's value schema.
-
-    If a block's count is negative,
-    then the count is followed immediately by a long block size,
-    indicating the number of bytes in the block.
-    The actual count in this case
-    is the absolute value of the count written.
-    """
-    read_items = {}
-    block_count = decoder.read_long()
-    while block_count != 0:
-      if block_count < 0:
-        block_count = -block_count
-        block_size = decoder.read_long()
-      for i in range(block_count):
-        key = decoder.read_utf8()
-        read_items[key] = self.read_data(writers_schema.values,
-                                         readers_schema.values, decoder)
-      block_count = decoder.read_long()
-    return read_items
-
-  def skip_map(self, writers_schema, decoder):
-    block_count = decoder.read_long()
-    while block_count != 0:
-      if block_count < 0:
-        block_size = decoder.read_long()
-        decoder.skip(block_size)
-      else:
-        for i in range(block_count):
-          decoder.skip_utf8()
-          self.skip_data(writers_schema.values, decoder)
-      block_count = decoder.read_long()
-
-  def read_union(self, writers_schema, readers_schema, decoder):
-    """
-    A union is encoded by first writing a long value indicating
-    the zero-based position within the union of the schema of its value.
-    The value is then encoded per the indicated schema within the union.
-    """
-    # schema resolution
-    index_of_schema = int(decoder.read_long())
-    if index_of_schema >= len(writers_schema.schemas):
-      fail_msg = "Can't access branch index %d for union with %d branches"\
-                 % (index_of_schema, len(writers_schema.schemas))
-      raise SchemaResolutionException(fail_msg, writers_schema, readers_schema)
-    selected_writers_schema = writers_schema.schemas[index_of_schema]
-    
-    # read data
-    return self.read_data(selected_writers_schema, readers_schema, decoder)
-
-  def skip_union(self, writers_schema, decoder):
-    index_of_schema = int(decoder.read_long())
-    return self.skip_data(writers_schema.schemas[index_of_schema], decoder)
-
-  def read_record(self, writers_schema, readers_schema, decoder):
-    """
-    A record is encoded by encoding the values of its fields
-    in the order that they are declared. In other words, a record
-    is encoded as just the concatenation of the encodings of its fields.
-    Field values are encoded per their schema.
-
-    Schema Resolution:
-     * the ordering of fields may be different: fields are matched by name.
-     * schemas for fields with the same name in both records are resolved
-       recursively.
-     * if the writer's record contains a field with a name not present in the
-       reader's record, the writer's value for that field is ignored.
-     * if the reader's record schema has a field that contains a default value,
-       and writer's schema does not have a field with the same name, then the
-       reader should use the default value from its field.
-     * if the reader's record schema has a field with no default value, and
-       the writer's schema does not have a field with the same name, then
-       an error is signalled.
-    """
-    # schema resolution
-    readers_fields_dict = readers_schema.fields_dict
-    read_record = {}
-    for field in writers_schema.fields:
-      readers_field = readers_fields_dict.get(field.name)
-      if readers_field is not None:
-        field_val = self.read_data(field.type, readers_field.type, decoder)
-        read_record[field.name] = field_val
-      else:
-        self.skip_data(field.type, decoder)
-
-    # fill in default values
-    if len(readers_fields_dict) > len(read_record):
-      writers_fields_dict = writers_schema.fields_dict
-      for field_name, field in readers_fields_dict.items():
-        if field_name not in writers_fields_dict:
-          if field.has_default:
-            field_val = self._read_default_value(field.type, field.default)
-            read_record[field.name] = field_val
-          else:
-            fail_msg = 'No default value for field %s' % field_name
-            raise SchemaResolutionException(fail_msg, writers_schema,
-                                            readers_schema)
-    return read_record
-
-  def skip_record(self, writers_schema, decoder):
-    for field in writers_schema.fields:
-      self.skip_data(field.type, decoder)
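The resolution rules in the `read_record` docstring can be restated with plain dictionaries. The sketch below is illustrative only (the names are not part of avro's API): fields are matched by name, writer-only fields are dropped, and reader-only fields fall back to their defaults.

```python
def resolve_record(decoded, reader_field_names, reader_defaults):
    """Apply the record-resolution rules: match fields by name, drop
    writer-only fields, fill reader-only fields from their defaults."""
    resolved = {}
    for name in reader_field_names:
        if name in decoded:
            resolved[name] = decoded[name]          # present in both schemas
        elif name in reader_defaults:
            resolved[name] = reader_defaults[name]  # reader-only, has default
        else:
            # mirrors the SchemaResolutionException raised by read_record
            raise ValueError('No default value for field %s' % name)
    return resolved
```

For example, `resolve_record({'id': 1, 'legacy': 9}, ['id', 'email'], {'email': 'none'})` yields `{'id': 1, 'email': 'none'}`: the writer-only `legacy` field is ignored and `email` comes from its default.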
-
-  def _read_default_value(self, field_schema, default_value):
-    """
-    Decode a JSON-encoded default value according to the field's schema.
-    """
-    if field_schema.type == 'null':
-      return None
-    elif field_schema.type == 'boolean':
-      return bool(default_value)
-    elif field_schema.type == 'int':
-      return int(default_value)
-    elif field_schema.type == 'long':
-      return long(default_value)
-    elif field_schema.type in ['float', 'double']:
-      return float(default_value)
-    elif field_schema.type in ['enum', 'fixed', 'string', 'bytes']:
-      return default_value
-    elif field_schema.type == 'array':
-      read_array = []
-      for json_val in default_value:
-        item_val = self._read_default_value(field_schema.items, json_val)
-        read_array.append(item_val)
-      return read_array
-    elif field_schema.type == 'map':
-      read_map = {}
-      for key, json_val in default_value.items():
-        map_val = self._read_default_value(field_schema.values, json_val)
-        read_map[key] = map_val
-      return read_map
-    elif field_schema.type in ['union', 'error_union']:
-      return self._read_default_value(field_schema.schemas[0], default_value)
-    elif field_schema.type == 'record':
-      read_record = {}
-      for field in field_schema.fields:
-        json_val = default_value.get(field.name)
-        if json_val is None: json_val = field.default
-        field_val = self._read_default_value(field.type, json_val)
-        read_record[field.name] = field_val
-      return read_record
-    else:
-      fail_msg = 'Unknown type: %s' % field_schema.type
-      raise schema.AvroException(fail_msg)
-
-class DatumWriter(object):
-  """DatumWriter for generic python objects."""
-  def __init__(self, writers_schema=None):
-    self._writers_schema = writers_schema
-
-  # read/write properties
-  def set_writers_schema(self, writers_schema):
-    self._writers_schema = writers_schema
-  writers_schema = property(lambda self: self._writers_schema,
-                            set_writers_schema)
-
-  def write(self, datum, encoder):
-    self.write_data(self.writers_schema, datum, encoder)
-
-  def write_data(self, writers_schema, datum, encoder):
-    # validate datum
-    if not validate(writers_schema, datum):
-      raise AvroTypeException(writers_schema, datum)
-    
-    # function dispatch to write datum
-    if writers_schema.type == 'null':
-      encoder.write_null(datum)
-    elif writers_schema.type == 'boolean':
-      encoder.write_boolean(datum)
-    elif writers_schema.type == 'string':
-      encoder.write_utf8(datum)
-    elif writers_schema.type == 'int':
-      encoder.write_int(datum)
-    elif writers_schema.type == 'long':
-      encoder.write_long(datum)
-    elif writers_schema.type == 'float':
-      encoder.write_float(datum)
-    elif writers_schema.type == 'double':
-      encoder.write_double(datum)
-    elif writers_schema.type == 'bytes':
-      encoder.write_bytes(datum)
-    elif writers_schema.type == 'fixed':
-      self.write_fixed(writers_schema, datum, encoder)
-    elif writers_schema.type == 'enum':
-      self.write_enum(writers_schema, datum, encoder)
-    elif writers_schema.type == 'array':
-      self.write_array(writers_schema, datum, encoder)
-    elif writers_schema.type == 'map':
-      self.write_map(writers_schema, datum, encoder)
-    elif writers_schema.type in ['union', 'error_union']:
-      self.write_union(writers_schema, datum, encoder)
-    elif writers_schema.type in ['record', 'error', 'request']:
-      self.write_record(writers_schema, datum, encoder)
-    else:
-      fail_msg = 'Unknown type: %s' % writers_schema.type
-      raise schema.AvroException(fail_msg)
-
-  def write_fixed(self, writers_schema, datum, encoder):
-    """
-    Fixed instances are encoded using the number of bytes declared
-    in the schema.
-    """
-    encoder.write(datum)
-
-  def write_enum(self, writers_schema, datum, encoder):
-    """
-    An enum is encoded by an int, representing the zero-based position
-    of the symbol in the schema.
-    """
-    index_of_datum = writers_schema.symbols.index(datum)
-    encoder.write_int(index_of_datum)
-
-  def write_array(self, writers_schema, datum, encoder):
-    """
-    Arrays are encoded as a series of blocks.
-
-    Each block consists of a long count value,
-    followed by that many array items.
-    A block with count zero indicates the end of the array.
-    Each item is encoded per the array's item schema.
-
-    If a block's count is negative,
-    then the count is followed immediately by a long block size,
-    indicating the number of bytes in the block.
-    The actual count in this case
-    is the absolute value of the count written.
-    """
-    if len(datum) > 0:
-      encoder.write_long(len(datum))
-      for item in datum:
-        self.write_data(writers_schema.items, item, encoder)
-    encoder.write_long(0)
-
-  def write_map(self, writers_schema, datum, encoder):
-    """
-    Maps are encoded as a series of blocks.
-
-    Each block consists of a long count value,
-    followed by that many key/value pairs.
-    A block with count zero indicates the end of the map.
-    Each item is encoded per the map's value schema.
-
-    If a block's count is negative,
-    then the count is followed immediately by a long block size,
-    indicating the number of bytes in the block.
-    The actual count in this case
-    is the absolute value of the count written.
-    """
-    if len(datum) > 0:
-      encoder.write_long(len(datum))
-      for key, val in datum.items():
-        encoder.write_utf8(key)
-        self.write_data(writers_schema.values, val, encoder)
-    encoder.write_long(0)
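The single-block layout produced by `write_array` and `write_map` can be reproduced by hand. The sketch below encodes a list of longs; `encode_long` follows Avro's zigzag/varint rule (zigzag-encode the value, then emit 7-bit groups least significant first, with the high bit marking continuation). It is a standalone illustration, not avro's encoder.

```python
def encode_long(n):
    """Avro long: zigzag, then variable-length 7-bit groups (LSB first)."""
    n = (n << 1) ^ (n >> 63)  # zigzag; Python's >> on negatives is arithmetic
    out = bytearray()
    while (n & ~0x7F) != 0:
        out.append((n & 0x7F) | 0x80)  # continuation bit set
        n >>= 7
    out.append(n)
    return bytes(out)

def encode_long_array(items):
    """Single-block encoding used by write_array: count, items, end marker."""
    buf = bytearray()
    if items:
        buf += encode_long(len(items))
        for item in items:
            buf += encode_long(item)
    buf += encode_long(0)  # a zero count terminates the array
    return bytes(buf)
```

So `encode_long_array([1, 2])` produces `b'\x04\x02\x04\x00'` (count 2, items 1 and 2, terminator), and an empty array is just the single terminator byte.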
-
-  def write_union(self, writers_schema, datum, encoder):
-    """
-    A union is encoded by first writing a long value indicating
-    the zero-based position within the union of the schema of its value.
-    The value is then encoded per the indicated schema within the union.
-    """
-    # resolve union
-    index_of_schema = -1
-    for i, candidate_schema in enumerate(writers_schema.schemas):
-      if validate(candidate_schema, datum):
-        index_of_schema = i
-        break
-    if index_of_schema < 0: raise AvroTypeException(writers_schema, datum)
-
-    # write data
-    encoder.write_long(index_of_schema)
-    self.write_data(writers_schema.schemas[index_of_schema], datum, encoder)
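The branch-selection step above can be modeled with a list of `(name, predicate)` pairs. The helper below is illustrative (it is not avro's `validate`): it returns the first branch that accepts the datum, and the encoder then writes that index as a long before the value itself.

```python
def pick_branch(branches, datum):
    """Return (index, name) of the first branch whose predicate accepts
    the datum, mirroring write_union's resolution step."""
    for i, (name, accepts) in enumerate(branches):
        if accepts(datum):
            return i, name
    raise TypeError('datum matches no union branch')
```

For the union `["null", "string"]`, `pick_branch([('null', lambda d: d is None), ('string', lambda d: isinstance(d, str))], 'hi')` returns `(1, 'string')`, so the wire form is long 1 followed by the encoded string.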
-
-  def write_record(self, writers_schema, datum, encoder):
-    """
-    A record is encoded by encoding the values of its fields
-    in the order that they are declared. In other words, a record
-    is encoded as just the concatenation of the encodings of its fields.
-    Field values are encoded per their schema.
-    """
-    for field in writers_schema.fields:
-      self.write_data(field.type, datum.get(field.name), encoder)
diff --git a/branch-1.3/lang/py/src/avro/ipc.py b/branch-1.3/lang/py/src/avro/ipc.py
deleted file mode 100644
index c763c2b..0000000
--- a/branch-1.3/lang/py/src/avro/ipc.py
+++ /dev/null
@@ -1,501 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Support for inter-process calls.
-"""
-import httplib
-try:
-  from cStringIO import StringIO
-except ImportError:
-  from StringIO import StringIO
-import struct
-from avro import io
-from avro import protocol
-from avro import schema
-
-#
-# Constants
-#
-
-HANDSHAKE_REQUEST_SCHEMA = schema.parse("""\
-{
-  "type": "record",
-  "name": "HandshakeRequest", "namespace":"org.apache.avro.ipc",
-  "fields": [
-    {"name": "clientHash",
-     "type": {"type": "fixed", "name": "MD5", "size": 16}},
-    {"name": "clientProtocol", "type": ["null", "string"]},
-    {"name": "serverHash", "type": "MD5"},
-    {"name": "meta", "type": ["null", {"type": "map", "values": "bytes"}]}
-  ]
-}""")
-
-HANDSHAKE_RESPONSE_SCHEMA = schema.parse("""\
-{
-  "type": "record",
-  "name": "HandshakeResponse", "namespace": "org.apache.avro.ipc",
-  "fields": [
-    {"name": "match",
-     "type": {"type": "enum", "name": "HandshakeMatch",
-              "symbols": ["BOTH", "CLIENT", "NONE"]}},
-    {"name": "serverProtocol", "type": ["null", "string"]},
-    {"name": "serverHash",
-     "type": ["null", {"type": "fixed", "name": "MD5", "size": 16}]},
-    {"name": "meta",
-     "type": ["null", {"type": "map", "values": "bytes"}]}
-  ]
-}
-""")
-
-HANDSHAKE_REQUESTOR_WRITER = io.DatumWriter(HANDSHAKE_REQUEST_SCHEMA)
-HANDSHAKE_REQUESTOR_READER = io.DatumReader(HANDSHAKE_RESPONSE_SCHEMA)
-HANDSHAKE_RESPONDER_WRITER = io.DatumWriter(HANDSHAKE_RESPONSE_SCHEMA)
-HANDSHAKE_RESPONDER_READER = io.DatumReader(HANDSHAKE_REQUEST_SCHEMA)
-
-META_SCHEMA = schema.parse('{"type": "map", "values": "bytes"}')
-META_WRITER = io.DatumWriter(META_SCHEMA)
-META_READER = io.DatumReader(META_SCHEMA)
-
-SYSTEM_ERROR_SCHEMA = schema.parse('["string"]')
-
-# protocol cache
-REMOTE_HASHES = {}
-REMOTE_PROTOCOLS = {}
-
-BIG_ENDIAN_INT_STRUCT = struct.Struct('!I')
-BUFFER_HEADER_LENGTH = 4
-BUFFER_SIZE = 8192
-
-#
-# Exceptions
-#
-
-class AvroRemoteException(schema.AvroException):
-  """
-  Raised when an error message is sent by an Avro requestor or responder.
-  """
-  def __init__(self, fail_msg=None):
-    schema.AvroException.__init__(self, fail_msg)
-
-class ConnectionClosedException(schema.AvroException):
-  pass
-
-#
-# Base IPC Classes (Requestor/Responder)
-#
-
-class Requestor(object):
-  """Base class for the client side of a protocol interaction."""
-  def __init__(self, local_protocol, transceiver):
-    self._local_protocol = local_protocol
-    self._transceiver = transceiver
-    self._remote_protocol = None
-    self._remote_hash = None
-    self._send_protocol = None
-
-  # read-only properties
-  local_protocol = property(lambda self: self._local_protocol)
-  transceiver = property(lambda self: self._transceiver)
-
-  # read/write properties
-  def set_remote_protocol(self, new_remote_protocol):
-    self._remote_protocol = new_remote_protocol
-    REMOTE_PROTOCOLS[self.transceiver.remote_name] = self.remote_protocol
-  remote_protocol = property(lambda self: self._remote_protocol,
-                             set_remote_protocol)
-
-  def set_remote_hash(self, new_remote_hash):
-    self._remote_hash = new_remote_hash
-    REMOTE_HASHES[self.transceiver.remote_name] = self.remote_hash
-  remote_hash = property(lambda self: self._remote_hash, set_remote_hash)
-
-  def set_send_protocol(self, new_send_protocol):
-    self._send_protocol = new_send_protocol
-  send_protocol = property(lambda self: self._send_protocol, set_send_protocol)
-
-  def request(self, message_name, request_datum):
-    """
-    Writes a request message and reads a response or error message.
-    """
-    # build handshake and call request
-    buffer_writer = StringIO()
-    buffer_encoder = io.BinaryEncoder(buffer_writer)
-    self.write_handshake_request(buffer_encoder)
-    self.write_call_request(message_name, request_datum, buffer_encoder)
-
-    # send the handshake and call request; block until call response
-    call_request = buffer_writer.getvalue()
-    call_response = self.transceiver.transceive(call_request)
-
-    # process the handshake and call response
-    buffer_decoder = io.BinaryDecoder(StringIO(call_response))
-    call_response_exists = self.read_handshake_response(buffer_decoder)
-    if call_response_exists:
-      return self.read_call_response(message_name, buffer_decoder)
-    else:
-      return self.request(message_name, request_datum)
-
-  def write_handshake_request(self, encoder):
-    local_hash = self.local_protocol.md5
-    remote_name = self.transceiver.remote_name
-    remote_hash = REMOTE_HASHES.get(remote_name)
-    if remote_hash is None:
-      remote_hash = local_hash
-      self.remote_protocol = self.local_protocol
-    request_datum = {}
-    request_datum['clientHash'] = local_hash
-    request_datum['serverHash'] = remote_hash
-    if self.send_protocol:
-      request_datum['clientProtocol'] = str(self.local_protocol)
-    HANDSHAKE_REQUESTOR_WRITER.write(request_datum, encoder)
-
-  def write_call_request(self, message_name, request_datum, encoder):
-    """
-    The format of a call request is:
-      * request metadata, a map with values of type bytes
-      * the message name, an Avro string, followed by
-      * the message parameters. Parameters are serialized according to
-        the message's request declaration.
-    """
-    # request metadata (not yet implemented)
-    request_metadata = {}
-    META_WRITER.write(request_metadata, encoder)
-
-    # message name
-    message = self.local_protocol.messages.get(message_name)
-    if message is None:
-      raise schema.AvroException('Unknown message: %s' % message_name)
-    encoder.write_utf8(message.name)
-
-    # message parameters
-    self.write_request(message.request, request_datum, encoder)
-
-  def write_request(self, request_schema, request_datum, encoder):
-    datum_writer = io.DatumWriter(request_schema)
-    datum_writer.write(request_datum, encoder)
-
-  def read_handshake_response(self, decoder):
-    handshake_response = HANDSHAKE_REQUESTOR_READER.read(decoder)
-    match = handshake_response.get('match')
-    if match == 'BOTH':
-      self.send_protocol = False
-      return True
-    elif match == 'CLIENT':
-      if self.send_protocol:
-        raise schema.AvroException('Handshake failure.')
-      self.remote_protocol = protocol.parse(
-                             handshake_response.get('serverProtocol'))
-      self.remote_hash = handshake_response.get('serverHash')
-      self.send_protocol = False
-      return True
-    elif match == 'NONE':
-      if self.send_protocol:
-        raise schema.AvroException('Handshake failure.')
-      self.remote_protocol = protocol.parse(
-                             handshake_response.get('serverProtocol'))
-      self.remote_hash = handshake_response.get('serverHash')
-      self.send_protocol = True
-      return False
-    else:
-      raise schema.AvroException('Unexpected match: %s' % match)
-
-  def read_call_response(self, message_name, decoder):
-    """
-    The format of a call response is:
-      * response metadata, a map with values of type bytes
-      * a one-byte error flag boolean, followed by either:
-        o if the error flag is false,
-          the message response, serialized per the message's response schema.
-        o if the error flag is true, 
-          the error, serialized per the message's error union schema.
-    """
-    # response metadata
-    response_metadata = META_READER.read(decoder)
-
-    # remote response schema
-    remote_message_schema = self.remote_protocol.messages.get(message_name)
-    if remote_message_schema is None:
-      raise schema.AvroException('Unknown remote message: %s' % message_name)
-
-    # local response schema
-    local_message_schema = self.local_protocol.messages.get(message_name)
-    if local_message_schema is None:
-      raise schema.AvroException('Unknown local message: %s' % message_name)
-
-    # error flag
-    if not decoder.read_boolean():
-      writers_schema = remote_message_schema.response
-      readers_schema = local_message_schema.response
-      return self.read_response(writers_schema, readers_schema, decoder)
-    else:
-      writers_schema = remote_message_schema.errors
-      readers_schema = local_message_schema.errors
-      raise self.read_error(writers_schema, readers_schema, decoder)
-
-  def read_response(self, writers_schema, readers_schema, decoder):
-    datum_reader = io.DatumReader(writers_schema, readers_schema)
-    result = datum_reader.read(decoder)
-    return result
-
-  def read_error(self, writers_schema, readers_schema, decoder):
-    datum_reader = io.DatumReader(writers_schema, readers_schema)
-    return AvroRemoteException(datum_reader.read(decoder))
-
-class Responder(object):
-  """Base class for the server side of a protocol interaction."""
-  def __init__(self, local_protocol):
-    self._local_protocol = local_protocol
-    self._local_hash = self.local_protocol.md5
-    self._protocol_cache = {}
-    self.set_protocol_cache(self.local_hash, self.local_protocol)
-
-  # read-only properties
-  local_protocol = property(lambda self: self._local_protocol)
-  local_hash = property(lambda self: self._local_hash)
-  protocol_cache = property(lambda self: self._protocol_cache)
-
-  # utility functions to manipulate protocol cache
-  def get_protocol_cache(self, hash):
-    return self.protocol_cache.get(hash)
-  def set_protocol_cache(self, hash, protocol):
-    self.protocol_cache[hash] = protocol
-
-  def respond(self, call_request):
-    """
-    Called by a server to deserialize a request, compute and serialize
-    a response or error. Compare to 'handle()' in Thrift.
-    """
-    buffer_reader = StringIO(call_request)
-    buffer_decoder = io.BinaryDecoder(buffer_reader)
-    buffer_writer = StringIO()
-    buffer_encoder = io.BinaryEncoder(buffer_writer)
-    error = None
-    response_metadata = {}
-    
-    try:
-      remote_protocol = self.process_handshake(buffer_decoder, buffer_encoder)
-      # handshake failure
-      if remote_protocol is None:  
-        return buffer_writer.getvalue()
-
-      # read request using remote protocol
-      request_metadata = META_READER.read(buffer_decoder)
-      remote_message_name = buffer_decoder.read_utf8()
-
-      # get remote and local request schemas so we can do
-      # schema resolution (one fine day)
-      remote_message = remote_protocol.messages.get(remote_message_name)
-      if remote_message is None:
-        fail_msg = 'Unknown remote message: %s' % remote_message_name
-        raise schema.AvroException(fail_msg)
-      local_message = self.local_protocol.messages.get(remote_message_name)
-      if local_message is None:
-        fail_msg = 'Unknown local message: %s' % remote_message_name
-        raise schema.AvroException(fail_msg)
-      writers_schema = remote_message.request
-      readers_schema = local_message.request
-      request = self.read_request(writers_schema, readers_schema,
-                                  buffer_decoder)
-
-      # perform server logic
-      try:
-        response = self.invoke(local_message, request)
-      except AvroRemoteException, e:
-        error = e
-      except Exception, e:
-        error = AvroRemoteException(str(e))
-
-      # write response using local protocol
-      META_WRITER.write(response_metadata, buffer_encoder)
-      buffer_encoder.write_boolean(error is not None)
-      if error is None:
-        writers_schema = local_message.response
-        self.write_response(writers_schema, response, buffer_encoder)
-      else:
-        writers_schema = local_message.errors
-        self.write_error(writers_schema, error, buffer_encoder)
-    except schema.AvroException, e:
-      error = AvroRemoteException(str(e))
-      buffer_writer = StringIO()
-      buffer_encoder = io.BinaryEncoder(buffer_writer)
-      META_WRITER.write(response_metadata, buffer_encoder)
-      buffer_encoder.write_boolean(True)
-      self.write_error(SYSTEM_ERROR_SCHEMA, error, buffer_encoder)
-    return buffer_writer.getvalue()
-
-  def process_handshake(self, decoder, encoder):
-    handshake_request = HANDSHAKE_RESPONDER_READER.read(decoder)
-    handshake_response = {}
-
-    # determine the remote protocol
-    client_hash = handshake_request.get('clientHash')
-    client_protocol = handshake_request.get('clientProtocol')
-    remote_protocol = self.get_protocol_cache(client_hash)
-    if remote_protocol is None and client_protocol is not None:
-      remote_protocol = protocol.parse(client_protocol)
-      self.set_protocol_cache(client_hash, remote_protocol)
-
-    # evaluate remote's guess of the local protocol
-    server_hash = handshake_request.get('serverHash')
-    if self.local_hash == server_hash:
-      if remote_protocol is None:
-        handshake_response['match'] = 'NONE'
-      else:
-        handshake_response['match'] = 'BOTH'
-    else:
-      if remote_protocol is None:
-        handshake_response['match'] = 'NONE'
-      else:
-        handshake_response['match'] = 'CLIENT'
-
-    if handshake_response['match'] != 'BOTH':
-      handshake_response['serverProtocol'] = str(self.local_protocol)
-      handshake_response['serverHash'] = self.local_hash
-
-    HANDSHAKE_RESPONDER_WRITER.write(handshake_response, encoder)
-    return remote_protocol
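The `match` value computed above reduces to a two-input decision table. The sketch below restates it (function and parameter names are illustrative): the server answers BOTH when the client guessed the server's hash and the server already knows the client's protocol, CLIENT when only the latter holds, and NONE whenever the client's protocol is unknown.

```python
def handshake_match(hashes_equal, remote_protocol_known):
    """Decision table implemented by process_handshake: the server can
    only claim a match for a protocol it actually has in its cache."""
    if not remote_protocol_known:
        return 'NONE'
    return 'BOTH' if hashes_equal else 'CLIENT'
```

Only the BOTH case lets the server omit `serverProtocol` and `serverHash` from the response; in the other three cells the client still needs them.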
-
-  def invoke(self, local_message, request):
-    """
-    Actual work done by the server: cf. handler in Thrift.
-    """
-    pass
-
-  def read_request(self, writers_schema, readers_schema, decoder):
-    datum_reader = io.DatumReader(writers_schema, readers_schema)
-    return datum_reader.read(decoder)
-
-  def write_response(self, writers_schema, response_datum, encoder):
-    datum_writer = io.DatumWriter(writers_schema)
-    datum_writer.write(response_datum, encoder)
-
-  def write_error(self, writers_schema, error_exception, encoder):
-    datum_writer = io.DatumWriter(writers_schema)
-    datum_writer.write(str(error_exception), encoder)
-
-#
-# Utility classes
-#
-
-class FramedReader(object):
-  """Wrapper around a file-like object to read framed data."""
-  def __init__(self, reader):
-    self._reader = reader
-
-  # read-only properties
-  reader = property(lambda self: self._reader)
-
-  def read_framed_message(self):
-    message = []
-    while True:
-      buffer = StringIO()
-      buffer_length = self._read_buffer_length()
-      if buffer_length == 0:
-        return ''.join(message)
-      while buffer.tell() < buffer_length:
-        chunk = self.reader.read(buffer_length - buffer.tell())
-        if chunk == '':
-          raise ConnectionClosedException("Reader read 0 bytes.")
-        buffer.write(chunk)
-      message.append(buffer.getvalue())
-
-  def _read_buffer_length(self):
-    read = self.reader.read(BUFFER_HEADER_LENGTH)
-    if read == '':
-      raise ConnectionClosedException("Reader read 0 bytes.")
-    return BIG_ENDIAN_INT_STRUCT.unpack(read)[0]
-
-class FramedWriter(object):
-  """Wrapper around a file-like object to write framed data."""
-  def __init__(self, writer):
-    self._writer = writer
-
-  # read-only properties
-  writer = property(lambda self: self._writer)
-
-  def write_framed_message(self, message):
-    message_length = len(message)
-    total_bytes_sent = 0
-    while message_length - total_bytes_sent > 0:
-      if message_length - total_bytes_sent > BUFFER_SIZE:
-        buffer_length = BUFFER_SIZE
-      else:
-        buffer_length = message_length - total_bytes_sent
-      self.write_buffer(message[total_bytes_sent:
-                                (total_bytes_sent + buffer_length)])
-      total_bytes_sent += buffer_length
-    # A message is always terminated by a zero-length buffer.
-    self.write_buffer_length(0)
-
-  def write_buffer(self, chunk):
-    buffer_length = len(chunk)
-    self.write_buffer_length(buffer_length)
-    self.writer.write(chunk)
-
-  def write_buffer_length(self, n):
-    self.writer.write(BIG_ENDIAN_INT_STRUCT.pack(n))
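`FramedWriter`'s wire format is simple enough to reproduce directly: each buffer is a 4-byte big-endian length header followed by that many bytes, and a zero-length buffer terminates the message. A self-contained sketch, assuming the same 8 KB default buffer size:

```python
import struct

def frame(message, buffer_size=8192):
    """Frame a byte string as length-prefixed buffers terminated by an
    empty buffer, matching FramedWriter's wire format."""
    out = bytearray()
    for start in range(0, len(message), buffer_size):
        chunk = message[start:start + buffer_size]
        out += struct.pack('!I', len(chunk))  # 4-byte big-endian length
        out += chunk
    out += struct.pack('!I', 0)  # zero-length buffer ends the message
    return bytes(out)
```

For example, `frame(b'abc')` is `b'\x00\x00\x00\x03abc\x00\x00\x00\x00'`, and an empty message is just the four-byte terminator.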
-
-#
-# Transceiver Implementations
-#
-
-class HTTPTransceiver(object):
-  """
-  A simple HTTP-based transceiver implementation.
-  Useful for clients but not for servers.
-  """
-  def __init__(self, host, port):
-    self.conn = httplib.HTTPConnection(host, port)
-    self.conn.connect()
-
-  # read-only properties
-  sock = property(lambda self: self.conn.sock)
-  remote_name = property(lambda self: self.sock.getsockname())
-
-  # read/write properties
-  def set_conn(self, new_conn):
-    self._conn = new_conn
-  conn = property(lambda self: self._conn, set_conn)
-
-  def transceive(self, request):
-    self.write_framed_message(request)
-    result = self.read_framed_message()
-    return result
-
-  def read_framed_message(self):
-    response = self.conn.getresponse()
-    response_reader = FramedReader(response)
-    framed_message = response_reader.read_framed_message()
-    response.read()    # ensure we're ready for subsequent requests
-    return framed_message
-
-  def write_framed_message(self, message):
-    req_method = 'POST'
-    req_resource = '/'
-    req_headers = {'Content-Type': 'avro/binary'}
-
-    req_body_buffer = FramedWriter(StringIO())
-    req_body_buffer.write_framed_message(message)
-    req_body = req_body_buffer.writer.getvalue()
-
-    self.conn.request(req_method, req_resource, req_body, req_headers)
-
-  def close(self):
-    self.conn.close()
-
-#
-# Server Implementations (none yet)
-#
-
diff --git a/branch-1.3/lang/py/src/avro/protocol.py b/branch-1.3/lang/py/src/avro/protocol.py
deleted file mode 100644
index 055b857..0000000
--- a/branch-1.3/lang/py/src/avro/protocol.py
+++ /dev/null
@@ -1,224 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Protocol implementation.
-"""
-try:
-  import hashlib
-except ImportError:
-  import md5
-try:
-  import simplejson as json
-except ImportError:
-  import json
-from avro import schema
-
-#
-# Constants
-#
-
-# TODO(hammer): confirmed 'fixed' with Doug
-VALID_TYPE_SCHEMA_TYPES = ('enum', 'record', 'error', 'fixed')
-
-#
-# Exceptions
-#
-
-class ProtocolParseException(schema.AvroException):
-  pass
-
-#
-# Base Classes
-#
-
-class Protocol(object):
-  """An application protocol."""
-  def _parse_types(self, types, type_names):
-    type_objects = []
-    for type in types:
-      type_object = schema.make_avsc_object(type, type_names)
-      if type_object.type not in VALID_TYPE_SCHEMA_TYPES:
-        fail_msg = 'Type %s not an enum, fixed, record, or error.' % type
-        raise ProtocolParseException(fail_msg)
-      type_objects.append(type_object)
-    return type_objects
-
-  def _parse_messages(self, messages, names):
-    message_objects = {}
-    for name, body in messages.iteritems():
-      if name in message_objects:
-        fail_msg = 'Message name "%s" repeated.' % name
-        raise ProtocolParseException(fail_msg)
-      elif not(hasattr(body, 'get') and callable(body.get)):
-        fail_msg = 'Message name "%s" has non-object body %s.' % (name, body)
-        raise ProtocolParseException(fail_msg)
-      request = body.get('request')
-      response = body.get('response')
-      errors = body.get('errors')
-      message_objects[name] = Message(name, request, response, errors, names)
-    return message_objects
-
-  def __init__(self, name, namespace=None, types=None, messages=None):
-    # Ensure valid ctor args
-    if not name:
-      fail_msg = 'Protocols must have a non-empty name.'
-      raise ProtocolParseException(fail_msg)
-    elif not isinstance(name, basestring):
-      fail_msg = 'The name property must be a string.'
-      raise ProtocolParseException(fail_msg)
-    elif namespace is not None and not isinstance(namespace, basestring):
-      fail_msg = 'The namespace property must be a string.'
-      raise ProtocolParseException(fail_msg)
-    elif types is not None and not isinstance(types, list):
-      fail_msg = 'The types property must be a list.'
-      raise ProtocolParseException(fail_msg)
-    elif (messages is not None and 
-          not(hasattr(messages, 'get') and callable(messages.get))):
-      fail_msg = 'The messages property must be a JSON object.'
-      raise ProtocolParseException(fail_msg)
-
-    self._props = {}
-    self.set_prop('name', name)
-    if namespace is not None: self.set_prop('namespace', namespace)
-    type_names = {}
-    if types is not None:
-      self.set_prop('types', self._parse_types(types, type_names))
-    if messages is not None:
-      self.set_prop('messages', self._parse_messages(messages, type_names))
-    if hashlib:
-      self._md5 = hashlib.md5(str(self)).digest()
-    else:
-      self._md5 = md5.new(str(self)).digest()
-
-  # read-only properties
-  name = property(lambda self: self.get_prop('name'))
-  namespace = property(lambda self: self.get_prop('namespace'))
-  fullname = property(lambda self: 
-                      schema.Name.make_fullname(self.name, self.namespace))
-  types = property(lambda self: self.get_prop('types'))
-  types_dict = property(lambda self: dict([(type.name, type)
-                                           for type in self.types]))
-  messages = property(lambda self: self.get_prop('messages'))
-  md5 = property(lambda self: self._md5)
-  props = property(lambda self: self._props)
-
-  # utility functions to manipulate properties dict
-  def get_prop(self, key):
-    return self.props.get(key)
-  def set_prop(self, key, value):
-    self.props[key] = value  
-
-  def __str__(self):
-    # until we implement a JSON encoder for Schema and Message objects,
-    # we'll have to go through and call str() by hand.
-    to_dump = {}
-    to_dump['protocol'] = self.name
-    if self.namespace: to_dump['namespace'] = self.namespace
-    if self.types:
-      to_dump['types'] = [json.loads(str(t)) for t in self.types]
-    if self.messages:
-      messages_dict = {}
-      for name, body in self.messages.iteritems():
-        messages_dict[name] = json.loads(str(body))
-      to_dump['messages'] = messages_dict
-    return json.dumps(to_dump)
-
-  def __eq__(self, that):
-    to_cmp = json.loads(str(self))
-    return to_cmp == json.loads(str(that))
-
-class Message(object):
-  """A Protocol message."""
-  def _parse_request(self, request, names):
-    if not isinstance(request, list):
-      fail_msg = 'Request property not a list: %s' % request
-      raise ProtocolParseException(fail_msg)
-    return schema.RecordSchema(None, None, request, names, 'request')
-  
-  def _parse_response(self, response, names):
-    if isinstance(response, basestring) and names.has_key(response):
-      self._response_from_names = True
-      return names.get(response)
-    else:
-      return schema.make_avsc_object(response, names)
-
-  def _parse_errors(self, errors, names):
-    if not isinstance(errors, list):
-      fail_msg = 'Errors property not a list: %s' % errors
-      raise ProtocolParseException(fail_msg)
-    errors_for_parsing = {'type': 'error_union', 'declared_errors': errors}
-    return schema.make_avsc_object(errors_for_parsing, names)
-
-  def __init__(self, name, request, response, errors=None, names=None):
-    self._name = name
-    self._response_from_names = False
-
-    self._props = {}
-    self.set_prop('request', self._parse_request(request, names))
-    self.set_prop('response', self._parse_response(response, names))
-    if errors is not None:
-      self.set_prop('errors', self._parse_errors(errors, names))
-
-  # read-only properties
-  name = property(lambda self: self._name)
-  response_from_names = property(lambda self: self._response_from_names)
-  request = property(lambda self: self.get_prop('request'))
-  response = property(lambda self: self.get_prop('response'))
-  errors = property(lambda self: self.get_prop('errors'))
-  props = property(lambda self: self._props)
-
-  # utility functions to manipulate properties dict
-  def get_prop(self, key):
-    return self.props.get(key)
-  def set_prop(self, key, value):
-    self.props[key] = value  
-
-  # TODO(hammer): allow schemas and fields to be JSON Encoded!
-  def __str__(self):
-    to_dump = {}
-    to_dump['request'] = json.loads(str(self.request))
-    if self.response_from_names:
-      to_dump['response'] = self.response.fullname
-    else:
-      to_dump['response'] = json.loads(str(self.response))
-    if self.errors:
-      to_dump['errors'] = json.loads(str(self.errors))
-    return json.dumps(to_dump)
-
-  def __eq__(self, that):
-    return self.name == that.name and self.props == that.props
-      
-def make_avpr_object(json_data):
-  """Build Avro Protocol from data parsed out of JSON string."""
-  if hasattr(json_data, 'get') and callable(json_data.get):
-    name = json_data.get('protocol')
-    namespace = json_data.get('namespace')
-    types = json_data.get('types')
-    messages = json_data.get('messages')
-    return Protocol(name, namespace, types, messages)
-  else:
-    raise ProtocolParseException('Not a JSON object: %s' % json_data)
-
-def parse(json_string):
-  """Constructs the Protocol from the JSON text."""
-  try:
-    json_data = json.loads(json_string)
-  except:
-    raise ProtocolParseException('Error parsing JSON: %s' % json_string)
-
-  # construct the Avro Protocol object
-  return make_avpr_object(json_data)
-
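The `parse`/`make_avpr_object` flow deleted above boils down to: decode the JSON, require a JSON object, then pull the `protocol`, `namespace`, `types`, and `messages` keys before handing them to the `Protocol` constructor. A minimal standalone sketch of just that dispatch, using only the stdlib `json` module (`parse_protocol_dict` is an illustrative name, not part of the Avro API; the real code goes on to build `Protocol` and `Message` objects):

```python
import json

class ProtocolParseException(Exception):
  pass

def parse_protocol_dict(json_string):
  # Mirror parse(): decode the JSON first, wrapping any decode error.
  try:
    data = json.loads(json_string)
  except ValueError:
    raise ProtocolParseException('Error parsing JSON: %s' % json_string)
  # Mirror make_avpr_object(): require a JSON object, then pull the four keys.
  if not isinstance(data, dict):
    raise ProtocolParseException('Not a JSON object: %s' % data)
  return (data.get('protocol'), data.get('namespace'),
          data.get('types'), data.get('messages'))

name, namespace, types, messages = parse_protocol_dict(
    '{"protocol": "Mail", "namespace": "example.proto", "messages": {}}')
```

Absent keys simply come back as `None`, matching the `.get()` calls in `make_avpr_object`; validation of each value is deferred to the `Protocol` constructor.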
diff --git a/branch-1.3/lang/py/src/avro/schema.py b/branch-1.3/lang/py/src/avro/schema.py
deleted file mode 100644
index 689c3d0..0000000
--- a/branch-1.3/lang/py/src/avro/schema.py
+++ /dev/null
@@ -1,619 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Contains the Schema classes.
-
-A schema may be one of:
-  A record, mapping field names to field value data;
-  An error, equivalent to a record;
-  An enum, containing one of a small set of symbols;
-  An array of values, all of the same schema;
-  A map containing string/value pairs, each of a declared schema;
-  A union of other schemas;
-  A fixed sized binary object;
-  A unicode string;
-  A sequence of bytes;
-  A 32-bit signed int;
-  A 64-bit signed long;
-  A 32-bit floating-point float;
-  A 64-bit floating-point double;
-  A boolean; or
-  Null.
-"""
-try:
-  import simplejson as json
-except ImportError:
-  import json
-
-#
-# Constants
-#
-
-PRIMITIVE_TYPES = (
-  'null',
-  'boolean',
-  'string',
-  'bytes',
-  'int',
-  'long',
-  'float',
-  'double',
-)
-
-NAMED_TYPES = (
-  'fixed',
-  'enum',
-  'record',
-  'error',
-)
-
-VALID_TYPES = PRIMITIVE_TYPES + NAMED_TYPES + (
-  'array',
-  'map',
-  'union',
-  'request',
-  'error_union'
-)
-
-RESERVED_PROPS = (
-  'type',
-  'name',
-  'namespace',
-  'fields',     # Record
-  'items',      # Array
-  'size',       # Fixed
-  'symbols',    # Enum
-  'values',     # Map
-)
-
-VALID_FIELD_SORT_ORDERS = (
-  'ascending',
-  'descending',
-  'ignore',
-)
-
-#
-# Exceptions
-#
-
-class AvroException(Exception):
-  pass
-
-class SchemaParseException(AvroException):
-  pass
-
-#
-# Base Classes
-#
-
-class Schema(object):
-  """Base class for all Schema classes."""
-  def __init__(self, type):
-    # Ensure valid ctor args
-    if not isinstance(type, basestring):
-      fail_msg = 'Schema type must be a string.'
-      raise SchemaParseException(fail_msg)
-    elif type not in VALID_TYPES:
-      fail_msg = '%s is not a valid type.' % type
-      raise SchemaParseException(fail_msg)
-
-    # add members
-    if not hasattr(self, '_props'): self._props = {}
-    self.set_prop('type', type)
-
-  # read-only properties
-  props = property(lambda self: self._props)
-  type = property(lambda self: self.get_prop('type'))
-
-  # utility functions to manipulate properties dict
-  def get_prop(self, key):
-    return self.props.get(key)
-  def set_prop(self, key, value):
-    self.props[key] = value
-
-class Name(object):
-  """Container class for static methods on Avro names."""
-  @staticmethod
-  def make_fullname(name, namespace):
-    if name.find('.') < 0 and namespace is not None:
-      return '.'.join([namespace, name])
-    else:
-      return name
-
-  @staticmethod
-  def extract_namespace(name, namespace):
-    parts = name.rsplit('.', 1)
-    if len(parts) > 1:
-      namespace, name = parts
-    return name, namespace
-
-  @staticmethod
-  def add_name(names, new_schema):
-    """Add a new schema object to the names dictionary (in place)."""
-    new_fullname = new_schema.fullname
-    if new_fullname in VALID_TYPES:
-      fail_msg = '%s is a reserved type name.' % new_fullname
-      raise SchemaParseException(fail_msg)
-    elif names is not None and names.has_key(new_fullname):
-      fail_msg = 'The name "%s" is already in use.' % new_fullname
-      raise SchemaParseException(fail_msg)
-    elif names is None:
-      names = {}
-
-    names[new_fullname] = new_schema
-    return names
-
-class NamedSchema(Schema):
-  """Named Schemas specified in NAMED_TYPES."""
-  def __init__(self, type, name, namespace=None, names=None):
-    # Ensure valid ctor args
-    if not name:
-      fail_msg = 'Named Schemas must have a non-empty name.'
-      raise SchemaParseException(fail_msg)
-    elif not isinstance(name, basestring):
-      fail_msg = 'The name property must be a string.'
-      raise SchemaParseException(fail_msg)
-    elif namespace is not None and not isinstance(namespace, basestring):
-      fail_msg = 'The namespace property must be a string.'
-      raise SchemaParseException(fail_msg)
-
-    # Call parent ctor
-    Schema.__init__(self, type)
-
-    # Add class members
-    name, namespace = Name.extract_namespace(name, namespace)
-    self.set_prop('name', name)
-    if namespace is not None: self.set_prop('namespace', namespace)
-
-    # Add name to names dictionary
-    names = Name.add_name(names, self)
-
-  # read-only properties
-  name = property(lambda self: self.get_prop('name'))
-  namespace = property(lambda self: self.get_prop('namespace'))
-  fullname = property(lambda self: 
-                      Name.make_fullname(self.name, self.namespace))
-
-class Field(object):
-  def __init__(self, type, name, has_default, default=None, order=None, names=None):
-    # Ensure valid ctor args
-    if not name:
-      fail_msg = 'Fields must have a non-empty name.'
-      raise SchemaParseException(fail_msg)
-    elif not isinstance(name, basestring):
-      fail_msg = 'The name property must be a string.'
-      raise SchemaParseException(fail_msg)
-    elif order is not None and order not in VALID_FIELD_SORT_ORDERS:
-      fail_msg = 'The order property %s is not valid.' % order
-      raise SchemaParseException(fail_msg)
-
-    # add members
-    self._props = {}
-    self._type_from_names = False
-    self._has_default = has_default
-    if (isinstance(type, basestring) and names is not None
-        and names.has_key(type)):
-      type_schema = names[type]
-      self._type_from_names = True
-    else:
-      try:
-        type_schema = make_avsc_object(type, names)
-      except:
-        fail_msg = 'Type property not a valid Avro schema.'
-        raise SchemaParseException(fail_msg)
-    self.set_prop('type', type_schema)
-    self.set_prop('name', name)
-    # TODO(hammer): check to ensure default is valid
-    if has_default: self.set_prop('default', default)
-    if order is not None: self.set_prop('order', order)
-
-  # read-only properties
-  type = property(lambda self: self.get_prop('type'))
-  name = property(lambda self: self.get_prop('name'))
-  default = property(lambda self: self.get_prop('default'))
-  has_default = property(lambda self: self._has_default)
-  order = property(lambda self: self.get_prop('order'))
-  props = property(lambda self: self._props)
-  type_from_names = property(lambda self: self._type_from_names)
-
-  # utility functions to manipulate properties dict
-  def get_prop(self, key):
-    return self.props.get(key)
-  def set_prop(self, key, value):
-    self.props[key] = value
-
-  def __str__(self):
-    to_dump = self.props.copy()
-    if self.type_from_names:
-      to_dump['type'] = self.type.fullname
-    else:
-      to_dump['type'] = json.loads(str(to_dump['type']))
-    return json.dumps(to_dump)
-
-  def __eq__(self, that):
-    to_cmp = json.loads(str(self))
-    return to_cmp == json.loads(str(that))
-
-#
-# Primitive Types
-#
-class PrimitiveSchema(Schema):
-  """Valid primitive types are in PRIMITIVE_TYPES."""
-  def __init__(self, type):
-    # Ensure valid ctor args
-    if type not in PRIMITIVE_TYPES:
-      raise AvroException("%s is not a valid primitive type." % type)
-
-    # Call parent ctor
-    Schema.__init__(self, type)
-
-  def __str__(self):
-    # if there are no arbitrary properties, use short form
-    if len(self.props) == 1:
-      return '"%s"' % self.type
-    else:
-      return json.dumps(self.props)
-
-  def __eq__(self, that):
-    return self.props == that.props
-
-#
-# Complex Types (non-recursive)
-#
-
-class FixedSchema(NamedSchema):
-  def __init__(self, name, namespace, size, names=None):
-    # Ensure valid ctor args
-    if not isinstance(size, int):
-      fail_msg = 'Fixed Schema requires a valid integer for size property.'
-      raise AvroException(fail_msg)
-
-    # Call parent ctor
-    NamedSchema.__init__(self, 'fixed', name, namespace, names)
-
-    # Add class members
-    self.set_prop('size', size)
-
-  # read-only properties
-  size = property(lambda self: self.get_prop('size'))
-
-  def __str__(self):
-    return json.dumps(self.props)
-
-  def __eq__(self, that):
-    return self.props == that.props
-
-class EnumSchema(NamedSchema):
-  def __init__(self, name, namespace, symbols, names=None):
-    # Ensure valid ctor args
-    if not isinstance(symbols, list):
-      fail_msg = 'Enum Schema requires a JSON array for the symbols property.'
-      raise AvroException(fail_msg)
-    elif False in [isinstance(s, basestring) for s in symbols]:
-      fail_msg = 'Enum Schema requires all symbols to be JSON strings.'
-      raise AvroException(fail_msg)
-    elif len(set(symbols)) < len(symbols):
-      fail_msg = 'Duplicate symbol: %s' % symbols
-      raise AvroException(fail_msg)
-
-    # Call parent ctor
-    NamedSchema.__init__(self, 'enum', name, namespace, names)
-
-    # Add class members
-    self.set_prop('symbols', symbols)
-
-  # read-only properties
-  symbols = property(lambda self: self.get_prop('symbols'))
-
-  def __str__(self):
-    return json.dumps(self.props)
-
-  def __eq__(self, that):
-    return self.props == that.props
-
-#
-# Complex Types (recursive)
-#
-
-class ArraySchema(Schema):
-  def __init__(self, items, names=None):
-    # initialize private class members
-    self._items_schema_from_names = False
-
-    # Call parent ctor
-    Schema.__init__(self, 'array')
-
-    # Add class members
-    if isinstance(items, basestring) and names.has_key(items):
-      items_schema = names[items]
-      self._items_schema_from_names = True
-    else:
-      try:
-        items_schema = make_avsc_object(items, names)
-      except:
-        fail_msg = 'Items schema not a valid Avro schema.'
-        raise SchemaParseException(fail_msg)
-
-    self.set_prop('items', items_schema)
-
-  # read-only properties
-  items = property(lambda self: self.get_prop('items'))
-  items_schema_from_names = property(lambda self: self._items_schema_from_names)
-
-  def __str__(self):
-    to_dump = self.props.copy()
-    if self.items_schema_from_names:
-      to_dump['items'] = self.get_prop('items').fullname
-    else:
-      to_dump['items'] = json.loads(str(to_dump['items']))
-    return json.dumps(to_dump)
-
-  def __eq__(self, that):
-    to_cmp = json.loads(str(self))
-    return to_cmp == json.loads(str(that))
-
-class MapSchema(Schema):
-  def __init__(self, values, names=None):
-    # initialize private class members
-    self._values_schema_from_names = False
-
-    # Call parent ctor
-    Schema.__init__(self, 'map')
-
-    # Add class members
-    if isinstance(values, basestring) and names.has_key(values):
-      values_schema = names[values]
-      self._values_schema_from_names = True
-    else:
-      try:
-        values_schema = make_avsc_object(values, names)
-      except:
-        fail_msg = 'Values schema not a valid Avro schema.'
-        raise SchemaParseException(fail_msg)
-
-    self.set_prop('values', values_schema)
-
-  # read-only properties
-  values = property(lambda self: self.get_prop('values'))
-  values_schema_from_names = property(lambda self:
-                                      self._values_schema_from_names)
-
-  def __str__(self):
-    to_dump = self.props.copy()
-    if self.values_schema_from_names:
-      to_dump['values'] = self.get_prop('values').fullname
-    else:
-      to_dump['values'] = json.loads(str(to_dump['values']))
-    return json.dumps(to_dump)
-
-  def __eq__(self, that):
-    to_cmp = json.loads(str(self))
-    return to_cmp == json.loads(str(that))
-
-class UnionSchema(Schema):
-  """
-  names is a dictionary of schema objects
-  """
-  def __init__(self, schemas, names=None):
-    # Ensure valid ctor args
-    if not isinstance(schemas, list):
-      fail_msg = 'Union schema requires a list of schemas.'
-      raise SchemaParseException(fail_msg)
-
-    # Call parent ctor
-    Schema.__init__(self, 'union')
-
-    # Add class members
-    schema_objects = []
-    self._schema_from_names_indices = []
-    for i, schema in enumerate(schemas):
-      from_names = False
-      if isinstance(schema, basestring) and names.has_key(schema):
-        new_schema = names[schema]
-        from_names = True
-      else:
-        try:
-          new_schema = make_avsc_object(schema, names)
-        except:
-          raise SchemaParseException('Union item must be a valid Avro schema.')
-      # check the new schema
-      if (new_schema.type in VALID_TYPES and new_schema.type not in NAMED_TYPES
-          and new_schema.type in [schema.type for schema in schema_objects]):
-        raise SchemaParseException('%s type already in Union' % new_schema.type)
-      elif new_schema.type == 'union':
-        raise SchemaParseException('Unions cannot contain other unions.')
-      else:
-        schema_objects.append(new_schema)
-        if from_names: self._schema_from_names_indices.append(i)
-    self._schemas = schema_objects
-
-  # read-only properties
-  schemas = property(lambda self: self._schemas)
-  schema_from_names_indices = property(lambda self:
-                                       self._schema_from_names_indices)
-
-  def __str__(self):
-    to_dump = []
-    for i, schema in enumerate(self.schemas):
-      if i in self.schema_from_names_indices:
-        to_dump.append(schema.fullname)
-      else:
-        to_dump.append(json.loads(str(schema)))
-    return json.dumps(to_dump)
-
-  def __eq__(self, that):
-    to_cmp = json.loads(str(self))
-    return to_cmp == json.loads(str(that))
-
-class ErrorUnionSchema(UnionSchema):
-  def __init__(self, schemas, names=None):
-    # Prepend "string" to handle system errors
-    UnionSchema.__init__(self, ['string'] + schemas, names)
-
-  def __str__(self):
-    to_dump = []
-    for i, schema in enumerate(self.schemas):
-      # Don't print the system error schema
-      if schema.type == 'string': continue
-      if i in self.schema_from_names_indices:
-        to_dump.append(schema.fullname)
-      else:
-        to_dump.append(json.loads(str(schema)))
-    return json.dumps(to_dump)
-
-class RecordSchema(NamedSchema):
-  @staticmethod
-  def make_field_objects(field_data, names):
-    """Build Field objects from a list of field JSON objects (also used for message parameters)."""
-    field_objects = []
-    field_names = []
-    for i, field in enumerate(field_data):
-      if hasattr(field, 'get') and callable(field.get):
-        type = field.get('type')
-        name = field.get('name')
-
-        # null values can have a default value of None
-        has_default = False
-        default = None
-        if field.has_key('default'):
-          has_default = True
-          default = field.get('default')
-
-        order = field.get('order')
-        new_field = Field(type, name, has_default, default, order, names)
-        # make sure field name has not been used yet
-        if new_field.name in field_names:
-          fail_msg = 'Field name %s already in use.' % new_field.name
-          raise SchemaParseException(fail_msg)
-        field_names.append(new_field.name)
-      else:
-        raise SchemaParseException('Not a valid field: %s' % field)
-      field_objects.append(new_field)
-    return field_objects
-
-  def __init__(self, name, namespace, fields, names=None, schema_type='record'):
-    # Ensure valid ctor args
-    if fields is None:
-      fail_msg = 'Record schema requires a fields property.'
-      raise SchemaParseException(fail_msg)
-    elif not isinstance(fields, list):
-      fail_msg = 'Fields property must be a list of Avro schemas.'
-      raise SchemaParseException(fail_msg)
-
-    # Call parent ctor (adds own name to namespace, too)
-    if schema_type == 'request':
-      Schema.__init__(self, schema_type)
-    else:
-      NamedSchema.__init__(self, schema_type, name, namespace, names)
-
-    # Add class members
-    field_objects = RecordSchema.make_field_objects(fields, names)
-    self.set_prop('fields', field_objects)
-
-  # read-only properties
-  fields = property(lambda self: self.get_prop('fields'))
-
-  @property
-  def fields_dict(self):
-    fields_dict = {}
-    for field in self.fields:
-      fields_dict[field.name] = field
-    return fields_dict
-
-  def __str__(self):
-    to_dump = self.props.copy()
-    to_dump['fields'] = [json.loads(str(f)) for f in self.fields]
-    if self.type == 'request':
-      return json.dumps(to_dump['fields'])
-    else:
-      return json.dumps(to_dump)
-
-  def __eq__(self, that):
-    to_cmp = json.loads(str(self))
-    return to_cmp == json.loads(str(that))
-
-#
-# Module Methods
-#
-
-# TODO(hammer): handle non-reserved properties
-def make_avsc_object(json_data, names=None):
-  """
-  Build Avro Schema from data parsed out of JSON string.
-
-  @arg names: dict of schema name, object pairs
-  """
-  # JSON object (non-union)
-  if hasattr(json_data, 'get') and callable(json_data.get):
-    type = json_data.get('type')
-    if type in PRIMITIVE_TYPES:
-      return PrimitiveSchema(type)
-    elif type in NAMED_TYPES:
-      name = json_data.get('name')
-      namespace = json_data.get('namespace')
-      if type == 'fixed':
-        size = json_data.get('size')
-        return FixedSchema(name, namespace, size, names)
-      elif type == 'enum':
-        symbols = json_data.get('symbols')
-        return EnumSchema(name, namespace, symbols, names)
-      elif type in ['record', 'error']:
-        fields = json_data.get('fields')
-        return RecordSchema(name, namespace, fields, names, type)
-      else:
-        raise SchemaParseException('Unknown Named Type: %s' % type)
-    elif type in VALID_TYPES:
-      if type == 'array':
-        items = json_data.get('items')
-        return ArraySchema(items, names)
-      elif type == 'map':
-        values = json_data.get('values')
-        return MapSchema(values, names)
-      elif type == 'error_union':
-        declared_errors = json_data.get('declared_errors')
-        return ErrorUnionSchema(declared_errors, names)
-      else:
-        raise SchemaParseException('Unknown Valid Type: %s' % type)
-    elif type is None:
-      raise SchemaParseException('No "type" property: %s' % json_data)
-    else:
-      raise SchemaParseException('Undefined type: %s' % type)
-  # JSON array (union)
-  elif isinstance(json_data, list):
-    return UnionSchema(json_data, names)
-  # JSON string (primitive)
-  elif json_data in PRIMITIVE_TYPES:
-    return PrimitiveSchema(json_data)
-  # not for us!
-  else:
-    fail_msg = "Could not make an Avro Schema object from %s." % json_data
-    raise SchemaParseException(fail_msg)
-
-# TODO(hammer): make method for reading from a file?
-def parse(json_string):
-  """Constructs the Schema from the JSON text."""
-  # TODO(hammer): preserve stack trace from JSON parse
-  # parse the JSON
-  try:
-    json_data = json.loads(json_string)
-  except:
-    raise SchemaParseException('Error parsing JSON: %s' % json_string)
-
-  # Initialize the names dictionary
-  names = {}
-
-  # construct the Avro Schema object
-  return make_avsc_object(json_data, names)
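`make_avsc_object` above dispatches on the shape of the decoded JSON: a JSON object is a named or complex type keyed by its `type` property, a JSON array is a union, and a bare JSON string must be one of the primitive type names. A minimal standalone sketch of just that classification step (`classify_schema` and the returned tag strings are illustrative, not the real API; `isinstance(data, dict)` simplifies the original's `hasattr(json_data, 'get')` duck-typing check):

```python
import json

# Echoes PRIMITIVE_TYPES from schema.py above.
PRIMITIVE_TYPES = ('null', 'boolean', 'string', 'bytes',
                   'int', 'long', 'float', 'double')

class SchemaParseException(Exception):
  pass

def classify_schema(json_string):
  data = json.loads(json_string)
  if isinstance(data, dict):          # JSON object: named/complex type
    if data.get('type') is None:
      raise SchemaParseException('No "type" property: %s' % data)
    return 'complex:%s' % data['type']
  elif isinstance(data, list):        # JSON array: union
    return 'union'
  elif data in PRIMITIVE_TYPES:       # JSON string: primitive shorthand
    return 'primitive:%s' % data
  raise SchemaParseException(
      'Could not make an Avro Schema object from %s.' % data)
```

The real function then recurses via the schema-class constructors, threading the `names` dictionary through so named types can be referenced by fullname.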
diff --git a/branch-1.3/lang/py/src/avro/tool.py b/branch-1.3/lang/py/src/avro/tool.py
deleted file mode 100644
index edd6f18..0000000
--- a/branch-1.3/lang/py/src/avro/tool.py
+++ /dev/null
@@ -1,160 +0,0 @@
-#! /usr/bin/env python
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Command-line tool
-
-NOTE: The API for the command-line tool is experimental.
-"""
-import sys
-from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
-import urlparse
-from avro import io
-from avro import datafile
-from avro import protocol
-from avro import ipc
-
-class GenericResponder(ipc.Responder):
-  def __init__(self, proto, msg, datum):
-    proto_json = file(proto, 'r').read()
-    ipc.Responder.__init__(self, protocol.parse(proto_json))
-    self.msg = msg
-    self.datum = datum
-
-  def invoke(self, message, request):
-    if message.name == self.msg:
-      print >> sys.stderr, "Message: %s Datum: %s" % (message.name, self.datum)
-      # server will shut down after processing a single Avro request
-      global server_should_shutdown
-      server_should_shutdown = True
-      return self.datum
-
-class GenericHandler(BaseHTTPRequestHandler):
-  def do_POST(self):
-    self.responder = responder
-    call_request_reader = ipc.FramedReader(self.rfile)
-    call_request = call_request_reader.read_framed_message()
-    resp_body = self.responder.respond(call_request)
-    self.send_response(200)
-    self.send_header('Content-Type', 'avro/binary')
-    self.end_headers()
-    resp_writer = ipc.FramedWriter(self.wfile)
-    resp_writer.write_framed_message(resp_body)
-    if server_should_shutdown:
-      print >> sys.stderr, "Shutting down server."
-      self.server.force_stop()
-
-class StoppableHTTPServer(HTTPServer):
-  """HTTPServer.shutdown was only added in Python 2.6; loop on a stopped flag instead."""
-  stopped = False
-  allow_reuse_address = True
-  def __init__(self, *args, **kw):
-    HTTPServer.__init__(self, *args, **kw)
-    self.allow_reuse_address = True
-
-  def serve_forever(self):
-    while not self.stopped:
-      self.handle_request()
-
-  def force_stop(self):
-    self.server_close()
-    self.stopped = True
-    self.serve_forever()
-
-def run_server(uri, proto, msg, datum):
-  url_obj = urlparse.urlparse(uri)
-  server_addr = (url_obj.hostname, url_obj.port)
-  global responder
-  global server_should_shutdown
-  server_should_shutdown = False
-  responder = GenericResponder(proto, msg, datum)
-  server = StoppableHTTPServer(server_addr, GenericHandler)
-  print "Port: %s" % server.server_port
-  sys.stdout.flush()
-  server.allow_reuse_address = True
-  print >> sys.stderr, "Starting server."
-  server.serve_forever()
-
-def send_message(uri, proto, msg, datum):
-  url_obj = urlparse.urlparse(uri)
-  client = ipc.HTTPTransceiver(url_obj.hostname, url_obj.port)
-  proto_json = file(proto, 'r').read()
-  requestor = ipc.Requestor(protocol.parse(proto_json), client)
-  print requestor.request(msg, datum)
-
-def file_or_stdin(f):
-  if f == "-":
-    return sys.stdin
-  else:
-    return file(f)
-
-def main(args=sys.argv):
-  if len(args) == 1:
-    print "Usage: %s [dump|rpcreceive|rpcsend]" % args[0]
-    return 1
-
-  if args[1] == "dump":
-    if len(args) != 3:
-      print "Usage: %s dump input_file" % args[0]
-      return 1
-    for d in datafile.DataFileReader(file_or_stdin(args[2]), io.DatumReader()):
-      print repr(d)
-  elif args[1] == "rpcreceive":
-    usage_str = "Usage: %s rpcreceive uri protocol_file " % args[0]
-    usage_str += "message_name (-data d | -file f)"
-    if len(args) not in [5, 7]:
-      print usage_str
-      return 1
-    uri, proto, msg = args[2:5]
-    datum = None
-    if len(args) > 5:
-      if args[5] == "-file":
-        reader = open(args[6], 'rb')
-        datum_reader = io.DatumReader()
-        dfr = datafile.DataFileReader(reader, datum_reader)
-        datum = dfr.next()
-      elif args[5] == "-data":
-        print "JSON Decoder not yet implemented."
-        return 1
-      else:
-        print usage_str
-        return 1
-    run_server(uri, proto, msg, datum)
-  elif args[1] == "rpcsend":
-    usage_str = "Usage: %s rpcsend uri protocol_file " % args[0]
-    usage_str += "message_name (-data d | -file f)"
-    if len(args) not in [5, 7]:
-      print usage_str
-      return 1
-    uri, proto, msg = args[2:5]
-    datum = None
-    if len(args) > 5:
-      if args[5] == "-file":
-        reader = open(args[6], 'rb')
-        datum_reader = io.DatumReader()
-        dfr = datafile.DataFileReader(reader, datum_reader)
-        datum = dfr.next()
-      elif args[5] == "-data":
-        print "JSON Decoder not yet implemented."
-        return 1
-      else:
-        print usage_str
-        return 1
-    send_message(uri, proto, msg, datum)
-  return 0
-  
-if __name__ == "__main__":
-  sys.exit(main(sys.argv))
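`StoppableHTTPServer` above works around the missing `HTTPServer.shutdown` by looping `handle_request()` until a `stopped` flag flips; the handler flips it (via `force_stop`) after serving the single RPC the tool cares about. The same pattern, detached from HTTP, in a simplified sketch (the request callback stands in for `handle_request`, and `force_stop` here only sets the flag, omitting the original's `server_close()` bookkeeping):

```python
class StoppableLoop:
  """Serve until force_stop() flips the flag, as StoppableHTTPServer does."""
  def __init__(self, handle_request):
    self.stopped = False
    self.handle_request = handle_request

  def serve_forever(self):
    # Check the flag between requests, not inside one.
    while not self.stopped:
      self.handle_request(self)

  def force_stop(self):
    self.stopped = True

calls = []
def handler(server):
  calls.append(1)
  if len(calls) == 3:   # stop after the third "request"
    server.force_stop()

loop = StoppableLoop(handler)
loop.serve_forever()
```

Note the flag is only consulted between requests, so a stop requested mid-request takes effect after that request completes, which is exactly the behavior `tool.py` relies on to finish writing the framed response before shutting down.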
diff --git a/branch-1.3/lang/py/test/gen_interop_data.py b/branch-1.3/lang/py/test/gen_interop_data.py
deleted file mode 100644
index 579505a..0000000
--- a/branch-1.3/lang/py/test/gen_interop_data.py
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/usr/bin/env python
-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import sys
-from avro import schema
-from avro import io
-from avro import datafile
-
-DATUM = {
-  'intField': 12,
-  'longField': 15234324L,
-  'stringField': unicode('hey'),
-  'boolField': True,
-  'floatField': 1234.0,
-  'doubleField': -1234.0,
-  'bytesField': '12312adf',
-  'nullField': None,
-  'arrayField': [5.0, 0.0, 12.0],
-  'mapField': {'a': {'label': 'a'}, 'bee': {'label': 'cee'}},
-  'unionField': 12.0,
-  'enumField': 'C',
-  'fixedField': '1019181716151413',
-  'recordField': {'label': 'blah', 'children': [{'label': 'inner', 'children': []}]},
-}
-
-if __name__ == "__main__":
-  interop_schema = schema.parse(open(sys.argv[1], 'r').read())
-  writer = open(sys.argv[2], 'wb')
-  datum_writer = io.DatumWriter()
-  # NB: not using compression
-  dfw = datafile.DataFileWriter(writer, datum_writer, interop_schema)
-  dfw.append(DATUM)
-  dfw.close()
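The interop datum in the deleted file above is written with Python 2 literals (`15234324L`, `unicode('hey')`, plain `str` for bytes and fixed). For reference, a Python 3 rendering of the same datum would look like this; this is a sketch of the data only (the `avro` writer APIs themselves changed across versions and are not shown):

```python
# Python 3 rendering of the interop datum: the long suffix `L` and
# `unicode()` are gone (int and str cover both), and the bytes/fixed
# fields become byte strings.
DATUM = {
    'intField': 12,
    'longField': 15234324,
    'stringField': 'hey',
    'boolField': True,
    'floatField': 1234.0,
    'doubleField': -1234.0,
    'bytesField': b'12312adf',
    'nullField': None,
    'arrayField': [5.0, 0.0, 12.0],
    'mapField': {'a': {'label': 'a'}, 'bee': {'label': 'cee'}},
    'unionField': 12.0,
    'enumField': 'C',
    'fixedField': b'1019181716151413',
    'recordField': {'label': 'blah',
                    'children': [{'label': 'inner', 'children': []}]},
}
```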
diff --git a/branch-1.3/lang/py/test/sample_http_client.py b/branch-1.3/lang/py/test/sample_http_client.py
deleted file mode 100644
index 86942d8..0000000
--- a/branch-1.3/lang/py/test/sample_http_client.py
+++ /dev/null
@@ -1,92 +0,0 @@
-#!/usr/bin/env python
-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import sys
-
-from avro import ipc
-from avro import protocol
-
-MAIL_PROTOCOL_JSON = """\
-{"namespace": "example.proto",
- "protocol": "Mail",
-
- "types": [
-     {"name": "Message", "type": "record",
-      "fields": [
-          {"name": "to",   "type": "string"},
-          {"name": "from", "type": "string"},
-          {"name": "body", "type": "string"}
-      ]
-     }
- ],
-
- "messages": {
-     "send": {
-         "request": [{"name": "message", "type": "Message"}],
-         "response": "string"
-     },
-     "replay": {
-         "request": [],
-         "response": "string"
-     }
- }
-}
-"""
-MAIL_PROTOCOL = protocol.parse(MAIL_PROTOCOL_JSON)
-SERVER_HOST = 'localhost'
-SERVER_PORT = 9090
-
-class UsageError(Exception):
-  def __init__(self, value):
-    self.value = value
-  def __str__(self):
-    return repr(self.value)
-
-def make_requestor(server_host, server_port, protocol):
-  client = ipc.HTTPTransceiver(server_host, server_port)
-  return ipc.Requestor(protocol, client)
-
-if __name__ == '__main__':
-  if len(sys.argv) not in [4, 5]:
-    raise UsageError("Usage: <to> <from> <body> [<count>]")
-
-  # client code - attach to the server and send a message
-  # fill in the Message record
-  message = dict()
-  message['to'] = sys.argv[1]
-  message['from'] = sys.argv[2]
-  message['body'] = sys.argv[3]
-
-  try:
-    num_messages = int(sys.argv[4])
-  except (IndexError, ValueError):
-    num_messages = 1
-
-  # build the parameters for the request
-  params = {}
-  params['message'] = message
-   
-  # send the requests and print the result
-  for msg_count in range(num_messages):
-    requestor = make_requestor(SERVER_HOST, SERVER_PORT, MAIL_PROTOCOL)
-    result = requestor.request('send', params)
-    print("Result: " + result)
-
-  # try out a replay message
-  requestor = make_requestor(SERVER_HOST, SERVER_PORT, MAIL_PROTOCOL)
-  result = requestor.request('replay', dict())
-  print("Replay Result: " + result)
diff --git a/branch-1.3/lang/py/test/sample_http_server.py b/branch-1.3/lang/py/test/sample_http_server.py
deleted file mode 100644
index 53f6928..0000000
--- a/branch-1.3/lang/py/test/sample_http_server.py
+++ /dev/null
@@ -1,79 +0,0 @@
-#!/usr/bin/env python
-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
-from avro import ipc
-from avro import protocol
-
-MAIL_PROTOCOL_JSON = """\
-{"namespace": "example.proto",
- "protocol": "Mail",
-
- "types": [
-     {"name": "Message", "type": "record",
-      "fields": [
-          {"name": "to",   "type": "string"},
-          {"name": "from", "type": "string"},
-          {"name": "body", "type": "string"}
-      ]
-     }
- ],
-
- "messages": {
-     "send": {
-         "request": [{"name": "message", "type": "Message"}],
-         "response": "string"
-     },
-     "replay": {
-         "request": [],
-         "response": "string"
-     }
- }
-}
-"""
-MAIL_PROTOCOL = protocol.parse(MAIL_PROTOCOL_JSON)
-SERVER_ADDRESS = ('localhost', 9090)
-
-class MailResponder(ipc.Responder):
-  def __init__(self):
-    ipc.Responder.__init__(self, MAIL_PROTOCOL)
-
-  def invoke(self, message, request):
-    if message.name == 'send':
-      request_content = request['message']
-      response = "Sent message to %(to)s from %(from)s with body %(body)s" % \
-                 request_content
-      return response
-    elif message.name == 'replay':
-      return 'replay'
-
-class MailHandler(BaseHTTPRequestHandler):
-  def do_POST(self):
-    self.responder = MailResponder()
-    call_request_reader = ipc.FramedReader(self.rfile)
-    call_request = call_request_reader.read_framed_message()
-    resp_body = self.responder.respond(call_request)
-    self.send_response(200)
-    self.send_header('Content-Type', 'avro/binary')
-    self.end_headers()
-    resp_writer = ipc.FramedWriter(self.wfile)
-    resp_writer.write_framed_message(resp_body)
-
-if __name__ == '__main__':
-  mail_server = HTTPServer(SERVER_ADDRESS, MailHandler)
-  mail_server.allow_reuse_address = True
-  mail_server.serve_forever()
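The handler in the deleted server above reads and writes Avro's HTTP framing through `ipc.FramedReader` and `ipc.FramedWriter`. The wire format itself is simple: a framed message is a sequence of buffers, each prefixed by a 4-byte big-endian length, terminated by a zero-length buffer. A stdlib-only sketch of that framing, for illustration rather than as the library's actual implementation:

```python
import io
import struct

def write_framed(stream, message, buffer_size=8192):
    """Write `message` as length-prefixed buffers, ending with an
    empty buffer that marks the end of the framed message."""
    for i in range(0, len(message), buffer_size):
        chunk = message[i:i + buffer_size]
        stream.write(struct.pack('>I', len(chunk)))  # 4-byte BE length
        stream.write(chunk)
    stream.write(struct.pack('>I', 0))  # zero-length terminator

def read_framed(stream):
    """Read buffers until the zero-length terminator and join them."""
    parts = []
    while True:
        (length,) = struct.unpack('>I', stream.read(4))
        if length == 0:
            return b''.join(parts)
        parts.append(stream.read(length))
```

The terminator, rather than a total-length header, is what lets a responder stream a reply whose size it does not know up front.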
diff --git a/branch-1.3/lang/py/test/test_datafile.py b/branch-1.3/lang/py/test/test_datafile.py
deleted file mode 100644
index 2f6f550..0000000
--- a/branch-1.3/lang/py/test/test_datafile.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import os
-import unittest
-from avro import schema
-from avro import io
-from avro import datafile
-
-SCHEMAS_TO_VALIDATE = (
-  ('"null"', None),
-  ('"boolean"', True),
-  ('"string"', unicode('adsfasdf09809dsf-=adsf')),
-  ('"bytes"', '12345abcd'),
-  ('"int"', 1234),
-  ('"long"', 1234),
-  ('"float"', 1234.0),
-  ('"double"', 1234.0),
-  ('{"type": "fixed", "name": "Test", "size": 1}', 'B'),
-  ('{"type": "enum", "name": "Test", "symbols": ["A", "B"]}', 'B'),
-  ('{"type": "array", "items": "long"}', [1, 3, 2]),
-  ('{"type": "map", "values": "long"}', {'a': 1, 'b': 3, 'c': 2}),
-  ('["string", "null", "long"]', None),
-  ("""\
-   {"type": "record",
-    "name": "Test",
-    "fields": [{"name": "f", "type": "long"}]}
-   """, {'f': 5}),
-  ("""\
-   {"type": "record",
-    "name": "Lisp",
-    "fields": [{"name": "value",
-                "type": ["null", "string",
-                         {"type": "record",
-                          "name": "Cons",
-                          "fields": [{"name": "car", "type": "Lisp"},
-                                     {"name": "cdr", "type": "Lisp"}]}]}]}
-   """, {'value': {'car': {'value': 'head'}, 'cdr': {'value': None}}}),
-)
-
-FILENAME = 'test_datafile.out'
-CODECS_TO_VALIDATE = ('null', 'deflate')
-
-# TODO(hammer): clean up written files with ant, not os.remove
-class TestDataFile(unittest.TestCase):
-  def test_round_trip(self):
-    print ''
-    print 'TEST ROUND TRIP'
-    print '==============='
-    print ''
-    correct = 0
-    for i, (example_schema, datum) in enumerate(SCHEMAS_TO_VALIDATE):
-      for codec in CODECS_TO_VALIDATE:
-        print ''
-        print 'SCHEMA NUMBER %d' % (i + 1)
-        print '================'
-        print ''
-        print 'Schema: %s' % example_schema
-        print 'Datum: %s' % datum
-        print 'Codec: %s' % codec
-
-        # write data in binary to file 10 times
-        writer = open(FILENAME, 'wb')
-        datum_writer = io.DatumWriter()
-        schema_object = schema.parse(example_schema)
-        dfw = datafile.DataFileWriter(writer, datum_writer, schema_object, codec=codec)
-        for _ in range(10):
-          dfw.append(datum)
-        dfw.close()
-
-        # read data in binary from file
-        reader = open(FILENAME, 'rb')
-        datum_reader = io.DatumReader()
-        dfr = datafile.DataFileReader(reader, datum_reader)
-        round_trip_data = []
-        for read_datum in dfr:
-          round_trip_data.append(read_datum)
-
-        print 'Round Trip Data: %s' % round_trip_data
-        print 'Round Trip Data Length: %d' % len(round_trip_data)
-        is_correct = [datum] * 10 == round_trip_data
-        if is_correct: correct += 1
-        print 'Correct Round Trip: %s' % is_correct
-        print ''
-    os.remove(FILENAME)
-    self.assertEquals(correct, len(CODECS_TO_VALIDATE)*len(SCHEMAS_TO_VALIDATE))
-
-  def test_append(self):
-    print ''
-    print 'TEST APPEND'
-    print '==========='
-    print ''
-    correct = 0
-    for i, (example_schema, datum) in enumerate(SCHEMAS_TO_VALIDATE):
-      for codec in CODECS_TO_VALIDATE:
-        print ''
-        print 'SCHEMA NUMBER %d' % (i + 1)
-        print '================'
-        print ''
-        print 'Schema: %s' % example_schema
-        print 'Datum: %s' % datum
-        print 'Codec: %s' % codec
-
-        # write data in binary to file once
-        writer = open(FILENAME, 'wb')
-        datum_writer = io.DatumWriter()
-        schema_object = schema.parse(example_schema)
-        dfw = datafile.DataFileWriter(writer, datum_writer, schema_object, codec=codec)
-        dfw.append(datum)
-        dfw.close()
-
-        # open file, write, and close nine times
-        for _ in range(9):
-          writer = open(FILENAME, 'ab+')
-          dfw = datafile.DataFileWriter(writer, io.DatumWriter())
-          dfw.append(datum)
-          dfw.close()
-
-        # read data in binary from file
-        reader = open(FILENAME, 'rb')
-        datum_reader = io.DatumReader()
-        dfr = datafile.DataFileReader(reader, datum_reader)
-        appended_data = []
-        for read_datum in dfr:
-          appended_data.append(read_datum)
-
-        print 'Appended Data: %s' % appended_data
-        print 'Appended Data Length: %d' % len(appended_data)
-        is_correct = [datum] * 10 == appended_data
-        if is_correct: correct += 1
-        print 'Correct Appended: %s' % is_correct
-        print ''
-    os.remove(FILENAME)
-    self.assertEquals(correct, len(CODECS_TO_VALIDATE)*len(SCHEMAS_TO_VALIDATE))
-
-if __name__ == '__main__':
-  unittest.main()
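The deleted tests above track passes in a manual `correct` counter and assert the total at the end, which hides which schema/codec pair actually failed. Later Python versions can report per-case failures from inside the same loop with `unittest.subTest`; a sketch of that pattern, using JSON files as a stand-in for Avro container files:

```python
import json
import os
import tempfile
import unittest

# Stand-in data; the real tests iterate (schema, datum, codec) triples.
CASES = [None, True, 'hey', 1234, [1, 3, 2], {'a': 1}]

class TestRoundTrip(unittest.TestCase):
    def test_round_trip(self):
        for datum in CASES:
            # subTest keeps iterating after a failure and names the
            # failing case, replacing the manual `correct` counter.
            with self.subTest(datum=datum):
                with tempfile.NamedTemporaryFile(
                        'w', suffix='.json', delete=False) as f:
                    json.dump([datum] * 10, f)
                try:
                    with open(f.name) as g:
                        self.assertEqual(json.load(g), [datum] * 10)
                finally:
                    os.remove(f.name)
```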
diff --git a/branch-1.3/lang/py/test/test_datafile_interop.py b/branch-1.3/lang/py/test/test_datafile_interop.py
deleted file mode 100644
index d449630..0000000
--- a/branch-1.3/lang/py/test/test_datafile_interop.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import os
-import unittest
-from avro import io
-from avro import datafile
-
-INTEROP_DATA_DIR = os.path.join(os.path.dirname(__file__), '../../../build/interop/data')
-
-class TestDataFileInterop(unittest.TestCase):
-  def test_interop(self):
-    print ''
-    print 'TEST INTEROP'
-    print '============'
-    print ''
-    for f in os.listdir(INTEROP_DATA_DIR):
-      print 'READING %s' % f
-      print ''
-
-      # read data in binary from file
-      reader = open(os.path.join(INTEROP_DATA_DIR, f), 'rb')
-      datum_reader = io.DatumReader()
-      dfr = datafile.DataFileReader(reader, datum_reader)
-      for datum in dfr:
-        assert datum is not None
-
-if __name__ == '__main__':
-  unittest.main()
diff --git a/branch-1.3/lang/py/test/test_io.py b/branch-1.3/lang/py/test/test_io.py
deleted file mode 100644
index 52e2c70..0000000
--- a/branch-1.3/lang/py/test/test_io.py
+++ /dev/null
@@ -1,328 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import unittest
-try:
-  from cStringIO import StringIO
-except ImportError:
-  from StringIO import StringIO
-from binascii import hexlify
-from avro import schema
-from avro import io
-
-SCHEMAS_TO_VALIDATE = (
-  ('"null"', None),
-  ('"boolean"', True),
-  ('"string"', unicode('adsfasdf09809dsf-=adsf')),
-  ('"bytes"', '12345abcd'),
-  ('"int"', 1234),
-  ('"long"', 1234),
-  ('"float"', 1234.0),
-  ('"double"', 1234.0),
-  ('{"type": "fixed", "name": "Test", "size": 1}', 'B'),
-  ('{"type": "enum", "name": "Test", "symbols": ["A", "B"]}', 'B'),
-  ('{"type": "array", "items": "long"}', [1, 3, 2]),
-  ('{"type": "map", "values": "long"}', {'a': 1, 'b': 3, 'c': 2}),
-  ('["string", "null", "long"]', None),
-  ("""\
-   {"type": "record",
-    "name": "Test",
-    "fields": [{"name": "f", "type": "long"}]}
-   """, {'f': 5}),
-  ("""\
-   {"type": "record",
-    "name": "Lisp",
-    "fields": [{"name": "value",
-                "type": ["null", "string",
-                         {"type": "record",
-                          "name": "Cons",
-                          "fields": [{"name": "car", "type": "Lisp"},
-                                     {"name": "cdr", "type": "Lisp"}]}]}]}
-   """, {'value': {'car': {'value': 'head'}, 'cdr': {'value': None}}}),
-)
-
-BINARY_ENCODINGS = (
-  (0, '00'),
-  (-1, '01'),
-  (1, '02'),
-  (-2, '03'),
-  (2, '04'),
-  (-64, '7f'),
-  (64, '80 01'),
-  (8192, '80 80 01'),
-  (-8193, '81 80 01'),
-)
-
-DEFAULT_VALUE_EXAMPLES = (
-  ('"null"', 'null', None),
-  ('"boolean"', 'true', True),
-  ('"string"', '"foo"', u'foo'),
-  ('"bytes"', '"\u00FF\u00FF"', u'\xff\xff'),
-  ('"int"', '5', 5),
-  ('"long"', '5', 5L),
-  ('"float"', '1.1', 1.1),
-  ('"double"', '1.1', 1.1),
-  ('{"type": "fixed", "name": "F", "size": 2}', '"\u00FF\u00FF"', u'\xff\xff'),
-  ('{"type": "enum", "name": "F", "symbols": ["FOO", "BAR"]}', '"FOO"', 'FOO'),
-  ('{"type": "array", "items": "int"}', '[1, 2, 3]', [1, 2, 3]),
-  ('{"type": "map", "values": "int"}', '{"a": 1, "b": 2}', {'a': 1, 'b': 2}),
-  ('["int", "null"]', '5', 5),
-  ('{"type": "record", "name": "F", "fields": [{"name": "A", "type": "int"}]}',
-   '{"A": 5}', {'A': 5}),
-)
-
-LONG_RECORD_SCHEMA = schema.parse("""\
-  {"type": "record",
-   "name": "Test",
-   "fields": [{"name": "A", "type": "int"},
-              {"name": "B", "type": "int"},
-              {"name": "C", "type": "int"},
-              {"name": "D", "type": "int"},
-              {"name": "E", "type": "int"},
-              {"name": "F", "type": "int"},
-              {"name": "G", "type": "int"}]}""")
-
-LONG_RECORD_DATUM = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7}
-
-def avro_hexlify(reader):
-  """Return the hex value, as a string, of a binary-encoded int or long."""
-  bytes = []
-  current_byte = reader.read(1)
-  bytes.append(hexlify(current_byte))
-  while (ord(current_byte) & 0x80) != 0:
-    current_byte = reader.read(1)
-    bytes.append(hexlify(current_byte))
-  return ' '.join(bytes)
-
-def print_test_name(test_name):
-  print ''
-  print test_name
-  print '=' * len(test_name)
-  print ''
-
-def write_datum(datum, writers_schema):
-  writer = StringIO()
-  encoder = io.BinaryEncoder(writer)
-  datum_writer = io.DatumWriter(writers_schema)
-  datum_writer.write(datum, encoder)
-  return writer, encoder, datum_writer
-
-def read_datum(buffer, writers_schema, readers_schema=None):
-  reader = StringIO(buffer.getvalue())
-  decoder = io.BinaryDecoder(reader)
-  datum_reader = io.DatumReader(writers_schema, readers_schema)
-  return datum_reader.read(decoder)
-
-def check_binary_encoding(number_type):
-  print_test_name('TEST BINARY %s ENCODING' % number_type.upper())
-  correct = 0
-  for datum, hex_encoding in BINARY_ENCODINGS:
-    print 'Datum: %d' % datum
-    print 'Correct Encoding: %s' % hex_encoding
-
-    writers_schema = schema.parse('"%s"' % number_type.lower())
-    writer, encoder, datum_writer = write_datum(datum, writers_schema)
-    writer.seek(0)
-    hex_val = avro_hexlify(writer)
-
-    print 'Read Encoding: %s' % hex_val
-    if hex_encoding == hex_val: correct += 1
-    print ''
-  return correct
-
-def check_skip_number(number_type):
-  print_test_name('TEST SKIP %s' % number_type.upper())
-  correct = 0
-  for value_to_skip, hex_encoding in BINARY_ENCODINGS:
-    VALUE_TO_READ = 6253
-    print 'Value to Skip: %d' % value_to_skip
-
-    # write the value to skip and a known value
-    writers_schema = schema.parse('"%s"' % number_type.lower())
-    writer, encoder, datum_writer = write_datum(value_to_skip, writers_schema)
-    datum_writer.write(VALUE_TO_READ, encoder)
-
-    # skip the value
-    reader = StringIO(writer.getvalue())
-    decoder = io.BinaryDecoder(reader)
-    decoder.skip_long()
-
-    # read data from string buffer
-    datum_reader = io.DatumReader(writers_schema)
-    read_value = datum_reader.read(decoder)
-
-    print 'Read Value: %d' % read_value
-    if read_value == VALUE_TO_READ: correct += 1
-    print ''
-  return correct
-    
-class TestIO(unittest.TestCase):
-  #
-  # BASIC FUNCTIONALITY
-  #
-
-  def test_validate(self):
-    print_test_name('TEST VALIDATE')
-    passed = 0
-    for example_schema, datum in SCHEMAS_TO_VALIDATE:
-      print 'Schema: %s' % example_schema
-      print 'Datum: %s' % datum
-      validated = io.validate(schema.parse(example_schema), datum)
-      print 'Valid: %s' % validated
-      if validated: passed += 1
-    self.assertEquals(passed, len(SCHEMAS_TO_VALIDATE))
-
-  def test_round_trip(self):
-    print_test_name('TEST ROUND TRIP')
-    correct = 0
-    for example_schema, datum in SCHEMAS_TO_VALIDATE:
-      print 'Schema: %s' % example_schema
-      print 'Datum: %s' % datum
-
-      writers_schema = schema.parse(example_schema)
-      writer, encoder, datum_writer = write_datum(datum, writers_schema)
-      round_trip_datum = read_datum(writer, writers_schema)
-
-      print 'Round Trip Datum: %s' % round_trip_datum
-      if datum == round_trip_datum: correct += 1
-    self.assertEquals(correct, len(SCHEMAS_TO_VALIDATE))
-
-  #
-  # BINARY ENCODING OF INT AND LONG
-  #
-
-  def test_binary_int_encoding(self):
-    correct = check_binary_encoding('int')
-    self.assertEquals(correct, len(BINARY_ENCODINGS))
-
-  def test_binary_long_encoding(self):
-    correct = check_binary_encoding('long')
-    self.assertEquals(correct, len(BINARY_ENCODINGS))
-
-  def test_skip_int(self):
-    correct = check_skip_number('int')
-    self.assertEquals(correct, len(BINARY_ENCODINGS))
-
-  def test_skip_long(self):
-    correct = check_skip_number('long')
-    self.assertEquals(correct, len(BINARY_ENCODINGS))
-
-  #
-  # SCHEMA RESOLUTION
-  #
-
-  def test_schema_promotion(self):
-    print_test_name('TEST SCHEMA PROMOTION')
-    # note that checking writers_schema.type in read_data
-    # allows us to handle promotion correctly
-    promotable_schemas = ['"int"', '"long"', '"float"', '"double"']
-    incorrect = 0
-    for i, ws in enumerate(promotable_schemas):
-      writers_schema = schema.parse(ws)
-      datum_to_write = 219
-      for rs in promotable_schemas[i + 1:]:
-        readers_schema = schema.parse(rs)
-        writer, enc, dw = write_datum(datum_to_write, writers_schema)
-        datum_read = read_datum(writer, writers_schema, readers_schema)
-        print 'Writer: %s Reader: %s' % (writers_schema, readers_schema)
-        print 'Datum Read: %s' % datum_read
-        if datum_read != datum_to_write: incorrect += 1
-    self.assertEquals(incorrect, 0)
-
-  def test_unknown_symbol(self):
-    print_test_name('TEST UNKNOWN SYMBOL')
-    writers_schema = schema.parse("""\
-      {"type": "enum", "name": "Test",
-       "symbols": ["FOO", "BAR"]}""")
-    datum_to_write = 'FOO'
-
-    readers_schema = schema.parse("""\
-      {"type": "enum", "name": "Test",
-       "symbols": ["BAR", "BAZ"]}""")
-
-    writer, encoder, datum_writer = write_datum(datum_to_write, writers_schema)
-    reader = StringIO(writer.getvalue())
-    decoder = io.BinaryDecoder(reader)
-    datum_reader = io.DatumReader(writers_schema, readers_schema)
-    self.assertRaises(io.SchemaResolutionException, datum_reader.read, decoder)
-
-  def test_default_value(self):
-    print_test_name('TEST DEFAULT VALUE')
-    writers_schema = LONG_RECORD_SCHEMA
-    datum_to_write = LONG_RECORD_DATUM
-
-    correct = 0
-    for field_type, default_json, default_datum in DEFAULT_VALUE_EXAMPLES:
-      readers_schema = schema.parse("""\
-        {"type": "record", "name": "Test",
-         "fields": [{"name": "H", "type": %s, "default": %s}]}
-        """ % (field_type, default_json))
-      datum_to_read = {'H': default_datum}
-
-      writer, encoder, datum_writer = write_datum(datum_to_write, writers_schema)
-      datum_read = read_datum(writer, writers_schema, readers_schema)
-      print 'Datum Read: %s' % datum_read
-      if datum_to_read == datum_read: correct += 1
-    self.assertEquals(correct, len(DEFAULT_VALUE_EXAMPLES))
-
-  def test_no_default_value(self):
-    print_test_name('TEST NO DEFAULT VALUE')
-    writers_schema = LONG_RECORD_SCHEMA
-    datum_to_write = LONG_RECORD_DATUM
-
-    readers_schema = schema.parse("""\
-      {"type": "record", "name": "Test",
-       "fields": [{"name": "H", "type": "int"}]}""")
-
-    writer, encoder, datum_writer = write_datum(datum_to_write, writers_schema)
-    reader = StringIO(writer.getvalue())
-    decoder = io.BinaryDecoder(reader)
-    datum_reader = io.DatumReader(writers_schema, readers_schema)
-    self.assertRaises(io.SchemaResolutionException, datum_reader.read, decoder)
-
-  def test_projection(self):
-    print_test_name('TEST PROJECTION')
-    writers_schema = LONG_RECORD_SCHEMA
-    datum_to_write = LONG_RECORD_DATUM
-
-    readers_schema = schema.parse("""\
-      {"type": "record", "name": "Test",
-       "fields": [{"name": "E", "type": "int"},
-                  {"name": "F", "type": "int"}]}""")
-    datum_to_read = {'E': 5, 'F': 6}
-
-    writer, encoder, datum_writer = write_datum(datum_to_write, writers_schema)
-    datum_read = read_datum(writer, writers_schema, readers_schema)
-    print 'Datum Read: %s' % datum_read
-    self.assertEquals(datum_to_read, datum_read)
-
-  def test_field_order(self):
-    print_test_name('TEST FIELD ORDER')
-    writers_schema = LONG_RECORD_SCHEMA
-    datum_to_write = LONG_RECORD_DATUM
-
-    readers_schema = schema.parse("""\
-      {"type": "record", "name": "Test",
-       "fields": [{"name": "F", "type": "int"},
-                  {"name": "E", "type": "int"}]}""")
-    datum_to_read = {'E': 5, 'F': 6}
-
-    writer, encoder, datum_writer = write_datum(datum_to_write, writers_schema)
-    datum_read = read_datum(writer, writers_schema, readers_schema)
-    print 'Datum Read: %s' % datum_read
-    self.assertEquals(datum_to_read, datum_read)
-
-if __name__ == '__main__':
-  unittest.main()
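The `BINARY_ENCODINGS` table in the deleted test above is Avro's encoding of ints and longs: the value is first zig-zag mapped so small magnitudes get small codes, then written as a little-endian base-128 varint whose high bit marks continuation. A minimal sketch that reproduces the table:

```python
def zigzag(n):
    # 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...: the sign moves to bit 0,
    # so values near zero encode in one byte regardless of sign.
    return (n << 1) ^ (n >> 63)

def varint(n):
    # Base-128 groups, least-significant first; every byte except the
    # last has its high bit set.
    out = []
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_long(value):
    """Hex string in the table's format, e.g. -8193 -> '81 80 01'."""
    return ' '.join('%02x' % b for b in varint(zigzag(value)))
```

This is also why `check_skip_number` can use `skip_long` for both types: the wire format is identical, only the value range differs.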
diff --git a/branch-1.3/lang/py/test/test_protocol.py b/branch-1.3/lang/py/test/test_protocol.py
deleted file mode 100644
index 855bc11..0000000
--- a/branch-1.3/lang/py/test/test_protocol.py
+++ /dev/null
@@ -1,257 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Test the protocol parsing logic.
-"""
-import unittest
-from avro import protocol
-
-class ExampleProtocol(object):
-  def __init__(self, protocol_string, valid, name='', comment=''):
-    self._protocol_string = protocol_string
-    self._valid = valid
-    self._name = name or protocol_string # default to protocol_string for name
-    self._comment = comment
-
-  # read-only properties
-  protocol_string = property(lambda self: self._protocol_string)
-  valid = property(lambda self: self._valid)
-  name = property(lambda self: self._name)
-
-  # read/write properties
-  def set_comment(self, new_comment): self._comment = new_comment
-  comment = property(lambda self: self._comment, set_comment)
-
-#
-# Example Protocols
-#
-
-EXAMPLES = [
-  ExampleProtocol("""\
-{
-  "namespace": "com.acme",
-  "protocol": "HelloWorld",
-
-  "types": [
-    {"name": "Greeting", "type": "record", "fields": [
-      {"name": "message", "type": "string"}]},
-    {"name": "Curse", "type": "error", "fields": [
-      {"name": "message", "type": "string"}]}
-  ],
-
-  "messages": {
-    "hello": {
-      "request": [{"name": "greeting", "type": "Greeting" }],
-      "response": "Greeting",
-      "errors": ["Curse"]
-    }
-  }
-}
-    """, True),
-  ExampleProtocol("""\
-{"namespace": "org.apache.avro.test",
- "protocol": "Simple",
-
- "types": [
-     {"name": "Kind", "type": "enum", "symbols": ["FOO","BAR","BAZ"]},
-
-     {"name": "MD5", "type": "fixed", "size": 16},
-
-     {"name": "TestRecord", "type": "record",
-      "fields": [
-          {"name": "name", "type": "string", "order": "ignore"},
-          {"name": "kind", "type": "Kind", "order": "descending"},
-          {"name": "hash", "type": "MD5"}
-      ]
-     },
-
-     {"name": "TestError", "type": "error", "fields": [
-         {"name": "message", "type": "string"}
-      ]
-     }
-
- ],
-
- "messages": {
-
-     "hello": {
-         "request": [{"name": "greeting", "type": "string"}],
-         "response": "string"
-     },
-
-     "echo": {
-         "request": [{"name": "record", "type": "TestRecord"}],
-         "response": "TestRecord"
-     },
-
-     "add": {
-         "request": [{"name": "arg1", "type": "int"}, {"name": "arg2", "type": "int"}],
-         "response": "int"
-     },
-
-     "echoBytes": {
-         "request": [{"name": "data", "type": "bytes"}],
-         "response": "bytes"
-     },
-
-     "error": {
-         "request": [],
-         "response": "null",
-         "errors": ["TestError"]
-     }
- }
-
-}
-    """, True),
-  ExampleProtocol("""\
-{"namespace": "org.apache.avro.test.namespace",
- "protocol": "TestNamespace",
-
- "types": [
-     {"name": "org.apache.avro.test.util.MD5", "type": "fixed", "size": 16},
-     {"name": "TestRecord", "type": "record",
-      "fields": [ {"name": "hash", "type": "org.apache.avro.test.util.MD5"} ]
-     },
-     {"name": "TestError", "namespace": "org.apache.avro.test.errors",
-      "type": "error", "fields": [ {"name": "message", "type": "string"} ]
-     }
- ],
-
- "messages": {
-     "echo": {
-         "request": [{"name": "record", "type": "TestRecord"}],
-         "response": "TestRecord"
-     },
-
-     "error": {
-         "request": [],
-         "response": "null",
-         "errors": ["org.apache.avro.test.errors.TestError"]
-     }
-
- }
-
-}
-    """, True),
-  ExampleProtocol("""\
-{"namespace": "org.apache.avro.test",
- "protocol": "BulkData",
-
- "types": [],
-
- "messages": {
-
-     "read": {
-         "request": [],
-         "response": "bytes"
-     },
-
-     "write": {
-         "request": [ {"name": "data", "type": "bytes"} ],
-         "response": "null"
-     }
-
- }
-
-}
-    """, True),
-]
-
-VALID_EXAMPLES = [e for e in EXAMPLES if e.valid]
-
-class TestProtocol(unittest.TestCase):
-  def test_parse(self):
-    print ''
-    print 'TEST PARSE'
-    print '=========='
-    print ''
-
-    num_correct = 0
-    for example in EXAMPLES:
-      try:
-        protocol.parse(example.protocol_string)
-        if example.valid: num_correct += 1
-        debug_msg = "%s: PARSE SUCCESS" % example.name
-      except:
-        if not example.valid: num_correct += 1
-        debug_msg = "%s: PARSE FAILURE" % example.name
-      finally:
-        print debug_msg
-
-    fail_msg = "Parse behavior correct on %d out of %d protocols." % \
-      (num_correct, len(EXAMPLES))
-    self.assertEqual(num_correct, len(EXAMPLES), fail_msg)
-
-  def test_valid_cast_to_string_after_parse(self):
-    """
-    Test that the string generated by an Avro Protocol object
-    is, in fact, a valid Avro protocol.
-    """
-    print ''
-    print 'TEST CAST TO STRING'
-    print '==================='
-    print ''
-
-    num_correct = 0
-    for example in VALID_EXAMPLES:
-      protocol_data = protocol.parse(example.protocol_string)
-      try:
-        protocol.parse(str(protocol_data))
-        debug_msg = "%s: STRING CAST SUCCESS" % example.name
-        num_correct += 1
-      except:
-        debug_msg = "%s: STRING CAST FAILURE" % example.name
-      finally:
-        print debug_msg
-
-    fail_msg = "Cast to string success on %d out of %d protocols" % \
-      (num_correct, len(VALID_EXAMPLES))
-    self.assertEqual(num_correct, len(VALID_EXAMPLES), fail_msg)
-
-  def test_equivalence_after_round_trip(self):
-    """
-    1. Given a string, parse it to get Avro protocol "original".
-    2. Serialize "original" to a string and parse that string
-         to generate Avro protocol "round trip".
-    3. Ensure "original" and "round trip" protocols are equivalent.
-    """
-    print ''
-    print 'TEST ROUND TRIP'
-    print '==============='
-    print ''
-
-    num_correct = 0
-    for example in VALID_EXAMPLES:
-      try:
-        original_protocol = protocol.parse(example.protocol_string)
-        round_trip_protocol = protocol.parse(str(original_protocol))
-
-        if original_protocol == round_trip_protocol:
-          num_correct += 1
-          debug_msg = "%s: ROUND TRIP SUCCESS" % example.name
-        else:       
-          debug_msg = "%s: ROUND TRIP FAILURE" % example.name
-      except:
-        debug_msg = "%s: ROUND TRIP FAILURE" % example.name
-      finally:
-        print debug_msg
-
-    fail_msg = "Round trip success on %d out of %d protocols" % \
-      (num_correct, len(VALID_EXAMPLES))
-    self.assertEqual(num_correct, len(VALID_EXAMPLES), fail_msg)
-
-if __name__ == '__main__':
-  unittest.main()
diff --git a/branch-1.3/lang/py/test/test_schema.py b/branch-1.3/lang/py/test/test_schema.py
deleted file mode 100644
index cb41c15..0000000
--- a/branch-1.3/lang/py/test/test_schema.py
+++ /dev/null
@@ -1,374 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Test the schema parsing logic.
-"""
-import unittest
-from avro import schema
-
-def print_test_name(test_name):
-  print ''
-  print test_name
-  print '=' * len(test_name)
-  print ''
-
-class ExampleSchema(object):
-  def __init__(self, schema_string, valid, name='', comment=''):
-    self._schema_string = schema_string
-    self._valid = valid
-    self._name = name or schema_string # default to schema_string for name
-    self.comment = comment
-
-  @property
-  def schema_string(self):
-    return self._schema_string
-
-  @property
-  def valid(self):
-    return self._valid
-
-  @property
-  def name(self):
-    return self._name
-
-#
-# Example Schemas
-#
-
-def make_primitive_examples():
-  examples = []
-  for type in schema.PRIMITIVE_TYPES:
-    examples.append(ExampleSchema('"%s"' % type, True))
-    examples.append(ExampleSchema('{"type": "%s"}' % type, True))
-  return examples
-
-PRIMITIVE_EXAMPLES = [
-  ExampleSchema('"True"', False),
-  ExampleSchema('True', False),
-  ExampleSchema('{"no_type": "test"}', False),
-  ExampleSchema('{"type": "panther"}', False),
-] + make_primitive_examples()
-
-FIXED_EXAMPLES = [
-  ExampleSchema('{"type": "fixed", "name": "Test", "size": 1}', True),
-  ExampleSchema("""\
-    {"type": "fixed",
-     "name": "MyFixed",
-     "namespace": "org.apache.hadoop.avro",
-     "size": 1}
-    """, True),
-  ExampleSchema("""\
-    {"type": "fixed",
-     "name": "Missing size"}
-    """, False),
-  ExampleSchema("""\
-    {"type": "fixed",
-     "size": 314}
-    """, False),
-]
-
-ENUM_EXAMPLES = [
-  ExampleSchema('{"type": "enum", "name": "Test", "symbols": ["A", "B"]}', True),
-  ExampleSchema("""\
-    {"type": "enum",
-     "name": "Status",
-     "symbols": "Normal Caution Critical"}
-    """, False),
-  ExampleSchema("""\
-    {"type": "enum",
-     "name": [ 0, 1, 1, 2, 3, 5, 8 ],
-     "symbols": ["Golden", "Mean"]}
-    """, False),
-  ExampleSchema("""\
-    {"type": "enum",
-     "symbols" : ["I", "will", "fail", "no", "name"]}
-    """, False),
-  ExampleSchema("""\
-    {"type": "enum",
-     "name": "Test"
-     "symbols" : ["AA", "AA"]}
-    """, False),
-]
-
-ARRAY_EXAMPLES = [
-  ExampleSchema('{"type": "array", "items": "long"}', True),
-  ExampleSchema("""\
-    {"type": "array",
-     "items": {"type": "enum", "name": "Test", "symbols": ["A", "B"]}}
-    """, True),
-]
-
-MAP_EXAMPLES = [
-  ExampleSchema('{"type": "map", "values": "long"}', True),
-  ExampleSchema("""\
-    {"type": "map",
-     "values": {"type": "enum", "name": "Test", "symbols": ["A", "B"]}}
-    """, True),
-]
-
-UNION_EXAMPLES = [
-  ExampleSchema('["string", "null", "long"]', True),
-  ExampleSchema('["null", "null"]', False),
-  ExampleSchema('["long", "long"]', False),
-  ExampleSchema("""\
-    [{"type": "array", "items": "long"}
-     {"type": "array", "items": "string"}]
-    """, False),
-]
-
-RECORD_EXAMPLES = [
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "Test",
-     "fields": [{"name": "f",
-                 "type": "long"}]}
-    """, True),
-  ExampleSchema("""\
-    {"type": "error",
-     "name": "Test",
-     "fields": [{"name": "f",
-                 "type": "long"}]}
-    """, True),
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "Node",
-     "fields": [{"name": "label", "type": "string"},
-                {"name": "children",
-                 "type": {"type": "array", "items": "Node"}}]}
-    """, True),
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "Lisp",
-     "fields": [{"name": "value",
-                 "type": ["null", "string",
-                          {"type": "record",
-                           "name": "Cons",
-                           "fields": [{"name": "car", "type": "Lisp"},
-                                      {"name": "cdr", "type": "Lisp"}]}]}]}
-    """, True),
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "HandshakeRequest",
-     "namespace": "org.apache.avro.ipc",
-     "fields": [{"name": "clientHash",
-                 "type": {"type": "fixed", "name": "MD5", "size": 16}},
-                {"name": "clientProtocol", "type": ["null", "string"]},
-                {"name": "serverHash", "type": "MD5"},
-                {"name": "meta", 
-                 "type": ["null", {"type": "map", "values": "bytes"}]}]}
-    """, True),
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "HandshakeResponse",
-     "namespace": "org.apache.avro.ipc",
-     "fields": [{"name": "match",
-                 "type": {"type": "enum",
-                          "name": "HandshakeMatch",
-                          "symbols": ["BOTH", "CLIENT", "NONE"]}},
-                {"name": "serverProtocol", "type": ["null", "string"]},
-                {"name": "serverHash",
-                 "type": ["null",
-                          {"name": "MD5", "size": 16, "type": "fixed"}]},
-                {"name": "meta",
-                 "type": ["null", {"type": "map", "values": "bytes"}]}]}
-    """, True),
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "Interop",
-     "namespace": "org.apache.avro",
-     "fields": [{"name": "intField", "type": "int"},
-                {"name": "longField", "type": "long"},
-                {"name": "stringField", "type": "string"},
-                {"name": "boolField", "type": "boolean"},
-                {"name": "floatField", "type": "float"},
-                {"name": "doubleField", "type": "double"},
-                {"name": "bytesField", "type": "bytes"},
-                {"name": "nullField", "type": "null"},
-                {"name": "arrayField",
-                 "type": {"type": "array", "items": "double"}},
-                {"name": "mapField",
-                 "type": {"type": "map",
-                          "values": {"name": "Foo",
-                                     "type": "record",
-                                     "fields": [{"name": "label",
-                                                 "type": "string"}]}}},
-                {"name": "unionField",
-                 "type": ["boolean",
-                          "double",
-                          {"type": "array", "items": "bytes"}]},
-                {"name": "enumField",
-                 "type": {"type": "enum",
-                          "name": "Kind",
-                          "symbols": ["A", "B", "C"]}},
-                {"name": "fixedField",
-                 "type": {"type": "fixed", "name": "MD5", "size": 16}},
-                {"name": "recordField",
-                 "type": {"type": "record",
-                          "name": "Node",
-                          "fields": [{"name": "label", "type": "string"},
-                                     {"name": "children",
-                                      "type": {"type": "array",
-                                               "items": "Node"}}]}}]}
-    """, True),
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "ipAddr",
-     "fields": [{"name": "addr", 
-                 "type": [{"name": "IPv6", "type": "fixed", "size": 16},
-                          {"name": "IPv4", "type": "fixed", "size": 4}]}]}
-    """, True),
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "Address",
-     "fields": [{"type": "string"},
-                {"type": "string", "name": "City"}]}
-    """, False),
-  ExampleSchema("""\
-    {"type": "record",
-     "name": "Event",
-     "fields": [{"name": "Sponsor"},
-                {"name": "City", "type": "string"}]}
-    """, False),
-  ExampleSchema("""\
-    {"type": "record",
-     "fields": "His vision, from the constantly passing bars,"
-     "name", "Rainer"}
-    """, False),
-  ExampleSchema("""\
-    {"name": ["Tom", "Jerry"],
-     "type": "record",
-     "fields": [{"name": "name", "type": "string"}]}
-    """, False),
-]
-
-EXAMPLES = PRIMITIVE_EXAMPLES
-EXAMPLES += FIXED_EXAMPLES
-EXAMPLES += ENUM_EXAMPLES
-EXAMPLES += ARRAY_EXAMPLES
-EXAMPLES += MAP_EXAMPLES
-EXAMPLES += UNION_EXAMPLES
-EXAMPLES += RECORD_EXAMPLES
-
-VALID_EXAMPLES = [e for e in EXAMPLES if e.valid]
-
-# TODO(hammer): refactor into harness for examples
-# TODO(hammer): pretty-print detailed output
-# TODO(hammer): make verbose flag
-# TODO(hammer): show stack trace to user
-# TODO(hammer): use logging module?
-class TestSchema(unittest.TestCase):
-  def test_parse(self):
-    print_test_name('TEST PARSE')
-    correct = 0
-    for example in EXAMPLES:
-      try:
-        schema.parse(example.schema_string)
-        if example.valid: correct += 1
-        debug_msg = "%s: PARSE SUCCESS" % example.name
-      except:
-        if not example.valid: correct += 1
-        debug_msg = "%s: PARSE FAILURE" % example.name
-      finally:
-        print debug_msg
-
-    fail_msg = "Parse behavior correct on %d out of %d schemas." % \
-      (correct, len(EXAMPLES))
-    self.assertEqual(correct, len(EXAMPLES), fail_msg)
-
-  def test_valid_cast_to_string_after_parse(self):
-    """
-    Test that the string generated by an Avro Schema object
-    is, in fact, a valid Avro schema.
-    """
-    print_test_name('TEST CAST TO STRING AFTER PARSE')
-    correct = 0
-    for example in VALID_EXAMPLES:
-      schema_data = schema.parse(example.schema_string)
-      try:
-        schema.parse(str(schema_data))
-        debug_msg = "%s: STRING CAST SUCCESS" % example.name
-        correct += 1
-      except:
-        debug_msg = "%s: STRING CAST FAILURE" % example.name
-      finally:
-        print debug_msg
-
-    fail_msg = "Cast to string success on %d out of %d schemas" % \
-      (correct, len(VALID_EXAMPLES))
-    self.assertEqual(correct, len(VALID_EXAMPLES), fail_msg)
-
-  def test_equivalence_after_round_trip(self):
-    """
-    1. Given a string, parse it to get Avro schema "original".
-    2. Serialize "original" to a string and parse that string
-         to generate Avro schema "round trip".
-    3. Ensure "original" and "round trip" schemas are equivalent.
-    """
-    print_test_name('TEST ROUND TRIP')
-    correct = 0
-    for example in VALID_EXAMPLES:
-      try:
-        original_schema = schema.parse(example.schema_string)
-        round_trip_schema = schema.parse(str(original_schema))
-        if original_schema == round_trip_schema:
-          correct += 1
-          debug_msg = "%s: ROUND TRIP SUCCESS" % example.name
-        else:       
-          debug_msg = "%s: ROUND TRIP FAILURE" % example.name
-      except:
-        debug_msg = "%s: ROUND TRIP FAILURE" % example.name
-      finally:
-        print debug_msg
-
-    fail_msg = "Round trip success on %d out of %d schemas" % \
-      (correct, len(VALID_EXAMPLES))
-    self.assertEqual(correct, len(VALID_EXAMPLES), fail_msg)
-
-  # TODO(hammer): more tests
-  def test_fullname(self):
-    """
-    The fullname is determined in one of the following ways:
-     * A name and namespace are both specified.  For example,
-       one might use "name": "X", "namespace": "org.foo"
-       to indicate the fullname "org.foo.X".
-     * A fullname is specified.  If the name specified contains
-       a dot, then it is assumed to be a fullname, and any
-       namespace also specified is ignored.  For example,
-       use "name": "org.foo.X" to indicate the
-       fullname "org.foo.X".
-     * A name only is specified, i.e., a name that contains no
-       dots.  In this case the namespace is taken from the most
-       tightly enclosing schema or protocol.  For example,
-       if "name": "X" is specified, and this occurs
-       within a field of the record definition
-       of "org.foo.Y", then the fullname is "org.foo.X".
-
-    References to previously defined names are as in the latter
-    two cases above: if they contain a dot they are a fullname, if
-    they do not contain a dot, the namespace is the namespace of
-    the enclosing definition.
-
-    Primitive type names have no namespace and their names may
-    not be defined in any namespace.  A schema may only contain
-    multiple definitions of a fullname if the definitions are
-    equivalent.
-    """
-    print_test_name('TEST FULLNAME')
-    fullname = schema.Name.make_fullname('a', 'o.a.h')
-    self.assertEqual(fullname, 'o.a.h.a')
-
-if __name__ == '__main__':
-  unittest.main()
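
The fullname rules documented in the deleted test above can be sketched as follows. Note `make_fullname` here is a hypothetical stand-in written only to illustrate the resolution rules, not the library's actual `schema.Name.make_fullname`:

```python
# Hypothetical sketch of Avro fullname resolution (not the library code):
# a dotted name is already a fullname and ignores any namespace; an
# undotted name is qualified by the enclosing namespace, if any.
def make_fullname(name, namespace=None):
    if '.' in name:           # already a fullname; namespace is ignored
        return name
    if namespace:             # qualify with the enclosing namespace
        return '%s.%s' % (namespace, name)
    return name               # bare name, no namespace in scope

print(make_fullname('a', 'o.a.h'))        # o.a.h.a
print(make_fullname('org.foo.X', 'bar'))  # org.foo.X
```

This mirrors the single assertion in `test_fullname`: a bare name plus a namespace yields the dotted fullname.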
diff --git a/branch-1.3/lang/ruby/.gitignore b/branch-1.3/lang/ruby/.gitignore
deleted file mode 100644
index 1c2f433..0000000
--- a/branch-1.3/lang/ruby/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-tmp
\ No newline at end of file
diff --git a/branch-1.3/lang/ruby/CHANGELOG b/branch-1.3/lang/ruby/CHANGELOG
deleted file mode 100644
index 16d7595..0000000
--- a/branch-1.3/lang/ruby/CHANGELOG
+++ /dev/null
@@ -1 +0,0 @@
-v0.0.1 stuff
\ No newline at end of file
diff --git a/branch-1.3/lang/ruby/Manifest b/branch-1.3/lang/ruby/Manifest
deleted file mode 100644
index 33108b1..0000000
--- a/branch-1.3/lang/ruby/Manifest
+++ /dev/null
@@ -1,18 +0,0 @@
-CHANGELOG
-Manifest
-Rakefile
-avro.gemspec
-interop/test_interop.rb
-lib/avro.rb
-lib/avro/collect_hash.rb
-lib/avro/data_file.rb
-lib/avro/io.rb
-lib/avro/ipc.rb
-lib/avro/protocol.rb
-lib/avro/schema.rb
-test/random_data.rb
-test/sample_ipc_client.rb
-test/sample_ipc_server.rb
-test/test_help.rb
-test/test_io.rb
-test/test_protocol.rb
diff --git a/branch-1.3/lang/ruby/Rakefile b/branch-1.3/lang/ruby/Rakefile
deleted file mode 100644
index 09ddcdb..0000000
--- a/branch-1.3/lang/ruby/Rakefile
+++ /dev/null
@@ -1,59 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'rubygems'
-require 'echoe'
-VERSION = File.open('../../share/VERSION.txt').read.sub('-SNAPSHOT', '.pre1').chomp
-Echoe.new('avro', VERSION) do |p|
-  p.author = "Apache Software Foundation"
-  p.email = "avro-dev@hadoop.apache.org"
-  p.summary = "Apache Avro for Ruby"
-  p.description = "Apache Avro is a data serialization and RPC format"
-  p.url = "http://hadoop.apache.org/avro/"
-  p.runtime_dependencies = %w[yajl-ruby]
-end
-
-t = Rake::TestTask.new(:interop)
-t.pattern = 'interop/test*.rb'
-
-task :generate_interop do
-  $:.unshift(HERE + '/lib')
-  $:.unshift(HERE + '/test')
-  require 'avro'
-  require 'random_data'
-
-  schema = Avro::Schema.parse(File.read(SCHEMAS + '/interop.avsc'))
-  r = RandomData.new(schema, ENV['SEED'])
-  f = File.open(BUILD + '/interop/data/ruby.avro', 'w')
-  writer = Avro::DataFile::Writer.new(f, Avro::IO::DatumWriter.new(schema), schema)
-  begin
-    writer << r.next
-    writer << r.next
-  ensure
-    writer.close
-  end
-end
-
-
-HERE = File.expand_path(File.dirname(__FILE__))
-SHARE = HERE + '/../../share'
-SCHEMAS = SHARE + '/test/schemas'
-BUILD = HERE + '/../../build'
-
-task :dist => [:manifest, :gem] do
-  mkdir_p "../../dist/ruby"
-  cp "pkg/avro-#{VERSION}.gem", "../../dist/ruby"
-end
diff --git a/branch-1.3/lang/ruby/interop/test_interop.rb b/branch-1.3/lang/ruby/interop/test_interop.rb
deleted file mode 100644
index da1f004..0000000
--- a/branch-1.3/lang/ruby/interop/test_interop.rb
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/usr/bin/env ruby
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'rubygems'
-require 'test/unit'
-require 'avro'
-
-class TestInterop < Test::Unit::TestCase
-  HERE = File.expand_path(File.dirname(__FILE__))
-  SHARE = HERE + '/../../../share'
-  SCHEMAS = SHARE + '/test/schemas'
-  Dir[HERE + '/../../../build/interop/data/*'].each do |fn|  
-    define_method("test_read_#{File.basename(fn, '.avro')}") do
-      projection = Avro::Schema.parse(File.read(SCHEMAS+'/interop.avsc'))
-
-      File.open(fn) do |f|
-        r = Avro::DataFile::Reader.new(f, Avro::IO::DatumReader.new(projection))
-        i = 0
-        r.each do |datum|
-          i += 1
-          assert_not_nil datum, "nil datum from #{fn}"
-        end
-        assert_not_equal 0, i, "no data read in from #{fn}"
-      end
-    end
-  end
-end
diff --git a/branch-1.3/lang/ruby/lib/avro.rb b/branch-1.3/lang/ruby/lib/avro.rb
deleted file mode 100644
index 3208867..0000000
--- a/branch-1.3/lang/ruby/lib/avro.rb
+++ /dev/null
@@ -1,41 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'yajl'
-require 'set'
-require 'md5'
-require 'net/http'
-require 'stringio'
-
-module Avro
-  VERSION = "FIXME"
-
-  class AvroError < StandardError; end
-
-  class AvroTypeError < Avro::AvroError
-    def initialize(schm=nil, datum=nil, msg=nil)
-      msg ||= "Not a #{schm.to_s}: #{datum}"
-      super(msg)
-    end
-  end
-end
-
-require 'avro/collect_hash'
-require 'avro/schema'
-require 'avro/io'
-require 'avro/data_file'
-require 'avro/protocol'
-require 'avro/ipc'
diff --git a/branch-1.3/lang/ruby/lib/avro/collect_hash.rb b/branch-1.3/lang/ruby/lib/avro/collect_hash.rb
deleted file mode 100644
index 4975080..0000000
--- a/branch-1.3/lang/ruby/lib/avro/collect_hash.rb
+++ /dev/null
@@ -1,25 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-module Enumerable
-  def collect_hash
-    inject(Hash.new) do |memo, i|
-      k, v = yield(i)
-      memo[k] = v if k
-      memo
-    end
-  end
-end
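
The `collect_hash` helper deleted above builds a hash from an enumerable via a block that yields a `[key, value]` pair, dropping entries whose key is falsy. A rough Python equivalent of the same idiom, for illustration only:

```python
# Python sketch of the Ruby Enumerable#collect_hash monkey patch above:
# build a dict from an iterable via a function returning a (key, value)
# pair, skipping entries whose key is falsy.
def collect_hash(iterable, fn):
    result = {}
    for item in iterable:
        k, v = fn(item)
        if k:                 # falsy keys ('' here, nil in Ruby) are dropped
            result[k] = v
    return result

pairs = collect_hash(['a', '', 'bb'], lambda s: (s, len(s)))
print(pairs)  # {'a': 1, 'bb': 2}
```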
diff --git a/branch-1.3/lang/ruby/lib/avro/data_file.rb b/branch-1.3/lang/ruby/lib/avro/data_file.rb
deleted file mode 100644
index 4a08a2e..0000000
--- a/branch-1.3/lang/ruby/lib/avro/data_file.rb
+++ /dev/null
@@ -1,276 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'openssl'
-
-module Avro
-  module DataFile
-    VERSION = 1
-    MAGIC = "Obj" + [VERSION].pack('c')
-    MAGIC_SIZE = MAGIC.size
-    SYNC_SIZE = 16
-    SYNC_INTERVAL = 1000 * SYNC_SIZE
-    META_SCHEMA = Schema.parse('{"type": "map", "values": "bytes"}')
-    VALID_CODECS = ['null']
-    VALID_ENCODINGS = ['binary'] # not used yet
-
-    class DataFileError < AvroError; end
-
-    def self.open(file_path, mode='r', schema=nil)
-      schema = Avro::Schema.parse(schema) if schema
-      case mode
-      when 'w'
-        unless schema
-          raise DataFileError, "Writing an Avro file requires a schema."
-        end
-        io = open_writer(File.open(file_path, 'wb'), schema)
-      when 'r'
-        io = open_reader(File.open(file_path, 'rb'), schema)
-      else
-        raise DataFileError, "Only modes 'r' and 'w' allowed. You gave #{mode.inspect}."
-      end
-
-      yield io if block_given?
-      io
-    ensure
-      io.close if block_given? && io
-    end
-
-    class << self
-      private
-      def open_writer(file, schema)
-        writer = Avro::IO::DatumWriter.new(schema)
-        Avro::DataFile::Writer.new(file, writer, schema)
-      end
-
-      def open_reader(file, schema)
-        reader = Avro::IO::DatumReader.new(nil, schema)
-        Avro::DataFile::Reader.new(file, reader)
-      end
-    end
-
-    class Writer
-      def self.generate_sync_marker
-        OpenSSL::Random.random_bytes(16)
-      end
-
-      attr_reader :writer, :encoder, :datum_writer, :buffer_writer, :buffer_encoder, :sync_marker, :meta
-      attr_accessor :block_count
-
-      def initialize(writer, datum_writer, writers_schema=nil)
-        # If writers_schema is not present, presume we're appending
-        @writer = writer
-        @encoder = IO::BinaryEncoder.new(@writer)
-        @datum_writer = datum_writer
-        @buffer_writer = StringIO.new('', 'w')
-        @buffer_encoder = IO::BinaryEncoder.new(@buffer_writer)
-        @block_count = 0
-
-        @meta = {}
-
-        if writers_schema
-          @sync_marker = Writer.generate_sync_marker
-          meta['avro.codec'] = 'null'
-          meta['avro.schema'] = writers_schema.to_s
-          datum_writer.writers_schema = writers_schema
-          write_header
-        else
-          # open writer for reading to collect metadata
-          dfr = Reader.new(writer, Avro::IO::DatumReader.new)
-
-          # FIXME(jmhodges): collect arbitrary metadata
-          # collect metadata
-          @sync_marker = dfr.sync_marker
-          meta['avro.codec'] = dfr.meta['avro.codec']
-
-          # get schema used to write existing file
-          schema_from_file = dfr.meta['avro.schema']
-          meta['avro.schema'] = schema_from_file
-          datum_writer.writers_schema = Schema.parse(schema_from_file)
-
-          # seek to the end of the file and prepare for writing
-          writer.seek(0,2)
-        end
-      end
-
-      # Append a datum to the file
-      def <<(datum)
-        datum_writer.write(datum, buffer_encoder)
-        self.block_count += 1
-
-        # if the data to write is larger than the sync interval, write
-        # the block
-        if buffer_writer.tell >= SYNC_INTERVAL
-          write_block
-        end
-      end
-
-      # Return the current position as a value that may be passed to
-      # DataFileReader.seek(long). Forces the end of the current block,
-      # emitting a synchronization marker.
-      def sync
-        write_block
-        writer.tell
-      end
-
-      # Flush the current state of the file, including metadata
-      def flush
-        write_block
-        writer.flush
-      end
-
-      def close
-        flush
-        writer.close
-      end
-
-      private
-
-      def write_header
-        # write magic
-        writer.write(MAGIC)
-
-        # write metadata
-        datum_writer.write_data(META_SCHEMA, meta, encoder)
-
-        # write sync marker
-        writer.write(sync_marker)
-      end
-
-      # TODO(jmhodges): make a schema for blocks and use datum_writer
-      # TODO(jmhodges): do we really need the number of items in the block?
-      # TODO(jmhodges): use codec when writing the block contents
-      def write_block
-        if block_count > 0
-          # write number of items in block and block size in bytes
-          encoder.write_long(block_count)
-          to_write = buffer_writer.string
-          encoder.write_long(to_write.size)
-
-          # write block contents
-          if meta['avro.codec'] == 'null'
-            writer.write(to_write)
-          else
-            msg = "#{meta['avro.codec'].inspect} codec is not supported"
-            raise DataFileError, msg
-          end
-
-          # write sync marker
-          writer.write(sync_marker)
-
-          # reset buffer
-          buffer_writer.truncate(0)
-          self.block_count = 0
-        end
-      end
-    end
-
-    # Read files written by DataFileWriter
-    class Reader
-      include ::Enumerable
-
-      attr_reader :reader, :decoder, :datum_reader, :sync_marker, :meta, :file_length
-      attr_accessor :block_count
-
-      def initialize(reader, datum_reader)
-        @reader = reader
-        @decoder = IO::BinaryDecoder.new(reader)
-        @datum_reader = datum_reader
-
-        # read the header: magic, meta, sync
-        read_header
-
-        # ensure the codec is valid
-        codec_from_file = meta['avro.codec']
-        if codec_from_file && ! VALID_CODECS.include?(codec_from_file)
-          raise DataFileError, "Unknown codec: #{codec_from_file}"
-        end
-
-        # get ready to read
-        @block_count = 0
-        datum_reader.writers_schema = Schema.parse meta['avro.schema']
-      end
-
-      # Iterates through each datum in this file
-      # TODO(jmhodges): handle block of length zero
-      def each
-        loop do
-          if block_count == 0
-            case
-            when eof?; break
-            when skip_sync
-              break if eof?
-              read_block_header
-            else
-              read_block_header
-            end
-          end
-
-          datum = datum_reader.read(decoder)
-          self.block_count -= 1
-          yield(datum)
-        end
-      end
-
-      def eof?; reader.eof?; end
-
-      def close
-        reader.close
-      end
-
-      private
-      def read_header
-        # seek to the beginning of the file to get magic block
-        reader.seek(0, 0)
-
-        # check magic number
-        magic_in_file = reader.read(MAGIC_SIZE)
-        if magic_in_file.size < MAGIC_SIZE
-          msg = 'Not an Avro data file: shorter than the Avro magic block'
-          raise DataFileError, msg
-        elsif magic_in_file != MAGIC
-          msg = "Not an Avro data file: #{magic_in_file.inspect} doesn't match #{MAGIC.inspect}"
-          raise DataFileError, msg
-        end
-
-        # read metadata
-        @meta = datum_reader.read_data(META_SCHEMA,
-                                       META_SCHEMA,
-                                       decoder)
-        # read sync marker
-        @sync_marker = reader.read(SYNC_SIZE)
-      end
-
-      def read_block_header
-        self.block_count = decoder.read_long
-        decoder.read_long # not doing anything with length in bytes
-      end
-
-      # read the length of the sync marker; if it matches the sync
-      # marker, return true. Otherwise, seek back to where we started
-      # and return false
-      def skip_sync
-        proposed_sync_marker = reader.read(SYNC_SIZE)
-        if proposed_sync_marker != sync_marker
-          reader.seek(-SYNC_SIZE, 1)
-          false
-        else
-          true
-        end
-      end
-    end
-  end
-end
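
The container-file header the deleted `Writer` emits is: the four magic bytes, a metadata map (encoded with `META_SCHEMA`), then a 16-byte sync marker. The header constants can be sketched in Python as:

```python
import struct

# Sketch of the constants from the deleted Ruby Avro::DataFile module:
# MAGIC is the ASCII bytes "Obj" followed by the format version byte.
VERSION = 1
MAGIC = b"Obj" + struct.pack('b', VERSION)  # four bytes: 'O' 'b' 'j' 0x01
MAGIC_SIZE = len(MAGIC)
SYNC_SIZE = 16                      # length of the random sync marker
SYNC_INTERVAL = 1000 * SYNC_SIZE    # buffered bytes that trigger a block write

print(MAGIC)       # b'Obj\x01'
print(MAGIC_SIZE)  # 4
```

A reader validates this header exactly as `read_header` above does: read `MAGIC_SIZE` bytes, compare against `MAGIC`, then decode the metadata map and the sync marker.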
diff --git a/branch-1.3/lang/ruby/lib/avro/io.rb b/branch-1.3/lang/ruby/lib/avro/io.rb
deleted file mode 100644
index da76f0d..0000000
--- a/branch-1.3/lang/ruby/lib/avro/io.rb
+++ /dev/null
@@ -1,617 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-module Avro
-  module IO
-    # Raised when datum is not an example of schema
-    class AvroTypeError < AvroError
-      def initialize(expected_schema, datum)
-        super("The datum #{datum.inspect} is not an example of schema #{expected_schema}")
-      end
-    end
-
-    # Raised when writer's and reader's schema do not match
-    class SchemaMatchException < AvroError
-      def initialize(writers_schema, readers_schema)
-        super("Writer's schema #{writers_schema} and Reader's schema " +
-              "#{readers_schema} do not match.")
-      end
-    end
-
-    # FIXME(jmhodges) move validate to this module?
-
-    class BinaryDecoder
-      # Read leaf values
-
-      # reader is an object on which we can call read, seek and tell.
-      attr_reader :reader
-      def initialize(reader)
-        @reader = reader
-      end
-
-      def byte!
-        # unpack so this returns an Integer on both ruby 1.8 and 1.9
-        @reader.read(1).unpack('C').first
-      end
-
-      def read_null
-        # null is written as zero bytes
-        nil
-      end
-
-      def read_boolean
-        byte! == 1
-      end
-
-      def read_int; read_long; end
-
-      def read_long
-        # int and long values are written using variable-length,
-        # zig-zag coding.
-        b = byte!
-        n = b & 0x7F
-        shift = 7
-        while (b & 0x80) != 0
-          b = byte!
-          n |= (b & 0x7F) << shift
-          shift += 7
-        end
-        (n >> 1) ^ -(n & 1)
-      end
-
-      def read_float
-        # A float is written as 4 bytes.
-        # The float is converted into a 32-bit integer using a method
-        # equivalent to Java's floatToIntBits and then encoded in
-        # little-endian format.
-        @reader.read(4).unpack('e')[0]
-      end
-
-      def read_double
-        #  A double is written as 8 bytes.
-        # The double is converted into a 64-bit integer using a method
-        # equivalent to Java's doubleToLongBits and then encoded in
-        # little-endian format.
-        @reader.read(8).unpack('E')[0]
-      end
-
-      def read_bytes
-        # Bytes are encoded as a long followed by that many bytes of
-        # data.
-        read(read_long)
-      end
-
-      def read_string
-        # A string is encoded as a long followed by that many bytes of
-        # UTF-8 encoded character data.
-        # FIXME utf-8 encode this in 1.9
-        read_bytes
-      end
-
-      def read(len)
-        # Read n bytes
-        @reader.read(len)
-      end
-
-      def skip_null
-        nil
-      end
-
-      def skip_boolean
-        skip(1)
-      end
-
-      def skip_int
-        skip_long
-      end
-
-      def skip_long
-        b = byte!
-        while (b & 0x80) != 0
-          b = byte!
-        end
-      end
-
-      def skip_float
-        skip(4)
-      end
-
-      def skip_double
-        skip(8)
-      end
-
-      def skip_bytes
-        skip(read_long)
-      end
-
-      def skip_string
-        skip_bytes
-      end
-
-      def skip(n)
-        reader.seek(reader.tell() + n)
-      end
-    end
-
-    # Write leaf values
-    class BinaryEncoder
-      attr_reader :writer
-
-      def initialize(writer)
-        @writer = writer
-      end
-
-      # null is written as zero bytes
-      def write_null(datum)
-        nil
-      end
-
-      # a boolean is written as a single byte 
-      # whose value is either 0 (false) or 1 (true).
-      def write_boolean(datum)
-        on_disk = datum ? 1.chr : 0.chr
-        writer.write(on_disk)
-      end
-
-      # int and long values are written using variable-length,
-      # zig-zag coding.
-      def write_int(n)
-        write_long(n)
-      end
-
-      # int and long values are written using variable-length,
-      # zig-zag coding.
-      def write_long(n)
-        n = (n << 1) ^ (n >> 63)
-        while (n & ~0x7F) != 0
-          @writer.write(((n & 0x7f) | 0x80).chr)
-          n >>= 7
-        end
-        @writer.write(n.chr)
-      end
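For reference, the variable-length zig-zag coding shared by `read_long` and `write_long` can be exercised in isolation. The sketch below is a standalone rendition (the `zigzag_encode`/`zigzag_decode` helper names are ours, not part of the library) that round-trips values through a `StringIO`:

```ruby
require 'stringio'

# Zig-zag + varint encode, mirroring BinaryEncoder#write_long (64-bit).
def zigzag_encode(n)
  out = ''.b
  n = (n << 1) ^ (n >> 63)          # zig-zag: small magnitudes -> small codes
  while (n & ~0x7F) != 0
    out << ((n & 0x7F) | 0x80).chr  # low 7 bits, continuation bit set
    n >>= 7
  end
  out << n.chr                      # final byte, continuation bit clear
end

# Varint + zig-zag decode, mirroring BinaryDecoder#read_long.
def zigzag_decode(io)
  b = io.read(1).unpack('C').first
  n = b & 0x7F
  shift = 7
  while (b & 0x80) != 0
    b = io.read(1).unpack('C').first
    n |= (b & 0x7F) << shift
    shift += 7
  end
  (n >> 1) ^ -(n & 1)               # undo zig-zag
end

[0, -1, 1, 63, 64, -64, 1_000_000].each do |v|
  raise "round-trip failed for #{v}" unless zigzag_decode(StringIO.new(zigzag_encode(v))) == v
end
```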
-
-      # A float is written as 4 bytes.
-      # The float is converted into a 32-bit integer using a method
-      # equivalent to Java's floatToIntBits and then encoded in
-      # little-endian format.
-      def write_float(datum)
-        @writer.write([datum].pack('e'))
-      end
-
-      # A double is written as 8 bytes.
-      # The double is converted into a 64-bit integer using a method
-      # equivalent to Java's doubleToLongBits and then encoded in
-      # little-endian format.
-      def write_double(datum)
-        @writer.write([datum].pack('E'))
-      end
-
-      # Bytes are encoded as a long followed by that many bytes of data.
-      def write_bytes(datum)
-        write_long(datum.size)
-        @writer.write(datum)
-      end
-
-      # A string is encoded as a long followed by that many bytes of
-      # UTF-8 encoded character data
-      def write_string(datum)
-        # FIXME utf-8 encode this in 1.9
-        write_bytes(datum)
-      end
-
-      # Write an arbitrary datum.
-      def write(datum)
-        writer.write(datum)
-      end
-    end
-
-    class DatumReader
-      def self.check_props(schema_one, schema_two, prop_list)
-        prop_list.all? do |prop|
-          schema_one.send(prop) == schema_two.send(prop)
-        end
-      end
-
-      def self.match_schemas(writers_schema, readers_schema)
-        w_type = writers_schema.type
-        r_type = readers_schema.type
-
-        # This conditional is begging for some OO love.
-        if w_type == 'union' || r_type == 'union'
-          return true
-        end
-
-        if w_type == r_type
-          if Schema::PRIMITIVE_TYPES.include?(w_type) &&
-              Schema::PRIMITIVE_TYPES.include?(r_type)
-            return true
-          end
-
-          case r_type
-          when 'record'
-            return check_props(writers_schema, readers_schema, [:fullname])
-          when 'error'
-            return check_props(writers_schema, readers_schema, [:fullname])
-          when 'request'
-            return true
-          when 'fixed'
-            return check_props(writers_schema, readers_schema, [:fullname, :size])
-          when 'enum'
-            return check_props(writers_schema, readers_schema, [:fullname])
-          when 'map'
-            return check_props(writers_schema.values, readers_schema.values, [:type])
-          when 'array'
-            return check_props(writers_schema.items, readers_schema.items, [:type])
-          end
-        end
-
-        # Handle schema promotion
-        if w_type == 'int' && ['long', 'float', 'double'].include?(r_type)
-          return true
-        elsif w_type == 'long' && ['float', 'double'].include?(r_type)
-          return true
-        elsif w_type == 'float' && r_type == 'double'
-          return true
-        end
-
-        return false
-      end
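The promotion branch at the end of `match_schemas` amounts to a small lookup table. A sketch of just that rule (the `PROMOTIONS` constant and `promotable?` helper are illustrative, not library API):

```ruby
# Writer-to-reader numeric promotions permitted by the resolver above.
PROMOTIONS = {
  'int'   => %w[long float double],
  'long'  => %w[float double],
  'float' => %w[double],
}.freeze

# True when the writer's type can be read as the reader's type.
def promotable?(w_type, r_type)
  w_type == r_type || PROMOTIONS.fetch(w_type, []).include?(r_type)
end
```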
-
-      attr_accessor :writers_schema, :readers_schema
-
-      def initialize(writers_schema=nil, readers_schema=nil)
-        @writers_schema = writers_schema
-        @readers_schema = readers_schema
-      end
-
-      def read(decoder)
-        self.readers_schema = writers_schema unless readers_schema
-        read_data(writers_schema, readers_schema, decoder)
-      end
-
-      def read_data(writers_schema, readers_schema, decoder)
-        # schema matching
-        unless self.class.match_schemas(writers_schema, readers_schema)
-          raise SchemaMatchException.new(writers_schema, readers_schema)
-        end
-
-        # schema resolution: reader's schema is a union, writer's
-        # schema is not
-        if writers_schema.type != 'union' && readers_schema.type == 'union'
-          rs = readers_schema.schemas.find{|s|
-            self.class.match_schemas(writers_schema, s)
-          }
-          return read_data(writers_schema, rs, decoder) if rs
-          raise SchemaMatchException.new(writers_schema, readers_schema)
-        end
-
-        # function dispatch for reading data based on type of writer's
-        # schema
-        case writers_schema.type
-        when 'null';    decoder.read_null
-        when 'boolean'; decoder.read_boolean
-        when 'string';  decoder.read_string
-        when 'int';     decoder.read_int
-        when 'long';    decoder.read_long
-        when 'float';   decoder.read_float
-        when 'double';  decoder.read_double
-        when 'bytes';   decoder.read_bytes
-        when 'fixed';   read_fixed(writers_schema, readers_schema, decoder)
-        when 'enum';    read_enum(writers_schema, readers_schema, decoder)
-        when 'array';   read_array(writers_schema, readers_schema, decoder)
-        when 'map';     read_map(writers_schema, readers_schema, decoder)
-        when 'union';   read_union(writers_schema, readers_schema, decoder)
-        when 'record', 'error', 'request';  read_record(writers_schema, readers_schema, decoder)
-        else
-          raise AvroError, "Cannot read unknown schema type: #{writers_schema.type}"
-        end
-      end
-
-      def read_fixed(writers_schema, readers_schema, decoder)
-        decoder.read(writers_schema.size)
-      end
-
-      def read_enum(writers_schema, readers_schema, decoder)
-        index_of_symbol = decoder.read_int
-        read_symbol = writers_schema.symbols[index_of_symbol]
-
-        # TODO(jmhodges): figure out what unset means for resolution
-        # schema resolution
-        unless readers_schema.symbols.include?(read_symbol)
-          # 'unset' here
-        end
-
-        read_symbol
-      end
-
-      def read_array(writers_schema, readers_schema, decoder)
-        read_items = []
-        block_count = decoder.read_long
-        while block_count != 0
-          if block_count < 0
-            block_count = -block_count
-            # negative counts carry the block's byte size; unused when reading
-            block_size = decoder.read_long
-          end
-          block_count.times do
-            read_items << read_data(writers_schema.items,
-                                    readers_schema.items,
-                                    decoder)
-          end
-          block_count = decoder.read_long
-        end
-
-        read_items
-      end
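Arrays and maps are block-encoded on the wire: each block is a count followed by that many items, and a zero count terminates the sequence. Setting aside the byte-level long encoding, the block shape the loop above consumes can be sketched with plain integers (`to_blocks`/`from_blocks` are illustrative helpers, not library API):

```ruby
# Block-encode items: (count, items...) repeated, then a terminating 0.
def to_blocks(items, block_size)
  blocks = []
  items.each_slice(block_size) { |slice| blocks << [slice.size, *slice] }
  blocks << [0]
  blocks.flatten
end

# Consume blocks until the zero-count terminator, as read_array does.
def from_blocks(stream)
  stream = stream.dup
  out = []
  while (count = stream.shift) != 0
    count.times { out << stream.shift }
  end
  out
end
```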
-
-      def read_map(writers_schema, readers_schema, decoder)
-        read_items = {}
-        block_count = decoder.read_long
-        while block_count != 0
-          if block_count < 0
-            block_count = -block_count
-            # negative counts carry the block's byte size; unused when reading
-            block_size = decoder.read_long
-          end
-          block_count.times do
-            key = decoder.read_string
-            read_items[key] = read_data(writers_schema.values,
-                                        readers_schema.values,
-                                        decoder)
-          end
-          block_count = decoder.read_long
-        end
-
-        read_items
-      end
-
-      def read_union(writers_schema, readers_schema, decoder)
-        index_of_schema = decoder.read_long
-        selected_writers_schema = writers_schema.schemas[index_of_schema]
-
-        read_data(selected_writers_schema, readers_schema, decoder)
-      end
-
-      def read_record(writers_schema, readers_schema, decoder)
-        readers_fields_hash = readers_schema.fields_hash
-        read_record = {}
-        writers_schema.fields.each do |field|
-          if readers_field = readers_fields_hash[field.name]
-            field_val = read_data(field.type, readers_field.type, decoder)
-            read_record[field.name] = field_val
-          else
-            skip_data(field.type, decoder)
-          end
-        end
-
-        # fill in the default values
-        if readers_fields_hash.size > read_record.size
-          writers_fields_hash = writers_schema.fields_hash
-          readers_fields_hash.each do |field_name, field|
-            unless writers_fields_hash.has_key? field_name
-              if !field.default.nil?
-                field_val = read_default_value(field.type, field.default)
-                read_record[field.name] = field_val
-              else
-                # FIXME(jmhodges) another 'unset' here
-              end
-            end
-          end
-        end
-
-        read_record
-      end
-
-      def read_default_value(field_schema, default_value)
-        # Basically a JSON Decoder?
-        case field_schema.type
-        when 'null'
-          return nil
-        when 'boolean'
-          return default_value
-        when 'int', 'long'
-          return Integer(default_value)
-        when 'float', 'double'
-          return Float(default_value)
-        when 'enum', 'fixed', 'string', 'bytes'
-          return default_value
-        when 'array'
-          read_array = []
-          default_value.each do |json_val|
-            item_val = read_default_value(field_schema.items, json_val)
-            read_array << item_val
-          end
-          return read_array
-        when 'map'
-          read_map = {}
-          default_value.each do |key, json_val|
-            map_val = read_default_value(field_schema.values, json_val)
-            read_map[key] = map_val
-          end
-          return read_map
-        when 'union'
-          return read_default_value(field_schema.schemas[0], default_value)
-        when 'record'
-          read_record = {}
-          field_schema.fields.each do |field|
-            json_val = default_value[field.name]
-            json_val = field.default unless json_val
-            field_val = read_default_value(field.type, json_val)
-            read_record[field.name] = field_val
-          end
-          return read_record
-        else
-          fail_msg = "Unknown type: #{field_schema.type}"
-          raise AvroError.new(fail_msg)
-        end
-      end
-
-      def skip_data(writers_schema, decoder)
-        case writers_schema.type
-        when 'null'
-          decoder.skip_null
-        when 'boolean'
-          decoder.skip_boolean
-        when 'string'
-          decoder.skip_string
-        when 'int'
-          decoder.skip_int
-        when 'long'
-          decoder.skip_long
-        when 'float'
-          decoder.skip_float
-        when 'double'
-          decoder.skip_double
-        when 'bytes'
-          decoder.skip_bytes
-        when 'fixed'
-          skip_fixed(writers_schema, decoder)
-        when 'enum'
-          skip_enum(writers_schema, decoder)
-        when 'array'
-          skip_array(writers_schema, decoder)
-        when 'map'
-          skip_map(writers_schema, decoder)
-        when 'union'
-          skip_union(writers_schema, decoder)
-        when 'record', 'error', 'request'
-          skip_record(writers_schema, decoder)
-        else
-          raise AvroError, "Unknown schema type: #{writers_schema.type}"
-        end
-      end
-
-      def skip_fixed(writers_schema, decoder)
-        decoder.skip(writers_schema.size)
-      end
-
-      def skip_enum(writers_schema, decoder)
-        decoder.skip_int
-      end
-
-      def skip_array(writers_schema, decoder)
-        skip_blocks(decoder) { skip_data(writers_schema.items, decoder) }
-      end
-
-      def skip_map(writers_schema, decoder)
-        skip_blocks(decoder) {
-          decoder.skip_string
-          skip_data(writers_schema.values, decoder)
-        }
-      end
-
-      def skip_record(writers_schema, decoder)
-        writers_schema.fields.each{|f| skip_data(f.type, decoder) }
-      end
-
-      private
-      def skip_blocks(decoder, &blk)
-        block_count = decoder.read_long
-        while block_count != 0
-          if block_count < 0
-            decoder.skip(decoder.read_long)
-          else
-            block_count.times &blk
-          end
-          block_count = decoder.read_long
-        end
-      end
-    end # DatumReader
-
-    # DatumWriter for generic ruby objects
-    class DatumWriter
-      attr_accessor :writers_schema
-      def initialize(writers_schema=nil)
-        @writers_schema = writers_schema
-      end
-
-      def write(datum, encoder)
-        write_data(writers_schema, datum, encoder)
-      end
-
-      def write_data(writers_schema, datum, encoder)
-        unless Schema.validate(writers_schema, datum)
-          raise AvroTypeError.new(writers_schema, datum)
-        end
-
-        # function dispatch to write datum
-        case writers_schema.type
-        when 'null';    encoder.write_null(datum)
-        when 'boolean'; encoder.write_boolean(datum)
-        when 'string';  encoder.write_string(datum)
-        when 'int';     encoder.write_int(datum)
-        when 'long';    encoder.write_long(datum)
-        when 'float';   encoder.write_float(datum)
-        when 'double';  encoder.write_double(datum)
-        when 'bytes';   encoder.write_bytes(datum)
-        when 'fixed';   write_fixed(writers_schema, datum, encoder)
-        when 'enum';    write_enum(writers_schema, datum, encoder)
-        when 'array';   write_array(writers_schema, datum, encoder)
-        when 'map';     write_map(writers_schema, datum, encoder)
-        when 'union';   write_union(writers_schema, datum, encoder)
-        when 'record', 'error', 'request';  write_record(writers_schema, datum, encoder)
-        else
-          raise AvroError.new("Unknown type: #{writers_schema.type}")
-        end
-      end
-
-      def write_fixed(writers_schema, datum, encoder)
-        encoder.write(datum)
-      end
-
-      def write_enum(writers_schema, datum, encoder)
-        index_of_datum = writers_schema.symbols.index(datum)
-        encoder.write_int(index_of_datum)
-      end
-
-      def write_array(writers_schema, datum, encoder)
-        if datum.size > 0
-          encoder.write_long(datum.size)
-          datum.each do |item|
-            write_data(writers_schema.items, item, encoder)
-          end
-        end
-        encoder.write_long(0)
-      end
-
-      def write_map(writers_schema, datum, encoder)
-        if datum.size > 0
-          encoder.write_long(datum.size)
-          datum.each do |k,v|
-            encoder.write_string(k)
-            write_data(writers_schema.values, v, encoder)
-          end
-        end
-        encoder.write_long(0)
-      end
-
-      def write_union(writers_schema, datum, encoder)
-        index_of_schema = -1
-        found = writers_schema.schemas.
-          find{|e| index_of_schema += 1; found = Schema.validate(e, datum) }
-        unless found  # Because find_index doesn't exist in 1.8.6
-          raise AvroTypeError.new(writers_schema, datum)
-        end
-        encoder.write_long(index_of_schema)
-        write_data(writers_schema.schemas[index_of_schema], datum, encoder)
-      end
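A union is written as the zero-based index of the first matching branch (as a long) followed by the datum itself. On Ruby 1.8.7 and later, the side-effecting index counter above could simply use `find_index`; a sketch of that branch selection with an injected validator (`union_index` is an illustrative helper, not library API):

```ruby
# Pick the union branch index for datum, as write_union does, but with
# find_index (available in ruby >= 1.8.7) instead of a manual counter.
def union_index(schemas, datum, validator)
  idx = schemas.find_index { |s| validator.call(s, datum) }
  raise "no union branch matches #{datum.inspect}" unless idx
  idx
end
```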
-
-      def write_record(writers_schema, datum, encoder)
-        writers_schema.fields.each do |field|
-          write_data(field.type, datum[field.name], encoder)
-        end
-      end
-    end # DatumWriter
-  end
-end
diff --git a/branch-1.3/lang/ruby/lib/avro/ipc.rb b/branch-1.3/lang/ruby/lib/avro/ipc.rb
deleted file mode 100644
index 971ffb8..0000000
--- a/branch-1.3/lang/ruby/lib/avro/ipc.rb
+++ /dev/null
@@ -1,535 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-module Avro::IPC
-
-  class AvroRemoteError < Avro::AvroError; end
-
-  HANDSHAKE_REQUEST_SCHEMA = Avro::Schema.parse <<-JSON
-  {
-    "type": "record",
-    "name": "HandshakeRequest", "namespace":"org.apache.avro.ipc",
-    "fields": [
-      {"name": "clientHash",
-       "type": {"type": "fixed", "name": "MD5", "size": 16}},
-      {"name": "clientProtocol", "type": ["null", "string"]},
-      {"name": "serverHash", "type": "MD5"},
-      {"name": "meta", "type": ["null", {"type": "map", "values": "bytes"}]}
-    ]
-  }
-  JSON
-
-  HANDSHAKE_RESPONSE_SCHEMA = Avro::Schema.parse <<-JSON
-  {
-    "type": "record",
-    "name": "HandshakeResponse", "namespace": "org.apache.avro.ipc",
-    "fields": [
-      {"name": "match",
-       "type": {"type": "enum", "name": "HandshakeMatch",
-                "symbols": ["BOTH", "CLIENT", "NONE"]}},
-      {"name": "serverProtocol", "type": ["null", "string"]},
-      {"name": "serverHash",
-       "type": ["null", {"type": "fixed", "name": "MD5", "size": 16}]},
-      {"name": "meta",
-       "type": ["null", {"type": "map", "values": "bytes"}]}
-    ]
-  }
-  JSON
-
-  HANDSHAKE_REQUESTOR_WRITER = Avro::IO::DatumWriter.new(HANDSHAKE_REQUEST_SCHEMA)
-  HANDSHAKE_REQUESTOR_READER = Avro::IO::DatumReader.new(HANDSHAKE_RESPONSE_SCHEMA)
-  HANDSHAKE_RESPONDER_WRITER = Avro::IO::DatumWriter.new(HANDSHAKE_RESPONSE_SCHEMA)
-  HANDSHAKE_RESPONDER_READER = Avro::IO::DatumReader.new(HANDSHAKE_REQUEST_SCHEMA)
-
-  META_SCHEMA = Avro::Schema.parse('{"type": "map", "values": "bytes"}')
-  META_WRITER = Avro::IO::DatumWriter.new(META_SCHEMA)
-  META_READER = Avro::IO::DatumReader.new(META_SCHEMA)
-
-  SYSTEM_ERROR_SCHEMA = Avro::Schema.parse('["string"]')
-
-  # protocol cache
-  REMOTE_HASHES = {}
-  REMOTE_PROTOCOLS = {}
-
-  BUFFER_HEADER_LENGTH = 4
-  BUFFER_SIZE = 8192
-
-  # Raised when an error message is sent by an Avro requestor or responder.
-  class AvroRemoteException < Avro::AvroError; end
-
-  class ConnectionClosedException < Avro::AvroError; end
-
-  class Requestor
-    # Base class for the client side of a protocol interaction.
-    attr_reader :local_protocol, :transport
-    attr_accessor :remote_protocol, :remote_hash, :send_protocol
-
-    def initialize(local_protocol, transport)
-      @local_protocol = local_protocol
-      @transport = transport
-      @remote_protocol = nil
-      @remote_hash = nil
-      @send_protocol = nil
-    end
-
-    def remote_protocol=(new_remote_protocol)
-      @remote_protocol = new_remote_protocol
-      REMOTE_PROTOCOLS[transport.remote_name] = remote_protocol
-    end
-
-    def remote_hash=(new_remote_hash)
-      @remote_hash = new_remote_hash
-      REMOTE_HASHES[transport.remote_name] = remote_hash
-    end
-
-    def request(message_name, request_datum)
-      # Writes a request message and reads a response or error message.
-      # build handshake and call request
-      buffer_writer = StringIO.new('', 'w+')
-      buffer_encoder = Avro::IO::BinaryEncoder.new(buffer_writer)
-      write_handshake_request(buffer_encoder)
-      write_call_request(message_name, request_datum, buffer_encoder)
-
-      # send the handshake and call request;  block until call response
-      call_request = buffer_writer.string
-      call_response = transport.transceive(call_request)
-
-      # process the handshake and call response
-      buffer_decoder = Avro::IO::BinaryDecoder.new(StringIO.new(call_response))
-      if read_handshake_response(buffer_decoder)
-        read_call_response(message_name, buffer_decoder)
-      else
-        request(message_name, request_datum)
-      end
-    end
-
-    def write_handshake_request(encoder)
-      local_hash = local_protocol.md5
-      remote_name = transport.remote_name
-      remote_hash = REMOTE_HASHES[remote_name]
-      unless remote_hash
-        remote_hash = local_hash
-        self.remote_protocol = local_protocol
-      end
-      request_datum = {
-        'clientHash' => local_hash,
-        'serverHash' => remote_hash
-      }
-      if send_protocol
-        request_datum['clientProtocol'] = local_protocol.to_s
-      end
-      HANDSHAKE_REQUESTOR_WRITER.write(request_datum, encoder)
-    end
-
-    def write_call_request(message_name, request_datum, encoder)
-      # The format of a call request is:
-      #   * request metadata, a map with values of type bytes
-      #   * the message name, an Avro string, followed by
-      #   * the message parameters. Parameters are serialized according to
-      #     the message's request declaration.
-
-      # TODO request metadata (not yet implemented)
-      request_metadata = {}
-      META_WRITER.write(request_metadata, encoder)
-
-      message = local_protocol.messages[message_name]
-      unless message
-        raise AvroError, "Unknown message: #{message_name}"
-      end
-      encoder.write_string(message.name)
-
-      write_request(message.request, request_datum, encoder)
-    end
-
-    def write_request(request_schema, request_datum, encoder)
-      datum_writer = Avro::IO::DatumWriter.new(request_schema)
-      datum_writer.write(request_datum, encoder)
-    end
-
-    def read_handshake_response(decoder)
-      handshake_response = HANDSHAKE_REQUESTOR_READER.read(decoder)
-      we_have_matching_schema = false
-
-      case handshake_response['match']
-      when 'BOTH'
-        self.send_protocol = false
-        we_have_matching_schema = true
-      when 'CLIENT'
-        raise AvroError.new('Handshake failure. match == CLIENT') if send_protocol
-        self.remote_protocol = Avro::Protocol.parse(handshake_response['serverProtocol'])
-        self.remote_hash = handshake_response['serverHash']
-        self.send_protocol = false
-        we_have_matching_schema = true
-      when 'NONE'
-        raise AvroError.new('Handshake failure. match == NONE') if send_protocol
-        self.remote_protocol = Avro::Protocol.parse(handshake_response['serverProtocol'])
-        self.remote_hash = handshake_response['serverHash']
-        self.send_protocol = true
-      else
-        raise AvroError.new("Unexpected match: #{handshake_response['match']}")
-      end
-
-      return we_have_matching_schema
-    end
-
-    def read_call_response(message_name, decoder)
-      # The format of a call response is:
-      #   * response metadata, a map with values of type bytes
-      #   * a one-byte error flag boolean, followed by either:
-      #     * if the error flag is false,
-      #       the message response, serialized per the message's response schema.
-      #     * if the error flag is true, 
-      #       the error, serialized per the message's error union schema.
-      response_metadata = META_READER.read(decoder)
-
-      # remote response schema
-      remote_message_schema = remote_protocol.messages[message_name]
-      raise AvroError.new("Unknown remote message: #{message_name}") unless remote_message_schema
-
-      # local response schema
-      local_message_schema = local_protocol.messages[message_name]
-      unless local_message_schema
-        raise AvroError.new("Unknown local message: #{message_name}")
-      end
-
-      # error flag
-      if !decoder.read_boolean
-        writers_schema = remote_message_schema.response
-        readers_schema = local_message_schema.response
-        read_response(writers_schema, readers_schema, decoder)
-      else
-        writers_schema = remote_message_schema.errors || SYSTEM_ERROR_SCHEMA
-        readers_schema = local_message_schema.errors || SYSTEM_ERROR_SCHEMA
-        raise read_error(writers_schema, readers_schema, decoder)
-      end
-    end
-
-    def read_response(writers_schema, readers_schema, decoder)
-      datum_reader = Avro::IO::DatumReader.new(writers_schema, readers_schema)
-      datum_reader.read(decoder)
-    end
-
-    def read_error(writers_schema, readers_schema, decoder)
-      datum_reader = Avro::IO::DatumReader.new(writers_schema, readers_schema)
-      AvroRemoteError.new(datum_reader.read(decoder))
-    end
-  end
-
-  # Base class for the server side of a protocol interaction.
-  class Responder
-    attr_reader :local_protocol, :local_hash, :protocol_cache
-    def initialize(local_protocol)
-      @local_protocol = local_protocol
-      @local_hash = self.local_protocol.md5
-      @protocol_cache = {}
-      protocol_cache[local_hash] = local_protocol
-    end
-
-    # Called by a server to deserialize a request, compute and serialize
-    # a response or error. Compare to 'handle()' in Thrift.
-    def respond(call_request)
-      buffer_decoder = Avro::IO::BinaryDecoder.new(StringIO.new(call_request))
-      buffer_writer = StringIO.new('', 'w+')
-      buffer_encoder = Avro::IO::BinaryEncoder.new(buffer_writer)
-      error = nil
-      response_metadata = {}
-
-      begin
-        remote_protocol = process_handshake(buffer_decoder, buffer_encoder)
-        # handshake failure
-        unless remote_protocol
-          return buffer_writer.string
-        end
-
-        # read request using remote protocol
-        request_metadata = META_READER.read(buffer_decoder)
-        remote_message_name = buffer_decoder.read_string
-
-        # get remote and local request schemas so we can do
-        # schema resolution (one fine day)
-        remote_message = remote_protocol.messages[remote_message_name]
-        unless remote_message
-          raise AvroError.new("Unknown remote message: #{remote_message_name}")
-        end
-        local_message = local_protocol.messages[remote_message_name]
-        unless local_message
-          raise AvroError.new("Unknown local message: #{remote_message_name}")
-        end
-        writers_schema = remote_message.request
-        readers_schema = local_message.request
-        request = read_request(writers_schema, readers_schema, buffer_decoder)
-        # perform server logic
-        begin
-          response = call(local_message, request)
-        rescue AvroRemoteError => e
-          error = e
-        rescue Exception => e
-          error = AvroRemoteError.new(e.to_s)
-        end
-
-        # write response using local protocol
-        META_WRITER.write(response_metadata, buffer_encoder)
-        buffer_encoder.write_boolean(!!error)
-        if error.nil?
-          writers_schema = local_message.response
-          write_response(writers_schema, response, buffer_encoder)
-        else
-          writers_schema = local_message.errors || SYSTEM_ERROR_SCHEMA
-          write_error(writers_schema, error, buffer_encoder)
-        end
-      rescue Avro::AvroError => e
-        error = AvroRemoteException.new(e.to_s)
-        buffer_encoder = Avro::IO::BinaryEncoder.new(StringIO.new)
-        META_WRITER.write(response_metadata, buffer_encoder)
-        buffer_encoder.write_boolean(true)
-        self.write_error(SYSTEM_ERROR_SCHEMA, error, buffer_encoder)
-      end
-      buffer_writer.string
-    end
-
-    def process_handshake(decoder, encoder)
-      handshake_request = HANDSHAKE_RESPONDER_READER.read(decoder)
-      handshake_response = {}
-
-      # determine the remote protocol
-      client_hash = handshake_request['clientHash']
-      client_protocol = handshake_request['clientProtocol']
-      remote_protocol = protocol_cache[client_hash]
-
-      if !remote_protocol && client_protocol
-        remote_protocol = Avro::Protocol.parse(client_protocol)
-        protocol_cache[client_hash] = remote_protocol
-      end
-
-      # evaluate remote's guess of the local protocol
-      server_hash = handshake_request['serverHash']
-      if local_hash == server_hash
-        if !remote_protocol
-          handshake_response['match'] = 'NONE'
-        else
-          handshake_response['match'] = 'BOTH'
-        end
-      else
-        if !remote_protocol
-          handshake_response['match'] = 'NONE'
-        else
-          handshake_response['match'] = 'CLIENT'
-        end
-      end
-
-      if handshake_response['match'] != 'BOTH'
-        handshake_response['serverProtocol'] = local_protocol.to_s
-        handshake_response['serverHash'] = local_hash
-      end
-
-      HANDSHAKE_RESPONDER_WRITER.write(handshake_response, encoder)
-      remote_protocol
-    end
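The nested conditional that picks the handshake match value reduces to a three-way decision. A sketch of just that decision (`handshake_match` is an illustrative helper, not library API):

```ruby
# Handshake match outcome, as computed in process_handshake above:
# no known remote protocol -> NONE; hashes agree -> BOTH; else CLIENT.
def handshake_match(hashes_equal, remote_protocol_known)
  if !remote_protocol_known
    'NONE'
  elsif hashes_equal
    'BOTH'
  else
    'CLIENT'
  end
end
```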
-
-    def call(local_message, request)
-      # Actual work done by server: cf. handler in thrift.
-      raise NotImplementedError
-    end
-
-    def read_request(writers_schema, readers_schema, decoder)
-      datum_reader = Avro::IO::DatumReader.new(writers_schema, readers_schema)
-      datum_reader.read(decoder)
-    end
-
-    def write_response(writers_schema, response_datum, encoder)
-      datum_writer = Avro::IO::DatumWriter.new(writers_schema)
-      datum_writer.write(response_datum, encoder)
-    end
-
-    def write_error(writers_schema, error_exception, encoder)
-      datum_writer = Avro::IO::DatumWriter.new(writers_schema)
-      datum_writer.write(error_exception.to_s, encoder)
-    end
-  end
-
-  class SocketTransport
-    # A simple socket-based Transport implementation.
-
-    attr_reader :sock, :remote_name
-
-    def initialize(sock)
-      @sock = sock
-    end
-
-    def transceive(request)
-      write_framed_message(request)
-      read_framed_message
-    end
-
-    def read_framed_message
-      message = []
-      loop do
-        buffer = StringIO.new
-        buffer_length = read_buffer_length
-        if buffer_length == 0
-          return message.join
-        end
-        while buffer.tell < buffer_length
-          chunk = sock.read(buffer_length - buffer.tell)
-          if chunk == '' || chunk == nil
-            raise ConnectionClosedException.new("Socket read 0 bytes.")
-          end
-          buffer.write(chunk)
-        end
-        message << buffer.string
-      end
-    end
-
-    def write_framed_message(message)
-      message_length = message.size
-      total_bytes_sent = 0
-      while message_length - total_bytes_sent > 0
-        if message_length - total_bytes_sent > BUFFER_SIZE
-          buffer_length = BUFFER_SIZE
-        else
-          buffer_length = message_length - total_bytes_sent
-        end
-        write_buffer(message[total_bytes_sent,buffer_length])
-        total_bytes_sent += buffer_length
-      end
-      # A message is always terminated by a zero-length buffer.
-      write_buffer_length(0)
-    end
-
-    def write_buffer(chunk)
-      buffer_length = chunk.size
-      write_buffer_length(buffer_length)
-      total_bytes_sent = 0
-      while total_bytes_sent < buffer_length
-        bytes_sent = self.sock.write(chunk[total_bytes_sent..-1])
-        if bytes_sent == 0
-          raise ConnectionClosedException.new("Socket sent 0 bytes.")
-        end
-        total_bytes_sent += bytes_sent
-      end
-    end
-
-    def write_buffer_length(n)
-      bytes_sent = sock.write([n].pack('N'))
-      if bytes_sent == 0
-        raise ConnectionClosedException.new("socket sent 0 bytes")
-      end
-    end
-
-    def read_buffer_length
-      read = sock.read(BUFFER_HEADER_LENGTH)
-      if read == '' || read == nil
-        raise ConnectionClosedException.new("Socket read 0 bytes.")
-      end
-      read.unpack('N')[0]
-    end
-
-    def close
-      sock.close
-    end
-  end
-
-  class ConnectionClosedError < StandardError; end
-
-  class FramedWriter
-    attr_reader :writer
-    def initialize(writer)
-      @writer = writer
-    end
-
-    def write_framed_message(message)
-      message_size = message.size
-      total_bytes_sent = 0
-      while message_size - total_bytes_sent > 0
-        if message_size - total_bytes_sent > BUFFER_SIZE
-          buffer_size = BUFFER_SIZE
-        else
-          buffer_size = message_size - total_bytes_sent
-        end
-        write_buffer(message[total_bytes_sent, buffer_size])
-        total_bytes_sent += buffer_size
-      end
-      write_buffer_size(0)
-    end
-
-    def to_s; writer.string; end
-
-    private
-    def write_buffer(chunk)
-      buffer_size = chunk.size
-      write_buffer_size(buffer_size)
-      writer << chunk
-    end
-
-    def write_buffer_size(n)
-      writer.write([n].pack('N'))
-    end
-  end
-
-  class FramedReader
-    attr_reader :reader
-
-    def initialize(reader)
-      @reader = reader
-    end
-
-    def read_framed_message
-      message = []
-      loop do
-        buffer = ""
-        buffer_size = read_buffer_size
-
-        return message.join if buffer_size == 0
-
-        while buffer.size < buffer_size
-          chunk = reader.read(buffer_size - buffer.size)
-          chunk_error?(chunk)
-          buffer << chunk
-        end
-        message << buffer
-      end
-    end
-
-    private
-    def read_buffer_size
-      header = reader.read(BUFFER_HEADER_LENGTH)
-      chunk_error?(header)
-      header.unpack('N')[0]
-    end
-
-    def chunk_error?(chunk)
-      raise ConnectionClosedError.new("Reader read 0 bytes") if chunk == '' || chunk.nil?
-    end
-  end
-
-  # Only works for clients. Sigh.
-  class HTTPTransceiver
-    attr_reader :remote_name, :host, :port
-    def initialize(host, port)
-      @host, @port = host, port
-      @remote_name = "#{host}:#{port}"
-    end
-
-    def transceive(message)
-      writer = FramedWriter.new(StringIO.new)
-      writer.write_framed_message(message)
-      resp = Net::HTTP.start(host, port) do |http|
-        http.post('/', writer.to_s, {'Content-Type' => 'avro/binary'})
-      end
-      FramedReader.new(StringIO.new(resp.body)).read_framed_message
-    end
-  end
-end
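The `FramedWriter`/`FramedReader` pair deleted above implements Avro's framed message convention: each buffer is prefixed by a 4-byte big-endian length, and a zero-length buffer terminates the message. Outside the diff, a minimal standalone sketch of that round trip (method names hypothetical, constants assumed from the surrounding module) looks like:

```ruby
require 'stringio'

BUFFER_HEADER_LENGTH = 4    # each frame is prefixed with a 4-byte big-endian length
BUFFER_SIZE = 8192          # maximum payload carried by a single frame

# Write a message as length-prefixed frames, terminated by a zero-length frame.
def write_framed(io, message)
  sent = 0
  while sent < message.size
    chunk = message[sent, BUFFER_SIZE]
    io.write([chunk.size].pack('N'))  # 'N' = 32-bit unsigned, network byte order
    io.write(chunk)
    sent += chunk.size
  end
  io.write([0].pack('N'))  # zero-length frame marks end of message
end

# Read frames until the zero-length terminator and reassemble the message.
def read_framed(io)
  parts = []
  loop do
    size = io.read(BUFFER_HEADER_LENGTH).unpack('N')[0]
    return parts.join if size.zero?
    parts << io.read(size)
  end
end

buffer = StringIO.new
write_framed(buffer, 'hello avro')
buffer.rewind
puts read_framed(buffer)  # => hello avro
```

The same framing is shared by `SocketTransport` and `HTTPTransceiver` above; only the byte channel differs.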
diff --git a/branch-1.3/lang/ruby/lib/avro/protocol.rb b/branch-1.3/lang/ruby/lib/avro/protocol.rb
deleted file mode 100644
index deed8f7..0000000
--- a/branch-1.3/lang/ruby/lib/avro/protocol.rb
+++ /dev/null
@@ -1,165 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-module Avro
-  class Protocol
-    VALID_TYPE_SCHEMA_TYPES = Set.new(%w[enum record error fixed])
-    class ProtocolParseError < Avro::AvroError; end
-
-    attr_reader :name, :namespace, :types, :messages, :md5
-    def self.parse(protocol_string)
-      json_data = Yajl.load(protocol_string)
-
-      if json_data.is_a? Hash
-        name = json_data['protocol']
-        namespace = json_data['namespace']
-        types = json_data['types']
-        messages = json_data['messages']
-        Protocol.new(name, namespace, types, messages)
-      else
-        raise ProtocolParseError, "Not a JSON object: #{json_data}"
-      end
-    end
-
-    def initialize(name, namespace=nil, types=nil, messages=nil)
-      # Ensure valid ctor args
-      if !name
-        raise ProtocolParseError, 'Protocols must have a non-empty name.'
-      elsif !name.is_a?(String)
-        raise ProtocolParseError, 'The name property must be a string.'
-      elsif !namespace.is_a?(String)
-        raise ProtocolParseError, 'The namespace property must be a string.'
-      elsif !types.is_a?(Array)
-        raise ProtocolParseError, 'The types property must be a list.'
-      elsif !messages.is_a?(Hash)
-        raise ProtocolParseError, 'The messages property must be a JSON object.'
-      end
-
-      @name = name
-      @namespace = namespace
-      type_names = {}
-      @types = parse_types(types, type_names)
-      @messages = parse_messages(messages, type_names)
-      @md5 = Digest::MD5.digest(to_s)
-    end
-
-    def to_s
-      Yajl.dump to_avro
-    end
-
-    def ==(other)
-      to_avro == other.to_avro
-    end
-
-    private
-    def parse_types(types, type_names)
-      type_objects = []
-      types.collect do |type|
-        # FIXME adding type.name to type_names is not defined in the
-        # spec. Possible bug in the python impl and the spec.
-        type_object = Schema.real_parse(type, type_names)
-        unless VALID_TYPE_SCHEMA_TYPES.include?(type_object.type)
-          msg = "Type #{type} not an enum, record, fixed or error."
-          raise ProtocolParseError, msg
-        end
-        type_object
-      end
-    end
-
-    def parse_messages(messages, names)
-      message_objects = {}
-      messages.each do |name, body|
-        if message_objects.has_key?(name)
-          raise ProtocolParseError, "Message name \"#{name}\" repeated."
-        elsif !body.is_a?(Hash)
-          raise ProtocolParseError, "Message name \"#{name}\" has non-object body #{body.inspect}"
-        end
-
-        request  = body['request']
-        response = body['response']
-        errors   = body['errors']
-        message_objects[name] = Message.new(name, request, response, errors, names)
-      end
-      message_objects
-    end
-
-    protected
-    def to_avro
-      hsh = {'protocol' => name}
-      hsh['namespace'] = namespace if namespace
-      hsh['types'] = types.map{|t| t.to_avro } if types
-
-      if messages
-        hsh['messages'] = messages.collect_hash{|k,t| [k, t.to_avro] }
-      end
-
-      hsh
-    end
-
-    class Message
-      attr_reader :name, :response_from_names, :request, :response, :errors
-      def initialize(name, request, response, errors=nil, names=nil)
-        @name = name
-        @response_from_names = false
-
-        @request = parse_request(request, names)
-        @response = parse_response(response, names)
-        @errors = parse_errors(errors, names) if errors
-      end
-
-      def to_avro
-        hsh = {'request' => request.to_avro}
-        if response_from_names
-          hsh['response'] = response.fullname
-        else
-          hsh['response'] = response.to_avro
-        end
-
-        if errors
-          hsh['errors'] = errors.to_avro
-        end
-        hsh
-      end
-
-      def to_s
-        Yajl.dump to_avro
-      end
-
-      def parse_request(request, names)
-        unless request.is_a?(Array)
-          raise ProtocolParseError, "Request property not an Array: #{request.inspect}"
-        end
-        Schema::RecordSchema.new(nil, nil, request, names, 'request')
-      end
-
-      def parse_response(response, names)
-        if response.is_a?(String) && names[response]
-          @response_from_names = true
-          names[response]
-        else
-          Schema.real_parse(response, names)
-        end
-      end
-
-      def parse_errors(errors, names)
-        unless errors.is_a?(Array)
-          raise ProtocolParseError, "Errors property not an Array: #{errors}"
-        end
-        Schema.real_parse(errors, names)
-      end
-    end
-  end
-end
diff --git a/branch-1.3/lang/ruby/lib/avro/schema.rb b/branch-1.3/lang/ruby/lib/avro/schema.rb
deleted file mode 100644
index 96769fb..0000000
--- a/branch-1.3/lang/ruby/lib/avro/schema.rb
+++ /dev/null
@@ -1,424 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-module Avro
-  class Schema
-    # FIXME turn these into symbols to prevent some gc pressure
-    PRIMITIVE_TYPES = Set.new(%w[null boolean string bytes int long float double])
-    NAMED_TYPES =     Set.new(%w[fixed enum record error])
-
-    VALID_TYPES = PRIMITIVE_TYPES + NAMED_TYPES + Set.new(%w[array map union request])
-
-    INT_MIN_VALUE = -(1 << 31)
-    INT_MAX_VALUE = (1 << 31) - 1
-    LONG_MIN_VALUE = -(1 << 63)
-    LONG_MAX_VALUE = (1 << 63) - 1
-
-    def self.parse(json_string)
-      real_parse(Yajl.load(json_string), {})
-    end
-
-    # Build Avro Schema from data parsed out of JSON string.
-    def self.real_parse(json_obj, names=nil)
-      if json_obj.is_a? Hash
-        type = json_obj['type']
-        if PRIMITIVE_TYPES.include?(type)
-          return PrimitiveSchema.new(type)
-        elsif NAMED_TYPES.include? type
-          name = json_obj['name']
-          namespace = json_obj['namespace']
-          case type
-          when 'fixed'
-            size = json_obj['size']
-            return FixedSchema.new(name, namespace, size, names)
-          when 'enum'
-            symbols = json_obj['symbols']
-            return EnumSchema.new(name, namespace, symbols, names)
-          when 'record', 'error'
-            fields = json_obj['fields']
-            return RecordSchema.new(name, namespace, fields, names, type)
-          else
-            raise SchemaParseError.new("Unknown named type: #{type}")
-          end
-        elsif VALID_TYPES.include?(type)
-          case type
-          when 'array'
-            return ArraySchema.new(json_obj['items'], names)
-          when 'map'
-            return MapSchema.new(json_obj['values'], names)
-          else
-            raise SchemaParseError.new("Unknown Valid Type: #{type}")
-          end
-        elsif type.nil?
-          raise SchemaParseError.new("No \"type\" property: #{json_obj}")
-        else
-          raise SchemaParseError.new("Undefined type: #{type}")
-        end
-      elsif json_obj.is_a? Array
-        # JSON array (union)
-        return UnionSchema.new(json_obj, names)
-      elsif PRIMITIVE_TYPES.include? json_obj
-        return PrimitiveSchema.new(json_obj)
-      else
-        msg = "#{json_obj.inspect} is not a schema we know about."
-        raise SchemaParseError.new(msg)
-      end
-    end
-
-    # Determine if a ruby datum is an instance of a schema
-    def self.validate(expected_schema, datum)
-      case expected_schema.type
-      when 'null'
-        datum.nil?
-      when 'boolean'
-        datum == true || datum == false
-      when 'string', 'bytes'
-        datum.is_a? String
-      when 'int'
-        (datum.is_a?(Fixnum) || datum.is_a?(Bignum)) &&
-            (INT_MIN_VALUE <= datum) && (datum <= INT_MAX_VALUE)
-      when 'long'
-        (datum.is_a?(Fixnum) || datum.is_a?(Bignum)) &&
-            (LONG_MIN_VALUE <= datum) && (datum <= LONG_MAX_VALUE)
-      when 'float', 'double'
-        datum.is_a?(Float) || datum.is_a?(Fixnum) || datum.is_a?(Bignum)
-      when 'fixed'
-        datum.is_a?(String) && datum.size == expected_schema.size
-      when 'enum'
-        expected_schema.symbols.include? datum
-      when 'array'
-        datum.is_a?(Array) &&
-          datum.all?{|d| validate(expected_schema.items, d) }
-      when 'map'
-        datum.keys.all?{|k| k.is_a? String } &&
-          datum.values.all?{|v| validate(expected_schema.values, v) }
-      when 'union'
-        expected_schema.schemas.any?{|s| validate(s, datum) }
-      when 'record', 'error', 'request'
-        datum.is_a?(Hash) &&
-          expected_schema.fields.all?{|f| validate(f.type, datum[f.name]) }
-      else
-        raise "Unknown schema type: #{expected_schema.inspect} is not allowed."
-      end
-    end
-
-    def initialize(type)
-      @type = type
-    end
-
-    def type; @type; end
-
-    def ==(other, seen=nil)
-      other.is_a?(Schema) && @type == other.type
-    end
-
-    def hash(seen=nil)
-      @type.hash
-    end
-
-    def subparse(json_obj, names=nil)
-      begin
-        Schema.real_parse(json_obj, names)
-      rescue => e
-        raise e if e.is_a? SchemaParseError
-        raise SchemaParseError, "Sub-schema for #{self.class.name} not a valid Avro schema. Bad schema: #{json_obj}"
-      end
-    end
-
-    def to_avro
-      {'type' => @type}
-    end
-
-    def to_s
-      Yajl.dump to_avro
-    end
-
-    class NamedSchema < Schema
-      attr_reader :name, :namespace
-      def initialize(type, name, namespace=nil, names=nil)
-        super(type)
-        @name, @namespace = Name.extract_namespace(name, namespace)
-        names = Name.add_name(names, self)
-      end
-
-      def to_avro
-        props = {'name' => @name}
-        props.merge!('namespace' => @namespace) if @namespace
-        super.merge props
-      end
-
-      def fullname
-        Name.make_fullname(@name, @namespace)
-      end
-    end
-
-    class RecordSchema < NamedSchema
-      attr_reader :fields
-
-      def self.make_field_objects(field_data, names)
-        field_objects, field_names = [], Set.new
-        field_data.each_with_index do |field, i|
-          if field.respond_to?(:[]) # TODO(jmhodges) wtffffff
-            type = field['type']
-            name = field['name']
-            default = field['default']
-            order = field['order']
-            new_field = Field.new(type, name, default, order, names)
-            # make sure field name has not been used yet
-            if field_names.include?(new_field.name)
-              raise SchemaParseError, "Field name #{new_field.name.inspect} is already in use"
-            end
-            field_names << new_field.name
-          else
-            raise SchemaParseError, "Not a valid field: #{field}"
-          end
-          field_objects << new_field
-        end
-        field_objects
-      end
-
-      def initialize(name, namespace, fields, names=nil, schema_type='record')
-        if schema_type == 'request'
-          @type = schema_type
-        else
-          super(schema_type, name, namespace, names)
-        end
-        @fields = RecordSchema.make_field_objects(fields, names)
-      end
-
-      def fields_hash
-        fields.inject({}){|hsh, field| hsh[field.name] = field; hsh }
-      end
-
-      def to_avro
-        hsh = super.merge('fields' => @fields.map {|f| f.to_avro } )
-        if type == 'request'
-          hsh['fields']
-        else
-          hsh
-        end
-      end
-    end
-
-    class ArraySchema < Schema
-      attr_reader :items, :items_schema_from_names
-      def initialize(items, names=nil)
-        @items_schema_from_names = false
-
-        super('array')
-
-        if items.is_a?(String) && names.has_key?(items)
-          @items = names[items]
-          @items_schema_from_names = true
-        else
-          @items = subparse(items, names)
-        end
-      end
-
-      def to_avro
-        name_or_json = if items_schema_from_names
-                         items.fullname
-                       else
-                         items.to_avro
-                       end
-        super.merge('items' => name_or_json)
-      end
-    end
-
-    class MapSchema < Schema
-      attr_reader :values, :values_schema_from_names
-
-      def initialize(values, names=nil)
-        @values_schema_from_names = false
-        super('map')
-        if values.is_a?(String) && names.has_key?(values)
-          values_schema = names[values]
-          @values_schema_from_names = true
-        else
-          values_schema = subparse(values, names)
-        end
-        @values = values_schema
-      end
-
-      def to_avro
-        to_dump = super
-        if values_schema_from_names
-          to_dump['values'] = values
-        else
-          to_dump['values'] = values.to_avro
-        end
-        to_dump
-      end
-    end
-
-    class UnionSchema < Schema
-      attr_reader :schemas, :schema_from_names_indices
-      def initialize(schemas, names=nil)
-        super('union')
-
-        schema_objects = []
-        @schema_from_names_indices = []
-        schemas.each_with_index do |schema, i|
-          from_names = false
-          if schema.is_a?(String) && names.has_key?(schema)
-            new_schema = names[schema]
-            from_names = true
-          else
-            new_schema = subparse(schema, names)
-          end
-
-          ns_type = new_schema.type
-          if VALID_TYPES.include?(ns_type) &&
-              !NAMED_TYPES.include?(ns_type) &&
-              schema_objects.map{|o| o.type }.include?(ns_type)
-            raise SchemaParseError, "#{ns_type} is already in Union"
-          elsif ns_type == 'union'
-            raise SchemaParseError, "Unions cannot contain other unions"
-          else
-            schema_objects << new_schema
-            @schema_from_names_indices << i if from_names
-          end
-          @schemas = schema_objects
-        end
-      end
-
-      def to_avro
-        # FIXME(jmhodges) this from_name pattern is really weird and
-        # seems code-smelly.
-        to_dump = []
-        schemas.each_with_index do |schema, i|
-          if schema_from_names_indices.include?(i)
-            to_dump << schema.fullname
-          else
-            to_dump << schema.to_avro
-          end
-        end
-        to_dump
-      end
-    end
-
-    class EnumSchema < NamedSchema
-      attr_reader :symbols
-      def initialize(name, space, symbols, names=nil)
-        if symbols.uniq.length < symbols.length
-          fail_msg = 'Duplicate symbol: %s' % symbols
-          raise Avro::SchemaParseError, fail_msg
-        end
-        super('enum', name, space, names)
-        @symbols = symbols
-      end
-
-      def to_avro
-        super.merge('symbols' => symbols)
-      end
-    end
-
-    # Valid primitive types are in PRIMITIVE_TYPES.
-    class PrimitiveSchema < Schema
-      def initialize(type)
-        unless PRIMITIVE_TYPES.include? type
-          raise AvroError.new("#{type} is not a valid primitive type.")
-        end
-
-        super(type)
-      end
-
-      def to_avro
-        hsh = super
-        hsh.size == 1 ? type : hsh
-      end
-    end
-
-    class FixedSchema < NamedSchema
-      attr_reader :size
-      def initialize(name, space, size, names=nil)
-      # Ensure valid ctor args
-        unless size.is_a?(Fixnum) || size.is_a?(Bignum)
-          raise AvroError, 'Fixed Schema requires a valid integer for size property.'
-        end
-        super('fixed', name, space, names)
-        @size = size
-      end
-
-      def to_avro
-        super.merge('size' => @size)
-      end
-    end
-
-    class Field < Schema
-      attr_reader :type, :name, :default, :order, :type_from_names
-      def initialize(type, name, default=nil, order=nil, names=nil)
-        @type_from_names = false
-        if type.is_a?(String) && names && names.has_key?(type)
-          type_schema = names[type]
-          @type_from_names = true
-        else
-          type_schema = subparse(type, names)
-        end
-        @type = type_schema
-        @name = name
-        @default = default
-        @order = order
-      end
-
-      def to_avro
-        sigh_type = type_from_names ? type.fullname : type.to_avro
-        hsh = {
-          'name' => name,
-          'type' => sigh_type
-        }
-        hsh['default'] = default if default
-        hsh['order'] = order if order
-        hsh
-      end
-    end
-  end
-
-  class SchemaParseError < AvroError; end
-
-  module Name
-    def self.extract_namespace(name, namespace)
-      parts = name.split('.')
-      if parts.size > 1
-        namespace, name = parts[0..-2].join('.'), parts.last
-      end
-      return name, namespace
-    end
-
-    # Add a new schema object to the names dictionary (in place).
-    def self.add_name(names, new_schema)
-      new_fullname = new_schema.fullname
-      if Avro::Schema::VALID_TYPES.include?(new_fullname)
-        raise SchemaParseError, "#{new_fullname} is a reserved type name."
-      elsif names.nil?
-        names = {}
-      elsif names.has_key?(new_fullname)
-        raise SchemaParseError, "The name \"#{new_fullname}\" is already in use."
-      end
-
-      names[new_fullname] = new_schema
-      names
-    end
-
-    def self.make_fullname(name, namespace)
-      if !name.include?('.') && !namespace.nil?
-        namespace + '.' + name
-      else
-        name
-      end
-    end
-  end
-end
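The `Name` module at the end of the deleted `schema.rb` encodes Avro's naming rule: a dotted name carries its own namespace, otherwise the enclosing namespace applies, and a fullname is only prefixed when the name itself is undotted. A standalone sketch of that logic (free functions here, hypothetical names, versus module methods in the original) behaves like:

```ruby
# Split a possibly-dotted name into [name, namespace]; a dotted name
# overrides whatever enclosing namespace was passed in.
def extract_namespace(name, namespace)
  parts = name.split('.')
  namespace, name = parts[0..-2].join('.'), parts.last if parts.size > 1
  [name, namespace]
end

# Join name and namespace back into a fullname, unless the name is
# already dotted (i.e. already fully qualified).
def make_fullname(name, namespace)
  !name.include?('.') && !namespace.nil? ? "#{namespace}.#{name}" : name
end

name, ns = extract_namespace('example.proto.Message', nil)
puts make_fullname(name, ns)  # => example.proto.Message
```

This round trip is what lets `NamedSchema#fullname` serve as a stable key in the `names` dictionary used throughout schema parsing.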
diff --git a/branch-1.3/lang/ruby/test/random_data.rb b/branch-1.3/lang/ruby/test/random_data.rb
deleted file mode 100644
index 8c94b39..0000000
--- a/branch-1.3/lang/ruby/test/random_data.rb
+++ /dev/null
@@ -1,90 +0,0 @@
-#!/usr/bin/env ruby
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-class RandomData
-  def initialize(schm, seed=nil)
-    srand(seed) if seed
-    @seed = seed
-    @schm = schm
-  end
-
-  def next
-    nextdata(@schm)
-  end
-
-  def nextdata(schm, d=0)
-    case schm.type
-    when 'boolean'
-      rand > 0.5
-    when 'string'
-      randstr()
-    when 'int'
-      rand(Avro::Schema::INT_MAX_VALUE - Avro::Schema::INT_MIN_VALUE) + Avro::Schema::INT_MIN_VALUE
-    when 'long'
-      rand(Avro::Schema::LONG_MAX_VALUE - Avro::Schema::LONG_MIN_VALUE) + Avro::Schema::LONG_MIN_VALUE
-    when 'float'
-      (-1024 + 2048 * rand).round.to_f
-    when 'double'
-      Avro::Schema::LONG_MIN_VALUE + (Avro::Schema::LONG_MAX_VALUE - Avro::Schema::LONG_MIN_VALUE) * rand
-    when 'bytes'
-      randstr(BYTEPOOL)
-    when 'null'
-      nil
-    when 'array'
-      arr = []
-      len = rand(5) + 2 - d
-      len = 0 if len < 0
-      len.times{ arr << nextdata(schm.items, d+1) }
-      arr
-    when 'map'
-      map = {}
-      len = rand(5) + 2 - d
-      len = 0 if len < 0
-      len.times do
-        map[nextdata(Avro::Schema::PrimitiveSchema.new('string'))] = nextdata(schm.values, d+1)
-      end
-      map
-    when 'record'
-      m = {}
-      schm.fields.each do |field|
-        m[field.name] = nextdata(field.type, d+1)
-      end
-      m
-    when 'union'
-      types = schm.schemas
-      nextdata(types[rand(types.size)], d)
-    when 'enum'
-      symbols = schm.symbols
-      len = symbols.size
-      return nil if len == 0
-      symbols[rand(len)]
-    when 'fixed'
-      f = ""
-      schm.size.times { f << BYTEPOOL[rand(BYTEPOOL.size), 1] }
-      f
-    end
-  end
-
-  CHARPOOL = 'abcdefghjkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789'
-  BYTEPOOL = '12345abcd'
-
-  def randstr(chars=CHARPOOL, length=20)
-    str = ''
-    rand(length+1).times { str << chars[rand(chars.size)] }
-    str
-  end
-end
diff --git a/branch-1.3/lang/ruby/test/sample_ipc_client.rb b/branch-1.3/lang/ruby/test/sample_ipc_client.rb
deleted file mode 100644
index 0a25e58..0000000
--- a/branch-1.3/lang/ruby/test/sample_ipc_client.rb
+++ /dev/null
@@ -1,85 +0,0 @@
-#!/usr/bin/env ruby
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'socket'
-require 'avro'
-
-MAIL_PROTOCOL_JSON = <<-JSON
-{"namespace": "example.proto",
- "protocol": "Mail",
-
- "types": [
-     {"name": "Message", "type": "record",
-      "fields": [
-          {"name": "to",   "type": "string"},
-          {"name": "from", "type": "string"},
-          {"name": "body", "type": "string"}
-      ]
-     }
- ],
-
- "messages": {
-     "send": {
-         "request": [{"name": "message", "type": "Message"}],
-         "response": "string"
-     },
-     "replay": {
-         "request": [],
-         "response": "string"
-     }
- }
-}
-JSON
-
-MAIL_PROTOCOL = Avro::Protocol.parse(MAIL_PROTOCOL_JSON)
-
-def make_requestor(server_address, port, protocol)
-  sock = TCPSocket.new(server_address, port)
-  client = Avro::IPC::SocketTransport.new(sock)
-  Avro::IPC::Requestor.new(protocol, client)
-end
-
-if $0 == __FILE__
-  if ![3, 4].include?(ARGV.length)
-    raise "Usage: <to> <from> <body> [<count>]"
-  end
-
-  # client code - attach to the server and send a message
-  # fill in the Message record
-  message = {
-    'to'   => ARGV[0],
-    'from' => ARGV[1],
-    'body' => ARGV[2]
-  }
-
-  num_messages = (ARGV[3] || 1).to_i
-
-  # build the parameters for the request
-  params = {'message' => message}
-
-  # send the requests and print the result
-  num_messages.times do
-    requestor = make_requestor('localhost', 9090, MAIL_PROTOCOL)
-    result = requestor.request('send', params)
-    puts("Result: " + result)
-  end
-
-  # try out a replay message
-  requestor = make_requestor('localhost', 9090, MAIL_PROTOCOL)
-  result = requestor.request('replay', {})
-  puts("Replay Result: " + result)
-end
diff --git a/branch-1.3/lang/ruby/test/sample_ipc_http_client.rb b/branch-1.3/lang/ruby/test/sample_ipc_http_client.rb
deleted file mode 100644
index 404cc9c..0000000
--- a/branch-1.3/lang/ruby/test/sample_ipc_http_client.rb
+++ /dev/null
@@ -1,84 +0,0 @@
-#!/usr/bin/env ruby
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'socket'
-require 'avro'
-
-MAIL_PROTOCOL_JSON = <<-JSON
-{"namespace": "example.proto",
- "protocol": "Mail",
-
- "types": [
-     {"name": "Message", "type": "record",
-      "fields": [
-          {"name": "to",   "type": "string"},
-          {"name": "from", "type": "string"},
-          {"name": "body", "type": "string"}
-      ]
-     }
- ],
-
- "messages": {
-     "send": {
-         "request": [{"name": "message", "type": "Message"}],
-         "response": "string"
-     },
-     "replay": {
-         "request": [],
-         "response": "string"
-     }
- }
-}
-JSON
-
-MAIL_PROTOCOL = Avro::Protocol.parse(MAIL_PROTOCOL_JSON)
-
-def make_requestor(server_address, port, protocol)
-  transport = Avro::IPC::HTTPTransceiver.new(server_address, port)
-  Avro::IPC::Requestor.new(protocol, transport)
-end
-
-if $0 == __FILE__
-  if ![3, 4].include?(ARGV.length)
-    raise "Usage: <to> <from> <body> [<count>]"
-  end
-
-  # client code - attach to the server and send a message
-  # fill in the Message record
-  message = {
-    'to'   => ARGV[0],
-    'from' => ARGV[1],
-    'body' => ARGV[2]
-  }
-
-  num_messages = (ARGV[3] || 1).to_i
-
-  # build the parameters for the request
-  params = {'message' => message}
-  # send the requests and print the result
-
-  num_messages.times do
-    requestor = make_requestor('localhost', 9090, MAIL_PROTOCOL)
-    result = requestor.request('send', params)
-    puts("Result: " + result)
-  end
-
-  # try out a replay message
-  requestor = make_requestor('localhost', 9090, MAIL_PROTOCOL)
-  result = requestor.request('replay', {})
-  puts("Replay Result: " + result)
-end
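The deleted client above embeds the Mail protocol as an inline JSON document and hands it to `Avro::Protocol.parse`. As a hedged sketch of what such a document must minimally contain, the helper below (a hypothetical name, not part of Avro) checks the shape of a protocol declaration using only the stdlib `json` module; the real validation is done by `Avro::Protocol.parse`:

```ruby
require 'json'

# Hypothetical helper: sanity-check a protocol JSON document's shape before
# handing it to Avro::Protocol.parse. Every Avro protocol needs a "protocol"
# name and a "messages" map whose entries carry a request and a response.
def well_formed_protocol?(json_text)
  doc = JSON.parse(json_text)
  return false unless doc['protocol'].is_a?(String)
  return false unless doc['messages'].is_a?(Hash)
  doc['messages'].all? do |_name, msg|
    msg.key?('request') && msg.key?('response')
  end
rescue JSON::ParserError
  false
end

mail_json = <<-JSON
{"namespace": "example.proto",
 "protocol": "Mail",
 "types": [],
 "messages": {"send": {"request": [], "response": "string"}}}
JSON

puts well_formed_protocol?(mail_json)      # => true
puts well_formed_protocol?('{"nope": 1}')  # => false
```

This mirrors the structure of the MAIL_PROTOCOL_JSON heredoc in the deleted file, minus the record type definitions.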
diff --git a/branch-1.3/lang/ruby/test/sample_ipc_http_server.rb b/branch-1.3/lang/ruby/test/sample_ipc_http_server.rb
deleted file mode 100644
index f5266bb..0000000
--- a/branch-1.3/lang/ruby/test/sample_ipc_http_server.rb
+++ /dev/null
@@ -1,79 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-require 'avro'
-require 'webrick'
-
-MAIL_PROTOCOL_JSON = <<-JSON
-{"namespace": "example.proto",
- "protocol": "Mail",
-
- "types": [
-     {"name": "Message", "type": "record",
-      "fields": [
-          {"name": "to",   "type": "string"},
-          {"name": "from", "type": "string"},
-          {"name": "body", "type": "string"}
-      ]
-     }
- ],
-
- "messages": {
-     "send": {
-         "request": [{"name": "message", "type": "Message"}],
-         "response": "string"
-     },
-     "replay": {
-         "request": [],
-         "response": "string"
-     }
- }
-}
-JSON
-
-MAIL_PROTOCOL = Avro::Protocol.parse(MAIL_PROTOCOL_JSON)
-
-class MailResponder < Avro::IPC::Responder
-  def initialize
-    super(MAIL_PROTOCOL)
-  end
-
-  def call(message, request)
-    if message.name == 'send'
-      request_content = request['message']
-      "Sent message to #{request_content['to']} from #{request_content['from']} with body #{request_content['body']}"
-    elsif message.name == 'replay'
-      'replay'
-    end
-  end
-end
-
-class MailHandler < WEBrick::HTTPServlet::AbstractServlet
-  def do_POST(req, resp)
-    responder = MailResponder.new
-    call_request = Avro::IPC::FramedReader.new(StringIO.new(req.body)).read_framed_message
-    unframed_resp = responder.respond(call_request)
-    writer = Avro::IPC::FramedWriter.new(StringIO.new)
-    writer.write_framed_message(unframed_resp)
-    resp.body = writer.to_s
-  end
-end
-
-if $0 == __FILE__
-  server = WEBrick::HTTPServer.new(:Host => 'localhost', :Port => 9090)
-  server.mount '/', MailHandler
-  trap("INT") { server.shutdown }
-  server.start
-end
diff --git a/branch-1.3/lang/ruby/test/sample_ipc_server.rb b/branch-1.3/lang/ruby/test/sample_ipc_server.rb
deleted file mode 100644
index b18312d..0000000
--- a/branch-1.3/lang/ruby/test/sample_ipc_server.rb
+++ /dev/null
@@ -1,92 +0,0 @@
-#!/usr/bin/env ruby
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'socket'
-require 'avro'
-
-MAIL_PROTOCOL_JSON = <<-EOS
-{"namespace": "example.proto",
- "protocol": "Mail",
-
- "types": [
-     {"name": "Message", "type": "record",
-      "fields": [
-          {"name": "to",   "type": "string"},
-          {"name": "from", "type": "string"},
-          {"name": "body", "type": "string"}
-      ]
-     }
- ],
-
- "messages": {
-     "send": {
-         "request": [{"name": "message", "type": "Message"}],
-         "response": "string"
-     },
-     "replay": {
-         "request": [],
-         "response": "string"
-     }
- }
-}
-EOS
-
-MAIL_PROTOCOL = Avro::Protocol.parse(MAIL_PROTOCOL_JSON)
-
-class MailResponder < Avro::IPC::Responder
-  def initialize
-    super(MAIL_PROTOCOL)
-  end
-
-  def call(message, request)
-    if message.name == 'send'
-      request_content = request['message']
-      "Sent message to #{request_content['to']} from #{request_content['from']} with body #{request_content['body']}"
-    elsif message.name == 'replay'
-      'replay'
-    end
-  end
-end
-
-class RequestHandler
-  def initialize(address, port)
-    @ip_address = address
-    @port = port
-  end
-
-  def run
-    server = TCPServer.new(@ip_address, @port)
-    while (session = server.accept)
-      handle(session)
-      session.close
-    end
-  end
-end
-
-class MailHandler < RequestHandler
-  def handle(request)
-    responder = MailResponder.new()
-    transport = Avro::IPC::SocketTransport.new(request)
-    str = transport.read_framed_message
-    transport.write_framed_message(responder.respond(str))
-  end
-end
-
-if $0 == __FILE__
-  handler = MailHandler.new('localhost', 9090)
-  handler.run
-end
diff --git a/branch-1.3/lang/ruby/test/test_datafile.rb b/branch-1.3/lang/ruby/test/test_datafile.rb
deleted file mode 100644
index 00d2aa7..0000000
--- a/branch-1.3/lang/ruby/test/test_datafile.rb
+++ /dev/null
@@ -1,121 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'test_help'
-
-class TestDataFile < Test::Unit::TestCase
-  HERE = File.expand_path File.dirname(__FILE__)
-  def setup
-    if File.exists?(HERE + '/data.avr')
-      File.unlink(HERE + '/data.avr')
-    end
-  end
-
-  def teardown
-    if File.exists?(HERE + '/data.avr')
-      File.unlink(HERE + '/data.avr')
-    end
-  end
-
-  def test_differing_schemas_with_primitives
-    writer_schema = <<-JSON
-{ "type": "record",
-  "name": "User",
-  "fields" : [
-    {"name": "username", "type": "string"},
-    {"name": "age", "type": "int"},
-    {"name": "verified", "type": "boolean", "default": "false"}
-  ]}
-JSON
-
-    data = [{"username" => "john", "age" => 25, "verified" => true},
-            {"username" => "ryan", "age" => 23, "verified" => false}]
-
-    Avro::DataFile.open('data.avr', 'w', writer_schema) do |dw|
-      data.each{|h| dw << h }
-    end
-
-    # extract the username only from the avro serialized file
-    reader_schema = <<-JSON
-{ "type": "record",
-  "name": "User",
-  "fields" : [
-    {"name": "username", "type": "string"}
- ]}
-JSON
-
-    Avro::DataFile.open('data.avr', 'r', reader_schema) do |dr|
-      dr.each_with_index do |record, i|
-        assert_equal data[i]['username'], record['username']
-      end
-    end
-  end
-
-  def test_differing_schemas_with_complex_objects
-    writer_schema = <<-JSON
-{ "type": "record",
-  "name": "something",
-  "fields": [
-    {"name": "something_fixed", "type": {"name": "inner_fixed",
-                                         "type": "fixed", "size": 3}},
-    {"name": "something_enum", "type": {"name": "inner_enum",
-                                        "type": "enum",
-                                        "symbols": ["hello", "goodbye"]}},
-    {"name": "something_array", "type": {"type": "array", "items": "int"}},
-    {"name": "something_map", "type": {"type": "map", "values": "int"}},
-    {"name": "something_record", "type": {"name": "inner_record",
-                                          "type": "record",
-                                          "fields": [
-                                            {"name": "inner", "type": "int"}
-                                          ]}},
-    {"name": "username", "type": "string"}
-]}
-JSON
-
-    data = [{"username" => "john",
-              "something_fixed" => "foo",
-              "something_enum" => "hello",
-              "something_array" => [1,2,3],
-              "something_map" => {"a" => 1, "b" => 2},
-              "something_record" => {"inner" => 2},
-              "something_error" => {"code" => 403}
-            },
-            {"username" => "ryan",
-              "something_fixed" => "bar",
-              "something_enum" => "goodbye",
-              "something_array" => [1,2,3],
-              "something_map" => {"a" => 2, "b" => 6},
-              "something_record" => {"inner" => 1},
-              "something_error" => {"code" => 401}
-            }]
-
-    Avro::DataFile.open('data.avr', 'w', writer_schema) do |dw|
-      data.each{|d| dw << d }
-    end
-
-    %w[fixed enum record error array map union].each do |s|
-      reader = Yajl.load(writer_schema)
-      reader['fields'] = reader['fields'].reject{|f| f['type']['type'] == s}
-      Avro::DataFile.open('data.avr', 'r', Yajl.dump(reader)) do |dr|
-        dr.each_with_index do |obj, i|
-          reader['fields'].each do |field|
-            assert_equal data[i][field['name']], obj[field['name']]
-          end
-        end
-      end
-    end
-  end
-end
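The deleted data-file test builds reader schemas by parsing the writer schema with Yajl and rejecting fields, exercising Avro's schema-projection behavior (a reader schema with fewer fields skips the extra data). A minimal sketch of that field-rejection step, using the stdlib `json` module in place of Yajl (`project_out` is an illustrative name, not an Avro API):

```ruby
require 'json'

writer_schema = <<-JSON
{ "type": "record",
  "name": "User",
  "fields" : [
    {"name": "username", "type": "string"},
    {"name": "age", "type": "int"}
  ]}
JSON

# Build a reader schema that drops every field of a given primitive type,
# mirroring the Yajl-based manipulation in the deleted test above.
def project_out(schema_json, type_to_drop)
  schema = JSON.parse(schema_json)
  schema['fields'] = schema['fields'].reject { |f| f['type'] == type_to_drop }
  JSON.generate(schema)
end

reader_json = project_out(writer_schema, 'int')
field_names = JSON.parse(reader_json)['fields'].map { |f| f['name'] }
puts field_names.inspect  # => ["username"]
```

The resulting JSON string is what the test passes as the third argument to `Avro::DataFile.open('data.avr', 'r', ...)`, so only the surviving fields are decoded.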
diff --git a/branch-1.3/lang/ruby/test/test_help.rb b/branch-1.3/lang/ruby/test/test_help.rb
deleted file mode 100644
index b921b1e..0000000
--- a/branch-1.3/lang/ruby/test/test_help.rb
+++ /dev/null
@@ -1,23 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'rubygems'
-require 'test/unit'
-require 'stringio'
-require 'fileutils'
-FileUtils.mkdir_p('tmp')
-require 'avro'
-require 'random_data'
diff --git a/branch-1.3/lang/ruby/test/test_io.rb b/branch-1.3/lang/ruby/test/test_io.rb
deleted file mode 100644
index 1d1540d..0000000
--- a/branch-1.3/lang/ruby/test/test_io.rb
+++ /dev/null
@@ -1,361 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'test_help'
-
-class TestIO < Test::Unit::TestCase
-  DATAFILE = 'tmp/test.rb.avro'
-  Schema = Avro::Schema
-
-  def test_null
-    check_default('"null"', "null", nil)
-  end
-
-  def test_boolean
-    check_default('"boolean"', "true", true)
-    check_default('"boolean"', "false", false)
-  end
-
-  def test_string
-    check_default('"string"', '"foo"', "foo")
-  end
-
-  def test_bytes
-    check_default('"bytes"', '"foo"', "foo")
-  end
-
-  def test_int
-    check_default('"int"', "5", 5)
-  end
-
-  def test_long
-    check_default('"long"', "9", 9)
-  end
-
-  def test_float
-    check_default('"float"', "1.2", 1.2)
-  end
-
-  def test_double
-    check_default('"double"', "1.2", 1.2)
-  end
-
-  def test_array
-    array_schema = '{"type": "array", "items": "long"}'
-    check_default(array_schema, "[1]", [1])
-  end
-
-  def test_map
-    map_schema = '{"type": "map", "values": "long"}'
-    check_default(map_schema, '{"a": 1}', {"a" => 1})
-  end
-
-  def test_record
-    record_schema = <<EOS
-      {"type": "record",
-       "name": "Test",
-       "fields": [{"name": "f",
-                   "type": "long"}]}
-EOS
-    check_default(record_schema, '{"f": 11}', {"f" => 11})
-  end
-
-  def test_enum
-    enum_schema = '{"type": "enum", "name": "Test","symbols": ["A", "B"]}'
-    check_default(enum_schema, '"B"', "B")
-  end
-
-  def test_recursive
-    recursive_schema = <<EOS
-      {"type": "record",
-       "name": "Node",
-       "fields": [{"name": "label", "type": "string"},
-                  {"name": "children",
-                   "type": {"type": "array", "items": "Node"}}]}
-EOS
-    check(recursive_schema)
-  end
-
-  def test_union
-    union_schema = <<EOS
-      ["string",
-       "null",
-       "long",
-       {"type": "record",
-        "name": "Cons",
-        "fields": [{"name": "car", "type": "string"},
-                   {"name": "cdr", "type": "string"}]}]
-EOS
-    check(union_schema)
-    check_default('["double", "long"]', "1.1", 1.1)
-  end
-
-  def test_lisp
-    lisp_schema = <<EOS
-      {"type": "record",
-       "name": "Lisp",
-       "fields": [{"name": "value",
-                   "type": ["null", "string",
-                            {"type": "record",
-                             "name": "Cons",
-                             "fields": [{"name": "car", "type": "Lisp"},
-                                        {"name": "cdr", "type": "Lisp"}]}]}]}
-EOS
-    check(lisp_schema)
-  end
-
-  def test_fixed
-    fixed_schema = '{"type": "fixed", "name": "Test", "size": 1}'
-    check_default(fixed_schema, '"a"', "a")
-  end
-
-  def test_enum_with_duplicate
-    str = '{"type": "enum", "name": "Test","symbols" : ["AA", "AA"]}'
-    assert_raises(Avro::SchemaParseError) do
-      schema = Avro::Schema.parse str
-    end
-  end
-
-  BINARY_INT_ENCODINGS = [
-    [0, '00'],
-    [-1, '01'],
-    [1, '02'],
-    [-2, '03'],
-    [2, '04'],
-    [-64, '7f'],
-    [64, '80 01'],
-    [8192, '80 80 01'],
-    [-8193, '81 80 01'],
-  ]
-
-  def avro_hexlify(reader)
-    bytes = []
-    current_byte = reader.read(1)
-    bytes << hexlify(current_byte)
-    while (current_byte[0] & 0x80) != 0
-      current_byte = reader.read(1)
-      bytes << hexlify(current_byte)
-    end
-    bytes.join ' '
-  end
-
-  def hexlify(msg)
-    msg.split("").collect { |c| c[0].to_s(16).rjust(2, '0') }.join
-  end
-
-  def test_binary_int_encoding
-    for value, hex_encoding in BINARY_INT_ENCODINGS
-      # write datum in binary to string buffer
-      buffer = StringIO.new
-      encoder = Avro::IO::BinaryEncoder.new(buffer)
-      datum_writer = Avro::IO::DatumWriter.new(Avro::Schema.parse('"int"'))
-      datum_writer.write(value, encoder)
-
-      buffer.seek(0)
-      hex_val = avro_hexlify(buffer)
-
-      assert_equal hex_encoding, hex_val
-    end
-  end
-
-  def test_binary_long_encoding
-    for value, hex_encoding in BINARY_INT_ENCODINGS
-      buffer = StringIO.new
-      encoder = Avro::IO::BinaryEncoder.new(buffer)
-      datum_writer = Avro::IO::DatumWriter.new(Avro::Schema.parse('"long"'))
-      datum_writer.write(value, encoder)
-
-      # read it out of the buffer and hexlify it
-      buffer.seek(0)
-      hex_val = avro_hexlify(buffer)
-
-      assert_equal hex_encoding, hex_val
-    end
-  end
-
-  def test_skip_long
-    for value_to_skip, hex_encoding in BINARY_INT_ENCODINGS
-      value_to_read = 6253
-
-      # write some data in binary to string buffer
-      writer = StringIO.new
-      encoder = Avro::IO::BinaryEncoder.new(writer)
-      datum_writer = Avro::IO::DatumWriter.new(Avro::Schema.parse('"long"'))
-      datum_writer.write(value_to_skip, encoder)
-      datum_writer.write(value_to_read, encoder)
-
-      # skip the value
-      reader = StringIO.new(writer.string())
-      decoder = Avro::IO::BinaryDecoder.new(reader)
-      decoder.skip_long()
-
-      # read data from string buffer
-      datum_reader = Avro::IO::DatumReader.new(Avro::Schema.parse('"long"'))
-      read_value = datum_reader.read(decoder)
-
-      # check it
-      assert_equal value_to_read, read_value
-    end
-  end
-
-  def test_skip_int
-    for value_to_skip, hex_encoding in BINARY_INT_ENCODINGS
-      value_to_read = 6253
-
-      writer = StringIO.new
-      encoder = Avro::IO::BinaryEncoder.new(writer)
-      datum_writer = Avro::IO::DatumWriter.new(Avro::Schema.parse('"int"'))
-      datum_writer.write(value_to_skip, encoder)
-      datum_writer.write(value_to_read, encoder)
-
-      reader = StringIO.new(writer.string)
-      decoder = Avro::IO::BinaryDecoder.new(reader)
-      decoder.skip_int
-
-      datum_reader = Avro::IO::DatumReader.new(Avro::Schema.parse('"int"'))
-      read_value = datum_reader.read(decoder)
-
-      assert_equal value_to_read, read_value
-    end
-  end
-
-  def test_schema_promotion
-    promotable_schemas = ['"int"', '"long"', '"float"', '"double"']
-    incorrect = 0
-    promotable_schemas.each_with_index do |ws, i|
-      writers_schema = Avro::Schema.parse(ws)
-      datum_to_write = 219
-      for rs in promotable_schemas[(i + 1)..-1]
-        readers_schema = Avro::Schema.parse(rs)
-        writer, enc, dw = write_datum(datum_to_write, writers_schema)
-        datum_read = read_datum(writer, writers_schema, readers_schema)
-        if datum_read != datum_to_write
-          incorrect += 1
-        end
-      end
-      assert_equal(incorrect, 0)
-    end
-  end
-  private
-
-  def check_default(schema_json, default_json, default_value)
-    check(schema_json)
-    actual_schema = '{"type": "record", "name": "Foo", "fields": []}'
-    actual = Avro::Schema.parse(actual_schema)
-
-    expected_schema = <<EOS
-      {"type": "record",
-       "name": "Foo",
-       "fields": [{"name": "f", "type": #{schema_json}, "default": #{default_json}}]}
-EOS
-    expected = Avro::Schema.parse(expected_schema)
-
-    reader = Avro::IO::DatumReader.new(actual, expected)
-    record = reader.read(Avro::IO::BinaryDecoder.new(StringIO.new))
-    assert_equal default_value, record["f"]
-  end
-
-  def check(str)
-    # parse schema, then convert back to string
-    schema = Avro::Schema.parse str
-
-    parsed_string = schema.to_s
-
-    # test that the round-trip didn't mess up anything
-    # NB: I don't think we should do this. Why enforce ordering?
-    assert_equal(Yajl.load(str),
-                  Yajl.load(parsed_string))
-
-    # test __eq__
-    assert_equal(schema, Avro::Schema.parse(str))
-
-    # test hashcode doesn't generate infinite recursion
-    schema.hash
-
-    # test serialization of random data
-    randomdata = RandomData.new(schema)
-    9.times { checkser(schema, randomdata) }
-
-    # test writing of data to file
-    check_datafile(schema)
-  end
-
-  def checkser(schm, randomdata)
-    datum = randomdata.next
-    assert validate(schm, datum)
-    w = Avro::IO::DatumWriter.new(schm)
-    writer = StringIO.new "", "w"
-    w.write(datum, Avro::IO::BinaryEncoder.new(writer))
-    r = datum_reader(schm)
-    reader = StringIO.new(writer.string)
-    ob = r.read(Avro::IO::BinaryDecoder.new(reader))
-    assert_equal(datum, ob) # FIXME check on assertdata conditional
-  end
-
-  def check_datafile(schm)
-    seed = 0
-    count = 10
-    random_data = RandomData.new(schm, seed)
-
-
-    f = File.open(DATAFILE, 'wb')
-    dw = Avro::DataFile::Writer.new(f, datum_writer(schm), schm)
-    count.times{ dw << random_data.next }
-    dw.close
-
-    random_data = RandomData.new(schm, seed)
-
-
-    f = File.open(DATAFILE, 'r+')
-    dr = Avro::DataFile::Reader.new(f, datum_reader(schm))
-
-    last_index = nil
-    dr.each_with_index do |data, c|
-      last_index = c
-      # FIXME assertdata conditional
-      assert_equal(random_data.next, data)
-    end
-    dr.close
-    assert_equal count, last_index+1
-  end
-
-  def validate(schm, datum)
-    Avro::Schema.validate(schm, datum)
-  end
-
-  def datum_writer(schm)
-    Avro::IO::DatumWriter.new(schm)
-  end
-
-  def datum_reader(schm)
-    Avro::IO::DatumReader.new(schm)
-  end
-
-  def write_datum(datum, writers_schema)
-    writer = StringIO.new
-    encoder = Avro::IO::BinaryEncoder.new(writer)
-    datum_writer = Avro::IO::DatumWriter.new(writers_schema)
-    datum_writer.write(datum, encoder)
-    [writer, encoder, datum_writer]
-  end
-
-  def read_datum(buffer, writers_schema, readers_schema=nil)
-    reader = StringIO.new(buffer.string)
-    decoder = Avro::IO::BinaryDecoder.new(reader)
-    datum_reader = Avro::IO::DatumReader.new(writers_schema, readers_schema)
-    datum_reader.read(decoder)
-  end
-end
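The BINARY_INT_ENCODINGS table in the deleted test pins down Avro's int/long wire format: zig-zag encoding followed by a base-128 varint with a continuation bit. As a sketch assuming that format (the real encoder is `Avro::IO::BinaryEncoder`; `zigzag_varint_hex` is an illustrative name), the pure-stdlib function below reproduces the table's hex strings:

```ruby
# Encode an integer the way Avro writes ints and longs: zig-zag it so small
# magnitudes (positive or negative) become small codes, then emit base-128
# varint bytes, low group first, with the high bit as a continuation flag.
def zigzag_varint_hex(n)
  z = (n << 1) ^ (n >> 63)   # zig-zag: 0->0, -1->1, 1->2, -2->3, ...
  bytes = []
  loop do
    byte = z & 0x7f
    z >>= 7
    if z.zero?
      bytes << byte
      break
    end
    bytes << (byte | 0x80)   # more groups follow: set continuation bit
  end
  bytes.map { |b| format('%02x', b) }.join(' ')
end

[[0, '00'], [-1, '01'], [-64, '7f'], [64, '80 01'], [8192, '80 80 01'],
 [-8193, '81 80 01']].each do |value, expected|
  actual = zigzag_varint_hex(value)
  puts "#{value} -> #{actual} (#{actual == expected ? 'ok' : 'MISMATCH'})"
end
```

Each pair matches the corresponding row of BINARY_INT_ENCODINGS, which is why the test can drive both the `"int"` and `"long"` writers from the same table.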
diff --git a/branch-1.3/lang/ruby/test/test_protocol.rb b/branch-1.3/lang/ruby/test/test_protocol.rb
deleted file mode 100644
index 315e9d9..0000000
--- a/branch-1.3/lang/ruby/test/test_protocol.rb
+++ /dev/null
@@ -1,192 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-# http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'test_help'
-
-class TestProtocol < Test::Unit::TestCase
-
-  class ExampleProtocol
-    attr_reader :protocol_string, :valid, :name
-    attr_accessor :comment
-    def initialize(protocol_string, name=nil, comment='')
-      @protocol_string = protocol_string
-      @name = name || protocol_string # default to protocol_string for name
-      @comment = comment
-    end
-  end
-#
-# Example Protocols
-#
-
-EXAMPLES = [
-  ExampleProtocol.new(<<-EOS, true),
-{
-  "namespace": "com.acme",
-  "protocol": "HelloWorld",
-
-  "types": [
-    {"name": "Greeting", "type": "record", "fields": [
-      {"name": "message", "type": "string"}]},
-    {"name": "Curse", "type": "error", "fields": [
-      {"name": "message", "type": "string"}]}
-  ],
-
-  "messages": {
-    "hello": {
-      "request": [{"name": "greeting", "type": "Greeting" }],
-      "response": "Greeting",
-      "errors": ["Curse"]
-    }
-  }
-}
-EOS
-
-  ExampleProtocol.new(<<-EOS, true),
-{"namespace": "org.apache.avro.test",
- "protocol": "Simple",
-
- "types": [
-     {"name": "Kind", "type": "enum", "symbols": ["FOO","BAR","BAZ"]},
-
-     {"name": "MD5", "type": "fixed", "size": 16},
-
-     {"name": "TestRecord", "type": "record",
-      "fields": [
-          {"name": "name", "type": "string", "order": "ignore"},
-          {"name": "kind", "type": "Kind", "order": "descending"},
-          {"name": "hash", "type": "MD5"}
-      ]
-     },
-
-     {"name": "TestError", "type": "error", "fields": [
-         {"name": "message", "type": "string"}
-      ]
-     }
-
- ],
-
- "messages": {
-
-     "hello": {
-         "request": [{"name": "greeting", "type": "string"}],
-         "response": "string"
-     },
-
-     "echo": {
-         "request": [{"name": "record", "type": "TestRecord"}],
-         "response": "TestRecord"
-     },
-
-     "add": {
-         "request": [{"name": "arg1", "type": "int"}, {"name": "arg2", "type": "int"}],
-         "response": "int"
-     },
-
-     "echoBytes": {
-         "request": [{"name": "data", "type": "bytes"}],
-         "response": "bytes"
-     },
-
-     "error": {
-         "request": [],
-         "response": "null",
-         "errors": ["TestError"]
-     }
- }
-
-}
-EOS
-  ExampleProtocol.new(<<-EOS, true),
-{"namespace": "org.apache.avro.test.namespace",
- "protocol": "TestNamespace",
-
- "types": [
-     {"name": "org.apache.avro.test.util.MD5", "type": "fixed", "size": 16},
-     {"name": "TestRecord", "type": "record",
-      "fields": [ {"name": "hash", "type": "org.apache.avro.test.util.MD5"} ]
-     },
-     {"name": "TestError", "namespace": "org.apache.avro.test.errors",
-      "type": "error", "fields": [ {"name": "message", "type": "string"} ]
-     }
- ],
-
- "messages": {
-     "echo": {
-         "request": [{"name": "record", "type": "TestRecord"}],
-         "response": "TestRecord"
-     },
-
-     "error": {
-         "request": [],
-         "response": "null",
-         "errors": ["org.apache.avro.test.errors.TestError"]
-     }
-
- }
-
-}
-EOS
-  ExampleProtocol.new(<<-EOS, true)
-{"namespace": "org.apache.avro.test",
- "protocol": "BulkData",
-
- "types": [],
-
- "messages": {
-
-     "read": {
-         "request": [],
-         "response": "bytes"
-     },
-
-     "write": {
-         "request": [ {"name": "data", "type": "bytes"} ],
-         "response": "null"
-     }
-
- }
-
-}
-EOS
-]
-
-  Protocol = Avro::Protocol
-  def test_parse
-    EXAMPLES.each do |example|
-      assert_nothing_raised("should be valid: #{example.protocol_string}") {
-        Protocol.parse(example.protocol_string)
-      }
-    end
-  end
-
-  def test_valid_cast_to_string_after_parse
-    EXAMPLES.each do |example|
-      assert_nothing_raised("round tripped okay #{example.protocol_string}") {
-        foo = Protocol.parse(example.protocol_string).to_s
-        Protocol.parse(foo)
-      }
-    end
-  end
-
-  def test_equivalence_after_round_trip
-    EXAMPLES.each do |example|
-      original = Protocol.parse(example.protocol_string)
-      round_trip = Protocol.parse(original.to_s)
-
-      assert_equal original, round_trip
-    end
-  end
-end
diff --git a/branch-1.3/lang/ruby/test/test_socket_transport.rb b/branch-1.3/lang/ruby/test/test_socket_transport.rb
deleted file mode 100644
index 0a15a9a..0000000
--- a/branch-1.3/lang/ruby/test/test_socket_transport.rb
+++ /dev/null
@@ -1,40 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'test_help'
-
-class TestSocketTransport < Test::Unit::TestCase
-  def test_buffer_writing
-    io = StringIO.new
-    st = Avro::IPC::SocketTransport.new(io)
-    buffer_length = "\000\000\000\006"  # 6 in big-endian
-    message = 'abcdef'
-    null_ending = "\000\000\000\000" # 0 in big-endian
-    full = buffer_length + message + null_ending
-    st.write_framed_message('abcdef')
-    assert_equal full, io.string
-  end
-
-  def test_buffer_reading
-    buffer_length = "\000\000\000\005" # 5 in big-endian
-    message = "hello"
-    null_ending = "\000\000\000\000" # 0 in big-endian
-    full = buffer_length + message + null_ending
-    io = StringIO.new(full)
-    st = Avro::IPC::SocketTransport.new(io)
-    assert_equal 'hello', st.read_framed_message
-  end
-end
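The socket-transport test above documents the framing Avro IPC uses on the wire: each buffer is prefixed with a 4-byte big-endian length, and a zero-length buffer terminates the message. A stdlib-only sketch of that framing (`write_framed`/`read_framed` are illustrative names; the real logic lives in `Avro::IPC::SocketTransport`):

```ruby
require 'stringio'

# Write one message as a single framed buffer: big-endian length prefix,
# payload bytes, then an empty buffer marking end-of-message.
def write_framed(io, message)
  io.write([message.bytesize].pack('N'))  # 'N' = 32-bit big-endian
  io.write(message)
  io.write([0].pack('N'))                 # zero-length buffer terminates
end

# Read buffers until the zero-length terminator and concatenate them.
def read_framed(io)
  result = +''
  loop do
    len = io.read(4).unpack1('N')
    break if len.zero?
    result << io.read(len)
  end
  result
end

io = StringIO.new
write_framed(io, 'abcdef')
io.rewind
puts read_framed(io)  # => abcdef
```

This is exactly the byte layout the test asserts: `"\000\000\000\006" + "abcdef" + "\000\000\000\000"`, fourteen bytes in all.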
diff --git a/branch-1.3/lang/ruby/test/tool.rb b/branch-1.3/lang/ruby/test/tool.rb
deleted file mode 100644
index 5d17363..0000000
--- a/branch-1.3/lang/ruby/test/tool.rb
+++ /dev/null
@@ -1,144 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-require 'avro'
-require 'webrick'
-require 'uri'
-require 'logger'
-
-class GenericResponder < Avro::IPC::Responder
-  def initialize(proto, msg, datum)
-    proto_json = open(proto).read
-    super(Avro::Protocol.parse(proto_json))
-    @msg = msg
-    @datum = datum
-  end
-
-  def call(message, request)
-    if message.name == @msg
-      STDERR.puts "Message: #{message.name} Datum: #{@datum.inspect}"
-      @datum
-    end
-  end
-end
-
-class GenericHandler < WEBrick::HTTPServlet::AbstractServlet
-  def do_POST(req, resp)
-    call_request = Avro::IPC::FramedReader.new(StringIO.new(req.body)).read_framed_message
-    unframed_resp = $responder.respond(call_request)
-    writer = Avro::IPC::FramedWriter.new(StringIO.new)
-    writer.write_framed_message(unframed_resp)
-    resp.body = writer.to_s
-    @server.stop
-  end
-end
-
-def run_server(uri, proto, msg, datum)
-  uri = URI.parse(uri)
-  $responder = GenericResponder.new(proto, msg, datum)
-  server = WEBrick::HTTPServer.new(:Host => uri.host,
-                                   :Port => uri.port,
-                                   :Logger => Logger.new(StringIO.new))
-  server.mount '/', GenericHandler
-  puts "Port: #{server.config[:Port]}"
-  STDOUT.flush
-  trap("INT") { server.stop }
-  trap("TERM") { server.stop }
-  server.start
-end
-
-def send_message(uri, proto, msg, datum)
-  uri = URI.parse(uri)
-  trans = Avro::IPC::HTTPTransceiver.new(uri.host, uri.port)
-  proto_json = open(proto).read
-  requestor = Avro::IPC::Requestor.new(Avro::Protocol.parse(proto_json),
-                                       trans)
-  p requestor.request(msg, datum)
-end
-
-def file_or_stdin(f)
-  f == "-" ? STDIN : open(f)
-end
-
-def main
-  if ARGV.size == 0
-    puts "Usage: #{$0} [dump|rpcreceive|rpcsend]"
-    return 1
-  end
-
-  case ARGV[0]
-  when "dump"
-    if ARGV.size != 3
-      puts "Usage: #{$0} dump input_file"
-      return 1
-    end
-    d = Avro::DataFile.new(file_or_stdin(ARGV[1]), Avro::IO::DatumReader.new)
-    d.each{|o| puts o.inspect }
-    d.close
-  when "rpcreceive"
-    usage_str = "Usage: #{$0} rpcreceive uri protocol_file "
-    usage_str += "message_name (-data d | -file f)"
-
-    unless [4, 6].include?(ARGV.size)
-      puts usage_str
-      return 1
-    end
-    uri, proto, msg = ARGV[1,3]
-    datum = nil
-    if ARGV.size > 4
-      case ARGV[4]
-      when "-file"
-        Avro::DataFile.open(ARGV[5]) {|f|
-          f.each{|d| datum = d; break }
-        }
-      when "-data"
-        puts "JSON Decoder not yet implemented."
-        return 1
-      else
-        puts usage_str
-        return 1
-      end
-    end
-    run_server(uri, proto, msg, datum)
-  when "rpcsend"
-    usage_str = "Usage: #{$0} rpcsend uri protocol_file "
-    usage_str += "message_name (-data d | -file f)"
-    unless [4,6].include?(ARGV.size)
-      puts usage_str
-      return 1
-    end
-    uri, proto, msg = ARGV[1,3]
-    datum = nil
-    if ARGV.size > 4
-      case ARGV[4]
-      when "-file"
-        Avro::DataFile.open(ARGV[5]){|f| f.each{|d| datum = d; break } }
-      when "-data"
-        puts "JSON Decoder not yet implemented"
-        return 1
-      else
-        puts usage_str
-        return 1
-      end
-    end
-    send_message(uri, proto, msg, datum)
-  end
-  return 0
-end
-
-if __FILE__ == $0
-  exit(main)
-end
diff --git a/branch-1.3/share/VERSION.txt b/branch-1.3/share/VERSION.txt
deleted file mode 100644
index 785cda8..0000000
--- a/branch-1.3/share/VERSION.txt
+++ /dev/null
@@ -1 +0,0 @@
-1.3.3
\ No newline at end of file
diff --git a/branch-1.3/share/rat-excludes.txt b/branch-1.3/share/rat-excludes.txt
deleted file mode 100644
index a5bb15b..0000000
--- a/branch-1.3/share/rat-excludes.txt
+++ /dev/null
@@ -1,32 +0,0 @@
-**/*.avpr
-**/*.avro
-**/*.avsc
-**/*.gperf
-**/*.html
-**/*.js
-**/*.la
-**/*.m4
-**/*.md5
-**/*.pom
-**/*.sha1
-**/*.txt
-**/.gitignore
-**/Makefile**
-**/configure**
-doc/**
-lang/py/lib/simplejson/**
-lang/c++/Doxyfile
-lang/c++/jsonschemas/**
-lang/c/autom4te.cache/**
-lang/c/config**
-lang/c++/config**
-lang/c/docs/**
-lang/c/json/**
-lang/c/tests/**
-lang/c/version.sh
-lang/ruby/Manifest
-lang/c/build.sh
-lang/c/jansson/**
-lang/c/src/queue.h
-lang/c/src/st.h
-lang/c/src/st.c
diff --git a/branch-1.3/share/schemas/org/apache/avro/ipc/HandshakeRequest.avsc b/branch-1.3/share/schemas/org/apache/avro/ipc/HandshakeRequest.avsc
deleted file mode 100644
index 47f0256..0000000
--- a/branch-1.3/share/schemas/org/apache/avro/ipc/HandshakeRequest.avsc
+++ /dev/null
@@ -1,11 +0,0 @@
-{
-    "type": "record",
-    "name": "HandshakeRequest", "namespace":"org.apache.avro.ipc",
-    "fields": [
-        {"name": "clientHash",
-	 "type": {"type": "fixed", "name": "MD5", "size": 16}},
-        {"name": "clientProtocol", "type": ["null", "string"]},
-        {"name": "serverHash", "type": "MD5"},
- 	{"name": "meta", "type": ["null", {"type": "map", "values": "bytes"}]}
- ]
-}
diff --git a/branch-1.3/share/schemas/org/apache/avro/ipc/HandshakeResponse.avsc b/branch-1.3/share/schemas/org/apache/avro/ipc/HandshakeResponse.avsc
deleted file mode 100644
index b54d16d..0000000
--- a/branch-1.3/share/schemas/org/apache/avro/ipc/HandshakeResponse.avsc
+++ /dev/null
@@ -1,15 +0,0 @@
-{
-    "type": "record",
-    "name": "HandshakeResponse", "namespace": "org.apache.avro.ipc",
-    "fields": [
-        {"name": "match",
-         "type": {"type": "enum", "name": "HandshakeMatch",
-                  "symbols": ["BOTH", "CLIENT", "NONE"]}},
-        {"name": "serverProtocol",
-         "type": ["null", "string"]},
-        {"name": "serverHash",
-         "type": ["null", {"type": "fixed", "name": "MD5", "size": 16}]},
- 	{"name": "meta",
-         "type": ["null", {"type": "map", "values": "bytes"}]}
-    ]
-}
diff --git a/branch-1.3/share/test/interop/bin/test_rpc_interop.sh b/branch-1.3/share/test/interop/bin/test_rpc_interop.sh
deleted file mode 100755
index a6c3afb..0000000
--- a/branch-1.3/share/test/interop/bin/test_rpc_interop.sh
+++ /dev/null
@@ -1,72 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -e						  # exit on error
-
-cd `dirname "$0"`/../../../..			  # connect to root
-
-VERSION=`cat share/VERSION.txt`
-
-#set -x						  # echo commands
-
-java_client="java -jar lang/java/build/avro-tools-$VERSION.jar rpcsend"
-java_server="java -jar lang/java/build/avro-tools-$VERSION.jar rpcreceive"
-
-py_client="python lang/py/src/avro/tool.py rpcsend"
-py_server="python lang/py/src/avro/tool.py rpcreceive"
-
-ruby_client="ruby -rubygems -Ilang/ruby/lib lang/ruby/test/tool.rb rpcsend"
-ruby_server="ruby -rubygems -Ilang/ruby/lib lang/ruby/test/tool.rb rpcreceive"
-
-export PYTHONPATH=lang/py/src                     # path to avro Python module
-
-clients=("$java_client" "$py_client" "$ruby_client")
-servers=("$java_server" "$py_server" "$ruby_server")
-
-proto=share/test/schemas/simple.avpr
-
-portfile=/tmp/interop_$$
-
-function cleanup() {
-    rm -rf $portfile
-    for job in `jobs -p` ; do kill $job; done
-}
-
-trap 'cleanup' EXIT
-
-for server in "${servers[@]}"
-do
-    for msgDir in share/test/interop/rpc/*
-    do
-	msg=`basename "$msgDir"`
-	for c in ${msgDir}/*
-	do
-	    echo TEST: $c
-	    for client in "${clients[@]}"
-	    do
-		$server http://0.0.0.0:0/ $proto $msg -file $c/response.avro \
-		    > $portfile &
-		sleep 1				  # wait for server to start
-		read ignore port < $portfile
-	    	$client http://127.0.0.1:$port $proto $msg -file $c/request.avro
-		wait
-	    done
-	done
-    done
-done
-
-echo RPC INTEROP TESTS PASS
diff --git a/branch-1.3/share/test/interop/rpc/add/onePlusOne/request.avro b/branch-1.3/share/test/interop/rpc/add/onePlusOne/request.avro
deleted file mode 100644
index 172f237..0000000
--- a/branch-1.3/share/test/interop/rpc/add/onePlusOne/request.avro
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/share/test/interop/rpc/add/onePlusOne/response.avro b/branch-1.3/share/test/interop/rpc/add/onePlusOne/response.avro
deleted file mode 100644
index 62ea90c..0000000
--- a/branch-1.3/share/test/interop/rpc/add/onePlusOne/response.avro
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/share/test/interop/rpc/echo/foo/request.avro b/branch-1.3/share/test/interop/rpc/echo/foo/request.avro
deleted file mode 100644
index 4d12ecc..0000000
--- a/branch-1.3/share/test/interop/rpc/echo/foo/request.avro
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/share/test/interop/rpc/echo/foo/response.avro b/branch-1.3/share/test/interop/rpc/echo/foo/response.avro
deleted file mode 100644
index e2c8d54..0000000
--- a/branch-1.3/share/test/interop/rpc/echo/foo/response.avro
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/share/test/interop/rpc/hello/world/request.avro b/branch-1.3/share/test/interop/rpc/hello/world/request.avro
deleted file mode 100644
index 71adb63..0000000
--- a/branch-1.3/share/test/interop/rpc/hello/world/request.avro
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/share/test/interop/rpc/hello/world/response.avro b/branch-1.3/share/test/interop/rpc/hello/world/response.avro
deleted file mode 100644
index 5a79c74..0000000
--- a/branch-1.3/share/test/interop/rpc/hello/world/response.avro
+++ /dev/null
Binary files differ
diff --git a/branch-1.3/share/test/schemas/BulkData.avpr b/branch-1.3/share/test/schemas/BulkData.avpr
deleted file mode 100644
index 608bf43..0000000
--- a/branch-1.3/share/test/schemas/BulkData.avpr
+++ /dev/null
@@ -1,21 +0,0 @@
-
-{"namespace": "org.apache.avro.test",
- "protocol": "BulkData",
-
- "types": [],
-
- "messages": {
-
-     "read": {
-         "request": [],
-         "response": "bytes"
-     },
-
-     "write": {
-         "request": [ {"name": "data", "type": "bytes"} ],
-         "response": "null"
-     }
-
- }
-
-}
diff --git a/branch-1.3/share/test/schemas/interop.avsc b/branch-1.3/share/test/schemas/interop.avsc
deleted file mode 100644
index 8cfbba2..0000000
--- a/branch-1.3/share/test/schemas/interop.avsc
+++ /dev/null
@@ -1,28 +0,0 @@
-{"type": "record", "name":"Interop", "namespace": "org.apache.avro",
-  "fields": [
-      {"name": "intField", "type": "int"},
-      {"name": "longField", "type": "long"},
-      {"name": "stringField", "type": "string"},
-      {"name": "boolField", "type": "boolean"},
-      {"name": "floatField", "type": "float"},
-      {"name": "doubleField", "type": "double"},
-      {"name": "bytesField", "type": "bytes"},
-      {"name": "nullField", "type": "null"},
-      {"name": "arrayField", "type": {"type": "array", "items": "double"}},
-      {"name": "mapField", "type":
-       {"type": "map", "values":
-        {"type": "record", "name": "Foo",
-         "fields": [{"name": "label", "type": "string"}]}}},
-      {"name": "unionField", "type":
-       ["boolean", "double", {"type": "array", "items": "bytes"}]},
-      {"name": "enumField", "type":
-       {"type": "enum", "name": "Kind", "symbols": ["A","B","C"]}},
-      {"name": "fixedField", "type":
-       {"type": "fixed", "name": "MD5", "size": 16}},
-      {"name": "recordField", "type":
-       {"type": "record", "name": "Node",
-        "fields": [
-            {"name": "label", "type": "string"},
-            {"name": "children", "type": {"type": "array", "items": "Node"}}]}}
-  ]
-}
diff --git a/branch-1.3/share/test/schemas/namespace.avpr b/branch-1.3/share/test/schemas/namespace.avpr
deleted file mode 100644
index 11b6bf1..0000000
--- a/branch-1.3/share/test/schemas/namespace.avpr
+++ /dev/null
@@ -1,28 +0,0 @@
-{"namespace": "org.apache.avro.test.namespace",
- "protocol": "TestNamespace",
-
- "types": [
-     {"name": "org.apache.avro.test.util.MD5", "type": "fixed", "size": 16},
-     {"name": "TestRecord", "type": "record",
-      "fields": [ {"name": "hash", "type": "org.apache.avro.test.util.MD5"} ]
-     },
-     {"name": "TestError", "namespace": "org.apache.avro.test.errors",
-      "type": "error", "fields": [ {"name": "message", "type": "string"} ]
-     }
- ],
-
- "messages": {
-     "echo": {	
-         "request": [{"name": "record", "type": "TestRecord"}],
-         "response": "TestRecord"
-     },
-
-     "error": {
-         "request": [],
-         "response": "null",
-         "errors": ["org.apache.avro.test.errors.TestError"]
-     }
-
- }
-
-}
diff --git a/branch-1.3/share/test/schemas/simple.avpr b/branch-1.3/share/test/schemas/simple.avpr
deleted file mode 100644
index 3ad569e..0000000
--- a/branch-1.3/share/test/schemas/simple.avpr
+++ /dev/null
@@ -1,57 +0,0 @@
-{"namespace": "org.apache.avro.test",
- "protocol": "Simple",
- "doc": "Protocol used for testing.",
-
- "types": [
-     {"name": "Kind", "type": "enum", "symbols": ["FOO","BAR","BAZ"]},
-
-     {"name": "MD5", "type": "fixed", "size": 16},
-
-     {"name": "TestRecord", "type": "record",
-      "fields": [
-          {"name": "name", "type": "string", "order": "ignore"},
-          {"name": "kind", "type": "Kind", "order": "descending"},
-          {"name": "hash", "type": "MD5"}
-      ]
-     },
-
-     {"name": "TestError", "type": "error", "fields": [
-         {"name": "message", "type": "string"}
-      ]
-     }
-
- ],
-
- "messages": {
-
-     "hello": {
-         "doc": "Send a greeting",
-         "request": [{"name": "greeting", "type": "string"}],
-         "response": "string"
-     },
-
-     "echo": {
-         "doc": "Pretend you're in a cave!",
-         "request": [{"name": "record", "type": "TestRecord"}],
-         "response": "TestRecord"
-     },
-
-     "add": {
-         "request": [{"name": "arg1", "type": "int"}, {"name": "arg2", "type": "int"}],
-         "response": "int"
-     },
-
-     "echoBytes": {
-         "request": [{"name": "data", "type": "bytes"}],
-         "response": "bytes"
-     },
-
-     "error": {
-         "doc": "Always throws an error.",
-         "request": [],
-         "response": "null",
-         "errors": ["TestError"]
-     }
- }
-
-}
diff --git a/lang/c++/api/Reader.hh b/lang/c++/api/Reader.hh
index cdf14aa..be6c2d9 100644
--- a/lang/c++/api/Reader.hh
+++ b/lang/c++/api/Reader.hh
@@ -92,19 +92,12 @@
 
     void readBytes(std::vector<uint8_t> &val) {
         int64_t size = readSize();
-        
-        val.reserve(size);
-        uint8_t bval;
-        for(size_t bytes = 0; bytes < static_cast<size_t>(size); bytes++) {
-            in_.readByte(bval);
-            val.push_back(bval);
-        }
+        val.resize(size);
+        in_.readBytes(&val[0], size);
     }
 
     void readFixed(uint8_t *val, size_t size) {
-        for(size_t bytes = 0; bytes < size; bytes++) {
-            in_.readByte(val[bytes]);
-        }
+        in_.readBytes(val, size);
     }
 
     template <size_t N>
diff --git a/share/VERSION.txt b/share/VERSION.txt
index 6d37d87..785cda8 100644
--- a/share/VERSION.txt
+++ b/share/VERSION.txt
@@ -1 +1 @@
-1.3.3-SNAPSHOT
\ No newline at end of file
+1.3.3
\ No newline at end of file