[maven-release-plugin]  copy for tag apache-gora-0.2.1

git-svn-id: https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1@1366099 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/trunk/.gitignore b/trunk/.gitignore
new file mode 100644
index 0000000..972a8df
--- /dev/null
+++ b/trunk/.gitignore
@@ -0,0 +1,34 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*~
+.idea
+*.iml
+*.iws
+*.ipr
+.classpath
+.externalToolBuilders
+.project
+.settings
+.git
+.svn
+build
+target
+dist
+lib
+**/lib/*.jar
+ivy/ivy*.jar
+/conf/*-site.xml
+**/conf/*-site.xml
diff --git a/trunk/CHANGES.txt b/trunk/CHANGES.txt
new file mode 100644
index 0000000..e6dbc38
--- /dev/null
+++ b/trunk/CHANGES.txt
@@ -0,0 +1,182 @@
+ =======================================================================
+ ==CHANGES.txt
+ =======================================================================
+
+Gora Change Log
+
+0.2.1 release: 26/07/2012
+Release Report: https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12311172&version=12322496
+
+* GORA-157 gora-cassandra test failure - proposal to skip 10 test cases for a while (kazk)
+
+* GORA-156 Properly implement getSchemaName in CassandraStore (lewismc)
+
+* GORA-153 gora-cassandra does not correctly handle DELETED State for MAP (kazk)
+
+* GORA-152 gora-core test incorrectly uses ByteBuffer's array() method to get its byte array (kazk)
+
+* GORA-151 CassandraStore's schemaExists() method always returns false (kazk)
+
+* GORA-150 Introduce Configuration property preferred.schema.name (ferdy)
+
+* GORA-142 Creates org.apache.gora.cassandra.serializers package in order to clean the code of store and query packages and to support additional types in future. (kazk)
+
+* GORA-148 CassandraMapping supports only (first) keyspace and class in gora-cassandra-mapping.xml (kazk)
+
+* GORA-143 GoraCompiler needs to add "import FixedSize" statement for FIXED type (kazk)
+
+* GORA-147 fix threading issue caused by multiple threads trying to flush (ferdy)
+
+* GORA-146 HBaseStore does not properly set endkey (ferdy)
+
+* GORA-140 Requires some adjustments on dependency at gora-cassandra (kazk, lewismc)
+
+* GORA-138 gora-cassandra array type support: Double fix for GORA-81 Replace CassandraStore#addOrUpdateField with TypeInferringSerializer to take advantage of when the value is already of type ByteBuffer. (Kazuomi Kashii via lewismc)
+
+* GORA-139  Creates Cassandra column family with BytesType for column value validator (and comparators), instead of UTF8Type (Kazuomi Kashii via lewismc)
+
+* GORA-131 gora-cassandra should support other key types than String (Kazuomi Kashii via lewismc)
+
+* GORA-132 Uses ByteBufferSerializer for column value to support various data types rather than StringSerializer (Kazuomi Kashii via lewismc)
+
+* GORA-77 Replace commons logging with Slf4j (Renato Javier Marroquín Mogrovejo via lewismc)
+
+* GORA-134 ListGenericArray's hashCode causes StackOverflowError (Kazuomi Kashii via lewismc)
+
+* GORA-95 Catch incorrect mapping configurations and implement sufficient logging in CassandraMapping. (lewismc)
+
+* GORA-** Commit to fix classloading for CLI execution (lewismc)
+
+* GORA-122 gora-accumulo/lib is not cleaned after mvn clean (lewismc)
+
+* GORA-133 & 63 GoraCompiler cannot compile array type & bin/compile-examples.sh does not work respectively (enis, Kazuomi Kashii via lewismc)
+
+* GORA-129 redundant conf field in HBaseStore (ferdy)
+
+* GORA-123 Append correct submodule directories to SCM paths in submodule pom's (lewismc)
+
+* GORA-127 Result objects are not closed properly from GoraRecordReader. (enis)
+
+0.2 Release: 20/04/2012
+Jira Release Report: https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12311172&version=12315541
+ 
+* GORA-120 Dirty fields are not correctly applied after serialization and map clearance (ferdy)
+ 
+* GORA-115 Flushing HBaseStore should flush all HTable instances. (ferdy)
+
+* Make hbase autoflush default to false and make autoflush configurable rather than hardcoded (stack via lewismc)
+
+* GORA-76 Upgrade to Hadoop 1.0.1 (ferdy & lewismc)
+
+* GORA-65 Initial checkin of gora accumulo datastore (kturner)
+
+* GORA-108 Change CassandraClient#init() to CassandraClient#initialize() for consistency with other Datastores. (lewismc)
+
+* GORA-105 DataStoreFactory does not properly support multiple stores (ferdy)
+
+* GORA-** Update Gora parent pom to include maven release plugin targets, and update developer credentials. (lewismc)
+
+* GORA-74 Remove sqlbuilder library (lewismc)
+
+* GORA-101 HBaseStore should properly support multiple tables in the mapping file. (ferdy)
+
+* GORA-82 Add missing license headers & RAT target to pom.xml (lewismc)
+
+* GORA-88 HBaseByteInterface not thread safe (ferdy)
+
+* GORA-93 [gora-cassandra] Add implementation of CassandraStore.get(key) (Sujit Pal via lewismc)
+
+* GORA-58 Upgrade Gora-Cassandra to use Cassandra 1.0.2 (lewismc)
+
+* GORA-80 Implement functionality to define consistency used for Cassandra read and 
+write operations. (lewismc)
+
+* GORA-91 Ensure that Gora adheres to ASF branding requirements (lewismc)
+
+* GORA-90 Create DOAP for Gora (lewismc)
+
+* GORA-79 Block keyspace creation until the whole cassandra cluster converges to the new keyspace. (Patricio Echagüe via lewismc)
+
+* GORA-83 add 'target' dirs to svn ignore (ferdy)
+
+* GORA-66 testDeleteByQueryFields seems incorrect (Keith Turner via lewismc)
+
+* GORA-72 Made all artifacts proper OSGi bundles. Added ClassLoadingUtils which will fallback to thread context classloader if Class.forName fails. (iocanel)
+
+* GORA-55 Removed second assertion for schema existence, since schema is already deleted, Excluded ranged queries from the tests, since the tests assume end key to be inclusive while HBase considers it exclusive. (iocanel)
+
+* GORA-** Updated the maven surefire plugin version to properly set the system properties (iocanel)
+
+* GORA-22 Upgrade cassandra backend to 0.7 (jnioche & Alexis Detreglode)
+
+* GORA-69 Added xerces to dependencyManagement to avoid version conflicts (iocanel)
+
+* GORA-54 Split TestHBaseStoreMapReduce to two individual tests, so that they can run properly, Added jvm args to maven surefire plugin to increase the max heap size that the tests will use to 512M. (iocanel)
+
+* GORA-70 Upgrade deprecated expressions in pom.xml for use in building effective models, Replace reference to artifactId/version with project.artifactId/project.version (iocanel)
+
+* GORA-68 Added test.build.data System property so that test data can be stored under target. Added port reservation for gora-sql tests. Added mapping xml to the gora-sql test classpath (iocanel)
+
+* GORA-51 Added surefire plugin configuration to run the tests isolated (iocanel)
+
+* GORA-64 Query should document inclusiveness (Keith Turner via lewismc) 
+
+* GORA-52 Run JUnit tests from the command line (lewismc)
+
+* GORA-62 Run bin/gora commands after maven build (lewismc)
+
+* GORA-43 Forward port 0.1.1 incubating Maven poms to trunk (lewismc)
+
+* GORA-57 HBaseStore does not correctly read family definitions (Ferdy via lewismc)
+
+* GORA-56 HBaseStore is not thread safe. (Ferdy via lewismc)
+
+* GORA-48. HBaseStore initialization of table without configuration in constructor will throw Exception (Ferdy via lewismc)
+
+* GORA-47&46. fix tar ant target & Add nightly target to build.xml respectively (lewismc)
+
+* GORA-45. Add dependency to 'clean-cache' ant target (Lewis John McGibbney via mattmann)
+
+* GORA-44. Ant build fails (Lakshmi Narasimhan via mattmann)
+
+* GORA-28. Merge back recent changes in 0.1-incubating to trunk (mattmann, ab)
+
+* GORA-12. Semantics of DataStore.delete* (ab via mattmann)
+
+* GORA-18. Configuration not found error breaks the build on trunk (ioannis via mattmann)
+
+* GORA-26. Configurable classes should pass around Configuration consistently (ab via mattmann)
+
+* GORA-32. Map type with long values generates non-compilable Java class (Yves Langisch)
+
+* GORA-29. Gora maven support (Ioannis Canellos via mattmann)
+
+* GORA-31. jersey-json dependency not in repositories currently in ivysettings.xml (Marshall Pierce via hsaputra)
+
+0.1-incubating release:
+
+*  INFRA-3038. Initial import of code.
+
+*  GORA-1. Organize source code for Apache. (enis)
+
+*  GORA-6. Add methods that take dataStoreClass instead of data store instance 
+   to Gora{Input|Output}Format and Gora{Mapper|Reducer}. (enis)
+
+*  GORA-7. DataStoreFactory.createDataStore() should throw exceptions on 
+   failure. (enis)
+
+*  GORA-15. Primitive types cannot be used as keys in DataStore. (enis)
+
+*  GORA-16. Create a tutorial for Gora. (enis)
+
+*  GORA-21. Commons-lang needs a configuration in gora-core/ivy/ivy.xml (jnioche)
+
+*  GORA-20. Flush datastore regularly (Alexis Detreglode via jnioche)
+
+*  GORA-23. Limit result set in store reads (Alexis Detreglode via hsaputra)
+
+*  GORA-25. Upgrade Gora-hbase to HBase 0.90.0 (ab)
+
+*  GORA-2. Create Gora web site (enis)
+
+*  GORA-5. Add champions and mentors to Gora website (hsaputra)
diff --git a/trunk/KEYS b/trunk/KEYS
new file mode 100644
index 0000000..4f1e9fd
--- /dev/null
+++ b/trunk/KEYS
@@ -0,0 +1,171 @@
+This file contains the PGP keys of various Apache developers.
+Please don't use them for email unless you have to. Their main
+purpose is code signing.
+
+Apache users: pgp < KEYS
+Apache developers:
+        (pgpk -ll <your name> && pgpk -xa <your name>) >> this file.
+      or
+        (gpg --fingerprint --list-sigs <your name>
+             && gpg --armor --export <your name>) >> this file.
+
+Apache developers: please ensure that your key is also available via the
+PGP keyservers (such as pgpkeys.mit.edu).
+
+
+----------------------------------------------------------------
+
+pub   4096R/3592721E 2011-01-21
+      Key fingerprint = A0BF C76E F6A5 6F2C DC3D  0E2C EDF4 C958 3592 721E
+uid                  Henry Saputra (CODE SIGNING KEY) <hsaputra@apache.org>
+sig 3        3592721E 2011-01-21  Henry Saputra (CODE SIGNING KEY) <hsaputra@apache.org>
+sub   4096R/C318E93E 2011-01-21
+sig          3592721E 2011-01-21  Henry Saputra (CODE SIGNING KEY) <hsaputra@apache.org>
+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+Version: GnuPG/MacGPG2 v2.0.16 (Darwin)
+
+mQINBE05Kw0BEACuSMJSAKgbac4pMNfzssckf4hAZMnd23DYQYMwxPNjeelqw1qJ
+1JPsDW6nOfBGVEj3OLqd6ZPH6K2i8+GdY+p+buxaRJRdpYh03P3+Hno3OMySA9wo
+7mdy/NbmYsqIfvA6+WRmRUAKOnJv8lplMwwQMoPGPkHsMJNUeIAjGyCBzwhpn3y1
+NkC6QEAE9iq1DZFhURD5D3ZfH/XO8WfQz74cghAySEF3bPCO7l5DPM3ddWYE1rmZ
+UcltqxxAy4xBzuDP7gqxKdSqZXAQfyRGgbAxu5MfHYL+XgVbn+4ZszCzFIUUIYFE
+dvFjizzG3ZM0oQXnZYnWzxY3Flui7tFrQNvy78SsWRwfjxzuRWlJLrbm0h8Rdnj6
+9MjxH6OW3ihdw2yrw79aUT5ArmE3j2wZtcLnhXoTYqqxAk6UUbAsS1EcIRte01t4
+iZrtJdkS+MkVHOQewdFJe6dQcAWy62z/pfrs2dII7EhxZCi5kx4Wwb4Ud2wrnZJu
+MhMvb91y2Lwz2WF8BXj6UdAiV/VGupmWvjvod58X6GyKsmS6+dXEkTeWr44kn6qb
+cKwuAqdZEfSDT1K/koNdWGF4ExfI8pO2IaM8YWFglX7eiP6o2GhcQ2qZwQzW3KnW
+m8aMw1BKV//xAeudR6agh/i9F+DC3wqdovFb+2NvU3d9OAK7xhbBWfWbzQARAQAB
+tDZIZW5yeSBTYXB1dHJhIChDT0RFIFNJR05JTkcgS0VZKSA8aHNhcHV0cmFAYXBh
+Y2hlLm9yZz6JAjcEEwEKACECGwMCHgECF4AFAk05OAgFCwkIBwMFFQoJCAsFFgID
+AQAACgkQ7fTJWDWSch7trg//Wq8TCY5F+tTGWA8yaKFwnKzt7SVfVDeKu9HS+O0t
+4mSMj7n0fZsMWtgUfkq7eQqswAxYMxsLNh9Y4Tc1FJ1ivzKf7AomWiFrM2YDtvSu
+5/iF0a1JgGcbxwFaoyYMZOttdx+DneCUBvVrvwAFVxuIIrcH1mSYOCy6T7A1cdjB
+RtRmvIh2qQ4d6lw4ZlB9sy04/kRK+yPmGCfpE3SD4b64tEIByPOgZABp1nXlHKQX
+rJUwfVYuPkIzzEETePzPibIlvK1/njg/AMUSibek0v2RWqaZspxh6b2QhvHfW0df
+vvTy5TxNi9WoXdIX0laS9gaj4ItyDd8tShMOl0LFNd7X+UexxiAVQ37wAvnSYPRL
+P9Lxvj0BIbzllz2+J0F4gMq5j9gcuYkuHc2bdNJ7NWbhIgEuMrWgzqxQpYkjDMSo
+r3s4KaR84+y/9JuZofjiV/Afh3i0imjczHWV1u2tp6Yh9eSKH37cQKeIlfTwgmdn
+brj7J5UF53w4O5iorUuO3scGwP2HOnWHSUx8sCaEXHk8v08y89mW1c+je7GwzQoT
+TCRdbMkaqzotHZOzJ/ha32ZEy6Q6WW2Zeo50bnDfR94ZXCY2NR9JNznNNc4Bn0G2
+WdUL/LbV3j3n8aqAUPShEzbrTtkQJdFCFddrLCxWLdNXSAroslhOAEnWhC+ZLGTy
+qh65Ag0ETTkrDQEQAMJbM3CtxrVUSh4XP4YJImC9+iOGgt4rWjqxEIkojlI3eUDa
+lU6fGHguRPs9aVLfHQoCjLw5aGfjvnYa/NE13ihPcu5CFk9TjtganTa1Av0ODG5E
+2HC95WX9k9KZsOHcmhH7nubGCLx9RDiuhyVU0dLw2qHZGqQooibc1p4r0crqsSMj
+JU2x9i7eNtFY/jNgHedaWjDcOYZEsVia9XQsaTzORmxtcbkrNJdcssYRmMjPMR1t
+PeOtofedXvVY0w2h51XTgIPzYhR9Ho7ZDPQyn5VQJ4REIrtBz0jxz/Lcj2JAtYsr
+9bfOeYe7+9h7sVTKmb4wBWZBDvZmAoGLj9f2jSmsugGBVT6TlK1R5PtPrmj1hExo
+PFH/gPpt8aLQYR73xR0fzunqfGFS651EArZoB3GSM2CoY2vxzwQpqHH9dFtf2wu9
+a+tT60rvPdag3y98N4RlGmK6KUuClOk4oRsQcI4H06zF+jwmbikMlTc3JqC+ev2c
+fExVjaFM9Hz/bzEyZerNwbntz/yBIg4NR1RfPjQUVolNg66dkMe2GwFs1f2Nd/1b
+40+vimRJXeGuyzyH3o86UAHUQu92hOMWgMlilsU0GUUMk6qZHA5SvAB3t3atZ6IR
+2453kXlOFUGl0NkGn3nFxF2xj8E6tUG72glDBW6GZP+hZilhs853HPt2UknXABEB
+AAGJAh8EGAEKAAkFAk05Kw0CGwwACgkQ7fTJWDWSch4knQ//cfFjcmFls2siwGwc
+q1H9QyhiiJcDbHFyTb7UvVwTgZ3rkdoQkrslMp3Ti/mm1NVH0c5ywuHCT1dip+yO
+7Rt8ruepDQMPDLHRsudzwJNKaZQPbtacESvDayqADOYaDN1req8+Z4+G2jyYaLve
+TTI/HpyJyp5YLXlaBM+EicqP+7WZeMvw8MkxhfXEKeaZ7K6E326USlo1os/puOtl
+VB9ywr4LD+4ZB7jzAGI0hsCn/ZxS+Uu9iAfrr1LVNQMxTbJ/nBFwrQof75SYvjwQ
+A07D6KBdea7TpTj0JI9saaI4ytuKwCi9WpyvC7bxec7GAbOSvhK/vEV0dVAnAQlL
+nyv8SS+m3gcukZNnNMz0Yz9IeFKB29J8sFgFW4f+Ww6ygmaU9+kN/4C7hzWbUjE8
+GyFr4RDRIfIQKG7D50IUzeY+jfzpa5Z1SqmBqPymKz/w7l/N9GmtzFptox30+C0l
+7CEp+B3Wyb8zdiX3EgAzka8kbsZESRC22ziowOJ8DS25vFLmw5q8W6MDLnovT1a0
+rCJsgoQny9AvP0VzCm3Xxwgcf+Q1HyHGEUIS+lwPRpeNgfgWogzDMrEUjvhQDeDZ
+e9Hhz6NeQVWGJXByjaITCa3CMY4HxRxWALQ21vJmy8WSTGGno4VaMfJ+mrnVXPt7
+PsjMuqud3oNyQDFEI+qH9ZUbnPI=
+=W5eW
+-----END PGP PUBLIC KEY BLOCK-----
+pub   1024D/B876884A 2007-12-24
+uid                  Chris Mattmann (CODE SIGNING KEY) <mattmann@apache.org>
+sig 3        B876884A 2007-12-24  Chris Mattmann (CODE SIGNING KEY) <mattmann@apache.org>
+sub   2048g/D3B4F350 2007-12-24
+sig          B876884A 2007-12-24  Chris Mattmann (CODE SIGNING KEY) <mattmann@apache.org>
+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+Version: GnuPG v1.4.8 (Darwin)
+
+mQGiBEdvL9QRBACuaV06by+pxZHXIxBsfAFYJk7XJgsqR23m5ClCDPusMeaI4XGB
+eU8Nw4iVwgG3p5VLWLXeMIm/KPz3pmxiNyEP/dHoDxOPR+hAqlP5v03D1iK19H7q
+46BIecIwo8q0ei70fBLvMQN+apIFlvYDqVCTm1lxoCQafagqd9p2JtTf+wCg70yM
+nGtrejB+ZTTcb08f7SAHsLED/11vIdcxViN3u+3klhbb99bd/g9KvCU/I/7+MDx1
+3zrSvJV2b2wrxabUJ1Oxsb4/4BXq8A1FyhC1h/d2PsawqiY0GZ02cucbzEmdXH51
+UnrRLM9/txtZ2b7V6YkDmPf0k6rD0SjqAAy1ERekEVUOxnY4sPGmJoyac4j9+pO9
+1vH/A/9LRoJlPTfv/mFYty6/Egckhv48YoRUBo1dNh6IPQY0oVpAFbcXc3GiTyCu
+5iQp7utxP7hoJTUM2Hn5tF9D7IniRC9wsrcW8Gi/f82O4HlmyV4+Tt75nWx018oI
+ObGmwitT27EkOnFcQc9F+Q53nKr+a22SBbpfffF9Xdbkw7V73bQ3Q2hyaXMgTWF0
+dG1hbm4gKENPREUgU0lHTklORyBLRVkpIDxtYXR0bWFubkBhcGFjaGUub3JnPohg
+BBMRAgAgBQJHby/UAhsDBgsJCAcDAgQVAggDBBYCAwECHgECF4AACgkQcPCcxrh2
+iEr8KwCffMIKMu3TBrGZVu1BPLbMBhjsrl8AoI15rg+tzYZZmZJD6tDS40klTsVA
+uQINBEdvL9QQCAClHjwXMu38iDR3nvbYkWmcz5rfBFvDm/KVQGLnnY96C1r890Ir
+cHxAlSpbGb6qPi5n27v87LoS2bYEitqCUUwB7AQLOgqmLvqMJ4qp5HUfTQ/wH9Br
+wK2LX1oGFJXH14lbZ7xW36n9A/JtXHY8vGz3GuDvKYqbdOCFo8fBLwotdFOHhNYy
+bBYS1G4gtmemXwzH8kcuoIW6LuoRNxluHi1tJGFC1F1uBoxKir7F7BC38DDNvhak
+dSJpm3WxFkEEkIUyIERVGVRoFzLlk72W0R3kZVvnXbtgPklTg/2Sy13Gb+MzTBYt
+5TF841neM/kHdgt45EgBhchHN3Ys3ljabihbAAMFB/4ke4Xe573V78UR/WTMUzfw
+pIysMUzEjNKqOfnAoNnR4WDDca4MwIUl62QqGTRrWZxTD8fAGYxc+m0qmygGKtYq
+LUYB5N/pLGu1sg2j23G8aBKthiCCE+jOr3uebU/j0BTzN/BwXCqIGogELFlPC5Tj
+Hr6c8LpkRFIOjVfuYB2TV4o2RfSFzrSFHCbrU82ojxhYSwyqDGAdD6EGtbbqaEMX
+tGZzHaMVm2gDeV9W2veurxOulgndNg2+FXvgUlOa+KZ2J2DxNBcJv1uBtDAWDyR9
+dTgTbK62ZnSjsnRYbgf0HdA+kW9n9XBMEHwgYk0q+doOWUOQFqC84TgrrhyDd1XZ
+iEkEGBECAAkFAkdvL9QCGwwACgkQcPCcxrh2iEplXwCgraY3ELlDStqpJDSUzVsN
+rGuNiwsAoKz92ycEjcMnoLnX8AaPADdo1m/P
+=zEfO
+-----END PGP PUBLIC KEY BLOCK-----
+
+pub   4096R/C601BCA7 2012-04-17
+uid                  Lewis John McGibbney (CODE SIGNING KEY) <lewismc@apache.org>
+sig 3        C601BCA7 2012-04-17  Lewis John McGibbney (CODE SIGNING KEY) <lewismc@apache.org>
+sub   4096R/FCD9FF28 2012-04-17
+sig          C601BCA7 2012-04-17  Lewis John McGibbney (CODE SIGNING KEY) <lewismc@apache.org>
+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+Version: GnuPG v1.4.10 (GNU/Linux)
+
+mQINBE+NSUkBEAC3Qu1mT3x0swS4zXta2NnJtrepOqpsU292U+hzkbjdG8W+W2WA
+3oRdd5f/iKkkE1Z3q53qD++PazLQf+g+378Ce+CP4bwhZuz/CgSa8EO2rIXadVUG
+M+XBAiSlLWyQhwW8qbipGQvpT1PXp8mjwXlWzt+0+4F9ybepYxStUPaybIFfSn+f
+M8YzYLgfKSsHMgPeK6TGRJAqC+u7t+XMYWmfVS9TpoOyfZ3tsn3YmeH4JiqF49/0
+XzkqgM7FW52By64Nm6xCOfqXCaMmVV5JRuZFhLB4VmWlH/Mikv5Tu99gsAdGwFIb
+MhMWtWZ/azKarTkQiZjDka09Mxc6skXCBBbxz9lstE4X50d5PMqOgVBtFstmL64h
+Km2dSIdVEUyjM9y1HBRZO1+ooNs5xja1DnSAuytstrRnt5Vdnuk/RS8t2qfcm2jP
+NWrZNOix9U+pT7qUQ1wbK/ew+qWbNFlvp9i3XyZdfPpyEmYD4CsBvkVbiH+FULwS
+F4OJQlJoDJ1vHnSPMNSGtiNRTLSQ2+E6huqktyAY+rcTamCEkCdoZ5NTyMbEgqZ1
+P4fr+h+EpV0h/ACzjhE4sq6MK6KZFv3a3Erlk4oC93BVJpcYyZyQneKQSapbAv9u
+oYCTLHyCrBdXItnFEHhy1zN0DvbWoGtsxDvAVjY3D9YP32Yu3WvxeW25bQARAQAB
+tDxMZXdpcyBKb2huIE1jR2liYm5leSAoQ09ERSBTSUdOSU5HIEtFWSkgPGxld2lz
+bWNAYXBhY2hlLm9yZz6JAjgEEwECACIFAk+NSUkCGwMGCwkIBwMCBhUIAgkKCwQW
+AgMBAh4BAheAAAoJEPReeXDGAbynxnQP/1s1e1eDUAvZv1k+OVhG+nDhqtBtmFV6
+sx67atpzZCj6ckKXphkiWAFmYsAH7pujHgASuAIoMY7MLjaRuG2MiEdWINYH5LVB
+xmZ3M9f1+YBuTSs/0KKBfqVBYm5vbEC+vBkjez54DOJ7OfRQllra98FR5GxEoYhh
+bIQDtUtYrLjzd9kbUH5J+cTgSJ08ciIxanscvFRE7+X2sQTopor6f+o7iea7k6KM
+b5FJ9mi4Q3RQbkorncyyDp4O7rBsuaGeD2oORdSM1zT5ql3glq7cYUI8havHY696
+jWYLOc951l6fDofGi4ZirX0+Mlxj+d2BNY54rx9dl6pZOmahvD4pveq/vbzwOH9E
+vb1uTfRIYLaNW++1nXzPBZ5nzsemDb3K8yVYXnCDrqmzOZMJu5AinvUUusTrRhT/
+4oy2AO1YEIjgwHFzYvv7C7/wYSQC5AxvO0plvyH/kMK/vQk3H7I13isHdyZhEjrR
+e+ciNzPWh4R6W8zVbe29MljItmINWniJ/CnYi9/r7ZtkQUBUCmHQZcsCm2DflA83
+ueLozFY3NH2eQ4q9dY8QIJDOpsX1SrP8DUOpuai3PvEiE8stHxGpamFq2DgnS81x
+/e/kSbIBD6QGgP1S7Zrkdz4jriCCY4mv9mYMu9De/sObYcpGdg6rE49lz9NWeE8w
+Wtt1oexR6DhpuQINBE+NSUkBEADOm92hnYd9ZNSmaVSUegmo0Rx9CMIzRZzHXPXT
+SxxMnJScWDKeTWa7U1A0peiNIUKKlgFcnUY176o4wk8y2sNgyYkYO6wQlzmoyQIh
+Ft0fqE3LMKBJcW2JONWFVrFZpRPTFvRWnDOSur8IQq3rJkyiqfT5y0E7PAdd8aa3
+l7anp8gfKCf9iIYtgfNsKNphngkwOLNDVsED7G/VRfAezjDKyf0M9HSL0fjQ5YDe
+L5MMmgduvYKBtWISM5tqJAunkMpGeWJ6/khJZT+bLK8iLM2073W5uSlNs6oO2AM8
+lDvfmnsFC4178mbU9nJNi+KAXzwZXH4xcqywRKZhuWI5BVPGi50HJ/RIZtDyrkrK
+W7NACtmniuFzSy9PxrM2iappUsfY8b7uZBzGoo1BzT7F7VM7sSte+X+zs8TZ0dam
+6TbuGMuv5rPQGAwu2JWUNOeBzXvfkg3gzk4qZrBdHtUrQjx33c1NBZddLcoSqzgC
+ph2cz4NG4Fs/Mi8SXoKBwJGVeWE+ZCBma8vFP/zctb/XroIaFSE5rAwHydwCB4gu
+VB3rNuLCoiiB50lPzAPFjjFxOuZeTZfl4bp1XRE1KKYi+n974At4HDd5g0Az8w37
+5/9G+pARCzjytvIHJTYQDsG0hfnj2Vfb5WWYF6LMib0ZGf739Yp7L602/yE9QAKm
+bifPCQARAQABiQIfBBgBAgAJBQJPjUlJAhsMAAoJEPReeXDGAbynzc4P/AomVPfY
+bY61TE+QSKAJl8/dyyw+LSddTPFTleVBFHlq1tnQmLWxoNq5t1CRXUJOv3q6haPE
+PLKR5pXXtNzAGVP74Jipa5r8FQjBG0j+XriiHmr861xyno0uPG23c0LSRqHrcLi6
+tgN2Q2ihu1Tjaql+ukzPI6u2v97FD0qhJWKvFFo64p7HTNUXHJLQ9N/m1Pien7Nm
+KFLRI0Pu0CW95I1w2gAAlS++lIxT3/ANfw6SpK9+lNBaan1g0xM5/P54MIQvZgCQ
+gdIcWdAOmXjTyMryconkeNRWpkYjXG4hZj9crP48j3lZPlUYol4pdkQ1CtSq1emv
+VDGoUrn5bRWoybOFfx3joOLpUqJA5PDjeN7YMpJNWc3O/lz+S+sW9WZY7vwbK+Mn
+E/l4Bz2k9fQDsxm2rPzM2aS/qaBo9v7vj+NE85B2/NE9cXo0WoC8u5o+KEQY6urV
+ANW/A0k94wmfoBMbmzNZ5Y5zJ9vceW9d4FE2FXaynRke2awYHBZE2Ty3MSxCQAvp
+MREQKzxB1XcR+Frj0nMKMmdEmM55OmIgAqAct1OuGDbOATJMcmVuwHqTZIdynzqh
+NPgXHx4ASqesjF/9GUrAQfOmXqHdOF6xOb7YYGssl1kgvOQRVJhkWtmTckyk+xu9
+U3Wt+q9F6O+RmemV6a6mrpog+Aq+BkIMWCJ8
+=xHbT
+-----END PGP PUBLIC KEY BLOCK-----
diff --git a/trunk/LICENSE.txt b/trunk/LICENSE.txt
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/trunk/LICENSE.txt
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/trunk/NOTICE.txt b/trunk/NOTICE.txt
new file mode 100644
index 0000000..677c81f
--- /dev/null
+++ b/trunk/NOTICE.txt
@@ -0,0 +1,5 @@
+Apache Gora
+Copyright 2012 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
diff --git a/trunk/README.txt b/trunk/README.txt
new file mode 100644
index 0000000..3601089
--- /dev/null
+++ b/trunk/README.txt
@@ -0,0 +1,72 @@
+Apache Gora Project
+===================
+ 
+The Apache Gora open source framework provides an in-memory data model 
+and persistence for big data. Gora supports persisting to column stores, 
+key value stores, document stores and RDBMSs, and analyzing the data 
+with extensive Apache Hadoop MapReduce support. 
+
+Why Gora?
+---------
+
+Although there are various excellent ORM frameworks for relational
+databases, data modeling in NoSQL data stores differs profoundly
+from its relational counterpart. Moreover, data-model-agnostic
+frameworks such as JDO are not sufficient for use cases where one
+needs to use the full power of the data models in column stores.
+Gora fills this gap by giving the user an easy-to-use ORM framework
+with data store specific mappings and built-in Apache Hadoop support.
+
+The overall goal for Gora is to become the standard data representation
+and persistence framework for big data. The roadmap of Gora can be
+grouped as follows.
+
+* Data Persistence : Persisting objects to column stores such as
+  HBase, Cassandra, and Hypertable; key-value stores such as Voldemort
+  and Redis; SQL databases such as MySQL and HSQLDB; and flat files in
+  the local file system or Hadoop HDFS.
+
+* Data Access : An easy-to-use, Java-friendly common API for accessing
+  the data regardless of its location.
+
+* Indexing : Persisting objects to Lucene and Solr indexes,
+  accessing/querying the data with the Gora API.
+
+* Analysis : Accessing the data and performing analysis through
+  adapters for Apache Pig, Apache Hive and Cascading.
+
+* MapReduce support : Out-of-the-box and extensive MapReduce (Apache
+  Hadoop) support for data in the data store.
+
+Background
+----------
+
+ORM stands for Object-Relational Mapping. It is a technology which
+abstracts the persistence layer (mostly relational databases) so
+that plain domain-level objects can be used, without the cumbersome
+effort of saving and loading the data to and from the database. Gora
+differs from current solutions in that:
+
+* Gora is specifically focused on NoSQL data stores, but also has
+  limited support for SQL databases.
+
+* The main use case for Gora is to access/analyze big data using Hadoop.
+
+* Gora uses Avro for bean definition, not byte code enhancement or annotations.
+
+* Object-to-data store mappings are backend specific, so that the
+  full data model can be utilized.
+
+* Gora is simple since it ignores complex SQL mappings.
+
+* Gora will support persistence, indexing and analysis of data, using
+  Pig, Lucene, Hive, etc.
+
+
+For the latest information about Gora, please visit our website at:
+
+   http://gora.apache.org
+
+License
+-------
+Gora is provided under Apache License version 2.0. See LICENSE.txt for more details.
diff --git a/trunk/bin/compile-examples.sh b/trunk/bin/compile-examples.sh
new file mode 100755
index 0000000..57d95ad
--- /dev/null
+++ b/trunk/bin/compile-examples.sh
@@ -0,0 +1,43 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# resolve links - $0 may be a softlink
+THIS="$0"
+while [ -h "$THIS" ]; do
+  ls=`ls -ld "$THIS"`
+  link=`expr "$ls" : '.*-> \(.*\)$'`
+  if expr "$link" : '.*/.*' > /dev/null; then
+    THIS="$link"
+  else
+    THIS=`dirname "$THIS"`/"$link"
+  fi
+done
+
+# some directories
+THIS_DIR=`dirname "$THIS"`
+GORA_HOME=`cd "$THIS_DIR/.." ; pwd`
+
+MODULE=gora-core
+DIR=$GORA_HOME/$MODULE/src/examples/avro/
+OUTDIR=$GORA_HOME/$MODULE/src/examples/java
+GORA_BIN=$GORA_HOME/bin/gora
+
+for f in "$DIR"* ; do
+  echo "Compiling $f"
+  "$GORA_BIN" compile "$f" "$OUTDIR"
+done
+
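Both helper scripts open with the same idiom: follow `$0` through any chain of symlinks before computing `GORA_HOME`, so the script works when invoked via a link. A standalone, runnable sketch of that loop is below; the temp-directory layout and file names are made up for the demo and are not part of the Gora scripts.

```shell
#!/bin/bash
# Sketch of the symlink-resolution loop from bin/gora and
# bin/compile-examples.sh. We build a two-level symlink chain in a temp
# directory and resolve it back to the real file.
WORK=$(mktemp -d)
mkdir -p "$WORK/real/bin"
printf '#!/bin/sh\n' > "$WORK/real/bin/tool"
ln -s "$WORK/real/bin/tool" "$WORK/link1"
ln -s "$WORK/link1" "$WORK/link2"   # chain: link2 -> link1 -> tool

THIS="$WORK/link2"
while [ -h "$THIS" ]; do
  ls=`ls -ld "$THIS"`
  link=`expr "$ls" : '.*-> \(.*\)$'`          # text after "-> " in ls output
  if expr "$link" : '.*/.*' > /dev/null; then
    THIS="$link"                              # target contains a path: use it directly
  else
    THIS=`dirname "$THIS"`/"$link"            # bare name: relative to the link's dir
  fi
done

echo "$THIS"     # the fully resolved script path
rm -rf "$WORK"
```

A more modern script might use `readlink -f` instead, but the `ls`/`expr` loop above is portable to shells and platforms without GNU coreutils.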
diff --git a/trunk/bin/gora b/trunk/bin/gora
new file mode 100755
index 0000000..af02553
--- /dev/null
+++ b/trunk/bin/gora
@@ -0,0 +1,152 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+##
+# The script to run Java components.
+#
+# Environment Variables
+#
+#   GORA_HEAPSIZE  The maximum amount of heap to use, in MB. 
+#                   Default is 1024.
+#
+#   GORA_OPTS      Extra Java runtime option.
+#
+
+# resolve links - $0 may be a softlink
+THIS="$0"
+while [ -h "$THIS" ]; do
+  ls=`ls -ld "$THIS"`
+  link=`expr "$ls" : '.*-> \(.*\)$'`
+  if expr "$link" : '.*/.*' > /dev/null; then
+    THIS="$link"
+  else
+    THIS=`dirname "$THIS"`/"$link"
+  fi
+done
+
+# if no args specified, show usage
+if [ $# = 0 ]; then
+  echo "Usage: run COMMAND [COMMAND options]"
+  echo "where COMMAND is one of:"
+  echo "  compile                    Run Compiler"
+  echo "  specificcompiler           Run Avro Specific Compiler"
+  echo "  logmanager                 Run the tutorial log manager"
+  echo "  loganalytics               Run the tutorial log analytics"
+  echo "  junit         	     Run the given JUnit test"
+  echo " or"
+  echo " MODULE CLASSNAME   run the class named CLASSNAME in module MODULE"
+  echo "Most commands print help when invoked w/o parameters."
+  exit 1
+fi
+
+# get arguments
+COMMAND=$1
+shift
+
+# some directories
+THIS_DIR=`dirname "$THIS"`
+GORA_HOME=`cd "$THIS_DIR/.." ; pwd`
+
+if [ -f "${GORA_HOME}/conf/gora-env.sh" ]; then
+  . "${GORA_HOME}/conf/gora-env.sh"
+fi
+
+if [ "$JAVA_HOME" = "" ]; then
+  echo "Error: JAVA_HOME is not set."
+  exit 1
+fi
+
+JAVA=$JAVA_HOME/bin/java
+JAVA_HEAP_MAX=-Xmx1024m 
+
+# check envvars which might override default args
+if [ "$GORA_HEAPSIZE" != "" ]; then
+  #echo "run with heapsize $GORA_HEAPSIZE"
+  JAVA_HEAP_MAX="-Xmx""$GORA_HEAPSIZE""m"
+  #echo $JAVA_HEAP_MAX
+fi
+
+# CLASSPATH initially contains $GORA_CONF_DIR, or defaults to $GORA_HOME/conf
+CLASSPATH=${GORA_CONF_DIR:=$GORA_HOME/conf}
+CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar
+
+# so that filenames w/ spaces are handled correctly in loops below
+IFS=
+
+# restore ordinary behaviour
+unset IFS
+
+# default log directory & file
+if [ "$GORA_LOG_DIR" = "" ]; then
+  GORA_LOG_DIR="$GORA_HOME/logs"
+fi
+if [ "$GORA_LOGFILE" = "" ]; then
+  GORA_LOGFILE='gora.log'
+fi
+
+if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+  JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
+fi
+
+#GORA_OPTS="$GORA_OPTS -Dhadoop.log.dir=$GORA_LOG_DIR"
+#GORA_OPTS="$GORA_OPTS -Dhadoop.log.file=$GORA_LOGFILE"
+
+# figure out which class to run
+if [ "$COMMAND" = "compile" ] ; then
+  MODULE=gora-core
+  CLASSPATH=$CLASSPATH:$GORA_HOME/$MODULE/target/classes/
+  CLASS=org.apache.gora.compiler.GoraCompiler
+elif [ "$COMMAND" = "specificcompiler" ] ; then
+  MODULE=gora-core
+  CLASSPATH=$CLASSPATH:$GORA_HOME/$MODULE/target/classes/
+  CLASS=org.apache.avro.specific.SpecificCompiler
+elif [ "$COMMAND" = "logmanager" ] ; then
+  MODULE=gora-tutorial
+  CLASSPATH=$CLASSPATH:$GORA_HOME/$MODULE/target/classes/
+  CLASS=org.apache.gora.tutorial.log.LogManager
+elif [ "$COMMAND" = "loganalytics" ] ; then
+  MODULE=gora-tutorial
+  CLASS=org.apache.gora.tutorial.log.LogAnalytics
+  CLASSPATH=$CLASSPATH:$GORA_HOME/$MODULE/target/classes/
+elif [ "$COMMAND" = "junit" ] ; then
+  MODULE=*
+  CLASSPATH=$CLASSPATH:target/test-classes/
+  CLASS=junit.textui.TestRunner
+else
+  MODULE="$COMMAND"
+  CLASS=$1
+  shift
+fi
+
+# add libs to CLASSPATH
+for f in $GORA_HOME/$MODULE/lib/*.jar; do
+  CLASSPATH=${CLASSPATH}:$f;
+done
+
+for f in $GORA_HOME/$MODULE/target/*.jar; do
+  CLASSPATH=${CLASSPATH}:$f;
+done
+
+CLASSPATH=${CLASSPATH}:$GORA_HOME/$MODULE/target/classes/
+CLASSPATH=${CLASSPATH}:$GORA_HOME/$MODULE/target/test-classes/
+
+CLASSPATH=${CLASSPATH}:$GORA_HOME/conf
+CLASSPATH=${CLASSPATH}:$GORA_HOME/$MODULE/conf
+
+# run it
+exec "$JAVA" $JAVA_HEAP_MAX $JAVA_OPTS $GORA_OPTS -classpath "$CLASSPATH" $CLASS "$@"
diff --git a/trunk/build-common.xml b/trunk/build-common.xml
new file mode 100644
index 0000000..454ed1a
--- /dev/null
+++ b/trunk/build-common.xml
@@ -0,0 +1,487 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<project name="gora-common" xmlns:ivy="antlib:org.apache.ivy.ant">
+
+  <property name="name" value="${ant.project.name}" />
+  <property name="version" value="0.2-incubating" />
+  <property name="final.name" value="${ant.project.name}-${version}" />
+  <property name="year" value="2011" />
+  <property name="module.version.target" value="${version}" />
+  <property name="module.name" value="${ant.project.name}-${version}"/>
+	
+  <!-- Load all the default properties, and any the user wants    -->
+  <!-- to contribute (without having to type -D or edit this file -->
+  <property file="${user.home}/build.properties" />
+  <property file="${basedir}/build.properties" />
+  <property file="${basedir}/default.properties" />
+	
+  <property name="project.dir" value="${basedir}/.."/>
+  <property name="lib.dir" value="${basedir}/lib" />
+  <property name="lib-ext.dir" value="${basedir}/lib-ext" />
+  <property name="build.dir" value="${basedir}/build" />
+  <property name="build.classes.dir" value="${build.dir}/classes" />
+  <property name="src.dir" value="${basedir}/src/main/java" />
+  <property name="conf.dir" value="${basedir}/conf" />
+  <property name="jar.file" value="${build.dir}/${module.name}.jar" />
+  <property name="job.file" value="${build.dir}/${module.name}.job" />
+  <property name="dist.dir" value="${build.dir}/${name}" />
+
+  <property name="examples.src.dir" value="${basedir}/src/examples/java" />
+  <property name="examples.build.dir" value="${build.dir}/examples" />
+  <property name="examples.build.classes.dir" value="${examples.build.dir}/classes" />
+  
+  <property name="test.src.dir" value="${basedir}/src/test/java" />
+  <property name="test.conf.dir" value="${basedir}/src/test/conf" />
+  <property name="test.build.dir" value="${build.dir}/test" />
+  <property name="test.build.data" value="${test.build.dir}/data" />
+  <property name="test.build.classes.dir" value="${test.build.dir}/classes" />
+  <property name="test.jar.file" value="${build.dir}/${ant.project.name}-test-${version}.jar" />
+  <property name="test.lib.dir" value="${basedir}/src/test/lib"/>
+  <property name="test.log.dir" value="${test.build.dir}/logs"/>
+  <property name="test.include" value="Test*"/>
+  <property name="test.junit.output" value="no"/>
+  <property name="test.junit.timeout" value="3600000"/>
+  <property name="test.junit.output.format" value="plain"/>
+  <property name="test.junit.fork.mode" value="perTest" />
+  <property name="test.junit.printsummary" value="yes" />
+  <property name="test.junit.haltonfailure" value="no" />
+  <property name="test.junit.maxmemory" value="512m" />
+
+  <property name="build.encoding" value="UTF-8" />
+  <property name="javac.debug" value="on" />
+  <property name="javac.optimize" value="on" />
+  <property name="javac.deprecation" value="off" />
+  <property name="javac.version" value="1.6" />
+
+
+  <!-- Include project's build file -->
+  <import file="${project.dir}/build.xml"/>
+  <!--<ivy:settings file="${basedir}/ivy/ivysettings.xml" />-->
+
+  <!-- ====================================================== -->
+  <!-- path: classpath                                        -->
+  <!-- ====================================================== -->
+  <path id="classpath">
+    <pathelement location="${build.classes.dir}"/>
+    <pathelement location="${conf.dir}"/>
+    <fileset dir="${lib.dir}">
+      <include name="*.jar" />
+    </fileset>
+  </path>
+
+  <!-- the unit test classpath: uses test.src.dir for configuration -->
+  <path id="test.classpath">
+    <pathelement location="${examples.build.classes.dir}"/>
+    <pathelement location="${conf.dir}"/>
+    <pathelement location="${test.build.classes.dir}" />
+    <pathelement location="${test.src.dir}"/>
+    <pathelement location="${build.dir}"/>
+    <pathelement location="${test.conf.dir}"/>
+    <path refid="classpath"/>
+  </path>
+
+
+  <!-- ====================================================== -->
+  <!-- target: init                                           -->
+  <!-- ====================================================== -->
+  <target name="init" depends="ivy-init">
+    <mkdir dir="${build.dir}"/>
+    <mkdir dir="${build.classes.dir}"/>
+    
+    <mkdir dir="${examples.build.dir}"/>
+    <mkdir dir="${examples.build.classes.dir}"/>
+
+    <mkdir dir="${test.build.dir}"/>
+    <mkdir dir="${test.build.classes.dir}"/>
+
+    <copy todir="${conf.dir}" verbose="true" failonerror="false">
+      <fileset dir="${conf.dir}" includes="**/*.template"/>
+      <mapper type="glob" from="*.template" to="*"/>
+    </copy>
+    
+    <antcall target="init-module"/>
+  </target>
+
+  <target name="init-module" description="Modules can override this for post-init scripting"/>
+
+  <!-- ====================================================== -->
+  <!-- target: compile                                        -->
+  <!-- ====================================================== -->
+  <target name="compile" depends="init, resolve">
+    <javac 
+        encoding="${build.encoding}" 
+        srcdir="${src.dir}"
+        includes="**/*.java"
+        destdir="${build.classes.dir}"
+        debug="${javac.debug}"
+        optimize="${javac.optimize}"
+        target="${javac.version}"
+        source="${javac.version}"
+        deprecation="${javac.deprecation}"
+        includeAntRuntime="yes">
+      <classpath refid="classpath"/>
+    </javac>
+    
+    <antcall target="compile-module"/>
+  </target>
+
+  <target name="compile-module" description="Modules can override this for post-compile scripting"/>
+  
+  <!-- ====================================================== -->
+  <!-- target: compile-examples                               -->
+  <!-- ====================================================== -->
+  <target name="compile-examples" depends="compile">
+    <javac 
+        encoding="${build.encoding}" 
+        srcdir="${examples.src.dir}"
+        includes="**/*.java"
+        destdir="${examples.build.classes.dir}"
+        debug="${javac.debug}"
+        optimize="${javac.optimize}"
+        target="${javac.version}"
+        source="${javac.version}"
+        deprecation="${javac.deprecation}"
+        includeAntRuntime="yes">
+      <classpath refid="classpath"/>
+    </javac>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: compile-test                                               -->
+  <!-- ================================================================== -->
+  <target name="compile-test" depends="compile-examples, resolve-test">
+    <javac 
+         encoding="${build.encoding}" 
+         srcdir="${test.src.dir}"
+         includes="**/*.java"
+         destdir="${test.build.classes.dir}"
+         debug="${javac.debug}"
+         optimize="${javac.optimize}"
+         target="${javac.version}"
+         source="${javac.version}"
+         deprecation="${javac.deprecation}"
+         includeAntRuntime="yes">
+      <classpath refid="test.classpath"/>
+    </javac>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: jar                                                        -->
+  <!-- ================================================================== -->
+  <target name="jar" depends="compile, version" description="--> make a jar file for this project">
+    <jar destfile="${jar.file}">
+      <fileset dir="${build.classes.dir}" />
+      <!--<zipfileset dir="${conf.dir}" excludes="*.template"/>-->
+      <manifest>
+        <attribute name="Built-By" value="${user.name}"/>
+        <attribute name="Build-Version" value="${version}" />
+      </manifest>
+      <metainf dir="${project.dir}" includes="LICENSE.txt,NOTICE.txt" />
+    </jar>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: test-jar                                                   -->
+  <!-- ================================================================== -->
+  <target name="test-jar" depends="version, compile-test, jar" description="--> make a jar file for this project">
+    <jar destfile="${test.jar.file}">
+      <fileset dir="${examples.build.classes.dir}" />
+      <fileset dir="${test.build.classes.dir}" />
+      <fileset dir="${test.conf.dir}" excludes="*.template" />
+      <!--<zipfileset dir="${conf.dir}" excludes="*.template"/>-->
+      <manifest>
+        <attribute name="Built-By" value="${user.name}"/>
+        <attribute name="Build-Version" value="${version}" />
+      </manifest>
+      <metainf dir="${project.dir}" includes="LICENSE.txt,NOTICE.txt" />
+    </jar>
+  </target>
+
+  <target name="jar-snapshot" description="copies the jar files to append timestamp to artifact name">
+    <tstamp>
+      <format property="now" pattern="yyyyMMddHHmmss"/>
+    </tstamp>
+    <copy file="${jar.file}" tofile="${build.dir}/${module.name}-${now}.jar"/>
+    <echo message="New snapshot jar file copied to ${build.dir}/${module.name}-${now}.jar"/>
+  </target>
+
+  <target name="test-jar-snapshot" description="copies the jar files to append timestamp to artifact name">
+    <tstamp>
+      <format property="now" pattern="yyyyMMddHHmmss"/>
+    </tstamp>
+    <copy file="${jar.file}" tofile="${build.dir}/${module.name}-${now}.jar"/>
+    <copy file="${test.jar.file}" tofile="${build.dir}/${ant.project.name}-test-${version}-${now}.jar"/>
+
+    <echo message="New snapshot jar file copied to ${build.dir}/${module.name}-${now}.jar"/>
+    <echo message="New snapshot jar file copied to ${build.dir}/${ant.project.name}-test-${version}-${now}.jar"/>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: job                                                        -->
+  <!-- ================================================================== -->
+  <target name="job" depends="compile" description="--> make a job file for running on hadoop">
+    <jar destfile="${job.file}">
+      <fileset dir="${build.classes.dir}" />
+      <zipfileset dir="conf" excludes="*.template" />
+      <zipfileset dir="${lib.dir}" prefix="lib" includes="**/*.jar" excludes="hadoop-*.jar" followsymlinks="true"/>
+      <zipfileset dir="${lib.dir}" prefix="lib" includes="hadoop-gpl-compression*.jar" followsymlinks="true"/>
+      <manifest>
+        <attribute name="Built-By" value="${user.name}" />
+        <attribute name="Build-Version" value="${version}" />
+      </manifest>
+      <metainf dir="${project.dir}" includes="LICENSE.txt,NOTICE.txt" />
+    </jar>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: test                                                       -->
+  <!-- ================================================================== -->
+  <target name="test" depends="compile-test" description="Run core unit tests">
+
+    <delete dir="${test.build.data}"/>
+    <mkdir dir="${test.build.data}"/>
+    <delete dir="${test.log.dir}"/>
+    <mkdir dir="${test.log.dir}"/>
+    <junit showoutput="${test.junit.output}"
+      printsummary="${test.junit.printsummary}"
+      haltonfailure="${test.junit.haltonfailure}"
+      fork="yes"
+      forkmode="${test.junit.fork.mode}"
+      maxmemory="${test.junit.maxmemory}"
+      dir="${basedir}" timeout="${test.junit.timeout}"
+      errorProperty="tests.failed" failureProperty="tests.failed">
+      <sysproperty key="test.build.data" value="${test.build.data}"/>
+      <sysproperty key="hadoop.log.dir" value="${test.log.dir}"/>
+      <classpath refid="test.classpath"/>
+      <formatter type="${test.junit.output.format}" />
+      <batchtest todir="${test.build.dir}" unless="testcase">
+        <fileset dir="${test.src.dir}"
+                 includes="**/${test.include}.java"
+                 excludes="**/${test.exclude}.java" />
+      </batchtest>
+      <batchtest todir="${test.build.dir}" if="testcase">
+        <fileset dir="${test.src.dir}" includes="**/${testcase}.java"/>
+      </batchtest>
+    </junit>
+    <fail if="tests.failed">Tests failed!</fail>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- D I S T R I B U T I O N                                            -->
+  <!-- ================================================================== -->
+  <target name="package" depends="compile, jar, test-jar"
+	  description="Build distribution">
+    <mkdir dir="${dist.dir}"/>
+    <mkdir dir="${dist.dir}/lib"/>
+
+    <copy todir="${dist.dir}/lib" includeEmptyDirs="false">
+      <fileset dir="lib"/>
+    </copy>
+
+    <copy todir="${dist.dir}">
+      <fileset file="${jar.file}"/>
+      <fileset file="${test.jar.file}"/>
+    </copy>
+    
+    <copy todir="${dist.dir}/conf">
+      <fileset dir="${conf.dir}" excludes="**/*.template"/>
+    </copy>
+
+    <copy todir="${dist.dir}/ivy">
+      <fileset dir="ivy"/>
+    </copy>
+
+    <copy todir="${dist.dir}">
+      <fileset dir=".">
+        <include name="*.txt" />
+      </fileset>
+    </copy>
+
+    <copy todir="${dist.dir}/src" includeEmptyDirs="true">
+      <fileset dir="src" excludes="**/*.template"/>
+    </copy>
+  	
+    <copy todir="${dist.dir}/" file="build.xml"/>
+
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- Make release tarball                                               -->
+  <!-- ================================================================== -->
+  <target name="tar" depends="package" description="Make release tarball">
+    <macro_tar param.destfile="${build.dir}/${final.name}.tar.gz">
+      <param.listofitems>
+        <tarfileset dir="${build.dir}" mode="664">
+          <exclude name="${name}/bin/*" />
+          <include name="${name}/**" />
+        </tarfileset>
+        <tarfileset dir="${build.dir}" mode="755">
+          <include name="${name}/bin/*" />
+        </tarfileset>
+      </param.listofitems>
+    </macro_tar>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: resolve                                                    -->
+  <!-- ================================================================== -->
+  <target name="resolve" depends="clean-lib" description="--> resolve and retrieve dependencies with ivy">
+    <mkdir dir="${lib.dir}"/> <!-- not usually necessary, ivy creates the directory IF there are dependencies -->
+    	
+    <!-- the call to resolve is not mandatory, retrieve makes an implicit call if we don't -->
+    <ivy:resolve file="${ivy.dir}/${ivy.file}" conf="compile" log="download-only"/>
+    <ivy:retrieve pattern="${lib.dir}/[artifact]-[revision](-[classifier]).[ext]" symlink="true" log="quiet" conf="compile"/> 
+
+    <!-- copy the libs in lib-ext, which are not ivy enabled, should change in the future -->
+    <copy todir="${lib.dir}/" failonerror="false">
+      <fileset dir="${lib-ext.dir}" includes="**/*.jar"/>
+    </copy>
+  </target>
+   
+  <!-- ================================================================== -->
+  <!-- target: resolve-test                                               -->
+  <!-- ================================================================== -->
+  <target name="resolve-test" depends="clean-lib" description="--> resolve and retrieve dependencies with ivy">
+    <mkdir dir="${lib.dir}"/> <!-- not usually necessary, ivy creates the directory IF there are dependencies -->
+
+    <!-- the call to resolve is not mandatory, retrieve makes an implicit call if we don't -->
+    <ivy:resolve file="${ivy.dir}/${ivy.file}" conf="test" log="download-only"/>
+    <ivy:retrieve pattern="${lib.dir}/[artifact]-[revision].[ext]" symlink="true" log="quiet" conf="test"/>
+
+    <!-- copy the libs in lib-ext, which are not ivy enabled, should change in the future -->
+    <copy todir="${lib.dir}/" failonerror="false">
+      <fileset dir="${lib-ext.dir}" includes="**/*.jar"/>
+    </copy>
+  </target>	 
+
+  <!-- ================================================================== -->
+  <!-- target: report                                                     -->
+  <!-- ================================================================== -->
+  <target name="report" depends="resolve" description="--> generates a report of dependencies">
+    <ivy:report todir="${build.dir}"/>
+  </target>
+        
+
+  <!-- ================================================================== -->
+  <!-- target: ivy-new-version                                            -->
+  <!-- ================================================================== -->
+  <target name="ivy-new-version" depends="" unless="ivy.new.revision">
+    <!-- default module version prefix value -->
+    <property name="module.version.prefix" value="${module.version.target}-dev-b" />
+	
+    <!-- asks to ivy an available version number -->
+    <ivy:info file="${ivy.dir}/${ivy.file}" />
+      <ivy:buildnumber organisation="${ivy.organisation}" module="${ivy.module}"
+                       revision="${module.version.prefix}" defaultBuildNumber="1" revSep=""/>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: local-version                                              -->
+  <!-- ================================================================== -->
+  <target name="local-version">
+    <tstamp>
+      <format property="now" pattern="yyyyMMddHHmmss"/>
+    </tstamp>
+    <property name="ivy.new.revision" value="${module.version.target}-local-${now}"/>
+  </target>	
+
+  <!-- ================================================================== -->
+  <!-- target: version                                                    -->
+  <!-- ================================================================== -->
+  <target name="version" depends="ivy-new-version">
+    <!-- create version file in classpath for later inclusion in jar -->
+    <mkdir dir="${build.classes.dir}"/>
+    <echo message="version=${ivy.new.revision}" file="${build.classes.dir}/${ant.project.name}.properties" append="false" />
+
+    <!-- load generated version properties file -->
+    <property file="${classes.dir}/${ant.project.name}.properties" />
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: publish                                                    -->
+  <!-- ================================================================== -->
+   <target name="publish" depends="version, clean-build, jar" description="--> publish this project in the ivy repository">
+     <ivy:publish artifactspattern="${build.dir}/[artifact].[ext]" 
+                  resolver="shared" 
+                  pubrevision="${version}" 
+                  status="release"/>
+
+     <echo message="project ${ant.project.name} released with version ${version}" />
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: publish-local                                              -->
+  <!-- ================================================================== -->
+  <target name="publish-local" depends="local-version, jar" description="--> publish this project in the local ivy repository">
+    <ivy:publish artifactspattern="${build.dir}/[artifact]-${version}.[ext]" 
+    			        resolver="local"
+    			        pubrevision="${version}"
+				pubdate="${now}"
+    			        status="integration"
+    				forcedeliver="true"
+    				overwrite="true"
+                                conf="compile"
+    	/>
+    <echo message="project ${ant.project.name} published locally with version ${version}" />
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: publish-local-test                                         -->
+  <!-- ================================================================== -->
+  <target name="publish-local-test" depends="local-version, test-jar" description="--> publish this project in the local ivy repository">
+    <ivy:publish artifactspattern="${build.dir}/[artifact]-${version}.[ext]"
+                                resolver="local"
+                                pubrevision="${version}"
+                                pubdate="${now}"
+                                status="integration"
+                                forcedeliver="true"
+                                overwrite="true"
+        />
+    <echo message="project ${ant.project.name} published locally with version ${version}" />
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: clean-local                                                -->
+  <!-- ================================================================== -->
+  <target name="clean-local" depends="" description="--> cleans the local repository for the current module">
+    <ivy:info file="${ivy.dir}/${ivy.file}" />
+      <delete dir="${ivy.local.default.root}/${ivy.organisation}/${ivy.module}"/>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: clean-lib                                                  -->
+  <!-- ================================================================== -->
+  <target name="clean-lib" description="--> clean the project libraries directory (dependencies)">
+    <delete includeemptydirs="true" dir="${lib.dir}"/>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: clean-build                                                -->
+  <!-- ================================================================== -->
+  <target name="clean-build" description="--> clean the project built files">
+    <delete includeemptydirs="true" dir="${build.dir}"/>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: clean                                                      -->
+  <!-- ================================================================== -->
+  <target name="clean" depends="clean-build, clean-lib" description="--> clean the project" />
+
+</project>
diff --git a/trunk/build.xml b/trunk/build.xml
new file mode 100644
index 0000000..bc1a2fe
--- /dev/null
+++ b/trunk/build.xml
@@ -0,0 +1,390 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<project name="gora" default="publish-local-all" 
+  xmlns:ivy="antlib:org.apache.ivy.ant">
+
+  <property name="year" value="2011" />
+  <property name="project.dir" value="${basedir}"/>
+  <property name="version" value="0.2-incubating" />
+  <property name="final.name" value="${ant.project.name}-${version}" />
+
+  <property name="dist.base.dir" value="${basedir}/dist"/>
+  <property name="dist.dir" value="${dist.base.dir}/${final.name}"/>
+
+  <property name="build.dir" value="${basedir}/build" />
+  <property name="docs.dir" value="${basedir}/docs" />
+  <property name="build.docs" value="${build.dir}/docs" />  
+  <property name="build.javadoc" value="${build.docs}/api/" />
+  <property name="javadoc.link.java" value="http://java.sun.com/javase/6/docs/api/" />
+  <property name="javadoc.link.hadoop" value="http://hadoop.apache.org/common/docs/r0.20.2/api/" />
+  <property name="javadoc.link.avro" value="http://avro.apache.org/docs/1.3.3/api/java/" />
+  <property name="build.javadoc.timestamp" value="${build.javadoc}/index.html" />
+
+  <!-- module directories -->
+  <property name="src.dir" value="src/main/java" />
+  <property name="conf.dir" value="conf" />
+  <property name="lib.dir" value="lib" />
+  <property name="lib-ext.dir" value="lib-ext" />
+  <property name="build.classes.dir" value="build/classes" />
+
+
+  <!-- Load all the default properties, and any the user wants    -->
+  <!-- to contribute (without having to type -D or edit this file) -->
+  <property file="${user.home}/build.properties" />
+  <property file="${basedir}/build.properties" />
+  <property file="${basedir}/default.properties" />
+  
+  <!-- setup ivy default configuration with some custom info -->
+  <property name="ivy.file" value="ivy.xml" />
+  <property name="ivy.version" value="2.1.0" />
+  <property name="ivy.dir" value="${basedir}/ivy" />
+  <property name="project.ivy.dir" value="${project.dir}/ivy" />
+  <property name="ivy.jar" location="${project.ivy.dir}/ivy-${ivy.version}.jar" />
+  <property name="ivy.repo.url" value="http://repo2.maven.org/maven2/org/apache/ivy/ivy/${ivy.version}/ivy-${ivy.version}.jar" />
+
+  <property name="ivy.local.default.root" value="${ivy.default.ivy.user.dir}/local" />
+  <property name="ivy.local.default.ivy.pattern" value="[organisation]/[module]/[revision]/[type]s/[artifact].[ext]" />
+  <property name="ivy.local.default.artifact.pattern" value="[organisation]/[module]/[revision]/[type]s/[artifact].[ext]" />
+
+  <property name="ivy.shared.default.root" value="${ivy.default.ivy.user.dir}/shared" />
+  <property name="ivy.shared.default.ivy.pattern" value="[organisation]/[module]/[revision]/[type]s/[artifact].[ext]" />
+  <property name="ivy.shared.default.artifact.pattern" value="[organisation]/[module]/[revision]/[type]s/[artifact].[ext]" />
+
+  <!-- target: init  ================================================ -->
+  <target name="init" depends="ivy-init">
+    <chmod dir="bin" perm="ugo+rx" includes="*.sh, gora"/>
+  </target>
+
+  <!-- target: -buildlist  ================================================ -->    
+  <target name="-buildlist" depends="init">
+    <ivy:buildlist reference="build-path" ivyfilepath="ivy/${ivy.file}">
+      <fileset dir="." includes="*/build.xml" excludes="build.xml"/>
+    </ivy:buildlist>
+  </target>
+  
+  <target name="compile" depends="-buildlist">
+    <subant target="compile" buildpathref="build-path" />
+  </target>
+
+  <target name="jar" depends="-buildlist">
+    <subant target="jar" buildpathref="build-path" />
+  </target>
+
+  <target name="jar-snapshot" depends="-buildlist, jar" description="copies the jar files, appending a timestamp to the artifact name">
+    <subant target="jar-snapshot" buildpathref="build-path" />
+  </target>
+
+  <target name="compile-test" depends="publish-local-all-test">
+  </target>
+
+  <target name="test-jar" depends="publish-local-all-test"/>
+
+  <target name="test-jar-snapshot" depends="test-jar" description="copies the jar files, appending a timestamp to the artifact name">
+    <subant target="test-jar-snapshot" buildpathref="build-path" />
+  </target>
+
+  <target name="test" depends="publish-local-all-test">
+    <subant target="test" buildpathref="build-path" />
+  </target>
+	
+  <target name="nightly" depends="test, tar" />
+
+  <!-- ================================================================== -->
+  <!-- Documentation                                                      -->
+  <!-- ================================================================== -->
+
+  <path id="classpath">
+    <dirset dir=".">
+      <include name="*/${build.classes.dir}"/>
+      <include name="*/${conf.dir}"/>
+    </dirset>    
+
+    <fileset dir=".">
+      <include name="**/*.jar" />
+    </fileset>
+  </path>
+
+  <target name="javadoc-uptodate" depends="compile">
+    <uptodate property="javadoc.is.uptodate">
+      <srcfiles dir="${project.dir}/gora-core/${src.dir}">
+        <include name="**/*.java" />
+        <include name="**/*.html" />
+      </srcfiles>
+      <mapper type="merge" to="${build.javadoc.timestamp}" />
+    </uptodate>
+  </target>
+ 
+  <target name="javadoc" description="Generate javadoc" depends="javadoc-uptodate"
+       unless="javadoc.is.uptodate">
+    <mkdir dir="${build.javadoc}"/>
+    <javadoc
+      overview="gora-core/${src.dir}/overview.html"
+      destdir="${build.javadoc}"
+      author="true"
+      version="true"
+      use="true"
+      windowtitle="Apache Gora API"
+      doctitle="Apache Gora API"
+      bottom="Copyright &amp;copy; ${year} The Apache Software Foundation">
+
+      <packageset dir="gora-core/src/main/java"/>
+      <packageset dir="gora-cassandra/src/main/java"/>
+      <packageset dir="gora-hbase/src/main/java"/>
+      <packageset dir="gora-sql/src/main/java"/>
+      
+      <link href="${javadoc.link.java}"/>
+      <link href="${javadoc.link.avro}"/>
+      <link href="${javadoc.link.hadoop}"/>
+      
+      <classpath refid="classpath"/>
+
+      <group title="Core" packages="org.apache.gora.*"/>
+      <group title="Cassandra Module" packages="org.apache.gora.cassandra.*"/>
+      <group title="HBase Module" packages="org.apache.gora.hbase.*"/>
+      <group title="SQL Module" packages="org.apache.gora.sql.*"/>
+
+    </javadoc>
+  </target>
+
+  <target name="docs" depends="forrest.check, javadoc" description="Generate forrest-based documentation. To use, specify -Dforrest.home=&lt;base of Apache Forrest installation&gt; on the command line." if="forrest.home">
+    <exec dir="${docs.dir}" executable="${forrest.home}/bin/forrest" failonerror="true">
+      <arg value="-Dforrest.validate.xdocs.failonerror=false"/>
+      <arg value="-Dproject.build-dir=${build.docs}"/>
+    </exec>
+
+    <copy todir="${build.docs}/site/api/" includeEmptyDirs="false">
+      <fileset dir="${build.javadoc}"/>
+    </copy>
+    
+  </target>
+
+  <target name="forrest.check" unless="forrest.home">
+    <fail message="'forrest.home' is not defined. Please pass -Dforrest.home=&lt;base of Apache Forrest installation&gt; to Ant on the command-line." />
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- Distribution                                                       -->
+  <!-- ================================================================== -->
+  <target name="package" depends="-buildlist, test-jar"
+	  description="Build distribution">
+
+    <subant target="package" buildpathref="build-path" />
+
+    <mkdir dir="${dist.dir}"/>
+    <mkdir dir="${dist.dir}/bin"/>
+    <mkdir dir="${dist.dir}/docs"/>
+    <mkdir dir="${dist.dir}/docs/api"/>
+    
+    <copy todir="${dist.dir}/bin">
+      <fileset dir="bin"/>
+    </copy>
+
+    <copy todir="${dist.dir}/conf">
+      <fileset dir="${conf.dir}" excludes="**/*.template"/>
+    </copy>
+
+    <copy todir="${dist.dir}/ivy">
+      <fileset dir="ivy">
+        <exclude name="ivy*.jar"/>
+      </fileset>
+    </copy>
+
+    <copy todir="${dist.dir}">
+      <fileset dir=".">
+        <include name="*.txt" />
+        <include name="*.xml" />
+      </fileset>
+    </copy>
+
+    <copy todir="${dist.dir}/docs">
+      <fileset dir="${build.docs}/site"/>
+    </copy>
+
+    <chmod perm="ugo+x" type="file" parallel="false">
+        <fileset dir="${dist.dir}/bin"/>
+    </chmod>
+
+    <!-- modules -->
+    <copy todir="${dist.dir}/gora-core">
+      <fileset dir="gora-core/build/gora-core"/>
+    </copy>
+    <copy todir="${dist.dir}/gora-cassandra">
+      <fileset dir="gora-cassandra/build/gora-cassandra"/>
+    </copy>
+    <copy todir="${dist.dir}/gora-hbase">
+      <fileset dir="gora-hbase/build/gora-hbase"/>
+    </copy>
+    <copy todir="${dist.dir}/gora-sql">
+      <fileset dir="gora-sql/build/gora-sql"/>
+    </copy>
+  </target>
+
+  <macrodef name="macro_tar" description="Worker Macro for tar">
+    <attribute name="param.destfile"/>
+    <element name="param.listofitems"/>
+    <sequential>
+      <tar compression="gzip" longfile="gnu"
+      destfile="@{param.destfile}">
+      <param.listofitems/>
+      </tar>
+    </sequential>
+  </macrodef>
+
+  <macrodef name="macro_zip" description="Worker Macro for zip">
+    <attribute name="param.destfile"/>
+    <element name="param.listofitems"/>
+    <sequential>
+      <zip destfile="@{param.destfile}">
+      <param.listofitems/>
+      </zip>
+    </sequential>
+  </macrodef>
+
+  <!-- ================================================================== -->
+  <!-- Make release tarball                                               -->
+  <!-- ================================================================== -->
+  <target name="tar" depends="docs, package" description="Make release tarball">
+    <macro_tar param.destfile="${dist.base.dir}/${final.name}.tar.gz">
+      <param.listofitems>
+        <tarfileset dir="${dist.base.dir}" mode="664">
+          <exclude name="${final.name}/bin/*" />
+          <include name="${final.name}/**" />
+        </tarfileset>
+        <tarfileset dir="${dist.base.dir}" mode="755">
+          <include name="${final.name}/bin/*" />
+        </tarfileset>
+      </param.listofitems>
+    </macro_tar>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- Make release zipball                                               -->
+  <!-- ================================================================== -->
+  <target name="zip" depends="package" description="Make release zipball">
+    <macro_zip param.destfile="${dist.base.dir}/${final.name}.zip">
+      <param.listofitems>
+        <zipfileset dir="${dist.base.dir}">
+          <exclude name="${final.name}/bin/*" />
+          <include name="${final.name}/**" />
+        </zipfileset>
+        <zipfileset dir="${dist.base.dir}">
+          <include name="${final.name}/bin/*" />
+        </zipfileset>
+      </param.listofitems>
+    </macro_zip>
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- Publish Targets                                                    --> 
+  <!-- ================================================================== -->
+
+  <!-- target: publish-local-all  ========================================== --> 
+  <target name="publish-local-all" depends="-buildlist"
+                           description="publish the projects to local ivy repo">
+    <subant target="publish-local" buildpathref="build-path" />
+  </target>
+  
+  <!-- target: publish-local-all-test  ===================================== --> 
+  <target name="publish-local-all-test" depends="-buildlist"
+                           description="publish the projects' test jars to the local ivy repo">
+    <subant target="publish-local-test" buildpathref="build-path" />
+  </target>
+
+  <!-- target: publish-all  ================================================ -->
+  <target name="publish-all" depends="-buildlist" 
+  			description="compile, jar and publish all projects in the right order">
+    <subant target="publish" buildpathref="build-path" />
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- Clean Targets                                                      --> 
+  <!-- ================================================================== -->
+
+  <!-- target: clean-all  ================================================ -->
+  <target name="clean-all" depends="-buildlist" description="clean all projects">
+    <subant target="clean" buildpathref="build-path" />
+  </target>
+
+  <!-- target: clean  ================================================ -->  
+  <target name="clean" depends="clean-all" 
+  			description="clean all projects">
+    <delete includeemptydirs="true" dir="${dist.base.dir}"/>
+    <delete includeemptydirs="true" dir="${build.dir}"/>
+  </target>
+
+  <!-- target: clean-cache  ================================================ -->  
+  <target name="clean-cache" depends="ivy-init" 
+  			description="delete ivy cache">
+    <ivy:cleancache />
+  </target>
+
+  <!-- ================================================================== -->
+  <!-- target: clean-docs                                                -->
+  <!-- ================================================================== -->
+  <target name="clean-docs">
+    <delete dir="${build.docs}"/>
+  </target>
+  
+  <!-- ================================================================== -->
+  <!-- Ivy Targets                                                        --> 
+  <!-- ================================================================== -->
+  
+  <!-- target: ivy-init  ================================================ -->
+  <target name="ivy-init" depends="ivy-probe-antlib, ivy-init-antlib">
+    <ivy:settings file="${project.ivy.dir}/ivysettings.xml" />
+  </target>
+  
+  <!-- target: ivy-probe-antlib  ======================================== -->
+  <target name="ivy-probe-antlib">
+    <condition property="ivy.found">
+      <typefound uri="antlib:org.apache.ivy.ant" name="cleancache" />
+    </condition>
+  </target>
+
+  <!-- target: ivy-download  ============================================ -->
+  <target name="ivy-download" description="Download ivy">
+    <available file="${ivy.jar}" property="ivy.jar.found"/>
+    <antcall target="-ivy-download-unchecked"/>
+  </target>
+
+  <!-- target: ivy-download-unchecked  ================================== -->
+  <target name="-ivy-download-unchecked" unless="ivy.jar.found">
+    <get src="${ivy.repo.url}" dest="${ivy.jar}" usetimestamp="true" />
+  </target>
+
+  <!-- target: ivy-init-antlib  ========================================= -->
+  <target name="ivy-init-antlib" depends="ivy-download" unless="ivy.found">
+    <typedef uri="antlib:org.apache.ivy.ant" onerror="fail" loaderRef="ivyLoader">
+      <classpath>
+        <pathelement location="${ivy.jar}" />
+      </classpath>
+    </typedef>
+    <fail>
+      <condition>
+        <not>
+          <typefound uri="antlib:org.apache.ivy.ant" name="cleancache" />
+        </not>
+      </condition>
+      You need Apache Ivy 2.0 or later from http://ant.apache.org/
+      It could not be loaded from ${ivy.repo.url}
+    </fail>
+  </target>
+  
+</project>
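As a rough sketch of where the `publish-local-all` target above ends up placing artifacts: with the default pattern `[organisation]/[module]/[revision]/[type]s/[artifact].[ext]`, a published module lands under `${ivy.local.default.root}`. The path below is illustrative only — the organisation, module, and revision are example values, and `~/.ivy2` is Ivy's usual default user dir rather than something read from this build.

```shell
# Sketch: expand the default local artifact pattern by hand.
# ORG/MODULE/REV are example values, not read from any ivy.xml.
ORG=org.apache.gora
MODULE=gora-core
REV=0.2-incubating
echo "$HOME/.ivy2/local/$ORG/$MODULE/$REV/jars/$MODULE.jar"
```

This is also why the per-module `clean-local` target deletes `${ivy.local.default.root}/${ivy.organisation}/${ivy.module}`: it removes exactly this subtree for the current module.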
diff --git a/trunk/conf/log4j.properties b/trunk/conf/log4j.properties
new file mode 100644
index 0000000..b7553c8
--- /dev/null
+++ b/trunk/conf/log4j.properties
@@ -0,0 +1,58 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+gora.root.logger=INFO,console
+gora.log.dir=.
+gora.log.file=gora.log
+
+log4j.rootLogger=${gora.root.logger}
+
+# Define some default values that can be overridden by system properties
+# Logging Threshold
+log4j.threshold=ALL
+
+#
+# Daily Rolling File Appender
+#
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${gora.log.dir}/${gora.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+# Debugging Pattern format
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+#
+# console
+# Add "console" to rootLogger above if you want to use this
+#
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+#log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+log4j.appender.console.layout.ConversionPattern=%-5p %-30.30c{2} - %m%n
+
+# Custom Logging levels
+log4j.logger.net.sf.jml=WARN
+log4j.logger.org.apache=WARN
+log4j.logger.org.apache.gora=INFO
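For a feel of what the console appender's `%-5p %-30.30c{2} - %m%n` pattern produces, the line can be simulated with `printf`. This is an illustration, not log4j output: `%c{2}` keeps only the last two category segments, so the category argument below is already shortened by hand, and the level/message are made-up example values.

```shell
# Simulate the console ConversionPattern %-5p %-30.30c{2} - %m%n:
# level left-justified to 5 chars, category padded/truncated to 30,
# then a literal " - " and the message.
printf '%-5s %-30.30s - %s\n' INFO store.DataStoreFactory "datastore initialized"
```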
diff --git a/trunk/doap_Gora.rdf b/trunk/doap_Gora.rdf
new file mode 100644
index 0000000..7d58455
--- /dev/null
+++ b/trunk/doap_Gora.rdf
@@ -0,0 +1,73 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl"?>
+<rdf:RDF xml:lang="en"
+         xmlns="http://usefulinc.com/ns/doap#" 
+         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" 
+         xmlns:asfext="http://projects.apache.org/ns/asfext#"
+         xmlns:foaf="http://xmlns.com/foaf/0.1/">
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+   
+         http://www.apache.org/licenses/LICENSE-2.0
+   
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+  <Project rdf:about="http://gora.apache.org">
+    <created>2012-01-30</created>
+    <license rdf:resource="http://usefulinc.com/doap/licenses/asl20" />
+    <name>Apache Gora</name>
+    <homepage rdf:resource="http://gora.apache.org" />
+    <asfext:pmc rdf:resource="http://gora.apache.org" />
+    <shortdesc>The Apache Gora open source framework provides an in-memory data model and persistence for big data. Gora supports persisting to column stores, key-value stores, document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce support.</shortdesc>
+    <description>Although there are various excellent ORM frameworks for relational databases, data modeling in NoSQL data stores differs profoundly from that of their relational cousins. Moreover, data-model-agnostic frameworks such as JDO are not sufficient for use cases where one needs to use the full power of the data models in column stores. Gora fills this gap by giving the user an easy-to-use in-memory data model and persistence framework for big data, with data-store-specific mappings and built-in Apache Hadoop support.</description>
+    <bug-database rdf:resource="https://issues.apache.org/jira/browse/GORA" />
+    <mailing-list rdf:resource="http://gora.apache.org/mailing_lists.html" />
+    <download-page rdf:resource="http://gora.apache.org/releases.html" />
+    <programming-language>Java</programming-language>
+    <category rdf:resource="http://projects.apache.org/category/database" />
+    <releases>
+    <release>
+      <Version>
+        <name>0.2 release</name>
+        <created>2012-04-20</created> 
+        <revision>0.2</revision>
+      </Version>
+    </release>
+    <release>
+      <Version>
+        <name>0.1.1-incubating release</name>
+        <created>2011-09-24</created>
+        <revision>0.1.1-incubating</revision>
+      </Version>
+    </release>
+    <release>
+      <Version>
+        <name>0.1-incubating release</name>
+        <created>2011-04-06</created>
+        <revision>0.1-incubating</revision>
+      </Version>
+    </release>
+    </releases>
+    <repository>
+      <SVNRepository>
+        <location rdf:resource="https://svn.apache.org/repos/asf/gora/trunk/"/>
+        <browse rdf:resource="http://svn.apache.org/viewvc/gora/trunk/"/>
+      </SVNRepository>
+    </repository>
+    <maintainer>
+      <foaf:Person>
+        <foaf:name>Gora Development Team</foaf:name>
+          <foaf:mbox rdf:resource="mailto:dev@gora.apache.org"/>
+      </foaf:Person>
+    </maintainer>
+  </Project>
+</rdf:RDF>
diff --git a/trunk/docs/.gitignore b/trunk/docs/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/docs/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/docs/forrest.properties b/trunk/docs/forrest.properties
new file mode 100644
index 0000000..88c04d4
--- /dev/null
+++ b/trunk/docs/forrest.properties
@@ -0,0 +1,159 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+##############
+# These are the defaults, un-comment them only if you need to change them.
+#
+# You can even have a completely empty file, to assist with maintenance.
+# This file is required, even if empty.
+#
+# The file obtained from 'forrest seed-sample' shows the defaults.
+##############
+
+# Prints out a summary of Forrest settings for this project
+#forrest.echo=true
+
+# Project name (used to name .war file)
+#project.name=my-project
+
+# Specifies name of Forrest skin to use
+# See list at http://forrest.apache.org/docs/skins.html
+#project.skin=pelt
+
+# codename: Dispatcher
+# Dispatcher is using a fallback mechanism for theming.
+# You can configure the theme name and its extension here
+#project.theme-extension=.fv
+#project.theme=pelt
+
+
+# Descriptors for plugins and skins
+# comma separated list, file:// is supported
+#forrest.skins.descriptors=http://forrest.apache.org/skins/skins.xml,file:///c:/myskins/skins.xml
+#forrest.plugins.descriptors=http://forrest.apache.org/plugins/plugins.xml,http://forrest.apache.org/plugins/whiteboard-plugins.xml
+
+##############
+# behavioural properties
+#project.menu-scheme=tab_attributes
+#project.menu-scheme=directories
+
+##############
+# layout properties
+
+# Properties that can be set to override the default locations
+#
+# Parent properties must be set. This usually means uncommenting
+# project.content-dir if any other property using it is uncommented
+
+#project.status=status.xml
+project.content-dir=src
+#project.raw-content-dir=${project.content-dir}/content
+#project.conf-dir=${project.content-dir}/conf
+#project.sitemap-dir=${project.content-dir}
+#project.xdocs-dir=${project.content-dir}/content/xdocs
+#project.resources-dir=${project.content-dir}/resources
+#project.stylesheets-dir=${project.resources-dir}/stylesheets
+#project.images-dir=${project.resources-dir}/images
+#project.schema-dir=${project.resources-dir}/schema
+#project.skins-dir=${project.content-dir}/skins
+#project.skinconf=${project.content-dir}/skinconf.xml
+#project.lib-dir=${project.content-dir}/lib
+#project.classes-dir=${project.content-dir}/classes
+#project.translations-dir=${project.content-dir}/translations
+
+#project.build-dir=../build
+#project.site=site
+#project.site-dir=${project.build-dir}/${project.site}
+#project.temp-dir=${project.build-dir}/tmp
+
+##############
+# Cocoon catalog entity resolver properties
+# A local OASIS catalog file to supplement the default Forrest catalog
+#project.catalog=${project.schema-dir}/catalog.xcat
+
+##############
+# validation properties
+
+# This set of properties determine if validation is performed
+# Values are inherited unless overridden.
+# e.g. if forrest.validate=false then all others are false unless set to true.
+forrest.validate=false
+#forrest.validate.xdocs=${forrest.validate}
+#forrest.validate.skinconf=${forrest.validate}
+#forrest.validate.sitemap=${forrest.validate}
+#forrest.validate.stylesheets=${forrest.validate}
+#forrest.validate.skins=${forrest.validate}
+#forrest.validate.skins.stylesheets=${forrest.validate.skins}
+
+# *.failonerror=(true|false) - stop when an XML file is invalid
+#forrest.validate.failonerror=true
+
+# *.excludes=(pattern) - comma-separated list of path patterns to not validate
+# Note: If you do add an "excludes" list then you need to specify site.xml too.
+# e.g.
+#forrest.validate.xdocs.excludes=site.xml, samples/subdir/**, samples/faq.xml
+#forrest.validate.xdocs.excludes=site.xml
+
+
+##############
+# General Forrest properties
+
+# The URL to start crawling from
+#project.start-uri=linkmap.html
+
+# Set logging level for messages printed to the console
+# (DEBUG, INFO, WARN, ERROR, FATAL_ERROR)
+#project.debuglevel=ERROR
+
+# Max memory to allocate to Java
+#forrest.maxmemory=64m
+
+# Any other arguments to pass to the JVM. For example, to run on an X-less
+# server, set to -Djava.awt.headless=true
+#forrest.jvmargs=
+
+# The bugtracking URL - the issue number will be appended
+# Projects would use their own issue tracker, of course.
+#project.bugtracking-url=http://issues.apache.org/bugzilla/show_bug.cgi?id=
+#project.bugtracking-url=http://issues.apache.org/jira/browse/
+
+# The issues list as rss
+#project.issues-rss-url=
+
+# I18n property, based on the locale requested by the browser.
+# If you want to use it for a static site, modify the JVM system.language
+# and run once per language
+#project.i18n=false
+
+# The names of plugins that are required to build the project
+# comma separated list (no spaces)
+# You can request a specific version by appending "-VERSION" to the end of
+# the plugin name. If you exclude a version number, the latest released version
+# will be used. However, be aware that this may be a development version. In
+# a production environment it is recommended that you specify a known working
+# version.
+# Run "forrest available-plugins" for a list of plug-ins currently available.
+project.required.plugins=org.apache.forrest.plugin.output.pdf
+
+# codename: Dispatcher
+# Add the following plugins to project.required.plugins:
+#org.apache.forrest.plugin.internal.dispatcher,org.apache.forrest.themes.core,org.apache.forrest.plugin.output.inputModule
+
+# Proxy configuration
+# - proxy.user and proxy.password are only needed if the proxy is an authenticated one...
+# proxy.host=myproxy.myhost.com
+# proxy.port=<ProxyPort, if not the default : 80>
+# proxy.user=<login, if authenticated proxy>
+# proxy.password=<password, if authenticated proxy>
diff --git a/trunk/docs/src/content/xdocs/gora-cassandra.xml b/trunk/docs/src/content/xdocs/gora-cassandra.xml
new file mode 100644
index 0000000..1c4340d
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/gora-cassandra.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<document>
+  <header>
+    <title>Gora Cassandra Module</title>
+  </header>
+  
+  <body>
+
+  <section>
+    <title> Overview </title>
+    <p> This is the main documentation for the <b>gora-cassandra</b> module. The gora-cassandra 
+     module enables <a href="ext:cassandra">Apache Cassandra</a> backend support for Gora. </p>
+  </section>
+
+  <section>
+    <title> gora.properties </title>
+    <p> Coming soon </p>
+  </section>
+
+
+  <section>
+    <title> Gora Cassandra mappings </title>
+    <p> Coming soon </p>
+  </section>
+
+  </body>
+</document>
diff --git a/trunk/docs/src/content/xdocs/gora-conf.xml b/trunk/docs/src/content/xdocs/gora-conf.xml
new file mode 100644
index 0000000..29bb3e5
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/gora-conf.xml
@@ -0,0 +1,100 @@
+<?xml version="1.0"?>
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<document>
+  <header>
+    <title>Gora Configuration</title>
+  </header>
+  
+  <body>
+
+  <section>
+    <title> gora.properties </title>
+
+<p>Gora reads its configuration from a properties file named 
+<code>gora.properties</code>. The file is searched for in the classpath, which is 
+obtained using the ClassLoader of the <code>DataStoreFactory</code> class.
+
+The following properties are recognized:</p>
+<p><br/><table>
+  <caption>Common Properties</caption>
+  <tr><th align="left">Property</th> <th align="left">Required</th> <th align="left">Default</th> <th align="left">Explanation</th></tr>
+  <tr><td>gora.datastore.default</td><td>No</td> <td> – </td> <td>The full classname of the default data store implementation to use </td></tr>
+  <tr><td>gora.datastore.autocreateschema</td><td>No</td><td>true</td><td>Whether to create schemas automatically</td></tr>
+</table><br/></p>
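+
+<p>For example, a minimal <code>gora.properties</code> using these common 
+properties could look like the following (the data store class name below is 
+illustrative; substitute the implementation you actually use):</p>
+<p>
+<source>
+gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
+gora.datastore.autocreateschema=true
+</source>
+</p>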
+
+<p> <code>gora.datastore.default</code> is perhaps the most important property in this file. 
+This property configures the default <code>DataStore</code> implementation to use. 
+However, other data stores can still be instantiated through the API. 
+The data store implementations included in the Gora distribution are:</p>
+
+<p><br/><table>
+  <caption>DataStore implementations</caption>
+  <tr><th align="left">DataStore Implementation</th> <th align="left">Full Class Name</th> <th align="left">Module Name</th> <th align="left">Explanation</th></tr>
+  <tr><td>AvroStore</td> <td>org.apache.gora.avro.store.AvroStore</td> <td>gora-core</td> <td>An adapter DataStore for binary-compatible Avro serializations. AvroDataStore supports Binary and JSON serializations. </td></tr>
+  <tr><td>DataFileAvroStore</td> <td>org.apache.gora.avro.store.DataFileAvroStore</td> <td>gora-core</td> <td>DataFileAvroStore is a file-based store which uses Avro's DataFileWriter/DataFileReader as a backend. This data store supports MapReduce.</td></tr>
+  <tr><td>HBaseStore</td> <td>org.apache.gora.hbase.store.HBaseStore</td> <td><a href="site:gora-hbase">gora-hbase</a></td> <td> DataStore for <a href="ext:hbase">HBase</a>. </td></tr>
+  <tr><td>CassandraStore</td> <td>org.apache.gora.cassandra.store.CassandraStore</td> <td><a href="site:gora-cassandra">gora-cassandra</a></td> <td> DataStore for <a href="ext:cassandra">Cassandra</a>. </td></tr>
+  <tr><td>SqlStore</td> <td>org.apache.gora.sql.store.SqlStore</td> <td><a href="site:gora-sql">gora-sql</a></td> <td> A DataStore implementation for RDBMS with a SQL interface. SqlStore uses JDBC drivers to communicate with the DB. MySQL and HSQLDB are currently supported.</td></tr>
+  <tr><td>MemStore</td> <td>org.apache.gora.memory.store.MemStore</td> <td>gora-core</td> <td> Memory based DataStore implementation for tests. </td></tr>
+</table><br/></p>
+
+<p>Some of the properties can be customized per data store. The format of these 
+properties is <code>gora.&lt;data_store_class&gt;.&lt;property_name&gt;</code>, 
+where <code>&lt;data_store_class&gt;</code> is the lowercased class name of the data store 
+implementation without the package name, for example <code>hbasestore</code>. 
+You can also use the string <code>datastore</code> instead of a specific 
+data store class name, in which case the property applies 
+to all data stores. The following properties can be set per data store.</p>
+
+<p><br/><table>
+  <caption>Per DataStore Properties</caption>
+  <tr><th align="left">Property</th> <th align="left">Required</th> <th align="left">Default</th> <th align="left">Explanation</th></tr>
+
+  <tr><td>gora.&lt;data_store_class&gt;.autocreateschema</td> <td>No</td> <td>true</td> <td>Whether to create schemas automatically for the specific data store</td></tr>
+  <tr><td>gora.&lt;data_store_class&gt;.mapping.file</td> <td>No</td> <td>gora-{hbase|cassandra|sql}-mapping.xml</td> <td>The name of the mapping file</td></tr>
+</table><br/></p>
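+
+<p>For example, the following hypothetical settings turn off automatic schema 
+creation for HBaseStore only, and point it to a custom mapping file:</p>
+<p>
+<source>
+gora.hbasestore.autocreateschema=false
+gora.hbasestore.mapping.file=my-hbase-mapping.xml
+</source>
+</p>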
+
+<p> </p>
+
+  </section>
+
+  <!--TODO: Avro data store properties -->
+
+  <section>
+    <title>Data store specific settings</title>
+    <p> In addition to the properties above, some data stores have their 
+    own configuration settings. These are listed in the module documentation:
+    <ul>
+      <li><a href="site:gora-hbase">Gora HBase Module</a></li>
+      <li><a href="site:gora-cassandra">Gora Cassandra Module</a></li>
+      <li><a href="site:gora-sql">Gora SQL Module</a></li>
+    </ul>
+    </p>
+  </section>
+
+<!--
+  <section>
+  <title>Example gora.properties file </title>
+  
+  </section>
+-->
+  </body>
+</document>
diff --git a/trunk/docs/src/content/xdocs/gora-core.xml b/trunk/docs/src/content/xdocs/gora-core.xml
new file mode 100644
index 0000000..f4d3894
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/gora-core.xml
@@ -0,0 +1,38 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<document>
+  <header>
+    <title>Gora Core Module</title>
+  </header>
+  <body>
+
+  <section>
+    <title> Overview </title>
+    <p> This is the main documentation for the <b>gora-core</b> module. gora-core 
+    holds most of the core functionality for the Gora project. Every module 
+    in Gora depends on gora-core, so most of the generic documentation 
+    about the project is gathered here. 
+    </p>
+  </section>
+
+  
+  </body>
+</document>
diff --git a/trunk/docs/src/content/xdocs/gora-hbase.xml b/trunk/docs/src/content/xdocs/gora-hbase.xml
new file mode 100644
index 0000000..9479216
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/gora-hbase.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<document>
+  <header>
+    <title>Gora HBase Module</title>
+  </header>
+  
+  <body>
+
+  <section>
+    <title> Overview </title>
+    <p> This is the main documentation for the <b>gora-hbase</b> module. The gora-hbase 
+     module enables <a href="ext:hbase">Apache HBase</a> backend support for Gora. </p>
+  </section>
+
+  <section>
+    <title> gora.properties </title>
+    <p> Coming soon </p>
+  </section>
+
+
+  <section>
+    <title> Gora HBase mappings </title>
+    <p> Coming soon </p>
+  </section>
+
+  </body>
+</document>
diff --git a/trunk/docs/src/content/xdocs/gora-sql.xml b/trunk/docs/src/content/xdocs/gora-sql.xml
new file mode 100644
index 0000000..18bd027
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/gora-sql.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0"?>
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<document>
+  <header>
+    <title>Gora SQL Module</title>
+  </header>
+  
+  <body>
+
+  <section>
+    <title> Overview </title>
+    <p> This is the main documentation for the <b>gora-sql</b> module. The gora-sql 
+     module enables SQL backend support for Gora. Currently MySQL and HSQLDB are supported.
+    </p>
+  </section>
+
+  <section>
+    <title> gora.properties </title>
+    <p> Coming soon </p>
+  </section>
+
+
+  <section>
+    <title> Gora SQL mappings </title>
+    <p> Coming soon </p>
+  </section>
+
+  </body>
+</document>
diff --git a/trunk/docs/src/content/xdocs/index.xml b/trunk/docs/src/content/xdocs/index.xml
new file mode 100644
index 0000000..68f0282
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/index.xml
@@ -0,0 +1,65 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+<document>
+  <header>
+    <title>Gora Documentation</title>
+  </header>
+  <body>
+
+
+  <section>
+    <title> Introduction </title>
+    <p> This is the main entry point for Gora documentation. Here are some pointers for further info:</p>
+
+    <p><ul>
+      <li>First, if you haven't already done so, make sure to check the <a href="site:quickstart">quick start guide</a>. </li>
+      <li>Information about Gora modules can be found in the <a href="#Gora+Modules">section below</a>. </li>
+      <li>You can also take a look at the <a href="ext:api/index"> API</a> documentation, which 
+    contains the javadoc for all of the modules combined. </li>
+      <li>You can find out how to configure Gora in <a href="site:gora-conf"> Gora Configuration</a>. </li>
+    </ul></p>
+  </section>
+
+
+  <section> 
+    <title>Gora Modules </title>
+    <p> The Gora source code is organized in a modular architecture. The 
+    <b>gora-core</b> module is the main module which contains the core of 
+    the code. All other modules depend on the gora-core module. Each data 
+    store backend in Gora resides in its own module. The documentation for 
+    each specific module can be found in the module's documentation directory. 
+    </p>
+
+    <p> It is wise to start by going over the documentation for the gora-core 
+    module and then the specific data store module(s) you want to use. The 
+    following modules are implemented in Gora. </p>
+
+   <p> <ul>
+     <li> <b><a href="site:gora-core">gora-core</a></b>: Module containing core functionality </li>
+     <li> <b><a href="site:gora-cassandra">gora-cassandra</a></b>: Module for Apache Cassandra backend </li>
+     <li> <b><a href="site:gora-hbase">gora-hbase</a></b>: Module for Apache HBase backend </li>
+     <li> <b><a href="site:gora-sql">gora-sql</a></b>: Module for SQL database backends </li>
+   </ul></p>
+
+  </section>
+
+  </body>
+</document>
diff --git a/trunk/docs/src/content/xdocs/quickstart.xml b/trunk/docs/src/content/xdocs/quickstart.xml
new file mode 100644
index 0000000..960273f
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/quickstart.xml
@@ -0,0 +1,189 @@
+<?xml version="1.0"?>
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<document>
+  
+  <header>
+    <title>Quick Start</title>
+  </header>
+  
+  <body>
+  
+    <section>
+      <title>Introduction</title>
+      <p>This is a quick start guide to help you setup the project.</p>
+    </section>
+
+  <section>
+   <title> Download </title>
+   <p> First you need to download the most recent stable Gora release from the official Apache Gora releases page <a href="ext:releases">here</a>.</p>  
+   <p>For those who would like to use a development version of Gora or simply wish to work with the bleeding edge, instructions on how to check out the source code using svn or git can be found 
+   <a href="ext:vcs">here</a>. 
+   </p>
+  </section>
+
+  <section>
+    <title> Compiling the project (Maven users)</title>
+    <p>Once you have the source code for Gora, you can compile the project using</p>
+    <p>
+    <code>
+      $ cd gora 
+    </code> <br/>
+    <code>
+      $ mvn clean compile
+    </code>
+    </p>
+    <p> You can also compile individual modules by cd'ing to the module directory and 
+    running <code>$ mvn clean compile</code> there. </p>
+  </section>
+
+  <section>
+    <title> Setting up your project </title>
+    <p>
+    More recently, Gora began using Maven to manage its dependencies and build lifecycle. Stable Gora releases are available in the central Maven repository 
+    or Ivy repositories, and Gora SNAPSHOT OSGi bundle artifacts are now pushed to Apache Nexus <a href="https://repository.apache.org/index.html#nexus-search;quick~gora">here</a>.</p>
+    <p> You can manage Gora dependencies in a few ways. </p>
+
+    <section>
+      <title> Using ivy to manage gora </title>
+    <p>If your project already uses Ivy, then you can add Gora dependencies
+    to your project by adding the following lines to your <code>ivy.xml</code> file: </p>
+
+    <p>
+    <code>
+      &lt;dependency org="org.apache.gora" name="gora-hbase" rev="${version}" conf="*-&gt;compile" changing="true"/&gt;
+    </code><br/>
+    <code>
+      &lt;dependency org="org.apache.gora" name="gora-cassandra" rev="${version}" conf="*-&gt;compile" changing="true"/&gt;
+    </code><br/>
+    <code>
+      &lt;dependency org="org.apache.gora" name="gora-sql" rev="${version}" conf="*-&gt;compile" changing="true"/&gt;
+    </code>
+    </p>
+
+    <p><b>N.B.</b> The <code>${version}</code> variable should be replaced by the most stable Gora release.</p>
+    
+    <p>Only add the modules (<code>gora-hbase</code>, <code>gora-cassandra</code>, 
+    <code>gora-sql</code>) that you will use, and set <code>conf</code> to point to the 
+    configurations of your project that should depend on Gora. The 
+    <code>changing="true"</code> attribute states that Gora artifacts 
+    should not be cached, which is required if you want to change Gora's 
+    source and use the recompiled version.</p>
+
+    <p> Add the following to your <code>ivysettings.xml</code></p>
+    <p>
+    <source>
+    &lt;resolvers&gt;
+      ...
+      &lt;chain name="internal"&gt;
+        &lt;resolver ref="local"/&gt;
+      &lt;/chain&gt;
+      ...
+    &lt;/resolvers&gt;
+    &lt;modules&gt;
+      ...
+      &lt;module organisation="org.apache.gora" name=".*" resolver="internal"/&gt;
+      ...
+    &lt;/modules&gt;
+    </source>
+    </p>
+
+    <p>This forces gora to be built locally rather than look for it in other 
+    repositories.</p>
+  </section>
+    <section>
+      <title> Using Maven to manage Gora </title>
+    <p>If your project uses Maven, then you can add Gora dependencies
+    to your project by adding the following lines to your <code>pom.xml</code> file: </p>
+
+    <p>
+    <source>
+    &lt;dependency&gt;
+      &lt;groupId&gt;org.apache.gora&lt;/groupId&gt;
+      &lt;artifactId&gt;gora-hbase&lt;/artifactId&gt;
+      &lt;version&gt;${version}&lt;/version&gt;
+    &lt;/dependency&gt;
+    </source><br/>
+    <source>
+    &lt;dependency&gt;
+      &lt;groupId&gt;org.apache.gora&lt;/groupId&gt;
+      &lt;artifactId&gt;gora-cassandra&lt;/artifactId&gt;
+      &lt;version&gt;${version}&lt;/version&gt;
+    &lt;/dependency&gt;
+    </source><br/>
+    <source>
+    &lt;dependency&gt;
+      &lt;groupId&gt;org.apache.gora&lt;/groupId&gt;
+      &lt;artifactId&gt;gora-sql&lt;/artifactId&gt;
+      &lt;version&gt;${version}&lt;/version&gt;
+    &lt;/dependency&gt;
+    </source>
+    </p>
+
+    <p><b>N.B.</b> The <code>${version}</code> variable should be replaced by the most stable Gora release.</p>
+    
+    <p>Only add the modules (<code>gora-hbase</code>, <code>gora-cassandra</code>, 
+    <code>gora-sql</code>) that you will use.</p>
+  </section>
+
+  <section>
+    <title>Managing Gora jars manually </title>
+    <p>You can include the Gora jars manually, if you prefer. After compiling Gora, 
+    first copy all the jars in the <code>gora-[modulename]/lib/</code> directory. Then 
+    copy all the jars in <code>gora-core/lib/</code>, since all of the modules depend 
+    on <code>gora-core</code>. Last, copy the actual Gora jars in
+    <code>gora-core/build/gora-core-x.x.jar</code> and the jars of all the other 
+    modules that you want to use (for example 
+    <code>gora-hbase/build/gora-hbase-x.x.jar</code>).</p>
+  </section>
+  </section>
+
+  <section>
+    <title> What's next </title>
+    <p> After setting up Gora, you might want to check out the documentation. 
+    Most of the documentation can be found at the project 
+    <a href="ext:gora">web site</a> or at the <a href="ext:wiki">wiki</a>.</p> 
+
+    <section> 
+      <title>Gora Modules </title>
+      <p> The Gora source code is organized in a modular architecture. The 
+      <b>gora-core</b> module is the main module which contains the core of 
+      the code. All other modules depend on the gora-core module. Each data 
+      store backend in Gora resides in its own module. The documentation for 
+      each specific module can be found in the module's documentation directory. 
+      </p>
+
+      <p> It is wise to start by going over the documentation for the gora-core 
+      module and then the specific data store module(s) you want to use. Below are the 
+      modules in Gora. </p>
+
+     <p> <ul>
+       <li> <b><a href="site:gora-core">gora-core</a></b>: Module containing core functionality </li>
+       <li> <b><a href="site:gora-cassandra">gora-cassandra</a></b>: Module for Apache Cassandra backend </li>
+       <li> <b><a href="site:gora-hbase">gora-hbase</a></b>: Module for Apache HBase backend </li>
+       <li> <b><a href="site:gora-sql">gora-sql</a></b>: Module for SQL database backends </li>
+     </ul></p>
+
+    </section>
+  </section>
+
+  </body>
+  
+</document>
diff --git a/trunk/docs/src/content/xdocs/site.xml b/trunk/docs/src/content/xdocs/site.xml
new file mode 100644
index 0000000..da3897c
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/site.xml
@@ -0,0 +1,124 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!--
+Forrest site.xml
+
+This file contains an outline of the site's information content.  It is used to:
+- Generate the website menus (though these can be overridden - see docs)
+- Provide semantic, location-independent aliases for internal 'site:' URIs, eg
+<link href="site:changes"> links to changes.html (or ../changes.html if in
+  subdir).
+- Provide aliases for external URLs in the external-refs section.  Eg, <link
+  href="ext:cocoon"> links to http://cocoon.apache.org/ 
+
+See http://forrest.apache.org/docs/linking.html for more info
+-->
+<!-- The label attribute of the outer "site" element will only show
+  in the linkmap (linkmap.html).
+  Use elements project-name and group-name in skinconfig to change name of 
+  your site or project that is usually shown at the top of page.
+  No matter what you configure for the href attribute, Forrest will
+  always use index.html when you request http://yourHost/
+  See FAQ: "How can I use a start-up-page other than index.html?"
+-->
+<site label="Gora Core" href="" xmlns="http://apache.org/forrest/linkmap/1.0" tab=""
+  xmlns:xi="http://www.w3.org/2001/XInclude">
+
+  <docs label="Documentation">
+    <overview   label="Overview"           href="index.html" />
+    <quickstart label="Quick Start"        href="quickstart.html" />
+    <tutorial   label="Gora Tutorial"      href="tutorial.html" />
+    <gora-conf  label="Gora Configuration" href="gora-conf.html" />
+    <gora-core  label="gora-core"          href="gora-core.html" />
+    <gora-cassandra  label="gora-cassandra" href="gora-cassandra.html" />
+    <gora-hbase label="gora-hbase"         href="gora-hbase.html" />
+    <gora-sql   label="gora-sql"           href="gora-sql.html" />
+    <api        label="API docs"           href="ext:api/index" />
+  </docs>
+
+  <external-refs>
+
+    <wiki     href="https://cwiki.apache.org/confluence/display/GORA/Index" />
+    <releases href="http://gora.apache.org/releases.html"/>
+    <issues   href="http://issues.apache.org/jira/browse/GORA"/>
+    <vcs      href="http://gora.apache.org/version_control.html" />
+    <devmail  href="http://gora.apache.org/mailing_lists.html"/>
+
+    <gora     href="http://gora.apache.org/"/>
+    <avro     href="http://avro.apache.org/"/>
+    <avrospec href="http://avro.apache.org/docs/current/spec.html"/>
+    <hadoop   href="http://hadoop.apache.org/mapreduce/"/>
+    <hdfs     href="http://hadoop.apache.org/hdfs/"/>
+    <mapreduce      href="http://hadoop.apache.org/mapreduce/"/>
+    <hbase    href="http://hbase.apache.org/"/>
+    <hector   href="http://hector-client.org/" />
+    <cassandra    href="http://cassandra.apache.org/"/>
+    <nutch     href="http://nutch.apache.org/"/>
+    <sponsors href="http://www.apache.org/foundation/thanks.html"/>
+
+    <api href="api/">
+      <index href="index.html" />
+      <org href="org/">
+        <apache href="apache/">
+          <gora href="gora/">
+            <mapreduce href="mapreduce/">
+              <gorainputformat href="GoraInputFormat.html">
+              </gorainputformat>
+              <goraoutputformat href="GoraOutputFormat.html">
+              </goraoutputformat>
+              <goramapper href="GoraMapper.html">
+                <initmapperjob href="#initMapperJob(org.apache.hadoop.mapreduce.Job,%20org.gora.store.DataStore,%20java.lang.Class,%20java.lang.Class,%20java.lang.Class,%20boolean)"/>
+              </goramapper>
+              <gorareducer href="GoraReducer.html">
+                <initreducerjob href="#initReducerJob(org.apache.hadoop.mapreduce.Job,%20org.apache.gora.store.DataStore,%20java.lang.Class)"/>
+              </gorareducer>
+              <persistentserialization href="PersistentSerialization.html"/>
+              <stringserialization href="StringSerialization.html"/>
+            </mapreduce>
+            <persistency href="persistency/">
+              <persistent href="Persistent.html"/>
+            </persistency>
+            <query href="query/">
+              <query href="Query.html">
+                <execute href="#execute()"/>
+              </query>
+              <result href="Result.html">
+                <getkey href="#getKey()"/>
+                <get href="#get()"/>
+                <next href="#next()"/>
+              </result>
+            </query>
+            <store href="store/">
+              <datastore href="DataStore.html">
+                <close href="#close()"/>
+                <delete href="#delete(K)"/>
+                <deletebyquery href="#deleteByQuery(org.gora.query.Query)"/>
+                <flush href="#flush()"/>
+                <put href="#put(K,T)"/>
+                <get href="#get(K)"/>
+                <newquery href="#newQuery()"/>
+              </datastore>
+              <datastorefactory href="DataStoreFactory.html"/>
+            </store>
+          </gora>
+        </apache>
+      </org>
+    </api>
+
+  </external-refs>
+</site>
diff --git a/trunk/docs/src/content/xdocs/tabs.xml b/trunk/docs/src/content/xdocs/tabs.xml
new file mode 100644
index 0000000..494ad88
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/tabs.xml
@@ -0,0 +1,40 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!DOCTYPE tabs PUBLIC "-//APACHE//DTD Cocoon Documentation Tab V1.1//EN" "http://forrest.apache.org/dtd/tab-cocoon-v11.dtd">
+<tabs software="The Forresters"
+  title="The Forresters"
+  copyright="The Apache Software Foundation"
+  xmlns:xlink="http://www.w3.org/1999/xlink">
+<!-- The rules for tabs are:
+    @dir will always have '/@indexfile' added.
+    @indexfile gets appended to @dir if the tab is selected. Defaults to 'index.html'
+    @href is not modified unless it is root-relative and obviously specifies a
+    directory (ends in '/'), in which case /index.html will be added
+    If @id's are present, site.xml entries with a matching @tab will be in that tab.
+
+   Tabs can be embedded to a depth of two. The second level of tabs will only 
+    be displayed when their parent tab is selected.    
+  -->
+  <tab label="Project" href="http://incubator.apache.org/gora/"/>
+  <tab label="Wiki"  href="https://cwiki.apache.org/confluence/display/GORA/Index"/>
+  <tab label="Issues" href="http://issues.apache.org/jira/browse/GORA"/>
+<!--  <tab label="gora-core" href="gora-core/"/>
+  <tab label="gora-cassandra" href="gora-cassandra/"/>
+  <tab label="gora-hbase" href="gora-hbase/"/>
+  <tab label="gora-sql" href="gora-sql/"/> -->
+</tabs>
diff --git a/trunk/docs/src/content/xdocs/tutorial.xml b/trunk/docs/src/content/xdocs/tutorial.xml
new file mode 100644
index 0000000..e64f9b8
--- /dev/null
+++ b/trunk/docs/src/content/xdocs/tutorial.xml
@@ -0,0 +1,1188 @@
+<?xml version="1.0"?>
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<document>
+  <header>
+    <title>Gora Tutorial</title>
+  </header>
+  
+  <body>
+
+  <p><hr/> <b>Author :</b> Enis Söztutar, enis [at] apache [dot] org<hr/> </p>
+
+    <section>
+      <title>Introduction</title>
+      <p>This is the official tutorial for Apache Gora. In this tutorial, we 
+      will implement a system that stores our web server logs in Apache HBase,
+      analyzes them using Apache Hadoop, and stores the results in either HSQLDB or MySQL.</p>
+
+      <p> In this tutorial we will first look at how to set up the environment and 
+      configure Gora and the data stores. Later, we will go over the data we will use and
+      define the data beans that will be used to interact with the persistence layer. 
+      Next, we will go over the API of Gora to do some basic tasks such as storing objects, 
+      fetching and querying objects, and deleting objects. Last, we will go over an example 
+      program which uses Hadoop MapReduce to analyze the web server logs, and discuss the Gora 
+      MapReduce API in some detail. </p>
+
+    <section>
+      <title>Introduction to Gora</title>
+      <p> The Apache Gora open source framework provides an in-memory data 
+      model and persistence for big data. Gora supports persisting to 
+      column stores, key value stores, document stores and RDBMSs, and 
+      analyzing the data with extensive Apache Hadoop MapReduce support. In Avro, the 
+      beans that hold the data and the RPC interfaces are defined using a JSON 
+      schema. For mapping the data beans to data store specific settings, 
+      Gora depends on mapping files, which are specific to each data store. 
+      Unlike other ORM implementations, in Gora the mapping from the data bean 
+      to the data store specific schema is explicit. This has the advantage that, 
+      when using data stores such as HBase and Cassandra, you always 
+      know how the values are persisted. </p>
+      
+      <p>Gora has a modular architecture. Most of the data stores in Gora 
+      have their own modules, such as <code>gora-hbase, gora-cassandra</code>,
+      and <code>gora-sql</code>. In your projects, you need only include 
+      the artifacts from the modules you use. You can consult the <a href="quickstart.html#Setting+up+your+project">
+      Setting up your project</a> section in the quick start guide.</p>
+    </section>
+    </section>
+
+  <section>
+    <title>Setting up the environment</title>
+    
+    <section>
+    <title>Setting up Gora</title>
+    <p>As a first step, we need to download and compile the Gora source code. The source code 
+    for the tutorial is in the <code>gora-tutorial</code> module. If you have
+    already downloaded Gora, that's cool; otherwise, please go
+    over the steps at the <a href="site:quickstart">Quick Start</a> guide for
+    how to download and compile Gora. </p>
+    <p> 
+    Now, after the source code for Gora is at hand, let's have a look at the files under the 
+    directory <code>gora-tutorial</code>. </p>
+    
+    <p>
+      <code>$ cd gora-tutorial</code><br/>
+      <code>$ tree</code><br/>
+      <source>
+|-- build.xml
+|-- conf
+|   |-- gora-hbase-mapping.xml
+|   |-- gora-sql-mapping.xml
+|   `-- gora.properties
+|-- ivy
+|   `-- ivy.xml
+`-- src
+    |-- examples
+    |   `-- java
+    |-- main
+    |   |-- avro
+    |   |   |-- metricdatum.json
+    |   |   `-- pageview.json
+    |   |-- java
+    |   |   `-- org
+    |   |       `-- apache
+    |   |           `-- gora
+    |   |               `-- tutorial
+    |   |                   `-- log
+    |   |                       |-- KeyValueWritable.java
+    |   |                       |-- LogAnalytics.java
+    |   |                       |-- LogManager.java
+    |   |                       |-- TextLong.java
+    |   |                       `-- generated
+    |   |                           |-- MetricDatum.java
+    |   |                           `-- Pageview.java
+    |   `-- resources
+    |       `-- access.log.tar.gz
+    `-- test
+        |-- conf
+        `-- java
+      </source>
+    </p>
+
+  <p>Since gora-tutorial is a top-level module of Gora, it follows the directory
+  structure imposed by Gora's main build scripts (<code>build.xml</code> and 
+  <code>build-common.xml</code> for Ivy, and <code>pom.xml</code> for Maven). The Java source code resides in the directory <code>
+  src/main/java/</code>, avro schemas in <code>src/main/avro/</code>, and data in 
+  <code>src/main/resources/</code>.</p>
+  </section>
+  
+  <section>
+    <title>Setting up HBase</title>
+    <p> For this tutorial we will be using <a href="ext:hbase"> HBase</a> to 
+    store the logs. For those of you not familiar with HBase, it is a NoSQL
+    column store with an architecture very similar to Google's BigTable. </p>
+    <!-- TODO: Tutorial for SQL and Cassandra -->
+    <p> If you don't already have HBase set up, you can go over the steps in the 
+    <a href="http://hbase.apache.org/docs/r0.20.6/api/overview-summary.html#overview_description"> HBase Overview </a>
+    documentation. Although Gora aims to support the most recent HBase versions, the above tutorial is 
+    specifically for HBase 0.20.6 (don't worry, the principles are the same), so download a version from 
+    <a href="http://hbase.apache.org/releases.html">HBase releases</a>. After extracting 
+    the file, cd to the hbase-${dist} directory and start the HBase server. </p> 
+    <p><code>$ bin/start-hbase.sh</code> </p>
+    <p> Then make sure that HBase is available by using the HBase shell: </p>
+    <p><code>$ bin/hbase shell</code> </p>
+  </section>
+  
+    <section>
+      <title>Configuring Gora</title>
+      <p> Gora is configured through a file in the classpath named <code>gora.properties</code>. 
+      We will be using the following file <code>gora-tutorial/conf/gora.properties</code> </p>
+      
+      <p><source>
+      gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
+      gora.datastore.autocreateschema=true
+      </source></p>
+      
+      <p> This file states that the default store will be <code>HBaseStore</code>,
+      and that schemas (tables) should be automatically created. </p>
+      
+      <p> More information for configuring different settings in gora.properties 
+      can be found <a href="site:gora-conf"> here </a>. </p>
+    </section>
+  
+  </section>
+
+  <section>
+  <title> Modelling the data </title>
+  <section>
+    <title>Data for the tutorial</title>
+    <p>For this tutorial, we will be parsing and storing the logs of a web server. 
+    Some example logs are at <code>src/main/resources/access.log.tar.gz</code>, which 
+    belongs to the (now shut down) server at http://www.buldinle.com/. The example logs contain 10,000 lines, covering the dates 2009/03/10 to 2009/03/15. <br/>
+    The first thing we need to do is to extract the logs. </p>
+    <p><code>$ tar zxvf src/main/resources/access.log.tar.gz -C src/main/resources/</code></p>
+    <p> You can also use your own log files, given that the log 
+    format is <a href="http://httpd.apache.org/docs/current/logs.html"> 
+    Combined Log Format</a>. Some example lines from the log are: </p>
+    <code>88.254.190.73 - - [10/Mar/2009:20:40:26 +0200] "GET / HTTP/1.1" 200 43 "http://www.buldinle.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; GTB5; .NET CLR 2.0.50727; InfoPath.2)"</code><br/>
+    <code>78.179.56.27 - - [11/Mar/2009:00:07:40 +0200] "GET /index.php?i=3&amp;a=1__6x39kovbji8&amp;k=3750105 HTTP/1.1" 200 43 "http://www.buldinle.com/index.php?i=3&amp;a=1__6X39Kovbji8&amp;k=3750105" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; OfficeLiveConnector.1.3; OfficeLivePatch.0.0)"</code><br/>
+    <code>78.163.99.14 - - [12/Mar/2009:18:18:25 +0200] "GET /index.php?a=3__x7l72c&amp;k=4476881 HTTP/1.1" 200 43 "http://www.buldinle.com/index.php?a=3__x7l72c&amp;k=4476881" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; InfoPath.1)"</code><br/>
+
+    <p>The fields, in order, are: the user's IP, two ignored fields, date and 
+    time, HTTP method, URL, HTTP version, HTTP status code, number of bytes 
+    returned, referrer, and user agent.</p>
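+    <p>To make this field layout concrete, a Combined Log Format line can be split 
+    with a regular expression. This is only an illustrative sketch, not the tutorial's 
+    parser (the <code>CombinedLogParser</code> class below is made up for the example; 
+    <code>LogManager</code> itself uses a <code>StringTokenizer</code>, as we will see later):</p>

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CombinedLogParser {
  // Groups: ip, ident, user, date, method, url, protocol, status,
  // bytes, referrer, userAgent.
  private static final Pattern LINE = Pattern.compile(
      "^(\\S+) (\\S+) (\\S+) \\[([^\\]]+)\\] \"(\\S+) (\\S+) (\\S+)\" "
      + "(\\d{3}) (\\S+) \"([^\"]*)\" \"([^\"]*)\"$");

  /** Splits one Combined Log Format line into its eleven fields. */
  public static String[] parse(String line) {
    Matcher m = LINE.matcher(line);
    if (!m.matches()) {
      throw new IllegalArgumentException("not a Combined Log Format line: " + line);
    }
    String[] fields = new String[m.groupCount()];
    for (int i = 0; i < fields.length; i++) {
      fields[i] = m.group(i + 1);
    }
    return fields;
  }

  public static void main(String[] args) {
    String sample = "88.254.190.73 - - [10/Mar/2009:20:40:26 +0200] "
        + "\"GET / HTTP/1.1\" 200 43 \"http://www.buldinle.com/\" "
        + "\"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)\"";
    String[] f = parse(sample);
    System.out.println("ip=" + f[0] + " url=" + f[5] + " status=" + f[7]);
  }
}
```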
+
+  </section>
+
+  <section>
+    <title>Defining data beans</title>
+    
+    <p> Data beans are the main way to hold the data in memory and to persist it in Gora. Gora 
+    needs to explicitly keep track of the status of the data in memory, so 
+    we use <a href="ext:avro">Apache Avro</a> for defining the beans. Using 
+    Avro gives us a way to explicitly keep track of the object's persistency state, 
+    and a way to serialize the object's data. </p> 
+    <p>Defining data beans is a very easy task, but for the exact syntax, please 
+    consult the <a href="ext:avrospec"> Avro Specification</a>.</p>
+    <p> First, we need to define the bean <b><code>Pageview</code></b> to hold a
+    single URL access in the logs. Let's go over the class at <code> src/main/avro/pageview.json </code>
+    </p>
+    <p>
+    <source>
+ {
+  "type": "record",
+  "name": "Pageview",
+  "namespace": "org.apache.gora.tutorial.log.generated",
+  "fields" : [
+    {"name": "url", "type": "string"},
+    {"name": "timestamp", "type": "long"},
+    {"name": "ip", "type": "string"},
+    {"name": "httpMethod", "type": "string"},
+    {"name": "httpStatusCode", "type": "int"},
+    {"name": "responseSize", "type": "int"},
+    {"name": "referrer", "type": "string"},
+    {"name": "userAgent", "type": "string"}
+  ]
+}
+    </source>
+    </p>
+
+    <p>Avro schemas are declared in JSON. 
+    <a href="http://avro.apache.org/docs/current/spec.html#schema_record">
+    Records</a> are defined with type 
+    <code>"record"</code>, with a name as the name of the class, and a 
+    namespace which is mapped to the package name in Java. The fields 
+    are listed in the <code>"fields"</code> element. Each field is given 
+    with its type. </p>
+    
+  </section>
+
+  <section>
+    <title>Compiling Avro Schemas</title>
+    
+    <p>The next step after defining the data beans is to compile the schemas 
+    into Java classes. For that we will use <code>GoraCompiler</code>. 
+    Invoking the Gora compiler without arguments (from the Gora top-level directory) </p>
+    <p> 
+    <code>
+    $ bin/gora compile
+    </code>
+    </p> <p>results in:</p>
+    <p> 
+    <code>
+    Usage: SpecificCompiler &lt;schema file&gt; &lt;output dir&gt;
+    </code>
+    </p> <p>so we will issue:</p>
+    <p> 
+    <code>
+    $ bin/gora compile gora-tutorial/src/main/avro/pageview.json gora-tutorial/src/main/java/
+    </code>
+    </p>
+    <p>to compile the Pageview class into 
+    <code>gora-tutorial/src/main/java/org/apache/gora/tutorial/log/generated/Pageview.java</code>. 
+    However, the tutorial's Java classes are already committed, so you do not need to do this 
+    now. </p>
+    
+    <p> The Gora compiler extends Avro's <code>SpecificCompiler</code> to convert the JSON definition 
+    into a Java class. Generated classes extend 
+    the <a href="ext:api/org/apache/gora/persistency/persistent">Persistent</a> interface. 
+    Most of the methods of the <code>Persistent</code> interface deal with bookkeeping for 
+    persistence, and state tracking, so most of the time they are not used explicitly by the
+    user. Now, let's look at the internals of the generated class <code>Pageview.java</code>.
+    </p>
+    <p>
+    <source>
+public class Pageview extends PersistentBase {
+
+  private Utf8 url;
+  private long timestamp;
+  private Utf8 ip;
+  private Utf8 httpMethod;
+  private int httpStatusCode;
+  private int responseSize;
+  private Utf8 referrer;
+  private Utf8 userAgent;
+
+  ...
+
+  public static final Schema _SCHEMA = Schema.parse("{\"type\":\"record\", ... ");
+  public static enum Field {
+    URL(0,"url"),
+    TIMESTAMP(1,"timestamp"),
+    IP(2,"ip"),
+    HTTP_METHOD(3,"httpMethod"),
+    HTTP_STATUS_CODE(4,"httpStatusCode"),
+    RESPONSE_SIZE(5,"responseSize"),
+    REFERRER(6,"referrer"),
+    USER_AGENT(7,"userAgent"),
+    ;
+    private int index;
+    private String name;
+    Field(int index, String name) {this.index=index;this.name=name;}
+    public int getIndex() {return index;}
+    public String getName() {return name;}
+    public String toString() {return name;}
+  };
+  public static final String[] _ALL_FIELDS = {"url","timestamp","ip","httpMethod"
+    ,"httpStatusCode","responseSize","referrer","userAgent",};
+  
+  ...
+  }
+    </source>
+    </p>
+    
+    <p> We can see the actual field declarations in the class. Note that Avro uses the <code>Utf8</code> 
+    class as a placeholder for string fields. We can also see the embedded Avro 
+    schema declaration and an inner enum named <code>Field</code>. This enum and 
+    the <code>_ALL_FIELDS</code> array will come in handy when we use them 
+    to query the datastore for specific fields. 
+    </p>
+  </section>
+      
+  
+  <section>
+    <title>Defining data store mappings</title>
+    <p>Gora is designed to work flexibly with various types of data modeling, 
+    including column stores (such as HBase, Cassandra, etc.), SQL databases, flat files (binary, 
+    JSON, or XML encoded), and key-value stores. The mapping between the data bean and 
+    the data store is thus defined in XML mapping files. Each data store has its own 
+    mapping format, so that data-store specific settings can be leveraged more easily.
+    The mapping files declare how the fields of the classes declared in Avro schemas 
+    are serialized and persisted to the data store.</p>
+    
+    <section>
+      <title> HBase mappings </title>
+      <p> HBase mappings are stored in a file named <code>gora-hbase-mapping.xml</code>. 
+      For this tutorial we will be using the file <code>gora-tutorial/conf/gora-hbase-mapping.xml</code>.</p>
+      
+      <!--  This is gora-sql-mapping.xml
+ <source>
+ &lt;gora-orm&gt;
+  &lt;class name="org.apache.gora.tutorial.log.generated.Pageview" keyClass="java.lang.Long" table="AccessLog"&gt;
+    &lt;primarykey column="line"/&gt;
+    &lt;field name="url" column="url" length="512" primarykey="true"/&gt;
+    &lt;field name="timestamp" column="timestamp"/&gt;
+    &lt;field name="ip" column="ip" length="16"/&gt;
+    &lt;field name="httpMethod" column="httpMethod" length="6"/&gt;
+    &lt;field name="httpStatusCode" column="httpStatusCode"/&gt;
+    &lt;field name="responseSize" column="responseSize"/&gt;
+    &lt;field name="referrer" column="referrer" length="512"/&gt;
+    &lt;field name="userAgent" column="userAgent" length="512"/&gt;
+  &lt;/class&gt;
+
+  ...
+
+&lt;/gora-orm&gt;
+
+      </source>
+      -->
+      
+      <p><source>  
+&lt;gora-orm&gt;
+  &lt;table name="Pageview"&gt; &lt;!-- optional descriptors for tables --&gt;
+    &lt;family name="common"/&gt; &lt;!-- This can also have params like compression, bloom filters --&gt;
+    &lt;family name="http"/&gt;
+    &lt;family name="misc"/&gt;
+  &lt;/table&gt;
+
+  &lt;class name="org.apache.gora.tutorial.log.generated.Pageview" keyClass="java.lang.Long" table="AccessLog"&gt;
+    &lt;field name="url" family="common" qualifier="url"/&gt;
+    &lt;field name="timestamp" family="common" qualifier="timestamp"/&gt;
+    &lt;field name="ip" family="common" qualifier="ip" /&gt;
+    &lt;field name="httpMethod" family="http" qualifier="httpMethod"/&gt;
+    &lt;field name="httpStatusCode" family="http" qualifier="httpStatusCode"/&gt;
+    &lt;field name="responseSize" family="http" qualifier="responseSize"/&gt;
+    &lt;field name="referrer" family="misc" qualifier="referrer"/&gt;
+    &lt;field name="userAgent" family="misc" qualifier="userAgent"/&gt;
+  &lt;/class&gt;
+  
+  ...
+  
+&lt;/gora-orm&gt;  
+      </source> </p>
+      
+      <p>
+      Every mapping file starts with the top level element <code>&lt;gora-orm&gt;</code>. 
+      Gora HBase mapping files can have two types of child elements, <code>table</code> and 
+      <code>class</code> declarations. All of the table and class definitions should be 
+      listed at this level.</p> 
+      
+      <p>The <code>table</code> declaration is optional; most of the time, Gora infers the table 
+      declaration from the <code>class</code> sub-elements. However, some HBase-specific 
+      table configuration, such as compression, blockCache, etc., can be given here 
+      if Gora is used to auto-create the tables. The exact syntax for the file can be found 
+      <a href="gora-hbase.html#Gora+HBase+mappings">here</a>.</p>
+      
+      <p>In Gora, data store access is always 
+      done through a key-value data model, since most of the target backends support this model.
+      The DataStore API expects to know the class names of the key and persistent classes, so that 
+      they can be instantiated. The key-value pair is declared in the <code>class</code> element.
+      The <code>name</code> attribute is the fully qualified name of the class, 
+      and the <code>keyClass</code> attribute is 
+      the fully qualified class name of the key class. </p>
+      
+      <p>Children of the <code>&lt;class&gt;</code> element are <code>&lt;field&gt;</code> 
+      elements. Each field element has a <code>name</code> and a <code>family</code> attribute, and 
+      an optional <code>qualifier</code> attribute. The <code>name</code> attribute contains the name 
+      of the field in the persistent class, and <code>family</code> declares the column family 
+      of the HBase data model. If the qualifier is not given, the name of the field is used 
+      as the column qualifier. Note that map and array type fields are stored in their own column 
+      families, so the configuration should list a unique column family for each map and 
+      array type field, and no qualifier should be given. The exact data model is discussed further 
+      in the <a href="site:gora-hbase">gora-hbase documentation</a>. </p>
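+      <p>To make the qualifier-defaulting rule concrete, here is a small hypothetical 
+      sketch (not Gora's actual mapping parser) that reads a mapping fragment with the 
+      JDK's DOM parser and resolves a field to its <code>family:qualifier</code> pair:</p>

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class MappingSketch {
  /** Returns "family:qualifier" for the named field, or null if not mapped. */
  public static String resolve(String mappingXml, String fieldName) throws Exception {
    Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
        .parse(new ByteArrayInputStream(mappingXml.getBytes("UTF-8")));
    NodeList fields = doc.getElementsByTagName("field");
    for (int i = 0; i < fields.getLength(); i++) {
      Element field = (Element) fields.item(i);
      if (fieldName.equals(field.getAttribute("name"))) {
        // If no qualifier is given, the field name is used as the qualifier.
        String qualifier = field.hasAttribute("qualifier")
            ? field.getAttribute("qualifier") : fieldName;
        return field.getAttribute("family") + ":" + qualifier;
      }
    }
    return null;
  }

  public static void main(String[] args) throws Exception {
    String xml = "<gora-orm><class name='Pageview'>"
        + "<field name='url' family='common'/>"
        + "<field name='referrer' family='misc' qualifier='ref'/>"
        + "</class></gora-orm>";
    System.out.println(resolve(xml, "url"));      // qualifier defaults to "url"
    System.out.println(resolve(xml, "referrer"));
  }
}
```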
+    </section>
+  </section>
+  </section>  
+
+  <section>
+  <title> Basic API </title>
+
+  <section>
+    <title>Parsing the logs</title>
+    <p> Now that we have the basic setup, we can see the Gora API in action. As you will notice below, the API 
+    is pretty simple to use. We will be using the class <code>LogManager</code> (located at
+    <code>gora-tutorial/src/main/java/org/apache/gora/tutorial/log/LogManager.java</code>) for parsing 
+    and storing the logs, deleting some lines, and querying.</p> 
+
+
+    <p> First of all, let us look at the constructor. The only real thing it does is call the 
+    <code>init()</code> method. The <code>init()</code> method constructs the 
+    <code>DataStore</code> instance so that it can be used by <code>LogManager</code>'s methods.</p>
+    <p><source>
+  public LogManager() {
+    try {
+      init();
+    } catch (IOException ex) {
+      throw new RuntimeException(ex);
+    }
+  }
+  private void init() throws IOException {
+    dataStore = DataStoreFactory.getDataStore(Long.class, Pageview.class);
+  }
+    </source></p>
+
+    <p> <a href="ext:api/org/apache/gora/store/datastore">DataStore</a> is probably the most important 
+    class in the Gora API. <code>DataStore</code> handles actual object persistence. Objects can be persisted, 
+    fetched, queried or deleted through DataStore methods. Every data store that Gora supports defines its own subclass 
+    of the DataStore class. For example, the <code>gora-hbase</code> module defines <code>HBaseStore</code>, and 
+    the <code>gora-sql</code> module defines <code>SqlStore</code>. However, these subclasses are not explicitly 
+    used by the user. </p>
+
+    <p> DataStores always have associated key and value (persistent) classes. The key class is the class of 
+    the data store's keys, and the value class is the actual data bean's class. The value class is almost always generated from an 
+    Avro schema definition using the Gora compiler. </p>
+
+    <p> Data store objects are created by <a href="ext:api/org/apache/gora/store/datastorefactory">DataStoreFactory</a>. It is necessary to 
+    provide the key and value class. The datastore class is optional, 
+    and if not specified it will be read from the configuration (gora.properties).</p>
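+    <p>The factory pattern described above can be sketched with plain JDK reflection. The 
+    interface and <code>MapStore</code> class below are hypothetical stand-ins for 
+    illustration only, not Gora classes; in Gora the store class name would come from 
+    <code>gora.properties</code>:</p>

```java
import java.util.HashMap;
import java.util.Map;

public class FactorySketch {
  /** Hypothetical stand-in for a key-value store interface like DataStore. */
  public interface KeyValueStore<K, V> {
    void put(K key, V value);
    V get(K key);
  }

  /** Trivial in-memory store standing in for HBaseStore, SqlStore, etc. */
  public static class MapStore<K, V> implements KeyValueStore<K, V> {
    private final Map<K, V> data = new HashMap<K, V>();
    public void put(K key, V value) { data.put(key, value); }
    public V get(K key) { return data.get(key); }
  }

  /** Instantiates the configured store class reflectively. */
  @SuppressWarnings("unchecked")
  public static <K, V> KeyValueStore<K, V> getStore(Class<K> keyClass,
      Class<V> valueClass) throws Exception {
    // In Gora this class name is read from configuration
    // (gora.datastore.default); here it is computed for the sketch.
    String storeClass = MapStore.class.getName();
    return (KeyValueStore<K, V>) Class.forName(storeClass)
        .getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    KeyValueStore<Long, String> store = getStore(Long.class, String.class);
    store.put(42L, "a pageview");
    System.out.println(store.get(42L));
  }
}
```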
+
+    <p> For this tutorial, we have already defined the Avro schema to use and compiled
+    our data bean into the <code>Pageview</code> class. For keys in the data store, we will be using <code>Long</code>s. 
+    Each key will hold the line number of the pageview in the data file. </p>
+
+    <p>Next, let's look at the main function of the <code>LogManager</code> class.</p>
+    <p><source>
+   public static void main(String[] args) throws Exception {
+    if(args.length &lt; 2) {
+      System.err.println(USAGE);
+      System.exit(1);
+    }
+    
+    LogManager manager = new LogManager();
+    
+    if("-parse".equals(args[0])) {
+      manager.parse(args[1]);
+    } else if("-query".equals(args[0])) {
+      if(args.length == 2) 
+        manager.query(Long.parseLong(args[1]));
+      else 
+        manager.query(Long.parseLong(args[1]), Long.parseLong(args[2]));
+    } else if("-delete".equals(args[0])) {
+      manager.delete(Long.parseLong(args[1]));
+    } else if("-deleteByQuery".equalsIgnoreCase(args[0])) {
+      manager.deleteByQuery(Long.parseLong(args[1]), Long.parseLong(args[2]));
+    } else {
+      System.err.println(USAGE);
+      System.exit(1);
+    }
+    
+    manager.close();
+  }
+    </source></p>
+
+    <p>We can use the example log manager program from the command line (in the top level Gora directory): </p>
+    <p><code>
+      $ bin/gora logmanager 
+    </code></p>
+    <p> which lists the usage as: </p>
+    <p><source>
+LogManager -parse &lt;input_log_file&gt;
+           -get &lt;lineNum&gt;
+           -query &lt;lineNum&gt;
+           -query &lt;startLineNum&gt; &lt;endLineNum&gt;
+           -delete &lt;lineNum&gt;
+           -deleteByQuery &lt;startLineNum&gt; &lt;endLineNum&gt;
+    </source></p>
+
+    <p> So to parse and store our logs located at <code>gora-tutorial/src/main/resources/access.log</code>, we will issue: </p>
+    <p><code>
+      $ bin/gora logmanager -parse gora-tutorial/src/main/resources/access.log
+    </code></p>
+
+    <p> This should output something like: </p>
+    <p><source>
+10/09/30 18:30:17 INFO log.LogManager: Parsing file:gora-tutorial/src/main/resources/access.log
+10/09/30 18:30:23 INFO log.LogManager: finished parsing file. Total number of log lines:10000
+    </source></p>
+    <p> Now, let's look at the code which parses the data and stores the logs. </p>
+    <p><source>
+  private void parse(String input) throws IOException, ParseException {
+    BufferedReader reader = new BufferedReader(new FileReader(input));
+    long lineCount = 0;
+    try {
+      String line = reader.readLine();
+      do {
+        Pageview pageview = parseLine(line);
+        
+        if(pageview != null) {
+          //store the pageview 
+          storePageview(lineCount++, pageview);
+        }
+        
+        line = reader.readLine();
+      } while(line != null);
+      
+    } finally {
+      reader.close();  
+    }
+  }
+    </source></p>
+
+    <p> The file is iterated line by line. Notice that the <code>parseLine(line)</code> 
+    function does the actual parsing, converting the string to a <code>Pageview</code> object 
+    as defined earlier. </p>
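+    <p>As a side note, the same loop is often written with the 
+    <code>while ((line = reader.readLine()) != null)</code> idiom, which also copes with 
+    an empty input file. A self-contained sketch (reading from a string rather than a 
+    file, purely for illustration):</p>

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ReadLoopSketch {
  /** Counts lines using the common read-loop idiom. */
  public static long countLines(String input) throws IOException {
    long lineCount = 0;
    try (BufferedReader reader = new BufferedReader(new StringReader(input))) {
      String line;
      while ((line = reader.readLine()) != null) {
        lineCount++; // here LogManager would call parseLine(line) and store it
      }
    }
    return lineCount;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(countLines("a\nb\nc")); // 3
    System.out.println(countLines(""));        // 0
  }
}
```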
+    
+   <p><source>
+  private Pageview parseLine(String line) throws ParseException {
+    StringTokenizer matcher = new StringTokenizer(line);
+    //parse the log line
+    String ip = matcher.nextToken();
+    ...
+    
+    //construct and return pageview object
+    Pageview pageview = new Pageview();
+    pageview.setIp(new Utf8(ip));
+    pageview.setTimestamp(timestamp);
+    ...
+    
+    return pageview;
+  }
+   </source></p>
+   <p><code>parseLine()</code> uses standard <code>StringTokenizer</code>s for the job 
+   and constructs and returns a <code>Pageview</code> object.</p>
+   </section>
+
+
+   <section>
+   <title>Storing objects in the DataStore</title>
+
+   <p> If we look back at the <code>parse()</code> method above, we can see that the 
+   <code>Pageview</code> objects returned by <code>parseLine() </code> are stored via 
+   <code>storePageview()</code> method. </p>
+
+   <p> The storePageview() method is where the magic happens, but if we look at the code,
+   we can see that it is dead simple. </p>
+ 
+   <p><source>
+  /** Stores the pageview object with the given key */
+  private void storePageview(long key, Pageview pageview) throws IOException {
+    dataStore.put(key, pageview);
+  }
+  </source></p>
+
+  <p> All we need to do is call the <a href="ext:api/org/apache/gora/store/datastore/put">
+  put()</a> method, which expects a long as the key and an instance of <code>Pageview</code> 
+  as the value.</p>
+
+  </section>
+  
+  <section>
+    <title> Closing the DataStore</title>
+    <p> <code>DataStore</code> implementations can do a lot of caching for performance. 
+    However, this means that data is not always flushed to persistent storage immediately. 
+    So, upon finishing storing objects, we need to close the datastore 
+    instance by calling its <a href="ext:api/org/apache/gora/store/datastore/close">close()</a> method. 
+    LogManager always closes its datastore in its own <code>close()</code> method.  </p>
+
+  <p><source>    
+  private void close() throws IOException {
+    //It is very important to close the datastore properly, otherwise
+    //some data loss might occur.
+    if(dataStore != null)
+      dataStore.close();
+  }
+  </source></p>
+  
+  <p>If you are pushing a lot of data, or if you want your data to be accessible before closing 
+  the data store, you can also call the <a href="ext:api/org/apache/gora/store/datastore/flush">flush()</a> 
+  method which, as expected, flushes the data to the underlying data store. However, the actual flush 
+  semantics can vary by the data store backend. For example, in SQL, flush calls <code>commit()</code>
+  on the JDBC <code>Connection</code> object, whereas in HBase, <code>HTable#flush()</code> is called.
+  Also note that even if you call <code>flush()</code> at the end of all data manipulation operations, 
+  you still need to call <code>close()</code> on the datastore.
+  </p>
+   
+  </section>
+
+  <section>
+    <title>Persisted data in HBase</title>
+    <p>Now that we have stored the web access log data in HBase, we can look at
+    how the data is stored in HBase. For that, start the HBase shell.</p>
+    <p><code>$ cd ../hbase-0.20.6</code></p>
+    <p><code>$ bin/hbase shell</code></p>
+
+    <p> If you have a fresh HBase installation, there should be one table.</p>
+    <p><code>hbase(main):010:0> list</code></p>
+  <p><source>
+AccessLog                                                                                                     
+1 row(s) in 0.0470 seconds
+  </source></p>
+  <p> Remember that AccessLog is the name of the table we specified in 
+  <code>gora-hbase-mapping.xml</code>. Looking at the contents of the table: </p>
+
+  <p><code>hbase(main):010:0> scan 'AccessLog', {LIMIT=>1}</code></p>
+  <p><source> 
+ROW                          COLUMN+CELL                                                                      
+ \x00\x00\x00\x00\x00\x00\x0 column=common:ip, timestamp=1285860617341, value=88.240.129.183                  
+ 0\x00                                                                                                        
+ \x00\x00\x00\x00\x00\x00\x0 column=common:timestamp, timestamp=1285860617341, value=\x00\x00\x01\x1F\xF1\xAEl
+ 0\x00                       P                                                                                
+ \x00\x00\x00\x00\x00\x00\x0 column=common:url, timestamp=1285860617341, value=/index.php?a=1__wwv40pdxdpo&amp;k=2
+ 0\x00                       18978                                                                            
+ \x00\x00\x00\x00\x00\x00\x0 column=http:httpMethod, timestamp=1285860617341, value=GET                       
+ 0\x00                                                                                                        
+ \x00\x00\x00\x00\x00\x00\x0 column=http:httpStatusCode, timestamp=1285860617341, value=\x00\x00\x00\xC8      
+ 0\x00                                                                                                        
+ \x00\x00\x00\x00\x00\x00\x0 column=http:responseSize, timestamp=1285860617341, value=\x00\x00\x00+           
+ 0\x00                                                                                                        
+ \x00\x00\x00\x00\x00\x00\x0 column=misc:referrer, timestamp=1285860617341, value=http://www.buldinle.com/inde
+ 0\x00                       x.php?a=1__WWV40pdxdpo&amp;k=218978                                                  
+ \x00\x00\x00\x00\x00\x00\x0 column=misc:userAgent, timestamp=1285860617341, value=Mozilla/4.0 (compatible; MS
+ 0\x00                       IE 6.0; Windows NT 5.1)
+  </source></p>
+  
+  <p>The output shows all the columns matching the first line with key 0. We can see 
+  the columns <code>common:ip, common:timestamp, common:url, </code> etc. Remember that 
+  these are the columns that we have described in the <code>gora-hbase-mapping.xml</code> 
+  file. </p>
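+  <p>The row keys in the scan output are the raw bytes of the <code>Long</code> key: line 0 
+  shows as eight <code>\x00</code> bytes, which suggests an 8-byte big-endian encoding. A 
+  small sketch of that encoding, using <code>ByteBuffer</code> as an assumption for 
+  illustration (Gora and HBase use their own byte utilities):</p>

```java
import java.nio.ByteBuffer;

public class RowKeySketch {
  /** Encodes a long key as 8 big-endian bytes (ByteBuffer's default order). */
  public static byte[] toRowKey(long key) {
    return ByteBuffer.allocate(8).putLong(key).array();
  }

  /** Renders every byte as \xNN for display. */
  public static String hex(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for (byte b : bytes) {
      sb.append(String.format("\\x%02X", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(hex(toRowKey(0L)));  // row key of the first log line
    System.out.println(hex(toRowKey(42L)));
  }
}
```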
+
+  <p> You can also count the number of entries in the table to make sure that all the records
+   have been stored.</p>
+  <p><code>hbase(main):010:0> count 'AccessLog'</code></p>
+  <p><source> 
+... 
+10000 row(s) in 1.0580 seconds
+  </source></p>
+  </section>
+
+  <section>
+    <title>Fetching objects from data store</title>
+    <p> Fetching objects from the data store is as easy as storing them. There are essentially 
+    two ways to fetch objects. The first is to fetch a single object given its key. The 
+    second is to run a query through the data store. </p>
+
+    <p>To fetch objects one by one, we can use one of the overloaded 
+    <a href="ext:api/org/apache/gora/store/datastore/get">get()</a> methods. 
+    The method with signature <code>get(K key)</code> returns the object corresponding to the given key, fetching all of its 
+    fields. On the other hand, <code>get(K key, String[] fields)</code> returns the object corresponding to the 
+    given key, fetching only the fields given as the second argument.</p>
+
+    <p>When run with the argument <code>-get</code>, the <code>LogManager</code> class fetches the pageview object 
+    from the data store and prints the result. </p>
+            
+    <p><source>
+  /** Fetches a single pageview object and prints it*/
+  private void get(long key) throws IOException {
+    Pageview pageview = dataStore.get(key);
+    printPageview(pageview);
+  }
+    </source></p>
+
+    <p> To display the 42nd line of the access log: </p>
+    <p><code>$ bin/gora logmanager -get 42 </code></p>
+    <p><source>
+org.apache.gora.tutorial.log.generated.Pageview@321ce053 {
+  "url":"/index.php?i=0&amp;a=1__rntjt9z0q9w&amp;k=398179"
+  "timestamp":"1236710649000"
+  "ip":"88.240.129.183"
+  "httpMethod":"GET"
+  "httpStatusCode":"200"
+  "responseSize":"43"
+  "referrer":"http://www.buldinle.com/index.php?i=0&amp;a=1__RnTjT9z0Q9w&amp;k=398179"
+  "userAgent":"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
+}
+    </source></p>
+  </section>
+
+  <section>
+    <title> Querying objects </title>
+    <p> DataStore API defines a <a href="ext:api/org/apache/gora/query/query">Query</a> 
+    interface to query the objects at the data store. Each data store implementation 
+    can use a specific implementation of the <code>Query</code> interface. Queries are 
+    instantiated by calling <a href="ext:api/org/apache/gora/store/datastore/newquery">
+    DataStore#newQuery()</a>. When the query is run through the datastore, the results 
+    are returned via the <a href="ext:api/org/apache/gora/query/result"> Result</a> 
+    interface. Let's see how we can run a query and display the results in the 
+    LogManager class. </p>
+ 
+    <p><source>
+  /** Queries and prints pageview objects that have keys between startKey and endKey */
+  private void query(long startKey, long endKey) throws IOException {
+    Query&lt;Long, Pageview&gt; query = dataStore.newQuery();
+    //set the properties of query
+    query.setStartKey(startKey);
+    query.setEndKey(endKey);
+    
+    Result&lt;Long, Pageview&gt; result = query.execute();
+    
+    printResult(result);
+  }
+    </source> </p>
+
+    <p> After constructing a <a href="ext:api/org/apache/gora/query/query">Query</a>, its properties 
+    are set via the setter methods. Then calling 
+    <a href="ext:api/org/apache/gora/query/query/execute">query.execute()</a> returns
+    the Result object.</p>
+
+    <p> The <a href="ext:api/org/apache/gora/query/result"> Result</a> interface allows us to 
+    iterate over the results one by one by calling the <a href="ext:api/org/apache/gora/query/result/next"> 
+   next()</a> method. The <a href="ext:api/org/apache/gora/query/result/getkey"> 
+   getKey()</a> method returns the current key and <a href="ext:api/org/apache/gora/query/result/get"> 
+   get()</a> returns the current persistent object. </p>
+
+    <p><source>
+  private void printResult(Result&lt;Long, Pageview&gt; result) throws IOException {
+    
+    while(result.next()) { //advances the Result object and breaks if at end
+      long resultKey = result.getKey(); //obtain current key
+      Pageview resultPageview = result.get(); //obtain current value object
+      
+      //print the results
+      System.out.println(resultKey + ":");
+      printPageview(resultPageview);
+    }
+    
+    System.out.println("Number of pageviews from the query:" + result.getOffset());
+  }
+    </source> </p>
+
+    <p>With these functions defined, we can run the LogManager class to query the 
+    access logs in HBase. For example, to display the log records between lines 10 and 12, 
+    we can use: </p>
+    
+    <p><code> bin/gora logmanager -query 10 12 </code></p>
+ 
+    <p>Which results in:</p>
+    <p> <source>
+10:
+org.apache.gora.tutorial.log.generated.Pageview@d38d0eaa {
+  "url":"/"
+  "timestamp":"1236710442000"
+  "ip":"144.122.180.55"
+  "httpMethod":"GET"
+  "httpStatusCode":"200"
+  "responseSize":"43"
+  "referrer":"http://buldinle.com/"
+  "userAgent":"Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.6) Gecko/2009020911 Ubuntu/8.10 (intrepid) Firefox/3.0.6"
+}
+11:
+org.apache.gora.tutorial.log.generated.Pageview@b513110a {
+  "url":"/index.php?i=7&amp;a=1__gefuumyhl5c&amp;k=5143555"
+  "timestamp":"1236710453000"
+  "ip":"85.100.75.104"
+  "httpMethod":"GET"
+  "httpStatusCode":"200"
+  "responseSize":"43"
+  "referrer":"http://www.buldinle.com/index.php?i=7&amp;a=1__GeFUuMyHl5c&amp;k=5143555"
+  "userAgent":"Mozilla/5.0 (Windows; U; Windows NT 5.1; tr; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7"
+}
+    </source></p>
+ 
+  </section>
+
+
+  <section>
+    <title>Deleting objects</title>
+    <p> Just like fetching objects, there are two main methods to delete 
+    objects from the data store. The first one is to delete objects one by 
+    one using the <a href="ext:api/org/apache/gora/store/datastore/delete">
+    DataStore#delete(K)</a> method, which takes the key of the object. 
+    Alternatively we can delete all of the data that matches a given query by 
+    calling the <a href="ext:api/org/apache/gora/store/datastore/deletebyquery">
+    DataStore#deleteByQuery(Query)</a> method. By using deleteByQuery, we can 
+    perform fine-grained deletes, for example deleting just a specific field 
+    from several records. </p>
+    <p>Continuing with the LogManager class, the APIs for both are given below.</p>
+
+    <p> <source>    
+  /**Deletes the pageview with the given line number */
+  private void delete(long lineNum) throws Exception {
+    dataStore.delete(lineNum);
+    dataStore.flush(); //changes may need to be flushed before
+                       //they are committed 
+  }
+  
+  /** This method illustrates delete by query call */
+  private void deleteByQuery(long startKey, long endKey) throws IOException {
+    //Constructs a query from the dataStore. The matching rows to this query will be deleted
+    Query&lt;Long, Pageview&gt; query = dataStore.newQuery();
+    //set the properties of query
+    query.setStartKey(startKey);
+    query.setEndKey(endKey);
+    
+    dataStore.deleteByQuery(query);
+  }    
+    </source></p>
+
+    <p>And from the command line: </p>
+    <p><code> bin/gora logmanager -delete 12 </code></p>
+    <p><code> bin/gora logmanager -deleteByQuery 40 50 </code></p>
+
+  </section>
+  </section>
+
+  <section>
+    <title>MapReduce Support</title>
+    <p>Gora has first class MapReduce support for <a href="ext:hadoop">Apache Hadoop</a>. 
+    Gora data stores can be used as inputs and outputs of jobs. Moreover, the objects can 
+    be serialized and passed between tasks while preserving their persistent state. For 
+    serialization, Gora extends Avro DatumWriters.  </p>
+
+    <section> 
+      <title> Log analytics in MapReduce </title>
+      <p> For this part of the tutorial, we will be analyzing the logs that have been 
+      stored at HBase earlier. Specifically, we will develop a MapReduce program to 
+      calculate the number of daily pageviews for each URL in the site. </p>
+
+      <p> We will be using the <code>LogAnalytics</code> class to analyze the logs, which can
+      be found at <code>gora-tutorial/src/main/java/org/apache/gora/tutorial/log/LogAnalytics.java</code>.
+      For computing the analytics, the mapper takes in pageviews and outputs 
+      &lt;URL, timestamp&gt; pairs with 1 as the value. The timestamp represents the day 
+      on which the pageview occurred, so that the daily pageviews are accumulated. 
+      The reducer just sums up the values, and outputs <code>MetricDatum</code> objects 
+      to be sent to the output Gora data store.</p>
+    </section>
+
+    <section> 
+      <title>Setting up the environment</title>
+      <p> We will be using the logs stored at HBase by the <code>LogManager</code> class. 
+      We will push the output of the job to an HSQL database, since it requires zero 
+      configuration to set up. However, you can also use MySQL or HBase for storing the analytics results. 
+      If you want to continue with HBase, you can skip the next sections. </p>
+
+      <section>
+      <title> Setting up the database </title>
+      <p> First we need to download the HSQL dependencies. For that, uncomment the following line 
+      in <code>gora-tutorial/ivy/ivy.xml</code> (if using Maven, hsqldb should already be available). 
+      Of course, MySQL users should uncomment the mysql dependency instead. </p>
+      <p><code>&lt;!--&lt;dependency org="org.hsqldb" name="hsqldb" rev="2.0.0" conf="*->default"/&gt;--&gt;
+      </code></p>
+
+      <p> Then we need to run ant so that the new dependencies can be downloaded. </p>
+      <p><code> $ ant </code></p>
+   
+      <p> If you are using MySQL, you should also set up the database server, create the database, 
+      and grant the permissions necessary to create tables, etc., so that Gora can run properly. </p>
+      </section>
+      
+      <section>
+      <title> Configuring Gora </title>
+      <p> We will put the configuration necessary to connect to the database into 
+      <code>gora-tutorial/conf/gora.properties</code>.  </p>
+
+      <p> <source>    
+#JDBC properties for gora-sql module using HSQL
+gora.sqlstore.jdbc.driver=org.hsqldb.jdbcDriver
+gora.sqlstore.jdbc.url=jdbc:hsqldb:hsql://localhost/goratest
+
+#JDBC properties for gora-sql module using MySQL
+#gora.sqlstore.jdbc.driver=com.mysql.jdbc.Driver
+#gora.sqlstore.jdbc.url=jdbc:mysql://localhost:3306/goratest
+#gora.sqlstore.jdbc.user=root
+#gora.sqlstore.jdbc.password=      
+      </source></p>
+
+      <p> As expected, the <code>jdbc.driver</code> property is the JDBC driver class,
+      and <code>jdbc.url</code> is the JDBC connection URL. Moreover, <code>jdbc.user</code>
+      and <code>jdbc.password</code> can be specified if needed. More information about these 
+      parameters can be found in the <a href="site:gora-sql">gora-sql</a> documentation. </p>
+      </section>
+    </section>
+
+    <section>
+      <title> Modelling the data </title>
+
+      <section>
+       <title>Data Beans for Analytics</title>    
+       <p> For web site analytics, we will be using a generic <code>MetricDatum</code> 
+       data structure. It holds a string field <code>metricDimension</code> and two long 
+       fields, <code>timestamp</code> and <code>metric</code>. The first two fields 
+       are the dimensions of the web analytics data, and the last is the actual aggregate 
+       metric value. For example we might have an instance <code>{metricDimension="/index", 
+       timestamp=101, metric=12}</code>, representing that there have been 12 pageviews to 
+       the URL "/index" for the given time interval 101. </p>
+
+       <p>The Avro schema definition for <code>MetricDatum</code> can be found at 
+       <code>gora-tutorial/src/main/avro/metricdatum.json</code>, and the compiled source 
+       code at <code>gora-tutorial/src/main/java/org/apache/gora/tutorial/log/generated/MetricDatum.java</code>.</p>
+       <p><source>
+{
+  "type": "record",
+  "name": "MetricDatum",
+  "namespace": "org.apache.gora.tutorial.log.generated",
+  "fields" : [
+    {"name": "metricDimension", "type": "string"},
+    {"name": "timestamp", "type": "long"},
+    {"name": "metric", "type" : "long"}
+  ]
+}
+       </source></p>
+      </section>
+
+      <section>
+        <title>Data store mappings </title>
+        <p> We will be using the SQL backend to store the job's output data, mainly to 
+        demonstrate the gora-sql module. </p>
+
+        <p> Similar to what we have seen with HBase, the gora-sql plugin reads configuration from the 
+        <code>gora-sql-mapping.xml</code> file. 
+        Specifically, we will use the <code>gora-tutorial/conf/gora-sql-mapping.xml</code> file. </p>    
+
+        <p><source>
+&lt;gora-orm&gt;
+  ...
+  &lt;class name="org.apache.gora.tutorial.log.generated.MetricDatum" keyClass="java.lang.String" table="Metrics"&gt;
+    &lt;primarykey column="id" length="512"/&gt;
+    &lt;field name="metricDimension" column="metricDimension" length="512"/&gt;
+    &lt;field name="timestamp" column="ts"/&gt;
+    &lt;field name="metric" column="metric/&gt;
+  &lt;/class&gt;
+&lt;/gora-orm&gt;
+        </source></p>
+     
+        <p> SQL mapping files contain one or more <code>class</code> elements as the children of <code>gora-orm</code>. 
+        Each <code>class</code> element declares the mapping for one key-class/persistent-class pair. The <code>name</code> attribute is the 
+        fully qualified name of the persistent class, and the <code>keyClass</code> attribute is the fully qualified class 
+        name of the key class. </p>
+
+        <p>Children of the <code>class</code> element are <code>field</code> elements and one 
+        <code>primaryKey</code> element. Each <code>field</code> 
+        element has a <code>name</code> and <code>column</code> attribute, and optional 
+        <code>jdbc-type</code>, <code>length</code> and <code>scale</code> attributes. 
+        The <code>name</code> attribute contains 
+        the name of the field in the persistent class, and the <code>column</code> attribute is the name of the 
+        column in the database. The <code>primaryKey</code> element holds the actual key as the primary key field. Currently, 
+        Gora only supports tables with one primary key. </p>
+
+      </section>
+    </section>
+
+    <section>
+        <title> Constructing the job </title>
+        <p> In constructing the job object for Hadoop, we need to define whether we will use 
+        Gora as job input, output or both. Gora defines 
+        its own <a href="ext:api/org/apache/gora/mapreduce/gorainputformat">GoraInputFormat</a>, 
+        and <a href="ext:api/org/apache/gora/mapreduce/goraoutputformat">GoraOutputFormat</a>, which 
+        use <code>DataStore</code>s as input sources and output sinks for the jobs. 
+        <code>Gora{In|Out}putFormat</code> classes define static methods to set up the job properly.
+        However, if the mapper or reducer extends Gora's mapper and reducer  classes, 
+        you can use the static methods defined in <a href="ext:api/org/apache/gora/mapreduce/goramapper">GoraMapper</a> and 
+        <a href="ext:api/org/apache/gora/mapreduce/gorareducer">GoraReducer</a> since they are more convenient. </p> 
+        
+
+        <p> For this tutorial we will use Gora as both input and output. As can be seen from the 
+        <code>createJob()</code> function, quoted below, we create the job 
+        as normal, and set the input parameters via 
+        <a href="ext:api/org/apache/gora/mapreduce/goramapper/initmapperjob">GoraMapper#initMapperJob()</a>, 
+        and <a href="ext:api/org/apache/gora/mapreduce/gorareducer/initreducerjob">GoraReducer#initReducerJob()
+        </a>. <code>GoraMapper#initMapperJob()</code> takes a store and an optional query to fetch the data from. 
+        When a query is given, only the results of the query are used as the input of the job; otherwise, all 
+        the records are used. 
+        The actual Mapper, map output key and value classes are passed to the <code>initMapperJob()</code> 
+        function as well. <code>GoraReducer#initReducerJob()</code> accepts 
+        the data store to store the job's output, as well as the actual reducer class.
+        The <code>initMapperJob</code> and 
+        <code>initReducerJob</code> functions also have overloaded versions that take the data store class 
+        rather than data store instances.</p>
+
+        <p>
+        <source>
+  public Job createJob(DataStore&lt;Long, Pageview&gt; inStore
+      , DataStore&lt;String, MetricDatum&gt; outStore, int numReducer) throws IOException {
+    Job job = new Job(getConf());
+
+    job.setJobName("Log Analytics");
+    job.setNumReduceTasks(numReducer);
+    job.setJarByClass(getClass());
+
+    /* Mappers are initialized with GoraMapper.initMapper() or 
+     * GoraInputFormat.setInput()*/
+    GoraMapper.initMapperJob(job, inStore, TextLong.class, LongWritable.class
+        , LogAnalyticsMapper.class, true);
+
+    /* Reducers are initialized with GoraReducer#initReducer().
+     * If the output is not to be persisted via Gora, any reducer 
+     * can be used instead. */
+    GoraReducer.initReducerJob(job, outStore, LogAnalyticsReducer.class);
+    
+    return job;
+  }
+        </source>
+        </p>
+    </section>
+
+    <section>
+      <title> Gora mappers and using Gora as input </title>
+      <p> Typically, if Gora is used as job input, the Mapper class extends  
+      <a href="ext:api/org/apache/gora/mapreduce/goramapper">GoraMapper</a>. However, currently 
+      this is not enforced by the API, so other class hierarchies can be used instead. 
+      The mapper receives the key-value pairs that are the results of the input query, and emits
+      the results of the custom map task. Note that the output records of the map are independent 
+      of the input and output data stores, so any Hadoop-serializable key-value class can be used. 
+      However, Gora persistent classes are also Hadoop serializable. Hadoop serialization is 
+      handled by the <a href="ext:api/org/apache/gora/mapreduce/persistentserialization">
+      PersistentSerialization</a> class. Gora also defines a <a href="ext:api/org/apache/gora/mapreduce/stringserialization">
+      StringSerialization</a> class, to serialize strings easily. 
+      </p>
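<p>Under the hood, these serializers are registered through Hadoop's standard
<code>io.serializations</code> mechanism by Gora's job-setup helpers. A hedged sketch of the
resulting configuration entry is shown below; the exact class list may vary between Gora
versions.</p>

```
# Hadoop serialization registry as populated by Gora's job setup
# (illustrative sketch; exact entries depend on the Gora version)
io.serializations=org.apache.hadoop.io.serializer.WritableSerialization,\
org.apache.gora.mapreduce.StringSerialization,\
org.apache.gora.mapreduce.PersistentSerialization
```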
+
+      <p> Coming back to the code for the tutorial, we can see that <code>LogAnalytics</code> 
+      class defines an inner class <code>LogAnalyticsMapper</code> which extends 
+      <code>GoraMapper</code>. The map function receives <code>Long</code> keys which are the line 
+      numbers, and <code>Pageview</code> values as read from the input data store. The map simply 
+      rolls the timestamp up to the day (meaning that only the day of the timestamp is used), 
+      and outputs the key as a tuple of <code>&lt;URL,day&gt;</code>.
+      </p>
+
+      <p><source>
+    private LongWritable one = new LongWritable(1L);
+    private TextLong tuple;
+
+    @Override
+    protected void setup(Context context) throws IOException, InterruptedException {
+      //initialize the reusable key tuple once per task
+      tuple = new TextLong();
+      tuple.setKey(new Text());
+      tuple.setValue(new LongWritable());
+    }
+
+    @Override
+    protected void map(Long key, Pageview pageview, Context context)
+      throws IOException, InterruptedException {
+
+      Utf8 url = pageview.getUrl();
+      long day = getDay(pageview.getTimestamp()); //roll the timestamp up to the day
+
+      tuple.getKey().set(url.toString());
+      tuple.getValue().set(day);
+
+      context.write(tuple, one); //emit &lt;&lt;url,day&gt;, 1&gt;
+    }
+      </source></p>
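The `getDay()` helper called in the map function is not shown in the snippet. A minimal sketch of such a rollup (the class and method below are our illustration, not the tutorial's actual code) truncates a millisecond timestamp to the start of its UTC day:

```java
import java.util.concurrent.TimeUnit;

public class DayRollup {

  /** Rolls a millisecond timestamp down to 00:00 UTC of its day,
   *  so that all pageviews from the same day share one key. */
  public static long getDay(long timeStamp) {
    long dayMillis = TimeUnit.DAYS.toMillis(1); // 86,400,000 ms per day
    return timeStamp - (timeStamp % dayMillis);
  }

  public static void main(String[] args) {
    // 1236710649000 is one of the timestamps from the sample access log
    System.out.println(getDay(1236710649000L)); // prints 1236643200000
  }
}
```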
+    </section>
+
+    <section>
+      <title> Gora reducers and using Gora as output</title>
+      <p>Similar to the input, typically, if Gora is used as job output, the Reducer extends 
+      <a href="ext:api/org/apache/gora/mapreduce/gorareducer">GoraReducer</a>. The values 
+      emitted by the reducer are persisted to the output data store as a result of the job. 
+      </p>
+
+      <p> For this tutorial, the <code>LogAnalyticsReducer</code> inner class, 
+      which extends <code>GoraReducer</code>, is used as the reducer. The reducer 
+      just sums up all the values that correspond to the <code>&lt;URL,day&gt;</code> tuple. 
+      Then the metric dimension object is constructed and emitted, which 
+      will be stored at the output data store. 
+      </p>
+        
+      <p><source>
+    private MetricDatum metricDatum = new MetricDatum();
+
+    @Override
+    protected void reduce(TextLong tuple
+        , Iterable&lt;LongWritable&gt; values, Context context)
+      throws IOException, InterruptedException {
+
+      long sum = 0L; //sum up the values
+      for(LongWritable value: values) {
+        sum += value.get();
+      }
+
+      String dimension = tuple.getKey().toString();
+      long timestamp = tuple.getValue().get();
+
+      metricDatum.setMetricDimension(new Utf8(dimension));
+      metricDatum.setTimestamp(timestamp);
+      metricDatum.setMetric(sum);
+
+      //encode both dimensions into the output key, e.g. "/index_1236643200000"
+      String key = dimension + "_" + Long.toString(timestamp);
+      context.write(key, metricDatum);
+    }
+      </source></p>
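Stripped of the Hadoop types, the aggregation the reducer performs is just a per-key sum. The plain-Java illustration below (class and method names are ours, not part of the tutorial) shows the same accumulation over `<url,day>` keys:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PageviewSum {

  /** Sums the per-record counts emitted for each key, mirroring the reducer. */
  public static Map<String, Long> aggregate(String[] keys, long[] counts) {
    Map<String, Long> sums = new LinkedHashMap<>();
    for (int i = 0; i < keys.length; i++) {
      sums.merge(keys[i], counts[i], Long::sum); // accumulate the running sum per key
    }
    return sums;
  }

  public static void main(String[] args) {
    String[] keys = {"/index_1236643200000", "/index_1236643200000", "/about_1236643200000"};
    long[] counts = {1L, 1L, 1L};
    System.out.println(aggregate(keys, counts));
    // prints {/index_1236643200000=2, /about_1236643200000=1}
  }
}
```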
+    </section>
+    
+    <section> 
+      <title> Running the job </title>
+      <p> Now that the job is constructed, we can run the Hadoop job as usual. Note that the <code>run</code> function 
+      of the <code>LogAnalytics</code> class parses the arguments and runs the job. We can run the program by </p>
+      <p><code>$ bin/gora loganalytics [&lt;input data store&gt; [&lt;output data store&gt;]] </code></p>
+  
+      <section>
+      <title> Running the job with SQL </title>
+      <p>Now, let's run the log analytics tool with the SQL backend (either HSQLDB or MySQL). The input data store will be 
+      <code>org.apache.gora.hbase.store.HBaseStore</code> and the output store will be 
+      <code>org.apache.gora.sql.store.SqlStore</code>. Remember that we have already configured the database 
+      connection properties and chosen which database to use in the <a href="#Setting+up+the+environment-N103D7"> 
+      Setting up the environment</a> section. </p>
+      
+      <p><code>$ bin/gora loganalytics org.apache.gora.hbase.store.HBaseStore  org.apache.gora.sql.store.SqlStore</code></p>
+
+      <p> Now we should see some logging output from the job, and whether it finished successfully. If we are 
+      using HSQLDB, the command below can be used to check the output. </p>
+  
+      <p><code>$ java -jar gora-tutorial/lib/hsqldb-2.0.0.jar</code></p>
+
+      <p>In the connection URL, the same URL that we have provided in gora.properties should be used. If, on the other hand, 
+      MySQL is used, then we should be able to see the output using the mysql command line utility. </p>
+      
+      <p> The results of the job are stored in the Metrics table, which is defined in the <code>gora-sql-mapping.xml</code> 
+      file. Running a select query over this data confirms that the daily pageview metrics for the web site are indeed stored.
+      To see the most popular pages, run: </p>
+     
+      <p><code>&gt; SELECT METRICDIMENSION, TS, METRIC  FROM metrics order by metric desc</code></p>
+   
+      <p><table>
+<tr><th>METRICDIMENSION</th> <th>TS</th> <th>METRIC</th></tr>
+<tr><td>/</td> <td>	1236902400000</td> <td>	220</td></tr>
+<tr><td>/</td> <td>	1236988800000</td> <td>	212</td></tr>
+<tr><td>/</td> <td>	1236816000000</td> <td>	191</td></tr>
+<tr><td>/</td> <td>	1237075200000</td> <td>	155</td></tr>
+<tr><td>/</td> <td>	1241395200000</td> <td>	111</td></tr>
+<tr><td>/</td> <td>	1236643200000</td> <td>	110</td></tr>
+<tr><td>/</td> <td>	1236729600000</td> <td>	95</td></tr>
+<tr><td>/index.php?a=3__x8g0vi&amp;k=5508310</td> <td>	1236816000000</td> <td>	45</td></tr>
+<tr><td>/index.php?a=1__5kf9nvgrzos&amp;k=208773</td> <td>	1236816000000</td> <td>	37</td></tr>
+<tr><td>...</td> <td>...</td> <td>...</td></tr>
+      </table></p>
+      
+      <p>As you can see, the home page (<code>/</code>) for various days and some other pages are listed. 
+      In total, 3033 rows are present in the metrics table. </p>
+      </section>
+
+      <section>
+        <title>Running the job with HBase </title>
+        <p> Since HBaseStore is already defined as the default data store in <code>gora.properties</code>,
+        we can run the job with HBase as:</p>
+        <p><code>$ bin/gora loganalytics</code></p>
+
+        <p>The outputs of the job will be saved in the Metrics table, whose layout is defined in the 
+        <code>gora-hbase-mapping.xml</code> file. To see the results:</p>
+
+  <p><code>hbase(main):010:0> scan 'Metrics', {LIMIT=>1}</code></p>
+  <p><source> 
+ROW                          COLUMN+CELL
+ /?a=1__-znawtuabsy&amp;k=96804_ column=common:metric, timestamp=1289815441740, value=\x00\x00\x00\x00\x00\x00\x00
+ 1236902400000               \x09
+ /?a=1__-znawtuabsy&amp;k=96804_ column=common:metricDimension, timestamp=1289815441740, value=/?a=1__-znawtuabsy&amp;
+ 1236902400000               k=96804
+ /?a=1__-znawtuabsy&amp;k=96804_ column=common:ts, timestamp=1289815441740, value=\x00\x00\x01\x1F\xFD \xD0\x00
+ 1236902400000
+1 row(s) in 0.0490 seconds
+  </source></p>
+
+      </section>
+    </section>
+  </section>
+
+  <section>
+    <title>More Examples</title>
+    <p> Other than this tutorial, there are several places that you can find 
+    examples of Gora in action. </p>
+
+    <p>The first place to look is the examples directories 
+    under the various Gora modules. All the modules have a <code>&lt;gora-module&gt;/src/examples/</code> directory 
+    under which some example classes can be found. In particular, there are some classes that are used for tests under 
+    <code>&lt;gora-core&gt;/src/examples/</code>.</p>
+
+    <p>Second, the various unit tests of the Gora modules can be consulted to see the API in use. The unit tests can be found 
+    at <code>&lt;gora-module&gt;/src/test/</code>. </p>
+
+    <p>The source code of projects using Gora can also be checked out as a reference. <a href="ext:nutch">Apache Nutch</a> is 
+    one of the first-class users of Gora, so looking into how Nutch uses Gora is always a good idea.
+    </p>
+    <p> Please feel free to grab our <a href="http://gora.apache.org/images/powered-by-gora.png">poweredBy</a> sticker and embed it in anything backed by Apache Gora.</p>
+  </section>
+
+  <section>
+    <title>Feedback</title>
+    <p> Finally, thanks for trying out Gora. If you find any bugs or have suggestions for improvement, 
+    do not hesitate to give feedback on the dev@gora.apache.org <a href="ext:devmail">mailing list</a>. </p>
+  </section>
+
+  </body>  
+</document>
diff --git a/trunk/docs/src/resources/images/favicon.ico b/trunk/docs/src/resources/images/favicon.ico
new file mode 100644
index 0000000..161bcf7
--- /dev/null
+++ b/trunk/docs/src/resources/images/favicon.ico
Binary files differ
diff --git a/trunk/docs/src/resources/images/gora-logo.jpg b/trunk/docs/src/resources/images/gora-logo.jpg
new file mode 100644
index 0000000..70b4685
--- /dev/null
+++ b/trunk/docs/src/resources/images/gora-logo.jpg
Binary files differ
diff --git a/trunk/docs/src/resources/images/gora-logo.png b/trunk/docs/src/resources/images/gora-logo.png
new file mode 100644
index 0000000..04a64a6
--- /dev/null
+++ b/trunk/docs/src/resources/images/gora-logo.png
Binary files differ
diff --git a/trunk/docs/src/resources/images/powered-by-gora.png b/trunk/docs/src/resources/images/powered-by-gora.png
new file mode 100644
index 0000000..9be0fda
--- /dev/null
+++ b/trunk/docs/src/resources/images/powered-by-gora.png
Binary files differ
diff --git a/trunk/docs/src/skinconf.xml b/trunk/docs/src/skinconf.xml
new file mode 100644
index 0000000..1772cf7
--- /dev/null
+++ b/trunk/docs/src/skinconf.xml
@@ -0,0 +1,417 @@
+<?xml version="1.0"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!--
+Skin configuration file. This file contains details of your project,
+which will be used to configure the chosen Forrest skin.
+-->
+<!DOCTYPE skinconfig PUBLIC "-//APACHE//DTD Skin Configuration V0.8-1//EN" "http://forrest.apache.org/dtd/skinconfig-v08-1.dtd">
+<skinconfig>
+<!-- To enable lucene search add provider="lucene" (default is google).
+    Add box-location="alt" to move the search box to an alternate location
+    (if the skin supports it) and box-location="all" to show it in all
+    available locations on the page.  Remove the <search> element to show
+    no search box. @domain will enable sitesearch for the specific domain with google.
+    In other words google will search the @domain for the query string.
+  -->
+  <search name="Apache Gora" domain="gora.apache.org/" provider="google" box-location="default"/>
+<!-- Disable the print link? If enabled, invalid HTML 4.0.1 -->
+  <disable-print-link>true</disable-print-link>
+<!-- Disable the PDF link? -->
+  <disable-pdf-link>false</disable-pdf-link>
+<!-- Disable the POD link? -->
+  <disable-pod-link>true</disable-pod-link>
+<!-- Disable the Text link? FIXME: NOT YET IMPLEMENETED. -->
+  <disable-txt-link>true</disable-txt-link>
+<!-- Disable the xml source link? -->
+<!-- The xml source link makes it possible to access the xml rendition
+    of the source from the html page, and to have it generated statically.
+    This can be used to enable other sites and services to reuse the
+    xml format for their uses. Keep this disabled if you don't want other
+    sites to easily reuse your pages.-->
+  <disable-xml-link>true</disable-xml-link>
+<!-- Disable navigation icons on all external links? -->
+  <disable-external-link-image>true</disable-external-link-image>
+<!-- Disable w3c compliance links? 
+    Use e.g. align="center" to move the compliance links logos to 
+    an alternate location default is left.
+    (if the skin supports it) -->
+  <disable-compliance-links>true</disable-compliance-links>
+<!-- Render mailto: links unrecognisable by spam harvesters? -->
+  <obfuscate-mail-links>true</obfuscate-mail-links>
+  <obfuscate-mail-value>.at.</obfuscate-mail-value>
+<!-- Disable the javascript facility to change the font size -->
+  <disable-font-script>true</disable-font-script>
+<!-- mandatory project logo
+       default skin: renders it at the top -->
+  <project-name>Apache Gora</project-name>
+  <project-description>The Apache Gora open source framework provides an in-memory data model and persistence for big data. Gora supports persisting to column stores, key value stores, document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce support. </project-description>
+  <project-url>http://gora.apache.org/</project-url>
+  <project-logo>images/gora-logo.png</project-logo>
+<!-- Alternative static image:
+  <project-logo>images/project-logo.gif</project-logo> -->
+<!-- optional group logo
+       default skin: renders it at the top-left corner -->
+  <group-name>Apache Software Foundation</group-name>
+  <group-description>The ASF is made up of nearly 100 top level projects that cover a wide range of technologies. Chances are if you are looking for a rewarding experience in Open Source, you are going to find it here.</group-description>
+  <group-url>http://apache.org/</group-url>
+  <group-logo>http://www.apache.org/images/asf-logo.gif</group-logo>
+<!-- Alternative static image:
+  <group-logo>images/group-logo.gif</group-logo> -->
+<!-- optional host logo (e.g. sourceforge logo)
+       default skin: renders it at the bottom-left corner -->
+  <host-url></host-url>
+  <host-logo></host-logo>
+<!-- relative url of a favicon file, normally favicon.ico -->
+  <favicon-url>images/favicon.ico</favicon-url>
+<!-- The following are used to construct a copyright statement -->
+  <disable-copyright-footer>false</disable-copyright-footer>
+<!-- @inception enable automatic generation of a date-range to current date -->
+  <year inception="true">2010</year>
+  <vendor>Apache Software Foundation.</vendor>
+<!-- The optional copyright-link URL will be used as a link in the
+    copyright statement -->
+  <copyright-link>http://www.apache.org/</copyright-link>
+<!-- Some skins use this to form a 'breadcrumb trail' of links.
+    Use location="alt" to move the trail to an alternate location
+    (if the skin supports it).
+    Omit the location attribute to display the trail in the default location.
+    Use location="none" to not display the trail (if the skin supports it).
+    For some skins just set the attributes to blank.
+    
+    NOTE: If a breadcrumb entry points at a local file the href must
+    be complete, that is it must point to the file itself, not to a 
+    directory.
+  -->
+  <trail>
+    <link1 name="Apache" href="http://apache.org/"/>
+    <link2 name="Gora" href="http://gora.apache.org/"/>
+    <link3 name="" href=""/>
+  </trail>
+<!-- Configure the TOC, i.e. the Table of Contents.
+  @max-depth
+   how many "section" levels need to be included in the
+   generated Table of Contents (TOC). 
+  @min-sections
+   Minimum required to create a TOC.
+  @location ("page","menu","page,menu", "none")
+   Where to show the TOC.
+  -->
+  <toc max-depth="2" min-sections="1" location="page"/>
+<!-- Heading types can be clean|underlined|boxed  -->
+  <headings type="underlined"/>
+<!-- The optional feedback element will be used to construct a
+    feedback link in the footer with the page pathname appended:
+    <a href="@href">{@to}</a>
+    -->
+<!--  <feedback to="webmaster@incubator.apache.org/gora/"
+    href="mailto:webmaster@incubator.apache.org/gora/?subject=Feedback&#160;" >
+    Send feedback about the website to:
+  </feedback> -->
+<!-- Optional message of the day (MOTD).
+    Note: This is only implemented in the pelt skin.
+    Note: Beware issue FOR-677 if you use an absolute path uri.
+    If the optional <motd> element is used, then messages will be appended
+    depending on the URI string pattern.
+    motd-option : Each option will match a pattern and apply its text.
+      The "pattern" attribute specifies the pattern to be matched.
+      This can be a specific page, or a general pattern to match a set of pages,
+      e.g. everything in the "samples" directory.
+      The @starts-with=true anchors the string to the start, otherwise contains 
+    motd-title : This text will be added in brackets after the <html><title>
+      and this can be empty.
+    motd-page : This text will be added in a panel on the face of the page,
+      with the "motd-page-url" being the hyperlink "More".
+    Values for the "location" attribute are:
+      page : on the face of the page, e.g. in the spare space of the toc
+      alt : at the bottom of the left-hand navigation panel
+      both : both
+    -->
+<!--
+  <motd>
+    <motd-option pattern="samples/sample.html">
+      <motd-title>sample</motd-title>
+      <motd-page location="both">
+        This is an example of a Message of the day (MOTD).
+      </motd-page>
+      <motd-page-url>faq.html</motd-page-url>
+    </motd-option>
+    <motd-option pattern="samples/faq.html">
+      <motd-page location="page">
+        How to enable this MOTD is on this page.
+      </motd-page>
+      <motd-page-url>http://forrest.apache.org/docs/faq.html</motd-page-url>
+    </motd-option>
+  </motd>
+-->
+<!--
+    extra-css - here you can define custom css-elements that are 
+    A) overriding the fallback elements or 
+    B) adding the css definition from new elements that you may have 
+       used in your documentation.
+    -->
+  <extra-css>
+<!--Example of reason B:
+        To define the css definition of a new element that you may have used
+        in the class attribute of a <p> node. 
+        e.g. <p class="quote"/>
+    -->
+    p.quote {
+      margin-left: 2em;
+      padding: .5em;
+      background-color: #f0f0f0;
+      font-family: monospace;
+    }
+    <!--Example:
+        To override the colours of links only in the footer.
+    -->
+    #footer a { color: #0F3660; }
+    #footer a:visited { color: #009999; }
+  </extra-css>
+  <colors>
+<!-- These values are used for the generated CSS files.
+    They essentially "override" the default colors defined in the chosen skin.
+    There are four duplicate "groups" of colors below, denoted by comments:
+      Color group: Forrest, Krysalis, Collabnet, and Lenya using Pelt.
+    They are provided for example only. To customize the colors of any skin,
+    uncomment one of these groups of color elements and change the values
+    of the particular color elements that you wish to change.
+    Note that in this configuration the Krysalis group below is active;
+    the remaining groups are commented-out for reference.
+  -->
+<!-- Color group: Forrest: example colors similar to forrest.apache.org
+    Some of the element names are obscure, so comments are added to show how
+    the "pelt" skin uses them, other skins might use these elements in a different way.
+    Tip: temporarily change the value of an element to red (#ff0000) and see the effect.
+     pelt: breadtrail: the strip at the top of the page and the second strip under the tabs
+     pelt: header: top strip containing project and group logos
+     pelt: heading|subheading: section headings within the content
+     pelt: navstrip: the strip under the tabs which contains the published date
+     pelt: menu: the left-hand navigation panel
+     pelt: toolbox: the selected menu item
+     pelt: searchbox: the background of the searchbox
+     pelt: border: line border around selected menu item
+     pelt: body: any remaining parts, e.g. the bottom of the page
+     pelt: footer: the second from bottom strip containing credit logos and published date
+     pelt: feedback: the optional bottom strip containing feedback link
+  -->
+<!--
+    <color name="breadtrail" value="#cedfef" font="#0F3660" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+    <color name="header" value="#294563"/>
+    <color name="tab-selected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+    <color name="tab-unselected" value="#b5c7e7" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+    <color name="subtab-selected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+    <color name="subtab-unselected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+    <color name="heading" value="#294563"/>
+    <color name="subheading" value="#4a6d8c"/>
+    <color name="published" value="#4C6C8F" font="#FFFFFF"/>
+    <color name="feedback" value="#4C6C8F" font="#FFFFFF" align="center"/>
+    <color name="navstrip" value="#4a6d8c" font="#ffffff" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+    <color name="menu" value="#4a6d8c" font="#cedfef" link="#ffffff" vlink="#ffffff" hlink="#ffcf00"/>    
+    <color name="toolbox" value="#4a6d8c"/>
+    <color name="border" value="#294563"/>
+    <color name="dialog" value="#4a6d8c"/>
+    <color name="searchbox" value="#4a6d8c" font="#000000"/>
+    <color name="body" value="#ffffff" link="#0F3660" vlink="#009999" hlink="#000066"/>
+    <color name="table" value="#7099C5"/>    
+    <color name="table-cell" value="#f0f0ff"/>    
+    <color name="highlight" value="#ffff00"/>
+    <color name="fixme" value="#cc6600"/>
+    <color name="note" value="#006699"/>
+    <color name="warning" value="#990000"/>
+    <color name="code" value="#CFDCED"/>
+    <color name="footer" value="#cedfef"/>
+-->
+<!-- Color group: Krysalis -->
+
+    <color name="header"    value="#FFFFFF"/>
+
+    <color name="tab-selected" value="#a5b6c6" link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="tab-unselected" value="#F7F7F7"  link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="subtab-selected" value="#a5b6c6"  link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="subtab-unselected" value="#a5b6c6"  link="#000000" vlink="#000000" hlink="#000000"/>
+
+    <color name="heading" value="#a5b6c6"/>
+    <color name="subheading" value="#CFDCED"/>
+        
+    <color name="navstrip" value="#CFDCED" font="#000000" link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="toolbox" value="#a5b6c6"/>
+    <color name="border" value="#a5b6c6"/>
+        
+    <color name="menu" value="#F7F7F7" link="#000000" vlink="#000000" hlink="#000000"/>    
+    <color name="dialog" value="#F7F7F7"/>
+            
+    <color name="body"    value="#ffffff" link="#0F3660" vlink="#009999" hlink="#000066"/>
+    
+    <color name="table" value="#a5b6c6"/>    
+    <color name="table-cell" value="#ffffff"/>    
+    <color name="highlight" value="#ffff00"/>
+    <color name="fixme" value="#cc6600"/>
+    <color name="note" value="#006699"/>
+    <color name="warning" value="#990000"/>
+    <color name="code" value="#a5b6c6"/>
+        
+    <color name="footer" value="#a5b6c6"/>
+
+<!-- Color group: Collabnet -->
+<!--
+    <color name="header"    value="#003366"/>
+
+    <color name="tab-selected" value="#dddddd" link="#555555" vlink="#555555" hlink="#555555"/>
+    <color name="tab-unselected" value="#999999" link="#ffffff" vlink="#ffffff" hlink="#ffffff"/>
+    <color name="subtab-selected" value="#cccccc" link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="subtab-unselected" value="#cccccc" link="#555555" vlink="#555555" hlink="#555555"/>
+
+    <color name="heading" value="#003366"/>
+    <color name="subheading" value="#888888"/>
+    
+    <color name="navstrip" value="#dddddd" font="#555555"/>
+    <color name="toolbox" value="#dddddd" font="#555555"/>
+    <color name="border" value="#999999"/>
+    
+    <color name="menu" value="#ffffff"/>    
+    <color name="dialog" value="#eeeeee"/>
+            
+    <color name="body"      value="#ffffff"/>
+    
+    <color name="table" value="#ccc"/>    
+    <color name="table-cell" value="#ffffff"/>   
+    <color name="highlight" value="#ffff00"/>
+    <color name="fixme" value="#cc6600"/>
+    <color name="note" value="#006699"/>
+    <color name="warning" value="#990000"/>
+    <color name="code" value="#003366"/>
+        
+    <color name="footer" value="#ffffff"/>
+-->
+<!-- Color group: Lenya using pelt-->
+<!--
+    <color name="header" value="#ffffff"/>
+
+    <color name="tab-selected" value="#E5E4D9" link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="tab-unselected" value="#F5F4E9" link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="subtab-selected" value="#000000" link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="subtab-unselected" value="#E5E4D9" link="#000000" vlink="#000000" hlink="#000000"/>
+
+    <color name="heading" value="#E5E4D9"/>
+    <color name="subheading" value="#000000"/>
+    <color name="published" value="#000000"/>
+    <color name="navstrip" value="#E5E4D9" font="#000000"/>
+    <color name="toolbox" value="#CFDCED" font="#000000"/>
+    <color name="border" value="#999999"/>
+
+    <color name="menu" value="#E5E4D9" font="#000000" link="#000000" vlink="#000000" hlink="#000000"/>
+    <color name="dialog" value="#CFDCED"/>
+    <color name="body" value="#ffffff" />
+
+    <color name="table" value="#ccc"/>
+    <color name="table-cell" value="#ffffff"/>
+    <color name="highlight" value="#ffff00"/>
+    <color name="fixme" value="#cc6600"/>
+    <color name="note" value="#006699"/>
+    <color name="warning" value="#990000"/>
+    <color name="code" value="#FF0000"/>
+
+    <color name="footer" value="#E5E4D9"/>
+-->
+  </colors>
+<!-- Settings specific to PDF output. -->
+  <pdf>
+<!-- 
+       Supported page sizes are a0, a1, a2, a3, a4, a5, executive,
+       folio, legal, ledger, letter, quarto, tabloid (default letter).
+       Supported page orientations are portrait, landscape (default
+       portrait).
+       Supported text alignments are left, right, justify (default left).
+    -->
+    <page size="letter" orientation="portrait" text-align="left"/>
+<!-- 
+       Pattern of the page numbering in the footer - default is "Page x".
+       The first occurrence of the digit '1' represents the current page number,
+       the second occurrence represents the total page count;
+       anything else is treated as the static part of the numbering pattern.
+       Examples : x is the current page number, y the total page number.
+       <page-numbering-format>none</page-numbering-format> Does not display the page numbering
+       <page-numbering-format>1</page-numbering-format> Displays "x"
+       <page-numbering-format>p1.</page-numbering-format> Displays "px."
+       <page-numbering-format>Page 1/1</page-numbering-format> Displays "Page x/y"
+       <page-numbering-format>(1-1)</page-numbering-format> Displays "(x-y)"
+    -->
+    <page-numbering-format>Page 1</page-numbering-format>
+<!--
+       Margins can be specified for top, bottom, inner, and outer
+       edges. If double-sided="false", the inner edge is always left
+       and the outer is always right. If double-sided="true", the
+       inner edge will be left on odd pages, right on even pages,
+       the outer edge vice versa.
+       Specified below are the default settings.
+    -->
+    <margins double-sided="false">
+      <top>1in</top>
+      <bottom>1in</bottom>
+      <inner>1.25in</inner>
+      <outer>1in</outer>
+    </margins>
+<!--
+      Print the URL text next to all links going outside the file
+    -->
+    <show-external-urls>false</show-external-urls>
+<!--
+      Disable the copyright footer on each page of the PDF.
+      A footer is composed for each page. By default, a "credit" with role=pdf
+      will be used, as explained below. Otherwise a copyright statement
+      will be generated. This latter can be disabled.
+    -->
+    <disable-copyright-footer>false</disable-copyright-footer>
+  </pdf>
+<!-- 
+    Credits are typically rendered as a set of small clickable
+    images in the page footer.
+    
+    Use box-location="alt" to move the credits to an alternate location
+    (if the skin supports it).
+
+    For example, pelt skin:
+    - box-location="alt" will place the logo at the end of the
+      left-hand coloured menu panel.
+    - box-location="alt2" will place them underneath that panel
+      in the left-hand whitespace.
+    - Otherwise they are placed next to the compatibility icons
+      at the bottom of the screen.
+
+    Comment out the whole <credit> element if you want no credits in the
+    web pages.
+   -->
+  <credits>
+    <credit box-location="alt">
+      <name>Built with Apache Forrest</name>
+      <url>http://forrest.apache.org/</url>
+      <image>images/built-with-forrest-button.png</image>
+      <width>88</width>
+      <height>31</height>
+    </credit>
+<!-- A credit with @role="pdf" will be used to compose a footer
+     for each page in the PDF, using either "name" or "url" or both.
+    -->
+<!--
+    <credit role="pdf">
+      <name>Built with Apache Forrest</name>
+      <url>http://forrest.apache.org/</url>
+    </credit>
+    -->
+  </credits>
+</skinconfig>
diff --git a/trunk/gora-accumulo/pom.xml b/trunk/gora-accumulo/pom.xml
new file mode 100644
index 0000000..1a67c20
--- /dev/null
+++ b/trunk/gora-accumulo/pom.xml
@@ -0,0 +1,168 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+    -->
+    
+	<modelVersion>4.0.0</modelVersion>
+
+	<parent>
+		<groupId>org.apache.gora</groupId>
+		<artifactId>gora</artifactId>
+		<version>0.2.1</version>
+		<relativePath>../</relativePath>
+	</parent>
+	<artifactId>gora-accumulo</artifactId>
+	<packaging>bundle</packaging>
+
+	<name>Apache Gora :: Accumulo</name>
+        <url>http://gora.apache.org</url>
+    <description>The Apache Gora open source framework provides an in-memory data model and 
+    persistence for big data. Gora supports persisting to column stores, key value stores, 
+    document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce 
+    support.</description>
+    <inceptionYear>2010</inceptionYear>
+    <organization>
+    	<name>The Apache Software Foundation</name>
+    	<url>http://www.apache.org/</url>
+    </organization>
+    <scm>
+    	<url>http://svn.apache.org/viewvc/gora/tags/apache-gora-0.2.1/gora-accumulo</url>
+    	<connection>scm:svn:http://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-accumulo</connection>
+    	<developerConnection>scm:svn:https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-accumulo</developerConnection>
+    </scm>
+    <issueManagement>
+    	<system>JIRA</system>
+    	<url>https://issues.apache.org/jira/browse/GORA</url>
+    </issueManagement>
+    <ciManagement>
+    	<system>Jenkins</system>
+    	<url>https://builds.apache.org/job/Gora-trunk/</url>
+    </ciManagement>
+	
+    <properties>
+        <osgi.import>*</osgi.import>
+        <osgi.export>org.apache.gora.accumulo*;version="${project.version}";-noimport:=true</osgi.export>
+    </properties>
+    
+    <build>
+        <directory>target</directory>
+        <outputDirectory>target/classes</outputDirectory>
+        <finalName>${project.artifactId}-${project.version}</finalName>
+        <testOutputDirectory>target/test-classes</testOutputDirectory>
+        <testSourceDirectory>src/test/java</testSourceDirectory>
+        <sourceDirectory>src/main/java</sourceDirectory>
+        <testResources>
+          <testResource>
+            <directory>${project.basedir}/src/test/resources</directory>
+            <includes>
+              <include>**/*</include>
+            </includes>
+            <!--targetPath>${project.basedir}/target/classes/</targetPath-->
+          </testResource>
+        </testResources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>build-helper-maven-plugin</artifactId>
+                <version>${build-helper-maven-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <phase>generate-sources</phase>
+                        <goals>
+                            <goal>add-source</goal>
+                        </goals>
+                        <configuration>
+                            <sources>
+                                <source>src/examples/java</source>
+                            </sources>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <version>${maven-jar-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>jar</goal>
+                            <goal>test-jar</goal>
+                        </goals>
+                        <configuration>
+                        <archive>
+                            <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
+                        </archive>
+                    </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+
+    <dependencies>
+        <!-- Gora Internal Dependencies -->
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+            <classifier>tests</classifier>
+            <scope>test</scope>
+        </dependency>
+
+        <!--Accumulo Dependency -->
+        <dependency>
+           <groupId>org.apache.accumulo</groupId>
+           <artifactId>accumulo-core</artifactId>
+           <version>1.4.0</version>
+        </dependency>
+
+        <!-- Hadoop Dependencies -->
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>avro</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+	        <exclusions>
+	          <exclusion>
+                <groupId>javax.jms</groupId>
+	            <artifactId>jms</artifactId>
+	          </exclusion>
+            </exclusions>
+        </dependency>
+
+        <!-- Testing Dependencies -->
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-test</artifactId>
+        </dependency>
+
+    </dependencies>
+
+</project>
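For context (not part of the patch): a downstream project would pull this module in with a dependency declaration such as the following, using the coordinates defined in the pom above.

```xml
<dependency>
  <groupId>org.apache.gora</groupId>
  <artifactId>gora-accumulo</artifactId>
  <version>0.2.1</version>
</dependency>
```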
diff --git a/trunk/gora-accumulo/src/examples/java/.gitignore b/trunk/gora-accumulo/src/examples/java/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-accumulo/src/examples/java/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/BinaryEncoder.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/BinaryEncoder.java
new file mode 100644
index 0000000..d3fffec
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/BinaryEncoder.java
@@ -0,0 +1,197 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.encoders;
+
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+import org.apache.gora.accumulo.util.FixedByteArrayOutputStream;
+
+/**
+ * Encodes and decodes Java primitives to and from their fixed-width, big-endian binary forms.
+ */
+public class BinaryEncoder implements Encoder {
+  public byte[] encodeShort(short s) {
+    return encodeShort(s, new byte[2]);
+  }
+  
+  public byte[] encodeShort(short s, byte ret[]) {
+    try {
+      DataOutputStream dos = new DataOutputStream(new FixedByteArrayOutputStream(ret));
+      dos.writeShort(s);
+      return ret;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public short decodeShort(byte[] a) {
+    try {
+      DataInputStream dis = new DataInputStream(new ByteArrayInputStream(a));
+      short s = dis.readShort();
+      return s;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public byte[] encodeInt(int i) {
+    return encodeInt(i, new byte[4]);
+  }
+  
+  public byte[] encodeInt(int i, byte ret[]) {
+    try {
+      DataOutputStream dos = new DataOutputStream(new FixedByteArrayOutputStream(ret));
+      dos.writeInt(i);
+      return ret;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public int decodeInt(byte[] a) {
+    try {
+      DataInputStream dis = new DataInputStream(new ByteArrayInputStream(a));
+      int i = dis.readInt();
+      return i;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public byte[] encodeLong(long l) {
+    return encodeLong(l, new byte[8]);
+  }
+  
+  public byte[] encodeLong(long l, byte ret[]) {
+    try {
+      DataOutputStream dos = new DataOutputStream(new FixedByteArrayOutputStream(ret));
+      dos.writeLong(l);
+      return ret;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public long decodeLong(byte[] a) {
+    try {
+      DataInputStream dis = new DataInputStream(new ByteArrayInputStream(a));
+      long l = dis.readLong();
+      return l;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public byte[] encodeDouble(double d) {
+    return encodeDouble(d, new byte[8]);
+  }
+  
+  public byte[] encodeDouble(double d, byte[] ret) {
+    try {
+      long l = Double.doubleToRawLongBits(d);
+      DataOutputStream dos = new DataOutputStream(new FixedByteArrayOutputStream(ret));
+      dos.writeLong(l);
+      return ret;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public double decodeDouble(byte[] a) {
+    try {
+      DataInputStream dis = new DataInputStream(new ByteArrayInputStream(a));
+      long l = dis.readLong();
+      return Double.longBitsToDouble(l);
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public byte[] encodeFloat(float d) {
+    return encodeFloat(d, new byte[4]);
+  }
+  
+  public byte[] encodeFloat(float f, byte[] ret) {
+    try {
+      int i = Float.floatToRawIntBits(f);
+      DataOutputStream dos = new DataOutputStream(new FixedByteArrayOutputStream(ret));
+      dos.writeInt(i);
+      return ret;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public float decodeFloat(byte[] a) {
+    try {
+      DataInputStream dis = new DataInputStream(new ByteArrayInputStream(a));
+      int i = dis.readInt();
+      return Float.intBitsToFloat(i);
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public byte[] encodeByte(byte b, byte[] ret) {
+    ret[0] = b;
+    return ret;
+  }
+  
+  public byte[] encodeByte(byte b) {
+    return encodeByte(b, new byte[1]);
+  }
+  
+  public byte decodeByte(byte[] a) {
+    return a[0];
+  }
+  
+  public boolean decodeBoolean(byte[] a) {
+    try {
+      DataInputStream dis = new DataInputStream(new ByteArrayInputStream(a));
+      return dis.readBoolean();
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  public byte[] encodeBoolean(boolean b) {
+    return encodeBoolean(b, new byte[1]);
+  }
+  
+  public byte[] encodeBoolean(boolean b, byte[] ret) {
+    try {
+      DataOutputStream dos = new DataOutputStream(new FixedByteArrayOutputStream(ret));
+      dos.writeBoolean(b);
+      return ret;
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+  
+  @Override
+  public byte[] lastPossibleKey(int size, byte[] er) {
+    return Utils.lastPossibleKey(size, er);
+  }
+  
+  @Override
+  public byte[] followingKey(int size, byte[] per) {
+    return Utils.followingKey(size, per);
+  }
+}
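Editor's note, not part of the patch: the fixed-width, big-endian round trip that BinaryEncoder performs via DataOutputStream can be exercised with a small self-contained sketch. The class and method names below are illustrative only; the real BinaryEncoder additionally writes into a caller-supplied buffer through Gora's FixedByteArrayOutputStream.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/** Illustrative sketch of the fixed-width binary round trip used by BinaryEncoder. */
public class BinaryRoundTrip {

  static byte[] encodeLong(long l) {
    try {
      ByteArrayOutputStream baos = new ByteArrayOutputStream(8);
      DataOutputStream dos = new DataOutputStream(baos);
      dos.writeLong(l); // big-endian, always exactly 8 bytes
      return baos.toByteArray();
    } catch (IOException ioe) {
      throw new RuntimeException(ioe); // cannot happen for an in-memory stream
    }
  }

  static long decodeLong(byte[] a) {
    try {
      return new DataInputStream(new ByteArrayInputStream(a)).readLong();
    } catch (IOException ioe) {
      throw new RuntimeException(ioe);
    }
  }

  public static void main(String[] args) {
    byte[] enc = encodeLong(-42L);
    System.out.println(enc.length);      // 8
    System.out.println(decodeLong(enc)); // -42
  }
}
```

Because the width is fixed per type (2 bytes for short, 4 for int, 8 for long/double), encoded keys of the same type always compare over the same number of bytes.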
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/Encoder.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/Encoder.java
new file mode 100644
index 0000000..7f79e8a
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/Encoder.java
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.encoders;
+
+/**
+ * Encodes and decodes Java primitives to and from byte arrays, and derives row-key range bounds.
+ */
+public interface Encoder {
+  
+  public byte[] encodeByte(byte b, byte[] ret);
+  
+  public byte[] encodeByte(byte b);
+  
+  public byte decodeByte(byte[] a);
+
+  public byte[] encodeShort(short s);
+  
+  public byte[] encodeShort(short s, byte ret[]);
+  
+  public short decodeShort(byte[] a);
+  
+  public byte[] encodeInt(int i);
+  
+  public byte[] encodeInt(int i, byte ret[]);
+  
+  public int decodeInt(byte[] a);
+  
+  public byte[] encodeLong(long l);
+  
+  public byte[] encodeLong(long l, byte ret[]);
+  
+  public long decodeLong(byte[] a);
+  
+  public byte[] encodeDouble(double d);
+  
+  public byte[] encodeDouble(double d, byte[] ret);
+  
+  public double decodeDouble(byte[] a);
+  
+  public byte[] encodeFloat(float d);
+  
+  public byte[] encodeFloat(float f, byte[] ret);
+  
+  public float decodeFloat(byte[] a);
+  
+  public boolean decodeBoolean(byte[] val);
+  
+  public byte[] encodeBoolean(boolean b);
+  
+  public byte[] encodeBoolean(boolean b, byte[] ret);
+
+  byte[] followingKey(int size, byte[] per);
+
+  byte[] lastPossibleKey(int size, byte[] er);
+
+}
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/HexEncoder.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/HexEncoder.java
new file mode 100644
index 0000000..cba08c2
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/HexEncoder.java
@@ -0,0 +1,207 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.encoders;
+
+/**
+ * Encodes data in an ASCII hex representation.
+ */
+public class HexEncoder implements Encoder {
+  
+  private static final byte[] chars = new byte[] {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};
+
+  private void encode(byte[] a, long l) {
+    for (int i = a.length - 1; i >= 0; i--) {
+      a[i] = chars[(int) (l & 0x0f)];
+      l = l >>> 4;
+    }
+  }
+
+  private int fromChar(byte b) {
+    if (b >= '0' && b <= '9') {
+      return (b - '0');
+    } else if (b >= 'a' && b <= 'f') {
+      return (b - 'a' + 10);
+    }
+    
+    throw new IllegalArgumentException("Bad char " + b);
+  }
+  
+  private long decode(byte[] a) {
+    long b = 0;
+    for (int i = 0; i < a.length; i++) {
+      b = b << 4;
+      b |= fromChar(a[i]);
+    }
+    
+    return b;
+  }
+
+  @Override
+  public byte[] encodeByte(byte b, byte[] ret) {
+    encode(ret, 0xff & b);
+    return ret;
+  }
+  
+  @Override
+  public byte[] encodeByte(byte b) {
+    return encodeByte(b, new byte[2]);
+  }
+  
+  @Override
+  public byte decodeByte(byte[] a) {
+    return (byte) decode(a);
+  }
+  
+  @Override
+  public byte[] encodeShort(short s) {
+    return encodeShort(s, new byte[4]);
+  }
+  
+  @Override
+  public byte[] encodeShort(short s, byte[] ret) {
+    encode(ret, 0xffff & s);
+    return ret;
+  }
+  
+  @Override
+  public short decodeShort(byte[] a) {
+    return (short) decode(a);
+  }
+  
+  @Override
+  public byte[] encodeInt(int i) {
+    return encodeInt(i, new byte[8]);
+  }
+  
+  @Override
+  public byte[] encodeInt(int i, byte[] ret) {
+    encode(ret, i);
+    return ret;
+  }
+  
+  @Override
+  public int decodeInt(byte[] a) {
+    return (int) decode(a);
+  }
+  
+  @Override
+  public byte[] encodeLong(long l) {
+    return encodeLong(l, new byte[16]);
+  }
+  
+  @Override
+  public byte[] encodeLong(long l, byte[] ret) {
+    encode(ret, l);
+    return ret;
+  }
+  
+  @Override
+  public long decodeLong(byte[] a) {
+    return decode(a);
+  }
+  
+  @Override
+  public byte[] encodeDouble(double d) {
+    return encodeDouble(d, new byte[16]);
+  }
+  
+  @Override
+  public byte[] encodeDouble(double d, byte[] ret) {
+    return encodeLong(Double.doubleToRawLongBits(d), ret);
+  }
+  
+  @Override
+  public double decodeDouble(byte[] a) {
+    return Double.longBitsToDouble(decodeLong(a));
+  }
+  
+  @Override
+  public byte[] encodeFloat(float d) {
+    // a float is 32 bits, i.e. 8 hex characters (cf. encodeInt)
+    return encodeFloat(d, new byte[8]);
+  }
+  
+  @Override
+  public byte[] encodeFloat(float d, byte[] ret) {
+    return encodeInt(Float.floatToRawIntBits(d), ret);
+  }
+  
+  @Override
+  public float decodeFloat(byte[] a) {
+    return Float.intBitsToFloat(decodeInt(a));
+  }
+  
+  @Override
+  public boolean decodeBoolean(byte[] val) {
+    return decodeByte(val) == 1;
+  }
+  
+  @Override
+  public byte[] encodeBoolean(boolean b) {
+    return encodeBoolean(b, new byte[2]);
+  }
+  
+  @Override
+  public byte[] encodeBoolean(boolean b, byte[] ret) {
+    if (b)
+      encode(ret, 1);
+    else
+      encode(ret, 0);
+    
+    return ret;
+  }
+  
+  private byte[] toBinary(byte[] hex) {
+    byte[] bin = new byte[(hex.length / 2) + (hex.length % 2)];
+    
+    int j = 0;
+    for (int i = 0; i < bin.length; i++) {
+      bin[i] = (byte) (fromChar(hex[j++]) << 4);
+      if (j >= hex.length)
+        break;
+      bin[i] |= (byte) fromChar(hex[j++]);
+    }
+    
+    return bin;
+  }
+  
+  private byte[] fromBinary(byte[] bin) {
+    byte[] hex = new byte[bin.length * 2];
+    
+    int j = 0;
+    for (int i = 0; i < bin.length; i++) {
+      hex[j++] = chars[0x0f & (bin[i] >>> 4)];
+      hex[j++] = chars[0x0f & bin[i]];
+    }
+    
+    return hex;
+  }
+
+  @Override
+  public byte[] followingKey(int size, byte[] per) {
+    return fromBinary(Utils.followingKey(size, toBinary(per)));
+  }
+  
+  @Override
+  public byte[] lastPossibleKey(int size, byte[] er) {
+    return fromBinary(Utils.lastPossibleKey(size, toBinary(er)));
+  }
+  
+}
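The encoder completed above writes each value as fixed-width hexadecimal digits (two output bytes per input byte, so an int occupies 8 bytes and a long 16), which keeps unsigned numeric order aligned with lexicographic byte order. A minimal standalone sketch of that round trip (class and helper names are illustrative, not from Gora):

```java
// Standalone sketch of the fixed-width hex encoding: each nibble becomes one
// lowercase hex character, most significant first.
public class HexSketch {
    static final byte[] CHARS = "0123456789abcdef".getBytes();

    // write the low ret.length hex digits of v into ret, most significant first
    static byte[] encode(byte[] ret, long v) {
        for (int i = ret.length - 1; i >= 0; i--) {
            ret[i] = CHARS[(int) (v & 0x0f)];
            v >>>= 4;
        }
        return ret;
    }

    // inverse: accumulate 4 bits per hex character
    static long decode(byte[] a) {
        long b = 0;
        for (byte c : a) {
            b = (b << 4) | Character.digit(c, 16);
        }
        return b;
    }

    public static void main(String[] args) {
        byte[] enc = encode(new byte[8], 255L);
        System.out.println(new String(enc)); // 000000ff
        System.out.println(decode(enc));     // 255
    }
}
```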
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/SignedBinaryEncoder.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/SignedBinaryEncoder.java
new file mode 100644
index 0000000..87ebcd6
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/SignedBinaryEncoder.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.encoders;
+
+
+/**
+ * This class transforms the bits within a primitive type so that
+ * the bit representation sorts correctly lexicographically. Primarily
+ * it applies simple transformations so that negative numbers sort
+ * before positive numbers when compared lexicographically.
+ */
+public class SignedBinaryEncoder extends BinaryEncoder {
+  
+  public byte[] encodeShort(short s, byte ret[]){
+    s = (short)((s & 0xffff) ^ 0x8000);
+    return super.encodeShort(s, ret);
+  }
+  
+  public short decodeShort(byte[] a){
+    short s = super.decodeShort(a);
+    s = (short)((s & 0xffff) ^ 0x8000);
+    return s;
+  }
+  
+  public byte[] encodeInt(int i, byte ret[]){
+    i = i ^ 0x80000000;
+    return super.encodeInt(i, ret);
+  }
+  
+  public int decodeInt(byte[] a){
+    int i = super.decodeInt(a);
+    i = i ^ 0x80000000;
+    return i;
+  }
+  
+  public byte[] encodeLong(long l, byte ret[]){
+    l = l ^ 0x8000000000000000L;
+    return super.encodeLong(l, ret);
+  }
+  
+  public long decodeLong(byte[] a) {
+    long l = super.decodeLong(a);
+    l = l ^ 0x8000000000000000L;
+    return l;
+  }
+  
+  
+  public byte[] encodeDouble(double d, byte[] ret) {
+    long l = Double.doubleToRawLongBits(d);
+    if(l < 0)
+      l = ~l;
+    else
+      l = l ^ 0x8000000000000000L;
+    return super.encodeLong(l,ret);
+  }
+  
+  public double decodeDouble(byte[] a){
+    long l = super.decodeLong(a);
+    if(l < 0)
+      l = l ^ 0x8000000000000000L;
+    else
+      l = ~l;
+    return Double.longBitsToDouble(l);
+  }
+  
+  public byte[] encodeFloat(float f, byte[] ret) {
+    int i = Float.floatToRawIntBits(f);
+    if(i < 0)
+      i = ~i;
+    else
+      i = i ^ 0x80000000;
+    
+    return super.encodeInt(i, ret);
+    
+  }
+  
+  public float decodeFloat(byte[] a){
+    int i = super.decodeInt(a);
+    if(i < 0)
+      i = i ^ 0x80000000;
+    else
+      i = ~i;
+    return Float.intBitsToFloat(i);
+  }
+  
+}
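The effect of the sign-bit flip above can be seen in a few lines: after XOR-ing with the sign bit, an unsigned byte-wise comparison (which is what a lexicographically ordered store like Accumulo performs on keys) agrees with signed integer order. A standalone sketch, with illustrative names:

```java
// Sketch of the SignedBinaryEncoder idea: XOR-ing an int with 0x80000000
// maps negative values below positive ones, so big-endian byte encodings
// sort lexicographically in numeric order.
public class SignFlipSketch {
    static byte[] encode(int i) {
        i ^= 0x80000000; // flip the sign bit
        return new byte[] {(byte) (i >>> 24), (byte) (i >>> 16),
                           (byte) (i >>> 8), (byte) i};
    }

    // unsigned lexicographic comparison, as a byte-oriented store would do
    static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(compareUnsigned(encode(-5), encode(3)) < 0);   // true: -5 sorts before 3
        System.out.println(compareUnsigned(encode(-10), encode(-5)) < 0); // true: -10 sorts before -5
    }
}
```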
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/Utils.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/Utils.java
new file mode 100644
index 0000000..5fa7de0
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/encoders/Utils.java
@@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.encoders;
+
+import java.math.BigInteger;
+import java.util.Arrays;
+
+/**
+ * Utility methods for computing fixed-width key range boundaries.
+ */
+public class Utils {
+  private static BigInteger newPositiveBigInteger(byte[] er) {
+    byte[] copy = new byte[er.length + 1];
+    System.arraycopy(er, 0, copy, 1, er.length);
+    BigInteger bi = new BigInteger(copy);
+    return bi;
+  }
+  
+  public static byte[] lastPossibleKey(int size, byte[] er) {
+    if (size == er.length)
+      return er;
+    
+    if (er.length > size)
+      throw new IllegalArgumentException();
+    
+    BigInteger bi = newPositiveBigInteger(er);
+    if (bi.equals(BigInteger.ZERO))
+      throw new IllegalArgumentException("Nothing comes before zero");
+    
+    bi = bi.subtract(BigInteger.ONE);
+    
+    byte ret[] = new byte[size];
+    Arrays.fill(ret, (byte) 0xff);
+    
+    System.arraycopy(getBytes(bi, er.length), 0, ret, 0, er.length);
+    
+    return ret;
+  }
+  
+  private static byte[] getBytes(BigInteger bi, int minLen) {
+    byte[] ret = bi.toByteArray();
+    
+    if (ret[0] == 0) {
+      // remove leading 0 that makes num positive
+      byte copy[] = new byte[ret.length - 1];
+      System.arraycopy(ret, 1, copy, 0, copy.length);
+      ret = copy;
+    }
+    
+    // leading digits are dropped
+    byte copy[] = new byte[minLen];
+    if (bi.compareTo(BigInteger.ZERO) < 0) {
+      Arrays.fill(copy, (byte) 0xff);
+    }
+    System.arraycopy(ret, 0, copy, minLen - ret.length, ret.length);
+    
+    return copy;
+  }
+  
+  public static byte[] followingKey(int size, byte[] per) {
+    
+    if (per.length > size)
+      throw new IllegalArgumentException();
+    
+    if (size == per.length) {
+      // add one
+      BigInteger bi = new BigInteger(per);
+      bi = bi.add(BigInteger.ONE);
+      if (bi.equals(BigInteger.ZERO)) {
+        throw new IllegalArgumentException("Wrapped");
+      }
+      return getBytes(bi, size);
+    } else {
+      return Arrays.copyOf(per, size);
+    }
+  }
+}
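The two branches of followingKey above can be exercised in isolation: a prefix shorter than the key width is simply zero-padded (the smallest key with that prefix), while a full-width key is incremented as a big-endian unsigned integer. A simplified re-implementation under those assumptions (names illustrative; it omits the wrap-around check of the original):

```java
import java.math.BigInteger;
import java.util.Arrays;

// Simplified sketch of Utils.followingKey for fixed key width `size`.
public class FollowingKeySketch {
    static byte[] followingKey(int size, byte[] per) {
        if (per.length > size)
            throw new IllegalArgumentException();
        if (per.length < size)
            return Arrays.copyOf(per, size);           // pad with 0x00
        // full-width key: treat as unsigned big-endian and add one
        BigInteger bi = new BigInteger(1, per).add(BigInteger.ONE);
        byte[] ret = new byte[size];
        byte[] raw = bi.toByteArray();
        // right-align, dropping any extra leading byte from toByteArray
        int n = Math.min(raw.length, size);
        System.arraycopy(raw, raw.length - n, ret, size - n, n);
        return ret;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(followingKey(4, new byte[] {1, 2})));
        // [1, 2, 0, 0]
        System.out.println(Arrays.toString(followingKey(2, new byte[] {1, (byte) 0xff})));
        // [2, 0]
    }
}
```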
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/query/AccumuloQuery.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/query/AccumuloQuery.java
new file mode 100644
index 0000000..77fdc0b
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/query/AccumuloQuery.java
@@ -0,0 +1,36 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.query;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.impl.QueryBase;
+import org.apache.gora.store.DataStore;
+
+/**
+ * Accumulo-specific {@link org.apache.gora.query.Query} implementation.
+ */
+public class AccumuloQuery<K,T extends Persistent> extends QueryBase<K,T> {
+  
+  public AccumuloQuery() {
+    super(null);
+  }
+
+  public AccumuloQuery(DataStore<K,T> dataStore) {
+    super(dataStore);
+  }
+  
+}
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/query/AccumuloResult.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/query/AccumuloResult.java
new file mode 100644
index 0000000..73f64b2
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/query/AccumuloResult.java
@@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.query;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.client.RowIterator;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.gora.accumulo.store.AccumuloStore;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.ResultBase;
+import org.apache.gora.store.DataStore;
+
+/**
+ * Accumulo-specific {@link org.apache.gora.query.Result} implementation,
+ * backed by an Accumulo {@link RowIterator}.
+ */
+public class AccumuloResult<K,T extends Persistent> extends ResultBase<K,T> {
+  
+  private RowIterator iterator;
+
+  public AccumuloStore<K,T> getDataStore() {
+    return (AccumuloStore<K,T>) super.getDataStore();
+  }
+
+  /**
+   * @param dataStore the datastore used to deserialize rows
+   * @param query the query this result is for
+   * @param scanner the scanner supplying the raw key/value entries
+   */
+  public AccumuloResult(DataStore<K,T> dataStore, Query<K,T> query, Scanner scanner) {
+    super(dataStore, query);
+    
+    // TODO set batch size based on limit, and construct iterator later
+    iterator = new RowIterator(scanner.iterator());
+  }
+  
+  @Override
+  public float getProgress() throws IOException {
+    // TODO Auto-generated method stub
+    return 0;
+  }
+  
+  @Override
+  public void close() throws IOException {
+    
+  }
+  
+  @Override
+  protected boolean nextInner() throws IOException {
+    
+    if (!iterator.hasNext())
+      return false;
+    
+    key = null;
+    
+    Iterator<Entry<Key,Value>> nextRow = iterator.next();
+    ByteSequence row = getDataStore().populate(nextRow, persistent);
+    key = (K) ((AccumuloStore) dataStore).fromBytes(getKeyClass(), row.toArray());
+    
+    return true;
+  }
+  
+}
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/store/AccumuloMapping.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/store/AccumuloMapping.java
new file mode 100644
index 0000000..08911e0
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/store/AccumuloMapping.java
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.store;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.accumulo.core.util.Pair;
+import org.apache.hadoop.io.Text;
+
+public class AccumuloMapping {
+  Map<String,Pair<Text,Text>> fieldMap = new HashMap<String,Pair<Text,Text>>();
+  Map<Pair<Text,Text>,String> columnMap = new HashMap<Pair<Text,Text>,String>();
+  Map<String,String> tableConfig = new HashMap<String,String>();
+  String tableName;
+  String encoder;
+
+}
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/store/AccumuloStore.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/store/AccumuloStore.java
new file mode 100644
index 0000000..65ad122
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/store/AccumuloStore.java
@@ -0,0 +1,842 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.store;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Properties;
+import java.util.Set;
+
+import javax.xml.parsers.DocumentBuilder;
+import javax.xml.parsers.DocumentBuilderFactory;
+
+import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IsolatedScanner;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.RowIterator;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.TableDeletedException;
+import org.apache.accumulo.core.client.TableExistsException;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.TableOfflineException;
+import org.apache.accumulo.core.client.ZooKeeperInstance;
+import org.apache.accumulo.core.client.impl.Tables;
+import org.apache.accumulo.core.client.impl.TabletLocator;
+import org.apache.accumulo.core.client.mock.MockConnector;
+import org.apache.accumulo.core.client.mock.MockInstance;
+import org.apache.accumulo.core.client.mock.MockTabletLocator;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.KeyExtent;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyIterator;
+import org.apache.accumulo.core.iterators.user.TimestampFilter;
+import org.apache.accumulo.core.master.state.tables.TableState;
+import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.security.thrift.AuthInfo;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.accumulo.core.util.TextUtil;
+import org.apache.accumulo.core.util.UtilWaitThread;
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.io.BinaryDecoder;
+import org.apache.avro.io.BinaryEncoder;
+import org.apache.avro.io.DecoderFactory;
+import org.apache.avro.specific.SpecificDatumReader;
+import org.apache.avro.specific.SpecificDatumWriter;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.accumulo.encoders.Encoder;
+import org.apache.gora.accumulo.query.AccumuloQuery;
+import org.apache.gora.accumulo.query.AccumuloResult;
+import org.apache.gora.persistency.ListGenericArray;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.State;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.StatefulMap;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.query.impl.PartitionQueryImpl;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.impl.DataStoreBase;
+import org.apache.gora.util.AvroUtils;
+import org.apache.hadoop.io.Text;
+import org.w3c.dom.Document;
+import org.w3c.dom.Element;
+import org.w3c.dom.NodeList;
+
+/**
+ * {@link org.apache.gora.store.DataStore} implementation for Apache Accumulo.
+ */
+public class AccumuloStore<K,T extends Persistent> extends DataStoreBase<K,T> {
+  
+  protected static final String MOCK_PROPERTY = "accumulo.mock";
+  protected static final String INSTANCE_NAME_PROPERTY = "accumulo.instance";
+  protected static final String ZOOKEEPERS_NAME_PROPERTY = "accumulo.zookeepers";
+  protected static final String USERNAME_PROPERTY = "accumulo.user";
+  protected static final String PASSWORD_PROPERTY = "accumulo.password";
+  protected static final String DEFAULT_MAPPING_FILE = "gora-accumulo-mapping.xml";
+
+  private Connector conn;
+  private BatchWriter batchWriter;
+  private AccumuloMapping mapping;
+  private AuthInfo authInfo;
+  private Encoder encoder;
+  
+  public Object fromBytes(Schema schema, byte data[]) {
+    return fromBytes(encoder, schema, data);
+  }
+
+  public static Object fromBytes(Encoder encoder, Schema schema, byte data[]) {
+    switch (schema.getType()) {
+      case BOOLEAN:
+        return encoder.decodeBoolean(data);
+      case DOUBLE:
+        return encoder.decodeDouble(data);
+      case FLOAT:
+        return encoder.decodeFloat(data);
+      case INT:
+        return encoder.decodeInt(data);
+      case LONG:
+        return encoder.decodeLong(data);
+      case STRING:
+        return new Utf8(data);
+      case BYTES:
+        return ByteBuffer.wrap(data);
+      case ENUM:
+        return AvroUtils.getEnumValue(schema, encoder.decodeInt(data));
+    }
+    throw new IllegalArgumentException("Unknown type " + schema.getType());
+    
+  }
+
+  public K fromBytes(Class<K> clazz, byte[] val) {
+    return fromBytes(encoder, clazz, val);
+  }
+
+  @SuppressWarnings("unchecked")
+  public static <K> K fromBytes(Encoder encoder, Class<K> clazz, byte[] val) {
+    try {
+      if (clazz.equals(Byte.TYPE) || clazz.equals(Byte.class)) {
+        return (K) Byte.valueOf(encoder.decodeByte(val));
+      } else if (clazz.equals(Boolean.TYPE) || clazz.equals(Boolean.class)) {
+        return (K) Boolean.valueOf(encoder.decodeBoolean(val));
+      } else if (clazz.equals(Short.TYPE) || clazz.equals(Short.class)) {
+        return (K) Short.valueOf(encoder.decodeShort(val));
+      } else if (clazz.equals(Integer.TYPE) || clazz.equals(Integer.class)) {
+        return (K) Integer.valueOf(encoder.decodeInt(val));
+      } else if (clazz.equals(Long.TYPE) || clazz.equals(Long.class)) {
+        return (K) Long.valueOf(encoder.decodeLong(val));
+      } else if (clazz.equals(Float.TYPE) || clazz.equals(Float.class)) {
+        return (K) Float.valueOf(encoder.decodeFloat(val));
+      } else if (clazz.equals(Double.TYPE) || clazz.equals(Double.class)) {
+        return (K) Double.valueOf(encoder.decodeDouble(val));
+      } else if (clazz.equals(String.class)) {
+        return (K) new String(val, "UTF-8");
+      } else if (clazz.equals(Utf8.class)) {
+        return (K) new Utf8(val);
+      }
+      
+      throw new IllegalArgumentException("Unknown type " + clazz.getName());
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+
+  private static byte[] copyIfNeeded(byte b[], int offset, int len) {
+    if (len != b.length || offset != 0) {
+      byte copy[] = new byte[len];
+      System.arraycopy(b, offset, copy, 0, copy.length);
+      b = copy;
+    }
+    return b;
+  }
+
+  public byte[] toBytes(Object o) {
+    return toBytes(encoder, o);
+  }
+  
+  public static byte[] toBytes(Encoder encoder, Object o) {
+    
+    try {
+      if (o instanceof String) {
+        return ((String) o).getBytes("UTF-8");
+      } else if (o instanceof Utf8) {
+        return copyIfNeeded(((Utf8) o).getBytes(), 0, ((Utf8) o).getLength());
+      } else if (o instanceof ByteBuffer) {
+        return copyIfNeeded(((ByteBuffer) o).array(), ((ByteBuffer) o).arrayOffset() + ((ByteBuffer) o).position(), ((ByteBuffer) o).remaining());
+      } else if (o instanceof Long) {
+        return encoder.encodeLong((Long) o);
+      } else if (o instanceof Integer) {
+        return encoder.encodeInt((Integer) o);
+      } else if (o instanceof Short) {
+        return encoder.encodeShort((Short) o);
+      } else if (o instanceof Byte) {
+        return encoder.encodeByte((Byte) o);
+      } else if (o instanceof Boolean) {
+        return encoder.encodeBoolean((Boolean) o);
+      } else if (o instanceof Float) {
+        return encoder.encodeFloat((Float) o);
+      } else if (o instanceof Double) {
+        return encoder.encodeDouble((Double) o);
+      } else if (o instanceof Enum) {
+        return encoder.encodeInt(((Enum) o).ordinal());
+      }
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+    
+    throw new IllegalArgumentException("Unknown type " + o.getClass().getName());
+  }
+
+  private BatchWriter getBatchWriter() throws IOException {
+    if (batchWriter == null)
+      try {
+        batchWriter = conn.createBatchWriter(mapping.tableName, 10000000, 60000l, 4);
+      } catch (TableNotFoundException e) {
+        throw new IOException(e);
+      }
+    return batchWriter;
+  }
+
+  @Override
+  public void initialize(Class<K> keyClass, Class<T> persistentClass, Properties properties) throws IOException {
+    super.initialize(keyClass, persistentClass, properties);
+
+    String mock = DataStoreFactory.findProperty(properties, this, MOCK_PROPERTY, null);
+    String mappingFile = DataStoreFactory.getMappingFile(properties, this, DEFAULT_MAPPING_FILE);
+    String user = DataStoreFactory.findProperty(properties, this, USERNAME_PROPERTY, null);
+    String password = DataStoreFactory.findProperty(properties, this, PASSWORD_PROPERTY, null);
+    
+    mapping = readMapping(mappingFile);
+
+    if (mapping.encoder == null || mapping.encoder.equals("")) {
+      encoder = new org.apache.gora.accumulo.encoders.BinaryEncoder();
+    } else {
+      try {
+        encoder = (Encoder) getClass().getClassLoader().loadClass(mapping.encoder).newInstance();
+      } catch (InstantiationException e) {
+        throw new IOException(e);
+      } catch (IllegalAccessException e) {
+        throw new IOException(e);
+      } catch (ClassNotFoundException e) {
+        throw new IOException(e);
+      }
+    }
+
+    try {
+      if (mock == null || !mock.equals("true")) {
+        String instance = DataStoreFactory.findProperty(properties, this, INSTANCE_NAME_PROPERTY, null);
+        String zookeepers = DataStoreFactory.findProperty(properties, this, ZOOKEEPERS_NAME_PROPERTY, null);
+        conn = new ZooKeeperInstance(instance, zookeepers).getConnector(user, password);
+        authInfo = new AuthInfo(user, ByteBuffer.wrap(password.getBytes()), conn.getInstance().getInstanceID());
+      } else {
+        conn = new MockInstance().getConnector(user, password);
+      }
+
+      if (autoCreateSchema)
+        createSchema();
+    } catch (AccumuloException e) {
+      throw new IOException(e);
+    } catch (AccumuloSecurityException e) {
+      throw new IOException(e);
+    }
+  }
+  
+  protected AccumuloMapping readMapping(String filename) throws IOException {
+    try {
+      
+      AccumuloMapping mapping = new AccumuloMapping();
+
+      DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
+      Document dom = db.parse(getClass().getClassLoader().getResourceAsStream(filename));
+      
+      Element root = dom.getDocumentElement();
+      
+      NodeList nl = root.getElementsByTagName("class");
+      for (int i = 0; i < nl.getLength(); i++) {
+        
+        Element classElement = (Element) nl.item(i);
+        if (classElement.getAttribute("keyClass").equals(keyClass.getCanonicalName())
+            && classElement.getAttribute("name").equals(persistentClass.getCanonicalName())) {
+
+          mapping.tableName = getSchemaName(classElement.getAttribute("table"), persistentClass);
+          mapping.encoder = classElement.getAttribute("encoder");
+          
+          NodeList fields = classElement.getElementsByTagName("field");
+          for (int j = 0; j < fields.getLength(); j++) {
+            Element fieldElement = (Element) fields.item(j);
+
+            String name = fieldElement.getAttribute("name");
+            String family = fieldElement.getAttribute("family");
+            String qualifier = fieldElement.getAttribute("qualifier");
+            if (qualifier.equals(""))
+              qualifier = null;
+
+            Pair<Text,Text> col = new Pair<Text,Text>(new Text(family), qualifier == null ? null : new Text(qualifier));
+            mapping.fieldMap.put(name, col);
+            mapping.columnMap.put(col, name);
+          }
+        }
+
+      }
+      
+      nl = root.getElementsByTagName("table");
+      for (int i = 0; i < nl.getLength(); i++) {
+        Element tableElement = (Element) nl.item(i);
+        if (tableElement.getAttribute("name").equals(mapping.tableName)) {
+          NodeList configs = tableElement.getElementsByTagName("config");
+          for (int j = 0; j < configs.getLength(); j++) {
+            Element configElement = (Element) configs.item(j);
+            String key = configElement.getAttribute("key");
+            String val = configElement.getAttribute("value");
+            mapping.tableConfig.put(key, val);
+          }
+        }
+      }
+
+      return mapping;
+    } catch (Exception ex) {
+      throw new IOException(ex);
+    }
+
+  }
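readMapping above selects a `<class>` element whose keyClass and name attributes match the store's key and persistent classes, reads its `<field>` children (name/family/qualifier), and then picks up any `<config>` entries from the matching `<table>` element. An illustrative fragment of such a mapping file (class names, families, and config keys here are examples, not taken from this release):

```xml
<gora-otd>
  <class name="org.example.WebPage" keyClass="java.lang.String"
         table="WebPage"
         encoder="org.apache.gora.accumulo.encoders.SignedBinaryEncoder">
    <!-- qualifier is optional; without it the whole family maps to the field -->
    <field name="content"  family="p" qualifier="cnt"/>
    <field name="outlinks" family="l"/>
  </class>
  <table name="WebPage">
    <config key="table.cache.block.enable" value="true"/>
  </table>
</gora-otd>
```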
+  
+  @Override
+  public String getSchemaName() {
+    return mapping.tableName;
+  }
+  
+  @Override
+  public void createSchema() throws IOException {
+    try {
+      conn.tableOperations().create(mapping.tableName);
+      Set<Entry<String,String>> es = mapping.tableConfig.entrySet();
+      for (Entry<String,String> entry : es) {
+        conn.tableOperations().setProperty(mapping.tableName, entry.getKey(), entry.getValue());
+      }
+
+    } catch (AccumuloException e) {
+      throw new IOException(e);
+    } catch (AccumuloSecurityException e) {
+      throw new IOException(e);
+    } catch (TableExistsException e) {
+      return;
+    }
+  }
+
+  @Override
+  public void deleteSchema() throws IOException {
+    try {
+      if (batchWriter != null)
+        batchWriter.close();
+      batchWriter = null;
+      conn.tableOperations().delete(mapping.tableName);
+    } catch (AccumuloException e) {
+      throw new IOException(e);
+    } catch (AccumuloSecurityException e) {
+      throw new IOException(e);
+    } catch (TableNotFoundException e) {
+      return;
+    }
+  }
+
+  @Override
+  public boolean schemaExists() throws IOException {
+    return conn.tableOperations().exists(mapping.tableName);
+  }
+
+  public ByteSequence populate(Iterator<Entry<Key,Value>> iter, T persistent) throws IOException {
+    ByteSequence row = null;
+    
+    Map currentMap = null;
+    ArrayList currentArray = null;
+    Text currentFam = null;
+    int currentPos = 0;
+    Schema currentSchema = null;
+    Field currentField = null;
+
+    while (iter.hasNext()) {
+      Entry<Key,Value> entry = iter.next();
+      
+      if (currentMap != null) {
+        if (currentFam.equals(entry.getKey().getColumnFamily())) {
+          currentMap.put(new Utf8(entry.getKey().getColumnQualifierData().toArray()), fromBytes(currentSchema, entry.getValue().get()));
+          continue;
+        } else {
+          persistent.put(currentPos, currentMap);
+          currentMap = null;
+        }
+      } else if (currentArray != null) {
+        if (currentFam.equals(entry.getKey().getColumnFamily())) {
+          currentArray.add(fromBytes(currentSchema, entry.getValue().get()));
+          continue;
+        } else {
+          persistent.put(currentPos, new ListGenericArray<T>(currentField.schema(), currentArray));
+          currentArray = null;
+        }
+      }
+
+      if (row == null)
+        row = entry.getKey().getRowData();
+      
+      String fieldName = mapping.columnMap.get(new Pair<Text,Text>(entry.getKey().getColumnFamily(), entry.getKey().getColumnQualifier()));
+      if (fieldName == null)
+        fieldName = mapping.columnMap.get(new Pair<Text,Text>(entry.getKey().getColumnFamily(), null));
+
+      Field field = fieldMap.get(fieldName);
+
+      switch (field.schema().getType()) {
+        case MAP:
+          currentMap = new StatefulHashMap();
+          currentPos = field.pos();
+          currentFam = entry.getKey().getColumnFamily();
+          currentSchema = field.schema().getValueType();
+          
+          currentMap.put(new Utf8(entry.getKey().getColumnQualifierData().toArray()), fromBytes(currentSchema, entry.getValue().get()));
+
+          break;
+        case ARRAY:
+          currentArray = new ArrayList();
+          currentPos = field.pos();
+          currentFam = entry.getKey().getColumnFamily();
+          currentSchema = field.schema().getElementType();
+          currentField = field;
+          
+          currentArray.add(fromBytes(currentSchema, entry.getValue().get()));
+
+          break;
+        case RECORD:
+          SpecificDatumReader reader = new SpecificDatumReader(field.schema());
+          byte[] val = entry.getValue().get();
+          // TODO reuse decoder
+          BinaryDecoder decoder = DecoderFactory.defaultFactory().createBinaryDecoder(val, null);
+          persistent.put(field.pos(), reader.read(null, decoder));
+          break;
+        default:
+          persistent.put(field.pos(), fromBytes(field.schema(), entry.getValue().get()));
+      }
+    }
+    
+    if (currentMap != null) {
+      persistent.put(currentPos, currentMap);
+    } else if (currentArray != null) {
+      persistent.put(currentPos, new ListGenericArray<T>(currentField.schema(), currentArray));
+    }
+
+    persistent.clearDirty();
+
+    return row;
+  }
+
+  private void setFetchColumns(Scanner scanner, String fields[]) {
+    fields = getFieldsToQuery(fields);
+    for (String field : fields) {
+      Pair<Text,Text> col = mapping.fieldMap.get(field);
+      if (col.getSecond() == null) {
+        scanner.fetchColumnFamily(col.getFirst());
+      } else {
+        scanner.fetchColumn(col.getFirst(), col.getSecond());
+      }
+    }
+  }
+
+  @Override
+  public T get(K key, String[] fields) throws IOException {
+    try {
+      // TODO make isolated scanner optional?
+      Scanner scanner = new IsolatedScanner(conn.createScanner(mapping.tableName, Constants.NO_AUTHS));
+      Range rowRange = new Range(new Text(toBytes(key)));
+      
+      scanner.setRange(rowRange);
+      setFetchColumns(scanner, fields);
+      
+      T persistent = newPersistent();
+      ByteSequence row = populate(scanner.iterator(), persistent);
+      if (row == null)
+        return null;
+      return persistent;
+    } catch (TableNotFoundException e) {
+      return null;
+    }
+  }
+  
+  @Override
+  public void put(K key, T val) throws IOException {
+
+    Mutation m = new Mutation(new Text(toBytes(key)));
+    
+    Schema schema = val.getSchema();
+    StateManager stateManager = val.getStateManager();
+    
+    Iterator<Field> iter = schema.getFields().iterator();
+    
+    int count = 0;
+    for (int i = 0; iter.hasNext(); i++) {
+      Field field = iter.next();
+      if (!stateManager.isDirty(val, i)) {
+        continue;
+      }
+      
+      Object o = val.get(i);
+      Pair<Text,Text> col = mapping.fieldMap.get(field.name());
+
+      switch (field.schema().getType()) {
+        case MAP:
+          if (o instanceof StatefulMap) {
+            StatefulMap map = (StatefulMap) o;
+            Set<?> es = map.states().entrySet();
+            for (Object entry : es) {
+              Object mapKey = ((Entry) entry).getKey();
+              State state = (State) ((Entry) entry).getValue();
+
+              switch (state) {
+                case NEW:
+                case DIRTY:
+                  m.put(col.getFirst(), new Text(toBytes(mapKey)), new Value(toBytes(map.get(mapKey))));
+                  count++;
+                  break;
+                case DELETED:
+                  m.putDelete(col.getFirst(), new Text(toBytes(mapKey)));
+                  count++;
+                  break;
+              }
+              
+            }
+          } else {
+            Map map = (Map) o;
+            Set<?> es = map.entrySet();
+            for (Object entry : es) {
+              Object mapKey = ((Entry) entry).getKey();
+              Object mapVal = ((Entry) entry).getValue();
+              m.put(col.getFirst(), new Text(toBytes(mapKey)), new Value(toBytes(mapVal)));
+              count++;
+            }
+          }
+          break;
+        case ARRAY:
+          GenericArray array = (GenericArray) o;
+          int j = 0;
+          for (Object item : array) {
+            m.put(col.getFirst(), new Text(toBytes(j++)), new Value(toBytes(item)));
+            count++;
+          }
+          break;
+        case RECORD:
+          SpecificDatumWriter writer = new SpecificDatumWriter(field.schema());
+          ByteArrayOutputStream os = new ByteArrayOutputStream();
+          BinaryEncoder encoder = new BinaryEncoder(os);
+          writer.write(o, encoder);
+          encoder.flush();
+          m.put(col.getFirst(), col.getSecond(), new Value(os.toByteArray()));
+          count++;
+          break;
+        default:
+          m.put(col.getFirst(), col.getSecond(), new Value(toBytes(o)));
+          count++;
+      }
+
+    }
+    
+    if (count > 0)
+      try {
+        getBatchWriter().addMutation(m);
+      } catch (MutationsRejectedException e) {
+        throw new IOException(e);
+      }
+  }
+  
+  @Override
+  public boolean delete(K key) throws IOException {
+    Query<K,T> q = newQuery();
+    q.setKey(key);
+    return deleteByQuery(q) > 0;
+  }
+
+  @Override
+  public long deleteByQuery(Query<K,T> query) throws IOException {
+    try {
+      Scanner scanner = createScanner(query);
+      // add iterator that drops values on the server side
+      scanner.addScanIterator(new IteratorSetting(Integer.MAX_VALUE, SortedKeyIterator.class));
+      RowIterator iterator = new RowIterator(scanner.iterator());
+      
+      long count = 0;
+
+      while (iterator.hasNext()) {
+        Iterator<Entry<Key,Value>> row = iterator.next();
+        Mutation m = null;
+        while (row.hasNext()) {
+          Entry<Key,Value> entry = row.next();
+          Key key = entry.getKey();
+          if (m == null)
+            m = new Mutation(key.getRow());
+          // TODO optimize to avoid continually creating column vis? prob does not matter for empty
+          m.putDelete(key.getColumnFamily(), key.getColumnQualifier(), new ColumnVisibility(key.getColumnVisibility()), key.getTimestamp());
+        }
+        getBatchWriter().addMutation(m);
+        count++;
+      }
+      
+      return count;
+    } catch (TableNotFoundException e) {
+      // TODO return 0?
+      throw new IOException(e);
+    } catch (MutationsRejectedException e) {
+      throw new IOException(e);
+    }
+  }
+
+  private Range createRange(Query<K,T> query) {
+    Text startRow = null;
+    Text endRow = null;
+    
+    if (query.getStartKey() != null)
+      startRow = new Text(toBytes(query.getStartKey()));
+    
+    if (query.getEndKey() != null)
+      endRow = new Text(toBytes(query.getEndKey()));
+    
+    return new Range(startRow, true, endRow, true);
+    
+  }
+  
+  private Scanner createScanner(Query<K,T> query) throws TableNotFoundException {
+    // TODO make isolated scanner optional?
+    Scanner scanner = new IsolatedScanner(conn.createScanner(mapping.tableName, Constants.NO_AUTHS));
+    setFetchColumns(scanner, query.getFields());
+    
+    scanner.setRange(createRange(query));
+    
+    if (query.getStartTime() != -1 || query.getEndTime() != -1) {
+      IteratorSetting is = new IteratorSetting(30, TimestampFilter.class);
+      if (query.getStartTime() != -1)
+        TimestampFilter.setStart(is, query.getStartTime(), true);
+      if (query.getEndTime() != -1)
+        TimestampFilter.setEnd(is, query.getEndTime(), true);
+      
+      scanner.addScanIterator(is);
+    }
+    
+    return scanner;
+  }
+
+  @Override
+  public Result<K,T> execute(Query<K,T> query) throws IOException {
+    try {
+      Scanner scanner = createScanner(query);
+      return new AccumuloResult<K,T>(this, query, scanner);
+    } catch (TableNotFoundException e) {
+      // TODO return empty result?
+      throw new IOException(e);
+    }
+  }
+  
+  @Override
+  public Query<K,T> newQuery() {
+    return new AccumuloQuery<K,T>(this);
+  }
+
+  Text pad(Text key, int bytes) {
+    // copy before appending so the caller's Text instance is not mutated
+    if (key.getLength() < bytes)
+      key = new Text(key);
+    
+    while (key.getLength() < bytes) {
+      key.append(new byte[] {0}, 0, 1);
+    }
+    
+    return key;
+  }
+  
+  @Override
+  public List<PartitionQuery<K,T>> getPartitions(Query<K,T> query) throws IOException {
+    try {
+      TabletLocator tl;
+      if (conn instanceof MockConnector)
+        tl = new MockTabletLocator();
+      else
+        tl = TabletLocator.getInstance(conn.getInstance(), authInfo, new Text(Tables.getTableId(conn.getInstance(), mapping.tableName)));
+      
+      Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+      
+      tl.invalidateCache();
+      while (tl.binRanges(Collections.singletonList(createRange(query)), binnedRanges).size() > 0) {
+        // TODO log?
+        if (!Tables.exists(conn.getInstance(), Tables.getTableId(conn.getInstance(), mapping.tableName)))
+          throw new TableDeletedException(Tables.getTableId(conn.getInstance(), mapping.tableName));
+        else if (Tables.getTableState(conn.getInstance(), Tables.getTableId(conn.getInstance(), mapping.tableName)) == TableState.OFFLINE)
+          throw new TableOfflineException(conn.getInstance(), Tables.getTableId(conn.getInstance(), mapping.tableName));
+        UtilWaitThread.sleep(100);
+        tl.invalidateCache();
+      }
+      
+      List<PartitionQuery<K,T>> ret = new ArrayList<PartitionQuery<K,T>>();
+      
+      Text startRow = null;
+      Text endRow = null;
+      if (query.getStartKey() != null)
+        startRow = new Text(toBytes(query.getStartKey()));
+      if (query.getEndKey() != null)
+        endRow = new Text(toBytes(query.getEndKey()));
+     
+      // Hadoop expects hostnames, but Accumulo tracks tablet servers by IP, so resolve (and cache) each IP to a hostname
+      HashMap<String,String> hostNameCache = new HashMap<String,String>();
+ 
+      for (Entry<String,Map<KeyExtent,List<Range>>> entry : binnedRanges.entrySet()) {
+        String ip = entry.getKey().split(":", 2)[0];
+        String location = hostNameCache.get(ip);
+        if (location == null) {
+          InetAddress inetAddress = InetAddress.getByName(ip);
+          location = inetAddress.getHostName();
+          hostNameCache.put(ip, location);
+        }
+
+        Map<KeyExtent,List<Range>> tablets = entry.getValue();
+        for (KeyExtent ke : tablets.keySet()) {
+          
+          K startKey = null;
+          if (startRow == null || !ke.contains(startRow)) {
+            if (ke.getPrevEndRow() != null) {
+              startKey = followingKey(encoder, getKeyClass(), TextUtil.getBytes(ke.getPrevEndRow()));
+            }
+          } else {
+            startKey = fromBytes(getKeyClass(), TextUtil.getBytes(startRow));
+          }
+          
+          K endKey = null;
+          if (endRow == null || !ke.contains(endRow)) {
+            if (ke.getEndRow() != null)
+              endKey = lastPossibleKey(encoder, getKeyClass(), TextUtil.getBytes(ke.getEndRow()));
+          } else {
+            endKey = fromBytes(getKeyClass(), TextUtil.getBytes(endRow));
+          }
+          
+          PartitionQueryImpl pqi = new PartitionQueryImpl<K,T>(query, startKey, endKey, new String[] {location});
+          ret.add(pqi);
+        }
+      }
+      
+      return ret;
+    } catch (TableNotFoundException e) {
+      throw new IOException(e);
+    } catch (AccumuloException e) {
+      throw new IOException(e);
+    } catch (AccumuloSecurityException e) {
+      throw new IOException(e);
+    }
+    
+  }
+  
+  /**
+   * Returns the largest key of type {@code clazz} whose encoded form sorts at
+   * or before the given end-row bytes, or throws if no such key exists.
+   */
+  static <K> K lastPossibleKey(Encoder encoder, Class<K> clazz, byte[] er) {
+    
+    if (clazz.equals(Byte.TYPE) || clazz.equals(Byte.class)) {
+      throw new UnsupportedOperationException();
+    } else if (clazz.equals(Boolean.TYPE) || clazz.equals(Boolean.class)) {
+      throw new UnsupportedOperationException();
+    } else if (clazz.equals(Short.TYPE) || clazz.equals(Short.class)) {
+      return fromBytes(encoder, clazz, encoder.lastPossibleKey(2, er));
+    } else if (clazz.equals(Integer.TYPE) || clazz.equals(Integer.class)) {
+      return fromBytes(encoder, clazz, encoder.lastPossibleKey(4, er));
+    } else if (clazz.equals(Long.TYPE) || clazz.equals(Long.class)) {
+      return fromBytes(encoder, clazz, encoder.lastPossibleKey(8, er));
+    } else if (clazz.equals(Float.TYPE) || clazz.equals(Float.class)) {
+      return fromBytes(encoder, clazz, encoder.lastPossibleKey(4, er));
+    } else if (clazz.equals(Double.TYPE) || clazz.equals(Double.class)) {
+      return fromBytes(encoder, clazz, encoder.lastPossibleKey(8, er));
+    } else if (clazz.equals(String.class)) {
+      throw new UnsupportedOperationException();
+    } else if (clazz.equals(Utf8.class)) {
+      return fromBytes(encoder, clazz, er);
+    }
+    
+    throw new IllegalArgumentException("Unknown type " + clazz.getName());
+  }
+
+
+  /**
+   * Returns the smallest key of type {@code clazz} whose encoded form sorts
+   * strictly after the given row bytes, or throws if no such key exists.
+   */
+  static <K> K followingKey(Encoder encoder, Class<K> clazz, byte[] per) {
+    
+    if (clazz.equals(Byte.TYPE) || clazz.equals(Byte.class)) {
+      return (K) Byte.valueOf(encoder.followingKey(1, per)[0]);
+    } else if (clazz.equals(Boolean.TYPE) || clazz.equals(Boolean.class)) {
+      throw new UnsupportedOperationException();
+    } else if (clazz.equals(Short.TYPE) || clazz.equals(Short.class)) {
+      return fromBytes(encoder, clazz, encoder.followingKey(2, per));
+    } else if (clazz.equals(Integer.TYPE) || clazz.equals(Integer.class)) {
+      return fromBytes(encoder, clazz, encoder.followingKey(4, per));
+    } else if (clazz.equals(Long.TYPE) || clazz.equals(Long.class)) {
+      return fromBytes(encoder, clazz, encoder.followingKey(8, per));
+    } else if (clazz.equals(Float.TYPE) || clazz.equals(Float.class)) {
+      return fromBytes(encoder, clazz, encoder.followingKey(4, per));
+    } else if (clazz.equals(Double.TYPE) || clazz.equals(Double.class)) {
+      return fromBytes(encoder, clazz, encoder.followingKey(8, per));
+    } else if (clazz.equals(String.class)) {
+      throw new UnsupportedOperationException();
+    } else if (clazz.equals(Utf8.class)) {
+      return fromBytes(encoder, clazz, Arrays.copyOf(per, per.length + 1));
+    }
+
+    throw new IllegalArgumentException("Unknown type " + clazz.getName());
+  }
+
+  @Override
+  public void flush() throws IOException {
+    try {
+      if (batchWriter != null) {
+        batchWriter.flush();
+      }
+    } catch (MutationsRejectedException e) {
+      throw new IOException(e);
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    try {
+      if (batchWriter != null) {
+        batchWriter.close();
+        batchWriter = null;
+      }
+    } catch (MutationsRejectedException e) {
+      throw new IOException(e);
+    }
+    
+  }
+}
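The `getPartitions` logic above maps tablet boundaries back into typed keys: `followingKey` computes the smallest fixed-width key that sorts strictly after a row, and `lastPossibleKey` the largest one that sorts at or before it. A minimal sketch of the underlying byte arithmetic, assuming a big-endian fixed-width encoding (the `KeyBounds` class and its method shapes are illustrative, not the Gora `Encoder` API):

```java
import java.util.Arrays;

// Illustrative sketch of the byte-level arithmetic behind
// followingKey/lastPossibleKey for fixed-width, big-endian keys.
public class KeyBounds {

  // Smallest size-byte key sorting strictly after the given row bytes:
  // zero-pad a shorter row, otherwise increment with carry.
  public static byte[] followingKey(int size, byte[] row) {
    if (row.length > size)
      throw new IllegalArgumentException("row longer than key width");
    byte[] key = Arrays.copyOf(row, size); // zero-padded copy
    if (row.length < size)
      return key; // a proper prefix plus 0x00 padding already sorts after the row
    for (int i = size - 1; i >= 0; i--) {
      if (++key[i] != 0)
        return key; // no carry out of this byte, done
      // byte rolled over from 0xff to 0x00, keep carrying
    }
    throw new IllegalArgumentException("no key sorts after the maximum value");
  }

  // Largest size-byte key sorting at or before the given row bytes:
  // return the row itself if it fits exactly, otherwise subtract one
  // from the prefix (with borrow) and pad with 0xff.
  public static byte[] lastPossibleKey(int size, byte[] row) {
    if (row.length > size)
      throw new IllegalArgumentException("row longer than key width");
    if (row.length == size)
      return Arrays.copyOf(row, size);
    byte[] key = new byte[size];
    Arrays.fill(key, (byte) 0xff);
    System.arraycopy(row, 0, key, 0, row.length);
    int i = row.length - 1;
    while (i >= 0 && key[i] == 0) {
      key[i--] = (byte) 0xff; // borrow
    }
    if (i < 0)
      throw new IllegalArgumentException("no key sorts at or before an all-zero row");
    key[i]--;
    return key;
  }
}
```

For example, `followingKey(8, {0x00, 0x6f})` just zero-pads to eight bytes, matching the first assertion in `PartitionTest` below.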
diff --git a/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/util/FixedByteArrayOutputStream.java b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/util/FixedByteArrayOutputStream.java
new file mode 100644
index 0000000..d2003bb
--- /dev/null
+++ b/trunk/gora-accumulo/src/main/java/org/apache/gora/accumulo/util/FixedByteArrayOutputStream.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.util;
+
+import java.io.IOException;
+import java.io.OutputStream;
+
+public class FixedByteArrayOutputStream extends OutputStream {
+  
+  private int i;
+  private final byte[] out;
+  
+  public FixedByteArrayOutputStream(byte[] out) {
+    this.out = out;
+  }
+  
+  @Override
+  public void write(int b) throws IOException {
+    out[i++] = (byte) b;
+  }
+  
+  @Override
+  public void write(byte[] b, int off, int len) throws IOException {
+    System.arraycopy(b, off, out, i, len);
+    i += len;
+  }
+  
+}
\ No newline at end of file
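`FixedByteArrayOutputStream` writes straight into a caller-supplied buffer with no growth or copying, which is useful when an encoding must fill an exact-width array. A hedged usage sketch (the inner class repeats the code above so the example stands alone; `encodeLong` is an illustrative helper, not a Gora API):

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class FixedBufferDemo {

  // Mirror of FixedByteArrayOutputStream above: bytes land directly in the
  // caller's array; overflowing the buffer fails fast with an
  // ArrayIndexOutOfBoundsException instead of silently reallocating.
  static class FixedByteArrayOutputStream extends OutputStream {
    private int i;
    private final byte[] out;

    FixedByteArrayOutputStream(byte[] out) {
      this.out = out;
    }

    @Override
    public void write(int b) {
      out[i++] = (byte) b;
    }

    @Override
    public void write(byte[] b, int off, int len) {
      System.arraycopy(b, off, out, i, len);
      i += len;
    }
  }

  // Encode a long into an exact eight-byte, big-endian buffer.
  public static byte[] encodeLong(long v) {
    byte[] buf = new byte[8];
    try (DataOutputStream dos = new DataOutputStream(new FixedByteArrayOutputStream(buf))) {
      dos.writeLong(v);
    } catch (IOException e) {
      throw new RuntimeException(e); // cannot happen for an in-memory buffer
    }
    return buf;
  }
}
```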
diff --git a/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/store/AccumuloStoreTest.java b/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/store/AccumuloStoreTest.java
new file mode 100644
index 0000000..8995ebf
--- /dev/null
+++ b/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/store/AccumuloStoreTest.java
@@ -0,0 +1,52 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.store;
+
+import java.io.IOException;
+
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.DataStoreTestBase;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Tests the Accumulo-backed data store against {@link DataStoreTestBase}.
+ */
+public class AccumuloStoreTest extends DataStoreTestBase {
+  
+  // TODO implement test driver
+
+  @Override
+  protected DataStore<String,Employee> createEmployeeDataStore() throws IOException {
+    return DataStoreFactory.getDataStore(String.class, Employee.class, new Configuration());
+  }
+  
+  @Override
+  protected DataStore<String,WebPage> createWebPageDataStore() throws IOException {
+    return DataStoreFactory.getDataStore(String.class, WebPage.class, new Configuration());
+  }
+
+  
+  // Until GORA-66 is resolved this test will always fail, so do not run it.
+  @Override
+  public void testDeleteByQueryFields() throws IOException {
+  }
+}
diff --git a/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/store/PartitionTest.java b/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/store/PartitionTest.java
new file mode 100644
index 0000000..ffafcb0
--- /dev/null
+++ b/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/store/PartitionTest.java
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.store;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+import junit.framework.Assert;
+
+import org.apache.gora.accumulo.encoders.Encoder;
+import org.apache.gora.accumulo.encoders.SignedBinaryEncoder;
+import org.junit.Test;
+
+/**
+ * Tests partition-boundary key computation ({@code followingKey} and
+ * {@code lastPossibleKey}) in {@link AccumuloStore}.
+ */
+public class PartitionTest {
+  // TODO test more types
+
+  private static Encoder encoder = new SignedBinaryEncoder();
+
+  static long encl(long l) {
+    ByteArrayOutputStream baos = new ByteArrayOutputStream();
+    DataOutputStream dos = new DataOutputStream(baos);
+    try {
+      dos.writeLong(l);
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+    return encoder.decodeLong(baos.toByteArray());
+  }
+
+  @Test
+  public void test1() {
+    Assert.assertEquals(encl(0x006f000000000000l), (long) AccumuloStore.followingKey(encoder, Long.class, new byte[] {0x00, 0x6f}));
+    Assert.assertEquals(encl(1l), (long) AccumuloStore.followingKey(encoder, Long.class, new byte[] {0, 0, 0, 0, 0, 0, 0, 0}));
+    Assert.assertEquals(encl(0x106f000000000001l), (long) AccumuloStore.followingKey(encoder, Long.class, new byte[] {0x10, 0x6f, 0, 0, 0, 0, 0, 0}));
+    Assert.assertEquals(
+        encl(-1l),
+        (long) AccumuloStore.followingKey(encoder, Long.class, new byte[] {(byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff,
+            (byte) 0xff,
+            (byte) 0xfe}));
+    
+    Assert.assertEquals(encl(0x8000000000000001l), (long) AccumuloStore.followingKey(encoder, Long.class, new byte[] {(byte) 0x80, 0, 0, 0, 0, 0, 0, 0}));
+    Assert.assertEquals(
+        encl(0x8000000000000000l),
+        (long) AccumuloStore.followingKey(encoder, Long.class, new byte[] {(byte) 0x7f, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff,
+            (byte) 0xff,
+            (byte) 0xff}));
+
+
+    try {
+      AccumuloStore.followingKey(encoder, Long.class,
+          new byte[] {(byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff});
+      Assert.fail("expected IllegalArgumentException: no key follows the maximum value");
+    } catch (IllegalArgumentException iea) {
+      // expected
+    }
+  }
+  
+  @Test
+  public void test2() {
+    Assert.assertEquals(encl(0x00ffffffffffffffl), (long) AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {0x01}));
+    Assert.assertEquals(encl(0x006effffffffffffl), (long) AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {0x00, 0x6f}));
+    Assert.assertEquals(encl(0xff6effffffffffffl), (long) AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {(byte) 0xff, 0x6f}));
+    Assert.assertEquals(encl(0xfffeffffffffffffl), (long) AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {(byte) 0xff, (byte) 0xff}));
+    Assert.assertEquals(encl(0l), (long) AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {(byte) 0, 0, 0, 0, 0, 0, 0, 0}));
+    
+    Assert.assertEquals(encl(0x7effffffffffffffl), (long) AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {(byte) 0x7f}));
+    Assert.assertEquals(encl(0x7fffffffffffffffl), (long) AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {(byte) 0x80}));
+    Assert.assertEquals(encl(0x80ffffffffffffffl), (long) AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {(byte) 0x81}));
+
+    try {
+      AccumuloStore.lastPossibleKey(encoder, Long.class, new byte[] {(byte) 0, 0, 0, 0, 0, 0, 0});
+      Assert.fail("expected IllegalArgumentException for an all-zero row");
+    } catch (IllegalArgumentException iea) {
+      // expected
+    }
+  }
+}
diff --git a/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/util/HexEncoderTest.java b/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/util/HexEncoderTest.java
new file mode 100644
index 0000000..b7ee02a
--- /dev/null
+++ b/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/util/HexEncoderTest.java
@@ -0,0 +1,55 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.util;
+
+import org.apache.gora.accumulo.encoders.HexEncoder;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Round-trip tests for {@link HexEncoder}.
+ */
+public class HexEncoderTest {
+  
+  @Test
+  public void testByte() {
+    HexEncoder encoder = new HexEncoder();
+    
+    Assert.assertEquals("12", new String(encoder.encodeByte((byte) 0x12)));
+    Assert.assertEquals("f2", new String(encoder.encodeByte((byte) 0xf2)));
+    
+    byte b = Byte.MIN_VALUE;
+    while (b != Byte.MAX_VALUE) {
+      Assert.assertEquals(b, encoder.decodeByte(encoder.encodeByte(b)));
+      b++;
+    }
+  }
+
+  @Test
+  public void testShort() {
+    HexEncoder encoder = new HexEncoder();
+    
+    Assert.assertEquals("1234", new String(encoder.encodeShort((short) 0x1234)));
+    Assert.assertEquals("f234", new String(encoder.encodeShort((short) 0xf234)));
+    
+    short s = Short.MIN_VALUE;
+    while (s != Short.MAX_VALUE) {
+      Assert.assertEquals(s, encoder.decodeShort(encoder.encodeShort(s)));
+      s++;
+    }
+  }
+}
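`HexEncoderTest` exercises the standard round-trip property: encode each byte to two lowercase hex characters, decode back, and compare. A self-contained sketch of that conversion (a stand-in for the idea, not the Gora `HexEncoder` API):

```java
public class HexDemo {

  private static final char[] DIGITS = "0123456789abcdef".toCharArray();

  // Each byte maps to exactly two lowercase hex characters, so encodings
  // have a fixed width and stay human-readable in a shell.
  public static String encodeByte(byte b) {
    return new String(new char[] {DIGITS[(b >> 4) & 0x0f], DIGITS[b & 0x0f]});
  }

  public static byte decodeByte(String hex) {
    return (byte) Integer.parseInt(hex, 16);
  }
}
```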
diff --git a/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/util/SignedBinaryEncoderTest.java b/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/util/SignedBinaryEncoderTest.java
new file mode 100644
index 0000000..309ce97
--- /dev/null
+++ b/trunk/gora-accumulo/src/test/java/org/apache/gora/accumulo/util/SignedBinaryEncoderTest.java
@@ -0,0 +1,166 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.accumulo.util;
+
+import java.util.ArrayList;
+import java.util.Collections;
+
+import junit.framework.Assert;
+
+import org.apache.gora.accumulo.encoders.SignedBinaryEncoder;
+import org.apache.hadoop.io.Text;
+import org.junit.Test;
+
+/**
+ * Verifies that {@link SignedBinaryEncoder} produces encodings whose
+ * lexicographic byte order matches the numeric order of the encoded values.
+ */
+public class SignedBinaryEncoderTest {
+  @Test
+  public void testShort() {
+    short s = Short.MIN_VALUE;
+    Text prev = null;
+    
+    SignedBinaryEncoder encoder = new SignedBinaryEncoder();
+
+    while (true) {
+      byte[] enc = encoder.encodeShort(s);
+      Assert.assertEquals(s, encoder.decodeShort(enc));
+      Text current = new Text(enc);
+      if (prev != null)
+        Assert.assertTrue(prev.compareTo(current) < 0);
+      prev = current;
+      s++;
+      if (s == Short.MAX_VALUE)
+        break;
+    }
+  }
+
+  private void testInt(int start, int finish) {
+    int i = start;
+    Text prev = null;
+    
+    SignedBinaryEncoder encoder = new SignedBinaryEncoder();
+
+    while (true) {
+      byte[] enc = encoder.encodeInt(i);
+      Assert.assertEquals(i, encoder.decodeInt(enc));
+      Text current = new Text(enc);
+      if (prev != null)
+        Assert.assertTrue(prev.compareTo(current) < 0);
+      prev = current;
+      i++;
+      if (i == finish)
+        break;
+    }
+  }
+  
+  @Test
+  public void testInt() {
+    testInt(Integer.MIN_VALUE, Integer.MIN_VALUE + (1 << 16));
+    testInt(-(1 << 15), (1 << 15));
+    testInt(Integer.MAX_VALUE - (1 << 16), Integer.MAX_VALUE);
+  }
+  
+  private void testLong(long start, long finish) {
+    long l = start;
+    Text prev = null;
+    
+    SignedBinaryEncoder encoder = new SignedBinaryEncoder();
+
+    while (true) {
+      byte[] enc = encoder.encodeLong(l);
+      Assert.assertEquals(l, encoder.decodeLong(enc));
+      Text current = new Text(enc);
+      if (prev != null)
+        Assert.assertTrue(prev.compareTo(current) < 0);
+      prev = current;
+      l++;
+      if (l == finish)
+        break;
+    }
+  }
+  
+  @Test
+  public void testLong() {
+    testLong(Long.MIN_VALUE, Long.MIN_VALUE + (1 << 16));
+    testLong(-(1 << 15), (1 << 15));
+    testLong(Long.MAX_VALUE - (1 << 16), Long.MAX_VALUE);
+  }
+  
+  @Test
+  public void testDouble() {
+    
+    ArrayList<Double> testData = new ArrayList<Double>();
+    testData.add(Double.NEGATIVE_INFINITY);
+    testData.add(Double.MIN_VALUE);
+    testData.add(Math.nextUp(Double.NEGATIVE_INFINITY));
+    testData.add(Math.pow(10.0, 30.0) * -1.0);
+    testData.add(Math.pow(10.0, 30.0));
+    testData.add(Math.pow(10.0, -30.0) * -1.0);
+    testData.add(Math.pow(10.0, -30.0));
+    testData.add(Math.nextAfter(0.0, Double.NEGATIVE_INFINITY));
+    testData.add(0.0);
+    testData.add(Math.nextAfter(Double.MAX_VALUE, Double.NEGATIVE_INFINITY));
+    testData.add(Double.MAX_VALUE);
+    testData.add(Double.POSITIVE_INFINITY);
+    
+    Collections.sort(testData);
+    
+    SignedBinaryEncoder encoder = new SignedBinaryEncoder();
+
+    for (int i = 0; i < testData.size(); i++) {
+      byte[] enc = encoder.encodeDouble(testData.get(i));
+      Assert.assertEquals(testData.get(i), encoder.decodeDouble(enc));
+      if (i > 1) {
+        Assert.assertTrue("Checking " + testData.get(i) + " > " + testData.get(i - 1),
+            new Text(enc).compareTo(new Text(encoder.encodeDouble(testData.get(i - 1)))) > 0);
+      }
+    }
+  }
+
+  @Test
+  public void testFloat() {
+    
+    ArrayList<Float> testData = new ArrayList<Float>();
+    testData.add(Float.NEGATIVE_INFINITY);
+    testData.add(Float.MIN_VALUE);
+    testData.add(Math.nextUp(Float.NEGATIVE_INFINITY));
+    testData.add((float) Math.pow(10.0f, 30.0f) * -1.0f);
+    testData.add((float) Math.pow(10.0f, 30.0f));
+    testData.add((float) Math.pow(10.0f, -30.0f) * -1.0f);
+    testData.add((float) Math.pow(10.0f, -30.0f));
+    testData.add(Math.nextAfter(0.0f, Float.NEGATIVE_INFINITY));
+    testData.add(0.0f);
+    testData.add(Math.nextAfter(Float.MAX_VALUE, Float.NEGATIVE_INFINITY));
+    testData.add(Float.MAX_VALUE);
+    testData.add(Float.POSITIVE_INFINITY);
+    
+    Collections.sort(testData);
+    
+    SignedBinaryEncoder encoder = new SignedBinaryEncoder();
+
+    for (int i = 0; i < testData.size(); i++) {
+      byte[] enc = encoder.encodeFloat(testData.get(i));
+      Assert.assertEquals(testData.get(i), encoder.decodeFloat(enc));
+      if (i > 1) {
+        Assert.assertTrue("Checking " + testData.get(i) + " > " + testData.get(i - 1),
+            new Text(enc).compareTo(new Text(encoder.encodeFloat(testData.get(i - 1)))) > 0);
+      }
+    }
+  }
+
+}
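The ordering assertions above all rest on one trick: two's-complement big-endian bytes compare negatives *after* positives when treated as unsigned, and flipping the sign bit before serializing restores numeric order. A minimal sketch of that idea for longs (illustrative only, not the actual `SignedBinaryEncoder` implementation):

```java
public class OrderPreservingDemo {

  // Flip the sign bit, then emit big-endian: the resulting bytes sort
  // (as unsigned values) in the same order as the original signed longs.
  public static byte[] encodeLong(long v) {
    long flipped = v ^ 0x8000000000000000L;
    byte[] out = new byte[8];
    for (int i = 7; i >= 0; i--) {
      out[i] = (byte) flipped;
      flipped >>>= 8;
    }
    return out;
  }

  // Unsigned lexicographic comparison, the way Accumulo orders row bytes.
  public static int compareUnsigned(byte[] a, byte[] b) {
    for (int i = 0; i < a.length; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0)
        return d;
    }
    return 0;
  }
}
```

With this scheme `Long.MIN_VALUE` encodes to all zero bytes and `Long.MAX_VALUE` to all `0xff` bytes, so every signed long falls lexicographically between them.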
diff --git a/trunk/gora-accumulo/src/test/resources/gora-accumulo-mapping.xml b/trunk/gora-accumulo/src/test/resources/gora-accumulo-mapping.xml
new file mode 100644
index 0000000..2766177
--- /dev/null
+++ b/trunk/gora-accumulo/src/test/resources/gora-accumulo-mapping.xml
@@ -0,0 +1,54 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<gora-orm>
+  <table name="AccessLog">
+    <config key="table.file.compress.blocksize" value="32K"/>
+  </table>
+
+  <class name="org.apache.gora.tutorial.log.generated.Pageview" keyClass="java.lang.Long" table="AccessLog">
+    <field name="url" family="common" qualifier="url"/>
+    <field name="timestamp" family="common" qualifier="timestamp"/>
+    <field name="ip" family="common" qualifier="ip" />
+    <field name="httpMethod" family="http" qualifier="httpMethod"/>
+    <field name="httpStatusCode" family="http" qualifier="httpStatusCode"/>
+    <field name="responseSize" family="http" qualifier="responseSize"/>
+    <field name="referrer" family="misc" qualifier="referrer"/>
+    <field name="userAgent" family="misc" qualifier="userAgent"/>
+  </class>
+  
+  <class name="org.apache.gora.examples.generated.Employee" keyClass="java.lang.String" table="Employee">
+    <field name="name" family="info" qualifier="nm"/>
+    <field name="dateOfBirth" family="info" qualifier="db"/>
+    <field name="ssn" family="info" qualifier="sn"/>
+    <field name="salary" family="info" qualifier="sl"/>
+  </class>
+  
+  <class name="org.apache.gora.examples.generated.WebPage" keyClass="java.lang.String" table="WebPage">
+    <field name="url" family="common" qualifier="u"/>
+    <field name="content" family="content" qualifier="c"/>
+    <field name="parsedContent" family="parsedContent"/>
+    <field name="outlinks" family="outlinks"/>
+    <field name="metadata" family="common" qualifier="metadata"/>
+  </class>
+
+  <class name="org.apache.gora.examples.generated.TokenDatum" keyClass="java.lang.String">
+    <field name="count" family="common" qualifier="count"/>
+  </class>  
+</gora-orm>  
\ No newline at end of file
diff --git a/trunk/gora-accumulo/src/test/resources/gora.properties b/trunk/gora-accumulo/src/test/resources/gora.properties
new file mode 100644
index 0000000..21a7e56
--- /dev/null
+++ b/trunk/gora-accumulo/src/test/resources/gora.properties
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+gora.datastore.default=org.apache.gora.accumulo.store.AccumuloStore
+gora.datastore.accumulo.mock=true
+gora.datastore.accumulo.instance=a14
+gora.datastore.accumulo.zookeepers=localhost
+gora.datastore.accumulo.user=root
+gora.datastore.accumulo.password=secret
\ No newline at end of file
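The key/value pairs above follow standard Java properties syntax. A minimal, hypothetical sketch of how such keys can be parsed with `java.util.Properties` (the real lookup is done inside Gora's store factory; the two keys shown are copied from the file above):

```java
import java.io.StringReader;
import java.util.Properties;

public class GoraPropsSketch {
    public static void main(String[] args) throws Exception {
        // Load two of the keys from gora.properties above, inlined for the sketch.
        Properties p = new Properties();
        p.load(new StringReader(
            "gora.datastore.default=org.apache.gora.accumulo.store.AccumuloStore\n"
          + "gora.datastore.accumulo.mock=true\n"));
        // The default datastore is addressed by its fully qualified class name.
        System.out.println(p.getProperty("gora.datastore.default"));
        // Boolean-valued keys are plain strings until parsed.
        System.out.println(Boolean.parseBoolean(p.getProperty("gora.datastore.accumulo.mock")));
    }
}
```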
diff --git a/trunk/gora-cassandra/build.xml b/trunk/gora-cassandra/build.xml
new file mode 100644
index 0000000..8222bf5
--- /dev/null
+++ b/trunk/gora-cassandra/build.xml
@@ -0,0 +1,24 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<project name="gora-cassandra" default="compile">
+  <property name="project.dir" value="${basedir}/.."/>
+
+  <import file="${project.dir}/build-common.xml"/>
+</project>
diff --git a/trunk/gora-cassandra/conf/.gitignore b/trunk/gora-cassandra/conf/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-cassandra/conf/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-cassandra/ivy/ivy.xml b/trunk/gora-cassandra/ivy/ivy.xml
new file mode 100644
index 0000000..1f11c6a
--- /dev/null
+++ b/trunk/gora-cassandra/ivy/ivy.xml
@@ -0,0 +1,60 @@
+<?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<ivy-module version="2.0">
+    <info 
+      organisation="org.apache.gora"
+      module="gora-cassandra"
+      status="integration"/>
+      
+  <configurations>
+    <include file="../../ivy/ivy-configurations.xml"/>
+  </configurations>
+  
+  <publications>
+    <artifact name="gora-cassandra" conf="compile"/>
+    <artifact name="gora-cassandra-test" conf="test"/>
+  </publications>
+
+  
+  <dependencies>
+    <!-- conf="*->@" means every conf is mapped to the conf of the same name of the artifact-->
+    
+    <dependency org="org.apache.gora" name="gora-core" rev="latest.integration" changing="true" conf="*->@"/>
+    
+    <dependency org="org.jdom" name="jdom" rev="1.1">
+      <exclude org="xerces" name="xercesImpl"/>
+    </dependency>
+    
+    <!--
+        <dependency org="org.apache.cassandra" name="apache-cassandra" rev="0.8.1"/>
+    	<dependency org="me.prettyprint" name="hector" rev="0.8.0-1"/>
+    -->
+    <dependency org="org.apache.cassandra" name="cassandra-thrift" rev="0.8.1"/>
+    <dependency org="com.ecyrd.speed4j" name="speed4j" rev="0.9" conf="*->*,!javadoc,!sources"/>
+    <dependency org="com.github.stephenc.high-scale-lib" name="high-scale-lib" rev="1.1.2" conf="*->*,!javadoc,!sources"/>
+    <dependency org="com.google.collections" name="google-collections" rev="1.0" conf="*->*,!javadoc,!sources"/>
+    <dependency org="com.google.guava" name="guava" rev="r09" conf="*->*,!javadoc,!sources"/>
+
+    <!-- test dependencies -->
+
+  </dependencies>
+    
+</ivy-module>
+
diff --git a/trunk/gora-cassandra/pom.xml b/trunk/gora-cassandra/pom.xml
new file mode 100644
index 0000000..6a095c1
--- /dev/null
+++ b/trunk/gora-cassandra/pom.xml
@@ -0,0 +1,202 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+     <!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+    -->
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.gora</groupId>
+        <artifactId>gora</artifactId>
+        <version>0.2.1</version>
+        <relativePath>../</relativePath>
+    </parent>
+    <artifactId>gora-cassandra</artifactId>
+    <packaging>bundle</packaging>
+
+    <name>Apache Gora :: Cassandra</name>
+    <url>http://gora.apache.org</url>
+    <description>The Apache Gora open source framework provides an in-memory data model and 
+    persistence for big data. Gora supports persisting to column stores, key value stores, 
+    document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce 
+    support.</description>
+    <inceptionYear>2010</inceptionYear>
+    <organization>
+    	<name>The Apache Software Foundation</name>
+    	<url>http://www.apache.org/</url>
+    </organization>
+    <scm>
+    	<url>http://svn.apache.org/viewvc/gora/tags/apache-gora-0.2.1/gora-cassandra</url>
+    	<connection>scm:svn:http://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-cassandra</connection>
+    	<developerConnection>scm:svn:https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-cassandra</developerConnection>
+    </scm>
+    <issueManagement>
+    	<system>JIRA</system>
+    	<url>https://issues.apache.org/jira/browse/GORA</url>
+    </issueManagement>
+    <ciManagement>
+    	<system>Jenkins</system>
+    	<url>https://builds.apache.org/job/Gora-trunk/</url>
+    </ciManagement>
+
+    <properties>
+        <osgi.import>*</osgi.import>
+        <osgi.export>org.apache.gora.cassandra*;version="${project.version}";-noimport:=true</osgi.export>
+    </properties>
+
+    <build>
+        <directory>target</directory>
+        <outputDirectory>target/classes</outputDirectory>
+        <finalName>${project.artifactId}-${project.version}</finalName>
+        <testOutputDirectory>target/test-classes</testOutputDirectory>
+        <testSourceDirectory>src/test/java</testSourceDirectory>
+        <sourceDirectory>src/main/java</sourceDirectory>
+        <testResources>
+          <testResource>
+            <directory>${project.basedir}/src/test/conf</directory>
+            <includes>
+              <include>**/*</include>
+            </includes>
+            <!--targetPath>${project.basedir}/target/classes/</targetPath-->
+          </testResource>
+        </testResources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>build-helper-maven-plugin</artifactId>
+                <version>${build-helper-maven-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <phase>generate-sources</phase>
+                        <goals>
+                            <goal>add-source</goal>
+                        </goals>
+                        <configuration>
+                            <sources>
+                                <source>src/examples/java</source>
+                            </sources>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <version>${maven-jar-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>jar</goal>
+                            <goal>test-jar</goal>
+                        </goals>
+                        <configuration>
+                            <archive>
+                                <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
+                            </archive>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+
+    <dependencies>
+        <!-- Gora Internal Dependencies -->
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+            <classifier>tests</classifier>
+        </dependency>
+
+        <!-- Cassandra Dependencies -->
+        <dependency>
+            <groupId>org.apache.cassandra</groupId>
+            <artifactId>cassandra-all</artifactId>
+            <scope>test</scope>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.apache.cassandra.deps</groupId>
+                    <artifactId>avro</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+        
+        <dependency>
+            <groupId>org.apache.cassandra</groupId>
+            <artifactId>cassandra-thrift</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.hectorclient</groupId>
+            <artifactId>hector-core</artifactId>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.apache.cassandra</groupId>
+                    <artifactId>cassandra-all</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+        
+        <!-- Misc Dependencies -->
+        <dependency>
+            <groupId>com.google.guava</groupId>
+            <artifactId>guava</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.jdom</groupId>
+            <artifactId>jdom</artifactId>
+        </dependency>
+
+        <!-- Logging Dependencies -->
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>avro</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+        </dependency>
+
+        <!-- Testing Dependencies -->
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-test</artifactId>
+        </dependency>
+        
+    </dependencies>
+
+</project>
diff --git a/trunk/gora-cassandra/src/examples/java/.gitignore b/trunk/gora-cassandra/src/examples/java/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-cassandra/src/examples/java/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraColumn.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraColumn.java
new file mode 100644
index 0000000..b31a9ba
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraColumn.java
@@ -0,0 +1,79 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.query;
+
+import java.nio.ByteBuffer;
+
+import me.prettyprint.hector.api.Serializer;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.Schema.Type;
+import org.apache.gora.cassandra.serializers.GoraSerializerTypeInferer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Represents a unit of data: a key-value pair tagged by a family name.
+ */
+public abstract class CassandraColumn {
+  public static final Logger LOG = LoggerFactory.getLogger(CassandraColumn.class);
+
+  public static final int SUB = 0;
+  public static final int SUPER = 1;
+  
+  private String family;
+  private int type;
+  private Field field;
+  
+  public String getFamily() {
+    return family;
+  }
+  public void setFamily(String family) {
+    this.family = family;
+  }
+  public int getType() {
+    return type;
+  }
+  public void setType(int type) {
+    this.type = type;
+  }
+  public void setField(Field field) {
+    this.field = field;
+  }
+  
+  protected Field getField() {
+    return this.field;
+  }
+  
+  public abstract ByteBuffer getName();
+  public abstract Object getValue();
+  
+  protected Object fromByteBuffer(Schema schema, ByteBuffer byteBuffer) {
+    Object value = null;
+    Serializer serializer = GoraSerializerTypeInferer.getSerializer(schema);
+    if (serializer == null) {
+      LOG.info("Schema is not supported: " + schema.toString());
+    } else {
+      value = serializer.fromByteBuffer(byteBuffer);
+    }
+    return value;
+  }
+
+}
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraQuery.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraQuery.java
new file mode 100644
index 0000000..e075f69
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraQuery.java
@@ -0,0 +1,73 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.query;
+
+import java.util.List;
+import java.util.Map;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.QueryBase;
+import org.apache.gora.store.DataStore;
+
+public class CassandraQuery<K, T extends Persistent> extends QueryBase<K, T> {
+
+  private Query<K, T> query;
+  
+  /**
+   * Maps Avro fields to Cassandra columns.
+   */
+  private Map<String, List<String>> familyMap;
+  
+  public CassandraQuery() {
+    super(null);
+  }
+  public CassandraQuery(DataStore<K, T> dataStore) {
+    super(dataStore);
+  }
+  public void setFamilyMap(Map<String, List<String>> familyMap) {
+    this.familyMap = familyMap;
+  }
+  public Map<String, List<String>> getFamilyMap() {
+    return familyMap;
+  }
+  
+  /**
+   * @param family the family name
+   * @return an array of the query column names belonging to the family
+   */
+  public String[] getColumns(String family) {
+    
+    List<String> columnList = familyMap.get(family);
+    String[] columns = new String[columnList.size()];
+    for (int i = 0; i < columns.length; ++i) {
+      columns[i] = columnList.get(i);
+    }
+    return columns;
+  }
+  public Query<K, T> getQuery() {
+    return query;
+  }
+  public void setQuery(Query<K, T> query) {
+    this.query = query;
+  }
+
+}
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraResult.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraResult.java
new file mode 100644
index 0000000..792e116
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraResult.java
@@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.query;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+import me.prettyprint.cassandra.serializers.StringSerializer;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.ResultBase;
+import org.apache.gora.store.DataStore;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class CassandraResult<K, T extends Persistent> extends ResultBase<K, T> {
+  public static final Logger LOG = LoggerFactory.getLogger(CassandraResult.class);
+  
+  private int rowNumber;
+
+  private CassandraResultSet<K> cassandraResultSet;
+  
+  /**
+   * Maps Cassandra columns to Avro fields.
+   */
+  private Map<String, String> reverseMap;
+
+  public CassandraResult(DataStore<K, T> dataStore, Query<K, T> query) {
+    super(dataStore, query);
+  }
+
+  @Override
+  protected boolean nextInner() throws IOException {
+    if (this.rowNumber < this.cassandraResultSet.size()) {
+      updatePersistent();
+    }
+    ++this.rowNumber;
+    return (this.rowNumber <= this.cassandraResultSet.size());
+  }
+
+
+  /**
+   * Load key/value pair from Cassandra row to Avro record.
+   * @throws IOException
+   */
+  @SuppressWarnings("unchecked")
+  private void updatePersistent() throws IOException {
+    CassandraRow<K> cassandraRow = this.cassandraResultSet.get(this.rowNumber);
+    
+    // load key
+    this.key = cassandraRow.getKey();
+    
+    // load value
+    Schema schema = this.persistent.getSchema();
+    List<Field> fields = schema.getFields();
+    
+    for (CassandraColumn cassandraColumn: cassandraRow) {
+      
+      // get field name
+      String family = cassandraColumn.getFamily();
+      String fieldName = this.reverseMap.get(family + ":" + StringSerializer.get().fromByteBuffer(cassandraColumn.getName()));
+      
+      // get field
+      int pos = this.persistent.getFieldIndex(fieldName);
+      Field field = fields.get(pos);
+      
+      // get value
+      cassandraColumn.setField(field);
+      Object value = cassandraColumn.getValue();
+      
+      this.persistent.put(pos, value);
+      // this field does not need to be written back to the store
+      this.persistent.clearDirty(pos);
+    }
+
+  }
+
+  @Override
+  public void close() throws IOException {
+    // TODO Auto-generated method stub
+    
+  }
+
+  @Override
+  public float getProgress() throws IOException {
+    return (((float) this.rowNumber) / this.cassandraResultSet.size());
+  }
+
+  public void setResultSet(CassandraResultSet<K> cassandraResultSet) {
+    this.cassandraResultSet = cassandraResultSet;
+  }
+  
+  public void setReverseMap(Map<String, String> reverseMap) {
+    this.reverseMap = reverseMap;
+  }
+
+}
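The pattern in `nextInner()`/`getProgress()` above — advance a row counter, report whether another row was consumed, and expose progress as a ratio — can be exercised in isolation. A standalone sketch (class and names are hypothetical, not Gora API):

```java
public class CursorSketch {
    private int rowNumber = 0;
    private final int size;

    CursorSketch(int size) { this.size = size; }

    // Mirrors the result-iteration contract: advance the counter and
    // report whether it still points at (or just past) a valid row.
    boolean nextInner() {
        ++rowNumber;
        return rowNumber <= size;
    }

    // Fraction of rows consumed so far.
    float getProgress() { return ((float) rowNumber) / size; }

    public static void main(String[] args) {
        CursorSketch c = new CursorSketch(2);
        System.out.println(c.nextInner());   // first row consumed
        System.out.println(c.nextInner());   // second row consumed
        System.out.println(c.getProgress()); // all rows seen
        System.out.println(c.nextInner());   // exhausted
    }
}
```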
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraResultSet.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraResultSet.java
new file mode 100644
index 0000000..5fc4e6c
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraResultSet.java
@@ -0,0 +1,54 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.query;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+
+/**
+ * List data structure that preserves the row order returned by Cassandra selects.
+ */
+public class CassandraResultSet<K> extends ArrayList<CassandraRow<K>> {
+
+  /**
+   * 
+   */
+  private static final long serialVersionUID = -7620939600192859652L;
+
+  /**
+   * Maps keys to indices in the list.
+   */
+  private HashMap<K, Integer> indexMap = new HashMap<K, Integer>();
+
+  public CassandraRow<K> getRow(K key) {
+    Integer integer = this.indexMap.get(key);
+    if (integer == null) {
+      return null;
+    }
+    
+    return this.get(integer);
+  }
+
+  public void putRow(K key, CassandraRow<K> cassandraRow) {
+    this.add(cassandraRow);
+    this.indexMap.put(key, this.size()-1);
+  } 
+  
+
+}
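The `putRow`/`getRow` pair above implements a list with an auxiliary key-to-index map, giving O(1) lookup by row key while keeping insertion order. A self-contained sketch of the same idea (hypothetical names, not the Gora class):

```java
import java.util.ArrayList;
import java.util.HashMap;

// An ArrayList that also maintains a key -> index map, mirroring the
// CassandraResultSet approach: ordered iteration plus keyed lookup.
class IndexedRows<K, V> extends ArrayList<V> {
    private final HashMap<K, Integer> indexMap = new HashMap<>();

    void putRow(K key, V row) {
        add(row);
        indexMap.put(key, size() - 1);
    }

    V getRow(K key) {
        Integer i = indexMap.get(key);
        return (i == null) ? null : get(i);
    }
}

public class IndexedRowsDemo {
    public static void main(String[] args) {
        IndexedRows<String, String> rows = new IndexedRows<>();
        rows.putRow("k1", "row-1");
        rows.putRow("k2", "row-2");
        System.out.println(rows.getRow("k2"));      // keyed lookup
        System.out.println(rows.getRow("missing")); // absent key yields null
    }
}
```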
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraRow.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraRow.java
new file mode 100644
index 0000000..544821a
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraRow.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.query;
+
+import java.util.ArrayList;
+
+/**
+ * List of key-value pairs representing a row, tagged by a key.
+ */
+public class CassandraRow<K> extends ArrayList<CassandraColumn> {
+
+  /**
+   * 
+   */
+  private static final long serialVersionUID = -7620939600192859652L;
+  private K key;
+
+  public K getKey() {
+    return this.key;
+  }
+
+  public void setKey(K key) {
+    this.key = key;
+  }
+
+}
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraSubColumn.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraSubColumn.java
new file mode 100644
index 0000000..5735c91
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraSubColumn.java
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.query;
+
+import java.nio.ByteBuffer;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetEncoder;
+
+import me.prettyprint.hector.api.beans.HColumn;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericArray;
+import org.apache.gora.cassandra.serializers.GenericArraySerializer;
+import org.apache.gora.cassandra.serializers.StatefulHashMapSerializer;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class CassandraSubColumn extends CassandraColumn {
+  public static final Logger LOG = LoggerFactory.getLogger(CassandraSubColumn.class);
+
+  private static final String ENCODING = "UTF-8";
+  
+  private static CharsetEncoder charsetEncoder = Charset.forName(ENCODING).newEncoder();
+
+  /**
+   * Key-value pair containing the raw data.
+   */
+  private HColumn<ByteBuffer, ByteBuffer> hColumn;
+
+  public ByteBuffer getName() {
+    return hColumn.getName();
+  }
+
+  /**
+   * Deserialize the column value into a typed Object, according to the field schema.
+   * @see org.apache.gora.cassandra.query.CassandraColumn#getValue()
+   */
+  public Object getValue() {
+    Field field = getField();
+    Schema fieldSchema = field.schema();
+    Type type = fieldSchema.getType();
+    ByteBuffer byteBuffer = hColumn.getValue();
+    if (byteBuffer == null) {
+      return null;
+    }
+    Object value = null;
+    if (type == Type.ARRAY) {
+      GenericArraySerializer serializer = GenericArraySerializer.get(fieldSchema.getElementType());
+      GenericArray genericArray = serializer.fromByteBuffer(byteBuffer);
+      value = genericArray;
+    } else if (type == Type.MAP) {
+      StatefulHashMapSerializer serializer = StatefulHashMapSerializer.get(fieldSchema.getValueType());
+      StatefulHashMap map = serializer.fromByteBuffer(byteBuffer);
+      value = map;
+    } else {
+      value = fromByteBuffer(fieldSchema, byteBuffer);
+    }
+
+    return value;
+  }
+
+  public void setValue(HColumn<ByteBuffer, ByteBuffer> hColumn) {
+    this.hColumn = hColumn;
+  }
+}
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraSuperColumn.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraSuperColumn.java
new file mode 100644
index 0000000..f944a3d
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/query/CassandraSuperColumn.java
@@ -0,0 +1,132 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.query;
+
+import java.nio.ByteBuffer;
+import java.util.Map;
+
+import me.prettyprint.cassandra.serializers.IntegerSerializer;
+import me.prettyprint.cassandra.serializers.StringSerializer;
+import me.prettyprint.hector.api.beans.HColumn;
+import me.prettyprint.hector.api.beans.HSuperColumn;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.cassandra.serializers.Utf8Serializer;
+import org.apache.gora.persistency.ListGenericArray;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.impl.PersistentBase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class CassandraSuperColumn extends CassandraColumn {
+  public static final Logger LOG = LoggerFactory.getLogger(CassandraSuperColumn.class);
+
+  private HSuperColumn<String, ByteBuffer, ByteBuffer> hSuperColumn;
+  
+  public ByteBuffer getName() {
+    return StringSerializer.get().toByteBuffer(hSuperColumn.getName());
+  }
+
+  public Object getValue() {
+    Field field = getField();
+    Schema fieldSchema = field.schema();
+    Type type = fieldSchema.getType();
+    
+    Object value = null;
+    
+    switch (type) {
+      case ARRAY:
+        ListGenericArray array = new ListGenericArray(fieldSchema.getElementType());
+        
+        for (HColumn<ByteBuffer, ByteBuffer> hColumn : this.hSuperColumn.getColumns()) {
+          ByteBuffer memberByteBuffer = hColumn.getValue();
+          Object memberValue = fromByteBuffer(fieldSchema.getElementType(), memberByteBuffer);
+          array.add(memberValue);
+        }
+        value = array;
+        
+        break;
+      case MAP:
+        Map<Utf8, Object> map = new StatefulHashMap<Utf8, Object>();
+        
+        for (HColumn<ByteBuffer, ByteBuffer> hColumn : this.hSuperColumn.getColumns()) {
+          ByteBuffer memberByteBuffer = hColumn.getValue();
+          Object memberValue = fromByteBuffer(fieldSchema.getValueType(), memberByteBuffer);
+          map.put(Utf8Serializer.get().fromByteBuffer(hColumn.getName()), memberValue);
+        }
+        value = map;
+        
+        break;
+      case RECORD:
+        String fullName = fieldSchema.getFullName();
+        
+        Class<?> claz = null;
+        try {
+          claz = Class.forName(fullName);
+        } catch (ClassNotFoundException cnfe) {
+          LOG.warn("Unable to load class " + fullName, cnfe);
+          break;
+        }
+
+        try {
+          value = claz.newInstance();          
+        } catch (InstantiationException ie) {
+          LOG.warn("Instantiation error", ie);
+          break;
+        } catch (IllegalAccessException iae) {
+          LOG.warn("Illegal access error", iae);
+          break;
+        }
+        
+        // we updated the value instance, now update its members
+        if (value instanceof PersistentBase) {
+          PersistentBase record = (PersistentBase) value;
+
+          for (HColumn<ByteBuffer, ByteBuffer> hColumn : this.hSuperColumn.getColumns()) {
+            String memberName = StringSerializer.get().fromByteBuffer(hColumn.getName());
+            if (memberName == null || memberName.length() == 0) {
+              LOG.warn("member name is null or empty.");
+              continue;
+            }
+            Field memberField = fieldSchema.getField(memberName);
+            CassandraSubColumn cassandraColumn = new CassandraSubColumn();
+            cassandraColumn.setField(memberField);
+            cassandraColumn.setValue(hColumn);
+            record.put(record.getFieldIndex(memberName), cassandraColumn.getValue());
+          }
+        }
+        break;
+      default:
+        LOG.warn("Type not supported: " + type);
+    }
+    
+    return value;
+  }
+
+  public void setValue(HSuperColumn<String, ByteBuffer, ByteBuffer> hSuperColumn) {
+    this.hSuperColumn = hSuperColumn;
+  }
+
+}
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/GenericArraySerializer.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/GenericArraySerializer.java
new file mode 100644
index 0000000..b03afce
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/GenericArraySerializer.java
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.serializers;
+
+import java.nio.BufferUnderflowException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import me.prettyprint.cassandra.serializers.AbstractSerializer;
+import me.prettyprint.cassandra.serializers.BytesArraySerializer;
+import me.prettyprint.cassandra.serializers.IntegerSerializer;
+import me.prettyprint.hector.api.Serializer;
+import me.prettyprint.hector.api.ddl.ComparatorType;
+import static me.prettyprint.hector.api.ddl.ComparatorType.UTF8TYPE;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.persistency.ListGenericArray;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A GenericArraySerializer translates byte arrays to and from Avro GenericArray instances.
+ */
+public class GenericArraySerializer<T> extends AbstractSerializer<GenericArray<T>> {
+
+  public static final Logger LOG = LoggerFactory.getLogger(GenericArraySerializer.class);
+
+  private static Map<Type, GenericArraySerializer> elementTypeToSerializerMap = new HashMap<Type, GenericArraySerializer>();
+  private static Map<Class, GenericArraySerializer> fixedClassToSerializerMap = new HashMap<Class, GenericArraySerializer>();
+
+  public static GenericArraySerializer get(Type elementType) {
+    GenericArraySerializer serializer = elementTypeToSerializerMap.get(elementType);
+    if (serializer == null) {
+      serializer = new GenericArraySerializer(elementType);
+      elementTypeToSerializerMap.put(elementType, serializer);
+    }
+    return serializer;
+  }
+
+  public static GenericArraySerializer get(Type elementType, Class clazz) {
+    if (elementType != Type.FIXED) {
+      return null;
+    }
+    GenericArraySerializer serializer = fixedClassToSerializerMap.get(clazz);
+    if (serializer == null) {
+      serializer = new GenericArraySerializer(clazz);
+      fixedClassToSerializerMap.put(clazz, serializer);
+    }
+    return serializer;
+  }
+
+  public static GenericArraySerializer get(Schema elementSchema) {
+    Type type = elementSchema.getType();
+    if (type == Type.FIXED) {
+      return get(Type.FIXED, TypeUtils.getClass(elementSchema));
+    } else {
+      return get(type);
+    }
+  }
+
+  private Schema elementSchema = null;
+  private Type elementType = null;
+  private int size = -1;
+  private Class<T> clazz = null;
+  private Serializer<T> elementSerializer = null;
+
+  public GenericArraySerializer(Serializer<T> elementSerializer) {
+    this.elementSerializer = elementSerializer;
+  }
+
+  public GenericArraySerializer(Schema elementSchema) {
+    this.elementSchema = elementSchema;
+    elementType = elementSchema.getType();
+    size = TypeUtils.getFixedSize(elementSchema);
+    elementSerializer = GoraSerializerTypeInferer.getSerializer(elementSchema);
+  }
+
+  public GenericArraySerializer(Type elementType) {
+    this.elementType = elementType;
+    if (elementType != Type.FIXED) {
+      elementSchema = Schema.create(elementType);
+    }
+    clazz = TypeUtils.getClass(elementType);
+    size = TypeUtils.getFixedSize(elementType);
+    elementSerializer = GoraSerializerTypeInferer.getSerializer(elementType);
+  }
+
+  public GenericArraySerializer(Class<T> clazz) {
+    this.clazz = clazz;
+    elementType = TypeUtils.getType(clazz);
+    size = TypeUtils.getFixedSize(clazz);
+    if (elementType == null || elementType == Type.FIXED) {
+      elementType = Type.FIXED;
+      elementSchema = TypeUtils.getSchema(clazz);
+      elementSerializer = GoraSerializerTypeInferer.getSerializer(elementType, clazz);
+    } else {
+      elementSerializer = GoraSerializerTypeInferer.getSerializer(elementType);
+    }
+  }
+
+  @Override
+  public ByteBuffer toByteBuffer(GenericArray<T> array) {
+    if (array == null) {
+      return null;
+    }
+    if (size > 0) {
+      return toByteBufferWithFixedLengthElements(array);
+    } else {
+      return toByteBufferWithVariableLengthElements(array);
+    }
+  }
+
+  private ByteBuffer toByteBufferWithFixedLengthElements(GenericArray<T> array) {
+    ByteBuffer byteBuffer = ByteBuffer.allocate((int) array.size() * size);
+    for (T element : array) {
+      byteBuffer.put(elementSerializer.toByteBuffer(element));
+    }
+    byteBuffer.rewind();
+    return byteBuffer;
+  }
+
+  private ByteBuffer toByteBufferWithVariableLengthElements(GenericArray<T> array) {
+    int n = (int) array.size();
+    List<byte[]> list = new ArrayList<byte[]>(n);
+    n *= 4;
+    for (T element : array) {
+      byte[] bytes = BytesArraySerializer.get().fromByteBuffer(elementSerializer.toByteBuffer(element));
+      list.add(bytes);
+      n += bytes.length;
+    }
+    ByteBuffer byteBuffer = ByteBuffer.allocate(n);
+    for (byte[] bytes : list) {
+      byteBuffer.put(IntegerSerializer.get().toByteBuffer(bytes.length));
+      byteBuffer.put(BytesArraySerializer.get().toByteBuffer(bytes));
+    }
+    byteBuffer.rewind();
+    return byteBuffer;
+  }
+
+  @Override
+  public GenericArray<T> fromByteBuffer(ByteBuffer byteBuffer) {
+    if (byteBuffer == null) {
+      return null;
+    }
+    GenericArray<T> array = new ListGenericArray<T>(elementSchema);
+    while (true) {
+      T element = null;
+      try {
+        if (size > 0) {
+          element = elementSerializer.fromByteBuffer(byteBuffer);
+        } else {
+          int n = IntegerSerializer.get().fromByteBuffer(byteBuffer);
+          byte[] bytes = new byte[n];
+          byteBuffer.get(bytes, 0, n);
+          element = elementSerializer.fromByteBuffer(BytesArraySerializer.get().toByteBuffer(bytes));
+        }
+      } catch (BufferUnderflowException e) {
+        break;
+      }
+      if (element == null) {
+        break;
+      }
+      array.add(element);
+    }
+    return array;
+  }
+
+  @Override
+  public ComparatorType getComparatorType() {
+    return elementSerializer.getComparatorType();
+  }
+
+}
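The variable-length path of `toByteBufferWithVariableLengthElements`/`fromByteBuffer` above uses a simple length-prefixed layout: each element is written as a 4-byte length followed by its raw bytes, and decoding reads until the buffer is exhausted. The following is a minimal standalone sketch of that layout using plain `java.nio.ByteBuffer`; the class and method names are illustrative, not part of the Gora or Hector APIs.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class LengthPrefixedCodec {

  // Encode each element as a 4-byte big-endian length prefix followed by its bytes.
  public static ByteBuffer encode(List<byte[]> elements) {
    int n = 4 * elements.size();
    for (byte[] e : elements) {
      n += e.length;
    }
    ByteBuffer buf = ByteBuffer.allocate(n);
    for (byte[] e : elements) {
      buf.putInt(e.length);
      buf.put(e);
    }
    buf.rewind();
    return buf;
  }

  // Decode by reading length-prefixed elements until fewer than 4 bytes remain.
  public static List<byte[]> decode(ByteBuffer buf) {
    List<byte[]> out = new ArrayList<byte[]>();
    while (buf.remaining() >= 4) {
      int len = buf.getInt();
      byte[] e = new byte[len];
      buf.get(e);
      out.add(e);
    }
    return out;
  }
}
```

Note that, as in the serializer above, the encoded form carries no element count, so a decoder can only stop on buffer exhaustion; this is why the real implementation breaks the loop on `BufferUnderflowException`.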
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/GoraSerializerTypeInferer.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/GoraSerializerTypeInferer.java
new file mode 100644
index 0000000..55259fd
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/GoraSerializerTypeInferer.java
@@ -0,0 +1,221 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.serializers;
+
+import java.nio.ByteBuffer;
+
+import me.prettyprint.cassandra.serializers.BytesArraySerializer;
+import me.prettyprint.cassandra.serializers.ByteBufferSerializer;
+import me.prettyprint.cassandra.serializers.BooleanSerializer;
+import me.prettyprint.cassandra.serializers.DoubleSerializer;
+import me.prettyprint.cassandra.serializers.FloatSerializer;
+import me.prettyprint.cassandra.serializers.IntegerSerializer;
+import me.prettyprint.cassandra.serializers.LongSerializer;
+import me.prettyprint.cassandra.serializers.StringSerializer;
+import me.prettyprint.cassandra.serializers.SerializerTypeInferer;
+import me.prettyprint.hector.api.Serializer;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.avro.util.Utf8;
+
+import org.apache.gora.persistency.StatefulHashMap;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Utility class that infers the concrete Serializer needed to turn a value into
+ * its binary representation.
+ */
+public class GoraSerializerTypeInferer {
+
+  public static final Logger LOG = LoggerFactory.getLogger(GoraSerializerTypeInferer.class);
+
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  public static <T> Serializer<T> getSerializer(Object value) {
+    Serializer serializer = null;
+    if (value == null) {
+      serializer = ByteBufferSerializer.get();
+    } else if (value instanceof Utf8) {
+      serializer = Utf8Serializer.get();
+    } else if (value instanceof Boolean) {
+      serializer = BooleanSerializer.get();
+    } else if (value instanceof ByteBuffer) {
+      serializer = ByteBufferSerializer.get();
+    } else if (value instanceof byte[]) {
+      serializer = BytesArraySerializer.get();
+    } else if (value instanceof Double) {
+      serializer = DoubleSerializer.get();
+    } else if (value instanceof Float) {
+      serializer = FloatSerializer.get();
+    } else if (value instanceof Integer) {
+      serializer = IntegerSerializer.get();
+    } else if (value instanceof Long) {
+      serializer = LongSerializer.get();
+    } else if (value instanceof String) {
+      serializer = StringSerializer.get();
+    } else if (value instanceof SpecificFixed) {
+      serializer = SpecificFixedSerializer.get(value.getClass());
+    } else if (value instanceof GenericArray) {
+      Schema schema = ((GenericArray)value).getSchema();
+      if (schema.getType() == Type.ARRAY) {
+        schema = schema.getElementType();
+      }
+      serializer = GenericArraySerializer.get(schema);
+    } else if (value instanceof StatefulHashMap) {
+      StatefulHashMap map = (StatefulHashMap)value;
+      if (map.size() == 0) {
+        serializer = ByteBufferSerializer.get();
+      } else {
+        Object value0 = map.values().iterator().next();
+        Schema schema = TypeUtils.getSchema(value0);
+        serializer = StatefulHashMapSerializer.get(schema);
+      }
+    } else {
+      serializer = SerializerTypeInferer.getSerializer(value);
+    }
+    return serializer;
+  }
+
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  public static <T> Serializer<T> getSerializer(Class<?> valueClass) {
+    Serializer serializer = null;
+    if (valueClass.equals(Utf8.class)) {
+      serializer = Utf8Serializer.get();
+    } else if (valueClass.equals(Boolean.class) || valueClass.equals(boolean.class)) {
+      serializer = BooleanSerializer.get();
+    } else if (valueClass.equals(ByteBuffer.class)) {
+      serializer = ByteBufferSerializer.get();
+    } else if (valueClass.equals(Double.class) || valueClass.equals(double.class)) {
+      serializer = DoubleSerializer.get();
+    } else if (valueClass.equals(Float.class) || valueClass.equals(float.class)) {
+      serializer = FloatSerializer.get();
+    } else if (valueClass.equals(Integer.class) || valueClass.equals(int.class)) {
+      serializer = IntegerSerializer.get();
+    } else if (valueClass.equals(Long.class) || valueClass.equals(long.class)) {
+      serializer = LongSerializer.get();
+    } else if (valueClass.equals(String.class)) {
+      serializer = StringSerializer.get();
+    } else {
+      serializer = SerializerTypeInferer.getSerializer(valueClass);
+    }
+    return serializer;
+  }
+
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  public static <T> Serializer<T> getSerializer(Schema schema) {
+    Serializer serializer = null;
+    Type type = schema.getType();
+    if (type == Type.STRING) {
+      serializer = Utf8Serializer.get();
+    } else if (type == Type.BOOLEAN) {
+      serializer = BooleanSerializer.get();
+    } else if (type == Type.BYTES) {
+      serializer = ByteBufferSerializer.get();
+    } else if (type == Type.DOUBLE) {
+      serializer = DoubleSerializer.get();
+    } else if (type == Type.FLOAT) {
+      serializer = FloatSerializer.get();
+    } else if (type == Type.INT) {
+      serializer = IntegerSerializer.get();
+    } else if (type == Type.LONG) {
+      serializer = LongSerializer.get();
+    } else if (type == Type.FIXED) {
+      Class clazz = TypeUtils.getClass(schema);
+      serializer = SpecificFixedSerializer.get(clazz);
+      // serializer = SpecificFixedSerializer.get(schema);
+    } else if (type == Type.ARRAY) {
+      serializer = GenericArraySerializer.get(schema.getElementType());
+    } else if (type == Type.MAP) {
+      serializer = StatefulHashMapSerializer.get(schema.getValueType());
+    } else {
+      serializer = null;
+    }
+    return serializer;
+  }
+
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  public static <T> Serializer<T> getSerializer(Type type) {
+    Serializer serializer = null;
+    if (type == Type.STRING) {
+      serializer = Utf8Serializer.get();
+    } else if (type == Type.BOOLEAN) {
+      serializer = BooleanSerializer.get();
+    } else if (type == Type.BYTES) {
+      serializer = ByteBufferSerializer.get();
+    } else if (type == Type.DOUBLE) {
+      serializer = DoubleSerializer.get();
+    } else if (type == Type.FLOAT) {
+      serializer = FloatSerializer.get();
+    } else if (type == Type.INT) {
+      serializer = IntegerSerializer.get();
+    } else if (type == Type.LONG) {
+      serializer = LongSerializer.get();
+    } else if (type == Type.FIXED) {
+      serializer = SpecificFixedSerializer.get();
+    } else {
+      serializer = null;
+    }
+    return serializer;
+  }
+
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  public static <T> Serializer<T> getSerializer(Type type, Type elementType) {
+    Serializer serializer = null;
+    if (type == Type.ARRAY) {
+      serializer = GenericArraySerializer.get(elementType);
+    } else if (type == Type.MAP) {
+      serializer = StatefulHashMapSerializer.get(elementType);
+    } else if (type == null) {
+      // fall back to the element type when no container type is given
+      serializer = (elementType == null) ? null : getSerializer(elementType);
+    } else if (elementType == null) {
+      serializer = getSerializer(type);
+    }
+    return serializer;
+  }
+
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  public static <T> Serializer<T> getSerializer(Type type, Class<T> clazz) {
+    if (type != Type.FIXED || clazz == null) {
+      return null;
+    }
+    return SpecificFixedSerializer.get(clazz);
+  }
+
+}
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/SpecificFixedSerializer.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/SpecificFixedSerializer.java
new file mode 100644
index 0000000..b981fbf
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/SpecificFixedSerializer.java
@@ -0,0 +1,124 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.serializers;
+
+import java.nio.BufferUnderflowException;
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.Map;
+
+import me.prettyprint.cassandra.serializers.AbstractSerializer;
+import me.prettyprint.cassandra.serializers.BytesArraySerializer;
+import me.prettyprint.hector.api.Serializer;
+import me.prettyprint.hector.api.ddl.ComparatorType;
+import static me.prettyprint.hector.api.ddl.ComparatorType.BYTESTYPE;
+
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.util.Utf8;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A SpecificFixedSerializer translates byte arrays to and from Avro SpecificFixed instances.
+ */
+public class SpecificFixedSerializer extends AbstractSerializer<SpecificFixed> {
+
+  public static final Logger LOG = LoggerFactory.getLogger(SpecificFixedSerializer.class);
+
+  // for toByteBuffer
+  private static SpecificFixedSerializer serializer = new SpecificFixedSerializer(SpecificFixed.class);
+
+  public static SpecificFixedSerializer get() {
+    return serializer;
+  }
+
+  private static Map<Class, SpecificFixedSerializer> classToSerializerMap = new HashMap<Class, SpecificFixedSerializer>();
+
+  // for fromByteBuffer, which requires Class info
+  public static SpecificFixedSerializer get(Class clazz) {
+    SpecificFixedSerializer serializer = classToSerializerMap.get(clazz);
+    if (serializer == null) {
+      serializer = new SpecificFixedSerializer(clazz);
+      classToSerializerMap.put(clazz, serializer);
+    }
+    return serializer;
+  }
+
+  private Class<? extends SpecificFixed> clazz;
+
+  public SpecificFixedSerializer(Class<? extends SpecificFixed> clazz) {
+    this.clazz = clazz;
+  }
+
+  @Override
+  public ByteBuffer toByteBuffer(SpecificFixed fixed) {
+    if (fixed == null) {
+      return null;
+    }
+    byte[] bytes = fixed.bytes();
+    if (bytes.length < 1) {
+      return null;
+    }
+    return BytesArraySerializer.get().toByteBuffer(bytes);
+  }
+
+  @Override
+  public SpecificFixed fromByteBuffer(ByteBuffer byteBuffer) {
+    if (byteBuffer == null) {
+      return null;
+    }
+
+    Object value = null;
+    try {
+      value = clazz.newInstance();
+    } catch (InstantiationException ie) {
+      LOG.warn("Instantiation error for class=" + clazz, ie);
+      return null;
+    } catch (IllegalAccessException iae) {
+      LOG.warn("Illegal access error for class=" + clazz, iae);
+      return null;
+    }
+
+    if (! (value instanceof SpecificFixed)) {
+      LOG.warn("Not an instance of SpecificFixed");
+      return null;
+    }
+
+    SpecificFixed fixed = (SpecificFixed) value;
+    byte[] bytes = fixed.bytes();
+    // may throw BufferUnderflowException when the buffer is shorter than the fixed size
+    byteBuffer.get(bytes, 0, bytes.length);
+    fixed.bytes(bytes);
+    return fixed;
+  }
+
+  @Override
+  public ComparatorType getComparatorType() {
+    return BYTESTYPE;
+  }
+
+}
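`SpecificFixedSerializer.fromByteBuffer` above relies on the Avro FIXED contract: a fixed value always occupies exactly its declared width, so decoding is a single positional read into a preallocated array with no length prefix. A minimal standalone sketch of that round trip follows; the `SIZE` constant and class name are illustrative, not a real Avro-generated type.

```java
import java.nio.ByteBuffer;

public class FixedWidthRoundTrip {
  // Declared width of the "fixed" value, analogous to an Avro FIXED size.
  static final int SIZE = 4;

  static ByteBuffer encode(byte[] bytes) {
    if (bytes.length != SIZE) {
      throw new IllegalArgumentException("expected " + SIZE + " bytes, got " + bytes.length);
    }
    return ByteBuffer.wrap(bytes.clone());
  }

  static byte[] decode(ByteBuffer buf) {
    byte[] out = new byte[SIZE];
    // Relative bulk get: throws BufferUnderflowException if the buffer is truncated,
    // mirroring the behavior of SpecificFixedSerializer.fromByteBuffer.
    buf.get(out, 0, SIZE);
    return out;
  }
}
```

Because the width is known up front, fixed values can also be packed back to back (as `toByteBufferWithFixedLengthElements` does for arrays) without any per-element framing.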
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/StatefulHashMapSerializer.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/StatefulHashMapSerializer.java
new file mode 100644
index 0000000..4922220
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/StatefulHashMapSerializer.java
@@ -0,0 +1,236 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.serializers;
+
+import java.nio.BufferUnderflowException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import me.prettyprint.cassandra.serializers.AbstractSerializer;
+import me.prettyprint.cassandra.serializers.BytesArraySerializer;
+import me.prettyprint.cassandra.serializers.IntegerSerializer;
+import me.prettyprint.hector.api.Serializer;
+import me.prettyprint.hector.api.ddl.ComparatorType;
+import static me.prettyprint.hector.api.ddl.ComparatorType.UTF8TYPE;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.persistency.State;
+import org.apache.gora.persistency.StatefulHashMap;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A StatefulHashMapSerializer translates byte arrays to and from Gora StatefulHashMap instances.
+ */
+public class StatefulHashMapSerializer<T> extends AbstractSerializer<StatefulHashMap<Utf8, T>> {
+
+  public static final Logger LOG = LoggerFactory.getLogger(StatefulHashMapSerializer.class);
+
+  private static Map<Type, StatefulHashMapSerializer> valueTypeToSerializerMap = new HashMap<Type, StatefulHashMapSerializer>();
+  private static Map<Class, StatefulHashMapSerializer> fixedClassToSerializerMap = new HashMap<Class, StatefulHashMapSerializer>();
+
+  public static StatefulHashMapSerializer get(Type valueType) {
+    StatefulHashMapSerializer serializer = valueTypeToSerializerMap.get(valueType);
+    if (serializer == null) {
+      serializer = new StatefulHashMapSerializer(valueType);
+      valueTypeToSerializerMap.put(valueType, serializer);
+    }
+    return serializer;
+  }
+
+  public static StatefulHashMapSerializer get(Type valueType, Class clazz) {
+    if (valueType != Type.FIXED) {
+      return null;
+    }
+    StatefulHashMapSerializer serializer = fixedClassToSerializerMap.get(clazz);
+    if (serializer == null) {
+      serializer = new StatefulHashMapSerializer(clazz);
+      fixedClassToSerializerMap.put(clazz, serializer);
+    }
+    return serializer;
+  }
+
+  public static StatefulHashMapSerializer get(Schema valueSchema) {
+    Type type = valueSchema.getType();
+    if (type == Type.FIXED) {
+      return get(Type.FIXED, TypeUtils.getClass(valueSchema));
+    } else {
+      return get(type);
+    }
+  }
+
+  private Schema valueSchema = null;
+  private Type valueType = null;
+  private int size = -1;
+  private Class<T> clazz = null;
+  private Serializer<T> valueSerializer = null;
+
+  public StatefulHashMapSerializer(Serializer<T> valueSerializer) {
+    this.valueSerializer = valueSerializer;
+  }
+
+  public StatefulHashMapSerializer(Schema valueSchema) {
+    this.valueSchema = valueSchema;
+    valueType = valueSchema.getType();
+    size = TypeUtils.getFixedSize(valueSchema);
+    valueSerializer = GoraSerializerTypeInferer.getSerializer(valueSchema);
+  }
+
+  public StatefulHashMapSerializer(Type valueType) {
+    this.valueType = valueType;
+    if (valueType != Type.FIXED) {
+      valueSchema = Schema.create(valueType);
+    }
+    clazz = TypeUtils.getClass(valueType);
+    size = TypeUtils.getFixedSize(valueType);
+    valueSerializer = GoraSerializerTypeInferer.getSerializer(valueType);
+  }
+
+  public StatefulHashMapSerializer(Class<T> clazz) {
+    this.clazz = clazz;
+    valueType = TypeUtils.getType(clazz);
+    size = TypeUtils.getFixedSize(clazz);
+    if (valueType == null || valueType == Type.FIXED) {
+      valueType = Type.FIXED;
+      valueSchema = TypeUtils.getSchema(clazz);
+      valueSerializer = GoraSerializerTypeInferer.getSerializer(valueType, clazz);
+    } else {
+      valueSerializer = GoraSerializerTypeInferer.getSerializer(valueType);
+    }
+  }
+
+  @Override
+  public ByteBuffer toByteBuffer(StatefulHashMap<Utf8, T> map) {
+    if (map == null) {
+      return null;
+    }
+    if (size > 0) {
+      return toByteBufferWithFixedLengthElements(map);
+    } else {
+      return toByteBufferWithVariableLengthElements(map);
+    }
+  }
+
+  private ByteBuffer toByteBufferWithFixedLengthElements(StatefulHashMap<Utf8, T> map) {
+    List<byte[]> list = new ArrayList<byte[]>(map.size());
+    int n = 0;
+    for (Utf8 key : map.keySet()) {
+      if (map.getState(key) == State.DELETED) {
+        continue;
+      }
+      T value = map.get(key);
+      byte[] bytes = BytesArraySerializer.get().fromByteBuffer(Utf8Serializer.get().toByteBuffer(key));
+      list.add(bytes);
+      n += 4;
+      n += bytes.length;
+      bytes = BytesArraySerializer.get().fromByteBuffer(valueSerializer.toByteBuffer(value));
+      list.add(bytes);
+      n += bytes.length;
+    }
+    ByteBuffer byteBuffer = ByteBuffer.allocate(n);
+    int i = 0;
+    for (byte[] bytes : list) {
+      if (i % 2 == 0) {
+        byteBuffer.put(IntegerSerializer.get().toByteBuffer(bytes.length));
+      }
+      byteBuffer.put(BytesArraySerializer.get().toByteBuffer(bytes));
+      i += 1;
+    }
+    byteBuffer.rewind();
+    return byteBuffer;
+  }
+
+  private ByteBuffer toByteBufferWithVariableLengthElements(StatefulHashMap<Utf8, T> map) {
+    List<byte[]> list = new ArrayList<byte[]>(map.size());
+    int n = 0;
+    for (Utf8 key : map.keySet()) {
+      if (map.getState(key) == State.DELETED) {
+        continue;
+      }
+      T value = map.get(key);
+      byte[] bytes = BytesArraySerializer.get().fromByteBuffer(Utf8Serializer.get().toByteBuffer(key));
+      list.add(bytes);
+      n += 4;
+      n += bytes.length;
+      bytes = BytesArraySerializer.get().fromByteBuffer(valueSerializer.toByteBuffer(value));
+      list.add(bytes);
+      n += 4;
+      n += bytes.length;
+    }
+    ByteBuffer byteBuffer = ByteBuffer.allocate(n);
+    for (byte[] bytes : list) {
+      byteBuffer.put(IntegerSerializer.get().toByteBuffer(bytes.length));
+      byteBuffer.put(BytesArraySerializer.get().toByteBuffer(bytes));
+    }
+    byteBuffer.rewind();
+    return byteBuffer;
+  }
+
+  @Override
+  public StatefulHashMap<Utf8, T> fromByteBuffer(ByteBuffer byteBuffer) {
+    if (byteBuffer == null) {
+      return null;
+    }
+    StatefulHashMap<Utf8, T> map = new StatefulHashMap<Utf8, T>();
+    int i = 0;
+    while (true) {
+      Utf8 key = null;
+      T value = null;
+      try {
+        int n = IntegerSerializer.get().fromByteBuffer(byteBuffer);
+        byte[] bytes = new byte[n];
+        byteBuffer.get(bytes, 0, n);
+        key = Utf8Serializer.get().fromByteBuffer( BytesArraySerializer.get().toByteBuffer(bytes) );
+
+        if (size > 0) {
+          value = valueSerializer.fromByteBuffer(byteBuffer);
+        }
+        else {
+          n = IntegerSerializer.get().fromByteBuffer(byteBuffer);
+          bytes = new byte[n];
+          byteBuffer.get(bytes, 0, n);
+          value = valueSerializer.fromByteBuffer( BytesArraySerializer.get().toByteBuffer(bytes) );
+        }
+      } catch (BufferUnderflowException e) {
+        break;
+      }
+      if (key == null) {
+        break;
+      }
+      if (value == null) {
+        break;
+      }
+      map.put(key, value);
+    }
+    return map;
+  }
+
+  @Override
+  public ComparatorType getComparatorType() {
+    return valueSerializer.getComparatorType();
+  }
+
+}
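For variable-length values, the layout produced above is a flat sequence of length-prefixed entries: a 4-byte length, the key bytes, then a 4-byte length and the value bytes, repeated per entry, with DELETED keys skipped. The following self-contained sketch illustrates that byte layout using only the JDK (no Hector or Avro types; the class and method names are illustrative, not part of Gora):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the variable-length map layout:
// [len(key)][key bytes][len(value)][value bytes] ... repeated per entry.
public class MapWireFormatSketch {

  public static byte[] encode(Map<String, String> map) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (Map.Entry<String, String> e : map.entrySet()) {
      writeEntry(out, e.getKey().getBytes(StandardCharsets.UTF_8));
      writeEntry(out, e.getValue().getBytes(StandardCharsets.UTF_8));
    }
    return out.toByteArray();
  }

  private static void writeEntry(ByteArrayOutputStream out, byte[] bytes) {
    // 4-byte big-endian length prefix, as IntegerSerializer would produce.
    out.write(ByteBuffer.allocate(4).putInt(bytes.length).array(), 0, 4);
    out.write(bytes, 0, bytes.length);
  }

  public static Map<String, String> decode(byte[] data) {
    Map<String, String> map = new LinkedHashMap<>();
    ByteBuffer buf = ByteBuffer.wrap(data);
    // Mirrors fromByteBuffer: read entries until the buffer is exhausted.
    while (buf.remaining() >= 4) {
      String key = readEntry(buf);
      String value = readEntry(buf);
      map.put(key, value);
    }
    return map;
  }

  private static String readEntry(ByteBuffer buf) {
    byte[] bytes = new byte[buf.getInt()];
    buf.get(bytes);
    return new String(bytes, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    Map<String, String> m = new LinkedHashMap<>();
    m.put("k1", "v1");
    m.put("key2", "value2");
    System.out.println(decode(encode(m)));
  }
}
```

The fixed-length variant in `toByteBufferWithFixedLengthElements` differs only in that value lengths are known from the schema, so no 4-byte prefix is written for values.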
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/TypeUtils.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/TypeUtils.java
new file mode 100644
index 0000000..48973d0
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/TypeUtils.java
@@ -0,0 +1,237 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.serializers;
+
+import java.nio.ByteBuffer;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.persistency.ListGenericArray;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.StatefulHashMap;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Utility class for mapping between Avro types/schemas and the Java classes
+ * and fixed byte sizes used by the Cassandra serializers
+ */
+public class TypeUtils {
+
+  public static final Logger LOG = LoggerFactory.getLogger(TypeUtils.class);
+
+  // @SuppressWarnings({ "rawtypes", "unchecked" })
+  public static Class getClass(Object value) {
+    return value.getClass();
+  }
+
+  public static Schema getSchema(Object value) {
+    if (value instanceof GenericArray) {
+      return Schema.createArray( getElementSchema((GenericArray)value) );
+    } else {
+      return getSchema( getClass(value) );
+    }
+  }
+
+  public static Type getType(Object value) {
+    return getType( getClass(value) );
+  }
+
+  public static Type getType(Class<?> clazz) {
+    if (clazz.equals(Utf8.class)) {
+      return Type.STRING;
+    } else if (clazz.equals(Boolean.class) || clazz.equals(boolean.class)) {
+      return Type.BOOLEAN;
+    } else if (clazz.equals(ByteBuffer.class)) {
+      return Type.BYTES;
+    } else if (clazz.equals(Double.class) || clazz.equals(double.class)) {
+      return Type.DOUBLE;
+    } else if (clazz.equals(Float.class) || clazz.equals(float.class)) {
+      return Type.FLOAT;
+    } else if (clazz.equals(Integer.class) || clazz.equals(int.class)) {
+      return Type.INT;
+    } else if (clazz.equals(Long.class) || clazz.equals(long.class)) {
+      return Type.LONG;
+    } else if (clazz.equals(ListGenericArray.class)) {
+      return Type.ARRAY;
+    } else if (clazz.equals(StatefulHashMap.class)) {
+      return Type.MAP;
+    } else if (clazz.equals(Persistent.class)) {
+      return Type.RECORD;
+    } else if (clazz.getSuperclass().equals(SpecificFixed.class)) {
+      return Type.FIXED;
+    } else {
+      return null;
+    }
+  }
+
+  public static Class getClass(Type type) {
+    if (type == Type.STRING) {
+      return Utf8.class;
+    } else if (type == Type.BOOLEAN) {
+      return Boolean.class;
+    } else if (type == Type.BYTES) {
+      return ByteBuffer.class;
+    } else if (type == Type.DOUBLE) {
+      return Double.class;
+    } else if (type == Type.FLOAT) {
+      return Float.class;
+    } else if (type == Type.INT) {
+      return Integer.class;
+    } else if (type == Type.LONG) {
+      return Long.class;
+    } else if (type == Type.ARRAY) {
+      return ListGenericArray.class;
+    } else if (type == Type.MAP) {
+      return StatefulHashMap.class;
+    } else if (type == Type.RECORD) {
+      return Persistent.class;
+    } else if (type == Type.FIXED) {
+      // return SpecificFixed.class;
+      return null;
+    } else {
+      return null;
+    }
+  }
+
+  public static Schema getSchema(Class clazz) {
+    Type type = getType(clazz);
+    if (type == null) {
+      return null;
+    } else if (type == Type.FIXED) {
+      int size = getFixedSize(clazz);
+      String name = clazz.getName();
+      String space = null;
+      int n = name.lastIndexOf(".");
+      if (n >= 0) {
+        space = name.substring(0, n);
+        name = name.substring(n + 1);
+      } else {
+        space = null;
+      }
+      String doc = null; // ?
+      // LOG.info(Schema.createFixed(name, doc, space, size).toString());
+      return Schema.createFixed(name, doc, space, size);
+    } else if (type == Type.ARRAY) {
+      Object obj = null;
+      try {
+        obj = clazz.newInstance();
+      } catch (InstantiationException e) {
+        LOG.warn(e.toString());
+        return null;
+      } catch (IllegalAccessException e) {
+        LOG.warn(e.toString());
+        return null;
+      }
+      return getSchema(obj);
+    } else if (type == Type.MAP) {
+      // TODO
+      // return Schema.createMap(...);
+      return null;
+    } else if (type == Type.RECORD) {
+      // TODO
+      // return Schema.createRecord(...);
+      return null;
+    } else {
+      return Schema.create(type);
+    }
+  }
+
+  public static Class getClass(Schema schema) {
+    Type type = schema.getType();
+    if (type == null) {
+      return null;
+    } else if (type == Type.FIXED) {
+      try {
+        return Class.forName( schema.getFullName() );
+      } catch (ClassNotFoundException e) {
+        LOG.warn(e.toString() + " : " + schema);
+        return null;
+      }
+    } else {
+      return getClass(type);
+    }
+  }
+
+  public static int getFixedSize(Type type) {
+    if (type == Type.BOOLEAN) {
+      return 1;
+    } else if (type == Type.DOUBLE) {
+      return 8;
+    } else if (type == Type.FLOAT) {
+      return 4;
+    } else if (type == Type.INT) {
+      return 4;
+    } else if (type == Type.LONG) {
+      return 8;
+    } else {
+      return -1;
+    }
+  }
+
+  public static int getFixedSize(Schema schema) {
+    Type type = schema.getType();
+    if (type == Type.FIXED) {
+      return schema.getFixedSize();
+    } else {
+      return getFixedSize(type);
+    }
+  }
+
+  public static int getFixedSize(Class clazz) {
+    Type type = getType(clazz);
+    if (type == Type.FIXED) {
+      try {
+        return ((SpecificFixed)clazz.newInstance()).bytes().length;
+      } catch (InstantiationException e) {
+        LOG.warn(e.toString());
+        return -1;
+      } catch (IllegalAccessException e) {
+        LOG.warn(e.toString());
+        return -1;
+      }
+    } else {
+      return getFixedSize(type);
+    }
+  }
+
+  public static Schema getElementSchema(GenericArray array) {
+    Schema schema = array.getSchema();
+    return (schema.getType() == Type.ARRAY) ? schema.getElementType() : schema;
+  }
+
+  public static Type getElementType(ListGenericArray array) {
+    return getElementSchema(array).getType();
+  }
+
+  /*
+  public static Schema getValueSchema(StatefulHashMap map) {
+    return map.getSchema().getValueType();
+  }
+
+  public static Type getValueType(StatefulHashMap map) {
+    return getValueSchema(map).getType();
+  }
+  */
+
+}
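The FIXED branch of `getSchema(Class)` splits a fully qualified class name into a namespace and a simple name around the last dot, with a null namespace for classes in the default package. A standalone sketch of that split (the helper class is hypothetical, not part of the Gora API):

```java
// Illustrative helper: splits "org.example.MyFixed" into
// namespace "org.example" and simple name "MyFixed".
public class NameSplitSketch {

  // Returns {namespace, simpleName}; namespace is null when there is no package.
  public static String[] split(String fqcn) {
    int n = fqcn.lastIndexOf('.');
    if (n >= 0) {
      return new String[] { fqcn.substring(0, n), fqcn.substring(n + 1) };
    }
    return new String[] { null, fqcn };
  }

  public static void main(String[] args) {
    String[] parts = split("org.example.MyFixed");
    System.out.println(parts[0] + " / " + parts[1]);
  }
}
```

Note the guard direction: the substring calls are only valid when a dot was actually found, which is why the condition must test `n >= 0` rather than the inverse.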
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/Utf8Serializer.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/Utf8Serializer.java
new file mode 100644
index 0000000..19e3668
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/serializers/Utf8Serializer.java
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.serializers;
+
+import java.nio.ByteBuffer;
+
+import me.prettyprint.cassandra.serializers.AbstractSerializer;
+import me.prettyprint.cassandra.serializers.StringSerializer;
+import me.prettyprint.hector.api.ddl.ComparatorType;
+import static me.prettyprint.hector.api.ddl.ComparatorType.UTF8TYPE;
+
+import org.apache.avro.util.Utf8;
+
+/**
+ * A Utf8Serializer translates byte[] to and from an Avro Utf8 object.
+ */
+public final class Utf8Serializer extends AbstractSerializer<Utf8> {
+
+  private static final Utf8Serializer instance = new Utf8Serializer();
+
+  public static Utf8Serializer get() {
+    return instance;
+  }
+
+  @Override
+  public ByteBuffer toByteBuffer(Utf8 obj) {
+    if (obj == null) {
+      return null;
+    }
+    return StringSerializer.get().toByteBuffer(obj.toString());
+  }
+
+  @Override
+  public Utf8 fromByteBuffer(ByteBuffer byteBuffer) {
+    if (byteBuffer == null) {
+      return null;
+    }
+    return new Utf8(StringSerializer.get().fromByteBuffer(byteBuffer));
+  }
+
+  @Override
+  public ComparatorType getComparatorType() {
+    return UTF8TYPE;
+  }
+
+}
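Utf8Serializer simply delegates to string serialization, so its round trip is equivalent to UTF-8 encoding and decoding through a ByteBuffer. A minimal JDK-only sketch of that round trip (no Hector types; the class name is illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative equivalent of Utf8Serializer's delegation to StringSerializer.
public class Utf8RoundTripSketch {

  public static ByteBuffer toByteBuffer(String s) {
    return s == null ? null : ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8));
  }

  public static String fromByteBuffer(ByteBuffer buf) {
    if (buf == null) {
      return null;
    }
    byte[] bytes = new byte[buf.remaining()];
    buf.duplicate().get(bytes); // duplicate so the caller's position is untouched
    return new String(bytes, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    System.out.println(fromByteBuffer(toByteBuffer("héllo")));
  }
}
```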
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraClient.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraClient.java
new file mode 100644
index 0000000..65b6be0
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraClient.java
@@ -0,0 +1,416 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.store;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
+import me.prettyprint.cassandra.serializers.ByteBufferSerializer;
+import me.prettyprint.cassandra.serializers.IntegerSerializer;
+import me.prettyprint.cassandra.serializers.StringSerializer;
+import me.prettyprint.cassandra.service.CassandraHostConfigurator;
+import me.prettyprint.hector.api.Cluster;
+import me.prettyprint.hector.api.Keyspace;
+import me.prettyprint.hector.api.beans.OrderedRows;
+import me.prettyprint.hector.api.beans.OrderedSuperRows;
+import me.prettyprint.hector.api.beans.Row;
+import me.prettyprint.hector.api.beans.SuperRow;
+import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
+import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
+import me.prettyprint.hector.api.factory.HFactory;
+import me.prettyprint.hector.api.mutation.Mutator;
+import me.prettyprint.hector.api.query.QueryResult;
+import me.prettyprint.hector.api.query.RangeSlicesQuery;
+import me.prettyprint.hector.api.query.RangeSuperSlicesQuery;
+import me.prettyprint.hector.api.HConsistencyLevel;
+import me.prettyprint.hector.api.Serializer;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.cassandra.query.CassandraQuery;
+import org.apache.gora.cassandra.serializers.GenericArraySerializer;
+import org.apache.gora.cassandra.serializers.GoraSerializerTypeInferer;
+import org.apache.gora.cassandra.serializers.TypeUtils;
+import org.apache.gora.mapreduce.GoraRecordReader;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.State;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.query.Query;
+import org.apache.gora.util.ByteUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class CassandraClient<K, T extends Persistent> {
+  public static final Logger LOG = LoggerFactory.getLogger(CassandraClient.class);
+  
+  private Cluster cluster;
+  private Keyspace keyspace;
+  private Mutator<K> mutator;
+  private Class<K> keyClass;
+  private Class<T> persistentClass;
+  
+  private CassandraMapping cassandraMapping = null;
+
+  private Serializer<K> keySerializer;
+  
+  public void initialize(Class<K> keyClass, Class<T> persistentClass) throws Exception {
+    this.keyClass = keyClass;
+
+    // get cassandra mapping with persistent class
+    this.persistentClass = persistentClass;
+    this.cassandraMapping = CassandraMappingManager.getManager().get(persistentClass);
+
+    this.cluster = HFactory.getOrCreateCluster(this.cassandraMapping.getClusterName(), new CassandraHostConfigurator(this.cassandraMapping.getHostName()));
+    
+    // add keyspace to cluster
+    checkKeyspace();
+    
+    // Just create a Keyspace object on the client side, corresponding to an already existing keyspace with already created column families.
+    this.keyspace = HFactory.createKeyspace(this.cassandraMapping.getKeyspaceName(), this.cluster);
+    
+    this.keySerializer = GoraSerializerTypeInferer.getSerializer(keyClass);
+    this.mutator = HFactory.createMutator(this.keyspace, this.keySerializer);
+  }
+
+  /**
+   * Check if keyspace already exists.
+   */
+  public boolean keyspaceExists() {
+    KeyspaceDefinition keyspaceDefinition = this.cluster.describeKeyspace(this.cassandraMapping.getKeyspaceName());
+    return (keyspaceDefinition != null);
+  }
+  
+  /**
+   * Check if keyspace already exists. If not, create it.
+   * This method also applies Hector's {@link ConfigurableConsistencyLevel}
+   * logic, set by passing a ConfigurableConsistencyLevel object when the
+   * Keyspace is created. The consistency level is currently ONE, which
+   * lets an operation complete as soon as one replica has responded.
+   */
+  public void checkKeyspace() {
+    // "describe keyspace <keyspaceName>;" query
+    KeyspaceDefinition keyspaceDefinition = this.cluster.describeKeyspace(this.cassandraMapping.getKeyspaceName());
+    if (keyspaceDefinition == null) {
+      List<ColumnFamilyDefinition> columnFamilyDefinitions = this.cassandraMapping.getColumnFamilyDefinitions();      
+      keyspaceDefinition = HFactory.createKeyspaceDefinition(this.cassandraMapping.getKeyspaceName(), "org.apache.cassandra.locator.SimpleStrategy", 1, columnFamilyDefinitions);      
+      this.cluster.addKeyspace(keyspaceDefinition, true);
+      // LOG.info("Keyspace '" + this.cassandraMapping.getKeyspaceName() + "' in cluster '" + this.cassandraMapping.getClusterName() + "' was created on host '" + this.cassandraMapping.getHostName() + "'");
+      
+      // Create a customized Consistency Level
+      ConfigurableConsistencyLevel configurableConsistencyLevel = new ConfigurableConsistencyLevel();
+      Map<String, HConsistencyLevel> clmap = new HashMap<String, HConsistencyLevel>();
+
+      // Define CL.ONE for ColumnFamily "ColumnFamily"
+      clmap.put("ColumnFamily", HConsistencyLevel.ONE);
+
+      // Here we use CL.ONE for both reads and writes, but different CLs can be used if needed.
+      configurableConsistencyLevel.setReadCfConsistencyLevels(clmap);
+      configurableConsistencyLevel.setWriteCfConsistencyLevels(clmap);
+
+      // Then let the keyspace know
+      HFactory.createKeyspace(this.cassandraMapping.getKeyspaceName(), this.cluster, configurableConsistencyLevel);
+
+      keyspaceDefinition = null;
+    }
+
+  }
+  
+  /**
+   * Drop keyspace.
+   */
+  public void dropKeyspace() {
+    // "drop keyspace <keyspaceName>;" query
+    this.cluster.dropKeyspace(this.cassandraMapping.getKeyspaceName());
+  }
+
+  /**
+   * Insert a field in a column.
+   * @param key the row key
+   * @param fieldName the field name
+   * @param value the field value.
+   */
+  public void addColumn(K key, String fieldName, Object value) {
+    if (value == null) {
+      return;
+    }
+
+    ByteBuffer byteBuffer = toByteBuffer(value);
+    
+    String columnFamily = this.cassandraMapping.getFamily(fieldName);
+    String columnName = this.cassandraMapping.getColumn(fieldName);
+    if (columnName == null) {
+      LOG.warn("Column name is null for field=" + fieldName + " with value=" + value.toString());
+      return;
+    }
+    
+    HectorUtils.insertColumn(mutator, key, columnFamily, columnName, byteBuffer);
+  }
+
+  /**
+   * Insert a member in a super column. This is used for map and record Avro types.
+   * @param key the row key
+   * @param fieldName the field name
+   * @param columnName the column name (the member name, or the index of array)
+   * @param value the member value
+   */
+  @SuppressWarnings("unchecked")
+  public void addSubColumn(K key, String fieldName, ByteBuffer columnName, Object value) {
+    if (value == null) {
+      return;
+    }
+
+    ByteBuffer byteBuffer = toByteBuffer(value);
+    
+    String columnFamily = this.cassandraMapping.getFamily(fieldName);
+    String superColumnName = this.cassandraMapping.getColumn(fieldName);
+    
+    HectorUtils.insertSubColumn(mutator, key, columnFamily, superColumnName, columnName, byteBuffer);
+  }
+
+  public void addSubColumn(K key, String fieldName, String columnName, Object value) {
+    addSubColumn(key, fieldName, StringSerializer.get().toByteBuffer(columnName), value);
+  }
+
+  public void addSubColumn(K key, String fieldName, Integer columnName, Object value) {
+    addSubColumn(key, fieldName, IntegerSerializer.get().toByteBuffer(columnName), value);
+  }
+
+
+  /**
+   * Delete a member in a super column. This is used for map and record Avro types.
+   * @param key the row key
+   * @param fieldName the field name
+   * @param columnName the column name (the member name, or the index of array)
+   */
+  @SuppressWarnings("unchecked")
+  public void deleteSubColumn(K key, String fieldName, ByteBuffer columnName) {
+
+    String columnFamily = this.cassandraMapping.getFamily(fieldName);
+    String superColumnName = this.cassandraMapping.getColumn(fieldName);
+    
+    HectorUtils.deleteSubColumn(mutator, key, columnFamily, superColumnName, columnName);
+  }
+
+  public void deleteSubColumn(K key, String fieldName, String columnName) {
+    deleteSubColumn(key, fieldName, StringSerializer.get().toByteBuffer(columnName));
+  }
+
+
+  @SuppressWarnings("unchecked")
+  public void addGenericArray(K key, String fieldName, GenericArray array) {
+    if (isSuper( cassandraMapping.getFamily(fieldName) )) {
+      int i = 0;
+      for (Object itemValue: array) {
+
+        // TODO: hack, do not store empty arrays
+        if (itemValue instanceof GenericArray<?>) {
+          if (((GenericArray)itemValue).size() == 0) {
+            continue;
+          }
+        } else if (itemValue instanceof StatefulHashMap<?,?>) {
+          if (((StatefulHashMap)itemValue).size() == 0) {
+            continue;
+          }
+        }
+
+        addSubColumn(key, fieldName, i++, itemValue);
+      }
+    }
+    else {
+      addColumn(key, fieldName, array);
+    }
+  }
+
+  @SuppressWarnings("unchecked")
+  public void addStatefulHashMap(K key, String fieldName, StatefulHashMap<Utf8,Object> map) {
+    if (isSuper( cassandraMapping.getFamily(fieldName) )) {
+      int i = 0;
+      for (Utf8 mapKey: map.keySet()) {
+        if (map.getState(mapKey) == State.DELETED) {
+          deleteSubColumn(key, fieldName, mapKey.toString());
+          continue;
+        }
+
+        // TODO: hack, do not store empty arrays
+        Object mapValue = map.get(mapKey);
+        if (mapValue instanceof GenericArray<?>) {
+          if (((GenericArray)mapValue).size() == 0) {
+            continue;
+          }
+        } else if (mapValue instanceof StatefulHashMap<?,?>) {
+          if (((StatefulHashMap)mapValue).size() == 0) {
+            continue;
+          }
+        }
+
+        addSubColumn(key, fieldName, mapKey.toString(), mapValue);
+      }
+    }
+    else {
+      addColumn(key, fieldName, map);
+    }
+  }
+
+  /**
+   * Serialize value to ByteBuffer.
+   * @param value the member value
+   * @return ByteBuffer object
+   */
+  @SuppressWarnings("unchecked")
+  public ByteBuffer toByteBuffer(Object value) {
+    ByteBuffer byteBuffer = null;
+    Serializer serializer = GoraSerializerTypeInferer.getSerializer(value);
+    if (serializer == null) {
+      LOG.info("Serializer not found for: " + value.toString());
+    }
+    else {
+      byteBuffer = serializer.toByteBuffer(value);
+    }
+
+    if (byteBuffer == null) {
+      LOG.info("value class=" + value.getClass().getName() + " value=" + value + " -> null");
+    }
+    
+    return byteBuffer;
+  }
+
+  /**
+   * Select a family column in the keyspace.
+   * @param cassandraQuery a wrapper of the query
+   * @param family the family name to be queried
+   * @return a list of family rows
+   */
+  public List<Row<K, ByteBuffer, ByteBuffer>> execute(CassandraQuery<K, T> cassandraQuery, String family) {
+    
+    String[] columnNames = cassandraQuery.getColumns(family);
+    ByteBuffer[] columnNameByteBuffers = new ByteBuffer[columnNames.length];
+    for (int i = 0; i < columnNames.length; i++) {
+      columnNameByteBuffers[i] = StringSerializer.get().toByteBuffer(columnNames[i]);
+    }
+    Query<K, T> query = cassandraQuery.getQuery();
+    int limit = (int) query.getLimit();
+    if (limit < 1) {
+      limit = Integer.MAX_VALUE;
+    }
+    K startKey = query.getStartKey();
+    K endKey = query.getEndKey();
+    
+    RangeSlicesQuery<K, ByteBuffer, ByteBuffer> rangeSlicesQuery = HFactory.createRangeSlicesQuery(this.keyspace, this.keySerializer, ByteBufferSerializer.get(), ByteBufferSerializer.get());
+    rangeSlicesQuery.setColumnFamily(family);
+    rangeSlicesQuery.setKeys(startKey, endKey);
+    rangeSlicesQuery.setRange(ByteBuffer.wrap(new byte[0]), ByteBuffer.wrap(new byte[0]), false, GoraRecordReader.BUFFER_LIMIT_READ_VALUE);
+    rangeSlicesQuery.setRowCount(limit);
+    rangeSlicesQuery.setColumnNames(columnNameByteBuffers);
+    
+    QueryResult<OrderedRows<K, ByteBuffer, ByteBuffer>> queryResult = rangeSlicesQuery.execute();
+    OrderedRows<K, ByteBuffer, ByteBuffer> orderedRows = queryResult.get();
+    
+    
+    return orderedRows.getList();
+  }
+
+  /**
+   * Select the families that contain at least one column mapped to a query field.
+   * @param query indicates the columns to select
+   * @return a map whose keys are the family names and whose values are the corresponding column names required to retrieve all the query fields.
+   */
+  public Map<String, List<String>> getFamilyMap(Query<K, T> query) {
+    Map<String, List<String>> map = new HashMap<String, List<String>>();
+    for (String field: query.getFields()) {
+      String family = this.cassandraMapping.getFamily(field);
+      String column = this.cassandraMapping.getColumn(field);
+      
+      // check if the family value was already initialized 
+      List<String> list = map.get(family);
+      if (list == null) {
+        list = new ArrayList<String>();
+        map.put(family, list);
+      }
+      
+      if (column != null) {
+        list.add(column);
+      }
+      
+    }
+    
+    return map;
+  }
+  
+  /**
+   * Select the field names according to the column names, whose format is fully qualified: "family:column".
+   * @param query the query whose fields are mapped to columns
+   * @return a map whose keys are the fully qualified column names and whose values are the query fields
+   */
+  public Map<String, String> getReverseMap(Query<K, T> query) {
+    Map<String, String> map = new HashMap<String, String>();
+    for (String field: query.getFields()) {
+      String family = this.cassandraMapping.getFamily(field);
+      String column = this.cassandraMapping.getColumn(field);
+      
+      map.put(family + ":" + column, field);
+    }
+    
+    return map;
+     
+  }
+
+  public boolean isSuper(String family) {
+    return this.cassandraMapping.isSuper(family);
+  }
+
+  public List<SuperRow<K, String, ByteBuffer, ByteBuffer>> executeSuper(CassandraQuery<K, T> cassandraQuery, String family) {
+    String[] columnNames = cassandraQuery.getColumns(family);
+    Query<K, T> query = cassandraQuery.getQuery();
+    int limit = (int) query.getLimit();
+    if (limit < 1) {
+      limit = Integer.MAX_VALUE;
+    }
+    K startKey = query.getStartKey();
+    K endKey = query.getEndKey();
+    
+    RangeSuperSlicesQuery<K, String, ByteBuffer, ByteBuffer> rangeSuperSlicesQuery = HFactory.createRangeSuperSlicesQuery(this.keyspace, this.keySerializer, StringSerializer.get(), ByteBufferSerializer.get(), ByteBufferSerializer.get());
+    rangeSuperSlicesQuery.setColumnFamily(family);    
+    rangeSuperSlicesQuery.setKeys(startKey, endKey);
+    rangeSuperSlicesQuery.setRange("", "", false, GoraRecordReader.BUFFER_LIMIT_READ_VALUE);
+    rangeSuperSlicesQuery.setRowCount(limit);
+    rangeSuperSlicesQuery.setColumnNames(columnNames);
+    
+    
+    QueryResult<OrderedSuperRows<K, String, ByteBuffer, ByteBuffer>> queryResult = rangeSuperSlicesQuery.execute();
+    OrderedSuperRows<K, String, ByteBuffer, ByteBuffer> orderedRows = queryResult.get();
+    return orderedRows.getList();
+
+
+  }
+
+  /**
+   * Obtain the schema/keyspace name.
+   * @return the keyspace name
+   */
+  public String getKeyspaceName() {
+    return this.cassandraMapping.getKeyspaceName();
+  }
+}
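`getFamilyMap` above is a straightforward group-by from query fields to their mapped column families. The grouping can be sketched standalone with a plain `Map<String,String>` standing in for the `CassandraMapping.getFamily(field)` lookup (the field and family names below are made up for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of CassandraClient.getFamilyMap's grouping logic.
public class FamilyMapSketch {

  // fieldToFamily stands in for CassandraMapping.getFamily(field).
  public static Map<String, List<String>> group(String[] fields,
                                                Map<String, String> fieldToFamily) {
    Map<String, List<String>> result = new HashMap<>();
    for (String field : fields) {
      String family = fieldToFamily.get(field);
      // Initialize the family's list on first sight, then accumulate columns.
      result.computeIfAbsent(family, f -> new ArrayList<>()).add(field);
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, String> mapping = new HashMap<>();
    mapping.put("url", "f1");
    mapping.put("content", "f1");
    mapping.put("outlinks", "sc");
    System.out.println(group(new String[] {"url", "content", "outlinks"}, mapping));
  }
}
```

The real method additionally tolerates fields with no mapped column (it still registers the family with an empty column list), which matters for super column families queried whole.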
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraMapping.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraMapping.java
new file mode 100644
index 0000000..ab7c206
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraMapping.java
@@ -0,0 +1,210 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.store;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import me.prettyprint.cassandra.model.BasicColumnFamilyDefinition;
+import me.prettyprint.cassandra.service.ThriftCfDef;
+import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
+import me.prettyprint.hector.api.ddl.ColumnType;
+import me.prettyprint.hector.api.ddl.ComparatorType;
+
+import org.jdom.Element;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class CassandraMapping {
+  
+  public static final Logger LOG = LoggerFactory.getLogger(CassandraMapping.class);
+  
+  private static final String MAPPING_FILE = "gora-cassandra-mapping.xml";
+  private static final String KEYSPACE_ELEMENT = "keyspace";
+  private static final String NAME_ATTRIBUTE = "name";
+  private static final String MAPPING_ELEMENT = "class";
+  private static final String COLUMN_ATTRIBUTE = "qualifier";
+  private static final String FAMILY_ATTRIBUTE = "family";
+  private static final String SUPER_ATTRIBUTE = "type";
+  private static final String CLUSTER_ATTRIBUTE = "cluster";
+  private static final String HOST_ATTRIBUTE = "host";
+
+
+  private String hostName;
+  private String clusterName;
+  private String keyspaceName;
+  
+  
+  /**
+   * List of the super column families.
+   */
+  private List<String> superFamilies = new ArrayList<String>();
+
+  /**
+   * Look up the column family associated to the Avro field.
+   */
+  private Map<String, String> familyMap = new HashMap<String, String>();
+  
+  /**
+   * Look up the column associated to the Avro field.
+   */
+  private Map<String, String> columnMap = new HashMap<String, String>();
+
+  /**
+   * Look up the column family from its name.
+   */
+  private Map<String, BasicColumnFamilyDefinition> columnFamilyDefinitions = 
+		  new HashMap<String, BasicColumnFamilyDefinition>();
+
+  
+  /**
+   * Simply gets the Cassandra host name.
+   * @return hostName
+   */
+  public String getHostName() {
+    return this.hostName;
+  }
+  
+  /**
+   * Simply gets the Cassandra cluster (the machines (nodes) 
+   * in a logical Cassandra instance) name.
+   * Clusters can contain multiple keyspaces. 
+   * @return clusterName
+   */
+  public String getClusterName() {
+    return this.clusterName;
+  }
+
+  /**
+   * Simply gets the Cassandra namespace for ColumnFamilies, typically one per application.
+   * @return keyspaceName
+   */
+  public String getKeyspaceName() {
+    return this.keyspaceName;
+  }
+
+  /**
+   * Builds a CassandraMapping from the keyspace and class mapping elements
+   * read out of the 'MAPPING_FILE'.
+   */
+  public CassandraMapping(Element keyspace, Element mapping) {
+    if (keyspace == null) {
+      LOG.error("Cassandra Keyspace element is missing from the mapping!");
+      throw new IllegalArgumentException("Cassandra Keyspace element is missing from the mapping!");
+    }
+    this.keyspaceName = keyspace.getAttributeValue(NAME_ATTRIBUTE);
+    if (this.keyspaceName == null) {
+      LOG.warn("Error locating Cassandra Keyspace name attribute!");
+    }
+    this.clusterName = keyspace.getAttributeValue(CLUSTER_ATTRIBUTE);
+    if (this.clusterName == null) {
+      LOG.warn("Error locating Cassandra Keyspace cluster attribute!");
+    }
+    this.hostName = keyspace.getAttributeValue(HOST_ATTRIBUTE);
+    if (this.hostName == null) {
+      LOG.warn("Error locating Cassandra Keyspace host attribute!");
+    }
+    
+    // load column family definitions
+    List<Element> elements = keyspace.getChildren();
+    for (Element element: elements) {
+      BasicColumnFamilyDefinition cfDef = new BasicColumnFamilyDefinition();
+      
+      String familyName = element.getAttributeValue(NAME_ATTRIBUTE);
+      if (familyName == null) {
+        LOG.warn("Error locating column family name attribute!");
+      }
+      String superAttribute = element.getAttributeValue(SUPER_ATTRIBUTE);
+      if (superAttribute != null) {
+        this.superFamilies.add(familyName);
+        cfDef.setColumnType(ColumnType.SUPER);
+        cfDef.setSubComparatorType(ComparatorType.BYTESTYPE);
+      }
+      
+      cfDef.setKeyspaceName(this.keyspaceName);
+      cfDef.setName(familyName);
+      cfDef.setComparatorType(ComparatorType.BYTESTYPE);
+      cfDef.setDefaultValidationClass(ComparatorType.BYTESTYPE.getClassName());
+      
+      this.columnFamilyDefinitions.put(familyName, cfDef);
+
+    }
+    
+    // load column definitions    
+    elements = mapping.getChildren();
+    for (Element element: elements) {
+      String fieldName = element.getAttributeValue(NAME_ATTRIBUTE);
+      String familyName = element.getAttributeValue(FAMILY_ATTRIBUTE);
+      String columnName = element.getAttributeValue(COLUMN_ATTRIBUTE);
+      BasicColumnFamilyDefinition columnFamilyDefinition = this.columnFamilyDefinitions.get(familyName);
+      if (columnFamilyDefinition == null) {
+        LOG.warn("Family " + familyName + " was not declared in the keyspace.");
+      }
+      
+      this.familyMap.put(fieldName, familyName);
+      this.columnMap.put(fieldName, columnName);
+      
+    }    
+  }
+
+  public String getFamily(String name) {
+    return this.familyMap.get(name);
+  }
+
+  public String getColumn(String name) {
+    return this.columnMap.get(name);
+  }
+
+  /**
+   * Read the family's super attribute.
+   * @param family the family name
+   * @return true if the family is a super column family
+   */
+  public boolean isSuper(String family) {
+    return this.superFamilies.contains(family);
+  }
+
+  public List<ColumnFamilyDefinition> getColumnFamilyDefinitions() {
+    List<ColumnFamilyDefinition> list = new ArrayList<ColumnFamilyDefinition>();
+    for (BasicColumnFamilyDefinition columnFamilyDefinition: this.columnFamilyDefinitions.values()) {
+      list.add(new ThriftCfDef(columnFamilyDefinition));
+    }
+    return list;
+  }
+
+}
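Pieced together from the attribute constants above ("name", "cluster", "host", "family", "qualifier", "type"), a mapping file this parser would accept might look like the fragment below. The root element name, the `family` child-element name, and all values are illustrative assumptions; only the attribute names are taken from the code.

```xml
<gora-orm>
  <keyspace name="Example" cluster="Gora Cassandra Test Cluster" host="localhost">
    <family name="p"/>
    <family name="sc" type="super"/>
  </keyspace>
  <class name="org.example.Employee" keyspace="Example">
    <field name="name" family="p" qualifier="nm"/>
    <field name="salary" family="p" qualifier="sl"/>
  </class>
</gora-orm>
```

A `type` attribute on a family marks it as a super column family; each `class` element is keyed by its fully qualified class name and refers back to a keyspace through its `keyspace` attribute.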
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraMappingManager.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraMappingManager.java
new file mode 100644
index 0000000..ee85a66
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraMappingManager.java
@@ -0,0 +1,136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.store;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.jdom.Document;
+import org.jdom.Element;
+import org.jdom.JDOMException;
+import org.jdom.input.SAXBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class CassandraMappingManager {
+  
+  public static final Logger LOG = LoggerFactory.getLogger(CassandraMappingManager.class);
+  
+  private static final String MAPPING_FILE = "gora-cassandra-mapping.xml";
+  private static final String KEYSPACE_ELEMENT = "keyspace";
+  private static final String NAME_ATTRIBUTE = "name";
+  private static final String MAPPING_ELEMENT = "class";
+  private static final String COLUMN_ATTRIBUTE = "qualifier";
+  private static final String FAMILY_ATTRIBUTE = "family";
+  private static final String SUPER_ATTRIBUTE = "type";
+  private static final String CLUSTER_ATTRIBUTE = "cluster";
+  private static final String HOST_ATTRIBUTE = "host";
+
+  // singleton
+  private static CassandraMappingManager manager = new CassandraMappingManager();
+
+  public static CassandraMappingManager getManager() {
+    return manager;
+  }
+
+  //
+  private Map<String, Element> keyspaceMap = null;
+  private Map<String, Element>  mappingMap = null;
+
+  private CassandraMappingManager() {
+    keyspaceMap = new HashMap<String, Element>();
+    mappingMap  = new HashMap<String, Element>();
+    try {
+      loadConfiguration();
+    }
+    catch (JDOMException e) {
+      LOG.error(e.toString());
+    }
+    catch (IOException e) {
+      LOG.error(e.toString());
+    }
+  }
+
+  public CassandraMapping get(Class<?> persistentClass) {
+    String className = persistentClass.getName();
+    Element mappingElement = mappingMap.get(className);
+    if (mappingElement == null) {
+      LOG.error("Mapping element could not be found for class " + className);
+      return null;
+    }
+    String keyspaceName = mappingElement.getAttributeValue(KEYSPACE_ELEMENT);
+    Element keyspaceElement = keyspaceMap.get(keyspaceName);
+    return new CassandraMapping(keyspaceElement, mappingElement);
+  }
+
+  /**
+   * Loads the Cassandra keyspace and class mappings from the 'MAPPING_FILE'.
+   * 
+   * @throws JDOMException
+   * @throws IOException
+   */
+  @SuppressWarnings("unchecked")
+  public void loadConfiguration() throws JDOMException, IOException {
+    SAXBuilder saxBuilder = new SAXBuilder();
+    Document document = saxBuilder.build(getClass().getClassLoader().getResourceAsStream(MAPPING_FILE));
+    if (document == null) {
+      throw new IOException("Mapping file '" + MAPPING_FILE + "' could not be found!");
+    }
+    Element root = document.getRootElement();
+    
+    List<Element> keyspaces = root.getChildren(KEYSPACE_ELEMENT);
+    if (keyspaces == null || keyspaces.size() == 0) {
+      LOG.warn("Error locating Cassandra Keyspace element!");
+    }
+    else {
+      LOG.info("Located Cassandra Keyspace: '" + KEYSPACE_ELEMENT + "'");
+      for (Element keyspace : keyspaces) {
+        String keyspaceName = keyspace.getAttributeValue(NAME_ATTRIBUTE);
+        if (keyspaceName == null) {
+          LOG.warn("Error locating Cassandra Keyspace name attribute!");
+          continue;
+        }
+        LOG.info("Located Cassandra Keyspace name: '" + keyspaceName + "'");
+        keyspaceMap.put(keyspaceName, keyspace);
+      }
+    }
+      
+    // load column definitions    
+    List<Element> mappings = root.getChildren(MAPPING_ELEMENT);
+    if (mappings == null || mappings.size() == 0) {
+      LOG.warn("Error locating Cassandra Mapping element!");
+    }
+    else {
+      LOG.info("Located Cassandra Mapping: '" + MAPPING_ELEMENT + "'");
+      for (Element mapping : mappings) {
+        String className = mapping.getAttributeValue(NAME_ATTRIBUTE);
+        if (className == null) {
+          LOG.warn("Error locating Cassandra Mapping class name attribute!");
+          continue;
+        }
+        LOG.info("Located Cassandra Mapping class name: '" + className + "'");
+        mappingMap.put(className, mapping);
+      }
+    }
+  }
+}
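`CassandraMappingManager` is an eagerly-initialized singleton that loads its configuration from a classpath resource in the private constructor. A self-contained sketch of that pattern, under the assumption of a hypothetical resource name `example-mapping.xml` (none of these identifiers are Gora's):

```java
import java.io.InputStream;

public class MappingManagerSketch {
    // Eager singleton, mirroring CassandraMappingManager: the instance is
    // created at class-load time, so getManager() needs no synchronization.
    private static final MappingManagerSketch manager = new MappingManagerSketch();

    public static MappingManagerSketch getManager() {
        return manager;
    }

    private String mappingXml = "";

    private MappingManagerSketch() {
        // Load a mapping file from the classpath, as loadConfiguration() does;
        // errors are logged rather than propagated, matching the manager.
        try (InputStream in = getClass().getClassLoader()
                .getResourceAsStream("example-mapping.xml")) {
            if (in == null) {
                System.err.println("mapping file not found on classpath");
            } else {
                mappingXml = new String(in.readAllBytes());
            }
        } catch (Exception e) {
            System.err.println(e.toString());
        }
    }

    public static void main(String[] args) {
        // every call returns the same instance
        System.out.println(MappingManagerSketch.getManager() == MappingManagerSketch.getManager());
    }
}
```

One consequence of this design: any exception thrown while parsing is only logged, so a later `get(...)` call can observe an empty map; the null-checks in the manager exist to soften that failure mode.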
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraStore.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraStore.java
new file mode 100644
index 0000000..c242f01
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/CassandraStore.java
@@ -0,0 +1,398 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.store;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+
+import me.prettyprint.cassandra.serializers.IntegerSerializer;
+import me.prettyprint.cassandra.serializers.StringSerializer;
+import me.prettyprint.hector.api.beans.ColumnSlice;
+import me.prettyprint.hector.api.beans.HColumn;
+import me.prettyprint.hector.api.beans.HSuperColumn;
+import me.prettyprint.hector.api.beans.Row;
+import me.prettyprint.hector.api.beans.SuperRow;
+import me.prettyprint.hector.api.beans.SuperSlice;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.cassandra.query.CassandraQuery;
+import org.apache.gora.cassandra.query.CassandraResult;
+import org.apache.gora.cassandra.query.CassandraResultSet;
+import org.apache.gora.cassandra.query.CassandraRow;
+import org.apache.gora.cassandra.query.CassandraSubColumn;
+import org.apache.gora.cassandra.query.CassandraSuperColumn;
+import org.apache.gora.persistency.ListGenericArray;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.impl.PersistentBase;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.query.impl.PartitionQueryImpl;
+import org.apache.gora.store.impl.DataStoreBase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class CassandraStore<K, T extends Persistent> extends DataStoreBase<K, T> {
+  public static final Logger LOG = LoggerFactory.getLogger(CassandraStore.class);
+
+  private CassandraClient<K, T>  cassandraClient = new CassandraClient<K, T>();
+
+  /**
+   * Buffer of Persistent objects pending to be stored, keyed by row key.
+   *
+   * A LinkedHashMap lets us iterate over the keys in insertion order, and
+   * flush() copies the key set before iterating so that other threads can
+   * keep adding entries to the map in the meantime.
+   */
+  private Map<K, T> buffer = new LinkedHashMap<K, T>();
+  
+  public CassandraStore() throws Exception {
+    // this.cassandraClient.initialize();
+  }
+
+  public void initialize(Class<K> keyClass, Class<T> persistent, Properties properties) throws IOException {
+    super.initialize(keyClass, persistent, properties);
+    try {
+      this.cassandraClient.initialize(keyClass, persistent);
+    }
+    catch (Exception e) {
+      throw new IOException(e.getMessage(), e);
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    LOG.debug("close");
+    flush();
+  }
+
+  @Override
+  public void createSchema() {
+    LOG.debug("creating Cassandra keyspace");
+    this.cassandraClient.checkKeyspace();
+  }
+
+  @Override
+  public boolean delete(K key) throws IOException {
+    // deletion is not yet implemented
+    LOG.warn("delete is not supported for " + key + ", returning false");
+    return false;
+  }
+
+  @Override
+  public long deleteByQuery(Query<K, T> query) throws IOException {
+    // deletion is not yet implemented
+    LOG.warn("deleteByQuery is not supported, returning 0");
+    return 0;
+  }
+
+  @Override
+  public void deleteSchema() throws IOException {
+    LOG.debug("delete schema");
+    this.cassandraClient.dropKeyspace();
+  }
+
+  @Override
+  public Result<K, T> execute(Query<K, T> query) throws IOException {
+    
+    Map<String, List<String>> familyMap = this.cassandraClient.getFamilyMap(query);
+    Map<String, String> reverseMap = this.cassandraClient.getReverseMap(query);
+    
+    CassandraQuery<K, T> cassandraQuery = new CassandraQuery<K, T>();
+    cassandraQuery.setQuery(query);
+    cassandraQuery.setFamilyMap(familyMap);
+    
+    CassandraResult<K, T> cassandraResult = new CassandraResult<K, T>(this, query);
+    cassandraResult.setReverseMap(reverseMap);
+
+    CassandraResultSet cassandraResultSet = new CassandraResultSet();
+    
+    // We query Cassandra keyspace by families.
+    for (String family : familyMap.keySet()) {
+      if (family == null) {
+        continue;
+      }
+      if (this.cassandraClient.isSuper(family)) {
+        addSuperColumns(family, cassandraQuery, cassandraResultSet);
+         
+      } else {
+        addSubColumns(family, cassandraQuery, cassandraResultSet);
+      }
+    }
+    
+    cassandraResult.setResultSet(cassandraResultSet);
+    
+    return cassandraResult;
+  }
+
+  private void addSubColumns(String family, CassandraQuery<K, T> cassandraQuery,
+      CassandraResultSet cassandraResultSet) {
+    // select family columns that are included in the query
+    List<Row<K, ByteBuffer, ByteBuffer>> rows = this.cassandraClient.execute(cassandraQuery, family);
+    
+    for (Row<K, ByteBuffer, ByteBuffer> row : rows) {
+      K key = row.getKey();
+      
+      // find associated row in the resultset
+      CassandraRow<K> cassandraRow = cassandraResultSet.getRow(key);
+      if (cassandraRow == null) {
+        cassandraRow = new CassandraRow<K>();
+        cassandraResultSet.putRow(key, cassandraRow);
+        cassandraRow.setKey(key);
+      }
+      
+      ColumnSlice<ByteBuffer, ByteBuffer> columnSlice = row.getColumnSlice();
+      
+      for (HColumn<ByteBuffer, ByteBuffer> hColumn : columnSlice.getColumns()) {
+        CassandraSubColumn cassandraSubColumn = new CassandraSubColumn();
+        cassandraSubColumn.setValue(hColumn);
+        cassandraSubColumn.setFamily(family);
+        cassandraRow.add(cassandraSubColumn);
+      }
+      
+    }
+  }
+
+  private void addSuperColumns(String family, CassandraQuery<K, T> cassandraQuery, 
+      CassandraResultSet cassandraResultSet) {
+    
+    List<SuperRow<K, String, ByteBuffer, ByteBuffer>> superRows = this.cassandraClient.executeSuper(cassandraQuery, family);
+    for (SuperRow<K, String, ByteBuffer, ByteBuffer> superRow: superRows) {
+      K key = superRow.getKey();
+      CassandraRow<K> cassandraRow = cassandraResultSet.getRow(key);
+      if (cassandraRow == null) {
+        cassandraRow = new CassandraRow<K>();
+        cassandraResultSet.putRow(key, cassandraRow);
+        cassandraRow.setKey(key);
+      }
+      
+      SuperSlice<String, ByteBuffer, ByteBuffer> superSlice = superRow.getSuperSlice();
+      for (HSuperColumn<String, ByteBuffer, ByteBuffer> hSuperColumn: superSlice.getSuperColumns()) {
+        CassandraSuperColumn cassandraSuperColumn = new CassandraSuperColumn();
+        cassandraSuperColumn.setValue(hSuperColumn);
+        cassandraSuperColumn.setFamily(family);
+        cassandraRow.add(cassandraSuperColumn);
+      }
+    }
+  }
+
+  /**
+   * Flush the buffer. Write the buffered rows.
+   * @see org.apache.gora.store.DataStore#flush()
+   */
+  @Override
+  public void flush() throws IOException {
+    
+    Set<K> keys = this.buffer.keySet();
+    
+    // copy the keys first: iterating over the key set directly would throw
+    // ConcurrentModificationException with java.util.HashMap and subclasses,
+    // at the cost of duplicating the keys in memory
+    @SuppressWarnings("unchecked")
+    K[] keyArray = (K[]) keys.toArray();
+    
+    for (K key: keyArray) {
+      T value = this.buffer.get(key);
+      if (value == null) {
+        LOG.info("Value to update is null for key " + key);
+        continue;
+      }
+      Schema schema = value.getSchema();
+      for (Field field: schema.getFields()) {
+        if (value.isDirty(field.pos())) {
+          addOrUpdateField(key, field, value.get(field.pos()));
+        }
+      }
+    }
+    
+    // remove flushed rows
+    for (K key: keyArray) {
+      this.buffer.remove(key);
+    }
+  }
+
+  @Override
+  public T get(K key, String[] fields) throws IOException {
+    CassandraQuery<K,T> query = new CassandraQuery<K,T>();
+    query.setDataStore(this);
+    query.setKeyRange(key, key);
+    query.setFields(fields);
+    query.setLimit(1);
+    Result<K,T> result = execute(query);
+    boolean hasResult = result.next();
+    return hasResult ? result.get() : null;
+  }
+
+  @Override
+  public List<PartitionQuery<K, T>> getPartitions(Query<K, T> query)
+      throws IOException {
+    // just a single partition
+    List<PartitionQuery<K,T>> partitions = new ArrayList<PartitionQuery<K,T>>();
+    partitions.add(new PartitionQueryImpl<K,T>(query));
+    return partitions;
+  }
+  
+  /**
+   * In Cassandra, a Schema is referred to as a Keyspace.
+   * @return the Keyspace name
+   */
+  @Override
+  public String getSchemaName() {
+    return this.cassandraClient.getKeyspaceName();
+  }
+
+  @Override
+  public Query<K, T> newQuery() {
+    Query<K,T> query = new CassandraQuery<K, T>(this);
+    query.setFields(getFieldsToQuery(null));
+    return query;
+  }
+
+  /**
+   * Duplicates the instance so that all pending objects are kept in memory
+   * until they are flushed.
+   * @see org.apache.gora.store.DataStore#put(java.lang.Object, org.apache.gora.persistency.Persistent)
+   */
+  @Override
+  public void put(K key, T value) throws IOException {
+    T p = (T) value.newInstance(new StateManagerImpl());
+    Schema schema = value.getSchema();
+    for (Field field: schema.getFields()) {
+      int fieldPos = field.pos();
+      if (value.isDirty(fieldPos)) {
+        Object fieldValue = value.get(fieldPos);
+        
+        // check if field has a nested structure (array, map, or record)
+        Schema fieldSchema = field.schema();
+        Type type = fieldSchema.getType();
+        switch(type) {
+          case RECORD:
+            Persistent persistent = (Persistent) fieldValue;
+            Persistent newRecord = persistent.newInstance(new StateManagerImpl());
+            for (Field member: fieldSchema.getFields()) {
+              newRecord.put(member.pos(), persistent.get(member.pos()));
+            }
+            fieldValue = newRecord;
+            break;
+          case MAP:
+            // needs to keep State.DELETED.
+            break;
+          case ARRAY:
+            GenericArray array = (GenericArray) fieldValue;
+            ListGenericArray newArray = new ListGenericArray(fieldSchema.getElementType());
+            Iterator iter = array.iterator();
+            while (iter.hasNext()) {
+              newArray.add(iter.next());
+            }
+            fieldValue = newArray;
+            break;
+        }
+        
+        p.put(fieldPos, fieldValue);
+      }
+    }
+    
+    // this performs a structural modification of the map
+    this.buffer.put(key, p);
+  }
+
+  /**
+   * Add a field to Cassandra according to its type.
+   * @param key     the key of the row where the field should be added
+   * @param field   the Avro field representing a datum
+   * @param value   the field value
+   */
+  private void addOrUpdateField(K key, Field field, Object value) {
+    Schema schema = field.schema();
+    Type type = schema.getType();
+    switch (type) {
+      case STRING:
+      case BOOLEAN:
+      case INT:
+      case LONG:
+      case BYTES:
+      case FLOAT:
+      case DOUBLE:
+      case FIXED:
+        this.cassandraClient.addColumn(key, field.name(), value);
+        break;
+      case RECORD:
+        if (value != null) {
+          if (value instanceof PersistentBase) {
+            PersistentBase persistentBase = (PersistentBase) value;
+            for (Field member: schema.getFields()) {
+              
+              // TODO: hack, do not store empty arrays
+              Object memberValue = persistentBase.get(member.pos());
+              if (memberValue instanceof GenericArray<?>) {
+                if (((GenericArray)memberValue).size() == 0) {
+                  continue;
+                }
+              } else if (memberValue instanceof StatefulHashMap<?,?>) {
+                if (((StatefulHashMap)memberValue).size() == 0) {
+                  continue;
+                }
+              }
+
+              this.cassandraClient.addSubColumn(key, field.name(), member.name(), memberValue);
+            }
+          } else {
+            LOG.info("Record not supported: " + value.toString());
+          }
+        }
+        break;
+      case MAP:
+        if (value != null) {
+          if (value instanceof StatefulHashMap<?, ?>) {
+            this.cassandraClient.addStatefulHashMap(key, field.name(), (StatefulHashMap<Utf8,Object>)value);
+          } else {
+            LOG.info("Map not supported: " + value.toString());
+          }
+        }
+        break;
+      case ARRAY:
+        if (value != null) {
+          if (value instanceof GenericArray<?>) {
+            this.cassandraClient.addGenericArray(key, field.name(), (GenericArray)value);
+          } else {
+            LOG.info("Array not supported: " + value.toString());
+          }
+        }
+        break;
+      default:
+        LOG.info("Type not considered: " + type.name());
+    }
+  }
+
+  @Override
+  public boolean schemaExists() throws IOException {
+    LOG.info("schema exists");
+    return cassandraClient.keyspaceExists();
+  }
+
+}
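The `flush()` method above snapshots the buffer's key set into an array before iterating, because removing entries while walking the live key set of a `HashMap`/`LinkedHashMap` throws `ConcurrentModificationException`. A self-contained sketch of that pattern (the class and a `String`-valued buffer are stand-ins, not Gora types):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlushBufferSketch {
    // Snapshot the keys, write each value, then remove the flushed entries.
    // Iterating the snapshot rather than the live key set is what makes the
    // in-loop removal safe, at the cost of duplicating the keys in memory.
    public static int flush(Map<String, String> buffer) {
        Object[] keys = buffer.keySet().toArray(); // snapshot
        int written = 0;
        for (Object key : keys) {
            String value = buffer.get(key);
            if (value == null) {
                continue; // nothing to write for this key
            }
            written++;          // stand-in for addOrUpdateField(...)
            buffer.remove(key); // safe: we iterate the snapshot, not the map
        }
        return written;
    }

    public static void main(String[] args) {
        Map<String, String> buffer = new LinkedHashMap<>();
        buffer.put("row1", "a");
        buffer.put("row2", "b");
        System.out.println(flush(buffer));    // 2
        System.out.println(buffer.isEmpty()); // true
    }
}
```

The sketch removes each entry inside the loop, whereas `flush()` uses a second pass over the same key array; both are safe for the same reason.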
diff --git a/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/HectorUtils.java b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/HectorUtils.java
new file mode 100644
index 0000000..1ebdd83
--- /dev/null
+++ b/trunk/gora-cassandra/src/main/java/org/apache/gora/cassandra/store/HectorUtils.java
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.cassandra.store;
+
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+
+import me.prettyprint.cassandra.serializers.ByteBufferSerializer;
+import me.prettyprint.cassandra.serializers.IntegerSerializer;
+import me.prettyprint.cassandra.serializers.StringSerializer;
+import me.prettyprint.hector.api.beans.HColumn;
+import me.prettyprint.hector.api.beans.HSuperColumn;
+import me.prettyprint.hector.api.factory.HFactory;
+import me.prettyprint.hector.api.mutation.Mutator;
+import me.prettyprint.hector.api.Serializer;
+
+import org.apache.gora.persistency.Persistent;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class HectorUtils<K,T extends Persistent> {
+
+  public static final Logger LOG = LoggerFactory.getLogger(HectorUtils.class);
+  
+  public static<K> void insertColumn(Mutator<K> mutator, K key, String columnFamily, ByteBuffer columnName, ByteBuffer columnValue) {
+    mutator.insert(key, columnFamily, createColumn(columnName, columnValue));
+  }
+
+  public static<K> void insertColumn(Mutator<K> mutator, K key, String columnFamily, String columnName, ByteBuffer columnValue) {
+    mutator.insert(key, columnFamily, createColumn(columnName, columnValue));
+  }
+
+
+  public static<K> HColumn<ByteBuffer,ByteBuffer> createColumn(ByteBuffer name, ByteBuffer value) {
+    return HFactory.createColumn(name, value, ByteBufferSerializer.get(), ByteBufferSerializer.get());
+  }
+
+  public static<K> HColumn<String,ByteBuffer> createColumn(String name, ByteBuffer value) {
+    return HFactory.createColumn(name, value, StringSerializer.get(), ByteBufferSerializer.get());
+  }
+
+  public static<K> HColumn<Integer,ByteBuffer> createColumn(Integer name, ByteBuffer value) {
+    return HFactory.createColumn(name, value, IntegerSerializer.get(), ByteBufferSerializer.get());
+  }
+
+
+  public static <K> void insertSubColumn(Mutator<K> mutator, K key, String columnFamily, String superColumnName, ByteBuffer columnName, ByteBuffer columnValue) {
+    mutator.insert(key, columnFamily, createSuperColumn(superColumnName, columnName, columnValue));
+  }
+
+  public static <K> void insertSubColumn(Mutator<K> mutator, K key, String columnFamily, String superColumnName, String columnName, ByteBuffer columnValue) {
+    mutator.insert(key, columnFamily, createSuperColumn(superColumnName, columnName, columnValue));
+  }
+
+  public static <K> void insertSubColumn(Mutator<K> mutator, K key, String columnFamily, String superColumnName, Integer columnName, ByteBuffer columnValue) {
+    mutator.insert(key, columnFamily, createSuperColumn(superColumnName, columnName, columnValue));
+  }
+
+  public static <K> void deleteSubColumn(Mutator<K> mutator, K key, String columnFamily, String superColumnName, ByteBuffer columnName) {
+    mutator.subDelete(key, columnFamily, superColumnName, columnName, StringSerializer.get(), ByteBufferSerializer.get());
+  }
+
+  public static HSuperColumn<String,ByteBuffer,ByteBuffer> createSuperColumn(String superColumnName, ByteBuffer columnName, ByteBuffer columnValue) {
+    return HFactory.createSuperColumn(superColumnName, Arrays.asList(createColumn(columnName, columnValue)), StringSerializer.get(), ByteBufferSerializer.get(), ByteBufferSerializer.get());
+  }
+
+  public static HSuperColumn<String,String,ByteBuffer> createSuperColumn(String superColumnName, String columnName, ByteBuffer columnValue) {
+    return HFactory.createSuperColumn(superColumnName, Arrays.asList(createColumn(columnName, columnValue)), StringSerializer.get(), StringSerializer.get(), ByteBufferSerializer.get());
+  }
+
+  public static HSuperColumn<String,Integer,ByteBuffer> createSuperColumn(String superColumnName, Integer columnName, ByteBuffer columnValue) {
+    return HFactory.createSuperColumn(superColumnName, Arrays.asList(createColumn(columnName, columnValue)), StringSerializer.get(), IntegerSerializer.get(), ByteBufferSerializer.get());
+  }
+
+}
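The overloaded helpers above all build the same nested structure through Hector: a row key maps to a column family, which maps to a super column named `superColumnName`, which wraps the individual sub-columns. A minimal, language-agnostic Python sketch of that layout (hypothetical, dictionaries standing in for Hector's `Mutator`/`HSuperColumn`) may make the shape clearer:

```python
# Hypothetical in-memory model of the (key -> family -> super column -> column)
# nesting that insertSubColumn/createSuperColumn produce via Hector.
def insert_sub_column(store, key, family, super_column, column, value):
    """Insert one sub-column value, creating intermediate levels as needed."""
    store.setdefault(key, {}).setdefault(family, {}) \
         .setdefault(super_column, {})[column] = value

def delete_sub_column(store, key, family, super_column, column):
    """Remove a single sub-column, mirroring Mutator.subDelete."""
    store.get(key, {}).get(family, {}).get(super_column, {}).pop(column, None)

store = {}
insert_sub_column(store, "row1", "sc", "outlinks", "http://a", b"anchor")
insert_sub_column(store, "row1", "sc", "outlinks", "http://b", b"anchor2")
delete_sub_column(store, "row1", "sc", "outlinks", "http://a")
```

The three Java overloads differ only in the sub-column name type (ByteBuffer, String, Integer), each selecting the matching Hector serializer; the structure written is identical.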
diff --git a/trunk/gora-cassandra/src/test/conf/cassandra.yaml b/trunk/gora-cassandra/src/test/conf/cassandra.yaml
new file mode 100644
index 0000000..e56d1b5
--- /dev/null
+++ b/trunk/gora-cassandra/src/test/conf/cassandra.yaml
@@ -0,0 +1,418 @@
+# Cassandra storage config YAML
+
+# NOTE:
+# See http://wiki.apache.org/cassandra/StorageConfiguration for
+# full explanations of configuration directives
+# /NOTE
+
+# The name of the cluster. This is mainly used to prevent machines in
+# one logical cluster from joining another.
+cluster_name: 'Gora Cassandra Test Cluster'
+
+# You should always specify InitialToken when setting up a production
+# cluster for the first time, and often when adding capacity later.
+# The principle is that each node should be given an equal slice of
+# the token ring; see http://wiki.apache.org/cassandra/Operations
+# for more details.
+#
+# If blank, Cassandra will request a token bisecting the range of
+# the heaviest-loaded existing node. If there is no load information
+# available, such as is the case with a new cluster, it will pick
+# a random token, which will lead to hot spots.
+initial_token:
+
+# See http://wiki.apache.org/cassandra/HintedHandoff
+hinted_handoff_enabled: true
+# this defines the maximum amount of time a dead host will have hints
+# generated. After it has been dead this long, hints will be dropped.
+max_hint_window_in_ms: 3600000 # one hour
+# Sleep this long after delivering each row or row fragment
+hinted_handoff_throttle_delay_in_ms: 50
+
+# authentication backend, implementing IAuthenticator; used to identify users
+authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
+
+# authorization backend, implementing IAuthority; used to limit access/provide permissions
+authority: org.apache.cassandra.auth.AllowAllAuthority
+
+# The partitioner is responsible for distributing rows (by key) across
+# nodes in the cluster. Any IPartitioner may be used, including your
+# own as long as it is on the classpath. Out of the box, Cassandra
+# provides org.apache.cassandra.dht.RandomPartitioner
+# org.apache.cassandra.dht.ByteOrderedPartitioner,
+# org.apache.cassandra.dht.OrderPreservingPartitioner (deprecated),
+# and org.apache.cassandra.dht.CollatingOrderPreservingPartitioner
+# (deprecated).
+#
+# - RandomPartitioner distributes rows across the cluster evenly by md5.
+# When in doubt, this is the best option.
+# - ByteOrderedPartitioner orders rows lexically by key bytes. BOP allows
+# scanning rows in key order, but the ordering can generate hot spots
+# for sequential insertion workloads.
+# - OrderPreservingPartitioner is an obsolete form of BOP that stores
+#   keys in a less-efficient format and only works with keys that are
+#   UTF8-encoded Strings.
+# - CollatingOPP collates according to EN,US rules rather than lexical byte
+#   ordering. Use this as an example if you need custom collation.
+#
+# See http://wiki.apache.org/cassandra/Operations for more on
+# partitioners and token selection.
+# partitioner: org.apache.cassandra.dht.RandomPartitioner
+partitioner: org.apache.cassandra.dht.ByteOrderedPartitioner
+
+# directories where Cassandra should store data on disk.
+data_file_directories:
+    - target/test/var/lib/cassandra/data
+
+# commit log
+commitlog_directory: target/test/var/lib/cassandra/commitlog
+
+# saved caches
+saved_caches_directory: target/test/var/lib/cassandra/saved_caches
+
+# commitlog_sync may be either "periodic" or "batch."
+# When in batch mode, Cassandra won't ack writes until the commit log
+# has been fsynced to disk. It will wait up to
+# commitlog_sync_batch_window_in_ms milliseconds for other writes, before
+# performing the sync.
+#
+# commitlog_sync: batch
+# commitlog_sync_batch_window_in_ms: 50
+#
+# the other option is "periodic" where writes may be acked immediately
+# and the CommitLog is simply synced every commitlog_sync_period_in_ms
+# milliseconds.
+commitlog_sync: periodic
+commitlog_sync_period_in_ms: 10000
+
+# any class that implements the SeedProvider interface and has a
+# constructor that takes a Map<String, String> of parameters will do.
+seed_provider:
+    # Addresses of hosts that are deemed contact points.
+    # Cassandra nodes use this list of hosts to find each other and learn
+    # the topology of the ring. You must change this if you are running
+    # multiple nodes!
+    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
+      parameters:
+          # seeds is actually a comma-delimited list of addresses.
+          # Ex: "<ip1>,<ip2>,<ip3>"
+          - seeds: "127.0.0.1"
+
+# emergency pressure valve: each time heap usage after a full (CMS)
+# garbage collection is above this fraction of the max, Cassandra will
+# flush the largest memtables.
+#
+# Set to 1.0 to disable. Setting this lower than
+# CMSInitiatingOccupancyFraction is not likely to be useful.
+#
+# RELYING ON THIS AS YOUR PRIMARY TUNING MECHANISM WILL WORK POORLY:
+# it is most effective under light to moderate load, or read-heavy
+# workloads; under truly massive write load, it will often be too
+# little, too late.
+flush_largest_memtables_at: 0.75
+
+# emergency pressure valve #2: the first time heap usage after a full
+# (CMS) garbage collection is above this fraction of the max,
+# Cassandra will reduce cache maximum _capacity_ to the given fraction
+# of the current _size_. Should usually be set substantially above
+# flush_largest_memtables_at, since that will have less long-term
+# impact on the system.
+#
+# Set to 1.0 to disable. Setting this lower than
+# CMSInitiatingOccupancyFraction is not likely to be useful.
+reduce_cache_sizes_at: 0.85
+reduce_cache_capacity_to: 0.6
+
+# For workloads with more data than can fit in memory, Cassandra's
+# bottleneck will be reads that need to fetch data from
+# disk. "concurrent_reads" should be set to (16 * number_of_drives) in
+# order to allow the operations to enqueue low enough in the stack
+# that the OS and drives can reorder them.
+#
+# On the other hand, since writes are almost never IO bound, the ideal
+# number of "concurrent_writes" is dependent on the number of cores in
+# your system; (8 * number_of_cores) is a good rule of thumb.
+concurrent_reads: 32
+concurrent_writes: 32
+
+# Total memory to use for memtables. Cassandra will flush the largest
+# memtable when this much memory is used.
+# If omitted, Cassandra will set it to 1/3 of the heap.
+# memtable_total_space_in_mb: 2048
+
+# Total space to use for commitlogs.
+# If space gets above this value (it will round up to the next nearest
+# segment multiple), Cassandra will flush every dirty CF in the oldest
+# segment and remove it.
+# commitlog_total_space_in_mb: 4096
+
+# This sets the number of memtable flush writer threads. These will
+# be blocked by disk io, and each one will hold a memtable in memory
+# while blocked. If you have a large heap and many data directories,
+# you can increase this value for better flush performance.
+# By default this will be set to the number of data directories defined.
+#memtable_flush_writers: 1
+
+# the number of full memtables to allow pending flush, that is,
+# waiting for a writer thread. At a minimum, this should be set to
+# the maximum number of secondary indexes created on a single CF.
+memtable_flush_queue_size: 4
+
+# Buffer size to use when performing contiguous column slices.
+# Increase this to the size of the column slices you typically perform
+# This property is not accepted in Cassandra 1.1.X
+# sliced_buffer_size_in_kb: 64
+
+# TCP port, for commands and data
+storage_port: 17000
+
+# Address to bind to and tell other Cassandra nodes to connect to. You
+# _must_ change this if you want multiple nodes to be able to
+# communicate!
+#
+# Leaving it blank leaves it up to InetAddress.getLocalHost(). This
+# will always do the Right Thing *if* the node is properly configured
+# (hostname, name resolution, etc), and the Right Thing is to use the
+# address associated with the hostname (it might not be).
+#
+# Setting this to 0.0.0.0 is always wrong.
+listen_address: localhost
+
+# Address to broadcast to other Cassandra nodes
+# Leaving this blank will set it to the same value as listen_address
+# broadcast_address: 1.2.3.4
+
+# The address to bind the Thrift RPC service to -- clients connect
+# here. Unlike ListenAddress above, you *can* specify 0.0.0.0 here if
+# you want Thrift to listen on all interfaces.
+#
+# Leaving this blank has the same effect it does for ListenAddress,
+# (i.e. it will be based on the configured hostname of the node).
+rpc_address: localhost
+# port for Thrift to listen for clients on
+rpc_port: 9160
+
+# enable or disable keepalive on rpc connections
+rpc_keepalive: true
+
+# Cassandra provides three options for the RPC Server:
+#
+# sync -> One connection per thread in the rpc pool (see below).
+# For a very large number of clients, memory will be your limiting
+# factor; on a 64 bit JVM, 128KB is the minimum stack size per thread.
+# Connection pooling is very, very strongly recommended.
+#
+# async -> Nonblocking server implementation with one thread to serve
+# rpc connections. This is not recommended for high throughput use
+# cases. Async has been tested to be about 50% slower than sync
+# or hsha and is deprecated: it will be removed in the next major release.
+#
+# hsha -> Stands for "half synchronous, half asynchronous." The rpc thread pool
+# (see below) is used to manage requests, but the threads are multiplexed
+# across the different clients.
+#
+# The default is sync because on Windows hsha is about 30% slower. On Linux,
+# sync/hsha performance is about the same, with hsha of course using less memory.
+rpc_server_type: sync
+
+# Uncomment rpc_min|max|thread to set request pool size.
+# You would primarily set max for the sync server to safeguard against
+# misbehaved clients; if you do hit the max, Cassandra will block until one
+# disconnects before accepting more. The defaults for sync are min of 16 and max
+# unlimited.
+#
+# For the Hsha server, the min and max both default to quadruple the number of
+# CPU cores.
+#
+# This configuration is ignored by the async server.
+#
+# rpc_min_threads: 16
+# rpc_max_threads: 2048
+
+# uncomment to set socket buffer sizes on rpc connections
+# rpc_send_buff_size_in_bytes:
+# rpc_recv_buff_size_in_bytes:
+
+# Frame size for thrift (maximum field length).
+# 0 disables TFramedTransport in favor of TSocket. This option
+# is deprecated; we strongly recommend using Framed mode.
+thrift_framed_transport_size_in_mb: 15
+
+# The max length of a thrift message, including all fields and
+# internal thrift overhead.
+thrift_max_message_length_in_mb: 16
+
+# Set to true to have Cassandra create a hard link to each sstable
+# flushed or streamed locally in a backups/ subdirectory of the
+# Keyspace data. Removing these links is the operator's
+# responsibility.
+incremental_backups: false
+
+# Whether or not to take a snapshot before each compaction. Be
+# careful using this option, since Cassandra won't clean up the
+# snapshots for you. Mostly useful if you're paranoid when there
+# is a data format change.
+snapshot_before_compaction: false
+
+# Add column indexes to a row after its contents reach this size.
+# Increase if your column values are large, or if you have a very large
+# number of columns. The competing concerns are: Cassandra has to
+# deserialize this much of the row to read a single column, so you want
+# it to be small - at least if you do many partial-row reads - but all
+# the index data is read for each access, so you don't want to generate
+# it wastefully either.
+column_index_size_in_kb: 64
+
+# Size limit for rows being compacted in memory. Larger rows will spill
+# over to disk and use a slower two-pass compaction process. A message
+# will be logged specifying the row key.
+in_memory_compaction_limit_in_mb: 64
+
+# Number of simultaneous compactions to allow, NOT including
+# validation "compactions" for anti-entropy repair. Simultaneous
+# compactions can help preserve read performance in a mixed read/write
+# workload, by mitigating the tendency of small sstables to accumulate
+# during a single long-running compaction. The default is usually
+# fine and if you experience problems with compaction running too
+# slowly or too fast, you should look at
+# compaction_throughput_mb_per_sec first.
+#
+# This setting has no effect on LeveledCompactionStrategy.
+#
+# concurrent_compactors defaults to the number of cores.
+# Uncomment to make compaction mono-threaded, the pre-0.8 default.
+#concurrent_compactors: 1
+
+# Multi-threaded compaction. When enabled, each compaction will use
+# up to one thread per core, plus one thread per sstable being merged.
+# This is usually only useful for SSD-based hardware: otherwise,
+# your concern is usually to get compaction to do LESS i/o (see:
+# compaction_throughput_mb_per_sec), not more.
+multithreaded_compaction: false
+
+# Throttles compaction to the given total throughput across the entire
+# system. The faster you insert data, the faster you need to compact in
+# order to keep the sstable count down, but in general, setting this to
+# 16 to 32 times the rate you are inserting data is more than sufficient.
+# Setting this to 0 disables throttling. Note that this accounts for all types
+# of compaction, including validation compaction.
+compaction_throughput_mb_per_sec: 16
+
+# Track cached row keys during compaction, and re-cache their new
+# positions in the compacted sstable. Disable if you use really large
+# key caches.
+compaction_preheat_key_cache: true
+
+# Throttles all outbound streaming file transfers on this node to the
+# given total throughput in Mbps. This is necessary because Cassandra does
+# mostly sequential IO when streaming data during bootstrap or repair, which
+# can lead to saturating the network connection and degrading rpc performance.
+# When unset, the default is 400 Mbps or 50 MB/s.
+# stream_throughput_outbound_megabits_per_sec: 400
+
+# Time to wait for a reply from other nodes before failing the command
+rpc_timeout_in_ms: 10000
+
+# phi value that must be reached for a host to be marked down.
+# most users should never need to adjust this.
+# phi_convict_threshold: 8
+
+# endpoint_snitch -- Set this to a class that implements
+# IEndpointSnitch, which will let Cassandra know enough
+# about your network topology to route requests efficiently.
+# Out of the box, Cassandra provides
+# - org.apache.cassandra.locator.SimpleSnitch:
+# Treats Strategy order as proximity. This improves cache locality
+# when disabling read repair, which can further improve throughput.
+# - org.apache.cassandra.locator.RackInferringSnitch:
+# Proximity is determined by rack and data center, which are
+# assumed to correspond to the 3rd and 2nd octet of each node's
+# IP address, respectively
+# - org.apache.cassandra.locator.PropertyFileSnitch:
+#   Proximity is determined by rack and data center, which are
+#   explicitly configured in cassandra-topology.properties.
+endpoint_snitch: org.apache.cassandra.locator.SimpleSnitch
+
+# controls how often to perform the more expensive part of host score
+# calculation
+dynamic_snitch_update_interval_in_ms: 100
+# controls how often to reset all host scores, allowing a bad host to
+# possibly recover
+dynamic_snitch_reset_interval_in_ms: 600000
+# if set greater than zero and read_repair_chance is < 1.0, this will allow
+# 'pinning' of replicas to hosts in order to increase cache capacity.
+# The badness threshold will control how much worse the pinned host has to be
+# before the dynamic snitch will prefer other replicas over it. This is
+# expressed as a double which represents a percentage. Thus, a value of
+# 0.2 means Cassandra would continue to prefer the static snitch values
+# until the pinned host was 20% worse than the fastest.
+dynamic_snitch_badness_threshold: 0.1
+
+# request_scheduler -- Set this to a class that implements
+# RequestScheduler, which will schedule incoming client requests
+# according to the specific policy. This is useful for multi-tenancy
+# with a single Cassandra cluster.
+# NOTE: This is specifically for requests from the client and does
+# not affect inter node communication.
+# org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place
+# org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of
+# client requests to a node with a separate queue for each
+# request_scheduler_id. The scheduler is further customized by
+# request_scheduler_options as described below.
+request_scheduler: org.apache.cassandra.scheduler.NoScheduler
+
+# Scheduler Options vary based on the type of scheduler
+# NoScheduler - Has no options
+# RoundRobin
+# - throttle_limit -- The throttle_limit is the number of in-flight
+# requests per client. Requests beyond
+# that limit are queued up until
+# running requests can complete.
+# The value of 80 here is twice the number of
+# concurrent_reads + concurrent_writes.
+# - default_weight -- default_weight is optional and allows for
+# overriding the default which is 1.
+# - weights -- Weights are optional and will default to 1 or the
+# overridden default_weight. The weight translates into how
+# many requests are handled during each turn of the
+# RoundRobin, based on the scheduler id.
+#
+# request_scheduler_options:
+# throttle_limit: 80
+# default_weight: 5
+# weights:
+# Keyspace1: 1
+# Keyspace2: 5
+
+# request_scheduler_id -- An identifier based on which to perform
+# the request scheduling. Currently the only valid option is keyspace.
+# request_scheduler_id: keyspace
+
+# index_interval controls the sampling of entries from the primary
+# row index in terms of space versus time. The larger the interval,
+# the smaller and less effective the sampling will be. In technical
+# terms, the interval corresponds to the number of index entries that
+# are skipped between taking each sample. All the sampled entries
+# must fit in memory. Generally, a value between 128 and 512 here
+# coupled with a large key cache size on CFs results in the best trade
+# offs. This value is not often changed, however if you have many
+# very small rows (many to an OS page), then increasing this will
+# often lower memory usage without an impact on performance.
+index_interval: 128
+
+# Enable or disable inter-node encryption
+# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
+# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
+# suite for authentication, key exchange and encryption of the actual data transfers.
+# NOTE: No custom encryption options are enabled at the moment
+# The available internode options are : all, none
+#
+# The passwords used in these options must match the passwords used when generating
+# the keystore and truststore. For instructions on generating these files, see:
+# http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
+encryption_options:
+    internode_encryption: none
+    keystore: conf/.keystore
+    keystore_password: cassandra
+    truststore: conf/.truststore
+    truststore_password: cassandra
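The test configuration above deliberately diverges from stock Cassandra defaults: all state directories point under `target/test` so a `mvn clean` wipes them, and `storage_port` is moved to 17000 so the embedded server cannot clash with a locally running node. A small stdlib Python sketch (an illustration only; it handles flat `key: value` lines, not full YAML) shows how one might spot-check those test-specific settings:

```python
def parse_flat_yaml(text):
    """Tiny parser for flat 'key: value' lines; comments and nested
    blocks are skipped. Enough to inspect top-level scalar settings."""
    settings = {}
    for line in text.splitlines():
        if line.startswith("#") or line.startswith(" ") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.split("#", 1)[0].strip()  # drop trailing comments
        if value:
            settings[key.strip()] = value
    return settings

# Trimmed excerpt of the test cassandra.yaml above.
sample = """\
cluster_name: 'Gora Cassandra Test Cluster'
commitlog_directory: target/test/var/lib/cassandra/commitlog
storage_port: 17000  # non-default, avoids clashing with a local node
rpc_port: 9160
"""
conf = parse_flat_yaml(sample)
```

For real deployments a proper YAML parser should be used; this sketch only illustrates which knobs the test harness overrides.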
diff --git a/trunk/gora-cassandra/src/test/conf/gora-cassandra-mapping.xml b/trunk/gora-cassandra/src/test/conf/gora-cassandra-mapping.xml
new file mode 100644
index 0000000..1e9ec79
--- /dev/null
+++ b/trunk/gora-cassandra/src/test/conf/gora-cassandra-mapping.xml
@@ -0,0 +1,58 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<gora-orm>
+  <keyspace name="Employee" host="localhost" cluster="Gora Cassandra Test Cluster">
+    <family name="p"/>
+    <family name="f"/>
+     <family name="sc" type="super" />
+  </keyspace>
+
+  <keyspace name="WebPage" host="localhost" cluster="Gora Cassandra Test Cluster">
+    <family name="p"/>
+    <family name="f"/>
+    <family name="sc" type="super"/>
+  </keyspace>
+
+  <keyspace name="TokenDatum" host="localhost" cluster="Gora Cassandra Test Cluster">
+    <family name="p"/>
+    <family name="f"/>
+    <family name="sc" type="super"/>
+  </keyspace>
+
+  <class name="org.apache.gora.examples.generated.Employee" keyClass="java.lang.String" keyspace="Employee">
+    <field name="name"  family="p" qualifier="info:nm"/>
+    <field name="dateOfBirth"  family="p" qualifier="info:db"/>
+    <field name="ssn"  family="p" qualifier="info:sn"/>
+    <field name="salary"  family="p" qualifier="info:sl"/>
+  </class>
+
+  <class name="org.apache.gora.examples.generated.WebPage" keyClass="java.lang.String" keyspace="WebPage">
+    <field name="url" family="p" qualifier="c:u"/>
+    <field name="content" family="p" qualifier="p:cnt:c"/>
+    <field name="parsedContent" family="sc" qualifier="p:parsedContent"/>
+    <field name="outlinks" family="sc" qualifier="p:outlinks"/>
+    <field name="metadata" family="sc" qualifier="c:mt"/>
+  </class>
+
+  <class name="org.apache.gora.examples.generated.TokenDatum" keyClass="java.lang.String" keyspace="TokenDatum">
+    <field name="count"  family="p" qualifier="common:count"/>
+  </class>
+
+</gora-orm>
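The mapping file above pairs each keyspace with the Gora persistent classes stored in it, and each persistent field with a column family (`p` for plain columns, `sc` for super columns) plus a qualifier. A hedged Python sketch with `xml.etree.ElementTree` shows how such a mapping can be read back (a trimmed inline copy of the XML is used; the real parser lives in gora-cassandra's Java code):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the gora-cassandra-mapping.xml above (Employee only).
MAPPING = """
<gora-orm>
  <keyspace name="Employee" host="localhost" cluster="Gora Cassandra Test Cluster">
    <family name="p"/>
    <family name="sc" type="super"/>
  </keyspace>
  <class name="org.apache.gora.examples.generated.Employee"
         keyClass="java.lang.String" keyspace="Employee">
    <field name="name" family="p" qualifier="info:nm"/>
    <field name="ssn" family="p" qualifier="info:sn"/>
  </class>
</gora-orm>
"""

def field_families(xml_text, class_name):
    """Map each persistent field of class_name to its column family."""
    root = ET.fromstring(xml_text)
    for clazz in root.iter("class"):
        if clazz.get("name") == class_name:
            return {f.get("name"): f.get("family") for f in clazz.iter("field")}
    return {}

mapping = field_families(MAPPING, "org.apache.gora.examples.generated.Employee")
```

In the WebPage mapping, note that the map- and array-valued fields (`parsedContent`, `outlinks`, `metadata`) all land in the super-column family `sc`, while scalar fields use the plain family `p`.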
diff --git a/trunk/gora-cassandra/src/test/conf/gora.properties b/trunk/gora-cassandra/src/test/conf/gora.properties
new file mode 100644
index 0000000..80427b4
--- /dev/null
+++ b/trunk/gora-cassandra/src/test/conf/gora.properties
@@ -0,0 +1,29 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+gora.datastore.default=org.apache.gora.cassandra.CassandraStore
+gora.cassandrastore.keyspace=
+gora.cassandrastore.name=
+gora.cassandrastore.class=
+gora.cassandrastore.qualifier=
+gora.cassandrastore.family=
+gora.cassandrastore.type=
+gora.cassandraStore.cluster=Test Cluster
+gora.cassandraStore.host=localhost
+
+
+
+
+
diff --git a/trunk/gora-cassandra/src/test/conf/log4j-server.properties b/trunk/gora-cassandra/src/test/conf/log4j-server.properties
new file mode 100644
index 0000000..086306e
--- /dev/null
+++ b/trunk/gora-cassandra/src/test/conf/log4j-server.properties
@@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# for production, you should probably set pattern to %c instead of %l.  
+# (%l is slower.)
+
+# output messages into a rolling log file as well as stdout
+log4j.rootLogger=INFO,stdout,R
+
+# stdout
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%5p %d{HH:mm:ss,SSS} %m%n
+
+# rolling log file
+log4j.appender.R=org.apache.log4j.RollingFileAppender
+log4j.appender.R.maxFileSize=20MB
+log4j.appender.R.maxBackupIndex=50
+log4j.appender.R.layout=org.apache.log4j.PatternLayout
+log4j.appender.R.layout.ConversionPattern=%5p [%t] %d{ISO8601} %F (line %L) %m%n
+# Edit the next line to point to your logs directory
+log4j.appender.R.File=/var/log/cassandra/system.log
+
+# Application logging options
+#log4j.logger.org.apache.cassandra=DEBUG
+#log4j.logger.org.apache.cassandra.db=DEBUG
+#log4j.logger.org.apache.cassandra.service.StorageProxy=DEBUG
+
+# Adding this to avoid thrift logging disconnect errors.
+log4j.logger.org.apache.thrift.server.TNonblockingServer=ERROR
+
diff --git a/trunk/gora-cassandra/src/test/java/.gitignore b/trunk/gora-cassandra/src/test/java/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-cassandra/src/test/java/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-cassandra/src/test/java/org/apache/gora/cassandra/GoraCassandraTestDriver.java b/trunk/gora-cassandra/src/test/java/org/apache/gora/cassandra/GoraCassandraTestDriver.java
new file mode 100644
index 0000000..995f904
--- /dev/null
+++ b/trunk/gora-cassandra/src/test/java/org/apache/gora/cassandra/GoraCassandraTestDriver.java
@@ -0,0 +1,164 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * @author lewismc
+ *
+ */
+
+package org.apache.gora.cassandra;
+
+import java.io.File;
+
+import org.apache.cassandra.io.util.FileUtils;
+import org.apache.cassandra.thrift.CassandraDaemon;
+import org.apache.gora.GoraTestDriver;
+import org.apache.gora.cassandra.store.CassandraStore;
+
+// Logging imports
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Helper class for third-party tests using the gora-cassandra backend.
+ * This driver is the base for all test cases that require an embedded
+ * Cassandra server. It draws on Hector's EmbeddedServerHelper, starting
+ * the embedded server in {@link #setUpClass()} and stopping it in
+ * {@link #tearDownClass()}.
+ *
+ * @see org.apache.gora.GoraTestDriver for test specifics.
+ * @author lewismc
+ */
+public class GoraCassandraTestDriver extends GoraTestDriver {
+  private static Logger log = LoggerFactory.getLogger(GoraCassandraTestDriver.class);
+  
+  private String baseDirectory = "target/test";
+
+  private CassandraDaemon cassandraDaemon;
+
+  private Thread cassandraThread;
+
+  /**
+   * @return temporary base directory of running cassandra instance
+   */
+  public String getBaseDirectory() {
+    return baseDirectory;
+  }
+
+  public GoraCassandraTestDriver() {
+    super(CassandraStore.class);
+  }
+	
+  /**
+   * starts embedded Cassandra server.
+   *
+   * @throws Exception
+   * 	if an error occurs
+   */
+  @Override
+  public void setUpClass() throws Exception {
+    super.setUpClass();
+    log.info("Starting embedded Cassandra Server...");
+    try {
+      cleanupDirectoriesFailover();
+      FileUtils.createDirectory(baseDirectory);
+      System.setProperty("log4j.configuration", "file:target/test-classes/log4j-server.properties");
+      System.setProperty("cassandra.config", "file:target/test-classes/cassandra.yaml");
+      
+      cassandraDaemon = new CassandraDaemon();
+      cassandraDaemon.init(null);
+      cassandraThread = new Thread(new Runnable() {
+
+        public void run() {
+          try {
+            cassandraDaemon.start();
+          } catch (Exception e) {
+            log.error("Embedded Cassandra server run failed!", e);
+          }
+        }
+      });
+
+      cassandraThread.setDaemon(true);
+      cassandraThread.start();
+    } catch (Exception e) {
+      log.error("Embedded Cassandra server start failed!", e);
+
+      // cleanup
+      tearDownClass();
+    }
+  }
+
+  /**
+   * Stops embedded Cassandra server.
+   *
+   * @throws Exception
+   * 	if an error occurs
+   */
+  @Override
+  public void tearDownClass() throws Exception {
+    super.tearDownClass();
+    log.info("Shutting down Embedded Cassandra server...");
+    if (cassandraThread != null) {
+      cassandraDaemon.stop();
+      cassandraDaemon.destroy();
+      cassandraThread.interrupt();
+      cassandraThread = null;
+    }
+    cleanupDirectoriesFailover();
+  }  
+
+  /**
+   * Cleans up Cassandra's temporary base directory.
+   *
+   * In case of failure it waits 250 ms and tries again, up to 3 attempts in total.
+   */
+  public void cleanupDirectoriesFailover() {
+    int tries = 3;
+    while (tries-- > 0) {
+      try {
+        cleanupDirectories();
+        break;
+      } catch (Exception e) {
+        // ignore the exception and retry after a short pause
+        try {
+          Thread.sleep(250);
+        } catch (InterruptedException e1) {
+          // ignore exception
+        }
+      }
+    }
+  }
+
+  /**
+   * Cleans up Cassandra's temporary base directory.
+   *
+   * @throws Exception
+   * 	if an error occurs
+   */
+  public void cleanupDirectories() throws Exception {
+    File dirFile = new File(baseDirectory);
+    if (dirFile.exists()) {
+      FileUtils.deleteRecursive(dirFile);
+    }
+  }
+}
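`cleanupDirectoriesFailover` implements a small bounded-retry pattern: directory deletion can fail transiently while the embedded server still holds file handles, so the driver retries with a short pause instead of failing the whole test run. The same pattern, sketched in Python (illustrative only; names are not from the source):

```python
import time

def retry(action, attempts=3, delay_seconds=0.25):
    """Run action(); on exception, sleep briefly and retry, mirroring
    cleanupDirectoriesFailover above. Returns True on success."""
    for _ in range(attempts):
        try:
            action()
            return True
        except Exception:
            time.sleep(delay_seconds)
    return False

# Simulate a deletion that fails once (e.g. a file still locked),
# then succeeds on the second attempt.
calls = []
def flaky_cleanup():
    calls.append(1)
    if len(calls) < 2:
        raise OSError("directory still locked")

ok = retry(flaky_cleanup, attempts=3, delay_seconds=0.01)
```

Swallowing every exception, as both the Java original and this sketch do, is acceptable for test teardown but would hide real errors in production code.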
diff --git a/trunk/gora-cassandra/src/test/java/org/apache/gora/cassandra/store/TestCassandraStore.java b/trunk/gora-cassandra/src/test/java/org/apache/gora/cassandra/store/TestCassandraStore.java
new file mode 100644
index 0000000..6e77982
--- /dev/null
+++ b/trunk/gora-cassandra/src/test/java/org/apache/gora/cassandra/store/TestCassandraStore.java
@@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * @author lewismc
+ *
+ */
+package org.apache.gora.cassandra.store;
+
+import java.io.IOException;
+
+import org.apache.gora.cassandra.GoraCassandraTestDriver;
+import org.apache.gora.cassandra.store.CassandraStore;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.DataStoreTestBase;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Test for CassandraStore.
+ * @author lewismc
+ */
+public class TestCassandraStore extends DataStoreTestBase {
+	
+  private Configuration conf;
+	
+  static {
+    setTestDriver(new GoraCassandraTestDriver());
+  }
+	
+  @Before
+  public void setUp() throws Exception {
+    super.setUp();
+  }
+	
+  @SuppressWarnings("unchecked")
+  @Override
+  protected DataStore<String, Employee> createEmployeeDataStore() throws IOException {
+    return DataStoreFactory.getDataStore(CassandraStore.class, String.class, Employee.class, conf);
+  }
+	
+  @SuppressWarnings("unchecked")
+  @Override
+  protected DataStore<String, WebPage> createWebPageDataStore() throws IOException {
+    return DataStoreFactory.getDataStore(CassandraStore.class, String.class, WebPage.class, conf);
+  }
+	
+  public GoraCassandraTestDriver getTestDriver() {
+    return (GoraCassandraTestDriver) testDriver;
+  }
+	
+
+  // ==========================================================================
+  // We need to skip the following tests until the underlying issues are fixed
+  // (see GORA-157).
+  
+  @Override
+  public void testGetWebPageDefaultFields() throws IOException {}
+  @Override
+  public void testQuery() throws IOException {}
+  @Override
+  public void testQueryStartKey() throws IOException {}
+  @Override
+  public void testQueryEndKey() throws IOException {}
+  @Override
+  public void testQueryKeyRange() throws IOException {}
+  @Override
+  public void testQueryWebPageSingleKeyDefaultFields() throws IOException {}
+  @Override
+  public void testDelete() throws IOException {}
+  @Override
+  public void testDeleteByQuery() throws IOException {}
+  @Override
+  public void testDeleteByQueryFields() throws IOException {}
+  @Override
+  public void testGetPartitions() throws IOException {}
+// ============================================================================
+
+
+  public static void main(String[] args) throws Exception {
+    TestCassandraStore test = new TestCassandraStore();
+    test.setUpClass();
+    test.setUp();
+    
+    test.tearDown();
+    test.tearDownClass();
+  }
+
+}
diff --git a/trunk/gora-core/build.xml b/trunk/gora-core/build.xml
new file mode 100644
index 0000000..6413ad3
--- /dev/null
+++ b/trunk/gora-core/build.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<project name="gora-core" default="compile">
+  <property name="project.dir" value="${basedir}/.."/>
+
+  <import file="${project.dir}/build-common.xml"/>
+</project>
diff --git a/trunk/gora-core/conf/.gitignore b/trunk/gora-core/conf/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-core/conf/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-core/ivy/ivy.xml b/trunk/gora-core/ivy/ivy.xml
new file mode 100644
index 0000000..7347ce2
--- /dev/null
+++ b/trunk/gora-core/ivy/ivy.xml
@@ -0,0 +1,62 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<ivy-module version="2.0">
+    <info 
+      organisation="org.apache.gora"
+      module="gora-core"
+      status="integration"/>
+
+  <configurations>
+    <include file="../../ivy/ivy-configurations.xml"/>
+  </configurations>
+
+  <publications defaultconf="compile">
+    <artifact name="gora-core" conf="compile"/>
+    <artifact name="gora-core-test" conf="test"/>
+  </publications>
+
+  <dependencies>
+
+    <dependency org="commons-logging" name="commons-logging" rev="1.1.1" conf="*->default"/>
+    <dependency org="commons-lang" name="commons-lang" conf="*->default" rev="2.5"/>
+    <dependency org="log4j" name="log4j" rev="1.2.15" conf="*->default">   
+      <exclude org="com.sun.jdmk"/>
+      <exclude org="com.sun.jmx"/>
+      <exclude org="javax.jms"/>
+    </dependency>
+    
+    <dependency org="org.apache.hadoop" name="hadoop-core" rev="0.20.2" conf="*->default">
+      <exclude org="hsqldb" name="hsqldb"/>
+      <exclude org="net.sf.kosmosfs" name="kfs"/>
+      <exclude org="net.java.dev.jets3t" name="jets3t"/>
+      <exclude org="org.eclipse.jdt" name="core"/>
+      <exclude org="org.mortbay.jetty" name="jsp-*"/>
+    </dependency>
+    <dependency org="org.apache.hadoop" name="avro" rev="1.3.2" conf="*->default">
+      <exclude org="ant" name="ant"/>
+    </dependency>
+
+    <!-- test dependencies -->
+    <dependency org="org.apache.hadoop" name="hadoop-test" rev="0.20.2" conf="test->master"/>
+    <dependency org="org.slf4j" name="slf4j-simple" rev="1.5.8" conf="test -> *,!sources,!javadoc"/>
+    <dependency org="junit" name="junit" rev="4.6" conf="test->default"/>
+
+  </dependencies>
+</ivy-module>
+
diff --git a/trunk/gora-core/lib-ext/.gitignore b/trunk/gora-core/lib-ext/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-core/lib-ext/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-core/pom.xml b/trunk/gora-core/pom.xml
new file mode 100644
index 0000000..6542747
--- /dev/null
+++ b/trunk/gora-core/pom.xml
@@ -0,0 +1,170 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+     <!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+    -->
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.gora</groupId>
+        <artifactId>gora</artifactId>
+        <version>0.2.1</version>
+        <relativePath>../</relativePath>
+    </parent>
+    <artifactId>gora-core</artifactId>
+    <packaging>bundle</packaging>
+
+    <name>Apache Gora :: Core</name>
+        <url>http://gora.apache.org</url>
+    <description>The Apache Gora open source framework provides an in-memory data model and 
+    persistence for big data. Gora supports persisting to column stores, key value stores, 
+    document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce 
+    support.</description>
+    <inceptionYear>2010</inceptionYear>
+    <organization>
+    	<name>The Apache Software Foundation</name>
+    	<url>http://www.apache.org/</url>
+    </organization>
+    <scm>
+    	<url>http://svn.apache.org/viewvc/gora/tags/apache-gora-0.2.1/gora-core</url>
+    	<connection>scm:svn:http://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-core</connection>
+    	<developerConnection>scm:svn:https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-core</developerConnection>
+    </scm>
+    <issueManagement>
+    	<system>JIRA</system>
+    	<url>https://issues.apache.org/jira/browse/GORA</url>
+    </issueManagement>
+    <ciManagement>
+    	<system>Jenkins</system>
+    	<url>https://builds.apache.org/job/Gora-trunk/</url>
+    </ciManagement>
+
+    <properties>
+        <osgi.import>*</osgi.import>
+        <osgi.export>org.apache.gora*;version="${project.version}";-noimport:=true</osgi.export>
+    </properties>
+
+    <build>
+        <directory>target</directory>
+        <outputDirectory>target/classes</outputDirectory>
+        <finalName>${project.artifactId}-${project.version}</finalName>
+        <testOutputDirectory>target/test-classes</testOutputDirectory>
+        <testSourceDirectory>src/test/java</testSourceDirectory>
+        <sourceDirectory>src/main/java</sourceDirectory>
+        <testResources>
+            <testResource>
+                <directory>src/test/conf/</directory>
+                <includes>
+                    <include>**</include>
+                </includes>
+            </testResource>
+        </testResources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>build-helper-maven-plugin</artifactId>
+                <version>${build-helper-maven-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <phase>generate-sources</phase>
+                        <goals>
+                            <goal>add-source</goal>
+                        </goals>
+                        <configuration>
+                            <sources>
+                                <source>src/examples/java</source>
+                            </sources>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <version>${maven-jar-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>jar</goal>
+                            <goal>test-jar</goal>
+                        </goals>
+                        <configuration>
+                        <archive>
+                            <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
+                        </archive>
+                    </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+
+    <dependencies>
+        <!-- Hadoop Dependencies -->
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-core</artifactId>
+        </dependency>
+        
+        <dependency>
+            <groupId>org.apache.cxf</groupId>
+            <artifactId>cxf-rt-frontend-jaxrs</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>avro</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>commons-lang</groupId>
+            <artifactId>commons-lang</artifactId>
+        </dependency>
+
+        <!-- Logging Dependencies -->
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+            <exclusions> 
+             <exclusion>
+              <groupId>javax.jms</groupId>
+              <artifactId>jms</artifactId>
+             </exclusion>
+            </exclusions>
+        </dependency>
+
+        <!-- Testing Dependencies -->
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-test</artifactId>
+            <scope>test</scope>
+        </dependency>
+        
+    </dependencies>
+
+</project>
diff --git a/trunk/gora-core/src/examples/avro/employee.json b/trunk/gora-core/src/examples/avro/employee.json
new file mode 100644
index 0000000..88ad873
--- /dev/null
+++ b/trunk/gora-core/src/examples/avro/employee.json
@@ -0,0 +1,11 @@
+  {
+    "type": "record",
+    "name": "Employee",
+    "namespace": "org.apache.gora.examples.generated",
+    "fields" : [
+      {"name": "name", "type": "string"},
+      {"name": "dateOfBirth", "type": "long"},
+      {"name": "ssn", "type": "string"},
+      {"name": "salary", "type": "int"}
+    ]
+  }
diff --git a/trunk/gora-core/src/examples/avro/tokendatum.json b/trunk/gora-core/src/examples/avro/tokendatum.json
new file mode 100644
index 0000000..9dccb4b
--- /dev/null
+++ b/trunk/gora-core/src/examples/avro/tokendatum.json
@@ -0,0 +1,8 @@
+{
+  "type": "record",
+  "name": "TokenDatum",
+  "namespace": "org.apache.gora.examples.generated",
+  "fields" : [
+    {"name": "count", "type": "int"}
+  ]
+}
diff --git a/trunk/gora-core/src/examples/avro/webpage.json b/trunk/gora-core/src/examples/avro/webpage.json
new file mode 100644
index 0000000..b196d12
--- /dev/null
+++ b/trunk/gora-core/src/examples/avro/webpage.json
@@ -0,0 +1,20 @@
+{
+  "type": "record",
+  "name": "WebPage",
+  "namespace": "org.apache.gora.examples.generated",
+  "fields" : [
+    {"name": "url", "type": "string"},
+    {"name": "content", "type": "bytes"},
+    {"name": "parsedContent", "type": {"type":"array", "items": "string"}},
+    {"name": "outlinks", "type": {"type":"map", "values":"string"}},
+    {"name": "metadata", "type": {
+      "name": "Metadata",
+      "type": "record",
+      "namespace": "org.apache.gora.examples.generated",
+      "fields": [
+        {"name": "version", "type": "int"},
+        {"name": "data", "type": {"type": "map", "values": "string"}}
+      ]
+    }}
+  ]
+}
diff --git a/trunk/gora-core/src/examples/java/org/apache/gora/examples/WebPageDataCreator.java b/trunk/gora-core/src/examples/java/org/apache/gora/examples/WebPageDataCreator.java
new file mode 100644
index 0000000..914198b
--- /dev/null
+++ b/trunk/gora-core/src/examples/java/org/apache/gora/examples/WebPageDataCreator.java
@@ -0,0 +1,154 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.examples;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.HashMap;
+
+import org.apache.avro.util.Utf8;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.examples.generated.Metadata;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Creates and stores some data to be used in the tests.
+ */
+public class WebPageDataCreator {
+
+  private static final Logger log = LoggerFactory.getLogger(WebPageDataCreator.class);
+  
+  public static final String[] URLS = {
+    "http://foo.com/",
+    "http://foo.com/1.html",
+    "http://foo.com/2.html",
+    "http://bar.com/3.jsp",
+    "http://bar.com/1.html",
+    "http://bar.com/",
+    "http://baz.com/1.jsp&q=barbaz",
+    "http://baz.com/1.jsp&q=barbaz&p=foo",
+    "http://baz.com/1.jsp&q=foo",
+    "http://bazbar.com",
+  };
+  
+  public static HashMap<String, Integer> URL_INDEXES = new HashMap<String, Integer>();
+  
+  static {
+    for(int i=0; i<URLS.length; i++) {
+      URL_INDEXES.put(URLS[i], i);
+    }  
+  }
+  
+  public static final String[] CONTENTS = {
+    "foo baz bar",
+    "foo",
+    "foo1 bar1 baz1",
+    "a b c d e",
+    "aa bb cc dd ee",
+    "1",
+    "2 3",
+    "a b b b b b a",
+    "a a a",
+    "foo bar baz",
+  };
+  
+  public static final int[][] LINKS = {
+    {1, 2, 3, 9},
+    {3, 9},
+    {},
+    {9},
+    {5},
+    {1, 2, 3, 4, 6, 7, 8, 9},
+    {1},
+    {2},
+    {3},
+    {8, 1},
+  };
+
+  public static final String[][] ANCHORS = {
+    {"foo", "foo", "foo", "foo"},
+    {"a1", "a2"},
+    {},
+    {"anchor1"},
+    {"bar"},
+    {"a1", "a2", "a3", "a4","a5", "a6", "a7", "a8", "a9"},
+    {"foo"},
+    {"baz"},
+    {"bazbar"},
+    {"baz", "bar"},
+  };
+
+  public static final String[] SORTED_URLS = new String[URLS.length];
+  static {
+    for (int i = 0; i < URLS.length; i++) {
+      SORTED_URLS[i] = URLS[i];
+    }
+    Arrays.sort(SORTED_URLS);
+  }
+  
+  public static void createWebPageData(DataStore<String, WebPage> dataStore) 
+  throws IOException {
+    WebPage page;
+    log.info("creating web page data");
+    
+    for(int i=0; i<URLS.length; i++) {
+      page = new WebPage();
+      page.setUrl(new Utf8(URLS[i]));
+      page.setContent(ByteBuffer.wrap(CONTENTS[i].getBytes()));
+      for(String token : CONTENTS[i].split(" ")) {
+        page.addToParsedContent(new Utf8(token));  
+      }
+      
+      for(int j=0; j<LINKS[i].length; j++) {
+        page.putToOutlinks(new Utf8(URLS[LINKS[i][j]]), new Utf8(ANCHORS[i][j]));
+      }
+      
+      Metadata metadata = new Metadata();
+      metadata.setVersion(1);
+      metadata.putToData(new Utf8("metakey"), new Utf8("metavalue"));
+      page.setMetadata(metadata);
+      
+      dataStore.put(URLS[i], page);
+    }
+    dataStore.flush();
+    log.info("finished creating web page data");
+  }
+  
+  public int run(String[] args) throws Exception {
+    String dataStoreClass = "org.apache.gora.hbase.store.HBaseStore";
+    if(args.length > 0) {
+      dataStoreClass = args[0];
+    }
+    
+    DataStore<String,WebPage> store 
+      = DataStoreFactory.getDataStore(dataStoreClass, String.class, WebPage.class, new Configuration());
+    createWebPageData(store);
+    
+    return 0;
+  }
+  
+  public static void main(String[] args) throws Exception {
+    new WebPageDataCreator().run(args);
+  }
+}
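`WebPageDataCreator` keeps its fixture data in parallel arrays (`URLS`, `CONTENTS`, `LINKS`, `ANCHORS`) and builds `URL_INDEXES` in a static initializer so tests can map a URL back to its row. That idiom, reduced to a self-contained sketch (class and variable names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class IndexMapExample {
    public static void main(String[] args) {
        String[] urls = {"http://foo.com/", "http://bar.com/", "http://baz.com/"};

        // Map each value to its position, mirroring the URL_INDEXES initializer.
        Map<String, Integer> indexes = new HashMap<>();
        for (int i = 0; i < urls.length; i++) {
            indexes.put(urls[i], i);
        }

        // Look up the row for a URL, as the store tests do with URL_INDEXES.
        System.out.println(indexes.get("http://bar.com/"));
    }
}
```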
diff --git a/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/Employee.java b/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/Employee.java
new file mode 100644
index 0000000..96e0c10
--- /dev/null
+++ b/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/Employee.java
@@ -0,0 +1,102 @@
+package org.apache.gora.examples.generated;
+
+import java.nio.ByteBuffer;
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.avro.Protocol;
+import org.apache.avro.Schema;
+import org.apache.avro.AvroRuntimeException;
+import org.apache.avro.Protocol;
+import org.apache.avro.util.Utf8;
+import org.apache.avro.ipc.AvroRemoteException;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.specific.SpecificExceptionBase;
+import org.apache.avro.specific.SpecificRecordBase;
+import org.apache.avro.specific.SpecificRecord;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.impl.PersistentBase;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.ListGenericArray;
+
+@SuppressWarnings("all")
+public class Employee extends PersistentBase {
+  public static final Schema _SCHEMA = Schema.parse("{\"type\":\"record\",\"name\":\"Employee\",\"namespace\":\"org.apache.gora.examples.generated\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"},{\"name\":\"dateOfBirth\",\"type\":\"long\"},{\"name\":\"ssn\",\"type\":\"string\"},{\"name\":\"salary\",\"type\":\"int\"}]}");
+  public static enum Field {
+    NAME(0,"name"),
+    DATE_OF_BIRTH(1,"dateOfBirth"),
+    SSN(2,"ssn"),
+    SALARY(3,"salary"),
+    ;
+    private int index;
+    private String name;
+    Field(int index, String name) {this.index=index;this.name=name;}
+    public int getIndex() {return index;}
+    public String getName() {return name;}
+    public String toString() {return name;}
+  };
+  public static final String[] _ALL_FIELDS = {"name","dateOfBirth","ssn","salary",};
+  static {
+    PersistentBase.registerFields(Employee.class, _ALL_FIELDS);
+  }
+  private Utf8 name;
+  private long dateOfBirth;
+  private Utf8 ssn;
+  private int salary;
+  public Employee() {
+    this(new StateManagerImpl());
+  }
+  public Employee(StateManager stateManager) {
+    super(stateManager);
+  }
+  public Employee newInstance(StateManager stateManager) {
+    return new Employee(stateManager);
+  }
+  public Schema getSchema() { return _SCHEMA; }
+  public Object get(int _field) {
+    switch (_field) {
+    case 0: return name;
+    case 1: return dateOfBirth;
+    case 2: return ssn;
+    case 3: return salary;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int _field, Object _value) {
+    if(isFieldEqual(_field, _value)) return;
+    getStateManager().setDirty(this, _field);
+    switch (_field) {
+    case 0:name = (Utf8)_value; break;
+    case 1:dateOfBirth = (Long)_value; break;
+    case 2:ssn = (Utf8)_value; break;
+    case 3:salary = (Integer)_value; break;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  public Utf8 getName() {
+    return (Utf8) get(0);
+  }
+  public void setName(Utf8 value) {
+    put(0, value);
+  }
+  public long getDateOfBirth() {
+    return (Long) get(1);
+  }
+  public void setDateOfBirth(long value) {
+    put(1, value);
+  }
+  public Utf8 getSsn() {
+    return (Utf8) get(2);
+  }
+  public void setSsn(Utf8 value) {
+    put(2, value);
+  }
+  public int getSalary() {
+    return (Integer) get(3);
+  }
+  public void setSalary(int value) {
+    put(3, value);
+  }
+}
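The generated `Employee` class routes every field access through `get(int)` and `put(int, Object)` so that `PersistentBase` can track which fields changed by index. The dispatch idiom on its own, as a minimal sketch (the `Record` class below is a hypothetical stand-in, not a Gora type):

```java
// Minimal sketch of index-based field dispatch, as used by the generated
// Employee/Metadata/TokenDatum classes.
public class DispatchExample {
    static class Record {
        private String name;
        private int salary;

        Object get(int field) {
            switch (field) {
            case 0: return name;
            case 1: return salary;
            default: throw new IllegalArgumentException("Bad index: " + field);
            }
        }

        void put(int field, Object value) {
            switch (field) {
            case 0: name = (String) value; break;
            case 1: salary = (Integer) value; break;
            default: throw new IllegalArgumentException("Bad index: " + field);
            }
        }
    }

    public static void main(String[] args) {
        Record r = new Record();
        r.put(0, "alice");
        r.put(1, 1000);
        System.out.println(r.get(0) + ":" + r.get(1));
    }
}
```

Typed accessors like `getName()`/`setSalary(int)` then become one-line wrappers over the indexed calls, which is exactly how the generated code is laid out.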
diff --git a/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/Metadata.java b/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/Metadata.java
new file mode 100644
index 0000000..cfa588e
--- /dev/null
+++ b/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/Metadata.java
@@ -0,0 +1,93 @@
+package org.apache.gora.examples.generated;
+
+import java.nio.ByteBuffer;
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.avro.Protocol;
+import org.apache.avro.Schema;
+import org.apache.avro.AvroRuntimeException;
+import org.apache.avro.Protocol;
+import org.apache.avro.util.Utf8;
+import org.apache.avro.ipc.AvroRemoteException;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.specific.SpecificExceptionBase;
+import org.apache.avro.specific.SpecificRecordBase;
+import org.apache.avro.specific.SpecificRecord;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.impl.PersistentBase;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.ListGenericArray;
+
+@SuppressWarnings("all")
+public class Metadata extends PersistentBase {
+  public static final Schema _SCHEMA = Schema.parse("{\"type\":\"record\",\"name\":\"Metadata\",\"namespace\":\"org.apache.gora.examples.generated\",\"fields\":[{\"name\":\"version\",\"type\":\"int\"},{\"name\":\"data\",\"type\":{\"type\":\"map\",\"values\":\"string\"}}]}");
+  public static enum Field {
+    VERSION(0,"version"),
+    DATA(1,"data"),
+    ;
+    private int index;
+    private String name;
+    Field(int index, String name) {this.index=index;this.name=name;}
+    public int getIndex() {return index;}
+    public String getName() {return name;}
+    public String toString() {return name;}
+  };
+  public static final String[] _ALL_FIELDS = {"version","data",};
+  static {
+    PersistentBase.registerFields(Metadata.class, _ALL_FIELDS);
+  }
+  private int version;
+  private Map<Utf8,Utf8> data;
+  public Metadata() {
+    this(new StateManagerImpl());
+  }
+  public Metadata(StateManager stateManager) {
+    super(stateManager);
+    data = new StatefulHashMap<Utf8,Utf8>();
+  }
+  public Metadata newInstance(StateManager stateManager) {
+    return new Metadata(stateManager);
+  }
+  public Schema getSchema() { return _SCHEMA; }
+  public Object get(int _field) {
+    switch (_field) {
+    case 0: return version;
+    case 1: return data;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int _field, Object _value) {
+    if(isFieldEqual(_field, _value)) return;
+    getStateManager().setDirty(this, _field);
+    switch (_field) {
+    case 0:version = (Integer)_value; break;
+    case 1:data = (Map<Utf8,Utf8>)_value; break;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  public int getVersion() {
+    return (Integer) get(0);
+  }
+  public void setVersion(int value) {
+    put(0, value);
+  }
+  public Map<Utf8, Utf8> getData() {
+    return (Map<Utf8, Utf8>) get(1);
+  }
+  public Utf8 getFromData(Utf8 key) {
+    if (data == null) { return null; }
+    return data.get(key);
+  }
+  public void putToData(Utf8 key, Utf8 value) {
+    getStateManager().setDirty(this, 1);
+    data.put(key, value);
+  }
+  public Utf8 removeFromData(Utf8 key) {
+    if (data == null) { return null; }
+    getStateManager().setDirty(this, 1);
+    return data.remove(key);
+  }
+}
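Note how `Metadata.putToData` calls `getStateManager().setDirty(this, 1)` before mutating the map: every mutator flags its field index so only changed fields are written back to the store. A stand-alone sketch of that bookkeeping using a `BitSet` (this class is illustrative; Gora's real tracking lives in `StateManagerImpl` and `StatefulHashMap`):

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Sketch of per-field dirty tracking: each mutator marks its field index
// before changing state, so a flush can write only the dirty fields.
public class DirtyTrackingExample {
    private final BitSet dirty = new BitSet();
    private final Map<String, String> data = new HashMap<>();

    void putToData(String key, String value) {
        dirty.set(1);            // field 1 is "data", as in Metadata
        data.put(key, value);
    }

    boolean isDirty(int field) {
        return dirty.get(field);
    }

    public static void main(String[] args) {
        DirtyTrackingExample m = new DirtyTrackingExample();
        System.out.println(m.isDirty(1));
        m.putToData("metakey", "metavalue");
        System.out.println(m.isDirty(1));
    }
}
```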
diff --git a/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/TokenDatum.java b/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/TokenDatum.java
new file mode 100644
index 0000000..bd4fc65
--- /dev/null
+++ b/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/TokenDatum.java
@@ -0,0 +1,72 @@
+package org.apache.gora.examples.generated;
+
+import java.nio.ByteBuffer;
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.avro.Protocol;
+import org.apache.avro.Schema;
+import org.apache.avro.AvroRuntimeException;
+import org.apache.avro.Protocol;
+import org.apache.avro.util.Utf8;
+import org.apache.avro.ipc.AvroRemoteException;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.specific.SpecificExceptionBase;
+import org.apache.avro.specific.SpecificRecordBase;
+import org.apache.avro.specific.SpecificRecord;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.impl.PersistentBase;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.ListGenericArray;
+
+@SuppressWarnings("all")
+public class TokenDatum extends PersistentBase {
+  public static final Schema _SCHEMA = Schema.parse("{\"type\":\"record\",\"name\":\"TokenDatum\",\"namespace\":\"org.apache.gora.examples.generated\",\"fields\":[{\"name\":\"count\",\"type\":\"int\"}]}");
+  public static enum Field {
+    COUNT(0,"count"),
+    ;
+    private int index;
+    private String name;
+    Field(int index, String name) {this.index=index;this.name=name;}
+    public int getIndex() {return index;}
+    public String getName() {return name;}
+    public String toString() {return name;}
+  };
+  public static final String[] _ALL_FIELDS = {"count",};
+  static {
+    PersistentBase.registerFields(TokenDatum.class, _ALL_FIELDS);
+  }
+  private int count;
+  public TokenDatum() {
+    this(new StateManagerImpl());
+  }
+  public TokenDatum(StateManager stateManager) {
+    super(stateManager);
+  }
+  public TokenDatum newInstance(StateManager stateManager) {
+    return new TokenDatum(stateManager);
+  }
+  public Schema getSchema() { return _SCHEMA; }
+  public Object get(int _field) {
+    switch (_field) {
+    case 0: return count;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int _field, Object _value) {
+    if(isFieldEqual(_field, _value)) return;
+    getStateManager().setDirty(this, _field);
+    switch (_field) {
+    case 0:count = (Integer)_value; break;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  public int getCount() {
+    return (Integer) get(0);
+  }
+  public void setCount(int value) {
+    put(0, value);
+  }
+}
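The generated bean above accesses every field through index-based `get`/`put`, marking fields dirty via the StateManager so a datastore can persist only what changed. The pattern can be sketched in a self-contained form (class and method names below are illustrative, not Gora's API):

```java
import java.util.BitSet;

// Minimal sketch of index-based field access with dirty tracking,
// mirroring the shape of the generated TokenDatum bean.
public class DirtyBean {
    private final BitSet dirty = new BitSet();
    private int count;

    public Object get(int field) {
        switch (field) {
        case 0: return count;
        default: throw new IllegalArgumentException("Bad index");
        }
    }

    public void put(int field, Object value) {
        if (get(field).equals(value)) return; // skip no-op writes
        dirty.set(field);                     // remember the field changed
        switch (field) {
        case 0: count = (Integer) value; break;
        default: throw new IllegalArgumentException("Bad index");
        }
    }

    public boolean isDirty(int field) { return dirty.get(field); }

    public static void main(String[] args) {
        DirtyBean b = new DirtyBean();
        b.put(0, 5);
        System.out.println(b.get(0));      // 5
        System.out.println(b.isDirty(0));  // true
    }
}
```

Index-based access lets serializers iterate fields by schema position without reflection, which is why the generated `get`/`put` switch on the Avro field index.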
diff --git a/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/WebPage.java b/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/WebPage.java
new file mode 100644
index 0000000..283f182
--- /dev/null
+++ b/trunk/gora-core/src/examples/java/org/apache/gora/examples/generated/WebPage.java
@@ -0,0 +1,125 @@
+package org.apache.gora.examples.generated;
+
+import java.nio.ByteBuffer;
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.avro.Protocol;
+import org.apache.avro.Schema;
+import org.apache.avro.AvroRuntimeException;
+import org.apache.avro.Protocol;
+import org.apache.avro.util.Utf8;
+import org.apache.avro.ipc.AvroRemoteException;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.specific.SpecificExceptionBase;
+import org.apache.avro.specific.SpecificRecordBase;
+import org.apache.avro.specific.SpecificRecord;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.impl.PersistentBase;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.ListGenericArray;
+
+@SuppressWarnings("all")
+public class WebPage extends PersistentBase {
+  public static final Schema _SCHEMA = Schema.parse("{\"type\":\"record\",\"name\":\"WebPage\",\"namespace\":\"org.apache.gora.examples.generated\",\"fields\":[{\"name\":\"url\",\"type\":\"string\"},{\"name\":\"content\",\"type\":\"bytes\"},{\"name\":\"parsedContent\",\"type\":{\"type\":\"array\",\"items\":\"string\"}},{\"name\":\"outlinks\",\"type\":{\"type\":\"map\",\"values\":\"string\"}},{\"name\":\"metadata\",\"type\":{\"type\":\"record\",\"name\":\"Metadata\",\"fields\":[{\"name\":\"version\",\"type\":\"int\"},{\"name\":\"data\",\"type\":{\"type\":\"map\",\"values\":\"string\"}}]}}]}");
+  public static enum Field {
+    URL(0,"url"),
+    CONTENT(1,"content"),
+    PARSED_CONTENT(2,"parsedContent"),
+    OUTLINKS(3,"outlinks"),
+    METADATA(4,"metadata"),
+    ;
+    private int index;
+    private String name;
+    Field(int index, String name) {this.index=index;this.name=name;}
+    public int getIndex() {return index;}
+    public String getName() {return name;}
+    public String toString() {return name;}
+  };
+  public static final String[] _ALL_FIELDS = {"url","content","parsedContent","outlinks","metadata",};
+  static {
+    PersistentBase.registerFields(WebPage.class, _ALL_FIELDS);
+  }
+  private Utf8 url;
+  private ByteBuffer content;
+  private GenericArray<Utf8> parsedContent;
+  private Map<Utf8,Utf8> outlinks;
+  private Metadata metadata;
+  public WebPage() {
+    this(new StateManagerImpl());
+  }
+  public WebPage(StateManager stateManager) {
+    super(stateManager);
+    parsedContent = new ListGenericArray<Utf8>(getSchema().getField("parsedContent").schema());
+    outlinks = new StatefulHashMap<Utf8,Utf8>();
+  }
+  public WebPage newInstance(StateManager stateManager) {
+    return new WebPage(stateManager);
+  }
+  public Schema getSchema() { return _SCHEMA; }
+  public Object get(int _field) {
+    switch (_field) {
+    case 0: return url;
+    case 1: return content;
+    case 2: return parsedContent;
+    case 3: return outlinks;
+    case 4: return metadata;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int _field, Object _value) {
+    if(isFieldEqual(_field, _value)) return;
+    getStateManager().setDirty(this, _field);
+    switch (_field) {
+    case 0:url = (Utf8)_value; break;
+    case 1:content = (ByteBuffer)_value; break;
+    case 2:parsedContent = (GenericArray<Utf8>)_value; break;
+    case 3:outlinks = (Map<Utf8,Utf8>)_value; break;
+    case 4:metadata = (Metadata)_value; break;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  public Utf8 getUrl() {
+    return (Utf8) get(0);
+  }
+  public void setUrl(Utf8 value) {
+    put(0, value);
+  }
+  public ByteBuffer getContent() {
+    return (ByteBuffer) get(1);
+  }
+  public void setContent(ByteBuffer value) {
+    put(1, value);
+  }
+  public GenericArray<Utf8> getParsedContent() {
+    return (GenericArray<Utf8>) get(2);
+  }
+  public void addToParsedContent(Utf8 element) {
+    getStateManager().setDirty(this, 2);
+    parsedContent.add(element);
+  }
+  public Map<Utf8, Utf8> getOutlinks() {
+    return (Map<Utf8, Utf8>) get(3);
+  }
+  public Utf8 getFromOutlinks(Utf8 key) {
+    if (outlinks == null) { return null; }
+    return outlinks.get(key);
+  }
+  public void putToOutlinks(Utf8 key, Utf8 value) {
+    getStateManager().setDirty(this, 3);
+    outlinks.put(key, value);
+  }
+  public Utf8 removeFromOutlinks(Utf8 key) {
+    if (outlinks == null) { return null; }
+    getStateManager().setDirty(this, 3);
+    return outlinks.remove(key);
+  }
+  public Metadata getMetadata() {
+    return (Metadata) get(4);
+  }
+  public void setMetadata(Metadata value) {
+    put(4, value);
+  }
+}
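`outlinks` is initialized as a `StatefulHashMap`, which records a per-key edit state alongside the values so that deletions (not just additions) can be replayed against the datastore. A self-contained sketch of that idea, with illustrative names rather than Gora's actual `StatefulMap` contract:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a map that tracks per-key edit states, in the spirit
// of the StatefulHashMap backing WebPage.outlinks.
public class TrackedMap<K, V> {
    public enum State { CLEAN, NEW, DELETED }

    private final Map<K, V> data = new HashMap<>();
    private final Map<K, State> states = new HashMap<>();

    public void put(K key, V value) {
        data.put(key, value);
        states.put(key, State.NEW);     // remember the key was added/updated
    }

    public V remove(K key) {
        states.put(key, State.DELETED); // keep the deletion visible to the store
        return data.remove(key);
    }

    public V get(K key) { return data.get(key); }

    public State stateOf(K key) { return states.getOrDefault(key, State.CLEAN); }

    public static void main(String[] args) {
        TrackedMap<String, String> outlinks = new TrackedMap<>();
        outlinks.put("http://example.org", "anchor text");
        outlinks.remove("http://example.org");
        System.out.println(outlinks.stateOf("http://example.org")); // DELETED
    }
}
```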
diff --git a/trunk/gora-core/src/examples/java/org/apache/gora/examples/mapreduce/QueryCounter.java b/trunk/gora-core/src/examples/java/org/apache/gora/examples/mapreduce/QueryCounter.java
new file mode 100644
index 0000000..d82ddfd
--- /dev/null
+++ b/trunk/gora-core/src/examples/java/org/apache/gora/examples/mapreduce/QueryCounter.java
@@ -0,0 +1,152 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.examples.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.gora.mapreduce.GoraMapper;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.gora.util.ClassLoadingUtils;
+
+/**
+ * Example Hadoop job to count the rows of a Gora {@link Query}.
+ */
+public class QueryCounter<K, T extends Persistent> extends Configured implements Tool {
+
+  public static final String COUNTER_GROUP = "QueryCounter";
+  public static final String ROWS = "ROWS";
+
+  public QueryCounter(Configuration conf) {
+    setConf(conf);
+  }
+
+  public static class QueryCounterMapper<K, T extends Persistent>
+  extends GoraMapper<K, T
+    , NullWritable, NullWritable> {
+
+    @Override
+    protected void map(K key, T value,
+        Context context) throws IOException ,InterruptedException {
+
+      context.getCounter(COUNTER_GROUP, ROWS).increment(1L);
+    };
+  }
+
+  /** Returns the Query to count the results of. Subclasses can
+   * override this function to customize the query.
+   * @return the Query object to count the results of.
+   */
+  public Query<K, T> getQuery(DataStore<K,T> dataStore) {
+    Query<K,T> query = dataStore.newQuery();
+    return query;
+  }
+
+  /**
+   * Creates and returns the {@link Job} for submitting to Hadoop mapreduce.
+   * @param dataStore the DataStore to run the query against
+   * @param query the query whose results are to be counted
+   * @return the configured {@link Job}
+   * @throws IOException
+   */
+  public Job createJob(DataStore<K,T> dataStore, Query<K,T> query) throws IOException {
+    Job job = new Job(getConf());
+
+    job.setJobName("QueryCounter");
+    job.setNumReduceTasks(0);
+    job.setJarByClass(getClass());
+    /* Mappers are initialized with GoraMapper.initMapper()*/
+    GoraMapper.initMapperJob(job, query, dataStore, NullWritable.class
+        , NullWritable.class, QueryCounterMapper.class, true);
+
+    job.setOutputFormatClass(NullOutputFormat.class);
+    return job;
+  }
+
+
+  /**
+   * Returns the number of results of the Query.
+   */
+  public long countQuery(DataStore<K,T> dataStore, Query<K,T> query) throws Exception {
+    Job job = createJob(dataStore, query);
+    job.waitForCompletion(true);
+
+    return job.getCounters().findCounter(COUNTER_GROUP, ROWS).getValue();
+  }
+
+  /**
+   * Returns the number of results of the Query obtained by the
+   * {@link #getQuery(DataStore)} method.
+   */
+  public long countQuery(DataStore<K,T> dataStore) throws Exception {
+    Query<K,T> query = getQuery(dataStore);
+
+    Job job = createJob(dataStore, query);
+    job.waitForCompletion(true);
+
+    return job.getCounters().findCounter(COUNTER_GROUP, ROWS).getValue();
+  }
+
+  @SuppressWarnings("unchecked")
+  @Override
+  public int run(String[] args) throws Exception {
+
+    if(args.length < 2) {
+      System.err.println("Usage: QueryCounter <keyClass> <persistentClass> [dataStoreClass]");
+      return 1;
+    }
+
+    Class<K> keyClass = (Class<K>) ClassLoadingUtils.loadClass(args[0]);
+    Class<T> persistentClass = (Class<T>) ClassLoadingUtils.loadClass(args[1]);
+
+    DataStore<K,T> dataStore;
+    Configuration conf = new Configuration();
+
+    if(args.length > 2) {
+      Class<? extends DataStore<K,T>> dataStoreClass
+          = (Class<? extends DataStore<K, T>>) Class.forName(args[2]);
+      dataStore = DataStoreFactory.getDataStore(dataStoreClass, keyClass, persistentClass, conf);
+    }
+    else {
+      dataStore = DataStoreFactory.getDataStore(keyClass, persistentClass, conf);
+    }
+
+    long results = countQuery(dataStore);
+
+    System.out.println("Number of results of the query: " + results);
+
+    return 0;
+  }
+
+
+  @SuppressWarnings("rawtypes")
+  public static void main(String[] args) throws Exception {
+    int ret = ToolRunner.run(new QueryCounter(new Configuration()), args);
+    System.exit(ret);
+  }
+}
diff --git a/trunk/gora-core/src/examples/java/org/apache/gora/examples/mapreduce/WordCount.java b/trunk/gora-core/src/examples/java/org/apache/gora/examples/mapreduce/WordCount.java
new file mode 100644
index 0000000..3e89e68
--- /dev/null
+++ b/trunk/gora-core/src/examples/java/org/apache/gora/examples/mapreduce/WordCount.java
@@ -0,0 +1,169 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.examples.mapreduce;
+
+import java.io.IOException;
+import java.util.StringTokenizer;
+
+import org.apache.gora.examples.generated.TokenDatum;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.mapreduce.GoraMapper;
+import org.apache.gora.mapreduce.GoraReducer;
+import org.apache.gora.query.Query;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Classic word count example in Gora.
+ */
+public class WordCount extends Configured implements Tool {
+
+  public WordCount() {
+    
+  }
+  
+  public WordCount(Configuration conf) {
+    setConf(conf);
+  }
+  
+  /**
+   * TokenizerMapper takes &lt;String, WebPage&gt; pairs as obtained 
+   * from the input DataStore, and tokenizes the content via 
+   * {@link WebPage#getContent()}. The tokens are emitted as
+   * &lt;Text, IntWritable&gt; pairs.
+   */
+  public static class TokenizerMapper 
+    extends GoraMapper<String, WebPage, Text, IntWritable> {
+    
+    private final static IntWritable one = new IntWritable(1);
+    private Text word = new Text();
+    
+    @Override
+    protected void map(String key, WebPage page, Context context) 
+      throws IOException ,InterruptedException {
+      
+      //Get the content from a WebPage as obtained from the DataStore
+      String content =  new String(page.getContent().array());
+      
+      StringTokenizer itr = new StringTokenizer(content);
+      while (itr.hasMoreTokens()) {
+        word.set(itr.nextToken());
+        context.write(word, one);
+      }
+    };
+  }
+  
+  public static class WordCountReducer extends GoraReducer<Text, IntWritable, 
+  String, TokenDatum> {
+    
+    TokenDatum result = new TokenDatum();
+    
+    @Override
+    protected void reduce(Text key, Iterable<IntWritable> values, Context context) 
+      throws IOException ,InterruptedException {
+      int sum = 0;
+      for (IntWritable val : values) {
+        sum += val.get();
+      }
+      result.setCount(sum);
+      context.write(key.toString(), result);
+    };
+    
+  }
+  
+  /**
+   * Creates and returns the {@link Job} for submitting to Hadoop mapreduce.
+   * @param inStore the input DataStore of {@link WebPage}s
+   * @param query the query over the input store
+   * @param outStore the output DataStore for the {@link TokenDatum} counts
+   * @return the configured {@link Job}
+   * @throws IOException
+   */
+  public Job createJob(DataStore<String,WebPage> inStore, Query<String,WebPage> query
+      , DataStore<String,TokenDatum> outStore) throws IOException {
+    Job job = new Job(getConf());
+   
+    job.setJobName("WordCount");
+    
+    job.setNumReduceTasks(10);
+    job.setJarByClass(getClass());
+    
+    /* Mappers are initialized with GoraMapper#initMapper().
+     * Instead of the TokenizerMapper defined here, if the input is not 
+     * obtained via Gora, any other mapper can be used, such as 
+     * Hadoop-MapReduce's WordCount.TokenizerMapper.
+     */
+    GoraMapper.initMapperJob(job, query, inStore, Text.class
+        , IntWritable.class, TokenizerMapper.class, true);
+    
+    /* Reducers are initialized with GoraReducer#initReducer().
+     * If the output is not to be persisted via Gora, any reducer 
+     * can be used instead.
+     */
+    GoraReducer.initReducerJob(job, outStore, WordCountReducer.class);
+    
+    //TODO: set combiner
+    
+    return job;
+  }
+  
+  public int wordCount(DataStore<String,WebPage> inStore, 
+      DataStore<String, TokenDatum> outStore) throws IOException, InterruptedException, ClassNotFoundException {
+    Query<String,WebPage> query = inStore.newQuery();
+    
+    Job job = createJob(inStore, query, outStore);
+    return job.waitForCompletion(true) ? 0 : 1;
+  }
+  
+  @Override
+  public int run(String[] args) throws Exception {
+    
+    DataStore<String,WebPage> inStore;
+    DataStore<String, TokenDatum> outStore;
+    Configuration conf = new Configuration();
+    if(args.length > 0) {
+      String dataStoreClass = args[0];
+      inStore = DataStoreFactory.getDataStore(dataStoreClass, 
+          String.class, WebPage.class, conf);
+      if(args.length > 1) {
+        dataStoreClass = args[1];
+      }
+      outStore = DataStoreFactory.getDataStore(dataStoreClass, 
+          String.class, TokenDatum.class, conf);
+    } else {
+      inStore = DataStoreFactory.getDataStore(String.class, WebPage.class, conf);
+      outStore = DataStoreFactory.getDataStore(String.class, TokenDatum.class, conf);
+    }
+    
+    return wordCount(inStore, outStore);
+  }
+  
+  // Usage: WordCount [<input datastore class> [output datastore class]]
+  public static void main(String[] args) throws Exception {
+    int ret = ToolRunner.run(new WordCount(), args);
+    System.exit(ret);
+  }
+  
+}
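Stripped of the Hadoop plumbing, the `TokenizerMapper` loop and the `WordCountReducer` summation amount to a tokenize-then-sum pass. A self-contained, in-memory version of that core logic (the class name here is illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

// The map and reduce phases of WordCount collapsed into one local pass:
// tokenize the content (map) and sum the counts per token (reduce).
public class LocalWordCount {
    public static Map<String, Integer> count(String content) {
        Map<String, Integer> counts = new HashMap<>();
        StringTokenizer itr = new StringTokenizer(content);
        while (itr.hasMoreTokens()) {
            String word = itr.nextToken();
            counts.merge(word, 1, Integer::sum); // reduce step: sum the 1s
        }
        return counts;
    }

    public static void main(String[] args) {
        // counts: to=2, be=2, or=1, not=1 (map iteration order unspecified)
        System.out.println(count("to be or not to be"));
    }
}
```

In the MapReduce version the shuffle groups the `(word, 1)` pairs by key between the two phases; the `merge` call plays that role here.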
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/avro/PersistentDatumReader.java b/trunk/gora-core/src/main/java/org/apache/gora/avro/PersistentDatumReader.java
new file mode 100644
index 0000000..4a7c7c1
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/avro/PersistentDatumReader.java
@@ -0,0 +1,259 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.WeakHashMap;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.io.Decoder;
+import org.apache.avro.io.ResolvingDecoder;
+import org.apache.avro.specific.SpecificDatumReader;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.mapreduce.FakeResolvingDecoder;
+import org.apache.gora.persistency.ListGenericArray;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.State;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.StatefulMap;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+import org.apache.gora.util.IOUtils;
+
+/**
+ * PersistentDatumReader reads fields along with their dirty and readable state information.
+ */
+public class PersistentDatumReader<T extends Persistent>
+  extends SpecificDatumReader<T> {
+
+  private Schema rootSchema;
+  private T cachedPersistent; // for creating objects
+
+  private WeakHashMap<Decoder, ResolvingDecoder> decoderCache
+    = new WeakHashMap<Decoder, ResolvingDecoder>();
+
+  private boolean readDirtyBits = true;
+
+  public PersistentDatumReader() {
+  }
+
+  public PersistentDatumReader(Schema schema, boolean readDirtyBits) {
+    this.readDirtyBits = readDirtyBits;
+    setSchema(schema);
+  }
+
+  @Override
+  public void setSchema(Schema actual) {
+    this.rootSchema = actual;
+    super.setSchema(actual);
+  }
+
+  @SuppressWarnings("unchecked")
+  public T newPersistent() {
+    if(cachedPersistent == null) {
+      cachedPersistent = (T)super.newRecord(null, rootSchema);
+      return cachedPersistent; //we can return the cached object
+    }
+    return (T)cachedPersistent.newInstance(new StateManagerImpl());
+  }
+
+  @Override
+  protected Object newRecord(Object old, Schema schema) {
+    if(old != null) {
+      return old;
+    }
+
+    if(schema.equals(rootSchema)) {
+      return newPersistent();
+    } else {
+      return super.newRecord(old, schema);
+    }
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public T read(T reuse, Decoder in) throws IOException {
+    return (T) read(reuse, rootSchema, in);
+  }
+
+  public Object read(Object reuse, Schema schema, Decoder decoder)
+    throws IOException {
+    return super.read(reuse, schema, getResolvingDecoder(decoder));
+  }
+
+  protected ResolvingDecoder getResolvingDecoder(Decoder decoder)
+  throws IOException {
+    ResolvingDecoder resolvingDecoder = decoderCache.get(decoder);
+    if(resolvingDecoder == null) {
+      resolvingDecoder = new FakeResolvingDecoder(rootSchema, decoder);
+      decoderCache.put(decoder, resolvingDecoder);
+    }
+    return resolvingDecoder;
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  protected Object readRecord(Object old, Schema expected, ResolvingDecoder in)
+      throws IOException {
+
+    Object record = newRecord(old, expected);
+
+    //check if top-level
+    if(expected.equals(rootSchema) && readDirtyBits) {
+      T persistent = (T)record;
+      persistent.clear();
+
+      boolean[] dirtyFields = IOUtils.readBoolArray(in);
+      boolean[] readableFields = IOUtils.readBoolArray(in);
+
+      //read fields
+      int i = 0;
+
+      for (Field f : expected.getFields()) {
+        if(readableFields[f.pos()]) {
+          int pos = f.pos();
+          String name = f.name();
+          Object oldDatum = (old != null) ? getField(record, name, pos) : null;
+          setField(record, name, pos, read(oldDatum, f.schema(), in));
+        }
+      }
+
+      // Now set changed bits
+      for (i = 0; i < dirtyFields.length; i++) {
+        if (dirtyFields[i]) {
+          persistent.setDirty(i);
+        } 
+        else {
+          persistent.clearDirty(i);
+        }
+      }
+      return record;
+    } else {
+      //since ResolvingDecoder.readFieldOrder is final, we cannot override it
+      //so this is a copy of super.readRecord, with the readFieldOrder change
+
+      for (Field f : expected.getFields()) {
+        int pos = f.pos();
+        String name = f.name();
+        Object oldDatum = (old != null) ? getField(record, name, pos) : null;
+        setField(record, name, pos, read(oldDatum, f.schema(), in));
+      }
+
+      return record;
+    }
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  protected Object readMap(Object old, Schema expected, ResolvingDecoder in)
+      throws IOException {
+    StatefulMap<Utf8, ?> map = (StatefulMap<Utf8, ?>) newMap(old, 0);
+    Map<Utf8, State> tempStates = null;
+    if (readDirtyBits) {
+      tempStates = new HashMap<Utf8, State>();
+      int size = in.readInt();
+      for (int j = 0; j < size; j++) {
+        Utf8 key = in.readString(null);
+        State state = State.values()[in.readInt()];
+        tempStates.put(key, state);
+      }
+    }
+    super.readMap(map, expected, in);
+    map.clearStates();
+    if (readDirtyBits) {
+      for (Entry<Utf8, State> entry : tempStates.entrySet()) {
+        map.putState(entry.getKey(), entry.getValue());
+      }
+    }
+    return map;
+  }
+
+  @Override
+  @SuppressWarnings({ "rawtypes" })
+  protected Object newMap(Object old, int size) {
+    if (old instanceof StatefulHashMap) {
+      ((StatefulHashMap)old).reuse();
+      return old;
+    }
+    return new StatefulHashMap<Object, Object>();
+  }
+
+  /** Called to create new array instances.  Subclasses may override to use a
+   * different array implementation.  By default, this returns a
+   * {@link ListGenericArray} instance, reusing the old array when possible.*/
+  @Override
+  @SuppressWarnings("rawtypes")
+  protected Object newArray(Object old, int size, Schema schema) {
+    if (old instanceof ListGenericArray) {
+      ((GenericArray) old).clear();
+      return old;
+    } else return new ListGenericArray(size, schema);
+  }
+  
+  public Persistent clone(Persistent persistent, Schema schema) {
+    Persistent cloned = persistent.newInstance(new StateManagerImpl());
+    List<Field> fields = schema.getFields();
+    for(Field field: fields) {
+      int pos = field.pos();
+      switch(field.schema().getType()) {
+        case MAP    :
+        case ARRAY  :
+        case RECORD : 
+        case STRING : cloned.put(pos, cloneObject(
+            field.schema(), persistent.get(pos), cloned.get(pos))); break;
+        case NULL   : break;
+        default     : cloned.put(pos, persistent.get(pos)); break;
+      }
+    }
+    
+    return cloned;
+  }
+  
+  @SuppressWarnings("unchecked")
+  protected Object cloneObject(Schema schema, Object toClone, Object cloned) {
+    if(toClone == null) {
+      return null;
+    }
+    
+    switch(schema.getType()) {
+      case MAP    :
+        Map<Utf8, Object> map = (Map<Utf8, Object>)newMap(cloned, 0);
+        for(Map.Entry<Utf8, Object> entry: ((Map<Utf8, Object>)toClone).entrySet()) {
+          map.put((Utf8)createString(entry.getKey().toString())
+              , cloneObject(schema.getValueType(), entry.getValue(), null));
+        }
+        return map;
+      case ARRAY  :
+        GenericArray<Object> array = (GenericArray<Object>) 
+          newArray(cloned, (int)((GenericArray<?>)toClone).size(), schema);
+        for(Object element: (GenericArray<Object>)toClone) {
+          array.add(cloneObject(schema.getElementType(), element, null));
+        }
+        return array;
+      case RECORD : return clone((Persistent)toClone, schema);
+      case STRING : return createString(toClone.toString());
+      default     : return toClone; //shallow copy is enough
+    }
+  }
+}
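For a top-level record, `readRecord` first decodes two boolean arrays (the dirty and readable flags) before the field values, via `IOUtils.readBoolArray`. A packed-bit codec in the spirit of that idea; the byte layout below is an illustrative stand-in, not Gora's actual wire format:

```java
import java.util.Arrays;

// Sketch of packing per-field boolean flags into bytes ahead of the record,
// as the dirty/readable bitmaps are in PersistentDatumReader/Writer.
public class BoolArrayCodec {
    public static byte[] pack(boolean[] bits) {
        byte[] out = new byte[(bits.length + 7) / 8];
        for (int i = 0; i < bits.length; i++)
            if (bits[i]) out[i / 8] |= (byte) (1 << (i % 8)); // set bit i
        return out;
    }

    public static boolean[] unpack(byte[] bytes, int length) {
        boolean[] bits = new boolean[length];
        for (int i = 0; i < length; i++)
            bits[i] = (bytes[i / 8] & (1 << (i % 8))) != 0;   // test bit i
        return bits;
    }

    public static void main(String[] args) {
        boolean[] dirty = {true, false, true};
        System.out.println(Arrays.toString(unpack(pack(dirty), 3)));
        // [true, false, true]
    }
}
```

Writing the flags ahead of the values is what lets the reader skip fields whose readable bit is unset, as the `readableFields[f.pos()]` check above does.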
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/avro/PersistentDatumWriter.java b/trunk/gora-core/src/main/java/org/apache/gora/avro/PersistentDatumWriter.java
new file mode 100644
index 0000000..8faa519
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/avro/PersistentDatumWriter.java
@@ -0,0 +1,123 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro;
+
+import java.io.IOException;
+import java.util.Map.Entry;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.io.Encoder;
+import org.apache.avro.specific.SpecificDatumWriter;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.State;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.StatefulMap;
+import org.apache.gora.util.IOUtils;
+
+/**
+ * PersistentDatumWriter writes fields along with their dirty and readable state information.
+ */
+public class PersistentDatumWriter<T extends Persistent>
+  extends SpecificDatumWriter<T> {
+
+  private T persistent = null;
+
+  private boolean writeDirtyBits = true;
+
+  public PersistentDatumWriter() {
+  }
+
+  public PersistentDatumWriter(Schema schema, boolean writeDirtyBits) {
+    setSchema(schema);
+    this.writeDirtyBits = writeDirtyBits;
+  }
+
+  public void setPersistent(T persistent) {
+    this.persistent = persistent;
+  }
+
+  /** Exposed as public so that fields can be written individually. */
+  @Override
+  public void write(Schema schema, Object datum, Encoder out)
+      throws IOException {
+    super.write(schema, datum, out);
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  protected void writeRecord(Schema schema, Object datum, Encoder out)
+      throws IOException {
+
+    if(persistent == null) {
+      persistent = (T) datum;
+    }
+
+    if (!writeDirtyBits) {
+      super.writeRecord(schema, datum, out);
+      return;
+    }
+
+    //check if top level schema
+    if(schema.equals(persistent.getSchema())) {
+      //write readable fields and dirty fields info
+      boolean[] dirtyFields = new boolean[schema.getFields().size()];
+      boolean[] readableFields = new boolean[schema.getFields().size()];
+      StateManager manager = persistent.getStateManager();
+
+      int i=0;
+      for (@SuppressWarnings("unused") Field field : schema.getFields()) {
+        dirtyFields[i] = manager.isDirty(persistent, i);
+        readableFields[i] = manager.isReadable(persistent, i);
+        i++;
+      }
+
+      IOUtils.writeBoolArray(out, dirtyFields);
+      IOUtils.writeBoolArray(out, readableFields);
+
+      for (Field field : schema.getFields()) {
+        if(readableFields[field.pos()]) {
+          write(field.schema(), getField(datum, field.name(), field.pos()), out);
+        }
+      }
+
+    } else {
+      super.writeRecord(schema, datum, out);
+    }
+  }
+
+  @Override
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  protected void writeMap(Schema schema, Object datum, Encoder out)
+      throws IOException {
+
+    if (writeDirtyBits) {
+      // write extra state information for maps
+      StatefulMap<Utf8, ?> map = (StatefulMap) datum;
+      out.writeInt(map.states().size());
+      for (Entry<Utf8, State> e2 : map.states().entrySet()) {
+        out.writeString(e2.getKey());
+        out.writeInt(e2.getValue().ordinal());
+      }
+    }
+    super.writeMap(schema, datum, out);
+  }
+
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/avro/mapreduce/FsInput.java b/trunk/gora-core/src/main/java/org/apache/gora/avro/mapreduce/FsInput.java
new file mode 100644
index 0000000..9d5eabf
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/avro/mapreduce/FsInput.java
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.mapreduce;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+import org.apache.avro.file.SeekableInput;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.Path;
+
+/*
+ * Copied from Avro trunk; once Avro 1.4 is released and Gora
+ * switches to it, remove this file.
+ */
+
+/** Adapt an {@link FSDataInputStream} to {@link SeekableInput}. */
+public class FsInput implements Closeable, SeekableInput {
+  private final FSDataInputStream stream;
+  private final long len;
+
+  /** Construct given a path and a configuration. */
+  public FsInput(Path path, Configuration conf) throws IOException {
+    this.stream = path.getFileSystem(conf).open(path);
+    this.len = path.getFileSystem(conf).getFileStatus(path).getLen();
+  }
+
+  public long length() {
+    return len;
+  }
+
+  public int read(byte[] b, int off, int len) throws IOException {
+    return stream.read(b, off, len);
+  }
+
+  public void seek(long p) throws IOException {
+    stream.seek(p);
+  }
+
+  public long tell() throws IOException {
+    return stream.getPos();
+  }
+
+  public void close() throws IOException {
+    stream.close();
+  }
+}
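FsInput above simply forwards length/read/seek/tell to an FSDataInputStream. The same SeekableInput shape can be sketched over an in-memory buffer, which is handy for exercising the adapter contract without HDFS. The class name ByteArraySeekableSketch is hypothetical and not part of Gora or Avro:

```java
/** Standalone sketch of the SeekableInput adapter pattern, over a byte[]. */
public class ByteArraySeekableSketch {
  private final byte[] data;
  private int pos;

  public ByteArraySeekableSketch(byte[] data) {
    this.data = data;
  }

  /** Total length of the input, like FsInput.length(). */
  public long length() {
    return data.length;
  }

  /** Move the read position, like FsInput.seek(long). */
  public void seek(long p) {
    pos = (int) p;
  }

  /** Current read position, like FsInput.tell(). */
  public long tell() {
    return pos;
  }

  /** Read up to len bytes into b at off; returns -1 at end of input. */
  public int read(byte[] b, int off, int len) {
    if (pos >= data.length) {
      return -1;
    }
    int n = Math.min(len, data.length - pos);
    System.arraycopy(data, pos, b, off, n);
    pos += n;
    return n;
  }
}
```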
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/avro/query/AvroQuery.java b/trunk/gora-core/src/main/java/org/apache/gora/avro/query/AvroQuery.java
new file mode 100644
index 0000000..6667eef
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/avro/query/AvroQuery.java
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.query;
+
+import org.apache.gora.avro.store.AvroStore;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.impl.QueryBase;
+
+/**
+ * A simple Query implementation for Avro. Due to the data model,
+ * most Query operations, such as setting start and end keys, are
+ * not supported. Setting a query limit is supported.
+ */
+public class AvroQuery<K, T extends Persistent> extends QueryBase<K,T> {
+
+  public AvroQuery() {
+    super(null);
+  }
+  
+  public AvroQuery(AvroStore<K,T> dataStore) {
+    super(dataStore);
+  }
+  
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/avro/query/AvroResult.java b/trunk/gora-core/src/main/java/org/apache/gora/avro/query/AvroResult.java
new file mode 100644
index 0000000..2492c6d
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/avro/query/AvroResult.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.query;
+
+import java.io.EOFException;
+import java.io.IOException;
+
+import org.apache.avro.AvroTypeException;
+import org.apache.avro.io.DatumReader;
+import org.apache.avro.io.Decoder;
+import org.apache.gora.avro.store.AvroStore;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.impl.ResultBase;
+
+/**
+ * An adapter that presents an Avro DatumReader as a Result.
+ */
+public class AvroResult<K, T extends Persistent> extends ResultBase<K, T> {
+
+  private DatumReader<T> reader;
+  private Decoder decoder;
+  
+  public AvroResult(AvroStore<K,T> dataStore, AvroQuery<K,T> query
+      , DatumReader<T> reader, Decoder decoder) {
+    super(dataStore, query);
+    this.reader = reader;
+    this.decoder = decoder;
+  }
+
+  @Override
+  public void close() throws IOException {
+  }
+
+  @Override
+  public float getProgress() throws IOException {
+    //TODO: FIXME
+    return 0;
+  }
+
+  @Override
+  public boolean nextInner() throws IOException {
+    try {
+      persistent = reader.read(persistent, decoder);
+      
+    } catch (AvroTypeException ex) {
+      //TODO: Avro does not seem to signal end-of-file gracefully by
+      //returning null; it throws AvroTypeException instead. Report upstream.
+      return false;
+    } catch (EOFException ex) {
+      return false;
+    }
+    
+    return persistent != null;
+  }  
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/avro/query/DataFileAvroResult.java b/trunk/gora-core/src/main/java/org/apache/gora/avro/query/DataFileAvroResult.java
new file mode 100644
index 0000000..650b830
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/avro/query/DataFileAvroResult.java
@@ -0,0 +1,85 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.query;
+
+import java.io.IOException;
+
+import org.apache.avro.file.DataFileReader;
+import org.apache.avro.file.SeekableInput;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.ResultBase;
+import org.apache.gora.store.DataStore;
+
+/**
+ * An Avro {@link DataFileReader} backed Result.
+ */
+public class DataFileAvroResult<K, T extends Persistent> extends ResultBase<K, T> {
+
+  private SeekableInput in;
+  private DataFileReader<T> reader;
+  private long start;
+  private long end;
+  
+  public DataFileAvroResult(DataStore<K, T> dataStore, Query<K, T> query
+      , DataFileReader<T> reader) 
+  throws IOException {
+    this(dataStore, query, reader, null, 0, 0);
+  }
+  
+  public DataFileAvroResult(DataStore<K, T> dataStore, Query<K, T> query
+      , DataFileReader<T> reader, SeekableInput in, long start, long length) 
+  throws IOException {
+    super(dataStore, query);
+    this.reader = reader;
+    this.start = start;
+    this.end = start + length;
+    this.in = in;
+    if(start > 0) {
+      reader.sync(start);
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    if(reader != null)
+      reader.close();
+    reader = null;
+  }
+
+  @Override
+  public float getProgress() throws IOException {
+    if (end == start) {
+      return 0.0f;
+    } else {
+      return Math.min(1.0f, (in.tell() - start) / (float)(end - start));
+    }
+  }
+
+  @Override
+  public boolean nextInner() throws IOException {
+    if (!reader.hasNext())
+      return false;
+    if(end > 0 && reader.pastSync(end))
+      return false;
+    persistent = reader.next(persistent);
+    return true;
+  }
+  
+}
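The getProgress arithmetic in DataFileAvroResult is a pure function of three byte offsets and is easy to check in isolation. A minimal standalone sketch (the class name ProgressSketch is hypothetical):

```java
/** Standalone sketch of DataFileAvroResult's progress computation. */
public class ProgressSketch {
  /** Fraction of the [start, end) byte range consumed, clamped to 1.0. */
  public static float progress(long start, long end, long pos) {
    if (end == start) {
      return 0.0f;  // empty split: report no progress
    }
    return Math.min(1.0f, (pos - start) / (float) (end - start));
  }
}
```

The clamp matters because the reader may sync past the split's end offset, which would otherwise report a progress above 1.0.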
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/avro/store/AvroStore.java b/trunk/gora-core/src/main/java/org/apache/gora/avro/store/AvroStore.java
new file mode 100644
index 0000000..35a9a2e
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/avro/store/AvroStore.java
@@ -0,0 +1,251 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.store;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Properties;
+
+import org.apache.avro.io.BinaryDecoder;
+import org.apache.avro.io.BinaryEncoder;
+import org.apache.avro.io.DatumReader;
+import org.apache.avro.io.DatumWriter;
+import org.apache.avro.io.Decoder;
+import org.apache.avro.io.Encoder;
+import org.apache.avro.io.JsonDecoder;
+import org.apache.avro.io.JsonEncoder;
+import org.apache.avro.specific.SpecificDatumReader;
+import org.apache.avro.specific.SpecificDatumWriter;
+import org.apache.gora.avro.query.AvroQuery;
+import org.apache.gora.avro.query.AvroResult;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.query.impl.FileSplitPartitionQuery;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.impl.FileBackedDataStoreBase;
+import org.apache.gora.util.OperationNotSupportedException;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * An adapter DataStore for binary-compatible Avro serializations.
+ * AvroStore supports both binary and JSON encodings.
+ * @param <K> the key class
+ * @param <T> the persistent class
+ */
+public class AvroStore<K, T extends Persistent>
+  extends FileBackedDataStoreBase<K, T> implements Configurable {
+
+  /** The property key specifying which Avro encoder/decoder type to use.
+   * Accepts the values "BINARY" or "JSON". */
+  public static final String CODEC_TYPE_KEY = "codec.type";
+
+  /**
+   * The type of the avro Encoder/Decoder.
+   */
+  public static enum CodecType {
+    /** Avro binary encoder */
+    BINARY,
+    /** Avro JSON encoder */
+    JSON,
+  }
+
+  private DatumReader<T> datumReader;
+  private DatumWriter<T> datumWriter;
+  private Encoder encoder;
+  private Decoder decoder;
+
+  private CodecType codecType = CodecType.BINARY;
+
+  @Override
+  public void initialize(Class<K> keyClass, Class<T> persistentClass,
+      Properties properties) throws IOException {
+    super.initialize(keyClass, persistentClass, properties);
+
+    if(properties != null) {
+      // resolve the codec type from properties, defaulting to BINARY
+      String codecTypeProperty = DataStoreFactory.findProperty(
+          properties, this, CODEC_TYPE_KEY, "BINARY");
+      this.codecType = CodecType.valueOf(codecTypeProperty);
+    }
+  }
+
+  public void setCodecType(CodecType codecType) {
+    this.codecType = codecType;
+  }
+
+  public void setEncoder(Encoder encoder) {
+    this.encoder = encoder;
+  }
+
+  public void setDecoder(Decoder decoder) {
+    this.decoder = decoder;
+  }
+
+  public void setDatumReader(DatumReader<T> datumReader) {
+    this.datumReader = datumReader;
+  }
+
+  public void setDatumWriter(DatumWriter<T> datumWriter) {
+    this.datumWriter = datumWriter;
+  }
+
+  @Override
+  public void close() throws IOException {
+    super.close();
+    if(encoder != null) {
+      encoder.flush();
+    }
+    encoder = null;
+    decoder = null;
+  }
+
+  @Override
+  public boolean delete(K key) throws IOException {
+    throw new OperationNotSupportedException("delete is not supported for AvroStore");
+  }
+
+  @Override
+  public long deleteByQuery(Query<K, T> query) throws IOException {
+    throw new OperationNotSupportedException("deleteByQuery is not supported for AvroStore");
+  }
+
+  /**
+   * Executes a normal Query, reading the whole data. #execute() calls this
+   * method for queries that are not PartitionQuery instances.
+   */
+  @Override
+  protected Result<K,T> executeQuery(Query<K,T> query) throws IOException {
+    return new AvroResult<K,T>(this, (AvroQuery<K,T>)query,
+        getDatumReader(), getDecoder());
+  }
+
+  /**
+   * Executes a partial query, reading the data between start and end.
+   */
+  @Override
+  protected Result<K,T> executePartial(FileSplitPartitionQuery<K,T> query)
+  throws IOException {
+    throw new OperationNotSupportedException("Not yet implemented");
+  }
+
+  @Override
+  public void flush() throws IOException {
+    super.flush();
+    if(encoder != null)
+      encoder.flush();
+  }
+
+  @Override
+  public T get(K key, String[] fields) throws IOException {
+    throw new OperationNotSupportedException();
+  }
+
+  @Override
+  public AvroQuery<K,T> newQuery() {
+    return new AvroQuery<K,T>(this);
+  }
+
+  @Override
+  public void put(K key, T obj) throws IOException {
+    getDatumWriter().write(obj, getEncoder());
+  }
+
+  public Encoder getEncoder() throws IOException {
+    if(encoder == null) {
+      encoder = createEncoder();
+    }
+    return encoder;
+  }
+
+  public Decoder getDecoder() throws IOException {
+    if(decoder == null) {
+      decoder = createDecoder();
+    }
+    return decoder;
+  }
+
+  public DatumReader<T> getDatumReader() {
+    if(datumReader == null) {
+      datumReader = createDatumReader();
+    }
+    return datumReader;
+  }
+
+  public DatumWriter<T> getDatumWriter() {
+    if(datumWriter == null) {
+      datumWriter = createDatumWriter();
+    }
+    return datumWriter;
+  }
+
+  protected Encoder createEncoder() throws IOException {
+    switch(codecType) {
+      case BINARY:
+        return new BinaryEncoder(getOrCreateOutputStream());
+      case JSON:
+        return new JsonEncoder(schema, getOrCreateOutputStream());
+    }
+    return null;
+  }
+
+  @SuppressWarnings("deprecation")
+  protected Decoder createDecoder() throws IOException {
+    switch(codecType) {
+      case BINARY:
+        return new BinaryDecoder(getOrCreateInputStream());
+      case JSON:
+        return new JsonDecoder(schema, getOrCreateInputStream());
+    }
+    return null;
+  }
+
+  protected DatumWriter<T> createDatumWriter() {
+    return new SpecificDatumWriter<T>(schema);
+  }
+
+  protected DatumReader<T> createDatumReader() {
+    return new SpecificDatumReader<T>(schema);
+  }
+
+  @Override
+  public Configuration getConf() {
+    if(conf == null) {
+      conf = new Configuration();
+    }
+    return conf;
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    super.write(out);
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    super.readFields(in);
+  }
+
+  @Override
+  public String getSchemaName() {
+    return "default";
+  }
+}
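AvroStore resolves its codec from the "codec.type" property via DataStoreFactory.findProperty, defaulting to BINARY. The resolution logic can be sketched standalone with plain java.util.Properties; the class name CodecSelectionSketch is hypothetical, and the real store goes through DataStoreFactory rather than Properties.getProperty:

```java
import java.util.Properties;

/** Standalone sketch of AvroStore's codec selection, defaulting to BINARY. */
public class CodecSelectionSketch {
  /** Mirrors AvroStore.CodecType. */
  public enum CodecType { BINARY, JSON }

  /** Resolve "codec.type" from properties, falling back to BINARY. */
  public static CodecType resolve(Properties props) {
    String value = props.getProperty("codec.type", "BINARY");
    return CodecType.valueOf(value);
  }
}
```

CodecType.valueOf throws IllegalArgumentException for any value other than "BINARY" or "JSON", so a misconfigured store fails fast at initialization rather than at read/write time.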
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/avro/store/DataFileAvroStore.java b/trunk/gora-core/src/main/java/org/apache/gora/avro/store/DataFileAvroStore.java
new file mode 100644
index 0000000..6af81a8
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/avro/store/DataFileAvroStore.java
@@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.store;
+
+import java.io.IOException;
+
+import org.apache.avro.file.DataFileReader;
+import org.apache.avro.file.DataFileWriter;
+import org.apache.gora.avro.mapreduce.FsInput;
+import org.apache.gora.avro.query.DataFileAvroResult;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.query.impl.FileSplitPartitionQuery;
+import org.apache.gora.util.OperationNotSupportedException;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * DataFileAvroStore is a file-based store which uses Avro's
+ * DataFileWriter and DataFileReader as its backend. This datastore
+ * supports MapReduce.
+ */
+public class DataFileAvroStore<K, T extends Persistent> extends AvroStore<K, T> {
+
+  public DataFileAvroStore() {
+  }
+  
+  private DataFileWriter<T> writer;
+  
+  @Override
+  public T get(K key, String[] fields) throws java.io.IOException {
+    throw new OperationNotSupportedException(
+        "Avro data files do not support indexed retrieval");
+  }
+  
+  @Override
+  public void put(K key, T obj) throws java.io.IOException {
+    getWriter().append(obj);
+  }
+  
+  private DataFileWriter<T> getWriter() throws IOException {
+    if(writer == null) {
+      writer = new DataFileWriter<T>(getDatumWriter());
+      writer.create(schema, getOrCreateOutputStream());
+    }
+    return writer;
+  }
+  
+  @Override
+  protected Result<K, T> executeQuery(Query<K, T> query) throws IOException {
+    return new DataFileAvroResult<K, T>(this, query
+        , createReader(createFsInput()));
+  }
+ 
+  @Override
+  protected Result<K,T> executePartial(FileSplitPartitionQuery<K,T> query) 
+    throws IOException {
+    FsInput fsInput = createFsInput();
+    DataFileReader<T> reader = createReader(fsInput);
+    return new DataFileAvroResult<K, T>(this, query, reader, fsInput
+        , query.getStart(), query.getLength());
+  }
+  
+  private DataFileReader<T> createReader(FsInput fsInput) throws IOException {
+    return new DataFileReader<T>(fsInput, getDatumReader());
+  }
+  
+  private FsInput createFsInput() throws IOException {
+    Path path = new Path(getInputPath());
+    return new FsInput(path, getConf());
+  }
+  
+  @Override
+  public void flush() throws IOException {
+    super.flush();
+    if(writer != null) {
+      writer.flush();
+    }
+  }
+  
+  @Override
+  public void close() throws IOException {
+    if(writer != null) {
+      //Hadoop 0.20.2 HDFS streams do not allow closing twice,
+      //so close the writer before the underlying stream
+      writer.close();
+    }
+    writer = null;
+    super.close();
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/compiler/GoraCompiler.java b/trunk/gora-core/src/main/java/org/apache/gora/compiler/GoraCompiler.java
new file mode 100644
index 0000000..47a17c7
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/compiler/GoraCompiler.java
@@ -0,0 +1,458 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.compiler;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.avro.Protocol;
+import org.apache.avro.Schema;
+import org.apache.avro.Protocol.Message;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.specific.SpecificData;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Generate specific Java interfaces and classes for protocols and schemas. */
+public class GoraCompiler {
+  private File dest;
+  private Writer out;
+  private Set<Schema> queue = new HashSet<Schema>();
+  private static final Logger log = LoggerFactory.getLogger(GoraCompiler.class);
+
+  private GoraCompiler(File dest) {
+    this.dest = dest;                             // root directory for output
+  }
+
+  /** Generates Java interface and classes for a protocol.
+   * @param src the source Avro protocol file
+   * @param dest the directory to place generated files in
+   */
+  public static void compileProtocol(File src, File dest) throws IOException {
+    GoraCompiler compiler = new GoraCompiler(dest);
+    Protocol protocol = Protocol.parse(src);
+    for (Schema s : protocol.getTypes())          // enqueue types
+      compiler.enqueue(s);
+    compiler.compileInterface(protocol);          // generate interface
+    compiler.compile();                           // generate classes for types
+  }
+
+  /** Generates Java classes for a schema. */
+  public static void compileSchema(File src, File dest) throws IOException {
+    log.info("Compiling " + src + " to " + dest);
+    GoraCompiler compiler = new GoraCompiler(dest);
+    compiler.enqueue(Schema.parse(src));          // enqueue types
+    compiler.compile();                           // generate classes for types
+  }
+
+  private static String camelCasify(String s) {
+    return s.substring(0, 1).toUpperCase() + s.substring(1);
+  }
+
+  /** Converts a camelCase name to an UPPER_CASE constant name,
+   * inserting underscores at case boundaries. */
+  private static String toUpperCase(String s) {
+    StringBuilder builder = new StringBuilder();
+
+    for(int i=0; i<s.length(); i++) {
+      if(i > 0) {
+        if(Character.isUpperCase(s.charAt(i))
+         && Character.isLowerCase(s.charAt(i-1))
+         && Character.isLetter(s.charAt(i))) {
+          builder.append("_");
+        }
+      }
+      builder.append(Character.toUpperCase(s.charAt(i)));
+    }
+
+    return builder.toString();
+  }
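The two name-mangling helpers above are pure string functions and can be verified standalone. The class name NameMangleSketch is hypothetical; the bodies are copies of GoraCompiler's camelCasify and toUpperCase:

```java
/** Standalone copies of GoraCompiler's name-mangling helpers. */
public class NameMangleSketch {
  /** Upper-cases the first character, e.g. for bean-style accessor names. */
  public static String camelCasify(String s) {
    return s.substring(0, 1).toUpperCase() + s.substring(1);
  }

  /** Converts camelCase to an UPPER_CASE constant name for the Field enum. */
  public static String toUpperCase(String s) {
    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
      // insert an underscore at each lower-to-upper case boundary
      if (i > 0 && Character.isUpperCase(s.charAt(i))
          && Character.isLowerCase(s.charAt(i - 1))
          && Character.isLetter(s.charAt(i))) {
        builder.append('_');
      }
      builder.append(Character.toUpperCase(s.charAt(i)));
    }
    return builder.toString();
  }
}
```

Note that a run of consecutive upper-case characters is left as one token, so an all-caps field name passes through unchanged apart from the (no-op) upper-casing.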
+
+  /** Recursively enqueue schemas that need a class generated. */
+  private void enqueue(Schema schema) throws IOException {
+    if (queue.contains(schema)) return;
+    switch (schema.getType()) {
+    case RECORD:
+      queue.add(schema);
+      for (Field field : schema.getFields())
+        enqueue(field.schema());
+      break;
+    case MAP:
+      enqueue(schema.getValueType());
+      break;
+    case ARRAY:
+      enqueue(schema.getElementType());
+      break;
+    case UNION:
+      for (Schema s : schema.getTypes())
+        enqueue(s);
+      break;
+    case ENUM:
+    case FIXED:
+      queue.add(schema);
+      break;
+    case STRING: case BYTES:
+    case INT: case LONG:
+    case FLOAT: case DOUBLE:
+    case BOOLEAN: case NULL:
+      break;
+    default: throw new RuntimeException("Unknown type: "+schema);
+    }
+  }
+
+  /** Generate java classes for enqueued schemas. */
+  private void compile() throws IOException {
+    for (Schema schema : queue)
+      compile(schema);
+  }
+
+  private void compileInterface(Protocol protocol) throws IOException {
+    startFile(protocol.getName(), protocol.getNamespace());
+    try {
+      line(0, "public interface "+protocol.getName()+" {");
+
+      out.append("\n");
+      for (Map.Entry<String,Message> e : protocol.getMessages().entrySet()) {
+        String name = e.getKey();
+        Message message = e.getValue();
+        Schema request = message.getRequest();
+        Schema response = message.getResponse();
+        line(1, unbox(response)+" "+name+"("+params(request)+")");
+        line(2,"throws AvroRemoteException"+errors(message.getErrors())+";");
+      }
+      line(0, "}");
+    } finally {
+      out.close();
+    }
+  }
+
+  private void startFile(String name, String space) throws IOException {
+    File dir = new File(dest, space.replace('.', File.separatorChar));
+    if (!dir.exists())
+      if (!dir.mkdirs())
+        throw new IOException("Unable to create " + dir);
+    name = cap(name) + ".java";
+    out = new OutputStreamWriter(new FileOutputStream(new File(dir, name)));
+    header(space);
+  }
+
+  private void header(String namespace) throws IOException {
+    if(namespace != null) {
+      line(0, "package "+namespace+";\n");
+    }
+    line(0, "import java.nio.ByteBuffer;");
+    line(0, "import java.util.Map;");
+    line(0, "import java.util.HashMap;");
+    line(0, "import org.apache.avro.Protocol;");
+    line(0, "import org.apache.avro.Schema;");
+    line(0, "import org.apache.avro.AvroRuntimeException;");
+    line(0, "import org.apache.avro.Protocol;");
+    line(0, "import org.apache.avro.util.Utf8;");
+    line(0, "import org.apache.avro.ipc.AvroRemoteException;");
+    line(0, "import org.apache.avro.generic.GenericArray;");
+    line(0, "import org.apache.avro.specific.FixedSize;");
+    line(0, "import org.apache.avro.specific.SpecificExceptionBase;");
+    line(0, "import org.apache.avro.specific.SpecificRecordBase;");
+    line(0, "import org.apache.avro.specific.SpecificRecord;");
+    line(0, "import org.apache.avro.specific.SpecificFixed;");
+    line(0, "import org.apache.gora.persistency.StateManager;");
+    line(0, "import org.apache.gora.persistency.impl.PersistentBase;");
+    line(0, "import org.apache.gora.persistency.impl.StateManagerImpl;");
+    line(0, "import org.apache.gora.persistency.StatefulHashMap;");
+    line(0, "import org.apache.gora.persistency.ListGenericArray;");
+    for (Schema s : queue)
+      if (namespace == null
+          ? (s.getNamespace() != null)
+          : !namespace.equals(s.getNamespace()))
+        line(0, "import "+SpecificData.get().getClassName(s)+";");
+    line(0, "");
+    line(0, "@SuppressWarnings(\"all\")");
+  }
+
+  private String params(Schema request) throws IOException {
+    StringBuilder b = new StringBuilder();
+    int count = 0;
+    for (Field field : request.getFields()) {
+      b.append(unbox(field.schema()));
+      b.append(" ");
+      b.append(field.name());
+      if (++count < request.getFields().size())
+        b.append(", ");
+    }
+    return b.toString();
+  }
+
+  private String errors(Schema errs) throws IOException {
+    StringBuilder b = new StringBuilder();
+    for (Schema error : errs.getTypes().subList(1, errs.getTypes().size())) {
+      b.append(", ");
+      b.append(error.getName());
+    }
+    return b.toString();
+  }
+
+  private void compile(Schema schema) throws IOException {
+    startFile(schema.getName(), schema.getNamespace());
+    try {
+      switch (schema.getType()) {
+      case RECORD:
+        String type = type(schema);
+        line(0, "public class "+ type
+             +" extends PersistentBase {");
+        // schema definition
+        line(1, "public static final Schema _SCHEMA = Schema.parse(\""
+             +esc(schema)+"\");");
+
+        //field information
+        line(1, "public static enum Field {");
+        int i=0;
+        for (Field field : schema.getFields()) {
+          line(2,toUpperCase(field.name())+"("+(i++)+ ",\"" + field.name() + "\"),");
+        }
+        line(2, ";");
+        line(2, "private int index;");
+        line(2, "private String name;");
+        line(2, "Field(int index, String name) {this.index=index;this.name=name;}");
+        line(2, "public int getIndex() {return index;}");
+        line(2, "public String getName() {return name;}");
+        line(2, "public String toString() {return name;}");
+        line(1, "};");
+
+        StringBuilder builder = new StringBuilder(
+        "public static final String[] _ALL_FIELDS = {");
+        for (Field field : schema.getFields()) {
+          builder.append("\"").append(field.name()).append("\",");
+        }
+        builder.append("};");
+        line(1, builder.toString());
+
+        line(1, "static {");
+        line(2, "PersistentBase.registerFields("+type+".class, _ALL_FIELDS);");
+        line(1, "}");
+
+        // field declarations
+        for (Field field : schema.getFields()) {
+          line(1,"private "+unbox(field.schema())+" "+field.name()+";");
+        }
+
+        //constructors
+        line(1, "public " + type + "() {");
+        line(2, "this(new StateManagerImpl());");
+        line(1, "}");
+        line(1, "public " + type + "(StateManager stateManager) {");
+        line(2, "super(stateManager);");
+        for (Field field : schema.getFields()) {
+          Schema fieldSchema = field.schema();
+          switch (fieldSchema.getType()) {
+          case ARRAY:
+            String valueType = type(fieldSchema.getElementType());
+            line(2, field.name()+" = new ListGenericArray<"+valueType+">(getSchema()" +
+                ".getField(\""+field.name()+"\").schema());");
+            break;
+          case MAP:
+            valueType = type(fieldSchema.getValueType());
+            line(2, field.name()+" = new StatefulHashMap<Utf8,"+valueType+">();");
+          }
+        }
+        line(1, "}");
+
+        //newInstance(StateManager)
+        line(1, "public " + type + " newInstance(StateManager stateManager) {");
+        line(2, "return new " + type + "(stateManager);" );
+        line(1, "}");
+
+        // schema method
+        line(1, "public Schema getSchema() { return _SCHEMA; }");
+        // get method
+        line(1, "public Object get(int _field) {");
+        line(2, "switch (_field) {");
+        i = 0;
+        for (Field field : schema.getFields()) {
+          line(2, "case "+(i++)+": return "+field.name()+";");
+        }
+        line(2, "default: throw new AvroRuntimeException(\"Bad index\");");
+        line(2, "}");
+        line(1, "}");
+        // put method
+        line(1, "@SuppressWarnings(value=\"unchecked\")");
+        line(1, "public void put(int _field, Object _value) {");
+        line(2, "if(isFieldEqual(_field, _value)) return;");
+        line(2, "getStateManager().setDirty(this, _field);");
+        line(2, "switch (_field) {");
+        i = 0;
+        for (Field field : schema.getFields()) {
+          line(2, "case "+i+":"+field.name()+" = ("+
+               type(field.schema())+")_value; break;");
+          i++;
+        }
+        line(2, "default: throw new AvroRuntimeException(\"Bad index\");");
+        line(2, "}");
+        line(1, "}");
+
+        // java bean style getters and setters
+        i = 0;
+        for (Field field : schema.getFields()) {
+          String camelKey = camelCasify(field.name());
+          Schema fieldSchema = field.schema();
+          switch (fieldSchema.getType()) {
+          case INT:case LONG:case FLOAT:case DOUBLE:
+          case BOOLEAN:case BYTES:case STRING: case ENUM: case RECORD:
+          case FIXED:
+            String unboxed = unbox(fieldSchema);
+            String fieldType = type(fieldSchema);
+            line(1, "public "+unboxed+" get" +camelKey+"() {");
+            line(2, "return ("+fieldType+") get("+i+");");
+            line(1, "}");
+            line(1, "public void set"+camelKey+"("+unboxed+" value) {");
+            line(2, "put("+i+", value);");
+            line(1, "}");
+            break;
+          case ARRAY:
+            unboxed = unbox(fieldSchema.getElementType());
+            fieldType = type(fieldSchema.getElementType());
+            line(1, "public GenericArray<"+fieldType+"> get"+camelKey+"() {");
+            line(2, "return (GenericArray<"+fieldType+">) get("+i+");");
+            line(1, "}");
+            line(1, "public void addTo"+camelKey+"("+unboxed+" element) {");
+            line(2, "getStateManager().setDirty(this, "+i+");");
+            line(2, field.name()+".add(element);");
+            line(1, "}");
+            break;
+          case MAP:
+            unboxed = unbox(fieldSchema.getValueType());
+            fieldType = type(fieldSchema.getValueType());
+            line(1, "public Map<Utf8, "+fieldType+"> get"+camelKey+"() {");
+            line(2, "return (Map<Utf8, "+fieldType+">) get("+i+");");
+            line(1, "}");
+            line(1, "public "+fieldType+" getFrom"+camelKey+"(Utf8 key) {");
+            line(2, "if ("+field.name()+" == null) { return null; }");
+            line(2, "return "+field.name()+".get(key);");
+            line(1, "}");
+            line(1, "public void putTo"+camelKey+"(Utf8 key, "+unboxed+" value) {");
+            line(2, "getStateManager().setDirty(this, "+i+");");
+            line(2, field.name()+".put(key, value);");
+            line(1, "}");
+            line(1, "public "+fieldType+" removeFrom"+camelKey+"(Utf8 key) {");
+            line(2, "if ("+field.name()+" == null) { return null; }");
+            line(2, "getStateManager().setDirty(this, "+i+");");
+            line(2, "return "+field.name()+".remove(key);");
+            line(1, "}");
+          }
+          i++;
+        }
+        line(0, "}");
+
+        break;
+      case ENUM:
+        line(0, "public enum "+type(schema)+" { ");
+        StringBuilder b = new StringBuilder();
+        int count = 0;
+        for (String symbol : schema.getEnumSymbols()) {
+          b.append(symbol);
+          if (++count < schema.getEnumSymbols().size())
+            b.append(", ");
+        }
+        line(1, b.toString());
+        line(0, "}");
+        break;
+      case FIXED:
+        line(0, "@FixedSize("+schema.getFixedSize()+")");
+        line(0, "public class "+type(schema)+" extends SpecificFixed {}");
+        break;
+      case MAP: case ARRAY: case UNION: case STRING: case BYTES:
+      case INT: case LONG: case FLOAT: case DOUBLE: case BOOLEAN: case NULL:
+        break;
+      default: throw new RuntimeException("Unknown type: "+schema);
+      }
+    } finally {
+      out.close();
+    }
+  }
+
+  private static final Schema NULL_SCHEMA = Schema.create(Schema.Type.NULL);
+
+  public static String type(Schema schema) {
+    switch (schema.getType()) {
+    case RECORD:
+    case ENUM:
+    case FIXED:
+      return schema.getName();
+    case ARRAY:
+      return "GenericArray<"+type(schema.getElementType())+">";
+    case MAP:
+      return "Map<Utf8,"+type(schema.getValueType())+">";
+    case UNION:
+      List<Schema> types = schema.getTypes();     // elide unions with null
+      if ((types.size() == 2) && types.contains(NULL_SCHEMA))
+        return type(types.get(types.get(0).equals(NULL_SCHEMA) ? 1 : 0));
+      return "Object";
+    case STRING:  return "Utf8";
+    case BYTES:   return "ByteBuffer";
+    case INT:     return "Integer";
+    case LONG:    return "Long";
+    case FLOAT:   return "Float";
+    case DOUBLE:  return "Double";
+    case BOOLEAN: return "Boolean";
+    case NULL:    return "Void";
+    default: throw new RuntimeException("Unknown type: "+schema);
+    }
+  }
+
+  public static String unbox(Schema schema) {
+    switch (schema.getType()) {
+    case INT:     return "int";
+    case LONG:    return "long";
+    case FLOAT:   return "float";
+    case DOUBLE:  return "double";
+    case BOOLEAN: return "boolean";
+    default:      return type(schema);
+    }
+  }
+
+  private void line(int indent, String text) throws IOException {
+    for (int i = 0; i < indent; i ++) {
+      out.append("  ");
+    }
+    out.append(text);
+    out.append("\n");
+  }
+
+  static String cap(String name) {
+    return name.substring(0,1).toUpperCase()+name.substring(1,name.length());
+  }
+
+  private static String esc(Object o) {
+    return o.toString().replace("\"", "\\\"");
+  }
+
+  public static void main(String[] args) throws Exception {
+    if (args.length < 2) {
+      System.err.println("Usage: Compiler <schema file> <output dir>");
+      System.exit(1);
+    }
+    compileSchema(new File(args[0]), new File(args[1]));
+  }
+
+}
+
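Editor's note: the compiler above emits classes whose `put()` marks the written field index dirty before assigning it. The following is a minimal, self-contained sketch of that dirty-tracking pattern; `Employee` and its `BitSet`-based state tracking are illustrative stand-ins, not Gora's generated code or its `StateManager`.

```java
import java.util.BitSet;

// Simplified sketch of the dirty-field-tracking pattern the compiler emits.
// A real generated class delegates dirty state to a Gora StateManager.
public class Employee {
    private final BitSet dirtyBits = new BitSet();
    private CharSequence name;   // field index 0
    private int salary;          // field index 1

    // Mirrors the generated put(): mark the field dirty, then switch on index.
    public void put(int field, Object value) {
        dirtyBits.set(field);
        switch (field) {
        case 0: name = (CharSequence) value; break;
        case 1: salary = (Integer) value; break;
        default: throw new RuntimeException("Bad index: " + field);
        }
    }

    // Mirrors the generated get(): positional access by field index.
    public Object get(int field) {
        switch (field) {
        case 0: return name;
        case 1: return salary;
        default: throw new RuntimeException("Bad index: " + field);
        }
    }

    public boolean isDirty(int field) { return dirtyBits.get(field); }

    public static void main(String[] args) {
        Employee e = new Employee();
        e.put(1, 4000);
        System.out.println(e.get(1));      // 4000
        System.out.println(e.isDirty(1));  // true
        System.out.println(e.isDirty(0));  // false
    }
}
```

The dirty bits are what let a datastore write back only the fields that actually changed.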
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/FakeResolvingDecoder.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/FakeResolvingDecoder.java
new file mode 100644
index 0000000..5fed0c2
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/FakeResolvingDecoder.java
@@ -0,0 +1,170 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+
+import org.apache.avro.Schema;
+import org.apache.avro.io.Decoder;
+import org.apache.avro.io.ResolvingDecoder;
+import org.apache.avro.io.parsing.Symbol;
+import org.apache.avro.util.Utf8;
+
+/**
+ * Avro uses a ResolvingDecoder which resolves two schemas and converts records 
+ * written by one to the other, and validates the input. However, Gora needs to 
+ * write extra information along with the data, so the validation is not consistent 
+ * with the grammar generated by Avro. So we need to fake the ResolvingDecoder (which
+ * is, unfortunately, hard-coded into GenericDatumReader) until we can write our own
+ * GrammarGenerator extending Avro's ResolvingGrammarGenerator.
+ */
+public class FakeResolvingDecoder extends ResolvingDecoder {
+
+  public FakeResolvingDecoder(Schema schema, Decoder in) throws IOException {
+    super(schema, schema, in);
+  }
+  
+  @Override
+  public long arrayNext() throws IOException {
+    return in.arrayNext();
+  }
+  
+  @Override
+  public Symbol doAction(Symbol input, Symbol top) throws IOException {
+    return null;
+  }
+  
+  @Override
+  public void init(InputStream in) throws IOException {
+    this.in.init(in);
+  }
+  
+  @Override
+  public long mapNext() throws IOException {
+    return in.mapNext();
+  }
+
+  @Override
+  public double readDouble() throws IOException {
+    return in.readDouble();
+  }
+
+  @Override
+  public int readEnum() throws IOException {
+    return in.readEnum();
+  }
+
+  @Override
+  public int readIndex() throws IOException {
+    return in.readIndex();
+  }
+
+  @Override
+  public long readLong() throws IOException {
+    return in.readLong();
+  }
+
+  @Override
+  public void skipAction() throws IOException {
+  }
+
+  @Override
+  public long readArrayStart() throws IOException {
+    return in.readArrayStart();
+  }
+
+  @Override
+  public boolean readBoolean() throws IOException {
+    return in.readBoolean();
+  }
+
+  @Override
+  public ByteBuffer readBytes(ByteBuffer old) throws IOException {
+    return in.readBytes(old);
+  }
+
+  @Override
+  public void readFixed(byte[] bytes, int start, int len) throws IOException {
+    in.readFixed(bytes, start, len);
+  }
+
+  @Override
+  public float readFloat() throws IOException {
+    return in.readFloat();
+  }
+
+  @Override
+  public int readInt() throws IOException {
+    return in.readInt();
+  }
+
+  @Override
+  public long readMapStart() throws IOException {
+    return in.readMapStart();
+  }
+
+  @Override
+  public void readNull() throws IOException {
+    in.readNull();
+  }
+
+  @Override
+  public Utf8 readString(Utf8 old) throws IOException {
+    return in.readString(old);
+  }
+
+  @Override
+  public long skipArray() throws IOException {
+    return in.skipArray();
+  }
+
+  @Override
+  public void skipBytes() throws IOException {
+    in.skipBytes();
+  }
+
+  @Override
+  protected void skipFixed() throws IOException {
+  }
+
+  @Override
+  public void skipFixed(int length) throws IOException {
+    in.skipFixed(length);
+  }
+
+  @Override
+  public long skipMap() throws IOException {
+    return in.skipMap();
+  }
+
+  @Override
+  public void skipString() throws IOException {
+  }
+
+  @Override
+  public void skipTopSymbol() throws IOException {
+  }
+
+  @Override
+  public void readFixed(byte[] bytes) throws IOException {
+    in.readFixed(bytes);
+  }
+}
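Editor's note: FakeResolvingDecoder works by overriding every read method to forward directly to the wrapped decoder, so the resolving grammar never runs. The decorator shape can be sketched without Avro; `MiniDecoder` and `PassThroughDecoder` below are illustrative types, not the Avro API.

```java
// Illustrative pass-through decorator: every read*() call is forwarded
// unchanged to the wrapped decoder, so no schema-resolution checks run.
interface MiniDecoder {
    int readInt();
    String readString();
}

class PassThroughDecoder implements MiniDecoder {
    private final MiniDecoder in;
    PassThroughDecoder(MiniDecoder in) { this.in = in; }
    @Override public int readInt() { return in.readInt(); }          // no validation
    @Override public String readString() { return in.readString(); } // no validation
}

public class DecoderDemo {
    public static void main(String[] args) {
        MiniDecoder raw = new MiniDecoder() {
            public int readInt() { return 42; }
            public String readString() { return "gora"; }
        };
        MiniDecoder faked = new PassThroughDecoder(raw);
        System.out.println(faked.readInt());     // 42
        System.out.println(faked.readString());  // gora
    }
}
```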
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraInputFormat.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraInputFormat.java
new file mode 100644
index 0000000..a9737a1
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraInputFormat.java
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.FileSplitPartitionQuery;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.FileBackedDataStore;
+import org.apache.gora.util.IOUtils;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.FileSplit;
+
+/**
+ * {@link InputFormat} to fetch the input from Gora data stores. The
+ * query to fetch the items from the datastore should be prepared and
+ * set via {@link #setQuery(Job, Query)}, before submitting the job.
+ *
+ * <p> The {@link InputSplit}s are prepared from the {@link PartitionQuery}s
+ * obtained by calling {@link DataStore#getPartitions(Query)}.
+ * <p>
+ * Hadoop jobs can be configured either through the static 
+ * <code>setInput()</code> methods, or via {@link GoraMapper}.
+ * 
+ * @see GoraMapper
+ */
+public class GoraInputFormat<K, T extends Persistent>
+  extends InputFormat<K, T> implements Configurable {
+
+  public static final String QUERY_KEY   = "gora.inputformat.query";
+
+  private DataStore<K, T> dataStore;
+
+  private Configuration conf;
+
+  private Query<K, T> query;
+
+  @SuppressWarnings({ "rawtypes" })
+  private void setInputPath(PartitionQuery<K,T> partitionQuery
+      , TaskAttemptContext context) throws IOException {
+    //if the data store is file based
+    if(partitionQuery instanceof FileSplitPartitionQuery) {
+      FileSplit split = ((FileSplitPartitionQuery<K,T>)partitionQuery).getSplit();
+      //set the input path to FileSplit's path.
+      ((FileBackedDataStore)partitionQuery.getDataStore()).setInputPath(
+          split.getPath().toString());
+    }
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public RecordReader<K, T> createRecordReader(InputSplit split,
+      TaskAttemptContext context) throws IOException, InterruptedException {
+    PartitionQuery<K,T> partitionQuery = (PartitionQuery<K, T>)
+      ((GoraInputSplit)split).getQuery();
+
+    setInputPath(partitionQuery, context);
+    return new GoraRecordReader<K, T>(partitionQuery, context);
+  }
+
+  @Override
+  public List<InputSplit> getSplits(JobContext context) throws IOException,
+      InterruptedException {
+
+    List<PartitionQuery<K, T>> queries = dataStore.getPartitions(query);
+    List<InputSplit> splits = new ArrayList<InputSplit>(queries.size());
+
+    for(PartitionQuery<K,T> query : queries) {
+      splits.add(new GoraInputSplit(context.getConfiguration(), query));
+    }
+
+    return splits;
+  }
+
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    this.conf = conf;
+    try {
+      this.query = getQuery(conf);
+      this.dataStore = query.getDataStore();
+    } catch (Exception ex) {
+      throw new RuntimeException(ex);
+    }
+  }
+
+  public static<K, T extends Persistent> void setQuery(Job job
+      , Query<K, T> query) throws IOException {
+    IOUtils.storeToConf(query, job.getConfiguration(), QUERY_KEY);
+  }
+
+  public Query<K, T> getQuery(Configuration conf) throws IOException {
+    return IOUtils.loadFromConf(conf, QUERY_KEY);
+  }
+
+  /**
+   * Sets the input parameters for the job
+   * @param job the job to set the properties for
+   * @param query the query to get the inputs from
+   * @param reuseObjects whether to reuse objects in serialization
+   * @throws IOException
+   */
+  public static <K1, V1 extends Persistent> void setInput(Job job
+      , Query<K1,V1> query, boolean reuseObjects) throws IOException {
+    setInput(job, query, query.getDataStore(), reuseObjects);
+  }
+
+  /**
+   * Sets the input parameters for the job
+   * @param job the job to set the properties for
+   * @param query the query to get the inputs from
+   * @param dataStore the datastore as the input
+   * @param reuseObjects whether to reuse objects in serialization
+   * @throws IOException
+   */
+  public static <K1, V1 extends Persistent> void setInput(Job job
+      , Query<K1,V1> query, DataStore<K1,V1> dataStore, boolean reuseObjects)
+  throws IOException {
+
+    Configuration conf = job.getConfiguration();
+
+    GoraMapReduceUtils.setIOSerializations(conf, reuseObjects);
+
+    job.setInputFormatClass(GoraInputFormat.class);
+    GoraInputFormat.setQuery(job, query);
+  }
+  
+  /**
+   * Sets the input parameters for the job
+   * @param job the job to set the properties for
+   * @param dataStoreClass the datastore class
+   * @param inKeyClass Map input key class
+   * @param inValueClass Map input value class
+   * @param reuseObjects whether to reuse objects in serialization
+   * @throws IOException
+   */
+  public static <K1, V1 extends Persistent> void setInput(
+      Job job, 
+      Class<? extends DataStore<K1,V1>> dataStoreClass, 
+      Class<K1> inKeyClass, 
+      Class<V1> inValueClass,
+      boolean reuseObjects)
+  throws IOException {
+
+    DataStore<K1,V1> store = DataStoreFactory.getDataStore(dataStoreClass
+        , inKeyClass, inValueClass, job.getConfiguration());
+    setInput(job, store.newQuery(), store, reuseObjects);
+  }
+}
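Editor's note: `setQuery()` stores the query into the job `Configuration` under `QUERY_KEY` via `IOUtils.storeToConf`, and `getQuery()` reads it back. Gora's actual wire format is its own; the round-trip idea can be sketched with standard Java serialization and Base64. The helper names below mimic Gora's but the implementation is an assumption, with a plain `Map` standing in for a Hadoop `Configuration`.

```java
import java.io.*;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of stashing a serializable object in a string-keyed
// configuration, in the spirit of IOUtils.storeToConf / loadFromConf.
public class ConfRoundTrip {
    static void storeToConf(Serializable obj, Map<String, String> conf, String key)
            throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        // Base64 so the binary payload survives as a plain config string
        conf.put(key, Base64.getEncoder().encodeToString(bytes.toByteArray()));
    }

    static Object loadFromConf(Map<String, String> conf, String key)
            throws IOException, ClassNotFoundException {
        byte[] raw = Base64.getDecoder().decode(conf.get(key));
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> conf = new HashMap<>();
        storeToConf("user > 100", conf, "gora.inputformat.query");
        System.out.println(loadFromConf(conf, "gora.inputformat.query"));  // user > 100
    }
}
```

This is why `setConf()` can rebuild both the query and its datastore on the task side: everything needed travels inside the job configuration.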
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraInputSplit.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraInputSplit.java
new file mode 100644
index 0000000..190a5c1
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraInputSplit.java
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.util.IOUtils;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * InputSplit using {@link PartitionQuery}s. 
+ */
+public class GoraInputSplit extends InputSplit 
+  implements Writable, Configurable {
+
+  protected PartitionQuery<?,?> query;
+  private Configuration conf;
+  
+  public GoraInputSplit() {
+  }
+  
+  public GoraInputSplit(Configuration conf, PartitionQuery<?,?> query) {
+    setConf(conf);
+    this.query = query;
+  }
+  
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+  
+  @Override
+  public void setConf(Configuration conf) {
+    this.conf = conf;
+  }
+
+  @Override
+  public long getLength() throws IOException, InterruptedException {
+    return 0;
+  }
+
+  @Override
+  public String[] getLocations() {
+    return query.getLocations();
+  }
+
+  public PartitionQuery<?, ?> getQuery() {
+    return query;
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    try {
+      query = (PartitionQuery<?, ?>) IOUtils.deserialize(conf, in, null);
+    } catch (ClassNotFoundException ex) {
+      throw new IOException(ex);
+    }
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    IOUtils.serialize(getConf(), out, query);
+  }
+  
+  @Override
+  public boolean equals(Object obj) {
+    if(obj instanceof GoraInputSplit) {
+      return this.query.equals(((GoraInputSplit)obj).query);
+    }
+    return false;
+  }
+}
\ No newline at end of file
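Editor's note: `GoraInputSplit` implements `Writable` so Hadoop can ship it to task nodes, which means `write()` and `readFields()` must mirror each other field-for-field. A self-contained sketch of that contract using plain `java.io` streams (a toy payload, not a real serialized `PartitionQuery`):

```java
import java.io.*;

// Toy Writable-style object: write() and readFields() must agree on field
// order and types, just as GoraInputSplit's do for its PartitionQuery.
public class ToySplit {
    String location;
    long length;

    void write(DataOutput out) throws IOException {
        out.writeUTF(location);
        out.writeLong(length);
    }

    void readFields(DataInput in) throws IOException {
        location = in.readUTF();  // read in the same order as written
        length = in.readLong();
    }

    public static void main(String[] args) throws IOException {
        ToySplit a = new ToySplit();
        a.location = "rack1/node7";
        a.length = 1024L;

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        a.write(new DataOutputStream(buf));

        ToySplit b = new ToySplit();
        b.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(b.location + " " + b.length);  // rack1/node7 1024
    }
}
```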
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraMapReduceUtils.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraMapReduceUtils.java
new file mode 100644
index 0000000..3c17f62
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraMapReduceUtils.java
@@ -0,0 +1,81 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.gora.util.StringUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+
+/**
+ * MapReduce-related utilities for Gora.
+ */
+public class GoraMapReduceUtils {
+
+  public static class HelperInputFormat<K,V> extends FileInputFormat<K, V> {
+    @Override
+    public RecordReader<K, V> createRecordReader(InputSplit arg0,
+        TaskAttemptContext arg1) throws IOException, InterruptedException {
+      return null;
+    }
+  }
+  
+  public static void setIOSerializations(Configuration conf, boolean reuseObjects) {
+    String serializationClass =
+      PersistentSerialization.class.getCanonicalName();
+    if (!reuseObjects) {
+      serializationClass =
+        PersistentNonReusingSerialization.class.getCanonicalName();
+    }
+    String[] serializations = StringUtils.joinStringArrays(
+        conf.getStrings("io.serializations"), 
+        "org.apache.hadoop.io.serializer.WritableSerialization",
+        StringSerialization.class.getCanonicalName(),
+        serializationClass); 
+    conf.setStrings("io.serializations", serializations);
+  }  
+  
+  public static List<InputSplit> getSplits(Configuration conf, String inputPath) 
+    throws IOException {
+    JobContext context = createJobContext(conf, inputPath);
+    
+    HelperInputFormat<?,?> inputFormat = new HelperInputFormat<Object,Object>();
+    return inputFormat.getSplits(context);
+  }
+  
+  public static JobContext createJobContext(Configuration conf, String inputPath) 
+    throws IOException {
+    
+    if(inputPath != null) {
+      Job job = new Job(conf);
+      FileInputFormat.addInputPath(job, new Path(inputPath));
+      return new JobContext(job.getConfiguration(), null);
+    } 
+    
+    return new JobContext(conf, null);
+  }
+}
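Editor's note: `setIOSerializations()` merges Gora's serialization classes into the job's existing `io.serializations` list via `StringUtils.joinStringArrays`. The sketch below assumes that helper is an order-preserving concatenation; de-duplication is added here for illustration and may not match the real helper's behavior.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of merging serialization class names into one ordered list,
// assuming joinStringArrays concatenates while preserving order
// (the de-duplication is this sketch's addition, not confirmed behavior).
public class SerializationMerge {
    static String[] join(String[] existing, String... extras) {
        Set<String> merged = new LinkedHashSet<>();
        if (existing != null) {
            for (String s : existing) merged.add(s);
        }
        for (String s : extras) merged.add(s);
        return merged.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] merged = join(
            new String[] {"org.apache.hadoop.io.serializer.WritableSerialization"},
            "org.apache.hadoop.io.serializer.WritableSerialization",
            "org.apache.gora.mapreduce.StringSerialization",
            "org.apache.gora.mapreduce.PersistentSerialization");
        System.out.println(merged.length);  // 3
    }
}
```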
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraMapper.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraMapper.java
new file mode 100644
index 0000000..5228895
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraMapper.java
@@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.store.DataStore;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.Partitioner;
+
+/**
+ * Base class for Gora based {@link Mapper}s.
+ */
+public class GoraMapper<K1, V1 extends Persistent, K2, V2>
+  extends Mapper<K1, V1, K2, V2> {
+
+  /**
+   * Initializes the Mapper, and sets input parameters for the job. All of 
+   * the records in the dataStore are used as the input. If you want to 
+   * include only a specific subset, use one of the overloaded methods that takes
+   * a query parameter.
+   * @param job the job to set the properties for
+   * @param dataStoreClass the datastore class
+   * @param inKeyClass Map input key class
+   * @param inValueClass Map input value class
+   * @param outKeyClass Map output key class
+   * @param outValueClass Map output value class
+   * @param mapperClass the mapper class extending GoraMapper
+   * @param partitionerClass optional partitioner class
+   * @param reuseObjects whether to reuse objects in serialization
+   */
+  @SuppressWarnings("rawtypes")
+  public static <K1, V1 extends Persistent, K2, V2>
+  void initMapperJob(
+      Job job,
+      Class<? extends DataStore<K1,V1>> dataStoreClass,
+      Class<K1> inKeyClass, 
+      Class<V1> inValueClass,
+      Class<K2> outKeyClass, 
+      Class<V2> outValueClass,
+      Class<? extends GoraMapper> mapperClass,
+      Class<? extends Partitioner> partitionerClass, 
+      boolean reuseObjects)
+  throws IOException {
+    
+    //set the input via GoraInputFormat
+    GoraInputFormat.setInput(job, dataStoreClass, inKeyClass, inValueClass, reuseObjects);
+
+    job.setMapperClass(mapperClass);
+    job.setMapOutputKeyClass(outKeyClass);
+    job.setMapOutputValueClass(outValueClass);
+
+    if (partitionerClass != null) {
+      job.setPartitionerClass(partitionerClass);
+    }
+  }
+  
+  /**
+   * Initializes the Mapper, and sets input parameters for the job. All of 
+   * the records in the dataStore are used as the input. If you want to 
+   * include only a specific subset, use one of the overloaded methods that takes
+   * a query parameter.
+   * @param job the job to set the properties for
+   * @param dataStoreClass the datastore class
+   * @param inKeyClass Map input key class
+   * @param inValueClass Map input value class
+   * @param outKeyClass Map output key class
+   * @param outValueClass Map output value class
+   * @param mapperClass the mapper class extending GoraMapper
+   * @param reuseObjects whether to reuse objects in serialization
+   */  
+  @SuppressWarnings("rawtypes")
+  public static <K1, V1 extends Persistent, K2, V2>
+  void initMapperJob(
+      Job job,
+      Class<? extends DataStore<K1,V1>> dataStoreClass,
+      Class<K1> inKeyClass, 
+      Class<V1> inValueClass,
+      Class<K2> outKeyClass, 
+      Class<V2> outValueClass,
+      Class<? extends GoraMapper> mapperClass,
+      boolean reuseObjects)
+  throws IOException {
+    initMapperJob(job, dataStoreClass, inKeyClass, inValueClass, outKeyClass
+        , outValueClass, mapperClass, null, reuseObjects);
+  }
+  
+  /**
+   * Initializes the Mapper, and sets input parameters for the job
+   * @param job the job to set the properties for
+   * @param query the query to get the inputs from
+   * @param dataStore the datastore as the input
+   * @param outKeyClass Map output key class
+   * @param outValueClass Map output value class
+   * @param mapperClass the mapper class extending GoraMapper
+   * @param partitionerClass optional partitioner class
+   * @param reuseObjects whether to reuse objects in serialization
+   */
+  @SuppressWarnings("rawtypes")
+  public static <K1, V1 extends Persistent, K2, V2>
+  void initMapperJob(
+      Job job, 
+      Query<K1,V1> query,
+      DataStore<K1,V1> dataStore,
+      Class<K2> outKeyClass, 
+      Class<V2> outValueClass,
+      Class<? extends GoraMapper> mapperClass,
+      Class<? extends Partitioner> partitionerClass, 
+      boolean reuseObjects)
+  throws IOException {
+    //set the input via GoraInputFormat
+    GoraInputFormat.setInput(job, query, dataStore, reuseObjects);
+
+    job.setMapperClass(mapperClass);
+    job.setMapOutputKeyClass(outKeyClass);
+    job.setMapOutputValueClass(outValueClass);
+
+    if (partitionerClass != null) {
+      job.setPartitionerClass(partitionerClass);
+    }
+  }
+
+  /**
+   * Initializes the Mapper, and sets input parameters for the job
+   * @param job the job to set the properties for
+   * @param dataStore the datastore as the input
+   * @param outKeyClass Map output key class
+   * @param outValueClass Map output value class
+   * @param mapperClass the mapper class extending GoraMapper
+   * @param reuseObjects whether to reuse objects in serialization
+   */
+  @SuppressWarnings({ "rawtypes" })
+  public static <K1, V1 extends Persistent, K2, V2>
+  void initMapperJob(
+      Job job, 
+      DataStore<K1,V1> dataStore,
+      Class<K2> outKeyClass, 
+      Class<V2> outValueClass,
+      Class<? extends GoraMapper> mapperClass, 
+      boolean reuseObjects)
+  throws IOException {
+    initMapperJob(job, dataStore.newQuery(), dataStore, 
+        outKeyClass, outValueClass, mapperClass, reuseObjects);
+  }
+  
+  /**
+   * Initializes the Mapper, and sets input parameters for the job
+   * @param job the job to set the properties for
+   * @param query the query to get the inputs from
+   * @param dataStore the datastore as the input
+   * @param outKeyClass Map output key class
+   * @param outValueClass Map output value class
+   * @param mapperClass the mapper class extending GoraMapper
+   * @param reuseObjects whether to reuse objects in serialization
+   */
+  @SuppressWarnings({ "rawtypes" })
+  public static <K1, V1 extends Persistent, K2, V2>
+  void initMapperJob(
+      Job job, 
+      Query<K1,V1> query, 
+      DataStore<K1,V1> dataStore,
+      Class<K2> outKeyClass, 
+      Class<V2> outValueClass,
+      Class<? extends GoraMapper> mapperClass, 
+      boolean reuseObjects)
+  throws IOException {
+
+    initMapperJob(job, query, dataStore, outKeyClass, outValueClass,
+        mapperClass, null, reuseObjects);
+  }
+}
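Editor's note: the `initMapperJob` overloads above are telescoping — each shorter form fills in a default (`null` partitioner, or `dataStore.newQuery()` for "all records") and delegates to the fullest variant. A tiny sketch of that pattern with illustrative names, not the Gora API:

```java
// Telescoping overloads in the style of GoraMapper.initMapperJob:
// every shorter form supplies defaults and delegates to the fullest one.
public class JobSetup {
    static String init(String query, String store, String partitioner) {
        return "query=" + query + " store=" + store
             + " partitioner=" + (partitioner == null ? "default" : partitioner);
    }

    // Omitted partitioner -> delegate with null (meaning: use the default).
    static String init(String query, String store) {
        return init(query, store, null);
    }

    // Omitted query -> delegate with the store's "select everything" query.
    static String init(String store) {
        return init(store + ".newQuery()", store);
    }

    public static void main(String[] args) {
        System.out.println(init("employees"));
        // query=employees.newQuery() store=employees partitioner=default
    }
}
```

Keeping all defaulting in the delegating overloads means the full variant is the single place where the job is actually configured.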
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraOutputFormat.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraOutputFormat.java
new file mode 100644
index 0000000..2cb536c
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraOutputFormat.java
@@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.FileBackedDataStore;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.OutputFormat;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+
+/**
+ * {@link OutputFormat} for Hadoop jobs that want to store the job outputs 
+ * to a Gora store. 
+ * <p>
+ * Hadoop jobs can be configured either through the static 
+ * <code>setOutput()</code> methods, or, if the job is not map-only, through {@link GoraReducer}.
+ * @see GoraReducer 
+ */
+public class GoraOutputFormat<K, T extends Persistent>
+  extends OutputFormat<K, T> {
+
+  public static final String DATA_STORE_CLASS = "gora.outputformat.datastore.class";
+
+  public static final String OUTPUT_KEY_CLASS   = "gora.outputformat.key.class";
+
+  public static final String OUTPUT_VALUE_CLASS = "gora.outputformat.value.class";
+
+  @Override
+  public void checkOutputSpecs(JobContext context)
+  throws IOException, InterruptedException { }
+
+  @Override
+  public OutputCommitter getOutputCommitter(TaskAttemptContext context)
+  throws IOException, InterruptedException {
+    return new NullOutputCommitter();
+  }
+
+  private void setOutputPath(DataStore<K,T> store, TaskAttemptContext context) {
+    if(store instanceof FileBackedDataStore) {
+      FileBackedDataStore<K, T> fileStore = (FileBackedDataStore<K, T>) store;
+      String uniqueName = FileOutputFormat.getUniqueFile(context, "part", "");
+
+      //if file store output is not set, then get the output from FileOutputFormat
+      if(fileStore.getOutputPath() == null) {
+        fileStore.setOutputPath(FileOutputFormat.getOutputPath(context).toString());
+      }
+
+      //set the unique name of the data file
+      String path = fileStore.getOutputPath();
+      fileStore.setOutputPath( path + Path.SEPARATOR  + uniqueName);
+    }
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public RecordWriter<K, T> getRecordWriter(TaskAttemptContext context)
+      throws IOException, InterruptedException {
+    Configuration conf = context.getConfiguration();
+    Class<? extends DataStore<K,T>> dataStoreClass
+      = (Class<? extends DataStore<K,T>>) conf.getClass(DATA_STORE_CLASS, null);
+    Class<K> keyClass = (Class<K>) conf.getClass(OUTPUT_KEY_CLASS, null);
+    Class<T> rowClass = (Class<T>) conf.getClass(OUTPUT_VALUE_CLASS, null);
+    final DataStore<K, T> store =
+      DataStoreFactory.createDataStore(dataStoreClass, keyClass, rowClass, context.getConfiguration());
+
+    setOutputPath(store, context);
+
+    return new GoraRecordWriter(store, context);
+  }
+
+  /**
+   * Sets the output parameters for the job
+   * @param job the job to set the properties for
+   * @param dataStore the datastore as the output
+   * @param reuseObjects whether to reuse objects in serialization
+   */
+  public static <K, V extends Persistent> void setOutput(Job job,
+      DataStore<K,V> dataStore, boolean reuseObjects) {
+    setOutput(job, dataStore.getClass(), dataStore.getKeyClass()
+        , dataStore.getPersistentClass(), reuseObjects);
+  }
+
+  /**
+   * Sets the output parameters for the job 
+   * @param job the job to set the properties for
+   * @param dataStoreClass the datastore class
+   * @param keyClass output key class
+   * @param persistentClass output value class
+   * @param reuseObjects whether to reuse objects in serialization
+   */
+  @SuppressWarnings("rawtypes")
+  public static <K, V extends Persistent> void setOutput(Job job,
+      Class<? extends DataStore> dataStoreClass,
+      Class<K> keyClass, Class<V> persistentClass,
+      boolean reuseObjects) {
+
+    Configuration conf = job.getConfiguration();
+
+    GoraMapReduceUtils.setIOSerializations(conf, reuseObjects);
+
+    job.setOutputFormatClass(GoraOutputFormat.class);
+    job.setOutputKeyClass(keyClass);
+    job.setOutputValueClass(persistentClass);
+    conf.setClass(GoraOutputFormat.DATA_STORE_CLASS, dataStoreClass,
+        DataStore.class);
+    conf.setClass(GoraOutputFormat.OUTPUT_KEY_CLASS, keyClass, Object.class);
+    conf.setClass(GoraOutputFormat.OUTPUT_VALUE_CLASS,
+        persistentClass, Persistent.class);
+  }
+}
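`setOutput()` and `getRecordWriter()` communicate only through string-keyed `Configuration` properties: the driver stores the datastore, key, and value class names under the three keys above, and each task loads them back by name. A toy stand-in for that round trip (map-based, hypothetical names, not Hadoop's actual `Configuration` API) makes the contract explicit:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for the Configuration round trip GoraOutputFormat relies on:
// setOutput() stores class names under string keys, getRecordWriter() loads
// them back via reflection. Not Hadoop's Configuration; illustrative only.
public class ConfSketch {
    private final Map<String, String> props = new HashMap<>();

    // Mirrors conf.setClass(key, cls, base): only the class *name* is stored.
    public void setClass(String key, Class<?> cls) {
        props.put(key, cls.getName());
    }

    // Mirrors conf.getClass(key, null): resolve the stored name back to a Class.
    public Class<?> loadClass(String key) {
        String name = props.get(key);
        if (name == null) return null;
        try {
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ConfSketch conf = new ConfSketch();
        conf.setClass("gora.outputformat.key.class", String.class);
        System.out.println(conf.loadClass("gora.outputformat.key.class"));
    }
}
```

Because only names cross the job boundary, the classes must be on the task classpath and instantiable reflectively, which is why `DataStoreFactory.createDataStore` takes classes rather than instances.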
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordCounter.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordCounter.java
new file mode 100644
index 0000000..30484d0
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordCounter.java
@@ -0,0 +1,50 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+public class GoraRecordCounter {
+  /**
+   * Count the number of records read from the datastore per system call.
+   */
+  private int recordsNumber = 0;
+  
+  /**
+   * Define the flush frequency.
+   */
+  private int recordsMax;
+
+  public int getRecordsNumber() {
+    return recordsNumber;
+  }
+
+  public int getRecordsMax() {
+    return recordsMax;
+  }
+
+  public void setRecordsMax(int recordsMax) {
+    this.recordsMax = recordsMax;
+  }
+
+  public void increment() {
+    ++this.recordsNumber;
+  }
+
+  public boolean isModulo() {
+    return ((this.recordsNumber % this.recordsMax) == 0);
+  }
+}
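This counter drives both the reader's periodic re-query and the writer's periodic flush: callers `increment()` per record and act whenever `isModulo()` is true. Note that `isModulo()` is also true at zero records, which `GoraRecordReader` exploits to trigger the first query. A standalone sketch of the resulting cadence (hypothetical names; only the modulo logic mirrors the class above):

```java
// Standalone sketch of the flush cadence GoraRecordCounter produces when a
// writer increments per record and flushes on isModulo(). Illustrative only.
public class CounterCadence {
    public static int flushes(int records, int recordsMax) {
        int recordsNumber = 0;   // counter state
        int flushed = 0;
        for (int i = 0; i < records; i++) {
            ++recordsNumber;                       // counter.increment()
            if (recordsNumber % recordsMax == 0) { // counter.isModulo()
                flushed++;                         // store.flush() would run here
            }
        }
        return flushed;
    }

    public static void main(String[] args) {
        // 25000 records with the default 10000 limit -> 2 intermediate flushes
        System.out.println(flushes(25000, 10000));
    }
}
```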
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordReader.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordReader.java
new file mode 100644
index 0000000..2ee8c9f
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordReader.java
@@ -0,0 +1,121 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An adapter that presents a Gora query {@link Result} as a Hadoop {@link RecordReader}.
+ */
+public class GoraRecordReader<K, T extends Persistent> extends RecordReader<K,T> {
+  public static final Logger LOG = LoggerFactory.getLogger(GoraRecordReader.class);
+
+  public static final String BUFFER_LIMIT_READ_NAME = "gora.buffer.read.limit";
+  public static final int BUFFER_LIMIT_READ_VALUE = 10000;
+
+  protected Query<K,T> query;
+  protected Result<K,T> result;
+  
+  private GoraRecordCounter counter = new GoraRecordCounter();
+  
+  public GoraRecordReader(Query<K,T> query, TaskAttemptContext context) {
+    this.query = query;
+
+    Configuration configuration = context.getConfiguration();
+    int recordsMax = configuration.getInt(BUFFER_LIMIT_READ_NAME, BUFFER_LIMIT_READ_VALUE);
+    
+    // Check if result set will at least contain 2 rows
+    if (recordsMax <= 1) {
+      LOG.info("Limit " + recordsMax + " changed to " + BUFFER_LIMIT_READ_VALUE);
+      recordsMax = BUFFER_LIMIT_READ_VALUE;
+    }
+    
+    counter.setRecordsMax(recordsMax);
+    LOG.info("gora.buffer.read.limit = " + recordsMax);
+    
+    this.query.setLimit(recordsMax);
+  }
+
+  public void executeQuery() throws IOException {
+    this.result = query.execute();
+  }
+  
+  @Override
+  public K getCurrentKey() throws IOException, InterruptedException {
+    return result.getKey();
+  }
+
+  @Override
+  public T getCurrentValue() throws IOException, InterruptedException {
+    return result.get();
+  }
+
+  @Override
+  public float getProgress() throws IOException, InterruptedException {
+    return result.getProgress();
+  }
+
+  @Override
+  public void initialize(InputSplit split, TaskAttemptContext context)
+  throws IOException, InterruptedException { }
+
+  @Override
+  public boolean nextKeyValue() throws IOException, InterruptedException {
+    if (counter.isModulo()) {
+      boolean firstBatch = (this.result == null);
+      if (! firstBatch) {
+        this.query.setStartKey(this.result.getKey());
+        if (this.query.getLimit() == counter.getRecordsMax()) {
+          this.query.setLimit(counter.getRecordsMax() + 1);
+        }
+      }
+      if (this.result != null) {
+        this.result.close();
+      }
+      
+      executeQuery();
+      
+      if (! firstBatch) {
+        // skip first result
+        this.result.next();
+      }
+    }
+    
+    counter.increment();
+    return this.result.next();
+  }
+
+  @Override
+  public void close() throws IOException {
+    if (result != null) {
+      result.close();
+    }
+  }
+
+}
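The reader pages through the datastore in batches: each re-query starts (inclusively) from the last key seen, asks for one extra row, and skips the first result to avoid re-emitting the boundary key. An illustrative re-implementation of that strategy over an in-memory sorted map standing in for the datastore (names are hypothetical; only the start-key/limit+1/skip-first logic mirrors the reader):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of GoraRecordReader's batched scan over a TreeMap "datastore".
// Illustrative only: real queries run against a DataStore, not a map.
public class BatchedScan {
    public static List<String> scan(TreeMap<String, String> store, int batch) {
        List<String> keys = new ArrayList<>();
        String startKey = null;
        boolean first = true;
        while (true) {
            // Re-execute the "query" from the last seen key; tailMap is
            // inclusive, so after the first batch we fetch one extra row
            // and drop the duplicated boundary key below.
            SortedMap<String, String> view =
                (startKey == null) ? store : store.tailMap(startKey);
            int limit = first ? batch : batch + 1;
            List<String> page = new ArrayList<>(view.keySet());
            if (page.size() > limit) page = page.subList(0, limit);

            int from = first ? 0 : 1;        // skip the duplicated start key
            if (page.size() <= from) break;  // no new rows: scan is done
            for (int i = from; i < page.size(); i++) keys.add(page.get(i));

            startKey = page.get(page.size() - 1);
            first = false;
        }
        return keys;
    }

    public static void main(String[] args) {
        TreeMap<String, String> store = new TreeMap<>();
        for (String k : new String[] {"a", "b", "c", "d", "e"}) store.put(k, k);
        System.out.println(scan(store, 2));  // every key exactly once, in order
    }
}
```

The `limit + 1` adjustment matters: without the extra row, each subsequent batch would yield only `batch - 1` new records after the boundary key is skipped.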
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordWriter.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordWriter.java
new file mode 100644
index 0000000..4cc134d
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraRecordWriter.java
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.store.DataStore;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Hadoop record writer that flushes the Gora datastore regularly.
+ *
+ */
+public class GoraRecordWriter<K, T> extends RecordWriter<K, T> {
+  public static final Logger LOG = LoggerFactory.getLogger(GoraRecordWriter.class);
+  
+  private static final String BUFFER_LIMIT_WRITE_NAME = "gora.buffer.write.limit";
+  private static final int BUFFER_LIMIT_WRITE_VALUE = 10000;
+
+  private DataStore<K, Persistent> store;
+  private GoraRecordCounter counter = new GoraRecordCounter();
+
+  public GoraRecordWriter(DataStore<K, Persistent> store, TaskAttemptContext context) {
+    this.store = store;
+    
+    Configuration configuration = context.getConfiguration();
+    int recordsMax = configuration.getInt(BUFFER_LIMIT_WRITE_NAME, BUFFER_LIMIT_WRITE_VALUE);
+    counter.setRecordsMax(recordsMax);
+    LOG.info("gora.buffer.write.limit = " + recordsMax);
+  }
+
+  @Override
+  public void close(TaskAttemptContext context) throws IOException,
+      InterruptedException {
+    store.close();
+  }
+
+  @Override
+  public void write(K key, T value) throws IOException, InterruptedException {
+    store.put(key, (Persistent) value);
+    
+    counter.increment();
+    if (counter.isModulo()) {
+      LOG.info("Flushing the datastore after " + counter.getRecordsNumber() + " records");
+      store.flush();
+    }
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraReducer.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraReducer.java
new file mode 100644
index 0000000..6f2f146
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/GoraReducer.java
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.store.DataStore;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Reducer;
+
+/**
+ * Base class for Gora based {@link Reducer}s.
+ */
+public class GoraReducer<K1, V1, K2, V2 extends Persistent>
+  extends Reducer<K1, V1, K2, V2> {
+ 
+  /**
+   * Initializes the Reducer and sets output parameters for the job. 
+   * @param job the job to set the properties for
+   * @param dataStoreClass the datastore class
+   * @param keyClass output key class
+   * @param persistentClass output value class
+   * @param reducerClass the reducer class extending GoraReducer
+   * @param reuseObjects whether to reuse objects in serialization
+   */
+  public static <K1, V1, K2, V2 extends Persistent>
+  void initReducerJob(
+      Job job, 
+      Class<? extends DataStore<K2,V2>> dataStoreClass,
+      Class<K2> keyClass, 
+      Class<V2> persistentClass,
+      Class<? extends GoraReducer<K1, V1, K2, V2>> reducerClass, 
+      boolean reuseObjects) {
+    
+    GoraOutputFormat.setOutput(job, dataStoreClass, keyClass, persistentClass, reuseObjects);
+    
+    job.setReducerClass(reducerClass);
+  }
+  
+  /**
+   * Initializes the Reducer and sets output parameters for the job, reusing objects in serialization. 
+   * @param job the job to set the properties for
+   * @param dataStore the datastore as the output
+   * @param reducerClass the reducer class extending GoraReducer
+   */
+  public static <K1, V1, K2, V2 extends Persistent>
+  void initReducerJob(
+      Job job, 
+      DataStore<K2,V2> dataStore,
+      Class<? extends GoraReducer<K1, V1, K2, V2>> reducerClass) {
+
+    initReducerJob(job, dataStore, reducerClass, true);
+  }
+
+  /**
+   * Initializes the Reducer and sets output parameters for the job. 
+   * @param job the job to set the properties for
+   * @param dataStore the datastore as the output
+   * @param reducerClass the reducer class extending GoraReducer
+   * @param reuseObjects whether to reuse objects in serialization
+   */
+  public static <K1, V1, K2, V2 extends Persistent>
+  void initReducerJob(
+      Job job, 
+      DataStore<K2,V2> dataStore,
+      Class<? extends GoraReducer<K1, V1, K2, V2>> reducerClass, 
+      boolean reuseObjects) {
+
+    GoraOutputFormat.setOutput(job, dataStore, reuseObjects);
+    job.setReducerClass(reducerClass);
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/NullOutputCommitter.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/NullOutputCommitter.java
new file mode 100644
index 0000000..4b95cc3
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/NullOutputCommitter.java
@@ -0,0 +1,55 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * An OutputCommitter that does nothing.
+ */
+public class NullOutputCommitter extends OutputCommitter {
+
+  @Override
+  public void abortTask(TaskAttemptContext arg0) throws IOException {
+  }
+
+  @Override
+  public void cleanupJob(JobContext arg0) throws IOException {
+  }
+
+  @Override
+  public void commitTask(TaskAttemptContext arg0) throws IOException {
+  }
+
+  @Override
+  public boolean needsTaskCommit(TaskAttemptContext arg0) throws IOException {
+    return false;
+  }
+
+  @Override
+  public void setupJob(JobContext arg0) throws IOException {
+  }
+
+  @Override
+  public void setupTask(TaskAttemptContext arg0) throws IOException {
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentDeserializer.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentDeserializer.java
new file mode 100644
index 0000000..20b151d
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentDeserializer.java
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+import java.io.InputStream;
+
+import org.apache.avro.Schema;
+import org.apache.avro.io.BinaryDecoder;
+import org.apache.avro.io.DecoderFactory;
+import org.apache.gora.avro.PersistentDatumReader;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.util.AvroUtils;
+import org.apache.hadoop.io.serializer.Deserializer;
+
+/**
+ * Hadoop deserializer using {@link PersistentDatumReader}
+ * with {@link BinaryDecoder}.
+ */
+public class PersistentDeserializer
+   implements Deserializer<Persistent> {
+
+  private BinaryDecoder decoder;
+  private Class<? extends Persistent> persistentClass;
+  private boolean reuseObjects;
+  private PersistentDatumReader<Persistent> datumReader;
+
+  public PersistentDeserializer(Class<? extends Persistent> c, boolean reuseObjects) {
+    this.persistentClass = c;
+    this.reuseObjects = reuseObjects;
+    try {
+      Schema schema = AvroUtils.getSchema(persistentClass);
+      datumReader = new PersistentDatumReader<Persistent>(schema, true);
+
+    } catch (Exception ex) {
+      throw new RuntimeException(ex);
+    }
+  }
+
+  @Override
+  public void open(InputStream in) throws IOException {
+    /* It is very important to use a direct buffer, since Hadoop
+     * supplies an input stream that is only valid until the end of one
+     * record serialization. Each time deserialize() is called, the IS
+     * is advanced to point to the right location, so we should not
+     * buffer the whole input stream at once.
+     */
+    decoder = new DecoderFactory().configureDirectDecoder(true)
+      .createBinaryDecoder(in, decoder);
+  }
+
+  @Override
+  public void close() throws IOException { }
+
+  @Override
+  public Persistent deserialize(Persistent persistent) throws IOException {
+    return datumReader.read(reuseObjects ? persistent : null, decoder);
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentNonReusingSerialization.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentNonReusingSerialization.java
new file mode 100644
index 0000000..1b3ff8d
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentNonReusingSerialization.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.hadoop.io.serializer.Deserializer;
+import org.apache.hadoop.io.serializer.Serialization;
+import org.apache.hadoop.io.serializer.Serializer;
+
+public class PersistentNonReusingSerialization
+    implements Serialization<Persistent> {
+
+  @Override
+  public boolean accept(Class<?> c) {
+    return Persistent.class.isAssignableFrom(c);
+  }
+
+  @Override
+  public Deserializer<Persistent> getDeserializer(Class<Persistent> c) {
+    return new PersistentDeserializer(c, false);
+  }
+
+  @Override
+  public Serializer<Persistent> getSerializer(Class<Persistent> c) {
+    return new PersistentSerializer();
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentSerialization.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentSerialization.java
new file mode 100644
index 0000000..008e222
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentSerialization.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.hadoop.io.serializer.Deserializer;
+import org.apache.hadoop.io.serializer.Serialization;
+import org.apache.hadoop.io.serializer.Serializer;
+
+public class PersistentSerialization
+    implements Serialization<Persistent> {
+
+  @Override
+  public boolean accept(Class<?> c) {
+    return Persistent.class.isAssignableFrom(c);
+  }
+
+  @Override
+  public Deserializer<Persistent> getDeserializer(Class<Persistent> c) {
+    return new PersistentDeserializer(c, true);
+  }
+
+  @Override
+  public Serializer<Persistent> getSerializer(Class<Persistent> c) {
+    return new PersistentSerializer();
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentSerializer.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentSerializer.java
new file mode 100644
index 0000000..6cf855f
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/PersistentSerializer.java
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+import java.io.OutputStream;
+
+import org.apache.avro.io.BinaryEncoder;
+import org.apache.gora.avro.PersistentDatumWriter;
+import org.apache.gora.persistency.Persistent;
+import org.apache.hadoop.io.serializer.Serializer;
+
+/**
+ * Hadoop serializer using {@link PersistentDatumWriter} 
+ * with {@link BinaryEncoder}. 
+ */
+public class PersistentSerializer implements Serializer<Persistent> {
+
+  private PersistentDatumWriter<Persistent> datumWriter;
+  private BinaryEncoder encoder;  
+  
+  public PersistentSerializer() {
+    this.datumWriter = new PersistentDatumWriter<Persistent>();
+  }
+  
+  @Override
+  public void close() throws IOException {
+    encoder.flush();
+  }
+
+  @Override
+  public void open(OutputStream out) throws IOException {
+    encoder = new BinaryEncoder(out);
+  }
+
+  @Override
+  public void serialize(Persistent persistent) throws IOException {   
+    datumWriter.setSchema(persistent.getSchema());
+    datumWriter.setPersistent(persistent);
+        
+    datumWriter.write(persistent, encoder);
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/StringComparator.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/StringComparator.java
new file mode 100644
index 0000000..a26fde2
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/StringComparator.java
@@ -0,0 +1,35 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.Text;
+
+public class StringComparator implements RawComparator<String> {
+
+  @Override
+  public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
+    return Text.Comparator.compareBytes(b1, s1, l1, b2, s2, l2);
+  }
+
+  @Override
+  public int compare(String o1, String o2) {
+    return o1.compareTo(o2);
+  }
+
+}
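The raw `compare` above lets Hadoop sort serialized keys without deserializing them, by delegating to `Text.Comparator.compareBytes`. A simplified stand-in for what such an unsigned lexicographic byte comparison does (not Hadoop's actual implementation, which also handles the serialized length prefix):

```java
// Simplified unsigned lexicographic byte-range comparison, in the spirit of
// the compareBytes call StringComparator delegates to. Illustrative only.
public class RawBytes {
    public static int compare(byte[] b1, int s1, int l1,
                              byte[] b2, int s2, int l2) {
        int n = Math.min(l1, l2);
        for (int i = 0; i < n; i++) {
            int a = b1[s1 + i] & 0xff;   // treat bytes as unsigned
            int b = b2[s2 + i] & 0xff;
            if (a != b) return a - b;
        }
        return l1 - l2;                  // shorter prefix sorts first
    }
}
```

The unsigned masking is the key detail: Java bytes are signed, so comparing them directly would sort `0xFF` before `0x01` and disagree with the order of the corresponding strings.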
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/StringSerialization.java b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/StringSerialization.java
new file mode 100644
index 0000000..2af37a8
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/mapreduce/StringSerialization.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.mapreduce;
+
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.serializer.Deserializer;
+import org.apache.hadoop.io.serializer.Serialization;
+import org.apache.hadoop.io.serializer.Serializer;
+
+public class StringSerialization implements Serialization<String> {
+
+  @Override
+  public boolean accept(Class<?> c) {
+    return c.equals(String.class);
+  }
+
+  @Override
+  public Deserializer<String> getDeserializer(Class<String> c) {
+    return new Deserializer<String>() {
+      private DataInputStream in;
+
+      @Override
+      public void open(InputStream in) throws IOException {
+        this.in = new DataInputStream(in);
+      }
+
+      @Override
+      public void close() throws IOException {
+        this.in.close();
+      }
+
+      @Override
+      public String deserialize(String t) throws IOException {
+        return Text.readString(in);
+      }
+    };
+  }
+
+  @Override
+  public Serializer<String> getSerializer(Class<String> c) {
+    return new Serializer<String>() {
+
+      private DataOutputStream out;
+
+      @Override
+      public void close() throws IOException {
+        this.out.close();
+      }
+
+      @Override
+      public void open(OutputStream out) throws IOException {
+        this.out = new DataOutputStream(out);
+      }
+
+      @Override
+      public void serialize(String str) throws IOException {
+        Text.writeString(out, str);
+      }
+    };
+  }
+}
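The class above delegates to Hadoop's `Text.writeString`/`Text.readString`, which write a length prefix followed by the UTF-8 payload. A minimal, self-contained sketch of the same round-trip idea (using `DataOutputStream.writeUTF`, which uses a 2-byte length prefix rather than Hadoop's vint, as a stand-in) looks like this; `LengthPrefixedCodec` is a hypothetical name, not part of Gora:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Hypothetical stand-in for Text.writeString/readString: a length-prefixed
// UTF-8 string codec built only on java.io streams.
public class LengthPrefixedCodec {

  public static byte[] serialize(String s) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeUTF(s);   // writes a length prefix, then the UTF-8 payload
    out.flush();
    return bytes.toByteArray();
  }

  public static String deserialize(byte[] data) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
    return in.readUTF(); // reads the prefix back and decodes the payload
  }

  public static String roundTrip(String s) {
    try {
      return deserialize(serialize(s));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```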
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/memory/store/MemStore.java b/trunk/gora-core/src/main/java/org/apache/gora/memory/store/MemStore.java
new file mode 100644
index 0000000..26b5319
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/memory/store/MemStore.java
@@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.memory.store;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NavigableMap;
+import java.util.TreeMap;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.query.impl.PartitionQueryImpl;
+import org.apache.gora.query.impl.QueryBase;
+import org.apache.gora.query.impl.ResultBase;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.impl.DataStoreBase;
+
+/**
+ * Memory based {@link DataStore} implementation for tests.
+ */
+public class MemStore<K, T extends Persistent> extends DataStoreBase<K, T> {
+
+  public static class MemQuery<K, T extends Persistent> extends QueryBase<K, T> {
+    public MemQuery() {
+      super(null);
+    }
+    public MemQuery(DataStore<K, T> dataStore) {
+      super(dataStore);
+    }
+  }
+
+  public static class MemResult<K, T extends Persistent> extends ResultBase<K, T> {
+    private NavigableMap<K, T> map;
+    private Iterator<K> iterator;
+    public MemResult(DataStore<K, T> dataStore, Query<K, T> query
+        , NavigableMap<K, T> map) {
+      super(dataStore, query);
+      this.map = map;
+      iterator = map.navigableKeySet().iterator();
+    }
+    @Override
+    public void close() throws IOException { }
+    @Override
+    public float getProgress() throws IOException {
+      return 0;
+    }
+
+    @Override
+    protected void clear() {  } //do not clear the object in the store
+
+    @Override
+    public boolean nextInner() throws IOException {
+      if(!iterator.hasNext()) {
+        return false;
+      }
+
+      key = iterator.next();
+      persistent = map.get(key);
+
+      return true;
+    }
+  }
+
+  private TreeMap<K, T> map = new TreeMap<K, T>();
+
+  @Override
+  public String getSchemaName() {
+    return "default";
+  }
+
+  @Override
+  public boolean delete(K key) throws IOException {
+    return map.remove(key) != null;
+  }
+
+  @Override
+  public long deleteByQuery(Query<K, T> query) throws IOException {
+    long deletedRows = 0;
+    Result<K,T> result = query.execute();
+
+    while(result.next()) {
+      if(delete(result.getKey()))
+        deletedRows++;
+    }
+
+    return deletedRows;
+  }
+
+  @Override
+  public Result<K, T> execute(Query<K, T> query) throws IOException {
+    K startKey = query.getStartKey();
+    K endKey = query.getEndKey();
+    if(startKey == null) {
+      startKey = map.firstKey();
+    }
+    if(endKey == null) {
+      endKey = map.lastKey();
+    }
+
+    //check if query.fields is null
+    query.setFields(getFieldsToQuery(query.getFields()));
+
+    NavigableMap<K, T> submap = map.subMap(startKey, true, endKey, true);
+
+    return new MemResult<K,T>(this, query, submap);
+  }
+
+  @Override
+  public T get(K key, String[] fields) throws IOException {
+    T obj = map.get(key);
+    return getPersistent(obj, getFieldsToQuery(fields));
+  }
+
+  /**
+   * Returns a clone with exactly the requested fields shallowly copied
+   */
+  @SuppressWarnings("unchecked")
+  private static<T extends Persistent> T getPersistent(T obj, String[] fields) {
+    if(Arrays.equals(fields, obj.getFields())) {
+      return obj;
+    }
+    T newObj = (T) obj.newInstance(new StateManagerImpl());
+    for(String field:fields) {
+      int index = newObj.getFieldIndex(field);
+      newObj.put(index, obj.get(index));
+    }
+    return newObj;
+  }
+
+  @Override
+  public Query<K, T> newQuery() {
+    return new MemQuery<K, T>(this);
+  }
+
+  @Override
+  public void put(K key, T obj) throws IOException {
+    map.put(key, obj);
+  }
+
+  @Override
+  /**
+   * Returns a single partition containing the original query
+   */
+  public List<PartitionQuery<K, T>> getPartitions(Query<K, T> query)
+      throws IOException {
+    List<PartitionQuery<K, T>> list = new ArrayList<PartitionQuery<K,T>>();
+    list.add(new PartitionQueryImpl<K, T>(query));
+    return list;
+  }
+
+  @Override
+  public void close() throws IOException {
+    map.clear();
+  }
+
+  @Override
+  public void createSchema() throws IOException { }
+
+  @Override
+  public void deleteSchema() throws IOException {
+    map.clear();
+  }
+
+  @Override
+  public boolean schemaExists() throws IOException {
+    return true;
+  }
+
+  @Override
+  public void flush() throws IOException { }
+}
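The heart of `MemStore.execute` is the `TreeMap.subMap(start, true, end, true)` call: a sorted map yields an inclusive key-range view without copying entries, and open-ended queries fall back to the first/last key. A self-contained sketch of just that slicing step (the class name `RangeSlice` is illustrative, not part of Gora):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Minimal sketch of the range-query slicing MemStore.execute performs.
public class RangeSlice {

  // Returns an inclusive [start, end] view of the sorted map.
  // A null bound falls back to the smallest/largest key, mirroring
  // how MemStore treats an unset query start/end key.
  public static <K, V> NavigableMap<K, V> slice(TreeMap<K, V> map,
                                                K start, K end) {
    if (start == null) start = map.firstKey();
    if (end == null)   end = map.lastKey();
    return map.subMap(start, true, end, true); // a view, not a copy
  }
}
```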
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/BeanFactory.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/BeanFactory.java
new file mode 100644
index 0000000..66304a1
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/BeanFactory.java
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency;
+
+/**
+ * BeanFactories enable construction of keys and Persistent objects.
+ */
+public interface BeanFactory<K, T extends Persistent> {
+
+  /**
+   * Constructs a new instance of the key class
+   * @return a new instance of the key class
+   */
+  K newKey() throws Exception;
+
+  /**
+   * Constructs a new instance of the Persistent class
+   * @return a new instance of the Persistent class
+   */
+  T newPersistent();
+
+  /**
+   * Returns an instance of the key object to be 
+   * used to access static fields of the object. Returned object MUST  
+   * be treated as read-only. No fields other than the static fields 
+   * of the object should be assumed to be readable. 
+   * @return a cached instance of the key object
+   */
+  K getCachedKey();
+  
+  /**
+   * Returns an instance of the {@link Persistent} object to be 
+   * used to access static fields of the object. Returned object MUST  
+   * be treated as read-only. No fields other than the static fields 
+   * of the object should be assumed to be readable. 
+   * @return a cached instance of the Persistent object
+   */
+  T getCachedPersistent();
+
+  /**
+   * Returns the key class
+   * @return the key class
+   */
+  Class<K> getKeyClass();
+
+  /**
+   * Returns the persistent class
+   * @return the persistent class
+   */
+  Class<T> getPersistentClass();
+
+}
\ No newline at end of file
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/ListGenericArray.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/ListGenericArray.java
new file mode 100644
index 0000000..8b02e4a
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/ListGenericArray.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.avro.Schema;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.generic.GenericData;
+
+/**
+ * An {@link ArrayList} based implementation of Avro {@link GenericArray}.
+ */
+public class ListGenericArray<T> implements GenericArray<T>
+  , Comparable<ListGenericArray<T>> {
+
+  private static final int LIST_DEFAULT_SIZE = 10;
+  
+  private List<T> list;
+  private Schema schema;
+
+  public ListGenericArray(Schema schema, List<T> list) {
+    this.schema = schema;
+    this.list = list;
+  }
+
+  public ListGenericArray(Schema schema) {
+    this(LIST_DEFAULT_SIZE, schema);
+  }
+  
+  public ListGenericArray(int size, Schema schema) {
+    this.schema = schema;
+    this.list = new ArrayList<T>(size);
+  }
+
+  @Override
+  public void add(T element) {
+    list.add(element);
+  }
+
+  @Override
+  public void clear() {
+    list.clear();
+  }
+
+  @Override
+  public T peek() {
+    return null; // no object reuse: add() always stores new elements
+  }
+
+  @Override
+  public long size() {
+    return list.size();
+  }
+
+  @Override
+  public Iterator<T> iterator() {
+    return list.iterator();
+  }
+
+  @Override
+  public Schema getSchema() {
+    return schema;
+  }
+
+  @Override
+  public int hashCode() {
+    return this.list.hashCode();
+  }
+
+  @SuppressWarnings({ "unchecked", "rawtypes" })
+  @Override
+  public boolean equals(Object obj) {
+    if (obj == this) return true;
+    if (!(obj instanceof ListGenericArray)) return false;
+    ListGenericArray that = (ListGenericArray)obj;
+    if (!schema.equals(that.schema))
+      return false;
+    return this.compareTo(that) == 0;
+  }
+
+  @Override
+  public int compareTo(ListGenericArray<T> o) {
+    return GenericData.get().compare(this, o, schema);
+  }
+  
+  @Override
+  public String toString() {
+    return list.toString();
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/Persistent.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/Persistent.java
new file mode 100644
index 0000000..b0febcc
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/Persistent.java
@@ -0,0 +1,186 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.persistency;
+
+import org.apache.avro.specific.SpecificRecord;
+
+/**
+ * Objects that are persisted by Gora implement this interface.
+ */
+public interface Persistent extends SpecificRecord, Cloneable {
+
+  /**
+   * Returns the StateManager which manages the persistent 
+   * state of the object.
+   * @return the StateManager of the object
+   */
+  StateManager getStateManager();
+
+  /**
+   * Constructs a new instance of the object with the given StateManager.
+   * This method is intended to be used by Gora framework.
+   * @param stateManager the StateManager to manage the persistent state 
+   * of the object
+   * @return a new instance of the object
+   */
+  Persistent newInstance(StateManager stateManager);
+
+  /**
+   * Returns sorted field names of the object
+   * @return the field names of the object as a String[]
+   */
+  String[] getFields();
+  
+  /**
+   * Returns the field name with the given index
+   * @param index the index of the field  
+   * @return the name of the field
+   */
+  String getField(int index);
+  
+  /**
+   * Returns the index of the field with the given name
+   * @param field the name of the field
+   * @return the index of the field
+   */
+  int getFieldIndex(String field);
+  
+  /**
+   * Clears the inner state of the object without any modification
+   * to the actual data on the data store. This method should be called 
+   * before re-using the object to hold the data for another result.  
+   */
+  void clear();
+  
+  /**
+   * Returns whether the object is newly constructed.
+   * @return true if the object is newly constructed, false if
+   * retrieved from a datastore. 
+   */
+  boolean isNew();
+  
+  /**
+   * Sets the state of the object as new for persistency
+   */
+  void setNew();
+  
+  /**
+   * Clears the new state 
+   */
+  void clearNew();
+  
+  /**
+   * Returns whether any of the fields of the object has been modified 
+   * after construction or loading. 
+   * @return whether any of the fields of the object has changed
+   */
+  boolean isDirty();
+  
+  /**
+   * Returns whether the field has been modified.
+   * @param fieldIndex the offset of the field in the object
+   * @return whether the field has been modified.
+   */
+  boolean isDirty(int fieldIndex);
+
+  /**
+   * Returns whether the field has been modified.
+   * @param field the name of the field
+   * @return whether the field has been modified.
+   */
+  boolean isDirty(String field);
+  
+  /**
+   * Sets all the fields of the object as dirty.
+   */
+  void setDirty();
+  
+  /**
+   * Sets the field as dirty.
+   * @param fieldIndex the offset of the field in the object
+   */
+  void setDirty(int fieldIndex);
+ 
+  /**
+   * Sets the field as dirty.
+   * @param field the name of the field
+   */
+  void setDirty(String field);
+  
+  /**
+   * Clears the field as dirty.
+   * @param fieldIndex the offset of the field in the object
+   */
+  void clearDirty(int fieldIndex);
+  
+  /**
+   * Clears the field as dirty.
+   * @param field the name of the field
+   */
+  void clearDirty(String field);
+  
+  /**
+   * Clears the dirty state.
+   */
+  void clearDirty();
+  
+  /**
+   * Returns whether the field has been loaded from the datastore. 
+   * @param fieldIndex the offset of the field in the object
+   * @return whether the field has been loaded 
+   */
+  boolean isReadable(int fieldIndex);
+
+  /**
+   * Returns whether the field has been loaded from the datastore. 
+   * @param field the name of the field
+   * @return whether the field has been loaded 
+   */
+  boolean isReadable(String field);
+  
+  /**
+   * Sets the field as readable.
+   * @param fieldIndex the offset of the field in the object
+   */
+  void setReadable(int fieldIndex);
+
+  /**
+   * Sets the field as readable.
+   * @param field the name of the field
+   */
+  void setReadable(String field);
+
+  /**
+   * Clears the field as readable.
+   * @param fieldIndex the offset of the field in the object
+   */
+  void clearReadable(int fieldIndex);
+  
+  /**
+   * Clears the field as readable.
+   * @param field the name of the field
+   */
+  void clearReadable(String field);
+  
+  /**
+   * Clears the readable state.
+   */
+  void clearReadable();
+  
+  Persistent clone();
+}
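The interface above describes per-field dirty/readable bookkeeping addressed by field index. One plausible way to back such a contract, assuming one bit per field index (this is an illustrative sketch, not Gora's `StateManagerImpl`), is a pair of `BitSet`s; marking a field dirty also makes it readable, matching the intuition that a locally written field can be read back:

```java
import java.util.BitSet;

// Hypothetical illustration of the dirty/readable bookkeeping the
// Persistent contract describes: one bit per field index.
public class FieldStateBits {
  private final BitSet dirty = new BitSet();
  private final BitSet readable = new BitSet();

  public void setDirty(int i)      { dirty.set(i); readable.set(i); }
  public boolean isDirty(int i)    { return dirty.get(i); }
  public boolean isDirty()         { return !dirty.isEmpty(); }
  public void clearDirty()         { dirty.clear(); }
  public void setReadable(int i)   { readable.set(i); }
  public boolean isReadable(int i) { return readable.get(i); }
  public void clearReadable()      { readable.clear(); }
}
```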
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/State.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/State.java
new file mode 100644
index 0000000..a372ee1
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/State.java
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency;
+
+/**
+ * Persistency state of an object or field.
+ */
+public enum State {
+  
+  /** The object is newly created and not yet persisted */
+  NEW,
+  
+  /** The value of the field has not been changed after loading */
+  CLEAN,
+  
+  /** The value of the field has been altered */
+  DIRTY,
+  
+  /** The object or field is deleted */
+  DELETED
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/StateManager.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/StateManager.java
new file mode 100644
index 0000000..024db88
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/StateManager.java
@@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency;
+
+/**
+ * StateManager manages the persistent state of objects.
+ */
+public interface StateManager {
+
+  /**
+   * If one state manager is allocated per persistent object, 
+   * call this method to set the managed persistent. 
+   * @param persistent the persistent to manage
+   */
+  void setManagedPersistent(Persistent persistent);
+
+  /**
+   * Returns whether the object is newly constructed.
+   * @return true if the object is newly constructed, false if
+   * retrieved from a datastore. 
+   */
+  boolean isNew(Persistent persistent);
+  
+  /**
+   * Sets the state of the object as new for persistency
+   */
+  void setNew(Persistent persistent);
+  
+  /**
+   * Clears the new state 
+   */
+  void clearNew(Persistent persistent);
+
+  /**
+   * Returns whether any of the fields of the object has been modified 
+   * after construction or loading. 
+   * @return whether any of the fields of the object has changed
+   */
+  boolean isDirty(Persistent persistent);
+  
+  /**
+   * Returns whether the field has been modified.
+   * @param fieldIndex the offset of the field in the object
+   * @return whether the field has been modified.
+   */
+  boolean isDirty(Persistent persistent, int fieldIndex);
+  
+  /**
+   * Sets all the fields of the object as dirty.
+   */
+  void setDirty(Persistent persistent);
+  
+  /**
+   * Sets the field as dirty.
+   * @param fieldIndex the offset of the field in the object
+   */
+  void setDirty(Persistent persistent, int fieldIndex);
+
+  /**
+   * Clears the field as dirty.
+   * @param fieldIndex the offset of the field in the object
+   */
+  void clearDirty(Persistent persistent, int fieldIndex);
+  
+  /**
+   * Clears the dirty state.
+   */
+  void clearDirty(Persistent persistent);
+  
+  /**
+   * Returns whether the field has been loaded from the datastore. 
+   * @param fieldIndex the offset of the field in the object
+   * @return whether the field has been loaded 
+   */
+  boolean isReadable(Persistent persistent, int fieldIndex);
+  
+  /**
+   * Sets the field as readable.
+   * @param fieldIndex the offset of the field in the object
+   */
+  void setReadable(Persistent persistent, int fieldIndex);
+
+  /**
+   * Clears the field as readable.
+   * @param fieldIndex the offset of the field in the object
+   */
+  void clearReadable(Persistent persistent, int fieldIndex);
+  
+  /**
+   * Clears the readable state.
+   */
+  void clearReadable(Persistent persistent);
+  
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/StatefulHashMap.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/StatefulHashMap.java
new file mode 100644
index 0000000..7615035
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/StatefulHashMap.java
@@ -0,0 +1,132 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.persistency;
+
+import java.util.HashMap;
+import java.util.Map;
+
+@SuppressWarnings("serial")
+public class StatefulHashMap<K, V> extends HashMap<K, V> 
+  implements StatefulMap<K, V> {
+  
+  /* This is probably a terrible design but I do not yet have a better
+   * idea of managing write/delete info on a per-key basis
+   */
+  private Map<K, State> keyStates = new HashMap<K, State>();
+
+  /**
+   * Create an empty instance.
+   */
+  public StatefulHashMap() {
+    this(null);
+  }
+
+  /**
+   * Create an instance with initial entries. These entries are added stateless;
+   * in other words, the state map is cleared after construction.
+   * 
+   * @param m The map with initial entries.
+   */
+  public StatefulHashMap(Map<K, V> m) {
+    super();
+    if (m == null) {
+      return;
+    }
+    for (java.util.Map.Entry<K, V> entry : m.entrySet()) {
+      put(entry.getKey(), entry.getValue());
+    }
+    clearStates();
+  }
+  
+  @Override
+  public V put(K key, V value) {
+    keyStates.remove(key);
+    V old = super.put(key, value);
+    //if the new value differs from the old one (or either is null), mark dirty
+    if (value == null ? old != null : !value.equals(old)) {
+      keyStates.put(key, State.DIRTY);
+    }
+    return old;
+  }
+
+  @SuppressWarnings("unchecked")
+  @Override
+  public V remove(Object key) {
+    keyStates.put((K) key, State.DELETED);
+    return null;
+    // We do not remove the actual entry from the map.
+    // When we keep the entries, we can compare previous state to make Datastore
+    // puts more efficient. (In the case of new puts that are in fact unchanged)
+  }
+
+  @Override
+  public void putAll(Map<? extends K, ? extends V> m) {
+    for (Entry<? extends K, ? extends V> e : m.entrySet()) {
+      put(e.getKey(), e.getValue());
+    }
+  }
+
+  @Override
+  public void clear() {
+    // The problem with clear() is that we cannot delete entries that were not
+    // initially set on the input.  This means that for a clear() to fully
+    // reflect on a datastore you have to input the full map from the store.
+    // This is acceptable for now. Another way around this is to implement
+    // some sort of "clear marker" that indicates a map should be fully cleared,
+    // with respect to any possible new entries.
+    for (Entry<K, V> e : entrySet()) {
+      keyStates.put(e.getKey(), State.DELETED);
+    }
+    // Do not actually clear the map, i.e. with super.clear()
+    // When we keep the entries, we can compare previous state to make Datastore
+    // puts more efficient. (In the case of new puts that are in fact unchanged)
+  }
+
+  public State getState(K key) {
+    return keyStates.get(key);
+  };
+  
+  /* (non-Javadoc)
+   * @see org.apache.gora.persistency.StatefulMap#resetStates()
+   */
+  public void clearStates() {
+    keyStates.clear();
+  }
+
+  /* (non-Javadoc)
+   * @see org.apache.gora.persistency.StatefulMap#putState(K, org.apache.gora.persistency.State)
+   */
+  public void putState(K key, State state) {
+    keyStates.put(key, state);
+  }
+
+  /* (non-Javadoc)
+   * @see org.apache.gora.persistency.StatefulMap#states()
+   */
+  public Map<K, State> states() {
+    return keyStates;
+  }
+
+  /* (non-Javadoc)
+   * @see org.apache.gora.persistency.StatefulMap#reuse()
+   */
+  public void reuse() {
+    super.clear();
+    clearStates();
+  }
+}
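The pattern above (writes mark a key `DIRTY`, removals mark it `DELETED` while the entry itself is retained, so a later flush can replay only the changed keys) can be shown with a small self-contained map; `StateTrackingMap` is an illustrative name, not the Gora class:

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch of StatefulHashMap's per-key state bookkeeping.
public class StateTrackingMap<K, V> extends HashMap<K, V> {
  public enum State { DIRTY, DELETED }

  private final Map<K, State> states = new HashMap<>();

  @Override
  public V put(K key, V value) {
    V old = super.put(key, value);
    if (value == null ? old != null : !value.equals(old)) {
      states.put(key, State.DIRTY);  // record only actual changes
    } else {
      states.remove(key);            // unchanged puts carry no state
    }
    return old;
  }

  @Override
  @SuppressWarnings("unchecked")
  public V remove(Object key) {
    states.put((K) key, State.DELETED); // keep the entry, flag it deleted
    return get(key);
  }

  public State stateOf(K key) {
    return states.get(key);
  }
}
```

Keeping deleted entries around is what lets a datastore diff the previous state instead of rewriting the whole map on every flush.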
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/StatefulMap.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/StatefulMap.java
new file mode 100644
index 0000000..6136d2d
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/StatefulMap.java
@@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency;
+
+import java.util.Map;
+
+/**
+ * StatefulMap extends the Map interface to keep track of the 
+ * persistency states of individual elements in the Map.  
+ */
+public interface StatefulMap<K, V> extends Map<K, V> {
+
+  State getState(K key);
+  
+  void putState(K key, State state);
+
+  Map<K, State> states();
+
+  void clearStates();
+  
+  /**
+   * Reuse clears the map completely, along with its states. This differs
+   * from {@link #clear()}, which only marks entries as deleted.
+   */
+  void reuse();
+  
+}
\ No newline at end of file
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/BeanFactoryImpl.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/BeanFactoryImpl.java
new file mode 100644
index 0000000..59da421
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/BeanFactoryImpl.java
@@ -0,0 +1,102 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency.impl;
+
+import java.lang.reflect.Constructor;
+
+import org.apache.gora.persistency.BeanFactory;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.util.ReflectionUtils;
+
+/**
+ * A default implementation of the {@link BeanFactory} interface. Constructs 
+ * keys using reflection, and {@link Persistent} objects by calling 
+ * {@link Persistent#newInstance(org.apache.gora.persistency.StateManager)}. 
+ */
+public class BeanFactoryImpl<K, T extends Persistent> implements BeanFactory<K, T> {
+
+  private Class<K> keyClass;
+  private Class<T> persistentClass;
+  
+  private Constructor<K> keyConstructor;
+  
+  private K key;
+  private T persistent;
+  
+  private boolean isKeyPersistent = false;
+  
+  public BeanFactoryImpl(Class<K> keyClass, Class<T> persistentClass) {
+    this.keyClass = keyClass;
+    this.persistentClass = persistentClass;
+    
+    try {
+      if(ReflectionUtils.hasConstructor(keyClass)) {
+        this.keyConstructor = ReflectionUtils.getConstructor(keyClass);
+        this.key = keyConstructor.newInstance(ReflectionUtils.EMPTY_OBJECT_ARRAY);
+      }
+      this.persistent = ReflectionUtils.newInstance(persistentClass);
+    } catch (Exception ex) {
+      throw new RuntimeException(ex);
+    }
+    
+    isKeyPersistent = Persistent.class.isAssignableFrom(keyClass);
+  }
+  
+  @Override
+  @SuppressWarnings("unchecked")
+  public K newKey() throws Exception {
+    if(isKeyPersistent)
+      return (K)((Persistent)key).newInstance(new StateManagerImpl());
+    else if(keyConstructor == null) {
+      throw new RuntimeException("Key class does not have a no-arg constructor");
+    }
+    else
+      return keyConstructor.newInstance(ReflectionUtils.EMPTY_OBJECT_ARRAY);
+  }
+ 
+  @SuppressWarnings("unchecked")
+  @Override
+  public T newPersistent() {
+    return (T) persistent.newInstance(new StateManagerImpl());
+  }
+  
+  @Override
+  public K getCachedKey() {
+    return key;
+  }
+  
+  @Override
+  public T getCachedPersistent() {
+    return persistent;
+  }
+  
+  @Override
+  public Class<K> getKeyClass() {
+    return keyClass;
+  }
+  
+  @Override
+  public Class<T> getPersistentClass() {
+    return persistentClass;
+  }
+  
+  public boolean isKeyPersistent() {
+    return isKeyPersistent;
+  }
+}
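The key-construction path above boils down to: look up the no-arg constructor once, then reuse it for every `newKey()` call. A self-contained sketch of that reflection pattern (`NoArgFactory` is a hypothetical name; Gora routes the lookup through its own `ReflectionUtils`):

```java
import java.lang.reflect.Constructor;

// Sketch of the reflection path BeanFactoryImpl takes for key classes:
// resolve the public no-arg constructor once, reuse it per instance.
public class NoArgFactory<T> {
  private final Constructor<T> ctor;

  public NoArgFactory(Class<T> clazz) {
    try {
      this.ctor = clazz.getConstructor(); // fails fast, like BeanFactoryImpl
    } catch (NoSuchMethodException e) {
      throw new IllegalArgumentException(
          clazz.getName() + " has no public no-arg constructor", e);
    }
  }

  public T newInstance() {
    try {
      return ctor.newInstance();
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(e);
    }
  }
}
```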
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/PersistentBase.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/PersistentBase.java
new file mode 100644
index 0000000..b53372e
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/PersistentBase.java
@@ -0,0 +1,305 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.persistency.impl;
+
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.avro.Schema.Field;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.specific.SpecificRecord;
+import org.apache.gora.avro.PersistentDatumReader;
+import org.apache.gora.persistency.ListGenericArray;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.StatefulHashMap;
+
+/**
+ * Base class implementing common functionality for Persistent
+ * classes.
+ */
+public abstract class PersistentBase implements Persistent {
+
+  protected static Map<Class<?>, Map<String, Integer>> FIELD_MAP =
+    new HashMap<Class<?>, Map<String,Integer>>();
+
+  protected static Map<Class<?>, String[]> FIELDS =
+    new HashMap<Class<?>, String[]>();
+
+  protected static final PersistentDatumReader<Persistent> datumReader =
+    new PersistentDatumReader<Persistent>();
+    
+  private StateManager stateManager;
+
+  protected PersistentBase() {
+    this(new StateManagerImpl());
+  }
+
+  protected PersistentBase(StateManager stateManager) {
+    this.stateManager = stateManager;
+    stateManager.setManagedPersistent(this);
+  }
+
+  /** Subclasses should call this method for all the persistable fields
+   * in the class to register them.
+   * @param clazz the Persistent class
+   * @param fields the names of the fields of the class
+   */
+  protected static void registerFields(Class<?> clazz, String... fields) {
+    FIELDS.put(clazz, fields);
+    int fieldsLength = fields == null ? 0 : fields.length;
+    HashMap<String, Integer> map = new HashMap<String, Integer>(fieldsLength);
+
+    for(int i=0; i < fieldsLength; i++) {
+      map.put(fields[i], i);
+    }
+    FIELD_MAP.put(clazz, map);
+  }
+
+  @Override
+  public StateManager getStateManager() {
+    return stateManager;
+  }
+
+  @Override
+  public String[] getFields() {
+    return FIELDS.get(getClass());
+  }
+
+  @Override
+  public String getField(int index) {
+    return FIELDS.get(getClass())[index];
+  }
+
+  @Override
+  public int getFieldIndex(String field) {
+    return FIELD_MAP.get(getClass()).get(field);
+  }
+
+  @Override
+  @SuppressWarnings("rawtypes")
+  public void clear() {
+    List<Field> fields = getSchema().getFields();
+
+    for(int i=0; i<getFields().length; i++) {
+      switch(fields.get(i).schema().getType()) {
+        case MAP: 
+          if(get(i) != null) {
+            if (get(i) instanceof StatefulHashMap) {
+              ((StatefulHashMap)get(i)).reuse(); 
+            } else {
+              ((Map)get(i)).clear();
+            }
+          }
+          break;
+        case ARRAY:
+          if(get(i) != null) {
+            if(get(i) instanceof ListGenericArray) {
+              ((ListGenericArray)get(i)).clear();
+            } else {
+              put(i, new ListGenericArray(fields.get(i).schema()));
+            }
+          }
+          break;
+        case RECORD :
+          Persistent field = ((Persistent)get(i));
+          if(field != null) field.clear();
+          break;
+        case BOOLEAN: put(i, false); break;
+        case INT    : put(i, 0); break;
+        case DOUBLE : put(i, 0d); break;
+        case FLOAT  : put(i, 0f); break;
+        case LONG   : put(i, 0L); break;
+        case NULL   : break;
+        default     : put(i, null); break;
+      }
+    }
+    clearDirty();
+    clearReadable();
+  }
+
+  @Override
+  public boolean isNew() {
+    return getStateManager().isNew(this);
+  }
+
+  @Override
+  public void setNew() {
+    getStateManager().setNew(this);
+  }
+
+  @Override
+  public void clearNew() {
+    getStateManager().clearNew(this);
+  }
+
+  @Override
+  public boolean isDirty() {
+    return getStateManager().isDirty(this);
+  }
+
+  @Override
+  public boolean isDirty(int fieldIndex) {
+    return getStateManager().isDirty(this, fieldIndex);
+  }
+
+  @Override
+  public boolean isDirty(String field) {
+    return isDirty(getFieldIndex(field));
+  }
+
+  @Override
+  public void setDirty() {
+    getStateManager().setDirty(this);
+  }
+
+  @Override
+  public void setDirty(int fieldIndex) {
+    getStateManager().setDirty(this, fieldIndex);
+  }
+
+  @Override
+  public void setDirty(String field) {
+    setDirty(getFieldIndex(field));
+  }
+
+  @Override
+  public void clearDirty(int fieldIndex) {
+    getStateManager().clearDirty(this, fieldIndex);
+  }
+
+  @Override
+  public void clearDirty(String field) {
+    clearDirty(getFieldIndex(field));
+  }
+
+  @Override
+  public void clearDirty() {
+    getStateManager().clearDirty(this);
+  }
+
+  @Override
+  public boolean isReadable(int fieldIndex) {
+    return getStateManager().isReadable(this, fieldIndex);
+  }
+
+  @Override
+  public boolean isReadable(String field) {
+    return isReadable(getFieldIndex(field));
+  }
+
+  @Override
+  public void setReadable(int fieldIndex) {
+    getStateManager().setReadable(this, fieldIndex);
+  }
+
+  @Override
+  public void setReadable(String field) {
+    setReadable(getFieldIndex(field));
+  }
+
+  @Override
+  public void clearReadable() {
+    getStateManager().clearReadable(this);
+  }
+
+  @Override
+  public void clearReadable(int fieldIndex) {
+    getStateManager().clearReadable(this, fieldIndex);
+  }
+
+  @Override
+  public void clearReadable(String field) {
+    clearReadable(getFieldIndex(field));
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) return true;
+    if (!(o instanceof SpecificRecord)) return false;
+
+    SpecificRecord r2 = (SpecificRecord)o;
+    if (!this.getSchema().equals(r2.getSchema())) return false;
+
+    return this.hashCode() == r2.hashCode();
+  }
+
+  @Override
+  public int hashCode() {
+    final int prime = 31;
+    int result = 1;
+    List<Field> fields = this.getSchema().getFields();
+    int end = fields.size();
+    for (int i = 0; i < end; i++) {
+      result = prime * result + getFieldHashCode(i, fields.get(i));
+    }
+    return result;
+  }
+
+  private int getFieldHashCode(int i, Field field) {
+    Object o = get(i);
+    if(o == null)
+      return 0;
+
+    if(field.schema().getType() == Type.BYTES) {
+      return getByteBufferHashCode((ByteBuffer)o);
+    }
+
+    return o.hashCode();
+  }
+
+  /** ByteBuffer.hashCode() takes into account the position of the
+   * buffer, but we do not want that. */
+  private int getByteBufferHashCode(ByteBuffer buf) {
+    int h = 1;
+    int p = buf.arrayOffset();
+    for (int j = buf.limit() - 1; j >= p; j--)
+          h = 31 * h + buf.get(j);
+    return h;
+  }
+  
+  @Override
+  public Persistent clone() {
+    return datumReader.clone(this, getSchema());
+  }
+  
+  @Override
+  public String toString() {
+    StringBuilder builder = new StringBuilder();
+    builder.append(super.toString());
+    builder.append(" {\n");
+    List<Field> fields = getSchema().getFields();
+    for(int i=0; i<fields.size(); i++) {
+      builder.append("  \"").append(fields.get(i).name()).append("\":\"");
+      builder.append(get(i)).append("\"\n");
+    }
+    builder.append("}");
+    return builder.toString();
+  }
+  
+  protected boolean isFieldEqual(int index, Object value) {
+    Object old = get(index);
+    if (old == null && value == null)
+      return true;
+    if (old == null || value == null)
+      return false;
+    return value.equals(old);
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/StateManagerImpl.java b/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/StateManagerImpl.java
new file mode 100644
index 0000000..15ab3c1
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/persistency/impl/StateManagerImpl.java
@@ -0,0 +1,104 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency.impl;
+
+import java.util.BitSet;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.StateManager;
+
+/**
+ * An implementation for the StateManager. This implementation assumes
+ * every Persistent object has its own StateManager.
+ */
+public class StateManagerImpl implements StateManager {
+
+  //TODO: serialize isNew in PersistentSerializer 
+  protected boolean isNew;
+  protected BitSet dirtyBits;
+  protected BitSet readableBits;
+
+  public StateManagerImpl() {
+  }
+
+  public void setManagedPersistent(Persistent persistent) {
+    dirtyBits = new BitSet(persistent.getSchema().getFields().size());
+    readableBits = new BitSet(persistent.getSchema().getFields().size());
+    isNew = true;
+  }
+
+  @Override
+  public boolean isNew(Persistent persistent) {
+    return isNew;
+  }
+  
+  @Override
+  public void setNew(Persistent persistent) {
+    this.isNew = true;
+  }
+  
+  @Override
+  public void clearNew(Persistent persistent) {
+    this.isNew = false;
+  }
+  
+  public void setDirty(Persistent persistent, int fieldIndex) {
+    dirtyBits.set(fieldIndex);
+    readableBits.set(fieldIndex);
+  }
+  
+  public boolean isDirty(Persistent persistent, int fieldIndex) {
+    return dirtyBits.get(fieldIndex);
+  }
+
+  public boolean isDirty(Persistent persistent) {
+    return !dirtyBits.isEmpty();
+  }
+  
+  @Override
+  public void setDirty(Persistent persistent) {
+    dirtyBits.set(0, dirtyBits.size());
+  }
+  
+  @Override
+  public void clearDirty(Persistent persistent, int fieldIndex) {
+    dirtyBits.clear(fieldIndex);
+  }
+  
+  public void clearDirty(Persistent persistent) {
+    dirtyBits.clear();
+  }
+  
+  public void setReadable(Persistent persistent, int fieldIndex) {
+    readableBits.set(fieldIndex);
+  }
+
+  public boolean isReadable(Persistent persistent, int fieldIndex) {
+    return readableBits.get(fieldIndex);
+  }
+
+  @Override
+  public void clearReadable(Persistent persistent, int fieldIndex) {
+    readableBits.clear(fieldIndex);
+  }
+  
+  public void clearReadable(Persistent persistent) {
+    readableBits.clear();
+  }
+}
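StateManagerImpl tracks per-field state with one `BitSet` bit per schema field: set on write, cleared once the object is persisted. A self-contained sketch of that bookkeeping (the `DirtyTracker` class is hypothetical and omits the readable bits for brevity):

```java
import java.util.BitSet;

// Sketch of the dirty-bit bookkeeping in StateManagerImpl: one bit per
// schema field, set when the field is written, cleared after persisting.
class DirtyTracker {
  private final BitSet dirtyBits;

  DirtyTracker(int numFields) {
    this.dirtyBits = new BitSet(numFields);
  }

  void setDirty(int fieldIndex)   { dirtyBits.set(fieldIndex); }
  void clearDirty(int fieldIndex) { dirtyBits.clear(fieldIndex); }
  void clearDirty()               { dirtyBits.clear(); }
  boolean isDirty(int fieldIndex) { return dirtyBits.get(fieldIndex); }
  boolean isDirty()               { return !dirtyBits.isEmpty(); }
}
```

As in `StateManagerImpl.isDirty(Persistent)`, the object-level dirty check reduces to `!dirtyBits.isEmpty()`, so no separate object-level flag is needed.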
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/query/PartitionQuery.java b/trunk/gora-core/src/main/java/org/apache/gora/query/PartitionQuery.java
new file mode 100644
index 0000000..620fd02
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/query/PartitionQuery.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.query;
+
+import org.apache.gora.persistency.Persistent;
+
+/**
+ * PartitionQuery divides the results of the Query into multiple partitions, so that 
+ * queries can be run locally on the nodes that hold the data. PartitionQueries are 
+ * used for generating Hadoop InputSplits.
+ */
+public interface PartitionQuery<K, T extends Persistent> extends Query<K, T> {
+
+  /* PartitionQuery interface relaxes the dependency of DataStores to Hadoop*/
+  
+  /**
+   * Returns the locations on which this partial query will run locally.
+   * @return the addresses of machines
+   */
+  String[] getLocations();
+  
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/query/Query.java b/trunk/gora-core/src/main/java/org/apache/gora/query/Query.java
new file mode 100644
index 0000000..6e50e4b
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/query/Query.java
@@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.query;
+
+import java.io.IOException;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.store.DataStore;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * A query to a data store to retrieve objects. Queries are constructed by 
+ * the DataStore implementation via {@link DataStore#newQuery()}.
+ */
+public interface Query<K, T extends Persistent> extends Writable, Configurable {
+
+  /**
+   * Sets the dataStore of this query. Under normal operation this call 
+   * is not necessary and it is potentially dangerous, so use this 
+   * method only if you know what you are doing.
+   * @param dataStore the dataStore of the query
+   */
+  void setDataStore(DataStore<K,T> dataStore);
+  
+  /**
+   * Returns the DataStore that this Query is associated with.
+   * @return the DataStore of the Query
+   */
+  DataStore<K,T> getDataStore();
+  
+  /**
+   * Executes the Query on the DataStore and returns the results.
+   * @return the {@link Result} for the query.
+   */
+  Result<K,T> execute() throws IOException;
+  
+//  /**
+//   * Compiles the query for performance and error checking. This 
+//   * method is an optional optimization for DataStore implementations.
+//   */
+//  void compile();
+//  
+//  /**
+//   * Sets the query string
+//   * @param queryString the query in String
+//   */
+//  void setQueryString(String queryString);
+//  
+//  /**
+//   * Returns the query string
+//   * @return the query as String
+//   */
+//  String getQueryString();
+
+  /* Dimension : fields */
+  void setFields(String... fieldNames);
+
+  String[] getFields();
+
+  /* Dimension : key */ 
+  void setKey(K key);
+
+  /**
+   * 
+   * @param startKey
+   *          an inclusive start key
+   */
+  void setStartKey(K startKey);
+
+  /**
+   * 
+   * @param endKey
+   *          an inclusive end key
+   */
+  void setEndKey(K endKey);
+
+  /**
+   * Set the range of keys over which the query will execute.
+   * 
+   * @param startKey
+   *          an inclusive start key
+   * @param endKey
+   *          an inclusive end key
+   */
+  void setKeyRange(K startKey, K endKey);
+
+  K getKey();
+
+  K getStartKey();
+
+  K getEndKey();
+  
+  /* Dimension : time */
+  void setTimestamp(long timestamp);
+
+  void setStartTime(long startTime);
+
+  void setEndTime(long endTime);
+
+  void setTimeRange(long startTime, long endTime);
+
+  long getTimestamp();
+
+  long getStartTime();
+
+  long getEndTime();
+  
+  /**
+   * Sets the maximum number of results to return.
+   */
+  void setLimit(long limit);
+
+  /**
+   * Returns the maximum number of results
+   * @return the limit if it is set, otherwise a negative number
+   */
+  long getLimit();
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/query/Result.java b/trunk/gora-core/src/main/java/org/apache/gora/query/Result.java
new file mode 100644
index 0000000..12e93ab
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/query/Result.java
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.query;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.store.DataStore;
+
+/**
+ * A result to a {@link Query}. Objects in the result set can be 
+ * iterated over by calling {@link #next()}, {@link #get()} 
+ * and {@link #getKey()}. 
+ */
+public interface Result<K,T extends Persistent> extends Closeable {
+
+  /**
+   * Returns the DataStore that this Result is associated with.
+   * @return the DataStore of the Result
+   */
+  DataStore<K,T> getDataStore();
+  
+  /**
+   * Returns the Query object for this Result.
+   * @return the Query object for this Result.
+   */
+  Query<K, T> getQuery();
+  
+  /**
+   * Advances to the next element and returns false if at end.
+   * @return true if end is not reached yet
+   */
+  boolean next() throws IOException;
+  
+  /**
+   * Returns the current key.
+   * @return current key
+   */
+  K getKey();
+  
+  /**
+   * Returns the current object.
+   * @return current object
+   */
+  T get();
+  
+  /**
+   * Returns the class of the keys
+   * @return class of the keys
+   */
+  Class<K> getKeyClass();
+    
+  /**
+   * Returns the class of the persistent objects
+   * @return class of the persistent objects
+   */
+  Class<T> getPersistentClass();
+  
+  /**
+   * Returns the number of times next() has been called with return value true.
+   * @return the number of results so far
+   */
+  long getOffset();
+  
+  /**
+   * Returns how far along the result has iterated, a value between 0 and 1.
+   */
+  float getProgress() throws IOException;
+  
+  @Override
+  void close() throws IOException;
+  
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/query/impl/FileSplitPartitionQuery.java b/trunk/gora-core/src/main/java/org/apache/gora/query/impl/FileSplitPartitionQuery.java
new file mode 100644
index 0000000..f1ca283
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/query/impl/FileSplitPartitionQuery.java
@@ -0,0 +1,85 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.query.impl;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.lib.input.FileSplit;
+
+/**
+ * Keeps a {@link FileSplit} to represent the partition boundaries.
+ * FileSplitPartitionQuery is best used with existing {@link InputFormat}s.
+ */
+public class FileSplitPartitionQuery<K, T extends Persistent>
+  extends PartitionQueryImpl<K,T> {
+
+  private FileSplit split;
+
+  public FileSplitPartitionQuery() {
+    super();
+  }
+
+  public FileSplitPartitionQuery(Query<K, T> baseQuery, FileSplit split)
+    throws IOException {
+    super(baseQuery, split.getLocations());
+    this.split = split;
+  }
+
+  public FileSplit getSplit() {
+    return split;
+  }
+
+  public long getLength() {
+    return split.getLength();
+  }
+
+  public long getStart() {
+    return split.getStart();
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    super.write(out);
+    split.write(out);
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    super.readFields(in);
+    if(split == null)
+      split = new FileSplit(null, 0, 0, null); //change to new FileSplit() once hadoop-core.jar is updated
+    split.readFields(in);
+  }
+
+  @SuppressWarnings("rawtypes")
+  @Override
+  public boolean equals(Object obj) {
+    if(obj instanceof FileSplitPartitionQuery) {
+      return super.equals(obj) &&
+      this.split.equals(((FileSplitPartitionQuery)obj).split);
+    }
+    return false;
+  }
+
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/query/impl/PartitionQueryImpl.java b/trunk/gora-core/src/main/java/org/apache/gora/query/impl/PartitionQueryImpl.java
new file mode 100644
index 0000000..b694a1b
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/query/impl/PartitionQueryImpl.java
@@ -0,0 +1,162 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.query.impl;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.util.IOUtils;
+
+/**
+ * Implementation for {@link PartitionQuery}.
+ */
+public class PartitionQueryImpl<K, T extends Persistent>
+  extends QueryBase<K, T> implements PartitionQuery<K, T> {
+
+  protected Query<K, T> baseQuery;
+  protected String[] locations;
+
+  public PartitionQueryImpl() {
+    super(null);
+  }
+
+  public PartitionQueryImpl(Query<K, T> baseQuery, String... locations) {
+    this(baseQuery, null, null, locations);
+  }
+
+  public PartitionQueryImpl(Query<K, T> baseQuery, K startKey, K endKey,
+      String... locations) {
+    super(baseQuery.getDataStore());
+    this.baseQuery = baseQuery;
+    this.locations = locations;
+    setStartKey(startKey);
+    setEndKey(endKey);
+    this.dataStore = baseQuery.getDataStore();
+  }
+
+  @Override
+  public String[] getLocations() {
+    return locations;
+  }
+
+  public Query<K, T> getBaseQuery() {
+    return baseQuery;
+  }
+
+  /* Override everything except start-key/end-key */
+
+  @Override
+  public String[] getFields() {
+    return baseQuery.getFields();
+  }
+
+  @Override
+  public DataStore<K, T> getDataStore() {
+    return baseQuery.getDataStore();
+  }
+
+  @Override
+  public long getTimestamp() {
+    return baseQuery.getTimestamp();
+  }
+
+  @Override
+  public long getStartTime() {
+    return baseQuery.getStartTime();
+  }
+
+  @Override
+  public long getEndTime() {
+    return baseQuery.getEndTime();
+  }
+
+  @Override
+  public long getLimit() {
+    return baseQuery.getLimit();
+  }
+
+  @Override
+  public void setFields(String... fields) {
+    baseQuery.setFields(fields);
+  }
+
+  @Override
+  public void setTimestamp(long timestamp) {
+    baseQuery.setTimestamp(timestamp);
+  }
+
+  @Override
+  public void setStartTime(long startTime) {
+    baseQuery.setStartTime(startTime);
+  }
+
+  @Override
+  public void setEndTime(long endTime) {
+    baseQuery.setEndTime(endTime);
+  }
+
+  @Override
+  public void setTimeRange(long startTime, long endTime) {
+    baseQuery.setTimeRange(startTime, endTime);
+  }
+
+  @Override
+  public void setLimit(long limit) {
+    baseQuery.setLimit(limit);
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    super.write(out);
+    IOUtils.serialize(null, out, baseQuery);
+    IOUtils.writeStringArray(out, locations);
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    super.readFields(in);
+    try {
+      baseQuery = IOUtils.deserialize(null, in, null);
+    } catch (ClassNotFoundException ex) {
+      throw new IOException(ex);
+    }
+    locations = IOUtils.readStringArray(in);
+    // Override the data store with the base query's data store. We could
+    // also skip super.readFields() so that a temporary this.dataStore is
+    // not created at all.
+    this.dataStore = baseQuery.getDataStore();
+  }
+
+  @Override
+  @SuppressWarnings({ "rawtypes" })
+  public boolean equals(Object obj) {
+    if(obj instanceof PartitionQueryImpl) {
+      PartitionQueryImpl that = (PartitionQueryImpl) obj;
+      return this.baseQuery.equals(that.baseQuery)
+        && Arrays.equals(locations, that.locations);
+    }
+    return false;
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/query/impl/QueryBase.java b/trunk/gora-core/src/main/java/org/apache/gora/query/impl/QueryBase.java
new file mode 100644
index 0000000..c04e089
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/query/impl/QueryBase.java
@@ -0,0 +1,313 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.query.impl;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.commons.lang.builder.EqualsBuilder;
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import org.apache.commons.lang.builder.ToStringBuilder;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.util.ClassLoadingUtils;
+import org.apache.gora.util.IOUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * Base class for Query implementations.
+ */
+public abstract class QueryBase<K, T extends Persistent>
+implements Query<K,T> {
+
+  protected DataStore<K,T> dataStore;
+
+  protected String queryString;
+  protected String[] fields;
+
+  protected K startKey;
+  protected K endKey;
+
+  protected long startTime = -1;
+  protected long endTime = -1;
+
+  protected String filter;
+
+  protected long limit = -1;
+
+  protected boolean isCompiled = false;
+
+  private Configuration conf;
+
+  public QueryBase(DataStore<K,T> dataStore) {
+    this.dataStore = dataStore;
+  }
+
+  @Override
+  public Result<K,T> execute() throws IOException {
+    //compile();
+    return dataStore.execute(this);
+  }
+
+//  @Override
+//  public void compile() {
+//    if(!isCompiled) {
+//      isCompiled = true;
+//    }
+//  }
+
+  @Override
+  public void setDataStore(DataStore<K, T> dataStore) {
+    this.dataStore = dataStore;
+  }
+
+  @Override
+  public DataStore<K, T> getDataStore() {
+    return dataStore;
+  }
+
+//  @Override
+//  public void setQueryString(String queryString) {
+//    this.queryString = queryString;
+//  }
+//
+//  @Override
+//  public String getQueryString() {
+//    return queryString;
+//  }
+
+  @Override
+  public void setFields(String... fields) {
+    this.fields = fields;
+  }
+
+  @Override
+  public String[] getFields() {
+    return fields;
+  }
+
+  @Override
+  public void setKey(K key) {
+    setKeyRange(key, key);
+  }
+
+  @Override
+  public void setStartKey(K startKey) {
+    this.startKey = startKey;
+  }
+
+  @Override
+  public void setEndKey(K endKey) {
+    this.endKey = endKey;
+  }
+
+  @Override
+  public void setKeyRange(K startKey, K endKey) {
+    this.startKey = startKey;
+    this.endKey = endKey;
+  }
+
+  @Override
+  public K getKey() {
+    if(startKey == endKey) {
+      return startKey; //address comparison
+    }
+    return null;
+  }
+
+  @Override
+  public K getStartKey() {
+    return startKey;
+  }
+
+  @Override
+  public K getEndKey() {
+    return endKey;
+  }
+
+  @Override
+  public void setTimestamp(long timestamp) {
+    setTimeRange(timestamp, timestamp);
+  }
+
+  @Override
+  public void setStartTime(long startTime) {
+    this.startTime = startTime;
+  }
+
+  @Override
+  public void setEndTime(long endTime) {
+    this.endTime = endTime;
+  }
+
+  @Override
+  public void setTimeRange(long startTime, long endTime) {
+    this.startTime = startTime;
+    this.endTime = endTime;
+  }
+
+  @Override
+  public long getTimestamp() {
+    return startTime == endTime ? startTime : -1;
+  }
+
+  @Override
+  public long getStartTime() {
+    return startTime;
+  }
+
+  @Override
+  public long getEndTime() {
+    return endTime;
+  }
+
+//  @Override
+//  public void setFilter(String filter) {
+//    this.filter = filter;
+//  }
+//
+//  @Override
+//  public String getFilter() {
+//    return filter;
+//  }
+
+  @Override
+  public void setLimit(long limit) {
+    this.limit = limit;
+  }
+
+  @Override
+  public long getLimit() {
+    return limit;
+  }
+
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    this.conf = conf;
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public void readFields(DataInput in) throws IOException {
+    String dataStoreClass = Text.readString(in);
+    try {
+      dataStore = (DataStore<K, T>) ReflectionUtils.newInstance(ClassLoadingUtils.loadClass(dataStoreClass), conf);
+      dataStore.readFields(in);
+    } catch (ClassNotFoundException ex) {
+      throw new IOException(ex);
+    }
+
+    boolean[] nullFields = IOUtils.readNullFieldsInfo(in);
+
+    if(!nullFields[0])
+      queryString = Text.readString(in);
+    if(!nullFields[1])
+      fields = IOUtils.readStringArray(in);
+    if(!nullFields[2])
+      startKey = IOUtils.deserialize(null, in, null, dataStore.getKeyClass());
+    if(!nullFields[3])
+      endKey = IOUtils.deserialize(null, in, null, dataStore.getKeyClass());
+    if(!nullFields[4])
+      filter = Text.readString(in);
+
+    startTime = WritableUtils.readVLong(in);
+    endTime = WritableUtils.readVLong(in);
+    limit = WritableUtils.readVLong(in);
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    //write datastore
+    Text.writeString(out, dataStore.getClass().getCanonicalName());
+    dataStore.write(out);
+
+    IOUtils.writeNullFieldsInfo(out, queryString, fields,
+        startKey, endKey, filter);
+
+    if(queryString != null)
+      Text.writeString(out, queryString);
+    if(fields != null)
+      IOUtils.writeStringArray(out, fields);
+    if(startKey != null)
+      IOUtils.serialize(getConf(), out, startKey, dataStore.getKeyClass());
+    if(endKey != null)
+      IOUtils.serialize(getConf(), out, endKey, dataStore.getKeyClass());
+    if(filter != null)
+      Text.writeString(out, filter);
+
+    WritableUtils.writeVLong(out, getStartTime());
+    WritableUtils.writeVLong(out, getEndTime());
+    WritableUtils.writeVLong(out, getLimit());
+  }
+
+  @SuppressWarnings({ "rawtypes" })
+  @Override
+  public boolean equals(Object obj) {
+    if(obj instanceof QueryBase) {
+      QueryBase that = (QueryBase) obj;
+      EqualsBuilder builder = new EqualsBuilder();
+      builder.append(dataStore, that.dataStore);
+      builder.append(queryString, that.queryString);
+      builder.append(fields, that.fields);
+      builder.append(startKey, that.startKey);
+      builder.append(endKey, that.endKey);
+      builder.append(filter, that.filter);
+      builder.append(limit, that.limit);
+      return builder.isEquals();
+    }
+    return false;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+    builder.append(dataStore);
+    builder.append(queryString);
+    builder.append(fields);
+    builder.append(startKey);
+    builder.append(endKey);
+    builder.append(filter);
+    builder.append(limit);
+    return builder.toHashCode();
+  }
+
+  @Override
+  public String toString() {
+    ToStringBuilder builder = new ToStringBuilder(this);
+    builder.append("dataStore", dataStore);
+    builder.append("fields", fields);
+    builder.append("startKey", startKey);
+    builder.append("endKey", endKey);
+    builder.append("filter", filter);
+    builder.append("limit", limit);
+
+    return builder.toString();
+  }
+}
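The `write()`/`readFields()` pair above uses a null-fields bitmap: a flag per optional field is written first, then only the non-null values themselves, so the reader knows which fields to expect. The following is a minimal, self-contained sketch of that pattern using plain `java.io` streams; the class and method names are illustrative and not Gora's actual `IOUtils` API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of QueryBase's serialization pattern: null flags first,
// then only the non-null values.
public class NullFieldsSketch {

  static byte[] write(String queryString, String filter) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    // "null fields info": one flag per optional field
    out.writeBoolean(queryString == null);
    out.writeBoolean(filter == null);
    // values are written only when present
    if (queryString != null) out.writeUTF(queryString);
    if (filter != null) out.writeUTF(filter);
    return bytes.toByteArray();
  }

  static String[] read(byte[] data) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
    boolean queryNull = in.readBoolean();
    boolean filterNull = in.readBoolean();
    return new String[] {
        queryNull ? null : in.readUTF(),
        filterNull ? null : in.readUTF()
    };
  }

  public static void main(String[] args) throws IOException {
    String[] round = read(write("key > 10", null));
    System.out.println(round[0] + " / " + round[1]);
  }
}
```

The flags cost one byte per optional field but keep the stream self-describing, which is why `readFields` can deserialize a query without knowing in advance which of `queryString`, `fields`, `startKey`, `endKey`, or `filter` were set.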
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/query/impl/ResultBase.java b/trunk/gora-core/src/main/java/org/apache/gora/query/impl/ResultBase.java
new file mode 100644
index 0000000..59226fb
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/query/impl/ResultBase.java
@@ -0,0 +1,134 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.query.impl;
+
+import java.io.IOException;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.store.DataStore;
+
+/**
+ * Base class for {@link Result} implementations.
+ */
+public abstract class ResultBase<K, T extends Persistent> 
+  implements Result<K, T> {
+
+  protected final DataStore<K,T> dataStore;
+  
+  protected final Query<K, T> query;
+  
+  protected K key;
+  
+  protected T persistent;
+  
+  /** Query limit */
+  protected long limit;
+  
+  /** How far we have proceeded. */
+  protected long offset = 0;
+  
+  public ResultBase(DataStore<K,T> dataStore, Query<K,T> query) {
+    this.dataStore = dataStore;
+    this.query = query;
+    this.limit = query.getLimit();
+  }
+  
+  @Override
+  public DataStore<K, T> getDataStore() {
+    return dataStore;
+  }
+  
+  @Override
+  public Query<K, T> getQuery() {
+    return query;
+  }
+  
+  @Override
+  public T get() {
+    return persistent;
+  }
+  
+  @Override
+  public K getKey() {
+    return key;
+  }
+    
+  @Override
+  public Class<K> getKeyClass() {
+    return getDataStore().getKeyClass();
+  }
+  
+  @Override
+  public Class<T> getPersistentClass() {
+    return getDataStore().getPersistentClass();
+  }
+  
+  /**
+   * Returns whether the limit for the query is reached. 
+   */
+  protected boolean isLimitReached() {
+    if(limit > 0 && offset >= limit) {
+      return true;
+    }
+    return false;
+  }
+  
+  protected void clear() {
+    if(persistent != null) {
+      persistent.clear();
+    }
+    if(key != null && key instanceof Persistent) {
+      ((Persistent)key).clear();
+    }
+  }
+  
+  @Override
+  public final boolean next() throws IOException {
+    if(isLimitReached()) {
+      return false;
+    }
+    
+    clear();
+    persistent = getOrCreatePersistent(persistent);
+    
+    boolean ret = nextInner();
+    if(ret) ++offset;
+    return ret;
+  }
+  
+  @Override
+  public long getOffset() {
+    return offset;
+  }
+  
+  /**
+   * {@link ResultBase#next()} calls this function to read the 
+   * actual results. 
+   */
+  protected abstract boolean nextInner() throws IOException; 
+  
+  protected T getOrCreatePersistent(T persistent) throws IOException {
+    if(persistent != null) {
+      return persistent;
+    }
+    return dataStore.newPersistent();
+  }
+}
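The core contract of `ResultBase.next()` is the limit/offset accounting: once a positive limit has been consumed, `next()` stops advancing regardless of whether the backing store has more rows. Here is a minimal, self-contained sketch of that accounting over an in-memory list; the class is illustrative and stands in for a concrete `ResultBase` subclass, with the iterator playing the role of `nextInner()`.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Sketch of ResultBase's limit/offset accounting.
public class LimitedResult<T> {
  private final Iterator<T> source;
  private final long limit;   // <= 0 means unlimited, as with Query.getLimit()
  private long offset = 0;
  private T current;

  public LimitedResult(List<T> data, long limit) {
    this.source = data.iterator();
    this.limit = limit;
  }

  public boolean next() {
    // mirrors isLimitReached(): only a positive limit is enforced
    if (limit > 0 && offset >= limit) return false;
    if (!source.hasNext()) return false;   // nextInner() analogue
    current = source.next();
    ++offset;                              // advance only on success
    return true;
  }

  public T get() { return current; }
  public long getOffset() { return offset; }

  public static void main(String[] args) {
    LimitedResult<Integer> r = new LimitedResult<>(Arrays.asList(1, 2, 3, 4, 5), 3);
    while (r.next()) {
      System.out.println(r.get());
    }
  }
}
```

Note that, as in `ResultBase`, the offset is incremented only when the inner fetch succeeds, so `getOffset()` always reports the number of records actually delivered.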
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/store/DataStore.java b/trunk/gora-core/src/main/java/org/apache/gora/store/DataStore.java
new file mode 100644
index 0000000..df0a1cb
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/store/DataStore.java
@@ -0,0 +1,232 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.store;
+
+import java.io.Closeable;
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.gora.persistency.BeanFactory;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * DataStore handles actual object persistence. Objects can be persisted,
+ * fetched, queried or deleted by the DataStore methods. DataStores can be
+ * constructed by an instance of {@link DataStoreFactory}.
+ *
+ * <p>DataStore implementations should be thread-safe.
+ * <p><a name="visibility"><b>Note:</b> Results of updates ({@link #put(Object, Persistent)},
+ * {@link #delete(Object)} and {@link #deleteByQuery(Query)} operations) are
+ * guaranteed to be visible to subsequent get / execute operations ONLY
+ * after a subsequent call to {@link #flush()}.
+ * @param <K> the class of keys in the datastore
+ * @param <T> the class of persistent objects in the datastore
+ */
+public interface DataStore<K, T extends Persistent> extends Closeable,
+  Writable, Configurable {
+
+  /**
+   * Initializes this DataStore.
+   * @param keyClass the class of the keys
+   * @param persistentClass the class of the persistent objects
+   * @param properties extra metadata
+   * @throws IOException
+   */
+  void initialize(Class<K> keyClass, Class<T> persistentClass,
+      Properties properties) throws IOException;
+
+  /**
+   * Sets the class of the keys
+   * @param keyClass the class of keys
+   */
+  void setKeyClass(Class<K> keyClass);
+
+  /**
+   * Returns the class of the keys
+   * @return class of the keys
+   */
+  Class<K> getKeyClass();
+
+  /**
+   * Sets the class of the persistent objects
+   * @param persistentClass class of persistent objects
+   */
+  void setPersistentClass(Class<T> persistentClass);
+
+  /**
+   * Returns the class of the persistent objects
+   * @return class of the persistent objects
+   */
+  Class<T> getPersistentClass();
+
+  /**
+   * Returns the schema name given to this DataStore
+   * @return schema name
+   */
+  String getSchemaName();
+
+  /**
+   * Creates the optional schema or table (or similar) in the datastore
+   * to hold the objects. If the schema was created previously, or the
+   * underlying data model does not support or need this operation,
+   * the operation is ignored.
+   */
+  void createSchema() throws IOException;
+
+  /**
+   * Deletes the underlying schema or table (or similar) in the datastore
+   * that holds the objects. This also deletes all the data associated with
+   * the schema.
+   */
+  void deleteSchema() throws IOException;
+
+  /**
+   * Deletes all the data associated with the schema, but keeps the
+   * schema (table or similar) intact.
+   */
+  void truncateSchema() throws IOException;
+
+  /**
+   * Returns whether the schema that holds the data exists in the datastore.
+   * @return whether schema exists
+   */
+  boolean schemaExists() throws IOException;
+
+  /**
+   * Returns a new instance of the key object. If the object cannot be
+   * instantiated (e.g. the class is a Java primitive wrapper, or does not
+   * have a no-arg constructor) an exception is thrown. Only use this
+   * function if you can make sure that the key class has a no-arg
+   * constructor.
+   * @return a new instance of the key object.
+   */
+  K newKey() throws IOException;
+
+  /**
+   * Returns a new instance of the managed persistent object.
+   * @return a new instance of the managed persistent object.
+   */
+  T newPersistent() throws IOException;
+
+  /**
+   * Returns the object corresponding to the given key fetching all the fields.
+   * @param key the key of the object
+   * @return the Object corresponding to the key or null if it cannot be found
+   */
+  T get(K key) throws IOException;
+
+  /**
+   * Returns the object corresponding to the given key.
+   * @param key the key of the object
+   * @param fields the fields required in the object. Pass null to retrieve all fields
+   * @return the Object corresponding to the key or null if it cannot be found
+   */
+  T get(K key, String[] fields) throws IOException;
+
+  /**
+   * Inserts the persistent object with the given key. If an 
+   * object with the same key already exists it will silently
+   * be replaced. See also the note on 
+   * <a href="#visibility">visibility</a>.
+   */
+  void put(K key, T obj) throws IOException;
+
+  /**
+   * Deletes the object with the given key
+   * @param key the key of the object
+   * @return whether the object was successfully deleted
+   */
+  boolean delete(K key) throws IOException;
+
+  /**
+   * Deletes all the objects matching the query.
+   * See also the note on <a href="#visibility">visibility</a>.
+   * @param query matching records to this query will be deleted
+   * @return number of deleted records
+   */
+  long deleteByQuery(Query<K, T> query) throws IOException;
+
+  /**
+   * Executes the given query and returns the results.
+   * @param query the query to execute.
+   * @return the results as a {@link Result} object.
+   */
+  Result<K,T> execute(Query<K, T> query) throws IOException;
+
+  /**
+   * Constructs and returns a new Query.
+   * @return a new Query.
+   */
+  Query<K, T> newQuery();
+
+  /**
+   * Partitions the given query and returns a list of {@link PartitionQuery}s,
+   * which will execute on local data.
+   * @param query the base query to create the partitions for. If the query
+   * is null, then the data store returns the partitions for the default query
+   * (returning every object)
+   * @return a List of PartitionQuery's
+   */
+  List<PartitionQuery<K,T>> getPartitions(Query<K,T> query)
+    throws IOException;
+
+  /**
+   * Forces the write caches to be flushed. DataStore implementations may
+   * optimize their writing by deferring the actual put / delete operations
+   * until this moment.
+   * See also the note on <a href="#visibility">visibility</a>.
+   */
+  void flush() throws IOException;
+
+  /**
+   * Sets the {@link BeanFactory} to use by the DataStore.
+   * @param beanFactory the BeanFactory to use
+   */
+  void setBeanFactory(BeanFactory<K,T> beanFactory);
+
+  /**
+   * Returns the BeanFactory used by the DataStore
+   * @return the BeanFactory used by the DataStore
+   */
+  BeanFactory<K,T> getBeanFactory();
+
+  /**
+   * Closes the DataStore. This should release any resources held by the
+   * implementation, so that the instance is ready for GC. No other
+   * DataStore methods may be called after this method has been called.
+   * Subsequent calls to this method are ignored.
+   */
+  void close() throws IOException;
+
+  Configuration getConf();
+
+  void setConf(Configuration conf);
+
+  void readFields(DataInput in) throws IOException;
+
+  void write(DataOutput out) throws IOException;
+
+}
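The "visibility" note in the `DataStore` javadoc allows implementations to buffer `put`/`delete` operations in a write cache and only commit them on `flush()`. A minimal, self-contained sketch of that contract (a toy map-backed store, not the Gora API) makes the semantics concrete: a `get()` between `put()` and `flush()` is not guaranteed to see the write.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the DataStore visibility contract: puts land in a
// write cache and become visible to get() only after flush().
public class BufferedStore<K, V> {
  private final Map<K, V> committed = new HashMap<>();
  private final Map<K, V> writeCache = new HashMap<>();

  public void put(K key, V value) {
    writeCache.put(key, value);          // deferred, not yet visible
  }

  public V get(K key) {
    return committed.get(key);           // sees flushed data only
  }

  public void flush() {
    committed.putAll(writeCache);        // commit deferred writes
    writeCache.clear();
  }
}
```

Real stores may flush earlier (e.g. when a cache fills up), so the contract is one-sided: writes are guaranteed visible *after* `flush()`, but not guaranteed invisible before it.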
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/store/DataStoreFactory.java b/trunk/gora-core/src/main/java/org/apache/gora/store/DataStoreFactory.java
new file mode 100644
index 0000000..d7ded0e
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/store/DataStoreFactory.java
@@ -0,0 +1,429 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.store;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Properties;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.store.impl.DataStoreBase;
+import org.apache.gora.util.ClassLoadingUtils;
+import org.apache.gora.util.GoraException;
+import org.apache.gora.util.ReflectionUtils;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * A Factory for {@link DataStore}s. DataStoreFactory instances are thread-safe.
+ */
+public class DataStoreFactory {
+
+  public static final Logger log = LoggerFactory.getLogger(DataStoreFactory.class);
+
+  public static final String GORA_DEFAULT_PROPERTIES_FILE = "gora.properties";
+
+  public static final String GORA_DEFAULT_DATASTORE_KEY = "gora.datastore.default";
+
+  public static final String GORA = "gora";
+
+  public static final String DATASTORE = "datastore";
+
+  private static final String GORA_DATASTORE = GORA + "." + DATASTORE + ".";
+
+  public static final String AUTO_CREATE_SCHEMA = "autocreateschema";
+
+  public static final String INPUT_PATH  = "input.path";
+
+  public static final String OUTPUT_PATH = "output.path";
+
+  public static final String MAPPING_FILE = "mapping.file";
+
+  public static final String SCHEMA_NAME = "schema.name";
+
+  /**
+   * Do not use! Deprecated because it shares system-wide state.
+   * Use {@link #createProps()} instead.
+   */
+  @Deprecated
+  public static final Properties properties = createProps();
+  
+  /**
+   * Creates a new {@link Properties}. It adds the default gora configuration
+   * resources. This properties object can be modified and used to instantiate
+   * store instances. It is recommended to use a separate properties object
+   * for each store, because the properties object is passed to store
+   * initialization methods, which may keep it as a field.
+   * @return The new properties object.
+   */
+  public static Properties createProps() {
+    try {
+      Properties properties = new Properties();
+      InputStream stream = DataStoreFactory.class.getClassLoader()
+        .getResourceAsStream(GORA_DEFAULT_PROPERTIES_FILE);
+      if(stream != null) {
+        try {
+          properties.load(stream);
+          return properties;
+        } finally {
+          stream.close();
+        }
+      } else {
+        log.warn(GORA_DEFAULT_PROPERTIES_FILE + " not found, properties will be empty.");
+      }
+      return properties;
+    } catch(Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  private DataStoreFactory() { }
+
+  private static <K, T extends Persistent> void initializeDataStore(
+      DataStore<K, T> dataStore, Class<K> keyClass, Class<T> persistent,
+      Properties properties) throws IOException {
+    dataStore.initialize(keyClass, persistent, properties);
+  }
+
+  /**
+   * Instantiate a new {@link DataStore}. Uses default properties. Uses 'null' schema.
+   * 
+   * @param dataStoreClass The datastore implementation class.
+   * @param keyClass The key class.
+   * @param persistent The value class.
+   * @param conf {@link Configuration} to be used by the store.
+   * @return A new store instance.
+   * @throws GoraException
+   */
+  public static <D extends DataStore<K,T>, K, T extends Persistent>
+  D createDataStore(Class<D> dataStoreClass
+      , Class<K> keyClass, Class<T> persistent, Configuration conf) throws GoraException {
+    return createDataStore(dataStoreClass, keyClass, persistent, conf, createProps(), null);
+  }
+
+  /**
+   * Instantiate a new {@link DataStore}. Uses default properties.
+   * 
+   * @param dataStoreClass The datastore implementation class.
+   * @param keyClass The key class.
+   * @param persistent The value class.
+   * @param conf {@link Configuration} to be used by the store.
+   * @param schemaName A default schema name that will be put into the properties.
+   * @return A new store instance.
+   * @throws GoraException
+   */
+  public static <D extends DataStore<K,T>, K, T extends Persistent>
+  D createDataStore(Class<D> dataStoreClass , Class<K> keyClass, 
+      Class<T> persistent, Configuration conf, String schemaName) throws GoraException {
+    return createDataStore(dataStoreClass, keyClass, persistent, conf, createProps(), schemaName);
+  }
+
+  /**
+   * Instantiate a new {@link DataStore}.
+   * 
+   * @param dataStoreClass The datastore implementation class.
+   * @param keyClass The key class.
+   * @param persistent The value class.
+   * @param conf {@link Configuration} to be used by the store.
+   * @param properties The properties to be used by the store.
+   * @param schemaName A default schema name that will be put into the properties.
+   * @return A new store instance.
+   * @throws GoraException
+   */
+  public static <D extends DataStore<K,T>, K, T extends Persistent>
+  D createDataStore(Class<D> dataStoreClass, Class<K> keyClass
+      , Class<T> persistent, Configuration conf, Properties properties, String schemaName) 
+  throws GoraException {
+    try {
+      setDefaultSchemaName(properties, schemaName);
+      D dataStore =
+        ReflectionUtils.newInstance(dataStoreClass);
+      if ((dataStore instanceof Configurable) && conf != null) {
+        ((Configurable)dataStore).setConf(conf);
+      }
+      initializeDataStore(dataStore, keyClass, persistent, properties);
+      return dataStore;
+
+    } catch (GoraException ex) {
+      throw ex;
+    } catch(Exception ex) {
+      throw new GoraException(ex);
+    }
+  }
+
+  /**
+   * Instantiate a new {@link DataStore}. Uses 'null' schema.
+   * 
+   * @param dataStoreClass The datastore implementation class.
+   * @param keyClass The key class.
+   * @param persistent The value class.
+   * @param conf {@link Configuration} to be used by the store.
+   * @param properties The properties to be used by the store.
+   * @return A new store instance.
+   * @throws GoraException
+   */
+  public static <D extends DataStore<K,T>, K, T extends Persistent>
+  D createDataStore(Class<D> dataStoreClass
+      , Class<K> keyClass, Class<T> persistent, Configuration conf, Properties properties) 
+  throws GoraException {
+    return createDataStore(dataStoreClass, keyClass, persistent, conf, properties, null);
+  }
+
+  /**
+   * Instantiate a new {@link DataStore}. Uses default properties. Uses 'null' schema.
+   * 
+   * @param dataStoreClass The datastore implementation class.
+   * @param keyClass The key class.
+   * @param persistentClass The value class.
+   * @param conf {@link Configuration} to be used by the store.
+   * @return A new store instance.
+   * @throws GoraException
+   */
+  public static <D extends DataStore<K,T>, K, T extends Persistent>
+  D getDataStore( Class<D> dataStoreClass, Class<K> keyClass,
+      Class<T> persistentClass, Configuration conf) throws GoraException {
+
+    return createDataStore(dataStoreClass, keyClass, persistentClass, conf, createProps(), null);
+  }
+
+  /**
+   * Instantiate a new {@link DataStore}. Uses default properties. Uses 'null' schema.
+   * 
+   * @param dataStoreClass The datastore implementation class <i>as string</i>.
+   * @param keyClass The key class.
+   * @param persistentClass The value class.
+   * @param conf {@link Configuration} to be used by the store.
+   * @return A new store instance.
+   * @throws GoraException
+   */
+  @SuppressWarnings("unchecked")
+  public static <K, T extends Persistent> DataStore<K, T> getDataStore(
+      String dataStoreClass, Class<K> keyClass, Class<T> persistentClass, Configuration conf)
+      throws GoraException {
+    try {
+      Class<? extends DataStore<K,T>> c
+        = (Class<? extends DataStore<K, T>>) Class.forName(dataStoreClass);
+      return createDataStore(c, keyClass, persistentClass, conf, createProps(), null);
+    } catch(GoraException ex) {
+      throw ex;
+    } catch (Exception ex) {
+      throw new GoraException(ex);
+    }
+  }
+
+  /**
+   * Instantiate a new {@link DataStore}. Uses default properties. Uses 'null' schema.
+   * 
+   * @param dataStoreClass The datastore implementation class <i>as string</i>.
+   * @param keyClass The key class <i>as string</i>.
+   * @param persistentClass The value class <i>as string</i>.
+   * @param conf {@link Configuration} to be used by the store.
+   * @return A new store instance.
+   * @throws GoraException
+   */
+  @SuppressWarnings({ "unchecked" })
+  public static <K, T extends Persistent> DataStore<K, T> getDataStore(
+      String dataStoreClass, String keyClass, String persistentClass, Configuration conf)
+    throws GoraException {
+
+    try {
+      Class<? extends DataStore<K,T>> c
+          = (Class<? extends DataStore<K, T>>) Class.forName(dataStoreClass);
+      Class<K> k = (Class<K>) ClassLoadingUtils.loadClass(keyClass);
+      Class<T> p = (Class<T>) ClassLoadingUtils.loadClass(persistentClass);
+      return createDataStore(c, k, p, conf, createProps(), null);
+    } catch(GoraException ex) {
+      throw ex;
+    } catch (Exception ex) {
+      throw new GoraException(ex);
+    }
+  }
+
+  /**
+   * Instantiate <i>the default</i> {@link DataStore}. Uses default properties. Uses 'null' schema.
+   * 
+   * @param keyClass The key class.
+   * @param persistent The value class.
+   * @param conf {@link Configuration} to be used by the store.
+   * @return A new store instance.
+   * @throws GoraException
+   */
+  @SuppressWarnings("unchecked")
+  public static <K, T extends Persistent> DataStore<K, T> getDataStore(
+      Class<K> keyClass, Class<T> persistent, Configuration conf) throws GoraException {
+    Properties createProps = createProps();
+    Class<? extends DataStore<K, T>> c;
+    try {
+      c = (Class<? extends DataStore<K, T>>) Class.forName(getDefaultDataStore(createProps));
+    } catch (Exception ex) {
+      throw new GoraException(ex);
+    }
+    return createDataStore(c, keyClass, persistent, conf, createProps, null);
+  }
+
+  /**
+   * Tries to find a property with the given baseKey. First the property
+   * key constructed as "gora.&lt;classname&gt;.&lt;baseKey&gt;" is searched.
+   * If not found, the property keys for all superclasses are recursively
+   * tested. Lastly, the property key constructed as
+   * "gora.datastore.&lt;baseKey&gt;" is searched.
+   * @return the first found value, or defaultValue
+   */
+  public static String findProperty(Properties properties
+      , DataStore<?, ?> store, String baseKey, String defaultValue) {
+
+    //recursively try the class names until the base class
+    Class<?> clazz = store.getClass();
+    while(true) {
+      String fullKey = GORA + "." + org.apache.gora.util.StringUtils.getClassname(clazz) + "." + baseKey;
+      String value = getProperty(properties, fullKey);
+      if(value != null) {
+        return value;
+      }
+      //try once with lowercase
+      value = getProperty(properties, fullKey.toLowerCase());
+      if(value != null) {
+        return value;
+      }
+
+      if(clazz.equals(DataStoreBase.class)) {
+        break;
+      }
+      clazz = clazz.getSuperclass();
+      if(clazz == null) {
+        break;
+      }
+    }
+    //try with "datastore"
+    String fullKey = GORA + "." + DATASTORE + "." + baseKey;
+    String value = getProperty(properties, fullKey);
+    if(value != null) {
+      return value;
+    }
+    return defaultValue;
+  }
+
+  /**
+   * Tries to find a property with the given baseKey. First the property
+   * key constructed as "gora.&lt;classname&gt;.&lt;baseKey&gt;" is searched.
+   * If not found, the property keys for all superclasses are recursively
+   * tested. Lastly, the property key constructed as
+   * "gora.datastore.&lt;baseKey&gt;" is searched.
+   * @return the first found value, or throws IOException
+   */
+  public static String findPropertyOrDie(Properties properties
+      , DataStore<?, ?> store, String baseKey) throws IOException {
+    String val = findProperty(properties, store, baseKey, null);
+    if(val == null) {
+      throw new IOException("Property with base name \"" + baseKey + "\" could not be found, " +
+          "make sure to include this property in the gora.properties file");
+    }
+    return val;
+  }
+
+  public static boolean findBooleanProperty(Properties properties
+      , DataStore<?, ?> store, String baseKey, String defaultValue) {
+    return Boolean.parseBoolean(findProperty(properties, store, baseKey, defaultValue));
+  }
+
+  public static boolean getAutoCreateSchema(Properties properties
+      , DataStore<?,?> store) {
+    return findBooleanProperty(properties, store, AUTO_CREATE_SCHEMA, "true");
+  }
+
+  /**
+   * Returns the input path as read from the properties for file-backed data stores.
+   */
+  public static String getInputPath(Properties properties, DataStore<?,?> store) {
+    return findProperty(properties, store, INPUT_PATH, null);
+  }
+
+  /**
+   * Returns the output path as read from the properties for file-backed data stores.
+   */
+  public static String getOutputPath(Properties properties, DataStore<?,?> store) {
+    return findProperty(properties, store, OUTPUT_PATH, null);
+  }
+
+  public static String getMappingFile(Properties properties, DataStore<?,?> store
+      , String defaultValue) {
+    return findProperty(properties, store, MAPPING_FILE, defaultValue);
+  }
+
+  private static String getDefaultDataStore(Properties properties) {
+    return getProperty(properties, GORA_DEFAULT_DATASTORE_KEY);
+  }
+
+  private static String getProperty(Properties properties, String key) {
+    return getProperty(properties, key, null);
+  }
+
+  private static String getProperty(Properties properties, String key, String defaultValue) {
+    if (properties == null) {
+      return defaultValue;
+    }
+    String result = properties.getProperty(key);
+    if (result == null) {
+      return defaultValue;
+    }
+    return result;
+  }
+
+  /**
+   * Set a property
+   */
+  private static void setProperty(Properties properties, String baseKey, String value) {
+    if(value != null) {
+      properties.setProperty(GORA_DATASTORE + baseKey, value);
+    }
+  }
+
+  /**
+   * Sets a property for the datastore of the given class
+   */
+  private static<D extends DataStore<K,T>, K, T extends Persistent>
+  void setProperty(Properties properties, Class<D> dataStoreClass, String baseKey, String value) {
+    properties.setProperty(GORA+"."+org.apache.gora.util.StringUtils.getClassname(dataStoreClass)+"."+baseKey, value);
+  }
+
+  /**
+   * Gets the default schema name of a given store class 
+   */
+  public static String getDefaultSchemaName(Properties properties, DataStore<?,?> store) {
+    return findProperty(properties, store, SCHEMA_NAME, null);
+  }
+
+  /**
+   * Sets the default schema name.
+   */
+  public static void setDefaultSchemaName(Properties properties, String schemaName) {
+    if (schemaName != null) {
+      setProperty(properties, SCHEMA_NAME, schemaName);
+    }
+  }
+
+  /**
+   * Sets the default schema name to be used by the datastore of the given class
+   */
+  public static<D extends DataStore<K,T>, K, T extends Persistent>
+  void setDefaultSchemaName(Properties properties, Class<D> dataStoreClass, String schemaName) {
+    setProperty(properties, dataStoreClass, SCHEMA_NAME, schemaName);
+  }
+}
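The setters above show the layered key scheme: a value can be stored under a store-class-specific key (`gora.<StoreClass>.<key>`) or under the common datastore prefix, with the specific key taking precedence on lookup. The sketch below illustrates that fallback pattern with plain `java.util.Properties`; the key strings and the `findProperty` helper here are illustrative stand-ins, not Gora's actual constants or API.

```java
import java.util.Properties;

public class PropertyFallback {

    // Illustrative prefix; Gora's real constants live in DataStoreFactory.
    static final String GLOBAL_PREFIX = "gora.datastore.";

    /** Looks up a store-class-specific key first, then the datastore-wide default. */
    static String findProperty(Properties props, String storeClass,
                               String baseKey, String defaultValue) {
        String specific = props.getProperty("gora." + storeClass + "." + baseKey);
        if (specific != null) {
            return specific;
        }
        return props.getProperty(GLOBAL_PREFIX + baseKey, defaultValue);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("gora.datastore.schema.name", "default_schema");
        props.setProperty("gora.HBaseStore.schema.name", "hbase_schema");

        // The store-specific entry wins over the global default.
        System.out.println(findProperty(props, "HBaseStore", "schema.name", null));
        System.out.println(findProperty(props, "CassandraStore", "schema.name", null));
    }
}
```

This is why two stores in the same application can share one properties file while still being configured independently.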
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/store/FileBackedDataStore.java b/trunk/gora-core/src/main/java/org/apache/gora/store/FileBackedDataStore.java
new file mode 100644
index 0000000..cc7ad8c
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/store/FileBackedDataStore.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.store;
+
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import org.apache.gora.persistency.Persistent;
+
+/**
+ * FileBackedDataStore supplies the interfaces to set input
+ * and output paths and streams for data stores that are file based.
+ */
+public interface FileBackedDataStore<K, T extends Persistent> extends DataStore<K, T> {
+
+  void setInputPath(String inputPath);
+  
+  void setOutputPath(String outputPath);
+  
+  String getInputPath();
+  
+  String getOutputPath();
+  
+  void setInputStream(InputStream inputStream);
+  
+  void setOutputStream(OutputStream outputStream);
+
+  InputStream getInputStream();
+  
+  OutputStream getOutputStream();
+  
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/store/impl/DataStoreBase.java b/trunk/gora-core/src/main/java/org/apache/gora/store/impl/DataStoreBase.java
new file mode 100644
index 0000000..05cb94c
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/store/impl/DataStoreBase.java
@@ -0,0 +1,238 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.store.impl;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Map;
+import java.util.Properties;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.commons.lang.builder.EqualsBuilder;
+import org.apache.gora.avro.PersistentDatumReader;
+import org.apache.gora.avro.PersistentDatumWriter;
+import org.apache.gora.persistency.BeanFactory;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.impl.BeanFactoryImpl;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.util.AvroUtils;
+import org.apache.gora.util.ClassLoadingUtils;
+import org.apache.gora.util.StringUtils;
+import org.apache.gora.util.WritableUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+
+/**
+ * A Base class for {@link DataStore}s.
+ */
+public abstract class DataStoreBase<K, T extends Persistent>
+implements DataStore<K, T> {
+
+  protected BeanFactory<K, T> beanFactory;
+
+  protected Class<K> keyClass;
+  protected Class<T> persistentClass;
+
+  /** The schema of the persistent class*/
+  protected Schema schema;
+
+  /** A map of field names to Field objects containing schema's fields*/
+  protected Map<String, Field> fieldMap;
+
+  protected Configuration conf;
+
+  protected boolean autoCreateSchema;
+
+  protected Properties properties;
+
+  protected PersistentDatumReader<T> datumReader;
+
+  protected PersistentDatumWriter<T> datumWriter;
+
+  public DataStoreBase() {
+  }
+
+  @Override
+  public void initialize(Class<K> keyClass, Class<T> persistentClass,
+      Properties properties) throws IOException {
+    setKeyClass(keyClass);
+    setPersistentClass(persistentClass);
+    if(this.beanFactory == null)
+      this.beanFactory = new BeanFactoryImpl<K, T>(keyClass, persistentClass);
+    schema = this.beanFactory.getCachedPersistent().getSchema();
+    fieldMap = AvroUtils.getFieldMap(schema);
+
+    autoCreateSchema = DataStoreFactory.getAutoCreateSchema(properties, this);
+    this.properties = properties;
+
+    datumReader = new PersistentDatumReader<T>(schema, false);
+    datumWriter = new PersistentDatumWriter<T>(schema, false);
+  }
+
+  @Override
+  public void setPersistentClass(Class<T> persistentClass) {
+    this.persistentClass = persistentClass;
+  }
+
+  @Override
+  public Class<T> getPersistentClass() {
+    return persistentClass;
+  }
+
+  @Override
+  public Class<K> getKeyClass() {
+    return keyClass;
+  }
+
+  @Override
+  public void setKeyClass(Class<K> keyClass) {
+    if(keyClass != null)
+      this.keyClass = keyClass;
+  }
+
+  @Override
+  public K newKey() throws IOException {
+    try {
+      return beanFactory.newKey();
+    } catch (Exception ex) {
+      throw new IOException(ex);
+    }
+  }
+
+  @Override
+  public T newPersistent() throws IOException {
+    try {
+      return beanFactory.newPersistent();
+    } catch (Exception ex) {
+      throw new IOException(ex);
+    }
+  }
+
+  @Override
+  public void setBeanFactory(BeanFactory<K, T> beanFactory) {
+    this.beanFactory = beanFactory;
+  }
+
+  @Override
+  public BeanFactory<K, T> getBeanFactory() {
+    return beanFactory;
+  }
+
+  @Override
+  public T get(K key) throws IOException {
+    return get(key, getFieldsToQuery(null));
+  }
+
+  /**
+   * Returns the fields argument as-is if it is not null; otherwise
+   * returns all the fields of the Persistent object.
+   */
+  protected String[] getFieldsToQuery(String[] fields) {
+    if(fields != null) {
+      return fields;
+    }
+    return beanFactory.getCachedPersistent().getFields();
+  }
+
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    this.conf = conf;
+  }
+
+  protected Configuration getOrCreateConf() {
+    if(conf == null) {
+      conf = new Configuration();
+    }
+    return conf;
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public void readFields(DataInput in) throws IOException {
+    try {
+      Class<K> keyClass = (Class<K>) ClassLoadingUtils.loadClass(Text.readString(in));
+      Class<T> persistentClass = (Class<T>)ClassLoadingUtils.loadClass(Text.readString(in));
+      Properties props = WritableUtils.readProperties(in);
+      initialize(keyClass, persistentClass, props);
+    } catch (ClassNotFoundException ex) {
+      throw new IOException(ex);
+    }
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    Text.writeString(out, getKeyClass().getCanonicalName());
+    Text.writeString(out, getPersistentClass().getCanonicalName());
+    WritableUtils.writeProperties(out, properties);
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+    if(obj instanceof DataStoreBase) {
+      @SuppressWarnings("rawtypes")
+      DataStoreBase that = (DataStoreBase) obj;
+      EqualsBuilder builder = new EqualsBuilder();
+      builder.append(this.keyClass, that.keyClass);
+      builder.append(this.persistentClass, that.persistentClass);
+      return builder.isEquals();
+    }
+    return false;
+  }
+
+  /** Default implementation deletes and recreates the schema. */
+  @Override
+  public void truncateSchema() throws IOException {
+    deleteSchema();
+    createSchema();
+  }
+
+  /**
+   * Returns the name of the schema to use for the persistent class.
+   *
+   * First the schema name from the defined properties is returned. If that is
+   * null, the provided mappingSchemaName is returned. If this is null too,
+   * the class name, without the package, of the persistent class is returned.
+   * The result is prefixed with the "schema.prefix" value from the
+   * {@link Configuration}.
+   * @param mappingSchemaName the name of the schema as read from the mapping file
+   * @param persistentClass persistent class
+   */
+  protected String getSchemaName(String mappingSchemaName, Class<?> persistentClass) {
+    String prefix = getOrCreateConf().get("schema.prefix","");
+    
+    String schemaName = DataStoreFactory.getDefaultSchemaName(properties, this);
+    if(schemaName != null) {
+      return prefix+schemaName;
+    }
+
+    if(mappingSchemaName != null) {
+      return prefix+mappingSchemaName;
+    }
+
+    return prefix+StringUtils.getClassname(persistentClass);
+  }
+}
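The `getSchemaName` fallback chain at the end of `DataStoreBase` can be summarized in a standalone sketch. This is an illustrative reimplementation, not Gora's API: `getSimpleName()` stands in for `StringUtils.getClassname`, and the prefix is passed in directly instead of being read from a Hadoop `Configuration`.

```java
public class SchemaNameResolution {

    /**
     * Mirrors the fallback chain in DataStoreBase#getSchemaName:
     * configured name, then mapping-file name, then the persistent
     * class's simple name, each prefixed with the schema prefix.
     */
    static String resolveSchemaName(String prefix, String configuredName,
                                    String mappingSchemaName, Class<?> persistentClass) {
        if (prefix == null) prefix = "";       // Configuration.get("schema.prefix", "")
        if (configuredName != null) return prefix + configuredName;
        if (mappingSchemaName != null) return prefix + mappingSchemaName;
        return prefix + persistentClass.getSimpleName();
    }

    public static void main(String[] args) {
        // Falls all the way through to the class name when nothing is configured.
        System.out.println(resolveSchemaName("t_", null, null, String.class));
    }
}
```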
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/store/impl/FileBackedDataStoreBase.java b/trunk/gora-core/src/main/java/org/apache/gora/store/impl/FileBackedDataStoreBase.java
new file mode 100644
index 0000000..aaf60eb
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/store/impl/FileBackedDataStoreBase.java
@@ -0,0 +1,228 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.store.impl;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.gora.mapreduce.GoraMapReduceUtils;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.query.impl.FileSplitPartitionQuery;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.FileBackedDataStore;
+import org.apache.gora.util.OperationNotSupportedException;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.lib.input.FileSplit;
+
+/**
+ * Base implementations for {@link FileBackedDataStore} methods.
+ */
+public abstract class FileBackedDataStoreBase<K, T extends Persistent>
+  extends DataStoreBase<K, T> implements FileBackedDataStore<K, T> {
+
+  protected long inputSize; //input size in bytes
+
+  protected String inputPath;
+  protected String outputPath;
+
+  protected InputStream inputStream;
+  protected OutputStream outputStream;
+
+  @Override
+  public void initialize(Class<K> keyClass, Class<T> persistentClass,
+      Properties properties) throws IOException {
+    super.initialize(keyClass, persistentClass, properties);
+    if(properties != null) {
+      if(this.inputPath == null) {
+        this.inputPath = DataStoreFactory.getInputPath(properties, this);
+      }
+      if(this.outputPath == null) {
+        this.outputPath = DataStoreFactory.getOutputPath(properties, this);
+      }
+    }
+  }
+
+  @Override
+  public void setInputPath(String inputPath) {
+    this.inputPath = inputPath;
+  }
+
+  @Override
+  public void setOutputPath(String outputPath) {
+    this.outputPath = outputPath;
+  }
+
+  @Override
+  public String getInputPath() {
+    return inputPath;
+  }
+
+  @Override
+  public String getOutputPath() {
+    return outputPath;
+  }
+
+  @Override
+  public void setInputStream(InputStream inputStream) {
+    this.inputStream = inputStream;
+  }
+
+  @Override
+  public void setOutputStream(OutputStream outputStream) {
+    this.outputStream = outputStream;
+  }
+
+  @Override
+  public InputStream getInputStream() {
+    return inputStream;
+  }
+
+  @Override
+  public OutputStream getOutputStream() {
+    return outputStream;
+  }
+
+  /** Opens an InputStream for the input Hadoop path */
+  protected InputStream createInputStream() throws IOException {
+    //TODO: if the input path is a directory, use something like a
+    //MultiInputStream to read all the files recursively
+    Path path = new Path(inputPath);
+    FileSystem fs = path.getFileSystem(getConf());
+    inputSize = fs.getFileStatus(path).getLen();
+    return fs.open(path);
+  }
+
+  /** Opens an OutputStream for the output Hadoop path */
+  protected OutputStream createOutputStream() throws IOException {
+    Path path = new Path(outputPath);
+    FileSystem fs = path.getFileSystem(getConf());
+    return fs.create(path);
+  }
+
+  protected InputStream getOrCreateInputStream() throws IOException {
+    if(inputStream == null) {
+      inputStream = createInputStream();
+    }
+    return inputStream;
+  }
+
+  protected OutputStream getOrCreateOutputStream() throws IOException {
+    if(outputStream == null) {
+      outputStream = createOutputStream();
+    }
+    return outputStream;
+  }
+
+  @Override
+  public List<PartitionQuery<K, T>> getPartitions(Query<K, T> query)
+      throws IOException {
+    List<InputSplit> splits = GoraMapReduceUtils.getSplits(getConf(), inputPath);
+    List<PartitionQuery<K, T>> queries = new ArrayList<PartitionQuery<K,T>>(splits.size());
+
+    for(InputSplit split : splits) {
+      queries.add(new FileSplitPartitionQuery<K, T>(query, (FileSplit) split));
+    }
+
+    return queries;
+  }
+
+  @Override
+  public Result<K, T> execute(Query<K, T> query) throws IOException {
+    if(query instanceof FileSplitPartitionQuery) {
+      return executePartial((FileSplitPartitionQuery<K, T>) query);
+    } else {
+      return executeQuery(query);
+    }
+  }
+
+  /**
+   * Executes a normal Query, reading the whole data. {@link #execute(Query)}
+   * calls this method for queries that are not PartitionQuerys.
+   */
+  protected abstract Result<K,T> executeQuery(Query<K,T> query)
+    throws IOException;
+
+  /**
+   * Executes a FileSplitPartitionQuery, reading the data between the
+   * split's start and end.
+   */
+  protected abstract Result<K,T> executePartial(FileSplitPartitionQuery<K,T> query)
+    throws IOException;
+
+  @Override
+  public void flush() throws IOException {
+    if(outputStream != null)
+      outputStream.flush();
+  }
+
+  @Override
+  public void createSchema() throws IOException {
+  }
+
+  @Override
+  public void deleteSchema() throws IOException {
+    throw new OperationNotSupportedException("delete schema is not supported for " +
+        "file backed data stores");
+  }
+
+  @Override
+  public boolean schemaExists() throws IOException {
+    return true;
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    super.write(out);
+    org.apache.gora.util.IOUtils.writeNullFieldsInfo(out, inputPath, outputPath);
+    if(inputPath != null)
+      Text.writeString(out, inputPath);
+    if(outputPath != null)
+      Text.writeString(out, outputPath);
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    super.readFields(in);
+    boolean[] nullFields = org.apache.gora.util.IOUtils.readNullFieldsInfo(in);
+    if(!nullFields[0])
+      inputPath = Text.readString(in);
+    if(!nullFields[1])
+      outputPath = Text.readString(in);
+  }
+
+  @Override
+  public void close() throws IOException {
+    IOUtils.closeStream(inputStream);
+    IOUtils.closeStream(outputStream);
+    inputStream = null;
+    outputStream = null;
+  }
+}
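The `write`/`readFields` pair above serializes a null-flags header first, then only the non-null path strings, so the reader knows which fields to expect. A minimal sketch of the same pattern, using `writeBoolean`/`writeUTF` from `java.io` in place of Gora's `IOUtils.writeNullFieldsInfo` and Hadoop's `Text.writeString`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class NullableFieldsIo {

    /** Writes a null-flag per field, then only the non-null values. */
    static void write(DataOutput out, String inputPath, String outputPath) throws IOException {
        out.writeBoolean(inputPath == null);
        out.writeBoolean(outputPath == null);
        if (inputPath != null) out.writeUTF(inputPath);
        if (outputPath != null) out.writeUTF(outputPath);
    }

    /** Mirrors write(): reads the flags first, then only the present values. */
    static String[] read(DataInput in) throws IOException {
        boolean inputNull = in.readBoolean();
        boolean outputNull = in.readBoolean();
        String input = inputNull ? null : in.readUTF();
        String output = outputNull ? null : in.readUTF();
        return new String[] { input, output };
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        write(new DataOutputStream(buf), "/in/data", null);
        String[] paths = read(new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(paths[0] + " " + paths[1]);
    }
}
```

Writing the flags up front keeps the wire format self-describing, which is why the reader never has to guess whether a path string follows.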
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/AvroUtils.java b/trunk/gora-core/src/main/java/org/apache/gora/util/AvroUtils.java
new file mode 100644
index 0000000..f9c6945
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/AvroUtils.java
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.reflect.ReflectData;
+import org.apache.gora.persistency.Persistent;
+
+/**
+ * A utility class for Avro-related tasks.
+ */
+public class AvroUtils {
+
+  /**
+   * Returns a map of field name to Field for schema's fields.
+   */
+  public static Map<String, Field> getFieldMap(Schema schema) {
+    List<Field> fields = schema.getFields();
+    HashMap<String, Field> fieldMap = new HashMap<String, Field>(fields.size());
+    for(Field field: fields) {
+      fieldMap.put(field.name(), field);
+    }
+    return fieldMap;
+  }
+  
+  @SuppressWarnings("unchecked")
+  public static Object getEnumValue(Schema schema, String symbol) {
+    return Enum.valueOf(ReflectData.get().getClass(schema), symbol);
+  }
+  
+  public static Object getEnumValue(Schema schema, int enumOrdinal) {
+    String symbol = schema.getEnumSymbols().get(enumOrdinal);
+    return getEnumValue(schema, symbol);
+  }
+  
+  /**
+   * Returns the schema of the class
+   */
+  public static Schema getSchema(Class<? extends Persistent> clazz) 
+    throws SecurityException, NoSuchFieldException
+    , IllegalArgumentException, IllegalAccessException {
+    
+    java.lang.reflect.Field field = clazz.getDeclaredField("_SCHEMA");
+    return (Schema) field.get(null);
+  }
+  
+}
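The two `getEnumValue` overloads above resolve an enum constant either by symbol name or by ordinal position in the schema's symbol list. The same two lookups can be sketched without an Avro dependency; here `values()`/`getEnumConstants()` stands in for `schema.getEnumSymbols()`, and the local `Status` enum is purely illustrative.

```java
public class EnumLookup {

    enum Status { OPEN, CLOSED }

    /** Lookup by symbol name, as in AvroUtils.getEnumValue(schema, symbol). */
    @SuppressWarnings({"unchecked", "rawtypes"})
    static Object bySymbol(Class enumClass, String symbol) {
        return Enum.valueOf(enumClass, symbol);
    }

    /** Lookup by ordinal, as in AvroUtils.getEnumValue(schema, enumOrdinal). */
    static Object byOrdinal(Class<? extends Enum<?>> enumClass, int ordinal) {
        return enumClass.getEnumConstants()[ordinal];
    }

    public static void main(String[] args) {
        System.out.println(bySymbol(Status.class, "OPEN"));
        System.out.println(byOrdinal(Status.class, 1));
    }
}
```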
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/ByteUtils.java b/trunk/gora-core/src/main/java/org/apache/gora/util/ByteUtils.java
new file mode 100644
index 0000000..f356bc7
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/ByteUtils.java
@@ -0,0 +1,721 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.util;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.nio.ByteBuffer;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.reflect.ReflectData;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.avro.PersistentDatumReader;
+import org.apache.gora.avro.PersistentDatumWriter;
+import org.apache.hadoop.io.WritableUtils;
+
+//  This code is copied almost directly from HBase project's Bytes class.
+/**
+ * Utility class that handles byte arrays, conversions to/from other types.
+ *
+ */
+public class ByteUtils {
+
+  /**
+   * Size of boolean in bytes
+   */
+  public static final int SIZEOF_BOOLEAN = Byte.SIZE/Byte.SIZE;
+
+  /**
+   * Size of byte in bytes
+   */
+  public static final int SIZEOF_BYTE = SIZEOF_BOOLEAN;
+
+  /**
+   * Size of char in bytes
+   */
+  public static final int SIZEOF_CHAR = Character.SIZE/Byte.SIZE;
+
+  /**
+   * Size of double in bytes
+   */
+  public static final int SIZEOF_DOUBLE = Double.SIZE/Byte.SIZE;
+
+  /**
+   * Size of float in bytes
+   */
+  public static final int SIZEOF_FLOAT = Float.SIZE/Byte.SIZE;
+
+  /**
+   * Size of int in bytes
+   */
+  public static final int SIZEOF_INT = Integer.SIZE/Byte.SIZE;
+
+  /**
+   * Size of long in bytes
+   */
+  public static final int SIZEOF_LONG = Long.SIZE/Byte.SIZE;
+
+  /**
+   * Size of short in bytes
+   */
+  public static final int SIZEOF_SHORT = Short.SIZE/Byte.SIZE;
+
+  /**
+   * Put bytes at the specified byte array position.
+   * @param tgtBytes the byte array to write into
+   * @param tgtOffset position in the target array
+   * @param srcBytes byte array to copy from
+   * @param srcOffset offset into srcBytes to start copying from
+   * @param srcLength number of bytes to copy
+   * @return incremented offset
+   */
+  public static int putBytes(byte[] tgtBytes, int tgtOffset, byte[] srcBytes,
+      int srcOffset, int srcLength) {
+    System.arraycopy(srcBytes, srcOffset, tgtBytes, tgtOffset, srcLength);
+    return tgtOffset + srcLength;
+  }
+
+  /**
+   * Write a single byte out to the specified byte array position.
+   * @param bytes the byte array
+   * @param offset position in the array
+   * @param b byte to write out
+   * @return incremented offset
+   */
+  public static int putByte(byte[] bytes, int offset, byte b) {
+    bytes[offset] = b;
+    return offset + 1;
+  }
+
+  /**
+   * Returns a new byte array, copied from the passed ByteBuffer.
+   * @param bb A ByteBuffer
+   * @return the byte array
+   */
+  public static byte[] toBytes(ByteBuffer bb) {
+    int length = bb.limit();
+    byte [] result = new byte[length];
+    System.arraycopy(bb.array(), bb.arrayOffset(), result, 0, length);
+    return result;
+  }
+
+  /**
+   * @param b Presumed UTF-8 encoded byte array.
+   * @return String made from <code>b</code>
+   */
+  public static String toString(final byte [] b) {
+    if (b == null) {
+      return null;
+    }
+    return toString(b, 0, b.length);
+  }
+
+  public static String toString(final byte [] b1,
+                                String sep,
+                                final byte [] b2) {
+    return toString(b1, 0, b1.length) + sep + toString(b2, 0, b2.length);
+  }
+
+  /**
+   * @param b Presumed UTF-8 encoded byte array.
+   * @param off
+   * @param len
+   * @return String made from <code>b</code>
+   */
+  public static String toString(final byte [] b, int off, int len) {
+    if(b == null) {
+      return null;
+    }
+    if(len == 0) {
+      return "";
+    }
+    String result = null;
+    try {
+      result = new String(b, off, len, "UTF-8");
+    } catch (UnsupportedEncodingException e) {
+      e.printStackTrace();
+    }
+    return result;
+  }
+
+  /**
+   * Converts a string to a UTF-8 byte array.
+   * @param s
+   * @return the byte array
+   */
+  public static byte[] toBytes(String s) {
+    if (s == null) {
+      throw new IllegalArgumentException("string cannot be null");
+    }
+    byte [] result = null;
+    try {
+      result = s.getBytes("UTF-8");
+    } catch (UnsupportedEncodingException e) {
+      e.printStackTrace();
+    }
+    return result;
+  }
+
+  /**
+   * Convert a boolean to a byte array.
+   * @param b
+   * @return <code>b</code> encoded in a byte array.
+   */
+  public static byte [] toBytes(final boolean b) {
+    byte [] bb = new byte[1];
+    bb[0] = b? (byte)-1: (byte)0;
+    return bb;
+  }
+
+  /**
+   * @param b
+   * @return True or false.
+   */
+  public static boolean toBoolean(final byte [] b) {
+    if (b == null || b.length != 1) {
+      throw new IllegalArgumentException("Array is wrong size");
+    }
+    return b[0] != (byte)0;
+  }
+
+  /**
+   * Convert a long value to a byte array
+   * @param val
+   * @return the byte array
+   */
+  public static byte[] toBytes(long val) {
+    byte [] b = new byte[8];
+    for(int i = 7; i > 0; i--) {
+      b[i] = (byte)(val);
+      val >>>= 8;
+    }
+    b[0] = (byte)(val);
+    return b;
+  }
+
+  /**
+   * Converts a byte array to a long value
+   * @param bytes
+   * @return the long value
+   */
+  public static long toLong(byte[] bytes) {
+    return toLong(bytes, 0);
+  }
+
+  /**
+   * Converts a byte array to a long value
+   * @param bytes
+   * @param offset
+   * @return the long value
+   */
+  public static long toLong(byte[] bytes, int offset) {
+    return toLong(bytes, offset, SIZEOF_LONG);
+  }
+
+  /**
+   * Converts a byte array to a long value
+   * @param bytes
+   * @param offset
+   * @param length
+   * @return the long value
+   */
+  public static long toLong(byte[] bytes, int offset, final int length) {
+    if (bytes == null || length != SIZEOF_LONG ||
+        (offset + length > bytes.length)) {
+      return -1L;
+    }
+    long l = 0;
+    for(int i = offset; i < (offset + length); i++) {
+      l <<= 8;
+      l ^= (long)bytes[i] & 0xFF;
+    }
+    return l;
+  }
+
+  /**
+   * Presumes float encoded as IEEE 754 floating-point "single format"
+   * @param bytes
+   * @return Float made from passed byte array.
+   */
+  public static float toFloat(byte [] bytes) {
+    return toFloat(bytes, 0);
+  }
+
+  /**
+   * Presumes float encoded as IEEE 754 floating-point "single format"
+   * @param bytes
+   * @param offset
+   * @return Float made from passed byte array.
+   */
+  public static float toFloat(byte [] bytes, int offset) {
+    int i = toInt(bytes, offset);
+    return Float.intBitsToFloat(i);
+  }
+
+  /**
+   * @param f
+   * @return the float represented as byte []
+   */
+  public static byte [] toBytes(final float f) {
+    // Encode it as int
+    int i = Float.floatToRawIntBits(f);
+    return toBytes(i);
+  }
+
+  /**
+   * @param bytes
+   * @return Return double made from passed bytes.
+   */
+  public static double toDouble(final byte [] bytes) {
+    return toDouble(bytes, 0);
+  }
+
+  /**
+   * @param bytes
+   * @param offset
+   * @return Return double made from passed bytes.
+   */
+  public static double toDouble(final byte [] bytes, final int offset) {
+    long l = toLong(bytes, offset);
+    return Double.longBitsToDouble(l);
+  }
+
+  /**
+   * @param d
+   * @return the double represented as byte []
+   */
+  public static byte [] toBytes(final double d) {
+    // Encode it as a long
+    long l = Double.doubleToRawLongBits(d);
+    return toBytes(l);
+  }
+
+  /**
+   * Convert an int value to a byte array
+   * @param val
+   * @return the byte array
+   */
+  public static byte[] toBytes(int val) {
+    byte [] b = new byte[4];
+    for(int i = 3; i > 0; i--) {
+      b[i] = (byte)(val);
+      val >>>= 8;
+    }
+    b[0] = (byte)(val);
+    return b;
+  }
+
+  /**
+   * Converts a byte array to an int value
+   * @param bytes
+   * @return the int value
+   */
+  public static int toInt(byte[] bytes) {
+    return toInt(bytes, 0);
+  }
+
+  /**
+   * Converts a byte array to an int value
+   * @param bytes
+   * @param offset
+   * @return the int value
+   */
+  public static int toInt(byte[] bytes, int offset) {
+    return toInt(bytes, offset, SIZEOF_INT);
+  }
+
+  /**
+   * Converts a byte array to an int value
+   * @param bytes
+   * @param offset
+   * @param length
+   * @return the int value
+   */
+  public static int toInt(byte[] bytes, int offset, final int length) {
+    if (bytes == null || length != SIZEOF_INT ||
+        (offset + length > bytes.length)) {
+      return -1;
+    }
+    int n = 0;
+    for(int i = offset; i < (offset + length); i++) {
+      n <<= 8;
+      n ^= bytes[i] & 0xFF;
+    }
+    return n;
+  }
+
+  /**
+   * Convert a short value to a byte array
+   * @param val
+   * @return the byte array
+   */
+  public static byte[] toBytes(short val) {
+    byte[] b = new byte[SIZEOF_SHORT];
+    b[1] = (byte)(val);
+    val >>= 8;
+    b[0] = (byte)(val);
+    return b;
+  }
+
+  /**
+   * Converts a byte array to a short value
+   * @param bytes
+   * @return the short value
+   */
+  public static short toShort(byte[] bytes) {
+    return toShort(bytes, 0);
+  }
+
+  /**
+   * Converts a byte array to a short value
+   * @param bytes
+   * @param offset
+   * @return the short value
+   */
+  public static short toShort(byte[] bytes, int offset) {
+    return toShort(bytes, offset, SIZEOF_SHORT);
+  }
+
+  /**
+   * Converts a byte array to a short value
+   * @param bytes
+   * @param offset
+   * @param length
+   * @return the short value
+   */
+  public static short toShort(byte[] bytes, int offset, final int length) {
+    if (bytes == null || length != SIZEOF_SHORT ||
+        (offset + length > bytes.length)) {
+      return -1;
+    }
+    short n = 0;
+    n ^= bytes[offset] & 0xFF;
+    n <<= 8;
+    n ^= bytes[offset+1] & 0xFF;
+    return n;
+  }
+
+  /**
+   * Convert a char value to a byte array
+   *
+   * @param val
+   * @return the byte array
+   */
+  public static byte[] toBytes(char val) {
+    byte[] b = new byte[SIZEOF_CHAR];
+    b[1] = (byte) (val);
+    val >>= 8;
+    b[0] = (byte) (val);
+    return b;
+  }
+
+  /**
+   * Converts a byte array to a char value
+   *
+   * @param bytes
+   * @return the char value
+   */
+  public static char toChar(byte[] bytes) {
+    return toChar(bytes, 0);
+  }
+
+
+  /**
+   * Converts a byte array to a char value
+   *
+   * @param bytes the byte array to convert
+   * @param offset the offset into the array at which the char begins
+   * @return the char value
+   */
+  public static char toChar(byte[] bytes, int offset) {
+    return toChar(bytes, offset, SIZEOF_CHAR);
+  }
+
+  /**
+   * Converts a byte array to a char value
+   *
+   * @param bytes the byte array to convert
+   * @param offset the offset into the array at which the char begins
+   * @param length the number of bytes to read (must equal SIZEOF_CHAR)
+   * @return the char value
+   */
+  public static char toChar(byte[] bytes, int offset, final int length) {
+    if (bytes == null || length != SIZEOF_CHAR ||
+      (offset + length > bytes.length)) {
+      return (char)-1;
+    }
+    char n = 0;
+    n ^= bytes[offset] & 0xFF;
+    n <<= 8;
+    n ^= bytes[offset + 1] & 0xFF;
+    return n;
+  }
+
+  /**
+   * Converts a byte array to a char array value
+   *
+   * @param bytes the byte array to convert
+   * @return the resulting char array
+   */
+  public static char[] toChars(byte[] bytes) {
+    return toChars(bytes, 0, bytes.length);
+  }
+
+  /**
+   * Converts a byte array to a char array value
+   *
+   * @param bytes the byte array to convert
+   * @param offset the offset into the array at which conversion begins
+   * @return the resulting char array
+   */
+  public static char[] toChars(byte[] bytes, int offset) {
+    return toChars(bytes, offset, bytes.length-offset);
+  }
+
+  /**
+   * Converts a byte array to a char array value
+   *
+   * @param bytes the byte array to convert
+   * @param offset the offset into the array at which conversion begins
+   * @param length the number of bytes to convert (must be even)
+   * @return the resulting char array, or null if the input is invalid
+   */
+  public static char[] toChars(byte[] bytes, int offset, final int length) {
+    int max = offset + length;
+    if (bytes == null || (max > bytes.length) || length % 2 == 1) {
+      return null;
+    }
+
+    char[] chars = new char[length / 2];
+    for (int i = 0, j = offset; i < chars.length && j < max; i++, j += 2) {
+      char c = 0;
+      c ^= bytes[j] & 0xFF;
+      c <<= 8;
+      c ^= bytes[j + 1] & 0xFF;
+      chars[i] = c;
+    }
+    return chars;
+  }
+
+  /**
+   * @param vint Long value to encode as a variable-length integer.
+   * @return Vint as a byte array.
+   */
+  public static byte [] vintToBytes(final long vint) {
+    long i = vint;
+    int size = WritableUtils.getVIntSize(i);
+    byte [] result = new byte[size];
+    int offset = 0;
+    if (i >= -112 && i <= 127) {
+      result[offset] = ((byte)i);
+      return result;
+    }
+
+    int len = -112;
+    if (i < 0) {
+      i ^= -1L; // take one's complement
+      len = -120;
+    }
+
+    long tmp = i;
+    while (tmp != 0) {
+      tmp = tmp >> 8;
+      len--;
+    }
+
+    result[offset++] = (byte)len;
+
+    len = (len < -120) ? -(len + 120) : -(len + 112);
+
+    for (int idx = len; idx != 0; idx--) {
+      int shiftbits = (idx - 1) * 8;
+      long mask = 0xFFL << shiftbits;
+      result[offset++] = (byte)((i & mask) >> shiftbits);
+    }
+    return result;
+  }
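The encoder above follows Hadoop's `WritableUtils` vint layout: values in [-112, 127] occupy a single byte, while larger magnitudes get a marker byte (encoding sign and payload length: -113..-120 for positive values, -121..-128 for one's-complemented negatives) followed by big-endian payload bytes. A self-contained sketch of both directions, written without the Hadoop dependency (the `encode`/`decode` names are ours, not part of any API):

```java
public class VIntDemo {
    // Hadoop vint layout, as used by vintToBytes/bytesToVlong above
    static byte[] encode(long i) {
        if (i >= -112 && i <= 127) {
            return new byte[] { (byte) i }; // single-byte fast path
        }
        int len = -112;
        if (i < 0) {
            i ^= -1L; // one's complement
            len = -120;
        }
        long tmp = i;
        while (tmp != 0) {
            tmp >>= 8;
            len--; // one marker decrement per payload byte
        }
        int payload = (len < -120) ? -(len + 120) : -(len + 112);
        byte[] out = new byte[payload + 1];
        out[0] = (byte) len;
        for (int idx = payload, pos = 1; idx != 0; idx--, pos++) {
            out[pos] = (byte) ((i >> ((idx - 1) * 8)) & 0xFF);
        }
        return out;
    }

    static long decode(byte[] buf) {
        byte first = buf[0];
        if (first >= -112) {
            return first; // single-byte value
        }
        boolean negative = first < -120;
        int payload = negative ? -(first + 120) : -(first + 112);
        long i = 0;
        for (int k = 1; k <= payload; k++) {
            i = (i << 8) | (buf[k] & 0xFF);
        }
        return negative ? (i ^ -1L) : i;
    }

    public static void main(String[] args) {
        long[] samples = { 0, 127, -112, 128, -113, 1 << 20, Long.MIN_VALUE, Long.MAX_VALUE };
        for (long v : samples) {
            if (decode(encode(v)) != v) throw new AssertionError("round trip failed: " + v);
        }
        if (encode(127).length != 1) throw new AssertionError();
        if (encode(128).length != 2) throw new AssertionError();
        System.out.println("vint round-trip OK");
    }
}
```

The marker byte doubles as the length, which is what `WritableUtils.decodeVIntSize` and `isNegativeVInt` recover in the decoders below.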
+
+  /**
+   * @param buffer the buffer containing the vint-encoded bytes
+   * @return vint bytes decoded as a long.
+   */
+  public static long bytesToVlong(final byte [] buffer) {
+    int offset = 0;
+    byte firstByte = buffer[offset++];
+    int len = WritableUtils.decodeVIntSize(firstByte);
+    if (len == 1) {
+      return firstByte;
+    }
+    long i = 0;
+    for (int idx = 0; idx < len-1; idx++) {
+      byte b = buffer[offset++];
+      i = i << 8;
+      i = i | (b & 0xFF);
+    }
+    return (WritableUtils.isNegativeVInt(firstByte) ? (i ^ -1L) : i);
+  }
+
+  /**
+   * @param buffer the buffer containing the vint-encoded bytes
+   * @return vint bytes decoded as an integer.
+   */
+  public static int bytesToVint(final byte [] buffer) {
+    int offset = 0;
+    byte firstByte = buffer[offset++];
+    int len = WritableUtils.decodeVIntSize(firstByte);
+    if (len == 1) {
+      return firstByte;
+    }
+    long i = 0;
+    for (int idx = 0; idx < len-1; idx++) {
+      byte b = buffer[offset++];
+      i = i << 8;
+      i = i | (b & 0xFF);
+    }
+    return (int)(WritableUtils.isNegativeVInt(firstByte) ? (i ^ -1L) : i);
+  }
+
+  /**
+   * Reads a zero-compressed encoded long from input stream and returns it.
+   * @param buffer Binary array
+   * @param offset Offset into array at which vint begins.
+   * @throws java.io.IOException
+   * @return deserialized long from stream.
+   */
+  public static long readVLong(final byte [] buffer, final int offset)
+  throws IOException {
+    byte firstByte = buffer[offset];
+    int len = WritableUtils.decodeVIntSize(firstByte);
+    if (len == 1) {
+      return firstByte;
+    }
+    long i = 0;
+    for (int idx = 0; idx < len-1; idx++) {
+      byte b = buffer[offset + 1 + idx];
+      i = i << 8;
+      i = i | (b & 0xFF);
+    }
+    return (WritableUtils.isNegativeVInt(firstByte) ? (i ^ -1L) : i);
+  }
+
+  /**
+   * @param left the left byte array
+   * @param right the right byte array
+   * @return 0 if equal, a negative value if left is less than right, a positive value otherwise.
+   */
+  public static int compareTo(final byte [] left, final byte [] right) {
+    return compareTo(left, 0, left.length, right, 0, right.length);
+  }
+
+  /**
+   * @param b1 the left buffer
+   * @param s1 Where to start comparing in the left buffer
+   * @param l1 How much to compare from the left buffer
+   * @param b2 the right buffer
+   * @param s2 Where to start comparing in the right buffer
+   * @param l2 How much to compare from the right buffer
+   * @return 0 if equal, a negative value if left is less than right, a positive value otherwise.
+   */
+  public static int compareTo(byte[] b1, int s1, int l1,
+      byte[] b2, int s2, int l2) {
+    // Bring WritableComparator code local
+    int end1 = s1 + l1;
+    int end2 = s2 + l2;
+    for (int i = s1, j = s2; i < end1 && j < end2; i++, j++) {
+      int a = (b1[i] & 0xff);
+      int b = (b2[j] & 0xff);
+      if (a != b) {
+        return a - b;
+      }
+    }
+    return l1 - l2;
+  }
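`compareTo` orders arrays lexicographically by unsigned byte value, with the shorter array ranking first when one is a prefix of the other. A small standalone sketch of the same rule (the demo class is ours):

```java
public class ByteCompareDemo {
    // Unsigned lexicographic compare, mirroring the compareTo above
    static int compare(byte[] b1, byte[] b2) {
        for (int i = 0; i < b1.length && i < b2.length; i++) {
            int a = b1[i] & 0xff;
            int b = b2[i] & 0xff;
            if (a != b) {
                return a - b;
            }
        }
        return b1.length - b2.length; // common prefix: shorter sorts first
    }

    public static void main(String[] args) {
        if (compare(new byte[] {1, 2}, new byte[] {1, 2}) != 0) throw new AssertionError();
        // Unsigned ordering: 0x80 (-128 as a signed byte) sorts after 0x7F
        if (compare(new byte[] {(byte) 0x80}, new byte[] {0x7F}) <= 0) throw new AssertionError();
        // Prefix rule: {1} comes before {1, 0}
        if (compare(new byte[] {1}, new byte[] {1, 0}) >= 0) throw new AssertionError();
        System.out.println("compare OK");
    }
}
```

The `& 0xff` masks are what make the ordering unsigned; without them, bytes 0x80-0xFF would sort before 0x00.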
+
+  /**
+   * @param left the left byte array
+   * @param right the right byte array
+   * @return True if both arrays are null or have equal contents
+   */
+  public static boolean equals(final byte [] left, final byte [] right) {
+    // Could use Arrays.equals?
+    if (left == null || right == null) {
+      return left == right;
+    }
+    return left.length == right.length && compareTo(left, right) == 0;
+  }
+
+  @SuppressWarnings("unchecked")
+  public static Object fromBytes( byte[] val, Schema schema
+      , PersistentDatumReader<?> datumReader, Object object)
+  throws IOException {
+    Type type = schema.getType();
+    switch (type) {
+    case ENUM:
+      String symbol = schema.getEnumSymbols().get(val[0]);
+      return Enum.valueOf(ReflectData.get().getClass(schema), symbol);
+    case STRING:  return new Utf8(toString(val));
+    case BYTES:   return ByteBuffer.wrap(val);
+    case INT:     return bytesToVint(val);
+    case LONG:    return bytesToVlong(val);
+    case FLOAT:   return toFloat(val);
+    case DOUBLE:  return toDouble(val);
+    case BOOLEAN: return val[0] != 0;
+    case RECORD:  //fall
+    case MAP:
+    case ARRAY:   return IOUtils.deserialize(val, datumReader, schema, object);
+    default: throw new RuntimeException("Unknown type: "+type);
+    }
+  }
+
+  public static byte[] toBytes(Object o, Schema schema
+      , PersistentDatumWriter<?> datumWriter)
+  throws IOException {
+    Type type = schema.getType();
+    switch (type) {
+    case STRING:  return toBytes(((Utf8)o).toString()); // TODO: maybe ((Utf8)o).getBytes(); ?
+    case BYTES:   return ((ByteBuffer)o).array();
+    case INT:     return vintToBytes((Integer)o);
+    case LONG:    return vintToBytes((Long)o);
+    case FLOAT:   return toBytes((Float)o);
+    case DOUBLE:  return toBytes((Double)o);
+    case BOOLEAN: return (Boolean)o ? new byte[] {1} : new byte[] {0};
+    case ENUM:    return new byte[] { (byte)((Enum<?>) o).ordinal() };
+    case RECORD:  //fall
+    case MAP:
+    case ARRAY:   return IOUtils.serialize(datumWriter, schema, o);
+    default: throw new RuntimeException("Unknown type: "+type);
+    }
+  }
+}
\ No newline at end of file
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/ClassLoadingUtils.java b/trunk/gora-core/src/main/java/org/apache/gora/util/ClassLoadingUtils.java
new file mode 100644
index 0000000..ea77c1a
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/ClassLoadingUtils.java
@@ -0,0 +1,64 @@
+package org.apache.gora.util;
+
+public class ClassLoadingUtils {
+
+    private ClassLoadingUtils() {
+        //Utility Class
+    }
+
+    /**
+     * Loads a class by name.
+     * 1. The class loader of this utility class is tried first.
+     * 2. The thread context class loader is tried second.
+     * If both approaches fail, a ClassNotFoundException is thrown.
+     *
+     * @param className    The name of the class to load.
+     * @return The loaded class.
+     * @throws ClassNotFoundException if no class loader could load the class.
+     */
+    public static Class<?> loadClass(String className) throws ClassNotFoundException {
+        return ClassLoadingUtils.loadClass(ClassLoadingUtils.class,className);
+    }
+
+    /**
+     * Loads a class by name.
+     * 1. The class loader of the given context class is tried first.
+     * 2. The thread context class loader is tried second.
+     * If both approaches fail, a ClassNotFoundException is thrown.
+     *
+     * @param contextClass A context class whose loader is tried first.
+     * @param className    The name of the class to load.
+     * @return The loaded class.
+     * @throws ClassNotFoundException if no class loader could load the class.
+     */
+    public static Class<?> loadClass(Class<?> contextClass, String className) throws ClassNotFoundException {
+        Class<?> clazz = null;
+        if (contextClass.getClassLoader() != null) {
+            clazz = loadClass(className, contextClass.getClassLoader());
+        }
+        if (clazz == null && Thread.currentThread().getContextClassLoader() != null) {
+            clazz = loadClass(className, Thread.currentThread().getContextClassLoader());
+        }
+        if (clazz == null) {
+            throw new ClassNotFoundException("Failed to load class " + className);
+        }
+        return clazz;
+    }
+
+    /**
+     * Loads a {@link Class} from the specified {@link ClassLoader} without throwing {@link ClassNotFoundException}.
+     *
+     * @param className   The name of the class to load.
+     * @param classLoader The class loader to load from.
+     * @return The class, or null if it could not be loaded.
+     */
+    private static Class<?> loadClass(String className, ClassLoader classLoader) {
+        Class<?> clazz = null;
+        if (classLoader != null && className != null) {
+            try {
+                clazz = classLoader.loadClass(className);
+            } catch (ClassNotFoundException e) {
+                //Ignore and return null
+            }
+        }
+        return clazz;
+    }
+}
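The fallback order above (the context class's own loader first, then the thread context loader) can be sketched with `Class.forName` directly. This demo is an illustration only, not a drop-in replacement for `ClassLoadingUtils`:

```java
public class LoaderDemo {
    // Same fallback order as ClassLoadingUtils.loadClass(Class, String):
    // the context class's own loader first, then the thread context loader
    static Class<?> load(Class<?> contextClass, String className)
            throws ClassNotFoundException {
        try {
            return Class.forName(className, true, contextClass.getClassLoader());
        } catch (ClassNotFoundException ignored) {
            return Class.forName(className, true,
                Thread.currentThread().getContextClassLoader());
        }
    }

    public static void main(String[] args) {
        try {
            if (load(LoaderDemo.class, "java.lang.String") != String.class) {
                throw new AssertionError();
            }
            System.out.println("loaded java.lang.String");
        } catch (ClassNotFoundException e) {
            throw new AssertionError(e);
        }
    }
}
```

Trying two loaders matters in container environments (e.g. Hadoop jobs), where the class being loaded may only be visible to the thread context class loader.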
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/GoraException.java b/trunk/gora-core/src/main/java/org/apache/gora/util/GoraException.java
new file mode 100644
index 0000000..b741fd3
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/GoraException.java
@@ -0,0 +1,46 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+import java.io.IOException;
+
+/**
+ * Gora specific exception. This extends IOException, since 
+ * most of what Gora does is I/O related.
+ */
+public class GoraException extends IOException {
+
+  private static final long serialVersionUID = -3889679982234557828L;
+
+  public GoraException() {
+    super();
+  }
+
+  public GoraException(String message, Throwable cause) {
+    super(message, cause);
+  }
+
+  public GoraException(String message) {
+    super(message);
+  }
+
+  public GoraException(Throwable cause) {
+    super(cause);
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/IOUtils.java b/trunk/gora-core/src/main/java/org/apache/gora/util/IOUtils.java
new file mode 100644
index 0000000..62c93d0
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/IOUtils.java
@@ -0,0 +1,528 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.ObjectInput;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutput;
+import java.io.ObjectOutputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.avro.Schema;
+import org.apache.avro.io.BinaryDecoder;
+import org.apache.avro.io.BinaryEncoder;
+import org.apache.avro.io.Decoder;
+import org.apache.avro.io.DecoderFactory;
+import org.apache.avro.io.Encoder;
+import org.apache.avro.ipc.ByteBufferInputStream;
+import org.apache.avro.ipc.ByteBufferOutputStream;
+import org.apache.gora.avro.PersistentDatumReader;
+import org.apache.gora.avro.PersistentDatumWriter;
+import org.apache.gora.persistency.Persistent;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.DefaultStringifier;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.io.serializer.Deserializer;
+import org.apache.hadoop.io.serializer.SerializationFactory;
+import org.apache.hadoop.io.serializer.Serializer;
+
+/**
+ * A utility class for I/O related functionality.
+ */
+public class IOUtils {
+
+  private static SerializationFactory serializationFactory = null;
+  private static Configuration conf;
+
+  public static final int BUFFER_SIZE = 8192;
+
+  /** Shared decoder reused across calls; the Avro deserialize helpers below are therefore not thread-safe. */
+  private static BinaryDecoder decoder;
+
+  private static Configuration getOrCreateConf(Configuration conf) {
+    if(conf != null) {
+      return conf;
+    }
+    if(IOUtils.conf == null) {
+      IOUtils.conf = new Configuration();
+    }
+    return IOUtils.conf;
+  }
+
+  public static Object readObject(DataInput in)
+    throws ClassNotFoundException, IOException {
+
+    if(in instanceof ObjectInput) {
+      return ((ObjectInput)in).readObject();
+    } else {
+      if(in instanceof InputStream) {
+        ObjectInput objIn = new ObjectInputStream((InputStream)in);
+        Object obj = objIn.readObject();
+        return obj;
+      }
+    }
+    throw new IOException("cannot read from DataInput of instance: "
+        + in.getClass());
+  }
+
+  public static void writeObject(DataOutput out, Object obj)
+    throws IOException {
+    if(out instanceof ObjectOutput) {
+      ((ObjectOutput)out).writeObject(obj);
+      return;
+    }
+    if(out instanceof OutputStream) {
+      ObjectOutput objOut = new ObjectOutputStream((OutputStream)out);
+      objOut.writeObject(obj);
+      return;
+    }
+    throw new IOException("cannot write to DataOutput of instance: "
+        + out.getClass());
+  }
+
+  /** Serializes the object to the given dataoutput using
+   * available Hadoop serializations
+   * @throws IOException */
+  public static<T> void serialize(Configuration conf, DataOutput out
+      , T obj, Class<T> objClass) throws IOException {
+
+    if(serializationFactory == null) {
+      serializationFactory = new SerializationFactory(getOrCreateConf(conf));
+    }
+    Serializer<T> serializer = serializationFactory.getSerializer(objClass);
+
+    ByteBufferOutputStream os = new ByteBufferOutputStream();
+    try {
+      serializer.open(os);
+      serializer.serialize(obj);
+
+      int length = 0;
+      List<ByteBuffer> buffers = os.getBufferList();
+      for(ByteBuffer buffer : buffers) {
+        length += buffer.limit() - buffer.arrayOffset();
+      }
+
+      WritableUtils.writeVInt(out, length);
+      for(ByteBuffer buffer : buffers) {
+        byte[] arr = buffer.array();
+        out.write(arr, buffer.arrayOffset(), buffer.limit());
+      }
+
+    }finally {
+      if(serializer != null)
+        serializer.close();
+      if(os != null)
+        os.close();
+    }
+  }
+
+  /** Serializes the object to the given dataoutput using
+   * available Hadoop serializations
+   * @throws IOException */
+  @SuppressWarnings("unchecked")
+  public static<T> void serialize(Configuration conf, DataOutput out
+      , T obj) throws IOException {
+    Text.writeString(out, obj.getClass().getCanonicalName());
+    serialize(conf, out, obj, (Class<T>)obj.getClass());
+  }
+
+  /** Serializes the object to the given dataoutput using
+   * available Hadoop serializations*/
+  public static<T> byte[] serialize(Configuration conf, T obj) throws IOException {
+    DataOutputBuffer buffer = new DataOutputBuffer();
+    serialize(conf, buffer, obj);
+    return buffer.getData();
+  }
+
+
+  /**
+   * Serializes the field object using the datumWriter.
+   */
+  public static<T extends Persistent> void serialize(OutputStream os,
+      PersistentDatumWriter<T> datumWriter, Schema schema, Object object)
+      throws IOException {
+
+    BinaryEncoder encoder = new BinaryEncoder(os);
+    datumWriter.write(schema, object, encoder);
+    encoder.flush();
+  }
+
+  /**
+   * Serializes the field object using the datumWriter.
+   */
+  public static<T extends Persistent> byte[] serialize(PersistentDatumWriter<T> datumWriter
+      , Schema schema, Object object) throws IOException {
+    ByteArrayOutputStream os = new ByteArrayOutputStream();
+    serialize(os, datumWriter, schema, object);
+    return os.toByteArray();
+  }
+
+  /** Deserializes the object in the given datainput using
+   * available Hadoop serializations.
+   * @throws IOException
+   * @throws ClassNotFoundException */
+  @SuppressWarnings("unchecked")
+  public static<T> T deserialize(Configuration conf, DataInput in
+      , T obj , String objClass) throws IOException, ClassNotFoundException {
+
+    Class<T> c = (Class<T>) ClassLoadingUtils.loadClass(objClass);
+
+    return deserialize(conf, in, obj, c);
+  }
+
+  /** Deserializes the object in the given datainput using
+   * available Hadoop serializations.
+   * @throws IOException */
+  public static<T> T deserialize(Configuration conf, DataInput in
+      , T obj , Class<T> objClass) throws IOException {
+    if(serializationFactory == null) {
+      serializationFactory = new SerializationFactory(getOrCreateConf(conf));
+    }
+    Deserializer<T> deserializer = serializationFactory.getDeserializer(
+        objClass);
+
+    int length = WritableUtils.readVInt(in);
+    byte[] arr = new byte[length];
+    in.readFully(arr);
+    List<ByteBuffer> list = new ArrayList<ByteBuffer>();
+    list.add(ByteBuffer.wrap(arr));
+    ByteBufferInputStream is = new ByteBufferInputStream(list);
+
+    try {
+      deserializer.open(is);
+      T newObj = deserializer.deserialize(obj);
+      return newObj;
+
+    }finally {
+      if(deserializer != null)
+        deserializer.close();
+      if(is != null)
+        is.close();
+    }
+  }
+
+  /** Deserializes the object in the given datainput using
+   * available Hadoop serializations.
+   * @throws IOException
+   * @throws ClassNotFoundException */
+  @SuppressWarnings("unchecked")
+  public static<T> T deserialize(Configuration conf, DataInput in
+      , T obj) throws IOException, ClassNotFoundException {
+    String clazz = Text.readString(in);
+    Class<T> c = (Class<T>)ClassLoadingUtils.loadClass(clazz);
+    return deserialize(conf, in, obj, c);
+  }
+
+  /** Deserializes the object in the given datainput using
+   * available Hadoop serializations.
+   * @throws IOException
+   * @throws ClassNotFoundException */
+  public static<T> T deserialize(Configuration conf, byte[] in
+      , T obj) throws IOException, ClassNotFoundException {
+    DataInputBuffer buffer = new DataInputBuffer();
+    buffer.reset(in, in.length);
+    return deserialize(conf, buffer, obj);
+  }
+
+  /**
+   * Deserializes the field object using the datumReader.
+   */
+  @SuppressWarnings("unchecked")
+  public static<K, T extends Persistent> K deserialize(InputStream is,
+      PersistentDatumReader<T> datumReader, Schema schema, K object)
+      throws IOException {
+    decoder = DecoderFactory.defaultFactory().createBinaryDecoder(is, decoder);
+    return (K)datumReader.read(object, schema, decoder);
+  }
+
+  /**
+   * Deserializes the field object using the datumReader.
+   */
+  @SuppressWarnings("unchecked")
+  public static<K, T extends Persistent> K deserialize(byte[] bytes,
+      PersistentDatumReader<T> datumReader, Schema schema, K object)
+      throws IOException {
+    decoder = DecoderFactory.defaultFactory().createBinaryDecoder(bytes, decoder);
+    return (K)datumReader.read(object, schema, decoder);
+  }
+
+
+  /**
+   * Serializes the field object using the datumWriter. Note: despite its name,
+   * this overload serializes; it mirrors {@link #serialize(PersistentDatumWriter, Schema, Object)}.
+   */
+  public static<T extends Persistent> byte[] deserialize(PersistentDatumWriter<T> datumWriter
+      , Schema schema, Object object) throws IOException {
+    ByteArrayOutputStream os = new ByteArrayOutputStream();
+    serialize(os, datumWriter, schema, object);
+    return os.toByteArray();
+  }
+
+  /**
+   * Writes a byte[] to the output, representing whether each given field is null
+   * or not. A Vint and ceil( fields.length / 8 ) bytes are written to the output.
+   * @param out the output to write to
+   * @param fields the fields to check for null
+   * @see #readNullFieldsInfo(DataInput)
+   */
+  public static void writeNullFieldsInfo(DataOutput out, Object ... fields)
+    throws IOException {
+
+    boolean[] isNull = new boolean[fields.length];
+
+    for(int i=0; i<fields.length; i++) {
+      isNull[i] = (fields[i] == null);
+    }
+
+    writeBoolArray(out, isNull);
+  }
+
+  /**
+   * Reads the data written by {@link #writeNullFieldsInfo(DataOutput, Object...)}
+   * and returns a boolean array representing whether each field is null or not.
+   * @param in the input to read from
+   * @return a boolean[] representing whether each field is null or not.
+   */
+  public static boolean[] readNullFieldsInfo(DataInput in) throws IOException {
+    return readBoolArray(in);
+  }
+
+  /**
+   * Writes a boolean[] to the output.
+   */
+  public static void writeBoolArray(DataOutput out, boolean[] boolArray)
+    throws IOException {
+
+    WritableUtils.writeVInt(out, boolArray.length);
+
+    byte b = 0;
+    int i = 0;
+    for(i=0; i<boolArray.length; i++) {
+      if(i % 8 == 0 && i != 0) {
+        out.writeByte(b);
+        b = 0;
+      }
+      b >>= 1;
+      if(boolArray[i])
+        b |= 0x80;
+      else
+        b &= 0x7F;
+    }
+    if(i % 8 != 0) {
+      for(int j=0; j < 8 - (i % 8); j++) { //shift for the remaining byte
+        b >>= 1;
+        b &= 0x7F;
+      }
+    }
+
+    out.writeByte(b);
+  }
+
+  /**
+   * Reads a boolean[] from input
+   * @throws IOException
+   */
+  public static boolean[] readBoolArray(DataInput in) throws IOException {
+    int length = WritableUtils.readVInt(in);
+    boolean[] arr = new boolean[length];
+
+    byte b = 0;
+    for(int i=0; i < length; i++) {
+      if(i % 8 == 0) {
+        b = in.readByte();
+      }
+      arr[i] = (b & 0x01) > 0;
+      b >>= 1;
+    }
+    return arr;
+  }
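Both `writeBoolArray` variants pack booleans least-significant-bit first, so boolean i lands in bit `i % 8` of byte `i / 8`, and the readers unpack in the same order. A standalone sketch of that layout and its inverse (`pack`/`unpack` are illustrative names, not part of this class):

```java
import java.util.Arrays;

public class BoolPackDemo {
    // Boolean i goes into bit (i % 8) of byte (i / 8), LSB first,
    // matching the layout produced by writeBoolArray above
    static byte[] pack(boolean[] arr) {
        byte[] out = new byte[(arr.length + 7) / 8];
        for (int i = 0; i < arr.length; i++) {
            if (arr[i]) {
                out[i / 8] |= (byte) (1 << (i % 8));
            }
        }
        return out;
    }

    static boolean[] unpack(byte[] packed, int length) {
        boolean[] arr = new boolean[length];
        for (int i = 0; i < length; i++) {
            arr[i] = ((packed[i / 8] >> (i % 8)) & 0x01) > 0;
        }
        return arr;
    }

    public static void main(String[] args) {
        boolean[] flags = { true, false, true, true, false, false, false, false, true };
        byte[] packed = pack(flags);
        if (packed.length != 2) throw new AssertionError(); // ceil(9 / 8)
        if (!Arrays.equals(unpack(packed, flags.length), flags)) throw new AssertionError();
        System.out.println("bool pack round-trip OK");
    }
}
```

This is what makes `writeNullFieldsInfo` cheap: n nullable fields cost only a vint plus ceil(n / 8) bytes.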
+
+
+  /**
+   * Writes a boolean[] to the output.
+   */
+  public static void writeBoolArray(Encoder out, boolean[] boolArray)
+    throws IOException {
+
+    out.writeInt(boolArray.length);
+
+    int byteArrLength = (int)Math.ceil(boolArray.length / 8.0);
+
+    byte b = 0;
+    byte[] arr = new byte[byteArrLength];
+    int i = 0;
+    int arrIndex = 0;
+    for(i=0; i<boolArray.length; i++) {
+      if(i % 8 == 0 && i != 0) {
+        arr[arrIndex++] = b;
+        b = 0;
+      }
+      b >>= 1;
+      if(boolArray[i])
+        b |= 0x80;
+      else
+        b &= 0x7F;
+    }
+    if(i % 8 != 0) {
+      for(int j=0; j < 8 - (i % 8); j++) { //shift for the remaining byte
+        b >>= 1;
+        b &= 0x7F;
+      }
+    }
+
+    arr[arrIndex++] = b;
+    out.writeFixed(arr);
+  }
+
+  /**
+   * Reads a boolean[] from input
+   * @throws IOException
+   */
+  public static boolean[] readBoolArray(Decoder in) throws IOException {
+
+    int length = in.readInt();
+    boolean[] boolArr = new boolean[length];
+
+    int byteArrLength = (int)Math.ceil(length / 8.0);
+    byte[] byteArr = new byte[byteArrLength];
+    in.readFixed(byteArr);
+
+    int arrIndex = 0;
+    byte b = 0;
+    for(int i=0; i < length; i++) {
+      if(i % 8 == 0) {
+        b = byteArr[arrIndex++];
+      }
+      boolArr[i] = (b & 0x01) > 0;
+      b >>= 1;
+    }
+    return boolArr;
+  }
+
+  /**
+   * Writes the String array to the given DataOutput.
+   * @param out the data output to write to
+   * @param arr the array to write
+   * @see #readStringArray(DataInput)
+   */
+  public static void writeStringArray(DataOutput out, String[] arr)
+    throws IOException {
+    WritableUtils.writeVInt(out, arr.length);
+    for(String str : arr) {
+      Text.writeString(out, str);
+    }
+  }
+
+  /**
+   * Reads and returns a String array that is written by
+   * {@link #writeStringArray(DataOutput, String[])}.
+   * @param in the data input to read from
+   * @return read String[]
+   */
+  public static String[] readStringArray(DataInput in) throws IOException {
+    int len = WritableUtils.readVInt(in);
+    String[] arr = new String[len];
+    for(int i=0; i<len; i++) {
+      arr[i] = Text.readString(in);
+    }
+    return arr;
+  }
+
+  /**
+   * Stores the given object in the configuration under the given dataKey
+   * @param obj the object to store
+   * @param conf the configuration to store the object into
+   * @param dataKey the key to store the data
+   */
+  public static<T> void storeToConf(T obj, Configuration conf, String dataKey)
+    throws IOException {
+    String classKey = dataKey + "._class";
+    conf.set(classKey, obj.getClass().getCanonicalName());
+    DefaultStringifier.store(conf, obj, dataKey);
+  }
+
+  /**
+   * Loads the object stored by {@link #storeToConf(Object, Configuration, String)}
+   * method from the configuration under the given dataKey.
+   * @param conf the configuration to read from
+   * @param dataKey the key to get the data from
+   * @return the store object
+   */
+  @SuppressWarnings("unchecked")
+  public static<T> T loadFromConf(Configuration conf, String dataKey)
+    throws IOException {
+    String classKey = dataKey + "._class";
+    String className = conf.get(classKey);
+    try {
+      T obj = (T) DefaultStringifier.load(conf, dataKey, ClassLoadingUtils.loadClass(className));
+      return obj;
+    } catch (Exception ex) {
+      throw new IOException(ex);
+    }
+  }
+
+  /**
+   * Copies the contents of the buffers into a single byte[]
+   */
+  //TODO: not tested
+  public static byte[] getAsBytes(List<ByteBuffer> buffers) {
+    //find total size
+    int size = 0;
+    for(ByteBuffer buffer : buffers) {
+      size += buffer.remaining();
+    }
+
+    byte[] arr = new byte[size];
+
+    int offset = 0;
+    for(ByteBuffer buffer : buffers) {
+      int len = buffer.remaining();
+      buffer.get(arr, offset, len);
+      offset += len;
+    }
+
+    return arr;
+  }
+
+  /**
+   * Reads until the end of the input stream, and returns the contents as a byte[]
+   */
+  public static byte[] readFully(InputStream in) throws IOException {
+    List<ByteBuffer> buffers = new ArrayList<ByteBuffer>(4);
+    while(true) {
+      ByteBuffer buffer = ByteBuffer.allocate(BUFFER_SIZE);
+      int count = in.read(buffer.array(), 0, BUFFER_SIZE);
+      if(count > 0) {
+        buffer.limit(count);
+        buffers.add(buffer);
+      }
+      if(count < BUFFER_SIZE) break;
+    }
+
+    return getAsBytes(buffers);
+  }
+
+}
\ No newline at end of file
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/NodeWalker.java b/trunk/gora-core/src/main/java/org/apache/gora/util/NodeWalker.java
new file mode 100644
index 0000000..9a586c0
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/NodeWalker.java
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.util;
+
+import java.util.Stack;
+
+import org.w3c.dom.Node;
+import org.w3c.dom.NodeList;
+
+/* Copied from Apache Nutch */
+
+/**
+ * <p>A utility class that allows the walking of any DOM tree using a stack 
+ * instead of recursion.  As the node tree is walked the next node is popped
+ * off of the stack and all of its children are automatically added to the 
+ * stack to be called in tree order.</p>
+ * 
+ * <p>Currently this class is not thread safe.  It is assumed that only one
+ * thread will be accessing the <code>NodeWalker</code> at any given time.</p>
+ */
+public class NodeWalker {
+
+  // the current node, its children, and the stack of nodes still to visit
+  private Node currentNode;
+  private NodeList currentChildren;
+  private Stack<Node> nodes;
+  
+  /**
+   * Starts the <code>Node</code> tree from the root node.
+   * 
+   * @param rootNode
+   */
+  public NodeWalker(Node rootNode) {
+
+    nodes = new Stack<Node>();
+    nodes.add(rootNode);
+  }
+  
+  /**
+   * <p>Returns the next <code>Node</code> on the stack and pushes all of its
+   * children onto the stack, allowing us to walk the node tree without the
+   * use of recursion.  If there are no more nodes on the stack then null is
+   * returned.</p>
+   * 
+   * @return Node The next <code>Node</code> on the stack or null if there
+   * isn't a next node.
+   */
+  public Node nextNode() {
+    
+    // if no next node return null
+    if (!hasNext()) {
+      return null;
+    }
+    
+    // pop the next node off of the stack and push all of its children onto
+    // the stack
+    currentNode = nodes.pop();
+    currentChildren = currentNode.getChildNodes();
+    int childLen = (currentChildren != null) ? currentChildren.getLength() : 0;
+    
+    // push the children in reverse order so they pop off the stack first to last
+    for (int i = childLen - 1; i >= 0; i--) {
+      nodes.add(currentChildren.item(i));
+    }
+    
+    return currentNode;
+  }
+  
+  /**
+   * <p>Skips over and removes from the node stack the children of the last
+   * node.  When getting a next node from the walker, that node's children 
+   * are automatically added to the stack.  You can call this method to remove
+   * those children from the stack.</p>
+   * 
+   * <p>This is useful when you don't want to process deeper into the 
+   * current path of the node tree but you want to continue processing sibling
+   * nodes.</p>
+   *
+   */
+  public void skipChildren() {
+    
+    int childLen = (currentChildren != null) ? currentChildren.getLength() : 0;
+    
+    for (int i = 0 ; i < childLen ; i++) {
+      Node child = nodes.peek();
+      if (child.equals(currentChildren.item(i))) {
+        nodes.pop();
+      }
+    }
+  }
+  
+  /**
+   * Returns true if there are more nodes on the current stack.
+   */
+  public boolean hasNext() {
+    return (nodes.size() > 0);
+  }
+}
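
The traversal NodeWalker implements can be sketched standalone with JDK-only classes. The following illustrative demo (the `WalkDemo` class name is ours, not part of Gora) parses a small document and walks it with the same pop-then-push-children-in-reverse pattern:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class WalkDemo {

  /** Walks the DOM of the given XML in document order, returning node names. */
  public static List<String> walk(String xml) throws Exception {
    Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
        .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

    // Same idea as NodeWalker: pop a node, then push its children in
    // reverse so they are popped in first-to-last (document) order.
    Deque<Node> stack = new ArrayDeque<>();
    stack.push(doc.getDocumentElement());
    List<String> visited = new ArrayList<>();
    while (!stack.isEmpty()) {
      Node current = stack.pop();
      visited.add(current.getNodeName());
      NodeList children = current.getChildNodes();
      for (int i = children.getLength() - 1; i >= 0; i--) {
        stack.push(children.item(i));
      }
    }
    return visited;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(walk("<a><b><c/></b><d/></a>")); // [a, b, c, d]
  }
}
```

Pushing the children in reverse index order is what yields a preorder traversal without recursion, just as in `nextNode()` above.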
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/Null.java b/trunk/gora-core/src/main/java/org/apache/gora/util/Null.java
new file mode 100644
index 0000000..d95f019
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/Null.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+/**
+ * Placeholder for Null type arguments
+ */
+public class Null {
+
+  private static final Null INSTANCE = new Null();
+  
+  public Null() {
+  }
+  
+  public static Null get() {
+    return INSTANCE;
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/OperationNotSupportedException.java b/trunk/gora-core/src/main/java/org/apache/gora/util/OperationNotSupportedException.java
new file mode 100644
index 0000000..1e17bc3
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/OperationNotSupportedException.java
@@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+/**
+ * Operation is not supported or implemented.
+ */
+public class OperationNotSupportedException extends RuntimeException {
+
+  private static final long serialVersionUID = 2929205790920793629L;
+
+  public OperationNotSupportedException() {
+    super();
+  }
+
+  public OperationNotSupportedException(String message, Throwable cause) {
+    super(message, cause);
+  }
+
+  public OperationNotSupportedException(String message) {
+    super(message);
+  }
+
+  public OperationNotSupportedException(Throwable cause) {
+    super(cause);
+  }
+}
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/ReflectionUtils.java b/trunk/gora-core/src/main/java/org/apache/gora/util/ReflectionUtils.java
new file mode 100644
index 0000000..fd8c498
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/ReflectionUtils.java
@@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+
+/**
+ * Utility methods related to reflection
+ */
+public class ReflectionUtils {
+
+  public static Class<?>[] EMPTY_CLASS_ARRAY = new Class<?>[0];
+  public static Object[] EMPTY_OBJECT_ARRAY = new Object[0];
+  
+  /**
+   * Returns the empty argument constructor of the class.
+   */
+  public static<T> Constructor<T> getConstructor(Class<T> clazz) 
+    throws SecurityException, NoSuchMethodException {
+    if(clazz == null) {
+      throw new IllegalArgumentException("class cannot be null");
+    }
+    Constructor<T> cons = clazz.getConstructor(EMPTY_CLASS_ARRAY);
+    cons.setAccessible(true);
+    return cons;
+  }
+  
+  /**
+   * Returns whether the class defines an empty argument constructor.
+   */
+  public static boolean hasConstructor(Class<?> clazz) 
+  throws SecurityException, NoSuchMethodException {
+    if(clazz == null) {
+      throw new IllegalArgumentException("class cannot be null");
+    }
+    Constructor<?>[] consts = clazz.getConstructors();
+
+    boolean found = false;
+    for(Constructor<?> cons : consts) {
+      if(cons.getParameterTypes().length == 0) {
+        found = true;
+      }
+    }
+
+    return found;
+  }
+
+  /**
+   * Constructs a new instance of the class using the no-arg constructor.
+   * @param clazz the class of the object
+   * @return a new instance of the object
+   */
+  public static <T> T newInstance(Class<T> clazz) throws InstantiationException
+  , IllegalAccessException, SecurityException, NoSuchMethodException
+  , IllegalArgumentException, InvocationTargetException {
+    
+    Constructor<T> cons = getConstructor(clazz);
+    
+    return cons.newInstance(EMPTY_OBJECT_ARRAY);
+  }
+  
+  /**
+   * Constructs a new instance of the class using the no-arg constructor.
+   * @param classStr the class name of the object
+   * @return a new instance of the object
+   */
+  public static Object newInstance(String classStr) throws InstantiationException
+    , IllegalAccessException, ClassNotFoundException, SecurityException
+    , IllegalArgumentException, NoSuchMethodException, InvocationTargetException {
+    if(classStr == null) {
+      throw new IllegalArgumentException("class cannot be null");
+    }
+    Class<?> clazz = ClassLoadingUtils.loadClass(classStr);
+    return newInstance(clazz);
+  }
+  
+  /**
+   * Returns the value of a named static field
+   */
+  public static Object getStaticField(Class<?> clazz, String fieldName) 
+  throws IllegalArgumentException, SecurityException,
+  IllegalAccessException, NoSuchFieldException {
+    
+    return clazz.getField(fieldName).get(null);
+  }
+}
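
The no-arg-constructor pattern in ReflectionUtils can be illustrated with a minimal standalone sketch (the `ReflectDemo` class name is ours; plain `Class.forName` stands in for `ClassLoadingUtils.loadClass` for the purpose of the illustration):

```java
import java.lang.reflect.Constructor;

public class ReflectDemo {

  /** Instantiates a class through its public no-arg constructor,
   *  as ReflectionUtils.newInstance(Class) does. */
  public static <T> T newInstance(Class<T> clazz) throws ReflectiveOperationException {
    // throws NoSuchMethodException if no public no-arg constructor exists
    Constructor<T> cons = clazz.getConstructor();
    return cons.newInstance();
  }

  /** String variant: resolve the class by name, then delegate. */
  public static Object newInstance(String className) throws ReflectiveOperationException {
    return newInstance(Class.forName(className));
  }

  public static void main(String[] args) throws Exception {
    StringBuilder sb = (StringBuilder) newInstance("java.lang.StringBuilder");
    System.out.println(sb.append("ok")); // ok
  }
}
```

Catching `ReflectiveOperationException` collapses the long checked-exception lists seen in the original signatures into a single supertype.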
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/StringUtils.java b/trunk/gora-core/src/main/java/org/apache/gora/util/StringUtils.java
new file mode 100644
index 0000000..5ffdaad
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/StringUtils.java
@@ -0,0 +1,157 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Set;
+
+/**
+ * A utility class for String-related functionality.
+ */
+public class StringUtils {
+
+  /**
+   * Joins the two given arrays, removing dup elements.
+   */
+  public static String[] joinStringArrays(String[] arr1, String... arr2) {
+    HashSet<String> set = new HashSet<String>();
+    for(String str : arr1) set.add(str);
+    for(String str : arr2) set.add(str);
+
+    return set.toArray(new String[set.size()]);
+  }
+
+  public static String join(List<String> strs) {
+    return join(new StringBuilder(), strs).toString();
+  }
+
+  public static String join(String[] strs) {
+    return join(new StringBuilder(), strs).toString();
+  }
+
+  public static StringBuilder join(StringBuilder builder, Collection<String> strs) {
+    int i = 0;
+    for (String s : strs) {
+      if(i != 0) builder.append(',');
+      builder.append(s);
+      i++;
+    }
+    return builder;
+  }
+
+  public static StringBuilder join(StringBuilder builder, String[] strs) {
+    for (int i = 0; i < strs.length; i++) {
+      if(i != 0) builder.append(",");
+      builder.append(strs[i]);
+    }
+    return builder;
+  }
+
+  /** Helper for string null and empty checking. */
+  public static boolean is(String str) {
+    return str != null && str.length() > 0;
+  }
+
+  // below is taken from: http://jvalentino.blogspot.com/2007/02/shortcut-to-calculating-power-set-using.html
+  /**
+   * Returns the power set from the given set by using a binary counter
+   * Example: S = {a,b,c}
+   * P(S) = {[], [c], [b], [b, c], [a], [a, c], [a, b], [a, b, c]}
+   * @param set String[]
+   * @return LinkedHashSet
+   */
+  public static LinkedHashSet<Set<String>> powerset(String[] set) {
+
+    //create the empty power set
+    LinkedHashSet<Set<String>> power = new LinkedHashSet<Set<String>>();
+
+    //get the number of elements in the set
+    int elements = set.length;
+
+    //the number of members of a power set is 2^n
+    int powerElements = (int) Math.pow(2,elements);
+
+    //run a binary counter for the number of power elements
+    for (int i = 0; i < powerElements; i++) {
+
+      //convert the binary number to a string containing n digits
+      String binary = intToBinary(i, elements);
+
+      //create a new set
+      LinkedHashSet<String> innerSet = new LinkedHashSet<String>();
+
+      //convert each digit in the current binary number to the corresponding element
+      //in the given set
+      for (int j = 0; j < binary.length(); j++) {
+        if (binary.charAt(j) == '1')
+          innerSet.add(set[j]);
+      }
+
+      //add the new set to the power set
+      power.add(innerSet);
+
+    }
+
+    return power;
+  }
+
+  /**
+   * Converts the given integer to a String representing a binary number
+   * with the specified number of digits, zero-padding on the left.
+   * For example, when using 4 digits the binary representation of 1 is 0001.
+   * @param binary int
+   * @param digits int
+   * @return String
+   */
+  private static String intToBinary(int binary, int digits) {
+    String temp = Integer.toBinaryString(binary);
+    int foundDigits = temp.length();
+    String returner = temp;
+    for (int i = foundDigits; i < digits; i++) {
+      returner = "0" + returner;
+    }
+    return returner;
+  }
+
+  public static int parseInt(String str, int defaultValue) {
+    if(str == null) {
+      return defaultValue;
+    }
+    return Integer.parseInt(str);
+  }
+
+  /**
+   * Returns the name of the class without the package name.
+   */
+  public static String getClassname(Class<?> clazz) {
+    return getClassname(clazz.getName());
+  }
+
+  /**
+   * Returns the name of the class without the package name.
+   */
+  public static String getClassname(String classname) {
+    String[] parts = classname.split("\\.");
+    return parts[parts.length-1];
+  }
+
+}
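
The binary-counter power set above can also be written with direct bit tests instead of converting the counter to a binary string. A standalone sketch (class name ours) — the subset enumeration order differs from the string-based version, but the resulting power set is the same:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class PowerSetDemo {

  /** Power set via a binary counter: bit j of counter i selects element j. */
  public static LinkedHashSet<Set<String>> powerset(String[] set) {
    LinkedHashSet<Set<String>> power = new LinkedHashSet<>();
    for (int i = 0; i < (1 << set.length); i++) {      // 2^n subsets
      LinkedHashSet<String> subset = new LinkedHashSet<>();
      for (int j = 0; j < set.length; j++) {
        if ((i & (1 << j)) != 0) {                     // element j is in subset i
          subset.add(set[j]);
        }
      }
      power.add(subset);
    }
    return power;
  }

  public static void main(String[] args) {
    System.out.println(powerset(new String[] {"a", "b", "c"}).size()); // 8
  }
}
```

The bit test replaces the `intToBinary`/`charAt` round trip, avoiding the intermediate String allocations.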
diff --git a/trunk/gora-core/src/main/java/org/apache/gora/util/WritableUtils.java b/trunk/gora-core/src/main/java/org/apache/gora/util/WritableUtils.java
new file mode 100644
index 0000000..1088107
--- /dev/null
+++ b/trunk/gora-core/src/main/java/org/apache/gora/util/WritableUtils.java
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Map.Entry;
+import java.util.Properties;
+
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * A utility class for {@link Writable} related functionality.
+ */
+public class WritableUtils {
+  private WritableUtils() {
+    // prevents instantiation
+  }
+  
+  
+  public static final void writeProperties(DataOutput out, Properties props) throws IOException {
+    MapWritable propsWritable = new MapWritable();
+    for (Entry<Object, Object> prop : props.entrySet()) {
+      Writable key = new Text(prop.getKey().toString());
+      Writable value = new Text(prop.getValue().toString());
+      propsWritable.put(key,value);
+    }
+    propsWritable.write(out);
+  }
+  
+  public static final Properties readProperties(DataInput in) throws IOException {
+    Properties props = new Properties();
+    MapWritable propsWritable = new MapWritable();
+    propsWritable.readFields(in);
+    for (Entry<Writable, Writable> prop : propsWritable.entrySet()) {
+      String key = prop.getKey().toString();
+      String value = prop.getValue().toString();
+      props.put(key,value);
+    }
+    return props;
+  }
+
+}
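
WritableUtils leans on Hadoop's `MapWritable` for the wire format. The same write-then-read round trip can be sketched with only JDK streams — the `PropsIO` class name and the length-prefixed layout below are our illustration, not Gora's actual serialization format:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Map.Entry;
import java.util.Properties;

public class PropsIO {

  /** Writes an entry count, then each key/value pair as modified-UTF strings. */
  public static void writeProperties(DataOutput out, Properties props) throws IOException {
    out.writeInt(props.size());
    for (Entry<Object, Object> e : props.entrySet()) {
      out.writeUTF(e.getKey().toString());
      out.writeUTF(e.getValue().toString());
    }
  }

  /** Reads back exactly what writeProperties wrote. */
  public static Properties readProperties(DataInput in) throws IOException {
    Properties props = new Properties();
    int n = in.readInt();
    for (int i = 0; i < n; i++) {
      props.setProperty(in.readUTF(), in.readUTF());
    }
    return props;
  }

  /** Serializes to a byte buffer and deserializes again, for testing. */
  public static Properties roundTrip(Properties props) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    writeProperties(new DataOutputStream(buf), props);
    return readProperties(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
  }
}
```

As in WritableUtils, both sides share the `DataOutput`/`DataInput` interfaces, so the same methods work against files, sockets, or in-memory buffers.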
diff --git a/trunk/gora-core/src/main/java/overview.html b/trunk/gora-core/src/main/java/overview.html
new file mode 100644
index 0000000..72e5885
--- /dev/null
+++ b/trunk/gora-core/src/main/java/overview.html
@@ -0,0 +1,63 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<head>
+   <title>Gora</title>
+</head>
+
+<body>
+
+  <h2> Introduction </h2>
+  <p>This is the javadoc for Gora. This javadoc captures all of the modules in Gora.</p>
+
+  <h2> Gora Modules </h2>
+  <p> Gora source code is organized in a modular architecture. The <b>gora-core</b> module is the main module, which contains the core of the code.
+  All other modules depend on the gora-core module. Each data store backend in Gora resides in its own module. The documentation for a specific module 
+  can be found in the module's documentation directory. </p>
+
+  <p> Source code for gora modules is organized as follows: </p>
+  <p> <ul>
+    <li><b> gora/&lt;module-name&gt;/src/main/java</b>: Main code in Java  </li>
+    <li><b> gora/&lt;module-name&gt;/src/main/avro</b>: Avro schema definitions used in main </li>
+    <li><b> gora/&lt;module-name&gt;/src/test/java</b>: Unit test code in Java  </li>
+    <li><b> gora/&lt;module-name&gt;/src/test/avro</b>: Avro schema definitions used in unit tests </li>
+    <li><b> gora/&lt;module-name&gt;/src/examples/java</b>: Example code in Java </li>
+    <li><b> gora/&lt;module-name&gt;/src/examples/avro</b>: Example Avro schema definitions</li>
+  </ul></p>
+   
+  <h2> gora-core </h2>
+  <p> gora-core module contains the source code for the main functionality in Gora. </p>
+
+  <h2> gora-cassandra </h2>
+  <p> gora-cassandra module contains the source code for the <a href="http://cassandra.apache.org/">Apache Cassandra</a> backend. </p>
+
+  <h2> gora-hbase </h2>
+  <p> gora-hbase module contains the source code for the <a href="http://hbase.apache.org/">Apache HBase</a> backend. </p>
+
+  <h2> gora-sql </h2>
+  <p> gora-sql module contains the source code for the SQL backends. Currently MySQL and HSQLDB are supported. </p>
+  
+  <h2> gora-accumulo </h2>
+  <p> gora-accumulo module contains the source code for the <a href="http://accumulo.apache.org/">Apache Accumulo</a> backend. </p>
+
+  <h2> More information </h2> 
+  <p> Most of the documentation about the project is kept at the project <a href="http://gora.apache.org">web site</a> or at the <a href="https://cwiki.apache.org/confluence/display/GORA/Index">wiki</a>. </p>
+  
+</body>
+</html>
+
diff --git a/trunk/gora-core/src/test/conf/.gitignore b/trunk/gora-core/src/test/conf/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-core/src/test/conf/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-core/src/test/conf/core-site.xml b/trunk/gora-core/src/test/conf/core-site.xml
new file mode 100644
index 0000000..87934d6
--- /dev/null
+++ b/trunk/gora-core/src/test/conf/core-site.xml
@@ -0,0 +1,34 @@
+<?xml version="1.0"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+<property>
+  <name>io.serializations</name>
+  <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.JavaSerialization</value>
+<!--         org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,
+         org.apache.hadoop.io.serializer.avro.AvroReflectSerialization,
+         org.apache.hadoop.io.serializer.avro.AvroGenericSerialization, -->
+  <description>A list of serialization classes that can be used for
+  obtaining serializers and deserializers.</description>
+</property>
+
+</configuration>
diff --git a/trunk/gora-core/src/test/conf/gora.properties b/trunk/gora-core/src/test/conf/gora.properties
new file mode 100644
index 0000000..9ee87f1
--- /dev/null
+++ b/trunk/gora-core/src/test/conf/gora.properties
@@ -0,0 +1,22 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+gora.datastore.default=org.apache.gora.mock.store.MockDataStore
+gora.datastore.autocreateschema=true
+gora.avrostore.output.path=file:///tmp/gora.avrostore.test.output
+
+gora.datafileavrostore.foo_property=foo_value
+gora.avrostore.baz_property=baz_value
+gora.datastore.bar_property=bar_value
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/GoraTestDriver.java b/trunk/gora-core/src/test/java/org/apache/gora/GoraTestDriver.java
new file mode 100644
index 0000000..ed62b74
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/GoraTestDriver.java
@@ -0,0 +1,117 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Properties;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.util.GoraException;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * GoraTestDriver is a helper class for third-party tests and should
+ * be used to initialize and tear down mini clusters (such as a mini HBase 
+ * or Cassandra cluster, a local HSQLDB instance, etc.) so that these 
+ * details are abstracted away.
+ */
+public class GoraTestDriver {
+
+  protected static final Logger log = LoggerFactory.getLogger(GoraTestDriver.class);
+
+  protected Class<? extends DataStore> dataStoreClass;
+  protected Configuration conf = new Configuration();
+
+  @SuppressWarnings("rawtypes")
+  protected HashSet<DataStore> dataStores;
+
+  @SuppressWarnings("rawtypes")
+  protected GoraTestDriver(Class<? extends DataStore> dataStoreClass) {
+    this.dataStoreClass = dataStoreClass;
+    this.dataStores = new HashSet<DataStore>();
+  }
+
+  /** Should be called once before the tests are started, probably in the
+   * method annotated with org.junit.BeforeClass
+   */
+  public void setUpClass() throws Exception {
+    setProperties(DataStoreFactory.createProps());
+  }
+
+  /** Should be called once after the tests have finished, probably in the
+   * method annotated with org.junit.AfterClass
+   */
+  public void tearDownClass() throws Exception {
+
+  }
+
+  /** Should be called once before each test, probably in the
+   * method annotated with org.junit.Before
+   */
+  public void setUp() throws Exception {
+    log.info("setting up test");
+    try {
+      for(DataStore store : dataStores) {
+        store.truncateSchema();
+      }
+    }catch (IOException ignore) {
+    }
+  }
+    
+  /** Should be called once after each test, probably in the
+   * method annotated with org.junit.After
+   */
+  @SuppressWarnings("rawtypes")
+  public void tearDown() throws Exception {
+    log.info("tearing down test");
+    //delete everything
+    for(DataStore store : dataStores) {
+      try {
+        //store.flush();
+        store.deleteSchema();
+        store.close();
+      }catch (Exception ignore) {
+      }
+    }
+    dataStores.clear();
+  }
+
+  protected void setProperties(Properties properties) {
+  }
+
+  @SuppressWarnings("unchecked")
+  public<K, T extends Persistent> DataStore<K,T>
+    createDataStore(Class<K> keyClass, Class<T> persistentClass) throws GoraException {
+    setProperties(DataStoreFactory.createProps());
+    DataStore<K,T> dataStore = DataStoreFactory.createDataStore(
+        (Class<? extends DataStore<K,T>>)dataStoreClass, keyClass, persistentClass, conf);
+    dataStores.add(dataStore);
+
+    return dataStore;
+  }
+  
+  public Class<?> getDataStoreClass() {
+    return dataStoreClass;
+  }
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/avro/TestPersistentDatumReader.java b/trunk/gora-core/src/test/java/org/apache/gora/avro/TestPersistentDatumReader.java
new file mode 100644
index 0000000..435769c
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/avro/TestPersistentDatumReader.java
@@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro;
+
+import java.io.IOException;
+
+import junit.framework.Assert;
+
+import org.apache.avro.util.Utf8;
+import org.apache.gora.avro.PersistentDatumReader;
+import org.apache.gora.examples.WebPageDataCreator;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.memory.store.MemStore;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.DataStoreTestUtil;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+/**
+ * Test case for {@link PersistentDatumReader}.
+ */
+public class TestPersistentDatumReader {
+
+  private PersistentDatumReader<WebPage> webPageDatumReader 
+    = new PersistentDatumReader<WebPage>();
+  private Configuration conf = new Configuration();
+  
+  private void testClone(Persistent persistent) throws IOException {
+    Persistent cloned = webPageDatumReader.clone(persistent, persistent.getSchema());
+    assertClone(persistent, cloned);
+  }
+  
+  private void assertClone(Persistent persistent, Persistent cloned) {
+    Assert.assertNotNull("cloned object is null", cloned);
+    Assert.assertEquals("cloned object is not equal to original object", persistent, cloned);
+  }
+  
+  @Test
+  public void testCloneEmployee() throws Exception {
+    @SuppressWarnings("unchecked")
+    MemStore<String, Employee> store = DataStoreFactory.getDataStore(
+        MemStore.class, String.class, Employee.class, conf);
+
+    Employee employee = DataStoreTestUtil.createEmployee(store);
+    
+    testClone(employee);
+  }
+  
+  @Test
+  public void testCloneEmployeeOneField() throws Exception {
+    Employee employee = new Employee();
+    employee.setSsn(new Utf8("11111"));
+
+    testClone(employee);
+  }
+
+  @Test
+  public void testCloneEmployeeTwoFields() throws Exception {
+    Employee employee = new Employee();
+    employee.setSsn(new Utf8("11111"));
+    employee.setSalary(100);
+
+    testClone(employee);
+  }
+
+  @Test
+  public void testCloneWebPage() throws Exception {
+    @SuppressWarnings("unchecked")
+    DataStore<String, WebPage> store = DataStoreFactory.createDataStore(
+        MemStore.class, String.class, WebPage.class, conf);
+    WebPageDataCreator.createWebPageData(store);
+
+    Query<String, WebPage> query = store.newQuery();
+    Result<String, WebPage> result = query.execute();
+    
+    int tested = 0;
+    while(result.next()) {
+      WebPage page = result.get();
+      testClone(page);
+      tested++;
+    }
+    Assert.assertEquals(WebPageDataCreator.URLS.length, tested);
+  }
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/avro/mapreduce/TestDataFileAvroStoreMapReduce.java b/trunk/gora-core/src/test/java/org/apache/gora/avro/mapreduce/TestDataFileAvroStoreMapReduce.java
new file mode 100644
index 0000000..e3f02bf
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/avro/mapreduce/TestDataFileAvroStoreMapReduce.java
@@ -0,0 +1,51 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.mapreduce;
+
+import static org.apache.gora.avro.store.TestAvroStore.WEBPAGE_OUTPUT;
+
+import java.io.IOException;
+
+import org.apache.gora.avro.store.DataFileAvroStore;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.mapreduce.DataStoreMapReduceTestBase;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+
+/**
+ * Mapreduce tests for {@link DataFileAvroStore}.
+ */
+public class TestDataFileAvroStoreMapReduce extends DataStoreMapReduceTestBase {
+
+  public TestDataFileAvroStoreMapReduce() throws IOException {
+    super();
+  }
+
+  @Override
+  protected DataStore<String, WebPage> createWebPageDataStore() 
+    throws IOException {
+    DataFileAvroStore<String,WebPage> webPageStore = new DataFileAvroStore<String, WebPage>();
+    webPageStore.initialize(String.class, WebPage.class, DataStoreFactory.createProps());
+    webPageStore.setOutputPath(WEBPAGE_OUTPUT);
+    webPageStore.setInputPath(WEBPAGE_OUTPUT);
+    
+    return webPageStore;
+  }
+
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/avro/store/TestAvroStore.java b/trunk/gora-core/src/test/java/org/apache/gora/avro/store/TestAvroStore.java
new file mode 100644
index 0000000..55ec665
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/avro/store/TestAvroStore.java
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.store;
+
+import static org.apache.gora.examples.WebPageDataCreator.URLS;
+import static org.apache.gora.examples.WebPageDataCreator.URL_INDEXES;
+import static org.apache.gora.examples.WebPageDataCreator.createWebPageData;
+
+import java.io.IOException;
+
+import junit.framework.Assert;
+
+import org.apache.gora.avro.store.AvroStore.CodecType;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.DataStoreTestUtil;
+import org.apache.gora.util.GoraException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Test case for {@link AvroStore}.
+ */
+public class TestAvroStore {
+
+  public static final String EMPLOYEE_OUTPUT =
+    System.getProperty("test.build.data") + "/testavrostore/employee.data";
+  public static final String WEBPAGE_OUTPUT =
+    System.getProperty("test.build.data") + "/testavrostore/webpage.data";
+
+  protected AvroStore<String,Employee> employeeStore;
+  protected AvroStore<String,WebPage> webPageStore;
+  protected Configuration conf = new Configuration();
+
+  @Before
+  public void setUp() throws Exception {
+    employeeStore = createEmployeeDataStore();
+    employeeStore.initialize(String.class, Employee.class, DataStoreFactory.createProps());
+    employeeStore.setOutputPath(EMPLOYEE_OUTPUT);
+    employeeStore.setInputPath(EMPLOYEE_OUTPUT);
+
+    webPageStore = new AvroStore<String, WebPage>();
+    webPageStore.initialize(String.class, WebPage.class, DataStoreFactory.createProps());
+    webPageStore.setOutputPath(WEBPAGE_OUTPUT);
+    webPageStore.setInputPath(WEBPAGE_OUTPUT);
+  }
+
+  @SuppressWarnings("unchecked")
+  protected AvroStore<String, Employee> createEmployeeDataStore() throws GoraException {
+    return DataStoreFactory.getDataStore(
+        AvroStore.class, String.class, Employee.class, conf);
+  }
+
+  protected AvroStore<String, WebPage> createWebPageDataStore() {
+    return new AvroStore<String, WebPage>();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    deletePath(employeeStore.getOutputPath());
+    deletePath(webPageStore.getOutputPath());
+
+    employeeStore.close();
+    webPageStore.close();
+  }
+
+  private void deletePath(String output) throws IOException {
+    if(output != null) {
+      Path path = new Path(output);
+      path.getFileSystem(conf).delete(path, true);
+    }
+  }
+
+  @Test
+  public void testNewInstance() throws IOException {
+    DataStoreTestUtil.testNewPersistent(employeeStore);
+  }
+
+  @Test
+  public void testCreateSchema() throws IOException {
+    DataStoreTestUtil.testCreateEmployeeSchema(employeeStore);
+  }
+
+  @Test
+  public void testAutoCreateSchema() throws IOException {
+    DataStoreTestUtil.testAutoCreateSchema(employeeStore);
+  }
+
+  @Test
+  public void testPut() throws IOException {
+    DataStoreTestUtil.testPutEmployee(employeeStore);
+  }
+
+  @Test
+  public void testQuery() throws IOException {
+    createWebPageData(webPageStore);
+    webPageStore.close();
+
+    webPageStore.setInputPath(webPageStore.getOutputPath());
+    testQueryWebPages(webPageStore);
+  }
+
+  @Test
+  public void testQueryBinaryEncoder() throws IOException {
+    webPageStore.setCodecType(CodecType.BINARY);
+    webPageStore.setInputPath(webPageStore.getOutputPath());
+
+    createWebPageData(webPageStore);
+    webPageStore.close();
+    testQueryWebPages(webPageStore);
+  }
+
+  //The AvroStore must be closed so that the underlying Hadoop file is fully flushed,
+  //so the test below is copied and modified to close the store after writing the data.
+  public static void testQueryWebPages(DataStore<String, WebPage> store)
+  throws IOException {
+
+    Query<String, WebPage> query = store.newQuery();
+    Result<String, WebPage> result = query.execute();
+
+    int i=0;
+    while(result.next()) {
+      WebPage page = result.get();
+      DataStoreTestUtil.assertWebPage(page, URL_INDEXES.get(page.getUrl().toString()));
+      i++;
+    }
+    Assert.assertEquals(URLS.length, i);
+  }
+
+}
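The query-iteration contract exercised by `testQueryWebPages` (open a query, step through the `Result` with `next()`/`get()`, count the rows) can be sketched without any Gora dependency. `QueryLoopSketch`, `TinyStore`, and `TinyResult` below are illustrative stand-ins, not Gora API classes:

```java
import java.util.Collection;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryLoopSketch {
    // Minimal stand-in for Gora's Result: advance with next(), read with get().
    public static class TinyResult {
        private final Iterator<String> it;
        private String current;
        TinyResult(Collection<String> rows) { this.it = rows.iterator(); }
        public boolean next() {
            if (it.hasNext()) { current = it.next(); return true; }
            return false;
        }
        public String get() { return current; }
    }

    // Minimal stand-in for a DataStore keyed by URL.
    public static class TinyStore {
        private final Map<String, String> pages = new LinkedHashMap<>();
        public void put(String url, String content) { pages.put(url, content); }
        public TinyResult executeQuery() { return new TinyResult(pages.keySet()); }
    }

    // The same next()/get() counting loop used in testQueryWebPages.
    public static int countAll(TinyStore store) {
        TinyResult result = store.executeQuery();
        int i = 0;
        while (result.next()) {
            result.get();
            i++;
        }
        return i;
    }
}
```

The real test additionally asserts each row's contents against `URL_INDEXES`; the sketch only captures the iteration-and-count shape.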
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/avro/store/TestDataFileAvroStore.java b/trunk/gora-core/src/test/java/org/apache/gora/avro/store/TestDataFileAvroStore.java
new file mode 100644
index 0000000..0d7d485
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/avro/store/TestDataFileAvroStore.java
@@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.avro.store;
+
+import org.apache.gora.avro.store.AvroStore;
+import org.apache.gora.avro.store.DataFileAvroStore;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+
+/**
+ * Test case for {@link DataFileAvroStore}.
+ */
+public class TestDataFileAvroStore extends TestAvroStore {
+
+  @Override
+  protected AvroStore<String, Employee> createEmployeeDataStore() {
+    return new DataFileAvroStore<String, Employee>();
+  }
+  
+  @Override
+  protected AvroStore<String, WebPage> createWebPageDataStore() {
+    return new DataFileAvroStore<String, WebPage>();
+  }
+  
+  //inherits all tests from the superclass
+  
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/DataStoreMapReduceTestBase.java b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/DataStoreMapReduceTestBase.java
new file mode 100644
index 0000000..28acdcd
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/DataStoreMapReduceTestBase.java
@@ -0,0 +1,92 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.store.DataStore;
+import org.apache.hadoop.mapred.HadoopTestCase;
+import org.apache.hadoop.mapred.JobConf;
+import org.junit.Before;
+import org.junit.Test;
+
+// Slf4j logging imports
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Base class for Mapreduce based tests. This is just a convenience
+ * class, which actually only uses {@link MapReduceTestUtils} methods to
+ * run the tests.
+ */
+@SuppressWarnings("deprecation")
+public abstract class DataStoreMapReduceTestBase extends HadoopTestCase {
+  public static final Logger LOG = LoggerFactory.getLogger(DataStoreMapReduceTestBase.class);
+
+  private DataStore<String, WebPage> webPageStore;
+  private JobConf job;
+
+  public DataStoreMapReduceTestBase(int mrMode, int fsMode, int taskTrackers,
+      int dataNodes) throws IOException {
+    super(mrMode, fsMode, taskTrackers, dataNodes);
+  }
+
+  public DataStoreMapReduceTestBase() throws IOException {
+    this(HadoopTestCase.CLUSTER_MR, HadoopTestCase.DFS_FS, 2, 2);
+  }
+
+  @Override
+  @Before
+  public void setUp() throws Exception {
+    LOG.info("Setting up Hadoop Test Case...");
+    try {
+      super.setUp();
+      webPageStore = createWebPageDataStore();
+      job = createJobConf();
+    } catch (Exception e) {
+      LOG.error("Hadoop Test Case set up failed", e);
+      tearDown(); // cleanup before propagating the failure
+      throw e;
+    }
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+    LOG.info("Tearing down Hadoop Test Case...");
+    super.tearDown();
+    if (webPageStore != null) webPageStore.close(); // store may be null if setUp failed
+  }
+
+  protected abstract DataStore<String, WebPage> createWebPageDataStore()
+    throws IOException;
+
+  @Test
+  public void testCountQuery() throws Exception {
+    MapReduceTestUtils.testCountQuery(webPageStore, job);
+  }
+
+ // TODO The correct implementation for this test needs to be created.
+ // For a WIP patch and more details see GORA-104
+ // @Test
+ // public void testWordCount() throws Exception {
+ //   MapReduceTestUtils.testWordCount(job, tokenDatumStore, webPageStore);
+ // }
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/MapReduceTestUtils.java b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/MapReduceTestUtils.java
new file mode 100644
index 0000000..948690b
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/MapReduceTestUtils.java
@@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.examples.WebPageDataCreator;
+import org.apache.gora.examples.generated.TokenDatum;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.examples.mapreduce.QueryCounter;
+import org.apache.gora.examples.mapreduce.WordCount;
+import org.apache.gora.query.Query;
+import org.apache.gora.store.DataStore;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Assert;
+
+public class MapReduceTestUtils {
+
+  private static final Logger log = LoggerFactory.getLogger(MapReduceTestUtils.class);
+  
+  /** Tests by running the {@link QueryCounter} mapreduce job */
+  public static void testCountQuery(DataStore<String, WebPage> dataStore
+      , Configuration conf) 
+  throws Exception {
+    
+    dataStore.setConf(conf);
+    
+    //create input
+    WebPageDataCreator.createWebPageData(dataStore);
+    
+    
+    QueryCounter<String,WebPage> counter = new QueryCounter<String,WebPage>(conf);
+    Query<String,WebPage> query = dataStore.newQuery();
+    query.setFields(WebPage._ALL_FIELDS);
+    
+    dataStore.close();
+    
+    
+    //run the job
+    log.info("running count query job");
+    long result = counter.countQuery(dataStore, query);
+    log.info("finished count query job");
+    
+    //assert results
+    Assert.assertEquals(WebPageDataCreator.URLS.length, result);
+    
+  }
+ 
+  public static void testWordCount(Configuration conf, 
+      DataStore<String,WebPage> inStore, DataStore<String, 
+      TokenDatum> outStore) throws Exception {
+    inStore.setConf(conf);
+    outStore.setConf(conf);
+    
+    //create input
+    WebPageDataCreator.createWebPageData(inStore);
+    
+    //run the job
+    WordCount wordCount = new WordCount(conf);
+    wordCount.wordCount(inStore, outStore);
+    
+    //assert results
+    HashMap<String, Integer> actualCounts = new HashMap<String, Integer>();
+    for(String content : WebPageDataCreator.CONTENTS) {
+      for(String token:content.split(" ")) {
+        Integer count = actualCounts.get(token);
+        if(count == null) 
+          count = 0;
+        actualCounts.put(token, ++count);
+      }
+    }
+    for(Map.Entry<String, Integer> entry:actualCounts.entrySet()) {
+      assertTokenCount(outStore, entry.getKey(), entry.getValue()); 
+    }
+  }
+  
+  private static void assertTokenCount(DataStore<String, TokenDatum> outStore,
+      String token, int count) throws IOException {
+    TokenDatum datum = outStore.get(token, null);
+    Assert.assertNotNull("token:" + token + " cannot be found in datastore", datum);
+    Assert.assertEquals("count for token:" + token + " is wrong", count, datum.getCount());
+  }
+}
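The expected-counts loop in `testWordCount` (split each content string on spaces and tally tokens into a map) can be isolated as a small helper. `TokenTally` and `tokenCounts` are illustrative names introduced here, not part of Gora:

```java
import java.util.HashMap;
import java.util.Map;

public class TokenTally {
    // Tally whitespace-separated tokens across a set of content strings,
    // mirroring the expected-counts loop in MapReduceTestUtils.testWordCount.
    public static Map<String, Integer> tokenCounts(String[] contents) {
        Map<String, Integer> counts = new HashMap<>();
        for (String content : contents) {
            for (String token : content.split(" ")) {
                counts.merge(token, 1, Integer::sum); // null-safe increment
            }
        }
        return counts;
    }
}
```

`Map.merge` replaces the null-check-then-increment dance in the original loop; the result is then compared token by token against the output store, as `assertTokenCount` does.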
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestGoraInputFormat.java b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestGoraInputFormat.java
new file mode 100644
index 0000000..f16ed1b
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestGoraInputFormat.java
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.List;
+
+import junit.framework.Assert;
+
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.mapreduce.GoraInputFormat;
+import org.apache.gora.mapreduce.GoraInputSplit;
+import org.apache.gora.mock.persistency.MockPersistent;
+import org.apache.gora.mock.query.MockQuery;
+import org.apache.gora.mock.store.MockDataStore;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.junit.Test;
+
+public class TestGoraInputFormat {
+
+  public List<InputSplit> getInputSplits()
+    throws IOException, InterruptedException {
+
+    Job job = new Job();
+    MockDataStore store = MockDataStore.get();
+
+    MockQuery query = store.newQuery();
+    query.setFields(Employee._ALL_FIELDS);
+    GoraInputFormat.setInput(job, query, false);
+
+    GoraInputFormat<String, MockPersistent> inputFormat
+      = new GoraInputFormat<String, MockPersistent>();
+
+    inputFormat.setConf(job.getConfiguration());
+
+    return inputFormat.getSplits(job);
+  }
+
+  @Test
+  @SuppressWarnings("rawtypes")
+  public void testGetSplits() throws IOException, InterruptedException {
+    List<InputSplit> splits = getInputSplits();
+
+    Assert.assertTrue(splits.size() > 0);
+
+    InputSplit split = splits.get(0);
+    PartitionQuery query = ((GoraInputSplit)split).getQuery();
+    Assert.assertTrue(Arrays.equals(Employee._ALL_FIELDS, query.getFields()));
+  }
+
+}
\ No newline at end of file
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestGoraInputSplit.java b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestGoraInputSplit.java
new file mode 100644
index 0000000..31f5337
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestGoraInputSplit.java
@@ -0,0 +1,77 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import java.io.IOException;
+import java.util.List;
+
+import junit.framework.Assert;
+
+import org.apache.gora.mapreduce.GoraInputSplit;
+import org.apache.gora.mock.persistency.MockPersistent;
+import org.apache.gora.mock.query.MockQuery;
+import org.apache.gora.mock.store.MockDataStore;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.TestWritable;
+import org.junit.Test;
+
+/**
+ * Test case for {@link GoraInputSplit}.
+ */
+public class TestGoraInputSplit {
+
+  private Configuration conf = new Configuration();
+  
+  private List<PartitionQuery<String, MockPersistent>> 
+    getPartitions() throws IOException {
+    MockDataStore store = MockDataStore.get();
+    MockQuery query = store.newQuery();
+
+    List<PartitionQuery<String, MockPersistent>> partitions = 
+      store.getPartitions(query);
+    return partitions;
+  }
+  
+  @Test
+  public void testGetLocations() throws IOException {
+    List<PartitionQuery<String, MockPersistent>> partitions = 
+      getPartitions();
+
+    int i = 0;
+    for(PartitionQuery<String, MockPersistent> partition : partitions) {
+      GoraInputSplit split = new GoraInputSplit(conf, partition);
+      Assert.assertEquals(split.getLocations().length, 1);
+      Assert.assertEquals(split.getLocations()[0], MockDataStore.LOCATIONS[i++]);
+    }
+  }
+
+  @Test
+  public void testReadWrite() throws Exception {
+    
+    List<PartitionQuery<String, MockPersistent>> partitions = 
+      getPartitions();
+
+    for(PartitionQuery<String, MockPersistent> partition : partitions) {
+      GoraInputSplit split = new GoraInputSplit(conf, partition);
+      TestWritable.testWritable(split);
+    }
+  }
+  
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestPersistentSerialization.java b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestPersistentSerialization.java
new file mode 100644
index 0000000..9af9c35
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/mapreduce/TestPersistentSerialization.java
@@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mapreduce;
+
+import junit.framework.Assert;
+
+import org.apache.avro.util.Utf8;
+import org.apache.gora.examples.WebPageDataCreator;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.mapreduce.PersistentDeserializer;
+import org.apache.gora.mapreduce.PersistentSerialization;
+import org.apache.gora.mapreduce.PersistentSerializer;
+import org.apache.gora.memory.store.MemStore;
+import org.apache.gora.query.Result;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.DataStoreTestUtil;
+import org.apache.gora.util.TestIOUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+/** Test class for {@link PersistentSerialization}, {@link PersistentSerializer}
+ *  and {@link PersistentDeserializer}
+ */
+public class TestPersistentSerialization {
+
+  @SuppressWarnings("unchecked")
+  @Test
+  public void testSerdeEmployee() throws Exception {
+
+    MemStore<String, Employee> store = DataStoreFactory.getDataStore(
+        MemStore.class, String.class, Employee.class, new Configuration());
+
+    Employee employee = DataStoreTestUtil.createEmployee(store);
+
+    TestIOUtils.testSerializeDeserialize(employee);
+  }
+
+  @Test
+  public void testSerdeEmployeeOneField() throws Exception {
+    Employee employee = new Employee();
+    employee.setSsn(new Utf8("11111"));
+
+    TestIOUtils.testSerializeDeserialize(employee);
+  }
+
+  @Test
+  public void testSerdeEmployeeTwoFields() throws Exception {
+    Employee employee = new Employee();
+    employee.setSsn(new Utf8("11111"));
+    employee.setSalary(100);
+
+    TestIOUtils.testSerializeDeserialize(employee);
+  }
+
+  @SuppressWarnings("unchecked")
+  @Test
+  public void testSerdeWebPage() throws Exception {
+
+    MemStore<String, WebPage> store = DataStoreFactory.getDataStore(
+        MemStore.class, String.class, WebPage.class, new Configuration());
+    WebPageDataCreator.createWebPageData(store);
+
+    Result<String, WebPage> result = store.newQuery().execute();
+
+    int i=0;
+    while(result.next()) {
+      WebPage page = result.get();
+      TestIOUtils.testSerializeDeserialize(page);
+      i++;
+    }
+    Assert.assertEquals(WebPageDataCreator.URLS.length, i);
+  }
+
+  @Test
+  public void testSerdeMultipleWebPages() throws Exception {
+    WebPage page1 = new WebPage();
+    WebPage page2 = new WebPage();
+    WebPage page3 = new WebPage();
+
+    page1.setUrl(new Utf8("foo"));
+    page2.setUrl(new Utf8("baz"));
+    page3.setUrl(new Utf8("bar"));
+
+    page1.addToParsedContent(new Utf8("coo"));
+
+    page2.putToOutlinks(new Utf8("a"), new Utf8("b"));
+
+    TestIOUtils.testSerializeDeserialize(page1, page2, page3);
+  }
+
+}
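The round trip asserted by `TestIOUtils.testSerializeDeserialize` (serialize a bean to bytes, deserialize it back, verify nothing was lost) can be sketched with plain `java.io` serialization on a simple bean. `RoundTrip` and `SimpleEmployee` are illustrative stand-ins, not the generated Gora `Employee` or Gora's Hadoop-based serialization:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RoundTrip {
    // Illustrative stand-in for a persistent bean.
    public static class SimpleEmployee implements Serializable {
        public final String ssn;
        public final int salary;
        public SimpleEmployee(String ssn, int salary) {
            this.ssn = ssn;
            this.salary = salary;
        }
    }

    // Serialize to bytes and deserialize back -- the lossless round trip
    // that testSerializeDeserialize asserts for each persistent object.
    public static SimpleEmployee roundTrip(SimpleEmployee in) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(in);
            }
            try (ObjectInputStream oin =
                     new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return (SimpleEmployee) oin.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The Gora version additionally has to round-trip partially-populated beans (only `ssn` set, or `ssn` plus `salary`), which is why the tests above cover one- and two-field variants.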
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/mock/persistency/MockPersistent.java b/trunk/gora-core/src/test/java/org/apache/gora/mock/persistency/MockPersistent.java
new file mode 100644
index 0000000..1cf0bfb
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/mock/persistency/MockPersistent.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mock.persistency;
+
+import org.apache.avro.Schema;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.impl.PersistentBase;
+
+public class MockPersistent extends PersistentBase {
+
+  public static final String FOO = "foo";
+  public static final String BAZ = "baz";
+  
+  public static final String[] _ALL_FIELDS = {FOO, BAZ};
+  
+  private int foo;
+  private int baz;
+  
+  public MockPersistent() {
+  }
+  
+  public MockPersistent(StateManager stateManager) {
+    super(stateManager);
+  }
+  
+  @Override
+  public Object get(int field) {
+    switch(field) {
+      case 0: return foo;
+      case 1: return baz;
+    }
+    return null;
+  }
+
+  @Override
+  public void put(int field, Object value) {
+    switch(field) {
+      case 0:  foo = (Integer)value; break;
+      case 1:  baz = (Integer)value; break;
+    }
+  }
+
+  @Override
+  public Schema getSchema() {
+    return Schema.parse("{\"type\":\"record\",\"name\":\"MockPersistent\",\"namespace\":\"org.apache.gora.mock.persistency\",\"fields\":[{\"name\":\"foo\",\"type\":\"int\"},{\"name\":\"baz\",\"type\":\"int\"}]}");
+  }
+  
+  public void setFoo(int foo) {
+    this.foo = foo;
+  }
+  
+  public void setBaz(int baz) {
+    this.baz = baz;
+  }
+  
+  public int getFoo() {
+    return foo;
+  }
+  
+  public int getBaz() {
+    return baz;
+  }
+
+  @Override
+  public String getField(int index) {
+    return null;
+  }
+
+  @Override
+  public int getFieldIndex(String field) {
+    return 0;
+  }
+
+  @Override
+  public String[] getFields() {
+    return null;
+  }
+
+  @Override
+  public Persistent newInstance(StateManager stateManager) {
+    return new MockPersistent(stateManager);
+  }
+}
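The index-based `get`/`put` dispatch in `MockPersistent` mirrors the style of Gora's generated beans. A minimal standalone version (with illustrative field names) shows why each `case` in `put` needs its own `break`: without it, an assignment to field 0 falls through and overwrites field 1 as well.

```java
public class FieldDispatch {
    private int foo;
    private int baz;

    // Index-based accessor in the style of Persistent.get(int).
    public Object get(int field) {
        switch (field) {
            case 0: return foo;
            case 1: return baz;
            default: return null;
        }
    }

    // Index-based mutator; the break after each case prevents fall-through,
    // so writing field 0 leaves field 1 untouched.
    public void put(int field, Object value) {
        switch (field) {
            case 0: foo = (Integer) value; break;
            case 1: baz = (Integer) value; break;
        }
    }
}
```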
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/mock/query/MockQuery.java b/trunk/gora-core/src/test/java/org/apache/gora/mock/query/MockQuery.java
new file mode 100644
index 0000000..656d68a
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/mock/query/MockQuery.java
@@ -0,0 +1,35 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mock.query;
+
+import org.apache.gora.mock.persistency.MockPersistent;
+import org.apache.gora.query.impl.QueryBase;
+import org.apache.gora.store.DataStore;
+
+public class MockQuery extends QueryBase<String, MockPersistent> {
+
+  public MockQuery() {
+    super(null);
+  }
+  
+  public MockQuery(DataStore<String, MockPersistent> dataStore) {
+    super(dataStore);
+  }
+
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/mock/store/MockDataStore.java b/trunk/gora-core/src/test/java/org/apache/gora/mock/store/MockDataStore.java
new file mode 100644
index 0000000..fd8041d
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/mock/store/MockDataStore.java
@@ -0,0 +1,146 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.mock.store;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.gora.mock.persistency.MockPersistent;
+import org.apache.gora.mock.query.MockQuery;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.query.impl.PartitionQueryImpl;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.impl.DataStoreBase;
+import org.apache.gora.util.GoraException;
+import org.apache.hadoop.conf.Configuration;
+
+public class MockDataStore extends DataStoreBase<String, MockPersistent> {
+
+  public static final int NUM_PARTITIONS = 5;
+  public static final String[] LOCATIONS = {"foo1", "foo2", "foo3", "foo4", "foo1"};
+
+  public static MockDataStore get() {
+    MockDataStore dataStore;
+    try {
+      dataStore = DataStoreFactory.getDataStore(MockDataStore.class
+          , String.class, MockPersistent.class, new Configuration());
+      return dataStore;
+    } catch (GoraException ex) {
+      throw new RuntimeException(ex);
+    }
+  }
+
+  public MockDataStore() { }
+
+  @Override
+  public String getSchemaName() {
+    return null;
+  }
+
+  @Override
+  public void close() throws IOException {
+  }
+
+  @Override
+  public void createSchema() throws IOException {
+  }
+
+  @Override
+  public void deleteSchema() throws IOException {
+  }
+
+  @Override
+  public void truncateSchema() throws IOException {
+  }
+
+  @Override
+  public boolean schemaExists() throws IOException {
+    return true;
+  }
+
+  @Override
+  public boolean delete(String key) throws IOException {
+    return false;
+  }
+
+  @Override
+  public long deleteByQuery(Query<String, MockPersistent> query)
+      throws IOException {
+    return 0;
+  }
+
+  @Override
+  public Result<String, MockPersistent> execute(
+      Query<String, MockPersistent> query) throws IOException {
+    return null;
+  }
+
+  @Override
+  public void flush() throws IOException {
+  }
+
+  @Override
+  public MockPersistent get(String key, String[] fields) throws IOException {
+    return null;
+  }
+
+  @Override
+  public Class<String> getKeyClass() {
+    return String.class;
+  }
+
+  @Override
+  public List<PartitionQuery<String, MockPersistent>> getPartitions(
+      Query<String, MockPersistent> query) throws IOException {
+
+    ArrayList<PartitionQuery<String, MockPersistent>> list =
+      new ArrayList<PartitionQuery<String,MockPersistent>>();
+
+    for(int i=0; i<NUM_PARTITIONS; i++) {
+      list.add(new PartitionQueryImpl<String, MockPersistent>(query, LOCATIONS[i]));
+    }
+
+    return list;
+  }
+
+  @Override
+  public Class<MockPersistent> getPersistentClass() {
+    return MockPersistent.class;
+  }
+
+  @Override
+  public MockQuery newQuery() {
+    return new MockQuery(this);
+  }
+
+  @Override
+  public void put(String key, MockPersistent obj) throws IOException {
+  }
+
+  @Override
+  public void setKeyClass(Class<String> keyClass) {
+  }
+
+  @Override
+  public void setPersistentClass(Class<MockPersistent> persistentClass) {
+  }
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/persistency/TestListGenericArray.java b/trunk/gora-core/src/test/java/org/apache/gora/persistency/TestListGenericArray.java
new file mode 100644
index 0000000..e595c3a
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/persistency/TestListGenericArray.java
@@ -0,0 +1,59 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency;
+
+import org.apache.avro.Schema;
+import org.apache.avro.generic.GenericData;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.persistency.ListGenericArray;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Test case for {@link ListGenericArray}
+ */
+public class TestListGenericArray {
+  
+  @Test
+  public void testHashCode() {
+    ListGenericArray array = new ListGenericArray(Schema.create(Schema.Type.STRING));
+    boolean stackOverflowError = false;
+    array.add(new Utf8("array test"));
+    try {
+      int hashCode = array.hashCode();
+    }
+    catch (StackOverflowError e) {
+      stackOverflowError = true;
+    }
+    Assert.assertFalse(stackOverflowError);
+  }
+  
+  @Test
+  public void testCompareTo() {
+    ListGenericArray array = new ListGenericArray(Schema.create(Schema.Type.STRING));
+    boolean stackOverflowError = false;
+    array.add(new Utf8("array comparison test"));
+    try {
+      int compareTo = array.compareTo(array);
+    } catch (StackOverflowError e) {
+      stackOverflowError = true;
+    }
+    Assert.assertFalse(stackOverflowError);
+  }
+}
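The two tests above guard against `hashCode()` and `compareTo()` recursing until the stack overflows. As a hedged, self-contained illustration of that failure mode (using a plain `java.util.ArrayList`, not Gora's `ListGenericArray`), a list that ends up containing itself exhibits exactly this `StackOverflowError` when `hashCode()` is called:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only (not part of the patch): AbstractList.hashCode folds in the
// hashCode of each element; if an element is the list itself, the call
// recurses without bound and overflows the stack.
public class SelfRefHashCode {
  public static void main(String[] args) {
    List<Object> list = new ArrayList<>();
    list.add(list);            // the list now contains itself
    boolean overflow = false;
    try {
      list.hashCode();         // element.hashCode() is the list's own hashCode()
    } catch (StackOverflowError e) {
      overflow = true;
    }
    System.out.println(overflow ? "overflow" : "no overflow");
  }
}
```

The tests assert the opposite outcome for `ListGenericArray`: its `hashCode()`/`compareTo()` must terminate even on self-similar structures.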
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/persistency/impl/TestPersistentBase.java b/trunk/gora-core/src/test/java/org/apache/gora/persistency/impl/TestPersistentBase.java
new file mode 100644
index 0000000..8e6ef16
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/persistency/impl/TestPersistentBase.java
@@ -0,0 +1,121 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency.impl;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.avro.util.Utf8;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.memory.store.MemStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.DataStoreTestUtil;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Test case for {@link PersistentBase}
+ */
+public class TestPersistentBase {
+  
+  @Test
+  public void testGetFields() {
+    WebPage page = new WebPage();
+    String[] fields = page.getFields();
+    Assert.assertArrayEquals(WebPage._ALL_FIELDS, fields);
+  }
+  
+  @Test
+  public void testGetField() {
+    WebPage page = new WebPage();
+    for(int i=0; i<WebPage._ALL_FIELDS.length; i++) {
+      String field = page.getField(i);
+      Assert.assertEquals(WebPage._ALL_FIELDS[i], field);
+    }
+  }
+  
+  @Test
+  public void testGetFieldIndex() {
+    WebPage page = new WebPage();
+    for(int i=0; i<WebPage._ALL_FIELDS.length; i++) {
+      int index = page.getFieldIndex(WebPage._ALL_FIELDS[i]);
+      Assert.assertEquals(i, index);
+    }
+  }
+  
+  @Test
+  public void testFieldsWithTwoClasses() {
+    WebPage page = new WebPage();
+    for(int i=0; i<WebPage._ALL_FIELDS.length; i++) {
+      int index = page.getFieldIndex(WebPage._ALL_FIELDS[i]);
+      Assert.assertEquals(i, index);
+    }
+    Employee employee = new Employee();
+    for(int i=0; i<Employee._ALL_FIELDS.length; i++) {
+      int index = employee.getFieldIndex(Employee._ALL_FIELDS[i]);
+      Assert.assertEquals(i, index);
+    }
+  }
+  
+  @Test
+  public void testClear() {
+    
+    //test clear all fields
+    WebPage page = new WebPage();
+    page.setUrl(new Utf8("http://foo.com"));
+    page.addToParsedContent(new Utf8("foo"));
+    page.putToOutlinks(new Utf8("foo"), new Utf8("bar"));
+    page.setContent(ByteBuffer.wrap("foo baz bar".getBytes()));
+    
+    page.clear();
+    
+    Assert.assertNull(page.getUrl());
+    Assert.assertEquals(0, page.getParsedContent().size());
+    Assert.assertEquals(0, page.getOutlinks().size());
+    Assert.assertNull(page.getContent());
+    
+    //set fields again
+    page.setUrl(new Utf8("http://bar.com"));
+    page.addToParsedContent(new Utf8("bar"));
+    page.putToOutlinks(new Utf8("bar"), new Utf8("baz"));
+    page.setContent(ByteBuffer.wrap("foo baz bar barbaz".getBytes()));
+    
+    //test clear new object
+    page = new WebPage();
+    page.clear();
+    
+    //test primitive fields
+    Employee employee = new Employee();
+    employee.clear();
+  }
+  
+  @Test
+  public void testClone() throws IOException {
+    //more tests for clone are in TestPersistentDatumReader
+    @SuppressWarnings("unchecked")
+    MemStore<String, Employee> store = DataStoreFactory.getDataStore(
+        MemStore.class, String.class, Employee.class, new Configuration());
+
+    Employee employee = DataStoreTestUtil.createEmployee(store);
+    
+    Assert.assertEquals(employee, employee.clone());
+  }
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/persistency/impl/TestStateManagerImpl.java b/trunk/gora-core/src/test/java/org/apache/gora/persistency/impl/TestStateManagerImpl.java
new file mode 100644
index 0000000..dabcc94
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/persistency/impl/TestStateManagerImpl.java
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.persistency.impl;
+
+import java.io.IOException;
+
+import junit.framework.Assert;
+
+import org.apache.avro.util.Utf8;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.mock.persistency.MockPersistent;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Test case for {@link StateManagerImpl}
+ */
+public class TestStateManagerImpl {
+
+  private StateManagerImpl stateManager;
+  private MockPersistent persistent;
+  
+  @Before
+  public void setUp() {
+    this.stateManager = new StateManagerImpl();
+    this.persistent = new MockPersistent(stateManager);
+  }
+  
+  @Test
+  public void testDirty() {
+    Assert.assertFalse(stateManager.isDirty(persistent));
+    stateManager.setDirty(persistent);
+    Assert.assertTrue(stateManager.isDirty(persistent));
+  }
+  
+  @Test
+  public void testDirty2() {
+    Assert.assertFalse(stateManager.isDirty(persistent, 0));
+    Assert.assertFalse(stateManager.isDirty(persistent, 1));
+    stateManager.setDirty(persistent, 0);
+    Assert.assertTrue(stateManager.isDirty(persistent, 0));
+    Assert.assertFalse(stateManager.isDirty(persistent, 1));
+  }
+  
+  @Test
+  public void testClearDirty() {
+    Assert.assertFalse(stateManager.isDirty(persistent));
+    stateManager.setDirty(persistent, 0);
+    stateManager.clearDirty(persistent);
+    Assert.assertFalse(this.stateManager.isDirty(persistent));
+  }
+  
+  @Test
+  public void testReadable() throws IOException {
+    Assert.assertFalse(stateManager.isReadable(persistent, 0));
+    Assert.assertFalse(stateManager.isReadable(persistent, 1));
+    stateManager.setReadable(persistent, 0);
+    Assert.assertTrue(stateManager.isReadable(persistent, 0));
+    Assert.assertFalse(stateManager.isReadable(persistent, 1));
+  }
+
+  @Test
+  public void testReadable2() {
+    stateManager = new StateManagerImpl();
+    Employee employee = new Employee(stateManager);
+    Assert.assertFalse(stateManager.isReadable(employee, 0));
+    Assert.assertFalse(stateManager.isReadable(employee, 1));
+    employee.setName(new Utf8("foo"));
+    Assert.assertTrue(stateManager.isReadable(employee, 0));
+    Assert.assertFalse(stateManager.isReadable(employee, 1));
+  }
+  
+  @Test
+  public void testClearReadable() {
+    stateManager.setReadable(persistent, 0);
+    stateManager.clearReadable(persistent);
+    Assert.assertFalse(stateManager.isReadable(persistent, 0));
+  }
+  
+  @Test
+  public void testIsNew() {
+    //newly created objects should be new
+    Assert.assertTrue(persistent.isNew());
+  }
+  
+  @Test
+  public void testNew() {
+    stateManager.setNew(persistent);
+    Assert.assertTrue(persistent.isNew());
+  }
+  
+  @Test
+  public void testClearNew() {
+    stateManager.clearNew(persistent);
+    Assert.assertFalse(persistent.isNew());
+  }
+  
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/query/impl/TestPartitionQueryImpl.java b/trunk/gora-core/src/test/java/org/apache/gora/query/impl/TestPartitionQueryImpl.java
new file mode 100644
index 0000000..316684b
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/query/impl/TestPartitionQueryImpl.java
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.query.impl;
+
+import org.apache.gora.mock.persistency.MockPersistent;
+import org.apache.gora.mock.query.MockQuery;
+import org.apache.gora.mock.store.MockDataStore;
+import org.apache.gora.query.impl.PartitionQueryImpl;
+import org.apache.hadoop.io.TestWritable;
+import org.junit.Test;
+
+/**
+ * Test case for {@link PartitionQueryImpl}
+ */
+public class TestPartitionQueryImpl {
+
+  private MockDataStore dataStore = MockDataStore.get();
+  
+  @Test
+  public void testReadWrite() throws Exception {
+    
+    MockQuery baseQuery = dataStore.newQuery();
+    baseQuery.setStartKey("start");
+    baseQuery.setLimit(42);
+    
+    PartitionQueryImpl<String, MockPersistent> 
+      query = new PartitionQueryImpl<String, MockPersistent>(baseQuery);
+    
+    TestWritable.testWritable(query);
+  }
+  
+}
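`TestWritable.testWritable` from Hadoop's test jar performs a serialize/deserialize round trip and checks equality. As a hedged sketch (not part of the patch, and with illustrative field names rather than `PartitionQueryImpl`'s real ones), the same round trip reduced to plain `java.io` looks like this:

```java
import java.io.*;

// Sketch only: the write/readFields round trip that TestWritable.testWritable
// exercises, without the Hadoop dependency. Field names are illustrative.
public class RoundTripSketch {
  static class Params {
    String startKey;
    long limit;

    void write(DataOutput out) throws IOException {
      out.writeUTF(startKey);
      out.writeLong(limit);
    }

    void readFields(DataInput in) throws IOException {
      startKey = in.readUTF();
      limit = in.readLong();
    }
  }

  public static void main(String[] args) throws IOException {
    Params original = new Params();
    original.startKey = "start";
    original.limit = 42;

    // serialize to a byte buffer
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    original.write(new DataOutputStream(buf));

    // deserialize into a fresh instance, as testWritable does
    Params copy = new Params();
    copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));

    if (!original.startKey.equals(copy.startKey) || original.limit != copy.limit)
      throw new AssertionError("round-trip mismatch");
    System.out.println("ok");
  }
}
```

A query class that fails to write (or read back) one of its fields fails this kind of test immediately, which is why the test sets `startKey` and `limit` before the round trip.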
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/query/impl/TestQueryBase.java b/trunk/gora-core/src/test/java/org/apache/gora/query/impl/TestQueryBase.java
new file mode 100644
index 0000000..6cfd861
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/query/impl/TestQueryBase.java
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.gora.query.impl;
+
+import junit.framework.Assert;
+
+import org.apache.gora.mock.query.MockQuery;
+import org.apache.gora.mock.store.MockDataStore;
+import org.apache.gora.query.impl.QueryBase;
+import org.apache.gora.util.TestIOUtils;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Test case for {@link QueryBase}.
+ */
+public class TestQueryBase {
+
+  private MockDataStore dataStore = MockDataStore.get();
+  private MockQuery query;
+  
+  private static final String[] FIELDS = {"foo", "baz", "bar"};
+  private static final String START_KEY = "1_start";
+  private static final String END_KEY = "2_end";
+  
+  @Before
+  public void setUp() {
+    query = dataStore.newQuery(); //MockQuery extends QueryBase
+  }
+  
+  @Test
+  public void testReadWrite() throws Exception {
+    query.setFields(FIELDS);
+    query.setKeyRange(START_KEY, END_KEY);
+    TestIOUtils.testSerializeDeserialize(query);
+    
+    Assert.assertNotNull(query.getDataStore());
+  }
+  
+  @Test
+  public void testReadWrite2() throws Exception {
+    query.setLimit(1000);
+    query.setTimeRange(0, System.currentTimeMillis());
+    TestIOUtils.testSerializeDeserialize(query);
+  }
+
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/store/DataStoreTestBase.java b/trunk/gora-core/src/test/java/org/apache/gora/store/DataStoreTestBase.java
new file mode 100644
index 0000000..3275e8b
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/store/DataStoreTestBase.java
@@ -0,0 +1,369 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.store;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import junit.framework.Assert;
+
+import org.apache.avro.util.Utf8;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.GoraTestDriver;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.Metadata;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.store.DataStore;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * A base class for {@link DataStore} tests. This is just a convenience
+ * class, which actually only uses {@link DataStoreTestUtil} methods to
+ * run the tests. Not all test cases can extend this class (e.g. TestHBaseStore),
+ * so all test logic should reside in the DataStoreTestUtil class.
+ */
+public abstract class DataStoreTestBase {
+
+  public static final Logger log = LoggerFactory.getLogger(DataStoreTestBase.class);
+
+  protected static GoraTestDriver testDriver;
+
+  protected DataStore<String,Employee> employeeStore;
+  protected DataStore<String,WebPage> webPageStore;
+
+  @Deprecated
+  protected abstract DataStore<String,Employee> createEmployeeDataStore() throws IOException ;
+
+  @Deprecated
+  protected abstract DataStore<String,WebPage> createWebPageDataStore() throws IOException;
+
+  /** JUnit annoyingly forces {@code @BeforeClass} methods to be static, so this
+   * method should be called from a static block in subclasses.
+   */
+  protected static void setTestDriver(GoraTestDriver driver) {
+    testDriver = driver;
+  }
+
+  private static boolean setUpClassCalled = false;
+  
+  @BeforeClass
+  public static void setUpClass() throws Exception {
+    if(testDriver != null && !setUpClassCalled) {
+      log.info("setting up class");
+      testDriver.setUpClass();
+      setUpClassCalled = true;
+    }
+  }
+
+  @AfterClass
+  public static void tearDownClass() throws Exception {
+    if(testDriver != null) {
+      log.info("tearing down class");
+      testDriver.tearDownClass();
+    }
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    //There is an issue with JUnit 4 tests in Eclipse where TestSqlStore's static
+    //methods are not called BEFORE setUpClass. This appears to be a bug in the
+    //Eclipse JUnit runner. Below is a workaround for that problem.
+    if(!setUpClassCalled) {
+      setUpClass();
+    }
+    
+    log.info("setting up test");
+    if(testDriver != null) {
+      employeeStore = testDriver.createDataStore(String.class, Employee.class);
+      webPageStore = testDriver.createDataStore(String.class, WebPage.class);
+      testDriver.setUp();
+    } else {
+      employeeStore =  createEmployeeDataStore();
+      webPageStore = createWebPageDataStore();
+
+      employeeStore.truncateSchema();
+      webPageStore.truncateSchema();
+    }
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    log.info("tearing down test");
+    if(testDriver != null) {
+      testDriver.tearDown();
+    }
+    //employeeStore.close();
+    //webPageStore.close();
+  }
+
+  @Test
+  public void testNewInstance() throws IOException {
+    log.info("test method: testNewInstance");
+    DataStoreTestUtil.testNewPersistent(employeeStore);
+  }
+
+  @Test
+  public void testCreateSchema() throws Exception {
+    log.info("test method: testCreateSchema");
+    DataStoreTestUtil.testCreateEmployeeSchema(employeeStore);
+    assertSchemaExists("Employee");
+  }
+
+  // Override this to assert that schema is created correctly
+  public void assertSchemaExists(String schemaName) throws Exception {
+  }
+
+  @Test
+  public void testAutoCreateSchema() throws Exception {
+    log.info("test method: testAutoCreateSchema");
+    DataStoreTestUtil.testAutoCreateSchema(employeeStore);
+    assertAutoCreateSchema();
+  }
+
+  public void assertAutoCreateSchema() throws Exception {
+    assertSchemaExists("Employee");
+  }
+
+  @Test
+  public void testTruncateSchema() throws Exception {
+    log.info("test method: testTruncateSchema");
+    DataStoreTestUtil.testTruncateSchema(webPageStore);
+    assertSchemaExists("WebPage");
+  }
+
+  @Test
+  public void testDeleteSchema() throws IOException {
+    log.info("test method: testDeleteSchema");
+    DataStoreTestUtil.testDeleteSchema(webPageStore);
+  }
+
+  @Test
+  public void testSchemaExists() throws Exception {
+    log.info("test method: testSchemaExists");
+    DataStoreTestUtil.testSchemaExists(webPageStore);
+  }
+
+  @Test
+  public void testPut() throws IOException {
+    log.info("test method: testPut");
+    Employee employee = DataStoreTestUtil.testPutEmployee(employeeStore);
+    assertPut(employee);
+  }
+
+  public void assertPut(Employee employee) throws IOException {
+  }
+
+  @Test
+  public void testPutNested() throws IOException {
+    log.info("test method: testPutNested");
+
+    String revUrl = "foo.com:http/";
+    String url = "http://foo.com/";
+
+    webPageStore.createSchema();
+    WebPage page = webPageStore.newPersistent();
+    Metadata metadata = new Metadata();  
+    metadata.setVersion(1);
+    metadata.putToData(new Utf8("foo"), new Utf8("baz"));
+
+    page.setMetadata(metadata);
+    page.setUrl(new Utf8(url));
+
+    webPageStore.put(revUrl, page);
+    webPageStore.flush();
+
+    page = webPageStore.get(revUrl);
+    metadata = page.getMetadata();
+    Assert.assertNotNull(metadata);
+    Assert.assertEquals(1, metadata.getVersion());
+    Assert.assertEquals(new Utf8("baz"), metadata.getData().get(new Utf8("foo")));
+  }
+
+  @Test
+  public void testPutArray() throws IOException {
+    log.info("test method: testPutArray");
+    webPageStore.createSchema();
+    WebPage page = webPageStore.newPersistent();
+
+    String[] tokens = {"example", "content", "in", "example.com"};
+
+    for(String token: tokens) {
+      page.addToParsedContent(new Utf8(token));
+    }
+
+    webPageStore.put("com.example/http", page);
+    webPageStore.close();
+
+    assertPutArray();
+  }
+
+  public void assertPutArray() throws IOException {
+  }
+
+  @Test
+  public void testPutBytes() throws IOException {
+    log.info("test method: testPutBytes");
+    webPageStore.createSchema();
+    WebPage page = webPageStore.newPersistent();
+    page.setUrl(new Utf8("http://example.com"));
+    byte[] contentBytes = "example content in example.com".getBytes();
+    ByteBuffer buff = ByteBuffer.wrap(contentBytes);
+    page.setContent(buff);
+
+    webPageStore.put("com.example/http", page);
+    webPageStore.close();
+
+    assertPutBytes(contentBytes);
+  }
+
+  public void assertPutBytes(byte[] contentBytes) throws IOException {
+  }
+
+  @Test
+  public void testPutMap() throws IOException {
+    log.info("test method: testPutMap");
+    webPageStore.createSchema();
+
+    WebPage page = webPageStore.newPersistent();
+
+    page.setUrl(new Utf8("http://example.com"));
+    page.putToOutlinks(new Utf8("http://example2.com"), new Utf8("anchor2"));
+    page.putToOutlinks(new Utf8("http://example3.com"), new Utf8("anchor3"));
+    page.putToOutlinks(new Utf8("http://example3.com"), new Utf8("anchor4"));
+    webPageStore.put("com.example/http", page);
+    webPageStore.close();
+
+    assertPutMap();
+  }
+
+  public void assertPutMap() throws IOException {
+  }
+
+  @Test
+  public void testUpdate() throws IOException {
+    log.info("test method: testUpdate");
+    DataStoreTestUtil.testUpdateEmployee(employeeStore);
+    DataStoreTestUtil.testUpdateWebPage(webPageStore);
+  }
+
+  public void testEmptyUpdate() throws IOException {
+    DataStoreTestUtil.testEmptyUpdateEmployee(employeeStore);
+  }
+
+  @Test
+  public void testGet() throws IOException {
+    log.info("test method: testGet");
+    DataStoreTestUtil.testGetEmployee(employeeStore);
+  }
+
+  @Test
+  public void testGetWithFields() throws IOException {
+    log.info("test method: testGetWithFields");
+    DataStoreTestUtil.testGetEmployeeWithFields(employeeStore);
+  }
+
+  @Test
+  public void testGetWebPage() throws IOException {
+    log.info("test method: testGetWebPage");
+    DataStoreTestUtil.testGetWebPage(webPageStore);
+  }
+
+  @Test
+  public void testGetWebPageDefaultFields() throws IOException {
+    log.info("test method: testGetWebPageDefaultFields");
+    DataStoreTestUtil.testGetWebPageDefaultFields(webPageStore);
+  }
+
+  @Test
+  public void testGetNonExisting() throws Exception {
+    log.info("test method: testGetNonExisting");
+    DataStoreTestUtil.testGetEmployeeNonExisting(employeeStore);
+  }
+
+  @Test
+  public void testQuery() throws IOException {
+    log.info("test method: testQuery");
+    DataStoreTestUtil.testQueryWebPages(webPageStore);
+  }
+
+  @Test
+  public void testQueryStartKey() throws IOException {
+    log.info("test method: testQueryStartKey");
+    DataStoreTestUtil.testQueryWebPageStartKey(webPageStore);
+  }
+
+  @Test
+  public void testQueryEndKey() throws IOException {
+    log.info("test method: testQueryEndKey");
+    DataStoreTestUtil.testQueryWebPageEndKey(webPageStore);
+  }
+
+  @Test
+  public void testQueryKeyRange() throws IOException {
+    log.info("test method: testQueryKeyRange");
+    DataStoreTestUtil.testQueryWebPageKeyRange(webPageStore);
+  }
+
+  @Test
+  public void testQueryWebPageSingleKey() throws IOException {
+    log.info("test method: testQueryWebPageSingleKey");
+    DataStoreTestUtil.testQueryWebPageSingleKey(webPageStore);
+  }
+
+  @Test
+  public void testQueryWebPageSingleKeyDefaultFields() throws IOException {
+    log.info("test method: testQueryWebPageSingleKeyDefaultFields");
+    DataStoreTestUtil.testQueryWebPageSingleKeyDefaultFields(webPageStore);
+  }
+
+  @Test
+  public void testQueryWebPageQueryEmptyResults() throws IOException {
+    log.info("test method: testQueryWebPageQueryEmptyResults");
+    DataStoreTestUtil.testQueryWebPageEmptyResults(webPageStore);
+  }
+
+  @Test
+  public void testDelete() throws IOException {
+    log.info("test method: testDelete");
+    DataStoreTestUtil.testDelete(webPageStore);
+  }
+
+  @Test
+  public void testDeleteByQuery() throws IOException {
+    log.info("test method: testDeleteByQuery");
+    DataStoreTestUtil.testDeleteByQuery(webPageStore);
+  }
+
+  @Test
+  public void testDeleteByQueryFields() throws IOException {
+    log.info("test method: testDeleteByQueryFields");
+    DataStoreTestUtil.testDeleteByQueryFields(webPageStore);
+  }
+
+  @Test
+  public void testGetPartitions() throws IOException {
+    log.info("test method: testGetPartitions");
+    DataStoreTestUtil.testGetPartitions(webPageStore);
+  }
+}
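The `setUp` workaround in `DataStoreTestBase` above relies on a static guard flag so that one-time class setup runs exactly once, even when the runner fails to invoke `@BeforeClass` first. As a hedged, self-contained sketch of that idiom (names are illustrative; the real class also checks `testDriver != null`):

```java
// Sketch only (not part of the patch): the one-time initialization guard
// that DataStoreTestBase.setUp applies to work around @BeforeClass not
// running first under some Eclipse JUnit 4 runners.
public class SetUpGuardSketch {
  private static boolean setUpClassCalled = false;
  private static int classInitCount = 0;

  static void setUpClass() {
    if (!setUpClassCalled) {
      classInitCount++;        // expensive per-class setup would go here
      setUpClassCalled = true;
    }
  }

  void setUp() {
    if (!setUpClassCalled) {   // per-test hook re-checks the flag
      setUpClass();
    }
  }

  public static void main(String[] args) {
    SetUpGuardSketch test = new SetUpGuardSketch();
    test.setUp();              // first test triggers the class setup
    test.setUp();              // later tests see the flag and skip it
    System.out.println("class setup ran " + classInitCount + " time(s)");
  }
}
```

Because the flag is static, it is shared across all test instances of the class, so the expensive setup body executes only once per JVM regardless of how many tests call `setUp`.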
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/store/DataStoreTestUtil.java b/trunk/gora-core/src/test/java/org/apache/gora/store/DataStoreTestUtil.java
new file mode 100644
index 0000000..bc633bb
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/store/DataStoreTestUtil.java
@@ -0,0 +1,732 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.store;
+
+import static org.apache.gora.examples.WebPageDataCreator.ANCHORS;
+import static org.apache.gora.examples.WebPageDataCreator.CONTENTS;
+import static org.apache.gora.examples.WebPageDataCreator.LINKS;
+import static org.apache.gora.examples.WebPageDataCreator.SORTED_URLS;
+import static org.apache.gora.examples.WebPageDataCreator.URLS;
+import static org.apache.gora.examples.WebPageDataCreator.URL_INDEXES;
+import static org.apache.gora.examples.WebPageDataCreator.createWebPageData;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import junit.framework.Assert;
+
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.examples.WebPageDataCreator;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.util.ByteUtils;
+import org.apache.gora.util.StringUtils;
+
+/**
+ * Test utilities for DataStores. This utility class provides everything
+ * necessary for the convenience tests in {@link DataStoreTestBase} to execute cleanly.
+ * The tests begin in a fairly trivial fashion, getting progressively
+ * more complex as we begin testing some of the more advanced features of the
+ * Gora API. In addition to this class, the first place to look for API
+ * functionality is the examples directories under the various Gora modules.
+ * All the modules have a <gora-module>/src/examples/ directory under
+ * which some example classes can be found. In particular, some classes
+ * that are used for tests can be found under <gora-core>/src/examples/
+ */
+public class DataStoreTestUtil {
+
+  public static final long YEAR_IN_MS = 365L * 24L * 60L * 60L * 1000L;
+  private static final int NUM_KEYS = 4;
+
+  public static <K, T extends Persistent> void testNewPersistent(
+      DataStore<K,T> dataStore) throws IOException {
+
+    T obj1 = dataStore.newPersistent();
+    T obj2 = dataStore.newPersistent();
+
+    Assert.assertNotNull(obj1);
+    Assert.assertNotNull(obj2);
+    Assert.assertEquals(dataStore.getPersistentClass(),
+        obj1.getClass());
+    Assert.assertNotSame(obj1, obj2);
+  }
+
+  public static <K> Employee createEmployee(
+      DataStore<K, Employee> dataStore) throws IOException {
+
+    Employee employee = dataStore.newPersistent();
+    employee.setName(new Utf8("Random Joe"));
+    employee.setDateOfBirth( System.currentTimeMillis() - 20L *  YEAR_IN_MS );
+    employee.setSalary(100000);
+    employee.setSsn(new Utf8("101010101010"));
+    return employee;
+  }
+
+  public static void testAutoCreateSchema(DataStore<String,Employee> dataStore)
+  throws IOException {
+    //should not throw exception
+    dataStore.put("foo", createEmployee(dataStore));
+  }
+
+  public static void testCreateEmployeeSchema(DataStore<String, Employee> dataStore)
+  throws IOException {
+    dataStore.createSchema();
+
+    //should not throw exception
+    dataStore.createSchema();
+  }
+
+  public static void testTruncateSchema(DataStore<String, WebPage> dataStore)
+  throws IOException {
+    dataStore.createSchema();
+    WebPageDataCreator.createWebPageData(dataStore);
+    dataStore.truncateSchema();
+
+    assertEmptyResults(dataStore.newQuery());
+  }
+
+  public static void testDeleteSchema(DataStore<String, WebPage> dataStore)
+  throws IOException {
+    dataStore.createSchema();
+    WebPageDataCreator.createWebPageData(dataStore);
+    dataStore.deleteSchema();
+    dataStore.createSchema();
+
+    assertEmptyResults(dataStore.newQuery());
+  }
+
+  public static<K, T extends Persistent> void testSchemaExists(
+      DataStore<K, T> dataStore) throws IOException {
+    dataStore.createSchema();
+
+    Assert.assertTrue(dataStore.schemaExists());
+
+    dataStore.deleteSchema();
+    Assert.assertFalse(dataStore.schemaExists());
+  }
+
+  public static void testGetEmployee(DataStore<String, Employee> dataStore)
+    throws IOException {
+    dataStore.createSchema();
+    Employee employee = DataStoreTestUtil.createEmployee(dataStore);
+    String ssn = employee.getSsn().toString();
+    dataStore.put(ssn, employee);
+    dataStore.flush();
+
+    Employee after = dataStore.get(ssn, Employee._ALL_FIELDS);
+
+    Assert.assertEquals(employee, after);
+  }
+
+  public static void testGetEmployeeNonExisting(DataStore<String, Employee> dataStore)
+    throws IOException {
+    Employee employee = dataStore.get("_NON_EXISTING_SSN_FOR_EMPLOYEE_");
+    Assert.assertNull(employee);
+  }
+
+  public static void testGetEmployeeWithFields(DataStore<String, Employee> dataStore)
+    throws IOException {
+    Employee employee = DataStoreTestUtil.createEmployee(dataStore);
+    String ssn = employee.getSsn().toString();
+    dataStore.put(ssn, employee);
+    dataStore.flush();
+
+    String[] fields = employee.getFields();
+    for(Set<String> subset : StringUtils.powerset(fields)) {
+      if(subset.isEmpty())
+        continue;
+      Employee after = dataStore.get(ssn, subset.toArray(new String[subset.size()]));
+      Employee expected = new Employee();
+      for(String field:subset) {
+        int index = expected.getFieldIndex(field);
+        expected.put(index, employee.get(index));
+      }
+
+      Assert.assertEquals(expected, after);
+    }
+  }
+
+  public static Employee testPutEmployee(DataStore<String, Employee> dataStore)
+  throws IOException {
+    dataStore.createSchema();
+    Employee employee = DataStoreTestUtil.createEmployee(dataStore);
+    return employee;
+  }
+
+  public static void testEmptyUpdateEmployee(DataStore<String, Employee> dataStore)
+  throws IOException {
+    dataStore.createSchema();
+    long ssn = 1234567890L;
+    String ssnStr = Long.toString(ssn);
+    long now = System.currentTimeMillis();
+
+    Employee employee = dataStore.newPersistent();
+    employee.setName(new Utf8("John Doe"));
+    employee.setDateOfBirth(now - 20L *  YEAR_IN_MS);
+    employee.setSalary(100000);
+    employee.setSsn(new Utf8(ssnStr));
+    dataStore.put(employee.getSsn().toString(), employee);
+
+    dataStore.flush();
+
+    employee = dataStore.get(ssnStr);
+    dataStore.put(ssnStr, employee);
+
+    dataStore.flush();
+
+    employee = dataStore.newPersistent();
+    dataStore.put(Long.toString(ssn + 1), employee);
+
+    dataStore.flush();
+
+    employee = dataStore.get(Long.toString(ssn + 1));
+    Assert.assertNull(employee);
+  }
+
+  public static void testUpdateEmployee(DataStore<String, Employee> dataStore)
+  throws IOException {
+    dataStore.createSchema();
+    long ssn = 1234567890L;
+    long now = System.currentTimeMillis();
+
+    for (int i = 0; i < 5; i++) {
+      Employee employee = dataStore.newPersistent();
+      employee.setName(new Utf8("John Doe " + i));
+      employee.setDateOfBirth(now - 20L *  YEAR_IN_MS);
+      employee.setSalary(100000);
+      employee.setSsn(new Utf8(Long.toString(ssn + i)));
+      dataStore.put(employee.getSsn().toString(), employee);
+    }
+
+    dataStore.flush();
+
+    for (int i = 0; i < 1; i++) {
+      Employee employee = dataStore.newPersistent();
+      employee.setName(new Utf8("John Doe " + (i + 5)));
+      employee.setDateOfBirth(now - 18L *  YEAR_IN_MS);
+      employee.setSalary(120000);
+      employee.setSsn(new Utf8(Long.toString(ssn + i)));
+      dataStore.put(employee.getSsn().toString(), employee);
+    }
+
+    dataStore.flush();
+
+    for (int i = 0; i < 1; i++) {
+      String key = Long.toString(ssn + i);
+      Employee employee = dataStore.get(key);
+      Assert.assertEquals(now - 18L * YEAR_IN_MS, employee.getDateOfBirth());
+      Assert.assertEquals("John Doe " + (i + 5), employee.getName().toString());
+      Assert.assertEquals(120000, employee.getSalary());
+    }
+  }
+
+  public static void testUpdateWebPage(DataStore<String, WebPage> dataStore)
+  throws IOException {
+    dataStore.createSchema();
+
+    String[] urls = {"http://a.com/a", "http://b.com/b", "http://c.com/c",
+        "http://d.com/d", "http://e.com/e", "http://f.com/f", "http://g.com/g"};
+    String content = "content";
+    String parsedContent = "parsedContent";
+    String anchor = "anchor";
+
+    int parsedContentCount = 0;
+
+    for (int i = 0; i < urls.length; i++) {
+      WebPage webPage = dataStore.newPersistent();
+      webPage.setUrl(new Utf8(urls[i]));
+      for (parsedContentCount = 0; parsedContentCount < 5; parsedContentCount++) {
+        webPage.addToParsedContent(new Utf8(parsedContent + i + "," + parsedContentCount));
+      }
+      for (int j = 0; j < urls.length; j += 2) {
+        webPage.putToOutlinks(new Utf8(anchor + j), new Utf8(urls[j]));
+      }
+      dataStore.put(webPage.getUrl().toString(), webPage);
+    }
+
+    dataStore.flush();
+
+    for (int i = 0; i < urls.length; i++) {
+      WebPage webPage = dataStore.get(urls[i]);
+      webPage.setContent(ByteBuffer.wrap(ByteUtils.toBytes(content + i)));
+      for (parsedContentCount = 5; parsedContentCount < 10; parsedContentCount++) {
+        webPage.addToParsedContent(new Utf8(parsedContent + i + "," + parsedContentCount));
+      }
+      webPage.getOutlinks().clear();
+      for (int j = 1; j < urls.length; j += 2) {
+        webPage.putToOutlinks(new Utf8(anchor + j), new Utf8(urls[j]));
+      }
+      dataStore.put(webPage.getUrl().toString(), webPage);
+    }
+
+    dataStore.flush();
+
+    for (int i = 0; i < urls.length; i++) {
+      WebPage webPage = dataStore.get(urls[i]);
+      Assert.assertEquals(content + i, ByteUtils.toString( toByteArray(webPage.getContent()) ));
+      Assert.assertEquals(10, webPage.getParsedContent().size());
+      int j = 0;
+      for (Utf8 pc : webPage.getParsedContent()) {
+        Assert.assertEquals(parsedContent + i + "," + j, pc.toString());
+        j++;
+      }
+      int count = 0;
+      for (j = 1; j < urls.length; j += 2) {
+        Utf8 link = webPage.getOutlinks().get(new Utf8(anchor + j));
+        Assert.assertNotNull(link);
+        Assert.assertEquals(urls[j], link.toString());
+        count++;
+      }
+      Assert.assertEquals(count, webPage.getOutlinks().size());
+    }
+
+    for (int i = 0; i < urls.length; i++) {
+      WebPage webPage = dataStore.get(urls[i]);
+      for (int j = 0; j < urls.length; j += 2) {
+        webPage.putToOutlinks(new Utf8(anchor + j), new Utf8(urls[j]));
+      }
+      dataStore.put(webPage.getUrl().toString(), webPage);
+    }
+
+    dataStore.flush();
+
+    for (int i = 0; i < urls.length; i++) {
+      WebPage webPage = dataStore.get(urls[i]);
+      int count = 0;
+      for (int j = 0; j < urls.length; j++) {
+        Utf8 link = webPage.getOutlinks().get(new Utf8(anchor + j));
+        Assert.assertNotNull(link);
+        Assert.assertEquals(urls[j], link.toString());
+        count++;
+      }
+      Assert.assertEquals(urls.length, count);
+    }
+  }
+
+  public static void assertWebPage(WebPage page, int i) {
+    Assert.assertNotNull(page);
+
+    Assert.assertEquals(URLS[i], page.getUrl().toString());
+    Assert.assertTrue("content error: actual=" + new String( toByteArray(page.getContent()) ) +
+        " expected=" + CONTENTS[i] + " i=" + i
+    , Arrays.equals( toByteArray(page.getContent() )
+        , CONTENTS[i].getBytes()));
+
+    GenericArray<Utf8> parsedContent = page.getParsedContent();
+    Assert.assertNotNull(parsedContent);
+    Assert.assertTrue(parsedContent.size() > 0);
+
+    int j=0;
+    String[] tokens = CONTENTS[i].split(" ");
+    for(Utf8 token : parsedContent) {
+      Assert.assertEquals(tokens[j++], token.toString());
+    }
+
+    if(LINKS[i].length > 0) {
+      Assert.assertNotNull(page.getOutlinks());
+      Assert.assertTrue(page.getOutlinks().size() > 0);
+      for(j=0; j<LINKS[i].length; j++) {
+        Assert.assertEquals(ANCHORS[i][j],
+            page.getFromOutlinks(new Utf8(URLS[LINKS[i][j]])).toString());
+      }
+    } else {
+      Assert.assertTrue(page.getOutlinks() == null || page.getOutlinks().isEmpty());
+    }
+  }
+
+  private static void testGetWebPage(DataStore<String, WebPage> store, String[] fields)
+    throws IOException {
+    createWebPageData(store);
+
+    for(int i=0; i<URLS.length; i++) {
+      WebPage page = store.get(URLS[i], fields);
+      assertWebPage(page, i);
+    }
+  }
+
+  public static void testGetWebPage(DataStore<String, WebPage> store) throws IOException {
+    testGetWebPage(store, WebPage._ALL_FIELDS);
+  }
+
+  public static void testGetWebPageDefaultFields(DataStore<String, WebPage> store)
+  throws IOException {
+    testGetWebPage(store, null);
+  }
+
+  private static void testQueryWebPageSingleKey(DataStore<String, WebPage> store
+      , String[] fields) throws IOException {
+
+    createWebPageData(store);
+
+    for(int i=0; i<URLS.length; i++) {
+      Query<String, WebPage> query = store.newQuery();
+      query.setFields(fields);
+      query.setKey(URLS[i]);
+      Result<String, WebPage> result = query.execute();
+      Assert.assertTrue(result.next());
+      WebPage page = result.get();
+      assertWebPage(page, i);
+      Assert.assertFalse(result.next());
+    }
+  }
+
+  public static void testQueryWebPageSingleKey(DataStore<String, WebPage> store)
+  throws IOException {
+    testQueryWebPageSingleKey(store, WebPage._ALL_FIELDS);
+  }
+
+  public static void testQueryWebPageSingleKeyDefaultFields(
+      DataStore<String, WebPage> store) throws IOException {
+    testQueryWebPageSingleKey(store, null);
+  }
+
+  public static void testQueryWebPageKeyRange(DataStore<String, WebPage> store,
+      boolean setStartKeys, boolean setEndKeys)
+  throws IOException {
+    createWebPageData(store);
+
+    //create sorted set of urls
+    List<String> sortedUrls = new ArrayList<String>();
+    for(String url: URLS) {
+      sortedUrls.add(url);
+    }
+    Collections.sort(sortedUrls);
+
+    //try all ranges
+    for(int i=0; i<sortedUrls.size(); i++) {
+      for(int j=i; j<sortedUrls.size(); j++) {
+        Query<String, WebPage> query = store.newQuery();
+        if(setStartKeys)
+          query.setStartKey(sortedUrls.get(i));
+        if(setEndKeys)
+          query.setEndKey(sortedUrls.get(j));
+        Result<String, WebPage> result = query.execute();
+
+        int r=0;
+        while(result.next()) {
+          WebPage page = result.get();
+          assertWebPage(page, URL_INDEXES.get(page.getUrl().toString()));
+          r++;
+        }
+
+        int expectedLength = (setEndKeys ? j+1: sortedUrls.size()) -
+                             (setStartKeys ? i: 0);
+        Assert.assertEquals(expectedLength, r);
+        if(!setEndKeys)
+          break;
+      }
+      if(!setStartKeys)
+        break;
+    }
+  }
+
+  public static void testQueryWebPages(DataStore<String, WebPage> store)
+  throws IOException {
+    testQueryWebPageKeyRange(store, false, false);
+  }
+
+  public static void testQueryWebPageStartKey(DataStore<String, WebPage> store)
+  throws IOException {
+    testQueryWebPageKeyRange(store, true, false);
+  }
+
+  public static void testQueryWebPageEndKey(DataStore<String, WebPage> store)
+  throws IOException {
+    testQueryWebPageKeyRange(store, false, true);
+  }
+
+  public static void testQueryWebPageKeyRange(DataStore<String, WebPage> store)
+  throws IOException {
+    testQueryWebPageKeyRange(store, true, true);
+  }
+
+  public static void testQueryWebPageEmptyResults(DataStore<String, WebPage> store)
+    throws IOException {
+    createWebPageData(store);
+
+    //query empty results
+    Query<String, WebPage> query = store.newQuery();
+    query.setStartKey("aa");
+    query.setEndKey("ab");
+    assertEmptyResults(query);
+
+    //query empty results for one key
+    query = store.newQuery();
+    query.setKey("aa");
+    assertEmptyResults(query);
+  }
+
+  public static<K,T extends Persistent> void assertEmptyResults(Query<K, T> query)
+    throws IOException {
+    assertNumResults(query, 0);
+  }
+
+  public static<K,T extends Persistent> void assertNumResults(Query<K, T>query
+      , long numResults) throws IOException {
+    Result<K, T> result = query.execute();
+    int actualNumResults = 0;
+    while(result.next()) {
+      actualNumResults++;
+    }
+    result.close();
+    Assert.assertEquals(numResults, actualNumResults);
+  }
+
+  public static void testGetPartitions(DataStore<String, WebPage> store)
+  throws IOException {
+    createWebPageData(store);
+    testGetPartitions(store, store.newQuery());
+  }
+
+  public static void testGetPartitions(DataStore<String, WebPage> store
+      , Query<String, WebPage> query) throws IOException {
+    List<PartitionQuery<String, WebPage>> partitions = store.getPartitions(query);
+
+    Assert.assertNotNull(partitions);
+    Assert.assertTrue(partitions.size() > 0);
+
+    for(PartitionQuery<String, WebPage> partition:partitions) {
+      Assert.assertNotNull(partition);
+    }
+
+    assertPartitions(store, query, partitions);
+  }
+
+  public static void assertPartitions(DataStore<String, WebPage> store,
+      Query<String, WebPage> query, List<PartitionQuery<String,WebPage>> partitions)
+  throws IOException {
+
+    int count = 0, partitionsCount = 0;
+    Map<String, Integer> results = new HashMap<String, Integer>();
+    Map<String, Integer> partitionResults = new HashMap<String, Integer>();
+
+    //execute query and count results
+    Result<String, WebPage> result = store.execute(query);
+    Assert.assertNotNull(result);
+
+    while(result.next()) {
+      Assert.assertNotNull(result.getKey());
+      Assert.assertNotNull(result.get());
+      results.put(result.getKey(), result.get().hashCode()); //keys are not reused, so this is safe
+      count++;
+    }
+    result.close();
+
+    Assert.assertTrue(count > 0); //assert that results is not empty
+    Assert.assertEquals(count, results.size()); //assert that keys are unique
+
+    for(PartitionQuery<String, WebPage> partition:partitions) {
+      Assert.assertNotNull(partition);
+
+      result = store.execute(partition);
+      Assert.assertNotNull(result);
+
+      while(result.next()) {
+        Assert.assertNotNull(result.getKey());
+        Assert.assertNotNull(result.get());
+        partitionResults.put(result.getKey(), result.get().hashCode());
+        partitionsCount++;
+      }
+      result.close();
+
+      Assert.assertEquals(partitionsCount, partitionResults.size()); //assert that keys are unique
+    }
+
+    Assert.assertTrue(partitionsCount > 0);
+    Assert.assertEquals(count, partitionsCount);
+
+    for(Map.Entry<String, Integer> r : results.entrySet()) {
+      Integer p = partitionResults.get(r.getKey());
+      Assert.assertNotNull(p);
+      Assert.assertEquals(r.getValue(), p);
+    }
+  }
+
+  public static void testDelete(DataStore<String, WebPage> store) throws IOException {
+    WebPageDataCreator.createWebPageData(store);
+    //delete one by one
+
+    int deletedSoFar = 0;
+    for(String url : URLS) {
+      Assert.assertTrue(store.delete(url));
+      store.flush();
+
+      //assert that it is actually deleted
+      Assert.assertNull(store.get(url));
+
+      //assert that other records are not deleted
+      assertNumResults(store.newQuery(), URLS.length - ++deletedSoFar);
+    }
+  }
+
+  public static void testDeleteByQuery(DataStore<String, WebPage> store)
+    throws IOException {
+
+    Query<String, WebPage> query;
+
+    //test 1 - delete all
+    WebPageDataCreator.createWebPageData(store);
+
+    query = store.newQuery();
+
+    assertNumResults(store.newQuery(), URLS.length);
+    store.deleteByQuery(query);
+    store.flush();
+    assertEmptyResults(store.newQuery());
+
+
+    //test 2 - delete all
+    WebPageDataCreator.createWebPageData(store);
+
+    query = store.newQuery();
+    query.setFields(WebPage._ALL_FIELDS);
+
+    assertNumResults(store.newQuery(), URLS.length);
+    store.deleteByQuery(query);
+    store.flush();
+    assertEmptyResults(store.newQuery());
+
+
+    //test 3 - delete all
+    WebPageDataCreator.createWebPageData(store);
+
+    query = store.newQuery();
+    query.setKeyRange("a", "z"); //all start with "http://"
+
+    assertNumResults(store.newQuery(), URLS.length);
+    store.deleteByQuery(query);
+    store.flush();
+    assertEmptyResults(store.newQuery());
+
+
+    //test 4 - delete some
+    WebPageDataCreator.createWebPageData(store);
+    query = store.newQuery();
+    query.setEndKey(SORTED_URLS[NUM_KEYS]);
+
+    assertNumResults(store.newQuery(), URLS.length);
+    store.deleteByQuery(query);
+    store.flush();
+    assertNumResults(store.newQuery(), URLS.length - (NUM_KEYS+1));
+
+    store.truncateSchema();
+
+  }
+
+  public static void testDeleteByQueryFields(DataStore<String, WebPage> store)
+  throws IOException {
+
+    Query<String, WebPage> query;
+
+    //test 5 - delete all with some fields
+    WebPageDataCreator.createWebPageData(store);
+
+    query = store.newQuery();
+    query.setFields(WebPage.Field.OUTLINKS.getName()
+        , WebPage.Field.PARSED_CONTENT.getName(), WebPage.Field.CONTENT.getName());
+
+    assertNumResults(store.newQuery(), URLS.length);
+    store.deleteByQuery(query);
+    store.deleteByQuery(query);
+    store.deleteByQuery(query); //repeat the delete: HBase occasionally misses a delete on the first attempt
+    
+    store.flush();
+    
+    assertNumResults(store.newQuery(), URLS.length);
+
+    //assert that data is deleted
+    for (int i = 0; i < SORTED_URLS.length; i++) {
+      WebPage page = store.get(SORTED_URLS[i]);
+      Assert.assertNotNull(page);
+
+      Assert.assertNotNull(page.getUrl());
+      Assert.assertEquals(page.getUrl().toString(), SORTED_URLS[i]);
+      Assert.assertEquals(0, page.getOutlinks().size());
+      Assert.assertEquals(0, page.getParsedContent().size());
+      if(page.getContent() != null) {
+        System.out.println("url:" + page.getUrl().toString());
+        System.out.println( "limit:" + page.getContent().limit());
+      } else {
+        Assert.assertNull(page.getContent());
+      }
+    }
+
+    //test 6 - delete some with some fields
+    WebPageDataCreator.createWebPageData(store);
+
+    query = store.newQuery();
+    query.setFields(WebPage.Field.URL.getName());
+    String startKey = SORTED_URLS[NUM_KEYS];
+    String endKey = SORTED_URLS[SORTED_URLS.length - NUM_KEYS];
+    query.setStartKey(startKey);
+    query.setEndKey(endKey);
+
+    assertNumResults(store.newQuery(), URLS.length);
+    store.deleteByQuery(query);
+    store.deleteByQuery(query);
+    store.deleteByQuery(query); //repeat the delete: HBase occasionally misses a delete on the first attempt
+    
+    store.flush();
+
+    assertNumResults(store.newQuery(), URLS.length);
+
+    //assert that data is deleted
+    for (int i = 0; i < URLS.length; i++) {
+      WebPage page = store.get(URLS[i]);
+      Assert.assertNotNull(page);
+      if( URLS[i].compareTo(startKey) < 0 || URLS[i].compareTo(endKey) >= 0) {
+        //not deleted
+        assertWebPage(page, i);
+      } else {
+        //deleted
+        Assert.assertNull(page.getUrl());
+        Assert.assertNotNull(page.getOutlinks());
+        Assert.assertNotNull(page.getParsedContent());
+        Assert.assertNotNull(page.getContent());
+        Assert.assertTrue(page.getOutlinks().size() > 0);
+        Assert.assertTrue(page.getParsedContent().size() > 0);
+      }
+    }
+
+  }
+
+  private static byte[] toByteArray(ByteBuffer buffer) {
+    int p = buffer.position();
+    int n = buffer.limit() - p;
+    byte[] bytes = new byte[n];
+    for (int i = 0; i < n; i++) {
+      bytes[i] = buffer.get(p++);
+    }
+    return bytes;
+  }
+
+}
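Aside: the `toByteArray(ByteBuffer)` helper at the end of DataStoreTestUtil relies on absolute `get(int)` reads so that copying the buffer's remaining bytes never disturbs the buffer's own position. A minimal, JDK-only sketch of the same idiom (class and method names here are illustrative, not part of Gora):

```java
import java.nio.ByteBuffer;

public class ByteBufferCopy {

  // Copies buffer.remaining() bytes using absolute get(int),
  // so the buffer's position is left untouched after the call.
  static byte[] toByteArray(ByteBuffer buffer) {
    int p = buffer.position();
    int n = buffer.limit() - p;
    byte[] bytes = new byte[n];
    for (int i = 0; i < n; i++) {
      bytes[i] = buffer.get(p++);
    }
    return bytes;
  }

  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.wrap("hello".getBytes());
    buf.get(); // relative read advances position past 'h'
    byte[] rest = toByteArray(buf);
    System.out.println(new String(rest)); // ello
    System.out.println(buf.position());   // 1 -- absolute reads did not move it
  }
}
```

Using relative `get()` in the loop instead would drain the buffer, making a second comparison (as in `assertWebPage`, which calls the helper twice on the same page content) silently see zero bytes.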
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/store/TestDataStoreFactory.java b/trunk/gora-core/src/test/java/org/apache/gora/store/TestDataStoreFactory.java
new file mode 100644
index 0000000..4700184
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/store/TestDataStoreFactory.java
@@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.store;
+
+import java.util.Properties;
+
+import junit.framework.Assert;
+
+import org.apache.gora.avro.store.DataFileAvroStore;
+import org.apache.gora.mock.persistency.MockPersistent;
+import org.apache.gora.mock.store.MockDataStore;
+import org.apache.gora.util.GoraException;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Before;
+import org.junit.Test;
+
+public class TestDataStoreFactory {
+  private Configuration conf;
+  
+  @Before
+  public void setUp() {
+    conf = new Configuration();
+  }
+
+  @Test
+  public void testGetDataStore() throws GoraException {
+    DataStore<?,?> dataStore = DataStoreFactory.getDataStore("org.apache.gora.mock.store.MockDataStore"
+        , String.class, MockPersistent.class, conf);
+    Assert.assertNotNull(dataStore);
+  }
+  
+  @Test
+  public void testGetClasses() throws GoraException {
+    DataStore<?,?> dataStore = DataStoreFactory.getDataStore("org.apache.gora.mock.store.MockDataStore"
+        , String.class, MockPersistent.class, conf);
+    Assert.assertNotNull(dataStore);
+    Assert.assertEquals(String.class, dataStore.getKeyClass());
+    Assert.assertEquals(MockPersistent.class, dataStore.getPersistentClass());
+  }
+  
+  @Test
+  public void testGetDataStore2() throws GoraException {
+    DataStore<?,?> dataStore = DataStoreFactory.getDataStore(MockDataStore.class
+        , String.class, MockPersistent.class, conf);
+    Assert.assertNotNull(dataStore);
+  }
+  
+  @Test
+  public void testGetDataStore3() throws GoraException {
+    DataStore<?,?> dataStore1 = DataStoreFactory.getDataStore("org.apache.gora.mock.store.MockDataStore"
+        , Object.class, MockPersistent.class, conf);
+    DataStore<?,?> dataStore2 = DataStoreFactory.getDataStore("org.apache.gora.mock.store.MockDataStore"
+        , Object.class, MockPersistent.class, conf);
+    DataStore<?,?> dataStore3 = DataStoreFactory.getDataStore("org.apache.gora.mock.store.MockDataStore"
+        , String.class, MockPersistent.class, conf);
+    
+    Assert.assertNotSame(dataStore1, dataStore2);
+    Assert.assertNotSame(dataStore1, dataStore3);
+  }
+  
+  @Test
+  public void testReadProperties() throws GoraException{
+    //indirect testing
+    DataStore<?,?> dataStore = DataStoreFactory.getDataStore(String.class,
+            MockPersistent.class, conf);
+    Assert.assertNotNull(dataStore);
+    Assert.assertEquals(MockDataStore.class, dataStore.getClass());
+  }
+  
+  @Test
+  public void testFindProperty() {
+    Properties properties = DataStoreFactory.createProps();
+    
+    DataStore<String, MockPersistent> store = new DataFileAvroStore<String,MockPersistent>();
+    
+    String fooValue = DataStoreFactory.findProperty(properties, store
+        , "foo_property", "foo_default");
+    Assert.assertEquals("foo_value", fooValue);
+    
+    String bazValue = DataStoreFactory.findProperty(properties, store
+        , "baz_property", "baz_default");
+    Assert.assertEquals("baz_value", bazValue);
+    
+    String barValue = DataStoreFactory.findProperty(properties, store
+        , "bar_property", "bar_default");
+    Assert.assertEquals("bar_value", barValue);
+  }
+  
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/util/TestIOUtils.java b/trunk/gora-core/src/test/java/org/apache/gora/util/TestIOUtils.java
new file mode 100644
index 0000000..baafac0
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/util/TestIOUtils.java
@@ -0,0 +1,246 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+import java.io.ByteArrayInputStream;
+import java.io.DataInput;
+import java.io.DataInputStream;
+import java.io.DataOutput;
+import java.io.DataOutputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.util.Arrays;
+
+import junit.framework.Assert;
+
+import org.apache.avro.ipc.ByteBufferInputStream;
+import org.apache.avro.ipc.ByteBufferOutputStream;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.mapreduce.GoraMapReduceUtils;
+import org.apache.gora.util.IOUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.junit.Test;
+
+/**
+ * Test case for {@link IOUtils} class.
+ */
+public class TestIOUtils {
+
+  public static final Logger log = LoggerFactory.getLogger(TestIOUtils.class);
+  
+  public static Configuration conf = new Configuration();
+
+  private static final int BOOL_ARRAY_MAX_LENGTH = 30;
+  private static final int STRING_ARRAY_MAX_LENGTH = 30;
+  
+  private static class BoolArrayWrapper implements Writable {
+    boolean[] arr;
+    @SuppressWarnings("unused")
+    public BoolArrayWrapper() {
+    }
+    public BoolArrayWrapper(boolean[] arr) {
+      this.arr = arr;
+    }
+    @Override
+    public void readFields(DataInput in) throws IOException {
+      this.arr = IOUtils.readBoolArray(in);
+    }
+    @Override
+    public void write(DataOutput out) throws IOException {
+      IOUtils.writeBoolArray(out, arr);
+    }
+    @Override
+    public boolean equals(Object obj) {
+      return Arrays.equals(arr, ((BoolArrayWrapper)obj).arr);
+    }
+  }
+  
+  private static class StringArrayWrapper implements Writable {
+    String[] arr;
+    @SuppressWarnings("unused")
+    public StringArrayWrapper() {
+    }
+    public StringArrayWrapper(String[] arr) {
+      this.arr = arr;
+    }
+    @Override
+    public void readFields(DataInput in) throws IOException {
+      this.arr = IOUtils.readStringArray(in);
+    }
+    @Override
+    public void write(DataOutput out) throws IOException {
+      IOUtils.writeStringArray(out, arr);
+    }
+    @Override
+    public boolean equals(Object obj) {
+      return Arrays.equals(arr, ((StringArrayWrapper)obj).arr);
+    }
+  }
+  
+  @SuppressWarnings("unchecked")
+  public static <T> void testSerializeDeserialize(T... objects) throws Exception {
+    ByteBufferOutputStream os = new ByteBufferOutputStream();
+    DataOutputStream dos = new DataOutputStream(os);
+    ByteBufferInputStream is = null;
+    DataInputStream dis = null;
+    
+    GoraMapReduceUtils.setIOSerializations(conf, true);
+    
+    try {
+      for(T before : objects) {
+        IOUtils.serialize(conf, dos , before, (Class<T>)before.getClass());
+        dos.flush();
+      }
+       
+      is = new ByteBufferInputStream(os.getBufferList());
+      dis = new DataInputStream(is);
+      
+      for(T before : objects) {
+        T after = IOUtils.deserialize(conf, dis, null, (Class<T>)before.getClass());
+        
+        log.info("Before: " + before);
+        log.info("After : " + after);
+        
+        Assert.assertEquals(before, after);
+      }
+      
+      //assert that the end of input is reached
+      try {
+        long skipped = dis.skip(1);
+        Assert.assertEquals(0, skipped);
+      }catch (EOFException expected) {
+        //either should throw exception or return 0 as skipped
+      }
+    }finally {
+      org.apache.hadoop.io.IOUtils.closeStream(dos);
+      org.apache.hadoop.io.IOUtils.closeStream(os);
+      org.apache.hadoop.io.IOUtils.closeStream(dis);
+      org.apache.hadoop.io.IOUtils.closeStream(is);
+    }
+  }
+  
+  @Test
+  public void testWritableSerde() throws Exception {
+    Text text = new Text("foo goes to a bar to get some buzz");
+    testSerializeDeserialize(text);
+  }
+  
+  @Test
+  public void testJavaSerializableSerde() throws Exception {
+    Integer integer = Integer.valueOf(42);
+    testSerializeDeserialize(integer);
+  }
+  
+  @Test
+  public void testReadWriteBoolArray() throws Exception {
+    
+    boolean[][] patterns = {
+        {true},
+        {false},
+        {true, false},
+        {false, true},
+        {true, false, true},
+        {false, true, false},
+        {false, true, false, false, true, true, true},
+        {false, true, false, false, true, true, true, true},
+        {false, true, false, false, true, true, true, true, false},
+    };
+    
+    for(int i=0; i<BOOL_ARRAY_MAX_LENGTH; i++) {
+      for(int j=0; j<patterns.length; j++) {
+        boolean[] arr = new boolean[i];
+        for(int k=0; k<i; k++) {
+          arr[k] = patterns[j][k % patterns[j].length];
+        }
+        
+        testSerializeDeserialize(new BoolArrayWrapper(arr));
+      }
+    }
+  }
+  
+  @Test
+  public void testReadWriteNullFieldsInfo() throws IOException {
+
+    Integer n = null; //null
+    Integer nn = new Integer(42); //not null
+
+    testNullFieldsWith(nn);
+    testNullFieldsWith(n);
+    testNullFieldsWith(n, nn);
+    testNullFieldsWith(nn, n);
+    testNullFieldsWith(nn, n, nn, n);
+    testNullFieldsWith(nn, n, nn, n, n, n, nn, nn, nn, n, n);
+  }
+
+  private void testNullFieldsWith(Object... values) throws IOException {
+    DataOutputBuffer out = new DataOutputBuffer();
+    DataInputBuffer in = new DataInputBuffer();
+
+    IOUtils.writeNullFieldsInfo(out, values);
+
+    in.reset(out.getData(), out.getLength());
+
+    boolean[] ret = IOUtils.readNullFieldsInfo(in);
+
+    //assert
+    Assert.assertEquals(values.length, ret.length);
+
+    for(int i=0; i<values.length; i++) {
+      Assert.assertEquals( values[i] == null , ret[i]);
+    }
+  }
+  
+  @Test
+  public void testReadWriteStringArray() throws Exception {
+    for(int i=0; i<STRING_ARRAY_MAX_LENGTH; i++) {
+      String[] arr = new String[i];
+      for(int j=0; j<i; j++) {
+        arr[j] = String.valueOf(j);
+      }
+      
+      testSerializeDeserialize(new StringArrayWrapper(arr));
+    }
+  }
+  
+  @Test
+  public void testReadFullyBufferLimit() throws IOException {
+    for(int i=-2; i<=2; i++) {
+      byte[] bytes = new byte[IOUtils.BUFFER_SIZE + i];
+      for(int j=0; j<bytes.length; j++) {
+        bytes[j] = (byte)j;
+      }
+      ByteArrayInputStream is = new ByteArrayInputStream(bytes);
+      
+      byte[] readBytes = IOUtils.readFully(is);
+      assertByteArrayEquals(bytes, readBytes);
+    }
+  }
+  
+  public void assertByteArrayEquals(byte[] expected, byte[] actual) {
+    Assert.assertEquals("Array lengths do not match", expected.length, actual.length);
+    for(int j=0; j<expected.length; j++) {
+      Assert.assertEquals("bytes at position "+j+" do not match", expected[j], actual[j]);
+    }
+  }
+}
diff --git a/trunk/gora-core/src/test/java/org/apache/gora/util/TestWritableUtils.java b/trunk/gora-core/src/test/java/org/apache/gora/util/TestWritableUtils.java
new file mode 100644
index 0000000..14cc81c
--- /dev/null
+++ b/trunk/gora-core/src/test/java/org/apache/gora/util/TestWritableUtils.java
@@ -0,0 +1,53 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.util;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInput;
+import java.io.DataInputStream;
+import java.io.DataOutput;
+import java.io.DataOutputStream;
+import java.util.Properties;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Test case for {@link WritableUtils} class.
+ */
+public class TestWritableUtils {
+  @Test
+  public void testWritesReads() throws Exception {
+    Properties props = new Properties();
+    props.put("keyBlah", "valueBlah");
+    props.put("keyBlah2", "valueBlah2");
+    
+    ByteArrayOutputStream bos = new ByteArrayOutputStream();
+    DataOutput out = new DataOutputStream(bos);
+    WritableUtils.writeProperties(out, props);
+    
+    DataInput in = new DataInputStream(new ByteArrayInputStream(bos.toByteArray()));
+    
+    Properties propsRead = WritableUtils.readProperties(in);
+    
+    Assert.assertEquals(propsRead.get("keyBlah"), props.get("keyBlah"));
+    Assert.assertEquals(propsRead.get("keyBlah2"), props.get("keyBlah2"));
+  }
+}
diff --git a/trunk/gora-hbase/build.xml b/trunk/gora-hbase/build.xml
new file mode 100644
index 0000000..8a7438a
--- /dev/null
+++ b/trunk/gora-hbase/build.xml
@@ -0,0 +1,27 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<project name="gora-hbase" default="compile">
+  <property name="project.dir" value="${basedir}/.."/>
+
+  <import file="${project.dir}/build-common.xml"/>
+
+  <!-- do nothing for now as the tests need fixing -->
+  <target name="test" depends="compile-test" description="Run core unit tests"/>
+</project>
diff --git a/trunk/gora-hbase/conf/.gitignore b/trunk/gora-hbase/conf/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-hbase/conf/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-hbase/ivy/ivy.xml b/trunk/gora-hbase/ivy/ivy.xml
new file mode 100644
index 0000000..dfa5a75
--- /dev/null
+++ b/trunk/gora-hbase/ivy/ivy.xml
@@ -0,0 +1,53 @@
+<?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<ivy-module version="2.0">
+    <info 
+      organisation="org.apache.gora"
+      module="gora-hbase"
+      status="integration"/>
+
+  <configurations>
+    <include file="../../ivy/ivy-configurations.xml"/>
+  </configurations>
+  
+  <publications>
+    <artifact name="gora-hbase" conf="compile"/>
+    <artifact name="gora-hbase-test" conf="test"/>
+  </publications>
+
+  <dependencies>
+    <!-- conf="*->@" means every conf is mapped to the conf of the same name in the artifact -->
+    <dependency org="org.apache.gora" name="gora-core" rev="latest.integration" changing="true" conf="*->@"/> 
+    <dependency org="org.jdom" name="jdom" rev="1.1" conf="*->master"/>
+
+    <!-- test dependencies -->
+    <dependency org="org.apache.hadoop" name="hadoop-test" rev="0.20.2" conf="test->default"/>
+    <dependency org="org.apache.hbase" name="hbase" rev="0.90.0" conf="*->*">
+        <exclude org="org.apache.thrift"/>
+    </dependency>
+    <dependency org="org.apache.hbase" name="hbase-tests" rev="0.90.0" conf="*->*">
+        <artifact name="hbase-tests" type="jar" ext="jar" url="http://repo1.maven.org/maven2/org/apache/hbase/hbase/0.90.0/hbase-0.90.0-tests.jar"/>
+        <exclude org="org.apache.thrift"/>
+    </dependency>
+
+  </dependencies>
+    
+</ivy-module>
+
diff --git a/trunk/gora-hbase/lib-ext/.gitignore b/trunk/gora-hbase/lib-ext/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-hbase/lib-ext/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-hbase/pom.xml b/trunk/gora-hbase/pom.xml
new file mode 100644
index 0000000..ea43e06
--- /dev/null
+++ b/trunk/gora-hbase/pom.xml
@@ -0,0 +1,189 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+       <!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+    -->
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.gora</groupId>
+        <artifactId>gora</artifactId>
+        <version>0.2.1</version>
+        <relativePath>../</relativePath>
+    </parent>
+    <artifactId>gora-hbase</artifactId>
+    <packaging>bundle</packaging>
+
+    <name>Apache Gora :: Hbase</name>
+        <url>http://gora.apache.org</url>
+    <description>The Apache Gora open source framework provides an in-memory data model and 
+    persistence for big data. Gora supports persisting to column stores, key value stores, 
+    document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce 
+    support.</description>
+    <inceptionYear>2010</inceptionYear>
+    <organization>
+    	<name>The Apache Software Foundation</name>
+    	<url>http://www.apache.org/</url>
+    </organization>
+    <scm>
+    	<url>http://svn.apache.org/viewvc/gora/tags/apache-gora-0.2.1/gora-hbase</url>
+    	<connection>scm:svn:http://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-hbase</connection>
+    	<developerConnection>scm:svn:https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-hbase</developerConnection>
+    </scm>
+    <issueManagement>
+    	<system>JIRA</system>
+    	<url>https://issues.apache.org/jira/browse/GORA</url>
+    </issueManagement>
+    <ciManagement>
+    	<system>Jenkins</system>
+    	<url>https://builds.apache.org/job/Gora-trunk/</url>
+    </ciManagement>
+
+    <properties>
+        <osgi.import>*</osgi.import>
+        <osgi.export>org.apache.gora.hbase*;version="${project.version}";-noimport:=true</osgi.export>
+    </properties>
+
+    <build>
+        <directory>target</directory>
+        <outputDirectory>target/classes</outputDirectory>
+        <finalName>${project.artifactId}-${project.version}</finalName>
+        <testOutputDirectory>target/test-classes</testOutputDirectory>
+        <testSourceDirectory>src/test/java</testSourceDirectory>
+        <sourceDirectory>src/main/java</sourceDirectory>
+        <testResources>
+          <testResource>
+            <directory>${project.basedir}/src/test/conf</directory>
+            <includes>
+              <include>**/*</include>
+            </includes>
+            <!--targetPath>${project.basedir}/target/classes/</targetPath-->
+          </testResource>
+        </testResources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>build-helper-maven-plugin</artifactId>
+                <version>${build-helper-maven-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <phase>generate-sources</phase>
+                        <goals>
+                            <goal>add-source</goal>
+                        </goals>
+                        <configuration>
+                            <sources>
+                                <source>src/examples/java</source>
+                            </sources>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <version>${maven-jar-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>jar</goal>
+                            <goal>test-jar</goal>
+                        </goals>
+                        <configuration>
+                        <archive>
+                            <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
+                        </archive>
+                    </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+
+    <dependencies>
+        <!-- Gora Internal Dependencies -->
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+            <classifier>tests</classifier>
+            <scope>test</scope>
+        </dependency>
+
+        <!-- Hadoop Dependencies -->
+        <dependency>
+            <groupId>org.apache.hbase</groupId>
+            <artifactId>hbase</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hbase</groupId>
+            <artifactId>hbase</artifactId>
+            <classifier>tests</classifier>
+        </dependency>
+
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>avro</artifactId>
+        </dependency>
+
+        <!-- Misc Dependencies -->
+        <dependency>
+            <groupId>org.jdom</groupId>
+            <artifactId>jdom</artifactId>
+        </dependency>
+
+        <!-- Logging Dependencies -->
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+	        <exclusions>
+	          <exclusion>
+                <groupId>javax.jms</groupId>
+	            <artifactId>jms</artifactId>
+	          </exclusion>
+            </exclusions>
+        </dependency>
+
+        <!-- Testing Dependencies -->
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-test</artifactId>
+        </dependency>
+
+    </dependencies>
+
+</project>
diff --git a/trunk/gora-hbase/src/examples/java/.gitignore b/trunk/gora-hbase/src/examples/java/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-hbase/src/examples/java/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseGetResult.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseGetResult.java
new file mode 100644
index 0000000..e5a178b
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseGetResult.java
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.query;
+
+import java.io.IOException;
+
+import org.apache.gora.hbase.store.HBaseStore;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+
+/**
+ * An {@link HBaseResult} based on the result of an HBase {@link Get} query.
+ */
+public class HBaseGetResult<K, T extends Persistent> extends HBaseResult<K,T> {
+
+  private Result result;
+  
+  public HBaseGetResult(HBaseStore<K, T> dataStore, Query<K, T> query
+      , Result result) {
+    super(dataStore, query);
+    this.result = result;
+  }
+
+  @Override
+  public float getProgress() throws IOException {
+    return key == null ? 0f : 1f;
+  }
+
+  @Override
+  public boolean nextInner() throws IOException {
+    if(result == null || result.getRow() == null 
+        || result.getRow().length == 0) {
+      return false;
+    }
+    if(key == null) {
+      readNext(result);
+      return key != null;
+    }
+    
+    return false;
+  }
+
+  @Override
+  public void close() throws IOException {
+  }
+}
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseQuery.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseQuery.java
new file mode 100644
index 0000000..e1adbf5
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseQuery.java
@@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.query;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.QueryBase;
+import org.apache.gora.store.DataStore;
+
+/**
+ * HBase specific implementation of the {@link Query} interface.
+ */
+public class HBaseQuery<K, T extends Persistent> extends QueryBase<K, T> {
+
+  public HBaseQuery() {
+    super(null);
+  }
+  
+  public HBaseQuery(DataStore<K, T> dataStore) {
+    super(dataStore);
+  }
+
+}
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseResult.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseResult.java
new file mode 100644
index 0000000..0dcc06d
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseResult.java
@@ -0,0 +1,51 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.query;
+
+import static org.apache.gora.hbase.util.HBaseByteInterface.fromBytes;
+
+import java.io.IOException;
+
+import org.apache.gora.hbase.store.HBaseStore;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.ResultBase;
+import org.apache.hadoop.hbase.client.Result;
+
+/**
+ * Base class for {@link Result} implementations for HBase.  
+ */
+public abstract class HBaseResult<K, T extends Persistent> 
+  extends ResultBase<K, T> {
+
+  public HBaseResult(HBaseStore<K,T> dataStore, Query<K, T> query) {
+    super(dataStore, query);
+  }
+  
+  @Override
+  public HBaseStore<K, T> getDataStore() {
+    return (HBaseStore<K, T>) super.getDataStore();
+  }
+  
+  protected void readNext(Result result) throws IOException {
+    key = fromBytes(getKeyClass(), result.getRow());
+    persistent = getDataStore().newInstance(result, query.getFields());
+  }
+  
+}
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseScannerResult.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseScannerResult.java
new file mode 100644
index 0000000..6963969
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/query/HBaseScannerResult.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.query;
+
+import java.io.IOException;
+
+import org.apache.gora.hbase.store.HBaseStore;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+
+/**
+ * Result of a query based on an HBase scanner.
+ */
+public class HBaseScannerResult<K, T extends Persistent> 
+  extends HBaseResult<K, T> {
+
+  private final ResultScanner scanner;
+  
+  public HBaseScannerResult(HBaseStore<K,T> dataStore, Query<K, T> query, 
+      ResultScanner scanner) {
+    super(dataStore, query);
+    this.scanner = scanner;
+  }
+
+  // do not clear object in scanner result
+  @Override
+  protected void clear() { }
+  
+  @Override
+  public boolean nextInner() throws IOException {
+    
+    Result result = scanner.next();
+    if (result == null) {
+      return false;
+    }
+    
+    readNext(result);
+    
+    return true;
+  }
+
+  @Override
+  public void close() throws IOException {
+    scanner.close();
+  }
+  
+  @Override
+  public float getProgress() throws IOException {
+    //TODO: if limit is set, we know how far we have gone 
+    return 0;
+  }
+  
+}
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseColumn.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseColumn.java
new file mode 100644
index 0000000..d610808
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseColumn.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.hbase.store;
+
+import java.util.Arrays;
+
+/**
+ * Stores a (family, qualifier) tuple.
+ */
+class HBaseColumn {
+  
+  final byte[] family;
+  final byte[] qualifier;
+  
+  public HBaseColumn(byte[] family, byte[] qualifier) {
+    this.family = family==null ? null : Arrays.copyOf(family, family.length);
+    this.qualifier = qualifier==null ? null : 
+      Arrays.copyOf(qualifier, qualifier.length);
+  }
+  
+  /**
+   * @return the family (internal array returned; do not modify)
+   */
+  public byte[] getFamily() {
+    return family;
+  }
+
+  /**
+   * @return the qualifier (internal array returned; do not modify)
+   */
+  public byte[] getQualifier() {
+    return qualifier;
+  }
+
+  @Override
+  public int hashCode() {
+    final int prime = 31;
+    int result = 1;
+    result = prime * result + Arrays.hashCode(family);
+    result = prime * result + Arrays.hashCode(qualifier);
+    return result;
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+    if (this == obj)
+      return true;
+    if (obj == null)
+      return false;
+    if (getClass() != obj.getClass())
+      return false;
+    HBaseColumn other = (HBaseColumn) obj;
+    if (!Arrays.equals(family, other.family))
+      return false;
+    if (!Arrays.equals(qualifier, other.qualifier))
+      return false;
+    return true;
+  }
+
+  @Override
+  public String toString() {
+    return "HBaseColumn [family=" + Arrays.toString(family) + ", qualifier="
+        + Arrays.toString(qualifier) + "]";
+  }
+  
+  
+}
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseMapping.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseMapping.java
new file mode 100644
index 0000000..f8ad9d7
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseMapping.java
@@ -0,0 +1,179 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.store;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.hfile.Compression.Algorithm;
+import org.apache.hadoop.hbase.regionserver.StoreFile.BloomType;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Mapping definitions for HBase. Thread safe.
+ * It holds a definition for a single table. 
+ */
+public class HBaseMapping {
+
+  private final HTableDescriptor tableDescriptor;
+  
+  // a map from field name to hbase column
+  private final Map<String, HBaseColumn> columnMap;
+  
+  public HBaseMapping(HTableDescriptor tableDescriptor,
+      Map<String, HBaseColumn> columnMap) {
+    super();
+    this.tableDescriptor = tableDescriptor;
+    this.columnMap = columnMap;
+  }
+  
+
+  public String getTableName() {
+    return tableDescriptor.getNameAsString();
+  }
+  
+  public HTableDescriptor getTable() {
+    return tableDescriptor;
+  }
+  
+  public HBaseColumn getColumn(String fieldName) {
+    return columnMap.get(fieldName);
+  }
+  
+  /**
+   * A builder for creating the mapping. This allows building a thread safe
+   * {@link HBaseMapping} using simple immutability.
+   *
+   */
+  public static class HBaseMappingBuilder { 
+    private Map<String, Map<String, HColumnDescriptor>> tableToFamilies = 
+      new HashMap<String, Map<String, HColumnDescriptor>>();
+    private Map<String, HBaseColumn> columnMap = 
+      new HashMap<String, HBaseColumn>();
+    
+    private String tableName;
+    
+    public String getTableName() {
+      return tableName;
+    }
+    
+    public void setTableName(String tableName) {
+      this.tableName = tableName;
+    }
+    
+    public void addFamilyProps(String tableName, String familyName,
+        String compression, String blockCache, String blockSize,
+        String bloomFilter, String maxVersions, String timeToLive, 
+        String inMemory) {
+      
+      // We keep track of all tables, even though we only build a mapping
+      // for one table, because of the way the mapping file is set up:
+      // family properties are defined first, and columns afterwards.
+      //
+      // HBaseMapping in fact does not need to support multiple tables,
+      // because a Store itself only supports a single table. (Every store
+      // instance simply creates one mapping instance for itself.)
+      //
+      // TODO A nicer solution would be to redefine the mapping file structure,
+      // for example by nesting columns in families. Of course this would
+      // break compatibility.
+
+      Map<String, HColumnDescriptor> families = getOrCreateFamilies(tableName);
+      HColumnDescriptor columnDescriptor = getOrCreateFamily(familyName, families);
+      
+      if(compression != null)
+        columnDescriptor.setCompressionType(Algorithm.valueOf(compression));
+      if(blockCache != null)
+        columnDescriptor.setBlockCacheEnabled(Boolean.parseBoolean(blockCache));
+      if(blockSize != null)
+        columnDescriptor.setBlocksize(Integer.parseInt(blockSize));
+      if(bloomFilter != null)
+        columnDescriptor.setBloomFilterType(BloomType.valueOf(bloomFilter));
+      if(maxVersions != null)
+        columnDescriptor.setMaxVersions(Integer.parseInt(maxVersions));
+      if(timeToLive != null)
+        columnDescriptor.setTimeToLive(Integer.parseInt(timeToLive));
+      if(inMemory != null)
+        columnDescriptor.setInMemory(Boolean.parseBoolean(inMemory));
+    }
+
+    public void addColumnFamily(String tableName, String familyName) {
+      Map<String, HColumnDescriptor> families = getOrCreateFamilies(tableName);
+      getOrCreateFamily(familyName, families);
+    }
+    
+    public void addField(String fieldName, String family, String qualifier) {
+      byte[] familyBytes = Bytes.toBytes(family);
+      byte[] qualifierBytes = qualifier == null ? null : 
+        Bytes.toBytes(qualifier);
+      
+      HBaseColumn column = new HBaseColumn(familyBytes, qualifierBytes);
+      columnMap.put(fieldName, column);
+    }
+    
+
+    private HColumnDescriptor getOrCreateFamily(String familyName,
+        Map<String, HColumnDescriptor> families) {
+      HColumnDescriptor columnDescriptor = families.get(familyName);
+      if (columnDescriptor == null) {
+        columnDescriptor = new HColumnDescriptor(familyName);
+        families.put(familyName, columnDescriptor);
+      }
+      return columnDescriptor;
+    }
+
+    private Map<String, HColumnDescriptor> getOrCreateFamilies(String tableName) {
+      Map<String, HColumnDescriptor> families;
+      families = tableToFamilies.get(tableName);
+      if (families == null) {
+        families = new HashMap<String, HColumnDescriptor>();
+        tableToFamilies.put(tableName, families);
+      }
+      return families;
+    }
+    
+    public void renameTable(String oldName, String newName) {
+      Map<String, HColumnDescriptor> families = tableToFamilies.remove(oldName);
+      if (families == null) throw new IllegalArgumentException(oldName + " does not exist");
+      tableToFamilies.put(newName, families);
+    }
+    
+    /**
+     * @return A newly constructed mapping.
+     */
+    public HBaseMapping build() {
+      if (tableName == null) throw new IllegalStateException("tableName is not specified");
+      
+      Map<String, HColumnDescriptor> families = tableToFamilies.get(tableName);
+      if (families == null) throw new IllegalStateException("no families for table " + tableName);
+      
+      HTableDescriptor tableDescriptors = new HTableDescriptor(tableName);
+      for (HColumnDescriptor desc : families.values()) {
+        tableDescriptors.addFamily(desc);
+      }
+      return new HBaseMapping(tableDescriptors, columnMap);
+    }
+  }
+
+}
\ No newline at end of file
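The builder above follows a mutable-builder / immutable-product pattern: `HBaseMappingBuilder` accumulates table and column state, and `build()` hands the finished descriptor and column map to the thread-safe `HBaseMapping`. A minimal sketch of the same pattern without HBase dependencies (class and field names here are illustrative, not part of Gora; note the sketch adds a defensive copy in the constructor, whereas `HBaseMapping` itself stores the map reference it is handed):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Immutable product: safe to share between threads once built.
final class Mapping {
    private final String tableName;
    private final Map<String, String> columns;

    Mapping(String tableName, Map<String, String> columns) {
        this.tableName = tableName;
        // Defensive copy so later builder mutations cannot leak in.
        this.columns = Collections.unmodifiableMap(new HashMap<>(columns));
    }

    String getTableName() { return tableName; }
    String getColumn(String field) { return columns.get(field); }
}

public class MappingBuilderDemo {
    public static void main(String[] args) {
        // Mutable builder state, analogous to HBaseMappingBuilder.columnMap.
        Map<String, String> cols = new HashMap<>();
        cols.put("content", "info:content");

        Mapping m = new Mapping("webpage", cols);

        // Mutating the builder state after build() does not affect the product.
        cols.put("status", "info:status");

        System.out.println(m.getColumn("content")); // info:content
        System.out.println(m.getColumn("status"));  // null
    }
}
```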
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseStore.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseStore.java
new file mode 100644
index 0000000..7cae7ab
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseStore.java
@@ -0,0 +1,610 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.hbase.store;
+
+import static org.apache.gora.hbase.util.HBaseByteInterface.fromBytes;
+import static org.apache.gora.hbase.util.HBaseByteInterface.toBytes;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.NavigableMap;
+import java.util.Properties;
+import java.util.Set;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.util.Utf8;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.hbase.query.HBaseGetResult;
+import org.apache.gora.hbase.query.HBaseQuery;
+import org.apache.gora.hbase.query.HBaseScannerResult;
+import org.apache.gora.hbase.store.HBaseMapping.HBaseMappingBuilder;
+import org.apache.gora.hbase.util.HBaseByteInterface;
+import org.apache.gora.persistency.ListGenericArray;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.State;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.StatefulHashMap;
+import org.apache.gora.persistency.StatefulMap;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.PartitionQueryImpl;
+import org.apache.gora.store.impl.DataStoreBase;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.jdom.Document;
+import org.jdom.Element;
+import org.jdom.input.SAXBuilder;
+
+/**
+ * DataStore for HBase. Thread safe.
+ *
+ */
+public class HBaseStore<K, T extends Persistent> extends DataStoreBase<K, T>
+implements Configurable {
+
+  public static final Logger LOG = LoggerFactory.getLogger(HBaseStore.class);
+
+  public static final String PARSE_MAPPING_FILE_KEY = "gora.hbase.mapping.file";
+
+  @Deprecated
+  private static final String DEPRECATED_MAPPING_FILE = "hbase-mapping.xml";
+  public static final String DEFAULT_MAPPING_FILE = "gora-hbase-mapping.xml";
+
+  private volatile HBaseAdmin admin;
+
+  private volatile HBaseTableConnection table;
+
+  private final boolean autoCreateSchema = true;
+
+  private volatile HBaseMapping mapping;
+
+  public HBaseStore()  {
+  }
+
+  @Override
+  public void initialize(Class<K> keyClass, Class<T> persistentClass,
+      Properties properties) throws IOException {
+    super.initialize(keyClass, persistentClass, properties);
+    this.conf = HBaseConfiguration.create(getConf());
+
+    admin = new HBaseAdmin(this.conf);
+
+    try {
+      mapping = readMapping(getConf().get(PARSE_MAPPING_FILE_KEY, DEFAULT_MAPPING_FILE));
+    } catch (FileNotFoundException ex) {
+      try {
+        mapping = readMapping(getConf().get(PARSE_MAPPING_FILE_KEY, DEPRECATED_MAPPING_FILE));
+        LOG.warn(DEPRECATED_MAPPING_FILE + " is deprecated, please rename the file to "
+            + DEFAULT_MAPPING_FILE);
+      } catch (FileNotFoundException ex1) {
+        throw ex; //throw the original exception
+      } catch (Exception ex1) {
+        LOG.warn(DEPRECATED_MAPPING_FILE + " is deprecated, please rename the file to "
+            + DEFAULT_MAPPING_FILE);
+        throw new RuntimeException(ex1);
+      }
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+    if(autoCreateSchema) {
+      createSchema();
+    }
+
+    boolean autoflush = this.conf.getBoolean("hbase.client.autoflush.default", false);
+    table = new HBaseTableConnection(getConf(), getSchemaName(), autoflush);
+  }
+
+  @Override
+  public String getSchemaName() {
+    //return the name of this table
+    return mapping.getTableName();
+  }
+
+  @Override
+  public void createSchema() throws IOException {
+    if(schemaExists()) {
+      return;
+    }
+    HTableDescriptor tableDesc = mapping.getTable();
+
+    admin.createTable(tableDesc);
+  }
+
+  @Override
+  public void deleteSchema() throws IOException {
+    if(!schemaExists()) {
+      return;
+    }
+    admin.disableTable(getSchemaName());
+    admin.deleteTable(getSchemaName());
+  }
+
+  @Override
+  public boolean schemaExists() throws IOException {
+    return admin.tableExists(mapping.getTableName());
+  }
+
+  @Override
+  public T get(K key, String[] fields) throws IOException {
+    fields = getFieldsToQuery(fields);
+    Get get = new Get(toBytes(key));
+    addFields(get, fields);
+    Result result = table.get(get);
+    return newInstance(result, fields);
+  }
+
+  @SuppressWarnings({ "unchecked", "rawtypes" })
+  @Override
+  public void put(K key, T persistent) throws IOException {
+    Schema schema = persistent.getSchema();
+    StateManager stateManager = persistent.getStateManager();
+    byte[] keyRaw = toBytes(key);
+    Put put = new Put(keyRaw);
+    Delete delete = new Delete(keyRaw);
+    boolean hasPuts = false;
+    boolean hasDeletes = false;
+    Iterator<Field> iter = schema.getFields().iterator();
+    for (int i = 0; iter.hasNext(); i++) {
+      Field field = iter.next();
+      if (!stateManager.isDirty(persistent, i)) {
+        continue;
+      }
+      Type type = field.schema().getType();
+      Object o = persistent.get(i);
+      HBaseColumn hcol = mapping.getColumn(field.name());
+      switch(type) {
+        case MAP:
+          if(o instanceof StatefulMap) {
+            StatefulHashMap<Utf8, ?> map = (StatefulHashMap<Utf8, ?>) o;
+            for (Entry<Utf8, State> e : map.states().entrySet()) {
+              Utf8 mapKey = e.getKey();
+              switch (e.getValue()) {
+                case DIRTY:
+                  byte[] qual = Bytes.toBytes(mapKey.toString());
+                  byte[] val = toBytes(map.get(mapKey), field.schema().getValueType());
+                  put.add(hcol.getFamily(), qual, val);
+                  hasPuts = true;
+                  break;
+                case DELETED:
+                  qual = Bytes.toBytes(mapKey.toString());
+                  hasDeletes = true;
+                  delete.deleteColumn(hcol.getFamily(), qual);
+                  break;
+              }
+            }
+          } else {
+            Set<Map.Entry> set = ((Map)o).entrySet();
+            for(Entry entry: set) {
+              byte[] qual = toBytes(entry.getKey());
+              byte[] val = toBytes(entry.getValue());
+              put.add(hcol.getFamily(), qual, val);
+              hasPuts = true;
+            }
+          }
+          break;
+        case ARRAY:
+          if(o instanceof GenericArray) {
+            GenericArray arr = (GenericArray) o;
+            int j=0;
+            for(Object item : arr) {
+              byte[] val = toBytes(item);
+              put.add(hcol.getFamily(), Bytes.toBytes(j++), val);
+              hasPuts = true;
+            }
+          }
+          break;
+        default:
+          put.add(hcol.getFamily(), hcol.getQualifier(), toBytes(o, field.schema()));
+          hasPuts = true;
+          break;
+      }
+    }
+    if (hasPuts) {
+      table.put(put);
+    }
+    if (hasDeletes) {
+      table.delete(delete);
+    }
+  }
+
+  public void delete(T obj) {
+    throw new UnsupportedOperationException("delete(T) is not implemented yet");
+  }
+
+  /**
+   * Deletes the object with the given key.
+   * @return always true
+   */
+  @Override
+  public boolean delete(K key) throws IOException {
+    table.delete(new Delete(toBytes(key)));
+    //HBase does not return success information and executing a get for
+    //success is a bit costly
+    return true;
+  }
+
+  @Override
+  public long deleteByQuery(Query<K, T> query) throws IOException {
+
+    String[] fields = getFieldsToQuery(query.getFields());
+    //find whether all fields are queried, which means that complete
+    //rows will be deleted
+    boolean isAllFields = Arrays.equals(fields
+        , getBeanFactory().getCachedPersistent().getFields());
+
+    org.apache.gora.query.Result<K, T> result = query.execute();
+
+    ArrayList<Delete> deletes = new ArrayList<Delete>();
+    while(result.next()) {
+      Delete delete = new Delete(toBytes(result.getKey()));
+      deletes.add(delete);
+      if(!isAllFields) {
+        addFields(delete, query);
+      }
+    }
+    //TODO: delete by timestamp, etc
+
+    table.delete(deletes);
+
+    return deletes.size();
+  }
+
+  @Override
+  public void flush() throws IOException {
+    table.flushCommits();
+  }
+
+  @Override
+  public Query<K, T> newQuery() {
+    return new HBaseQuery<K, T>(this);
+  }
+
+  @Override
+  public List<PartitionQuery<K, T>> getPartitions(Query<K, T> query)
+      throws IOException {
+
+    // taken from o.a.h.hbase.mapreduce.TableInputFormatBase
+    if (table == null) {
+      throw new IOException("No table was provided.");
+    }
+    Pair<byte[][], byte[][]> keys = table.getStartEndKeys();
+    if (keys == null || keys.getFirst() == null ||
+        keys.getFirst().length == 0) {
+      throw new IOException("Expecting at least one region.");
+    }
+    List<PartitionQuery<K,T>> partitions = new ArrayList<PartitionQuery<K,T>>(keys.getFirst().length);
+    for (int i = 0; i < keys.getFirst().length; i++) {
+      String regionLocation = table.getRegionLocation(keys.getFirst()[i])
+          .getServerAddress().getHostname();
+      byte[] startRow = query.getStartKey() != null ? toBytes(query.getStartKey())
+          : HConstants.EMPTY_START_ROW;
+      byte[] stopRow = query.getEndKey() != null ? toBytes(query.getEndKey())
+          : HConstants.EMPTY_END_ROW;
+
+      // determine if the given start and stop keys fall into the region
+      if ((startRow.length == 0 || keys.getSecond()[i].length == 0 ||
+          Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) &&
+          (stopRow.length == 0 ||
+              Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0)) {
+
+        byte[] splitStart = startRow.length == 0 || 
+            Bytes.compareTo(keys.getFirst()[i], startRow) >= 0 ? 
+            keys.getFirst()[i] : startRow;
+
+        byte[] splitStop = (stopRow.length == 0 || 
+            Bytes.compareTo(keys.getSecond()[i], stopRow) <= 0) && 
+            keys.getSecond()[i].length > 0 ? keys.getSecond()[i] : stopRow;
+
+        K startKey = Arrays.equals(HConstants.EMPTY_START_ROW, splitStart) ?
+            null : HBaseByteInterface.fromBytes(keyClass, splitStart);
+        K endKey = Arrays.equals(HConstants.EMPTY_END_ROW, splitStop) ?
+            null : HBaseByteInterface.fromBytes(keyClass, splitStop);
+
+        PartitionQuery<K, T> partition = new PartitionQueryImpl<K, T>(
+            query, startKey, endKey, regionLocation);
+
+        partitions.add(partition);
+      }
+    }
+    return partitions;
+  }
+
+  @Override
+  public org.apache.gora.query.Result<K, T> execute(Query<K, T> query)
+      throws IOException {
+
+    //check if query.fields is null
+    query.setFields(getFieldsToQuery(query.getFields()));
+
+    if(query.getStartKey() != null && query.getStartKey().equals(
+        query.getEndKey())) {
+      Get get = new Get(toBytes(query.getStartKey()));
+      addFields(get, query.getFields());
+      addTimeRange(get, query);
+      Result result = table.get(get);
+      return new HBaseGetResult<K,T>(this, query, result);
+    } else {
+      ResultScanner scanner = createScanner(query);
+
+      org.apache.gora.query.Result<K,T> result =
+          new HBaseScannerResult<K,T>(this, query, scanner);
+
+      return result;
+    }
+  }
+
+  public ResultScanner createScanner(Query<K, T> query)
+  throws IOException {
+    final Scan scan = new Scan();
+    if (query.getStartKey() != null) {
+      scan.setStartRow(toBytes(query.getStartKey()));
+    }
+    if (query.getEndKey() != null) {
+      scan.setStopRow(toBytes(query.getEndKey()));
+    }
+    addFields(scan, query);
+
+    return table.getScanner(scan);
+  }
+
+  private void addFields(Get get, String[] fieldNames) {
+    for (String f : fieldNames) {
+      HBaseColumn col = mapping.getColumn(f);
+      Schema fieldSchema = fieldMap.get(f).schema();
+
+      switch (fieldSchema.getType()) {
+        case MAP:
+        case ARRAY:
+          get.addFamily(col.family); break;
+        default:
+          get.addColumn(col.family, col.qualifier); break;
+      }
+    }
+  }
+
+  private void addFields(Scan scan, Query<K,T> query)
+  throws IOException {
+    String[] fields = query.getFields();
+    for (String f : fields) {
+      HBaseColumn col = mapping.getColumn(f);
+      Schema fieldSchema = fieldMap.get(f).schema();
+      switch (fieldSchema.getType()) {
+        case MAP:
+        case ARRAY:
+          scan.addFamily(col.family); break;
+        default:
+          scan.addColumn(col.family, col.qualifier); break;
+      }
+    }
+  }
+
+  //TODO: HBase Get, Scan, Delete should extend some common interface with addFamily, etc
+  private void addFields(Delete delete, Query<K,T> query)
+    throws IOException {
+    String[] fields = query.getFields();
+    for (String f : fields) {
+      HBaseColumn col = mapping.getColumn(f);
+      Schema fieldSchema = fieldMap.get(f).schema();
+      switch (fieldSchema.getType()) {
+        case MAP:
+        case ARRAY:
+          delete.deleteFamily(col.family); break;
+        default:
+          delete.deleteColumn(col.family, col.qualifier); break;
+      }
+    }
+  }
+
+  private void addTimeRange(Get get, Query<K, T> query) throws IOException {
+    if(query.getStartTime() > 0 || query.getEndTime() > 0) {
+      if(query.getStartTime() == query.getEndTime()) {
+        get.setTimeStamp(query.getStartTime());
+      } else {
+        long startTime = query.getStartTime() > 0 ? query.getStartTime() : 0;
+        long endTime = query.getEndTime() > 0 ? query.getEndTime() : Long.MAX_VALUE;
+        get.setTimeRange(startTime, endTime);
+      }
+    }
+  }
+
+  @SuppressWarnings({ "unchecked", "rawtypes" })
+  public T newInstance(Result result, String[] fields)
+  throws IOException {
+    if(result == null || result.isEmpty())
+      return null;
+
+    T persistent = newPersistent();
+    StateManager stateManager = persistent.getStateManager();
+    for (String f : fields) {
+      HBaseColumn col = mapping.getColumn(f);
+      Field field = fieldMap.get(f);
+      Schema fieldSchema = field.schema();
+      switch(fieldSchema.getType()) {
+        case MAP:
+          NavigableMap<byte[], byte[]> qualMap =
+            result.getNoVersionMap().get(col.getFamily());
+          if (qualMap == null) {
+            continue;
+          }
+          Schema valueSchema = fieldSchema.getValueType();
+          Map map = new HashMap();
+          for (Entry<byte[], byte[]> e : qualMap.entrySet()) {
+            map.put(new Utf8(Bytes.toString(e.getKey())),
+                fromBytes(valueSchema, e.getValue()));
+          }
+          setField(persistent, field, map);
+          break;
+        case ARRAY:
+          qualMap = result.getFamilyMap(col.getFamily());
+          if (qualMap == null) {
+            continue;
+          }
+          valueSchema = fieldSchema.getElementType();
+          ArrayList arrayList = new ArrayList();
+          for (Entry<byte[], byte[]> e : qualMap.entrySet()) {
+            arrayList.add(fromBytes(valueSchema, e.getValue()));
+          }
+          ListGenericArray arr = new ListGenericArray(fieldSchema, arrayList);
+          setField(persistent, field, arr);
+          break;
+        default:
+          byte[] val =
+            result.getValue(col.getFamily(), col.getQualifier());
+          if (val == null) {
+            continue;
+          }
+          setField(persistent, field, val);
+          break;
+      }
+    }
+    stateManager.clearDirty(persistent);
+    return persistent;
+  }
+
+  @SuppressWarnings({ "unchecked", "rawtypes" })
+  private void setField(T persistent, Field field, Map map) {
+    persistent.put(field.pos(), new StatefulHashMap(map));
+  }
+
+  private void setField(T persistent, Field field, byte[] val)
+  throws IOException {
+    persistent.put(field.pos(), fromBytes(field.schema(), val));
+  }
+
+  @SuppressWarnings("rawtypes")
+  private void setField(T persistent, Field field, GenericArray list) {
+    persistent.put(field.pos(), list);
+  }
+
+  @SuppressWarnings("unchecked")
+  private HBaseMapping readMapping(String filename) throws IOException {
+
+    HBaseMappingBuilder mappingBuilder = new HBaseMappingBuilder();
+
+    try {
+      SAXBuilder builder = new SAXBuilder();
+      java.io.InputStream stream = getClass().getClassLoader()
+          .getResourceAsStream(filename);
+      if (stream == null) {
+        //without this check a missing resource surfaces as a NullPointerException
+        //instead of the FileNotFoundException that initialize() relies on for
+        //falling back to the deprecated mapping file name
+        throw new FileNotFoundException("Mapping file '" + filename
+            + "' could not be found in the classpath");
+      }
+      Document doc = builder.build(stream);
+      Element root = doc.getRootElement();
+
+      List<Element> tableElements = root.getChildren("table");
+      for(Element tableElement : tableElements) {
+        String tableName = tableElement.getAttributeValue("name");
+
+        List<Element> fieldElements = tableElement.getChildren("family");
+        for(Element fieldElement : fieldElements) {
+          String familyName  = fieldElement.getAttributeValue("name");
+          String compression = fieldElement.getAttributeValue("compression");
+          String blockCache  = fieldElement.getAttributeValue("blockCache");
+          String blockSize   = fieldElement.getAttributeValue("blockSize");
+          String bloomFilter = fieldElement.getAttributeValue("bloomFilter");
+          String maxVersions = fieldElement.getAttributeValue("maxVersions");
+          String timeToLive  = fieldElement.getAttributeValue("timeToLive");
+          String inMemory    = fieldElement.getAttributeValue("inMemory");
+          
+          mappingBuilder.addFamilyProps(tableName, familyName, compression, 
+              blockCache, blockSize, bloomFilter, maxVersions, timeToLive, 
+              inMemory);
+        }
+      }
+
+      List<Element> classElements = root.getChildren("class");
+      for(Element classElement: classElements) {
+        if(classElement.getAttributeValue("keyClass").equals(
+            keyClass.getCanonicalName())
+            && classElement.getAttributeValue("name").equals(
+                persistentClass.getCanonicalName())) {
+
+          String tableNameFromMapping = classElement.getAttributeValue("table");
+          String tableName = getSchemaName(tableNameFromMapping, persistentClass);
+          
+          //tableNameFromMapping could be null here
+          if (!tableName.equals(tableNameFromMapping)) {
+            LOG.info("Keyclass and nameclass match but table names differ: "
+                + "mapping file schema is '" + tableNameFromMapping
+                + "' vs actual schema '" + tableName + "', assuming they are the same.");
+            if (tableNameFromMapping != null) {
+              mappingBuilder.renameTable(tableNameFromMapping, tableName);
+            }
+          }
+          mappingBuilder.setTableName(tableName);
+
+          List<Element> fields = classElement.getChildren("field");
+          for(Element field:fields) {
+            String fieldName =  field.getAttributeValue("name");
+            String family =  field.getAttributeValue("family");
+            String qualifier = field.getAttributeValue("qualifier");
+            mappingBuilder.addField(fieldName, family, qualifier);
+            mappingBuilder.addColumnFamily(tableName, family);
+          }
+          
+          //we found a matching key and value class definition,
+          //do not continue on other class definitions
+          break;
+        }
+      }
+    } catch(IOException ex) {
+      throw ex;
+    } catch(Exception ex) {
+      throw new IOException(ex);
+    }
+
+    return mappingBuilder.build();
+  }
+
+  @Override
+  public void close() throws IOException {
+    table.close();
+  }
+
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    this.conf = conf;
+  }
+
+}
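`readMapping` above walks `<table>`/`<family>` elements for column-family properties and `<class>`/`<field>` elements for the field-to-column mapping. A sketch of a `gora-hbase-mapping.xml` consistent with the attribute names the parser reads (the table, class, and field names here are purely illustrative; the parser does not check the root element's name, though `gora-orm` is the conventional one):

```xml
<gora-orm>
  <table name="webpage">
    <family name="info" maxVersions="1" compression="GZ" inMemory="false"/>
  </table>
  <class name="org.example.WebPage" keyClass="java.lang.String" table="webpage">
    <field name="content" family="info" qualifier="cnt"/>
    <field name="outlinks" family="info"/>
  </class>
</gora-orm>
```

A `<field>` without a `qualifier` attribute (like `outlinks` above) yields an `HBaseColumn` with a null qualifier, which the store handles by addressing the whole family for map and array types.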
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseTableConnection.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseTableConnection.java
new file mode 100644
index 0000000..1e1a7bc
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/store/HBaseTableConnection.java
@@ -0,0 +1,257 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.hbase.store;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowLock;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+
+/**
+ * Thread-safe implementation to connect to an HBase table.
+ *
+ */
+public class HBaseTableConnection implements HTableInterface {
+  /*
+   * The current implementation uses ThreadLocal HTable instances. It keeps
+   * track of the floating instances in order to correctly flush and close
+   * the connection when it is closed. HBase itself provides a utility called
+   * HTablePool for maintaining a pool of tables, but there are still some
+   * drawbacks that are only solved in later releases.
+   * 
+   */
+  
+  private final Configuration conf;
+  private final ThreadLocal<HTable> tables;
+  private final BlockingQueue<HTable> pool = new LinkedBlockingQueue<HTable>();
+  private final boolean autoflush;
+  private final String tableName;
+  
+  /**
+   * Instantiates a new connection.
+   * 
+   * @param conf the HBase configuration to use
+   * @param tableName the name of the table to connect to
+   * @param autoflush whether auto-flush is enabled on the underlying tables
+   * @throws IOException if the connection cannot be established
+   */
+  public HBaseTableConnection(Configuration conf, String tableName, 
+      boolean autoflush) throws IOException {
+    this.conf = conf;
+    this.tables = new ThreadLocal<HTable>();
+    this.tableName = tableName;
+    this.autoflush = autoflush;
+  }
+  
+  private HTable getTable() throws IOException {
+    HTable table = tables.get();
+    if (table == null) {
+      table = new HTable(conf, tableName) {
+        @Override
+        public synchronized void flushCommits() throws IOException {
+          super.flushCommits();
+        }
+      };
+      table.setAutoFlush(autoflush);
+      pool.add(table); //keep track
+      tables.set(table);
+    }
+    return table;
+  }
+  
+  @Override
+  public void close() throws IOException {
+    // Flush and close all instances.
+    // (As an extra safeguard one might employ a shared flag, e.g. 'closed',
+    //  to prevent further table creation, but for now we assume that once
+    //  close() is called, clients no longer use this connection.)
+    for (HTable table : pool) {
+      table.flushCommits();
+      table.close();
+    }
+  }
+
+  @Override
+  public byte[] getTableName() {
+    return Bytes.toBytes(tableName);
+  }
+
+  @Override
+  public Configuration getConfiguration() {
+    return conf;
+  }
+
+  @Override
+  public boolean isAutoFlush() {
+    return autoflush;
+  }
+
+  /**
+   * getStartEndKeys provided by {@link HTable} but not {@link HTableInterface}.
+   * @see HTable#getStartEndKeys()
+   */
+  public Pair<byte[][], byte[][]> getStartEndKeys() throws IOException {
+    return getTable().getStartEndKeys();
+  }
+  /**
+   * getRegionLocation provided by {@link HTable} but not 
+   * {@link HTableInterface}.
+   * @see HTable#getRegionLocation(byte[])
+   */
+  public HRegionLocation getRegionLocation(final byte[] bs) throws IOException {
+    return getTable().getRegionLocation(bs);
+  }
+
+  @Override
+  public HTableDescriptor getTableDescriptor() throws IOException {
+    return getTable().getTableDescriptor();
+  }
+
+  @Override
+  public boolean exists(Get get) throws IOException {
+    return getTable().exists(get);
+  }
+
+  @Override
+  public void batch(List<Row> actions, Object[] results) throws IOException,
+      InterruptedException {
+    getTable().batch(actions, results);
+  }
+
+  @Override
+  public Object[] batch(List<Row> actions) throws IOException,
+      InterruptedException {
+    return getTable().batch(actions);
+  }
+
+  @Override
+  public Result get(Get get) throws IOException {
+    return getTable().get(get);
+  }
+
+  @Override
+  public Result[] get(List<Get> gets) throws IOException {
+    return getTable().get(gets);
+  }
+
+  @Override
+  public Result getRowOrBefore(byte[] row, byte[] family) throws IOException {
+    return getTable().getRowOrBefore(row, family);
+  }
+
+  @Override
+  public ResultScanner getScanner(Scan scan) throws IOException {
+    return getTable().getScanner(scan);
+  }
+
+  @Override
+  public ResultScanner getScanner(byte[] family) throws IOException {
+    return getTable().getScanner(family);
+  }
+
+  @Override
+  public ResultScanner getScanner(byte[] family, byte[] qualifier)
+      throws IOException {
+    return getTable().getScanner(family, qualifier);
+  }
+
+  @Override
+  public void put(Put put) throws IOException {
+    getTable().put(put);
+  }
+
+  @Override
+  public void put(List<Put> puts) throws IOException {
+    getTable().put(puts);
+  }
+
+  @Override
+  public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier,
+      byte[] value, Put put) throws IOException {
+    return getTable().checkAndPut(row, family, qualifier, value, put);
+  }
+
+  @Override
+  public void delete(Delete delete) throws IOException {
+    getTable().delete(delete);
+  }
+
+  @Override
+  public void delete(List<Delete> deletes) throws IOException {
+    getTable().delete(deletes);
+  }
+
+  @Override
+  public boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier,
+      byte[] value, Delete delete) throws IOException {
+    return getTable().checkAndDelete(row, family, qualifier, value, delete);
+  }
+
+  @Override
+  public Result increment(Increment increment) throws IOException {
+    return getTable().increment(increment);
+  }
+
+  @Override
+  public long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
+      long amount) throws IOException {
+    return getTable().incrementColumnValue(row, family, qualifier, amount);
+  }
+
+  @Override
+  public long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
+      long amount, boolean writeToWAL) throws IOException {
+    return getTable().incrementColumnValue(row, family, qualifier, amount,
+        writeToWAL);
+  }
+
+  @Override
+  public void flushCommits() throws IOException {
+    for (HTable table : pool) {
+      table.flushCommits();
+    }
+  }
+
+  @Override
+  public RowLock lockRow(byte[] row) throws IOException {
+    return getTable().lockRow(row);
+  }
+
+  @Override
+  public void unlockRow(RowLock rl) throws IOException {
+    getTable().unlockRow(rl);
+  }
+}
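The class above combines a ThreadLocal (so each thread gets its own non-thread-safe HTable) with a shared queue that tracks every instance, letting a single thread flush and close them all. A minimal self-contained sketch of that pattern, using a hypothetical `Resource` type in place of HTable:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Sketch of the per-thread pooling pattern used by HBaseTableConnection:
 * each thread lazily creates its own instance via a ThreadLocal, while a
 * shared concurrent queue keeps track of every instance so that close()
 * can shut them all down from one thread. Resource is a stand-in type.
 */
public class PerThreadPool {
  static class Resource {
    volatile boolean closed = false;
    void close() { closed = true; }
  }

  final ThreadLocal<Resource> local = new ThreadLocal<>();
  final Queue<Resource> pool = new ConcurrentLinkedQueue<>();

  Resource get() {
    Resource r = local.get();
    if (r == null) {   // first use on this thread
      r = new Resource();
      pool.add(r);     // keep track for close()
      local.set(r);
    }
    return r;
  }

  void close() {
    for (Resource r : pool) r.close();  // close every thread's instance
  }

  public static void main(String[] args) throws InterruptedException {
    PerThreadPool p = new PerThreadPool();
    Resource main = p.get();
    Thread t = new Thread(p::get);  // second thread gets its own instance
    t.start();
    t.join();
    System.out.println(p.pool.size());  // two threads -> two instances
    p.close();
    System.out.println(main.closed);
  }
}
```

As in the original, the sketch assumes clients stop calling get() once close() is invoked; a shared `closed` flag would be needed to enforce that.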
diff --git a/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/util/HBaseByteInterface.java b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/util/HBaseByteInterface.java
new file mode 100644
index 0000000..c3237b1
--- /dev/null
+++ b/trunk/gora-hbase/src/main/java/org/apache/gora/hbase/util/HBaseByteInterface.java
@@ -0,0 +1,209 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.util;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.io.BinaryDecoder;
+import org.apache.avro.io.BinaryEncoder;
+import org.apache.avro.io.DecoderFactory;
+import org.apache.avro.specific.SpecificDatumReader;
+import org.apache.avro.specific.SpecificDatumWriter;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.util.AvroUtils;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Contains utility methods for byte[] <-> field
+ * conversions.
+ */
+public class HBaseByteInterface {
+  /**
+   * Threadlocals maintaining reusable binary decoders and encoders.
+   */
+  public static final ThreadLocal<BinaryDecoder> decoders =
+      new ThreadLocal<BinaryDecoder>();
+  public static final ThreadLocal<BinaryEncoderWithStream> encoders =
+      new ThreadLocal<BinaryEncoderWithStream>();
+  
+  /**
+   * A BinaryEncoder that exposes its OutputStream so that it can be reset
+   * every time. (This is a workaround to reuse the BinaryEncoder and its
+   * buffers, normally provided by EncoderFactory, but that class does not
+   * yet exist in the Avro version used here.)
+   */
+  public static final class BinaryEncoderWithStream extends BinaryEncoder {
+    public BinaryEncoderWithStream(OutputStream out) {
+      super(out);
+    }
+    
+    protected OutputStream getOut() {
+      return out;
+    }
+  }
+  
+  /*
+   * Create thread-local maps for the datum readers and writers, because
+   * they are not thread safe, at least not before Avro 1.4.0 (see AVRO-650).
+   * Once they are thread safe, it will be possible to maintain a single
+   * reader/writer pair per schema, instead of one per thread.
+   */
+  
+  public static final ThreadLocal<Map<String, SpecificDatumReader<?>>> 
+    readerMaps = new ThreadLocal<Map<String, SpecificDatumReader<?>>>() {
+      protected Map<String,SpecificDatumReader<?>> initialValue() {
+        return new HashMap<String, SpecificDatumReader<?>>();
+      };
+  };
+  
+  public static final ThreadLocal<Map<String, SpecificDatumWriter<?>>> 
+    writerMaps = new ThreadLocal<Map<String, SpecificDatumWriter<?>>>() {
+      protected Map<String,SpecificDatumWriter<?>> initialValue() {
+        return new HashMap<String, SpecificDatumWriter<?>>();
+      };
+  };
+
+
+  @SuppressWarnings("rawtypes")
+  public static Object fromBytes(Schema schema, byte[] val) throws IOException {
+    Type type = schema.getType();
+    switch (type) {
+    case ENUM:    return AvroUtils.getEnumValue(schema, val[0]);
+    case STRING:  return new Utf8(Bytes.toString(val));
+    case BYTES:   return ByteBuffer.wrap(val);
+    case INT:     return Bytes.toInt(val);
+    case LONG:    return Bytes.toLong(val);
+    case FLOAT:   return Bytes.toFloat(val);
+    case DOUBLE:  return Bytes.toDouble(val);
+    case BOOLEAN: return val[0] != 0;
+    case RECORD:
+      Map<String, SpecificDatumReader<?>> readerMap = readerMaps.get();
+      SpecificDatumReader<?> reader = readerMap.get(schema.getFullName());
+      if (reader == null) {
+        reader = new SpecificDatumReader(schema);     
+        readerMap.put(schema.getFullName(), reader);
+      }
+      
+      // initialize a decoder, possibly reusing previous one
+      BinaryDecoder decoderFromCache = decoders.get();
+      BinaryDecoder decoder=DecoderFactory.defaultFactory().
+          createBinaryDecoder(val, decoderFromCache);
+      // put in threadlocal cache if the initial get was empty
+      if (decoderFromCache==null) {
+        decoders.set(decoder);
+      }
+      
+      return reader.read(null, decoder);
+    default: throw new RuntimeException("Unknown type: "+type);
+    }
+  }
+
+  @SuppressWarnings("unchecked")
+  public static <K> K fromBytes(Class<K> clazz, byte[] val) {
+    if (clazz.equals(Byte.TYPE) || clazz.equals(Byte.class)) {
+      return (K) Byte.valueOf(val[0]);
+    } else if (clazz.equals(Boolean.TYPE) || clazz.equals(Boolean.class)) {
+      return (K) Boolean.valueOf(val[0] != 0);
+    } else if (clazz.equals(Short.TYPE) || clazz.equals(Short.class)) {
+      return (K) Short.valueOf(Bytes.toShort(val));
+    } else if (clazz.equals(Integer.TYPE) || clazz.equals(Integer.class)) {
+      return (K) Integer.valueOf(Bytes.toInt(val));
+    } else if (clazz.equals(Long.TYPE) || clazz.equals(Long.class)) {
+      return (K) Long.valueOf(Bytes.toLong(val));
+    } else if (clazz.equals(Float.TYPE) || clazz.equals(Float.class)) {
+      return (K) Float.valueOf(Bytes.toFloat(val));
+    } else if (clazz.equals(Double.TYPE) || clazz.equals(Double.class)) {
+      return (K) Double.valueOf(Bytes.toDouble(val));
+    } else if (clazz.equals(String.class)) {
+      return (K) Bytes.toString(val);
+    } else if (clazz.equals(Utf8.class)) {
+      return (K) new Utf8(Bytes.toString(val));
+    }
+    throw new RuntimeException("Can't parse data as class: " + clazz);
+  }
+
+  public static byte[] toBytes(Object o) {
+    Class<?> clazz = o.getClass();
+    if (o instanceof Enum) {
+      return new byte[] { (byte)((Enum<?>) o).ordinal() }; // encode the enum as its single-byte ordinal
+    } else if (clazz.equals(Byte.TYPE) || clazz.equals(Byte.class)) {
+      return new byte[] { (Byte) o };
+    } else if (clazz.equals(Boolean.TYPE) || clazz.equals(Boolean.class)) {
+      return new byte[] { ((Boolean) o ? (byte) 1 :(byte) 0)};
+    } else if (clazz.equals(Short.TYPE) || clazz.equals(Short.class)) {
+      return Bytes.toBytes((Short) o);
+    } else if (clazz.equals(Integer.TYPE) || clazz.equals(Integer.class)) {
+      return Bytes.toBytes((Integer) o);
+    } else if (clazz.equals(Long.TYPE) || clazz.equals(Long.class)) {
+      return Bytes.toBytes((Long) o);
+    } else if (clazz.equals(Float.TYPE) || clazz.equals(Float.class)) {
+      return Bytes.toBytes((Float) o);
+    } else if (clazz.equals(Double.TYPE) || clazz.equals(Double.class)) {
+      return Bytes.toBytes((Double) o);
+    } else if (clazz.equals(String.class)) {
+      return Bytes.toBytes((String) o);
+    } else if (clazz.equals(Utf8.class)) {
+      return ((Utf8) o).getBytes();
+    }
+    throw new RuntimeException("Can't parse data as class: " + clazz);
+  }
+
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  public static byte[] toBytes(Object o, Schema schema) throws IOException {
+    Type type = schema.getType();
+    switch (type) {
+    case STRING:  return Bytes.toBytes(((Utf8)o).toString()); // TODO: maybe ((Utf8)o).getBytes(); ?
+    case BYTES:   return ((ByteBuffer)o).array();
+    case INT:     return Bytes.toBytes((Integer)o);
+    case LONG:    return Bytes.toBytes((Long)o);
+    case FLOAT:   return Bytes.toBytes((Float)o);
+    case DOUBLE:  return Bytes.toBytes((Double)o);
+    case BOOLEAN: return (Boolean)o ? new byte[] {1} : new byte[] {0};
+    case ENUM:    return new byte[] { (byte)((Enum<?>) o).ordinal() };
+    case RECORD:
+      Map<String, SpecificDatumWriter<?>> writerMap = writerMaps.get();
+      SpecificDatumWriter writer = writerMap.get(schema.getFullName());
+      if (writer == null) {
+        writer = new SpecificDatumWriter(schema);
+        writerMap.put(schema.getFullName(),writer);
+      }
+      
+      BinaryEncoderWithStream encoder = encoders.get();
+      if (encoder == null) {
+        encoder = new BinaryEncoderWithStream(new ByteArrayOutputStream());
+        encoders.set(encoder);
+      }
+      //reset the buffers
+      ByteArrayOutputStream os = (ByteArrayOutputStream) encoder.getOut();
+      os.reset();
+      
+      writer.write(o, encoder);
+      encoder.flush();
+      return os.toByteArray();
+    default: throw new RuntimeException("Unknown type: "+type);
+    }
+  }
+}
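HBaseByteInterface maps Avro and Java primitives onto HBase's fixed-width, big-endian byte encodings via `org.apache.hadoop.hbase.util.Bytes`. A self-contained sketch of those round trips, using `java.nio.ByteBuffer` (which also defaults to big-endian) as a stand-in for the Bytes utility:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of the primitive <-> byte[] round trips that HBaseByteInterface
 * delegates to org.apache.hadoop.hbase.util.Bytes. ByteBuffer defaults to
 * big-endian byte order, matching HBase's encoding for fixed-width numerics;
 * strings are encoded as UTF-8, matching Bytes.toBytes(String).
 */
public class ByteCodec {
  static byte[] toBytes(long v)    { return ByteBuffer.allocate(8).putLong(v).array(); }
  static long   toLong(byte[] b)   { return ByteBuffer.wrap(b).getLong(); }
  static byte[] toBytes(double v)  { return ByteBuffer.allocate(8).putDouble(v).array(); }
  static double toDouble(byte[] b) { return ByteBuffer.wrap(b).getDouble(); }
  static byte[] toBytes(String s)  { return s.getBytes(StandardCharsets.UTF_8); }
  static String toString(byte[] b) { return new String(b, StandardCharsets.UTF_8); }

  public static void main(String[] args) {
    // Each value survives the encode/decode round trip.
    System.out.println(toLong(toBytes(42L)));          // 42
    System.out.println(toDouble(toBytes(3.5)));        // 3.5
    System.out.println(toString(toBytes("webpage")));  // webpage
  }
}
```

The RECORD case in the class above is different: records have no fixed-width encoding, so they go through Avro's SpecificDatumWriter/Reader and the cached binary encoder/decoder instead.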
diff --git a/trunk/gora-hbase/src/test/conf/gora-hbase-mapping.xml b/trunk/gora-hbase/src/test/conf/gora-hbase-mapping.xml
new file mode 100644
index 0000000..c242e52
--- /dev/null
+++ b/trunk/gora-hbase/src/test/conf/gora-hbase-mapping.xml
@@ -0,0 +1,53 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<gora-orm>
+
+  <table name="Employee"> <!-- optional descriptors for tables -->
+    <family name="info"/> <!-- This can also have params like compression, bloom filters -->
+  </table>
+
+  <table name="WebPage">
+    <family name="common"/>
+    <family name="content"/>
+    <family name="parsedContent"/>
+    <family name="outlinks"/>
+  </table>
+
+  <class name="org.apache.gora.examples.generated.Employee" keyClass="java.lang.String" table="Employee">
+    <field name="name" family="info" qualifier="nm"/>
+    <field name="dateOfBirth" family="info" qualifier="db"/>
+    <field name="ssn" family="info" qualifier="sn"/>
+    <field name="salary" family="info" qualifier="sl"/>
+  </class>
+
+  <class name="org.apache.gora.examples.generated.WebPage" keyClass="java.lang.String" table="WebPage">
+    <field name="url" family="common" qualifier="u"/>
+    <field name="content" family="content"/>
+    <field name="parsedContent" family="parsedContent"/>
+    <field name="outlinks" family="outlinks"/>
+    <field name="metadata" family="common" qualifier="metadata"/>
+  </class>
+
+
+  <class name="org.apache.gora.examples.generated.TokenDatum" keyClass="java.lang.String">
+    <field name="count" family="common" qualifier="count"/>
+  </class>
+
+</gora-orm>
diff --git a/trunk/gora-hbase/src/test/conf/hbase-site.xml b/trunk/gora-hbase/src/test/conf/hbase-site.xml
new file mode 100644
index 0000000..5024e85
--- /dev/null
+++ b/trunk/gora-hbase/src/test/conf/hbase-site.xml
@@ -0,0 +1,137 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>hbase.regionserver.msginterval</name>
+    <value>1000</value>
+    <description>Interval between messages from the RegionServer to HMaster
+    in milliseconds.  Default is 15. Set this value low if you want unit
+    tests to be responsive.
+    </description>
+  </property>
+  <property>
+    <name>hbase.client.pause</name>
+    <value>5000</value>
+    <description>General client pause value.  Used mostly as value to wait
+    before running a retry of a failed get, region lookup, etc.</description>
+  </property>
+  <property>
+    <name>hbase.master.meta.thread.rescanfrequency</name>
+    <value>10000</value>
+    <description>How long the HMaster sleeps (in milliseconds) between scans of
+    the root and meta tables.
+    </description>
+  </property>
+  <property>
+    <name>hbase.server.thread.wakefrequency</name>
+    <value>1000</value>
+    <description>Time to sleep in between searches for work (in milliseconds).
+    Used as sleep interval by service threads such as META scanner and log roller.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.handler.count</name>
+    <value>5</value>
+    <description>Count of RPC Server instances spun up on RegionServers.
+    The same property is used by the HMaster for the count of master handlers.
+    Default is 10.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.lease.period</name>
+    <value>6000</value>
+    <description>Length of time the master will wait before timing out a region
+    server lease. Since region servers report in every second (see above), this
+    value has been reduced so that the master will notice a dead region server
+    sooner. The default is 30 seconds.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.info.port</name>
+    <value>-1</value>
+    <description>The port for the hbase master web UI
+    Set to -1 if you do not want the info server to run.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port</name>
+    <value>-1</value>
+    <description>The port for the hbase regionserver web UI
+    Set to -1 if you do not want the info server to run.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port.auto</name>
+    <value>true</value>
+    <description>Info server auto port bind. Enables automatic port
+    search if hbase.regionserver.info.port is already in use.
+    Enabled for testing to run multiple tests on one machine.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.lease.thread.wakefrequency</name>
+    <value>3000</value>
+    <description>The interval between checks for expired region server leases.
+    This value has been reduced due to the other reduced values above so that
+    the master will notice a dead region server sooner. The default is 15 seconds.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.optionalcacheflushinterval</name>
+    <value>10000</value>
+    <description>
+    Amount of time to wait since the last time a region was flushed before
+    invoking an optional cache flush. Default 60,000.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.safemode</name>
+    <value>false</value>
+    <description>
+    Turn on/off safe mode in region server. Always on for production, always off
+    for tests.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.max.filesize</name>
+    <value>67108864</value>
+    <description>
+    Maximum desired file size for an HRegion.  If filesize exceeds
+    value + (value / 2), the HRegion is split in two.  Default: 256M.
+
+    Keep the maximum filesize small so we split more often in tests.
+    </description>
+  </property>
+  <property>
+    <name>hadoop.log.dir</name>
+    <value>${user.dir}/../logs</value>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.clientPort</name>
+    <value>21818</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The port at which the clients will connect.
+    </description>
+  </property>
+</configuration>
diff --git a/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/GoraHBaseTestDriver.java b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/GoraHBaseTestDriver.java
new file mode 100644
index 0000000..992aca4
--- /dev/null
+++ b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/GoraHBaseTestDriver.java
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase;
+
+import org.apache.gora.GoraTestDriver;
+import org.apache.gora.hbase.store.HBaseStore;
+import org.apache.hadoop.conf.Configuration;
+
+//HBase imports
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+
+/**
+ * Helper class for third-party tests using the gora-hbase backend.
+ * @see GoraTestDriver
+ */
+public class GoraHBaseTestDriver extends GoraTestDriver {
+
+  protected HBaseTestingUtility hbaseUtil;
+  protected int numServers = 1;
+  
+  public GoraHBaseTestDriver() {
+    super(HBaseStore.class);
+    hbaseUtil = new HBaseTestingUtility();
+  }
+
+  public void setNumServers(int numServers) {
+    this.numServers = numServers;
+  }
+  
+  public int getNumServers() {
+    return numServers;
+  }
+  
+  @Override
+  public void setUpClass() throws Exception {
+    super.setUpClass();
+    log.info("Starting HBase cluster");
+    hbaseUtil.startMiniCluster(numServers);
+  }
+
+  @Override
+  public void tearDownClass() throws Exception {
+    super.tearDownClass();
+    log.info("Stopping HBase cluster");
+    hbaseUtil.shutdownMiniCluster();
+  }
+  
+  @Override
+  public void setUp() throws Exception {
+    super.setUp();
+  }
+  
+  public void deleteAllTables() throws Exception {
+    HBaseAdmin admin = hbaseUtil.getHBaseAdmin();
+    for(HTableDescriptor table:admin.listTables()) {
+      admin.disableTable(table.getName());
+      admin.deleteTable(table.getName());
+    }
+  }
+  
+  public Configuration getConf() {
+    return hbaseUtil.getConfiguration();
+  }
+  
+  public HBaseTestingUtility getHbaseUtil() {
+    return hbaseUtil;
+  }
+  
+}
diff --git a/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/mapreduce/TestHBaseStoreCountQuery.java b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/mapreduce/TestHBaseStoreCountQuery.java
new file mode 100644
index 0000000..d321bea
--- /dev/null
+++ b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/mapreduce/TestHBaseStoreCountQuery.java
@@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.mapreduce;
+
+import org.apache.gora.examples.generated.TokenDatum;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.hbase.store.HBaseStore;
+import org.apache.gora.mapreduce.MapReduceTestUtils;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Tests related to {@link HBaseStore} using mapreduce.
+ */
+public class TestHBaseStoreCountQuery extends HBaseClusterTestCase{
+
+  private HBaseStore<String, WebPage> webPageStore;
+  
+  @Before
+  @Override
+  public void setUp() throws Exception {
+    super.setUp();
+    webPageStore = DataStoreFactory.getDataStore(
+        HBaseStore.class, String.class, WebPage.class, conf);
+  }
+
+  @After
+  @Override
+  public void tearDown() throws Exception {
+    webPageStore.close();
+    super.tearDown();
+  }
+  
+  @Test
+  public void testCountQuery() throws Exception {
+    MapReduceTestUtils.testCountQuery(webPageStore, conf);
+  }
+
+  public static void main(String[] args) throws Exception {
+   TestHBaseStoreCountQuery test =  new TestHBaseStoreCountQuery();
+   test.setUp();
+   test.testCountQuery();
+  }
+}
diff --git a/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/mapreduce/TestHBaseStoreWordCount.java b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/mapreduce/TestHBaseStoreWordCount.java
new file mode 100644
index 0000000..fb60124
--- /dev/null
+++ b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/mapreduce/TestHBaseStoreWordCount.java
@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.mapreduce;
+
+import org.apache.gora.examples.generated.TokenDatum;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.hbase.store.HBaseStore;
+import org.apache.gora.mapreduce.MapReduceTestUtils;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Tests related to {@link org.apache.gora.hbase.store.HBaseStore} using mapreduce.
+ */
+public class TestHBaseStoreWordCount extends HBaseClusterTestCase{
+
+  private HBaseStore<String, WebPage> webPageStore;
+  private HBaseStore<String, TokenDatum> tokenStore;
+  
+  @Before
+  @Override
+  public void setUp() throws Exception {
+    super.setUp();
+    webPageStore = DataStoreFactory.getDataStore(
+        HBaseStore.class, String.class, WebPage.class, conf);
+    tokenStore = DataStoreFactory.getDataStore(HBaseStore.class, 
+        String.class, TokenDatum.class, conf);
+  }
+
+  @After
+  @Override
+  public void tearDown() throws Exception {
+    webPageStore.close();
+    tokenStore.close();
+    super.tearDown();
+  }
+
+  @Test
+  public void testWordCount() throws Exception {
+    MapReduceTestUtils.testWordCount(conf, webPageStore, tokenStore);
+  }
+  
+  public static void main(String[] args) throws Exception {
+   TestHBaseStoreWordCount test =  new TestHBaseStoreWordCount();
+   test.setUp();
+   test.testWordCount();
+  }
+}
diff --git a/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/store/TestHBaseStore.java b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/store/TestHBaseStore.java
new file mode 100644
index 0000000..9aa62fc
--- /dev/null
+++ b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/store/TestHBaseStore.java
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.store;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import junit.framework.Assert;
+
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.hbase.GoraHBaseTestDriver;
+import org.apache.gora.hbase.store.HBaseStore;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.DataStoreTestBase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test case for HBaseStore.
+ */
+public class TestHBaseStore extends DataStoreTestBase {
+
+  private Configuration conf;
+  
+  static {
+    setTestDriver(new GoraHBaseTestDriver());
+  }
+  
+  @Override
+  public void setUp() throws Exception {
+    super.setUp();
+    conf = getTestDriver().getHbaseUtil().getConfiguration();
+  }
+    
+  @SuppressWarnings("unchecked")
+  @Override
+  protected DataStore<String, Employee> createEmployeeDataStore()
+      throws IOException {
+    return DataStoreFactory.createDataStore(HBaseStore.class, String.class, 
+        Employee.class, conf);
+  }
+
+  @SuppressWarnings("unchecked")
+  @Override
+  protected DataStore<String, WebPage> createWebPageDataStore()
+      throws IOException {
+    return DataStoreFactory.createDataStore(HBaseStore.class, String.class, 
+        WebPage.class, conf);
+  }
+
+  public GoraHBaseTestDriver getTestDriver() {
+    return (GoraHBaseTestDriver) testDriver;
+  }
+  
+  @Override
+  public void assertSchemaExists(String schemaName) throws Exception {
+    HBaseAdmin admin = getTestDriver().getHbaseUtil().getHBaseAdmin();
+    Assert.assertTrue(admin.tableExists(schemaName));
+  }
+
+  @Override
+  public void assertPutArray() throws IOException { 
+    HTable table = new HTable("WebPage");
+    Get get = new Get(Bytes.toBytes("com.example/http"));
+    org.apache.hadoop.hbase.client.Result result = table.get(get);
+    
+    Assert.assertEquals(result.getFamilyMap(Bytes.toBytes("parsedContent")).size(), 4);
+    Assert.assertTrue(Arrays.equals(result.getValue(Bytes.toBytes("parsedContent")
+        ,Bytes.toBytes(0)), Bytes.toBytes("example")));
+    
+    Assert.assertTrue(Arrays.equals(result.getValue(Bytes.toBytes("parsedContent")
+        ,Bytes.toBytes(3)), Bytes.toBytes("example.com")));
+    table.close();
+  }
+  
+  
+  @Override
+  public void assertPutBytes(byte[] contentBytes) throws IOException {    
+    HTable table = new HTable("WebPage");
+    Get get = new Get(Bytes.toBytes("com.example/http"));
+    org.apache.hadoop.hbase.client.Result result = table.get(get);
+    
+    byte[] actualBytes = result.getValue(Bytes.toBytes("content"), null);
+    Assert.assertNotNull(actualBytes);
+    Assert.assertTrue(Arrays.equals(contentBytes, actualBytes));
+    table.close();
+  }
+  
+  @Override
+  public void assertPutMap() throws IOException {
+    HTable table = new HTable("WebPage");
+    Get get = new Get(Bytes.toBytes("com.example/http"));
+    org.apache.hadoop.hbase.client.Result result = table.get(get);
+    
+    byte[] anchor2Raw = result.getValue(Bytes.toBytes("outlinks")
+        , Bytes.toBytes("http://example2.com"));
+    Assert.assertNotNull(anchor2Raw);
+    String anchor2 = Bytes.toString(anchor2Raw);
+    Assert.assertEquals("anchor2", anchor2);
+    table.close();
+  }
+
+
+  @Override
+  public void testQueryEndKey() throws IOException {
+    // Skipped: Gora treats endRow as inclusive, while HBase treats it as exclusive.
+    // TODO: raise an HBase issue to allow choosing whether endRow is inclusive or exclusive.
+  }
+
+  @Override
+  public void testQueryKeyRange() throws IOException {
+    // Skipped: Gora treats endRow as inclusive, while HBase treats it as exclusive.
+    // TODO: raise an HBase issue to allow choosing whether endRow is inclusive or exclusive.
+  }
+
+  @Override
+  public void testDeleteByQuery() throws IOException {
+    // Skipped: Gora treats endRow as inclusive, while HBase treats it as exclusive.
+    // TODO: raise an HBase issue to allow choosing whether endRow is inclusive or exclusive.
+  }
+
+  public static void main(String[] args) throws Exception {
+    TestHBaseStore test = new TestHBaseStore();
+    test.setUpClass();
+    test.setUp();
+
+    test.testQuery();
+
+    test.tearDown();
+    test.tearDownClass();
+  }
+}
diff --git a/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/util/TestHBaseByteInterface.java b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/util/TestHBaseByteInterface.java
new file mode 100644
index 0000000..18fd11a
--- /dev/null
+++ b/trunk/gora-hbase/src/test/java/org/apache/gora/hbase/util/TestHBaseByteInterface.java
@@ -0,0 +1,116 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.hbase.util;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import org.apache.avro.util.Utf8;
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.Metadata;
+import org.apache.gora.examples.generated.TokenDatum;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TestHBaseByteInterface {
+
+  private static final Random RANDOM = new Random();
+
+  @Test
+  public void testEncodingDecoding() throws Exception {
+    for (int i=0; i < 1000; i++) {
+    
+      // employee
+      Utf8 name = new Utf8("john");
+      long dateOfBirth = System.currentTimeMillis();
+      int salary = 1337;
+      Utf8 ssn = new Utf8(String.valueOf(RANDOM.nextLong()));
+      
+      Employee e = new Employee();
+      e.setName(name);
+      e.setDateOfBirth(dateOfBirth);
+      e.setSalary(salary);
+      e.setSsn(ssn);
+      
+      byte[] employeeBytes = HBaseByteInterface.toBytes(e, Employee._SCHEMA);
+      Employee e2 = (Employee) HBaseByteInterface.fromBytes(Employee._SCHEMA, 
+          employeeBytes);
+      
+      Assert.assertEquals(name, e2.getName());
+      Assert.assertEquals(dateOfBirth, e2.getDateOfBirth());
+      Assert.assertEquals(salary, e2.getSalary());
+      Assert.assertEquals(ssn, e2.getSsn());
+      
+      
+      //metadata
+      Utf8 key = new Utf8("theKey");
+      Utf8 value = new Utf8("theValue " + RANDOM.nextLong());
+      
+      Metadata m = new Metadata();
+      m.putToData(key, value);
+      
+      byte[] datumBytes = HBaseByteInterface.toBytes(m, Metadata._SCHEMA);
+      Metadata m2 = (Metadata) HBaseByteInterface.fromBytes(Metadata._SCHEMA, 
+          datumBytes);
+      
+      Assert.assertEquals(value, m2.getFromData(key));
+    }
+  }
+  
+  @Test
+  public void testEncodingDecodingMultithreaded() throws Exception {
+    // create a fixed thread pool
+    int numThreads = 8;
+    ExecutorService pool = Executors.newFixedThreadPool(numThreads);
+
+    // define a list of tasks
+    Collection<Callable<Integer>> tasks = new ArrayList<Callable<Integer>>();
+    for (int i = 0; i < numThreads; i++) {
+      tasks.add(new Callable<Integer>() {
+        @Override
+        public Integer call() {
+          try {
+            // run a sequence
+            testEncodingDecoding();
+            // everything ok, return 0
+            return 0;
+          } catch (Exception e) {
+            e.printStackTrace();
+            // this will fail the test
+            return 1;
+          }
+        }
+      });
+    }
+    // submit them at once
+    List<Future<Integer>> results = pool.invokeAll(tasks);
+
+    // check results
+    for (Future<Integer> result : results) {
+      Assert.assertEquals(0, (int) result.get());
+    }
+  }
+
+}
\ No newline at end of file
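The `invokeAll` pattern used in `testEncodingDecodingMultithreaded` above — submit N `Callable`s at once, block until all finish, then check every `Future` — can be exercised standalone. This is a dependency-free sketch, not part of the patch; the class name and the no-op task body (standing in for `testEncodingDecoding()`) are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllSketch {

  // Runs numThreads tasks concurrently; each returns 0 on success, 1 on failure,
  // mirroring the convention in TestHBaseByteInterface.
  static List<Integer> runTasks(int numThreads) {
    ExecutorService pool = Executors.newFixedThreadPool(numThreads);
    try {
      List<Callable<Integer>> tasks = new ArrayList<>();
      for (int i = 0; i < numThreads; i++) {
        tasks.add(() -> {
          try {
            // stand-in for the real work (e.g. an encode/decode round trip)
            return 0;
          } catch (RuntimeException e) {
            return 1; // any failure is surfaced as a nonzero status
          }
        });
      }
      // invokeAll blocks until every task has completed
      List<Future<Integer>> futures = pool.invokeAll(tasks);
      List<Integer> results = new ArrayList<>();
      for (Future<Integer> f : futures) {
        results.add(f.get());
      }
      return results;
    } catch (Exception e) {
      throw new RuntimeException(e);
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) {
    List<Integer> results = runTasks(8);
    for (int r : results) {
      if (r != 0) throw new AssertionError("task failed");
    }
    System.out.println("all " + results.size() + " tasks ok");
  }
}
```

Because `invokeAll` waits for completion, the per-future check afterwards never races the workers; the test only has to assert on the returned status codes.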
diff --git a/trunk/gora-sql/build.xml b/trunk/gora-sql/build.xml
new file mode 100644
index 0000000..1d29f8b
--- /dev/null
+++ b/trunk/gora-sql/build.xml
@@ -0,0 +1,24 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<project name="gora-sql" default="compile">
+  <property name="project.dir" value="${basedir}/.."/>
+
+  <import file="${project.dir}/build-common.xml"/>
+</project>
diff --git a/trunk/gora-sql/conf/.gitignore b/trunk/gora-sql/conf/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-sql/conf/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-sql/ivy/ivy.xml b/trunk/gora-sql/ivy/ivy.xml
new file mode 100644
index 0000000..980d1f6
--- /dev/null
+++ b/trunk/gora-sql/ivy/ivy.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<ivy-module version="2.0">
+    <info 
+      organisation="org.apache.gora"
+      module="gora-sql"
+      status="integration"/>
+
+  <configurations>
+    <include file="../../ivy/ivy-configurations.xml"/>
+  </configurations>
+  
+  <publications>
+    <artifact name="gora-sql" conf="compile"/>
+    <artifact name="gora-sql-test" conf="test"/>
+  </publications>
+
+  <dependencies>
+    <!-- conf="*->@" means every conf is mapped to the conf of the same name in the artifact -->
+    <dependency org="org.apache.gora" name="gora-core" rev="latest.integration" changing="true" conf="*->@"/> 
+    <dependency org="org.jdom" name="jdom" rev="1.1" conf="*->master"/>
+    <dependency org="com.healthmarketscience.sqlbuilder" name="sqlbuilder" rev="2.0.6" conf="*->default"/>
+
+    <!-- test dependencies -->
+    <dependency org="org.hsqldb" name="hsqldb" rev="2.0.0" conf="test->default"/>
+
+  </dependencies>
+    
+</ivy-module>
+
diff --git a/trunk/gora-sql/lib-ext/.gitignore b/trunk/gora-sql/lib-ext/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-sql/lib-ext/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-sql/pom.xml b/trunk/gora-sql/pom.xml
new file mode 100644
index 0000000..3f145fd
--- /dev/null
+++ b/trunk/gora-sql/pom.xml
@@ -0,0 +1,200 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+    -->
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.gora</groupId>
+        <artifactId>gora</artifactId>
+        <version>0.2.1</version>
+        <relativePath>../</relativePath>
+    </parent>
+    <artifactId>gora-sql</artifactId>
+    <packaging>bundle</packaging>
+
+    <name>Apache Gora :: SQL</name>
+    <url>http://gora.apache.org</url>
+    <description>The Apache Gora open source framework provides an in-memory data model and 
+    persistence for big data. Gora supports persisting to column stores, key value stores, 
+    document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce 
+    support.</description>
+    <inceptionYear>2010</inceptionYear>
+    <organization>
+    	<name>The Apache Software Foundation</name>
+    	<url>http://www.apache.org/</url>
+    </organization>
+    <scm>
+    	<url>http://svn.apache.org/viewvc/gora/tags/apache-gora-0.2.1/gora-sql</url>
+    	<connection>scm:svn:http://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-sql</connection>
+    	<developerConnection>scm:svn:https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-sql</developerConnection>
+    </scm>
+    <issueManagement>
+    	<system>JIRA</system>
+    	<url>https://issues.apache.org/jira/browse/GORA</url>
+    </issueManagement>
+    <ciManagement>
+    	<system>Jenkins</system>
+    	<url>https://builds.apache.org/job/Gora-trunk/</url>
+    </ciManagement>
+
+    <properties>
+        <osgi.import>*</osgi.import>
+        <osgi.export>org.apache.gora.sql*;version="${project.version}";-noimport:=true</osgi.export>
+    </properties>
+
+    <build>
+        <directory>target</directory>
+        <outputDirectory>target/classes</outputDirectory>
+        <finalName>${project.artifactId}-${project.version}</finalName>
+        <testOutputDirectory>target/test-classes</testOutputDirectory>
+        <testSourceDirectory>src/test/java</testSourceDirectory>
+        <sourceDirectory>src/main/java</sourceDirectory>
+        <testResources>
+            <testResource>
+              <directory>src/test/conf</directory>
+                <includes>
+                    <include>**/*</include>
+                </includes>
+            <!--targetPath>${project.basedir}/target/classes/</targetPath-->
+            </testResource>
+        </testResources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>build-helper-maven-plugin</artifactId>
+                <version>${build-helper-maven-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <phase>generate-sources</phase>
+                        <goals>
+                            <goal>add-source</goal>
+                        </goals>
+                        <configuration>
+                            <sources>
+                                <source>src/examples/java</source>
+                            </sources>
+                        </configuration>
+                    </execution>
+                    <execution>
+                        <id>reserve-network-port</id>
+                        <goals>
+                            <goal>reserve-network-port</goal>
+                        </goals>
+                        <phase>process-resources</phase>
+                        <configuration>
+                            <portNames>
+                                <portName>hsqldb.port</portName>
+                            </portNames>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <version>${maven-surfire-plugin.version}</version>
+                <inherited>true</inherited>
+                <configuration>
+                    <systemPropertyVariables>
+                        <hadoop.log.dir>${project.basedir}/target/test-logs/</hadoop.log.dir>
+                        <test.build.data>${project.basedir}/target/test-data/</test.build.data>
+                    </systemPropertyVariables>
+                    <forkMode>always</forkMode>
+                    <testFailureIgnore>true</testFailureIgnore>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <version>${maven-jar-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>jar</goal>
+                            <goal>test-jar</goal>
+                        </goals>
+                        <configuration>
+                        <archive>
+                            <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
+                        </archive>
+                    </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+
+    <dependencies>
+        <!-- Gora Internal Dependencies -->
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+            <classifier>tests</classifier>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>avro</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.jdom</groupId>
+            <artifactId>jdom</artifactId>
+        </dependency>
+
+        <!-- Logging Dependencies -->
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-jdk14</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+        </dependency>
+
+        <!-- Testing Dependencies -->
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-test</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.hsqldb</groupId>
+            <artifactId>hsqldb</artifactId>
+            <scope>test</scope>
+        </dependency>
+
+    </dependencies>
+
+</project>
diff --git a/trunk/gora-sql/src/examples/java/.gitignore b/trunk/gora-sql/src/examples/java/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-sql/src/examples/java/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/query/SqlQuery.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/query/SqlQuery.java
new file mode 100644
index 0000000..1f7101d
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/query/SqlQuery.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.query;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.impl.QueryBase;
+import org.apache.gora.sql.store.SqlStore;
+
+/**
+ * Query implementation covering SQL queries
+ */
+public class SqlQuery<K, T extends Persistent> extends QueryBase<K, T> {
+
+  public SqlQuery() {
+    super(null);
+  }
+
+  public SqlQuery(SqlStore<K, T> dataStore) {
+    super(dataStore);
+  }
+
+}
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/query/SqlResult.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/query/SqlResult.java
new file mode 100644
index 0000000..d031a83
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/query/SqlResult.java
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.query;
+
+import java.io.IOException;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.impl.ResultBase;
+import org.apache.gora.sql.store.SqlStore;
+import org.apache.gora.sql.util.SqlUtils;
+import org.apache.gora.store.DataStore;
+
+public class SqlResult<K, T extends Persistent> extends ResultBase<K, T> {
+
+  private ResultSet resultSet;
+  private PreparedStatement statement;
+  
+  public SqlResult(DataStore<K, T> dataStore, Query<K, T> query
+      , ResultSet resultSet, PreparedStatement statement) {
+    super(dataStore, query);
+    this.resultSet = resultSet;
+    this.statement = statement;
+  }
+
+  @Override
+  protected boolean nextInner() throws IOException {
+    try {
+      if(!resultSet.next()) { //no matching result
+        close();
+        return false;
+      }
+
+      SqlStore<K, T> sqlStore = ((SqlStore<K,T>)dataStore);
+      key = sqlStore.readPrimaryKey(resultSet);
+      persistent = sqlStore.readObject(resultSet, persistent, query.getFields());
+
+      return true;
+    } catch (Exception ex) {
+      throw new IOException(ex);
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    SqlUtils.close(resultSet);
+    SqlUtils.close(statement);
+  }
+
+  @Override
+  public float getProgress() throws IOException {
+    return 0;
+  }
+}
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/Delete.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/Delete.java
new file mode 100644
index 0000000..4cca504
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/Delete.java
@@ -0,0 +1,69 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.statement;
+
+/**
+ * A SQL DELETE statement, for generating a Prepared Statement
+ */
+//new API experiment
+public class Delete {
+
+  private String from;
+  private Where where;
+  
+  /**
+   * @return the from
+   */
+  public String from() {
+    return from;
+  }
+
+  /**
+   * @param from the from to set
+   */
+  public Delete from(String from) {
+    this.from = from;
+    return this;
+  }
+  
+  public Delete where(Where where) {
+    this.where = where;
+    return this;
+  }
+  
+  public Where where() {
+    if(where == null) {
+      where = new Where();
+    }
+    return where;
+  }
+  
+  @Override
+  public String toString() {
+    StringBuilder builder = new StringBuilder("DELETE FROM ");
+    builder.append(from);
+    
+    if(where != null && !where.isEmpty()) {
+      builder.append(" WHERE ");
+      builder.append(where.toString());
+    }
+    
+    return builder.toString();
+  }
+}
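The fluent style of the `Delete` class above (each setter returns `this`, `toString()` renders the SQL) can be shown in a self-contained sketch. This is illustrative only, not the patch's class: `Where` is simplified to a raw condition string, and the class name is hypothetical.

```java
// Minimal stand-alone sketch of the fluent DELETE builder from the patch,
// with the Where helper reduced to a plain condition string.
public class DeleteSketch {

  private String from;
  private String where;

  // each mutator returns this, so calls chain: new DeleteSketch().from(...).where(...)
  public DeleteSketch from(String from) {
    this.from = from;
    return this;
  }

  public DeleteSketch where(String where) {
    this.where = where;
    return this;
  }

  @Override
  public String toString() {
    StringBuilder builder = new StringBuilder("DELETE FROM ").append(from);
    // the WHERE clause is only emitted when a condition was set
    if (where != null && !where.isEmpty()) {
      builder.append(" WHERE ").append(where);
    }
    return builder.toString();
  }

  public static void main(String[] args) {
    String sql = new DeleteSketch().from("WebPage").where("id = ?").toString();
    System.out.println(sql); // DELETE FROM WebPage WHERE id = ?
  }
}
```

The `?` placeholder is deliberate: the rendered string is meant to be handed to `Connection.prepareStatement`, with the key bound afterwards, which is why the builder never inlines values.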
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/HSqlInsertUpdateStatement.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/HSqlInsertUpdateStatement.java
new file mode 100644
index 0000000..41a6387
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/HSqlInsertUpdateStatement.java
@@ -0,0 +1,127 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.sql.statement;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Map.Entry;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.sql.store.Column;
+import org.apache.gora.sql.store.SqlMapping;
+import org.apache.gora.sql.store.SqlStore;
+
+public class HSqlInsertUpdateStatement<K, T extends Persistent>
+extends InsertUpdateStatement<K, T> {
+
+  public HSqlInsertUpdateStatement(SqlStore<K, T> store, SqlMapping mapping,
+      String tableName) {
+    super(store, mapping, tableName);
+  }
+
+  private String getVariable(String columnName) {
+    return "v_" + columnName;
+  }
+
+  @Override
+  public PreparedStatement toStatement(Connection connection)
+  throws SQLException {
+    int i;
+
+    StringBuilder buf = new StringBuilder("MERGE INTO ");
+    buf.append(tableName).append(" USING (VALUES(");
+
+    i = 0;
+    for (Entry<String, ColumnData> e : columnMap.entrySet()) {
+      Column column = e.getValue().column;
+      if (i != 0) buf.append(",");
+      buf.append("CAST(? AS ");
+      buf.append(column.getJdbcType().toString());
+      if (column.getScaleOrLength() > 0) {
+        buf.append("(").append(column.getScaleOrLength()).append(")");
+      }
+      buf.append(")");
+      i++;
+    }
+    buf.append(")) AS vals(");
+
+    i = 0;
+    for (String columnName : columnMap.keySet()) {
+      if (i != 0) buf.append(",");
+      buf.append(getVariable(columnName));
+      i++;
+    }
+
+    buf.append(") ON ").append(tableName).append(".").append(mapping.getPrimaryColumnName()).append("=vals.");
+    buf.append(getVariable(mapping.getPrimaryColumnName()));
+
+    buf.append(" WHEN MATCHED THEN UPDATE SET ");
+    i = 0;
+    for (String columnName : columnMap.keySet()) {
+      if (columnName.equals(mapping.getPrimaryColumnName())) {
+        continue;
+      }
+      if (i != 0) { buf.append(","); }
+      buf.append(tableName).append(".").append(columnName).append("=vals.");
+      buf.append(getVariable(columnName));
+      i++;
+    }
+
+    buf.append(" WHEN NOT MATCHED THEN INSERT (");
+    i = 0;
+    for (String columnName : columnMap.keySet()) {
+      if (i != 0) { buf.append(","); }
+      buf.append(columnName);
+      i++;
+    }
+    i = 0;
+    buf.append(") VALUES ");
+    for (String columnName : columnMap.keySet()) {
+      if (i != 0) { buf.append(","); }
+      buf.append("vals.").append(getVariable(columnName));
+      i++;
+    }
+
+    Column primaryColumn = mapping.getPrimaryColumn();
+    PreparedStatement insert = connection.prepareStatement(buf.toString());
+    int psIndex = 1;
+    for (Entry<String, ColumnData> e : columnMap.entrySet()) {
+      ColumnData cd = e.getValue();
+      Column column = cd.column;
+      if (column.getName().equals(primaryColumn.getName())) {
+        Object key = columnMap.get(primaryColumn.getName()).object;
+        if (primaryColumn.getScaleOrLength() > 0) {
+          insert.setObject(psIndex++, key,
+              primaryColumn.getJdbcType().getOrder(), primaryColumn.getScaleOrLength());
+        } else {
+          insert.setObject(psIndex++, key, primaryColumn.getJdbcType().getOrder());
+        }
+        continue;
+      }
+      try {
+        store.setObject(insert, psIndex++, cd.object, cd.schema, cd.column);
+      } catch (IOException ex) {
+        throw new SQLException(ex);
+      }
+    }
+
+    return insert;
+  }
+}
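The statement `HSqlInsertUpdateStatement.toStatement` assembles is an HSQLDB upsert: one `MERGE` that updates the row when the primary key matches and inserts it otherwise. The sketch below reproduces just the string-building shape for a small column list, so the output can be read in one piece; it is a simplified illustration, not the patch's class — the `CAST(? AS type)` wrapping and the `setObject` parameter binding are omitted, and all names are made up.

```java
import java.util.Arrays;
import java.util.List;

// Simplified sketch of the MERGE shape built by HSqlInsertUpdateStatement.
public class MergeSketch {

  static String mergeSql(String table, String pk, List<String> cols) {
    StringBuilder b = new StringBuilder("MERGE INTO ").append(table).append(" USING (VALUES(");
    // one placeholder per column (the real code wraps each in CAST(? AS type))
    for (int i = 0; i < cols.size(); i++) b.append(i == 0 ? "?" : ",?");
    b.append(")) AS vals(");
    for (int i = 0; i < cols.size(); i++) {
      if (i != 0) b.append(",");
      b.append("v_").append(cols.get(i));
    }
    // rows are matched on the primary key column
    b.append(") ON ").append(table).append(".").append(pk).append("=vals.v_").append(pk);
    b.append(" WHEN MATCHED THEN UPDATE SET ");
    int i = 0;
    for (String c : cols) {
      if (c.equals(pk)) continue; // the key itself is never updated
      if (i++ != 0) b.append(",");
      b.append(table).append(".").append(c).append("=vals.v_").append(c);
    }
    b.append(" WHEN NOT MATCHED THEN INSERT (");
    b.append(String.join(",", cols));
    b.append(") VALUES (");
    for (int j = 0; j < cols.size(); j++) {
      if (j != 0) b.append(",");
      b.append("vals.v_").append(cols.get(j));
    }
    return b.append(")").toString();
  }

  public static void main(String[] args) {
    System.out.println(mergeSql("Employee", "ssn", Arrays.asList("ssn", "name", "salary")));
  }
}
```

Routing every value through the `VALUES(...) AS vals(...)` table expression is what lets a single prepared statement serve both branches: the same bound parameters feed the `UPDATE SET` assignments and the `INSERT` list.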
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertStatement.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertStatement.java
new file mode 100644
index 0000000..99a7670
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertStatement.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.sql.statement;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.gora.sql.store.SqlMapping;
+import org.apache.gora.util.StringUtils;
+
+/**
+ * An SQL INSERT statement, for generating a Prepared Statement
+ */
+public class InsertStatement {
+
+  private SqlMapping mapping;
+  private String tableName;
+  private List<String> columnNames;
+
+  public InsertStatement(SqlMapping mapping, String tableName) {
+    this.mapping = mapping;
+    this.tableName = tableName;
+    this.columnNames = new ArrayList<String>();
+  }
+
+  public InsertStatement(SqlMapping mapping, String tableName, String... columnNames) {
+    this.mapping = mapping;
+    this.tableName = tableName;
+    // Arrays.asList returns a fixed-size list; copy it so that
+    // addColumnName() and clear() do not throw UnsupportedOperationException.
+    this.columnNames = new ArrayList<String>(Arrays.asList(columnNames));
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder builder = new StringBuilder("INSERT INTO ");
+    builder.append(tableName);
+
+    StringUtils.join(builder.append(" ("), columnNames).append(")");
+
+    builder.append(" VALUES (");
+    for(int i = 0; i < columnNames.size(); i++) {
+      if (i != 0) builder.append(",");
+      builder.append("?");
+    }
+
+    builder.append(") ON DUPLICATE KEY UPDATE ");
+    // Skip the primary key column here instead of removing it from the list,
+    // so toString() has no side effects and works on immutable lists.
+    int j = 0;
+    for (String columnName : columnNames) {
+      if (columnName.equals(mapping.getPrimaryColumnName())) continue;
+      if (j != 0) builder.append(",");
+      builder.append(columnName).append("=?");
+      j++;
+    }
+    builder.append(";");
+
+    return builder.toString();
+  }
+
+  /**
+   * @return the tableName
+   */
+  public String getTableName() {
+    return tableName;
+  }
+
+  /**
+   * @param tableName the tableName to set
+   */
+  public void setTableName(String tableName) {
+    this.tableName = tableName;
+  }
+
+  /**
+   * @return the columnNames
+   */
+  public List<String> getColumnNames() {
+    return columnNames;
+  }
+
+  /**
+   * @param columnNames the columnNames to set
+   */
+  public void setColumnNames(String... columnNames) {
+    // Copy into a resizable list; Arrays.asList alone would be fixed-size.
+    this.columnNames = new ArrayList<String>(Arrays.asList(columnNames));
+  }
+
+  public void addColumnName(String columnName) {
+    this.columnNames.add(columnName);
+  }
+
+  public void clear() {
+    this.columnNames.clear();
+  }
+}
\ No newline at end of file
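The SQL shape that InsertStatement builds can be seen in a small standalone sketch. This is a hypothetical mirror of `toString()` (table `person`, primary key `id`, and the column names are made up for illustration; it does not use the Gora classes themselves):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical, self-contained mirror of InsertStatement.toString():
// an INSERT with placeholders, plus an ON DUPLICATE KEY UPDATE tail
// that excludes the primary key column.
public class InsertStatementDemo {

  static String upsertSql(String table, String primary, List<String> cols) {
    StringBuilder b = new StringBuilder("INSERT INTO ").append(table).append(" (");
    b.append(String.join(",", cols)).append(") VALUES (");
    for (int i = 0; i < cols.size(); i++) {
      if (i != 0) b.append(",");
      b.append("?");
    }
    b.append(") ON DUPLICATE KEY UPDATE ");
    int i = 0;
    for (String c : cols) {
      if (c.equals(primary)) continue;   // primary key stays out of UPDATE
      if (i != 0) b.append(",");
      b.append(c).append("=?");
      i++;
    }
    return b.append(";").toString();
  }

  public static void main(String[] args) {
    // Prints: INSERT INTO person (id,name) VALUES (?,?) ON DUPLICATE KEY UPDATE name=?;
    System.out.println(upsertSql("person", "id", Arrays.asList("id", "name")));
  }
}
```

Note that the `ON DUPLICATE KEY UPDATE` syntax is MySQL-specific; the same builder output would not run on HSQLDB, which is why the factory below dispatches per vendor.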
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertUpdateStatement.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertUpdateStatement.java
new file mode 100644
index 0000000..663e15a
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertUpdateStatement.java
@@ -0,0 +1,66 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.sql.statement;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+import org.apache.avro.Schema;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.sql.store.Column;
+import org.apache.gora.sql.store.SqlMapping;
+import org.apache.gora.sql.store.SqlStore;
+
+public abstract class InsertUpdateStatement<K, V extends Persistent> {
+
+  protected class ColumnData {
+    protected Object object;
+    protected Schema schema;
+    protected Column column;
+
+    protected ColumnData(Object object, Schema schema, Column column) {
+      this.object = object;
+      this.schema = schema;
+      this.column = column;
+    }
+  }
+
+  protected SortedMap<String, ColumnData> columnMap = new TreeMap<String, ColumnData>();
+
+  protected String tableName;
+
+  protected SqlMapping mapping;
+
+  protected SqlStore<K, V> store;
+
+  public InsertUpdateStatement(SqlStore<K, V> store, SqlMapping mapping, String tableName) {
+    this.store = store;
+    this.mapping = mapping;
+    this.tableName = tableName;
+  }
+
+  public void setObject(Object object, Schema schema, Column column) {
+    columnMap.put(column.getName(), new ColumnData(object, schema, column));
+  }
+
+  public abstract PreparedStatement toStatement(Connection connection)
+  throws SQLException;
+}
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertUpdateStatementFactory.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertUpdateStatementFactory.java
new file mode 100644
index 0000000..53ffc3c
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/InsertUpdateStatementFactory.java
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.sql.statement;
+
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.sql.store.SqlMapping;
+import org.apache.gora.sql.store.SqlStore;
+import org.apache.gora.sql.store.SqlStore.DBVendor;
+
+public class InsertUpdateStatementFactory {
+
+  public static <K, T extends Persistent>
+  InsertUpdateStatement<K, T> createStatement(SqlStore<K, T> store,
+      SqlMapping mapping, DBVendor dbVendor) {
+    switch(dbVendor) {
+      case MYSQL:
+        return new MySqlInsertUpdateStatement<K, T>(store, mapping, mapping.getTableName());
+      case HSQL:
+        return new HSqlInsertUpdateStatement<K, T>(store, mapping, mapping.getTableName());
+      case GENERIC:
+      default:
+        throw new RuntimeException("Database vendor " + dbVendor + " is not supported yet.");
+    }
+  }
+}
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/MySqlInsertUpdateStatement.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/MySqlInsertUpdateStatement.java
new file mode 100644
index 0000000..ca3b9a9
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/MySqlInsertUpdateStatement.java
@@ -0,0 +1,106 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.sql.statement;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Map.Entry;
+
+import org.apache.avro.Schema;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.sql.store.Column;
+import org.apache.gora.sql.store.SqlMapping;
+import org.apache.gora.sql.store.SqlStore;
+import org.apache.gora.util.StringUtils;
+
+public class MySqlInsertUpdateStatement<K, V extends Persistent> extends InsertUpdateStatement<K, V> {
+
+  public MySqlInsertUpdateStatement(SqlStore<K, V> store, SqlMapping mapping, String tableName) {
+    super(store, mapping, tableName);
+  }
+
+  @Override
+  public PreparedStatement toStatement(Connection connection)
+  throws SQLException {
+    int i = 0;
+    StringBuilder builder = new StringBuilder("INSERT INTO ");
+    builder.append(tableName);
+    StringUtils.join(builder.append(" ("), columnMap.keySet()).append(")");
+
+    builder.append(" VALUES (");
+    for(i = 0; i < columnMap.size(); i++) {
+      if (i != 0) builder.append(",");
+      builder.append("?");
+    }
+
+    builder.append(") ON DUPLICATE KEY UPDATE ");
+
+    // TODO: this needs a cleaner solution. The primary key column must be
+    // kept out of the UPDATE part of the query, so it is skipped below.
+    Column primaryColumn = mapping.getPrimaryColumn();
+    Object key = columnMap.get(primaryColumn.getName()).object;
+    i = 0;
+    for(String s : columnMap.keySet()) {
+      if (s.equals(primaryColumn.getName())) {
+        continue;
+      }
+      if (i != 0) builder.append(",");
+      builder.append(s).append("=").append("?");
+      i++;
+    }
+    builder.append(";");
+
+    PreparedStatement insert = connection.prepareStatement(builder.toString());
+
+    int psIndex = 1;
+    for (int count = 0; count < 2; count++) {
+      for (Entry<String, ColumnData> e : columnMap.entrySet()) {
+        ColumnData columnData = e.getValue();
+        Column column = columnData.column;
+        Schema fieldSchema = columnData.schema;
+        Object fieldValue = columnData.object;
+
+        // check if primary key
+        if (column.getName().equals(primaryColumn.getName())) {
+          if (count == 1) {
+            continue;
+          }
+          if (primaryColumn.getScaleOrLength() > 0) {
+            insert.setObject(psIndex++, key,
+                primaryColumn.getJdbcType().getOrder(), primaryColumn.getScaleOrLength());
+          } else {
+            insert.setObject(psIndex++, key, primaryColumn.getJdbcType().getOrder());
+          }
+          continue;
+        }
+
+        try {
+          store.setObject(insert, psIndex++, fieldValue, fieldSchema, column);
+        } catch (IOException ex) {
+          throw new SQLException(ex);
+        }
+      }
+    }
+
+    return insert;
+  }
+
+}
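The two-pass loop in `toStatement()` binds each column twice: pass 0 fills the `VALUES (...)` placeholders, pass 1 fills the `ON DUPLICATE KEY UPDATE` placeholders, skipping the primary key the second time. A hypothetical standalone sketch of that bind order (column names here are invented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of the parameter-binding order used by MySqlInsertUpdateStatement:
// all columns once for the INSERT part, then all non-primary columns again
// for the UPDATE part, giving 2N-1 placeholders for N columns.
public class BindOrderDemo {

  static List<String> bindOrder(SortedMap<String, Object> columnMap, String primary) {
    List<String> order = new ArrayList<>();
    for (int count = 0; count < 2; count++) {
      for (String name : columnMap.keySet()) {
        if (count == 1 && name.equals(primary)) continue; // skip PK in UPDATE pass
        order.add(name);
      }
    }
    return order;
  }

  public static void main(String[] args) {
    SortedMap<String, Object> cols = new TreeMap<>();
    cols.put("id", 1);
    cols.put("name", "alice");
    // Prints: [id, name, name]
    System.out.println(bindOrder(cols, "id"));
  }
}
```

Because `columnMap` is a `TreeMap`, iteration order is deterministic, which is what makes this two-pass binding line up with the placeholders generated earlier in the same key order.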
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/SelectStatement.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/SelectStatement.java
new file mode 100644
index 0000000..4d3ba69
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/SelectStatement.java
@@ -0,0 +1,262 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.sql.statement;
+
+import java.util.ArrayList;
+
+import org.apache.gora.util.StringUtils;
+
+/** A SQL SELECT statement */
+public class SelectStatement {
+  
+  private String selectStatement;
+  private ArrayList<String> selectList;
+  private String from;
+  private Where where;
+  private String groupBy;
+  private String having;
+  private String orderBy;
+  private boolean orderByAsc = true; //whether ascending or descending
+  private long offset = -1;
+  private long limit = -1;
+  private boolean semicolon = true;
+  
+  public SelectStatement() {
+    this.selectList = new ArrayList<String>();
+  }
+  
+  public SelectStatement(String from) {
+    this();
+    this.from = from;
+  }
+  
+  public SelectStatement(String selectList, String from, String where,
+      String orderBy) {
+    this(); // initialize the select list so addToSelectList() does not NPE
+    this.selectStatement = selectList;
+    this.from = from;
+    setWhere(where);
+    this.orderBy = orderBy;
+  }
+  
+  public SelectStatement(String selectList, String from, Where where,
+      String groupBy, String having, String orderBy, boolean orderByAsc,
+      int offset, int limit, boolean semicolon) {
+    this(); // initialize the select list
+    this.selectStatement = selectList;
+    this.from = from;
+    this.where = where;
+    this.groupBy = groupBy;
+    this.having = having;
+    this.orderBy = orderBy;
+    this.orderByAsc = orderByAsc;
+    this.offset = offset;
+    this.limit = limit;
+    this.semicolon = semicolon;
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder builder = new StringBuilder("SELECT ");
+    if(selectStatement != null)
+      builder.append(selectStatement);
+    else
+      StringUtils.join(builder, selectList);
+    append(builder, "FROM", from);
+    append(builder, "WHERE", where);
+    append(builder, "GROUP BY", groupBy);
+    append(builder, "HAVING", having);
+    append(builder, "ORDER BY", orderBy);
+    if(orderBy != null)
+      builder.append(orderByAsc ? " ASC" : " DESC");
+    if(limit > 0)
+      builder.append(" LIMIT ").append(limit);
+    if(offset >= 0)
+      builder.append(" OFFSET ").append(offset);
+    if(semicolon)
+      builder.append(";");
+    return builder.toString();
+  }
+  
+  /** Adds a part to the Where clause connected with AND */
+  public void addWhere(String part) {
+    if(where == null)
+      where = new Where();
+    where.addPart(part);
+  }
+  
+  /** Appends the clause if not null */
+  static void append(StringBuilder builder, String sqlClause, Object clause ) {
+    if(clause != null && !clause.toString().equals("")) {
+      builder.append(" ").append(sqlClause).append(" ").append(clause.toString());
+    }
+  }
+
+  public void setSelectStatement(String selectStatement) {
+    this.selectStatement = selectStatement;
+  }
+  
+  public String getSelectStatement() {
+    return selectStatement;
+  }
+
+  public ArrayList<String> getSelectList() {
+    return selectList;
+  }
+  
+  public void setSelectList(ArrayList<String> selectList) {
+    this.selectList = selectList;
+  }
+  
+  public void addToSelectList(String selectField) {
+    selectList.add(selectField);
+  }
+  
+  /**
+   * @return the from
+   */
+  public String getFrom() {
+    return from;
+  }
+
+  /**
+   * @param from the from to set
+   */
+  public void setFrom(String from) {
+    this.from = from;
+  }
+
+  /**
+   * @return the where
+   */
+  public Where getWhere() {
+    return where;
+  }
+
+  /**
+   * @param where the where to set
+   */
+  public void setWhere(Where where) {
+    this.where = where;
+  }
+  
+  /**
+   * @param where the where to set
+   */
+  public void setWhere(String where) {
+    this.where = new Where(where);
+  }
+
+  /**
+   * @return the groupBy
+   */
+  public String getGroupBy() {
+    return groupBy;
+  }
+
+  /**
+   * @param groupBy the groupBy to set
+   */
+  public void setGroupBy(String groupBy) {
+    this.groupBy = groupBy;
+  }
+
+  /**
+   * @return the having
+   */
+  public String getHaving() {
+    return having;
+  }
+
+  /**
+   * @param having the having to set
+   */
+  public void setHaving(String having) {
+    this.having = having;
+  }
+
+  /**
+   * @return the orderBy
+   */
+  public String getOrderBy() {
+    return orderBy;
+  }
+
+  /**
+   * @param orderBy the orderBy to set
+   */
+  public void setOrderBy(String orderBy) {
+    this.orderBy = orderBy;
+  }
+
+  /**
+   * @return the orderByAsc
+   */
+  public boolean isOrderByAsc() {
+    return orderByAsc;
+  }
+
+  /**
+   * @param orderByAsc the orderByAsc to set
+   */
+  public void setOrderByAsc(boolean orderByAsc) {
+    this.orderByAsc = orderByAsc;
+  }
+
+  /**
+   * @return the offset
+   */
+  public long getOffset() {
+    return offset;
+  }
+
+  /**
+   * @param offset the offset to set
+   */
+  public void setOffset(long offset) {
+    this.offset = offset;
+  }
+
+  /**
+   * @return the limit
+   */
+  public long getLimit() {
+    return limit;
+  }
+
+  /**
+   * @param limit the limit to set
+   */
+  public void setLimit(long limit) {
+    this.limit = limit;
+  }
+
+  /**
+   * @return the semicolon
+   */
+  public boolean isSemicolon() {
+    return semicolon;
+  }
+
+  /**
+   * @param semicolon the semicolon to set
+   */
+  public void setSemicolon(boolean semicolon) {
+    this.semicolon = semicolon;
+  }
+  
+}
\ No newline at end of file
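How the clauses assemble into a final query can be shown with a simplified, self-contained mirror of `SelectStatement.toString()` (the table, columns, and predicate below are invented for the example, and the spacing around ASC/DESC is normalized):

```java
// Hypothetical simplified mirror of SelectStatement.toString(): each clause
// is appended only when present, then LIMIT/OFFSET and a closing semicolon.
public class SelectDemo {

  static String select(String cols, String from, String where, String orderBy,
                       boolean asc, long limit, long offset) {
    StringBuilder b = new StringBuilder("SELECT ").append(cols);
    b.append(" FROM ").append(from);
    if (where != null && !where.isEmpty()) b.append(" WHERE ").append(where);
    if (orderBy != null) b.append(" ORDER BY ").append(orderBy)
                          .append(asc ? " ASC" : " DESC");
    if (limit > 0) b.append(" LIMIT ").append(limit);
    if (offset >= 0) b.append(" OFFSET ").append(offset);
    return b.append(";").toString();
  }

  public static void main(String[] args) {
    // Prints: SELECT id,name FROM person WHERE id > 5 ORDER BY name ASC LIMIT 10 OFFSET 0;
    System.out.println(select("id,name", "person", "id > 5", "name", true, 10, 0));
  }
}
```

The sentinel defaults in the real class (`offset = -1`, `limit = -1`) are what make the conditional appends work: a clause is only emitted once its setter has been called with a meaningful value.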
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/Where.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/Where.java
new file mode 100644
index 0000000..ff4313a
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/statement/Where.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.sql.statement;
+
+/**
+ * A WHERE clause in an SQL statement
+ */
+public class Where {
+
+  private StringBuilder builder;
+
+  public Where() {
+    builder = new StringBuilder();
+  }
+
+  public Where(String where) {
+    builder = new StringBuilder(where == null ? "" : where);
+  }
+
+  /** Adds a part to the Where clause connected with AND */
+  public void addPart(String part) {
+    if (builder.length() > 0) {
+      builder.append(" AND ");
+    }
+    builder.append(part);
+  }
+
+  public void equals(String name, String value) {
+    addPart(name + " = " + value);
+  }
+
+  public void lessThan(String name, String value) {
+    addPart(name + " < " + value);
+  }
+  
+  public void lessThanEq(String name, String value) {
+    addPart(name + " <= " + value);
+  }
+  
+  public void greaterThan(String name, String value) {
+    addPart(name + " > " + value);
+  }
+  
+  public void greaterThanEq(String name, String value) {
+    addPart(name + " >= " + value);
+  }
+  
+  public boolean isEmpty() {
+    return builder.length() == 0;
+  }
+  
+  @Override
+  public String toString() {
+    return builder.toString();
+  }
+}
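The Where builder above simply ANDs parts together. A self-contained copy of that logic, exercised through the comparison helpers (the column name and bounds are made up):

```java
// Standalone copy of the Where builder: parts are joined with " AND ",
// and helpers like greaterThanEq()/lessThan() add comparison predicates.
public class WhereDemo {

  private final StringBuilder builder = new StringBuilder();

  void addPart(String part) {
    if (builder.length() > 0) {
      builder.append(" AND ");
    }
    builder.append(part);
  }

  void greaterThanEq(String name, String value) { addPart(name + " >= " + value); }
  void lessThan(String name, String value)      { addPart(name + " < " + value); }

  @Override
  public String toString() { return builder.toString(); }

  public static void main(String[] args) {
    WhereDemo w = new WhereDemo();
    w.greaterThanEq("id", "10");
    w.lessThan("id", "20");
    // Prints: id >= 10 AND id < 20
    System.out.println(w);
  }
}
```

Note that the real class interpolates values directly into the clause string rather than using `?` placeholders, so callers are responsible for ensuring values are safe to embed.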
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/Column.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/Column.java
new file mode 100644
index 0000000..8d87921
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/Column.java
@@ -0,0 +1,132 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.store;
+
+import org.apache.gora.sql.store.SqlTypeInterface.JdbcType;
+
+public class Column {
+
+  public static enum MappingStrategy {
+    SERIALIZED,
+    JOIN_TABLE,
+    SECONDARY_TABLE,
+  }
+
+  private String tableName;
+  private String name;
+  private JdbcType jdbcType;
+  private String sqlType;
+  private boolean isPrimaryKey;
+  private int length = -1;
+  private int scale = -1;
+  private MappingStrategy mappingStrategy;
+
+  //index, not-null, default-value
+
+  public Column() {
+  }
+
+  public Column(String name) {
+    this.name = name;
+  }
+
+  public Column(String name, boolean isPrimaryKey, JdbcType jdbcType, String sqlType
+      , int length, int scale) {
+    this.name = name;
+    this.isPrimaryKey = isPrimaryKey;
+    this.jdbcType = jdbcType;
+    this.length = length;
+    this.scale = scale;
+    this.mappingStrategy = MappingStrategy.SERIALIZED;
+    this.sqlType = sqlType == null ? jdbcType.getSqlType() : sqlType;
+  }
+
+  public Column(String name, boolean isPrimaryKey, JdbcType jdbcType
+      , int length, int scale) {
+    this(name, isPrimaryKey, jdbcType, null, length, scale);
+  }
+  
+  public Column(String name, boolean isPrimaryKey) {
+    this.name = name;
+    this.isPrimaryKey = isPrimaryKey;
+  }
+
+  public String getName() {
+    return name;
+  }
+
+  public void setName(String name) {
+    this.name = name;
+  }
+
+  public JdbcType getJdbcType() {
+    return jdbcType;
+  }
+
+  public void setJdbcType(JdbcType jdbcType) {
+    this.jdbcType = jdbcType;
+  }
+
+  public String getSqlType() {
+    return sqlType;
+  }
+  
+  public void setSqlType(String sqlType) {
+    this.sqlType = sqlType;
+  }
+  
+  public void setLength(int length) {
+    this.length = length;
+  }
+
+  public int getLength() {
+    return length;
+  }
+
+  public int getScale() {
+    return scale;
+  }
+
+  public void setScale(int scale) {
+    this.scale = scale;
+  }
+
+  public int getScaleOrLength() {
+    return length > 0 ? length : scale;
+  }
+
+  public String getTableName() {
+    return tableName;
+  }
+
+  public void setTableName(String tableName) {
+    this.tableName = tableName;
+  }
+
+  public MappingStrategy getMappingStrategy() {
+    return mappingStrategy;
+  }
+
+  public void setMappingStrategy(MappingStrategy mappingStrategy) {
+    this.mappingStrategy = mappingStrategy;
+  }
+
+  public boolean isPrimaryKey() {
+    return isPrimaryKey;
+  }
+}
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlMapping.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlMapping.java
new file mode 100644
index 0000000..9c90f79
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlMapping.java
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.store;
+
+import java.util.HashMap;
+
+import org.apache.gora.sql.store.SqlTypeInterface.JdbcType;
+
+public class SqlMapping {
+
+  private String tableName;
+  private HashMap<String, Column> fields;
+  private Column primaryColumn;
+
+  public SqlMapping() {
+    fields = new HashMap<String, Column>();
+  }
+
+  public void setTableName(String tableName) {
+    this.tableName = tableName;
+  }
+
+  public String getTableName() {
+    return tableName;
+  }
+
+  public void addField(String fieldname, String column) {
+    fields.put(fieldname, new Column(column));
+  }
+
+  public void addField(String fieldName, String columnName, JdbcType jdbcType,
+      String sqlType, int length, int scale) {
+    fields.put(fieldName, new Column(columnName, false, jdbcType, sqlType, length, scale));
+  }
+
+  public Column getColumn(String fieldname) {
+    return fields.get(fieldname);
+  }
+
+  public void setPrimaryKey(String columnName, JdbcType jdbcType,
+      int length, int scale) {
+    primaryColumn = new Column(columnName, true, jdbcType, length, scale);
+  }
+
+  public Column getPrimaryColumn() {
+    return primaryColumn;
+  }
+
+  public String getPrimaryColumnName() {
+    return primaryColumn.getName();
+  }
+
+  public HashMap<String, Column> getFields() {
+    return fields;
+  }
+}
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlStore.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlStore.java
new file mode 100644
index 0000000..510a2a2
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlStore.java
@@ -0,0 +1,323 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.store;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.sql.Blob;
+import java.sql.Connection;
+import java.sql.DatabaseMetaData;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.generic.GenericFixed;
+import org.apache.avro.ipc.ByteBufferInputStream;
+import org.apache.avro.ipc.ByteBufferOutputStream;
+import org.apache.avro.specific.SpecificFixed;
+import org.apache.avro.util.Utf8;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.persistency.Persistent;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.query.PartitionQuery;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.query.impl.PartitionQueryImpl;
+import org.apache.gora.sql.query.SqlQuery;
+import org.apache.gora.sql.query.SqlResult;
+import org.apache.gora.sql.statement.Delete;
+import org.apache.gora.sql.statement.InsertUpdateStatement;
+import org.apache.gora.sql.statement.InsertUpdateStatementFactory;
+import org.apache.gora.sql.statement.SelectStatement;
+import org.apache.gora.sql.statement.Where;
+import org.apache.gora.sql.store.SqlTypeInterface.JdbcType;
+import org.apache.gora.sql.util.SqlUtils;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.impl.DataStoreBase;
+import org.apache.gora.util.AvroUtils;
+import org.apache.gora.util.ClassLoadingUtils;
+import org.apache.gora.util.IOUtils;
+import org.apache.gora.util.StringUtils;
+import org.jdom.Document;
+import org.jdom.Element;
+import org.jdom.input.SAXBuilder;
+
+/**
+ * A DataStore implementation for RDBMS with a SQL interface. SqlStore
+ * uses the JOOQ API and various JDBC drivers to communicate with the DB.
+ * Through the JOOQ API, this SqlStore aims to support numerous SQL
+ * databases, namely:
+ * DB2 9.7
+ * Derby 10.8
+ * H2 1.3.161
+ * HSQLDB 2.2.5
+ * Ingres 10.1.0
+ * MySQL 5.1.41 and 5.5.8
+ * Oracle XE 10.2.0.1.0 and 11g
+ * PostgreSQL 9.0
+ * SQLite with unofficial JDBC driver v056
+ * SQL Server 2008 R2
+ * Sybase Adaptive Server Enterprise 15.5
+ * Sybase SQL Anywhere 12
+ *
+ * This DataStore is currently in development and requires a complete
+ * re-write as per GORA-86.
+ * Please see https://issues.apache.org/jira/browse/GORA-86
+ */
+public class SqlStore<K, T extends Persistent> extends DataStoreBase<K, T> {
+
+  /** The vendor of the DB */
+  public static enum DBVendor {
+    MYSQL,
+    HSQL,
+    GENERIC;
+
+    static DBVendor getVendor(String dbProductName) {
+      String name = dbProductName.toLowerCase();
+      if(name.contains("mysql"))
+        return MYSQL;
+      else if(name.contains("hsql"))
+        return HSQL;
+      return GENERIC;
+    }
+  }
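The vendor detection matches on the JDBC product name, case-insensitively. A standalone copy of that dispatch, fed a few representative product-name strings (the strings here are illustrative, not taken from live driver metadata):

```java
// Standalone copy of DBVendor.getVendor(): substring match on the
// lowercased JDBC database product name, falling back to GENERIC.
public class VendorDemo {

  enum DBVendor {
    MYSQL, HSQL, GENERIC;

    static DBVendor getVendor(String dbProductName) {
      String name = dbProductName.toLowerCase();
      if (name.contains("mysql")) return MYSQL;
      if (name.contains("hsql")) return HSQL;
      return GENERIC;
    }
  }

  public static void main(String[] args) {
    System.out.println(DBVendor.getVendor("MySQL"));                // MYSQL
    System.out.println(DBVendor.getVendor("HSQL Database Engine")); // HSQL
    System.out.println(DBVendor.getVendor("PostgreSQL"));           // GENERIC
  }
}
```

In the real store the product name would come from `DatabaseMetaData.getDatabaseProductName()`, and a GENERIC result currently causes the statement factory to throw rather than fall back to portable SQL.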
+
+  private static final Logger log = LoggerFactory.getLogger(SqlStore.class);
+
+  /** The JDBC Driver class name */
+  protected static final String DRIVER_CLASS_PROPERTY = "jdbc.driver";
+
+  /** JDBC Database access URL */
+  protected static final String URL_PROPERTY = "jdbc.url";
+
+  /** User name to access the database */
+  protected static final String USERNAME_PROPERTY = "jdbc.user";
+
+  /** Password to access the database */
+  protected static final String PASSWORD_PROPERTY = "jdbc.password";
+
+  protected static final String DEFAULT_MAPPING_FILE = "gora-sql-mapping.xml";
+
+  private String jdbcDriverClass;
+  private String jdbcUrl;
+  private String jdbcUsername;
+  private String jdbcPassword;
+
+  private SqlMapping mapping;
+
+  private Connection connection; //no connection pooling yet
+
+  private DatabaseMetaData metadata;
+  private boolean dbMixedCaseIdentifiers, dbLowerCaseIdentifiers, dbUpperCaseIdentifiers;
+  private HashMap<String, JdbcType> dbTypeMap;
+
+  private HashSet<PreparedStatement> writeCache;
+
+  private int keySqlType;
+
+  // TODO implement DataBaseTable sqlTable
+  //private DataBaseTable sqlTable;
+
+  private Column primaryColumn;
+
+  private String dbProductName;
+
+  private DBVendor dbVendor;
+
+  public void initialize() throws IOException {
+      //TODO
+  }
+
+  @Override
+  public String getSchemaName() {
+    return mapping.getTableName();
+  }
+
+  @Override
+  public void close() throws IOException {
+  //TODO
+  }
+
+  
+  private void setColumnConstraintForQuery() throws IOException {
+  //TODO
+  }
+  
+  
+  @Override
+  public void createSchema() throws IOException {
+  //TODO
+  }
+
+  private void getColumnConstraint() throws IOException {
+  //TODO
+  }
+
+  @Override
+  public void deleteSchema() throws IOException {
+  //TODO
+  }
+
+  @Override
+  public boolean schemaExists() throws IOException {
+  //TODO
+  return false;
+  }
+
+  @Override
+  public boolean delete(K key) throws IOException {
+  //TODO
+  return false;
+  }
+  
+  @Override
+  public long deleteByQuery(Query<K, T> query) throws IOException {
+  //TODO
+  return 0;
+  }
+
+  public void flush() throws IOException {
+  //TODO
+  }
+
+  @Override
+  public T get(K key, String[] requestFields) throws IOException {
+  //TODO
+  return null;
+  }
+
+  @Override
+  public Result<K, T> execute(Query<K, T> query) throws IOException {
+  //TODO
+  return null;
+  }
+
+  private void constructWhereClause() throws IOException {
+  //TODO
+  }
+
+  private void setParametersForPreparedStatement() throws SQLException, IOException {
+  //TODO
+  }
+
+  @SuppressWarnings("unchecked")
+  public K readPrimaryKey(ResultSet resultSet) throws SQLException {
+    return (K) resultSet.getObject(primaryColumn.getName());
+  }
+
+  public T readObject(ResultSet rs, T persistent
+      , String[] requestFields) throws SQLException, IOException {
+  //TODO
+  return null;
+  }
+
+  protected byte[] getBytes() throws SQLException, IOException {
+    return null;
+  }
+
+  protected Object readField() throws SQLException, IOException {
+  //TODO
+  return null;
+  }
+
+  public List<PartitionQuery<K, T>> getPartitions(Query<K, T> query)
+  throws IOException {
+  //TODO Implement this using Hadoop support
+  return null;
+  }
+
+  @Override
+  public Query<K, T> newQuery() {
+    return new SqlQuery<K, T>(this);
+  }
+
+  @Override
+  public void put(K key, T persistent) throws IOException {
+  //TODO
+  }
+
+  /**
+   * Sets the object on the PreparedStatement according to its schema
+   */
+  public void setObject(PreparedStatement statement, int index, Object object
+      , Schema schema, Column column) throws SQLException, IOException {
+  //TODO
+  }
+  
+  protected <V> void setObject(PreparedStatement statement, int index, V object
+      , int objectType, Column column) throws SQLException, IOException {
+    statement.setObject(index, object, objectType, column.getScaleOrLength());
+  }
+
+  protected void setBytes() throws SQLException   {
+  //TODO
+  }
+
+  /** Serializes the field using Avro to a BLOB field */
+  protected void setField() throws IOException, SQLException {
+  //TODO
+  }
+
+  protected Connection getConnection() throws IOException {
+  //TODO
+  return null;
+  }
+
+  protected void initDbMetadata() throws IOException {
+  //TODO
+  }
+
+  protected String getIdentifier() {
+  //TODO
+  return null;
+  }
+
+  private void addColumn() {
+  //TODO
+  }
+
+  
+  protected void createSqlTable() {
+  //TODO
+  }
+  
+  private void addField() throws IOException {
+  //TODO
+  }
+
+  @SuppressWarnings("unchecked")
+  protected SqlMapping readMapping() throws IOException {
+  //TODO
+  return null;
+  }
+}
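The `jdbc.*` property constants declared in `SqlStore` are resolved from `gora.properties` with a store-specific prefix. A minimal configuration sketch, assuming the `gora.sqlstore.` prefix used by `GoraSqlTestDriver` in this same module; the HSQLDB driver/URL values mirror the test driver's defaults, and the `sa` user is an assumption (HSQLDB's default account), not something this patch sets:

```properties
# Hypothetical gora.properties fragment for SqlStore (values are examples)
gora.sqlstore.jdbc.driver=org.hsqldb.jdbcDriver
gora.sqlstore.jdbc.url=jdbc:hsqldb:hsql://localhost:9001/goratest
gora.sqlstore.jdbc.user=sa
gora.sqlstore.jdbc.password=
```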
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlTypeInterface.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlTypeInterface.java
new file mode 100644
index 0000000..5c8d65f
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/store/SqlTypeInterface.java
@@ -0,0 +1,244 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.store;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.math.BigDecimal;
+import java.math.BigInteger;
+import java.sql.Types;
+import java.util.Currency;
+import java.util.HashMap;
+import java.util.Locale;
+
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Type;
+import org.apache.avro.util.Utf8;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.VIntWritable;
+import org.apache.hadoop.io.VLongWritable;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Contains utility methods related to type conversion between
+ * java, avro and SQL types.
+ */
+public class SqlTypeInterface {
+
+  /**
+   * Encapsulates java.sql.Types as an enum
+   */
+  public static enum JdbcType {
+    ARRAY(Types.ARRAY),
+    BIT(Types.BIT),
+    BIGINT(Types.BIGINT),
+    BINARY(Types.BINARY),
+    BLOB(Types.BLOB),
+    BOOLEAN(Types.BOOLEAN),
+    CHAR(Types.CHAR),
+    CLOB(Types.CLOB),
+    DATALINK(Types.DATALINK),
+    DATE(Types.DATE),
+    DECIMAL(Types.DECIMAL),
+    DISTINCT(Types.DISTINCT),
+    DOUBLE(Types.DOUBLE),
+    FLOAT(Types.FLOAT),
+    INTEGER(Types.INTEGER),
+    LONGNVARCHAR(Types.LONGNVARCHAR),
+    LONGVARBINARY(Types.LONGVARBINARY),
+    LONGVARCHAR(Types.LONGVARCHAR),
+    NCHAR(Types.NCHAR),
+    NCLOB(Types.NCLOB),
+    NULL(Types.NULL),
+    NUMERIC(Types.NUMERIC),
+    NVARCHAR(Types.NVARCHAR),
+    OTHER(Types.OTHER),
+    REAL(Types.REAL),
+    REF(Types.REF),
+    ROWID(Types.ROWID),
+    SMALLINT(Types.SMALLINT),
+    SQLXML(Types.SQLXML, "XML"),
+    STRUCT(Types.STRUCT),
+    TIME(Types.TIME),
+    TIMESTAMP(Types.TIMESTAMP),
+    TINYINT(Types.TINYINT),
+    VARBINARY(Types.VARBINARY),
+    VARCHAR(Types.VARCHAR)
+    ;
+
+    private int order;
+    private String sqlType;
+
+    private JdbcType(int order) {
+      this.order = order;
+    }
+    private JdbcType(int order, String sqlType) {
+      this.order = order;
+      this.sqlType = sqlType;
+    }
+    public String getSqlType() {
+      return sqlType == null ? toString() : sqlType;
+    }
+    public int getOrder() {
+      return order;
+    }
+
+    private static HashMap<Integer, JdbcType> map =
+      new HashMap<Integer, JdbcType>();
+    static {
+      for(JdbcType type : JdbcType.values()) {
+        map.put(type.order, type);
+      }
+    }
+
+    /**
+     * Returns a JdbcType enum from a jdbc type in java.sql.Types
+     * @param order an integer in java.sql.Types
+     */
+    public static final JdbcType get(int order) {
+      return map.get(order);
+    }
+  };
+
+  public static int getSqlType(Class<?> clazz) {
+
+    //jdo default types
+    if (Boolean.class.isAssignableFrom(clazz)) {
+      return Types.BIT;
+    } else if (Character.class.isAssignableFrom(clazz)) {
+      return Types.CHAR;
+    } else if (Byte.class.isAssignableFrom(clazz)) {
+      return Types.TINYINT;
+    } else if (Short.class.isAssignableFrom(clazz)) {
+      return Types.SMALLINT;
+    } else if (Integer.class.isAssignableFrom(clazz)) {
+      return Types.INTEGER;
+    } else if (Long.class.isAssignableFrom(clazz)) {
+      return Types.BIGINT;
+    } else if (Float.class.isAssignableFrom(clazz)) {
+      return Types.FLOAT;
+    } else if (Double.class.isAssignableFrom(clazz)) {
+      return Types.DOUBLE;
+    } else if (java.util.Date.class.isAssignableFrom(clazz)) {
+      return Types.TIMESTAMP;
+    } else if (java.sql.Date.class.isAssignableFrom(clazz)) {
+      return Types.DATE;
+    } else if (java.sql.Time.class.isAssignableFrom(clazz)) {
+      return Types.TIME;
+    } else if (java.sql.Timestamp.class.isAssignableFrom(clazz)) {
+      return Types.TIMESTAMP;
+    } else if (String.class.isAssignableFrom(clazz)) {
+      return Types.VARCHAR;
+    } else if (Locale.class.isAssignableFrom(clazz)) {
+      return Types.VARCHAR;
+    } else if (Currency.class.isAssignableFrom(clazz)) {
+      return Types.VARCHAR;
+    } else if (BigInteger.class.isAssignableFrom(clazz)) {
+      return Types.NUMERIC;
+    } else if (BigDecimal.class.isAssignableFrom(clazz)) {
+      return Types.DECIMAL;
+    } else if (Serializable.class.isAssignableFrom(clazz)) {
+      return Types.LONGVARBINARY;
+    }
+
+    //Hadoop types
+    else if (DoubleWritable.class.isAssignableFrom(clazz)) {
+      return Types.DOUBLE;
+    } else if (FloatWritable.class.isAssignableFrom(clazz)) {
+      return Types.FLOAT;
+    } else if (IntWritable.class.isAssignableFrom(clazz)) {
+      return Types.INTEGER;
+    } else if (LongWritable.class.isAssignableFrom(clazz)) {
+      return Types.BIGINT;
+    } else if (Text.class.isAssignableFrom(clazz)) {
+      return Types.VARCHAR;
+    } else if (VIntWritable.class.isAssignableFrom(clazz)) {
+      return Types.INTEGER;
+    } else if (VLongWritable.class.isAssignableFrom(clazz)) {
+      return Types.BIGINT;
+    } else if (Writable.class.isAssignableFrom(clazz)) {
+      return Types.LONGVARBINARY;
+    }
+
+    //avro types
+    else if (Utf8.class.isAssignableFrom(clazz)) {
+      return Types.VARCHAR;
+    }
+
+    return Types.OTHER;
+  }
+
+  public static JdbcType getJdbcType(Schema schema, int length, int scale) throws IOException {
+    Type type = schema.getType();
+
+    switch(type) {
+      case MAP    : return JdbcType.BLOB;
+      case ARRAY  : return JdbcType.BLOB;
+      case BOOLEAN: return JdbcType.BIT;
+      case BYTES  : return JdbcType.BLOB;
+      case DOUBLE : return JdbcType.DOUBLE;
+      case ENUM   : return JdbcType.VARCHAR;
+      case FIXED  : return JdbcType.BINARY;
+      case FLOAT  : return JdbcType.FLOAT;
+      case INT    : return JdbcType.INTEGER;
+      case LONG   : return JdbcType.BIGINT;
+      case NULL   : break;
+      case RECORD : return JdbcType.BLOB;
+      case STRING : return JdbcType.VARCHAR;
+      case UNION  : throw new IOException("Union is not supported yet");
+    }
+    return null;
+  }
+
+  public static JdbcType getJdbcType(Class<?> clazz, int length, int scale) throws IOException {
+    if (clazz.equals(Enum.class)) {
+      return JdbcType.VARCHAR;
+    } else if (clazz.equals(Byte.TYPE) || clazz.equals(Byte.class)) {
+      return JdbcType.BLOB;
+    } else if (clazz.equals(Boolean.TYPE) || clazz.equals(Boolean.class)) {
+      return JdbcType.BIT;
+    } else if (clazz.equals(Short.TYPE) || clazz.equals(Short.class)) {
+      return JdbcType.INTEGER;
+    } else if (clazz.equals(Integer.TYPE) || clazz.equals(Integer.class)) {
+      return JdbcType.INTEGER;
+    } else if (clazz.equals(Long.TYPE) || clazz.equals(Long.class)) {
+      return JdbcType.BIGINT;
+    } else if (clazz.equals(Float.TYPE) || clazz.equals(Float.class)) {
+      return JdbcType.FLOAT;
+    } else if (clazz.equals(Double.TYPE) || clazz.equals(Double.class)) {
+      return JdbcType.DOUBLE;
+    } else if (clazz.equals(String.class)) {
+      return JdbcType.VARCHAR;
+    }
+    throw new RuntimeException("Can't parse data as class: " + clazz);
+  }
+
+  public static JdbcType stringToJdbcType(String type) {
+    try {
+      return JdbcType.valueOf(type);
+    }catch (IllegalArgumentException ex) {
+      return JdbcType.OTHER; //db specific type
+    }
+  }
+
+}
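The static `map` inside `JdbcType` above is the standard enum reverse-lookup idiom: each constant stores its `java.sql.Types` code, and a map built once at class initialization resolves codes back to constants in O(1). A minimal, self-contained sketch of the same pattern using only the JDK (the class name and the three constants are hypothetical, trimmed down from the full enum):

```java
import java.sql.Types;
import java.util.HashMap;
import java.util.Map;

public class SqlTypeLookupDemo {

    public enum Jdbc {
        INTEGER(Types.INTEGER),
        VARCHAR(Types.VARCHAR),
        DOUBLE(Types.DOUBLE);

        private final int code;

        Jdbc(int code) {
            this.code = code;
        }

        // Built once when the enum class is initialized, after all
        // constants exist; values() is safe to call here.
        private static final Map<Integer, Jdbc> BY_CODE = new HashMap<>();
        static {
            for (Jdbc t : values()) {
                BY_CODE.put(t.code, t);
            }
        }
    }

    /** Returns the constant for a java.sql.Types code, or null if unmapped. */
    public static Jdbc fromCode(int code) {
        return Jdbc.BY_CODE.get(code);
    }

    public static void main(String[] args) {
        System.out.println(fromCode(Types.VARCHAR)); // VARCHAR
    }
}
```

The map avoids a linear scan over `values()` on every lookup, which matters when the mapping is consulted per column per row, as `SqlTypeInterface.JdbcType.get(int)` is.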
diff --git a/trunk/gora-sql/src/main/java/org/apache/gora/sql/util/SqlUtils.java b/trunk/gora-sql/src/main/java/org/apache/gora/sql/util/SqlUtils.java
new file mode 100644
index 0000000..62f960b
--- /dev/null
+++ b/trunk/gora-sql/src/main/java/org/apache/gora/sql/util/SqlUtils.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.util;
+
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+/**
+ * SQL related utilities
+ */
+public class SqlUtils {
+  
+  /** Closes the ResultSet silently */
+  public static void close(ResultSet rs) {
+    if(rs != null) {
+      try {
+        rs.close();
+      } catch (SQLException ignore) { }
+    }
+  }
+  
+  /** Closes the Statement silently */
+  public static void close(Statement statement) {
+    if(statement != null) {
+      try {
+        statement.close();
+      } catch (SQLException ignore) { }
+    }
+  }
+}
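The null-safe "close silently" idiom in `SqlUtils` generalizes to any `AutoCloseable`; a self-contained sketch of the pattern (hypothetical class name), worth noting since on Java 7+ try-with-resources usually supersedes such helpers for resources opened and closed in the same scope:

```java
public class CloseQuietlyDemo {

    /** Closes the resource, swallowing any exception; null is a no-op. */
    static void closeQuietly(AutoCloseable c) {
        if (c != null) {
            try {
                c.close();
            } catch (Exception ignore) {
                // Cleanup failures on close are not actionable here.
            }
        }
    }

    public static void main(String[] args) {
        final boolean[] closed = {false};
        AutoCloseable resource = () -> { closed[0] = true; };
        closeQuietly(resource);
        closeQuietly(null); // must not throw
        System.out.println(closed[0]); // true
    }
}
```

Helpers like `SqlUtils.close()` remain useful in `finally` blocks where the resource outlives the try scope, e.g. a `ResultSet` consumed by a lazily-iterated `Result`.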
diff --git a/trunk/gora-sql/src/test/conf/.gitignore b/trunk/gora-sql/src/test/conf/.gitignore
new file mode 100644
index 0000000..09697dc
--- /dev/null
+++ b/trunk/gora-sql/src/test/conf/.gitignore
@@ -0,0 +1,15 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/trunk/gora-sql/src/test/conf/gora-sql-mapping.xml b/trunk/gora-sql/src/test/conf/gora-sql-mapping.xml
new file mode 100644
index 0000000..d67414f
--- /dev/null
+++ b/trunk/gora-sql/src/test/conf/gora-sql-mapping.xml
@@ -0,0 +1,48 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<gora-orm>
+  <class name="org.apache.gora.examples.generated.Employee" keyClass="java.lang.String" table="Employee">
+    <primarykey column="id" length="16"/>
+    <field name="name" column="name" length="128"/>
+    <field name="dateOfBirth" column="dateOfBirth"/>
+    <field name="ssn" column="ssn" jdbc-type="VARCHAR_IGNORECASE" length="16"/> <!-- jdbc-type is HSQLDB specific for testing -->
+    <field name="salary" column="salary"/>
+  </class>
+
+  <class name="org.apache.gora.examples.generated.WebPage" keyClass="java.lang.String" table="WebPage">
+    <primarykey column="id" length="128"/>
+    <field name="url" column="url" length="128" primarykey="true"/>
+    <field name="content" column="content"/>
+    <field name="parsedContent" column="parsedContent"/>
+    <field name="outlinks" column="outlinks"/>
+    <field name="metadata" column="metadata"/>
+  </class>
+
+<!--
+<table name="TokenDatum" keyClass="java.lang.String" persistentClass="org.apache.gora.examples.generated.TokenDatum">
+  <description>
+    <family name="common"/>
+  </description>
+  <fields>
+    <field name="count" family="common" qualifier="count"/>
+  </fields>
+</table>
+-->
+</gora-orm>
+
diff --git a/trunk/gora-sql/src/test/java/org/apache/gora/sql/GoraSqlTestDriver.java b/trunk/gora-sql/src/test/java/org/apache/gora/sql/GoraSqlTestDriver.java
new file mode 100644
index 0000000..f59253e
--- /dev/null
+++ b/trunk/gora-sql/src/test/java/org/apache/gora/sql/GoraSqlTestDriver.java
@@ -0,0 +1,119 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+import org.apache.gora.GoraTestDriver;
+import org.apache.gora.sql.store.SqlStore;
+import org.apache.gora.util.ClassLoadingUtils;
+import org.apache.hadoop.util.StringUtils;
+import org.hsqldb.Server;
+
+/**
+ * Helper class for third-party tests using the gora-sql backend.
+ * @see GoraTestDriver
+ */
+public class GoraSqlTestDriver extends GoraTestDriver {
+
+  public GoraSqlTestDriver() {
+    super(SqlStore.class);
+  }
+
+  /** The JDBC Driver class name */
+  protected static final String DRIVER_CLASS_PROPERTY = "jdbc.driver";
+  /** JDBC Database access URL */
+  protected static final String URL_PROPERTY = "jdbc.url";
+  /** User name to access the database */
+  protected static final String USERNAME_PROPERTY = "jdbc.user";
+  /** Password to access the database */
+  protected static final String PASSWORD_PROPERTY = "jdbc.password";
+
+  private static final String HSQLDB_PORT = System.getProperty("hsqldb.port", "9001");
+  private static final String JDBC_URL = String.format("jdbc:hsqldb:hsql://localhost:%s/goratest", HSQLDB_PORT);
+  private static final String JDBC_DRIVER_CLASS = "org.hsqldb.jdbcDriver";
+
+  private Server server;
+
+  private boolean initialized = false;
+
+  private boolean startHsqldb = true;
+
+  private void startHsqldbServer() {
+    log.info("Starting HSQLDB server");
+    server = new Server();
+    server.setDatabasePath(0,
+        System.getProperty("test.build.data", "/tmp") + "/goratest");
+    server.setDatabaseName(0, "goratest");
+    server.setDaemon(true);
+    server.setPort(Integer.parseInt(HSQLDB_PORT));
+    server.start();
+  }
+
+  @Override
+  public void setUpClass() throws Exception {
+    super.setUpClass();
+
+    if(!this.initialized && startHsqldb) {
+      startHsqldbServer();
+      this.initialized = true;
+    }
+  }
+
+  @Override
+  public void tearDownClass() throws Exception {
+    super.tearDownClass();
+    try {
+      if(server != null) {
+        server.shutdown();
+      }
+    }catch (Throwable ex) {
+      log.warn("Exception occurred while shutting down HSQLDB :"
+          + StringUtils.stringifyException(ex));
+    }
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+    super.tearDown();
+  }
+
+  @SuppressWarnings("unused")
+  private Connection createConnection(String driverClassName
+      , String url) throws Exception {
+
+    ClassLoadingUtils.loadClass(driverClassName);
+    Connection connection = DriverManager.getConnection(url);
+    connection.setAutoCommit(false);
+    return connection;
+  }
+
+
+  @Override
+  protected void setProperties(Properties properties) {
+    super.setProperties(properties);
+    properties.setProperty("gora.sqlstore." + DRIVER_CLASS_PROPERTY, JDBC_DRIVER_CLASS);
+    properties.setProperty("gora.sqlstore." + URL_PROPERTY, JDBC_URL);
+    properties.remove("gora.sqlstore." + USERNAME_PROPERTY);
+    properties.remove("gora.sqlstore." + PASSWORD_PROPERTY);
+  }
+
+}
diff --git a/trunk/gora-sql/src/test/java/org/apache/gora/sql/store/TestSqlStore.java b/trunk/gora-sql/src/test/java/org/apache/gora/sql/store/TestSqlStore.java
new file mode 100644
index 0000000..4b9d8c6
--- /dev/null
+++ b/trunk/gora-sql/src/test/java/org/apache/gora/sql/store/TestSqlStore.java
@@ -0,0 +1,148 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.sql.store;
+
+import java.io.IOException;
+
+import org.apache.gora.examples.generated.Employee;
+import org.apache.gora.examples.generated.WebPage;
+import org.apache.gora.sql.GoraSqlTestDriver;
+import org.apache.gora.sql.store.SqlStore;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.store.DataStoreTestBase;
+
+/**
+ * Test case for {@link SqlStore}
+ */
+public class TestSqlStore extends DataStoreTestBase {
+
+  static {
+    setTestDriver(new GoraSqlTestDriver());
+  }
+
+  public TestSqlStore() {
+  }
+
+  @Override
+  protected DataStore<String, Employee> createEmployeeDataStore() throws IOException {
+    SqlStore<String, Employee> store = new SqlStore<String, Employee>();
+    store.initialize(String.class, Employee.class, DataStoreFactory.properties);
+    return store;
+  }
+
+  @Override
+  protected DataStore<String, WebPage> createWebPageDataStore() throws IOException {
+    SqlStore<String, WebPage> store = new SqlStore<String, WebPage>();
+    store.initialize(String.class, WebPage.class, DataStoreFactory.properties);
+    return store;
+  }
+
+  //@Override
+  public void testDeleteByQueryFields() {
+    //TODO: implement delete fields in SqlStore
+  }
+
+  //@Override
+  public void testDeleteByQuery() throws IOException {
+    //HSQLDB somehow hangs for this test. We need to solve the issue or switch
+    //to another embedded DB.
+  }
+  
+  public void testGet() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testSchemaExists() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testGetWithFields() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testGetWebPage() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testGetWebPageDefaultFields() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testDelete() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testGetPartitions() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testTruncateSchema() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testDeleteSchema() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testPutNested() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testUpdate() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testQuery() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testQueryStartKey() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testQueryEndKey() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testQueryKeyRange() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testQueryWebPageSingleKey() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testQueryWebPageSingleKeyDefaultFields() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public void testQueryWebPageQueryEmptyResults() {
+    //TODO: implement once gora-sql is re-written with the JOOQ API (GORA-86)
+  }
+
+  public static void main(String[] args) throws Exception {
+    TestSqlStore test = new TestSqlStore();
+    TestSqlStore.setUpClass();
+    test.setUp();
+    test.testDeleteByQuery();
+    test.tearDown();
+    TestSqlStore.tearDownClass();
+  }
+}
diff --git a/trunk/gora-tutorial/build.xml b/trunk/gora-tutorial/build.xml
new file mode 100644
index 0000000..4093dea
--- /dev/null
+++ b/trunk/gora-tutorial/build.xml
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<project name="gora-tutorial" default="compile">
+  <property name="project.dir" value="${basedir}/.."/>
+
+  <import file="${project.dir}/build-common.xml"/>
+
+  <target name="compile-module">
+    <!-- copy dependent module's lib-ext jars -->
+    <copy todir="${lib.dir}" verbose="true" failonerror="false">
+      <fileset dir="../gora-hbase/lib-ext/" includes="**/*.jar"/>
+    </copy>
+  </target>
+
+</project>
diff --git a/trunk/gora-tutorial/conf/gora-cassandra-mapping.xml b/trunk/gora-tutorial/conf/gora-cassandra-mapping.xml
new file mode 100644
index 0000000..17f1ef8
--- /dev/null
+++ b/trunk/gora-tutorial/conf/gora-cassandra-mapping.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<!--
+  Gora Mapping file for Cassandra Backend
+-->
+<gora-orm>
+
+  <keyspace name="Pageview" cluster="Test Cluster" host="localhost">
+    <family name="common"/>
+    <family name="http"/>
+    <family name="misc"/>
+  </keyspace>
+
+  <class name="org.apache.gora.tutorial.log.generated.Pageview" keyClass="java.lang.Long" table="AccessLog">
+    <field name="url" family="common" qualifier="url"/>
+    <field name="timestamp" family="common" qualifier="timestamp"/>
+    <field name="ip" family="common" qualifier="ip" />
+    <field name="httpMethod" family="http" qualifier="httpMethod"/>
+    <field name="httpStatusCode" family="http" qualifier="httpStatusCode"/>
+    <field name="responseSize" family="http" qualifier="responseSize"/>
+    <field name="referrer" family="misc" qualifier="referrer"/>
+    <field name="userAgent" family="misc" qualifier="userAgent"/>
+  </class>
+
+  <class name="org.apache.gora.tutorial.log.generated.MetricDatum" keyClass="java.lang.String" table="Metrics">
+    <field name="metricDimension" family="common"  qualifier="metricDimension"/>
+    <field name="timestamp" family="common" qualifier="ts"/>
+    <field name="metric" family="common" qualifier="metric"/>
+  </class>
+
+</gora-orm>
diff --git a/trunk/gora-tutorial/conf/gora-hbase-mapping.xml b/trunk/gora-tutorial/conf/gora-hbase-mapping.xml
new file mode 100644
index 0000000..d044326
--- /dev/null
+++ b/trunk/gora-tutorial/conf/gora-hbase-mapping.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<!--
+  Gora Mapping file for HBase Backend
+-->
+<gora-orm>
+  <table name="Pageview"> <!-- optional descriptors for tables -->
+    <family name="common"/> <!-- This can also have params like compression, bloom filters -->
+    <family name="http"/>
+    <family name="misc"/>
+  </table>
+
+  <class name="org.apache.gora.tutorial.log.generated.Pageview" keyClass="java.lang.Long" table="AccessLog">
+    <field name="url" family="common" qualifier="url"/>
+    <field name="timestamp" family="common" qualifier="timestamp"/>
+    <field name="ip" family="common" qualifier="ip" />
+    <field name="httpMethod" family="http" qualifier="httpMethod"/>
+    <field name="httpStatusCode" family="http" qualifier="httpStatusCode"/>
+    <field name="responseSize" family="http" qualifier="responseSize"/>
+    <field name="referrer" family="misc" qualifier="referrer"/>
+    <field name="userAgent" family="misc" qualifier="userAgent"/>
+  </class>
+
+  <class name="org.apache.gora.tutorial.log.generated.MetricDatum" keyClass="java.lang.String" table="Metrics">
+    <field name="metricDimension" family="common"  qualifier="metricDimension"/>
+    <field name="timestamp" family="common" qualifier="ts"/>
+    <field name="metric" family="common" qualifier="metric"/>
+  </class>
+
+</gora-orm>
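The mapping above pairs each Avro field with an HBase `family:qualifier` column. To illustrate what an HBase-backed store derives from this file, here is a standalone sketch that parses a mapping fragment with the JDK's DOM API and collects the field-to-column associations; the inlined XML string and the helper name `columnsOf` are illustrative, not part of Gora.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class Main {

  /** Collects field name -> "family:qualifier" pairs from a gora-orm mapping
   *  document, mirroring the association an HBase store reads from this file. */
  public static Map<String, String> columnsOf(String mappingXml) {
    try {
      Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
          .parse(new ByteArrayInputStream(mappingXml.getBytes(StandardCharsets.UTF_8)));
      Map<String, String> columns = new LinkedHashMap<>();
      NodeList fields = doc.getElementsByTagName("field");
      for (int i = 0; i < fields.getLength(); i++) {
        Element field = (Element) fields.item(i);
        columns.put(field.getAttribute("name"),
            field.getAttribute("family") + ":" + field.getAttribute("qualifier"));
      }
      return columns;
    } catch (Exception e) {
      throw new IllegalArgumentException("not a parsable mapping document", e);
    }
  }

  public static void main(String[] args) {
    String xml = "<gora-orm>"
        + "<class name=\"Pageview\" keyClass=\"java.lang.Long\" table=\"AccessLog\">"
        + "<field name=\"url\" family=\"common\" qualifier=\"url\"/>"
        + "<field name=\"httpMethod\" family=\"http\" qualifier=\"httpMethod\"/>"
        + "</class></gora-orm>";
    System.out.println(columnsOf(xml)); // prints {url=common:url, httpMethod=http:httpMethod}
  }
}
```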
diff --git a/trunk/gora-tutorial/conf/gora-sql-mapping.xml b/trunk/gora-tutorial/conf/gora-sql-mapping.xml
new file mode 100644
index 0000000..01bbb2b
--- /dev/null
+++ b/trunk/gora-tutorial/conf/gora-sql-mapping.xml
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<!-- 
+  Gora Mapping file for SQL Backend
+-->
+<gora-orm>
+  <class name="org.apache.gora.tutorial.log.generated.Pageview" keyClass="java.lang.Long" table="AccessLog">
+    <primarykey column="line"/>
+    <field name="url" column="url" length="512" primarykey="true"/>
+    <field name="timestamp" column="timestamp"/>
+    <field name="ip" column="ip" length="16"/>
+    <field name="httpMethod" column="httpMethod" length="6"/>
+    <field name="httpStatusCode" column="httpStatusCode"/>
+    <field name="responseSize" column="responseSize"/>
+    <field name="referrer" column="referrer" length="512"/>
+    <field name="userAgent" column="userAgent" length="512"/>
+  </class>
+
+  <class name="org.apache.gora.tutorial.log.generated.MetricDatum" keyClass="java.lang.String" table="Metrics">
+    <primarykey column="id" length="512"/>
+    <field name="metricDimension" column="metricDimension" length="512"/>
+    <field name="timestamp" column="ts"/>
+    <field name="metric" column="metric"/>
+  </class>
+
+</gora-orm>
+
diff --git a/trunk/gora-tutorial/conf/gora.properties b/trunk/gora-tutorial/conf/gora.properties
new file mode 100644
index 0000000..d7b49be
--- /dev/null
+++ b/trunk/gora-tutorial/conf/gora.properties
@@ -0,0 +1,41 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+##gora.datastore.default is the default datastore implementation to use
+##if it is not passed to the DataStoreFactory#createDataStore() method.
+gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
+#gora.datastore.default=org.apache.gora.cassandra.store.CassandraStore
+
+##whether to create schema automatically if not exists.
+gora.datastore.autocreateschema=true
+
+##Cassandra properties for gora-cassandra module using Cassandra
+#gora.cassandrastore.servers=localhost:9160
+
+##JDBC properties for gora-sql module using HSQL
+gora.sqlstore.jdbc.driver=org.hsqldb.jdbcDriver
+##HSQL jdbc connection as persistent in-process database
+gora.sqlstore.jdbc.url=jdbc:hsqldb:file:./hsql-data
+
+##HSQL jdbc connection as network server
+#gora.sqlstore.jdbc.url=jdbc:hsqldb:hsql://localhost/goratest
+
+##JDBC properties for gora-sql module using MySQL
+#gora.sqlstore.jdbc.driver=com.mysql.jdbc.Driver
+#gora.sqlstore.jdbc.url=jdbc:mysql://localhost:3306/goratest
+#gora.sqlstore.jdbc.user=root
+#gora.sqlstore.jdbc.password=
+
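The `gora.datastore.default` key above is how the tutorial picks a backend when no datastore class is passed explicitly. A minimal, Gora-free sketch of that lookup using `java.util.Properties` (the `MemStore` fallback name here is illustrative only):

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class Main {

  /** Resolves the datastore class name from gora.properties-style text, the
   *  way the tutorial relies on gora.datastore.default when no explicit
   *  class is given. The fallback class name is purely illustrative. */
  public static String defaultDataStore(String propertiesText) {
    Properties props = new Properties();
    try {
      props.load(new StringReader(propertiesText));
    } catch (IOException e) {
      throw new UncheckedIOException(e); // a StringReader does not fail in practice
    }
    return props.getProperty("gora.datastore.default",
        "org.apache.gora.memory.store.MemStore"); // illustrative fallback
  }

  public static void main(String[] args) {
    String conf = "gora.datastore.default=org.apache.gora.hbase.store.HBaseStore\n"
        + "gora.datastore.autocreateschema=true\n";
    System.out.println(defaultDataStore(conf));
    // prints org.apache.gora.hbase.store.HBaseStore
  }
}
```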
diff --git a/trunk/gora-tutorial/ivy/ivy.xml b/trunk/gora-tutorial/ivy/ivy.xml
new file mode 100644
index 0000000..e6c07e6
--- /dev/null
+++ b/trunk/gora-tutorial/ivy/ivy.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<ivy-module version="2.0">
+    <info 
+      organisation="org.apache.gora"
+      module="gora-tutorial"
+      status="integration"/>
+
+  <configurations>
+    <include file="../../ivy/ivy-configurations.xml"/>
+  </configurations>
+
+  <publications defaultconf="compile">
+    <artifact name="gora-tutorial" conf="compile"/>
+  </publications>
+
+  <dependencies>
+    <dependency org="org.apache.gora" name="gora-hbase" rev="latest.integration" changing="true" conf="*->@"/>
+    <dependency org="org.apache.gora" name="gora-sql" rev="latest.integration" changing="true" conf="*->@"/>
+    
+    <!-- Uncomment below if you are using MySQL -->
+    <!-- <dependency org="mysql" name="mysql-connector-java" rev="5.1.13" conf="*->default"/> -->
+
+    <!-- Uncomment below if you are using Hsqldb -->
+    <!--<dependency org="org.hsqldb" name="hsqldb" rev="2.0.0" conf="*->default"/>-->
+
+  </dependencies>
+</ivy-module>
+
diff --git a/trunk/gora-tutorial/pom.xml b/trunk/gora-tutorial/pom.xml
new file mode 100644
index 0000000..48b0997
--- /dev/null
+++ b/trunk/gora-tutorial/pom.xml
@@ -0,0 +1,187 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+     <!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+    -->
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.gora</groupId>
+        <artifactId>gora</artifactId>
+        <version>0.2.1</version>
+        <relativePath>../</relativePath>
+    </parent>
+    <artifactId>gora-tutorial</artifactId>
+    <packaging>bundle</packaging>
+
+    <name>Apache Gora :: Tutorial</name>
+        <url>http://gora.apache.org</url>
+    <description>The Apache Gora open source framework provides an in-memory data model and 
+    persistence for big data. Gora supports persisting to column stores, key value stores, 
+    document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce 
+    support.</description>
+    <inceptionYear>2010</inceptionYear>
+    <organization>
+    	<name>The Apache Software Foundation</name>
+    	<url>http://www.apache.org/</url>
+    </organization>
+    <scm>
+    	<url>http://svn.apache.org/viewvc/gora/tags/apache-gora-0.2.1/gora-tutorial</url>
+    	<connection>scm:svn:http://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-tutorial</connection>
+    	<developerConnection>scm:svn:https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/gora-tutorial</developerConnection>
+    </scm>
+    <issueManagement>
+    	<system>JIRA</system>
+    	<url>https://issues.apache.org/jira/browse/GORA</url>
+    </issueManagement>
+    <ciManagement>
+    	<system>Jenkins</system>
+    	<url>https://builds.apache.org/job/Gora-trunk/</url>
+    </ciManagement>
+    
+    <properties>
+        <osgi.import>*</osgi.import>
+        <osgi.export>org.apache.gora.tutorial*;version="${project.version}";-noimport:=true</osgi.export>
+    </properties>
+
+    <build>
+        <directory>target</directory>
+        <outputDirectory>target/classes</outputDirectory>
+        <finalName>${project.artifactId}-${project.version}</finalName>
+        <testOutputDirectory>target/test-classes</testOutputDirectory>
+        <testSourceDirectory>src/test/java</testSourceDirectory>
+        <sourceDirectory>src/main/java</sourceDirectory>
+        <resources>
+          <resource>
+            <directory>${basedir}/src/main/resources</directory>
+          </resource>
+          <resource>
+            <directory>${basedir}/conf</directory>
+          </resource>
+        </resources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>build-helper-maven-plugin</artifactId>
+                <version>${build-helper-maven-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <phase>generate-sources</phase>
+                        <goals>
+                            <goal>add-source</goal>
+                        </goals>
+                        <configuration>
+                            <sources>
+                                <source>src/examples/java</source>
+                            </sources>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <version>${maven-jar-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>jar</goal>
+                            <!-- goal>test-jar</goal-->
+                        </goals>
+                        <configuration>
+                        <archive>
+                            <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
+                        </archive>
+                    </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+
+    <dependencies>
+        <!-- Gora Internal Dependencies -->
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+        </dependency>
+        
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-hbase</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-cassandra</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-sql</artifactId>
+        </dependency>
+
+        <!-- Hadoop Dependencies -->
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-core</artifactId>
+        </dependency>
+        
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>avro</artifactId>
+        </dependency>
+        
+        <!-- Misc Dependencies -->
+        <dependency>
+            <groupId>org.jdom</groupId>
+            <artifactId>jdom</artifactId>
+        </dependency>
+        
+        <!-- Logging Dependencies -->
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+            <exclusions>
+                <exclusion>
+                    <groupId>javax.jms</groupId>
+                    <artifactId>jms</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+
+        <dependency>
+            <groupId>org.hsqldb</groupId>
+            <artifactId>hsqldb</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>mysql</groupId>
+            <artifactId>mysql-connector-java</artifactId>
+        </dependency>
+    </dependencies>
+
+</project>
diff --git a/trunk/gora-tutorial/src/examples/java/.gitignore b/trunk/gora-tutorial/src/examples/java/.gitignore
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/trunk/gora-tutorial/src/examples/java/.gitignore
diff --git a/trunk/gora-tutorial/src/main/avro/metricdatum.json b/trunk/gora-tutorial/src/main/avro/metricdatum.json
new file mode 100644
index 0000000..9fa61c0
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/avro/metricdatum.json
@@ -0,0 +1,10 @@
+{
+  "type": "record",
+  "name": "MetricDatum",
+  "namespace": "org.apache.gora.tutorial.log.generated",
+  "fields" : [
+    {"name": "metricDimension", "type": "string"},
+    {"name": "timestamp", "type": "long"},
+    {"name": "metric", "type" : "long"}
+  ]
+}
diff --git a/trunk/gora-tutorial/src/main/avro/pageview.json b/trunk/gora-tutorial/src/main/avro/pageview.json
new file mode 100644
index 0000000..34c08b3
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/avro/pageview.json
@@ -0,0 +1,15 @@
+{
+  "type": "record",
+  "name": "Pageview",
+  "namespace": "org.apache.gora.tutorial.log.generated",
+  "fields" : [
+    {"name": "url", "type": "string"},
+    {"name": "timestamp", "type": "long"},
+    {"name": "ip", "type": "string"},
+    {"name": "httpMethod", "type": "string"},
+    {"name": "httpStatusCode", "type": "int"},
+    {"name": "responseSize", "type": "int"},
+    {"name": "referrer", "type": "string"},
+    {"name": "userAgent", "type": "string"}
+  ]
+}
diff --git a/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/KeyValueWritable.java b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/KeyValueWritable.java
new file mode 100644
index 0000000..b4cdaf1
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/KeyValueWritable.java
@@ -0,0 +1,116 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.gora.tutorial.log;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * A WritableComparable containing a key-value WritableComparable pair.
+ * @param <K> the class of key 
+ * @param <V> the class of value
+ */
+public class KeyValueWritable<K extends WritableComparable, V extends WritableComparable> 
+  implements WritableComparable<KeyValueWritable<K,V>> {
+
+  protected K key = null;
+  protected V value =  null;
+  
+  public KeyValueWritable() {
+  }
+  
+  public KeyValueWritable(K key, V value) {
+    this.key = key;
+    this.value = value;
+  }
+
+  public K getKey() {
+    return key;
+  }
+  
+  public void setKey(K key) {
+    this.key = key;
+  }
+  
+  public V getValue() {
+    return value;
+  }
+  
+  public void setValue(V value) {
+    this.value = value;
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    if(key == null || value == null) {
+      throw new IllegalStateException(
+          "key and value must be set before readFields() can deserialize into them");
+    }
+    key.readFields(in);
+    value.readFields(in);
+  }
+  
+  @Override
+  public void write(DataOutput out) throws IOException {
+    key.write(out);
+    value.write(out);
+  }
+
+  @Override
+  public int hashCode() {
+    final int prime = 31;
+    int result = 1;
+    result = prime * result + ((key == null) ? 0 : key.hashCode());
+    result = prime * result + ((value == null) ? 0 : value.hashCode());
+    return result;
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+    if (this == obj)
+      return true;
+    if (obj == null)
+      return false;
+    if (getClass() != obj.getClass())
+      return false;
+    KeyValueWritable other = (KeyValueWritable) obj;
+    if (key == null) {
+      if (other.key != null)
+        return false;
+    } else if (!key.equals(other.key))
+      return false;
+    if (value == null) {
+      if (other.value != null)
+        return false;
+    } else if (!value.equals(other.value))
+      return false;
+    return true;
+  }
+
+  @Override
+  public int compareTo(KeyValueWritable<K, V> o) {
+    int cmp = key.compareTo(o.key);
+    if(cmp != 0)
+      return cmp;
+    
+    return value.compareTo(o.value);
+  }
+}
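The class above follows the standard Writable contract: `write()` serializes key then value, `readFields()` reads them back in the same order, and `compareTo()` orders by key first. A Hadoop-free sketch of the same pattern, using a simplified `(String, long)` pair so the round trip can be exercised with plain `java.io` streams (the `Pair` and `roundTrip` names are illustrative, not Gora API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class Main {

  /** Simplified, Hadoop-free stand-in for KeyValueWritable: a (String, long)
   *  pair with the same key-first ordering and write/readFields contract. */
  public static final class Pair implements Comparable<Pair> {
    public final String key;
    public final long value;

    public Pair(String key, long value) {
      this.key = key;
      this.value = value;
    }

    /** Serializes the key first, then the value, as KeyValueWritable.write() does. */
    public void write(DataOutput out) throws IOException {
      out.writeUTF(key);
      out.writeLong(value);
    }

    /** Reads the fields back in the same order they were written. */
    public static Pair readFields(DataInput in) throws IOException {
      return new Pair(in.readUTF(), in.readLong());
    }

    @Override
    public int compareTo(Pair o) {
      int cmp = key.compareTo(o.key); // keys dominate the ordering...
      return cmp != 0 ? cmp : Long.compare(value, o.value); // ...values break ties
    }
  }

  /** Writes the pair to a byte buffer and reads it back, mimicking one
   *  map-output serialization/deserialization cycle. */
  public static Pair roundTrip(Pair p) {
    try {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      p.write(new DataOutputStream(bytes));
      return Pair.readFields(
          new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
    } catch (IOException e) {
      throw new UncheckedIOException(e); // in-memory streams should not fail
    }
  }

  public static void main(String[] args) {
    Pair copy = roundTrip(new Pair("url_a", 42L));
    System.out.println(copy.key + "=" + copy.value); // prints "url_a=42"
  }
}
```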
diff --git a/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/LogAnalytics.java b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/LogAnalytics.java
new file mode 100644
index 0000000..e951535
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/LogAnalytics.java
@@ -0,0 +1,198 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.tutorial.log;
+
+import java.io.IOException;
+
+import org.apache.avro.util.Utf8;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.mapreduce.GoraMapper;
+import org.apache.gora.mapreduce.GoraReducer;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.tutorial.log.generated.MetricDatum;
+import org.apache.gora.tutorial.log.generated.Pageview;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * LogAnalytics is the tutorial class illustrating the Gora MapReduce API.
+ * The analytics MapReduce job reads the web access data stored earlier by the
+ * {@link LogManager}, and calculates the aggregate daily pageviews. The
+ * output of the job is stored in a Gora compatible data store.
+ *
+ * <p>See the tutorial.html file in docs or go to the
+ * <a href="http://incubator.apache.org/gora/docs/current/tutorial.html">
+ * web site</a> for more information.</p>
+ */
+public class LogAnalytics extends Configured implements Tool {
+
+  private static final Logger log = LoggerFactory.getLogger(LogAnalytics.class);
+  
+  /** The number of milliseconds in a day */
+  private static final long DAY_MILIS = 1000 * 60 * 60 * 24;
+    
+  /**
+   * The Mapper takes Long keys and Pageview objects, and emits 
+   * tuples of &lt;url, day&gt; as keys and 1 as values. Input values are 
+   * read from the input data store.
+   * Note that all Hadoop serializable classes can be used as map output key and value.
+   */
+  public static class LogAnalyticsMapper 
+    extends GoraMapper<Long, Pageview, TextLong, LongWritable> {
+    
+    private LongWritable one = new LongWritable(1L);
+  
+    private TextLong tuple;
+    
+    @Override
+    protected void setup(Context context) throws IOException, InterruptedException {
+      tuple = new TextLong();
+      tuple.setKey(new Text());
+      tuple.setValue(new LongWritable());
+    }
+    
+    @Override
+    protected void map(Long key, Pageview pageview, Context context)
+        throws IOException, InterruptedException {
+
+      Utf8 url = pageview.getUrl();
+      long day = getDay(pageview.getTimestamp());
+
+      tuple.getKey().set(url.toString());
+      tuple.getValue().set(day);
+
+      context.write(tuple, one);
+    }
+    
+    /** Rolls up the given timestamp to day granularity, so that
+     * data can be aggregated daily */
+    private long getDay(long timeStamp) {
+      return (timeStamp / DAY_MILIS) * DAY_MILIS; 
+    }
+  }
+  
+  /**
+   * The Reducer receives tuples of &lt;url, day&gt; as keys and a list of
+   * values corresponding to the keys, and emits combined keys and
+   * {@link MetricDatum} objects. The metric datum objects are stored
+   * as job outputs in the output data store.
+   */
+  public static class LogAnalyticsReducer 
+    extends GoraReducer<TextLong, LongWritable, String, MetricDatum> {
+    
+    private MetricDatum metricDatum = new MetricDatum();
+    
+    @Override
+    protected void reduce(TextLong tuple, Iterable<LongWritable> values,
+        Context context) throws IOException, InterruptedException {
+      
+      long sum = 0L; //sum up the values
+      for(LongWritable value: values) {
+        sum+= value.get();
+      }
+      
+      String dimension = tuple.getKey().toString();
+      long timestamp = tuple.getValue().get();
+      
+      metricDatum.setMetricDimension(new Utf8(dimension));
+      metricDatum.setTimestamp(timestamp);
+      
+      String key = metricDatum.getMetricDimension().toString();
+      key += "_" + Long.toString(timestamp);
+      metricDatum.setMetric(sum);
+      
+      context.write(key, metricDatum);
+    }
+  }
+  
+  /**
+   * Creates and returns the {@link Job} for submitting to Hadoop MapReduce.
+   * @param inStore the input data store holding the {@link Pageview} entries
+   * @param outStore the output data store for the aggregated {@link MetricDatum} objects
+   * @param numReducer the number of reduce tasks
+   * @return the configured {@link Job}
+   * @throws IOException if the mapper or reducer cannot be initialized
+   */
+  public Job createJob(DataStore<Long, Pageview> inStore
+      , DataStore<String, MetricDatum> outStore, int numReducer) throws IOException {
+    Job job = new Job(getConf());
+
+    job.setJobName("Log Analytics");
+    job.setNumReduceTasks(numReducer);
+    job.setJarByClass(getClass());
+
+    /* Mappers are initialized with GoraMapper.initMapperJob() or
+     * GoraInputFormat.setInput() */
+    GoraMapper.initMapperJob(job, inStore, TextLong.class, LongWritable.class
+        , LogAnalyticsMapper.class, true);
+
+    /* Reducers are initialized with GoraReducer#initReducer().
+     * If the output is not to be persisted via Gora, any reducer 
+     * can be used instead. */
+    GoraReducer.initReducerJob(job, outStore, LogAnalyticsReducer.class);
+    
+    return job;
+  }
+  
+  @Override
+  public int run(String[] args) throws Exception {
+    
+    DataStore<Long, Pageview> inStore;
+    DataStore<String, MetricDatum> outStore;
+    Configuration conf = new Configuration();    
+
+    if(args.length > 0) {
+      String dataStoreClass = args[0];
+      inStore = DataStoreFactory.
+          getDataStore(dataStoreClass, Long.class, Pageview.class, conf);
+      if(args.length > 1) {
+        dataStoreClass = args[1];
+      }
+      outStore = DataStoreFactory.getDataStore(dataStoreClass,
+          String.class, MetricDatum.class, conf);
+    } else {
+      inStore = DataStoreFactory.getDataStore(Long.class, Pageview.class, conf);
+      outStore = DataStoreFactory.getDataStore(String.class, MetricDatum.class, conf);
+    }
+    
+    Job job = createJob(inStore, outStore, 3);
+    boolean success = job.waitForCompletion(true);
+    
+    inStore.close();
+    outStore.close();
+    
+    log.info("Log completed with " + (success ? "success" : "failure"));
+    
+    return success ? 0 : 1;
+  }
+  
+  public static void main(String[] args) throws Exception {
+    //run as any other MR job
+    int ret = ToolRunner.run(new LogAnalytics(), args);
+    System.exit(ret);
+  }
+  
+}
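The mapper's day rollup is the one piece of arithmetic the whole aggregation hinges on: truncating a millisecond timestamp to its day boundary makes every pageview of the same URL and day collide on one reduce key. A standalone sketch of that rollup (the sample timestamps are arbitrary, chosen only to fall on the same day):

```java
public class Main {

  /** Same constant and rollup as LogAnalyticsMapper: truncate a millisecond
   *  timestamp down to its UTC day boundary. */
  static final long DAY_MILLIS = 1000L * 60 * 60 * 24;

  public static long getDay(long timestamp) {
    return (timestamp / DAY_MILLIS) * DAY_MILLIS;
  }

  public static void main(String[] args) {
    long dayStart = 100L * DAY_MILLIS;          // an arbitrary day boundary
    long morning = dayStart + 3_600_000L;       // +1 hour
    long evening = dayStart + 20 * 3_600_000L;  // +20 hours
    // both pageviews roll up to the same <url, day> reduce key
    System.out.println(getDay(morning) == getDay(evening)); // prints true
  }
}
```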
diff --git a/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/LogManager.java b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/LogManager.java
new file mode 100644
index 0000000..85924e9
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/LogManager.java
@@ -0,0 +1,261 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.tutorial.log;
+
+import java.io.BufferedReader;
+import java.io.FileReader;
+import java.io.IOException;
+import java.text.ParseException;
+import java.text.SimpleDateFormat;
+import java.util.StringTokenizer;
+
+import org.apache.avro.util.Utf8;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.gora.query.Query;
+import org.apache.gora.query.Result;
+import org.apache.gora.store.DataStore;
+import org.apache.gora.store.DataStoreFactory;
+import org.apache.gora.tutorial.log.generated.Pageview;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * LogManager is the tutorial class illustrating basic
+ * {@link DataStore} API usage. The LogManager class is used
+ * to parse web server logs in combined log format, store the
+ * data in a Gora compatible data store, and query and manipulate the stored data.
+ * 
+ * <p>In the data model, keys are the line numbers in the log file, 
+ * and the values are Pageview objects, generated from 
+ * <code>gora-tutorial/src/main/avro/pageview.json</code>.
+ * 
+ * <p>See the tutorial.html file in docs or go to the 
+ * <a href="http://gora.apache.org/docs/current/tutorial.html"> 
+ * web site</a> for more information.</p>
+ */
+public class LogManager {
+
+  private static final Logger log = LoggerFactory.getLogger(LogManager.class);
+  
+  private DataStore<Long, Pageview> dataStore; 
+  
+  private static final SimpleDateFormat dateFormat 
+    = new SimpleDateFormat("dd/MMM/yyyy:HH:mm:ss Z");
+  
+  public LogManager() {
+    try {
+      init();
+    } catch (IOException ex) {
+      throw new RuntimeException(ex);
+    }
+  }
+  
+  private void init() throws IOException {
+    //Data store objects are created from a factory. It is necessary to 
+    //provide the key and value class. The datastore class is optional, 
+    //and if not specified it will be read from the properties file
+    dataStore = DataStoreFactory.getDataStore(Long.class, Pageview.class,
+            new Configuration());
+  }
+  
+  /**
+   * Parses a log file and stores the contents in the data store.
+   * @param input the input file location
+   */
+  private void parse(String input) throws IOException, ParseException {
+    log.info("Parsing file: " + input);
+    BufferedReader reader = new BufferedReader(new FileReader(input));
+    long lineCount = 0;
+    try {
+      String line;
+      //the null check in the loop condition also handles empty input files
+      while((line = reader.readLine()) != null) {
+        Pageview pageview = parseLine(line);
+
+        if(pageview != null) {
+          //store the pageview
+          storePageview(lineCount++, pageview);
+        }
+      }
+
+    } finally {
+      reader.close();  
+    }
+    log.info("Finished parsing file. Total number of log lines: " + lineCount);
+  }
+  
+  /** Parses a single log line in combined log format using StringTokenizers */
+  private Pageview parseLine(String line) throws ParseException {
+    StringTokenizer matcher = new StringTokenizer(line);
+    //parse the log line
+    String ip = matcher.nextToken();
+    matcher.nextToken(); //discard
+    matcher.nextToken();
+    long timestamp = dateFormat.parse(matcher.nextToken("]").substring(2)).getTime();
+    matcher.nextToken("\"");
+    String request = matcher.nextToken("\"");
+    String[] requestParts = request.split(" ");
+    String httpMethod = requestParts[0];
+    String url = requestParts[1];
+    matcher.nextToken(" ");
+    int httpStatusCode = Integer.parseInt(matcher.nextToken());
+    int responseSize = Integer.parseInt(matcher.nextToken());
+    matcher.nextToken("\"");
+    String referrer = matcher.nextToken("\"");
+    matcher.nextToken("\"");
+    String userAgent = matcher.nextToken("\"");
+    
+    //construct and return pageview object
+    Pageview pageview = new Pageview();
+    pageview.setIp(new Utf8(ip));
+    pageview.setTimestamp(timestamp);
+    pageview.setHttpMethod(new Utf8(httpMethod));
+    pageview.setUrl(new Utf8(url));
+    pageview.setHttpStatusCode(httpStatusCode);
+    pageview.setResponseSize(responseSize);
+    pageview.setReferrer(new Utf8(referrer));
+    pageview.setUserAgent(new Utf8(userAgent));
+    
+    return pageview;
+  }
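The token-by-token dance above is easy to get wrong when delimiters change mid-line. An alternative, self-contained sketch parses the same NCSA combined log format with a single regular expression; the sample line, the `parseLine` return convention (a `String[]` instead of a `Pageview`), and the field order are all illustrative, not the tutorial's API:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Main {

  // One regex for combined log format: ip, ident, user, [date], "request",
  // status, size, "referrer", "user agent".
  private static final Pattern COMBINED = Pattern.compile(
      "^(\\S+) \\S+ \\S+ \\[([^\\]]+)\\] \"(\\S+) (\\S+)[^\"]*\" (\\d+) (\\d+) \"([^\"]*)\" \"([^\"]*)\"");

  // Same date pattern the tutorial's LogManager uses for the bracketed field
  private static final SimpleDateFormat DATE =
      new SimpleDateFormat("dd/MMM/yyyy:HH:mm:ss Z", Locale.ENGLISH);

  /** Returns {ip, timestampMillis, httpMethod, url, status, size, referrer,
   *  userAgent}, or null when the line does not match the format. */
  public static String[] parseLine(String line) {
    Matcher m = COMBINED.matcher(line);
    if (!m.find()) {
      return null;
    }
    try {
      long ts = DATE.parse(m.group(2)).getTime();
      return new String[] { m.group(1), Long.toString(ts), m.group(3), m.group(4),
          m.group(5), m.group(6), m.group(7), m.group(8) };
    } catch (ParseException e) {
      return null; // unparsable date: treat the line as malformed
    }
  }

  public static void main(String[] args) {
    // Illustrative sample line in combined log format
    String[] fields = parseLine("88.240.129.183 - - [10/Oct/2009:13:55:36 -0700] "
        + "\"GET /index.php HTTP/1.0\" 200 43 \"http://www.example.com/\" \"Mozilla/5.0\"");
    System.out.println(fields[2] + " " + fields[3] + " -> " + fields[4]);
    // prints "GET /index.php -> 200"
  }
}
```

A regex keeps the field positions explicit and makes malformed lines return null instead of throwing from deep inside a tokenizer.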
+  
+  /** Stores the pageview object with the given key */
+  private void storePageview(long key, Pageview pageview) throws IOException {
+    log.info("Storing Pageview in: " + dataStore.toString());
+    dataStore.put(key, pageview);
+  }
+  
+  /** Fetches a single pageview object and prints it */
+  private void get(long key) throws IOException {
+    Pageview pageview = dataStore.get(key);
+    printPageview(pageview);
+  }
+  
+  /** Queries and prints a single pageview object */
+  private void query(long key) throws IOException {
+    //Queries are constructed from the data store
+    Query<Long, Pageview> query = dataStore.newQuery();
+    query.setKey(key);
+    
+    Result<Long, Pageview> result = query.execute(); //Actually executes the query.
+    // alternatively dataStore.execute(query); can be used
+    
+    printResult(result);
+  }
+  
+  /** Queries and prints pageview objects with keys between startKey and endKey */
+  private void query(long startKey, long endKey) throws IOException {
+    Query<Long, Pageview> query = dataStore.newQuery();
+    //set the properties of query
+    query.setStartKey(startKey);
+    query.setEndKey(endKey);
+    
+    Result<Long, Pageview> result = query.execute();
+    
+    printResult(result);
+  }
+  
+  
+  /** Deletes the pageview with the given line number */
+  private void delete(long lineNum) throws Exception {
+    dataStore.delete(lineNum);
+    dataStore.flush(); //write operations may need to be flushed
+                       //before they are committed
+    log.info("pageview with key: " + lineNum + " deleted");
+  }
+  
+  /** Illustrates the delete-by-query call */
+  private void deleteByQuery(long startKey, long endKey) throws IOException {
+    //Constructs a query from the dataStore. The matching rows to this query will be deleted
+    Query<Long, Pageview> query = dataStore.newQuery();
+    //set the properties of query
+    query.setStartKey(startKey);
+    query.setEndKey(endKey);
+    
+    dataStore.deleteByQuery(query);
+    log.info("pageviews with keys between " + startKey + " and " + endKey + " are deleted");
+  }
+  
+  private void printResult(Result<Long, Pageview> result) throws IOException {
+    
+    while(result.next()) { //advances the Result and returns false at the end
+      long resultKey = result.getKey(); //obtain current key
+      Pageview resultPageview = result.get(); //obtain current value object
+      
+      //print the results
+      System.out.println(resultKey + ":");
+      printPageview(resultPageview);
+    }
+    
+    System.out.println("Number of pageviews from the query: " + result.getOffset());
+  }
+  
+  /** Pretty prints the pageview object to stdout */
+  private void printPageview(Pageview pageview) {
+    if(pageview == null) {
+      System.out.println("No result to show"); 
+    } else {
+      System.out.println(pageview.toString());
+    }
+  }
+  
+  private void close() throws IOException {
+    //It is very important to close the datastore properly, otherwise
+    //some data loss might occur.
+    if(dataStore != null)
+      dataStore.close();
+  }
+  
+  private static final String USAGE = "LogManager -parse <input_log_file>\n" +
+                                      "           -get <lineNum>\n" +
+                                      "           -query <lineNum>\n" +
+                                      "           -query <startLineNum> <endLineNum>\n" +
+                                      "           -delete <lineNum>\n" +
+                                      "           -deleteByQuery <startLineNum> <endLineNum>\n";
+  
+  public static void main(String[] args) throws Exception {
+    if(args.length < 2) {
+      System.err.println(USAGE);
+      System.exit(1);
+    }
+    
+    LogManager manager = new LogManager();
+    
+    if("-parse".equals(args[0])) {
+      manager.parse(args[1]);
+    } else if("-get".equals(args[0])) {
+      manager.get(Long.parseLong(args[1]));
+    } else if("-query".equals(args[0])) {
+      if(args.length == 2) 
+        manager.query(Long.parseLong(args[1]));
+      else 
+        manager.query(Long.parseLong(args[1]), Long.parseLong(args[2]));
+    } else if("-delete".equals(args[0])) {
+      manager.delete(Long.parseLong(args[1]));
+    } else if("-deleteByQuery".equals(args[0])) {
+      if(args.length < 3) {
+        System.err.println(USAGE);
+        System.exit(1);
+      }
+      manager.deleteByQuery(Long.parseLong(args[1]), Long.parseLong(args[2]));
+    } else {
+      System.err.println(USAGE);
+      System.exit(1);
+    }
+    
+    manager.close();
+  }
+  
+}
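The parseLine method above leans on a subtle StringTokenizer feature: each nextToken(String) call switches the active delimiter set, which lets one pass peel off the bracketed timestamp and the quoted request string. A self-contained sketch of that technique, under the assumption of standard Apache combined log format (the class name CombinedLogParser is hypothetical and not part of the tutorial):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.StringTokenizer;

// Standalone sketch of the delimiter-switching StringTokenizer technique
// used by LogManager#parseLine for combined log format lines.
public class CombinedLogParser {

  private static final SimpleDateFormat DATE_FORMAT =
      new SimpleDateFormat("dd/MMM/yyyy:HH:mm:ss Z", Locale.ENGLISH);

  /** Returns {ip, epochMillis, httpMethod, url} for a combined-format line. */
  public static String[] parse(String line) {
    StringTokenizer matcher = new StringTokenizer(line);
    String ip = matcher.nextToken();
    matcher.nextToken(); // remote logname, usually "-"
    matcher.nextToken(); // remote user, usually "-"
    // switch the delimiter to ']' to capture the bracketed timestamp;
    // substring(2) drops the leading " [" left over from the default delimiters
    String ts = matcher.nextToken("]").substring(2);
    long timestamp;
    try {
      timestamp = DATE_FORMAT.parse(ts).getTime();
    } catch (ParseException e) {
      throw new RuntimeException("bad timestamp: " + ts, e);
    }
    matcher.nextToken("\""); // consume "] " up to the opening quote
    String request = matcher.nextToken("\""); // e.g. GET /index.php HTTP/1.1
    String[] requestParts = request.split(" ");
    return new String[] { ip, Long.toString(timestamp),
        requestParts[0], requestParts[1] };
  }
}
```

Note the substring(2): after the third default-delimited token, the tokenizer's position still precedes the space and opening bracket, so both characters ride along in the "]"-delimited token and must be trimmed.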
diff --git a/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/TextLong.java b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/TextLong.java
new file mode 100644
index 0000000..98fc771
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/TextLong.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.tutorial.log;
+
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+
+/**
+ * A {@link KeyValueWritable} of {@link Text} keys and 
+ * {@link LongWritable} values. 
+ */
+public class TextLong extends KeyValueWritable<Text, LongWritable> {
+
+  public TextLong() {
+    key = new Text();
+    value = new LongWritable();
+  }
+  
+}
diff --git a/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/generated/MetricDatum.java b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/generated/MetricDatum.java
new file mode 100644
index 0000000..2cba73f
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/generated/MetricDatum.java
@@ -0,0 +1,96 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.tutorial.log.generated;
+
+import org.apache.avro.AvroRuntimeException;
+import org.apache.avro.Schema;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.impl.PersistentBase;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+
+@SuppressWarnings("all")
+public class MetricDatum extends PersistentBase {
+  public static final Schema _SCHEMA = Schema.parse("{\"type\":\"record\",\"name\":\"MetricDatum\",\"namespace\":\"org.apache.gora.tutorial.log.generated\",\"fields\":[{\"name\":\"metricDimension\",\"type\":\"string\"},{\"name\":\"timestamp\",\"type\":\"long\"},{\"name\":\"metric\",\"type\":\"long\"}]}");
+  public static enum Field {
+    METRIC_DIMENSION(0,"metricDimension"),
+    TIMESTAMP(1,"timestamp"),
+    METRIC(2,"metric"),
+    ;
+    private int index;
+    private String name;
+    Field(int index, String name) {this.index=index;this.name=name;}
+    public int getIndex() {return index;}
+    public String getName() {return name;}
+    public String toString() {return name;}
+  };
+  public static final String[] _ALL_FIELDS = {"metricDimension","timestamp","metric",};
+  static {
+    PersistentBase.registerFields(MetricDatum.class, _ALL_FIELDS);
+  }
+  private Utf8 metricDimension;
+  private long timestamp;
+  private long metric;
+  public MetricDatum() {
+    this(new StateManagerImpl());
+  }
+  public MetricDatum(StateManager stateManager) {
+    super(stateManager);
+  }
+  public MetricDatum newInstance(StateManager stateManager) {
+    return new MetricDatum(stateManager);
+  }
+  public Schema getSchema() { return _SCHEMA; }
+  public Object get(int _field) {
+    switch (_field) {
+    case 0: return metricDimension;
+    case 1: return timestamp;
+    case 2: return metric;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int _field, Object _value) {
+    if(isFieldEqual(_field, _value)) return;
+    getStateManager().setDirty(this, _field);
+    switch (_field) {
+    case 0:metricDimension = (Utf8)_value; break;
+    case 1:timestamp = (Long)_value; break;
+    case 2:metric = (Long)_value; break;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  public Utf8 getMetricDimension() {
+    return (Utf8) get(0);
+  }
+  public void setMetricDimension(Utf8 value) {
+    put(0, value);
+  }
+  public long getTimestamp() {
+    return (Long) get(1);
+  }
+  public void setTimestamp(long value) {
+    put(1, value);
+  }
+  public long getMetric() {
+    return (Long) get(2);
+  }
+  public void setMetric(long value) {
+    put(2, value);
+  }
+}
diff --git a/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/generated/Pageview.java b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/generated/Pageview.java
new file mode 100644
index 0000000..89598d8
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/java/org/apache/gora/tutorial/log/generated/Pageview.java
@@ -0,0 +1,146 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.gora.tutorial.log.generated;
+
+import org.apache.avro.AvroRuntimeException;
+import org.apache.avro.Schema;
+import org.apache.avro.util.Utf8;
+import org.apache.gora.persistency.StateManager;
+import org.apache.gora.persistency.impl.PersistentBase;
+import org.apache.gora.persistency.impl.StateManagerImpl;
+
+@SuppressWarnings("all")
+public class Pageview extends PersistentBase {
+  public static final Schema _SCHEMA = Schema.parse("{\"type\":\"record\",\"name\":\"Pageview\",\"namespace\":\"org.apache.gora.tutorial.log.generated\",\"fields\":[{\"name\":\"url\",\"type\":\"string\"},{\"name\":\"timestamp\",\"type\":\"long\"},{\"name\":\"ip\",\"type\":\"string\"},{\"name\":\"httpMethod\",\"type\":\"string\"},{\"name\":\"httpStatusCode\",\"type\":\"int\"},{\"name\":\"responseSize\",\"type\":\"int\"},{\"name\":\"referrer\",\"type\":\"string\"},{\"name\":\"userAgent\",\"type\":\"string\"}]}");
+  public static enum Field {
+    URL(0,"url"),
+    TIMESTAMP(1,"timestamp"),
+    IP(2,"ip"),
+    HTTP_METHOD(3,"httpMethod"),
+    HTTP_STATUS_CODE(4,"httpStatusCode"),
+    RESPONSE_SIZE(5,"responseSize"),
+    REFERRER(6,"referrer"),
+    USER_AGENT(7,"userAgent"),
+    ;
+    private int index;
+    private String name;
+    Field(int index, String name) {this.index=index;this.name=name;}
+    public int getIndex() {return index;}
+    public String getName() {return name;}
+    public String toString() {return name;}
+  };
+  public static final String[] _ALL_FIELDS = {"url","timestamp","ip","httpMethod","httpStatusCode","responseSize","referrer","userAgent",};
+  static {
+    PersistentBase.registerFields(Pageview.class, _ALL_FIELDS);
+  }
+  private Utf8 url;
+  private long timestamp;
+  private Utf8 ip;
+  private Utf8 httpMethod;
+  private int httpStatusCode;
+  private int responseSize;
+  private Utf8 referrer;
+  private Utf8 userAgent;
+  public Pageview() {
+    this(new StateManagerImpl());
+  }
+  public Pageview(StateManager stateManager) {
+    super(stateManager);
+  }
+  public Pageview newInstance(StateManager stateManager) {
+    return new Pageview(stateManager);
+  }
+  public Schema getSchema() { return _SCHEMA; }
+  public Object get(int _field) {
+    switch (_field) {
+    case 0: return url;
+    case 1: return timestamp;
+    case 2: return ip;
+    case 3: return httpMethod;
+    case 4: return httpStatusCode;
+    case 5: return responseSize;
+    case 6: return referrer;
+    case 7: return userAgent;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int _field, Object _value) {
+    if(isFieldEqual(_field, _value)) return;
+    getStateManager().setDirty(this, _field);
+    switch (_field) {
+    case 0:url = (Utf8)_value; break;
+    case 1:timestamp = (Long)_value; break;
+    case 2:ip = (Utf8)_value; break;
+    case 3:httpMethod = (Utf8)_value; break;
+    case 4:httpStatusCode = (Integer)_value; break;
+    case 5:responseSize = (Integer)_value; break;
+    case 6:referrer = (Utf8)_value; break;
+    case 7:userAgent = (Utf8)_value; break;
+    default: throw new AvroRuntimeException("Bad index");
+    }
+  }
+  public Utf8 getUrl() {
+    return (Utf8) get(0);
+  }
+  public void setUrl(Utf8 value) {
+    put(0, value);
+  }
+  public long getTimestamp() {
+    return (Long) get(1);
+  }
+  public void setTimestamp(long value) {
+    put(1, value);
+  }
+  public Utf8 getIp() {
+    return (Utf8) get(2);
+  }
+  public void setIp(Utf8 value) {
+    put(2, value);
+  }
+  public Utf8 getHttpMethod() {
+    return (Utf8) get(3);
+  }
+  public void setHttpMethod(Utf8 value) {
+    put(3, value);
+  }
+  public int getHttpStatusCode() {
+    return (Integer) get(4);
+  }
+  public void setHttpStatusCode(int value) {
+    put(4, value);
+  }
+  public int getResponseSize() {
+    return (Integer) get(5);
+  }
+  public void setResponseSize(int value) {
+    put(5, value);
+  }
+  public Utf8 getReferrer() {
+    return (Utf8) get(6);
+  }
+  public void setReferrer(Utf8 value) {
+    put(6, value);
+  }
+  public Utf8 getUserAgent() {
+    return (Utf8) get(7);
+  }
+  public void setUserAgent(Utf8 value) {
+    put(7, value);
+  }
+}
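In both generated classes, put() calls getStateManager().setDirty(this, _field) before assigning, so a datastore flush only has to persist the fields that actually changed. A minimal stdlib sketch of that dirty-bit bookkeeping (the DirtyTracker class is hypothetical; Gora's real StateManagerImpl keeps this state outside the bean):

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Minimal sketch of the per-field dirty tracking that the generated
// put() methods rely on: one bit per field, set on mutation.
public class DirtyTracker {
  private final String[] fields;
  private final BitSet dirty;

  public DirtyTracker(String... fields) {
    this.fields = fields;
    this.dirty = new BitSet(fields.length);
  }

  public void setDirty(int index) { dirty.set(index); }
  public boolean isDirty(int index) { return dirty.get(index); }
  public void clearDirty() { dirty.clear(); } // e.g. after a successful flush

  /** Names of the fields that would need to be written back to the store. */
  public List<String> dirtyFields() {
    List<String> out = new ArrayList<>();
    for (int i = dirty.nextSetBit(0); i >= 0; i = dirty.nextSetBit(i + 1)) {
      out.add(fields[i]);
    }
    return out;
  }
}
```

This is why the generated setters funnel through put() instead of assigning the field directly: a direct assignment would bypass the dirty bit and the change could be silently skipped at flush time.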
diff --git a/trunk/gora-tutorial/src/main/resources/access.log.tar.gz b/trunk/gora-tutorial/src/main/resources/access.log.tar.gz
new file mode 100644
index 0000000..f889abc
--- /dev/null
+++ b/trunk/gora-tutorial/src/main/resources/access.log.tar.gz
Binary files differ
diff --git a/trunk/gora-tutorial/src/test/conf/.gitignore b/trunk/gora-tutorial/src/test/conf/.gitignore
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/trunk/gora-tutorial/src/test/conf/.gitignore
diff --git a/trunk/ivy/ivy-2.1.0.jar b/trunk/ivy/ivy-2.1.0.jar
new file mode 100644
index 0000000..3902b6f
--- /dev/null
+++ b/trunk/ivy/ivy-2.1.0.jar
Binary files differ
diff --git a/trunk/ivy/ivy-configurations.xml b/trunk/ivy/ivy-configurations.xml
new file mode 100644
index 0000000..7b35dc0
--- /dev/null
+++ b/trunk/ivy/ivy-configurations.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<configurations>
+  <conf name="compile" visibility="public" />
+  <conf name="test" visibility="public" extends="compile"/>
+</configurations>
diff --git a/trunk/ivy/ivysettings.xml b/trunk/ivy/ivysettings.xml
new file mode 100644
index 0000000..661a13b
--- /dev/null
+++ b/trunk/ivy/ivysettings.xml
@@ -0,0 +1,97 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<ivysettings>
+  <!--
+  see http://www.jayasoft.org/ivy/doc/configuration
+  -->
+  <!-- you can override this property to use mirrors
+          http://repo1.maven.org/maven2/
+          http://mirrors.dotsrc.org/maven2
+          http://ftp.ggi-project.org/pub/packages/maven2
+          http://mirrors.sunsite.dk/maven2
+          http://public.planetmirror.com/pub/maven2
+          http://ibiblio.lsu.edu/main/pub/packages/maven2
+          http://www.ibiblio.net/pub/packages/maven2
+  -->
+  <property name="repo.maven.org"
+    value="http://repo1.maven.org/maven2/"
+    override="false"/>
+  <property name="repo.maven.local"
+    value="file://${user.home}/.m2/repository/"
+    override="false"/>
+  <property name="snapshot.apache.org"
+    value="http://people.apache.org/repo/m2-snapshot-repository/"
+    override="false"/>
+  <property name="maven2.pattern"
+    value="[organisation]/[module]/[revision]/[module]-[revision]"/>
+  <property name="maven2.pattern.ext"
+    value="${maven2.pattern}.[ext]"/>
+
+  <!-- pull in the local repository -->
+  <include url="${ivy.default.conf.dir}/ivyconf-local.xml"/>
+  <settings defaultResolver="default"/>
+  <resolvers>
+    <ibiblio name="maven2"
+      root="${repo.maven.org}"
+      pattern="${maven2.pattern.ext}"
+      m2compatible="true"
+      />
+    <ibiblio name="java.net-maven2" 
+      root="http://download.java.net/maven/2/" 
+      pattern="${maven2.pattern.ext}" 
+      m2compatible="true" 
+    /> 
+    <ibiblio name="apache-snapshot"
+      root="${snapshot.apache.org}"
+      pattern="${maven2.pattern.ext}"
+      m2compatible="true"
+      />
+    <ibiblio name="maven2-local"
+     root="${repo.maven.local}"
+     pattern="${maven2.pattern.ext}"
+     m2compatible="true"
+     usepoms="true"
+     useMavenMetadata="true">
+    </ibiblio>
+    <chain name="default" dual="true">
+      <resolver ref="local"/>
+      <resolver ref="maven2-local"/>
+      <resolver ref="maven2"/>
+      <resolver ref="java.net-maven2"/>
+    </chain>
+    <chain name="internal">
+      <resolver ref="local"/>
+    </chain>
+    <chain name="external">
+      <resolver ref="maven2"/>
+    </chain>
+    <chain name="external-and-snapshots">
+      <resolver ref="maven2"/>
+      <resolver ref="apache-snapshot"/>
+    </chain>
+  </resolvers>
+    
+  <modules>
+    <!-- Force gora modules to be resolved locally -->
+    <module organisation="org.apache.gora" name=".*" resolver="local"/>
+  </modules>
+
+</ivysettings>
+
diff --git a/trunk/pom.xml b/trunk/pom.xml
new file mode 100644
index 0000000..20ffdf0
--- /dev/null
+++ b/trunk/pom.xml
@@ -0,0 +1,856 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+    -->
+    <modelVersion>4.0.0</modelVersion>
+     <parent>
+       <groupId>org.apache</groupId>
+       <artifactId>apache</artifactId>
+       <version>10</version>
+     </parent>
+
+    <groupId>org.apache.gora</groupId>
+    <artifactId>gora</artifactId>
+    <packaging>pom</packaging>
+    <version>0.2.1</version>
+    <name>Apache Gora</name>
+    <description>The Apache Gora open source framework provides an in-memory data model and 
+    persistence for big data. Gora supports persisting to column stores, key value stores, 
+    document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce 
+    support. </description>
+    <url>http://gora.apache.org</url>
+    <inceptionYear>2010</inceptionYear>
+    
+    <licenses>
+      <license>
+        <name>The Apache Software License, Version 2.0</name>
+        <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+      </license>
+    </licenses>
+
+    <organization>
+      <name>The Apache Software Foundation</name>
+      <url>http://www.apache.org/</url>
+    </organization>
+    
+  <developers>
+    <developer>
+      <id>ab</id>
+      <name>Andrzej Bialecki</name>
+      <email>ab [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>ahart</id>
+      <name>Andrew Hart</name>
+      <email>ahart [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>dogacan</id>
+      <name>Doğacan Güney</name>
+      <email>dogacan [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>enis</id>
+      <name>Enis Soztutar</name>
+      <email>enis [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>ferdy</id>
+      <name>Ferdy Galema</name>
+      <email>ferdy [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>hsaputra</id>
+      <name>Henry Saputra</name>
+      <email>hsaputra [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>iocanel</id>
+      <name>Ioannis Canellos</name>
+      <email>iocanel [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>jnioche</id>
+      <name>Julien Nioche</name>
+      <email>jnioche [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>kturner</id>
+      <name>Keith Turner</name>
+      <email>kturner [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>lewismc</id>
+      <name>Lewis John McGibbney</name>
+      <email>lewismc [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+        <role>PMC Chair</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>mattmann</id>
+      <name>Chris Mattmann</name>
+      <email>mattmann [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+        <role>Champion</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>sertan</id>
+      <name>Sertan Alkan</name>
+      <email>sertan [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>woollard</id>
+      <name>Dave Woollard</name>
+      <email>woollard [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+    <developer>
+      <id>kazk</id>
+      <name>Kazuomi Kashii</name>
+      <email>kazk [at] apache [dot] org</email>
+      <roles>
+        <role>Committer</role>
+        <role>PMC Member</role>
+      </roles>
+    </developer>
+  </developers>
+
+  <mailingLists>
+    <mailingList>
+      <name>Dev Mailing List</name>
+      <post>dev[at]gora[dot]apache[dot]org</post>
+      <subscribe>dev-subscribe[at]gora[dot]apache[dot]org</subscribe>
+      <unsubscribe>dev-unsubscribe[at]gora[dot]apache[dot]org</unsubscribe>
+      <archive>http://mail-archives.apache.org/mod_mbox/gora-dev/</archive>
+    </mailingList>
+
+    <mailingList>
+      <name>User Mailing List</name>
+      <post>user[at]gora[dot]apache[dot]org</post>
+      <subscribe>user-subscribe[at]gora[dot]apache[dot]org</subscribe>
+      <unsubscribe>user-unsubscribe[at]gora[dot]apache[dot]org</unsubscribe>
+      <archive>http://mail-archives.apache.org/mod_mbox/gora-user/</archive>
+    </mailingList>
+
+    <mailingList>
+      <name>Commits Mailing List</name>
+      <post>commits[at]gora[dot]apache[dot]org</post>
+      <subscribe>commits-subscribe[at]gora[dot]apache[dot]org</subscribe>
+      <unsubscribe>commits-unsubscribe[at]gora[dot]apache[dot]org</unsubscribe>
+      <archive>http://mail-archives.apache.org/mod_mbox/gora-commits</archive>
+    </mailingList>
+  </mailingLists>
+       
+  <scm>
+    <connection>scm:svn:http://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1</connection>
+    <developerConnection>scm:svn:https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1</developerConnection>
+    <url>http://svn.apache.org/viewvc/gora/tags/apache-gora-0.2.1</url>
+  </scm>
+  <issueManagement>
+    <system>JIRA</system>
+    <url>https://issues.apache.org/jira/browse/GORA</url>
+  </issueManagement>
+  <ciManagement>
+    <system>Jenkins</system>
+    <url>https://builds.apache.org/job/Gora-trunk/</url>
+  </ciManagement>
+  
+  <distributionManagement>
+    <repository>
+      <id>apache.releases.https</id>
+      <name>Apache Release Distribution Repository</name>
+      <url>https://repository.apache.org/service/local/staging/deploy/maven2</url>
+    </repository>
+    <snapshotRepository>
+      <id>apache.snapshots.https</id>
+      <name>Apache Development Snapshot Repository</name>
+      <url>https://repository.apache.org/content/repositories/snapshots</url>
+    </snapshotRepository>
+  </distributionManagement>
+  
+  <repositories>
+    <repository>
+      <id>apache.snapshots</id>
+      <url>http://repository.apache.org/snapshots/</url>
+      <name>Apache Snapshot Repo</name>
+      <snapshots>
+        <enabled>true</enabled>
+      </snapshots>
+      <releases>
+        <enabled>false</enabled>
+      </releases>
+    </repository>
+  </repositories>
+  
+  <build>
+    <defaultGoal>install</defaultGoal>
+    <directory>target</directory>
+    <outputDirectory>${basedir}/target/classes</outputDirectory>
+    <finalName>${project.artifactId}-${project.version}</finalName>
+    <testOutputDirectory>${basedir}/target/test-classes</testOutputDirectory>
+    <sourceDirectory>${basedir}/src/main/java</sourceDirectory>
+    <testSourceDirectory>${basedir}/src/test/java</testSourceDirectory>
+    <pluginManagement>
+      <plugins>
+	  <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-assembly-plugin</artifactId>
+            <version>${maven-assembly-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>assembly</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>single</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
+              <tarLongFileMode>gnu</tarLongFileMode>
+              <finalName>${assembly.finalName}</finalName>
+              <descriptors>
+                <descriptor>sources-dist/src/main/assembly/src.xml</descriptor>
+              </descriptors>
+            </configuration>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-deploy-plugin</artifactId>
+            <version>${maven-deploy-plugin.version}</version>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-release-plugin</artifactId>
+            <version>${maven-release-plugin.version}</version>
+            <configuration>
+              <mavenExecutorId>forked-path</mavenExecutorId>
+              <tagBase>https://svn.apache.org/repos/asf/gora/tags</tagBase>
+              <useReleaseProfile>false</useReleaseProfile>
+              <arguments>-Papache-release,release</arguments>
+              <autoVersionSubmodules>true</autoVersionSubmodules>
+            </configuration>
+          </plugin>
+      </plugins>
+    </pluginManagement>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <version>${maven-compiler-plugin.version}</version>
+                <inherited>true</inherited>
+                <configuration>
+                    <source>${javac.source.version}</source>
+                    <target>${javac.target.version}</target>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <version>${maven-surfire-plugin.version}</version>
+                <inherited>true</inherited>
+                <configuration>
+                    <systemPropertyVariables>
+                        <hadoop.log.dir>${project.basedir}/target/test-logs/</hadoop.log.dir>
+                        <test.build.data>${project.basedir}/target/test-data/</test.build.data>
+                    </systemPropertyVariables>
+                    <argLine>-Xmx512m</argLine>
+                    <forkMode>always</forkMode>
+                    <testFailureIgnore>true</testFailureIgnore>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-dependency-plugin</artifactId>
+                <version>${maven-dependency-plugin.version}</version>
+                <inherited>true</inherited>
+                <executions>
+                    <execution>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>copy-dependencies</goal>
+                        </goals>
+                        <configuration>
+                            <outputDirectory>lib</outputDirectory>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <version>${maven-jar-plugin.version}</version>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>jar</goal>
+                            <goal>test-jar</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.felix</groupId>
+                <artifactId>maven-bundle-plugin</artifactId>
+                <version>${maven-bundle-plugin.version}</version>
+                <extensions>true</extensions>
+                <inherited>true</inherited>
+                <configuration>
+                    <instructions>
+                        <Bundle-Name>${project.name}</Bundle-Name>
+                        <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
+                        <Export-Package>${osgi.export}</Export-Package>
+                        <Import-Package>${osgi.import}</Import-Package>
+                        <DynamicImport-Package>${osgi.dynamic.import}</DynamicImport-Package>
+                        <Private-Package>${osgi.private}</Private-Package>
+                        <Require-Bundle>${osgi.bundles}</Require-Bundle>
+                        <Bundle-Activator>${osgi.activator}</Bundle-Activator>
+                    </instructions>
+                    <supportedProjectTypes>
+                        <supportedProjectType>jar</supportedProjectType>
+                        <supportedProjectType>war</supportedProjectType>
+                        <supportedProjectType>bundle</supportedProjectType>
+                    </supportedProjectTypes>
+                    <unpackBundle>true</unpackBundle>
+                </configuration>
+                <executions>
+                    <execution>
+                        <id>bundle-manifest</id>
+                        <phase>process-classes</phase>
+                        <goals>
+                            <goal>manifest</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+         </plugins>
+    </build>
+    
+    <profiles>
+      <profile>
+        <id>release</id>
+          <build>
+            <plugins>
+              <!-- 
+              <plugin>
+                <groupId>org.apache.rat</groupId>
+                <artifactId>apache-rat-plugin</artifactId>
+                <version>${apache-rat-plugin.version}</version>
+                <executions>
+                  <execution>
+                    <id>rat-verify</id>
+                    <phase>test</phase>
+                    <goals>
+                      <goal>check</goal>
+                    </goals>
+                  </execution>
+                </executions>
+                <configuration>
+                  <licenses>
+                    <license implementation="org.apache.rat.analysis.license.SimplePatternBasedLicense">
+                      <licenseFamilyCategory>ASL20</licenseFamilyCategory>
+                      <licenseFamilyName>Apache Software License, 2.0</licenseFamilyName>
+                      <notes>Single licensed ASL v2.0</notes>
+                      <patterns>
+                        <pattern>Licensed to the Apache Software Foundation (ASF) under one
+                        or more contributor license agreements.</pattern>
+                      </patterns>
+                    </license>
+                  </licenses>
+                 <excludeSubProjects>false</excludeSubProjects>
+                 <excludes>
+                   <exclude>CHANGES.txt</exclude>
+                   <exclude>README.txt</exclude>
+                   <exclude>NOTICE.txt</exclude>
+                   <exclude>LICENSE.txt</exclude>
+                   <exclude>KEYS</exclude>
+                   <exclude>doap_Gora.rdf</exclude>
+                   <exclude>.gitignore/**/**</exclude>
+                 </excludes>
+                </configuration>
+              </plugin-->
+              <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-source-plugin</artifactId>
+                <version>${maven-source-plugin.version}</version>
+                <executions>
+                  <execution>
+                    <id>attach-sources</id>
+                    <goals>
+                      <goal>jar-no-fork</goal>
+                    </goals>
+                    <configuration>
+                      <archive>
+                        <manifest>
+                          <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
+                          <addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
+                        </manifest>
+                        <manifestEntries>
+                          <Implementation-Build>${implementation.build}</Implementation-Build>
+                          <Implementation-Build-Date>${maven.build.timestamp}</Implementation-Build-Date>
+                          <X-Compile-Source-JDK>${javac.src.version}</X-Compile-Source-JDK>
+                          <X-Compile-Target-JDK>${javac.target.version}</X-Compile-Target-JDK>
+                        </manifestEntries>
+                      </archive>
+                    </configuration>
+                  </execution>
+                </executions>
+              </plugin>
+              <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-javadoc-plugin</artifactId>
+                <version>${maven-javadoc-plugin.version}</version>
+                <executions>
+                  <execution>
+                    <id>attach-javadocs</id>
+                    <goals>
+                      <goal>jar</goal>
+                    </goals>
+                    <configuration>
+                      <quiet>true</quiet>
+                      <archive>
+                        <manifest>
+                          <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
+                          <addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
+                        </manifest>
+                        <manifestEntries>
+                          <Implementation-Build>${implementation.build}</Implementation-Build>
+                          <Implementation-Build-Date>${maven.build.timestamp}</Implementation-Build-Date>
+                          <X-Compile-Source-JDK>${javac.src.version}</X-Compile-Source-JDK>
+                          <X-Compile-Target-JDK>${javac.target.version}</X-Compile-Target-JDK>
+                        </manifestEntries>
+                      </archive>
+                    </configuration>
+                  </execution>
+                </executions>
+              </plugin>
+              <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-gpg-plugin</artifactId>
+                <version>${maven-gpg-plugin.version}</version>
+                <executions>
+                  <execution>
+                    <id>sign-artifacts</id>
+                    <phase>verify</phase>
+                    <goals>
+                      <goal>sign</goal>
+                    </goals>
+                  </execution>
+                </executions>
+              </plugin>
+              <plugin>
+                <groupId>net.ju-n.maven.plugins</groupId>
+                <artifactId>checksum-maven-plugin</artifactId>
+                <version>${checksum-maven-plugin.version}</version>
+              </plugin>
+            </plugins>
+          </build>
+       </profile>
+    </profiles>
+
+    <modules>
+        <module>gora-core</module>
+        <module>gora-hbase</module>
+        <module>gora-accumulo</module>
+        <module>gora-cassandra</module>
+        <module>gora-sql</module>
+        <module>gora-tutorial</module>
+        <module>sources-dist</module>
+    </modules>
+
+    <properties>
+        <!-- Dependencies -->
+        <osgi.version>4.2.0</osgi.version>
+        <!-- Avro Dependencies -->
+        <jackson.version>1.4.2</jackson.version>
+        <!-- Hadoop Dependencies -->
+        <hadoop.version>1.0.1</hadoop.version>
+        <hadoop.test.version>1.0.1</hadoop.test.version>
+        <hbase.version>0.90.4</hbase.version>
+        <avro.version>1.3.3</avro.version>
+        <cxf-rt-frontend-jaxrs.version>2.5.2</cxf-rt-frontend-jaxrs.version>
+        <!-- Cassandra Dependencies -->
+        <cassandra.version>1.1.2</cassandra.version>
+        <libthrift.version>0.7.0</libthrift.version>
+        <hector.version>1.1-0</hector.version>
+        <!-- Misc Dependencies -->
+        <guava.version>10.0.1</guava.version>
+        <commons-lang.version>2.6</commons-lang.version>
+        <jdom.version>1.1.2</jdom.version>
+        <hsqldb.version>2.2.8</hsqldb.version>
+        <mysql.version>5.1.18</mysql.version>
+        <xerces.version>2.9.1</xerces.version>
+        <!-- Logging Dependencies -->
+        <slf4j.version>1.6.1</slf4j.version>
+        <log4j.version>1.2.16</log4j.version>
+
+        <!-- Testing Dependencies -->
+        <junit.version>4.10</junit.version>
+
+        <!-- Maven Plugin Dependencies -->
+        <maven-compiler-plugin.version>2.3.2</maven-compiler-plugin.version>
+        <maven-resources-plugin.version>2.5</maven-resources-plugin.version>
+        <maven-jar-plugin.version>2.4</maven-jar-plugin.version>
+        <maven-dependency-plugin.version>2.4</maven-dependency-plugin.version>
+        <build-helper-maven-plugin.version>1.7</build-helper-maven-plugin.version>
+        <maven-surfire-plugin.version>2.12</maven-surfire-plugin.version>
+        <maven-release-plugin.version>2.2.2</maven-release-plugin.version>
+        <maven-bundle-plugin.version>2.3.7</maven-bundle-plugin.version>
+        <maven-source-plugin.version>2.1.2</maven-source-plugin.version>
+        <maven-javadoc-plugin.version>2.8.1</maven-javadoc-plugin.version>
+        <maven-gpg-plugin.version>1.4</maven-gpg-plugin.version>
+        <apache-rat-plugin.version>0.8</apache-rat-plugin.version>
+        <maven-assembly-plugin.version>2.2.2</maven-assembly-plugin.version>
+        <maven-deploy-plugin.version>2.5</maven-deploy-plugin.version>
+        <checksum-maven-plugin.version>1.0.1</checksum-maven-plugin.version>
+        
+        <!-- General Properties -->
+        <implementation.build>${scmBranch}@r${buildNumber}</implementation.build>
+        <javac.src.version>1.6</javac.src.version>
+        <javac.target.version>1.6</javac.target.version>
+        <maven.build.timestamp.format>yyyy-MM-dd HH:mm:ssZ</maven.build.timestamp.format>
+        <skipTests>false</skipTests>
+        <assembly.finalName>apache-${project.build.finalName}</assembly.finalName>
+        <downloads.url>http://www.apache.org/dist/gora</downloads.url>
+    </properties>
+
+    <dependencyManagement>
+        <dependencies>
+          <!-- Internal Dependencies -->
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+            <version>${project.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-core</artifactId>
+            <version>${project.version}</version>
+            <classifier>tests</classifier>
+          </dependency>
+
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-cassandra</artifactId>
+            <version>${project.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-cassandra</artifactId>
+            <version>${project.version}</version>
+            <classifier>tests</classifier>
+          </dependency>
+          
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-sql</artifactId>
+            <version>${project.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-sql</artifactId>
+            <version>${project.version}</version>
+            <classifier>tests</classifier>
+          </dependency>
+
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-hbase</artifactId>
+            <version>${project.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-hbase</artifactId>
+            <version>${project.version}</version>
+            <classifier>tests</classifier>
+          </dependency>
+
+          <dependency>
+            <groupId>org.apache.gora</groupId>
+            <artifactId>gora-tutorial</artifactId>
+            <version>${project.version}</version>
+          </dependency>
+  
+          <!-- Avro needs this version of jackson -->
+          <dependency>
+            <groupId>org.codehaus.jackson</groupId>
+            <artifactId>jackson-core-asl</artifactId>
+            <version>${jackson.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.codehaus.jackson</groupId>
+            <artifactId>jackson-mapper-asl</artifactId>
+            <version>${jackson.version}</version>
+          </dependency>
+
+            <!-- Hadoop Dependencies -->
+            <dependency>
+                <groupId>org.apache.hadoop</groupId>
+                <artifactId>hadoop-core</artifactId>
+                <version>${hadoop.version}</version>
+                <exclusions>
+                    <!--  jackson is conflicting with the Avro dep -->
+                    <exclusion>
+                        <groupId>org.codehaus.jackson</groupId>
+                        <artifactId>jackson-core-asl</artifactId>
+                    </exclusion>
+                    <exclusion>
+                        <groupId>org.codehaus.jackson</groupId>
+                        <artifactId>jackson-mapper-asl</artifactId>
+                    </exclusion>
+                    <exclusion>
+                        <groupId>hsqldb</groupId>
+                        <artifactId>hsqldb</artifactId>
+                    </exclusion>
+                    <exclusion>
+                        <groupId>net.sf.kosmos</groupId>
+                        <artifactId>kfs</artifactId>
+                    </exclusion>
+                    <exclusion>
+                        <groupId>net.java.dev.jets3t</groupId>
+                        <artifactId>jets3t</artifactId>
+                    </exclusion>
+                    <exclusion>
+                        <groupId>org.eclipse.jdt</groupId>
+                        <artifactId>core</artifactId>
+                    </exclusion>
+                </exclusions>
+            </dependency>
+            
+            <dependency>
+                <groupId>org.apache.cxf</groupId>
+                <artifactId>cxf-rt-frontend-jaxrs</artifactId>
+                <version>${cxf-rt-frontend-jaxrs.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>org.apache.hadoop</groupId>
+                <artifactId>avro</artifactId>
+                <version>${avro.version}</version>
+                <exclusions>
+                    <exclusion>
+                        <groupId>ant</groupId>
+                        <artifactId>ant</artifactId>
+                    </exclusion>
+                </exclusions>
+            </dependency>
+
+            <dependency>
+                <groupId>org.apache.hbase</groupId>
+                <artifactId>hbase</artifactId>
+                <version>${hbase.version}</version>
+            </dependency>
+            <dependency>
+                <groupId>org.apache.hbase</groupId>
+                <artifactId>hbase</artifactId>
+                <version>${hbase.version}</version>
+                <classifier>tests</classifier>
+            </dependency>
+
+            <!-- Cassandra Dependencies -->
+            <dependency>
+                <groupId>org.apache.cassandra</groupId>
+                <artifactId>cassandra-all</artifactId>
+                <version>${cassandra.version}</version>
+                <scope>test</scope>
+                <exclusions>
+                    <exclusion>
+                        <groupId>org.apache.cassandra.deps</groupId>
+                        <artifactId>avro</artifactId>
+                    </exclusion>
+                </exclusions>
+            </dependency>
+            
+            <dependency>
+                <groupId>org.apache.cassandra</groupId>
+                <artifactId>cassandra-thrift</artifactId>
+                <version>${cassandra.version}</version>
+            </dependency>
+
+          <dependency>
+            <groupId>org.hectorclient</groupId>
+            <artifactId>hector-core</artifactId>
+            <version>${hector.version}</version>
+            <exclusions>
+              <exclusion>
+                <groupId>org.apache.cassandra</groupId>
+                <artifactId>cassandra-all</artifactId>
+              </exclusion>
+            </exclusions>
+          </dependency>
+
+            <!-- Misc Dependencies -->
+            <dependency>
+                <groupId>com.google.guava</groupId>
+                <artifactId>guava</artifactId>
+                <version>${guava.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>commons-lang</groupId>
+                <artifactId>commons-lang</artifactId>
+                <version>${commons-lang.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>org.jdom</groupId>
+                <artifactId>jdom</artifactId>
+                <version>${jdom.version}</version>
+                <exclusions>
+                    <exclusion>
+                        <groupId>maven-plugins</groupId>
+                        <artifactId>maven-cobertura-plugin</artifactId>
+                    </exclusion>
+                    <exclusion>
+                        <groupId>maven-plugins</groupId>
+                        <artifactId>maven-findbugs-plugin</artifactId>
+                    </exclusion>
+                </exclusions>
+            </dependency>
+
+            <dependency>
+                <groupId>org.hsqldb</groupId>
+                <artifactId>hsqldb</artifactId>
+                <version>${hsqldb.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>mysql</groupId>
+                <artifactId>mysql-connector-java</artifactId>
+                <version>${mysql.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>xerces</groupId>
+                <artifactId>xercesImpl</artifactId>
+                <version>${xerces.version}</version>
+            </dependency>
+
+            <!-- Logging Dependencies -->
+            <dependency>
+                <groupId>org.slf4j</groupId>
+                <artifactId>slf4j-api</artifactId>
+                <version>${slf4j.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>org.slf4j</groupId>
+                <artifactId>slf4j-simple</artifactId>
+                <version>${slf4j.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>org.slf4j</groupId>
+                <artifactId>slf4j-jdk14</artifactId>
+                <version>${slf4j.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>org.slf4j</groupId>
+                <artifactId>slf4j-log4j12</artifactId>
+                <version>${slf4j.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>log4j</groupId>
+                <artifactId>log4j</artifactId>
+                <version>${log4j.version}</version>
+                <exclusions>
+                    <exclusion>
+                        <groupId>com.sun.jdmk</groupId>
+                        <artifactId>jmxtools</artifactId>
+                    </exclusion>
+                    <exclusion>
+                        <groupId>com.sun.jmx</groupId>
+                        <artifactId>jmxri</artifactId>
+                    </exclusion>
+                    <exclusion>
+                      <groupId>javax.mail</groupId>
+                      <artifactId>mail</artifactId>
+                    </exclusion>
+                    <exclusion>
+                      <groupId>javax.jms</groupId>
+                      <artifactId>jms</artifactId>
+                    </exclusion>
+                </exclusions>
+            </dependency>
+
+            <!-- Testing Dependencies -->
+            <dependency>
+                <groupId>org.apache.hadoop</groupId>
+                <artifactId>hadoop-test</artifactId>
+                <version>${hadoop.test.version}</version>
+            </dependency>
+            
+            <dependency>
+                <groupId>junit</groupId>
+                <artifactId>junit</artifactId>
+                <version>${junit.version}</version>
+            </dependency>
+            
+        </dependencies>
+    </dependencyManagement>
+
+</project>
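Note (not part of the commit): the `dependencyManagement` section above centralizes versions, classifiers, and exclusions so that child modules declare dependencies without a `<version>` element. A rough sketch of how a child POM consumes it — the `gora-example` module name here is hypothetical, purely for illustration:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.gora</groupId>
    <artifactId>gora</artifactId>
    <version>0.2.1</version>
  </parent>
  <!-- hypothetical module, for illustration only -->
  <artifactId>gora-example</artifactId>
  <dependencies>
    <!-- version is inherited from the parent's dependencyManagement -->
    <dependency>
      <groupId>org.apache.gora</groupId>
      <artifactId>gora-core</artifactId>
    </dependency>
    <!-- the tests classifier pulls in the test-jar produced by maven-jar-plugin -->
    <dependency>
      <groupId>org.apache.gora</groupId>
      <artifactId>gora-core</artifactId>
      <classifier>tests</classifier>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```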
diff --git a/trunk/sources-dist/pom.xml b/trunk/sources-dist/pom.xml
new file mode 100644
index 0000000..4b37e19
--- /dev/null
+++ b/trunk/sources-dist/pom.xml
@@ -0,0 +1,72 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.gora</groupId>
+        <artifactId>gora</artifactId>
+        <version>0.2.1</version>
+        <relativePath>../</relativePath>
+    </parent>
+    <artifactId>sources-dist</artifactId>
+
+    <name>Apache Gora :: Sources-Dist</name>
+    <url>http://gora.apache.org</url>
+    <description>The Apache Gora open source framework provides an in-memory data model and 
+    persistence for big data. Gora supports persisting to column stores, key value stores, 
+    document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce 
+    support.</description>
+    <inceptionYear>2010</inceptionYear>
+    <organization>
+        <name>The Apache Software Foundation</name>
+        <url>http://www.apache.org/</url>
+    </organization>
+    <scm>
+        <url>http://svn.apache.org/viewvc/gora/tags/apache-gora-0.2.1/sources-dist</url>
+        <connection>scm:svn:http://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/sources-dist</connection>
+        <developerConnection>scm:svn:https://svn.apache.org/repos/asf/gora/tags/apache-gora-0.2.1/sources-dist</developerConnection>
+    </scm>
+    <issueManagement>
+        <system>JIRA</system>
+        <url>https://issues.apache.org/jira/browse/GORA</url>
+    </issueManagement>
+    <ciManagement>
+        <system>Jenkins</system>
+        <url>https://builds.apache.org/job/Gora-trunk/</url>
+    </ciManagement>
+    
+    <properties>
+      <assembly.finalName>apache-gora-${project.version}</assembly.finalName>
+    </properties>
+    
+    <build>
+      <plugins>
+      <!-- Generates the distribution package -->
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-assembly-plugin</artifactId>
+          <configuration>
+            <descriptors>
+              <descriptor>${basedir}/src/main/assembly/src.xml</descriptor>
+            </descriptors>
+          </configuration>
+        </plugin>
+      </plugins>
+    </build>
+</project>
diff --git a/trunk/sources-dist/src/main/assembly/src.xml b/trunk/sources-dist/src/main/assembly/src.xml
new file mode 100644
index 0000000..498e940
--- /dev/null
+++ b/trunk/sources-dist/src/main/assembly/src.xml
@@ -0,0 +1,48 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<assembly 
+  xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
+
+  <id>src</id>
+  <formats>
+    <format>tar.gz</format>
+    <format>zip</format>
+  </formats>
+  <includeBaseDirectory>true</includeBaseDirectory>
+  <baseDirectory>apache-gora-${project.version}</baseDirectory>
+  <fileSets>
+  <!-- include all modules -->
+    <fileSet>
+      <directory>${basedir}</directory>
+      <excludes>
+        <!-- exclude target dirs -->
+        <exclude>**/target/</exclude>
+        <!-- exclude hidden files -->
+        <exclude>**/.*</exclude>
+        <!-- exclude hidden directories -->
+        <exclude>**/.*/</exclude>
+        <!-- exclude all jars in lib directories -->
+        <exclude>**/lib/*.jar</exclude>
+        <!-- exclude build directories -->
+        <exclude>**/build/</exclude>
+      </excludes>
+    </fileSet>
+  </fileSets>
+</assembly>
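Note (not part of the commit): this descriptor, combined with the `release` profile and `maven-release-plugin` configuration in the parent POM, is what produces the `apache-gora-0.2.1-src.tar.gz` and `.zip` source bundles under the `apache-gora-${project.version}` base directory. A release would be cut roughly as follows — a sketch only, using the standard maven-release-plugin goals:

```shell
# Tag the release in SVN and bump versions in all modules
# (autoVersionSubmodules=true keeps them in lockstep)
mvn release:prepare

# Check out the tag and build/deploy it; the plugin's <arguments>
# pass -Papache-release,release to the forked build, activating the
# profile that attaches sources, javadocs, GPG signatures and checksums
mvn release:perform
```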