Release Notes - Kafka - Version 0.8.1

** New Feature
    * [KAFKA-554] - Move all per-topic configuration into ZK and add to the CreateTopicCommand
    * [KAFKA-561] - Rebuild index file for a log segment if there is none
    * [KAFKA-657] - Add an API to commit offsets
    * [KAFKA-918] - Change log.retention.hours to be log.retention.mins
    * [KAFKA-925] - Add optional partition key override in producer
    * [KAFKA-1092] - Add server config parameter to separate bind address and ZK hostname
    * [KAFKA-1117] - tool for checking the consistency among replicas

** Improvement
    * [KAFKA-347] - change number of partitions of a topic online
    * [KAFKA-741] - Improve log cleaning dedupe buffer efficiency
    * [KAFKA-1001] - Handle follower transition in batch
    * [KAFKA-1084] - Validate properties for custom serializers
    * [KAFKA-1127] - kafka and zookeeper server should start in daemon mode and log to correct position
    * [KAFKA-1131] - copy some more files into the release tar and zip that are needed/desired
    * [KAFKA-1136] - Add subAppend in Log4jAppender for generic usage
    * [KAFKA-1158] - remove bin/run-rat.sh
    * [KAFKA-1159] - try to get the bin tar smaller
    * [KAFKA-1160] - have the pom reference the exclusions necessary so folks don't have to
    * [KAFKA-1161] - review report of the dependencies, conflicts, and licenses into ivy-report
    * [KAFKA-1162] - handle duplicate entry for ZK in the pom file
    * [KAFKA-1163] - see what's going on with the 2.8.0 pom
    * [KAFKA-1186] - Add topic regex to the kafka-topics tool
    * [KAFKA-1232] - make TopicCommand more consistent

** Bug
    * [KAFKA-172] - The existing perf tools are buggy
    * [KAFKA-184] - Log retention size and file size should be a long
    * [KAFKA-330] - Add delete topic support
    * [KAFKA-515] - Log cleanup can close a file channel opened by Log.read before the transfer completes
    * [KAFKA-615] - Avoid fsync on log segment roll
    * [KAFKA-648] - Use uniform convention for naming properties keys
    * [KAFKA-671] - DelayedProduce requests should not hold full producer request data
    * [KAFKA-677] - Retention process gives exception if an empty segment is chosen for collection
    * [KAFKA-712] - Controlled shutdown tool should provide a meaningful message if a controller failover occurs during the operation
    * [KAFKA-739] - Handle null values in Message payload
    * [KAFKA-759] - Commit/FetchOffset APIs should not return versionId
    * [KAFKA-852] - Remove clientId from OffsetFetchResponse and OffsetCommitResponse
    * [KAFKA-897] - NullPointerException in ConsoleConsumer
    * [KAFKA-930] - Integrate preferred replica election logic into kafka
    * [KAFKA-933] - Hadoop example running DataGenerator causes kafka.message.Message cannot be cast to [B exception
    * [KAFKA-985] - Increasing log retention quickly overflows scala Int
    * [KAFKA-1004] - Handle topic event for trivial whitelist topic filters
    * [KAFKA-1009] - DumpLogSegments tool should return error on non-existing files
    * [KAFKA-1020] - Remove getAllReplicasOnBroker from KafkaController
    * [KAFKA-1036] - Unable to rename replication offset checkpoint in windows
    * [KAFKA-1052] - integrate add-partitions command into topicCommand
    * [KAFKA-1055] - BrokerTopicStats is updated before checking for MessageSizeTooLarge
    * [KAFKA-1060] - Break-down sendTime into responseQueueTime and the real sendTime
    * [KAFKA-1074] - Reassign partitions should delete the old replicas from disk
    * [KAFKA-1090] - testPipelinedRequestOrdering has transient failures
    * [KAFKA-1091] - full topic list can be read from metadata cache in the broker instead of ZK
    * [KAFKA-1097] - Race condition while reassigning low throughput partition leads to incorrect ISR information in zookeeper
    * [KAFKA-1103] - Consumer uses two zkclients
    * [KAFKA-1104] - Consumer uses two zkclients
    * [KAFKA-1112] - broker can not start itself after kafka is killed with -9
    * [KAFKA-1116] - Need to upgrade sbt-assembly to compile on scala 2.10.2
    * [KAFKA-1121] - DumpLogSegments tool should print absolute file name to report inconsistencies
    * [KAFKA-1128] - Github is still showing 0.7 as the default branch
    * [KAFKA-1129] - if we have a script to run the jar then we should include it in the build or remove it during release
    * [KAFKA-1133] - LICENSE and NOTICE files need to get into  META-INF when jars are built before they're signed for publishing to maven
    * [KAFKA-1135] - Code cleanup - use Json.encode() to write json data to zk
    * [KAFKA-1140] - Move the decoding logic from ConsumerIterator.makeNext to next
    * [KAFKA-1141] - make changes to downloads for the archive old releases to new old_releases folder
    * [KAFKA-1151] - The Hadoop consumer API doc is not referencing the contrib consumer
    * [KAFKA-1152] - ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
    * [KAFKA-1154] - replicas may not have consistent data after becoming follower
    * [KAFKA-1156] - Improve reassignment tool to output the existing assignment to facilitate rollbacks
    * [KAFKA-1157] - Clean up Per-topic Configuration from Kafka properties
    * [KAFKA-1164] - kafka should depend on snappy 1.0.5 (instead of 1.0.4.1)
    * [KAFKA-1166] - typo in kafka-server-stop.sh
    * [KAFKA-1168] - OfflinePartitionCount in JMX can be incorrect during controlled shutdown
    * [KAFKA-1169] - missing synchronization in access to leaderCache in KafkaApis
    * [KAFKA-1170] - ISR can be inconsistent during partition reassignment for low throughput partitions
    * [KAFKA-1172] - error when creating topic with kafka-topics.sh
    * [KAFKA-1183] - DefaultEventHandler causes unbalanced distribution of messages across partitions
    * [KAFKA-1198] - NullPointerException in describe topic
    * [KAFKA-1200] - inconsistent log levels when consumed offset is reset
    * [KAFKA-1202] - optimize ZK access in KafkaController
    * [KAFKA-1205] - README in examples not updated
    * [KAFKA-1208] - Update system test still to use kafka-topics instead of kafka-create-topics shell
    * [KAFKA-1214] - Support arguments to zookeeper-shell.sh script
    * [KAFKA-1228] - Socket Leak on ReplicaFetcherThread
    * [KAFKA-1243] - Gradle issues for release
    * [KAFKA-1263] - Snazzy up the README markdown for better visibility on github
    * [KAFKA-1271] - controller logs exceptions during ZK session expiration
    * [KAFKA-1275] - fixes for quickstart documentation

** Task
    * [KAFKA-823] - merge 0.8 (51421fcc0111031bb77f779a6f6c00520d526a34) to trunk
    * [KAFKA-896] - merge 0.8 (988d4d8e65a14390abd748318a64e281e4a37c19) to trunk
    * [KAFKA-965] - merge c39d37e9dd97bf2462ffbd1a96c0b2cb05034bae from 0.8 to trunk
    * [KAFKA-1051] - merge from 0.8 da4512174b6f7c395ffe053a86d2c6bb19d2538a to trunk
    * [KAFKA-1080] - why are builds for 2.10 not coming out with the trailing minor version number

** Sub-task
    * [KAFKA-121] - pom should include standard maven niceties
    * [KAFKA-1244] - The LICENSE and NOTICE are missing from the jar files
    * [KAFKA-1245] - the jar files and pom are not being signed so nexus is failing to publish them
    * [KAFKA-1246] - The 2.10 version is showing up as 2.10.1
    * [KAFKA-1248] - jars are missing from maven upload that were previously there
    * [KAFKA-1249] - release tar name is different than 0.8.0
    * [KAFKA-1254] - remove vestigial sbt
    * [KAFKA-1274] - gradle.properties needs the variables used in the build.gradle
README.md

Apache Kafka

See our web site for details on the project.

Building a jar and running it

./gradlew jar  

Follow the instructions at http://kafka.apache.org/documentation.html#quickstart
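As a rough sketch of that quickstart flow (a minimal example, assuming the jar has been built, the scripts under bin/ of this checkout or an extracted release are used, and the default ports from the shipped config files; the topic name "test" is arbitrary, and the linked quickstart is authoritative):

# start a local ZooKeeper and a single Kafka broker (each in its own terminal)
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties

# create a topic, then exercise it with the console producer and consumer
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning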

Running unit tests

./gradlew test

Forcing re-running unit tests w/o code change

./gradlew cleanTest test

Running a particular unit test

./gradlew -Dtest.single=RequestResponseSerializationTest core:test
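The test filter can also be combined with the other build properties used elsewhere in this README, for example the Scala version (2.9.2 below is just an illustrative choice):

./gradlew -PscalaVersion=2.9.2 -Dtest.single=RequestResponseSerializationTest core:test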

Building a binary release gzipped tar ball

./gradlew clean
./gradlew releaseTarGz  

The release file can be found inside ./core/build/distributions/.
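To unpack and inspect it, something like the following works (the exact archive name depends on the Scala and Kafka versions, so kafka_2.8.0-0.8.1.tgz here is only an assumption):

cd core/build/distributions
tar -xzf kafka_2.8.0-0.8.1.tgz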

Cleaning the build

./gradlew clean

Running a task on a particular version of Scala (either 2.8.0, 2.8.2, 2.9.1, 2.9.2 or 2.10.1)

Note: if building a jar with a version other than 2.8.0, the scala version variable in bin/kafka-run-class.sh needs to be changed to run the quick start.

./gradlew -PscalaVersion=2.9.1 jar
./gradlew -PscalaVersion=2.9.1 test
./gradlew -PscalaVersion=2.9.1 releaseTarGz

Running a task for a specific project

This is for 'core', 'perf', 'contrib:hadoop-consumer', 'contrib:hadoop-producer', 'examples' and 'clients'.

./gradlew core:jar
./gradlew core:test

Listing all gradle tasks

./gradlew tasks

Building IDE project

./gradlew eclipse
./gradlew idea

Building the jar for all scala versions and for all projects

./gradlew jarAll

Running unit tests for all scala versions and for all projects

./gradlew testAll

Building a binary release gzipped tar ball for all scala versions

./gradlew releaseTarGzAll

Publishing the jar for all versions of Scala and for all projects to maven

./gradlew uploadArchivesAll

Please note that for this to work you should create/update ~/.gradle/gradle.properties and assign the following variables:

mavenUrl=
mavenUsername=
mavenPassword=
signing.keyId=
signing.password=
signing.secretKeyRingFile=
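
A minimal sketch of that file with placeholder values (the repository URL, credentials, and signing-key details below are assumptions to be replaced with your own):

# append placeholder values to the Gradle user properties file
cat >> ~/.gradle/gradle.properties <<EOF
mavenUrl=https://your.maven.repository/path/to/repo
mavenUsername=your-username
mavenPassword=your-password
signing.keyId=YOURKEYID
signing.password=your-gpg-passphrase
signing.secretKeyRingFile=/home/you/.gnupg/secring.gpg
EOF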

Building the test jar

./gradlew testJar

Determining how transitive dependencies are added

./gradlew core:dependencies --configuration runtime

Contribution

Apache Kafka is interested in building the community; we would welcome any thoughts or patches. You can reach us on the Apache mailing lists.

To contribute, follow the instructions here:

We also welcome patches for the website and documentation, which can be found here: