GENERAL UPGRADING ADVICE FOR ANY VERSION
========================================
Snapshotting is fast (especially if you have JNA installed) and takes
effectively zero disk space until you start compacting the live data
files again. Thus, best practice is to ALWAYS snapshot before any
upgrade, just in case you need to roll back to the previous version.
(Cassandra version X + 1 will always be able to read data files created
by version X, but the inverse is not necessarily the case.)
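
For example (a hedged sketch; the host is illustrative), take a snapshot
on each node before upgrading:

    nodetool -h localhost snapshot

and, once the upgraded node has been verified, reclaim the space:

    nodetool -h localhost clearsnapshot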
1.1.12
======
Upgrading
---------
- Nothing specific to this release, but please see the previous instructions
if you are not upgrading from 1.1.11.
1.1.11
======
Upgrading
---------
- Nothing specific to this release, but please see the previous instructions
if you are not upgrading from 1.1.10.
Features
--------
- Pluggable internode authentication.
See `internode_authenticator` setting in cassandra.yaml.
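
A minimal conf/cassandra.yaml sketch; the class name shown is an
assumption (any implementation of the internode authentication
interface can be used):

    internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator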
1.1.10
======
Upgrading
---------
- Nothing specific to this release, but please see the previous instructions
if you are not upgrading from 1.1.9.
1.1.9
=====
Upgrading
---------
- If you are upgrading from a version earlier than 1.1.7, all nodes must
  be upgraded before any streaming can take place.
1.1.8
=====
Upgrading
---------
- Nothing specific to this release, but please see the previous instructions
if you are not upgrading from 1.1.7.
Features
--------
- It is now possible to override the number of available processors
  (with -Dcassandra.available_processors), which can be useful when
  multiple instances of Cassandra run on the same machine, since
  Cassandra bases some of its sizings on that value.
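
A hedged sketch for conf/cassandra-env.sh, assuming two instances
sharing an eight-core machine:

    JVM_OPTS="$JVM_OPTS -Dcassandra.available_processors=4"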
1.1.7
=====
Upgrading
---------
- Nothing specific to this release, but please see the previous instructions
if you are not upgrading from 1.1.6.
1.1.6
=====
Upgrading
---------
- If you are using counters, you should drain existing Cassandra nodes
prior to the upgrade to prevent overcount during commitlog replay
(see CASSANDRA-4782). For non-counter uses, drain is not required
but is a good practice to minimize restart time.
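
A hedged sketch of the per-node sequence (the host is illustrative):

    nodetool -h localhost drain   # wait for "Node is drained" in the log
    # then stop the process, install the new version, and restart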
1.1.5
=====
Upgrading
---------
- Nothing specific to this release, but please see 1.1 if you are upgrading
from a previous version.
1.1.4
=====
Upgrading
---------
- Nothing specific to this release, but please see 1.1 if you are upgrading
from a previous version.
1.1.3
=====
Upgrading
---------
- Running "nodetool upgradesstables" after upgrading is recommended
if you use Counter columnfamilies.
Features
--------
- the cqlsh COPY command can now export to CSV flat files
- added a new tools/bin/token-generator to facilitate generating evenly distributed tokens
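
A hedged cqlsh sketch of the new export direction; the keyspace, column
family, and file name are illustrative:

    USE Keyspace1;
    COPY Cf TO 'cf-export.csv';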
1.1.2
=====
Upgrading
---------
- If you have column families using the LeveledCompactionStrategy, you should run scrub on those column families.
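
A hedged sketch; the keyspace and column family names are illustrative:

    nodetool -h localhost scrub Keyspace1 LeveledCf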
Features
--------
- cqlsh has a new COPY command to load data from CSV flat files
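
A hedged cqlsh sketch; the names are illustrative:

    USE Keyspace1;
    COPY Cf FROM 'cf-data.csv';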
1.1.1
=====
Upgrading
---------
- Nothing specific to this release, but please see 1.1 if you are upgrading
from a previous version.
Features
--------
- Continuous commitlog archiving and point-in-time recovery.
See conf/commitlog_archiving.properties
- Incremental repair by token range, exposed over JMX
1.1
===
Upgrading
---------
- Compression is enabled by default on newly created ColumnFamilies
(and unchanged for ColumnFamilies created prior to upgrading).
- If you are running a multi datacenter setup, you should upgrade to
the latest 1.0.x (or 0.8.x) release before upgrading. Versions
0.8.8 and 1.0.3-1.0.5 generate cross-dc forwarding that is incompatible
with 1.1.
- EACH_QUORUM ConsistencyLevel is only supported for writes and will now
throw an InvalidRequestException when used for reads. (Previous
versions would silently perform a LOCAL_QUORUM read instead.)
- ANY ConsistencyLevel is only supported for writes and will now
throw an InvalidRequestException when used for reads. (Previous
versions would silently perform a ONE read for range queries;
single-row and multiget reads already rejected ANY.)
- The largest mutation batch accepted by the commitlog is now 128MB.
(In practice, batches larger than ~10MB always caused poor
performance due to load volatility and GC promotion failures.)
Larger batches will continue to be accepted but will not be
durable. Consider setting durable_writes=false if you really
want to use such large batches.
- Make sure the global settings key_cache_{size_in_mb, save_period}
  and row_cache_{size_in_mb, save_period} in conf/cassandra.yaml are
  used instead of the per-ColumnFamily options (see the sketch after
  this list).
- JMX methods no longer return custom Cassandra objects. Any such methods
will now return standard Maps, Lists, etc.
- Hadoop input and output details are now separated. If you were
previously using methods such as getRpcPort you now need to use
getInputRpcPort or getOutputRpcPort depending on the circumstance.
- CQL changes:
+ Prior to 1.1, you could use KEY as the primary key name in some
select statements, even if the PK was actually given a different
name. In 1.1+ you must use the defined PK name.
- The sliced_buffer_size_in_kb option has been removed from the
cassandra.yaml config file (this option was a no-op since 1.0).
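
A minimal conf/cassandra.yaml sketch of the global cache settings named
above; the values are illustrative, not recommendations:

    key_cache_size_in_mb: 100
    key_cache_save_period: 14400
    row_cache_size_in_mb: 0
    row_cache_save_period: 0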
Features
--------
- Concurrent schema updates are now supported, with any conflicts
  automatically resolved. Please note that simultaneously running
  'CREATE COLUMN FAMILY' operations on different nodes will not be
  safe until version 1.2 due to the nature of ColumnFamily
  identifier generation; for more details see CASSANDRA-3794.
- The CQL language has undergone a major revision, CQL3, the
  highlights of which are covered at [1]. CQL3 is not
  backwards-compatible with CQL2, so we've introduced a
  set_cql_version Thrift method to specify which version you want.
  (The default remains CQL2 at least until Cassandra 1.2.) cqlsh
  adds a --cql3 flag to enable this (see the sketch after this list).
[1] http://www.datastax.com/dev/blog/schema-in-cassandra-1-1
- Row-level isolation: multi-column updates to a single row have
  always been *atomic* (either all will be applied, or none)
  thanks to the CommitLog, but until 1.1 they were not *isolated*
  -- a reader could see a mix of old and new values while the
  update was in progress.
- Finer-grained control over data directories, allowing a ColumnFamily to
  be pinned to a specific volume, e.g. one backed by SSD.
- The bulk loader is no longer a fat client; it can be run from an
  existing machine in a cluster.
- A new write survey mode has been added, similar to bootstrap (enabled via
  -Dcassandra.write_survey=true), but the node will not automatically join
  the cluster. This is useful for cases such as testing different
  compaction strategies with live traffic without affecting the cluster
  (see the sketch after this list).
- Key and row caches are now global, similar to the global memtable
threshold. Manual tuning of cache sizes per-columnfamily is no longer
required.
- Off-heap caches no longer require JNA, and will work out of the box
on Windows as well as Unix platforms.
- Streaming is now multithreaded.
- Compactions may now be aborted via JMX or nodetool.
- The stress tool is not new in 1.1, but it is newly included in
  binary builds as well as in the source tree.
- Hadoop: a new BulkOutputFormat is included which will directly write
SSTables locally and then stream them into the cluster.
YOU SHOULD USE BulkOutputFormat BY DEFAULT. ColumnFamilyOutputFormat
is still around in case for some strange reason you want results
trickling out over Thrift, but BulkOutputFormat is significantly
more efficient.
- Hadoop: KeyRange.filter is now supported with ColumnFamilyInputFormat,
allowing index expressions to be evaluated server-side to reduce
the amount of data sent to Hadoop.
- Hadoop: ColumnFamilyRecordReader has a wide-row mode, enabled via
a boolean parameter to setInputColumnFamily, that pages through
data column-at-a-time instead of row-at-a-time.
- Pig: can use the wide-row Hadoop support, by setting PIG_WIDEROW_INPUT
to true. This will produce each row's columns in a bag.
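
Two hedged sketches for the features above. Starting cqlsh against CQL3
(the host is illustrative):

    cqlsh --cql3 localhost

Starting a node in write survey mode, using the flag named above:

    bin/cassandra -Dcassandra.write_survey=true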
1.0.8
=====
Upgrading
---------
- Nothing specific to 1.0.8
Other
-----
- Allow configuring socket timeout for streaming
1.0.7
=====
Upgrading
---------
- Nothing specific to 1.0.7; please refer to the instructions for 1.0.6
Other
-----
- Adds new setstreamthroughput to nodetool to configure streaming
throttling
- Adds JMX property to get/set rpc_timeout_in_ms at runtime
- Allow configuring (per-CF) bloom_filter_fp_chance
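
A hedged nodetool sketch of the new streaming throttle; the value is
illustrative:

    nodetool -h localhost setstreamthroughput 400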
1.0.6
=====
Upgrading
---------
- This release fixes an issue related to the chunk_length_kb option for
  compressed sstables. If you use compression on some column families, it
  is recommended after the upgrade to check the value of this option on
  those column families (the default value is 64). If the option is not
  set correctly, update the column family definition with the right value
  and then run scrub on the column family, as sketched below.
- Please refer to the instructions for 1.0.5 if coming from an older
  version.
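
A hedged CLI sketch of the fix described above; the keyspace and column
family names are illustrative:

    update column family Cf
      with compression_options={sstable_compression: SnappyCompressor, chunk_length_kb: 64};

then rewrite the sstables:

    nodetool -h localhost scrub Keyspace1 Cf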
1.0.5
=====
Upgrading
---------
- 1.0.5 fixes two important regressions in 1.0.4, so all information
  concerning 1.0.4 is valid for this release, but please avoid upgrading
  to 1.0.4.
1.0.4
=====
Upgrading
---------
- Nothing specific to 1.0.4 but please see the 1.0 upgrading section if
upgrading from a version prior to 1.0.0
Features
--------
- A new upgradesstables command has been added to nodetool. It is very
  similar to scrub but without the ability to discard corrupted rows (and
  as a consequence it does not snapshot automatically beforehand). This
  new command is to be preferred over scrub in all cases where sstables
  should be rewritten to the current format for upgrade purposes.
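
A hedged sketch; the keyspace and column family names are illustrative
(omitting them should rewrite all sstables on the node):

    nodetool -h localhost upgradesstables Keyspace1 Cf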
JMX
---
- The paths for the data, commit log and saved cache directories are now
  exposed through JMX
- The in-memory bloom filter sizes are now exposed through JMX
1.0.3
=====
Upgrading
---------
- Nothing specific to 1.0.3 but please see the 1.0 upgrading section if
upgrading from a version prior to 1.0.0
Features
--------
- For non-compressed sstables (compressed sstables already include more
  fine-grained checksums), a SHA-1 digest of the full sstable is now
  automatically created (in a file with the suffix -Digest.sha1). It can
  be used to check sstable integrity with sha1sum.
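
A hedged shell sketch, assuming the digest covers the -Data.db component;
the file names are illustrative. Compare the two values:

    sha1sum Keyspace1-Cf-hb-1-Data.db
    cat Keyspace1-Cf-hb-1-Digest.sha1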
1.0.2
=====
Upgrading
---------
- Nothing specific to 1.0.2 but please see the 1.0 upgrading section if
upgrading from a version prior to 1.0.0
Features
--------
- Cassandra CLI queries now have timing information
1.0.1
=====
Upgrading
---------
- If upgrading from a version prior to 1.0.0, please see the 1.0 Upgrading
section
- For running on Windows as a Service, procrun is no longer distributed
  with Cassandra; see README.txt for more information on how to download
  it if necessary.
- The names given to snapshot directories have been improved for human
  readability. If you had scripts relying on them, you may need to
  update them.
1.0
===
Upgrading
---------
- Upgrading from version 0.7.1+ or 0.8.2+ can be done with a rolling
restart, one node at a time. (0.8.0 or 0.8.1 are NOT network-compatible
with 1.0: upgrade to the most recent 0.8 release first.)
You do not need to bring down the whole cluster at once.
- After upgrading, run nodetool scrub against each node before running
repair, moving nodes, or adding new ones.
- CQL inserts/updates now generate microsecond resolution timestamps
  by default, instead of millisecond. THIS MEANS A ROLLING UPGRADE COULD
  MIX milliseconds and microseconds, with clients talking to servers
  generating milliseconds unable to overwrite the larger microsecond
  timestamps. If you are using CQL and this is important for your
  application, you can either perform a non-rolling upgrade to 1.0, or
  update your application first to use explicit timestamps with the "USING
  timestamp=X" syntax (see the sketch after this list).
- The BinaryMemtable bulk-load interface has been removed (use the
sstableloader tool instead).
- The compaction_thread_priority setting has been removed from
cassandra.yaml (use compaction_throughput_mb_per_sec to throttle
compaction instead).
- CQL types bytea and date were renamed to blob and timestamp, respectively,
to conform with SQL norms. CQL type int is now a 4-byte int, not 8
(which is still available as bigint).
- Cassandra 1.0 uses arena allocation to reduce old generation
fragmentation. This means there is a minimum overhead of 1MB
per ColumnFamily plus 1MB per index.
- The SimpleAuthenticator and SimpleAuthority classes have been moved to
the example directory (and are thus not available from the binary
distribution). They never provided actual security and in their current
state are only meant as examples.
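
A hedged CQL sketch of pinning an explicit timestamp; the column family,
values, and timestamp are illustrative:

    INSERT INTO users (KEY, name) VALUES ('jsmith', 'John')
      USING TIMESTAMP 1350000000000000;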
Features
--------
- SSTable compression is supported through the 'compression_options'
parameter when creating/updating a column family. For instance, you can
create a column family Cf using compression (through the Snappy library)
in the CLI with:
create column family Cf with compression_options={sstable_compression: SnappyCompressor}
SSTable compression is not activated by default but can be activated or
deactivated at any time.
- Compressed SSTable blocks are checksummed to protect against bitrot
- A new LevelDB-inspired compaction algorithm can be enabled by setting the
  ColumnFamily compaction_strategy=LeveledCompactionStrategy option (see
  the sketch after this list). Leveled compaction means you only need to
  keep a few MB of space free for compaction instead of (in the worst
  case) 50%.
- Ability to use multiple threads during a single compaction. See
multithreaded_compaction in cassandra.yaml for more details.
- Windows Service ("cassandra.bat install" to enable)
- A dead node may be replaced in a single step by starting a new node
with -Dcassandra.replace_token=<token>. More details can be found at
http://wiki.apache.org/cassandra/Operations#Replacing_a_Dead_Node
- It is now possible to repair only the first range returned by the
  partitioner for a node with `nodetool repair -pr`. This makes it
  possible to repair a full cluster without any duplicated work, by
  running the command on every node of the cluster.
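
Two hedged sketches for the items above. Enabling leveled compaction from
the CLI (the column family name is illustrative):

    update column family Cf with compaction_strategy='LeveledCompactionStrategy';

Repairing only the primary range, run against each node of the cluster in
turn (the host name is illustrative):

    nodetool -h node1 repair -pr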
New data types
--------------
- decimal
Other
-----
- Hinted Handoff has two major improvements:
- Hint replay is much more efficient thanks to a change in the data model
- Hints are created for all replicas that do not ack a write. (Formerly,
only replicas known to be down when the write started were hinted.)
This means that running with read repair completely off is much more
viable than before, and the default read_repair_chance is reduced from 1.0
("always repair") to 0.1 ("repair 10% of the time").
- The old per-ColumnFamily memtable thresholds
(memtable_throughput_in_mb, memtable_operations_in_millions,
memtable_flush_after_mins) are ignored, in favor of the global
memtable_total_space_in_mb and commitlog_total_space_in_mb settings.
This does not affect client compatibility -- the old options are
still allowed, but have no effect. These options may be removed
entirely in a future release.
- Backlogged compactions will begin five minutes after startup. The 0.8
behavior of never starting compaction until a flush happens is usually
not what is desired, but a short grace period is useful to allow caches
to warm up first.
- The deletion of compacted data files is not performed during Garbage
Collection anymore. This means compacted files will now be deleted
without delay.
0.8.5
=====
Features
--------
- SSTables copied to a data directory can be loaded by a live node through
nodetool refresh (may be handy to load snapshots).
- The configured compaction throughput is exposed through JMX.
Other
-----
- The sstableloader is now bundled with the debian package.
- Repair detects when a participating node is dead and fails instead of
hanging forever.
0.8.4
=====
Upgrading
---------
- Nothing specific to 0.8.4
Other
-----
- This release fixes a bug in counters that could lead to significant
  over-counting.
- It also fixes a slight upgrade regression from 0.8.3. It is thus advised
  to jump directly to 0.8.4 if upgrading from before 0.8.3.
0.8.3
=====
Upgrading
---------
- Token removal has been revamped. Removing tokens in a mixed cluster with
0.8.3 will not work, so the entire cluster will need to be running 0.8.3
first, except for the dead node.
Features
--------
- It is now possible to use Thrift's asynchronous and
  half-synchronous/half-asynchronous servers (see cassandra.yaml for more
  details).
- It is now possible to access counter columns through Hadoop.
Other
-----
- This release fixes a regression introduced in 0.8 that could cause a
  commit log segment to be deleted even though not all the data it
  contained had been flushed. Upgrading from 0.8.* is strongly encouraged.
0.8.2
=====
Upgrading
---------
- 0.8.0 and 0.8.1 shipped with a bug that set the replicate_on_write
  option for counter column families to false (this option has no effect
  on non-counter column families). This is an unsafe default, and 0.8.2
  corrects it: the default for replicate_on_write is now true. It is
  advised to update your counter column family definitions if
  replicate_on_write was incorrectly set to false (before or after
  upgrading), as sketched below.
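
A hedged CLI sketch of the fix; the column family name is illustrative:

    update column family Counter1 with replicate_on_write=true;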
0.8.1
=====
Upgrading
---------
- 0.8.1 is backwards compatible with 0.8, upgrade can be achieved by a
simple rolling restart.
- If upgrading from an earlier version (0.7), please refer to the 0.8
  section for instructions.
Features
--------
- Numerous additions/improvements to CQL (support for counters, TTL, batch
inserts/deletes, index dropping, ...).
- Add two new AbstractTypes (comparator) to support compound keys
(CompositeType and DynamicCompositeType), as well as a ReverseType to
reverse the order of any existing comparator.
- New option to bypass the commit log on some keyspaces (for advanced
users).
Tools
-----
- Add new data bulk loading utility (sstableloader).
0.8
===
Upgrading
---------
- Upgrading from version 0.7.1 or later can be done with a rolling
restart, one node at a time. You do not need to bring down the
whole cluster at once.
- After upgrading, run nodetool scrub against each node before running
repair, moving nodes, or adding new ones.
- Running nodetool drain before shutting down the 0.7 node is
  recommended but not required. (Skipping this will result in
  replay of the entire commitlog, so it will take longer to restart
  but is otherwise harmless.)
- 0.8 is fully API-compatible with 0.7. You can continue
to use your 0.7 clients.
- Avro record classes used in map/reduce and Hadoop streaming code have
  been removed. Map/reduce can be switched to Thrift by changing
  org.apache.cassandra.avro in import statements to
  org.apache.cassandra.thrift (class names do not change); see the
  sketch after this list. Streaming support has been removed for the
  time being.
- The loadbalance command has been removed from nodetool. For similar
behavior, decommission then rebootstrap with empty initial_token.
- Thrift unframed mode has been removed.
- The addition of key_validation_class means the cli will assume keys
are bytes, instead of strings, in the absence of other information.
See http://wiki.apache.org/cassandra/FAQ#cli_keys for more details.
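
A hedged sketch of the import change; Mutation is just an illustrative
class name:

    // before (0.7)
    import org.apache.cassandra.avro.Mutation;
    // after (0.8)
    import org.apache.cassandra.thrift.Mutation;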
Features
--------
- added CQL client API and JDBC/DBAPI2-compliant drivers for Java and
Python, respectively (see: drivers/ subdirectory and doc/cql)
- added distributed Counters feature;
see http://wiki.apache.org/cassandra/Counters
- optional intranode encryption; see comments around 'encryption_options'
in cassandra.yaml
- compaction multithreading and rate-limiting; see
'concurrent_compactors' and 'compaction_throughput_mb_per_sec' in
cassandra.yaml
- cassandra will limit total memtable memory usage to 1/3 of the heap
  by default. This can be adjusted or disabled with the
  memtable_total_space_in_mb option. The old per-ColumnFamily
  throughput, operations, and age settings are still respected but
  will be removed in a future major release once we are satisfied that
  memtable_total_space_in_mb works adequately.
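
A minimal conf/cassandra.yaml sketch of the override; the value is
illustrative, not a recommendation:

    memtable_total_space_in_mb: 2048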
Tools
-----
- stress and py_stress moved from contrib/ to tools/
- clustertool was removed (see
https://issues.apache.org/jira/browse/CASSANDRA-2607 for examples
of how to script nodetool across the cluster instead)
Other
-----
- sstable2json used to write column names and values as hex strings;
  it now creates human-readable values based on the
  comparator/validator. As a result, JSON dumps created with
  older versions of sstable2json are no longer compatible with
  json2sstable, and imports must be made with a configuration that
  is identical to the export.
- manually-forced compactions ("nodetool compact") will do nothing
if only a single SSTable remains for a ColumnFamily. To force it
to compact that anyway (which will free up space if there are
a lot of expired tombstones), use the new forceUserDefinedCompaction
JMX method on CompactionManager.
- most of contrib/ (which was not part of the binary releases)
has been moved either to examples/ or tools/. We plan to move the
rest for 0.8.1.
JMX
---
- By default, JMX now listens on port 7199.
0.7.6
=====
Upgrading
---------
- Nothing specific to 0.7.6, but see 0.7.3 Upgrading if upgrading
from earlier than 0.7.1.
0.7.5
=====
Upgrading
---------
- Nothing specific to 0.7.5, but see 0.7.3 Upgrading if upgrading
from earlier than 0.7.1.
Changes
-------
- system_update_column_family no longer snapshots before applying
the schema change. (_update_keyspace never did. _drop_keyspace
and _drop_column_family continue to snapshot.)
- added memtable_flush_queue_size option to cassandra.yaml to
  avoid blocking writes when multiple column families (or a column
  family with indexes) are flushed at the same time
- allow overriding initial_token, storage_port and rpc_port using
system properties
0.7.4
=====
Upgrading
---------
- Nothing specific to 0.7.4, but see 0.7.3 Upgrading if upgrading
from earlier than 0.7.1.
Features
--------
- Output to Pig is now supported as well as input
0.7.3
=====
Upgrading
---------
- 0.7.1 and 0.7.2 shipped with a bug that caused incorrect row-level
bloom filters to be generated when compacting sstables generated
with earlier versions. This would manifest in IOExceptions during
column name-based queries. 0.7.3 provides "nodetool scrub" to
rebuild sstables with correct bloom filters, with no data lost.
(If your cluster was never on 0.7.0 or earlier, you don't have to
worry about this.) Note that nodetool scrub will snapshot your
data files before rebuilding, just in case.
0.7.1
=====
Upgrading
---------
- 0.7.1 is completely backwards compatible with 0.7.0. Just restart
each node with the new version, one at a time. (The cluster does
not all need to be upgraded simultaneously.)
Features
--------
- added flush_largest_memtables_at and reduce_cache_sizes_at options
to cassandra.yaml as an escape valve for memory pressure
- added option to specify -Dcassandra.join_ring=false on startup
to allow "warm spare" nodes or performing JMX maintenance before
joining the ring
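
A hedged sketch of starting a warm spare; the node can be joined to the
ring later (e.g. via JMX):

    bin/cassandra -Dcassandra.join_ring=false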
Performance
-----------
- Disk writes and sequential scans avoid polluting page cache
(requires JNA to be enabled)
- Cassandra performs writes efficiently across datacenters by
sending a single copy of the mutation and having the recipient
forward that to other replicas in its datacenter.
- Improved network buffering
- Reduced lock contention on memtable flush
- Optimized supercolumn deserialization
- Zero-copy reads from mmapped sstable files
- Explicitly set higher JVM new generation size
- Reduced i/o contention during saving of caches
0.7.0
=====
Features
--------
- Secondary indexes (indexes on column values) are now supported
- Row size limit increased from 2GB to 2 billion columns. Rows
  are no longer read into memory during compaction.
- Keyspace and ColumnFamily definitions may be added and modified live
- Streaming data for repair or node movement no longer requires
anticompaction step first
- NetworkTopologyStrategy (formerly DatacenterShardStrategy) is ready for
  use, enabling ConsistencyLevel.DCQUORUM and DCQUORUMSYNC. See comments
  in `cassandra.yaml`.
- Optional per-Column time-to-live field allows expiring data without
  having to issue explicit remove commands
- `truncate` thrift method allows clearing an entire ColumnFamily at once
- Hadoop OutputFormat and Streaming [non-jvm map/reduce via stdin/out]
support
- Up to 8x faster reads from row cache
- A new ByteOrderedPartitioner supports bytes keys with arbitrary content,
and orders keys by their byte value. This should be used in new
deployments instead of OrderPreservingPartitioner.
- Optional round-robin scheduling between keyspaces for multitenant
clusters
- Dynamic endpoint snitch mitigates the impact of impaired nodes
- New `IntegerType`, faster than LongType and allows integers of
both less and more bits than Long's 64
- A revamped authentication system that decouples authorization and
allows finer-grained control of resources.
Upgrading
---------
The Thrift API has changed in incompatible ways; see below, and refer
to http://wiki.apache.org/cassandra/ClientOptions for a list of
higher-level clients that have been updated to support the 0.7 API.
The Cassandra inter-node protocol is incompatible with 0.6.x
releases (and with 0.7 beta1), meaning you will have to bring your
cluster down prior to upgrading: you cannot mix 0.6 and 0.7 nodes.
The hints schema was changed from 0.6 to 0.7. Cassandra automatically
snapshots and then truncates the hints column family as part of
starting up 0.7 for the first time.
Keyspace and ColumnFamily definitions are stored in the system
keyspace, rather than the configuration file.
The process to upgrade is:
1) run "nodetool drain" on _each_ 0.6 node. When drain finishes (log
message "Node is drained" appears), stop the process.
2) Convert your storage-conf.xml to the new cassandra.yaml using
"bin/config-converter".
3) Rename any of your keyspace or column family names that do not adhere
to the '^\w+' regex convention.
4) Start up your cluster with the 0.7 version.
5) Initialize your Keyspace and ColumnFamily definitions using
"bin/schematool <host> <jmxport> import". _You only need to do
this to one node_.
Thrift API
----------
- The Cassandra server now defaults to framed mode, rather than
unframed. Unframed is obsolete and will be removed in the next
major release.
- The Cassandra Thrift interface file has been updated for Thrift 0.5.
If you are compiling your own client code from the interface, you
will need to upgrade the Thrift compiler to match.
- Row keys are now bytes: keys stored by versions prior to 0.7.0 will be
returned as UTF-8 encoded bytes. OrderPreservingPartitioner and
CollatingOrderPreservingPartitioner continue to expect that keys contain
UTF-8 encoded strings, but RandomPartitioner now works on any key data.
- keyspace parameters have been replaced with the per-connection
set_keyspace method.
- The return type for login() is now AccessLevel.
- The get_string_property() method has been removed.
- The get_string_list_property() method has been removed.
Configuration
-------------
- Configuration file renamed to cassandra.yaml and log4j.properties to
log4j-server.properties
- PropertyFileSnitch configuration file renamed to
cassandra-topology.properties
- The ThriftAddress and ThriftPort directives have been renamed to
RPCAddress and RPCPort respectively.
- EndPointSnitch was renamed to RackInferringSnitch. A new SimpleSnitch
has been added.
- RackUnawareStrategy and RackAwareStrategy have been renamed to
SimpleStrategy and OldNetworkTopologyStrategy, respectively.
- RowWarningThresholdInMB replaced with in_memory_compaction_limit_in_mb
- GCGraceSeconds is now per-ColumnFamily instead of global
- Keyspace and column family names that do not conform to a '^\w+' regex
  are considered illegal.
- Keyspace and column family definitions will need to be loaded via
"bin/schematool <host> <jmxport> import". _You only need to do this to
one node_.
- In addition to an authenticator, an authority must be configured as
well. Users of SimpleAuthenticator should use SimpleAuthority for this
value (the default is AllowAllAuthority, which corresponds with
AllowAllAuthenticator).
- The format of access.properties has changed, see the sample configuration
conf/access.properties for documentation on the new format.
JMX
---
- StreamingService moved from o.a.c.streaming to o.a.c.service
- GMFD renamed to GOSSIP_STAGE
- {Min,Mean,Max}RowCompactedSize renamed to {Min,Mean,Max}RowSize
  since it no longer has to wait until compaction to be computed
Other
-----
- If extending AbstractType, make sure you follow the singleton pattern
followed by Cassandra core AbstractType classes: provide a public
static final variable called 'instance'.
0.6.6
=====
Upgrading
---------
- As part of the cache-saving feature, a third directory
(along with data and commitlog) has been added to the config
file. You will need to set and create this directory
when restarting your node into 0.6.6.
0.6.1
=====
Upgrading
---------
- We try to keep minor versions 100% compatible (data format,
commitlog format, network format) within the major series, but
we introduced a network-level incompatibility in 0.6.1.
Thus, if you are upgrading from 0.6.0 to any higher version
(0.6.1, 0.6.2, etc.) then you will need to restart your entire
cluster with the new version, instead of being able to do a
rolling restart.
0.6.0
=====
Features
--------
- row caching: configure with the RowsCached attribute in
ColumnFamily definition
- Hadoop map/reduce support: see contrib/word_count for an example
- experimental authentication support, described under
Authenticator in storage.conf
Configuration
-------------
- MemtableSizeInMB has been replaced by MemtableThroughputInMB which
triggers a memtable flush when the specified amount of data has
been written, including overwrites.
- MemtableObjectCountInMillions has been replaced by the
MemtableOperationsInMillions directive which causes a memtable flush
to occur after the specified number of operations.
- Like MemtableSizeInMB, BinaryMemtableSizeInMB has been replaced by
BinaryMemtableThroughputInMB.
- Replication factor is now per-keyspace, rather than global.
- KeysCachedFraction is deprecated in favor of KeysCached
- RowWarningThresholdInMB added, to warn before very large rows
get big enough to threaten node stability
Thrift API
----------
- removed deprecated get_key_range method
- added batch_mutate method
- deprecated multiget and batch_insert methods in favor of
multiget_slice and batch_mutate, respectively
- added ConsistencyLevel.ANY, for when you want write
availability even when it may not be readable immediately.
Unlike CL.ZERO, though, it will throw an exception if
it cannot be written *somewhere*.
JMX metrics
-----------
- read and write statistics are reported as lifetime totals,
  instead of averages over the last minute. Averages since the
  last request are also available for convenience.
- cache hit rate statistics are now available from JMX under
org.apache.cassandra.db.Caches
- compaction JMX metrics are moved to
org.apache.cassandra.db.CompactionManager. PendingTasks is now
a much better estimate of compactions remaining, and the
progress of the current compaction has been added.
- commitlog JMX metrics are moved to org.apache.cassandra.db.Commitlog
- progress of data streaming during bootstrap, loadbalance, or other
data migration, is available under
org.apache.cassandra.streaming.StreamingService.
See http://wiki.apache.org/cassandra/Streaming for details.
Installation/Upgrade
--------------------
- 0.6 network traffic is not compatible with earlier versions. You
will need to shut down all your nodes at once, upgrade, then restart.
0.5.0
=====
0. The commitlog format has changed (but sstable format has not).
When upgrading from 0.4, empty the commitlog either by running
bin/nodeprobe flush on each machine and waiting for the flush to finish,
or simply remove the commitlog directory if you only have test data.
(If more writes come in after the flush command, starting 0.5 will error
out; if that happens, just go back to 0.4 and flush again.)
The format changed twice: from 0.4 to beta1, and from beta2 to RC1.
.5 The gossip protocol has changed, meaning 0.5 nodes cannot coexist
in a cluster of 0.4 nodes or vice versa; you must upgrade your
whole cluster at the same time.
1. Bootstrap, move, load balancing, and active repair have been added.
See http://wiki.apache.org/cassandra/Operations. When upgrading
from 0.4, leave autobootstrap set to false for the first restart
of your old nodes.
2. Performance improvements across the board, especially on the write
path (over 100% improvement in stress.py throughput).
3. Configuration:
- Added "comment" field to ColumnFamily definition.
- Added MemtableFlushAfterMinutes, a global replacement for the
old per-CF FlushPeriodInMinutes setting
- Key cache settings
4. Thrift:
- Added get_range_slice, deprecating get_key_range
0.4.2
=====
1. Improve default garbage collector options significantly --
throughput will be 30% higher or more.
0.4.1
=====
1. SnapshotBeforeCompaction configuration option allows snapshotting
before each compaction, which allows rolling back to any version
of the data.
0.4.0
=====
1. On-disk data format has changed to allow billions of keys/rows per
node instead of only millions. The new format is incompatible with 0.3;
see 0.3 notes below for how to import data from a 0.3 install.
2. Cassandra now supports multiple keyspaces. Typically you will have
one keyspace per application, allowing applications to be able to
create and modify ColumnFamilies at will without worrying about
collisions with others in the same cluster.
3. Many Thrift API changes and documentation. See
http://wiki.apache.org/cassandra/API
4. Removed the web interface in favor of JMX and bin/nodeprobe, which
has significantly enhanced functionality.
5. Renamed configuration "<Table>" to "<Keyspace>".
6. Added commitlog fsync; see "<CommitLogSync>" in configuration.
0.3.0
=====
1. With enough and large enough keys in a ColumnFamily, Cassandra will
   run out of memory trying to perform compactions (data file merges).
   The size of what is stored in memory is roughly (S + 16 + M) * N,
   where S is the size of the key in bytes (usually 2 bytes per
   character), N is the number of keys, and M is the per-key map
   overhead (which can be guesstimated at around 32 bytes per key).
So, if you have 10-character keys and 1GB of headroom in your heap
space for compaction, you can expect to store about 17M keys
before running into problems.
See https://issues.apache.org/jira/browse/CASSANDRA-208
2. Because fixing #1 requires a data file format change, 0.4 will not
be binary-compatible with 0.3 data files. A client-side upgrade
can be done relatively easily with the following algorithm:
for key in old_client.get_key_range(everything):
columns = old_client.get_slice or get_slice_super(key, all columns)
new_client.batch_insert or batch_insert_super(key, columns)
The inner loop can be trivially parallelized for speed.
3. Commitlog does not fsync before reporting a write successful.
Using blocking writes mitigates this to some degree, since all
nodes that were part of the write quorum would have to fail
before sync for data to be lost.
See https://issues.apache.org/jira/browse/CASSANDRA-182
Additionally, row size (that is, all the data associated with a single
key in a given ColumnFamily) is limited by available memory, because
compaction deserializes each row before merging.
See https://issues.apache.org/jira/browse/CASSANDRA-16