SimpleStrategy allows a single integer replication_factor to be defined. This determines the number of nodes that should contain a copy of each row. For example, if replication_factor is 3, then three different nodes should store a copy of each row.
SimpleStrategy treats all nodes identically, ignoring any configured datacenters or racks. To determine the replicas for a token range, Cassandra iterates through the tokens in the ring, starting with the token range of interest. For each token, it checks whether the owning node has been added to the set of replicas, and if it has not, it is added to the set. This process continues until replication_factor distinct nodes have been added to the set of replicas.
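As a minimal sketch, a keyspace using SimpleStrategy with a replication factor of 3 could be created as follows (the keyspace name is hypothetical):

    -- Every row in this keyspace is stored on 3 distinct nodes,
    -- chosen by walking the ring as described above.
    CREATE KEYSPACE test_keyspace
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};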
NetworkTopologyStrategy allows a replication factor to be specified for each datacenter in the cluster. Even if your cluster only uses a single datacenter, NetworkTopologyStrategy should be preferred over SimpleStrategy to make it easier to add new physical or virtual datacenters to the cluster later.
In addition to allowing the replication factor to be specified per-DC, NetworkTopologyStrategy also attempts to choose replicas within a datacenter from different racks. If the number of racks is greater than or equal to the replication factor for the DC, each replica is chosen from a different rack. Otherwise, each rack will hold at least one replica, but some racks may hold more than one. Note that this rack-aware behavior has some potentially surprising implications. For example, if the racks hold unequal numbers of nodes, the data load on the smallest rack may be much higher. Similarly, if a single node is bootstrapped into a new rack, it will be considered a replica for the entire ring. For this reason, many operators choose to configure all nodes in a single “rack”.
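For example, a sketch of a keyspace replicated three times in each of two datacenters (the keyspace and datacenter names are hypothetical, and the datacenter names must match those reported by the snitch):

    -- 3 replicas in DC1 and 3 in DC2, spread across racks where possible.
    CREATE KEYSPACE test_keyspace
        WITH replication = {'class': 'NetworkTopologyStrategy',
                            'DC1': 3, 'DC2': 3};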
Transient replication allows you to configure a subset of replicas to only replicate data that hasn’t been incrementally repaired. This allows you to decouple data redundancy from availability. For instance, if you have a keyspace replicated at rf 3 and alter it to rf 5 with 2 transient replicas, you go from being able to tolerate one failed replica to being able to tolerate two, without a corresponding increase in storage usage. This is because 3 nodes will replicate all the data for a given token range, and the other 2 will only replicate data that hasn’t been incrementally repaired.
To use transient replication, you first need to enable it in cassandra.yaml. Once enabled, both SimpleStrategy and NetworkTopologyStrategy can be configured to transiently replicate data. You configure it by specifying the replication factor as <total_replicas>/<transient_replicas>.
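Continuing the earlier example, a sketch of altering a keyspace from rf 3 to rf 5 with 2 transient replicas (the keyspace and datacenter names are hypothetical, and transient replication must already be enabled in cassandra.yaml):

    -- 5 total replicas per token range in DC1, 2 of which are transient:
    -- 3 full replicas hold all data, while the 2 transient replicas hold
    -- only data that has not yet been incrementally repaired.
    ALTER KEYSPACE test_keyspace
        WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '5/2'};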
Transiently replicated keyspaces only support tables created with read_repair set to NONE, and monotonic reads are not currently supported. You also can’t use LWT, logged batches, or counters in 4.0. You will possibly never be able to use materialized views with transiently replicated keyspaces, and probably never be able to use 2i with them.
Transient replication is an experimental feature that may not be ready for production use. The expected audience is experienced users of Cassandra capable of fully validating a deployment of their particular application. That means being able to check that operations like reads, writes, decommission, remove, rebuild, repair, and replace all work with your queries, data, configuration, operational practices, and availability requirements.
It is anticipated that 4.next will support monotonic reads with transient replication as well as LWT, logged batches, and counters.
Cassandra supports a per-operation tradeoff between consistency and availability through Consistency Levels. Essentially, an operation’s consistency level specifies how many of the replicas need to respond to the coordinator in order to consider the operation a success.
The following consistency levels are available:

ONE: Only a single replica must respond.
TWO: Two replicas must respond.
THREE: Three replicas must respond.
QUORUM: A majority (n/2 + 1) of the replicas must respond.
ALL: All of the replicas must respond.
LOCAL_QUORUM: A majority of the replicas in the local datacenter (whichever datacenter the coordinator is in) must respond.
EACH_QUORUM: A majority of the replicas in each datacenter must respond.
LOCAL_ONE: Only a single replica must respond. In a multi-datacenter cluster, this also guarantees that read requests are not sent to replicas in a remote datacenter.
ANY: A single replica may respond, or the coordinator may store a hint. If a hint is stored, the coordinator will later attempt to replay the hint and deliver the mutation to the replicas. This consistency level is only accepted for write operations.
Write operations are always sent to all replicas, regardless of consistency level. The consistency level simply controls how many responses the coordinator waits for before responding to the client.
For read operations, the coordinator generally only issues read commands to enough replicas to satisfy the consistency level, with one exception. Speculative retry may issue a redundant read request to an extra replica if the other replicas have not responded within a specified time window.
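In cqlsh, the consistency level used for subsequent requests can be shown or set with the CONSISTENCY command (driver APIs expose an equivalent per-statement setting); a minimal sketch:

    -- Show the current consistency level, then change it.
    CONSISTENCY;
    CONSISTENCY QUORUM;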
It is common to pick read and write consistency levels that are high enough to overlap, resulting in “strong” consistency. This is typically expressed as W + R > RF, where W is the write consistency level, R is the read consistency level, and RF is the replication factor. For example, if RF = 3, a QUORUM request will require responses from at least two of the three replicas. If QUORUM is used for both writes and reads, at least one of the replicas is guaranteed to participate in both the write and the read request, which in turn guarantees that the latest write will be read. In a multi-datacenter environment, LOCAL_QUORUM can be used to provide a weaker but still useful guarantee: reads are guaranteed to see the latest write from within the same datacenter.
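As a sketch of the QUORUM read-after-write guarantee at RF = 3 (the keyspace and table are hypothetical):

    -- With RF = 3, QUORUM needs 2 of 3 replicas for both operations,
    -- so the read set must overlap the write set in at least one replica.
    CONSISTENCY QUORUM;
    INSERT INTO test_keyspace.users (id, name) VALUES (1, 'alice');
    SELECT name FROM test_keyspace.users WHERE id = 1;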
If this type of strong consistency isn’t required, lower consistency levels like ONE may be used to improve throughput, latency, and availability.