In Cassandra, the snitch has two functions:
it teaches Cassandra enough about your network topology to route requests efficiently.
it allows Cassandra to spread replicas around your cluster to avoid correlated failures. It does this by grouping machines into "datacenters" and "racks." Cassandra will do its best not to have more than one replica on the same "rack" (which may not actually be a physical location).
The dynamic snitch monitors read latencies to avoid reading from hosts
that have slowed down. The dynamic snitch is configured with the
following properties in cassandra.yaml:
dynamic_snitch: whether the dynamic snitch should be enabled or disabled.
dynamic_snitch_update_interval: 100ms, controls how often to perform the more expensive part of host score calculation.
dynamic_snitch_reset_interval: 10m, controls how often to reset all host scores, allowing a bad host to possibly recover.
dynamic_snitch_badness_threshold: if set greater than zero, this allows 'pinning' of replicas to hosts in order to increase cache capacity. The badness threshold controls how much worse the pinned host has to be before the dynamic snitch will prefer other replicas over it. This is expressed as a double which represents a percentage. Thus, a value of 0.2 means Cassandra would continue to prefer the static snitch values until the pinned host was 20% worse than the fastest.
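Put together, a cassandra.yaml fragment tuning the dynamic snitch might look like this (a sketch; the update and reset intervals are the stated defaults, the other values are illustrative):

```yaml
# Enable the dynamic snitch, which wraps the configured endpoint_snitch.
dynamic_snitch: true
# How often to perform the more expensive part of host score calculation.
dynamic_snitch_update_interval: 100ms
# How often to reset all host scores, allowing a bad host to recover.
dynamic_snitch_reset_interval: 10m
# Prefer the pinned host until it is 20% worse than the fastest replica.
dynamic_snitch_badness_threshold: 0.2
```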
The endpoint_snitch parameter in
cassandra.yaml should be set to the
class that implements
IEndpointSnitch which will be wrapped by the
dynamic snitch and decide if two endpoints are in the same data center
or on the same rack. Out of the box, Cassandra provides the following snitch
implementations:

GossipingPropertyFileSnitch: This should be your go-to snitch for production use. The rack and datacenter for the local node are defined in cassandra-rackdc.properties and propagated to other nodes via gossip. If
cassandra-topology.properties exists, it is used as a fallback, allowing migration from the PropertyFileSnitch.
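For example, a minimal cassandra-rackdc.properties for one node might look like this (the dc and rack names are illustrative):

```properties
# Rack and datacenter of the local node, propagated to other nodes via gossip.
dc=DC1
rack=RAC1
```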
SimpleSnitch: Treats Strategy order as proximity. This can improve cache locality when disabling read repair. Only appropriate for single-datacenter deployments.
PropertyFileSnitch: Proximity is determined by rack and data center, which are explicitly configured in cassandra-topology.properties.
RackInferringSnitch: Proximity is determined by rack and data center, which are assumed to correspond to the 3rd and 2nd octet of each node's IP address, respectively. Unless this happens to match your deployment conventions, this is best used as an example of writing a custom Snitch class, and is provided in that spirit.
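The octet convention can be sketched as follows (a Python illustration of the rule, not Cassandra's actual Java implementation):

```python
def infer_dc_rack(ip: str) -> tuple[str, str]:
    """For an address a.b.c.d, RackInferringSnitch treats octet b as the
    datacenter and octet c as the rack."""
    octets = ip.split(".")
    return octets[1], octets[2]

# Nodes 10.20.30.4 and 10.20.30.5 share a datacenter ("20") and a rack ("30").
```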
These snitches are used in cloud environments for various cloud vendors. All cloud-based
snitch implementations currently extend
AbstractCloudMetadataServiceSnitch, which in turn extends
GossipingPropertyFileSnitch.
Each cloud-based snitch has its own way of resolving which rack and
datacenter a respective node belongs to; querying the cloud provider's metadata service is
the most common apparatus for achieving it. All cloud-based snitches call an HTTP service,
specific to the respective cloud. The constructor of
AbstractCloudMetadataServiceSnitch accepts an instance of
AbstractCloudMetadataServiceConnector, which implements a method,
apiCall, which by default executes an HTTP
GET request against a predefined HTTP URL, sending no HTTP headers,
and expects a response with HTTP code
200. It is possible to send various HTTP headers as part of the
request if an implementer wants that.
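The connector contract can be sketched in Python (a hedged illustration only; MetadataServiceConnector and api_call are stand-ins for Cassandra's Java classes, not its actual API):

```python
from urllib import request


class MetadataServiceConnector:
    """Sketch of a connector: api_call issues an HTTP GET against a
    metadata URL, optionally sends headers, and accepts only HTTP 200."""

    def __init__(self, metadata_url, timeout_seconds=30):
        self.metadata_url = metadata_url
        self.timeout_seconds = timeout_seconds

    def api_call(self, path, headers=None):
        req = request.Request(self.metadata_url + path,
                              headers=headers or {}, method="GET")
        with request.urlopen(req, timeout=self.timeout_seconds) as resp:
            if resp.status != 200:  # only a 200 response is acceptable
                raise IOError(f"unexpected HTTP status {resp.status}")
            return resp.read().decode()
```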
Currently, the only implementation of
AbstractCloudMetadataServiceConnector is
DefaultCloudMetadataServiceConnector. If a user needs
to override the behavior of
AbstractCloudMetadataServiceConnector, they are welcome to do so by implementing
their own connector and passing it to the constructor of
AbstractCloudMetadataServiceSnitch.
All cloud-based snitches accept these properties in cassandra-rackdc.properties:

metadata_url: URL of the cloud service to retrieve topology information from; this is cloud-specific.

metadata_request_timeout: the default value of 30s (30 seconds) sets the connect timeout for calls made by apiCall. In other words, a request against metadata_url will time out if no response arrives in that period.

dc_suffix: a string, by default empty, which will be appended to the resolved datacenter.
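A hedged cassandra-rackdc.properties sketch for a cloud snitch, assuming the properties metadata_url, metadata_request_timeout, and dc_suffix (the URL and suffix values are illustrative):

```properties
# Cloud-specific metadata service to query for topology information.
metadata_url=http://169.254.169.254
# Give up if the metadata service does not answer within this period.
metadata_request_timeout=30s
# Appended to the resolved datacenter name.
dc_suffix=_cass
```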
The built-in cloud-based snitches are:
Ec2Snitch: Appropriate for EC2 deployments in a single Region, or in multiple regions with inter-region VPC enabled (available since the end of 2017, see the AWS announcement). Loads Region and Availability Zone information from the EC2 API. The Region is treated as the datacenter, and the Availability Zone as the rack. Only private IPs are used, so this will work across multiple regions only if inter-region VPC is enabled.
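One plausible way to derive the datacenter and rack from an Availability Zone string can be sketched as follows (an illustration assuming AZ names of the form us-east-1a; not Cassandra's actual parsing code):

```python
def ec2_dc_rack(availability_zone: str) -> tuple[str, str]:
    """Treat the Region as the datacenter and the zone letter as the rack,
    e.g. "us-east-1a" yields datacenter "us-east-1" and rack "a"."""
    # The zone is the trailing letter; everything before it is the Region.
    return availability_zone[:-1], availability_zone[-1]
```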
Ec2MultiRegionSnitch: Uses public IPs as broadcast_address to allow cross-region connectivity (thus, you should set seed addresses to the public IP as well). You will need to open the
ssl_storage_port on the public IP firewall (for intra-Region traffic, Cassandra will switch to the private IP after establishing a connection).
For Ec2 snitches, since CASSANDRA-16555, it is possible to
choose the version of AWS IMDS. By default, IMDSv2 is used. The version of IMDS is driven by the property
ec2_metadata_type and can be of value either v1 or
v2. It is possible to specify a custom URL of IMDS by
ec2_metadata_url (or by
metadata_url), which is by default
169.254.169.254, and then a query against the
/latest/meta-data/placement/availability-zone endpoint is executed.
IMDSv2 is secured by a token which needs to be fetched from IMDSv2 first, and it has to be passed in a header
for the actual queries to IMDSv2. Both
Ec2Snitch and Ec2MultiRegionSnitch do this automatically.
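The IMDSv2 handshake can be sketched like this (a Python illustration, not Cassandra's Java code; the two X-aws-ec2-metadata-token headers are AWS's standard IMDSv2 headers, and the base URL is the default metadata endpoint mentioned above):

```python
from urllib import request


def imds_v2_get(path, base="http://169.254.169.254", token_ttl=21600):
    """Fetch an IMDSv2 session token first, then pass it with the query."""
    # Step 1: obtain a token via PUT, stating how long it should live.
    token_req = request.Request(
        base + "/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(token_ttl)})
    with request.urlopen(token_req, timeout=30) as resp:
        token = resp.read().decode()
    # Step 2: the actual metadata query carries the token in a header.
    req = request.Request(base + path,
                          headers={"X-aws-ec2-metadata-token": token})
    with request.urlopen(req, timeout=30) as resp:
        return resp.read().decode()
```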
The only configuration parameter exposed to a user is
ec2_metadata_token_ttl_seconds, which is by default set to
21600. The TTL has to be an integer from the range of 30 to 21600 seconds.
AlibabaCloudSnitch: A snitch that assumes an ECS region is a DC and an ECS availability_zone is a rack. This information is available in the config for the node. The format of the zone-id is like cn-hangzhou-a:
hangzhou means the Hangzhou region and
a means the az id. We use
cn-hangzhou as the dc and
a as the zone-id.
The metadata_url for this snitch is, by default,
100.100.100.200/, and it will execute an HTTP request against an endpoint returning the zone-id.
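The zone-id split described above can be sketched as (illustrative Python, not the snitch's Java code):

```python
def alibaba_dc_rack(zone_id: str) -> tuple[str, str]:
    """Split a zone-id such as "cn-hangzhou-a" into dc "cn-hangzhou"
    and rack (az id) "a"."""
    dc, _, rack = zone_id.rpartition("-")
    return dc, rack
```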
AzureSnitch: The Azure snitch will resolve datacenter and rack by calling the Azure metadata service at
169.254.169.254, returning the response in JSON format for, by default, API version
2021-12-13. The version of the API is configurable via the property
azure_api_version in
cassandra-rackdc.properties. A datacenter is resolved from the
location field of the response, and a rack is resolved by looking into the
zone field first. When
zone is not set, or it is an empty string, it will look into
the platformFaultDomain field. Such resolved value is prepended by a fixed prefix.
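The field-resolution order can be sketched as follows (a Python illustration over the JSON fields named above; the sample document is illustrative, and no rack prefix is applied here):

```python
import json


def azure_dc_rack(metadata_json: str) -> tuple[str, str]:
    """Datacenter comes from "location"; rack prefers "zone" and falls
    back to "platformFaultDomain" when "zone" is missing or empty."""
    meta = json.loads(metadata_json)
    rack = meta.get("zone") or meta["platformFaultDomain"]
    return meta["location"], rack
```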
GoogleCloudSnitch: The Google snitch will resolve datacenter and rack by calling