DataStax AMI default cassandra.yaml not achieving consistency 2?

I have installed the DataStax AMI, which uses Ec2Snitch. The configuration is:

listen_address: private ip
broadcast_address: same as listen address
rpc_address: 0.0.0.0
broadcast_rpc_address: private ip
seeds: private ip

I have 2 such instances, but I cannot achieve consistency TWO. Even though both instances are running, the error reports alive: 1. I want to get consistency TWO from any client.

I have tried this:

broadcast_rpc_address: public ip   // same error
rpc_address: public ip             // cassandra wouldn't start

It complains:

127.0.0.1:9042 is not running

What is the correct configuration? The nodes are in the same region and the same rack, and nodetool shows both of them up and running.

system.peers shows the private IPs in the peer and rpc_address columns, and preferred_ip is null.
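
For reference, this can be checked from cqlsh with a query along these lines (the exact column set of system.peers depends on the Cassandra version):

SELECT peer, rpc_address, preferred_ip FROM system.peers;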

cassandra.yaml:

cluster_name: 'logcluster'
num_tokens: 256
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000 # 3 hours
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
    - /mnt/cassandra/data
commitlog_directory: /mnt/cassandra/commitlog
disk_failure_policy: stop
commit_failure_policy: stop
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /mnt/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.xxx.xx.xx7"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.xxx.xx.xx5
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 0.0.0.0
rpc_port: 9160
broadcast_rpc_address: 10.xxx.xx.xx5
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
column_index_size_in_kb: 64
batch_size_warn_threshold_in_kb: 5
compaction_throughput_mb_per_sec: 16
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
phi_convict_threshold: 12
endpoint_snitch: Ec2Snitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    keystore: conf/.keystore
    keystore_password: cassandra
internode_compression: all
inter_dc_tcp_nodelay: false
auto_bootstrap: false

Datacenter: us-east

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens  Owns (effective)  Host ID                       Rack
UN  10.xxx.xx.xx7   98.21 MB   256     53.9%             d5xxxx-0a59-xxxx-xxx-ab59xxxxx  1d
UN  10.xxx.xx.xx5  50.26 MB   256     46.1%             1xxxxff-xxx-xxx-xxx-13edxxxxxcf  1d

The Amazon nodes show their private 10.x.y.z IPs in ifconfig; a node itself does not know its public IP. Instead, those public addresses are mapped to it by NAT (Network Address Translation).

So the nodes should be gossiping on the 10.x.y.z network, as you can see from the nodetool output.
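
To confirm which addresses the local node is advertising to peers and clients, the system.local table can be queried from cqlsh (a sketch; the exact set of columns in system.local depends on the Cassandra version):

SELECT data_center, rack, broadcast_address, rpc_address FROM system.local;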

The best thing to do is to check the error you see in cqlsh when you try to read/write at the desired consistency level; that should give you a better handle on the problem. Also check the replication factor of your keyspace, for example:

CREATE KEYSPACE spam WITH replication = {
  'class': 'SimpleStrategy',
  'replication_factor': '2'
};
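
If the keyspace already exists, its current replication settings can be checked from cqlsh with DESCRIBE (shown here against the example keyspace spam; substitute the actual keyspace name):

cqlsh> DESCRIBE KEYSPACE spam;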

In the example above the replication factor is 2. If you have two nodes and you are reading at consistency level ONE, you may not get the correct results. As a general rule of thumb, it is recommended to always write at a higher consistency level than you read at. To set the consistency level in cqlsh, use the following syntax:

cqlsh> CONSISTENCY ONE;
Consistency level set to ONE.
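
Since the goal in the question is consistency level TWO across the two nodes, the same syntax applies; this is only a sketch, with some_table standing in for an actual table in the spam keyspace:

cqlsh> CONSISTENCY TWO;
Consistency level set to TWO.
cqlsh> SELECT * FROM spam.some_table LIMIT 1;

If the read fails at this level, the error typically names the required and alive replica counts, which is exactly the error worth inspecting as described above.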

Hope this helps.