How to install yugabyte-2.0.10.0 on CentOS 7?

I am trying to install yugabyte-2.0.10.0:

a) Environment:

OS: CentOS 7.6
CPU model: Intel(R) Core(TM) i7 CPU M 620
Kernel: 3.10.0-957.el7.x86_64
GCC: 4.8.5
Python: 2.7.5

b) Commands:

cd ~
rm -rf /opt/yugabyte
mkdir -p /opt/yugabyte
mkdir -p /opt/yugabyte/data
wget https://downloads.yugabyte.com/yugabyte-2.0.10.0-linux.tar.gz
tar -xvzf /root/yugabyte/yugabyte-2.0.10.0-linux.tar.gz -C /opt/yugabyte
/opt/yugabyte/yugabyte-2.0.10.0/bin/post_install.sh
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" destroy
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" create
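
After running these, a quick sanity check that the master and tserver actually came up (a minimal sketch; it only uses paths and ports that appear in the logs below, so adjust if your layout differs):

ps -ef | grep -E 'yb-(master|tserver)' | grep -v grep      # both processes should be running
ss -lntp | grep -E ':(7000|7100|9000|9100|5433)\b'         # web UI, RPC and YSQL ports should be listening
tail -n 50 /opt/yugabyte/data/node-1/disk-1/master.err     # master stderr
tail -n 50 /opt/yugabyte/data/node-1/disk-1/tserver.err    # tserver stderr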

Error log:

[root@srvr0 ~]# /opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" create
Creating cluster.
Waiting for cluster to be ready.
Traceback (most recent call last):
  File "/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl", line 1969, in <module>
    control.run()
  File "/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl", line 1946, in run
    self.args.func()
  File "/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl", line 1706, in create_cmd_impl
    self.wait_for_cluster_or_raise()
  File "/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl", line 1551, in wait_for_cluster_or_raise
    raise RuntimeError("Timed out waiting for a YugaByte DB cluster!")
RuntimeError: Timed out waiting for a YugaByte DB cluster!
Viewing file /tmp/tmptCw8eu:
2020-01-09 21:21:18,413 INFO: Starting master-1 with:
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-master --fs_data_dirs "/opt/yugabyte/data/node-1/disk-1" --webserver_interface 127.0.0.1 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/opt/yugabyte/yugabyte-2.0.10.0 --webserver_doc_root "/opt/yugabyte/yugabyte-2.0.10.0/www" --replication_factor=1 --yb_num_shards_per_tserver 2 --ysql_num_shards_per_tserver=2 --master_addresses 127.0.0.1:7100 --enable_ysql=true >"/opt/yugabyte/data/node-1/disk-1/master.out" 2>"/opt/yugabyte/data/node-1/disk-1/master.err" &
2020-01-09 21:21:18,475 INFO: Starting tserver-1 with:
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-tserver --fs_data_dirs "/opt/yugabyte/data/node-1/disk-1" --webserver_interface 127.0.0.1 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/opt/yugabyte/yugabyte-2.0.10.0 --webserver_doc_root "/opt/yugabyte/yugabyte-2.0.10.0/www" --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=127.0.0.1:6379 --cql_proxy_bind_address=127.0.0.1:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --enable_ysql=true --pgsql_proxy_bind_address=127.0.0.1:5433 >"/opt/yugabyte/data/node-1/disk-1/tserver.out" 2>"/opt/yugabyte/data/node-1/disk-1/tserver.err" &
2020-01-09 21:21:18,483 INFO: Waiting for master and tserver processes to come up.
2020-01-09 21:21:18,627 INFO: Waiting for master leader election and tablet server registration.
2020-01-09 21:22:15,331 INFO: Master leader election still pending...
2020-01-09 21:22:16,333 ERROR: Failed waiting for None tservers, got None
^^^ Encountered errors ^^^

Please help me resolve the above issue!

Update 1:
Info and error logs:

[root@srvr0 ~]# cat /opt/yugabyte/data/node-1/disk-1/master.out
[root@srvr0 ~]# cat /opt/yugabyte/data/node-1/disk-1/master.err
[root@srvr0 ~]# cat /opt/yugabyte/data/node-1/disk-1/tserver.out
The files belonging to this database system will be owned by user "root".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  C
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: en_US.UTF-8
  NUMERIC:  en_US.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /opt/yugabyte/data/node-1/disk-1/pg_data ... ok
creating subdirectories ... ok
selecting default max_connections ... 300
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
syncing data to disk ... ok
[root@srvr0 ~]# cat /opt/yugabyte/data/node-1/disk-1/tserver.err
In YugaByte DB, setting LC_COLLATE to C and all other locale settings to en_US.UTF-8 by default. Locale support will be enhanced as part of addressing https://github.com/YugaByte/yugabyte-db/issues/1557
2020-01-13 15:07:18.447 UTC [12159] LOG:  YugaByte is ENABLED in PostgreSQL. Transactions are enabled.
2020-01-13 15:07:18.488 UTC [12159] LOG:  listening on IPv4 address "127.0.0.1", port 5433
2020-01-13 15:07:18.595 UTC [12159] LOG:  redirecting log output to logging collector process
2020-01-13 15:07:18.595 UTC [12159] HINT:  Future log output will appear in directory "/opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs".

Update 2:

[root@srvr0 ~]# cat /opt/yugabyte/data/node-1/disk-1/yb-data/master/logs/yb-master.WARNING
Log file created at: 2020/01/14 13:47:23
Running on machine: srvr0
Application fingerprint: version 2.0.10.0 build 4 revision 83610e77c7659c7587bc0c8aea76db47ff8e2df1 build_type RELEASE built at 06 Jan 2020 08:02:49 UTC
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
W0114 13:47:23.925465 12631 master_service.cc:108] Could not set master raft config : Illegal state (yb/master/catalog_manager.cc:6130): Node 1d36ad7c7b89457197595fc8f9e57f6f peer not initialized.
W0114 13:47:23.928180 12631 master_service.cc:108] Could not set master raft config : Illegal state (yb/master/catalog_manager.cc:6130): Node 1d36ad7c7b89457197595fc8f9e57f6f peer not initialized.
W0114 13:47:23.929930 12631 master_service.cc:108] Could not set master raft config : Illegal state (yb/master/catalog_manager.cc:6130): Node 1d36ad7c7b89457197595fc8f9e57f6f peer not initialized.
W0114 13:47:23.931773 12631 master_service.cc:108] Could not set master raft config : Illegal state (yb/master/catalog_manager.cc:6130): Node 1d36ad7c7b89457197595fc8f9e57f6f peer not initialized.
W0114 13:47:25.277549 12595 log.cc:702] Time spent Fsync log took a long time: real 0.289s  user 0.000s sys 0.000s
W0114 13:47:27.635577 12595 log.cc:702] Time spent Fsync log took a long time: real 0.144s  user 0.000s sys 0.000s
W0114 13:47:29.459060 12595 log.cc:702] Time spent Fsync log took a long time: real 0.088s  user 0.000s sys 0.000s
...
W0114 13:48:17.587898 12595 log.cc:702] Time spent Fsync log took a long time: real 0.068s  user 0.000s sys 0.000s
W0114 13:48:17.652386 12595 log.cc:702] Time spent Fsync log took a long time: real 0.064s  user 0.000s sys 0.000s
W0114 13:48:18.864150 12595 log.cc:702] Time spent Fsync log took a long time: real 0.089s  user 0.000s sys 0.000s
W0114 13:48:25.154635 12654 permissions_manager.cc:1050] Multiple security configs found when loading sys catalog
W0114 13:48:25.181205 12654 catalog_manager.cc:606] Time spent T 00000000000000000000000000000000 P 1d36ad7c7b89457197595fc8f9e57f6f: Loading metadata into memory: real 60.895s    user 0.132s sys 0.026s
[root@srvr0 ~]# cat /opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.WARNING
Log file created at: 2020/01/14 13:47:23
Running on machine: srvr0
Application fingerprint: version 2.0.10.0 build 4 revision 83610e77c7659c7587bc0c8aea76db47ff8e2df1 build_type RELEASE built at 06 Jan 2020 08:02:49 UTC
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
W0114 13:47:23.926698 12628 heartbeater.cc:598] P 4bbca70b45944a7e9f66463471e11466: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:479): master is no longer the leader tries=0, num=1, masters=0x00000000029c8d60 -> [[127.0.0.1:7100]], code=Service unavailable
W0114 13:47:23.928352 12628 heartbeater.cc:598] P 4bbca70b45944a7e9f66463471e11466: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:479): master is no longer the leader tries=1, num=1, masters=0x00000000029c8d60 -> [[127.0.0.1:7100]], code=Service unavailable
W0114 13:47:23.930130 12628 heartbeater.cc:598] P 4bbca70b45944a7e9f66463471e11466: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:479): master is no longer the leader tries=2, num=1, masters=0x00000000029c8d60 -> [[127.0.0.1:7100]], code=Service unavailable
W0114 13:47:23.930173 12628 heartbeater.cc:323] P 4bbca70b45944a7e9f66463471e11466: Failed 3 heartbeats in a row: no longer allowing fast heartbeat attempts.
...
W0114 13:48:22.868005 12628 heartbeater.cc:598] P 4bbca70b45944a7e9f66463471e11466: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:479): master is no longer the leader tries=61, num=1, masters=0x00000000029c8d60 -> [[127.0.0.1:7100]], code=Service unavailable
W0114 13:48:23.869757 12628 heartbeater.cc:598] P 4bbca70b45944a7e9f66463471e11466: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:479): master is no longer the leader tries=62, num=1, masters=0x00000000029c8d60 -> [[127.0.0.1:7100]], code=Service unavailable
W0114 13:48:24.915241 12628 heartbeater.cc:598] P 4bbca70b45944a7e9f66463471e11466: Failed to heartbeat to 127.0.0.1:7100: Service unavailable (yb/tserver/heartbeater.cc:479): master is no longer the leader tries=63, num=1, masters=0x00000000029c8d60 -> [[127.0.0.1:7100]], code=Service unavailable

Update 3:

Last 20 lines of the master and tserver INFO files:

[root@srvr0 logs]# tail -20 /opt/yugabyte/data/node-1/disk-1/yb-data/master/logs/yb-master.INFO
  }
}
table_type: TRANSACTION_STATUS_TABLE_TYPE
namespace {
  name: "system"
}
I0121 07:00:23.625553 12478 catalog_manager.cc:1937] Setting default tablets to 2 with 1 primary servers
I0121 07:00:23.625607 12478 partition.cc:388] Creating partitions with num_tablets: 2
I0121 07:00:23.701505 12478 catalog_manager.cc:2155] Successfully created table transactions [id=ebe4eab3526e4030a8ef44796223f904] per request from internal request
I0121 07:00:23.701651 12478 catalog_manager.cc:741] Finished creating transaction status table asynchronously
I0121 07:00:23.701782 12478 catalog_manager.cc:3790] 5536e8fad1d04d52902a0d9488ab5b4e now has full report for 0 tablets.
I0121 07:00:23.701819 12478 catalog_manager.cc:3796] 5536e8fad1d04d52902a0d9488ab5b4e sent full tablet report with 0 tablets.
I0121 07:00:23.901152 12478 catalog_manager.cc:4037] Peer 5536e8fad1d04d52902a0d9488ab5b4e sent incremental report for 1bd70a13590146de9fa3feb16e90b120, prev state op id: -1, prev state term: 0, prev state has_leader_uuid: 0. Consensus state: current_term: 0 config { opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }
I0121 07:00:24.024679 12478 catalog_manager.cc:4037] Peer 5536e8fad1d04d52902a0d9488ab5b4e sent incremental report for 1bd70a13590146de9fa3feb16e90b120, prev state op id: -1, prev state term: 0, prev state has_leader_uuid: 0. Consensus state: current_term: 1 config { opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }
I0121 07:00:24.035790 12457 catalog_manager.cc:4037] Peer 5536e8fad1d04d52902a0d9488ab5b4e sent incremental report for ec9bb307331442b3b1fd7ba43a0199a0, prev state op id: -1, prev state term: 0, prev state has_leader_uuid: 0. Consensus state: current_term: 1 config { opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }
I0121 07:00:24.035905 12457 catalog_manager.cc:4002] Tablet: 1bd70a13590146de9fa3feb16e90b120 reported consensus state change. New consensus state: current_term: 1 leader_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" config { opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } } from 5536e8fad1d04d52902a0d9488ab5b4e
I0121 07:00:24.036085 12457 catalog_entity_info.cc:97] T 1bd70a13590146de9fa3feb16e90b120: Leader changed from <NULL> to 0x00000000038ee010 -> { permanent_uuid: 5536e8fad1d04d52902a0d9488ab5b4e registration: common { private_rpc_addresses { host: "127.0.0.1" port: 9100 } http_addresses { host: "127.0.0.1" port: 9000 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } placement_uuid: "" } capabilities: 2189743739 placement_id: cloud1:datacenter1:rack1 }
I0121 07:00:24.069538 12478 catalog_manager.cc:4002] Tablet: ec9bb307331442b3b1fd7ba43a0199a0 reported consensus state change. New consensus state: current_term: 1 leader_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" config { opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } } from 5536e8fad1d04d52902a0d9488ab5b4e
I0121 07:00:24.069607 12478 catalog_entity_info.cc:97] T ec9bb307331442b3b1fd7ba43a0199a0: Leader changed from <NULL> to 0x00000000038ee010 -> { permanent_uuid: 5536e8fad1d04d52902a0d9488ab5b4e registration: common { private_rpc_addresses { host: "127.0.0.1" port: 9100 } http_addresses { host: "127.0.0.1" port: 9000 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } placement_uuid: "" } capabilities: 2189743739 placement_id: cloud1:datacenter1:rack1 }
I0121 07:00:28.951503 12447 reactor.cc:450] Master_R000: Timing out connection Connection (0x0000000002cc3690) server 127.0.0.1:49899 => 127.0.0.1:7100 - it has been idle for 65.0008s (delta: 65.0008, current time: 751.024, last activity time: 686.023)

[root@srvr0 logs]# tail -20 /opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs/yb_tserver.INFO
tail: cannot open ‘/opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs/yb_tserver.INFO’ for reading: No such file or directory
[root@srvr0 logs]# tail -20 /opt/yugabyte/data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.INFO
I0121 07:00:24.024483 13021 consensus_meta.cc:275] T 1bd70a13590146de9fa3feb16e90b120 P 5536e8fad1d04d52902a0d9488ab5b4e: Updating active role from FOLLOWER to LEADER. Consensus state: current_term: 1 leader_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" config { opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I0121 07:00:24.024521 13021 raft_consensus.cc:2803] T 1bd70a13590146de9fa3feb16e90b120 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 LEADER]: Calling mark dirty synchronously for reason code NEW_LEADER_ELECTED
I0121 07:00:24.024586 13021 raft_consensus.cc:838] T 1bd70a13590146de9fa3feb16e90b120 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 LEADER]: Becoming Leader. State: Replica: 5536e8fad1d04d52902a0d9488ab5b4e, State: 1, Role: LEADER, Watermarks: {Received: 0.0 Committed: 0.0} Leader: 0.0
I0121 07:00:24.024760 13021 consensus_queue.cc:207] T 1bd70a13590146de9fa3feb16e90b120 P 5536e8fad1d04d52902a0d9488ab5b4e [LEADER]: Queue going to LEADER mode. State: All replicated op: 0.0, Majority replicated op: 0.0, Committed index: 0.0, Last appended: 0.0, Current term: 1, Majority size: 1, State: QUEUE_OPEN, Mode: LEADER, active raft config: opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } }
I0121 07:00:24.024852 13021 raft_consensus.cc:856] Sending NO_OP at op { term: 0 index: 0 }
I0121 07:00:24.026254 13023 replica_state.cc:1268] T 1bd70a13590146de9fa3feb16e90b120 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 LEADER]: SetLeaderNoOpCommittedUnlocked(1)
I0121 07:00:24.026321 13023 replica_state.cc:725] T 1bd70a13590146de9fa3feb16e90b120 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 LEADER]: Advanced the committed_op_id across terms. Last committed operation was: { term: 0 index: 0 } New committed index is: { term: 1 index: 1 }
I0121 07:00:24.035311 13018 leader_election.cc:239] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [CANDIDATE]: Term 1 election: Election decided. Result: candidate won.
I0121 07:00:24.035398 13018 raft_consensus.cc:2867] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 FOLLOWER]: Snoozing failure detection for 3.178s
I0121 07:00:24.035445 13018 raft_consensus.cc:2773] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 FOLLOWER]: Leader election won for term 1
I0121 07:00:24.035468 13018 replica_state.cc:1268] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 FOLLOWER]: SetLeaderNoOpCommittedUnlocked(0)
I0121 07:00:24.035542 13018 consensus_meta.cc:275] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e: Updating active role from FOLLOWER to LEADER. Consensus state: current_term: 1 leader_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" config { opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I0121 07:00:24.035590 13018 raft_consensus.cc:2803] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 LEADER]: Calling mark dirty synchronously for reason code NEW_LEADER_ELECTED
I0121 07:00:24.035641 13018 raft_consensus.cc:838] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 LEADER]: Becoming Leader. State: Replica: 5536e8fad1d04d52902a0d9488ab5b4e, State: 1, Role: LEADER, Watermarks: {Received: 0.0 Committed: 0.0} Leader: 0.0
I0121 07:00:24.035706 13018 consensus_queue.cc:207] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [LEADER]: Queue going to LEADER mode. State: All replicated op: 0.0, Majority replicated op: 0.0, Committed index: 0.0, Last appended: 0.0, Current term: 1, Majority size: 1, State: QUEUE_OPEN, Mode: LEADER, active raft config: opid_index: -1 peers { permanent_uuid: "5536e8fad1d04d52902a0d9488ab5b4e" member_type: VOTER last_known_private_addr { host: "127.0.0.1" port: 9100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } }
I0121 07:00:24.035748 13018 raft_consensus.cc:856] Sending NO_OP at op { term: 0 index: 0 }
I0121 07:00:24.036341 13021 replica_state.cc:1268] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 LEADER]: SetLeaderNoOpCommittedUnlocked(1)
I0121 07:00:24.036391 13021 replica_state.cc:725] T ec9bb307331442b3b1fd7ba43a0199a0 P 5536e8fad1d04d52902a0d9488ab5b4e [term 1 LEADER]: Advanced the committed_op_id across terms. Last committed operation was: { term: 0 index: 0 } New committed index is: { term: 1 index: 1 }
I0121 07:01:28.862228 12462 reactor.cc:450] TabletServer_R000: Timing out connection Connection (0x0000000003fb4490) server 127.0.0.1:49050 => 127.0.0.1:9100 - it has been idle for 65.0008s (delta: 65.0008, current time: 810.935, last activity time: 745.934)
I0121 07:01:28.862249 12463 reactor.cc:450] TabletServer_R001: Timing out connection Connection (0x0000000003fb47f0) server 127.0.0.1:33000 => 127.0.0.1:9100 - it has been idle for 65.0008s (delta: 65.0008, current time: 810.936, last activity time: 745.935)

Update 4:

Installed Python 2.7.10 on CentOS 7 (reference: https://myopswork.com/install-python-2-7-10-on-centos-rhel-75f90c5239a5) as follows:

cd /usr/src
wget https://www.python.org/ftp/python/2.7.10/Python-2.7.10.tgz
tar xzf Python-2.7.10.tgz
cd Python-2.7.10
./configure
make altinstall
python2.7
### Make Python 2.7.10 the default
echo "alias python=\"/usr/local/bin/python2.7\"" >> /etc/profile

Then ran the following commands to install yugabyte 2.0.10.0:

cd ~
rm -rf /opt/yugabyte
mkdir -p /opt/yugabyte
mkdir -p /opt/yugabyte/data
tar -xvzf /tmp/yugabyte/yugabyte-2.0.10.0-linux.tar.gz -C /opt/yugabyte
/opt/yugabyte/yugabyte-2.0.10.0/bin/post_install.sh
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" destroy
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" create
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" setup_redis
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" status

Note: The first attempt to create the database failed; I destroyed it and created it again.
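
Once create reports success, a minimal verification sketch (assuming the default postgres superuser and the port/database shown in the connection info below):

/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" status
/opt/yugabyte/yugabyte-2.0.10.0/bin/ysqlsh -h 127.0.0.1 -p 5433 -U postgres -d postgres -c 'SELECT version();'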

Logs:

Python 2.7.10:

[root@srvr0 ~]# python
Python 2.7.10 (default, Jan 27 2020, 17:09:56) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> exit();

Installation:

[root@srvr0 ~]# cd ~
[root@srvr0 ~]# rm -rf /opt/yugabyte
[root@srvr0 ~]# mkdir -p /opt/yugabyte
[root@srvr0 ~]# mkdir -p /opt/yugabyte/data
[root@srvr0 ~]# ###cp /root/yugabyte-2.0.10.0-linux.tar.gz /index
[root@srvr0 ~]# tar -xvzf /index/yugabyte/yugabyte-2.0.10.0-linux.tar.gz -C /opt/yugabyte
yugabyte-2.0.10.0/
yugabyte-2.0.10.0/bin/
yugabyte-2.0.10.0/bin/ysqlsh
yugabyte-2.0.10.0/bin/psql
yugabyte-2.0.10.0/bin/bulk_load_cleanup.sh
yugabyte-2.0.10.0/bin/bulk_load_helper.sh
yugabyte-2.0.10.0/bin/log_cleanup.sh
yugabyte-2.0.10.0/bin/yb-check-failed-tablets.sh
yugabyte-2.0.10.0/bin/yb-check-consistency.py
yugabyte-2.0.10.0/bin/configure
...
yugabyte-2.0.10.0/ui/conf/evolutions/default/1.sql
yugabyte-2.0.10.0/ui/conf/application.conf
yugabyte-2.0.10.0/ui/conf/k8s-expose-all.yml
yugabyte-2.0.10.0/ui/conf/application.default.conf
yugabyte-2.0.10.0/ui/conf/default_cmk_policy.json
yugabyte-2.0.10.0/ui/conf/version.txt
yugabyte-2.0.10.0/ui/README.md
yugabyte-2.0.10.0/version_metadata.json
[root@srvr0 ~]# /opt/yugabyte/yugabyte-2.0.10.0/bin/post_install.sh
+ /opt/yugabyte/yugabyte-2.0.10.0/bin/patchelf --set-interpreter /opt/yugabyte/yugabyte-2.0.10.0/lib/ld.so log-dump
...
+ /opt/yugabyte/yugabyte-2.0.10.0/bin/patchelf --set-interpreter /opt/yugabyte/yugabyte-2.0.10.0/lib/ld.so vacuumlo
+ /opt/yugabyte/yugabyte-2.0.10.0/bin/patchelf --set-rpath /opt/yugabyte/yugabyte-2.0.10.0/lib/yb:/opt/yugabyte/yugabyte-2.0.10.0/lib/yb-thirdparty:/opt/yugabyte/yugabyte-2.0.10.0/linuxbrew/lib vacuumlo
[root@srvr0 ~]# /opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" destroy
Destroying cluster.
[root@srvr0 ~]# /opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" create
Creating cluster.
Waiting for cluster to be ready.
Traceback (most recent call last):
  File "/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl", line 1969, in <module>
    control.run()
  File "/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl", line 1946, in run
    self.args.func()
  File "/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl", line 1706, in create_cmd_impl
    self.wait_for_cluster_or_raise()
  File "/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl", line 1551, in wait_for_cluster_or_raise
    raise RuntimeError("Timed out waiting for a YugaByte DB cluster!")
RuntimeError: Timed out waiting for a YugaByte DB cluster!
Viewing file /tmp/tmpJb_KSP:
2020-01-27 19:09:25,732 INFO: Starting master-1 with:
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-master --fs_data_dirs "/opt/yugabyte/data/node-1/disk-1" --webserver_interface 127.0.0.1 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/opt/yugabyte/yugabyte-2.0.10.0 --webserver_doc_root "/opt/yugabyte/yugabyte-2.0.10.0/www" --replication_factor=1 --yb_num_shards_per_tserver 2 --ysql_num_shards_per_tserver=2 --master_addresses 127.0.0.1:7100 --enable_ysql=true >"/opt/yugabyte/data/node-1/disk-1/master.out" 2>"/opt/yugabyte/data/node-1/disk-1/master.err" &
2020-01-27 19:09:25,792 INFO: Starting tserver-1 with:
/opt/yugabyte/yugabyte-2.0.10.0/bin/yb-tserver --fs_data_dirs "/opt/yugabyte/data/node-1/disk-1" --webserver_interface 127.0.0.1 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/opt/yugabyte/yugabyte-2.0.10.0 --webserver_doc_root "/opt/yugabyte/yugabyte-2.0.10.0/www" --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=127.0.0.1:6379 --cql_proxy_bind_address=127.0.0.1:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --enable_ysql=true --pgsql_proxy_bind_address=127.0.0.1:5433 >"/opt/yugabyte/data/node-1/disk-1/tserver.out" 2>"/opt/yugabyte/data/node-1/disk-1/tserver.err" &
2020-01-27 19:09:25,800 INFO: Waiting for master and tserver processes to come up.
2020-01-27 19:09:25,934 INFO: Waiting for master leader election and tablet server registration.
2020-01-27 19:10:22,502 INFO: Master leader election still pending...
2020-01-27 19:10:23,504 ERROR: Failed waiting for None tservers, got None
^^^ Encountered errors ^^^
[root@srvr0 ~]# /opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" destroy
Destroying cluster.
[root@srvr0 ~]# /opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" create
Creating cluster.
Waiting for cluster to be ready.
.
----------------------------------------------------------------------------------------------------
| Node Count: 1 | Replication Factor: 1                                                            |
----------------------------------------------------------------------------------------------------
| JDBC                : jdbc:postgresql://127.0.0.1:5433/postgres                                  |
| YSQL Shell          : /opt/yugabyte/yugabyte-2.0.10.0/bin/ysqlsh                                 |
| YCQL Shell          : /opt/yugabyte/yugabyte-2.0.10.0/bin/cqlsh                                  |
| YEDIS Shell         : /opt/yugabyte/yugabyte-2.0.10.0/bin/redis-cli                              |
| Web UI              : http://127.0.0.1:7000/                                                     |
| Cluster Data        : /opt/yugabyte/data                                                         |
----------------------------------------------------------------------------------------------------

For more info, please use: yb-ctl --data_dir /opt/yugabyte/data status
[root@srvr0 ~]# /opt/yugabyte/yugabyte-2.0.10.0/bin/yb-ctl --data_dir "/opt/yugabyte/data" setup_redis
Setting up YugaByte DB support for Redis API.
Waiting for cluster to be ready.
Setup Redis successful.

We are trying to reproduce this issue internally and will get back to you. In the meantime, could you please check the tserver.err file and the tserver.INFO log (see the instructions on how to find the yb-ctl tserver logs) to see whether anything bad happened there? It feels like the tservers are not getting up and running.

You did succeed on this one.

Could you check these logs and report back on them:

/opt/yugabyte/data/node-1/disk-1/master.out, /opt/yugabyte/data/node-1/disk-1/master.err, /opt/yugabyte/data/node-1/disk-1/tserver.out, /opt/yugabyte/data/node-1/disk-1/tserver.err.
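
For reference, one way to dump all four files in a single pass (paths exactly as listed above):

for f in master.out master.err tserver.out tserver.err; do
  echo "===== $f ====="
  cat "/opt/yugabyte/data/node-1/disk-1/$f"
done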