Can I have less than 2.5TB of disk for a BigTable node?

In the GCP console I can estimate pricing for any disk size I want, but when I go to create my BigTable instance I can only choose the number of nodes, each of which comes with 2.5TB of SSD or HDD disk.

Is there a way to set up a BigTable cluster with, for example, 1 node and 1TB of SSD instead of the default 2.5TB?

Even in the GCP pricing calculator I can change the disk size, but I can't find where to configure it when creating a cluster (https://cloud.google.com/products/calculator#id=2acfedfc-4f5a-4a9a-a5d7-0470d7fa3973).

Thanks

If you only need a 1TB database, then write only 1TB and you will be billed accordingly. The 2.5TB per node is a capacity limit, not a pre-allocated disk that you pay for up front.
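You can see this in how instances are created. Here is a minimal sketch using the google-cloud-bigtable Python client (the project, instance, and cluster IDs are placeholders): the only capacity-related settings are the node count and the storage type; there is no disk-size parameter to configure.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

# Placeholder project/instance/cluster IDs -- substitute your own.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance", display_name="My Instance")

# You choose how many nodes to serve traffic with and whether the
# cluster uses SSD or HDD storage; disk size is never specified.
cluster = instance.cluster(
    "my-cluster",
    location_id="us-central1-b",
    serve_nodes=1,
    default_storage_type=enums.StorageType.SSD,
)

operation = instance.create(clusters=[cluster])
operation.result(timeout=120)  # Block until the instance is ready.
```

Storage is then billed by what your tables actually contain, per the documentation quoted below.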

From the Bigtable pricing documentation:

Cloud Bigtable frequently measures the average amount of data in your Cloud Bigtable tables during a short time interval. For billing purposes, these measurements are combined into an average over a one-month period, and this average is multiplied by the monthly rate.

You are billed only for the storage you use, including overhead for indexing and Cloud Bigtable's internal representation on disk. For instances that contain multiple clusters, Cloud Bigtable keeps a separate copy of your data with every cluster, and you are charged for every copy of your data.

When you delete data from Cloud Bigtable, the data becomes inaccessible immediately; however, you are charged for storage of the data until Cloud Bigtable compacts the table. This process typically takes up to a week.

In addition, if you store multiple versions of a value in a table cell, or if you have set an expiration time for one of your table's column families, you can read the obsolete and expired values until Cloud Bigtable completes garbage collection for the table. You are also charged for the obsolete and expired values prior to garbage collection. This process typically takes up to a week.
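As a side note on that last point: version limits and expiration times are configured per column family via garbage-collection rules. A minimal sketch with the same Python client, reusing the `instance` object from above (the table and column-family names are made up), might look like this:

```python
import datetime

from google.cloud.bigtable import column_family

# Example table; the name is just a placeholder.
table = instance.table("my-table")

# Delete cells that exceed 2 versions OR are older than 7 days.
# Obsolete/expired values still incur storage charges until the
# periodic garbage-collection pass actually removes them.
gc_rule = column_family.GCRuleUnion(rules=[
    column_family.MaxVersionsGCRule(2),
    column_family.MaxAgeGCRule(datetime.timedelta(days=7)),
])

table.create(column_families={"cf1": gc_rule})
```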