How to attach a Cloud Block Storage volume to an OnMetal server with pyrax?

I would like to automatically attach a Cloud Block Storage volume to an OnMetal server running CentOS 7 by writing a Python script that makes use of the pyrax Python module. Do you know how to do that?

Attaching a Cloud Block Storage volume to an OnMetal server is a bit more complicated than attaching it to an ordinary Rackspace virtual server. You will notice this when you try to attach a Cloud Block Storage volume to an OnMetal server in the Rackspace web interface, the Cloud Control Panel, where you will see this text:

Note: When attaching volumes to OnMetal servers, you must log in to the OnMetal server to set the initiator name, discover the targets and then connect to the target.

So you can attach the volume in the web interface, but you also have to log in to the OnMetal server and run a few commands. The actual commands can be copied from the web interface and pasted into a terminal on the OnMetal server.

Also, before detaching the volume, you have to run a command.

But the web interface is actually not needed. It can all be done with the Python module pyrax.

First install the RPM package iscsi-initiator-utils on the OnMetal server:

[root@server-01 ~]# yum -y install iscsi-initiator-utils

Assuming the volume_id and the server_id are known, the following Python code first attaches the volume and then detaches it again. Unfortunately, the mount_point argument of attach_to_instance() does not work for OnMetal servers, so we need to run the command lsblk -n -d before and after attaching the volume. By comparing the two outputs we can then deduce the device name used for the attached volume. (Deducing the device name is not handled by the main script below; see the sketch right after this paragraph.)
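
For illustration, here is a minimal sketch of how that comparison could be done, assuming an already connected paramiko SSHClient like the one created in the script below. The helpers list_block_devices() and infer_new_device() are hypothetical, not part of pyrax:

# Hypothetical helpers, assuming an established paramiko SSHClient.
def list_block_devices(ssh_client):
    # "lsblk -n -d -o NAME" prints one device name per line, without
    # headers (-n) and without partitions (-d).
    stdin, stdout, stderr = ssh_client.exec_command("lsblk -n -d -o NAME")
    return set(stdout.read().split())

def infer_new_device(devices_before, devices_after):
    new_devices = devices_after - devices_before
    if len(new_devices) != 1:
        raise RuntimeError("Expected exactly one new device, "
                           "got: {}".format(new_devices))
    return "/dev/" + new_devices.pop()

Calling list_block_devices() once before the iSCSI login and once after, and passing both sets to infer_new_device(), would then yield for example /dev/sdb.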

#!/usr/bin/python
# Disclaimer: Use this script at your own risk!
import json
import os
import paramiko
import pyrax

# Replace server_id and volume_id
# with your own settings
server_id = "cbdcb7e3-5231-40ad-bba6-45aaeabf0a8d"
volume_id = "35abb4ba-caee-4cae-ada3-a16f6fa2ab50"
# Just to demonstrate that the mount_point argument for
# attach_to_instance() is not working for OnMetal servers
disk_device = "/dev/xvdd"

def run_ssh_commands(ssh_client, remote_commands):
    for remote_command in remote_commands:
        stdin, stdout, stderr = ssh_client.exec_command(remote_command)
        print("")
        print("command: " + remote_command)
        for line in stdout.read().splitlines():
            print(" stdout: " + line)
        exit_status = stdout.channel.recv_exit_status()
        if exit_status != 0:
            raise RuntimeError("The command:\n{}\n"
                               "exited with exit status: {}\n"
                               "stderr: {}".format(remote_command,
                                                   exit_status,
                                                   stderr.read()))

pyrax.set_setting("identity_type", "rackspace")
pyrax.set_default_region('IAD')
creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
pyrax.set_credential_file(creds_file)
server = pyrax.cloudservers.servers.get(server_id)
vol = pyrax.cloud_blockstorage.find(id=volume_id)
vol.attach_to_instance(server, mountpoint=disk_device)
pyrax.utils.wait_until(vol, "status", "in-use", interval=3, attempts=0,
                       verbose=True)

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server.accessIPv4, username='root', allow_agent=True)

# The new metadata is only available if we get() the server once more
server = pyrax.cloudservers.servers.get(server_id)

metadata = server.metadata["volumes_" + volume_id]
parsed_json = json.loads(metadata)
target_iqn = parsed_json["target_iqn"]
target_portal = parsed_json["target_portal"]
initiator_name = parsed_json["initiator_name"]

run_ssh_commands(ssh_client, [
    "lsblk -n -d",
    "echo InitiatorName={} > /etc/iscsi/initiatorname.iscsi".format(initiator_name),
    "iscsiadm -m discovery --type sendtargets --portal {}".format(target_portal),
    "iscsiadm -m node --targetname={} --portal {} --login".format(target_iqn, target_portal),
    "lsblk -n -d",
    "iscsiadm -m node --targetname={} --portal {} --logout".format(target_iqn, target_portal),
    "lsblk -n -d"
])

vol.detach()
pyrax.utils.wait_until(vol, "status", "available", interval=3, attempts=0,
                       verbose=True)

Running the Python code looks like this:

user@ubuntu:~$ python attach.py 2> /dev/null
Current value of status: attaching (elapsed:  1.0 seconds)
Current value of status: in-use (elapsed:  4.9 seconds)

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk

command: echo InitiatorName=iqn.2008-10.org.openstack:a24b6f80-cf02-48fc-9a25-ccc3ed3fb918 > /etc/iscsi/initiatorname.iscsi

command: iscsiadm -m discovery --type sendtargets --portal 10.190.142.116:3260
 stdout: 10.190.142.116:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50
 stdout: 10.69.193.1:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50

command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --login
 stdout: Logging in to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] (multiple)
 stdout: Login to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk
 stdout: sdb    8:16   0   50G  0 disk

command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --logout
 stdout: Logging out of session [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260]
 stdout: Logout of [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk
Current value of status: detaching (elapsed:  0.8 seconds)
Current value of status: available (elapsed:  4.7 seconds)
user@ubuntu:~$

Just one thing to add:

Although it is not mentioned in the official Rackspace documentation,

https://support.rackspace.com/how-to/attach-a-cloud-block-storage-volume-to-an-onmetal-server/

Rackspace Managed Infrastructure Support, in a forum post from 5 Aug 2015, also recommends running

iscsiadm -m node -T $TARGET_IQN -p $TARGET_PORTAL --op update -n node.startup -v automatic

to make the connection persistent, so that the iSCSI session is automatically restored at boot.
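
In the script above, that recommendation could, for example, be implemented as one more call to run_ssh_commands() right after the --login command. This is an untested sketch that reuses the target_iqn and target_portal variables from the script:

run_ssh_commands(ssh_client, [
    # Untested sketch based on the forum recommendation: make the
    # iSCSI session persistent so that it is restored at boot.
    "iscsiadm -m node --targetname={} --portal {} --op update"
    " -n node.startup -v automatic".format(target_iqn, target_portal)
])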

Update

Regarding deducing the new device name: Major Hayden writes in a blog post that

[root@server-01 ~]# ls /dev/disk/by-path/

can be used to find the path of the new device. If you want to resolve any symlinks, I guess this would work:

[root@server-01 ~]# find -L /dev/disk/by-path -maxdepth 1 -mindepth 1 -exec realpath {} \;
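
Building on that, here is a sketch of the same before/after comparison done over SSH, again assuming the paramiko SSHClient from the main script; list_devices_by_path() is a hypothetical helper:

# Hypothetical helper, assuming an established paramiko SSHClient.
FIND_CMD = ("find -L /dev/disk/by-path -maxdepth 1 -mindepth 1 "
            "-exec realpath {} \\;")

def list_devices_by_path(ssh_client):
    stdin, stdout, stderr = ssh_client.exec_command(FIND_CMD)
    return set(stdout.read().splitlines())

Calling this once before and once after the iSCSI login and taking the set difference should again reveal the new device, e.g. /dev/sdb.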