How to properly auto-scale groups of VMs in Cloudify?

I am using Cloudify Community version 19.01.24.

I'm trying to figure out how to auto-scale a group of two VMs. This is what I have so far (irrelevant parts omitted):

  monitored_vm1_port:
    type: cloudify.openstack.nodes.Port
    properties:
      openstack_config: *openstack_config
    relationships:
      - type: cloudify.relationships.contained_in
        target: proxy_server_network

  monitored_vm2_port:
    type: cloudify.openstack.nodes.Port
    properties:
      openstack_config: *openstack_config
    relationships:
      - type: cloudify.relationships.contained_in
        target: proxy_server_network

  monitored_vm1_host:
    type: cloudify.openstack.nodes.Server
    properties:
      image: { get_input: image }
      flavor: { get_input: flavor }
      resource_id: { concat: ['monitored_vm1-', { get_input: client_name }] }
      agent_config:
        user: { get_input: agent_user }
        key: { get_property: [ keypair, private_key_path ] }
    interfaces:
      cloudify.interfaces.monitoring_agent:
        install:
          implementation: diamond.diamond_agent.tasks.install
          inputs:
            diamond_config:
              interval: 10
        start: diamond.diamond_agent.tasks.start
        stop: diamond.diamond_agent.tasks.stop
        uninstall: diamond.diamond_agent.tasks.uninstall
      cloudify.interfaces.monitoring:
        start:
          implementation: diamond.diamond_agent.tasks.add_collectors
          inputs:
            collectors_config:
              NetworkCollector: {}
    relationships:
     - type: cloudify.openstack.server_connected_to_port
       target: monitored_vm1_port
     - type: cloudify.openstack.server_connected_to_keypair
       target: keypair

  monitored_vm2_host:
    type: cloudify.openstack.nodes.Server
    properties:
      image: { get_input: image }
      flavor: { get_input: flavor }
      resource_id: { concat: ['monitored_vm2-', { get_input: client_name }] }
      agent_config:
        user: { get_input: agent_user }
        key: { get_property: [ keypair, private_key_path ] }
    interfaces:
      cloudify.interfaces.monitoring_agent:
        install:
          implementation: diamond.diamond_agent.tasks.install
          inputs:
            diamond_config:
              interval: 10
        start: diamond.diamond_agent.tasks.start
        stop: diamond.diamond_agent.tasks.stop
        uninstall: diamond.diamond_agent.tasks.uninstall
      cloudify.interfaces.monitoring:
        start:
          implementation: diamond.diamond_agent.tasks.add_collectors
          inputs:
            collectors_config:
              NetworkCollector: {}
    relationships:
     - type: cloudify.openstack.server_connected_to_port
       target: monitored_vm2_port
     - type: cloudify.openstack.server_connected_to_keypair
       target: keypair

groups:
  vm_group:
    members: [monitored_vm1_host, monitored_vm2_host]

  scale_up_group:
    members: [monitored_vm1_host, monitored_vm2_host]
    policies:
      auto_scale_up:
        type: scale_policy_type
        properties:
          policy_operates_on_group: true
          scale_limit: 2 # max additional instances
          scale_direction: '<'
          scale_threshold: 31457280
          service_selector: .*monitored_vm1_host.*network.eth0.rx.bit
          cooldown_time: 60
        triggers:
          execute_scale_workflow:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: scale
              workflow_parameters:
                delta: 1
                scalable_entity_name: vm_group
                scale_compute: true

policies:
  vm_group_scale_policy:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1
    targets: [vm_group]

So the blueprint deploys correctly, and the scale workflow is triggered on the specified condition (traffic on the VM's interface), but it fails while creating the new VM instance with the following error:

2019-11-18 14:54:46,591:ERROR: Task nova_plugin.server.create[f736f81c-7f8c-4f82-a280-8352c1d01bff] raised:
Traceback (most recent call last):
  (...)
NonRecoverableError: Port 3b727b5e-a2ec-47cc-b711-37cb80a7b4e5 is still in use. [status_code=409]

It looks like Cloudify is trying to spawn the new instance with the existing port, which is strange. So I thought maybe I should explicitly put the VMs' ports into the scaling group as well, so they would be replicated together with the VMs. I tried this:

  vm_group:
    members: [monitored_vm1_host, monitored_vm1_port, monitored_vm2_host, monitored_vm2_port]

But in that case I get an error about a missing relationship between objects, already at the blueprint validation stage:

Invalid blueprint - Node 'monitored_vm1_host' and 'monitored_vm1_port' belong to some shared group but they are not contained in any shared node, nor is any ancestor node of theirs.
  in: /opt/manager/resources/blueprint-with-scaling-d79fed3d-0b3b-4459-a851-fedd9ecf50c6/blueprint-with-scaling.yaml

I have gone through the documentation and all the examples I could find (there are not many), but it is still not clear to me.

How do I scale this properly?

You are getting the first error because Cloudify tries to scale the VM and connect it to a port that is already attached to the first VM.

The second error means that a port cannot be scaled unless it depends on a node that is also being scaled; this is to avoid scaling resources that cannot be scaled on their own.

The solution is to add a node of type cloudify.nodes.Root and give the port a contained_in relationship to it. Since the port depends on that node, and the node is part of your scaling group, the port can be scaled along with it.

Your blueprint would look like this:

  my_relationship_node:
    type: cloudify.nodes.Root

  port:
    type: cloudify.openstack.nodes.Port
    properties:
      openstack_config: *openstack_config
    relationships:
      - type: cloudify.relationships.connected_to
        target: public_network
      - type: cloudify.relationships.depends_on
        target: public_subnet
      - type: cloudify.openstack.port_connected_to_security_group
        target: security_group
      - type: cloudify.openstack.port_connected_to_floating_ip
        target: ip
      - type: cloudify.relationships.contained_in
        target: my_relationship_node
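
For completeness, here is a minimal sketch of how the groups and policies sections could then reference the anchor node. This is only an illustration of the pattern above (shown for a single VM), not taken from your blueprint: both monitored_vm1_host and my_relationship_node are top-level nodes, so they can share a scaling group, and the port is replicated along with them because it is contained_in my_relationship_node.

groups:
  vm_group:
    # both members are top-level nodes, so they can share a scaling group;
    # the port scales implicitly because it is contained_in my_relationship_node
    members: [monitored_vm1_host, my_relationship_node]

policies:
  vm_group_scale_policy:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1
    targets: [vm_group]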

Hope this helps.