Confusion on deploying Drake based software to a real robot

I have limited experience with Drake and am currently trying to learn and implement some simple algorithms. My code is based on an example from R. Tedrake's Underactuated Robotics course. It is a slightly modified version of the example here, and the goal is to design a Lyapunov control function for a unicycle robot model.

To simulate the robot model, a Drake SymbolicVectorSystem instance is used.

As I understand it, this is just a building block for "prototyping" and for designing a suitable controller, is that correct? And is SymbolicVectorSystem still needed once there is a real robot?

Now that I am happy with the controller, I would like to deploy it to a ROS2 robot. For that, I wrote a simple script that subscribes to the current robot state from the localization system I have. I am not quite sure how to proceed from here; I think I need to connect the received robot-state messages to the controller's input and then publish the controller's output back to the robot. The problem is that I could not find any examples that fit this use case.

One approach I have in mind is to create another class that inherits from VectorSystem and have it wrap the ROS plumbing for subscribing to and publishing ROS messages. In my wrapper class's overridden DoCalcVectorOutput I would assign/retrieve the I/O and publish the retrieved control command. Finally, I would connect the controller's I/O and the wrapper class with DiagramBuilder.

I am not sure whether this would work, or whether it is the recommended approach; I would appreciate your help.
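To make this concrete, here is roughly what I imagine it could look like, reusing the PointTrackingController class from the script below. Instead of wiring a wrapper system into the diagram, the ROS node fixes the controller's input port from the pose callback and evaluates its output port directly. This is only a sketch and is untested; the /cmd_vel topic, the ControllerBridge name, the quaternion-to-yaw conversion, and the mapping of (v, delta) onto a Twist are placeholders I made up.

import numpy
import rclpy
import rclpy.node
import rclpy.qos
from geometry_msgs.msg import PoseWithCovarianceStamped, Twist


class ControllerBridge(rclpy.node.Node):
    """Feeds localization poses into the Drake controller and publishes its commands."""

    def __init__(self, controller):
        super().__init__("controller_bridge")
        self._controller = controller
        # A standalone context for the controller system only; no Simulator is involved.
        self._context = controller.CreateDefaultContext()
        self._cmd_pub = self.create_publisher(Twist, "/cmd_vel", 1)
        self.create_subscription(
            PoseWithCovarianceStamped, "/vox_nav/cupoch/icp_base_to_map_pose",
            self._pose_callback, rclpy.qos.qos_profile_sensor_data)

    def _pose_callback(self, msg):
        p = msg.pose.pose.position
        q = msg.pose.pose.orientation
        # Planar yaw extracted from the quaternion.
        yaw = numpy.arctan2(2.0 * (q.w * q.z + q.x * q.y),
                            1.0 - 2.0 * (q.y * q.y + q.z * q.z))
        # Fix the controller's input port to the measured state and evaluate its
        # output port; this is what ends up calling DoCalcVectorOutput.
        self._controller.get_input_port(0).FixValue(
            self._context, numpy.array([p.x, p.y, yaw]))
        v, delta = self._controller.get_output_port(0).Eval(self._context)
        # Placeholder mapping of (v, delta) onto a Twist command.
        cmd = Twist()
        cmd.linear.x = float(v)
        cmd.angular.z = float(delta)
        self._cmd_pub.publish(cmd)


def run_on_robot():
    rclpy.init()
    node = ControllerBridge(PointTrackingController())
    rclpy.spin(node)

Would something like that be reasonable, or is the wrapper-system approach preferred?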

In any case, this is what the current code looks like:

#!/usr/bin/env python3

import rclpy
import rclpy.node
from geometry_msgs.msg import PoseWithCovarianceStamped
import rclpy.qos
from rclpy.executors import MultiThreadedExecutor

import pydrake
from pydrake.all import (VectorSystem, DiagramBuilder, SymbolicVectorSystem, LogVectorOutput,
                         Variable, Simulator, cos, sin, tan)
import numpy

L = 0.6
K_us = 1.0
g = 9.81
# the control law is implemented below in PointTrackingController.lyapunov_controller


class PointTrackingController(VectorSystem):

    def __init__(self):
        # 3 inputs (robot state)
        # 2 outputs (robot inputs)
        VectorSystem.__init__(self, 3, 2)

    def euclidean_dist(self, x, y, x_obs, y_obs):
        # x and y are the vehicle position
        return numpy.sqrt((x - x_obs)**2 + (y - y_obs)**2)

    def clip(self, val, min_, max_):
        return min_ if val < min_ else max_ if val > max_ else val

    def lyapunov_controller(self, f_x, f_y, theta, f_theta, theta_d):
        k1 = 1E-4
        k2 = 1E-1
        u1 = k1 * numpy.copysign(1, (f_x*cos(theta) + f_y *
                                     sin(theta))) * (f_x**2 + f_y ** 2 + f_theta**2)

        p = u1*(f_x * cos(theta) + f_y * sin(theta)) + \
            k2 * (theta_d - theta) * f_theta
        if p >= 0.0:
            u2 = k2 * (theta_d - theta) * ((L + K_us/g * u1**2)/u1)
        else:
            u2 = -u1 / f_theta * (f_x * cos(theta) + f_y *
                                  sin(theta)) * ((L + K_us/g * u1**2)/u1)

        return -self.clip(u1, -1.5, 1.5), -self.clip(u2, -0.6, 0.6)

    def DoCalcVectorOutput(
        self,
        context,           # not used
        cartesian_state,   # input of the controller
        controller_state,  # not used
        input              # output of the controller
    ):

        kLAMBDA = 0.0
        kK = 0.05
        kPULL = 10

        # unpack the robot state: x, y, theta
        x, y, theta = cartesian_state

        target_x, target_y = 0, 0
        dist = self.euclidean_dist(x, y, target_x, target_y)
        beta = kPULL*dist

        if x >= 0.0:
            f_x = 3 * (x**2) * (beta**(1.0/kK)) + (y**2)*(beta**(1.0/kK)) + kLAMBDA * (theta**2) * (beta**(1.0/kK)) + \
                (x*(x - target_x)*(x**2 + y**2 + kLAMBDA*(theta**2)) * (beta **
                 ((-kK+1.0)/kK))) / kK * numpy.sqrt((x-target_x)**2 + (y - target_y)**2)

            f_y = x * (((y - target_y)*(x**2 + y**2 + kLAMBDA*(theta**2)) * (beta**((-kK+1.0)/kK))
                        ) / kK * numpy.sqrt((x-target_x)**2 + (y - target_y)**2) + 2 * y * (beta**(1.0/kK)))
            f_theta = 2 * kLAMBDA * theta * (beta**(1.0/kK))
        else:
            f_x = -3 * (x**2) * (beta**(1/kK)) - (y**2)*(beta**(1/kK)) - kLAMBDA * (theta**2) * (beta**(1.0/kK)) - \
                (x*(x - target_x)*(x**2 + y**2 + kLAMBDA*(theta**2)) * (beta **
                 ((-kK+1.0)/kK))) / kK * numpy.sqrt((x-target_x)**2 + (y - target_y)**2)

            f_y = -x * (((y - target_y)*(x**2 + y**2 + kLAMBDA*(theta**2)) * (beta**((-kK+1)/kK))) /
                        kK * numpy.sqrt((x-target_x)**2 + (y - target_y)**2) + 2 * y * (beta**(1.0/kK)))
            f_theta = - 2 * kLAMBDA * theta * (beta**(1.0/kK))

        theta_d = numpy.arctan2(-numpy.copysign(1, x)*f_y,
                                -numpy.copysign(1, x)*f_x)

        # evaluate the control law and write the result into the robot's input
        input[:] = self.lyapunov_controller(f_x, f_y, theta, f_theta, theta_d)


class RosNode(rclpy.node.Node):
    def __init__(self, *args):
        super(RosNode, self).__init__("rosnode")
        self.get_logger().info("Starting rosnode")

        self.create_subscription(
            PoseWithCovarianceStamped, "/vox_nav/cupoch/icp_base_to_map_pose",
            self.robot_pose_callback, rclpy.qos.qos_profile_sensor_data)

    def robot_pose_callback(self, msg):
        self.get_logger().info("recieved pose %s" % str(msg.pose.pose))


def main():

    print("Drake is installed under: ")
    print(pydrake.getDrakePath())

    x = Variable("x")
    y = Variable("y")
    theta = Variable("theta")        # vehicle orienttaion
    cartesian_state = [x, y, theta]
    v = Variable("v")                # driving velocity input
    delta = Variable("delta")        # steering velocity input
    input = [v, delta]

    # nonlinear dynamics, the whole state is measured (output = state)
    dynamics = [v * cos(theta),
                v * sin(theta),
                v * tan(delta)/L]

    robot = SymbolicVectorSystem(
        state=cartesian_state,
        input=input,
        output=cartesian_state,
        dynamics=dynamics,
    )

    # construction site for our closed-loop system
    builder = DiagramBuilder()

    # add the robot to the diagram
    # the method .AddSystem() simply returns a pointer to the system
    # we passed as input, so it's ok to give it the same name
    robot = builder.AddSystem(robot)

    # add the controller
    controller = builder.AddSystem(PointTrackingController())

    # wire the controller with the system
    builder.Connect(robot.get_output_port(0), controller.get_input_port(0))
    builder.Connect(controller.get_output_port(0), robot.get_input_port(0))

    # add a logger to the diagram
    # this will store the state trajectory
    logger = LogVectorOutput(robot.get_output_port(0), builder)

    # complete the construction of the diagram
    diagram = builder.Build()

    # set up a simulation environment
    simulator = Simulator(diagram)

    # set the initial cartesian state to a random initial position
    # try initial_state = np.random.randn(3) for a random initial state
    initial_state = [10, 16, 1.57]
    context = simulator.get_mutable_context()
    context.SetContinuousState(initial_state)

    # simulate from zero to sim_time
    # the trajectory will be stored in the logger
    integrator = simulator.get_mutable_integrator()
    target_accuracy = 1E-4
    integrator.set_target_accuracy(target_accuracy)
    maximum_step_size = 0.1
    integrator.set_maximum_step_size(maximum_step_size)
    minimum_step_size = 2E-5
    integrator.set_requested_minimum_step_size(minimum_step_size)
    integrator.set_throw_on_minimum_step_size_violation(True)
    integrator.set_fixed_step_mode(True)
    simulator.set_target_realtime_rate(1.0)

    rclpy.init()
    node = RosNode()
    executor = MultiThreadedExecutor()
    executor.add_node(node)

    while True:
        executor.spin_once(0.05)
        simulator.AdvanceTo(context.get_time() + 0.1)
        node.get_logger().info("Another step, time is: %s" % str(context.get_time()))


if __name__ == '__main__':
    main()

The code is a slightly modified version of this example here [...]

Unfortunately, I can't access it (I'm not sure whether it's your drive, or whether the transition to DeepNote changed things). Could you post a link directly to the course notes instead? e.g. http://underactuated.csail.mit.edu/acrobot.html#cart_pole

As I understand this is just a building block for the "prototyping" and designing a suitable controller, is this correct? Is SymbolicVectorSystem still needed if there is a real robot?

The choice is ultimately up to you and your application. SymbolicVectorSystem may express what you need at first, but your needs may grow beyond what it can (easily) express.
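To illustrate what going beyond it might look like, the same unicycle dynamics can also be written directly as a LeafSystem. This is only a rough, untested sketch (the UnicyclePlant name is mine; L is the wheelbase constant from your script):

import numpy
from pydrake.systems.framework import BasicVector, LeafSystem

L = 0.6  # wheelbase, same constant as in your script


class UnicyclePlant(LeafSystem):
    def __init__(self):
        LeafSystem.__init__(self)
        # Inputs: driving velocity v and steering angle delta.
        self.DeclareVectorInputPort("u", BasicVector(2))
        # Continuous state: x, y, theta.
        self.DeclareContinuousState(3)
        # Output the full state.
        self.DeclareVectorOutputPort("state", BasicVector(3), self.CopyStateOut)

    def DoCalcTimeDerivatives(self, context, derivatives):
        x, y, theta = context.get_continuous_state_vector().CopyToVector()
        v, delta = self.get_input_port(0).Eval(context)
        xdot = [v * numpy.cos(theta),
                v * numpy.sin(theta),
                v * numpy.tan(delta) / L]
        derivatives.get_mutable_vector().SetFromVector(xdot)

    def CopyStateOut(self, context, output):
        output.SetFromVector(context.get_continuous_state_vector().CopyToVector())

Once you need things like event handling or abstract-valued ports, which SymbolicVectorSystem does not cover, this is roughly the shape the code tends to take.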

Now that I am happy with the controller, I would like to deploy this controller to a ROS2 robot. [...]

There are a few examples that use the systems framework, some in Drake itself and others in downstream projects, but understandably they can be hard to find.

Some examples in Drake itself, though they use LCM rather than ROS2:

Some examples in downstream projects; also mostly using LCM:


As an aside, with help from OpenRobotics, some of us at TRI are (slowly) working on some ROS2 interfaces for Drake; you can see some of the progress here: https://github.com/RobotLocomotion/drake-ros/tree/develop
A few caveats:

  • This is very experimental and has not yet been reviewed (to any real extent) on our side.
  • Our primary customer here is ourselves, so at this point it may not fit your use case.