Getting started with Qt and the Intel RealSense SDK

As a beginner I don't know where to ask, so I'd appreciate any help.

I want to create a project using Qt and the Intel RealSense SDK 2.0.

Actually, to get started, I need a simpler project, for example one of the samples from the Intel RealSense SDK 2.0:

  1. im-show
  2. and/or hello-realsense

A code example would be best, and a tutorial even better. So far I haven't found anything on Google. If the above is difficult, how would you suggest getting started?

For example, I found "Qt and openCV on Windows with MSVC".
Is it a good place for me to start, and do I need OpenCV to show/display a depth image the way the im-show sample does?

Thanks in advance.

From your question, I understand that you are trying to preview camera data in a Qt application.

  1. You cannot call imshow() inside a Qt application.
  2. You have to convert the OpenCV Mat to a QImage as below and display it in your viewer (widget/QML):

cv::Mat img; // a single-channel 8-bit image (CV_8UC1)
QImage img1 = QImage((uchar *) img.data, img.cols, img.rows, img.step, QImage::Format_Indexed8);
// Note: Format_Indexed8 only fits single-channel 8-bit Mats; for a 3-channel
// BGR Mat use QImage::Format_RGB888 and call .rgbSwapped() on the result.

Here is a very simple example that uses only Qt and the Intel RealSense SDK.

We start by writing a class to handle our camera:

#ifndef CAMERA_H
#define CAMERA_H

// Import QT libs, one for threads and one for images
#include <QThread> 
#include <QImage>

// Import librealsense header
#include <librealsense2/rs.hpp>

// Let's define our camera as a thread, it will be constantly running and sending frames to
// our main window
class Camera : public QThread 
{
    Q_OBJECT
public:
    // We need to instantiate a camera with both depth and rgb resolution (as well as fps)
    Camera(int rgb_width, int rgb_height, int depth_width, int depth_height, int fps);
    ~Camera() {}

    // Member function that handles thread iteration
    void run() override;
    
    // If called it will stop the thread
    void stop() { camera_running = false; }

private:
    // Realsense configuration structure, it will define streams that need to be opened
    rs2::config cfg;

    // Our pipeline, main object used by realsense to handle streams
    rs2::pipeline pipe;

    // Frames returned by our pipeline, they will be packed in this structure
    rs2::frameset frames;

    // A bool that defines if our thread is running
    bool camera_running = true;

signals:
    // A signal sent by our class to notify that there are frames that need to be processed
    void framesReady(QImage frameRGB, QImage frameDepth);
};
// A function that will convert realsense frames to QImage
QImage realsenseFrameToQImage(const rs2::frame& f);

#endif // CAMERA_H

To fully understand what this class does, I redirect you to the following two pages: Signals & Slots and QThread. This class is a QThread, which means that it runs in parallel with our main window. When a pair of frames is ready, the framesReady signal is emitted and the window will display the images.

First, let's see how to open the camera streams with librealsense:

Camera::Camera(int rgb_width, int rgb_height, int depth_width, int depth_height, int fps)
{
    // Enable depth stream with given resolution. Pixel will have a bit depth of 16 bit
    cfg.enable_stream(RS2_STREAM_DEPTH, depth_width, depth_height, RS2_FORMAT_Z16, fps);
    
    // Enable RGB stream as frames with 3 channel of 8 bit
    cfg.enable_stream(RS2_STREAM_COLOR, rgb_width, rgb_height, RS2_FORMAT_RGB8, fps);

    // Start our pipeline
    pipe.start(cfg);
}

As you can see, our constructor is very simple; it just opens the pipeline with the given streams.

Now that the pipeline has started, we just need to fetch the corresponding frames. We do this in our 'run' method, which is invoked when the QThread starts:

void Camera::run()
{
    while(camera_running)
    {
        // Wait for frames and get them as soon as they are ready
        frames = pipe.wait_for_frames();

        // Let's get our depth frame
        rs2::depth_frame depth = frames.get_depth_frame();
        // And our rgb frame
        rs2::frame rgb = frames.get_color_frame();

        // Let's convert them to QImage
        auto q_rgb = realsenseFrameToQImage(rgb);
        auto q_depth = realsenseFrameToQImage(depth);

        // And finally we'll emit our signal
        emit framesReady(q_rgb, q_depth);
    }
}

The function that performs the conversion is the following:

QImage realsenseFrameToQImage(const rs2::frame &f)
{
    using namespace rs2;

    auto vf = f.as<video_frame>();
    const int w = vf.get_width();
    const int h = vf.get_height();
    // Use the actual stride reported by the SDK rather than assuming w * bytes-per-pixel
    const int stride = vf.get_stride_in_bytes();

    if (f.get_profile().format() == RS2_FORMAT_RGB8)
    {
        // QImage does not copy the buffer it wraps, and the frame's memory is
        // recycled by the pipeline, so return a deep copy the GUI thread can own
        return QImage((const uchar*) f.get_data(), w, h, stride, QImage::Format_RGB888).copy();
    }
    else if (f.get_profile().format() == RS2_FORMAT_Z16)
    {
        // Format_Grayscale16 requires Qt 5.13 or newer
        return QImage((const uchar*) f.get_data(), w, h, stride, QImage::Format_Grayscale16).copy();
    }

    throw std::runtime_error("Frame format is not supported yet!");
}
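The Format_Grayscale16 branch requires Qt 5.13 or newer. On older Qt, one workaround is to scale the 16-bit depth values down to 8 bits and back a QImage::Format_Grayscale8 with them. A minimal sketch of that scaling (depthToGray8 and the clamp distance are my own illustration, not part of the SDK):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Clamp 16-bit depth values (in sensor units) to max_depth and scale
// linearly into 0..255, suitable for a QImage::Format_Grayscale8 buffer.
std::vector<uint8_t> depthToGray8(const uint16_t* depth, size_t count, uint16_t max_depth)
{
    std::vector<uint8_t> gray(count);
    for (size_t i = 0; i < count; ++i)
    {
        const int d = std::min(depth[i], max_depth);
        gray[i] = static_cast<uint8_t>(d * 255 / max_depth);
    }
    return gray;
}
```

You would then copy gray.data() into a QImage of size w × h with Format_Grayscale8 (remember that QImage rows are 4-byte aligned, so copy row by row if w is not a multiple of 4).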

Our camera is done.

Now we define our main window. We need a slot that will receive our frames and two labels where we will place the images:

#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>
#include <QLabel>

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = 0);

public slots:
    // Slot that will receive frames from the camera
    void receiveFrame(QImage rgb, QImage depth);

private:
    QLabel *rgb_label;
    QLabel *depth_label;
};

#endif // MAINWINDOW_H

We create a simple view for the window; the images will be displayed vertically.

#include "mainwindow.h"

// Needed for the vertical layout used below
#include <QVBoxLayout>

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent)
{
    // Creates our central widget that will contain the labels
    QWidget *widget = new QWidget();
 
    // Create our labels with an empty string
    rgb_label = new QLabel("");
    depth_label = new QLabel("");

    // Define a vertical layout
    QVBoxLayout *widgetLayout = new QVBoxLayout;

    // Add the labels to the layout
    widgetLayout->addWidget(rgb_label);
    widgetLayout->addWidget(depth_label);

    // And then assign the layout to the central widget
    widget->setLayout(widgetLayout);

    // Lastly assign our central widget to our window
    setCentralWidget(widget);
}

Now we need to define the slot. Its only job is to update the images attached to the labels:

void MainWindow::receiveFrame(QImage rgb, QImage depth)
{
    rgb_label->setPixmap(QPixmap::fromImage(rgb));
    depth_label->setPixmap(QPixmap::fromImage(depth));
}

And that's it!

Finally, we write the main, which starts our thread and shows our window.

#include <QApplication>
#include "mainwindow.h"
#include "camera.h"

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    MainWindow window;
    Camera camera(640, 480, 320, 240, 30);

    // Connect the signal from the camera to the slot of the window
    QApplication::connect(&camera, &Camera::framesReady, &window, &MainWindow::receiveFrame);

    window.show();

    camera.start();

    int ret = a.exec();

    // Stop the camera loop and wait for the thread to finish before exiting
    camera.stop();
    camera.wait();

    return ret;
}
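For completeness, building this with qmake needs the Qt widgets module and a link against librealsense2. A sketch of the .pro file (the file names and the include/lib paths are assumptions; adjust them to your project and your SDK installation):

```
QT += core gui widgets

CONFIG += c++11

HEADERS += camera.h mainwindow.h
SOURCES += main.cpp camera.cpp mainwindow.cpp

# Paths below assume a default librealsense2 install; adjust as needed
INCLUDEPATH += /usr/local/include
LIBS += -L/usr/local/lib -lrealsense2
```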