Convert X8B8G8R8 to R8G8B8 C++ code

I want to convert a hardware pixel buffer in X8B8G8R8 format into an unsigned int 24-bit memory buffer.

Here is my attempt:

    // pixels is uint32_t
    src.pixels = new pixel_t[src.width*src.height];

    readbuffer->lock( Ogre::HardwareBuffer::HBL_DISCARD );
    const Ogre::PixelBox &pb = readbuffer->getCurrentLock();

    // Image data starts at pb.data and has format pb.format
    uint32 *data = static_cast<uint32*>(pb.data);
    size_t height = pb.getHeight();
    size_t width = pb.getWidth();
    size_t pitch = pb.rowPitch; // Skip between rows of image
    for ( size_t y = 0; y < height; ++y )
    {
        for ( size_t x = 0; x < width; ++x )
        {
            // The destination buffer is width*height, so index it by
            // width; only the source uses the row pitch.
            src.pixels[width*y + x] = data[pitch*y + x];
        }
    }
    readbuffer->unlock();

This should do it:

uint32_t BGRtoRGB(uint32_t col) {
    return (col & 0x0000ff00) | ((col & 0x000000ff) << 16) | ((col & 0x00ff0000) >> 16);
}

src.pixels[width*y + x] = BGRtoRGB(data[pitch*y + x]);

Note: BGRtoRGB as written converts in either direction if you like, but bear in mind that it discards whatever is in the X8 byte (alpha?), while preserving the R, G and B values themselves.

For the reverse conversion with an alpha of 0xff:

uint32_t RGBtoXBGR(uint32_t col) {
    return 0xff000000 | (col & 0x0000ff00) | ((col & 0x000000ff) << 16) | ((col & 0x00ff0000) >> 16);
}