Incorrect image converting YUV_420_888 into Bitmaps under Android camera2

I am trying to convert YUV_420_888 images coming from the camera2 preview into Bitmaps, but the colors in the output image are incorrect.

Below is the test code I am running to generate the bitmap. It is test code only, so please refrain from code reviews about unrelated factors, such as the bitmap being recycled or the RenderScript being created over and over. This code is only meant to test the YUV-to-RGB conversion, nothing else.

Another consideration: the code is meant to run on API 22 and above, so using RenderScript's dedicated ScriptIntrinsicYuvToRGB should be enough, without falling back to the old manual conversion that was only needed on earlier Android versions due to their lack of proper YUV_420_888 support.

Since RenderScript already provides ScriptIntrinsicYuvToRGB to handle all kinds of YUV conversions, I suspect the problem lies in how I extract the YUV byte data from the Image object, but I cannot see where the issue is.

To inspect the output bitmap in Android Studio, place a breakpoint at bitmap.recycle(); before the bitmap is recycled you can view it in the variables debug window using the "View Bitmap" option.

If anyone can spot what is wrong with the conversion, please let me know:

@Override
public void onImageAvailable(ImageReader reader)
{
    RenderScript rs = RenderScript.create(this.mContext);

    final Image image = reader.acquireLatestImage();

    final Image.Plane[] planes = image.getPlanes();
    final ByteBuffer planeY = planes[0].getBuffer();
    final ByteBuffer planeU = planes[1].getBuffer();
    final ByteBuffer planeV = planes[2].getBuffer();

    // Get the YUV planes data

    final int Yb = planeY.rewind().remaining();
    final int Ub = planeU.rewind().remaining();
    final int Vb = planeV.rewind().remaining();

    final ByteBuffer yuvData = ByteBuffer.allocateDirect(Yb + Ub + Vb);

    planeY.get(yuvData.array(), 0, Yb);
    planeU.get(yuvData.array(), Yb, Vb);
    planeV.get(yuvData.array(), Yb + Vb, Ub);

    // Initialize Renderscript

    Type.Builder yuvType = new Type.Builder(rs, Element.YUV(rs))
            .setX(image.getWidth())
            .setY(image.getHeight())
            .setYuvFormat(ImageFormat.YUV_420_888);

    final Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(image.getWidth())
            .setY(image.getHeight());

    Allocation yuvAllocation = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);
    Allocation rgbAllocation = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    // Convert

    yuvAllocation.copyFromUnchecked(yuvData.array());

    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.YUV(rs));
    scriptYuvToRgb.setInput(yuvAllocation);
    scriptYuvToRgb.forEach(rgbAllocation);

    // Get the bitmap

    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    rgbAllocation.copyTo(bitmap);

    // Release

    bitmap.recycle();

    yuvAllocation.destroy();
    rgbAllocation.destroy();
    rs.destroy();

    image.close();
}

There is no direct way to copy a YUV_420_888 camera frame into an RS Allocation. In fact, as of today, RenderScript does not support this format.

If you know that, under the hood, your frames are NV21 or YV12, you can copy the whole ByteBuffer into an array and pass it to an RS Allocation.
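To illustrate the NV21 layout that this copy relies on, here is a minimal plain-Java sketch (the Nv21Packer name is purely illustrative, not part of any Android API) that packs tightly-packed Y, U, and V planes into a single NV21 buffer: the full-resolution Y plane first, followed by V and U interleaved at quarter resolution, V first:

```java
// Hypothetical helper: pack tightly-packed Y, U and V planes into NV21 order
// (all of Y, then V and U interleaved, V first). Assumes no row padding.
public class Nv21Packer {
    public static byte[] pack(byte[] y, byte[] u, byte[] v, int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        System.arraycopy(y, 0, nv21, 0, width * height);
        int offset = width * height;
        for (int i = 0; i < v.length; i++) {
            nv21[offset++] = v[i]; // V comes first in NV21
            nv21[offset++] = u[i];
        }
        return nv21;
    }
}
```

If the camera delivers YV12 instead, the chroma planes are planar rather than interleaved, so the packing step would differ accordingly.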

Answering my own question: as I suspected, the actual problem was in how I was converting the image planes into a byte buffer. The solution below should work for both NV21 and YV12. Since the YUV data already comes in separate planes, it is just a matter of reading it in the right way according to each plane's row and pixel strides. A small modification to how the data is passed to the RenderScript intrinsic was also required.

Note: for a production-quality, non-blocking onImageAvailable() flow, the image byte data should instead be copied into a separate buffer and the conversion performed on a separate thread (depending on your requirements). But since that is not part of the question, in the code below the conversion is placed directly inside onImageAvailable() to keep the answer simple. If anyone needs to know how to copy the image data, create a new question and let me know so I can share my code.
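The "copy first, convert on a worker thread" pattern mentioned above can be sketched in plain Java along these lines (the FrameDispatcher name is illustrative, and the actual RenderScript conversion is stubbed out):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: copy the frame bytes immediately so the Image can be
// closed, then hand the copy to a single worker thread for the conversion.
public class FrameDispatcher {
    private final ExecutorService frameExecutor = Executors.newSingleThreadExecutor();

    public void onFrame(byte[] yuvBytes) {
        // Copy before the camera reuses the underlying buffer
        final byte[] copy = yuvBytes.clone();
        frameExecutor.submit(() -> convert(copy));
    }

    protected void convert(byte[] yuv) {
        // In the real code this would run the RenderScript YUV-to-RGB conversion
    }

    public void shutdown() {
        frameExecutor.shutdown();
        try {
            frameExecutor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

A single-threaded executor keeps frames in order; dropping frames when the worker falls behind is a further refinement that depends on your requirements.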

@Override
public void onImageAvailable(ImageReader reader)
{
    // Get the YUV data

    final Image image = reader.acquireLatestImage();
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);

    // Convert YUV to RGB

    final RenderScript rs = RenderScript.create(this.mContext);

    final Bitmap     bitmap        = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);

    final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
    allocationYuv.copyFrom(yuvBytes.array());

    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    scriptYuvToRgb.setInput(allocationYuv);
    scriptYuvToRgb.forEach(allocationRgb);

    allocationRgb.copyTo(bitmap);

    // Release

    bitmap.recycle();

    allocationYuv.destroy();
    allocationRgb.destroy();
    rs.destroy();

    image.close();
}

private ByteBuffer imageToByteBuffer(final Image image)
{
    final Rect crop   = image.getCropRect();
    final int  width  = crop.width();
    final int  height = crop.height();

    final Image.Plane[] planes     = image.getPlanes();
    final byte[]        rowData    = new byte[planes[0].getRowStride()];
    final int           bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer    output     = ByteBuffer.allocate(bufferSize); // heap buffer: array() must be accessible below

    int channelOffset = 0;
    int outputStride = 0;

    // Pack the planes into NV21 order: full-resolution Y first, then V and U
    // interleaved at half resolution (V first)
    for (int planeIndex = 0; planeIndex < 3; planeIndex++)
    {
        if (planeIndex == 0)
        {
            channelOffset = 0;
            outputStride = 1;
        }
        else if (planeIndex == 1)
        {
            channelOffset = width * height + 1; // U samples follow each V sample
            outputStride = 2;
        }
        else if (planeIndex == 2)
        {
            channelOffset = width * height;     // V starts right after the Y plane
            outputStride = 2;
        }

        final ByteBuffer buffer      = planes[planeIndex].getBuffer();
        final int        rowStride   = planes[planeIndex].getRowStride();
        final int        pixelStride = planes[planeIndex].getPixelStride();

        // Chroma planes are subsampled by two in both dimensions
        final int shift         = (planeIndex == 0) ? 0 : 1;
        final int widthShifted  = width >> shift;
        final int heightShifted = height >> shift;

        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++)
        {
            final int length;

            if (pixelStride == 1 && outputStride == 1)
            {
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            }
            else
            {
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);

                for (int col = 0; col < widthShifted; col++)
                {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            if (row < heightShifted - 1)
            {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }

    return output;
}
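As a sanity check on the bufferSize computed above: ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) evaluates to 12, because 4:2:0 stores one full-resolution luma plane plus two quarter-resolution chroma planes, i.e. width × height × 3/2 bytes in total. The arithmetic, as a stand-alone plain-Java check (the YuvSize name is hypothetical):

```java
// YUV 4:2:0 packs 8 bits of luma per pixel plus two quarter-resolution chroma
// planes, i.e. 8 + 2 + 2 = 12 bits per pixel, or width * height * 3 / 2 bytes.
public class YuvSize {
    public static int bufferSize(int width, int height) {
        int ySize  = width * height;                // full-resolution luma
        int uvSize = (width / 2) * (height / 2);    // one chroma plane
        return ySize + 2 * uvSize;                  // == width * height * 3 / 2
    }
}
```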

RenderScript does support YUV_420_888 as a source for ScriptIntrinsicYuvToRGB

  1. Create the Allocations and the ScriptIntrinsicYuvToRGB

    RenderScript renderScript = RenderScript.create(this);

    ScriptIntrinsicYuvToRGB mScriptIntrinsicYuvToRGB =
            ScriptIntrinsicYuvToRGB.create(renderScript, Element.YUV(renderScript));

    Allocation mAllocationInYUV = Allocation.createTyped(renderScript,
            new Type.Builder(renderScript, Element.YUV(renderScript))
                    .setYuvFormat(ImageFormat.YUV_420_888)
                    .setX(480)
                    .setY(640)
                    .create(),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

    Allocation mAllocationOutRGB = Allocation.createTyped(renderScript,
            Type.createXY(renderScript, Element.RGBA_8888(renderScript), 480, 640),
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
    
  2. Set Allocation.getSurface() as the target that receives the image data from the camera

    final CaptureRequest.Builder captureRequest = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    captureRequest.addTarget(mAllocationInYUV.getSurface());
    
  3. Output to a TextureView, ImageReader, or SurfaceView

    mAllocationOutRGB.setSurface(new Surface(mTextureView.getSurfaceTexture()));
    mAllocationInYUV.setOnBufferAvailableListener(new Allocation.OnBufferAvailableListener() {
        @Override
        public void onBufferAvailable(Allocation a) {
            a.ioReceive();
            mScriptIntrinsicYuvToRGB.setInput(a);
            mScriptIntrinsicYuvToRGB.forEach(mAllocationOutRGB);
            mAllocationOutRGB.ioSend();
        }
    });