Drawing a Bitmap to a VideoFrame with OpenGL
I'm working on a class that extends Camera2Capturer so I can grab frames from the camera, modify them, and then feed them back to the observer callback.
I'm able to grab a frame, convert it to a bitmap, modify that bitmap however I want, and then use OpenGL to draw it into a new VideoFrame, which I hand back with capturerObserver.onFrameCaptured(videoFrame);
The problem is that my newly created videoFrame comes out stretched. The bitmap is correct when I inspect it, but the drawn video frame is stretched sideways. I've tried different devices with different resolutions, and the problem is the same everywhere.
Here is the code of my startCapture method:
@Override
public void startCapture(int width, int height, int fps) {
    super.startCapture(width, height, fps);
    this.width = width;
    this.height = height;
    captureThread = new Thread(() -> {
        final int[] textureHandle = new int[1];
        GLES20.glGenTextures(1, textureHandle, 0);
        Matrix matrix = new Matrix();
        // Flip vertically: bitmap rows run top-down, GL texture rows run bottom-up.
        matrix.postScale(1f, -1f);
        TextureBufferImpl buffer = new TextureBufferImpl(width, height, VideoFrame.TextureBuffer.Type.RGB,
                textureHandle[0], matrix, surTexture.getHandler(), yuvConverter, null);
        // Bind to the texture in OpenGL
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
        try {
            while (true) {
                surTexture.getHandler().post(() -> {
                    if (needsToRedrawFrame) {
                        VideoFrame lastFrameReceived = capturerObs.getLastFrameReceived();
                        // This is the bitmap I want to draw on the video frame
                        Bitmap bitmapToDraw = drawingCanvasView.getmBitmap();
                        // At this point, bitmapToDraw contains the drawing and the frame captured from the camera, overlaid.
                        // Now we need to convert it to fit into the onFrameCaptured callback (requires a VideoFrame).
                        // Set filtering
                        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
                        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
                        // Load the bitmap into the bound texture.
                        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmapToDraw, 0);
                        bitmapToDraw.recycle();
                        // The bitmap is drawn on the GPU at this point.
                        // We transfer it to the VideoFrame.
                        VideoFrame.I420Buffer i420Buf = yuvConverter.convert(buffer);
                        VideoFrame videoFrame = new VideoFrame(i420Buf, 0, lastFrameReceived.getTimestampNs());
                        ogCapturerObserver.onFrameCaptured(videoFrame);
                        needsToRedrawFrame = false;
                    }
                });
                Thread.sleep(100);
            }
        } catch (Exception e) {
            LogHelper.logError(CapturerObserverProxy.class, "RMTEST THIS > " + e.getMessage(), e);
        }
    });
    captureThread.start();
}
Here is what bitmapToDraw looks like: (screenshot)
And here is what the videoFrame looks like when drawn on a SurfaceView: (screenshot)
What exactly am I missing? I'm not familiar with OpenGL at all.
It turns out the frame was being drawn correctly. But the frame's resolution was different from that of the Surface it was actually drawn on, hence the stretching.
I had to resize the bitmap I was about to draw (while keeping its aspect ratio!). If the bitmap has the same size as the Surface it is rendered on, it doesn't get stretched.
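For reference, here is a minimal sketch of that resize step. The helper name scaleToFitSurface and the surfaceWidth/surfaceHeight parameters are made up for illustration; they stand in for whatever dimensions you query from the target Surface:

import android.graphics.Bitmap;

// Hypothetical helper: scale `src` so it fits inside surfaceWidth x surfaceHeight
// while preserving its aspect ratio.
private static Bitmap scaleToFitSurface(Bitmap src, int surfaceWidth, int surfaceHeight) {
    float scale = Math.min(
            (float) surfaceWidth / src.getWidth(),
            (float) surfaceHeight / src.getHeight());
    int dstWidth = Math.round(src.getWidth() * scale);
    int dstHeight = Math.round(src.getHeight() * scale);
    // The last argument enables bilinear filtering for a smoother result.
    return Bitmap.createScaledBitmap(src, dstWidth, dstHeight, true);
}

In the capture loop above, bitmapToDraw would be passed through a helper like this before GLUtils.texImage2D, and the width/height handed to TextureBufferImpl would then need to match the dimensions of the bitmap actually uploaded, so the frame and the Surface agree.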