SDL2 texture update speed
I am trying to set up an SDL2 environment for running software-rendering examples later, so I need direct pixel access for drawing. Here is some code that draws one red pixel to a texture and then displays it, following https://wiki.libsdl.org/MigrationGuide#If_your_game_just_wants_to_get_fully-rendered_frames_to_the_screen
#include <SDL.h>
#include <stdio.h>
#include <stdint.h>

const int SCREEN_WIDTH = 1920;
const int SCREEN_HEIGHT = 1080;

SDL_Window* gWindow;
SDL_Renderer* gRenderer;
SDL_Texture* gTexture;
SDL_Event e;
void* gPixels = NULL;
int gPitch = SCREEN_WIDTH * 4;
bool gExitFlag = false;
Uint64 start;
Uint64 end;
Uint64 freq;
double seconds;

int main(int argc, char* args[])
{
    SDL_Init(SDL_INIT_VIDEO);
    gWindow = SDL_CreateWindow("SDL Tutorial", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    gRenderer = SDL_CreateRenderer(gWindow, -1, SDL_RENDERER_ACCELERATED); // | SDL_RENDERER_PRESENTVSYNC); vsync is turned off
    gTexture = SDL_CreateTexture(gRenderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, SCREEN_WIDTH, SCREEN_HEIGHT);
    while (!gExitFlag)
    {
        while (SDL_PollEvent(&e) != 0)
        {
            if (e.type == SDL_QUIT)
            {
                gExitFlag = true;
            }
        }
        start = SDL_GetPerformanceCounter();
        SDL_LockTexture(gTexture, NULL, &gPixels, &gPitch);
        *((uint32_t*)gPixels) = 0xFF0000FF; // opaque red in RGBA8888 (the original 0xff000ff was one digit short)
        SDL_UnlockTexture(gTexture); // 20-100 ms on different hardware
        end = SDL_GetPerformanceCounter();
        freq = SDL_GetPerformanceFrequency();
        SDL_RenderCopy(gRenderer, gTexture, NULL, NULL);
        SDL_RenderPresent(gRenderer);
        gPixels = NULL;
        gPitch = 0;
        seconds = (end - start) / static_cast<double>(freq);
        printf("Frame time: %fms\n", seconds * 1000.0);
    }
    SDL_DestroyTexture(gTexture); // destroy in reverse order of creation
    SDL_DestroyRenderer(gRenderer);
    SDL_DestroyWindow(gWindow);
    SDL_Quit();
    return 0;
}
As I mentioned in the code comment, SDL_UnlockTexture can take up to 100 ms for a full-HD texture. (Switching to SDL_UpdateTexture makes no significant difference.) I think that is far too much for real-time rendering. Am I doing something wrong, or should I simply not be using the texture API (or any other GPU-accelerated API where the texture has to be uploaded to GPU memory every frame) for rendering whole frames in real time?
If you want to work with raw pixel data, you should use SDL's SDL_Surface
instead of a texture. It is a different SDL API that is optimized for your case; see this example, and don't forget to update the window surface.
The reason is that textures are stored in VRAM, and reading from VRAM is very slow. Surfaces are stored in RAM, all processing happens there, and they are only written to VRAM, which is very fast.