Passing bytes to ffmpeg in Python with io

Sorry, I'm new to Stack Overflow.

I'm just wondering whether it's possible to pass byte data from io.
I'm trying to use ffmpeg to extract the frames from a GIF and then resize them with Pillow.
I know Pillow can extract frames from a GIF itself, but it mangles certain GIFs, so I'm using ffmpeg as a workaround.
The reason I want to read the GIF from memory is that I'm changing this so GIFs fetched from URLs get wrapped in BytesIO instead of being saved to disk.
As for why I have the extra Pillow code: I did get this working by passing an actual filename to the ffmpeg command.
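For context, the URL case will look roughly like this (just a sketch with urllib.request and a placeholder URL; the download itself is not the problem):

from io import BytesIO
from urllib.request import urlopen

# Rough sketch: wrap a downloaded GIF in BytesIO instead of saving it to disk
# (the URL is just a placeholder).
gif_bytes_io = BytesIO(urlopen("https://example.com/some.gif").read())

Here is the code I'm running for now (reading from a local file):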

import subprocess as SP
from PIL import Image

original_pil = Image.open("1.gif")

bytes_io = open("1.gif", "rb")
bytes_io.seek(0)

ffmpeg = 'ffmpeg'

cmd = [ffmpeg,
       '-i', '-',
       '-vsync', '0',
       '-f', 'image2pipe',
       '-pix_fmt', 'rgba',
       '-vcodec', 'png',
       '-report',
       '-']

depth = 4  # bytes per RGBA pixel
width, height = original_pil.size
buf_size = depth * width * height + 100  # pipe buffer big enough for one RGBA frame
nbytes = width * height * 4

proc = SP.Popen(cmd, stdout=SP.PIPE, stdin=SP.PIPE, stderr=SP.PIPE, bufsize=buf_size, shell=False)
out, err = proc.communicate(input=bytes_io.read(), timeout=None)

FFmpeg report:

ffmpeg started on 2021-06-07 at 18:58:14
Report written to "ffmpeg-20210607-185814.log"
Command line:
ffmpeg -i - -vsync 0 -f image2pipe -pix_fmt rgba -vcodec png -report -
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --ena  WARNING: library configuration mismatch
  avcodec     configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enab  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Splitting the commandline.
Reading option '-i' ... matched as input url with argument '-'.
Reading option '-vsync' ... matched as option 'vsync' (video sync method) with argument '0'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'image2pipe'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'rgba'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'png'.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Reading option '-' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option vsync (video sync method) with argument 0.
Applying option report (generate a report) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url -.
Successfully parsed a group of options.
Opening an input file: -.
[NULL @ 0x55b59c38f7c0] Opening 'pipe:' for reading
[pipe @ 0x55b59c390240] Setting default whitelist 'crypto'
[gif @ 0x55b59c38f7c0] Format gif probed with size=2048 and score=100
[AVIOContext @ 0x55b59c398680] Statistics: 4614093 bytes read, 0 seeks
pipe:: Input/output error

Your code works fine for a single image.
It looks like you're just missing a proc.wait() at the end, that's all.

For multiple images, you can take a look at my post.
You can simplify the code used for processing the images.

I made a few changes to your code to make it (I think) more elegant:

  • You don't need the '-vsync', '0' arguments.
  • I replaced '-' with 'pipe:' (I find it clearer).
  • You don't need to set bufsize unless you know the default is too small.
  • I removed stderr=SP.PIPE; since the captured stderr is never used, FFmpeg's log simply goes to the console.
  • I added proc.wait() after proc.communicate.

The code sample first builds a synthetic GIF file (tmp.gif) for testing.


Here is the code sample:

import subprocess as sp
import shlex
from PIL import Image
from io import BytesIO

# Build synthetic image tmp.gif for testing
sp.run(shlex.split('ffmpeg -y -f lavfi -i testsrc=size=128x128:rate=1:duration=1 tmp.gif'))

original_pil = Image.open('tmp.gif')

bytes_io = open('tmp.gif', 'rb')
bytes_io.seek(0)

ffmpeg = 'ffmpeg'

cmd = [ffmpeg,
       '-i', 'pipe:',
       #'-vsync', '0',
       '-f', 'image2pipe',
       '-pix_fmt', 'rgba',
       '-vcodec', 'png',
       '-report',
       'pipe:']

proc = sp.Popen(cmd, stdout=sp.PIPE, stdin=sp.PIPE)
out = proc.communicate(input=bytes_io.read())[0]

proc.wait()

bytes_io_png = BytesIO(out)
img = Image.open(bytes_io_png)
img.show()

Output:


Passing multiple images:

If there are multiple images, proc.communicate can only be used when all of the images fit in RAM at once.
Instead of grabbing all of the images into RAM and then passing them to FFmpeg, it's better to use a writer thread and a for loop.

I tried passing PNG images, but it got too messy.
I changed the code to pass the images in raw format.
The advantage of raw images is that the byte size of every image is known in advance.

Here is a code sample (without using BytesIO):

import numpy as np
import subprocess as sp
import shlex
from PIL import Image
import threading


# Write gif images to stdin pipe.
def writer(stdin_pipe):
    # Write 30 images to stdin pipe (for example)
    for i in range(1, 31):
        in_file_name = 'tmp' + str(i).zfill(2) + '.gif'

        with open(in_file_name, 'rb') as f:
            stdin_pipe.write(f.read())  # Write bytes to the stdin pipe

    stdin_pipe.close()


# Build 30 synthetic images tmp01.gif, tmp02.gif, ..., tmp30.gif for testing
sp.run(shlex.split('ffmpeg -y -f lavfi -i testsrc=size=128x128:rate=1:duration=30 -f image2 tmp%02d.gif'))


original_pil = Image.open("tmp01.gif")
depth = 4  # bytes per RGBA pixel
width, height = original_pil.size
nbytes = width * height * depth


ffmpeg = 'ffmpeg'

cmd = [ffmpeg,
       '-i', 'pipe:',
       '-f', 'image2pipe',
       '-pix_fmt', 'rgba',
       '-vcodec', 'rawvideo',  # Select rawvideo codec
       '-report',
       'pipe:']


proc = sp.Popen(cmd, stdout=sp.PIPE, stdin=sp.PIPE)

thread = threading.Thread(target=writer, args=(proc.stdin,))
thread.start()  # Start the writer thread


while True:
    in_bytes = proc.stdout.read(nbytes)  # Read raw image bytes from stdout pipe.

    # Stop when fewer bytes than a full frame were read (end of stream).
    if len(in_bytes) < nbytes:
        break

    raw_img = np.frombuffer(in_bytes, np.uint8).reshape([height, width, depth])

    img = Image.fromarray(raw_img)
    img.show()

proc.wait()
thread.join()
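
Since the original goal was resizing the frames with Pillow, the img.show() call in the loop above can be replaced with a resize step; a small sketch, using an arbitrary 64x64 target size:

    # Sketch: resize the decoded frame instead of displaying it (64x64 is arbitrary).
    resized = img.resize((64, 64))
    resized.show()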

PyAV can handle bytes, see av.open. This should be more efficient than spawning a subprocess. It's quite simple:

import io
import av

video_bytes = ...  # from a file/server/whatever
bio = io.BytesIO(video_bytes)
av_container = av.open(bio, mode="r")

The library also lets you build filter graphs, among other features; check the docs to see whether it covers everything you need.
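
For the frame-extraction part of the question, a minimal sketch of what that could look like with PyAV (assuming the GIF data is already in memory as gif_bytes; frame.to_image() returns a Pillow image):

import io
import av

# Sketch: decode every frame of an in-memory GIF with PyAV.
# gif_bytes is assumed to already hold the raw GIF data (e.g. downloaded from a URL).
with av.open(io.BytesIO(gif_bytes), mode="r") as container:
    for frame in container.decode(video=0):
        pil_frame = frame.to_image()  # av.VideoFrame -> PIL.Image
        pil_frame.show()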