Reducing video file size with MediaMuxer (re-compress, decrease resolution)
I'm looking for an efficient way to reduce the weight of a video (as a File
, for upload), and the obvious answer is: let's lower the resolution! (I don't need Full HD or 4K; plain HD is enough for me.) I've already tried many approaches that should work across many API levels (API 10 is needed), and the best one was android-ffmpeg-java, but... on my nice, fast, nearly current flagship device the whole process takes about length_of_video*4 seconds, and the library weighs 9 MB, which inflates my app size... no! (Going from 12 MB to 1 MB is a nice result, but the drawbacks are still too big.)
So I decided to use native Android methods for this: MediaMuxer
and MediaCodec
- they are available from API 18 and API 16 respectively (older-device users: sorry; but they often have "lower-res" cameras too). The method below almost works - MediaMuxer
does not respect MediaFormat.KEY_WIDTH
and MediaFormat.KEY_HEIGHT
- the extracted File
is "re-compressed" and weighs a bit less, but its resolution is the same as the original video File
's...
So, the question: how can I compress and re-scale/change the resolution of a video using MediaMuxer
and the other accompanying classes?
public File getCompressedFile(String videoPath) throws IOException{
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(videoPath);
int trackCount = extractor.getTrackCount();
String filePath = videoPath.substring(0, videoPath.lastIndexOf(File.separator));
String[] splitByDot = videoPath.split("\\.");
String ext="";
if(splitByDot!=null && splitByDot.length>1)
ext = splitByDot[splitByDot.length-1];
String fileName = videoPath.substring(videoPath.lastIndexOf(File.separator)+1,
videoPath.length());
if(ext.length()>0)
fileName=fileName.replace("."+ext, "_out."+ext);
else
fileName=fileName.concat("_out");
final File outFile = new File(filePath, fileName);
if(!outFile.exists())
outFile.createNewFile();
MediaMuxer muxer = new MediaMuxer(outFile.getAbsolutePath(),
MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
for (int i = 0; i < trackCount; i++) {
extractor.selectTrack(i);
MediaFormat format = extractor.getTrackFormat(i);
String mime = format.getString(MediaFormat.KEY_MIME);
if(mime!=null && mime.startsWith("video")){
int currWidth = format.getInteger(MediaFormat.KEY_WIDTH);
int currHeight = format.getInteger(MediaFormat.KEY_HEIGHT);
format.setInteger(MediaFormat.KEY_WIDTH, currWidth>currHeight ? 960 : 540);
format.setInteger(MediaFormat.KEY_HEIGHT, currWidth>currHeight ? 540 : 960);
//API19 MediaFormat.KEY_MAX_WIDTH and KEY_MAX_HEIGHT
format.setInteger("max-width", format.getInteger(MediaFormat.KEY_WIDTH));
format.setInteger("max-height", format.getInteger(MediaFormat.KEY_HEIGHT));
}
int dstIndex = muxer.addTrack(format);
indexMap.put(i, dstIndex);
}
boolean sawEOS = false;
int bufferSize = 256 * 1024;
int offset = 100;
ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
muxer.start();
while (!sawEOS) {
bufferInfo.offset = offset;
bufferInfo.size = extractor.readSampleData(dstBuf, offset);
if (bufferInfo.size < 0) {
sawEOS = true;
bufferInfo.size = 0;
} else {
bufferInfo.presentationTimeUs = extractor.getSampleTime();
bufferInfo.flags = extractor.getSampleFlags();
int trackIndex = extractor.getSampleTrackIndex();
muxer.writeSampleData(indexMap.get(trackIndex), dstBuf,
bufferInfo);
extractor.advance();
}
}
muxer.stop();
muxer.release();
return outFile;
}
PS: lots of useful stuff about the muxer here; the code above is based on MediaMuxerTest.java
, method cloneMediaUsingMuxer
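As a side note, the hard-coded 960/540 swap in the code above can be factored into a small helper that preserves the source aspect ratio and keeps dimensions even (many H.264 encoders reject odd sizes). This is only an illustrative sketch; `TargetSize` and `scaleToFit` are made-up names, not part of the code above:

```java
public final class TargetSize {
    /**
     * Scales (srcWidth, srcHeight) so the longer edge becomes maxLongEdge,
     * preserving the aspect ratio and rounding both dimensions down to even
     * values (H.264 encoders commonly require even sizes). Never upscales.
     */
    public static int[] scaleToFit(int srcWidth, int srcHeight, int maxLongEdge) {
        int longEdge = Math.max(srcWidth, srcHeight);
        if (longEdge <= maxLongEdge) {
            return new int[]{srcWidth, srcHeight}; // already small enough
        }
        double factor = (double) maxLongEdge / longEdge;
        int w = (int) Math.round(srcWidth * factor) & ~1;  // force even
        int h = (int) Math.round(srcHeight * factor) & ~1;
        return new int[]{w, h};
    }
}
```

For example, a 1920×1080 input with maxLongEdge = 960 yields 960×540, while a portrait 1080×1920 input yields 540×960 - the same orientation-aware result as the hard-coded ternary above.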
You can try Intel INDE Media for Mobile; tutorials are at https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials. It has a sample that shows how to use it to transcode (= re-compress) video files.
You can set a smaller resolution and/or bit rate for the output to get a smaller file:
https://github.com/INDExOS/media-for-mobile/blob/master/Android/samples/apps/src/com/intel/inde/mp/samples/ComposerTranscodeCoreActivity.java
MediaMuxer plays no part in compressing or scaling the video. All it does is take the H.264 output from MediaCodec and wrap it in a .mp4 file wrapper.
Looking at your code, you're extracting NAL units with MediaExtractor and immediately re-wrapping them with MediaMuxer. That should be very fast, and it has no effect on the video itself, because you're just re-wrapping the H.264.
To scale the video, you need to decode it with a MediaCodec decoder, feeding the NAL units from MediaExtractor into it, and then re-encode it with a MediaCodec encoder, passing the frames to MediaMuxer.
You've found bigflake.com; see also Grafika. Neither has exactly what you're looking for, but all the various pieces are there.
It's best to decode to a Surface, not a ByteBuffer. That requires API 18, but for sanity's sake it's best to forget that MediaCodec existed before then. And MediaMuxer requires API 18 anyway.
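The decode-to-Surface path described above can be wired up roughly like this. This is only a skeleton under assumed settings (1280×720, 2 Mbps): the EGL plumbing (the bigflake InputSurface/OutputSurface helpers), the drain loop, and audio are left out, and `inputPath`, `outputPath` and `decoderOutputSurface` are placeholders:

```java
// Skeleton: decoder -> Surface -> encoder -> muxer (API 18+).
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(inputPath);
// ... select the video track, read its MediaFormat into inputFormat ...

// Configure the encoder first: its input Surface is the render target
// for decoded frames.
MediaFormat outFormat = MediaFormat.createVideoFormat("video/avc", 1280, 720);
outFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
outFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
outFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
outFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 10);
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(outFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface encoderInput = encoder.createInputSurface(); // after configure, before start

// The decoder renders onto a Surface backed by a GL texture; drawing that
// texture into encoderInput (via EGL) is what actually performs the rescale.
MediaCodec decoder = MediaCodec.createDecoderByType(
        inputFormat.getString(MediaFormat.KEY_MIME));
decoder.configure(inputFormat, decoderOutputSurface, null, 0);

MediaMuxer muxer = new MediaMuxer(outputPath,
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// Loop: feed extractor samples into the decoder, render each decoded frame
// to the encoder's Surface, drain encoder output into muxer.writeSampleData().
```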
Based on bigflake.com/mediacodec/ (an awesome source of knowledge about the Media* classes) I tried a few approaches, and finally ExtractDecodeEditEncodeMuxTest turned out to be very helpful. This test isn't described in the article on the bigflake site, but it can be found HERE along with the other classes mentioned in the text.
So, I copied most of the code from the ExtractDecodeEditEncodeMuxTest
class mentioned above, and here it is: VideoResolutionChanger
. It gives me a 2 MB HD video from a 16 MB Full HD one. Nice! And fast! On my device the whole process takes a bit longer than the input video's duration, e.g. a 10-second input -> 11-12 seconds of processing. With ffmpeg-java
it would be about 40 seconds or more (plus 9+ MB of extra app size).
Here we go:
VideoResolutionChanger:
@TargetApi(18)
public class VideoResolutionChanger {
private static final int TIMEOUT_USEC = 10000;
private static final String OUTPUT_VIDEO_MIME_TYPE = "video/avc";
private static final int OUTPUT_VIDEO_BIT_RATE = 2048 * 1024;
private static final int OUTPUT_VIDEO_FRAME_RATE = 30;
private static final int OUTPUT_VIDEO_IFRAME_INTERVAL = 10;
private static final int OUTPUT_VIDEO_COLOR_FORMAT =
MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface;
private static final String OUTPUT_AUDIO_MIME_TYPE = "audio/mp4a-latm";
private static final int OUTPUT_AUDIO_CHANNEL_COUNT = 2;
private static final int OUTPUT_AUDIO_BIT_RATE = 128 * 1024;
private static final int OUTPUT_AUDIO_AAC_PROFILE =
MediaCodecInfo.CodecProfileLevel.AACObjectHE;
private static final int OUTPUT_AUDIO_SAMPLE_RATE_HZ = 44100;
private int mWidth = 1280;
private int mHeight = 720;
private String mOutputFile, mInputFile;
public String changeResolution(File f)
throws Throwable {
mInputFile=f.getAbsolutePath();
String filePath = mInputFile.substring(0, mInputFile.lastIndexOf(File.separator));
String[] splitByDot = mInputFile.split("\\.");
String ext="";
if(splitByDot!=null && splitByDot.length>1)
ext = splitByDot[splitByDot.length-1];
String fileName = mInputFile.substring(mInputFile.lastIndexOf(File.separator)+1,
mInputFile.length());
if(ext.length()>0)
fileName=fileName.replace("."+ext, "_out.mp4");
else
fileName=fileName.concat("_out.mp4");
final File outFile = new File(Environment.getExternalStorageDirectory(), fileName);
if(!outFile.exists())
outFile.createNewFile();
mOutputFile=outFile.getAbsolutePath();
ChangerWrapper.changeResolutionInSeparatedThread(this);
return mOutputFile;
}
private static class ChangerWrapper implements Runnable {
private Throwable mThrowable;
private VideoResolutionChanger mChanger;
private ChangerWrapper(VideoResolutionChanger changer) {
mChanger = changer;
}
@Override
public void run() {
try {
mChanger.prepareAndChangeResolution();
} catch (Throwable th) {
mThrowable = th;
}
}
public static void changeResolutionInSeparatedThread(VideoResolutionChanger changer)
throws Throwable {
ChangerWrapper wrapper = new ChangerWrapper(changer);
Thread th = new Thread(wrapper, ChangerWrapper.class.getSimpleName());
th.start();
th.join();
if (wrapper.mThrowable != null)
throw wrapper.mThrowable;
}
}
private void prepareAndChangeResolution() throws Exception {
Exception exception = null;
MediaCodecInfo videoCodecInfo = selectCodec(OUTPUT_VIDEO_MIME_TYPE);
if (videoCodecInfo == null)
return;
MediaCodecInfo audioCodecInfo = selectCodec(OUTPUT_AUDIO_MIME_TYPE);
if (audioCodecInfo == null)
return;
MediaExtractor videoExtractor = null;
MediaExtractor audioExtractor = null;
OutputSurface outputSurface = null;
MediaCodec videoDecoder = null;
MediaCodec audioDecoder = null;
MediaCodec videoEncoder = null;
MediaCodec audioEncoder = null;
MediaMuxer muxer = null;
InputSurface inputSurface = null;
try {
videoExtractor = createExtractor();
int videoInputTrack = getAndSelectVideoTrackIndex(videoExtractor);
MediaFormat inputFormat = videoExtractor.getTrackFormat(videoInputTrack);
MediaMetadataRetriever m = new MediaMetadataRetriever();
m.setDataSource(mInputFile);
int inputWidth, inputHeight;
try {
inputWidth = Integer.parseInt(m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH));
inputHeight = Integer.parseInt(m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT));
} catch (Exception e) {
Bitmap thumbnail = m.getFrameAtTime();
inputWidth = thumbnail.getWidth();
inputHeight = thumbnail.getHeight();
thumbnail.recycle();
}
if(inputWidth>inputHeight){
if(mWidth<mHeight){
int w = mWidth;
mWidth=mHeight;
mHeight=w;
}
}
else{
if(mWidth>mHeight){
int w = mWidth;
mWidth=mHeight;
mHeight=w;
}
}
MediaFormat outputVideoFormat =
MediaFormat.createVideoFormat(OUTPUT_VIDEO_MIME_TYPE, mWidth, mHeight);
outputVideoFormat.setInteger(
MediaFormat.KEY_COLOR_FORMAT, OUTPUT_VIDEO_COLOR_FORMAT);
outputVideoFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_VIDEO_BIT_RATE);
outputVideoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, OUTPUT_VIDEO_FRAME_RATE);
outputVideoFormat.setInteger(
MediaFormat.KEY_I_FRAME_INTERVAL, OUTPUT_VIDEO_IFRAME_INTERVAL);
AtomicReference<Surface> inputSurfaceReference = new AtomicReference<Surface>();
videoEncoder = createVideoEncoder(
videoCodecInfo, outputVideoFormat, inputSurfaceReference);
inputSurface = new InputSurface(inputSurfaceReference.get());
inputSurface.makeCurrent();
outputSurface = new OutputSurface();
videoDecoder = createVideoDecoder(inputFormat, outputSurface.getSurface());
audioExtractor = createExtractor();
int audioInputTrack = getAndSelectAudioTrackIndex(audioExtractor);
MediaFormat inputAudioFormat = audioExtractor.getTrackFormat(audioInputTrack);
MediaFormat outputAudioFormat =
MediaFormat.createAudioFormat(inputAudioFormat.getString(MediaFormat.KEY_MIME),
inputAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
inputAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT));
outputAudioFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_AUDIO_BIT_RATE);
outputAudioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, OUTPUT_AUDIO_AAC_PROFILE);
audioEncoder = createAudioEncoder(audioCodecInfo, outputAudioFormat);
audioDecoder = createAudioDecoder(inputAudioFormat);
muxer = new MediaMuxer(mOutputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
changeResolution(videoExtractor, audioExtractor,
videoDecoder, videoEncoder,
audioDecoder, audioEncoder,
muxer, inputSurface, outputSurface);
} finally {
try {
if (videoExtractor != null)
videoExtractor.release();
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (audioExtractor != null)
audioExtractor.release();
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (videoDecoder != null) {
videoDecoder.stop();
videoDecoder.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (outputSurface != null) {
outputSurface.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (videoEncoder != null) {
videoEncoder.stop();
videoEncoder.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (audioDecoder != null) {
audioDecoder.stop();
audioDecoder.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (audioEncoder != null) {
audioEncoder.stop();
audioEncoder.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (muxer != null) {
muxer.stop();
muxer.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (inputSurface != null)
inputSurface.release();
} catch(Exception e) {
if (exception == null)
exception = e;
}
}
if (exception != null)
throw exception;
}
private MediaExtractor createExtractor() throws IOException {
MediaExtractor extractor;
extractor = new MediaExtractor();
extractor.setDataSource(mInputFile);
return extractor;
}
private MediaCodec createVideoDecoder(MediaFormat inputFormat, Surface surface) throws IOException {
MediaCodec decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat));
decoder.configure(inputFormat, surface, null, 0);
decoder.start();
return decoder;
}
private MediaCodec createVideoEncoder(MediaCodecInfo codecInfo, MediaFormat format,
AtomicReference<Surface> surfaceReference) throws IOException {
MediaCodec encoder = MediaCodec.createByCodecName(codecInfo.getName());
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
surfaceReference.set(encoder.createInputSurface());
encoder.start();
return encoder;
}
private MediaCodec createAudioDecoder(MediaFormat inputFormat) throws IOException {
MediaCodec decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat));
decoder.configure(inputFormat, null, null, 0);
decoder.start();
return decoder;
}
private MediaCodec createAudioEncoder(MediaCodecInfo codecInfo, MediaFormat format) throws IOException {
MediaCodec encoder = MediaCodec.createByCodecName(codecInfo.getName());
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
return encoder;
}
private int getAndSelectVideoTrackIndex(MediaExtractor extractor) {
for (int index = 0; index < extractor.getTrackCount(); ++index) {
if (isVideoFormat(extractor.getTrackFormat(index))) {
extractor.selectTrack(index);
return index;
}
}
return -1;
}
private int getAndSelectAudioTrackIndex(MediaExtractor extractor) {
for (int index = 0; index < extractor.getTrackCount(); ++index) {
if (isAudioFormat(extractor.getTrackFormat(index))) {
extractor.selectTrack(index);
return index;
}
}
return -1;
}
private void changeResolution(MediaExtractor videoExtractor, MediaExtractor audioExtractor,
MediaCodec videoDecoder, MediaCodec videoEncoder,
MediaCodec audioDecoder, MediaCodec audioEncoder,
MediaMuxer muxer,
InputSurface inputSurface, OutputSurface outputSurface) {
ByteBuffer[] videoDecoderInputBuffers = null;
ByteBuffer[] videoDecoderOutputBuffers = null;
ByteBuffer[] videoEncoderOutputBuffers = null;
MediaCodec.BufferInfo videoDecoderOutputBufferInfo = null;
MediaCodec.BufferInfo videoEncoderOutputBufferInfo = null;
videoDecoderInputBuffers = videoDecoder.getInputBuffers();
videoDecoderOutputBuffers = videoDecoder.getOutputBuffers();
videoEncoderOutputBuffers = videoEncoder.getOutputBuffers();
videoDecoderOutputBufferInfo = new MediaCodec.BufferInfo();
videoEncoderOutputBufferInfo = new MediaCodec.BufferInfo();
ByteBuffer[] audioDecoderInputBuffers = null;
ByteBuffer[] audioDecoderOutputBuffers = null;
ByteBuffer[] audioEncoderInputBuffers = null;
ByteBuffer[] audioEncoderOutputBuffers = null;
MediaCodec.BufferInfo audioDecoderOutputBufferInfo = null;
MediaCodec.BufferInfo audioEncoderOutputBufferInfo = null;
audioDecoderInputBuffers = audioDecoder.getInputBuffers();
audioDecoderOutputBuffers = audioDecoder.getOutputBuffers();
audioEncoderInputBuffers = audioEncoder.getInputBuffers();
audioEncoderOutputBuffers = audioEncoder.getOutputBuffers();
audioDecoderOutputBufferInfo = new MediaCodec.BufferInfo();
audioEncoderOutputBufferInfo = new MediaCodec.BufferInfo();
MediaFormat decoderOutputVideoFormat = null;
MediaFormat decoderOutputAudioFormat = null;
MediaFormat encoderOutputVideoFormat = null;
MediaFormat encoderOutputAudioFormat = null;
int outputVideoTrack = -1;
int outputAudioTrack = -1;
boolean videoExtractorDone = false;
boolean videoDecoderDone = false;
boolean videoEncoderDone = false;
boolean audioExtractorDone = false;
boolean audioDecoderDone = false;
boolean audioEncoderDone = false;
int pendingAudioDecoderOutputBufferIndex = -1;
boolean muxing = false;
while ((!videoEncoderDone) || (!audioEncoderDone)) {
while (!videoExtractorDone
&& (encoderOutputVideoFormat == null || muxing)) {
int decoderInputBufferIndex = videoDecoder.dequeueInputBuffer(TIMEOUT_USEC);
if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
ByteBuffer decoderInputBuffer = videoDecoderInputBuffers[decoderInputBufferIndex];
int size = videoExtractor.readSampleData(decoderInputBuffer, 0);
long presentationTime = videoExtractor.getSampleTime();
if (size >= 0) {
videoDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
size,
presentationTime,
videoExtractor.getSampleFlags());
}
videoExtractorDone = !videoExtractor.advance();
if (videoExtractorDone)
videoDecoder.queueInputBuffer(decoderInputBufferIndex,
0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
break;
}
while (!audioExtractorDone
&& (encoderOutputAudioFormat == null || muxing)) {
int decoderInputBufferIndex = audioDecoder.dequeueInputBuffer(TIMEOUT_USEC);
if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
ByteBuffer decoderInputBuffer = audioDecoderInputBuffers[decoderInputBufferIndex];
int size = audioExtractor.readSampleData(decoderInputBuffer, 0);
long presentationTime = audioExtractor.getSampleTime();
if (size >= 0)
audioDecoder.queueInputBuffer(decoderInputBufferIndex, 0, size,
presentationTime, audioExtractor.getSampleFlags());
audioExtractorDone = !audioExtractor.advance();
if (audioExtractorDone)
audioDecoder.queueInputBuffer(decoderInputBufferIndex, 0, 0,
0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
break;
}
while (!videoDecoderDone
&& (encoderOutputVideoFormat == null || muxing)) {
int decoderOutputBufferIndex =
videoDecoder.dequeueOutputBuffer(
videoDecoderOutputBufferInfo, TIMEOUT_USEC);
if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
videoDecoderOutputBuffers = videoDecoder.getOutputBuffers();
break;
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
decoderOutputVideoFormat = videoDecoder.getOutputFormat();
break;
}
ByteBuffer decoderOutputBuffer =
videoDecoderOutputBuffers[decoderOutputBufferIndex];
if ((videoDecoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
!= 0) {
videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false);
break;
}
boolean render = videoDecoderOutputBufferInfo.size != 0;
videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, render);
if (render) {
outputSurface.awaitNewImage();
outputSurface.drawImage();
inputSurface.setPresentationTime(
videoDecoderOutputBufferInfo.presentationTimeUs * 1000);
inputSurface.swapBuffers();
}
if ((videoDecoderOutputBufferInfo.flags
& MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
videoDecoderDone = true;
videoEncoder.signalEndOfInputStream();
}
break;
}
while (!audioDecoderDone && pendingAudioDecoderOutputBufferIndex == -1
&& (encoderOutputAudioFormat == null || muxing)) {
int decoderOutputBufferIndex =
audioDecoder.dequeueOutputBuffer(
audioDecoderOutputBufferInfo, TIMEOUT_USEC);
if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
audioDecoderOutputBuffers = audioDecoder.getOutputBuffers();
break;
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
decoderOutputAudioFormat = audioDecoder.getOutputFormat();
break;
}
ByteBuffer decoderOutputBuffer =
audioDecoderOutputBuffers[decoderOutputBufferIndex];
if ((audioDecoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
!= 0) {
audioDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false);
break;
}
pendingAudioDecoderOutputBufferIndex = decoderOutputBufferIndex;
break;
}
while (pendingAudioDecoderOutputBufferIndex != -1) {
int encoderInputBufferIndex = audioEncoder.dequeueInputBuffer(TIMEOUT_USEC);
ByteBuffer encoderInputBuffer = audioEncoderInputBuffers[encoderInputBufferIndex];
int size = audioDecoderOutputBufferInfo.size;
long presentationTime = audioDecoderOutputBufferInfo.presentationTimeUs;
if (size >= 0) {
ByteBuffer decoderOutputBuffer =
audioDecoderOutputBuffers[pendingAudioDecoderOutputBufferIndex]
.duplicate();
decoderOutputBuffer.position(audioDecoderOutputBufferInfo.offset);
decoderOutputBuffer.limit(audioDecoderOutputBufferInfo.offset + size);
encoderInputBuffer.position(0);
encoderInputBuffer.put(decoderOutputBuffer);
audioEncoder.queueInputBuffer(
encoderInputBufferIndex,
0,
size,
presentationTime,
audioDecoderOutputBufferInfo.flags);
}
audioDecoder.releaseOutputBuffer(pendingAudioDecoderOutputBufferIndex, false);
pendingAudioDecoderOutputBufferIndex = -1;
if ((audioDecoderOutputBufferInfo.flags
& MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0)
audioDecoderDone = true;
break;
}
while (!videoEncoderDone
&& (encoderOutputVideoFormat == null || muxing)) {
int encoderOutputBufferIndex = videoEncoder.dequeueOutputBuffer(
videoEncoderOutputBufferInfo, TIMEOUT_USEC);
if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
videoEncoderOutputBuffers = videoEncoder.getOutputBuffers();
break;
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
encoderOutputVideoFormat = videoEncoder.getOutputFormat();
break;
}
ByteBuffer encoderOutputBuffer =
videoEncoderOutputBuffers[encoderOutputBufferIndex];
if ((videoEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
!= 0) {
videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
break;
}
if (videoEncoderOutputBufferInfo.size != 0) {
muxer.writeSampleData(
outputVideoTrack, encoderOutputBuffer, videoEncoderOutputBufferInfo);
}
if ((videoEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM)
!= 0) {
videoEncoderDone = true;
}
videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
break;
}
while (!audioEncoderDone
&& (encoderOutputAudioFormat == null || muxing)) {
int encoderOutputBufferIndex = audioEncoder.dequeueOutputBuffer(
audioEncoderOutputBufferInfo, TIMEOUT_USEC);
if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break;
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
audioEncoderOutputBuffers = audioEncoder.getOutputBuffers();
break;
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
encoderOutputAudioFormat = audioEncoder.getOutputFormat();
break;
}
ByteBuffer encoderOutputBuffer =
audioEncoderOutputBuffers[encoderOutputBufferIndex];
if ((audioEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
!= 0) {
audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
break;
}
if (audioEncoderOutputBufferInfo.size != 0)
muxer.writeSampleData(
outputAudioTrack, encoderOutputBuffer, audioEncoderOutputBufferInfo);
if ((audioEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM)
!= 0)
audioEncoderDone = true;
audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
break;
}
if (!muxing && (encoderOutputAudioFormat != null)
&& (encoderOutputVideoFormat != null)) {
outputVideoTrack = muxer.addTrack(encoderOutputVideoFormat);
outputAudioTrack = muxer.addTrack(encoderOutputAudioFormat);
muxer.start();
muxing = true;
}
}
}
private static boolean isVideoFormat(MediaFormat format) {
return getMimeTypeFor(format).startsWith("video/");
}
private static boolean isAudioFormat(MediaFormat format) {
return getMimeTypeFor(format).startsWith("audio/");
}
private static String getMimeTypeFor(MediaFormat format) {
return format.getString(MediaFormat.KEY_MIME);
}
private static MediaCodecInfo selectCodec(String mimeType) {
int numCodecs = MediaCodecList.getCodecCount();
for (int i = 0; i < numCodecs; i++) {
MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
if (!codecInfo.isEncoder()) {
continue;
}
String[] types = codecInfo.getSupportedTypes();
for (int j = 0; j < types.length; j++) {
if (types[j].equalsIgnoreCase(mimeType)) {
return codecInfo;
}
}
}
return null;
}
}
It also needs InputSurface
, OutputSurface
and TextureRender
, which live next to ExtractDecodeEditEncodeMuxTest
(above the HERE link). Put these three in the same package as VideoResolutionChanger
and use it like this:
try{
String pathToReEncodedFile =
new VideoResolutionChanger().changeResolution(videoFilePath);
}catch(Throwable t){ /* something went wrong :( */ }
where videoFilePath
can be obtained from a File
via file.getAbsolutePath()
.
I know this isn't the cleanest way, and probably not the most effective/efficient one either, but I spent the last two days searching for code like this and found plenty of threads, most of which redirected me to INDE, ffmpeg or jcodec, while the rest went unanswered. So I'm leaving it here - use it wisely!
Limitations:
- the use-it-like-this snippet above must not be started on the main (UI) thread, e.g. directly in an
Activity
. The best way is to create an IntentService
and pass the input file path String
in the Intent
's extras Bundle
. Then you can run changeResolution
directly in onHandleIntent
;
- API 18 and above (where
MediaMuxer
was introduced);
- API 18 of course needs
WRITE_EXTERNAL_STORAGE
; API 19 and above have it "built-in";
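The IntentService approach from the first limitation could look roughly like this sketch (ReEncodeService and EXTRA_PATH are made-up names, and delivering the result back to the UI is left out):

```java
// Hypothetical IntentService wrapper around VideoResolutionChanger.
public class ReEncodeService extends IntentService {
    public static final String EXTRA_PATH = "extra_video_path";

    public ReEncodeService() {
        super("ReEncodeService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // onHandleIntent runs on a worker thread, so blocking here is safe.
        String inputPath = intent.getStringExtra(EXTRA_PATH);
        try {
            String outPath = new VideoResolutionChanger()
                    .changeResolution(new File(inputPath));
            // e.g. broadcast outPath back to the UI from here
        } catch (Throwable t) {
            // log / report the failure
        }
    }
}
```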
@fadden thanks for your work and support! :)
I don't mind the implementation and coding issues in the question. But we went through the same disaster, since ffmpeg increased our app size by at least 19 MB, and I was using this Stack Overflow question to come up with a library that does the same thing without ffmpeg. Apparently the folks at linkedin
have done it before. Check this article.
The project is called LiTr and is available on github. It uses Android's MediaCodec and MediaMuxer, so you can refer to its code for help with your own project whenever you need to. The question was asked 4 years ago, but I hope this helps someone now.
Use compile 'com.zolad:videoslimmer:1.0.0'
VideoResolutionChanger.kt
class VideoResolutionChanger {
private val TIMEOUT_USEC = 10000
private val OUTPUT_VIDEO_MIME_TYPE = "video/avc"
private val OUTPUT_VIDEO_BIT_RATE = 2048 * 1024
private val OUTPUT_VIDEO_FRAME_RATE = 60
private val OUTPUT_VIDEO_IFRAME_INTERVAL = 1
private val OUTPUT_VIDEO_COLOR_FORMAT = MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface
private val OUTPUT_AUDIO_MIME_TYPE = "audio/mp4a-latm"
private val OUTPUT_AUDIO_CHANNEL_COUNT = 2
private val OUTPUT_AUDIO_BIT_RATE = 128 * 1024
private val OUTPUT_AUDIO_AAC_PROFILE = MediaCodecInfo.CodecProfileLevel.AACObjectHE
private val OUTPUT_AUDIO_SAMPLE_RATE_HZ = 44100
private var mWidth = 1920
private var mHeight = 1080
private var mOutputFile : String? = null
private var mInputFile : String? = null
private var mTotalTime : Int = 0
@Throws(Throwable::class)
fun changeResolution(f: File): String? {
mInputFile = f.absolutePath
val filePath : String? = mInputFile!!.substring(0, mInputFile!!.lastIndexOf(File.separator))
val splitByDot: Array<String> = mInputFile!!.split(".").toTypedArray()
var ext = ""
if (splitByDot.size > 1) ext = splitByDot[splitByDot.size - 1]
var fileName: String = mInputFile!!.substring(
mInputFile!!.lastIndexOf(File.separator) + 1,
mInputFile!!.length
)
fileName = if (ext.length > 0) fileName.replace(".$ext", "_out.mp4") else fileName + "_out.mp4"
val outFile = File(filePath!!, fileName)
if (!outFile.exists()) outFile.createNewFile()
mOutputFile = outFile.absolutePath
ChangerWrapper.changeResolutionInSeparatedThread(this)
return mOutputFile
}
private class ChangerWrapper private constructor(private val mChanger: VideoResolutionChanger) :
Runnable {
private var mThrowable : Throwable? = null
override fun run() {
try {
mChanger.prepareAndChangeResolution()
} catch (th: Throwable) {
mThrowable = th
}
}
companion object {
@Throws(Throwable::class)
fun changeResolutionInSeparatedThread(changer: VideoResolutionChanger) {
val wrapper = ChangerWrapper(changer)
val th = Thread(wrapper, ChangerWrapper::class.java.simpleName)
th.start()
th.join()
if (wrapper.mThrowable != null) throw wrapper.mThrowable!!
}
}
}
@Throws(Exception::class)
private fun prepareAndChangeResolution() {
var exception: Exception? = null
val videoCodecInfo = selectCodec(OUTPUT_VIDEO_MIME_TYPE) ?: return
val audioCodecInfo = selectCodec(OUTPUT_AUDIO_MIME_TYPE) ?: return
var videoExtractor : MediaExtractor? = null
var audioExtractor : MediaExtractor? = null
var outputSurface : OutputSurface? = null
var videoDecoder : MediaCodec? = null
var audioDecoder : MediaCodec? = null
var videoEncoder : MediaCodec? = null
var audioEncoder : MediaCodec? = null
var muxer : MediaMuxer? = null
var inputSurface : InputSurface? = null
try {
videoExtractor = createExtractor()
val videoInputTrack = getAndSelectVideoTrackIndex(videoExtractor)
val inputFormat = videoExtractor!!.getTrackFormat(videoInputTrack)
val m = MediaMetadataRetriever()
m.setDataSource(mInputFile)
var inputWidth: Int
var inputHeight: Int
try {
inputWidth =
m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH)!!.toInt()
inputHeight =
m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT)!!.toInt()
mTotalTime =
m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION)!!.toInt() * 1000
} catch (e: Exception) {
val thumbnail = m.frameAtTime
inputWidth = thumbnail!!.width
inputHeight = thumbnail.height
thumbnail.recycle()
}
if (inputWidth > inputHeight) {
if (mWidth < mHeight) {
val w = mWidth
mWidth = mHeight
mHeight = w
}
} else {
if (mWidth > mHeight) {
val w = mWidth
mWidth = mHeight
mHeight = w
}
}
val outputVideoFormat =
MediaFormat.createVideoFormat(OUTPUT_VIDEO_MIME_TYPE, mWidth, mHeight)
outputVideoFormat.setInteger(
MediaFormat.KEY_COLOR_FORMAT, OUTPUT_VIDEO_COLOR_FORMAT
)
outputVideoFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_VIDEO_BIT_RATE)
outputVideoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, OUTPUT_VIDEO_FRAME_RATE)
outputVideoFormat.setInteger(
MediaFormat.KEY_I_FRAME_INTERVAL, OUTPUT_VIDEO_IFRAME_INTERVAL
)
val inputSurfaceReference: AtomicReference<Surface> = AtomicReference<Surface>()
videoEncoder = createVideoEncoder(
videoCodecInfo, outputVideoFormat, inputSurfaceReference
)
inputSurface = InputSurface(inputSurfaceReference.get())
inputSurface.makeCurrent()
outputSurface = OutputSurface()
videoDecoder = createVideoDecoder(inputFormat, outputSurface!!.surface!!)
audioExtractor = createExtractor()
val audioInputTrack = getAndSelectAudioTrackIndex(audioExtractor)
val inputAudioFormat = audioExtractor!!.getTrackFormat(audioInputTrack)
val outputAudioFormat = MediaFormat.createAudioFormat(
inputAudioFormat.getString(MediaFormat.KEY_MIME)!!,
inputAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
inputAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT)
)
outputAudioFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_AUDIO_BIT_RATE)
outputAudioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, OUTPUT_AUDIO_AAC_PROFILE)
audioEncoder = createAudioEncoder(audioCodecInfo, outputAudioFormat)
audioDecoder = createAudioDecoder(inputAudioFormat)
muxer = MediaMuxer(mOutputFile!!, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
changeResolution(
videoExtractor, audioExtractor,
videoDecoder, videoEncoder,
audioDecoder, audioEncoder,
muxer, inputSurface, outputSurface
)
} finally {
try {
videoExtractor?.release()
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
audioExtractor?.release()
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (videoDecoder != null) {
videoDecoder.stop()
videoDecoder.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
outputSurface?.release()
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (videoEncoder != null) {
videoEncoder.stop()
videoEncoder.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (audioDecoder != null) {
audioDecoder.stop()
audioDecoder.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (audioEncoder != null) {
audioEncoder.stop()
audioEncoder.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (muxer != null) {
muxer.stop()
muxer.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
inputSurface?.release()
} catch (e: Exception) {
if (exception == null) exception = e
}
}
if (exception != null) throw exception
}
@Throws(IOException::class)
private fun createExtractor(): MediaExtractor? {
val extractor : MediaExtractor = MediaExtractor()
mInputFile?.let { extractor.setDataSource(it) }
return extractor
}
@Throws(IOException::class)
private fun createVideoDecoder(inputFormat: MediaFormat, surface: Surface): MediaCodec? {
val decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat)!!)
decoder.configure(inputFormat, surface, null, 0)
decoder.start()
return decoder
}
@Throws(IOException::class)
private fun createVideoEncoder(
codecInfo: MediaCodecInfo, format: MediaFormat,
surfaceReference: AtomicReference<Surface>
): MediaCodec? {
val encoder = MediaCodec.createByCodecName(codecInfo.name)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
surfaceReference.set(encoder.createInputSurface())
encoder.start()
return encoder
}
@Throws(IOException::class)
private fun createAudioDecoder(inputFormat: MediaFormat): MediaCodec? {
val decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat)!!)
decoder.configure(inputFormat, null, null, 0)
decoder.start()
return decoder
}
@Throws(IOException::class)
private fun createAudioEncoder(codecInfo: MediaCodecInfo, format: MediaFormat): MediaCodec? {
val encoder = MediaCodec.createByCodecName(codecInfo.name)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
encoder.start()
return encoder
}
private fun getAndSelectVideoTrackIndex(extractor: MediaExtractor?): Int {
for (index in 0 until extractor!!.trackCount) {
if (isVideoFormat(extractor.getTrackFormat(index))) {
extractor.selectTrack(index)
return index
}
}
return -1
}
private fun getAndSelectAudioTrackIndex(extractor: MediaExtractor?): Int {
for (index in 0 until extractor!!.trackCount) {
if (isAudioFormat(extractor.getTrackFormat(index))) {
extractor.selectTrack(index)
return index
}
}
return -1
}
private fun changeResolution(
videoExtractor: MediaExtractor?, audioExtractor: MediaExtractor?,
videoDecoder: MediaCodec?, videoEncoder: MediaCodec?,
audioDecoder: MediaCodec?, audioEncoder: MediaCodec?,
muxer: MediaMuxer,
inputSurface: InputSurface?, outputSurface: OutputSurface?
) {
var videoDecoderInputBuffers : Array<ByteBuffer?>? = null
var videoDecoderOutputBuffers : Array<ByteBuffer?>? = null
var videoEncoderOutputBuffers : Array<ByteBuffer?>? = null
var videoDecoderOutputBufferInfo : MediaCodec.BufferInfo? = null
var videoEncoderOutputBufferInfo : MediaCodec.BufferInfo? = null
videoDecoderInputBuffers = videoDecoder!!.inputBuffers
videoDecoderOutputBuffers = videoDecoder.outputBuffers
videoEncoderOutputBuffers = videoEncoder!!.outputBuffers
videoDecoderOutputBufferInfo = MediaCodec.BufferInfo()
videoEncoderOutputBufferInfo = MediaCodec.BufferInfo()
var audioDecoderInputBuffers : Array<ByteBuffer?>? = null
var audioDecoderOutputBuffers : Array<ByteBuffer>? = null
var audioEncoderInputBuffers : Array<ByteBuffer>? = null
var audioEncoderOutputBuffers : Array<ByteBuffer?>? = null
var audioDecoderOutputBufferInfo : MediaCodec.BufferInfo? = null
var audioEncoderOutputBufferInfo : MediaCodec.BufferInfo? = null
audioDecoderInputBuffers = audioDecoder!!.inputBuffers
audioDecoderOutputBuffers = audioDecoder.outputBuffers
audioEncoderInputBuffers = audioEncoder!!.inputBuffers
audioEncoderOutputBuffers = audioEncoder.outputBuffers
audioDecoderOutputBufferInfo = MediaCodec.BufferInfo()
audioEncoderOutputBufferInfo = MediaCodec.BufferInfo()
var encoderOutputVideoFormat : MediaFormat? = null
var encoderOutputAudioFormat : MediaFormat? = null
var outputVideoTrack = -1
var outputAudioTrack = -1
var videoExtractorDone = false
var videoDecoderDone = false
var videoEncoderDone = false
var audioExtractorDone = false
var audioDecoderDone = false
var audioEncoderDone = false
var pendingAudioDecoderOutputBufferIndex = -1
var muxing = false
while (!videoEncoderDone || !audioEncoderDone) {
while (!videoExtractorDone
&& (encoderOutputVideoFormat == null || muxing)
) {
val decoderInputBufferIndex = videoDecoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())
if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
val decoderInputBuffer: ByteBuffer? =
videoDecoderInputBuffers[decoderInputBufferIndex]
val size = decoderInputBuffer?.let { videoExtractor!!.readSampleData(it, 0) }
val presentationTime = videoExtractor?.sampleTime
if (presentationTime != null) {
if (size != null) {
if (size >= 0) {
if (videoExtractor != null) {
videoDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
size,
presentationTime,
videoExtractor.sampleFlags
)
}
}
}
}
if (videoExtractor != null) {
videoExtractorDone = (!videoExtractor.advance() && size == -1)
}
if (videoExtractorDone) {
videoDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
0,
0,
MediaCodec.BUFFER_FLAG_END_OF_STREAM
)
}
break
}
while (!audioExtractorDone
&& (encoderOutputAudioFormat == null || muxing)
) {
val decoderInputBufferIndex = audioDecoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())
if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
val decoderInputBuffer: ByteBuffer? =
audioDecoderInputBuffers[decoderInputBufferIndex]
val size = decoderInputBuffer?.let { audioExtractor!!.readSampleData(it, 0) }
val presentationTime = audioExtractor?.sampleTime
if (presentationTime != null) {
if (size != null) {
if (size >= 0) {
audioDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
size,
presentationTime,
audioExtractor.sampleFlags
)
}
}
}
if (audioExtractor != null) {
audioExtractorDone = (!audioExtractor.advance() && size == -1)
}
if (audioExtractorDone) {
audioDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
0,
0,
MediaCodec.BUFFER_FLAG_END_OF_STREAM
)
}
break
}
while (!videoDecoderDone
&& (encoderOutputVideoFormat == null || muxing)
) {
val decoderOutputBufferIndex = videoDecoder.dequeueOutputBuffer(
videoDecoderOutputBufferInfo, TIMEOUT_USEC.toLong()
)
if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
videoDecoderOutputBuffers = videoDecoder.outputBuffers
break
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
decoderOutputVideoFormat = videoDecoder.outputFormat
break
}
val decoderOutputBuffer: ByteBuffer? =
videoDecoderOutputBuffers!![decoderOutputBufferIndex]
if (videoDecoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
!= 0
) {
videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false)
break
}
val render = videoDecoderOutputBufferInfo.size != 0
videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, render)
if (render) {
if (outputSurface != null) {
outputSurface.awaitNewImage()
outputSurface.drawImage()
}
if (inputSurface != null) {
inputSurface.setPresentationTime(
videoDecoderOutputBufferInfo.presentationTimeUs * 1000
)
inputSurface.swapBuffers()
}
}
if ((videoDecoderOutputBufferInfo.flags
and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0
) {
videoDecoderDone = true
videoEncoder.signalEndOfInputStream()
}
break
}
while (!audioDecoderDone && pendingAudioDecoderOutputBufferIndex == -1 && (encoderOutputAudioFormat == null || muxing)) {
val decoderOutputBufferIndex = audioDecoder.dequeueOutputBuffer(
audioDecoderOutputBufferInfo, TIMEOUT_USEC.toLong()
)
if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
audioDecoderOutputBuffers = audioDecoder.outputBuffers
break
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
decoderOutputAudioFormat = audioDecoder.outputFormat
break
}
val decoderOutputBuffer: ByteBuffer =
audioDecoderOutputBuffers!![decoderOutputBufferIndex]
if (audioDecoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
!= 0
) {
audioDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false)
break
}
pendingAudioDecoderOutputBufferIndex = decoderOutputBufferIndex
break
}
while (pendingAudioDecoderOutputBufferIndex != -1) {
val encoderInputBufferIndex = audioEncoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())
// Guard against INFO_TRY_AGAIN_LATER (-1), as the original Java test does,
// so audioEncoderInputBuffers is never indexed with a negative value.
if (encoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
val encoderInputBuffer: ByteBuffer =
audioEncoderInputBuffers[encoderInputBufferIndex]
val size = audioDecoderOutputBufferInfo.size
val presentationTime = audioDecoderOutputBufferInfo.presentationTimeUs
if (size >= 0) {
val decoderOutputBuffer: ByteBuffer =
audioDecoderOutputBuffers!![pendingAudioDecoderOutputBufferIndex]
.duplicate()
decoderOutputBuffer.position(audioDecoderOutputBufferInfo.offset)
decoderOutputBuffer.limit(audioDecoderOutputBufferInfo.offset + size)
encoderInputBuffer.position(0)
encoderInputBuffer.put(decoderOutputBuffer)
audioEncoder.queueInputBuffer(
encoderInputBufferIndex,
0,
size,
presentationTime,
audioDecoderOutputBufferInfo.flags
)
}
audioDecoder.releaseOutputBuffer(pendingAudioDecoderOutputBufferIndex, false)
pendingAudioDecoderOutputBufferIndex = -1
if ((audioDecoderOutputBufferInfo.flags
and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0
) audioDecoderDone = true
break
}
while (!videoEncoderDone
&& (encoderOutputVideoFormat == null || muxing)
) {
val encoderOutputBufferIndex = videoEncoder.dequeueOutputBuffer(
videoEncoderOutputBufferInfo, TIMEOUT_USEC.toLong()
)
if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) break
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
videoEncoderOutputBuffers = videoEncoder.outputBuffers
break
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
encoderOutputVideoFormat = videoEncoder.outputFormat
break
}
val encoderOutputBuffer: ByteBuffer? =
videoEncoderOutputBuffers!![encoderOutputBufferIndex]
if (videoEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
!= 0
) {
videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
break
}
if (videoEncoderOutputBufferInfo.size != 0) {
if (encoderOutputBuffer != null) {
muxer.writeSampleData(
outputVideoTrack, encoderOutputBuffer, videoEncoderOutputBufferInfo
)
}
}
if (videoEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM
!= 0
) {
videoEncoderDone = true
}
videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
break
}
while (!audioEncoderDone
&& (encoderOutputAudioFormat == null || muxing)
) {
val encoderOutputBufferIndex = audioEncoder.dequeueOutputBuffer(
audioEncoderOutputBufferInfo, TIMEOUT_USEC.toLong()
)
if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
audioEncoderOutputBuffers = audioEncoder.outputBuffers
break
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
encoderOutputAudioFormat = audioEncoder.outputFormat
break
}
val encoderOutputBuffer: ByteBuffer? =
audioEncoderOutputBuffers!![encoderOutputBufferIndex]
if (audioEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
!= 0
) {
audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
break
}
if (audioEncoderOutputBufferInfo.size != 0) encoderOutputBuffer?.let {
muxer.writeSampleData(
outputAudioTrack, it, audioEncoderOutputBufferInfo
)
}
if (audioEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM
!= 0
) audioEncoderDone = true
audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
break
}
if (!muxing && encoderOutputAudioFormat != null
&& encoderOutputVideoFormat != null
) {
outputVideoTrack = muxer.addTrack(encoderOutputVideoFormat)
outputAudioTrack = muxer.addTrack(encoderOutputAudioFormat)
muxer.start()
muxing = true
}
}
}
private fun isVideoFormat(format: MediaFormat): Boolean {
return getMimeTypeFor(format)!!.startsWith("video/")
}
private fun isAudioFormat(format: MediaFormat): Boolean {
return getMimeTypeFor(format)!!.startsWith("audio/")
}
private fun getMimeTypeFor(format: MediaFormat): String? {
return format.getString(MediaFormat.KEY_MIME)
}
private fun selectCodec(mimeType: String): MediaCodecInfo? {
val numCodecs = MediaCodecList.getCodecCount()
for (i in 0 until numCodecs) {
val codecInfo = MediaCodecList.getCodecInfoAt(i)
if (!codecInfo.isEncoder) {
continue
}
val types = codecInfo.supportedTypes
for (j in types.indices) {
if (types[j].equals(mimeType, ignoreCase = true)) {
return codecInfo
}
}
}
return null
}
}
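The blocking transcode above is meant to run off the main thread; the way a worker thread's failure is surfaced back to the caller (the same pattern the ChangerWrapper class in the original answer uses) can be sketched in plain Java. runAndRethrow is an illustrative name, not from the source:

```java
// Sketch: run a blocking transcode job on a worker thread, join, and
// re-throw any captured failure to the caller. Runnable "job" stands in
// for prepareAndChangeResolution().
public class WorkerRethrow {
    static void runAndRethrow(Runnable job) throws Throwable {
        final Throwable[] failure = new Throwable[1];
        Thread worker = new Thread(() -> {
            try {
                job.run();
            } catch (Throwable th) {
                failure[0] = th; // capture; otherwise the worker swallows it
            }
        }, "video-transcode");
        worker.start();
        worker.join(); // block the calling (non-UI) thread until done
        if (failure[0] != null) throw failure[0];
    }

    public static void main(String[] args) throws Throwable {
        runAndRethrow(() -> System.out.println("transcode done"));
        try {
            runAndRethrow(() -> { throw new IllegalStateException("codec error"); });
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The join() makes the call synchronous from the caller's point of view, which is why the whole thing must itself be invoked from a background thread on Android.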
OutputSurface.kt
/*
* Copyright (C) 2013 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Holds state associated with a Surface used for MediaCodec decoder output.
*
*
* The (width,height) constructor for this class will prepare GL, create a SurfaceTexture,
* and then create a Surface for that SurfaceTexture. The Surface can be passed to
* MediaCodec.configure() to receive decoder output. When a frame arrives, we latch the
* texture with updateTexImage, then render the texture with GL to a pbuffer.
*
*
* The no-arg constructor skips the GL preparation step and doesn't allocate a pbuffer.
* Instead, it just creates the Surface and SurfaceTexture, and when a frame arrives
* we just draw it on whatever surface is current.
*
*
* By default, the Surface will be using a BufferQueue in asynchronous mode, so we
* can potentially drop frames.
*/
internal class OutputSurface : OnFrameAvailableListener {
private var mEGLDisplay = EGL14.EGL_NO_DISPLAY
private var mEGLContext = EGL14.EGL_NO_CONTEXT
private var mEGLSurface = EGL14.EGL_NO_SURFACE
private var mSurfaceTexture: SurfaceTexture? = null
/**
* Returns the Surface that we draw onto.
*/
var surface: Surface? = null
private set
private val mFrameSyncObject = Object() // guards mFrameAvailable
private var mFrameAvailable = false
private var mTextureRender: TextureRender? = null
/**
* Creates an OutputSurface backed by a pbuffer with the specified dimensions. The new
* EGL context and surface will be made current. Creates a Surface that can be passed
* to MediaCodec.configure().
*/
constructor(width: Int, height: Int) {
println("OutputSurface constructor width: $width height: $height")
require(!(width <= 0 || height <= 0))
eglSetup(width, height)
makeCurrent()
setup()
}
/**
* Creates an OutputSurface using the current EGL context (rather than establishing a
* new one). Creates a Surface that can be passed to MediaCodec.configure().
*/
constructor() {
println("OutputSurface constructor")
setup()
}
/**
* Creates instances of TextureRender and SurfaceTexture, and a Surface associated
* with the SurfaceTexture.
*/
private fun setup() {
println("OutputSurface setup")
mTextureRender = TextureRender()
mTextureRender!!.surfaceCreated()
// Even if we don't access the SurfaceTexture after the constructor returns, we
// still need to keep a reference to it. The Surface doesn't retain a reference
// at the Java level, so if we don't either then the object can get GCed, which
// causes the native finalizer to run.
if (VERBOSE) Log.d(TAG, "textureID=" + mTextureRender!!.textureId)
mSurfaceTexture = SurfaceTexture(mTextureRender!!.textureId)
// This doesn't work if OutputSurface is created on the thread that CTS started for
// these test cases.
//
// The CTS-created thread has a Looper, and the SurfaceTexture constructor will
// create a Handler that uses it. The "frame available" message is delivered
// there, but since we're not a Looper-based thread we'll never see it. For
// this to do anything useful, OutputSurface must be created on a thread without
// a Looper, so that SurfaceTexture uses the main application Looper instead.
//
// Java language note: passing "this" out of a constructor is generally unwise,
// but we should be able to get away with it here.
mSurfaceTexture!!.setOnFrameAvailableListener(this)
surface = Surface(mSurfaceTexture)
}
/**
* Prepares EGL. We want a GLES 2.0 context and a surface that supports pbuffer.
*/
private fun eglSetup(width: Int, height: Int) {
mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
if (mEGLDisplay === EGL14.EGL_NO_DISPLAY) {
throw RuntimeException("unable to get EGL14 display")
}
val version = IntArray(2)
if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
mEGLDisplay = null
throw RuntimeException("unable to initialize EGL14")
}
// Configure EGL for pbuffer and OpenGL ES 2.0. We want enough RGB bits
// to be able to tell if the frame is reasonable.
val attribList = intArrayOf(
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
EGL14.EGL_NONE
)
val configs = arrayOfNulls<EGLConfig>(1)
val numConfigs = IntArray(1)
if (!EGL14.eglChooseConfig(
mEGLDisplay, attribList, 0, configs, 0, configs.size,
numConfigs, 0
)
) {
throw RuntimeException("unable to find RGB888+recordable ES2 EGL config")
}
// Configure context for OpenGL ES 2.0.
val attrib_list = intArrayOf(
EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
)
mEGLContext = EGL14.eglCreateContext(
mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
attrib_list, 0
)
checkEglError("eglCreateContext")
if (mEGLContext == null) {
throw RuntimeException("null context")
}
// Create a pbuffer surface. By using this for output, we can use glReadPixels
// to test values in the output.
val surfaceAttribs = intArrayOf(
EGL14.EGL_WIDTH, width,
EGL14.EGL_HEIGHT, height,
EGL14.EGL_NONE
)
mEGLSurface = EGL14.eglCreatePbufferSurface(mEGLDisplay, configs[0], surfaceAttribs, 0)
checkEglError("eglCreatePbufferSurface")
if (mEGLSurface == null) {
throw RuntimeException("surface was null")
}
}
/**
* Discard all resources held by this class, notably the EGL context.
*/
fun release() {
if (mEGLDisplay !== EGL14.EGL_NO_DISPLAY) {
EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface)
EGL14.eglDestroyContext(mEGLDisplay, mEGLContext)
EGL14.eglReleaseThread()
EGL14.eglTerminate(mEGLDisplay)
}
surface!!.release()
// this causes a bunch of warnings that appear harmless but might confuse someone:
// W BufferQueue: [unnamed-3997-2] cancelBuffer: BufferQueue has been abandoned!
//mSurfaceTexture.release();
mEGLDisplay = EGL14.EGL_NO_DISPLAY
mEGLContext = EGL14.EGL_NO_CONTEXT
mEGLSurface = EGL14.EGL_NO_SURFACE
mTextureRender = null
surface = null
mSurfaceTexture = null
}
/**
* Makes our EGL context and surface current.
*/
private fun makeCurrent() {
if (!EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext)) {
throw RuntimeException("eglMakeCurrent failed")
}
}
/**
* Replaces the fragment shader.
*/
fun changeFragmentShader(fragmentShader: String?) {
if (fragmentShader != null) {
mTextureRender?.changeFragmentShader(fragmentShader)
}
}
/**
* Latches the next buffer into the texture. Must be called from the thread that created
* the OutputSurface object, after the onFrameAvailable callback has signaled that new
* data is available.
*/
fun awaitNewImage() {
//println("awaitNewImage")
val TIMEOUT_MS = 500
synchronized(mFrameSyncObject) {
while (!mFrameAvailable) {
try {
// Wait for onFrameAvailable() to signal us. Use a timeout to avoid
// stalling the test if it doesn't arrive.
mFrameSyncObject.wait(TIMEOUT_MS.toLong())
if (!mFrameAvailable) {
// TODO: if "spurious wakeup", continue while loop
//throw RuntimeException("Surface frame wait timed out")
}
} catch (ie: InterruptedException) {
// shouldn't happen
throw RuntimeException(ie)
}
}
mFrameAvailable = false
}
// Latch the data.
mTextureRender?.checkGlError("before updateTexImage")
mSurfaceTexture!!.updateTexImage()
}
/**
* Draws the data from SurfaceTexture onto the current EGL surface.
*/
fun drawImage() {
mSurfaceTexture?.let { mTextureRender?.drawFrame(it) }
}
override fun onFrameAvailable(st: SurfaceTexture) {
//println("onFrameAvailable")
if (VERBOSE) Log.d(TAG, "new frame available")
synchronized(mFrameSyncObject) {
if (mFrameAvailable) {
throw RuntimeException("mFrameAvailable already set, frame could be dropped")
}
mFrameAvailable = true
mFrameSyncObject.notifyAll()
}
}
/**
* Checks for EGL errors.
*/
private fun checkEglError(msg: String) {
var error: Int
if (EGL14.eglGetError().also { error = it } != EGL14.EGL_SUCCESS) {
throw RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error))
}
}
companion object {
private const val TAG = "OutputSurface"
private const val VERBOSE = false
}
}
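The awaitNewImage()/onFrameAvailable() handshake above is a guarded wait with a timeout: the consumer blocks on a lock object until the producer signals that a frame arrived, and the timeout keeps a lost signal from hanging the pipeline. A minimal plain-Java sketch (which, unlike the snippet above, also handles spurious wakeups; class and method names are illustrative):

```java
// Sketch of the awaitNewImage() pattern: wait for a flag under a lock,
// with a deadline, consuming the flag once it is observed.
public class FrameGate {
    private final Object sync = new Object(); // guards "available"
    private boolean available = false;

    void signalFrame() {                  // producer side (onFrameAvailable)
        synchronized (sync) {
            available = true;
            sync.notifyAll();
        }
    }

    boolean awaitFrame(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        synchronized (sync) {
            while (!available) {          // loop handles spurious wakeups
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) return false; // timed out
                sync.wait(remaining);
            }
            available = false;            // consume the frame
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        FrameGate gate = new FrameGate();
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            gate.signalFrame();
        }).start();
        System.out.println("got frame: " + gate.awaitFrame(500));
        System.out.println("timed out: " + !gate.awaitFrame(100));
    }
}
```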
InputSurface.kt
/*
* Copyright (C) 2013 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Holds state associated with a Surface used for MediaCodec encoder input.
*
*
* The constructor takes a Surface obtained from MediaCodec.createInputSurface(), and uses that
* to create an EGL window surface. Calls to eglSwapBuffers() cause a frame of data to be sent
* to the video encoder.
*/
internal class InputSurface(surface: Surface?) {
private var mEGLDisplay = EGL14.EGL_NO_DISPLAY
private var mEGLContext = EGL14.EGL_NO_CONTEXT
private var mEGLSurface = EGL14.EGL_NO_SURFACE
/**
* Returns the Surface that the MediaCodec receives buffers from.
*/
var surface: Surface?
private set
/**
* Prepares EGL. We want a GLES 2.0 context and a surface that supports recording.
*/
private fun eglSetup() {
mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
if (mEGLDisplay === EGL14.EGL_NO_DISPLAY) {
throw RuntimeException("unable to get EGL14 display")
}
val version = IntArray(2)
if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
mEGLDisplay = null
throw RuntimeException("unable to initialize EGL14")
}
// Configure EGL for recordable and OpenGL ES 2.0. We want enough RGB bits
// to minimize artifacts from possible YUV conversion.
val attribList = intArrayOf(
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL_RECORDABLE_ANDROID, 1,
EGL14.EGL_NONE
)
val configs = arrayOfNulls<EGLConfig>(1)
val numConfigs = IntArray(1)
if (!EGL14.eglChooseConfig(
mEGLDisplay, attribList, 0, configs, 0, configs.size,
numConfigs, 0
)
) {
throw RuntimeException("unable to find RGB888+recordable ES2 EGL config")
}
// Configure context for OpenGL ES 2.0.
val attrib_list = intArrayOf(
EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
)
mEGLContext = EGL14.eglCreateContext(
mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
attrib_list, 0
)
checkEglError("eglCreateContext")
if (mEGLContext == null) {
throw RuntimeException("null context")
}
// Create a window surface, and attach it to the Surface we received.
val surfaceAttribs = intArrayOf(
EGL14.EGL_NONE
)
mEGLSurface = EGL14.eglCreateWindowSurface(
mEGLDisplay, configs[0], surface,
surfaceAttribs, 0
)
checkEglError("eglCreateWindowSurface")
if (mEGLSurface == null) {
throw RuntimeException("surface was null")
}
}
/**
* Discard all resources held by this class, notably the EGL context. Also releases the
* Surface that was passed to our constructor.
*/
fun release() {
if (mEGLDisplay !== EGL14.EGL_NO_DISPLAY) {
EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface)
EGL14.eglDestroyContext(mEGLDisplay, mEGLContext)
EGL14.eglReleaseThread()
EGL14.eglTerminate(mEGLDisplay)
}
surface!!.release()
mEGLDisplay = EGL14.EGL_NO_DISPLAY
mEGLContext = EGL14.EGL_NO_CONTEXT
mEGLSurface = EGL14.EGL_NO_SURFACE
surface = null
}
/**
* Makes our EGL context and surface current.
*/
fun makeCurrent() {
if (!EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext)) {
throw RuntimeException("eglMakeCurrent failed")
}
}
fun makeUnCurrent() {
if (!EGL14.eglMakeCurrent(
mEGLDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
EGL14.EGL_NO_CONTEXT
)
) {
throw RuntimeException("eglMakeCurrent failed")
}
}
/**
* Calls eglSwapBuffers. Use this to "publish" the current frame.
*/
fun swapBuffers(): Boolean {
//println("swapBuffers")
return EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface)
}
/**
* Queries the surface's width.
*/
val width: Int
get() {
val value = IntArray(1)
EGL14.eglQuerySurface(mEGLDisplay, mEGLSurface, EGL14.EGL_WIDTH, value, 0)
return value[0]
}
/**
* Queries the surface's height.
*/
val height: Int
get() {
val value = IntArray(1)
EGL14.eglQuerySurface(mEGLDisplay, mEGLSurface, EGL14.EGL_HEIGHT, value, 0)
return value[0]
}
/**
* Sends the presentation time stamp to EGL. Time is expressed in nanoseconds.
*/
fun setPresentationTime(nsecs: Long) {
EGLExt.eglPresentationTimeANDROID(mEGLDisplay, mEGLSurface, nsecs)
}
/**
* Checks for EGL errors.
*/
private fun checkEglError(msg: String) {
var error: Int
if (EGL14.eglGetError().also { error = it } != EGL14.EGL_SUCCESS) {
throw RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error))
}
}
companion object {
private const val TAG = "InputSurface"
private const val VERBOSE = false
private const val EGL_RECORDABLE_ANDROID = 0x3142
}
/**
* Creates an InputSurface from a Surface.
*/
init {
if (surface == null) {
throw NullPointerException()
}
this.surface = surface
eglSetup()
}
}
I'm looking for an efficient way to reduce the weight of a video (as a File, for upload), and the obvious answer is: let's lower the resolution! (I don't need Full HD or 4K; plain HD is enough for me.) I've tried plenty of methods that should work across many API levels (10+ is required); the best one was android-ffmpeg-java, but... on my nice, fast, almost-current flagship the whole process takes about length_of_video*4 seconds, and the library weighs 9 MB, which is that much added to my app size... no! (12 MB down to 1 MB is a nice result, but there are still too many flaws.)
So I decided to do this with native Android methods, MediaMuxer and MediaCodec - available from API 18 and API 16 respectively (users of older devices: sorry, but they often have "lower-res" cameras anyway). The method below almost works - MediaMuxer does not respect MediaFormat.KEY_WIDTH and MediaFormat.KEY_HEIGHT; the extracted File is "re-compressed" and weighs a bit less, but its resolution is the same as in the original video File...
So, the question: how can I compress and re-scale/change the resolution of a video using MediaMuxer and the other classes that come with it?
public File getCompressedFile(String videoPath) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(videoPath);
    int trackCount = extractor.getTrackCount();
    String filePath = videoPath.substring(0, videoPath.lastIndexOf(File.separator));
    String[] splitByDot = videoPath.split("\\.");
    String ext = "";
    if (splitByDot != null && splitByDot.length > 1)
        ext = splitByDot[splitByDot.length - 1];
    String fileName = videoPath.substring(videoPath.lastIndexOf(File.separator) + 1,
            videoPath.length());
    if (ext.length() > 0)
        fileName = fileName.replace("." + ext, "_out." + ext);
    else
        fileName = fileName.concat("_out");
    final File outFile = new File(filePath, fileName);
    if (!outFile.exists())
        outFile.createNewFile();
    MediaMuxer muxer = new MediaMuxer(outFile.getAbsolutePath(),
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
    for (int i = 0; i < trackCount; i++) {
        extractor.selectTrack(i);
        MediaFormat format = extractor.getTrackFormat(i);
        String mime = format.getString(MediaFormat.KEY_MIME);
        if (mime != null && mime.startsWith("video")) {
            int currWidth = format.getInteger(MediaFormat.KEY_WIDTH);
            int currHeight = format.getInteger(MediaFormat.KEY_HEIGHT);
            format.setInteger(MediaFormat.KEY_WIDTH, currWidth > currHeight ? 960 : 540);
            format.setInteger(MediaFormat.KEY_HEIGHT, currWidth > currHeight ? 540 : 960);
            // API19: MediaFormat.KEY_MAX_WIDTH and KEY_MAX_HEIGHT
            format.setInteger("max-width", format.getInteger(MediaFormat.KEY_WIDTH));
            format.setInteger("max-height", format.getInteger(MediaFormat.KEY_HEIGHT));
        }
        int dstIndex = muxer.addTrack(format);
        indexMap.put(i, dstIndex);
    }
    boolean sawEOS = false;
    int bufferSize = 256 * 1024;
    int offset = 100;
    ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    muxer.start();
    while (!sawEOS) {
        bufferInfo.offset = offset;
        bufferInfo.size = extractor.readSampleData(dstBuf, offset);
        if (bufferInfo.size < 0) {
            sawEOS = true;
            bufferInfo.size = 0;
        } else {
            bufferInfo.presentationTimeUs = extractor.getSampleTime();
            bufferInfo.flags = extractor.getSampleFlags();
            int trackIndex = extractor.getSampleTrackIndex();
            muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
            extractor.advance();
        }
    }
    muxer.stop();
    muxer.release();
    return outFile;
}
PS. Lots of useful stuff about the muxer here; the code above is based on MediaMuxerTest.java, method cloneMediaUsingMuxer.
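As an aside, the filename surgery in getCompressedFile above (regex split plus replace) can misfire if "." + ext appears earlier in the name; a small sketch of the same "<name>_out.<ext>" derivation using lastIndexOf instead (outputName is an illustrative helper, not from the question):

```java
// Sketch: derive the "<name>_out.<ext>" output file name by splitting
// at the last dot, falling back to a plain "_out" suffix when there is
// no extension.
public class OutName {
    static String outputName(String fileName) {
        int dot = fileName.lastIndexOf('.');
        if (dot <= 0) return fileName + "_out";  // no extension (or dotfile)
        return fileName.substring(0, dot) + "_out" + fileName.substring(dot);
    }

    public static void main(String[] args) {
        System.out.println(outputName("clip.mp4"));
        System.out.println(outputName("clip"));
    }
}
```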
You could try Intel INDE Media for Mobile; tutorials are at https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials. It has a sample that shows how to use it to transcode (= recompress) video files.
You can set a smaller resolution and/or bitrate for the output to get a smaller file: https://github.com/INDExOS/media-for-mobile/blob/master/Android/samples/apps/src/com/intel/inde/mp/samples/ComposerTranscodeCoreActivity.java
MediaMuxer plays no part in the compression or scaling of the video. All it does is take the H.264 output from MediaCodec and wrap it in a .mp4 file wrapper.
Looking at your code, you're extracting the NAL units with MediaExtractor and immediately re-wrapping them with MediaMuxer. That should be very fast, and should have no effect on the video itself, because you're just re-wrapping the H.264.
To scale the video, you need to decode it with a MediaCodec decoder, feeding the NAL units from MediaExtractor into it, and then re-encode it with a MediaCodec encoder, passing the frames to MediaMuxer.
You've found bigflake.com; see also Grafika. Neither has exactly what you're looking for, but all the various pieces are there.
It's best to decode to a Surface rather than a ByteBuffer. That requires API 18, but for sanity's sake it's best to forget that MediaCodec existed before then. And MediaMuxer requires API 18 anyway.
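Before wiring up that decode/re-encode pipeline you have to pick the encoder's output size. A sketch of one way to do it: fit the source inside a target bound while keeping the aspect ratio, and round to a multiple of 16, which many H.264 encoders prefer. The 1280x720 bound and the 16-pixel alignment here are assumptions for illustration, not part of the answer above:

```java
// Sketch: compute a downscaled output size for the video encoder.
public class TargetSize {
    static int[] fit(int srcW, int srcH, int maxLong, int maxShort) {
        int longSide = Math.max(srcW, srcH), shortSide = Math.min(srcW, srcH);
        // never upscale; shrink so both sides fit inside the bound
        double scale = Math.min(1.0, Math.min(
                maxLong / (double) longSide, maxShort / (double) shortSide));
        int outLong = align16((int) Math.round(longSide * scale));
        int outShort = align16((int) Math.round(shortSide * scale));
        // restore the original orientation (landscape vs portrait)
        return srcW >= srcH ? new int[]{outLong, outShort}
                            : new int[]{outShort, outLong};
    }

    private static int align16(int v) {
        return Math.max(16, (v / 16) * 16); // round down to stay within the bound
    }

    public static void main(String[] args) {
        int[] a = fit(1920, 1080, 1280, 720);  // Full-HD landscape
        System.out.println(a[0] + "x" + a[1]);
        int[] b = fit(1080, 1920, 1280, 720);  // Full-HD portrait
        System.out.println(b[0] + "x" + b[1]);
    }
}
```

The resulting width/height would then go into MediaFormat.createVideoFormat() for the encoder, as the accepted answer below does with its fixed 1280x720.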
Based on bigflake.com/mediacodec/ (an awesome source of knowledge about the Media* classes) I tried a few approaches, and finally ExtractDecodeEditEncodeMuxTest turned out to be very helpful. That test isn't described in the article on the bigflake site, but it can be found HERE, along with the other classes mentioned in the text.
So, I copied most of the code from the ExtractDecodeEditEncodeMuxTest class mentioned above, and here it is: VideoResolutionChanger. It gives me a 2 MB HD video from a 16 MB Full-HD one. Nice! And fast! On my device the whole process takes a little longer than the input video's duration, e.g. a 10-second input video -> 11-12 seconds of processing. With ffmpeg-java it would be around 40 seconds or more (and the app needs 9 MB more).
Here we go:
VideoResolutionChanger:
@TargetApi(18)
public class VideoResolutionChanger {
private static final int TIMEOUT_USEC = 10000;
private static final String OUTPUT_VIDEO_MIME_TYPE = "video/avc";
private static final int OUTPUT_VIDEO_BIT_RATE = 2048 * 1024;
private static final int OUTPUT_VIDEO_FRAME_RATE = 30;
private static final int OUTPUT_VIDEO_IFRAME_INTERVAL = 10;
private static final int OUTPUT_VIDEO_COLOR_FORMAT =
MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface;
private static final String OUTPUT_AUDIO_MIME_TYPE = "audio/mp4a-latm";
private static final int OUTPUT_AUDIO_CHANNEL_COUNT = 2;
private static final int OUTPUT_AUDIO_BIT_RATE = 128 * 1024;
private static final int OUTPUT_AUDIO_AAC_PROFILE =
MediaCodecInfo.CodecProfileLevel.AACObjectHE;
private static final int OUTPUT_AUDIO_SAMPLE_RATE_HZ = 44100;
private int mWidth = 1280;
private int mHeight = 720;
private String mOutputFile, mInputFile;
public String changeResolution(File f)
throws Throwable {
mInputFile=f.getAbsolutePath();
String filePath = mInputFile.substring(0, mInputFile.lastIndexOf(File.separator));
String[] splitByDot = mInputFile.split("\\.");
String ext="";
if(splitByDot!=null && splitByDot.length>1)
ext = splitByDot[splitByDot.length-1];
String fileName = mInputFile.substring(mInputFile.lastIndexOf(File.separator)+1,
mInputFile.length());
if(ext.length()>0)
fileName=fileName.replace("."+ext, "_out.mp4");
else
fileName=fileName.concat("_out.mp4");
final File outFile = new File(Environment.getExternalStorageDirectory(), fileName);
if(!outFile.exists())
outFile.createNewFile();
mOutputFile=outFile.getAbsolutePath();
ChangerWrapper.changeResolutionInSeparatedThread(this);
return mOutputFile;
}
private static class ChangerWrapper implements Runnable {
private Throwable mThrowable;
private VideoResolutionChanger mChanger;
private ChangerWrapper(VideoResolutionChanger changer) {
mChanger = changer;
}
@Override
public void run() {
try {
mChanger.prepareAndChangeResolution();
} catch (Throwable th) {
mThrowable = th;
}
}
public static void changeResolutionInSeparatedThread(VideoResolutionChanger changer)
throws Throwable {
ChangerWrapper wrapper = new ChangerWrapper(changer);
Thread th = new Thread(wrapper, ChangerWrapper.class.getSimpleName());
th.start();
th.join();
if (wrapper.mThrowable != null)
throw wrapper.mThrowable;
}
}
private void prepareAndChangeResolution() throws Exception {
Exception exception = null;
MediaCodecInfo videoCodecInfo = selectCodec(OUTPUT_VIDEO_MIME_TYPE);
if (videoCodecInfo == null)
return;
MediaCodecInfo audioCodecInfo = selectCodec(OUTPUT_AUDIO_MIME_TYPE);
if (audioCodecInfo == null)
return;
MediaExtractor videoExtractor = null;
MediaExtractor audioExtractor = null;
OutputSurface outputSurface = null;
MediaCodec videoDecoder = null;
MediaCodec audioDecoder = null;
MediaCodec videoEncoder = null;
MediaCodec audioEncoder = null;
MediaMuxer muxer = null;
InputSurface inputSurface = null;
try {
videoExtractor = createExtractor();
int videoInputTrack = getAndSelectVideoTrackIndex(videoExtractor);
MediaFormat inputFormat = videoExtractor.getTrackFormat(videoInputTrack);
MediaMetadataRetriever m = new MediaMetadataRetriever();
m.setDataSource(mInputFile);
int inputWidth, inputHeight;
try {
inputWidth = Integer.parseInt(m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH));
inputHeight = Integer.parseInt(m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT));
} catch (Exception e) {
Bitmap thumbnail = m.getFrameAtTime();
inputWidth = thumbnail.getWidth();
inputHeight = thumbnail.getHeight();
thumbnail.recycle();
}
if(inputWidth>inputHeight){
if(mWidth<mHeight){
int w = mWidth;
mWidth=mHeight;
mHeight=w;
}
}
else{
if(mWidth>mHeight){
int w = mWidth;
mWidth=mHeight;
mHeight=w;
}
}
MediaFormat outputVideoFormat =
MediaFormat.createVideoFormat(OUTPUT_VIDEO_MIME_TYPE, mWidth, mHeight);
outputVideoFormat.setInteger(
MediaFormat.KEY_COLOR_FORMAT, OUTPUT_VIDEO_COLOR_FORMAT);
outputVideoFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_VIDEO_BIT_RATE);
outputVideoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, OUTPUT_VIDEO_FRAME_RATE);
outputVideoFormat.setInteger(
MediaFormat.KEY_I_FRAME_INTERVAL, OUTPUT_VIDEO_IFRAME_INTERVAL);
AtomicReference<Surface> inputSurfaceReference = new AtomicReference<Surface>();
videoEncoder = createVideoEncoder(
videoCodecInfo, outputVideoFormat, inputSurfaceReference);
inputSurface = new InputSurface(inputSurfaceReference.get());
inputSurface.makeCurrent();
outputSurface = new OutputSurface();
videoDecoder = createVideoDecoder(inputFormat, outputSurface.getSurface());
audioExtractor = createExtractor();
int audioInputTrack = getAndSelectAudioTrackIndex(audioExtractor);
MediaFormat inputAudioFormat = audioExtractor.getTrackFormat(audioInputTrack);
MediaFormat outputAudioFormat =
MediaFormat.createAudioFormat(inputAudioFormat.getString(MediaFormat.KEY_MIME),
inputAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
inputAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT));
outputAudioFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_AUDIO_BIT_RATE);
outputAudioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, OUTPUT_AUDIO_AAC_PROFILE);
audioEncoder = createAudioEncoder(audioCodecInfo, outputAudioFormat);
audioDecoder = createAudioDecoder(inputAudioFormat);
muxer = new MediaMuxer(mOutputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
changeResolution(videoExtractor, audioExtractor,
videoDecoder, videoEncoder,
audioDecoder, audioEncoder,
muxer, inputSurface, outputSurface);
} finally {
try {
if (videoExtractor != null)
videoExtractor.release();
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (audioExtractor != null)
audioExtractor.release();
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (videoDecoder != null) {
videoDecoder.stop();
videoDecoder.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (outputSurface != null) {
outputSurface.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (videoEncoder != null) {
videoEncoder.stop();
videoEncoder.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (audioDecoder != null) {
audioDecoder.stop();
audioDecoder.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (audioEncoder != null) {
audioEncoder.stop();
audioEncoder.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (muxer != null) {
muxer.stop();
muxer.release();
}
} catch(Exception e) {
if (exception == null)
exception = e;
}
try {
if (inputSurface != null)
inputSurface.release();
} catch(Exception e) {
if (exception == null)
exception = e;
}
}
if (exception != null)
throw exception;
}
private MediaExtractor createExtractor() throws IOException {
MediaExtractor extractor;
extractor = new MediaExtractor();
extractor.setDataSource(mInputFile);
return extractor;
}
private MediaCodec createVideoDecoder(MediaFormat inputFormat, Surface surface) throws IOException {
MediaCodec decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat));
decoder.configure(inputFormat, surface, null, 0);
decoder.start();
return decoder;
}
private MediaCodec createVideoEncoder(MediaCodecInfo codecInfo, MediaFormat format,
AtomicReference<Surface> surfaceReference) throws IOException {
MediaCodec encoder = MediaCodec.createByCodecName(codecInfo.getName());
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
surfaceReference.set(encoder.createInputSurface());
encoder.start();
return encoder;
}
private MediaCodec createAudioDecoder(MediaFormat inputFormat) throws IOException {
MediaCodec decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat));
decoder.configure(inputFormat, null, null, 0);
decoder.start();
return decoder;
}
private MediaCodec createAudioEncoder(MediaCodecInfo codecInfo, MediaFormat format) throws IOException {
MediaCodec encoder = MediaCodec.createByCodecName(codecInfo.getName());
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
return encoder;
}
private int getAndSelectVideoTrackIndex(MediaExtractor extractor) {
for (int index = 0; index < extractor.getTrackCount(); ++index) {
if (isVideoFormat(extractor.getTrackFormat(index))) {
extractor.selectTrack(index);
return index;
}
}
return -1;
}
private int getAndSelectAudioTrackIndex(MediaExtractor extractor) {
for (int index = 0; index < extractor.getTrackCount(); ++index) {
if (isAudioFormat(extractor.getTrackFormat(index))) {
extractor.selectTrack(index);
return index;
}
}
return -1;
}
private void changeResolution(MediaExtractor videoExtractor, MediaExtractor audioExtractor,
MediaCodec videoDecoder, MediaCodec videoEncoder,
MediaCodec audioDecoder, MediaCodec audioEncoder,
MediaMuxer muxer,
InputSurface inputSurface, OutputSurface outputSurface) {
ByteBuffer[] videoDecoderInputBuffers = null;
ByteBuffer[] videoDecoderOutputBuffers = null;
ByteBuffer[] videoEncoderOutputBuffers = null;
MediaCodec.BufferInfo videoDecoderOutputBufferInfo = null;
MediaCodec.BufferInfo videoEncoderOutputBufferInfo = null;
videoDecoderInputBuffers = videoDecoder.getInputBuffers();
videoDecoderOutputBuffers = videoDecoder.getOutputBuffers();
videoEncoderOutputBuffers = videoEncoder.getOutputBuffers();
videoDecoderOutputBufferInfo = new MediaCodec.BufferInfo();
videoEncoderOutputBufferInfo = new MediaCodec.BufferInfo();
ByteBuffer[] audioDecoderInputBuffers = null;
ByteBuffer[] audioDecoderOutputBuffers = null;
ByteBuffer[] audioEncoderInputBuffers = null;
ByteBuffer[] audioEncoderOutputBuffers = null;
MediaCodec.BufferInfo audioDecoderOutputBufferInfo = null;
MediaCodec.BufferInfo audioEncoderOutputBufferInfo = null;
audioDecoderInputBuffers = audioDecoder.getInputBuffers();
audioDecoderOutputBuffers = audioDecoder.getOutputBuffers();
audioEncoderInputBuffers = audioEncoder.getInputBuffers();
audioEncoderOutputBuffers = audioEncoder.getOutputBuffers();
audioDecoderOutputBufferInfo = new MediaCodec.BufferInfo();
audioEncoderOutputBufferInfo = new MediaCodec.BufferInfo();
MediaFormat decoderOutputVideoFormat = null;
MediaFormat decoderOutputAudioFormat = null;
MediaFormat encoderOutputVideoFormat = null;
MediaFormat encoderOutputAudioFormat = null;
int outputVideoTrack = -1;
int outputAudioTrack = -1;
boolean videoExtractorDone = false;
boolean videoDecoderDone = false;
boolean videoEncoderDone = false;
boolean audioExtractorDone = false;
boolean audioDecoderDone = false;
boolean audioEncoderDone = false;
int pendingAudioDecoderOutputBufferIndex = -1;
boolean muxing = false;
while ((!videoEncoderDone) || (!audioEncoderDone)) {
while (!videoExtractorDone
&& (encoderOutputVideoFormat == null || muxing)) {
int decoderInputBufferIndex = videoDecoder.dequeueInputBuffer(TIMEOUT_USEC);
if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
ByteBuffer decoderInputBuffer = videoDecoderInputBuffers[decoderInputBufferIndex];
int size = videoExtractor.readSampleData(decoderInputBuffer, 0);
long presentationTime = videoExtractor.getSampleTime();
if (size >= 0) {
videoDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
size,
presentationTime,
videoExtractor.getSampleFlags());
}
videoExtractorDone = !videoExtractor.advance() && size == -1;
if (videoExtractorDone)
videoDecoder.queueInputBuffer(decoderInputBufferIndex,
0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
break;
}
while (!audioExtractorDone
&& (encoderOutputAudioFormat == null || muxing)) {
int decoderInputBufferIndex = audioDecoder.dequeueInputBuffer(TIMEOUT_USEC);
if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
ByteBuffer decoderInputBuffer = audioDecoderInputBuffers[decoderInputBufferIndex];
int size = audioExtractor.readSampleData(decoderInputBuffer, 0);
long presentationTime = audioExtractor.getSampleTime();
if (size >= 0)
audioDecoder.queueInputBuffer(decoderInputBufferIndex, 0, size,
presentationTime, audioExtractor.getSampleFlags());
audioExtractorDone = !audioExtractor.advance() && size == -1;
if (audioExtractorDone)
audioDecoder.queueInputBuffer(decoderInputBufferIndex, 0, 0,
0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
break;
}
while (!videoDecoderDone
&& (encoderOutputVideoFormat == null || muxing)) {
int decoderOutputBufferIndex =
videoDecoder.dequeueOutputBuffer(
videoDecoderOutputBufferInfo, TIMEOUT_USEC);
if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
videoDecoderOutputBuffers = videoDecoder.getOutputBuffers();
break;
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
decoderOutputVideoFormat = videoDecoder.getOutputFormat();
break;
}
ByteBuffer decoderOutputBuffer =
videoDecoderOutputBuffers[decoderOutputBufferIndex];
if ((videoDecoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
!= 0) {
videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false);
break;
}
boolean render = videoDecoderOutputBufferInfo.size != 0;
videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, render);
if (render) {
outputSurface.awaitNewImage();
outputSurface.drawImage();
inputSurface.setPresentationTime(
videoDecoderOutputBufferInfo.presentationTimeUs * 1000);
inputSurface.swapBuffers();
}
if ((videoDecoderOutputBufferInfo.flags
& MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
videoDecoderDone = true;
videoEncoder.signalEndOfInputStream();
}
break;
}
while (!audioDecoderDone && pendingAudioDecoderOutputBufferIndex == -1
&& (encoderOutputAudioFormat == null || muxing)) {
int decoderOutputBufferIndex =
audioDecoder.dequeueOutputBuffer(
audioDecoderOutputBufferInfo, TIMEOUT_USEC);
if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
audioDecoderOutputBuffers = audioDecoder.getOutputBuffers();
break;
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
decoderOutputAudioFormat = audioDecoder.getOutputFormat();
break;
}
ByteBuffer decoderOutputBuffer =
audioDecoderOutputBuffers[decoderOutputBufferIndex];
if ((audioDecoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
!= 0) {
audioDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false);
break;
}
pendingAudioDecoderOutputBufferIndex = decoderOutputBufferIndex;
break;
}
while (pendingAudioDecoderOutputBufferIndex != -1) {
int encoderInputBufferIndex = audioEncoder.dequeueInputBuffer(TIMEOUT_USEC);
if (encoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
ByteBuffer encoderInputBuffer = audioEncoderInputBuffers[encoderInputBufferIndex];
int size = audioDecoderOutputBufferInfo.size;
long presentationTime = audioDecoderOutputBufferInfo.presentationTimeUs;
if (size >= 0) {
ByteBuffer decoderOutputBuffer =
audioDecoderOutputBuffers[pendingAudioDecoderOutputBufferIndex]
.duplicate();
decoderOutputBuffer.position(audioDecoderOutputBufferInfo.offset);
decoderOutputBuffer.limit(audioDecoderOutputBufferInfo.offset + size);
encoderInputBuffer.position(0);
encoderInputBuffer.put(decoderOutputBuffer);
audioEncoder.queueInputBuffer(
encoderInputBufferIndex,
0,
size,
presentationTime,
audioDecoderOutputBufferInfo.flags);
}
audioDecoder.releaseOutputBuffer(pendingAudioDecoderOutputBufferIndex, false);
pendingAudioDecoderOutputBufferIndex = -1;
if ((audioDecoderOutputBufferInfo.flags
& MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0)
audioDecoderDone = true;
break;
}
while (!videoEncoderDone
&& (encoderOutputVideoFormat == null || muxing)) {
int encoderOutputBufferIndex = videoEncoder.dequeueOutputBuffer(
videoEncoderOutputBufferInfo, TIMEOUT_USEC);
if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
break;
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
videoEncoderOutputBuffers = videoEncoder.getOutputBuffers();
break;
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
encoderOutputVideoFormat = videoEncoder.getOutputFormat();
break;
}
ByteBuffer encoderOutputBuffer =
videoEncoderOutputBuffers[encoderOutputBufferIndex];
if ((videoEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
!= 0) {
videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
break;
}
if (videoEncoderOutputBufferInfo.size != 0) {
muxer.writeSampleData(
outputVideoTrack, encoderOutputBuffer, videoEncoderOutputBufferInfo);
}
if ((videoEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM)
!= 0) {
videoEncoderDone = true;
}
videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
break;
}
while (!audioEncoderDone
&& (encoderOutputAudioFormat == null || muxing)) {
int encoderOutputBufferIndex = audioEncoder.dequeueOutputBuffer(
audioEncoderOutputBufferInfo, TIMEOUT_USEC);
if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break;
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
audioEncoderOutputBuffers = audioEncoder.getOutputBuffers();
break;
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
encoderOutputAudioFormat = audioEncoder.getOutputFormat();
break;
}
ByteBuffer encoderOutputBuffer =
audioEncoderOutputBuffers[encoderOutputBufferIndex];
if ((audioEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
!= 0) {
audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
break;
}
if (audioEncoderOutputBufferInfo.size != 0)
muxer.writeSampleData(
outputAudioTrack, encoderOutputBuffer, audioEncoderOutputBufferInfo);
if ((audioEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM)
!= 0)
audioEncoderDone = true;
audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
break;
}
if (!muxing && (encoderOutputAudioFormat != null)
&& (encoderOutputVideoFormat != null)) {
outputVideoTrack = muxer.addTrack(encoderOutputVideoFormat);
outputAudioTrack = muxer.addTrack(encoderOutputAudioFormat);
muxer.start();
muxing = true;
}
}
}
private static boolean isVideoFormat(MediaFormat format) {
return getMimeTypeFor(format).startsWith("video/");
}
private static boolean isAudioFormat(MediaFormat format) {
return getMimeTypeFor(format).startsWith("audio/");
}
private static String getMimeTypeFor(MediaFormat format) {
return format.getString(MediaFormat.KEY_MIME);
}
private static MediaCodecInfo selectCodec(String mimeType) {
int numCodecs = MediaCodecList.getCodecCount();
for (int i = 0; i < numCodecs; i++) {
MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
if (!codecInfo.isEncoder()) {
continue;
}
String[] types = codecInfo.getSupportedTypes();
for (int j = 0; j < types.length; j++) {
if (types[j].equalsIgnoreCase(mimeType)) {
return codecInfo;
}
}
}
return null;
}
}
It also needs InputSurface, OutputSurface and TextureRender, which sit next to ExtractDecodeEditEncodeMuxTest (at the HERE link above). Put those three in the same package as VideoResolutionChanger and use it like this:
try{
String pathToReEncodedFile =
new VideoResolutionChanger().changeResolution(videoFilePath);
}catch(Throwable t){/* smth wrong :( */}
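The width/height swap inside prepareAndChangeResolution only makes sure the output keeps the source orientation: a landscape input gets the landscape target, a portrait input gets the portrait one. Extracted as a standalone helper (hypothetical name, not part of the class above), the idea is:

```java
// Hypothetical helper mirroring the width/height swap in prepareAndChangeResolution:
// keep the target's long edge on the same axis as the source's long edge.
final class TargetResolution {
    static int[] forSource(int srcWidth, int srcHeight, int targetLong, int targetShort) {
        // Landscape source -> long edge is the width; portrait -> long edge is the height.
        return srcWidth >= srcHeight
                ? new int[]{targetLong, targetShort}
                : new int[]{targetShort, targetLong};
    }
}
```

So a 1920x1080 input scaled with targets 960/540 becomes 960x540, while a 1080x1920 input becomes 540x960.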
where videoFilePath can be obtained from a File with file.getAbsolutePath().
I know this isn't the cleanest way, and probably not the most effective/efficient one either, but I spent the last two days searching for code like this and found lots of threads, most of which redirected me to INDE, ffmpeg or jcodec, while the others got no proper answer. So I'm leaving it here; use it wisely!
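For reference, the output path the class writes to is derived from the input path by inserting an _out suffix before the extension, so the result lands next to the input file. As a standalone sketch of that naming rule (hypothetical helper name, '/' assumed as the path separator):

```java
// Hypothetical helper reproducing the "_out" naming used by changeResolution:
// "/dir/video.mp4" -> "video_out.mp4"; an extensionless name just gets "_out" appended.
final class OutputNames {
    static String outName(String inputPath) {
        String fileName = inputPath.substring(inputPath.lastIndexOf('/') + 1);
        int dot = fileName.lastIndexOf('.');
        return dot > 0
                ? fileName.substring(0, dot) + "_out" + fileName.substring(dot)
                : fileName + "_out";
    }
}
```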
Limitations:
- the use-it-like-this snippet above can NOT be started on the main looper (UI) thread, e.g. directly in an Activity. The best way is to create an IntentService and pass the input file path as a String in the Intent's extras Bundle; you can then run changeResolution directly in onHandleIntent;
- API 18 and up (MediaMuxer was introduced there);
- API 18 of course needs WRITE_EXTERNAL_STORAGE; from API 19 on this is "built-in";
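The off-the-UI-thread advice boils down to the same pattern ChangerWrapper uses: run the blocking work on a worker thread, wait for it, and re-throw any failure on the caller's thread. In isolation, with generic names that are not part of the class above, the pattern looks like this:

```java
// Minimal sketch of the run-on-worker-thread-and-rethrow pattern used by ChangerWrapper.
final class BlockingWorker {
    static void runAndRethrow(Runnable work) throws Throwable {
        final Throwable[] error = new Throwable[1];
        Thread t = new Thread(() -> {
            try {
                work.run();
            } catch (Throwable th) {
                error[0] = th; // captured here, rethrown on the calling thread
            }
        }, "BlockingWorker");
        t.start();
        t.join(); // caller blocks until the work finishes
        if (error[0] != null) throw error[0];
    }
}
```

Note that the caller still blocks in join(), which is exactly why it must itself be a background thread (e.g. onHandleIntent of an IntentService), never the UI thread.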
Thanks @fadden for your work and support! :)
I don't mind the implementation and coding issues in the question. But we went through the same disaster, since ffmpeg increased our app size by at least 19 MB, and I used this Stack Overflow question to come up with a library that does the same thing without ffmpeg. Apparently the folks at LinkedIn have done it before; check this article.
The project is called LiTr and is available on github. It uses Android's MediaCodec and MediaMuxer, so you can refer to its code for help with your own project when needed. This question was asked 4 years ago, but I hope it helps someone now.
Use compile 'com.zolad:videoslimmer:1.0.0'
VideoResolutionChanger.kt
class VideoResolutionChanger {
private val TIMEOUT_USEC = 10000
private val OUTPUT_VIDEO_MIME_TYPE = "video/avc"
private val OUTPUT_VIDEO_BIT_RATE = 2048 * 1024
private val OUTPUT_VIDEO_FRAME_RATE = 60
private val OUTPUT_VIDEO_IFRAME_INTERVAL = 1
private val OUTPUT_VIDEO_COLOR_FORMAT = MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface
private val OUTPUT_AUDIO_MIME_TYPE = "audio/mp4a-latm"
private val OUTPUT_AUDIO_CHANNEL_COUNT = 2
private val OUTPUT_AUDIO_BIT_RATE = 128 * 1024
private val OUTPUT_AUDIO_AAC_PROFILE = MediaCodecInfo.CodecProfileLevel.AACObjectHE
private val OUTPUT_AUDIO_SAMPLE_RATE_HZ = 44100
private var mWidth = 1920
private var mHeight = 1080
private var mOutputFile : String? = null
private var mInputFile : String? = null
private var mTotalTime : Int = 0
@Throws(Throwable::class)
fun changeResolution(f: File): String? {
mInputFile = f.absolutePath
val filePath : String? = mInputFile!!.substring(0, mInputFile!!.lastIndexOf(File.separator))
val splitByDot: Array<String> = mInputFile!!.split(".").toTypedArray()
var ext = ""
if (splitByDot.size > 1) ext = splitByDot[splitByDot.size - 1]
var fileName: String = mInputFile!!.substring(
mInputFile!!.lastIndexOf(File.separator) + 1,
mInputFile!!.length
)
fileName = if (ext.length > 0) fileName.replace(".$ext", "_out.mp4") else fileName + "_out.mp4"
val outFile = File(filePath, fileName)
if (!outFile.exists()) outFile.createNewFile()
mOutputFile = outFile.absolutePath
ChangerWrapper.changeResolutionInSeparatedThread(this)
return mOutputFile
}
private class ChangerWrapper private constructor(private val mChanger: VideoResolutionChanger) :
Runnable {
private var mThrowable : Throwable? = null
override fun run() {
try {
mChanger.prepareAndChangeResolution()
} catch (th: Throwable) {
mThrowable = th
}
}
companion object {
@Throws(Throwable::class)
fun changeResolutionInSeparatedThread(changer: VideoResolutionChanger) {
val wrapper = ChangerWrapper(changer)
val th = Thread(wrapper, ChangerWrapper::class.java.simpleName)
th.start()
th.join()
if (wrapper.mThrowable != null) throw wrapper.mThrowable!!
}
}
}
@Throws(Exception::class)
private fun prepareAndChangeResolution() {
var exception: Exception? = null
val videoCodecInfo = selectCodec(OUTPUT_VIDEO_MIME_TYPE) ?: return
val audioCodecInfo = selectCodec(OUTPUT_AUDIO_MIME_TYPE) ?: return
var videoExtractor : MediaExtractor? = null
var audioExtractor : MediaExtractor? = null
var outputSurface : OutputSurface? = null
var videoDecoder : MediaCodec? = null
var audioDecoder : MediaCodec? = null
var videoEncoder : MediaCodec? = null
var audioEncoder : MediaCodec? = null
var muxer : MediaMuxer? = null
var inputSurface : InputSurface? = null
try {
videoExtractor = createExtractor()
val videoInputTrack = getAndSelectVideoTrackIndex(videoExtractor)
val inputFormat = videoExtractor!!.getTrackFormat(videoInputTrack)
val m = MediaMetadataRetriever()
m.setDataSource(mInputFile)
var inputWidth: Int
var inputHeight: Int
try {
inputWidth =
m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH)!!.toInt()
inputHeight =
m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT)!!.toInt()
mTotalTime =
m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION)!!.toInt() * 1000
} catch (e: Exception) {
val thumbnail = m.frameAtTime
inputWidth = thumbnail!!.width
inputHeight = thumbnail.height
thumbnail.recycle()
}
m.release()
if (inputWidth > inputHeight) {
if (mWidth < mHeight) {
val w = mWidth
mWidth = mHeight
mHeight = w
}
} else {
if (mWidth > mHeight) {
val w = mWidth
mWidth = mHeight
mHeight = w
}
}
val outputVideoFormat =
MediaFormat.createVideoFormat(OUTPUT_VIDEO_MIME_TYPE, mWidth, mHeight)
outputVideoFormat.setInteger(
MediaFormat.KEY_COLOR_FORMAT, OUTPUT_VIDEO_COLOR_FORMAT
)
outputVideoFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_VIDEO_BIT_RATE)
outputVideoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, OUTPUT_VIDEO_FRAME_RATE)
outputVideoFormat.setInteger(
MediaFormat.KEY_I_FRAME_INTERVAL, OUTPUT_VIDEO_IFRAME_INTERVAL
)
val inputSurfaceReference: AtomicReference<Surface> = AtomicReference<Surface>()
videoEncoder = createVideoEncoder(
videoCodecInfo, outputVideoFormat, inputSurfaceReference
)
inputSurface = InputSurface(inputSurfaceReference.get())
inputSurface.makeCurrent()
outputSurface = OutputSurface()
videoDecoder = createVideoDecoder(inputFormat, outputSurface.surface!!)
audioExtractor = createExtractor()
val audioInputTrack = getAndSelectAudioTrackIndex(audioExtractor)
val inputAudioFormat = audioExtractor!!.getTrackFormat(audioInputTrack)
val outputAudioFormat = MediaFormat.createAudioFormat(
inputAudioFormat.getString(MediaFormat.KEY_MIME)!!,
inputAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
inputAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT)
)
outputAudioFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_AUDIO_BIT_RATE)
outputAudioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, OUTPUT_AUDIO_AAC_PROFILE)
audioEncoder = createAudioEncoder(audioCodecInfo, outputAudioFormat)
audioDecoder = createAudioDecoder(inputAudioFormat)
muxer = MediaMuxer(mOutputFile!!, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
changeResolution(
videoExtractor, audioExtractor,
videoDecoder, videoEncoder,
audioDecoder, audioEncoder,
muxer, inputSurface, outputSurface
)
} finally {
try {
videoExtractor?.release()
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
audioExtractor?.release()
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (videoDecoder != null) {
videoDecoder.stop()
videoDecoder.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
outputSurface?.release()
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (videoEncoder != null) {
videoEncoder.stop()
videoEncoder.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (audioDecoder != null) {
audioDecoder.stop()
audioDecoder.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (audioEncoder != null) {
audioEncoder.stop()
audioEncoder.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
if (muxer != null) {
muxer.stop()
muxer.release()
}
} catch (e: Exception) {
if (exception == null) exception = e
}
try {
inputSurface?.release()
} catch (e: Exception) {
if (exception == null) exception = e
}
}
if (exception != null) throw exception
}
@Throws(IOException::class)
private fun createExtractor(): MediaExtractor? {
val extractor : MediaExtractor = MediaExtractor()
mInputFile?.let { extractor.setDataSource(it) }
return extractor
}
@Throws(IOException::class)
private fun createVideoDecoder(inputFormat: MediaFormat, surface: Surface): MediaCodec? {
val decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat)!!)
decoder.configure(inputFormat, surface, null, 0)
decoder.start()
return decoder
}
@Throws(IOException::class)
private fun createVideoEncoder(
codecInfo: MediaCodecInfo, format: MediaFormat,
surfaceReference: AtomicReference<Surface>
): MediaCodec? {
val encoder = MediaCodec.createByCodecName(codecInfo.name)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
surfaceReference.set(encoder.createInputSurface())
encoder.start()
return encoder
}
@Throws(IOException::class)
private fun createAudioDecoder(inputFormat: MediaFormat): MediaCodec? {
val decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat)!!)
decoder.configure(inputFormat, null, null, 0)
decoder.start()
return decoder
}
@Throws(IOException::class)
private fun createAudioEncoder(codecInfo: MediaCodecInfo, format: MediaFormat): MediaCodec? {
val encoder = MediaCodec.createByCodecName(codecInfo.name)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
encoder.start()
return encoder
}
private fun getAndSelectVideoTrackIndex(extractor: MediaExtractor?): Int {
for (index in 0 until extractor!!.trackCount) {
if (isVideoFormat(extractor.getTrackFormat(index))) {
extractor.selectTrack(index)
return index
}
}
return -1
}
private fun getAndSelectAudioTrackIndex(extractor: MediaExtractor?): Int {
for (index in 0 until extractor!!.trackCount) {
if (isAudioFormat(extractor.getTrackFormat(index))) {
extractor.selectTrack(index)
return index
}
}
return -1
}
private fun changeResolution(
videoExtractor: MediaExtractor?, audioExtractor: MediaExtractor?,
videoDecoder: MediaCodec?, videoEncoder: MediaCodec?,
audioDecoder: MediaCodec?, audioEncoder: MediaCodec?,
muxer: MediaMuxer,
inputSurface: InputSurface?, outputSurface: OutputSurface?
) {
var videoDecoderInputBuffers : Array<ByteBuffer?>? = null
var videoDecoderOutputBuffers : Array<ByteBuffer?>? = null
var videoEncoderOutputBuffers : Array<ByteBuffer?>? = null
var videoDecoderOutputBufferInfo : MediaCodec.BufferInfo? = null
var videoEncoderOutputBufferInfo : MediaCodec.BufferInfo? = null
videoDecoderInputBuffers = videoDecoder!!.inputBuffers
videoDecoderOutputBuffers = videoDecoder.outputBuffers
videoEncoderOutputBuffers = videoEncoder!!.outputBuffers
videoDecoderOutputBufferInfo = MediaCodec.BufferInfo()
videoEncoderOutputBufferInfo = MediaCodec.BufferInfo()
var audioDecoderInputBuffers : Array<ByteBuffer?>? = null
var audioDecoderOutputBuffers : Array<ByteBuffer>? = null
var audioEncoderInputBuffers : Array<ByteBuffer>? = null
var audioEncoderOutputBuffers : Array<ByteBuffer?>? = null
var audioDecoderOutputBufferInfo : MediaCodec.BufferInfo? = null
var audioEncoderOutputBufferInfo : MediaCodec.BufferInfo? = null
audioDecoderInputBuffers = audioDecoder!!.inputBuffers
audioDecoderOutputBuffers = audioDecoder.outputBuffers
audioEncoderInputBuffers = audioEncoder!!.inputBuffers
audioEncoderOutputBuffers = audioEncoder.outputBuffers
audioDecoderOutputBufferInfo = MediaCodec.BufferInfo()
audioEncoderOutputBufferInfo = MediaCodec.BufferInfo()
var decoderOutputVideoFormat : MediaFormat? = null
var decoderOutputAudioFormat : MediaFormat? = null
var encoderOutputVideoFormat : MediaFormat? = null
var encoderOutputAudioFormat : MediaFormat? = null
var outputVideoTrack = -1
var outputAudioTrack = -1
var videoExtractorDone = false
var videoDecoderDone = false
var videoEncoderDone = false
var audioExtractorDone = false
var audioDecoderDone = false
var audioEncoderDone = false
var pendingAudioDecoderOutputBufferIndex = -1
var muxing = false
while (!videoEncoderDone || !audioEncoderDone) {
while (!videoExtractorDone
&& (encoderOutputVideoFormat == null || muxing)
) {
val decoderInputBufferIndex = videoDecoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())
if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
val decoderInputBuffer: ByteBuffer? =
videoDecoderInputBuffers[decoderInputBufferIndex]
val size = decoderInputBuffer?.let { videoExtractor!!.readSampleData(it, 0) }
val presentationTime = videoExtractor?.sampleTime
if (presentationTime != null) {
if (size != null) {
if (size >= 0) {
if (videoExtractor != null) {
videoDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
size,
presentationTime,
videoExtractor.sampleFlags
)
}
}
}
}
if (videoExtractor != null) {
videoExtractorDone = (!videoExtractor.advance() && size == -1)
}
if (videoExtractorDone) {
videoDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
0,
0,
MediaCodec.BUFFER_FLAG_END_OF_STREAM
)
}
break
}
while (!audioExtractorDone
&& (encoderOutputAudioFormat == null || muxing)
) {
val decoderInputBufferIndex = audioDecoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())
if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
val decoderInputBuffer: ByteBuffer? =
audioDecoderInputBuffers[decoderInputBufferIndex]
val size = decoderInputBuffer?.let { audioExtractor!!.readSampleData(it, 0) }
val presentationTime = audioExtractor?.sampleTime
if (presentationTime != null) {
if (size != null) {
if (size >= 0) {
audioDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
size,
presentationTime,
audioExtractor.sampleFlags
)
}
}
}
if (audioExtractor != null) {
audioExtractorDone = (!audioExtractor.advance() && size == -1)
}
if (audioExtractorDone) {
audioDecoder.queueInputBuffer(
decoderInputBufferIndex,
0,
0,
0,
MediaCodec.BUFFER_FLAG_END_OF_STREAM
)
}
break
}
while (!videoDecoderDone
&& (encoderOutputVideoFormat == null || muxing)
) {
val decoderOutputBufferIndex = videoDecoder.dequeueOutputBuffer(
videoDecoderOutputBufferInfo, TIMEOUT_USEC.toLong()
)
if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
videoDecoderOutputBuffers = videoDecoder.outputBuffers
break
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
decoderOutputVideoFormat = videoDecoder.outputFormat
break
}
val decoderOutputBuffer: ByteBuffer? =
videoDecoderOutputBuffers!![decoderOutputBufferIndex]
if (videoDecoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
!= 0
) {
videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false)
break
}
val render = videoDecoderOutputBufferInfo.size != 0
videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, render)
if (render) {
if (outputSurface != null) {
outputSurface.awaitNewImage()
outputSurface.drawImage()
}
if (inputSurface != null) {
inputSurface.setPresentationTime(
videoDecoderOutputBufferInfo.presentationTimeUs * 1000
)
inputSurface.swapBuffers()
}
}
if ((videoDecoderOutputBufferInfo.flags
and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0
) {
videoDecoderDone = true
videoEncoder.signalEndOfInputStream()
}
break
}
while (!audioDecoderDone && pendingAudioDecoderOutputBufferIndex == -1 && (encoderOutputAudioFormat == null || muxing)) {
val decoderOutputBufferIndex = audioDecoder.dequeueOutputBuffer(
audioDecoderOutputBufferInfo, TIMEOUT_USEC.toLong()
)
if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
audioDecoderOutputBuffers = audioDecoder.outputBuffers
break
}
if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
decoderOutputAudioFormat = audioDecoder.outputFormat
break
}
val decoderOutputBuffer: ByteBuffer =
audioDecoderOutputBuffers!![decoderOutputBufferIndex]
if (audioDecoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
!= 0
) {
audioDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false)
break
}
pendingAudioDecoderOutputBufferIndex = decoderOutputBufferIndex
break
}
while (pendingAudioDecoderOutputBufferIndex != -1) {
val encoderInputBufferIndex = audioEncoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())
if (encoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
val encoderInputBuffer: ByteBuffer =
audioEncoderInputBuffers[encoderInputBufferIndex]
val size = audioDecoderOutputBufferInfo.size
val presentationTime = audioDecoderOutputBufferInfo.presentationTimeUs
if (size >= 0) {
val decoderOutputBuffer: ByteBuffer =
audioDecoderOutputBuffers!![pendingAudioDecoderOutputBufferIndex]
.duplicate()
decoderOutputBuffer.position(audioDecoderOutputBufferInfo.offset)
decoderOutputBuffer.limit(audioDecoderOutputBufferInfo.offset + size)
encoderInputBuffer.position(0)
encoderInputBuffer.put(decoderOutputBuffer)
audioEncoder.queueInputBuffer(
encoderInputBufferIndex,
0,
size,
presentationTime,
audioDecoderOutputBufferInfo.flags
)
}
audioDecoder.releaseOutputBuffer(pendingAudioDecoderOutputBufferIndex, false)
pendingAudioDecoderOutputBufferIndex = -1
if ((audioDecoderOutputBufferInfo.flags
and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0
) audioDecoderDone = true
break
}
while (!videoEncoderDone
&& (encoderOutputVideoFormat == null || muxing)
) {
val encoderOutputBufferIndex = videoEncoder.dequeueOutputBuffer(
videoEncoderOutputBufferInfo, TIMEOUT_USEC.toLong()
)
if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) break
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
videoEncoderOutputBuffers = videoEncoder.outputBuffers
break
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
encoderOutputVideoFormat = videoEncoder.outputFormat
break
}
val encoderOutputBuffer: ByteBuffer? =
videoEncoderOutputBuffers!![encoderOutputBufferIndex]
if (videoEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
!= 0
) {
videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
break
}
if (videoEncoderOutputBufferInfo.size != 0) {
if (encoderOutputBuffer != null) {
muxer.writeSampleData(
outputVideoTrack, encoderOutputBuffer, videoEncoderOutputBufferInfo
)
}
}
if (videoEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM
!= 0
) {
videoEncoderDone = true
}
videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
break
}
while (!audioEncoderDone
&& (encoderOutputAudioFormat == null || muxing)
) {
val encoderOutputBufferIndex = audioEncoder.dequeueOutputBuffer(
audioEncoderOutputBufferInfo, TIMEOUT_USEC.toLong()
)
if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
audioEncoderOutputBuffers = audioEncoder.outputBuffers
break
}
if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
encoderOutputAudioFormat = audioEncoder.outputFormat
break
}
val encoderOutputBuffer: ByteBuffer? =
audioEncoderOutputBuffers!![encoderOutputBufferIndex]
if (audioEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
!= 0
) {
audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
break
}
if (audioEncoderOutputBufferInfo.size != 0) encoderOutputBuffer?.let {
muxer.writeSampleData(
outputAudioTrack, it, audioEncoderOutputBufferInfo
)
}
if (audioEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM
!= 0
) audioEncoderDone = true
audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
break
}
if (!muxing && encoderOutputAudioFormat != null
&& encoderOutputVideoFormat != null
) {
outputVideoTrack = muxer.addTrack(encoderOutputVideoFormat)
outputAudioTrack = muxer.addTrack(encoderOutputAudioFormat)
muxer.start()
muxing = true
}
}
}
private fun isVideoFormat(format: MediaFormat): Boolean {
return getMimeTypeFor(format)?.startsWith("video/") == true
}
private fun isAudioFormat(format: MediaFormat): Boolean {
return getMimeTypeFor(format)?.startsWith("audio/") == true
}
private fun getMimeTypeFor(format: MediaFormat): String? {
return format.getString(MediaFormat.KEY_MIME)
}
private fun selectCodec(mimeType: String): MediaCodecInfo? {
val numCodecs = MediaCodecList.getCodecCount()
for (i in 0 until numCodecs) {
val codecInfo = MediaCodecList.getCodecInfoAt(i)
if (!codecInfo.isEncoder) {
continue
}
val types = codecInfo.supportedTypes
for (j in types.indices) {
if (types[j].equals(mimeType, ignoreCase = true)) {
return codecInfo
}
}
}
return null
}
}
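One thing worth stressing about the class above: the rescaling happens in the `MediaFormat` you configure the *video encoder* with, not in `MediaMuxer` (the muxer just writes whatever samples the encoder emits, which is why setting `KEY_WIDTH`/`KEY_HEIGHT` on the extracted format alone changes nothing). The question hard-codes 960×540; a small pure-Kotlin helper (hypothetical, not part of the code above) can derive a capped target size for any input, preserving aspect ratio and forcing even dimensions, which H.264 encoders generally require:

```kotlin
// Hypothetical helper: cap the longer side (e.g. at 960 for qHD), keep the
// aspect ratio, and round each side down to an even number, since most
// H.264 hardware encoders reject odd dimensions.
fun targetSize(srcWidth: Int, srcHeight: Int, maxLongSide: Int = 960): Pair<Int, Int> {
    val longSide = maxOf(srcWidth, srcHeight)
    if (longSide <= maxLongSide) {
        // Already small enough; just force even dimensions.
        return Pair(srcWidth and 1.inv(), srcHeight and 1.inv())
    }
    val scale = maxLongSide.toDouble() / longSide
    val w = (srcWidth * scale).toInt() and 1.inv()   // round down to even
    val h = (srcHeight * scale).toInt() and 1.inv()
    return Pair(w, h)
}

fun main() {
    println(targetSize(1920, 1080)) // landscape full HD → (960, 540)
    println(targetSize(1080, 1920)) // portrait → (540, 960)
    println(targetSize(640, 480))   // already small → unchanged
}
```

The resulting pair is what you would feed into `MediaFormat.createVideoFormat(mime, w, h)` for the encoder and into the `OutputSurface(width, height)` constructor below, so the GL pbuffer, the encoder, and the muxed stream all agree on the output size.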
OutputSurface.kt
/*
* Copyright (C) 2013 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Holds state associated with a Surface used for MediaCodec decoder output.
*
*
* The (width,height) constructor for this class will prepare GL, create a SurfaceTexture,
* and then create a Surface for that SurfaceTexture. The Surface can be passed to
* MediaCodec.configure() to receive decoder output. When a frame arrives, we latch the
* texture with updateTexImage, then render the texture with GL to a pbuffer.
*
*
* The no-arg constructor skips the GL preparation step and doesn't allocate a pbuffer.
* Instead, it just creates the Surface and SurfaceTexture, and when a frame arrives
* we just draw it on whatever surface is current.
*
*
* By default, the Surface will be using a BufferQueue in asynchronous mode, so we
* can potentially drop frames.
*/
internal class OutputSurface : OnFrameAvailableListener {
private var mEGLDisplay = EGL14.EGL_NO_DISPLAY
private var mEGLContext = EGL14.EGL_NO_CONTEXT
private var mEGLSurface = EGL14.EGL_NO_SURFACE
private var mSurfaceTexture: SurfaceTexture? = null
/**
* Returns the Surface that we draw onto.
*/
var surface: Surface? = null
private set
private val mFrameSyncObject = Object() // guards mFrameAvailable
private var mFrameAvailable = false
private var mTextureRender: TextureRender? = null
/**
* Creates an OutputSurface backed by a pbuffer with the specifed dimensions. The new
* EGL context and surface will be made current. Creates a Surface that can be passed
* to MediaCodec.configure().
*/
constructor(width: Int, height: Int) {
if (VERBOSE) Log.d(TAG, "OutputSurface constructor width: $width height: $height")
require(width > 0 && height > 0) { "width and height must be positive" }
eglSetup(width, height)
makeCurrent()
setup()
}
/**
* Creates an OutputSurface using the current EGL context (rather than establishing a
* new one). Creates a Surface that can be passed to MediaCodec.configure().
*/
constructor() {
if (VERBOSE) Log.d(TAG, "OutputSurface constructor")
setup()
}
/**
* Creates instances of TextureRender and SurfaceTexture, and a Surface associated
* with the SurfaceTexture.
*/
private fun setup() {
if (VERBOSE) Log.d(TAG, "OutputSurface setup")
mTextureRender = TextureRender()
mTextureRender!!.surfaceCreated()
// Even if we don't access the SurfaceTexture after the constructor returns, we
// still need to keep a reference to it. The Surface doesn't retain a reference
// at the Java level, so if we don't either then the object can get GCed, which
// causes the native finalizer to run.
if (VERBOSE) Log.d(TAG, "textureID=" + mTextureRender!!.textureId)
mSurfaceTexture = SurfaceTexture(mTextureRender!!.textureId)
// This doesn't work if OutputSurface is created on the thread that CTS started for
// these test cases.
//
// The CTS-created thread has a Looper, and the SurfaceTexture constructor will
// create a Handler that uses it. The "frame available" message is delivered
// there, but since we're not a Looper-based thread we'll never see it. For
// this to do anything useful, OutputSurface must be created on a thread without
// a Looper, so that SurfaceTexture uses the main application Looper instead.
//
// Java language note: passing "this" out of a constructor is generally unwise,
// but we should be able to get away with it here.
mSurfaceTexture!!.setOnFrameAvailableListener(this)
surface = Surface(mSurfaceTexture)
}
/**
* Prepares EGL. We want a GLES 2.0 context and a surface that supports pbuffer.
*/
private fun eglSetup(width: Int, height: Int) {
mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
if (mEGLDisplay === EGL14.EGL_NO_DISPLAY) {
throw RuntimeException("unable to get EGL14 display")
}
val version = IntArray(2)
if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
mEGLDisplay = EGL14.EGL_NO_DISPLAY
throw RuntimeException("unable to initialize EGL14")
}
// Configure EGL for pbuffer and OpenGL ES 2.0. We want enough RGB bits
// to be able to tell if the frame is reasonable.
val attribList = intArrayOf(
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
EGL14.EGL_NONE
)
val configs = arrayOfNulls<EGLConfig>(1)
val numConfigs = IntArray(1)
if (!EGL14.eglChooseConfig(
mEGLDisplay, attribList, 0, configs, 0, configs.size,
numConfigs, 0
)
) {
throw RuntimeException("unable to find RGB888+pbuffer ES2 EGL config")
}
// Configure context for OpenGL ES 2.0.
val contextAttribs = intArrayOf(
EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
)
mEGLContext = EGL14.eglCreateContext(
mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
contextAttribs, 0
)
checkEglError("eglCreateContext")
if (mEGLContext == null) {
throw RuntimeException("null context")
}
// Create a pbuffer surface. By using this for output, we can use glReadPixels
// to test values in the output.
val surfaceAttribs = intArrayOf(
EGL14.EGL_WIDTH, width,
EGL14.EGL_HEIGHT, height,
EGL14.EGL_NONE
)
mEGLSurface = EGL14.eglCreatePbufferSurface(mEGLDisplay, configs[0], surfaceAttribs, 0)
checkEglError("eglCreatePbufferSurface")
if (mEGLSurface == null) {
throw RuntimeException("surface was null")
}
}
/**
* Discard all resources held by this class, notably the EGL context.
*/
fun release() {
if (mEGLDisplay !== EGL14.EGL_NO_DISPLAY) {
EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface)
EGL14.eglDestroyContext(mEGLDisplay, mEGLContext)
EGL14.eglReleaseThread()
EGL14.eglTerminate(mEGLDisplay)
}
surface?.release()
// this causes a bunch of warnings that appear harmless but might confuse someone:
// W BufferQueue: [unnamed-3997-2] cancelBuffer: BufferQueue has been abandoned!
//mSurfaceTexture.release();
mEGLDisplay = EGL14.EGL_NO_DISPLAY
mEGLContext = EGL14.EGL_NO_CONTEXT
mEGLSurface = EGL14.EGL_NO_SURFACE
mTextureRender = null
surface = null
mSurfaceTexture = null
}
/**
* Makes our EGL context and surface current.
*/
private fun makeCurrent() {
if (!EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext)) {
throw RuntimeException("eglMakeCurrent failed")
}
}
/**
* Replaces the fragment shader.
*/
fun changeFragmentShader(fragmentShader: String?) {
if (fragmentShader != null) {
mTextureRender?.changeFragmentShader(fragmentShader)
}
}
/**
* Latches the next buffer into the texture. Must be called from the thread that created
* the OutputSurface object, after the onFrameAvailable callback has signaled that new
* data is available.
*/
fun awaitNewImage() {
//println("awaitNewImage")
val TIMEOUT_MS = 500
synchronized(mFrameSyncObject) {
while (!mFrameAvailable) {
try {
// Wait for onFrameAvailable() to signal us. Use a timeout to avoid
// stalling forever if a frame never arrives.
mFrameSyncObject.wait(TIMEOUT_MS.toLong())
if (!mFrameAvailable) {
// Timed out (or spurious wakeup); loop and wait again. The original
// CTS code threw "Surface frame wait timed out" here instead.
//throw RuntimeException("Surface frame wait timed out")
}
} catch (ie: InterruptedException) {
// shouldn't happen
throw RuntimeException(ie)
}
}
mFrameAvailable = false
}
// Latch the data.
mTextureRender?.checkGlError("before updateTexImage")
mSurfaceTexture!!.updateTexImage()
}
/**
* Draws the data from SurfaceTexture onto the current EGL surface.
*/
fun drawImage() {
mSurfaceTexture?.let { mTextureRender?.drawFrame(it) }
}
override fun onFrameAvailable(st: SurfaceTexture) {
//println("onFrameAvailable")
if (VERBOSE) Log.d(TAG, "new frame available")
synchronized(mFrameSyncObject) {
if (mFrameAvailable) {
throw RuntimeException("mFrameAvailable already set, frame could be dropped")
}
mFrameAvailable = true
mFrameSyncObject.notifyAll()
}
}
/**
* Checks for EGL errors.
*/
private fun checkEglError(msg: String) {
var error: Int
if (EGL14.eglGetError().also { error = it } != EGL14.EGL_SUCCESS) {
throw RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error))
}
}
companion object {
private const val TAG = "OutputSurface"
private const val VERBOSE = false
}
}
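The `awaitNewImage()` / `onFrameAvailable()` pair above is a classic monitor handoff: the producer (SurfaceTexture's callback thread) sets a flag under a lock and notifies; the consumer waits with a timeout and re-checks the flag so it can tell a real signal apart from a timeout or spurious wakeup. A standalone JVM sketch of the same pattern (no Android classes; `FrameLatch` is a hypothetical name, not part of the code above) makes the re-check visible:

```kotlin
// Minimal JVM sketch of the frame-available handoff used by OutputSurface.
class FrameLatch {
    private val lock = Object()
    private var available = false

    fun signal() = synchronized(lock) {
        available = true
        lock.notifyAll()
    }

    /** Returns true if a frame arrived within [timeoutMs], false on timeout. */
    fun await(timeoutMs: Long): Boolean = synchronized(lock) {
        val deadline = System.currentTimeMillis() + timeoutMs
        while (!available) {
            val remaining = deadline - System.currentTimeMillis()
            if (remaining <= 0) return false // timed out, no frame
            lock.wait(remaining)             // may wake spuriously; loop re-checks
        }
        available = false                    // consume the frame
        return true
    }
}

fun main() {
    val latch = FrameLatch()
    Thread { Thread.sleep(50); latch.signal() }.start()
    println(latch.await(2000)) // frame arrives → true
    println(latch.await(100))  // nothing signaled → false
}
```

Note this is also why `OutputSurface` must live on a thread *without* a Looper, as the comment in `setup()` explains: otherwise `onFrameAvailable` would be delivered to the same thread that is blocked in `await`, and the handoff would deadlock.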
InputSurface.kt
/*
* Copyright (C) 2013 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Holds state associated with a Surface used for MediaCodec encoder input.
*
*
* The constructor takes a Surface obtained from MediaCodec.createInputSurface(), and uses that
* to create an EGL window surface. Calls to eglSwapBuffers() cause a frame of data to be sent
* to the video encoder.
*/
internal class InputSurface(surface: Surface?) {
private var mEGLDisplay = EGL14.EGL_NO_DISPLAY
private var mEGLContext = EGL14.EGL_NO_CONTEXT
private var mEGLSurface = EGL14.EGL_NO_SURFACE
/**
* Returns the Surface that the MediaCodec receives buffers from.
*/
var surface: Surface?
private set
/**
* Prepares EGL. We want a GLES 2.0 context and a surface that supports recording.
*/
private fun eglSetup() {
mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
if (mEGLDisplay === EGL14.EGL_NO_DISPLAY) {
throw RuntimeException("unable to get EGL14 display")
}
val version = IntArray(2)
if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
mEGLDisplay = EGL14.EGL_NO_DISPLAY
throw RuntimeException("unable to initialize EGL14")
}
// Configure EGL for recordable and OpenGL ES 2.0. We want enough RGB bits
// to minimize artifacts from possible YUV conversion.
val attribList = intArrayOf(
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL_RECORDABLE_ANDROID, 1,
EGL14.EGL_NONE
)
val configs = arrayOfNulls<EGLConfig>(1)
val numConfigs = IntArray(1)
if (!EGL14.eglChooseConfig(
mEGLDisplay, attribList, 0, configs, 0, configs.size,
numConfigs, 0
)
) {
throw RuntimeException("unable to find RGB888+recordable ES2 EGL config")
}
// Configure context for OpenGL ES 2.0.
val contextAttribs = intArrayOf(
EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
)
mEGLContext = EGL14.eglCreateContext(
mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
contextAttribs, 0
)
checkEglError("eglCreateContext")
if (mEGLContext == null) {
throw RuntimeException("null context")
}
// Create a window surface, and attach it to the Surface we received.
val surfaceAttribs = intArrayOf(
EGL14.EGL_NONE
)
mEGLSurface = EGL14.eglCreateWindowSurface(
mEGLDisplay, configs[0], surface,
surfaceAttribs, 0
)
checkEglError("eglCreateWindowSurface")
if (mEGLSurface == null) {
throw RuntimeException("surface was null")
}
}
/**
* Discard all resources held by this class, notably the EGL context. Also releases the
* Surface that was passed to our constructor.
*/
fun release() {
if (mEGLDisplay !== EGL14.EGL_NO_DISPLAY) {
EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface)
EGL14.eglDestroyContext(mEGLDisplay, mEGLContext)
EGL14.eglReleaseThread()
EGL14.eglTerminate(mEGLDisplay)
}
surface?.release()
mEGLDisplay = EGL14.EGL_NO_DISPLAY
mEGLContext = EGL14.EGL_NO_CONTEXT
mEGLSurface = EGL14.EGL_NO_SURFACE
surface = null
}
/**
* Makes our EGL context and surface current.
*/
fun makeCurrent() {
if (!EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext)) {
throw RuntimeException("eglMakeCurrent failed")
}
}
fun makeUnCurrent() {
if (!EGL14.eglMakeCurrent(
mEGLDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
EGL14.EGL_NO_CONTEXT
)
) {
throw RuntimeException("eglMakeCurrent failed")
}
}
/**
* Calls eglSwapBuffers. Use this to "publish" the current frame.
*/
fun swapBuffers(): Boolean {
//println("swapBuffers")
return EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface)
}
/**
* Queries the surface's width.
*/
val width: Int
get() {
val value = IntArray(1)
EGL14.eglQuerySurface(mEGLDisplay, mEGLSurface, EGL14.EGL_WIDTH, value, 0)
return value[0]
}
/**
* Queries the surface's height.
*/
val height: Int
get() {
val value = IntArray(1)
EGL14.eglQuerySurface(mEGLDisplay, mEGLSurface, EGL14.EGL_HEIGHT, value, 0)
return value[0]
}
/**
* Sends the presentation time stamp to EGL. Time is expressed in nanoseconds.
*/
fun setPresentationTime(nsecs: Long) {
EGLExt.eglPresentationTimeANDROID(mEGLDisplay, mEGLSurface, nsecs)
}
/**
* Checks for EGL errors.
*/
private fun checkEglError(msg: String) {
var error: Int
if (EGL14.eglGetError().also { error = it } != EGL14.EGL_SUCCESS) {
throw RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error))
}
}
companion object {
private const val TAG = "InputSurface"
private const val VERBOSE = false
private const val EGL_RECORDABLE_ANDROID = 0x3142
}
/**
* Creates an InputSurface from a Surface.
*/
init {
if (surface == null) {
throw NullPointerException("surface must not be null")
}
this.surface = surface
eglSetup()
}
}
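One easy mistake when driving `InputSurface`: `setPresentationTime()` (i.e. `eglPresentationTimeANDROID`) takes **nanoseconds**, while `MediaCodec.BufferInfo.presentationTimeUs` is in **microseconds**. Passing the microsecond value straight through gives a muxed file with a 1000×-wrong duration and broken frame pacing. A trivial conversion helper (hypothetical name) keeps the units explicit:

```kotlin
// MediaCodec reports presentationTimeUs in microseconds, but
// InputSurface.setPresentationTime() / eglPresentationTimeANDROID expect
// nanoseconds. Hypothetical helper to keep the 1000x conversion in one place:
fun microsToNanos(presentationTimeUs: Long): Long = presentationTimeUs * 1000L

fun main() {
    // A typical call site in the decode/edit/encode loop would be:
    //   inputSurface.setPresentationTime(microsToNanos(info.presentationTimeUs))
    //   inputSurface.swapBuffers()
    println(microsToNanos(33_333)) // one ~30 fps frame interval in ns
}
```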